Thursday, March 11, 2010

Backtrack 4 is out

I've been using this for 3 weeks now and entirely forgot to mention it here.
Backtrack 4 shipped in late January - Get the download here

BackTrack is intended for all audiences, from the most savvy security professionals to newcomers to the information security field. BackTrack provides a quick and easy way to find and update the largest collection of security tools to date.

Our community of users ranges from skilled penetration testers in the information security field, government entities and information technology staff to security enthusiasts and individuals new to the security community. Feedback from all industries and skill levels allows us to develop a solution that is tailored to everyone and far exceeds anything else available, commercial or free.

Whether you’re hacking wireless, exploiting servers, performing a web application assessment, learning, or social-engineering a client, BackTrack is the one-stop-shop for all of your security needs.

OpenSuSE 11.3 milestone 3 released

OpenSUSE 11.3 Milestone 3 is the first release of the distribution compiled entirely with the GNU Compiler Collection (GCC) version 4.5. The compiler change caused a couple of problems with the openSUSE Build Service and some packages that would not compile. The openSUSE project management decided to release anyway, specifically to test and improve the new GCC 4.5 toolchain. Milestone 3 is therefore very much an alpha release and should be approached only by experienced Linux users.

Among the new features are Kernel 2.6.33, the Nouveau drivers for NVIDIA graphics cards and the current GNOME developer version 2.29, including the GNOME shell.

The following bugs are currently known:

* YaST log files are truncated.
* Network installations choke on the wrong SHA1sum for cracklib-dict-full.
* VirtualBox cannot be installed.
* With the LXDE desktop, rcxdm fails to stop the lxdm login manager.

A glance at the Most Annoying Bugs site might be appropriate before installing 11.3. The list may indeed get longer in the next few days. Download here

SCO v. Novell Trial

For those interested in following the developments in the SCO vs. Novell trial, detailed observers' notes can be found at the links below. Note that this summary comes mostly from the excellent trial coverage at GROKLAW.NET.



Day 1 - Monday March 8, 2010 Day 1 is mostly just the seating of the Jury.

Day 2 - Tuesday March 9, 2010 Day 2 included the opening arguments of both sides and the testimony of Bob Frankenberg, Former CEO of Novell.


Day 3 - Wednesday March 10, 2010
Testimony of Duff Thompson and Ed Chatlos

Day 4 - Thursday March 11, 2010 Most of the day was filled with video depositions by Jack Messman, former CEO of Novell; Burt Levine, a lawyer who came from USL, then worked for Novell and later Santa Cruz; Jim Wilt, whose deposition was not heard by the jury; and Alek Mohan, CEO of SCO from 1995-1998; followed by live testimony from Bill Broderick, another lawyer who worked for USL and then Novell.

Motion Filed by Novell - Friday March 12, 2010 The motion asks to allow Novell to introduce into evidence the prior findings of the court declaring that Novell is in fact the owner of the copyrights and that they did not transfer with the sale. The motion is based on SCO's lawyers claiming (at least four times) that Novell has continued to slander SCO's title "to this very day".

Day 5 - Friday March 12, 2010 Continuation of the testimony of Bill Broderick and testimony of Ty Mattingly, who described himself as the "high-level business negotiator" for Novell during the sale of Unix/UnixWare to Santa Cruz.

Novell Files a "Petition for Writ of Certiorari" - asking the Supreme Court to review the 10th Circuit ruling that handed over to SCO the copyrights that were not specifically transferred as part of the sale of Unix/UnixWare. See the filing here

DAY 6 -
Motion for mistrial; Testimony of Kim Madsen, Steve Sabbath and Darl McBride

Judge denies two Novell motions: one for a mistrial, the other to allow evidence of prior judicial rulings in the case.

Novell has filed a Notice of Filing of Offer of Proof Regarding Prior Inconsistent Declaration of Steven Sabbath. It makes a record that SCO was allowed to present testimony on direct examination that Novell knew was contradicted by deposition testimony, but that Novell could not tell the jury about it because of rulings by the judge.

Day 7 -
Testimony of Darl McBride and Christine Botosan

Novell anticipates objections to SCO's Experts' testimony regarding the 'TK-7 v Estate of Barbouti' case -

SCO's motion to allow testimony regarding a previous case and a letter from Brent Hatch. -

Day 8 -
Continued testimony of Darl McBride - McBride admits on the stand that SCO did not need the copyrights to run its Unix business and that it only needed them for SCOSource. Also admitted into evidence was an exhibit showing that HP did not take a SCOSource license, in part because they equated it with "supporting terrorism".

New Proposed Jury Instructions, and Novell tries again to get prior court rulings admitted as evidence -

Day 9 -
Jury hears about Kimball's Rulings and Botosan

Day 10 -
Testimony of Chris Stone, O'Gara, Maciaszek, Nagle -

APA's "Included Assets" did not list SVR4.2 - Research Project -

Novell says "Elliott Offer" "Inadequate" -

Auditing Linux Servers Checklist

This checklist provides a generic set of controls to consider when auditing a Linux environment. It does not account for the differences between the various Linux distributions on the market (e.g. Red Hat, Caldera, Mandrake).

Some of the elements to consider prior to using this checklist:

· Utilities: While every attempt has been made to include the security implications of using various utilities, it is not possible to list all of them in this checklist. The auditor should therefore ascertain which utilities are in use on the Linux server to be reviewed and determine their security implications. A good source for the security implications of a given utility is the website of the vendor supplying it, whether it is freeware, shareware, or a commercial product. Another source is the supporting documentation that accompanies the utility.


· Practicality of the checklist: This checklist lists controls to be checked for a very secure configuration. These may not be appropriate for all Linux servers in an organization, depending on the risk assigned to particular data and applications. Also, some of the controls may be cost prohibitive to implement, and management may have decided during the accreditation process to accept the risk of not being totally secure. The cost may be monetary or non-monetary; non-monetary elements include items such as response times and availability.


· Interoperability with other products: This checklist does not cover the security issues to be considered when another system performs certain operations (e.g. Windows NT providing the network authentication service). However, it is quite important that the auditor take this into consideration, as certain systems coupled with a Linux server may introduce new vulnerabilities (e.g. NetWare is insecure when mounting file systems). This may also aid the auditor in tailoring the checklist to suit the organization's environment (e.g. more focus on the Samba server/SMB and less attention to Linux authentication if NT provides the network authentication service).


· Mitigating controls: The auditor needs to be aware of other controls provided by applications or databases. A weakness identified in the operating system may be mitigated by a strong control in the application or the database (e.g. weak access control in the Linux operating system may be mitigated by very granular access control in the application).


· Significance of findings: To produce a good report that will receive management attention the auditor needs to perform a mini risk analysis. The risk analysis would ascertain if the finding is so significant as to affect the organization adversely. The first step in the risk analysis is to determine how sensitive the data stored on the server is and how critical the server is in the business operations. The second step is to determine how the finding would affect the organization’s ability to maintain confidentiality, integrity and availability. Once this has been done, a report indicating the priority and the potential effect on the organization if the weakness is not corrected in a timely manner needs to be issued to management.


· Applications and Database interfaces with Linux: A further consideration is the security provided for application and database files by the Linux server. The auditor needs to ascertain what applications and databases are loaded on the Linux server and ascertain the appropriateness of the permissions assigned to these files. This would also apply to sensitive data files.


An important consideration prior to auditing a Linux server is to determine the server's function in the organization. This is paramount to determining how the checklist below may be tailored. Since it is outside the scope of this checklist to list the security considerations for every function a Linux server may perform (e.g. as an HTTP server), it is important for the auditor to determine the security elements to be considered for a given function, as well as the associated applications that may be run for that function (e.g. running Apache on an HTTP server).



1 Installation:

Ensure that the software is downloaded from secure sites. Ascertain if the PGP or MD5 signatures are verified.
Ensure that a process exists to ascertain the function of the server and thus to install only those packages that are of relevance to the function.
Ensure that the partition sizes are based on the function of the server (e.g. A news server requires sufficient space on the /var/spool/news partition.)
Ensure that the partition scheme is documented to allow recovery later.
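As an illustration (the file and key names here are hypothetical), downloaded software can typically be verified with md5sum and gpg:

# compare the download against the vendor-published MD5 checksum
md5sum -c package-1.0.tar.gz.md5
# verify a detached PGP/GPG signature against the vendor's public key
gpg --verify package-1.0.tar.gz.asc package-1.0.tar.gz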

2 Ensure that there is a process to update the system with the latest patches.

If the patches are downloaded, ensure that they are downloaded from secure sites.
Ensure that the patches are tested in a test environment prior to being rolled-out to the live environment.
If RPM is being used to automatically download the related packages, ensure that the sites listed in /etc/autorpm.d/pools/redhat-updates are secure, trusted sites.
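For example, the signature of a downloaded RPM can be checked before installation (the package name is illustrative):

# verify the GPG signature and checksums embedded in the package
rpm --checksig openssh-2.9p2-12.i386.rpm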

3 Ensure that SSH is in use.


Ensure that during the installation of SSH, the SSH daemon has been configured to support TCP Wrappers and disable support for rsh as a fallback option.
Ensure that the SSH daemon is started at boot time by reviewing the /etc/rc.d/rc.local file for the following entry: /usr/local/sbin/sshd.
Ensure that the /etc/hosts.allow file is set up for SSH access.
Ensure that the .ssh/identity file has 600 permissions and is owned by root.
Ensure that the r programs are commented out of /etc/inetd.conf and have been removed.
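A minimal sketch of what the auditor might expect to see, assuming a hypothetical trusted network of 10.4.20.0/24:

# /etc/hosts.allow - allow SSH only from the trusted network
sshd: 10.4.20.0/255.255.255.0
# verify ownership and permissions on the identity file
ls -l /root/.ssh/identity
chmod 600 /root/.ssh/identity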

4 Ensure that the inetd.conf file has been secured by removing unnecessary services. Which services are unnecessary depends on the function of the Linux server in the environment; a sample of the expected entries is shown after this item.

The following should be commented out:
ftp
tftp
systat
rexd
ypupdated
netstat
rstatd
sadmind
login
finger
chargen
echo
time
daytime
discard
rusersd
sprayd
walld
exec
talk
comsat
rquotad
name
uucp


Ensure that the r programs have been commented out from the inetd.conf file due to the numerous vulnerabilities in these programs.
Ensure that there is no /etc/hosts.equiv file and that no user account has a .rhosts file in its home directory.
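One quick way to review this (exact daemon paths vary by distribution) is to list the entries that remain active and confirm the risky ones are commented out:

# show only the services still enabled in inetd.conf
grep -v '^#' /etc/inetd.conf | grep -v '^$'
# a disabled service should look like this commented-out finger entry
#finger  stream  tcp  nowait  nobody  /usr/sbin/tcpd  in.fingerd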

5 Ensure that Tripwire (or some other method of monitoring for modification of critical system files) is in use.

Ensure that one copy of the Tripwire database is copied onto a write protected floppy or CD.
Ascertain how often a Tripwire compare is done. Determine what corrective actions are taken if there are variances (i.e. changed files).
Ensure that Tripwire sends alerts to the appropriate system administrator if a modification has occurred.
If selective monitoring is enabled ascertain that the files being monitored are those that maintain sensitive information.
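With the open source Tripwire 2.x releases, an integrity check and a scheduled nightly run might look like the following; exact commands differ between Tripwire versions, so treat this as a sketch:

# run an integrity check against the baseline database
tripwire --check
# sample crontab entry: nightly check, report mailed to the administrator
0 2 * * * /usr/sbin/tripwire --check | mail -s "Tripwire report" sysadmin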

6 Vulnerability scans:


Ascertain how often vulnerability scans are run and what corrective action is taken if security weaknesses are detected.
If using Tiger, review /usr/local/tiger/systems/Linux/2 to ascertain whether the base information used for comparison is plausible.
If using TARA, review the tigerrc file to ensure that suitable system checks are enabled.
Other tools that can be used for vulnerability scans are SATAN, SARA, SAINT. Ensure that the latest versions of these scanners are being used.
Commercial products such as ISS System Scanner or Internet Scanner, as well as CyberCop Scanner, may also be used as vulnerability scanners.

7 Ensure that Shadow passwords with MD5 hashing are enabled.

8 Ensure that a boot disk has been created to recover from emergencies.
Ensure that appropriate baselines are created for directory structures, file permissions, filenames and sizes. These baseline files should be stored on CDs.
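One simple way to capture such a baseline (paths are illustrative) is to record checksums, permissions and sizes for the system directories and then burn the output files to CD:

# record checksums of system binaries and configuration files
find /bin /sbin /usr/bin /usr/sbin /etc -type f -exec md5sum {} \; > /tmp/baseline.md5
# record ownership, permissions and sizes
find /bin /sbin /usr/bin /usr/sbin /etc -ls > /tmp/baseline.perms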


9 Review the /etc/lilo.conf file to ensure that the LILO prompt has been password protected and that permissions have been changed to 600.
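A protected configuration would contain entries along these lines (the password is a placeholder); remember to re-run /sbin/lilo after editing:

# /etc/lilo.conf (excerpt)
restricted
password=BootPassw0rd

# then re-install the boot loader and lock down the file
/sbin/lilo
chmod 600 /etc/lilo.conf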

10 Logging:
Review the /etc/syslog.conf file to ascertain that warnings and errors on all facilities are being logged and that all priorities on the kernel facility are being logged.
Ensure that the permissions on the syslog files are 700.
Review the /etc/logrotate.conf file to ascertain if the logs are rotated in compliance with security policy.
Review the crontab file to ascertain if the logrotate is scheduled daily.
If remote logging is enabled ensure that the correct host is included in the /etc/syslog.conf file and that the system clock is synchronised with the logserver. To check the synchronization of the system clock review the /etc/cron.hourly/set-ntp file and ensure that the hardware clock CMOS value is set to the current system time.
Ensure that the log entries are reviewed regularly either manually or using tools like Swatch or Logcheck.
If Swatch is used, review the /usr/doc/swatch-2.2/config_files/swatchrc.personal control file to ensure that all the different log files are being monitored (mail logs, Samba logs, etc.) and that the expressions to ignore are plausible.
If using Logcheck, review the logcheck.ignore files to ensure that the patterns to ignore are plausible.
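As a sketch, /etc/syslog.conf entries along these lines would satisfy the checks above (log file names and the loghost are illustrative):

# warnings and errors from all facilities
*.warn;*.err                /var/log/syslog
# everything from the kernel facility
kern.*                      /var/log/kernel
# copy all messages to a central loghost if remote logging is used
*.*                         @loghost.example.com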

11 Review /etc/inittab file to ascertain if:
Rebooting from the console with Ctrl+Alt+Del key sequence is disabled
Root password is required to enter single user mode
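On a typical SysV-init system the relevant /etc/inittab entries look like this (comment out the first, add the second):

# disable the Ctrl+Alt+Del reboot sequence
#ca::ctrlaltdel:/sbin/shutdown -t3 -r now
# require the root password before entering single user mode
~~:S:wait:/sbin/sulogin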

12 Review the /etc/ftpusers file to ensure that root and system accounts are included.

13 Review the /etc/security/access.conf file to ensure that all console logins except for root and administrator are disabled.

14 TCP Wrappers
Ensure that the default access rule is set to deny all in the /etc/hosts.allow file.
Determine if a procedure exists to run tcpdchk after rule changes.
Run tcpdchk to ensure that the TCP Wrappers configuration is consistent with the /etc/inetd.conf file and that there are no errors.
Review the /etc/banners file to ensure that the appropriate legal notice has been included in the banner.
Review the /etc/hosts.allow file to ensure that the banners have been activated.
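A sketch of a default-deny TCP Wrappers setup, again using the hypothetical 10.4.20.0/24 network as the trusted range (the banners option requires tcpd built with the extended options language):

# /etc/hosts.deny - deny everything not explicitly allowed
ALL: ALL
# /etc/hosts.allow - allow specific services from the trusted network, with banners
sshd: 10.4.20.0/255.255.255.0 : banners /etc/banners
in.ftpd: 10.4.20.0/255.255.255.0 : banners /etc/banners
# verify the configuration after any change
tcpdchk -v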

15 Startup/shutdown scripts

Ascertain whether there is a process to determine which service is listening on which port (using the lsof or netstat command) and whether any unnecessary services have been eliminated.
Review the /etc/rc.d/init.d file to ensure that only the necessary services based on the function of the server are being run.
The services to be stopped are as follows (this is dependent on the server function):
automounter /etc/rc2.d/S74autofs
Sendmail /etc/rc2.d/S88sendmail and /etc/rc1.d/K57sendmail
RPC /etc/rc2.d/S71rpc
SNMP /etc/rc2.d/S76snmpdx
NFS server /etc/rc3.d/S15nfs.server
NFS client /etc/rc2/S73nfs.client
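To see what is listening and to disable an unneeded service, something like the following can be used (the chkconfig example assumes a Red Hat style system):

# list listening sockets and the processes that own them
netstat -tulp
lsof -i
# stop an unneeded service from starting at boot (Red Hat style)
chkconfig sendmail off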

16 Domain Name Service
For the master server ensure that zone transfers are restricted by reviewing the /etc/named.conf file. The IP address of the masters should appear next to the allow-transfer option.
For slave/secondary servers ensure that no zone information is transferred to any other server by reviewing the /etc/named.conf file on the slaves; none should appear next to the allow-transfer option.
Ensure that named is run in chroot jail.
Ensure that syslogd is set to listen to named logging by reviewing the /etc/rc.d/init.d/syslog to ensure that the line referring to the syslog daemon has been edited to read as follows:
daemon syslog -a /home/dns/dev/log.
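Illustrative /etc/named.conf fragments (the master's address is a placeholder):

// on the master - allow zone transfers only to the listed slaves
options {
        allow-transfer { 10.4.20.46; };
};
// on a slave/secondary - allow no transfers at all
options {
        allow-transfer { none; };
};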

17 E-Mail

Ensure that the SMTP VRFY and EXPN commands have been turned off by reviewing the /etc/sendmail.cf file (PrivacyOptions = goaway).
Review the /etc/mail/access file to ensure that it includes only the fully qualified hostname, subdomain, domain or network names/addresses that are authorized to relay mail.
Ensure that domain name masquerading has been set by reviewing the /etc/sendmail.cf file. The masquerade name should be appended to the DM line.
Ensure that the latest patches of POP/IMAP are installed on the mail server.
Review the /etc/hosts.allow file to ensure that mail is only delivered to the authorized network and domain. The network and domain name should appear after the ipop3d and imapd lines.
Ensure that an SSL wrapper has been installed for secure POP/IMAP connections.
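The corresponding sendmail.cf lines would look roughly like this (the domain is a placeholder):

# disable the SMTP VRFY and EXPN commands
O PrivacyOptions=goaway
# masquerade outgoing mail as the organization's domain
DMexample.com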

18 Printing Services

Review the /etc/hosts.lpd file to ensure that only authorized hosts are allowed to use the print server.
If LPRng is used, review the /etc/lpd.perms file to ensure that only authorized hosts or networks are allowed access to the print server and to perform specific operations.

19 NFS
Ensure that only authorized hosts are allowed access to RPC services by reviewing the /etc/hosts.allow file for entries after portmap.
Review the /etc/exports file and ascertain that directories are only exported to authorized hosts with read only option.
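Example entries, with the host, network and path names as placeholders:

# /etc/hosts.allow - restrict RPC portmapper access to the trusted network
portmap: 10.4.20.0/255.255.255.0
# /etc/exports - export read-only to a single authorized host
/export/data    trustedhost(ro)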

20 Server Message Block SMB/SAMBA server
Ensure that the latest version of SAMBA is being run.
Review the /etc/smb.conf file to ensure that only authorized hosts are allowed SAMBA server access.
Ensure that encrypted passwords are used.
Ensure that the permissions on the /etc/smbpasswd file are 600.
Review the /etc/smbpasswd file to ensure that system accounts have been removed (bin, daemon, ftp).
Review the /etc/smb.conf file to ensure that unnecessary shares are disabled.
Review the /etc/smb.conf file to ensure that write permissions have been restricted to authorized users.
Review the /etc/smb.conf file to ensure that the files are not created world readable. The create mask should have a permission bit of 770 and the directory mask should have a permission bit of 750.
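A sketch of the relevant smb.conf [global] settings, assuming the same hypothetical trusted network:

[global]
   hosts allow = 10.4.20.
   encrypt passwords = yes
   smb passwd file = /etc/smbpasswd
   create mask = 0770
   directory mask = 0750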

21 Review the /etc/securetty file to ensure that remote users are not included (i.e. the file only contains tty1 to tty8, inclusive).

22 FTP:

Review /home/ftp, to ensure that the bin, etc and pub directories are owned by root.
Review the /etc/hosts.allow file to ensure that only authorized hosts are allowed access to the ftp server. Authorized hosts and networks should be found after the in.ftpd.
Review the /etc/ftpaccess file to ensure that anonymous users are prevented from modifying writable directories. The entries should be as follows:
chmod no guest,anonymous
delete no guest,anonymous
overwrite no guest,anonymous
rename no guest,anonymous
Review the /etc/ftpaccess file to ensure that files uploaded to the incoming directory have root as owner and that users are not allowed to create sub-directories. Ensure that downloads are denied from the incoming directory.
The lines should read as follows:
upload /home/ftp /incoming yes root nodirs
noretrieve /home/ftp/incoming/
Ensure that the incoming directory is reviewed daily and the files moved out of the anonymous directory tree.

23 Intrusion Detection:
Ensure that PortSentry is in use.
Review the portsentry.conf file and ensure that the KILL_ROUTE option is configured to either:
add an entry to the routing table to send responses to the attacking host to a fictitious destination
add firewall filter rules to drop all packets from the attacking host
Ensure that LIDS (Linux Intrusion Detection/Defense System) is in use. Review lidsadm to ascertain what directories are being protected and the other security options that are enabled.
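Typical KILL_ROUTE entries from portsentry.conf look like the following; only one should be active, and the iptables form assumes a kernel with iptables available:

# black-hole the attacker with a reject route
KILL_ROUTE="/sbin/route add -host $TARGET$ reject"
# or drop all packets from the attacker with a firewall rule
KILL_ROUTE="/sbin/iptables -I INPUT -s $TARGET$ -j DROP"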


24 Ensure PAM is enabled.


25 HTTP Server
Ensure that the basic access is set to default deny by reviewing access.conf. The options should be set as follows:

Options None
AllowOverride None
Order deny,allow
Deny from all

Ensure that further entries to access.conf only allow access and specific options to authorized hosts based on their function.
Ensure that directories don’t have any of the following options set:
ExecCGI
FollowSymLinks
Includes
Indexes
Ensure that password protection is used for sensitive data. However, the auditor must be aware that this security control is inadequate on its own since the userid and password are passed over the network in the clear.
Ensure that SSL is used for secure HTTP communications.
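Putting the above together, the default-deny policy plus a selective grant in access.conf (or httpd.conf, depending on the Apache layout) might look like this, with the document root and network as placeholders:

<Directory />
    Options None
    AllowOverride None
    Order deny,allow
    Deny from all
</Directory>

<Directory /home/httpd/html>
    Options None
    AllowOverride None
    Order deny,allow
    Deny from all
    Allow from 10.4.20.
</Directory>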

Monday, February 1, 2010

Tools to securely erase files in Linux

Deleting a file or reformatting a disk does not destroy your sensitive data. The data can easily be undeleted or read with sector editors or other forensic tools.

(1) Shred
Although it has some important limitations, the shred command can be useful for destroying files so that their contents are very difficult or impossible to recover. shred accomplishes its destruction by repeatedly overwriting files with data patterns designed to do maximum damage so that it becomes difficult to restore data even using high-sensitivity data recovery equipment.
Deleting a file with the rm command does not destroy the data; it merely removes an index listing pointing to the file and makes the file’s data blocks available for reuse. Thus, a file deleted with rm can be easily recovered using several common utilities until its freed data blocks have been reused.

Shred Syntax
shred [option(s)] file(s)_or_devices(s)
Available Options
-f, --force - change permissions to allow writing if necessary
-n, --iterations=N - Overwrite N times instead of the default (25)
-s, --size=N - shred this many bytes (suffixes like K, M, G accepted)
-u, --remove - truncate and remove file after overwriting
-v, --verbose - show progress
-x, --exact - do not round file sizes up to the next full block
-z, --zero - add a final overwrite with zeros to hide shredding
- (a single dash as the file name) - shred standard output
--help - display this help and exit
--version - output version information and exit

Shred Examples
1) The following command could be used to securely destroy the three files named file1, file2 and file3:
shred file1 file2 file3
2) The following would destroy data on the seventh partition on the first HDD
shred /dev/hda7
3) You might use the following command to erase all trace of the filesystem you'd created on the floppy disk in your first drive. That command takes about 20 minutes to erase a "1.44MB" (actually 1440 KB) floppy.
shred --verbose /dev/fd0
4) To erase all data on a selected partition (in this example sda5), you could use a command such as
shred --verbose /dev/sda5


(2) Wipe

wipe Syntax
wipe [options] path1 path2 … pathn
Wipe Examples
Wipe every file and every directory (option -r) listed under /home/test/plaintext/, including /home/test/plaintext/. Regular files will be wiped with 34 passes and their sizes will then be halved a random number of times. Special files (character and block devices, FIFOs…) will not. All directory entries (files, special files and directories) will be renamed 10 times and then unlinked. Things with inappropriate permissions will be chmod()'ed (option -c). All of this will happen without user confirmation (option -f).
wipe -rcf /home/test/plaintext/
Assuming /dev/hda3 is the block device corresponding to the third partition of the master drive on the primary IDE interface, it will be wiped in quick mode (option -q) i.e. with four random passes. The inode won’t be renamed or unlinked (option -k). Before starting, it will ask you to type “yes”.
wipe -kq /dev/hda3
Since wipe never follows symlinks unless explicitly told to do so, if you want to wipe /dev/floppy which happens to be a symlink to /dev/fd0u1440 you will have to specify the -D option. Before starting, it will ask you to type “yes”.
wipe -kqD /dev/floppy
Here, wipe will recursively (option -r) destroy everything under /var/log, except /var/log itself. It will not attempt to chmod() things. It will however be verbose (option -i). It won't ask you to type "yes" because of the -f option.
wipe -rfi >wipe.log /var/log/*
Due to various idiosyncrasies of the OS, it's not always easy to obtain the number of bytes a given device might contain (in fact, that quantity can be variable). This is why you sometimes need to tell wipe the number of bytes to destroy. That's what the -l option is for. Plus, you can use b, K, M and G as multipliers, respectively for 2^9 (512), 2^10 (1024, a kilobyte), 2^20 (a megabyte) and 2^30 (a gigabyte) bytes. You can even combine more than one multiplier, so that 1M416K = 1474560 bytes.
wipe -Kq -l 1440k /dev/fd0



(3) Secure-Delete tools
Tools to wipe files, free disk space, swap and memory. Even if you overwrite a file 10+ times, it can still be recovered. This package contains tools to securely wipe data from files, free disk space, swap and memory.
The Secure-Delete tools are a particularly useful set of programs that use advanced techniques to permanently delete files.
The Secure-Delete package comes with the following commands
srm(Secure remove) - used for deleting files or directories currently on your hard disk.
smem(Secure memory wiper) - used to wipe traces of data from your computer’s memory (RAM).
sfill(Secure free space wiper) - used to wipe all traces of data from the free space on your disk.
sswap(Secure swap wiper) - used to wipe all traces of data from your swap partition.
srm - Secure remove
srm removes each specified file by overwriting, renaming, and truncating it before unlinking. This prevents other people from undeleting or recovering any information about the file from the command line.
srm,  like  every  program  that  uses the getopt function to parse its arguments, lets you use the -- option to indicate  that  all  arguments are non-options.  To remove a file called ‘-f’ in the current directory, you could type either “srm -- -f” or “srm ./-f”.
srm Syntax
srm [OPTION]… FILE…
Available Options
-d, --directory - ignored (for compatibility with rm)
-f, --force - ignore nonexistent files, never prompt
-i, --interactive - prompt before any removal
-r, -R, --recursive - remove the contents of directories recursively
-s, --simple - only overwrite with a single pass of random data
-m, --medium - overwrite the file with 7 US DoD compliant passes (0xF6, 0x00, 0xFF, random, 0x00, 0xFF, random)
-z, --zero - after overwriting, zero blocks used by file
-n, --nounlink - overwrite file, but do not rename or unlink it
-v, --verbose - explain what is being done
--help display this help and exit
--version - output version information and exit

srm Examples
Delete a file using srm
srm myfile.txt
Delete a directory using srm
srm -r myfiles
smem - Secure memory wiper
smem is designed to delete data which may lie still in your memory (RAM) in a secure manner which can not be recovered by thieves, law enforcement or other threats.
smem Syntax
smem [-f] [-l] [-l] [-v]
Available Options
-f - fast (and insecure mode): no /dev/urandom.
-l - lessens the security. Only two passes are written: the first with 0x00 and a final random one.
-l -l (specified twice) lessens the security even more: only one pass with 0x00 is written.
-v - verbose mode
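smem Examples
A sample invocation (run as root; wiping free memory can take a long time):
sudo smem -v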
sfill - secure free space wipe
sfill is designed to delete data which lies on available disk space.
sfill Syntax
sfill [-f] [-i] [-I] [-l] [-l] [-v] [-z] directory/mountpoint
Available Option
-f - fast (and insecure mode): no /dev/urandom, no synchronize mode.
-i - wipe only free inode space, not free disk space
-I - wipe only free disk space, not free inode space
-l - lessens the security. Only two passes are written: one pass with 0xff and a final pass with random values.
-l -l (specified twice) lessens the security even more: only one random pass is written.
-v - verbose mode
-z - wipes the last write with zeros instead of random data
directory/mountpoint this is the location of the file created in your filesystem. It should lie on the partition you want to write.
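sfill Examples
A sample invocation, wiping the free space on the filesystem that holds /home (run as root; sfill fills the free space with temporary files while it works):
sudo sfill -v /home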
sswap - Secure swap wiper
sswap is designed to delete data which may lie still on your swapspace.
sswap Syntax
sswap [-f] [-l] [-l] [-v] [-z] swapdevice
Available Option
-f - fast (and insecure mode): no /dev/urandom, no synchronize  mode.
-l - lessens the security. Only two passes are written: one pass with 0xff and a final pass with random values.
-l -l (specified twice) lessens the security even more: only one pass with random values is written.
-v - verbose mode
-z - wipes the last write with zeros instead of random data
sswap Examples
Before you start using sswap you must disable your swap partition. You can determine your mounted swap devices using the following command
cat /proc/swaps
Disable swap using the following command
sudo swapoff /dev/sda3
/dev/sda3 - This is my swap device
Once your swap device is disabled, you can wipe it with sswap using the following command
sudo sswap /dev/sda3
After completing the above command you need to re-enable swap using the following command
sudo swapon /dev/sda3


(4) DBAN
Darik’s Boot and Nuke (“DBAN”) is a self-contained boot disk that securely wipes the hard disks of most computers. DBAN will automatically and completely delete the contents of any hard disk that it can detect, which makes it an appropriate utility for bulk or emergency data destruction.

Thursday, January 21, 2010

Multi Party Authorization

Most of my IT career has been in the Government/Defense and Banking sectors, where security is a critical component of system design. I've not made a blog post in the past few weeks because I've been wrapped up with a government agency involved in healthcare that had some particular requirements for systems containing information about patients with certain communicable diseases (including AIDS). Due to the heightened privacy concerns over this data, not to mention the HIPAA requirements, I've spent much of the last two months on a project to improve the protection of this data. One of the layers we added is "Multi-Party Authorization" (MPA) for several MySQL applications and for file access to the reports that contained data extracted from those databases.

Multi-Party Authorization basically requires that at least two authorized individuals authenticate before the data can be accessed. This "2 key" approach is sort of like the launch control for a nuclear missile that requires two different people to turn keys before blowing up some small corner of the world.

We do background checks and screening of personnel before allowing them access to our data, but the reality is that most unauthorized security breaches are committed by insiders, and the vast majority of those breaches go undetected because we lack internal mechanisms to audit when someone maliciously or accidentally ventures into data they don't need to access.

Auditing in particular is reactive; it can only detect a breach after it has occurred, and if you discover that an employee made an unauthorized access after the fact, you may be able to fire them, but that doesn't erase the data from their memory or from the thumbdrive at home. I do a lot of auditing, and we've had to terminate people for improperly accessing driver's license info on people in the news, for perusing the tax records of politicians and, in one particularly disturbing case, for looking up the license tag numbers of the 'attractive ladies' a man saw on the highway. We've certainly heard of the people in Ohio, at various levels, who accessed "Joe the Plumber's" records in the various state systems. It is of course good that those people got caught through audits; audit trails are very useful, but the data was still accessed without a legitimate need, printouts were made and in some cases data was sent out over the internet.


Our medical records are now mostly electronic. Multi-Party Authorization can be added to electronic health record systems to protect private patient data from unwanted release or use. The patient could, via Multi-Party Authorization, be the second-party approver of any and all access to their medical records. That would keep sensitive medical data more secure and less likely to be improperly accessed or shared. Alternatively, another trusted entity could be the second-party authorizer controlling access to private medical data. Adding MPA to systems that contain and share medical records protects that data from inappropriate access, and that security builds confidence in electronic health records.

Thursday, December 3, 2009

remove unnecessary services

1. Only run the services needed for the functions the machine provides. For instance, if the server is a database server, you most likely don't need the same box to run Apache, FTP and sendmail. Every extra service running on a box steals performance from the system's primary function and possibly opens up new security vulnerabilities.

2. You can use lsof or a similar tool to determine which ports the computer is listening on.

ns003:~# lsof -i
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
named 17829 root 4u IPv6 12689530 UDP *:34327
named 17829 root 6u IPv4 12689531 UDP *:34329
named 17829 root 20u IPv4 12689526 UDP ns003.psi.net:domain
named 17829 root 21u IPv4 12689527 TCP ns003.psi.net:domain (LISTEN)
named 17829 root 22u IPv4 12689528 UDP 10.4.20.46:domain
named 17829 root 23u IPv4 12689529 TCP 10.4.20.46:domain (LISTEN)
lighttpd 17841 www-data 4u IPv4 12689564 TCP *:www (LISTEN)
sshd 17860 root 3u IPv6 12689580 TCP *:ssh (LISTEN)
sshd 17880 root 3u IPv6 12689629 TCP *:8899 (LISTEN)
sshd 30435 root 4u IPv6 74368139 TCP 10.4.20.46:8872->10.4.20.1:3262 (ESTABLISHED)

3. Shut down any unknown or unneeded services, using the appropriate tools for your Linux distribution, such as update-rc.d on Debian systems, or in some cases editing the /etc/inetd.conf or /etc/xinetd.d/* files.
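For example, on a Debian system an unneeded service can be removed from the boot sequence, and inetd-based services can be disabled by commenting out their lines (service names here are illustrative):

# remove the boot links for an unneeded service (Debian)
update-rc.d -f lpd remove
# or comment the service out of /etc/inetd.conf and reload inetd
#telnet  stream  tcp  nowait  root  /usr/sbin/tcpd  in.telnetd
kill -HUP $(pidof inetd)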


4. Don't allow root logins on your primary sshd port 22 (set PermitRootLogin to "no"); many automated tools run brute-force attacks on that. Set up a secondary port for root access that only works by shared keys, disallowing passwords:
* Copy the sshd_config file to root_sshd_config, and change the following items in the new file:
o Port from 22 to some other number, say 8899 (don't use this! make up your own!)
o PermitRootLogin from "no" (you were supposed to set it to "no" for port 22, remember?) to "yes"
o AllowUsers root add this line, or if it exists, change it to allow only root logins on this port
o ChallengeResponseAuthentication no uncomment this line if it's commented out, and make sure it says "no" instead of "yes"
* Test this command:

sshd -D -f /etc/ssh/root_sshd_config

and see if it works correctly -- try logging in from another computer (you must have already set up shared-key authentication between the two computers) using:

ssh -p8899 root@my.remote.server

and if so, control-C at the above (sshd) command to stop the sshd daemon, then add this to the end of /etc/inittab:

rssh:2345:respawn:sshd -D -f /etc/ssh/root_sshd_config

* Restart the init task: # init q This will run your "root ssh daemon" as a background task, automatically restarting it in case of failure.