ESM ActiveList Import Script

<shamelessly copied from Konrad Kaczkowski’s post on iRock>

ESM Active List Import script – arc_import_al.py

Version 20

Active List import script (PYTHON) – Version 0.6

!!!!! THIS SCRIPT DOES NOT VALIDATE CORRECTNESS OF IMPORTED CSV !!!!!

Fixed special character encoding in Active List import over XML (tested on Symantec GIN source adv_ip URLs)

Symbol   Description                              ArcSight Active List MAP in XML
"        Double quote (or speech marks)           &quot;
&        Ampersand                                \A
,        Comma                                    \C
<        Less than (or open angled bracket)       \L
>        Greater than (or close angled bracket)   \G
\        Backslash                                \\
|        Vertical bar                             \|
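
For illustration, here is a minimal shell sketch of the same mapping (my own sketch, not code from the script – the comma row is assumed from the description column, and the backslash substitution must run first so later escapes are not double-escaped):

escape_al_value() {
  # escape backslash first, then ampersand, so sequences introduced by
  # later substitutions ('\A', '&quot;') are not themselves re-escaped
  sed -e 's/\\/\\\\/g' \
      -e 's/&/\\A/g' \
      -e 's/"/\&quot;/g' \
      -e 's/,/\\C/g' \
      -e 's/</\\L/g' \
      -e 's/>/\\G/g' \
      -e 's/|/\\|/g'
}

echo 'a,b & "c"' | escape_al_value   # prints: a\Cb \A &quot;c&quot;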

 

Fixed removal of temporary files from the /tmp directory – if the AL was huge, it could consume all /tmp space

Fixed verification of access to archive.log [ tree = ElementTree.parse(TEMP_FILE) ... IOError: [Errno 2] No such file or directory: '/tmp/AL_IN_ESM_INVALID' ]

Fixed TEMP_FILE access verification – if there are no write rights, generate a new TEMP_FILE variable

Things to add:

  • check the capacity of the Active List and compare it to the import file
  • check activelist.max_capacity and activelist.max_columns from server.properties
  • check activelist.max_capacity and activelist.max_columns from server.default.properties

THIS SCRIPT HAS BEEN BETA TESTED on Red Hat 6.5 with Python 2.6

Test scenarios are at the end of this post

How it works (see the command sketch after this list):

  • check if the import CSV file exists
  • check connectivity with ESM (validate it is available, the password is correct and the account is not locked)
  • check if the Active List exists on ESM  [ uses the /opt/arcsight/manager/bin/arcsight archive -action export command ]
  • check if the number of columns in the Active List is the same as the number of columns in the CSV file
  • prepare the XML file/files to import
  • import the XML file(s)   [ uses the /opt/arcsight/manager/bin/arcsight archive -action import command ]
  • if a syslog server is specified, send CEF events to the syslog server
  • if option -c was set – delete successfully imported files – otherwise rename them to *.xml.done
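
Roughly, the underlying archive commands look like the following (a sketch only – the -m/-u/-f flags and file paths here are assumptions for illustration; check /opt/arcsight/manager/bin/arcsight archive -h for the exact syntax):

# export the Active List definition (also confirms it exists on the ESM)
/opt/arcsight/manager/bin/arcsight archive -action export -m ManagerName -u UserName -f /tmp/al_export.xml

# import a prepared XML batch
/opt/arcsight/manager/bin/arcsight archive -action import -m ManagerName -u UserName -f /opt/asset_import/batch_1.xml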

Execution:

./arc_import_al.py -r 20 -l "/All Active Lists/BCC/al_IP" -f /opt/asset_import/al_IP.csv -m ManagerName -u UserName -p UserPass -s 10.0.1.33 -P 514 -d -c

where parameters are:

REQUIRED

-r 10                      [ number of rows per single import ]
-l Active List          [ Active List full URI in the format "/All Active Lists/customer/malware" ]
-f filename             [ if the file name contains spaces – wrap it in "QUOTES" ]
-m ESM manager   [ HP ArcSight ESM manager FQDN ]
-u ESM user          [ HP ArcSight ESM import user ]

OPTIONAL

-p ESM user pass  [ HP ArcSight ESM user password ]
-s Syslog Server    [ Syslog server ]
-P Syslog Port       [ Syslog server port ]
-c                          [ clean (delete) imported files ]
-d                          [ debugging – display detailed information from processing ]

ADDITIONAL PARAMETERS

-h  [ help ]
-v  [ version ]

 

# Possible reconfiguration options:
#
# Where the XML files for import are stored: line 66
# export_dlobal_dir = "/opt/asset_import/active list"
#
# Device interface name: line 89
# CEF_dvc = get_ip('eth0')

 

Test scenarios

Test scenario 1:

– Active List 1 [ size: 400000, columns: 4, Type: Event-based ]
Import rows: 331776
Batch size ( -r ): 100000
Time of import:
– processing time: 20 s
– importing: 4 x 12 s

Test scenario 2:

– Active List 2 [ size: 1200000, columns: 1, Type: Field-based ]
Import rows: 1100000
Batch size ( -r ): 200000
Time of import:
– processing time: 95 s
– importing: 6 x 45 s

When the batch size [ -r ] was set to 300k, the import failed.

Below is the resulting ESM Active Channel:

[Screenshot: ESM Active Channel]

Download arc_import_al.py

Installation notes for Logger 6 on CentOS

[Update 2016/04/15]:  Installing Logger 6.2 on CentOS 7.1

CentOS (or RHEL) 7 changed a number of things in how the OS is managed, such as the facility for controlling services – for example, rather than “service” the command is now “systemctl”.  Below I outline a “quickstart” way to get HPE ArcSight Logger 6.2 installed on CentOS 7.1 (minimal distribution). Of course you want to read the Logger Installation Guide, Chapter 3 “Installing Software Logger on Linux” for the complete instructions, and be sure you understand the commands I suggest below before you run them. No warranties here, just suggestions.  ;-)

  1. Do a base install of CentOS (or RHEL) 7.1, minimal packages.  I often suggest adding in Compatibility Libraries, however for this Logger 6.2 install, I just used the base install.  Ensure /tmp has at least 5GB of free space and /opt/arcsight has at least 50GB of usable space – I’d suggest going with at least:
    • /boot – 500MB
    • / – 8GB+
    • swap – 6GB+
    • /opt – 85GB+
  2. Ensure some needed (and helpful) utilities are installed, since the minimal distribution does not include these and unfortunately the Logger install script just assumes they are there .. if they aren’t, the install will eventually fail (such as no unzip binary).
    • yum install -y bind-utils pciutils tzdata zip unzip
    • Unlike my ESM install, for Logger, I left SELinux enabled and things appear to be working alright, but your mileage may vary.  If in doubt, disable it and try again.  To disable, edit /etc/selinux/config and set the mode to “disabled” (or at least to “permissive”)
    • Disable the netfilter firewall (again, at some point I’ll update this with the rules needed to leave netfilter enabled).
    • systemctl disable firewalld; systemctl mask firewalld
    • Install and configure NTP
    • yum install -y ntpdate ntp
    • (optionally edit /etc/ntp.conf to select the NTP servers you want your new Logger system to use)
    • systemctl enable ntpd; systemctl start ntpd
    • Edit /etc/rsyslog.conf and enable forwarding of syslog events to your friendly neighborhood syslog SmartConnector (optional, but otherwise how do you monitor your Logger installation?) .. you can typically just uncomment the log handling statements at the bottom of the file and fill in your syslog SmartConnector hostname or IP address. Note the forward statement I use only has a single at sign – indicating UDP versus TCP designated by two at signs:
    • $ActionQueueFileName fwdRule1 # unique name prefix for spool files
      $ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
      $ActionQueueSaveOnShutdown on # save messages to disk on shutdown
      $ActionQueueType LinkedList # run asynchronously
      $ActionResumeRetryCount -1 # infinite retries if host is down
      # remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
      #*.* @@remote-host:514
      *.* @10.10.10.5:514
    • Restart rsyslog after updating the conf file
    • systemctl restart rsyslog
    • Optionally add some packages that support trouble shooting or other non-Logger functions you run on the Logger server, such as system monitoring
    • yum install -y mailx tcpdump
  3. Update the maximum number of processes and open files our Logger software can use:
    Backup the current settings:
    cp /etc/security/limits.d/20-nproc.conf /etc/security/limits.d/20-nproc.conf.orig
    Drop in new config file (assuming you have copy/pasted the following settings into /root/20-nproc.conf):
    cp 20-nproc.conf /etc/security/limits.d/20-nproc.conf

    Contents of the /etc/security/limits.d/20-nproc.conf file becomes:
    # Default limit for number of user's processes to prevent
    # accidental fork bombs.
    # See rhbz #432903 for reasoning.
    * soft nproc 10240
    * hard nproc 10240
    * soft nofile 65536
    * hard nofile 65536
    root soft nproc unlimited

    Reboot to enable the new settings.
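
    A quick sanity check after the reboot – log back in and verify the new limits took effect (expected values per the file above):

    ulimit -u   # max user processes, should report 10240
    ulimit -n   # max open files, should report 65536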
  4. Add an unprivileged user “arcsight” to own the application and run as:
    groupadd -g 1000 arcsight
    useradd -u 1000 -g 1000 -d /home/arcsight -m -c "ArcSight" arcsight
    passwd arcsight
  5. Ensure the *parent* directory for the Logger software exists. Standard locations for installation of ArcSight products should be /opt/arcsight, so for example, we’re going to install our Logger software at /opt/arcsight/logger.
    cd /opt
    mkdir /opt/arcsight
  6. Run the Logger installation binary as “root” user
    • ./ArcSight-logger-6.2.0.7633.0.bin
  7. After the installation script completes successfully, you should be able to log in to the console via a web browser at https://<hostname>
    Default username “admin” with default password “password”. You’ll be forced to change the admin password on login.
  8. If you are going to install any SmartConnectors on the system hosting your Logger, check out my post regarding required libraries for CentOS and RedHat, before you try to run the Linux SmartConnector install. This includes any Model Import Connectors (MIC) or forwarding connectors (SuperConnectors).

 

[Update 2016/03/11]: Starting with SmartConnector 7.1.7 (I think, might be a rev or two earlier), there are a couple more libraries that are needed to successfully install the SmartConnector on Linux. Include libXrender.i686 libXrender.x86_64 libgcc.i686 libgcc.x86_64
yum install libXrender.i686 libXrender.x86_64 libgcc.i686 libgcc.x86_64

These notes describe an installation of HP ArcSight Logger 6.0.1 on a CentOS 6.5 virtual machine.

For a test install of Logger 6, I built a CentOS vm with the following parameters:
Basic install from the CentOS 6.5 Minimum ISO
1 CPU with 2 cores
4GB memory
80GB virtual disk
1 bridged network adapter
Disk partition sizes:
root fs 6GB, swap 4GB, /home 2GB, /opt/arcsight 50GB, /archive 10GB, free space approximately 15GB

As soon as the system was up, I commented out the archive filesystem (will be re-mounted under the /opt/arcsight/logger directory)
vi /etc/fstab
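
The commented-out entry looks something like this (the device name is hypothetical for illustration – yours will match whatever the installer created):

# /dev/mapper/vg_swlogger1-lv_archive  /archive  ext4  defaults  1 2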

Installed the bind-utils and ntp packages so I could use dig and friends, then did a full yum update:
yum install bind-utils ntp
yum update

This turns the system into CentOS 6.6, but that’s still a supported system for Logger, so all’s good.

Next we prepare the system for Logger software install by adding a user and changing some of the system configuration.

Add a non-root user to own and run the Logger application:
groupadd -g 1000 arcsight
useradd -u 1000 -g 1000 -d /home/arcsight -m -c "ArcSight" arcsight
passwd arcsight

Install libraries that Logger depends on:
yum install glibc.i686 libX11.i686 libXext.i686 libXi.i686 libXtst.i686
yum install zip unzip

Update the maximum number of processes and open files our Logger processes can have:
cp 90-nproc.conf /etc/security/limits.d/90-nproc.conf

Contents of the /etc/security/limits.d/90-nproc.conf file becomes:
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
*          soft    nproc     10240
*          hard    nproc     10240
*          soft    nofile    65536
*          hard    nofile    65536
root       soft    nproc     unlimited

Turn off services we don’t need and turn on the ones we do need. Later we will write some iptables rules so we can turn the firewall back on when we’re done.

chkconfig iptables off
service iptables stop
chkconfig iscsi off
service iscsi stop
chkconfig iscsid off
service iscsid stop
ntpdate name-of-ntp-server-you-trust
chkconfig ntpd on
service ntpd start

All of these steps are packaged up here in centos-setup.shl:
groupadd -g 1000 arcsight
useradd -u 1000 -g 1000 -d /home/arcsight -m -c "ArcSight" arcsight
passwd arcsight
cp 90-nproc.conf /etc/security/limits.d/90-nproc.conf
yum install glibc.i686 libX11.i686 libXext.i686 libXi.i686 libXtst.i686
yum install zip unzip
chkconfig iptables off
service iptables stop
chkconfig iscsi off
service iscsi stop
chkconfig iscsid off
service iscsid stop
ntpdate 0.centos.pool.ntp.org
chkconfig ntpd on
service ntpd start

Turns out since we need 3+GB of free space in /tmp, I needed to extend the root filesystem .. I only allocated 2GB to begin with. Extend the root logical volume (lv_root) by adding 1,000 Physical Extents (4MB each):

Boot into rescue mode .. do NOT mount linux partitions, then drop to a shell

vgs
vgchange -a y vg_swlogger1
lvextend -l +1000 /dev/vg_swlogger1/lv_root
e2fsck -f /dev/vg_swlogger1/lv_root
resize2fs /dev/vg_swlogger1/lv_root

Now reboot and confirm there is at least 4GB of free space in /tmp. Could also have mounted a ram filesystem, but this will do as I’m conserving my memory on the host.
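
Quick check after the reboot:

df -h /tmp   # the Avail column should show at least 4G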

Upload the Logger installer binary and also the license file to the system into root’s home directory (or where you have space).

As root, run the Logger software install:
chmod u+x ArcSight-logger-6.0.0.7307.1.bin
./ArcSight-logger-6.0.0.7307.1.bin

Word of advice .. if doing this in a vm, run the install from the vm console since it’s possible the vm will be busy enough a remote ssh session could get disconnected – and the install will not complete properly.

After the install, we should be able to open a browser by navigating to https://name-of-vm-here

Sign in as admin / password, then navigate to the System Administration section to change the admin password.

Building a Highly-Available ArcSight SmartConnector Cluster with Pacemaker

Cost Effective SmartConnector HA

This paper describes the use of open source clustering software used to build a low-cost, reliable, high availability environment on CentOS Linux in which to run both passive and active SmartConnectors, providing automated failure recovery.

Introduction

At the current time there is no inherent High-Availability capability for ArcSight SmartConnector installations other than HA management of connectors through multiple Connector Appliances. Once events have been acquired by a SmartConnector, the store-and-forward architecture provides a reliable event handling ecosystem, but the problem is what to do when a specific SmartConnector, or the system it is running on, fails. Traditionally customers would procure and employ hardware load balancers in front of SmartConnector Connector Appliances or Connector Concentrators, although that only really deals with passive connectors, such as syslog, SNMP or other listeners. Active connectors such as Windows, Database readers, etc. would require manual failure recovery in order to restore the service of event collection. Although customers can use commercial clustering technology, such as Veritas Cluster Server, those tools can require substantial capital investment. This paper describes the use of open source clustering software to build a low-cost, reliable, high availability environment in which to run both passive and active SmartConnectors, providing active failure recovery and service continuance. This configuration is not endorsed or supported by HP Enterprise Security Products and is provided for informational purposes only.

This package includes documentation and scripts to set up a cluster from scratch in an automated manner. Access to cluster packages in CentOS or local customer provided repositories is needed by the setup scripts. Users of this package need to obtain a Linux binary of the HP ArcSight SmartConnector software – it is not included. The result of the included quickstart script will be a functional cluster with a syslog SmartConnector running and able to fail over to a partner node in the case of primary node failure. The two cluster nodes must have at least two (2) network segments, although all traffic to/from the event sources can be on any customer network that is reachable via standard IPv4 routing – the cluster does not operate in-line but rather as a distinct IP node on the customer network.

Assuming a relatively fast connection to the Internet, or internal servers, for access to the CentOS software repositories, the quickstart script can complete the cluster setup in less than 15 minutes, but one should expect to take a day to review the cluster configuration, commands and proper operating procedures. Recovery from incorrect cluster commands or operations will almost assuredly require a cluster outage for re-configuration, resync or worse, backup/recovery. Given the relative low cost of simple 1U servers, it is strongly recommended that two pairs of nodes are used to create a test cluster and production cluster. Modest VMware or other virtual servers can be used to implement the test environment. TCP/UDP protocol ports that are used are specific to the unique cluster IP addresses, so there should not be any collisions – although care must be taken to choose unique multicast addresses for the cluster communication provided by corosync. This is not done automatically by the quickstart scripts.

Feedback is welcome, both success stories and problems/bugs that are encountered, but users need to self-support any implementations. The current maintainer is Allen Pomeroy (a at pomeroy dot us)

Download the Whitepaper and Cluster setup scripts in this zip file: BuildingAHASmartConnectorCluster-2.0.6

Make your own Reduce File Size presets for PDF export

Within Preview there is a filter that can be used to reduce the size of PDF files (think of PDF files that are 600 DPI high resolution).  Unfortunately it produces very poor quality images to the point of being unusable. Fortunately there is a way to create and install your own custom quartz filters for use in Preview that give large file size reductions while maintaining good quality.

After some googling, I found a perfect article that explains why the default Mac OS X Reduce File Size filter produces terrible quality images .. and how to fix that:

http://hints.macworld.com/article.php?story=20120629091437274

The filter, which is just a XML file, can be edited with any text or programming editor then saved to the  /System/Library/Filters  directory with a unique filename.  The Reduce File Size (Good) filter is what I use .. rather than posting as a code block and messing around with escaping the XML so the code displays correctly, the file is available for [download here].

Simply download the contents of this file, ensure it is renamed to a .qfilter file, then copy it into the system filter directory (so it is available for all users). I chose to use /System/Library/Filters/Reduce File Size Good.qfilter. You may need to be a Mac OS X Administrator to write this file into the shared system library folder. At this point, in Preview you can use this filter to reduce large scanned PDF files by almost an order of magnitude.
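
For example, assuming the downloaded file landed in your Downloads folder (the source path here is an assumption), from Terminal:

sudo cp ~/Downloads/"Reduce File Size Good.qfilter" "/System/Library/Filters/Reduce File Size Good.qfilter"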

Here is the text of the original post:

 
Make your own Reduce File Size presets for PDF export
Jul 05, ’12 07:30:00AM Contributed by: zpjet

I was never satisfied with results of “Reduce File Size” Quartz filter when trying to make some PDFs smaller before sending them by e-mail. It made them too small, and the graphics were fuzzy.

I eventually found where these filters are:

/System/Library/Filters

I was delighted to find out they’re XML files easily editable with TextEdit (or any other text editor). I also found why this particular filter makes quite unusable PDFs, as these parameters were just too low:

Compression Quality 0.0
ImageSizeMax 512

So I copied this file to my Desktop, and then made two more copies of it, and called them Reduce File Size Good, Better and Best. Then I changed the parameters of each file to 0.25, 0.5 and 0.75 for Compression Quality, and used these three values for ImageSizeMax:

842 (that’s A4 at 72dpi)
1684 (A4 at 144dpi)
3508 (A4 at 300dpi)

Finally, I changed the default string for the Name key at the end of each file to reflect the three settings, so they display the names I have given them in the menu.

Then I copied them to a /Library/Filters folder I created (for some reason, ~/Library/Filters doesn’t work in Lion) and now when I open a picture or PDF in Preview, I have the option of four different qualities for reduced file sizes.

As an example, I have a JPEG of scanned A4 invoice at 300dpi and it’s 1.6MB. When exporting to PDF in reduced size, the file is only 27 KB and it’s quite unusable – very fuzzy and hard to read. The Good one is much easier to read, slightly fuzzy and still only 80 KB. Better is 420 KB and clear, and the Best is 600 KB and almost as good as the original even on a laser printer.

How to replay syslog events using the performance testing feature of ArcSight SmartConnectors


[Updated 2016/08/22]

For testing ArcSight SmartConnector settings or Logger and Enterprise Security Manager (ESM) content, it is quite useful to be able to replay previously captured syslog events.  The built-in PerfTestSyslog class in ArcSight SmartConnectors makes this easy.

There are several ways to capture syslog traffic into a text file for use in replay scenarios. Below are some methods that I have used – they may not be the most elegant, but they get the job done.

Run a packet capture of syslog traffic

On the node that has inbound syslog traffic, run a packet capture using tcpdump:

tcpdump -nn -i eth0 -s0 -w syslog-traffic.pcap port 514

where eth0 is the network interface receiving the syslog traffic, syslog-traffic.pcap is the resulting pcap format output file of captured events, and 514 is the port on which syslog traffic is expected to be received.

After capturing a suitable number of events, import the pcap file into Wireshark, click on one of the syslog packets, then right click and select Follow UDP Stream. A decoded content window will appear where you can select Save As .. and dump it to a sample events file. Be sure to select ASCII rather than Raw format. This will be your event input file to feed the PerfTestSyslog function of the ArcSight SmartConnector.
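
If you prefer to stay on the command line, something like the following can dump the ASCII payloads instead (a rough alternative – packet header lines still need to be cleaned out of the resulting file):

tcpdump -nn -A -r syslog-traffic.pcap port 514 > syslog-events-raw.txt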

Replaying the syslog events using an ArcSight SmartConnector is controlled via the GUI that is displayed when the PerfTestSyslog class is launched. In my example, I have a Test Connector installed on my current host (RedHat Enterprise Linux, however Windows, Solaris or AIX would work just as well) in the /opt/agents/syslog-udp-1514 directory. This connector is up and running, listening on UDP 1514 for syslog messages, however we are also going to use it to feed the syslog events to the same connector. Just think of it as two separate, unrelated processes, since you could just as easily use this to feed the syslog events to another host somewhere on the network.

cd /opt/agents/syslog-udp-1514/current/bin
./arcsight agent runjava com.arcsight.agent.loadable._PerfTestSyslog -H 127.0.0.1 -P 1514 -f ~arcsight/udp.txt -x 50

In this example, we are launching the connector framework (./arcsight) and telling the PerfTestSyslog class to read the ~arcsight/udp.txt file (our previously saved syslog events captured with tcpdump) and send them to Host 127.0.0.1 on Port 1514. The last argument is interesting – it configures a slider allowing the user to dynamically increase the Events Per Second (EPS) rate up to a maximum of (in our case) 50 EPS.

A sample capture file has events that look like:

<190>Jun 27 2012 12:16:53: %PIX-6-106015: Deny TCP (no connection) from 10.50.215.102/15603 to 2.3.4.5/80 flags FIN ACK  on interface outside

You can also eliminate the original timestamp if you choose:

%PIX-6-106015: Deny TCP (no connection) from 10.50.215.102/15605 to 204.110.227.10/80 flags FIN ACK  on interface outside

The PerfTestSyslog class has a number of pretty useful options, including -m to randomize the Device Address. This is really good for faking events from multiple firewalls.
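
For example, appending -m to the earlier command line (a sketch – whether -m takes an argument may vary by connector version, so check the class usage output):

./arcsight agent runjava com.arcsight.agent.loadable._PerfTestSyslog -H 127.0.0.1 -P 1514 -f ~arcsight/udp.txt -x 50 -m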

Various configuration options exist on both the receiving SmartConnector (that is listening on UDP 1514) and the transmitting program, including the ability to keep the original timestamp intact or replace it with current time. This is especially useful for testing new content or performing historical analysis on previously saved event data where the original timestamp is needed.

Update:

For situations where you would like to run this without a GUI, you can add the -n option to start with no GUI.  In that case, although the rate is no longer dynamic, you do need to specify a starting event rate – otherwise it appears the default is 0, i.e. no events will be sent.  Instead of only specifying -x for the max rate, also specify the starting rate with -r

./arcsight agent runjava com.arcsight.agent.loadable._PerfTestSyslog -H 127.0.0.1 -P 1514 -f ~arcsight/udp.txt -n -r 50 -x 50

See also: Common ArcSight Command Line Operations

Unix, Linux and Mac OS X Notes

Here’s some notable command syntax I use. You can also select the Notes category and you’ll get more specific topics such as Linux LVM and Mac OS X commands.

rsyslog options

Forward syslog events to external host via UDP:
– edit /etc/rsyslog.conf .. add a stanza like the example at the end of the file .. a single @ = UDP forward, @@ = TCP forward

$WorkDirectory /var/lib/rsyslog # where to place spool files
$ActionQueueFileName fwdRule1 # unique name prefix for spool files
$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
$ActionQueueType LinkedList # run asynchronously
$ActionResumeRetryCount -1 # infinite retries if host is down
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
*.* @10.0.0.45:514

– restart the rsyslog daemon
systemctl restart rsyslog.service
or
service rsyslog restart

Mac OS X syslog to remote syslog server

Forward syslog events on Mac OS X 10.11 to external syslog server via UDP or TCP:
– edit /etc/syslog.conf .. add a line at the end of the file .. a single @ = UDP forward, @@ = TCP forward

*.* @10.0.0.45:514
# remote host is: name or ip:port, e.g. 10.0.0.45:514, port optional

– restart the OS X syslog daemon
sudo launchctl unload /System/Library/LaunchDaemons/com.apple.syslogd.plist
sudo launchctl load /System/Library/LaunchDaemons/com.apple.syslogd.plist

Write ISO image to USB on Mac

– plug in USB to Mac
– lookup disk number
sudo diskutil list
– unmount the USB
sudo diskutil unmountDisk /dev/disk2
– copy ISO image to USB
sudo dd if=CentOS.iso of=/dev/disk2

NIC MAC change

Changing MAC address of NIC
– RedHat stores this in: /etc/sysconfig
networking/devices/ifcfg-eth?
networking/profiles/default/ifcfg-eth?
hwconf
You need to edit the hwaddr in /etc/sysconfig/hwconf and HWADDR in the other locations (some are links).

ssh tunneling of syslog traffic

– Example SSH configuration for tunneling a syslog TCP stream from a remote server back to a local node:

Remote node has TCP client process (rsyslog) running, we want it to write to a local TCP port (15514/tcp), and have that local port forward to the local node we have initiated the ssh connection from to a syslog daemon listening on port 1514/tcp:

Remote node rsyslog.conf:
@@localhost:15514

Event flow is through ssh on the remote node, listening on 15514/tcp and forwarding to the local node via ssh tunnel launched on the local node:
$ ssh -R 15514:localhost:1514 remotehostusername@remote.hostname.domain

To complete the picture, we probably want some sort of process on the local node to detect when the ssh connection has been lost and (1) re-establish the ssh connection, (2) restart rsyslog on the remote host to re-establish the connection from the remote rsyslog daemon to the ssh listener on port 15514/tcp.
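
A crude watchdog sketch (hypothetical and untested – host names match the example above; adjust the remote restart command to the remote platform):

#!/bin/bash
# keep the reverse tunnel alive; when ssh exits, wait and re-establish it,
# restarting the remote rsyslog so it reconnects to the 15514/tcp listener
while true; do
  ssh -R 15514:localhost:1514 remotehostusername@remote.hostname.domain \
    'service rsyslog restart; while sleep 300; do :; done'
  sleep 15   # back off before retrying after the connection drops
done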

YUM Software Repository

– Manually add DVD location/repository by:

35.3.1.2. Using a Red Hat Enterprise Linux Installation DVD as a Software Repository

You can use a Red Hat Enterprise Linux installation DVD as a software repository, either in the form of a physical disc or in the form of an ISO image file.

1. Create a mount point for the repository:
mkdir -p /path/to/repo

Where /path/to/repo is a location for the repository, for example, /mnt/repo. Mount the DVD on the mount point that you just created. If you are using a physical disc, you need to know the device name of your DVD drive. You can find the names of any CD or DVD drives on your system with the command cat /proc/sys/dev/cdrom/info. The first CD or DVD drive on the system is typically named sr0. When you know the device name, mount the DVD:
mount -r -t iso9660 /dev/device_name /path/to/repo
For example: mount -r -t iso9660 /dev/sr0 /mnt/repo

If you are using an ISO image file of a disc, mount the image file like this:
mount -r -t iso9660 -o loop /path/to/image/file.iso /path/to/repo
For example: mount -r -o loop /home/root/Downloads/RHEL6-Server-i386-DVD.iso /mnt/repo

Note that you can only mount an image file if the storage device that holds the image file is itself mounted. For example, if the image file is stored on a hard drive that is not mounted automatically when the system boots, you must mount the hard drive before you mount an image file stored on that hard drive. Consider a hard drive named /dev/sdb that is not automatically mounted at boot time and which has an image file stored in a directory named Downloads on its first partition:

mkdir /mnt/temp
mount /dev/sdb1 /mnt/temp
mkdir /mnt/repo
mount -r -t iso9660 -o loop /mnt/temp/Downloads/RHEL6-Server-i386-DVD.iso /mnt/repo

2. Create a new repo file in the /etc/yum.repos.d/ directory:
The name of the file is not important, as long as it ends in .repo. For example, dvd.repo is an obvious choice. Choose a name for the repo file and open it as a new file with the vi text editor. For example:

vi /etc/yum.repos.d/dvd.repo

[dvd]
baseurl=file:///mnt/repo/Server
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

The name of the repository is specified in square brackets — in this example, [dvd]. The name is not important, but you should choose something that is meaningful and recognizable. The line that specifies the baseurl should contain the path to the mount point that you created previously, suffixed with /Server for a Red Hat Enterprise Linux server installation DVD, or with /Client for a Red Hat Enterprise Linux client installation DVD. NOTE: After installing or upgrading software from the DVD, delete the repo file that you created to get updates from the online sources.

IP Networking

– Manually add IPv4 alias to interface by:
ip addr add 192.168.0.30/24 dev eth4
– Manually remove that IPv4 alias to interface by (note the subnet mask):
ip addr del 192.168.0.30/32 dev eth4
– Manually add route for specific host:
route add -host 45.56.119.201 gw 10.20.1.5

pcap files

– Split large pcap file by using command line tool that comes with Wireshark editcap:
editcap -c 10000 infile.pcap outfile.pcap

tcpdump options

Display only packets with SYN flag set (for host 10.10.1.1 and NOT port 80):
tcpdump 'host 10.10.1.1  &&  tcp[13]&0x02 = 2  &&  !port 80'

Mac OS X (10.7)

sudo /usr/sbin/sysctl -w net.inet.ip.fw.enable=1
sudo /sbin/ipfw -q /etc/firewall.conf
sudo ifconfig en0 lladdr 00:1e:c2:0f:86:10
sudo ifconfig en1 alias 192.168.0.10 netmask 255.255.255.0
sudo ifconfig en1 -alias 192.168.0.10
sudo route add -net 10.2.1.0/24 10.3.1.1

rpm commands:

List files in an rpm file
rpm -qlp package-name.rpm

List files associated with an already installed package
rpm --query --filesbypkg package-name
How do I find out what rpm provides a file?
yum whatprovides '*bin/grep'
Returns the package that supplies the file, but the repoquery tool (in the yum-utils package) is faster and provides more output, and can perform other queries such as listing package contents, dependencies, and reverse-dependencies.
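
For example:

repoquery -l package-name              # list package contents
repoquery --requires package-name      # dependencies
repoquery --whatrequires package-name  # reverse-dependencies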

sed commands:

Remove specific patterns (delete or remove blank lines):
sed '/^$/d'
sed command matching multiple line pattern (a single log line got split into two lines, the second line beginning with a space):
cat syslog3.txt | sed 'N;s/\n / /' > syslog3a.txt
– matches the end of line (\n) and space at the beginning of the next line, then removes the newline

awk commands:

Print out key value pairs KVP separated by =:
awk /SRC=/ RS=" "
Print out source IP for all iptables entries that contain the keyword recent:
cat /var/log/iptables.log | egrep recent | awk /SRC=/ RS=" " | sort | uniq
Sum column one in a file, giving the average (where NR is the automatically computed number of lines in the file):
./packet_parser analyzer_data.pcap | awk '{print $5}' | sed -e 's/length=//g' | awk 'BEGIN {sum=0} { sum+=$1 } END { print sum/NR }'
Find the number of tabs per line – used to do a sanity check on tab delimited input files
awk -F$'\t' '{print NF-1;}' file | sort -u

sort by some mid-line column

I wanted to sort by the sub-facility message name internal to the dovecot messages, so found the default behavior of sorting by space delimited columns works.

sort -k6 refers to the sixth column with the default delimiter as space.
sort -tx -k1.20,1.25 is an alternative, where ‘x’ is a delimiter character that does not appear anywhere in the line, and character position 20 is the start of the sort key and character position 25 is the end of the sort key.

This sorts by the bold column:
$ sort -k6 dovecot.txt
Oct 7 09:09:31 server1 dovecot: auth: mysql: Connected to 10.30.132.15 (db1)
Oct 7 09:34:03 server1 dovecot: auth: sql(user1@example.com,10.30.132.15): Password mismatch
Oct 7 09:33:36 server1 dovecot: auth: sql(someuser@example.com,10.30.132.15): unknown user
Oct 7 09:15:27 server1 dovecot: imap(user1@example.com): Disconnected for inactivity bytes=946/215256
Oct 7 09:21:11 server1 dovecot: imap(user2@example2.com): Disconnected: Logged out bytes=120/12718

dos2unix equivalent with tr

tr -d '\15\32' < windows-file.csv > unix-file.csv

Fedora 16 biosdevname

– Fedora 16 includes a package called “biosdevname” that sets up strange network port names (p3p1 versus eth0) .. since I don’t particularly care if my ethernet adapter(s) is(are) in a particular PCI slot, remove this nonsense by:

yum erase biosdevname

– to take total control of network interfaces back over (edit /etc/sysconfig/network-scripts/ifcfg-eth?)

– remove NetworkManager

yum erase NetworkManager
chkconfig network on

Resetting user passwords in Mac OS X Leopard without Administrator

For those odd times where you need to reset the password for a user on a Mac (OS X 10.5 Leopard) and you don’t have access to an administrator account, this is a procedure that will work if you have physical access to the system and can reboot it. No boot DVD is needed if you can boot the system off the internal hard disk.

We boot into single user mode off the internal hard disk, then reset the target user password.

  1. Boot into single user mode (press Command-S at power on)
  2. Check the root filesystem first
    fsck -fy
  3. Mount up the root filesystem
    mount -uw /
  4. Load system directory services
    launchctl load /System/Library/LaunchDaemons/com.apple.DirectoryServices.plist
  5. Edit user information
    dscl . -passwd /Users/username password (replace username with the targeted user and password with the new password)
  6. Reboot then sign in with the new password.
    reboot

MySQL Notes

MySQL Command Line and Configuration Notes

Drop tables with wildcard:

There are multiple ways to specify MySQL credentials, this is not the best, but simply an example of how to drop tables using a wildcard pattern. In this case, command line history such as .bash_history will store your MySQL username and password plaintext, and an extended process listing will also reveal both username and password. When run from the command line like this, the SQL commands and the credentials are not stored in the MySQL history file (.mysql_history).  On closed (private) systems, the risk is low, especially if you clean up after these maintenance activities by purging the command histories.

mysql -u user -ppassword database -e "show tables" | grep "table_pattern_to_drop_" | awk '{print "drop table " $1 ";"}' | mysql -u user -ppassword database

w3af web security assessment tool gets support from Rapid7

Rapid7, which purchased the Metasploit attack framework last year, has agreed to sponsor the open source w3af web assessment and exploit project. This is fantastic news for web application development teams, since it shows the open source (and hence more affordable) tools they can use to improve the security of their applications are maturing.

Websites like sectools.org maintain lists of various security tools and point to numerous open source web application fuzzing and testing tools, including BurpSuite, Nikto, WebScarab, Whisker and Wikto. Although each of the open source tools I use has various strengths, w3af is IMHO the first reasonable challenger to commercial web application testing tools like IBM’s AppScan.

Can we please get rid of bad input validation errors now??

For a commercial IT security professional that wants to help an internal web application development team improve the security of their applications, tools like IBM’s AppScan and Acunetix WVS can save valuable time by generating reports that include not only the vulnerable URI but also include vulnerability background information (CVSS, OWASP, WASC), the specific HTTP request/response strings and suggested code fixes. This is particularly valuable to a security architect or operations role that is pressed for time (an army of one anyone?).

The w3af support from Rapid7 will enable this excellent tool to mature more quickly and improves the capability for any web development team, regardless of funding, to improve their security. Can we please get rid of bad input validation errors now?? My recent thesis illustrated the downright depressing numbers of SQL injection flaws that continue to exist. With tools like w3af, there is no excuse left for web developers to press applications into production with these injection flaws that are trivial to avoid. At the very least a survey of the NIST National Vulnerability Database does show the number of SQL injection flaws starting to drop. Unfortunately they still substantially outnumber traditional memory corruption flaws such as buffer overflows.

[Chart: Explosion of SQL injection vs. buffer overflow errors]

As you can see, the story up to 2008 was pretty grim for web applications – SQL injection flaws increased by over 1,500% in the same time that buffer overflow errors increased by just over 500%.

Although it looks like there has been a reversal of the shocking explosion of SQL injection flaws, the sheer volume of these web application flaws is astonishing .. especially since injection flaws have been around for about 10 years. Not exactly a problem that has recently snuck up on us.

Web developers that still turn out applications that contain SQL or command injection errors and most cross site request forgery errors are simply guilty of gross negligence.

Despite the web development industry knowing these errors exist and good developers designing and coding to avoid these issues, there is still a need to build sufficient forensics around externally facing (publicly accessible) applications to enable reconstruction of attacks. In my next post, I outline a summary of my thesis “Effective SQL injection attack reconstruction using network recording”.

IMAP mailstore migration .. again

So last weekend, I discovered that Spamhaus decided it would be a good idea to place all of the public IP addresses for Slicehost (my Linux VPS hoster) into their Spamhaus Block List (SBL). This covered both my slice in Dallas and the one in St. Louis – meaning an impressive chunk of inbound mail to my domains was being trashed by the sending MTA and an even bigger chunk of my outbound mail was being outright rejected since the sending IP’s were on the SBL.  Slicehost worked hard to convince Spamhaus to rescind the blocklist, so the Slicehost IP’s got moved over to the less-nasty-but-you’re-still-probably-a-spamming-dirtbag Policy Block List (PBL), assuming affected IP owners would request to be removed from that list.

Sample query to see if you’re on any Spamhaus block list:  http://www.spamhaus.org/query/bl?ip=10.11.12.13

It seems it’s time to relinquish the care and feeding of my own Postfix mail system and turn to a hosted solution.  This means I need to migrate about 5GB of IMAP store to another site (again).  Last time I did a wholesale migration, I used imapsync to make the transition painless. In the code example below, an SSL connection to the IMAPS server at imap-server.sourcedomain.com is made with username@sourcedomain.com and the password stored in the plaintext file secret1. An SSL connection is made to the target system (which happens to be the server on which the imapsync tool is running, but could just as easily be another IMAPS server somewhere on a network accessible to the host where imapsync is running). The --delete and --expunge1 arguments will clean the successfully moved messages from IMAP store #1 .. so be sure you have your messages on the target successfully! Imapsync can be run iteratively to ensure you have got all the messages from your source.


/usr/bin/imapsync \
--host1 imap-server.sourcedomain.com \
--ssl1 \
--authmech1 LOGIN \
--user1 username@sourcedomain.com --passfile1 secret1 \
--host2 127.0.0.1 --user2 username@targetdomain.com --passfile2 secret2 \
--ssl2 \
--delete --expunge1 \
--buffersize=128

And one can use the

--dry

option to just test the process but not actually move any of the messages.
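
e.g. the same transfer as above, as a no-op test run:

/usr/bin/imapsync \
--host1 imap-server.sourcedomain.com --ssl1 --authmech1 LOGIN \
--user1 username@sourcedomain.com --passfile1 secret1 \
--host2 127.0.0.1 --user2 username@targetdomain.com --passfile2 secret2 --ssl2 \
--dry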

So that’s it – I’m about halfway through migrating my current IMAP stores over to a hosted mail solution, so that I don’t need to keep up with the increasing level of care and feeding that running your own mail service requires.  Before I get too many darts about that .. I first started running my own personal MTA in 1995, adding spam and AV filtering over time, and adding substantial redundancy (servers, sites, storage) so I could rely on it and fix things that broke as I had time, rather than right when they broke (which was always at a bad time).  My new hosted solution takes over from two VPS servers running Postfix, Spamassassin, ClamAV, Greylisting with the IMAP store replicated across data centers in different states (15 minute rsyncs).  So soon, the (hopefully) last Allen Pomeroy owned and operated MTA can be turned off, while I get to work on fun stuff, rather than figuring out why my email is bouncing.  :-)

Update 2012/12/17:

Sometimes manual manipulation of your mailstore via IMAP is needed, so here’s how I deleted a large number of folders I had trashed and were being synced to my new system from the old.  Kinda clunky, since I didn’t get the scripted version to work (just used a copy/paste in an interactive bash session), but got the job done for now.

Connect to the IMAP server using SSL:
openssl s_client -crlf -connect imap.emailsrvr.com:993

* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE AUTH=PLAIN] Server ready director6.mail.ord1a.rsapps.net

Log in with your email credentials:
0 login user@domain.com Password

0 OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS MULTIAPPEND UNSELECT CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS QUOTA] Logged in

List the folders you want to remove:
0 list "" "Trash.*"

That didn’t return the list I was expecting, so I listed all folders:
0 list "" "*"

… and realized the source mail system adds “INBOX” on the front of the folder names, so then this command worked to list the folders to be deleted:
0 list "" "INBOX.Trash.*"

I copied the output and edited it to insert the folder name into a delete command:
0 delete "INBOX.Trash.Folder1"
0 delete "INBOX.Trash.Folder2"
0 delete "INBOX.Trash.Folder3"

0 OK Delete completed.
0 OK Delete completed.
0 OK Delete completed.

Finish off the session by logging out:
0 logout

* BYE Logging out
0 OK Logout completed.
closed

Sifting through Checkpoint FW1 logs

Recently I found myself in the unhappy position of needing to sift through slightly more than a billion Checkpoint Firewall-1 log lines, looking for specific patterns of access. The problem was that many of the exported fwm log files had differing column positions and there had been many ruleset changes over the course of 11 months worth of log data. Many of the excellent FW1 log summarization tools (such as Peter Sundstrom’s fwlogsum) didn’t handle the hundreds of files and differing column positions.

The final scripted solution was processing over 11,000 lines/second .. and still took over 23 hours for the first run.

Log file exports via fwm logexport can have variable column positioning, except for record ID number “num”, which is *always* column number one.  I see three viable alternatives to the changing column position in the ASCII log files exported via fwm – so we can automate the log processing:

  • Export the FW1 log file to ASCII via
    fwm logexport -i fw1-binary-logfile -o fw1-ascii-logfile.txt -n -p
    1. Parse the header line (line #1) of every log file and dynamically map (rearrange) the columns to a pre-determined standard in memory before further processing (painful, expensive)
    2. Tell Checkpoint fwm to export in a fixed column ordering:
        create logexport.ini and place it in the $FWDIR/conf directory
        (e.g. on fwmgmtsrv: C:\WINDOWS\FW1\R65\FW1\conf)
        logexport.ini contents:
        [Fields_Info]
        included_fields = num,date,time,orig,origin_id,type,action,alert,i/f_name,
        i/f_dir,product,rule,src,dst,proto,service,s_port,xlatesrc,xlatedst,
        nat_rulenum,nat_addtnl_rulenum,xlatesport,xlatedport,user,
        partner,community,session_id,ipv6_src,ipv6_dst,
        srckeyid,dstkeyid,CookieI,CookieR,msgid,elapsed,
        bytes,packets,start_time,snid,ua_snid,d_name,id_src,ua_operation,
        sso_type_desc,app_name,auth_domain,uname4domain,wa_headers,
        result_desc,r_dest,comment,url,redirect_url,enc_desc,e2e_enc_desc,
        auth_result,attack,log_sys_message,
        rule_uid,rule_name,service_id,resource,reason,cat_server,
        dstname,SOAP Method,category,ICMP,message_info,
        TCP flags,rpc_prog,Total logs,
        Suppressed logs,DCE-RPC Interface UUID,Packet info,
        message,ip_id,ip_len,ip_offset,fragments_dropped,during_sec
    3. Use OPSEC LEA tools to extract event log records instead of export via fwm logexport

Once the ASCII log files are available for processing, my fw1logsearch.pl script can be used to find complex patterns of interest.  Any matching records found by fw1logsearch will be output with an initial FW1 header line so that fw1logsearch can be used iteratively, to build very complex search criteria.  fw1logsearch can also write out a discard file allowing completely negative logic searches, resulting in 100% of the input data separated into a match file and a didn’t-match file.  Some examples of how I’ve used it are shown here:

gunzip -c fwlogs/2009*gz | \
fw1logsearch.pl --allinclude \
-S '10\.1\.1[1359]\.|10\.2\.1[01]\.|192\.168\.2[245]\.' \
-d '10\.1\.1[1359]\.|10\.2\.1[01]\.|192\.168\.2[245]\.' \
-p '^1310$|^1411$|^1812$|^455' | \
fw1logsearch.pl -S '192\.168\.22\.14$|10\.2\.11\.12$' |\
fw1logsearch.pl --allexclude \
-S '^192\.168\.24\.12$' -P '^1310$' --rejectfile 192-168-24-12-port-1310.txt

Line by line:

1. Unzip the compressed ASCII log files, feed them to the first instance of fw1logsearch.pl

2. First fw1logsearch – all conditions must be true for any events to match:
Source address must NOT be in any of the following regex ranges:
10.1.11.* 10.1.13.* 10.1.15.* 10.1.19.*
10.2.10.* 10.2.11.*
192.168.22.* 192.168.24.* 192.168.25.*
Destination address must be in one of the same regex ranges.
Service (destination port) must be one of:
exactly port 1310, 1411, 1812, or any port starting with 455
No protocol is specified, so it will match either TCP or UDP

fw1logsearch.pl will output any matching events to stdout, including a FW1 log header line, so the next instance of fw1logsearch.pl continues filtering the result set.

3. The second fw1logsearch.pl specifies the Source Address must not be any of the following:
192.168.22.14
10.2.11.12

4. The last fw1logsearch.pl excludes port 1310 from 192.168.24.12, and puts all those records into a separate reject file, while writing the other records to stdout.

This script has been used to process over 4 billion records within the project I wrote it for – and precisely found all the uses of the particular business cases I needed to modify.  The result was zero outages and no unintended business interruption.

Basic syntax/help file:

Usage:  fw1logsearch.pl
[-a|--incaction|-A|--excaction <action regex>]
[-p|--incservice|-P|--excservice <dst port regex>]
[-b|--incs_port|-B|--excs_port <src port regex>]
[-s|--incsrc|-S|--excsrc <src regex>]
[-d|--incdst|-D|--excdst <dst regex>]
[-o|--incorig|-O|--excorig <fw regex>]
[-r|--incrule|-R|--excrule <rule-number regex>]
[-t|--incproto|-T|--excproto <proto regex>]

[--dnscache <dns-cache-file>]
[--resolveip]
[--allinclude]
[--allexclude]
[--rejectfile <file>]
[--debug <level>]

fw1logsearch.pl will search a fwm logexport text file for regex patterns specified for supported columns (such as service, src, dst, rule, action, proto and orig).

Include and exclude regex matches may be specified on the same line, although they both will include (print) a line or exclude (reject) a line based on single matches.  Allinclude or Allexclude must be specified to force a match only on all specified column regex patterns.

Regex patterns can be enclosed in single quotes to include characters that are special to the shell, such as the 'or' (|) operator.

The header will be output only if there are any matching lines.

Example invocations:
$ cat 2008-07-07*txt | \
fw1logsearch.pl \
-p '53|domain' \
-d '192.168.1.2|host1|10.10.1.2|host2' \
-o '192.168.2.3|10.10.2.4|10.10.4.5' \
-S '64.65.66.67|32.33.34.35|10.10.*|192.168.*' \
--resolveip

Will require the destination port (service) to be 53; the destination IP to be any of 192.168.1.2, host1, 10.10.1.2, or host2; the reporting firewall (origin) to be any of 192.168.2.3, 10.10.2.4, or 10.10.4.5; and the source IP must not be any of 64.65.66.67, 32.33.34.35, 10.10.*, or 192.168.*.  Any lines that match these criteria will display, and the orig, src, and dst columns will use the default DNS cache file (dynamically built/managed) to perform name resolution, replacing the IP addresses where possible.

Include regex patterns:
-a  --incaction    Rule action (accept, deny)
-b  --incs_port    Source port (s_port)
-p  --incservice   Destination port (service)
-s  --incsrc       Source IP|hostname
-d  --incdst       Destination IP|hostname
-o  --incorig      Reporting FW IP|hostname
-r  --incrule      Rule number that triggered entry
-t  --incproto     Protocol of connection

Exclude regex patterns:
-A  --excaction    Rule action (accept, deny)
-B  --excs_port    Source port (s_port)
-P  --excservice   Destination port (service)
-S  --excsrc       Source IP|hostname
-D  --excdst       Destination IP|hostname
-O  --excorig      Reporting FW IP|hostname
-R  --excrule      Rule number that triggered entry
-T  --excproto     Protocol of connection

Other options:
--debug {level} Turn on debugging
--dnscache      Specify location of the DNS cache file to be used with the Resolve IPs option
--resolveip     Resolve IPs for orig, src, and dst columns AFTER filtering
--rejectfile    Write out all rejected lines to a specified file

Download fw1logsearch.pl

Mac OS X Command Line notes

Encrypted Filesystems with Sparse Bundles
Mac OS X offers encrypted filesystems through sparse bundles.  To mount up a sparse bundle, given the password used to create the bundle, use hdiutil:

hdiutil attach -verbose -readonly /path/to/sparse.bundle.directory

This will mount up the sparse bundle located at the directory path specified.  To unmount the sparse bundle, use:

hdiutil detach /Volumes/sparse.bundle.name

Adding entries to /etc/hosts
Although simply editing /etc/hosts should work, there are times where the new entries may not be recognized; in these cases the OS X name cache daemon needs to be kicked:

dscacheutil -flushcache

Mac OS X Hostnames
Although you can change the hostname of your Mac OS X device through the System Control Panel -> Sharing, the following command lines can lock the name so DHCP and other dynamic networking protocols don’t mess up your hostname (from RichardBronosky):

sudo hostname my-permanent-name

sudo scutil --set LocalHostName $(hostname)

sudo scutil --set HostName $(hostname)

Handy Command Lines
Command line short cuts:

pmset -g batt   Show battery status

launchctl unload /System/Library/LaunchDaemons/com.apple.syslogd.plist; sleep 1; launchctl load /System/Library/LaunchDaemons/com.apple.syslogd.plist   Reload syslog daemon

SHA Hash on Mac OS X
Mac OS X doesn’t have sha256sum, but does have openssl, so the following can compute a SHA256 hash:

openssl dgst -sha256 Fedora-17-x86_64-DVD.iso

Synchronizing directories

Fast way to synchronize the content of your iTunes libraries – this doesn’t sync the playlists or any iTunes meta information (and you may need to perform an Add to Library .. to import any new content). This was just a quick and dirty way to sync up my iTunes downloads with another iTunes library at home. This assumes that you’ve opened up the ability to Remote Login (ssh) to the target Mac (topic for another time).

rsync -av -e ssh "Music/iTunes/iTunes Music/" ahull@10.20.1.103:"/Users/ahull/Music/iTunes/iTunes\ Music"

Linux iptables notes

Add local redirection of low port to unpriv high port

Remove any existing entries:

iptables -t nat -D PREROUTING --src 0/0 -p tcp --dport 25 -j REDIRECT --to-ports 11025 2> /dev/null
iptables -t nat -D PREROUTING --src 0/0 -p tcp --dport 80 -j REDIRECT --to-ports 8080 2> /dev/null

Add new redirects:
iptables -t nat -I PREROUTING --src 0/0 -p tcp --dport 25 -j REDIRECT --to-ports 11025
iptables -t nat -I PREROUTING --src 0/0 -p tcp --dport 80 -j REDIRECT --to-ports 8080

Linux RAID, LVM and crypto Filesystem Notes

LVM Notes

I wanted to upgrade the disks in my Linux PVR to a 1TB pair and thus had to migrate from one existing disk (/dev/sda) to the new (/dev/sdb):

1. Add new physical disk to system

2. Partition disk to have a Linux LVM partition – use partition type 0x8e

# fdisk /dev/sdb

3. Add to LVM

# pvcreate /dev/sdb2

4. Add physical LVM volume to a LVM volume group (VolGroup00)

# vgextend /dev/VolGroup00 /dev/sdb2

5. Move all LVM volumes off the old LVM disk

# vgdisplay -v (look for old physical volume name)

# pvmove /dev/olddisk      # will move all physical extents from olddisk to any available pv in the vg

6. Remove old disk from the vg

# vgreduce /dev/olddisk

7. Remove old disk from LVM

# pvremove /dev/olddisk

RAID Notes
Debian RAID setup on my PVR:
/dev/md0  /boot
/dev/hda1
/dev/hdb1
/dev/md1  /
/dev/hda2
/dev/hdb2
/dev/md2  swap
/dev/hda3
/dev/hdb3
/dev/md3  /data
/dev/hda4
/dev/hdb4

Show detail of RAID set:
# mdadm --detail /dev/md0

Detach mirror member:
– first mark the member as bad (unless it really is bad, in which case it’ll already be marked faulty):
# mdadm --set-faulty /dev/md0 /dev/hdb1
– now remove it from the RAID1 set
# mdadm --remove /dev/md0 /dev/hdb1

To reattach a member (after partitioning, or if it’s the same disk):
# mdadm /dev/md0 --add /dev/hdb1
– to watch the progress of the resync, look at /proc/mdstat
# cat /proc/mdstat

I think now (2010/01/24) the faulty syntax is:

mdadm /dev/md0 --fail /dev/sdb1

then

mdadm /dev/md0 --remove /dev/sdb1

Crypto Filesystem Notes

Linux (2.6) crypto filesystems are supported via a loopback device. Various ciphers can be specified.  In this example, the default AES cipher is used and the disk partition is /dev/sdb1 – which is just set up as a normal Linux (0x83) partition.

1. Load the crypto filesystem module

modprobe cryptoloop

2. Start the crypto device (I’ll insert initialization instructions here later)

Note – you don’t need losetup if the parameters are specified in fstab and mount does the startup. When losetup runs, it will prompt for the passphrase used to encrypt the partition. Once the crypto driver has the correct key to allow on-the-fly encryption/decryption, processes that use the partition see cleartext (such as mount).

losetup -e aes /dev/loop0 /dev/sdb1 || exit 1
mount /bu

Reducing malware risk by removing local Administrator privileges

Running day-to-day with a Windows account that has Administrator privileges is a recipe for disaster.  Casual browsing of a website that is infected or inadvertent opening of infected attachments can result in an infection through the user’s Administrator privileges.  Something like 92% of Microsoft critical vulnerabilities announced in 2008 could have been mitigated by operating day-to-day as a normal user.  Splitting your accounts into a normal account and an admin account is a good idea, but it can lead to some headaches when the normal user needs to run temporarily as Administrator.

Fortunately there are some workarounds that can be used to temporarily elevate the user’s privileges to Administrator.  Most of these involve the RUNAS command:

File Explorer
If you’re running IE7 under WinXP, in order to run Windows Explorer with the runas command, it must be run as a separate process. A quick way to do this, without having to change your Folder Options settings, would be to run an instance of Explorer with the undocumented parameter /separate, like this:

runas /user:domain\username "explorer /separate"

Command Line Prompt
You can add a shortcut on the task bar with the following syntax to get an Administrator cmd prompt:

%windir%\system32\runas.exe /user:yourdomain\a-someuser cmd

yourdomain is the name of your AD domain if you have one; if not, leave it out.  a-someuser is a suggested naming convention for the Administrator account associated with the user named someuser.