Executing an Effective Security Program

In today's globally connected, Internet-reliant IT environment, corporate network compromise is a fact of life. Defense in depth is still an important design pattern, but even organizations with relatively mature capabilities are relying on detection, since prevention alone is simply not enough anymore. Whereas several years ago we used to talk about preventing externally facing application attacks through coding flaws that lead to SQL Injection and buffer overflow attacks, successful attackers have now moved on to the weakest link: users. Compromised user credentials now factor into 96% of successful attacks on organizations. Why take the brute force, difficult path of application compromise when attackers can simply run a successful spear phishing attack against individuals in the organization?

This is where advanced detection comes in. User and Entity Behavior Analysis yields high quality alerts on the anomalous behavior exhibited by accounts whose users have been successfully compromised. The same detection capability exists for spotting users who exceed their authority, typically classed as Insider Threat, and the machine learning can also detect systems (entities) behaving in ways antithetical to their normal behavior. Think of Point of Sale or healthcare Internet of Things devices that have been compromised, where there is no specific user identity that can be used to profile normal behavior.

Of all the technologies that can be deployed, the foundation must be a sound information security program that puts policies, standards, guidelines and procedures in place to authorize and support the controls. The Security, Cyber, and IA Professionals (SCIAP.org) group has pulled together a concise document that outlines how to build an Effective Security Program.

Installation notes for ArcSight ESM 6.9.1 on CentOS 7.1


Installation of HPE ArcSight Enterprise Security Manager (ESM) 6.9.1 on CentOS 7.1 is substantially easier now that engineering has added a "pre-installation" setup script to this version.  For a smooth installation, there are still a few steps we need to take .. outlined below.

  1. Base install of CentOS 7.1, minimal packages but add Compatibility Libraries. Be sure you use the CentOS-7-x86_64-Minimal-1503-01.iso revision since more recent releases of CentOS have other quirks that may make the ESM install or execution fail. Ensure /tmp has at least 5GB of free space and /opt/arcsight has at least 50GB of usable space – I’d suggest going with at least:
    • /boot – 500MB
    • / – 8GB+
    • swap – 6GB+
    • /opt – 85GB+
  2. Ensure some needed (and helpful) utilities are installed, since the minimal distribution does not include these and unfortunately the ESM install script just assumes they are there .. if they aren’t, the install will eventually fail.
    • yum install -y bind-utils pciutils tzdata zip unzip
    • Edit /etc/selinux/config and disable (or set to permissive) .. the CORR storage engine install will fail with SELinux in "enforcing" mode.  I'll update this at some point with how to leave SELinux in enforcing mode.
    • Disable the netfilter firewall (again, at some point I’ll update this with the rules needed to leave netfilter enabled).
    • systemctl disable firewalld;  systemctl mask firewalld
    • Install and configure NTP
    • yum install -y ntpdate ntp
    • (optionally edit /etc/ntp.conf to select the NTP servers you want your new ESM system to use)
    • systemctl enable ntpd; systemctl start ntpd
    • Edit /etc/rsyslog.conf and enable forwarding of syslog events to your friendly neighborhood syslog SmartConnector (optional, but otherwise how do you monitor your ESM installation?) .. you can typically just uncomment the log handling statements at the bottom of the file and fill in your syslog SmartConnector hostname or IP address. Note the forward statement I use only has a single at sign – indicating UDP versus TCP designated by two at signs:
    • $ActionQueueFileName fwdRule1 # unique name prefix for spool files
      $ActionQueueMaxDiskSpace 1g   # 1gb space limit (use as much as possible)
      $ActionQueueSaveOnShutdown on # save messages to disk on shutdown
      $ActionQueueType LinkedList   # run asynchronously
      $ActionResumeRetryCount -1    # infinite retries if host is down
      # remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
      #*.* @@remote-host:514
      *.* @10.10.10.5:514
    • Restart rsyslog after updating the conf file
    • systemctl restart rsyslog
    • Optionally add some packages that support troubleshooting or other non-ESM functions you run on the ESM server, such as system monitoring
    • yum install -y mailx tcpdump
  3. Untar the ESM distribution tar ball, ensure the files are owned by the "arcsight" user, then run Tools/prepare_system.sh to adjust the maximum open files and other requirements that we used to update manually in previous releases.  NOTE: in 6.9.1 some previously "shadow" requirements are now enforced (i.e. you don't get to change them) .. such as the application owner account must be "arcsight" and the installation directory must be "/opt/arcsight".  The "prepare_system.sh" script will check whether an "arcsight" user already exists and, if not, will create it.  I usually create all the common users on my various systems manually since I want them to have the same uid / gid across all my systems.
  4. Run the Tools/prepare_system.sh script as “root” user
    • cd Tools
    • ./prepare_system.sh
  5. Run the ESM install as the “arcsight” user
    • ./ArcSightESMSuite.bin
  6. Download content from the HPE ArcSight Marketplace at https://saas.hpe.com/marketplace/arcsight
  7. Install your ESM 6.9.1 console on Windows, Linux or Mac OS X .. although the web interface is much richer in the last couple releases, you’ll still need to use the console for content creation and editing.
  8. Optionally extend the session timeout period for the web interface.  There still isn't an easy setting for this in the GUI, so get onto the command line on your ESM server and edit or add the following lines .. which indicate the timeout period in seconds.  The default is short – on the order of five to ten minutes. You should be able to edit these configuration files as the "arcsight" user, but I typically restart the services as "root".
    • Edit /opt/arcsight/manager/config/server.properties
    • service.session.timeout=28800
    • Edit /opt/arcsight/logger/userdata/logger/user/logger/logger.properties
    • server.search.timeout=28800
    • Restart the ESM services .. I typically run this as “root”
    • /etc/init.d/arcsight_services stop
    • /etc/init.d/arcsight_services start
  9. Optionally configure the manager to display a static banner at the top of each console interface so you can have multiple consoles open and know what manager each is connected to (cool!):
    • Edit /opt/arcsight/manager/config/server.properties and add server.staticbanner.* properties (backgroundcolor, textcolor, text). Both backgroundcolor and textcolor take black, blue, cyan, gray, green, magenta, orange, pink, red, white, yellow as acceptable arguments. Text is the identifier you would like that manager to display, such as “super-awesome-production-box”
    • server.staticbanner.textcolor=green
    • server.staticbanner.backgroundcolor=black
    • server.staticbanner.text=esm691
    • Restart the ESM manager service .. I typically run this as “root”
    • /etc/init.d/arcsight_services stop manager
    • /etc/init.d/arcsight_services start manager
  10. If you are going to install any SmartConnectors on the system hosting your Enterprise Security Manager, check out my post regarding required libraries for CentOS and RedHat, before you try to run the Linux SmartConnector install. This includes any Model Import Connectors (MIC) or forwarding connectors (SuperConnectors).
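
As a convenience, the libraries post referenced in step 10 is reproduced later in these notes; on CentOS/RHEL it boils down to roughly the following (the exact package list can vary by SmartConnector version):

yum install -y glibc.i686 libX11.i686 libXext.i686 libXi.i686 libXtst.i686 libXrender.i686 libXrender.x86_64 libgcc.i686 libgcc.x86_64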

BlockSync Project

Welcome to the BlockSync Project

This project aims to provide an efficient way to deliver mutual protection from bad actors that attack Internet facing servers. The result will be an open source set of communication tools that use established protocols for high speed, lightweight transmission of attacker information to a variable number of targets (unicasting to a possibly large number of hosts).

Background

There are many open source firewall technologies in widespread use, most based on either packet filter (pf) or netfilter (iptables). Plenty of technology provides network clustering (for example, OpenBSD's CARP and pfsync; netfilter; corosync and pacemaker), but it is difficult for disparate (loosely coupled) servers to communicate the identity of attackers in real time to a trusted community of (tightly coupled) peers. Servers or firewalls that use state-table replication techniques, such as pfsync or netfilter, have a (near) real-time view of the pass/block decisions other members have made. There needs to be a mechanism for loosely coupled servers to share block decisions in a similar fashion.

Our goal is to create an open source tool for those of us that have multiple Internet facing servers to crowd source information that will block attackers via the firewall technology of choice (OpenBSD/FreeBSD pf/pfSense, iptables, others).
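
To illustrate the kind of action a receiving peer would take when a block decision arrives (hypothetical commands only; the BlockSync tooling itself is not yet published and the pf table name is made up):

# netfilter (iptables) host
iptables -I INPUT -s 203.0.113.45 -j DROP
# OpenBSD/FreeBSD pf host, using a dedicated table
pfctl -t blocksync -T add 203.0.113.45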

Project Page

All project files are still private, but when we publish to GitHub or SourceForge, this section will be updated.

Funding

We have published a GoFundMe page to help acquire more lab equipment, at gofundme.com/BlockSync

Using the ArcSight ESM Console to Create Replay Files

HP ArcSight Enterprise Security Manager (ESM) has some built-in capabilities to generate event files suitable for use with the ArcSight Test SmartConnector.  These replay files can be used to test the functioning of new ESM content (Dashboards, Datamonitors, Filters, Rules, Queries, Trends, Reports, etc.).  The Test connector has some very powerful features, including the ability to replay the captured data as is, or to update the date/time stamp on each event to make the data appear current versus historical.  The Test connector can also run multiple replay files into its configured destinations simultaneously, at a variable rate suitable to support initial content development as well as high speed, high volume performance testing.

Preparing to Generate Replay File

There are multiple ways to generate replay files, but in this post we will focus on using the ESM console application software to generate the replay file from selected events already existing in the ESM instance.  In order to constrain the events to a selected subset, we need to have a filter prepared to choose the appropriate events.


For this example, a filter named router4 will be used; it simply selects all events that have been generated by device name router4 or device address 10.20.1.27.

Generating the Replay File

On the workstation or system where the ESM Console software is installed, start the replay file generator with a replayfilegen argument to the arcsight script in the bin directory.  If the console is installed on Linux or Mac OS X, simply use ./arcsight replayfilegen as the command.
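
For example, on a Linux or Mac OS X console installation (the console path below is just an example; adjust it to your install location):

cd /opt/arcsight/console/current/bin
./arcsight replayfilegen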


When the replayfilegen tool starts, it will display a GUI that allows the user to select the target filename to be generated, the timeframe to query and the filter to select the event data.


Note that a relative time frame may be specified by using relative start and end time operators – these will calculate the absolute time frames needed.


Once the collection has started, there will be a progress display showing the generation of the replay file.


Deploying and Using the Replay File

Now that the replay file has been generated, the user can simply copy the file to the current directory of the Test SmartConnector. There can be multiple replay files in the current directory and all will be displayed when the Test connector GUI starts.


The user can select which replay files are to be read and events forwarded to the Test connector destinations.  Any or all of the replay files may be selected, making the Test connector ideal for assisting in content development for multiple use cases.


Once the desired replay files are selected, the events will be replayed to the configured destinations at the rate specified by the user, as soon as the Continue button is pressed.


The Test connector will run through all the event data in each selected replay file and stop. By default there is only one pass through the data files and no event data is altered. ESM Manager Receipt Time will show the current date/time; however, the original timestamps will be present in the event data.  The event rate can be changed dynamically while the replay is in progress, so, for example, some basic event data could be played to the destinations for a while and then the user could adjust the event rate substantially higher to speed the event ingest to the destinations.  This is useful for testing use cases such as denial of service or worm outbreak detection that are sensitive to event rates.

There are many run-time options that can be set for the Test Connector, including the ability to loop on the replay files, replay the event data with current time stamps and other event handling options.

ESM ActiveList Import Script

<shamelessly copied from Konrad Kaczkowski’s post on iRock>

ESM Active List Import script – arc_import_al.py

Version 20

Active List import script (PYTHON) – Version 0.6

!!!!! THIS SCRIPT DOES NOT VALIDATE CORRECTNESS OF IMPORTED CSV !!!!!

Fixed special character encoding in active list import over XML (tested on the Symantec GIN source adv_ip URLs)

Symbol   Description                               ArcSight Active List map in XML
"        Double quotes (or speech marks)           &quot;
&        Ampersand                                 \A
+        Comma                                     \C
<        Less than (or open angled bracket)        \L
>        Greater than (or close angled bracket)    \G
\        Backslash                                 \\
|        Vertical bar                              \|

 

Fixed removal of temporary files from the /tmp directory – if the Active List was huge it could use all of the /tmp space

Fixed verification of access to archive.log [ tree = ElementTree.parse(TEMP_FILE) …  IOError: [Errno 2] No such file or directory: ‘/tmp/AL_IN_ESM_INVALID’ ]

Fixed TEMP_FILE access verification – if there are no write rights, generate a new value for TEMP_FILE

Things to add:

  • check capacity of Active List and compare to import file
  • check activelist.max_capacity and activelist.max_columns from server.properties
  • check activelist.max_capacity and activelist.max_columns from server.default.properties

THIS SCRIPT HAS BEEN THROUGH BETA TESTS on RedHat 6.5 with Python 2.6

Test scenarios are at the end of this post

How does it work:

  • check if the import csv file exists
  • check connectivity with ESM (validate it is available, the password is correct and the account is not locked)
  • check if the Active List exists on ESM  [ uses the /opt/arcsight/manager/bin/arcsight archive -action export command ]
  • check if the number of columns in the Active List is the same as the number of columns in the csv file
  • prepare xml file/files to import
  • import xml file   [ uses the /opt/arcsight/manager/bin/arcsight archive -action import command ]
  • if a syslog server is specified, send CEF events to the syslog server
  • if option -c was set, delete successfully imported files – otherwise change the name to *.xml.done

Execution:

./arc_import_al.py -r 20 -l "/All Active Lists/BCC/al_IP" -f /opt/asset_import/al_IP.csv -m ManagerName -u UserName -p UserPass -s 10.0.1.33 -P 514 -d -c

where parameters are:

REQUIRED

-r 10                      [ number of rows per single import ]
-l Active List           [ active list full URI in the format "/All Active Lists/customer/malware" ]
-f filename             [ if the file name contains a space, wrap the filename in " QUOTES " ]
-m ESM manager   [ HP ArcSight ESM manager FQDN ]
-u ESM user          [ HP ArcSight ESM import user ]

OPTIONAL

-p ESM user pass  [ HP ArcSight ESM user password ]
-s Syslog Server    [ Syslog server ]
-P Syslog Port       [ Syslog server port ]
-c                          [ clean (delete) imported files ]
-d                          [ debugging – display detailed information from processing ]

ADDITIONAL PARAMETERS

-h  [ help ]
-v  [ version ]

 

# Possible reconfiguration options:
#
# Location where the xml files for import are stored: line 66
# export_dlobal_dir = "/opt/asset_import/active list
#
# Device interface name: line 89
# CEF_dvc = get_ip('eth0')

 

Test scenarios

Test scenario 1:

– Active List 1 [ size: 400000, columns: 4, Type: Event-based ]
Import rows: 331776
Batch size ( -r ) : 100000
Time of import :
– processing time: 20 s
– importing: 4 x 12 s

Test scenario 2:

– Active List 2 [ size: 1200000, columns: 1, Type: Field-based ]
Import rows: 1100000
Batch size ( -r ) : 200000
Time of import :
– processing time: 95 s
– importing: 6 x 45 s

When the batch size [ -r ] was set to 300k, the import failed.


Download arc_import_al.py

How To Increase ArcSight ESM Command Center GUI Timeout

In the appliance versions of most ArcSight products, there is the ability to set the user session timeout period. Typically this defaults to somewhere between five (5) and 15 minutes – good for a default but incredibly annoying for any real user.  In ArcSight Enterprise Security Manager (ESM), there is no such GUI configuration that allows modification of the user session timeout – so this is what has worked for me:

Set ArcSight Command Center (ACC) timeout greater than 900 seconds (15 minutes) – set to 28800 seconds (8 hours)
vi /opt/arcsight/manager/config/server.properties
service.session.timeout=28800
/sbin/service arcsight_services stop all
/sbin/service arcsight_services start all

Default is 600 seconds = 10 minutes.

In 6.5, 6.5.1 and 6.8 you also need to add the following for the Logger interface in ESM:

vi /opt/arcsight/logger/userdata/logger/user/logger/logger.properties
server.search.timeout=28800
/sbin/service arcsight_services stop all
/sbin/service arcsight_services start all

Default is 600 seconds = 10 minutes.

Yes, eight (8) hours may seem like a long time, so choose what is appropriate for your site.  :)

Common ArcSight Command Line Operations

Here are a number of command line operations that are frequently needed within the ArcSight ecosystem.

Export Enterprise Security Manager Certificate without a GUI
Use for ESM 6 or later.
Look up the manager certificate details and alias name by running a list operation:
arcsight keytool -store clientcerts -list | grep manager
self-arcsight-manager-esm6c, Feb 20, 2013, trustedCertEntry,
Export the certificate by running an export operation with the certificate alias name:
arcsight keytool -store clientcerts -exportcert -alias self-arcsight-manager-esm6c -file /home/arcsight/manager.cer
The manager certificate can then be imported into Logger via the web interface or into the cacerts certificate store for a SmartConnector.
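
As a sketch of the SmartConnector side, the exported certificate can be imported into the connector's cacerts store with the bundled JRE keytool (the paths and the default "changeit" store password are assumptions; adjust for your install):

cd /opt/agents/syslog-udp-1514/current
jre/bin/keytool -importcert -alias self-arcsight-manager-esm6c -file /home/arcsight/manager.cer -keystore jre/lib/security/cacerts -storepass changeit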

Launch SmartConnector Keytool GUI

To launch the keytool GUI for editing the certificate store used by a specific connector, use the following syntax, where … refers to the installation directory of the SmartConnector:

cd .../current/bin
./arcsight agent keytoolgui

Send syslog events via SmartConnector

To replay syslog events from a flat file to a syslog daemon destination, use the following syntax, where … refers to the installation directory of the SmartConnector:

cd /opt/agents/syslog-udp-1514/current/bin
./arcsight agent runjava com.arcsight.agent.loadable._PerfTestSyslog -H 127.0.0.1 -P 1514 -f ~arcsight/udp.txt -x 50

Required Parameters:
-H Host where packets will be sent to
-P Port where packets will be sent to

Optional Parameters:
-d Source IP address (1.1.1.1)
-f syslog-data-file
-x Max. rate (5000)

Options:
-h help – Get help for this command
-m multiple devices – Simulate multiple devices
-s sequential – Use sequence numbers as time
-t use raw TCP instead of UDP

See also: How to replay syslog events using the performance testing feature of ArcSight SmartConnectors and Creating event replay files for ArcSight SmartConnectors

Send SNMP events via SmartConnector

To replay SNMP events from a flat file to an SNMP daemon destination, use the following syntax, where … refers to the installation directory of the SmartConnector (note it does not have to be an SNMP SmartConnector):

cd /opt/agents/syslog-udp-1514/current/bin
./arcsight agent runjava com.arcsight.agent.loadable._PerfTestSyslog -H 127.0.0.1 -P 162 -f ~arcsight/snmp.txt

Required Parameters:
-H Host where packets will be sent to
-P Port where packets will be sent to

Optional Parameters:
-d Source IP address (1.1.1.1)
-f SNMP file to read
-x Max. rate (5000)

Options:
-h help – Get help for this command
-m multiple devices – Simulate multiple devices
-s sequential – Use sequence numbers as time

See also: Creating event replay files for ArcSight SmartConnectors

Installation notes for Logger 6 on CentOS

[Update 2016/04/15]:  Installing Logger 6.2 on CentOS 7.1

CentOS (or RHEL) 7 changed a number of things in the OS for command and control, such as the tooling used to control services – for example, rather than "service", the command is now "systemctl".  Below I outline a "quickstart" way to get HPE ArcSight Logger 6.2 installed on CentOS 7.1 (minimal distribution). Of course you will want to read the Logger Installation Guide, Chapter 3 "Installing Software Logger on Linux" for the complete instructions, and be sure you understand the commands I suggest below before you run them. No warranties here, just suggestions.  ;-)

  1. Do a base install of CentOS (or RHEL) 7.1, minimal packages.  I often suggest adding in Compatibility Libraries, however for this Logger 6.2 install, I just used the base install.  Ensure /tmp has at least 5GB of free space and /opt/arcsight has at least 50GB of usable space – I’d suggest going with at least:
    • /boot – 500MB
    • / – 8GB+
    • swap – 6GB+
    • /opt – 85GB+
  2. Ensure some needed (and helpful) utilities are installed, since the minimal distribution does not include these and unfortunately the Logger install script just assumes they are there .. if they aren’t, the install will eventually fail (such as no unzip binary).
    • yum install -y bind-utils pciutils tzdata zip unzip
    • Unlike my ESM install, for Logger I left SELinux enabled and things appear to be working alright, but your mileage may vary.  If in doubt, disable it and try again.  To disable, edit /etc/selinux/config and set the mode to "disabled" (or at least to "permissive")
    • Disable the netfilter firewall (again, at some point I’ll update this with the rules needed to leave netfilter enabled).
    • systemctl disable firewalld; systemctl mask firewalld
    • Install and configure NTP
    • yum install -y ntpdate ntp
    • (optionally edit /etc/ntp.conf to select the NTP servers you want your new Logger system to use)
    • systemctl enable ntpd; systemctl start ntpd
    • Edit /etc/rsyslog.conf and enable forwarding of syslog events to your friendly neighborhood syslog SmartConnector (optional, but otherwise how do you monitor your Logger installation?) .. you can typically just uncomment the log handling statements at the bottom of the file and fill in your syslog SmartConnector hostname or IP address. Note the forward statement I use only has a single at sign – indicating UDP versus TCP designated by two at signs:
    • $ActionQueueFileName fwdRule1 # unique name prefix for spool files
      $ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
      $ActionQueueSaveOnShutdown on # save messages to disk on shutdown
      $ActionQueueType LinkedList # run asynchronously
      $ActionResumeRetryCount -1 # infinite retries if host is down
      # remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
      #*.* @@remote-host:514
      *.* @10.10.10.5:514
    • Restart rsyslog after updating the conf file
    • systemctl restart rsyslog
    • Optionally add some packages that support troubleshooting or other non-Logger functions you run on the Logger server, such as system monitoring
    • yum install -y mailx tcpdump
  3. Update the maximum number of processes and open files our Logger software can use:
    Backup the current settings:
    cp /etc/security/limits.d/20-nproc.conf /etc/security/limits.d/20-nproc.conf.orig
    Drop in new config file (assuming you have copy/pasted the following settings into /root/20-nproc.conf):
    cp 20-nproc.conf /etc/security/limits.d/20-nproc.conf
    Contents of the /etc/security/limits.d/20-nproc.conf file becomes:
    # Default limit for number of user's processes to prevent
    # accidental fork bombs.
    # See rhbz #432903 for reasoning.
    * soft nproc 10240
    * hard nproc 10240
    * soft nofile 65536
    * hard nofile 65536
    root soft nproc unlimited

    Reboot to enable the new settings.
  4. Add an unprivileged user “arcsight” to own the application and run as:
    groupadd -g 1000 arcsight
    useradd -u 1000 -g 1000 -d /home/arcsight -m -c "ArcSight" arcsight
    passwd arcsight
  5. Ensure the *parent* directory for the Logger software exists. The standard location for installation of ArcSight products is /opt/arcsight, so for example, we're going to install our Logger software at /opt/arcsight/logger.
    cd /opt
    mkdir /opt/arcsight
  6. Run the Logger installation binary as “root” user
    • ./ArcSight-logger-6.2.0.7633.0.bin
  7. After the installation script completes successfully, you should be able to log in to the console via a web browser at https://<hostname>
    Default username “admin” with default password “password”. You’ll be forced to change the admin password on login.
  8. If you are going to install any SmartConnectors on the system hosting your Logger, check out my post regarding required libraries for CentOS and RedHat, before you try to run the Linux SmartConnector install. This includes any Model Import Connectors (MIC) or forwarding connectors (SuperConnectors).

 

[Update 2016/03/11]: Starting with SmartConnector 7.1.7 (I think, might be a rev or two earlier), there are a couple more libraries that are needed to successfully install the SmartConnector on Linux. Include libXrender.i686 libXrender.x86_64 libgcc.i686 libgcc.x86_64
yum install libXrender.i686 libXrender.x86_64 libgcc.i686 libgcc.x86_64

These notes describe an installation of HP ArcSight Logger 6.0.1 on a CentOS 6.5 virtual machine.

For a test install of Logger 6, I built a CentOS vm with the following parameters:
Basic install from the CentOS 6.5 Minimum ISO
1 CPU with 2 cores
4GB memory
80GB virtual disk
1 bridged network adapter
Disk partition sizes:
root fs 6GB, swap 4GB, /home 2GB, /opt/arcsight 50GB, /archive 10GB, free space approximately 15GB

As soon as the system was up, I commented out the archive filesystem (will be re-mounted under the /opt/arcsight/logger directory)
vi /etc/fstab

Installed the bind-utils package so I could use dig and friends, then did a full yum update:
yum install bind-utils ntp
yum update

This turns the system into CentOS 6.6, but that’s still a supported system for Logger, so all’s good.

Next we prepare the system for Logger software install by adding a user and changing some of the system configuration.

Add a non-root user to own and run the Logger application:
groupadd -g 1000 arcsight
useradd -u 1000 -g 1000 -d /home/arcsight -m -c "ArcSight" arcsight
passwd arcsight

Install libraries that Logger depends on:
yum install glibc.i686 libX11.i686 libXext.i686 libXi.i686 libXtst.i686
yum install zip unzip

Update the maximum number of processes and open files our Logger processes can have:
cp 90-nproc.conf /etc/security/limits.d/90-nproc.conf

Contents of the /etc/security/limits.d/90-nproc.conf file becomes:
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
*          soft    nproc     10240
*          hard    nproc     10240
*          soft    nofile    65536
*          hard    nofile    65536
root       soft    nproc     unlimited

Turn off services we don’t need and turn on the ones we do need. Later we will write some iptables rules so we can turn the firewall back on when we’re done.

chkconfig iptables off
service iptables stop
chkconfig iscsi off
service iscsi stop
chkconfig iscsid off
service iscsid stop
ntpdate name-of-ntp-server-you-trust
chkconfig ntpd on
service ntpd start

All of these steps are packaged up here in centos-setup.shl:
groupadd -g 1000 arcsight
useradd -u 1000 -g 1000 -d /home/arcsight -m -c "ArcSight" arcsight
passwd arcsight
cp 90-nproc.conf /etc/security/limits.d/90-nproc.conf
yum install glibc.i686 libX11.i686 libXext.i686 libXi.i686 libXtst.i686
yum install zip unzip
chkconfig iptables off
service iptables stop
chkconfig iscsi off
service iscsi stop
chkconfig iscsid off
service iscsid stop
ntpdate 0.centos.pool.ntp.org
chkconfig ntpd on
service ntpd start

It turns out that since we need 3+GB of free space in /tmp, I needed to extend the root filesystem .. I only allocated 2GB to begin with. Extend the root logical volume (lv_root) by adding 1,000 Physical Extents (4MB each):

Boot into rescue mode .. do NOT mount linux partitions, then drop to a shell

vgs
vgchange -a y vg_swlogger1
lvextend -l +1000 /dev/vg_swlogger1/lv_root
e2fsck -f /dev/vg_swlogger1/lv_root
resize2fs /dev/vg_swlogger1/lv_root

Now reboot and confirm there is at least 4GB of free space in /tmp. I could also have mounted a RAM filesystem, but this will do since I'm conserving memory on the host.

Upload the Logger installer binary and also the license file to the system into root’s home directory (or where you have space).

As root, run the Logger software install:
chmod u+x ArcSight-logger-6.0.0.7307.1.bin
./ArcSight-logger-6.0.0.7307.1.bin

Word of advice .. if doing this in a vm, run the install from the vm console, since the vm may be busy enough that a remote ssh session could get disconnected – and the install would not complete properly.

After the install, we should be able to open a browser by navigating to https://name-of-vm-here

Sign in as arcsight / password then navigate to the System Administration section to change the admin password.

Creating event replay files for ArcSight SmartConnectors

The ArcSight connector framework includes the capability to record event replay files from inbound event streams, regardless of the type of event data. This is enormously useful for development and testing of individual use cases, demonstrations and training. The following article is based on ArcSight SmartConnector version 7.0.7.

Events are replayed back to the target destinations by selecting some variety of previously recorded replay files using an ArcSight Test SmartConnector. Either multiple event files or a consolidated file can be used with the Test Alert connector. Since the Test Alert connector is a standard SmartConnector, multiple destinations can be configured, such as to Enterprise Security Manager (ESM) and/or Logger. As event files are replayed back into the target(s), the timestamp can be the original or can be overridden to the current time. This enables historical analysis as well as event data appropriate for any time sensitive rules or use cases.

Create Replay File Directly From Connector

1. Shut down Connector Service.
2. Open the .../current/user/agent/agent.properties file and add the following two properties:

agent.component.count=36
agent.component[35]=com.arcsight.agent.loadable._RecordComponent


3. Start Connector Service again

The Connector will start capturing events being sent to ESM, writing the output to .../current/replayagent/{agent-id}.sessions

4. Stop the Connector when you are done capturing events
5. Open the agent.properties file again and remove or comment out the lines added in step 2, then restart the connector again
6. Rename the .sessions file to .events and copy it to the …/current directory of the Testalert SmartConnector and start (or restart) the Test Alert SmartConnector.
7. Start Test Alert to replay the file.
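
A minimal command line sketch of steps 6 and 7 (the connector install paths are examples only):

cd /opt/agents/syslog-udp-1514/current
cp replayagent/*.sessions /opt/agents/testalert/current/sample-capture.events
# then start (or restart) the Test Alert SmartConnector so it picks up the new .events file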


Once the replay file or files are selected, the events can be replayed into the system at a specified Events Per Second (EPS) rate.


Optimizing the Collection and Replay

By default, the Test Alert SmartConnector will replay the recorded events with a current timestamp.  Where it is desirable to replay the events with the original timestamp, the connector can be configured through the normal connector reconfiguration (…/current/bin/runagentsetup.sh)

One of the disadvantages of this approach becomes apparent if you use it to collect sample event data that would not normally be directed to your ESM instance. To avoid this, the destination on the source SmartConnector can be set to a CSV file – enabling the ability to turn very large event feeds directly into .events files without using any ESM storage and processing capacity.

Replaying Events with Original Timestamps

To enable replay of the recorded events with their original timestamps, edit the .../current/user/agent/agent.properties file and add or uncomment the following lines:

agents[0].preserveagenttime=true
agents[0].preservedetecttime=true

When the Test Alert SmartConnector starts again, the events will be replayed with original timestamps.
 

Enabling Single Line Logging from pfSense Firewalls to ArcSight

While pfSense firewall offerings are based on the BSD packet filter (pf) functions and offer excellent performance and value, the current implementation my customers are running (2.1.5) outputs firewall rule logs in two syslog lines.  The skilled developers that maintain pfSense have indicated that in the 2.2 release they will likely move to a single line log format that is friendly to machine parsers, however I needed to provide some ArcSight parsing of this log data immediately.

Two approaches are possible here.  I could write a FlexConnector parser that expects two consecutive syslog lines and parses them into one event in order to pull out the relevant source and destination addresses, ports and device handling information (pass, block, reject, interface, protocol, flags, etc.).  The other option I discovered is what I implemented, since my customer was willing to allow a system patch to be applied to the firewalls that outputs log entries on a single line.  This will make a FlexConnector parser a breeze to create.

After looking through the pfSense support forums, I found this post by jimp (one of the developers) that provided a patch to enable single line logging.  To install the patch, you need to install the System Patch installation package on the firewall, since it’s not there by default.


From the Package Manager, install the System Patch package.  This enables downloading and automating the patch installation process.  This is a safe way to apply patches, since it will validate the patch can be applied and more importantly backed out if needed.

Select the Patches option in the System menu to start the patch application process. At this point you can (and probably should) just enter the URL to the patch rather than manually pasting the patch diff file into the text dialog box that comes up. Entering the URL (http://files.pfsense.org/jimp/patches/pf-log-oneline-option-2.1.1.diff) gives the ability to Fetch the contents of the patch (double click on the Fetch option).  The System Patch package will download the patch at the given URL and validate it.  If the validation is successful (it can apply and also roll back the patch), then the Test and Apply links should automatically show up.  If not, only the Test link will show up .. click on the Test link to validate the patch can be applied and backed out.  This should happen automatically so the Apply link shows up.  After application of the patch there will be a new option in the Status > System Logs > Settings pane that allows the raw logs to be written out on a single line.

Now that the logs are coming in on a single line, creation of an ArcSight FlexConnector is simple.  I first directed logs from the firewall to a standard syslog daemon SmartConnector (7.0.5) and enabled Preserve Raw so I can grab the original event and write the parser to pull out all the relevant fields I'm looking to correlate.

Using the ArcSight FlexConnector tool kit, I was able to get it to suggest a starting point for the regex patterns to parse the single-line logs.  Launch the regex GUI with arcsight regex in any connector's current/bin directory. I copied a FlexConnector config file from an iptables parser I wrote as a starting point .. and produced a quick and dirty mapping of the pfSense events with this parser in user/agent/flexagent/syslog/pfsense.subagent.sdkrfilereader.properties.  Note this is a work in progress, so no categorization map files have been provided yet.  That will come soon.
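
For anyone building their own parser, a heavily abbreviated sketch of the general shape of a regex FlexConnector properties file is shown below. The regex, token names and field mappings are illustrative only (this is not the actual pfSense parser), and the real regex depends entirely on the single-line format the patch produces:

# pfsense.subagent.sdkrfilereader.properties (illustrative fragment only)
regex=rule \\d+.*: (pass|block) (in|out) on (\\S+): (\\d+\\.\\d+\\.\\d+\\.\\d+)\\.(\\d+) > (\\d+\\.\\d+\\.\\d+\\.\\d+)\\.(\\d+):.*
token.count=7
token[0].name=Action
token[0].type=String
token[1].name=Direction
token[1].type=String
token[2].name=Interface
token[2].type=String
token[3].name=SourceAddress
token[3].type=IPAddress
token[4].name=SourcePort
token[4].type=Integer
token[5].name=DestinationAddress
token[5].type=IPAddress
token[6].name=DestinationPort
token[6].type=Integer
event.deviceAction=Action
event.deviceInboundInterface=Interface
event.sourceAddress=SourceAddress
event.sourcePort=SourcePort
event.destinationAddress=DestinationAddress
event.destinationPort=DestinationPort
event.deviceVendor=__stringConstant("pfSense")
event.deviceProduct=__stringConstant("pf")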

 

Building a Highly-Available ArcSight SmartConnector Cluster with Pacemaker

Cost Effective SmartConnector HA

This paper describes the use of open source clustering software used to build a low-cost, reliable, high availability environment on CentOS Linux in which to run both passive and active SmartConnectors, providing automated failure recovery.

Introduction

At the current time there is no inherent High-Availability capability for ArcSight SmartConnector installations other than HA management of connectors through multiple Connector Appliances. Once events have been acquired by a SmartConnector, the store-and-forward architecture provides a reliable event handling ecosystem, but the problem is what to do when a specific SmartConnector, or the system it is running on, fails. Traditionally customers would procure and employ hardware load balancers in front of SmartConnector Connector Appliances or Connector Concentrators, although that only really deals with passive connectors, such as syslog, SNMP or other listeners. Active connectors such as Windows, Database readers, etc. would require a manual failure recovery in order to restore the service of event collection. Although customers can use commercial clustering technology, such as Veritas Cluster Server, those tools can require substantial capital investment. This paper describes the use of open source clustering software to build a low-cost, reliable, high availability environment in which to run both passive and active SmartConnectors, providing active failure recovery and service continuance. This configuration is not endorsed or supported by HP Enterprise Security Products and is provided for informational purposes only.

This package includes documentation and scripts to setup a cluster from scratch in an automated manner. Access to cluster packages in CentOS or local customer provided repositories is needed by the setup scripts. Users of this package need to obtain a Linux binary of the HP ArcSight SmartConnector software – it is not included. The result of the included quickstart script will be a functional cluster with a syslog SmartConnector running and able to fail-over to a partner node in the case of primary node failure. The two cluster nodes must have at least two (2) network segments, although all traffic to/from the event sources can be on any customer network that is reachable via standard IPv4 routing – the cluster does not operate in-line but rather as a distinct IP node on the customer network.

Assuming a relatively fast connection to the Internet, or internal servers, for access to the CentOS software repositories, the quickstart script can complete the cluster setup in less than 15 minutes, but one should expect to take a day to review the cluster configuration, commands and proper operating procedures. Recovery from incorrect cluster commands or operations will almost assuredly require a cluster outage for re-configuration, resync or worse, backup/recovery. Given the relative low cost of simple 1U servers, it is strongly recommended that two pairs of nodes are used to create a test cluster and production cluster. Modest VMware or other virtual servers can be used to implement the test environment. TCP/UDP protocol ports that are used are specific to the unique cluster IP addresses, so there should not be any collisions – although care must be taken to choose unique multicast addresses for the cluster communication provided by corosync. This is not done automatically by the quickstart scripts.
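
As a rough illustration of the kinds of resources such a cluster manages (using the pcs shell on CentOS; the resource names, virtual IP and the connector LSB service script are assumptions, and the packaged quickstart script does the real configuration work):

pcs resource create connector-vip ocf:heartbeat:IPaddr2 ip=192.168.100.50 cidr_netmask=24 op monitor interval=30s
pcs resource create syslog-connector lsb:arc_syslog_connector op monitor interval=60s
pcs constraint colocation add syslog-connector with connector-vip INFINITY
pcs constraint order connector-vip then syslog-connector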

Feedback is welcome, both success stories and problems/bugs that are encountered, but users need to self-support any implementations. The current maintainer is Allen Pomeroy (a at pomeroy dot us)

Download the Whitepaper and Cluster setup scripts in this zip file: BuildingAHASmartConnectorCluster-2.0.6

Trade offs of the terrible syslog protocol

syslog is a very old message transmission protocol for sending system messages across a network. The original BSD syslog protocol was eventually documented in RFC 3164, with syslog transmission over UDP later standardized in RFC 5426. Some assumed updating the transmission to use TCP would make things better, and the IETF released RFC 6587 describing syslog over TCP. The problem is that this is inherently unreliable as well, since the application (syslog) has no mechanism to ensure that all messages transmitted were actually received, regardless of the network level transport protocol used to convey the messages.

Rainer Gerhards wrote a blog post on the unreliability of using plain TCP to transmit syslog event data.

An attempt to create a reliable syslog protocol is described in RFC 3195, the problem is that very few vendors have adopted that standard (BEEP).

There is a movement to find a more reliable system message delivery mechanism, as described in this Wikipedia post, however the problem is not only one of a technically feasible mechanism – one that relies on the application itself to validate and guarantee message integrity and completeness – but also one of widespread adoption by the tens or hundreds of millions of devices that send their system logs via syslog UDP.

That will take decades, so the best approach is to use mechanisms that can collect the event messages in native syslog UDP format as close to the generating source as possible, then use an application oriented framework to convey those messages to their destination. HP ArcSight SmartConnectors are a good way to accomplish this, with their application level event queuing on input, persistent caching output, compression, encryption, bandwidth throttling, filtering, aggregation and event QoS policies.

Libraries needed to install ArcSight SmartConnectors on RedHat Enterprise Linux and CentOS

[Update 2016/03/11]:

Starting with SmartConnector 7.1.7 (I think, might be a rev or two earlier), there are a couple more libraries that are needed to successfully install the SmartConnector on Linux. Include libXrender.i686 libXrender.x86_64 libgcc.i686 libgcc.x86_64
yum install libXrender.i686 libXrender.x86_64 libgcc.i686 libgcc.x86_64

[Update 2014/02/04]:
Simpler syntax for the install, using yum to do the automatic dependency processing, and .. an update for CentOS 6.4 64-bit. I believe RHEL 6.4 64-bit would also need these libraries. This worked for installing ArcSight SmartConnector 6.0.7 on CentOS 6.4 64-bit.

glibc.i686
libX11.i686
libXext.i686
libXi.i686
libXtst.i686

You could install like:
yum install glibc.i686 libX11.i686 libXext.i686 libXi.i686 libXtst.i686

[Original post]
While installing an ArcSight SmartConnector 6.0.2 on RedHat Enterprise Linux 6.2 64-bit, the initial install runs successfully, however the connector configuration never kicks off and the install just claims it is done. runagentsetup.sh fails with Error occurred during initialization of VM .. java/lang/NoClassDefFoundError: java/lang/Object .. obviously a pretty major Java error.

Turns out there are some additional libraries that need to be loaded in addition to what is listed in the documentation.

Some research leads me to believe there were some base libraries missing from the vanilla RHEL 6.2 64 bit install. The Basic Server + Desktop configuration was selected and all libraries referenced in the ESM 6.0c Install Guide and SmartConnector User Guide were installed. Tracing through all the dependencies produced this exact list of libraries that are required to be installed on RHEL 6.2 64 bit:

glibc-2.12-1.47.el6.i686.rpm
glibc-2.12-1.47.el6.x86_64.rpm
glibc-common-2.12-1.47.el6.x86_64.rpm
libX11-1.3-2.el6.i686.rpm
libX11-1.3-2.el6.x86_64.rpm
libX11-common-1.3-2.el6.noarch.rpm
libXau-1.0.5-1.el6.i686.rpm
libXau-1.0.5-1.el6.x86_64.rpm
libxcb-1.5-1.el6.i686.rpm
libxcb-1.5-1.el6.x86_64.rpm
libXext-1.1-3.el6.i686.rpm
libXext-1.1-3.el6.x86_64.rpm
libXi-1.3-3.el6.i686.rpm
libXi-1.3-3.el6.x86_64.rpm
libXtst-1.0.99.2-3.el6.i686.rpm
libXtst-1.0.99.2-3.el6.x86_64.rpm
nss-softokn-freebl-3.12.9-11.el6.i686.rpm
nss-softokn-freebl-3.12.9-11.el6.x86_64.rpm

Note the specific X libraries versus the generic list as shown in the connector user guide. What was interesting about these is that they did NOT all install when doing a wildcard rpm install, and additionally did not report any failures. After some trial and error, on my system it appears the 32 bit X libraries needed to be installed individually for some reason. You may want to use rpm -q -a to verify each of the libraries successfully installed. Once all the above libraries were installed, the connector installation worked as expected.

A tarball with the libraries can be downloaded from here.

Extract the libraries, change into the resulting directory, then you can use the following brute force syntax to determine which libraries are not installed and install them:

rpm -ivh `ls | while read rpmfile; do rpm -q \`basename $rpmfile .rpm\`; done | egrep 'not installed' | awk '{print $2}' | xargs`

How to replay syslog events using the performance testing feature of ArcSight SmartConnectors


[Updated 2016/08/22]

For testing ArcSight SmartConnector settings or Logger and Enterprise Security Manager (ESM) content, it is quite useful to be able to replay previously captured syslog events.  The built-in PerfTestSyslog class in ArcSight SmartConnectors makes this easy.

There are several ways to capture syslog traffic into a text file for use in replay scenarios. Below are some methods that I have used – they may not be the most elegant, but they get the job done.

Run a packet capture of syslog traffic

On the node that has inbound syslog traffic, run a packet capture using tcpdump:

tcpdump -nn -i eth0 -s0 -w syslog-traffic.pcap port 514

where eth0 is the network interface receiving the syslog traffic, syslog-traffic.pcap is the resulting pcap format output file of captured events and 514 is the port that syslog traffic is expected to be received.

After capturing a suitable number of events, import the pcap file into Wireshark, click on one of the syslog packets, right click and select Follow UDP stream. A decoded content window will appear where you can select Save As .. and dump it to a sample events file. Be sure to select ASCII versus Raw format. This will be your event input file to feed the PerfTestSyslog function of the ArcSight SmartConnector.

Replaying the syslog events using an ArcSight SmartConnector is controlled via the GUI that is displayed when the PerfTestSyslog class is launched. In my example, I have a Test Connector installed on my current host (RedHat Enterprise Linux, however Windows, Solaris or AIX would work just as well) in the /opt/agents/syslog-udp-1514 directory. This connector is up and running, listening on UDP 1514 for syslog messages, however we are also going to use it to feed the syslog events to the same connector. Just think of it as two separate unrelated processes, since you could just as easily use this to feed the syslog events to another host somewhere on the network.

cd /opt/agents/syslog-udp-1514/current/bin
./arcsight agent runjava com.arcsight.agent.loadable._PerfTestSyslog -H 127.0.0.1 -P 1514 -f ~arcsight/udp.txt -x 50

In this example, we are launching the connector framework (./arcsight) and telling the PerfTestSyslog class to read the ~arcsight/udp.txt file (our previously saved syslog events captured with tcpdump) and send them to Host 127.0.0.1 on Port 1514. The last argument is interesting – it configures a slider allowing the user to dynamically increase the Events Per Second (EPS) rate up to a maximum of (in our case) 50 EPS.

A sample capture file has events that look like:

<190>Jun 27 2012 12:16:53: %PIX-6-106015: Deny TCP (no connection) from 10.50.215.102/15603 to 2.3.4.5/80 flags FIN ACK  on interface outside

You can also eliminate the original timestamp if you choose:

%PIX-6-106015: Deny TCP (no connection) from 10.50.215.102/15605 to 204.110.227.10/80 flags FIN ACK  on interface outside
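
If you would rather strip the leading priority and timestamp from an existing capture file than re-save it, a sed one-liner along these lines works against the sample format above (illustrative; adjust the expression to your capture format):

sed -E 's/^<[0-9]+>[A-Z][a-z]{2} [0-9]{1,2} [0-9]{4} [0-9:]{8}: //' ~arcsight/udp.txt > ~arcsight/udp-notime.txt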

The PerfTestSyslog class has a number of pretty useful options, including -m to randomize the Device Address. This is really good for faking events from multiple firewalls.

Various configuration options exist on both the receiving SmartConnector (that is listening on UDP 1514) and the transmitting program, including the ability to keep the original timestamp intact or replace it with current time. This is especially useful for testing new content or performing historical analysis on previously saved event data where the original timestamp is needed.

Update:

For situations where you would like to run this without a GUI, you can add the -n option to start with no GUI.  In that case, although the rate is no longer dynamic, you do need to specify a starting event rate, otherwise it appears the default is 0 .. i.e. no events will be sent.  Instead of only specifying -x for the max rate, also specify the starting rate with -r

./arcsight agent runjava com.arcsight.agent.loadable._PerfTestSyslog -H 127.0.0.1 -P 1514 -f ~arcsight/udp.txt -n -r 50 -x 50

See also: Common ArcSight Command Line Operations

Malware Investigation Tools and Notes

Investigating possible malware involves both detection and identification phases. Here are some notes regarding the tools I commonly use for these two phases .. note this is intended to be a living document so may change as I learn of new resources or as older resources become stale or no longer very useful.

WARNING: Links shown below may lead to sites with active malware. Do not navigate to any site or link unless you know what you are doing.

Detection

Tools like HP TippingPoint IPS do a good job of detecting vulnerabilities (versus exploits) and also use vulnerability research and lighthouse sensors across the world to confirm infected systems (by IP) and sites (by domain).

Research

Both Google and Scumware have good domain and URL status reporting data.  URL shortening services are notorious for masking domains that have become infected, although there may be a large percentage of legitimate sites to which they refer. An example is the WordPress site wp.me:

http://www.google.com/safebrowsing/diagnostic?site=wp.me

http://www.scumware.org/report/wp.me

 Broad industry trends and general knowledge of attacks, outbreaks and other relevant news can be found on various blog sites:

hp.com/go/hpsrblog

 

How to reset the enable password on a Cisco ASA 5505

How to reset the enable password on an ASA 5505:

The following procedure worked for me to reset the enable password.

Connect to serial port – typically 9600,8,N,1.  On my MacBook Pro, I use a Keyspan USB-Serial adapter, so my command line is:

screen /dev/tty.USA19Hfd13P1.1 9600,8

You can eventually use <ctrl-A><ctrl-\> to kill the screen session.

Power on the device.
When it prompts to interrupt boot sequence, do so (press ESC).

It should prompt

rommon #0>

Type in:
rommon #0> confreg

Should show something like:

Current Configuration Register: 0x00000001
Configuration Summary:
boot default image from Flash

Do you wish to change this configuration? y/n [n]:

Press n (don’t change)

We can have the ASA boot a default config with no password by setting register flags 0x41, so do this:

rommon #2> confreg 0x41
rommon #2> reboot

You can now log in since the password has been removed (press <return> at the password prompt).  Be sure to set the enable password with:

config t
enable password new-password-here
config-register 0x1
wr

Ensure you either use the config-register command or interrupt the boot sequence again and reset the boot flags back to 0x1, otherwise the boot loader will continue to boot the default configuration – ignoring your configuration.

 

Unix, Linux and Mac OS X Notes

Here’s some notable command syntax I use. You can also select the Notes category and you’ll get more specific topics such as Linux LVM and Mac OS X commands.

rsyslog options

Forward syslog events to external host via UDP:
– edit /etc/rsyslog.conf .. add a stanza like the example at the end of the file .. a single @ = UDP forward, @@ = TCP forward

$WorkDirectory /var/lib/rsyslog # where to place spool files
$ActionQueueFileName fwdRule1 # unique name prefix for spool files
$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
$ActionQueueType LinkedList # run asynchronously
$ActionResumeRetryCount -1 # infinite retries if host is down
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
*.* @10.0.0.45:514

– restart the rsyslog daemon
systemctl restart rsyslog.service
or
service rsyslog restart

Mac OS X syslog to remote syslog server

Forward syslog events on Mac OS X 10.11 to external syslog server via UDP or TCP:
– edit /etc/syslog.conf .. add a line at the end of the file .. a single @ = UDP forward, @@ = TCP forward

*.* @10.0.0.45:514
# remote host is: name or ip:port, e.g. 10.0.0.45:514, port optional

– restart the OS X syslog daemon
sudo launchctl unload /System/Library/LaunchDaemons/com.apple.syslogd.plist
sudo launchctl load /System/Library/LaunchDaemons/com.apple.syslogd.plist

Write ISO image to USB on Mac

– plug in USB to Mac
– lookup disk number
sudo diskutil list
– unmount the USB
sudo diskutil unmountDisk /dev/disk2
– copy ISO image to USB
sudo dd if=CentOS.iso of=/dev/disk2

NIC MAC change

Changing MAC address of NIC
– RedHat stores this in: /etc/sysconfig
networking/devices/ifcfg-eth?
networking/profiles/default/ifcfg-eth?
hwconf
You need to edit the hwaddr in /etc/sysconfig/hwconf and HWADDR in the other locations (some are links).
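
For example, the relevant lines look something like this (the MAC address is illustrative):

# /etc/sysconfig/networking/devices/ifcfg-eth0
HWADDR=00:16:3E:AA:BB:CC
# /etc/sysconfig/hwconf
hwaddr: 00:16:3e:aa:bb:cc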

ssh tunneling of syslog traffic

– Example SSH configuration for tunneling a syslog TCP stream from a remote server back to a local node:

The remote node has a TCP client process (rsyslog) running; we want it to write to a local TCP port (15514/tcp) on the remote node, and have that port forwarded back to the local node we initiated the ssh connection from, where a syslog daemon is listening on port 1514/tcp:

Remote node rsyslog.conf:
@@localhost:15514

Event flow is through ssh on the remote node, listening on 15514/tcp and forwarding to the local node via ssh tunnel launched on the local node:
$ ssh -R 15514:localhost:1514 remotehostusername@remote.hostname.domain

To complete the picture, we probably want some sort of process on the local node to detect when the ssh connection has been lost and (1) re-establish the ssh connection, (2) restart rsyslog on the remote host to re-establish the connection from the remote rsyslog daemon to the ssh listener on port 15514/tcp.
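
A minimal sketch of such a watchdog loop (assumptions: ssh keys are in place for passwordless login, and it is acceptable to restart the remote rsyslog whenever the tunnel is re-established):

while true; do
    # (re)establish the reverse tunnel; -f backgrounds ssh once the forward is up
    ssh -f -N -o ExitOnForwardFailure=yes -o ServerAliveInterval=30 \
        -R 15514:localhost:1514 remotehostusername@remote.hostname.domain
    # kick the remote rsyslog so it reconnects to the fresh 15514/tcp listener
    ssh remotehostusername@remote.hostname.domain 'service rsyslog restart'
    # poll until the tunnel process disappears, then loop and re-establish it
    while pgrep -f '15514:localhost:1514' > /dev/null; do sleep 30; done
done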

YUM Software Repository

– Manually add DVD location/repository by:

35.3.1.2. Using a Red Hat Enterprise Linux Installation DVD as a Software Repository

You can use a Red Hat Enterprise Linux installation DVD as a software repository, either in the form of a physical disc or in the form of an ISO image file.

1. Create a mount point for the repository:
mkdir -p /path/to/repo

Where /path/to/repo is a location for the repository, for example, /mnt/repo. Mount the DVD on the mount point that you just created. If you are using a physical disc, you need to know the device name of your DVD drive. You can find the names of any CD or DVD drives on your system with the command cat /proc/sys/dev/cdrom/info. The first CD or DVD drive on the system is typically named sr0. When you know the device name, mount the DVD:
mount -r -t iso9660 /dev/device_name /path/to/repo
For example: mount -r -t iso9660 /dev/sr0 /mnt/repo

If you are using an ISO image file of a disc, mount the image file like this:
mount -r -t iso9660 -o loop /path/to/image/file.iso /path/to/repo
For example: mount -r -o loop /home/root/Downloads/RHEL6-Server-i386-DVD.iso /mnt/repo

Note that you can only mount an image file if the storage device that holds the image file is itself mounted. For example, if the image file is stored on a hard drive that is not mounted automatically when the system boots, you must mount the hard drive before you mount an image file stored on that hard drive. Consider a hard drive named /dev/sdb that is not automatically mounted at boot time and which has an image file stored in a directory named Downloads on its first partition:

mkdir /mnt/temp
mount /dev/sdb1 /mnt/temp
mkdir /mnt/repo
mount -r -t iso9660 -o loop /mnt/temp/Downloads/RHEL6-Server-i386-DVD.iso /mnt/repo

2. Create a new repo file in the /etc/yum.repos.d/ directory:
The name of the file is not important, as long as it ends in .repo. For example, dvd.repo is an obvious choice. Choose a name for the repo file and open it as a new file with the vi text editor. For example:

vi /etc/yum.repos.d/dvd.repo

[dvd]
baseurl=file:///mnt/repo/Server
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

The name of the repository is specified in square brackets — in this example, [dvd]. The name is not important, but you should choose something that is meaningful and recognizable. The line that specifies the baseurl should contain the path to the mount point that you created previously, suffixed with /Server for a Red Hat Enterprise Linux server installation DVD, or with /Client for a Red Hat Enterprise Linux client installation DVD. NOTE: After installing or upgrading software from the DVD, delete the repo file that you created to get updates from the online sources.
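
Once the repo file is in place, a quick way to verify and use it (the repo id below matches the [dvd] example above):

yum clean all
yum repolist
yum --disablerepo='*' --enablerepo=dvd install package-name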

IP Networking

– Manually add an IPv4 alias to an interface:
ip addr add 192.168.0.30/24 dev eth4
– Manually remove that IPv4 alias from the interface (note the subnet mask):
ip addr del 192.168.0.30/32 dev eth4
– Manually add a route for a specific host:
route add -host 45.56.119.201 gw 10.20.1.5
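
The legacy route command needs the net-tools package; on systems that only ship iproute2, the equivalent host route (assuming the same gateway) would be roughly:

ip route add 45.56.119.201/32 via 10.20.1.5
ip route del 45.56.119.201/32 via 10.20.1.5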

pcap files

– Split a large pcap file using editcap, a command-line tool that ships with Wireshark:
editcap -c 10000 infile.pcap outfile.pcap
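
editcap can also split by time rather than packet count (here, one output file per hour of capture), and mergecap from the same Wireshark install can stitch the pieces back together; roughly:

editcap -i 3600 infile.pcap outfile.pcap
mergecap -w merged.pcap outfile_*.pcap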

tcpdump options

Display only packets with SYN flag set (for host 10.10.1.1 and NOT port 80):
tcpdump 'host 10.10.1.1  &&  tcp[13]&0x02 = 2  &&  !port 80'
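
The same filter can be written with pcap's symbolic flag names; the first variant matches SYN set (possibly with other flags), the second only packets where SYN is the sole flag set:

tcpdump 'host 10.10.1.1  &&  tcp[tcpflags] & tcp-syn != 0  &&  !port 80'
tcpdump 'host 10.10.1.1  &&  tcp[tcpflags] = tcp-syn  &&  !port 80'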

Mac OS X (10.7)

– enable the ipfw firewall and load a ruleset from /etc/firewall.conf:
sudo /usr/sbin/sysctl -w net.inet.ip.fw.enable=1
sudo /sbin/ipfw -q /etc/firewall.conf
– change the MAC address of en0:
sudo ifconfig en0 lladdr 00:1e:c2:0f:86:10
– add and remove an IPv4 alias on en1:
sudo ifconfig en1 alias 192.168.0.10 netmask 255.255.255.0
sudo ifconfig en1 -alias 192.168.0.10
– add a static route to 10.2.1.0/24 via gateway 10.3.1.1:
sudo route add -net 10.2.1.0/24 10.3.1.1

rpm commands:

List files in an rpm file
rpm -qlp package-name.rpm

List files associated with an already installed package
rpm --query --filesbypkg package-name
How do I find out what rpm provides a file?
yum whatprovides '*bin/grep'
This returns the package that supplies the file. The repoquery tool (in the yum-utils package) is faster, provides more output, and can also perform other queries such as listing package contents, dependencies, and reverse dependencies.
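
A few repoquery invocations along those lines (assuming yum-utils is installed):

repoquery -l package-name              # list files in the package
repoquery --requires package-name      # what the package depends on
repoquery --whatrequires package-name  # reverse dependencies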

sed commands:

Remove specific patterns (delete or remove blank lines):
sed '/^$/d'
sed command matching multiple line pattern (a single log line got split into two lines, the second line beginning with a space):
cat syslog3.txt | sed 'N;s/\n / /' > syslog3a.txt
– this matches the newline (\n) plus the space at the beginning of the next line and replaces them with a single space, re-joining the record
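
A quick way to see the join in action on a fabricated two-line record:

printf 'Oct  7 09:00:01 host app: start of message\n continuation of the message\n' | sed 'N;s/\n / /'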

awk commands:

Print out key=value pair (KVP) tokens, treating the space as the record separator:
awk /SRC=/ RS=" "
Print out source IP for all iptables entries that contain the keyword recent:
cat /var/log/iptables.log | egrep recent | awk /SRC=/ RS=" " | sort | uniq
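
A fabricated iptables-style line shows what the RS=" " trick pulls out:

echo 'kernel: IN=eth0 OUT= SRC=192.168.1.50 DST=10.0.0.45 PROTO=TCP' | awk /SRC=/ RS=" "
# prints: SRC=192.168.1.50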
Sum column one of the input and divide by NR (awk's running count of input lines) to get the average, here of the packet length field:
./packet_parser analyzer_data.pcap | awk '{print $5}' | sed -e 's/length=//g' | awk 'BEGIN {sum=0} { sum+=$1 } END { print sum/NR }'
Find the number of tabs per line – used to sanity check tab-delimited input files:
awk -F$'\t' '{print NF-1;}' file | sort -u

sort by some mid-line column

I wanted to sort by the sub-facility name inside the dovecot messages, and found that sort's default behavior of splitting on space-delimited columns handles this.

sort -k6 refers to the sixth column with the default delimiter as space.
sort -tx -k1.20,1.25 is an alternative, where ‘x’ is a delimiter character that does not appear anywhere in the line, and character position 20 is the start of the sort key and character position 25 is the end of the sort key.

This sorts by the sixth column (the dovecot sub-facility):
$ sort -k6 dovecot.txt
Oct 7 09:09:31 server1 dovecot: auth: mysql: Connected to 10.30.132.15 (db1)
Oct 7 09:34:03 server1 dovecot: auth: sql(user1@example.com,10.30.132.15): Password mismatch
Oct 7 09:33:36 server1 dovecot: auth: sql(someuser@example.com,10.30.132.15): unknown user
Oct 7 09:15:27 server1 dovecot: imap(user1@example.com): Disconnected for inactivity bytes=946/215256
Oct 7 09:21:11 server1 dovecot: imap(user2@example2.com): Disconnected: Logged out bytes=120/12718
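
One gotcha worth noting: -k6 starts the sort key at field six but runs to the end of the line; to restrict the key to that single field, give the stop position as well:

sort -k6,6 dovecot.txt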

dos2unix equivalent with tr

tr -d '\15\32' < windows-file.csv > unix-file.csv

Fedora 16 biosdevname

– Fedora 16 includes a package called “biosdevname” that sets up strange network port names (p3p1 versus eth0) .. since I don’t particularly care which PCI slot my ethernet adapter(s) are in, remove this nonsense by:

yum erase biosdevname

– to take back full control of the network interfaces, edit /etc/sysconfig/network-scripts/ifcfg-eth? as needed

– remove NetworkManager and enable the legacy network service:

yum erase NetworkManager
chkconfig network on
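
With NetworkManager gone, the interface configuration lives entirely in the ifcfg file; a minimal static-address sketch (interface name and addresses are placeholders):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
IPADDR=192.168.0.20
NETMASK=255.255.255.0
GATEWAY=192.168.0.1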

Securing Apache web servers

Great article by Pete Freitag on Securing Apache Web Servers
(20 ways to Secure your Apache Configuration)

Here are 20 things you can do to make your apache configuration more secure.

Disclaimer: The thing about security is that there are no guarantees or absolutes. These suggestions should make your server a bit tighter, but don’t think your server is necessarily secure after following these suggestions.

Additionally some of these suggestions may decrease performance, or cause problems due to your environment. It is up to you to determine if any of the changes I suggest are not compatible with your requirements. In other words proceed at your own risk.

First, make sure you’ve installed latest security patches

There is no sense in putting locks on the windows, if your door is wide open. As such, if you’re not patched up there isn’t really much point in continuing any longer on this list.

Hide the Apache Version number, and other sensitive information.

By default many Apache installations tell the world what version of Apache you’re running, what operating system/version you’re running, and even what Apache Modules are installed on the server. Attackers can use this information to their advantage when performing an attack. It also sends the message that you have left most defaults alone.

There are two directives that you need to add, or edit in your httpd.conf file:

ServerSignature Off
ServerTokens Prod

The ServerSignature line appears at the bottom of pages generated by Apache, such as 404 pages, directory listings, etc.

The ServerTokens directive is used to determine what Apache will put in the Server HTTP response header. By setting it to Prod it sets the HTTP response header as follows:

Server: Apache

If you’re super paranoid you could change this to something other than “Apache” by editing the source code, or by using mod_security (see below).


9/11 Tribute Movement

Few human-made disasters in recent history have had a larger impact on the United States, North America, and indeed the Western world than the attacks on the World Trade Center towers. I encourage my friends and acquaintances to visit the 9/11 Tribute Movement website and pledge their memorial activity.

Remembrance of those who lost their lives and those who gave their lives in the line of duty is an important act that we all should honor.

We will be doing our most difficult cross-country mountain bike ride and will observe a minute of silence at the top, in honor of those who lost their lives and in support of the survivors.


Visit www.911day.org and tell the nation what you’ll be doing on 9/11/11.

Update: At 6,398′ on Moose Mountain, we gave a moment of silence.
(Photo: Moose Mountain 9/11 Tribute)

90 Day Plan for New IT Security Managers

You’ve just taken over as an information security director, manager, or architect at an organization. Either the role is brand new to the organization or your predecessor has moved on for some reason. Now what? The following outlines steps that have proven effective (and avoids what has proven ineffective) at getting traction and generating results within the first three months. Once a few small successes are under your belt, you can build momentum to help the business grow faster, reduce the risk to its success, or both.

Now what do we do?

Apply a tried-and-true multi-phase approach: assess the current state, determine the desired target state, perform a gap analysis, and implement improvements in priority order. The prioritized gap closures become the deliverables of the IT security program. There will be trade-offs driven by constraints such as political challenges, funding limits, and the difficulty of changing corporate culture. The plan you build with the business gives you the ammunition needed to persuade your stakeholders of the value in the changes you’ll be proposing.

1. Understand the Current Environment

For a manager or enterprise architect to determine where to start, the current state must be known. This is essentially an inventory of the IT security controls, people, and processes in place. The inventory is used to identify immediately known risks and gaps against relevant security control frameworks. Those known risks and gaps give us a starting point for understanding where impacts to the business may originate.

Take the opportunity to socialize foundational security concepts with your new business owners and solicit their input. What security-related concerns do they have? If there has been any articulation of Strengths, Weaknesses, Opportunities, and Threats (SWOT), reviewing it can also point to weaknesses or threats that are indicative of missing controls. In discussions with your new constituents, ask the infrastructure managers what security-related concerns keep them awake at night – there is usually some awareness, even if they don’t know how to move forward. Keep in mind most organizations will want a pragmatic approach rather than an ivory-tower, perfect target state.

Some simple questions can quickly give you a picture of the state of security controls. For example, in organizations I’ve worked with, the network administrators could not provide a complete “layer three” diagram – one that shows all the network segments and how they hang together. It wasn’t that they didn’t want to; the diagrams simply didn’t exist. With over 1,500 network nodes across two data centers and two office complexes, the network group carried the topology and configuration “in their heads”. The obvious weaknesses and threats: succession planning and disaster recovery become nearly impossible, security transparency is poor, and nearly any change to the environment carries more risk than necessary.
