Wednesday, November 5, 2014

WHOIS caching proxy in perl

Monday, July 28, 2014

Sonic Wall Status - Nagios plugin

Saturday, May 31, 2014

basic .xsession ....


set -x

XLIB_SKIP_ARGB_VISUALS=1
export XLIB_SKIP_ARGB_VISUALS

# (sleep 300 && /usr/X11R6/bin/xset dpms 3600 4800 10800 ) &

# PATH=/usr/local/kde4/bin:$PATH
# export PATH

#---------------

/usr/local/bin/unclutter &
/usr/local/bin/xset r rate 750 &
# /usr/local/bin/xset dpms 3200 4800 6400 &
(sleep 300 && /usr/local/bin/xset dpms 3600 4800 10800 ) &
/usr/local/bin/xset b 5 &
/usr/local/bin/xset s 7200 &
/usr/local/bin/mouseclock -fn lucidasans-18 &
/usr/local/bin/redshift -m vidmode -l -33:153 &
/usr/local/bin/xrandr -o right
/usr/local/bin/synergys
# /usr/local/bin/firefox -splash &

/home/zero/bin/autocopy-mike-blender.sh > /home/zero/bin/autocopy-mike-blender.log 2>&1 &

xmodmap -e "keycode 223 = XF86Standby"
xmodmap -e "keycode 160 = XF86AudioMute"
xmodmap -e "keycode 174 = XF86AudioLowerVolume"
xmodmap -e "keycode 176 = XF86AudioRaiseVolume"
xmodmap -e "keycode 162 = XF86AudioPlay XF86AudioPause"
xmodmap -e "keycode 164 = XF86AudioStop"
xmodmap -e "keycode 144 = XF86AudioPrev"
xmodmap -e "keycode 153 = XF86AudioNext"
xmodmap -e "keycode 178 = XF86WWW"
xmodmap -e "keycode 236 = XF86Mail"
xmodmap -e "keycode 229 = XF86Search"
xmodmap -e "keycode 230 = XF86Go"

# sleep 60

mixer vol 66
# mixer igain 0
# mixer ogain 99
mixer mic 0
mixer rec 55

#---------------

# afterstep ||
startxfce4 ||
startkde ||
xterm

Tuesday, April 15, 2014

Wireshark: massive capture



 [root@unix /usr/local/etc/rc.d]# ls -laF TS-*
 -rwxr-xr-x  1 root  wheel  530 Apr 19 13:50 TS-INTERNET.sh*

eg.

 [root@unix /usr/local/etc/rc.d]# cat TS-INTERNET.sh
 #!/bin/sh

 # path to the capture binary - used by the sanity check below
 DAEMONPATH=/usr/local/bin/tshark

 case $1 in

    'start')
        if [ -x "$DAEMONPATH" ]
        then
            mkdir -p /capture/INTERNET &&
            cd /capture/INTERNET &&
            ifconfig bce2 up monitor &&
            nice --15 $DAEMONPATH -q -n -t ad -B 8 -i bce2 -w INTERNET -b filesize:256000 -b files:1500 &
            /bin/echo " capturing INTERNET\c"
        fi
        ;;

    'stop')
        /usr/bin/pkill -f -u 0 "tshark .* INTERNET"
        ;;

    *)
        /bin/echo "Usage: $0 [start|stop]"
        exit 1
        ;;
 esac



the stop function actually works, so you can use these scripts to kill off a particular capture as well
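
for example, killing off just the INTERNET capture would look like this (a hypothetical invocation of the script above):

 [root@unix /usr/local/etc/rc.d]# ./TS-INTERNET.sh stop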


and I have also set up a crontab that will find the oldest capture file in each directory and remove it


 [root@unix /usr/local/etc/rc.d]# crontab -l
 #---------------------------------------------------------------
 #
 1 * * * * /capture/clean-me.sh > /capture/clean-me.log 2>&1
 #
 #---------------------------------------------------------------



 [root@unix /usr/local/etc/rc.d]# cat /capture/clean-me.sh
 #!/bin/sh
 CAPTURE=/capture
 CAPTURE_DIRS="INTERNET"



 CAPTURE_PERCENT=`/bin/df -k $CAPTURE|/usr/bin/perl -ne 'print "$1" if (m!\s+(\d+)%\s+\S+!s)'`

 cd $CAPTURE || exit 39


 if [ "$CAPTURE_PERCENT" -ge 100 ]
 then
    echo -n "CLEANING $CAPTURE because disk usage is at ${CAPTURE_PERCENT}% ... "
    date

    for i in $CAPTURE_DIRS
    do
        /usr/local/bin/gfind $i -type f -printf "%T@ %p\n" |
            /usr/bin/sort -nr|
                /usr/bin/tail -1|
                    /usr/bin/awk '{print $2}'|
                        /usr/bin/xargs /bin/rm -v
    done

 else
    echo -n "NOT cleaning $CAPTURE because disk usage is only ${CAPTURE_PERCENT}% ... "
    date
 fi

Solaris 11: Installing GCC


I'm pretty sure this is required first, so that the pkg tool can find its way out to the internet to the http://pkg.oracle.com/ site

 exc04-sv02d:~ # set|fgrep -i proxy
 ftp_proxy=http://167.123.1.1:3128/
 http_proxy=http://167.123.1.1:3128/



 exc04-sv02d:/ # pkg install pkg:/developer/gcc-3@3.4.3-0.175.0.0.0.2.537


I've chosen gcc-3 because the two libgcc shared libraries that were already installed seem to have been compiled with GCC 3

 /usr/sfw/lib/libgcc_s.so.1
 /usr/sfw/lib/sparcv9/libgcc_s.so.1



 exc04-sv02d:~ # strings -a /usr/sfw/lib/libgcc_s.so.1|fgrep GCC:|head
 GCC: (GNU) 3.4.3 (csl-sol210-3_4-20050802)
 GCC: (GNU) 3.4.3 (csl-sol210-3_4-20050802)
 GCC: (GNU) 3.4.3 (csl-sol210-3_4-20050802)
 GCC: (GNU) 3.4.3 (csl-sol210-3_4-20050802)
 GCC: (GNU) 3.4.3 (csl-sol210-3_4-20050802)




Solaris: creating a service

copy an existing XML manifest file as a basis for your new service


 exc04-sv02d:/ # cp /lib/svc/manifest/network/http-apache22.xml /jails/etc/httpd22-primary-chroot.xml


I've just cleaned it up a little - nothing too serious. I've updated the 'boot' script referenced in the .xml file as shown further down. The "exec_method" entries are the important bits - that's where you configure start/stop/restart (see the fragment below). I also removed all the property stuff that would allow you to choose 64-bit apache and/or a couple of other options - have a look at the original for more info.
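
for illustration only (this fragment is not copied from the actual manifest), the exec_method entries end up looking something like this, pointing at the wrapper script shown below:

 <exec_method type="method" name="start"
     exec="/jails/etc/httpd22-primary-chroot start"
     timeout_seconds="60" />

 <exec_method type="method" name="stop"
     exec="/jails/etc/httpd22-primary-chroot stop"
     timeout_seconds="60" />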


#----------------------------------------------------------------------

#----------------------------------------------------------------------


exc04-sv02d:/jails/etc # cat /jails/etc/httpd22-primary-chroot
#!/sbin/sh

CHROOT_DIR=/jails/web-jail
CHROOT_CMD=/usr/sbin/chroot

# TEMPORARY!!!
CHROOT_DIR=""
CHROOT_CMD=""

exec $CHROOT_CMD $CHROOT_DIR /web/server/etc/httpd22-primary $*

#----------------------------------------------------------------------

exc04-sv02d:/jails/etc # cat /web/server/etc/httpd22-primary
#!/bin/sh
#
# $Header: /web/server/etc/RCS/httpd22-primary,v 1.1 2012/05/08 00:38:04 mwd Exp mwd $
#
# Production WWW Server Startup/Shutdown script
#
# set -x
#

CHROOT="/usr/sbin/chroot /jail/web"
CHROOT=
HTTPD_BASE=/web/server/httpd22
HTTPD_CONF=$HTTPD_BASE/conf/main-primary.conf
HTTPD_CONF=$HTTPD_BASE/conf/httpd.conf
PERL58=/web/server/perl589/bin/perl

USAGE="$0 [start|stop|restart|configtest|perltest]"

PERL58MODULES="
ModPerl::Registry DBI Mysql Stat::lsMode
Net::DNS Net::SNMP
Crypt::CBC Crypt::Blowfish
WAT::AdminDBURI WAT::AuthLDAP WAT::AuthzDBURI
WAT::AuthzLDAP WAT::Breadcrumb WAT::CookieHome
WAT::Filter WAT::Index WAT::HeaderDeny WAT::Lib WAT::ModifyFilter
"

ulimit -S -n 2048
ulimit -H -n 4096
ulimit -s 16384


. /etc/TIMEZONE

export TZ CMASK LC_COLLATE LC_CTYPE LC_MESSAGES LC_MONETARY LC_NUMERIC LC_TIME

case $1 in
        start)
                /bin/echo " httpd22 production \c"
                $CHROOT $HTTPD_BASE/bin/httpd -k start -f $HTTPD_CONF
                ;;
        stop)
                /bin/echo " stopping httpd22 production \c"
                $CHROOT $HTTPD_BASE/bin/httpd -k stop -f $HTTPD_CONF
                ;;
        restart)
                $CHROOT $HTTPD_BASE/bin/httpd -e Info -k restart -f $HTTPD_CONF
                ;;
        configtest|test)
                $CHROOT $PERL58 -MWAT::Filter -e 'print "WAT::Filter installed, testing httpd22 config\n"' &&
                $CHROOT $HTTPD_BASE/bin/httpd -t -f $HTTPD_CONF
                ;;
        perltest)
                $CHROOT $PERL58 -v &&
                for i in $PERL58MODULES
                do
                    export i
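                    # NB: $i is exported so the perl one-liner below can read
                    # the module name back out of the environment via $ENV{i}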
                    $CHROOT $PERL58 -M$i -e 'print "$ENV{'i'}\n"' || exit 2
                done &&
                /bin/echo &&
                /bin/echo All required perl modules and dependants installed
                ;;
        *)
                /bin/echo $USAGE >&2; exit 1
                ;;
esac

exit 0

#----------------------------------------------------------------------

exc04-sv02d:/jails/etc # svccfg import /jails/etc/httpd22-primary-chroot.xml

exc04-sv02d:/jails/etc # svcs -a |fgrep http
disabled       Mar_30   svc:/network/http:apache22
disabled       11:20:19 svc:/network/http:httpd22-primary-chroot

exc04-sv02d:/jails/etc # svcadm enable svc:/network/http:httpd22-primary-chroot

exc04-sv02d:/jails/etc # svcs -a |fgrep http
disabled       Mar_30   svc:/network/http:apache22
online         11:21:21 svc:/network/http:httpd22-primary-chroot


#----------------------------------------------------------------------

PS: it's not actually chrooted yet, and this isn't tested, but it's pretty much a copy of what I did on exc01-sp-p

Perl Script: finding free IP addresses on a C-class network

I need 10 IP addresses in a row for the new webster - don't worry, I haven't stolen any yet .. but there are ten in a row free from 167.123.1.240
to 167.123.1.249

but this simple one-liner can be used to get an idea of what is available (keeping in mind that an address that doesn't answer ping isn't necessarily unassigned)

wal:/ # perl -e 'foreach $i ( 1 .. 255 ) { $x=qx{ping 167.123.1.$i 1}; print $x if ($x =~ m!no answer!sig)}'
no answer from 167.123.1.12
no answer from 167.123.1.16
no answer from 167.123.1.21
no answer from 167.123.1.22
no answer from 167.123.1.24
no answer from 167.123.1.28
no answer from 167.123.1.29
no answer from 167.123.1.30
no answer from 167.123.1.31
no answer from 167.123.1.34
no answer from 167.123.1.36
no answer from 167.123.1.37
no answer from 167.123.1.38
no answer from 167.123.1.41
no answer from 167.123.1.42
no answer from 167.123.1.43
no answer from 167.123.1.44
no answer from 167.123.1.45
no answer from 167.123.1.46
no answer from 167.123.1.47
no answer from 167.123.1.49
no answer from 167.123.1.50
no answer from 167.123.1.53


etc...

Solaris: Simple Zone Creation

very basic zone setup - I'll probably tweak the zonecfg values later



 exc06-sp-a:/ # zfs list

 exc06-sp-a:/ # zfs create data/zones/z-webdav

 exc06-sp-a:/ # zfs get all data/zones/z-webdav

 exc06-sp-a:/ # zfs set mountpoint=/zones/z-webdav data/zones/z-webdav





 exc06-sp-a:/zones # cat z-webdav.conf
 create -b

 set zonepath=/zones/z-webdav

 set autoboot=true

 set ip-type=shared

 add fs
    set dir=/cdrom
    set special=/cdrom
    set type=lofs
    add options ro
    add options nodevices
 end

 add net
    set address=167.123.1.240
    set physical=e1000g1
 end

 add attr
    set name=comment
    set type=string
    set value="Primary WebDAV Zone"
 end

 add rctl
    set name=zone.max-swap
    add value (priv=privileged,limit=8589934592,action=deny)
 end

 add dedicated-cpu
    set ncpus=4
 end





 exc06-sp-a:/zones # zonecfg -z z-webdav -f z-webdav.conf


 exc06-sp-a:/zones # chmod 700 z-webdav


 exc06-sp-a:/zones # zoneadm -z z-webdav install


 exc06-sp-a:/zones # zoneadm -z z-webdav boot


 exc06-sp-a:/zones # zlogin -C z-webdav



Select a Locale

  0. English (C - 7-bit ASCII)
  1. Australia (ISO8859-1)
  2. English, Australia (UTF-8)
  3. English, New Zealand (UTF-8)
  4. New Zealand (ISO8859-1)
  5. Go Back to Previous Screen

Please make a choice (0 - 5), or press h or ? for help: 1



What type of terminal are you using?
 1) ANSI Standard CRT
 2) DEC VT52
 3) DEC VT100
 ......
Type the number of your choice and press Return: 3





 Host name z-webdav.detir.qld.gov.au




 Time Zone [X] Australia
           [X] Queensland - most locations

Solaris: DTrace 556

I'm not even sure if this is right, but I've stolen a dtrace script (if I ever find the source again, I'll list it) and adapted it to print the payload
written to the FD of a newly opened connection - ie. the upstream HTTP request, in this case GET, POST etc...

wal:/web/server/squid/etc # cat dtrace-connect.d
#!/usr/sbin/dtrace -qs

syscall::connect:entry
/execname == "squid"/
{
/* s = ( int ) copyin(arg1);*/
myfd = arg0;
socks = (struct sockaddr*) copyin(arg1, arg2);
hport = (uint_t) socks->sa_data[0];
lport = (uint_t) socks->sa_data[1];
hport <<= 8;
port = hport + lport;
printf("%s: (%d) %d.%d.%d.%d:%d\n", execname, myfd, socks->sa_data[2], socks->sa_data[3], socks->sa_data[4], socks->sa_data[5], port);
}

syscall::write:entry
/ arg0 == myfd /
{
printf("%s", copyinstr(arg1)); /* correct use of arg1 */
}

/* end end end */



run it with

wal:/web/server/squid/etc # ./dtrace-connect.d >& DTRACE.OUT &
to watch it in action and see when it happens and what file descriptor is involved, take a copy of squid's idle connections

wal:/ # lsof -p `cat /web/squid/logs/squid.pid ` | fgrep IDLE > A


now the idea is that when the idle connections increase, you watch it with netstat, as that has less of a hit on the system

wal:/web/server/squid/etc # netstat -an|fgrep IDLE|wc -l

when it ticks over, you make a new file

wal:/ # lsof -p `cat /web/squid/logs/squid.pid ` | fgrep IDLE > B


wal:/web/server/squid/etc # diff A B
6a7
> squid 25357 nobody 161u IPv4 0x30009c90900 0t0 TCP *:* (IDLE)



and in this case the new one was FD 161... the number before the 'IPv4' is the FD (the 'u' suffix means it's open for read and write)


(the formatting of the output has been cleaned up below)

wal:/web/server/squid/etc # fgrep -a -A5 '(161)' DTRACE.OUT |less -S

squid: (161) 167.123.240.35:3128
POST http://www.vision6.com.au/api/xmlrpcserver.php?version=1.2&v6_session=1b1deddddb2a55571de542ef612b5fd2 HTTP/1.1
Content-Type: text/xml
User-Agent: Apache XML RPC 3.0 (Jakarta Commons httpclient Transport)
Host: www.vision6.com.au
Content-Length: isLoggedIn



wal:/web/server/squid/etc 32694 # diff A B
39a40
> squid 25357 nobody 334u IPv4 0x6001b534f80 0t0 TCP *:* (IDLE)




wal:/web/server/squid/etc 32693 # fgrep -A2 fd=334 DTRACE.OUT
squid: (fd=334) 167.123.240.35:3128
CONNECT corptech.service-now.com:443 HTTP/1.1



and it's corptech ...


bah ... I'm out


Solaris: Sorting packages by size

Subject: FYI: sorting solaris packages by package size


I couldn't think of a built-in way to sort packages in solaris by their installation size, so....

z-csq:~ # pkginfo -l | perl -ne '$/=undef; foreach $t (split(m!\n\n!, $_ . <>)) { print "$3:$1:$2\n" if ( $t =~ m!PKGINST:\s+(\S+).*?NAME:\s+(.*?)\n.*?(\d+)\s+blocks!s) }' | sort -rn | head
479568:SUNWmlib:mediaLib - Shared Libraries
395134:SUNWacroread:Acrobat Reader for PDF files
291553:SFWtetex:tetex - A complete TeX distribution for UNIX
265384:SFWxmacs:XEmacs - text editor
190831:SUNWj5rt:JDK 5.0 Runtime Env. (1.5.0_07)
148773:SFWgcc34:gcc-3.4.2 - GNU Compiler Collection
122094:SUNWj3rt:J2SDK 1.4 runtime environment
116086:SFWemacs:emacs - GNU Emacs Editor
104325:SUNWgcc:gcc - The GNU C compiler
99222:SFWqt:qt - C++ GUI framework


in terms of saving space, acroread and emacs could be removed without impacting much of the OS, I would imagine

Solaris: /etc/project

after getting annoyed at the constant 256 file descriptor limit being exceeded on evil (which I eventually figured out was SSHD), I tried setting up the
/etc/project file as below, to try and get *everything* on the system to have at least a 1024 soft limit and a 2048 hard limit


exc04-sv02d:~ # cat /etc/project
system:0::::process.max-file-descriptor=(basic,1024,deny),(privileged,2048,deny)
user.root:1::::process.max-file-descriptor=(basic,1024,deny),(privileged,2048,deny)
noproject:2::::process.max-file-descriptor=(basic,1024,deny),(privileged,2048,deny)
default:3::::process.max-file-descriptor=(basic,1024,deny),(privileged,2048,deny)
group.staff:10::::process.max-file-descriptor=(basic,1024,deny),(privileged,2048,deny)



and rebooted the system, but there are still processes that have 256 file descriptors

eg.

exc04-sv02d:~ # plimit 439
439:    /usr/sbin/cron
   resource              current         maximum
  time(seconds)         unlimited       unlimited
  file(blocks)          unlimited       unlimited
  data(kbytes)          unlimited       unlimited
  stack(kbytes)         8192            unlimited
  coredump(blocks)      unlimited       unlimited
  nofiles(descriptors)  256             65536
  vmemory(kbytes)       unlimited       unlimited


project is "system"

exc04-sv02d:~ # /usr/bin/ps -o project,user,pid,ppid,pcpu,pmem,nice -o vsz=VIRTUAL -o rss=RESIDENT -o tty,stime,time,args -eaf|fgrep cron
  system  root   439     1  0.0  0.1 20    9272     6136 ?         Feb_12       00:00 /usr/sbin/cron



"system" is configured how I expect

exc04-sv02d:~ # cat /etc/project
system:0::::process.max-file-descriptor=(basic,1024,deny),(privileged,2048,deny)
user.root:1::::process.max-file-descriptor=(basic,1024,deny),(privileged,2048,deny)
noproject:2::::process.max-file-descriptor=(basic,1024,deny),(privileged,2048,deny)
default:3::::process.max-file-descriptor=(basic,1024,deny),(privileged,2048,deny)
group.staff:10::::process.max-file-descriptor=(basic,1024,deny),(privileged,2048,deny)




am I going about this the wrong way? or are the defaults that would previously have been set in the /etc/system file *still* being used here?
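
for what it's worth, prctl(1) will show the resource controls a live process actually ended up with, which is worth comparing against the plimit output above:

exc04-sv02d:~ # prctl -n process.max-file-descriptor -i process 439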

Solaris: using LD_PRELOAD


exc04-sv02d:~ # smbclient -W DPWSERVICES -U michael.dean  //10.255.227.42/SHARE
Enter michael.dean's password:
Domain=[DPWSERVICES] OS=[Windows 5.1] Server=[Windows 2000 LAN Manager]
ld.so.1: smbclient: fatal: relocation error: file /usr/lib/libreadline.so.5: symbol tgetent: referenced symbol not found
Killed



the man page for tgetent shows it comes from the curses library:

exc04-sv02d:~  # man tgetent

        cc [ flag ... ] file ... -lcurses [ library ... ]



so preloading libcurses satisfies the missing symbol:

exc04-sv02d:~ # LD_PRELOAD=/usr/lib/libcurses.so smbclient -W DPWSERVICES -U michael.dean  //10.255.227.42/SHARE



SVN: sort of

I've long been looking for a way to version files in place, in lieu of doing it manually with RCS like I do now (I've come up with something better in the meantime - I have NOT posted it yet)


here are some basic instructions on how to do that with Subversion


RESETTING EVERYTHING TO SCRATCH (only need to do this to start again)


# on the repository server
z-webdav:/ # \rm -rf /web/webdav/svn/unixsvn/hosts/z-webdav/

z-webdav:/ # svnadmin create /web/webdav/svn/unixsvn/hosts/z-webdav

z-webdav:/ # chown -R svn:svn /web/webdav/svn/unixsvn/hosts/


# on the client system
clientserver:/ # \rm -rf /.svn



and then run this script I wrote

clientserver:/ # ./SVNME.sh



you'll probably need to update the username and password to your own details - actually it would be better to have a specific SVN user added to AD for
doing this

and note that this was done on a solaris 10 box, so it might not be quite the same on other boxes, and you might need to use full path names to find the
various utils you need ... eg. gfind

z-webdav:/ # cat SVNME.sh
#!/bin/sh

USER=username
PASS=password
AUTH="--username $USER --password $PASS"
TARGETHOST=`hostname | sed 's/\..*$//g'`
SVNHOST=unixsvn
MINSIZE=+64c
MINUTES="-mmin +60 -and -mmin -1840"
MYEXIT=0

# test to see if we have a hostname repository
svn list $AUTH http://$SVNHOST/hosts/$TARGETHOST || MYEXIT=23

cd /

INCLUDE_LIST="etc web/server*/etc web/server*/httpd-*.*.*/conf usr/local/etc"
ENDOFLINE="(-edit|\-|\.|,v|\.(bak|db|disabled|example|sample|old|OUT|[0-9]+))$"
PATHPREFIX="/(\.[a-z]|_|\.?(JUNK|OLD|OUT)|CVS|FFF|RCS|RUN|core|svc/repository-boot)"
PATHSUFFIX="-old/|etc/mnttab"
EXCLUDE_LIST="$ENDOFLINE|$PATHPREFIX|$PATHSUFFIX"

EXPANDED_INCLUDE_LIST=`/bin/ls -d $INCLUDE_LIST`


case $MYEXIT in
   0)
     echo TARGET REPOSITORY SEEMS TO EXIST - BEGIN PROCESSING
     ;;
   23)
     echo TARGET REPOSITORY DOES NOT SEEM TO EXIST
     exit $MYEXIT
     ;;
esac

set -x

[ -d /.svn ] ||
    svn co $AUTH http://$SVNHOST/hosts/$TARGETHOST .


for i in $EXPANDED_INCLUDE_LIST
do
echo ============= $i ================
    for j in `[ -d "$i" ] && gfind $i -type f $MINUTES -size $MINSIZE | egrep -v -e "$EXCLUDE_LIST"`
    do
        svn ls $AUTH http://$SVNHOST/hosts/$TARGETHOST/$j ||
            svn add --parents $j
    done
done

svn commit $AUTH -m "auto commit"

exit $MYEXIT


FreeBSD: unable to remove packages?

usslab02:/ # pkg_deinstall squid-3.2.3_1
[Updating the pkgdb in /var/db/pkg ... - 176 packages found (-0 +1) . done]
---> Deinstalling 'squid-3.2.3_1'
./+DEINSTALL: Permission denied
pkg_delete: deinstall script returned error status
** Listing the failed packages (-:ignored / *:skipped / !:failed)
! squid-3.2.3_1 (pkg_delete failed)



in this case the 'noexec' mount option on the file system stops things from being run from that file system


usslab02:/var/db/pkg # mount | fgrep /var/db/pkg
zroot/var/db/pkg on /var/db/pkg (zfs, local, noexec, nosuid, nfsv4acls)




usslab02:/var/db/pkg # mv squid-3.2.3_1 /usr/



usslab02:/var/db/pkg # ln -s /usr/squid-3.2.3_1




now the system will let you remove it
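
an alternative (untested here) would be to temporarily toggle the ZFS exec property rather than moving the directory around:

usslab02:/ # zfs set exec=on zroot/var/db/pkg
usslab02:/ # pkg_deinstall squid-3.2.3_1
usslab02:/ # zfs set exec=off zroot/var/db/pkg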

Perl Script: Squid Throughput for Nagios









this nagios script accumulates data on a daily basis and checks for

* users downloading too many URLs and BYTES
* client IP addresses downloading too many URLs and BYTES

and, on a per-scan basis, checks for

* users that are consuming too much bandwidth
* client IP addresses that are consuming too much bandwidth
* web sites that are consuming too much bandwidth


the script that does the magic on NS1 is

wal:/web/server/etc/squid-throughput.pl


it will store its persistent data in the directory below, which must be writable by 'nagios':

wal:/var/squid-throughput/


the .store files in this directory keep track of the running statistics for the day ... the only thing making this data per-day is a crontab entry
that deletes the .store files once a day

wal:/ # crontab -l|fgrep squid-t
0 0 * * * /web/server/etc/squid-throughput-cleanup.sh > /var/adm/squid-throughput-cleanup.log 2>&1


it does a simple rm of the *.store files to reset the data

the .offset files in this directory are permanently kept and are used by 'logtail' to record where it was up to in the access.log file the last time it
read it

the files in wal:/var/squid-throughput/ are prefixed with the bizarre-looking dutyunixfloggerdetirqldgovau or dutyunixmonsterdetirqldgovau. this is
done to make sure that flogger and monster each have their own set of data to work with, but more to make sure that they don't stomp on each other, as
might happen if they were sharing the .store files (which, to be honest, should work as a shared set of data, since the script apparently uses routines
that are lock friendly - but that might also change the logic of the script, so I'll leave it this inefficient way for the time being) ..

these prefix strings are chosen because the admin email address would seem to be the only nagios MACRO I can use that is set differently on each server
running nagios - I've set it to be unique in each nagios config file ie.

flogger:/nagios/local # fgrep admin_e nagios.cfg
admin_email=dutyunix@flogger.detir.qld.gov.au

monster:/nagios/local # fgrep admin_e nagios.cfg
admin_email=dutyunix@monster.detir.qld.gov.au

dutyunix on these boxes is aliased (via /etc/aliases) to be sent to us anyway

a "better" solution would be using a nagios macro, which i can't seem to find, that was simply the name of the server running nagios instead of the
$ADMINEMAIL$ macro that I am using...

monster:/nagios/etc # cat checkcommands/check_nrpe_squid_throughput.cfg
define command{
command_name check_nrpe_squid_throughput
command_line $USER1$/check_nrpe2 -t 60 -H $HOSTADDRESS$ -c check_squid_throughput -a $ARG1$ $ARG2$ $ADMINEMAIL$
}


parameters for the checks are set in the service file

monster:/nagios/etc # cat ./services/UNIX/squid-throughput.cfg
define hostgroup{
hostgroup_name squid-throughput
alias Squid Throughput
}

define servicegroup{
servicegroup_name squid-throughput
alias Squid Throughput
}

define service{
use unix-service
name squid-throughput
hostgroups squid-throughput
servicegroups squid-throughput
service_description Squid Throughput
check_command check_nrpe_squid_throughput!50,4096,40000,4000000!25,2048,20000,1000000
}


first set of values are the critical parameters. ie. 50,4096,40000,4000000

50 == 50 urls a second
4096 == 4096 KBytes per second (4MB a sec)
40000 == 40 thousand URLs downloaded (per day)
4000000 == 4 million KBs of data downloaded (4000 MBytes or 4Gbytes)

second set of values are the warning parameters ie. 25,2048,20000,1000000

25 == 25 urls per second
2048 == 2048 KBytes per second (2Mbytes per second)
20000 == 20 thousand URLs downloaded (per day)
1000000 == 1 million Kbytes (1000 MBytes or 1GByte)


association of the service with WAL is set in the host file

monster:/nagios/etc # fgrep squid-throughput hosts/unix/wal.cfg
hostgroups cpu-percent-public,..[snipped]...,squid-throughput



in NRPE, the script configuration is

wal:/ # fgrep squid-t /usr/local/nagios/etc/nrpe.cfg
command[check_squid_throughput]=/web/server/etc/squid-throughput.pl --critical $ARG1$ --warning $ARG2$ --hostname $ARG3$ --persistent


(it would probably make more sense to call the --hostname option --servername ... but anyway)
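
for testing, the check can also be run by hand the same way NRPE would invoke it - arguments as per the service definition above, with the admin_email value standing in for $ADMINEMAIL$:

wal:/ # /web/server/etc/squid-throughput.pl --critical 50,4096,40000,4000000 --warning 25,2048,20000,1000000 --hostname dutyunix@flogger.detir.qld.gov.au --persistent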

Thursday, April 10, 2014

Apache: crude rolling over of logs


if the apache in question doesn't have automated rolling over of logs you can easily do this ...


sourcery:/web/server/httpd/logs # mv svn-access_log svn-access_log-20130909


or if you don't particularly care to keep the logs you can do ...


sourcery:/web/server/httpd/logs # > error_log



and then you need to restart the apache

sourcery:/web/server/httpd/logs # /etc/init.d/httpd restart




(usually) a new logfile will get created by apache, but I went to http://svn/ and logged in, and that kicked off the new logfile (sourcery has one of
those create-log-files-as-needed apache logging mechanisms)


sourcery:/web/server/httpd/logs # bzip2 -9 svn-access_log-20130909 &



I also do an 'lsof' before the bzip2, just to make sure nothing still has that log file open
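
something along these lines (the grep pattern is just illustrative):

sourcery:/web/server/httpd/logs # lsof | fgrep svn-access_log-20130909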

Solaris: turn on pam debugging on solaris with


z-webdav:~ # touch /etc/pam_debug
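
the extra PAM debug output goes out via syslog, so you'll probably also need an auth.debug entry in /etc/syslog.conf to actually capture it - something along these lines (the log file name is illustrative, and the whitespace needs to be tabs):

auth.debug      /var/adm/pam_debug_log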

Solaris: PAM doesn't like comments in /etc/passwd


if you are having trouble logging into a solaris box with SSHD, make sure that the password file isn't corrupted


z-webdav:~ # pwck

###-------------
Too many/few fields

rsync: copying to different users


instead of the "-a" option to rsync

change it to

rsync -rltD


the "-a" option is shorthand for -rlptgoD; dropping the -p, -o and -g bits means rsync no longer tries to set permissions, owners and groups on the destination

using dtrace

I tracked down what was doing it with DTrace, and it's this process that is (ultimately) firing off the iscsiadm commands that are failing


horatio:/ # ps -wwax|fgrep 10371
10371 ? R 155:11 java -Xmx150m -Xss192k -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -client -Dxvmserver=true -Dfile.encoding=utf-8
-Djava.endorsed.dirs=/usr/lib/cacao/lib/endorsed -classpath
/usr/share/lib/jdmk/jdmkrt.jar:/usr/share/lib/jdmk/jmxremote_optional.jar:/usr/lib/cacao/lib/cacao_cacao.jar:/usr/lib/cacao/lib/cacao_j5core.jar:/usr/li
b/cacao/lib/bcprov-jdk14.jar -Djavax.management.builder.initial=com.sun.jdmk.JdmkMBeanServerBuilder -Dcacao.print.status=true
-Dcacao.config.dir=/etc/cacao/instances/scn-agent -Dcacao.monitoring.mode=smf
-Dcom.sun.cacao.ssl.keystore.password.file=/etc/cacao/instances/scn-agent/security/password com.sun.cacao.container.impl.ContainerPrivate

I ran this dtrace script to log the binary name, PID and PPID of every process that gets run


dtrace -qn 'syscall::exec*:return { printf("%Y (pid=%d) (ppid=%d) %s\n",walltimestamp,pid,ppid,curpsinfo->pr_psargs); }' > OUT


and piped the output to a file, and then went through the rigmarole of tracing backwards through it until I found the real process that was still running


horatio:/home/sysmwd 2230 # fgrep " iscsiadm " OUT
2013 Sep 30 14:38:50 (pid=2079) (ppid=2078) iscsiadm list initiator-node
2013 Sep 30 14:41:50 (pid=3059) (ppid=3058) iscsiadm list initiator-node


and eventually if you follow the PPID chain, it turns out to be that cacao java thing

using netcat


in unix-master:/STORE I'm copying in all the configs that I think we will need when setting up the clones


the idea being:

cloned base operating system + altered config from /STORE == new system


meaning to say that hopefully everything required to replicate the services will actually be on unix-master:/STORE, and the base operating system should
contain all the software we need .. if not, tell me and I'll add it to the master


well, that's the theory, I should have thought of it sooner before we started making too many clones


basic recipe to do this is on UNIX-MASTER

unix-master:/STORE # mkdir WAL

unix-master:/STORE/WAL # nc6 -l -p 6666 | tar xvpf -



and then on the system you want the important bits from eg.

wal:/ # gtar cf - etc/. usr/local/etc/. web/server/etc/. web/server64/etc/. | nc6 -x unix-master 6666


there are probably other directories to copy


NOTE WELL:

you NEED to be in the ROOT directory and you most certainly NEED to put things like

web/server/etc/.

instead of, say

web/server/etc

as the latter example will *ONLY* copy the symlink eg. in this case

wal:/ 18851 # dir /web/server/etc
lrwxrwxrwx 1 root root 21 Dec 15 2006 /web/server/etc -> /global/u1/etc/server/

so USE the trailing slash plus the dot /. as that will make sure the contents of the directory are copied regardless of whether it is a symlink


EXAMPLE 1

using BELUGA as a receiver and NARWHAL as the sender, CAT a file from one machine to another


a. set up the remote listener

 beluga:~/INCOMING # nc6 -l -p 9876 > newname.tar

"-l" says LISTEN
"-p port" is the port to listen on


b. netcat the file from the sending machine

 narwhal:~ #  nc6 -x beluga 9876 < dot.tar

"-x" makes it hang up after the transfer is done, ie. close down the receiver once the sender is done


c. use your file!

 beluga:~/INCOMING # tar tf newname.tar | wc
      27      27     307

ps. this will work with any arbitrary file - I used a .tar file in this example simply so I could run it through tar itself to demonstrate that the file
transferred properly. it is probably prudent to verify that your file transferred intact - "cksum" can help with that

 narwhal:~ # cksum dot.tar
 3708157944      1505280 dot.tar

 beluga:~/INCOMING # cksum newname.tar
 3708157944      1505280 newname.tar





EXAMPLE 2

using BELUGA as a receiver and NARWHAL as the sender, TAR a directory from one machine to another



a. make a new directory for your incoming tar explosion

 beluga:~ # mkdir INCOMING


b. set up the receiving end with a gtar

 beluga:~/INCOMING # nc6 -l -p 9876 | gtar xvf -


c. send your tarred up directory

 narwhal:~ # gtar cf - . |  nc6 -x beluga 9876


the "-x" says transfer the file then hang up


d. now use your files!

beluga:~/INCOMING 25260 # nc6 -l -p 9876 | gtar xvf -
nc6: using stream socket
./
./.lesshst
./.vimrc
./nc6-1.0/
./nc6-1.0/CREDITS
./nc6-1.0/bootstrap
./nc6-1.0/aclocal.m4
./nc6-1.0/intl/
./nc6-1.0/intl/dcngettext.o
./nc6-1.0/intl/osdep.c
./nc6-1.0/intl/ngettext.c
./nc6-1.0/intl/gettext.o
....snip....
./.aliases
./.cvsrc
./.netrc
./.epltidyrc
./dot.tar




just from basic testing, this would seem to be about 20% faster than doing an "scp -c blowfish"

Perl Script: basic SNMP trap-handler.pl

#!/usr/local/bin/perl -w
#
# Reference:
#   http://net-snmp.sourceforge.net/wiki/index.php/TUT:Configuring_snmptrapd
#
use strict;

## set slurp mode to read the input
# $/ = undef;

# First line in the input is the hostname
my $traphost = <STDIN>;
chomp $traphost;

# Second line in the input is the IP address port details
# which aren't totally useful so they aren't jammed into
# the output
my $trapip   = <STDIN>;
chomp $trapip;

my $traptext = '';

# Take the rest of the input and chop it up into pairs
# of "xzy = 1234" as usually the output is a set of pairs
# with the OID and the value of the OID
while (<STDIN>) {
    # skip over the uptime, it's just noise
    next if (m!DISMAN-EVENT-MIB::sysUpTimeInstance!soig);

    # this is noise too, it is in every trap
    next if (m!SNMPv2-MIB::snmpTrapOID.0.*SNMPv2-SMI::enterprises.3224.0.200!soig);

    if (m!^(\S+)\s+(.*)$!oig) {
        $traptext .= "$1 = $2\n";
    } else {
        $traptext .= $_;
    }
}

# Turn new lines in a "," as we want all the syslog output to go
# into one line
$traptext =~ s/\n/, /mg;

# chop off the last "," coz it looks stoopid
$traptext =~ s/,+\s*$//g;

system("/usr/bin/logger -t trap-handler '$traphost: $traptext'");

Perl Script: dig-to-hosts

FreeBSD: startup scripts


most scripts that get stuffed in /usr/local/etc/rc.d are "native" FreeBSD startup scripts, in that they have been written around the FreeBSD startup API
- however, if you don't use that recipe and just want a plain, traditional old startup script, then you need to make sure the script has a .sh extension
eg.

flogster:/usr/local/etc/rc.d # dir lat.sh
-rwxr-xr-x 1 root wheel 821 Feb 7 13:31 lat.sh*


yes, I couldn't get lat to start at boot-time until I renamed the startup script

flogster:/usr/local/etc/rc.d # mv lat lat.sh


failing this you've got to write them in this style....

from "man rc"


EXAMPLES
The following is a minimal rc.d/ style script. Most scripts require
little more than the following.

#!/bin/sh
#

# PROVIDE: foo
# REQUIRE: bar_service_required_to_precede_foo

. /etc/rc.subr

name="foo"
rcvar=`set_rcvar`
command="/usr/local/bin/foo"

load_rc_config $name
run_rc_command "$1"
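
and don't forget the matching enable knob in /etc/rc.conf, or the script still won't run at boot:

foo_enable="YES"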

FreeBSD: shared memory settings for squid with diskd

Shared mem settings on FreeBSD


turns out that some of the shared mem settings under FreeBSD are pretty useless by default


for instance: kern.ipc.msgssz is '8' by default, which means that whenever squid tried to talk to its diskd process it would want to be sending a lot
more than just 8 bytes at a time - hence when it can't, it barfs...


here are the amended settings - these are read-only so they require a reboot to go live

unix-master:~ # cat /boot/loader.conf
kern.ipc.msgssz=64
kern.ipc.msgtql=4096
kern.ipc.msgmnb=16384
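
after the reboot, a quick sanity check that the kernel actually picked the new values up:

unix-master:~ # sysctl kern.ipc.msgssz kern.ipc.msgtql kern.ipc.msgmnb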



I can only imagine that squid worked fine on UNX-MASTER for the first week or so that I had been using it because nothing was being written to or read
from the disk cache, hence there was no need for squid to speak to its diskd ...

hopefully squid is now stable


FreeBSD: simple ZFS configuration


FYI: zfs addition to unx-flogster



just a simple listing of what is done ... for my reference really


create data zpool

flogster:~ # zpool create data /dev/da1



create file system

flogster:~ # zfs create data/nagios



change filesystem mount point away from default

flogster:~ # zfs set mountpoint=/nagios data/nagios



get rid of mountpoint for data file system created by default

flogster:~ # zfs set mountpoint=none data




and make sure zfs is enabled for the running system

flogster:~ # fgrep zfs /etc/rc.conf
zfs_enable="YES"


and turn on lz4 compression on the pool, so all its file systems inherit it

unix:~ # zfs set compression=lz4 data

FreeBSD: ipfw: add_dyn_rule: Cannot allocate rule

this error means the dynamic rule table is full - bumping these limits in /etc/sysctl.conf sorts it out

flogster:~ # fgrep dyn /etc/sysctl.conf
net.inet.ip.fw.dyn_buckets=4096
net.inet.ip.fw.dyn_max=8192
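
you can keep an eye on how close you are getting to the limit with the dyn_count sysctl (from memory - check sysctl -da on your box):

flogster:~ # sysctl net.inet.ip.fw.dyn_count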

Solaris: ppriv



if you don't know which privilege you need, you can run the command under ppriv with the -D option and it will tell you

wal:/web/home/sysmwd $ ppriv -D -e cat /etc/shadow
cat[3329]: missing privilege "file_dac_read" (euid = 1073, syscall = 225) needed at ufs_iaccess+0x110
cat: cannot open /etc/shadow








FreeBSD: sysctl -da

sysctl -da



if you don't know what all those sysctls are for... the -d option might give you a hint ...

eg.

flogster:/ # sysctl -da|fgrep msg
kern.ipc.msgseg: Number of message segments
kern.ipc.msgssz: Size of a message segment
kern.ipc.msgtql: Maximum number of messages in the system
kern.ipc.msgmnb: Maximum number of bytes in a queue
kern.ipc.msgmni: Number of message queue identifiers
kern.ipc.msgmax: Maximum message size
kern.consmsgbuf_size: Console tty buffer size
kern.features.sysv_msg: System V message queues support
kern.msgbuf_show_timestamp: Show timestamp in msgbuf
kern.msgbufsize: Size of the kernel message buffer
kern.msgbuf_clear: Clear kernel message buffer
kern.msgbuf: Contents of kernel message buffer

Perl Script: squid-report.pl

unix:/ # cat /web/server/etc/squid-report.pl
#!/usr/local/bin/perl -w
#
# $Header: /web/server/etc/RCS/squid-report.pl,v 1.15 2012/01/20 00:23:37 mwd Exp mwd $
#

use strict;

use Getopt::Long;
use MIME::Entity;
use POSIX qw(strftime);

#----------------------------------------------------------------

die "does not take input from a terminal" if ( -t 0 );

#----------------------------------------------------------------

my $VERSION = substr(q$Revision: 1.15 $, 10);

#----------------------------------------------------------------

my $opt_debug       = 0;
my $opt_ranklimit   = 100;
my $opt_return      = ''; # return email address
my $opt_email       = ''; # to email address
my $opt_sep         = ','; # CSV separator
my $opt_csv         = 1;
my $opt_pdf         = 1;
my $opt_report      = 1;

# report options
my $opt_mimetypes   = 1;
my $opt_fileexts    = 1;
my $opt_users       = 1;
my $opt_websites    = 1;
my $opt_clientips   = 1;

GetOptions('report!'         => \$opt_report,
           'pdf!'            => \$opt_pdf,
           'csv!'            => \$opt_csv,
           'clientips!'      => \$opt_clientips,
           'websites!'       => \$opt_websites,
           'users!'          => \$opt_users,
           'fileexts!'       => \$opt_fileexts,
           'mimetypes!'      => \$opt_mimetypes,
           'debug|d+'        => \$opt_debug,
           'email=s'         => \$opt_email,
           'return=s'        => \$opt_return,
           'ranklimit=n'     => \$opt_ranklimit,
           );


#-counters-------------------------------------------------------

my $totalsize = 0;
my $totalurls = 0;

my %mimeurls  = ();
my %mimebytes = ();

my %fileexturls  = ();
my %fileextbytes = ();

my %userurls  = ();
my %userbytes = ();

my %siteurls  = ();
my %sitebytes = ();

my %clientipurls  = ();
my %clientipbytes = ();

#----------------------------------------------------------------

my $startdate = 0;
my $finishdate = 0;

#----------------------------------------------------------------

while(<>) {

    chomp;

    if (m!^(\d*?)\.\d+\s+\S+\s+(\S+)\s+\S+\s+(\d+)\s+\S+\s+(\S+)\s+(\S+)\s+\S+\s+(\S+)$!oig) {

        # NB we are only interested in the website name, so converting it to lower case doesn't matter
        my ($reqdate,$clientip,$size,$request,$username,$mimetype) = ($1, $2, $3, lc $4, lc $5, lc $6);

        $finishdate = $reqdate;
        $startdate = $finishdate if ( $startdate == 0 );

        print STDERR "$startdate = $finishdate\n" if ($opt_debug);

        $size = 0 if ($size < 0);

        my $website = $request || '';

        $website =~ s!^(http://[^/]+/).*$!$1!oig;
        if ( $website =~ s!:443$!/!oig ) {
            $website = "https://$website" if ( $website !~ m!^http!io );
        }

        # "my $x = $1 if (...)" is unreliable in perl, so set the default first
        my $fileext = '-';
        $fileext = $1 if ($request =~ m!//.*/.*?(\.[\w]+).?$!oig);

        # the captured extension includes the leading dot
        $fileext = '.html' if ($fileext eq '.htm');

        $mimeurls{$mimetype}      ||= 0;
        $mimebytes{$mimetype}     ||= 0;
        $fileexturls{$fileext}    ||= 0;
        $fileextbytes{$fileext}   ||= 0;
        $userurls{$username}      ||= 0;
        $userbytes{$username}     ||= 0;
        $clientipurls{$clientip}  ||= 0;
        $clientipbytes{$clientip} ||= 0;

        $totalurls++;
        $totalsize += $size;

        $siteurls{$website}++;
        $sitebytes{$website} += $size;

        $mimeurls{$mimetype}++;
        $mimebytes{$mimetype} += $size;

        $fileexturls{$fileext}++;
        $fileextbytes{$fileext} += $size;

        $userurls{$username}++;
        $userbytes{$username} += $size;

        $clientipurls{$clientip}++;
        $clientipbytes{$clientip} += $size;

    } else {
        print STDERR "INVALID SQUID LOGFILE LINE: $_\n";
    }
}

#----------------------------------------------------------------


#----------------------------------------------------------------


my @reportdata = ();
my @reportfilenames = ();
my $mainsummary = '';

if ($opt_websites) {

    # top sites by url count
    {
        my $csv = '';
        my $rank = 0;
        my $title = "TOP WEB SITES by URLs";
        $mainsummary .= sprintf "$title\n\n";
        $mainsummary .= sprintf "%6s  %8s      %s\n\n", 'Rank', 'URLs', 'Web Site';
        $csv .= sprintf "%s%s%s%s%s\n", 'Rank', $opt_sep, 'URLs', $opt_sep, 'Web Site';
        foreach my $key ( sort { $siteurls{$b} <=> $siteurls{$a} } keys %siteurls ) {
            $rank++;
            $mainsummary .= sprintf "%6d %9d      %s\n", $rank, $siteurls{$key}, $key;
            $csv .= sprintf "%d%s%s%s%s\n", $rank, $opt_sep, $siteurls{$key}, $opt_sep, $key;
            last if ($rank >= $opt_ranklimit);
        }
        push @reportfilenames, makecsvfilename($title,$startdate, $finishdate);
        push @reportdata, $csv;
    }

    $mainsummary .= sprintf "\n\n\n";

    #----------------------------------------------------------------

    # top sites by bytes downloaded
    {
        my $csv = '';
        my $rank = 0;
        my $title = "TOP WEB SITES by BYTEs";
        $mainsummary .= sprintf "$title\n\n";
        $mainsummary .= sprintf "%6s  %10s    %s\n\n", 'Rank', 'Bytes', 'Web Site';
        foreach my $key ( sort { $sitebytes{$b} <=> $sitebytes{$a} } keys %sitebytes ) {
            $rank++;
            $mainsummary .= sprintf "%6d   %s    %s\n", $rank, print_bytes($sitebytes{$key}), $key;
            $csv .= sprintf "%d%s%s%s%s\n", $rank, $opt_sep, print_bytes($sitebytes{$key}), $opt_sep, $key;
            last if ($rank >= $opt_ranklimit);
        }
        push @reportfilenames, makecsvfilename($title,$startdate, $finishdate);
        push @reportdata, $csv;
    }

    $mainsummary .= sprintf "\n\n\n";

}

#----------------------------------------------------------------

if ( $opt_mimetypes ) {
    # top mimetypes by url count
    {
        my $csv = '';
        my $rank = 0;
        my $title = "TOP MIME-TYPES by URLs";
        $mainsummary .= sprintf "$title\n\n";
        $mainsummary .= sprintf "%6s  %7s    %s\n\n", 'Rank', 'URLs', 'MIME-Type';
        foreach my $key (sort { $mimeurls{$b} <=> $mimeurls{$a} } keys %mimeurls) {
            $rank++;
            $mainsummary .= sprintf "%6d  %7d    %s\n", $rank, $mimeurls{$key}, $key;
            $csv .= sprintf "%d%s%s%s%s\n", $rank, $opt_sep, $mimeurls{$key}, $opt_sep, $key;
            last if ($rank >= $opt_ranklimit);
        }
        push @reportfilenames, makecsvfilename($title,$startdate, $finishdate);
        push @reportdata, $csv;
    }

    $mainsummary .= sprintf "\n\n\n";

    #----------------------------------------------------------------

    # top mimetypes by bytes downloaded
    {
        my $csv = '';
        my $rank = 0;
        my $title = "TOP MIME-TYPES by BYTES";
        $mainsummary .= sprintf "$title\n\n";
        $mainsummary .= sprintf "%6s  %9s    %s\n\n", 'Rank', 'Bytes', 'MIME-Type';
        foreach my $key (sort { $mimebytes{$b} <=> $mimebytes{$a} } keys %mimebytes) {
            $rank++;
            $mainsummary .= sprintf "%6d  %s    %s\n", $rank, print_bytes($mimebytes{$key}), $key;
            $csv .= sprintf "%d%s%s%s%s\n", $rank, $opt_sep, print_bytes($mimebytes{$key}), $opt_sep, $key;
            last if ($rank >= $opt_ranklimit);
        }
        push @reportfilenames, makecsvfilename($title,$startdate, $finishdate);
        push @reportdata, $csv;
    }

    $mainsummary .= sprintf "\n\n\n";

}

#----------------------------------------------------------------

if ( $opt_fileexts ) {

    # top file extensions by url count
    {
        my $csv = '';
        my $rank = 0;
        my $title = "TOP FILE EXTENSIONS by URLs";
        $mainsummary .= sprintf "$title\n\n";
        $mainsummary .= sprintf "%6s  %7s    %s\n\n", 'Rank', 'URLs', 'File Extension';
        foreach my $key (sort { $fileexturls{$b} <=> $fileexturls{$a} } keys %fileexturls) {
            $rank++;
            $mainsummary .= sprintf "%6d  %7d    %s\n", $rank, $fileexturls{$key}, $key;
            $csv .= sprintf "%d%s%s%s%s\n", $rank, $opt_sep, $fileexturls{$key}, $opt_sep, $key;
            last if ($rank >= $opt_ranklimit);
        }
        push @reportfilenames, makecsvfilename($title,$startdate, $finishdate);
        push @reportdata, $csv;
    }

    $mainsummary .= sprintf "\n\n\n";

    #----------------------------------------------------------------

    # top file extensions by bytes downloaded
    {
        my $csv = '';
        my $rank = 0;
        my $title = "TOP FILE EXTENSIONS by BYTES";
        $mainsummary .= sprintf "$title\n\n";
        $mainsummary .= sprintf "%6s  %9s    %s\n\n", 'Rank', 'Bytes', 'File Extension';
        foreach my $key (sort { $fileextbytes{$b} <=> $fileextbytes{$a} } keys %fileextbytes) {
            $rank++;
            $mainsummary .= sprintf "%6d  %s    %s\n", $rank, print_bytes($fileextbytes{$key}), $key;
            $csv .= sprintf "%d%s%s%s%s\n", $rank, $opt_sep, print_bytes($fileextbytes{$key}), $opt_sep, $key;
            last if ($rank >= $opt_ranklimit);
        }
        push @reportfilenames, makecsvfilename($title,$startdate, $finishdate);
        push @reportdata, $csv;
    }

    $mainsummary .= sprintf "\n\n\n";
}

#----------------------------------------------------------------
if ($opt_users) {
    # top usernames by url count
    {
        my $csv = '';
        my $rank = 0;
        my $title = "TOP USERS by URLs";
        $mainsummary .= sprintf "$title\n\n";
        $mainsummary .= sprintf "%6s  %7s    %s\n\n", 'Rank', 'URLs', 'Username';
        foreach my $key (sort { $userurls{$b} <=> $userurls{$a} } keys %userurls) {
            $rank++;
            $mainsummary .= sprintf "%6d  %7d    %s%s\n", $rank, $userurls{$key}, $key, ($key eq '-') ? ' (anonymous user)' : '';
            $csv .= sprintf "%d%s%s%s%s\n", $rank, $opt_sep, $userurls{$key}, $opt_sep, $key;
            last if ($rank >= $opt_ranklimit);
        }
        push @reportfilenames, makecsvfilename($title,$startdate, $finishdate);
        push @reportdata, $csv;
    }

    $mainsummary .= sprintf "\n\n\n";

    #----------------------------------------------------------------
    # top users by bytes downloaded
    {
        my $csv = '';
        my $rank = 0;
        my $title = "TOP USERS by BYTES";
        $mainsummary .= sprintf "$title\n\n";
        $mainsummary .= sprintf "%6s  %9s    %s\n\n", 'Rank', 'Bytes', 'Username';
        foreach my $key (sort { $userbytes{$b} <=> $userbytes{$a} } keys %userbytes) {
            $rank++;
            $mainsummary .= sprintf "%6d  %s    %s%s\n", $rank, print_bytes($userbytes{$key}), $key, ($key eq '-') ? ' (anonymous user)' : '';
            $csv .= sprintf "%d%s%s%s%s\n", $rank, $opt_sep, print_bytes($userbytes{$key}), $opt_sep, $key;
            last if ($rank >= $opt_ranklimit);
        }
        push @reportfilenames, makecsvfilename($title,$startdate, $finishdate);
        push @reportdata, $csv;
    }

    $mainsummary .= sprintf "\n\n\n";
}

#----------------------------------------------------------------
if ($opt_clientips) {
    # top clientip by url count
    {
        my $csv = '';
        my $rank = 0;
        my $title = "TOP CLIENTIPs by URLs";
        $mainsummary .= sprintf "$title\n\n";
        $mainsummary .= sprintf "%6s  %7s    %s\n\n", 'Rank', 'URLs', 'Client IP Address';
        foreach my $key (sort { $clientipurls{$b} <=> $clientipurls{$a} } keys %clientipurls) {
            $rank++;
            $mainsummary .= sprintf "%6d  %7d    %s\n", $rank, $clientipurls{$key}, $key;
            $csv .= sprintf "%d%s%s%s%s\n", $rank, $opt_sep, $clientipurls{$key}, $opt_sep, $key;
            last if ($rank >= $opt_ranklimit);
        }
        push @reportfilenames, makecsvfilename($title,$startdate, $finishdate);
        push @reportdata, $csv;
    }

    $mainsummary .= sprintf "\n\n\n";

    #----------------------------------------------------------------

    # top client IPs by bytes downloaded
    {
        my $csv = '';
        my $rank = 0;
        my $title = "TOP CLIENTIPs by BYTES";
        $mainsummary .= sprintf "$title\n\n";
        $mainsummary .= sprintf "%6s  %9s    %s\n\n", 'Rank', 'Bytes', 'Client IP Address';
        foreach my $key (sort { $clientipbytes{$b} <=> $clientipbytes{$a} } keys %clientipbytes) {
            $rank++;
            $mainsummary .= sprintf "%6d  %s    %s\n", $rank, print_bytes($clientipbytes{$key}), $key;
            $csv .= sprintf "%d%s%s%s%s\n", $rank, $opt_sep, print_bytes($clientipbytes{$key}), $opt_sep, $key;
            last if ($rank >= $opt_ranklimit);
        }
        push @reportfilenames, makecsvfilename($title,$startdate, $finishdate);
        push @reportdata, $csv;
    }

    $mainsummary .= sprintf "\n\n\n";
}

{
    my $subject = makecsvfilename("Internet Report", $startdate, $finishdate );
       $subject =~ s!.csv$!!soig;
       $subject =~ s!Internet-Report-!Internet Report - !soig;
       $subject =~ s!-to-! to !soig;

    #----------------------------------------------------------------
    #
    # craft an email full of CSV
    #
    if ($opt_email ne '') {

        $opt_return ||= $opt_email;

        print STDERR "$opt_return ||= $opt_email\n" if ( $opt_debug > 2 );

        my $entity = MIME::Entity->build(Type    =>"multipart/mixed",
                                         From    => $opt_return,
                                         To      => $opt_email,
                                         Subject => $subject );

        $entity->attach(Data     => $mainsummary,
                        Type     => "text/plain");


        # loop thru the reports, attaching them
        for my $index ( 0 .. $#reportdata ) {

            my $content_type = 'text/comma-separated-values';

            ### Attach stuff to it:
            $entity->attach(Data     => $reportdata[$index],
                            Filename => $reportfilenames[$index],
                            Type     => $content_type,
                            Encoding => "base64");

            print STDERR "$reportfilenames[$index]\n$reportdata[$index]\n" if ( $opt_debug > 3 );

        }

        $entity->print(\*STDOUT);

    } else {

        print "$subject\n\n$mainsummary\n";

    }
}

#----------------------------------------------------------------
# main program ends
#----------------------------------------------------------------

#-S-U-B-R-O-U-T-I-N-E-S------------------------------------------

sub print_bytes {
    my $bytes = shift;

    my $divideby = 1;
    my $bytelabel = 'b ';
    if ($bytes > 1024*1024*1024) {
        $divideby = 1024*1024*1024;
        $bytelabel = 'gb';
    } elsif ($bytes > 1024*1024) {
        $divideby = 1024*1024;
        $bytelabel = 'mb';
    } elsif ($bytes > 1024) {
        $divideby = 1024;
        $bytelabel = 'kb';
    }
    return sprintf "%7.2f%s", $bytes / $divideby, $bytelabel;
}


sub makecsvfilename {
    my $title = shift;
    my $startdate = shift;
    my $finishdate = shift;

    my $finish_at = strftime "%Y-%b-%d %H:%M%p", localtime($finishdate);
    my $start_at  = strftime "%Y-%b-%d %H:%M%p", localtime($startdate);

    # die "'$startdate' to '$finishdate'";

    print STDERR qq{$finish_at = strftime "%Y-%b-%d %H:%M", localtime($finishdate);} if ( $opt_debug );

    my $csvfilename = sprintf "%s %s to %s.csv", $title, $start_at, $finish_at;

    $csvfilename =~ s! +!-!sg;

    return $csvfilename;

}

Wednesday, April 9, 2014

Full time in FreeBSD directory listing ...


the -D option takes an strftime(3)-style format string - %c gives the full date and time (note it only applies to long listings, which the 'dir' alias here presumably produces)

 unix-ns1:/ # dir -D %c /boot/loader.conf
 -rw-r--r--  1 root  wheel  134 Wed Apr  9 12:59:27 2014 /boot/loader.conf