Monday, December 31, 2007

500th Blog Post: Should I change my blog title?

Just recently I visited certain websites and couldn't figure out what those sites were about just by reading their titles. That made me realize my own blog title doesn't make sense to a lot of people either.

It is pretty odd to read the title "When {Puffy} Meets ^RedDevil^" and have no clue what it means at first glance. Since quite a few people have asked why I named my blog the way it is, and I hate repeating the explanation over and over, here's the short brief.

Puffy is the OpenBSD mascot.

RedDevil is the FreeBSD mascot.

A few years back I started learning *nix based operating systems with RedHat Linux 6.2, but I switched to OpenBSD quickly after I accidentally found it while looking for other distros. Since then OpenBSD has become my favourite operating system, especially when running it as a router, firewall or IDS. I tried to use OpenBSD as my desktop but figured it lacks the application ports/packages that I need, so I had to find an alternative, and that's where FreeBSD kicks in and becomes an important OS platform for me. The time I started my blog was the time I was addicted to BSD, and I thought it would be cool to name my blog "When {Puffy} Meets ^RedDevil^". And don't ask me why there's no penguin - my preference simply goes to BSD. I'm not anti-Linux, I'm just more comfortable with BSD (just like you may like Windows but I don't).

Later my friends told me that my blog was gearing towards network security rather than open source stuff, so why not just change the title to "Network Security Blog"? That may sound right to most people and make more sense, but I have a simple answer to this -

I won't change my blog title, it's been with me since 2005.

To everyone, Happy New Year 2008!!!!!

Peace (;])

Thanks to whoever is reading my blog. I know it sucks ..... but I just can't stop writing.

Sunday, December 30, 2007

Packets -> Flows -> Session

This is my last post before reaching the 500th milestone, so I'll try my best to make it a good one. I want to keep this post simple and clear, but I will still explain things in detail. If you are a network flow analysis guru, you can skip this post - I consider it introductory - but it may help others understand network flow better, since it took me quite a while to learn how to utilize network flow data myself. My approach will be similar to my previous post here, but the topic is totally different. The "Not So Upcoming" Argus 3 will be the main weapon discussed here. Let's walk through it now.

Network Packets

To begin, I need to obtain the network packets, so I logged the network traffic using tcpdump while I was downloading wireshark. Here's how I did it -

shell>sudo tcpdump -s 0 -nni lnc0 -w http-download.pcap


After I finished downloading wireshark, I terminated tcpdump and got an initial view of the pcap file with capinfos.

shell>capinfos http-download.pcap
File name: http-download.pcap
File type: Wireshark/tcpdump/... - libpcap
Number of packets: 19782
File size: 1981512 bytes
Data size: 18047455 bytes
Capture duration: 405.100833 seconds
Start time: Thu Dec 20 23:43:22 2007
End time: Thu Dec 20 23:50:07 2007
Data rate: 44550.53 bytes/s
Data rate: 356404.20 bits/s
Average packet size: 912.32 bytes

For a single file download of approximately 20MB, the capture contains 19782 packets. It is painful to look at every single packet when it isn't necessary. What if I don't want to know the payload of the packets, but rather a connection summary - how many packets have been sent by one host to another, how many bytes have been transferred in this connection, and how long did this particular connection last? Packet centric analysis doesn't fit well here, so let me introduce you to network flow analysis. But before that, let's have fun with packets -

------------------------------------------------------------
Scenario:

Host A(Client) - 192.168.0.102
Host B(Server) - 128.121.50.122

Host A downloads the wireshark source from Host B
-------------------------------------------------------------

To get the count of how many packets have been sent by Host A to Host B -

shell>tcpdump -ttttnnr http-download.pcap \
ip src 192.168.0.102 | wc -l

reading from file http-download.pcap, link-type EN10MB (Ethernet)
7801

To get the count of how many packets have been sent by Host B to Host A -

shell>tcpdump -ttttnnr http-download.pcap \
ip src 128.121.50.122 | wc -l

reading from file http-download.pcap, link-type EN10MB (Ethernet)
11981

What if you want to know how many bytes have been sent by Host A to Host B, and the other way around? It would be exhausting to dig into all of those packets and count by hand - the rough sketch below shows just how clunky that gets. Now this is where network flow kicks in.
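
For the curious, here is one rough way to total up the bytes straight from the packets, by letting tcpdump print the frame length with -e and summing it with awk. Treat it as a sketch only - it assumes your tcpdump prints a "length" field in its link-level output, so adjust it to your own version:

shell>tcpdump -e -nnr http-download.pcap ip src 192.168.0.102 | \
awk '{ for (i=1; i<=NF; i++) if ($i == "length") { sub(":", "", $(i+1)); sum += $(i+1); break } } END { print sum, "bytes" }'

It works, but you have to remember the exact output format, and it gets messy quickly once more hosts and sessions are involved. The flow tools below give you the same kind of answer directly.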

Network Flows

Network flow is a really different beast. To give you an idea of what a flow is, I define it as -

A flow is a packet, or a sequence of packets, that belongs to a certain network session (conversation) between two hosts, but is delimited by the settings of the flow generation tool. In short, it provides network traffic summarization by metering or accounting certain attributes of the network session.

To understand this better, let's convert the packet data (pcap) to argus format flow data -

shell>argus -mAJZRU 512 -r http-download.pcap \
-w http-download.arg3


I run argus with the option -mAJZRU 512 so that it will generate as much data as possible for each flow record. I won't explain each option here since you can find them in the man page or argus -h.

Now I can examine/parse http-download.arg3 with the argus client tools for further flow processing. To make it easy to read, I use ra here, as it is the most basic argus flow data processing tool. I choose to print the necessary fields with the -s option, such as (start time|src address|src port|direction|dst address|dst port|src packets|dst packets) -

shell>ra -L0 -nnr http-download.arg3 \
-s stime saddr sport dir daddr dport spkts dpkts - ip
StartTime SrcAddr Sport Dir DstAddr Dport SrcPkts DstPkts
23:43:22.024899 192.168.0.102.51371 -> 128.121.50.122.80 1165 1800
23:44:22.068631 192.168.0.102.51371 -> 128.121.50.122.80 1186 1807
23:45:22.101391 192.168.0.102.51371 -> 128.121.50.122.80 1246 1919
23:46:22.117747 192.168.0.102.51371 -> 128.121.50.122.80 1125 1751
23:47:22.171437 192.168.0.102.51371 -> 128.121.50.122.80 1160 1759
23:48:22.209375 192.168.0.102.51371 -> 128.121.50.122.80 1080 1664
23:49:22.186030 192.168.0.102.51371 -> 128.121.50.122.80 839 1281


There are 7 flow records in total here for just a single network session. Why?

If you read the argus configuration manual page, it mentions -

ARGUS_FLOW_STATUS_INTERVAL
Argus will periodically report on a flow’s activity every ARGUS_FLOW_STATUS_INTERVAL seconds, as long as there is new activity on the flow. This is so that you can get a view into the activity of very long lived flows. The default is 60 seconds, but this number may be too low or too high depending on your uses.

The default value is 60 seconds, but argus does support a minimum value of 1. This is very useful for doing measurements in a controlled experimental environment where the number of flows is <>
Command line equivalent -S

ARGUS_FLOW_STATUS_INTERVAL=60

For a better understanding, I print only the start time field to get a clearer picture -

shell>ra -nr http-download.arg3 -s stime - ip
23:43:22.024899
23:44:22.068631
23:45:22.101391
23:46:22.117747
23:47:22.171437
23:48:22.209375
23:49:22.186030

With the default setting, you may notice the boundary is 1 minute for each flow record; that's actually what I tried to explain above -

A flow is a packet, or a sequence of packets, that belongs to a certain network session (conversation) between two hosts, but is delimited by the settings of the flow generation tool.

If the network session lasts longer than 1 minute (a long lived flow), argus will generate another flow record (with the same attributes/label) that actually belongs to the same network session. Of course you can tune this with the -S option in argus. Let's try -

shell>argus -S 480 -mAJZRU 512 -r http-download.pcap \
-w http-download-480.arg3


I set 480 seconds (8 minutes) here, as the network session duration falls within that time range. Now we read it again with ra -

shell>ra -L0 -nr http-download-480.arg3 \
-s stime saddr sport dir daddr dport spkts dpkts - ip
StartTime SrcAddr Sport Dir DstAddr Dport SrcPkts DstPkts
23:43:22.024899 192.168.0.102.51371 -> 128.121.50.122.80 7801 11981

However, in a real world deployment this is not the right way to reconstruct a network session from multiple flows, especially if your network is complex (providing various network services) and busy (heavy network traffic) - it is really arbitrary. You can't easily tell that multiple flows belong to the same network session, since there will be many other flow records inserted in between, and another issue is what happens if the network session lasts longer than 480 seconds (8 minutes). That's where racluster (another argus client tool) comes to the rescue.

Network Session

From the racluster partial man page -

Racluster reads argus data from an argus-data source, and clusters/merges the records based on the flow key criteria specified either on the command line, or in a racluster configuration file, and outputs a valid argus-stream. This tool is primarily used for data mining, data management and report generation.
The default action is to merge status records from the same flow and argus probe, providing in some cases huge data reduction with limited loss of flow information.

Racluster is easy to use but hard to master; however, here's a simple usage to reconstruct the network session from multiple network flow records.

shell>racluster -L0 -nr http-download.arg3 \
-s stime saddr sport dir daddr dport spkts dpkts
StartTime SrcAddr Sport Dir DstAddr Dport SrcPkts DstPkts
23:43:22.024899 192.168.0.102.51371 -> 128.121.50.122.80 7801 11981

It really is that simple. To explain this network session -

Start Time - 23:43:22.024899
Source Address - 192.168.0.102
Source Port - 51371
Destination Address - 128.121.50.122
Destination Port - 80
Source Packets - 7801
Destination Packets - 11981

Start Time is the time when the network session started; the others are pretty self-explanatory except Source Packets and Destination Packets. Source Packets counts how many packets have been sent by the Source Address, and Destination Packets counts how many packets have been sent by the Destination Address. To generate a summarization of this network session, you can run -

shell>racluster -L0 -nr http-download.arg3 \
-s dur pkts bytes

Dur TotPkts TotBytes
405.100830 19782 18047455

This network session's duration is approximately 405 seconds, the total packet count is 19782, and the total bytes are 18047455. Yes, this is where network flow analysis can be useful - traffic accounting - but I won't explain that much here since it deserves a topic of its own.
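
Just as a teaser - and only a rough sketch, so double check the field names against your own argus client man pages - you could aggregate the flow records per source address with racluster's -m option to get simple per-host accounting:

shell>racluster -m saddr -L0 -nr http-download.arg3 \
-s saddr spkts dpkts sbytes dbytes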

Maybe I should have made this post title sound more interesting, like "Network Flow Demystified". There are other topics about network flow that I don't mention here, such as Cisco NetFlow, the unidirectional vs bidirectional model, and other interesting flow metrics provided by argus; I hope to close that gap in coming posts.

Enjoy (;])

Friday, December 28, 2007

SANS: Christmas Packet Challenge

I was back from Singapore and still in holiday mood. Yesterday, while chatting with my friend ayoi, he told me that a SANS Incident Handler (Lorna Hutcheson) has posted the Christmas Packet Challenge, which you can find here -

http://isc.sans.org/diary.html?storyid=3781

To be honest, lazy as I am, I didn't take a look at first, but then I figured this might refresh my packetysis skills since I haven't really done that for a while. If any of you have spare time to kill, feel free to try it out.

I primarily used the HeX 1.0.2 liveCD for this game. I'm not too sure whether I finished the game, but I have sent my write-up to the SANS Incident Handlers. Interestingly, the email I sent was blocked by an email filter. Check out the screenshot -


Spammer that I am, I figured the problem was that I had two urls in the email; one is -

http://isc.sans.org/diary.html?storyid=3781&rss

The other one is my own blog url, which resides in my email signature. I deleted both urls, tried to send the email again, and finally it got through. Sometimes these false positives are really annoying.

I will post my write-up once the handler has posted the answer to the challenge.

Anyway, it's the end of the year, back to holiday mood again ..... zzZZZ

Cheers (;])

Friday, December 21, 2007

HeX 1.0.2 - The Christmas Release

Ho ho ho, Christmas is around the corner .....

For the sake of it, the HeX development team would like to present to you HeX 1.0.2 - The Christmas Release!!!!! Get it now!

Malaysia Main

liveCD
- HeX liveCD 1.0.2
- HeX liveCD 1.0.2 md5 checksum
- HeX liveCD 1.0.2 sha256 checksum

Mini liveUSB
- HeX Mini liveUSB 1.0.2
- HeX Mini liveUSB 1.0.2 md5 checksum
- HeX Mini liveUSB 1.0.2 sha256 checksum

US Mirror

liveCD
- HeX liveCD 1.0.2
- HeX liveCD 1.0.2 md5 checksum
- HeX liveCD 1.0.2 sha256 checksum

Official Announcement

We are no longer calling this project the HeX liveCD but simply HeX, as it has expanded quickly and the liveCD is now just one of the projects under HeX.

Two sub projects will be launched under this release as well -
- NSM Console
- liveUSB

NSM Console

Matthew (Dakrone) is the main developer of NSM Console; here's a short description of it -

NSM Console (Network Security Monitoring Console) is a framework for performing analysis on packet capture files. It implements a modular structure to allow for an analyst to quickly write modules of their own without any programming language experience which means you can quickly integrate all the other NSM based tools to it. Using these modules a large amount of pcap analysis can be performed quickly using a set of global (as well as per-module) options. NSM Console also aims to be simple to run and easy to understand without lots of learning time.

If you want more information about what it is (and what it does), check out this introductory post

http://thnetos.wordpress.com/2007/11/27/nsm-console-a-framework-for-running-things/


You can access NSM Console by clicking the menu -> NSM-Tools -> NSM Console

HeX liveUSB


JJC (enhanced) initially created the liveUSB so that instead of using a read-only liveCD, you can use a read-write USB thumb drive. Here's a short description of it -

After receiving numerous requests to create a HeX liveUSB Key Image we decided to go ahead and build one. This image includes all of the standard tools that you will find on HeX and it is writable; so you can update things (signatures etc), make changes and so on.

To use the HeX liveUSB, you simply download the image and dd it to your USB key (thumb drive), as sketched below. The 1.0.2 liveUSB is released in line with the liveCD. However, JJC will soon create a liveUSB image with more space in case you want to store stuff inside it.
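
As a rough sketch, the dd step looks like the line below. The image filename and the device node are just examples - double check your own USB device name before running it, because dd will happily overwrite the wrong disk too. On FreeBSD the key usually shows up as /dev/daX; on Linux it would be something like /dev/sdX with bs=1M instead.

shell>sudo dd if=hex-mini-1.0.2.img of=/dev/da0 bs=1m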

Other Additions (Surprise)

Christmas Gifts for the Analyzt
1. HeXtra 1.0.2 (very soon, because it needs to be tested with HeX 1.0.2 before release)
2. aimsnarf - AIM protocol analyzer script
3. argi-PASVFTP.sh - argus 3 passive FTP extraction script
4. 4 additional PADS signatures
5. dsniff and honeysnap (thanks dakrone for porting this)
6. rp-Reference added under the analyzt home directory, along with a resources.sh script that downloads useful docs, papers and articles to assist analyzt wannabes.

Christmas Gifts for Everyone

Everyone loves eye candy, and so do we! Since we call this The Christmas Release, here are your Christmas gifts (the shiny new HeX Christmas wallpapers) -

1. HeX-WhiteChristmas.jpg
2. HeX-DarkChristmas.jpg

Thanks Vickson again for his artistic skillz!

Bug Fixes
1. unicornscan run time error
2. svn run time error
3. lsof run time error
4. firefox startup issue
5. pidgin and liferea dbus issue
6. CDROM-Mount.sh syntax error
7. script command issue
8. ping setuid issue

Other known major and minor issues in the base system have been fixed as well - thanks chfl4gs_!

For a quick glance, check out the HeX 1.0.2 liveCD screenshots below -

The White Christmas

The Dark Christmas

Note to Everyone(Mailing List, Trac, Backports and IRC Channel)

For anyone who wants to learn about the network security tools included in HeX, please feel free to ask on the mailing list, and if you have a specific idea for HeX, we welcome your input.

However, if you want to submit a bug report, please use trac and create a ticket; all you need to do is register an account and you can file the bug report quickly. Otherwise, if you report it to the mailing list, the developers will have to create the ticket on your behalf. By helping yourself, you are helping us. Trac is available at -

https://trac.security.org.my/hex/

On the other hand, you can also browse the tickets at -

https://trac.security.org.my/hex/report


Just in case the bug has been previously reported.

Feel free to join the IRC Freenode #rawpacket channel if you need "not so real time" support.

From now on, we will have backports too. The backports basically serve extra application packages that are not available in the HeX base system. To install them, just download them from -

http://www.rawpacket.org/hex/packages/

For example to install tftpgrab, just run -

shell>sudo pkg_add -v tftpgrab-0.2.tbz

Last but not least, we are always looking for new contributors and developers. If you are interested in joining us, feel free to email -

geek00l[at]gmail[dot]com

To learn more about the HeX Project, check it out at -

http://www.rawpacket.org/projects/hex

Merry Christmas and happy holidays from the entire HeX Team, see you all in 2008!

Enjoy (;])

Thursday, December 20, 2007

Tip for RTFM

Read The F*ing Manual (RTFM) is one of the most famous quotes around. Most of the time we can read a manual page by using the command -

shell>man ls

There you will be able to read the manual page for the ls command. But what if the manual page isn't installed in the default path (usually /usr/share/man, though it may vary between operating systems)? For example, you can do this if you want to read the man page for the argus client tool racluster.

shell>nroff -man racluster.5 | less
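
Alternatively, if the man pages are installed under some custom prefix and laid out in the usual manN subdirectories, you can usually point man straight at that directory with -M. The path below is just an example:

shell>man -M /usr/local/argus/man racluster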

And if you want to convert it to html format, just use man2html -

shell>man2html racluster.5 > racluster.5.html

Here's the html page -


Pretty simple, isn't it?

Peace ;]

VMware Inconsistent Time Issue: Blame the clock rate

Here's an interesting thing I found when running a FreeBSD VM on my laptop: the problem is inconsistent time, which can't be solved even with this post.

You get to blame the clock rate, because that's the cause of the inaccurate timing when running a FreeBSD VM on VMware. I tried unplugging my power cable and checked out the dmesg -

[14580.568000] /dev/vmmon[26067]: host clock rate change request 228 -> 100
[14580.568000] /dev/vmmon[26067]: host clock rate change request 100 -> 228
[14956.836000] /dev/vmmon[26067]: host clock rate change request 228 -> 100
[14956.836000] /dev/vmmon[26067]: host clock rate change request 100 -> 228
[15333.080000] /dev/vmmon[26067]: host clock rate change request 228 -> 100
[15333.080000] /dev/vmmon[26067]: host clock rate change request 100 -> 228
[15709.332000] /dev/vmmon[26067]: host clock rate change request 228 -> 100
[15709.332000] /dev/vmmon[26067]: host clock rate change request 100 -> 228
[16085.612000] /dev/vmmon[26067]: host clock rate change request 228 -> 100
.....

The time in the VM becomes inconsistent after I unplug the power and returns to normal once I plug it back in - the dmesg says it all. Maybe it's best not to use a laptop for a FreeBSD VM; a desktop will do just fine.

Cheers ;]

Tuesday, December 18, 2007

Ubuntu: Argus 3

I'm currently working hard on network flow analysis, and argus is always my best friend. Another wonderful application suite is silktools, which I think you should try out if you are into network flow analysis. Anyway, here's the quick installation of the upcoming argus 3 on Ubuntu 7.10.

It is pretty straightforward to get argus 3 installed -

shell>sudo apt-get install libpcap0.8 libpcap0.8-dev flex bison rrdtool

Once you have installed all the dependencies of argus 3, let's download the argus 3 server and its client suite and install them.

shell>wget \
ftp://qosient.com/dev/argus-3.0/argus-3.0.0.tar.gz


shell>wget \
ftp://qosient.com/dev/argus-3.0/argus-clients-3.0.0.rc.63.tar.gz


Once you have downloaded them, you just need to perform the usual compilation steps: decompress them, then configure; make && make install. A rough sketch is shown below.
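
A minimal sketch of those steps, using the server tarball as the example (the client tarball follows exactly the same pattern):

shell>tar xzf argus-3.0.0.tar.gz
shell>cd argus-3.0.0
shell>./configure
shell>make && sudo make install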

And if you still don't know what argus is about, check out this post.

P/S: Both argus and silktools are included in the HeX liveCD.

Enjoy ;]

Monday, December 03, 2007

PADS: Sigs For Belkin ADSL Router

If you have a Belkin ADSL router running in your network, it's good to identify what services it runs. There are actually 2 network services running on the Belkin ADSL router: the web and telnet.

After examining the network traffic, I decided to write PADS signatures for it so that I can track these network assets passively. If I'm not mistaken, the Belkin ADSL router runs micro_httpd, which you can find here -

http://www.acme.com/software/micro_httpd/
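
If you want to peek at the banners yourself, a rough sketch is to dump the ASCII payload of the traffic coming from the router with tcpdump (the pcap filename here is just a placeholder):

shell>tcpdump -Annr belkin-traffic.pcap \
'src host 192.168.2.1 and (port 23 or port 80)' | less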


I also examined the telnet traffic so that I could write a sig for it. I wrote these rough signatures quickly, and it's great to have them working properly after some testing -

# Belkin ADSL Router
telnet,v/Belkin Router Telnet///,BCM96358 ADSL Router\r\nLogin:[ ]

www,v/Micro HTTP Server///,Server: micro_httpd\r\n
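
To try them out, a rough sketch of the workflow would be to append the two lines above to your PADS signature file (commonly something like /usr/local/etc/pads-signature-list, but check your own installation) and then replay a capture through pads; the pcap and report filenames below are just placeholders:

shell>sudo pads -r belkin-traffic.pcap -w belkin-assets.csv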

For a quick run, just check out the screenshot below and you will see that the host 192.168.2.1 has been identified as running these two services.


I will add these two signatures to the upcoming HeX 1.0.2; the reason we are delaying its release is that more bugs have been found and there are various things left to do.

Enjoy (;])

Saturday, December 01, 2007

HeX: Solution to Time Slowness in VMware Server

Thanks to my friend Richard, who observed the time slowness when running FreeBSD on VMware. I didn't really notice it because the slowness (delay) is very minimal - something like 10 minutes over 24 hours - and I never ran it for a whole day. I only observed it after Richard reported the issue to me.

To me, time is a critical issue for a network security analyzt (timelining, timestamping and so on), so I needed to figure out a solution for this. I found two great posts which can be considered solutions to the problem, available here and there. You might as well read the comments on Richard's blog post too if you encounter the problem.

Here's the sum up of the solutions: put these two lines in /boot/loader.conf (if I recall correctly, they are there by default in HeX).

kern.hz=100
hint.apic.0.disabled=1


Reboot your VM. However, these two lines won't fully solve the problem, they only minimize the time slowness. In order to stay consistent with local time, you will have to install the vmware tools; follow the instructions in the links above to get it done. Once you have finished installing the vmware tools, you should find vmware-guestd running as a daemon in the background. Now run -

shell>vmware-toolbox

The VMware Tools properties configuration box will pop up, and you will see this in the first tab -


Check the option and click Close. Now you should shut down the VM and check your vmx file to see if this setting is there -

tools.syncTime = "TRUE"
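
A quick way to check is simply to grep for it (the vmx filename is just an example - use your own):

shell>grep syncTime HeX-1.0.2.vmx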

If it is there (otherwise add it manually), just boot the VM again and you should no longer encounter the time slowness problem; the VM will follow the local time and sync (adjust) automatically. For your information, I did this on VMware Server Console Version 1.0.4 build-56528. Feel free to try it out on VMware Workstation too.

Enjoy ;]

HeX: Malaysia Download Mirror

Thanks to Ganux (Terengganu Linux) for their initiative to host a mirror for our HeX liveCD. One of their members, Wariola, came to the November meetup where chfl4gs_ and I presented the HeX project, and decided to contribute the space and bandwidth. So if you are local and want to try out the HeX liveCD, feel free to download it from the local mirror, which is located at -

http://www.ganux.com/OSS/hex-i386-1.0.1.iso


There are 4 members in the Ganux team - wariola, Ganux, Hardyweb and Dinoz - and I'm glad to hear that we have friends taking the initiative to push open source software in other states. As usual, I believe every single bit helps. Thumbs up!

Cheers ;]