Monday, December 31, 2007

500th Blog Post: Should I change my blog title?

Recently I visited certain websites and couldn't figure out what they were about just by reading their titles. It made me realize that my own blog title doesn't make sense to a lot of people.

It is pretty odd when you read the title "When {Puffy} Meets ^RedDevil^" and you definitely have no clue at first glance. Quite a few people have asked why I named my blog the way it is, and since I hate repeating the explanation all over again, here's the short brief.

Puffy is the OpenBSD mascot.

RedDevil is the FreeBSD mascot.

A few years back I started learning *nix-based operating systems with Red Hat Linux 6.2, but I switched to OpenBSD quickly after stumbling on it while looking for other distros. Since then OpenBSD has been my favourite operating system, especially for running as a router, firewall or IDS. I tried using OpenBSD as my desktop but found it lacked some application ports/packages that I need, so I had to find an alternative, and that's where FreeBSD kicked in and became an important OS platform for me. I started this blog back when I was addicted to BSD, and I thought it would be cool to name it "When {Puffy} Meets ^RedDevil^". And don't ask me why there's no penguin; my preference goes to BSD. I'm not anti-Linux, I'm just more comfortable with BSD (just like you may like Windows but I don't).

Later my friends told me that my blog is gearing towards network security instead of open source stuff, so why not just change the title to "Network Security Blog"? That may sound right to most people and make more sense; however, I have my simple answer to this -

I won't change my blog title, it's been with me since 2005.

To everyone, Happy New Year 2008!!!!!

Peace (;])

Thanks to whoever is reading my blog, I know it sucks ..... but I just can't stop writing.

Sunday, December 30, 2007

Packets -> Flows -> Session

This is my last post before reaching the 500th milestone, so I'm trying my best to write a great post. I want to keep this post simple and clear, yet explain things in detail. If you are a network flow analysis guru, you can skip this post; I consider it introductory, but it may help others understand network flow better, since it took me a fair amount of time to learn how to utilize network flow data myself. My approach will be similar to my previous post here, but the topic is totally different. The "not so upcoming" Argus 3 will be the main weapon discussed here. Let's walk through it now.

Network Packets

First I need to obtain network packets, so I logged the network traffic using tcpdump while downloading Wireshark. Here's how I did it -

shell>sudo tcpdump -s 0 -nni lnc0 -w http-download.pcap

After I finished downloading Wireshark, I terminated tcpdump and got an initial view of the pcap file with capinfos.

shell>capinfos http-download.pcap
File name: http-download.pcap
File type: Wireshark/tcpdump/... - libpcap
Number of packets: 19782
File size: 1981512 bytes
Data size: 18047455 bytes
Capture duration: 405.100833 seconds
Start time: Thu Dec 20 23:43:22 2007
End time: Thu Dec 20 23:50:07 2007
Data rate: 44550.53 bytes/s
Data rate: 356404.20 bits/s
Average packet size: 912.32 bytes
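
The derived figures in that summary are simple arithmetic over the capture; as a quick cross-check (a sketch in Python, using only the numbers printed above):

```python
# Cross-check the derived figures in the capinfos summary above.
packets = 19782
data_size = 18047455        # bytes captured on the wire
duration = 405.100833       # capture duration in seconds

avg_packet_size = data_size / packets
byte_rate = data_size / duration
bit_rate = byte_rate * 8

print(round(avg_packet_size, 2))  # matches "Average packet size: 912.32 bytes"
print(round(byte_rate, 2))        # matches "Data rate: 44550.53 bytes/s"
print(round(bit_rate, 2))         # matches "Data rate: 356404.20 bits/s"
```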

For a single file download of approximately 20MB, the capture contains 19782 packets. It is painful to look at every single packet when it isn't necessary. What if I don't want to know the payload of each packet, but rather the connection summary: how many packets one host sent to another, how many bytes were transferred in this connection, and how long this particular connection lasted? Packet-centric analysis doesn't fit well here; therefore I introduce you to network flow analysis. But before that, let's have fun with packets -


Host A(Client) -
Host B(Server) -

Host A downloads the wireshark source from Host B

To get the count of how many packets have been sent by Host A to Host B -

shell>tcpdump -ttttnnr http-download.pcap \
ip src | wc -l

reading from file http-download.pcap, link-type EN10MB (Ethernet)

To get the count of how many packets have been sent by Host B to Host A -

shell>tcpdump -ttttnnr http-download.pcap \
ip src | wc -l

reading from file http-download.pcap, link-type EN10MB (Ethernet)

What if you want to know how many bytes were sent by Host A to Host B and the reverse? It would be exhausting to look into all those packets and count manually. Now this is where network flow kicks in.

Network Flows

Network flow is a really different beast. To give you an idea of what a flow is, I define it as -

A flow is a sequence of packets (or a single packet) belonging to a certain network session (conversation) between two hosts, delimited by the settings of the flow generation tool. To cut it short, it summarizes network traffic by metering or accounting certain attributes of the network session.
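
To make the definition concrete, here is a minimal sketch of the idea (not how argus works internally; the packets and addresses are made up): meter packets into one bidirectional flow record keyed on the 5-tuple -

```python
# Hypothetical packets: (timestamp, src, sport, dst, dport, proto, size in bytes)
packets = [
    (0.00, "10.0.0.1", 51371, "192.0.2.80", 80, "tcp", 60),
    (0.01, "192.0.2.80", 80, "10.0.0.1", 51371, "tcp", 60),
    (0.02, "10.0.0.1", 51371, "192.0.2.80", 80, "tcp", 52),
    (0.03, "192.0.2.80", 80, "10.0.0.1", 51371, "tcp", 1500),
]

flows = {}
for ts, src, sport, dst, dport, proto, size in packets:
    fwd = (src, sport, dst, dport, proto)
    rev = (dst, dport, src, sport, proto)
    # Bidirectional key: both directions of a conversation hit the same record.
    if rev in flows:
        key, sending_src = rev, False
    else:
        key, sending_src = fwd, True
    rec = flows.setdefault(key, {"stime": ts, "ltime": ts,
                                 "spkts": 0, "dpkts": 0,
                                 "sbytes": 0, "dbytes": 0})
    rec["ltime"] = ts
    side = "s" if sending_src else "d"
    rec[side + "pkts"] += 1
    rec[side + "bytes"] += size

for key, rec in flows.items():
    print(key, rec)
```

Four packets collapse into a single record carrying per-direction packet and byte counters, which is exactly the kind of accounting the definition describes.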

To understand them better, let's convert the packet data (pcap) to argus-format flow data -

shell>argus -mAJZRU 512 -r http-download.pcap \
-w http-download.arg3

I run argus with the options -mAJZRU 512 so that it generates as much data as possible for each flow record. I won't explain each option here since you can find them in the man page or argus -h.

Now I can examine/parse http-download.arg3 with the argus client tools for further flow processing. To make it easy to read, I use ra here, as it is the most basic argus flow data processing tool. I choose to print only the necessary fields with the -s option: (start time|src address|src port|direction|dst address|dst port|src packets|dst packets) -

shell>ra -L0 -nnr http-download.arg3 \
-s stime saddr sport dir daddr dport spkts dpkts - ip
StartTime SrcAddr Sport Dir DstAddr Dport SrcPkts DstPkts
23:43:22.024899 -> 1165 1800
23:44:22.068631 -> 1186 1807
23:45:22.101391 -> 1246 1919
23:46:22.117747 -> 1125 1751
23:47:22.171437 -> 1160 1759
23:48:22.209375 -> 1080 1664
23:49:22.186030 -> 839 1281

There are 7 flow records here for just a single network session. Why?

If you read the argus configuration manual page, it mentions -

Argus will periodically report on a flow’s activity every ARGUS_FLOW_STATUS_INTERVAL seconds, as long as there is new activity on the flow. This is so that you can get a view into the activity of very long lived flows. The default is 60 seconds, but this number may be too low or too high depending on your uses.

The default value is 60 seconds, but argus does support a minimum value of 1. This is very useful for doing measurements in a controlled experimental environment where the number of flows is <>
Command line equivalent -S


For a better understanding, I print only the start time field -

shell>ra -nr http-download.arg3 -s stime - ip

With the default setting, you may notice the boundary is 1 minute for each flow record; that's actually what I tried to explain above -

A flow is a sequence of packets (or a single packet) belonging to a certain network session (conversation) between two hosts, delimited by the settings of the flow generation tool.

If the network session lasts longer than 1 minute (a long-lived flow), argus will generate another flow record (with the same attributes/label) which actually still belongs to the same network session. Of course you can tune this with the -S option of argus. Let's try -

shell>argus -S 480 -mAJZRU 512 -r http-download.pcap \
-w http-download-480.arg3

I set 480 seconds here, which is 8 minutes, as the network session duration falls within that time range. Now we read it again with ra -

shell>ra -L0 -nr http-download-480.arg3 \
-s stime saddr sport dir daddr dport spkts dpkts - ip
StartTime SrcAddr Sport Dir DstAddr Dport SrcPkts DstPkts
23:43:22.024899 -> 7801 11981
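
The status-interval behaviour above can be mimicked in a few lines: bucket a session's packet timestamps by the interval, and each bucket becomes one status record. A sketch with synthetic timestamps (not argus itself) gives 7 records at the default 60 seconds and 1 record at 480 seconds -

```python
def status_records(timestamps, session_start, interval):
    """Bucket packet timestamps into per-interval flow status records."""
    buckets = {}
    for ts in timestamps:
        idx = int((ts - session_start) // interval)
        buckets[idx] = buckets.get(idx, 0) + 1
    return sorted(buckets.items())   # [(interval index, packet count), ...]

# Synthetic long-lived session: one packet every 0.1 s for ~405 seconds.
start = 0.0
ts = [i * 0.1 for i in range(4051)]

print(len(status_records(ts, start, 60)))    # 7 records, like the default ra output
print(len(status_records(ts, start, 480)))   # 1 record, like argus -S 480
```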

However, in a real-world deployment this is not the right way to reconstruct a network session from multiple flows; it is really arbitrary, especially if your network is complex (providing various network services) and busy (heavy network traffic). You can't easily tell that multiple flows belong to the same network session, as many other flow records will be inserted in between; another issue is what happens if the network session lasts longer than 480 seconds (8 minutes). That's where racluster (another argus client tool) comes to the rescue.

Network Session

From the racluster partial man page -

Racluster reads argus data from an argus-data source, and clusters/merges the records based on the flow key criteria specified either on the command line, or in a racluster configuration file, and outputs a valid argus-stream. This tool is primarily used for data mining, data management and report generation.
The default action is to merge status records from the same flow and argus probe, providing in some cases huge data reduction with limited loss of flow information.

Racluster is easy to use but hard to master; here's a simple usage to reconstruct the network session from multiple network flow records.

shell>racluster -L0 -nr http-download.arg3 \
-s stime saddr sport dir daddr dport spkts dpkts
StartTime SrcAddr Sport Dir DstAddr Dport SrcPkts DstPkts
23:43:22.024899 -> 7801 11981

It really is that simple. To explain this network session -

Start Time - 23:43:22.024899
Source Address -
Source Port - 51371
Destination Address -
Destination Port - 80
Source Packets - 7801
Destination Packets - 11981

Start Time is when the network session started; the other fields are pretty self-explanatory except Source Packets and Destination Packets. Source Packets counts how many packets were sent by the Source Address, and Destination Packets counts how many packets were sent by the Destination Address. To generate a summary of this network session, you can run -

shell>racluster -L0 -nr http-download.arg3 \
-s dur pkts bytes

Dur TotPkts TotBytes
405.100830 19782 18047455

This network session lasted approximately 405 seconds; the total packets in this network session is 19782, and the total bytes is 18047455. Yes, this is where network flow analysis can be useful - traffic accounting - but I won't explain it much here since that's another topic.
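
What racluster does here can be pictured as merging the seven status records from the earlier ra output: keep the earliest start time and sum the counters. Checking with the SrcPkts/DstPkts values shown above -

```python
# (SrcPkts, DstPkts) from the seven 60-second status records shown by ra earlier
status = [(1165, 1800), (1186, 1807), (1246, 1919), (1125, 1751),
          (1160, 1759), (1080, 1664), (839, 1281)]

spkts = sum(s for s, d in status)
dpkts = sum(d for s, d in status)
print(spkts, dpkts, spkts + dpkts)   # 7801 11981 19782
```

The per-direction sums match the single racluster record (7801 and 11981), and their total matches the 19782 packets reported by capinfos, so no packets were lost in the flow accounting.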

Maybe I should make this post title sound more interesting, like "Network Flow Demystified". There are other network flow topics I haven't mentioned here, such as Cisco NetFlow, the unidirectional vs bidirectional model, and other interesting flow metrics provided by argus; I hope to close the gap in coming posts.

Enjoy (;])

Friday, December 28, 2007

SANS: Christmas Packet Challenge

I was back from Singapore and still in holiday mood. Yesterday, while chatting with my friend ayoi, he told me that a SANS Incident Handler (Lorna Hutcheson) has posted the Christmas Packet Challenge, which you can find here -

To be honest, lazy as I am, I didn't take a look at first, but then I thought this might refresh my packetysis skills since I haven't really done that for a while. If any of you have spare time to kill, feel free to try it out.

I primarily used the HeX 1.0.2 liveCD for this game. I'm not too sure if I finished the game, but I have sent my write-up to the SANS Incident Handlers. Interestingly, the email I sent was blocked by an email filter. Check out the screenshot -

Spammer that I am, I figured it was because I had two URLs in the email; one is -

The other one is my own blog URL, which resides in my email signature. I deleted both URLs and tried to send the email again, and finally it got through. Sometimes the false positive thing is really annoying.

I will post my write-up once the handler has posted the answer to the challenge.

Anyway it's end of the year, back to holiday mood again ..... zzZZZ

Cheers (;])

Tuesday, December 25, 2007

Christmas Gift

Merry Christmas to everyone!

Thanks to my friend KMChow, who accidentally found this interesting joke; I would like to share it with everyone here.


IP Address:
Registrar: TUCOWS INC.
Whois Server:
Referral URL:

IP Address:
Whois Server:
Referral URL:

IP Address:
Registrar: ONLINENIC, INC.
Whois Server:
Referral URL:

Server Name:
IP Address:
Whois Server:
Referral URL:

IP Address:
Whois Server:
Referral URL:

Server Name:
IP Address:
Registrar: TUCOWS INC.
Whois Server:
Referral URL:

Server Name:
IP Address:
Registrar: TUCOWS INC.
Whois Server:
Referral URL:

Server Name:
IP Address:
Whois Server:
Referral URL:

Server Name:
IP Address:
Registrar: ENOM, INC.
Whois Server:
Referral URL:

Server Name:
IP Address:
Registrar: ENOM, INC.
Whois Server:
Referral URL:

Server Name:
Whois Server:
Referral URL:

Server Name:
IP Address:
Registrar: GKG.NET, INC.
Whois Server:
Referral URL:

Server Name:
Registrar: TUCOWS INC.
Whois Server:
Referral URL:

Server Name:
IP Address:
Registrar: DOTSTER, INC.
Whois Server:
Referral URL:

Server Name:
IP Address:
Registrar: GODADDY.COM, INC.
Whois Server:
Referral URL:

Server Name:
Registrar: OVH
Whois Server:
Referral URL:

Server Name:
IP Address:
Whois Server:
Referral URL:

Server Name:
IP Address:
Whois Server:
Referral URL:

IP Address:
Whois Server:
Referral URL:

Server Name:
IP Address:
Whois Server:
Referral URL:

Truncated output .....

The output is much longer. Wait, don't be happy yet if you are a Windows hater! Check out more information below.


At first I didn't really look at the output and compare the records; I was wondering which whois server I was querying by default, so I looked at the DNS traffic. Tracing down the wire is always easy -

2007-12-26 01:54:07.173250 IP (tos 0x0, ttl 64, id 57639, offset 0, flags [DF], proto UDP (17), length 62) > [udp sum ok] 50526+ A? (34)
2007-12-26 01:54:07.836447 IP (tos 0x0, ttl 249, id 19808, offset 0, flags [DF], proto UDP (17), length 265) > 50526 q: A? 1/5/5 A ns:[|domain]

Let's do it again, this time -

shell>whois -h google

Whois Server Version 2.0

Domain names in the .com and .net domains can now be registered with many different competing registrars. Go to for detailed information.

Aborting search 50 records found .....

Has the whois server been hacked? I don't think so.

Let's dig further into one of the records -

64 bytes from icmp_seq=1 ttl=47 time=337 ms


;; global options: printcmd
;; Got answer:


;; Query time: 156 msec
;; WHEN: Wed Dec 26 01:30:37 2007
;; MSG SIZE rcvd: 132

Now I do another dig on baidu -


;; global options: printcmd
;; Got answer:

;; Query time: 364 msec
;; WHEN: Wed Dec 26 01:40:20 2007
;; MSG SIZE rcvd: 87

Check out the A record and you will get what I mean; both of them have the same IP address. In fact, you may have already noticed it -

PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=47 time=253 ms

Feel free to visit You can visit others too -

- etc

This is really a subdomain thing; while malicious users can make use of it for nefarious purposes, I think it could also be a Christmas gift. In fact I found this post when googling -

That was back in 2003.

I think the corresponding registrars should take action now to wipe out the invalid information, and if anything was compromised, it should be the DNS (it could also be HTTP, because these days a lot of hosting companies provide a web management interface to edit DNS information - I'm too lazy to verify all this, it's Christmas!). But looking at those domains (some of them), which look malicious and cryptic, I wonder whether they are really doing this for fun or to support other operations such as ad spam.

I think the hackers should check out the quote in the HeX 1.0.2 wallpaper -

Anyway, I'm not too sure if this is Santa's job ..... ho ho ho

Peace o<(;[>

Friday, December 21, 2007

HeX 1.0.2 - The Christmas Release

Ho ho ho, Christmas is around the corner .....

For the sake of it, the HeX development team would like to present to you HeX 1.0.2 - The Christmas Release!!!!! Get it now!

Malaysia Main

- HeX liveCD 1.0.2
- HeX liveCD 1.0.2 md5 checksum
- HeX liveCD 1.0.2 sha256 checksum

Mini liveUSB
- HeX Mini liveUSB 1.0.2
- HeX Mini liveUSB 1.0.2 md5 checksum
- HeX Mini liveUSB 1.0.2 sha256 checksum

US Mirror

- HeX liveCD 1.0.2
- HeX liveCD 1.0.2 md5 checksum
- HeX liveCD 1.0.2 sha256 checksum

Official Annoucement

We are no longer calling this project HeX liveCD but simply HeX, as it has expanded quickly and the liveCD is now just one of the projects under HeX.

Two sub-projects are launched under this release as well -
- NSM Console
- liveUSB

NSM Console

Matthew (Dakrone) is the main developer of NSM Console; here's a short description of it -

NSM Console (Network Security Monitoring Console) is a framework for performing analysis on packet capture files. It implements a modular structure to allow for an analyst to quickly write modules of their own without any programming language experience which means you can quickly integrate all the other NSM based tools to it. Using these modules a large amount of pcap analysis can be performed quickly using a set of global (as well as per-module) options. NSM Console also aims to be simple to run and easy to understand without lots of learning time.

If you want more information about what it is (and what it does), check out this introductory post

You can access NSM Console by clicking the menu -> NSM-Tools -> NSM Console

HeX liveUSB

JJC (enhanced) created the liveUSB so that instead of using a read-only liveCD, you can use a read-write USB thumb drive. Here's a short description of it -

After receiving numerous requests to create a HeX liveUSB Key Image we decided to go ahead and build one. This image includes all of the standard tools that you will find on HeX and it is writable; so you can update things (signatures etc), make changes and so on.

To use the HeX liveUSB, you simply download the image and dd it to your USB key (thumb drive). The 1.0.2 liveUSB is released in line with the liveCD. However, JJC will soon create a liveUSB with more space in case you want to store stuff inside it.

Other Addition(Surprise)

Christmas Gifts for the Analyzt
1. HeXtra 1.0.2(Very soon because it needs to be tested with HeX 1.0.2 before release)
2. aimsnarf - aim protocol analyzer script
3. - argus 3 passive ftp extraction script
4. 4 additional PADS signatures
5. dsniff and honeysnap(thanks dakrone for porting this)
6. rp-Reference added under the analyzt home directory, with a script that downloads useful docs, papers and articles which may assist analyzt wannabes.

Christmas Gifts for Everyone

Everyone loves eye candy, and so do we! Since we call this The Christmas Release, here's your Christmas gift (the shiny new HeX Christmas wallpapers) -

1. HeX-WhiteChristmas.jpg
2. HeX-DarkChristmas.jpg

Thanks Vickson again for his artistic skillz!

Bug Fixes
1. unicornscan run time error
2. svn run time error
3. lsof run time error
4. firefox startup issue
5. pidgin and liferea dbus issue
6. syntax error
7. script command issue
8. ping setuid issue

Other known major and minor issues in the base system are fixed; thanks chfl4gs_

For a quick glance, check out the HeX 1.0.2 liveCD screenshots below -

The White Christmas

The Dark Christmas

Note to Everyone(Mailing List, Trac, Backports and IRC Channel)

For anyone who wants to learn about the network security tools included in HeX, please feel free to ask on the mailing list; and if you have a specific idea for HeX, we welcome your input.

However, if you want to submit a bug report, please use trac and create a ticket; all you need to do is register an account and you can create the bug report ticket quickly. Otherwise, if you report it to the mailing list, the developers will have to create the ticket on your behalf. By helping yourself, you are helping us. Trac is available at -

On the other hand, you can also browse the tickets at -

Just in case the bug has been previously reported.

Feel free to join the IRC Freenode #rawpacket channel if you need "not so real time" support.

From now on, we will have backports too. The backports basically serve extra application packages that are not available in the HeX base system. To install them, just download them from -

For example to install tftpgrab, just run -

shell>sudo pkg_add -v tftpgrab-0.2.tbz

Last but not least, we are always looking for new contributors and developers. If you are interested in joining us, feel free to email -


To know more about HeX Project, check it out at -

Merry Christmas and happy holidays from the entire HeX Team, see you all in 2008!

Enjoy (;])

Thursday, December 20, 2007

Tip for RTFM

Read The F*ing Manual (RTFM) is one of the most famous quotes around. Most of the time we can read a manual page by using the command -

shell>man ls

There you will be able to read the manual page for the ls command. But what if the manual page is not installed in the default path (usually /usr/share/man, though it may vary between operating systems)? You can do this if you want to read the man page of the argus client tool racluster.

shell>nroff -man racluster.5 | less

And if you want to convert it to HTML format, just use man2html -

shell>man2html racluster.5 > racluster.5.html

Here's the html page -

Pretty simple, isn't it?

Peace ;]

VMware Inconsistent Time Issue: Blame the clock rate

Here's an interesting thing I found when running a FreeBSD VM on my laptop: the problem is inconsistent time, which can't be solved even with this post.

You get to blame the clock rate, because that's the cause of inaccurate timing when running a FreeBSD VM under VMware. I unplugged my power cable and checked dmesg -

[14580.568000] /dev/vmmon[26067]: host clock rate change request 228 -> 100
[14580.568000] /dev/vmmon[26067]: host clock rate change request 100 -> 228
[14956.836000] /dev/vmmon[26067]: host clock rate change request 228 -> 100
[14956.836000] /dev/vmmon[26067]: host clock rate change request 100 -> 228
[15333.080000] /dev/vmmon[26067]: host clock rate change request 228 -> 100
[15333.080000] /dev/vmmon[26067]: host clock rate change request 100 -> 228
[15709.332000] /dev/vmmon[26067]: host clock rate change request 228 -> 100
[15709.332000] /dev/vmmon[26067]: host clock rate change request 100 -> 228
[16085.612000] /dev/vmmon[26067]: host clock rate change request 228 -> 100

The time in the VM becomes inconsistent after I unplug the power and returns to normal once I plug it back in; the dmesg says it all. Maybe it's best not to run a FreeBSD VM on a laptop; a desktop will do just fine.

Cheers ;]

Tuesday, December 18, 2007

Ubuntu: Argus 3

I'm currently working hard on network flow analysis, and argus is always my best friend. Another wonderful application suite is silktools, and I think you should try it out if you are into network flow analysis. Anyway, here's the quick installation of the upcoming argus 3 on Ubuntu 7.10.

It is pretty straight forward to get argus 3 installed -

shell>sudo apt-get install libpcap0.8 libpcap0.8-dev flex bison rrdtool

Once you have installed all the dependencies of argus 3, let's download the argus 3 server and its client suite and install them.

shell>wget \

shell>wget \

Once you have downloaded them, you just need to perform the usual compilation steps: decompress them, then configure; make && make install.

And if you still don't know what argus is about, check out this post.

P/S: Both argus and silktools are included in the HeX liveCD.

Enjoy ;]

Monday, December 03, 2007

PADS: Sigs For Belkin ADSL Router

If you have a Belkin ADSL router running in your network, it's good to identify what services it runs. There are actually 2 network services running on the Belkin ADSL router: web and telnet.

After examining the network traffic, I decided to write PADS signatures for it so that I can track the network assets passively. If I'm not mistaken, the Belkin ADSL router runs Micro Httpd, which you can find here -

I have also examined the telnet traffic so that I could write a sig for it. I wrote the rough signatures quickly, and it's great to have them working properly after some testing -

# Belkin ADSL Router
telnet,v/Belkin Router Telnet///,BCM96358 ADSL Router\r\nLogin:[ ]

www,v/Micro HTTP Server///,Server: micro_httpd\r\n
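
A rough way to sanity-check such signatures before dropping them into PADS is to try the match pattern (the part after the last comma) against sample banners. A sketch in Python, not PADS itself, with hypothetical banner strings -

```python
import re

# The two signatures above, reduced to (service, name, match pattern)
sigs = [
    ("telnet", "Belkin Router Telnet", r"BCM96358 ADSL Router\r\nLogin:[ ]"),
    ("www", "Micro HTTP Server", r"Server: micro_httpd\r\n"),
]

def identify(payload):
    """Return (service, name) of the first signature matching the payload."""
    for service, name, pattern in sigs:
        if re.search(pattern, payload):
            return service, name
    return None

# Hypothetical banners as they might appear on the wire
print(identify("HTTP/1.0 200 OK\r\nServer: micro_httpd\r\nDate: ..."))
print(identify("BCM96358 ADSL Router\r\nLogin: "))
```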

For a quick run, just check out the screenshot below and you will see the host has been identified as running these two services.

I will add these two signatures to the upcoming HeX 1.0.2; the reason we delayed its release is that more bugs were found and there are various things to do.

Enjoy (;])

Saturday, December 01, 2007

HeX: Solution to Time Slowness in VMware Server

Thanks to my friend Richard, who observed the time slowness when running FreeBSD on VMware. I didn't really notice it because the slowness (delay) is very minimal - it scales to something like 10 minutes in 24 hours, and I never ran it for a whole day. I only observed it after Richard reported this issue to me.

To me, time is a critical issue for a network security analyzt (timelining, timestamping and so on), therefore I needed to figure out a solution. I found two great posts which can be considered solutions to the problem, available here and there. You might as well read the comments on Richard's blog post too if you encounter the problem.

Here's a sum-up of the solutions: put these two lines in /boot/loader.conf (if I recall correctly, they are there by default in HeX).


Reboot your VM. However, these two lines won't really solve the problem, only minimize the time slowness. In order to run consistently with local time, you will have to install the VMware tools; follow the instructions in the link above to get it done. Once you have finished installing the VMware tools, you should find vmware-guestd running as a daemon in the background. Now run -


The VMware Tools properties configuration box will pop up, and you will see this in the first tab -

Check the option and click Close. Now you should shut down the VM and check your vmx file to see if this setting is there -

tools.syncTime = "TRUE"

If it is there (else add it manually), just boot the VM again and you should no longer encounter the time slowness problem; the VM will follow the local time and sync (adjust) automatically. For your information, I have done this on VMware Server Console Version 1.0.4 build-56528. Feel free to try it out on VMware Workstation.

Enjoy ;]

HeX: Malaysia Download Mirror

Thanks to Ganux (Terengganu Linux) for their initiative to host a mirror for our HeX liveCD. One of the members, Wariola, came to the November meetup where chfl4gs_ and I presented the HeX project, and decided to contribute the space and bandwidth. So if you are local and want to try out the HeX liveCD, feel free to download it from the local mirror, which is located at -

There are 4 members in the Ganux team: wariola, Ganux, Hardyweb and Dinoz. I'm glad to hear that we have friends taking the initiative to push open source software in other states. As usual, I believe every single bit helps. Thumbs up!

Cheers ;]

Monday, November 26, 2007

MyOSS: December Meetup

Yeah, we are going to have the local meetup again in December. More information can be found at -

For whoever came to the November meetup for the craps presented by me and chfl4gs_, all I can say is thanks for coming; your presence is much appreciated.

Hopefully the meetup is gaining momentum again!!!!!

Enjoy :]

Youtube is down?

Before heading to sleep, I was thinking of checking whether there's an additional video for today's news (hint, hint), but I got this -

It seems that the YouTube servers are still running but returning service unavailable. Anyone have any idea? By the way, time to sleep - it's 2am here.

Peace ;]

Sunday, November 25, 2007

Resources about Data Visualization

I think it is great to share what I have come across and learned along the way. Here are 3 interesting resources about data visualization with great information.

- Infovis

- Vizsec

- Secviz

If you like data visualization (human beings tend to watch rather than read), you should find them fruitful.

Enjoy ;]

Regex Learning Tool: Kregexpeditor

I previously introduced an application to help you learn regular expressions, which you can find here. Here's another similar application called kregexpeditor.

Referring to the screenshot above, you can see a lot of symbols in the toolbar below the title bar. Each represents a certain type of regular expression; you can hover over them to read a description of each symbol. To use one, left-click on it, then left-click again on the grey pane below the toolbar to add it to the regular expression you want to build. For example, click on the 'Beginning of line' symbol and left-click again on the grey pane, and you will see the regex inserted into the ASCII syntax, which is ^.
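
The symbols map one-to-one onto ordinary regex syntax; for example, the 'Beginning of line' symbol is just the ^ anchor -

```python
import re

# "Beginning of line" in the editor corresponds to ^; with MULTILINE it
# matches at the start of every line, not just the start of the string.
pattern = re.compile(r"^shell>", re.MULTILINE)

text = "shell>ls\nsome output\nshell>pwd\n"
print(pattern.findall(text))   # ['shell>', 'shell>']
```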

Some of the symbols in the grey pane can be edited by right-clicking and choosing Edit, which is very flexible when you want to modify them to fit your needs. You may also notice that the second row of the toolbar is quite useful when you want to copy, paste and save the regex you have built.

To make sure the regex matches what you want, just type characters or digits into the big white pane on the right. Once it matches, the match is shown in red instead of black (shown in the screenshot), so you can verify the regex works as expected.

Anyway, I think this is a great tool to learn how regex works in a practical way, together with the cheat sheet I have blogged about previously here. Before I forget, you can get kregexpeditor easily with apt-get on Ubuntu.

Enjoy ;]

Saturday, November 24, 2007

The Art Of Statistic & Probability

I came across this site while googling for network packet sampling; one of the papers is about Sampling for Passive Internet Measurement.

I started to love statistics and probability after I came across network statistics and flow analysis (thanks to NSM), and I seem to be addicted now. However, my lack of knowledge in math always pushes me back, and I need to spend time to understand these things. Anyway, this is not what I want to talk about here; it's more about Project Euclid. I especially like their mission statement.

Project Euclid's mission is to advance scholarly communication in the field of theoretical and applied mathematics and statistics. Project Euclid is designed to address the unique needs of low-cost independent and society journals. Through a collaborative partnership arrangement, these publishers join forces and participate in an online presence with advanced functionality, without sacrificing their intellectual or economic independence or commitment to low subscription prices. Full-text searching, reference linking, interoperability through the Open Archives Initiative, and long-term retention of data are all important components of the project.

The end result is a vibrant online information community for independent and society journals. This will assure that mathematics and statistics will continue to benefit from a healthy balance of commercial enterprises, scholarly societies, and independent publishers.

This is cool; I'm about to download some of the papers and study them. If you are in this field, let me know what you think about it.

Besides this, thanks to kaeru, who has lent me his math books.

Cheers ;]

HeX: The BackPort and Honeysnap Inclusion

I have received a few requests to add honeysnap to the HeX liveCD. As you and I know, HeX can be run as a liveCD or installed to a hard drive, but most people just run it as a liveCD unless they need to do heavyweight network data analysis. Now, however, you can install the packages (we call them backports, as those packages are meant for HeX 2.0, the next major version, but anyone using 1.x can still access the tools that are not included by default). You can find the backports at -

Thanks to dakrone, our new developer, who has spent his precious time creating the honeysnap package and its related packages; you can find his post about honeysnap here. I will show you the remote installation of honeysnap and its related packages in a few steps - just check out the screenshot, as I'm too lazy to copy and paste from the terminal. Click ->

Thanks to the honeynet community and the developers of honeysnap. Honeysnap is in fact a very nifty tool for performing post-processing on pcap data, and we are proud that our liveCD now includes it.

Enjoy ;]

Bogus, Suspicious .....

I read about this and it raised my curiosity. To me, however, most of the statements are more speculation than fact. I have no interest in weighing in on the story itself, because I'm not into it; I'm more into digging through the information-gathering angle. This paragraph caught my eye -

The tainted portable hard disc uploads any information saved on the computer automatically and without the owner's knowledge to and, the bureau said.

Let's have fun with it -

Domain ID:D145807509-LROR
Domain Name:NICE8.ORG
Created On:11-May-2007 07:20:24 UTC
Last Updated On:27-Sep-2007 05:57:07 UTC
Expiration Date:11-May-2008 07:20:24 UTC
Sponsoring Registrar:Xin Net Technology Corporation (R118-LROR)
Registrant ID:JHV8DUH7W9TIL
Registrant Name:ga ga
Registrant Organization:gaga

Registrant Street1:gagaga

Registrant Street2:
Registrant Street3:
Registrant City:gaga
Registrant State/Province:Beijing
Registrant Postal Code:126631
Registrant Country:CN
Registrant Phone:+86.2164729393
Registrant Phone Ext.:
Registrant FAX:+86.2164660456
Registrant FAX Ext.:
Admin Name:ga ga
Admin Organization:gaga

Admin Street1:gagaga

Admin Street2:
Admin Street3:
Admin City:gaga
Admin State/Province:Beijing
Admin Postal Code:126631
Admin Country:CN
Admin Phone:+86.68492333
Admin Phone Ext.:
Admin FAX:+86.4660456
Admin FAX Ext.:
Tech Name:ga ga
Tech Organization:gaga

Tech Street1:gagaga

Tech Street2:
Tech Street3:
Tech City:gaga
Tech State/Province:Beijing
Tech Postal Code:126631
Tech Country:CN
Tech Phone:+86.68492333
Tech Phone Ext.:
Tech FAX:+86.4660456
Tech FAX Ext.:
Name Server:NS2.XINNET.CN


Domain ID:D148394330-LROR
Domain Name:WE168.ORG
Created On:02-Jul-2007 14:22:33 UTC
Last Updated On:01-Sep-2007 03:53:20 UTC
Expiration Date:02-Jul-2008 14:22:33 UTC
Sponsoring Registrar:Xin Net Technology Corporation (R118-LROR)
Registrant Name:yon gge
Registrant Organization:yongge

Registrant Street1:yongge

Registrant Street2:
Registrant Street3:
Registrant City:yongge
Registrant State/Province:Beijing
Registrant Postal Code:123000
Registrant Country:CN
Registrant Phone:+86.2164729393
Registrant Phone Ext.:
Registrant FAX:+86.2164660456
Registrant FAX Ext.:
Admin Name:yon gge
Admin Organization:yongge

Admin Street1:yongge
Admin Street2:
Admin Street3:
Admin City:yongge
Admin State/Province:Beijing
Admin Postal Code:123000
Admin Country:CN
Admin Phone:+86.68492333
Admin Phone Ext.:
Admin FAX:+86.4660456
Admin FAX Ext.:
Tech Name:yon gge
Tech Organization:yongge

Tech Street1:yongge

Tech Street2:
Tech Street3:
Tech City:yongge
Tech State/Province:Beijing
Tech Postal Code:123000
Tech Country:CN
Tech Phone:+86.68492333
Tech Phone Ext.:
Tech FAX:+86.4660456
Tech FAX Ext.:
Name Server:NS2.XINNET.CN

If you compare the two entries side by side, many of the fields are practically identical. I'm still wondering whether the domains will be taken down. By the way, check out the Beijing postal codes here or here. Of course, I haven't really verified the information on those sites, but it's interesting.
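If you want to do this kind of comparison programmatically, here's a minimal sketch. The dict layout and field names are my own simplification; only the values come from the two whois records above.

```python
# Compare two whois records field by field to spot shared infrastructure.
# Only fields quoted in the whois output above are included here.
nice8 = {
    "registrar": "Xin Net Technology Corporation",
    "state": "Beijing",
    "reg_phone": "+86.2164729393",
    "reg_fax": "+86.2164660456",
    "admin_phone": "+86.68492333",
    "nameserver": "NS2.XINNET.CN",
}
we168 = {
    "registrar": "Xin Net Technology Corporation",
    "state": "Beijing",
    "reg_phone": "+86.2164729393",
    "reg_fax": "+86.2164660456",
    "admin_phone": "+86.68492333",
    "nameserver": "NS2.XINNET.CN",
}

# Fields with identical values in both records
shared = {k for k in nice8 if nice8[k] == we168.get(k)}
print(sorted(shared))
```

Identical registrar, phone, fax and name server across two "gaga"/"yongge" registrants is exactly the kind of overlap that makes both records look like the work of one party.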

Peace ;]

Friday, November 23, 2007

TCP/IP Perversion

I came across this blog post from Tyler Reguly about TCP/IP Perversion, which was presented by Rares Stefan at SecTor 2007. I can't find any presentation slides that are publicly available, so I don't really know much about it, but it looks very interesting to me because it might give NSM a hard time by offering a false sense of the data (this is more than evasion - this is really unpredictable). I may be kidding, but you can check out the post here -

Would someone mind enlightening me about this?

Thanks to Tyler Reguly for summarizing the presentation and posting it up. I'm pretty eager to learn more about it.

Cheers ;]

Thursday, November 22, 2007

Mix Post But Helpful

I'm trying to move to VMware Server instead of VMware Workstation now. The installation process is pretty straightforward on Ubuntu 7.10, but I encountered an issue when trying to load a virtual appliance that I had created using VMware Workstation, because of an incompatibility problem. I found the fix in this post -

On the other hand, I found a great tip on creating a FreeBSD application package from its port. I used to use the make package command to create FreeBSD packages, but I think you should check out this one too -

Another great post I want to share here is about network taps. Many people (management suckos) don't believe this, but let's listen to the expert here. This is exactly the "fail-open" behaviour that you need.

That's about it - hopefully the posts that helped me will help you too.

Peace ;]

HeX: Welcome New Team Members

This is great news, at least for the development of the HeX liveCD. We are pleased to welcome Matthew Lee Hinman (Dakrone) and JJ Cummings (Enhanced) to the development of the HeX liveCD.

For your information, both of them have been very supportive and helpful throughout the development of the HeX liveCD. JJ Cummings is also the co-developer of the Inprotect project and a long-time HeX liveCD mirror provider for the US, while Matthew Lee Hinman joined us just recently but has been helping fix bugs, creating ports and contributing analysis scripts which will be imported into HeX soon.

Hopefully, with more developers now, we can deliver the next shiny version of the HeX liveCD -> 2.0!!!!!

Thanks (;])

Wednesday, November 21, 2007

Ubuntu: Rumint

I plan to buy the book Security Data Visualization, written by Greg Conti. I'm not much into the visualization field, so I guess it might be good for me to learn more about it with the help of this book. If you are looking for more, you can find other resources/books recommended by Greg here.

Greg Conti has also written a tool called rumint (room-int) to visualize network packets. Its main supported platform is Windows, but no worries - we have wine to run rumint. Assuming you have wine installed in the first place, here's how I got rumint running on Ubuntu 7.10.

shell>wget \


shell>cd rumint_2.14_distro/

shell>wine ./setup.exe

You need winpcap if you want to do real-time processing of the network packets seen by your network interfaces; however, I couldn't get that working even with winpcap installed successfully. But you can still load pcap data into rumint. To launch rumint, just run -

shell>cd ~/.wine/drive_c/"Program Files"/rumint; wine ./rumint_214.exe

Here's the screenshot -

In order to load the pcap data, just click on File -> Load PCAP Dataset and choose the data you want to load, then click on the Play button. You can also tune the settings for its filters based on colour or ports under Toolbars -> Filters. Once you have clicked the Play button, it will start replaying the packets, and there are 7 supported view formats such as Text Rainfall, Byte Frequency and Parallel Plot. Check out the next two screenshots below.

Here we have more views! I like the Parallel Plot and Detail view. You can also pause, stop or fast forward the replay of the pcap data.

Currently you can only do post-processing of pcap data under wine, since there's an issue with winpcap. But that's good enough when you want to perform packet visualization analysis. To get a good understanding of the visualization techniques offered by rumint, check out the link below.

Hopefully this post gives you a quick glance at what rumint offers and raises your interest in the security data visualization field.

Enjoy (;])

Tuesday, November 20, 2007

PADS: Signature Contribution

Thanks to Kinstonian, who has sent me PADS signatures which I would like to post here. Credit goes to him; I will add the signatures to the upcoming HeX 1.0.2 and only commit them to the PADS development source tree once they are tested. Here are the words from Kinstonian -

I've revised the signatures somewhat. I searched google images for Windows command prompts and the updated windows shell signature should detect Windows 2000, XP, 2003 and 2008 command prompts. I've tested it with netcat and it works.

ftp,v/Serv-U FTP Server/$1//,220-{0,1} {0,1}Serv-U FTP Server (v\d\.\d+) for WinSock ready

windowsshell,v/Windows $1Command Prompt//$2/,Microsoft Windows (.*)\[(.+)\]

I'd like to write more signatures, but I'd like to refresh my regex knowledge first and would need to find the time. However, I'll email you with any other signatures I write in the future.

So there are two signatures submitted by Kinstonian: one for the Serv-U FTP server and the other for the Windows command prompt. If you are running either of them, feel free to test the signatures.
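If you want a quick offline check before pointing PADS at live traffic, you can exercise the regex portions of the two signatures directly. The banners below are hypothetical examples I made up for illustration, not captures from real hosts:

```python
import re

# Regex portions of Kinstonian's two PADS signatures (copied from above)
serv_u = re.compile(r"220-{0,1} {0,1}Serv-U FTP Server (v\d\.\d+) for WinSock ready")
windows_shell = re.compile(r"Microsoft Windows (.*)\[(.+)\]")

# Hypothetical banners for illustration only
ftp_banner = "220 Serv-U FTP Server v6.0 for WinSock ready"
cmd_banner = "Microsoft Windows XP [Version 5.1.2600]"

print(serv_u.search(ftp_banner).group(1))   # v6.0
m = windows_shell.search(cmd_banner)
print(m.group(1).strip(), "/", m.group(2))  # XP / Version 5.1.2600
```

The `(.*)` before `\[` in the Windows signature is what lets it cover 2000, XP, 2003 and 2008 prompts: whatever sits between "Microsoft Windows" and the bracketed version string becomes the captured variant.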

Thanks Kinstonian, we need more contributors like you.

Enjoy ;]

Now this is really bleeding .....

This is a rather late post about the founder and admin of Bleeding Edge Threats - Matt Jonkman - leaving Bleeding Edge Threats, which he announced here. If you don't know what Bleeding Edge Threats is, check it out here.

First of all, I would like to thank Jonkman for his long-time effort to keep the snort rules and other security-related projects on the sharp edge, and I hope for the best with whatever he means by "something new" in the future.

Now the question is: can we really call it Bleeding Edge anymore? I'm pretty curious what Sensory Networks will come up with for the good of this project.

Anyway, the good news is that Jonkman will still be with us no matter what, because snort is the only thing he knows how to do.

Cheers ;]


I like the idea of having a cheat sheet in your pocket where you can learn things quickly while it also serves as a quick reference. I have spent some time creating this TCPDUMP VS SNOOP cheat sheet, and I think it's good to share it with the world.

The cheat sheet is not only a comparison of these two tools but also provides some usage tips. If you find any technical error in the cheat sheet, feel free to correct me. Following my previous post here, I think this is great for people coming from a tcpdump background who want to learn snoop, and the same applies in the opposite direction.

Cheers (;])

Monday, November 19, 2007

SunOS: Snoopy Dog

When performing network traffic sniffing, capturing or inspection, we usually use the sniffer called tcpdump (to me, sniffer is not the correct term, but let's ignore that here). Sun has developed its own sniffer, called snoop. I think snoop is useful for people who run SunOS-based servers when it comes to network traffic debugging. Anyway, I'm trying all of this on Nexenta OS, which I came across lately, and hopefully this blog post will be useful to me if I ever need to perform a reactive Network Security Monitoring operation on SunOS in the future.

Before doing anything with snoop, I checked out the man page -

shell>man snoop

If you can't find the man page for the command you want to use, you can try this too -

shell>info snoop

Snoop also has primitive support for filter expressions; it is pretty similar to bpf filtering, although I haven't really looked into it much. Just like tcpdump -d, snoop has -C to print the code generated from the filter expression, for either the kernel packet filter or snoop's own filter. For example -

shell>sudo snoop -C ip
Kernel Filter:

2: 129 (0x0081)

4: 3 (0x0003)

6: 2 (0x0002)

7: POP

10: 8 (0x0008)

I haven't really dug into understanding the code like I did for tcpdump -d here.

By default snoop will capture the whole packet unless you specify the snap length with -s (same as tcpdump). There's a very good tip on using the -s option which I would like to show here, as it can be useful for tcpdump users too -

-s snaplen

Truncate each packet after snaplen bytes. Usually the whole packet
is captured. This option is useful if only certain packet header
information is required. The packet truncation is done within the
kernel giving better utilization of the streams packet buffer. This
means less chance of dropped packets due to buffer overflow during
periods of high traffic. It also saves disk space when capturing
large traces to a capture file. To capture only IP headers (no
options) use a snaplen of 34. For UDP use 42, and for TCP use 54.
You can capture RPC headers with a snaplen of 80 bytes. NFS headers
can be captured in 120 bytes.

That's really neat -

- Ethernet header (14) + IP header without options (20) = 34

- Ethernet header (14) + IP header without options (20) + UDP header (8) = 42

- Ethernet header (14) + IP header without options (20) + TCP header without options (20) = 54

To make sure we are capturing IP headers without options enabled, we can also make use of a filter such as -

ip[0] & 0x0F = 5
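The snaplen arithmetic and the IHL filter above can be sanity-checked in a few lines. This is just a sketch of the math from the man page excerpt, not anything snoop-specific:

```python
# Header sizes used in the snaplen arithmetic above
ETHER = 14  # Ethernet header
IP    = 20  # IP header without options (IHL = 5 words * 4 bytes)
UDP   = 8   # UDP header
TCP   = 20  # TCP header without options

print(ETHER + IP)        # 34: enough for bare IP headers
print(ETHER + IP + UDP)  # 42
print(ETHER + IP + TCP)  # 54

# The filter "ip[0] & 0x0F = 5" masks out the IHL nibble of the first
# IP byte; 0x45 means version 4, header length 5 * 4 = 20 bytes, i.e.
# no IP options, so the 34/42/54 snaplens above are safe.
first_ip_byte = 0x45
print(first_ip_byte & 0x0F)  # 5
```

If a packet does carry IP options, the IHL nibble is greater than 5 and the fixed snaplens would truncate the transport header, which is exactly why the filter and the -s values belong together.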

The netstat -i output tells me I can capture on my network interface ae0; here's what I do with snoop to log the network packets to a file -

shell>sudo snoop -q -r -d ae0 -o testing.snp

By default it will print the count of packets seen by your network interface; with -q as quiet mode it won't. You can also specify -D in case you want to monitor the count of packets dropped during the capture period, which is extremely useful to make sure you aren't missing any packets. The -r option is just like -n in tcpdump, avoiding address resolution, while the -o option writes the output to a file, just like -w in tcpdump.

After logging to the file, I check the file format -

shell>file testing.snp
testing.snp: Snoop capture file - version 2 (Ethernet)

You can read it with -

shell>snoop -t a -r -i testing.snp
1 0.00000 -> DNS C _nfsv4idmapdomain.localdomain. Internet TXT ?
2 0.06758 -> DNS R Error: 3(Name Error)
3 0.00047 -> DNS C _nfsv4idmapdomain. Internet TXT ?

I like -t a, which prints the absolute time, similar to tcpdump -tttt. The -i option is just like the -r option in tcpdump for reading a packet dump. You may notice the number of each packet shown in the snoop output too, and you can jump to a certain packet with the -p option. For example -

shell>snoop -t a -r -p 2 -i testing.snp
2 11:44:3.12067 -> DNS R Error: 3(Name Error)

Or you can specify a range: to jump to the packets within the 10-20 range, just specify -p 10,20.

You can also print summary lines with the -V option, which summarizes each packet in human-readable output -

shell>sudo snoop -t a -d ae0 -V

Using device ae0 (promiscuous mode)
10:44:44.86678 nexenta -> ETHER Type=0800 (IP), size=98 bytes
10:44:44.86678 nexenta -> IP D= S= LEN=84, ID=24675, TOS=0x0, TTL=255
10:44:44.86678 nexenta -> ICMP Echo request (ID: 8040 Sequence number: 0)
10:44:44.86682 -> nexenta ETHER Type=0800 (IP), size=98 bytes
10:44:44.86682 -> nexenta IP D= S= LEN=84, ID=11586, TOS=0x0, TTL=128
10:44:44.86682 -> nexenta ICMP Echo reply (ID: 8040 Sequence number: 0)

If you want the packets printed in side-by-side hexadecimal and ASCII output, like -XX in tcpdump, you just need to specify -x 0 in snoop. Here's an example command you can use -

shell>snoop -x 0 -t a -r -i testing.snp
15 11:01:37.34663 -> ICMP Destination unreachable (UDP port 34901 unreachable)

0: 0050 56f8 6c66 000c 2999 4f2b 0800 4500 .PV.lf..).O+..E.
16: 0070 8e9d 4000 ff01 3647 ac10 2f85 ac10 .p..@...6G../...
32: 2f02 0303 0c7b 0000 0000 4500 00af 2f13 /....{....E.../.
48: 0000 8011 5483 ac10 2f02 ac10 2f85 0035 ....T.../.../..5
64: 8855 009b 69cb fb53 8180 0001 0001 0002 .U..i..S........
80: 0002 0231 3202 3432 0237 3503 3230 3207 ...
96: 696e 2d61 6464 7204 6172 7061 0000 0c00
112: 01c0 0c00 0c00 0100 000b ff00 1807 ..............
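Out of curiosity, the side-by-side layout itself is easy to approximate. Here's a rough Python sketch of snoop's -x output style (16 bytes per row, hex grouped into 2-byte words, non-printables shown as dots) - my own approximation for illustration, not snoop's actual formatting code:

```python
def snoop_dump(data: bytes) -> str:
    """Approximate snoop-style side-by-side hex/ASCII dump, 16 bytes per row."""
    lines = []
    for off in range(0, len(data), 16):
        row = data[off:off + 16]
        # Group hex digits into 2-byte (4 hex digit) words, like snoop does
        words = [row[i:i + 2].hex() for i in range(0, len(row), 2)]
        ascii_col = "".join(chr(b) if 32 <= b < 127 else "." for b in row)
        lines.append(f"{off:4d}: {' '.join(words):<39} {ascii_col}")
    return "\n".join(lines)

# Made-up sample bytes: a fake MAC prefix followed by an HTTP request line
sample = b"\x00\x50\x56\xf8\x6c\x66GET / HTTP/1.0\r\n"
print(snoop_dump(sample))
```

Each row carries the byte offset, up to eight 4-digit hex words, and the printable-ASCII column, which is the same three-part structure you can see in the snoop output above.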

If you want output that looks like tshark's, printing each protocol header in detail, you can use -v. Example output for a single packet is shown below -

shell>snoop -v -t a -r -i testing.snp
ETHER: ----- Ether Header -----
ETHER: Packet 1 arrived at 11:01:25.14378
ETHER: Packet size = 98 bytes
ETHER: Destination = 0:50:56:f8:6c:66,
ETHER: Source = 0:c:29:99:4f:2b,
ETHER: Ethertype = 0800 (IP)
IP: ----- IP Header -----
IP: Version = 4
IP: Header length = 20 bytes
IP: Type of service = 0x00
IP: xxx. .... = 0 (precedence)
IP: ...0 .... = normal delay
IP: .... 0... = normal throughput
IP: .... .0.. = normal reliability
IP: .... ..0. = not ECN capable transport
IP: .... ...0 = no ECN congestion experienced
IP: Total length = 84 bytes
IP: Identification = 36451
IP: Flags = 0x0
IP: .0.. .... = may fragment
IP: ..0. .... = last fragment
IP: Fragment offset = 0 bytes
IP: Time to live = 255 seconds/hops
IP: Protocol = 1 (ICMP)
IP: Header checksum = 5d58
IP: Source address =,
IP: Destination address =,
IP: No options
ICMP: ----- ICMP Header -----
ICMP: Type = 8 (Echo request)
ICMP: Code = 0 (ID: 8122 Sequence number: 0)
ICMP: Checksum = 3bf2

I think that's all for the snoopy dog. In fact, this post is more about tcpdump vs snoop, but I think both are great, so there's no fight between them. If any of you have better knowledge of using snoop, please do share, as I still consider myself a newbie at using it practically.

Enjoy (;])

Sunday, November 18, 2007

Regular Expressions: Another good resource

This is a great reference for people who want to learn about regular expressions; feel free to check it out -

Thanks to Dave, who has created the regex cheat sheet with straightforward explanations.

Cheers ;]

Packets -> Flows -> CSV -> Graph

The Comma-Separated Values (CSV) file format is widely used, and it can be easily parsed by lots of graphing tools. Here's a simple trick to generate CSV data from a packet dump (pcap) with the use of the upcoming argus 3 and a pipe.

Say I downloaded the slammer.pcap that is available at the wireshark sample capture wiki -

shell>argus -w - -r slammer.pcap | \
ra -nnr - -c ',' -s saddr daddr dport - ip,,1434

There's only one flow, but you get the idea of how to generate CSV output from a packet dump (pcap). The next thing to do is to generate the graph; I won't show that here, but you are free to use any application such as OpenOffice Spreadsheet or afterglow for that purpose.

The good thing about argus is that it provides a wide range of useful flow metrics, so you can generate a rich set of data for graphing purposes.
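Once you have the CSV flows, feeding them into a script is trivial. Here's a hypothetical sketch: the flow lines below are made up, and only the saddr,daddr,dport column order comes from the ra command above.

```python
import csv
import io
from collections import Counter

# Hypothetical CSV flow output in the saddr,daddr,dport column order
# produced by the ra command above (addresses are invented examples).
flows = io.StringIO(
    "10.0.0.1,192.168.1.5,1434\n"
    "10.0.0.2,192.168.1.5,1434\n"
    "10.0.0.1,192.168.1.9,80\n"
)

# Count flows per destination port - a ready-made series for any grapher
per_port = Counter(dport for _saddr, _daddr, dport in csv.reader(flows))
print(per_port.most_common())
```

From here, a bar chart of flows per port (or per source address, or any other argus metric you select with -s) is one spreadsheet import or matplotlib call away.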

Enjoy (;])

Hub Seeker

Yes, I'm looking for an ethernet hub. It is pretty hard to find a network hub in Malaysia now, and I know many companies just throw theirs away or leave their old network hubs in the store room to collect dust, because network switches are cheap and better, and network hubs are obsolete from their point of view.

This is not a joke: if your company has unused network hubs lying around producing no value at all, I would like to give them a new life in my research lab - especially from local companies, since shipping is really expensive.

If you feel that you can help me, drop me an email. Thanks!

Cheers ;]

InfoSec Technical Forum

CyberSecurity Malaysia (previously known as MyCERT/NISER) will be organizing the event; you can find the details at -

The topics sound interesting, but I'm not too sure whether it is more business- or technically-oriented. As far as I know, Malaysia's events are supposed to be technical but turn out business-oriented most of the time, which is why I avoid participating.

Anyway I might be going there to meet my friends.

Peace ;]

Ubuntu: msttcorefonts problem fixed

This is just a note to myself, as I had a problem when installing msttcorefonts on Ubuntu Linux, but anyway it is fixed. The problem is due to a wrong proxy setting, which you can find under System -> Preferences -> Network Proxy. If you encounter a similar problem when installing other packages, this post may give you a hint on how to fix it as well. I have been too lazy to post this up, but anyway, here you go -

I encountered the problem below when installing msttcorefonts and tried various ways to fix it (e.g. removing or repairing with the dpkg tool) but had no luck. Here's the error I got when I tried apt-get remove --purge msttcorefonts -

Blablabla .....
dpkg: error processing msttcorefonts (--purge):
subprocess pre-removal script returned error exit status 1

These fonts were provided by Microsoft "in the interest of cross-
platform compatibility". This is no longer the case, but they are
still available from third parties.

You are free to download these fonts and use them for your own use,
but you may not redistribute them in modified form, including changes
to the file name or packaging format.

Error parsing proxy URL http://:8080/: Invalid host name.
Error parsing proxy URL http://:8080/: Invalid host name.
Error parsing proxy URL http://:8080/: Invalid host name.
Error parsing proxy URL http://:8080/: Invalid host name.
Error parsing proxy URL http://:8080/: Invalid host name.
Error parsing proxy URL http://:8080/: Invalid host name.
Error parsing proxy URL http://:8080/: Invalid host name.
Error parsing proxy URL http://:8080/: Invalid host name.
Error parsing proxy URL http://:8080/: Invalid host name.
Error parsing proxy URL http://:8080/: Invalid host name.
Error parsing proxy URL http://:8080/: Invalid host name.
Error parsing proxy URL http://:8080/: Invalid host name.
Error parsing proxy URL http://:8080/: Invalid host name.
andale32.exe: No such file or directory

All done, errors in processing 1 file(s)
dpkg: error while cleaning up:
subprocess post-installation script returned error exit status 1
Errors were encountered while processing:
E: Sub-process /usr/bin/dpkg returned an error code (1)

I tried searching the Ubuntu forums with no luck, and finally figured out that I could fix it by commenting out these two lines in /var/lib/dpkg/info/msttcorefonts.postinst -

# db_get msttcorefonts/http_proxy
# http_proxy=$RET

Now I can run -

shell>sudo apt-get remove --purge msttcorefonts

Problem fixed. Now you need to get your proxy setting right, or just remove it to use a direct connection, and reinstall msttcorefonts.
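The root cause is visible in the error message itself: the proxy URL http://:8080/ has a port but an empty host, which no URL parser will accept. A quick check with Python's urlparse illustrates why the download step chokes (the "fixed" proxy hostname below is a made-up example, not part of the actual fix):

```python
from urllib.parse import urlparse

# The broken proxy URL straight from the error output above
broken = urlparse("http://:8080/")
# A hypothetical, well-formed proxy URL for comparison
fixed = urlparse("http://proxy.example.com:8080/")

print(broken.hostname)               # empty host - this is the "Invalid host name"
print(fixed.hostname, fixed.port)
```

In other words, the GNOME proxy dialog had a port filled in with a blank host field, so the postinst script exported an unusable http_proxy; either fill in a real host or clear the proxy entirely.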

Cheers ;]

Thursday, November 15, 2007

MyOSS November Meetup

After being idle for a while, the MyOSS Meetup reboots on November 22nd. chfl4gs_ and I will represent the HeX development team and give a talk about our open source project, titled "HeX liveCD Development & Showcase". More information about the meetup can be found here.

For your information, the MyOSS Meetup is a local monthly FOSS event - feel free to join us!

Enjoy ;]

Wednesday, November 14, 2007

MyOSS: Basketball

It's been a long time since I last played basketball, and now we are going to have a basketball game tomorrow, November 15th. All myossers are welcome to join us! The details of the event can be found at the link below -

Enjoy ;]