Getting a clearer picture of HTTP response time breakdown via CLI

I came across this handy Python script https://github.com/reorx/httpstat that provides an HTTP response breakdown in text. This saves you having to open up a browser and look at a visual network response waterfall.
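It appears the script is also published on PyPI, so if you would rather not fetch the file by hand, something like this should work (my sketch; check the project README for your environment):

$ pip install httpstat
$ httpstat http://ronaldbradford.com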

For example, using my website homepage and blog for comparison.

$ python httpstat.py http://ronaldbradford.com

HTTP/1.1 200 OK
Date: Fri, 23 Sep 2016 16:52:09 GMT
Server: Apache/2.4.7 (Ubuntu)
X-Powered-By: PHP/5.5.9-1ubuntu4.17
Vary: Accept-Encoding,User-Agent
Cache-Control: max-age=1
Expires: Fri, 23 Sep 2016 16:52:10 GMT
Transfer-Encoding: chunked
Content-Type: text/html

Body stored in: /var/folders/mk/0v6thtzd7mb9sb9r4fhv4bcc0000gn/T/tmpK_foIX

  DNS Lookup   TCP Connection   Server Processing   Content Transfer
[    72ms    |      27ms      |       35ms        |       39ms       ]
             |                |                   |                  |
    namelookup:72ms           |                   |                  |
                        connect:99ms              |                  |
                                      starttransfer:134ms            |
                                                                 total:173ms
$ python httpstat.py http://ronaldbradford.com/blog/

HTTP/1.1 200 OK
Date: Fri, 23 Sep 2016 16:52:39 GMT
Server: Apache/2.4.7 (Ubuntu)
X-Powered-By: PHP/5.5.9-1ubuntu4.17
X-Pingback: http://ronaldbradford.com/blog/xmlrpc.php
Vary: Accept-Encoding,User-Agent
Cache-Control: max-age=1
Expires: Fri, 23 Sep 2016 16:52:40 GMT
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8

Body stored in: /var/folders/mk/0v6thtzd7mb9sb9r4fhv4bcc0000gn/T/tmpn5R1f2

  DNS Lookup   TCP Connection   Server Processing   Content Transfer
[     5ms    |      34ms      |       129ms       |       790ms      ]
             |                |                   |                  |
    namelookup:5ms            |                   |                  |
                        connect:39ms              |                  |
                                      starttransfer:168ms            |
                                                                 total:958ms

Note that 301 redirects are not followed, so be sure you are getting the full content you expect in a request.

$ python httpstat.py http://ronaldbradford.com/blog

HTTP/1.1 301 Moved Permanently
Date: Fri, 23 Sep 2016 16:52:22 GMT
Server: Apache/2.4.7 (Ubuntu)
Location: http://ronaldbradford.com/blog/
Cache-Control: max-age=1
Expires: Fri, 23 Sep 2016 16:52:23 GMT
Content-Length: 322
Content-Type: text/html; charset=iso-8859-1

Body stored in: /var/folders/mk/0v6thtzd7mb9sb9r4fhv4bcc0000gn/T/tmptLSJTv

  DNS Lookup   TCP Connection   Server Processing   Content Transfer
[     5ms    |      61ms      |       39ms        |        0ms       ]
             |                |                   |                  |
    namelookup:5ms            |                   |                  |
                        connect:66ms              |                  |
                                      starttransfer:105ms            |
                                                                 total:105ms
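If you do want the redirect followed, httpstat passes any extra arguments through to the underlying curl call, so appending curl's -L option should do it (my sketch; a few curl options are reserved by httpstat, so check your version):

$ python httpstat.py http://ronaldbradford.com/blog -L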

Linux One Liner – tree alternative

Linux has a cool command called tree that gives a more visual representation of your directory structure. If you have the misfortune of working on a Unix variant that doesn’t have it, check out this cool one liner.

ls -R . | grep ":$" | sed -e 's/:$//' -e 's/[^-][^/]*\//--/g' -e 's/^/   /' -e 's/-/|/'
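For the curious, here is the same pipeline broken apart, one sed expression per stage, with my own annotations:

$ ls -R . |                         # recursive listing: each directory prints as a "path:" header
    grep ":$" |                     # keep only those directory header lines
    sed -e 's/:$//' |               # strip the trailing colon
    sed -e 's/[^-][^/]*\//--/g' |   # collapse each parent path component into "--"
    sed -e 's/^/   /' |             # indent every line
    sed -e 's/-/|/'                 # turn the first dash into "|" to draw the branch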

Thanks for the command, Tom.

Linux One Liner – Finding Stuff

Let’s say you created a file in your home directory but can’t work out which directory you put it in.


$ find ~ -name somefile.txt

You can replace ~ (tilde) with a directory, e.g. / (slash), to search all locations on your system.

Let’s say you want to find all the JPEGs you have.

$ find ~ -name "*.jpg"

Now to improve on this: I know I put a JPEG somewhere in the past few days, so give me just the files from the past 3 days.

$ find . -name "*.jpg" -mtime -3

And what if you only wanted files greater than 2MB?

$ find . -name "*.jpg" -mtime -3 -size +2M

If you want to look at a more detailed listing of the files found, like the format you are familiar with using ls, try this.

$ find . -name "*.jpg" -mtime -3 -exec ls -l {} \;
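As an aside of mine (not from the original tip), find can also batch the results so ls runs once over many files rather than once per file:

$ find . -name "*.jpg" -mtime -3 -exec ls -l {} +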

You can find more than files; let’s find all the directories you have.

$ find ~ -type d

I haven’t added it, but historically you needed to add -print on the end to print the results; nowadays it is the default action.

I briefly used the -exec option above; I use it for various purposes. Here are a few.

$ find /backup-location -name "backup.*.tar.gz" -mtime +3 -print -exec rm -f {} \;
$ find . -name CVS -exec rm -rf {} \;

The first I run against my backup directory; it removes the online backups older than 3 days. Of course I’ve also got offline backups.
The second is for when I check out stuff from CVS and want to prune all the CVS information. NOTE: The rm -rf command is very dangerous; you should only use it when you know your stuff. Used in the wrong way it deletes everything, and if you don’t have backups, there ain’t any UNDO in Linux. Also, if you run it as root, you can effectively kill your installation in a split second.
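A safer habit I would suggest (my addition, not part of the original post): preview with -print first, and add -prune so find does not try to descend into a directory it has just removed:

$ find . -type d -name CVS -print                       # dry run: list what would go
$ find . -type d -name CVS -prune -exec rm -rf {} \;    # then remove for real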

There are other commands that perform various levels of finding (e.g. locating commands via your path) and other various stuff. A topic for another time, but to entice you:


$ which find
$ whereis find
$ locate find

Linux One Liner – Parsing long URLs

Ever wanted to look at a long URL more easily, say to investigate a parameter? Here is a search from MapQuest.

http://www.mapquest.com/maps/map.adp?formtype=address&addtohistory=&address=10%20Market%20St&city=San%20Francisco&state=CA&zipcode=94111%2d4801&country=US&geodiff=1


$ echo "[insert url here]" | | tr "&?" "n"

For the above URL this produced the following output:

http://www.mapquest.com/maps/map.adp

formtype=address
addtohistory=
address=10%20Market%20St
city=San%20Francisco
state=CA
zipcode=94111%2d4801
country=US
geodiff=1

The translate command tr does, however, strip out the & and ? characters. There are of course many more approaches, like:


echo "[insert url here]" | sed -e "s/&/\n/g" -e "s/?/\n/g"

You can easily preserve the & and ? characters by extending the syntax:

echo "[insert url here]" | sed -e "s/&/\n&/g" -e "s/?/\n?/g

This produces:

http://www.mapquest.com/maps/map.adp

?formtype=address
&addtohistory=
&address=10%20Market%20St
&city=San%20Francisco
&state=CA
&zipcode=94111%2d4801
&country=US
&geodiff=1

Now don’t get me started on the awk command. One of my favorite books is Sed & Awk. If you do any detailed shell scripting, it is a very handy guide.
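Since awk got a mention, here is a rough awk equivalent of the tr version (my sketch): it splits on both characters in one go, one field per line.

$ echo "[insert url here]" | awk -F'[?&]' '{ for (i = 1; i <= NF; i++) print $i }'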

Linux One Liner – Security

Here are a few useful one liners for Linux security. View the current packet filtering rules (i.e. what can and can’t access your computer):

$ iptables -L

On older distros iptables may not be in place; try ipchains instead. A good reference and tools for iptables can be found at www.iptablesrocks.org.
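Depending on the distro you will likely need root, and a couple of extra flags give more useful detail (my addition): -n skips DNS lookups and -v adds packet counts and interfaces.

$ sudo iptables -L -n -v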

Identify open ports on your installation using nmap, the network exploration tool and security scanner.


$ nmap -p 1-65535 localhost

On my computer this returned:

Starting nmap 3.70 ( http://www.insecure.org/nmap/ ) at 2006-06-11 12:22 EST
Interesting ports on lamda.arabx (127.0.0.1):
(The 65525 ports scanned but not shown below are in state: closed)
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
111/tcp open rpcbind
139/tcp open netbios-ssn
445/tcp open microsoft-ds
631/tcp open ipp
901/tcp open samba-swat
8005/tcp open unknown
32769/tcp open unknown
34315/tcp open unknown

That’s a cause for a bit of concern. Will need to look into that more.

Looking into more detail: I know what runs samba-swat, but let’s confirm.


$ fuser -n tcp 901

This provides confirmation and the process ID of the process using this port. A more succinct output would be:

$ ps -ef | grep `fuser -n tcp 901 | tail -1 | cut -d: -f2` | grep -v grep

This gives me:

root 3356 1 0 Jun10 ? 00:00:00 xinetd -stayalive -pidfile /var/run/xinetd.pid

Which is exactly right: Samba SWAT (the web interface for Samba), which you access at http://localhost:901, is configured using xinetd.
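Two other ways I find handy for mapping ports to processes (my addition, not from the original post):

$ lsof -i :901        # what is using port 901
$ netstat -tlnp       # all listening TCP sockets with owning PID/program (run as root to see every owner)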

Now to investigate some ports I didn’t know were open.

Linux One Liner – Using the manual

For users of Linux, regardless of skill level, the OS manual is invaluable. Frank gives an example using crontab at Viewing a specific version of a man page, but as with Linux, there is always more than one way to skin a cat.

To view the man page of a command, e.g. du:

$ man du

The Unix manual is generally broken down into 9 sections, and sometimes a manual page appears in multiple sections. These sections are:

  • Section 1 – Commands
  • Section 2 – System Calls
  • Section 3 – Library Calls
  • Section 4 – Special Files
  • Section 5 – File Formats and Conventions
  • Section 6 – Games for Linux
  • Section 7 – Macro Packages and Conventions
  • Section 8 – System Management Commands
  • Section 9 – Kernel Routines

As in Frank’s example, crontab is in both Sections 1 and 5: crontab the Linux command, and the file format used by crontab. To get access to the latter:

$ man -s 5 crontab

Frank made reference to a syntax of man crontab.5, which didn’t work in my distro, so again, implementations vary.
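On most Linux systems you can also give the section number directly, without the -s flag, which is worth trying before the crontab.5 form:

$ man 5 crontab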

Say you remember the command is associated with cron but not its full name. You can search the man pages with:

$ man -k cron

This produced in my distro:

/etc/anacrontab [anacrontab] (5) - configuration file for anacron
anacron (8) - runs commands periodically
cron (8) - daemon to execute scheduled commands (ISC Cron V4.1)
crontab (1) - maintain crontab files for individual users (ISC Cron V4.1)
crontab (5) - tables for driving cron (ISC Cron V4.1)
hinotes (1) - Syncronize your Hi-Notes database with your desktop machine. Hi-Notes must be installed on your Palm handheld (and at least one entry must exist within Hi-Notes)
read-todos (1) - Syncronize your Palm ToDo application's database with your desktop machine
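As an aside (my note), man -k is the same search exposed as a command of its own:

$ apropos cron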

Of course you should not discount that a manual page exists for the man command.

$ man man

Linux One Liner – Calculating Used Diskspace

You can easily see how much diskspace is used with the command:


$ df
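My one aside here: GNU df will print the same report in human-readable units with -h:

$ df -h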

However, often you want to know where most of the diskspace is being taken. A high level summary can be performed with:

$ du -k -s /* | sort +0nr -1

Producing results like:

23450208        share
9369212 home
3803504 usr
2395876 var
2015380 opt
920121  proc
815476  src
...

A more in-depth review of the worst offending directories can be done with:


$ du -k / | sort +0nr -1 | head -30

This view does, however, show all offending directories, so you normally have to ignore the higher levels, as they are inclusive of the more specific directories where most of the diskspace is.

You get a result like:

47642425        /
23450208        /share
9799580 /home
9153228 /home/rbradfor
8497152 /share/bittorrent
7065840 /share/bittorrent/Stargate.SG-1.Season.9
4986368 /home/rbradfor/vmplayer
4837136 /usr
3659200 /opt
2559836 /home/rbradfor/vmplayer/ultimateLAMP
2447692 /var
2426364 /home/rbradfor/vmplayer/ultimateLAMP-0.1
2377732 /usr/lib
2335428 /var/lib
2213440 /var/lib/vmware
2213432 /var/lib/vmware/Virtual Machines
2174928 /share/lib
2174912 /share/lib/vmware
2174896 /share/lib/vmware/Virtual Machines
1972900 /home/rbradfor/download
1945576 /var/lib/vmware/Virtual Machines/XP Pro Dell 5150
1868016 /share/UltimateLAMP
1604032 /usr/share
...
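One caveat of mine: the +0nr -1 positional syntax used above is obsolete and has been removed from newer GNU coreutils, so on a current system use the -k form instead:

$ du -k -s /* | sort -k1,1 -rn
$ du -k / | sort -k1,1 -rn | head -30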

References

  • df – report filesystem disk space usage
  • du – estimate file space usage
  • sort – sort lines of text files