Wednesday, October 31, 2007

Reference script for CLI color codes

I have been experimenting with adding more color to my bash prompts of late. I find it easier to read when the fields are in distinct colors. The problem I always have is remembering what those wonderfully obscure ANSI escape sequences represent. I always have to look at a table to remind myself that "light red" maps to "1;31".

An interesting guide to configuring your bash prompt, which I have been working through, lists a script that makes this much easier:
# This file echoes a bunch of color codes to the
# terminal to demonstrate what's available. Each
# line is the color code of one foreground color,
# out of 17 (default + 16 escapes), followed by a
# test use of that color on all nine background
# colors (default + 8 escapes).

T='gYw'   # The test text

echo -e "\n                 40m     41m     42m     43m\
     44m     45m     46m     47m";

for FGs in '    m' '   1m' '  30m' '1;30m' '  31m' '1;31m' '  32m' \
           '1;32m' '  33m' '1;33m' '  34m' '1;34m' '  35m' '1;35m' \
           '  36m' '1;36m' '  37m' '1;37m';
  do FG=${FGs// /}
  echo -en " $FGs \033[$FG  $T  "
  for BG in 40m 41m 42m 43m 44m 45m 46m 47m;
    do echo -en "$EINS \033[$FG\033[$BG  $T  \033[0m";
  done
  echo;
done
echo
This is from the Bash Prompt HOWTO, from a script by Daniel Crisman. When run, it prints the test text in every foreground color against all nine backgrounds, with each combination's code alongside.

Just add an alias like alias colors="~/dir/of/handy/scripts/", and you can be reminded whenever you like.
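Once you do know the code you want, a tiny helper function saves retyping the escape sequences. This is my own addition, not from the HOWTO, and the name is made up:

```shell
# Wrap text in an ANSI color code such as "1;31" (light red),
# resetting attributes afterwards with the 0m sequence.
colorize() {
    printf '\033[%sm%s\033[0m' "$1" "$2"
}

colorize '1;31' 'warning'   # prints "warning" in light red
```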

I am still experimenting with my prompt colors, but giving each useful field its own color has made things a lot easier to read.

Tuesday, October 30, 2007

View compressed log files easily

Any linux distribution around these days will have compression and rotation in place for files in /var/log (or wherever else they happen to go). So if you look in there, you will see one or two log files (current and the last one) for each process logged, as well as 3-7 compressed files for the same process. These then get rotated out.

In any case, say you want to look inside one of those compressed log files. You could decompress the file with gunzip, view it, and then delete the temporary copy. Messy. There is a better way.

The first method I found was interesting (from PuppyLinux): simply run man ./COMPRESSEDFILE, and you view the contents of the file in the man page viewer. The annoying thing is that it does funny things with line breaks.

A much better way: zcat COMPRESSEDFILE | less. You view the file in less, and when done, there is no temp file to delete. To make it faster, wrap it in a small shell function (an alias cannot take an argument like $1): viewlog() { zcat "$1" | less; }. This makes viewing those compressed logfiles as painless as viewing the current ones.
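A slightly fancier sketch of the same idea (the name and the $PAGER fallback are my own) handles plain and gzipped logs alike:

```shell
# View a log file whether or not logrotate has gzipped it yet.
# Respects $PAGER if set, defaulting to less.
vlog() {
    case "$1" in
        *.gz) zcat "$1" | "${PAGER:-less}" ;;
        *)    "${PAGER:-less}" "$1" ;;
    esac
}
```

Then vlog /var/log/syslog works the same as vlog /var/log/syslog.2.gz.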

Tuesday, October 23, 2007

Loops in the CLI: Warm and Fuzzy

It seems nearly every week, I find something amazingly handy in bash that I had not used before. This time: Loops. I would bet nearly everyone who has written more than a handful of shell scripts over a few lines in length has used one or more of the loops bash has to offer. But, what I realized (not sure why it did not bash me sooner) is that loops are also very effective timesavers as one-off commands in everyday shell usage.

I suppose the association came from first using the shell in a very simple way, of just giving commands singly, and only using constructs like loops in scripts I wrote out in files. But recently, I had several identical operations to perform on a series of files with consistent names, and a loop came to mind. The files were all mp4 videos, and I needed to convert them with ffmpeg, and send them through flvtool2. Instead of 2 commands typed out for each file, I ran:
for FILE in $(find . -maxdepth 1 -type f -iname \*large.mp4); do ffmpeg -sameq -i "$FILE" -s 480x270 -ar 44100 -r 10 "$FILE.flv"; done
You can use loops just as in any shell script: write the whole loop on one line, separating the condition, the body, and the termination with semicolons (most of which are optional when you use line breaks in a script file).
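The same one-line pattern works for any batch job. Here is a self-contained example (made-up filenames, my own illustration) that is safe to try:

```shell
# In a scratch directory: create some files, then rename every .txt
# to .bak with the whole for loop written on a single line.
cd "$(mktemp -d)"
touch one.txt two.txt three.txt
for FILE in *.txt; do mv "$FILE" "${FILE%.txt}.bak"; done
ls   # one.bak  three.bak  two.bak
```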

Handy Command: Fuser

Fuser is a very handy command when you are trying to investigate what is listening on a box. Consider the following case. You, being a diligent systems administrator, have been performing regular nmap scans against your boxes from remote hosts. You discover that something is listening on port 587 on a server.

What you immediately need to know is: what program is actually listening on that port? The quickest way to find out is to simply run sudo fuser 587/tcp on the box in question. This queries the kernel for the PID listening on the specified port and reveals most of what you need:
587/tcp:              8102
The first column is the port you specified, the second is the PID using that port currently. This can be combined with ps to give you the desired output, such as via echo `sudo /sbin/fuser 587/tcp` | cut -d' ' -f 2 | xargs ps:
8102 ? S 14:13 /usr/libexec/postfix/master
I used echo in this case because I could not decipher the delimiter between the two columns in the default output; echoing the command substitution collapses the whitespace to single spaces, so cut can split on them. The whole thing could be aliased so that you run the alias, pass a port, and get the ps output.

NOTE: fuser is generally found in /sbin or /usr/sbin, which you may have to add to your path.
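The pipeline above can be wrapped into a pair of functions (the names are my own sketch; the fuser call itself needs root and the right path for your distro, so only the parsing half is demonstrated on canned output):

```shell
# Collapse fuser's ragged whitespace by echoing it unquoted,
# then take the second space-separated field (the PID).
pid_from_fuser() {
    echo $1 | cut -d' ' -f 2
}

# Tie it together: port number in, ps line out (requires root,
# and /sbin/fuser may live elsewhere on your distro).
portps() {
    pid_from_fuser "$(sudo /sbin/fuser $1/tcp)" | xargs ps
}

pid_from_fuser '587/tcp:              8102'   # prints 8102
```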

Saturday, October 20, 2007

Easier Sharing: SSHFS

My last post was on how I used Samba to share files from my central media server to other machines on my local network. Seemed great at first, but having used it for a little while, I was rather disappointed. I had Rhythmbox looking at the mounted drive as my library. It started scanning the files once the directory was mounted. I could play them, but after a few minutes, Rhythmbox would freeze scanning the files. In addition, I could not even list the directory that contained the mounted directory. Something was getting frozen. This kept happening more and more, and was quite frustrating.

I then tried SSHFS for the same purpose, and it has been working much better. If you can SSH to the machine with the files you want to share, then you don't have to make any changes to that computer. Just a few steps on the machine that will be accessing the remote store, and you are ready to go. From the top:
  • Create the dir to which the remote dir will be mounted (e.g. /home/myuser/Music)
  • Make sure you have ssh access on remote machine from the local machine
  • On the local machine:
    • sudo apt-get install sshfs
    • sudo modprobe fuse
    • sudo usermod -G fuse -a youruser
    • sudo chgrp fuse /dev/fuse
    • Logout of myuser on the local machine, then back in
    • sshfs REMOTEUSER@REMOTEHOST:/dir/on/remote/server LOCALDIR
sudo modprobe fuse will make sure you have that kernel module loaded. You have to be in the fuse group to perform the mount, sudo or not. I am not exactly sure why you have to change the group of the fuse device, but you definitely have to in order to get it to work.

At this point, you should be able to view all the files in /dir/on/remote/server from LOCALDIR. Performance-wise, I had great success. Pointing Rhythmbox at the dir I mounted, it scanned all the files in a timely manner, and I was able to play them as if they were on the local machine, without issue.

If you wish to unmount the share, run sudo fusermount -u LOCALDIR. One problem that has been raised, and for which I do not yet have a fix, is what happens if the remote machine has a problem, reboots, or the network connection is otherwise dropped. I am not sure how this would be handled, since the mount would no longer work; fuse being a kernel module, this may cause freezing. If I find this addressed somewhere, I will post it.


Friday, October 19, 2007

Quick and Handy Samba Setup

I just never got around to setting up and using Samba on my home network. I had my files in SVN or would login to the box that had what I needed, or used some other mechanism. But a certain case kept popping up that forced me to get it in place.

I have one server that has all of my movies, music, etc on my home network. I wanted to be able to access all my music from other machines. Using gnump3d was, aside from annoying for long term listening, a waste of bandwidth. Instead, I created a share on my media server accessible without password only on my LAN, and mounted this as my music library on my other machines.

To set up the first part, I followed this handy tutorial. He has all the details, but basically you just install Samba on the machine you want to share from, edit the conf file to create a share with passwordless access allowed only locally, and restart Samba.

Then, on the machine you want to access the files from, install smbfs. After that, create the dir where you want the shared files mounted (in my case ~/Music), then run
sudo smbmount //myserver/myshare ~/Music. After that, you should be able to browse the files you are sharing from the mountpoint you specified. Easy!

More details on setting up Samba, as well as having a shared directory mount on boot, may be found here. One last thing. smbclient allows you to find information on local shares from the machine you are sharing from. For example, smbclient -L SERVERNAME allows you to see all available shares on SERVERNAME.
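For reference, the boot-time mount mentioned above boils down to a single /etc/fstab line. This one is only a sketch (the server, share, mountpoint, and options are assumptions to adapt, so check the linked guide for the exact syntax your distro expects):

```
# /etc/fstab (sketch -- adjust server, share, mountpoint, and options)
//myserver/myshare  /home/myuser/Music  smbfs  guest,ro  0  0
```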

Tuesday, October 16, 2007

More Tasty Tips for DenyHosts

After living with DenyHosts for some time, I have made some additional configuration changes that are quite helpful.
  1. Whitelist known good IPs
    I found after not very long that my home IP address had been added to my webserver's hosts.deny file by DenyHosts. This was a result of having messed up my own login a few times. This could easily be an issue if you are regularly connecting to the box running DenyHosts (from work, home, etc).
    A simple solution: Add your IP to an allowed-hosts file in your DenyHosts working directory. Once that is in place, you might also need to find and remove the entry for the IP in question from your hosts.deny file (if it has already been added). It would be wise to add known good IPs to allowed-hosts when you first set up DenyHosts. Otherwise, you might find yourself blocked from your server with no way to get access!
    For more details, see the FAQ.
  2. Blacklist bad users
    It is also a good idea to add users you know are not allowed in a restricted-usernames file in your DenyHosts working directory. This keeps users that should never be allowed to login from trying, even once. There are two handy scripts provided to help in generating this list (located in the scripts dir of your DenyHosts working dir).
    The first scans your /etc/passwd file and outputs users who have nologin set (print daemons, etc). You can redirect this to your restricted-usernames file.
    The second script is only handy after DenyHosts has been running for some time. When passed your working dir, it prints out the users that bad SSH attempts target most often. By default, it prints the top 10. You could also pass it a large number, say 10000, and it would print every bad user attempted since DenyHosts was first started. I take this list, remove the users I have in place and know to be good, and use it as my restricted-usernames list.
  3. Set the purge threshold and purge rate
    In your denyhosts.cfg file, set PURGE_DENY to something reasonable, 1-2 weeks perhaps. Otherwise, your hosts.deny might grow huge.
    Then, set PURGE_THRESHOLD to 2-3. Entries older than PURGE_DENY are normally removed from the deny list, but a host that has already been purged more than PURGE_THRESHOLD times is never purged again. This keeps the most frequently attempted entries blocked for good.
  4. Rotate log files
    Assuming you have logrotate installed (and why don't you?), the FAQ has a great example file for adding a rotate entry for DenyHosts.
  5. Sync with central DenyHosts database
    One feature which really makes DenyHosts shine is the ability to share known rogue agents among DenyHosts instances. It is off by default; to turn it on, edit denyhosts.cfg in your working dir and simply uncomment SYNC_SERVER. The value it is set to should be correct. Once synchronization is on, the daemon will check with the specified server every hour by default. It will upload hosts that you have denied, and download hosts that at least 3 others have denied. If you wish, you can tweak the timing of this process, restrict it to uploads only, change the threshold for downloading, etc.
    For more details, see the FAQ.
After these changes are made, be sure to restart DenyHosts via sudo /usr/share/denyhosts/daemon-control restart.
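Pulled together, the relevant denyhosts.cfg lines look something like this. The values are examples, and the sync server shown is what I believe the project default to be, so verify against the comments in your own config:

```
PURGE_DENY = 2w
PURGE_THRESHOLD = 2
SYNC_SERVER = http://xmlrpc.denyhosts.net:9911
```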

Monday, October 15, 2007

DenyHosts: Watches your SSH log for you

While I knew that brute-force attacks on SSH servers are very common, I had not taken the time to look at the connection attempt logs on my home servers until recently (to do that, by the way, on Ubuntu, try sudo tail -n 100 /var/log/auth.log). I was seeing attempts every few seconds for some periods, mostly on non-standard ports!

So far as I knew, no one had gotten through, but why risk the worry? Instead, I installed DenyHosts. DenyHosts is a Python script that watches your auth.log and adds IPs that repeatedly try and fail to connect to the /etc/hosts.deny list, effectively denying them future access.

It is rather easy to install. There is a package in the repos, but I was unable to get this to work on my servers for some reason (it is still in testing). I instead followed this handy tutorial. It worked flawlessly, with one exception. I had to run sudo touch /etc/hosts.deny right before starting the service. Otherwise it threw an error that the file did not exist and closed. With the touch, all went fine. That fix was listed in this bug report.

A few other notes:
  • If you have not done so, be SURE to change this line in /etc/ssh/sshd_config:
    • PermitRootLogin no
  • While editing /usr/share/denyhosts/denyhosts.cfg according to the tutorial, I recommend (following others posting this tip) to also change this line:
    • BLOCK_SERVICE = ALL. Also, of course, comment the line: BLOCK_SERVICE = sshd. This blocks access on all ports to IPs that get denied. And really, if you want to block a potentially malicious IP from SSH access, why give them access to other services?

Saturday, October 13, 2007

Topping top: htop

Top is a very handy and common application. But there is always room for improvement, even in common and simple programs. In this case, meet htop. It performs a similar function to top, viz. showing you what is using your system's CPU, memory, or other resources. However, htop presents this information in a much friendlier graphical way.

On Ubuntu, install is a snap. Make sure you have the universe repos enabled, and then run sudo apt-get install htop.

And no more obscure commands to remember for configuring layout and appearance. There is a handy list of commands at the bottom of the display. In addition, that display is in a wide range of handy colors (also configurable of course). The whole presentation is easier to read, and a lot easier to customize.

As mentioned, there are a lot of options for htop. Once you configure things and exit htop, your changes are saved to a .htoprc in your home directory. To see the basic options I have set, you can check out my .htoprc.

I like to have top (and now htop) running in the top left of my dual monitor setup. Using fluxbox, this is rather easy. See my .fluxbox startup and apps files.

Monday, October 8, 2007

Run vim without being there...

Say you have a text file. You need to alter it in some regular way before sending it on somewhere else. Instead of editing by hand, there is a neat option you can use to edit the file with familiar vim commands (instead of awk's lovely syntax), without having to open vim interactively.

The principle is simple: You make a file containing each of the commands you want to run, one per line. They take the same form as if you typed them in a vim session, starting with ":". Make sure that ":wq" is the last one. Then you run vim with the -s flag, passing the command file followed by the file to edit.
For example, my file to edit was tester.txt, and my commands lived in commands.txt. I ran vim -s commands.txt tester.txt, and cat tester.txt showed the transformed contents.
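As a minimal end-to-end sketch (the file contents and the substitution command here are my own invention, chosen for illustration):

```shell
# Build a throwaway data file and a vim command script, then run vim
# non-interactively. Each line of commands.txt is typed exactly as it
# would be in a live session, with :wq last to save and quit.
printf 'alpha,beta\ngamma,delta\n' > tester.txt
printf ':%%s/,/;/g\n:wq\n' > commands.txt
vim -s commands.txt tester.txt
cat tester.txt   # alpha;beta / gamma;delta
```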
You could put something like this in a cron job, making transitions between data manipulation steps quick and easy, without having to mess with the files by hand each time.

Sunday, October 7, 2007

Handy Alias: Grab latest svn log

I keep having the occasion to view the log comments for the current revision in my SVN repository. This means:
  • Knowing the number of the current revision,
  • Passing this to svn log -r
Instead of doing that by hand, I made an alias to do it for me:
alias svnlastlog="svn info | grep 'Last Changed Rev' | cut -d ' ' -f 4 | xargs -I mystr svn log -r mystr"
So if you are in a directory that is versioned, you just run that command, and (after being prompted to authenticate to your repo, if applicable) out comes the log entry for the current revision. Perhaps there is a built-in command to do this already, but I could not find it via Google or the SVN Book.
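The grep-and-cut stage of that alias can be sanity-checked on its own against canned svn info output, without a live repository (the function name is my own):

```shell
# Pull the revision number out of svn info output:
# "Last Changed Rev: 1234" splits on spaces; field 4 is the number.
last_rev() {
    grep 'Last Changed Rev' | cut -d' ' -f 4
}

printf 'Last Changed Rev: 1234\n' | last_rev   # prints 1234
```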

Monday, October 1, 2007

What are all the wxPython Events?

On several occasions recently, I found I did not know which event to specify to trigger something in my wxPython apps. I had a hard time finding anything close to a comprehensive list of all available events, so I was limited to posting questions on the wxPython users mailing list. And, while this is a wonderful and active list, it would be nice to have a guide even closer at hand.

Cody Precord on the list provided me with a handy way to get this:
import wx

for x in dir(wx):
    if x.startswith('EVT_'):
        print x
Run that, and out comes a list of all the EVT types. Of course, in some cases you may not know what an event does just from the name, but they are mostly self-explanatory. See for yourself.