[philiptellis] /bb|[^b]{2}/
Never stop Grokking


Showing posts with label linux. Show all posts

Thursday, December 02, 2010

Bad use of HTML5 form validation is worse than not using it at all

...or why I'm glad we don't use Fidelity any more

Fidelity's online stock trading account uses the HTML5 pattern attribute on its forms to do better form validation, only they get it horribly wrong.

First some background...

The pattern attribute that was added to input elements allows the page developer to tell the browser what kind of pre-validation to do on the form before submitting it to the server. This is akin to JavaScript based form validation that runs through the form's onsubmit event, except that it's done before the form's onsubmit event fires. Pages can choose to use this feature while falling back to JS validation if it isn't available. They'd still need to do server-side validation, but that's a different matter. Unfortunately, when you get these patterns wrong, it's not possible to submit a valid value, and given how new this attribute is, many web devs have probably implemented it incorrectly while never having tested on a browser that actually supports the attribute.

This is the problem I faced with Fidelity. First, some screenshots. This is what the wire transfer page looks like if you try to transfer $100. The message shows up when you click on the submit button:

On Firefox 4:
Fidelity's wire transfer page on Firefox 4
On Opera:
Fidelity's wire transfer page on Opera

There are a few things wrong with this. First, to any reasonable human being (and any reasonably well programmed computer too), the amount entered (100.00) looks like a valid format for currency information. I tried alternatives like $100, $100.00, 100, etc., but ended up with the same errors for all of them. Viewing the source told me what the problem was. This is what the relevant portion of the page source looks like:
$<input type="text" name="RED_AMT" id="RED_AMT"
        maxlength="11" size="14" value="" tabindex="10"
        pattern="$#,###.##"
        type="currency" onblur="onBlurRedAmt();"/>
The onblur handler reformatted the value so that it always had two decimal places and a comma to separate thousands, but didn't do anything else. The form's onsubmit event handler was never called. The pattern attribute looked suspicious. This kind of pattern is what I'd expect for code written in COBOL, or perhaps something using perl forms. The pattern attribute, however, is supposed to be a valid JavaScript regular expression, and the pattern in the code was either not a regular expression, or a very bad one that requires several # characters after the end of the string.
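For illustration, here's a regular expression that would accept the amounts the onblur handler produces. This is my guess at the format Fidelity intended, not their actual fix, and you can test it with grep -E:

```shell
# Hypothetical currency pattern: optional $, comma-grouped thousands,
# optional two decimal places. In an HTML pattern attribute the anchors
# are implicit; grep -E needs them spelled out.
pattern='^\$?[0-9]{1,3}(,[0-9]{3})*(\.[0-9]{2})?$'
echo '1,000.00' | grep -Eq "$pattern" && echo valid   # prints "valid"
```

The same string in a pattern attribute (without the ^ and $) would have let the form submit.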

The code also omits the title attribute, which is strongly recommended for anything that uses the pattern attribute, since it makes the resulting error message more meaningful, and in fact just usable. The result is that it's impossible to make a wire transfer using any browser that supports HTML5's new form element types and attributes. This is sad, because while it looks like Fidelity had good intentions, they messed up horribly on the implementation, and put out an unusable product (unless of course you have firebug or greasemonkey and can mess with the code yourself).

I hacked up a little test page to see if I could reproduce the problem. It's reproducible in Firefox, but not in Opera and I can't figure out why. (Any ideas?). Also notice how using a title attribute makes the error message clearer.

One last point in closing. It looks like both Firefox and Opera's implementations (on Linux at least) have a bad focus grabbing bug. While the error message is visible, the browser grabs complete keyboard focus from the OS (or Windowing environment if you prefer). This means you can't Alt+Tab, press PrtScrn, switch windows, etc. If you click the mouse anywhere, the error disappears. The result is that it's really hard to take a screenshot of the error message. I managed to do it by setting gimp to take a screenshot of the entire Desktop with a 4 second delay. The funny thing is that you can still use the keyboard within the browser to navigate through links, and this does not dismiss the error.

Update: The modality problem with the error message doesn't show up on Firefox 4 on MacOSX

Thursday, August 19, 2010

Where does firefox install its extensions?

It took me a while to figure this out for all the OSes I use. The only solution I found for MacOSX was hidden inside a post about installing extensions globally.

Linux

On Linux, extensions are stored inside
~/.mozilla/firefox/<profile>/extensions
$(MOZILLA_DIR)/extensions

MacOSX

On the Mac, they're stored in
~/Library/Application Support/Firefox/Profiles/<profile>/extensions
/Library/Application Support/Mozilla/Extensions
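A little shell helper (my own sketch, not from any Mozilla tooling) that lists the extension IDs under any of those profile base directories:

```shell
# List extension IDs for every profile under a Firefox base directory, e.g.:
#   list_extensions ~/.mozilla/firefox
#   list_extensions ~/Library/Application\ Support/Firefox/Profiles
list_extensions() {
    ls -1 "$1"/*/extensions 2>/dev/null
}
```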

Saturday, April 10, 2010

Can a website crash your RHEL5 desktop?

A few months ago I started having trouble with my RHEL5 desktop when running Firefox 3.5. On a few websites, the entire system would crash pretty consistently. It took a long time, but I finally managed to find the problem and get a fix in there.

My solution is documented on the YDN blog. Please leave comments there.

Edit 2022-10-31: It looks like the YDN blog no longer has any posts, so I've pulled this off the Internet Archive and reposted it here:

On Ajaxian, Chris Heilmann recently wrote about a piece of JavaScript to crash Internet Explorer 6 (IE6). That's not something I worry about because I'm a geek and I've used a Linux-based operating system as my primary desktop for the last 10 years. I've kept my system up to date with all patches, never log in as root, and have a short timeout on sudo. I've believed that while a malicious website could possibly affect my browser (Firefox), it was unlikely to affect my OS. That was up until a few months ago, when I upgraded to Firefox 3.5.

I started noticing that a few websites would consistently cause my system to freeze and the bottom part of the screen would show pixmaps from all over the place. The system would stay this way for a few seconds, and then I'd be thrown off to the login screen. My error log showed that X.org had been killed by a segfault. At times the system would completely freeze and the only way to get it back was a hard reboot (yes, I tried pinging and sshing in first).

Yikes. This wasn't supposed to happen. Even worse, this meant that anyone who knew how to exploit this could cause my system to crash at will. On further investigation, it appeared that this problem showed up with sites that used jQuery or YUI, but it wasn't consistent. It also happened only with Firefox 3.5 or higher on Red Hat-based systems. Debian-based systems like Ubuntu didn't have any trouble.

I also found that we could consistently reproduce the problem with Yahoo! Search, which is where Ryan Grove and Sarah Forth-Smith came in to debug the problem. Even weirder was that my Gnome desktop would bleed through elements on the Search results page. Eventually we hit upon Bug 498500 on Red Hat's Bugzilla bug-tracking system.

I edited /etc/X11/xorg.conf and added Option "XaaNoOffscreenPixmaps" to the Device Section. I restarted X and started surfing. I surfed for several weeks and used Y! Search all the time. I also used a bunch of the other sites that caused the previous problems. I used sites with jQuery and YUI.
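For reference, the relevant xorg.conf fragment looked something like this (the Identifier and Driver values here are placeholders; yours will differ):

```
Section "Device"
        Identifier  "Videocard0"
        Driver      "intel"
        Option      "XaaNoOffscreenPixmaps"
EndSection
```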

No more screen fuzz, no more freezes, no more crashes, and no more reboots.

I haven't investigated this further, but my best guess for what would have caused this problem is CSS sprites that are partially hidden, or elements with negative left margins. The former is a common performance optimization, while the latter is common for page accessibility. Both good things, so not something you'd want to change.

In any event, if you're like me and have a Linux-based desktop, and see a similar problem, it may be worth trying the preceding solution.

Note: The bug in question has been resolved by Red Hat.

Saturday, December 06, 2008

Installing linux on the Acer Extensa 4630Z

Up front - if you haven't purchased this laptop yet, then stop now and look for a different brand. The Acer Extensa 4630Z has a great webcam, but will take you through hell when you try to install linux on it.

My tests were with Fedora Core 9, because that's all that I had with me. I may have had better luck with FC10 or Ubuntu, but I do not have those CDs with me, and I have a really slow network connection, so downloading it is not an option.

If you've already bought this laptop, then this post will probably help you out with finding some of the drivers.

First off, the keyboard has some extra keys for the Euro and dollar. These are just above the cursor keys, but I haven't yet figured out the keyboard layout to use them, so forget about that.

The trackpad is standard, but you cannot tap on it - at least the default driver on Fedora won't let you. I tried adding a section to my xorg.conf for the touchpad, but that only slowed it down a lot.

Now the problems.

The laptop has two switches - one for the bluetooth antenna and one for the wifi antenna. The bluetooth switch directly controls the bluetooth antenna, but the wifi switch is just a simple key that triggers an event and it is up to the wifi driver to deal with that. It took me a while to figure this out, since the driver for the wifi card isn't installed by default on FC9. I spent a lot of time playing with acerhk to enable the card, but I guess that isn't needed. If you do use acerhk though, the series id is 3000.

The wifi card has the Atheros AR5B91 chipset. This chipset isn't listed on the Atheros website or the various linux driver pages, so don't be confused by that. The device ID shown by lspci -nn ends with 002a.

This card works with the ath5k driver, and on FC9, you'll have to get that from the linux wireless driver page. Just get the whole tarball and build it locally.

Before you can do that, you'll first need to yum install make, gcc, kernel-headers and kernel-devel.
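Roughly, the steps look like this (the tarball name is a placeholder; use whatever the linux wireless page gives you):

```shell
sudo yum install make gcc kernel-headers kernel-devel
tar xjf compat-wireless-<version>.tar.bz2
cd compat-wireless-<version>
make
sudo make install
sudo modprobe ath5k
```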

Do not use the madwifi driver - it does not work with this card.

The ethernet card is a Broadcom card, and works well with the default driver, nothing much to worry about here.

However, I had problems getting dhclient to get a DHCP IP for both cards simultaneously. I think this is a quirk with Fedora, because I've done it successfully with Ubuntu on a Thinkpad. If you want to connect to wireless, and you already have dhclient running, you have to first disconnect from wired, and kill the dhclient process. I found this very stupid and frustrating.

The next thing that did not work was the microphone. There is a built in mic right next to the web cam. Your volume control sees this as Mic. There is another device called Front Mic, which is actually a mic that you plug in to the microphone jack in the front of the laptop.

Installing all the PulseAudio tools helps.

This does not help with Skype though, and the audio quality is very bad. I don't know if this is a problem with the system or with Skype, but basically I was getting clicks, scratches and a lot of static with Skype - even after setting the audio output devices in Skype to Pulse.

Anyway, the wireless problems did not end there. I managed to get the wifi card detected, and could connect to an unsecured network, but as soon as I tried to connect to a secure network, it failed. It failed with both WEP and WPA, and I have no idea how to debug or fix that.

I now have only one day left to get this box working 100%, so will try with Ubuntu tomorrow if I can get the CD from someone. Will update this post when I'm done.

Update: The Ubuntu story

After my trials with Fedora, I moved to Ubuntu. Got an 8.10 CD, but it appeared to be corrupt (tested on three different drives). Then got an 8.04 CD that worked.

The first problem was when starting the live CD. It correctly detects that it needs the intel display driver, but the driver itself is broken. I had the same problem with my Thinkpad - the screen is garbled once xorg starts up. I had to boot up in Safe Graphics Mode (press F4 at the grub menu), which uses the vesa driver.

The vesa driver only goes up to 1024x768, which looks weird on a widescreen, but it lets you get things started.

I went through the install procedure, which worked without problem. Followed the same instructions as above for getting the ath5k driver, except that this time it set up the ath9k driver. NetworkManager still wouldn't connect to a WPA protected wireless network, but after reading up online, I decided to try wicd, and it worked out of the box.

I then downloaded Skype and tried it out, but had very bad luck. The microphone wouldn't work, and skype crashed when I tried to use the webcam. This was much worse than Fedora. I tried the whole pulseaudio setup, but that didn't help.

I then upgraded the kernel to the latest, but still had no luck.

Finally I started looking for an updated video driver. I found melchiorre's weblog where he has a deb for the intel driver.

I first tried to get this driver using apt hoping for a newer release, but there wasn't anything, so I downloaded the deb from the blog, and installed it.

This worked in the broad sense. I could start up in graphical mode and use 1280x800 resolution - which is good for a 14" wide screen. The only problem is that the mouse pointer kept getting garbled every few seconds. It would reset to normal if I moused over anything that changed the pointer, but within a second or two it was garbled again.

This made it very confusing to determine where the pointer was (it was just a large square with a series of black dashes all over the place), so I switched back to vesa and decided to tolerate 1024x768.

The system was now in a state where it was usable as an internet browsing box, but Skype not working was a big problem, because everyone that my dad needs to communicate with is on Skype (which meant that Ekiga wasn't an option).

The Windows story

Just for completeness, I should mention the situation with Windows XP:

Windows does not detect the wireless card or the display card, so it runs in vesa mode at 1024x768, and there's no network. The only advantage with windows is that anyone in the world can fix it, so my absence won't be the bottleneck when this box breaks (and it will).

The final story

So, in the end, I've decided to leave Windows on half the drive - this will be the part that all service engineers can 'fix' when things break.

The rest of the drive I've split between Fedora 9 and Ubuntu 8.04 - I wish I could have more recent versions, but I'm not that lucky. I'll set Fedora as the default system, and get the wireless network working there, and use OpenOffice from the Ubuntu partition. At some point I might switch the default to Ubuntu once I figure out what works better with this hardware.

Update: 8.10

I finally managed to upgrade the box to Ubuntu 8.10. I had to do a remote upgrade, since I'm now half way around the world from the box. The upgrade is pretty easy. Just follow the upgrade instructions listed on the ubuntu site. You need to go through the section titled "Network upgrade for ubuntu servers".
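For reference, the server-style network upgrade at the time boiled down to two commands (check the current Ubuntu upgrade notes before relying on this):

```shell
sudo apt-get install update-manager-core
sudo do-release-upgrade
```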

The upgrade took almost two days to complete because the network was really slow, and it required me to hit "y" a few times, so it could not be automated (the do-release-upgrade script does not accept a -y flag), but other than that, it was really smooth.

Note: If the upgrade terminates partway, you need to resume it within a day. The first time it failed, I resumed after three days, and it restarted from scratch. The second time it failed, I resumed after 6 hours, and it continued from where it had left off.

The only problem with doing a remote upgrade is that I could not easily test a few things. For starters, I did not know if gnome was working correctly; however, this can be tested remotely. ssh to the box, and look at /var/log/Xorg.0.log for errors (anything starting with (EE)). When I checked this, I found errors saying that there were no usable screen configurations.

I also found that /etc/X11/xorg.conf had been replaced by the upgrade. Restoring it from the original did not help, so I replaced the vesa driver with the intel driver (the one that Fedora worked with), and tried again:
   sudo /etc/init.d/gdm restart
This time it worked, and I confirmed with my dad on the other end that the track pad was working as expected.

The other things I could not test were the webcam, microphone and speakers, but I got my dad to test all three using Skype, and we had a successful VoIP chat - no crashes.

This system is now in a fully working condition.

Thanks for all the comments so far, they definitely helped me push on.

Tuesday, October 07, 2008

Platform dependent locales

Here's a simple snippet of PHP code:
        setlocale(LC_TIME, 'de_CH.UTF-8');
        echo strftime("%x");
I run this on RHEL5 and Ubuntu 8.04 and I get different results:

RHEL5:
       2008-10-07
Ubuntu 8.04:
       07.10.2008
So I look through /usr/share/i18n/locales/de_CH for a hint, and I find it.

On RHEL, d_fmt in LC_TIME maps to <u0025><u0059><u002d><u0025><u006d><u002d><u0025><u0064>, which in English is %Y-%m-%d, while on Ubuntu, it maps to <u0025><u0064><u002e><u0025><u006d><u002e><u0025><u0059>, which in English is %d.%m.%Y. That is exactly where this discrepancy arises from.
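GNU date shows what the two d_fmt values produce for the same date:

```shell
date -d 2008-10-07 +%Y-%m-%d   # RHEL5's d_fmt: 2008-10-07
date -d 2008-10-07 +%d.%m.%Y   # Ubuntu's d_fmt: 07.10.2008
```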

Now I have no idea how to verify which is more recent, because Ubuntu and RHEL do not package things in the same way. Any ideas?

Friday, March 07, 2008

Creative Vista Webcam VF0330 on Ubuntu

Got back home a couple of days ago, and there was a new webcam sitting on my box. It was being used on windows, which I'm naturally allergic to. I decided to try and get it working on my ubuntu laptop.

Now I've set up webcams on linux before, and it was a pain, but it got done. This was over five years ago, so I'd assumed things may have gotten easier by now. They hadn't. New webcams were out, with just as bad support as there was five years ago. All the same, documentation on the web is good if you know where to find it. That's what this doc is for.

The first thing to do was find out what model my webcam was. I knew it was Creative, since that's what was written on the cam itself. Plugged it in, and dmesg just said that a new USB device had been plugged in. Then I ran lsusb, which gave me this:

Bus 005 Device 001: ID 0000:0000
Bus 001 Device 013: ID 041e:405f Creative Technology, Ltd
Bus 001 Device 001: ID 0000:0000
Bus 002 Device 001: ID 0000:0000
Bus 004 Device 001: ID 0000:0000
Bus 003 Device 001: ID 0000:0000

So the vendor ID was 041e — which is Creative, and the product ID was 405f, which probably maps onto some name, but that's immaterial.

Most webcams (well, creative ones at least) work with the ov511 driver that comes packaged with most distros, but this cam didn't, so I started searching around for support for this particular product ID. Found out from RastaGeeks that this was a Creative Vista Webcam VF0330 (the VF0330 matched the model number at the back of the cam), and that it was supported by the ov51x-jpeg driver, version 1.5.2 or higher.

The Ubuntu Webcam page had instructions on setting this up, which I followed and met with success.

Now, chances are that you'll find other information as well about this driver. One of the steps I'd followed was to install the ov51x-jpeg-source and module-assistant packages using apt, and then build and install the module using module-assistant. Unfortunately, this installed an older version of ov51x-jpeg, which didn't work with the camera. That led me to believe that the driver didn't work, until I tried the newer version.

If you've done this, then you will need to apt-get remove ov51x-jpeg-modules-2.6.22-14-generic first, then install the new driver and then run depmod -A.

Wednesday, February 07, 2007

Ayttm screen freeze

If you use the jabber service on ayttm, you may notice the main window appears to freeze up at times turning completely white. You can still chat, and clicking on the main window makes contacts visible, but it just doesn't refresh on its own.

This happens because of a problem with the jabber module, and I haven't had a chance to figure it out yet. What I do know is that you can unlock the application without restarting it, but you need to resort to a teeny weeny bit of geekery.

First, find out the pid of the running ayttm process:
   ayttm_pid=`ps -u philip | grep a[y]ttm | cut -f2 -d' '`
(I use [y] instead of y in ayttm so that the grep process doesn't show up in the list).

Once you have your pid, start gdb telling it to attach to this pid. Different versions of gdb have different ways to do this, so check the man page, but two common ways are:
   gdb ayttm $ayttm_pid
or
   gdb ayttm -p $ayttm_pid
Ok, so pretty much anyone could have told you how to get this far, it's going forward that needs a wee bit of knowledge of the source.

I'll save you the trouble and tell you that you need to look into jab_recv. The file descriptor for the jabber socket (stored in j->fd) has been closed, but the code is stuck in an infinite read. You need to set a breakpoint on jab_recv, and close the ayttm end of the fd:
   b jab_recv
cont
n
n
p close(j->fd)
deta
^D
That's about it. You'll get an alert telling you that the jabber server closed the connection. Click Ok, and proceed as if nothing happened.

Update:
Finally, there's really no reason for you to do all that. Here's a one-liner shell script (broken for readability) to do it for you:
   echo -e '\n\n\nb jab_recv\ncont\nn\nn\np close(j->fd)\ndeta\n' | \
gdb ayttm `ps -waux | grep a[y]ttm | cut -f2 -d' '` &>/dev/null

Thursday, October 19, 2006

Selectively enable network interfaces at bootup in linux

Do you have multiple network interfaces on your linux box, and find that you don't always want all of them active at bootup? Perhaps not all networks are connected and you don't want to waste time with attempts at network negotiation for a connection you know isn't available.

I've faced this situation a couple of times, and came up with a way to tell my startup scripts to skip certain interfaces via kernel commandline parameters (which can be specified via your boot loader).

It's so simple that I often wonder why I (or anyone else) hadn't done it before. It's likely that everyone else on earth answered no to my question above.

Anyway, this is what I did:

In /etc/init.d/network, in the loop that iterates over $interfaces:

# bring up all other interfaces configured to come up at boot time
for i in $interfaces; do
after we've eliminated non-boot interfaces:

if LANG=C egrep -L "^ONBOOT=['\"]?[Nn][Oo]['\"]?" ifcfg-$i > /dev/null ; then
# this loads the module, to preserve ordering
is_available $i
continue
fi
I add this:

# If interface was disabled from kernel cmdline, ignore
if grep -q "$i=off" /proc/cmdline; then
continue;
fi

Add the same for the next loop that iterates over $vlaninterfaces $bridgeinterfaces $xdslinterfaces $cipeinterfaces and you're done. As simple as that.

Now, when your boot loader pops the question, you choose to edit the kernel command line, and add something like eth0=off to prevent eth0 from coming up at boot time. You could turn this into an alternate boot config in grub by adding an entry in /boot/grub/grub.conf like this:

title Linux, skip eth1
root (hd0,1)
kernel /vmlinuz-2.6.10 ro root=LABEL=/ rhgb quiet eth1=off
initrd /initrd-2.6.10.img

This will give you a nice menu option below your default Linux option saying Linux, skip eth1.

You can always enable your interface later by doing /sbin/ifup eth1.

Note: You may need to add is_available $i inside the if block. I don't know, and it works for me without it.

Thursday, December 29, 2005

Using the Samsung SP0802N and ASUS K8V-MX together without lockups

Early this year, I started having hard disk problems (signs of an impending crash), and the decision was to replace my old samsung 40Gig with a new samsung 80Gig. The drive installed was a Samsung SP0802N - since I'd heard mostly good reviews of it. I decided to keep both hard disks connected though, just in case.

A few months ago, the computer started showing signs of corrupted RAM. This isn't something that normally happens on two year old RAM. 2 day old RAM, maybe, 10 year old RAM, maybe, but not 2 year old RAM. Power problems are a possibility, and that's not unexpected in my room. Anyway, the system was checked by a hardware guy, and he said that the motherboard needed to be replaced.

The new motherboard was an ASUS K8V-MX and along with that, we got an AMD Sempron processor.

On my next trip back home, I noticed problems with the system. It was running slower, and was locking up on disk intensive processes. A power cycle was required to get it back, and then there was a high chance that BIOS wouldn't recognise my disk, but would grab grub from my old disk. I didn't have time to look at it back in October or November, but in December, I did.

Three things came to my mind.
- bad power,
- bad hard disk/disk controller
- incompatibility somewhere.

We thought the grounding might be bad throughout the house because the stabiliser and spike buster indicated the same at various outlets. I also read through the motherboard manual. I generally do this before installing a new motherboard, but since I hadn't installed this one, I hadn't read it before. The manual said that a BIOS upgrade was required to function correctly, and that MSDOS and a floppy was required to upgrade the BIOS. I had neither, so ignored that for the moment.

Decided to go get a new hard disk and a UPS, but changed my mind about the hard disk at the last moment, and got just the UPS and some more RAM.

The night before I bought the stuff, I moved the PC to a different room to check (I couldn't get it started in my bedroom), and it started up (which further convinced me that it could have been a power problem). I read through /usr/src/linux/Documentation/kernel-parameters.txt for info on what I could do to stabilise the kernel. That pointed me to other docs, one of which told me that a BIOS upgrade was required for certain ASUS motherboards.

Today, I decided to try upgrading the BIOS. I do not have a floppy drive, or MSDOS, so that was a problem. Booted up from the motherboard CD, which started FreeDOS. FreeDOS, however, only recognises FAT16 partitions, and I had none of those.

Switched back to linux, started fdisk, and tried to create a new FAT16 partition 5MB in size. It created one 16MB in size - I guess it's a least count issue. Had to zero out the first 512 bytes of the partition for DOS to recognise it...

dd if=/dev/zero of=/dev/hda11 bs=512 count=1

Then booted back into FreeDOS and formatted the drive:
format c:

Then booted back into linux to copy the ROM image and ROM writing utility to /dev/hda11, and finally, back to FreeDOS to run the utility.

Ran it, and rebooted to get a CMOS checksum error - not entirely unexpected. Went into BIOS setup and reset options that weren't applicable to my box (no floppy drive, no primary slave, boot order, etc.)

Booted into linux and haven't had a problem yet.

Next step - enable ACPI.

Thursday, February 24, 2005

Security in Linux

(from the linux security FAQ)

Glossary:

Cracker
someone who gains unauthorised access to a system. Not to be confused with a hacker. A hacker is really someone who likes to play with computers and write good software. The media often tends to confuse the two. Hackers create, Crackers break.
IDS
Intrusion detection system. A system that tries to detect if your system has been compromised, and warns you of it.
Tripwire
A kind of IDS that checks whether critical system binaries and configuration files have been modified or not.
Firewall
a system that filters traffic moving from outside the network to inside, and vice-versa.
Port scanner
a program that checks a host to see which ports are open for external connections. It generally does a blind connect on all ports of a host. Some port scanners can do stealth scanning.
Security scanner
a program that checks a host for known vulnerabilities. Security scanners generally try to exploit a vulnerability without causing any harmful effects that would happen in a genuine break-in. Some exploits are designed to crash a system, and in these cases, the security scanner may well have to crash a system if it is vulnerable. It is better, though, to be crashed while scanning your own system than when someone is actually trying to crash you.

Introduction

Some of the most common questions asked by people trying to secure their linux systems are: What is security? How can I protect myself from a break-in? How secure should my linux box be? Which is the most secure linux distribution?

Security, in a nutshell, is ensuring the Confidentiality, Integrity, and Availability of your systems and network.

In short, to protect yourself, you should install the most recent security patches for your distro, turn off unused/unrequired services, run the others through tcpwrappers, and instead of telnet/ftp for remote access, use a secure alternative. The rest of this document will attempt to cover in more detail how to go about securing your linux system.

Most important is to decide how secure you need to be. You need to assess the risk to you, and base your security on that.

risk=(threat*vulnerability*impact)

Threat: the probability of being attacked
Vulnerability: how easy it is to break in
Impact: the cost of recovering from an attack
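With made-up numbers, say a 30% chance of being attacked, a 50% chance an attack succeeds, and 10,000 to recover:

```shell
# risk = threat * vulnerability * impact (all numbers hypothetical)
awk 'BEGIN { print 0.3 * 0.5 * 10000 }'   # prints 1500
```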

You cannot be 100% secure because there will always be security holes that you either do not know about, or are infeasible to patch.

When picking a distribution with security in mind, you should really pick one that has secure default values that you can tweak later. There's no point in installing a system that someone breaks into before you even have a chance to secure it.

Distributions like Secure Linux and slinux aim to set secure defaults.

Most distros do not have secure defaults because this tends to make the system hard for end users to use. Securing a system is really a trade-off between convenience to your users, and protecting their data.

In general, never rely on the default installation of any distribution. Consult the Linux Administrator's Security Guide for information on how to secure specific distributions.

Alternatively, OpenBSD was designed from the ground up as a secure unix, and is probably your best choice for a pure unix implementation. OpenBSD servers and firewalls are extremely secure.

A good idea would be to set up a rather open internal network, with tight security between the inside and the outside. That way, local users still have all the convenience, while the system is secure from an external threat. There are still two problems with this approach.

If you have legitimate users who need to connect to your system remotely, they would be inconvenienced by your external security. This shouldn't sway you, though: opening up your system to one person can really open it up to the world.

On the inside too, if your users cannot be trusted, then lax internal security could hurt you. Your users could compromise your system by simply not setting good passwords, or leaving their terminals logged in while they are away. There have been cases when crackers have walked into offices, and found system passwords pasted on the office bulletin board for everyone's convenience. Although hitherto unheard of in India, companies abroad have been known to place spies in competitor's companies to steal corporate secrets. There's no use in having the ultimate in network security if your employee is simply going to copy all your secrets onto a floppy and walk out with it.

Apart from securing each computer system, and the network as a whole, one also needs to physically secure the entire installation.

Firewalls

To protect your network, you'd use a firewall between your internal network and the rest of the world.

A firewall set up is basically a set of rules that tell the firewall whether a given packet is to be allowed through or not. It can also log information on packets passing through, as well as modify or redirect these packets.

Setting up a firewall is very well explained in the linux firewall howto.

In general, you will need to configure ipchains on a 2.2 kernel, or iptables on a 2.4 or 2.6 kernel.
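As a concrete sketch (not taken from the HOWTO itself), a minimal iptables ruleset for a 2.4 or 2.6 kernel might look like the following. The interface names and addresses are assumptions for illustration; treat this as a fragment to adapt, not a complete policy.

```shell
# Minimal iptables ruleset sketch. Assumptions: eth0 faces the world,
# eth1 faces the internal 192.168.1.0/24 network. Run as root.

# Default stance: drop everything not explicitly allowed.
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

# Allow loopback traffic, and replies to connections we initiated.
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow ssh in from the internal network only.
iptables -A INPUT -i eth1 -s 192.168.1.0/24 -p tcp --dport 22 -j ACCEPT

# Log (rate-limited) whatever falls through to the DROP policy.
iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "dropped: "
```

A real firewall would also need FORWARD rules for traffic crossing between the two interfaces; here the DROP policy simply blocks all forwarding.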

A firewall is indispensable to the security of a network. Running it on a dedicated machine, rather than as one more service on a general-purpose host, makes a real difference to how well it can be secured.

Since a firewall is meant to filter traffic to and from your network, you ideally want it to sit between your network and the rest of the world. Your firewall would have two network interfaces, one of which connects to your network, and the other to the world.

Firewall rules decide which packets get from one side to another.

A firewall is generally implemented at the kernel level, and can be fast provided it works completely in memory and does not have too many rules. Ideally, you only want your firewall to filter IPs, and let a higher level service handle service based filtering, for example, have tcpd check if anyone is trying to connect to restricted ports on your system, or use a proxy based system to restrict websites that your users may visit. Better logging can be done at these levels, and they are less demanding on the kernel.

Services

Run only the services that you require and no more. On a desktop system, which you will not access remotely, there should not be any services. Run different levels of services on different machines.

You can find out which services are running by using the ps and netstat commands:

ps auxfw will show you a tree structure of processes running, while netstat -ap and netstat -altpu will show you which processes are listening on network ports.

You may also want to do a port scan of your machine using a tool like nmap (remember, Trinity used it in the Matrix Reloaded), or a security scanner like nessus.

Some really unsafe services include rsh, rlogin and rexec. Many versions of sendmail and bind have well known security holes. Also disable echo, discard, finger, daytime, chargen and gopher if you don't use them.

Wherever possible, use an encrypted protocol rather than a plain text one. For example, use ssh instead of telnet/rsh, scp instead of ftp, and IMAP over SSL instead of POP3.

On a single user system, you should also disable identd, but on a multiuser system, this is a good way of tracking down problem users on your system.

You also want to use TCP wrappers to start your services. A TCP wrapper is basically an intermediary between inetd and the service that actually serves a connection, like say telnet. Tcpd will check to see if the connecting host is allowed to connect to this service. Different kinds of access control and logging can be done through TCP wrappers.

TCPWrappers

TCPWrappers, and their associated configuration files /etc/hosts.deny and /etc/hosts.allow, help a system administrator set up good access control for his system.

First, some background. Most unix systems use what is called a superserver to run other servers. The purpose of a superserver is basically to listen on all ports that you want people to connect to, and when a connection is made to a port, it spawns the relevant server. The advantage of such a setup is threefold.

Primarily, these other servers do not need to implement socket I/O routines. They simply communicate through stdio, and the superserver connects the socket's I/O streams to stdio before spawning a server.

Secondly, we keep our process table small by not running all servers all the time. Only one server runs all the time, and servers that are never required are never started. A server that is required is run only for the duration that it needs to serve a connection.

Finally, and really as a consequence of such a set up, we can implement security centrally, and have all servers benefit from it, even if they have no idea that it exists. In fact, these servers know nothing about security at all.

Now, in older systems, the superserver was inetd, or the Internet Daemon. In newer systems, it has been replaced with xinetd, which is simply an extended inetd. xinetd can implement security internally, while inetd spawns an external security handler, most commonly tcpd.

The configuration files for these servers are usually /etc/inetd.conf and /etc/xinetd.conf, /etc/xinetd.d/*. We aren't concerned too much about the contents of these files, except what services are started by them. Most commonly, the superserver will start services like telnetd, ftpd, rlogind, rshelld, rstatd, fingerd, talkd, ntalkd, etc. Many of these may not be required, and can be stopped. In inetd, this would involve commenting out the relevant line in inetd.conf, while in xinetd, this would involve setting disable = yes in /etc/xinetd.d/service_name.
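As an illustration (the exact paths and fields vary by distribution), disabling telnet under xinetd is a one-line change to its service file:

```
# /etc/xinetd.d/telnet -- illustrative sketch
service telnet
{
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/sbin/in.telnetd
        disable         = yes
}
```

After changing the file, signal or restart xinetd so it rereads its configuration.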

Disabling these services altogether may cause an inconvenience for your users. For example, you may want to allow nfs connections from certain hosts within your network, but disable them for everyone else. Furthermore, several services have well known exploits, and detecting when someone is trying these is a good early warning system for a possible attack.

This is where tcpwrappers, or tcpd (the tcp daemon) as it is known, comes in. TCP wrappers are basically wrappers around your services. They are implemented in two ways: either through the tcp daemon, which starts the requested service after doing access control checks, or through libwrap, which may be linked into the server itself. Either way, the wrappers rely on the files /etc/hosts.{deny,allow}.

The full intent and use of tcp wrappers is well documented, and is shipped with all linux distributions. It can be found in /usr/doc/tcp_wrappers/* or /usr/share/doc/tcp_wrappers/*. Here I will outline the most important usage.
How exactly does tcpd come in to play?
Instead of directly starting the server, inetd can start tcpd, and tell tcpd to start the correct server after performing any checks that it wants. If one opens /etc/inetd.conf, one will find against the telnet and ftp lines that the daemon to be spawned is tcpd, with in.telnetd/in.ftpd as arguments.
telnet  stream  tcp     nowait  root    /usr/sbin/tcpd  in.telnetd

The other values on the line aren't important for this discussion, and you'll figure them out soon enough.

Now, in execve parlance, the first argument passed in the argument vector corresponds to argv[0], i.e., the name that the program should call itself. tcpd takes this hint, and calls itself in.telnetd (which is what will show up if you list running processes). It performs its checks, and then execs in.telnetd, passing all file descriptors on.

Thus, we have tcpd, which has a single access control file, doing checks for most daemons. Furthermore, since tcpd comes into the picture only while a connection is being established, and leaves the scene thereafter, there is no overhead involved (except for that during checking, which is what we want).

Now, not all servers are started through inetd. Many, like sendmail, apache, and sshd, run as standalone servers. These servers can have tcpd compiled into them using libwrap.a and tcpd.h. They will then automatically check with hosts.allow and hosts.deny.

Now all these options must be selected while compiling tcpd and libwrap, but the defaults are decently secure anyway.

To check the configuration of your tcpd wrappers, use /sbin/tcpdchk. Give it the -v flag for more information.
The hosts.{deny,allow} files
Wietse Venema, the creator of tcpd, also developed a 'language' for specifying the access control rules that govern who can use which service.
These rules are specified in hosts.allow and hosts.deny. The normal strategy is to deny all connections, and explicitly allow only the services that you want people connecting to. For example, your hosts.deny would read like:
ALL: ALL 

This means deny all services to requests from all addresses.

Remember that hosts.allow is checked first, then hosts.deny, and the first rule that matches is applied. A connection that matches a rule in hosts.allow is granted; failing that, a match in hosts.deny denies it; if neither file matches (or hosts.deny is empty or missing), access is granted. With the blanket ALL: ALL above in hosts.deny, anything not explicitly allowed in hosts.allow gets denied. The extended ACL language also allows deny rules to be specified in hosts.allow, so you really only have to manipulate a single file.

Rather than go into the details of all possible configurations, I'll just paste my own hosts.allow file here, and explain it line by line.
#
# hosts.allow This file describes the names of the hosts which are
#  allowed to use the local INET services, as decided
#  by the '/usr/sbin/tcpd' server.
#

# allow everyone to connect to 25.  ACL implemented in sendmail
sendmail: ALL

# ssh from certain hosts only.
sshd: 202.141.152.16 202.141.151. 202.141.152.210 127.0.0.1 : ALLOW

# Allow people within the domain to talk to me
in.talkd in.ntalkd: 202.141.151. 202.141.152. LOCAL : ALLOW
in.fingerd: 202.141.151. LOCAL EXCEPT 202.141.151.1 : ALLOW

# Set a default deny stance with back finger "booby trap" (Venema's term)
# Allow finger to prevent deadly finger wars, whereby another booby trapped
# box answers our finger with its own, spawning another from us, ad infinitum

ALL : ALL : spawn (/usr/sbin/safe_finger -l @%h | /bin/mail -s "Port Denial noted %d-%h" hostmaster) & : DENY

The above file starts off by allowing anyone to connect to my sendmail daemon. The sendmail daemon is in a better position to do access control, as this needs to be done based on sender and recipient address rather than IP address. If you suspect that certain hosts are unnecessarily hitting you on port 25, then you can block them explicitly.

The next line allows ssh connections from two specific hosts in the 202.141.152. network, and all hosts in the 202.141.151. network. I may need to connect to my machine from different places on my network. These connections would travel over a broadcast network, so I prefer ssh for connecting.

I allow finger and talk from within my domain, but not from 202.141.151.1.

Finally, I set a booby trap for anyone connecting to services that they are not authorised to access. A reverse finger is done on the attacking host, and a mail is sent to the administrator of my machine with this information.

Intrusion Detection

Intrusion Detection is the ability to detect people trying to compromise your system. Intrusion detection is divided into two main categories, host based, and network based. Basically, if you use a single host to monitor itself, you are using a host based IDS, and if you use a single host to monitor your entire network, you are using a network based IDS. Most home users would use a host based IDS, while universities and offices would have a network based IDS.

There are many Intrusion Detection Systems (IDS) for linux; the most popular, for both host based and network based detection, is snort. Others are portsentry and lids - the Linux Intrusion Detection System [inactive as of 2013]. Going into the details of each of these is beyond the scope of this document, but all these tools have very good documentation.

In addition to an IDS, you would also want to use an Integrity checker, which basically makes sure that none of your binaries and critical configuration files have been modified.

When a cracker compromises a system, the first thing he's likely to do is create a backdoor for himself. There have been many instances where critical binaries like the ssh daemon have been replaced with trojaned versions that capture passwords and mail them back to the cracker. This then gives the attacker free access to the system, even if the original hole is plugged.

Tools like tripwire, AIDE, and FreeVeracity check the integrity of your binaries. Of the above, FreeVeracity is reputed to be very easy to set up and use.

Typically, one would create an integrity database when the system is installed, and update it whenever new binaries are installed. The database should always be backed up onto read-only media like a CD. The checker should be run everyday through a cron tab entry, to check all critical files. If the tool finds any discrepancies, it sends a mail to a pre-defined email address.
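As a sketch, the daily run could be a crontab entry like this one. The aide binary, its --check flag, and the mail address are illustrative assumptions; substitute the integrity checker you actually use:

```
# /etc/crontab fragment: run the integrity check at 4am daily and mail the report
0 4 * * * root /usr/bin/aide --check 2>&1 | /bin/mail -s "integrity check" hostmaster
```

Because the report arrives by mail, a silent day is itself information: if the mail stops coming, check whether the cron job (or the checker) has been tampered with.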

The integrity checker should be configured well to prevent false alarms, which make it more of a hindrance than an aid.
So, how do you know whether you've been compromised or not?
CERT has released an advisory to help you identify if an intruder is on your system.

In short though:
  • Check your log files
  • Look for setuid/setgid files, especially if they are owned by root
  • Check what your integrity checker has to say about your system binaries
  • Check for packet sniffers which may or may not be running
  • If you didn't install it, it shouldn't be there
  • Check your crontabs and at queues
  • Check for services that shouldn't be running on your system
  • Check /etc/passwd for new accounts/inactive accounts that have suddenly become active
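A couple of these checks are one-liners. The sketch below hunts for setuid/setgid files; it demonstrates on a scratch directory, but pointed at / (with -xdev to stay on one filesystem) the same find invocation is the real check:

```shell
# Create a scratch directory containing one setuid file to demonstrate.
tmp=$(mktemp -d)
touch "$tmp/suspect"
chmod 4755 "$tmp/suspect"

# The check itself: list setuid/setgid regular files under a given root.
# On a live system: find / -xdev \( -perm -4000 -o -perm -2000 \) -type f
find "$tmp" -xdev \( -perm -4000 -o -perm -2000 \) -type f

rm -rf "$tmp"
```

Compare the output against a saved list from a known-good state; a new setuid binary you didn't install is a red flag.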
Full details, including how to do the above are listed in the abovementioned document.
So what do you do once you know that you've been compromised?
Well, the first thing is not to panic. It is very important not to disturb any trails that the cracker has left behind, as these can all be used to determine who the attacker was, and even exactly what he did. Most importantly, don't touch anything on the system.

Step one is to disconnect the machine from the network. This will not only prevent further attacks, it will also prevent the attacker from covering up his trails if he finds out that he's been caught.

To prevent any data from being changed, you should also mount your file systems read-only.

Copy all your log files out to another system, or a floppy disk, where you can examine them safely.

Analyse the saved data to determine what the attacker did to break in and what he did after that.

Restore your system from known pre-compromise backups.

Again, CERT has published a white paper on recovering from an attack.

Testing Security

There are many commercial organisations that will test the security of your system for you. These are costly though. A cheaper alternative may be to use one of the many web based security scanners to test your system.

http://www.hackerwhacker.com
http://maxvision.net/#free
http://www.grc.com
http://privacy.net/analyze
http://www.secure-me.net/

You shouldn't blindly trust what they tell you, but it can be interesting to monitor your logs and network while an attack is in progress.

You can also test yourself by using your own port scanner and security scanners.

Nmap is the most popular and widely used port scanner around, both by black hats and white hats. It can also determine which OS you use, which is what a cracker would need to know to find OS specific vulnerabilities.

If nmap can't figure out which OS you're using, that could slow down your attacker for a while.

SATAN (Security Analysis Tool for Auditing Networks) was developed by Dan Farmer of Sun Microsystems and Wietse Venema (of tcpd and postfix fame), then at Eindhoven University of Technology, Netherlands, and currently at IBM. It was written with the specific intent of doing everything that an attacker would do to gain unauthorised access. This tool has since been replaced with a next generation version called SAINT.

Nessus has a plugin based architecture. Vulnerability checks are written as plugins, which means that you can check for new holes as they become publicly known, without upgrading your entire binary.

Viruses and Trojans

The real question in this section is: is linux vulnerable to viruses and trojans?

Practically, no. Technically though, it is possible.

Due to the design of Linux, it is difficult for viruses to spread far within a system, as they are confined to infecting the user space of the user who executes them. Of course, this is a problem if infected files are launched by root, but as a security conscious individual, you wouldn't be running untrusted files as root, would you?

It is theoretically possible for a virus launched by a regular user to escalate its privileges using system exploits; however, a virus with this capability would be quite sizable, and difficult to write. As of this date, few viruses have actually been discovered for Linux, and the ones that have been discovered aren't worth losing sleep over. This will undoubtedly change with time.

Worms like l10n and Top Ramen only worked because the systems were insecure to begin with. An insecure ftpd/rstatd was used to automatically gain access to machines, and use them as further launching grounds.

Viruses do exist for Linux, but are probably the least significant threat you face. On the other hand, Linux is definitely vulnerable to trojans.

A trojan is a malicious program that masquerades as a legitimate application. Unlike viruses, they do not self replicate, but instead, their primary purpose is (usually) to allow an attacker remote access to your computer or its resources. Sometimes, users can be tricked into downloading and installing trojans onto their own computers, but more commonly, trojans are installed by an intruder to allow him future access to your box.

Trojans often come packaged as "root kits". A "root kit" is a set of trojaned system applications to help mask a compromise. A root kit will usually include trojaned versions of ps, getty, passwd.

At this point in time, virus scanners for Linux are aimed at detecting and disinfecting data served to Windows hosts by a Linux file/mail server. This can be useful to help stop the spread of viruses among local, non-Unix machines. Due to the lack of viruses for Linux, there are presently no scanners to detect viruses within the Linux OS, or its applications. Trojans present a greater threat to the Linux OS itself than do viruses, and can be detected by regularly verifying the integrity of your binaries, or by using a rootkit detector.
Trojan Detectors:
Chkrootkit: Checks a Linux system for evidence of having been rootkitted.

Root Kit Detector: A daemon that alerts you if someone attempts to rootkit you.
Virus Scanners for Linux File Servers:
AMaViS: A sendmail plugin that scans incoming mail for viruses.

AntiVir for Linux: Scans incoming mail and ftp for viruses.

Interscan Viruswall: A Firewall-1 add-on that scans ftp, http, and smtp for viruses.

Sophos AntiVirus: Checks shares, mail attachments, ftp, etc. for viruses.

Finally, a system administrator must understand that security is a process. You need to keep yourself up to date with all the latest security news. Subscribe to the securityfocus, cert, and other security related mailing lists. Stick to the comp.os.linux.security newsgroup. That's also a good place to post your queries - if they haven't already been answered (hey, most of this doc was from the faq in there).

Monitor your log files regularly. Use remote logging to protect against modified log files. Protect your system binaries. Keep them on read-only partitions if required.
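With the stock sysklogd, remote logging is a one-line addition to /etc/syslog.conf. Here 'loghost' is a placeholder for your central log server, which must run syslogd with the -r flag to accept remote messages:

```
# /etc/syslog.conf fragment: send a copy of everything to the central log host
*.*     @loghost
```

Even if an intruder scrubs the local logs, the copies on the log host survive, provided that machine itself is well secured.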

The only way to protect yourself completely, is to be aware of what is happening all the time.

References:

The comp.os.linux.security faq
The linux security howto
The linux administrator's security guide
The linux firewall and proxy server howto
CERT advisories
Security Focus

Friday, December 31, 2004

Nvidia GeForce MX 2 with linux 2.6.6+

I've been using a GeForce MX 2 for well over a year. It worked quite well with RH8, FC1 and Knoppix. I needed to use the proprietary drivers provided by Nvidia to get hardware acceleration though.

Motherboard: ASUS A7N266

A couple of months ago, upgraded to FC2, and the nvidia driver wouldn't work anymore. I had to run back to Bangalore, and since no one at home really needed hardware acceleration, I switched back to the free nv driver from X (well, I was using x.org now).

This December... well, yesterday actually, I decided to try out 3ddesktop, but of course, this requires hardware acceleration. So I started. Went through a lot to get it to work, and the details are boring. However, what I learnt could help others, so I'll document that.

The problem:

When starting X with the nvidia driver, the screen blanked out and the system froze. Pushing the reset button is the only thing that worked.

Solutions and Caveats:

Get the latest NVIDIA drivers and try.

At the time of writing, the latest drivers from the nvidia site are in the 1.0-6629 package. This doesn't work with the GeForce MX 2, and many other older chips, so if you try to use it, you'll spend too much time breaking your head for nothing. Instead, go for the 1.0-6111 driver, which does work well...

On kernels below 2.6.5 that is. FC2 ships with a modified 2.6.5 kernel that has a forced 4K stack and CONFIG_REGPARM turned on. The NVIDIA drivers are (or were) compiled with 8K stacks and do not work with CONFIG_REGPARM turned on. I'd faced similar problems when I first used the nvidia driver, and recompiling my kernel with 8K stacks fixed the problem.

Searching the net, I came across dozens of articles that spoke about 4K stacks v/s 8K stacks in the 2.6 kernel, but also said that from 5xxx onwards, the nvidia driver supported 4K stacks and CONFIG_REGPARM.

I tried getting prebuilt kernels (smaller download) with 16K stacks, but it didn't help, so finally decided to download the entire 32MB kernel source for 2.6.10.

While compiling, I came across this thread on NV News (pretty much the best resource for nvidia issues on linux). In short, the 6111 driver wouldn't work with kernels above 2.6.5 or something like that. I needed to patch the kernel source.

The patch was tiny enough: in arch/i386/mm/init.c, add a single line:
EXPORT_SYMBOL(__VMALLOC_RESERVE);
after the __VMALLOC_RESERVE definition.

Stopped compilation, made the change and restarted compilation.

Also had to rebuild the NVIDIA driver package, again as documented in that thread:

- extract the sources with the command: ./NVIDIA-Linux-x86-1.0-6111-pkg1.run --extract-only
- in the file "./NVIDIA-Linux-x86-1.0-6111-pkg1/usr/src/nv/nv.c" replace the 4 occurrences of 'pci_find_class' with 'pci_get_class'
- repack the nvidia installer with the following command:

sh ./NVIDIA-Linux-x86-1.0-6111-pkg1/usr/bin/makeself.sh --target-os Linux --target-arch x86 NVIDIA-Linux-x86-1.0-6111-pkg1 NVIDIA-Linux-x86-1.0-6111-pkg2.run "NVIDIA Accelerated Graphics Driver for Linux-x86 1.0-6111" ./nvidia-installer

The new installer is called "NVIDIA-Linux-x86-1.0-6111-pkg2.run"

With these changes, the driver compiled successfully and I was able to insert it.

I had a minor problem when rebooting. usbdevfs has become usbfs, so a change has to be made in /etc/rc.sysinit. Change all occurrences of "usbdevfs usbdevfs" to "usbfs none".
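One way to make that edit is with sed. The sketch below works on a scratch copy; on the real system you would point it at /etc/rc.sysinit and keep the .bak backup:

```shell
# Make a scratch copy standing in for /etc/rc.sysinit.
printf 'mount -t usbdevfs usbdevfs /proc/bus/usb\n' > /tmp/rc.sysinit.demo

# Replace every occurrence, keeping a backup with a .bak suffix.
sed -i.bak 's/usbdevfs usbdevfs/usbfs none/g' /tmp/rc.sysinit.demo

cat /tmp/rc.sysinit.demo
```

The mount line comes out as "mount -t usbfs none /proc/bus/usb", which is the form the newer kernels expect.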

Once you've done this, X should start with acceleration on.

3ddesktop is slow, but it starts up. Tux racer works well.

What I think is really cool about this solution, is that I did not have to make a single post to a single mailing list or forum. All the information I needed was already on the net. It was just a matter of reading it, understanding what it said, and following those instructions. For example, there were many threads on screen blanking with the 6629 driver, and somewhere in there was mentioned that the new driver didn't support older hardware, but that the 6111 did. That was the key that brought me the solution. I knew the 6111 didn't work out of the box, because I'd already tried it, but now I could concentrate on threads about the 6111 exclusively, only looking for anything that sounded familiar.

Saturday, November 13, 2004

/home is NTFS

A little over a year ago, at my previous company, I had to change my second harddisk on my PC. It was a bit of an experience, because the service engineer who came to do the job had never encountered linux before, but seemed to think that he could deal with it just like he did windows.

The engineer put in the new hard disk as a secondary master (my old one was a secondary slave to the CDD).

He then booted using a Win 95 diskette... hmm... what's this? Then started some norton disk copy utility. It's a DOS app that goes into a graphics mode... why?

Then started transferring data... hey, wait a minute, I don't have any NTFS partitions. Hit reset! Ok, cool down for a minute. I've got three ext3 partitions. So, now it's time to assess the damage.

Boot into linux - hmm, /dev/hdd1 (/home) won't mount, down to root shell. Get /home out of /etc/fstab and reboot. Ok, runlevel 3 again. Check other partitions - hdd5 (/usr) ... good, hdd6 (/usr/share) ... good... everything else is on hda... good. all my data, is in /home ... not good

So, I start trying to figure out how to recover. google... no luck. google again... one proprietary app, and a couple of howtos on recovering deleted files from ext2 partitions... no good. google again, get some docs on the structure of ext2, and find a util called e2salvage which won't build. time to start fooling around myself.

start by reading man pages. tune2fs, e2fsck, debugfs, mke2fs... so I know that mke2fs makes backups of the superblock, but where are they?

mke2fs -n /dev/hdd1... ok, that's where
dd if=/dev/hdd5 of=superblock bs=4096 count=2
hmm, so that's what a superblock looks like
dd if=/dev/hdd5 of=superblock2 bs=4096 count=2 skip=32768
hey, that's not a superblock. Ok, try various combinations, finally get this:
dd if=/dev/hdd5 of=superblock2 bs=1024 count=8 skip=131071
that's 32768*4-1
Ok, so that's where the second superblock is.

Check hdd1 - second superblock blown away as well. Look for the third... 98304*4-1=393215.. ok, that's good. should I dd it to the first? Hmm, no, e2fsck can do that for me... but, I shouldn't work on the original partition. Luckily I have 30GB of free space to play with, and /home is just 6GB.
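Spelled out in shell, the offset arithmetic for dd's 1024-byte units against the filesystem's 4096-byte blocks:

```shell
# Each 4K filesystem block spans 4 of dd's 1K blocks, so a superblock
# copy at filesystem block N sits at 1K offset N*4; skipping one block
# less (N*4-1) starts the read just before the boundary, as above.
ratio=$(( 4096 / 1024 ))        # filesystem block size / dd block size

echo $(( 32768 * ratio - 1 ))   # skip value for the second copy: 131071
echo $(( 98304 * ratio - 1 ))   # skip value for the third copy: 393215
```

These are the skip= values used with bs=1024 in the dd commands above.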

dd if=/dev/hdd1 of=/mnt/tmp/home.img
cp home.img bak.home.img


Now I start playing with home.img.

The instructions in e2salvage said to try e2fsck before trying e2salvage, so I try that.

e2fsck home.img
no use... can't find superblock
e2fsck -b 32768 -B 4096 home.img
works... starts the fsck, and gets rid of the journal. this is gonna take too long if I do it manually, so I quit, and restart with:
e2fsck -b 32768 -B 4096 -y home.img
The other option would have been to -p(reen) it, but that wouldn't give me any messages on stdout, so I stuck with -y(es to all questions).

2 passes later it says, ok, got whatever I could.

mount -oloop,ro home.img /mnt/home
yippeee, it mounted
cd /mnt/home; ls
lost+found

ok, so everything's in lost+found, and it will take me ages to sift through all this. Filenames might give me some clues.
find . -type f | less
Ok, scroll, scroll, scroll... hmm, this looks like my home directory... yes.
cp -a \#172401 /mnt/home/philip
scroll some more, find /usr/share/doc (which I keep in /home/doc and symlink from /usr/share/doc). move it back to /usr/share/doc. find jdk1.1.8 documentation... pretend I didn't see that.

find moodle home - yay. find yabb home - yay again. Ok, find a bit more that's worth saving, and copy it over. Many files in each of these directories are corrupted, including mailboxes, and some amount of test data, but haven't found anything serious missing.

All code was in CVS anyway, so rebuilt from there where I had to.

Now decided to try e2salvage anyway, on the second copy of hdd1. It wouldn't compile. Changed some code to get it to compile, it ran, found inodes, directories and the works, then segfaulted. The program tries to read from inode 2, which doesn't exist on my partition, and then it tries to printf that inode without checking the return value.

I'd have fixed that, but the result is used in further calculations, so I just left it at that. The old hard disk was taken away, so I don't have anything to play with anymore.

It'll take me a little while to figure out all that was lost, but so far it doesn't look like anything serious.
