Wednesday, 29 December 2010

Hibernate from E17 on Arch linux

Earlier this year, I posted about modifying {install_folder}/etc/enlightenment/sysactions.conf to obtain the "Sleep" feature from Enlightenment on Arch linux. I did not mention hibernation because I had not got it to work. Hibernation can be enabled likewise, by modifying sysactions.conf to replace
action:   hibernate /etc/acpi/ force
with
action:   hibernate /usr/sbin/pm-hibernate
However, to enable hibernation I also had to add "resume" to the HOOKS variable defined in /etc/mkinitcpio.conf and add resume=/path/to/swap to my kernel line in /boot/grub/menu.lst.
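
The two changes can be sketched as follows; the hook list and device names here are hypothetical placeholders, and after editing mkinitcpio.conf the initramfs has to be regenerated (on the Arch of that era, with mkinitcpio -p kernel26):

```
# /etc/mkinitcpio.conf -- add "resume" before "filesystems"
# (hook list abbreviated; keep your existing hooks)
HOOKS="base udev autodetect ... resume filesystems"

# /boot/grub/menu.lst -- append resume= to the kernel line
# (/dev/sda2 is a placeholder for your swap partition)
kernel /vmlinuz26 root=/dev/sda1 ro resume=/dev/sda2
```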

Sunday, 26 December 2010

Sharing internet connection through Wlan

I wanted to share my internet connection with my roomies over wireless LAN. I had done it using bridging in Windows; but I hardly use Windows, so I needed a solution in linux. I started discussing it on IRC. At #gentoo, I met a Frenchman named Francis Galiegue, who gave me an elegant solution.

We have three laptops and the connection is through wired ethernet. So, the solution was to use one laptop (mine) as a wireless access point giving access to the internet. This setup takes for granted that the laptop has network connectivity to the internet, that it has a WiFi device (internal or external) recognised by linux, and that the kernel is recent enough (2.6.27 or later is recommended). Once you meet the prerequisites, there are several steps, four of which are solved elegantly by four linux daemons:

0. without even configuring the WiFi device, check that the laptop can connect to the internet;
1. set up basic iptables rules (see rule set 1);
2. cook up a set of rules so that the laptop can access the Internet via the appropriate device (see rule set 2);
3. configure a DHCP server (using dhcpd) and complete the firewall rule set to allow it to work (see rule set 3);
4. configure a name server (using BIND) and complete the firewall rule set to allow it to work (see rule set 4);
5. configure an access point (using hostapd) - and no, no firewall rules are necessary for the access point to operate (iptables operates at the network layer, hostapd operates below that level);
6. complete the firewall configuration so that "client" computers (the other laptops) can actually connect to the Internet.

Rule set #1:

The goal here is to create a generic chain which uses Linux netfilter's connection tracking abilities. Here we use the "state" module, which recognizes four states:
  1. ESTABLISHED: the incoming packet is part of a connection known to Linux's connection tracking;
  2. RELATED: the incoming packet either directly relates to, or establishes a new connection related to, a connection known to Linux's connection tracking - such packets are of two types:
    1. ICMP messages (such as: "no route to host", "access prohibited", others);
    2. connection triggers from builtin modules (such as FTP data connections, others);
  3. INVALID: the incoming packet has an invalid payload (header length and/or checksum mismatch at the network layer or upper);
  4. NEW: the incoming packet tries to initiate a new connection.
We create a new chain, named "connstate" (ie, "connection state"), attached to the "filter" table. The purpose of this chain will be to handle all four connection states known to the "state" module. Eventually, all packets, either incoming (INPUT), outgoing (OUTPUT) or going through (FORWARD) will go through this chain, except for the loopback interface (lo), which is special:

# Create the chain - note that by default, if the table (the -t option of
# iptables) is not specified, the default is filter - this is what we want
iptables -N connstate
# All packets of connections already known to netfilter's state tracking
# (ESTABLISHED) or directly related (RELATED) should pass
iptables -A connstate -m state --state ESTABLISHED,RELATED -j ACCEPT
# All packets deemed invalid by netfilter should be dropped
iptables -A connstate -m state --state INVALID -j DROP
# From then on, packets have to be NEW. One thing: if the packet is TCP and does
# not have the SYN bit set (which it should have, see RFC 793), it should be
# dropped...
iptables -A connstate -m state --state NEW -p tcp ! --syn -j DROP
# Any other NEW packets are returned to the caller
iptables -A connstate -m state --state NEW -j RETURN
# Normally, no packet ever should reach this point, netfilter must/will have
# sorted them out earlier on. If not, this is clearly a bug, so log them at the
# highest log level available (CRIT == critical), and drop them for safety.
iptables -A connstate -j LOG --log-level CRIT --log-prefix "CONNSTATE BARF: "
iptables -A connstate -j DROP
# There is one exception to the rules above: the loopback interface. Packets
# going through the loopback will not go through the normal chain processing,
# we need to accept them unconditionally at the input and output phase.
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# From here on, the first thing to do is to make all packets of all builtin
# chains of the filter table go through this chain.
for i in INPUT OUTPUT FORWARD; do iptables -A $i -j connstate; done

At this stage, this chain enforces no restriction on incoming or outgoing traffic, except for two quite important things:
  1. no TCP and/or IP header fragmentation attack is possible anymore: as soon as you use connection tracking (as we do here), the firewalling engine must have all protocol headers to decide how to deal with the whole packet (attempted fragmentation attacks will be deemed INVALID and therefore DROPped);
  2. some stack implementations disobey RFC 793 with respect to TCP connection initiation, since they don't set the SYN bit on the initial TCP packet of the connection: these will be dropped as well (observed with some versions of Windows).

Rule set #2:

We proceed to allow all outgoing traffic on the wired interface (eth0), accept rate-limited ICMP (ping) traffic coming in from it, and finally set the default policy of all builtin chains to DROP.

iptables -N local_to_eth0
iptables -A local_to_eth0 -j ACCEPT
iptables -A OUTPUT -o eth0 -j local_to_eth0
iptables -N ping
iptables -A ping -p icmp --icmp-type echo-request -m limit --limit 2/sec -j ACCEPT
iptables -A ping -p icmp --icmp-type echo-request -m limit --limit 2/sec -j DROP
iptables -N eth0_to_local
iptables -A eth0_to_local -j ping
iptables -A INPUT -i eth0 -j eth0_to_local
for i in INPUT OUTPUT FORWARD; do iptables -P $i DROP; done

Rule set #3:

We proceed to allow ICMP (ping) and DHCP traffic coming in from the wifi interface (wlan0).

iptables -N dhcp
iptables -A dhcp -p udp --dport 67:68 -j ACCEPT
iptables -N wlan0_to_local
iptables -A wlan0_to_local -j ping
iptables -A wlan0_to_local -j dhcp
iptables -A INPUT -i wlan0 -j wlan0_to_local

There are two ways of being a gateway:

  1. configure a dhcp server and bind
  2. use dnsmasq

We are using the former method here. So, you might want to query your package manager for dhcpd and bind to see whether they are installed.

Next, find out your domain name (hostname -f). Let us say your domain name is "domain_name". Now pick a hostname for your system, say "hostname.domain_name". You might opt for a two-component domain name. Now, proceed to assign the selected hostname to your system: pick an IP in the RFC 1918 range to assign to this name, then edit /etc/hosts and add a line of the form: <ip> hostname.domain_name hostname

Now let us ensure that the hostname is assigned to the machine at boot time. It can be done by editing /etc/conf.d/net and /etc/conf.d/hostname in Gentoo linux, or by editing /etc/rc.conf in Arch linux. (Set it to the fully qualified hostname, i.e. "hostname.domain_name", not just "hostname".)

Next, we set up wlan0 with the chosen address and a /24 subnet mask. It can be done using ifconfig as follows:

ifconfig wlan0 <ip> netmask

This can also be put into /etc/rc.conf so that it is done each time during boot. You may wish to cross-check the IP of the wifi device (see ifconfig output and try to ping the IP).
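
With a made-up RFC 1918 address, say, the assignment and the cross-check would look like:

```
# hypothetical address -- substitute the one you picked
ifconfig wlan0 netmask
# verify: the address should show up here...
ifconfig wlan0
# ...and answer pings
ping -c 1
```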

Now, we configure the DNS server daemon, named. First of all, we are going to create two zone files: one for "domain_name" and the other for the reverse zone. The forward zone file under /var/named/pri/ is as follows:

$TTL 1d
@       IN      SOA     hostname.domain_name.  (
                                      2010102401 ; Serial
                                      28800      ; Refresh
                                      14400      ; Retry
                                      3600000    ; Expire
                                      86400 )    ; Minimum
              IN      NS

5 IN PTR hostname.domain_name.
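
The fragment above has lost its host names and addresses; for orientation, a forward zone file of this shape, with every name and address a made-up placeholder, would look roughly like:

```
$TTL 1d
@         IN  SOA  hostname.domain_name. admin.domain_name. (
                       2010102401 ; Serial
                       28800      ; Refresh
                       14400      ; Retry
                       3600000    ; Expire
                       86400 )    ; Minimum
          IN  NS   hostname.domain_name.
hostname  IN  A
```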

Now, we edit the named.conf as follows:

// /etc/named.conf

acl "trusted" {; };

options {
        directory "/var/named";
        pid-file "/var/run/named/";
        auth-nxdomain yes;
        datasize default;
        // Uncomment these to enable IPv6 connections support
        // IPv4 will still work:
        // listen-on-v6 { any; };
        // Add this for no IPv4:
        // listen-on { none; };
        listen-on {;; };

        // Default security settings.
        allow-query { ; };
        allow-recursion { ; };
        allow-transfer { none; };
        allow-update { none; };
        version none;
        hostname none;
        server-id none;

        forward first;
        forwarders {
                // The service provider's DNS first
                ;                // Level3 Public DNS
                ;                // Level3 Public DNS
                ; // Google Open DNS
                ; // Google Open DNS
        };
};

view "internal" in {
        match-clients { trusted; };
        recursion yes;
        additional-from-auth yes;
        additional-from-cache yes;

        zone "localhost" IN {
                type master;
                file "";
                allow-transfer { any; };
        };

        zone "" IN {
                type master;
                file "";
                allow-transfer { any; };
        };

        zone "." IN {
                type hint;
                file "root.hint";
        };

        zone "domain_name" {
                type master;
                file "pri/";
                allow-update { };
                notify no;
        };

        zone "" {
                type master;
                file "pri/";
                allow-update { };
                notify no;
        };
};

logging {
        channel xfer-log {
                file "/var/log/named.log";
                print-category yes;
                print-severity yes;
                print-time yes;
                severity info;
        };
        category xfer-in { xfer-log; };
        category xfer-out { xfer-log; };
        category notify { xfer-log; };
};

Now, let's start the server. On Arch, I do it using the following command.

/etc/rc.d/named start

We then edit /etc/resolv.conf.head to add the following line

search domain_name

and /etc/resolv.conf.tail to add a nameserver line pointing at the address we assigned to wlan0.

Now, the nameserver can be tested using commands like the following.

host hostname.domain_name

You might like to add named to the list of daemons to be started at boot time. I prefer starting them each time.

Rule set #4:

# create a new chain
iptables -N local_to_wlan0
# the only rule of this chain is to accept
iptables -A local_to_wlan0 -j ACCEPT
# In the OUTPUT chain, every packet going out by wlan0 interface is branched out to
# local_to_wlan0 and as a result everything out to wlan0 is accepted.
iptables -A OUTPUT -o wlan0 -j local_to_wlan0

The following iptables rules are to allow the other machines in the LAN to access the DNS server.
Bind listens to TCP/53 and UDP/53 and thus traffic on those ports is accepted.

Rule set #5:

iptables -N named
iptables -A named -p udp --dport 53 -j ACCEPT
iptables -A named -p tcp --dport 53 -j ACCEPT
iptables -A wlan0_to_local -j named

We proceed to configure the dhcp server. The configuration in /etc/dhcpd.conf is as follows:

# We don't want dynamic DNS here
ddns-update-style none;

subnet netmask {
        option subnet-mask;
        option domain-name "domain_name";
        option domain-name-servers;
        option routers;

        pool {
                allow unknown-clients;
        }
}

Then we have a final set of rules to connect the two interfaces.

Rule set #6:

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -N wlan0_to_eth0
iptables -A wlan0_to_eth0 -j ACCEPT
iptables -A FORWARD -i wlan0 -o eth0 -j wlan0_to_eth0
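
One prerequisite the rule sets themselves cannot express: the FORWARD/MASQUERADE rules only have an effect if the kernel is allowed to forward IPv4 packets at all, which most distributions disable by default. A sketch of enabling it:

```
# enable forwarding for the running system
echo 1 > /proc/sys/net/ipv4/ip_forward

# and make it persistent across reboots by putting this in /etc/sysctl.conf
net.ipv4.ip_forward = 1
```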

The last piece is hostapd configuration. It is given as follows:

wpa_passphrase=<your passphrase>
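
Only the passphrase line of my configuration is shown above. For reference, a minimal /etc/hostapd/hostapd.conf for a WPA2 access point generally contains lines of the following shape; the interface, SSID and channel here are placeholders to adapt:

```
interface=wlan0
driver=nl80211
ssid=myssid
hw_mode=g
channel=6
wpa=2
wpa_passphrase=<your passphrase>
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
```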

Each time I want to start sharing, I use the following commands.

ifconfig wlan0 <ip> netmask
/etc/rc.d/iptables start
/etc/rc.d/hostapd start
/etc/rc.d/dhcpd start
/etc/rc.d/named start

For a rather detailed reading, check out this webpage.

Saturday, 25 December 2010

Sent mail appears in Thunderbird's Inbox

When retrieving mail with Thunderbird, I noticed that all the mails I had sent using the web interface were also downloaded, but they were put into the Inbox folder. Obviously, I would like Thunderbird to put them into the "Sent" folder. This did not change over multiple versions; but the solution is pretty simple: just create a filter redirecting mails sent from your address to the "Sent" folder.
Tools > Message Filters
Redirect sent mail to "Sent" folder

Saturday, 27 November 2010

Arch Linux review

I believe it is time I provided a substantial review of the Arch linux distribution. I have already written about my switch to Arch linux, so I will not reiterate those ideas here. I will just attempt to clarify your view of Arch linux, because most distributions are merely reflections of the viewpoints of their developers.

Arch linux is a rolling release distribution that provides bleeding edge software. That is all there is to it. Now, with bleeding edge software, there are always stability issues. If you are looking for rock-solid stability, go for Debian without any further thought. However, do consider the fact that if software is bleeding edge, then the bugs most likely lie upstream rather than with the distribution. This notion is reinforced by the fact that Arch does not modify packages; Arch branding is optional. When I started using Arch, I reported a firefox issue which turned out to be an upstream problem - all within 24 hours. Then there was the valgrind issue, which I did talk about at length. So, basically, Arch people respond fast. However, the IRC channel #archlinux is not very friendly (maybe I still have a hangover from #gentoo). Arch performs well and boots fast. It has decent support for hardware; installing drivers for my WLAN was not at all difficult.

On the downside, some aspects that are not addressed by upstream properly are not handled by Arch appropriately. [Many distributions actively patch packages before release; though upstream commitment of the patches is a different scenario.] It is basically Arch policy. I have two unsolved bugs about the Arch kernel: one for 2.6.34 and one for 2.6.35. Also, my SD-card reader was not working properly before 2.6.35.

Yesterday, I upgraded to 2.6.36 and there were two error messages at boot time. One was about tomoyo-init scripts and the other was about HDA-Intel being unknown hardware. The former is harmless and the latter was easy to resolve using the wiki. There still remain two things about Arch that annoy me:
Yet again, they are mostly upstream issues. Thus, Arch linux is a nice distribution; but it may not be the best choice for new users.

Tuesday, 9 November 2010

Cache_clean to clean up older packages

I have a 10 gig / partition and I had only about a gig left on it. So, I decided to clean up. I was aware that pacman stores multiple versions of packages, so I checked the size of /var/cache/pacman/pkg, and it turned out to be 4.9 gigs, i.e. more than 50% of my used space. The way to regain disk space was clear. I started looking up ways to clean up and found a python script in AUR. A snippet of the script's preview run is below.

['mercurial-1.5.1-1-x86_64.pkg.tar.xz', 'mercurial-1.5.2-1-x86_64.pkg.tar.xz', 'mercurial-1.5.3-1-x86_64.pkg.tar.xz', 'mercurial-1.5.4-1-x86_64.pkg.tar.xz', 'mercurial-1.6-1-x86_64.pkg.tar.xz', 'mercurial-1.6.2-1-x86_64.pkg.tar.xz', 'mercurial-1.6.3-1-x86_64.pkg.tar.xz']
['xulrunner-', 'xulrunner-', 'xulrunner-', 'xulrunner-', 'xulrunner-', 'xulrunner-', 'xulrunner-', 'xulrunner-']

Using the script, I cleaned up 2.5 GB within a minute. Elegant.
[N.B. - An alternative script in Go is available at github.]
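
The core idea of such a cleaner is small enough to sketch. The following Python fragment (not the AUR script itself, and with a deliberately naive version comparison) groups cache entries by package name and reports every file except the newest version of each package:

```python
import re
from collections import defaultdict

def version_key(ver):
    # naive version ordering: compare dot/dash-separated numeric chunks
    return [int(p) if p.isdigit() else p for p in re.split(r'[.\-]', ver) if p]

def stale_packages(filenames, keep=1):
    """Return all but the newest `keep` versions of each cached package.

    Cache entries look like name-version-release-arch.pkg.tar.xz,
    where the name itself may contain hyphens."""
    groups = defaultdict(list)
    for fn in filenames:
        base = fn.rsplit('.pkg.tar', 1)[0]   # strip .pkg.tar.xz / .pkg.tar.gz
        parts = base.split('-')
        name = '-'.join(parts[:-3])          # everything before version-release-arch
        groups[name].append((version_key(parts[-3]), fn))
    stale = []
    for entries in groups.values():
        entries.sort()
        stale.extend(fn for _, fn in entries[:-keep])
    return stale
```

On a real cache, one would feed it the contents of /var/cache/pacman/pkg and delete the reported files after eyeballing the list.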

Wednesday, 20 October 2010

Proxy problem with pacman

Recently, I was accessing the internet from behind a firewall, so I had set the http and ftp proxy environment variables. When I ran

pacman -Syu

it could download the database updates for the system packages, but could not retrieve the packages and hung. I found that wget was able to use the proxy fine. So, I tried

pacman --debug -S mkvtoolnix

to see what was happening and found that it was stuck after detecting the proxy. As wget was working, I tried uncommenting the following line in pacman.conf.

#XferCommand = /usr/bin/wget --passive-ftp -c -O %o %u

This time the upgrade was smooth and I upgraded 66 packages.

Tuesday, 19 October 2010

Sleep from E17 on Arch linux

Recently, I found that sleep and hibernate did not work from E17 while they were working fine from KDE. So, I decided to dig. I found that I had to configure them in sysactions.conf. I had installed e17 in /opt, so I searched for it there and found it in /opt/e17/etc/enlightenment. I changed the following line
action:   suspend   /etc/acpi/ force
to
action:   suspend   /usr/sbin/pm-suspend
as my system had no such script, and sleep then worked fine in E17.

Thursday, 23 September 2010

Connecting to BSNL on linux

I have used a BSNL broadband connection for some years now and have connected to the internet through BSNL on various linux distributions like Arch, Gentoo, openSUSE and Debian. I shall share the procedure for the same here. However, before the procedure, I would like to mention some basics.

  • Basic information
BSNL uses point to point protocol over ethernet (pppoe), so your kernel should have it enabled. If you want to share your connection through another ethernet port or wifi, then you might be interested in bridging options too. DNS information is obtained from BSNL; you do not have to set it.
  • openSUSE

The easiest of them all is getting it done in openSUSE. The package needed is kinternet, so ensure that you have it installed during initial setup [it is not selected by default in the installer]. In YaST, configure the ethernet interface, which most likely would be eth0: use a static address and subnet mask, and configure the gateway address your modem expects. The DNS is obtained from the ISP. Clicking the kinternet icon in the tray should get you connected.
  • Gentoo

Gentoo lets users configure their kernels. Make sure your kernel meets the requirements mentioned in the Basics section. I prefer using Roaring Penguin PPPoE scripts. Install them during your installation. To configure before first use, issue the following command as root.
Provide your username and password when asked. Enter 'server' when asked for DNS servers. The defaults should do for the rest. Edit /etc/rc.conf to configure the ethernet interface as follows:

eth0="eth0 netmask broadcast"


To start and stop the connection, use the commands pppoe-start and pppoe-stop. In case you can connect yet can't view web pages, set the obtained IP address of the pppoe interface as your DNS server. You may also need to specify the default gateway in rc.conf.
  • Debian

Debian also uses Roaring Penguin scripts. Configure as mentioned for Gentoo and Debian shall connect automatically.
  • Arch

Arch uses the same scripts. However, you will have to issue the commands as in Gentoo.

Saturday, 11 September 2010

BSNL DNS server poisoned

BSNL is one of the largest telephone and internet service providers in India; yet its negligence often tests your patience. Recently, two of BSNL's DNS servers
were poisoned. I found it out when whatever I entered in my URL bar took me to the same site. I checked with nslookup to find that all my DNS queries yielded the same result. The very next morning, I thought I should report it to the authorities so that they could mend it soon. However, when I called, instead of listening to what I had to say, the officer shouted at me and told me to ring some other department [which I didn't care to remember]. This was too much for me to take. We had a rather loud talk and I decided to let the parasite rot in the bureaucratic dirt he has lived in so far.

I changed my DNS server to Google Public DNS and my problem was solved. As for the other users: had they cared about it, the officer would have received enough complaints to welcome the information I was willing to share. However, BSNL has now changed the DNS servers to
  • ( and
  • (

Freenode's web interface introduces captcha

Tuesday, 31 August 2010

Saturday, 21 August 2010


We should consider the meta-choice.

Online image editing

Recently, I had to do some image editing for my friend's blog. I did not have Gimp installed on my Arch system, and Gwenview can do some cropping but not much else. Some time ago, I had been thinking of a site that allows editing of images online. It was just a random thought, and I did not have the knowledge to implement such a site. However, now that there was a need for such a site [because I did not want to install image editing tools, as I do it very rarely], I googled for it and found this nice site to help me out. It had a Photoshop-like UI and worked nicely for my needs. I was glad to get my work done [and to know that my idea was feasible].

John Cleese on creativity and subconscious

Sunday, 15 August 2010

Connecting to BSNL GPRS on Arch linux

Popular Indian ISP BSNL is notorious for bad customer care. This makes the use of less popular technologies quite an experiment. Cutting out the elaboration of my experiments, let us look at the procedure for connecting to the internet using BSNL GPRS. I used wvdial for the purpose and found it quite flexible. Internet access requires the cell phone to be connected in "PC Suite" mode. I shall describe the procedure for two phones:
1. Nokia XpressMusic 5130
2. Samsung Corby

Let's start with the Nokia XpressMusic. To connect it from Windows, use Nokia's PcSuite and add the following configurations manually:
1. access point name: bsnlnet
2. dialing number: *99#

Now let us look into connecting Nokia XpressMusic using linux. Firstly, get wvdial and its dependencies installed on your system. Connect the phone and check dmesg output to get the name of the modem device. Run wvdialconf to set the baud in /etc/wvdial.conf. Then edit /etc/wvdial.conf as follows:

[Dialer Defaults]
Init1 = ATZ
Init2 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
Init3 = AT+CGDCONT=1,"IP","bsnlnet"
Password = <your phone number>
Check Def Route = 1
Phone = *99***1#
New PPPD = 1
Modem Type = USB Modem
Stupid Mode = 1
# let wvdialconf set this for you
Baud = 460800
Auto DNS = 1
# This command is essential.
Dial Command = ATD
#  put your modem name here
Modem = /dev/ttyACM0
ISDN = 0
Username = <your phone number>

The same configuration shall work for the Samsung Corby. Finally, to connect Samsung Corby using the PcSuite, use the following configuration.
1. access point: bsnlnet
2. dialing number: *99***1#

Connecting to wireless LAN on Arch linux

Recently, I successfully accessed a wireless LAN on my Arch linux. This post shall be a documentation of the same.
The following is the part of the lspci -k output of interest here.

02:00.0 Network controller: Broadcom Corporation BCM4311 802.11b/g WLAN (rev 02)
        Subsystem: Hewlett-Packard Company BCM4311 802.11b/g Wireless LAN Controller
        Kernel driver in use: b43-pci-bridge
        Kernel modules: ssb

To get the driver, I followed the steps in the wiki. I edited the MODULES line of my rc.conf as follows:

MODULES=(acpi-cpufreq cpufreq_ondemand cpufreq_conservative cpufreq_powersave cpufreq_performance !pcspkr !snd_pcsp !b43legacy b43)

Using wireless LAN was fairly simple with the following sequence of commands:

# wake up the interface
ifconfig wlan0 up
# scan for wireless networks in range
iwlist scan
# join the desired network based on their essid
iwconfig wlan0 essid "<name>" key <password>
# for automatic detection
iwconfig wlan0 channel auto
# obtain the lease of an IP
dhcpcd wlan0

It is also easy to end a wireless session.

# release the interface
dhcpcd -k wlan0
# exit dhcp
dhcpcd -x
# turn off the interface
ifconfig wlan0 down

Wednesday, 14 July 2010

Octave on Arch

Well, recently I had to do some image processing, so I got octave installed on my Arch system. However, I had little interest in base octave, so soon I was looking for octave-image, which I got in the Arch User Repository. Soon I needed octave-statistics. I searched AUR again, but the closest match to my needs was octave-forge: the complete set of octave modules. I don't like the idea of installing a whole set of modules when I need only one. So I decided to look into the Arch packaging system and install the modules from source. I looked up their wiki on pkgbuilds and octave-image's pkgbuild example.

Packaging is fairly easy on Arch. Within some time, I had my pkgbuilds ready for octave-miscellaneous and octave-statistics. I made some mistakes; but with some help from #archlinux, I was ready for installation.
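
For a flavour of how little a PKGBUILD needs, here is a skeleton of the shape such a package takes; the name, version, source URL, checksum and install commands are placeholders, not the actual AUR files:

```
pkgname=octave-statistics
pkgver=1.0.0                                 # placeholder version
pkgrel=1
pkgdesc="Statistics module for Octave (illustrative sketch)"
arch=('i686' 'x86_64')
license=('GPL')
depends=('octave')
source=("")    # placeholder URL
md5sums=('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa')  # placeholder checksum
build() {
  cd "$srcdir"
  # commands that install the module into $pkgdir go here,
  # e.g. via Octave's own pkg command
}
```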

Thursday, 8 July 2010

Really fast

Kernel crash

Recently, a few times during boot-up, my kernel crashed, printing a trace to the console. After the second time, I decided to find out more about it. So, first I needed logs to find clues. However, as I found out [from talking on #archlinux and ##kernel on freenode], those messages are not logged anywhere. So, I was advised to just jot down the console logs.

The following was what I jotted down the next time it happened. I have replaced memory address values by [<mem>].

Code: ff ff ...
RIP [<mem>]__rb_rotate_left
RSP <mem>
--[end of trace]
note:modprobe exited with preempt_count 1

On irc, this time I was advised to do a memcheck. I took my Ubuntu disk [I keep it as a rescue disk.] and ran the memtest. My system passed that. So, the next step was to check the hard drive. I used to have smartmontools on my earlier linux installations; but I had not installed it on my Arch system. Getting it was a matter of time. It was already enabled on my hard disk; so I just had to check the logs.

smartctl --all /dev/sda
smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen,

Model Family: Seagate Momentus 5400.2 series
Device Model: ST9120821AS
Serial Number: 5PL4ZYQ9
Firmware Version: 7.24
User Capacity: 120,034,123,776 bytes
Device is: In smartctl database [for details use: -P show]
ATA Version is: 7
ATA Standard is: Exact ATA specification draft version not indicated
Local Time is: Mon Jul 5 11:48:01 2010 IST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

SMART overall-health self-assessment test result: PASSED
See vendor-specific Attribute list for marginal Attributes.

General SMART Values:

SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
No General Purpose Logging support.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 68) minutes.
SCT capabilities: (0x0001) SCT Status supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
1 Raw_Read_Error_Rate 0x000f 100 253 006 Pre-fail Always - 0
3 Spin_Up_Time 0x0002 096 095 000 Old_age Always - 0
4 Start_Stop_Count 0x0033 096 096 020 Pre-fail Always - 4348
5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 0
7 Seek_Error_Rate 0x000f 075 060 030 Pre-fail Always - 21656905082
9 Power_On_Hours 0x0032 094 094 000 Old_age Always - 5840
10 Spin_Retry_Count 0x0013 100 100 034 Pre-fail Always - 0
12 Power_Cycle_Count 0x0033 096 096 020 Pre-fail Always - 4450
187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0
189 High_Fly_Writes 0x003a 001 001 000 Old_age Always - 1096
190 Airflow_Temperature_Cel 0x0022 045 034 045 Old_age Always FAILING_NOW 55 (255 255 58 46)
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 1276
193 Load_Cycle_Count 0x0032 002 002 000 Old_age Always - 197088
194 Temperature_Celsius 0x0022 055 066 000 Old_age Always - 55 (0 19 0 0)
195 Hardware_ECC_Recovered 0x001a 048 044 000 Old_age Always - 131646341
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0000 100 253 000 Old_age Offline - 0
202 Data_Address_Mark_Errs 0x0032 100 253 000 Old_age Always - 0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1

SMART Selective self-test log data structure revision number 1
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

As is apparent from the logs, it was a hard disk temperature problem. It was probably caused by blocked airflow coupled with high ambient temperature.

Thursday, 1 July 2010

Injustice to programmers

In every field, new professionals are selected by people who are experienced in the field. However, somehow programmers are not given that privilege of having their competence judged by someone in the field. Programmers are selected by HR people.

It is seriously funny to me that Infosys's campus recruitment programmes do not need much technical knowledge. They take students based on their soft skills and train them. So they are not looking for bright programmers; they are looking for people who will bear the corporate pressure because they lack skills of value. [I don't say that the above is true of Infosys people; my point is: this is what a logical analysis of their selection process suggests.]

Open Source projects are in a better position in that regard. They leave you with the source code to tinker with, and mailing lists and IRC channels to contact. You can try adding modules, providing patches, and when your contribution is substantial, you get into the core team.

Programmers are meted out this injustice, and then there are complaints of programmers' incompetence. Funny indeed!

Tuesday, 22 June 2010

Awesome E17

This video is in response to comments to my review of Enlightenment desktop environment. This video shows some ways [rather default ways] of how cool E17 desktop environment can be. [Interestingly, both meanings of the word 'cool' are appropriate.]

Sunday, 6 June 2010

Valgrind issue on 64-bit arch linux

A few days back, I was using valgrind on Ertf when I stumbled upon an interesting bug. On 64-bit Arch linux systems, valgrind 3.5.0-5 was showing a problem with gettimeofday(). Finally, I had found a problem with Arch linux [the first since installation]. However, the very next day,
pacman -Syu
showed valgrind 3.5.0-6, which had the bug fixed. To nail down the bug, I had tested valgrind with a simple program calling ecore_time_get(). The output is below.

==4230== Using Valgrind-3.5.0 and LibVEX; rerun with -h for copyright info
==4230== Command: ./try_timer
./try_timer: error while loading shared libraries: cannot open shared object file: No such file or directory
==4230== Jump to the invalid address stated on the next line
==4230== at 0x426: ???
==4230== by 0x400D604: _dl_signal_error (in /lib/
==4230== by 0x400C70D: _dl_map_object_deps (in /lib/
==4230== by 0x4002BA3: dl_main (in /lib/
==4230== by 0x4013B6D: _dl_sysdep_start (in /lib/
==4230== by 0x40046B6: _dl_start (in /lib/
==4230== by 0x4000A97: ??? (in /lib/
==4230== Address 0x426 is not stack'd, malloc'd or (recently) free'd
==4230== Process terminating with default action of signal 11 (SIGSEGV)
==4230== Bad permissions for mapped region at address 0x426
==4230== at 0x426: ???
==4230== by 0x400D604: _dl_signal_error (in /lib/
==4230== by 0x400C70D: _dl_map_object_deps (in /lib/
==4230== by 0x4002BA3: dl_main (in /lib/
==4230== by 0x4013B6D: _dl_sysdep_start (in /lib/
==4230== by 0x40046B6: _dl_start (in /lib/
==4230== by 0x4000A97: ??? (in /lib/
==4230== HEAP SUMMARY:
==4230== in use at exit: 0 bytes in 0 blocks
==4230== total heap usage: 0 allocs, 0 frees, 0 bytes allocated
==4230== All heap blocks were freed -- no leaks are possible
==4230== For counts of detected and suppressed errors, rerun with: -v
==4230== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 3 from 3)

Tuesday, 1 June 2010

E17 review

Enlightenment has been quite interesting to me. It has not even had a beta release so far, yet I like to use it. That is because it does things differently. It is very efficient, keeps the CPU far cooler than any other desktop environment I have used, has nice effects built in, and is far snappier than most mainstream desktop environments [I am not interested in comparisons here, so I won't point to any other desktop environment in particular.].

However, there still are some issues [actually, when you install E17 and read the about dialog, it clearly acknowledges that it is under "heavy development"]. I would like to enumerate a few of them here. These are the ones that have concerned me recently while trying to make E17 my regular desktop environment.

#1. emacs bottom-line problem
With auto-hide on, the emacs bottom line goes below the screen edge; I would rather have it fully visible. I have started a thread on their developer mailing list about this. Apparently, the issue isn't a serious one.

#2. no create file option in file manager
The file manager is good, but not good enough at this stage. It opens C source files in read-only mode in emacs, which I don't like, and it does not have an option to create new files yet.
I hope this will be fixed soon.

#3. keystroke problem
This was a strange problem I noticed recently in E17. I guess the best way to put it is through an example. Suppose I am using chromium with multiple tabs open. After some time, all of a sudden Ctrl+L does not respond. I check whether a streaming video has taken focus. If so, I click outside the streaming video frame and try again; still no response. I click on the URL bar and press delete or backspace to clear the field and enter the new URL. No response. Basically, with some apps, keystrokes suddenly stop being passed to the app [there could be alternate explanations]. I am clueless about this issue.

#4. with streaming videos, temperature rises very fast
While watching online streaming videos, I have noticed the temperature suddenly shooting up. At around 40-50 °C it's fine, but when it rises to 60 °C it becomes alarming. This isn't very serious though, as the temperature falls again after streaming, albeit much more slowly.

Overall, E17 is nice. It is nice to note that E17 keeps up with the latest developments in software. For now, it is bleeding edge; however, I suggest you give it a shot before believing anything about it. It's a nice experience.

Thursday, 13 May 2010

Native SDK

Recently, I had been looking into image editing on the client side. I figured it can be done on the server side, but that is impractical; I am certain I shall have to figure out a way of doing it on the client side. Javascript seemed to be the only option, and I didn't want to do javascript as it won't be fast. However, when I found out about the Native Client SDK, I thought, "This is exactly what I needed."

Tuesday, 11 May 2010

Social Web

Facebook and Twitter were accepted very fast and were milestones of the social web. However, the recent discoveries of bugs raised questions about the security assured by these sites. Orkut at least says it's a beta application, while facebook takes it a step further: facebook users have to explicitly allow third party applications, so facebook is arguably safe in that the user made the choice of disclosing information to third party vendors.

Till then it was fine; but recently a bug was discovered in facebook chat that allowed users to view their friends' chats, and another bug was discovered in twitter that allowed users to force others to follow them. The web is undergoing a paradigm shift, but these bugs remind us to remember the basics.

Watch this video to get a glimpse of the paradigm shift.

Updates: Facebook issues
1. suspicious email guess
2. cross-site scripting
3. public updates are publicly searchable outside facebook
4. openbook [This should be enough evidence.]

Finally, people seem to notice.

Saturday, 24 April 2010

Software development and computer science

It's really funny that we [computer science students] are taught computer science and expected to develop software. We should rather be taught software development, or be expected to work on theory.

However, the complete mismatch between our courses and our jobs is the sole root of our predicament. Many hackers have written about it, yet I have not seen any academician paying heed to them. Courses are still designed anti-parallel to the requirements. Tech schools are meant to teach us a technology, yet the courses are unable to justify their existence.

Some of the best software developers do not even have a computer science degree; interestingly, they don't need it. Software development is a technologist's job. It's more like an art. You may learn all the theory backing an art, but that will never make you an artist; all you will be is an analyst. You have to do it, indulge in the art form, to be an artist. The same goes for software development: you may learn all the theory there is to it, but it's only by coding that you learn to code.

Saturday, 17 April 2010

IPL schedule

IPL matches are mostly scheduled at the home ground of one of the two teams involved in the match. [The number of occurrences is too high to be dismissed as incidental.] Apart from concerns of fair play, we should also note that this is an essential business strategy to gather crowds and increase sales.

Thursday, 15 April 2010

Bootchart on Archlinux

After switching to Arch linux, I thought of charting its boot time. I found it to boot faster than both Gentoo and openSUSE.

Thursday, 8 April 2010

Distribution change

So far, I have tried various flavours of linux. I have tasted various desktop environments and various stability, maintenance and packaging policies. I have tried KDE 3, KDE 4, GNOME, Enlightenment and XFCE. KDE 3 is undoubtedly "rock solid". GNOME is simple. XFCE is a leaner GNOME. KDE 4 has slowed down with the semantic desktop, and much work is yet to be done to get back the old KDE feel. Enlightenment is fast and stable, though development is ongoing. I like working in Enlightenment; however, I also keep KDE 4 around.

Coming down to packaging, I have experienced .debs, .rpms and compressed source files, and I also have rpm packaging experience. I would say each of them serves a different purpose. They define (or maybe match) the distro's policies and philosophies. For business-oriented distros, rpms are a good choice. Debian's package management is also nice; it is a mark of their stability. Compressed source files, however, are the most flexible. Gentoo's packaging clearly reflects its philosophy of flexibility.

I had been running openSUSE for a long time. Recently, I decided to go for a change. I wanted a rolling release, as I wanted to stay at the edge of technology; moreover, events like the okular problem inclined me towards rolling release distros.

The first option that came to my mind was Gentoo: it's a lovely distro. However, I didn't have time for all the compilation, so I thought of trying Arch. Distrowatch said it's a lean distro that provides bleeding edge software. I downloaded the netinstall image and started my installation. After multiple (successful) Gentoo installs, I was ready for it as soon as I had the image copied to my USB stick.

One common problem I face while installing any distro is that my internet connection is PPPoE, and since not many people have it, it's hard to find help regarding it. To add to it, I don't like to download unnecessary packages or large images. So, I spent some time figuring out how to connect during installation. Once that was done, I had a pretty smooth install. Arch linux is a nice experience; however, I miss Gentoo's community support on Arch. #gentoo is far more responsive and friendly than #archlinux. Interestingly, I solved Arch problems while talking at #gentoo.

I had KDE 4 installed on it, then moved on to get the latest svn snapshot of Enlightenment and installed that too. Both are working fine. Arch's package management is not as flexible as portage on Gentoo, and sometimes you need to know your way around. For example, I had installed Ark on KDE but was not able to unzip any of my .zip files. It was because I had installed zip alongside it, but not unzip. After installing unzip, it's working fine.
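A quick sanity check for this is sketched below. It is hedged: zip and unzip are indeed separate packages on Arch, but the pacman hint only applies on Arch systems, and on anything else the snippet just reports whether the extractor is available.

```shell
# 'zip' and 'unzip' are separate packages on Arch, so installing one does
# not pull in the other -- which is why Ark could create archives but not
# extract them. Print a hint if the extractor is missing.
if command -v unzip >/dev/null 2>&1; then
    echo "unzip present"
else
    echo "unzip missing -- on Arch: sudo pacman -S unzip"
fi
```

Ark itself shells out to these tools, so once unzip is on the system, extraction works both from Ark and from the command line.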

Tuesday, 16 March 2010

Okular problem

Recently, poppler had a bogus memory allocation bug, which had crept into okular. Some pdf files were rendered fine, while others flashed instantaneously and closed. The problem was present on both of my systems, openSUSE and Gentoo. As soon as poppler fixed the issue, Gentoo was quick to release it in the stable tree; that was back in February. However, not until a couple of days back was the patch provided by openSUSE. Meanwhile, the inability to access my documents was really pissing me off. As my linux experience has evolved, I have gradually drifted away from openSUSE, though I love it as my first distro and still have it on my laptop.

Thursday, 18 February 2010