linux

By steve, 1 April, 2016

I have just finished troubleshooting an issue where DRBD was causing a bottleneck for an application (ddumbfs) that keeps an open mmapped file. The issue was that ddumbfs periodically issues fsync() on its mmapped files, and while these were in flight, I/O would pause until the data had been replicated.

The issue is partially fixed by increasing max-buffers and max-epoch-size to their maximum values, allowing as many operations as possible to be in flight at any given time:
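
As a sketch, these options live in the net section of the resource definition; the resource name below is an example, and 8000 is the maximum the DRBD 8.x user guide documents for both options (check drbd.conf(5) for your version):

resource r0 {
    net {
        # allow the maximum number of in-flight requests
        max-buffers     8000;
        max-epoch-size  8000;
    }
    # disk, device and address sections omitted
}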

By steve, 12 March, 2016

I have had issues for a while with upload speed to our Samba server. I could download at ~70MB/s, but upload at only 7MB/s. From memory, I fixed the download speed by tweaking a number of smb.conf settings, but uploads remained slow.

Tracing the issue led me to strace the smbd process, which showed 1460 bytes being written every 0.2ms, with the read() and write() system calls consuming almost all of the time. 1460 bytes is a single TCP segment's worth of data, which points at Nagle's algorithm delaying small writes while it waits for ACKs. The solution was to set the following option in /etc/samba/smb.conf:

socket options = TCP_NODELAY
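
For reference, the option belongs in the [global] section of smb.conf, and testparm can confirm the value smbd will actually use (recent Samba releases default socket options to TCP_NODELAY, so this mainly matters where it has been overridden):

# show the effective setting, including defaults
testparm -sv | grep 'socket options'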

By steve, 22 February, 2016

I have been re-configuring my home system to try to optimise the performance of the RAID volumes. I have just finished setting up a 6-disk RAID6 with a 64k chunk size (based on these benchmarks: http://louwrentius.com/linux-raid-level-and-chunk-size-the-benchmarks.h…) and an external write-intent bitmap on an ext2 filesystem (to avoid any journal overhead).
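
For anyone recreating this, a sketch of the mdadm invocation (device names and the bitmap path are examples; the bitmap file must live on a filesystem outside the array itself):

# 6-disk RAID6, 64k chunk, external write-intent bitmap
mdadm --create /dev/md0 --level=6 --raid-devices=6 --chunk=64 \
      --bitmap=/bitmaps/md0.bitmap /dev/sd[b-g]1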

By steve, 18 February, 2016

I have had a look at a few SSD caching solutions, and had previously settled on enhanceio as the "best" option because it works on entire block devices and is easily added and removed. I have had some issues where the presence of enhanceio actually reduces performance, because it caches every read and write: backups, single-file copies and so on are all cached even though they will never be read again.
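
For reference, attaching and detaching a cache looks roughly like this (device and cache names are examples; check eio_cli's usage output for the options your build supports):

# attach an SSD cache to an existing block device (write-through)
eio_cli create -d /dev/sdb -s /dev/sdc -m wt -c ssdcache
# detach it again without touching the underlying device
eio_cli delete -c ssdcache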

By steve, 4 February, 2016

I have just got ipcomp working between 2 hosts as follows:

  • Install the ipsec-tools package
  • Add the following configuration to /etc/ipsec-tools.d/peername.conf:

    # outbound: use IPComp in transport mode when an SA is available
    spdadd MY_IP PEER_IP any -P out ipsec ipcomp/transport//use;
    add MY_IP PEER_IP ipcomp 1000 -m transport -C deflate;

    # inbound: matching policy and SA for traffic from the peer
    spdadd PEER_IP MY_IP any -P in ipsec ipcomp/transport//use;
    add PEER_IP MY_IP ipcomp 1000 -m transport -C deflate;
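
The config can then be loaded and verified with setkey (from ipsec-tools):

setkey -f /etc/ipsec-tools.d/peername.conf   # load policies and SAs
setkey -D                                    # dump security associations
setkey -DP                                   # dump security policies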

By steve, 29 October, 2015

We have some iSCSI setups using a Linux Pacemaker cluster, and I have always had issues with adding more resources. The setup is as follows (a hypothetical pcs sketch follows the list):

  • IP Primitive - One for each IP address
  • Target Primitive - One for the system
  • LUN Primitive - One per LUN
  • Device Primitive - One per device (may include a clone primitive)
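
A minimal sketch of creating these primitives with pcs; every name, IP, IQN and path below is hypothetical, and the resource agents assumed are ocf:heartbeat:IPaddr2, iSCSITarget and iSCSILogicalUnit:

# one IP primitive per address
pcs resource create san-ip ocf:heartbeat:IPaddr2 ip=192.0.2.10 cidr_netmask=24
# one target primitive for the system
pcs resource create san-tgt ocf:heartbeat:iSCSITarget iqn=iqn.2015-10.example:san
# one LUN primitive per LUN
pcs resource create san-lun0 ocf:heartbeat:iSCSILogicalUnit \
    target_iqn=iqn.2015-10.example:san lun=0 path=/dev/vg0/lun0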

These are controlled through the following mechanisms:

By steve, 29 October, 2015

I have just finished debugging an iSCSI storage system where the I/O on the disk appeared to be significantly higher than the I/O coming from the running programs. The totals at the top of iotop's output confirm this: Total is the bandwidth between processes and the kernel, while Actual is what actually reaches the block devices, so a large gap points to caching or write amplification below the filesystem:

Total DISK READ : 63.46 M/s | Total DISK WRITE : 42.03 M/s
Actual DISK READ: 15.28 M/s | Actual DISK WRITE: 112.13 M/s
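
To watch this live, iotop's -o flag restricts output to processes actually doing I/O, and -d sets the refresh interval in seconds:

iotop -o -d 5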

By steve, 21 October, 2015

Having built a number of iSCSI target systems over the years, I have consistently come back to SCST as my preferred implementation. Reasons have varied from IET crashing under load (it does not handle error conditions correctly) to LIO crashing in early releases or not supporting naa_id (which VMware needs in order to mount a filesystem if the LUN ID changes, or differs between clusters).
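
For reference, a sketch of what pinning the NAA ID looks like in scstadmin's /etc/scst.conf; the handler, device, IQN and ID below are all hypothetical, and the naa_id attribute is only available in SCST releases that expose it for vdisk devices:

HANDLER vdisk_blockio {
    DEVICE lun0 {
        filename /dev/vg0/lun0
        naa_id 5000000000000001
    }
}

TARGET_DRIVER iscsi {
    enabled 1
    TARGET iqn.2015-10.example:san {
        LUN 0 lun0
        enabled 1
    }
}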