performance

By steve, 1 April, 2016

I have just finished troubleshooting an issue where DRBD was causing a bottleneck for an application (ddumbfs) that keeps an open mmapped file. ddumbfs periodically issues fsync() calls on its mmapped files, and while the resulting data was being replicated, I/O would stall.

The issue is partially fixed by raising max-buffers and max-epoch-size to their maximum values, allowing as many write requests as possible to be in flight at any given time:
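A minimal sketch of the relevant DRBD net section (the resource name and the exact values are illustrative, not the ones from this incident; the upper limits depend on the DRBD version, so consult drbd.conf(5) before copying these):

```
resource r0 {
    net {
        # Illustrative values; defaults are much lower (2048 each).
        # Larger values let more replicated writes be in flight,
        # reducing stalls during fsync()-driven bursts.
        max-buffers     8000;
        max-epoch-size  8000;
    }
}
```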

By steve, 22 February, 2016

I have been re-configuring my home system to try to optimise the performance of the RAID volumes. I have just finished setting up a six-disk RAID6 with a 64k chunk size (based on these benchmarks http://louwrentius.com/linux-raid-level-and-chunk-size-the-benchmarks.h…) and an external write-intent bitmap on an ext2 filesystem (to avoid any journal overheads).
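A hedged sketch of how such an array might be created with mdadm (the device names, array name, and bitmap path are assumptions; the external bitmap file must live on a filesystem outside the array itself, here the ext2 volume mentioned above):

```
# Hypothetical devices /dev/sd[b-g]; /dev/md0 and the bitmap path are illustrative
server:~# mdadm --create /dev/md0 --level=6 --raid-devices=6 --chunk=64 \
              --bitmap=/bitmaps/md0.bitmap /dev/sd[b-g]

# Verify the chunk size and bitmap location afterwards
server:~# mdadm --detail /dev/md0
```

An external (file-based) bitmap avoids the seek penalty of an internal bitmap stored on the array members, at the cost of depending on another filesystem being available at assembly time.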

By steve, 23 February, 2012

A bunch of servers started seeing very high CPU usage in system time. The cause appeared to be related to a high number of nfs_inode_cache objects:

server:~# slabtop
   OBJS  ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
1525524 1525524 100%    1.02K 508508        3   2034032K nfs_inode_cache
 966120  856476  88%    0.19K  48306       20    193224K dentry

This was confirmed by running the following to clear the nfs_inode_cache (writing 2 to drop_caches frees reclaimable slab objects such as dentries and inodes; the sync first ensures dirty data is written out):
server:~# sync
server:~# echo 2 > /proc/sys/vm/drop_caches
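Dropping the caches is a one-off fix; if the cache keeps growing back, one option (an assumption on my part, not something tried above) is to make the kernel reclaim dentries and inodes more aggressively via vm.vfs_cache_pressure:

```
# Default is 100; higher values make the kernel prefer reclaiming
# dentry/inode caches over page cache. Tune and measure before committing.
server:~# echo 200 > /proc/sys/vm/vfs_cache_pressure

# Persist the setting across reboots
server:~# echo "vm.vfs_cache_pressure = 200" >> /etc/sysctl.conf
```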