Wednesday, July 6, 2016

So, I've moved on from my last employer after a decade(ish). I've landed in a shop that is heavily DevOps focused, but my first real task wasn't so much about that as about highly available MySQL servers. The MySQL end of it ended up being not too bad: two nodes running Percona XtraDB Cluster, and a third system running garbd as an arbitrator. I had some stumbles with encrypted tables (bad documentation, I claim), but once running it seemed to be pretty impervious to my ham-fisted attempts to break the cluster.

The hard part came when I was trying to figure out how to move the IP around. The Percona blogs are a wealth of information, and I found this, which would seem to be very straightforward. It's not. Red Hat has changed the tools with which you configure the cluster since the writing of that blog post, so without further ado, here's my config, so a) you don't have to spend the hours I did and b) I don't have to re-learn this somewhere down the road. The assumptions are a two node Percona XtraDB Cluster with the mysql_monitor agent from the linked Percona blog, using pcs to configure the resources:
pcs resource create pxc_monitor ocf:percona:mysql_monitor \
    user="clustercheckuser" password="password" pid="/var/run/mysqld/mysqld.pid" \
    socket="/var/lib/mysql/mysql.sock" cluster_type="pxc" \
    op monitor interval="5s" timeout="30s" OCF_CHECK_LEVEL="1"
pcs resource clone pxc_monitor cl_pxc_monitor meta clone-max="2" clone-node-max="1"
pcs resource create vip ocf:heartbeat:IPaddr2 params ip="172.16.0.11" nic="eth0" op monitor interval="10s"
pcs constraint location vip rule id="require_read" readable eq 1
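Once that's in place, it's worth sanity-checking failover before trusting it. A quick sketch of what I look at (resource names match the commands above; stop mysql on the active node and the vip should follow the readable attribute to the surviving node):

pcs status resources   # cl_pxc_monitor should be running on both nodes, vip on one
crm_mon -1             # one-shot cluster view while you break things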
Thursday, February 26, 2015
Using mbuffer to increase throughput of zfs send/receive.
Using mbuffer to speed up zfs send/receive has been talked about in many other places, but I wanted to add my own experience to the mix. I used mbuffer on each end, dumping 3.4TB of data with pools that had lz4 compression enabled. I set the buffer to 512MB, as the target system is tight on memory. Without mbuffer, the zfs send averaged about 504Mbps. Using mbuffer, I was seeing 972Mbps of network traffic coming across, with mbuffer showing 111MB/s passing through it: 88% of the theoretical maximum Gigabit Ethernet bandwidth (125MB/s). With the caveat that you inevitably lose some bandwidth to network overhead (framing, TCP headers, etc), I was absolutely ecstatic at the throughput. Unfortunately, even at that speed it still took just over nine hours to complete the send of my snapshot. At least I can do incremental sends afterwards, as long as I keep the original snapshot around.
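For reference, the plumbing looked roughly like this (a sketch: pool names, snapshot name, hostname, and port are placeholders, and the 512MB buffer matches what I described above):

# on the receiving side, listen on a port and feed zfs receive
mbuffer -s 128k -m 512M -I 9090 | zfs receive -F tank/backup

# on the sending side, pipe the snapshot into mbuffer aimed at the receiver
zfs send tank/data@snap1 | mbuffer -s 128k -m 512M -O receiver:9090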
Just thought I'd share.
Friday, September 5, 2014
Using perlbrew to compile ec2-consistent-snapshot on CentOS
I was tasked with taking a snapshot of a CentOS 5.7 MySQL DB in EC2 before we upgraded the instance type. The data files reside on a RAID0 consisting of two EBS volumes. The unanimous recommendation was to use ec2-consistent-snapshot. Well, the CentOS AMI I'm having to use is custom, and its version of Perl is OLD. Perlbrew to the rescue! Earlier in the day, I'd screwed up my system version of Perl (thank god this box does NOTHING other than run mysql), so I had to export the Perl lib variables before perlbrew would install. Instructions follow:
curl -L http://install.perlbrew.pl | bash
perlbrew install perl-5.20.0
perlbrew switch perl-5.20.0
perlbrew install-cpanm
yum install expat-devel.x86_64
yum install openssl-devel.x86_64
yum install mysql-devel.x86_64
yum install util-linux
cpanm Net::Amazon::EC2
cpanm File::Slurp
cpanm DBI
cpanm DBD::mysql
cpanm DateTime::Locale
cpanm DateTime
git clone git://github.com/alestic/ec2-consistent-snapshot
vi ec2-consistent-snapshot (update #! line to point to correct version)
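The shebang edit in that last step just points the script at the perlbrew-built Perl instead of the ancient system one. As an example, assuming perlbrew's default layout under root's home directory (adjust the path for your user):

#!/root/perl5/perlbrew/perls/perl-5.20.0/bin/perl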
Wednesday, March 19, 2014
Slow CIFS/SMB speeds on OS X Mavericks (10.9)
There are lots of complaints out there about how slow accessing Windows/Samba shares is under Mavericks. I've tried a couple of different things, but only found one that seems to restore SMB to a semblance of its former speed. Thankfully it's really easy.
In 'Connect to Server', instead of specifying the server as smb://servername/share, do this: smb://servername:139/share.
While I've not tried to analyze a network trace, I suspect that forcing the port bypasses some sort of smb version negotiation process. But that's just a guess.
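If you want to check what was actually negotiated, Mavericks ships an smbutil that can report on mounted shares (run it with the share mounted):

smbutil statshares -a   # lists mounted SMB shares and the negotiated dialect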
Thursday, June 20, 2013
No ProSet tabs after installing Intel drivers for PRO1000/PT Server Adapter
I rebuilt my Windows 7 Professional box this week. It was time, and installing with SP1 slipstreamed allowed me to skip loading a third-party AHCI driver for my 6Gb/s SATA controller. Everything was fine until I realised my network speed was a little slow. I hadn't installed Intel's driver and ProSet yet, so I downloaded the most recent version. After installation, the Advanced tab was gone, but no ProSet tabs appeared. Uninstall the driver, and the card would get re-detected and the Advanced tab would return. Reinstall the Intel driver: Advanced tab gone, no ProSet tabs. I updated to the latest driver from Windows Update. Same thing. This went on for a couple of days with slight variations. Here's what I did to ultimately get the Intel drivers and ProSet installed and working:
- Uninstall the NIC via Device Manager. Check the "Delete driver" box.
- Follow this to disable automatic driver installation. (That procedure only works on Win 7 Pro and above.)
- Either re-scan for devices in Device Manager, or reboot your system. (I chose to reboot.)
- Once restarted, verify that the NIC shows up under "Unknown Devices" as an "Ethernet Controller".
- Revert the change you made in gpedit so you can install the correct driver now.
- Install the appropriate version of Intel's drivers/utilities for your card.
Hope I save someone out there some time.
** ADDENDUM **
I upgraded the Intel drivers recently, and after that I was back to my original issue. It turns out I also needed to uninstall VirtualBox to get things working again.
** ADDENDUM ** ADDENDUM **
Actually, if you do not install Host-Only networking in VirtualBox, the problem doesn't appear.
M.
Thursday, June 6, 2013
DKMS for updated e1000e driver
So, since I updated the e1000e driver, I've gotten no more errors in my messages logs. However, it seems that every month or two a kernel update gets pushed to Squeeze, which undoes my driver fix. When I worked at Dell and we needed a driver recompiled for any kernel that got installed, DKMS was the solution. Since I rarely have to use cutting-edge hardware with, umm, mature operating systems these days (I use old hardware with old operating systems), I hadn't given it much thought. While DKMS is awesome when someone else does all the upfront work for you, trying to sort it out on one's own is significantly less fun. I spent several hours over the course of a couple of weeks reading various things, all of which failed to do what I wanted. I got back to it this week and finally got it sorted out.
- Download the appropriate source for the e1000e driver: http://sourceforge.net/projects/e1000/files/e1000e%20stable/2.3.2/e1000e-2.3.2.tar.gz
- Extract archive to /usr/src.
- Create a dkms.conf in the /usr/src/e1000e-2.3.2 directory with the following contents:
PACKAGE_NAME="e1000e"
PACKAGE_VERSION="2.3.2"
BUILT_MODULE_LOCATION[0]="src"
BUILT_MODULE_NAME[0]="e1000e"
DEST_MODULE_LOCATION[0]="/kernel/drivers/net/e1000e/"
AUTOINSTALL="yes"
MAKE[0]="BUILD_KERNEL=${kernelver} make -C src CFLAGS_EXTRA=-DDISABLE_PM"
CLEAN[0]="make -C src clean"
REMAKE_INITRD="yes"
- Now just add the module via dkms:
dkms add -m e1000e -v 2.3.2
dkms build -m e1000e -v 2.3.2
dkms install -m e1000e -v 2.3.2
After completion, you should restart the system to verify that the initial ramdisk got rebuilt correctly, and that the correct version of the driver is being used (modinfo e1000e).
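For the paranoid (me), a couple of quick post-reboot checks (the version string matches the config above):

dkms status                  # e1000e, 2.3.2 should show as installed for the running kernel
modinfo -F version e1000e    # should print 2.3.2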
Thursday, March 14, 2013
Updating e1000e driver in Debian Squeeze
I kept having issues with my onboard Intel NIC on my Debian Squeeze NAS, the all too common:
Detected Hardware Unit Hang:
The version of the driver is significantly old, so I downloaded the newest from SourceForge and tried to compile it. It failed to compile due to some power management code that is not in the kernel source from Debian. Eventually I found someone else who'd had the same problem with the igb driver and solved it just by adding a flag to make: CFLAGS_EXTRA=-DDISABLE_PM
I did the same when building the e1000e driver: it compiled smoothly, I remade the initramfs, rebooted, and haven't seen an error since.
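For completeness, the build boiled down to something like this (a sketch: the 2.3.2 tarball is the one referenced in the DKMS post above, extracted to your home directory):

tar xzf e1000e-2.3.2.tar.gz
cd e1000e-2.3.2/src
make CFLAGS_EXTRA=-DDISABLE_PM   # skip the power management code Debian's kernel source lacks
make install
update-initramfs -u              # remake the initramfs so the new module gets used at boot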