Blog Relaunch!

January 2, 2013

With the start of the new year, I’m re-launching my blog. As a part of this, I have moved it to blog.jericon.net. This will allow me to use the domain on my own server for other stuff.

I am no longer going to focus solely on MySQL, though MySQL posts will still be tagged #MySQL. 2013 is going to be a year of change for me. I have a lot of things that I want to do. I’m not going to call them resolutions, because resolutions are meant to be broken. Instead, I’m going to call them goals and wishes. So here they are (with the accompanying category I will be blogging about them under):

Goals

New Habits

  • Catch the 7:45 AM and 5:33 PM train every day
  • No caffeine after 6 PM
  • Walk 10k steps/day, tracked via my Fitbit
  • Drink more water
  • When given an option, take the stairs
  • Floss daily
  • Practice Inbox Zero

I am going to track these habits using Lift. They are things that I hope will help me lead a healthier and more balanced life. My goals are all things that are very important to me.

I want to push myself to be more fit. To do that, I need an extreme goal, and I believe giving myself 9.5 months to train for a full marathon is a good one. I will take part in smaller 5k/10k races in the meantime to work up to it. I need to find a good training regimen that will help me meet this goal; just “going out to run” isn’t going to do it. If anyone has suggestions, please let me know. I will be blogging about my progress on this goal under “#Marathon”.

Losing weight also goes hand in hand with #Marathon. I plan on doing this through eating better and regular exercise. I believe that my goal is very much attainable, and not too extreme. My progress and thoughts about this goal will be blogged under “#LoseIt”.

I likely will not blog a whole lot about #DebtFree, but getting out of the credit card debt that Jen and I have is an important focus for us this year. As is visiting Jen’s family in the UK.

Lastly, I will be doing 2 photo projects this year. One I will be putting on Twitter and Facebook that I’m calling #P365 (Project 365): taking a photo each day of something in my life. The other project I will put up after the end of the year; it is a daily photo of me, taken using the app “EveryDay”.

Here’s to a wonderful new year!

Categories: #DebtFree, #LoseIt, #Marathon, #P365

Chain Copying to Multiple Hosts

May 17, 2012

This week I was given the task of repopulating our entire primary database cluster. This was due to an ALTER TABLE that had to be performed on our largest table. It was easiest to run it on one host and then populate the dataset from that host everywhere else.

I recalled reading a blog post from Tumblr a while back about how to chain a copy to multiple hosts using a combination of nc, tar, and pigz. I used this, along with a few other things, to greatly speed up our repopulation process. Since I was repopulating production servers, I did a combination of raw data copies and xtrabackup streams across our servers, depending on each host’s position in our replication setup.
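The rough shape of the chain looks like this (my sketch, not from the Tumblr post): the source streams the data exactly once, and every host in the middle unpacks its local copy while relaying the same compressed bytes onward:

source --(tar|pigz|nc)--> host1 --(tee+fifo)--> host2 --(tee+fifo)--> last_host
(each host in the middle decompresses and unpacks locally as it relays)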

For a normal straight copy, here’s what I did:

On the last host, configure netcat to listen and then pipe the output through pigz and tar to uncompress and untar.  This needs to be run in the destination directory:

nc -l 1337 | pigz -d | tar xvf -
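One caveat worth noting (my addition, not from the original post): the listen syntax depends on your netcat flavor. The OpenBSD netcat above takes the port directly; the traditional/GNU netcat found on some distributions needs -p:

# Equivalent listener with traditional netcat, which requires -p for the port:
nc -l -p 1337 | pigz -d | tar xvf -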

On any hosts in the middle of the chain, you do the same thing with one extra step: using a fifo to redirect the stream to the next host:

mkfifo copy_redirect
nc next_host_in_chain 1337 <copy_redirect &
nc -l 1337 | tee copy_redirect | pigz -d | tar xvf -
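If you are setting this up on more than a couple of middle hosts, a small wrapper keeps it repeatable. This is just a sketch under my own assumptions (the script name, argument handling, and fifo cleanup are mine, not from the Tumblr post):

#!/bin/sh
# relay.sh -- run on a middle host; unpacks locally while forwarding the stream.
# Usage: ./relay.sh next_host_in_chain
set -e
NEXT="$1"
PORT=1337
mkfifo copy_redirect
# Background nc pushes whatever lands in the fifo on to the next host.
nc "$NEXT" "$PORT" < copy_redirect &
# tee splits the incoming stream: one copy into the fifo (still compressed),
# the other through pigz and tar for the local unpack.
nc -l "$PORT" | tee copy_redirect | pigz -d | tar xvf -
wait
rm -f copy_redirect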

And on the source host, you actually create the stream. This is where I differed the most from what Tumblr had written: I added a progress bar using pv.

tar -cf - /data/mysql/ | pv --size $( du -sh /data/mysql/ | cut -f1 ) | pigz | nc first_host_in_chain 1337
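A note on the pv size argument: du -sh produces a human-readable figure like “1.4T”, which recent pv builds accept but older ones may reject. If yours complains, an exact byte count from du -sb works as well (an alternative I am suggesting here, not what I originally ran):

tar -cf - /data/mysql/ | pv --size $( du -sb /data/mysql/ | cut -f1 ) | pigz | nc first_host_in_chain 1337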

To do this with an xtrabackup stream, the commands are similar. On each host, tar needs the “i” flag (becoming “tar xvfi -”) so that it ignores the zeroed blocks between the concatenated archives in the stream. The progress bar here became slightly less accurate, but was still a good rough estimate of the progress. On the source host, the command became:

innobackupex --stream=tar /tmp/ --slave-info  | pv --size $( du -sh /data/mysql/ | cut -f1 ) | pigz | nc first_host_in_chain 1337
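One step the stream does not do for you: after the tar stream has been unpacked on a destination, the backup still has to be prepared before MySQL can be started there. This is standard innobackupex usage (the datadir path just matches my layout):

innobackupex --apply-log /data/mysql/

Because of the --slave-info flag, each destination also ends up with an xtrabackup_slave_info file containing the CHANGE MASTER coordinates needed to hook the copy back into replication.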

I found that using this method for a raw copy, I was able to achieve between 300 and 350 MB/sec copying large tables; smaller tables averaged slower speeds. I didn’t do enough testing to find the bottleneck, but I can say that it was not network, CPU, or IO, as the servers involved have 10 Gbit network and FusionIO drives. Increasing the compression level may have added some throughput here as well. Copying a 1.4 TB dataset to 3 destination servers took under 2 hours, which works out to roughly 200 MB/sec sustained end to end.

This is definitely a tool that I will be adding to my arsenal to use on a regular basis.

Categories: #P365, MySQL