
Monday, May 31, 2010

OpenSQLCamp EU 2010 - Call for participation

The European OpenSQLCamp 2010 will take place in parallel to the Free and Open Source Conference 2010 (FrOSCon) on Saturday 21st and Sunday 22nd August at the Fachhochschule Bonn-Rhein-Sieg in St. Augustin, Germany. St. Augustin is located close to Bonn and Cologne.
The Call for Participation is now online.

The event is organized by yours truly and Felix Schupp, and we are open to cooperation from other volunteers.
Specifically, we need help to beat the drum. Even if you can't participate, we will appreciate your help in making the Call for Participation known.
OpenSQLCamp 2010 will use FrOSCon's Pentabarf conference coordination system to collect talk submissions and to organize and schedule the talks.
Please create an account there, if you don't have one already. Once you have activated your account via the email address you provided, log into the system and create a new event. Make sure to select the OpenSQLCamp track for your submission!

IMPORTANT! FrOSCon uses CAcert certificates. If your browser does not recognize them, you need to import the CAcert Root Certificate before using the CfP pages.


The deadline for submitting your proposal is Sunday, July 11th, 2010 (12:00pm PST).

Sunday, May 30, 2010

MySQL Sandbox now with plugins, more tests, instrumentation

The latest release of MySQL Sandbox, 3.0.12, has integrated plugin installation features, as mentioned in my previous post.
Not only that: this version also has more tests, fixes a couple of bugs, and introduces basic instrumentation. Now each script released with MySQL Sandbox, and every one that the Sandbox itself installs, can leave a trail in a file.

Let's start with the plugin. The documentation has been updated to cover this new feature. And 27 new tests give me some confidence that it should work as advertised.
While I was waiting for the test suite to finish its 238 tests, I was wondering how much was going on under the hood. So I spent one hour implementing some basic instrumentation, not only in the make_* scripts, but also in every script that the sandbox installs. The code is quite modular, and adding this feature was easy.
Now, if you want to use this instrumentation, you need to create a file, and set the environment variable $SBINSTR to the full path of that file before using the Sandbox. Then every script will leave an entry in that file, recording its name, the current time, and the parameters it was called with.
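A minimal shell sketch (the log file name is just an example, and I assume a 5.1.47 tarball is already available to the Sandbox):

$ export SBINSTR=$HOME/sbinstr.log
$ touch $SBINSTR
$ make_sandbox 5.1.47
$ tail $SBINSTR

Each run appends one line per script invoked, so even a single installation leaves several entries.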
This is what I got after running the test suite: 66 instances of MySQL installed to perform over 200 tests, in about 18 minutes.


MySQL Sandbox scripts            calls
make_sandbox                        66
low_level_make_sandbox              66
make_replication_sandbox             8
make_multiple_sandbox                7
make_multiple_custom_sandbox         2

Installed scripts                calls
use                                440
stop                               192
start                              128
clear                               56
sandbox_action                      56
sbtool                              34
stop_all                            30
use_all                             20
clear_all                           13
start_all                           12
send_kill                           11
restart                              9
initialize_slaves                    8
restart_all                          4
change_paths                         2
change_ports                         1

total                             1165

The new release is available from Launchpad or directly from the CPAN.

Monday, May 24, 2010

MySQL Sandbox meets plugins

I saw it coming.
In the past year I have been working more and more with plugins: the InnoDB plugin, the semi-synchronous replication plugins, the Gearman UDFs, the Spider plugins. Every time, I found myself going through a long installation process, with inevitable mistakes.

So I told myself: "I wish I had a tool that installed plugins instantly and painlessly, the way MySQL Sandbox installs a server."
There you go. Yesterday I had enough of the suffering, and put together an installation script that does a good job of installing several plugins with little or no effort.

Overview

How does it work? For now, it's a separate script, but it will soon be merged into sbtool, the Sandbox tool.
Different plugins need different operations, and the difficult part is finding a clear way of describing what you want to do, and how. But once you have come up with that set of instructions, there is seldom a need to revisit it.
So the principle is to create a set of templates, one for every plugin, where you explain to the installation script what you want to do.
Having installed several plugins repeatedly in several versions of MySQL, I now have a good understanding of the process, and having gone through the motions of explaining the procedure to a Perl script, I feel that I know the process even more. That is, if you want to understand a process, script it. For if you want to script a process, you really need to understand what's going on.

The template


The template was not easy to write. After some bargaining with myself, I decided that the best format was Perl itself.
Let's see, for example, the InnoDB plugin:

innodb => {
    all_servers => {
        operation_sequence => [qw(stop options_file start sql_commands)],
        options_file => [
            'ignore_builtin_innodb',
            'plugin-load='
                . 'innodb=ha_innodb_plugin.so;'
                . 'innodb_trx=ha_innodb_plugin.so;'
                . 'innodb_locks=ha_innodb_plugin.so;'
                . 'innodb_lock_waits=ha_innodb_plugin.so;'
                . 'innodb_cmp=ha_innodb_plugin.so;'
                . 'innodb_cmp_reset=ha_innodb_plugin.so;'
                . 'innodb_cmpmem=ha_innodb_plugin.so;'
                . 'innodb_cmpmem_reset=ha_innodb_plugin.so',
            'default-storage-engine=InnoDB',
            'innodb_file_per_table=1',
            'innodb_file_format=barracuda',
            'innodb_strict_mode=1',
        ],
        sql_commands => [
            'select @@innodb_version;',
        ],
        startup_file => [],
    },
},

The first thing that you notice is that there is an all_servers section. This means that any server can get the same treatment, as opposed to the semi-synchronous plugin, where master and slave need different plugins and commands.
Then comes the operation_sequence, where we decide the order of the operations.
Inside options_file we put the options that we want in my.cnf.
The sql_commands section has a list of queries that the script runs when instructed.

semisynch => {
    master => {
        operation_sequence => [qw(stop options_file start sql_commands)],
        options_file => [
            'plugin-load=rpl_semi_sync_master=semisync_master.so',
            'rpl_semi_sync_master_enabled=1',
        ],
        sql_commands => [
            'select @@rpl_semi_sync_master_enabled;',
        ],
        startup_file => [],
    },
    slave => {
        operation_sequence => [qw(stop options_file start sql_commands)],
        options_file => [
            'plugin-load=rpl_semi_sync_slave=semisync_slave.so',
            'rpl_semi_sync_slave_enabled=1',
        ],
        sql_commands => [
            'select @@rpl_semi_sync_slave_enabled;',
        ],
        startup_file => [],
    },
},

By contrast, the semisynch plugin looks more complex, with its two sections for master and slave. But when you look closely, you recognize the two operations described in the manual, and you feel that you could deal with them easily.

The script


The script was not very difficult to write. Since it only works with MySQL Sandbox instances, it can leverage the predictability of each server.
There is quite a lot of complexity inside, though, because the script checks every possible source of trouble before actually running the instructions from the template.
The script needs two parameters: a directory containing a sandbox, and the name of the plugin. It expects the plugin definition template (named plugin.conf) to be in the destination directory or in the $SANDBOX_HOME directory.
It recognizes whether the target path is a single or a multiple sandbox. If it is multiple, it installs the given plugin in every server. It also recognizes whether a server is a master or a slave, and pulls the appropriate section from the template when required.

$ perl set_plugin.pl $HOME/sandboxes/rsandbox_5_1_47 innodb
executing "stop" on slave 1
executing "stop" on slave 2
executing "stop" on master
Installing <innodb> in </$HOME/sandboxes/rsandbox_5_1_47/master/>
. sandbox server started
--------------
select @@innodb_version
--------------

+------------------+
| @@innodb_version |
+------------------+
| 1.0.8            |
+------------------+
1 row in set (0.00 sec)

Bye
Installing <innodb> in </$HOME/sandboxes/rsandbox_5_1_47/node1/>
. sandbox server started
--------------
select @@innodb_version
--------------

+------------------+
| @@innodb_version |
+------------------+
| 1.0.8            |
+------------------+
1 row in set (0.00 sec)

Bye
Installing <innodb> in </$HOME/sandboxes/rsandbox_5_1_47/node2/>
.. sandbox server started
--------------
select @@innodb_version
--------------

+------------------+
| @@innodb_version |
+------------------+
| 1.0.8            |
+------------------+
1 row in set (0.00 sec)

Now there is no excuse for not testing servers with plugins.
There are still some TODO items, most notably more testing, fixing the conflicts that may happen when two plugins fight for the same plugin-load statement, and integrating with sbtool, as mentioned before. But for now, it is enough.
You can try it by using the script and the template.

Monday, May 17, 2010

LOAD DATA: a tricky replication issue

When you are importing large amounts of data from other sources, LOAD DATA is a common method of inserting data into a table.
It is one of the oldest commands implemented in MySQL. As such, it is very fast, and it has been optimized for both MyISAM and InnoDB.
All is well when you are loading data into a standalone server. All is almost well when you are using replication. LOAD DATA used to be a problem in old versions of MySQL, prior to 4.1. With recent versions of MySQL, it is replicated correctly, and sometimes efficiently.
The trouble starts when the data file is big. The definition of big, and the amount of trouble you can get into, depend on many factors. That's why users may not realize that this problem exists, even with fairly large files, and then get hit by disaster when a file is only a little larger than the previous ones.
First, let me explain what happens when you replicate LOAD DATA.
  1. The LOAD DATA query runs on the master.
  2. When the query is finished, the master starts pumping the data to the binary log.
  3. The slave receives the binary log, and creates a copy of the data file in its default temporary directory.
  4. The slave executes the LOAD DATA query using the temporary file.
  5. When the slave is done loading the data, the temporary file is deleted.
  6. The data from the relay log is deleted.
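To make the mechanics concrete, this is the kind of statement that triggers the sequence above (file and table names are made up):

LOAD DATA INFILE '/data/feeds/big_file.csv'
INTO TABLE mydb.transactions
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';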

At the end of the exercise, your data is only in the database table, both in the master and in the slaves. However, during the loading, each slave needs THREE TIMES THE STORAGE of the initial data file size, not counting the indexes.
If your data is 10 GB, then you will need 20 GB on the master (10 for the table, 10 for the binary log, and possibly 10 more for the indexes).
On the slave, you will need 30 GB: 10 for the table (plus indexes, if requested), 10 for the relay logs, and 10 for the file in the temporary directory. The last part is the tricky one. The temporary directory is whatever is indicated by the tmpdir system variable. If that directory is in a partition with less than 10 GB free, your replication will break, even if your data directory has terabytes of free space.
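A quick way to check where the temporary copy will land on a slave (the path mentioned in the comment is just an example):

select @@tmpdir;
-- if the result (e.g. /tmp) lives in a small partition, point tmpdir in the
-- [mysqld] section of my.cnf to a directory with enough free space,
-- and restart the slave before loading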

Friday, May 14, 2010

Sometimes, even a command line guy likes a GUI

As everyone knows, I am a command line guy. I am very comfortable with the shell prompt and the command line SQL client. I do most of my work that way, and I am very productive.
However, there comes a time when even for a command line enthusiast a GUI can be helpful.
Here comes the latest MySQL Workbench 5.2.
There are two areas where I feel that WB can give me a hand:
The first is when looking at tables that contain BLOB columns. Sure, I can deal with them at the command line, but this editor makes my life easier.

When a column contains a BLOB, you can open the field viewer.

At first glance, this is nothing more than what the command line could provide. I could get output in hexadecimal format quite easily in any client. But, looking more closely, there is a tab labeled "image" that is not as easy to come by at the command line prompt.

And there is Mike Hillyer, the main author of the Sakila database, who has stored his own image in the staff table for future generations. If you stick to the command line, you may easily miss this piece of self-promotion.

The second area where I like having MySQL Workbench is when I need to change my configuration file with less-than-common options. Since no human (apart from Sheeri, perhaps) can remember all the options, I usually need to search the manual.

In WB, instead, I can edit the options file with the GUI, without needing to remember the exact names and spellings of the items I need.
Now, if I couple the above issues with the notion that MySQL Workbench is A Useful Tool to Centrally Manage Many MySQL Instances, I think that every command line enthusiast should give this tool a try.
Lastly, I should mention that Workbench 5.2 is becoming quite popular, as the downloads map shows.

Wednesday, May 12, 2010

World map, shaped by MySQL downloads

A few years ago, during the MySQL Conference opening keynote, two world maps of MySQL downloads were displayed. With the lights down, they made an impression.
Oddly enough, to the best of my knowledge, the downloads map has not been drawn again since then. I asked my friend and colleague Markus Popp, and he provided the data from the downloads logs, leaving the implementation to me.
A first attempt with Google Maps API produced a chart that is nice to see for a single country or town, but hardly pleasant for the entire world.

Then I abandoned the easy path, looked at CPAN for inspiration, and immediately found something that could solve my problem. Using GD::Map, I quickly created a world map, and after a few minutes of fiddling with the innards, I managed to plot a map that looked like what I wanted.
To get the red dots on a black background I simply used Gimp, and soon I had the results I was looking for.

Thursday, May 06, 2010

Two quick performance tips with MySQL 5.1 partitions

While I was doing research for my partitions tutorial, I came across two hidden problems that may happen often, but are somehow difficult to detect, and even more difficult to fix unless you know what's going on, and why. I presented both cases during my tutorial, but there were no pictures to convey the mechanics of the problem. Here is the full story.

TO_DAYS() prunes two partitions instead of one


If you are partitioning by date, chances are that you are using TO_DAYS(). And depending on how you have partitioned your table, your queries are as fast as you expect them to be. However, there are cases where your query takes twice as long as it should, and of course this will not make you happy.

For example, in a table partitioned by month, when your query searches for values within one specific month, EXPLAIN PARTITIONS tells you that the search involves two partitions (see figure above). This means that, instead of searching through 1,000,000 rows in one partition, the partitioning engine is searching through 2,000,000 rows in two partitions.
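As an illustration, this is the kind of check that reveals the problem (table and column names are made up):

EXPLAIN PARTITIONS
SELECT COUNT(*) FROM sales
WHERE sale_date >= '2010-05-01' AND sale_date < '2010-06-01'\G

The partitions column of the output lists the partition holding May 2010 plus the first partition of the table, even though all the matching rows are in the May partition.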
But why? The reasoning, as reported by the developers, is that:
This is not a bug, since TO_DAYS() returns NULL for invalid dates, it needs to scan the first partition as well (since that holds all NULL values) for ranges.

Bug#49754: Partitioning by RANGE with TO_DAYS always includes first partition when pruning
This makes sense from a developer's standpoint. From a user's perspective, though, it's a bug.
Anyway, it doesn't do us any good to rant about it. Our query is still twice as slow as it should be. We need to take action. The workaround is to create an empty partition in the first position. If we are creating a new table, it's simple. Just say:
PARTITION p000 VALUES LESS THAN (0)
and all will be well. The partition pruning mechanism will still find two partitions, but since the first one is empty, it won't impact the performance.
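Put in context, a minimal sketch of such a table definition (table, column, and partition names are hypothetical):

CREATE TABLE sales (
    sale_date DATE
)
PARTITION BY RANGE (TO_DAYS(sale_date)) (
    PARTITION p000 VALUES LESS THAN (0),  -- empty guard partition
    PARTITION p001 VALUES LESS THAN (TO_DAYS('2010-06-01')),
    PARTITION p002 VALUES LESS THAN (TO_DAYS('2010-07-01')),
    PARTITION pmax VALUES LESS THAN (MAXVALUE)
);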
If you have an existing table already partitioned, as in our example, then you need to perform a different operation.
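One way to do it is to split the existing first partition with ALTER TABLE ... REORGANIZE PARTITION, so that an empty partition takes the first position. A sketch, reusing the hypothetical names from above:

ALTER TABLE sales
REORGANIZE PARTITION p001 INTO (
    PARTITION p000 VALUES LESS THAN (0),
    PARTITION p001 VALUES LESS THAN (TO_DAYS('2010-06-01'))
);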

Now we have a different first partition, with no records. When we issue the same query, the partition pruning will still look at the first partition, but will skip it because it holds no records.

Inserting single rows in partitions is slow


This bug, too, is sometimes difficult to detect. If you want to test partitions in MySQL 5.1, you will probably take an existing table and convert it to a partitioned one, or create a new table and load the contents from a dump. Either way, you are unlikely to insert millions of records with single INSERT statements. These single inserts are slower than bulk inserts in the first place, but with partitions there is an additional penalty: whenever you insert a record, the partitioning engine locks the entire table, acquiring a lock on every partition before the insert and releasing them all afterwards. If you have 500 partitions, that's 500 locks and 500 unlocks for every statement. Ouch!
It's a design problem, and it is not likely to be fixed without overhauling the whole architecture of partitions. In this case, too, there is a bug report, Partitioning performance drops drastically with hundreds of partitions, although this time nobody claims that it is a feature.
What can you do, then? You have several choices:
  • You can use a bulk insert. Instead of single statements, use INSERT with multiple records, or LOAD DATA INFILE.
  • Explicitly LOCK the table before inserting and UNLOCK it after you finish with all the inserts (see the sketch after this list). This will avoid the overhead, although it won't make your table concurrently accessible until you finish.
  • If you use partitioning only to facilitate heavy queries, consider using a non-partitioned table on the master, and partitioned ARCHIVE tables on the slaves (see figure below).
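Here is the promised sketch of the first two options (table and column names are made up):

-- option 1: bulk insert, so the lock/unlock cycle happens once per statement
INSERT INTO sales (sale_date)
VALUES ('2010-05-01'), ('2010-05-02'), ('2010-05-03');

-- option 2: lock explicitly around a batch of single-row inserts
LOCK TABLES sales WRITE;
INSERT INTO sales (sale_date) VALUES ('2010-05-04');
INSERT INTO sales (sale_date) VALUES ('2010-05-05');
UNLOCK TABLES;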


As I have said many times in my presentations, always benchmark before using partitions in production. Whether you think that it will boost your performance or that it will slow things down, don't trust your instincts: test. You may be in for a surprise.