
Wednesday, March 27, 2013

Multi-master data conflicts - Part 2: dealing with conflicts

In the first part of this article we examined the types of conflicts and their causes. In this part, we will analyse some of the methods available to deal with conflicts.

Pessimistic locking (or: conflicts won't happen)

Applicability: synchronous clusters with 2PC

We've covered this topic in the previous article, but it's worth repeating. If you use a synchronous cluster, you don't have conflicts. For example, MySQL Cluster ensures consistent data with updates coming from different nodes. However, MySQL Cluster is not a replacement for a MySQL server, and it has severe limitations.


Optimistic locking

Applicability: synchronous clusters without 2PC (Galera)

Conflicting transactions proceed on different nodes with local locking. The transaction that commits last is then rolled back, when it discovers that a prior transaction got in first on the same data. For a more detailed analysis of this handling method, see this article by Jay Janssen.
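As an illustration, here is a minimal sketch of how the losing transaction experiences this on a Galera cluster, using a hypothetical people table: instead of waiting on a lock as it would on a single server, it typically fails at COMMIT with a deadlock error.

# node 1
BEGIN;
UPDATE people SET amount = 3 WHERE id = 10;
COMMIT;    # first to certify: this one wins

# node 2, concurrently
BEGIN;
UPDATE people SET amount = 4 WHERE id = 10;
COMMIT;
# ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction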


Conflict resolution after the fact

Applicability: EnterpriseDB (none so far for MySQL)

Asynchronous replication makes conflicts harder to handle. A conflict at this stage means that the data has already been applied to the wrong node or to the wrong object, and something must be done to solve the issue.

Typical remedies offered for conflict resolution are:

  • Earliest or Latest Timestamp: the oldest or the newest record prevails when a conflict happens. This is hardly a reliable resolution: it is the easiest method to implement, and thus it is the one usually offered, but it often results in a hidden data inconsistency problem, where we may find data that we don't expect. The surviving record wins simply because it was updated later than the correct one. Also, timestamp comparison requires time synchronization across servers, and possibly across time zones, which calls for extra effort to keep the system functioning. (A minimal sketch of the latest-timestamp approach follows this list.)
  • Node Priority: There is a hierarchy of nodes, with different ranks. When a conflict occurs, the node with the highest rank prevails. This method requires the data origin to be stored alongside the contents, and to be easily searchable when conflicts occur. It must also take into account offline nodes, and therefore it should keep the conflict resolution metadata until the offline nodes are back in synch.
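To make the timestamp approach concrete, here is a minimal sketch, assuming a hypothetical conflict_log table that keeps both versions of a disputed row: the resolver keeps whichever version carries the newer timestamp, with no regard for which one is actually correct.

# Hypothetical table holding both versions of a conflicting row
CREATE TABLE conflict_log (
    pk_value   int not null,
    node_id    int not null,
    col_value  varchar(50),
    updated_at timestamp not null,
    PRIMARY KEY (pk_value, node_id)
);

# "Latest timestamp wins": blindly keep the most recent version per key
SELECT c.pk_value, c.col_value
  FROM conflict_log c
  JOIN (SELECT pk_value, MAX(updated_at) AS max_ts
          FROM conflict_log
         GROUP BY pk_value) latest
    ON c.pk_value = latest.pk_value AND c.updated_at = latest.max_ts;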

Methods that could be implemented in a more advanced technology may include:

  • Origin enforcement: data coming from authorized nodes will be preserved. Data from the wrong origin will be dropped, and a consolidation event will be generated and sent to the other nodes. This method would be possible in systems (like Tungsten) that keep track of the event origin.
  • Data merge: if possible and desirable, data from two different sources can be preserved and merged in the destination table. This rule should also generate a new event to fix the data in the other nodes.

Schema renaming

Applicability: fan-in topologies


Image #1 - Fan-in topology with likely conflicts.

A fan-in topology is easy to implement with Tungsten Replicator, but not easy to maintain. By its nature, fan-in is a conflict waiting to happen. Assuming that all the masters have the same structure, they will replicate multiple changes into the same schema, and it is quite likely that some changes will clash. For this reason, the simple solution often adopted is renaming the schema before the data reaches the slave.


Image #2 - Fan-in topology with schema renaming.

I know of at least one user who has successfully applied this technique for a cluster made of 70 masters and one slave.

Conflict prevention: Discipline

Applicability: all topologies

A simple way of preventing conflicts, and one that would make life easier for all, is discipline. The organization decides which entry points can update which data, and conflicts are not possible, because the data is inserted or modified only in the places where it is supposed to be.


Image #3 - Preventing conflicts with discipline in a star topology.

Conflict prevention: Enforced discipline

Applicability: all topologies

If you have worked in any large organization, either public or private, you know that discipline alone is the worst method you can rely on for something as delicate and valuable as your data. The reasons why this paradigm could fail are many: some people dislike discipline, someone makes a mistake, there are too many rules and people don't remember them, or an application bug lets users update what they shouldn't.

Either way, you end up with a system that has conflicts, and nobody knows what happened or how to fix them. However, there is a way of enforcing this discipline-based system.

This is the "poor-man's" conflict avoidance system. It is based on simple technology, available in most database servers. If you can install a multi-master topology, using either native MySQL (circular) replication or Tungsten Replicator topologies, you can also apply this method.

The key to the system is to grant different privileges for every master. Looking at image #3, you can enforce discipline by granting different privileges to the application user in every master.

In master #1, where we can update personnel, app_user will have SELECT privileges on all databases, and all privileges on personnel.

In master #2, where we can update sales, app_user will have all privileges on sales and read only access to the other databases, and so on.

The key to making this system work well is to assign the privileges locally, without letting the GRANT statements replicate. It should work like this:

# master 1
GRANT SELECT on *.* to app_user identified by 'my password';
# This is good for all masters. Let it replicate

# master 1
SET SQL_LOG_BIN=OFF;
GRANT ALL on personnel.* to app_user;   # This won't replicate


# master 2
SET SQL_LOG_BIN=OFF;
GRANT ALL on sales.* to app_user;

# master 3
SET SQL_LOG_BIN=OFF;
GRANT ALL on vehicles.* to app_user;

# master 4
SET SQL_LOG_BIN=OFF;
GRANT ALL on buildings.* to app_user;
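A quick way to verify the outcome on each master is SHOW GRANTS (the output below is indicative, shown for master 1; the exact format depends on the MySQL version):

# master 1
SHOW GRANTS FOR app_user;
# GRANT SELECT ON *.* TO 'app_user'@'%' IDENTIFIED BY PASSWORD '...'
# GRANT ALL PRIVILEGES ON `personnel`.* TO 'app_user'@'%'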

This method works quite well. Since updates for a given schema can be applied only in one master, there is little chance of any mischief happening. Conflicts are not completely removed, though. There are super users and maintenance users who can, consciously or not, introduce errors. For these cases, you may want to look at the next section.

Enforced discipline with certified origin

Applicability: all Tungsten topologies

Discipline based on granted privileges is often robust enough for your needs. However, if you want to keep track of where the data comes from, you should look at a System of Records technology, where each piece of data can be traced back to its origin.

Tungsten Replicator implements this technology with several topologies. The theory of this matter is beautifully explained by Robert Hodges in an article written some time ago. Here I would like to look at the practical stuff.

To implement a System of Records in Tungsten, you decide where you want to update each schema (which is defined as a shard in our lingo), assign that schema to a service, and the replicator will enforce your rules.

Once you have defined the shards, you can set the rules. When an event comes to a slave from an UNKNOWN shard, i.e. a shard that was not among the defined rules, you can:

  • Accept the event (not recommended, really);
  • Drop the event silently;
  • Drop the event with a warning in the logs;
  • Generate an error that will break replication (recommended).

You can choose among the above actions when setting a rule for events that come from UNWANTED shards, i.e. a shard that is not the one designated to update that schema.

Here's an example of a shard definition based on an all-masters topology with three nodes:


Image #4 - Sample conflict prevention in an all-masters topology

# Options to add during installation
--svc-extractor-filters=shardfilter

# policy for unknown shards
--property=replicator.filter.shardfilter.unknownShardPolicy=error

# policy for unwanted shards
--property=replicator.filter.shardfilter.unwantedShardPolicy=error

# Whether the policy for unwanted shards is activated or not
--property=replicator.filter.shardfilter.enforceHomes=false

# whether we allow whitelists to be created
--property=replicator.filter.shardfilter.allowWhitelisted=false


# Loading the rules set

$ trepctl -host host1 -service charlie shard -insert < shards.map

$ cat shards.map
shard_id          master      critical
employees         alpha       false
buildings         bravo       false
vehicles          charlie     false
test              whitelisted false

The rules are set by service, rather than by host name. The schema 'employees' can be updated by the service named 'alpha', which has its master in host #1. Similarly, 'buildings' can be updated by 'bravo', with a master in host #2, and 'vehicles' is updated by the 'charlie' master service in host #3. Remember that in Tungsten each replication stream from one master to many slaves is a separate service. This way we can keep track of the origin of each event. Even if the event is routed through a hub in a star topology, it retains its origin in the metadata.

The last line of the rules says that the schema 'test' is whitelisted, i.e. it can be freely updated by any master. And this means that conflicts can happen there, so be careful if you use this feature!


Image #5 - Example of a legitimate event coming through

When an expected event comes through, all is well. Each node checks that the event originated from the authorised master, and the event is applied to the slave service.


Image #6 - Example of an event originated from an unauthorised node

When the event comes from a node that was not authorised, Tungsten looks at the rules for such a case. In our setup, the rule says 'error', and therefore replication will break at the receiving end of the service 'bravo' in host #1 and host #3.

mysql #2> create table employees.nicknames( ... )


# Only server #2 creates the table
# slave service 'bravo' in host1 and host3 get an error
# No table is created in hosts #1 and #3

To detect the error, we can ask for the list of services in host #1 and host #3. What we will see is something like this.

#3 $ trepctl services | simple_services 
alpha    [slave]
seqno:          7  - latency:   0.136 - ONLINE

bravo    [slave]
seqno:         -1  - latency:  -1.000 - OFFLINE:ERROR

charlie  [master]
seqno:         66  - latency:   0.440 - ONLINE

This listing says that replication was stopped with an error in slave service 'bravo'. To determine what happened exactly, we ask for the status of that service:

#3 $  trepctl -service bravo status
NAME                     VALUE
----                     -----
appliedLastEventId     : NONE
appliedLastSeqno       : -1
appliedLatency         : -1.0
(...)
offlineRequests        : NONE
pendingError           : Stage task failed: q-to-dbms
pendingErrorCode       : NONE
pendingErrorEventId    : mysql-bin.000002:0000000000001241;0
pendingErrorSeqno      : 7
pendingExceptionMessage: Rejected event from wrong shard: 
seqno=7 shard ID=employees shard master=alpha service=bravo
(...)

This status gives us quite a lot of useful information:

  • The event with Global transaction ID (GTID) # 7 was rejected;
  • It was rejected because it came from the wrong shard;
  • The expected shard master (i.e. the authorized service) was alpha;
  • The event instead came through service bravo.

With the above information, we can take action to fix the event. We know that GTID 7 is wrong, so we can skip it in both servers where the error occurred. To clean up the error, we can simply generate the correct event in the authorized master:

#1 $ trepctl -service bravo online -skip-seqno 7

mysql #1> drop table if exists employees.nicknames;
mysql #1> create table if not exists employees.nicknames ( ... ) ;

#3 $ trepctl -service bravo online -skip-seqno 7

Statement-based vs row-based replication

As a general note about conflict solving, I need to mention that, in most cases, using row-based rather than statement-based replication helps identify conflicts, making them easier to clean up.

Even when the conflict involves a deleted row, row-based events contain enough information to let us identify the critical data needed to recover the lost information.

Be aware that, if you use binlog-row-image=minimal in MySQL 5.6, the binary log entry for a DELETE event will only include the primary key.
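If you want row events to carry the whole row rather than just the primary key, make sure the servers log full row images (FULL is the default in MySQL 5.6). As a reminder of the relevant settings:

# Use row-based events with full before/after row images
SET GLOBAL binlog_format    = 'ROW';
SET GLOBAL binlog_row_image = 'FULL';   # 'MINIMAL' logs only the primary key for DELETEs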


More about filters

We have seen in at least two examples (schema renaming and conflict prevention) that filters can help you avoid conflicts. This is a powerful feature that should be taken into account when planning a multi-master topology.

MySQL native replication offers very little in the way of data transformation through filtering. Tungsten Replicator, instead, allows you to define filters at several stages of the replication process: when extracting the data, after transporting it to the slaves, and before applying it. You can write your own filters in JavaScript, and do with the data pretty much everything you want. If you have creative ideas about solving conflicts by manipulating data in transit, there is a good chance that you can implement them using filters. This topic deserves more than a paragraph, and I will probably come back to it soon with a full-fledged article.


Parting thoughts

Multi-master topologies are much-coveted features. However, they often introduce the risk of conflicts.

Dealing with conflicts becomes somewhat easier if you understand how they happen and what kind of problems they generate.

There is no silver bullet solution for conflicts, but recent technology and good organization can help you ease the pain.

Monday, March 25, 2013

Multi-master data conflicts - Part 1: understanding the problem

What is a conflict?

Readers of this blog know that one of my favorite tools, Tungsten Replicator, can easily create multi-master replication topologies, such as all-masters, star, fan-in. While this is good news for system designers and ambitious DBAs, it also brings some inconvenience. When you allow updates to happen in more than one master, you risk having conflicts. You may have heard this term before. For the sake of clarity, let's define what conflicts are, before analyzing each case in detail.

You have a conflict when several sources (masters) concurrently update the same data in asynchronous replication.

It's important to stress that this happens with asynchronous replication. In a truly synchronous cluster, where all data is kept consistent through 2-phase commit, there is no risk of conflicts.

The above definition is not always correct. You may update data from several sources and end up with something completely legitimate. A better definition should be: a conflict happens after an unexpected concurrent modification of existing data coming from a remote source.

For example:

  • Both servers A and B insert a record with the same primary key;
  • Master A updates record 10 with value 3 while master B updates record 10 with value 4;
  • Master A deletes record 10 while master B updates record 10.

In each of these cases, there is data in one server, which could have been just inserted by a legitimate source, or could have been there for a long time. Regardless of the record's age, the conflict happens when the data clashes with the latest update. We usually think of conflicts as concurrent events from different sources, because this is the most frequent case of evident conflicts, i.e. conflicts that are detected (and hopefully fixed). But there are hidden conflicts that happen (or are discovered) a long time after the update.

When we consider the consequences of a conflict, we observe that a conflict creates one or more of the following:

  • Duplicate data (unwanted insert)
  • Data inconsistency (unwanted update)
  • Data loss (unwanted delete)
  • Replication break

Duplicate data

This is the most widely known conflict, because it's often caught immediately after the fact, allowing the DBA to take action. However, it's also one of the least understood cases, as we will see in the next section.


Image #1. Data duplication

In its basic shape, data duplication happens when two masters insert the same data. One of the two transactions will be committed before the other, and the second one will fail with a rightful duplicate key error. The reason for this occurrence is often an application bug, or a design flaw, or sometimes a human error that the application fails to detect properly (a different application bug).

We will see the possible solutions in part 2 of this article. For now, it's enough to know that this is the best kind of conflict that can happen to you. Since it breaks replication, it has the positive effect of alerting the DBA about an error.

Depending on which event is applied first, you have different situations. If the remote event is applied first, you have a real conflict, where the legitimate event can't get to the database. This state requires a longer set of actions: you need to clean up both the origin and the destination servers. If the legitimate action is applied first, then you don't have a real conflict, as the wrong event was applied only in one server, and you need to clean up only the origin. If you have more than two masters in your topology, you may find that the damage is a mix of both cases, as the wrong event may arrive at distant servers before or after the right one.

auto_increment_offsets and hidden data duplication

There is a popular belief that conflicts with multi-master topologies can be avoided using auto_increment_increment and auto_increment_offset. The combination of these two variables makes sure that auto_increment values are generated with different intervals for each master. For example, if I have three masters and I am using increment 3, the first master will generate 1,4,7,10, the second one will have 2,5,8,11, and the third one 3,6,9,12.
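For reference, these are the settings involved (a sketch; in practice you would put them in each server's options file rather than set them at runtime):

# master 1
SET GLOBAL auto_increment_increment = 3;
SET GLOBAL auto_increment_offset    = 1;   # generates 1, 4, 7, 10, ...

# master 2
SET GLOBAL auto_increment_increment = 3;
SET GLOBAL auto_increment_offset    = 2;   # generates 2, 5, 8, 11, ...

# master 3
SET GLOBAL auto_increment_increment = 3;
SET GLOBAL auto_increment_offset    = 3;   # generates 3, 6, 9, 12, ...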

Where does this paradigm work? When the primary key is the auto-generated column, the conflict is prevented. For example, in a table that records bug reports, the incrementing value for the bug number is a legitimate primary key. If two masters enter a bug simultaneously, the bug numbers will have different values. (There will most likely be gaps in the bug numbers, which could be an undesirable side effect; but then, I am not advocating this system, although I made this mistake many years ago.)

However, if the table has a natural primary key that is not an auto-incremented value, conflicts are possible, and likely. In that case, you will have a duplicated key error, as in the case seen before.

Disaster strikes when the table has a poorly chosen auto_increment primary key.

For example, let's consider a departments table with this structure:

CREATE TABLE departments (
    dept_id int not null auto_increment primary key,
    dept_name varchar(30) not null
);

If two masters need to insert a new department named 'special ops', we may end up with this situation:

select * from departments;
+---------+-------------+
| dept_id | dept_name   |
+---------+-------------+
|       2 | special ops |
|       4 | special ops |
+---------+-------------+


Image #2. Hidden data duplication

This is what I call hidden data duplication, because you have duplicated data, and no errors that may warn you of the problem. Here the issue is aggravated by the fact that 'departments' is likely a lookup table. Thus, there will be a table where 'special ops' is referenced using dept_id 2, and another table where it is used with dept_id 4.

The reason for hidden data duplication is a poor choice of primary key, combined with a failure to enforce uniqueness in columns that should be unique.
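A minimal fix along these lines is to enforce uniqueness on the natural identifier, so the second insert fails with a duplicate-key error (and breaks replication) instead of silently creating a second department:

CREATE TABLE departments (
    dept_id int not null auto_increment primary key,
    dept_name varchar(30) not null,
    unique key (dept_name)
);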

Data inconsistency

When two UPDATE statements are executed on the same record from different sources, there is the possibility of spoiling the data accuracy in several ways. The amount of damage depends on the type of update (with absolute values or calculated ones) and on whether we are using statement-based or row-based replication.

With absolute values, the last value inserted overwrites the previous one.


Image #3. Data inconsistency

With calculated values, the data inconsistency can have surprising consequences. For example, if we have a table accounts:

select * from accounts;
+----+--------+
| id | amount |
+----+--------+
|  1 |   1000 |
|  2 |   1000 |
|  3 |   1000 |
+----+--------+

If a statement that doubles the amount for account ID 2 is executed on two masters, we will end up with an amount of 4,000 instead of 2,000. Using row-based replication can protect you against this kind of disaster.
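For instance, a sketch with the accounts table above:

# Entered concurrently on master A and master B
UPDATE accounts SET amount = amount * 2 WHERE id = 2;

# With statement-based replication, each server also re-executes the
# statement it receives from the other master, so id 2 ends up at 4000.
# With row-based replication, the replicated event carries the computed
# value (2000) rather than the expression, which is what prevents the
# second doubling.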

Data loss

When a DELETE statement is entered for a record that we later want to read, we have lost data. This kind of DELETE may happen because of bad operations, or more likely because of data inconsistencies that alter the conditions used for deleting.


Image #4. Data loss

Unwanted DELETE operations may also break replication when the DELETE happens before an UPDATE on the same record. Either way, a data loss conflict is hard to resolve because the data has gone away. Depending on the amount of the loss, we may need to restore the table completely or partially from a backup.

Why conflicts happen

To understand why conflicts happen, let's first see why they don't happen when we try a conflicting operation in the same server.


SESSION_1 > create table people (id int not null primary key, name varchar(40) not null, amount int);
Query OK, 0 rows affected (0.01 sec)

SESSION_1 > insert into people values (1, 'Joe', 100), (2, 'Frank', 110), (3, 'Sue', 100);
Query OK, 3 rows affected (0.00 sec)
Records: 3  Duplicates: 0  Warnings: 0

SESSION_1 > set autocommit=0;
Query OK, 0 rows affected (0.00 sec)

SESSION_1 > begin;
Query OK, 0 rows affected (0.00 sec)

SESSION_2 > set autocommit=0;
Query OK, 0 rows affected (0.00 sec)

SESSION_2 > select * from people;
+----+-------+--------+
| id | name  | amount |
+----+-------+--------+
|  1 | Joe   |    100 |
|  2 | Frank |    110 |
|  3 | Sue   |    100 |
+----+-------+--------+
3 rows in set (0.00 sec)

SESSION_2 > begin;
Query OK, 0 rows affected (0.00 sec)


SESSION_1 > insert into people values (4,'Matt', 140);
Query OK, 1 row affected (0.00 sec)


SESSION_2 > insert into people values (4,'Matt', 130);
# ... hanging


SESSION_1 > commit;
Query OK, 0 rows affected (0.01 sec)


SESSION_2 > insert into people values (4,'Matt', 130);
# ...
ERROR 1062 (23000): Duplicate entry '4' for key 'PRIMARY'

What happens here is that the user in session 1 inserts a record at the same time as the user in session 2 inserts the same record. When the record is inserted in session 1, InnoDB creates a lock. If you look at the InnoDB locks before SESSION_1 issues a commit, you will see it:

SESSION_3 > select * from information_schema.innodb_locks\G
*************************** 1. row ***************************
    lock_id: 30B:0:307:5
lock_trx_id: 30B
  lock_mode: S
  lock_type: RECORD
 lock_table: `test`.`people`
 lock_index: `PRIMARY`
 lock_space: 0
  lock_page: 307
   lock_rec: 5
  lock_data: 4
*************************** 2. row ***************************
    lock_id: 30C:0:307:5
lock_trx_id: 30C
  lock_mode: X
  lock_type: RECORD
 lock_table: `test`.`people`
 lock_index: `PRIMARY`
 lock_space: 0
  lock_page: 307
   lock_rec: 5
  lock_data: 4
2 rows in set (0.00 sec)

You can see that there is an exclusive lock on the record.

This lock effectively prevents a duplicate.
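To see explicitly which transaction is waiting for which, you can also query the lock-waits table (MySQL 5.5/5.6 INFORMATION_SCHEMA; output omitted here):

SESSION_3 > select requesting_trx_id, blocking_trx_id
            from information_schema.innodb_lock_waits;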

Now, if you imagine the two sessions happening on different servers, the two users are in a similar situation, i.e. they don't know that a concurrent update on the same record is being attempted. But the difference is that, in asynchronous replication, no lock is applied on the remote server. If the two transactions are committed at the same instant, both of them will be stored in their local server, and both of them will fail and break replication on the remote server. If the record in session 1 is applied a few seconds before the other, the user in session 2 will not be able to commit, just as happened with the concurrent insertion in the single-server example above. In this case, the conflict looks exactly as it would in a single server.

However, if both commits happen at the same time, both users will get positive feedback, since their transactions return success, and both are happy, at least temporarily. Unbeknownst to both, though, each transaction has failed on the remote server, and replication is broken on both servers, leaving each with a bit of a mess to clean up.

These examples show that conflicts are often a matter of chance. Depending on the timing of the operations, we might catch them as they happen and take action before the conflict spreads its side effects, or we may only notice them later on, when replication fails and the conflict has already spoiled our data.

Summing up

Conflicts in multi-master topologies are the consequence of unwanted or unexpected operations. The effects of a conflict range from data inconsistency to data loss, and may also cause replication to break.

The most desirable outcome for a conflict is a replication error, because it prevents further spreading of the error and alerts the DBA about a possible issue.

In the second part of this article, we will look at some of the methods to deal with conflicts in various scenarios.

Tuesday, March 12, 2013

Sessions at Percona Live MySQL Conference 2013: fun, competition, novelties, and a free pass

Percona Live MySQL Conference and Expo, April 22-25, 2013

The Percona Live MySQL Conference and Expo 2013 is almost one month away. It's time to start planning, set the expectations, and decide what to attend. This post gives a roundup of some of the sessions that I recommend and look forward to attending.

First, the unexpected!

After much talk and disbelief, here they come! Oracle engineers will participate in the Percona Live conference. This is wonderful! Their participation was requested by the organizers, by the attendees, and by community advocates, who all told the Oracle management how important it is to be at this conference. Finally, they have agreed to come along, and here they are, with one keynote and three general sessions.

My talks

I will be a speaker at the conference, and thus it's no surprise that I will recommend my talks.

My company's talks

Continuent is very active at many conferences, and we are participating massively in this one. I know I look partial in this matter, but I am really proud of the products that we create and maintain at my company. That's why I highly recommend these talks.

Competing with whom?

MySQL is a standard, and widely popular. Yet, it has shortcomings and weak points, which allow alternative solutions to flourish. There are many sessions that offer alternatives to the vanilla software.

  • [Tue 1:20pm] MariaDB Cassandra Interoperability. MariaDB is a magnetic fork of MySQL. It's magnetic in the sense that it attracts most of the features or enhancements that nobody else wanted to accept. While some of its features may look like a whim (and some of them have been discontinued already), there are some that look more interesting than others. This integration with Cassandra deserves some exploration.
  • [Tue 3:50pm] MySQL Cluster - When to use it and when not to. The classic MySQL Cluster. Some believe that it's a drop-in replacement for a single server. It's not. It's a powerful solution, but it is not fit for all.
  • [Wed 11:10am] Fine Tuning Percona XtraBackup to your workload. This tool has become a de-facto standard. It is available everywhere, easy to use, and powerful. A great tale of an alternative tool that became the standard.
  • [Thu 9:00am] MySQL, YourSQL, NoSQL, NewSQL - the state of the MySQL ecosystem. While all the keynotes are worth attending, this one is special. If you want to understand the MySQL world, Matt Aslett can draw a quite useful map for you.

New and renewed technologies

There are many interesting talks about new things, or old technologies with a new twist.

Tales from the trenches

Win a free pass

Percona is offering free passes for community participation. One of them is available to readers of this blog and I will be the judge.

To get a free pass, do the following:

  1. Blog, tweet, or post on another public media about this conference;
  2. Leave a comment here, with a link to your post;
  3. The free pass will be given to the most useful or pleasant post;
  4. Make sure there is a way to reach you by email or twitter;
Please notice:
  • I will award the free pass to the post that I like most. The adjudication will be entirely subjective.
  • Deadline: March 20th, 2013.

Monday, March 11, 2013

Deploying remote MySQL sandboxes

Stating the problem.

In my job, I do a lot of testing. And no matter how organized we try to be, we end up with fewer machines than we would need to run all the tests that we want.

For some tasks, we can run MySQL Sandbox, and get the job done. But sometimes we need to make sure that applications and systems work well across the network, and we need to install and run systems on separate servers.

However, when you test replication systems, and every cluster takes three or four servers, you run out of available hosts very quickly. So you decide to use the clusters that are dedicated to automated testing to also run your own manual tests. Soon you realize that the tests that you are running manually are clashing with the automated ones, or with the ones that your colleagues are running.

A simple solution is installing additional sandboxes for the MySQL server in each host, and then running our operations against the new servers. If we need more than one database server, MySQL Sandbox allows us to deploy them at will.

There are a few obstacles to overcome, for this approach to be useful:

  • Installing sandboxes in remote servers is time consuming and not automated. When we need to do automated tests, this is a serious issue.
  • By default, MySQL Sandbox works on the local host only. It can be installed to listen to remote clients, using an appropriate option.
  • Sandboxes for testing replication need to be configured on the fly. It can be done manually, but also this issue can be solved by using advanced installation options.
  • Installing via SSH can be challenging: when we need to provide several options on the command line, there are only two kinds of quotes available, and they can get messed up when we pass options across servers.


Default deployment - with one MySQL server installed through rpm or deb


Deployment with one additional sandbox per host


Deployment with two additional sandboxes per host


Deployment with three additional sandboxes per host

I faced this problem several times, and each time I was writing a quick shell script that would deploy the sandboxes in my servers. My O.S. user has access to all nodes in the cluster using SSH, and this makes the remote installation easier.

Remote deployment scripts

No manual installation is required. Following my experience with many different deployments in my clusters, I came up with a solid model that allows me to deploy remote sandboxes automatically. Initially, I had several sandboxes.sh scripts scattered around my code version control system. Then I implemented a general-purpose script inside the Tungsten Toolbox. And finally I made an even more general script in the latest release of MySQL-Sandbox (3.0.31).

Remote database connection

This problem is easy to solve. By default, sandbox users are created as

msandbox@'127.%'
There is an option that installs the sandbox users with access from anywhere in the network (--remote_access='%'). We just need to pass this option correctly to the deployer. (See below)

Sandbox configuration

MySQL Sandboxes can be configured exactly as you need them. During the installation, you can pass options that change the configuration file according to your need. The syntax is :

make_sandbox tarball_name.tar.gz -- -c option=value -c another_option=value
The options passed with '-c' will be written to the configuration file, and used from the first start.
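For example (illustrative option values; any MySQL server option can be passed this way):

make_sandbox mysql-5.6.10-linux-blah-blah.tar.gz -- -c server-id=101 -c log-bin=mysql-bin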

Passing complex options across servers

This is probably the most annoying problem. If you run

ssh myhost  "some_command --option='value' "
and there are quotes inside 'value', you may get an error, and a difficult one to debug.

The solution to this problem is to split the process into three phases:

  1. Create a deployment shell script, generated dynamically inside the remote_sandboxes script. This script will have all the needed quotes in the right places. There won't be any need for further quoting.
  2. Send the script to the remote host. Note that we generate a different script for every host.
  3. ssh to the remote host, and execute the deployment script remotely. The script will run all the commands you have prepared, without risk of contaminating the command line with unwanted quotes.

An additional advantage of this approach is that you can include some checks inside the dynamically generated script, things that could not easily be done with a plain SSH call.

Try it out

The new script is installed along with the rest of the MySQL Sandbox executables, and it is called deploy_to_remote_sanboxes.sh. For example:

deploy_to_remote_sanboxes.sh -l host1,host2,host3 -P 19000 -d myGA -t ~/downloads/mysql-5.6.10-linux-blah-blah.tar.gz

This command will install a sandbox called 'myGA' with MySQL 5.6.10, using port 19000 and the given tarball.
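To check that a remote sandbox is reachable across the network, you can connect from another host with the default sandbox credentials (assuming the deployment used --remote_access='%' as described above):

$ mysql -h host1 -P 19000 -u msandbox -pmsandbox -e 'select @@port, @@version'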

Assumptions:

  • You must have SSH access between your current O.S. user and a corresponding user in the remote hosts. It is recommended that the user connect through an SSH key.
  • MySQL Sandbox should be installed in all remote hosts.
  • The port that you have chosen for the sandboxed database server must be open (and not already used) in all hosts.
  • If you have an existing sandbox with the same name, it will be overwritten.

Sunday, March 10, 2013

North East Linux Fest and Open Database Camp - Boston, March 16-17 2013

On Thursday, I will travel to Boston, MA, to attend the Northeast LinuxFest, which also includes an edition of the Open Database Camp. The events will be at one of my favorite places on earth: the Massachusetts Institute of Technology, a.k.a. the MIT. Every time I speak at an event there, I feel at home, and I look forward to being there once more.

The Open Database Camp is organized, as usual, with the formula of an un-conference, where the schedule is finalized on the spot.

There are a few ideas for sessions. I have proposed two of the topics I am most familiar with:

In addition to seeing the MIT again, I will also be pleased to meet colleagues and friends from all over the place. If you happen to be nearby, let's get together!