ProxySQL High Availability & Configuration Management (HA & CM) Overview 2020
We provide services to help build, support and improve the performance & reliability of your Cloud-Based and On-Premises MySQL infrastructure.
• ProxySQL Development
• ProxySQL Support Services
• ProxySQL, MySQL, DevOps & Outsourcing
• Consulting Services
MySQL protocol-aware data gateway:
– Clients connect to ProxySQL
– Requests are evaluated
– Various actions are performed
• High Availability and infinite scalability
• Seamless planned and unplanned failover
• MySQL Session Load balancing
• Launching multiple instances on same port
• Binding services on different ports
• Connection pooling and multiplexing
• Read caching outside of the database server
• Complex query routing and read/write split
• Query throttling, firewalling and mirroring
• On-the-fly query rewrite
• Advanced data masking
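The routing, rewriting and caching features above are all driven by rows in the mysql_query_rules table on the admin interface. A minimal sketch, assuming illustrative hostgroups (10 for writers, 20 for readers) and match patterns that are not taken from this deck:

  -- Hypothetical read/write split: locking SELECTs stay on the writer
  -- hostgroup, other SELECTs go to the reader hostgroup.
  INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
  VALUES (1, 1, '^SELECT .* FOR UPDATE$', 10, 1),
         (2, 1, '^SELECT', 20, 1);

  -- Hypothetical on-the-fly rewrite: redirect reads from a legacy table name.
  INSERT INTO mysql_query_rules (rule_id, active, match_pattern, replace_pattern, apply)
  VALUES (3, 1, 'FROM legacy_orders', 'FROM orders', 1);

  -- Hypothetical read caching: cache matching result sets in ProxySQL
  -- for 5 seconds (cache_ttl is in milliseconds).
  INSERT INTO mysql_query_rules (rule_id, active, match_digest, cache_ttl, apply)
  VALUES (4, 1, '^SELECT .* FROM country_codes', 5000, 1);

  LOAD MYSQL QUERY RULES TO RUNTIME;
  SAVE MYSQL QUERY RULES TO DISK;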
• Dynamic runtime reconfiguration
• Real time statistics and monitoring
• Scheduling of external scripts
• Causal reads by tracking GTIDs across pools of
backend servers
• Native ProxySQL Clustering
• SSL support for frontend connections (TLS v1.2)
• Native support for Galera / PXC and Group Replication
• Integration with Amazon RDS and Aurora
• MySQL, MariaDB, Percona and ClickHouse backends
• Supports millions of users and tens of thousands of
database servers
Deployment and HA for ProxySQL
There are three main approaches for deploying ProxySQL:
- Deploy ProxySQL on your application servers
- Deploy ProxySQL in a dedicated layer of servers
- Deploy ProxySQL on your application servers and in a separate dedicated layer of servers
Each approach has its own advantages and disadvantages.
Regardless of your underlying infrastructure, all three approaches can be implemented on:
- On-premises bare metal servers
- Virtualized environments (VMware, KVM, etc.)
- Cloud environments (AWS, GCP, Azure, etc.)
- Containerized environments (Kubernetes, Docker, etc.)
ProxySQL is deployed locally, on each application server.
Advantages:
• No network overhead
• No single point of failure
• Isolated configuration
• Rolling upgrades
Disadvantages:
• DB monitoring overhead
• More backend connections
• Configuration effort
• Query cache isolated per instance (QC1 / QC2 / QC3)
Configuration management is essential.
ProxySQL is deployed on a standalone server (a dedicated layer).
Advantages:
• Reduced DB monitoring overhead
• Fewer backend connections
• Reduced configuration effort
• Shared query cache (Global QC)
Disadvantages:
• Additional round-trip time
• Single point of failure
• Shared configuration
• No rolling upgrades
Configuration management is optional.
ProxySQL is the "first contact" for applications connecting to database services.
• The ProxySQL service is CRITICAL:
– Deployment decisions affect uptime
– Architectural considerations affect scalability
– Additional layers introduce possible points of failure
For an app-server deployment we eliminate the single point of failure by having 1..N instances deployed per server:
• Local deployment (a single instance on a given port)
• Local deployment (multiple instances on the same port)
• Kubernetes sidecar implementation
HA is required for a ProxySQL layered deployment:
• Keepalived (IP failover) or a TCP load balancer
• DNS / Consul DNS (or other service discovery tools)
• Kubernetes Service + ReplicaSet (with / without a controller)
• ProxySQL Cascading
– Cascading in Kubernetes requires both a sidecar implementation and a Kubernetes Service + ReplicaSet
• HA provided by Virtual Router Redundancy Protocol (VRRP) or a load balancer; application servers connect to a virtual IP (e.g. VIP 10.10.10.1)
• HA provided by Virtual Router Redundancy Protocol (VRRP); clients connect to the VIP (10.10.10.1)
• When the service check fails, Keepalived switches the VIP to a standby ProxySQL instance
• All client connections are LOST and need to be re-established by the application
• Only one instance is used at a time; the other instances are just on standby
• HA provided at the TCP level (not the MySQL protocol) by a load balancer (LB)
• Application servers access the service via the load balancer
• Provides load balancing as well as HA
• HA provided by Consul agents running on each ProxySQL instance
• Application servers access the service via DNS or by querying Consul directly
• Provides load balancing as well as HA
• On failover, all client connections are LOST and must be re-established by the application
• HA provided by a Kubernetes ReplicaSet; applications connect to a single K8s Service
• Provides load balancing as well as HA
• No need for config management; one-step deployment / configuration
• On pod failure, all client connections are LOST and must be re-established by the application
• Optionally, an additional controller pod can be added
• ProxySQL instances serving traffic are never configured directly; they just pull config from the controller via ProxySQL Cluster
• The controller is configured dynamically via SQL; updates are pushed to the pod – config is re-used and persisted in a volume
• ProxySQL is deployed on each application server and in a dedicated ProxySQL layer
• Applications connect to the local ProxySQL server
• Provides load balancing as well as HA
• Open connections are held and retried on an available backend – no connections are lost
• Note: if the PRIMARY MySQL instance is lost while multiplexing is disabled or a transaction is active, a ROLLBACK will occur
ProxySQL can be used to provide HA to itself, as it communicates using the MySQL protocol (see the sketch below):
• Application-layer aware
• Provides connection retry
• Allows for scale-up / scale-down without connection loss (during planned maintenance)
• Transactions can be lost during edge-case unplanned failovers
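As an illustrative sketch of how cascading is wired: the local (application-side) ProxySQL instance simply lists the ProxySQL layer as its backends. The IPs, hostgroup and port below are assumptions, not values from this deck:

  -- On the local ProxySQL instance: the "backends" are the ProxySQL layer
  -- instances (on their MySQL interface, 6033 by default), not MySQL itself.
  INSERT INTO mysql_servers (hostgroup_id, hostname, port)
  VALUES (0, '10.10.10.11', 6033),
         (0, '10.10.10.12', 6033);
  LOAD MYSQL SERVERS TO RUNTIME;
  SAVE MYSQL SERVERS TO DISK;

Because the layer speaks the MySQL protocol, the local instance health-checks and retries against it exactly as it would against a database backend.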
ProxySQL can also be cascaded in a Kubernetes environment; this requires the following:
• ProxySQL deployed in the same pod as the application as a separate container; the application then connects to "localhost"
• An additional ProxySQL layer configured using a K8s Service tied to a ReplicaSet (auto-scaling can be implemented)
• Optionally, a ProxySQL controller pod for the service: this is a ProxySQL instance used ONLY for configuration, while the traffic-serving ProxySQL instances pull configuration from it and serve traffic
Configuration Management
ProxySQL provides an admin interface (port 6032 by default) which accepts SQL commands to manage the internal configuration system (see the example below).
• Various configuration management tools can also be used (e.g. Ansible, Puppet, Chef)
• ProxySQL Cluster is the native configuration management feature that can be used to synchronize configuration between multiple nodes
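A typical runtime change over the admin interface might look like the following (connect with any MySQL client to port 6032; the hostname, hostgroup and weight are illustrative assumptions):

  -- Add a backend, apply it without a restart, then persist it.
  INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight)
  VALUES (1, 'db1.example.com', 3306, 100);
  LOAD MYSQL SERVERS TO RUNTIME;  -- activate the change immediately
  SAVE MYSQL SERVERS TO DISK;     -- persist across restarts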
Various open-source configuration management tools are used with ProxySQL, with extensive samples available online:
• Ansible – most widely used and supported
• Chef – widely used, scripts available from 3rd parties
• Puppet – also widely used and supported on Puppet Forge
• Salt – several implementations are known, though less material is available for Salt provisioning (the MySQL modules are more than sufficient)
• Docker / Kubernetes
• Ansible modules for ProxySQL are available upstream (out of the box), e.g. proxysql_backend_servers, proxysql_mysql_users, proxysql_query_rules and proxysql_global_variables
• Puppet modules for ProxySQL are available on Puppet Forge
It is recommended to implement automation that runs an idempotent SQL file of commands rather than using modules (a sketch follows this list). The reasoning is that:
• Runtime configuration is performed using SQL, so it is easier and more flexible to transfer it to automated configuration, i.e. to just save the executed commands in a file which is then deployed by your config management solution
• ProxySQL already provides a mechanism to stage changes (LOAD / SAVE options); these can be used with greater control within an SQL script
• Automation modules often rely on other underlying libraries, and incompatibilities have been known to occur in the past, whereas an SQL file and a MySQL client ensure full compatibility
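A minimal sketch of such an idempotent SQL file (the table contents and rule values are assumptions; the delete-then-insert pattern makes repeated runs converge to the same state):

  -- proxysql_config.sql: safe to run repeatedly.
  DELETE FROM mysql_query_rules;
  INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
  VALUES (1, 1, '^SELECT', 20, 1);
  LOAD MYSQL QUERY RULES TO RUNTIME;
  SAVE MYSQL QUERY RULES TO DISK;

Your configuration management tool then only needs to ship this file and pipe it through a MySQL client to the admin port.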
Another important point to consider is that configuration management tools generally rely on editing a configuration file and restarting the service.
• This contrasts with the zero-downtime runtime configuration approach in ProxySQL
• SQL against the ProxySQL runtime configuration gives a greater level of control with simpler code
Configuration management tools configure each node individually. ProxySQL provides an important feature known as ProxySQL Cluster:
• Provides configuration synchronization across a cluster of hosts
• Pure SQL can be executed against a single host to propagate configuration across all others (this can also be driven by an external configuration management tool, e.g. Ansible / Puppet / etc.)
• All nodes can share configuration, or a specific node can be used as a PRIMARY
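The change itself is ordinary admin SQL executed on one node. As a hedged way to verify propagation, each node exposes the configuration checksums it sees from its peers in the stats schema:

  -- Run on any cluster node: one row per peer per configuration module.
  SELECT hostname, name, checksum, changed_at
  FROM stats_proxysql_servers_checksums;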
ProxySQL Cluster shares configuration related to the
following:
• mysql_servers
• mysql_users
• mysql_query_rules
• proxysql_servers
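Cluster membership itself lives in the proxysql_servers table, so a sketch of bootstrapping the three-node example used below (the hostnames assume the diagram's proxysql1/2/3; 6032 is the admin port, which the cluster uses):

  INSERT INTO proxysql_servers (hostname, port, weight, comment)
  VALUES ('proxysql1', 6032, 0, ''),
         ('proxysql2', 6032, 0, ''),
         ('proxysql3', 6032, 0, '');
  LOAD PROXYSQL SERVERS TO RUNTIME;
  SAVE PROXYSQL SERVERS TO DISK;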
ProxySQL Cluster example: "Query Rule" updates are made on proxysql1; the changes are then propagated from proxysql1 to proxysql2 / proxysql3.
A ProxySQL Cluster can be configured to pull configuration data from a single specific node
• Nodes pulling configuration can start with almost zero configuration: just the controller's IP and a short static configuration (a sketch of the controller-side credentials follows)
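On the controller side, the cluster credentials are themselves ordinary admin variables. A sketch, assuming the variable names from recent ProxySQL releases (admin-cluster_username / admin-cluster_password) and illustrative credentials:

  -- Note: the cluster user must also appear in admin-admin_credentials.
  UPDATE global_variables SET variable_value = 'cluster_user'
  WHERE variable_name = 'admin-cluster_username';
  UPDATE global_variables SET variable_value = 'cluster_pass'
  WHERE variable_name = 'admin-cluster_password';
  LOAD ADMIN VARIABLES TO RUNTIME;
  SAVE ADMIN VARIABLES TO DISK;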
A ProxySQL Cluster can also be
configured to pull configuration data
from a set of nodes
• The same principle applies as in
the previous slide regarding static
configuration
Finally, a ProxySQL Cluster can also be configured to pull configuration data from any node
• This approach requires the list of cluster nodes in the "proxysql_servers" table to be updated whenever the topology changes
The total number of instances configured to share configuration, and the frequency of checks, will affect the impact of clustering
- Consider the number of instances that should actually be used to "control" configuration; not all nodes need to be "control" nodes
- It is generally sufficient to have one control node per region / availability zone / data center
Considering the tools discussed in this section, there are two parts to configuring ProxySQL:
• ProxySQL configuration
– Configuration variables which rarely change (e.g. global_variables, interfaces, etc.)
• ProxySQL core logic
– MySQL servers, query rules, users and proxysql_servers
Depending on the size and complexity of your ProxySQL server(s), it may be necessary to employ a combination of tools, e.g.:
• Use a configuration management tool for the ProxySQL configuration of new deployments
– This could be a tool such as Ansible, Puppet or Chef
– A pre-built Docker image used within docker-compose or Kubernetes
• Use ProxySQL Cluster for maintaining and propagating the core logic
– This means leaving the configuration of MySQL servers, query rules and users outside of your configuration management tools
– The proxysql_servers data should remain pre-configured (with at least one server)
