The document summarizes a presentation on optimizing Linux, Windows, and Firebird for heavy workloads. It describes two customer implementations using Firebird - a medical company with 17 departments and over 700 daily users, and a repair services company with over 500 daily users. It discusses tuning the operating system, hardware, CPU, RAM, I/O, network, and Firebird configuration to improve performance under heavy loads. Specific recommendations are provided for Linux and Windows configuration.
"Contos da Cantuária" e "Conto do Médico" - Exercícios ("The Canterbury Tales...Thaynã Guedes
Atividades sobre o conto "Conto do Médico" (The Physician's Tale) e sobre a coleção de histórias medievais "Contos da Cantuária" (The Canterbury Tales).
At Percona Live in April 2016, Red Hat's Kyle Bader reviewed the general architecture of Ceph and then discussed the results of a series of benchmarks done on small to mid-size Ceph clusters, which led to the development of prescriptive guidance around tuning Ceph storage nodes (OSDs).
9 DevOps Tips for Going in Production with Galera Cluster for MySQL - SlidesSeveralnines
Galera is a MySQL replication technology that can simplify the design of a high availability application stack. With a true multi-master MySQL setup, an application can now read and write from any database instance without worrying about master/slave roles, data integrity, slave lag or other drawbacks of asynchronous replication.
And that all sounds great until it’s time to go into production. Throw in a live migration from an existing database setup and devops life just got a bit more interesting ...
So if you are in devops, then this webinar is for you!
Operations is not so much about specific technologies, but about the techniques and tools you use to deploy and manage them. Monitoring, managing schema changes and pushing them in production, performance optimizations, configurations, version upgrades, backups; these are all aspects to consider – preferably before going live.
Let us guide you through 9 key tips to consider before taking Galera Cluster into production.
I. O documento descreve o estatuto da Associação dos Funcionários da Fundação Getulio Vargas (AF-FGV), definindo sua finalidade, estrutura de governança e diretrizes.
II. A AF-FGV é dirigida por uma Assembleia Geral, Diretoria e Conselho Deliberativo, que se reúnem para deliberar sobre assuntos de interesse dos sócios.
III. A Diretoria, composta por 5 membros efetivos e 5 suplentes, é o órgão executivo responsável pela administração da associação.
Building a document e-signing workflow with Azure Durable FunctionsJoonas Westlin
Durable functions offer an interesting programming model for building workflows. Whether you need to sometimes split and do multiple things or wait for user input, a lot of things are possible. They do present some challenges as well, and the limitations of orchestrator functions can make working with Durable seem very complicated.
In this talk we will go through the basics of Durable Functions along with strategies for deploying and monitoring them. A sample application will be presented where users can send documents for electronic signature. A Durable Functions workflow will power the signing process.
Top 8 Types of Toy Guns for Kids to PlayTactical Edge
Toy guns are toys which imitate real guns but are designed for children to play with. Many newer toy guns are brightly colored and oddly shaped to prevent them from being mistaken for real firearms. Find out all about the top 8 types of toy guns for the kid to play.
A empresa está enfrentando desafios financeiros devido à queda nas vendas e precisa cortar custos. Um plano de reestruturação é proposto para demitir funcionários e fechar algumas lojas menos rentáveis para reduzir gastos e voltar ao lucro.
Luigi Brochard from Lenovo presented this deck at the Switzerland HPC Conference.
"Lenovo has developed an open source HPC software stack for system management with GUI support. This enables customers to more efficiently manage their clusters by making it simple and easy for both the system administrator and end users.This talk will present this initiative, show a demo and present future evolutions."
Watch the video presentation:
https://www.youtube.com/watch?v=xqwLul_hA28
See more talks in the Swiss Conference Video Gallery: http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Annie Leibovitz nasceu em 1949 nos Estados Unidos e encontrou sua paixão pela fotografia após estudar artes. Ela se tornou chefe de fotografia da revista Rolling Stone e fotografou muitas celebridades. Uma de suas fotos mais famosas foi da capa da Rolling Stone com John Lennon nu, horas antes de ele ser assassinado.
Este documento apresenta um livro eletrônico gratuito sobre Kanban e Scrum. O livro compara as duas abordagens, mostrando suas similaridades e diferenças, e fornece um estudo de caso sobre como elas foram aplicadas em conjunto em uma organização.
Updating Embedded Linux devices in the field requires robust, atomic, and fail-safe software update mechanisms to fix bugs remotely without rendering devices unusable. A commonly used open source updater is SWUpdate, a Linux application that can safely install updates downloaded over the network or from local media using techniques like separate recovery systems and ping-ponging between OS images. It aims to provide atomic system image updates with rollback capabilities and audit logs to ensure devices remain functional after updates.
1. Laporan ini membahas sistem pendingin pada mesin, termasuk definisi, komponen, cara kerja, dan perawatan. 2. Komponen utama sistem pendingin adalah radiator, pompa air, termostat, dan kipas pendingin. 3. Sistem pendingin bekerja dengan mengalirkan cairan pendingin ke seluruh bagian mesin untuk menyerap panas dan mendinginkannya melalui radiator.
The document is a user manual for the RS485-LN RS485 to LoRaWAN Converter. It describes the converter's specifications, features, applications, and provides instructions on setup and operation. Key details include connecting RS485 devices to the converter, configuring commands to read data from RS485 devices, and joining the converter to a LoRaWAN network like The Things Network using OTAA. The manual also covers troubleshooting and includes an order and support section.
MySQL Performance Tuning London Meetup June 2017Ivan Zoratti
The document discusses various techniques for tuning MySQL performance. It begins with an introduction and agenda, then covers top performance issues such as bad SQL queries, long running transactions, and incorrect configurations. The rest of the document provides tips for monitoring different aspects of the system and tuning various configuration options, software, and application design factors to optimize MySQL performance.
The document provides guidance on deploying MongoDB in production environments. It discusses sizing hardware requirements for memory, CPU, and disk I/O. It also covers installing and upgrading MongoDB, considerations for cloud platforms like EC2, security, backups, durability, scaling out, and monitoring. The focus is on performance optimization and ensuring data integrity and high availability.
Maximizing performance via tuning and optimizationMariaDB plc
Maximizing Performance via Tuning and Optimization outlines best practices for optimizing MariaDB server performance. It discusses:
- Defining service level agreements and metrics to monitor against them
- When to tune based on schema, query, or system changes
- Ensuring server, storage, network and OS settings support database needs
- Configuring connection pooling and threads to manage load
- Common MariaDB configuration settings that impact performance
- Query tuning techniques like indexing, monitoring tools, and database design
Maximizing performance via tuning and optimizationMariaDB plc
Maximizing performance via tuning and optimization involves:
- Defining service level agreements and translating them to database transactions.
- Capturing metrics on business, application, and database transactions to identify bottlenecks.
- Tuning from the start and periodically reviewing production systems for changes.
- Optimizing server, storage, network and OS settings as well as MariaDB configuration settings like buffer pool size, query cache size, and connection settings.
- Analyzing slow queries, indexing appropriately, and monitoring tools like Performance Schema.
- Designing databases and choosing optimal data types.
The environment in which your EECMS lives is as important as what can be seen by your clients in their browser. A solid foundation is key to the overall performance, scalability and security of your site. Building on over a decade of server optimization experience, extensive benchmarking and some custom ExpressionEngine extensions this session will show you how to make sure your ExpressionEngine install is ready for prime time.
Taking Splunk to the Next Level - Architecture Breakout SessionSplunk
This document provides an overview and agenda for taking a Splunk deployment to the next level by addressing scaling needs and high availability requirements. It discusses growing use cases and data volumes, making Splunk mission critical through clustering, and supporting global deployments. The agenda covers scaling strategies like indexer clustering, search head clustering, and hybrid cloud deployments. It also promotes justifying increased spending by mapping dependencies and costs of failures across an organization's systems.
- The document provides guidance on deploying MongoDB including sizing hardware, installing and upgrading MongoDB, configuration considerations for EC2, security, backups, durability, scaling out, and monitoring. Key aspects discussed are profiling and indexing queries for performance, allocating sufficient memory, CPU and disk I/O, using 64-bit OSes, ext4/XFS filesystems, upgrading to even version numbers, and replicating for high availability and backups.
Storage and performance- Batch processing, WhiptailInternet World
Batch processing allows jobs to run without manual intervention by shifting processing to less busy times. It avoids idling computing resources and allows higher overall utilization. Batch processing provides benefits like prioritizing batch and interactive work. The document then discusses different approaches to batch processing like dedicating all resources to it or sharing resources. It outlines challenges like systems being unavailable during batch processing. The rest of the document summarizes Whiptail's flash storage solutions for accelerating workloads and reducing costs and resources compared to HDDs.
Top 8 Types of Toy Guns for Kids to PlayTactical Edge
Toy guns are toys which imitate real guns but are designed for children to play with. Many newer toy guns are brightly colored and oddly shaped to prevent them from being mistaken for real firearms. Find out all about the top 8 types of toy guns for the kid to play.
A empresa está enfrentando desafios financeiros devido à queda nas vendas e precisa cortar custos. Um plano de reestruturação é proposto para demitir funcionários e fechar algumas lojas menos rentáveis para reduzir gastos e voltar ao lucro.
Luigi Brochard from Lenovo presented this deck at the Switzerland HPC Conference.
"Lenovo has developed an open source HPC software stack for system management with GUI support. This enables customers to more efficiently manage their clusters by making it simple and easy for both the system administrator and end users.This talk will present this initiative, show a demo and present future evolutions."
Watch the video presentation:
https://www.youtube.com/watch?v=xqwLul_hA28
See more talks in the Swiss Conference Video Gallery: http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Annie Leibovitz nasceu em 1949 nos Estados Unidos e encontrou sua paixão pela fotografia após estudar artes. Ela se tornou chefe de fotografia da revista Rolling Stone e fotografou muitas celebridades. Uma de suas fotos mais famosas foi da capa da Rolling Stone com John Lennon nu, horas antes de ele ser assassinado.
Este documento apresenta um livro eletrônico gratuito sobre Kanban e Scrum. O livro compara as duas abordagens, mostrando suas similaridades e diferenças, e fornece um estudo de caso sobre como elas foram aplicadas em conjunto em uma organização.
Updating Embedded Linux devices in the field requires robust, atomic, and fail-safe software update mechanisms to fix bugs remotely without rendering devices unusable. A commonly used open source updater is SWUpdate, a Linux application that can safely install updates downloaded over the network or from local media using techniques like separate recovery systems and ping-ponging between OS images. It aims to provide atomic system image updates with rollback capabilities and audit logs to ensure devices remain functional after updates.
1. Laporan ini membahas sistem pendingin pada mesin, termasuk definisi, komponen, cara kerja, dan perawatan. 2. Komponen utama sistem pendingin adalah radiator, pompa air, termostat, dan kipas pendingin. 3. Sistem pendingin bekerja dengan mengalirkan cairan pendingin ke seluruh bagian mesin untuk menyerap panas dan mendinginkannya melalui radiator.
The document is a user manual for the RS485-LN RS485 to LoRaWAN Converter. It describes the converter's specifications, features, applications, and provides instructions on setup and operation. Key details include connecting RS485 devices to the converter, configuring commands to read data from RS485 devices, and joining the converter to a LoRaWAN network like The Things Network using OTAA. The manual also covers troubleshooting and includes an order and support section.
MySQL Performance Tuning London Meetup June 2017Ivan Zoratti
The document discusses various techniques for tuning MySQL performance. It begins with an introduction and agenda, then covers top performance issues such as bad SQL queries, long running transactions, and incorrect configurations. The rest of the document provides tips for monitoring different aspects of the system and tuning various configuration options, software, and application design factors to optimize MySQL performance.
The document provides guidance on deploying MongoDB in production environments. It discusses sizing hardware requirements for memory, CPU, and disk I/O. It also covers installing and upgrading MongoDB, considerations for cloud platforms like EC2, security, backups, durability, scaling out, and monitoring. The focus is on performance optimization and ensuring data integrity and high availability.
Maximizing performance via tuning and optimizationMariaDB plc
Maximizing Performance via Tuning and Optimization outlines best practices for optimizing MariaDB server performance. It discusses:
- Defining service level agreements and metrics to monitor against them
- When to tune based on schema, query, or system changes
- Ensuring server, storage, network and OS settings support database needs
- Configuring connection pooling and threads to manage load
- Common MariaDB configuration settings that impact performance
- Query tuning techniques like indexing, monitoring tools, and database design
Maximizing performance via tuning and optimizationMariaDB plc
Maximizing performance via tuning and optimization involves:
- Defining service level agreements and translating them to database transactions.
- Capturing metrics on business, application, and database transactions to identify bottlenecks.
- Tuning from the start and periodically reviewing production systems for changes.
- Optimizing server, storage, network and OS settings as well as MariaDB configuration settings like buffer pool size, query cache size, and connection settings.
- Analyzing slow queries, indexing appropriately, and monitoring tools like Performance Schema.
- Designing databases and choosing optimal data types.
The environment in which your EECMS lives is as important as what can be seen by your clients in their browser. A solid foundation is key to the overall performance, scalability and security of your site. Building on over a decade of server optimization experience, extensive benchmarking and some custom ExpressionEngine extensions this session will show you how to make sure your ExpressionEngine install is ready for prime time.
Taking Splunk to the Next Level - Architecture Breakout SessionSplunk
This document provides an overview and agenda for taking a Splunk deployment to the next level by addressing scaling needs and high availability requirements. It discusses growing use cases and data volumes, making Splunk mission critical through clustering, and supporting global deployments. The agenda covers scaling strategies like indexer clustering, search head clustering, and hybrid cloud deployments. It also promotes justifying increased spending by mapping dependencies and costs of failures across an organization's systems.
- The document provides guidance on deploying MongoDB including sizing hardware, installing and upgrading MongoDB, configuration considerations for EC2, security, backups, durability, scaling out, and monitoring. Key aspects discussed are profiling and indexing queries for performance, allocating sufficient memory, CPU and disk I/O, using 64-bit OSes, ext4/XFS filesystems, upgrading to even version numbers, and replicating for high availability and backups.
Storage and performance- Batch processing, WhiptailInternet World
Batch processing allows jobs to run without manual intervention by shifting processing to less busy times. It avoids idling computing resources and allows higher overall utilization. Batch processing provides benefits like prioritizing batch and interactive work. The document then discusses different approaches to batch processing like dedicating all resources to it or sharing resources. It outlines challenges like systems being unavailable during batch processing. The rest of the document summarizes Whiptail's flash storage solutions for accelerating workloads and reducing costs and resources compared to HDDs.
Deploying any software can be a challenge if you don't understand how resources are used or how to plan for the capacity of your systems. Whether you need to deploy or grow a single MongoDB instance, replica set, or tens of sharded clusters then you probably share the same challenges in trying to size that deployment.
This webinar will cover what resources MongoDB uses, and how to plan for their use in your deployment. Topics covered will include understanding how to model and plan capacity needs for new and growing deployments. The goal of this webinar will be to provide you with the tools needed to be successful in managing your MongoDB capacity planning tasks.
Windows Server 2012 R2 Software-Defined StorageAidan Finn
In this presentation I taught attendees how to build a Scale-Out File Server (SOFS) using Windows Server 2012 R2, JBODs, Storage Spaces, Failover Clustering, and SMB 3.0 Networking, suitable for storing application data such as Hyper-V and SQL Server.
On X86 systems, using an Unbreakable Enterprise Kernel (UEK) is recommended over other enterprise distributions as it provides better hardware support, security patches, and testing from the larger Linux community. Key configuration recommendations include enabling maximum CPU performance in BIOS, using memory types validated by Oracle, ensuring proper NUMA and CPU frequency settings, and installing only Oracle-validated packages to avoid issues. Monitoring tools like top, iostat, sar and ksar help identify any CPU, memory, disk or I/O bottlenecks.
MongoDB and Amazon Web Services: Storage Options for MongoDB DeploymentsMongoDB
When using MongoDB and AWS, you want to design your infrastructure to avoid storage bottlenecks and make the best use of your available storage resources. AWS offers a myriad of storage options, including ephemeral disks, EBS, Provisioned IOPS, and ephemeral SSD's, each offering different performance and persistence characteristics. In this session, we’ll evaluate each of these options in the context of your MongoDB deployment, assessing the benefits and drawbacks of each.
Red Hat Ceph Storage Acceleration Utilizing Flash Technology Red_Hat_Storage
Red Hat Ceph Storage can utilize flash technology to accelerate applications in three ways: 1) use all flash storage for highest performance, 2) use a hybrid configuration with performance critical data on flash tier and colder data on HDD tier, or 3) utilize host caching of critical data on flash. Benchmark results showed that using NVMe SSDs in Ceph provided much higher performance than SATA SSDs, with speed increases of up to 8x for some workloads. However, testing also showed that Ceph may not be well-suited for OLTP MySQL workloads due to small random reads/writes, as local SSD storage outperformed the Ceph cluster. Proper Linux tuning is also needed to maximize SSD performance within
Modeling, estimating, and predicting Ceph (Linux Foundation - Vault 2015)Lars Marowsky-Brée
This document discusses modeling and predicting performance for Ceph storage clusters. It describes many of the hardware, software, and configuration factors that impact Ceph performance, including network setup, storage nodes, disks, redundancy, placement groups and more. The document advocates for developing standardized benchmarks to better understand Ceph performance under different workloads and cluster configurations in order to answer customers' questions.
MongoDB stores data in files on disk that are broken into variable-sized extents containing documents. These extents, as well as separate index structures, are memory mapped by the operating system for efficient read/write. A write-ahead journal is used to provide durability and prevent data corruption after crashes by logging operations before writing to the data files. The journal increases write performance by 5-30% but can be optimized using a separate drive. Data fragmentation over time can be addressed using the compact command or adjusting the schema.
Cloud computing UNIT 2.1 presentation inRahulBhole12
Cloud storage allows users to store files online through cloud storage providers like Apple iCloud, Dropbox, Google Drive, Amazon Cloud Drive, and Microsoft SkyDrive. These providers offer various amounts of free storage and options to purchase additional storage. They allow files to be securely uploaded, accessed, and synced across devices. The best cloud storage provider depends on individual needs and preferences regarding storage space requirements and features offered.
The document discusses best practices for deploying MongoDB including sizing hardware with sufficient memory, CPU and I/O; using an appropriate operating system and filesystem; installing and upgrading MongoDB; ensuring durability with replication and backups; implementing security, monitoring performance with tools, and considerations for deploying on Amazon EC2.
This document discusses various techniques for optimizing Drupal performance, including:
- Defining goals such as faster page loads or handling more traffic
- Applying patches and rearchitecting content to optimize at a code level
- Using tools like Apache Benchmark and MySQL tuning to analyze performance bottlenecks
- Implementing solutions like caching, memcached, and reverse proxies to improve scalability
This document discusses various options for deploying solid state drives (SSDs) in the data center to address storage performance issues. It describes all-flash arrays that use only SSDs, hybrid arrays that combine SSDs and hard disk drives, and server-side flash caching. Key points covered include the performance benefits of SSDs over HDDs, different types of SSDs, form factors, deployment architectures like all-flash arrays from vendors, hybrid arrays, server-side caching software, virtual storage appliances, and hyperconverged infrastructure systems. Choosing the best solution depends on factors like performance needs, capacity, data services required, and budget.
This document discusses using solid state drives (SSDs) for server-side flash caching to improve performance. It covers SSD form factors for servers, the components of an SSD, deployment models for server-side flash including direct storage and pooled/replicated storage, use cases for server flash caching like databases and virtualization, and considerations for write-through versus write-back caching and live migration support. It also lists several vendors that provide server-side flash caching software.
The FDB library is not only a Firebird driver for Python, but also provides a number of add-on modules to build tools and applications, and as such is an important part of the Firebird project. In this session participants will learn about: - the history of the library - its use in the Firebird project and within IBPhoenix - with the concept, structure and capabilities of the library (in version 2) - ways to use the library to create tools and applications, including useful tips and tricks - plans for further development (for 2020 and in connection with the Firebird Butler a Saturnin projects)
Paper presented by Arno Brinkman https://www.abvisie.nl/firebird/
Global topics
- Introduction
- At which point does the optimizer his work
- Optimizer steps
- Index
1) The Firebird Tour 2017 is organized by the Firebird Project, IBSurgeon, and IBPhoenix to promote Firebird performance. Locations include Prague, Bad Sassendorf, and Moscow.
2) The Advanced Trace API allows configuration of trace sessions to monitor database activity and troubleshoot performance issues. Trace output can be configured for different databases and services.
3) The Trace Manager handles trace sessions and passes trace events to plugins, minimizing performance impact. Trace configuration is stored centrally and shared between connections.
This document summarizes details about the Firebird Tour 2017, which is organized by the Firebird Project, IBSurgeon, and IBPhoenix to focus on Firebird performance optimization. The tour will take place in Prague, Bad Sassendorf, and Moscow in October and November 2017. Moscow Exchange is the platinum sponsor. The agenda will cover new statistics elements for tables and indexes in Firebird 3.0 and enhancements to the optimizer in future versions.
This document summarizes the findings of performance tests conducted on Firebird 2.5, 3.0.0, and 3.0.2. It shows that Firebird 3.0 improved performance over 2.5 through changes like a more efficient hash join algorithm and reduced pointer page fetches. Tests on read-only queries found hash and merge joins outperformed loop joins by an order of magnitude, and Firebird 3 was up to 30% faster than 2.5 on SSD. Multi-threaded tests showed Firebird 3 scaling well to high thread counts under different server modes.
This document discusses using the Django web framework with the Firebird database backend. It provides links to resources for using Django and Firebird together, including the code on GitHub and mailing list. It compares content management systems like Drupal, WordPress, and Django. It also outlines the history of Firebird support in Django, which is now stable enough for production use according to the document.
The document discusses using the Django web framework with the Firebird database. It notes several reasons for using Django over other content management systems, including laziness since Python requires less code, a dislike of other CMS options, and finding Django's MTV architecture more elegant. It also provides an example of a Django book project using Firebird and notes that syncing Firebird databases with Django is now automated rather than requiring manual patching.
To conserve resources and optimize investment, a business must determine which potential opportunities are most likely to result in conversions and evolve into successful deals and determine which opportunities are at risk. This Hot Lead predictive analytics use case describes the value of predictive analytics to prioritize high-value leads and capitalize on an opportunity to convert a lead into a relationship by identifying key patterns that contribute to successful deal closures. Use these tools to identify the leads that are most likely to result in conversion and provide the most benefit to the enterprise. This technique can be used in many industries, including Financial Services, B2C and B2B. For more info https://www.smarten.com/augmented-analytics-learn-explore/use-cases.html
The truth behind the numbers: spotting statistical misuse.pptxandyprosser3
As a producer of official statistics, being able to define what misinformation means in relation to data and statistics is so important to us.
For our sixth webinar, we explored how we handle statistical misuse especially in the media. We were also joined by speakers from the Office for Statistics Regulation (OSR) to explain how they play an important role in investigating and challenging the misuse of statistics across government.
Valkey 101 - SCaLE 22x March 2025 Stokes.pdfDave Stokes
An Introduction to Valkey, Presented March 2025 at the Southern California Linux Expo, Pasadena CA. Valkey is a replacement for Redis and is a very fast in memory database, used to caches and other low latency applications. Valkey is open-source software and very fast.
Deep-QPP: A Pairwise Interaction-based Deep Learning Model for Supervised Que...suchanadatta3
Motivated by the recent success of end-to-end deep neural models
for ranking tasks, we present here a supervised end-to-end neural
approach for query performance prediction (QPP). In contrast to
unsupervised approaches that rely on various statistics of document
score distributions, our approach is entirely data-driven. Further,
in contrast to weakly supervised approaches, our method also does
not rely on the outputs from different QPP estimators. In particular, our model leverages information from the semantic interactions between the terms of a query and those in the top-documents retrieved with it. The architecture of the model comprises multiple layers of 2D convolution filters followed by a feed-forward layer of parameters. Experiments on standard test collections demonstrate
that our proposed supervised approach outperforms other state-of-the-art supervised and unsupervised approaches.
Design Data Model Objects for Analytics, Activation, and AIaaronmwinters
Explore using industry-specific data standards to design data model objects in Data Cloud that can consolidate fragmented and multi-format data sources into a single view of the customer.
Design of the data model objects is a critical first step in setting up Data Cloud and will impact aspects of the implementation, including the data harmonization and mappings, as well as downstream automations and AI processing. This session will provide concrete examples of data standards in the education space and how to design a Data Cloud data model that will hold up over the long-term as new source systems and activation targets are added to the landscape. This will help architects and business analysts accelerate adoption of Data Cloud.
Tuning Linux Windows and Firebird for Heavy Workload
1. TUNING LINUX, WINDOWS AND
FIREBIRD FOR HEAVY WORKLOAD
Alex Kovyazin,
IBSurgeon
Firebird Tour 2017: Performance Optimization
Prague, Bad Sassendorf, Moscow
2. Firebird Tour 2017: Performance Optimization
• Firebird Tour 2017 is organized by the Firebird Project,
IBSurgeon and IBPhoenix, and is devoted to Firebird
performance
• The Platinum sponsor is Moscow Exchange
• Tour's locations and dates:
• October 3, 2017 – Prague, Czech Republic
• October 5, 2017 – Bad Sassendorf, Germany
• November 3, 2017 – Moscow, Russia
3. • Platinum Sponsor
• Sponsor of
• «Firebird 2.5 SQL Language Reference»
• «Firebird 3.0 SQL Language Reference»
• «Firebird 3.0 Developer Guide»
• «Firebird 3.0 Operations Guide»
• Sponsor of Firebird 2017 Tour seminars
• www.moex.com
4. • Replication, Recovery and
Optimization for Firebird
and InterBase since 2002
• Platinum Sponsor of
Firebird Foundation
• Based in Moscow, Russia
www.ib-aid.com
5. Agenda
• Real customers with big databases
• Hardware they use
• OS tuning
• CPU
• RAM
• IO
• Network
• Firebird configuration
6. Customer 1: http://klinikabudzdorov.ru
• BudZdorov
• Medical centers and
hospitals in Moscow,
Saint-Petersburg and
major cities in Russia
• 17 departments
• 365 days per year, from 8:00 to 21:00
7. ERP with Firebird in BudZdorov
• Central server with the Central Database, plus a standby server holding a replica of the Central Database
• 17 department servers, each with its Department’s DB and its own replica server
8. BudZdorov: Central database
• Size = 453 Gb
• Daily users = from 700 to 1800 (peak)
• Hardware server
• OS = Linux CentOS 6.7
• Firebird 2.5 Classic + HQbird
• Client-server, connected through optic with departments
• With async replica on the separate server
11. Customer 2: Customer revoked permission to publish
information
• Customer #2
• Repair services for
xxxxx across Russia
• 365 days per year,
24x7, with 1 hour
maintenance every
day
12. Customer #2: Central Database
• Size = 250Gb
• Daily users from 500 to 1000 (peak)
• Hardware server
• Windows 2012R2
• Firebird 3
• Middleware (web)
13. Performance problems – as usual
• Long running active transactions
• Garbage collection is blocked for hours and even days
• Badly written SQLs in applications
• Peaks of load
• Medical centers: people are mostly sick during the winter
• Railroad: has its own load peaks
• Anti-failure approach
• Replica with 1 minute delay
14. Tuning goals
1. Tune for throughput first, then, if possible, for response time
• During the day users are OK with performance
• Problems occur only during periods of high load
2. Tune the OS to get appropriate results from the powerful hardware
15. General requirements for high load server
1. Not a Primary/Backup Domain Controller or a Small Business Server (Windows)
2. No Exchange (store.exe and MSSQL inside) or Sharepoint (MSSQL
inside) or dedicated MSSQL
• Each MSSQL should be restricted in memory usage
3. Not a File Server/Print Server/Terminal Server/Web server
4. If it is virtual machine, it should be really fast
5. If your middleware runs on the same server – does it benefit from that (i.e., from the local protocol)?
1. If not, move it to another server
2. If yes, make sure to allocate resources for it
Dedicated server means dedicated!
20. CPU
• How to improve CPU utilization?
• How can we improve distribution of load between cores?
21. CPU at Linux
• irqbalance
• yum install -y irqbalance && chkconfig irqbalance on &&
service irqbalance start
• Result: better CPU load distribution, increased throughput
22. CPU at Windows
• Windows: only CPU_AFFINITY in Firebird configuration
• Result: some cores can be excluded from Firebird usage (reserved for middleware/other services), fewer conflicts, slightly better throughput
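In firebird.conf the affinity is set with the CpuAffinityMask parameter, a bitmask of allowed cores; a minimal sketch (the value is illustrative, not a recommendation):

```
# firebird.conf – example only: bind Firebird to cores 0–3
# (bitmask 1111b = 15), leaving the remaining cores free
# for middleware and other services
CpuAffinityMask = 15
```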
23. RAM Tuning
• How to effectively use available RAM?
• How to avoid swapping?
• Firebird settings:
• DefaultDBCachePages – page buffers cache
• FileSystemCacheThreshold – threshold to use/not use the file cache
• TempCacheLimit – memory space for sorting
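A firebird.conf sketch with the three settings together (the values are hypothetical and must be sized to the actual server):

```
# firebird.conf – hypothetical sizing, for illustration only
DefaultDBCachePages = 262144       # page buffers, in database pages
FileSystemCacheThreshold = 524288  # OS file cache is used while page buffers stay below this
TempCacheLimit = 2147483648        # 2 GB of in-memory sort space
```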
25. RAM in case of Big Databases and Big Caches
• (Diagram) Page Buffers in paged memory, the File Cache holding the database file, and the Kernel all compete for RAM
26. OS Memory Manager vs Firebird
• If Page Buffers is more than Paged Memory, OS Memory
Manager tries to send it to swap
• Race for resources between Paged Memory and File
Cache leads to swapping
27. Tuning RAM on Linux
• On Linux RedHat/CentOS file cache is not limited by
default
vm.pagecache = 100 #default
• For Classic – it is more or less fine, since it uses file cache
heavily
• For SuperServer it is not great, since SS 3.0 can use many
page buffers
Recommendation is to limit file cache to 40-50%:
vm.pagecache = 50
28. Tuning RAM on Linux
• We know that database should be kept in RAM: need to reduce
swapping!
• vm.swappiness = 10
• vm.dirty_ratio = 60
• vm.dirty_background_ratio = 2
• vm.min_free_kbytes = 1048576
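These vm.* values can be persisted in a sysctl drop-in file (the file name below is an arbitrary choice) and applied with `sysctl --system`:

```
# /etc/sysctl.d/90-firebird.conf
vm.swappiness = 10
vm.dirty_ratio = 60
vm.dirty_background_ratio = 2
vm.min_free_kbytes = 1048576
```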
29. Tuning RAM on Windows
• Windows Memory Manager has the following default scenario of using RAM:
• 50% paged memory
• 41% file cache
• 9% kernel
• Memory distribution can be changed in registry/role settings
• Tip: use the RAMMap tool to see memory allocation
30. Recommendations for RAM on Windows
• Page Buffers must be < Paged Memory (50% of RAM by
default)
• %% can be changed on Windows level
• File Cache should be On
• For Classic and SuperClassic without exceptions
• For SuperServer when the database size is more than 2x RAM
• File Cache should be enough to keep frequently requested
parts of database
• Firebird has the file cache enabled by default: the condition is DefaultDBCachePages < FileSystemCacheThreshold
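The "Page Buffers < Paged Memory" rule can be sanity-checked with simple shell arithmetic; the sketch below uses assumed values (16 KB page size, 262144 cache pages, 64 GB of RAM), not measurements from either customer:

```shell
#!/bin/sh
# Check that the Firebird page cache fits within the default
# paged-memory share (~50% of RAM); all values are assumptions.
PAGE_SIZE=16384                         # database page size in bytes
CACHE_PAGES=262144                      # DefaultDBCachePages
RAM_BYTES=$((64 * 1024 * 1024 * 1024))  # 64 GB of RAM

CACHE_BYTES=$((PAGE_SIZE * CACHE_PAGES))
PAGED_LIMIT=$((RAM_BYTES / 2))          # 50% of RAM by default

echo "page cache: $((CACHE_BYTES / 1048576)) MB, paged-memory limit: $((PAGED_LIMIT / 1048576)) MB"
if [ "$CACHE_BYTES" -lt "$PAGED_LIMIT" ]; then
    echo "OK: page buffers fit within paged memory"
else
    echo "WARNING: reduce DefaultDBCachePages"
fi
```

With these values the page cache is 4096 MB against a 32768 MB limit, so the check passes.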
31. When can we disable File Cache?
• File Cache can be disabled for SuperServer for:
• Read Only databases
• Databases which fit into Page Buffers with a very low % of writes
• Databases on SSD with a small % of writes
• Test it!
32. Paging file tuning
• In case of balanced settings for Page Buffers and enabled
File Cache, and in case of RAM > 32Gb, page file can be
limited to 16Gb.
• The page file will work fast on an SSD – but not on the SSD that holds the database!
• Monitor life span of SSD!
33. Linux: general recommendations
• CentOS example: kernel 2.6.32-642.13.1.el6.x86_64 (gcc 4.4.7, built Jan 11 2017) – not so good; better choose a newer OS version
• Use fresh and popular Linux distributions: Ubuntu 16+ Server and
CentOS 7+
• Use server version of Linux distributions – it has already tuned
limits for number of open files
34. Linux: file and process limits
# Increase max user processes
ulimit -u 1291632
# Increase size of the file handle and inode cache
fs.file-max = 2097152
35. Process forking is set to unlimited
• [root@mskv-cbd-new limits.d]# cat
/etc/security/limits.d/90-nproc.conf
• * soft nproc unlimited
• root soft nproc unlimited
• [root@mskv-cbd-new security]# sed -e 's/^[ \t]*//' /etc/security/limits.conf | grep "^[^#;]" | sort
• firebird - nofile 32768
• * soft core unlimited
36. /etc/xinetd.conf – the most important
# cps = 25 30 ==> by default xinetd allows no more than
# 25 connections per second to any given service; if this
# limit is reached, the service is retired for 30 seconds
cps = 1500 10
# Sets the maximum number of requests xinetd can handle at once
instances = UNLIMITED
# per_source defines the maximum number of instances
# for a service per source IP address
per_source = UNLIMITED
38. IO on Linux: File System and Barriers
• Ext4
• Since we have RAID and enterprise SSDs with power-loss protection (and high-quality hardware):
• barrier=0 (write barriers disabled)
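An /etc/fstab sketch with barriers disabled (the device and mount point are placeholders; do this only with battery-backed RAID or power-loss-protected SSDs, as stated above):

```
# /etc/fstab – example only
/dev/sdb1   /db   ext4   defaults,barrier=0   0 2
```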
39. Disk IO on BudZdorov
• SSDs deliver high speed: 242 MB/sec
40. IO on Windows
• Enable disk cache (it does not work on Primary Disk
Controller)
41. Temp space on RAM/SSD?
• TempCacheLimit – the default is very low, increase it!
• Temp files are created in %TEMP%, /tmp, or in TempDirectories
• A big TempCacheLimit allows Firebird to avoid temp files
• However, we still need big TempDirectories to create/restore indices
42. Network
# Increase number of incoming connections
net.core.somaxconn = 4096
# Increase number of incoming connections backlog
net.core.netdev_max_backlog = 65536
# Increase the maximum amount of option memory buffers
net.core.optmem_max = 25165824
# Increase the tcp-time-wait buckets pool to prevent simple DOS attacks
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_tw_recycle = 1 # caution: breaks clients behind NAT; removed in Linux 4.12+
net.ipv4.tcp_tw_reuse = 1
43. Network
# Number of SYN/ACK retries for a passive TCP connection
net.ipv4.tcp_synack_retries = 2
#Allowed local port range
net.ipv4.ip_local_port_range = 2000 65535
#Protect Against TCP Time-Wait
net.ipv4.tcp_rfc1337 = 1
#Decrease the time default value for tcp_fin_timeout connection
net.ipv4.tcp_fin_timeout = 15
#Decrease the time default value for connections to keep alive
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15
46. Network on Windows
• Remove unused network protocols
• Set the correct order of NICs
• Results: well, no big difference
47. Results from network tuning on Linux
• Much better throughput (users do not complain :)
• Significant decrease of Load Average
• Better distribution of load between CPUs
48. Conclusion for Linux configuration
• Use server distribution
• Use fresh version (CentOS 7+, Ubuntu Srv 16+)
• xinetd configuration is critical (due to Classic)
• Tune limits for process files, memory, file cache, and network
49. Conclusion for Windows Tuning
1. Main focus is on RAM tuning
2. CPU tuning is through CPU Affinity restrictions
3. Don’t forget to disable useless services/applications
4. In general Windows has far fewer parameters to tune, and they are poorly documented
50. Misc Windows Tuning tips
• Enable High Performance Power Plan
• Set processor scheduling to favor background services
• Disable useless services
• Prefetch/Superfetch on/off – no difference
• Desktop Heap for Classic for non Local System account
52. Firebird at BudZdorov
• Firebird Classic 2.5
• Why not SuperClassic?
• It is slow for more than 800 connections
• No plans to fix it, since Firebird 3 SuperServer should be used instead
54. TempCacheLimit tips
• Default firebird.conf
• TempBlockSize = 1048576
• May be increased to 2–3 million bytes, but not to 16 MB
• TempCacheLimit = 67108864
• Default for SuperServer and SuperClassic; for Classic it is 8 MB
• TempDirectories = c:\temp;d:\temp…
• Increase TempCacheLimit for SuperServer and SuperClassic!
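Put together, a firebird.conf sketch of the temp-space settings for a SuperServer with plenty of RAM (all values illustrative):

```
# firebird.conf – illustrative temp-space settings
TempBlockSize = 2097152        # 2 MB blocks instead of the 1 MB default
TempCacheLimit = 2147483648    # 2 GB of sort memory (SuperServer/SuperClassic)
TempDirectories = c:\temp;d:\temp
```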
55. Maintenance and backups
• Automatic sweep is disabled
• All connections are disconnected at 00:00
• Manual sweep runs at 00:05
• Verified backup (gbak) – every day at 1am
• Replication works as a standby
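The schedule above could be wired up with cron roughly like this; the Firebird path, database alias, and credentials are placeholders, and a plain gbak is shown (a verified backup additionally test-restores the resulting file):

```
# crontab sketch – paths, alias, and credentials are placeholders
# 00:05 – manual sweep, after connections are reset at midnight
5 0 * * *  /opt/firebird/bin/gfix -sweep localhost:bigdb -user SYSDBA -password *****
# 01:00 – daily gbak backup
0 1 * * *  /opt/firebird/bin/gbak -b -g -v localhost:bigdb /backup/bigdb.fbk -user SYSDBA -password *****
```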
56. Summary for 2.5
• 1500 connections and a 453 GB database is an acceptable load for Firebird 2.5
• Firebird and Linux should be tuned
• Maintenance is the key: sweep, restart of connections,
backups
• Replication is mandatory for protection, since
backup/restore takes 18 hours
58. Summary
• Firebird 3.0.2 gets the biggest benefit from a huge number of page buffers (when properly configured)
• Well-designed (short write) transactions eliminate the need for everyday restarts
59. Useful links
• Collection of optimized Firebird configuration files
https://ib-aid.com/en/optimized-firebird-configuration/
• Firebird Hardware Guide
https://ib-aid.com/en/articles/firebird-hardware-guide/
• 45 Ways To Speed Up Firebird
https://ib-aid.com/en/articles/45-ways-to-speed-up-firebird-database/