Kellyn Pot'Vin presented on performance diagnostics and tuning tools available in Oracle Enterprise Manager 12c. The presentation covered tools like Top Activity, SQL Monitor, ASH Analytics, Real-Time ADDM, and Compare Period ADDM. Top Activity provides a graphical view of current performance and resource usage. SQL Monitor allows monitoring of SQL statements and sessions. ASH Analytics is a more advanced replacement for Top Activity that provides historical performance data. Real-Time ADDM and Compare Period ADDM enable real-time and comparative Automatic Database Diagnostic Monitor reporting and findings within EM12c.
- Oracle Database 12c introduced several new features for DBAs including adaptive execution plans, PGA_AGGREGATE_LIMIT parameter, enhanced statistics gathering options, renaming datafiles online, FETCH FIRST clause for limiting rows, table restoration using RMAN, SQL statements in RMAN, preupgrade and parallel upgrade utilities, and real-time ADDM analysis.
- Adaptive execution plans allow queries to switch plans during execution if row counts differ significantly from estimates. PGA_AGGREGATE_LIMIT provides a hard limit on PGA memory usage to prevent sessions from consuming too much. Enhanced statistics options include system statistics for Exadata, concurrent collection, and new histogram types.
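As a rough illustration of the 12c features mentioned above, the following sketch shows the PGA limit, online datafile rename, and FETCH FIRST row limiting. All values, paths, and table/column names here are invented for the example:

```sql
-- Hard-cap total PGA memory across all sessions (value is illustrative):
ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 4G;

-- Rename/move a datafile online, new in 12c (paths are placeholders):
ALTER DATABASE MOVE DATAFILE '/u01/oradata/users01.dbf'
  TO '/u02/oradata/users01.dbf';

-- Limit rows with the 12c FETCH FIRST clause:
SELECT employee_id, salary
FROM   employees
ORDER  BY salary DESC
FETCH  FIRST 10 ROWS ONLY;
```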
This document provides a summary of 15 labs on data warehousing and mining using Teradata. The labs cover topics like:
- Understanding Teradata and how to start the Teradata server
- Creating databases and users in Teradata Administrator
- Creating tables in a database using BTEQ
- Using Teradata SQL Assistant to execute queries
- Executing different data manipulation queries
- Getting familiar with visual tools, report generation, histograms, connecting databases to applications, loading data using Fastload, schemas, Teradata Warehouse Builder, and Parallel Transporter.
The document provides an overview of performance charts in the vSphere Client and how performance metrics are collected and displayed. It discusses the different types of performance charts, the data counters used to collect metrics, the collection levels that determine how much data is gathered, and the collection intervals that specify how statistics are aggregated over time. It also describes when performance data may be unavailable, such as for disconnected hosts or powered off virtual machines.
Optimizing Alert Monitoring with Oracle Enterprise Manager (Datavail)
Watch this webinar to find out how OEM Grid configuration using Datavail’s Alert Optimizer™ and custom templates helps eliminate unwanted alerts while enriching actionable alerts and improving the performance of the entire database system.
These five areas help organize the tuning approach and define the major concerns beyond architecture, setup, and the data model. It also shows how performance tuning becomes less of a mystery once it can be measured, documented, and improved.
This document provides syntax and usage examples for several Essbase calculation commands:
CALC ALL calculates the entire database; CALC DIM calculates specific dimensions; CALC TWOPASS calculates two-pass members; CALC FIRST/LAST/AVERAGE calculate time balance members; SET UPDATECALC turns intelligent calculation on/off; SET AGGMISSG specifies how missing values are consolidated; SET CALCPARALLEL enables parallel calculation; AGG consolidates values without formulas; and FIX restricts calculations to a subset of members.
The document discusses SQL Server performance monitoring and tuning. It recommends taking a holistic view of the entire system landscape, including hardware, software, systems and networking components. It outlines various tools for performance monitoring, and provides guidance on identifying and addressing common performance issues like high CPU utilization, disk I/O issues and poorly performing queries.
This document discusses Live Query Statistics and the Query Store in Microsoft SQL Server 2016 for troubleshooting query performance issues. Live Query Statistics allows viewing execution plans and metrics of in-flight queries. The Query Store provides a dedicated store for query performance data, capturing plan histories and metrics to help identify regressed queries and other issues. Enabling these tools helps DBAs monitor performance and address issues like slow queries and plans impacted by upgrades or data changes.
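As a minimal sketch of the Query Store workflow described above (the database name is a placeholder; the catalog views are the documented `sys.query_store_*` views):

```sql
-- Enable the Query Store on a database; options shown are illustrative:
ALTER DATABASE SalesDb SET QUERY_STORE = ON;
ALTER DATABASE SalesDb SET QUERY_STORE
    (OPERATION_MODE = READ_WRITE, MAX_STORAGE_SIZE_MB = 100);

-- Find the slowest queries from the captured runtime statistics:
SELECT TOP 10 qt.query_sql_text, rs.avg_duration
FROM sys.query_store_query_text qt
JOIN sys.query_store_query q          ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan p           ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats rs ON p.plan_id = rs.plan_id
ORDER BY rs.avg_duration DESC;
```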
This document summarizes a presentation on database optimization techniques for DBAs. It discusses using reports like AWR, ASH, and ADDM to analyze performance issues. It also covers using explain plans and trace files to diagnose problems. Specific troubleshooting steps are provided for examples involving parallel processing issues, performance degradation after an upgrade, and temporary space usage. The presentation emphasizes using data from tools like these to identify and address real performance problems, rather than superficial "tinsel" optimizations.
This document provides an overview of how to use various Oracle performance monitoring and diagnostic tools like ASH, AWR, and SQL Monitor to analyze and troubleshoot performance issues. It begins with introductions and background on the speaker. It then demonstrates how to generate and interpret reports from these tools using the Oracle Enterprise Manager console and command line. It provides examples of querying ASH data directly and using tools like Compare ADDM and SQL Monitor. The document aims to help users quickly understand performance problems by leveraging these built-in Oracle performance diagnostics.
- A gap analysis report assesses a company's compliance with a given standard by identifying any gaps between the company's current state and the requirements of the standard.
- Conducting a gap analysis via an on-site visit allows an external party to prepare a detailed report on areas needing attention or development and provide recommendations to achieve compliance.
- Unlike an accelerated registration program that ensures all registration requirements are met, a gap analysis provides an outside opinion to identify problems early on before an audit, avoiding expensive mistakes later in the process.
- While not necessary for all, a gap analysis can play an important guiding and assurance role for some companies' registration efforts.
Performance Tuning With Oracle ASH and AWR. Part 1: How And What (udaymoogala)
The document discusses various techniques for identifying and analyzing SQL performance issues in an Oracle database, including gathering diagnostic data from AWR reports, ASH reports, SQL execution plans, and real-time SQL monitoring reports. It provides an overview of how to use these tools to understand what is causing performance problems by identifying what is slow, quantifying the impact, determining the component involved, and analyzing the root cause.
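A quick sketch of querying ASH directly, one of the diagnostic-data sources mentioned above. Each in-memory ASH row represents roughly one second of active session time, so sample counts approximate time spent:

```sql
-- Top wait events over the last 30 minutes of in-memory ASH samples:
SELECT NVL(event, 'ON CPU') AS event,
       COUNT(*)             AS samples
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 30/1440
GROUP  BY NVL(event, 'ON CPU')
ORDER  BY samples DESC;
```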
This document summarizes a case study on using Exadata, Oracle Data Integrator (ODI), and parallel data loading to improve ETL performance. It describes how the environment had performance issues with ODI loads due to temporary tablespace usage. Tuning included setting dynamic sampling to 0, adding rollup tables and materialized views to pre-aggregate data, and indexing to reduce sorting and aggregation during loads. These changes led to significant improvements in elapsed time.
SQL Server 2016 introduces new capabilities to help improve performance, security, and analytics:
- Operational analytics allows analytics queries to run concurrently with OLTP workloads against the same schema, with minimal impact on OLTP performance.
- In-Memory OLTP enhancements include greater Transact-SQL coverage, improved scaling, and tooling improvements.
- The new Query Store feature acts as a "flight data recorder" for databases, enabling quick performance issue identification and resolution.
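The operational-analytics capability above rests on updatable nonclustered columnstore indexes over OLTP tables. A minimal sketch, with invented table and column names:

```sql
-- Add an updatable nonclustered columnstore index to an OLTP table:
CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_Orders
ON dbo.Orders (OrderDate, CustomerId, Amount);

-- Analytics queries can now scan the columnstore while OLTP DML
-- continues against the rowstore on the same schema:
SELECT CustomerId, SUM(Amount) AS total
FROM dbo.Orders
GROUP BY CustomerId;
```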
This document provides an overview of the Automatic Workload Repository (AWR) and Active Session History (ASH) features in Oracle Database 12c. It discusses how AWR and ASH work, how to access and interpret their reports through the Oracle Enterprise Manager console and command line interface. Specific sections cover parsing AWR reports, querying ASH data directly, and using features like the SQL monitor to diagnose performance issues.
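For the command-line path mentioned above, an AWR report can be generated without EM via `@?/rdbms/admin/awrrpt.sql` or the packaged report function. A sketch; the DBID, instance, and snapshot IDs are placeholders:

```sql
-- Produce a text AWR report between two snapshots:
SELECT output
FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(
               l_dbid     => 1234567890,
               l_inst_num => 1,
               l_bid      => 100,   -- begin snapshot id
               l_eid      => 101)); -- end snapshot id
```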
This slide deck presentation provides an overview of managing Microsoft SQL Server for those who are not primarily database administrators. The presentation covers how SQL Server works, backup and restore operations, indexes, database and server configuration options, security models, and high availability and replication options. It also demonstrates various SQL Server management tasks in the SQL Server Management Studio tool. The presentation encourages attendees to reuse the material and provides contact information for the company that created the presentation for additional training opportunities.
Oracle database performance diagnostics - before you begin (Hemant K Chitale)
This is an article that I had written in 2011 for publication on OTN. It never did appear. So I am making it available here. It is not "slides" but is only 7 pages long. I hope you find it useful.
Practical SQL query monitoring and optimization (Ivo Andreev)
Today project owners demand results as soon as possible, most often for yesterday. Time to market is crucial, so it is practical to deliver bit by bit, get feedback, and grow with the number of your customers. But as the project grows, so does the team, and not everyone has the same expertise. Rarely are the requirements clear enough at the beginning to allow performance-wise SQL interaction. In most cases no ORM can solve this task for you, and you will need a strong T-SQL writer on the team. If you already know this story, or are headed this way, then in this practical session we will share how to monitor, measure, and optimize your SQL code and DB-layer interaction.
System Monitor is a Microsoft Windows utility that allows administrators to capture performance counters about hardware, operating systems, and applications. It uses a polling architecture to gather numeric statistics from counters exposed by components at user-defined intervals. The counters are organized in a three-level hierarchy of counter object, counter, and counter instance. System Monitor can be used to capture counter logs for analysis to troubleshoot issues like bottlenecks. It is recommended to select counter objects instead of individual counters to ensure all necessary data is captured.
System Monitor is a Microsoft Windows utility that allows administrators to capture performance counters about hardware, operating systems, and applications. It uses a polling architecture to gather numeric statistics from counters exposed by components at user-defined intervals. The counters are organized in a three-level hierarchy of counter object, counter, and counter instance. System Monitor can be used to analyze hardware bottlenecks by monitoring queue lengths for processors, disks, and networks. It also helps optimize SQL Server performance by capturing events using SQL Server Profiler.
Self-serve analytics journey at Celtra: Snowflake, Spark, and Databricks (Grega Kespret)
Celtra provides a platform for streamlined ad creation and campaign management used by customers including Porsche, Taco Bell, and Fox to create, track, and analyze their digital display advertising. Celtra’s platform processes billions of ad events daily to give analysts fast and easy access to reports and ad hoc analytics. Celtra’s Grega Kešpret leads a technical dive into Celtra’s data-pipeline challenges and explains how it solved them by combining Snowflake’s cloud data warehouse with Spark to get the best of both.
Topics include:
- Why Celtra changed its pipeline, materializing session representations to eliminate the need to rerun its pipeline
- How and why it decided to use Snowflake rather than an alternative data warehouse or a home-grown custom solution
- How Snowflake complemented the existing Spark environment with the ability to store and analyze deeply nested data with full consistency
- How Snowflake + Spark enables production and ad hoc analytics on a single repository of data
Autonomous Transaction Processing (ATP): In Heavy Traffic, Why Drive Stick? (Jim Czuprynski)
Autonomous Transaction Processing (ATP) - the second in the family of Oracle’s Autonomous Databases – offers Oracle DBAs the ability to apply a force multiplier for their OLTP database application workloads. However, it’s important to understand both the benefits and limitations of ATP before migrating any workloads to that environment. I'll offer a quick but deep dive into how best to take advantage of ATP - including how to load data quickly into the underlying database – and some ideas on how ATP will impact the role of Oracle DBA in the immediate future. (Hint: Think automatic transmission instead of stick-shift.)
Getting to Know MySQL Enterprise Monitor (Mark Leith)
MySQL Enterprise Monitor is the monitoring and management solution for DBAs and developers delivered as part of MySQL Enterprise Edition. It provides background monitoring, alerting, trending, and analysis of the MySQL database and the statement traffic that is running within it.
View this session to learn how to install/configure, customize, and use MySQL Enterprise Monitor to suit your environment. Whether you use a single server or have hundreds of instances, MySQL Enterprise Monitor can provide great insights into how your environment is performing.
Scalable scheduling of updates in streaming data warehouses (Finalyear Projects)
This document discusses scheduling updates in streaming data warehouses. It proposes a scheduling framework to handle complications from streaming data, including view hierarchies, data consistency, inability to preempt updates, and transient overload. Key aspects of the proposed system include defining a scheduling metric based on data staleness rather than job properties, and developing two modes (push and pull) for auditing logs to provide data accountability. The goal is to propagate new data across relevant tables and views as quickly as possible to allow real-time decision making.
This document provides an overview of various SAP administration topics and transaction codes. It begins with an explanation of SAP architecture including the application, middle, and operating system layers. It then covers SAP instances, active servers, work processes, user administration, system logs, ABAP dumps, database administration using transaction codes like DB02 and BRTOOLS, and other topics like transport management, backups, and alerts. Screenshots are included to illustrate many of the transaction codes and administration tasks.
Similar to "Em12c performance tuning outside the box"
These are my keynote slides from SQL Saturday Oregon 2023 on AI and the intersection of AI, machine learning, and economic challenges as a technical specialist.
This document discusses migrating high IO SQL Server workloads to Azure. It begins by explaining that every company has at least one "whale" workload that requires high CPU, memory and IO. These whales can be challenging to move to the cloud. The document then provides tips on determining if a workload's issue is truly high IO or caused by another factor. It discusses various wait events that may indicate IO problems and tools for monitoring IO performance. Finally, it covers some considerations for IO in the cloud.
This document provides an overview of options for running Oracle solutions on Microsoft Azure infrastructure as a service (IaaS). It discusses architectural considerations for high availability, disaster recovery, storage, licensing, and migrating workloads from Oracle Exadata. Key points covered include using Oracle Data Guard for replication and failover, storage options like Azure NetApp Files that can support Exadata workloads, and identifying databases that are not dependent on Exadata features for lift and shift to Azure IaaS. The document aims to help customers understand how to optimize their use of Oracle solutions when deploying to Azure.
This document provides guidance and best practices for migrating database workloads to infrastructure as a service (IaaS) in Microsoft Azure. It discusses choosing the appropriate virtual machine series and storage options to meet performance needs. The document emphasizes migrating the workload, not the hardware, and using cloud services to simplify management like automated patching and backup snapshots. It also recommends bringing existing monitoring and management tools to the cloud when possible rather than replacing them. The key takeaways are to understand the workload demands, choose optimal IaaS configurations, leverage cloud-enabled tools, and involve database experts when issues arise to address the root cause rather than just adding resources.
This document discusses strategies for managing ADHD as an adult. It begins by describing the three main types of ADHD - inattentive, hyperactive-impulsive, and combined. It then lists some of the biggest challenges of ADHD like executive dysfunction, disorganization, lack of attention, procrastination, and internal preoccupation. The document provides tips and strategies for overcoming each challenge through organization, scheduling, list-making, breaking large tasks into small ones, and using technology tools. It emphasizes finding accommodations that work for the individual and their specific ADHD presentation and challenges.
This document provides guidance and best practices for using Infrastructure as a Service (IaaS) on Microsoft Azure for database workloads. It discusses key differences between IaaS, Platform as a Service (PaaS), and Software as a Service (SaaS). The document also covers Azure-specific concepts like virtual machine series, availability zones, storage accounts, and redundancy options to help architects design cloud infrastructures that meet business requirements. Specialized configurations like constrained VMs and ultra disks are also presented along with strategies for ensuring high performance and availability of database workloads on Azure IaaS.
Kellyn Gorman shares her experience living with ADHD and strategies for turning it into a positive. She discusses how ADHD impacted her childhood and how it still presents challenges as an adult. However, with the right tools and understanding of her needs, she is able to find success. She provides tips for organizing, prioritizing tasks, managing distractions, and accessing support. The key is learning about ADHD and how to structure one's environment and routine to play to one's strengths rather than fighting against the condition.
Migrating Oracle workloads to Azure requires understanding the workload and hardware requirements. It is important to analyze the workload using the Automatic Workload Repository (AWR) report to accurately size infrastructure needs. The right virtual machine series and storage options must be selected to meet the identified input/output and capacity needs. Rather than moving existing hardware, the focus should be migrating the Oracle workload to take advantage of cloud capabilities while ensuring performance and high availability.
This document discusses overcoming silos when implementing DevOps for a new product at a company. The teams involved were dispersed globally and siloed in their tools and processes. Challenges included isolating workload sizes, choosing a Linux image, and team ownership issues. The solution involved aligning teams, automating deployment with Bash scripts called by Terraform and Azure DevOps, and evolving the automation. This improved communication, shrank the team from 120 people to 7, and increased deployments and profits for the successful project.
This document discusses best practices for migrating database workloads to Azure Infrastructure as a Service (IaaS). Some key points include:
- Choosing the appropriate VM series like E or M series optimized for database workloads.
- Using availability zones and geo-redundant storage for high availability and disaster recovery.
- Sizing storage correctly based on the database's input/output needs and using premium SSDs where needed.
- Migrating existing monitoring and management tools to the cloud to provide familiarity and automating tasks like backups, patching, and problem resolution.
This document provides an overview of how to successfully migrate Oracle workloads to Microsoft Azure. It begins with an introduction of the presenter and their experience. It then discusses why customers might want to migrate to the cloud and the different Azure database options available. The bulk of the document outlines the key steps in planning and executing an Oracle workload migration to Azure, including sizing, deployment, monitoring, backup strategies, and ensuring high availability. It emphasizes adapting architectures for the cloud rather than directly porting on-premises systems. The document concludes with recommendations around automation, education resources, and references for Oracle-Azure configurations.
This document discusses the future of data and the Azure data ecosystem. It highlights that by 2025 there will be 175 zettabytes of data in the world and the average person will have over 5,000 digital interactions per day. It promotes Azure services like Power BI, Azure Synapse Analytics, Azure Data Factory and Azure Machine Learning for extracting value from data through analytics, visualization and machine learning. The document provides overviews of key Azure data and analytics services and how they fit together in an end-to-end data platform for business intelligence, artificial intelligence and continuous intelligence applications.
This is the second session of the learning pathway at PASS Summit 2019; it is still a stand-alone session that teaches you how to write proper Linux BASH scripts.
This document discusses techniques for optimizing Power BI performance. It recommends tracing queries using DAX Studio to identify slow queries and refresh times. Tracing tools like SQL Profiler and log files can provide insights into issues occurring in the data sources, Power BI layer, and across the network. Focusing on optimization by addressing wait times through a scientific process can help resolve long-term performance problems.
The document provides tips and tricks for scripting success on Linux. It begins with introducing the speaker and emphasizing that the session will focus on best practices for those already familiar with BASH scripting. It then details various tips across multiple areas: setting the shell and environment variables, adding headers and comments to scripts, validating input, implementing error handling and debugging, leveraging utilities like CRON for scheduling, and ensuring scripts continue running across sessions. The tips are meant to help authors write more readable, maintainable, and reliable scripts.
This document discusses connecting Oracle Analytics Cloud (OAC) Essbase data to Microsoft Power BI. It provides an overview of Power BI and OAC, describes various methods for connecting the two including using a REST API and exporting data to Excel or CSV files, and demonstrates some visualization capabilities in Power BI including trends over time. Key lessons learned are that data can be accessed across tools through various connections, analytics concepts are often similar between tools, and while partnerships exist between Microsoft and Oracle, integration between specific products like Power BI and OAC is still limited.
Mentors provide guidance and support, while sponsors use their influence to advocate for and promote a protege's career. Obtaining both mentors and sponsors is important for advancing in one's field and overcoming biases, yet women often have fewer sponsors than men. The document outlines strategies for how women can find and work with sponsors, and how men can act as allies in supporting women. Developing representation of women in technology fields through mentorship and sponsorship can help initiatives become self-sustaining over time.
How to Avoid Learning the Linux-Kernel Memory Model (ScyllaDB)
The Linux-kernel memory model (LKMM) is a powerful tool for developing highly concurrent Linux-kernel code, but it also has a steep learning curve. Wouldn't it be great to get most of LKMM's benefits without the learning curve?
This talk will describe how to do exactly that by using the standard Linux-kernel APIs (locking, reference counting, RCU) along with a simple rules of thumb, thus gaining most of LKMM's power with less learning. And the full LKMM is always there when you need it!
Quality Patents: Patents That Stand the Test of Time (Aurora Consulting)
Is your patent a vanity piece of paper for your office wall? Or is it a reliable, defendable, assertable, property right? The difference is often quality.
Is your patent simply a transactional cost and a large pile of legal bills for your startup? Or is it a leverageable asset worthy of attracting precious investment dollars, worth its cost in multiples of valuation? The difference is often quality.
Is your patent application only good enough to get through the examination process? Or has it been crafted to stand the tests of time and varied audiences if you later need to assert that document against an infringer, find yourself litigating with it in an Article 3 Court at the hands of a judge and jury, God forbid, end up having to defend its validity at the PTAB, or even needing to use it to block pirated imports at the International Trade Commission? The difference is often quality.
Quality will be our focus for a good chunk of the remainder of this season. What goes into a quality patent, and where possible, how do you get it without breaking the bank?
** Episode Overview **
In this first episode of our quality series, Kristen Hansen and the panel discuss:
⦿ What do we mean when we say patent quality?
⦿ Why is patent quality important?
⦿ How to balance quality and budget
⦿ The importance of searching, continuations, and draftsperson domain expertise
⦿ Very practical tips, tricks, examples, and Kristen’s Musts for drafting quality applications
https://www.aurorapatents.com/patently-strategic-podcast.html
Transcript: Details of description part II: Describing images in practice - T... (BookNet Canada)
This presentation explores the practical application of image description techniques. Familiar guidelines will be demonstrated in practice, and descriptions will be developed “live”! If you have learned a lot about the theory of image description techniques but want to feel more confident putting them into practice, this is the presentation for you. There will be useful, actionable information for everyone, whether you are working with authors, colleagues, alone, or leveraging AI as a collaborator.
Link to presentation recording and slides: https://bnctechforum.ca/sessions/details-of-description-part-ii-describing-images-in-practice/
Presented by BookNet Canada on June 25, 2024, with support from the Department of Canadian Heritage.
UiPath Community Day Kraków: Devs4Devs Conference (UiPathCommunity)
We are honored to launch and host this event for our UiPath Polish Community, with the help of our partners - Proservartner!
We certainly hope we have managed to pique your interest in the subjects to be presented and the incredible networking opportunities at hand, too!
Check out our proposed agenda below 👇👇
08:30 ☕ Welcome coffee (30')
09:00 Opening note/ Intro to UiPath Community (10')
Cristina Vidu, Global Manager, Marketing Community @UiPath
Dawid Kot, Digital Transformation Lead @Proservartner
09:10 Cloud migration - Proservartner & DOVISTA case study (30')
Marcin Drozdowski, Automation CoE Manager @DOVISTA
Pawel Kamiński, RPA developer @DOVISTA
Mikolaj Zielinski, UiPath MVP, Senior Solutions Engineer @Proservartner
09:40 From bottlenecks to breakthroughs: Citizen Development in action (25')
Pawel Poplawski, Director, Improvement and Automation @McCormick & Company
Michał Cieślak, Senior Manager, Automation Programs @McCormick & Company
10:05 Next-level bots: API integration in UiPath Studio (30')
Mikolaj Zielinski, UiPath MVP, Senior Solutions Engineer @Proservartner
10:35 ☕ Coffee Break (15')
10:50 Document Understanding with my RPA Companion (45')
Ewa Gruszka, Enterprise Sales Specialist, AI & ML @UiPath
11:35 Power up your Robots: GenAI and GPT in REFramework (45')
Krzysztof Karaszewski, Global RPA Product Manager
12:20 🍕 Lunch Break (1hr)
13:20 From Concept to Quality: UiPath Test Suite for AI-powered Knowledge Bots (30')
Kamil Miśko, UiPath MVP, Senior RPA Developer @Zurich Insurance
13:50 Communications Mining - focus on AI capabilities (30')
Thomasz Wierzbicki, Business Analyst @Office Samurai
14:20 Polish MVP panel: Insights on MVP award achievements and career profiling
Paradigm Shifts in User Modeling: A Journey from Historical Foundations to Em... (Erasmo Purificato)
Slide of the tutorial entitled "Paradigm Shifts in User Modeling: A Journey from Historical Foundations to Emerging Trends" held at UMAP'24: 32nd ACM Conference on User Modeling, Adaptation and Personalization (July 1, 2024 | Cagliari, Italy)
Details of description part II: Describing images in practice - Tech Forum 2024 (BookNet Canada)
This presentation explores the practical application of image description techniques. Familiar guidelines will be demonstrated in practice, and descriptions will be developed “live”! If you have learned a lot about the theory of image description techniques but want to feel more confident putting them into practice, this is the presentation for you. There will be useful, actionable information for everyone, whether you are working with authors, colleagues, alone, or leveraging AI as a collaborator.
Link to presentation recording and transcript: https://bnctechforum.ca/sessions/details-of-description-part-ii-describing-images-in-practice/
Presented by BookNet Canada on June 25, 2024, with support from the Department of Canadian Heritage.
Quantum Communications Q&A with Gemini LLM. These are based on Shannon's noisy-channel theorem and show how the classical theory applies to the quantum world.
Hire a private investigator to get cell phone records (HackersList)
Learn what private investigators can legally do to obtain cell phone records and track phones, plus ethical considerations and alternatives for addressing privacy concerns.
GDG Cloud Southlake #34: Neatsun Ziv: Automating Appsec (James Anderson)
The lecture titled "Automating AppSec" delves into the critical challenges associated with manual application security (AppSec) processes and outlines strategic approaches for incorporating automation to enhance efficiency, accuracy, and scalability. The lecture is structured to highlight the inherent difficulties in traditional AppSec practices, emphasizing the labor-intensive triage of issues, the complexity of identifying responsible owners for security flaws, and the challenges of implementing security checks within CI/CD pipelines. Furthermore, it provides actionable insights on automating these processes to not only mitigate these pains but also to enable a more proactive and scalable security posture within development cycles.
The Pains of Manual AppSec:
This section will explore the time-consuming and error-prone nature of manually triaging security issues, including the difficulty of prioritizing vulnerabilities based on their actual risk to the organization. It will also discuss the challenges in determining ownership for remediation tasks, a process often complicated by cross-functional teams and microservices architectures. Additionally, the inefficiencies of manual checks within CI/CD gates will be examined, highlighting how they can delay deployments and introduce security risks.
Automating CI/CD Gates:
Here, the focus shifts to the automation of security within the CI/CD pipelines. The lecture will cover methods to seamlessly integrate security tools that automatically scan for vulnerabilities as part of the build process, thereby ensuring that security is a core component of the development lifecycle. Strategies for configuring automated gates that can block or flag builds based on the severity of detected issues will be discussed, ensuring that only secure code progresses through the pipeline.
Triaging Issues with Automation:
This segment addresses how automation can be leveraged to intelligently triage and prioritize security issues. It will cover technologies and methodologies for automatically assessing the context and potential impact of vulnerabilities, facilitating quicker and more accurate decision-making. The use of automated alerting and reporting mechanisms to ensure the right stakeholders are informed in a timely manner will also be discussed.
Identifying Ownership Automatically:
Automating the process of identifying who owns the responsibility for fixing specific security issues is critical for efficient remediation. This part of the lecture will explore tools and practices for mapping vulnerabilities to code owners, leveraging version control and project management tools.
Three Tips to Scale the Shift Left Program:
Finally, the lecture will offer three practical tips for organizations looking to scale their Shift Left security programs. These will include recommendations on fostering a security culture within development teams, employing DevSecOps principles to integrate security throughout the development
How RPA Help in the Transportation and Logistics Industry.pptxSynapseIndia
Revolutionize your transportation processes with our cutting-edge RPA software. Automate repetitive tasks, reduce costs, and enhance efficiency in the logistics sector with our advanced solutions.
How Netflix Builds High Performance Applications at Global ScaleScyllaDB
We all want to build applications that are blazingly fast. We also want to scale them to users all over the world. Can the two happen together? Can users in the slowest of environments also get a fast experience? Learn how we do this at Netflix: how we understand every user's needs and preferences and build high performance applications that work for every user, every time.
Navigating Post-Quantum Blockchain: Resilient Cryptography in Quantum Threatsanupriti
In the rapidly evolving landscape of blockchain technology, the advent of quantum computing poses unprecedented challenges to traditional cryptographic methods. As quantum computing capabilities advance, the vulnerabilities of current cryptographic standards become increasingly apparent.
This presentation, "Navigating Post-Quantum Blockchain: Resilient Cryptography in Quantum Threats," explores the intersection of blockchain technology and quantum computing. It delves into the urgent need for resilient cryptographic solutions that can withstand the computational power of quantum adversaries.
Key topics covered include:
An overview of quantum computing and its implications for blockchain security.
Current cryptographic standards and their vulnerabilities in the face of quantum threats.
Emerging post-quantum cryptographic algorithms and their applicability to blockchain systems.
Case studies and real-world implications of quantum-resistant blockchain implementations.
Strategies for integrating post-quantum cryptography into existing blockchain frameworks.
Join us as we navigate the complexities of securing blockchain networks in a quantum-enabled future. Gain insights into the latest advancements and best practices for safeguarding data integrity and privacy in the era of quantum threats.
Interaction Latency: Square's User-Centric Mobile Performance MetricScyllaDB
Mobile performance metrics often take inspiration from the backend world and measure resource usage (CPU usage, memory usage, etc) and workload durations (how long a piece of code takes to run).
However, mobile apps are used by humans and the app performance directly impacts their experience, so we should primarily track user-centric mobile performance metrics. Following the lead of tech giants, the mobile industry at large is now adopting the tracking of app launch time and smoothness (jank during motion).
At Square, our customers spend most of their time in the app long after it's launched, and they don't scroll much, so app launch time and smoothness aren't critical metrics. What should we track instead?
This talk will introduce you to Interaction Latency, a user-centric mobile performance metric inspired from the Web Vital metric Interaction to Next Paint"" (web.dev/inp). We'll go over why apps need to track this, how to properly implement its tracking (it's tricky!), how to aggregate this metric and what thresholds you should target.
Fluttercon 2024: Showing that you care about security - OpenSSF Scorecards fo...Chris Swan
Have you noticed the OpenSSF Scorecard badges on the official Dart and Flutter repos? It's Google's way of showing that they care about security. Practices such as pinning dependencies, branch protection, required reviews, continuous integration tests etc. are measured to provide a score and accompanying badge.
You can do the same for your projects, and this presentation will show you how, with an emphasis on the unique challenges that come up when working with Dart and Flutter.
The session will provide a walkthrough of the steps involved in securing a first repository, and then what it takes to repeat that process across an organization with multiple repos. It will also look at the ongoing maintenance involved once scorecards have been implemented, and how aspects of that maintenance can be better automated to minimize toil.
Coordinate Systems in FME 101 - Webinar SlidesSafe Software
If you’ve ever had to analyze a map or GPS data, chances are you’ve encountered and even worked with coordinate systems. As historical data continually updates through GPS, understanding coordinate systems is increasingly crucial. However, not everyone knows why they exist or how to effectively use them for data-driven insights.
During this webinar, you’ll learn exactly what coordinate systems are and how you can use FME to maintain and transform your data’s coordinate systems in an easy-to-digest way, accurately representing the geographical space that it exists within. During this webinar, you will have the chance to:
- Enhance Your Understanding: Gain a clear overview of what coordinate systems are and their value
- Learn Practical Applications: Why we need datams and projections, plus units between coordinate systems
- Maximize with FME: Understand how FME handles coordinate systems, including a brief summary of the 3 main reprojectors
- Custom Coordinate Systems: Learn how to work with FME and coordinate systems beyond what is natively supported
- Look Ahead: Gain insights into where FME is headed with coordinate systems in the future
Don’t miss the opportunity to improve the value you receive from your coordinate system data, ultimately allowing you to streamline your data analysis and maximize your time. See you there!
MYIR Product Brochure - A Global Provider of Embedded SOMs & SolutionsLinda Zhang
This brochure gives introduction of MYIR Electronics company and MYIR's products and services.
MYIR Electronics Limited (MYIR for short), established in 2011, is a global provider of embedded System-On-Modules (SOMs) and
comprehensive solutions based on various architectures such as ARM, FPGA, RISC-V, and AI. We cater to customers' needs for large-scale production, offering customized design, industry-specific application solutions, and one-stop OEM services.
MYIR, recognized as a national high-tech enterprise, is also listed among the "Specialized
and Special new" Enterprises in Shenzhen, China. Our core belief is that "Our success stems from our customers' success" and embraces the philosophy
of "Make Your Idea Real, then My Idea Realizing!"
MYIR Product Brochure - A Global Provider of Embedded SOMs & Solutions
EM12c Performance Tuning Outside the Box
1. Kellyn Pot’Vin
Sr. Technical Consultant
EM12c Performance Diagnosis and Tuning Outside the Box
2. Kellyn Pot’Vin
Westminster, Colorado
Oracle ACE Director, Sr. Technical Specialist at Enkitec
Specializes in performance and management of large enterprise environments.
Board of Directors for RMOUG, Director of Training Days and Database Track Lead for KSCOPE 2013
Blog: DBAKevlar.com
Twitter: @DBAKevlar
3. Performance Diagnostics in EM12c
Simple access to performance, resource usage and demands.
Data collection to investigate performance issues: current, recent and historical.
Capacity planning.
Have the real answer, not assumptions.
4. Presentation Agenda
Performance Out of the Box with EM12c
Top Activity
SQL Monitor
ASH Analytics
Real-time ADDM
Compare ADDM
5. Tools at your Disposal
Requires the Diagnostics Pack.
6. Top Activity, “The Grid”
Graphical display of performance usage.
15 second refresh, manual refresh or historical.
7. When to Worry
Out-of-the-ordinary activity (KNOW YOUR DB!)
Colors outside of green and [some] blue.
Large amounts of blue (high IO).
Remember that pink (unknown), red (concurrency/application), tan (network) and orange (commit) in the grid should be investigated.
Brown or black? Run for the hills! (JK)
8. Here’s our spike, which waits?
Commonly, focus on pink, orange, red and brown for issues.
Network and queuing do have opportunities for tuning, as well.
Green and blue are expected, but also part of problems when overutilized.
9. We’re in the Red, (Orange, too!)
Inspect high % use.
Note that the update and execution may be impacting each other.
13. The Icing on the Cake
Duh, add some memory to the EM12c box!
14. SQL Monitor for Performance
• Elapsed Time
• SQL_ID, Beginning SQL Text.
• Parallel, Waits and Execution Time
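Outside the console, the same monitored statements can be listed straight from the underlying view. A minimal sketch (requires the Tuning Pack; any thresholds you'd filter on are your own choice):

```sql
-- List recently monitored statements, longest elapsed time first.
SELECT sql_id,
       status,
       username,
       ROUND(elapsed_time / 1e6, 1) AS elapsed_sec,  -- ELAPSED_TIME is in microseconds
       buffer_gets,
       SUBSTR(sql_text, 1, 60)      AS sql_text_start
FROM   v$sql_monitor
ORDER  BY elapsed_time DESC;
```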
15. Digging in
• Choose your session, SQL_ID or SQL text.
• Shows active and completed sessions for the amount of time chosen.
• Shows high-level wait events, DB time, IO usage and duration.
16. Digging Down
By SQL_ID, we can inspect:
• Duration
• DB Time
• PL/SQL and Java time
• Wait Activity
• Buffer Gets
• IO Requests and IO Bytes
• If Exadata, Offload Efficiency
17. Monitoring Procedural Call
All SQL_IDs called will show, along with duration, so it’s simple to pinpoint trouble statements.
18. SQL Details
• Note that the SQL statement, along with elapsed time, is shown.
• Data is sourced from Top Activity, not AWR.
20. Added Data
Along with the main stats:
Activity information on the statement.
The execution plan.
Whether there is a SQL plan baseline or outline in place.
Whether any tuning advisors have been run against the statement.
And a direct link to SQL Monitoring.
21. How to Use SQL Monitoring
Active monitoring of database processing.
Investigation of performance.
Save off reports, which provide a graphical image of performance differing from Top Activity or ASH Analytics.
Distinct diagnosis at a session or statement level.
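Saving off a report can also be scripted. A sketch using DBMS_SQLTUNE.REPORT_SQL_MONITOR (the SQL_ID below is a made-up placeholder; swap in one from V$SQL_MONITOR):

```sql
-- Spool an active (interactive HTML) SQL Monitor report to a file.
SET LONG 10000000 LONGCHUNKSIZE 10000000 LINESIZE 32767 PAGESIZE 0 TRIMSPOOL ON
SPOOL sql_monitor_report.html
SELECT DBMS_SQLTUNE.report_sql_monitor(
         sql_id       => 'abcd1234efgh5',  -- placeholder SQL_ID
         type         => 'ACTIVE',         -- or 'HTML' / 'TEXT'
         report_level => 'ALL')
FROM   dual;
SPOOL OFF
```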
22. ASH Analytics
The future of Top Activity.
Package installation to the database.
Always on, without the performance impact of Top Activity data gathering.
More defined, more accurate.
Historical data enhanced over Top Activity’s historical views.
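The same ASH samples that feed the Analytics pages can be queried directly. A minimal sketch against the in-memory buffer (each sample approximates one second of DB time):

```sql
-- Top wait events from ASH samples over the last 30 minutes.
SELECT NVL(event, 'ON CPU') AS event,
       COUNT(*)             AS samples
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 30/1440
GROUP  BY NVL(event, 'ON CPU')
ORDER  BY samples DESC;
```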
24. Custom Review Pane
• You can choose to change the overview pane to display data for any amount of time.
• Just click on the pane and drag it to the area you are interested in, or extend it to cover the areas you want to investigate.
• Choose your filters or view all data, and you are ready to go!
27. Pick Your Poison
View data very similar to the SQL and Session data in Top Activity.
All data is sourced from AWR and is dependent on samples and the AWR retention/interval settings in the repository.
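Those retention/interval settings can be checked and adjusted directly. A sketch; the values passed here are illustrative, not recommendations:

```sql
-- Check the current AWR snapshot interval and retention.
SELECT snap_interval, retention FROM dba_hist_wr_control;

-- Adjust them (both arguments are in minutes).
BEGIN
  DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
    interval  => 30,            -- snapshot every 30 minutes
    retention => 30 * 24 * 60   -- keep 30 days of history
  );
END;
/
```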
29. Activity Details
Activity shows wait detail over time.
Processes, including parallel sessions, involved during the shaded time.
Option to run an AWR or ASH report.
30. The Rest of the Story
For standard SQL, Plan, Plan Control and Tuning History are shown under individual tabs.
SQL Monitor is minimized access to the SQL Monitor view.
32. Data Break Down
Display offers incredible diversity in wait, resource usage and other critical event choices.
33. ASH Analytics – When to Use It
Need the more defined ASH data for EM diagnostics.
Want a second way to present data to less “DBA”-centric groups (load map).
Database-level OR session/statement-level performance diagnosis.
Dig down deep; present data in numerous formats to get the most complete picture of a complex issue.
Can be used for real-time or historical analysis.
34. Real-Time ADDM
Yes, it requires a PL/SQL installation for the view data.
Uses ADDM data for the source.
Always on, low to no impact.
Normal Mode, or Emergency Mode when Emergency Monitoring is required.
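Real-Time ADDM itself is driven from the EM12c console, but a regular ADDM run can be scripted between two AWR snapshots as a rough command-line counterpart. A sketch; the task name and snapshot IDs are placeholders:

```sql
-- Run ADDM across an AWR snapshot pair and print the findings.
DECLARE
  l_task VARCHAR2(100) := 'addm_manual_demo';  -- placeholder task name
BEGIN
  DBMS_ADDM.analyze_db(
    task_name      => l_task,
    begin_snapshot => 100,   -- placeholder snapshot IDs
    end_snapshot   => 101);
END;
/
SET LONG 1000000 PAGESIZE 0
SELECT DBMS_ADDM.get_report('addm_manual_demo') FROM dual;
```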
35. On Your Mark, Get Set…
This is a recorded ADDM session, beginning from the time you click “Start”.
36. In Progress Data
Ability to stop and restart.
Findings gathered during progress.
Check Progress notifies of any issues.
37. Finished!
Once finished, verify no failures/errors occurred in the collection.
Use the tabs to investigate findings, activity, hang data and statistics.
The number of findings is shown.
38. The Findings
The example shows low-priority SQL statements using significant DB time, but no other issues at this time.
Any high-priority issues found will be listed in red, with details below the main pane (low, medium and high priority levels).
39. Activity Tab
Activity Data, but sourced from ADDM.
Similar output to Top Activity and ASH Analytics.
40. Wait Details
• By highlighting a wait link on the right, you can drill down to the actual wait information for that wait event.
41. Hanging out
If a database hang situation occurred and Real-Time ADDM was used to diagnose it, the HANG DATA tab will show any diagnostic data collected during the collection.
Statistics Data:
42. Last but not Least…
Initialization parameter data for the database instance.
Any undocumented or non-recommended parameter settings will be identified and listed in the findings section.
43. Compare Period ADDM
How is it different from Real-Time ADDM?
Ability to compare TWO snapshots in time, side by side, of ADDM data.
Compares ADDM snapshots against each other (dependent on snapshot intervals and retention).
All comparisons can be saved off or mailed from the console (mailed through EM12c settings).
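Since the comparison works off AWR snapshots, it helps to know which windows are available before picking the two periods. A minimal sketch:

```sql
-- List available AWR snapshots to choose comparison windows from.
SELECT snap_id,
       begin_interval_time,
       end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id;
```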
45. Comparison Activity
• Clear comparison from the previous day, same time, to see the performance issue vs. the right-hand snapshot.
• Commonality comparison of the SQL for the snapshots being compared.
• Note the concurrency, commits and increased application waits.
46. It’s all in the Details
The first tab shows any configuration differences between the two snapshots and what the configuration parameter is.
47. Findings Summary Detail
Shows comparison increases or decreases in waits.
Lists the percentage of change between the periods compared.
Upon highlighting, detailed data regarding the increase or decrease is shown.
48. SQL Changes
We can dig down into each of the SQL statements found to have the highest impact on the system and diagnose further.
49. Finding Detail Descriptions
As shown above, the wait on Checkpoints to Tablespace is described below once you highlight the section in the findings tab.
And for RAC, some waits can be broken down by instance.
50. Resource Usage: CPU
CPU usage is viewable by instance and total usage.
If no CPU-bound wait issues were seen, it’s stated by the comparison snapshot.
51. Resource Usage: Memory
• Note that Memory has a warning alert by the tab to point you to it after the comparison is completed.
• The base and comparison are in red, meaning that virtual paging was an issue in both snapshots.
• Data is separated by instance in RAC, showing clear usage for better diagnostics.
52. Resource Usage: IO
I/O is separated by throughput and single-block read latency.
Again, if there was an issue, a warning would be on the IO tab and the Base and Comparison would show in red instead of green.
53. Resource Usage: Interconnect
As this is RAC, note that we also have an Interconnect tab with data on speed and performance.
Total vs. rate on throughput is viewed through a radio button choice.
55. How to Use Compare Period ADDM
Excellent to diagnose “what has changed”.
“Just the facts” information on a comparison of time periods.
Dependent upon AWR retention settings and snapshot intervals.
Historical data can be set by date, custom range, or previous snapshot.
Will move to the next snapshot window if a mid-snapshot time span is chosen.
56. EM12c blogs
Leighton Nelson - http://blogs.griddba.com/
Rob Zoeteweij - http://oemgc.files.wordpress.com/
Gokhan Atil - http://www.gokhanatil.com/
Martin Bach - http://martincarstenbach.wordpress.com
Niall Litchfield - http://orawin.info/blog/
Info for Me!
Company Website: www.enkitec.com
Twitter: @DBAKevlar
RMOUG: www.rmoug.org
LinkedIn: Kellyn Pot’Vin and/or Rocky Mountain Oracle User Group
Email: dbakevlar@gmail.com or kpotvin@enkitec.com or TrainingdaysDir@rmoug.org
Blog: https://dbakevlar.com
Reference
57. Kscope13 features more than 300 educational sessions, full-day symposiums, hands-on training courses, informal networking sessions, and a plethora of chances to increase your technical know-how by learning from the best.
• Application Express
• ADF and Fusion Dev.
• Developer's Toolkit
• The Database
• Building Better Software
• Business Intelligence
• Essbase
• Planning
• Financial Close
• EPM Reporting
• EPM Foundations and Data Management
• EPM Business Content
http://kscope13.com/registration