This document provides an introduction and overview of Amazon Relational Database Service (Amazon RDS). It discusses how RDS provides automated provisioning and scaling of database instances, high availability through Multi-AZ deployments, security features including encryption and IAM access control, monitoring with CloudWatch, and migration services. It also introduces Amazon Aurora, a MySQL- and PostgreSQL-compatible database engine designed for the cloud that provides better performance than commercial databases.
DAT302: Deep Dive on Amazon Relational Database Service (RDS) (Amazon Web Services)
Amazon RDS enables customers to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business. Amazon RDS provides six database engines to choose from: Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL and MariaDB. In this session, we take a closer look at the capabilities of the RDS service and review the latest features available. We do a deep dive into how RDS works and the best practices for achieving optimal performance, flexibility, and cost savings for your databases.
For more training on AWS, visit: https://www.qa.com/amazon
AWS Loft | London - Deep Dive: Amazon RDS by Toby Knight, Manager Solutions Architecture, 18 April 2016
RDS provides a fully managed relational database service. Key features include automated provisioning and scaling, high availability, data encryption at rest, backups and read replicas. RDS supports multiple database engines like MySQL, PostgreSQL, Oracle and SQL Server. Customers can migrate databases to RDS by backing up to S3 and restoring onto a new RDS instance.
Amazon RDS allows you to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you to focus on your applications and business.
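As a rough sketch of what "a few clicks" corresponds to programmatically, the parameters below are the kind one might pass to boto3's `rds.create_db_instance` call. The identifier, instance class, and credentials are illustrative placeholders, not values from the presentation.

```python
# Illustrative parameters for launching a managed RDS instance.
# All concrete values (identifier, class, credentials) are hypothetical
# placeholders; in real code they would be passed to boto3's
# rds.create_db_instance().
db_params = {
    "DBInstanceIdentifier": "demo-mysql-db",   # hypothetical name
    "Engine": "mysql",                         # one of the supported engines
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,                   # GiB
    "MasterUsername": "admin",
    "MasterUserPassword": "CHANGE_ME",         # placeholder only
    "MultiAZ": True,                           # synchronous standby in a second AZ
    "StorageEncrypted": True,                  # encryption at rest
    "BackupRetentionPeriod": 7,                # automated daily backups, in days
}

# With boto3 this would be (not executed here):
#   import boto3
#   rds = boto3.client("rds")
#   rds.create_db_instance(**db_params)
```

The `MultiAZ` and `StorageEncrypted` flags map directly onto the high-availability and encryption-at-rest features the abstracts describe.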
1) The document discusses Amazon S3 and Glacier object storage services. It provides an overview of features like storage classes, security practices, analytics, and use cases for large companies.
2) Key recommendations include starting to tag objects to organize data, using lifecycle policies to automate storage class transitions, and reviewing bucket security settings.
3) The presentation aims to help users better understand how to architect applications using S3 and optimize storage and access of trillions of objects stored on AWS.
This presentation provides an overview of Amazon Elastic Block Store (EBS) and key performance concepts. EBS provides persistent block level storage volumes for use with EC2 instances. It discusses the different volume types (standard and provisioned IOPS), factors that impact performance like block size and queue depth, and best practices for architecting storage for performance and availability. The presentation also provides examples of how enterprises and applications use EBS and guidelines for minimum, production and large-scale usage.
In this session we will explore the world’s first cloud-scale file system and its targeted use cases. Session attendees will learn about EFS’s benefits, how to identify applications that are appropriate for use with EFS, and details about its performance and security models. The target audience is file system administrators, application developers, and application owners that operate or build file-based applications.
Learning Objectives:
- Learn how to make decisions about the service and share best practices and useful tips for success
- Learn about Content based routing, HTTP/2, WebSockets
- Secure your web applications using TLS termination and AWS WAF on Application Load Balancer
This document discusses encryption options when using AWS, focusing on the AWS Key Management Service (KMS). KMS allows users to simplify the creation, control, rotation and use of encryption keys in AWS services like S3, EBS, RDS, Redshift and others. It addresses key storage, access and usage considerations. KMS uses symmetric AES-256 encryption for data keys and allows granular IAM control over who can create, enable/disable, use and audit keys. The presentation demonstrates how to create and use customer master keys in KMS and integrate encryption with S3 and EBS volumes.
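A minimal sketch of the envelope-encryption pattern described above: a data key encrypts the payload locally, while KMS only ever handles the much smaller key material. The request dict mirrors the shape of KMS's `GenerateDataKey` API; the key ARN is a hypothetical placeholder, and the XOR "cipher" stands in for AES-256 purely to keep the sketch dependency-free and runnable.

```python
# Sketch of the envelope-encryption flow used with AWS KMS.
# The key ARN is a hypothetical placeholder; the XOR "cipher" below is
# NOT real cryptography -- it stands in for AES-256 so the flow can be
# demonstrated without external dependencies.
import os

generate_data_key_request = {      # shape of kms.generate_data_key(...)
    "KeyId": "arn:aws:kms:eu-west-1:111122223333:key/example",  # placeholder
    "KeySpec": "AES_256",          # KMS returns a 256-bit data key
}

# KMS would return Plaintext (the data key) and CiphertextBlob (the same
# key encrypted under the customer master key). Simulate the plaintext key:
data_key = os.urandom(32)

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Stand-in cipher so the flow is runnable end to end.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ciphertext = toy_encrypt(data_key, b"secret record")
restored = toy_encrypt(data_key, ciphertext)   # XOR is its own inverse
```

In the real pattern, the CiphertextBlob is stored alongside the data and sent back to KMS for decryption, so the plaintext data key never needs to be persisted.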
In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We’ll cover how each service might help support your application, how much each service costs, and how to get started.
Moving from an on-premises environment into AWS is just the start of the journey towards cost optimisation. In this session we’ll look at a range of ways in which our customers can understand their costs and increase their return-on-investment: building the business case; selecting the right models for the right workloads; benefiting from tiered pricing aggregation; using data to drive the choice of AWS services; implementation of intelligent auto-scaling; and, where appropriate, re-platforming to make use of new architectural patterns such as Serverless.
Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) cloud.
You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage.
Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.
A closer look at the MySQL and PostgreSQL compatible relational database built for the cloud that combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. We’ll explore how Aurora uses the AWS cloud to provide high reliability, high durability, and high throughput.
Speakers:
Steve Abraham - Principal Database Specialist Solutions Architect, AWS
Peter Dachnowicz - Sr. Technical Account Manager, AWS
Migrating Your Databases to AWS - Deep Dive on Amazon RDS and AWS Database Migration Service (Amazon Web Services)
The document discusses migrating databases to AWS using Amazon Relational Database Service (RDS) and AWS Database Migration Service (DMS). It outlines that RDS provides a managed relational database service and discusses engines, availability, scaling, backups and security. It then discusses DMS for migrating or replicating databases to AWS targets like RDS and Redshift. The Schema Conversion Tool is also covered for converting schemas during migrations. Real customer examples like Expedia migrating from SQL Server to AWS are provided to illustrate use cases.
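As a hedged illustration of the migration workflow described above, the dict below follows the shape of a DMS replication task request (boto3's `dms.create_replication_task`). All ARNs and identifiers are placeholders, not values from the presentation.

```python
# Illustrative shape of a DMS replication task request. All ARNs and
# identifiers are hypothetical placeholders; with boto3 these fields
# would be passed to dms.create_replication_task(...).
replication_task = {
    "ReplicationTaskIdentifier": "sqlserver-to-rds",            # placeholder
    "SourceEndpointArn": "arn:aws:dms:region:acct:endpoint/src",   # placeholder
    "TargetEndpointArn": "arn:aws:dms:region:acct:endpoint/tgt",   # placeholder
    "ReplicationInstanceArn": "arn:aws:dms:region:acct:rep/inst",  # placeholder
    # full-load-and-cdc = copy the existing data, then keep replicating
    # changes -- this is what allows migration with minimal downtime.
    "MigrationType": "full-load-and-cdc",
    "TableMappings": '{"rules": [{"rule-type": "selection", '
                     '"rule-id": "1", "rule-name": "1", '
                     '"object-locator": {"schema-name": "%", "table-name": "%"}, '
                     '"rule-action": "include"}]}',
}
```

The table-mapping rule here simply includes every schema and table; real migrations typically narrow the selection and add transformation rules.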
Amazon EMR enables fast processing of large structured or unstructured datasets, and in this presentation we'll show you how to setup an Amazon EMR job flow to analyse application logs, and perform Hive queries against it. We also review best practices around data file organisation on Amazon Simple Storage Service (S3), how clusters can be started from the AWS web console and command line, and how to monitor the status of a Map/Reduce job.
Finally we take a look at Hadoop ecosystem tools you can use with Amazon EMR and the additional features of the service.
See a recording of the webinar based on this presentation on YouTube here:
Check out the rest of the Masterclass webinars for 2015 here: http://aws.amazon.com/campaigns/emea/masterclass/
See the Journey Through the Cloud webinar series here: http://aws.amazon.com/campaigns/emea/journey/
This document discusses using AWS for disaster recovery. It outlines several disaster recovery scenarios that can be implemented on AWS, including backup and restore, pilot light, low-capacity standby, and multi-site hot standby. For each scenario, it describes the advantages, preparation needed, and objectives for recovery time and point objectives. It emphasizes testing disaster recovery plans on AWS and notes that initial steps are simple. The presentation encourages attendees to learn more about AWS disaster recovery resources and consider using AWS for a disaster recovery project.
Amazon S3 hosts trillions of objects and is used for storing a wide range of data, from system backups to digital media. In this presentation from the Amazon S3 Masterclass webinar, we explain the features of Amazon S3, from static website hosting through server-side encryption to Amazon Glacier integration. The webinar dives deep into the feature set of Amazon S3 to give a rounded overview of its capabilities, looking at common use cases, APIs and best practices.
See a recording of this video here on YouTube: http://youtu.be/VC0k-noNwOU
Check out future webinars in the Masterclass series here: http://aws.amazon.com/campaigns/emea/masterclass/
View the Journey Through the Cloud webinar series here: http://aws.amazon.com/campaigns/emea/journey/
Amazon RDS allows you to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you to focus on your applications and business. We’ll discuss Amazon RDS fundamentals, learn about the seven available database engines, and examine customer success stories.
This document provides an overview of Amazon Relational Database Service (Amazon RDS). It discusses the multi-engine support, automated provisioning and scaling, high availability features, security capabilities, monitoring options, and compliance certifications of Amazon RDS. It also highlights key customers like Airbnb that use Amazon RDS to simplify database management and improve performance and availability.
ENT305: Migrating Your Databases to AWS: Deep Dive on Amazon Relational Database Service (Amazon Web Services)
Amazon RDS allows you to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity, automates time-consuming database administration tasks, and provides you with six familiar database engines to choose from: Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL and MariaDB. In this session, we will take a close look at the capabilities of Amazon RDS and explain how it works. We’ll also discuss the AWS Database Migration Service and AWS Schema Conversion Tool, which help you migrate databases and data warehouses with minimal downtime from on-premises and cloud environments to Amazon RDS and other Amazon services. Gain your freedom from expensive, proprietary databases while providing your applications with the fast performance, scalability, high availability, and compatibility they need.
Migrating Your Databases to AWS: Deep Dive on Amazon RDS and AWS (Kristana Kane)
This document provides an overview of migrating databases to AWS using Amazon RDS and AWS Database Migration Service (DMS). It discusses how AWS RDS offers scalable, managed relational databases, the different database engines supported by RDS, and key features like security, monitoring, high availability and scaling. It then covers how AWS DMS can be used to migrate databases to AWS with no downtime by continuously replicating and migrating data. Finally, it shares examples of how customers have used RDS and DMS for heterogeneous, homogeneous, large-scale and split migrations.
AWS re:Invent 2016: AWS Database State of the Union (DAT320) (Amazon Web Services)
Raju Gulabani, vice president of AWS Database Services, discusses the evolution of database services on AWS and the new database services and features launched this year, and shares the AWS vision for continued innovation in this space. We are witnessing unprecedented growth in the amount of data collected, in many different shapes and forms. Storage, management, and analysis of this data require database services that scale and perform in ways not possible before. AWS offers a collection of database and other data services, including Amazon Aurora, Amazon DynamoDB, Amazon RDS, Amazon Redshift, Amazon ElastiCache, Amazon Kinesis, and Amazon EMR, to process, store, manage, and analyze data. In this session, we provide an overview of AWS database services and discuss how our customers are using these services today.
RDS provides a managed relational database service that allows customers to focus on applications rather than database administration. New features include increased storage and IOPS limits, HIPAA eligibility for some databases, and support for MariaDB. Amazon Aurora is a MySQL-compatible database designed for high performance, availability, and scalability. It uses 6 copies of data across 3 availability zones and provides up to 64TB of storage. The Database Migration Service allows migrating databases from on-premises or other platforms to AWS databases while keeping applications running.
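Aurora's storage design above can be sanity-checked with a little quorum arithmetic. Aurora's documented quorum model over its 6 copies uses a write quorum of 4 and a read quorum of 3, which is what lets writes survive the loss of an entire Availability Zone (2 copies) and reads survive an AZ plus one more copy:

```python
# Aurora-style quorum arithmetic: 6 copies across 3 AZs (2 per AZ),
# write quorum of 4, read quorum of 3.
COPIES, WRITE_QUORUM, READ_QUORUM = 6, 4, 3
COPIES_PER_AZ = 2

def writable(failed_copies: int) -> bool:
    # A write succeeds while at least WRITE_QUORUM copies remain.
    return COPIES - failed_copies >= WRITE_QUORUM

def readable(failed_copies: int) -> bool:
    # A read succeeds while at least READ_QUORUM copies remain.
    return COPIES - failed_copies >= READ_QUORUM

# Losing a whole AZ (2 copies) still leaves a write quorum:
assert writable(COPIES_PER_AZ)
# Losing an AZ plus one more copy (3 copies) blocks writes but not reads,
# so the database can still repair itself from the surviving copies:
assert not writable(COPIES_PER_AZ + 1)
assert readable(COPIES_PER_AZ + 1)
```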
Migrating Your Oracle Database to PostgreSQL - AWS Online Tech Talks (Amazon Web Services)
Learning Objectives:
- Learn about the capabilities of the PostgreSQL database
- Learn about PostgreSQL offerings on AWS
- Learn how to migrate from Oracle to PostgreSQL with minimal disruption
Amazon RDS allows customers to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business. Amazon RDS provides six familiar database engines to choose from: Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL and MariaDB. In this session we take a closer look at the capabilities of RDS and all the different options available. We do a deep dive into how RDS works and how Aurora differs from the rest of the engines.
Amazon Aurora is a MySQL-compatible database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. This session introduces you to Amazon Aurora, explains common use cases for the service, and helps you get started with building your first Amazon Aurora–powered application.
AWS offers a wide variety of database services that adapt to your application's requirements. The database services are fully managed and can be deployed in minutes with just a few clicks. AWS services include Amazon Relational Database Service (Amazon RDS), compatible with 6 common database engines; Amazon Aurora, a MySQL-compatible relational database with up to 5x the performance; Amazon DynamoDB, a fast and flexible NoSQL database service; Amazon Redshift, a petabyte-scale data warehouse; and Amazon ElastiCache, an in-memory caching service compatible with Memcached and Redis. AWS also provides AWS Database Migration Service, a service that makes it simple and cost-effective to migrate databases to the AWS cloud.
RDS Postgres and Aurora Postgres | AWS Public Sector Summit 2017 (Amazon Web Services)
Attend this session for a technical deep dive about RDS Postgres and Aurora Postgres. Come hear from Mark Porter, the General Manager of Aurora PostgreSQL and RDS at AWS, as he covers service specific use cases and applications within the AWS worldwide public sector community. Learn More: https://aws.amazon.com/government-education/
Migrating Your Databases to AWS: Deep Dive on Amazon RDS and AWS Database Migration Service (Amazon Web Services)
The document provides information about migrating databases to AWS using Amazon Relational Database Service (RDS) and AWS Database Migration Service (DMS). It discusses:
- Key features of RDS such as provisioning databases quickly with high availability, security, backups and monitoring capabilities.
- How DMS allows migrating databases to AWS with minimal downtime by continuously replicating and migrating data between databases.
- Examples of customers who have successfully migrated large databases to AWS using RDS and DMS to improve scalability, availability and reduce costs compared to on-premises databases.
This document discusses purpose-built databases and managed database services on AWS. It begins by explaining how data needs are rapidly expanding and changing due to factors like microservices and analytics. It then introduces several purpose-built AWS databases like Amazon Aurora, DynamoDB, DocumentDB, ElastiCache, and Neptune that are optimized for different use cases. Benefits highlighted include performance, scalability, availability, and that they are fully managed. Two customer examples of Duolingo and Capital One migrating to AWS databases are provided. The document concludes by discussing the advantages of moving to managed databases on AWS over self-managed databases.
This document summarizes Amazon Web Services database migration and replication services. It discusses how the AWS Database Migration Service can migrate databases between on-premises and cloud environments within 10 minutes with no application downtime. It also describes how the AWS Schema Conversion Tool can help migrate databases off Oracle and SQL Server to other database engines like MySQL. Finally, it provides an overview of Amazon RDS managed database services and high availability features.
How to Build Forecasting Services Using ML and Deep Learning Algorithms (Amazon Web Services)
Forecasting is an important process for many companies and is used in many areas to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial planning, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we show how to pre-process data that contains a temporal component, and then use an algorithm that, starting from the type of data analysed, produces an accurate forecast.
Big Data for Startups: How to Create Big Data Applications in Serverless Mode (Amazon Web Services)
The variety and quantity of data created every day is accelerating ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can appear complex: building large-scale Big Data clusters seems like an investment accessible only to established companies. But the elasticity of the cloud and, in particular, serverless services let us break through these limits.
Let's see how it is possible to develop Big Data applications quickly, without worrying about the infrastructure, dedicating all our resources instead to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we present the main features of the service and how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing the pace of innovation. Over this period we learned how changing our approach to application development allowed us to greatly increase agility and release velocity and, ultimately, to build more reliable and scalable applications. In this session we explain how we define modern applications and how building modern apps affects not only application architecture, but also organisational structure, development release pipelines, and even the operating model. We also describe common approaches to modernisation, including the approach used by Amazon.com itself.
How to Spend Up to 90% Less with Containers and Spot Instances (Amazon Web Services)
The use of containers continues to grow.
When properly designed, container-based applications are very often stateless and flexible.
The AWS services ECS, EKS and Kubernetes on EC2 can take advantage of Spot Instances, leading to average savings of 70% compared to On-Demand instances. In this session we explore the characteristics of Spot Instances and how they can easily be used on AWS. We also learn how Spreaker uses Spot Instances to run applications of various kinds, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. AWS and FinConecta would therefore like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda :
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make Your Startup's Offering Unique in the Market with Machine Learning Services (Amazon Web Services)
To create value and build a differentiated, recognisable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, makes it possible to customise and create the differentiating elements of your offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, including through a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: Automate the Management and Deployment of Your EC2 Instances (Amazon Web Services)
With the traditional approach to IT, implementing DevOps techniques was difficult for many years: they often involved manual activities, occasionally leading to application downtime that interrupted user operations. With the advent of the cloud, DevOps techniques are now within everyone's reach at low cost for any kind of workload, guaranteeing greater system reliability and delivering significant business-continuity improvements.
AWS offers AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet workloads.
Discover how to leverage AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to Support Your Windows Workloads (Amazon Web Services)
Want to know your options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we discuss the options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis powered by artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are organising a free virtual event next Wednesday 14 October from 12:00 to 13:00 dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a broad range of AWS services, fully exploiting the potential of the AWS cloud while protecting existing VMware investments.
Build Your First Serverless Ledger-Based App with QLDB and NodeJS (Amazon Web Services)
Many companies today build applications with ledger-style functionality, for example to verify the history of credits and debits in banking transactions, or to track the flow of their products through the supply chain.
At the heart of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB eliminates the need to build custom, complex systems by providing a fully managed serverless ledger database.
In this session we will discover how to build a complete serverless application that uses QLDB's capabilities.
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for delivering a great experience to end users. In this session we will learn how to address modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dig into several scenarios, understanding how AppSync can help solve these use cases by creating modern APIs with real-time and offline data update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
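As a hedged illustration of the kind of GraphQL API a real-time feed like this might expose, here is a schema fragment and a subscription operation. The type and field names (`Score`, `onScoreUpdate`, `matchId`) are hypothetical, not Sky Italia's actual API.

```python
# Hypothetical GraphQL schema fragment and subscription for a live-score
# feed of the kind an AppSync backend might serve. All type and field
# names are illustrative placeholders.
schema = """
type Score {
    matchId: ID!
    home: Int!
    away: Int!
}

type Subscription {
    onScoreUpdate(matchId: ID!): Score
}
"""

# A client subscribes once and then receives pushed updates whenever the
# corresponding mutation fires on the server:
subscription = """
subscription LiveScore($matchId: ID!) {
    onScoreUpdate(matchId: $matchId) {
        home
        away
    }
}
"""
```

The real-time behaviour comes from the subscription type: AppSync pushes a new `Score` payload to every subscribed client each time the linked mutation runs, rather than clients polling for changes.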
Oracle Database and VMware Cloud™ on AWS: Myths to Debunk (Amazon Web Services)
Many organisations reap the benefits of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernisation and refactoring, on top of which performance risks can be introduced when moving applications out of on-premises data centres.
In these slides, AWS and VMware experts present simple, practical tips to ease and simplify the migration of Oracle workloads while accelerating the transformation to the cloud; they dive deeper into the architecture and demonstrate how to take full advantage of VMware Cloud™ on AWS.
1) The document discusses building a minimum viable product (MVP) using Amazon Web Services (AWS).
2) It provides an example of an MVP for an omni-channel messenger platform, built starting in 2017, that connects ecommerce stores to customers via web chat, Facebook Messenger, WhatsApp, and other channels.
3) The founder discusses how they started with an MVP in 2017 with 200 ecommerce stores in Hong Kong and Taiwan, and have since expanded to over 5000 clients across Southeast Asia using AWS for scaling.
This document discusses pitch decks and fundraising materials. It explains that venture capitalists will typically spend only 3 minutes and 44 seconds reviewing a pitch deck. Therefore, the deck needs to tell a compelling story to grab their attention. It also provides tips on tailoring different types of decks for different purposes, such as creating a concise 1-2 page teaser, a presentation deck for pitching in-person, and a more detailed read-only or fundraising deck. The document stresses the importance of including key information like the problem, solution, product, traction, market size, plans, team, and ask.
This document discusses building serverless web applications using AWS services like API Gateway, Lambda, DynamoDB, S3 and Amplify. It provides an overview of each service and how they can work together to create a scalable, secure and cost-effective serverless application stack without having to manage servers or infrastructure. Key services covered include API Gateway for hosting APIs, Lambda for backend logic, DynamoDB for database needs, S3 for static content, and Amplify for frontend hosting and continuous deployment.
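A minimal sketch of the backend-logic piece of such a stack: a Lambda handler behind an API Gateway proxy integration. The route and payload here are hypothetical, and the DynamoDB and S3 calls the abstract mentions are omitted to keep the sketch self-contained.

```python
# Minimal sketch of a Lambda handler behind API Gateway (proxy
# integration). The event shape follows the proxy-integration format;
# the route and payload are hypothetical examples.
import json

def handler(event, context):
    # Query-string parameters arrive as a dict (or None when absent).
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Simulate the event API Gateway would deliver for GET /?name=RDS:
response = handler({"queryStringParameters": {"name": "RDS"}}, None)
```

Because the handler only depends on the incoming event, it can be unit-tested locally exactly like this, with no servers or infrastructure involved.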
This document provides tips for fundraising from startup founders Roland Yau and Sze Lok Chan. It discusses generating competition to create urgency for investors, fundraising in parallel rather than sequentially, having a clear fundraising narrative focused on what you do and why it's compelling, and prioritizing relationships with people over firms. It also notes how the pandemic has changed fundraising, with examples of deals done virtually during this time. The tips emphasize being fully prepared before fundraising and cultivating connections with investors in advance.
AWS_HK_StartupDay_Building Interactive websites while automating for efficien...Amazon Web Services
This document discusses Amazon's machine learning services for building conversational interfaces and extracting insights from unstructured text and audio. It describes Amazon Lex for creating chatbots, Amazon Comprehend for natural language processing tasks like entity extraction and sentiment analysis, and how they can be used together for applications like intelligent call centers and content analysis. Pre-trained APIs simplify adding machine learning to apps without requiring ML expertise.
Amazon Elastic Container Service (Amazon ECS) è un servizio di gestione dei container altamente scalabile, che semplifica la gestione dei contenitori Docker attraverso un layer di orchestrazione per il controllo del deployment e del relativo lifecycle. In questa sessione presenteremo le principali caratteristiche del servizio, le architetture di riferimento per i differenti carichi di lavoro e i semplici passi necessari per poter velocemente migrare uno o più dei tuo container.
Introduction to dosage forms and routes of drug administrationDefinition, the need for dosage forms, classification, overview of dosage form design
❖ Introduction to pharmaceutical ingredients (definition, importance)
❖ Routes of administration
Green Synthesis of Magnetic Nanoparticles and Their Biological application.pptxAhmedSaeed181245
Description:
This presentation explores the innovative green synthesis methods of magnetic nanoparticles (MNPs) and their diverse applications in biology. It covers the synthesis techniques emphasizing environmental sustainability, the unique properties of MNPs, and their role in biomedical applications such as targeted drug delivery, imaging, and biosensing. The presentation also discusses challenges, future directions, and the potential impact of MNPs in advancing biotechnological and medical fields.
2. AWS Data Services to Accelerate Your Move to the Cloud
• Databases to Elevate your Apps – Relational (RDS Open Source, RDS Commercial, Aurora), Non-Relational & In-Memory (DynamoDB & DAX, ElastiCache)
• Analytics to Engage your Data – Inline, Data Warehousing, Reporting, Data Lake (EMR, Redshift, Redshift Spectrum, Athena, Elasticsearch Service, QuickSight, Glue)
• Amazon AI to Drive the Future – Deep Learning, MXNet (Lex, Polly, Rekognition, Machine Learning)
• Migration for DB Freedom – Database Migration, Schema Conversion
3. Amazon RDS
• Multi-engine support – Aurora, MySQL, MariaDB, PostgreSQL, Oracle, SQL Server
• Automated provisioning, scaling, patching, backup/restore
• High availability with RDS Multi-AZ and auto-failover – 99.95% SLA for Multi-AZ deployments
• Security
• Monitoring
4. Provisioning and Effortless Scaling
• Handle higher load or lower usage
• Naturally grow over time
• Control costs
5. Read Replicas
• Bring data close to your customers’ applications in different regions
• Relieve read pressure on your master node, leaving it free to support writes
• Promote a Read Replica to master for faster recovery in the event of disaster
6. High Availability: Multi-AZ Deployments
• Enterprise-grade fault-tolerance solution for production databases
• Automatic failover
• Synchronous replication
• Inexpensive and enabled with one click
7. Security and Compliance
• Network isolation
• Database instance IP firewall protection
• AWS IAM based resource-level permission controls
• Encryption at rest using AWS KMS or Oracle/Microsoft TDE
• SSL protection for data in transit
• Assurance programs for finance, healthcare, government and more
8. Amazon Virtual Private Cloud (Amazon VPC)
• Securely control network configuration – e.g. a 10.1.0.0/16 VPC with a 10.1.1.0/24 subnet in an Availability Zone of an AWS Region
• Manage connectivity – VPN connection, VPC peering, internet gateway, AWS Direct Connect, routing rules
9. Security Groups – database IP firewall protection
Each rule is a combination of protocol, port range, and source:
• TCP 3306 from 172.31.0.0/16 (e.g. a corporate address range used by admins)
• TCP 3306 from the “Application security group” (the application tier)
10. IAM-governed access
You can use AWS Identity and Access Management (IAM) to control who can perform actions on RDS.
• Users and applications accessing your database are controlled with database grants
• DBA and Ops actions on the RDS service are controlled with IAM
11. At-Rest Encryption for All RDS Engines
AWS Key Management Service (KMS) – two-tiered key hierarchy using envelope encryption:
• A unique data key encrypts customer data
• AWS KMS master keys encrypt the data keys
• Available for ALL RDS engines
Benefits:
• Limits risk of a compromised data key
• Better performance for encrypting large data
• Easier to manage a small number of master keys than millions of data keys
• Centralized access and audit of key activity
(Diagram: customer master key(s) wrap Data keys 1–4, used by RDS instances 1–3 and a custom application.)
13. Compliance
• Aurora: SOC 1, 2, 3; ISO 27001/9001; ISO 27017/27018; PCI; HIPAA BAA
• MySQL: SOC 1, 2, 3; ISO 27001/9001; ISO 27017/27018; PCI; FedRAMP; HIPAA BAA; UK Gov. Programs; Singapore MTCS
• MariaDB: SOC 1, 2, 3; ISO 27001/9001; ISO 27017/27018; PCI
• PostgreSQL: SOC 1, 2, 3; ISO 27001/9001; ISO 27017/27018; PCI; FedRAMP; HIPAA BAA; UK Gov. Programs; Singapore MTCS
• Oracle: SOC 1, 2, 3; ISO 27001/9001; ISO 27017/27018; PCI; FedRAMP; HIPAA BAA; UK Gov. Programs; Singapore MTCS
• SQL Server: SOC 1, 2, 3; ISO 27001/9001; ISO 27017/27018; PCI; UK Gov. Programs; Singapore MTCS
14. Standard Monitoring
Amazon CloudWatch metrics for Amazon RDS: CPU utilization, storage, memory, swap usage, DB connections, I/O (read and write), latency (read and write), throughput (read and write), replica lag, and many more.
Amazon CloudWatch Alarms – similar to on-premises custom monitoring tools.
15. Enhanced Monitoring
Access to over 50 new CPU, memory, file system, and disk I/O metrics at intervals as low as 1 second.
17. Airbnb – Amazon RDS for MySQL
• Airbnb moved its main MySQL database to Amazon RDS with only 15 minutes of downtime
• RDS takes on many of the time-consuming administrative tasks associated with databases, so engineers can spend more time on features
• Uses asynchronous master-slave replication, launched via the RDS console or an API call, to improve website performance
• Leverages multi-Availability Zone (Multi-AZ) deployments for high availability
18. The Forrester Wave™ is copyrighted by Forrester Research, Inc. Forrester and Forrester Wave™ are trademarks of Forrester
Research, Inc. The Forrester Wave™ is a graphical representation of Forrester's call on a market and is plotted using a detailed
spreadsheet with exposed scores, weightings, and comments. Forrester does not endorse any vendor, product, or service depicted in
the Forrester Wave. Information is based on best available resources. Opinions reflect judgment at the time and are subject to change.
The Forrester Wave™: Database-As-A-Service, Q2 2017
20. Key Questions We Asked
• What if we started from a clean sheet of paper, with the only constraint being that the database was a relational database?
• Could we offer much better performance by leveraging the massive scale of our cloud?
• Could we give you a database with designed durability indistinguishable from 100% and availability of 99.99%?
• …And could we be better and cheaper than the 30-year-old commercial databases in use today?
21. Amazon RDS for Aurora
A new relational database engine, built from the ground up to leverage AWS – the fastest-growing service in AWS history.
• MySQL compatible, with up to 5x better performance on the same hardware: 100,000 writes/sec and 500,000 reads/sec
• Scalable, with up to 64 TB in a single database and up to 15 read replicas
• Highly available, durable, and fault-tolerant custom SSD storage layer: 6-way replicated across 3 Availability Zones
• Transparent encryption for data at rest using AWS KMS
• Stored procedures in Amazon Aurora can invoke AWS Lambda functions
23. Use case: Near real-time analytics and reporting
(Diagram: a master and read replicas on a shared distributed storage volume, fronted by a reader endpoint.)
A customer in the travel industry migrated to Aurora for their core reporting application, accessed by ~1,000 internal users.
• Replicas can be created, deleted, and scaled within minutes based on load
• Read-only queries are load balanced across the replica fleet through a DNS endpoint – no application configuration needed when replicas are added or removed
• Low replication lag allows mining fresh data with no delay, immediately after it is loaded
• Significant performance gains for core analytics queries – some queries executing in 1/100th the original time
► Up to 15 promotable read replicas
► Low replica lag – typically < 10 ms
► Reader endpoint with load balancing
24. Amazon Aurora is now PostgreSQL-compatible
• PostgreSQL 9.6 compatibility, with support for PostGIS
• All the features you expect from Amazon Aurora, including 15 read replicas with <10 ms lag, shared storage, failover without data loss, 6-way replication across 3 Availability Zones, and encryption with AWS KMS
• Available now in preview
25. Amazon RDS Performance Insights
• Easy – simplify monitoring from the AWS Management Console
• Powerful – the database load view identifies database bottlenecks and their source (top SQL, max CPU)
• Adjustable time frame – hour, day, week, and longer
26. AWS Database Migration Service
• Fully managed service for migration from on-premises to the AWS Cloud with minimal downtime
• Migrates data to and from all widely used commercial and open-source DBs
• Schema Conversion Tool converts source DB schemas, stored procedures, and application code to a different target format
• Supports homogeneous and heterogeneous data replication
• A terabyte-sized DB can be migrated for as little as $3
27. Database Conversion Capabilities in SCT
Source Database → Target Database
• Microsoft SQL Server → Amazon Aurora, MySQL, PostgreSQL
• MySQL, MariaDB → Amazon Aurora, PostgreSQL
• Oracle → Amazon Aurora, MySQL, PostgreSQL
• Oracle Data Warehouse → Amazon Redshift
• PostgreSQL → Amazon Aurora, MySQL
• Teradata, Netezza, Greenplum → Amazon Redshift
• HP Vertica, SQL Server DW → Amazon Redshift
• MongoDB → Amazon DynamoDB
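The conversion matrix above can be expressed as a small lookup table. The mapping mirrors the slide; the helper function and its name are illustrative, not part of any AWS tool.

```python
# SCT source-to-target support, flattened into one entry per source engine.
SCT_TARGETS = {
    "Microsoft SQL Server": ["Amazon Aurora", "MySQL", "PostgreSQL"],
    "MySQL": ["Amazon Aurora", "PostgreSQL"],
    "MariaDB": ["Amazon Aurora", "PostgreSQL"],
    "Oracle": ["Amazon Aurora", "MySQL", "PostgreSQL"],
    "Oracle Data Warehouse": ["Amazon Redshift"],
    "PostgreSQL": ["Amazon Aurora", "MySQL"],
    "Teradata": ["Amazon Redshift"],
    "Netezza": ["Amazon Redshift"],
    "Greenplum": ["Amazon Redshift"],
    "HP Vertica": ["Amazon Redshift"],
    "SQL Server DW": ["Amazon Redshift"],
    "MongoDB": ["Amazon DynamoDB"],
}

def can_convert(source, target):
    """True if SCT supports converting `source` schemas to `target`."""
    return target in SCT_TARGETS.get(source, [])

assert can_convert("Oracle", "PostgreSQL")
assert not can_convert("PostgreSQL", "Amazon Redshift")
```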
29. Heterogeneous Migration
• Oracle in a private DC migrated to RDS PostgreSQL
• Used the AWS Schema Conversion Tool to convert the database schema
• Used ongoing replication (CDC) to keep the databases in sync until the cutover window
• Benefits:
  • Improved reliability of the cloud environment
  • Savings on Oracle licensing costs
  • The SCT Assessment Report let them understand the scope of the migration
Intro myself. What I do. Time with Amazon/AWS
We are going to go through RDS as a whole, providing as much education and guidance as possible, while also going deeper into some areas to help you understand how RDS works and how it can benefit the applications and workloads you run on it.
I have a few quick poll questions so I better understand who my audience is today:
Who is using RDS today?
Who is looking to move to RDS?
Who is running a database, but on EC2?
AWS offers a broad portfolio, across Databases, Analytics, and Artificial Intelligence, driven by customer requests. Over 90% of new features and services are driven by customer feedback.
Our customers’ demands are usually driven by a need to make application experiences better. Better means faster to deploy, easier to run, more reliable, and less complex & less expensive to operate.
For databases, we have a mix of relational Open Source, Commercial Engines, and Amazon-created Aurora; for the greatest scalability and performance we have non-relational Open Source and Amazon-created DynamoDB.
Our customers use a range of analytics, as many are migrating from appliances to on-demand Cloud services with decoupled data, again with a number of Open Source offerings plus Amazon-created Redshift and Redshift Spectrum. We also have the support services needed to enable ETL, serverless query, and visualization.
Artificial Intelligence is a growing part of the AWS portfolio: deep learning platforms, machine learning, and finished services that enable developers to easily add voice and image capabilities to applications.
Our customers use migration for all purposes: both modernization of existing applications and moving between database options, as much for analytics as relational and non-relational databases.
Choice of engines – the same database engines you use on-premises. You should expect custom applications to work with minimal adjustments.
Speaking of the MANAGED database service, the whole point is to enable you to focus on what differentiates your organization.
We handle the provisioning, give you click-to-scale capability, handle patching, backup/restore.
We cover its HA with Multi-AZ deployment and automated failover.
Security and Monitoring
If you choose Multi-AZ, the service will launch the RDS instance in one of the AZs available to you in the region you are launching in. If you choose single-AZ, it will launch in the AZ you choose.
Call out that you can modify your environment from single AZ to multi-AZ.
When you are running in a Multi-AZ configuration, this is how things work when there is a switch from the primary to the standby. This is a key feature of this being a managed service.
When one of the failure conditions that I talked about in the previous slides is met the RDS service is going to switch the DNS name for your database from the primary to the standby. DNS failover typically takes 30-60 seconds. Once this happens you are now running against your standby database with no action required on your part.
As part of the failover the standby is going to go through the standard crash recovery that happens for that particular engine, just as it would do if it were running in your data center or running on EC2.
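The DNS-level switch described above can be sketched in a few lines. This is a conceptual illustration only; the class, endpoint name, and addresses are hypothetical, purely to show why clients need no reconfiguration when the CNAME target swaps.

```python
# Conceptual sketch of Multi-AZ failover: RDS repoints the instance's
# DNS CNAME from the primary to the standby, so clients keep connecting
# to the same endpoint. All names and addresses are made up.
class MultiAZEndpoint:
    def __init__(self, endpoint, primary, standby):
        self.endpoint = endpoint   # stable CNAME clients connect to
        self.primary = primary     # instance in AZ-a
        self.standby = standby     # synchronous replica in AZ-b
        self._target = primary

    def resolve(self):
        """What the endpoint CNAME currently points at."""
        return self._target

    def fail_over(self):
        """Swap the CNAME target; in RDS this typically takes 30-60 s."""
        self.primary, self.standby = self.standby, self.primary
        self._target = self.primary
        return self._target

db = MultiAZEndpoint("mydb.abc123.eu-west-1.rds.amazonaws.com",
                     "10.1.1.10", "10.1.2.10")
assert db.resolve() == "10.1.1.10"
db.fail_over()                      # failure condition detected
assert db.resolve() == "10.1.2.10"  # clients now reach the old standby
```

After the swap, the old standby runs the engine's standard crash recovery, exactly as the notes describe.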
Provisioning is a great place to start our conversation on Managed Database service. All it takes on AWS, is a handful of clicks on the console or an API call and you have access to a healthy database in a few minutes.
But sizing hardware for the database is a challenge. It's like trying to swat a fly in mid-air: you know it's right there, but by the time you catch up to it, it's on a completely different trajectory.
With RDS, you address this with agility. You can pick from a variety of server sizes and storage types to get started. You have the flexibility to change out infrastructure as your applications gain adoption and complexity. You can scale up in anticipation of peak workloads and then back down!
Swapping out database servers can now be done in minutes, giving you a real ability to optimize cost and capacity – impossible to do on-premises.
Another way to scale the database is to use …
Read-replicas!
RDS makes it easy to create read replicas of your database and automatically keeps them in sync. You can offload the master database by pointing read workloads at the read replicas. Depending on the database engine, you can even set up read replicas in regions other than the master's, improving end-user experience by satisfying application reads from the nearest local region.
In addition to using read replicas for scalability, RDS provides a simple one-click mechanism to promote a read replica to master. This lets you mitigate the impact of failures or issues with your master database on your end users.
You can think of this as limited HA. RDS leverages log replication to keep the read replicas in sync, and there can be replication lag between the master and a read replica, so you could lose transactions if the master fails while a lag exists.
RDS provides a true Enterprise grade fault tolerance capability with synchronous replication in multi-AZ deployments.
It is enabled with just one-click and can be configured for automatic failover.
At AWS, security is a top priority. RDS leverages many features of the AWS platform to ensure security for the databases. So let's dive into these components, starting with networking.
First off, RDS instances are launched inside of a Virtual Private Cloud or VPC.
As many of you may already know a VPC gives you the ability to define a private network address space within AWS. You can also carve out different subnets within your VPC address space to further segment and isolate your application components, including your database.
So with your database launched inside of a VPC you now get to control which users and applications access your database and how they access it. By “HOW” we’re talking about network connectivity!
You can create a VPN connection from your corporate data center into the VPC so that you can access the database in a hybrid fashion.
Or you can use Direct Connect to link your data center to an AWS region giving you a connection with consistent performance
You can also use “VPC peering” to allow applications running in one VPC to access your database in another VPC.
You can grant public access to your database by attaching an internet gateway to your VPC
You can control the routing of your VPC using route tables that you attach to each of the subnets in your VPC
Another way to control access to the database is with Security Groups.
If you have used EC2 you are probably already familiar with security groups.
Think of a security group as a virtual firewall placed around your database environment.
A security group lets you specify access rules for your database. Each rule is a combination of protocol, port range, and the source of the traffic that you allow into your database.
For the source, you can specify an IP address, a CIDR block covering multiple IP addresses, or even another security group – meaning that, in this example, your RDS instance will accept traffic from all instances in the “Application security group”.
This gives you the flexibility to set up a multi-tier architecture where you only allow connections from the tiers that actually need to access the database.
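The rule evaluation just described can be sketched with Python's standard `ipaddress` module. The rule shapes, addresses, and function name are hypothetical; real evaluation happens inside the VPC network layer, not in application code.

```python
# Toy evaluation of security-group inbound rules: a connection is allowed
# only if some rule matches its protocol, port, and source IP.
import ipaddress

rules = [
    {"protocol": "tcp", "port": 3306, "source": "172.31.0.0/16"},   # app subnet
    {"protocol": "tcp", "port": 3306, "source": "203.0.113.5/32"},  # admin host
]

def allowed(protocol, port, source_ip):
    """True if any inbound rule matches this connection attempt."""
    ip = ipaddress.ip_address(source_ip)
    return any(
        r["protocol"] == protocol
        and r["port"] == port
        and ip in ipaddress.ip_network(r["source"])
        for r in rules
    )

assert allowed("tcp", 3306, "172.31.44.7")       # inside the app CIDR
assert not allowed("tcp", 3306, "198.51.100.9")  # unknown source -> dropped
assert not allowed("tcp", 5432, "172.31.44.7")   # wrong port -> dropped
```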
IAM is about the WHO gets to access the database. RDS uses IAM capabilities like users, roles and privileges to enforce controls around who gets to launch, terminate or modify the database instance.
This is different from database or applications users who are typically authenticated by the database itself.
KMS is a managed service: scalability, reliability, durability, security, key deletion, key creation, key rotation.
You get to focus on your application and integrating with the service.
For RDS you get an extra layer of protection around access to the underlying storage
Encryption uses industry standard AES-256 encryption to protect the data.
How you enable encryption for RDS
CloudTrail integration
Not only encrypting the DB Instance Storage but also…
Automated Backups
Read Replicas
Snapshots
Available for all six engines at no additional cost
--------------------
So here is how RDS provides you the ability to encrypt your data.
It is done using the AWS Key Management Service (KMS)
KMS is a managed service that lets you create and manage encryption keys, and then encrypt and decrypt your data with those keys. All of these keys are tied to your own AWS account and are fully managed by you. KMS takes care of the availability, scaling, security, and durability that you would normally have to handle when implementing your own key store. With KMS performing these management tasks, you can focus on using the keys and building out your application. On top of all this, the KMS team has focused on making it all very easy to use.
Specific to RDS, KMS gives you an additional layer of protection against unauthorized access to the underlying storage of your RDS instance.
For the data encryption KMS uses industry standard AES-256 encryption to protect the data that is stored on the underlying host that your RDS instance is running on.
A little bit on how this works.
We use a two-tiered key hierarchy for encrypting data on your RDS instance.
When you launch an RDS instance, you can choose to make the database encrypted. If you do nothing else besides turning on encryption, an AWS-managed key for RDS will be created (or the existing RDS customer master key will be used). This is good if you just want encryption and don't want to think about anything else related to the key; once created, this key can only be used for RDS encryption and not with any other AWS service, so you have immediately scoped down its use. Alternatively, you can create your own customer master key within KMS and reference it when creating your RDS instance. Creating your own key gives you much more control over its use: when it is enabled or disabled, when it is rotated, and what its access policies are. Whichever approach you take, master keys cannot leave the service – you cannot export a plaintext version of the key. All you can do is ask that encrypt and decrypt operations happen against it.
When RDS wants to encrypt data on the instance, it makes a call to KMS with the necessary credentials, and KMS returns a data key that is actually used to encrypt the data on that instance. That data key is itself encrypted under the master key that was created or specified at instance creation, and it is specific to that RDS instance – it will not be used with any other. Without access to this data key, the data within RDS is useless to someone reaching it outside the normal path. So you have an overall master key that vends out a data key for each instance you choose to encrypt.
-------
Some of the benefits of using this service.
Encryption and decryption are handled transparently so you don’t have to modify your application to access your data.
Limited risks of a compromised data key
Better performance for encrypting large data sets and there is no performance penalty for using KMS encryption with your RDS instance.
You only have to manage a few master keys and not many data keys
You get centralized access and audit of key activity via CloudTrail, so you can see every time a key from your KMS configuration is accessed or used.
--------
Talk about why you should trust AWS with your keys
There are no tools in place to access your physical key material.
Your plaintext keys are never stored in nonvolatile memory.
You control who has permissions to use your keys.
Separation of duties between systems that use master keys and ones that use data keys.
Multiparty controls for all maintenance of KMS systems that use your master keys.
Third-party evidence of these controls:
Service Organization Control (SOC 1)
PCI-DSS
See AWS Compliance packages for details
----
Enhance this slide to show some different RDS clusters each with a different data key.
One of the reasons that customers are concerned with security is that they have certain compliance requirements that they have to meet
AWS has lots of different customers in the startup, enterprise, and public sector space, running a wide range of applications and workloads across many industries. Some of these applications must meet certain regulatory requirements. To support the audit and compliance needs of these customers, AWS has worked to achieve certifications so that you can run your workloads on RDS and build a fully compliant application.
So for RDS we currently offer 9 different compliance attestations, including SOC 1, 2, & 3, FedRAMP, PCI DSS, and HIPAA.
With these compliance certifications for RDS it means that you can build and run your applications and workloads, related to these compliance bodies, on RDS. However, it does not relieve you of the responsibility of making sure that your application meets the appropriate compliance requirements. AWS takes responsibility for the compliance of the RDS service and the infrastructure related to it and you, the customer, take care of the application that you have built on top of AWS. With this model you can take the audit findings from your application and combine them with the appropriate attestation from AWS’s third party auditors and have a complete verification of compliance for your application running on AWS.
----------------
Chart of what is certified and what is not is here: https://w.amazon.com/index.php/AWS_Security_Assurance/Compliance/Scope
Here is a listing of our compliance certifications and which database engines they currently apply to on RDS. You can see here that we have a wide range of certifications covering MySQL, Oracle, SQL Server, and PostgreSQL. As you can see there is a lot of coverage among these four different engines around key areas of compliance.
This slide needs to flow relatively quick.
For everyone using RDS you get a collection of 15-18 metrics that are automatically made available to you, depending on the type of instance that you are using. RDS sends the necessary information for these metrics to Amazon CloudWatch and then you are able to view the metrics in the RDS console, in the CloudWatch console, or via the CloudWatch APIs. These metrics are made available to you at one minute intervals.
With this monitoring you can keep an eye on the performance of your database around key items like CPU utilization, memory, storage, latency, and any lag between your master and read replica databases. You can view this information in individual graphs, multiple graphs, or even pull them into your own monitoring tool.
Additionally, you can also take these metrics and build alarms based on thresholds that are meaningful to you. Then whenever you go above that threshold you can have a notification or other action take place that helps to respond to that metric being outside of its normal boundaries. We will actually take a look at an example of what you can do there a bit later in this presentation.
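A threshold alarm of the kind described above can be sketched as follows. The evaluation logic and names are illustrative only, not the CloudWatch API: CloudWatch fires when a metric breaches its threshold for a configured number of consecutive evaluation periods.

```python
# Minimal sketch of a CloudWatch-style threshold alarm: the alarm fires
# when the last `periods` datapoints all exceed the threshold.
def alarm_state(datapoints, threshold, periods):
    """Return 'ALARM' if the most recent `periods` values all breach
    `threshold`, else 'OK'."""
    recent = datapoints[-periods:]
    if len(recent) == periods and all(v > threshold for v in recent):
        return "ALARM"
    return "OK"

cpu = [35.0, 42.0, 81.5, 84.0, 88.2]   # e.g. CPUUtilization, 1-min periods
assert alarm_state(cpu, threshold=80.0, periods=3) == "ALARM"
assert alarm_state(cpu, threshold=80.0, periods=5) == "OK"
```

In CloudWatch the ALARM transition would then trigger a notification or other action, as the notes describe.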
---------------
Basic cloudwatch metrics - http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/rds-metricscollected.html
This slide needs to flow relatively quick
Turning on “Enhanced Monitoring” gives you access to 37 additional metrics, with finer granularity – from 1-minute down to 1-second intervals.
A few of the metrics that you get with enhanced monitoring are: Free Memory, Active Memory, Load Average, and how much of the file system you have used.
You might be asking what the difference is between basic monitoring and enhanced monitoring, and why they are not delivered the same way. Standard monitoring is based on what the hypervisor running your database instance can see; there are OS-level metrics the hypervisor cannot see, so we use an agent running on your RDS instance to collect the metrics for enhanced monitoring.
Enhanced monitoring is available for: all six engines
-------
Discussed in a section on this page: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.html
Available on all classes except t1.micro and m1.small
Netflix, having leveraged AWS extensively, also built some home-grown solutions to monitor the security of their AWS configurations, and recently announced the open-sourcing of one of them. Cloud Security Architect Jason Chan provided some perspective on their use of RDS for PostgreSQL: "We recently announced the open source availability of Security Monkey, our solution for monitoring and analyzing the security of our Amazon Web Services configurations. We leveraged RDS PostgreSQL to capture the security data gathered by our solution. Building an application backed by production ready PostgreSQL database with Multi-AZ high availability, automated backups, patching and upgrades handled by RDS helped us focus on our development to bring this powerful open source solution to the community."
Airbnb moved its main MySQL database to RDS because it automatically takes care of the time-consuming administrative tasks, making difficult jobs like replication and scaling easy with simple API calls. Airbnb uses Multi-AZ deployments to automate database replication and durability.
What if we started with a clean sheet of paper?
could we create a modern database with massive scalability
Durability indistinguishable from 100% and 99.99% availability
FIVE highlights:
#1 MySQL compatible with 5x better performance
#2 Scalable to 64 TB with 15 read replicas
#3 HA, durable, and fault-tolerant SSD storage with 6-way replication across 3 AZs!!
#4 Transparent encryption for DATA AT REST with KMS
#5 Stored procedures that invoke AWS Lambda
Use Aurora for a near real-time analytics and reporting app with about 1,000 users.
They leverage Aurora's read replicas, creating, scaling and deleting replicas based on load.
The app connects via a reader endpoint, which load balances the read workload across the instances as the cluster grows and shrinks, with no change to the app.
Low replication lag; < 10ms means they're always working on current data.
Significant performance gains with some queries executing in 1/100th of the original time.
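The reader-endpoint behaviour described above can be sketched as a simple rotation. Round-robin here stands in for the DNS-based balancing Aurora actually performs, and all names are hypothetical; the point is that the application holds one endpoint while replicas come and go behind it.

```python
# Sketch of a reader endpoint spreading read-only queries across the
# replica fleet. The app never changes configuration as the fleet scales.
from itertools import cycle

class ReaderEndpoint:
    def __init__(self, replicas):
        self._rotation = cycle(list(replicas))

    def next_replica(self):
        """Pick the replica that serves the next read-only query."""
        return next(self._rotation)

readers = ReaderEndpoint(["replica-1", "replica-2", "replica-3"])
picks = [readers.next_replica() for _ in range(6)]
assert picks == ["replica-1", "replica-2", "replica-3"] * 2
```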
PostgreSQL 9.6 with support for PostGIS
All the same features of Aurora: 15 read replicas, <10 ms lag, 6-way replication across 3 AZs, KMS for data-at-rest encryption.
Talk Track:
With Amazon Aurora Postgres, we are also announcing a new database monitoring feature called Performance Insights.
Performance Insights is designed to help customers quickly assess whether there are any performance bottlenecks in their relational database workloads and where to take action.
It collects detailed database performance data through lightweight mechanisms and uses the data to drive an intuitive graphical interface that provides a simple and complete view of recent database performance.
For questions:
Will be rolling out incrementally to all RDS engines over 2017
First release will be on Postgres compatible edition of Aurora
See the preview in the demo grounds
Feature is free.
Will be supported on all instances except micros
Feature will keep 35 days of historical data
We're doing a session at 2:30 on DMS
A fully managed solution for migrating databases from on-premises to the AWS Cloud with minimal downtime.
Works for OLTP and DW databases; homogeneous and heterogeneous database migrations.
A 1 TB database can be migrated for as little as $3.
Moved Oracle from on-premises to AWS RDS PostgreSQL.
Leveraged SCT – the SCT Assessment Report to understand the scope of the effort, and then the tool itself for the actual schema migration.
Used DMS to move the data with minimal downtime.
Benefits:
Improved reliability of the cloud environment
Savings on Oracle licensing costs.
And that’s it.
I really appreciate your attendance at this session today and I hope you got a lot of useful information out of this and that it helps you move forward in better utilizing the RDS service.
Don’t forget to fill out your surveys as it helps us to make sure that we are delivering the best content possible to you.