This document discusses purpose-built databases and managed database services on AWS. It begins by explaining how data needs are rapidly expanding and changing due to factors like microservices and analytics. It then introduces several purpose-built AWS databases like Amazon Aurora, DynamoDB, DocumentDB, ElastiCache, and Neptune that are optimized for different use cases. Benefits highlighted include performance, scalability, availability, and that they are fully managed. Two customer examples of Duolingo and Capital One migrating to AWS databases are provided. The document concludes by discussing the advantages of moving to managed databases on AWS over self-managed databases.
1) AWS Outposts allow customers to run compute and storage on-premises using the same AWS infrastructure, APIs, and tools that are used in AWS regions.
2) Outposts are rack-sized physical infrastructure deployed on the customer's premises that is managed and operated by AWS.
3) Customers can launch and run EC2 instances, EBS volumes, and other AWS services locally on Outposts to process workloads requiring low latency or local data access.
This document provides an overview of best practices for defending against DDoS attacks, including:
- Discussion of DDoS threats and trends over time, with attack sizes increasing significantly.
- Introduction to AWS Support and AWS Shield services for DDoS mitigation. AWS Shield provides automatic baseline protection across all AWS customers and additional enhanced protection can be purchased.
- Examples of real customer situations where AWS Support and the DDoS Response Team assisted in investigating and mitigating DDoS attacks on applications and servers. Recommendations are provided such as using AWS WAF and CloudFront for improved security and performance.
The document discusses building a modern data lake platform on AWS. It describes how AWS services like S3, Lake Formation, Glue, Athena, and others can be used to ingest raw data from various sources in real-time and at scale. The data is then stored, cataloged, and prepared for analysis using services like EMR, SageMaker, and Redshift to power analytics, machine learning, and business intelligence.
This document discusses using the AWS Cloud Development Kit (CDK) and CDK for Kubernetes (CDK8S) to define infrastructure and Kubernetes applications. CDK allows defining cloud resources like VPCs, EC2 instances, and S3 buckets using templates and programming languages. CDK8S allows defining Kubernetes resources like deployments, services, and namespaces in the same way. Using CDK and CDK8S together provides a way to define all cloud infrastructure and Kubernetes applications from a single codebase in a developer's preferred programming language.
The AWS Workshop Series Online is a series of live webinars designed for IT professionals who want to leverage the AWS Cloud to build and transform their business, whether they are new to AWS or looking to further expand their skills and expertise. This series covers 'Introduction to Cloud Computing with Amazon Web Services'.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies managing Docker containers through an orchestration layer controlling deployment and lifecycle. In this session we present the service's main features, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
This document discusses serverless computing on AWS using AWS Lambda and AWS Fargate. It provides an overview of the anatomy of an AWS Lambda function, including the handler function, event and context objects, dependencies and common helper functions. It also describes how to structure serverless applications using Lambda functions and how AWS Lambda layers can be used to share common code between functions. Finally, it outlines the architecture of running serverless containers on AWS Fargate without having to manage servers or clusters.
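To make the Lambda anatomy described above concrete, here is a minimal Python handler in the shape Lambda expects, with a shared helper of the kind the talk mentions. The function and field names are illustrative, not taken from the original presentation:

```python
import json

# Common helper shared across handlers (illustrative, the kind of code
# one might also move into a Lambda layer).
def format_response(status, payload):
    return {"statusCode": status, "body": json.dumps(payload)}

def handler(event, context):
    """Entry point that Lambda invokes: `event` carries the input data,
    `context` carries runtime metadata (request ID, time remaining)."""
    name = event.get("name", "world")
    return format_response(200, {"message": f"Hello, {name}"})
```

Locally you can call `handler({"name": "Ada"}, None)` to exercise the function; in production, Lambda supplies both arguments.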
This document discusses event-driven architectures and how to build them using Amazon Web Services services. It covers how to use Amazon EventBridge to connect event sources and route events to targets like AWS Lambda. It also discusses using AWS Step Functions to coordinate functions and capture workflow status. Observability of event-driven systems is discussed through services like AWS X-Ray and CloudWatch.
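The routing idea behind EventBridge can be illustrated with a toy re-implementation of its event-pattern matching in Python. This is a sketch of the concept only, not the service's API; the sample rule and event are invented:

```python
def pattern_matches(pattern, event):
    """EventBridge-style matching: every key in the pattern must exist in
    the event; leaf values are lists of accepted values, nested dicts recurse."""
    for key, expected in pattern.items():
        value = event.get(key)
        if isinstance(expected, dict):
            if not isinstance(value, dict) or not pattern_matches(expected, value):
                return False
        elif value not in expected:
            return False
    return True

# A rule like this would route matching events to a target such as a Lambda function.
rule = {"source": ["aws.s3"], "detail": {"eventName": ["PutObject"]}}
event = {"source": "aws.s3", "detail": {"eventName": "PutObject", "bucket": "photos"}}
```

Events that match a rule's pattern are forwarded to that rule's targets; everything else is ignored.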
This document discusses building data lakes and analytics pipelines on Amazon Web Services (AWS). It recommends using AWS services like Amazon S3, AWS Glue, AWS Lake Formation, and Amazon Redshift to build scalable and secure data lakes that can ingest and process large amounts of data from various sources. It also highlights how AWS provides the most comprehensive set of analytics services, enables the easiest setup of data lakes, and offers the most scalable and cost-effective infrastructure for analytics workloads.
This document discusses how cloud computing is transforming enterprise IT by allowing companies to focus on their core business. It provides an overview of traditional on-premises IT structures and how companies are migrating to cloud-first models using AWS. The summary discusses establishing a Cloud Center of Excellence to lead the migration effort and building hybrid cloud architectures to break dependencies on legacy systems over time.
This document discusses a webinar on data lakes and analytics hosted by Karlos Correia and Claudio Chiba, AWS solutions architects for the public sector. The agenda covers what a data lake is, why organizations use data lakes, how data lakes expand traditional analytics approaches, and the benefits of data lakes such as centralized data storage and schema-on-read capabilities. Amazon S3 and AWS analytics services are positioned as enabling technologies for building data lakes.
This document summarizes a presentation about new features for container services on AWS. It discusses Amazon ECS capacity providers which allow applications to control infrastructure, running containers on AWS Fargate without managing servers, and using EKS on Fargate for serverless Kubernetes deployments. It also provides updates on container usage growth, new EKS features like Fargate profiles, and the benefits of event-driven architectures on AWS.
Modernizing Upstream Workflows with AWS Storage
Accelerating seismic data retrieval, getting better data protection and reliability, and providing a common AWS data platform for compute and graphic intensive processing, simulation and visualization workloads.
Modernizing and transforming exploration and production workflows with AWS Storage services
Capturing and processing streaming sensor data from remote oil rigs with Snowball Edge
Providing a Data Lake foundation for a next generation Digital Oilfield IoT analytics platform with Amazon S3
Speaker: John Mallory - AWS Storage Business Development Manager
In this presentation, Carl Bachor from AWS Professional Services takes us on a deep dive into SAP enterprise software and how it is implemented on the AWS cloud.
Simplifying data analysis with "Serverless" architectures: architectures and…
This document discusses serverless big data architectures using AWS services. It describes how serverless computing eliminates the need to provision and manage servers. AWS services like S3, DynamoDB, Kinesis, Lambda, Glue and Athena allow ingesting, storing, processing and analyzing data without managing infrastructure. These services scale automatically based on usage and provide built-in availability and fault tolerance.
This document discusses implementing Windows workloads on AWS. It provides a brief history of Windows support on AWS since 2008. It describes how line of business applications and corporate applications can be deployed on AWS through self-managed EC2 instances or managed RDS. It discusses why customers choose AWS for security, availability, performance, familiar environment, cost effectiveness and licensing options. The document concludes with next steps to get started with Windows workloads on AWS.
This document contains summaries of several startup companies and their use of AWS cloud services:
1. A robo-advisor startup scaled to over 20 billion page views per month and manages over $8 billion in assets. They built an entire insurance company on AWS in just 3 months.
2. A health startup in Sweden called KRY allows patients to meet with doctors via their phone. They have grown to over 100,000 users and 1% of Sweden's primary healthcare meetings. They run their entire operations on AWS services like EC2, RDS, ElastiCache, and Machine Learning.
3. Vivino, a wine database and review app, migrated their image storage from their own servers to AWS.
Customers who run SAP on AWS have lowered costs, improved performance, resilience, security, and agility. Application modernization can start with SAP at the core – but it can also start with machine learning, internet of things, big data and analytics. In this session, AWS is presenting and demonstrating use cases for modernizing IT systems that incorporates SAP. Customer Larsen & Toubro Infotech (LTI) shares their innovation agenda and journey to the cloud with AWS.
Harpreet Singh, SAP Solution Architect, Amazon Web Services
Transform Your Business with VMware Cloud on AWS: Technical Overview
VMware Cloud on AWS is a jointly engineered service that brings VMware’s enterprise class software-defined data center (SDDC) technologies to run on next-generation bare-metal AWS infrastructure—delivered as a cloud service. With VMware Cloud on AWS, not only will you be able to consume VMware products on AWS, but you will also be able to leverage AWS native services from virtual machines running within VMware Cloud on AWS. Come learn about the latest features and how you can leverage the best of both VMware and AWS for your environment.
Understanding AWS Managed Databases and Analytic Services - AWS Innovate Otta…
• Overview of database services to elevate your applications, analytic services to engage your data, and migration services to help you reach database freedom.
• Survey of how Canadian and other organizations are using the cloud to make data scalable, reliable, and secure.
The document discusses several Amazon Web Services related to databases and data warehousing. It describes Amazon Redshift, a fully managed data warehouse service; the purpose of data warehousing; Amazon ElastiCache, a web service for deploying Redis or Memcached in the cloud to improve application performance; Amazon DynamoDB Accelerator (DAX) which provides an in-memory cache for DynamoDB; AWS Database Migration Service which helps migrate databases to AWS easily and securely; and benefits of AWS DMS like simplicity, zero downtime, support for many databases, low cost, and reliability.
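The caching pattern behind ElastiCache and DAX can be illustrated with a small read-through cache with TTL expiry. This is a pure-Python sketch of the pattern, not either service's API; the real services manage the cache fleet for you:

```python
import time

class ReadThroughCache:
    """On a miss, fetch from the backing store and keep the result for `ttl` seconds."""
    def __init__(self, fetch, ttl=300, clock=time.monotonic):
        self.fetch = fetch      # function that reads the underlying database
        self.ttl = ttl
        self.clock = clock      # injectable clock, handy for testing
        self.store = {}         # key -> (value, expiry time)
        self.misses = 0

    def get(self, key):
        hit = self.store.get(key)
        now = self.clock()
        if hit is not None and hit[1] > now:
            return hit[0]       # served from memory, no database read
        self.misses += 1
        value = self.fetch(key)
        self.store[key] = (value, now + self.ttl)
        return value
```

Repeated reads of a hot key hit memory instead of the database, which is the performance win both services provide.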
The document provides an overview of Amazon Web Services (AWS) databases and analytics services. It summarizes that AWS has significantly expanded its database and analytics offerings between 2015-2018, with over 750 new features and 10 new services launched. It highlights several core AWS database and analytics services, including Amazon DynamoDB, Amazon RDS, Amazon Aurora, Amazon Neptune, and Amazon ElastiCache. It also discusses how customers are migrating workloads from on-premises databases to AWS databases and analytics services.
This document discusses AWS Database Migration Service (DMS) and how it can be used to automate database migrations between on-premises and cloud databases. It provides an overview of DMS features like minimal downtime, cost effectiveness, reliability and ongoing replication. It also lists the supported source and target databases for homogeneous and heterogeneous migrations. The document demonstrates how Terraform can be used to automate and manage the DMS migration process. It describes the AWS Schema Conversion Tool and how Terraform is helpful for infrastructure as code with DMS.
Getting Started with Managed Database Services on AWS - AWS Summit Tel Aviv 2017
In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We will cover how each service might help support your application and how to get started.
This document provides an overview of Amazon Relational Database Service (Amazon RDS). The key points summarized are:
- Amazon RDS supports multiple database engines including Aurora, MySQL, PostgreSQL, Oracle, and SQL Server. It provides automated provisioning and scaling of databases.
- RDS offers high availability through features like multi-AZ deployments with automatic failover and read replicas. It provides security, monitoring, and compliance capabilities.
- The document describes additional RDS features like provisioning and scaling databases, read replicas, security configurations, encryption, compliance certifications, and monitoring options. It provides examples of companies using RDS.
Introduction to Amazon Relational Database Service (Amazon RDS)
This document provides an introduction and overview of Amazon Relational Database Service (Amazon RDS). It discusses how RDS provides automated provisioning and scaling of database instances, high availability through multi-AZ deployments, security features including encryption and IAM access control, monitoring with CloudWatch, and migration services. It also introduces Amazon Aurora, a MySQL and PostgreSQL compatible database engine designed for the cloud that provides better performance than commercial databases.
Understanding AWS Managed Database and Analytics Services | AWS Public Sector…
The world is creating more data in more ways than ever before. The average internet user in 2017 generates 1.5GB of data per day, with the rate doubling every 18 months. A single autonomous vehicle can generate 4TB per day. Each smart manufacturing plant generates 1PB per day. Storing, managing, and analyzing this data requires integrated database and analytic services that provide reliability and security at scale. AWS offers a range of managed data services that let customers focus on making data useful, including Amazon Aurora, RDS, DynamoDB, Redshift, Spectrum, ElastiCache, Kinesis, EMR, Elasticsearch Service, and Glue. In this session, we discuss these services, share our vision for innovation, and show how our customers use these services today. Learn More: https://aws.amazon.com/government-education/
Migrating your Databases to AWS: Deep Dive on Amazon RDS and AWS Database Migration Service
The document provides information about migrating databases to AWS using Amazon Relational Database Service (RDS) and AWS Database Migration Service (DMS). It discusses:
- Key features of RDS such as provisioning databases quickly with high availability, security, backups and monitoring capabilities.
- How DMS allows migrating databases to AWS with minimal downtime by continuously replicating and migrating data between databases.
- Examples of customers who have successfully migrated large databases to AWS using RDS and DMS to improve scalability, availability and reduce costs compared to on-premises databases.
Migrating Your Oracle Database to PostgreSQL - AWS Online Tech Talks
Learning Objectives:
- Learn about the capabilities of the PostgreSQL database
- Learn about PostgreSQL offerings on AWS
- Learn how to migrate from Oracle to PostgreSQL with minimal disruption
Transformation Track AWS Cloud Experience Argentina - Databases on AWS
The document discusses different database options on AWS, including relational databases like Amazon RDS and Aurora, non-relational databases like DynamoDB, graph databases like Neptune, and data warehousing with Redshift. It provides examples of how customers like Airbnb and Duolingo use different AWS databases together based on data model and use case. Key benefits highlighted are scalability, manageability, security, and cost effectiveness of AWS databases.
This document discusses Amazon Web Services' database and analytics services. It begins by noting that 85% of businesses want to be data-driven but only 37% have been successful. It then presents the "data flywheel" concept of breaking from legacy databases, modernizing data infrastructure and data warehouses, turning data into insights, and building data-driven applications to gain momentum with data. The document provides overviews and benefits of AWS services like Amazon Aurora, Athena, Redshift, RDS, DMS, and Elasticsearch. It also introduces new capabilities for these services like machine learning with Aurora, RDS on Outposts, UltraWarm storage for Elasticsearch, and materialized views in Redshift.
Join us for a series of introductory and technical sessions on AWS Big Data solutions. Gain a thorough understanding of what Amazon Web Services offers across the big data lifecycle and learn architectural best practices for applying those solutions to your projects.
We will kick off this technical seminar in the morning with an introduction to the AWS Big Data platform, including a discussion of popular use cases and reference architectures. In the afternoon, we will deep dive into Machine Learning and Streaming Analytics. We will then walk everyone through building your first Big Data application with AWS.
by Joyjeet Banerjee, Enterprise Solution Architect, AWS
Amazon RDS allows you to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you to focus on your applications and business. We’ll discuss Amazon RDS fundamentals, learn about the seven available database engines, and examine customer success stories. Level 100
The cloud is all the rage. Does it live up to its hype? What are the benefits of the cloud? Join me as I discuss the reasons so many companies are moving to the cloud and demo how to get up and running with a VM (IaaS) and a database (PaaS) in Azure. See why the ability to scale easily, the speed with which you can create a VM, and the built-in redundancy are just some of the reasons that make moving to the cloud a “no brainer”. And if you have an on-prem datacenter, learn how to get out of the air-conditioning business!
Database Freedom is an AWS initiative that accelerates enterprise migrations from commercial database engines to AWS native database services or managed open-source systems. We review the basics of the Amazon purpose-built database strategy and cover our Workload Qualification Framework, which helps you determine a good database migration candidate and predict the level of effort. In the hands-on lab, you use AWS Schema Conversion Tool and AWS Database Migration Service to migrate your databases to Amazon Aurora PostgreSQL. Bring a laptop with Firefox or Chrome and a working AWS account. We provide an AWS CloudFormation template to configure the lab environment.
Caserta Concepts, Datameer and Microsoft shared their combined knowledge and a use case on big data, the cloud and deep analytics. Attendees learned how a global leader in the test, measurement and control systems market reduced their big data implementations from 18 months to just a few.
Speakers shared how to provide a business user-friendly, self-service environment for data discovery and analytics, and focus on how to extend and optimize Hadoop based analytics, highlighting the advantages and practical applications of deploying on the cloud for enhanced performance, scalability and lower TCO.
Agenda included:
- Pizza and Networking
- Joe Caserta, President, Caserta Concepts - Why are we here?
- Nikhil Kumar, Sr. Solutions Engineer, Datameer - Solution use cases and technical demonstration
- Stefan Groschupf, CEO & Chairman, Datameer - The evolving Hadoop-based analytics trends and the role of cloud computing
- James Serra, Data Platform Solution Architect, Microsoft, Benefits of the Azure Cloud Service
- Q&A, Networking
For more information on Caserta Concepts, visit our website: http://casertaconcepts.com/
Database migration: simple, cross-engine and cross-platform migrations with AWS Database Migration Service
Learn how you can migrate databases with minimal downtime from on-premises and Amazon EC2 environments to Amazon RDS, Amazon Redshift, Amazon Aurora and EC2 databases using AWS Database Migration Service. We discuss homogeneous (e.g. Oracle-to-Oracle, PostgreSQL-to-PostgreSQL, etc.) and heterogeneous (e.g. Oracle to Aurora, SQL Server to MariaDB) database migrations. We also talk about the new AWS Schema Conversion Tool that saves you development time when migrating your Oracle and SQL Server database schemas, including PL/SQL and T-SQL procedural code, to their MySQL, MariaDB and Aurora equivalents. Best of all, we spend most of the time demonstrating the product and showing use cases designed to help your business.
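The heart of a heterogeneous migration is mapping one engine's types onto another's. Here is a toy illustration in Python of the kind of Oracle-to-MySQL column mapping a schema conversion tool performs; the mapping table and function are invented for illustration, not the tool's actual rules:

```python
# Illustrative Oracle -> MySQL type mappings (not AWS SCT's actual rule set).
TYPE_MAP = {
    "VARCHAR2": "VARCHAR",
    "NUMBER": "DECIMAL",
    "CLOB": "LONGTEXT",
    "DATE": "DATETIME",
}

def convert_column(col):
    """Map one Oracle column definition to a rough MySQL equivalent."""
    base = col["type"].upper()
    target = TYPE_MAP.get(base)
    if target is None:
        # Real tools flag these as action items needing manual conversion.
        raise ValueError(f"no automatic mapping for {base}; convert manually")
    out = {"name": col["name"], "type": target}
    if "size" in col:
        out["type"] += f"({col['size']})"
    return out
```

Procedural code (PL/SQL, T-SQL) needs similar but far more involved translation, which is why the conversion tool reports what it could and could not convert automatically.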
Similar to "Track 3 Session 6: Building purpose-built databases and understanding managed-service advantages"
How to build forecasting services using ML and deep learning algorithms
Forecasting is an important process for a great many companies and is used in many areas to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we show how to pre-process data that contains a time component, and then use an algorithm that produces an accurate forecast from the type of data analyzed.
Big Data for startups: how to build Big Data applications the serverless way
The variety and quantity of data created every day is accelerating faster and faster, and represents a unique opportunity to innovate and create new startups.
Yet managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment only established companies can afford. But the elasticity of the cloud, and Serverless services in particular, lets us break through these limits.
We will therefore see how to develop Big Data applications quickly, without worrying about infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we present the main features of the service and show how to deploy your application in just a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing its pace of innovation. Over this period we learned how changing our approach to application development let us dramatically increase agility and release velocity and, ultimately, build more reliable and scalable applications. In this session we explain how we define modern applications and how building modern apps affects not only application architecture, but also organizational structure, development release pipelines, and even the operating model. We also describe common approaches to modernization, including the one used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances
Container usage keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, yielding average savings of 70% compared with On-Demand Instances. In this session we look at the characteristics of Spot Instances and how they can be used easily on AWS. We also learn how Spreaker uses Spot Instances to run different kinds of applications, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. AWS and FinConecta would therefore like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's offering unique in the market with Machine Learning services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, lets you customize and create the differentiating elements of your offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, also through a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployment of EC2 instances
With the traditional approach to IT, implementing DevOps practices was difficult for many years: they often involved manual activities that occasionally caused application downtime and interrupted user operations. With the advent of the cloud, DevOps practices are now within everyone's reach at low cost for any kind of workload, ensuring greater system reliability and significantly improving business continuity.
AWS offers AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances using Chef and Puppet.
Learn how to leverage AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to support your Windows Workloads
Want to know your options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we discuss options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis using artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event on Wednesday, October 14 from 12:00 to 13:00 dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a wide range of AWS services, fully exploiting the potential of the AWS cloud while protecting existing VMware investments.
Build your first serverless ledger-based app with QLDB and NodeJS
Many companies today build applications with ledger functionality, for example to verify the history of credits and debits in banking transactions, or to track their products through the supply chain.
At the core of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB removes the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will see how to build a complete serverless application that uses QLDB's capabilities.
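The hash-chaining idea behind a cryptographically verifiable ledger database like QLDB can be sketched with a toy append-only log in Python. This is illustrative only; QLDB's actual journal model and API differ:

```python
import hashlib
import json

class ToyLedger:
    """Append-only log where each entry's digest covers the previous digest,
    so any tampering with history breaks the chain."""
    def __init__(self):
        self.entries = []

    def append(self, record):
        prev = self.entries[-1]["digest"] if self.entries else ""
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"record": record, "digest": digest})
        return digest

    def verify(self):
        """Recompute every digest from the start; False if any entry was altered."""
        prev = ""
        for entry in self.entries:
            payload = json.dumps(entry["record"], sort_keys=True)
            if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["digest"]:
                return False
            prev = entry["digest"]
        return True
```

Because each digest depends on everything before it, editing any past record invalidates all later digests; a managed ledger database gives you this property without building or operating the machinery yourself.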
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for delivering a great end-user experience. In this session we learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dig into several scenarios, understanding how AppSync can help solve these use cases by building modern APIs with real-time and offline data-update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Oracle databases and VMware Cloud™ on AWS: myths to debunk (Amazon Web Services)
Many organizations take advantage of the cloud by migrating their Oracle workloads, gaining significant agility and cost-efficiency benefits.
Migrating these workloads can create complexity during application modernization and refactoring, along with performance risks that can be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and simplify the migration of Oracle workloads, accelerating the transformation to the cloud; they dive into the architecture and show how to take full advantage of VMware Cloud™ on AWS.
1) The document discusses building a minimum viable product (MVP) using Amazon Web Services (AWS).
2) It provides an example of an MVP for an omni-channel messenger platform, built starting in 2017, that connects ecommerce stores to customers via web chat, Facebook Messenger, WhatsApp, and other channels.
3) The founder discusses how they started with an MVP in 2017 with 200 ecommerce stores in Hong Kong and Taiwan, and have since expanded to over 5000 clients across Southeast Asia using AWS for scaling.
This document discusses pitch decks and fundraising materials. It explains that venture capitalists will typically spend only 3 minutes and 44 seconds reviewing a pitch deck. Therefore, the deck needs to tell a compelling story to grab their attention. It also provides tips on tailoring different types of decks for different purposes, such as creating a concise 1-2 page teaser, a presentation deck for pitching in-person, and a more detailed read-only or fundraising deck. The document stresses the importance of including key information like the problem, solution, product, traction, market size, plans, team, and ask.
This document discusses building serverless web applications using AWS services like API Gateway, Lambda, DynamoDB, S3 and Amplify. It provides an overview of each service and how they can work together to create a scalable, secure and cost-effective serverless application stack without having to manage servers or infrastructure. Key services covered include API Gateway for hosting APIs, Lambda for backend logic, DynamoDB for database needs, S3 for static content, and Amplify for frontend hosting and continuous deployment.
This document provides tips for fundraising from startup founders Roland Yau and Sze Lok Chan. It discusses generating competition to create urgency for investors, fundraising in parallel rather than sequentially, having a clear fundraising narrative focused on what you do and why it's compelling, and prioritizing relationships with people over firms. It also notes how the pandemic has changed fundraising, with examples of deals done virtually during this time. The tips emphasize being fully prepared before fundraising and cultivating connections with investors in advance.
AWS_HK_StartupDay_Building Interactive websites while automating for efficien... (Amazon Web Services)
This document discusses Amazon's machine learning services for building conversational interfaces and extracting insights from unstructured text and audio. It describes Amazon Lex for creating chatbots, Amazon Comprehend for natural language processing tasks like entity extraction and sentiment analysis, and how they can be used together for applications like intelligent call centers and content analysis. Pre-trained APIs simplify adding machine learning to apps without requiring ML expertise.
During the hands-on labs, AWS experts show you which tools help you develop serverless applications locally and in the AWS cloud, and will help you plan the next steps to start using this technology in your company.
4. Rapid expansion of data requirements
• Explosion of data: data grows 10x every 5 years, driven by network-connected smart devices
• Microservices change data and analytics requirements: a microservices architecture decreases the need for "one size fits all" databases and increases the need for real-time monitoring and analytics
• Accelerated rate of change driven by DevOps: the transition from IT to DevOps increases the rate of change
5. The data flywheel: modernize your data infrastructure and get the most value from your data
1. Break free from legacy databases
2. Move to managed
3. Modernize your data warehouse
4. Build data-driven apps
5. Turn data to insights
6. Managing databases on-premises: time-consuming and complex
• Hardware and software installation
• Database configuration, patching, and backups
• Cluster setup and data replication for high availability
• Capacity planning and scaling clusters for compute and storage
7. The thorns of legacy databases: costly, proprietary lock-in, punitive licensing, and "you've got mail" license audits
8. Break free with AWS
• Performance at scale
• Fully managed
• Cost effective
• Reliable
11. Amazon Aurora: MySQL and PostgreSQL-compatible relational database built for the cloud
• Performance & scalability: 5x the throughput of standard MySQL and 3x that of standard PostgreSQL; scale out up to 15 read replicas
• Availability & durability: fault-tolerant, self-healing storage; 6 copies of data across 3 AZs; continuous backup to Amazon S3
• Highly secure: network isolation; encryption at rest and in transit
• Fully managed: managed by Amazon RDS; no server provisioning, software patching, setup, configuration, or backups on your part
12. Amazon DynamoDB: fast and flexible key-value database service for any scale
• Performance at scale: consistent, single-digit millisecond response times at any scale; build applications with virtually unlimited throughput
• Serverless architecture: no hardware provisioning, software patching, or upgrades; scales up or down automatically; continuously backs up your data
• Global replication: build global applications with fast access to local data by easily replicating tables across multiple AWS Regions
• Enterprise security: encrypts all data by default and fully integrates with AWS Identity and Access Management for robust security
13. Amazon DocumentDB: fast, scalable, highly available MongoDB-compatible database service
• Fast at scale: millions of requests per second with millisecond latency; 2x the throughput of managed MongoDB services
• MongoDB-compatible: same code, drivers, and tools you use with MongoDB
• Secure and compliant: deeply integrated with AWS services
• Simple and fully managed
14. Amazon ElastiCache: managed Redis- or Memcached-compatible in-memory data store
• Unlimited scale: read scaling with replicas; write and memory scaling with sharding; nondisruptive scaling
• Fully managed: AWS manages all hardware and software setup, configuration, and monitoring
• Consistent high performance: in-memory data store and cache for sub-millisecond response times
15. Amazon Neptune: fast, reliable graph database built for the cloud
• Fast: queries billions of relationships with millisecond latency
• Reliable: six replicas of your data across three Availability Zones, with full backup and restore
• Easy: build powerful queries easily with Gremlin and SPARQL
• Open: supports Apache TinkerPop and W3C RDF graph models
16. Amazon Timestream: fast, scalable, fully managed time series database
• Trillions of daily events: collect data at the rate of millions of inserts per second (10M/second); 1,000x faster and 1/10th the cost of relational databases
• Time-series analytics: adaptive query processing engine maintains steady, predictable performance; built-in functions for interpolation, smoothing, and approximation
• Serverless: automated setup, configuration, server provisioning, and software patching
17. Amazon Quantum Ledger Database (QLDB): fully managed ledger database; track and verify the history of all changes made to your application's data
• Immutable and transparent: an append-only, immutable journal tracks the history of all changes, which cannot be deleted or modified; get full visibility into the entire data lineage
• Highly scalable: executes 2-3x as many transactions as ledgers in common blockchain frameworks
• Cryptographically verifiable: all changes are cryptographically chained and verifiable
• Easy to use: flexible document model; query with a familiar SQL-like interface
18. Amazon Keyspaces (for Apache Cassandra): scalable, highly available, and managed Apache Cassandra-compatible database service
• Highly available and secure: 99.99% availability SLA within an AWS Region; data encrypted at rest; integrated with IAM
• No servers to manage: no need to provision, configure, and operate large Cassandra clusters
• Apache Cassandra-compatible: use the same Cassandra drivers and tools
• Single-digit millisecond performance at scale: automatically scale tables up and down; virtually unlimited throughput and storage
21. Duolingo uses AWS databases to serve over 31 billion items for 80 language courses with high performance and scalability
Primary database: Amazon DynamoDB
• 24,000 reads and 3,000 writes per second
• Personalizes lessons for users taking 6B exercises per month
In-memory caching: Amazon ElastiCache
• Instant access to common words and phrases
Transactional data: Amazon Aurora
• Maintains user data
22. Capital One migrated its monolithic mainframe to highly available AWS databases for microservices-based applications
Transactional data: Amazon RDS
• State management
Analytics: Amazon Redshift
• Web logs
Consistent low latency: Amazon DynamoDB
• User data and mobile app
24. Fully managed services on AWS: spend time innovating and building new applications, not managing infrastructure
With a self-managed database, everything is on you. With a fully managed AWS service, you handle only schema design, query construction, and query optimization, while AWS takes care of automatic failover, backup and recovery, isolation and security, industry compliance, push-button scaling, automated patching, advanced monitoring, routine maintenance, and built-in best practices.
25. Move to managed relational databases
• Reduce database administrative burden
• No need to re-architect existing applications
• Get better performance, availability, scalability, and security
Migrate on-premises or cloud-hosted relational databases to managed services: Amazon Aurora (MySQL, PostgreSQL) and Amazon RDS (MySQL, PostgreSQL, MariaDB, Oracle, SQL Server)
26. Move to managed non-relational databases
• Reduce database administrative burden
• No need to re-architect existing applications
• Get better performance, availability, scalability, and security
Migrate on-premises or cloud-hosted non-relational databases to managed services: Amazon DocumentDB (MongoDB) and Amazon ElastiCache (Redis, Memcached)
27. Move to AWS services to break free from the infrastructure muck
• Fully managed
• Broad portfolio
• Highly available and durable
• Most secure, with support for compliance
28. Airbnb: move to managed with Amazon Aurora, Amazon DynamoDB, and Amazon ElastiCache
Challenge: Airbnb experienced service administration challenges with its original provider and wanted to scale the business to the next level.
Solution: Airbnb moved from self-managed MySQL to Amazon Aurora MySQL. It uses Aurora as the primary transactional database, Amazon DynamoDB for personalized search, and Amazon ElastiCache as an in-memory store for sub-millisecond site rendering.
Result: "Initially, the appeal of AWS was the ease of managing and customizing the stack. It was great to be able to ramp up more servers without having to contact anyone and without having minimum usage commitments. AWS is the easy answer for any Internet business that wants to scale to the next level." (Nathan Blecharczyk, Cofounder and CTO of Airbnb)
29. Get started
See more information at aws.amazon.com/databases
Contact us at https://aws.amazon.com/contact-us/
30. Learn databases with AWS Training and Certification
Resources created by the experts at AWS to help you build and validate database skills:
• 25+ free digital training courses cover topics and services related to relational and nonrelational databases
• Validate expertise with the AWS Certified Database – Specialty exam
• The classroom offering, Planning and Designing Databases on AWS, features AWS expert instructors and hands-on activities
Visit the databases learning path at aws.amazon.com/training/path-databases
Let’s first talk about three major trends that impact the way you think about data.
There are three trends: 1/ there is an explosion of relevant data you could track; 2/ microservices change data and analytics requirements; and 3/ the DevOps model drives a rapid rate of change.
1/ Explosion of data
There is an explosion of data being generated. You have to track a lot of data that comes from your business applications. However, the growth is coming from data generated by network-connected smart devices that drive variety and volume of data. Every “smart” device produces real-time data, such as mobile phones, connected cars, smart homes, wearable technologies, home appliances, security systems, industrial equipment, machinery, and electronics. Most new cars have built-in cellular connections, which account for one third of mobile sign-ups on cell phone networks. Applications also generate real-time data such as purchase data from e-commerce sites, user behavior from mobile apps, and tweets/posts from social media.
By our estimates, data grows 10x every 5 years. To take advantage of all of this data, you need to be able to partner with someone who can easily harness this volume of data.
2/ Microservices change data and analytics requirements
Organizations are moving from developing monolithic applications to a microservices architecture. Microservices let organizations break down a complex problem into independent units, so developers can operate in small groups with less coordination and therefore respond more quickly and go faster. There are two implications: 1/ developers can break down apps into smaller pieces and pick the best tool to solve each problem; and 2/ it increases the need for real-time monitoring and analytics, to understand what's not working between all of the different microservices, faster.
Developers can break down their applications into smaller pieces and are not beholden to using a single database for every workload. Instead, they can pick the right database purpose-built for the job.
Analytics is not an after-the-fact activity; it has to be built in to everything that you do. You need to know what's going on in your business in real time. To fuel innovation, well-run businesses now act on data quickly (whether automatically or through human intervention).
3/ Rapid rate of change driven by DevOps
As innovation accelerates and the velocity of change increases, businesses are transforming IT to the DevOps model. This model uses automated development tools to enable continuous development, deployment, and improvement of software. It emphasizes communication, collaboration, and integration between software developers and IT operations. It also introduces a fast rate of change (and change management).
This means as you think about data and designing your data platform, you need to think about the rapid rate of change that occurs through the DevOps model.
<Note, this slide has an animated sequence that ends with isolated focus on Break free from legacy databases>
Intro slide: The data flywheel. We have known for a number of years now that the amount of data available to us is exploding and continues to grow at an exponential rate. This is driven by the fact that data is being produced by all devices, app and system logs, and machines and apps that produce telemetry. The types of data that need to be stored are more varied, from traditional structured data to semi-structured and unstructured, and this data needs to be captured, stored, processed, and analyzed in real time.
Organizations that want to make the most of their data can no longer use traditional on-premises approaches to store, manage, and process data at the scale they need. The cloud has been driving down the cost of compute and storage while giving customers the agility and elasticity to store and process data on demand. For the first time, customers no longer have to worry about throwing away data they may need in the future: they can store everything they need, cost-effectively, in the cloud, and process it as and when they need, paying as they go.
Click 1: Modernize/Data value
Click 2: It starts with breaking free from old-guard databases with AWS Database Freedom.
For customers running legacy databases on premises, provisioning, operating, scaling, and managing databases is tedious, time-consuming, and expensive. Customers want to spend time innovating and building new applications, not managing infrastructure.
Legacy databases are expensive in terms of license fees, maintenance, and support.
Proprietary platforms restrict innovation: customers are locked into using proprietary database features, missing out on the innovation and flexibility offered by open source and cloud-native databases.
Monolithic architectures make it difficult for customers to scale and iterate to support emerging use cases.
Punitive licensing tactics threaten customers with license audits.
Performance at scale:
AWS purpose-built database solutions are designed for fast, interactive query performance at any scale. Experience 3-5x the performance of popular alternatives, with capabilities to support over 20 million requests per second.
Cost effective:
Amazon database solutions provide the security, availability, and reliability of commercial-grade databases at 1/10th the cost of commercial databases.
Fully Managed:
Our fully managed services allow you to break free from the complexities of database and data warehouse administration. Serverless capabilities automatically scale throughput up or down based on demand.
Reliable:
Amazon databases are built for business critical workloads. Build scalable, reliable, and secure enterprise applications while your data is safeguarded behind the AWS infrastructure in highly secure data centers.
1/ Instead of looking at a list of hundreds of different databases, what if we instead think about common database categories? This is a simple mental model to help us reason about how builders use these different systems.
2/ Relational is a category. Many builders understand relational systems, so I won't spend a lot of time here, other than reminding us: if you have a workload where strong consistency matters, you will work with the team to define schemas. You're not really sure of every single question that will be asked of the data, but when one is asked, it matters that you always get back a consistent answer. Relational systems are awesome for workloads that need this. This is where Aurora fits, along with RDS open source engines (PostgreSQL, MySQL, and MariaDB) and commercial engines such as SQL Server and Oracle Database.
3/ To help with this change in how builders develop applications, over the years we have built a number of purpose-built non-relational database, starting in the key-value category with DynamoDB, a database that optimizes running key-value pairs at single-digit millisecond latency and at very large scale. A DynamoDB table can have one item or a trillion items and it will perform the same. Many, many companies, like Epic with Fortnite and Lyft, use DynamoDB. And just to give you an example of the type of scale that DynamoDB supports, over the two days of Prime Day, our biggest retail event in the history of the company, DynamoDB requests from Alexa, the Amazon.com sites, and the Amazon fulfillment centers totaled 7.11 trillion, peaking at 45.4 million requests per second. And, that is only a fraction of the total capacity that DynamoDB handles on any given day. This is unusual scale.
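To make the key-value idea concrete, here is a minimal, purely illustrative Python sketch of why hash-partitioned key-value lookups stay fast regardless of table size. The class and function names are made up, and this is not how DynamoDB is implemented internally; it only shows the principle that a hashed partition key turns any lookup into a constant-time jump to one partition.

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Map a partition key to one of N partitions by hashing it.

    Illustrative only: DynamoDB's real partitioning scheme is internal
    to the service. This just shows why lookups stay O(1) whether a
    table holds one item or a trillion.
    """
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_partitions

class TinyKeyValueStore:
    """A toy key-value store sharded across in-memory partitions."""

    def __init__(self, num_partitions: int = 8):
        self.partitions = [{} for _ in range(num_partitions)]

    def put_item(self, key: str, item: dict) -> None:
        self.partitions[partition_for(key, len(self.partitions))][key] = item

    def get_item(self, key: str):
        return self.partitions[partition_for(key, len(self.partitions))].get(key)

store = TinyKeyValueStore()
store.put_item("user#42", {"name": "Ada", "plan": "pro"})
print(store.get_item("user#42"))  # {'name': 'Ada', 'plan': 'pro'}
```

The same hashing idea is why a well-chosen, high-cardinality partition key matters: it spreads traffic evenly across partitions instead of concentrating it on a few hot ones.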
4/ Or, in the document category, let's say that your developers want a flexible way to store and query data in the database using the same document model they use in their application code. Some refer to this as a schema-on-read system. Take Intuit as an example. Their automated compliance platform (ACP) ensures that all of Intuit's AWS resources, across thousands of accounts, meet Intuit's various compliance standards. ACP tracks audit events from tens of thousands of Intuit assets that are each modeled as a JSON document. As Intuit built the platform, they needed a database that can natively store, query, and index a diverse set of documents. That's why we built Amazon DocumentDB, a fully managed, MongoDB-compatible document database that is purpose-built to help developers easily work with JSON documents in their natural format, and also architected to scale easily to meet Intuit's growing needs. I always pause here for a minute. Do you remember when XML was a thing? I think XML 1.0 was first established in 1998. Back then, what did commercial systems do to become an XML database? They added an XML data type. The problem is that many of the database operators didn't work with that data type. I would argue that what document databases are today is what people were trying to do with XML years ago. Amazon DocumentDB launched in January and it's off to a great start, which is why companies like Intuit, FINRA, Dow Jones, and thousands of others use it to store, retrieve, and manage JSON documents at scale.
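The schema-on-read idea can be sketched in a few lines of plain Python. The `find` helper and the sample audit-event documents below are hypothetical (a real application would use a MongoDB driver against DocumentDB), but they show how documents with different shapes can live in one collection and still be queried:

```python
import json

# A handful of audit-event documents with varying shapes, stored as-is.
# Schema-on-read: no upfront schema; each document carries its own fields.
events = [json.loads(s) for s in [
    '{"asset": "s3-bucket-1", "compliant": true, "tags": {"team": "payments"}}',
    '{"asset": "ec2-fleet-7", "compliant": false, "findings": ["open-port-22"]}',
    '{"asset": "rds-db-3", "compliant": false}',
]]

def find(docs, **criteria):
    """Return documents whose top-level fields match the criteria,
    tolerating documents that lack a queried field entirely."""
    return [d for d in docs if all(d.get(k) == v for k, v in criteria.items())]

noncompliant = find(events, compliant=False)
print([d["asset"] for d in noncompliant])  # ['ec2-fleet-7', 'rds-db-3']
```

Note that the third document has no `findings` field at all; a document store handles that heterogeneity natively, where a relational table would need nullable columns or a side table.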
5/ But let's say your application can't even stand single-digit millisecond latency; you need something even faster, like microsecond latency. You would need an in-memory database, a cache that can access data in microseconds. That's why we built ElastiCache, which offers managed Redis and managed Memcached, and that's what companies like Grab Taxi, McDonald's, and MLBAM use to enable fast retrieval of data for real-time processing use cases such as messaging and real-time geospatial data like drive distance.
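The usual way an application uses an in-memory cache like ElastiCache is the cache-aside pattern: check the cache, fall back to the database on a miss, then populate the cache. The sketch below uses an in-process stand-in for the cache; the `TTLCache` class and `slow_lookup` helper are invented for illustration and are not an ElastiCache API.

```python
import time

class TTLCache:
    """Minimal in-process cache with per-entry expiry, standing in for a
    network cache such as Redis with an EXPIRE-style TTL."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._data = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._data[key]  # lazily evict stale entries
            return None
        return value

    def set(self, key, value):
        self._data[key] = (value, time.monotonic() + self.ttl)

def slow_lookup(key, db, cache):
    """Cache-aside: try the cache first, fall back to the authoritative
    (slower) store, then warm the cache for subsequent reads."""
    value = cache.get(key)
    if value is None:
        value = db[key]
        cache.set(key, value)
    return value

db = {"greeting:en": "hello"}
cache = TTLCache(ttl_seconds=30)
print(slow_lookup("greeting:en", db, cache))  # from db, cache warmed
print(slow_lookup("greeting:en", db, cache))  # now served from cache
```

The TTL is the standard guard against serving stale data forever: entries quietly expire, so the next read refreshes them from the source of truth.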
6/ Let's say that you have datasets that are really large and have a lot of interconnectedness. Take Nike as an example. They built an app on top of AWS which connects their athletes with their followers and provides personalized recommendations based on the interests of more than 100 million users. Those are a lot of connections if you think about all the athletes and all the followers and all the interests. And they need fast queries over all of that connectedness. For example, running a complex query to learn the top five interests of all the followers of a certain athlete is easy to do on a graph database but unwieldy and slow on a relational database. That's why people are excited about graph databases and why we built Amazon Neptune, which is what companies like NBC Universal, Netflix, and Uber use for highly connected datasets.
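The "top interests of an athlete's followers" query can be sketched as a one-hop traversal plus aggregation over a toy adjacency-list graph. The data and `top_interests` function below are made up for illustration; in Neptune you would express the same query in Gremlin or SPARQL rather than Python loops.

```python
from collections import Counter

# Tiny property-graph sketch: follower -> athlete edges plus
# follower -> interest edges, stored as adjacency lists.
follows = {
    "fan1": ["athlete_a"],
    "fan2": ["athlete_a"],
    "fan3": ["athlete_a", "athlete_b"],
}
interests = {
    "fan1": ["running", "yoga"],
    "fan2": ["running"],
    "fan3": ["cycling", "running"],
}

def top_interests(athlete, k=5):
    """Count interests across all followers of an athlete: a one-hop
    graph traversal plus aggregation, the kind of query a graph
    database answers natively."""
    counts = Counter()
    for fan, followed in follows.items():
        if athlete in followed:
            counts.update(interests.get(fan, []))
    return [interest for interest, _ in counts.most_common(k)]

print(top_interests("athlete_a"))  # 'running' ranks first
```

On a relational schema, the same question becomes a multi-way join plus GROUP BY that gets slower as the graph grows; a graph engine stores the edges as first-class data and walks them directly.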
7/ Driven by the rise of IoT devices, IT systems, and smart industrial machines, time-series data (data that measures how things change over time) is one of the fastest growing data types. Time-series data has specific characteristics: it typically arrives in time-ordered form, it is append-only, and queries are always over a time interval. Time-series data is not just a timestamp or a datatype that you might use in a relational database. Instead, what makes a time-series database is that, at its core, the single primary axis of the data model is time, which means you can highly optimize how the data is stored, scaled, and retrieved. For this we have Amazon Timestream, a purpose-built time-series database that is in preview today.
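To show the flavor of the interval queries a time-series database optimizes for, here is a small pure-Python rollup over time-ordered samples. The `rollup` helper is a hypothetical sketch, not Timestream's API; it illustrates why partitioning data on time makes window aggregations cheap.

```python
from statistics import mean

# Append-only, time-ordered samples: (epoch_seconds, value)
samples = [(0, 10.0), (30, 12.0), (60, 11.0), (90, 15.0), (120, 14.0)]

def rollup(points, window_seconds):
    """Aggregate time-ordered points into fixed windows and return the
    mean per window, keyed by the window's start time. This is the
    kind of interval query a time-series engine answers by scanning
    only the relevant time partitions."""
    buckets = {}
    for ts, value in points:
        buckets.setdefault(ts // window_seconds, []).append(value)
    return {w * window_seconds: mean(vs) for w, vs in sorted(buckets.items())}

print(rollup(samples, 60))  # {0: 11.0, 60: 13.0, 120: 14.0}
```

Because the data arrives already ordered by time, a real engine can keep recent windows hot and tier older ones to cheaper storage, which is exactly the retention and tiering behavior described for Timestream.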
8/ Ledgers are typically used to record a history of economic and financial activity in an organization. Many organizations build applications with ledger-like functionality because they want to maintain an accurate history of their applications' data, for example, tracking the history of credits and debits in banking transactions, verifying the data lineage of an insurance claim, or tracing movement of an item in a supply chain network. For this we built Amazon QLDB, a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log. BMW uses QLDB for their Digital Vehicle Passport Application, which maintains the complete and verifiable history of a vehicle, including maintenance records, tire changes, accidents, ownership, insurance, and loan records. This application will act as the single trusted authority where multiple third-party entities such as car dealerships, repair shops, banks, and insurance providers will submit data to BMW.
9/ People want the right tool for the right job, and they want the right database for whatever their workload is.
Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud, that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is up to five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases. It provides the security, availability, and reliability of commercial databases at 1/10th the cost. Amazon Aurora is fully managed by Amazon Relational Database Service (RDS), which automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups. Amazon Aurora features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 64TB per database instance. It delivers high performance and availability with up to 15 low-latency read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across three Availability Zones (AZs).
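Aurora's 15 low-latency read replicas only pay off if the application routes read traffic to them. Below is a minimal sketch of read/write splitting: the endpoint hostnames are invented for illustration, and the SELECT-detection heuristic is far too crude for production (Aurora itself exposes separate writer and reader endpoints that handle this routing for you).

```python
import itertools

class EndpointRouter:
    """Sketch of read/write splitting against a replicated database:
    writes go to the single writer endpoint, reads rotate round-robin
    across reader endpoints. Hostnames here are made up."""

    def __init__(self, writer, readers):
        self.writer = writer
        self._readers = itertools.cycle(readers)

    def endpoint_for(self, sql: str) -> str:
        # Crude heuristic: only plain SELECTs are offloaded to readers.
        if sql.lstrip().upper().startswith("SELECT"):
            return next(self._readers)
        return self.writer

router = EndpointRouter(
    writer="db-writer.example.internal",
    readers=["db-reader-1.example.internal", "db-reader-2.example.internal"],
)
print(router.endpoint_for("SELECT * FROM users"))  # a reader endpoint
print(router.endpoint_for("UPDATE users SET plan = 'pro'"))  # the writer
```

One caveat worth stating even in a sketch: replicas lag slightly behind the writer, so read-your-own-writes flows should still go to the writer endpoint.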
Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multiregion, multimaster, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million requests per second. Many of the world's fastest growing businesses such as Lyft, Airbnb, and Redfin as well as enterprises such as Samsung, Toyota, and Capital One depend on the scale and performance of DynamoDB to support their mission-critical workloads. Hundreds of thousands of AWS customers have chosen DynamoDB as their key-value and document database for mobile, web, gaming, ad tech, IoT, and other applications that need low-latency data access at any scale. Create a new table for your application and let DynamoDB handle the rest.
Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. Amazon DocumentDB is designed from the ground up to give you the performance, scalability, and availability you need when operating mission-critical MongoDB workloads at scale. Amazon DocumentDB implements the Apache 2.0 open source MongoDB 3.6 API by emulating the responses that a MongoDB client expects from a MongoDB server, allowing you to use your existing MongoDB drivers and tools with Amazon DocumentDB. In Amazon DocumentDB, storage and compute are decoupled, allowing each to scale independently, and you can increase read capacity to millions of requests per second by adding up to 15 low-latency read replicas in minutes, regardless of the size of your data. Amazon DocumentDB is designed for 99.99% availability and replicates six copies of your data across three AWS Availability Zones (AZs). You can use AWS Database Migration Service (DMS) for free (for six months) to easily migrate your on-premises or Amazon Elastic Compute Cloud (EC2) MongoDB databases to Amazon DocumentDB with virtually no downtime.
Amazon ElastiCache offers fully managed Redis and Memcached. Seamlessly deploy, run, and scale popular open source compatible in-memory data stores. Build data-intensive apps or improve the performance of your existing apps by retrieving data from high throughput and low latency in-memory data stores. Amazon ElastiCache is a popular choice for Gaming, Ad-Tech, Financial Services, Healthcare, and IoT apps.
Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications. Built on open-source Redis and compatible with the Redis APIs, ElastiCache for Redis works with your Redis clients and uses the open Redis data format to store your data. Your self-managed Redis applications can work seamlessly with ElastiCache for Redis without any code changes. ElastiCache for Redis combines the speed, simplicity, and versatility of open-source Redis with manageability, security, and scalability from Amazon to power the most demanding real-time applications in Gaming, Ad-Tech, E-Commerce, Healthcare, Financial Services, and IoT.
Amazon ElastiCache for Memcached is a Memcached-compatible in-memory key-value store service that can be used as a cache or a data store. It delivers the performance, ease-of-use, and simplicity of Memcached. ElastiCache for Memcached is fully managed, scalable, and secure, making it an ideal candidate for use cases where frequently accessed data must be in-memory. It is a popular choice for use cases such as Web, Mobile Apps, Gaming, Ad-Tech, and E-Commerce.
Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Amazon Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with milliseconds latency. Amazon Neptune supports popular graph models Property Graph and W3C's RDF, and their respective query languages Apache TinkerPop Gremlin and SPARQL, allowing you to easily build queries that efficiently navigate highly connected datasets. Neptune powers graph use cases such as recommendation engines, fraud detection, knowledge graphs, drug discovery, and network security. Amazon Neptune is highly available, with read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across Availability Zones. Neptune is secure with support for HTTPS encrypted client connections and encryption at rest. Neptune is fully managed, so you no longer need to worry about database management tasks such as hardware provisioning, software patching, setup, configuration, or backups.
Amazon Timestream is a fast, scalable, fully managed time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day at 1/10th the cost of relational databases. Driven by the rise of IoT devices, IT systems, and smart industrial machines, time-series data (data that measures how things change over time) is one of the fastest growing data types. Time-series data has specific characteristics: it typically arrives in time-ordered form, it is append-only, and queries are always over a time interval. While relational databases can store this data, they are inefficient at processing it because they lack optimizations such as storing and retrieving data by time intervals. Timestream is a purpose-built time series database that efficiently stores and processes this data by time intervals. With Timestream, you can easily store and analyze log data for DevOps, sensor data for IoT applications, and industrial telemetry data for equipment maintenance. As your data grows over time, Timestream's adaptive query processing engine understands its location and format, making your data simpler and faster to analyze. Timestream also automates rollups, retention, tiering, and compression of data, so you can manage your data at the lowest possible cost. Timestream is serverless, so there are no servers to manage. It handles time-consuming tasks such as server provisioning, software patching, setup, configuration, and data retention and tiering, freeing you to focus on building your applications.
Amazon QLDB provides a complete, verifiable history of all application data changes. It is built on tried-and-tested technology used inside Amazon for years to build reliable system-of-record applications at scale.
QLDB gives you immutability. The database maintains a sequenced record of all changes to your data, written to an append-only journal, allowing companies to query and analyze the full history of changes.
Data in QLDB is cryptographically verifiable. The service uses a cryptographic hash function (SHA-256) to generate a secure summary of your data’s change history, known as a digest. The digest acts as proof of your data’s change history, allowing you to look back and validate the integrity of your data changes.
QLDB is a serverless database and scales with the application. Unlike common blockchain frameworks, which require consensus, QLDB performs at low latency and high throughput, so companies do not have to trade performance for verifiability.
Finally, by leveraging familiar SQL APIs with PartiQL and Amazon Ion, a flexible open-source document data model, QLDB is easy to use.
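The verifiability idea above can be illustrated with a toy hash chain. This is not QLDB's actual digest algorithm — just a minimal sketch of the underlying principle: each revision's hash is folded into a running SHA-256 digest, so altering any historical revision changes the final digest and the tampering is detectable.

```python
import hashlib
import json

def chain_digest(revisions: list) -> str:
    """Fold each revision's hash into a running digest.

    The final value depends on every revision in order, so editing or
    reordering any past revision produces a different digest.
    """
    digest = b"\x00" * 32  # initial seed for the chain
    for rev in revisions:
        rev_hash = hashlib.sha256(
            json.dumps(rev, sort_keys=True).encode()
        ).digest()
        digest = hashlib.sha256(digest + rev_hash).digest()
    return digest.hex()
```

Publishing the digest is enough to later prove the full history is intact: recomputing the chain over the journal must reproduce the same value.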
Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra–compatible database service.
1/ For customers currently running Cassandra on premises or in the cloud, Keyspaces brings the performance, manageability, scale, and security of our fully managed database services to Cassandra workloads.
2/ Amazon Keyspaces is compatible with the Apache open source Cassandra API, enabling customers to use the same Cassandra application code, Apache 2.0 licensed drivers, and tools that they use today.
3/ Amazon Keyspaces is serverless, so customers no longer need to provision, configure, and operate large Cassandra clusters, or manually add and remove nodes and rebalance partitions as database traffic scales up and down.
4/ Amazon Keyspaces provides customers with single-digit millisecond performance at any scale and can scale tables up and down automatically based on actual application traffic, with virtually unlimited throughput and storage. There is no limit on the size of the table or the number of items.
5/ Amazon Keyspaces offers both provisioned and on-demand capacity mode so you can optimize costs by specifying capacity per workload or pay for only the resources your applications use.
6/ Customers with existing Cassandra tables running on premises or on Amazon Elastic Compute Cloud (EC2) can migrate these tables to Keyspaces easily with commonly used Cassandra migration tools.
7/ Finally, Amazon Keyspaces integrates with our existing AWS services such as Amazon CloudWatch for logging and performance monitoring, AWS IAM for access management, and AWS Key Management Service for managing encryption keys used for encryption at rest.
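Point 5/ (provisioned vs. on-demand capacity) can be sketched in CQL. The helper below builds a `CREATE TABLE` statement using the `CUSTOM_PROPERTIES` table option that Keyspaces uses to select a capacity mode; the keyspace, table, columns, and capacity numbers are illustrative assumptions, and the exact option syntax should be checked against the Keyspaces documentation.

```python
def create_table_cql(keyspace: str, table: str, on_demand: bool = True) -> str:
    """Build a CQL CREATE TABLE statement for Amazon Keyspaces.

    On-demand mode pays per request; provisioned mode specifies read and
    write capacity units per workload.
    """
    if on_demand:
        mode = "{'throughput_mode': 'PAY_PER_REQUEST'}"
    else:
        mode = ("{'throughput_mode': 'PROVISIONED', "
                "'read_capacity_units': 10, 'write_capacity_units': 5}")
    return (
        f"CREATE TABLE {keyspace}.{table} "
        "(user_id text PRIMARY KEY, score int) "
        "WITH CUSTOM_PROPERTIES = {'capacity_mode': " + mode + "}"
    )
```

Because this is standard CQL, existing Cassandra drivers and tools (point 2/) can issue it unchanged; only the endpoint and authentication differ from a self-managed cluster.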
Duolingo uses Amazon DynamoDB as one of its primary database solutions. Each second, Duolingo’s DynamoDB implementation supports 24,000 reads and 3,000 writes, personalizing lessons for users completing 6 billion exercises per month. Amazon DynamoDB provides auto scaling, which intelligently adjusts capacity based on user demand — ensuring high availability and minimizing wasted costs due to over-provisioning. Duolingo also uses Amazon ElastiCache to provide instant access to common words and phrases, Amazon Aurora as the transactional database for maintaining user data, and Amazon Redshift for data analytics. With this database backbone, Duolingo teaches more language students than the entire US school system.
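To put Duolingo's 24,000 reads and 3,000 writes per second in capacity terms, here is back-of-envelope DynamoDB capacity-unit math: one read capacity unit covers one strongly consistent 4 KB read per second (two if eventually consistent), and one write capacity unit covers one 1 KB write per second. The 1 KB item size below is an assumption for illustration, not a published Duolingo figure.

```python
import math

def provisioned_capacity(reads_per_sec: int, writes_per_sec: int,
                         item_kb: float, eventually_consistent: bool = True):
    """Estimate DynamoDB read/write capacity units for a workload."""
    # Reads are billed in 4 KB units; eventual consistency halves the cost.
    read_units = reads_per_sec * math.ceil(item_kb / 4)
    if eventually_consistent:
        read_units = math.ceil(read_units / 2)
    # Writes are billed in 1 KB units.
    write_units = writes_per_sec * math.ceil(item_kb / 1)
    return read_units, write_units
```

With 1 KB items and eventually consistent reads, 24,000 reads/sec works out to 12,000 RCUs and 3,000 writes/sec to 3,000 WCUs — and auto scaling adjusts these up and down with actual demand rather than pinning them at peak.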
Capital One uses Amazon RDS to store transaction data for state management, Amazon Redshift to store web logs for analytics that need aggregations, and DynamoDB to store user data so that customers can quickly access their information with the Capital One app.
With AWS services, you don’t need to worry about administration tasks such as server provisioning, patching, setup, configuration, backups, or recovery. AWS continuously monitors your clusters to keep your workloads up and running with self-healing storage and automated scaling, leaving you free to focus on higher-value application development tasks such as schema design and query construction and optimization, while AWS takes care of operational tasks on your behalf.
You never have to over- or under-provision infrastructure to accommodate application growth, intermittent spikes, and performance requirements, or incur fixed capital costs such as software licensing and support, hardware refreshes, and the staff to maintain hardware. AWS does it all for you, so you can spend time innovating and building new applications, not managing infrastructure.
The most straightforward solution for many customers struggling to maintain their own relational databases at scale is a move to a managed database service like Amazon RDS or Amazon Aurora. In most cases, these customers can migrate workloads and applications to a managed service without needing to rearchitect their application, and their teams can continue to leverage the same DB skill sets.
The target customer for a move from self-managed to managed relational DBs:
Is self-managing DBs on-premises, in EC2, and/or in another public cloud.
Would like to reduce DB admin burden and reallocate DBA resources to app-centric work.
Does not want to rearchitect their application. Wants to continue leveraging same skill sets.
Does need a simple path to a managed service in the cloud for DB workloads.
Wants better performance, availability, scalability, and security.
Customers can lift and shift their self-managed databases like Oracle, SQL Server, MySQL, PostgreSQL, and MariaDB to Amazon RDS.
Customers looking for better performance and availability can lift and shift their MySQL and PostgreSQL databases to Amazon Aurora and get 3-5X better throughput.
Customers use non-relational databases like MongoDB and Redis as document and in-memory databases for use cases such as content management, personalization, mobile apps, catalogs, and real-time use cases such as caching, gaming leaderboards, and session stores. The most straightforward solution for many customers who are struggling to maintain their own non-relational databases at scale is a move to a managed database service: 1/ Moving self-managed MongoDB databases to Amazon DocumentDB; 2/ Moving self-managed in-memory databases like Redis and Memcached to Amazon ElastiCache. In most cases, these customers can migrate workloads and applications to a managed service without needing to rearchitect their application, and their teams can continue to leverage the same DB skill sets.
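The caching use case mentioned above typically follows the cache-aside pattern: read from the in-memory store first, and fall back to the database of record on a miss. A minimal sketch, using a plain dict to stand in for a Redis or ElastiCache client (a real application would use a Redis client's get/set calls with a TTL):

```python
def cache_aside_get(key, cache: dict, load_from_db):
    """Cache-aside read: serve hits from the cache; on a miss, load from
    the database of record and populate the cache for next time."""
    if key in cache:
        return cache[key], "hit"
    value = load_from_db(key)
    cache[key] = value
    return value, "miss"
```

The first lookup for a common word or phrase pays the database round trip; every subsequent lookup is served from memory at microsecond latency, which is exactly what makes in-memory stores a fit for leaderboards and session data as well.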
The target customer for a move from self-managed to managed non-relational DBs:
Is self-managing DBs on-premises, in EC2, and/or in another public cloud.
Would like to reduce DB admin burden and reallocate DBA resources to app-centric work.
Does not want to rearchitect their application. Wants to continue leveraging same skill sets.
Does need a simple path to a managed service in the cloud for DB workloads.
Wants better performance, availability, scalability, and security.
The solution is for customers to move to AWS managed services for databases and analytics.
Why choose AWS for database and analytics? Here are some top-level points, which we will explore further through the following slides. AWS provides the most comprehensive, fully managed, performant & scalable, available & durable, and secure & compliant portfolio of services that enable customers to easily build their applications in the cloud and process and analyze all their data with the broadest set of data management and analytical approaches, including relational databases, nonrelational databases, data lakes, and machine learning. As a result, there are more organizations running their databases, data lakes, and analytics on AWS than anywhere else, with customers like Airbnb, Capital One, Verizon, Netflix, Zillow, Nasdaq, Yelp, iRobot, and FINRA trusting AWS to run their analytics workloads.
Details supporting the above claims:
Broad services portfolio
AWS offers the broadest set of databases, analytic tools, and engines that analyze data using open formats and open standards. Customers can choose from 14 purpose-built database engines including relational, key-value, document, in-memory, graph, time series, and ledger databases. AWS’s portfolio of purpose-built databases supports diverse data models and allows you to build use-case-driven, highly scalable, distributed applications. By picking the best database to solve a specific problem or a group of problems, you can break away from restrictive one-size-fits-all monolithic databases and focus on building applications to meet the needs of your business. For analytics, customers can store data in the standards-based data format of their choice such as CSV, ORC, Grok, Avro, and Parquet, and have the flexibility to analyze the data in a variety of ways such as data warehousing, interactive SQL queries, real-time analytics, operational analytics, and big data processing. AWS also has the most partner solutions with pre-built integrations, giving customers choice and ensuring their needs will be met for existing and future analytics use cases.
Fully managed
With AWS database and analytics services, you don’t need to worry about administration tasks such as server provisioning, patching, setup, configuration, backups, or recovery. AWS continuously monitors your clusters to keep your workloads up and running with self-healing storage and automated scaling, so that you can focus on higher value application development.
Most performant & scalable
With AWS you get relational databases that are 3-5X faster than popular alternatives, or non-relational databases that give you microsecond to sub-millisecond latency. Start small and scale as your applications grow. You can scale your database and analytics compute and storage resources easily, often with no downtime. Because AWS database and analytics services are optimized for the data model or the type of analytics you need, your applications can scale and perform better at 1/10 the cost of commercial databases.
Most available & durable
When running critical production systems, minimizing downtime and service interruptions is a high priority. At AWS, our managed services run on the same highly reliable infrastructure used by other AWS services, providing high availability and durability without sacrificing performance. AWS services provide multi-Region and multi-Availability Zone deployments for protection against Region-wide or Availability Zone outages; read replicas so you have multiple copies of your data for scalability and disaster recovery; and continuous backups to Amazon S3 for 11 9’s of durability. Availability reaches 99.95% for RDS Multi-AZ instances and 99.99% for Aurora Multi-AZ clusters.
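Those availability percentages translate into concrete downtime budgets. The helper below does the arithmetic for a 30-day month, showing what the gap between 99.95% and 99.99% actually means in minutes:

```python
def monthly_downtime_minutes(availability_pct: float) -> float:
    """Convert an availability percentage into allowed downtime
    per 30-day month (43,200 minutes)."""
    minutes_per_month = 30 * 24 * 60
    return minutes_per_month * (1 - availability_pct / 100)
```

At 99.95% (RDS Multi-AZ) the budget is about 21.6 minutes of downtime per month; at 99.99% (Aurora Multi-AZ) it drops to about 4.3 minutes — a 5X reduction.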
AWS has the most security capabilities to protect customer’s data. Customers can launch AWS services in a virtual network with Amazon VPC, and have the ability to control access and permissions with AWS Identity and Access Management, and authentication with Kerberos.
AWS services provide support for compliance and assurance programs like HIPAA, PCI DSS, FedRAMP, and ISO for finance, healthcare, government, and more
Here’s an example of a customer who’s all-in on AWS. Airbnb moved away from self-managing databases to fully managed AWS databases such as Aurora, DynamoDB, and ElastiCache.
https://aws.amazon.com/solutions/case-studies/airbnb/
Image source: free stock image from Pexels.com (no license fee)
If you’re ready to continue learning, we offer free digital courses for database services.
The DATABASE learning path tells you how to get started
Then, validate your experience with an industry-recognized certification in Databases.