Speaker: Ivan Cheng, Solution Architect, AWS
Join us for a series of introductory and technical sessions on AWS Big Data solutions. Gain a thorough understanding of what Amazon Web Services offers across the big data lifecycle and learn architectural best practices for applying those solutions to your projects.
We will kick off this technical seminar in the morning with an introduction to the AWS Big Data platform, including a discussion of popular use cases and reference architectures. In the afternoon, we will deep dive into Machine Learning and Streaming Analytics. We will then walk everyone through building your first Big Data application with AWS.
by Darin Briskman, Technical Evangelist, AWS
Elasticsearch is the most popular open-source search and analytics engine - it's easy to use, but not always easy to configure and manage. Learn about Amazon's fully managed service that provides easier deployment, operation, and scaling for Elasticsearch. Level: 200
This document provides an overview of a presentation on microservices. The presentation includes sections on evolving from monolithic architectures to microservices, principles of microservices like loose coupling and single responsibility, and an example of building a simple microservice. The document lists the expected sections of the presentation.
Data warehousing is a critical component for analysing and extracting actionable insights from your data. Amazon Redshift allows you to deploy a scalable data warehouse in a matter of minutes and start analysing your data right away using your existing business intelligence tools.
DynamoDB is a NoSQL database service built for fast, scalable, consistent performance. This presentation introduces DynamoDB and discusses how to get started, provision throughput, design for the DynamoDB data model, query and scan tables and scale reads and writes without downtime.
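The query-versus-scan distinction mentioned above can be sketched in plain Python. This is a conceptual stand-in for the DynamoDB API, not a call to it; the table, key, and attribute names are hypothetical examples.

```python
# Illustrative sketch (not the AWS SDK): DynamoDB's key model separates a
# fast Query, which touches only items under one partition key, from a
# Scan, which examines every item in the table.
# Table and attribute names below are hypothetical.

table = [
    {"user_id": "u1", "ts": 100, "event": "login"},
    {"user_id": "u1", "ts": 200, "event": "purchase"},
    {"user_id": "u2", "ts": 150, "event": "login"},
]

def query(items, partition_key_value):
    """Query: retrieve only the items sharing one partition key value."""
    return [i for i in items if i["user_id"] == partition_key_value]

def scan(items, predicate):
    """Scan: walk the whole table and filter, which costs more as data grows."""
    return [i for i in items if predicate(i)]

print(query(table, "u1"))                            # items for user u1 only
print(scan(table, lambda i: i["event"] == "login"))  # full-table filter
```

The point of the sketch: query cost scales with one partition's data, scan cost with the whole table, which is why schema design around access patterns matters in DynamoDB.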
Amazon Redshift is a fast, powerful, fully managed data warehouse service that allows for petabyte-scale data warehousing at very low costs. It uses columnar storage and data compression techniques to dramatically reduce I/O and allow for very fast query performance. Redshift automatically provisions clusters on optimized hardware and scales easily from terabytes to petabytes of data with no downtime. It integrates with popular BI tools and simplifies tasks like provisioning, administration, backup/restore and scaling the data warehouse.
The document discusses serverless architectures using AWS Lambda and Amazon API Gateway. It provides background on moving from monolithic to microservices architectures. It then covers AWS Lambda functions, event sources, and networking environments. Amazon API Gateway is presented as a way to build multi-tier serverless applications. Common serverless architecture patterns and best practices for AWS Lambda, API Gateway, and general serverless development are outlined. The document concludes with a demonstration of a simple CRUD backend using Lambda and DynamoDB with API Gateway.
By using a Data Lake, you no longer need to worry about structuring or transforming data before storing it. A Data Lake on AWS enables your organization to more rapidly analyze data, helping you quickly discover new business insights. Join us for our webinar to learn about the benefits of building a Data Lake on AWS and how your organization can begin reaping their rewards. In this webinar, select APN Partners will share their specific methodology for implementing a Data Lake on AWS and best practices for getting the most from your Data Lake.
Do you want to run your code without the cost and effort of provisioning and managing servers? Find out how in this deep dive session on AWS Lambda, which allows you to run code for virtually any type of application or back end service – all with zero administration. During the session, we’ll look at a number of key AWS Lambda features and benefits, including automated application scaling with high availability; pay-as-you-consume billing; and the ability to automatically trigger your code from other AWS services or from any web or mobile app.
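As a rough illustration of the zero-administration model described above, here is a minimal Lambda-style handler invoked locally. The `(event, context)` signature is the standard Python Lambda interface; the event shape is a hypothetical API Gateway-style payload, not taken from the session.

```python
# Minimal AWS Lambda handler sketch. Lambda calls handler(event, context);
# the event payload shape below is an invented API Gateway-style example.
import json

def handler(event, context):
    # Pull an optional query-string parameter, defaulting if absent.
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Invoked locally with a sample event (context is unused here):
resp = handler({"queryStringParameters": {"name": "AWS"}}, None)
print(resp["body"])
```

In real use, an event source (API Gateway, S3, Kinesis, and so on) supplies the event and Lambda handles scaling and availability; nothing in the function itself manages servers.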
This document outlines an agenda for an AWS Cost Management workshop. The agenda includes introductions and sessions on AWS Cost Explorer, AWS Budgets, AWS Reservations, and AWS Cost & Usage Reports. It provides overviews of AWS cost management products and highlights recent features including budget redesigns, forecasting enhancements, and reserved instance management updates.
This overview presentation discusses big data challenges and provides an overview of the AWS Big Data Platform by covering:
- How AWS customers leverage the platform to manage massive volumes of data from a variety of sources while containing costs.
- Reference architectures for popular use cases, including connected devices (IoT), log streaming, real-time intelligence, and analytics.
- The AWS big data portfolio of services, including Amazon S3, Kinesis, DynamoDB, Elastic MapReduce (EMR), and Redshift.
- The latest relational database engine, Amazon Aurora – a MySQL-compatible, highly available relational database engine that provides up to five times better performance than MySQL at one-tenth the cost of a commercial database.
Created by: Rahul Pathak,
Sr. Manager of Software Development
A quick tour in 16 slides of Amazon's Redshift clustered, massively parallel database.
Find out what differentiates it from the other database products Amazon has, including SimpleDB, DynamoDB and RDS (MySQL, SQL Server and Oracle).
Learn how it stores data on disk in a columnar format and how this relates to performance and interesting compression techniques.
Contrast Redshift with a MySQL instance and discover how the clustered architecture can help dramatically reduce query time.
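The columnar-storage and compression point above can be illustrated with a toy run-length encoder. This is a simplified stand-in for Redshift's real column encodings, using invented sample rows.

```python
# Sketch of why columnar layout helps: a query that reads one column touches
# far less data, and a sorted, low-cardinality column compresses very well
# with run-length encoding. Sample rows are invented.
rows = [("us", 1), ("us", 2), ("us", 3), ("eu", 4), ("eu", 5)]

# Row storage interleaves columns; column storage groups each column's values.
region_col = [r[0] for r in rows]   # ["us", "us", "us", "eu", "eu"]

def rle(values):
    """Run-length encode a column into (value, run_length) pairs."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)
        else:
            out.append((v, 1))
    return out

print(rle(region_col))  # [("us", 3), ("eu", 2)]
```

Five stored values become two pairs; at warehouse scale, that kind of reduction is what cuts I/O and speeds up queries.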
Best Practices for Implementing Your Encryption Strategy Using AWS Key Management Service (Amazon Web Services)
AWS Key Management Service (KMS) is a managed service that makes it easy for you to create and manage the encryption keys used to encrypt your data. In this session, we will dive deep into best practices learned by implementing AWS KMS at AWS’ largest enterprise clients. We will review the different capabilities described in the AWS Cloud Adoption Framework (CAF) Security Perspective and how to implement these recommendations using AWS KMS. In addition to sharing recommendations, we will also provide examples that will help you protect sensitive information on the AWS Cloud.
AWS Serverless Application Model (SAM) is a template driven tool for creating and managing serverless applications. In just a few lines of code you can define complex AWS Lambda based serverless applications, security permissions, and advanced configuration capabilities. Join us as we dive deep into best practices and tricks for using SAM at scale, including how to make the most of the dynamic template capabilities of SAM, how to use advanced features such as deployment preferences and policy templates, and how to debug serverless applications with SAM CLI.
Speaker: Chris Munns - Principal Developer Advocate, AWS Serverless Applications, AWS
This document provides an introduction to Amazon Aurora, AWS's managed relational database service. It discusses how Aurora was built to provide the speed and availability of commercial databases at the simplicity and cost-effectiveness of open source databases. The document outlines key Aurora features like automatic scaling, continuous backups, replication across Availability Zones, and integration with other AWS services. Customer case studies show how Aurora provides better performance at lower costs than alternative database options. The document also covers migration options and how Aurora offers a simpler, more cost-effective database solution than on-premises or self-managed options.
Amazon DynamoDB is a fully managed NoSQL database service for applications that need consistent, single-digit millisecond latency at any scale. This talk explores DynamoDB capabilities and benefits in detail and discusses how to get the most out of your DynamoDB database. We go over schema design best practices with DynamoDB across multiple use cases, including gaming, AdTech, IoT, and others. We also explore designing efficient indexes, scanning, and querying, and go into detail on a number of recently released features, including JSON document support, Streams, and more.
In this session, we introduce AWS Glue, provide an overview of its components, and share how you can use AWS Glue to automate discovering your data, cataloging it, and preparing it for analysis.
This document provides an agenda and overview for a workshop on building a data lake on AWS. The agenda includes reviewing data lakes, modernizing data warehouses with Amazon Redshift, data processing with Amazon EMR, and event-driven processing with AWS Lambda. It discusses how data lakes extend traditional data warehousing approaches and how services like Redshift, EMR, and Lambda can be used for analytics in a data lake on AWS.
by Joyjeet Banerjee, Solutions Architect, AWS
Amazon Athena is a new serverless query service that makes it easy to analyze data in Amazon S3, using standard SQL. With Athena, there is no infrastructure to set up or manage, and you can start analyzing your data immediately. You don't even need to load your data into Athena; it works directly with data stored in S3. Level 200
In this session, we will show you how easy it is to start querying your data stored in Amazon S3, with Amazon Athena. First we will use Athena to create the schema for data already in S3. Then, we will demonstrate how you can run interactive queries through the built-in query editor. We will provide best practices and use cases for Athena. Then, we will talk about supported queries, data formats, and strategies to save costs when querying data with Athena.
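As a sketch of the "schema over data already in S3" idea above, the helper below builds the kind of CSV external-table DDL Athena accepts. The table name, columns, and bucket are invented examples, not from the session.

```python
# Build Athena-style DDL that lays a schema over CSV files already in S3.
# No data is loaded: the table definition just points at the S3 location.
# Table, column, and bucket names below are hypothetical.
def csv_table_ddl(table, columns, s3_location):
    cols = ",\n  ".join(f"{name} {ctype}" for name, ctype in columns)
    return (
        f"CREATE EXTERNAL TABLE {table} (\n  {cols}\n)\n"
        "ROW FORMAT DELIMITED FIELDS TERMINATED BY ','\n"
        f"LOCATION '{s3_location}';"
    )

ddl = csv_table_ddl(
    "access_logs",
    [("request_ts", "string"), ("status", "int"), ("bytes_sent", "bigint")],
    "s3://example-bucket/logs/",
)
print(ddl)
```

Once such a statement is run in Athena's query editor, the data in that S3 prefix is immediately queryable with standard SQL; converting to a columnar format like Parquet is one of the cost-saving strategies the session mentions.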
Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse service that makes it simple and cost-effective to efficiently analyze all your data using your existing business intelligence tools. You can start small for just $0.25 per hour with no commitment or upfront costs and scale to a petabyte or more for $1,000 per terabyte per year, less than a tenth of most other data warehousing solutions.
See a recording of the webinar based on this presentation here on YouTube: https://youtu.be/GgLKodmL5xE
Masterclass series webinars, including on-demand access to all of this year's recorded webinars: http://aws.amazon.com/campaigns/emea/masterclass/
Journey Through the Cloud webinar series, including on-demand access to all webinars so far this year: http://aws.amazon.com/campaigns/emea/journey/
Speaker: George Chiu 邱志威, Sr. Industry Consultant, Teradata
Learn how Netflix engages customers by leveraging Teradata as a critical component of its data and analytics platform to create a data-driven, customer-focused business.
The document discusses Amazon Web Services (AWS) machine learning and artificial intelligence tools, including Amazon Polly for text-to-speech, Amazon Lex for building conversational interfaces, and Amazon Rekognition for image and video analysis. It provides examples of how these tools work and how they can be used to build applications for tasks like flight booking, facial recognition, and chatbots.
Speaker: Bob Yin, Senior Product Specialist, Informatica
These Informatica Cloud offerings are pre-built packages for quick time-to-value for customers looking to fast-track cloud data management initiatives. For example, customers can quickly kick start a new Amazon Redshift data warehouse project and use Informatica Cloud Connector for Amazon Redshift to load it with meaningful connected data from cloud sources such as Salesforce.com or on-premises sources such as relational databases -- all within hours, not months.
Speaker: Xiaoyong Han, Solution Architect, AWS
Data collection and storage is a primary challenge for any big data architecture. In this webinar, gain a thorough understanding of AWS solutions for data collection and storage, and learn architectural best practices for applying those solutions to your projects. This session will also include a discussion of popular use cases and reference architectures. In this webinar, you will learn:
• An overview of the different types of data that customers are handling to drive high-scale workloads on AWS, and how to choose the best approach for your workload
• Optimization techniques that improve performance and reduce the cost of data ingestion
• Leveraging Amazon S3, Amazon DynamoDB, and Amazon Kinesis for storage and data collection
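One concrete piece of the Kinesis data-collection story above is record routing: Kinesis hashes each record's partition key with MD5 into a 128-bit space, and each shard owns a contiguous range of it. The sketch below mimics that routing locally; the four-shard even split is an assumption for illustration, and the key names are invented.

```python
# Local mimic of Amazon Kinesis record routing: the partition key is
# MD5-hashed to a 128-bit integer, and each shard owns a contiguous range
# of that hash space. A four-way even split is assumed here for simplicity.
import hashlib

NUM_SHARDS = 4

def shard_for(partition_key):
    """Return the index of the shard whose hash range contains this key."""
    h = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    return h * NUM_SHARDS // 2 ** 128  # map 128-bit hash onto shard indices

keys = ["sensor-1", "sensor-2", "sensor-3"]
print({k: shard_for(k) for k in keys})  # a given key always maps to one shard
```

Because the mapping is deterministic, all records for one partition key land on the same shard in order, while distinct keys spread load across shards - a useful fact when choosing partition keys for ingestion.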
Speaker: Jhen-Wei Huang, Solution Architect, AWS
Artificial Intelligence (AI) and deep learning are now ready to power your business, just as they power much of the innovation at Amazon.com - from autonomous drones and robots to Amazon Alexa and Amazon Go - and many other hard and important business problems. Come and learn why and how to get started with deep learning, and what you can expect from a future with better AI in the cloud and at the edge.
This is the complete deck presented at the Westin Calgary Hotel, on August 16th, 2016.
It covers the current state of the AWS Big Data solution set, and contains several use cases of Big Data and Machine Learning along with a tutorial on how to implement and use Big Data on the AWS Cloud Platform.
This document discusses building a big data application on AWS. It recommends starting with a business case and collecting data from various sources into AWS using services like Kinesis and Snowball. The data can then be stored cost-effectively in S3. Services like Redshift, EMR, and Athena allow users to process, analyze and explore the data at scale. Lambda and QuickSight help transform and visualize the data. The document provides an example reference architecture of a data lake on AWS and discusses how Trustpilot built one to address issues with their traditional data warehousing.
Businesses are generating more data than ever before.
Real-time data analytics requires IT infrastructure that often needs to be scaled up quickly, and running an on-premises environment in this setting has its limitations.
Organisations often require a massive amount of IT resources to analyse their data, and the upfront capital cost can deter them from embarking on these projects.
What’s needed is scalable, agile and secure cloud-based infrastructure at the lowest possible cost so they can spin up servers that support their data analysis projects exactly when they are required. This infrastructure must enable them to create proof-of-concepts quickly and cheaply – to fail fast and move on.
This session focuses on showing how companies can optimise their resources through cloud-based solutions, with an emphasis on differentiation, innovation, and reducing infrastructure risk.
By Ricardo Rentería of Amazon
Antoine Genereux takes us on a detailed overview of the Database solutions available on the AWS Cloud, addressing the needs and requirements of customers at all levels. He also discusses Business Intelligence and Analytics solutions.
Data warehousing in the era of Big Data: Deep Dive into Amazon Redshift (Amazon Web Services)
Analyzing big data quickly and efficiently requires a data warehouse optimized to handle and scale for large datasets. Amazon Redshift is a fast, petabyte-scale data warehouse that makes it simple and cost-effective to analyze all of your data for a fraction of the cost of traditional data warehouses. In this session, we take an in-depth look at data warehousing with Amazon Redshift for big data analytics. We cover best practices to take advantage of Amazon Redshift's columnar technology and parallel processing capabilities to deliver high throughput and query performance. We also discuss how to design optimal schemas, load data efficiently, and use workload management.
Understanding AWS Managed Database and Analytics Services | AWS Public Sector... (Amazon Web Services)
The world is creating more data in more ways than ever before. The average internet user in 2017 generates 1.5GB of data per day, with the rate doubling every 18 months. A single autonomous vehicle can generate 4TB per day. Each smart manufacturing plant generates 1PB per day. Storing, managing, and analyzing this data requires integrated database and analytic services that provide reliability and security at scale. AWS offers a range of managed data services that let customers focus on making data useful, including Amazon Aurora, RDS, DynamoDB, Redshift, Spectrum, ElastiCache, Kinesis, EMR, Elasticsearch Service, and Glue. In this session, we discuss these services, share our vision for innovation, and show how our customers use these services today. Learn More: https://aws.amazon.com/government-education/
It has never been easier to use AWS to design and build a data architecture that delivers insights and uncovers new opportunities to scale and grow your business. Join this workshop to learn how you can gain insights at scale with the right big data applications.
The introductory morning session will discuss big data challenges and provide an overview of the AWS Big Data Platform. We will also cover:
• How AWS customers leverage the platform to manage massive volumes of data from a variety of sources while containing costs.
• Reference architectures for popular use cases, including: connected devices (IoT), log streaming, real-time intelligence, and analytics.
• The AWS big data portfolio of services, including Amazon S3, Kinesis, DynamoDB, Elastic MapReduce (EMR) and Redshift.
• The latest relational database engine, Amazon Aurora - a MySQL-compatible, highly available relational database engine that provides up to five times better performance than MySQL at one-tenth the cost of a commercial database.
• Amazon Machine Learning - the latest big data service from AWS, which provides visualization tools and wizards that guide you through the process of creating machine learning (ML) models without having to learn complex ML algorithms and technology.
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
Speakers:
Neel Mitra - Solutions Architect, AWS
Roger Dahlstrom - Solutions Architect, AWS
AWS Summit 2013 | Singapore - Big Data Analytics, Presented by AWS, Intel and... (Amazon Web Services)
Learn more about the tools, techniques and technologies for working productively with data at any scale. This session will introduce the family of data analytics tools on AWS which you can use to collect, compute and collaborate around data, from gigabytes to petabytes. We'll discuss Amazon Elastic MapReduce, Hadoop, structured and unstructured data, and the EC2 instance types which enable high performance analytics.
This document discusses big data and AWS tools for managing it. It defines big data as data with high volume, velocity and variety. AWS provides scalable tools like EC2, EMR, Kinesis and Redshift to handle the ingestion, storage, processing and analysis of large and diverse datasets in the cloud. These tools work together in an integrated environment and auto-scale based on demand, providing a cost-effective solution for big data challenges. An example use case of real-time IoT analytics is presented to illustrate how different AWS products interact to build scalable data pipelines.
Big Data Architectural Patterns and Best Practices on AWS (Amazon Web Services)
This document discusses big data architectural patterns and best practices on AWS. It covers the evolution of big data approaches from batch processing to stream processing to machine learning. The key principles discussed are building decoupled systems, using the right tools for the job, leveraging managed AWS services, using log-centric design patterns, and being cost-conscious. An overview of major AWS services for data ingestion, storage, processing, analysis and consumption is also provided.
AWS Summit Singapore - Architecting a Serverless Data Lake on AWS (Amazon Web Services)
Unni Pillai, Specialist Solution Architect, ASEAN, AWS.
Daniel Muller, Head of Cloud Infrastructure, Spuul.
As the volume and types of data continue to grow, customers often have valuable data that is not easily discoverable or available for analytics. A common challenge for data engineering teams is architecting a data lake that can cater to the needs of diverse users - from developers to business analysts to data scientists.
In this session, we will dive deep into building a data lake using Amazon S3, Amazon Kinesis, Amazon Athena and AWS Glue. We will also see how AWS Glue crawlers can automatically discover your data, extracting and cataloguing relevant metadata to reduce the operational work of preparing your data for downstream consumers.
Furthermore, learn from our customer Spuul how they moved from data-warehouse-based analytics to a serverless data lake. Why and how did Spuul undertake this journey? Hear about the benefits and challenges they encountered.
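What a Glue crawler does can be caricatured as sampling records and inferring a column-to-type mapping for the data catalog. The toy inference below, with invented IoT-style records, is only a conceptual sketch of that idea, not the crawler's real logic.

```python
# Toy version of crawler-style schema inference: sample records, collect the
# types seen for each field, and produce a column -> type mapping suitable
# for registering in a catalog. Record shapes below are hypothetical.
def infer_schema(records):
    schema = {}
    for rec in records:
        for key, value in rec.items():
            schema.setdefault(key, set()).add(type(value).__name__)
    # Columns seen with mixed types fall back to "string", a common
    # conservative choice when inferring schemas.
    return {k: (v.pop() if len(v) == 1 else "string") for k, v in schema.items()}

sample = [
    {"device_id": "d1", "temp": 21.5},
    {"device_id": "d2", "temp": 19.0, "battery": 87},
]
print(infer_schema(sample))  # {'device_id': 'str', 'temp': 'float', 'battery': 'int'}
```

Note that `battery` is still catalogued even though it appears in only some records - the same reason crawlers are useful for semi-structured data with evolving shapes.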
How to build Forecasting services using ML and deep learning algorithms (Amazon Web Services)
Forecasting is an important process for a great many companies and is used in many areas to try to accurately predict the growth and distribution of a product, the resources needed on production lines, financial reporting, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a time component and then use an algorithm that, starting from the type of data analysed, produces an accurate forecast.
Big Data for Startups: how to create Big Data applications in Serverless mode (Amazon Web Services)
The variety and quantity of data created every day is growing ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large quantities of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the Cloud, and Serverless services in particular, allows us to break through these limits.
Let's see, then, how to develop Big Data applications rapidly, without worrying about infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and how to deploy your application in a few steps.
Twenty years ago, Amazon went through a radical transformation aimed at increasing its pace of innovation. Over this period we learned how changing our approach to application development allowed us to greatly increase agility and release velocity and, ultimately, to build more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only the application architecture but also the organisational structure, the development release pipelines, and even the operating model. We will also describe common approaches to modernisation, including the approach used by Amazon.com itself.
How to spend up to 90% less with containers and Spot Instances (Amazon Web Services)
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, leading to an average saving of 70% compared with On-Demand Instances. In this session we will look at the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of different kinds, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. AWS and FinConecta would therefore like to invite you to an Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make your startup's offering unique in the market with Machine Learning services (Amazon Web Services)
To create value and build a differentiated, recognisable offering, successful startups know how to combine established technologies with innovative, purpose-built components.
AWS provides ready-to-use services and, at the same time, lets you customise and build the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, with the help of a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automate the management and deployment of your EC2 instances (Amazon Web Services)
With the traditional approach to IT, implementing DevOps techniques was difficult for many years: until now they have often involved manual activities, occasionally causing application downtime and interrupting users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach at low cost for any kind of workload, guaranteeing greater system reliability and delivering significant improvements to business continuity.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet workloads.
Find out how to use AWS OpsWorks to guarantee the reliability of your application installed on EC2 instances.
Microsoft Active Directory on AWS to support your Windows Workloads (Amazon Web Services)
Want to know your options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorisation. In this session, we will discuss the options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis using artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event on Wednesday 14 October from 12:00 to 13:00 dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in VMware vSphere®-based cloud environments and access a wide range of AWS services, making full use of the AWS cloud while protecting existing VMware investments.
Many organisations take advantage of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernisation and refactoring, and performance risks can be introduced when moving applications out of on-premises data centers.
Build your first serverless ledger-based app with QLDB and NodeJS (Amazon Web Services)
Many companies today build applications with ledger-style functionality, for example to verify the history of credits and debits in banking transactions, or to track the flow of their products through the supply chain.
At the heart of these solutions are ledger databases, which provide a transparent, immutable, and cryptographically verifiable transaction log, but they are complex and costly tools to manage.
Amazon QLDB removes the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will see how to build a complete serverless application that uses QLDB's capabilities.
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for delivering an excellent user experience to end users. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon, and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will walk through several scenarios, seeing how AppSync can help address these use cases by building modern APIs with real-time and offline data-update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Oracle Database and VMware Cloud™ on AWS: debunking the myths (Amazon Web Services)
Many organizations reap the benefits of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads, however, can create complexity during application modernization and refactoring, on top of the performance risks that can arise when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and streamline the migration of Oracle workloads, accelerating the transformation to the cloud; they dive into the architecture and show how to take full advantage of VMware Cloud™ on AWS.
1) The document discusses building a minimum viable product (MVP) using Amazon Web Services (AWS).
2) It provides an example of an MVP for an omni-channel messenger platform, built starting in 2017, that connects ecommerce stores to customers via web chat, Facebook Messenger, WhatsApp, and other channels.
3) The founder discusses how they started with an MVP in 2017 with 200 ecommerce stores in Hong Kong and Taiwan, and have since expanded to over 5000 clients across Southeast Asia using AWS for scaling.
This document discusses pitch decks and fundraising materials. It explains that venture capitalists will typically spend only 3 minutes and 44 seconds reviewing a pitch deck. Therefore, the deck needs to tell a compelling story to grab their attention. It also provides tips on tailoring different types of decks for different purposes, such as creating a concise 1-2 page teaser, a presentation deck for pitching in-person, and a more detailed read-only or fundraising deck. The document stresses the importance of including key information like the problem, solution, product, traction, market size, plans, team, and ask.
This document discusses building serverless web applications using AWS services like API Gateway, Lambda, DynamoDB, S3 and Amplify. It provides an overview of each service and how they can work together to create a scalable, secure and cost-effective serverless application stack without having to manage servers or infrastructure. Key services covered include API Gateway for hosting APIs, Lambda for backend logic, DynamoDB for database needs, S3 for static content, and Amplify for frontend hosting and continuous deployment.
This document provides tips for fundraising from startup founders Roland Yau and Sze Lok Chan. It discusses generating competition to create urgency for investors, fundraising in parallel rather than sequentially, having a clear fundraising narrative focused on what you do and why it's compelling, and prioritizing relationships with people over firms. It also notes how the pandemic has changed fundraising, with examples of deals done virtually during this time. The tips emphasize being fully prepared before fundraising and cultivating connections with investors in advance.
AWS_HK_StartupDay_Building Interactive websites while automating for efficien... (Amazon Web Services)
This document discusses Amazon's machine learning services for building conversational interfaces and extracting insights from unstructured text and audio. It describes Amazon Lex for creating chatbots, Amazon Comprehend for natural language processing tasks like entity extraction and sentiment analysis, and how they can be used together for applications like intelligent call centers and content analysis. Pre-trained APIs simplify adding machine learning to apps without requiring ML expertise.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies managing Docker containers through an orchestration layer controlling deployment and lifecycle. In this session we present the service's main features, reference architectures for different workloads, and the simple steps needed to quickly migrate one or more of your containers.
1. AWS Big Data Solution Overview
Ivan Cheng (鄭志帆)
AWS Solutions Architect
2. What is Big Data?
When your data sets become so large and complex that you have to start innovating around how to collect, store, process, analyze, and share them.
3. Big Data: Unconstrained Growth
[Chart: data volumes growing from GB through TB and PB to EB and ZB]
• Unstructured data growth is explosive
• 95% of the 1.2 zettabytes of data in the digital universe is unstructured
• Machine data and IoT will only steepen the curve
• 70% of this data is user-generated content
Source: IDC, The Internet of Things: Getting Ready to Embrace Its Impact on the Digital Economy, March 2016.
8. AWS Big Data Benefits
• Immediate Availability. Deploy instantly. No hardware to procure, no infrastructure to maintain & scale.
• Broad & Deep Capabilities. Over 50 services and 100s of features to support virtually any big data application & workload.
• Trusted & Secure. Designed to meet the strictest requirements. Continuously audited, including certifications such as ISO 27001, FedRAMP, DoD CSM, and PCI DSS.
• Hundreds of Partners & Solutions. Get help from a consulting partner or choose from hundreds of tools and applications across the entire data management stack.
9. [Diagram: AWS big data services across Collect, Store, and Analyze stages, including Amazon Kinesis, AWS Direct Connect, AWS IoT, AWS Snowball, AWS Database Migration Service, AWS Data Pipeline, Amazon S3, Amazon Glacier, Amazon DynamoDB, Amazon EMR, Amazon Redshift, Amazon Machine Learning, Amazon QuickSight, Amazon Athena, Amazon EC2, Amazon Elasticsearch Service, AWS Lambda, and AWS Glue]
16. Amazon S3
• Store an unlimited number of objects
• Designed for 99.999999999% durability
• Serves as a data lake with integration with other AWS services (Amazon Kinesis, Amazon Redshift, Amazon EMR, etc.)
• Low cost with tiered storage (Standard, IA, Amazon Glacier) via lifecycle policies
• Secure: SSL, client/server-side encryption at rest
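The tiered-storage bullet can be sketched as a lifecycle configuration of the kind accepted by boto3's `put_bucket_lifecycle_configuration`; the rule name, key prefix, and day thresholds below are illustrative assumptions, not values from the slide.

```python
# Illustrative S3 lifecycle configuration: transition objects to
# cheaper tiers over time (Standard -> Standard-IA -> Glacier).
# Rule ID, prefix, and day thresholds are example values.
lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-down-logs",          # hypothetical rule name
            "Filter": {"Prefix": "logs/"},   # apply to this key prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},     # delete after one year
        }
    ]
}

# With boto3 this could be applied as (not executed here):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-data-lake", LifecycleConfiguration=lifecycle_config)
```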
17. Amazon DynamoDB
• Fully managed NoSQL database
• Fast, consistent performance (single-digit millisecond latency at any scale)
• Highly scalable: automatic scaling of throughput capacity
• Highly available and durable
• Stores a virtually unlimited amount of data
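As a sketch of the data-model side, a composite primary key (a partition key plus a sort key) might be declared as below; the table and attribute names are hypothetical examples, not from the slide.

```python
# Illustrative DynamoDB table definition: a composite primary key
# lets related items share a partition while remaining individually
# addressable by sort key. All names here are made up.
table_spec = {
    "TableName": "SensorReadings",                       # hypothetical
    "KeySchema": [
        {"AttributeName": "device_id", "KeyType": "HASH"},   # partition key
        {"AttributeName": "timestamp", "KeyType": "RANGE"},  # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "device_id", "AttributeType": "S"},
        {"AttributeName": "timestamp", "AttributeType": "N"},
    ],
    "BillingMode": "PAY_PER_REQUEST",  # or provisioned throughput
}

# boto3.client("dynamodb").create_table(**table_spec) would create it.
```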
18. Amazon Aurora
• Fully managed relational database service
• MySQL- and PostgreSQL-compatible relational database with up to 5x better performance running on the same hardware
• Security, availability, and reliability of commercial databases at 1/10th the cost
• Designed to offer greater than 99.99% availability
• Automatically grows storage as needed, from 10 GB up to 64 TB
• Achieves up to 500,000 reads and 100,000 writes per second
19. Amazon Redshift
• Fully managed, petabyte-scale, relational MPP data warehouse
• Built-in end-to-end security, including SSL connections and cluster encryption
• Fault tolerant: automatically recovers from disk and node failures
• Data automatically backed up to Amazon S3
• $1,000/TB/year; start at $0.25/hour. Provision in minutes; scale from 160 GB to 2 PB of compressed data with just a few clicks
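A quick back-of-envelope check of the pricing bullet, using only the two rates quoted on the slide:

```python
# Back-of-envelope check of the pricing figures quoted above.
HOURS_PER_YEAR = 24 * 365  # 8760

# Smallest configuration at the quoted $0.25/hour on-demand rate:
entry_cost_per_year = 0.25 * HOURS_PER_YEAR   # 2190.0 USD/year

# A 2 PB (2 * 1024 TB) cluster at the quoted $1,000/TB/year
# effective rate:
cost_2pb_per_year = 1000 * 2 * 1024           # 2,048,000 USD/year
```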
21. Amazon EMR
• Managed Hadoop framework
• Apache Hadoop, Hive, Spark, Zeppelin, Presto, HBase, Phoenix, Tez, Flink, etc.
• Auto Scaling clusters with support for On-Demand and Spot pricing
• Support for end-to-end encryption, IAM/VPC, S3 client-side encryption with customer-managed keys and AWS KMS
• Integrates with Amazon S3, Amazon DynamoDB, Amazon Kinesis, and Amazon Redshift
23. Amazon Elasticsearch Service
• Fully managed, reliable, and scalable Elasticsearch service
• Support for the ELK stack
• Integration options with other AWS services (CloudWatch Logs, Amazon DynamoDB, Amazon S3, Amazon Kinesis)
• Use cases: log analytics, full-text search, application monitoring, and more
24. Amazon Athena
• Serverless query service for querying data in S3 using standard SQL, with no infrastructure to manage
• Support for multiple data formats, including text, CSV, TSV, JSON, Avro, ORC, and Parquet
• Pay per query, based only on the data scanned. If you compress your data, you pay less and your queries run faster
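The pay-per-scan model can be illustrated with simple arithmetic. The $5 per TB scanned rate below is an assumption (the commonly cited Athena list price); check current regional pricing.

```python
# Athena bills per data scanned, so compression and columnar formats
# cut query cost directly. Rate is an assumed example value.
PRICE_PER_TB_SCANNED = 5.00  # USD, assumed

def query_cost(tb_scanned):
    """Cost of a single query that scans `tb_scanned` terabytes."""
    return tb_scanned * PRICE_PER_TB_SCANNED

raw_cost = query_cost(1.0)          # full scan of 1 TB of raw CSV: 5.0
compressed_cost = query_cost(0.25)  # same data at ~4x compression: 1.25
```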
25. Familiar Technologies Under the Covers
• Used for SQL queries: in-memory distributed query engine, ANSI-SQL compatible with extensions
• Used for DDL functionality: complex data types, a multitude of formats, support for data partitioning
26. Amazon QuickSight
• Fast, cloud-powered business analytics
• Easy to use, no infrastructure to manage
• Quick calculations with SPICE
• 1/10th the cost of legacy BI software
• Accessible from any browser or mobile device
27. AWS Glue
• Fully managed ETL (extract, transform, load) service
• Integrated data catalog, automatic schema discovery, ETL code generation, flexible job scheduler
• Integrated across a wide range of AWS services (Amazon RDS, databases running on Amazon EC2, Amazon Athena, etc.)
28. How AWS Glue Works
1. Build your data catalog
2. Generate and edit transformations
3. Schedule and run your jobs
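The three steps might be sketched with boto3's Glue client as below; crawler, database, job, and trigger names are hypothetical, and the service calls are shown as comments rather than executed.

```python
# Sketch of the three AWS Glue steps using boto3 (calls shown as
# comments, not executed; names like "raw-logs-crawler" are made up).
#
# 1. Build your data catalog: a crawler infers table schemas.
#    glue = boto3.client("glue")
#    glue.create_crawler(
#        Name="raw-logs-crawler", Role=role_arn, DatabaseName="datalake",
#        Targets={"S3Targets": [{"Path": "s3://my-bucket/raw/"}]})
#
# 2. Generate and edit transformations: Glue emits an editable ETL
#    script for the discovered tables.
#
# 3. Schedule and run your jobs, e.g. with a cron-style trigger:
trigger_spec = {
    "Name": "nightly-etl",                  # hypothetical trigger name
    "Type": "SCHEDULED",
    "Schedule": "cron(0 2 * * ? *)",        # 02:00 UTC daily
    "Actions": [{"JobName": "raw-to-parquet"}],
}
# glue.create_trigger(**trigger_spec) would register it.
```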
30. Amazon Kinesis
• Fully managed streaming service
• Scalable: handles any amount of streaming data
• Ingest, buffer, and process data in real time
• React quickly: derive insights in seconds
31. Amazon Kinesis
• Amazon Kinesis Streams: build your own custom applications that process or analyze streaming data
• Amazon Kinesis Firehose: easily load massive volumes of streaming data into Amazon S3, Amazon Redshift, and Amazon Elasticsearch
• Amazon Kinesis Analytics: easily analyze data streams using standard SQL queries
32. Amazon Kinesis Streams
• Reliably ingest and durably store streaming data at low cost
• Build custom real-time applications to process streaming data
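Under the hood, Kinesis Streams routes each record by the MD5 hash of its partition key into one shard's 128-bit hash-key range. A simplified model of that mapping, assuming evenly split shard ranges:

```python
import hashlib

# Simplified model of Kinesis Streams shard routing: the MD5 hash of
# the partition key (a 128-bit integer) falls into exactly one shard's
# hash-key range. Even shard splits are assumed here for simplicity.
def shard_for(partition_key: str, num_shards: int) -> int:
    h = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    shard_width = (2 ** 128) // num_shards
    return min(h // shard_width, num_shards - 1)

# All records sharing a partition key land in the same shard, which is
# what preserves per-key ordering within a stream.
```

Splitting or merging shards changes these ranges, which is how a stream scales its throughput up or down.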
33. Amazon Kinesis Firehose
• Reliably ingest and deliver batched, compressed, and encrypted data to S3, Amazon Redshift, and Amazon Elasticsearch Service
35. AWS Marketplace for Big Data Solutions
Hundreds of big data products are immediately available through the AWS Marketplace: Advanced Analytics, Database and Data Enablement, Business Intelligence
Fully Integrated | 1-click deployment | Pay-as-you-go pricing
37. Modern data architecture
[Diagram: data sources feed an Ingest stage that fans out into a Speed (real-time) path and a Scale (batch) path, converging on a Serving layer consumed by data analysts, data scientists, business users, engagement platforms, and automation/events]
Insights to enhance business applications, new digital services
38-45. Modern data architecture (built up across slides)
[Diagram series; each slide adds a layer to the architecture:
• Data sources: transactions, web logs/cookies, ERP, connected devices, social media
• Ingest: AWS Database Migration Service, AWS Direct Connect, internet interfaces, Amazon Kinesis
• Scale (batch) path: raw data in Amazon S3, staged data (data lake) in Amazon S3, ETL with Amazon EMR and AWS Glue
• Advanced analytics: MLlib, deep learning, Amazon ML
• Serving: data warehouse (Amazon Redshift), legacy apps (Amazon RDS), schemaless (Amazon Elasticsearch), direct query (Amazon Athena), near-zero latency (Amazon DynamoDB), semi/unstructured (Amazon EMR), fronted by Amazon QuickSight and Amazon API Gateway
• Speed (real-time) path: event capture (Amazon Kinesis), stream analysis (Amazon EMR), event scoring (Amazon AI), event and response handlers (AWS Lambda)
• Cross-cutting: AWS CloudTrail, AWS IAM, Amazon CloudWatch, AWS KMS]
Insights to enhance business applications, new digital services
48. [Diagram: reference architecture with a source-of-truth data lake on S3; Auto Scaling EC2 data ingestion services fed by users and data providers; normalization and optimization ETL clusters (EMR); query-optimized data on S3 with a shared metastore and reference data on RDS; data catalog & lineage and cluster management & workflow services as shared data services; batch analytic, ad hoc query, and query clusters (EMR); data marts (Amazon Redshift); analytics apps on Auto Scaling EC2]
>5 PB, up to 75 billion events per day