Notes

CloudTrail records all API actions/events in your account.
If you want these events delivered to your S3 bucket / CloudWatch Logs, you have to create a trail.
Read-only events, write-only events, all events; single region or all regions.
You can record invoke API operations for individual Lambda functions (data events).
Check encryption on the S3 bucket: AES-256 means SSE-S3 encryption.
There is up to a 15-minute delay before CloudTrail delivers logs to the S3 bucket.
CloudTrail event history: search/filter by event name.
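A minimal sketch of that event-history search with boto3 (the event name shown is just an illustration):

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # search CloudTrail event history, filtered by event name
    response = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
        MaxResults=10,
    )
    for event in response["Events"]:
        print(event["EventTime"], event["EventName"], event.get("Username"))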
CloudTrail - log file integrity:
If a hacker wants to change the content of a CloudTrail file, they could download it from the S3 bucket, change the content, gzip it again and upload it back to S3.
Validating CloudTrail log file integrity with the AWS CLI - run validate-log-file.sh
// the start time is from 2015 to now. After enabling validation, CloudTrail delivers a digest file (once every hour) to the S3 bucket. Wait an hour and run validate-log-file.sh with the AWS CLI (it matches all files against the digest).
"invalid: hash value doesn't match" (if the file content was modified), "invalid: not found" (if the file was deleted from the S3 bucket).
------ receiving CloudTrail log files from multiple accounts
1. Turn on CloudTrail in the account where the destination bucket will belong (111111111111 in this example). Do not turn on CloudTrail in any other accounts yet.
   For instructions, see Creating a trail.
2. Update the bucket policy on your destination bucket to grant cross-account permissions to CloudTrail.
   For instructions, see Setting bucket policy for multiple accounts.
3. Turn on CloudTrail in the other accounts you want (222222222222, 333333333333, and 444444444444 in this example). Configure CloudTrail in these accounts to use the same bucket belonging to the account that you specified in step 1 (111111111111 in this example).
   For instructions, see Turning on CloudTrail in additional accounts.

(my explanation)
1. Create trails in each account.
2. Set a bucket policy compatible with multiple accounts.
Assume one bucket is the central S3 bucket (for storing logs). S3 bucket -> permissions -> add resources to the bucket policy: "Resource": ["arn:aws:s3:::heh849641/AWSLogs/090844620602/*", "arn:aws:s3:::heh849641/AWSLogs/048464165/*"]
Just change the account number; this means we are allowing that account to store its logs in our main bucket.
If users in the other accounts want to read the bucket logs, you have to attach a bucket-read policy to those users, so that each of them can only read their own logs using the account-number prefix - this can be achieved through the bucket policy.
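A sketch of applying such a cross-account bucket policy with boto3 (the bucket name is the example from above, the account IDs are the example values from the AWS doc steps; check the exact statements against the documentation):

    import json
    import boto3

    s3 = boto3.client("s3")
    bucket = "heh849641"  # central logging bucket (example name from above)

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # CloudTrail must be able to check the bucket ACL
                "Sid": "AWSCloudTrailAclCheck",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:GetBucketAcl",
                "Resource": f"arn:aws:s3:::{bucket}",
            },
            {   # one Resource entry per account that is allowed to deliver logs
                "Sid": "AWSCloudTrailWrite",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:PutObject",
                "Resource": [
                    f"arn:aws:s3:::{bucket}/AWSLogs/111111111111/*",
                    f"arn:aws:s3:::{bucket}/AWSLogs/222222222222/*",
                ],
                "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
            },
        ],
    }

    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))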
------ Amazon Kinesis services - 3 types (Streams, Analytics, Firehose) (real-time streaming capability like Kafka)
Data is replicated to 3 AZs.
Streams - low-latency streaming ingest at scale.
Analytics - real-time analytics on streams using SQL (joins, aggregations, logs); autoscaling, no provisioning, real time; create streams out of real-time queries and send them to Streams / Firehose.
Firehose data delivery stream - for delivering data to somewhere like S3, Redshift, Elasticsearch.
Streams are divided into shards/partitions.
Producers send data to Kinesis; it distributes the data across shards and consumers read the data in real time.
Data retention - 1 day by default; it can go up to 7 days.
Multiple applications can consume the same stream at the same time.
In SQS, a message is consumed by only one application and then deleted.
Once data is inserted into Kinesis it cannot be deleted (immutable) - no manual deletion; it is removed after the retention period (up to 7 days).
One stream is made up of multiple shards.
Billing is per shard provisioned; you can have as many shards as you want.
Can send messages in a batch.
Plan shards / merge shards; shards can evolve.
Can increase/decrease the number of shards.
Records are ordered per shard, not across shards.
Record - data blob - anything (up to 1 MB/sec write per shard, otherwise hot shard problem and you get a ProvisionedThroughputExceededException).
Record key (sent along with the record) - allows your producer to direct the record to the correct shard.
Hot partition problem - one shard gets overloaded - so use a highly distributed partition key to avoid hot partitions.
Once you send data to Kinesis, Kinesis appends a sequence number (unique for each record).
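A minimal producer sketch with boto3, showing how the partition key directs a record to a shard (the stream name and key are made up):

    import json
    import boto3

    kinesis = boto3.client("kinesis")

    # the partition key is hashed to pick a shard; a well-distributed key avoids hot shards
    response = kinesis.put_record(
        StreamName="my-stream",            # assumed stream name
        Data=json.dumps({"user": "123", "action": "signup"}).encode(),
        PartitionKey="user-123",
    )
    print(response["ShardId"], response["SequenceNumber"])  # Kinesis appends the sequence number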
2 MB/sec read per shard across all consumers.
If 3 applications are consuming at the same time there is a possibility of throttling (each application may request 1 MB and the limit is 2 MB per shard, so together they request 3 MB per shard at the same time).
More consumers -> some throttling possible.
Producers - SDK; CloudWatch Logs can send logs to Kinesis Streams.
Consumers - Lambda, SDK, Firehose, Amazon Kinesis KCL (the KCL uses DynamoDB to checkpoint offsets, track the other workers and share the work among shards).
Each KCL app may run on Java. You cannot have more KCL applications than shards in your stream.
If you have 6 shards, you can have 6 KCL applications reading at the same time, all apps sharing the same DynamoDB table to share the reads.
----------------- AWS Kinesis Data Firehose - no scaling to manage, no need to provision (near real time - 60 seconds delay); Kinesis Streams - real time.
Data Firehose can send data to Redshift, S3, Elasticsearch, Splunk, but there is no data storage so no replay of data.
Pay for the amount of data going through Firehose.
Data transformation through Lambda.
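A minimal sketch of sending a record into a Firehose delivery stream with boto3 (the delivery stream name is assumed):

    import boto3

    firehose = boto3.client("firehose")

    # Firehose buffers records and delivers them to the destination (S3, Redshift, ES, Splunk)
    firehose.put_record(
        DeliveryStreamName="my-delivery-stream",       # assumed name
        Record={"Data": b'{"event": "page_view"}\n'},  # newline helps when records are batched into S3 objects
    )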
-----------
Kinesis Streams (200 ms latency for classic consumers) - write custom code for your own consumers / producers.
Manage scaling yourself (split/merge shards).
Use Lambda to insert data in real time into Elasticsearch.
---------------- Firehose creates delivery streams (to send data from the Kinesis agent / CloudWatch Logs to S3).
The larger the file / buffer size (32 MB) and buffer interval (120 seconds), the fewer API calls to S3.
Prefix - dataFirehose/ // to land in the right folder in S3; create a role (to grant permissions for Firehose to send data into S3) & create the stream.
Test with demo data.
For every 32 MB / 120 seconds, data is sent to S3.
Firehose - errors, monitoring, details.
--------------
The trust relationship says which service the role belongs to (which service can assume the role to access other services).
Lambda -> test -> general configuration -> timeout (how long your function can run); max is 15 minutes.
Choose a VPC if your function needs access to resources that live in a VPC. Select a VPC and your function will launch in that VPC (takes time) plus SG rules; then you can access RDS (which is private / lives in the VPC).
Debug & error handling - if the service invokes the Lambda function asynchronously, we can use a DLQ (dead letter queue). If the Lambda function fails 3 times, it is possible to send the event payload to SNS / a DLQ; this ensures the event is not lost and we can troubleshoot later.
X-Ray (a tracing service that lets you decompose how your functions are behaving & how long each API call is taking so you can understand where the bottlenecks are in your infrastructure) - active tracing records timing & error info for a subset of invocations.
Concurrency - how many copies of this function can run at the very same time (you need to open a service limit request to raise it).
CloudTrail can log the function's invocations for operational and risk auditing.
You can set a CloudWatch alarm on Lambda errors, and Lambda sends logs to CloudWatch.
You can attach a policy to the Lambda role (to be able to report job success / failure details back to the pipeline & send logs to CloudWatch Logs). In the pipeline, add an action group, Lambda as the action provider, and our function name.
Invoke a Lambda function in CodePipeline (check the AWS documentation).
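A sketch of the Lambda invoked from CodePipeline; the job id comes from the invocation event, and put_job_success_result / put_job_failure_result are the calls the attached policy has to allow:

    import boto3

    codepipeline = boto3.client("codepipeline")

    def handler(event, context):
        # CodePipeline passes the job details in the event
        job_id = event["CodePipeline.job"]["id"]
        try:
            # ... do the actual work here ...
            codepipeline.put_job_success_result(jobId=job_id)
        except Exception as exc:
            codepipeline.put_job_failure_result(
                jobId=job_id,
                failureDetails={"type": "JobFailed", "message": str(exc)},
            )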
------------ triggering Lambda / Lambda integration with other services
Lambda -> add trigger (API Gateway - this allows you to create a serverless, external-facing API that can invoke the Lambda function). A Lambda function can be invoked using the SDK, and that's why API Gateway provides an HTTP interface to Lambda functions. API Gateway gives you the ability to do authentication and security.
You can use an ALB in front of your Lambda functions; the ALB doesn't provide authentication or security.
CloudWatch Events allows us to react to any event in our infrastructure / in the cloud, create an event and assign a Lambda function to it.
CloudWatch Events schedule: create a cron / schedule, for example an event fired every hour that triggers a Lambda function, effectively creating serverless cron scripts.
So we can have a Lambda function invoked every hour using CloudWatch Events.
We can also react to CloudWatch Logs using a CloudWatch Logs subscription; that lets our Lambda function analyze logs in real time, look for any pattern we should be watching for, and possibly create something else out of it.
For CodeCommit, we can react to hooks: whenever someone commits, we could have a Lambda function look at the code being committed, ensure that no credentials were committed as part of it, or otherwise send a notification to an SNS topic.
If we enable DynamoDB Streams on our tables & have a Lambda function at the end of it, our Lambda function can react in real time to changes in DynamoDB. For example, if you have a users table and a user-signup event, the function can react to the signup and send a "hello" email to the signed-up user.

Kinesis Streams / Kinesis Analytics - real-time processing of data. Kinesis Streams / Kinesis Analytics feed into Lambda, and Lambda can react in real time to these events.

S3: whenever someone puts a new object into S3, it can trigger a Lambda workflow, e.g. create a quick thumbnail from the S3 object.
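A sketch of such an S3-triggered function (the event shape is what S3 sends to Lambda; the thumbnail step itself is left out):

    from urllib.parse import unquote_plus

    def handler(event, context):
        # S3 sends one or more records per invocation
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = unquote_plus(record["s3"]["object"]["key"])  # keys are URL-encoded
            print(f"new object: s3://{bucket}/{key}")
            # ... generate thumbnail, write it to another bucket, etc. ...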
** S3, SNS, SQS are asynchronous invocation integrations.
SNS, notification service: Lambda can react to SNS; this is an asynchronous invocation.
If Lambda reacts to an SNS message but doesn't succeed after 3 tries, the message is put into a dead-letter queue.
SQS: a Lambda function can process messages in our queue. If messages are not processed by Lambda, the message is put back into the SQS queue so another function/application can process it.
-------------
We have encryption for environment variables (define an environment variable, use KMS, encrypt the variable). Looking at the code of the function, we use the boto client to decrypt the environment variable. Attach a policy (resource: the KMS key ARN, write access -> decrypt) to the service role of the Lambda; it must have access to the KMS key to decrypt the variable.
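A minimal sketch of decrypting a KMS-encrypted environment variable with the boto client, assuming the variable (here DB_PASSWORD, a made-up name) holds the base64 ciphertext:

    import base64
    import os
    import boto3

    kms = boto3.client("kms")

    # the Lambda role must be allowed kms:Decrypt on the key used to encrypt the variable
    ciphertext = base64.b64decode(os.environ["DB_PASSWORD"])
    plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"].decode()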
Lambda can also access Parameter Store variables, but you must attach a policy (permissions for Lambda to access/read SSM / Parameter Store) to your Lambda service role.
Secrets Manager is another way to store secrets.
--------
An alias is a pointer to a Lambda function version.
Lambda versions are immutable; Lambda aliases are mutable.
We can create a dev alias representing our development environment, which most likely we want to point at the latest version of the function (dev -> latest version). Now users interact with our function through the alias rather than directly with the latest version (users -> dev -> latest version of the Lambda function). For users the endpoint stays the same even when we shift to a new version/deployment (blue/green).
Lambda -> qualifiers (versions, aliases). If you are happy with the first function: actions -> publish new version.
Once you publish a version, it cannot be changed.
After creating 2 versions, create an alias: actions -> alias (name: dev, version: latest; you can shift traffic between 2 versions). dev points to latest so both results will be the same.
Update the alias from dev to prod in the prod alias configuration.
CodeDeploy default "lambda linear 10 percent every 1 minute" (like blue/green in Lambda, shifting traffic).
Linear (10% gradual increase every minute) means 10 -> 20 -> 30 -> ... up to 100; used for blue/green / smooth deployment.
Canary is a 2-step deployment; linear is gradual.
********* deploy any Lambda function first & then test.
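A sketch of shifting traffic between two published versions through an alias with boto3 (function name, alias and version numbers are examples):

    import boto3

    lambda_client = boto3.client("lambda")

    # keep version 1 as the primary and send 10% of traffic to version 2 (canary-style shift)
    lambda_client.update_alias(
        FunctionName="my-function",      # assumed function name
        Name="prod",
        FunctionVersion="1",
        RoutingConfig={"AdditionalVersionWeights": {"2": 0.1}},
    )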
------ AWS Lambda - SAM (Serverless Application Model - to deploy applications in Lambda using code)
SAM is a CLI, providing you with a command-line tool (the AWS SAM CLI, like the Beanstalk CLI) to create & manage serverless applications.
sam --version                // visit the sam.md file for more info
sam init --runtime python    // creates a Python project
cd sam-app                   // you have an events folder, a hello_world folder, template.yaml (the Transform turns the template into a valid CloudFormation template; type - Events (the type of events passed to the Lambda function): Api, so it's API Gateway; to reach our function we go to path /hello, GET method), a tests folder (running some tests before deploy), and a readme.md file.
events are used to test our function locally with the SAM CLI.
sam build                    // resolves dependencies, copies source code. Build artifacts are in the .aws-sam/build folder, the build template is in .aws-sam/build/template.yml
sam local invoke "HelloWorldFunction" -e events/events.json   // uses Docker to fetch a Python image, start the application & display the result
sam local start-api          // starts a local API Gateway. Run this command in the project folder.
http://127.0.0.1:3000/hello  // test the Lambda function using this URL. The SAM framework helps to test functions.
// we package the code & deploy it. On CloudFormation -> you can see the stack being created.
Gradual code deployment (AWS documentation) - if you use AWS SAM to create serverless applications, it comes built in with CodeDeploy to help ensure safe Lambda deployments.
Change the message in app.py & template.yml (deployment preference) & build, package (new version in S3), deploy (the CloudFormation stack is updating). (Capabilities - we deploy through an IAM role, so the sam deploy command has access to the CloudFormation stack.) // a CodeDeploy application is created
(upload to S3, a Lambda version named live, CodeDeploy (blue/green, e.g. 90% to one version and 10% to the other through the alias), CloudFormation)
-------------------- X-Ray (debugging, tracing, service map) (launches a CloudFormation stack)
AWS X-Ray is a service that collects data about requests that your application
serves, and provides tools that you can use to view, filter, and gain insights into
that data to identify issues and opportunities for optimization. For any traced
request to your application, you can see detailed information not only about the
request and response, but also about calls that your application makes to
downstream AWS resources, microservices, databases, and web APIs.

After the application is deployed (you get the Beanstalk environment URL on the CloudFormation stack Outputs tab) -> click Start on the page (it sends a signup request every 2 seconds to the DynamoDB table).
On AWS -> X-Ray -> service map (it starts computing a service map for me; this is a way for AWS to map what is happening within my infrastructure). The client is me -> talking to the EC2 instance; refresh, and if EC2 talks to other services the service map also shows how the request went from EC2 to those services.
So EC2 with DynamoDB to store signup credentials, EC2 to SNS, EC2 to the metadata service.
Orange color indicates an error / some error in it (EC2); refresh, click on EC2 -> click error & view traces (trace list, trace overview).
Clicking on a trace id -> you get timeline information (you can see all API calls and resources (operation, ARN)) and the raw data.
If any API call fails -> traces -> click a trace (exceptions for any error information).
You can zoom into a segment on the graph & view errors / why that request took more time.
Traces, debugging, distributed applications - X-Ray (no logs here, only a visual graph, services overview / insights).
X-Ray - how each trace goes & flows through your entire service map.
Delete the entire CloudFormation stack (so your Beanstalk app will be deleted).
https://github.com/awslabs/eb-node-express-sample/tree/xray (eb-node application)
X-Ray CloudFormation stack resources (ElasticBeanstalkApplication, ElasticBeanstalkApplicationVersion, SampleInstanceProfile, SampleEBServiceRole, XRayWriteOnlyPolicy, SampleInstanceProfileRole)
Using Amazon CloudWatch & Amazon SNS to notify when AWS X-Ray detects elevated levels of latency, errors and faults in your application:

A CloudWatch Event (every 5 minutes) triggers a Lambda function -> it calls the GetServiceGraph API in X-Ray, and if the Lambda finds errors (throttle/404) in that graph it triggers a CloudWatch Event rule & this rule publishes to an SNS topic & the topic sends SMS/email. You can have multiple targets for a CloudWatch Event rule. You can also trigger an alarm & put it in the alarm state; if the alarm state happens too many times, a message is sent to an SNS topic to email.
X-Ray doesn't publish the graph by itself; you need to use the API (GetServiceGraph) to extract the graph from X-Ray.
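A sketch of the scheduled Lambda pulling the graph via GetServiceGraph and publishing problems to SNS (the topic ARN is a placeholder; the statistics fields should be double-checked against the X-Ray API reference):

    from datetime import datetime, timedelta
    import boto3

    xray = boto3.client("xray")
    sns = boto3.client("sns")

    def handler(event, context):
        end = datetime.utcnow()
        start = end - timedelta(minutes=5)
        graph = xray.get_service_graph(StartTime=start, EndTime=end)

        problems = []
        for service in graph["Services"]:
            stats = service.get("SummaryStatistics", {})
            errors = stats.get("ErrorStatistics", {}).get("TotalCount", 0)
            faults = stats.get("FaultStatistics", {}).get("TotalCount", 0)
            if errors or faults:
                problems.append(f"{service.get('Name')}: errors={errors} faults={faults}")

        if problems:
            sns.publish(
                TopicArn="arn:aws:sns:us-east-1:111111111111:xray-alerts",  # placeholder topic
                Message="\n".join(problems),
            )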
----------- Amazon ES (Elasticsearch - not serverless)
Use it when there is functionality that needs to be implemented / dashboard functionality that should be custom - that's Elasticsearch.
Kibana (more powerful than CloudWatch) - if you manage to store your metrics within Elasticsearch, Kibana will provide you a lot of different dashboard capabilities.
ELK - logs (Logstash agent) -> Elasticsearch -> Kibana.
Logstash - log ingestion mechanism, an alternative to CloudWatch Logs. Instead of using CloudWatch agents to send logs to the cloud, we would use the Logstash agent to send logs to Elasticsearch & we would visualize these logs using Kibana.
DynamoDB - if you want to search through items in a DynamoDB table, the only operation we can use is Scan (inefficient), because we have to go through the entire DynamoDB table. So to get a search mechanism on top of DynamoDB, we use the integration of a DynamoDB stream that sends data to a Lambda function (we have to create that Lambda function) so we can send the data up to Amazon Elasticsearch; then we build an API on top of Elasticsearch to search for items, for example return IDs, and use the item's ID to retrieve the data itself from the DynamoDB table.
// taking data from DynamoDB & indexing it, all the way through into Elasticsearch, using DynamoDB Streams & Lambda functions - the DynamoDB search pattern
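A rough sketch of the stream-processing Lambda in this pattern (the Elasticsearch endpoint and index are placeholders, and it assumes the domain policy lets the function post unsigned requests; in practice you would sign the requests):

    import json
    import urllib.request

    ES_ENDPOINT = "https://search-demo-es.example.com"   # placeholder domain endpoint
    INDEX = "users"

    def handler(event, context):
        for record in event["Records"]:
            if record["eventName"] not in ("INSERT", "MODIFY"):
                continue
            new_image = record["dynamodb"]["NewImage"]           # DynamoDB-typed attributes
            doc_id = new_image["id"]["S"]                        # assumes an "id" attribute
            doc = {k: list(v.values())[0] for k, v in new_image.items()}  # naive de-typing

            req = urllib.request.Request(
                url=f"{ES_ENDPOINT}/{INDEX}/_doc/{doc_id}",
                data=json.dumps(doc).encode(),
                headers={"Content-Type": "application/json"},
                method="PUT",
            )
            urllib.request.urlopen(req)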
CloudWatch Logs -> subscription filter -> Lambda function -> ES (Elasticsearch)            // real time, more expensive
CloudWatch Logs -> subscription filter -> Kinesis Data Firehose -> ES (Elasticsearch)      // near real time
Streaming CloudWatch Logs data to the Amazon ES service - AWS documentation
--------
Elasticsearch -> create new domain -> (production / multi-AZ, development / single AZ (you just need an endpoint), custom)
Domain name: demo-es, instance type: t2.small (up to 10 GB of EBS storage free), production (many data instances, many dedicated master instances), snapshot at midnight, encryption, (VPC or public access for the Elasticsearch cluster), set the domain access policy - allow access from my specific IP.
Kibana - set a filter and the dashboard changes accordingly.
Create a CloudTrail trail (enable CloudWatch Logs) -> go to CloudWatch (find the trail-related log group, select it and choose "stream to Amazon Elasticsearch Service"; AWS creates a Lambda function for us that sends the data into Elasticsearch). Create a role to allow Lambda to send logs to Elasticsearch. Log format - AWS CloudTrail.
// CloudWatch Logs uses Lambda to deliver log data to Elasticsearch
// on the Lambda function -> monitoring (see how many times this function has been invoked)
// on the Kibana home -> connect to your Elasticsearch index (use Elasticsearch data), index pattern: cwl-*, time filter: @timestamp, create index pattern. Select our new index pattern and you see all the event data on the Kibana dashboard.
// Logstash is an agent that runs on an EC2 instance & sends log lines directly into Elasticsearch, from which we can use Kibana again to visualize these log lines.
Finally remove the subscription filter (stops the Lambda function from working) and delete the Elasticsearch cluster/domain.
// log group subscription (creation of the Lambda, attach role), create the index in Elasticsearch (cwl-), then on Discover -> select this index
----------- tagging / metadata for AWS resources
1. Cost allocation by departments/environments. 2. CodeDeploy deployment groups (so CodeDeploy will deploy to specific EC2 instances). 3. Tag-based access control / IAM policies & tags (some users only have access to EC2 instances that are tagged "development") - see the sketch after this list.
Automated tool to tag resources - CloudFormation.
AWS Config - to define tag rules & track/ensure everything is compliant within the organization.
AWS tagging strategies - AWS documentation
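A sketch of the tag-based access control idea as an IAM policy document built as a Python dict (policy name, actions and tag values are just examples):

    import json
    import boto3

    iam = boto3.client("iam")

    # allow start/stop only on EC2 instances tagged Environment=development
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "*",
            "Condition": {"StringEquals": {"ec2:ResourceTag/Environment": "development"}},
        }],
    }

    iam.create_policy(PolicyName="dev-ec2-only", PolicyDocument=json.dumps(policy_document))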

------------ logs (application logs - custom log messages, stack traces; written to local files in the file system)
Usually streamed to CloudWatch Logs using a CloudWatch agent on EC2.
If using Lambda, direct integration with CloudWatch Logs.
If using ECS/Fargate (via the task definition) or Beanstalk, you have direct integration with CloudWatch Logs.
OS logs inform you about system behaviour (install the unified CloudWatch agent on EC2; it will stream log files into CloudWatch Logs).
AWS-managed access logs (ALB, NLB, CLB) - to S3.
CloudTrail logs to S3, CloudWatch. VPC Flow Logs to S3, CloudWatch.
Route 53 access logs to CloudWatch Logs.
CloudFront access logs to S3.
S3 access logs to S3.
----------------- CloudWatch Events / EventBridge (allows you to bring events from external API providers onto a CloudWatch Events bus)
Source (event pattern - CloudFormation, CodeStar, API Gateway, Auto Scaling, CodeBuild, CodeCommit, CodeDeploy).
For any API call, we can create an event with type "AWS API Call via CloudTrail" for any service and a specific operation (CreateImage). Whenever an image is created, this event can trigger a target (SNS, Lambda, etc.).
CloudTrail + CloudWatch rules.
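A sketch of such a rule created with boto3 (the rule name and target ARN are assumed; the pattern is the "AWS API Call via CloudTrail" shape for ec2 CreateImage):

    import json
    import boto3

    events = boto3.client("events")

    pattern = {
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {"eventSource": ["ec2.amazonaws.com"], "eventName": ["CreateImage"]},
    }

    events.put_rule(Name="on-create-image", EventPattern=json.dumps(pattern))
    events.put_targets(
        Rule="on-create-image",
        Targets=[{"Id": "notify", "Arn": "arn:aws:sns:us-east-1:111111111111:image-created"}],  # assumed SNS topic
    )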
----------- Step Functions use cases (sequential batch jobs, scaling serverless microservices, ETL, sending messages from automated workflows)
Create a state machine, new execution, Step Functions execution event history.
Step Functions (complex workflows) - invoking Lambda functions using Step Functions. You can integrate Step Functions with CloudWatch rules:
if a Step Functions execution state changes, then invoke a Lambda function / etc.
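A minimal sketch of a state machine definition that invokes a Lambda function, created and started with boto3 (the role and function ARNs are placeholders):

    import json
    import boto3

    sfn = boto3.client("stepfunctions")

    # Amazon States Language: a single Task state that invokes a Lambda function
    definition = {
        "StartAt": "InvokeLambda",
        "States": {
            "InvokeLambda": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:111111111111:function:my-function",  # placeholder
                "End": True,
            }
        },
    }

    machine = sfn.create_state_machine(
        name="demo-workflow",
        definition=json.dumps(definition),
        roleArn="arn:aws:iam::111111111111:role/StepFunctionsRole",  # placeholder
    )
    sfn.start_execution(stateMachineArn=machine["stateMachineArn"], input=json.dumps({"hello": "world"}))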
----------- API Gateway (creates an API for you, using a Swagger / OpenAPI 3 file); you can import an API from Swagger / OpenAPI 3.
API types: private, edge-optimized, regional.
Create a Lambda function, create the API, stages, deploy the API; the API invokes the Lambda function.
If you want to publish the API you can use API keys.
If you want to publish API logs to CloudWatch Logs: API -> settings. (API integration request - Lambda proxy or Lambda pass-through.)
Introducing Amazon API Gateway private endpoints (AWS documentation)
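With Lambda proxy integration the function has to return the status code / headers / body shape itself; a minimal sketch:

    import json

    def handler(event, context):
        # event contains the raw HTTP request (path, queryStringParameters, body, ...)
        name = (event.get("queryStringParameters") or {}).get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"hello {name}"}),
        }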

------------- GuardDuty (intelligent threat detection to protect your AWS account)

GuardDuty runs its analysis in the background; it uses the logs that are available and makes sure it is protecting you against malicious usage.
30-day trial, after that you have to pay; not cheap.
GuardDuty integration with Lambda.
Input data for GuardDuty is VPC Flow Logs, CloudTrail logs, DNS logs.
Enable GuardDuty to track/avoid malicious usage.
In GuardDuty, you can generate sample findings to understand the kind of findings that GuardDuty generates.
You can disable/suspend GuardDuty.
Lists - trusted / threat IP addresses.
Accounts - basically accounts that are sharing findings with you.
Findings -> all threats, informational; based on analysing all internal accounts it detects threats, viruses, bitcoin mining.
GuardDuty integration with CloudWatch rules: rule -> event type - GuardDuty finding; the target can be a Slack channel, email, Lambda function, SNS.
----------------- Macie (analyzes datasets in S3, ensures that your sensitive data is being protected & helps you classify your sensitive data (credit card / social security numbers) as critical content)
After you enable Macie, Macie gathers information about your buckets, such as the
storage size, encryption settings, and public access settings for each bucket.
Macie also begins monitoring the buckets for security and access control, notifying
you if the security of a bucket is reduced in some way. You can evaluate this
feature at no charge for the first 30 days, and review estimated costs before
charges begin to accrue.
(AWS documentation) Classify sensitive data in your environment using Amazon Macie:
launch the CloudFormation template so it stores some sensitive data in S3 buckets; through Macie we gather information about your buckets and sensitive data & display it on the Macie dashboard.
After creation of the CloudFormation stack -> in Macie -> integrations -> select your account id & select the S3 bucket (for analysing that bucket) -> start classification. (1 GB free tier for Macie to analyze the S3 bucket.)
On the dashboard -> critical assets, user sessions, total event occurrences.
On alerts - click on the "SSH key uploaded to S3" alert -> you can see in which bucket it is stored, a link to that object, and S3 object activity (if anyone accessed that key - all these details are in object activity (CloudTrail activity)).
Classify data, research. Macie integrates with multiple accounts.
Finding data leaks, PII info, SSH keys shared on S3 - Macie does the retrieval for you.
--------- Secrets Manager (rotate, manage, retrieve secrets; link it to a Lambda function that allows you to rotate credentials; tight integration with Postgres, RDS)
Secret types (credentials for RDS, Redshift, DocumentDB, other databases, API keys, etc.). Username, password, key-value pairs, JSON format; AWS KMS to encrypt secrets.
Automatic secrets rotation: create a Lambda function and it rotates your secrets.
Link secrets to any database (RDS, other).
View code to retrieve those secrets (Python, Java, JavaScript) - see the sketch below.
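A minimal sketch of retrieving a secret with boto3 (the secret name is an example):

    import json
    import boto3

    secrets = boto3.client("secretsmanager")

    value = secrets.get_secret_value(SecretId="prod/db-credentials")  # assumed secret name
    credentials = json.loads(value["SecretString"])                   # e.g. {"username": ..., "password": ...}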
--------- License Manager - allows you to map licenses to resources (EC2, AMI) (Windows, Oracle, Microsoft, SAP).
Rules (optional) attached to a resource so you can proactively monitor resources, track inventory, alert users.
Create a license configuration -> associate an AMI (custom AMI / any) / resources. Then every time an instance is created & running with this AMI, 1 license will be consumed.
License types - instances, cores, sockets, vCPUs. An SNS notification will be sent when the license limit is reached.
---------- cost allocation tags
Columns in reports; AWS-generated cost allocation tags are applied automatically on resources when you create them, with the prefix aws.
Cost allocation tags appear on the billing console (within 24 hrs). For user-defined tags, the prefix is user (applied automatically).
Billing -> cost allocation tags -> activate AWS-generated tags.
We won't see these tags directly attached to our resources; you can only see them in the billing console.
If you want to separate costs by an environment (dev, prod, test) tag, click on it and activate it.
*** use cost allocation tags (filter costs based on tags) - create an EC2 instance, tag it with an Environment (Dev) tag & now you can filter reports (budgets, cost explorer) based on this tag in the billing console.
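A sketch of grouping costs by the Environment tag with the Cost Explorer API (dates are placeholders; the tag must already be activated as a cost allocation tag):

    import boto3

    ce = boto3.client("ce")

    report = ce.get_cost_and_usage(
        TimePeriod={"Start": "2021-01-01", "End": "2021-02-01"},   # placeholder dates
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": "Environment"}],
    )
    for group in report["ResultsByTime"][0]["Groups"]:
        print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])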
--------------- data protection & network protection
SSL termination - the load balancer receives an SSL connection, terminates it & then passes the payload on directly to EC2.
It is possible to have multiple certificates per ALB using Server Name Indication (SNI).
CloudFront with SSL.
In-transit and at-rest encryption.
IPsec / TLS (VPN connection).
Network ACL (NACL) - VPC-level protection, stateless firewall.
WAF - blocks IPs (and other web-layer rules).
SG (stateful - if traffic is allowed in, then the response is also allowed to go out).
-----------------
Bucket policies + IAM roles (give access to your bucket).
Cross-region or same-region replication - replicate an S3 bucket (in the same account or another account) (S3 -> management -> replication -> rule, destination bucket, configure rule).
This is an asynchronous type of replication.
Lifecycle rule - transition after 90 days (current version / previous version to Glacier or different tiers, expiration of objects after some days) - see the sketch below.
You can have different lifecycle rules for different folders of an S3 bucket.
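A sketch of such a lifecycle rule applied with boto3 (bucket name, day counts and tiers are examples):

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="my-bucket",                           # assumed bucket name
        LifecycleConfiguration={"Rules": [{
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},                 # whole bucket; use a prefix for one folder
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "NoncurrentVersionTransitions": [{"NoncurrentDays": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }]},
    )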
S3 object -> properties -> storage class -> change to Glacier (to lower storage cost).
Object -> encryption -> AES-256 (Amazon server-side encryption, KMS key, or client-side encryption).
S3 -> properties -> server access logging -> enable logging (logs).
The logs can be queried using Athena to visualize the data.
------------ multi-AZ in AWS (EFS, ELB, ASG, Beanstalk: assign AZs)
You choose which zones (a, b, c) you want a network interface / load balancing in. Beanstalk relies on ELB and ASG, so it inherits the same AZs.
RDS, ElastiCache (multi-AZ: a synchronous standby database for failover in one other AZ. We use 2 AZs for RDS / ElastiCache; used for database failover within the same region ***).
Aurora - multi-AZ automatically; we can have a multi-AZ master database as well.
Elasticsearch - enable the multi-master setting to get a multi-AZ setup for Elasticsearch.
Jenkins - self-deployed; multi-AZ Jenkins - multi-master (leveraging an autoscaling group to deploy your Jenkins masters).
S3 - multi-AZ implicitly (except for the One Zone-Infrequent Access storage class).
DynamoDB - multi-AZ by default.
AWS managed services - multi-AZ.
EBS - single AZ.
EBS "multi-AZ": create an autoscaling group (this ASG is multi-AZ) with one instance in one of the specified AZs, and create a lifecycle hook for the terminate action -> make a snapshot of the EBS volume (a Lambda function backs up the EBS volume & creates a snapshot from it; once the snapshot is created, the Lambda tells the ASG the snapshot of the EBS volume is done). When an instance starts in another AZ, the launch-hook Lambda function creates a new EBS volume from the previous snapshot in that AZ & has a script to attach the EBS volume to the newly launched EC2 -> copy the snapshot, create the EBS volume & attach it to EC2. A sketch of the terminate-hook Lambda is below.
Pre-warm if you use io1.
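A rough sketch of the terminate-hook Lambda (invoked via the ASG lifecycle-hook CloudWatch event; the volume lookup is simplified and the event fields should be checked against the lifecycle-hook event format):

    import boto3

    ec2 = boto3.client("ec2")
    asg = boto3.client("autoscaling")

    def handler(event, context):
        detail = event["detail"]                      # ASG lifecycle hook event
        instance_id = detail["EC2InstanceId"]

        # snapshot every EBS volume attached to the terminating instance
        volumes = ec2.describe_volumes(
            Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
        )["Volumes"]
        for volume in volumes:
            ec2.create_snapshot(VolumeId=volume["VolumeId"],
                                Description=f"backup before terminating {instance_id}")

        # tell the ASG the hook work is done so termination can continue
        asg.complete_lifecycle_action(
            LifecycleHookName=detail["LifecycleHookName"],
            AutoScalingGroupName=detail["AutoScalingGroupName"],
            LifecycleActionResult="CONTINUE",
            InstanceId=instance_id,
        )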
----------------- multi-region
DynamoDB global tables (global applications) - using streams.
AWS Config aggregators - configuration of all regions in an account, multi-account.
RDS - cross-region (only for read replicas; a replica can also be promoted to the main database).
Aurora global database - only 2 regions (1 master, the other a read replica & disaster recovery); you can promote the other region as the main database.
EBS volume snapshots and AMIs are region-scoped (copy the AMI so other regions have access to the same AMI; store the AMI id in Parameter Store so you have a constant name across all your regions); RDS snapshots can be copied across regions.
VPC peering - private traffic between regions.
Route 53 - multi-region (uses a global network of DNS servers).
S3 - cross-region replication.
CloudFront - global CDN, more edge locations. Lambda@Edge deployment for global Lambda functions, A/B testing, aliases.
Route 53 - health checks for every region, multi-region (automatic DNS failover); based on the health checks Route 53 redirects traffic. Route 53 latency records based on the user's region.
A health check can also monitor any CloudWatch alarm (e.g. throttles of DynamoDB).
Create a CloudWatch metric -> integrate the metric with a new CloudWatch alarm -> it sends data to an SNS topic -> a Lambda connected to the SNS topic -> the Lambda sends notifications to a Slack channel (sketch below).
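A sketch of the Slack-notifier Lambda at the end of that chain (the webhook URL is a placeholder read from an environment variable):

    import json
    import os
    import urllib.request

    def handler(event, context):
        # the alarm message arrives wrapped in an SNS record
        message = event["Records"][0]["Sns"]["Message"]
        payload = {"text": f"CloudWatch alarm: {message}"}

        req = urllib.request.Request(
            url=os.environ["SLACK_WEBHOOK_URL"],        # placeholder webhook
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)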
----------- Amazon Inspector
AWS Inspector can be a target of CloudWatch Events to run assessments.
(AWS documentation) Inspector adds an event trigger to automatically run assessments.
Inspector -> assessment templates -> add schedule (assessment events; set up a recurring assessment every 7 days, so Inspector runs on EC2 every 7 days. This rule is actually created in CloudWatch rules).
The target (type) of that rule is running my assessment template; in CloudWatch the target is the Inspector assessment template.
After the EC2 instance is running, you can launch the Inspector assessment template with that EC2 as the target.
The assessment template can send to an SNS topic.
(AWS documentation) How to remediate Amazon Inspector security findings automatically
(AWS documentation) How to set up continuous golden AMI vulnerability assessments with Amazon Inspector
(AWS documentation) Announcing the golden AMI pipeline
(AWS documentation) Building an AMI factory process using EC2 SSM, Marketplace & Service Catalog
---------- EC2 instance compliance
AWS Config (integrates with SSM, Automation) - audit, compliance. Using CloudWatch Event rules we can have automation & remediation on top of AWS Config.
------ health service dashboard
status.aws.amazon.com (AWS services global health dashboard for all regions, status history; RSS is a way to subscribe to events from it whenever you have an RSS reader).
----------------
