Notes
Turn on CloudTrail in the other accounts you want (222222222222, 333333333333, and
444444444444 in this example). Configure CloudTrail in these accounts to use the
same bucket belonging to the account that you specified in step 1 (111111111111 in
this example).
(my explanation)
s3: whenever someone puts new objects into s3, the event can trigger a lambda workflow, e.g. creating a quick thumbnail from the s3 object
** s3, sns, sqs are asynchronous integration invocations
sns, notification service. lambda can react to sns messages. this is an asynchronous integration invocation
if lambda reacts to an sns message but doesn't succeed after 3 attempts, the message is put into a dead-letter queue
sqs: a lambda function processes messages in our queue. if a message is not processed by lambda, it gets put back into the sqs queue so another function/application can process it
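// a minimal sketch of the handler such an s3 put event could invoke (the actual thumbnail logic is omitted; the function name & behaviour are up to you):
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # s3 sends one or more records per asynchronous invocation
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # fetch the new object; real thumbnail generation would go here
        obj = s3.get_object(Bucket=bucket, Key=key)
        print(f"new object {key} in {bucket}: {obj['ContentLength']} bytes")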
-------------
we have encryption for environment variables (define an environment variable, encrypt it with kms. in the function code we use the boto client to decrypt the environment variable. the lambda service role must have a policy (resource = kms key ARN, allowing decrypt) so it has access to the kms key to decrypt the variable)
lambda can also access parameter store variables, but the lambda service role must have a policy granting permission to read ssm / parameter store
secrets manager is another way to store secrets
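// a rough sketch of both options (the env var DB_PASSWORD & the parameter /my-app/db-password are hypothetical names; the lambda service role needs kms:Decrypt / ssm:GetParameter for this to work):
import base64
import os
import boto3

kms = boto3.client("kms")
ssm = boto3.client("ssm")

def handler(event, context):
    # option 1: decrypt a kms-encrypted environment variable (stored base64-encoded)
    secret = kms.decrypt(
        CiphertextBlob=base64.b64decode(os.environ["DB_PASSWORD"])
    )["Plaintext"].decode("utf-8")

    # option 2: read a SecureString from ssm parameter store
    param = ssm.get_parameter(Name="/my-app/db-password", WithDecryption=True)
    return {"lengths": [len(secret), len(param["Parameter"]["Value"])]}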
--------
an alias is a pointer to a lambda function version
lambda versions are immutable & lambda aliases are mutable.
we can create a dev alias to represent our development environment, which we most likely want to point at the latest version of the function (dev -> latest version). users then interact with the alias instead of directly with the latest version (users -> dev -> latest version (lambda function)). the endpoint stays the same for users even when we shift to a new version/deployment (blue/green)
lambda -> qualifiers (versions, aliases). if you are happy with the first function: actions -> publish new version
once you publish a version, it cannot be changed
after creating 2 versions, create an alias: actions -> create alias (name = dev, version = latest; you can shift traffic between 2 versions). dev points to latest, so both results will be the same.
to promote from dev to prod, update the prod alias configuration to point at the new version
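// a small boto3 sketch of pointing an alias at a version & shifting traffic (function name, alias name & version numbers are placeholders):
import boto3

lam = boto3.client("lambda")

# keep 90% of traffic on version 1 & route 10% to version 2 through the prod alias
lam.update_alias(
    FunctionName="my-function",
    Name="prod",
    FunctionVersion="1",
    RoutingConfig={"AdditionalVersionWeights": {"2": 0.1}},
)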
CodeDeployDefault.LambdaLinear10PercentEvery1Minute (like blue/green traffic shifting in lambda)
linear (10% gradual increase every minute) means 10 -> 20 -> 30 -> ... until 100; used for blue/green / smooth deployments
canary is a 2-step deployment; linear is gradual
********* for any lambda function, deploy first & then test
------aws lambda - SAM (Serverless Application Model - to deploy applications in lambda using code)
sam is a cli, providing you with a command line tool (the aws sam cli, like the beanstalk cli) to create & manage serverless applications.
sam --version               // visit sam.md file for more info
sam init --runtime python   // creates a python project
cd sam-app                  // you get an events folder, a hello_world folder, template.yaml (the transform turns this template into a valid cloudformation template; the type of Events (events passed to the lambda function) is Api, so it's api gateway; to reach our function we go to path /hello with the GET method), a tests folder (running some tests before deploy), and a readme.md file.
the events folder is used to test our function locally with the sam cli
sam build                   // resolves dependencies & copies source code. build artifacts are in the .aws-sam/build folder, the build template is in .aws-sam/build/template.yml
sam local invoke "HelloWorldFunction" -e events/events.json   // uses docker to fetch a python image, start the application & display the result
sam local start-api         // starts a local api gateway. run this command in the project folder
http://127.0.0.1:3000/hello // test the lambda function using this url. the sam framework helps to test functions
// we package the code & deploy it. on cloudformation -> you can see the stack being created.
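// the package & deploy step is roughly the following (bucket & stack names are placeholders):
sam package --s3-bucket YOUR_BUCKET --output-template-file packaged.yaml
sam deploy --template-file packaged.yaml --stack-name sam-app --capabilities CAPABILITY_IAM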
gradual code deployment (aws documentation) - if you use aws sam to create serverless applications, it comes built in with codedeploy to help ensure safe lambda deployments
change the message in app.py, set the deployment preference in template.yml, then build, package (new version in s3) & deploy (the cloudformation stack is updating). (capabilities - we are deploying through an iam role, so the sam deploy command needs access to the cloudformation stack) // a codedeploy application is created
(upload to s3, a new lambda version with an alias named live, codedeploy (blue/green, e.g. 90% of traffic to one version & 10% to the other through the alias), cloudformation)
-------------------- x-ray (debugging, tracing, service map) (launches a cloudformation stack)
AWS X-Ray is a service that collects data about requests that your application
serves, and provides tools that you can use to view, filter, and gain insights into
that data to identify issues and opportunities for optimization. For any traced
request to your application, you can see detailed information not only about the
request and response, but also about calls that your application makes to
downstream AWS resources, microservices, databases, and web APIs.
on aws -> x-ray -> service map (it starts computing a service map for me; this is a way for aws to map what is happening within my infrastructure). the client is me -> talking to an ec2 instance; refresh, and if ec2 talks to other services, the service map also shows how the request went from ec2 to those services
so ec2 with dynamodb to store signup credentials, ec2 to sns, ec2 to the metadata service
an orange colour indicates an error / some error in it (ec2); refresh, click on ec2 -> click error & view traces (trace list, trace overview)
click on a trace id -> you get timeline information (you can see all api calls & resources (operation, arn)) plus raw data
if any api call fails -> traces -> click a trace (exceptions show any error information)
you can zoom into a segment on the graph & view errors / why that request took a longer duration
traces, debugging, distributed applications - x-ray (no logs here, only a visual graph, services overview / insights)
x-ray - how each trace goes & flows through your entire service map
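// a small sketch of instrumenting a function with the x-ray sdk for python (aws-xray-sdk); patch_all() makes downstream boto3 calls show up as subsegments / nodes on the service map. the table name is hypothetical:
import boto3
from aws_xray_sdk.core import patch_all

patch_all()  # instruments boto3/botocore so downstream calls are traced

dynamodb = boto3.client("dynamodb")

def handler(event, context):
    # this call shows up as a dynamodb node on the x-ray service map
    return dynamodb.get_item(
        TableName="signups",
        Key={"id": {"S": event.get("id", "1")}},
    )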
delete the entire cloudformation stack (so your beanstalk app will be deleted)
https://github.com/awslabs/eb-node-express-sample/tree/xray (eb-node application)
x-ray cloudformation stack resources (ElasticBeanstalkApplication, ElasticBeanstalkApplicationVersion, SampleInstanceProfile, SampleEBServiceRole, XRayWriteOnlyPolicy, SampleInstanceProfileRole)
using amazon cloudwatch & amazon sns to notify when aws x-ray detects elevated levels of latency, errors and faults in your application
a cloudwatch event (every 5 minutes) triggers a lambda function -> it calls the GetServiceGraph api in x-ray; if the lambda finds errors (throttles / 404s) in that graph it triggers a cloudwatch event rule & this event rule publishes to an sns topic & the topic sends an sms/email. you can have multiple targets for a cloudwatch event rule. you can also trigger an alarm & put it into the alarm state; if the alarm state happens too many times then you send a message to an sns topic to email.
x-ray doesn't publish the graph by itself; you need to use the api (GetServiceGraph) to extract the graph from x-ray
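// a simplified sketch of the lambda the 5-minute cloudwatch event could trigger: it pulls the service graph & publishes straight to an sns topic when it sees errors/faults (the aws sample emits a cloudwatch event instead; the topic arn is a placeholder):
import datetime
import boto3

xray = boto3.client("xray")
sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:111111111111:xray-alerts"

def handler(event, context):
    end = datetime.datetime.utcnow()
    start = end - datetime.timedelta(minutes=5)
    graph = xray.get_service_graph(StartTime=start, EndTime=end)
    for service in graph.get("Services", []):
        stats = service.get("SummaryStatistics", {})
        errors = stats.get("ErrorStatistics", {}).get("TotalCount", 0)
        faults = stats.get("FaultStatistics", {}).get("TotalCount", 0)
        if errors or faults:
            sns.publish(
                TopicArn=TOPIC_ARN,
                Message=f"{service.get('Name')}: {errors} errors, {faults} faults in the last 5 min",
            )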
----------- amazon es (elasticsearch - not serverless)
there is functionality that needs to be implemented / dashboard functionality that should be custom, for elasticsearch.
kibana (more powerful than cloudwatch) - if you manage to store your metrics within elasticsearch, then kibana will provide you with a lot of different dashboard capabilities.
elk - logs (logstash agent) -> elasticsearch -> kibana
logstash - log ingestion mechanism, an alternative to cloudwatch logs. for this, instead of using cloudwatch agents to send logs to the cloud, we would use the logstash agent to send logs to elasticsearch & we would visualize those logs using kibana
dynamodb - if you want to search through items in a dynamodb table, the only operation we can use is scan (inefficient), because we have to go through the entire dynamodb table. so we build a search mechanism on top of dynamodb: through the integration of a dynamodb stream that sends data to a lambda function (which we have to create), we can send the data up to amazon elasticsearch; then we build an API on top of elasticsearch to search for items, for example returning ids, & use an item's id to retrieve the data itself from the dynamodb table
// taking data from dynamodb & indexing it, all the way through into elasticsearch using dynamodb streams & lambda functions - the dynamodb search pattern
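// a bare-bones sketch of that indexing lambda: it reads dynamodb stream records & PUTs them into an elasticsearch index over http (the endpoint, index name & id attribute are assumptions; a real domain behind iam auth would need signed requests):
import json
import os
import urllib.request

ES_ENDPOINT = os.environ.get("ES_ENDPOINT", "https://search-demo-es.example.com")

def handler(event, context):
    # each record is an INSERT/MODIFY/REMOVE coming from the dynamodb stream
    for record in event.get("Records", []):
        if record["eventName"] in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"]["NewImage"]
            doc_id = new_image["id"]["S"]
            req = urllib.request.Request(
                f"{ES_ENDPOINT}/items/_doc/{doc_id}",
                data=json.dumps(new_image).encode("utf-8"),
                headers={"Content-Type": "application/json"},
                method="PUT",
            )
            urllib.request.urlopen(req)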
cloudwatch logs -> subscription filter -> lambda function -> es (elasticsearch)   // realtime, more expensive
cloudwatch logs -> subscription filter -> kinesis data firehose -> es (elasticsearch)   // near realtime
streaming cloudwatch logs data to the amazon es service - aws documentation
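// wiring a subscription filter with boto3 could look roughly like this (log group, filter name & destination arn are placeholders; the destination lambda also needs a resource-based permission letting logs.amazonaws.com invoke it):
import boto3

logs = boto3.client("logs")

logs.put_subscription_filter(
    logGroupName="/aws/cloudtrail/demo",
    filterName="to-elasticsearch",
    filterPattern="",  # empty pattern = forward every log event
    destinationArn="arn:aws:lambda:us-east-1:111111111111:function:LogsToElasticsearch",
)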
--------
elasticsearch -> create new domain -> (production / multi az, development / single az (you just need an endpoint), custom)
domain name - demo-es, instance type - t2.small (up to 10 gb of free ebs storage), production (many data instances, many dedicated master instances), snapshot at midnight, encryption, (vpc or public access for the elasticsearch cluster), set domain access policy - allow access from my specific IP
kibana - set a filter so the dashboard changes accordingly
create a cloudtrail trail (enable cloudwatch logs) -> go to cloudwatch (find the trail-related log group, select it & choose stream to amazon elasticsearch (aws creates a lambda function for us that sends data into elasticsearch)). create a role to allow lambda to send logs to elasticsearch. log format - aws cloudtrail
// cloudwatch logs uses lambda to deliver log data to elasticsearch
// on the lambda function -> monitoring (see how many times this function has been invoked)
// on the kibana home -> connect to your elasticsearch index (use elasticsearch data), index pattern - cwl-*, time filter - @timestamp, create index pattern. select our new index pattern & you see all the events data on the kibana dashboard
// logstash is an agent that runs on an ec2 instance & sends log lines directly into elasticsearch, from which we can use kibana again to visualize these log lines.
finally, remove the subscription filter (stops the lambda function from working) & delete the elasticsearch cluster/domain.
// log group subscription (creation of the lambda, attach role), creates an index in elasticsearch (cwl-). then on discover -> select this index
----------- tagging / metadata for aws resources
1. cost allocation according to departments/environments 2. codedeploy deployment groups (so codedeploy will deploy to specific ec2 instances) 3. tag-based access control / iam policies & tags (some users only have access to ec2 instances that are tagged development)
an automated tool to tag resources - cloudformation
aws config - to define tag rules & track them, ensuring everything is compliant within the organization
aws tagging strategies - aws documentation
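// a tiny boto3 sketch of tagging an instance (the instance id & tag values are made up); the same tags can then drive cost allocation, codedeploy deployment groups or tag-based iam conditions:
import boto3

ec2 = boto3.client("ec2")

ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[
        {"Key": "Environment", "Value": "development"},
        {"Key": "Department", "Value": "finance"},
    ],
)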
------------ logs (application logs - custom log messages, stack traces. written to local files in the file system)
usually streamed to cloudwatch logs using a cloudwatch agent on ec2.
if using lambda, there is a direct integration with cloudwatch logs
if using ecs/fargate (in the task definition) or beanstalk, you have a direct integration with cloudwatch logs
os logs inform you about system behaviour (install the unified cloudwatch agent on ec2, it will stream log files into cloudwatch logs).
aws managed access logs (alb, nlb, clb) - to s3
cloudtrail logs to s3, cloudwatch. vpc flow logs to s3, cloudwatch
route53 access logs to cloudwatch logs
cloudfront access logs to s3
s3 access logs to s3
----------------- cloudwatch events / eventbridge (allows you to bring external events from external API providers onto cloudwatch events)
source (event pattern - cloudformation, codestar, apigateway, autoscaling, codebuild, codecommit, codedeploy)
for any api call, we can create an event with type AWS API Call via CloudTrail for any service & a specific operation (CreateImage). whenever an image is created, this event can trigger a target (sns, lambda, etc)
cloudtrail + cloudwatch rules
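// a sketch of that rule with boto3 (the rule name & sns topic arn are placeholders); the pattern matches CreateImage calls recorded by cloudtrail:
import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="on-create-image",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["ec2.amazonaws.com"],
            "eventName": ["CreateImage"],
        },
    }),
)
events.put_targets(
    Rule="on-create-image",
    Targets=[{"Id": "notify", "Arn": "arn:aws:sns:us-east-1:111111111111:image-created"}],
)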
----------- step functions use cases (sequential batch jobs, scaling serverless microservices, ETL, sending messages from automated workflows)
create a state machine, start a new execution, view the step function execution event history
step functions (complex workflows) - invoking lambda functions using step functions. you can integrate step functions with cloudwatch rules:
if a step function execution state changes, then invoke a lambda function / etc.
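// starting an execution from code is a single call (the state machine arn & input are placeholders):
import json
import boto3

sfn = boto3.client("stepfunctions")

execution = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:111111111111:stateMachine:demo",
    input=json.dumps({"order_id": "42"}),
)
print(execution["executionArn"])  # follow it in the execution event history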
----------- API GATEWAY (creates an api for you, using a swagger / open api 3 file); you can also import an API from swagger / open api 3
api endpoint types (private, edge optimized, regional)
create a lambda function, create an API, stages, deploy the API, the api invokes the lambda function
if you want to publish the API, you can use API keys
if you want to publish API logs to cloudwatch logs, then API -> settings (integration request - lambda proxy vs plain lambda integration)
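// with lambda proxy integration the whole http request arrives in the event & the handler must return a statusCode/headers/body shape, roughly like this sketch (the query parameter name is arbitrary):
import json

def handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }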
introducing Amazon API Gateway Private Endpoints (aws documentation)
rds - cross region (only for read replicas; a replica can also be promoted to the main database)
aurora global database - only 2 regions (1 master, the other a read replica & disaster recovery). you can promote any region as the main database
ebs volume snapshots, AMIs (region scoped; copy the ami so other regions have access to the same ami; store the AMI id in parameter store so you have a constant name across all your regions), RDS can be copied across regions
vpc peering - private traffic between regions
route 53 - multi region (uses a global network of dns servers)
s3 - cross region replication
cloudfront - global cdn, more edge locations. lambda@edge deployment for global lambda functions, a/b testing with aliases
route 53 - health check for every region, multi region (automatic dns failover). based on health checks route53 redirects traffic. route 53 latency records based on the user's region
a health check can also monitor any cloudwatch alarm (e.g. throttles of dynamodb)
create a cloudwatch metric -> integrate the metric with a new cloudwatch alarm -> it sends data to an sns topic -> a lambda connected to the sns topic -> the lambda sends notifications to a slack channel
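// the last lambda in that chain could be as small as this sketch (SLACK_WEBHOOK_URL is an assumed env var holding a slack incoming-webhook url):
import json
import os
import urllib.request

SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]

def handler(event, context):
    # sns delivers the alarm payload inside Records[].Sns.Message
    for record in event.get("Records", []):
        message = record["Sns"]["Message"]
        req = urllib.request.Request(
            SLACK_WEBHOOK_URL,
            data=json.dumps({"text": message}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)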
----------- amazon inspector
aws inspector can be the target of cloudwatch events to run assessments
(aws documentation) inspector adds an event trigger to automatically run assessments
inspector -> assessment templates -> add schedule (assessment events, set up a recurring assessment every 7 days (inspector runs on ec2 every 7 days); this rule is actually created in cloudwatch rules)
the target (type) of that rule is running my assessment template; in cloudwatch the target is the inspector assessment template
after the ec2 instance is running, you can launch the inspector assessment template with that ec2 as the target
the assessment template can send to an sns topic
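// what the scheduled rule effectively does every 7 days, as a single boto3 call (the template arn & run name are placeholders):
import boto3

inspector = boto3.client("inspector")

run = inspector.start_assessment_run(
    assessmentTemplateArn="arn:aws:inspector:us-east-1:111111111111:target/0-aaaaaaaa/template/0-bbbbbbbb",
    assessmentRunName="weekly-run",
)
print(run["assessmentRunArn"])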
(aws documentation) how to remediate amazon inspector security findings automatically
(aws documentation) how to set up continuous golden AMI vulnerability assessments with amazon inspector
(aws documentation) announcing the golden AMI pipeline
(aws documentation) building an AMI factory process using ec2 ssm, marketplace & service catalog
---------- ec2 instance compliance
aws config (integrates with ssm, automation) - audit, compliance. using cloudwatch event rules we can have automation & remediation on aws config
------ service health dashboard
status.aws.amazon.com (aws global service health dashboard for all regions, status history; rss is a way to subscribe to events from this dashboard whenever you have an rss reader)
----------------