Use the following policy to provide read-only access to the EC2 service for the N. Virginia (us-east-1) region only.

Create a policy, paste the JSON below into the policy's JSON section, and assign this policy to the user.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:Describe*",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:Region": "us-east-1"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": "elasticloadbalancing:Describe*",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:Region": "us-east-1"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "cloudwatch:ListMetrics",
                "cloudwatch:GetMetricStatistics",
                "cloudwatch:Describe*"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:Region": "us-east-1"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": "autoscaling:Describe*",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:Region": "us-east-1"
                }
            }
        }
    ]
}
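The same policy can also be created and attached from the AWS CLI; a quick sketch, where the policy name, file name, user name, and account ID below are placeholders:

aws iam create-policy --policy-name ec2-readonly-us-east-1 --policy-document file://ec2-readonly-us-east-1.json
aws iam attach-user-policy --user-name example-user --policy-arn arn:aws:iam::123456789012:policy/ec2-readonly-us-east-1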

Docker

Posted: July 14, 2018 in Devops

Docker has two editions:

Docker EE (Enterprise Edition)
Docker CE (Community Edition)

————————-
Example to run a container
————————-
docker container run -it --name=name_to_this_container image_name
docker container run -it --name=name_to_this_container -d image_name -> used to start the container in the background

docker container run -it --name=bb busybox
docker container run -it --name=bb1 -d busybox

————————-
To exit from a container
————————-
exit (or) ctrl+p ctrl+q

————————————
Check running and exited containers
————————————
docker ps
docker ps -a
docker container ls
docker container ls -a
————————————
To login into a container
————————————
docker exec -it container-name sh (or) bash
docker container exec -it container_name sh (or) bash
docker attach container-name

docker exec -it bb sh (or) bash
docker container exec -it bb sh (or) bash
docker attach bb -> if we log in with this command we need to use "ctrl+p ctrl+q" to exit; if we use the exit command the container will be stopped.

————————————-
To stop container
————————————-
docker stop container-name
docker container stop container-name

docker stop bb
docker container stop bb1

————————————
To start a stopped container
————————————
docker start container-name
docker container start container_name

docker start bb
docker container start bb

————————————
To remove a container
————————————
docker rm bb
docker container rm bb

Default Networks

The following networks exist on every Docker host:

* none
* host
* bridge (default)

The "Bridge" Network in more Detail

* Outbound traffic goes through a NAT gateway on the host
* Container IP addresses are randomly assigned from a private pool
* Containers on the same bridge can connect to each other by IP

The "Host" Network in more Detail

* For standalone containers, removes network isolation between the container and the Docker host and uses the host's networking directly.
* host is only available for swarm services on Docker 17.06 and higher

The "None" Network in more Detail

* Disables all networking for the container.
* Usually used in conjunction with a custom network driver.
* none is not available for swarm services
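As a quick illustration, containers can be attached to these built-in networks with the --network flag (busybox is used here as a throwaway test image):

docker container run -it --network=host --name=hostnet busybox
docker container run -it --network=none --name=nonet busybox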

DNS in Containers

* Docker runs an internal DNS server for containers
* The DNS server is accessible at 127.0.0.11 from within the container

docker container run -it --dns=192.168.1.1 --dns-search="example.com" --name=bb busybox

# this adds the corresponding entries to the /etc/resolv.conf file inside the container
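You can verify the entries from inside the container (using the container named bb from the example above):

docker container exec bb cat /etc/resolv.conf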

User-defined Bridge Networks

* Preferred way to connect containers
* Containers on separate networks are isolated from each other
* A container can connect to as many networks as needed
* Docker provides DNS-based service discovery using the container's name

docker network create --driver bridge frontend
docker network create --driver bridge backend

docker network ls

Now create a container on each network

docker container run -it --network=frontend -d --name=front busybox
docker container run -it --network=backend -d --name=back busybox

Launch a test container for a ping test

docker container run -it --network=frontend --name=test busybox
ping -c 2 front
ping -c 2 back

To connect the test container to a different network

docker network connect backend test
ping back

Docker Image

Image Layers

* Images are made of layers
* Each instruction in the Dockerfile creates a new layer
* Each layer is the set of differences from the layer below it
* Running containers have their own read-write layer

docker history --no-trunc nginx

root@ip-172-31-87-86:~# docker history ubuntu:15.04
IMAGE          CREATED       CREATED BY                                      SIZE     COMMENT
d1b55fd07600   2 years ago   /bin/sh -c #(nop) CMD ["/bin/bash"]             0B       -> read-only layer
<missing>      2 years ago   /bin/sh -c sed -i 's/^#\s*\(deb.*universe\)$…   1.88kB   -> read-only layer
<missing>      2 years ago   /bin/sh -c echo '#!/bin/sh' > /usr/sbin/poli…   701B     -> read-only layer
<missing>      2 years ago   /bin/sh -c #(nop) ADD file:3f4708cf445dc1b53…   131MB    -> read-only layer

(On top of these read-only image layers, a running container adds its own read-write container layer.)

* Layers are shared between images
* Existing layers do not have to be downloaded again
* This saves time and disk space

Sharing Images

* Each container gets its own “Container Layer”
* Changes only happen to the container layer
* The container layer is deleted when the container is deleted

Building an IMAGE

* docker build
* Looks for a "Dockerfile" in the base directory
* Use the -t option to name your image
* The image is only available if the build completes successfully
* Previous build layers will be reused
* The -f flag can be used to specify a different Dockerfile

docker build -t my-image_name .
docker build -t my-image_name -f Dockerfile-bleed .

Create a file called Dockerfile-copy-file and update the following

FROM httpd:latest
COPY index.html /usr/local/apache2/htdocs

Create an index.html file in the present working directory with the following content:

<html><body><h1>This file deployed using Dockerfile</h1></body></html>

Run the command below to build an image using the Dockerfile:

docker build -t demo -f Dockerfile-copy-file .

Create a container using the image you just built:

docker run -it -p 80:80 -d demo
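To verify the deployment, request the page from the Docker host; you should see the content of index.html (assuming nothing else is listening on port 80 on the host):

curl http://localhost/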

To find the iLO IP address we need the hponcfg tool, which can be installed using the following package: "hponcfg-4.3.0-0.x86_64".

Using the following command you can get the iLO IP address:

hponcfg -w /tmp/ilo.out

cat /tmp/ilo.out

 

 

<MOD_NETWORK_SETTINGS>
<SPEED_AUTOSELECT VALUE="Y"/>
<NIC_SPEED VALUE="10"/>
<FULL_DUPLEX VALUE="N"/>
<IP_ADDRESS VALUE="1.2.3.4"/>
<SUBNET_MASK VALUE="255.255.252.0"/>
<GATEWAY_IP_ADDRESS VALUE="1.2.3.254"/>
<DNS_NAME VALUE="hostname"/>
<PRIM_DNS_SERVER VALUE="8.8.8.8"/>
<DHCP_ENABLE VALUE="N"/>
<DOMAIN_NAME VALUE="domain.com"/>
<DHCP_GATEWAY VALUE="Y"/>
<DHCP_DNS_SERVER VALUE="Y"/>
<DHCP_STATIC_ROUTE VALUE="Y"/>
<DHCP_WINS_SERVER VALUE="Y"/>
<REG_WINS_SERVER VALUE="Y"/>
<PRIM_WINS_SERVER VALUE="1.2.3.6"/>
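To change these settings, you can edit the exported XML and apply it back to the iLO with the same tool (a sketch; hponcfg -f reads and applies a configuration file):

hponcfg -f /tmp/ilo.out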

 

 

In CentOS 7, the "yum update" command can fail with duplicate package errors. I got a working solution for this issue; just follow these steps.

# yum update

Sample Output

** Found 48 pre-existing rpmdb problem(s), 'yum check' output follows:

Note the package names, for example:

avahi-libs-0.6.31-15.el7.x86_64 is a duplicate with avahi-libs-0.6.31-14.el7.x86_64

bash-4.2.46-19.el7.x86_64 is a duplicate with bash-4.2.46-12.el7.x86_64

bzip2-libs-1.0.6-13.el7.x86_64 is a duplicate with bzip2-libs-1.0.6-12.el7.x86_64

chkconfig-1.3.61-5.el7.x86_64 is a duplicate with chkconfig-1.3.61-4.el7.x86_64

cpio-2.11-24.el7.x86_64 is a duplicate with cpio-2.11-22.el7.x86_64

cyrus-sasl-lib-2.1.26-19.2.el7.x86_64 is a duplicate with cyrus-sasl-lib-2.1.26-17.el7.x86_64

1:dbus-libs-1.6.12-13.el7.x86_64 is a duplicate with 1:dbus-libs-1.6.12-11.el7.x86_64

elfutils-libelf-0.163-3.el7.x86_64 is a duplicate with elfutils-libelf-0.160-1.el7.x86_64

elfutils-libs-0.163-3.el7.x86_64 is a duplicate with elfutils-libs-0.160-1.el7.x86_64

file-libs-5.11-31.el7.x86_64 is a duplicate with file-libs-5.11-21.el7.x86_64

freetype-2.4.11-11.el7.x86_64 is a duplicate with freetype-2.4.11-10.el7_1.1.x86_64

glib2-2.42.2-5.el7.x86_64 is a duplicate with glib2-2.40.0-4.el7.x86_64

 

Then run the following command for each duplicate package name:

rpm -e --justdb avahi-libs-0.6.31-15.el7.x86_64
rpm -e --justdb bash-4.2.46-19.el7.x86_64
rpm -e --justdb bzip2-libs-1.0.6-13.el7.x86_64
rpm -e --justdb chkconfig-1.3.61-5.el7.x86_64
rpm -e --justdb cpio-2.11-24.el7.x86_64
rpm -e --justdb cyrus-sasl-lib-2.1.26-19.2.el7.x86_64
rpm -e --justdb 1:dbus-libs-1.6.12-13.el7.x86_64
rpm -e --justdb elfutils-libelf-0.163-3.el7.x86_64
rpm -e --justdb elfutils-libs-0.163-3.el7.x86_64
rpm -e --justdb file-libs-5.11-31.el7.x86_64
rpm -e --justdb freetype-2.4.11-11.el7.x86_64
rpm -e --justdb glib2-2.42.2-5.el7.x86_64

 

Then try the update again:

# yum update

 

It should work now.
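Alternatively, the package-cleanup utility from the yum-utils package can list and remove duplicate rpmdb entries in one pass; a sketch of the same cleanup:

yum install -y yum-utils
package-cleanup --dupes
package-cleanup --cleandupes
yum update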

How To Install OwnCloud 8 on Ubuntu 14.04


For those of you who didn't know, OwnCloud is free and open-source software which enables you to create a private "file-hosting" cloud. OwnCloud is similar to the Dropbox service, with the difference of being free to download and install on your private server. OwnCloud is built with PHP and uses MySQL (MariaDB), SQLite, or PostgreSQL as its backend database. OwnCloud also enables you to easily view and sync address books, calendar events, tasks, and bookmarks. You can access it via the good-looking and easy-to-use web interface, or install the OwnCloud client on your desktop or laptop machine (supports Linux, Windows, and Mac OS X).

This article assumes you have at least basic knowledge of Linux, know how to use the shell, and most importantly, that you host your site on your own VPS. The installation is quite simple. I will walk you through the step-by-step installation of OwnCloud 8 on Ubuntu 14.04.

Install OwnCloud 8 on Ubuntu 14.04

Step 1. First of all, log in to your server as root and make sure that all packages are up to date.
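A minimal sketch of this step:

apt-get update
apt-get -y upgrade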

Step 2. Install the Apache web server on your Ubuntu 14.04 VPS if it is not already installed.
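For example:

apt-get install -y apache2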

Step 3. Next, install PHP on your server.

Once the installation is done, add the PHP modules required by OwnCloud:
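A sketch of the PHP installation and a typical module set for OwnCloud 8 on Ubuntu 14.04 (the exact module list is an assumption; check the OwnCloud documentation for your version):

apt-get install -y php5 libapache2-mod-php5
apt-get install -y php5-gd php5-json php5-mysql php5-curl php5-intl php5-mcrypt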

Step 4. Install MySQL database server.

By default, MySQL is not hardened. You can secure MySQL using the mysql_secure_installation script; read and follow each step carefully. It will set a root password, remove anonymous users, disallow remote root login, and remove the test database, securing MySQL.
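A sketch of this step:

apt-get install -y mysql-server
mysql_secure_installation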

Step 5. Create a new MySQL database for OwnCloud using the following commands.
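A sketch, using placeholder database, user, and password names:

mysql -u root -p
mysql> CREATE DATABASE owncloud;
mysql> GRANT ALL PRIVILEGES ON owncloud.* TO 'ownclouduser'@'localhost' IDENTIFIED BY 'your_password';
mysql> FLUSH PRIVILEGES;
mysql> EXIT;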

Step 6. Installing OwnCloud 8. First we will need to download the latest stable release of OwnCloud on your server (at the time of writing, version 8.0.0).
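A sketch (the download URL follows the owncloud.org community download pattern and is an assumption):

wget https://download.owncloud.org/community/owncloud-8.0.0.tar.bz2
tar -xjf owncloud-8.0.0.tar.bz2 -C /var/www/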

Set the directory permissions:
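For example, giving the web server user ownership of the OwnCloud directory:

chown -R www-data:www-data /var/www/owncloud/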

Step 7. Configuring Apache for OwnCloud. While configuring the Apache web server, it is recommended that you enable .htaccess to get the enhanced security features; by default, .htaccess is disabled in Apache. To enable it, open your virtual host file and make sure AllowOverride is set to All. For example, here I used an external config file instead of modifying the main file, as sketched below.
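A sketch of such an external config file (the path and alias are assumptions), saved as /etc/apache2/conf-available/owncloud.conf:

Alias /owncloud /var/www/owncloud
<Directory /var/www/owncloud>
    AllowOverride All
</Directory>

Enable it with:

a2enconf owncloud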

Remember to restart all services related to the Apache server.
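For example:

a2enmod rewrite
service apache2 restart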

Step 8. Access the OwnCloud application. Navigate to http://your-domain.com/ and follow the easy instructions. Enter a username and password for the administrator user account, click on the 'Advanced options' hyperlink and enter the data directory (or leave the default setting), then enter the database username, database password, database name, and host (localhost), and click 'Finish setup'.

 

1. how to restore a DB to a point in time
2. how to restore a Linux server DB with PITR
3. how to fix the IP change for a restored instance
4. how do you restore an EC2 instance in a private subnet
5. when we restore an instance, what modifications are available
6. what is a private IP used for
7. what are managed services on AWS?
8. what is three-tier architecture?
9. can we deploy RDS in
10. I have 1 VPC, two servers (one web server and one DB server), and 1 public and 1 private subnet; how to make it highly available
11. how to manage failover in the above question?
12. how to restore a DB from an instance
13. how to create snapshots automatically

——————————–

 

1. what is your understanding of Virtualization?
2. how to secure an S3 bucket
3. can we encrypt data in an S3 bucket?
4. what is the diff between S3 and Glacier
5. what is cross-zone load balancing
6. when do we have multiple AZs
7. which layer of the OSI model is used by ELB
8. what are sticky sessions used for in ELB
9. how do you rate yourself in Linux
10. what does SAR provide
11. where are the kernel modules located
12. how to change the runlevel in Linux
13. how do you patch servers ( automation and manual )
14. can you please name some hypervisors ( ESXi, Hyper-V & Xen )
15. what is the diff between Xen and KVM
16. which hypervisor do Red Hat Virtualization and cloud products use
17. which hypervisor is AWS using
18. to what extent have you used scripting
19. different types of hosting in Apache

 

——————————–

1. in AWS, what type of role are you doing
2. what are the tasks that you are doing in VMware
3. what are the components required to build a cluster ( HBA, NIC cards, shared storage )
4. what are the versions of Red Hat you have worked on
5. what are the diffs between 5, 6 & 7 ( init, systemd, ext4 & xfs, udev rules, NIC naming )
6. how do we identify hard link and soft link files
7. have you worked on iptables & firewalld
8. how do you configure packet forwarding
9. any idea about the allow and deny files
10. what is the sticky bit
11. the system is in a hung state; what are the things you will check and how will you recover
12. if a server hangs, will you just reboot or take some other action on it?
13. will you collect logs for the server crash
14. how do I virtualize a NIC
15. how to add an additional IP address on the same NIC
16. how do we identify whether a disk is local or SAN
17. lsscsi command
18. when I create eth0.0, will it use the same MAC?
19. will the MAC ID be the same for a virtual NIC?
20. how to check whether a module is loaded or not ( lsmod )
21. how to load a module ( modprobe )
22. what is the proc directory
23. I have a PID and I want to hang (suspend) that process
24. I modified the application conf file; how do I reread the configuration
25. kill -8 ?
26. how to import a disk group
27. how to fail over a cluster
28. lvdisplay ? shows device, volume ?

 

————————————————-

1. how to find how many users are in Linux
2. how do you identify system and non-system users
3. how to disable user login
4. you want a directory where only the user who created a file can delete it; how to achieve this
5. if you have execute permission on a directory, what is the use
6. the password in Linux is stored in the shadow file, which only root has permission on; how can a non-root user change their password
7. what is the maximum amount of memory a process can consume
8. how do you make sure whether a system is alive or not
9. how to use DHCP in Linux
10. one Linux server is not able to get a DHCP IP; what are the things to check
11. what is the protocol used by traceroute
12. DNS working process

In order to migrate virtual machines from on-premises to AWS, we need to do the following things:

* Remove VMware Tools
* Create a user with sudo permission
* Install SSH
* Enable the DHCP option in the NIC configuration

 

Prerequisites

You must provide an Amazon S3 bucket and an IAM role named vmimport.

Amazon S3 Bucket

VM Import requires an Amazon S3 bucket to store your disk images, in the region where you want to import your VMs. You can create a bucket as follows, or use an existing bucket if you prefer.

(Optional) To create an S3 bucket

  1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. Choose Create Bucket.
  3. In the Create a Bucket dialog box, do the following:
    1. For Bucket Name, type a name for your bucket. This name must be unique across all existing bucket names in Amazon S3. In some regions, there might be additional restrictions on bucket names. For more information, see Bucket Restrictions and Limitations in the Amazon Simple Storage Service Developer Guide.
    2. For Region, select the region that you want for your AMI.
    3. Choose Create.

VM Import Service Role

VM Import requires a role to perform certain operations in your account, such as downloading disk images from an Amazon S3 bucket. You must create a role named vmimport with a trust relationship policy document that allows VM Import to assume the role, and you must attach an IAM policy to the role.

To create the service role

  1. Create a file named trust-policy.json with the following policy:
    {
       "Version": "2012-10-17",
       "Statement": [
          {
             "Effect": "Allow",
             "Principal": { "Service": "vmie.amazonaws.com" },
             "Action": "sts:AssumeRole",
             "Condition": {
                "StringEquals":{
                   "sts:Externalid": "vmimport"
                }
             }
          }
       ]
    }

    You can save the file anywhere on your computer. Take note of the location of the file, because you’ll specify the file in the next step.

  2. Use the create-role command to create a role named vmimport and give VM Import/Export access to it. Ensure that you specify the full path to the location of the trust-policy.json file.
    aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json
  3. Create a file named role-policy.json with the following policy, where disk-image-file-bucket is the bucket where the disk images are stored:
    {
       "Version": "2012-10-17",
       "Statement": [
          {
             "Effect": "Allow",
             "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
             ],
             "Resource": [
                "arn:aws:s3:::disk-image-file-bucket"
             ]
          },
          {
             "Effect": "Allow",
             "Action": [
                "s3:GetObject"
             ],
             "Resource": [
                "arn:aws:s3:::disk-image-file-bucket/*"
             ]
          },
          {
             "Effect": "Allow",
             "Action": [
                "ec2:ModifySnapshotAttribute",
                "ec2:CopySnapshot",
                "ec2:RegisterImage",
                "ec2:Describe*"
             ],
             "Resource": "*"
          }
       ]
    }
  4. Use the following put-role-policy command to attach the policy to the role created above. Ensure that you specify the full path to the location of the role-policy.json file.
    aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json

For more information about IAM roles, see IAM Roles in the IAM User Guide.

Upload the Image to Amazon S3

Upload your VM image file to your Amazon S3 bucket using the upload tool of your choice. For information about uploading files through the S3 console, see Uploading Objects into Amazon S3. For information about the Enhanced Uploader Java applet, see Using the Enhanced Uploader.
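For example, with the AWS CLI (using the bucket and key names from the containers.json examples below; the local file name is a placeholder):

aws s3 cp my-windows-2008-vm.ova s3://my-import-bucket/vms/my-windows-2008-vm.ova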

Import the VM

After you upload your VM image file to Amazon S3, you can use the AWS CLI to import the image. The CLI accepts either a URL (a public Amazon S3 file, or a signed GET URL for private Amazon S3 files) or the Amazon S3 bucket and path to the disk file.

Use the import-image command to create an import image task.

Example 1: Import an OVA

aws ec2 import-image --description "Windows 2008 OVA" --license-type BYOL --disk-containers file://containers.json

The following is an example containers.json file.

[
  {
    "Description": "Windows 2008 OVA",
    "Format": "ova",
    "UserBucket": {
      "S3Bucket": "my-import-bucket",
      "S3Key": "vms/my-windows-2008-vm.ova"
    }
  }
]

Example 2: Import Multiple Disks

aws ec2 import-image --description "Windows 2008 VMDKs" --license-type BYOL --disk-containers file://containers.json

The following is an example containers.json file.

[
  {
    "Description": "First disk",
    "Format": "vmdk",
    "UserBucket": {
      "S3Bucket": "my-import-bucket",
      "S3Key": "disks/my-windows-2008-vm-disk1.vmdk"
    }
  },
  {
    "Description": "Second disk",
    "Format": "vmdk",
    "UserBucket": {
      "S3Bucket": "my-import-bucket",
      "S3Key": "disks/my-windows-2008-vm-disk2.vmdk"
    }
  }
]

Check the Status of the Import Task

Use the describe-import-image-tasks command to return the status of an import task.

Status values include the following:

  • active — The import task is in progress.
  • deleting — The import task is being canceled.
  • deleted — The import task is canceled.
  • updating — Import status is updating.
  • validating — The imported image is being validated.
  • converting — The imported image is being converted into an AMI.
  • completed — The import task is completed and the AMI is ready to use.
aws ec2 describe-import-image-tasks --import-task-ids import-ami-abcd1234

(Optional) Cancel an Import Task

Use the cancel-import-task command to cancel an active import task.

aws ec2 cancel-import-task --import-task-id import-ami-abcd1234

 

 


That's it. You will have an AMI of your virtual machine in the region you used. Use that AMI to launch a new instance.

Overview

In this post, we’ll cover how to automate EBS snapshots for your AWS infrastructure using Lambda and CloudWatch.   We’ll build a solution that creates nightly snapshots for volumes attached to EC2 instances and deletes any snapshots older than 10 days.   This will work across all AWS regions.

Lambda offers the ability to execute “serverless” code which means that AWS will provide the run-time platform for us.   It currently supports the following languages: Node.js, Java, C# and Python.   We’ll be using Python to write our functions in this article.

We’ll use a CloudWatch rule to trigger the execution of the Lambda functions based on a cron expression.

IAM Role

Before we write any code, we need to create an IAM role that has permissions to do the following:

  • Retrieve information about volumes and snapshots from EC2
  • Take new snapshots using the CreateSnapshot API call
  • Delete snapshots using the DeleteSnapshot API call
  • Write logs to CloudWatch for debugging

In the AWS management console, we'll go to IAM > Roles > Create New Role. We name our role "ebs-snapshots-role".

For Role Type, we select AWS Lambda.   This will grant the Lambda service permissions to assume the role.

On the next page, we won’t select any of the managed policies so move on to Next Step.

Go back to the Roles page and select the newly created role.   Under the Permissions tab, you’ll find a link to create a custom inline policy.

Paste the JSON below for the policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:*"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": "ec2:Describe*",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateSnapshot",
                "ec2:DeleteSnapshot",
                "ec2:CreateTags",
                "ec2:ModifySnapshotAttribute",
                "ec2:ResetSnapshotAttribute"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

Create Snapshots Function in Lambda

Now, we can move on to writing the code to create snapshots. In the Lambda console, go to Functions > Create a Lambda Function > Configure function.

In our code, we'll be using the Boto3 library, which is the AWS SDK for Python.

Paste the code below into the code pane:

# Backup all in-use volumes in all regions

import boto3

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')

    # Get list of regions
    regions = ec2.describe_regions().get('Regions', [])

    # Iterate over regions
    for region in regions:
        print("Checking region %s" % region['RegionName'])
        reg = region['RegionName']

        # Connect to region
        ec2 = boto3.client('ec2', region_name=reg)

        # Get all in-use volumes in all regions
        result = ec2.describe_volumes(Filters=[{'Name': 'status', 'Values': ['in-use']}])

        for volume in result['Volumes']:
            print("Backing up %s in %s" % (volume['VolumeId'], volume['AvailabilityZone']))

            # Create snapshot
            result = ec2.create_snapshot(VolumeId=volume['VolumeId'], Description='Created by Lambda backup function ebs-snapshots')

            # Get snapshot resource
            ec2resource = boto3.resource('ec2', region_name=reg)
            snapshot = ec2resource.Snapshot(result['SnapshotId'])

            volumename = 'N/A'

            # Find name tag for volume if it exists
            if 'Tags' in volume:
                for tags in volume['Tags']:
                    if tags["Key"] == 'Name':
                        volumename = tags["Value"]

            # Add volume name to snapshot for easier identification
            snapshot.create_tags(Tags=[{'Key': 'Name', 'Value': volumename}])

The code will create snapshots for any in-use volumes across all regions.   It will also add the name of the volume to the snapshot name tag so it’s easier for us to identify whenever we view the list of snapshots.

Next, select the role we created earlier in the Lambda function handler and role section.

The default timeout for Lambda functions is 3 seconds, which is too short for our task.  Let’s increase the timeout to 1 minute under Advanced Settings.    This will give our function enough time to kick off the snapshot process for each volume.

Click Next then Create Function in the review page to finish.

Schedule Trigger as CloudWatch Rule

Navigate to the Triggers tab and click on Add Trigger, which brings up the trigger configuration window.


Selecting CloudWatch Event - Schedule from the dropdown list allows us to configure a rule based on a schedule.

You’ll be prompted to enter a name, description, and schedule for the rule.

It’s important to note that the times listed for the cron expression are in UTC.  In the example below, we are scheduling the Lambda function to run each weeknight at 11pm UTC.
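For reference, a schedule expression for 11pm UTC on weeknights could be written like this (CloudWatch cron fields: minutes, hours, day-of-month, month, day-of-week, year):

cron(0 23 ? * MON-FRI *)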

Testing

We can test our function immediately by clicking on the Save and Test button in the function page. This will execute the function and show the results in the console at the bottom of the page.


Logging

After verifying that the function runs successfully, we can take a look at the CloudWatch logs by clicking on the link shown in the Log Output section.

You’ll notice a Log Group was created with the name /aws/lambda/ebs-create-snapshots.  Select the most recent Log Stream to view individual messages:

11:00:19  START RequestId: bb6def8d-f26d-11e6-8983-89eca50275e0 Version: $LATEST
11:00:21  Backing up volume vol-0c0b66f7fd875964a in us-east-2a
11:00:22  END RequestId: bb6def8d-f26d-11e6-8983-89eca50275e0
11:00:22  REPORT RequestId: bb6def8d-f26d-11e6-8983-89eca50275e0 Duration: 3256.15 ms Billed Duration: 3300 ms Memory Size: 128 MB Max Memory Used: 40 MB

Delete Snapshots Function in Lambda

Let’s take a look at how we can delete snapshots older than the retention period which we’ll say is 10 days.

Before using the code below, you’ll want to replace account_id with your AWS account number and adjust retention_days according to your needs.

# Delete snapshots older than retention period

import boto3
from botocore.exceptions import ClientError

from datetime import datetime,timedelta

def delete_snapshot(snapshot_id, reg):
    print("Deleting snapshot %s" % (snapshot_id))
    try:
        ec2resource = boto3.resource('ec2', region_name=reg)
        snapshot = ec2resource.Snapshot(snapshot_id)
        snapshot.delete()
    except ClientError as e:
        print("Caught exception: %s" % e)

    return

def lambda_handler(event, context):

    # Get current timestamp in UTC (snapshot start times are UTC)
    now = datetime.utcnow()

    # AWS Account ID
    account_id = '1234567890'

    # Define retention period in days
    retention_days = 10

    # Create EC2 client
    ec2 = boto3.client('ec2')

    # Get list of regions
    regions = ec2.describe_regions().get('Regions', [])

    # Iterate over regions
    for region in regions:
        print("Checking region %s" % region['RegionName'])
        reg = region['RegionName']

        # Connect to region
        ec2 = boto3.client('ec2', region_name=reg)

        # Filtering by snapshot timestamp comparison is not supported
        # So we grab all snapshot id's
        result = ec2.describe_snapshots(OwnerIds=[account_id])

        for snapshot in result['Snapshots']:
            print("Checking snapshot %s which was created on %s" % (snapshot['SnapshotId'], snapshot['StartTime']))

            # Remove timezone info from snapshot in order for comparison to work below
            snapshot_time = snapshot['StartTime'].replace(tzinfo=None)

            # Subtracting snapshot time from now returns a timedelta
            # Check if the timedelta is greater than retention days
            if (now - snapshot_time) > timedelta(days=retention_days):
                print("Snapshot is older than configured retention of %d days" % (retention_days))
                delete_snapshot(snapshot['SnapshotId'], reg)
            else:
                print("Snapshot is newer than configured retention of %d days so we keep it" % (retention_days))
This is kept here for documentation purposes; source: https://www.codebyamir.com/blog/automated-ebs-snapshots-using-aws-lambda-cloudwatch

1.How to configure network
2.Kernel patching
3.Firmware update
4.Physical Server provisioning (database,app)
5.Virtual server provisioning
6.How to find read only file system
7.How to recover deleted lv
8.Diff between rpm -ivh and yum
9.Mounting NAS: trying to do showmount -e <filer IP>; what are the prerequisites for the NAS client
10.There are 8 NICs; how to find how many NICs have cables connected
11.One out of 4 NIC cables is connected for one IP and the rest 3 are not (how to confirm the cable is connected to a particular subnet)
12.RHEL 7 boot process vs RHEL 6 ( diff between 6 & 7 )
13.Mechanism changed in RHEL 7 for NIC card naming convention
14.What parameters change NIC name
15.Vxvm primary daemon
16.How to scan disk in vxvm
17.How to perform disk scan in Linux
18.What is "- - -" in disk scan
19.Multipath configuration
20.RAID concepts
21.what is striping and concatenation
22.what is port no of ntp
23.how you will restart network in rhel6 & 7
24.patching
25.df -h shows 100% full but du -csh shows 2%; how will you troubleshoot
26.how to set ip address (diff ways)
27.what is lvm configuration file
28.how to disable lvm
29.how to check/verify a port in a physical server ( ethtool - the port light will blink )
31.how to verify os install log
32.apache log location and home directory and how to change it.
33.how to kill all the process
34.traceroute
35.TcpDump
36.awk,sed
37.how to create public repo
38.passwd less login
39.NFS & Samba
40.netstat.
41.How to find a particular port's availability
42.What is a file system?
43.how to configure networks
44.Python Modules.
45.multipath configuration
46.RAID concepts
47.what is striping and concatenation
48.what is port no of ntp
49.how you will restart network
50.patching
51.kernel patching
52.df -h shows 100% full but du -csh shows 2%; how will you troubleshoot
53.how to set ip address
54.what is lvm configuration file
55.how to disable lvm

56.what is default vxvm daemon
57.how to scan disk in vxvm
58.how to check/verify a port in a physical server ( ethtool - the port light will blink )
59.how to verify os install log
60.apache log location and home directory and how to change it

AWS
61.how to configure a 2-tier application system in AWS
62.3-tier architecture
63.server migration pre-reqs
64.steps involved in migration
65.how to create automatic snapshots for EBS
66.how to migrate live db to RDS
67.EFS
68.how to create 50 TB file system
69.how many subnets are needed for ( 2 app + 2 db ) servers in a 2-AZ architecture
70.Lambda experience

—-
71.what is the strategy of swap memory allocation
72.is swap mandatory in Linux?
73.how to list all services running in Linux ( in RHEL 6: service --status-all , chkconfig --list ; in RHEL 7: systemctl list-unit-files & systemctl list-units --type service )
75.how to find the highest memory-utilizing process

——-

76.booting process
77.filesystem expansion
78.what is nice value
79.swapping and paging
80.I/O scheduler
81.how to check whether a machine is swapping and how to control it
82.how to change the kernel parameter
83.sysctl -w
84.echo command to scan lun
85.how to find out when patching
86.how to change the kernel from one version to another ( grub.conf )
87.what is initrd image
88.what is context switching
89.vmstat explain
90.page in and page out
91.how to find when the process started ( ps and lstart)
92.yum history
93.kernel version
94.how to check the port is opened
95.how to check whether a UDP port is open or not ( nmap, nc, lsof )
96.how to find zombie process
97.inode
98.how to boot a system with a secondary superblock
99.what is D state
100.how to remove the lun from server
101.how to check if multipath is working or not
102.naming convention of HBA card, LUN, SAN switch
103.how to see WWPN, WWNN, WWID
104.how to increase the inode number in filesystem
105.how to scan the lun