Today I came across a requirement to provide EC2 stop/start access for only some of our instances. Rather than hard-coding each instance ARN in the policy (which would need updating every time instances change), we created a policy based on a tag value: if an instance is tagged with the specified value, the user gets access to it.

Use the following IAM policy, attach it to the user, and add the tag key Owner with the value ekna to each instance the user should be able to manage.

 

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:RebootInstances"
            ],
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/Owner": "ekna"
                }
            },
            "Resource": [
                "arn:aws:ec2:us-east-1:088811122222:instance/*"
            ],
            "Effect": "Allow"
        },
        {
            "Effect": "Allow",
            "Action": "ec2:Describe*",
            "Resource": "*"
        }
    ]
}
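For reference, the tag itself can also be applied from the AWS CLI; the instance ID below is only a placeholder:

aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=Owner,Value=ekna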

Use the following JSON to create a custom policy that grants access to only one particular S3 bucket out of many. Replace "mys3bucket" with your bucket name. After creating the policy, attach it to the user who requires the access.

 





{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::mys3bucket",
                "arn:aws:s3:::mys3bucket/*"
            ]
        }
    ]
}
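If you prefer the CLI, the policy can be created and attached roughly like this; the file name, policy name, user name, and account ID are placeholders:

aws iam create-policy --policy-name s3-single-bucket --policy-document file://s3-single-bucket.json
aws iam attach-user-policy --user-name some-user --policy-arn arn:aws:iam::123456789012:policy/s3-single-bucket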

Use the following policy to provide read-only access to the EC2 service for the N. Virginia (us-east-1) region alone.

Create a policy, paste the JSON below into the JSON editor, and assign the policy to the user.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:Describe*",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:Region": "us-east-1"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": "elasticloadbalancing:Describe*",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:Region": "us-east-1"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "cloudwatch:ListMetrics",
                "cloudwatch:GetMetricStatistics",
                "cloudwatch:Describe*"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:Region": "us-east-1"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": "autoscaling:Describe*",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:Region": "us-east-1"
                }
            }
        }
    ]
}
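A quick way to sanity-check the policy as the user it is attached to (the regions below are only examples):

aws ec2 describe-instances --region us-east-1    # should succeed
aws ec2 describe-instances --region eu-west-1    # should be denied, assuming no other policy grants it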

Docker Tutorial

Posted: July 14, 2018 in Devops

Docker has two editions:

Docker EE (Enterprise Edition)
Docker CE (Community Edition)

————————-
Example to run a container
————————-
docker container run -it --name=name_to_this_container image_name
docker container run -it --name=name_to_this_container -d image_name --> used to start the container in the background

docker container run -it --name=bb busybox
docker container run -it --name=bb1 -d busybox

————————-
To exit from a container
————————-
exit (or) ctrl+p ctrl+q

————————————
Check running and exited containers
————————————
docker ps
docker ps -a
docker container ls
docker container ls -a
————————————
To login into a container
————————————
docker exec -it container-name sh (or) bash
docker container exec -it container_name sh (or) bash
docker attach container-name

docker exec -it bb sh (or) bash
docker container exec -it bb sh (or) bash
docker attach bb --> if we log in with this command we need to use "ctrl+p ctrl+q" to exit; if we use the exit command, the container will be stopped.

————————————-
To stop container
————————————-
docker stop container-name
docker container stop container-name

docker stop bb
docker container stop bb1

————————————
To start a stopped container
————————————
docker start container-name
docker container start container_name

docker start bb
docker container start bb

————————————
To remove a container
————————————
docker rm bb
docker container rm bb

Default Network

The following networks exist on every Docker host:

* none
* host
* bridge (default)

The “Bridge” Network in more Detail

* Traffic to outside hosts goes through a NAT gateway
* Container IP addresses are randomly assigned from a private pool
* Containers on the same bridge can connect to each other by IP

The “Host” Network in more Detail

* For standalone containers, removes network isolation between the container and the Docker host, and uses the host's networking directly
* host networking is only available for swarm services on Docker 17.06 and higher

The “None” Network in more Detail

* For this container, disable all networking.
* Usually used in conjunction with a custom network driver.
* none is not available for swarm services

DNS in Containers

* Docker runs an internal DNS server for containers
* The DNS server is accessible at 127.0.0.11 from within the container

docker container run -it --dns=192.168.1.1 --dns-search="example.com" --name=bb busybox

# this adds the corresponding entries to the resolv.conf file inside the container
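You can confirm this from the host, assuming the bb container from the earlier examples is still running:

docker container exec -it bb cat /etc/resolv.conf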

User defined Bridge Networks

* Preferred way to connect containers
* Containers on separate networks are isolated from each other
* Containers can connect to as many networks as needed
* Docker provides DNS service discovery based on the container's name

docker network create --driver bridge frontend
docker network create --driver bridge backend

docker network ls

Now create a container on each network:

docker container run -it --network=frontend -d --name=front busybox
docker container run -it --network=backend -d --name=back busybox

Launch a test container for a ping test:

docker container run -it --network=frontend --name=test busybox
ping -c 2 front
ping -c 2 back

To connect the test container to a different network:

docker network connect backend test
ping back

Docker Image

Image Layers

* Images are made of layers
* Each instruction in the Dockerfile creates a new layer
* Each layer is the set of differences from the layer below it
* Running containers have their own read-write layer

docker history --no-trunc nginx

root@ip-172-31-87-86:~# docker history ubuntu:15.04
IMAGE          CREATED       CREATED BY                                       SIZE     COMMENT
This read-write layer ----> container layer
d1b55fd07600   2 years ago   /bin/sh -c #(nop) CMD ["/bin/bash"]              0B       ----> read only
<missing>      2 years ago   /bin/sh -c sed -i 's/^#\s*\(deb.*universe\)$...  1.88kB   ----> read only
<missing>      2 years ago   /bin/sh -c echo '#!/bin/sh' > /usr/sbin/poli...  701B     ----> read only
<missing>      2 years ago   /bin/sh -c #(nop) ADD file:3f4708cf445dc1b53...  131MB    ----> read only

* Layers are shared between images
* Existing layers do not have to be downloaded again
* Saves time and disk space

Sharing Images

* Each container gets its own “Container Layer”
* Changes only happen to the container layer
* The container layer is deleted when the container is deleted
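One way to see the container layer in action is docker diff, which lists files added, changed, or deleted in a container's read-write layer (using the bb container from the earlier examples):

docker container diff bb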

Building an IMAGE

* Docker build
* Looks for “Dockerfile” in the base directory
* Use the -t option to name your image
* The image is only available if the build completed successfully
* Previous build layers will be reused
* the -f flag can be used to specify a different Dockerfile

docker build -t my-image_name .
docker build -t my-image_name -f Dockerfile-bleed .

Create a file called Dockerfile-copy-file with the following content:

FROM httpd:latest
COPY index.html /usr/local/apache2/htdocs

Create an index.html file in the present working directory with the following content:

<html><body><h1>This file deployed using Dockerfile</h1></body></html>

Run the command below to build an image from the Dockerfile:

docker build -t demo -f Dockerfile-copy-file .

Create a container using the image that was just built:

docker run -it -p 80:80 -d demo
docker run --rm -it demo ls -R /usr/local/apache2/htdocs/dir/ --> the --rm flag deletes the container after it exits

Demo Copy Directory

FROM httpd:latest
COPY folder /usr/local/apache2/htdocs/dir/

Destination Paths
* Destination paths can be relative or absolute
* Paths that do not end in "/" are treated as a regular file
* Paths that end in "/" are treated as a directory
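A small illustration of the difference; the file name notes.txt here is made up:

COPY notes.txt /data/notes    # no trailing slash: copied as the file /data/notes
COPY notes.txt /data/         # trailing slash: copied into the directory as /data/notes.txt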

Download a file

ADD src … dest

* ADD https://foo/example /dest/example/
* If neither the source URI nor the destination ends with a slash, the file is downloaded and copied to the destination
* If both the source url and the destination end with a slash, the downloaded file is copied in to the destination directory.

Unpack Tar files

* Unpack local tar files
* Can be compressed with gzip, bzip2 and xz
* Does not work with URLs

Create a Dockerfile called "Dockerfile-unzip-files" with the following content:

FROM debian:stretch
ADD test.tar.gz /data/

tar -ztf test.tar.gz

docker build -t imagename -f Dockerfile-unzip-files .
docker run --rm -it imagename ls -R /data/

Source Pattern Matching.

*        Matches zero or more characters
?        Matches exactly one character
\        Escape character
[], [^]  Character class

COPY test/[ac].txt /data/ --> copies a.txt & c.txt to /data/
COPY test/[^a].txt /data/ --> copies every .txt file except a.txt

Best Practices

* Use a separate COPY or ADD instruction for each file or directory
* Use COPY rather than ADD
* Do not use ADD to download files that will be deleted later in the build process, as this creates a larger image
* Add one-off administrative scripts

Running Commands to customize an Image

Shell form

* Command is passed to the default shell

/bin/sh -c  -> for Linux containers
cmd /S /C   -> for Windows containers

* Works like the shell
* Succeeds if the command returns a valid success code

FROM debian:stretch
RUN touch /tmp/test

——————–

FROM debian:stretch
RUN apt-get update && \
apt-get install -y \
build-essential \
git \
golang \
python-pygments \
rsync \
ruby-dev \
rubygems \
ssh-client \
wget

Pipe Problems

At first glance this is the format we would use when a command needs a pipe:

FROM buildpack-deps:stretch-curl
RUN wget -O - https://google/ | wc -l

If we use the above format to build the image, the build succeeds even if the command before the pipe fails,
so we need to use an alternate method:

FROM buildpack-deps:stretch-curl
RUN /bin/bash -c "set -o pipefail && wget -O - https://google/ | wc -l"

————————————————————————
Changing the Default Shell

* SHELL instruction changes the default shell
* Specified in JSON format
* Later RUN instructions will use the new shell.

FROM microsoft/windowsservercore
SHELL ["powershell", "-NoProfile", "-Command"]

Exec form

* Does not require a shell
* No variable replacement
* JSON format
* Enclose entries in double quotes (")
* Escape characters with backslash (\)

FROM debian:stretch
RUN touch /this-is-shell-form-${HOSTNAME}
RUN ["touch", "/this-is-exec-form-${HOSTNAME}"]

—————————————————————–

To find the iLO IP address we need the hponcfg tool, which can be installed from the following package: hponcfg-4.3.0-0.x86_64

 

Using the following command you can get the iLO IP address:

hponcfg -w /tmp/ilo.out

cat /tmp/ilo.out

 

 

<MOD_NETWORK_SETTINGS>
<SPEED_AUTOSELECT VALUE = "Y"/>
<NIC_SPEED VALUE = "10"/>
<FULL_DUPLEX VALUE = "N"/>
<IP_ADDRESS VALUE = "1.2.3.4"/>
<SUBNET_MASK VALUE = "255.255.252.0"/>
<GATEWAY_IP_ADDRESS VALUE = "1.2.3.254"/>
<DNS_NAME VALUE = "hostname"/>
<PRIM_DNS_SERVER value = "8.8.8.8"/>
<DHCP_ENABLE VALUE = "N"/>
<DOMAIN_NAME VALUE = "domain.com"/>
<DHCP_GATEWAY VALUE = "Y"/>
<DHCP_DNS_SERVER VALUE = "Y"/>
<DHCP_STATIC_ROUTE VALUE = "Y"/>
<DHCP_WINS_SERVER VALUE = "Y"/>
<REG_WINS_SERVER VALUE = "Y"/>
<PRIM_WINS_SERVER value = "1.2.3.6"/>
</MOD_NETWORK_SETTINGS>
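If you also need to change these settings, the same tool can apply a modified XML file back to the iLO; this is only a sketch and the file name is a placeholder:

hponcfg -f /tmp/ilo-new.xml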

 

 

In CentOS 7 the "yum update" command sometimes fails with duplicate package errors. I found a
working solution for this issue; just follow the steps below.

# yum update

Sample Output

** Found 48 pre-existing rpmdb problem(s), 'yum check' output follows:

Note the duplicate package names, for example:

avahi-libs-0.6.31-15.el7.x86_64 is a duplicate with avahi-libs-0.6.31-14.el7.x86_64

bash-4.2.46-19.el7.x86_64 is a duplicate with bash-4.2.46-12.el7.x86_64

bzip2-libs-1.0.6-13.el7.x86_64 is a duplicate with bzip2-libs-1.0.6-12.el7.x86_64

chkconfig-1.3.61-5.el7.x86_64 is a duplicate with chkconfig-1.3.61-4.el7.x86_64

cpio-2.11-24.el7.x86_64 is a duplicate with cpio-2.11-22.el7.x86_64

cyrus-sasl-lib-2.1.26-19.2.el7.x86_64 is a duplicate with cyrus-sasl-lib-2.1.26-17.el7.x86_64

1:dbus-libs-1.6.12-13.el7.x86_64 is a duplicate with 1:dbus-libs-1.6.12-11.el7.x86_64

elfutils-libelf-0.163-3.el7.x86_64 is a duplicate with elfutils-libelf-0.160-1.el7.x86_64

elfutils-libs-0.163-3.el7.x86_64 is a duplicate with elfutils-libs-0.160-1.el7.x86_64

file-libs-5.11-31.el7.x86_64 is a duplicate with file-libs-5.11-21.el7.x86_64

freetype-2.4.11-11.el7.x86_64 is a duplicate with freetype-2.4.11-10.el7_1.1.x86_64

glib2-2.42.2-5.el7.x86_64 is a duplicate with glib2-2.40.0-4.el7.x86_64

 

Then run the following command for each duplicate package name:

rpm -e --justdb avahi-libs-0.6.31-15.el7.x86_64
rpm -e --justdb bash-4.2.46-19.el7.x86_64
rpm -e --justdb bzip2-libs-1.0.6-13.el7.x86_64
rpm -e --justdb chkconfig-1.3.61-5.el7.x86_64
rpm -e --justdb cpio-2.11-24.el7.x86_64
rpm -e --justdb cyrus-sasl-lib-2.1.26-19.2.el7.x86_64
rpm -e --justdb 1:dbus-libs-1.6.12-13.el7.x86_64
rpm -e --justdb elfutils-libelf-0.163-3.el7.x86_64
rpm -e --justdb elfutils-libs-0.163-3.el7.x86_64
rpm -e --justdb file-libs-5.11-31.el7.x86_64
rpm -e --justdb freetype-2.4.11-11.el7.x86_64
rpm -e --justdb glib2-2.42.2-5.el7.x86_64
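If the list of duplicates is long, a small loop along these lines (a sketch; review the generated list before removing anything) can build the same commands from the yum check output:

# duplicate package names are in the first column of the "is a duplicate with" lines
yum check duplicates 2>/dev/null | awk '/is a duplicate with/ {print $1}' > /tmp/dupes.txt

# remove each duplicate entry from the rpm database only
while read -r pkg; do
  rpm -e --justdb "$pkg"
done < /tmp/dupes.txt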

 

Then try to update

#yum update

 

It should work now.

How To Install OwnCloud 8 on Ubuntu 14.04

OwnCloud-logo

For those of you who didn't know, OwnCloud is free and open-source software that enables you to create a private "file-hosting" cloud. OwnCloud is similar to the Dropbox service, with the difference of being free to download and install on your own server. OwnCloud is written in PHP and uses MySQL (MariaDB), SQLite, or PostgreSQL as its backend database. OwnCloud also enables you to easily view and sync address books, calendar events, tasks, and bookmarks. You can access it via the good-looking and easy-to-use web interface, or install the OwnCloud client on your desktop or laptop machine (Linux, Windows, and Mac OS X are supported).

This article assumes you have at least basic knowledge of Linux, know how to use the shell, and, most importantly, host your site on your own VPS. The installation is quite simple. I will show you the step-by-step installation of OwnCloud 8 on Ubuntu 14.04.

Install OwnCloud 8 on Ubuntu 14.04

Step 1. First of all log in to your server as root and make sure that all packages are up to date.
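A minimal sketch of this step on Ubuntu 14.04:

apt-get update
apt-get -y upgrade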

Step 2. Install the Apache web server on your Ubuntu 14.04 VPS if it is not already installed.
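For example, using the apache2 package from the standard Ubuntu repositories:

apt-get install -y apache2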

Step 3. Next, install PHP on your server.
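Something along these lines should work; Ubuntu 14.04 ships PHP 5:

apt-get install -y php5 libapache2-mod-php5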

Once the installation is done add the following PHP modules required by OwnCloud:
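The exact module list can vary with your setup, but a typical set looks like this:

apt-get install -y php5-mysql php5-gd php5-curl php5-json php5-intl php5-mcrypt php5-imagick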

Step 4. Install MySQL database server.
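For example:

apt-get install -y mysql-server

You will be prompted to set the MySQL root password during the installation.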

By default, MySQL is not hardened. You can secure MySQL using the mysql_secure_installation script. Read and follow each step carefully; it sets the root password, removes anonymous users, disallows remote root login, and removes the test database and access to it, securing MySQL.
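The script is run simply as:

mysql_secure_installation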

Step 5. Create a new MySQL database for OwnCloud using the following commands.
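A sketch of the database setup; the database name, user, and password below are placeholders you should change:

mysql -u root -p -e "CREATE DATABASE owncloud; GRANT ALL PRIVILEGES ON owncloud.* TO 'ownclouduser'@'localhost' IDENTIFIED BY 'your_password'; FLUSH PRIVILEGES;"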

Step 6. Installing OwnCloud 8. First we need to download the latest stable release of OwnCloud to your server (at the time of writing, version 8.0.0).
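For example (the download URL is an assumption based on the usual owncloud.org layout, and /var/www is assumed as the web root):

cd /tmp
wget https://download.owncloud.org/community/owncloud-8.0.0.tar.bz2
tar -xjf owncloud-8.0.0.tar.bz2
cp -r owncloud /var/www/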

Set the directory permissions:
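Assuming the files were copied to /var/www/owncloud and Apache runs as www-data:

chown -R www-data:www-data /var/www/owncloud/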

Step 7. Configuring Apache for OwnCloud. While configuring the Apache web server, it is recommended that you enable .htaccess to get enhanced security features; by default .htaccess is disabled in Apache. To enable it, open your virtual host file and make sure AllowOverride is set to All. In this example I used an external config file instead of modifying the main one.
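A sketch of such an external config file, assuming the files live in /var/www/owncloud (the file name owncloud.conf is illustrative). Create /etc/apache2/conf-available/owncloud.conf with:

Alias /owncloud /var/www/owncloud
<Directory /var/www/owncloud>
    Options +FollowSymLinks
    AllowOverride All
</Directory>

Then enable it together with mod_rewrite:

a2enconf owncloud
a2enmod rewrite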

Remember to restart all services related to Apache server.
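For example:

service apache2 restart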

Step 8. Access the OwnCloud application. Navigate to http://your-domain.com/ and follow the easy instructions. Enter a username and password for the administrator account, click the 'Advanced options' link and enter the data directory (or leave the default setting), then enter the database username, database password, database name, and host (localhost), and click 'Finish setup'.

 

1. how to restore DB point in time
2. how to restore linux server db pitr
3. how to fix ip change for restored instance
4. how you restore ec2 instance in private subnet
5. when we restore a instance what are the modification available
6. what is private ip used for
7. what is manageable services on AWS ?
8. what is three tier architecture ?
9. can we deploy RDS in
10. i have 1 vpc two servers (one web server and one DB server) and 1 public and 1 private subnet how to make it highly available
11. how to manage failover in above question ?
12. how to restore DB from instance
13. how to create snapshot automatically

——————————–

 

1. what is your understanding on Virtualization ?
2. how to secure S3 bucket
3. can we encrypt data on S3 bucket.
4. what is diff between s3 and glacier
5. what is cross zone load balancing
6. when we have a multiple AZ
7. what way of OSI model used on ELB
8. what is sticky session used for in ELB
9. how do you rate yourself in linux
10. what is SAR provides
11. where are the kernel modules located
12. how to change runlevel in linux
13. how do you patch server ( automation and manual )
14. can you please let me know some of hypervisors ( esxi, hyperv & xen )
15. what is diff xen and kvm
16. what hypervisor do redhat virtualization and cloud products use
17. what hypervisor AWS using
18. what extend you used scripting
19. different types of hosting in apache

 

——————————–

1. in aws what type of role that your are doing
2. what are the tasks that you are doing in VMWare
3. what are the components required for build cluster (hba, nic cards, shared storages)
4. what are the versions you have worked on redhat
5. what are the diff between 5,6 & 7 ( init , systemd , ext4 & xfs , udev roles nic naming )
6. how do we identify hard link and soft link file
7. do you have worked on iptables & firewalld
8. how do you configure packet forwarding
9. any idea for allow and deny files
10. what is sticky bit
11. system is in hung state what are the thing that you will check and how you will recover
12. if server hung you just reboot or will do any action on it.
13. will you collect logs for the servers crash
14. how do i virtualize nic
15. for adding additional ip address in a same nic
16. how do we identify disk is local or SAN
17. lsscsi command
18. when i create eth0.0 will use same mac ?
19. mac id will same for virtual nic?
20. how to check a module loaded or not ( lsmod )
21. how to load a module ( modprobe )
22. what is proc directory
23. i have a pid i want to hung the pid
24. i modified the application conf file how to reread the configuration
25. kill -8 ?
26. how to import disk group
27. how to failover cluster.
28. lvdisplay ? shows device,volume ?

 

————————————————-

1. how to find how many users are in linux
2. how do you identify system and non system user
3. how to disable user login
4. you want to have a directory only user created he only need to delete how to achieve it
5. if you have execute permission on directory what is the use
6. the password in linux is stored in shadow the file have only permission on root only how non root user can able to change password
7. what is the maximum amount of memory a process can consume
8. how do you make sure whether system is alive or not
9. how to use dhcp in linux
10.one linux server not able to get dhcp ip what are the thing to check
11.what is the protocol used by traceroute
12. DNS working process

In order to migrate virtual machines from on-premises to AWS, we need to do the following (a rough sketch of these steps on an Ubuntu guest follows the list).

 

* Remove VMware Tools

* Create a user with sudo permission

* Install SSH

* Enable the DHCP option in the NIC configuration
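On a Debian/Ubuntu guest, for example, these steps might look roughly like this; the package names and username are assumptions, so adapt them to your distribution:

apt-get remove --purge open-vm-tools      # or run vmware-uninstall-tools.pl for a tarball install
adduser migrateuser
usermod -aG sudo migrateuser
apt-get install -y openssh-server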

 

Prerequisites

You must provide an Amazon S3 bucket and an IAM role named vmimport.

Amazon S3 Bucket

VM Import requires an Amazon S3 bucket to store your disk images, in the region where you want to import your VMs. You can create a bucket as follows, or use an existing bucket if you prefer.

(Optional) To create an S3 bucket

  1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. Choose Create Bucket.
  3. In the Create a Bucket dialog box, do the following:
    1. For Bucket Name, type a name for your bucket. This name must be unique across all existing bucket names in Amazon S3. In some regions, there might be additional restrictions on bucket names. For more information, see Bucket Restrictions and Limitations in the Amazon Simple Storage Service Developer Guide.
    2. For Region, select the region that you want for your AMI.
    3. Choose Create.

VM Import Service Role

VM Import requires a role to perform certain operations in your account, such as downloading disk images from an Amazon S3 bucket. You must create a role named vmimport with a trust relationship policy document that allows VM Import to assume the role, and you must attach an IAM policy to the role.

To create the service role

  1. Create a file named trust-policy.json with the following policy:
    {
       "Version": "2012-10-17",
       "Statement": [
          {
             "Effect": "Allow",
             "Principal": { "Service": "vmie.amazonaws.com" },
             "Action": "sts:AssumeRole",
             "Condition": {
                "StringEquals":{
                   "sts:Externalid": "vmimport"
                }
             }
          }
       ]
    }

    You can save the file anywhere on your computer. Take note of the location of the file, because you’ll specify the file in the next step.

  2. Use the create-role command to create a role named vmimport and give VM Import/Export access to it. Ensure that you specify the full path to the location of the trust-policy.json file.
    aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json
  3. Create a file named role-policy.json with the following policy, where disk-image-file-bucket is the bucket where the disk images are stored:
    {
       "Version": "2012-10-17",
       "Statement": [
          {
             "Effect": "Allow",
             "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
             ],
             "Resource": [
                "arn:aws:s3:::disk-image-file-bucket" ] }, { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": [ "arn:aws:s3:::disk-image-file-bucket/*" ] }, { "Effect": "Allow", "Action":[ "ec2:ModifySnapshotAttribute", "ec2:CopySnapshot", "ec2:RegisterImage", "ec2:Describe*" ], "Resource": "*" } ] }
  4. Use the following put-role-policy command to attach the policy to the role created above. Ensure that you specify the full path to the location of the role-policy.json file.
    aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json

For more information about IAM roles, see IAM Roles in the IAM User Guide.

Upload the Image to Amazon S3

Upload your VM image file to your Amazon S3 bucket using the upload tool of your choice. For information about uploading files through the S3 console, see Uploading Objects into Amazon S3. For information about the Enhanced Uploader Java applet, see Using the Enhanced Uploader.

Import the VM

After you upload your VM image file to Amazon S3, you can use the AWS CLI to import the image. These tools accept either a URL (public Amazon S3 file, a signed GET URL for private Amazon S3 files) or the Amazon S3 bucket and path to the disk file.

Use the import-image command to create an import image task.

Example 1: Import an OVA

aws ec2 import-image --description "Windows 2008 OVA" --license-type BYOL --disk-containers file://containers.json

The following is an example containers.json file.

[
  {
    "Description": "Windows 2008 OVA", "Format": "ova", "UserBucket": { "S3Bucket": "my-import-bucket", "S3Key": "vms/my-windows-2008-vm.ova" } }]

Example 2: Import Multiple Disks

aws ec2 import-image --description "Windows 2008 VMDKs" --license-type BYOL --disk-containers file://containers.json

The following is an example containers.json file.

[
  {
    "Description": "First disk", "Format": "vmdk", "UserBucket": { "S3Bucket": "my-import-bucket", "S3Key": "disks/my-windows-2008-vm-disk1.vmdk" } }, { "Description": "Second disk", "Format": "vmdk", "UserBucket": { "S3Bucket": "my-import-bucket", "S3Key": "disks/my-windows-2008-vm-disk2.vmdk" } } ]

Check the Status of the Import Task

Use the describe-import-image-tasks command to return the status of an import task.

Status values include the following:

  • active — The import task is in progress.
  • deleting — The import task is being canceled.
  • deleted — The import task is canceled.
  • updating — Import status is updating.
  • validating — The imported image is being validated.
  • converting — The imported image is being converted into an AMI.
  • completed — The import task is completed and the AMI is ready to use.
aws ec2 describe-import-image-tasks --import-task-ids import-ami-abcd1234

(Optional) Cancel an Import Task

Use the cancel-import-task command to cancel an active import task.

aws ec2 cancel-import-task --import-task-id import-ami-abcd1234

 

 

Status check example output

 

That's it. You will have an AMI of your virtual machine in the region you used. Use that AMI to launch a new instance.
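A minimal sketch of launching from the resulting AMI; the AMI ID, instance type, and key name below are placeholders:

aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t2.micro --key-name my-key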