Training Notes Formatted

SUSE Linux is an open-source, enterprise-grade Linux distribution known for its stability and scalability, featuring components like SUSE Linux Enterprise Server and openSUSE. The document provides detailed instructions on downloading, installing, configuring, and managing SUSE Linux, including networking, user management, and IPTables for firewall configurations. It also covers essential commands for system administration and networking fundamentals, including the OSI model and TCP/IP protocols.


Introduction to SUSE Linux

SUSE Linux Detailed Notes

1. Introduction to SUSE Linux

SUSE Linux is an open-source, enterprise-grade Linux distribution known for its stability,
security, and scalability. It is widely used in server environments and supports containerization,
cloud computing, and virtualization. Key components include:

• SUSE Linux Enterprise Server (SLES): Designed for enterprise environments.

• openSUSE Leap: Stable release suitable for desktops and servers.

• openSUSE Tumbleweed: Rolling release for developers needing the latest packages.

• YaST (Yet another Setup Tool): A powerful configuration tool.

• Zypper: Package manager for software management.

2. Download, Install, and Configure SUSE Linux

1. Download the SUSE Linux ISO from the official website.

2. Create a bootable USB using Rufus (Windows) or dd (Linux/Mac).

3. Boot into the installer and follow the guided installation.

4. Configure disk partitions, network settings, and user credentials.

5. Install necessary software packages using zypper install <package_name>.

3. Accessing SUSE Linux Servers

• Direct Login: Access via GUI or CLI.

• SSH Access: Connect remotely using ssh user@server-ip.

• Podman for Containers:

o Run a new container: podman run -it opensuse/leap

o Start an existing container: podman start <container-id>

o Execute commands inside a container: podman exec -it <container-id> /bin/bash

o View running containers: podman ps

4. File and Directory Management

• Navigation Commands: ls, pwd, cd, cat, less, mkdir, rmdir, mv

• File Operations:

o Delete a non-empty directory: rm -r <directory-name>

o Move files: mv <filename> /<directory-name>


o Create an empty file: touch <filename>

o View permissions: ls -la

o Change ownership: chown user:group <filename>

o Modify permissions: chmod 0774 <filename>

• Access Control Lists (ACLs):

o Get ACL info: getfacl <filename>

o Set ACL: setfacl -m u:user:rwx <filename>
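A quick way to see what the octal mode in chmod 0774 means (each digit sets the owner, group, and other bits: 7 = rwx, 4 = r--). This is a minimal sketch using a temporary file:

```bash
# Create a scratch file, apply mode 0774, and read the mode back.
f=$(mktemp)
chmod 0774 "$f"
stat -c '%a %A' "$f"   # prints: 774 -rwxrwxr--
rm -f "$f"
```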

5. User and Group Management

• User Management:

o Add a new user: useradd <username>

o Set a user password: passwd <username>

o Delete a user: userdel <username>

o Force password change on first login: chage -d 0 <username>

• Group Management:

o Create a new group: groupadd <groupname>

o Add a user to a group: usermod -aG <groupname> <username>

o Check user details: cat /etc/passwd

o Check group details: cat /etc/group
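Since cat /etc/passwd shows raw colon-separated records, a small awk filter makes the fields explicit. This sketch prints every UID-0 account (normally just root):

```bash
# Each /etc/passwd line has 7 colon-separated fields:
# name:password:UID:GID:comment:home:shell
# Print the account name of every entry whose UID (field 3) is 0.
awk -F: '$3 == 0 {print $1}' /etc/passwd
```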

6. File System Structure

• Common Directories:

o /etc - Configuration files

o /var - Logs and frequently changing content

o /home - User accounts

o /sbin - System binaries

o /bin - User binaries

o /lib - Shared libraries

o /usr - User programs and read-only data (third-party applications usually go under /opt)

o /root - Root user directory

• Mounting and Unmounting:

o Mount a filesystem: mount /dev/sdX /mnt

o Unmount: umount /mnt


7. Networking on SUSE Linux

• Check Network Configuration: ip a

• Modify Network Settings with YaST: yast lan

• Check connectivity: ping google.com

• Assign a Static IP: ip addr add 192.168.1.100/24 dev eth0

• Manage Network Interfaces:

o Enable: ip link set eth0 up

o Disable: ip link set eth0 down

• Firewall Configuration:

o Check firewall status: firewall-cmd --state

o Open a port: firewall-cmd --permanent --add-port=80/tcp

o Reload firewall: firewall-cmd --reload

8. Package Management in SUSE Linux

• Using Zypper:

o Install a package: zypper install <package-name>

o Remove a package: zypper remove <package-name>

o Verify that installed packages' dependencies are satisfied: zypper verify

9. Process and Resource Management

• Monitor Processes:

o List running processes: ps aux

o Kill a process: kill <PID>

o View running processes: top or htop

o Check memory usage: free -h

o Monitor disk usage: du -sh /path

• Troubleshooting Commands:

o Check system logs: journalctl -xe

o Kernel logs: dmesg

o Authentication logs: cat /var/log/messages on SUSE (Debian-based systems use /var/log/auth.log)

o Check open ports: netstat -tulnp

o View current connections: ss -tulwn
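The kill workflow above can be tried safely with a disposable background process instead of a real service; a minimal sketch:

```bash
# Start a throwaway background process, then terminate it with kill.
sleep 60 &
pid=$!
kill "$pid"                   # send SIGTERM (signal 15, the default)
wait "$pid" 2>/dev/null || true   # reap the child so the PID is fully gone
echo "process $pid terminated"
```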

10. Virtualization in SUSE Linux


• Popular Virtualization Software: KVM

• Managing Virtual Machines (VMs):

o Start a VM: virsh start <vm_name>

o Stop a VM: virsh shutdown <vm_name>

o List all VMs: virsh list --all

Conclusion

SUSE Linux is a powerful, enterprise-grade Linux distribution with robust tools for system
administration, user management, networking, and virtualization. Mastering its core
components, such as YaST, Zypper, and Podman, ensures efficient system operations and
troubleshooting.

Networking
# Networking Detailed Notes

## Basic Knowledge of Networking

Networking connects computers and devices to share resources, communicate, and transfer
data. Key components include:

- **Nodes**: Devices connected to the network (computers, routers, switches).

- **Protocols**: Rules governing data transmission (TCP/IP, HTTP, FTP).

- **Network Types**: LAN, WAN, MAN, PAN.

- **IP Addressing**: Unique identifiers for device communication.

- **Subnetting**: Dividing a large network into smaller, manageable segments.

## OSI Layers

The **Open Systems Interconnection (OSI) model** standardizes networking functions into
seven layers:

1. **Physical Layer**: Manages hardware, cables, and transmission.

2. **Data Link Layer**: Handles MAC addresses, error detection, and framing.

3. **Network Layer**: Responsible for logical addressing and routing (IP addresses).

4. **Transport Layer**: Ensures end-to-end communication (TCP, UDP protocols).

5. **Session Layer**: Manages communication sessions.

6. **Presentation Layer**: Handles data formatting, encryption, and compression.

7. **Application Layer**: Provides network services directly to end-users (HTTP, FTP, SMTP).
## IP Addressing Classes

IPv4 addresses are categorized as follows:

- **Class A** (1.0.0.0 - 126.255.255.255) - Large networks (Subnet Mask: 255.0.0.0).

- **Class B** (128.0.0.0 - 191.255.255.255) - Medium networks (Subnet Mask: 255.255.0.0).

- **Class C** (192.0.0.0 - 223.255.255.255) - Small networks (Subnet Mask: 255.255.255.0).

- **Class D** (224.0.0.0 - 239.255.255.255) - Multicast groups.

- **Class E** (240.0.0.0 - 255.255.255.255) - Experimental use.

## Assigning IP Addresses

### Static IP Configuration (Linux - SUSE)

```bash

ip addr add <IP-Address>/<Subnet-Mask> dev <Interface>

```

Example:

```bash

ip addr add 192.168.1.100/24 dev eth0

```

Modify `/etc/sysconfig/network/ifcfg-eth0` to make it permanent.
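A minimal persistent configuration might look like the following sketch (key names follow the SUSE `ifcfg` convention; the interface and address are the example values above):

```bash
BOOTPROTO='static'
IPADDR='192.168.1.100/24'
STARTMODE='auto'
```

On current SUSE releases the change is applied with `wicked ifup eth0`.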

### Dynamic IP Assignment (DHCP)

```bash

dhclient eth0

```

## Routing

Defines paths for data packets between networks.

- **View Routing Table**: `ip route show`

- **Add Default Gateway**: `ip route add default via <Gateway-IP> dev <Interface>`

- **Delete a Route**: `ip route del <IP-Address>`


## Hostname Configuration

- **View Current Hostname**: `hostname`

- **Change Temporarily**: `hostnamectl set-hostname <new-hostname>`

- **Change Permanently**: Modify `/etc/hostname`

## DNS Resolution

- **Check DNS Resolution**: `nslookup example.com`

- **View DNS Configuration**: `cat /etc/resolv.conf`

- **Update DNS Servers**: Modify `/etc/resolv.conf`

- **Flush DNS Cache**: `systemctl restart nscd`
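The `/etc/resolv.conf` format is simple enough to parse with `awk`; this sketch uses sample input rather than the live file, since its contents vary per system:

```bash
# Extract nameserver IPs from resolv.conf-style input.
printf 'nameserver 1.1.1.1\nsearch example.com\nnameserver 8.8.8.8\n' \
    | awk '/^nameserver/ {print $2}'
# prints:
# 1.1.1.1
# 8.8.8.8
```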

---

# Podman and Networking Commands in SUSE Linux

## Podman Commands

- **Check Running Containers**: `podman ps`

- **Check All Containers**: `podman ps -a`

- **Run a Shell in a Container**: `podman exec -it <container_name> /bin/sh`

- **Start an Interactive Container**: `podman run -it <image_name>`

- **View Help**: `podman --help`

## Zypper - SUSE Package Manager

- **Install a Package**: `zypper install <package-name>`

- **Networking Tools Installation**:

```bash

zypper install iputils traceroute mtr

```

## Networking Commands

- **Check Network Interfaces**: `ip a`


- **Extract IPv4 Address**:

```bash

ip -4 -o addr show | awk '{print $4}' | cut -d'/' -f1

```

- **Extract IPv6 Address**:

```bash

ip -6 -o addr show | awk '{print $4}' | cut -d'/' -f1

```
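The `cut -d'/' -f1` stage in the pipelines above strips the CIDR prefix length; it can be seen in isolation on a sample address:

```bash
# cut splits on '/' and keeps field 1, dropping the prefix length
echo "192.168.1.100/24" | cut -d'/' -f1
# prints: 192.168.1.100
```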

---

# Networking Fundamentals: TCP/IP and OSI Model

## TCP/IP Layer Model

Defines how data is transmitted over networks:

1. **Application Layer** – Protocols: HTTP, FTP, SMTP, DNS.

2. **Transport Layer** – Protocols: TCP, UDP.

3. **Internet Layer** – Protocols: IP, ICMP, ARP.

4. **Network Access Layer** – Manages physical connections.

## TCP vs UDP (Transport Layer Protocols)

| Feature | TCP (Reliable) | UDP (Faster) |
|----------|---------------|--------------|
| Reliability | High | Low |
| Speed | Slower | Faster |
| Use Cases | Web browsing, email | Streaming, gaming |

## Common Protocols and Ports

### Web Services

- **HTTP (Port 80)**, **HTTPS (Port 443)**


### File Transfer Protocols

| Protocol | Port(s) | Description |
|----------|---------|-------------|
| **FTP** | 20, 21 | File transfer |
| **SFTP** | 22 | Secure FTP |
| **TFTP** | 69 (UDP) | Lightweight, no authentication |

### Email Protocols

| Protocol | Port | Secure Port |
|----------|------|-------------|
| **POP3** | 110 | 995 (SSL/TLS) |
| **IMAP** | 143 | 993 (SSL/TLS) |
| **SMTP** | 25 | 465 (SSL/TLS) |

## Subnet Mask

Defines network and host portions of an IP.

Example:

- **IP Address**: `192.168.1.100`

- **Subnet Mask**: `255.255.255.0`
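To sketch what the mask actually does, the network address can be derived by ANDing IP and mask octet by octet. A minimal POSIX-shell illustration (the function name is ours, not a standard tool):

```bash
# AND each IP octet with the corresponding mask octet.
network_address() {
    old_ifs=$IFS
    IFS=.
    set -- $1 $2        # split both dotted quads into $1..$8
    IFS=$old_ifs
    echo "$(( $1 & $5 )).$(( $2 & $6 )).$(( $3 & $7 )).$(( $4 & $8 ))"
}

network_address 192.168.1.100 255.255.255.0   # prints: 192.168.1.0
```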

## Classful Addressing (Replaced by CIDR)

| Class | Starting IP | Ending IP | Default Subnet Mask |
|-------|------------|------------|---------------------|
| A | 1.0.0.0 | 126.255.255.255 | 255.0.0.0 |
| B | 128.0.0.0 | 191.255.255.255 | 255.255.0.0 |
| C | 192.0.0.0 | 223.255.255.255 | 255.255.255.0 |

> **Why was it replaced?** CIDR introduced variable-length subnet masks (VLSM) to optimize IP allocation.

IP Tables

IPTables - A Complete Guide

1. Introduction to IPTables

IPTables is a command-line utility in Linux used for configuring packet filtering rules in the
netfilter framework. It controls incoming, outgoing, and forwarded network traffic using rules
defined in tables.

IPTables is widely used for firewall configurations, NAT (Network Address Translation), and
traffic filtering in Linux-based systems.

2. Understanding IPTables Components

a) Tables

IPTables uses different tables, each serving a unique purpose. The main tables are:

1. Filter Table (Default) – Used for packet filtering.

o Chains: INPUT, FORWARD, OUTPUT

2. NAT Table – Used for network address translation.

o Chains: PREROUTING, OUTPUT, POSTROUTING

3. Mangle Table – Used for modifying packet headers.

o Chains: PREROUTING, OUTPUT, INPUT, FORWARD, POSTROUTING

4. Raw Table – Used for connection tracking exemptions.

o Chains: PREROUTING, OUTPUT

5. Security Table – Used for Mandatory Access Control (MAC) rules.

Example to list all tables:

iptables -t nat -L

b) Chains

Each table has chains where rules are applied:

• INPUT – Manages packets coming into the system.

• OUTPUT – Manages packets going out of the system.

• FORWARD – Manages packets routed through the system (used in routers).

• PREROUTING – Alters packets before routing.


• POSTROUTING – Alters packets after routing.

Example to list all rules in a chain:

iptables -L INPUT -v

c) Rules

Rules define what action should be taken on a packet.


Each rule consists of:

• Match conditions (source, destination, protocol, etc.).

• Target action (ACCEPT, DROP, REJECT, etc.).

Example:

iptables -A INPUT -s 192.168.1.100 -j DROP

This blocks all incoming traffic from 192.168.1.100.

d) Targets

Targets specify what action to take when a rule matches.

• ACCEPT – Allows the packet to pass.

• DROP – Silently drops the packet.

• REJECT – Drops the packet and sends an error response.

• LOG – Logs packet details to /var/log/syslog.

• MASQUERADE – Used in NAT for dynamic IPs.

• SNAT/DNAT – Source and destination NAT.

Example:

iptables -A INPUT -p tcp --dport 22 -j REJECT

This rejects SSH connections.

3. Common IPTables Commands

a) Listing Rules

To view current rules:

iptables -L -v -n

• -L → List rules

• -v → Show detailed info


• -n → Show numeric IPs instead of hostnames

b) Adding Rules

To block an IP:

iptables -A INPUT -s 203.0.113.50 -j DROP

To allow SSH (port 22) traffic:

iptables -A INPUT -p tcp --dport 22 -j ACCEPT

c) Deleting Rules

To delete a rule by line number:

iptables -D INPUT 3

To flush all rules:

iptables -F

d) Saving and Restoring Rules

To save IPTables rules:

iptables-save > /etc/iptables.rules

To restore IPTables rules:

iptables-restore < /etc/iptables.rules

4. Practical Examples of IPTables Usage

a) Basic Firewall Rules

Block all incoming traffic except SSH (port 22) and HTTP (port 80):

iptables -P INPUT DROP

iptables -A INPUT -p tcp --dport 22 -j ACCEPT

iptables -A INPUT -p tcp --dport 80 -j ACCEPT

iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

b) Port Forwarding (NAT Example)

Forward HTTP traffic from external port 8080 to internal server on port 80:

iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 192.168.1.10:80
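For a DNAT rule that forwards to another host to work, kernel IP forwarding must also be enabled. A hedged fragment (placed in `/etc/sysctl.conf` or a drop-in under `/etc/sysctl.d/`, then applied with `sysctl -p`):

```bash
# Allow the kernel to route packets between interfaces
net.ipv4.ip_forward = 1
```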

c) Blocking Specific IPs

Block access from an IP range (e.g., 192.168.1.0/24):

iptables -A INPUT -s 192.168.1.0/24 -j DROP

d) Allowing Only a Specific IP to SSH


Allow only 203.0.113.10 to connect via SSH:

iptables -A INPUT -p tcp --dport 22 -s 203.0.113.10 -j ACCEPT

iptables -A INPUT -p tcp --dport 22 -j DROP

5. Persisting IPTables Rules

Since IPTables rules are not persistent after a reboot, save them permanently:

For Ubuntu/Debian:

sudo apt install iptables-persistent

sudo netfilter-persistent save

sudo netfilter-persistent reload

For RHEL/CentOS:

service iptables save

service iptables restart

6. Alternative to IPTables: Firewalld & UFW

Many modern Linux distributions use Firewalld or UFW instead of IPTables.

• Firewalld (RHEL-based systems):

firewall-cmd --add-port=80/tcp --permanent

firewall-cmd --reload

• UFW (Uncomplicated Firewall) (Ubuntu-based systems):

ufw allow 22/tcp

ufw enable

7. Debugging and Logging

a) Enable Logging for Dropped Packets

iptables -A INPUT -j LOG --log-prefix "DROPPED: " --log-level 4

Logs appear in /var/log/syslog.

b) Check Packet Counts

iptables -L -v

8. Conclusion
IPTables is a powerful firewall tool in Linux for network traffic filtering and NAT.

• It consists of tables, chains, rules, and targets.

• Common operations include blocking IPs, port forwarding, and NAT.

• Rules must be persisted to survive reboots.

For easier management, UFW and Firewalld offer simplified alternatives.

Firewall, Load Balancers, Routing, Proxy


Detailed Notes on Given Commands

1. SSH Key Management

View Public SSH Key

cat /root/.ssh/id_rsa.pub

• Displays the public SSH key of the root user.

• Useful for copying the key to remote servers for passwordless authentication.

Generate SSH Key Pair

ssh-keygen

• Generates a new public-private key pair.

• Keys are stored in ~/.ssh/ by default.

Configure SSH Access

nano /etc/ssh/ssh_config

nano /etc/authorized_keys

mv /etc/authorized_keys ~/.ssh

• Edits the SSH client configuration (server-side options live in /etc/ssh/sshd_config).

• Moves the authorized_keys file into ~/.ssh/, where the SSH daemon expects it (as ~/.ssh/authorized_keys).

Start and Check SSH Service

sudo service ssh start

sudo service ssh status

• Starts the SSH service and verifies its status.

2. Installing and Managing Apache Web Server

Install Apache2

sudo apt install -y apache2

• Installs Apache web server.


• -y flag auto-confirms the installation.

Start, Check, and Enable Apache2

sudo service apache2 start

sudo service apache2 status

sudo systemctl enable apache2

• Starts and checks the status of Apache.

• Enables Apache to start at boot.

Testing Apache2

curl localhost

• Checks if Apache is serving web pages.

Modify Default Web Page

echo "ubuntu2" | sudo tee /var/www/html/index.html

• Creates or updates the index page with ubuntu2 text.

3. Configuring Firewall and Routing

Check Firewall Status

sudo ufw status

• Displays the status of Uncomplicated Firewall (UFW).

Iptables Firewall Management

sudo iptables -L

sudo iptables-restore < fwof

• Lists firewall rules.

• Restores firewall rules from a saved configuration.

Network Routing

sudo apt install -y net-tools

ip route show

sudo ip route add 10.89.0.5/24 via 10.89.0.1 dev eth0

sudo ip route del 10.89.0.5

• Installs networking utilities.

• Adds and removes a custom IP route.

Testing Network Connection

ping -c 4 10.89.0.5
• Sends 4 ICMP packets to test network connectivity.

4. Configuring Apache for Load Balancing

Create Load Balancer Configuration

sudo nano /etc/apache2/sites-available/loadbalancer.conf

• Opens configuration file for Apache load balancing setup.

Enable Required Apache Modules

sudo a2enmod proxy proxy_http proxy_balancer lbmethod_byrequests

• Enables necessary modules for load balancing.

Enable Load Balancer Site and Reload Apache

sudo a2ensite loadbalancer.conf

sudo systemctl reload apache2

• Activates the load balancer configuration and reloads Apache.

Test Load Balancer Configuration

curl -I http://yourdomain.com

• Checks if the load balancer is working by inspecting HTTP headers.

5. Installing and Configuring HAProxy

Install HAProxy and Git

sudo apt install -y haproxy git

• Installs HAProxy, a popular load balancer, and Git for version control.

Backup HAProxy Configuration

sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.old

ls -l /etc/haproxy/

• Creates a backup of the HAProxy configuration file.

• Lists HAProxy configuration directory contents.

Edit and Validate HAProxy Configuration

sudo nano /etc/haproxy/haproxy.cfg

sudo haproxy -c -f /etc/haproxy/haproxy.cfg

• Edits the HAProxy configuration.

• Validates the syntax of the HAProxy configuration file.

Start and Check HAProxy Service

sudo systemctl start haproxy


sudo systemctl status haproxy

• Starts HAProxy and checks its running status.

Stopping Apache (If Needed)

sudo systemctl stop apache2

• Stops Apache to avoid conflicts when using HAProxy for load balancing.

6. Additional Network Utilities

Install Required Packages

sudo apt install -y iputils-ping openssh-server

• Installs ping utility and OpenSSH server.

7. Summary

• SSH Setup: Configured public-private keys and started SSH service.

• Apache Web Server: Installed, configured, and tested Apache with load balancing.

• Firewall and Routing: Managed UFW, iptables, and custom IP routes.

• HAProxy Load Balancer: Installed, configured, and started HAProxy for load balancing.

• Network Tools: Installed utilities for routing, networking, and testing connectivity.


Advanced Session SUSE Linux Admin


1. Overview of SUSE Linux Administration

SUSE Linux Enterprise Server (SLES) is a powerful, scalable, and enterprise-grade operating
system. Advanced administration involves in-depth knowledge of system processes,
automation, kernel management, and troubleshooting.

2. System Performance Optimization

• Monitoring Tools: top, htop, vmstat, iostat, free, sar

• Process Management: ps, nice, renice, kill, pkill

• Memory Management: Configuring swap space, analyzing memory leaks

• Disk I/O Performance: iotop, dd, hdparm, fio

3. SUSE Package Management (zypper & RPM)

• Installing & Updating Packages:

zypper install package_name

zypper update

rpm -ivh package.rpm

• Removing Packages:

zypper remove package_name

• Repository Management:

zypper addrepo <repo_url> repo_name

4. Managing Users & Permissions

• Creating users & groups:

useradd -m user1

groupadd dev_group

usermod -aG dev_group user1

• File Permissions & Ownership:

chmod 755 file

chown user1:group1 file

5. Advanced System Services Management

• Systemd Services:

systemctl start service_name

systemctl enable service_name

journalctl -u service_name

Automating Processes with a Shell Script

1. Introduction to Shell Scripting

Shell scripting allows automation of repetitive tasks using commands in a script file.

2. Basic Structure of a Shell Script

#!/bin/bash

echo "Hello, this is a shell script"

3. Variables and User Input

#!/bin/bash

name="John"

echo "Hello, $name"

read -p "Enter your name: " username


echo "Hello, $username"

4. Conditional Statements

#!/bin/bash

if [ -f "/etc/passwd" ]; then

echo "File exists"

else

echo "File does not exist"

fi

5. Looping Statements

#!/bin/bash

for i in {1..5}

do

echo "Number: $i"

done

6. Automating Backups

#!/bin/bash

tar -czf backup_$(date +%F).tar.gz /home/user/documents

echo "Backup completed!"
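A self-contained variant of the backup step can be run safely anywhere by using a temporary directory (the paths here are placeholders, not the real documents folder):

```bash
# Create a throwaway source directory, archive it, and list the archive
# contents to verify the backup worked.
src=$(mktemp -d)
echo "sample" > "$src/notes.txt"
backup="/tmp/backup_$(date +%F).tar.gz"
tar -czf "$backup" -C "$src" .
tar -tzf "$backup"        # lists ./ and ./notes.txt
rm -rf "$src" "$backup"
```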

7. Cron Jobs for Automation

crontab -e

0 2 * * * /path/to/backup.sh

(Runs backup.sh every day at 2 AM.)
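The five leading fields of a crontab entry are minute, hour, day of month, month, and day of week, in that order; the entry above reads as follows:

```bash
# minute(0-59) hour(0-23) day-of-month(1-31) month(1-12) day-of-week(0-7, Sun=0 or 7)
0 2 * * * /path/to/backup.sh    # minute 0, hour 2 -> 02:00 every day
```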

Programming, Modelling, Visualization

1. Programming in Linux (Bash, Python, C)

• Bash scripting for automation

• Python for scripting, data processing

• C programming for system-level programming

Example (C Program to list files in a directory):

#include <stdio.h>

#include <dirent.h>

int main() {

    struct dirent *de;

    DIR *dr = opendir(".");

    if (dr == NULL) {

        printf("Could not open current directory\n");

        return 1;

    }

    while ((de = readdir(dr)) != NULL)

        printf("%s\n", de->d_name);

    closedir(dr);

    return 0;

}

2. Modelling & Data Visualization

• Gnuplot for plotting:

echo "set terminal png; set output 'graph.png'; plot sin(x)" | gnuplot

• Python (Matplotlib) for visualization:

import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5]
y = [10, 20, 30, 40, 50]
plt.plot(x, y)
plt.show()

SUSE Linux Kernel Internals

1. Overview of the Linux Kernel in SUSE

The Linux kernel manages processes, memory, filesystems, and hardware abstraction.

2. Viewing Kernel Information

uname -r # Check kernel version


cat /proc/version # Detailed kernel version

3. Kernel Module Management

• List loaded modules: lsmod

• Load a module: modprobe module_name

• Remove a module: modprobe -r module_name

4. Kernel Parameters (sysctl)

sysctl -a # View all kernel parameters

sysctl -w net.ipv4.ip_forward=1 # Enable IP forwarding

5. Custom Kernel Compilation

1. Download Kernel Source

wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.10.tar.xz

2. Extract and Configure

tar -xvf linux-5.10.tar.xz
cd linux-5.10
make menuconfig

3. Compile and Install

make -j$(nproc)
sudo make modules_install
sudo make install

6. Kernel Debugging Tools

• dmesg – View boot and kernel logs: dmesg | less

• strace – Trace system calls: strace ls

• perf – Performance profiling: perf stat ls

Conclusion
These advanced SUSE Linux topics are essential for system administrators managing enterprise
environments. Automation through scripting optimizes workflows, and an understanding of
kernel internals helps in troubleshooting and performance tuning.

Basic Learning on Ansible


Introduction to Ansible

Ansible is an open-source IT automation tool that simplifies configuration management, application deployment, and orchestration. It is agentless, meaning it does not require any software or agents installed on target machines, and it uses SSH for communication.

Key Features of Ansible:

• Agentless: No need to install additional software on managed nodes.

• Declarative Language: Uses YAML to define automation tasks.

• Idempotent: Ensures tasks execute only when necessary, preventing redundant changes.

• Simple Architecture: Uses SSH for Linux and WinRM for Windows to manage remote
machines.

Installing Ansible

Ansible can be installed on Linux/macOS systems. For Windows, WSL (Windows Subsystem for
Linux) is recommended.

Installation on Ubuntu/Debian:

sudo apt update

sudo apt install ansible -y

Installation on CentOS/RHEL:

sudo yum install epel-release -y

sudo yum install ansible -y

Verifying Installation:

ansible --version

Ansible Configuration

The main configuration file for Ansible is /etc/ansible/ansible.cfg. Important settings include:

• inventory - Specifies the location of the inventory file.

• remote_user - Defines the user to connect to remote hosts.


• host_key_checking - Controls SSH key verification.

Ansible Inventory

Ansible uses an inventory file to define managed nodes. The default location is
/etc/ansible/hosts.

Example Inventory File:

[web_servers]

192.168.1.10

192.168.1.11

[db_servers]

db.example.com ansible_user=root ansible_port=2222

You can also use a dynamic inventory by querying cloud providers like AWS, Azure, or GCP.

Running Ad-hoc Commands

Ansible allows executing one-time commands without writing a playbook.

Example Commands:

• Check connectivity to all hosts: ansible all -m ping

• Get system uptime: ansible all -m command -a "uptime"

• List files in /tmp directory: ansible all -m shell -a "ls -l /tmp"

Ansible Playbooks

A playbook is a YAML file that defines automation tasks.

Example Playbook:

---
- name: Install and start Apache
  hosts: web_servers
  become: yes
  tasks:
    - name: Install Apache
      apt:
        name: apache2
        state: present

    - name: Start Apache service
      service:
        name: apache2
        state: started

Running the Playbook:

ansible-playbook playbook.yml

Ansible Modules

Ansible provides several built-in modules to manage different tasks.

Commonly Used Modules:

• ping - Checks connectivity (ansible all -m ping)

• command - Executes shell commands (ansible all -m command -a "uptime")

• file - Manages files and directories

• service - Manages services (service: name=apache2 state=started)

• yum/apt - Installs packages (apt: name=nginx state=latest)

Ansible Roles

Roles help organize Ansible playbooks into reusable components.

Creating a Role:

ansible-galaxy init my_role

Role Directory Structure:

my_role/
├── defaults/
├── handlers/
├── tasks/
├── templates/
└── vars/

Roles can be included in playbooks using:

roles:
  - my_role

Conclusion

Ansible is a powerful automation tool that simplifies IT operations. By understanding its basic
components like inventory, modules, playbooks, and roles, you can automate complex
infrastructure tasks efficiently.

Next Steps:

• Learn about Ansible Vault for securing credentials.

• Explore Ansible Tower/AWX for managing large-scale deployments.

• Practice by creating more complex playbooks and roles.

Cloud Computing and AWS Overview

## What is Cloud Computing?

Cloud computing refers to the delivery of computing services—including servers, storage, databases, networking, software, and analytics—over the internet ("the cloud") to offer faster innovation, flexible resources, and economies of scale. Instead of owning and maintaining physical data centers and servers, users can access computing resources on-demand from cloud providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud.

### Key Characteristics of Cloud Computing:

- **On-Demand Self-Service** – Users can provision resources automatically without human intervention.

- **Broad Network Access** – Services are accessible from anywhere over the internet.

- **Resource Pooling** – Multiple users share a pool of resources dynamically assigned and
reassigned based on demand.

- **Scalability and Elasticity** – Resources can be scaled up or down automatically based on workload.

- **Measured Service** – Users pay only for the resources they use (pay-as-you-go model).

### Example:
A startup can host its website on AWS instead of purchasing servers, ensuring high availability,
automatic scaling, and lower costs.

---

## Overview of AWS

Amazon Web Services (AWS) is a comprehensive cloud platform offering a vast array of
services, including computing power, storage options, and networking capabilities.

### Key AWS Services:

- **Compute**: EC2, Lambda

- **Storage**: S3, EBS

- **Database**: RDS, DynamoDB

- **Networking**: VPC, Route 53

- **Security & Identity**: IAM, Security Groups

- **Machine Learning**: SageMaker, Rekognition

AWS operates on a **pay-as-you-go pricing model**, reducing the need for upfront investments.

---

## AWS Global Infrastructure

AWS operates in multiple **Regions** worldwide, each containing multiple **Availability Zones
(AZs)** to ensure fault tolerance and high availability.

### Key Components:

- **Regions** – Geographical areas where AWS has data centers (e.g., US-East-1, AP-South-1 in
Mumbai).

- **Availability Zones (AZs)** – Multiple data centers within a region for redundancy.

- **Edge Locations** – Content delivery locations for faster access through AWS CloudFront
(CDN).

### Example:
If a business deploys an application in **US-East-1**, it can use **three AZs** (A, B, C) to
ensure high availability.

---

## AWS Free Tier

AWS Free Tier allows new users to explore AWS services at no cost for **12 months** with
limited usage.

### Free Tier Offers:

- **EC2**: 750 hours/month of t2.micro or t3.micro instances.

- **S3**: 5GB of storage.

- **RDS**: 750 hours of usage with specific database engines.

- **Lambda**: 1 million free requests per month.

**Important:** Always monitor usage to avoid accidental charges.

---

## Introduction to EC2 (Elastic Compute Cloud)

Amazon EC2 provides scalable virtual servers called **instances** for hosting applications.

### Features:

- **Flexible Compute Capacity** – Choose from various instance types.

- **Secure & Reliable** – Integrated with IAM & Security Groups.

- **Pay-as-you-go** pricing – Costs depend on usage.

### Example:

A web application running on an EC2 instance can scale up automatically during high traffic and
scale down when demand decreases.

---
## EC2 Instance Types

AWS provides various EC2 instance families optimized for different workloads:

- **General Purpose**: Balanced CPU, memory, and networking (e.g., t3.micro, m5.large)

- **Compute Optimized**: High-performance CPU (e.g., c5.large)

- **Memory Optimized**: Large RAM for databases (e.g., r5.large)

- **Storage Optimized**: High disk I/O (e.g., i3.large)

- **GPU Instances**: Machine learning and graphics rendering (e.g., p3.2xlarge)

### Example:

A startup running a basic web application might use a **t3.micro** instance, while a data
analytics firm may prefer **r5.large** for high memory.

---

## Launching, Configuring, and Terminating an EC2 Instance

### Steps to Launch an EC2 Instance:

1. **Sign in** to AWS Management Console.

2. **Go to EC2 Dashboard** → Click **Launch Instance**.

3. **Choose AMI (Amazon Machine Image)** – Example: Ubuntu 20.04.

4. **Select Instance Type** – Example: t2.micro (Free Tier eligible).

5. **Configure Instance** (networking, storage, IAM roles, etc.).

6. **Add Storage** – Default is 8GB EBS.

7. **Configure Security Group** – Allow SSH (Port 22), HTTP (Port 80), etc.

8. **Review & Launch** – Choose an existing key pair or create a new one.

9. **Connect via SSH** (using the key pair downloaded).

### Terminating an EC2 Instance:

- Go to EC2 Dashboard → Select the instance → Click **Terminate**.

---
## Security Groups

Security Groups act as **virtual firewalls** controlling inbound and outbound traffic to EC2
instances.

### Example Rules:

- **Allow SSH (22)** from **your IP only**.

- **Allow HTTP (80) & HTTPS (443)** for web traffic.

- **Block all other traffic by default**.

### Example:

If a web server is running on EC2, the security group should allow HTTP & HTTPS traffic, but
restrict SSH access to the admin's IP.

---

## Set Up AWS Free Tier Account

1. Visit **aws.amazon.com/free**.

2. Click **Create an AWS Account**.

3. Provide **Email & Password**.

4. Enter **Billing Information** (Credit/Debit card required for verification).

5. Choose **Basic Support Plan (Free)**.

6. Complete **Phone Verification**.

7. Log in to **AWS Management Console**.

---

## Explore the AWS Management Console

The **AWS Management Console** provides a graphical interface to manage AWS services.

### Key Sections:

- **EC2 Dashboard** – Manage instances.


- **S3** – Storage management.

- **IAM** – Security management.

- **Billing** – Monitor usage & avoid extra charges.

---

## Launch an EC2 Instance

1. Open **EC2 Dashboard**.

2. Click **Launch Instance**.

3. Select an **Amazon Machine Image (AMI)**.

4. Choose **t2.micro** (Free Tier eligible).

5. Configure **network settings**.

6. Set up **Security Group**.

7. **Launch** and download **Key Pair**.

---

## SSH into the Instance

1. Open **Terminal (Linux/macOS) or PowerShell (Windows)**.

2. Navigate to the directory with the key file (`.pem`).

3. Run the command:

```bash

chmod 400 your-key.pem

ssh -i your-key.pem ec2-user@your-instance-ip

```

---

## Terminate the Instance

1. Open **EC2 Dashboard**.

2. Select the **running instance**.


3. Click **Actions** → **Instance State** → **Terminate**.

This shuts down the instance permanently and releases resources.

---

## Storage Services

### 1. Block Storage

- Stores data in fixed-sized blocks.

- Commonly used for low-latency, high-performance storage needs.

- Example: Amazon Elastic Block Store (EBS), which provides block-level storage for EC2 instances.

- Each block can be updated independently without affecting others.

- **Use Case**: Suitable for databases, virtual machines, and applications requiring structured access.

### 2. File Storage

- Provides a hierarchical file system accessible over a network.

- Example: Amazon Elastic File System (EFS), which allows multiple EC2 instances to share the same storage.

- Uses the NFS (Network File System) protocol.

- **Use Case**: Ideal for applications requiring shared storage, such as web hosting, content management, and media processing.

### 3. Object Storage

- Stores data as objects in a flat structure.

- Example: Amazon Simple Storage Service (S3).

- Offers high durability (99.999999999%, or "eleven 9s") and scalability.

- Objects are stored in buckets, and each object has metadata and a unique identifier.

- **Use Case**: Ideal for backups, data lakes, media storage, and large-scale distributed applications.
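The "eleven 9s" durability figure can be put in perspective with quick arithmetic. This is an illustration of the stated design target only, not a guarantee model for any real workload; the object count is a made-up example:

```python
# Illustrative arithmetic: a design durability of 99.999999999% means the
# annual probability of losing a given object is about 1e-11.
durability = 0.99999999999
p_loss = 1 - durability                   # probability one object is lost in a year

objects_stored = 10_000_000               # hypothetical: ten million objects
expected_losses_per_year = objects_stored * p_loss

print(f"{p_loss:.0e}")                    # ~1e-11
print(f"{expected_losses_per_year:.4f}")  # ~0.0001 expected losses per year
```

In other words, storing ten million objects at this durability level implies roughly one expected object loss every ten thousand years.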

## Key Components of VPC (Very Important - 3-4 Questions Expected)

### 1. IP Addressing

- Determines how instances within a VPC communicate.

- Private and public IPs are assigned.

- CIDR (Classless Inter-Domain Routing) defines the IP range (e.g., 10.0.0.0/16).

### 2. Subnets

- Logical subdivisions of a VPC.

- Can be public (has internet access) or private (internal communication only).

- Each subnet is associated with an Availability Zone (AZ).

### 3. Route Tables

- Direct network traffic within the VPC.

- Contain rules defining how traffic is routed to different subnets or external networks.

- A main route table exists by default, but custom tables can be created.

### 4. Security Groups

- Act as a stateful firewall for EC2 instances.

- Rules define inbound (ingress) and outbound (egress) traffic.

- Stateful: responses to allowed traffic are automatically permitted.

### 5. Network Access Control List (NACL)

- Acts as a stateless firewall at the subnet level.

- Rules apply in both inbound and outbound directions.

- Needs explicit allow rules for return traffic.
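The CIDR arithmetic behind a range like 10.0.0.0/16 can be checked locally with Python's standard `ipaddress` module (a local illustration, not an AWS API call):

```python
import ipaddress

# A VPC CIDR of 10.0.0.0/16 spans 65,536 addresses.
vpc = ipaddress.ip_network("10.0.0.0/16")
print(vpc.num_addresses)        # 65536

# Carving the VPC into /24 subnets yields 256 subnets of 256 addresses each.
subnets = list(vpc.subnets(new_prefix=24))
print(len(subnets))             # 256
print(subnets[0], subnets[1])   # 10.0.0.0/24 10.0.1.0/24

# Membership check: does an instance's private IP fall inside the VPC range?
print(ipaddress.ip_address("10.0.1.55") in vpc)   # True
```

Note that AWS additionally reserves the first four and the last address of every subnet, so a /24 subnet provides 251 usable addresses rather than 256.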

## Cross-Region VPC Considerations

- Cross-region VPC creation is not possible; a VPC lives entirely within a single region.

- VPCs can be linked across regions using VPC peering or Transit Gateway, but a single VPC cannot span regions.

- Security groups within a VPC act as a stateful firewall, meaning responses to allowed traffic are automatically permitted.

- Traffic direction:

  - Inbound traffic is known as ingress.

  - Outbound traffic is known as egress.

- In JSON policy documents (e.g., IAM policies), the `Version` field is crucial: it defines the policy language version (such as `"2012-10-17"`) and therefore how the policy is interpreted.

## Route 53 & Load Balancers

- **Amazon Route 53**

  - Global in nature.

  - Used for domain name resolution, traffic routing, and DNS management.

- **Load Balancers**

  - Regional in nature (specific to a region).

  - Application Load Balancer (ALB) supports path-based routing, directing requests based on the URL path.

## Differences: EFS vs. EBS vs. S3 vs. Instance Store

| Feature | EFS (Elastic File System) | EBS (Elastic Block Store) | S3 (Simple Storage Service) | Instance Store |
|---|---|---|---|---|
| Storage Type | File Storage | Block Storage | Object Storage | Ephemeral Storage |
| Use Case | Shared access, multi-instance | Persistent disk storage | Backup, archival, large-scale storage | Temporary storage |
| Accessibility | Multiple EC2 instances | Single EC2 instance | Accessed via HTTP/API | Tied to EC2 instance |
| Performance | Scalable, lower latency | High-performance | High scalability, lower latency | High-speed temporary storage |
| Durability | High | High | 99.999999999% durability | Lost if instance stops |
| Example Usage | Web servers, CMS, shared logs | Databases, VMs, apps requiring structured access | Backups, media files, logs | Cache, buffer storage |

EC2 Creation via AWS CLI


Step 1: Creating a Key Pair

A key pair is required to securely access the EC2 instance via SSH. The private key will be stored
locally.

Command:

```bash
aws ec2 create-key-pair --key-name Mykey --query 'KeyMaterial' --output text > Mykey.pem
```

Explanation:

• aws ec2 create-key-pair → Creates a new key pair.


• --key-name Mykey → Specifies the name of the key pair as Mykey.

• --query 'KeyMaterial' → Extracts only the key material from the output.

• --output text → Ensures the key material is printed as plain text.

• > Mykey.pem → Saves the key as a .pem file.

Example Output:

```
-----BEGIN RSA PRIVATE KEY-----
MIIEvgIBADANB...<key-content>...IDAQAB
-----END RSA PRIVATE KEY-----
```

Important Notes:

• Set permissions on the key to ensure security:

```bash
chmod 400 Mykey.pem
```

• This key will be required to access the EC2 instance via SSH.

Step 2: Creating a Security Group

Security groups act as virtual firewalls that control inbound and outbound traffic.

Command:

```bash
aws ec2 create-security-group --group-name Msg --description "my group"
```

Explanation:

• aws ec2 create-security-group → Creates a security group.

• --group-name Msg → Assigns the name Msg to the security group.

• --description "my group" → Adds a description for identification.

Example Output:

```json
"GroupId": "sg-0d02f7d19731ab955"
```

• Note down the Group ID (sg-0d02f7d19731ab955) for further use.

Step 3: Configuring Security Group Rules

To allow traffic, we authorize inbound rules.

Allow SSH (Port 22)

```bash
aws ec2 authorize-security-group-ingress --group-id sg-0d02f7d19731ab955 --protocol tcp --port 22 --cidr 0.0.0.0/0
```

• Enables SSH access (Port 22) from any IP (0.0.0.0/0).

Allow HTTP (Port 80)

```bash
aws ec2 authorize-security-group-ingress --group-id sg-0d02f7d19731ab955 --protocol tcp --port 80 --cidr 0.0.0.0/0
```

• Allows incoming HTTP traffic for web applications.

Allow HTTPS (Port 443)

```bash
aws ec2 authorize-security-group-ingress --group-id sg-0d02f7d19731ab955 --protocol tcp --port 443 --cidr 0.0.0.0/0
```

• Enables secure HTTPS access.

Security Warning:

• Using 0.0.0.0/0 makes the instance accessible from any IP. Restrict access using
specific IP ranges for better security.

Step 4: Launching the EC2 Instance

Now, we create an EC2 instance with the specified security group and key pair.

Command:

```bash
aws ec2 run-instances --image-id ami-05716d7e60b53d380 --count 1 --instance-type t2.micro --key-name Mykey --security-group-ids sg-0d02f7d19731ab955
```

Explanation:

• aws ec2 run-instances → Launches a new EC2 instance.

• --image-id ami-05716d7e60b53d380 → Specifies the Amazon Machine Image (AMI) ID. This is a Linux-based AMI.

• --count 1 → Creates one instance.

• --instance-type t2.micro → Selects the instance type (free-tier eligible).

• --key-name Mykey → Uses the Mykey key pair for SSH access.

• --security-group-ids sg-0d02f7d19731ab955 → Attaches the security group.

Example Output:

```json
{
  "Instances": [
    {
      "InstanceId": "i-0abcd1234efgh5678",
      "ImageId": "ami-05716d7e60b53d380",
      "State": {"Code": 0, "Name": "pending"},
      "PublicDnsName": "",
      "InstanceType": "t2.micro",
      "KeyName": "Mykey",
      "SecurityGroups": [{"GroupId": "sg-0d02f7d19731ab955", "GroupName": "Msg"}],
      "Placement": {"AvailabilityZone": "us-east-1a"},
      "PrivateIpAddress": "172.31.32.5"
    }
  ]
}
```
• Note the InstanceId (i-0abcd1234efgh5678) for future reference.

Step 5: Connecting to the EC2 Instance

After the instance is running, retrieve its public IP:

```bash
aws ec2 describe-instances --instance-ids i-0abcd1234efgh5678 --query "Reservations[*].Instances[*].PublicIpAddress" --output text
```

Example Output:

```
34.229.23.45
```

SSH into the instance:

```bash
ssh -i Mykey.pem ec2-user@34.229.23.45
```

For an Ubuntu-based AMI, use:

```bash
ssh -i Mykey.pem ubuntu@34.229.23.45
```

Conclusion

• We created an SSH key pair for authentication.

• We set up a security group with inbound rules for SSH, HTTP, and HTTPS.

• We launched an EC2 instance with a specified AMI.

• We connected to the instance using SSH.

Amazon S3 (Simple Storage Service) and AWS IAM (Identity and Access Management)
Introduction to S3
Amazon S3 (Simple Storage Service) is an object storage service that provides high scalability,
security, and data availability. It allows users to store and retrieve any amount of data at any
time, from anywhere on the web. S3 is commonly used for backup, archival, big data analytics,
hosting static websites, and media storage.

Key Features of S3:

• Scalability – Automatically scales as per demand.

• Durability – Provides 99.999999999% (11 9s) durability.

• Security – Supports encryption, access control policies, and IAM integration.

• Data Consistency – Offers strong read-after-write consistency.

Buckets and Objects

• Buckets: Logical containers for storing objects. Each bucket has a globally unique name
and is tied to a specific AWS region.

• Objects: Files stored in S3 with associated metadata. Objects are identified by a unique
key within a bucket.

Example:

• Bucket Name: my-data-storage

• Object Key: images/photo.jpg

• Object URL: https://my-data-storage.s3.amazonaws.com/images/photo.jpg
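The bucket/object naming scheme above can be illustrated in Python. The validator below checks a simplified subset of the official S3 bucket-naming rules (3-63 characters; lowercase letters, digits, dots, and hyphens; must start and end with a letter or digit), and the URL builder mirrors the virtual-hosted-style URL in the example:

```python
import re

# Simplified subset of the S3 bucket-naming rules (the full rules also
# forbid IP-address-like names, consecutive dots, and other edge cases).
BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    return bool(BUCKET_RE.match(name))

def object_url(bucket: str, key: str) -> str:
    # Virtual-hosted-style URL, as in the example above.
    return f"https://{bucket}.s3.amazonaws.com/{key}"

print(is_valid_bucket_name("my-data-storage"))   # True
print(is_valid_bucket_name("My_Bucket"))         # False (uppercase and underscore)
print(object_url("my-data-storage", "images/photo.jpg"))
# https://my-data-storage.s3.amazonaws.com/images/photo.jpg
```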

S3 Storage Classes

S3 offers different storage classes for various use cases:

• S3 Standard – For frequently accessed data.

• S3 Intelligent-Tiering – Automatically moves data between access tiers.

• S3 Standard-IA (Infrequent Access) – For data that is accessed less frequently.

• S3 One Zone-IA – Lower-cost storage but stored in a single availability zone.

• S3 Glacier – Used for archival storage, retrieval times in minutes to hours.

• S3 Glacier Deep Archive – Lowest-cost option, retrieval takes hours.

Permissions and Access Control

• IAM Policies – Define user access through JSON-based policies.

• Bucket Policies – Apply access rules at the bucket level.

• Access Control Lists (ACLs) – Provide object-level permissions.

• Pre-Signed URLs – Grant temporary access to objects without modifying bucket policies.

Example Bucket Policy (allow public read access to a bucket):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-data-storage/*"
    }
  ]
}
```
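A quick way to sanity-check a policy document before applying it is to parse it as JSON and inspect the required fields. This is a purely local check; it does not validate the policy against AWS:

```python
import json

# The public-read bucket policy, embedded here so the check is self-contained.
policy_text = """
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-data-storage/*"
    }
  ]
}
"""

policy = json.loads(policy_text)                     # fails fast on malformed JSON
assert policy["Version"] == "2012-10-17"             # policy language version
stmt = policy["Statement"][0]
print(stmt["Effect"], stmt["Action"])                # Allow s3:GetObject
print(stmt["Resource"].startswith("arn:aws:s3:::"))  # True
```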

Introduction to IAM

AWS IAM (Identity and Access Management) enables secure access control for AWS services
and resources. It allows managing users, groups, roles, and policies.

IAM Components:

• Users – Individual AWS accounts for people or applications.

• Groups – Collection of users with shared permissions.

• Roles – Temporary credentials assigned to AWS services or applications.

• Policies – JSON-based permissions defining what actions are allowed.

• Trusted Entities – Define who can assume a role (e.g., EC2, Lambda).

Best Practices for IAM

• Use IAM roles instead of root user access

• Apply least privilege principle – Grant only necessary permissions.

• Enable Multi-Factor Authentication (MFA) for enhanced security.

• Rotate credentials regularly to minimize security risks.

• Monitor IAM activities using AWS CloudTrail.

Practical Implementation

Create an S3 Bucket

1. Open AWS Console → Go to S3.


2. Click Create Bucket.

3. Provide a unique bucket name and select a region.

4. Configure settings (e.g., versioning, encryption, logging).

5. Click Create.

Upload and Manage Objects

1. Open the S3 bucket.

2. Click Upload and select files.

3. Configure permissions and storage class.

4. Click Upload.

Set Bucket Policies

1. Navigate to Permissions in the bucket.

2. Click Bucket Policy and add JSON policy.

3. Save changes.

Create IAM Users and Groups

1. Open AWS Console → IAM.

2. Click Users → Add User.

3. Assign programmatic and console access.

4. Attach policies or add to groups.

Attach Policies to Users and Groups

1. Go to IAM → Groups.

2. Click Create Group and add users.

3. Attach predefined policies or create custom ones.

Configure MFA (Multi-Factor Authentication)

1. Open IAM, select a user.

2. Click Security credentials → Enable MFA.

3. Use an MFA app (e.g., Google Authenticator) to scan the QR code.

4. Enter the generated OTP to activate MFA.

Conclusion

Amazon S3 and IAM are essential AWS services for secure storage and access management. By
implementing best practices such as bucket policies, IAM roles, and MFA, users can ensure
data security and efficient resource access control.

Amazon S3 (Simple Storage Service)


Amazon S3 is an object storage service that provides scalability, data availability, security, and
performance. It is used to store and retrieve any amount of data at any time.

Creating an S3 Bucket

To create an S3 bucket, use the following command:

```bash
aws s3api create-bucket --bucket mycola --region us-east-1
```

• --bucket mycola: Specifies the bucket name.

• --region us-east-1: Specifies the AWS region where the bucket is created.

Enabling Versioning on an S3 Bucket

S3 versioning allows you to keep multiple versions of an object to protect against accidental
deletions or overwrites.

```bash
aws s3api put-bucket-versioning --bucket mycola --versioning-configuration Status=Enabled
```

• Status=Enabled: Turns on versioning.

Configuring Lifecycle Rules for an S3 Bucket

Lifecycle configuration helps in automating object transitions (e.g., moving objects to Glacier)
and expirations.

```bash
aws s3api put-bucket-lifecycle-configuration --bucket mycola --lifecycle-configuration file://lifecycle.json
```

• file://lifecycle.json: Specifies a lifecycle policy stored in a JSON file.

• Example lifecycle.json file:

```json
{
  "Rules": [
    {
      "ID": "MoveToGlacier",
      "Prefix": "logs/",
      "Status": "Enabled",
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "GLACIER"
        }
      ]
    }
  ]
}
```

• Moves objects with the prefix "logs/" to Amazon Glacier after 30 days.
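The transition logic can be sketched as a toy model in Python. The real evaluation is performed by S3 itself; the prefix and 30-day threshold below simply mirror the rule above:

```python
from datetime import date

# Toy model of the lifecycle rule: objects under the "logs/" prefix
# transition to GLACIER once they are at least `days` days old.
def storage_class_for(key: str, created: date, today: date,
                      prefix: str = "logs/", days: int = 30) -> str:
    if key.startswith(prefix) and (today - created).days >= days:
        return "GLACIER"
    return "STANDARD"

today = date(2024, 6, 30)
print(storage_class_for("logs/app.log", date(2024, 5, 1), today))   # GLACIER (60 days old)
print(storage_class_for("logs/app.log", date(2024, 6, 20), today))  # STANDARD (10 days old)
print(storage_class_for("images/a.jpg", date(2024, 1, 1), today))   # STANDARD (prefix mismatch)
```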

Retrieving the Lifecycle Configuration

To verify the applied lifecycle configuration:

```bash
aws s3api get-bucket-lifecycle-configuration --bucket mycola
```

2. AWS IAM (Identity and Access Management)

AWS IAM is used to manage permissions and access to AWS services securely.

Creating an IAM User

```bash
aws iam create-user --user-name homie --permissions-boundary arn:aws:iam::aws:policy/AmazonS3FullAccess
```

• Creates a user homie with AmazonS3FullAccess set as a permissions boundary.

• --permissions-boundary: Caps the maximum permissions the user can ever have; it does not grant any permissions by itself. The user still needs identity-based policies attached to actually use S3.

Creating an IAM Group

```bash
aws iam create-group --group-name groupie
```

• Creates a group named groupie.

Adding a User to a Group

```bash
aws iam add-user-to-group --group-name groupie --user-name homie
```

• Adds homie to groupie.

Listing Users in a Group

```bash
aws iam get-group --group-name groupie
```

• Retrieves details of all users in the groupie group.

Creating an IAM Access Key for a User

```bash
aws iam create-access-key --user-name homie --output text > acckey.pem

aws iam create-access-key --user-name homie --output text > acckey.csv
```

• Generates an access key for the user homie and stores it in acckey.pem or acckey.csv.

Creating a Custom IAM Policy


```bash
aws iam create-policy --policy-name newtestpolicy --policy-document file://policy.json
```

• Example policy.json:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::mycola",
        "arn:aws:s3:::mycola/*"
      ]
    }
  ]
}
```

• This policy allows listing the bucket mycola and retrieving objects from it.

Retrieving Policy Details

```bash
aws iam get-policy --policy-arn arn:aws:iam::767397794724:policy/newtestpolicy
```

Attaching a Policy to a Group

```bash
aws iam attach-group-policy --group-name groupie --policy-arn arn:aws:iam::767397794724:policy/newtestpolicy
```

• Attaches newtestpolicy to groupie.

Listing Attached Policies for a Group

```bash
aws iam list-group-policies --group-name groupie
```

Retrieving a Group’s Inline Policy

```bash
aws iam get-group-policy --group-name groupie --policy-name newtestpolicy
```

Listing Entities Using a Policy

```bash
aws iam list-entities-for-policy --policy-arn arn:aws:iam::767397794724:policy/newtestpolicy
```


• Displays all IAM users, groups, and roles using the newtestpolicy.

3. IAM Roles

IAM roles are used to grant temporary access to AWS services.

Creating an IAM Role

```bash
aws iam create-role --role-name newtest --assume-role-policy-document file://rolepolicy.json
```

• Example rolepolicy.json:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
• This role allows EC2 instances to assume it for accessing AWS services.

Summary

| AWS Command | Purpose |
|---|---|
| `aws s3api create-bucket --bucket mycola` | Creates an S3 bucket. |
| `aws s3api put-bucket-versioning --bucket mycola --versioning-configuration Status=Enabled` | Enables versioning on a bucket. |
| `aws s3api put-bucket-lifecycle-configuration --bucket mycola --lifecycle-configuration file://lifecycle.json` | Applies a lifecycle policy. |
| `aws iam create-user --user-name homie` | Creates an IAM user. |
| `aws iam create-group --group-name groupie` | Creates an IAM group. |
| `aws iam add-user-to-group --group-name groupie --user-name homie` | Adds a user to a group. |
| `aws iam create-policy --policy-name newtestpolicy --policy-document file://policy.json` | Creates a custom policy. |
| `aws iam attach-group-policy --group-name groupie --policy-arn ...` | Attaches a policy to a group. |
| `aws iam create-role --role-name newtest --assume-role-policy-document file://rolepolicy.json` | Creates an IAM role. |

AWS VPC (Virtual Private Cloud)


1. Introduction to VPC

What is AWS VPC?

AWS Virtual Private Cloud (VPC) is a logically isolated network environment within AWS that
allows users to launch AWS resources in a customized network. It provides control over
networking features such as IP addressing, subnets, route tables, and security settings.

Key Features of VPC:

• Private, isolated network environment.

• Customizable IP address range using CIDR notation.

• Supports public and private subnets.

• Provides routing control using route tables.

• Enhanced security with Security Groups and Network ACLs.

• Supports VPC Peering, Transit Gateway, and VPN connectivity.

2. Subnets, Route Tables, and Transit Gateways

Subnets

Subnets are subdivisions within a VPC that help organize resources and manage traffic
efficiently. Each subnet is associated with a specific Availability Zone (AZ).

Types of Subnets:

1. Public Subnet: Connected to the internet via an Internet Gateway (IGW), allowing
external access.

2. Private Subnet: No direct internet access, used for internal communication.

3. Isolated Subnet: No internet access at all, used for highly secure workloads.

Example: Creating a VPC with two subnets:


• VPC CIDR: 10.0.0.0/16

• Public Subnet CIDR: 10.0.1.0/24

• Private Subnet CIDR: 10.0.2.0/24
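These CIDR choices can be verified locally with Python's `ipaddress` module: both subnets must fall inside the VPC block and must not overlap each other (a local sanity check, not an AWS call):

```python
import ipaddress

# The example VPC and its two subnets.
vpc = ipaddress.ip_network("10.0.0.0/16")
public = ipaddress.ip_network("10.0.1.0/24")
private = ipaddress.ip_network("10.0.2.0/24")

# Both subnets fit inside the VPC CIDR...
print(public.subnet_of(vpc), private.subnet_of(vpc))  # True True

# ...and do not overlap each other.
print(public.overlaps(private))                       # False
```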

Route Tables

A route table contains rules (routes) that determine how network traffic is directed within a VPC.

Types of Routes:

• Local (default route for internal VPC traffic).

• Internet Gateway (IGW) route for public subnet internet access.

• NAT Gateway for private subnet outbound traffic.

Example Route Table Configuration:

| Destination | Target | Subnet Association |
|---|---|---|
| 10.0.0.0/16 | local | All subnets |
| 0.0.0.0/0 | IGW | Public Subnet |
| 0.0.0.0/0 | NAT Gateway | Private Subnet |
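VPC route tables select the most specific (longest-prefix) matching route for each packet. A minimal sketch of that lookup over the public-subnet routes above:

```python
import ipaddress

# Route table: (destination network, target). "igw" here stands in for the
# Internet Gateway entry of the public subnet's table.
routes = [
    (ipaddress.ip_network("10.0.0.0/16"), "local"),
    (ipaddress.ip_network("0.0.0.0/0"), "igw"),
]

def next_hop(dest_ip: str) -> str:
    dest = ipaddress.ip_address(dest_ip)
    matches = [(net, target) for net, target in routes if dest in net]
    # Longest prefix (largest prefixlen) wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.0.1.25"))   # local (intra-VPC traffic, /16 beats /0)
print(next_hop("8.8.8.8"))     # igw   (internet-bound traffic)
```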

Transit Gateways

AWS Transit Gateway simplifies VPC interconnection by acting as a central hub to connect
multiple VPCs and on-premise networks.

Example Use Case:

• Multiple VPCs across different regions connecting via a Transit Gateway instead of
individual VPC peering connections.

3. Security Groups vs. NACLs (Network ACLs)

Security Groups (SGs)

• Stateful firewall at the instance level.

• Allows inbound/outbound rules.

• Evaluates only allowed traffic (deny by default).

• Rules are applied immediately.

Example Security Group Rules:

| Type | Protocol | Port | Source |
|---|---|---|---|
| SSH | TCP | 22 | 0.0.0.0/0 |
| HTTP | TCP | 80 | 0.0.0.0/0 |

Network ACLs (NACLs)


• Stateless firewall at the subnet level.

• Supports both allow and deny rules.

• Processes rules in an ordered list.

• Applied automatically to all resources in the subnet.

Example NACL Rules:

| Rule # | Type | Protocol | Port | Source/Destination | Allow/Deny |
|---|---|---|---|---|---|
| 100 | HTTP | TCP | 80 | 0.0.0.0/0 | ALLOW |
| 200 | All Traffic | ALL | ALL | 0.0.0.0/0 | DENY |
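NACL rules are evaluated in ascending rule-number order, the first matching rule applies, and anything unmatched is implicitly denied. A toy evaluator over the inbound rules above makes the ordering explicit:

```python
# Each rule: (rule number, match criteria, action). "*" matches anything.
rules = [
    (100, {"protocol": "tcp", "port": 80}, "ALLOW"),
    (200, {"protocol": "*", "port": "*"}, "DENY"),
]

def evaluate(protocol: str, port: int) -> str:
    # Rules are checked in ascending rule-number order; first match wins.
    for _, match, action in sorted(rules, key=lambda r: r[0]):
        if match["protocol"] in ("*", protocol) and match["port"] in ("*", port):
            return action
    return "DENY"  # implicit deny if no rule matches

print(evaluate("tcp", 80))   # ALLOW (matched by rule 100)
print(evaluate("tcp", 22))   # DENY  (falls through to rule 200)
```

Because NACLs are stateless, the same ordered evaluation is repeated independently for return traffic, which is why explicit outbound allow rules are required.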

4. VPC Peering

VPC Peering allows direct network connectivity between two VPCs.

Key Considerations:

• Peered VPCs must have non-overlapping CIDR blocks.

• Traffic within peering is private and encrypted.

• Peering connections do not support transitive routing.

Example Use Case: Connecting VPC A (10.0.0.0/16) with VPC B (192.168.0.0/16) to allow
internal traffic flow.

5. Creating a VPC with Public and Private Subnets

Steps:

1. Create a VPC (e.g., 10.0.0.0/16).

2. Create a Public Subnet (10.0.1.0/24).

3. Create a Private Subnet (10.0.2.0/24).

4. Attach an Internet Gateway (IGW) to the VPC.

5. Update Route Table:

o Public Subnet Route: 0.0.0.0/0 → IGW

o Private Subnet Route: 0.0.0.0/0 → NAT Gateway

6. Launch instances in each subnet.

6. Configuring a Security Group and a Network ACL

Security Group Configuration:

• Inbound: Allow HTTP, HTTPS, SSH.

• Outbound: Allow all traffic.

NACL Configuration:
• Allow inbound HTTP/HTTPS (80, 443) for public subnet.

• Allow outbound responses for established connections.

7. Launching Instances in Public and Private Subnets

Observations:

1. Public Subnet Instance:

o Assigned a public IP.

o Can connect to the internet.

o Accessible via SSH.

2. Private Subnet Instance:

o No public IP.

o Requires a NAT Gateway for outbound traffic.

o Cannot be accessed directly from the internet.

Example Commands:

• Public Instance SSH: ssh -i key.pem ec2-user@public-ip

• Private Instance via Bastion Host: ssh -J ec2-user@public-ip ec2-user@private-ip

Conclusion:

AWS VPC provides granular control over network configurations, ensuring both security and
scalability. Implementing security best practices using subnets, routing, and firewall rules
enhances the overall cloud architecture.

## VPC Setup via AWS CLI

```bash
# Create a VPC
aws ec2 create-vpc --cidr-block 10.0.0.0/16 --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=MyVPC}]'
aws ec2 describe-vpcs --filters "Name=tag:Name,Values=MyVPC"

# Create Subnets
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.0.1.0/24 --availability-zone us-east-2a --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=PublicSubnet-AZ1}]'
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.0.2.0/24 --availability-zone us-east-2b --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=PrivateSubnet-AZ1}]'
aws ec2 modify-subnet-attribute --subnet-id <subnet-id> --map-public-ip-on-launch

# Create and Attach Internet Gateway
aws ec2 create-internet-gateway --tag-specifications 'ResourceType=internet-gateway,Tags=[{Key=Name,Value=MyInternetGateway}]'
aws ec2 attach-internet-gateway --internet-gateway-id <igw-id> --vpc-id <vpc-id>

# Configure Route Tables
aws ec2 create-route-table --vpc-id <vpc-id> --tag-specifications 'ResourceType=route-table,Tags=[{Key=Name,Value=PublicRouteTable}]'
aws ec2 create-route --route-table-id <route-table-id> --destination-cidr-block 0.0.0.0/0 --gateway-id <igw-id>
aws ec2 associate-route-table --route-table-id <route-table-id> --subnet-id <subnet-id>
aws ec2 create-route-table --vpc-id <vpc-id> --tag-specifications 'ResourceType=route-table,Tags=[{Key=Name,Value=PrivateRouteTable}]'
aws ec2 associate-route-table --route-table-id <private-route-table-id> --subnet-id <private-subnet-id>

# Configure Network ACLs
aws ec2 create-network-acl --vpc-id <vpc-id> --tag-specifications 'ResourceType=network-acl,Tags=[{Key=Name,Value=PublicNetworkACL}]'
aws ec2 create-network-acl-entry --network-acl-id <acl-id> --rule-number 100 --protocol tcp --port-range From=80,To=80 --egress false --cidr-block 0.0.0.0/0 --rule-action allow
# Associate the NACL with a subnet by replacing the subnet's current association
aws ec2 replace-network-acl-association --association-id <association-id> --network-acl-id <acl-id>

# Create VPC Peering Connection
aws ec2 create-vpc-peering-connection --vpc-id <vpc-id> --peer-vpc-id <peer-vpc-id>
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id <peering-connection-id>

# Launch EC2 Instances
aws ec2 run-instances --image-id ami-12345678 --instance-type t2.micro --subnet-id <public-subnet-id> --associate-public-ip-address --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=PublicInstance}]'
aws ec2 run-instances --image-id ami-12345678 --instance-type t2.micro --subnet-id <private-subnet-id> --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=PrivateInstance}]'

# Configure Security Groups
aws ec2 create-security-group --group-name MySecurityGroup --description "My security group" --vpc-id <vpc-id> --query GroupId --output text
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 443 --cidr 0.0.0.0/0

# Assign Elastic IP to EC2
aws ec2 allocate-address --query AllocationId --output text
aws ec2 associate-address --instance-id <instance-id> --allocation-id <allocation-id>

# Set Up NAT Gateway for Private Subnet
aws ec2 create-nat-gateway --subnet-id <public-subnet-id> --allocation-id <allocation-id> --query NatGateway.NatGatewayId --output text
aws ec2 create-route --route-table-id <private-route-table-id> --destination-cidr-block 0.0.0.0/0 --nat-gateway-id <nat-id>

# Create S3 VPC Endpoint
aws ec2 create-vpc-endpoint --vpc-id <vpc-id> --service-name com.amazonaws.us-east-2.s3 --route-table-ids <private-route-table-id> --query VpcEndpoint.VpcEndpointId --output text
```
