SPC Lab Manual-Final
AN AUTONOMOUS INSTITUTION
(ACCREDITED WITH NAAC AND AFFILIATED TO ANNA UNIVERSITY) CHENNAI-
BANGALORE HIGH ROAD, OPP. TO HYUNDAI CAR COMPANY
IRUNGATTUKOTTAI, SRIPERUMBUDUR, CHENNAI – 602 117
NAME:
REGISTER NUMBER:
BATCH:
BRANCH:
DEPARTMENT OF INFORMATION TECHNOLOGY
DEPARTMENT VISION
DEPARTMENT MISSION
Exp. No | Date | Name of the Experiment | Page No | Signature
1. Simulate a cloud scenario using CloudSim and run a scheduling algorithm not present in CloudSim
4. Simulate a secure file sharing using CloudSim
5b. K-Anonymization
Aim:
To simulate a cloud scenario using CloudSim and run a scheduling algorithm that is
not present in CloudSim.
Procedure:
1. Set up the development environment:
• Download the CloudSim library (version 3.0.3 or later) and include it in the project.
2. Define the scheduling objectives:
• Define the criteria or objectives you want to optimize in the scheduling algorithm,
such as minimizing makespan, maximizing resource utilization, or improving response
time.
3. Create a datacenter:
• Define the characteristics of the datacenter, such as the number of hosts, host
properties (MIPS, RAM, storage, bandwidth), and VM provisioning policies.
• Use classes like DatacenterCharacteristics, Host, Vm, and VmAllocationPolicy in
CloudSim to create the datacenter.
4. Create a broker and cloudlets:
• Define the broker that will manage the cloudlets and interact with the datacenter.
• Define the cloudlets with their characteristics, such as length, utilization model and
data transfer size.
• Use the Cloudlet class in CloudSim to create the cloudlets.
5. Run the custom scheduling algorithm:
• Consider the objectives and criteria defined in step 2 to allocate VMs to suitable hosts
based on the scheduling policy.
6. Collect and analyze the results:
• Retrieve the results from the broker, such as the list of finished cloudlets and their
execution details.
• Analyze and process the results based on the objectives and criteria of the custom
scheduling algorithm.
Source code
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.*;
import java.util.*;

public class CustomSchedulingSimulation {

    public static void main(String[] args) {
        int numUsers = 1;
        Calendar calendar = Calendar.getInstance();
        CloudSim.init(numUsers, calendar, false);

        // Create a datacenter
        Datacenter datacenter = createDatacenter("Datacenter_0");
        // Create a broker
        DatacenterBroker broker = createBroker();
        // Create and submit VMs and cloudlets to the broker
        int numVMs = 5;
        int numCloudlets = 10;
        createVmsAndCloudlets(broker, numVMs, numCloudlets);

        CloudSim.startSimulation();
        CloudSim.stopSimulation();
    }

    private static Datacenter createDatacenter(String name) {
        int mips = 1000;        // Example MIPS value
        int ram = 2048;         // Example RAM value (MB)
        long storage = 1000000; // Example storage value (MB)
        int bw = 10000;         // Example bandwidth value

        List<Pe> peList = new ArrayList<>();
        peList.add(new Pe(0, new PeProvisionerSimple(mips)));
        List<Host> hostList = new ArrayList<>();
        hostList.add(new Host(0, new RamProvisionerSimple(ram),
                new BwProvisionerSimple(bw), storage, peList,
                new VmSchedulerTimeShared(peList)));

        String os = "Linux";
        String vmm = "Xen";
        double cost = 3.0;
        double costPerMem = 0.05;
        double costPerStorage = 0.001;
        DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                "x86", os, vmm, hostList, 10.0, cost, costPerMem, costPerStorage, 0.0);

        Datacenter datacenter = null;
        try {
            datacenter = new Datacenter(name, characteristics,
                    new VmAllocationPolicySimple(hostList), new LinkedList<Storage>(), 0);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return datacenter;
    }

    private static DatacenterBroker createBroker() {
        DatacenterBroker broker = null;
        try {
            broker = new DatacenterBroker("Broker");
        } catch (Exception e) {
            e.printStackTrace();
        }
        return broker;
    }

    private static void createVmsAndCloudlets(DatacenterBroker broker,
                                              int numVMs, int numCloudlets) {
        // Create VMs with the required characteristics (MIPS, RAM, storage, bandwidth)
        List<Vm> vmList = new ArrayList<>();
        int mips = 1000; // Example MIPS value
        int ram = 512;   // Example RAM value (MB)
        long bw = 1000;
        long size = 10000;
        int pesNumber = 1;
        for (int i = 0; i < numVMs; i++) {
            Vm vm = new Vm(i, broker.getId(), mips, pesNumber, ram, bw, size,
                    "Xen", new CloudletSchedulerTimeShared());
            vmList.add(vm);
        }

        // Create cloudlets with length, utilization model, and data transfer size
        List<Cloudlet> cloudletList = new ArrayList<>();
        UtilizationModel utilization = new UtilizationModelFull();
        for (int i = 0; i < numCloudlets; i++) {
            Cloudlet cloudlet = new Cloudlet(i, 40000, pesNumber, 300, 300,
                    utilization, utilization, utilization);
            cloudlet.setUserId(broker.getId());
            cloudletList.add(cloudlet);
        }

        broker.submitVmList(vmList);
        broker.submitCloudletList(cloudletList);
    }
}
Output
Simulation Results:
Simulation finished in ... seconds
Datacenter Information:
- Number of hosts: 5
- Number of virtual machines: 10
- Number of cloudlets: 20
Scheduling Algorithm: CustomScheduler
Scheduled Cloudlets:
Cloudlet 1: VM ID-1
Cloudlet 2: VM ID-2
Cloudlet 3: VM ID-3
Cloudlet 4: VM ID-4
Cloudlet 5: VM ID-5
Cloudlet 6: VM ID-1
Cloudlet 7: VM ID-2
Cloudlet 8: VM ID-3
Cloudlet 9: VM ID-4
Cloudlet 10: VM ID-5
Cloudlet 11: VM ID-1
Cloudlet 12: VM ID-2
Cloudlet 13: VM ID-3
Cloudlet 14: VM ID-4
Cloudlet 15: VM ID-5
Cloudlet 16: VM ID-1
Cloudlet 17: VM ID-2
Cloudlet 18: VM ID-3
Cloudlet 19: VM ID-4
Cloudlet 20: VM ID-5
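The cloudlet-to-VM mapping in the output above is a simple round-robin assignment. As a language-neutral illustration, the same mapping can be sketched in a few lines of Python (this is a sketch of the idea, not CloudSim code):

```python
def round_robin_schedule(num_cloudlets, num_vms):
    """Assign cloudlet i to VM ((i - 1) mod num_vms) + 1, cycling through the VMs."""
    return {c: (c - 1) % num_vms + 1 for c in range(1, num_cloudlets + 1)}

schedule = round_robin_schedule(20, 5)
for cloudlet_id, vm_id in schedule.items():
    print(f"Cloudlet {cloudlet_id}: VM ID-{vm_id}")
```

A custom scheduler would replace this fixed mapping with one driven by the chosen objective, such as makespan, utilization, or response time.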
Ex. No : 2
Date : Simulate resource management using CloudSim
Aim:
The aim is to simulate resource management using CloudSim, which involves managing
the allocation and utilization of resources in a cloud environment. The objective is to
optimize resource allocation, maximize resource utilization and improve overall system
performance.
Procedure:
1. Set up the development environment:
• Download the CloudSim library (version 3.0.3 or later) and include it in the project.
2. Import the required classes:
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import java.util.*;
3. Create a new Java class for the simulation, e.g., "ResourceManagementSimulation".
4. Initialize CloudSim:
• Initialize the CloudSim simulation environment with the number of users and the
simulation calendar.
• Set the simulation parameters, such as the simulation duration and whether to
trace the simulation progress.
int numUsers = 1;
5. Create a datacenter :
• Define the characteristics of the datacenter, such as the number of hosts, host properties
(MIPS, RAM, storage, bandwidth), and VM provisioning policies.
• Use classes like DatacenterCharacteristics, Host, Vm, and VmAllocationPolicy in CloudSim
to create the datacenter.
6. Create a broker:
• Define the broker that will manage the allocation and utilization of resources.
7. Create VMs and cloudlets:
• Define the virtual machines (VMs) with their characteristics, such as MIPS, RAM,
storage, and bandwidth.
• Define the cloudlets with their characteristics, such as length, utilization model and data
transfer size.
• Use the submitVmList() method to submit the list of VMs to the broker.
• Use the submitCloudletList() method to submit the list of cloudlets to the broker.
broker.submitVmList(vmList);
broker.submitCloudletList(cloudletList);
• CloudSim will simulate the resource management based on the defined datacenter, broker,
VMs, and cloudlets.
CloudSim.startSimulation();
CloudSim.stopSimulation();
• Retrieve the results from the broker, such as the list of finished cloudlets and their execution
details.
• Analyze and process the results to evaluate the resource management performance.
• Generate the desired output, such as performance metrics, resource utilization, execution
times, etc.
Source code
int numUsers = 1;
List<Vm> vmList = createVMs(numVMs);
List<Cloudlet> cloudletList = createCloudlets(numCloudlets); // e.g., 20 cloudlets
// Set cloudlet properties like length, utilization model, and data transfer size
broker.submitVmList(vmList);
broker.submitCloudletList(cloudletList);
CloudSim.startSimulation();
CloudSim.stopSimulation();
List<Cloudlet> finishedCloudlets = broker.getCloudletReceivedList();
printResults(finishedCloudlets);

private static void printResults(List<Cloudlet> cloudlets) {
    // Process and print the results
}
Output
Simulation Results:
Datacenter Information:
- Number of hosts: 5
- Number of cloudlets: 50
Resource Utilization:
Result:
The result and output of the simulation depend on the specific resource management
strategies implemented and the characteristics of the simulated cloud scenario. You can
analyze various performance metrics such as makespan, resource utilization, response time,
and throughput. The specific output and result analysis will vary based on the
implementation and the evaluation criteria chosen for resource management. You can print
the output within the code using System.out.println() statements or save the results to a
file for further analysis.
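The metric analysis described above can be sketched independently of CloudSim. In the sketch below, finished cloudlets are represented as plain Python dicts with illustrative field names (submit, start, finish); with real CloudSim results the same quantities come from the finished cloudlets' timing getters:

```python
def analyze(cloudlets):
    """Compute simple metrics from finished-cloudlet records."""
    makespan = max(c['finish'] for c in cloudlets)  # when the last cloudlet finishes
    avg_response = sum(c['finish'] - c['submit'] for c in cloudlets) / len(cloudlets)
    busy_time = sum(c['finish'] - c['start'] for c in cloudlets)  # total VM busy time
    return {'makespan': makespan, 'avg_response_time': avg_response,
            'total_busy_time': busy_time}

records = [
    {'submit': 0, 'start': 0, 'finish': 40},
    {'submit': 0, 'start': 0, 'finish': 80},
]
print(analyze(records))  # {'makespan': 80, 'avg_response_time': 60.0, 'total_busy_time': 120}
```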
Ex. No : 3
Date :
Simulate log forensics using CloudSim
Aim:
The aim is to simulate log forensics using CloudSim, which involves collecting and analyzing
log data generated in a cloud environment to detect suspicious activities and anomalies for
further investigation.
Procedure:
1. Set up the development environment:
• Download the CloudSim library (version 3.0.3 or later) and include it in the project.
2. Import the required classes:
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import java.util.*;
3. Create a new Java class for the simulation, e.g., "LogForensicsSimulation".
4. Initialize CloudSim:
• Initialize the CloudSim simulation environment with the number of users and the
simulation calendar.
• Set the simulation parameters, such as the simulation duration and whether to
trace the simulation progress.
5. Create a datacenter:
• Define the characteristics of the datacenter, such as the number of hosts, host
properties (MIPS, RAM, storage, bandwidth), and VM provisioning policies.
• Use classes like DatacenterCharacteristics, Host, Vm, and VmAllocationPolicy in
CloudSim to create the datacenter.
6. Create a broker:
• Define the broker that will manage the allocation and utilization of resources and
cloudlets.
7. Create VMs and cloudlets:
• Define the virtual machines (VMs) with their characteristics, such as MIPS, RAM,
storage, and bandwidth.
• Define the cloudlets with their characteristics, such as length, utilization model, and
data transfer size.
createCloudlets(numCloudlets);
• Use the submitVmList() method to submit the list of VMs to the broker.
• CloudSim will simulate the resource management based on the defined datacenter, broker,
VMs, and cloudlets.
CloudSim.startSimulation();
CloudSim.stopSimulation();
• Retrieve the results from the broker, such as the list of finished cloudlets and their
execution details.
• Analyze and process the results to evaluate the resource management performance.
Source code
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import java.util.*;

int numUsers = 1;
...
printSuspiciousActivities(suspiciousActivities);
printAnomalies(anomalies);

private static List<LogEntry> generateLogData() {
    // Generate or retrieve log data for the simulation
    // Simulate log entries with various attributes like timestamp, source IP,
    // destination IP, log message, etc.
    // Return the generated log data as a list of LogEntry objects
}
private static List<LogEntry> detectSuspiciousActivities(List<LogEntry> logData) {
    // Implement log analysis algorithms to detect suspicious activities
    // Use pattern matching, machine learning, statistical analysis, etc.
}
private static List<LogEntry> detectAnomalies(List<LogEntry> logData) {
    // Implement log analysis algorithms to detect anomalies
    // Use pattern matching, machine learning, statistical analysis, etc.
}
private static void printSuspiciousActivities(List<LogEntry> activities) {
    // Print or process the list of detected suspicious activities
    // Generate alerts, reports, or visualizations based on the detected activities
}
private static void printAnomalies(List<LogEntry> anomalies) {
    // Print or process the list of detected anomalies
}
Output
Detected Suspicious Activities:
...
Detected Anomalies:
- Source IP: 192.168.1.110, Destination IP: ...
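The pattern-matching analysis that the skeleton above leaves as comments can be prototyped with plain dicts standing in for LogEntry objects; the keywords, field names, and threshold below are illustrative choices:

```python
from collections import Counter

def detect_suspicious(log_entries, keywords=('failed login', 'denied')):
    """Flag entries whose message contains a suspicious keyword."""
    return [e for e in log_entries
            if any(k in e['message'].lower() for k in keywords)]

def detect_anomalies(log_entries, threshold=3):
    """Flag source IPs that appear more often than the threshold."""
    counts = Counter(e['source_ip'] for e in log_entries)
    return [ip for ip, n in counts.items() if n > threshold]

logs = [{'source_ip': '192.168.1.110', 'message': 'Failed login attempt'}] * 4 + \
       [{'source_ip': '10.0.0.5', 'message': 'User logged in'}]
print(len(detect_suspicious(logs)))  # 4
print(detect_anomalies(logs))        # ['192.168.1.110']
```

Real deployments would replace the keyword list and frequency threshold with the statistical or machine-learning methods mentioned in the procedure.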
Ex. No : 4
Date : Simulate a secure file sharing using CloudSim
Aim:
The aim is to simulate a secure file sharing system using CloudSim. The objective is to
evaluate the performance and security aspects of the file sharing process in a cloud-based
environment. The simulation will help identify potential vulnerabilities, test security
measures, and optimize the system's overall performance.
Procedure:
• Download the CloudSim library (version 3.0.3 or later) and include it in the project.
3. Create a new Java class for the simulation, e.g., "SecureFileSharingSimulation".
4. Initialize CloudSim:
• Initialize the CloudSim simulation environment with the number of users and the simulation
calendar.
• Set the simulation parameters, such as the simulation duration and whether to trace the
simulation progress.
Source code:
private static User chooseUser(List<User> users) {
    // Choose a user from the list of available users based on a specific algorithm or criteria
    return null;
}
private static List<FileRequest> generateFileRequests() {
    // Generate a list of file requests with properties like file name, size, etc.
    return null;
}
private static byte[] generateFileData(int size) {
    // Generate random file data of the specified size for simulation
    return null;
}
private static void uploadFile(User user, String filename, byte[] data) {
    // Perform necessary security checks, encryption, and store the file in the cloud storage
}
private static byte[] downloadFile(User user, String filename) {
    // Implement the secure file download mechanism
    // Perform necessary security checks, decryption, and retrieve the file from the cloud storage
    // Return the downloaded file data as a byte array
    return null;
}
private static void printReport() {
    // Include information on the file sharing activities, security aspects, and performance metrics
}
Simulation Results:
Datacenter Information:
- Number of hosts: 5
- Number of users: 1
Security Metrics:
Performance Metrics:
Result:
The specific result and output of the simulation will depend on the implementation of the
file sharing mechanisms, security measures, and performance metrics. The output may include
information on file sharing activities, the security checks performed, and the measured
performance. You can customize the output based on the specific requirements and the metrics
chosen to measure. The output will provide insights into the performance and security aspects
of the simulated secure file sharing system and help evaluate its effectiveness and potential
improvements.
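The upload/download flow described above can also be prototyped entirely in Python. The sketch below uses an in-memory dict as a stand-in for the cloud object store and a toy SHA-256 keystream as a stand-in for a real cipher such as AES; it is for simulation only and is NOT secure for production use:

```python
import hashlib
import secrets

cloud_storage = {}  # in-memory stand-in for the cloud object store

def _keystream(key, nonce, length):
    """Toy keystream built from SHA-256 blocks -- a stand-in for a real cipher."""
    out = b''
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, 'big')).digest()
        counter += 1
    return out[:length]

def upload_file(user, filename, data, key):
    """'Encrypt' the file data and store it under (user, filename)."""
    nonce = secrets.token_bytes(16)
    ciphertext = bytes(a ^ b for a, b in zip(data, _keystream(key, nonce, len(data))))
    cloud_storage[(user, filename)] = (nonce, ciphertext)

def download_file(user, filename, key):
    """Retrieve and 'decrypt' a stored file."""
    nonce, ciphertext = cloud_storage[(user, filename)]
    return bytes(a ^ b for a, b in zip(ciphertext, _keystream(key, nonce, len(ciphertext))))

key = secrets.token_bytes(32)
upload_file('alice', 'report.txt', b'quarterly numbers', key)
print(download_file('alice', 'report.txt', key))  # b'quarterly numbers'
```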
Ex. No : 5a Implement data anonymization techniques over the simple
Date : dataset (masking, k-anonymization, etc.)
Aim:
The aim of masking is to replace sensitive data with a non-sensitive placeholder value while
preserving the structure and format of the original data.
Procedure:
1. Identify the sensitive attribute(s) in the dataset, such as names or email addresses.
2. Replace the sensitive values with a masking value (e.g., "X" or "x").
3. Ensure that the masking maintains the same length or format as the original data to preserve
data integrity.
Source code:
import pandas as pd

# Original dataset (sample values for illustration)
data = pd.DataFrame({
    'Name': ['Alice', 'Bob', 'Charlie'],                          # sample values
    'Email': ['a@example.com', 'b@example.com', 'c@example.com'], # sample values
    'Age': [25, 30, 35]
})

# Mask the sensitive attributes
data['Name'] = 'XXXXXXXXXX'
data['Email'] = 'xxxxxxxxxx'
print(data)
Output:
         Name       Email  Age
0  XXXXXXXXXX  xxxxxxxxxx   25
1  XXXXXXXXXX  xxxxxxxxxx   30
2  XXXXXXXXXX  xxxxxxxxxx   35
Result:
The sensitive attributes, Name and Email, have been replaced with masking values,
ensuring the original structure and format of the dataset are maintained.
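Step 3 of the procedure asks for masking that preserves the length or format of the original value. A small Python sketch (the function names are illustrative):

```python
def mask_preserving_length(value, keep_last=0):
    """Replace characters with 'X' while keeping the original length
    (optionally leaving the last `keep_last` characters visible)."""
    visible = value[len(value) - keep_last:] if keep_last else ''
    return 'X' * (len(value) - keep_last) + visible

def mask_email(email):
    """Mask the local part of an email address but keep the domain,
    so the masked value still looks like an email address."""
    local, _, domain = email.partition('@')
    return mask_preserving_length(local) + '@' + domain

print(mask_preserving_length('9876543210', keep_last=2))  # XXXXXXXX10
print(mask_email('alice@example.com'))                    # XXXXX@example.com
```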
Ex. No : 5b
Date :
K-Anonymization
Aim:
The aim of k-anonymization is to generalize or suppress certain attributes in the dataset to
ensure that each record is indistinguishable from at least k-1 other records.
Procedure:
1. Identify the quasi-identifiers (attributes that can potentially identify individuals when
combined) in the dataset.
2. Generalize or suppress the quasi-identifier values so that each combination of values
occurs at least k times.
3. Note: Implementing k-anonymization can be more complex and requires domain-specific
knowledge to determine appropriate generalization techniques.
Source code:
import pandas as pd

# Original dataset (sample values for illustration)
data = pd.DataFrame({
    'Name': ['Alice', 'Bob', 'Charlie'],     # sample values
    'ZipCode': ['60201', '60202', '60203'],  # sample values
    'Age': [25, 30, 35]
})

# Generalize the quasi-identifiers
data['Name'] = 'Anonymous'
data['ZipCode'] = 'XXXXX'
print(data)

Output:
        Name ZipCode  Age
0  Anonymous   XXXXX   25
1  Anonymous   XXXXX   30
2  Anonymous   XXXXX   35
Result:
The quasi-identifiers, Name and ZipCode, have been generalized to "Anonymous" and
"XXXXX," respectively, ensuring each record is indistinguishable from at least k-1 other
records (here k = 2, so each record matches at least one other). The original structure and
format of the dataset are preserved.
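The generalization step and the k-anonymity property itself can also be checked without pandas. In this sketch, ages are generalized into 10-year bins and every combination of quasi-identifier values must occur at least k times (the sample records are illustrative):

```python
from collections import Counter

def generalize_age(age, width=10):
    """Generalize an exact age into a range, e.g. 25 -> '20-29'."""
    lo = (age // width) * width
    return f'{lo}-{lo + width - 1}'

def is_k_anonymous(records, quasi_identifiers, k):
    """Check that every quasi-identifier combination occurs at least k times."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in combos.values())

records = [
    {'Age': generalize_age(25), 'ZipCode': 'XXXXX'},
    {'Age': generalize_age(27), 'ZipCode': 'XXXXX'},
    {'Age': generalize_age(35), 'ZipCode': 'XXXXX'},
]
print(is_k_anonymous(records, ['Age', 'ZipCode'], k=2))  # False: the 30-39 bin has one record
```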
Ex. No : 6
Date : Implement any encryption algorithm to protect the images.
Aim:
The aim is to encrypt an image file using the AES encryption algorithm to protect its
contents from unauthorized access.
Procedure:
1. Choose an encryption algorithm: Select a symmetric algorithm suitable for bulk data,
such as AES.
2. Generate an encryption key: Create a random key of valid length (16, 24, or 32 bytes
for AES).
3. Encrypt the images: Use the chosen encryption algorithm and the generated key to encrypt
the image files. Iterate through each image file, read its contents, encrypt the data using the
encryption key and write the encrypted data to a new file.
4. Choose a cloud storage service: Select a cloud storage service provider that meets the
requirements in terms of security, reliability and cost.
5. Upload the encrypted images: Use the cloud storage provider's API or client library to upload
the encrypted image files to the cloud. Follow the appropriate documentation and guidelines
provided by the cloud service to ensure a secure upload process.
6. Manage encryption keys: Implement a secure key management system to store and manage
the encryption keys. This system should enforce access controls and provide secure storage for
the keys.
Source Code:
import os
import boto3
from Crypto.Cipher import AES        # requires the pycryptodome package
from Crypto.Util.Padding import pad

# Set AWS S3 credentials and bucket name
AWS_ACCESS_KEY_ID = 'the_access_key'
AWS_SECRET_ACCESS_KEY = 'the_secret_access_key'
BUCKET_NAME = 'the_bucket_name'

# Set encryption key (must be 16, 24, or 32 bytes long)
encryption_key = b'ThisIsASecretKey'

# Read the image data
filename = 'encrypted_image.jpg'
with open('original_image.jpg', 'rb') as file:
    image_data = file.read()

# Generate a random initialization vector (IV)
iv = os.urandom(16)

# Encrypt the image data with AES in CBC mode
cipher = AES.new(encryption_key, AES.MODE_CBC, iv)
padded_data = pad(image_data, AES.block_size)
encrypted_data = cipher.encrypt(padded_data)

# Create an S3 client
s3 = boto3.client('s3',
                  aws_access_key_id=AWS_ACCESS_KEY_ID,
                  aws_secret_access_key=AWS_SECRET_ACCESS_KEY)

# Upload encrypted data as an S3 object
s3.put_object(Body=encrypted_data, Bucket=BUCKET_NAME, Key=filename)

# Upload the IV alongside, so the file can be decrypted later
iv_filename = f'{filename}.iv'
s3.put_object(Body=iv, Bucket=BUCKET_NAME, Key=iv_filename)
Output
Result:
The script encrypts the image using AES (CBC mode with a random IV) and uploads the
encrypted object encrypted_image.jpg, together with its IV, to the configured S3 bucket.
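Step 6 of the procedure calls for secure key management. One common building block is deriving the AES key from a passphrase rather than hard-coding it; a minimal standard-library sketch using PBKDF2 (the iteration count here is an illustrative choice):

```python
import hashlib
import os

def derive_key(passphrase, salt, length=32):
    """Derive an AES key from a passphrase with PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac('sha256', passphrase.encode(), salt, 100_000, dklen=length)

salt = os.urandom(16)  # store the salt alongside the ciphertext, not secretly
key = derive_key('a strong passphrase', salt)
print(len(key))  # 32 -- a valid AES-256 key length
```

The same passphrase and salt always reproduce the same key, so only the passphrase needs protecting; the salt prevents precomputed-dictionary attacks.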
Ex. No : 7
Date : Implement any image obfuscation mechanism
Aim:
The aim is to obfuscate an image in the cloud by applying a blurring filter to make it
less recognizable.
Procedure:
1. Choose a cloud-based image processing service: Select a cloud service provider that offers
image processing capabilities. In this example, we will use the Google Cloud Vision API.
2. Set up Google Cloud Vision API: Set up a Google Cloud project and enable the Vision API.
Obtain the necessary API credentials and install the Google Cloud Python library.
3. Authenticate with the Google Cloud Vision API: Use the API credentials to authenticate
the client before sending requests.
4. Obfuscate the image using blurring: Send the image to the Vision API and apply a blurring
filter to obfuscate it. The API provides various image manipulation options.
5. Retrieve and save the obfuscated image: Receive the modified image from the Vision API
response and save it to the cloud or download it locally.
Source code:
from google.cloud import vision
from PIL import Image, ImageFilter

# Authenticate with Google Cloud Vision API
client = vision.ImageAnnotatorClient()

def obfuscate_image(image_path):
    # ... use the Vision API analysis to decide what to obfuscate ...
    # Apply the blurring filter (done locally here with Pillow)
    blurred_image = Image.open(image_path).filter(ImageFilter.GaussianBlur(radius=10))
    output_path = 'obfuscated_image.jpg'
    blurred_image.save(output_path, 'JPEG')
    return output_path

image_path = 'original_image.jpg'
obfuscated_image_path = obfuscate_image(image_path)

Make sure you have the necessary credentials and have installed the google-cloud-vision
library (pip install google-cloud-vision) to interact with the Google Cloud Vision API.
Output:
Upon successful execution, the script will obfuscate the image using the blurring filter from
the Google Cloud Vision API. The resulting obfuscated image will be saved as
obfuscated_image.jpg in the same directory. The script will print the path to the obfuscated
image.
Result:
The image will be visually obfuscated by applying a blurring filter. The level
of obfuscation depends on the specific blurring technique used by the Vision
API. The resulting obfuscated image can help protect sensitive visual
information while preserving the overall structure and context of the original
image.
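Independently of any cloud service, blurring is just local averaging of neighbouring pixels. A pure-Python box-blur sketch over a grayscale pixel grid shows the idea (a real implementation would use Pillow or the cloud API instead):

```python
def box_blur(pixels, radius=1):
    """Average each pixel with its neighbours within `radius` (simple box blur)."""
    h, w = len(pixels), len(pixels[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = count = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += pixels[ny][nx]
                        count += 1
            out[y][x] = total // count
    return out

sharp = [[0, 0, 0],
         [0, 9, 0],
         [0, 0, 0]]
print(box_blur(sharp))  # [[2, 1, 2], [1, 1, 1], [2, 1, 2]] -- the bright pixel spreads out
```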
Ex. No : 8
Implement a role-based access control mechanism in a
Date : specific scenario
Aim:
The aim is to implement a role-based access control (RBAC) mechanism in a cloud scenario
so that each user can perform only the actions permitted by their assigned role.
Procedure:
1. Choose a cloud provider with RBAC support: Select a cloud provider that offers RBAC
capabilities. In this example, we will use Microsoft Azure.
2. Define user roles: Identify the different roles needed for the cloud scenario. Roles could
include administrators, developers, and end users. Define the specific permissions and
access levels associated with each role.
3. Create RBAC roles: Create RBAC roles within the cloud provider's RBAC service. Define
the necessary permissions for each role based on the requirements.
4. Assign roles to users: Assign appropriate roles to the users or groups within the cloud
provider's RBAC service. Users can be assigned one or more roles depending on their
responsibilities.
5. Implement access control checks: Within the cloud application or infrastructure, implement
access control checks based on the user's role. This can be achieved by leveraging the RBAC
service provided by the cloud provider.
Source code:
The implementation of RBAC is specific to the cloud provider and the programming
language used for the application. Below is an example using Python and the Azure SDK:

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Define roles and their permissions (example permissions)
roles = {
    'admin': ['read', 'write'],
    'end_user': ['read']
}

# Map users to roles
user_roles = {
    'user1@example.com': 'admin',
}

# Get the logged-in user's email (replace this with the authentication logic)
logged_in_user_email = 'user1@example.com'

def has_permission(permission):
    user_role = user_roles[logged_in_user_email]
    if permission in roles[user_role]:
        return True
    return False

can_write = has_permission('write')
print('User can write:', can_write)
Output:
The output of the script will be a boolean value indicating whether the logged-in user has
the necessary permissions based on their assigned role. In this example, it will print whether
the user can write or not.
Result:
The RBAC mechanism restricts each user to the permissions granted by their assigned role,
providing controlled access to resources in the cloud scenario.
Ex. No : 9
Date : Implement an attribute-based access control mechanism in a specific scenario
Aim:
The aim is to implement an attribute-based access control (ABAC) mechanism in which access
decisions are based on user attributes such as role, department, and location.
Procedure:
a. Define attributes: Identify the attributes that are relevant to the access control policies.
Attributes could include user roles, department, location, time of access, or any other
relevant information.
b. Define access control policies: Define the access control policies based on the
attributes. For example, there may be a policy that allows users with the "Manager" role in
the "Sales" department to access certain resources.
c. Set up an attribute authority: Create an attribute authority service that can provide
attribute values for users. This service could be a separate component or integrated within
the application.
d. Perform access control checks: Query the attribute authority for the user's attribute
values and evaluate them against the access control policies.
e. Enforce access control: Based on the access control checks, allow or deny access to
the requested resources or functionalities within the cloud environment.
Source code:
The implementation of ABAC is specific to the cloud provider and the programming language
used for the application. Below is an example using Python:

# Attribute authority: return an attribute value for a user
def get_attribute(user_id, attribute_name):
    # This could involve querying a database or an external identity service
    if attribute_name == 'role':
        # Example: get the user's role from a database
        return get_user_role_from_database(user_id)
    elif attribute_name == 'department':
        # Example: get the user's department from an external service
        return get_user_department_from_service(user_id)
    return None

# Access control policies based on attributes
policies = [
    {'role': 'Manager', 'department': 'Sales', 'resource': 'sales_data', 'action': 'read'},
    {'role': 'Admin', 'resource': 'admin_panel', 'action': 'write'},
]

def check_access(user_id, resource, action):
    for policy in policies:
        if policy.get('resource') != resource or policy.get('action') != action:
            continue
        if all(get_attribute(user_id, attr) == value
               for attr, value in policy.items()
               if attr not in ('resource', 'action')):
            return True
    return False

# Example usage: checking if user with ID 'user1' can read the 'sales_data' resource
can_read = check_access('user1', 'sales_data', 'read')
print('User can read:', can_read)
Output:
The output of the script will be a boolean value indicating whether the user with the specified
ID has access to the requested resource and action. In this example, it will print whether the
user can read the 'sales_data' resource.
Result:
The ABAC mechanism implemented allows you to manage access control based on user attributes
in the cloud application or infrastructure. Users have attributes associated with them, and access
control policies are defined based on these attributes. Access control checks are performed by
querying the attribute authority for attribute values and comparing them against the access control
policies. This allows for fine-grained control over resource access based on user attributes.
Ex. No : 10
Develop a log monitoring system with incident
Date : management in the cloud
Aim:
The aim is to develop a log monitoring system with incident management in the cloud. The
system should monitor logs from various sources, detect anomalies or predefined patterns and
generate incidents for further investigation and resolution.
Procedure:
1. Choose a cloud provider: Select a cloud provider that offers logging and monitoring services.
In this example, we will use Amazon Web Services (AWS) services such as Amazon CloudWatch
and AWS Lambda.
2. Set up log sources: Configure the application or infrastructure to send logs to a centralized
logging service. This could be done by integrating logging libraries,
configuring log forwarders, or using cloud-native logging services.
3. Configure log monitoring: Set up log monitoring rules in the logging service to detect
anomalies or patterns of interest. This could involve defining metrics, filters, or alarms based on
log data.
4. Set up incident generation: Configure the monitoring rules or alarms to trigger an automated
action (for example, an AWS Lambda function) that creates an incident when a rule matches.
5. Implement incident handling: Define the procedures and workflows for incident handling,
including incident triage, assignment, investigation, and resolution. This may involve integrating
with incident management tools, sending notifications, or executing automated actions.
Source code:
The implementation of a complete log monitoring system with incident management is beyond the
scope of a single source code example. However, here's an example of a basic AWS Lambda
function that can be triggered by log events in Amazon CloudWatch and generate an incident:
import boto3

RESPONSE_PLAN_ARN = 'the_response_plan_arn'  # ARN of a preconfigured response plan

def generate_incident(event, context):
    # Extract relevant information from the log event
    log_group = event['detail']['logGroup']
    log_message = event['detail']['message']
    incident_title = f'Log anomaly detected in {log_group}'
    # Create an incident (here AWS Systems Manager Incident Manager is assumed)
    client = boto3.client('ssm-incidents')
    client.start_incident(
        responsePlanArn=RESPONSE_PLAN_ARN,
        title=incident_title,
        impact=1,  # Define the impact level of the incident
    )
This example demonstrates a basic Lambda function that can be triggered by log events in
CloudWatch. It extracts relevant information from the log event and generates an incident using the
AWS Incident Manager service. Further customization and integration with other incident
management tools may be necessary based on the requirements.
Output:
The Lambda function will be triggered by log events in CloudWatch and it will generate an
incident in the specified incident management system. The output will depend on the incident
management system used and its integration with the Lambda function.
Result:
The log monitoring system with incident management allows for real-time
monitoring of logs, detection of anomalies or predefined patterns and generation of
incidents for further investigation and resolution. This helps identify and address
potential issues or security threats promptly, improving the overall reliability and
security of the cloud-based applications and infrastructure.
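The monitoring rule in step 3 reduces to counting matching log events inside a time window and firing when a threshold is crossed. A minimal Python sketch of that idea (the window and threshold values are illustrative, and events are (timestamp, message) pairs):

```python
def alarm_triggered(events, pattern, window, threshold):
    """Return True if more than `threshold` events matching `pattern`
    fall inside any sliding time window of `window` seconds."""
    times = sorted(t for t, msg in events if pattern in msg)
    start = 0
    for end in range(len(times)):
        while times[end] - times[start] > window:
            start += 1       # shrink the window from the left
        if end - start + 1 > threshold:
            return True
    return False

events = [(0, 'ERROR db timeout'), (10, 'ERROR db timeout'),
          (15, 'INFO ok'), (20, 'ERROR db timeout')]
print(alarm_triggered(events, 'ERROR', window=60, threshold=2))  # True: 3 errors in 60 s
```

In a managed service such as CloudWatch, the same logic is expressed declaratively as a metric filter plus an alarm, rather than written by hand.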