
Siddartha Gudipati

AWS Cloud Engineer/ DevOps


siddartha.gp@gmail.com
+1(216) 586-4265
Summary
 10 years of experience in AWS Cloud/DevOps environments covering analysis, design, development, testing, customization, bug fixes, enhancement, support, and implementation of infrastructure using Agile methodology on the AWS platform.
 Experienced in deploying, managing, and operating scalable, highly available, and fault-tolerant systems, implementing data and security best practices across AWS services.
 Implemented and supported foundational client AWS infrastructure including, but not limited to, AWS Organizations/accounts, Landing Zone, networking, security, identity, and targeted business applications/integrations.
 Experience writing Ansible playbooks (over SSH) to manage the configuration of AWS nodes, and testing the playbooks on AWS instances using Python.
 Working experience with AWS services including EC2, S3, Elastic Load Balancer, ECS, RDS, Elastic Beanstalk, CloudFront, VPC, CloudWatch, FSx, Lambda, EKS, Transfer Family for SFTP, Trusted Advisor, Route 53, Step Functions, SNS, SQS, SCPs, AWS Organizations, EventBridge, Certificate Manager, Cost Explorer, and the AWS CLI, plus GKE and AKS on other clouds.
 Experienced in infrastructure automation with Terraform and AWS CloudFormation, including custom launch templates.
 Experience working with Docker and Kubernetes; used Docker storage, networking, cloud integrations, logging, Compose, and continuous integration.
 Work on shift rotations for production support and operational readiness support.
 Hands-on experience installing, configuring, upgrading, and managing Puppet masters and agents.
 Solid knowledge of automated deployment and configuration of application servers such as WebSphere and WebLogic, and web servers such as Apache Tomcat.
 Experience in DevOps, software configuration management, Build and Release Management.
 Experience in using version control and source code management tools.
 Knowledge of using Subversion (SVN) and Bitbucket for version control or source code management.
 Experienced with build utilities such as Maven and Ant for building JAR, WAR, and EAR files.
 Experience in using Jenkins for automating software development process by continuous integration
(CI) and to facilitate continuous delivery (CD) for technical aspects.
 Experienced in writing scripts using Python, Groovy, Shell, and Ruby.
 Worked on Python/Bash scripts to gather resource metrics from AWS EC2 instances, and configured alerts and dashboards using AWS CloudWatch monitoring.
 Experience working with Linux/UNIX and Docker containers.
 Writing/debugging Dockerfiles to build application Docker images and deploying them to Kubernetes by writing YAML manifests and using the kubectl CLI.
 Deployed Kubernetes cluster in production using Terraform scripts.
 Ability to use IBM Rational ClearCase, Ant, Maven, Cruise Control, Bamboo, Hudson.
 Experience in various configuration and Automation tools like Chef and Puppet for deploying
applications into web servers and DB servers. Maintain large deployments using Chef and Puppet.
 Extensively worked on source code management tools like SVN, Git, and GitHub, performing operations such as branching, tagging, merging, and repository management.
 Familiarity with DMZ-based network architectures and associated infrastructure.
 Able to work as part of a high performing, collaborative team with limited supervision
 Automated ML pipeline development: applied DevOps principles to design and implement end-to-end ML pipelines that automate model training, testing, and deployment.
 Designed and deployed automated deployment pipelines on AWS, reducing deployment times by 50% and enhancing system dependability for Hive-based data processing applications.
 Designed and oversaw fault-tolerant, scalable AWS infrastructure, managed with Terraform and AWS CloudFormation, that supported large-scale data processing workloads.
 Reduced query execution time by 30% and substantially cut AWS resource costs through performance and cost-efficiency improvements to Hive queries and data models.
 Streamlined software delivery and reduced downtime for vital applications by using CI/CD best practices, including automated testing and deployment techniques.
 Worked with data engineering teams to create and implement AWS data lake solutions, utilizing Hive and other big data technologies to enable real-time processing; this shortened time-to-market and boosted efficiency.
 Infrastructure as Code (IaC) for machine learning workflows: ensured consistency and reproducibility across environments by using Terraform and Ansible to provision and manage the infrastructure needed for ML experiments and production deployments.
 Continuous Integration and Continuous Deployment (CI/CD): established CI/CD pipelines for machine learning projects to speed up the development lifecycle and enable rapid iteration, integrating version control systems (Git), automated testing frameworks (pytest), and containerization technologies (Docker).

 Monitoring and logging for ML models: implemented monitoring with Prometheus and Grafana to track model performance metrics, detect anomalies, and ensure timely retraining.

Education & Certifications:


 Master of Science in Computer Science from Cleveland State University, USA
 Bachelor of Engineering in Electronics & Telecommunication from Savitribai Phule Pune University,
India.
 AWS Certified SysOps Administrator - Associate.
 Cisco Certified Network Associate (CCNA).

Technical Skills:
Source/Version Control Tools: Git, GitHub, Bitbucket, SVN
Build Management Tools: Jenkins, Maven, Ant, Hive
Configuration Management Tools: Chef, Puppet, Ansible
Infrastructure Automation: CloudFormation, Terraform, Ansible
Monitoring and Log Management Tools: Jira, Nagios, Splunk
Cloud Services: AWS, Azure, OpenStack (IaaS, PaaS, SaaS)
Containerization Tools: Docker, Kubernetes, EKS, AKS, GKE
Virtualization Tools: VMware, Hyper-V
Repositories: Nexus, JFrog Artifactory
Scripting: Shell, Python, Ruby, Groovy
Databases: MySQL, Oracle
Routing Protocols: RIP, EIGRP, OSPF, IS-IS, BGPv4, MP-BGP
Web & Application Servers: Apache HTTPD, Apache Tomcat, WebSphere, WebLogic, JBoss
Simulators: Cisco Packet Tracer, Wireshark, GNS3
Operating Systems: Red Hat Linux, Solaris, CentOS, Ubuntu, Windows
Other Tools: MS Office Suite, FileZilla Client, PuTTY, .NET

Experience:

Cisco – Bothell, WA March 2022 - Current


AWS DevOps Engineer
Responsibilities:
 Managing initiatives for migration and modernization in AWS cloud environment.
 Participate in planning, implementation, and growth of the infrastructure on Amazon Web
Services (AWS) Cloud.
 Automated infrastructure migration to cloud environment using CloudFormation and Terraform.
 Creating Jenkins multibranch pipelines to deploy AWS infrastructure to production and non-production environments.
 Created Step Functions to orchestrate file-processing flows in S3 based on event triggers.
 Used AWS CloudFront to make files available for download from S3 at edge locations for faster availability.
 Maintain SCPs in the management account and control access using IAM roles, users, and policies.
 Designed and deployed AWS solutions using EC2, S3, EFS, ECS, EKS, Secrets Manager, FSx, SNS, SQS, Elastic Load Balancer, Lambda, Route 53, IAM, EventBridge, RDS, Certificate Manager, AWS CLI, Auto Scaling groups, CloudFront, Step Functions, etc.
 Created a Lambda script in Python for SHA-512 calculation and for generating signed URLs to access files hosted in S3 through AWS CloudFront.
 Automated cloud resource provisioning and configuration management for machine learning workloads, using Terraform and Ansible to implement Infrastructure as Code (IaC) techniques.
 Led the planning and implementation of an enterprise-scale data lake solution on AWS that processed and analyzed data using Hive.
 Designed AWS infrastructure with Terraform to be highly available and scalable, guaranteeing peak performance for Hive-based workloads.
 Implemented automated deployment pipelines with Jenkins and Ansible, allowing quick provisioning and configuration of AWS services.
 Optimized Hive queries and data models to increase query performance and decrease resource usage, leading to a 40% reduction in query execution time.
 Project: Real-time Analytics Platform: created and deployed a real-time analytics platform on AWS that processes streaming data from several sources using Hive.
 Orchestrated Docker-based microservices on Kubernetes that integrate seamlessly with Hive and other big data solutions.
 Created CI/CD pipelines for ML projects using Jenkins and GitLab CI/CD, integrating containerization technologies (Docker) and version control systems (Git) to automate model training and deployment.
 Worked with data science teams to implement alerting and monitoring using Prometheus and Grafana, guaranteeing high availability and performance for machine learning models deployed and monitored in production environments.
 Experience with AWS security tools and services: AWS Security Model, IAM (Identity Access
Management), ACM (Amazon Certificate Manager), Security Groups, Network ACLs, Encryption
and Firewalls.
 Work closely with the architects and engineers to implement networks, systems, and storage
environment that effectively reflect business needs, security requirements, and service level
requirements.
 Provide Production support and take care of operational needs for the applications hosted in
AWS to make sure the infrastructure is up and running.
 Providing analysis reports and AWS recommendations for cost optimization and security alerts.
 Create roles and users in AWS identity and access management to grant or revoke permissions.
 Deploy CloudFormation stacks through the Serverless Framework to create and maintain infrastructure.
 Implemented AWS CloudFront to improve the latency of the web application hosted in S3, increase performance, and enhance security.
 Work with the PMs closely to prepare and deliver technical solutions for various sized migrations
and upgrades.
 Build automated CI/CD pipeline with AWS Code Pipeline, Jenkins and AWS Code Deploy.
 Create and maintain approved Terraform IaC modules to ensure consistency and security.
 Assist with application migrations from acquisition AWS Orgs. to Client Org.
 Design and implement best practices for operational excellence, security, reliability, performance,
efficiency, and cost optimization across Cloud platforms
 Create and maintain documentation related to the Client Cloud Program.
 Integrated Amazon DynamoDB with AWS Lambda to store items and back up DynamoDB streams, and implemented Terraform modules for application deployments.
 Setup CloudWatch alarms, SNS notifications on various AWS services as required.
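The SHA-512/signed-URL Lambda described above can be sketched as follows. This is a minimal illustration, not the production code: the event shape, bucket/key names, and the use of an S3 presigned URL (in place of CloudFront key-pair signing) are all assumptions.

```python
import hashlib


def sha512_hex(data: bytes) -> str:
    """Return the SHA-512 digest of a byte string as a hex string."""
    return hashlib.sha512(data).hexdigest()


def handler(event, context):
    """Lambda entry point: hash an S3 object and return a download URL.

    Assumes the triggering event carries "bucket" and "key" fields
    (illustrative). boto3 is imported inside the handler because it is
    provided by the Lambda runtime.
    """
    import boto3

    s3 = boto3.client("s3")
    bucket, key = event["bucket"], event["key"]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    url = s3.generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=3600
    )
    return {"sha512": sha512_hex(body), "url": url}
```

The hashing helper is kept separate from the AWS calls so it can be unit-tested without credentials.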
Environment: Ansible, Jenkins, Git, AWS (VPC, VPN, IAM, Auto Scaling, S3, EKS, EC2, ECS, EBS, Gateways, Security Groups, CloudWatch, Elastic Beanstalk, AWS CLI, SQS, SNS, Secrets Manager, AWS Organizations, SCP, RDS, Lambda, EventBridge), IaaS, PaaS, CloudFormation, CloudFront, Terraform, Serverless, JIRA, CodePipeline, code repository, CodeBuild, Linux, VMware, Shell, PostgreSQL, Python, Kubernetes, Bitbucket.

American Express GBT – Bothell, WA Feb 2021 – Feb 2022


AWS Cloud Migration/ Infrastructure Engineer
Responsibilities:
 Managing initiatives for migration and modernization in AWS cloud environment and giving
production and operational support until migration cutoff.
 Participate in planning, implementation, and growth of the infrastructure on Amazon Web
Services (AWS) Cloud.
 Work with the larger design team to develop a full Hybrid Cloud Solution.
 Automated infrastructure migration to cloud environment using CloudFormation.
 Deploying VPC resources from management account using CloudFormation Stack sets and Code
Pipeline.
 Created a Lambda script in Python for HA and scheduled start/stop of instances; set up triggers to AWS Lambda using CloudWatch scheduled events.
 Use Control Tower CfCT scripts to deploy SCPs into the management account and deploy CloudFormation stacks to AWS accounts/OUs.
 Maintaining the production infrastructure and providing critical support for debugging and maintenance. Managing access control to production environment accounts.
 Designed and deployed AWS solutions using EC2, S3, EFS, EKS, Secrets Manager, FSx, SNS, SQS, Elastic Load Balancer, Lambda, EventBridge, RDS, Certificate Manager, AWS CLI, Auto Scaling groups, etc.
 Experience with AWS security tools and services: AWS Security Model, IAM (Identity Access
Management), ACM (Amazon Certificate Manager), Security Groups, Network ACLs, Encryption
and Firewalls.
 Providing analysis reports and AWS recommendations for cost optimization and security alerts.
 Create roles and users in AWS identity and access management to grant or revoke permissions.
 Deploy CloudFormation stacks through the Serverless Framework to create and maintain infrastructure.
 Work with the PMs closely to prepare and deliver technical solutions for various sized migrations
and upgrades.
 Help application teams to migrate from on-prem to AWS. Work in triage calls with vendors and
app teams to troubleshoot and provision infrastructure as needed.
 Create and Manage IAM Roles and Policies.
 Set up and manage Service Control Policies (SCPs).
 Support Production network and participate in rotating on-call schedule.
 Develop and maintain AWS Landing Zone resources such as accounts, VPCs, IAM, etc.
 Work closely with Client and acquisition Cloud Security/Engineering staff to align security
policies, IaC blueprints, and security guardrails.
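The scheduled start/stop Lambda mentioned above might look like this sketch. The Schedule tag, its value, and the event's "action" field are illustrative assumptions; the tag-matching helper is pure Python so it can be tested without AWS access.

```python
def instances_matching(reservations, tag_key, tag_value):
    """Pick instance IDs whose tags match, from a
    describe_instances-shaped response (pure helper)."""
    ids = []
    for res in reservations:
        for inst in res.get("Instances", []):
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            if tags.get(tag_key) == tag_value:
                ids.append(inst["InstanceId"])
    return ids


def handler(event, context):
    """Start or stop tagged instances on an EventBridge/CloudWatch schedule."""
    import boto3  # provided by the Lambda runtime

    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances()["Reservations"]
    # "Schedule"/"office-hours" is an illustrative tagging convention.
    ids = instances_matching(reservations, "Schedule", "office-hours")
    if ids:
        if event.get("action") == "start":
            ec2.start_instances(InstanceIds=ids)
        else:
            ec2.stop_instances(InstanceIds=ids)
    return {"instances": ids}
```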
Environment: Ansible, Maven, Ant, Git, AWS (VPC, VPN, S3, EKS, ECS, EC2, EBS, IAM, Gateways, Security Groups, CloudWatch, Elastic Beanstalk, AWS CLI, SQS, SNS, Secrets Manager, AWS Organizations, SCP, RDS, Lambda, EventBridge), IaaS, PaaS, CloudFormation, Terraform, Serverless, JIRA, CodePipeline, code repository, CodeBuild, Linux, VMware, Shell, PostgreSQL, Python, Kubernetes.

T-Mobile – Bellevue, WA Nov 2018 – Feb 2021


Sr. AWS/DevOps Engineer
Responsibilities:
 Involved in Design/Architecture of AWS and hybrid cloud solutions.
 Set up and built AWS infrastructure using VPC, EC2, EKS, S3, IAM, EBS, Lambda, SNS, SQS, SCPs, Security Groups, Auto Scaling, Transfer Family for SFTP, Elastic Beanstalk, CloudFront, CloudWatch, Trusted Advisor, RDS, EventBridge, Cost Explorer, and the AWS CLI.

 Managed the Identity and Access Management (IAM) service in AWS to assign roles and policies to users, and used the IAM console to create custom users and groups.
 Communicate with customers regarding their components running on AWS VPC and the AWS EKS Kubernetes cluster.
 Taking care of production environment operational tasks to ensure continuous and immediate
operational support.
 Organizing and coordinating Product Releases, work closely with product development, QA, Support
across global locations to ensure successful releases.
 Build automated CI/CD pipeline with AWS Code Pipeline, Jenkins and AWS Code Deploy.
 Provide technical assistance to all phases of the Cloud Program, including Infrastructure as a Service
(IaaS), Platform as a Service (PaaS).
 Maintaining tagging compliance for all AWS resources; updating tags using the AWS CLI.
 Gather resource metrics using the AWS CLI (e.g., max/average CPU utilization), enable ENA on the latest generation of EC2 instances, change instance profiles/IAM roles, change and describe instance attributes, tag resources, and create AMI images.
 Responsible for DevOps tool upgrades and security patches.
 Implemented rapid-provisioning and management for Linux using Amazon EC2, Ansible, and custom
Bash scripts. Implement Life-cycle Policy for snapshots.
 Written Templates for AWS infrastructure as a code using Terraform to build staging and production
environments.
 Responsible for handling deployments of infrastructure changes through Terraform/CloudFormation in both production and non-production environments.
 Developed and deployed a scalable ML model deployment platform with Helm and Kubernetes that allows ML models to be scaled and deployed across hybrid cloud environments with ease.
 Ensured resilience and reliability by designing and implementing automated testing frameworks for ML models that include unit, integration, and performance tests.
 Led the way in the use of container orchestration technologies (like Kubernetes) for machine learning
applications, maximizing resource efficiency and facilitating the quick release of new ML models.
 Facilitated training sessions and workshops for data scientists and ML engineers on DevOps best practices and technologies, encouraging a culture of cooperation and continuous improvement.
 Implemented a platform-neutral deployment orchestration framework that leverages Helm charts and Kubernetes to handle the scaling and deployment of machine learning models in diverse contexts.
 Integrated GitLab CI/CD for automated testing.
 Deploy Amazon Web Services (AWS) resources using AWS Cloud Formation.
 Created alarms and notifications for EC2 instances using CloudWatch.
 Built Docker containerization with Kubernetes (EKS); collaborated with development support teams to set up a continuous delivery environment using Docker.
 Deployed applications containerized with Docker onto a Kubernetes cluster managed by Amazon Elastic Kubernetes Service (EKS).
 Ensuring regular Tag compliance and Patch compliance to the servers.
 Created S3 buckets, maintained and utilized S3 bucket policies, and used Glacier for storage and backup on AWS.
 Worked with Jenkins pipeline suite for supporting the implementation and integration of continuous
delivery (CD) pipelines into Jenkins.
 Write Python Scripts for automating the build and deployment process.
 Used Ansible and Ansible Tower as Configuration management tool, to automate repetitive tasks,
quickly deploys critical applications, and proactively manages change.
 Provisioned and Managed the configurations of multiple servers using Ansible.
 Enable SSH access to servers from the Jump server without key or password using Ansible and shell.
 Define Terraform modules such as Compute and Users to reuse in different environments.
 Configured GIT plugin to offer integration between GIT and Jenkins.

 Deploy built Artifacts to application server using Maven.
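The tag-compliance work mentioned above could be sketched as a small Python check over AWS-CLI/boto3-shaped tag lists. The required tag set is an illustrative policy, not the actual client standard:

```python
REQUIRED_TAGS = ("Name", "Owner", "CostCenter", "Environment")  # illustrative policy


def missing_tags(tags, required=REQUIRED_TAGS):
    """Return the required tag keys absent from a resource's tag list
    (tags in the [{"Key": ..., "Value": ...}] shape used by the AWS CLI/boto3)."""
    present = {t["Key"] for t in tags}
    return [key for key in required if key not in present]


def compliance_report(instances):
    """Map instance ID -> missing tag keys, for instances with any gap."""
    report = {}
    for inst in instances:
        gaps = missing_tags(inst.get("Tags", []))
        if gaps:
            report[inst["InstanceId"]] = gaps
    return report
```

In practice the instance list would come from `ec2.describe_instances()`; the helpers are kept pure so the policy logic is testable offline.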
Environment: Ansible, Maven, Ant, Git, Nexus, AWS (VPC, VPN, S3, EC2, EBS, EKS, IAM, Gateways, Security Groups, CloudWatch, Elastic Beanstalk, AWS CLI, SQS, SNS, Secrets Manager, RDS, Auto Scaling, Lambda, EventBridge), IaaS, PaaS, CloudFormation, Terraform, JIRA, Jenkins, Docker, RHEL, VMware, Shell, Bash scripting, SQL, AKS, GKE, Python, Production Support.

CareSource – Dayton, OH June 2018 – Nov 2018


Sr. DevOps/Cloud Engineer
Responsibilities:
 Worked on Build and Deployment of web applications in an Agile continuous integration environment
and automating the process.
 Responsible for DevOps tool upgrades and security patches.
 Worked on Python Code using Ansible Python API to Automate Cloud Deployment process and
provision AWS environments using Ansible Playbooks.
 Troubleshooting OpenShift router operation, analysing stats with different projects to determine the
bottleneck.
 Configured Elastic Load Balancers with EC2 Auto scaling groups.
 Performed Provisioning of IaaS and PaaS Virtual Machines and Web apps, Worker roles on AWS.
 Creating and Building Infrastructure on AWS Cloud Platform using Cloud Formation.
 Design and development of the new technical flow based on JAVA/J2EE and .NET technologies.
 Worked on Shell Scripts, Python Scripts for automating the build and deployment process.
 Setting up and building AWS infrastructure like VPC, EC2, S3, IAM, Security Group, Auto Scaling and
RDS in Cloud Formation using JSON templates.
 Building/Maintaining Docker container clusters managed by Kubernetes.
 Utilized Kubernetes and Docker for the runtime environment of the CI/CD system to build, test &
deploy.
 Developed microservices onboarding tools leveraging Python and Jenkins, allowing easy creation and maintenance of build jobs and Kubernetes deployments and services.
 Worked on CI/CD tool Jenkins for building and deploying the Java application.
 Implemented project builds framework in Jenkins using Maven build framework tool.
 Performed Unit Testing for java applications using Junit frameworks and configuring results as post
build action.
 Manage the artifacts generated by Maven in the Nexus repository.
 Worked on SonarQube to perform code analysis, code coverage and detecting bugs.
 Used JIRA for Issue tracking, Bug tracking, and Project Management by raising tickets.
 Configured Apache Tomcat and WebLogic integrations in Jenkins.
 Worked on Ansible to manage existing servers and automate the build/configuration of new servers
and created Ansible Playbooks to automate system operations.
 Creating alarms in the CloudWatch service to monitor server performance, CPU utilization, disk usage, etc.
 Deployed code on WebLogic and Tomcat servers for Production, QA, and Development
environments.
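The CloudWatch CPU alarms described above can be sketched with boto3. The alarm name, period, evaluation count, and threshold are illustrative assumptions; the parameter builder is pure so it can be checked without AWS credentials.

```python
def cpu_alarm_params(instance_id, threshold=80.0, sns_topic_arn=None):
    """Build keyword arguments for CloudWatch put_metric_alarm that alert
    when average CPU utilization stays above `threshold` for two periods."""
    params = {
        "AlarmName": f"high-cpu-{instance_id}",  # naming convention is illustrative
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,
        "EvaluationPeriods": 2,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }
    if sns_topic_arn:
        params["AlarmActions"] = [sns_topic_arn]
    return params


def create_cpu_alarm(instance_id, threshold=80.0, sns_topic_arn=None):
    """Apply the alarm (requires AWS credentials)."""
    import boto3

    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(**cpu_alarm_params(instance_id, threshold, sns_topic_arn))
```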
Environment: AWS (IAM, EC2, S3, CloudFormation, CloudWatch, VPC, RDS), IaaS, PaaS, Jenkins, TFS,
VSTS, Git, Chef, Ansible, Docker, Kubernetes, Shell, Junit, Tomcat, Nagios, Groovy, OpenShift, .NET, JIRA.

Cardinal Health – Dublin, OH Nov 2017 – May 2018


Sr. DevOps / Build & Release Engineer
Responsibilities:
 Built, managed, and continuously improved the build infrastructure for software development engineering teams, including implementation of build scripts, continuous integration (CI) infrastructure, and deployment.
 Developed and supported software release management processes and procedures. Worked with version control tools like Git and GitHub, and integrated the build process with Jenkins.
 Worked with Git and GitHub to manage source code.
 Work with development/testing, deployment, systems/infrastructure and project teams to ensure
continuous operation of build and test systems.
 Implementing Chef cookbooks for OS component configuration to keep AWS server templates minimal.
 Set up and maintained automated environments using Chef recipes and cookbooks for different applications; wrote cookbooks for installing Jenkins, HTTPD, WebLogic, JBoss, WebSphere, and the JDK.
 Converting production support scripts to Chef Recipes and AWS server provisioning using
Chef Recipes.
 Orchestration of application processes on different environments using Chef in cloud (AWS) for
deployment on multiple platforms.
 Integration of Automated Build with Deployment Pipeline and installed Chef Server and clients to pick
up the Build from Jenkins repository and deploy in target environments.
 Created Shell & Python scripts for various Systems Administration tasks to automate repeated
processes.
 Developed a continuous deployment (CD) pipeline using Jenkins, shell scripts.
 Troubleshoot problems arising from Build failures and Test failures.
 Deploy the code on web application servers like Apache Tomcat/WebSphere.
 Designed a process automation approach using Jenkins across all application environments, making sure it follows all the standard procedures of the application SDLC.
 Automated the continuous integration and deployments using Jenkins. Built end to end CI/CD
Pipelines in Jenkins to retrieve code, compile applications, perform tests and push the build artifacts.
 Created and maintained Ant build.xml and Maven pom.xml files for performing builds.
 Defined dependencies and plugins in Maven pom.xml for various activities and integrated Maven with
GIT to manage and deploy project related tags and servers and Splunk to capture and analyse data
from various layers Load Balancers, Web servers and application servers.
 Used JIRA for project management and issue tracking.
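The Shell and Python systems-administration automation mentioned above might include a disk-usage check along these lines; the monitored paths and the 90% threshold are illustrative assumptions:

```python
import shutil


def disk_usage_pct(path="/"):
    """Percent of the filesystem at `path` that is in use."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total


def over_threshold(paths, limit=90.0):
    """Return the paths whose usage exceeds `limit` percent,
    e.g. to feed an alerting or ticketing step."""
    return [p for p in paths if disk_usage_pct(p) > limit]
```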
Environment: AWS, Linux, Git, GitHub, Jenkins, Chef, Splunk, Maven, Shell scripting, Python, Apache Tomcat, WebSphere, Windows, Ubuntu, RHEL, CentOS, JIRA.

Aforeserve.com Ltd. – Gujarat, India June 2015 – August 2016


DevOps Engineer
Responsibilities:
 Provided 24x7 on call support.
 Troubleshot P1 tickets and escalated issues to the right teams for quick resolution.
 Worked with Jenkins, Bamboo for CI and CD.
 Performed Linux administration, patching, configuring and maintenance.
 Deployed code to QA, PT, training, security, prod-stage environments.
 Used Ansible as configuration management and automation tool.
 Migrated applications to containerized deployments using Docker and Kubernetes.
 Worked on automation scripting in Python, PowerShell to automate all deployment activities.
 Used Bamboo, Jenkins, TFS for continuous integration on project.
 Used TFS for continuous integration in azure migration project.
 Experienced using Docker on a daily basis.
 Collaborated with others to troubleshoot and resolve major production issues.
 Integrated and collaborated with others on all matters of system operation and development.
 Built J2EE code using build.xml and pom.xml.
 Worked with build pipelines and deployment strategy using Jenkins, Bamboo.
 Used Jenkins for build and deployment for migration application. Worked in Agile environment.
 Worked on Installation, configuration and upgrading of RedHat server software and related products.
 Performing daily system monitoring, verifying the integrity and availability of all hardware, server
resources, systems and key processes.
 Review system and application logs.
 Applying OS patches and upgrading on a regular basis.
 Configuring and administering LDAP, DNS, and Sendmail on Red Hat Linux.
 Creating, changing, and deleting user accounts as per request.
 Repairing and recovering from hardware or software failures.
 Ensuring that the network infrastructure is up and running.
Environment: Subversion, Git, Ansible, GitLab, ServiceNow, Microsoft Office tools, SQL, MySQL, Quality Center, Windows Server, Ruby, Java, J2EE, Ant, Maven, Linux, TFS, .NET, Bamboo, Jenkins, AWS, Apache Mesos, OpenShift, Docker, Kubernetes, Microsoft Visual Studio, IIS, PowerShell scripting, Red Hat Enterprise Linux 3/4, NFS, FTP, Apache.
