
Gurunath Siddam Rao - Sr. AWS Cloud Engineer
[email protected]
Location: Hartford, Connecticut, USA
Relocation: Yes
Visa: GC
Certifications:
Mobile No: +1 9802464107
LinkedIn: https://www.linkedin.com/in/gurunath-rao-9334a1248



Sr. DevOps Engineer/Cloud Consultant

Professional Summary:

Around 10 years of experience in the IT industry spanning infrastructure management, software management, configuration management, software integration, and release management.
Highly organized and hard-working IT professional with extensive experience in Cloud/DevOps/Build & Release roles, solving complex problems with creative solutions, supporting development and deployment of applications, and supporting operations across different environments and release streams.
Extensive knowledge of all phases of the software development life cycle (SDLC), Software Configuration Management (SCM) best practices in Agile, Scrum, and Waterfall methodologies, and Continuous Integration (CI) and Continuous Deployment (CD) practices.
Version control experience with Git and Subversion (SVN), including branching, merging, and automation across environments using SCM tools such as Git, SVN, Bitbucket, and TFS on Linux and Windows platforms.
Good understanding of Pivotal Cloud Foundry (PCF) architecture (Diego), PCF components and their functionalities; experienced in using the PCF CLI for deploying applications and other CF management activities.
Expertise in Cloud Infrastructure Automation which includes Amazon Web Services (AWS), Ansible, Maven, Docker, Jenkins, Harness.
Experience in server infrastructure development on the OpenShift Container Platform, AWS Cloud, Google Cloud, and Microsoft Azure.
Experience with Jenkins using the CloudBees Docker plugin to automate container deployment. Wrote Docker Compose files in YAML to manage the whole life cycle of multi-container applications.
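As an illustration of the kind of multi-container Compose file mentioned above (service names, ports, and images are hypothetical, not from an actual project):

```yaml
# Hypothetical two-service stack: a web app plus its database
version: "3.8"
services:
  web:
    build: .
    ports:
      - "8080:80"      # host:container
    depends_on:
      - db             # bring the database up first
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```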
Experience in VMware Tanzu products, including Tanzu Application Services (PCF) and RabbitMQ.
SME in Azure, DevOps, Terraform, Docker, and Kubernetes.
Configured and managed cloud-specific security services, such as AWS Security Groups, Azure Network Security Groups, VPC Firewall, to control inbound and outbound traffic.
SAP HANA Technical experience.
Strong understanding of Terraform best practices, including code organization, variable management, and configuration segregation, to create modular and maintainable infrastructure codebases.
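A minimal sketch of the module and variable segregation this refers to — module, variable, and path names here are illustrative assumptions, not from a real codebase:

```hcl
# modules/network/variables.tf (hypothetical reusable module)
variable "env" {
  type = string
}

variable "cidr_block" {
  type    = string
  default = "10.0.0.0/16"
}

# envs/prod/main.tf — environment-specific values stay outside the module
module "network" {
  source     = "../../modules/network"
  env        = "prod"
  cidr_block = "10.10.0.0/16"
}
```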
Expert in build automation and Continuous Integration using tools like ANT, Maven, Jenkins, Bamboo, and TeamCity, and in releasing internal projects using the JFrog Artifactory tool.
Developed CI/CD pipelines for automated Dev/Prod deployments in AWS by integrating with systems such as Jenkins, GitLab CI/CD, GitHub Actions, TeamCity, and Harness.
Performed code and guardrail-warning reviews, evaluated implementations, and provided feedback for tool improvements from a Pega, DevOps, and AWS standpoint.
Primary role is helping customers adopt Kubernetes: how to operate cloud-native infrastructures, how Kubernetes works, and how best to use it. This may include running POCs and pilots, crafting solutions, and integrating existing systems.
Worked on various Azure services like Compute (Web roles and Worker roles), Caching, Storage, SQL Azure, NoSQL, Network Services, Azure Active Directory, Scheduling, Auto scaling through ARM and PowerShell Automation deployments and open-source frameworks like Azure Databricks along with monitoring analytics services.
Worked on Ansible and Terraform modules for templated deployments across many different Azure capability groups.
Worked on the Deployment, Configuration, Monitoring and Maintenance of OpenShift Container Platform.
Strong experience in Continuous Integration/Continuous Delivery (CI/CD) using Jenkins Pipelines, IBM UrbanCode Deploy (UCD), SonarQube, Maven, and ANT as build tools for deployable artifacts; experience using Groovy scripts in Jenkins to automate most build-related tasks.
Automated the Installations of various Webservers, Application Servers and Database Servers using the Configuration management tools like Ansible, Chef and Puppet.
Proficient in developing and deploying .NET applications on the Azure platform, leveraging Azure services such as Azure App Service, Azure Functions, Azure Storage, and Azure SQL Database.
Wrote Chef cookbooks using several of its components, such as attributes, files, recipes, resources, and templates.
Wrote and tested Ansible playbooks to manage and configure nodes on Azure virtual machines, and used Ansible Tower to create projects, inventory files, and templates and to schedule jobs.
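A minimal sketch of a playbook of the kind described — the host group, package, and service names are illustrative assumptions:

```yaml
# Hypothetical playbook: install and enable a web server on an Azure VM group
- name: Configure web tier on Azure VMs
  hosts: azure_web
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```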
Experienced in several AWS services (EC2, VPC, S3, CloudWatch, Route 53, RDS, CloudFormation, ELB, IAM, Auto Scaling configurations) and in repository management tools Artifactory and Nexus.
Experienced in Configuring and deploying the code through Web Application servers Apache Tomcat, JBoss, WebLogic and WebSphere.
Experienced in scripting and implementing secret management with Key Vault and Storage Queues, Azure Extensions, Azure Web Apps, and Functions (PaaS), and contributed to Python cron-job automation.
Migrated Servers, Databases and Applications from Microsoft Azure to AWS and vice versa.
Integrated Node.js applications with cloud platforms like AWS and Azure, utilizing serverless computing services such as AWS Lambda and Azure Functions for scalable and cost-effective deployments.
Deployed and configured Atlassian Jira in both hosted and local instances for issue tracking, workflow collaboration, and tool-chain automation, with bug tracking through ServiceNow and Azure Boards.
Experienced in migration and deployment of applications with upgraded application and hardware versions; MSBuild, batch scripting, IIS, and Jenkins administration.
Implemented and managed Java application monitoring and logging solutions, such as New Relic or Log4j, to track application health, performance, and troubleshoot issues.
Integrated JavaScript build and package management tools like NPM into the CI/CD pipelines, automating JavaScript code compilation, minification, and bundling processes.
Used containerization technologies like Docker for building clusters and orchestrating container deployment.
Worked on several Docker components like Docker Engine, Hub, Machine, Compose and Docker Registry.
Created private cloud using Kubernetes (AKS, EKS) that supports development, test, and production environments.
Successfully collaborated with financial bank clients to understand their specific requirements and translate them into efficient and secure software solutions.
Good analytical, problem-solving, communication skills and can work either independently with little or no supervision or as a member of a team.
Demonstrated expertise in cloud security, specializing in Google Cloud Platform (GCP), with a focus on securing infrastructure, applications, and data.
Played a key role in the systematic and incremental modernization of the Department's Unemployment Insurance systems, transitioning from on-premise to a cloud-based infrastructure within GCP.
Designed comprehensive security solutions for the GCP environment, ensuring the protection of cloud-based infrastructure and adherence to industry standards and regulations.
Collaborated closely with the Technical Lead to define the future state of the technology platform, contributing to the creation of a roadmap for implementing the envisioned security measures.





Professional Experience:

Client: SMBC (Remote) January 2023 - Present
Role: Cloud DevOps Developer

Worked with designing, deploying, and managing Kubernetes clusters for container orchestration and management.
Utilized Kubernetes to automate application deployments, scaling, and management, ensuring high availability and fault tolerance.
Deploying PEGA PRPC software packages using the configured environment and verifying the system logs to ensure successful deployment.
Successfully designed, deployed, and managed production workloads on Google Cloud Platform (GCP) for high availability, scalability, and reliability.
Automated infrastructure and deployment with Terraform, Harness, Ansible, Cloudbees Jenkins.
Implemented robust and scalable architectures using GCP services such as Compute Engine, App Engine, Kubernetes Engine, and Cloud Functions.
Created Camunda workflows for multiple use cases, including instantiation, health checks, and configuration of PNF devices in a cloud environment.
Experience in setting up CI/CD pipeline integrating various tools with CloudBees Jenkins to build and run Terraform jobs to create infrastructure in AWS.
Experience with version control systems like Subversion and Git, and with source code management client tools like Git Bash, GitHub, and Git GUI.
Assist Development teams to migrate applications to cloud-native environments using Kubernetes.
Demonstrated end-to-end troubleshooting skills by leading complex support calls with vendors (JFrog, VMware Tanzu) to improve the overall performance of CI/CD tools.
Implemented data migration from MSSQL to Oracle using the Pega Data Mover utility tool.
Managed OpenShift masters and nodes with upgrades, decommissioning them from active participation by evacuating the nodes and upgrading them using the OLM (Operator Lifecycle Manager) operator.
Integrating on-premise SAP systems with HANA Cloud Portal.
Architected and designed the enterprise Adobe Experience Manager (AEM) platform at Confidential.
Configured and optimized GCP services, including load balancers, auto-scaling, and managed instance groups, to ensure optimal performance and cost efficiency.
Utilized GCP networking services, such as VPC, Cloud Load Balancing, and Cloud CDN, to design and implement secure and high-performance network architectures.
HANA Cloud Portal and Active Directory integration.
Added unit tests for every Camunda workflow.
Implemented continuous integration using Jenkins and configured various plugins GIT, Maven, SonarQube and Nexus.
Leveraged GCP's managed databases like Cloud SQL and Cloud Spanner for persistent storage needs, ensuring data integrity and availability for production workloads.
Implemented automated deployment pipelines using tools like Cloud Build, Jenkins, or GitLab CI/CD, ensuring smooth and efficient release processes.
Migrated various applications like Java, UI, .NET and Node Applications to Enterprise Cloudbees Jenkins.
Improve reliability through Modern cloud native operations practices.
Configured vSphere 7.0 with Tanzu. Enabled Kubernetes (k8s) data domain.
Installed and configured Splunk light and heavyweight forwarders and integrated them with the OpenShift-logging Fluentd secure forwarder and the Vault cluster Fluentd forwarder plugin (td-agent) to forward logs to on-prem Splunk.
Proven ability to lead an active, skilled DevOps AEM Administrator team in a Team Lead capacity supporting Adobe Experience Manager (AEM).
Successfully implemented high-availability solutions such as Pega's HA feature and blue-green deployments.
Helped to design, create, and maintain an end-to-end test Harness for Fugue (mostly Python & Bash).
Utilized GCP monitoring and logging services like Stackdriver Monitoring, Logging, and Trace for real-time visibility, troubleshooting, and performance optimization.
Implemented security best practices on GCP, including IAM (Identity and Access Management), VPC firewall rules, SSL certificates, and encryption at rest.
Branched and merged code lines in Git and resolved all conflicts raised during merges.
Ensured successful architecture and deployment of enterprise-grade PaaS solutions using Pivotal Cloud Foundry (PCF), as well as proper operation during initial application migration and new development.
Achieved Infrastructure as Code for deploying and updating production Tanzu Application Service (PCF) foundations through a Concourse CI server and pipelines.
Providing SAP Basis/S4 HANA support activities.
Deployed OpenShift Enterprise v3.4/3.6 on a RedHat 7 environment and integrated it with a private Docker registry.
Led the DevOps AEM Administrator Team to coordinate and implement all Adobe AEM updates.
Implemented disaster recovery and backup strategies using GCP services like Cloud Storage, Cloud Snapshot, and managed database backups.
Used the Camunda Modeler for workflow creation.
Collaborated with cross-functional teams, including developers, operations, and security teams, to ensure the successful deployment and operation of production workloads on GCP.
Implemented containerization using Docker and container orchestration with Kubernetes, leveraging their benefits for scalability and portability.
Configured and optimized Kubernetes deployments, services, and ingress controllers for efficient load balancing and routing of traffic.
Used virtualization and cloud platforms such as VMware Tanzu, Oracle, and AWS.
Launched the Pega container in the Fargate service to auto-scale/manage EC2 instances with the AWS Oracle RDS database service.
Participated in a CloudBees Jenkins POC and implemented CloudBees Jenkins in the organization in an admin role.
Implemented Kubernetes resources like ConfigMaps and Secrets to manage application configurations and sensitive data securely.
Implemented a tool to migrate Camunda BPMN processes from one version to another.
Configured the SuccessFactors adapter in SAP HANA Cloud Integration (HCI).
Utilized Kubernetes features like StatefulSets, DaemonSets, and Jobs for managing stateful applications, system daemons, and batch processing workloads.
Implemented monitoring and logging solutions for Kubernetes clusters using tools like Prometheus, Grafana, and the ELK Stack.
Developed CloudForms reports for show-back and charge-back on OpenShift projects and cloud resources.
Implemented automated scaling and self-healing capabilities using Kubernetes Horizontal Pod Autoscaling (HPA) and liveness/readiness probes.
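A sketch of an HPA manifest of the kind described here — the target Deployment name and utilization threshold are illustrative, not taken from an actual cluster:

```yaml
# Hypothetical HPA scaling a Deployment named "web" on CPU utilization
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```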
Experienced in creating shell scripts for canary and full deployment through Harness.
Collaborated with development teams to optimize application performance and resource utilization in Kubernetes environments.
Implemented Kubernetes RBAC (Role-Based Access Control) for secure access management and authorization within the cluster.
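A minimal sketch of the RBAC pattern mentioned — namespace, role, and user names are hypothetical:

```yaml
# Hypothetical Role granting read-only pod access in the "dev" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# Binding the Role to a (hypothetical) user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
  - kind: User
    name: dev-user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```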
Upgrading HANA DB using HLM tool.
Directly collaborated with Adobe on integrating AEM 5.6.1 with IBM WebSEAL for identity propagation and SSO user management for security authentication.
Integrated EKS clusters with CI/CD pipelines such as CloudBees Jenkins for containerized applications, and configured deployment pipelines to build, test, and deploy container images to EKS clusters.
Managed cluster resources, scaling capacity, and ensuring high availability of EKS clusters to meet application demands and maintain service uptime.
Leveraged Terraform templates to specify the desired state of GKE clusters, including networking, security, and compute resources.
Used Docker, Kubernetes, and OpenShift to manage microservices for continuous integration and continuous delivery.
Configured WAS/Tomcat Application server Java Virtual Machines for Pega PRPC 6.x and 7.x versions on NDM (Network deployment manager).
Converted existing Terraform modules that had version conflicts to utilize CloudFormation during Terraform deployments, enabling more control and filling missing capabilities.
Involved in troubleshooting to identify and resolve issues related to cluster deployments, networking, or container runtime environments.

Environment: GCP, Ansible, Terraform, Jenkins, CloudBees, Docker, Kubernetes, Shell, Harness, YAML, ELK, Apache Tomcat, GitHub, Prometheus, Grafana.

Client: LendingTree, Charlotte, NC November 2021 - December 2022
Role: Azure Cloud DevOps Consultant
Developed methodologies for cloud migration, implemented best practices according to customer requirements for going to Azure and Azure stack hub.
Set up a landing zone with Terraform automation (.tf files) to deploy multiple environments on Azure and Azure Stack Hub, configuring load balancers, availability sets, VMs, and NSGs.
Maintained Terraform state files with workspaces, defining the directory structure to provision services across .tf files and defining modules depending on the development needs.
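A small sketch of the workspace pattern described above — deriving per-environment names from the active workspace; the resource type is real (`azurerm_resource_group`) but the naming convention and location are illustrative:

```hcl
# Hypothetical: one configuration serving dev/staging/prod via workspaces
locals {
  env = terraform.workspace        # e.g. "dev", "staging", "prod"
}

resource "azurerm_resource_group" "main" {
  name     = "rg-app-${local.env}" # illustrative naming convention
  location = "eastus"
}
```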
Implemented disaster recovery strategies and conducted regular business continuity testing to minimize the impact of potential disruptions and ensure the availability of banking services.
Delivered technical infrastructure support services, addressing escalated issues from the Support Centre and other Technical Services groups, in a dynamic banking environment (Bank of America and M&T Bank in 2021).
Built a new RHEL image and worked with Packer, creating JSON template files and running packer build with provisioners for base package installs, subscriptions, secrets, and image details; maintained post-build activities packaged during the build for Ansible playbooks.
Implemented business continuity and disaster recovery by making applications available in a secondary Azure region with a failover mechanism.
Helped develop backup and recovery techniques for applications and databases on the virtualization platform; backed up, configured, and restored Azure virtual machines for Windows and Linux using Azure Backup.
Designed and deployed Azure Resource Manager templates and designed custom build steps using PowerShell.
Developed data ingestion pipelines utilizing Azure Data Factory to efficiently move data from various sources into Azure Data Lake.
Integrated Azure Stream Analytics to process and analyze real-time data streams directly into Azure Data Lake.
Implemented and configured Red Hat OpenShift, showcasing expertise in deploying and managing Kubernetes-based clusters on both on-premises servers and public cloud environments.
Scaled applications and enabled horizontal pod autoscaling based on CPU utilization, ensuring efficient resource allocation and automatic adjustment to demand fluctuations.
Set up and configured Azure Queue Storage for reliable, decoupled communication between applications through message queuing (MQ).
Integrated app with Azure data services, such as Azure Data Lake and Azure Blob Storage, to enable secure and scalable data movement in the cloud.
Created data pipelines and mappings in Informatica Intelligent Cloud Services (IICS) for seamless data extraction, transformation, and loading (ETL) processes.
Designed and developed cloud service projects and deployed them to Web Apps and Function Apps using Azure DevOps pipelines.
Implemented CI/CD pipelines using popular tools such as Jenkins, GitLab CI/CD, Azure DevOps, and Bitbucket to automate the build, test, and deployment processes.
Designed and configured multi-stage CI/CD pipelines for different environments (development, staging, production) to ensure seamless software delivery.
Created Python scripts to automate daily repetitive tasks, running them via Azure DevOps pipelines, fetching secret values through Key Vault identity management, and inserting records into Azure Storage tables.
Created solutions for metrics collection using Application Insights and monitoring in a Log Analytics workspace with an audit stream; this integration included Azure role-based access control across resources.
Leveraged Databricks' integration with Azure services, such as Azure Data Lake Storage and Azure Blob Storage, for seamless data access, storage, and security.
Managed and created storage accounts and Azure storage containers, uploading files to Blob Storage with the necessary properties and metadata, secured with Azure Active Directory (Azure AD) and Azure RBAC.
Automated Azure scalability and availability: built VM availability sets using the Azure portal to provide resiliency for IaaS-based solutions, and Virtual Machine Scale Sets (VMSS) using Azure Resource Manager (ARM) to manage network traffic.
Troubleshooting and debugging skills in resolving Terraform-related issues, including dependency conflicts, resource conflicts, or infrastructure drift.
Implemented CI/CD pipelines using Azure DevOps to automate the build, test, and deployment processes for .NET applications, ensuring faster and more reliable software delivery.
Implemented strong authentication and authorization mechanisms, leveraging technologies like multi-factor authentication (MFA) and role-based access controls (RBAC) to enforce least privilege access.
Supported on-prem migration to the cloud, configuring environments by writing Ansible playbooks and modules, creating modules for system resources such as packages or files, and orchestrating the infrastructure in Azure.
Implemented and enforced Git branching strategies such as GitFlow, GitHub Flow, or Feature Branching to facilitate parallel development, collaboration, and release management.
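A compressed sketch of a GitFlow-style feature-branch cycle of the kind enforced here — the branch names, commit messages, and throwaway repo are illustrative:

```shell
# Hypothetical feature-branch flow: develop -> feature -> merge back
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.name demo
git config user.email demo@example.com
git commit -q --allow-empty -m "initial commit"
git checkout -q -b develop                  # long-lived integration branch
git checkout -q -b feature/login develop    # short-lived feature branch
git commit -q --allow-empty -m "implement login"
git checkout -q develop
git merge -q --no-ff -m "merge feature/login" feature/login  # keep a merge commit
git branch -q -d feature/login              # clean up the merged branch
```

The `--no-ff` merge preserves a merge commit per feature, which keeps release history auditable.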
Utilized Azure SDK for Python to interact with various Azure services programmatically, including storage (Azure Blob Storage, Azure Data Lake Storage), databases (Azure SQL Database, Azure Cosmos DB), and messaging (Azure Service Bus, Azure Event Hubs).
Implemented serverless computing solutions using Azure Functions and Python, enabling event-driven and scalable application architectures.
Implemented event-driven architectures using Azure Event Grid, Azure Functions, and Python, enabling seamless communication and event processing across different services and components.
Added Jenkins artifacts to the release pipeline with the Azure App Service deploy task in a YAML file and enabled the continuous deployment (CD) trigger so a release runs every time the source artifact is updated.
Defined the infrastructure and applied it to application playbooks, letting Azure automatically scale each environment as needed and automating deployments of packages and code changes using Ansible procedures.
Integrated UrbanCode Deploy (UCD) with version control systems (e.g., Git) and CI/CD tools (e.g., Jenkins, Azure DevOps) to establish end-to-end automation for application deployment.
Implemented release management processes in UCD, including versioning, tagging, and tracking of application releases for auditing and rollback purposes.
Leveraged Ansible Tower as a CI engine, integrating Ansible playbooks with job templates to provision, build, and clean up on OpenStack and Azure platforms.
Installed and created a role and playbook for setting up a webhook listener on Ansible Tower, authenticating with a personal access token.
Customized an Azure extension with an Azure Pipelines decorator and published it to the organization's Azure Marketplace for use in ADO pipelines; the extension includes pre- and post-tasks and extension package JSON files.
Installed Azure DevOps extensions and checked script flows in the Log Analytics workspace for build-monitoring metadata using Kusto queries; set up real-time pipeline monitoring of ADO repos, boards, builds, and pipeline environment variables.
Integrated Power BI with the Azure monitoring service Log Analytics using the Data Collector API to allow data export and expose event metrics and logs.
Created monitoring solutions that read Application Insights records using Kusto Query Language, consuming Application Insights through the Azure portal and built-in visuals and querying the underlying data directly to build custom visualizations in Azure Monitor dashboards and workbooks.
Used the Azure Monitor managed service for Prometheus as a data source for Grafana via managed system identity, allowing metrics to be collected and analysed at scale using the Prometheus monitoring solution with a Grafana dashboard (Azure Managed Grafana).
Created Python scripts to fetch secret values from Key Vault for use in MongoDB tasks, and to read records from MongoDB, PGSQL, MySQL, and Azure SQL for audit and ITS compliance.
Provided an architecture assessment covering Azure DevOps, Kubernetes cluster and networking configuration, identity and access management, vulnerability management, logging, data threat protection, and high availability.
Implemented macOS-specific CI/CD pipelines using Azure Pipelines, automating build, test, and deployment processes for macOS applications.
Deployed and used Helm charts as the container package manager, with orchestration via Cloudify and ECS; enhanced and extended required functionality with sidecar containers, creating multi-container pods.
Created alerts and alert-processing rules for application alert monitoring, with action groups triggering Azure Functions and Logic Apps.
Worked on metric alerts that evaluate resource metrics at regular intervals; converted Azure Monitor logs to metrics, or used Application Insights metrics, applying multiple conditions and dynamic thresholds.
Involved in network topologies and server protocols: SSH, HTTPS, DNS, VPC, subnets, LAN, WAN, TCP/IP, and firewalls.
Resolved security vulnerabilities via Systems Manager, enabling server-side encryption on buckets and whitelisting IPs as required.
Involved in writing Groovy scripts for building CI/CD pipelines with a Jenkinsfile, including integration of Jenkins with AWS and Azure DevOps services for end-to-end automation of build and deployment.
Worked with Docker: checking container status, checking images, launching Docker images from zip files, executing commands within containers, inspecting container environment variables, and setting up the Kubernetes Dashboard for managing the Kubernetes environment.
Created an ADO pipeline to create AKS (Azure Kubernetes Service) clusters for dev, QA, test, and prod environments with Terraform (TF) manifests, ACR (Azure Container Registry), and authentication via an Azure RM service connection.
Provisioned Kubernetes manifests with a StorageClass and persistent volume, and deployed and connected a MySQL DB through a ClusterIP service.
Monitored the health of nodes in the cluster, with the AKS service automatically initiating auto-repair when a problem occurs.
Set up daily and weekly policies for prod and non-prod environments depending on the application RPO (Recovery Point Objective) and RTO (Recovery Time Objective), and configured automated scheduled backups for VMs and diagnostics settings to monitor logs and Azure Backup reports, alerting on backup failures.

Environment: Azure, Azure Stack Hub, AWS, Ansible, Azure RM, Terraform, Jenkins, Docker, Kubernetes, OpenStack, Maven, Ansible Tower, Shell, PowerShell, Python, Load Balancers, Log Analytics workspace, Application Insights, Cosmos DB, MongoDB, Power BI, Azure ADO, Azure extension, AKS, Prometheus, Azure Grafana.


Client: AgFirst, Columbia, SC March 2019 - October 2021
Role: Sr. DevOps Infrastructure/Cloud Engineer
Experienced in configuring monitoring and alerting tools such as Prometheus and Grafana according to requirements; deployed multiple dashboards for individual applications.
Set up alerts and kube-state-metrics for pods in each namespace, exposing the status and restart count of each pod/container and monitoring logging events.
Implemented static code analysis and code quality checks (e.g., SonarQube) as part of the CI pipeline to enforce coding standards and identify potential issues early in the development process.
Orchestrated containerized applications using containerization platforms like Docker, Kubernetes, and AWS ECS, and integrated them into CI/CD workflows.
Set up and configured Amazon EKS clusters using IaC (Terraform), with the ability to define EKS clusters for repeatable and automated provisioning.
Developed and maintained Helm charts to package, version, and deploy Kubernetes applications, following best practices and adhering to industry standards
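A small sketch of the kind of chart values override this involves — the registry path, tag, and resource figures are hypothetical:

```yaml
# Hypothetical values.yaml override for a web chart
replicaCount: 3
image:
  repository: registry.example.com/web   # illustrative registry path
  tag: "1.4.2"
resources:
  requests:
    cpu: 100m
    memory: 128Mi
```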
Leveraged Terraform templates to specify the desired state of EKS clusters, including networking, security, and compute resources.
Utilized branching strategies to support multiple environments (e.g., development, staging, production), allowing for isolated testing and deployment of changes.
Monitored and tracked UCD deployments using built-in dashboards and reporting tools, ensuring visibility into deployment status, success rates, and performance metrics.
Collaborated with release management teams to plan and coordinate feature releases, using branching strategies to manage feature toggles and release timelines.
Created CloudFormation templates and worked on Amazon Cloud: setting up EC2 instances, creating security groups, creating a VPC with specific subnets, taking snapshots of EC2 instances, creating EBS volumes to back up data and moving them to a different VPC subnet to restore databases, and restricting access to EC2 instances.
Created a Jenkins pipeline to build and deploy AWS infrastructure and to deploy container-based applications to Amazon EKS with ECR (Amazon Elastic Container Registry) and Docker.
Integrated UCD with Jenkins for continuous delivery of all components, configuring applications for each environment to reduce downtime and deployment time.
Configured and centralized the security setup and template creation for running builds through UCD, creating a role-based organisation structure.
Automated cloud deployments using AWS CloudFormation templates, version-controlled the infrastructure, and managed server configurations using the configuration management tools Ansible and Terraform and shell scripts.
Designed highly available, cost-effective, and fault-tolerant systems using multiple AWS EC2 instances, Auto Scaling, AWS Elastic Load Balancing, and Amazon Machine Images (AMIs), with DNS mapping of subdomains using Route 53.
Wrote Python and bash scripts for automation of network and configuration file features.
Managed different microservices on cloud-native application with modernized service networking layer Istio service mesh that provides transparent and easily automate application network functions.
Experienced in SonarQube with Cobertura and Fortify to scan code for test analysis.
Set up and built AWS infrastructure resources (VPC, EC2, S3, IAM, EBS, security groups, Auto Scaling, and RDS) in CloudFormation JSON templates.
Worked on multiple things like setting up Kubernetes dashboards with AAF (Application Authorization Framework) and using kube config.
Performed custom health checks and automatic node deployments with the Amazon Kubernetes service (EKS).
Deployed and configured an ELK stack; set it up to collect, search, and analyse log files from across servers.
Worked on Apache Kafka, a distributed streaming platform, for building event-driven architectures and scalable messaging systems in Java applications.
Built servers using AWS: importing volumes, launching EC2 and RDS instances, and creating security groups, auto-scaling, and load balancers (ELBs) in the defined virtual private cloud (VPC).
Created a Jenkins job that generates Kubernetes namespaces and maps application users to each namespace.
Created a custom monitoring dashboard for the Kubernetes cluster to handle extraction and transformation of all loading assets for k8s, with querying and visualization in Grafana.
Ensured applications do not consume more system resources than allocated for their namespace by providing resource quotas, automated through Jenkins.
Involved in setting up builds using Chef as a configuration management tool and managed the configurations of more than 40 servers and Managed Nodes, Run-Lists, roles, environments, cookbooks, recipes in Chef.
Created Chef Knife, Recipes and Cookbooks to maintain chef servers, its roles and Installed Chef-Server Enterprise On-premises/WorkStation/Bootstrapped the Nodes using Knife.
Monitored server health and maintainability with alerting, using tools like Nagios and Splunk.
Deployed AWS Lambda code from S3 buckets, implementing a serverless architecture using API Gateway and Lambda; created and configured Lambda functions to receive events from an Amazon S3 bucket.
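A minimal sketch of an S3-triggered Lambda handler of the kind described — the field access follows the standard S3 event shape, but the handler itself and its return payload are illustrative, not from an actual deployment:

```python
import json

def handler(event, context):
    """Collect bucket/key pairs from an S3 event and return them as JSON."""
    records = []
    for rec in event.get("Records", []):
        s3 = rec.get("s3", {})
        records.append({
            "bucket": s3.get("bucket", {}).get("name"),
            "key": s3.get("object", {}).get("key"),
        })
    # API Gateway proxy integrations expect a statusCode/body response
    return {"statusCode": 200, "body": json.dumps(records)}
```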
Implemented rapid-provisioning and life-cycle management for Ubuntu Linux using Chef, and custom Ruby/Bash scripts.
Set up an ELK stack on an AWS EC2 instance to collect log and event data, transform it with Logstash, and explore and visualize it in Kibana.
Developed Ansible playbooks, inventories, and custom playbooks in YAML, and encrypted the data using Ansible Vault and maintained role-based access control by using Ansible Tower and implemented orchestration using Ansible to run tasks in a sequence which can work on different servers.
Successful capability for increasing namespace configuration (e.g., vCPU, memory) in runtime and changed k8s configuration (e.g., add resources such as worker/master node) in runtime.
Automated the python script for listing, deleting of VM s and associated DNS, Network resources
Worked on setting up the project structure, dependencies during the build scan through Gradle and generate the output of the build to help in troubleshooting.
Developed Terraform key features such as Infrastructure as code, Implementation Resource, Change Automation and Used Auto scaling for launching Cloud instances while deploying microservices.
Worked with Ansible Tower to create projects, inventory files, templates, and scheduling jobs. Wrote Ansible playbooks with python SSH as the Wrapper to Manage Configurations of Azure Nodes and Test playbooks on Azure Virtual machines.
Integrated GKE clusters with other GCP services such as Cloud Load Balancing, Cloud DNS, and Cloud IAM for seamless application deployment and management.
Experienced in virtualization technologies such as VMware and Vagrant, and in containerizing applications with Docker and Kubernetes.
Worked with Ansible Tower to manage web applications, config files, databases, commands, user mount points, and packages, and to stream playbook runs in real time, showing the status of every running job without page reloads.
Created automation and deployment templates for relational and NoSQL databases including MSSQL, MySQL, Cassandra, and MongoDB.
Configured, automated, and maintained CI/CD build and deployment tools (Jenkins, Bitbucket, ANT, Maven, Build Forge, Docker registry/daemon, Nexus, and JIRA) for multiple environments (Local/POC/NON-PROD/PROD) with a high degree of standardization for both infrastructure and application-stack automation on the AWS cloud platform.
Worked with container-based deployments using Docker, Docker images, Docker Hub, Docker registries, and Kubernetes.
Implemented a production-ready, load-balanced, highly available, fault-tolerant, auto-scaling Kubernetes infrastructure and microservice container orchestration.
Environment: AWS, Ansible, Terraform, Jenkins, Docker, Kubernetes, OpenStack, Maven, Ansible Tower, Shell, PowerShell, Mesos, Python, WebLogic Server 11g, Load Balancers, ELK, Tomcat, GitHub, Nagios, Splunk, Prometheus, Grafana, Cloudify.
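The Jenkins-driven namespace and resource-quota automation described above can be sketched roughly as follows. This is a minimal illustration, not the actual job: the team name, quota values, and the `kubectl apply` step are hypothetical examples.

```python
import json

def namespace_manifests(team, cpu_limit, mem_limit):
    """Build Kubernetes Namespace and ResourceQuota manifests for a team.

    The quota caps the total CPU/memory the team's namespace may consume,
    mirroring the Jenkins-automated quota enforcement described above.
    All names and limits here are illustrative.
    """
    namespace = {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {"name": team},
    }
    quota = {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": f"{team}-quota", "namespace": team},
        "spec": {"hard": {"limits.cpu": cpu_limit, "limits.memory": mem_limit}},
    }
    return namespace, quota

if __name__ == "__main__":
    ns, quota = namespace_manifests("payments", "4", "8Gi")
    # A Jenkins job would serialize these and pipe them to `kubectl apply -f -`.
    print(json.dumps([ns, quota], indent=2))
```

In practice a Jenkins pipeline stage would render one such manifest pair per application team and apply it, so quota changes stay version-controlled rather than hand-edited in the cluster.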

Client: Gilead Sciences, CA March 2018 -Feb 2019
Role: Cloud Engineer

Designed infrastructure for various applications before migrating them to the Azure cloud, targeting flexible, cost-effective, reliable, scalable, high-performance, and secure applications.
Worked on various Azure services such as Compute (Web Roles, Worker Roles), Azure Websites, Caching, SQL Azure, NoSQL, Storage, Network services, Azure Active Directory, API Management, Scheduling, Auto Scaling, and PowerShell Automation.
Configured and deployed Azure Automation scripts for applications on the Azure stack, including Compute, Blob storage, Azure Data Lake, Azure Data Factory (ADF), Azure SQL, Cloud Services, and ARM, with utilities focused on automation.
Worked on Ansible and Ansible Tower to automate and deploy applications by managing changes, and wrote playbooks to manage web applications; experienced in installing and configuring the Ansible management node to deploy configuration to end-user nodes.
Created Ansible cloud modules for interacting with Azure services, providing tools to easily create and orchestrate infrastructure on Azure, and automated cloud-native applications in Azure using microservices such as Azure Functions and Kubernetes on Azure.
Used Azure Kubernetes Service (AKS) to deploy managed Kubernetes clusters in Azure; created AKS clusters in the Azure portal and with the Azure CLI, and used template-driven deployment options such as Resource Manager templates and Terraform.
Used Ansible servers and workstations to manage and configure nodes, and contributed to developing DevOps practices for infrastructure-as-code creation (e.g., Ansible, Python, Bash, Go).
Migrated, designed, configured, and deployed Microsoft Azure for a multitude of applications on the Azure stack (including Compute, Web & Mobile, Resource Groups, Azure SQL, Cloud Services, and ARM), focusing on high availability, fault tolerance, and auto-scaling.
Wrote reusable Terraform modules for Azure resource management as infrastructure as code.
Used Ansible to manage web applications, environment configuration files, users, mount points, and packages.
In-depth knowledge of writing Ansible playbooks, the entry point for Ansible provisioning, where automation is defined as tasks in YAML format; also ran Ansible scripts to provision.
Worked on existing application logic and functionality while recreating the Azure Data Lake, Data Factory, SQL Database, and Data Warehouse environment.
Designed and implemented migration strategies on Azure SQL, Azure Data Lake Storage (ADLS), and Azure Data Factory (ADF) with data transformation as part of the cloud data integration strategy.
Worked on Kafka as a messaging system and Spark for processing large data sets.
Used Docker Swarm to deploy Spring Boot applications; evaluated Kubernetes and Rancher for Docker container orchestration.
Experienced in using Azure Kubernetes Service to deploy managed Kubernetes clusters in Azure and creating AKS clusters in the Azure portal.
Integrated Bamboo with artifact repositories such as Artifactory and Nexus to store and manage build artifacts, ensuring artifact traceability and version control.
Scripted, debugged, and automated PowerShell scripts to reduce manual administration tasks and cloud deployments.
Designed uDeploy processes that deploy multiple applications using WAS, JBoss, and dm-Server containers across both virtual and bare-metal environments.
Wrote Python scripts to push data from DynamoDB to a MySQL database.
Created and maintained Python deployment scripts for the WebSphere application server.
Responsible for building and deploying Java applications to different environments such as QA, UAT, and Production.

Environment: Azure, Microservices, Rancher, Azure Data Factory, Azure Databricks, HDInsight, Kubernetes, Glacier, DynamoDB, MongoDB, TeamCity, Groovy, Shell, PowerShell scripts, Mesos, Ansible, Ansible Tower, Docker, Chef, Terraform, Maven, Jenkins, Git, SRE, Python, Apache Tomcat 6.x/7.x, RHEL, UNIX/Linux Environment.
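A DynamoDB-to-MySQL push like the one mentioned above typically starts by flattening DynamoDB's typed attribute format into plain rows. This sketch assumes the low-level typed item shape (e.g. `{"id": {"S": "u-1"}}`); the table and column names are hypothetical, and a real script would execute the statement with a MySQL driver such as PyMySQL.

```python
def flatten_item(item):
    """Convert a DynamoDB-typed item, e.g. {"id": {"S": "u-1"}}, into a
    plain dict suitable for a parameterized MySQL INSERT.
    Only the common S/N/BOOL type descriptors are handled here."""
    converters = {
        "S": str,
        "N": lambda v: int(v) if v.isdigit() else float(v),
        "BOOL": bool,
    }
    flat = {}
    for key, typed in item.items():
        (dtype, value), = typed.items()  # each attribute has exactly one type tag
        flat[key] = converters.get(dtype, lambda v: v)(value)
    return flat

def insert_sql(table, row):
    """Build a parameterized INSERT statement plus its value list,
    ready to hand to a MySQL cursor's execute()."""
    cols = ", ".join(row)
    placeholders = ", ".join(["%s"] * len(row))
    return f"INSERT INTO {table} ({cols}) VALUES ({placeholders})", list(row.values())
```

Keeping the SQL parameterized (placeholders rather than string-formatted values) avoids injection issues and lets the driver handle type quoting.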

Client: Ingram Micro- Irvine, CA Jan 2017 - Feb 2018
Role: DevOps/Automation Engineer

Built and deployed Java source code to application servers in an Agile continuous integration environment.
Worked on a DevOps collaboration team handling internal build automation and configuration management in Linux/Unix and Windows environments.
As a DevOps engineer, used a series of tools (Subversion, Maven, Jenkins, Chef, Jira) and was involved in day-to-day build and release cycles.
Managed source control systems Git and SVN; modified build-related scripts developed in ANT (build.xml).
Developed build and deployment scripts using ANT/Maven in Jenkins to promote builds from one environment to another.
Installed and supported various applications and databases including Oracle 10g, MySQL, WebLogic 10, JBoss 4.2.x, and Tomcat.
Implemented Azure Application Insights to store user activities and error logging.
Deployed archives such as WAR files to Tomcat application servers; built and administered continuous delivery pipelines using Git, Vagrant, Jenkins, and Groovy DSL.
Thorough knowledge of Linux internals and utilities (kernel, memory, swap, CPU).
Worked on Azure infrastructure management (Azure Web Roles, Worker Roles, SQL Azure, Azure Storage, Azure AD licenses, Office 365).
Worked on private cloud and hybrid cloud configurations, patterns, and practices in Windows Azure and SQL Azure, and on Azure web and database deployments.
Created Puppet manifests, profiles, and role modules to automate system operations; developed and managed Puppet manifests for automated deployment to various servers.
Constructed Puppet modules for continuous deployment and worked on Jenkins for continuous integration.
Wrote Ansible playbooks with Python SSH as the wrapper to manage configurations of OpenStack nodes and tested playbooks on AWS instances using Python.
Worked on single-node and multi-node OpenStack cloud platforms and virtualization; was one of the top contributors to a service that is part of the OpenStack cloud software.
Installed, configured, and automated build jobs in Jenkins for continuous integration using various plugins in AWS pipelining.
Developed and maintained build pipelines and scripts using tools like Grunt to automate HTML minification, concatenation, and other optimization processes.
Performed continuous delivery in a microservices infrastructure with Amazon Virtuix, Docker, and Kubernetes.
Created Jenkins workflows using Groovy scripts to automate the entire build and deployment process.
Performed multiple, consistent deployments to JBoss using uDeploy; built and maintained Docker infrastructure for service-oriented architecture (SOA) applications.
Worked with continuous integration tools like Jenkins to build and test applications, and with issue-tracking tools such as iTrack, JIRA, and Confluence.
Configured AWS Identity and Access Management (IAM) groups and users for improved login authentication.
Launched and configured Amazon EC2 cloud servers using AMIs (Linux/Ubuntu) and configured the servers for specific applications.
Worked on AWS design and followed information-security compliance guidelines.
Managed monitoring using Nagios and updated parameters with active and passive checks.

Environment: Linux (Red Hat, Solaris, Ubuntu), AWS, Azure, Windows, Puppet, PuppetDB, Chef, Ansible, Docker, WebLogic, JBoss, Groovy, Oracle, MySQL, Ant, Maven, CVS, Git, SVN, Jenkins, iTrack, Jira, Nagios.
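The Ansible work above revolves around inventories that map groups to managed hosts. As a minimal sketch of what such tooling consumes, this parser handles a simple INI-style inventory; the group and host names are hypothetical, and real inventories support many more features (children groups, variables, ranges) that this ignores.

```python
def parse_inventory(text):
    """Parse a minimal INI-style Ansible inventory into {group: [hosts]}.

    Only plain [group] section headers and one host per line are handled;
    trailing host variables (e.g. ansible_user=...) are stripped.
    """
    groups, current = {}, "ungrouped"
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1]
            groups.setdefault(current, [])
        else:
            groups.setdefault(current, []).append(line.split()[0])
    return groups
```

A wrapper script (e.g. the Python SSH wrapper mentioned above) could iterate over `parse_inventory(...)["web"]` to run a task against every host in a group.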


Client: Softboot Technologies, Hyderabad, India January 2015 - November 2016
Role: Linux System Administrator

Managed a diverse server infrastructure with Linux servers, ensuring high availability and reliability.
Performed system installation, configuration, and maintenance for various Linux distributions, including CentOS, Ubuntu, and Red Hat Enterprise Linux.
Implemented automation scripts using Bash and Python for routine tasks, improving efficiency, and reducing manual errors.
Monitored system performance using tools such as Nagios and Zabbix, proactively addressing issues to prevent downtime.
Conducted regular security audits, applying patches and implementing security best practices to protect against vulnerabilities and threats.
Set up and managed user accounts and access controls, enforcing security policies and ensuring data integrity.
Provided 24/7 on-call support for critical system issues and emergencies, resolving incidents promptly.
Collaborated with the development team to deploy and maintain web applications, including Apache, Nginx, and Tomcat.
Managed network services like DNS, DHCP, and LDAP, optimizing their performance and reliability.
Implemented and maintained backup and disaster recovery solutions, ensuring data integrity and business continuity.
Assisted in capacity planning and hardware upgrades, optimizing resource allocation, and reducing costs.
Documented system configurations, procedures, and troubleshooting guides for knowledge sharing within the team.
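Routine-task automation of the kind described above often wraps standard utilities in small scripts. As one hedged example, this checks `df -P` output for filesystems nearing capacity; the threshold and sample mounts are illustrative, and a real script would feed alerts into Nagios or Zabbix rather than just return them.

```python
def full_filesystems(df_output, threshold=90):
    """Return (mount_point, use_percent) pairs at or above the threshold,
    given the output of `df -P` (POSIX format: one filesystem per line).

    Column order per POSIX df: Filesystem, 1024-blocks, Used, Available,
    Capacity, Mounted on.
    """
    alerts = []
    for line in df_output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        use_pct = int(fields[4].rstrip("%"))
        if use_pct >= threshold:
            alerts.append((fields[5], use_pct))
    return alerts
```

Run from cron, such a check turns a manual `df` inspection into a repeatable alerting step, which is the efficiency gain the Bash/Python automation above aims at.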
Operating Systems: Linux (CentOS, Ubuntu, Red Hat)
Scripting: Bash, Python
Monitoring Tools: Nagios, Zabbix
Web Servers: Apache, Nginx
Database Management: MySQL, PostgreSQL
Virtualization: VMware, KVM
Networking: DNS, DHCP, LDAP
Security: Firewall management, Intrusion detection/prevention
Automation: Ansible, Puppet
Backup and Recovery: Bacula, rsync



EDUCATION: Bachelor of Technology in Computer Science (JNTU), 2015.