Rohan
Sr. DevOps Engineer
[email protected]
614 452 1355 / 609 286 9441
Columbus, OH, USA
Relocation: Yes | Visa: H1B
http://www.linkedin.com/in/rohanr-453754258
________________________________________
Professional Summary:
Over 9 years of IT industry experience with various DevOps tools, covering Continuous Integration, Continuous Delivery, Continuous Deployment, cloud migration, containerization techniques, and other development operations.
Expertise in infrastructure development on the Amazon Web Services (AWS) cloud platform, including EC2, S3, EBS, EFS, Elastic Beanstalk, Route 53, VPC, CloudFront, DynamoDB, Redshift, RDS, Key Management Service (KMS), Identity & Access Management (IAM), Elastic Container Service (ECS), Elastic Load Balancing, CloudFormation, ElastiCache, SNS, and SQS, focusing on high availability, fault tolerance, and auto scaling.
Expert in implementing Azure cloud services, including ARM templates, Azure Virtual Networks, Virtual Machines, Cloud Services, Resource Groups, ExpressRoute, Traffic Manager, VPN, Load Balancing, Application Gateways, and Auto Scaling.
Experience in designing Azure virtual machines (VMs) and VM architecture for IaaS and PaaS, including availability sets, fault domains, and update domains in Azure.
Proficient in creating S3 buckets, managing their policies, and using S3 and Glacier for storage, backup, and archival in AWS.
Knowledge of partitioning Kafka messages and setting up replication factors in a Kafka cluster.
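The key-based partitioning mentioned above can be sketched in a few lines of Python. This is a minimal illustration only: the key and partition count are hypothetical, and real Kafka clients hash keys with murmur2 rather than the crc32 used here as a stand-in.

```python
import zlib

def partition_for_key(key: bytes, num_partitions: int) -> int:
    """Map a message key to a partition deterministically.
    Real Kafka clients use murmur2; crc32 stands in here for illustration."""
    return zlib.crc32(key) % num_partitions

# Messages with the same key always land on the same partition of a
# 6-partition topic, which is what preserves per-key ordering.
p1 = partition_for_key(b"order-1001", 6)
p2 = partition_for_key(b"order-1001", 6)
```

Because the mapping is a pure function of the key, consumers of a given partition see every message for that key in order.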
Delivered significant business value by stabilizing the existing GM CQ 5.4 environment and led the effort to upgrade to AEM 6.2.
Engineered efficient distributed caching solutions that improve performance (Pivotal GemFire, Redis, Hazelcast).
Worked on Big Data integration and analytics based on Hadoop, Solr, Spark, Kafka, Storm, and webMethods.
Experience designing Terraform configurations and deploying them via cloud deployment manager to spin up resources such as cloud virtual networks and Compute Engine instances in public and private subnets, along with autoscalers, on Google Cloud Platform.
Utilized AWS Elastic Beanstalk for deploying and scaling web applications created with Java, PHP, Node.js, Python, Ruby, and Docker across different environments on servers such as Apache.
Strong knowledge of, and hands-on work with, several Azure, Google Cloud Platform, and OpenStack IaaS, PaaS, and SaaS tools.
Created a virtual network on Windows Azure to connect all the servers and designed ARM templates for the Azure platform. Utilized Azure services such as Compute, Blob storage, Azure Data Lake, Azure Data Factory (ADF), Azure SQL, Cloud Services, and ARM utilities, focusing on automation.
Experience configuring continuous integration (CI) from source control, setting up build definitions within Azure DevOps (VSTS), and configuring continuous delivery to automate the deployment of web applications and WebJobs to Azure Web Apps.
Hands-on experience creating DevOps strategies and implementing continuous integration of code with Jenkins from source code repositories such as Git, SVN, and IBM ClearCase.
Administered and implemented the CI tools Hudson/Jenkins for automated builds with the help of build tools such as Ant, Maven, and Gradle.
Experience integrating unit tests and code quality analysis tools such as JUnit.
Experience using Nexus and JFrog Artifactory repository managers for Maven and Ant builds.
Production experience in large environments using configuration management tools Chef, Puppet and Ansible to achieve continuous integration/continuous delivery of the product.
Set up Chef Server and workstations to handle and configure nodes; wrote recipes in Ruby.
Well versed with Installing and configuring an automated tool Puppet that includes the installation and configuration of the Puppet master, agent nodes and an admin control workstation.
Wrote Ansible playbooks, with Python over SSH as the wrapper, to manage the configuration of AWS nodes, and tested the playbooks on AWS instances using Python. Ran Ansible scripts to provision dev servers.
Created microservices using REST with Docker and Kubernetes; utilized Mesos, Kubernetes, and Docker as the runtime environment for the CI/CD system to build, test, and deploy.
Worked with Docker Trusted Registry as a repository for our Docker images and worked with Docker container networks.
Hands-on experience with monitoring tools like Prometheus, Dynatrace, and Datadog; also worked with Apache Kafka and ZooKeeper.
Expertise in writing new plugins to support new functionality in Terraform.
Experienced with Atlassian stack from an operations and engineering perspective.
Ability to build deployment and build scripts and automate solutions using various scripting languages: Shell (Bash), Python, Ruby, Perl, PowerShell, XML, and JavaScript.
Created alarms and triggers using CloudWatch based on thresholds; utilized Nagios, Splunk, ELK, and New Relic for monitoring, with Kibana and Grafana as visualization tools.
Good knowledge of SonarQube; implemented multi-tier application provisioning in the OpenStack cloud, integrating it with Jenkins.
Hands on experience in deploying WAR, JAR, and EAR files in WebLogic, WebSphere, JBoss application servers in LINUX/Unix/Windows environment.
Responsible for deploying Java/J2EE applications onto Apache Tomcat application servers and configured them to host wiki websites.
Experienced in deploying database changes to Oracle, MS SQL Server, and MySQL databases.
Proficient in the OSI model and the TCP/IP protocol suite (IP, TCP, ARP, UDP, TFTP, FTP, and SMTP).
Worked on various bug tracking tools like HP Quality Center and JIRA.
Extensive involvement in Linux/Unix system administration: system builds, server builds, installations, upgrades, patches, migration, and troubleshooting on RHEL.
Experience working with agile development teams through all phases of the Software Development Lifecycle and managing projects with the help of CI/CD (Continuous Integration and Continuous Deployment).

Technical skills:
Operating Systems: UNIX, Linux, Windows, Solaris, Ubuntu, CentOS
Infrastructure as a Service: AWS, OpenStack, Azure, Rackspace, Google Cloud
Virtualization Platforms: VirtualBox, Vagrant, VMware
Configuration Management: Chef, Puppet, Ansible, Docker, Vagrant
CI and Build Tools: Jenkins, GitLab, Hudson, Bamboo, Ant, Maven, TeamCity, MSBuild
Application/Web Servers: Oracle WebLogic Server 11g, Apache Tomcat, Oracle Application Server 10g, BEA WebLogic 8.1/9.2, WebSphere, JBoss, IIS
Amazon Web Services: EC2, VPN, Elastic Load Balancer, Auto Scaling, Glacier, Elastic Beanstalk, CloudFront, RDS, DynamoDB, VPC, Route 53, CloudWatch, IAM, EMR, SNS, SQS, CloudFormation, Lambda
Scripting Languages: Bash, Perl, Ruby, Shell, Python, HTML
Build Tools: Maven, Ant
Cloud Platforms: AWS, Azure, Rackspace, OpenStack, Kubernetes
Logging & Monitoring Tools: AppDynamics, New Relic, Splunk, Logstash, Nagios, Datadog
Databases: Oracle 10g/11g, MongoDB, MySQL, RDS
Version Control: Subversion, Git, GitHub, TFS, Bitbucket
Networking: LDAP, DNS, FTP, DHCP, SSH, TCP/IP, NFS
Issue Tracking Tools: Jira, Remedy, ClearQuest

PROFESSIONAL EXPERIENCE:
Client: Best Buy, Richfield, MN Oct 2022 – Present
Role: Cloud Solutions Engineer
Responsibilities:

Built S3 buckets, managed policies for the S3 buckets, and used S3 and Glacier for storage and backup on AWS.
Designing, developing, and maintaining infrastructure-as-code (IaC) templates using Terraform.
Defining infrastructure resources, providers, and configurations in Terraform files.
Orchestrating the provisioning and lifecycle management of cloud resources across multiple platforms (e.g., AWS, Azure, GCP) using Terraform.
Implementing and managing Terraform modules to promote reusability and maintainability of infrastructure code.
Designed the AWS infrastructure using VPC, EC2, S3, Route 53, EBS, Security Groups, Auto Scaling, and RDS in CloudFormation.
Designing and developing CloudFormation templates to define and provision infrastructure resources in an automated and repeatable manner.
Implementing infrastructure components, such as EC2 instances, VPCs, security groups, load balancers, and database services, using CloudFormation.
Managing and organizing CloudFormation stacks to deploy and manage resources across multiple environments and AWS regions.
Implementing best practices for modular and reusable CloudFormation templates to promote code maintainability and scalability.
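One way to keep CloudFormation templates modular is to generate them programmatically. The sketch below builds a minimal template as a Python dict and serializes it to JSON; the resource names, instance type, and AMI ID are placeholders, not taken from any specific project.

```python
import json

def make_template(instance_type: str, ami_id: str) -> dict:
    """Build a minimal CloudFormation template as a Python dict.
    Resource names and the AMI ID are illustrative placeholders."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "WebSecurityGroup": {
                "Type": "AWS::EC2::SecurityGroup",
                "Properties": {
                    "GroupDescription": "Allow HTTP",
                    "SecurityGroupIngress": [
                        {"IpProtocol": "tcp", "FromPort": 80,
                         "ToPort": 80, "CidrIp": "0.0.0.0/0"}
                    ],
                },
            },
            "WebServer": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "InstanceType": instance_type,
                    "ImageId": ami_id,
                    # Reference the security group defined above.
                    "SecurityGroups": [{"Ref": "WebSecurityGroup"}],
                },
            },
        },
    }

template_json = json.dumps(make_template("t3.micro", "ami-12345678"), indent=2)
```

Generating the dict in code makes it easy to parameterize per environment before handing the JSON to the CloudFormation API or CLI.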
Worked with key Terraform features such as execution plans, resource graphs, and change automation. Experience converting existing AWS infrastructure to a serverless architecture (AWS Lambda), deployed via Terraform and AWS CloudFormation templates as infrastructure as code (IaC).
Created automated pipelines in AWS CodePipeline to deploy Docker containers to AWS ECS using services like CloudFormation, CodeBuild, CodeDeploy, S3, and Puppet.
Worked on JIRA for defect/issue logging and tracking, and documented all work in Confluence.
Integrated services like GitHub, AWS Code Pipeline, Jenkins and AWS Elastic Beanstalk to create a deployment pipeline.
Orchestrated Docker containers cluster using Kubernetes/Docker Swarm.
Implemented and maintained the branching and build/release strategies utilizing GIT.
Experienced in updating the source code and deployment code hosting in GitHub.
Work with other teams to help develop the Puppet infrastructure to conform to various requirements including security and compliance of managed servers.
Worked on building CI/CD pipelines with Jenkins.
Installed and configured Jenkins in Linux Environment and automated processes using Jenkins.
Experience in designing CI/CD Pipelines on CI servers for Mobile applications using Jenkins.
Used Prometheus to collect and store metric data from client applications, and used Grafana to create charts so the data is available for analysis and visualization.
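For Prometheus to scrape an application, the application exposes samples in the text exposition format. The sketch below renders gauge samples in that format; the metric and label names are invented examples, and production services would normally use the official prometheus_client library instead.

```python
def render_metrics(metrics: dict, labels: dict) -> str:
    """Render gauge samples in the Prometheus text exposition format.
    A hand-rolled sketch; real services use prometheus_client."""
    # Labels are sorted so the output is deterministic.
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    lines = [f"{name}{{{label_str}}} {value}"
             for name, value in sorted(metrics.items())]
    return "\n".join(lines) + "\n"

exposition = render_metrics(
    {"http_requests_total": 1024, "queue_depth": 7},
    {"app": "checkout", "env": "prod"},
)
```

Prometheus scrapes this text from an HTTP endpoint on the application, and Grafana then queries Prometheus to chart the stored series.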
Used JIRA for change management and bug tracking.
Strong experience in Cloud Storage Systems S3 and AWS RDS.
Launching Amazon EC2 cloud instances using Amazon Machine Images for AWS Cloud.
Create, manage, and delete users and groups as per the request using Amazon Identity and Access Management.
Used AWS CloudFormation to provision and update a web application and build servers, importing volumes and launching EC2 and RDS instances.
Involved in designing and developing Amazon EC2, Amazon S3, Amazon RDS, Amazon Elastic Load Balancing, Amazon SWF, Amazon SQS, and other services of the AWS infrastructure.
Monitoring and tracing experience with tools like Splunk, Grafana, and CloudWatch.
Used Ansible playbooks to set up a continuous delivery pipeline; deployed CloudFormation templates and provisioned AWS environments using Ansible playbooks.
Designed and developed a configuration management database using Python to maintain and audit the everyday configuration changes.
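The heart of auditing everyday configuration changes is diffing two snapshots of a node's configuration. The sketch below shows that core step; the keys and values are invented examples, not the actual schema of any employer's database.

```python
def diff_config(old: dict, new: dict) -> dict:
    """Report added, removed, and changed keys between two config
    snapshots - the core of a simple change-audit record."""
    return {
        "added":   {k: new[k] for k in new.keys() - old.keys()},
        "removed": {k: old[k] for k in old.keys() - new.keys()},
        "changed": {k: (old[k], new[k])
                    for k in old.keys() & new.keys() if old[k] != new[k]},
    }

# Example snapshots: max_conns changed, tls removed, debug added.
audit = diff_config(
    {"max_conns": 100, "region": "us-east-1", "tls": True},
    {"max_conns": 250, "region": "us-east-1", "debug": False},
)
```

Each audit record can then be stored with a timestamp and host name, giving a queryable history of who changed what.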
Created Python scripts to fully automate AWS services, including web servers, ELB, CloudFront distributions, databases, EC2, S3 buckets, and application configuration.
Ensuring the availability and security of the AWS GovCloud infrastructure.
Maintaining physical data centers, network infrastructure, and computing resources.
Implementing security measures and controls to protect the infrastructure.
Maintaining compliance and regulatory certifications for GovCloud, such as FedRAMP and ITAR.

Environment: Apache, AWS, Jenkins, Docker, Maven, Grafana, Prometheus, Puppet, Subversion, GitHub, Linux, JIRA.


Client: Chase, NYC July 2021 – Sep 2022
Role: DevOps Engineer

Responsibilities:
Designed, configured and deployed Amazon Web Services (AWS) for a multitude of applications using the AWS stack (EC2, Route53, S3, RDS, Cloud Formation, Cloud Watch, SQS, IAM), focusing on high-availability, fault tolerance, and auto-scaling.
Handled migration of on-premises applications to the cloud and created cloud resources to enable it. Used all critical AWS tools; used ELBs and Auto Scaling policies for scalability, elasticity, and availability.
Implemented Kafka producer and consumer applications on Kafka cluster setup with help of Zookeeper.
Used Spring Kafka API calls to process the messages smoothly on Kafka Cluster setup.
Experience in working on DevOps/Agile Scrum operations and tools area (Build & Release Automation, Environment service).
Installed, configured, and managed the ELK stack (Elasticsearch, Logstash, and Kibana) for log management on EC2, with an Elastic Load Balancer for Elasticsearch.
Provisioned highly available EC2 instances using Terraform and CloudFormation, and wrote new plugins to support new functionality in Terraform.
Closely worked with Kafka Admin team to set up Kafka cluster setup on the QA and Production environments.
Created Shell and Python scripts to automate the creation of AMIs through pre-boot and bootstrapping techniques.
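Bootstrapping an instance before baking it into an AMI typically means feeding EC2 a user-data script. The sketch below assembles such a script in Python; the package and service names are examples, and yum is assumed as the package manager (an Amazon Linux / RHEL convention).

```python
def build_user_data(packages: list, services: list) -> str:
    """Assemble an EC2 user-data shell script that installs packages
    and enables services at first boot. Package and service names
    here are illustrative examples."""
    lines = ["#!/bin/bash", "set -euo pipefail", "yum update -y"]
    if packages:
        lines.append("yum install -y " + " ".join(packages))
    for svc in services:
        # Enable and start each service immediately.
        lines.append(f"systemctl enable --now {svc}")
    return "\n".join(lines) + "\n"

user_data = build_user_data(["httpd", "amazon-cloudwatch-agent"], ["httpd"])
```

Once the instance has booted and configured itself, an image can be created from it, so every instance launched from the AMI starts pre-configured.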
Utilized Amazon IAM to grant fine-grained access to AWS resources to users. Also, managed roles and permissions of users to AWS account through IAM.
Worked on Kibana and Elastic search to identify the Kafka message failure scenarios.
Hands on experience with IAM to set up user roles with corresponding user and group policies using JSON.
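IAM policies of the kind described above are JSON documents. The sketch below generates a least-privilege read-only policy for a single S3 bucket; the bucket name is a placeholder and the action list is a common minimal read set, not a prescription.

```python
import json

def s3_read_policy(bucket: str) -> str:
    """Produce an IAM policy JSON document granting read-only access
    to one S3 bucket. The bucket name is a placeholder."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                # ListBucket applies to the bucket, GetObject to its keys.
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
        }],
    }
    return json.dumps(policy, indent=2)

policy_doc = s3_read_policy("example-app-logs")
```

The resulting document can be attached to a user, group, or role via the IAM console, CLI, or API.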
Worked on branching, merging, tagging, and maintaining versions across environments using SCM tools like Git and Subversion (SVN) on Windows and Linux platforms. Used the Jenkins pipeline plugin to analyze the Maven dependencies and the SCM changes.
Virtualized the servers using the Docker for the test and dev-environments needs, configuration automation using Docker containers.
Developed AWS Lambda and AWS S3 using GoLang.
Created additional Docker slave nodes for Jenkins using custom Docker images and pushed them to ECR. Worked on all major components of Docker: the Docker daemon, Hub, images, registry, and Swarm.
Created Jenkins jobs to deploy applications to Kubernetes Cluster.
Focused on containerization and immutable infrastructure, Docker has been core to this experience, along with Mesos, Marathon and Kubernetes.
Handled a large-scale RDBMS migration through Redshift; used Multi-AZ deployment in RDS to enable high availability and automatic failover at the database tier for MySQL workloads.
Developed Python based API (RESTful Web Service) to track sales and perform sales analysis using Flask and PostgreSQL.
Worked on setting up the Chef repo, Chef workstations and Chef nodes. Developed Chef recipes through Knife command-line tool to create Chef cookbooks to manage systems configuration.
Involved in chef-infra maintenance including backup/monitoring/security fix and on Chef Server backups.
Used Ansible server and workstation to manage deployments, wrote Ansible Playbooks in YAML.
Used Ansible Tower, which provides an easy-to-use dashboard and role-based access control, so that it's easier to allow individual teams access to use Ansible for their deployments.
Expertise in Troubleshooting build issues using Elastic search, Logstash, Kibana (ELK) with Logspout.
Used AWS Beanstalk for deploying and scaling web applications and services developed with Java, PHP, Node.js, Python, Ruby, and Docker on familiar servers such as Apache, Nginx and IIS.
Configuring network services such as DNS/NFS/NIS/NTP for UNIX/LINUX Servers and setting up UNIX/LINUX environments for various Servers.
Successfully developed large-scale distributed systems and reliable, fault tolerant software.
Created and managed multiple Instances of Apache Tomcat and deployed several test applications in those instances in QA environment.
Integrated application logs and set up triggered monitors and alerts in the Datadog agent monitoring tool.
Managed all bugs and changes in the production environment using the Jira tracking tool, with Nagios and Graphite for system monitoring, and CloudWatch and CloudTrail for monitoring the cloud environment.
Worked with Git to maintain data integrity.
Environment: AWS EC2, IAM, S3, CloudWatch, Route 53, ELB, VPC, DynamoDB, SNS, SQS, API Gateway, Auto Scaling, EBS, RDS, Terraform, ELK, Ant, Maven, SVN, Git, GitHub, Chef, Ansible, Docker, Kubernetes, Jenkins, JIRA, Apache HTTPD, Apache Tomcat, WebSphere, JBoss, JSON, Bash, Python, Ruby, Linux, LAMP, Nagios, Shell, Perl

Client: Wells Fargo, India Apr 2016 – Dec 2020
Role: DevOps Engineer
Responsibilities:
Designed the AWS infrastructure using VPC, EC2, S3, Route 53, EBS, Security Groups, Auto Scaling, and RDS in CloudFormation.
Handled storage over cloud with EBS and S3 policies, performed capacity planning and designing, OS upgrades and hardware refresh.
Worked with the DevOps platform team and was responsible for specialization areas related to Puppet for cloud automation; implemented change requests for server configuration and software installation.
Managed Linux VMs using Puppet as per project requirements.
Utilized Puppet for configuration management of hosted instances within AWS. Configured networking for the Virtual Private Cloud (VPC). Utilized S3 buckets and Glacier for storage and backup on AWS.
Installed and upgraded packages and patches; handled configuration management, version control, service packs, troubleshooting of connectivity issues, and review of security constraints.
Monitored and managed various DEV, QA, PREPROD, PROD environments for production and deployment activities. Identified cross functional dependencies through monitoring and tracking release milestones.
Implemented continuous integration and deployment solutions to target environments. Responsible for the Continuous Delivery pipeline given to all application teams as they on-board to Jenkins as a part of migration.
Configured and maintained Jenkins and Docker for Continuous Integration and end to end automation of all build and deployments, also have good knowledge on XL deploy and Code Deploy as release automation solution.
Used Ansible to manage Web applications, Environments configuration Files, Users, Mount points and Packages.
Migrated the production environment from a handful of AMIs to a single bare-metal host running Docker.
Automated the installation of software through PowerShell scripts.
End-to-end deployment ownership for projects on Amazon AWS, including Python scripting for automation, scalability, and build promotions from staging to production.
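A staging-to-production promotion usually sits behind a gate check. The sketch below shows one such gate in Python; the criteria (green tests, zero critical issues, a minimum soak time in staging) are illustrative, not any specific employer's policy.

```python
def can_promote(build: dict) -> bool:
    """Decide whether a staging build may be promoted to production.
    Gate criteria here are illustrative examples."""
    return (
        build.get("tests_passed", False)          # CI suite is green
        and build.get("critical_issues", 1) == 0  # no open critical bugs
        and build.get("hours_in_staging", 0) >= 24  # minimum soak time
    )

ok = can_promote({"tests_passed": True, "critical_issues": 0,
                  "hours_in_staging": 36})
blocked = can_promote({"tests_passed": True, "critical_issues": 2,
                       "hours_in_staging": 36})
```

A script like this can run as a pipeline stage, failing the job (and thus blocking deployment) whenever the gate returns False.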
Wrote Puppet manifests in Ruby for deploying, configuring, and managing collectd for metric collection and monitoring.
Deployed Puppet, Puppet dashboard for configuration management to existing infrastructure.
Familiar with OpenStack concepts of user facing availability zones and administrator facing host aggregates.
Implemented multi-tier application provisioning in the OpenStack cloud, integrating it with Chef.
Used Splunk to monitor/metric collection for applications in a cloud-based environment.
Developed Splunk infrastructure and related solutions as per automation toolsets.
Integrated Splunk with the AWS deployment using Puppet to collect data from all database server systems into Splunk; also utilized New Relic for monitoring.

Environment: AWS EC2, S3, VPC, Route53, Cloud Formation, Puppet, Chef, Docker, Maven, ANT, GIT, GITHUB, SVN, JIRA, Confluence, Jenkins, OpenStack, Splunk, RHEL, CentOS.

Client: Avaya, India Aug 2012 – Mar 2016
Role: DevOps Engineer
Responsibilities:
Implemented a CD pipeline involving Jenkins & GIT to complete the automation from commit to deployment.
Worked hands-on to create automated, containerized cloud application platform (PaaS), and design and implement DevOps processes that use those platforms.
Migrated SVN repositories to Git and administered GitLab to manage Git repositories.
Configuration management and deployments using Chef server and good understanding of Knife, Chef Bootstrap process etc.
Used Python API for uploading all the agent logs into Azure blob storage. Managed internal deployments of monitoring and alarm services for the Azure Infrastructure (OMS).
Built a Data Sync job on Windows Azure to synchronize data from SQL Server 2012 to SQL Azure.
Migrated services from on-premises to Azure cloud environments; collaborated with development and QA teams to maintain high-quality deployments.
Designed and implemented a WCF services layer hosted on Windows Azure, serving as the middle tier between SQL Azure and SharePoint Online external content.
Created a cache on Windows Azure to improve the performance of data transfer between SQL Azure and the WCF services.
Implemented Python scripts using standard libraries to collect all the agent logs (inventory, remote connections, network usage, and performance counters) from various flavors of Linux.
Created and wrote Shell Scripts (Bash), Ruby, Python, and PowerShell for automating tasks.
Administered TFS for .NET applications. Worked with deployment of .NET batch applications which processes high volumes of data.
Hands on experience with build tools like Jenkins, TeamCity, Sonar, Maven, ANT.
Performed parallel builds for .NET applications, automatically determining which of the projects in the generated build list could be built independently.
Used JIRA as a change Management/Work Management/SCRUM Agile tool.
Configured Nagios to monitor servers with Chef automation.
Implemented Nagios monitoring solution for mission critical servers.
Supported engineering plans and schedules by providing CM/Release Engineering services: building, deploying, developing scripts, overseeing branch and merge strategies, and building automated tools as necessary for the engineering team.

Environment: ANT, Maven, Subversion, CVS, Chef, Azure, LINUX, Shell/Perl Scripts, Python, DB2, LDAP, GIT, Jenkins, Tomcat, Nagios, JIRA.