
Mangulal - DevOps Engineer
Email: [email protected]
Phone: +1(571)464-0219
Location: Dallas, Texas, USA
Relocation: Yes
Visa: H1B


Over 8 years of experience in the IT industry as a DevOps Engineer working with AWS platforms, as well as a Linux Systems Administrator, CI/CD pipeline engineer, and Build and Release Engineer. Skilled at prioritizing and completing tasks while remaining flexible enough to multitask across Development, Testing, Staging, and Production environments.
Involved in creating the company's DevOps strategy in a mixed environment of Linux (Ubuntu, CentOS, RHEL) servers, along with creating and implementing a cloud strategy based on Amazon Web Services (AWS).
Installed Kubernetes (K8s) clusters, including master and worker nodes; configured Istio, etcd, kube-apiserver, kube-scheduler, and kube-controller-manager on the K8s master, and Docker, kubelet, kube-proxy, and flannel on the K8s nodes.
Well-versed in System Administration, System Builds, Server builds, Installs, Upgrades, Patches, Migration, Troubleshooting, Security, Backup, Disaster Recovery, Performance Monitoring and Fine-tuning on UNIX / Red Hat Linux Systems.
Designed and developed enterprise services using REST based APIs.
Configured Kubernetes proof-of-concept clusters on bare metal, VMware, AWS, and Azure using Ansible playbooks.
Experience in RPM Package Administration for installing, upgrading, and checking dependencies.
Performed automated operating system installations for Linux using Kickstart.
Administered WebSphere and Samba servers in UNIX, Linux, and Windows environments.
Installed, configured, and maintained Jenkins/Hudson for continuous integration (CI) and end-to-end automation of all builds and deployments, writing Jenkins Pipeline and Groovy scripts for CI/CD pipelines.
Installed, Configured, Managed Monitoring Tools such as Nagios for Resource Monitoring/Network Monitoring/Log Trace Monitoring.
Knowledge of and experience with container management tools such as Docker and Amazon ECS (EC2 Container Service).
Experience in branching, tagging and maintaining the versions across the environments using SCM tools like Git and GitHub on Linux and Windows platforms.
Installed the Kubernetes dashboard and monitored pods, services, replication factors, the K8s master and nodes and their health status, Docker container details, events, and logs.
Built Docker images for many Java-based applications and deployed them to a private Docker registry (JFrog Artifactory).
Experience in Installing Firmware Upgrades, kernel patches, systems configuration, performance tuning on Unix/Linux systems.
Built CI/CD pipelines using GitHub, Jenkins with Groovy scripting, Artifactory, and Ansible playbooks.
Worked with hosted Chef Enterprise as well as on-premises Chef. Installed workstations, bootstrapped nodes, wrote recipes and cookbooks, and uploaded them to the Chef server.

Experience using Maven and Ant as build tools for building deployable artifacts (JAR, WAR, and EAR) from source code.
Experienced with Handling Cloud environments like AWS (EC2, S3).
Good experience in setting up the EC2 instances for achieving the configuration policies on the servers
Good experience with AWS cloud services (EC2, S3, EBS, ELB, CloudWatch, Elastic IP, RDS, SNS, SQS, Glacier, IAM, VPC, CloudFormation, Route 53) and managing security.
Built S3 buckets, managed bucket policies, and used S3 and Glacier for storage and backup on AWS.
Expertise in designing and implementing the compute layer, including Amazon Machine Image (AMI) design and customization and automation scripts.
Troubleshot build issues in Groovy-scripted Jenkins build processes.
Managed deployment automation using Chef, custom Chef modules, and Ruby.
Worked with Ansible playbooks for virtual and physical instance provisioning, configuration management, patching and software deployment.
Monitoring and analysis of Kubernetes pods logs using Elasticsearch by deploying Filebeat as a DaemonSet.
Maintained Beats using Elasticsearch Centralized Beats Management Console.
Managing DNS, LDAP, LAMP, FTP, Tomcat & Apache web servers on Linux machines.
Managed all the bugs and changes into a production environment using Jira tracking tool.
Involved in setting up JIRA as defect tracking system and configured various workflows, customizations and plugins for the JIRA bug/issue tracker
Set up a private Docker registry using Nginx and JFrog Artifactory.
Day to day administration of the Development, Production and Test environment systems with 24x7 on-call support
TECHNICAL SKILLS
Cloud Technologies: Amazon Web Services (AWS), Microsoft Azure
Continuous Integration Tools: GitLab, Jenkins, Bamboo
Servlet Containers: GlassFish, Apache Tomcat, JBoss, Jetty, WebLogic, IBM WebSphere
Source Code Management Tools: Git, Bitbucket, GitHub, GitLab
Configuration Management Tools: Ansible, Chef, Puppet
Application Servers: Oracle WebLogic, Tomcat, WAS
Build Tools: Maven, Ant
Virtualization Tools: Oracle VirtualBox, VMware, Hyper-V
Containerization Services: Docker, Amazon ECS
Container Orchestration: Kubernetes, EKS, Istio
Jenkins Plugins: Job DSL plugin, Build Pipeline plugin, Delivery Pipeline plugin, JIRA plugin for Jenkins
Continuous Monitoring and Analytics: Elasticsearch, Logstash, Kibana (ELK Stack), Datadog, AWS CloudWatch
Programming Languages: Ruby, Python, shell scripting, YAML, Terraform, CloudFormation templates
Databases: RDS (AWS), Redshift, Oracle, IBM DB2, MySQL, SQLite, PostgreSQL, Hive, spark-shell
Networking: VPC (AWS), subnets, security groups; protocols: TCP/IP, DNS, NFS, NIS, LDAP, SSH, SSL, SFTP, SMTP, SNMP
Operating Systems: Linux distributions (Ubuntu, Debian, CentOS, Red Hat Enterprise Linux (RHEL), Linux Mint, openSUSE), Windows family, macOS

Education:
Master's in Computer Science, 2017 (USA)
Bachelor's in Computer Science, 2015 (OU, India)


PROFESSIONAL EXPERIENCE
Client Name: EA Sports (CA)
Role: AWS DevOps & Infrastructure Engineer
Duration: Feb 2022 to Present
Responsibilities:
Designed and Developed Enterprise level Continuous Integration environment for Build and Deployment Systems.
Highly motivated and committed DevOps Engineer experienced in automating, configuring, and deploying instances on AWS and in data centers.
Working with AWS services such as CloudFormation, CloudWatch, CloudTrail, AWS Config, S3, EC2, VPC, IAM, and ECR.
Working with AWS technologies and concepts such as Lambda, S3, security groups, and AMIs.
Working with CI/CD systems such as Jenkins and GitLab CI.
Creating Kubernetes clusters with CloudFormation templates, deploying them in the AWS environment, and monitoring pod health using Helm charts.
Scheduling, deploying, and managing container replicas onto nodes using Kubernetes; experienced in creating Kubernetes clusters and working with Helm charts running on the same cluster resources.
Writing Python scripts to pull LDAP group and user information from the LDAP server.
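An illustrative sketch of that kind of lookup, assuming the ldap3 library and placeholder values for the server URL, bind credentials, and base DN:

    from ldap3 import Server, Connection, ALL, SUBTREE

    def fetch_ldap_users(server_url="ldaps://ldap.example.com",
                         bind_dn="cn=readonly,dc=example,dc=com",
                         password="changeme",
                         base_dn="dc=example,dc=com"):
        """Return (uid, groups) pairs for every person entry under the base DN."""
        server = Server(server_url, get_info=ALL)
        with Connection(server, user=bind_dn, password=password, auto_bind=True) as conn:
            conn.search(search_base=base_dn,
                        search_filter="(objectClass=person)",
                        search_scope=SUBTREE,
                        attributes=["uid", "memberOf"])
            # entry.memberOf holds the DNs of the groups the user belongs to
            return [(entry.uid.value, list(entry.memberOf)) for entry in conn.entries]

    if __name__ == "__main__":
        for uid, groups in fetch_ldap_users():
            print(uid, groups)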
Configured HashiCorp Vault to manage secrets and protect sensitive data.
Securely stored and tightly controlled access to tokens, passwords, certificates, and encryption keys for protecting secrets and other sensitive data using the Vault UI, CLI, and HTTP API.
Creating Lambda functions with Terraform code and setting up automation for EMR clusters.
Configuring hive-site with the Hive metastore and LDAP information needed to access Hive services.
Working with automation and configuration management using Terraform and Ansible.
Creating EMR clusters using Terraform and adding Hive and Spark job configurations.
Good experience maintaining user accounts (IAM), RDS, Route 53, VPC, DynamoDB, and SNS services in the AWS cloud.
Proficient with Helm charts for managing and releasing Helm packages.
Handled day-to-day operations; administered and monitored big data platform components (Hadoop, Hive, HBase, BigSQL, etc.).
Deploying AWS resources using Terraform code and Terraform workspaces for different environments.
Support Technical and Application team requests - data copies across environments, data cleanup, query tuning, etc.
Provisioning Auto Scaling, CloudWatch (monitoring), Amazon S3 (storage), and Amazon EBS (persistent disk storage).
Working on Datadog-related support: managing the user community, troubleshooting, updating, and solving problems.
Troubleshot build issues in Groovy-scripted Jenkins build processes.

PROFESSIONAL EXPERIENCE
Client Name: Humana (KY)
Role: AWS DevOps Engineer
Duration: Dec 2020 to Jan 2022
Responsibilities:
Designed, built, and maintained data platform infrastructure in the AWS environment.
Developed data pipelines to collect the metrics required to monitor data refreshes and report deliveries and to track SLAs.
Built continuous integration/deployment (CI/CD) pipelines to accelerate development and improve team agility; provided project oversight.
Monitoring all aspects of data platform system security, performance, storage, incidents, and usage for databases, data pipelines, applications, and infrastructure on AWS. Escalate to respective teams for fixes on production.
Working with GitLab Workflows leveraging AWS infrastructure including but not limited to - S3, ECS/Fargate, Docker/containerization, EC2s and CloudFront.
Managing and monitoring of overall application availability, latency and system health.
Automated build pipeline, and continuous integration. Source control, branching, & merging: git/svn/etc (Repository Management).
Developed AWS strategy, planning, and configuration of S3, Security groups, IAM, ELBs, Cross Zone, DR, AMI rehydration with Blue Green strategy for zero downtime deployments.
Optimized AWS cloud costs through Reserved Instances, selecting and changing EC2 instance types based on resource needs, S3 storage classes and lifecycle policies, and leveraging Auto Scaling.
Developed AWS Python Boto3 scripts for graceful start and shutdown of services.
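A minimal Boto3 sketch of that start/stop pattern, assuming hypothetical tag values and region; the real scripts targeted the team's own services:

    import boto3

    def set_instance_state(action, tag_key="Environment", tag_value="dev", region="us-east-1"):
        """Start or stop every EC2 instance carrying the given tag."""
        ec2 = boto3.client("ec2", region_name=region)
        reservations = ec2.describe_instances(
            Filters=[{"Name": f"tag:{tag_key}", "Values": [tag_value]}]
        )["Reservations"]
        instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
        if not instance_ids:
            return []
        if action == "start":
            ec2.start_instances(InstanceIds=instance_ids)
        else:
            # stop_instances performs a graceful OS shutdown before stopping the instance
            ec2.stop_instances(InstanceIds=instance_ids)
        return instance_ids

    if __name__ == "__main__":
        print(set_instance_state("stop"))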
Responsible for setting up and building AWS infrastructure resources (VPC, EC2, S3, IAM, EBS, security groups, Auto Scaling) in CloudFormation. Built S3 buckets, managed bucket policies, and used S3 and Glacier for storage and backup on AWS.
Containerized applications with Docker and orchestrated them using Kubernetes.
Provide technical assistance and consultation to customer's installation, operations and maintenance personnel on Datadog.
Worked with Azure cloud services (PaaS & IaaS): Storage, Web Apps, Active Directory, Azure Container Service, VPN Gateway, Content Delivery Management, Traffic Manager, Azure Monitoring, OMS, Key Vault, Visual Studio Online (VSO), Cognitive Services (LUIS), and SQL Azure.
PROFESSIONAL EXPERIENCE

Client Name: EA Sports (CA)
Role: AWS DevOps Engineer
Duration: June 2019 to December 2020
Responsibilities:

Writing Terraform scripts to provision an EKS cluster and Istio, and deploying all AWS resources in the cloud environment using Helm charts.
Working on Datadog integrations with other services.
Creating users in Datadog, providing access to all users, and managing their credentials.
Supporting technologies and products including AWS cloud, Linux (CentOS), Terraform, and Git.
Using the Identity and Access Management (IAM) service to create and manage user accounts, groups, and their policies.
For automation, used Lambda functions, Step Functions state machines, CloudWatch, and SNS topics for each environment.
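A hedged sketch of one piece of that automation: a small Lambda handler that forwards an event to an environment's SNS topic (the topic ARN and event shape here are placeholders):

    import json
    import os
    import boto3

    sns = boto3.client("sns")

    def handler(event, context):
        """Publish the incoming automation event to the environment's SNS topic."""
        topic_arn = os.environ.get("TOPIC_ARN",
                                   "arn:aws:sns:us-east-1:123456789012:example-topic")
        response = sns.publish(TopicArn=topic_arn,
                               Subject="Automation event",
                               Message=json.dumps(event))
        return {"statusCode": 200, "messageId": response["MessageId"]}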
Performed S3 bucket creation and bucket policies, worked on IAM role-based policies, and customized the JSON templates.
Installed and configured Hive and Hive UDFs; managed and reviewed Hadoop files.
Set up a Hadoop cluster on Amazon EC2 using Apache Whirr for a proof of concept.

Set up firewall rules to allow or deny traffic to and from VM instances based on the specified configuration, and used a cloud CDN (content delivery network) to deliver content from cache locations, drastically improving user experience and latency.
Working with monitoring tools such as Datadog, CloudWatch, and the ELK Stack.
Writing Terraform scripts to provision Kubernetes clusters and deploying applications on the EKS cluster using Helm charts.
Supporting big data services prior to production via infrastructure design, software platform development, load testing, capacity planning, and launch reviews.
Troubleshoot and resolve issues related to user queries, application jobs, etc.
Monitoring cluster connectivity and performance; managing and reviewing Hadoop log files.
Supporting AWS RDS databases (SSL certificate updates and instance upgrades).
Supporting SSL certificate rotation for all applications using AWS Certificate Manager.
Working with CI/CD pipelines, blue/green deployments, and DevOps principles.
Deploying applications using Elastic Beanstalk and supporting application version updates.
Implemented and maintained monitoring and alerting of production and corporate servers/storage using AWS CloudWatch.
Created S3 buckets, permissions, and lifecycle policies with cross-region replication using Terraform code.
Provided highly available and fault-tolerant applications utilizing orchestration technologies such as Kubernetes and Apache Mesos on Google Cloud Platform.
Managed Hadoop configurations (core-site, hdfs-site, yarn-site, and mapred-site), backup and recovery tasks, and resource and security management.
Designing, architecting, and implementing scalable cloud-based web applications using AWS.
Implementing CI/CD pipelines in GitLab and automating the deployment process.
Responsible for deciding the size of the Hadoop cluster based on the data to be stored.
Writing Terraform scripts to provision AWS services: EC2, ELB, VPC, RDS, IAM, and S3.
Working with Docker and other container platforms (AWS ECS).
Installed Kubernetes (K8s) clusters including Istio, master, and worker nodes; configured etcd, kube-apiserver, kube-scheduler, and kube-controller-manager on the K8s master, and Docker, kubelet, kube-proxy, and flannel on the K8s nodes.



PROFESSIONAL EXPERIENCE
Client Name: PRA Health Sciences (NC)
Role: AWS Deployment Engineer
Duration: Oct 2018 to May 2019
Responsibilities:


Designed and Developed Enterprise level Continuous Integration environment for Build and Deployment Systems.
Created security groups and configured inbound/outbound rules covering ports and protocols (SSH, HTTP, HTTPS, RDP, TCP, SMTP) to secure EC2 instances.
Integration of Automated Build with Deployment Pipeline. Installed Chef server and clients to pick up the build from Jenkins repository and deploy in target environments.
Created security groups and managed all EC2 instances.
Performed resource management of the Hadoop cluster.
Responsible for building scalable distributed data solutions using Hadoop.

Deployed multi-tenant cloud applications on hybrid cloud using Kubernetes and Docker containers.
Managed major architecture changes from a single-server large software system to a distributed system with Docker and Kubernetes orchestration.
Writing Terraform scripts to provision EKS clusters and deploy all AWS resources in the cloud environment.
Developed a CI/CD system with Jenkins on a Kubernetes container environment, utilizing Kubernetes and Docker as the runtime environment for the CI/CD system to build, test, and deploy.
Working closely with Datadog component designers, test and measurement engineers, and end users, i.e., specialists from our diagnostic center.
Analyzing the log files, taking thread dumps, JVM Dumps and Exception stack traces.
Real time Data Streaming of data from SAP HANA DB to Elasticsearch with Logstash JDBC plugin.
Created a best practice Build environment using Jenkins, Packer, immutable instances, and AWS.
Configuration of Filebeat and Metricbeat for capturing CPU Metrics and log Monitoring using Ansible Playbooks.
Integration of POS (point of sale) logs into Elasticsearch for near-real-time log analysis of transactions.
Configured CI infrastructure (Jenkins) and full end to end automation using Jenkins.
Installation and configuration of a multi-tenant Elasticsearch stack across different data centers using Ansible playbooks and Terraform.
Monitoring and analysis of Kubernetes pods logs using Elasticsearch by deploying Filebeat as a DaemonSet.
Deployed the application using Jenkins into Anypoint Studio CloudHub, made changes, and created schedules for database jobs.
Worked 24x7 with the AWS product support team to troubleshoot issues.
Set up IAM users/roles/groups/policies and automated DB and application backups to S3 using the AWS CLI.

Environment: AWS (EC2, EMR, Lambda, S3, ELB, Elastic Beanstalk, Elastic File System, RDS, DMS, VPC, Route 53, Security Groups, CloudWatch, CodePipeline, CloudTrail, IAM roles, SNS), GitHub, Jenkins, Apache Tomcat 7.0, Splunk, Shell, Python.


PROFESSIONAL EXPERIENCE
Client Name: Bay Area Petroleum Services, Inc. (CA)
Role: AWS SysOps Administrator
Duration: Feb 2018 to Sep 2018
Responsibilities:


Working with AWS cloud services (EC2, S3, EBS, ELB, CloudWatch, Elastic IP, RDS, SNS, SQS, Glacier, IAM, VPC, CloudFormation, Route 53) and managing security.
Experience supporting cloud environments using AWS (Amazon Web Services) and familiar with creating instances. Implemented an automatic alert notification system that sends email when tests don't get started on GitHub repositories.
Created a local Git server that acts as a mirror image of the GitHub repositories.
Troubleshoot build, packaging and component management issues, working with the core Engineering team to resolve them.
Working with the Python Boto3 module for all automation processes.

Achieved self-healing by setting the replication factor to an optimal value, along with high availability, fault tolerance, resilience, and cost-effective deployments for various tools, apps, and microservices inside the K8s cluster.
Writing Terraform scripts to provision EKS clusters and deploy all AWS resources in the cloud environment.
Installation and configuration of virtual machines in an Enterprise SAN and NAS environment
Working with SAP applications, creating users, and backing up their data in the AWS environment.
Fully automated deployment to production with the ability to deploy multiple times a day.
Monitoring of ELK Stack Clusters using X-Pack.
Installed Kubernetes (K8s) clusters including master and worker nodes; configured etcd, kube-apiserver, kube-scheduler, and kube-controller-manager on the K8s master, and Docker, kubelet, kube-proxy, and flannel on the K8s nodes.
Responsible for Continuous Integration (CI) and Continuous Delivery (CD) process implementation- using Jenkins along with scripts to automate routine jobs.
Hands-on experience working with Datadog.
Installed Vault by finding the best package for the system and downloading it; Vault is packaged as a zip archive.
Integration of automated builds with the deployment pipeline. Installed Chef server and clients to pick up builds from the Jenkins repository and deploy them to target environments.
Implemented Chef Recipes for deployment of build on internal Data Centre servers. Re-used and modified Chef Recipes to create a deployment directly into Amazon EC2 instances.
Performed Branching, Tagging, and Release Activities on Version Control Tool: GIT.







PROFESSIONAL EXPERIENCE
Client Name: American Airlines (TX)
Role: Build & Release Engineer
Duration: Aug 2017 to Jan 2018
Responsibilities:


Designed and Developed Enterprise level Continuous Integration environment for Build and Deployment Systems.
Worked with the Jenkins API to get the necessary information from build jobs.
Implemented an automatic alert notification system that sends email when tests don't get started on GitHub repositories.
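A hypothetical sketch of how such an alert could work, assuming placeholder repositories, SMTP host, and addresses, and using the public GitHub commit-status API:

    import smtplib
    from email.message import EmailMessage
    import requests

    REPOS = ["example-org/example-repo"]           # placeholder repositories
    SMTP_HOST = "localhost"                        # placeholder mail relay
    ALERT_FROM, ALERT_TO = "ci@example.com", "devops@example.com"

    def tests_started(repo):
        """Return True if the latest commit on the default branch has any CI status."""
        commits = requests.get(f"https://api.github.com/repos/{repo}/commits", timeout=10).json()
        sha = commits[0]["sha"]
        status = requests.get(f"https://api.github.com/repos/{repo}/commits/{sha}/status",
                              timeout=10).json()
        return status["total_count"] > 0

    def send_alert(repo):
        msg = EmailMessage()
        msg["Subject"] = f"Tests not started for {repo}"
        msg["From"], msg["To"] = ALERT_FROM, ALERT_TO
        msg.set_content(f"No CI status found on the latest commit of {repo}.")
        with smtplib.SMTP(SMTP_HOST) as smtp:
            smtp.send_message(msg)

    if __name__ == "__main__":
        for repo in REPOS:
            if not tests_started(repo):
                send_alert(repo)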
Created a local Git server that acts as a mirror image of the GitHub repositories.
Troubleshoot build, packaging and component management issues, working with the core Engineering team to resolve them.
Expertise in tracking defects, issues, risks using Quality Center.
Fully automated deployment to production with the ability to deploy multiple times a day.
Working with GitLab and implementing CI/CD pipelines, writing YAML for complete automation.
Created the automated build and deployment process for application, re-engineering setup for better user experience, and leading up to building a continuous integration system.
Experience in creating AWS AMIs; used HashiCorp Packer to create and manage the AMIs.
Responsible for Continuous Integration (CI) and Continuous Delivery (CD) process implementation- using Jenkins along with scripts to automate routine jobs.

Integration of automated builds with the deployment pipeline. Installed Chef server and clients to pick up builds from the Jenkins repository and deploy them to target environments.
Implemented Chef Recipes for deployment of build on internal Data Centre servers. Re-used and modified Chef Recipes to create a deployment directly into Amazon EC2 instances.
Performed Branching, Tagging, and Release Activities on Version Control Tool: GIT.
Working with the Maven build tool to take files from developers, run unit tests, and package the compiled .class files into JAR/WAR files.

PROFESSIONAL EXPERIENCE
Duration: April 2016 to July 2017

Responsibilities:
Implemented pipeline scripting and Groovy scripting for the Jenkins master/slave concept in Jenkins pipelines.
Created and deployed instances using Amazon Web Services.
Wrote Chef cookbooks for various DB configurations to modularize and optimize end-product configuration.
Migrated on premises Databases to AWS using AWS Database Migration Service (DMS). Created an AWS MySQL DB cluster and connected to the database through an Amazon RDS MySQL DB Instance using the Amazon RDS Console.
Expert in configuring and implementing Nagios (or similar) monitoring software.
Utilized AWS CLI to automate backups of ephemeral data-stores to S3 buckets, EBS and create nightly AMIs for mission critical production servers as backups.
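A rough Boto3 equivalent of the nightly AMI step (the original automation used the AWS CLI; the instance IDs and region below are placeholders):

    import datetime
    import boto3

    def create_nightly_amis(instance_ids=("i-0123456789abcdef0",), region="us-east-1"):
        """Create a dated AMI for each mission-critical instance without rebooting it."""
        ec2 = boto3.client("ec2", region_name=region)
        stamp = datetime.datetime.utcnow().strftime("%Y-%m-%d")
        image_ids = []
        for instance_id in instance_ids:
            image = ec2.create_image(InstanceId=instance_id,
                                     Name=f"nightly-{instance_id}-{stamp}",
                                     NoReboot=True)
            image_ids.append(image["ImageId"])
        return image_ids

    if __name__ == "__main__":
        print(create_nightly_amis())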
Developed web forms using the Hyperion Planning web client for users to input forecast, budget, actual accounting changes, and other variance explanation data.
Responsible for maintaining 4-5 different testing/QA environments and standing up the PROD environment in AWS.
Implemented multiple high-performance MongoDB replica sets on EC2 with robust reliability.
Developing monitoring and alerting with Datadog.
Good understanding of Knife, Chef Bootstrap process etc.
Written wrapper scripts to automate the deployment of cookbooks on nodes and running the chef client on them in a Chef environment.
Writing Terraform scripts to provision EKS clusters and Istio, and deploying all AWS resources in the cloud environment.
Implemented Chef server and component installations, including certificate imports, increasing the Chef license count, and creating admins and users.
Involved in Chef infra maintenance including backup/monitoring/security fix.
Implemented auto-builds (on QA and Dev servers) in our Node server environment by configuring cookbook modules.
Hands-on experience with creating custom IAM users and groups and attaching policies to user groups.
Expertise in creating AWS CloudFormation templates (CFTs) to create custom-sized VPCs, EC2 instances, ELBs, and AWS Lambda functions.
Expertise in launching AMIs and creating security groups and CloudWatch metrics for the AMIs.
Worked on operational support activities to ensure availability of customer websites hosted on AWS cloud infrastructure using Virtual Private Cloud (VPC) and the public cloud.

PROFESSIONAL EXPERIENCE
Duration: May 2015 to Dec 2015

Responsibilities:
Primarily worked on Installation and configuration of Solaris 9/10/11, Redhat 4.x, 5/6.x, OEL on Dell Power Edge Rack, Blade & Oracle SPARC servers using Kickstart with PXE and Solaris Jumpstart.
Installation and administration of Solaris and Linux Enterprise Servers for test lab, production and disaster recovery setup.
Installed ESX servers, created VMs, installed different guest operating systems, and updated ESX hosts using VMware Update Manager.
Experience supporting cloud environments using AWS (Amazon Web Services); familiar with creating instances, managing cloud servers, and monitoring them using CloudWatch.
Installed and configured Red Hat Cluster 5.x and configured the cluster resources.
Upgraded the kernel on all Red Hat servers and created an initrd image to boot from the upgraded kernel.
Migrated Red Hat servers from version 4.x and worked with the application team to resolve post-migration issues.

Rapid provisioning and configuration management for Ubuntu using CloudFormation and Chef on Amazon Web Services.
Planned major patch releases by coordinating with the application team for the proper implementation of security patches within the environments.
Worked on user administration setup, account maintenance, and system performance monitoring using Nagios and Tivoli.
Implemented VMware Infrastructure for Solaris and Red Hat Linux 5.0 with VMware ESX 3.5 and VirtualCenter 2.5, and administered the VMs with the VI Client.
Extensive experience installing, integrating, tuning, and troubleshooting Apache, Tomcat, WebSphere Application Server, and WebSphere IHS.
Used Logical Volume Manager (LVM) for the management of Volumes including creation of physical and logical volumes on Linux.
Configured multipathing using tools such as MPxIO and EMC PowerPath with VMAX, and performed migrations on Solaris and Linux servers.
Responsible for network troubleshooting and planning in a web hosting environment including the configuration of 3-Tier Environment.
Involved in support and upgrade of Puppet master server from 2.x to 3.x version on servers and clients.
Supported Oracle DB and Oracle RAC in Red Hat environments. Experience setting up Linux to support RAC, Oracle, and WebLogic installations and troubleshooting performance issues on HP-UX and Linux servers. Troubleshooting Sun Java System Web Server 6.0 and Apache 1.3.x web servers on Solaris.
Experience working with SAN and NAS environments, primarily servers connected to EMC CLARiiON, DMX, and Celerra arrays and NetApp filers on Linux and Solaris servers.
Experience working with servers connected to SAN and NAS environments such as EMC and NetApp.
Experience setting up cluster environments such as Veritas Cluster for high availability of business-critical applications.
Installation of Oracle 9i, 10g on the Sun servers running Solaris 10 and Redhat Linux.
Created resource pools, zones, and containers on Solaris 10 and T2000 servers to optimize and consolidate resource usage.
Experience working with EMC PowerPath and Red Hat and Solaris native multipathing.
Patch and package administration: installed patches per company policy and installed packages.
Performed tasks on F5 load balancer like ordering new certs, installing and renewing SSL.