BANDRU MAHESH
Senior DevOps Cloud Engineer
(408) 827-5122
[email protected]
Location: Remote, USA
Visa: GC

PROFESSIONAL SUMMARY
9+ years of IT experience in DevOps implementation, cloud computing, systems administration, migration, change management, Software Configuration Management (SCM), and Build & Release engineering, including continuous delivery and build services on AWS and Azure across Windows and Linux systems.
Worked on system administration, system and server builds, installs, upgrades, patches, migration, troubleshooting, security, backup, disaster recovery, performance monitoring, and fine-tuning on Red Hat Linux.
Expertise in cloud infrastructure automation using Amazon Web Services, OpenStack, Ansible, Puppet, Maven, Jenkins, Chef, SVN, GitHub, WebLogic, Tomcat, JBoss, and Linux.
Solid experience in designing and operationalizing large-scale data and analytics solutions on the Snowflake Data Warehouse.
Experience with middleware and application server technologies such as WebLogic, WebSphere, JBoss, Apache HTTP Server, and Tomcat; integrated security into the SDLC and implemented security controls in cloud and on-premises environments.
Experience with Atlassian tool suite, including Jira, Confluence, Bitbucket, Bamboo, and Crowd.
Extensively worked with CI/CD pipelines using Jenkins, Maven, Nexus, GitHub, CHEF, Terraform and AWS.
Managed highly available and fault-tolerant systems in AWS through APIs, console operations, and the CLI.
Familiarity with IAM compliance requirements and standards, such as HIPAA, PCI DSS, and SOC 2.
Expertise in Terraform for building, changing, and versioning infrastructure, and in automating AWS infrastructure via Terraform and Jenkins.
Hands-on experience with Azure Automation assets, graphical runbooks, and PowerShell runbooks that automate specific tasks.
Expertise in Azure infrastructure management (Azure AD, Azure Web Roles, Worker Roles, SQL Azure, AZURE Storage, Azure AD Licenses, Office365).
Performed virtual machine backup and recovery from a Recovery Services vault using Azure PowerShell and the Azure portal.
Experience with SonarQube administration, including installation, configuration, and upgrades.
Experience in using Puppet to automate repetitive tasks, quickly deploy critical applications, proactively manage change, and create custom Puppet module skeletons.
Contributed to the open-source Golang community by developing and maintaining Golang libraries and tools.
Designed and implemented a highly scalable, fault-tolerant, and efficient microservices architecture using Golang, Docker, and Kubernetes.
Designed, implemented, and maintained secure, automated CI/CD pipelines to enable rapid and secure software delivery. Worked on Hudson/Jenkins for configuring and maintaining CI and end-to-end automation for all builds and deployments. Installed and configured Kubernetes, clustered the nodes, and managed local deployments in Kubernetes.
Used Kubernetes for deployment, scaling, and load balancing of applications from development through production, easing the code development and deployment pipeline by implementing Docker containerization.
Worked with Ant and Maven build tools in Java environments to build artifacts (WAR, JAR, and EAR files).
Proficient in leveraging HashiCorp tools like Terraform, Vault, Consul, and Nomad to build scalable, secure, and highly available infrastructure solutions.
Integrated Jenkins with Maven (build tool), Git (source repository), SonarQube (code quality and coverage), and Nexus (artifact repository).
Implemented containerization using technologies like Docker and container orchestration platforms like Red Hat OpenShift; developed and maintained CI/CD pipelines for Golang applications using Jenkins and GitLab CI/CD.
Extensive experience in managing and configuring Nginx, Apache Webservers, Apache Tomcat for seamless application deployment and performance optimization.
Installed and Managed Jenkins and Nexus for CI and sharing artifacts respectively within the company.
Expert in deploying code through web application servers such as WebSphere, WebLogic, Apache Tomcat, and JBoss.
Expertise in all build/release engineering tasks associated with component, software, and production releases. Prepared build scripts and build specs and applied labels for software builds.
Expertise in troubleshooting build problems as they arise and working with engineering teams to resolve issues.
Developed and maintained Ansible playbooks for installing RPM packages on multiple Linux servers.
Monitored servers using tools like New Relic and Nagios and provided 24x7 on-call support on a rotation basis. Experience in all phases of the project life cycle: fit-gap analysis, design, testing, and implementation.
Experience with all phases of testing including Unit, System, ITRB and user acceptance.
Experience with managing and coordinating infrastructure changes (Network, Servers, and Databases).
Expert in BCP, SOX, security, RSAM and Archer compliance, and change and release management of applications.
Experience in troubleshooting IAM issues and performing security audits and risk assessments.

EDUCATION
Bachelor's

CERTIFICATIONS
Certified in Microsoft Azure Administrator Associate
Certified Kubernetes Administrator
Certified in AWS Developer

TECHNICAL SKILLS

Automation Tools and IaC: Azure DevOps Pipelines, Jenkins, Chef, Puppet, Ansible, Docker, Kubernetes, Vagrant, Maven, Terraform, ARM Templates, Hudson, Bamboo.
Cloud Platforms: AWS, Azure, Google Cloud Platform (GCP), OpenStack, Pivotal Cloud Foundry (PCF).
Database Systems: Cassandra, Oracle, DB2, MS SQL, MySQL, MongoDB, AWS RDS, DynamoDB.
Version Control Tools: Git, Subversion, CVS, Bitbucket, Gerrit, ClearCase.
Web Servers: Tomcat, Apache 2.x/3.x, JBoss 4.x/5.x, WebLogic 8/9/10, WebSphere 4/5, TFS, Nginx, Azure, IIS, Red Hat Satellite.
Scripting/Languages: Perl, Python, Ruby, Bash/Shell scripting, PowerShell scripting, YAML, PHP, JSON.
Virtualization Technologies: Docker containers, AWS ECS, Vagrant, VMware.
Application Servers: WebLogic Application Server 9.x/10.x, Apache Tomcat 5.x/7.x, Red Hat JBoss 4.2.2.
Volume Managers: VERITAS Volume Manager (VxVM), Logical Volume Manager (LVM) with Linux.
Monitoring Tools: Elasticsearch, Logstash, and Kibana (ELK), CloudWatch, CloudTrail, Splunk, Nagios, Prometheus, Grafana.
Operating Systems: Linux (Red Hat 4/5/6/7, CentOS, SUSE), Ubuntu 12/13/14, Windows.

PROFESSIONAL EXPERIENCE
Client: University of Delaware    Oct 2022 – Present
Role: Sr Cloud/DevOps Engineer
Responsibilities:
Worked with Azure services including Virtual Networks, Azure SQL Database, Azure Search, Azure Data Lake, Azure Data Factory, Azure Blob Storage, Azure Service Bus, Function Apps, Application Insights, and ExpressRoute.
Set up Azure Monitor dashboards for various Azure services by enabling diagnostic settings and writing queries in a Log Analytics workspace to send logs to Azure storage accounts and stream them to Azure Event Hubs.
Created and configured HTTP triggers in Azure Functions with Application Insights for monitoring, performed load testing on the applications using VSTS, and used the Python API to upload all agent logs into Azure Blob Storage.
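A minimal Python sketch of that log-upload step using the azure-storage-blob SDK; the container name, log directory, and connection-string environment variable are illustrative, not values from this project:

    # Hypothetical helper for shipping agent logs to Azure Blob Storage.
    import os
    from pathlib import Path

    from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob


    def upload_agent_logs(log_dir: str, container: str = "agent-logs") -> None:
        """Upload every *.log file in log_dir to the given blob container."""
        service = BlobServiceClient.from_connection_string(
            os.environ["AZURE_STORAGE_CONNECTION_STRING"]
        )
        container_client = service.get_container_client(container)

        for log_file in Path(log_dir).glob("*.log"):
            # Blob name keeps the original file name; overwrite refreshes stale logs.
            with log_file.open("rb") as data:
                container_client.upload_blob(name=log_file.name, data=data, overwrite=True)


    if __name__ == "__main__":
        upload_agent_logs("/var/log/agent")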
Developed and maintained data integration solutions, leveraging APIs, HL7 standards, and other interoperability frameworks.
Transferred data from on-premises SQL database servers to Azure SQL Database via Azure Data Factory pipelines created with the Copy Data tool and self-hosted integration runtimes.
Created and maintained Continuous Integration (CI) using Azure DevOps (VSTS) across multiple environments to facilitate an automated, repeatable agile development process, enabling teams to safely deploy code to Azure Kubernetes Service (AKS) with VSTS YAML pipelines.
Worked on Integrating Azure-DevOps Boards with Microsoft Teams and Pipelines for Notifying Sprint Boards and Teams respectively.
Collaborated with data architects and stakeholders to design and architect data solutions using Azure Synapse which involves understanding business requirements, selecting appropriate Synapse components, defining data pipelines, data flows and data storage structures.
Implemented practices to optimize costs in development and testing environments. This may involve using automation scripts or Azure DevTest labs to spin up environments on demand and tear them down when not in use, reducing the idle time of resources and avoiding unnecessary costs.
Involved in implementing strategies and practices to optimize the cost of Azure resources and services used in development and deployment processes.
Facilitated seamless integration between Databricks and Azure DevOps, ensuring efficient development, testing, and deployment of data solutions.
Implemented monitoring and alerting mechanisms for Databricks workloads and resources, including setting up monitoring tools, defining metrics and alerts, and integrating with Azure Monitor and other monitoring solutions to ensure timely detection and resolution of issues, optimize performance, and track usage patterns for cost management.
Well versed in using Terraform templates for provisioning Virtual Networks, Subnets, VM Scale sets, load balancers, and NAT rules. Configured BGP Routes between on-premises data centers and Azure cloud to enable ExpressRoute connections.
Performed Azure scalability configuration that sets up a group of virtual machines (VMs) and configures Azure availability and scalability to provide high application availability and to automatically scale up or down in response to demand.
Used Azure Kubernetes Service (AKS) to deploy a managed Kubernetes cluster in Azure and created an AKS cluster in the Azure portal using template-driven deployment options such as Azure Resource Manager (ARM) templates and terraform.
Used Azure Kubernetes Service (AKS) and integrated Jenkins pipelines into Azure Pipelines to drive all microservice builds out to the Docker registry, then deployed them to Kubernetes and created and managed pods.
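An illustrative Python sketch of the deploy step described above, using the official Kubernetes client to point an existing AKS Deployment at a freshly pushed image tag; the deployment, namespace, and registry names are hypothetical:

    # Hypothetical post-build step: roll an existing Deployment to the image
    # tag the pipeline just pushed.
    from kubernetes import client, config  # pip install kubernetes


    def roll_out(image_tag: str, deployment: str = "payments-api",
                 namespace: str = "prod") -> None:
        config.load_kube_config()  # in-cluster code would use load_incluster_config()
        apps = client.AppsV1Api()

        image = f"myregistry.azurecr.io/payments-api:{image_tag}"
        # Strategic-merge patch: only the container image changes, so Kubernetes
        # performs a rolling update of the pods.
        patch = {"spec": {"template": {"spec": {
            "containers": [{"name": "payments-api", "image": image}]}}}}
        apps.patch_namespaced_deployment(deployment, namespace, patch)


    if __name__ == "__main__":
        roll_out("1.4.2")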
Involved in integrating Azure Log Analytics with Azure VMs for Monitoring, Storing, tracking Metrics, resolving, and investigating Root cause issues.
Wrote PowerShell runbooks and deployed them to Automation Accounts using Azure DevOps CI/CD. Also wrote PowerShell scripts to call the Azure DevOps API and identify users who had not accessed Azure DevOps in more than 90 days.
Expertise in designing, deploying, and managing infrastructure on Microsoft Azure, including virtual machines, virtual networks, storage accounts, and Azure services like Azure App Service, Azure Functions, Azure SQL Database, etc.
Extensive knowledge of Windows Server internals and administration, including IIS configuration, troubleshooting, performance tuning, and security.
Experience working in agile development environments using methodologies such as Scrum and SAFe (Scaled Agile Framework). Familiarity with version control systems, code reviews, and defect tracking tools.
Managed clusters and pods, ensuring efficient resource allocation and utilization. Maintained and updated YAML files to ensure smooth functioning of containerized applications. Stayed up to date with the latest cloud technologies and trends, recommending improvements to enhance platform capabilities. Conducted regular security audits and administered firewall settings to ensure a robust and secure infrastructure.
Configured and administered Azure Active Directory (Azure AD) for user and access management. Developed infrastructure as code using Terraform scripts to automate deployment and configuration processes.
Implemented and enhanced Continuous Integration and Continuous Delivery (CI/CD) practices for Windows applications, automating build, test, and deployment processes. Utilized CI/CD tools such as Jenkins, TeamCity, and Azure DevOps to streamline the software delivery pipeline and ensure efficient release management. Maintained and optimized the CI/CD infrastructure, making improvements to enhance build speeds, test coverage, and overall deployment reliability.
Designed, implemented, and maintained infrastructure as code (IaC) using tools like Terraform, Google Cloud Deployment Manager, and CloudFormation. Designed and implemented scalable architectures that can handle high traffic and demand. Collaborated with development teams to optimize application performance and ensure efficient resource utilization on GCP.
Built and maintained robust CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, and Google Cloud Build, facilitating automated testing, build processes, and deployment of applications on GCP.
In the event of production incidents, played a crucial role in identifying and resolving issues promptly. Leveraged monitoring tools and performed root cause analysis to prevent similar incidents from recurring.
Facilitated effective communication and collaboration to streamline processes and ensure smooth deployment and operations on GCP.
Optimized and enhanced the GCP environment, enabling the organization to leverage the full potential of cloud services.
Designed, implemented, and maintained REST APIs to enable communication and data exchange between different systems or components. Created REST endpoints to expose functionality of DevOps tools, orchestrate deployments, or provide integration points for various services.
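A minimal sketch of such a REST endpoint using Flask; the route, payload fields, and the trigger_pipeline helper are hypothetical illustrations rather than an actual internal API:

    # Hypothetical deployment-orchestration endpoint.
    from flask import Flask, jsonify, request  # pip install flask

    app = Flask(__name__)


    def trigger_pipeline(service: str, version: str) -> str:
        """Placeholder for the call that kicks off the real CI/CD job."""
        return f"{service}-{version}-build-001"


    @app.route("/deployments", methods=["POST"])
    def create_deployment():
        payload = request.get_json(force=True)
        job_id = trigger_pipeline(payload["service"], payload["version"])
        # 202 Accepted: the deployment job was queued, not completed synchronously.
        return jsonify({"job_id": job_id, "status": "queued"}), 202


    if __name__ == "__main__":
        app.run(port=8080)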
Used SNS to facilitate communication and notifications within the system which involved configuring SNS topics, managing subscriptions, and integrating SNS with other services to trigger notifications for events like deployment status updates, error alerts, or system health checks.
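A small boto3 sketch of the SNS wiring described above; the topic name, region, and subscription endpoint are placeholders:

    # Illustrative SNS setup for deployment notifications.
    import boto3  # pip install boto3

    sns = boto3.client("sns", region_name="us-east-1")

    # Create (or look up) the topic and subscribe an on-call address to it.
    topic_arn = sns.create_topic(Name="deployment-events")["TopicArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops-alerts@example.com")

    # Publish a deployment-status event; other services can also subscribe via SQS or Lambda.
    sns.publish(
        TopicArn=topic_arn,
        Subject="Deployment finished",
        Message="payments-api 1.4.2 deployed to prod",
    )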
Actively contributed to the open-source community by developing CI framework tools based on OpenStack's Zuul, Openshift's Prow, GitHub, and GitLab CI, enhancing the CI/CD capabilities.
Demonstrated expertise in AWS native services, designing and deploying scalable and resilient cloud solutions for various projects.
Specialized in AWS Glue for ETL (Extract, Transform, Load) operations, ensuring efficient data processing and transformation. Developed and managed Lambda functions to automate tasks, improve operational efficiency, and reduce manual intervention. Leveraged AWS Step Functions to orchestrate complex workflows, optimizing automation and resource utilization.
Successfully implemented and maintained ETL pipelines using AWS technologies, guaranteeing data integrity and reliability. Worked with Databricks for big data processing and analytics, enabling data-driven insights and decision-making. Collaborated with cross-functional teams to design, implement, and maintain cloud-based infrastructure and applications.
Environment: Azure, Terraform, Kubernetes, Ansible, Shell, Python, Linux, Jira, Bitbucket, MySQL, Jenkins, Apache Tomcat 7.x, Azure DevOps, Docker, NoSQL, ARM Templates, Virtualization, Nagios, Splunk, AppDynamics, Nginx, LDAP, JDK 1.7, XML, SVN, Git, Windows, Maven.

Client: NYL (New York Life Insurance)    Apr 2021 – Sep 2022
Role: Sr DevOps Engineer
Responsibilities:
Involved in three different development teams and multiple simultaneous software releases.
Configured and monitored Amazon Web Services resources as well as involved in deploying the content cloud platform to Amazon Web Services using EC2, S3 and EBS.
Migrated an existing on-premises application to AWS using AWS Snowball, Ansible, and Terraform.
Secured communication between components and the database by enabling SSL on the SQL Server.
Developed shell scripts to automate the provisioning, configuration, and deployment of infrastructure resources using tools like Ansible, Puppet, and Chef, including setting up servers, networking, storage, and other components needed for application deployment.
Created shell scripts to support continuous integration and continuous deployment (CI/CD) pipelines; these scripts build, test, and deploy applications in an automated and efficient manner. Automated the build, testing, and deployment processes for applications written in Golang, ensuring smooth and efficient releases.
Utilized Golang expertise to develop and maintain automation tools, scripts, and services, building custom applications, microservices, and command-line interfaces (CLIs) in Golang to streamline various processes within the DevOps workflow.
Developed automation scripts in languages such as Python, Bash, and PowerShell to orchestrate CI/CD processes, facilitating tasks such as code compilation, unit testing, integration testing, packaging, and deployment.
Launched and configured Amazon EC2 cloud servers using AMIs, configured S3 versioning and lifecycle policies to back up files and archive them in Glacier, and configured AWS IAM and security groups in public and private subnets in the VPC.
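A boto3 sketch of the S3 versioning and Glacier lifecycle configuration mentioned above; the bucket name, prefix, and transition windows are assumptions:

    # Illustrative S3 versioning + lifecycle setup.
    import boto3

    s3 = boto3.client("s3")
    bucket = "backup-artifacts"

    # Keep prior object versions so accidental overwrites or deletes are recoverable.
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

    # Move objects under backups/ to Glacier after 30 days and expire old
    # noncurrent versions after a year.
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-backups",
                    "Status": "Enabled",
                    "Filter": {"Prefix": "backups/"},
                    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                    "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
                }
            ]
        },
    )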
Used TFS as a build tool on C# projects for the development of build artifacts on the source code.
Created a load balancer sheet describing the listeners, target groups, and targets for each load balancer, including ports, protocols, site certificates, and DNS-friendly names.
Provided firewall rules and security groups for traffic from the load balancers to components.
Managed the deployment and implementation of deliverables into DEV, QA, UAT and production environments.
Solid understanding of project life cycle management and strong experience working with Agile methodologies; used Agile tools such as Rally for tracking status and setting up velocities for projects.
Proficient in writing Terraform files to build AWS infrastructure following the Infrastructure as Code paradigm.
Implemented Terraform modules for EC2 Machines, Elastic Load Balancing, EKS Cluster, Azure VM, Azure Data Factory, AKS and releasing those modules into GitHub Enterprise Server.
Built customized Amazon Machine Images (AMIs) and deployed these customized images based on requirements.
Used Bash and Python (including Boto3) to supplement the automation provided by Ansible and Terraform for tasks such as encrypting the EBS volumes backing AMIs.
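One way to do that AMI-encryption piece with Boto3 is sketched below; the AMI ID, region, and KMS key alias are placeholders:

    # Produce an encrypted copy of an unencrypted AMI.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # copy_image re-creates the AMI and encrypts the EBS snapshots backing it.
    response = ec2.copy_image(
        SourceImageId="ami-0123456789abcdef0",
        SourceRegion="us-east-1",
        Name="base-image-encrypted",
        Encrypted=True,
        KmsKeyId="alias/ebs-default",
    )
    print("Encrypted AMI:", response["ImageId"])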
Involved in using Terraform to migrate legacy and monolithic systems to Amazon Web Services.
Provided security and managed user access and quota using AWS Identity and Access Management (IAM), including creating new Policies for user management.
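A short boto3 illustration of creating and attaching a custom IAM policy for user management; the policy, group, and bucket names are hypothetical:

    # Create a least-privilege policy and attach it to a group.
    import json
    import boto3

    iam = boto3.client("iam")

    policy_doc = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    "arn:aws:s3:::release-artifacts",
                    "arn:aws:s3:::release-artifacts/*",
                ],
            }
        ],
    }

    policy = iam.create_policy(
        PolicyName="release-artifacts-read-only",
        PolicyDocument=json.dumps(policy_doc),
    )
    iam.attach_group_policy(
        GroupName="build-engineers",
        PolicyArn=policy["Policy"]["Arn"],
    )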
Created and maintained cloud applications and migrated on-premises application servers to AWS.
Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and S3 buckets to ensure successful deployment of web application and database templates.
Involved in installation, configuration, and maintenance of Octopus for compilation and packaging of new code releases.
Automated manual infrastructure provisioning and deployment processes to increase efficiency and reduce human error, using scripting and programming in languages like Python, Bash, and PowerShell to automate routine tasks, create custom tools, and streamline infrastructure operations.
Analyzed cloud resource usage, identified cost optimization opportunities, and implemented strategies to optimize cloud costs, including rightsizing instances, utilizing reserved instances or savings plans, implementing cost allocation tags, and monitoring cost trends.
Automated SQL Server installation using PowerShell scripts in Octopus Deploy and configured WSFC (Windows Server Failover Clustering).
Performed DB restoration in DB servers.
To secure communication for caching, proposed AWS ElastiCache for Redis, which is HIPAA eligible and offers encryption in transit and at rest.
Configured the Redis cache service and a file share server in AWS.
Developed, maintained, and distributed release notes for each scheduled release.
Responsible for daily build and working closely with development and database teams to resolve any build issues.
Interacted with business users to gather business requirements and was involved in analyzing and documenting business and functional requirements for developing forms, documents, and reports.
Developed and maintained data integration solutions, leveraging APIs, HL7 standards, and other interoperability frameworks.
Designed, deployed, and managed cloud infrastructure components such as virtual machines, storage, networking, and load balancers.
Responsible for managing and administering the Ansible Tower infrastructure, including installation, configuration, and maintenance of Ansible Tower, as well as managing user access, roles, and permissions.
Developed Ansible playbooks and roles to automate various aspects of infrastructure provisioning, configuration management, and application deployments. This involved writing and maintaining Ansible code following best practices and standards.
Environment: AWS, S3, EBS, Elastic Load Balancer (ELB), Auto Scaling groups, C#.NET, MS Visual Studio 2012, IIS 6.0, AWS Cloud Server, SQL Server 2012, SQL Server 2008 R2, TFS, Octopus, PowerShell scripting, Windows and Linux environments.

Client: Dexcom, San Diego, California    Nov 2019 – Mar 2021
Role: DevOps Engineer
Responsibilities:
Created a fully automated build and deployment platform, coordinating code build promotions and orchestrating deployments using Jenkins and GitHub.
Deployed Microservices, including provisioning AWS environments using Ansible Playbooks.
Worked on container-based deployments using Docker and clustering them within OpenShift.
Designed a Rapid deployment method using Chef and Ansible to auto-deploy servers as needed.
Extensively worked on Jenkins for continuous integration and End-to-End automation for all builds and deployments.
Involved in Amazon Web Services (AWS) provisioning and AWS services like EC2, S3, RDS, DynamoDB, VPC, Route53, CloudWatch, CloudFormation, IAM, and Elasticsearch.
Managed GIT and GitHub repositories for branching, merging, and tagging.
Involved heavily in setting up the CI/CD pipeline using Jenkins, Maven, Nexus, GitHub, Ansible, Terraform, and AWS.
Worked on migrating a current application to Microservices architecture. This architecture included Docker as the container technology with Kubernetes.
Managed Product Backlog and tracked bugs using JIRA.
Designed and worked with the team to implement the ELK (Elasticsearch, Logstash, and Kibana) stack on AWS.
Involved in setting up application servers like Tomcat and WebLogic across Linux platforms and writing shell scripts, Perl, Python, and scripting on Linux.
Established infrastructure and service monitoring using Prometheus and Grafana.
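A minimal sketch of instrumenting a service with the Prometheus Python client so Prometheus can scrape it and Grafana can chart it; the metric names and port are assumptions:

    # Expose custom metrics on /metrics for the Prometheus scraper.
    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server  # pip install prometheus-client

    REQUESTS = Counter("app_requests_total", "Total requests handled")
    LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")


    @LATENCY.time()
    def handle_request() -> None:
        REQUESTS.inc()
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work


    if __name__ == "__main__":
        start_http_server(8000)  # serves the /metrics endpoint
        while True:
            handle_request()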
Building and deploying Java applications in QA, UAT, and Production environments.
Environment: AWS, Docker, Kubernetes, OpenStack, ANT, Maven, SVN, GIT, GitHub, Chef, Puppet, Ansible, Linux, Shell, Bash, Perl, Grafana, Jenkins, Tomcat, Jira.

Client: Verizon, Irving, TX    July 2018 – Oct 2019
Role: Build & Release Engineer
Responsibilities:
Used Jenkins as a continuous integration tool: created new jobs, managed required plugins, configured jobs by selecting source code management tools, build triggers, build steps, and post-build actions, scheduled automatic builds, and sent out build reports.
Develop/Improve continuous integration and automation scripts and perform database deployments.
Responsible for troubleshooting environmental issues.
Implemented a Continuous Integration and Continuous Delivery framework using SVN, Bitbucket, Ant, Maven, Jenkins, Bamboo, Nexus, ControlTier, and Make in a Linux environment.
Worked on creating various modules and automating different facts in Puppet, adding nodes to the enterprise Puppet Master, and managing Puppet agents. Wrote Puppet manifest files and implemented Puppet to manage infrastructure as code (IaC).
Integrated Bitbucket with JIRA to transition JIRA issues from within Bitbucket Server and monitored JIRA issues in Bitbucket Server.
Set up the Linux Cron jobs to automate various build-related and application data synchronization jobs.
Installed, maintained, and upgraded Drupal and WordPress on a LAMP stack and configured the LAMP stack on Unix/Linux servers.
Built and managed a highly available monitoring infrastructure to monitor different application servers and their components using Nagios.
Designed and scripted Ant builds for J2EE, web services, reusable JARs, web clients, and open-source components to create the master build.xml and build properties, and provided technical support to the development team for compilation problems.
Environment: Linux, Windows, Tomcat, Jira, JBoss, Puppet, Ant, Maven, SVN, Bitbucket, Nagios, Java, Shell Scripting, Python, Bash, Agile, Scrum.

Client: Capgemini, Bangalore, India    Aug 2012 – July 2017
Role: Linux Administrator
Responsibilities:
Handled day-to-day problems related to file systems, disk, memory, CPU, network, etc.
Built application and database servers using Azure, created AMIs, and used RDS for Oracle DB. Installed, configured, and administered the log analysis tool CloudWatch.
Worked on migrating VMware to AWS using Snowball and VM Import/Export; experience with CodeDeploy, Lambda, VPC, and the CLI. Experience with Nagios and Splunk monitoring and automating site configuration using Puppet. Experience with shell scripting (sh, bash, csh, ksh).
Worked on version control using GitHub, automated builds using Jenkins, automated tasks using Puppet, and worked on Tomcat and JBoss installation and configuration as well as MongoDB and MySQL.
Deploy Puppet to completely provision and manage AWS EC2 instances, volumes, DNS, and S3.
Involved in building RPM packages as required using fpm and deploying them to Puppet agents using Puppet Enterprise.
Used UNIX/Linux shell scripting to automate system administration tasks, system backup/restore management and user account management.
Created Oracle and MS SQL Server databases and maintained tablespaces in Oracle.
Backed up and restored databases in Oracle and MS SQL Server, worked on configuring the server monitoring tool Nagios, and limited user account privileges using sudoers.
Support for Windows and Linux problems assigned by client operations.
Environment: RedHat Linux 5.5/6.3, Kickstart, Ubuntu, Windows, Oracle, DB2, Jenkins, Git, Subversion, vSphere, VMware, AWS, Chef, Puppet, Apache Web Server, JBoss, WebSphere Application Server, and UNIX shell scripting.