Babu LS - Sr. Certified Cloud DevOps/DevSecOps Engineer
[email protected]
Location: Hartford, Connecticut, USA
Relocation: Remote/CT
Visa: H1B

CAREER PROFILE:

10+ years of experience in the IT industry spanning development, systems administration, and software configuration management (SCM), including DevOps build/release management.
Worked with version control systems including CVS, SVN (Subversion), Git, GitHub, Bitbucket, AWS CodeCommit, and S3.
Experience in maintaining Atlassian products such as JIRA, Confluence, Bamboo, and Bitbucket.
Implemented API Gateway monitoring and logging using Amazon CloudWatch, enabling real-time visibility into API performance, troubleshooting, and error handling.
Implemented API Gateway caching to improve response time and reduce the load on backend services, resulting in faster and more efficient API operations.
Implemented API Gateway integrations with other AWS services such as DynamoDB, S3, Lambda, and Step Functions, enabling seamless communication and data processing within the cloud environment.
Worked closely with stakeholders to gather requirements, design and implement customized GitLab configurations, and drive continuous improvement initiatives to enhance the development process.
Knowledge of monitoring and logging solutions such as Prometheus, Grafana, and Fluentd to monitor the performance and health of Go applications.
Experience in building and managing CI/CD pipelines for Go applications using tools like Jenkins, Travis CI, or CircleCI.
Proficient in leveraging GitHub as a robust repository and collaboration platform, effectively managing version control, code reviews, and pull requests to streamline team collaboration and enhance code quality.
Strong expertise in architecting and deploying highly available, fault-tolerant, and scalable applications on AWS, leveraging a wide range of services such as EC2, S3, RDS, Lambda, DynamoDB, and VPC.
Strong knowledge of infrastructure automation using tools like Terraform or CloudFormation. Implemented Infrastructure as Code practices to manage cloud resources, ensuring consistent and reproducible deployments.
Proficient in creating and maintaining Jenkins pipeline scripts to automate build, test, and deployment workflows, ensuring consistent and reliable software releases.
Skilled in configuring and customizing CloudBees Jenkins instances, including job setup, environment configurations, and plugin installations, to meet specific project requirements.
Familiarity with test-driven development methodologies and their application in Node.js development, including continuous integration and delivery (CI/CD) pipelines.
Skilled in automating build, test, and deployment processes through GitHub Actions, ensuring continuous integration and delivery of high-quality software projects.
Solid understanding of Kubernetes architecture and concepts. Hands-on experience in managing Kubernetes clusters, deploying applications, and troubleshooting issues. Leveraged Argo CD to deploy and manage applications on Kubernetes.
Experience in using monitoring and logging tools such as Prometheus and ELK stack for monitoring and analyzing the performance of Node.js applications.
Proficient in using configuration management tools such as Ansible, Chef, or Puppet for automating the setup and configuration of Node.js infrastructure.
Familiarity with infrastructure as code tools such as Terraform or CloudFormation for defining and deploying Node.js infrastructure.
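The Infrastructure as Code points above (Terraform/CloudFormation) can be illustrated with a short sketch. This is not taken from any specific project: it renders a minimal, hypothetical CloudFormation-style template for a tagged, versioned S3 bucket; the bucket name, tag keys, and logical resource ID are all illustrative assumptions.

```python
import json

def s3_bucket_template(bucket_name: str, env: str) -> dict:
    """Render a minimal CloudFormation-style template (as a dict) for a
    versioned, tagged S3 bucket. All names here are illustrative."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppBucket": {  # hypothetical logical resource ID
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": f"{bucket_name}-{env}",
                    "VersioningConfiguration": {"Status": "Enabled"},
                    "Tags": [
                        {"Key": "Environment", "Value": env},
                        {"Key": "ManagedBy", "Value": "IaC"},
                    ],
                },
            }
        },
    }

if __name__ == "__main__":
    # Emitting JSON makes the template consumable by the CloudFormation CLI.
    print(json.dumps(s3_bucket_template("release-artifacts", "prod"), indent=2))
```

Generating templates programmatically like this is one way to keep environments consistent: the same function produces `dev` and `prod` stacks that differ only in parameters.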


TECHNOLOGY FOCUS:

Infrastructure as a Service: AWS, Azure, GCP
Virtualization Platforms: VirtualBox, VMware, Docker
Configuration Management: Chef, Puppet, Ansible
CI/CD, Test & Build Systems: Ant, Maven, Jenkins, GitHub Actions, GitLab, CloudBees
Application/Web Servers: WebLogic, Tomcat, JBoss, Nginx
Scripting & IaC Languages: Bash, Perl, Ruby, Python, Terraform (HCL)
Databases: SQL Server, DynamoDB
Version Control: Git, SVN
Containerization Tools: Docker, Kubernetes

PROFESSIONAL EXPERIENCE:

Project: Belk, Charlotte, North Carolina, USA Apr 2023 - Present
Senior DevOps Engineer

Designed, deployed, and managed a highly scalable, reliable, and secure infrastructure on AWS cloud, using services like EC2, S3, RDS, VPC, CloudWatch, and Lambda.
Utilized Terraform to create, manage, and update Infrastructure as Code (IaC) scripts, automating the provisioning and deployment of infrastructure.
Set up and managed CI/CD pipelines using Jenkins and Maven. Coordinated with development teams to integrate their projects into the CI/CD pipeline and maintained Git repositories for version control.
Implemented Docker for containerization of applications, and managed container orchestration via Kubernetes and AWS EKS. Ensured efficient scaling and management of deployed containers.
Leveraged Amazon CloudWatch and other application performance monitoring tools to regularly monitor the system, identify potential issues, and troubleshoot system errors.
Led the implementation and administration of GitLab as the primary source code repository and collaboration platform for the organization, enabling efficient version control, code reviews, and streamlined development workflows.
Provided technical leadership and guidance to development teams in adopting and maximizing the potential of GitLab, ensuring adherence to best practices, code quality standards, and efficient code collaboration.
Utilized AWS Lambda for serverless computing and AWS CloudFormation (CFT) for infrastructure provisioning.
Oversaw Kubernetes clusters, ensuring smooth container orchestration and management.
Employed Istio to enhance microservices communication and streamline service mesh management.
Utilized Ansible for DevOps configuration management and automation.
Developed Ansible playbooks and roles for provisioning and configuring infrastructure.
Developed robust and scalable Java applications, utilizing industry best practices and design patterns.
Set up Datadog monitoring across different servers and AWS services.
Set up AWS infrastructure monitoring through Datadog and application performance monitoring through AppDynamics.
Good working experience with DevSecOps technologies such as Identity and Access Management, Directory Service, CloudWatch, CloudTrail, Amazon Cognito, AWS Single Sign-On, and AWS Config.
Managed a secured user directory using Amazon Cognito user pools and provided sign-in through identity providers such as Microsoft Active Directory using SAML, in conjunction with AWS Identity and Access Management.
Configured CloudWatch and Datadog to monitor real-time granular metrics of all AWS services and configured individual dashboards for each resource.
Increased pre-production server visibility by producing Datadog metrics. Enabled Datadog APM and JVM metrics in different microservices. Created Datadog dashboards to visualize microservice metrics.
Proficient in LoadRunner with expertise in Java, MQ, and web service protocols.
Conducted comprehensive performance testing to identify system bottlenecks and optimize application performance.
Demonstrated skills in BlazeMeter and JMeter for web services and MQ testing.
Designed and executed performance tests, simulating various scenarios to evaluate application resilience and scalability.
Conducted in-depth analysis of production workloads to understand system behavior under real-world conditions.
Implemented RESTful APIs and microservices using Java and frameworks like Spring Boot, enabling seamless integration with other systems.
Collaborated with cross-functional teams to design and implement CI/CD pipelines using tools like Jenkins, Git, and Docker.
Automated infrastructure provisioning and management using Terraform, defining infrastructure as code (IaC) for improved scalability and reproducibility.
Configured and maintained monitoring systems such as Prometheus and Victoria Metrics, ensuring real-time visibility into application performance and resource utilization.
Developed custom Prometheus exporters to monitor application-specific metrics, enabling effective monitoring and alerting.
Proficient in scripting languages such as Shell, Python, Ruby, Perl, and JavaScript, and in working with XML.
Developed and implemented the MVC Architectural Pattern using Struts Framework including JavaScript, EJB, and Action classes.
Implemented highly interactive features and redesigned parts of products in plain JavaScript to work around jQuery compatibility issues.
Configured and maintained GitLab instances, including user management, access controls, project creation, and integration with other tools and services, ensuring a secure and efficient development environment.
Customization and Development: Developed custom components, workflows, and templates using AEM's Java-based technologies, including Adobe Experience Manager Core Components and Adobe Granite.
Developed and maintained infrastructure-as-code using tools like Ansible and Chef to automate provisioning and configuration management of mainframe resources.
Integrated mainframe workflows with version control systems (e.g., Git) to enable code collaboration and track changes effectively.
Implemented continuous integration and delivery (CI/CD) pipelines for mainframe applications, streamlining the software release process.
Implemented and maintained CI/CD pipelines for Node.js applications, utilizing tools like Jenkins, GitLab CI/CD, or CircleCI to automate build, test, and deployment processes.
Implemented and optimized monitoring and logging solutions for Node.js applications, leveraging tools like Prometheus, Grafana, the ELK stack, or Amazon CloudWatch, to ensure high availability and performance.
Currently monitoring and optimizing application performance using Prometheus and Grafana in real-time to ensure optimal user experience.
Actively automating CI/CD pipelines with Jenkins, continuously deploying and monitoring changes for immediate feedback.
Integrating performance testing into our ongoing CI/CD pipeline, identifying bottlenecks as soon as they arise.
Conducting real-time load tests with Apache JMeter to assess the application's ability to handle traffic spikes.
Conducted assessments to identify migration candidates, dependencies, and potential risks.
Produced detailed migration plans, including cost estimates, resource requirements, and timelines.
Led data migration efforts using AWS Database Migration Service (DMS) and AWS Snowball.
Managed and maintained the CloudBees Jenkins Distribution, ensuring high availability and optimal performance.
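The custom Prometheus exporters mentioned above ultimately serve metrics in the Prometheus text exposition format. Below is a minimal sketch of that formatting step alone (the HTTP serving layer is omitted); the metric and label names are made up for illustration.

```python
def prometheus_exposition(name, help_text, samples, metric_type="gauge"):
    """Format metric samples as Prometheus text exposition output.

    `samples` maps a tuple of (label, value) pairs to a numeric value,
    e.g. {(("service", "checkout"),): 12}. Names here are illustrative.
    """
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} {metric_type}"]
    for labels, value in samples.items():
        # Render labels as key="value" pairs inside curly braces.
        label_str = ",".join(f'{k}="{v}"' for k, v in labels)
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(prometheus_exposition(
        "app_queue_depth",                  # hypothetical metric name
        "Jobs waiting in the work queue",
        {(("service", "checkout"),): 12},
    ))
```

A real exporter would expose this text on an HTTP endpoint (conventionally `/metrics`) for the Prometheus server to scrape on its configured interval.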



Project: Belk, Charlotte, North Carolina, USA (Offshore) Feb 2021 - Feb 2023
DevOps Engineer

Utilized Amazon CloudWatch and other monitoring tools to keep track of system performance and troubleshoot issues.
Set up alerts for system anomalies and responded swiftly to minimize potential impact on services.
Utilized Git and GitHub for source code management, ensuring efficient tracking and control of code changes across multiple projects.
Reviewed pull requests and coordinated code merges, maintaining code quality and consistency.
Set up and managed CI/CD pipelines using GitHub Actions, automating build, test, and deployment processes and reducing deployment times.
Implemented and managed the CloudBees CI/CD platform for developing, testing, and deploying applications.
Used AWS services such as EC2, DynamoDB, API Gateway, IAM, Cognito, and CloudWatch.
Used Cognito for multi-factor authentication when users create an account in the portal.
Developed infrastructure as code with CloudFormation templates to create custom-sized VPCs, Subnets, Kinesis, Cognito, EC2 instances, ELB, Security Groups, Elastic Container Service (ECS), CloudFront, RDS, S3, Route 53, SNS, SQS, Glue, EMR, API Gateway, Lambda, ECR, DynamoDB, ElastiCache, and Secrets Manager. Worked on tagging standards for proper identification and ownership of AWS services.
Developed and maintained Jenkins pipelines using Groovy scripts for automated build, test, and deployment.
Created custom automation scripts in Groovy and Shell for various DevOps tasks.
Implemented Spinnaker for continuous delivery, orchestrating and automating software deployments.
Managed and optimized AWS resources, with a focus on core services like EC2, S3, IAM, and VPC.
Automated performance tests and integrated them into CI/CD pipelines for early performance issue detection.
Collaborated effectively with cross-functional teams, including developers, testers, and operations, to ensure a unified approach to performance testing.
Created comprehensive documentation of performance test plans, methodologies, and best practices.
Proficient in designing, implementing, and managing data warehousing solutions using Amazon Redshift, a leading cloud-based data warehousing platform.
Led the migration of on-premises data warehouses to Amazon Redshift, achieving significant cost savings and improved performance.
Designed and optimized Redshift data models, including schema design, data distribution, and sort key strategies, to maximize query performance and minimize query execution times.
Conducted data loading and transformation processes using various ETL (Extract, Transform, Load) tools and techniques, ensuring data accuracy and consistency within Redshift clusters.
Created and maintained data pipelines to ingest data from various sources into Amazon Redshift, automating data integration and processing tasks.
Implemented security best practices for Amazon Redshift, including IAM (Identity and Access Management) roles, encryption, and VPC (Virtual Private Cloud) configurations, to secure sensitive data.
Implemented and managed SonarQube for continuous inspection of code quality, performing automatic reviews to detect bugs, code smells, and security vulnerabilities.
Set up and managed Nexus Repository as centralized artifact storage, ensuring consistent access to build artifacts and dependencies.
Integrated Nexus with CI/CD pipelines to automate artifact storage and retrieval, improving build and deployment efficiency.
Implemented and optimized CI/CD pipelines using GitLab CI/CD, automating build, test, and deployment processes to enable continuous integration and delivery of software projects.
Utilized Maven as a build automation tool for managing project builds, dependencies, and documentation.
Created monitors for Datadog and CloudWatch using Terraform; integrated Datadog with Slack and PagerDuty.
Integrated Datadog into Jenkins pipelines and automated dashboard and alert creation.
Created system alerts using various Datadog tools and alerted application teams based on the escalation matrix.
Handled browser compatibility issues related to CSS, HTML, and JavaScript across IE, Firefox, and Chrome.
Employed AWS Auto Scaling to dynamically adjust resources in response to changing traffic patterns.
Designed and implemented alerting rules using Alert Manager to proactively detect and respond to performance issues and system failures.
Implemented automated testing frameworks using tools like JUnit and Selenium, ensuring high-quality code and reliable application behavior.
Configured Maven pom.xml files to manage project dependencies, plugin configurations, and properties.
Integrated Maven with tools like SonarQube for code quality analysis, ensuring high-quality code in the build process.
Set up and managed Splunk for real-time log management and analysis, providing insights into system performance and potential issues.
Developed and maintained Splunk queries, reports, alerts, and dashboards tailored to meet specific operational requirements.
Wrote and maintained Python and Bash scripts to automate routine tasks, reducing manual effort and increasing productivity.
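The tagging-standard work described above can be sketched as a small compliance audit like the Python automation scripts mentioned in this section. The required tag keys below are illustrative assumptions, not the actual standard used on the project.

```python
# Hypothetical tagging standard: every resource must carry these keys.
REQUIRED_TAGS = {"Owner", "Environment", "CostCenter"}

def missing_tags(resource_tags: dict) -> set:
    """Return the required tag keys absent from a resource's tag dict."""
    return REQUIRED_TAGS - resource_tags.keys()

def audit(resources: dict) -> dict:
    """Map resource name -> sorted missing tag keys, listing only
    non-compliant resources so the report stays short."""
    report = {name: sorted(missing_tags(tags)) for name, tags in resources.items()}
    return {name: missing for name, missing in report.items() if missing}

if __name__ == "__main__":
    inventory = {  # illustrative inventory, e.g. pulled from a cloud API
        "web-server": {"Owner": "platform", "Environment": "prod", "CostCenter": "42"},
        "batch-node": {"Owner": "data"},
    }
    print(audit(inventory))
```

In practice the inventory would come from a cloud provider's API rather than a literal dict, and a non-empty report could fail a pipeline stage or page the owning team.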

Project: Rakuten, Bangalore, India Dec 2019 - Jan 2021
DevOps Engineer

Designed and implemented Infrastructure as Code (IaC) using AWS CloudFormation and Terraform to automate the provisioning and management of AWS resources, reducing manual intervention and increasing efficiency.
Developed and managed CI/CD pipelines using Jenkins, AWS CodePipeline, and other related tools to automate the build, test, and deployment process, ensuring rapid, reliable, and repeatable delivery of features.
Collaborated with DevOps teams to optimize the CI/CD pipeline, streamlining code delivery, testing, and deployment processes.
Managed Docker containerization and Kubernetes orchestration for deploying, scaling, and managing distributed applications, providing scalable and reliable microservice-based solutions.
Made extensive use of AWS services such as EC2, S3, IAM, RDS, Lambda, CloudWatch, and Elastic Beanstalk to deploy, monitor, and maintain applications and data.
Utilized configuration management tools like Ansible, Puppet, and Chef for automating system configurations, enhancing system consistency and stability.
Implemented comprehensive monitoring and logging solutions using Amazon CloudWatch and the ELK stack (Elasticsearch, Logstash, and Kibana) to track system health and debug issues quickly.
Conducted knowledge-sharing sessions to disseminate performance testing insights across the organization.
Led or contributed to performance improvement projects, resulting in measurable enhancements in application response times and system reliability.
Set up real-time monitoring systems and alerting mechanisms to proactively address performance issues.
Developed workflows for Cognito across all environments and secured them.
Ensured secure practices by managing access control using AWS IAM, securing data with encryption using AWS KMS, and implementing AWS security best practices.
Optimized application performance by leveraging Amazon CloudFront, Auto Scaling, and load balancing services, ensuring high availability and reliability of applications.
Designed and implemented disaster recovery strategies and high-availability architectures using AWS services like Multi-AZ deployments, Elastic IP addresses, and Route53, ensuring business continuity.
Integrated application logs and set up triggered monitors and alerts with the Datadog agent monitoring tool.
Led the design and deployment of scalable, secure, and resilient containerized applications using Amazon Elastic Kubernetes Service (EKS).
Managed the full lifecycle of EKS clusters, including creation, scaling, updating, and monitoring, resulting in highly efficient and available applications.
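Deploying to EKS as described above typically involves validating Kubernetes manifests before they reach the cluster. Below is a minimal sketch of such a pre-deployment check; it covers only a small, illustrative subset of the Deployment schema and assumes manifests in JSON form.

```python
def validate_deployment(manifest: dict) -> list:
    """Return a list of problems found in a Kubernetes Deployment manifest
    (JSON form). The checks are an illustrative subset, not the full schema."""
    problems = []
    if manifest.get("kind") != "Deployment":
        problems.append("kind must be 'Deployment'")
    if "name" not in manifest.get("metadata", {}):
        problems.append("metadata.name is required")
    spec = manifest.get("spec", {})
    if spec.get("replicas", 1) < 1:
        problems.append("spec.replicas must be >= 1")
    containers = spec.get("template", {}).get("spec", {}).get("containers", [])
    if not containers:
        problems.append("at least one container is required")
    for c in containers:
        # Untagged images default to :latest, which makes rollbacks ambiguous.
        if ":" not in c.get("image", ""):
            problems.append(f"container {c.get('name', '?')} should pin an image tag")
    return problems
```

A check like this can run as an early CI stage so malformed manifests fail fast, before `kubectl apply` ever talks to the cluster.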

Project: Admiral Group plc, Bangalore, India Apr 2017 - Nov 2019
DevOps Engineer

Led the design, deployment, and management of cloud infrastructure using GCP services such as Compute Engine, Cloud Storage, and Cloud SQL. Utilized Terraform for infrastructure as code (IaC) to ensure reproducibility and scalability.
Configured and managed Jenkins and GitLab CI pipelines to automate the build, test, and deployment of applications onto GCP. Ensured rapid iteration and robust testing procedures.
Utilized Ansible for automated configuration management, allowing for the efficient and consistent deployment and scaling of applications across multiple GCP instances.
Implemented a comprehensive monitoring and alerting system using Prometheus and Grafana on GCP. Regularly evaluated system metrics and set up alerts for preemptive issue resolution, ensuring optimal performance and availability.
Deployed a variety of services on GKE clusters including stateless, stateful, and daemon services based on application architecture and requirements.
Leveraged Git as the primary tool for version control to track and manage code changes, facilitating collaboration and ensuring the integrity and continuity of projects.
Devised and implemented effective branching strategies using Git, allowing the development team to work in parallel without impacting the production codebase.
Integrated Git and GitHub with Jenkins and other CI/CD tools to automate the build, test, and deployment process, accelerating the software development lifecycle.
Configured GitHub Actions to automate software workflows, including CI/CD, testing, and code linting, resulting in more efficient and reliable development processes.
Collaborated with the security team to ensure that application performance and security remained closely aligned.
Provided immediate responses to performance incidents, rapidly diagnosing issues and implementing solutions to minimize downtime.
Managed and organized source code using GitLab. Enforced best practices for version control, ensuring efficient collaboration and project tracking.
Utilized Splunk for centralized log management and analysis, creating dashboards to visualize and monitor real-time application performance. Leveraged Splunk's advanced search capabilities for efficient troubleshooting and incident resolution, reducing system downtime and improving overall application reliability.
Worked closely with security teams to ensure the implementation of GCP security best practices including IAM policies, network security settings, and data encryption. Assisted in the auditing and compliance activities as needed.
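Prometheus alerting rules like those set up above commonly include a `for:` clause so an alert fires only after a condition has persisted, suppressing one-sample blips. A minimal sketch of that logic over a window of samples; the thresholds are made up.

```python
def alert_fires(samples, threshold, for_samples):
    """Return True if `samples` stays strictly above `threshold` for at
    least `for_samples` consecutive points -- mimicking the behavior of a
    Prometheus alerting rule's `for:` duration."""
    run = 0  # length of the current consecutive over-threshold streak
    for value in samples:
        run = run + 1 if value > threshold else 0
        if run >= for_samples:
            return True
    return False

if __name__ == "__main__":
    # A single spike does not page anyone; sustained breach does.
    cpu = [42, 95, 40, 96, 97, 98]          # hypothetical CPU% samples
    print(alert_fires(cpu, threshold=90, for_samples=3))
```

The same shape of logic appears in Alertmanager-adjacent tooling and in hand-rolled health checkers: separating "condition is true" from "condition has been true long enough" is what keeps alert noise down.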

Project: Flipkart, Hyderabad, India Apr 2014 - Mar 2017
Role: Linux Administrator

Managed Linux servers for multiple high-traffic applications, ensuring maximum uptime, efficient performance, and secure operations.
Developed and maintained shell scripts for automating tasks, significantly improving operational efficiency.
Monitored system performance using tools like top, iostat, and vmstat, and performed necessary tuning to optimize system and application performance.
Implemented robust system security measures, including managing user privileges, configuring firewalls, and running regular system audits using tools like SELinux and Fail2ban.
Managed software installation, upgrades, and patches using package management tools like YUM and APT.
Configured and maintained crucial server services such as Apache, Nginx, MySQL, and Postfix, ensuring optimal configuration for high performance and reliability.
Developed and executed backup strategies and disaster recovery plans using tools like rsync and dd, ensuring data integrity and availability.
Managed and maintained Linux server infrastructure, ensuring high availability, performance, and security for critical applications and services.
Monitored server performance, including CPU utilization, memory usage, disk space, and network traffic, using tools such as Nagios, Zabbix, or Prometheus, and implemented proactive measures to optimize system performance.
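The proactive disk-space monitoring described above can be sketched with the Python standard library alone, as a stand-in for the kind of scripts such a role automates. The 80% warning threshold is an illustrative assumption, not a value from the project.

```python
import shutil

def disk_usage_percent(path="/"):
    """Percentage of the filesystem containing `path` that is in use."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

def check_disk(path="/", warn_at=80.0):
    """Return (status, percent_used). The threshold is illustrative; a real
    check would feed a tool like Nagios or Zabbix rather than print."""
    pct = disk_usage_percent(path)
    status = "WARN" if pct >= warn_at else "OK"
    return status, pct

if __name__ == "__main__":
    status, pct = check_disk("/")
    print(f"{status}: / is {pct:.1f}% full")
```

Run from cron, a script like this (extended to email or page on WARN) covers the "proactive measures" half of monitoring: catching a filling disk before it takes a service down.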

EDUCATION:

Bachelor of Technology, Electronics and Communication Engineering (ECE), JNTUA