
Kalyan
Principal DevOps/SRE/Cloud Engineer/System Administrator
Email: [email protected]
Phone: +1 (737) 727-8209
Location: Austin, Texas, USA
Relocation: No
Visa: Citizen


PROFESSIONAL SUMMARY

13+ years of professional experience as a Linux Administrator and Build & Release/DevOps Engineer, with AWS, GCP, and Azure cloud platform experience.
Experienced in both interpreted languages (Perl and Python) and compiled languages (C, C++, C#/.NET, and Java).
Integrated GCP CI/CD pipelines successfully, resulting in a 40% reduction in deployment times and an increased release frequency.
Employed Terraform to efficiently manage GCP infrastructure, leading to optimized resource utilization and achieving a 20% cost reduction.
Developed modular Terraform scripts for provisioning and managing AWS infrastructure. This initiative enhanced scalability and reduced deployment time.
Integrated Terraform with CI/CD pipelines, automating infrastructure deployments and ensuring consistency across various environments.
Automated Google Cloud Platform infrastructure using GCP Cloud Deployment Manager and secured the GCP infrastructure using private subnets, security groups, and network ACLs (VPC). Configured and deployed instances in GCP environments and data centers; familiar with Compute Engine, Kubernetes Engine, Stackdriver Monitoring, Elasticsearch, and managing security groups.
Hands-on experience with KubeVela and its components, including Vela Core, the CLI, the Application Delivery Control Plane, and the concept of application composition.
Built Terraform modules that encapsulate groups of resources dedicated to one task, reducing the amount of code that must be developed for similar infrastructure components.
Wrote Ansible playbooks with Python SSH as the wrapper to manage node configurations, tested playbooks on instances using the Python SDK, and automated infrastructure activities such as continuous deployment, application server setup, and stack monitoring.
Authored Terraform templates: configuration files that define and describe the infrastructure resources required for a particular application or environment using a declarative language, HashiCorp Configuration Language (HCL).
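As an illustrative sketch (not code from this resume's actual projects): Terraform also accepts a JSON syntax (`*.tf.json`) equivalent to HCL, so a module call can be generated programmatically. The module name, source path, and variables below are hypothetical.

```python
import json

def render_module_call(name, source, **variables):
    """Build a Terraform JSON-syntax document that invokes one module.

    Equivalent HCL:
        module "<name>" {
          source = "<source>"
          ...variables...
        }
    """
    return {"module": {name: {"source": source, **variables}}}

# Hypothetical VPC module call; written to main.tf.json for `terraform init/apply`.
doc = render_module_call(
    "network",
    "./modules/vpc",          # hypothetical local module path
    cidr_block="10.0.0.0/16",
    enable_dns=True,
)
print(json.dumps(doc, indent=2))
```

The generated document can be dropped into a workspace as `main.tf.json` alongside HCL files; Terraform treats both syntaxes identically.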
Strong understanding of Kafka architecture, including topics, partitions, producers, consumers, and brokers, which is essential for a DevOps engineer working in a Kafka environment.
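To illustrate the partitioning concept above: a keyed record is mapped to a partition by hashing its key, so all records with the same key land on the same partition and keep their relative order. Kafka's default partitioner uses murmur2; the sketch below substitutes a simple deterministic stand-in hash.

```python
import hashlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Map a record key to a partition deterministically.

    Simplified stand-in for Kafka's key-based partitioning: the same
    key always routes to the same partition, preserving per-key order.
    """
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Records sharing a key always route to the same partition.
p1 = partition_for(b"order-42", 6)
p2 = partition_for(b"order-42", 6)
print(p1 == p2)  # True
```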
Proficient in developing and administering MySQL databases, including the use of replication, clustering, and monitoring solutions to improve data availability and performance.
In a DevOps role, MySQL database deployments and version control are automated through CI/CD pipelines, enabling agile development and seamless integration of database updates.
Skilled in setting up and administering PostgreSQL databases for optimal performance, including implementing high availability, backup, and recovery techniques.
Successfully implemented automated deployment pipelines based on PostgreSQL, resulting in the seamless integration of database updates within CI/CD workflows. This initiative led to increased development and operational efficiency.
Operating in a DevOps role, I orchestrated the provisioning and configuration of databases using infrastructure-as-code principles. This facilitated effective collaboration between development and operations teams.
Database migrations and updates inside CI/CD pipelines have been streamlined, ensuring dependable and scalable data management while optimizing application performance.
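The CI/CD database-update flow described above typically reduces to applying versioned migrations in order and recording which have already run, so reruns are no-ops. A minimal sketch, with hypothetical migration names and an in-memory set standing in for a schema_migrations table:

```python
def apply_migrations(applied: set, available: dict) -> list:
    """Apply pending migrations in version order, recording each one.

    `applied` stands in for a schema_migrations table; `available` maps
    version name -> callable that performs the migration.
    """
    ran = []
    for version in sorted(available):
        if version not in applied:
            available[version]()   # run the migration (DDL/DML in practice)
            applied.add(version)   # record it so reruns skip it
            ran.append(version)
    return ran

# Hypothetical migrations; a CI/CD job would call this on every deploy.
log = []
migrations = {
    "001_create_users": lambda: log.append("users table"),
    "002_add_index": lambda: log.append("index"),
}
done = set()
print(apply_migrations(done, migrations))  # ['001_create_users', '002_add_index']
print(apply_migrations(done, migrations))  # [] -- idempotent rerun
```

Tools such as Flyway or Liquibase implement this pattern for real databases; the sketch only shows the ordering-and-ledger idea.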
Strong proficiency in shell scripting: Bash for Linux and macOS platforms, and PowerShell for Windows systems.
Established and managed CI/CD pipelines to automate the processes of building, testing, and deploying software. This led to reduced release cycles and improved code quality.
Introduced automated CI/CD pipelines, significantly enhancing software delivery efficiency and streamlining the development process.
Used Docker and Kubernetes to orchestrate containerized applications, ensuring smooth deployment and scalability.
Familiar with development methodologies such as Waterfall, Scrum, Kanban, Agile, and hybrid.
Excellent configuration management using Puppet, Chef, and Ansible.
Experienced in Configuration Management, Cloud Infrastructure, and Automation utilizing Amazon Web Services (AWS), Ant, Maven, Jenkins, Chef, SVN, Git, GitHub, Clear Case, and Tomcat.
Expertise in installing, configuring, and managing Web Logic, Apache, VMWare Server in clustered environments for High Availability, Load balancing, and fault tolerance.
Extensive proficiency in Pivotal Cloud Foundry (PCF), encompassing hands-on experience in the installation, administration, and scalability of applications, ensuring robust availability and reliability.
Demonstrated mastery in implementing CI/CD pipelines, automating the build, testing, and deployment procedures for PCF applications. Proficiently integrated PCF deployments with source control systems, optimizing delivery efficiency.
Knowledgeable in transferring data from data centers to the cloud using the AWS Snowball (Import/Export) service.
As a Ping Developer, I successfully implemented and fine-tuned Ping Identity solutions for multi-tier authentication and authorization, resulting in an improved user experience and enhanced security posture.
TECHNICAL SKILLS

CLOUD ENVIRONMENTS: AWS, AZURE, GCP
CONTAINERIZATION TOOLS: DOCKER, KUBERNETES, OPENSHIFT, DOCKER SWARM
CONFIGURATION MANAGEMENT & INFRASTRUCTURE TOOLS: CHEF, PUPPET, ANSIBLE, TERRAFORM
MONITORING TOOLS: SPLUNK, NAGIOS, ELK, PROMETHEUS, GRAFANA
DATABASES: ORACLE, MYSQL, DYNAMODB, SQL SERVER, NOSQL
BACKUP TOOLS: VERITAS/SYMANTEC NETBACKUP
BUILD & INTEGRATION TOOLS: ANT, MAVEN, BAMBOO, JENKINS
VERSION CONTROL TOOLS: SUBVERSION (SVN), GIT, GITHUB, BITBUCKET
WEB SERVERS: APACHE TOMCAT, IBM WEBSPHERE, JBOSS, ORACLE WEBLOGIC
LANGUAGES/SCRIPTS: C, HTML, SHELL, BASH, PHP, PYTHON, RUBY, PERL
WEB TECHNOLOGIES: HTML, CSS, JAVASCRIPT, BOOTSTRAP, XML, JSON
AWS SERVICES: EC2, ELB, VPC, RDS, AMI, IAM, CLOUDFORMATION, S3, CLOUDWATCH, EBS, ROUTE 53
AZURE SERVICES: APP SERVICES, KEY VAULT, FUNCTION APP, STORAGE ACCOUNTS, AZURE ACTIVE DIRECTORY (AZURE AD), AZURE CONTAINER REGISTRY (ACR), AZURE KUBERNETES SERVICE (AKS), AZURE SQL, AZURE DATA FACTORY
BUG TRACKING TOOLS: JIRA, BUGZILLA, SERVICENOW, REMEDY
SDLC: AGILE/SCRUM METHODOLOGIES, WATERFALL
CI/CD TOOLS: GITHUB CI/CD PIPELINES, JENKINS, TERRAFORM


EDUCATION:

Master of Science in Computer Science

WORK EXPERIENCE

COMPANY: CITI
LOCATION: TX
NOVEMBER 2020 - PRESENT
ROLE: SR. AWS DEVOPS ENGINEER/SRE
RESPONSIBILITIES:-

Designed and implemented highly available and scalable AWS infrastructure using EC2, S3, RDS, ECS, EBS, and ELB to meet service level objectives (SLOs) and service level agreements (SLAs).
Configured AWS security using IAM, S3 bucket policies, and VPC security groups to ensure secure access and compliance with industry standards.
Configured and optimized Databricks clusters for performance, scalability, and reliability based on workload requirements and business needs.
Set up and maintained CI/CD pipelines using Jenkins, AWS Code Pipeline, and Code Deploy to automate deployments and reduce release cycles.
Optimized NGINX/Apache/PHP/MySQL and sysctl settings for better server performance.
Utilized AWS CloudFormation and Terraform to create infrastructure as code (IaC) templates to deploy, manage, and version infrastructure resources.
Implemented and managed infrastructure to support Databricks clusters on cloud platforms like AWS, Azure, or GCP using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation.
Used existing cookbooks from the Chef Supermarket and customized the recipes for each VM.
Developed and maintained CloudFormation JSON templates and automated cloud deployments using Chef.
Set up builds using Chef as the configuration management tool and managed server configurations. Installed the Chef server on the workstation, bootstrapped nodes using Knife, and wrote Chef cookbooks and recipes to automate the deployment process.
Provisioned multiple EKS clusters using a shared Terraform module; used Helm for deployments, ECR to store Docker images, and NGINX as a reverse proxy.
Developed automated orchestration to deploy IaaS Infrastructure, auto enroll in Azure Backup and Azure Site Recovery based upon tagging strategies.
Implemented monitoring and alerting using CloudWatch, Splunk, Nagios, and Zabbix to proactively identify and remediate system issues.
Integrated Databricks with CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or Azure DevOps to automate deployment and testing processes for notebooks and jobs.
Leveraged Terraform for the management of GCP infrastructure, leading to optimized resource utilization and a 20% reduction in costs.
Extensive experience in setting up the CI/CD pipelines using Jenkins, Maven, Nexus, GitHub, CHEF, Terraform and AWS, deployed PaaS services using the Azure Release pipelines.
Automated deployment, scaling, and monitoring of AWS Glue resources using infrastructure-as-code (IaC) tools like AWS CloudFormation or AWS CDK.
Integrating AWS Glue with other AWS services and third-party tools to build end-to-end data pipelines.
Ensuring security, compliance, and reliability of AWS Glue environments through best practices and automation.
Configured OpenShift Container Platform for deploying, managing and scaling containerized applications in an enterprise environment
Worked with NoSQL databases, which are designed to handle unstructured or semi-structured data and are often chosen for use cases requiring high scalability, flexibility, and horizontal partitioning.
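Horizontal partitioning as mentioned above usually means hashing each row's key onto one of several shards. A toy consistent-hash ring is sketched below (the shard names are hypothetical); consistent hashing is one common scheme because adding or removing a node only remaps a fraction of keys.

```python
import bisect
import hashlib

class HashRing:
    """Toy consistent-hash ring: a key maps to the nearest node clockwise
    on the ring, so node changes only remap a fraction of keys."""

    def __init__(self, nodes):
        self._ring = sorted((self._hash(n), n) for n in nodes)
        self._points = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int.from_bytes(hashlib.sha1(value.encode()).digest()[:8], "big")

    def node_for(self, key: str) -> str:
        # First ring point at or after the key's hash, wrapping around.
        idx = bisect.bisect(self._points, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

# Hypothetical shard names; the same key always resolves to the same shard.
ring = HashRing(["shard-a", "shard-b", "shard-c"])
print(ring.node_for("user:1001") == ring.node_for("user:1001"))  # True
```

Production systems (DynamoDB, Cassandra) use variants with virtual nodes for smoother balance; the sketch shows only the core mapping.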
Created, managed, and performed container-based deployments using Docker images containing Middleware and Applications together and Evaluated Kubernetes for Docker container orchestration.
Worked with developers in mitigating issues with pipelines on Jenkins and ArgoCD
Wrote Chef Cookbooks and recipes in Ruby to provision pre-prod environments consisting of Cassandra DB installations, WebLogic domain creations and several proprietary middleware installations
Playing a key role in automating the deployments using GitHub, Terraform, Puppet, Chef and Jenkins
Proficient in setting up and administering PostgreSQL databases for best performance, as well as providing high availability, backup and recovery techniques.
Introduced automated deployment pipelines centered on PostgreSQL, enabling the smooth integration of database updates within CI/CD workflows. This initiative led to notable improvements in both development and operational efficiency.
Implemented and configured Backstage.io for managing and visualizing the end-to-end software development lifecycle.
Integrated Backstage.io into existing CI/CD workflows to streamline development processes.
Developed modular Terraform scripts to facilitate the provisioning and management of AWS infrastructure. This led to enhanced scalability and reduced deployment times.
Worked on Oracle Cloud Infrastructure (OCI) to support the deployment of enterprise applications and workloads, offering high performance, scalability, and security.
Used OIC (Oracle Integration Cloud) to simplify the process of designing, monitoring, and managing integrations, making it easier for organizations to streamline business processes.
Worked on OIC to support process automation, enabling organizations to automate and optimize their business processes.
Automated model training and deployment processes using SageMaker's functionalities and customized scripts, reducing manual labor and increasing efficiency.
Implemented monitoring solutions (e.g., Prometheus, Grafana) for Node.js applications, providing real-time insights into performance, resource utilization, and potential issues.
Managed local software repositories such as Gitlab, Stash, Artifactory, and Nexus to oversee version control and artifact management.
Expertise in deploying, scaling, and managing containerized applications in Kubernetes.
Experience in securing Kubernetes clusters and containerized applications with best practices like RBAC, Pod Security Policies, and network policies.
Experience with cloud-native security best practices and experience implementing them in Kubernetes clusters, including secrets management, encryption, and vulnerability management
Hands-on experience with Harness, a continuous delivery platform, to automate and orchestrate release processes.
Successfully implemented and configured Harness for various projects, optimizing deployment workflows
Implemented blue-green deployment strategies in Harness to minimize downtime and ensure seamless releases
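Blue-green deployment as described above keeps two identical environments and flips traffic only after the idle one passes health checks, so rollback is instant. A minimal router-level sketch (hypothetical environment records, not Harness's actual API):

```python
class BlueGreenRouter:
    """Toy blue-green switch: deploy to the idle color, then flip traffic
    only if the new version reports healthy; the old env stays for rollback."""

    def __init__(self):
        self.envs = {"blue": {"version": "v1", "healthy": True},
                     "green": {"version": None, "healthy": False}}
        self.live = "blue"

    @property
    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version, healthy=True):
        # New version goes to the environment not serving traffic.
        self.envs[self.idle] = {"version": version, "healthy": healthy}

    def cutover(self):
        if self.envs[self.idle]["healthy"]:
            self.live = self.idle   # atomic flip of live traffic
            return True
        return False                # unhealthy: traffic stays put

router = BlueGreenRouter()
router.deploy("v2")
print(router.cutover(), router.envs[router.live]["version"])  # True v2
```

In practice the "flip" is a load-balancer target or DNS change; the sketch captures only the deploy-to-idle, verify, then switch sequence.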
Engineered infrastructure automation through the creation of Ansible playbooks, enabling tasks like Continuous Deployment. Additionally, integrated Ansible with Jenkins on the AZURE platform for seamless operations.
Familiar with CI/CD tools like Jenkins, GitLab CI/CD, and Tekton, and able to configure pipelines to build, test, and deploy applications using KubeVela.
Experienced with Infrastructure as Code tools such as Terraform and Ansible for managing infrastructure resources alongside application deployment with KubeVela.
Deployed a managed Kubernetes cluster in Azure using Azure Kubernetes Service (AKS) and configured an AKS cluster through various methods including the Azure portal, Azure CLI, and template-driven deployment options such as Resource Manager templates and Terraform.
Managed Node.js versions using tools like NVM (Node Version Manager) to ensure compatibility and consistency across development environments
Utilized Docker and Kubernetes to efficiently manage containerized applications, ensuring seamless deployment and scalability. Worked with developers to optimize application performance and scalability on AWS infrastructure by analyzing metrics and tuning resources.
Participated in incident response and root cause analysis to minimize the impact of service disruptions and prevent recurrence.
Collaborated with cross-functional teams to design, test, and deploy new services and features on AWS infrastructure.
Integrated Golang-based monitoring solutions (such as Prometheus) to establish comprehensive observability across the infrastructure, resulting in improved system reliability and performance.
Contributed to the development of microservices architecture using Golang, optimizing scalability and facilitating rapid feature deployment.
Proficient in scripting and automation using Python, Bash, or other languages to streamline serverless application development, deployment, and maintenance processes.
Experience with monitoring and logging solutions like Prometheus, Grafana, and ELK stack
Strong analytical and troubleshooting skills, with experience in incident management and root cause analysis.
Integrated Jenkins with Jira, Atlassian Tool and GitHub for streamlined software development processes.
Experience with AWS CDK, it provides a library of reusable constructs for AWS services, making it easier to define cloud resources using high-level abstractions.
Experienced in building and deploying serverless architectures using AWS Lambda, API Gateway, DynamoDB, S3, and other AWS services.
Skilled in developing custom Splunk apps and dashboards, leveraging Splunk's REST API, Splunk SDKs, and web frameworks to extend Splunk's functionality and meet specific business requirements.
Proficient in designing and implementing secure, scalable, and highly available serverless applications on AWS using frameworks such as Serverless Framework, AWS SAM, and AWS CDK.
Involved CDK integrates seamlessly with other AWS tools, such as the AWS CLI and SDKs, for deployment, management, and automation.
Participated in Agile Scrum ceremonies, including daily stand-up calls, gaining thorough working knowledge of Scrum.
Configured and optimized Kafka brokers, producers, and consumers for optimal performance and throughput
Conducted Kafka performance benchmarking and optimization using tools such as JMeter or Gatling.
Extensive knowledge and hands-on experience with Apache Kafka, including its architecture, messaging concepts, and real-time data processing capabilities.
Proficient in deploying, configuring, and managing high-performance Kafka clusters, ensuring fault tolerance, scalability, and efficient resource utilization.
Skilled in leveraging Kafka's robust streaming platform to enable seamless integration between disparate systems, facilitating real-time data processing and analytics.
Proficient in designing and implementing event-driven architectures using Kafka, enabling reliable, asynchronous communication between microservices and distributed systems.
Demonstrated ability to optimize Kafka's performance by fine-tuning configurations, implementing effective partitioning strategies, and leveraging compression techniques.
Expertise in setting up Kafka replication and synchronization mechanisms to ensure data consistency, availability, and disaster recovery.
Proficient in monitoring Kafka clusters, diagnosing performance issues, and implementing effective monitoring solutions using tools like Confluent Control Center or custom monitoring frameworks.
Skilled in implementing robust security measures for Kafka clusters, including SSL/TLS encryption, authentication, authorization, and data protection.
Currently using Azure Synapse Analytics to implement PySpark notebooks.
Experience integrating Kafka with popular Big Data technologies such as Apache Hadoop, Apache Spark, or Apache Flink for real-time data processing and analytics.
Strong documentation skills, including documenting Kafka cluster configurations, best practices, and operational procedures, enabling smooth knowledge transfer and onboarding of new team members.


RESPONSIBILITIES:-

Experience with deploying and managing Azure resources using ARM templates and PowerShell scripts.
Proficient in Azure DevOps for building, testing, and deploying applications in a CI/CD pipeline.
Expertise in Azure Infrastructure as Code (IaC) tools such as Azure Resource Manager (ARM) and Terraform for creating and managing cloud resources.
Strong knowledge of Azure services, including but not limited to Azure Virtual Machines, Azure App Service, Azure Kubernetes Service (AKS), Azure Functions, Azure SQL Database, Azure Storage, and Azure Network Security Groups (NSGs).
Experience in monitoring and managing Azure resources using Azure Monitor, Azure Log Analytics, and Azure Application Insights, experienced with cloud-native applications
Hands-on experience in automating Azure resource provisioning and configuration using PowerShell, ARM templates, and Terraform.
Created CI/CD pipelines for .NET and Python apps in Azure DevOps by integrating source code repositories such as GitHub and VSTS along with artifact feeds. Created deployment stages such as testing, pre-production, and production environments in the Kubernetes cluster.
Proficient in deploying applications on various servers including Apache Webserver, Nginx, and application servers like Tomcat and JBoss.
Proficient in designing and implementing scalable and high-performance infrastructure solutions on GCP and OpenShift, utilizing load balancing, caching, and autoscaling techniques
Expert knowledge of Azure cloud services, Azure Storage, Azure Active Directory, and Azure Service Bus; managed the client's Microsoft Azure based PaaS and IaaS environment.
Deployed Azure IaaS virtual machines (VMs) and Cloud services (PaaS role instances) into secure VNets with Azure Internal Load Balancer and subnets.
Automated CI/CD pipelines and built infrastructure using Terraform, CloudFormation, Groovy, YAML, Bash, and Python scripting for AWS Lambda, giving non-CI/CD users one-click automation.
.NET Core integrated with various Azure services such as Azure App Service (for hosting web applications), Azure SQL Database (for storing relational data), Azure Storage (for storing files and unstructured data), Azure Cosmos DB (for NoSQL databases), Azure Functions (for serverless computing), Azure Service Bus (for messaging), and many others.
Azure SDKs and libraries are available for .NET Core developers to facilitate seamless integration with Azure services.
Managed AWS infrastructure using automation and configuration management tools like Ansible, Puppet, and custom-built solutions.
Implemented Splunk in the organization for analyzing Big Data and designing the Splunk Architecture.
Utilized Docker to containerize Node.js applications, improving consistency across development, testing, and production environments.
Successfully executed data migration projects, transferring data from On-prem SQL server to Cloud databases such as Azure Synapse Analytics (DW) and Azure SQL DB.
Demonstrated expertise in utilizing Azure BLOB and Data Lake storage solutions, proficiently loading data into Azure Synapse Analytics (DW).
Used EasyEngine (ee), a command-line control panel that sets up the NGINX server on Debian/Ubuntu Linux distributions for HTML, PHP, MySQL, HHVM, PageSpeed, and WordPress websites.
Maintained the NGINX package with custom modules such as ngx_pagespeed and lua.
Automated NGINX/PHP/MySQL setup and monitoring.
Experience in migrating on-premises applications to Azure and implementing hybrid cloud.
Developed modular Terraform scripts to manage AWS infrastructure, resulting in improved scalability and faster deployment times.
Integrated Terraform with CI/CD pipelines, automating infrastructure deployments and ensuring uniformity across diverse environments.
Established and maintained CI/CD pipelines to automate software building, testing, and deployment, leading to shorter release cycles and higher code quality.
Implemented automated CI/CD pipelines, streamlining software delivery and enhancing development team efficiency.
Automated MySQL database deployments and version control through CI/CD pipelines, enabling agile development and seamless integration of database updates in a DevOps capacity.
Demonstrated proficiency in setting up and managing PostgreSQL databases for optimal performance, as well as implementing high availability, backup, and recovery techniques, as required for a DevOps role.
Created automated deployment pipelines based on PostgreSQL, achieving seamless integration of database updates within CI/CD workflows, resulting in heightened development and operational efficiency.
Orchestrated database provisioning and configuration using infrastructure-as-code principles, fostering collaboration between development and operations teams, as part of the DevOps role.
Streamlined database migrations and updates within CI/CD pipelines, ensuring reliable and scalable data management while optimizing application performance.
Utilized Docker and Kubernetes to orchestrate containerized applications, ensuring smooth deployment and scalability.
Involved in Scrum ceremonies (stand-up, grooming, planning, demo/review, and retrospective) with the teams to ensure successful project forecasting and realistic commitments.
Experience in troubleshooting and resolving Azure-related issues and incidents in a timely and efficient manner.
Hands-on experience designing, planning, and implementing migrations of existing on-premises applications to Azure Cloud (ARM); configured and deployed Azure automation scripts utilizing Azure Stack services and utilities, with a focus on automation.
Extensive expertise in Pivotal Cloud Foundry (PCF), encompassing proficiency in the installation, administration, and scaling of applications, ensuring robust availability and dependability.
Demonstrated mastery in the implementation of CI/CD pipelines, automating the build, testing, and deployment procedures for PCF applications. Proficiently integrated PCF deployments with source control systems, optimizing delivery efficiency.
Implemented storage blobs and Azure files by creating storage accounts and configuring the Content Delivery Network (CDN). Managed access and storage access key for custom domains
Orchestrated the setup and management of Continuous Integration (CI) using the Team Foundation (TF) Build Service by implementing and configuring servers for hosting the Team Foundation Server (TFS) instance.
Managed administrative and monitoring duties for the Visual Studio Team System (VSTS), including tasks such as backups and consolidation of collections during migrations between different versions of VSTS.
Utilized Apache Kafka to ingest real-time network log data into Hadoop Distributed File System (HDFS).
Designed and implemented Kafka topics and partitions to ensure high availability, scalability, and fault tolerance.
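The fault tolerance in the topic design above comes from spreading each partition's replicas across different brokers, so losing one broker still leaves a live copy of every partition. A round-robin placement sketch with hypothetical broker IDs, simplified from Kafka's actual assignment algorithm:

```python
def assign_replicas(num_partitions, brokers, replication_factor):
    """Round-robin replica placement: each partition's replicas land on
    distinct brokers. Simplified from Kafka's real assignment logic."""
    assert replication_factor <= len(brokers)
    assignment = {}
    for p in range(num_partitions):
        assignment[p] = [brokers[(p + r) % len(brokers)]
                         for r in range(replication_factor)]
    return assignment

# 3 partitions over 3 hypothetical brokers with replication factor 2:
layout = assign_replicas(3, [101, 102, 103], 2)
print(layout)  # {0: [101, 102], 1: [102, 103], 2: [103, 101]}
```

The first replica in each list plays the role of the partition leader; no broker holds both copies of any partition, which is the availability property the design relies on.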



RESPONSIBILITIES:-
Creating and managing AWS services like IAM, EC2, VPC, S3, EBS, ELB, ECS, ECR
Demonstrated experience building custom modules using Terraform and integrating them with CI/CD pipelines to establish and maintain infrastructure services.
Automated the build process for creating JAR/WAR packages using the Maven build tool.
Experience in migrating build.xml into pom.xml to build applications using Apache MAVEN. Extensively worked on Configuration management tool Chef, for automation.
Written multiple Chef Cookbooks in Ruby language. Implemented environments, roles, data bags in Chef for better environment management. Setup Chef Server, workstation, client and wrote scripts to deploy applications.
Installed the Chef Enterprise server on-premises, set up workstations, and bootstrapped the nodes using Knife.
Provision, Configure, & De-Provision Environments via Terraform Automation
Created alarms and notifications for EC2 instances using CloudWatch.
Designing and deploying AWS Solutions using EC2, S3, and EBS, Elastic Load balancer (ELB) and auto-scaling groups.
In the lab, set up live electric meters in the different environments for testing.
Created work requests/service orders in edge systems such as LCIS, CC&B (Customer Care and Billing), Maximo, AODS, MDMS, and Command Center, then configured the live meters to sync with all of these edge systems.


RESPONSIBILITIES:-

Configured OpenShift Container Platform for deploying, managing and scaling containerized applications in an enterprise environment.
Managed Puppet infrastructure to automate the deployment and configuration of software and system configurations across multiple servers and environments.
Implemented Splunk in the organization for analyzing Big Data and designing the Splunk Architecture.
Installed and configured Logstash and Filebeat on Linux and application servers for transferring logs to Elasticsearch.
Developed and managed CI/CD pipelines to automate the process of building, testing, and deploying software. This led to reduced release cycles and improved code quality.
Leveraged Docker and Kubernetes for orchestrating containerized applications, ensuring seamless deployment and scalability.
Migrated applications and data from different storage platforms to Cloud-based solutions.
Enabled developers and QA to deploy and test applications via Jenkins using various scripting technologies like Python, Node, and Bash
Implemented Continuous Integration (CI) and Continuous Deployment (CD) using Jenkins, Ant, Maven, Sonar, and Nexus for multiple environments.
Proficient in Pivotal Cloud Foundry (PCF), with a background in installation, administration, and scalability of applications while ensuring high availability and reliability.
Skilled in implementing CI/CD pipelines, automating the build, testing, and deployment procedures for PCF applications, and seamlessly integrating PCF deployments with source control systems to enhance delivery efficiency.
Designed cloud-hosted solutions with specific experience in the AWS product suite.
Managed and improved the build infrastructure for global software engineering teams, including implementation of build scripts, continuous integration infrastructure, and deployment tools.
Experienced with build tools like Maven, JUnit, and jQuery, and worked on the Mavenization of multiple projects.
Implemented and optimized Ping Identity solutions for multi-tier authentication and authorization, improving user experience and security posture.
Oversaw the integration of Ping Federate and Ping Access to provide a strong and centralized identity and access management solution, enhancing application security and compliance
Proficient in JIRA administration and skilled in designing workflows, including expertise in JIRA service desk.
Strong collaboration skills, including working with cross-functional teams to design and implement infrastructure solutions, as well as excellent documentation skills for sharing knowledge and best practices.



RESPONSIBILITIES:-

Configuring and maintaining Splunk environments, including indexing, search heads, and data ingestion configurations.
Created Splunk Dashboards to highlight key business metrics such as transaction volume and average processing time, as well as to measure the performance of other third-party systems.
Building, Deployment, Configuration, Management of SPLUNK Cloud instances in a distributed environment which spread across different application environments belonging to multiple lines of business.
Automated MySQL database deployments and version control through CI/CD pipelines, allowing for agile development and smooth integration of database updates.
Established automated deployment pipelines utilizing PostgreSQL, seamlessly integrating database updates into CI/CD workflows. This led to notable enhancements in both development and operational efficiency.
Engineered modular Terraform scripts for the provisioning and management of AWS infrastructure, resulting in improved scalability and reduced deployment time.
Integrated Terraform with CI/CD pipelines, automating the deployment of infrastructure and ensuring uniformity across diverse environments.
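A minimal sketch of how a CI/CD stage like the ones described above might invoke Terraform non-interactively per environment; the `infra` directory and per-environment `.tfvars` naming are illustrative assumptions, not details from the role:

```python
import subprocess


def terraform_cmd(action, env, workdir="infra"):
    """Build a terraform command for one environment.

    `env` selects a hypothetical per-environment tfvars file
    (e.g. dev.tfvars), one common way to keep deployments
    uniform across environments from a single module set.
    """
    cmd = ["terraform", f"-chdir={workdir}", action,
           f"-var-file={env}.tfvars", "-input=false"]
    if action == "apply":
        cmd.append("-auto-approve")  # non-interactive; suitable for CI only
    return cmd


def run_stage(action, env):
    """Run one pipeline stage and return the process exit code."""
    return subprocess.run(terraform_cmd(action, env)).returncode
```

Keeping the command construction separate from execution makes the pipeline step easy to unit-test without touching real infrastructure.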
Orchestrated database provisioning and configuration as a DevOps engineer, leveraging infrastructure-as-code principles to foster collaboration between development and operations teams.
Streamlined database migrations and updates within CI/CD pipelines, ensuring reliable and scalable data management while optimizing application performance.
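The migration automation above can be sketched as an idempotent runner that records applied migrations in a tracking table, so a CI/CD pipeline can run it on every deploy. This uses SQLite purely as a stand-in for PostgreSQL/MySQL, and the migration names are hypothetical:

```python
import sqlite3

# Hypothetical ordered migrations; in a real pipeline these would be
# .sql files versioned alongside the application code.
MIGRATIONS = [
    ("001_create_users", "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"),
    ("002_add_email", "ALTER TABLE users ADD COLUMN email TEXT"),
]


def migrate(conn):
    """Apply any migrations not yet recorded; safe to re-run."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}
    for name, sql in MIGRATIONS:
        if name not in applied:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_migrations (name) VALUES (?)", (name,))
    conn.commit()
```

Because the runner skips already-recorded migrations, re-running the deploy job is harmless, which is what makes the pipeline integration reliable.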
Maintained Splunk alerting to automatically notify the appropriate team by email and activate the necessary support.
Involved in standardizing Splunk forwarder deployment, configuration and maintenance across UNIX and Windows platforms.
Created many of the proof-of-concept dashboards for IT operations and service owners, used to monitor application and server health.
Designed and developed Custom Apps as per the Business requirement and assigned roles.
Detected data exfiltration by compromised accounts and by malware using User Behavior Analytics.
Created alerts to notify on system outages or threshold breaches, including the Splunk license limit, syslog server limits, file system overflow, and cold storage outages.
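The threshold-style alerts above reduce to a simple comparison of current metrics against configured limits; a minimal sketch with illustrative metric names (not actual Splunk internals):

```python
def check_thresholds(metrics, limits):
    """Return alert messages for any metric at or above its limit.

    metrics/limits are plain dicts, e.g. daily license usage in GB
    or filesystem utilization in percent (names are placeholders).
    """
    alerts = []
    for name, value in metrics.items():
        limit = limits.get(name)
        if limit is not None and value >= limit:
            alerts.append(f"ALERT: {name}={value} breached threshold {limit}")
    return alerts
```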
Developed customized shell scripts to install, configure, and manage multiple instances of Splunk forwarders, indexers, search heads, and deployment servers.
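A sketch of the kind of forwarder configuration such scripts would lay down: rendering a minimal outputs.conf that points a universal forwarder at an indexer group (host names and group name are placeholders):

```python
def outputs_conf(indexers, port=9997):
    """Render a minimal outputs.conf for a universal forwarder,
    sending data to a load-balanced group of indexers."""
    servers = ",".join(f"{h}:{port}" for h in indexers)
    return (
        "[tcpout]\n"
        "defaultGroup = primary_indexers\n\n"
        "[tcpout:primary_indexers]\n"
        f"server = {servers}\n"
    )
```

Generating the file from one list of indexers keeps forwarder configuration consistent across the UNIX and Windows hosts mentioned above.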
Created detailed documentation for all reports, alerts, and dashboards, enabling people without Splunk knowledge to follow the instructions and generate alerts/reports manually if automated mail generation fails (e.g., due to firewall issues between Splunk and the mail server).
Used Splunk forwarders to provide reliable and secure collection and delivery of data to the Splunk platform for indexing, storage, and analysis.
Extensive experience in Splunk administration, including deploying, configuring, and managing Splunk instances and data ingestion pipelines.
Proficient in Pivotal Cloud Foundry (PCF), including the installation, management, and scalability of applications while ensuring high availability and reliability.
Demonstrated skill in implementing CI/CD pipelines, automating build, testing, and deployment procedures for PCF applications, and seamlessly integrating PCF deployments with source control systems to enhance delivery efficiency.
Proficient in utilizing Splunk for log analysis and monitoring, including writing complex queries, building dashboards, and generating reports to identify trends and anomalies and troubleshoot issues.
Skilled in developing custom Splunk apps and dashboards, leveraging Splunk's REST API, Splunk SDKs, and web frameworks to extend Splunk's functionality and meet specific business requirements.
Expertise in configuring alerts and notifications in Splunk, developing incident response workflows, and implementing proactive measures to identify and resolve critical incidents.
Strong skills in ingesting and parsing data from various sources into Splunk, including log files, databases, APIs, and cloud services, ensuring accurate indexing and efficient searching.
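Ingestion from sources like those above often means parsing raw lines into structured events before indexing. A hedged sketch that turns a hypothetical application log line into a Splunk HTTP Event Collector payload (the log format, index, and sourcetype are illustrative assumptions):

```python
import json
import re

# Hypothetical app log format: "2024-05-01 12:00:00 ERROR payment timed out"
LOG_RE = re.compile(r"^(?P<ts>\S+ \S+) (?P<level>\w+) (?P<message>.*)$")


def to_hec_event(line, index="app_logs", sourcetype="myapp"):
    """Parse one log line into an HTTP Event Collector JSON payload.
    Returns None for lines that don't match the expected format."""
    m = LOG_RE.match(line)
    if not m:
        return None
    return json.dumps({
        "index": index,
        "sourcetype": sourcetype,
        "event": {"level": m["level"], "message": m["message"], "raw_ts": m["ts"]},
    })
```

Structuring fields at ingest time (rather than shipping raw text) is what makes indexing accurate and searching efficient.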
Familiarity with serverless frameworks and tools such as the Serverless Framework, AWS SAM (Serverless Application Model), and Azure Functions, simplifying serverless application development and deployment.
Proficient in leveraging serverless platforms' auto-scaling capabilities to handle varying workloads and ensure optimal resource utilization, enhancing application performance and cost efficiency.
Knowledge of serverless security best practices, including implementing fine-grained access controls, secure data storage, and encryption mechanisms for serverless functions and data.
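Fine-grained access control for a serverless function usually comes down to a tightly scoped IAM policy. A minimal sketch that builds a policy allowing read access to only one prefix of one S3 bucket (bucket and prefix names are placeholders):

```python
import json


def least_privilege_policy(bucket, prefix):
    """Build an IAM policy granting a serverless function read access
    to a single S3 prefix, rather than the whole bucket or account."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/{prefix}/*"],
        }],
    })
```

Scoping `Resource` to a prefix rather than `*` is the fine-grained-access practice the bullet above refers to.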