
srivenimandadi - Sr. DevOps Engineer
[email protected]
Location: Irving, Texas, USA
Relocation: Yes
Visa: GC
PROFESSIONAL SUMMARY:
Certified IT professional with over 10 years of industry experience, holding certifications in AWS and Azure cloud platforms. Specialized in DevOps methodologies, CI/CD pipeline implementation, configuration management, and build/release management, with a proven track record in QA/test engineering, ensuring the delivery of high-quality solutions.
Experience in cloud implementations on AWS and Azure, demonstrating expertise in infrastructure provisioning, configuration, deployment, and optimization to support diverse business requirements and objectives.
Proficient with AWS cloud services such as EC2, Elastic Load Balancing (ELB), Elastic Container Service (ECS), ECR, EKS, S3, Elastic Beanstalk, the ELK stack, SNS, SQS, CloudFront, RDS, DynamoDB, VPC, Route 53, CloudWatch, CloudTrail, CloudFormation, and IAM, including troubleshooting Amazon Machine Images during migration of physical servers to the cloud.
Proficient with Azure cloud services such as Azure Storage, IIS, Azure Active Directory (Entra ID), Azure Resource Manager (ARM), ACR, AKS, Blob Storage, Azure VMs, SQL Database, Azure Functions, Application Insights, Azure Service Fabric, Azure Monitor, Azure Service Bus, and Cosmos DB.
Experienced in provisioning IaaS, PaaS, and SaaS virtual machines and web/worker roles on Microsoft Azure Classic and Azure Resource Manager (ARM).
Capable of writing Infrastructure as Code (IaC) in Terraform, Azure Resource Manager, and AWS CloudFormation. Created reusable Terraform modules in both Azure and AWS cloud environments.
Mastery in using build tools such as Maven and Ant to build deployable artifacts such as WAR and JAR files from source code.
Proficient at using Gitflow branching approaches to efficiently manage feature development, issue resolution, and releases, resulting in higher code quality, faster deployments, and better team collaboration.
Skilled in supporting, configuring, troubleshooting, and maintaining diverse server environments including Windows and Linux (Ubuntu, CentOS, RHEL).
Experienced in implementing end-to-end CI/CD pipelines within Jenkins, automating code retrieval, compilation, testing, and artifact deployment to the Nexus repository.
Specialized in Azure CI/CD processes, utilizing Azure DevOps, Azure CLI, and App Services to build Repos, Pipelines, and Web Apps, and monitoring applications using Application Insights.
Experience working with microservice technologies and containerization tools such as Docker, used to containerize and deploy applications on Kubernetes clusters.
Worked on creating custom Docker container images, tagging and pushing images, and managing Docker containers and consoles throughout the application life cycle. Deployed Docker Engine on virtualized platforms to containerize multiple applications.
Proficient in managing container registries, including Docker Registry, Azure Container Registry (ACR) and Amazon Elastic Container Registry (ECR), for securely storing and managing Docker images, facilitating efficient deployment and scalability within cloud-native environments.
Participated in Kubernetes administration: created clusters and configured components such as pods, replication controllers, replica sets, and services using YAML files; managed containerized applications using nodes, ConfigMaps, selectors, and services; and deployed application containers as pods to ensure efficient resource utilization and scalability.
Deployed managed Kubernetes clusters in Azure and AWS through AKS and EKS, establishing clusters via the Azure/AWS CLI, using template-driven deployment approaches, and focusing on deploying and managing containerized applications.
Skilled in using Ansible as a configuration management tool, effectively utilizing it to manage and configure infrastructure resources through the automation of playbooks and roles.
Experience in writing Python, PowerShell, and Bash scripts for batch processing and automation of databases, applications, backups, and scheduling to reduce both human intervention and man-hours.
Hands-on experience with databases (Cosmos DB, MySQL, MongoDB, Cassandra): creating users, performing dump/restore operations, and taking automated snapshots.
Proficient in utilizing various log monitoring tools such as Splunk, Nagios, Prometheus, Grafana, ELK (Elasticsearch, Logstash, Kibana), and New Relic for observing log data, monitoring system health, and receiving notifications regarding node security and status.
Skilled in Agile methodologies like Scrum and Kanban, adept at optimizing project management and team collaboration. Proficient in tools including Confluence, JIRA, and Azure Boards for streamlined workflows and effective communication within teams.
CERTIFICATIONS:
AWS Developer Associate
Microsoft Certified Azure Administrator
Certified Kubernetes Administrator

TECHNICAL SKILLS:

Cloud Platforms Microsoft Azure, AWS Cloud
AWS Services RDS, EC2, VPC, IAM, EBS, S3, ELB, Auto Scaling, CloudTrail, ECR, EKS, CloudWatch
Azure Services App Services, Key vault, function app, Blob storage, Azure Active Directory (Azure AD), Service Bus, Azure Container Registry (ACR) and Azure Kubernetes service (AKS), Azure SQL
Version Control Tools GIT, SVN
Source Code Management GitHub, Azure Repos, Bitbucket
CI/CD Jenkins, Azure DevOps pipelines
Configuration & Automation Tools Ansible, Terraform, ARM templates, CloudFormation
Testing Tools JUnit, Selenium, Cucumber, Jasmine, SoapUI, Postman
Container Platforms Docker, Kubernetes, OpenShift, ECS
Build Tools Maven, Gradle, ANT, MS Build
Monitoring Tools Nagios, Splunk, ELK, CloudWatch, CloudTrail, Azure Monitor, New Relic, Prometheus, Grafana
Languages Python, Shell scripting, Bash, Node.js, Groovy, YAML
Artifact Repositories JFrog Artifactory, Nexus
Web Servers Nginx, Tomcat, WebSphere, JBOSS, WebLogic, IIS
Documentation Confluence, SharePoint
Operating Systems Microsoft Windows, Linux, UNIX, Ubuntu
Tracking Tools Jira, SNOW
Code Scanning SonarQube, JFrog Xray, Amazon ECR image scanning (Inspector), Trivy
Databases PostgreSQL, RDS, Cosmos DB, MySQL, MongoDB, Cassandra, SQL Server, Azure Data Factory

PROFESSIONAL EXPERIENCE:

Client: Walmart, West Memphis, Arkansas | Apr 2022 to Present
Role: Sr. DevOps Cloud Engineer/SRE
Responsibilities:

Provisioned and managed AWS services, such as EC2 instances, VPCs, Lambda functions, ELB, IAM roles, S3 buckets, RDS databases, ECR, and EKS to support the project's infrastructure requirements.
Used AWS to create new instances, check security group settings, attach Elastic IPs to servers and release them when no longer needed, and apply inbound IP rules as required.
Created users, groups, roles, policies, and identity providers in AWS Identity and Access Management (IAM) to improve login authentication.
Created a Lambda function in Python to stop all EC2 instances carrying a specific tag, and scheduled it through CloudWatch Events to run every night.
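A minimal sketch of that nightly-shutdown Lambda using boto3; the tag key/value ("AutoStop" = "true") and region are hypothetical examples, not the project's actual values:

# Lambda handler: stop running EC2 instances tagged for nightly shutdown
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def lambda_handler(event, context):
    # Find running instances carrying the shutdown tag
    response = ec2.describe_instances(
        Filters=[
            {"Name": "tag:AutoStop", "Values": ["true"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        # CloudWatch Events invokes this handler on the nightly schedule
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}
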
Managed cloud automation using AWS CloudFormation and Terraform templates to create custom-sized VPCs, subnets, NAT gateways, EC2 instances, and ELBs, and to manage security groups.
Developed custom C# scripts and modules to automate the configuration and deployment of AWS resources, such as EC2 instances, S3 buckets, and RDS databases.
Used an Ansible control server to manage and configure nodes, and organized Ansible playbooks with roles. Used the file module in Ansible playbooks to copy and remove files on remote systems.
Managed the planned migration of critical infrastructure to AWS, using advanced automation techniques and architectural best practices to strengthen scalability, fault tolerance, and cost effectiveness.
As a direct result of the planned move to AWS, we were able to reduce system downtime by 30% and infrastructure expenditures by 20%.
Implemented SRE practices and principles to enhance the reliability, performance, and scalability of critical applications. Developed and implemented monitoring and alerting systems, automated incident response workflows, and conducted root cause analysis to identify and address underlying issues, resulting in significant improvements in system stability and uptime.
Created scripts in Python which integrated with Amazon API to control instance operations.
Involved in source control management with GitHub Enterprise repositories. Regular activities included configuring user access levels, monitoring logs, identifying merge conflicts, and managing the master repository.
Created build and deployment scripts with Maven, automating the process with Jenkins plugins to seamlessly transition between environments throughout the build pipeline.
Created CI/CD pipelines and set up auto-trigger, auto-build, and auto-deployment using Jenkins.
Handled Selenium Synchronization problems using Explicit & Implicit waits during regression testing.
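An illustrative Python sketch of handling that synchronization with implicit and explicit waits; the URL and element locator are hypothetical placeholders:

# Selenium synchronization: implicit wait plus an explicit wait on a specific element
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.implicitly_wait(5)                  # implicit wait applied to every lookup
driver.get("https://example.com/login")

# Explicit wait: block up to 30 seconds until the submit button is clickable
submit = WebDriverWait(driver, 30).until(
    EC.element_to_be_clickable((By.ID, "submit"))
)
submit.click()
driver.quit()
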
Used SonarQube for continuous inspection of code quality and to perform automatic code reviews to detect bugs.
Managed storage of binaries, artifacts, and dependencies after successful builds using the Nexus repository manager.
Worked on creating the Docker containers and Docker consoles for managing the application life cycle.
Developed Docker files, created, and tested images, then pushed them to the container registry (ECR). Orchestrated deployments using EKS, and monitored and managed updates to facilitate efficient application development and deployment.
Used container orchestrator Elastic Kubernetes Service (EKS) to deploy, load balance, scale and manage Docker containers with multiple namespace versions, developed CI/CD system with Jenkins to build test and deploy microservices.
Containerized and deployed an application using Docker on to a Kubernetes cluster managed by AWS EKS. Used AWS CloudFormation Templates (CFT) to launch a cluster of worker nodes on Amazon EC2 instances.
Utilized Kubernetes manifests and Helm charts to deploy and manage microservices within Kubernetes clusters, ensuring reproducible builds and efficient deployment processes.
Leveraged Kubernetes to automate deployments, scaling, and management of containerized applications across clusters of hosts, including configuring autoscaling for multiple clusters and utilizing Kubernetes' built-in self-healing capabilities to replace failed pods automatically.
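A hedged sketch of configuring that autoscaling programmatically with the official kubernetes Python client (equivalent to "kubectl autoscale deployment"); the deployment name, namespace, and thresholds are hypothetical:

# Create a HorizontalPodAutoscaler for an existing Deployment
from kubernetes import client, config

config.load_kube_config()          # or load_incluster_config() when run inside a pod

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-api-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web-api"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
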
Used Ingress Resources in Kubernetes to support a high-level abstraction that allows simple host or URL or HTTP-based routing and used it to expose the applications.
Implemented Istio service mesh to provide a unified way to control and monitor microservices in a Kubernetes cluster.
Configured Kubernetes clusters on AWS Cloud and used AWS Elastic Load Balancer to route external web traffic to an internal HAProxy cluster; HAProxy is configured with a back end for each Kubernetes service, which proxies traffic to individual pods.
Experienced in developing Ansible playbooks to automate provisioning of the Kubernetes cluster and in troubleshooting Kubernetes pods with Persistent Volume Claim issues.
Contributed to the integration of the AWS Kafka live stream module with Kubernetes, implementing Spring Kafka API calls to guarantee smooth message processing inside the Kafka cluster architecture.
Used Helm to install Prometheus and Grafana for monitoring within the Kubernetes cluster, ensuring optimal application performance oversight. Furthermore, used CloudWatch to retrieve logs data for thorough monitoring.
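An illustrative boto3 sketch of the CloudWatch Logs retrieval mentioned above; the log group name, region, and filter pattern are placeholders:

# Pull the last hour of ERROR-level events from a CloudWatch Logs group
import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")

response = logs.filter_log_events(
    logGroupName="/eks/web-api/application",
    startTime=int((time.time() - 3600) * 1000),   # last hour, in milliseconds
    filterPattern="ERROR",
)
for event in response["events"]:
    print(event["timestamp"], event["message"])
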
Utilized Splunk's alerting and notification features to proactively identify and respond to critical events or incidents in the cloud environment.
Configured DynamoDB using the AWS Management Console, created the necessary tables, and used the Java-based AWS SDK for access, storage, and retrieval of information.
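The project used the Java AWS SDK; the following boto3 sketch shows the equivalent table setup for illustration, with a hypothetical table and key name:

# Create an on-demand DynamoDB table and wait until it is active
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="orders",
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",   # on-demand capacity, no provisioning required
)
dynamodb.get_waiter("table_exists").wait(TableName="orders")
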
Used Agile methodology throughout the project and was involved in weekly and daily release management.
Created and managed JIRA templates and complex JIRA workflows including project workflows, Screen schemes, permission scheme and notification schemes.
Used Atlassian JIRA as tracking tool for SCM Support activities.

Environment: AWS Cloud (EC2/EMR), Splunk, Terraform, Ansible, Docker, Kubernetes, Jenkins CI/CD pipelines, Python, Git, GitHub, Bash, CloudWatch, Prometheus, Grafana, Helm, containers, Nexus, Jira, ECR, EKS, DynamoDB, Kafka, VPCs, Lambda functions, ELB, IAM roles, S3 buckets

Client: Fannie Mae, Virginia | Aug 2020 to Mar 2022
Role: Azure/DevOps Engineer
Responsibilities:
Executed Azure infrastructure setup, encompassing virtual networks, VMs, subnets, security groups, Active Directory, Azure Container Registry (ACR), Azure Kubernetes Service (AKS), Key Vault, and monitoring components, to ensure a secure and scalable environment.
Implemented Azure Active Directory and role-based access control (RBAC) for secure access and segregation of duties.
Deployed Azure Storage services, including Azure Blob Storage and Azure Files, to enable efficient and scalable data storage. Configured storage accounts, containers, and file shares to support seamless data management and retrieval.
Set up secure VNets and subnets for Azure IaaS VMs and PaaS role instances, ensuring compliance with Azure network security standards.
Used Python scripts, shell scripts, and Ansible to automate the deployment of server infrastructure for DevOps services.
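A minimal sketch of the kind of Python automation used against Azure, based on the azure-identity and azure-mgmt-resource packages; the subscription ID, resource group name, and region are placeholders:

# Create (or update) a resource group that holds the DevOps infrastructure
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
resource_client = ResourceManagementClient(credential, "<subscription-id>")

resource_client.resource_groups.create_or_update(
    "rg-devops-services", {"location": "eastus2"}
)
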
Created Terraform templates to manage Azure infrastructure, and created storage accounts and Blob containers to store remote state files using Terraform modules.
Used an Ansible control server to manage and configure nodes, and organized Ansible playbooks with roles. Used the file module in Ansible playbooks to copy and remove files on remote systems.
Implemented Azure Multi-Factor Authentication (MFA) as part of Azure AD Premium to securely authenticate users, and created custom Azure templates for rapid deployment supported by PowerShell scripting.
Used Azure DevOps, including Git repositories, Azure Pipelines, and Azure Boards, for efficient code versioning, continuous integration, and agile project management.
Implemented CI/CD pipelines using Azure DevOps YAML pipelines, integrating with Azure Databricks to enable streamlined data processing and analysis.
Leveraged Azure Databricks to enhance data analytics and processing, demonstrating expertise in collaborative and scalable environments for advanced analytics on cloud platforms.
Designed and implemented secure data pipelines for processing sensitive data using Azure Data Factory.
Implemented Azure Data Factory triggers and scheduled the pipelines; monitored the scheduled Azure Data Factory pipelines and configured alerts to receive notifications about failed pipelines.
Experience in integrating unit tests and code quality analysis tools such as JUnit.
Used Azure Artifacts to store build artifacts with solid and accurate versioning, and used JFrog Artifactory to store Docker images.
Used Docker to provide a high-level API for lightweight, process-isolated containers. Created custom container images, tagged them, and pushed them to Docker Hub and Azure Container Registry. Ensured container image security through thorough reviews using Trivy.
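A hedged sketch of that build/scan/push flow using the docker Python SDK and the Trivy CLI; the registry name, image tag, and severity gate are hypothetical, and registry login (for example via "az acr login") is assumed to have been done beforehand:

# Build a Docker image, gate it on a Trivy scan, then push it to the registry
import subprocess
import docker

REPOSITORY = "myregistry.azurecr.io/payments-api"   # hypothetical ACR repository
TAG = "1.4.2"
IMAGE = f"{REPOSITORY}:{TAG}"

client = docker.from_env()

# Build the image from the local Dockerfile and tag it for the registry
image, _ = client.images.build(path=".", tag=IMAGE)

# Fail the pipeline if Trivy reports HIGH or CRITICAL vulnerabilities
subprocess.run(
    ["trivy", "image", "--exit-code", "1", "--severity", "HIGH,CRITICAL", IMAGE],
    check=True,
)

# Push only images that passed the scan
client.images.push(REPOSITORY, tag=TAG)
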
Utilized Azure Kubernetes Service (AKS) to deploy managed Kubernetes clusters within Azure, creating AKS clusters through the Azure portal and employing versatile deployment methodologies including Azure Resource Manager (ARM) templates and Terraform.
Created Kubernetes clusters and worked on creating pods, controllers, replica sets, services, deployment labels, health checks, and ingress by writing YAML files.
Used Azure DevOps to automate scaling and self-healing of Kubernetes-deployed applications, configured autoscaling for multiple Kubernetes clusters, and used Kubernetes' built-in self-healing features to automatically replace failed pods.
Used Kubernetes to manage containerized applications through its nodes, ConfigMaps, selectors, and Services, and deployed application containers as pods.
Deployed the NGINX Ingress controller in the AKS cluster along with New Relic APM, monitored the pods and the ingress controller with New Relic, and created dashboards using New Relic Query Language (NRQL).
Utilizing Helm charts to define, package, and deploy complex applications efficiently on Kubernetes clusters.
Used Istio to collect and analyze telemetry data to improve the performance and reliability of microservices.
Scheduled Splunk based Reports and alerts to monitor the system health performance and maintained Splunk based native Role and User creation.
Created ARM Template for deploying the resources into Azure using PowerShell and continuous integration by VSTS.
Used Azure Monitor to collect and analyze metrics and logs from Cosmos DB, allowing proactive monitoring and troubleshooting of database performance and behavior.
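A hedged sketch of querying a Cosmos DB metric from Azure Monitor using the azure-monitor-query package (assuming version 1.1+); the subscription, resource group, account name, and metric choice are illustrative placeholders:

# Query the last hour of Cosmos DB request-unit consumption from Azure Monitor
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

COSMOS_RESOURCE_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.DocumentDB/databaseAccounts/<account>"
)

client = MetricsQueryClient(DefaultAzureCredential())
result = client.query_resource(
    COSMOS_RESOURCE_ID,
    metric_names=["TotalRequestUnits"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
)
for metric in result.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.total)
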

Environment: Azure, Terraform, Ansible, Docker, Kubernetes, Jenkins, Git, Splunk, Python Scripts, Helm charts, ACR, AKS, Azure CLI, YAML file, Nginx Ingress controller, Cosmos DB, ARM Template, New Relic, JFROG Artifactory, Shell Script, ISTIO


Client: Elevance Health, Virginia | May 2019 to Jul 2020
Role: DevOps Engineer
Responsibilities:
Worked extensively on AWS and Azure servers, performing several operations.
Configured Amazon cloud services including S3, RDS, Elastic Load Balancing, IAM, Route53 and Security Groups in Public and Private Subnets in VPC, created storage cached and storage volume gateways to store data and other services.
Configured Azure cloud services including Azure Firewalls, Azure IAM, Azure Active Directory (AD), Azure Resource Manager (ARM), Azure Storage, Blob Storage, Azure VMs, IIS, SQL Database, Azure Functions, and Azure Monitor.
Proven experience in implementing MFA solutions in Azure AD with various methods such as hard tokens, soft tokens, and 3rd party MFA tools, providing flexibility and choice for users.
Implemented security measures with HashiCorp Terraform, such as IAM roles and policies, encryption at rest and in transit, and AWS WAF rules.
Maintained source code in version control systems including GitLab and GitHub.
Automated weekly releases with Maven scripting to compile and debug Java code and place builds into the Nexus repository.
Scanned/Analyzed the builds using the SonarQube for effective coding practices.
Created CI/CD pipelines and set up auto-trigger, auto-build, and auto-deployment using Jenkins.
Integrated automated testing suites into CI/CD pipelines to validate software changes, ensuring code quality and minimizing production issues. Involved in designing and developing RESTful APIs and SOAP web services using Apache in AWS environments.
Implemented Docker-Maven plugin and Maven POM to build Docker Images for all microservices and later used Docker file to build the Docker Images from the java jar files.
Container management using Docker by writing Docker files and set up the automated build on Docker HUB and installed & configured Kubernetes.
Set up Azure infrastructure using Terraform templates, deployed microservices to AKS, and pushed images to ACR using Azure DevOps.
Responsible for setting up the Azure Kubernetes Service (AKS) to deploy spring applications, configure the Azure Container Registry (ACR) to store Docker Images and manage the Azure Kubernetes cluster.
Implemented Kubernetes manifests, helm charts for deployment of microservices into Kubernetes clusters.
Used Kubernetes to manage containerized applications, including nodes, ConfigMaps, selectors, and services, with application containers deployed as pods.
Experienced in implementing RBAC, pod security policies, and network policies for safe container deployments in Kubernetes.
Created Terraform continuous build integration system and implemented the Disaster Recovery solution (Blue-Green Deployment) and automated the process with the Terraform Template.
Configured exporters and integrations to collect metrics from various AWS and Azure services, such as EC2, RDS, Azure VMs, Azure Functions, and Azure Monitor.
Developed Java RESTful microservices to asynchronously push data to both Oracle and Cassandra DB using Spring Boot.
Integrated Kafka with cloud-native monitoring tools like Prometheus and Grafana to collect and analyze metrics from Kafka clusters, ensuring optimal performance and reliability.
Applied Datadog security monitoring to detect threats and vulnerabilities in Azure and AWS environments.
Created custom Splunk queries and dashboards to monitor and troubleshoot specific AWS and Azure services, like EC2, RDS, Azure VMs, and Azure Functions.
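An illustrative sketch of running such a Splunk search over its REST API with the requests library; the host, service account, and search string are placeholders:

# Stream results of an ad hoc Splunk search over the REST export endpoint
import requests

SPLUNK_URL = "https://splunk.example.com:8089/services/search/jobs/export"
SEARCH = 'search index=aws sourcetype="aws:cloudwatchlogs" ERROR | head 100'

response = requests.post(
    SPLUNK_URL,
    auth=("svc_monitoring", "<password>"),
    data={"search": SEARCH, "output_mode": "json"},
    verify=False,          # sketch only; keep TLS verification enabled in production
    stream=True,
)
for line in response.iter_lines():
    if line:
        print(line.decode("utf-8"))
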


Environment: AWS (EC2, Route 53, EBS, Security Groups, Auto Scaling, and RDS), Azure (Azure Firewalls, Azure IAM, Azure Active Directory (AD), Azure Resource Manager (ARM), Azure Storage, Blob Storage, Azure VMs, IIS, SQL Database, Azure Functions, and Azure Monitor), GIT, Docker, SonarQube, Maven, Jenkins, ANT, Python, Nagios, Datadog, Kafka, Cassandra DB, Terraform, Kubernetes, RBAC, Helm

Client: First Horizon Bank, Memphis, TN | Dec 2017 to Apr 2019
Role: Build & Release Engineer
Responsibilities:
Responsible for the management and continuous improvement of the release process for internal and external web applications.
Collaborated with architects, systems, network, software, and QA engineers to continuously improve the efficiency and reliability of build and deployment processes, supporting all phases of development including production releases.
Developed and implemented Software Release Management strategies for various applications according to the Agile process.
Maintained build related scripts developed in shell for Maven builds. Created and modified build configuration files including POM.xml.
Worked in all areas of Jenkins: setting up CI for new branches, build automation, plugin management, securing Jenkins, and setting up the master.
Deployed applications using the Jenkins server, troubleshot build and release job failures, and worked with developers on resolution.
Integrated with leading CI servers and kept builds reproducible with exhaustive build information to track and protect all artifacts used by the CI build, using the Nexus binary repository manager.
Maven was used to streamline build operations, and it was smoothly connected with Jenkins for continuous integration. Nexus was also used for efficient repository management, allowing for the exchange of snapshots and releases for internal projects.
Managed Puppet classes, resources, packages, nodes, and other common tasks using Puppet console dashboard and live management.
Wrote Puppet manifests for deploying, configuring, and installing Shield, and for managing collectd for metric collection and monitoring.
Troubleshooting, event inspection and reporting of various Puppet issues and starting/restarting of Puppet enterprise services.
Used the LAMP (Linux, Apache, MySQL, PHP) stack to build some applications on Linux, especially Red Hat.

Environment: SVN, Jenkins, Nexus, GIT, ANT, MAVEN, Puppet, Ansible, Python Scripts, Shell Scripts, Sonar, Red Hat, Splunk, LAMP (Linux, Apache, MySQL, PHP)



Client: Accenture, Hyderabad, India | Jun 2013 to Oct 2016
Role: QA/ Test Engineer
Responsibilities:
Prepared and executed test cases and test scenarios.
Reported defects using the Mercury Quality Center defect reporting tool.
Used EMT (Enterprise Management tool) for telecom to manage data, debug scripts, and track defects.
Prepared data compatible with SP console and runner scripts, debugged scripts, executed tests and reported defects.
Used Java, SQL Server 2000, and WebLogic.
Used SoapUI, an open-source API testing tool, to test the functionality, performance, and security of both SOAP and RESTful web services.
Created automated test scripts, using widely adopted tools and frameworks, to verify that the software works correctly.
Integrated automated tests into the continuous integration (CI) pipeline using Jenkins.
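A minimal pytest-style sketch of the kind of automated API check wired into the Jenkins CI pipeline; the endpoint URL and expected response fields are hypothetical:

# Simple API health check run by pytest inside the CI pipeline
import requests

BASE_URL = "https://api.example.com"

def test_health_endpoint_returns_ok():
    response = requests.get(BASE_URL + "/health", timeout=10)
    assert response.status_code == 200
    assert response.json().get("status") == "UP"
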


Environment: Java, SQL Server 2000, WebLogic, SoapUI, CI pipeline, Jenkins, RESTful web services, EMT (Enterprise Management Tool)
