Sreenivas Anda
Senior DevOps/Cloud Engineer
Ph: +1 (737) 342-6379
Email: [email protected]
Location: Austin, Texas, USA
Relocation: Yes
Visa: H1B
Professional Summary:
10 years of IT industry experience overall as a DevOps/Cloud Engineer, including production support of various
applications in Red Hat Enterprise Linux 7.x, 6.x, 5.x and Windows OS environments.
Experienced in designing, implementing, and managing cloud infrastructure on both Azure and AWS, with a focus on
creating scalable and reliable solutions that meet business requirements.
Proficient in leveraging Azure services such as Virtual Machines, ARM, SQL Database, and Azure Functions, as well as
AWS services such as EC2, S3, RDS, and Lambda, to build and deploy cloud-based solutions.
Proficient in using Azure Repos for version control and Azure Pipelines for continuous integration and deployment
(CI/CD) of applications.
Proficient in integrating Databricks with major cloud platforms like AWS and Azure, streamlining data workflows and
enhancing data accessibility.
Experienced in managing multi-cloud environments, including cost optimization, resource allocation, and
performance monitoring, to ensure efficient use of cloud resources.
Led the end-to-end implementation of Azure DevOps pipelines, enabling seamless CI/CD processes, improving
deployment frequency by 50%, and enhancing collaboration across development and operations teams through
integrated tools and automated workflows.
Experience in installing and managing network-related services such as DNS, LDAP, Apache HTTPD, and SMTP.
Experience in working with different virtualization environments such as VMware and Red Hat Virtualization.
Seasoned Senior DevOps Engineer with expertise in designing, implementing, and maintaining CI/CD pipelines on
Azure DevOps.
Utilized Python for configuration management and infrastructure as code using tools such as Ansible and Terraform.
Managed infrastructure as code for Guidewire environments, using tools like Terraform or CloudFormation to
provision and configure resources.
Leveraged deep expertise in Splunk to architect and implement robust monitoring solutions, enabling real-time
visibility into application performance and infrastructure health, leading to faster incident response and improved
system reliability.
Analysed AWS billing to identify cost-saving opportunities, utilizing tools like AWS Cost Explorer and Trusted Advisor.
Implemented strategies such as right-sizing instances, leveraging Reserved Instances, and optimizing storage to
reduce monthly costs by 20%.
Implemented and managed Jenkins pipelines to automate build, test, and deployment processes (a remote-trigger sketch follows this summary).
Optimized Guidewire application performance by tuning infrastructure components and identifying areas for
improvement.
Very good understanding of the concepts and implementation of high availability, fault tolerance, failover,
replication, backup, recovery, Service-Oriented Architecture (SOA), and various Software Development Life Cycle
(SDLC) methods.
Utilized the AWS CLI to automate backups of ephemeral data stores to S3 buckets and EBS, and to create nightly AMIs
of mission-critical production servers as backups (a backup sketch follows this summary).
Designed and implemented Azure networking solutions, including VNet, VPN, ExpressRoute, and Azure Firewall.
Proactively identified and implemented cost-cutting features, optimizing resource utilization and enhancing overall
efficiency.
Familiar with cloud-native technologies and best practices, such as serverless computing, microservices architecture,
and containerization, to design and deploy modern cloud applications.
Implemented documentation best practices and standards, such as version control and collaboration tools, to
facilitate knowledge sharing and improve team efficiency.
Mentored fellow team members on best practices for production maintenance, ensuring adherence to standards and
improving team efficiency. Introduced new ideas for proactive maintenance, resulting in a decrease in critical incidents.
Managed end-to-end release process for multiple projects, creating detailed run sheets and process documentation.
This resulted in a 30% reduction in release failures and improved coordination among cross-functional teams.
Documented Guidewire deployment processes and configurations to maintain a reliable and reproducible
deployment environment.
Strong troubleshooting and problem-solving skills, with the ability to quickly identify and resolve issues in complex
cloud environments spanning multiple cloud providers.
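A minimal sketch of the nightly AMI backup automation referenced in the summary above, written with boto3 rather than the raw AWS CLI; the region, the Backup=true tag convention, and the naming scheme are illustrative assumptions, not details from the original environment.

# Nightly AMI backup sketch (boto3). Assumes credentials come from the environment
# or an instance profile; the Backup=true tag is a hypothetical convention.
import datetime
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def backup_tagged_instances():
    """Create a no-reboot AMI for every instance tagged Backup=true."""
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d")
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
    )["Reservations"]
    for reservation in reservations:
        for instance in reservation["Instances"]:
            instance_id = instance["InstanceId"]
            image = ec2.create_image(
                InstanceId=instance_id,
                Name=f"nightly-{instance_id}-{stamp}",
                NoReboot=True,  # avoid rebooting production servers
            )
            print(f"Created AMI {image['ImageId']} for {instance_id}")

if __name__ == "__main__":
    backup_tagged_instances()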
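A second small illustration, this time of driving Jenkins builds from Python as mentioned in the Jenkins pipeline bullet; the server URL, credentials, job name, and parameters are all placeholders, and the python-jenkins package is assumed to be available.

# Trigger a parameterized Jenkins job from Python (python-jenkins library).
# URL, credentials, job name, and parameters below are placeholders.
import jenkins

server = jenkins.Jenkins(
    "https://jenkins.example.com",
    username="automation-user",
    password="api-token",
)

# Kick off a build of a hypothetical deployment job with parameters.
server.build_job("deploy-service", {"ENVIRONMENT": "staging", "VERSION": "1.4.2"})

# Basic visibility into the controller we just connected to.
print(f"Connected to Jenkins {server.get_version()}, {len(server.get_jobs())} jobs visible")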

Technical Summary:
Cloud Technologies: Azure, Amazon Web Services (AWS)
CI Tools: Jenkins, GitLab, Azure Pipelines
Version Control Tools: SVN, Bitbucket, GitHub, GitLab
CM Tools: Chef, Ansible
Containerization: Docker, Kubernetes
IaC: CloudFormation, Terraform
Data Warehousing: Snowflake
Databases: MySQL, Oracle
SDLC: Agile, Scrum, Waterfall
Build Tools: Maven, Gradle
Repositories: Nexus, Artifactory, JFrog
Monitoring Tools: Splunk, Grafana, Prometheus
Languages/Scripting: Java, Python, Shell Scripting
Operating Systems: Windows Server 2000/2003/2008/XP, Windows 7, Linux (RHEL)
Professional Experience:
Client: HP Inc., TX    Nov 2023 - Present
Role: Lead DevOps Engineer
Responsibilities:
Managed Azure services, including Azure App Service, Azure SQL Database, and Azure Virtual Machines, ensuring high
availability and scalability of applications.
Integrated third-party health data APIs with Azure services to enable real-time data exchange, improving customer
service and claims processing efficiency.
Implemented branching and merging strategies in Azure Repos, ensuring code stability and facilitating parallel
development efforts.
Developed custom Splunk dashboards to visualize the metrics, facilitating data-driven decision-making and
performance monitoring.
Integrated Databricks with Azure services to streamline data ingestion, processing, and storage workflows.
Created pipelines in Azure Data Factory using datasets and pipeline activities to extract, transform, and load data
between sources such as Azure SQL Database, Blob Storage, and Azure SQL Data Warehouse, including write-back flows.
Successfully designed and implemented scalable and secure data storage solutions using Azure Data Lake Storage
(ADLS), enhancing data accessibility and management.
Managed Azure resource groups and resources using ARM, maintaining an organized and efficient Azure
environment.
Automated alerting and monitoring workflows in Splunk based on events and metrics, ensuring proactive issue
detection and resolution.
Worked on Snowflake schema design, data modelling, source-to-target mappings, and design elements.
Implemented security measures in Jenkins, including authentication, authorization, and role-based access control
(RBAC), ensuring secure access to Jenkins resources.
Integrated Bash and Python scripts with CI/CD pipelines (e.g., Jenkins) to automate the build, test, and deployment
processes, enabling continuous integration and continuous deployment practices.
Implemented data automation solutions using Python and Bash, leveraging APIs and data manipulation libraries to
extract, transform, and load data, improving data accuracy and reducing manual effort.
Analysed existing SQL scripts and redesigned them using PySpark SQL for faster performance.
Experienced in monitoring and logging solutions on Azure, including Azure Monitor, Application Insights, and Log
Analytics, to ensure high availability and performance of applications.
Strong experience in migrating other databases to Snowflake.

Designed and implemented serverless data processing workflows, including state machines, error handling, and retry
mechanisms.
Developed and implemented automated scripts in Bash and Python to streamline manual processes, resulting in a
significant reduction in time and effort required for repetitive tasks.
Created Bash and Python scripts for automating software builds, and configuration management, leading to improved
efficiency and consistency in the software development lifecycle.
Managed Kubernetes clusters, ensuring high availability, fault tolerance, and efficient resource utilization.
Implemented various deployment strategies in Kubernetes, including Blue-Green deployments and canary releases to
minimize downtime.
Tuned Jenkins performance by optimizing resource utilization, job scheduling, and build parallelization, improving
overall pipeline efficiency.
Implemented auto-scaling configurations such as HPA and load balancing in Kubernetes to handle varying workloads
and optimize application performance.
Integrated monitoring tools with Kubernetes to capture resource usage metrics and facilitate efficient logging for
troubleshooting.
Proficient in writing CloudFormation templates (CFTs) in YAML and JSON to build cloud services following the
Infrastructure as Code paradigm, as sketched below.
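As an illustration of the CloudFormation-as-code work above, a minimal sketch that deploys a tiny template from Python with boto3; the stack name, the single S3 bucket resource, and the tags are assumptions rather than the actual templates used on the project.

# Minimal CloudFormation deployment sketch (boto3). The template is deliberately
# small (one versioned S3 bucket); real stacks would define far more resources.
import json
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ArtifactBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"VersioningConfiguration": {"Status": "Enabled"}},
        }
    },
    "Outputs": {"BucketName": {"Value": {"Ref": "ArtifactBucket"}}},
}

cfn.create_stack(
    StackName="demo-artifact-stack",
    TemplateBody=json.dumps(template),
    Tags=[{"Key": "owner", "Value": "devops"}],
)

# Block until creation finishes (or raises on failure), then report.
cfn.get_waiter("stack_create_complete").wait(StackName="demo-artifact-stack")
print("Stack demo-artifact-stack created")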

Client: iCare NSW, HYD    Dec 2020 - June 2023
Company: Capgemini
Role: Senior DevOps Engineer
Responsibilities:
Worked on installation of Red Hat Enterprise Linux 7.x, 6.x, 5.x and Windows Server OS on virtual and physical
servers in development, test, and production environments.
Administered all servers, including Linux virtual servers, Windows virtual servers, and Linux physical servers.
Implemented containerization with Docker and Kubernetes, improving scalability and resource utilization for
microservices architecture.
Hands-on involvement in Amazon Web Services (AWS) provisioning, with strong knowledge of AWS services such as
EC2, S3, Glacier, ELB (Elastic Load Balancing), and EBS.
Worked on configuration and maintenance of multiple AWS instances, security groups, ELBs, and AMIs.
Managed large-scale Kubernetes clusters in production environments, ensuring high availability and scalability.
Received recognition for outstanding performance in ensuring the security and compliance of infrastructure
components, contributing to successful audits.
Implemented Infrastructure as Code practices using tools like Terraform to automate Kubernetes cluster provisioning
and configuration.
Set up Jenkins in a high availability configuration, ensuring continuous availability and reliability of CI/CD processes.
Implemented Horizontal Pod Autoscaling (HPA) to automatically scale pods based on CPU utilization, improving
application responsiveness during peak loads (see the HPA sketch at the end of this section).
Introduced Amazon Workspaces to provide individual VMs for developers, improving flexibility and reducing
infrastructure costs by 20% compared to traditional VM provisioning methods.
Provided leadership from offshore by effectively coordinating with onshore teams and providing regular updates to
customers.
Ensured high availability of Guidewire applications by implementing monitoring and alerting solutions to quickly
identify and resolve issues.
Implemented Kubernetes security best practices, such as network policies, RBAC, and pod security policies, to secure
cluster environments.
Managed the production environment for large data applications such as PolicyCenter, ensuring high availability and
performance.
Managed resource onboarding and offboarding processes, ensuring smooth transitions for new hires and departing
team members. Created documentation and conducted training sessions to familiarize new team members with
processes and tools, while ensuring access revocation and knowledge transfer for departing members.
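A minimal sketch of the HPA configuration referenced earlier in this section, using the official Kubernetes Python client instead of raw YAML; the deployment name, namespace, and scaling thresholds are illustrative placeholders.

# Create a Horizontal Pod Autoscaler with the kubernetes Python client.
# Deployment name, namespace, and thresholds are placeholder values.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside a pod

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="policy-app-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="policy-app"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out above 70% average CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="production", body=hpa
)
print("HPA created for deployment policy-app")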

Client: SICORP    Sep 2018 - Dec 2020
Company: Capgemini
Role: DevOps Engineer
Responsibilities:
Provided support for insurance applications on OS servers.
Created AWS launch configurations based on customized AMIs and used them to configure Auto Scaling groups;
implemented AWS solutions using EC2, S3, Route 53, Elastic Load Balancing, and Auto Scaling groups (a launch-configuration sketch follows this section).
Expertise in configuring Red Hat cluster nodes for legacy applications and verifying daily health checks on the
cluster nodes.
Managed all CM tools (SVN, Maven, Jenkins, Bitbucket, GitHub) and their usage processes, ensuring traceability,
repeatability, quality, and support.
Implemented backup and recovery strategies for Jenkins configurations ensuring minimal downtime in case of
failures.
Virtualized servers using Docker for test and development environment needs, and automated configuration using
Docker containers.
Scheduled and coordinated year-end activities such as data archiving, system maintenance, and reporting. Handled ad
hoc requests promptly, ensuring minimal disruption to regular operations and meeting business needs.
Maintained appropriate file and system security; monitored and controlled system and file access; changed
permissions and ownership of files and directories; maintained passwords; assigned special privileges to selected
users; and monitored process status to increase system efficiency.
Provided troubleshooting solutions through automation and monitored system stats for applications via Splunk
dashboards during new releases.
Supported Guidewire applications in production, resolving issues and implementing enhancements to improve
system performance and user experience.
Installed and updated packages using yum and rpm; experienced in task scheduling and system backups.
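A minimal sketch of the launch-configuration and Auto Scaling setup described in this section, using boto3; every identifier below (AMI ID, key pair, security group, subnets, ELB name) is a placeholder rather than a real resource.

# Launch configuration + Auto Scaling group sketch (boto3); all IDs are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Launch configuration built from a customized AMI.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="app-lc-v1",
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.medium",
    KeyName="app-keypair",
    SecurityGroups=["sg-0123456789abcdef0"],
)

# Auto Scaling group that uses the launch configuration behind a classic ELB.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="app-asg",
    LaunchConfigurationName="app-lc-v1",
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    LoadBalancerNames=["app-elb"],
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",
)
print("Auto Scaling group app-asg created")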

Client: Nestle    Nov 2014 - Aug 2018
Company: Tech Mahindra
Role: Cloud Engineer
Responsibilities:
Performed systems administration, maintenance, and monitoring of various day-to-day operations.
Experience using CloudFormation to build infrastructure stacks and manage configuration.
Worked on configuration of AWS services (EC2, S3, IAM, Amazon Glacier, EBS, VPC, Elastic Load Balancing, Amazon
CloudWatch, Auto Scaling, CloudFormation).
Provided highly durable and available data using S3 storage, versioning, and lifecycle policies, and created AMIs of
mission-critical production servers for backup (see the S3 lifecycle sketch at the end of this section).
Created and maintained users, profiles, security rights, disk space, LVMs, and process monitoring; worked with
Red Hat Package Manager (RPM) and YUM; scheduled jobs using cron.
Experience in providing day-to-day user administration, such as adding/deleting users in local and global groups on
the Red Hat Linux platform, and managing user queries.
Experience installing and configuring SSH (Secure Shell) for secure access to Ubuntu and Red Hat Linux, and creating
and maintaining user accounts with stipulated permissions, secure passwords, etc. on Linux server platforms.
Developed tools/scripts to automate integration with other IT tools in support of accurate asset management, cyber
reporting capabilities, and license management.
Configured Kickstart servers on Linux 5.x/6.x and JumpStart servers on Solaris 10 and 11, and built servers using
Red Hat Kickstart and Solaris JumpStart.
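A minimal sketch of the S3 durability setup mentioned above, enabling versioning plus a lifecycle policy that transitions older objects to Glacier; the bucket name, prefix, and retention windows are assumptions.

# S3 versioning + lifecycle sketch (boto3). Bucket name and day counts are placeholders.
import boto3

s3 = boto3.client("s3")
bucket = "example-backup-bucket"

# Turn on versioning so overwritten or deleted objects remain recoverable.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Lifecycle rule: move backups to Glacier after 30 days, expire them after 365.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
print(f"Versioning and lifecycle policy applied to {bucket}")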
Professional Certification/Training:
AWS Certified Cloud Practitioner (CLF-C01), Solutions Architect provided by Amazon