SUSMITHA GAJULA
Sr. AWS DevOps Engineer
[email protected]
(732) 709-1007, Ext: 129
Location: Mountain Lakes, New Jersey, USA
Visa: H1B


Professional Summary:

Around 10 years of hands-on experience supporting, automating, and optimizing mission-critical deployments in AWS, leveraging configuration management, CI/CD, and DevOps processes.
Experienced with principles and best practices of Software Configuration Management (SCM) in Agile/Scrum and Waterfall methodologies.
Experience in automation, build/release engineering, and software development on cloud computing platforms including Amazon Web Services (AWS), Azure, and Google Cloud Platform (GCP).
Keep up to date with the latest developments in BigQuery and other GCP services and incorporate them into the DevOps workflow as needed.
Experience in designing, architecting, and implementing scalable cloud-based web applications using AWS and GCP.
Served as Scrum Master for teams, focusing on guiding teams toward improving the way they work and facilitating overall sprint planning, including daily stand-ups, grooming, reviews/demos, and retrospectives.
Experienced with the AWS Cloud platform and its services, including EC2, VPC, EBS, AMI, SNS, RDS, CloudWatch, CloudTrail, CloudFormation, AWS Config, Elastic Load Balancing, Auto Scaling, CloudFront, IAM, S3, Glacier, and Route 53.
Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT to ensure successful deployment of web applications and database templates; expertise in architecting secure VPC solutions in AWS using network ACLs, security groups, and public/private network configurations.
Experience on AWS focusing on high availability, fault tolerance, and auto scaling using Terraform templates, along with CI/CD using AWS Lambda and AWS CodePipeline.
Utilized CloudWatch to monitor resources such as EC2 (CPU and memory), Amazon RDS services, and EBS volumes, to set alarms for notifications or automated actions, and to monitor logs for a better understanding and operation of the system (see the sketch at the end of this summary).
Implemented automation using configuration management tools such as Ansible and Chef.
Experience writing Ansible playbooks and deploying applications using Ansible.
Experience in designing and implementing cloud automation and orchestration frameworks in private/public cloud environments involving AWS APIs, OpenStack, VMware, Chef, Puppet, Python, Ruby, Azure APIs, and Workflow Engine.
Migrated applications to PKS and the GCP cloud.
Good knowledge and experience using Elasticsearch, CloudWatch, Nagios, Splunk, and Grafana for logging and monitoring.
Exposure to test data management tools like Delphix to create virtual production-like databases.
Designed and configured HashiCorp Vault roles and policies to secure the Kubernetes infrastructure.
Experienced with HashiCorp tools like Vault and Consul.
Worked with the HashiCorp Vault secret management tool to provide security for credentials, tokens, and API keys.
Worked on onboarding applications to Gremlin chaos engineering tests.
Experience working with container-based deployments using Docker images, Dockerfiles, and Docker Hub to link code repositories and to build and test images, and Docker Compose for defining and running multi-container applications.
Experience in Ansible configuration/deployment and writing Ansible playbooks to manage environment configuration files, packages, and users.
Set up GCP firewall rules to allow or deny traffic to and from VM instances based on specified configurations, and used GCP Cloud CDN (content delivery network) to deliver content from GCP cache locations.
Implemented Chef recipes for build deployments on internal data center servers, and reused and modified the same recipes to deploy directly to Amazon EC2 instances.
Developed build and deployment scripts using Maven as the build tool and automated the build and deploy processes using Jenkins to promote builds from one environment to another.
Handled work from the initial stage of development: creating branches, ensuring developers follow standards, creating build scripts, labelling, and automating the build and deploy processes using Jenkins plugins.
Experience using Nexus Repository Manager for Maven builds. High exposure to Remedy and JIRA defect-tracking tools for tracking defects and changes for change management.
Configuration and maintenance of NFS, Samba, Sendmail, LDAP, DNS, DHCP, and TCP/IP networking on Linux.
Imported and managed various corporate applications in GitHub repositories; managed Git, GitHub, Bitbucket, and SVN as source control systems and managed SVN repositories for branching, merging, and tagging.
Experience in migration from on-prem Teradata to a cloud GCP - Snowflake environment.
Committed, result-oriented, and hard-working, with a strong drive to learn new technologies and take on challenging tasks.
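
To illustrate the CloudWatch monitoring and alarming mentioned in the summary above, the following is a minimal, illustrative Python (boto3) sketch; the alarm name, instance ID, and SNS topic ARN are hypothetical placeholders, not values from any specific project.

# Minimal sketch (placeholder names): create a CloudWatch alarm on EC2 CPU
# utilization that notifies an SNS topic when the threshold is breached.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="web-ec2-high-cpu",  # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,                    # 5-minute evaluation window
    EvaluationPeriods=2,           # two consecutive breaches before alarming
    Threshold=80.0,                # alarm above 80% average CPU
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)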

Technical Skills:

Cloud Computing: Amazon AWS Cloud, OpenStack
DevOps Tools: Jenkins, Git, GitLab, Maven, Nexus Repository, SonarQube, AppDynamics
Infrastructure as Code: Terraform
Cloud Services: Azure, AWS, OpenStack, Snowflake
Configuration Management Tools: Chef, Ansible
Operating Systems: Linux, Windows
AWS Services: IAM, EC2, S3, RDS, SQS, SNS, CloudTrail, CloudWatch, EBS, VPC
Languages/Scripts: Python, JavaScript, Bash, CSS, HTML, C#
Container and Orchestration Tools: Kubernetes, Docker, AWS EKS
Ticketing and Tracking Tools: ServiceNow, Remedy, JIRA
Databases: MySQL, MS SQL Server 2008 R2, Oracle
SDLC: Agile, Scrum


CERTIFICATIONS

AWS Certified Solutions Architect Associate
EDUCATION
University of Central Missouri, Warrensburg, MO, USA Dec 2015
Master's in Computer Science

Jawaharlal Nehru Technological University, Kakinada, India May 2014
Bachelor's in Information Technology
PROFESSIONAL EXPERIENCE:

Barclays Bank, Whippany, NJ Jan 2022 - Till Date
DevOps Engineer

Responsibilities:
Responsible for designing, implementing, and supporting cloud-based infrastructure and its solutions.
Participate in the software development life cycle (SDLC) in Waterfall and Agile Scrum methodologies.
Acted as Scrum Master for the product team and managed sprint planning meetings, daily scrums, sprint reviews, product backlog refinement meetings, and sprint retrospectives.
Held daily stand-up meetings with the team to track the backlog in the Scaled Agile Framework using Scrum.
Configure Delphix engines, manage datasets on Delphix engines, and create and refresh virtual databases.
Worked on onboarding applications to Gremlin chaos engineering tests by communicating with application teams to gather detailed information covering architecture, integrations, and the underlying platform and infrastructure.
Worked on the OpenShift platform managing Docker containers and Kubernetes clusters, and created Kubernetes clusters using Ansible playbooks on Exoscale.
Built and maintained Docker/Kubernetes container clusters on GCP managed with Kubernetes, using Linux, Bash, Git, and Docker.
Analyzed existing test cases, data, and environment details required to conduct resiliency (chaos) tests.
Built and maintained Docker container clusters managed by OpenShift.
Worked on the Chef configuration management tool, updating roles for different applications.
Responsible for daily code reviews and approving PRs per the corresponding JIRA tickets.
Implemented Terraform for building, updating, and versioning infrastructure safely and efficiently, as well as creating custom in-house solutions with Ansible configurations.
Developed Terraform templates that spin up infrastructure for multi-tier applications and provisioned bootstrapped software on the cloud with Terraform.
Manage, configure, and upgrade various tools used in the CI/CD process such as Jenkins, Vault, Artifactory, Nexus, and Bitbucket Server.
Configured Elastic Load Balancers and Auto Scaling groups to distribute traffic and provide a cost-efficient, fault-tolerant, and highly available environment.
Designed and developed AWS CloudFormation templates to create custom VPCs, subnets, and NAT to ensure deployments of web applications.
Configured CloudWatch alerts to send notifications through SNS, using a Lambda function to forward them to a Slack channel (see the sketch at the end of this section's responsibilities).
Experience with private cloud and hybrid cloud configurations, patterns, and practices in Windows Azure and SQL Azure and in Azure web and database deployments.
Built and installed servers through Azure Resource Manager (ARM) templates.
Set up Azure virtual appliances (VMs) to meet security requirements for software-based appliance functions (firewall, WAN optimization, and intrusion detection).
Worked as an administrator on Microsoft Azure as part of the DevOps team for internal project automation and build/configuration management. Involved in configuring virtual machines, storage accounts, and resource groups.
Worked with Chef recipes/cookbooks for installing and updating Oracle 11g, ucDeploy agents, and 7zip, and for updating CentOS and LDAP servers.
Gained experience with Windows Azure IaaS: Virtual Networks, Virtual Machines, Cloud Services, Resource Groups, ExpressRoute, Traffic Manager, VPN, Load Balancing, Application Gateways, and Auto Scaling.
Expertise in scripting and programming languages like Shell scripting, Python for automating day to day administration tasks on cloud platforms.
Setup and maintained logging and monitoring subsystems using tools like Elasticsearch, Kibana, Splunk and Grafana.
Experience analyzing and correlating events through Splunk search strings and operational strings.
Work with development/QA teams to troubleshoot issues related to infrastructure and applications.
Configured and administered Jenkins for continuous integration and deployment into the Tomcat application server and to improve reusability when building pipelines.
Deployed Linux Kubernetes clusters with ACS from the Azure CLI.
Configured VM availability sets using the Azure portal to provide resiliency for IaaS-based solutions, and scale sets using Azure Resource Manager to manage network traffic.
Working knowledge of cloud PaaS platforms such as Pivotal Cloud Foundry, Azure, AWS, IBM Bluemix.
Responsible for creating and submitting weekly sprint status reports.
Piloted and shared the US Subscription Activation Campaign, driving early Azure cloud developer adoption.
Aligned Azure and Google Cloud Platform capabilities and services with workload requirements.
Experienced with deploying, maintaining, and troubleshooting applications on Microsoft Azure cloud infrastructure.
Exposed virtual machines and cloud services in VNets to the Internet using an external load balancer.
Extensive experience with Windows Azure (IaaS) migrations: creating Azure VMs, storage accounts, VHDs, and storage pools, migrating on-premises servers to Azure, and creating availability sets in Azure.
Used JIRA to maintain the product backlog and sprint backlog, create and track user stories, plan, track, and manage sprints, and create Scrum and Kanban boards, status reports, and burndown charts.
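
As a companion to the CloudWatch/SNS/Slack bullet above, the following is a minimal, illustrative Python sketch of a Lambda handler that forwards an SNS-delivered CloudWatch alarm to a Slack incoming webhook; the SLACK_WEBHOOK_URL environment variable is a hypothetical placeholder, not a project-specific value.

# Minimal sketch (assumed env var): Lambda handler that posts CloudWatch alarm
# notifications delivered via SNS to a Slack incoming webhook.
import json
import os
import urllib.request

SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]  # hypothetical webhook URL

def handler(event, context):
    # SNS wraps the CloudWatch alarm JSON in Records[0].Sns.Message
    alarm = json.loads(event["Records"][0]["Sns"]["Message"])
    text = "{} is {}: {}".format(
        alarm.get("AlarmName"), alarm.get("NewStateValue"), alarm.get("NewStateReason")
    )
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return {"status": resp.status}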

Environment: Bitbucket, Jenkins, OpenShift Container Platform, HashiCorp Vault, Azure, Gremlin, AWS, Terraform, AppViewX, UCD, Maven, JIRA, Nexus Repository, Confluence, Delphix, ServiceNow.

Comcast Cable Corporation - Philadelphia, PA March 2017 - Jan 2022
AWS Cloud/DevOps Engineer

Responsibilities:
Responsible for designing, implementing, and supporting cloud-based infrastructure and its solutions.
Managed, configured, and upgraded various tools used in the CI/CD process such as Jenkins, Vault, Artifactory, Nexus, and Bitbucket Server.
Worked on AWS services such as VPC, EC2, S3, ELB, Autoscaling Groups (ASG), EBS, RDS, IAM, CloudFormation, Elastic Beanstalk, Route 53, CloudWatch, CloudFront, CloudTrail, API Gateway, Lambda, SNS & SQS.
Used Amazon RDS Multi-AZ for automatic failover and high availability at the database tier for heavy MySQL workloads.
Worked on AWS cloud infrastructure to maintain web servers on EC2 instances with AMIs behind an Elastic Load Balancer, with Auto Scaling to maintain scalability and elasticity and scale servers up and down as required.
Managed the OpenShift cluster, including scaling the AWS app nodes up and down.
Used AWS CloudFront (content delivery network) to deliver content from AWS edge locations drastically improving user experience and latency.
Used CloudWatch logs to move application logs to S3 and created alarms in conjunction with SNS to notify of resource usage and billing events.
Implemented AWS CodePipeline and created CloudFormation JSON templates in Terraform for infrastructure as code.
Created alerts and monitoring dashboards using Grafana for all the microservices deployed in AWS.
Created multiple Terraform modules to manage configuration, applications, and services, and to automate the installation process for web servers and AWS instances.

Created wiki pages to document the events, thresholds for shared pipelines for VPC association and included the links to each significant event from Datadog/ELK.
Set up Datadog monitoring across different servers and AWS services.
Created Datadog dashboards for various applications and monitored real-time and historical metrics.
Created system alerts using various Datadog tools and alerted application teams based on escalation matrix.
Monitored performance and history of infrastructure with tools such as CloudWatch, Datadog.
Worked in all areas of Jenkins: setting up CI for new branches, build automation, plugin management, securing Jenkins, and setting up master/slave configurations.
Worked on a Docker registry for storing in-house Docker images and using them in the CI/CD system, assigning volumes for storing Docker images inside the system or customizing the storage location using bind mounts.
Created snapshots for production releases that were well tested and had passed through the STAGE environment.
Used CloudFormation to create different services by scripting in JSON format. Integrated Jenkins with GitHub private repositories, build automation tools (Maven and Ant), and an artifact repository for pushing successfully built code.
Worked on a cloud migration project from on-premises to the AWS cloud.
Wrote complex SQL Statements to validate data and ensure system integrity and security in Oracle DB.
Executed issue post-mortems when there were hard stops in the offer flow through xfinity.com, verifying that products and services related to cable video, voice, data, and home security were configured properly, and updated the details in JIRA.
Created a site-to-site VPN between on-premises and Azure, using RAS for secure replication of the on-premises domain controller to the newly created Microsoft Azure domain controller.
Configured offers using extensive analysis, considering multiple permutations and combinations to ensure there is no financial impact.
Developed Stored Procedures, Functions, PL/SQL Queries, Indexes and Triggers for fetching Transaction details, Customer Details, and Product Configuration data.
Solved complex technical and data-related issues by debugging code, performing non-routine problem analysis, and applying advanced analytical methods as needed. Wrote complex SQL queries using joins and developed custom procedures and functions using PL/SQL to generate various reports to predict business trends.
Monitored production fallouts and triaged errors through testing and troubleshooting to resolve issues.
Worked on production incidents using ServiceNow and closed them within the SLA.
Issue identification, data analysis, and security analysis through Splunk.
Configured S3 versioning and lifecycle policies to back up files and archive them to Glacier (see the sketch at the end of this section's responsibilities).
Set up monitoring for applications running in the cloud using AppDynamics.
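
To illustrate the S3 versioning and lifecycle bullet above, here is a minimal Python (boto3) sketch; the bucket name, prefix, and retention periods are hypothetical placeholders.

# Minimal sketch (placeholder bucket/prefix): enable versioning and add a
# lifecycle rule that archives older objects to Glacier.
import boto3

s3 = boto3.client("s3")
bucket = "example-app-backups"  # hypothetical bucket name

s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-backups",
                "Filter": {"Prefix": "backups/"},  # placeholder prefix
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
            }
        ]
    },
)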

Environment: GitLab, Ansible, OpenShift, Maven, Docker, Kubernetes, JIRA, Nexus, JSON, Python, Azure, AWS (EC2, VPC, S3, RDS, CloudFormation, CloudWatch, CloudTrail, Route 53), Kanban, AppDynamics, Datadog, Splunk, Jenkins, Grafana, Bash.

Trizetto Corporation, Denver, CO Oct 2016 - March 2017
DevOps /AWS Engineer
Responsibilities:

Good knowledge of the foundations of US health care, how it works, and government programs like Medicare and Medicaid.
Worked on Google Cloud Platform (GCP) services like Compute Engine, Cloud Load Balancing, Cloud Storage, Cloud SQL, Stackdriver Monitoring, and cloud deployment.
Performed analysis of the different stages of the system development life cycle to support development and testing efforts, identify positive and negative trends, and formulate recommendations for process improvements and development standards.
Developed and maintained structured and well-documented code in C# using Visual Studio.
Expertise in scripting and programming languages like Shell scripting, Python for automating day to day administration tasks on cloud platforms.
Created AWS Launch configurations based on customized AMIs and used them to configure auto scaling groups.
Configured VPCs and secured them using multi-tier protection: security groups (at the instance level) and network access control lists (NACLs, at the subnet level).
Configured auto scaling policies to scale EC2 instances up or down based on ELB health checks and resource monitoring, and linked alarms to auto scaling events (see the sketch at the end of this section's responsibilities).
Developed automation framework to deploy CloudFormation stacks.
Managing Amazon Web Services (AWS) infrastructure with automation and orchestration tools such as Chef and Ansible.
Recommended NoSQL databases with DynamoDB for specific applications like ad tech and IoT that require millisecond latency at any scale.
Worked with AWS Direct Connect and VPN CloudHub to connect to the cloud and provide secure connections between sites with multiple VPN connections.
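
As a companion to the auto scaling bullet above, here is a minimal Python (boto3) sketch of a target-tracking scaling policy on an Auto Scaling group; the group name and target value are hypothetical placeholders.

# Minimal sketch (placeholder ASG name): target-tracking policy that keeps
# average CPU across the group near a chosen target.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # hypothetical Auto Scaling group
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,         # keep average CPU near 60%
    },
)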

Environment: Bash, JSON, EC2, S3, Glacier, RDS, VPC, Direct Connect, Route 53, CloudWatch, OpsWorks, IAM, WAF, SNS, ELB, AWS CloudFront, AWS AMIs

Techno Rocket Systems, Dallas, TX Mar 2015 - Oct 2016
Cloud DevOps Engineer
Responsibilities:
Built a CI/CD pipeline using Git, Ant, Maven, and Jenkins for Java and middleware applications.
Involved in the software development life cycle (SDLC) of applications from the design phase through implementation, testing, deployment, and maintenance.
Migrated Jenkins distributed build system from local machines to AWS.
Creating new jobs in Jenkins and managing the build related issues.
Created containerized build and test environments using Docker.
Setup Static Code Analysis and Code Coverage to ensure quality of code.
Worked with CI/CD principles according to organizational standards.
Followed Agile principles and used JIRA for maintenance and bug development tasks.
Handled JIRA tickets for SCM support activities and gained experience with JIRA workflows.
Performed all necessary day-to-day Git support for different projects.
Configured Elastic Load Balancers with EC2 Auto scaling groups.
Implemented Jenkins pipeline as code using Groovy.
Involved in Designing and deploying AWS solutions using EC2, S3, RDS, EBS, Elastic Load Balancer and Auto scaling groups.
Worked on Docker container snapshots, attaching to running containers, removing images, managing containers, and setting up environments for development and testing with port redirection and volumes (see the sketch at the end of this section's responsibilities).
Created automated Unit test plans and performed Unit testing modules according to the requirements and development standards.
Developed and maintained structured and well-documented code in C# using Visual Studio.
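
To illustrate the Docker snapshot and container management bullet above, here is a minimal sketch using the Docker SDK for Python; the image names, tags, and port mapping are hypothetical placeholders.

# Minimal sketch (placeholder names): run a test container, snapshot (commit)
# it to a new image, then clean up the container and dangling images.
import docker

client = docker.from_env()

# Run a throwaway test container, publishing container port 80 on host port 8080
container = client.containers.run("nginx:alpine", detach=True, ports={"80/tcp": 8080})

# Snapshot the running container as a new image
container.commit(repository="test-env/nginx", tag="snapshot-1")  # placeholder repo/tag

# Tear down the container and prune dangling images
container.stop()
container.remove()
client.images.prune(filters={"dangling": True})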

Environment: Jenkins, Shell Scripting, Docker, Maven, Git, JIRA, SVN, Ansible, Unix, Artifactory, and AWS Cloud.

Pruthvi Information Solutions Limited, Hyderabad, India June 2013 - Dec 2014
Build &Release Engineer
Responsibilities:
Designed Puppet modules using Ruby to provision several pre-prod environments.
Responsible for the maintenance and development of processes and supporting tools/scripts for the automatic building, testing, and deployment of products to various development environments.
Developed Puppet modules for the installation and auto-healing of various tools like Bamboo, Nolio agents, MSSQL, and Nexus; these modules were designed to work on both Windows and Linux platforms.
Involved in the migration of the Bamboo server, Artifactory, and the Git server. Responsible for writing hooks and triggers using Perl. Built Java applications using Ant.
Successfully implemented a master-slave architecture setup to improve the performance of Jenkins.
Configured and administered a CI/CD pipeline using Bamboo for integration, Ant as the build tool, and Git as the source code management tool.
Verified and rectified errors that occurred in the CI/CD pipeline setup.
Set up constant security checks on the CI/CD pipeline to monitor for events as they occur and resolve them quickly.
Experienced in migrating data from SVN to GIT.
Environment: Java, J2EE, SVN (Subversion), Ant, Bamboo, JIRA, Shell/Perl Scripting, Nagios, WebSphere, UNIX.