Akhila Thotakura (Akhila Ch) - DevOps Engineer
Email: [email protected]
Phone: 770-709-4679
Location: Crofton, Maryland, USA
Relocation: Remote only
Visa: H1B
LinkedIn: https://www.linkedin.com/in/akhila-ch-791923229/


PROFESSIONAL SUMMARY:

12+ years of experience in the IT industry as a Cloud/DevOps Admin/Engineer, with a major focus on cloud infrastructure providers, data center migration, containerization technologies, configuration management, IaC, CI/CD pipelines, disaster recovery, and virtualization technologies, using a range of tools and cloud services across AWS, Azure, and GCP.
Designed and deployed applications utilizing the AWS stack (ELB, VPC, RDS, DynamoDB, EBS, EKS, SNS, SQS, KMS, EC2, S3, Route 53, Lambda, Kinesis, MSK, Elastic Beanstalk, and API Gateway), focusing on high availability, fault tolerance, and auto-scaling; provisioned services with CloudFormation and OpsWorks, applied security practices (IAM, CloudWatch, CloudTrail), and built data lake platforms with Athena, AWS Glue, Lake Formation, and EMR.
Collaborated with cross-functional teams to ensure the scalability, reliability, and performance of applications, and worked closely with development and operations teams to optimize and scale AWS infrastructure.
Designed and implemented AWS disaster recovery architectures with Active-Active and Active-Passive models.
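The Active-Passive model mentioned above can be illustrated as a simple failover selector (a minimal sketch only; the region names and the shape of the health-check input are assumptions, not from any specific deployment):

```python
def select_active_region(health, primary="us-east-1", secondary="us-west-2"):
    """Active-passive failover: serve from the primary region while it
    passes health checks; fail over to the secondary only when the
    primary is down. `health` maps region name -> bool (True = healthy)."""
    if health.get(primary, False):
        return primary
    if health.get(secondary, False):
        return secondary
    raise RuntimeError("no healthy region available")
```

In an Active-Active model, by contrast, both regions would serve traffic simultaneously and the failover logic reduces to removing the unhealthy region from rotation.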
Configured cluster operations in AWS Kubernetes (EKS) to deploy microservices through a CI/CD system; scaled cluster operations, maintained cluster services, configured load balancing and network policies, and grouped Docker containers across different platforms.
Containerized applications with Docker using custom Dockerfiles and orchestrated them with Docker Swarm and Kubernetes, both on-premises and in the cloud.
Used Jenkins to automate most build-related tasks and improved the throughput and efficiency of the build system by granting EO/managers rights to trigger required builds.
Extensively involved in infrastructure as code, execution plans, resource graph and change automation using Terraform.
Implemented Istio mesh on Kubernetes cluster (EKS) and configured service entry, gateways to manage incoming and outgoing traffic to Kubernetes cluster
Developed and maintained Helm charts for various use cases.
Experience in installing and configuring various components like Map Reduce, Hive, Pig, HBase, Sqoop, Hue, Oozie, Spark, Kafka, Yarn, ZooKeeper, NiFi in Apache Hadoop eco-system using Hortonworks distribution (HDP & HDF).
Hands on experience on performing administration, configuration management, monitoring, debugging, NameNode Recovery, HDFS High Availability, writing Hadoop Shell commands and performance tuning on Hadoop Clusters.
Built industry-standard data lakes on both on-premises and cloud platforms.
Experience writing Ansible playbooks; used Ansible AWX and Ansible Vault to secure deployments.
Expertise in configuration management tools like Ansible, Chef and Puppet by creating custom playbooks, recipes.
Installed and configured CI/CD tools like Jenkins, Team City, Harness, Concourse, Azure DevOps.
Integrated artifact storage JFrog Artifactory to CI/CD build tools.
Experience in managing repository managers such as Nexus for Maven builds. Integrated Maven with Jenkins so that the Surefire test reports and Javadoc produced by Maven are captured, and configured parallel module builds.
In-depth knowledge of computer applications and scripting like Shell, Ruby, Groovy, Python, YAML, Perl, and XML.
Experience in installing and developing on the ELK stack. Used Elasticsearch not only for search but, together with Logstash and Kibana, for end-to-end logging and monitoring of our systems using Beats.
Expertise in Installing, Configuring, Managing the monitoring tools such as Splunk, New Relic and Nagios for Resource Monitoring/Network Monitoring/ Log Trace Monitoring.
Experience in using Prometheus as monitoring tool and Grafana for analysis & visualization.
Hands-on experience with JIRA as a defect tracking system; configured various workflows, customizations, and plugins for the JIRA bug/issue tracker, and integrated Jenkins with JIRA and GitHub.
Utilized MySQL, MongoDB, DynamoDB, and ElastiCache to perform essential database administration.
Hands-on experience configuring Redshift, Elasticsearch, and DynamoDB with EC2 instances.
Installed MongoDB on physical machines, Virtual machines as well as AWS.
Performed backup and recovery, database optimization, and security maintenance; experienced in installing, configuring, and troubleshooting Red Hat and VMware ESX environments.
Worked on Linux server virtualization by creating Linux VMs for server consolidation. Configured and administered VMware ESXi, vCenter, vSphere Client, and Linux/Windows clients.
Experience in configuration and maintenance of common Linux services such as Tomcat, Apache, MySQL, NFS, FTP, Postfix, LDAP, DHCP, DNS BIND, HTTP, HTTPS, SSH, iptables and firewall etc.
Installed, configured and administered Red Hat Linux servers and support for servers on various hardware platforms.
Worked on all phases of Software Development Lifecycle and handled change management process for application development.



PROFESSIONAL EXPERIENCE:
Lead DevOps Engineer
T-mobile - Dallas, TX May 2022 to Present
Responsibilities:
Worked on designing and deploying a multi-tier application utilizing almost all of the main services of the AWS stack (EC2, S3, RDS, VPC, IAM, ELB, CloudWatch, Route 53, CDN, Lambda, and CloudFormation), focused on a high-availability, fault-tolerant environment.
Implemented AWS security best practices by implementing IAM, integrating it with SAML Okta for single sign-on, and setting up guardrails for cross-account access between AWS accounts.
Worked with development teams to optimize and scale AWS infrastructure as needed and introduce AWS best practices.
Created Terraform modules to provision various AWS resources (EC2, VPC, EKS, MSK, IAM, Lambda, API Gateway, SQS, RDS, and S3) to make the current environment reusable via the Terraform AWS provider.
Utilized the AWS CLI to set up scalability for application servers, set up and administer DNS in AWS using Route 53, and manage users and groups with AWS Identity and Access Management (IAM).
Configured and maintained the AWS Kubernetes cluster (EKS), managing all upgrades, node groups, and cluster operations.
Automated AWS infrastructure and CI build/deployment pipelines for Java-based applications using Python/Bash scripting and GitLab to deploy services on Kubernetes.
Configured the Kubernetes Cluster Autoscaler for node autoscaling, and VPA/HPA for pod autoscaling.
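The HPA configuration above relies on the scaling rule documented by Kubernetes: desired replicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to the configured bounds. A minimal sketch of that formula (the default bounds here are illustrative):

```python
import math

def hpa_desired_replicas(current_replicas, current_metric, target_metric,
                         min_replicas=1, max_replicas=10):
    """Kubernetes HPA scaling rule:
    desired = ceil(current_replicas * current_metric / target_metric),
    clamped to the [min_replicas, max_replicas] bounds."""
    desired = math.ceil(current_replicas * (current_metric / target_metric))
    return max(min_replicas, min(max_replicas, desired))
```

For example, 3 pods averaging 90% CPU against a 60% target scale to 5 pods; the max-replicas bound caps runaway scale-ups.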
Deployed and configured Kubecost on the Kubernetes cluster to identify costs and improve cluster utilization.
Implemented Istio mesh on Kubernetes cluster and configured service entry, gateways to manage incoming and outgoing traffic to Kubernetes cluster
Wrote and maintained Helm charts for various deployments.
Built an in-house Helm repository for custom Helm charts and maintained it.
Set up and configured Kafka clusters for different use cases across the organization.
Set up the Prometheus stack for monitoring and alerting.
Set up Thanos for Prometheus to integrate multiple clusters and provide a longer retention period.
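A key property of the Kafka clusters above is key-based partitioning: messages with the same key always land on the same partition, preserving per-key ordering. A simplified sketch (Kafka's default partitioner actually uses murmur2; a stable stdlib checksum is substituted here purely for illustration):

```python
import zlib

def assign_partition(key: bytes, num_partitions: int) -> int:
    """Deterministically map a message key to a partition so that all
    messages for one key preserve ordering within that partition.
    (Illustrative stand-in for Kafka's default murmur2 partitioner.)"""
    return zlib.crc32(key) % num_partitions
```

Because the mapping is deterministic, repartitioning (changing `num_partitions`) reshuffles keys, which is why partition counts are chosen up front per use case.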
Configured various Prometheus exporters (node exporter, CloudWatch exporter, blackbox exporter, Kafka exporter, etc.) to monitor major infrastructure components using Prometheus.
Set up alerts using Alertmanager and integrated it with PagerDuty.
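All of the exporters above expose samples in the Prometheus text exposition format, e.g. `node_cpu_seconds_total{cpu="0",mode="idle"} 123.4`. A minimal sketch of parsing one such line (simplified: it does not handle escaped quotes or commas inside label values):

```python
import re

# One sample line: metric name, optional {label="value",...} set, numeric value.
LINE_RE = re.compile(r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'
                     r'(?:\{(?P<labels>[^}]*)\})?\s+(?P<value>\S+)$')

def parse_metric_line(line):
    """Return (name, labels_dict, value) for one sample line, or None
    for comments, blank lines, and lines that do not match."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    m = LINE_RE.match(line)
    if not m:
        return None
    labels = {}
    if m.group("labels"):
        for pair in m.group("labels").split(","):
            k, v = pair.split("=", 1)
            labels[k.strip()] = v.strip().strip('"')
    return m.group("name"), labels, float(m.group("value"))
```

Prometheus itself scrapes and parses this format from each exporter's `/metrics` endpoint; the sketch only shows the shape of the data involved.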
Configured the ELK stack for centralized monitoring using Filebeat, Logstash, Elasticsearch, and Kibana.
Configured and maintained Filebeat as a DaemonSet on each node to collect application logs and parsed them using Logstash.
Developed Kibana dashboards, visualizations, alerts, and anomaly detection.
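The Logstash parsing step above is typically a grok filter that turns a raw log line into structured fields for Elasticsearch. A minimal Python stand-in for that idea (the log format here is hypothetical, chosen only to illustrate the transformation):

```python
import re
from datetime import datetime

# Illustrative stand-in for a Logstash grok filter:
# "2023-01-05 10:00:00 ERROR db timeout" -> structured document.
LOG_RE = re.compile(r'^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) '
                    r'(?P<level>[A-Z]+) (?P<message>.*)$')

def parse_log_line(line):
    """Parse one application log line into fields for indexing."""
    m = LOG_RE.match(line)
    if not m:
        return None  # Logstash would tag this _grokparsefailure
    return {
        "@timestamp": datetime.strptime(m.group("ts"),
                                        "%Y-%m-%d %H:%M:%S").isoformat(),
        "level": m.group("level"),
        "message": m.group("message"),
    }
```

In the real pipeline, Filebeat ships the raw lines, Logstash applies the grok pattern, and the resulting documents are indexed into Elasticsearch for Kibana dashboards.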
Integrated gitops and ArgoCD to deploy helm charts to Kubernetes cluster for easy maintenance.
Configured Vault for secrets management for Kubernetes cluster
Working on building cloud-based disaster recovery and conducting multiple disaster recovery exercises on major cloud components such as EKS (Kubernetes), databases (Aurora PostgreSQL), and MSK (Kafka).
Conducted active-active and active-passive disaster recovery exercises.
Replicated region-failure and availability-zone-failure scenarios within AWS.

Sr. DevOps/Cloud Engineer
Albertsons - Dallas, TX Aug 2020 to April 2022
Responsibilities:
Worked on designing and deploying a multi-tier application utilizing almost all of the main services of the AWS stack (EC2, S3, RDS, VPC, IAM, ELB, CloudWatch, Route 53, Lambda, and CloudFormation), focused on a high-availability, fault-tolerant environment.
Created CloudFormation templates for main services such as EC2, VPC, and S3 to make the current environment reusable.
Extensively involved in Managing Ubuntu, Linux and Windows virtual servers on AWS EC2 instance by creating Chef Nodes through Open-Source Chef Server.
In-depth knowledge of Amazon EC2, S3, SimpleDB, RDS, Elastic Load Balancing, SQS, and other services in the AWS cloud infrastructure, including IaaS, PaaS, and SaaS models.
Orchestrated and migrated CI/CD processes using Cloud Formation Templates and Containerized the infrastructure using Docker, which was setup in Vagrant, AWS and VPCs.
Created a script to apply changes across multiple CloudFormation templates to update the CI/CD code pipeline.
Wrote CloudFormation Templates (CFT) in JSON and YAML format to build AWS services under the Infrastructure as Code paradigm.
Automated the cloud deployment using Chef, Python and AWS Cloud Formation Templates. Used Chef for unattended bootstrapping in AWS.
Utilized the AWS CLI to set up scalability for application servers, set up and administer DNS in AWS using Route 53, and manage users and groups with AWS Identity and Access Management (IAM).
Automated AWS infrastructure and CI build/deployment pipelines for Java-based applications using Python/Bash scripting and GitLab to deploy services on the Kubernetes cluster (EKS).
Configured node groups for the EKS cluster for better node management.
Worked on writing helm charts for various deployments.
Configured an Istio mesh on the Kubernetes cluster (EKS) and implemented security protocols such as mutual TLS for internal service-to-service communication.
Implemented a Continuous Integration/Continuous Delivery (CI/CD) pipeline with Jenkins, Bitbucket, AWS Lambda, and JFrog Artifactory.
Built enterprise terraform modules and used terraform workspaces to reuse the code to provision resources across multiple environments in order to avoid environment drift.
Automated creation of S3 buckets, CloudFront redirects, auto scaling groups, ELBs, target groups using terraform and used remote s3 bucket as backend to store the terraform state file.
Wrote Ansible playbooks, inventories, created custom playbooks written in YAML language, encrypted the data using Ansible Vault.
Deployed and configured Elasticsearch, Logstash, and Kibana (ELK) for log analytics, full-text search, and application monitoring, in integration with AWS Lambda and CloudWatch, then stored those logs and metrics in an S3 bucket using a Lambda function.
Developed microservice onboarding tools leveraging Python and Jenkins, allowing for easy creation and maintenance of build jobs, Kubernetes deploy and services.
Implemented JIRA to track all issues pertaining to the software development lifecycle and Integration of JIRA with Git repository to track all code changes.
Used New Relic application monitoring for real-time performance metrics to detect and diagnose application problems automatically. Identified and fixed performance issues at a given instant of time through dynamic monitoring with Catchpoint and New Relic in the production environment, monitoring an end-to-end view of runtime systems: CPU, bandwidth, disk space, and application logs.
Automated Datadog dashboards for the stack through Terraform scripts. Configured CloudWatch and Datadog agents to monitor real-time granular metrics of all AWS services and configured individual dashboards for each resource.
Migrated data from a PostgreSQL DB instance to an Aurora PostgreSQL DB cluster using an Aurora read replica, configuring VPC security groups to secure network access to the DB cluster.
Set up a cloud data lake using S3 and Lake Formation, and configured the data lake infrastructure with Athena, AWS Glue, EMR, and Airflow.
Provided engineering design across different workloads, including incident and problem management, change management, security, and compliance.
Worked with different teams to develop, maintain, and communicate current development schedules, timelines, and development status.

Sr. DevOps/Cloud Engineer
Mckesson - Irving, TX Nov 2017 to July 2020
Responsibilities:
Worked with AWS services including S3, RDS, EBS, Elastic Load Balancer, and Auto Scaling groups, using EC2 instances with optimized volumes, and achieved cloud automation and deployments using Ansible, Python, and Terraform templates.
Configured AWS Identity and Access Management (IAM) Groups and Users for improved login authentication. Created AWS RDS, Aurora DB cluster and connected to the database through an Aurora DB instance using the Amazon RDS Console.
Worked with AWS CLI and AWS SDK to manage resources on AWS and created python script using API Calls to manage all resources deployed on AWS.
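Scripts that manage AWS resources through API calls, as above, have to tolerate throttling; the standard approach is a capped exponential backoff before each retry. A minimal sketch of the schedule (AWS SDKs also add random jitter on top, omitted here to keep the output deterministic; the parameter values are illustrative):

```python
def backoff_delays(max_attempts=5, base=0.5, cap=8.0):
    """Capped exponential backoff schedule for retrying throttled API
    calls: the wait before retry i is base * 2**i, capped at `cap`
    seconds. Returns the full schedule as a list."""
    return [min(cap, base * (2 ** i)) for i in range(max_attempts)]
```

A retry loop would `time.sleep(delay)` through this schedule whenever the API returns a throttling error, giving the service time to recover instead of hammering it.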
Designed the data models to use in AWS Lambda applications which are aimed to do complex analysis creating analytical reports for end-to-end traceability and definition of Key Business elements from Aurora.
Converted existing AWS infrastructure to Serverless architecture (AWS Lambda, Kinesis), deployed via Terraform scripts.
Involved in various aspects and phases of architecting, designing, and implementing solutions in IT infrastructure with an emphasis on AWS cloud.
Configured Kubernetes provider with Terraform which is used to interact with resources supported by Kubernetes to create several services such as Config Map, Namespace, Volume, Auto scaler, etc.
Managed Kubernetes using Helm charts. Created reproducible builds of the Kubernetes applications, managed Kubernetes manifest files and managed releases of Helm packages.
Utilized Kubernetes as the runtime environment of the CI/CD system to build, test, and deploy.
Created additional Docker slave nodes for Jenkins using custom Docker images, pushed them to ECR, and monitored them using Prometheus.
Implemented Jenkins and built pipelines to drive all microservice builds out to the Docker registry and then deployed to Kubernetes.
Supported Data analytics team with docker by integrating with cloud, created custom Docker Images, and built various containers integrating Docker engine and Docker Management Platform, to deploy the code-services oriented environments for scalable applications.
Configured Jenkins server and built jobs to provide Continuous Automated builds based on polling the Git source control system during the day and periodically scheduled builds overnight to support development by integrating Jenkins, Git, Maven, SonarQube and Nexus.
Deployed and configured Chef Server and Chef Solo, including bootstrapping of client nodes for provisioning, and managed and configured hundreds of servers using Chef. Wrote several Chef cookbooks and recipes to automate the installation of WebLogic domains, and customized recipes from Chef Supermarket as needed.
Replaced Splunk logging and analytics with an automated ELK cluster, increasing data capture capacity and reducing costs.
Worked on setting up an Airflow cluster to schedule jobs.
Designed an ELK system to monitor and search enterprise alerts. Installed, configured, and managed the ELK Stack for Log management within EC2 / Elastic Load balancer for Elastic Search. Monitored performance of the applications and analyzed log information using ELK (Elasticsearch, Logstash, Kibana).
Wrote Python scripts to apply the integration label to files that otherwise required manual labelling.
Created and wrote shell scripts in Bash and PowerShell to automate tasks.


DevOps Engineer
Flexera - Chicago, IL May 2015 to Oct 2017
Responsibilities:
Created continuous integration/deployment with application release automation by integrating and improving the client's existing infrastructure and build pipelines, aiming for fully autonomous automation where possible.
Primarily used Ruby for Chef cookbooks and shell scripting to code tasks that connect various AWS resources.
Wrote Chef cookbooks for various DB configurations to modularize and optimize end-product configuration, converting production support scripts to Chef recipes and provisioning AWS servers using Chef recipes.
Implemented the Chef software setup and configuration on VMs from scratch, deployed the run-list to the Chef server, and bootstrapped the Chef clients remotely.
Developed processes, tools, and automation for Jenkins-based build software and delivered software builds; managed build results in Jenkins and deployed using workflows.
Installed and configured Jenkins for Automating Deployments and providing an automation solution.
Used Jenkins for continuous deployment and integration of the build and release process.
Managed and Performed SCM related work for company's website. The project involved working on multiple environments for QA and Production.
Redesigned Release management process and build scripts written in Bash.
Developed build and deployment scripts using MAVEN as build tools in Jenkins to move from one environment to other environments.
Built the AWS Infrastructure like VPC, EC2, S3, Route 53, EBS, Security Group, Auto Scaling, and RDS using terraform.
Worked on replicating real-time data between multiple data centers using NiFi.
Maintained and administered GIT source code tool, Created Branches, Labels, and performed merges in Stash and GIT.
Involved in migration from SVN to GIT repos and worked with Linux sys admins for the same.
Created cookbooks for OpenStack deployments and bug fixes with Chef.
Debug and resolve Pre-Post OpenStack deployment failures.
Wrote unit test cases for Chef recipe testing using Test Kitchen, Foodcritic, etc.
Set up Chef repos (local and remote), working with both hosted and standalone server versions.
Created and modified HTML, PHP, jQuery, and JavaScript web pages; also wrote RESTful APIs and an HTTP server in Node.js.
Setup and upgrade database servers and replication environments (PostgreSQL, Maria DB, and MongoDB).
Expertise in Marathon for binding volumes to applications and running databases such as MySQL and PostgreSQL.

Systems Engineer
Value Labs, Hyderabad May 2012 to Aug 2014
Responsibilities:
Installation, configuration, backup, recovery, maintenance, and support of Solaris and Red Hat Linux.
Installed, upgraded, and configured RHEL using Kickstart and Red Hat Satellite server.
Main responsibilities include Build and Deployment of the java applications onto different environments like Dev, QA, UAT and Prod.
Integrated Maven with shell scripts written in Bash to automate deployments of Java-based applications. Managed deployment activities across multiple server instances by enabling passwordless SSH communication between servers and using the rsync utility in the shell scripts.
Hands-on experience with CI tools such as Jenkins.
Managed branching and Merging in a multi-project environment.
Managed the entire Release Communication and Co-ordination process.
Imported and managed multiple applications in Subversion (SVN).
Provided end-user training for all Subversion (SVN) users to effectively use the tool.
Involved in backing up repository, creating folder and granting user access privileges.
Assisted the client with the centralized build farm responsible for creating and maintaining the build scripts required by the applications.
Automated the Build and Deployment process using WebLogic server.
Integrated SVN and Maven with Jenkins to implement the continuous integration process.
Performed weekly and on-call deployments of application codes to production environments.




Technical Skills
Cloud: AWS, Azure, GCP
SCM Tools: Subversion, GIT, Bitbucket
Build Tools: Ant, Maven, Gradle
CI/CD Tools: Jenkins, Bamboo, TeamCity, Gitlab
Container Tools and Automation: Kubernetes, Docker, Docker Swarm, OpenShift, Helm, Istio
Configuration Management Tools: Ansible, Chef, Puppet
Logging/Monitoring Tools: Splunk, Nagios, CloudWatch, New Relic, Prometheus, Grafana
Languages: Python, Java, Shell script
Bug Tracking Tools: JIRA, Remedy, HP
Web Servers: Apache, JBoss, WebSphere
Application Servers: Tomcat, WebLogic, WebSphere
Virtualization: VMware ESX, ESXi, vSphere 4/5
Databases: MySQL, Postgres, Oracle, SQL Server, MongoDB
SDLC: Waterfall, Agile, Scrum
Networking: TCP/IP, DNS, NFS, ICMP, SMTP, DHCP, UDP, NIS, LAN, FTP
Operating Systems: Red Hat Linux 7/6/5/4, Ubuntu 16/14/13/12, Debian, CentOS, Windows, Solaris 11/10, macOS, Fedora
Big Data Technologies: HDFS, Hive, Yarn, MapReduce, Athena, EMR, Glue, Pig, HUE, Oozie, Elasticsearch, Spark, Kafka, Ambari