
Arun Acha - GCP DevOps Engineer
Email: [email protected]
Phone: 609-508-8255
LinkedIn: linkedin.com/in/arun-acha-9218763a
Location: Austin, Texas, USA
Relocation: open
Visa: H1B
Summary

Worked as a DevOps Engineer automating, building, deploying, managing, and releasing code pipelines from one environment to another, maintaining Continuous Integration (CI), Continuous Delivery, and Continuous Deployment (CD) across multiple environments (DEV/FUT/STAGE and PROD).
Hands-on experience with AWS services such as EC2, S3, RDS, VPC, ELB, EBS, CloudWatch, and Auto Scaling.
Cloud development and automation using Node.js, Go, AWS Lambda, the AWS CDK (Cloud Development Kit), and AWS SAM (Serverless Application Model).
Strong experience with UNIX/Linux: installing and configuring LVM, RAID, NGINX, HTTPD, Tomcat, MySQL, and Oracle; patching; and configuring custom CloudWatch log metrics.
Expertise in AWS Identity and Access Management (IAM): creating users and groups, organizing IAM users into groups, and assigning roles to groups.
Good experience in Linux administration: installing, configuring, and troubleshooting web servers and application servers, and configuring LVM and RAID.
Experience with essential DevOps tools such as Git, Jenkins, Maven, Ant, Docker, EKS, Chef, and Terraform, plus Linux/UNIX system administration on RHEL, CentOS, and Amazon Linux.
Configured and monitored distributed, multi-platform servers using Chef; set up the Chef server and workstation to manage and configure nodes.
Good experience with containerization technologies such as Docker Swarm and EKS.
Set up and maintained automated environments using basic Chef recipes and cookbooks, and mostly Puppet manifests and modules, within the AWS environment.
Wrote Ansible playbooks, replacing the dependency on Chef cookbooks and recipes, to automate infrastructure as code.
Integrated Ansible Tower with the cloud environment; provided role-based access control, job monitoring, email notifications, scheduled jobs, and multi-playbook workflows to chain playbooks.
Good experience writing Vagrantfiles and provisioning VMs on developers' workstations.
Expertise in the installation, configuration, and administration of Apache web server, Nginx, and WebLogic.
Set up SCM polling for immediate builds with Maven and a Maven repository (Nexus/Artifactory) by installing Jenkins plugins for the Git repository.
Provided 24x7 on-call support, debugging and fixing Linux and middleware issues in the AWS cloud environment.

PROFESSIONAL SKILLS:
Operating Systems : UNIX, Linux, Ubuntu, RHEL 5/6/7, Windows 7/8/10
SCM Tools : CVS, SVN, Git, Bitbucket, GitHub
Build Tools : Ant, Maven, Gradle, TFS
CI Tools : Jenkins/Hudson, Bamboo, AnthillPro
CM Tools : Chef, Puppet, Terraform
Databases : SQL Server, Oracle 9i/10g, PL/SQL, NoSQL
Scripting Languages : Go, Shell, Ruby, Perl, Groovy, JavaScript, XML
Cloud Computing : AWS, GCP, Cloud Foundry, Salesforce




Professional Experience:
Client: DXC Technology    Date: Feb 2019 - Till Date
End Clients: Intermountain Healthcare, Monumental Sports, Sabre Corporation
Location: Austin, TX (Remote)
Role: Sr. DevOps/MLOps Engineer (AWS/GCP)

Responsibilities:

Responsible for setting up and building AWS infrastructure using VPC, EC2, S3, RDS, DynamoDB, IAM, EBS, Route 53, SNS, SES, SQS, CloudWatch, CloudTrail, security groups, and Auto Scaling via CloudFormation templates.
Implemented AWS Step Functions to automate and orchestrate Amazon SageMaker tasks such as publishing data to S3, training an ML model, and deploying it for prediction; see the sketch below.
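A minimal sketch of such an orchestration using boto3; the role ARN, training image, bucket, and state machine names below are hypothetical placeholders, not the actual pipeline:

    import json
    import boto3

    # Hypothetical role; the real one must allow Step Functions to call SageMaker.
    ROLE_ARN = "arn:aws:iam::123456789012:role/StepFunctionsSageMakerRole"

    # One-state machine: run a SageMaker training job and wait for it to finish
    # (the ".sync" service integration blocks until the job completes).
    definition = {
        "StartAt": "TrainModel",
        "States": {
            "TrainModel": {
                "Type": "Task",
                "Resource": "arn:aws:states:::sagemaker:createTrainingJob.sync",
                "Parameters": {
                    "TrainingJobName.$": "$.job_name",
                    "AlgorithmSpecification": {
                        "TrainingImage.$": "$.training_image",
                        "TrainingInputMode": "File",
                    },
                    "RoleArn": ROLE_ARN,
                    "OutputDataConfig": {"S3OutputPath": "s3://ml-artifacts/output/"},
                    "ResourceConfig": {
                        "InstanceCount": 1,
                        "InstanceType": "ml.m5.xlarge",
                        "VolumeSizeInGB": 30,
                    },
                    "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
                },
                "End": True,
            }
        },
    }

    sfn = boto3.client("stepfunctions")
    sfn.create_state_machine(
        name="sagemaker-training-pipeline",
        definition=json.dumps(definition),
        roleArn=ROLE_ARN,
    )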
Integrated Apache Airflow with AWS to monitor multi-stage ML workflows whose tasks run on Amazon SageMaker.
Involved in the design and deployment of a multitude of AWS services such as EC2, Route 53, S3, RDS, DynamoDB, SNS, SQS, and IAM, focusing on high availability, fault tolerance, and auto scaling with AWS CloudFormation.
Developed the cloud migration strategy and implemented best practices using AWS services such as Database Migration Service and Server Migration Service to move workloads from on-premises to the cloud.
Implemented and maintained monitoring and alerting of production and corporate servers/storage using AWS CloudWatch and Splunk, and reassigned AWS Elastic IP addresses to work around host or Availability Zone failures by quickly remapping the address to another running instance.
Provisioned highly available EC2 instances using Terraform and CloudFormation, and wrote new Python scripts to support new functionality in Terraform.
Worked with CloudFormation to automate AWS environment creation, deployed AWS resources with build scripts (Boto3 and the AWS CLI), and automated solutions using Python and shell scripting.
Managed AWS Infrastructure as Code (IaC) using Terraform; expertise in writing new Python scripts to support new functionality in Terraform, provisioning highly available EC2 instances with Terraform and CloudFormation, and setting up build and deployment automation for Terraform scripts using Jenkins (see the sketch below).
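A minimal sketch, assuming a Jenkins job calls a Python wrapper like this to run Terraform non-interactively; the directory layout and var-file name are hypothetical:

    import subprocess

    def terraform_apply(workdir: str, var_file: str = "prod.tfvars") -> None:
        """Run init/plan/apply without prompts, failing fast on any error."""
        for cmd in (
            ["terraform", "init", "-input=false"],
            ["terraform", "plan", f"-var-file={var_file}", "-out=tfplan", "-input=false"],
            ["terraform", "apply", "-input=false", "tfplan"],
        ):
            subprocess.run(cmd, cwd=workdir, check=True)

    terraform_apply("./environments/prod")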
Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT, and to set up IAM policies for users, ensuring successful deployment of web applications, database templates, and security groups.
Experience creating AWS resources such as EC2, IAM, VPC, EBS, AMIs, APIs, Route 53, snapshots, Auto Scaling, CloudWatch, CloudTrail, CloudFront, SQS, SNS, RDS, S3, API Gateway, ALB, NLB, Lambda, and security groups using Terraform.
Developed applications using a mix of technologies (Python, Django, SQL, WCF, Pandas, NumPy, REST, Solr).
Developed web applications in Python/Django with client-specific customizations.
Designed application architecture and APIs.
Developed a Python/Django application for Google Analytics aggregation and reporting.
Generated Python Django forms to record data from online users; see the sketch below.
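A minimal sketch of such a form; the field names are hypothetical:

    from django import forms

    class UserRecordForm(forms.Form):
        """Captures details of an online user before they are stored."""
        name = forms.CharField(max_length=100)
        email = forms.EmailField()
        signed_up = forms.DateField(required=False)

    # In a view: form = UserRecordForm(request.POST)
    # if form.is_valid(): process form.cleaned_data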
Worked on Python OpenStack APIs.
Added support for Amazon S3 and RDS to host static/media files and the database in the Amazon cloud.
Developed tools using Python, shell scripting, and XML to automate menial tasks; interfaced with supervisors, systems administrators, and production staff to ensure production deadlines were met.
Created and managed S3 buckets, enabled logging on the S3 bucket to track requests and who is accessing the data, enabled versioning on the S3 bucket to restore deleted files, and created the IAM roles to do so (see the sketch below).
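A minimal boto3 sketch of enabling versioning and access logging; the bucket names are placeholders:

    import boto3

    s3 = boto3.client("s3")
    bucket = "example-data-bucket"  # hypothetical bucket name

    # Versioning keeps prior object versions, so deleted files can be restored.
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

    # Server access logging records each request and who made it.
    s3.put_bucket_logging(
        Bucket=bucket,
        BucketLoggingStatus={
            "LoggingEnabled": {
                "TargetBucket": "example-audit-logs",
                "TargetPrefix": f"{bucket}/",
            }
        },
    )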
Created and utilized CloudWatch to monitor resources such as EC2 CPU and memory, Amazon RDS database services, DynamoDB tables, EBS volumes, and Lambda functions; encrypted EBS volumes to ensure data at rest is secured and protected. A sample alarm is sketched below.
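As one example of the kind of alarm involved, a boto3 sketch with placeholder instance ID and SNS topic ARN:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Alarm when average EC2 CPU stays above 80% for two 5-minute periods.
    cloudwatch.put_metric_alarm(
        AlarmName="ec2-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )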
Implemented a serverless architecture using Lambda and deployed AWS Lambda code from Amazon S3 buckets; created a Lambda deployment function and configured it to receive events from the S3 bucket using CloudWatch Events.
Managed Docker orchestration and containerization using Kubernetes; used Kubernetes to orchestrate the deployment, scaling, and management of Docker containers.
Created and deployed Kubernetes pod definitions, tags, labels, and multi-pod container replication; managed scaling and auto-scaling of multiple Kubernetes pod containers (see the sketch below).
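A minimal scaling sketch using the official Kubernetes Python client; the deployment and namespace names are hypothetical:

    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when running in a pod
    apps = client.AppsV1Api()

    # Patch the deployment's scale subresource to set the replica count.
    apps.patch_namespaced_deployment_scale(
        name="web-frontend",
        namespace="default",
        body={"spec": {"replicas": 5}},
    )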
Worked on Azure Site Recovery and Azure Backup: deployed instances in Azure environments and in data centers, migrated them using Azure Site Recovery, collected data from all Azure resources using Log Analytics, and analyzed the data to resolve issues.
Configured Multi-Factor Authentication (MFA) as part of Azure AD Premium to securely authenticate users, created custom templates for quick deployments with advanced PowerShell scripting, and deployed Azure SQL DB with geo-replication and SQL DB sync to a standby database in another region, with failover configured.
Created and configured HTTP triggers in Functions, with Application Insights for monitoring, and performed load testing on the applications using Visual Studio Team Services (VSTS), also called Azure DevOps Services.
Created Automation assets and graphical and PowerShell runbooks to automate specific tasks; deployed Azure AD Connect, configured the Active Directory Federation Services (AD FS) authentication flow and ADFS installation using Azure AD Connect, and handled administrative tasks including the build, design, and deployment of the environment.
Implemented a CI/CD pipeline with Docker, Jenkins (TFS plugin installed), Team Foundation Server (TFS), GitHub, and Azure Container Service: whenever a new TFS/GitHub branch gets started, Jenkins, our Continuous Integration (CI) server, automatically attempts to build a new Docker container from it.
Worked with Terraform templates to automate Azure IaaS virtual machines using Terraform modules, and deployed virtual machine scale sets in the production environment.
Managed Azure infrastructure: Web Roles, Worker Roles, VM Role, Azure SQL, Azure Storage, and Azure AD licenses; backed up virtual machines and recovered them from a Recovery Services vault using Azure PowerShell and the Azure portal.
Wrote templates for Azure Infrastructure as Code using Terraform to build staging and production environments; integrated Log Analytics with the VMs to monitor and store log files and track metrics, and used Terraform to manage different infrastructure resources across cloud, VMware, and Docker containers.
Worked on OpenShift for container orchestration, with EKS container storage and automation to enhance container platform multi-tenancy; also worked on EKS architecture and design, troubleshooting issues, and multi-regional deployment models and patterns for large-scale applications.
Deployed Windows Kubernetes (K8s) clusters with Azure Container Service (ACS) from the Azure CLI, and utilized Kubernetes and Docker for the CI/CD system's runtime environment to build, test, and release via Octopus Deploy.
Worked with container-based technologies such as Docker, EKS, and OpenShift. Point team player on OpenShift: created new projects and services for load balancing and added them to routes to be accessible from outside; troubleshot pods through SSH and logs; modified build configs, templates, image streams, etc.
Managed the OpenShift cluster, including scaling the AWS app nodes up and down.
Worked on implementing a new OCR solution with Spring Boot, OpenShift, and microservices; member of a group developing containerized applications with Docker, Spring Boot, EKS, and OpenShift. Deployed microservices to IBM Bluemix Cloud Foundry and later migrated them to OpenShift.
The deployment model uses Atlassian development repository tools and Jenkins as the build engine, while execution deployments went to container orchestration platforms that ranged over time from OpenShift on EC2 and Amazon Elastic Container Service to, today, AWS Fargate. Implemented a microservices framework with Spring Boot, Node.js, and the OpenShift Container Platform (OCP).
Developed software modules using technologies such as Python, Representational State Transfer (REST) APIs, Amazon Web Services (AWS), Amazon Simple Storage Service (S3), Postgres, and Snowflake.
Engineered solutions to meet business requirements, drawing on substantial experience with tools such as Redshift, Python, Java, and SQS; built data APIs and data-delivery services that support critical operational and analytical applications for internal business operations using Spark, Spark Streaming, AWS, and EMR.
Found solutions to challenging problems independently; troubleshooting experience with DevOps build and deploy issues in Python environments.
Researched new solutions and wrote code from scratch to prototype new concepts in Python for potential production-level implementation.
Using Ansible, created multiple playbooks for machine creation and for SQL Server, cluster server, and MySQL installations.
Integrated the F5 client VPN with GCP Anthos to create a secure tunnel between the cloud and on-premises; automated the CI/CD pipeline, creating a Docker image with Bitbucket, Maven, and Jenkins and pushing it to Google Container Registry.
Integrated the Linux environment with Active Directory, providing a single sign-on (SSO) solution.
Wrote shell scripts to automate the deployment process to all environments in the public cloud.
Automated the release process and GKE cluster upgrades to the prod environment without any downtime using node affinity.
Experience rotating application logs from persistent disk to Google Cloud Storage using the gsutil command-line tool; a Python equivalent is sketched below.
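A Python equivalent of that gsutil copy, sketched with the google-cloud-storage client; the paths and bucket names are hypothetical:

    from google.cloud import storage

    def rotate_log_to_gcs(local_path: str, bucket_name: str, dest_name: str) -> None:
        """Upload a rotated log file, like `gsutil cp local_path gs://bucket/dest`."""
        client = storage.Client()
        bucket = client.bucket(bucket_name)
        bucket.blob(dest_name).upload_from_filename(local_path)

    rotate_log_to_gcs("/var/log/app/app.log.1", "example-app-logs", "app/app.log.1")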
Deployed resources to Google Anthos using Terraform code, with the Terraform backend stored in GCP buckets.
Used Terraform for setup/teardown of the ELK stack (Elasticsearch, Logstash, Kibana), troubleshooting ELK build issues and working toward solutions.
Wrote Ansible handlers with multiple tasks to trigger multiple handlers and to decouple handlers from their names, making it easier to share handlers among playbooks and roles.
Managed EKS charts using Helm: created reproducible builds of the EKS applications, managed manifest files, and managed releases of Helm packages.
Worked on the GKE topology diagram, including masters, slaves, RBAC, Helm, kubectl, and ingress controllers.
Implemented the docker-maven-plugin in Maven pom.xml files to build Docker images for all microservices, and later used a Dockerfile to build the Docker images from the Java jar files.
Virtualized servers using Docker for test-environment and dev-environment needs, and performed configuration automation using Docker containers.
Experience creating Docker containers leveraging existing Linux containers and AMIs, in addition to creating Docker containers from scratch; a build/push sketch follows below.
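A minimal sketch of building and pushing such an image with the Docker SDK for Python; the directory and registry names are hypothetical:

    import docker

    client = docker.from_env()

    # Build from a directory whose Dockerfile copies the service jar and sets
    # its entrypoint; stream the build output as it arrives.
    image, build_logs = client.images.build(
        path="./order-service",
        tag="registry.example.com/order-service:1.0.0",
    )
    for line in build_logs:
        print(line.get("stream", ""), end="")

    # Push using the credentials already configured for the Docker daemon.
    client.images.push("registry.example.com/order-service", tag="1.0.0")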
Managed local deployments in Kubernetes, creating a local cluster and deploying application containers.
Configured MQ Series network using clustering, distributed queuing and remote administration.
Managed containers using Docker: wrote Dockerfiles, set up automated builds on Docker Hub, and installed and configured EKS.
Designed, wrote, and maintained systems in Go for administering Git, using Jenkins as a full-cycle continuous delivery tool covering package creation, distribution, and deployment onto Tomcat application servers via shell scripts embedded in Jenkins jobs.
Built and managed a highly available monitoring infrastructure to monitor different application servers, such as JBoss and Apache Tomcat, and their components using Nagios.

Environment: GCP, Azure, Office 365, Terraform, Maven, Jenkins, Azure ARM, Azure AD, Azure Site Recovery, EKS, Go, Ruby, XML, shell scripting, PowerShell, Nexus, JFrog Artifactory, Git, Jira, GitHub, Docker, Windows Server, TFS, VSTS, LDAP, Nagios.

Client: Synapse Group Inc.    Date: Oct 2018 - Feb 2019
Location: Connecticut
Role: AWS DevOps Engineer

Responsibilities:
Worked in a highly collaborative operations team to streamline the process of implementing security in the Confidential AWS cloud environment, and introduced best practices for remediation.
Worked as an active team member with both the product development and operations teams to provide the best DevOps practices, and supported their applications with feasible approaches.
Created Jenkins jobs to build AWS infrastructure from Git repos containing Terraform code; implemented and worked on Terraform scripts to create infrastructure on AWS and Azure.
Coordinated with and assisted developers in establishing and applying appropriate branching, labeling, and naming conventions using Git source control, and analyzed and resolved conflicts related to merging source code in Git.
Experience building artifacts from Git with Maven/Gradle, uploading them to Nexus and JFrog Artifactory repositories, and deploying to higher environments using Jenkinsfiles/Jenkins.
Wrote shell and Groovy scripts in Jenkins to automate the entire CI/CD pipeline from development to deployment for every environment, and wrote Ansible playbooks for continuous deployments.
Used Ansible to automate the deployment workflow of Java and .NET applications on Apache Tomcat, Apache, JBoss, Liberty, WebSphere, WebLogic, and IIS, using the scripting languages shell, Python, PowerShell, and Groovy.
Developed applications and methods in Python for ETL; wrote and reviewed code for server-side Python applications.
Did a POC on implementing a continuous deployment pipeline with Jenkins and Jenkins workflows on Kubernetes.
Worked with network-related Python libraries for transferring files and connecting remotely to servers.
Added several options to the application to choose a particular algorithm for data and address generation.
Performed troubleshooting and fixed and deployed many Python bug fixes for the two main applications that were the main source of data for both customers and the internal customer-service team.
Generated various graphical capacity-planning reports using Python packages such as NumPy and matplotlib; see the sketch below.
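A minimal sketch of such a report: fit a linear trend to utilization samples and plot both. The data here is synthetic, standing in for real monitoring exports:

    import numpy as np
    import matplotlib.pyplot as plt

    # Synthetic daily CPU-utilization samples (placeholder for exported metrics).
    days = np.arange(1, 31)
    cpu = 40 + 0.8 * days + np.random.normal(0, 3, size=days.size)

    # Linear trend for projecting when utilization crosses capacity.
    slope, intercept = np.polyfit(days, cpu, 1)

    plt.plot(days, cpu, "o", label="observed")
    plt.plot(days, slope * days + intercept, "-", label="trend")
    plt.xlabel("day")
    plt.ylabel("CPU %")
    plt.legend()
    plt.savefig("capacity_report.png")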
Analyzed the various logs being generated and predicted/forecast the next occurrence of events using various Python libraries.
Assisted with configuration of cloud compute systems using OpenStack on Ubuntu, collaborating on orchestration with Keystone, Kubernetes, and other functions within OpenStack.
Responsible for configuring Ansible consumer and producer metrics to visualize Ansible system performance and monitoring.
Worked with 5 Scrum teams (Java, AEM, Jenkins, Ant, Maven, SVN, Git, Agile methodology, Cucumber scripts, Sonar, XL Deploy and XL Release, SharePoint, CI/CD automation from scratch, Docker).
Conducted dry-run tests to ensure foolproof execution of customized scripts before execution in production environments.
Oversaw the quality of automated build plans to help the delivery process to NON-PROD and PROD environments.
Worked on implementing backup methodologies via PowerShell scripts for Azure services such as Azure SQL Database, Key Vault, storage blobs, and App Services.
Assigned RBAC policies at the group level and user level per the LTA created for the services, and implemented new services (Automation account, Scheduler, Notification Hubs, IoT Hubs, Batch, and others).
Assigned RBAC roles using the organization's Active Directory at the subscription level to grant access only to required members based on least-access privileges (we use CWS groups).
Collaborated with cross-functional teams (firewall, database, and application teams) in executing this project.
Experience troubleshooting SQL Server connection issues in incremental deployments.
Provided status to business-level and technical management, and conducted proofs of concept for the latest AWS cloud-based services.
Environment: .NET, Azure, PowerShell, XL Release, XL Deploy, Kubernetes, Ansible, Git, Python, AWS, Redis, VMware, Jenkins, Terraform, SVN, Puppet, OpenStack, Docker, Jira, Maven, VSTS, Apache Tomcat application server, SaltStack


Client: Kohl's Innovation Centre    Date: Apr 2017 - Oct 2018
Location: Milwaukee
Role: DevOps Engineer

Responsibilities:
Involved in migrating services from a managed hosting environment to AWS, including the overall plan, cost analysis, service design, network layout, data migration, automation, deployments and cutover, monitoring, documentation, and timeline.
Created AWS launch configurations based on a customized AMI, used them to configure Auto Scaling groups, and implemented AWS solutions using EC2, S3, RDS, DynamoDB, Route 53, EBS, Elastic Load Balancer, and Auto Scaling groups.
Launched and configured Amazon EC2 cloud servers using AMIs (Linux/Ubuntu) and configured the servers for the specified applications.
Created S3 buckets, maintained and utilized S3 bucket policy management, and used Glacier for storage and backup on AWS; a lifecycle sketch follows below.
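A boto3 sketch of the corresponding lifecycle rule, with placeholder bucket and prefix; it transitions backups to Glacier after 30 days and expires them after a year:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="example-backups",  # hypothetical bucket name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-to-glacier",
                    "Filter": {"Prefix": "backups/"},
                    "Status": "Enabled",
                    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                    "Expiration": {"Days": 365},
                }
            ]
        },
    )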
Mass-built Linux/Solaris OS servers using Kickstart/Jumpstart automation.
Built Linux servers (WebLogic application, Apache, and DB servers, etc.) in large quantities per EDC and non-EDC production requirements as well as app/dev requirements.
Worked with the development team to generate deployment artifacts (jar, war, ear) using Maven scripts and Jenkins.
Responsible for Continuous Integration (CI) and Continuous Delivery (CD) process implementation using Jenkins along with Shell scripts to automate routine jobs.
Involved in setting up the CI/CD pipeline using Jenkins, Maven, Nexus, GitHub, Terraform, and AWS.
Configured AWS Identity and Access Management (IAM) groups and users for improved login authentication.
Deployed and monitored scalable infrastructure on Amazon Web Services (AWS), with configuration management using Terraform.
Involved in writing various custom Ansible playbooks for deployment orchestration, and developed playbooks to simplify and automate day-to-day server administration tasks.
Created an automated Jenkins pipeline for CI and CD with Maven scripts, along with Git version control.
Experienced in Git branching, tagging, and merging; created branches and tags for each release.
Used the Jenkins AWS CodeDeploy plugin to deploy.
Connected continuous integration systems to the Bitbucket version control repository, building continually as check-ins came from developers, and managed Maven project dependencies by creating parent-child relationships between projects.
Installed, configured, and administered Jenkins CI for Gradle and Maven builds of RDBMS and NoSQL tools such as DynamoDB.

Environment: AWS, EC2, S3, EBS, ELB, Auto Scaling groups, VPC, IAM, CloudWatch, microservices, Glacier, Bitbucket, DynamoDB, RDBMS, shell scripting, Git, Terraform, Docker, EKS, Docker Swarm, Chef, Maven, Gradle, Jenkins, Go, YAML.

Client: PAREXEL India Limited Date: Jan 2012 to Dec 2015
Location: Hyderabad, India
Role: DevOps/Build and Release Engineer

Responsibilities:
As part of the DevOps team, my role included release management, environment management, deployments, continuous integration, continuous deployment, incident management, and version management.
Providing a better workflow for Continuous Integration and Continuous Delivery.
Assisted in migrating applications from customer on-premises datacenter to the cloud (AWS).
Well versed in managing source code repositories such as Git, GitHub, and Bitbucket.
Worked on the DevOps platform team responsible for specialization areas related to Chef for cloud automation.
Used Ansible to manage Web applications, Environments configuration Files, Users, Mount points and Packages.
Configured and administered Git source code repositories.
Developed and implemented an automated Linux infrastructure using Ansible.
Worked with Vagrant to configure lightweight, reproducible, and portable development environments.
Implemented Ansible playbooks for build deployments on internal data center servers, and reused and modified the same playbooks to deploy directly onto Amazon EC2 instances; see the sketch below.
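A minimal sketch of driving such a playbook from Python with ansible-runner, switching inventories between the datacenter and EC2 targets; the paths are hypothetical:

    import ansible_runner

    # Same playbook, different inventory per target environment.
    result = ansible_runner.run(
        private_data_dir="/opt/deploy",
        playbook="deploy_app.yml",
        inventory="inventories/ec2.ini",
    )
    print(result.status, result.rc)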
Worked on AWS IAM, which included managing applications in the cloud and creating EC2 instances.
Expertise in Azure infrastructure management (Azure Web Roles, Worker Roles, SQL Azure, Azure Storage, Azure AD licenses, Office 365); worked on cloud automation using AWS CloudFormation templates.
Worked on Azure IaaS: provisioned VMs, virtual hard disks, and virtual networks; deployed web apps and created WebJobs; worked with Azure Windows Server, Microsoft SQL Server, Microsoft Visual Studio, Windows PowerShell, and cloud infrastructure.
Used Jenkins and CodeDeploy for CI/CD pipelines, and Chef for server provisioning and infrastructure automation in a SaaS environment.
Worked on various Docker/Kubernetes components like Docker Engine, Hub, Machine, Compose and Docker Registry.
Maintained high availability clustered and standalone server environments and refined automation components with scripting and configuration management (Ansible).

Environment: AWS, Ansible, Jenkins, Docker

Education:
Masters in Information Technology, Atlantis University, Florida (2016-2017)
Bachelors in Electrical and Electronics Engineering, JNTU-H, India (2007-2011)