
Aditya T - DevOps Engineer
Email: [email protected]
Mobile: +1 210-239-8339
Location: Chantilly, VA, USA
Relocation: Yes
Visa: H-1B

Profile Highlights

Skilled DevOps and Platform Engineer with 12+ years of hands-on experience in infrastructure as code (IaC), automation frameworks, monitoring tools, and integration. Develops and deploys services/policies in various API management tools using CI/CD and DevOps processes.
Experienced in Azure public cloud services such as Virtual Machines, VNets, subnets, Azure networking, Active Directory, Storage Accounts, Function Apps, Azure AKS, Key Vaults, Managed Identities, and load balancers. Developed Azure Function Apps in Python to monitor DMZ applications.
Managed Amazon Web Services public cloud resources such as EC2, S3 buckets, RDS, EBS, ELB, Auto Scaling, AMIs, and IAM. Responsible for creating multi-region, multi-zone AWS cloud infrastructure; certified AWS Solutions Architect.
Experienced in deploying and configuring applications in Kubernetes clusters and performing end-to-end testing of the solution.
Wrote Terraform modules to provision required infrastructure. Designed and implemented disaster recovery plans for various applications.
Implemented end-to-end monitoring of application performance in Kubernetes clusters. Developed dashboards for visualization and created alerting mechanisms.
Experienced in creating and managing APIs in IBM APIC and Azure APIM.
Created and maintained fully automated Jenkins pipelines using Groovy and Job DSL. Actively managed and monitored Linux servers and applications using Ansible and Icinga/Nagios.
Wrote scripts for configuration management, software deployments, and daily maintenance tasks on servers using Ansible, Bash, PowerShell, and Python.
Experienced in the design, development, and implementation of middleware-oriented systems in on-premises environments; implemented best practices for optimizing IT infrastructure, security policies, and SDLC standards.
Configured build and release pipelines in Azure DevOps to deploy Function Apps and applications. Incorporated approval and review steps within CI/CD pipelines to follow the standard SDLC, and followed release management for production changes.
Created and configured a custom Jenkins Docker image for CI/CD. Automated builds and deployments using Jenkins, Gradle, and Groovy to reduce human error and speed up production processes.
Implemented zero-downtime deployments and upgrades using strategies such as failover and blue/green deployments (see the sketch after this list).
Integrated Jenkins, Azure DevOps, and AWS pipelines with SonarQube for code quality. Used Artifactory to store software artifacts and Docker images, and performed Xray scans to find vulnerabilities.
Participated in sprint planning using Jira and Miro and followed agile methodology for continuous collaboration and improvement.
Worked with various protocols and interfaces such as FTP, SFTP, HTTP, HTTPS, Java APIs, and JDBC.
Experienced in working with cross-functional teams (software developers, QA, product managers) to deliver; provided production support. Excellent written and verbal communication skills.
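
As an illustration of the blue/green strategy mentioned above, here is a minimal sketch assuming two Kubernetes Deployments (myapp-blue, myapp-green) behind a single Service that selects pods by a "slot" label; all names are hypothetical, not details from the resume.

```bash
#!/usr/bin/env bash
# Minimal blue/green cutover sketch for Kubernetes (illustrative only).
set -euo pipefail

NEW_SLOT="${1:?usage: $0 blue|green}"

# Wait until the new slot's Deployment is fully rolled out.
kubectl rollout status "deployment/myapp-${NEW_SLOT}" --timeout=300s

# Repoint the Service at the new slot; the selector change cuts traffic
# over without downtime while the old slot keeps running as a fallback.
kubectl patch service myapp \
  -p "{\"spec\":{\"selector\":{\"app\":\"myapp\",\"slot\":\"${NEW_SLOT}\"}}}"

echo "Service myapp now routes to ${NEW_SLOT}"
```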

Technical skills

Cloud: Azure, AWS, DigitalOcean
Scripting languages: Ansible, Terraform, Bash, PowerShell, Gradle, Python, Job DSL
API management: IBM APIC, CA API Gateway (Layer7)
Programming languages: Core Java, Groovy
Databases: MySQL, SQL Server, PostgreSQL
Other/support tools: Jira, Confluence, OpenAPI Specification, Kafka, Checkmarx, Helm charts, Lens, JSCAPE, JMeter, TFS, IIS, PasswordState, SUSE, RHEL, TIBCO BW, EMS
Networking: WAF, NLB, DNS, subnets, firewalls, SSL/TLS, certificates

Education
B.Tech (Computer Science Engineering) from JNTUH University, Hyderabad, Telangana, India, 2012.
Career Graph
Working as Senior DevOps Engineer at DSV, USA, from February 2023 to date.
Worked as Senior IT Specialist at DSV, South Africa, from March 2017 to January 2023.
Worked as Integration Specialist at AMERIS TECHNOLOGIES, South Africa, from March 2012 to February 2017.
Key Projects
PROJECT # 9:
Project Title: AZURE KUBERNETES ADMINISTRATOR/DEVOPS ENGINEER
Client: DSV US
Role: Senior Platform/DevOps Engineer
Duration: February 2023 to date
Responsibilities:
Administering and supporting the company's Azure Kubernetes infrastructure, ensuring it is secure, resilient, and performant; responsible for end-to-end DevOps activities and coordination with the development team.
Set up a complete Kubernetes development environment from scratch, deploying the latest tools for different teams using Helm charts (see the Helm sketch after this list).
Configured alert notifications to monitor CPU metrics, VM health, and event logs.
Created deployment pipelines, with all code written in Groovy and Python and stored in Azure DevOps (ADO), for staging and testing purposes. Wrote Terraform scripts to create the required resources and objects.
Built multi-stage hybrid CI/CD pipelines, preferably in Azure, that helped client teams adopt the Kubernetes platform.
Automated various infrastructure activities, such as continuous deployment, application server setup, and stack monitoring, using Ansible playbooks driven from ADO.
Implemented Prometheus federation to aggregate and centrally manage monitoring data from multiple clusters and datacenters.
Implemented cluster services using Docker and Azure Kubernetes Service (AKS) to manage local deployments in Kubernetes, building a self-hosted Kubernetes cluster using ADO CI/CD pipelines.
Maintained and automated deployment scripts written in Bash, Groovy, and Python.
Developed Ansible playbooks to manage web applications, environment configuration files, users, mount points, and packages. Implemented continuous integration using ADO and Git.
Defined SonarQube quality gates for code scanning and used the Artifactory Xray scanner for package and image scanning.
Used Helm charts as the deployment mechanism, working with charts and templates.
Implemented Pod Security Policies (PSP) in AKS to enforce best practices and control which pods can be scheduled in the cluster, preventing possible security vulnerabilities and privilege escalations.
Familiar with Kubernetes objects and components such as ingress controllers, cert-manager, CRDs, services, config maps, and secrets; deployed pods onto selected nodes without downtime in the Dev and Prod Kubernetes clusters.
Implemented scanning and logging for Docker and Kubernetes containers to monitor events, runtime behavior, vulnerabilities, and compliance across containers, images, hosts, registries, and ADO pipelines.
Contributed to Git-based infrastructure-as-code (IaC) projects, using GitOps practices for configuration management and deployment automation.
Implemented Git hooks and automation scripts for pre-commit checks, linting, and testing to maintain code quality.
Conducted code reviews, resolved merge conflicts, and enforced coding standards using Git tools and best practices.
Implemented a peer review process with SonarQube analysis pipelines to verify developers' changes before merging into the main branch.
Implemented an HTTPS ingress controller with TLS certificates on AKS to provide reverse proxying and configurable traffic routing for individual Kubernetes services.
Conducted user training sessions and documentation updates for Grafana best practices and usage guidelines.
Moved all Kubernetes container logs, application logs, event logs, cluster logs, activity logs, and diagnostic logs into Azure Event Hubs and then into Prometheus for monitoring.
Monitored production servers daily using Grafana and Prometheus integrated with Kubernetes, and reported exceptions to the team during standups.
Managed Azure DevOps build and release pipelines; set up new repos and managed permissions for the various Git branches. Deployed microservices, including provisioning the Azure environment.
Extended support to existing product teams on integrating CI/CD into the development life cycle.
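
The Helm sketch referenced above: a minimal, illustrative deployment of a tool into a dev cluster. The chart (kube-prometheus-stack), release name, namespace, and values file are placeholders, not details from this project.

```bash
#!/usr/bin/env bash
# Illustrative Helm-based tool rollout into an AKS dev cluster.
set -euo pipefail

# Register and refresh the upstream chart repository.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Idempotent install/upgrade into a dedicated namespace; --wait blocks
# until the release's resources are ready.
helm upgrade --install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace \
  -f values-dev.yaml \
  --wait --timeout 10m
```
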
PROJECT # 8:
Project Title: IBM APP Connect Installation and Configuration
Client: DSV Air and Sea
Role: Azure DevOps Engineer
Duration: November 2019 to January 2023
Description: IBM ACE is an integration application that enables organizations to integrate external and internal systems for message exchange and transformation. The DevOps team is responsible for installing, configuring, and managing the clusters.
Responsibilities:
Installed IBM App Connect in the Azure public cloud.
Actively involved in solution architecture design and followed agile methodology for development and improvements.
Implemented continuous integration and continuous delivery pipelines for deployment of the application and followed agile principles.
Wrote Terraform modules to provision the AKS cluster.
Wrote Terraform scripts to provision and administer Azure cloud services, including VMs, VNets, storage accounts, key vaults, function apps, container instances, and databases.
Developed and configured monitoring solutions using Prometheus to ensure high system availability and performance.
Configured YAML files to deploy cert-manager, ingress, ACE servers, and MQ servers (a kubectl sketch follows this list).
Prepared YAML definitions to configure namespaces, secrets, services, storage classes, and ingress rules.
Deployed multiple ingress controllers with different subnets and ingress classes for network segregation.
Developed an automated life-cycle framework to deploy ACE servers into different environments using ADO repos and pipelines.
Integrated SonarQube and Artifactory with CI/CD pipelines for DevSecOps best practices.
Exported logs to an observability platform using Elastic Agent to monitor cluster health.
Took regular backups of application data, set up environments for application launches, and established the release process for projects early in the SDLC.
Developed custom Prometheus exporters for monitoring application-specific metrics and endpoints.
Integrated Git with CI/CD pipelines for automated code deployment and continuous integration processes.
Implemented Thanos for long-term storage and global querying capabilities in Prometheus-based monitoring setups.
Maintained and administered the Git source code tool; created branches and labels and performed merges.
Researched and implemented an Agile workflow for the team to deliver end-to-end continuous integration and testing of applications using Jenkins.
Performed a proof of concept with Splunk for monitoring Kubernetes cluster and application logs.
Used Ansible servers and workstations for management and wrote Ansible playbook roles for continuous deployment.
Created log collection in ELK (Elasticsearch, Logstash) installed on all cluster nodes to ship log data.
Integrated Kafka with the application to offload logs.
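
The kubectl sketch referenced above, showing how YAML definitions of the kind described (namespaces, secrets, storage classes, services, ingress rules) might be applied; the namespace, secret, and file names are placeholders.

```bash
#!/usr/bin/env bash
# Illustrative application of versioned YAML definitions with kubectl.
set -euo pipefail

# Idempotent namespace creation.
kubectl create namespace ace --dry-run=client -o yaml | kubectl apply -f -

# TLS secret consumed by the ingress rules.
kubectl -n ace create secret tls ace-tls \
  --cert=tls.crt --key=tls.key \
  --dry-run=client -o yaml | kubectl apply -f -

# Storage class, services, and ingress rules kept as versioned YAML files.
kubectl apply -n ace -f storage-class.yaml -f services.yaml -f ingress.yaml

# Confirm the rollout.
kubectl -n ace get pods,svc,ingress
```
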
PROJECT # 7:
Project Title: IBM APIC Installation and configuration
Client: DSV Air and Sea
Role: DevOps Engineer
Duration: November 2017 to October 2019
Description: IBM APIC is an API management tool. APIs enable organizations to share information with external developers, business associates, and other teams within the same organization. The DevOps team is responsible for installing, configuring, and managing the clusters.
Responsibilities:
Installed IBM APIC in the AWS public cloud and on on-premises VMs.
Actively involved in solution architecture design and followed agile methodology for development and improvements.
Implemented continuous integration and continuous delivery pipelines for deployment of the application and followed agile principles.
Utilized Git submodules and subtrees for managing dependencies and shared libraries across projects.
Wrote Terraform modules to provision the EKS cluster (see the sketch after this list).
Wrote Terraform scripts to provision and administer Amazon AWS cloud services, including EC2, ELB, EBS, IAM, S3, Route 53, Lambda, and Amazon VPC.
Developed Grafana template variables and queries for dynamic dashboard filtering and drill-down capabilities.
Implemented Git branching strategies such as Gitflow and trunk-based development to streamline development workflows.
Wrote Ansible scripts to deploy IBM API Connect into AWS EKS.
Configured YAML files to deploy subsystems such as management, gateways, portal, and analytics.
Prepared YAML definitions to configure namespaces, secrets, services, storage classes, and ingress rules.
Deployed application-level device gateways, communication gateways, and exposed gateways using containerization and orchestration technologies like Docker and Kubernetes.
Automated the configuration of organizations, catalogs, mail servers, users, TLS profiles, and topology in APIC.
Developed an automated life-cycle framework to deploy products and APIs into different environments using Jenkins and Gradle.
Integrated SonarQube and Artifactory with ADO pipelines for DevSecOps best practices.
Exported logs to an observability platform using Filebeat to monitor cluster health.
Took regular backups of Amazon cloud instances, set up environments for application launches, and established the release process for projects early in the SDLC.
Maintained and administered the Git source code tool; created branches and labels and performed merges.
Researched and implemented an Agile workflow for the team to deliver end-to-end continuous integration and testing of applications using Jenkins.
Integrated Prometheus with alerting tools like Alertmanager to implement automated alerting and notification workflows.
Configured Thanos components such as Sidecar, Store, and Query to achieve scalable and fault-tolerant monitoring solutions.
Used Ansible servers and workstations for management and wrote Ansible playbook roles for continuous deployment.
Created log collection in ELK (Elasticsearch, Logstash) installed on all cluster nodes to ship log data.
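
The sketch referenced above: a plausible Terraform workflow around the EKS provisioning bullet. The directory layout, backend config, workspace, variable file, cluster name, and region are assumptions for illustration only.

```bash
#!/usr/bin/env bash
# Illustrative Terraform workflow for provisioning an EKS cluster.
set -euo pipefail

cd terraform/eks

# Initialize with a per-environment backend and select the dev workspace.
terraform init -backend-config=backend-dev.hcl
terraform workspace select dev || terraform workspace new dev

# Plan against environment-specific variables, then apply the saved plan.
terraform plan -var-file=dev.tfvars -out=tfplan
terraform apply tfplan

# Point kubectl at the new cluster (cluster name/region are placeholders).
aws eks update-kubeconfig --name apic-dev --region us-east-1
kubectl get nodes
```
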
PROJECT # 6:
Project Title: Microsoft Azure API management Deployment Automation
Client: DSV Air and Sea
Role: DevOps Engineer
Duration: January 2017 to November 2017
Description: Automating Azure API deployments to the Azure portal.
Responsibilities:
Configured automated build and release pipelines.
Created policies and development standards for the API SDLC.
Used configuration management and release management to deliver APIs.
Conducted disaster recovery testing and backup strategies using Thanos to ensure data integrity and availability.
Created a Docker container to back up and restore the APIM developer portal for disaster recovery.
Configured Jenkins jobs to maintain the developer portal content life cycle.
Investigated Azure DevOps resource kit functionality.
Conducted capacity planning and scaling based on Prometheus metrics to ensure infrastructure readiness for growing workloads.
Designed and implemented custom Prometheus metrics, alerts, and queries to track system health and performance indicators.
Managed Git workflows using branching models like GitHub Flow or GitLab Flow for feature development and release management.
Configured Grafana alerts and notification channels for proactive monitoring and incident response.
Implemented data pipelines and ETL processes to collect, process, and analyze data generated by the AI/ML solution.
Created a virtual machine with a framework to generate ARM templates from Swagger files.
Implemented visualizations, dashboards, and alerts in monitoring systems; analyzed API performance using metrics.
Automated API testing using Jenkins, Postman collections, and Python and PowerShell scripts (see the sketch after this list).
Prepared support documentation and troubleshooting pages for non-technical users.
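
A sketch of the automated Postman-collection testing mentioned above, assuming the collections are executed with newman (the Postman CLI) from a Jenkins stage; the collection and environment file names are hypothetical, and newman itself is an assumed tool choice rather than one named in the resume.

```bash
#!/usr/bin/env bash
# Illustrative API smoke-test stage driven by a Postman collection.
set -euo pipefail

# newman is Postman's command-line collection runner.
npm install -g newman

# Run the collection against the staging environment and emit a JUnit
# report that Jenkins can pick up; newman exits non-zero on any failed
# assertion, which fails the stage.
newman run smoke-tests.postman_collection.json \
  --environment staging.postman_environment.json \
  --reporters cli,junit --reporter-junit-export results/newman.xml
```
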
PROJECT # 5:
Project Title: Microsoft BizTalk Application Deployment Automation
Client: DSV Air and Sea
Role: DevOps Engineer
Duration: June 2016 to December 2016
Description: Automating Microsoft BizTalk application deployments into different environments.
Responsibilities:
Configured WinRM on BizTalk servers to run Ansible scripts (see the sketch after this list).
Used agile methodology and Jira for improvements and bug fixes.
Wrote Bash scripts to execute Ansible playbooks.
Maintained the Ansible inventory of BizTalk servers.
Wrote Gradle and Groovy scripts to export and import resources from BizTalk servers.
Used the Git plugin to store resources in the repository.
Conducted regular Prometheus cluster maintenance, including upgrades, scaling, and troubleshooting, to ensure optimal performance.
Implemented Grafana annotations and annotation queries to contextualize events and incidents on dashboards.
Integrated SonarQube and Artifactory with CI/CD pipelines for DevSecOps best practices.
Wrote curl commands for API calls to Artifactory to store builds.
Wrote Job DSL scripts for Jenkins jobs (PreBuild, Build, and Deploy) to initiate the deploy process.
Wrote PowerShell scripts to stop and start BizTalk applications.
Set up Thanos cross-cluster federation to aggregate metrics from multiple Prometheus instances across distributed environments.
Automated IIS server deployments for custom .NET applications using PowerShell.
Wrote PowerShell code to create Windows Forms that help developers configure files.
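
The sketch referenced at the top of this list: running Ansible against WinRM-enabled BizTalk hosts. Host names, the inventory group, and the playbook are hypothetical, and the control node is assumed to have the pywinrm Python package installed.

```bash
#!/usr/bin/env bash
# Illustrative Ansible-over-WinRM run against BizTalk servers.
set -euo pipefail

# Minimal Windows inventory; NTLM over HTTPS (port 5986) is one common setup.
cat > inventory.ini <<'EOF'
[biztalk]
bts-app-01.example.local
bts-app-02.example.local

[biztalk:vars]
ansible_connection=winrm
ansible_winrm_transport=ntlm
ansible_port=5986
ansible_winrm_server_cert_validation=ignore
EOF

# Sanity-check connectivity, then run the deployment playbook.
ansible biztalk -i inventory.ini -m win_ping
ansible-playbook -i inventory.ini deploy-biztalk.yml --limit biztalk
```
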
PROJECT # 4:
Project Title: Elastic Stack Installation and Configuration
Client: UTI Pharma
Role: DevOps Engineer
Duration: January 2015 to June 2016
Description: Installing and configuring the Elastic Stack for application logs.
Responsibilities:
Wrote Ansible roles to install applications such as Elasticsearch, Curator, Logstash, Kibana, Kafka, Filebeat, Metricbeat, and NGINX.
Wrote Bash scripts to start and stop applications.
Integrated Prometheus with Kubernetes clusters for automated monitoring and scaling based on resource utilization.
Wrote Ansible scripts to add cron jobs that start applications after server restarts.
Maintained YAML configuration files as templates using Ansible.
Contributed to the Prometheus open-source community by reporting issues, submitting feature requests, and participating in discussions.
Utilized Grafana plugins and extensions to extend functionality and integrate with additional data sources like InfluxDB or Elasticsearch.
Configured Filebeat to fetch logs and send them to Kafka topics (see the sketch after this list).
Utilized Git for version control and collaboration, managing code repositories, branches, and merges effectively.
Created and maintained topics and partitions in Kafka Manager.
Implemented Git hooks for enforcing code quality checks, running automated tests, and triggering CI/CD pipelines.
Created Kibana dashboards and indexes.
Wrote Logstash JSON pipelines to filter logs.
Installed and configured NGINX for authentication and authorization when logging into Kibana.
Maintained Artifactory to install the latest releases.
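
A sketch of the Filebeat-to-Kafka flow referenced above, shown from the operations side; the broker address, topic name, and partition counts are assumptions, and the `--bootstrap-server` style of the Kafka CLI reflects current tooling rather than the version used at the time.

```bash
#!/usr/bin/env bash
# Illustrative setup check for shipping application logs to Kafka via Filebeat.
set -euo pipefail

# Create the topic Filebeat will publish application logs to.
kafka-topics.sh --bootstrap-server kafka01:9092 \
  --create --topic app-logs --partitions 6 --replication-factor 2

# Validate Filebeat's configuration and its Kafka output before restarting.
filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml
sudo systemctl restart filebeat

# Tail the topic to confirm log events are flowing.
kafka-console-consumer.sh --bootstrap-server kafka01:9092 \
  --topic app-logs --from-beginning --max-messages 5
```
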
PROJECT # 3:
Project Title: API Gateway Deployment Automation and maintenance
Client: UTI Pharma
Role: DevOps Engineer
Duration: Feb 2014 to January 2015
Description: Automating API Gateway service/policy deployments into different environments and maintaining the servers.
Responsibilities:
Used agile methodology and Jira for development, improvements and bug fixes.
Wrote Gradle and Groovy scripts to export and import objects from the gateway.
Integrated SonarQube and Artifactory with CI/CD pipelines.
Used the Git plugin to store code in the repository.
Configured API calls to Artifactory to store builds.
Wrote Groovy scripts for Jenkins jobs to initiate the deploy process.
Created SSH keys for identification.
Wrote and configured Icinga checks to monitor gateway servers.
Wrote Ansible roles to update system user passwords on servers.
Wrote Bash scripts to purge logs on gateway servers (see the sketch after this list).
Monitored ports and connections between gateways using Icinga checks.
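
A minimal sketch of the log-purge script mentioned above; the log directory and retention window are assumptions for illustration.

```bash
#!/usr/bin/env bash
# Illustrative log-purge job for gateway servers.
set -euo pipefail

LOG_DIR=/opt/gateway/logs
RETENTION_DAYS=14

# Compress logs older than a day, then delete anything past retention.
find "$LOG_DIR" -name '*.log' -mtime +1 -exec gzip -f {} \;
find "$LOG_DIR" -name '*.log.gz' -mtime +"$RETENTION_DAYS" -delete

# Report remaining disk usage so the monitoring checks have headroom data.
df -h "$LOG_DIR"
```
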
PROJECT # 2:
Project Title: Health Care Consolidation
Client: UTI Pharma
Role: Senior Integration Developer
Duration: March 2013 to Feb 2014
Description: The DSV HealthCare business unit has over 50 clients, whose integrations were all running in a legacy TIBCO environment. This project re-developed and re-deployed all services according to new standards to improve processing speed, CPU utilization, and logging.
Responsibilities:
Analyzed the legacy TIBCO services and prepared design documents such as integration requirements specifications, mapping specs, and Visio diagrams.
Participated in meetings with other teams to discuss dependencies and recorded minutes.
Developed TIBCO processes in the new life cycle.
Added external JAR references to the classpath for Java API calls.
Performed unit and integration testing with dependent/end systems.
Shared and requested URLs, WSDLs, and schemas between different teams.
Created policies in Layer7 for REST and SOAP services.
Created queues, topics, and EMS routing between servers.
Configured CLE for logging and exceptions.
Worked with DevOps to configure services for auto deployments.
Involved in debugging various defects reported in QA and production.
Involved in UAT testing with clients.
Prepared support documents for the support-team handover.
PROJECT # 1:
Project Title: Infor 10.2 WMS Integration
Client: UTI Pharma
Role: Tibco Developer
Duration: March 2012 to March 2013
Description: Infor 10.2 is a warehouse management system (WMS). This project integrated the clients KENWOOD, DAIKIN, SHARP, IVECO, and SAMSUNG with the WMS. Each integration carries messages such as ItemMaster, Sales Order, Advance Ship Notice, and Inventory Balance.
Responsibilities:
Gathered requirements from business users and converted them into functional and technical requirements.
Requested and provided resources from/to clients, such as sample files, XSDs, WSDLs, certs/keys, endpoints, and connection details.
Requested the firewall team to open ports for client communication.
Imported framework libraries into services for common life-cycle processes.
Developed processes based on technical specs.
Developed mappings for EDM and native formats.
Developed XSDs and web services for clients.
Wrote SQL queries for data extraction from the WMS and back-end systems.
Configured CLE logging services.
Integrated with common systems such as staging and MDM.
Created EMS queues and topics.
Configured the TIBCO ADB Adapter services.
Worked with DevOps to configure services for auto deployments.
