| Venkata Karthik Varma Sagi - Sr. Azure Databricks Engineer |
| [email protected] |
| Location: Seattle, Washington, USA |
| Relocation: |
| Visa: GC |
| Resume file: VenkataKarthikVarmaSagi Resume Databricks_1776880974139.docx |
Venkata Karthik Varma Sagi
Sr. Azure Databricks Engineer
Phone: +1 774 320 4999 | Email: [email protected] | LinkedIn: https://www.linkedin.com/in/kvinld/

PROFESSIONAL SUMMARY:
- Overall 10+ years of experience in the IT industry, including 8 years in DevOps, AWS/Azure cloud, CI/CD pipelines, configuration management, and build/release management, and 2 years in Linux/Windows administration.
- Involved in all stages of the Software Development Life Cycle (SDLC), including analysis, requirement gathering, design, development, testing, deployment, and maintenance of DevOps applications.
- Architected and built solutions leveraging DevOps tools such as Git, Maven, Jenkins, Docker, Ansible, and Chef.
- Experience with AWS cloud services including EC2, VPC, ELB, Auto Scaling, Security Groups, ECR, EKS, Route 53, IAM, EBS, AMI, EFS, RDS, S3, SNS, SQS, CloudWatch, CloudFormation, Lambda, and Direct Connect.
- Collaborated with development teams to create and maintain efficient CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy, accelerating software delivery.
- Automated application deployments using Argo CD, leveraging blue/green and canary release strategies to minimize downtime and ensure smooth rollouts.
- Experienced with build tools such as Maven, Ant, and Gradle for producing deployable artifacts from source code.
- Expertise in using repository managers such as Nexus, Docker Hub, and JFrog to store artifacts.
- Experience administering Microsoft Azure services including Azure App Services, Azure SQL Database, Azure AD, Azure Blob Storage, Azure Functions, Virtual Machines, Azure Fabric Controller, Azure Data Factory, Azure web applications, Azure Service Bus, and Notification Hubs.
- Experience designing Azure Resource Manager (ARM) templates to deploy multiple resources, designing custom build steps with PowerShell, and developing PowerShell scripts and ARM templates to automate provisioning and deployment.
- Implemented scalable, resilient, and cost-effective cloud architectures on Azure, leveraging services such as Azure Virtual Machines, Azure Kubernetes Service, and Azure App Services.
- Configured Azure Automation Desired State Configuration (DSC) for configuration management: assigned permissions through Role-Based Access Control (RBAC), assigned nodes to the proper automation accounts and DSC configurations, and set up alerts on any changes made to nodes and their configurations.
- Expertise in designing and implementing Continuous Integration (CI) across many environments using Azure DevOps tools, providing an automated, repeatable agile development process that allows teams to safely deploy code several times per day while supporting Azure Kubernetes Service (AKS).
- Experience with Docker components such as Docker Engine, Docker Hub, Docker Machine, Docker Compose, and Docker Registry; created custom Docker container images and tagged and pushed images to Docker Hub.
- Managed Kubernetes applications using Helm charts: created reproducible builds, managed Kubernetes deployment and service manifests, and managed releases of Helm packages.
- Used Azure Infrastructure as a Service (IaaS) to provision VMs and virtual networks and deploy web apps and Microsoft SQL Server, using ARM templates and Azure DevOps CI/CD pipelines.
- Conducted vulnerability assessments and security scans at various stages for early detection and mitigation of security threats using SonarQube and Aqua scans.
- Worked with monitoring tools such as Nagios, Splunk, and CloudWatch to health-check deployed resources and services.
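As an illustration of the ARM-template automation described above, a minimal sketch of generating a storage-account template programmatically (the resource name, API version, and SKU here are hypothetical placeholders, not taken from the resume):

```python
import json

def arm_template(storage_account_name: str, location: str = "eastus") -> dict:
    """Build a minimal ARM template for a storage account (illustrative only)."""
    return {
        "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
        "contentVersion": "1.0.0.0",
        "resources": [
            {
                "type": "Microsoft.Storage/storageAccounts",
                "apiVersion": "2022-09-01",  # assumed API version for illustration
                "name": storage_account_name,
                "location": location,
                "sku": {"name": "Standard_LRS"},
                "kind": "StorageV2",
            }
        ],
    }

# Serialize for deployment via `az deployment group create --template-file ...`
template_json = json.dumps(arm_template("demostorage001"), indent=2)
```

In practice such a template would be checked into source control and parameterized, with deployment driven by an Azure DevOps pipeline stage.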
- Expertise in configuring monitoring and alerting tools such as Prometheus, Grafana, and Splunk: set up alerts and developed multiple dashboards for individual applications in Kubernetes.
- Experience configuring VNet peering using Terraform modules and configuring Network Security Groups for two-tier and three-tier applications to filter network traffic and enable connectivity between resources across virtual networks.
- Used the Dynatrace APM tool to monitor Kubernetes workloads, applications, and cloud services with full-stack monitoring; created dashboards providing insight into application performance metrics such as response time, throughput, and error rates.
- Implemented Ansible to manage servers and automate the build and configuration of new servers; worked with Ansible playbooks for virtual and physical instance provisioning, configuration management, patching, and software deployment.
- Worked with Terraform key features such as Infrastructure as Code, execution plans, resource graphs, and change automation.
- Performed L2 and L3 full-lifecycle triage for all events on production servers, including incident logging and troubleshooting.
- Integrated Kibana with Elasticsearch to seamlessly visualize and analyze data stored in Elasticsearch indices.
- Created CI/CD pipelines for .NET, Java, and Python applications in Azure DevOps, integrating Bitbucket, SonarQube, and the Nexus repository; used Groovy to build complex pipelines in Jenkins.
- Created deployment areas such as testing, pre-production, and production environments in Azure Kubernetes Service.
- Led the migration of legacy applications to Azure Kubernetes Service (AKS), improving scalability and resilience while reducing infrastructure costs.
- Hands-on with OpenShift for container orchestration with Kubernetes, container storage, and automation to enhance container platform multi-tenancy.
- Good exposure to managing clustered environments across various Linux servers.
- Experience building Docker images using GitLab CI/CD runners.
- Experience setting up build and deployment automation for Terraform scripts using Jenkins.
- Resolved production issues quickly with an analytical mindset.
- Excellent communication skills, with experience organizing meetings and gathering project requirements from multiple teams in large multi-functional organizations.

EDUCATION:
Master's in Computer Science, Kent State University, December 2013
Bachelor of Technology, Andhra University, India, May 2012

TECHNICAL SKILLS:
Cloud Environments: Microsoft Azure, Amazon Web Services (AWS)
AWS: EC2, S3, Lambda, RDS, ECS, ECR, EKS, CloudFormation, IAM, VPC, CloudWatch, Kinesis, Elastic Beanstalk, Auto Scaling, CloudTrail, AWS Direct Connect, Route 53, SQS, SNS
Azure: VMs, App Services, Azure Repos, Azure Pipelines, Azure Boards, Azure Kubernetes Service (AKS), Azure Container Registry (ACR), Azure Functions, Azure Blob Storage, DevOps Services, Azure Monitor and Log Analytics, Networking Services
Configuration Management: Ansible, Chef, Puppet
Build Tools: Ant, Maven, Gradle
CI/CD Tools: Jenkins, Argo CD, Azure Pipelines, GitLab, GitHub Actions
Monitoring Tools: Splunk, Dynatrace APM, CloudWatch, ELK, Grafana, Prometheus, Datadog
Container Tools: Kubernetes (EKS, AKS), OpenShift, ECS, Docker
Scripting/Programming Languages: Python, Java, Shell (Bash), Ruby, .NET, YAML, JSON, Golang, PowerShell, Groovy
Version Control Tools: Git, GitHub, Azure Repos, Bitbucket, GitLab
Operating Systems: UNIX, Linux, RHEL, Windows Server
Databases: SQL Server, MySQL, NoSQL, S3, MongoDB, DynamoDB, Cassandra, Data Lake
Ticketing Tools: Jira, ServiceNow, Bugzilla, Mingle
Testing / Code Quality: Selenium, SonarQube, Veracode, X-Ray
Web/Application Servers: Apache Tomcat, Nginx, IIS, httpd, WebLogic, Kafka
Virtualization Tools: Oracle VirtualBox, VMware, vSphere, Vagrant
Infrastructure as Code: Terraform, ARM Templates, CloudFormation

WORK EXPERIENCE

Client: Alabama Medicaid Agency, Montgomery, Alabama (August 2025 - Present)
Role: Systems Analyst / Data Platform Engineer
Responsibilities:
- Analyzed business requirements and translated them into technical solutions, supporting application development and system enhancements.
- Designed and developed data pipelines and workflows using Databricks for scalable data processing and analytics.
- Implemented real-time and batch data streaming solutions using Kafka, enabling high-throughput, reliable data ingestion.
- Developed and maintained RESTful APIs for data access and system integration using C#/.NET and other programming frameworks.
- Built and optimized backend services using programming languages such as Java, Python, and .NET.
- Integrated microservices and distributed systems using API-driven architectures and event-driven patterns.
- Designed and implemented data transformation logic, ensuring data quality, consistency, and performance.
- Collaborated with cross-functional teams including developers, data engineers, and business analysts to deliver scalable solutions.
- Performed system analysis, debugging, and troubleshooting across application and data layers.
- Optimized application and data workflows for performance, scalability, and maintainability.
- Participated in code reviews, testing, and deployment processes to ensure high-quality deliverables.
- Maintained documentation for system design, APIs, and data flows to support knowledge sharing and future enhancements.
- Participated in capacity planning and resource forecasting to support infrastructure growth.
- Provided technical support during major incidents, performing root cause analysis and implementing preventive measures.
- Collaborated with database teams to support performance tuning and system-level optimizations for database workloads.
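The record transformation and validation logic described for these pipelines can be sketched roughly as follows; the field names are hypothetical, and a real implementation would run inside a Kafka consumer or Databricks job rather than plain Python:

```python
from datetime import datetime, timezone
from typing import Optional

def transform_record(raw: dict) -> Optional[dict]:
    """Validate and normalize one ingested record; drop records missing required fields."""
    required = {"member_id", "claim_amount"}  # hypothetical schema
    if not required.issubset(raw):
        return None  # in a real pipeline, route to a dead-letter topic instead
    return {
        "member_id": str(raw["member_id"]).strip(),
        "claim_amount": round(float(raw["claim_amount"]), 2),
        "processed_at": datetime.now(timezone.utc).isoformat(),
    }

batch = [{"member_id": " 42 ", "claim_amount": "19.5"}, {"claim_amount": 5}]
clean = [r for r in (transform_record(x) for x in batch) if r is not None]
```

The same validate-normalize-timestamp shape applies whether records arrive in a streaming micro-batch or a nightly batch load.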
- Provided on-call support and handled incident, problem, and change management processes following ITIL practices.

Environment: Databricks, Kafka, C#, .NET, REST APIs, Java, Python, System Analysis, Microservices, Data Pipelines, Event Streaming, SQL, API Integration.

Client: State of Nevada, Carson City, Nevada (April 2022 - August 2025)
Role: Azure Databricks Engineer
Team Name: CloudOps Innovators
Responsibilities:
- Created and maintained containerized microservices, configuring and maintaining a private container registry on Microsoft Azure for hosting images, with Azure Active Directory (AAD) for authentication.
- Managed Azure Active Directory (AAD) for identity lifecycle, role-based access control (RBAC), single sign-on (SSO), multi-factor authentication (MFA), and application registration for secure authentication and authorization.
- Administered user access and role assignments to Azure resources using Azure RBAC, resource policies, and management groups, ensuring least-privilege access and governance compliance.
- Designed and configured Azure virtual networks (VNets), subnets, NSGs, service endpoints, and private links to ensure secure, scalable network connectivity.
- Implemented and managed Azure storage solutions (Blob, File, Queue, Table) with lifecycle management policies, secure access keys, and replication strategies for performance and durability.
- Monitored and maintained Azure resources using Azure Monitor, Log Analytics, and custom alerts to ensure availability, performance optimization, and proactive incident response.
- Provisioned and managed Azure compute resources such as Virtual Machines (VMs), VM Scale Sets, and Azure Kubernetes Service (AKS), including image management and autoscaling policies.
- Deployed and maintained containerized microservices on Azure Kubernetes Service (AKS), integrated with private container registries and Helm for Cloud Native Function (CNF) deployments.
- Created and managed CI/CD pipelines in Azure DevOps to automate build, test, and deployment workflows; integrated Azure Key Vault to securely manage secrets during pipeline execution.
- Developed reusable ARM templates and automated provisioning of Azure infrastructure, including compute, networking, and storage resources, to support consistent, repeatable deployments.
- Configured Argo CD and GitOps workflows to synchronize application configurations with AKS clusters, ensuring version control and environment consistency.
- Monitored resource utilization and optimized cost by implementing Azure Advisor recommendations, resource tagging, and right-sizing compute instances across subscriptions.
- Designed and implemented authentication solutions using Azure Active Directory, including deployment of single sign-on (SSO), multi-factor authentication (MFA), and identity federation across diverse applications.
- Developed reusable ARM templates for provisioning Azure resources such as virtual machines, virtual networks, storage accounts, and web applications, ensuring consistency and scalability across environments.
- Created Azure Pipelines for Continuous Integration and Continuous Deployment (CI/CD) to build, test, and deploy applications, and configured Azure Artifacts for package management, storing and managing software artifacts.
- Configured package feeds in Azure Artifacts, defined feeds for specific package types (e.g., npm, Maven), set up permissions and access controls, and published packages to feeds from build pipelines.
- Deployed Cloud Native Functions (CNF) on Azure Kubernetes Service clusters using Helm charts; created values files from test-cluster deployments and promoted them to production clusters on Azure.
- Set up Bugzilla on an Azure Virtual Machine to track bugs in the deployment cycle and monitor environment issues.
- Created CI/CD pipelines that integrate with Azure Key Vault to retrieve secrets for use in pipeline jobs.
- Automated cron jobs to schedule development, model, and production jobs and disable them after execution, as a self-service for developers on Azure.
- Provisioned Azure Kubernetes Service (AKS) to deploy, manage, and scale Kubernetes clusters on Azure; deployed containerized applications to AKS clusters using Kubernetes manifests and Helm charts.
- Worked with AKS cluster configurations to define and customize cluster settings, including networking and security, and managed and scaled containerized workloads using deployments, services, and pods.
- Developed and enforced Azure Policies across multiple environments, ensuring compliance with corporate security standards and regulatory requirements.
- Deployed, configured, and maintained Azure Databricks workspaces for advanced analytics and big data processing, integrating seamlessly with Azure Data Lake Storage (ADLS), Azure Blob Storage, and Azure SQL Data Warehouse.
- Managed Databricks clusters, pools, and job scheduling to optimize performance and resource utilization, and handled Databricks Runtime upgrades and compatibility testing.
- Configured Databricks notebooks, Repos, and Delta Lake for efficient workflows, ensuring data governance compliance via Unity Catalog management and applying grants on catalogs and schemas.
- Implemented Azure Databricks security and access controls using Azure AD, enforcing Role-Based Access Control (RBAC) and managing personal access tokens (PATs) and OAuth-based authentication.
- Built data pipelines leveraging Azure Databricks and Azure Data Factory, integrating heterogeneous data sources and automating workflows, while using Databricks SQL for querying and performance diagnostics.
- Used Terraform to migrate legacy and monolithic systems to Azure, creating Terraform templates and modules to provision resources across Azure, Kubernetes, and other application environments.
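Databricks cluster and job scheduling of the kind described above is typically automated through the Databricks Jobs REST API. A rough sketch of building a create-job request payload (the notebook path, node type, and cron schedule are hypothetical placeholders; the payload shape follows the Jobs API 2.1 as I understand it):

```python
import json

def jobs_create_payload(job_name, notebook_path, node_type="Standard_DS3_v2", workers=2):
    """Build a Databricks Jobs API 2.1 create-job payload (illustrative values)."""
    return {
        "name": job_name,
        "tasks": [
            {
                "task_key": "main",
                "notebook_task": {"notebook_path": notebook_path},
                "new_cluster": {
                    "spark_version": "13.3.x-scala2.12",  # assumed runtime version
                    "node_type_id": node_type,
                    "num_workers": workers,
                },
            }
        ],
        # Run nightly at 02:00 UTC (example schedule)
        "schedule": {"quartz_cron_expression": "0 0 2 * * ?", "timezone_id": "UTC"},
    }

payload = jobs_create_payload("nightly-etl", "/Repos/team/etl/main")
# POST json.dumps(payload) to https://<workspace-url>/api/2.1/jobs/create with a bearer token
body = json.dumps(payload)
```

Driving job definitions through code like this keeps scheduling version-controlled instead of hand-edited in the workspace UI.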
- Set up build and deployment automation for Terraform scripts using Jenkins on Azure, restricting user and service account access by assigning and managing roles for enhanced security in development and test environments.
- Involved in migrating on-premises data to Azure Data Lake using Azure Data Factory.
- Implemented and configured HashiCorp Vault to securely store and manage sensitive information, including cryptographic keys, passwords, and API tokens.
- Designed and implemented disaster recovery and high availability solutions for Azure Databricks environments, including data replication, backup, and failover mechanisms to ensure business continuity and data resilience.
- Created hooks on Bitbucket repositories to help automate Jenkins jobs on Azure.
- Created jobs to manage F5 load-balanced deployments in the development environment on Azure.
- Installed, integrated, and ran Docker containers on Azure Container Instances and Azure Kubernetes Service.
- Used Kubernetes and Docker within a CI/CD framework to build, test, and deploy applications, and created Jenkins jobs to deploy applications to Azure Kubernetes Service.
- Created Docker containers and images, managed tagging and pushing to Azure Container Registry, and deployed containers via Azure Container Instances or Kubernetes for efficient application lifecycle management.
- Leveraged Istio for streamlined microservices communication and seamless integration within Kubernetes deployments.
- Installed and configured the Ingress-NGINX controller in Kubernetes clusters to enable external access to services.
- Implemented encryption and decryption using HashiCorp Vault's transit secrets engine to protect sensitive data in transit and at rest, ensuring compliance with data security standards.
- Used Jenkins and Azure Pipelines to push microservice builds to the Docker registry and deploy them to Azure Kubernetes Service.
- Deployed artifacts to staging and production environments via Azure Container Registry; built and published Docker images to the registry.
- Wrote Ansible playbooks to automate deployments, restart servers, and install new packages as required.
- Monitored deployed applications using performance monitoring tools such as Grafana and Prometheus integrated with Azure Monitor.
- Implemented data onboarding strategies to ingest data from diverse sources into Splunk, including log files, databases, message queues, and APIs, and configured both index-time and search-time extractions for efficient data handling.
- Configured alerting and monitoring rules in Splunk to proactively detect anomalies, security threats, and operational issues, integrating with external notification systems (e.g., email, Slack, PagerDuty) to alert stakeholders to critical events.
- Implemented and managed monitoring solutions using Datadog, deploying agents, configuring monitors, and creating dashboards, while leveraging Datadog's Terraform provider and API to manage monitoring resources alongside infrastructure provisioning.
- Integrated Datadog monitoring into Infrastructure as Code (IaC) pipelines to ensure consistent monitoring configurations across deployments.
- Created alarms and trigger points in Azure Monitor based on thresholds, monitoring server performance, CPU utilization, and disk usage in both development and test environments.
- Monitored Azure Kubernetes Service cluster jobs and overall performance; worked on upgrading AKS clusters, including commissioning and decommissioning nodes and pods.
- Tracked and managed tasks, defects, and project progress using Jira as a comprehensive project management and issue-tracking tool.
- Created and maintained Azure Runbooks for automated provisioning, monitoring, and maintenance tasks using PowerShell and Python.
- Collaborated with cross-functional teams to establish incident management procedures based on alerts, minimizing downtime and service disruption.
- Created design documents and presentations for data pipeline architecture, project planning, and reporting.

Environment: Azure DevOps, Terraform, Azure SQL, Azure Active Directory, Jenkins, Python, Git, Bitbucket, Ansible, Azure Services, Docker, Azure Databricks, Azure Key Vault, SonarQube, Argo CD, Azure Kubernetes Service (AKS), Azure Container Registry (ACR), CI/CD pipelines, Datadog, HashiCorp Vault, OpenShift Container Platform, .NET, Istio, ELK stack, Azure Log Analytics, Azure Pipelines, Nginx, Prometheus & Grafana, Splunk, Kafka, Azure Cosmos DB, Migration, Jira.

Client: Union Bank of Switzerland, Jersey City, New Jersey (March 2018 - April 2022)
Role: AWS Databricks Engineer
Team Name: CloudOps Vanguard
Responsibilities:
- Designed, implemented, and maintained scalable data lake architectures using Amazon S3 for storage and Databricks on AWS for distributed data processing.
- Developed and automated ETL pipelines using PySpark on Databricks to process high-volume batch and streaming data from AWS S3, transforming and loading it into Snowflake.
- Integrated the AWS Glue Data Catalog with Databricks for metadata management and schema discovery, streamlining query operations across large datasets.
- Developed reusable notebook workflows and job clusters in Databricks, leveraging parameterized notebooks and the Databricks REST API for automation.
- Tuned Spark configurations and partitioning strategies in Databricks to enhance parallelism, performance, and resource efficiency for big data processing.
- Designed and managed Snowflake schemas, roles, warehouses, and user access policies, applying RBAC principles and least-privilege models.
- Built incremental load frameworks in Databricks with watermarking and checkpointing, ensuring reliable, fault-tolerant processing of streaming data.
- Utilized AWS EventBridge and Step Functions to orchestrate complex, event-driven workflows integrating Databricks notebooks and downstream services.
- Automated infrastructure provisioning using Terraform and the AWS CLI, managing environments for Databricks, Snowflake, and associated AWS services.
- Integrated AWS Secrets Manager and IAM roles with scoped permissions to securely manage and access credentials and tokens within Databricks notebooks and Lambda functions.
- Built and deployed CI/CD pipelines using GitHub Actions, Jenkins, and Terraform Cloud to manage version-controlled deployments of data pipelines and infrastructure changes.
- Employed unit testing and data validation frameworks in Databricks for automated quality checks, schema validations, and regression testing.
- Configured AWS CloudTrail and CloudWatch Logs Insights to audit access and performance metrics for Databricks clusters and Lambda-based orchestration.
- Set up monitoring dashboards and alerting policies using the Datadog AWS integration, tracking resource utilization, pipeline failures, and latency anomalies.
- Leveraged Snowflake Time Travel and Fail-safe features for data recovery and auditing use cases in highly regulated environments.
- Integrated Databricks Delta Lake on AWS for ACID-compliant transactions, supporting upserts, merges, and schema enforcement in large-scale data lakes.
- Designed data archival strategies using S3 Intelligent-Tiering and lifecycle policies to reduce storage costs while ensuring long-term data retention.
- Conducted performance benchmarking between Snowflake compute warehouses and Databricks clusters for use-case-specific cost and speed optimization.
- Enabled real-time data ingestion from Kinesis Data Streams into S3, triggering downstream processing with AWS Lambda and Databricks jobs.
- Provided technical leadership in designing and documenting AWS-native data architectures, facilitating knowledge transfer across engineering and analytics teams.
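The incremental-load framework with watermarking and checkpointing mentioned for this role can be sketched in plain Python; here the checkpoint store is an in-memory dict for illustration, whereas a Databricks pipeline would persist checkpoints to S3 or ADLS:

```python
def incremental_load(records, checkpoint, source="orders"):
    """Process only records newer than the stored watermark, then advance it.

    `records` are dicts with an `event_time` key (hypothetical schema);
    `checkpoint` maps source name -> last processed watermark.
    """
    watermark = checkpoint.get(source, 0)
    new = [r for r in records if r["event_time"] > watermark]
    if new:
        checkpoint[source] = max(r["event_time"] for r in new)
    return new

ckpt = {}
first = incremental_load([{"event_time": 5}, {"event_time": 9}], ckpt)    # both are new
second = incremental_load([{"event_time": 9}, {"event_time": 12}], ckpt)  # 9 is a duplicate
```

Because already-seen events fall at or below the watermark, replays of the same batch are filtered out, which is what makes the load fault-tolerant on restart.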
- Participated in Agile ceremonies and collaborated with product owners and stakeholders to prioritize backlog items related to data availability and pipeline improvements.
- Implemented row-level security in Snowflake using secure views and dynamic data masking for compliance with data governance standards.
- Conducted data quality monitoring using expectation libraries (such as Deequ or Great Expectations) in Databricks pipelines integrated with CI tools.
- Managed multi-account AWS organizations using cross-account IAM roles and S3 bucket policies to centralize data ingestion and processing.
- Automated schema evolution and tracking using Delta Lake features, ensuring flexibility and robustness as source data formats evolve.
- Deployed Kubernetes clusters on Amazon EC2 instances using kOps; managed local Kubernetes deployments by creating local clusters, deploying application containers, building and maintaining Docker container clusters managed by Kubernetes, and deploying applications with Helm charts.
- Set up development and production data pipelines for ML teams on Mesos-managed EC2 clusters with Marathon for Docker management, with data stored in AWS S3 and transformed by Python ETL scripts.
- Designed and implemented ServiceNow solutions tailored to organizational needs, leveraging IT Service Management (ITSM) modules such as Incident, Change, Problem, and Service Catalog.
- Used build tools such as Maven and Ant to produce deployable artifacts (WAR and EAR files) from source code.
- Performed Splunk administrative tasks such as user management, role-based access control (RBAC), license management, and instance tuning; scheduled backups, maintenance tasks, and upgrades to keep Splunk environments running smoothly.
- Integrated Splunk with other IT operations tools and platforms (e.g., Nagios, ServiceNow, AWS CloudWatch) to streamline monitoring, troubleshooting, and incident management workflows.
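A stripped-down version of the expectation-style data quality checks mentioned above; real pipelines would use a library such as Great Expectations or Deequ, and the column names here are hypothetical:

```python
def check_expectations(rows, column, min_value=None, allow_null=False):
    """Return human-readable failures for one column, expectation-library style."""
    failures = []
    for i, row in enumerate(rows):
        value = row.get(column)
        if value is None:
            if not allow_null:
                failures.append(f"row {i}: {column} is null")
            continue  # null rows can't fail the range check
        if min_value is not None and value < min_value:
            failures.append(f"row {i}: {column}={value} below {min_value}")
    return failures

rows = [{"amount": 10}, {"amount": None}, {"amount": -3}]
problems = check_expectations(rows, "amount", min_value=0)
```

In a CI-integrated pipeline, a non-empty failure list would fail the build or quarantine the offending batch before it reaches downstream consumers.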
- Implemented Dynatrace for end-to-end application performance monitoring, enabling real-time visibility into application health and performance.
- Implemented Real User Monitoring (RUM) with Dynatrace to analyze user interactions and optimize page load times.
- Configured custom alerts in Dynatrace to receive real-time notifications for application and infrastructure anomalies, enabling swift incident response.
- Defined and implemented SLOs and SLIs for critical services, establishing measurable targets for reliability and performance; managed SLAs to ensure service delivery met agreed-upon performance standards and availability targets.
- Developed and tracked key SRE KPIs, including MTTR (Mean Time to Recovery), availability, incident frequency, and error rate, and implemented KPI dashboards to provide real-time visibility into system performance and reliability metrics.

Environment: Ansible, Apache Tomcat, AWS, AWS CodePipeline, Argo CD, AWS Secrets Manager, Chef, CI/CD pipelines, CloudCheckr, CloudFormation, CloudWatch, Confluence, Cost Explorer, Docker, Dynatrace, Elastic Container Registry (ECR), Elastic Kubernetes Service (EKS), ELK Stack, GitLab, GitHub, Git, Helm charts, IAM, Jenkins, Jira, Migration, Nagios XI, OpenShift, Prometheus, Python, ServiceNow, SonarQube, Splunk, Terraform.

Client: Telestream, San Francisco, CA (March 2016 - March 2018)
Role: DevOps Engineer
Team Name: DevOps Trailblazers
Responsibilities:
- Established a Continuous Delivery pipeline with Docker, Jenkins, and GitHub.
- Installed and configured Jenkins to support various Java builds, automated continuous builds using Jenkins plugins, and published Docker images to the Nexus repository.
- Implemented SonarQube for continuous inspection of code quality and automated Nagios alerts and email notifications using Python scripts executed through Chef.
- Installed, configured, and maintained web servers such as Apache Web Server and WebSphere Application Server on Red Hat Enterprise Linux (RHEL).
- Proficient with the Red Hat Linux kernel, memory upgrades, and swap areas.
- Experienced in Linux Kickstart and Sun Solaris JumpStart installation; configured DNS, DHCP, NIS, NFS, and other network services in Sun Solaris 8/9.
- Leveraged multiple EC2 instances simultaneously and ensured exceptionally durable, available data using the S3 data store with versioning and lifecycle policies; created AMIs for mission-critical production server backups.
- Automated deployments on AWS by creating IAM roles, integrating Jenkins with AWS using the CodePipeline plugin, and provisioning EC2 instances.
- Implemented Chef concepts such as Roles, Environments, Data Bags, Knife, and Chef Server admin/organizations.
- Wrote Chef recipes to automate the build and deployment process and utilized data bags in Chef for better environment management.
- Implemented monitoring solutions using Splunk, enabling proactive issue detection and resolution for CI/CD pipelines and infrastructure.
- Evaluated Chef cookbook modifications on cloud instances in AWS using Test Kitchen and ChefSpec.
- Developed Chef cookbooks for various database configurations to modularize and optimize product configuration, converted production support scripts to Chef recipes, and provisioned AWS servers using Chef recipes.
- Worked with the Knife command-line tool to create recipes and cookbooks and utilized the Chef Supermarket.
- Implemented the docker-maven-plugin and Maven POMs to build Docker images for all microservices, and used Dockerfiles to build Docker images from Java JAR files.
- Utilized Git for source code version control, integrated with Jenkins for the CI/CD pipeline, and managed builds with the Maven and Ant build tools.
- Installed, configured, and managed monitoring tools such as Nagios for resource and network monitoring.
- Developed automated build and deployment processes for applications, re-engineered setups for a better user experience, and built a continuous integration system for all products.
- Managed infrastructure servers from SCM to GitHub and Chef; worked extensively with the distributed version control system Git.
- Responsible for building and deploying consistently repeatable builds to production and non-production environments using Jenkins.
- Collaborated with the development team to generate deployment artifacts (JAR, WAR, EAR) using Ant scripts and Jenkins.
- Used the Maven dependency management system to deploy snapshot and release artifacts to Nexus, facilitating artifact sharing across projects.
- Implemented a CI/CD automation process using Jenkins for CI and Docker for CD.
- Installed, updated, diagnosed, and troubleshot the Jira issue-tracking and project management application while learning agile methodology; created and configured new Jira projects and maintained existing ones.
- Managed servers built on Linux, Solaris, and Windows platforms using the Chef configuration management tool.
- Created deployment notes in collaboration with the local SCM team and released deployment instructions to Application Support.

Environments: Docker, Jenkins, GitHub, Nexus, SonarQube, Nagios, Python, CI/CD pipeline, Chef, Red Hat Enterprise Linux (RHEL), AWS (Amazon Web Services), Apache Web Server, WebSphere Application Server, Sun Solaris, EC2, S3, IAM, Test Kitchen, ChefSpec, Knife, docker-maven-plugin, Maven, Git, Ant, Jira.

Client: University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania (February 2015 - March 2016)
Role: Linux Administrator
Team Name: Cloud Infrastructure Specialists
Responsibilities:
- Handled deployment and management through AWS CloudFormation on EC2 instances and maintained Amazon S3 storage.
- Knowledge of SaaS, PaaS, and IaaS concepts in cloud computing architecture.
- Responsible for creating and managing a Docker deployment pipeline for custom application images in the cloud using Jenkins.
- Implemented and maintained branching and build/release strategies using Git; administered the Jenkins server, including setup, nightly builds, and parameterized builds.
- Wrote Python scripts to automate application deployments.
- Used Git as the source code management tool and integrated it with Jenkins for the CI/CD pipeline, code quality tracking, and user management with the Maven build tool.
- Expertise in installing and configuring automation tools such as Puppet, including the Puppet Master, agent nodes, an administrative control workstation, and tools like Hiera, MCollective, and Puppet Console.
- Used Bamboo and Octopus Deploy for Continuous Integration (CI) and Continuous Deployment (CD).
- Used Puppet as the configuration management tool for an environment of more than 5,000 servers, comprising both virtual and physical machines.
- Wrote Puppet modules from scratch and enhanced existing modules per application requirements; wrote templates in Ruby format and used Hiera to supply template variables for node configuration.
- Experienced with Atlassian tools such as Jira for work tracking and Confluence as a central repository for documentation.
- Troubleshot and performance-tuned issues with applications such as Oracle 10.x/11.x and application servers such as WebLogic.
- Used Puppet to manage web applications, config files, databases, commands, users, mount points, and packages.
- Involved in production support tasks, including troubleshooting and data issues for both divisional and national systems.

Environments: AWS CloudFormation, Amazon S3, Amazon EC2, CI/CD, Docker, Jenkins, Git, Python, PowerShell, Maven, Puppet, Bamboo, Octopus Deploy, Atlassian Jira, Atlassian Confluence, Oracle WebLogic, Hiera, MCollective, shell scripting.
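The Python deployment-automation scripting mentioned in this role can be sketched as a small idempotent deploy step; the template keys, port, and environment names below are hypothetical placeholders:

```python
import hashlib

def render_config(template: str, values: dict) -> str:
    """Fill a simple %(key)s-style template with environment-specific values."""
    return template % values

def needs_redeploy(current: str, desired: str) -> bool:
    """Compare config content hashes so the deploy step is idempotent:
    re-running against an unchanged config triggers no restart."""
    def digest(s: str) -> str:
        return hashlib.sha256(s.encode()).hexdigest()
    return digest(current) != digest(desired)

template = "listen_port=%(port)s\nenv=%(env)s\n"
desired = render_config(template, {"port": 8080, "env": "prod"})
restart = needs_redeploy("listen_port=8080\nenv=dev\n", desired)
```

Hash-comparing rendered configs is the same convergence idea Puppet and Chef apply: only change the node (and bounce the service) when the desired state actually differs.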
Client: Intel Corporation, Santa Clara, CA (February 2014 - February 2015)
Role: Linux System Administrator
Team Name: System Administration and Support Engineers
Responsibilities:
- Installed, configured, and upgraded the Linux, Solaris, and HP-UX operating systems.
- Proficient in installing patches and software packages using RPM and YUM on Linux; pkgadd, pkginfo, pkgrm, patchadd, showrev -p, and patchadd -p on Solaris; and swinstall, swremove, and swlist on HP-UX.
- Expert in creating depots for patches and installing packages from depots on HP-UX, and in building RPMs using rpmbuild on Linux.
- Exceptional knowledge of installation, configuration, file system, and RAID volume management through VxVM and Solaris Volume Manager (SVM) on Solaris and LVM on Linux and HP-UX.
- Impressive knowledge of Linux/Unix kernel tuning and building customized kernels.
- Experience installing, configuring, and maintaining WebLogic Application Server and WebSphere Server with Java application tools in Linux and UNIX server environments.
- Created ZFS (Zettabyte File System) pools, snapshots, and clones in Solaris 10; exported ZFS datasets between local zones.
- Maintained DNS, NTP, and MySQL database servers.
- Installed and configured the Nagios system and network monitoring tool and troubleshot virtual machine issues.
- Compiled, built, and installed PostgreSQL 8.3.1 and wrote a shell startup script on SUSE Linux Enterprise 10 SP1 on a Supermicro 6015B-3R dedicated server for the Fortress platform development lab, app, and QA teams.
- Expert in applying new patches and packages on Linux.

Environments: Linux, Solaris, HP-UX, RPM, YUM, pkgadd, pkginfo, pkgrm, patchadd, showrev, swinstall, swremove, swlist, VxVM, Solaris Volume Manager (SVM), LVM, WebLogic Application Server, WebSphere Server, ZFS, DNS, NTP, MySQL, Nagios, PostgreSQL, shell scripting.