
PRAVEEN KODALI - DevOps & SRE Engineer
[email protected]
Location: Gardiner, North Carolina, USA
Relocation:
Visa: H1
Summary:
8+ years of experience in the IT sector in Linux administration, build engineering, and release management, building and deploying applications by adopting DevOps practices such as continuous development, Continuous Integration (CI), and Continuous Deployment (CD) with tools like Git, Maven, VSTS, Jenkins, Ansible, Chef, Docker, and Kubernetes, and managing cloud services on Azure, AWS, and GCP.
Experience in assigning Azure services to specific locations and integrating them with Web Apps and Key Vaults.
Working experience with Azure Storage, SQL Azure, and different PaaS solutions including Web and Worker Roles and Azure Web Apps.
Experience in creating CI/CD pipelines for .NET, Java, and UI-based web applications.
Experience in cloud computing using PCF, Docker, AWS EC2, and S3.
Responsible for setting up databases in AWS using RDS, storage using S3 buckets, and configuring instance backups to S3 buckets.
Experience troubleshooting AWS EC2 instances through System Status Check and Instance Status Check alerts and remediating issues where necessary.
Experience in writing Infrastructure as Code (IaC) with Terraform, Azure Resource Manager, and AWS CloudFormation. Created reusable Terraform modules in both Azure and AWS cloud environments.
Used Terraform and Ansible to migrate legacy and monolithic systems to Azure, and managed Ubuntu and RHEL virtual servers on Azure by creating Ansible nodes.
Experience with GCP compute services such as App Engine and Cloud Functions.
Expertise in building Docker images from a Dockerfile; worked on container snapshots, removing images, and managing Docker volumes. Orchestrated Docker images and containers using Kubernetes by creating master and worker nodes.
Extensive experience with build automation tools such as Apache ANT and Maven 3.
Experience using Kubernetes to deploy, scale, load-balance, and manage Docker containers with multiple namespaced versions using Helm charts.
Handled the UrbanCode Deploy tool for automating application deployments in agile software development.
Experience in microservices development using Spring Boot and deployment to Pivotal Cloud Foundry (PCF).
Monitored infrastructure and applications through alerting and aggregation using Prometheus & Grafana, Splunk, CloudTrail, CloudWatch, and Elastic Stack.
Experience in using OpenShift for container orchestration with Kubernetes, container storage, automation, to enhance container platform multi-tenancy.
Worked on High Availability, Failover and Disaster recovery of the JIRA, Confluence and Bitbucket applications.
Have ample experience in load balancing and monitoring with Nagios and Splunk.
Developed Groovy scripts to automate build processes; created and maintained Groovy deployment scripts for WebLogic and web application servers.
Extensively designed and developed Automation Framework for RESTful Service API testing using Groovy scripting language in SOAP UI Pro tool.
Experienced in Ansible Tower, which provides an easy-to-use dashboard and role-based access control and in developing Ansible playbooks for managing the applications.
Experience in Designing and implementing scalable cloud-based web applications using AWS and GCP.
Experienced in branching, tagging, and maintaining versions across environments using SCM tools like Git, GitLab, GitHub, and Subversion (SVN) on Linux and Windows platforms.
Created Shell (Bash), Ruby, Python, Groovy, YAML, and PowerShell scripts for automating tasks.
Experience integrating the code quality tools SonarQube and JaCoCo into CI/CD pipelines.
Created Splunk app for Enterprise Security to identify and address emerging security threats through the use of continuous monitoring, alerting and analytics.
Worked extensively on troubleshooting Tomcat application server and web server issues across all environments (Dev, STG, Pre-Prod, Prod).
Integrated WebLogic and JBoss with proxy servers (Sun ONE) and Apache as authentication servers.
Migrating on premise Database Servers to AWS Cloud using AWS DMS. Developed Python modules to automate processes in AWS.
Managed SVN repositories for branching, merging, and tagging, and developed Shell/Groovy scripts for automation purposes.
Experience working with SQL databases, MongoDB, Cassandra, and the search-optimized store Elasticsearch.
Experience in writing shell scripts using ksh, bash, and Perl for process automation of databases, applications, backups, and scheduling.
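As an illustrative sketch of the IAM policy work listed above: a least-privilege policy document can be composed programmatically before being applied through IAM or Terraform. The bucket name and the choice of actions below are hypothetical examples, not taken from any actual engagement.

```python
import json

def make_s3_read_policy(bucket_name):
    """Compose a minimal read-only IAM policy document for one S3 bucket.

    The bucket name is a placeholder; a real policy would be reviewed and
    attached via IAM or Terraform, not generated ad hoc.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket_name}",
                    f"arn:aws:s3:::{bucket_name}/*",
                ],
            }
        ],
    }

# Serialize to the JSON form IAM expects.
print(json.dumps(make_s3_read_policy("example-backup-bucket"), indent=2))
```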



Technical Skills:
Cloud Platforms: AWS, Microsoft Azure, Google Cloud Platform (GCP), OpenStack and PCF.
Continuous Integration Tools: Jenkins, TeamCity.
Continuous Deployment Tools: Docker, Kubernetes Clusters.
Configuration Management Tools: Ansible, Puppet and Chef.
Source Control Management Tools: Git, Bitbucket and SVN.
Build Tools: Maven, ANT and Gradle.
Tracking Tools: JIRA and ServiceNow.
Web and Application Servers: Apache, Nginx, JBoss, Apache Tomcat and WebLogic.
Operating Systems: Windows, Linux/Unix and macOS.
Scripting Languages: Shell, Python, SQL, XML, HTML, CSS3, Ruby, JSON and YAML.


PROFESSIONAL EXPERIENCE

Silicon Valley Bank, Orlando, Florida    July 2021 - Current
Site Reliability Engineer

Roles and Responsibilities:

Worked on Amazon Web Services (AWS) such as Elastic Compute Cloud, Simple Storage Service, CloudFormation, Glacier, block storage, Elastic Beanstalk, AWS Lambda, Virtual Private Cloud, load balancing, Relational Database Service, and CloudWatch.
Created TFE scripts to fully automate AWS services including ELB, CloudFront distributions, EC2, security groups, and S3.
Orchestrated end-to-end data workflows in Azure Data Factory, ensuring smooth data integration and transformation across various sources and destinations.
Integrated and configured DataDog as the primary monitoring and observability platform for infrastructure, applications, and services.
Designed, deployed, and maintained Amazon Connect solutions for multiple clients, ensuring high availability and scalability.
Collaborated with cross-functional teams to design and implement data movement strategies, optimizing data transfer efficiency and minimizing latency.
Proficient in adding, modifying, and removing individual role permissions in AWS IAM to ensure proper access control.
Proficient in deploying and managing OpenShift clusters.
Designed and developed custom DataDog metrics and dashboards to provide real-time insights into system performance, enabling proactive issue resolution.
Configured and networked a Virtual Private Cloud (VPC) with public and private subnets, routing outbound traffic from the private subnet through a NAT Gateway.
Expertise in writing AWS Service Control Policies (SCPs) from scratch in JSON format, enabling fine-grained control over account permissions.
Developed and tested disaster recovery plans for Azure Synapse Analytics, ensuring data integrity and minimal downtime in the event of system failures.
Integrated Amazon Connect with other AWS services like Lambda, S3, and CloudWatch to create end-to-end call center solutions.
Created monitoring and alerting systems for Azure Synapse Analytics to detect and respond to performance issues.
Created snapshots and Amazon Machine Images (AMIs) of instances for backup and created Identity and Access Management (IAM) policies.
Experienced in configuring and optimizing OpenShift for performance and scalability.
Created CloudWatch dashboards for monitoring CPU utilization, Network In/Out, Packets In/Out, and other instance parameters.
Utilized DataDog's APM tools to profile and optimize application code, resulting in improved application response times.
Integrated custom metrics into Dynatrace to provide granular insights into application performance and behavior.
Implemented and optimized Azure Spark clusters to process large-scale data sets, improving data processing performance.
Built and deployed TFE scripts to EC2 application servers in a Continuous Integration agile environment and automated the complete process.
Utilized Dynatrace monitoring capabilities to identify performance bottlenecks and inefficiencies in applications and infrastructure, leading to targeted optimizations and resource utilization improvements.
Established monitoring and alerting systems for Azure Data Factory pipelines to ensure proactive detection and resolution of issues.
Applied Azure Synapse Analytics for conducting failure mode analysis and implementing improvements to minimize downtime.
Proficient in implementing and configuring Dynatrace to monitor complex microservices architectures and distributed systems.
Skilled in updating IAM roles and permission boundaries to align with security and access requirements.
Developed and deployed applications on OpenShift using various programming languages and frameworks.
Managed AWS EC2 instances utilizing Auto Scaling, Elastic Load Balancing and Glacier for our QA and UAT environments as well as infrastructure servers for GIT and Chef.
Utilized Infrastructure as Code (IaC) tools like Ansible, Terraform, or Helm to automate OpenShift deployments and configurations.
Automated Spark job deployments and monitoring using Azure DevOps, reducing deployment time and enhancing system reliability.
Used IAM to create new accounts, roles, groups, and policies, and developed critical modules such as generating Amazon Resource Names (ARNs) and integration points with DynamoDB and RDS.
Assisted with CloudBees Jenkins, JFrog Artifactory, and IBM UrbanCode Deploy, and configured CI/CD pipelines.
Environment: AWS, Terraform, Docker, OpenShift, Kubernetes, Jenkins, Ansible Tower, CI/CD, Bitbucket, ELK, Nagios, Groovy scripting, Linux, Python scripting, PCF.
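As an illustrative sketch of the CloudWatch-style alerting described in this section: an alarm fires only after the metric breaches its threshold for several consecutive evaluation periods. The threshold and sample values below are invented, and a real alarm would be configured in CloudWatch itself rather than evaluated by hand.

```python
def alarm_state(datapoints, threshold, periods):
    """Return "ALARM" if the last `periods` datapoints all breach the
    threshold, mimicking CloudWatch's evaluation-periods behavior."""
    if len(datapoints) < periods:
        return "INSUFFICIENT_DATA"
    recent = datapoints[-periods:]
    return "ALARM" if all(v > threshold for v in recent) else "OK"

# CPUUtilization samples (percent); the last three all exceed 80%.
cpu = [42.0, 55.3, 81.2, 93.7, 90.1]
print(alarm_state(cpu, threshold=80.0, periods=3))  # ALARM
```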
Armor, Richardson, TX    May 2020 - July 2021
Site Reliability Engineer

Roles and Responsibilities:
Developed Azure solutions and services (IaaS and PaaS). Managed Azure infrastructure: Web Roles, Worker Roles, storage, Azure AD licenses, and Office 365. Created cache memory on Windows Azure to improve the performance of data transfer between SQL Azure and WCF services.
Designed and implemented Azure Landing Zones for various projects, ensuring alignment with best practices and security standards.
Troubleshot and resolved performance bottlenecks in Azure Spark applications, ensuring optimal performance and resource utilization.
Worked in a highly collaborative operations team to streamline the process of implementing security in the Confidential Azure cloud environment and introduced best practices for remediation.
Worked on Azure Site Recovery and Azure Backup: deployed instances in Azure environments and data centers, migrated to Azure using Azure Site Recovery, collected data from all Azure resources using Log Analytics, and analyzed the data to resolve issues.
Designed and implemented data storage solutions in Azure Data Lake Storage, ensuring secure and scalable storage for petabytes of data.
Developed automation scripts and workflows to streamline the deployment and management of Dynatrace across diverse environments, ensuring consistency and scalability.
Utilized AppDynamics to identify and resolve bottlenecks, enhancing system performance. Applied deep-dive analysis to pinpoint code inefficiencies and infrastructure limitations, resulting in substantial improvements in application response times.
Developed comprehensive documentation outlining AppDynamics configurations, aiding team members in utilizing the platform effectively.
Developed and maintained data access policies and permissions in Azure Data Lake, ensuring data security and compliance with industry regulations.
Proficient in writing simple to intermediate-level scripts to automate recurring tasks and streamline IAM processes.
Integrated OpenShift with continuous integration and continuous deployment (CI/CD) pipelines.
Played a key role in automating the deployments on Azure using GitHub, Terraform and Jenkins.
Created, validated, and reviewed solutions and effort estimates for converting existing workloads from classic to ARM-based Azure cloud environments.
Implemented comprehensive monitoring strategies using AppDynamics, configuring custom dashboards and alerts for proactive issue detection.
Implemented security controls within Azure Landing Zones to meet regulatory compliance requirements.
Leveraged DataDog's machine learning-based anomaly detection capabilities to automatically identify and respond to abnormal system behavior.
Proficient in writing Infrastructure as Code using tools such as ARM templates, Terraform, or Azure Bicep for automated deployment and management of Azure resources.
Implemented Azure DevOps Code Pipeline in Terraform for infrastructure as code.
Implemented automated deployment strategies using tools like Jenkins, GitLab CI/CD.
Developed automation system using PowerShell scripts and JSON templates to remediate the Azure services.
Used Terraform to reliably version and create infrastructure on Azure. Created resources, using Azure Terraform modules, and automated infrastructure management.
Worked on the OpenShift platform managing Docker containers and Kubernetes clusters, and created Kubernetes clusters using Ansible playbooks.
Created comprehensive visualizations of infrastructure topologies using DataDog's network maps, enabling a deeper understanding of dependencies and potential bottlenecks.
Configured and integrated GIT into the continuous integration (CI) environment along with Jenkins and written scripts to containerize using Ansible and orchestrate it using Kubernetes.
Implemented security best practices on OpenShift, including role-based access control (RBAC) and network policies.
Used Jenkins pipelines to drive all micro services builds out to the Docker registry and then deployed to Kubernetes, Created Pods and managed using Kubernetes.
Developed and maintained AWS Lambda functions to extend the functionality of Amazon Connect and enhance customer experiences.
Leveraged AppDynamics metrics to forecast resource needs accurately, ensuring optimal capacity planning and cost-efficient resource allocation.
Analyzed DataDog metrics and recommendations to optimize cloud resource utilization, leading to a reduction in infrastructure costs.
Employed AppDynamics as a key tool during incident response, rapidly isolating issues and minimizing downtime.
Used Ansible Tower, which provides an easy-to-use dashboard and role-based access control, so that it's easier to allow individual teams access to use Ansible for their deployments.
Implemented a CI/CD pipeline using Azure DevOps (VSTS, TFS) in both cloud and on-premises with GIT, MS Build, Docker, Maven along with Jenkins plugins.
Migrated legacy applications to the GCP platform.
Worked on Terraform for provisioning environments on the GCP platform.
Environment: Azure, Terraform, Docker, OpenShift, Kubernetes, Jenkins, Ansible Tower, CI/CD, Nexus, SonarQube, Maven, Bitbucket, Grafana, Octopus Deploy, Prometheus, ELK, New Relic, Nagios, Sumo Logic, Groovy scripting, Linux, Python scripting, GCP, PCF.
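The remediation automation described above (PowerShell scripts and JSON templates against Azure services) can be sketched in Python as a tag-compliance check that flags resources for a remediation step to act on. The required tag keys and resource names are hypothetical.

```python
# Hypothetical governance policy: every resource must carry these tag keys.
REQUIRED_TAGS = {"environment", "owner", "cost-center"}

def non_compliant(resources):
    """Return names of resources missing any required tag key."""
    return [
        r["name"]
        for r in resources
        if not REQUIRED_TAGS <= set(r.get("tags", {}))
    ]

resources = [
    {"name": "vm-app-01",
     "tags": {"environment": "prod", "owner": "sre", "cost-center": "42"}},
    {"name": "storacct7", "tags": {"environment": "dev"}},
]
print(non_compliant(resources))  # ['storacct7']
```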

Charter Communications, Charlotte, NC    July 2018 - May 2020
AWS/DevOps Engineer

Roles and Responsibilities:
Worked with various AWS services: EC2, ELB, Route 53, S3, CloudFront, SNS, RDS, IAM, CloudWatch, CloudFormation, Elastic Beanstalk, Lambda, and CloudTrail.
Worked on multiple AWS instances; set up security groups, Elastic Load Balancing, AMIs, and Auto Scaling to design cost-effective, fault-tolerant, and highly available systems.
Designed and developed AWS Cloud Formation templates to create custom VPC, Subnets, NAT to ensure deployment of web applications.
Followed AWS IAM best practices to ensure secure identity and access management within cloud environments.
Automated Compliance policy Framework for multiple projects in GCP.
Configured Virtual Private Network (VPN) connections between AWS regions as well as from cloud to on-premises through a Virtual Private Gateway (VPG) and Customer Gateway (CGW).
Integrated AppDynamics into automation workflows, scripting customized solutions to automate routine tasks such as health checks, deployment validations, and auto-scaling triggers.
Used Terraform to customize our infrastructure on AWS, configured various AWS resources.
Worked with Terraform to create stacks in AWS from scratch and updated the Terraform code as per the organization's requirements on a regular basis.
Implemented real-time reporting and analytics using Amazon Connect's data streams, enabling clients to make data-driven decisions for their call centers.
Configured IAM roles with assume policies to facilitate cross-account access and secure resource sharing.
Set up monitoring and logging solutions (e.g., Prometheus, Grafana, ELK stack) for OpenShift clusters.
Created custom alerts and dashboards to proactively monitor cluster health.
Worked closely with developers to create custom applications and features using the Amazon Connect API, enhancing the functionality of the platform.
Experienced in identifying production bugs in the data using Stackdriver logs in GCP.
Worked on Docker container snapshots, attaching to a running container, removing images, managing Directory structures, and managing containers.
Collaborated cross-functionally with development and operations teams, sharing AppDynamics insights and best practices.
Deployed OpenShift on various cloud platforms, such as AWS, Azure, or Google Cloud.
Leveraged cloud-native services for enhanced scalability and resilience.
Created Docker images using a Dockerfile, worked on Docker container snapshots, removing images and managing Docker volumes.
Used the Cloud Shell SDK in GCP to configure services such as Dataproc, Storage, and BigQuery.
Used Kubernetes to manage containerized applications using its nodes, Config Maps, Selector, Services, and deployed application containers as Pods.
Launched Kubernetes to provide a platform for automating deployment, scaling and operations of application containers across clusters of hosts.
Implemented server automation with Continuous Integration and Continuous Deployment (CI/CD) tools like Jenkins/Maven for deployment and build management system.
Scheduled Jenkins jobs by integrating GitHub, Maven, and JFrog for automated builds using shell scripts.
Developed GIT hooks for the local repository, code commit and remote repository, code push functionality and worked on GitHub.
Integrated Jenkins/Helm/Kubernetes with GCP to perform semi-automated and automated releases to lower and production environments.
Made extensive use of the Cloud Shell SDK in GCP to configure and deploy services such as Cloud Dataproc, Google Cloud Storage, and BigQuery.
Wrote Jenkins Groovy global libraries to automate the CI/CD process using JFrog.
Used Jenkins to create a CI/CD pipeline for Artifactory using the plugin provided by JFrog.
Resolved performance issues with AEM platform and provided critical support in analyzing and resolving issues with long-running queries.
Worked closely with security teams by providing them logs for firewalls and VPCs and setting up rules in GCP for vulnerabilities.
Administered Jenkins for Continuous Integration and deployment into Tomcat/WebLogic Application Servers, testing in build environment and release to test team on scheduled time.
Solutions involved logging with ELK and Splunk, custom build packs, service-to-service security, and other common issues faced when Pivotal Cloud Foundry (PCF) is involved in a large-scale digital transformation.
Worked on installation of MongoDB RPMs and tar files and prepared YAML config files.
Managed SVN repositories for branching, merging, and tagging, and developed Shell/Groovy scripts for automation purposes.

Environment: AWS, Terraform, Docker, Kubernetes, Ansible Tower, UrbanCode Deploy, JFrog, SonarQube, Maven, GitHub, Tomcat, JBoss, WebSphere, WebLogic, MongoDB, Splunk.
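The Kubernetes deployment work in this section can be illustrated by a helper that builds a minimal Deployment manifest. The service name and image are placeholders; in practice manifests would live in Helm charts or version-controlled YAML rather than be generated inline.

```python
def deployment_manifest(name, image, replicas=2):
    """Build a minimal Kubernetes Deployment manifest as a dict."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

m = deployment_manifest("orders-svc", "registry.example.com/orders:1.4.2", replicas=3)
print(m["spec"]["replicas"])  # 3
```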


DXC Technologies, Madison, WI    March 2017 - June 2018
Cloud/DevOps

Roles and Responsibilities:
Involved in designing and deploying multiple applications utilizing the AWS stack and implemented AWS solutions like EC2, S3, IAM, EBS, Elastic Load Balancing (ELB), security groups, and Auto Scaling.
Responsible for managing IAM policies, providing access to different AWS resources, and designing and refining the workflows used to grant access.
Utilized AWS Lambda platform to upload data into AWS S3 buckets and to trigger other Lambda functions.
Configured AWS Multi Factor Authentication in IAM to implement 2 step authentication of user's access using Google Authenticator and AWS Virtual MFA.
Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT gateways to ensure successful deployment of web applications and database templates.
Worked on cloud automation using AWS CloudFormation templates.
Used and implemented Kubernetes to deploy, scale, load-balance, and manage Docker containers with multiple namespaced versions.
Built and Deployed Docker images on AWS ECS and automated the CI-CD pipeline.
Created ECS cluster for dev and prod environments and deployed dockerized application on ECS clusters.
Used existing cookbooks from the Chef Marketplace and customized the recipes for each VM.
Worked with Chef automation to create infrastructure and deploy application code changes autonomously.
Collaborated with development support teams to setup a CI/CD environment with the use of Jenkins.
Installed and configured Jenkins for CI/CD. Configured master and slaves to run various builds on different machines.
Maintained artifacts in binary repositories using JFrog Artifactory and pushed new artifacts by configuring the Jenkins Artifactory plugin in the Jenkins project.
Responsible for installing and administering SonarQube for code-quality checks and the Nexus repository, and generating reports for different projects.
Implemented a continuous delivery pipeline with Docker, Jenkins, GitHub, and AWS AMIs.
Worked on MongoDB database concepts such as locking, transactions, indexes, Sharding, replication, schema design.
Developed enhancements to MongoDB architecture to improve performance and scalability.
Created advanced dashboards, alerts, reports, advanced Splunk searches, and visualizations in Splunk Enterprise.
Configured Splunk to ingest logs from servers and applications, S3 (CloudTrail), and CloudWatch Logs.
Involved in testing of SOAP/REST services using SOAP UI and Groovy Script.
Responsible for writing the design specifications for generic and application-specific web services in Groovy on Grails.
Performed data-driven testing by using JDBC and Groovy script as a data source in SOAP UI and configured SQL queries to fetch data from the Oracle database.
Environment: AWS, Docker, ECS, Chef, JFrog, Splunk, SonarQube, MongoDB, Gradle, GitHub, Octopus Deploy, Groovy scripting.
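The Splunk alerting described above reduces to searching events and counting matches. A toy Python version, using an invented "host level message" log format purely for illustration:

```python
from collections import Counter

def error_counts(log_lines):
    """Count ERROR events per host from lines shaped like 'host level message'."""
    counts = Counter()
    for line in log_lines:
        host, level, _ = line.split(" ", 2)
        if level == "ERROR":
            counts[host] += 1
    return counts

logs = [
    "web01 INFO request served",
    "web01 ERROR upstream timeout",
    "db01 ERROR disk latency high",
    "web01 ERROR upstream timeout",
]
print(error_counts(logs)["web01"])  # 2
```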

Infomerica Inc, Cary, NC    Jan 2016 - Feb 2017
Infrastructure Engineer

Roles and Responsibilities:
Set up build and deployment automation for a Java-based project using Jenkins and Maven.
Developed and implemented an automated infrastructure using Puppet; wrote Puppet modules for installing and managing Java versions and wrote a Python plugin for collectd to write out metrics.
Gathered information from clients and provided consultation by performing POCs and setting up build/deployment and release management.
Implemented a continuous delivery framework using Jenkins and ANT in a Linux environment.
Created scripts to automate AWS services including web servers, ELB, CloudFront distributions, databases, AWS EC2, and database security groups.
Set up the Continuous Integration (CI) and Continuous Deployment (CD) process for the application using Jenkins.
Worked with Spring Framework modules (Spring IOC, Spring Boot, Spring Cloud) and third-party libraries.
Developed RESTful microservices using Spring REST and MVC for OSS services.
Implemented the application using Spring IOC, Spring MVC Framework, Spring Batch, Spring Boot and handled the security using Spring Security.
Developed REST architecture-based web services to facilitate communication between client and servers.
Responsible for building out and improving the reliability and performance of cloud applications and Cloud infrastructure deployed on Amazon Web Services.
Developed & Supported tools for integration, automated testing & Release.
Used Chef server and workstation to manage and configure nodes; installed Chef server and clients to pick up builds from the Git repository and deploy to target environments.
Used Gradle build tool to automate the process of generating Dockerfiles, building Docker Images and pushing them to Docker Private Registry.
Source code management is performed using GIT from master repository and knowledge on container management using Docker in creating images.
Environment: Java, Jenkins, MVC, Maven, Puppet, Spring IOC, Spring Boot, Spring Cloud, RESTful microservices, GitHub.
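The Gradle-driven Dockerfile generation mentioned above can be sketched in Python. The base image, jar name, and port are illustrative placeholders, not actual project values.

```python
def render_dockerfile(base_image, jar_name, port):
    """Render a minimal Dockerfile for a Java service as a string."""
    return "\n".join([
        f"FROM {base_image}",
        f"COPY build/libs/{jar_name} /app/app.jar",
        f"EXPOSE {port}",
        'ENTRYPOINT ["java", "-jar", "/app/app.jar"]',
    ])

dockerfile = render_dockerfile("eclipse-temurin:17-jre", "service-1.0.jar", 8080)
print(dockerfile)
```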

MedAptus Inc, Raleigh, NC    June 2015 - Jan 2016
Java/ Automation Engineer

Roles and Responsibilities:
Designed and developed microservices business components using Spring Boot.
Worked with Spring Framework modules (Spring IOC, Spring Boot, Spring Cloud) and third-party libraries.
Developed various helper classes needed following Core Java multi-threaded programming and Collection classes.
Designed and developed third-party payment services, REST services to offer users convenient payment methods using various APIs provided by various third-party payment processors based on OAuth 2.0 protocol.
Used Spring AOP Module to implement logging in the application to know the application status.
Designed and developed the End Points (Controllers), Business Layer, DAO Layer using Hibernate/JDBC template, using Spring IOC (Dependency Injection).
Developed the persistence layer using Hibernate Framework, created the POJO objects and mapped using Hibernate annotations and Transaction Management.
Implemented Web-Services to integrate between different applications components using Restful web services.
Used Amazon Identity Access Management (IAM) tool created groups & permissions for users to work collaboratively.
Worked on MongoDB database concepts such as locking, transactions, indexes, Sharding, replication, schema design, etc.
Developed an API to write XML documents from a database. Utilized XML and XSL Transformation for dynamic web-content and database connectivity.
Extensively used JSON to parse the data from server side to satisfy the business requirement.
Used the WebSphere server to route JMS queue messages to different business floors, configured routes in WebSphere, and used WebSphere for e-mail notifications.
Extensively used JUnit for unit testing, integration testing and production testing, configured and customized logs using Log4J.
Environment: Java/J2EE, Git, ANT, Maven, Nexus, Tomcat, VMware, Perl scripts, Jira, Shell scripts, Jenkins, Python.

Allvy Software Solutions    Jan 2013 - July 2014
Linux Admin
Roles and Responsibilities:
Administration of RHEL 4, 5 and Solaris 9, 10 which includes installation, testing, tuning, upgrading and loading patches, troubleshooting server issues.
Launching and configuring of Amazon EC2 Cloud Servers using AMI's (Linux/Ubuntu) and configuring the servers for specified applications
Installed and configured Oracle ASR across the entire Solaris 10 environment.
Actively involved in architecting the Puppet infrastructure to manage more than 4000 servers.
Wrote multiple manifests and customized facts for efficient management of the Puppet clients.
Creating, cloning Linux Virtual Machines, templates using VMware Virtual Client 3.5 and migrating servers across ESX hosts.
Jumpstarted Solaris servers; performed custom configuration, package and patch installation, and hardening. Created LDOMs, installed Solaris, created volumes, and installed packages.
Created Zettabyte File System (ZFS) filesystems on Solaris 10; created zones and zpools and added resources such as network and file systems.
Performed Solaris Live Upgrade from Solaris 8 and 9, and from older to newer Solaris 10 update versions, across the whole environment.
Established continuous integration (CI) practices and standards.
Set up Jenkins server and build jobs to provide continuous automated builds based on polling the Git source control system during the day and periodic scheduled builds overnight to support development needs using Jenkins, Git and JUnit.
Automation of configuration and management through Puppet.
Locked down local accounts to secure the environment using NIS.
Managed routine system backups and scheduled jobs, including disabling and enabling cron jobs, enabling system logging, and network logging of servers for maintenance, performance tuning, and testing.
Troubleshot tickets covering storage, network, and system-related issues, logging them in JIRA and BMC Remedy.
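The cron-based backup scheduling above can be illustrated with a tiny matcher for the minute and hour fields of a cron expression. This sketch supports only '*' and literal integers; real crontabs also allow ranges, lists, and step values.

```python
def cron_matches(expr, minute, hour):
    """Check the 'minute hour' prefix of a cron expression against a time.

    Only '*' and literal integers are supported in this sketch.
    """
    m_field, h_field = expr.split()[:2]

    def ok(field, value):
        return field == "*" or int(field) == value

    return ok(m_field, minute) and ok(h_field, hour)

print(cron_matches("0 2 * * *", minute=0, hour=2))   # True: nightly 02:00 job
print(cron_matches("0 2 * * *", minute=30, hour=2))  # False
```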
Education Details:
1. Master of Science in Information Technology Management from Campbellsville University Dec 2018
2. Master of Science in Information System Management from Virginia International University May 2016
3. Bachelor of Science in Electrical and Electronic from JNTUH May 2013