
Dinesh - SRE / DevOps Engineer
[email protected]
Location: New Brunswick, New Jersey, USA
Relocation:
Visa: H1-B
10+ years of professional IT experience spanning Site Reliability Engineering, cloud infrastructure management, incident management, configuration management, production support, and automation of provisioning, build, and deployment processes using CI/CD and IaC tools, with a focus on high availability and scalability in Azure, AWS, and GCP.
Lead SRE team members to create and maintain recovery procedures and RCAs in collaboration with other engineering teams.
Perform Incident Management and Change Management to maintain the continuous availability of all Cloud Infrastructure services
Ensure all SRE and operating procedures are maintained and executed.
Maintain a 24x7 production environment with a high level of service availability; perform quality reviews and manage operational issues.
Provide mentorship to a growing SRE team on core SRE principles and tools.
Ensure the highest level of uptime to meet customer SLAs by implementing system-wide corrections to prevent recurrence of issues.
Worked as the point of contact for all major incidents (P1 & P2).
Execute Major Incident Management, leading major incidents throughout lifecycle per the major incident process; provide regular updates and weekly reporting of major incidents
Strive for continuous improvement of overall major incident process and communication, including tracking and archiving all post-incident reports and incident and problem trend analysis
Administer the Major Incident Management (MIM) process and ensure adherence to process and escalation requirements within various support and delivery areas, assisting teams in establishing SLAs and KPIs
Experience with cloud computing platforms, such as AWS, Azure, and Google Cloud Platform.
Experience with OpenShift system integration, Infrastructure as a Service (IaaS), and cloud platforms.
Establish continuous process improvement cycles where the process performance, activities, roles and responsibilities, policies, procedures, and supporting technology are reviewed and enhanced where applicable.
Worked in a fast-paced 24x7x365 environment supporting multiple clients, with the ability to work a flexible schedule.
Expertise in AWS cloud administration, including services such as EC2, S3, EBS, VPC, ELB, custom AMIs, SNS, RDS, IAM, Security Groups, Route 53, Auto Scaling groups, CloudFront, CloudWatch, CloudTrail, CloudFormation, and AWS OpsWorks.
Azure cloud and DevOps consultant with concentrations in Azure IaaS/PaaS.
Strong experience with Terraform and Azure DevOps.
Hands-on experience with various GCP services such as Compute Engine, App Engine, Cloud Functions, Cloud Storage, Cloud Spanner, Cloud Pub/Sub, Cloud Identity and Access Management, Cloud VPN, Cloud Armor, Cloud CDN, and Load Balancing.
Strong expertise in DevOps concepts such as Continuous Integration (CI), Continuous Delivery (CD), Infrastructure as Code (IaC), and cloud computing.
Worked with different components of Azure's iPaaS offering, Service Bus, Functions, and Logic Apps, using connectors to create workflows.
Experience with Jenkins and Dockerfiles.
Experience using Python scripts to harden the security of on-premises and cloud-based systems.
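A minimal sketch of the kind of Python hardening check referred to above; the audited paths and rules are illustrative assumptions, not a specific client configuration:

    #!/usr/bin/env python3
    """Illustrative hardening audit: flag world-writable files and risky sshd settings."""
    import os
    import stat

    def world_writable(root="/etc"):
        # Walk the tree and report files writable by "other" users.
        findings = []
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    mode = os.stat(path).st_mode
                except OSError:
                    continue  # unreadable or vanished; skip
                if mode & stat.S_IWOTH:
                    findings.append(path)
        return findings

    def sshd_root_login(config="/etc/ssh/sshd_config"):
        # Return True if PermitRootLogin is explicitly enabled.
        try:
            with open(config) as fh:
                for line in fh:
                    if line.split()[:2] == ["PermitRootLogin", "yes"]:
                        return True
        except OSError:
            pass
        return False

    if __name__ == "__main__":
        for path in world_writable():
            print(f"world-writable: {path}")
        if sshd_root_login():
            print("sshd permits root login")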
Experience with Azure IaaS: Virtual Networks, Virtual Machines, Cloud Services, ExpressRoute, Traffic Manager, VPN, Load Balancing, and Auto-Scaling.
Experience with VM auto-scaling, scale sets, and load balancers for VMs.
Experience in provisioning and configuring RHEL, CentOS, and Ubuntu, and installing packages on Linux servers.
Proficient with Configuration Management tools & Build management tools such as Ansible, Maven.
Excellent working knowledge on Microsoft SQL Server 2005/2008/2014.
Experience working with source control tools such as Git and GitHub.
Configured and monitored CI/CD pipelines using Azure Repos.
Proficient in deployment automation using PowerShell scripting.
Experience in Kubernetes orchestration for Docker containers, which handles scheduling and manages workloads based on user-defined parameters.
Experience with Docker tools such as Docker Swarm and Docker Compose: Swarm provides native clustering for Docker containers, while Compose runs multi-container applications.
Experience with messaging systems such as Kafka, RabbitMQ, and ActiveMQ
Experience with big data technologies such as Hadoop and Spark
Experience with automation tools such as Ansible, Chef, and Puppet
Experience with monitoring and logging tools such as Prometheus and ELK Stack
Strong analytical, troubleshooting, and problem-solving skills.
Excellent communication and teamwork skills
Experience programming against IBM MQ in languages such as Java, Python, and C++.
Experience in general Systems Administration and Change Management, Software Configuration Management.
Worked with Docker on multiple cloud providers, from helping developers build and containerize their applications (CI/CD) to deploying them on public or private clouds.
Deployed OpenStack environments through automated tools such as Ansible, custom pipelines, and Terraform for infrastructure automation.
Proven ability to design, implement, and manage messaging systems for large-scale enterprise applications
Experience in virtualization, including installation and support of VMware and Windows servers.
Worked on Google Cloud Platform (GCP) services such as Compute Engine, Cloud Load Balancing, Cloud Storage, Cloud SQL, Stackdriver Monitoring, and Cloud Deployment Manager.
Experienced with setup, configuration, and maintenance of the ELK stack (Elasticsearch, Logstash, and Kibana) and OpenGrok source code search.
Good understanding of OpenShift platform in managing Docker containers and Kubernetes Clusters.
Set up GCP firewall rules to allow or deny traffic to and from VM instances based on specified configurations, and used GCP Cloud CDN (content delivery network) to deliver content from GCP cache locations, drastically improving user experience and latency.
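A sketch of the firewall-rule setup above, driving the gcloud CLI from Python; the rule names, tags, ports, and CIDR ranges are illustrative placeholders:

    #!/usr/bin/env python3
    """Create an allow/deny pair of GCP firewall rules via the gcloud CLI."""
    import subprocess

    def run(cmd):
        # Echo and execute a gcloud command, raising on failure.
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Allow HTTPS to instances tagged "web" from anywhere.
    run([
        "gcloud", "compute", "firewall-rules", "create", "allow-https-web",
        "--allow", "tcp:443",
        "--target-tags", "web",
        "--source-ranges", "0.0.0.0/0",
    ])

    # Deny SSH from a blocked range, with higher priority (lower number wins).
    run([
        "gcloud", "compute", "firewall-rules", "create", "deny-ssh-blocked",
        "--action", "DENY",
        "--rules", "tcp:22",
        "--source-ranges", "203.0.113.0/24",
        "--priority", "100",
    ])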
Responsible for designing and deploying new ELK clusters (Elasticsearch, Logstash, Kibana, Beats, Kafka, ZooKeeper, etc.).
Worked on GKE topology diagrams covering masters, worker nodes, RBAC, Helm, kubectl, and ingress controllers.
Continuous improvement of system and application monitoring and automation
Identify and automate manual workarounds and process improvements
Worked on OpenShift for container orchestration with Kubernetes, container storage, and automation to enhance container platform multi-tenancy; also worked on Kubernetes architecture and design, troubleshooting issues, and multi-regional deployment models and patterns for large-scale applications.
Proactively monitor the availability, latency, scalability, and efficiency of all services.
Perform periodic on-call duty as part of the SRE team
Participate & contribute in daily huddles and status meetings
Document task/workflow analysis and comments in a concise, effective manner so they can be easily understood by participants.
Developed build workflows using Gradle, GitLab CI, Docker, and OpenShift.
Develop and deliver client-specific operational training; monitor ongoing adherence to SOPs to ensure high quality
Work with the client team across shores to deliver against client requirements
Proactively identify training needs and provide necessary coaching as required to BOAs.
Proactively seek performance feedback to build & enhance knowledge
Build and leverage partnerships across shores to deliver against client requirements
Create robust documentation & SOPs for transition of activities between Ops and Shared Services, combined with ongoing coaching
Document task/ workflow analysis and comments in a concise, effective manner such that it can be easily understood by the broader team.


Technical Skills
Operating Systems: Ubuntu, CentOS, RHEL 7
SCM Tools: Git, GitHub, Azure Repos
Build Tools: Maven, MSBuild
Code Coverage Tool: SonarQube
Artifact Repository: Azure Artifacts
Continuous Integration Tools: Jenkins, Azure DevOps
Infrastructure as Code: Terraform
Issue/Bug Tracking Tools: JIRA, Azure Boards
Cloud: Azure, AWS, GCP
Monitoring Tools: Dynatrace, Splunk, Azure Monitor, Azure App Insights, New Relic, Kibana
Containerization Tools: Docker, Kubernetes

Professional Experience
Client: Pactera Edge, IL | Oct 2023 - Present
Role: SRE DevOps Engineer
Led efforts to restore service in a timely manner for critical business functions, application and infrastructure services as part of a 15-member remote team, comprised of professionals from several different countries and time zones.
Implemented and executed major incident management processes including invocation, ownership, escalation, communication and restoration of service.
Used ITIL best practices to support affected business units by managing, directing, coordinating and communicating across multiple technical and non-technical teams which include application, infrastructure, third party suppliers, and business units.
Designed, implemented, and managed IBM MQ messaging systems for large-scale enterprise applications.
Managed end-to-end delivery of multiple complex digital projects resulting in a 20% increase in customer satisfaction.
Led implementation of solutions from establishing project requirements and goals to solution go-live.
Managed and ran multiple delivery teams ensuring governance at top management level, conducting project meetings, and identifying successful and unsuccessful project factors and gaps.
Worked closely with multi-disciplined teams to drive estimates, delivery plans, and retrospectives, identified and managed engagement risks, and flagged major issues early.
Operated in a consultative role within the working team and provided hands-on management of the delivery stream.
Built and managed a high performing team of delivery managers resulting in a 30% increase in team productivity.
Hired and managed contract resources or agencies for specific projects or to augment team staffing.
Developed and managed the One-Data platform using Splunk queries, Prometheus, Grafana dashboards, and ServiceNow.
Developed and implemented a Kafka-based messaging system for a large-scale event-driven application
Optimized a RabbitMQ cluster to improve performance and scalability
Migrated a legacy messaging system to Kafka
Developed a monitoring and alerting system for a Kafka cluster
Resolved a critical Kafka issue that caused an outage
Worked as part of a team to deploy a new Kafka-based system
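A minimal sketch of the producer/consumer pattern behind the Kafka work above, using the kafka-python client; the broker address, topic, and consumer-group names are illustrative assumptions:

    """Publish and consume JSON events for an event-driven application."""
    import json
    from kafka import KafkaProducer, KafkaConsumer

    BROKERS = ["localhost:9092"]  # placeholder bootstrap servers
    TOPIC = "site-activity"       # placeholder topic

    # Producer: serialize events as JSON and publish them durably.
    producer = KafkaProducer(
        bootstrap_servers=BROKERS,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
        acks="all",  # wait for full in-sync-replica acknowledgement
    )
    producer.send(TOPIC, {"user": "u123", "action": "page_view"})
    producer.flush()

    # Consumer: read from the beginning of the topic as part of a consumer group.
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKERS,
        group_id="activity-processors",
        auto_offset_reset="earliest",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for message in consumer:
        print(message.offset, message.value)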
Web application performance baselining, analysis, tuning, capacity planning, and demand forecasting.
Enabled authentication in an in-house ASP.NET 4.7 web API using Azure AD B2C.
Migrated Java-based applications to Azure Web Apps.
Experience with messaging systems such as IBM MQ, ActiveMQ, and RabbitMQ
Deployed Java Core apps to Azure App Service
Deployed dashboards in Dynatrace for both operations and various lines of business.
Used Dynatrace to perform RCA and quickly drill down to correct error fault path and error hot spots.
Assist with the development and implementation of DevOps SRE solutions for large-scale distributed web applications across multiple tiers and data centers.
Experience troubleshooting problems and working with cross-functional teams for resolution.
Good knowledge and experience in using Splunk, Prometheus, Grafana, and Alert manager for logging and monitoring.
Developed automation scripts using shell scripting and Python for the Linux platform.
Supported patching activities and validated various Docker services and servers.
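A sketch of a post-patch validation pass over Docker services like the one above, using the Docker SDK for Python; the expectation that every container comes back running is an illustrative policy:

    """Verify containers are running (and healthy, where healthchecks exist) after patching."""
    import docker

    client = docker.from_env()

    for container in client.containers.list(all=True):
        state = container.attrs.get("State", {})
        health = state.get("Health", {}).get("Status", "no healthcheck")
        print(f"{container.name}: status={container.status}, health={health}")
        # Fail loudly if a patched service did not come back up.
        if container.status != "running":
            raise SystemExit(f"container {container.name} is not running after patching")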
Setup datadog monitoring across different servers and AWS services.
Monitor performance and history of infrastructure with tools such as CloudWatch, Datadog.
Designed and developed monitoring to improve the observability and reliability of applications using Splunk.
Responsible for administration of Splunk at CMS as a central logging platform, reviewing current logs and assisting ADOs (Application Delivery Organizations) with setting up logging hosted in the AWS cloud.
Helped improve the engineering quality, operational excellence, and evolution of Splunk Observability's web applications, web services, and APIs.
Configured the Microsoft add-on for Splunk to send data from Azure Event Hubs to Splunk.
Responsible for setting up monitoring using Splunk for capacity planning, system health, availability, and optimization of infrastructure.
Helped application teams onboard to Splunk and create dashboards, alerts, reports, etc.
Provide regular reports and dashboards to the engineering staff and Senior Leadership on the efficiency of core systems and SRE response/resolution times.
Extensively involved in infrastructure as code, execution plans, resource graph and change automation using Terraform.
Deliver 24x7 support for critical systems through the utilization of communication and alerting tools for fast response times, e.g., Jira, ITSM, and Slack.
Detail-oriented with the ability to catch minor errors which can result in major problems.
Designing and configuring patch management systems
Created Terraform templates for provisioning virtual networks, subnets, VM scale sets, load balancers, and NAT rules, and used the terraform graph command to visualize execution plans.
Experience with Azure Logic Apps using different triggers; worked with the ISE environment in Logic Apps.
Familiarity with hosted application service provider environments, including remote administration of servers and devices.
Created Logic Apps with different triggers, connectors for integrating the data from Workday to different destinations.
Experience in integrating non-standard logs and sources into Splunk, including SQL queries, scripted inputs, and custom parsers.
Extensive knowledge of multi-tier Splunk installations: indexers, intermediate and heavy forwarders, search heads, universal forwarders (UFs), and apps.
Splunk Application support to onboard various applications to the Splunk Command Center.
Configured and created roles, groups, users, and group members for various business groups; well-versed in Splunk access roles.
Design, build and manage the ELK (Elasticsearch, Logstash, and Kibana) cluster for centralized logging and search functionalities for the App.
Scripting & automating tasks using Python for backup, monitoring, and file processing.
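A minimal sketch of the backup-style Python automation mentioned above: archive a directory under a timestamped name and prune old archives. The paths and retention window are illustrative placeholders:

    """Timestamped tar.gz backup with simple age-based retention."""
    import os
    import tarfile
    import time
    from datetime import datetime

    SOURCE = "/var/app/data"   # placeholder directory to back up
    DEST = "/backups"          # placeholder backup location
    RETENTION_DAYS = 14        # placeholder retention window

    def create_backup():
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        archive = os.path.join(DEST, f"data-{stamp}.tar.gz")
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(SOURCE, arcname=os.path.basename(SOURCE))
        return archive

    def prune_old_backups():
        cutoff = time.time() - RETENTION_DAYS * 86400
        for name in os.listdir(DEST):
            path = os.path.join(DEST, name)
            if name.endswith(".tar.gz") and os.path.getmtime(path) < cutoff:
                os.remove(path)

    if __name__ == "__main__":
        print("wrote", create_backup())
        prune_old_backups()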
Created the AWS VPC network for the installed instances and configured Security Groups and Elastic IPs accordingly, using CloudFormation/Terraform as infrastructure as code.
Involved in designing and deploying a multitude of applications utilizing almost all of the AWS stack (including EC2, Route 53, S3, RDS, and DynamoDB), focusing on high availability, fault tolerance, auto scaling, load balancing, and faster static content delivery; built the stack using AWS CloudFormation with JSON or YAML.
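A sketch of driving a CloudFormation stack from Python with boto3, in the spirit of the IaC work above; the stack name and the one-resource template are illustrative assumptions:

    """Create a CloudFormation stack from an inline JSON template and wait for completion."""
    import json
    import boto3

    TEMPLATE = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"VersioningConfiguration": {"Status": "Enabled"}},
            }
        },
    }

    cfn = boto3.client("cloudformation")
    cfn.create_stack(
        StackName="demo-app-stack",  # placeholder stack name
        TemplateBody=json.dumps(TEMPLATE),
    )
    # Block until the stack reaches CREATE_COMPLETE (or raise on failure).
    cfn.get_waiter("stack_create_complete").wait(StackName="demo-app-stack")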


Client: TATA CONSULTANCY SERVICES, Hyderabad, India | May 2022 - Sep 2023
Role: AWS DevOps Engineer
Responsibilities:
Administered Jenkins, proposed and implemented branching strategy suitable for agile/scrum development in a fast-paced engineering environment.
Used build triggers to create a schedule for Jenkins to build periodically or on a specific date/time.
Integrated the Docker container orchestration framework using Kubernetes by creating pods, ConfigMaps, and deployments.
Designed, implemented, and managed ActiveMQ and RabbitMQ messaging systems for large-scale enterprise applications.
Used EKS clusters and maintained pods/containers with autoscaling, health-check probes, and resource allocation using automated YAML manifests.
Worked closely with developers to maintain healthy environment by establishing and applying appropriate branching, labelling/naming conventions with GitHub repos.
Experience in designing Cloud Formation Templates (CFTs) to create EC2 instances, RDS, CloudWatch, S3, ELB, Auto-Scaling Groups, Route53 record sets and other services on AWS.
Wrote Chef cookbooks for various DB configurations to modify and optimize end-product configuration, converting production support scripts to Chef recipes and provisioning AWS servers using Chef recipes.
Implemented Chef recipes for deployment builds on internal data center servers; reused and modified the same recipes to deploy directly onto Amazon EC2 instances.
Established Chef best-practice approaches to systems deployment with tools such as Vagrant, Berkshelf, and Test Kitchen, treating each Chef cookbook as an independently version-controlled unit of software deployment.
Responsible for CI/CD process implementation using Jenkins along with Shell scripts to automate routine jobs.
Deployed EC2 instances, adding EBS block-level storage volumes to increase the availability of the website.
Implemented AWS CodePipeline and created CloudFormation JSON templates and Terraform configurations for infrastructure as code.
Experience in Setting up the build and deployment automation for Terraform scripts using Jenkins.
Managed Red Hat Linux user accounts, groups, directories, file permissions, and sudo rules.
Worked with Chef attributes, templates, recipes, and files to manage configurations across various nodes using Ruby.
Worked with the CloudWatch service to monitor and maintain infrastructure; created alerts for any unusual activity in the containers.
Used the Splunk logging system to surface essential logs when finding problems and created dashboards to monitor application stability.
Deployed the EAR and WAR archives into WebLogic and Apache Servers.
Used PostgreSQL to control job flow, persist data (business current view) and to create delta files using SQL.
Improved agility and operational performance by organizing more efficient workflows and business processes.

Client: ROBERT BOSCH, Bangalore | Mar 2021 - Apr 2022
Role: DevOps Support
Responsibilities:

Responsible for the overall incident management activities within Cingular's Information Technology organization.
Performed troubleshooting activities during high-severity outage situations alongside multi-disciplined technical staff members.
Interacted with application users to gather specifics on the impact and nature of the situation.
Provided on-going updates to Executive Management on the progress to resolution.
Initiated escalations to Executive Management, vendors, and/or other groups where appropriate.
Worked closely with Network teams and System Administrators to assist with the development and implementation of monitoring tools, troubleshooting documentation, system/application documentation and overall process improvement.
Specialized in over 200 mission-critical applications; worked closely with the support, development, and change management teams to remain current on changes and activities for those applications.
Understanding the entire build process precisely during knowledge transfer.
Managing the projects in git repository.
Responsible for Maintaining/Administration of Git Version control tool.
Performing branching, tagging, and merging, and taking backups of the code.
Creating and granting access permissions to developers.
Maintaining Maven build scripts and the Tomcat application server, creating artifacts using Maven.
Performing deployment of WAR files.
Troubleshoot Build issues and coordinate with development team on resolving those build issues.
Responsible for Maintaining/Administration of Azure DevOps Continuous Integration Tool.
Creating pipelines using Azure DevOps and deploying infrastructure in Azure using Terraform.
Experience with Kubernetes, Azure DevOps, Azure Boards, and Azure Repos.
Acted as a build and release engineer, deploying services via Azure DevOps pipelines; created and maintained pipelines to manage the IaC for all applications.
Managed OpenShift masters and nodes, handling upgrades and decommissioning nodes from active participation by evacuating and then upgrading them.
Deployed a Dockerized RabbitMQ in OpenShift to take advantage of OpenShift's auto-scaling capabilities with a RabbitMQ cluster.
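A connectivity smoke test for a RabbitMQ deployment like the one above, using the pika client; the host, queue name, and credentials are illustrative placeholders:

    """Round-trip one message through RabbitMQ to confirm the cluster is reachable."""
    import pika

    params = pika.ConnectionParameters(
        host="rabbitmq.example.internal",  # placeholder service hostname
        credentials=pika.PlainCredentials("guest", "guest"),
    )
    connection = pika.BlockingConnection(params)
    channel = connection.channel()

    # Declare a durable test queue, publish one message, and read it back.
    channel.queue_declare(queue="smoke-test", durable=True)
    channel.basic_publish(exchange="", routing_key="smoke-test", body=b"ping")
    method, _props, body = channel.basic_get(queue="smoke-test", auto_ack=True)
    assert body == b"ping", "round-trip through RabbitMQ failed"
    connection.close()
    print("RabbitMQ smoke test passed")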
Work to continuously improve speed, efficiency and scalability of OpenShift system
Managing the OpenShift cluster that includes scaling up and down the AWS app nodes
Design, installation, configuration, and administration of Linux 5, 6, and 7 servers, supporting both OpenShift Enterprise and non-OpenShift environments.
Key team player on OpenShift: created new projects and services for load balancing, added them to routes for external access, troubleshot pods through SSH and logs, and modified BuildConfigs, templates, ImageStreams, etc.
Used OpenShift as a virtualized PaaS provider, useful in automating the provisioning of commodity computing resources for cost and performance efficiency.
Working on setting up the OpenShift platform in Azure (ARO).
Collaborated with cross-functional teams (firewall, database, and application teams) in the execution of this project.
Monitor builds and provide proactive support to resolve any build issues.
Performing deployments to multiple environments like Dev, QA, UAT & Production environments.
Create data retention policies, perform index administration, maintenance and optimization for Splunk
Designed core scripts to automate Splunk maintenance and alerting tasks.
Integrate new log sources and data correlation rules into Splunk.
Responsible for designing, developing, testing, troubleshooting, deploying and maintaining Splunk solutions, reporting, alerting and dashboards.
Worked on customization of existing Python scripts of some of the internal applications.
Configured and created roles, groups, users, and group members for various business groups; well-versed in Splunk access roles.
Experience in Splunk search construction, with the ability to create well-structured search queries that minimize performance impact.
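A sketch of the search-construction principle above, run through the splunk-sdk Python library: constrain index, sourcetype, and time range early and aggregate with stats instead of returning raw events. Connection details, index, and field names are illustrative assumptions:

    """Run a narrowly-scoped oneshot Splunk search and print aggregated results."""
    import splunklib.client as client
    import splunklib.results as results

    service = client.connect(
        host="splunk.example.internal",  # placeholder search head
        port=8089,
        username="admin",
        password="changeme",
    )

    # Narrow the search as early as possible; aggregate rather than return raw events.
    query = (
        "search index=web sourcetype=access_combined earliest=-15m "
        "| stats count by status"
    )
    stream = service.jobs.oneshot(query, output_mode="json")
    for result in results.JSONResultsReader(stream):
        print(result)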
Automated Azure cloud deployments using Terraform.
Experience with Splunk Searching and Reporting modules, Knowledge Objects, Administration, Add-Ons, Dashboards, Clustering, and Forwarder Management.
Created and maintained the Python deployment scripts for Tomcat web application servers.
Involved in code reviews, using SonarQube to validate violations.
Directed the design and global implementation of a complex Dynatrace deep-diagnostics solution across multiple platforms and environments while meeting global SLAs and OLAs; successfully deployed Dynatrace to five data centers in three countries, encompassing five environments, each with multiple collectors.
AppDynamics installation, administration, upgrades, and troubleshooting of console and database issues.
Worked on AppDynamics monitoring of large-scale JEE and Node.js applications.
Identified critical applications and monitored system resource utilization and JVM heap size using AppDynamics.
Established infrastructure and service monitoring using Prometheus and Grafana
Created alerts & monitoring dashboards using Prometheus & Grafana for all microservices deployed in Azure.
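A minimal sketch of the service-side instrumentation that feeds Prometheus/Grafana dashboards and alerts like those above, using prometheus_client; the metric and endpoint names are illustrative assumptions:

    """Expose request-count and latency metrics for Prometheus to scrape."""
    import random
    import time
    from prometheus_client import Counter, Histogram, start_http_server

    REQUESTS = Counter("app_requests_total", "Total requests handled", ["endpoint"])
    LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

    @LATENCY.time()
    def handle_request():
        time.sleep(random.uniform(0.01, 0.1))  # simulated work
        REQUESTS.labels(endpoint="/api/v1/items").inc()

    if __name__ == "__main__":
        start_http_server(8000)  # Prometheus scrapes http://host:8000/metrics
        while True:
            handle_request()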
Created alarms and trigger points in CloudWatch based on thresholds, monitoring server performance, CPU utilization, and disk usage.
Utilized AWS CloudWatch services to monitor the environment for operational and performance metrics during load testing.
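A sketch of a threshold alarm like those described above, created with boto3; the alarm name, dimensions, threshold, and SNS topic ARN are illustrative placeholders:

    """Alarm when the web tier averages over 80% CPU for two 5-minute periods."""
    import boto3

    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(
        AlarmName="high-cpu-web-tier",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
        Statistic="Average",
        Period=300,               # evaluate 5-minute averages
        EvaluationPeriods=2,      # two consecutive breaches before alarming
        Threshold=80.0,           # percent CPU
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )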
Performed AWS Cloud administration managing EC2 instances, S3, EBS, SES, CloudWatch, RedShift, Route 53, RDS and SNS services.
Experience with migration from a datacenter to Amazon Web Services (AWS).
Used Kafka as the messaging system and Spark for processing large data sets.
Used Kafka to collect website activity data for stream processing.
Working on migration of on-premises data to AWS RDS (MS SQL Server) databases.
Configured AWS Identity and Access Management (IAM) Groups and Users for improved login authentication.
Built servers by importing volumes, launching EC2 and RDS instances, creating security groups, and configuring auto scaling and load balancers (ELBs) in the defined virtual private cloud.
Worked on Google Cloud Platform (GCP) services such as Compute Engine, Cloud Functions, Cloud Load Balancing, Cloud Storage, Cloud SQL, Autoscaler, Google Kubernetes Engine (GKE), Cloud Bigtable, Stackdriver Monitoring, and Cloud Deployment Manager.

Client: Innowave GDU Pvt Ltd, Bangalore, India | Jul 2020 - Mar 2021
Role: Cloud DevOps Engineer
Responsibilities:
Responsible for installation, configuration, access control, and configuring jobs in Jenkins; integrated Maven with Git to manage and deploy project-related tags.
Installed and configured Git and communicated with the repositories in GitHub.
Used Jenkins as the continuous integration tool: created new jobs, managed required plugins, configured jobs by selecting the required source code management tool, build triggers, build system, and post-build actions, scheduled automatic builds, and sent out build reports.
Proficient in creating Docker images using Dockerfiles; worked on Docker container snapshots, removing images, and managing Docker volumes, and implemented a Docker automation solution for the CI/CD model.
Implement Nightly Builds & Milestone Builds using Jenkins.
Designed and developed Kibana dashboards and visualizations.
Configured Jobs for scheduled Builds to DEV environments.
Perform automated deployments using Jenkins
Install Plug-ins on need basis in Jenkins
Handling all phases of Build activities.
Configured Kibana settings and plugins as per the requirements of the project.
Experienced in implementing security measures such as authentication, authorization, and encryption to ensure that the Kibana platform is secure
Troubleshoot issues related to data ingestion, data processing, and data visualization in Kibana.
Experience with Kibana for data visualization and for monitoring cluster health and performance through X-Pack.
Monitoring daily builds using continuous integration tool Hudson.
Debugging compilation and runtime issues in build failures.
Notify appropriate stakeholders of broken builds and enable successful builds.
Copy the built package to the appropriate team for testing.
Developing and maintaining scripts to automate the build and packaging, as well as automating time-consuming, error-prone tasks associated with the build.
Creating and maintaining Continuous Build Process documentation.
Participated in all phases of Release activities.
Collaborate with QA and Development managers on release schedules and content of builds, releases, and patches
Validate / smoke test the release to ensure it is operating as expected.
Ensure completeness of release notes and publish release package communication.
Responsible for releasing the product internally.
Implemented, managed, and orchestrated Docker Container Clusters using Kubernetes.
Worked on cluster creation for minion/worker nodes in Kubernetes.
Set up Git repositories and assigned SSH keys for the team.
Configured New Relic for application performance monitoring.

Client: IBM (IBM Software Labs), Bangalore, India | Sep 2018 - Jun 2020
Role: System Admin & DevOps
Responsibilities:
Worked as a Linux administrator in an IT infrastructure environment, providing server administration, application administration, and network solutions to support business objectives.
Configured the hardware and OS (Solaris 10 and SUSE) on servers.
Installed, configured and updated Red Hat 6 & Windows NT/2000 Systems using Jumpstart and Kickstart.
Created and maintained Virtual machines in VMware ESX.
Worked on different VMware products like VMware workstation, GSX/VMware server, VMware player, VMware Converter.
Used Wireshark to Capture and analyze TCP, UDP, and IP Packets.
Managed UNIX Infrastructure and EMC storage involving maintenance of the servers and troubleshooting Problems in the environment.
Managed routine system backups and scheduled jobs, including disabling and enabling cron jobs.
Served as communication conduit between programmers and network operations central staff.
Planning and implementing the configuration changes to the servers adhering with ITIL change management process.
Responsible for maintaining the management applications and tools used to monitor, diagnose and troubleshoot the Data Network Infrastructure.
Configured services such as DNS, NIS, NFS, LDAP, Sendmail, FTP, and remote access on Linux.
Troubleshooting System, Network, and Operating System issues.
Administration and troubleshooting skills on Disks and File Systems, Users and Groups.
Maintained the NFS server with automount and monitored daily and weekly backup dumps.
Working 24/7 on call for application and system support.
Environment: Solaris 10, SUSE, RHEL, VMware ESX, Wireshark, TCP, UDP, IP, DNS, NIS, NFS, LDAP, FTP.

DevOps Engineer | Maharshi Electronics Systems, Gujarat, India | May 2018 - Aug 2018

Create and maintain branching, merging, and tagging across each production release, and perform builds using Jenkins continuous integration with Node.js and npm packages.
Set up the Git repository in GitHub with a branching and merging model.
Created a Dockerfile for running our application in a Docker container.
Deployed the application on Tomcat or Nginx servers and ran applications in Docker containers.
Created a continuous delivery pipeline from the ground up built with Git and Jenkins for Target's Finance Integration Team.
Worked on CI and CD automation setup from development to production environments.
Worked with developers to ensure new environments both met their requirements and conformed to industry-standard best practices.
Wrote batch scripts and shell scripts for automating jobs and deployments.
Performed root cause analysis of failures and documented bugs and fixes.


DevOps Engineer | Devmode, Bangalore, India | Jun 2016 - Apr 2018

Set up the Git repository in GitHub with a branching and merging model.
Built, managed, and continuously improved the build infrastructure for software development engineering teams, including implementation of build scripts, continuous integration infrastructure, and deployment.
Integrated Maven with GIT to manage and deploy project related tags.
Responsible for Branching and merging the code as per the schedule.
Communicating with developers for build plan and build failures.
Having experience in writing shell scripts for automation.
Reported to a DevOps manager, who coordinated with teams outside of the development group.

Software Engineer | NEC, Delhi, India | Jun 2013 - May 2016

Extensive exposure to configuration management policies and practices with regard to the SDLC, along with automation using shell, Python, and Perl scripting.
Hands-on exposure to the Git version control system.
Created branches in Git, implementing a parallel development process.
Worked with Maven, creating artifacts from source code and deploying them internally to a Nexus repository.
Built applications using Chef/Puppet scripting and Ant with Ivy builds.
Extensive experience in creation and management of Chef POC environment
Experience installing packages using YUM and RPM on *nix systems.
Installed Jenkins on Linux machines and created master and slave configurations to implement multiple parallel builds.
Created PowerShell scripts to patch DLLs across various servers and to automate database deployments (DACPACs) using SSDT.
Application deployment and configuration for enterprise scale infrastructure using Jenkins
Performed DevOps for Linux, Mac, and Windows platforms.
Extensive experience with Nagios monitoring system as well as other monitoring tools
Responsible for creation and management of Chef Cookbooks
Implemented Configuration management and change management policies
