
Bhadrinath Thota
Sr. DevOps Engineer
(360)851-9682
[email protected]
Location: Atlanta, Georgia, USA
Relocation: No
Visa: H1B


PROFESSIONAL SUMMARY:
9+ years of experience in the IT industry comprising DevOps, cloud computing, infrastructure configuration management, Linux systems administration, and Software Configuration Management (SCM).
Primary duties include designing a combination of automation, tooling, and process to achieve Continuous Integration/Continuous Delivery for various applications by integrating tools such as Jenkins, Git, Jira, Nexus/Artifactory, Puppet/Chef, and Maven/Gradle; testing frameworks such as JUnit, Selenium, Cucumber, SoapUI, and JMeter; and app servers such as WebLogic, WebSphere, and JBoss; along with establishing code-promotion processes to move code through the Dev, QA, SIT, Stage, and Prod environments.
Experience working with Apache Hadoop, Kafka, Spark, and Logstash.
Worked with Apache Kafka for high-throughput publishing and subscribing, with disk structures that provide constant performance even with many terabytes of stored messages.
Used Apache Spark to process large data volumes rapidly.
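The split-and-aggregate model behind such Hadoop/Spark jobs can be sketched with a toy map-reduce word count (plain Python, no cluster required; the input lines are made up for illustration):

```python
# Toy map/reduce word count -- illustrates the programming model only;
# a real Hadoop or Spark job distributes these phases across a cluster.
from collections import Counter
from itertools import chain

def map_phase(lines):
    # map: each input line becomes a list of (word, 1) pairs
    return [[(w, 1) for w in line.split()] for line in lines]

def reduce_phase(mapped):
    # reduce: sum the counts per word across all mapped records
    counts = Counter()
    for word, n in chain.from_iterable(mapped):
        counts[word] += n
    return dict(counts)

word_counts = reduce_phase(map_phase(["to be or not", "to be"]))
```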
Terraform is an infrastructure as code tool that lets you build, change, and version cloud and on-prem resources safely and efficiently.
Experience in AWS operations and automation using the CLI and APIs for services such as EC2, EBS, S3, Glacier, VPC, Route 53, CloudFormation, CloudFront, OpsWorks, RDS, DynamoDB, ElastiCache, ELB, Auto Scaling, etc.
Azure DevOps can be used for virtually any code or application development project you wish to complete.
Amazon DynamoDB is a fully managed, serverless NoSQL database, supporting key-value and document data models, designed to run high-performance applications at any scale on AWS.
GitLab's DevOps platform is a single application powered by a cohesive user interface, agnostic of self-managed or SaaS deployment.
GitLab is a web-based Git repository manager that provides free open and private repositories, issue-tracking capabilities, and wikis.
You have the independence to create, delete, or update resources; Terraform tracks the state of the infrastructure to keep deployed resources consistent with the configuration.
Good experience in maintaining a Hybrid IT environment configuration encompassing many aspects of Linux System Administration like Automating OS Installations, RAID, Security Hardening, Capacity Planning, VM patching etc.
Exposure to Mesos, Marathon & Zookeeper cluster environment for application deployments & Docker containers.
Expertise in using tools like Chef/Puppet to treat Infrastructure as code.
Experience writing various Chef cookbooks, both for infrastructure configuration and for deployment automation, using roles, environments, secure data bags with vault, attributes/resources, ERB templates, etc.
Experience writing various custom Ansible playbooks and modules for deployment orchestration.
AWS networking services allow customers to isolate their cloud infrastructure, scale up workload requests, and even connect their physical network to their private virtual networks.
AWS Direct Connect is a networking service that provides an alternative to using the internet to connect to AWS.
AWS Lambda is a serverless compute service that can run code in response to predetermined events or conditions and automatically manage all the computing resources required for those processes.
Lambda functions can be triggered by events from other AWS services like S3, DynamoDB
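A minimal sketch of such an S3-triggered Lambda handler; the event shape follows the documented S3 notification structure, while the bucket and key names are placeholders:

```python
# Hypothetical S3-triggered Lambda handler -- bucket/key values are
# illustrative, and the handler is invoked locally with a sample event.
import json

def lambda_handler(event, context):
    """Collect each S3 object referenced in the event and return a summary."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        processed.append(f"s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps({"processed": processed})}

# Local invocation with a minimal sample event (no AWS account needed):
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "demo-bucket"},
                "object": {"key": "logs/app.log"}}}
    ]
}
result = lambda_handler(sample_event, None)
```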
SRE is a practice that is focused on the design and implementation of systems that are highly resilient, scalable, and reliable.
CloudBees can be classified as a tool in the "Platform as a Service" category, while Jenkins is grouped under "Continuous Integration".
CloudBees Jenkins Operations Center is an on-premises solution for building Jenkins infrastructure at scale; CloudBees also operates a multi-tenant hosted Jenkins cloud service.
DevOps in Google Cloud Platform (GCP) reduces complexities and increases the efficiency of development and operations workflows.
Harness Platform can manage several types of deployments in the ecosystem such as Helm.
Enabling canary deployments with automatic verification is core to the Harness Platform.
GCP provides sample templates and models to monitor and enforce infrastructure compliance.
A GCP DevOps Engineer is a professional who specializes in deploying and managing applications on the Google Cloud Platform (GCP).
Site reliability engineering (SRE) primarily boosts system availability and reliability, whereas DevOps accelerates development and delivery while enforcing continuity across the team.
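The error-budget arithmetic at the heart of SRE can be illustrated in a few lines; the SLO figures below are examples, not project values:

```python
# Illustrative SRE error-budget calculation: an availability target (SLO)
# implies a fixed budget of allowed downtime over a rolling window.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of downtime allowed in the window for a given SLO, e.g. 0.999."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

# A 99.9% SLO over 30 days allows about 43.2 minutes of downtime.
budget = error_budget_minutes(0.999)
```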
An Adobe Experience Manager (AEM) deployment usually consists of multiple environments used for different purposes on different levels: development, quality assurance, staging, and production.
AEM gives them all-inclusive control over presenting content in digital channels like the Web, mobile apps, and social media.
Python is a high-level, interpreted, interactive, object-oriented scripting language.
With its open platform, digital asset management (DAM) capabilities, and a wide variety of industry-specific solutions, AEM makes it easy to create integrated digital content management.
Groovy is a scripting language with Java-like syntax for the Java platform.
The Groovy scripting language simplifies the authoring of code by employing dot-separated notation, yet still supporting syntax to manipulate collections, Strings, and JavaBeans.
Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
Organizations use Databricks to process, store, clean, share, analyze, model, and monetize their datasets with solutions from BI to machine learning.
Databricks ETL is a data and AI solution that organizations can use to accelerate the performance and functionality of ETL pipelines.
Splunk provides observability across the entire DevSecOps practice and delivers actionable insights for development, operations and security teams.
Splunk product enables real-time DevOps monitoring across all stages of the delivery life cycle, helping you to deliver better apps and more business impact faster.
Ruby is a popular programming language in DevOps, automation and website deployment.
Ruby is a server-side scripting language similar to Python and Perl.
Ruby can be used to write Common Gateway Interface (CGI) scripts.
Using IAM technology significantly reduces your risk of data breaches.
EKS offers even more detailed networking and security features with AWS Identity and Access Management (IAM).
Argo CD is specifically suited to teams focused on Kubernetes deployments who want a streamlined, focused tool.
Argo CD is an open-source GitOps continuous delivery tool: it monitors your cluster and your declaratively defined infrastructure stored in a Git repository, and resolves differences between the two, effectively automating application deployment.
Argo Workflows runs all pipeline steps on resources inside the Kubernetes cluster, with no external dependencies.
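The reconcile loop Argo CD performs can be illustrated with a toy diff between desired (Git) and live (cluster) state; the resources below are stand-ins, not real Kubernetes manifests:

```python
# Toy GitOps reconcile step: compare desired state (from Git) with live
# cluster state and report what must change. Real Argo CD diffs full
# Kubernetes manifests; these dicts are simplified stand-ins.

def diff_state(desired: dict, live: dict) -> dict:
    """Return the actions needed to make live state match desired state."""
    to_create = {k: v for k, v in desired.items() if k not in live}
    to_update = {k: v for k, v in desired.items()
                 if k in live and live[k] != v}
    to_delete = [k for k in live if k not in desired]
    return {"create": to_create, "update": to_update, "delete": to_delete}

desired = {"web": {"replicas": 3}, "worker": {"replicas": 2}}
live = {"web": {"replicas": 2}, "legacy": {"replicas": 1}}
plan = diff_state(desired, live)
```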
OpenShift empowers easy integration with leading CI/CD platforms, AI-powered performance monitoring solutions.
Azure Red Hat OpenShift provides a flexible, self-service deployment of fully managed OpenShift clusters.
Involved in designing and developing web pages using HTML 5, CSS3, JavaScript, Bootstrap, React JS, Redux, Node JS, and Mongo DB.
Responsible for React UI and architecture. Building components library, including Tree, Slide-View, and Table Grid.
Used React-JS to build the UI components, and developed filters to display different dimensions of data and font size modifiers.
EKS is a managed service provided by AWS that simplifies the deployment and management of Kubernetes clusters, while Kubernetes is an open-source container orchestration platform.
While Docker is a container runtime, Kubernetes is a platform for running and managing containers from many container runtimes.
Used OpenStack APIs and SDKs for Nova, Neutron, Cinder, Swift, Glance, Keystone, etc. to manage cloud resources.
Knowledge of puppet as Configuration management tool, to automate repetitive tasks, quickly deploy critical applications, and proactively manage change.
Used various components like Hiera, MCollective, PuppetDB, and Facter while writing manifests and modules in Puppet.
IBM Operational Decision Manager (ODM) is a full-featured Decision Management platform focused on deploying High-Performance Decision Solutions Managed by the Business.
GitOps gives you tools and a framework to take DevOps practices, like collaboration, CI/CD, and version control, and apply them to infrastructure automation and application deployment.
GitOps is a practice that helps manage software development and infrastructure provisioning through Git-based repositories.
UNIX systems also have a graphical user interface (GUI) similar to Microsoft Windows which provides an easy to use environment.
UNIX is required for operations that aren't covered by a graphical program, or when no windowing interface is available, for example in a telnet session.
Wrote custom puppet modules for managing the full application stack (Tomcat/httpd/MySQL/Java) and streamlined email infrastructure
Experience in application development, debugging, implementation, supporting dev team, testing of Oracle based ERP using SQL and Database Triggers.
Experience writing build scripts with tools like Ant and Maven and Gradle.
Experience in Configuring and Administering Repository Managers like Nexus, Artifactory.
Extensive experience in configuration and deployment automation with app servers such as Oracle WebLogic, WebSphere, and JBoss; web servers such as Apache and Tomcat; and modern web containers such as Nginx.
Knowledge of databases like MySQL, Oracle, MSSQL, MongoDB, Dynamo DB.
Experience in setting up Baselines, Branching, Patches, Merging and Automation processes using Shell/bash and Batch Scripts.
Experience with build management tools Ant and Maven for writing build.xml and pom.xml files.
Strong analytical, diagnostics, troubleshooting skills to consistently deliver productive technological solutions.
Experience in using bug tracking systems like JIRA, Remedy, HP Quality Center and IBM Clear Quest.


TECHNICAL SKILLS:
Cloud Computing: Amazon Web Services (EC2, IAM, Elastic Beanstalk, Elastic Load Balancer (ELB), RDS (MySQL), S3, Glacier, Route 53, SES, VPC, Lambda, EKS, AWS networking, etc.), Azure, GCP, OpenStack, Cloud Foundry, OpenShift, Kubernetes, Terraform, Git, Bitbucket, YAML, Splunk, Harness, Argo CD, Prometheus, GitOps, Grafana, CI/CD, GitLab, CloudBees, AEM, IBM ODM, React, SRE, UNIX, Databricks, VMware, MSSQL Server, Microservices.
Configuration Management: Ant, Maven, Git, SVN, ClearCase, Jenkins, Puppet, Chef, Ansible, Sonar, Nexus
Tools/Web Servers: WebSphere Application Server, WebLogic, JBoss, Apache, Tomcat, Nagios, Kafka, Logstash, Spark, Docker, Hadoop, Glue, Azure Red Hat OpenShift.
Scripting/Languages: Shell scripting, Python, Ruby, PowerShell, Groovy, AngularJS.
Database: Sybase, Oracle, MySQL, DB2, Cassandra, MongoDB, CockroachDB, YugabyteDB, DynamoDB.
Networking/ Protocols: DNS, TCP/IP, FTP, HTTPS, SSH, SFTP, SCP, SSL, ARP, DHCP and POP3
Operating Systems: Sun Solaris, Linux (Red Hat, SUSE), AIX, VMware ESX, Windows NT, CentOS, Ubuntu.

PROFESSIONAL EXPERIENCE:
AT&T, Atlanta. Dec 2021 - Till Date.
Sr. DevOps Engineer
Responsibilities:
Installed, Configured and Administered WebSphere Application Server ND/XD on Red Hat Linux platform.
Used WSINSTANCE to create multiple WebSphere instances from the command line.
Used Flume and Kafka to aggregate log data into HDFS.
Developed a stream filtering system using Spark streaming on top of Apache Kafka.
Designed a system using Kafka to auto-scale the backend servers based on event throughput.
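A hedged sketch of such a throughput-based scaling rule; the per-server capacity and the bounds are illustrative, not production values:

```python
# Illustrative autoscaling rule: size the backend fleet from observed
# Kafka event throughput. capacity_per_server and the min/max bounds
# are placeholder numbers for the sketch.
import math

def desired_replicas(events_per_sec: float,
                     capacity_per_server: float = 500.0,
                     min_servers: int = 2,
                     max_servers: int = 20) -> int:
    """Scale server count to observed throughput, clamped to safe bounds."""
    needed = math.ceil(events_per_sec / capacity_per_server)
    return max(min_servers, min(max_servers, needed))
```

Clamping to a minimum keeps the service available at idle, and the maximum caps runaway cost if throughput metrics spike.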
Used Cassandra to support contracts and services that are available from third parties.
Used XML Web services with SOAP protocol for transferring data between different applications.
Organized roles by creating a separate role for each installation, plus a Common role for all shared activities, included via meta/main.yml.
Kubernetes creates and manages containers on cloud-based server systems, helping DevOps teams reduce the burden of infrastructure management.
Used Ansible Tower for scheduling playbooks and used GIT repository to store our playbooks.
Used pre-tasks and post-tasks to perform regular health checks and to tail logs across clusters.
Also wrote custom modules to control system resources such as services and packages, and to handle executing system commands.
Automated the front-end platform into highly scalable, consistent, repeatable infrastructure using Vagrant, Jenkins, and CloudFormation.
Experience using Ansible playbooks, static and dynamic inventories, and automating existing cloud environments.
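An Ansible dynamic inventory is just an executable that prints JSON; a minimal sketch follows, where the hosts are hard-coded placeholders (a real script would query a cloud API such as EC2):

```python
#!/usr/bin/env python3
# Minimal Ansible dynamic-inventory sketch. Host names and vars below are
# made up for illustration; Ansible calls the script with --list for the
# full inventory or --host <name> for per-host vars.
import json
import sys

def build_inventory():
    return {
        "webservers": {
            "hosts": ["web1.example.com", "web2.example.com"],
            "vars": {"http_port": 80},
        },
        "_meta": {"hostvars": {"web1.example.com": {"ansible_user": "deploy"}}},
    }

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--host":
        # Per-host vars already live under _meta, so return an empty dict.
        print(json.dumps({}))
    else:
        print(json.dumps(build_inventory()))
```

Because hostvars are returned under `_meta`, Ansible never needs to call the script once per host.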
Basic knowledge of Networking is a must to use AWS as all the operations will involve your networking skills.
Ruby and Python have unique advantages that make them suitable for DevOps.
While Ruby boasts an elegant syntax and a powerful web development framework in Ruby on Rails, Python's extensive ecosystem, greater versatility, and higher popularity make it the more widely adopted language for DevOps.
Computer networking refers to interconnected computing devices that can exchange data and share resources with each other.
Written playbooks and roles to manage configurations of and deployments to remote machines.
Installed the "htop" utility, an improved, interactive version of the top system process monitor.
Azure Databricks is a cloud-based service that is deployed on the Azure cloud platform.
Use the Databricks platform to build and deploy data engineering workflows, machine learning models, analytics dashboards, and more.
Databricks provides a web-based platform for working with Spark that offers automated cluster management and IPython-style notebooks.
AWS Glue integrates with other AWS services, such as Amazon S3, Amazon Redshift, and AWS Lambda, among others, enabling users to leverage these services for their data processing workflows.
AWS Glue is a cloud service that prepares data for analysis through automated extract, transform and load (ETL) processes.
Lambda runs your code on high-availability compute infrastructure and performs all administration of the compute resources, including server and operating system maintenance.
AWS Lambda is an event-driven, serverless computing platform provided by Amazon as a part of Amazon Web Services.
Apache Cassandra is designed for organizations, across various industries, with large data sets that are constantly changing.
Cassandra is a Java-based system that can be managed and monitored via Java Management Extensions (JMX).
Groovy is built on Java, making it simple to learn for programmers with some experience.
SRE, which is a set of practices and tools, is designed to make sure that all parts of an organization are working together to deliver high-quality software.
SRE focuses on the system engineer position in core infrastructure and is more appropriate in a production setting.
Site Reliability Engineering (SRE) focuses on designing and implementing highly scalable, resilient, and dependable systems.
Terraform is an open-source infrastructure-as-code (IaC) software tool that allows DevOps engineers to programmatically provision the physical resources an application requires to run.
Infrastructure-as-code (IaC) tools automate the management of IT infrastructure using programming languages and automation tooling.
Terraform by HashiCorp is an open-source DevOps tool. It allows you to build, manage, and define infrastructure across cloud providers.
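Terraform also accepts JSON-syntax configuration (*.tf.json), so a config can be generated programmatically; a minimal sketch where the resource name, AMI ID, and instance type are placeholders:

```python
# Generate a Terraform JSON-syntax config for an EC2 instance. The values
# here ("web", the AMI ID, t3.micro) are illustrative placeholders; the
# rendered string would be written to main.tf.json for terraform to read.
import json

def ec2_instance_config(name: str, ami: str, instance_type: str) -> dict:
    return {
        "resource": {
            "aws_instance": {
                name: {"ami": ami, "instance_type": instance_type}
            }
        }
    }

config = ec2_instance_config("web", "ami-12345678", "t3.micro")
rendered = json.dumps(config, indent=2)  # write this to main.tf.json
```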
Kubernetes automates operational tasks of container management and includes built-in commands for deploying applications, rolling out changes to your applications, scaling your applications up and down to fit changing needs, monitoring your applications, and more making it easier to manage applications.
GitLab is a DevOps tool used for source code management. It is a free and open-source version control system used to handle small to very large projects efficiently.
GitLab and Jenkins are two popular tools used for continuous integration and continuous development/deployment features.
Chef is a Configuration management DevOps tool that manages the infrastructure by writing code rather than using a manual process so that it can be automated, tested and deployed very easily.
Chef has Client-server architecture and it supports multiple platforms like Windows, Ubuntu, Centos, and Solaris etc.
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service that makes it easy for you to run Kubernetes on AWS and on-premises.
Used React JS to abstract away from the DOM function, giving a simpler programming model and better performance.
Utilized React JS and its two-way data binding feature to enhance user feedback functionality.
Created typescript reusable components and services to consume REST APIs using Component based architecture provided by React JS lib.
Ran a Jenkins Manager within an Amazon Elastic Kubernetes Service (EKS) pod; this lets Amazon EKS spawn dynamic Jenkins agents to perform application and infrastructure deployments.
EKS is a Kubernetes service with a fully managed control plane. Amazon Elastic Compute Cloud (Amazon EC2): EC2 is a web service that provides secure, resizable compute capacity in the cloud.
AWS offers Amazon Elastic Kubernetes Service (EKS), a managed service that makes it easy for you to use Kubernetes on AWS without needing to install and operate the Kubernetes control plane.
Working with other members of the development team to design, develop and implement features, bug fixes, and other improvements for the Ansible core software.
Wrote Python scripts for Ansible on AWS to generate inventory and push deployments, managing configurations of multiple servers with Ansible.
Wrote multiple manifests and customized facts for efficient management of the Ansible clients.
Red Hat OpenShift Pipelines is a Kubernetes-native CI/CD solution based on Tekton.
OpenShift helps build applications and host apps on the OpenShift server with the ability to modify and deploy.
Wrote Python scripts to automate rotation of multiple logs from web servers.
Wrote Python scripts to create test cases during two-week sprints using agile methodology.
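A minimal sketch of the log-rotation idea mentioned above; the paths are throwaway temporary files, not real server logs:

```python
# Sketch of log rotation: gzip the current log to <name>.<epoch>.gz and
# truncate the original in place. Demonstrated on a temp file only.
import gzip
import os
import shutil
import tempfile
import time

def rotate(log_path: str) -> str:
    """Compress the current log and truncate the original, returning the archive path."""
    stamp = int(time.time())
    rotated = f"{log_path}.{stamp}.gz"
    with open(log_path, "rb") as src, gzip.open(rotated, "wb") as dst:
        shutil.copyfileobj(src, dst)
    open(log_path, "w").close()  # truncate in place so the writer keeps its fd
    return rotated

# Demonstrate on a throwaway file:
tmpdir = tempfile.mkdtemp()
log_file = os.path.join(tmpdir, "app.log")
with open(log_file, "w") as f:
    f.write("line1\nline2\n")
archive = rotate(log_file)
```

Truncating rather than renaming avoids breaking a process that holds the log file descriptor open.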
Launched Amazon EC2 cloud instances using Amazon Machine Images (Linux/Ubuntu) and configured launched instances for specific applications.
Working with AWS services such as EC2, VPC, RDS, CloudWatch, CloudFront, Route53 etc.
Focus on continuous integration and deployment, promoting Enterprise Solutions to target environments.
Expertise in Docker containers and its configuration based on requirement, maintaining the Docker hub for container images.
Configuring and Networking of Virtual Private Cloud (VPC) and Cloud Foundry.
Wrote CloudFormation templates and deployed AWS resources with them.
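Since a CloudFormation template is just structured JSON/YAML, it can be built programmatically; a minimal sketch with a placeholder bucket (the logical ID and bucket name are made up):

```python
# Build a CloudFormation template for an S3 bucket as a Python dict.
# "BackupBucket" and the bucket name are illustrative; the rendered JSON
# would be submitted via the AWS CLI or boto3 in practice.
import json

def s3_bucket_template(logical_id: str, bucket_name: str) -> dict:
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            logical_id: {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
        "Outputs": {
            "BucketArn": {"Value": {"Fn::GetAtt": [logical_id, "Arn"]}}
        },
    }

template_body = json.dumps(s3_bucket_template("BackupBucket", "my-backup-bucket"))
```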
Created S3 buckets, managed S3 bucket policies, and utilized S3 and Glacier for storage and backup on AWS.
Implemented Git mirror for SVN repository, which enables users to use both Git and SVN.
Implemented Continuous Integration using Jenkins and GIT.
Deployed Java/J2EE applications through Tomcat application servers.
Worked with a complex environment on Red Hat Linux and Windows Servers while ensuring that these systems adhere to organizational standards and policies.
Maintain and track inventory using Jenkins and set alerts when the servers are full and need attention.
CloudBees, the Enterprise Jenkins Company, is the continuous delivery (CD) leader.
CloudBees provides solutions that enable IT organizations to respond rapidly to the software delivery needs of the business.
Strong in building Object Oriented applications using Java, writing Shell Scripts on UNIX
Generated Perl & UNIX scripts for build activities in QA, Staging and Production environments
Used JIRA for issue tracking; triaged issues and maintained bugs in JIRA.
IBM Operational Decision Manager automates the implementation of business policies of your organization by managing millions of business rules and enabling rapid business change.
Manage the integration of new software projects and products into the build environment.
Work with product development to resolve build-related issues in all projects.
GCP is a public cloud vendor that offers a suite of computing services, from data management to delivering web and video over the internet to AI and machine learning tools.
IAM basic roles are the most limited form of GCP roles and include owners, editors, and viewers.
Identity and access management (IAM) is a framework of business processes, policies and technologies that facilitates the management of electronic or digital identities.
DynamoDB is from the AWS ecosystem and can only be used within AWS.
DynamoDB's low latency and automatic scaling capabilities make it a good choice for high-traffic applications that require fast and reliable access to data.
Experience in Azure Synapse ARM template deployment pipelines.
Experience in Source control management.
Harness's Continuous Delivery-as-a-Service platform provides a simple and secure way to automate deployments.
Harness is a continuous integration and continuous delivery (CI/CD) platform for cloud and on-premise projects.
Experience in creating and managing YAML based Azure DevOps pipelines
Automating applications infrastructure and application code repositories using Azure DevOps
Involved in scrum ceremonies (stand-up, grooming, planning, demo/review and retrospective) with the teams to ensure successful project forecasting and realistic commitments.
The AEM developer finally gets started with the actual front-end development typically using CSS, HTML, JavaScript, jQuery.
VMware software can run programs and operating systems, store data, connect to networks, and do other computing functions, and requires maintenance such as updates and system monitoring.
Deployed Splunk Enterprise on the AWS Cloud to gain the flexibility of AWS infrastructure, tailoring the Splunk Enterprise deployment to our needs and modifying it on demand as those needs change.
Argo CD has the ability to automatically sync an application when it detects differences between the desired manifests in Git, and the live state in the cluster.
The wide adoption and popularity are because Argo CD helps developers manage infrastructure and applications lifecycle in one platform.
Argo CD can pull updated code from Git repositories and deploy it directly to Kubernetes resources.
GitOps is an operational framework that applies DevOps practices such as continuous integration/continuous delivery (CI/CD) and version control to infrastructure automation.
The GitOps service of Google Cloud Platform, part of Anthos, is a tool to sync configurations such as deployments, Helm charts, and ConfigMaps across multiple clusters.
Hadoop is a scalable and cost-effective alternative to the Big Data management packages.
Hadoop is an open source framework based on Java that manages the storage and processing of large amounts of data for applications.
Adobe Experience Manager (AEM) is a content management system that optimises the authoring, management, and delivery of content and digital media.
Migrating the data to respective DEV and QA Oracle database before doing code roll-out
Troubleshooting various production related outages.
Delivery of API platform Testing and Automation Framework Development on Cloud Platform.
Functional and performance testing of SaaS and PaaS API platforms built on Java and an open-source stack.
Experience in MLOps Pipelines.
Skilled in supporting WebSphere, WebLogic, and JBoss application Server.
Migrated applications from WebSphere Application Server to JBoss Application Server.
Involved in creating different Apigee API documents as part of the project requirements.
Used almost all the Apigee Edge policies while implementing endpoints on the gateway.
Used JavaScript and AngularJS directives for validation purposes.
Utilized Angular JS framework to bind HTML template (views) to JavaScript object (models).
Implemented Angular Controllers to maintain each view data.
Implemented Angular Service calls using Angular Factory.
Environment: AWS, GCP, Azure, Automation, Python, AEM, IAM, Angular, CloudBees, Kubernetes, RHEL, Databricks, CentOS, Ubuntu, Glue, React JS, Ruby, IBM ODM, Splunk, Argo CD, UNIX, CI/CD pipelines, Elastic, VMware, Jenkins, Jira, SRE, DynamoDB, GitLab, Lambda, Tomcat, CockroachDB, Cassandra, IaC, YugabyteDB, Chef, OpenShift, GitOps, AWS networking, Cloud Foundry, WebSphere Application Server, Terraform, Hadoop, Ansible, JBoss, Red Hat, Linux, Harness, Sonar, Nexus, API platforms, Apigee, SoapUI, Kafka, Docker, Mesos, Marathon, Groovy, EKS.

Anthem, Indianapolis. July 2020 - Nov 2021.
DevOps Engineer
Design EC2 instance architecture to meet high availability application architecture and security parameters.
Developed processes, tools, and automation for Jenkins-based build systems and software build delivery.
Experience in deploying and maintaining private cloud infrastructure of OpenStack and Cloud Foundry.
Ansible is a powerful open-source automation tool used for configuration management, application deployment, and orchestration in DevOps workflows.
Ansible is used for automating IT operations such as deploying applications, managing configurations, scaling infrastructure, and other activities involving many repetitive tasks.
Terraform Cloud provides an API for a subset of its features.
Terraform Cloud can be fully operated via API, CLI, and UI, which allows organizations to easily integrate it into their existing CI/CD pipelines.
Terraform is a software tool that allows DevOps engineers to programmatically provision the physical resources an application requires to run; infrastructure as code is an IT practice that manages an application's underlying IT infrastructure through programming.
Proficiency in Neutron L2 and L3 agents, Cinder Storage/block storage, Swift Storage/object storage, file, CEPH Storage, Ubuntu, Canonical Stack, OpenStack APIs, OpenStack Dashboard, cloud ecosystems, IaaS, PaaS, DPaaS, FWaaS, LBaaS, OPNFV, SDN, marketplace, private, public and hybrid clouds, along with various drivers and plugins such as Open Daylight ML2 Mechanism Driver, Open Flow Agent, VMware NSX Network Virtualization Platform Plugin, GlusterFS driver and NFS driver.
Global DevOps market is anticipated to increase throughout the forecast period at a CAGR of 19.7%, from an estimated USD 10.4 billion in 2023 to USD 25.5 billion by 2028.
Experience in Marketplace using platforms as a service.
Expert in Analyzing and developing integrated solutions.
Designed an advanced platform to manage cloud services and the rest of the infrastructure.
Used Python scripts to design data visualization to present current impact and growth.
Python is a computer programming language often used to build websites and software, automate tasks, and conduct data analysis.
Worked on managing packages and configuration across multiple nodes.
Designed and built a continuous integration and deployment framework for Chef Code using test driven development.
AWS Lambda doesn't include most commonly used packages/libraries (pandas, requests), and a standard "pip install pandas" won't work inside Lambda; dependencies must be bundled with the deployment package or provided via layers.
AWS Lambda can be considered a managed container service, similar in spirit to EC2 Container Service (ECS), that uses containers to run a piece of code representing your application.
Yugabyte Structured Query Language (YSQL) is an ANSI SQL, fully-relational API that is best fit for scale-out RDBMS applications that need ultra resilience, massive write scalability, and geographic data distribution.
YugabyteDB beats CockroachDB in the context of multiple developer benefits including higher performance for large data sizes, better PostgreSQL compatibility, more flexible geo-distributed deployment options as well as higher data density.
Worked on Hadoop environment for automating common tasks.
Also worked on Apache Hadoop; used Kafka as the messaging system and Spark to process large data sets.
Apache Cassandra is a NoSQL database ideal for high-speed, online transactional data.
Cassandra delivers the continuous availability (zero downtime), high performance, and linear scalability that modern applications require, while also offering operational simplicity and effortless replication across multiple data centers and geographies.
AEM makes it easy to create integrated digital content management.
This has resulted in a demand for AEM experts who are able to handle the AEM development process, making this a suitable career option for software developers.
Understanding AEM and building the best AEM website is as challenging as building a difficult product within a short time.
GitLab CI/CD is the part of GitLab that you use for all of the continuous methods (Continuous Integration, Delivery, and Deployment).
The GitLab API allows you to perform many of the actions you typically do when using the user interface.
Documented release, builds and source control processes and plans.
Provided deployment support for several releases in finance and corporate business area.
Documented work done, skills required, and issues mitigated for future projects.
Evaluate Chef and Puppet framework and tools to automate the cloud deployment and operations.
Puppet module creation, integration, and testing (key technologies: MongoDB, Go continuous delivery engine, Puppet).
Creating snapshots and amazon machine images (AMIs) of the instances for backup and creating clone instances.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
IaC tools such as Terraform, CloudFormation, and Ansible enable the declarative definition and orchestration of infrastructure resources, ensuring consistent and reproducible deployments across environments.
Terraform and Ansible are two major IaC tools that help enterprises create configurations and scale them easily.
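As an illustration of that declarative style, a minimal Terraform sketch provisioning an EC2 instance behind a security group (the region, AMI ID, and resource names are placeholders, not from any specific project):

```hcl
# Declares an AWS security group and an EC2 instance; all values are illustrative.
provider "aws" {
  region = "us-east-1"
}

resource "aws_security_group" "web_sg" {
  name        = "web-sg"
  description = "Allow inbound HTTP"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "web" {
  ami                    = "ami-12345678" # placeholder AMI ID
  instance_type          = "t3.micro"
  vpc_security_group_ids = [aws_security_group.web_sg.id]

  tags = {
    Name = "web-server"
  }
}
```

Running `terraform plan` against such a file shows the proposed changes before `terraform apply` creates them, which is what makes deployments reproducible across environments.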
Creating CI/CD Pipelines using Azure DevOps.
Set up private networks and subnets using Virtual Private Cloud (VPC), created security groups to associate with the networks, and configured scalability for application servers using the command-line interface.
Setting up and administering DNS system in AWS using Route53.
Used XML Web services with SOAP protocol for transferring data between different applications.
Well versed with user and plugin management for Jenkins.
Develop Docker based infrastructure - Mesos, Kubernetes.
Docker is a container runtime, Kubernetes is a platform for running and managing containers from many container runtimes.
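A minimal Kubernetes Deployment manifest illustrates that split: Kubernetes manages replicas of a container image that a runtime such as Docker built (the image, names, and replica count below are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                  # Kubernetes keeps three pods running
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: nginx:1.25    # any OCI image built with Docker works here
          ports:
            - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml`, Kubernetes schedules the pods and replaces any that fail, regardless of which container runtime executes them.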
Used marathon and mesos to check the application status and its log.
Worked on Apache Mesos-Marathon for Resource Management.
Migrated VCS, Oracle RAC, and Red Hat clusters with GFS (Global File System) servers across the data center, including configuration of new IP, VIP, and private IP addresses.
Experience with Azure Data Factory (ADF) pipelines.
Experience administering and operating infrastructure on Azure public or private cloud.
Managed users and groups using AWS Identity and Access Management (IAM).
The IAM work unit is responsible for how users within an organization are given an identity, and how that identity is protected.
IAM systems help administrators grant appropriate access privileges to users, allowing them to use tools and information critical to their job.
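Those access privileges are expressed in AWS as JSON policy documents; a minimal sketch granting read-only access to a single S3 bucket (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

Attached to a group, a policy like this gives every member exactly the bucket access their job requires and nothing more.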
Implemented continuous integration using Hudson, which tracks source code changes.
Supported development engineers with configuration management issues and assisted senior engineers and project leaders with technical issues.
Integrated Kafka with Flume in a sandbox environment using a Kafka source and Kafka sink.
Worked with application teams to install operating system, Hadoop updates, patches, and version upgrades as required. Integrated Kafka with Spark in a sandbox environment.
Responsible for Installing, setup and Configuring Apache Kafka and Apache Zookeeper.
Used Kafka to collect Website activity and Stream processing.
Experience migrating workloads to Azure.
Splunk Web is written in AJAX, Python, and XML, among other languages, to create an intuitive and easy-to-use graphical user interface.
Implemented standardized processes for testing React applications, leveraging tools like Karma with Mocha for unit and integration testing.
Created reusable TypeScript components and services to consume REST APIs using the component-based architecture provided by React.
Operational Decision Manager automates the implementation of business policies of your organization by using Decision Center and Decision Server.
ODM for developers deploys these applications in a single container, which can be used for in-house or web-based purposes.
Developed reusable components in React JS, leveraging Custom Directives, Filters, Services, and Factories, tailored to project requirements.
SREs' main role is to deal with operational problems such as production failures, infrastructure issues (disk, memory), security, and monitoring.
Site reliability engineers (SRE) can use various testing techniques to ensure software operations are as failure-free as possible for a specified time in a specified environment.
SREs also have the potential to deliver unique value for organizations that need to support complex software architectures and environments, like Kubernetes.
EKS (Elastic Kubernetes Service) is a fully managed Kubernetes service, while ECS (Elastic Container Service) is a fully managed container orchestration service.
Amazon EKS is a managed service, meaning that AWS takes care of much of the underlying infrastructure and configuration.
Azure Red Hat OpenShift is a managed service that offers OpenShift clusters on Microsoft Azure.
Each Azure Red Hat OpenShift cluster is single-tenant (dedicated to a single customer).
Creation and setup of new environments/branches such as development, testing and production
Worked closely with development team and SQA team for product releases
Worked in an environment with a good defect tracking system through the use of Manual Test and Test Manager.
It is jointly engineered and operated by Microsoft and Red Hat with an integrated support experience.
DevOps in Google Cloud Platform (GCP) reduces complexities and increases the efficiency of development and operations workflows.
VMware lets you run more applications using fewer physical servers.
Azure VMware Solution is a Microsoft service, verified by VMware, that runs on Azure infrastructure.
The UNIX operating system is made up of three parts; the kernel, the shell and the programs.
When the process rm myfile has finished running, the shell then returns the UNIX prompt % to the user, indicating that it is waiting for further commands.
Argo CD is an open-source Continuous Deployment solution that provides Kubernetes-first support.
Argo Events is an event-driven workflow automation framework and dependency manager that helps you manage Kubernetes resources, Argo Workflows, and serverless workloads on events from a variety of sources.
Argo CD is a GitOps platform that reads environment configurations and deploys them automatically to a Kubernetes cluster.
Linux environments give developers more control over their choices.
Linux Mint has established itself as one of the best distros for beginner Linux users.
GCP provides sample templates and models to monitor and enforce infrastructure compliance.
Linux is simple and only performs the actions we command it to perform.
These networking devices use a system of rules, called communications protocols, to transmit information over physical or wireless technologies.
AWS networking services offer a wide range of scalable, on-demand networking options, available with a few clicks of the mouse.
Responsible for WebSphere installation, configuration, maintenance and patching.
Configured WebSphere Application Server with DB2 database.
Configured virtual hosts and transports for WebSphere application servers.
High availability testing (both Data platform and API platform), Failover and Operations testing for multi region Amazon AWS product implementations.
Wrote queries to create, alter, insert into, and delete elements from lists, sets, and maps in DataStax Cassandra.
Responsible for building scalable distributed data solutions using DataStax Cassandra.
Ran many performance tests using the Cassandra-stress tool in order to measure and improve the read and write performance of the cluster.
Created data-models for customer data using the Cassandra Query Language.
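The collection operations above can be sketched in CQL (the table and column names are hypothetical, not taken from any specific data model):

```sql
-- Table with list, set, and map collection columns
CREATE TABLE customers (
  id     uuid PRIMARY KEY,
  emails set<text>,
  orders list<text>,
  prefs  map<text, text>
);

-- Insert, then alter collections in place
INSERT INTO customers (id, emails, orders, prefs)
VALUES (uuid(), {'primary@example.com'}, ['order-1'], {'theme': 'dark'});

UPDATE customers SET emails = emails + {'backup@example.com'} WHERE id = ?;  -- add to set
UPDATE customers SET prefs['theme'] = 'light' WHERE id = ?;                  -- update map entry
DELETE orders[0] FROM customers WHERE id = ?;                                -- remove list element
```

The `?` markers are bind parameters supplied by the driver at execution time.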
Environment: WebSphere Application Server, Dell Servers, AWS, SRE, Chef, Azure, Python, Lambda, EKS, AEM, Databricks, Automation, Kubernetes, GCP, Hadoop, React JS, Splunk, IBM ODM, UNIX, VMware, CI/CD, Argo CD, Pipelines, Red Hat Linux 6, OpenShift, Ansible, IAM, DynamoDB, AWS Networking, Oracle RAC, Marketplace, Ubuntu, Kafka, Elastic, Puppet, Tomcat Server, Nginx, Groovy, API Platforms, Cloud Foundry, Terraform, CockroachDB, GitLab, Cassandra DB, YugabyteDB, Apigee, SOAP UI, Docker, Mesos, Marathon.

First Midwest Bancorp, IL. Sep 2019 - May 2020.
DevOps Engineer
Confidential, Bellevue, WA
Responsibilities:
Involved in DevOps migration/automation processes for build and deploy systems.
Wrote various Puppet manifest files with Hiera, customized functions, and defined resources.
Integrated Puppet modules using the MCollective framework and Jenkins jobs.
Extensive knowledge of writing and deploying Puppet modules.
Configured Apache webserver in the Linux AWS Cloud environment using Puppet automation.
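A minimal sketch of such a Puppet manifest, with the listen port looked up from Hiera (the class and key names are illustrative, not from the actual modules):

```puppet
# Installs Apache, manages a config fragment, and keeps the service running.
class profile::apache (
  Integer $listen_port = lookup('profile::apache::listen_port', Integer, 'first', 80),
) {
  package { 'httpd':
    ensure => installed,
  }

  file { '/etc/httpd/conf.d/listen.conf':
    ensure  => file,
    content => "Listen ${listen_port}\n",
    require => Package['httpd'],
    notify  => Service['httpd'],  # restart Apache when the config changes
  }

  service { 'httpd':
    ensure => running,
    enable => true,
  }
}
```

The `lookup()` default means nodes without a Hiera entry still converge, while per-environment data overrides the port without touching the manifest.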
Hands-on experience in Amazon Web Services (AWS) provisioning and good knowledge of AWS services such as EC2, S3, Glacier, ELB (Elastic Load Balancing), RDS, SNS, SWF, and EBS.
Application deployment and data migration on AWS Redshift.
Migrated data to AWS Redshift using CloverETL.
Utilized CloudWatch to monitor resources such as EC2 CPU and memory, and designed high-availability applications on AWS across Availability Zones.
Worked hands-on to create automated, containerized cloud application platforms (PAAS), and design and implement DevOps processes that use those platforms.
Configured AWS Identity and Access Management (IAM) Groups and Users for improved login authentication.
Created S3 buckets, managed S3 bucket policies, and utilized S3 and Glacier for archival storage and backup on AWS.
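Archival to Glacier is typically driven by an S3 lifecycle configuration; a sketch (the rule ID, prefix, and day counts are placeholders):

```json
{
  "Rules": [
    {
      "ID": "archive-old-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```

Objects under `backups/` move to Glacier after 90 days and are deleted after a year, without any cron job or manual copy.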
Monitored the UAT/Production environments for downtime issues by performing regular cron job updates on servers.
Built end-to-end CI/CD pipelines in Jenkins to retrieve code, compile applications, perform tests, and push build artifacts to a Nexus artifact repository.
Created job chains with Jenkins Job Builder, Parameterized Triggers, and target host deployments. Utilized many Jenkins plugins and Jenkins API.
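Such a pipeline can be sketched as a declarative Jenkinsfile (the stage commands and the Nexus URL are hypothetical, not from the actual jobs):

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }          // pull the code from SCM
        }
        stage('Build') {
            steps { sh 'mvn -B clean package' }
        }
        stage('Test') {
            steps { sh 'mvn -B test' }
        }
        stage('Publish') {
            steps {
                // Push the built artifact to a Nexus repository (URL is a placeholder)
                sh 'mvn -B deploy -DaltDeploymentRepository=releases::default::https://nexus.example.com/repository/releases'
            }
        }
    }
}
```

Checking the Jenkinsfile into the repository versions the pipeline alongside the code it builds.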
Installed and Administered on GIT Server, migrated Projects from Subversion to GIT.
Migrated repositories from an SVN server to internal GitLab servers.
Developed build and deployment scripts using Maven as build tool in Jenkins to move from one environment to other environments.
Wrote Maven scripts and shell scripts for end-to-end build and deployment automation.
Wrote Bash scripts to perform routine tasks, assisted users with problems, and worked on MySQL, MongoDB, and SQL optimization.
Adopted Puppet for the Automation of the environment and worked on Installation and configuration of Puppet.
Deploy and monitor scalable infrastructure on Amazon web services (AWS)& configuration management using Puppet.
Experience building and deploying pipelines defined in YAML.
Designed and implemented fully automated server build management, monitoring, and deployment using DevOps technologies like Puppet.
Experience in working with GIT to store the code and integrated it to Puppet.
Integration of Maven/Nexus, Jenkins, UrbanCode Deploy with Patterns/Release, Git, Confluence, Jira, and Cloud Foundry.
Red Hat OpenShift is a cloud-based Kubernetes platform that helps developers build applications.
OpenShift comes with an integrated service that makes authentication and authorization a simple process.
Experience in Agile/Scrum software development methodologies, practices, and the software development lifecycle (SDLC).
Worked with a complex environment on Red Hat Linux and Windows Servers while ensuring that these systems adhere to organizational standards and policies.
Developed procedures to unify, streamline, and automate application development and deployment with Linux container technology using Docker.
Virtualized servers using Docker for test and dev environment needs, and automated configuration using Docker containers.
Set up and configured Nagios, improved monitoring in Nagios, and developed custom plugins.
Involved in setting up JIRA as defect tracking system and configured various workflows, customizations and plugins for JIRA bug/issue tracker.
Installed and configured open-source Puppet with the Foreman Puppet console.
Deploying applications on multiple Tomcat servers and maintaining Load balancing, high availability and fail-over functionality.
Expertise in Docker containers and its configuration based on requirement, maintaining the Docker hub for container images.
Environment: AWS, Puppet, Java/J2EE, Jenkins, JIRA, Docker, CI/CD, IBM ODM, Pipelines, Linux, OpenShift, Maven, Git, Python, Ruby, Cloud Foundry, Bash Script, Nexus, SonarQube.


Vinfo soft Solutions Pvt Ltd, Hyderabad. Feb 2017- July 2019.
DevOps Engineer
Responsibilities:
Maintained source code repositories in Subversion and Git.
Automated deployment of builds to different environments using Anthill Pro.
Created and set up an automated nightly build environment for Java projects using Maven.
Maintain and track inventory using Jenkins and set alerts when the servers are full and need attention.
Writing/Modifying various Manifests and applying them on the nodes using Puppet.
Developed scripts using BASH and BATCH files for Automation of Activities and builds.
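A small Bash sketch of that kind of automation, here a hypothetical disk-usage alert check (the function name, threshold, and messages are illustrative):

```shell
#!/usr/bin/env bash
# Hypothetical disk-usage check: prints ALERT when usage exceeds a threshold.
# check_disk_usage USAGE_PERCENT THRESHOLD_PERCENT
check_disk_usage() {
  local usage=$1 threshold=$2
  if [ "$usage" -gt "$threshold" ]; then
    echo "ALERT: disk usage ${usage}% exceeds threshold ${threshold}%"
  else
    echo "OK: disk usage ${usage}% within threshold ${threshold}%"
  fi
}

# In a real cron job, usage would come from df, e.g.:
#   usage=$(df --output=pcent / | tail -1 | tr -dc '0-9')
check_disk_usage 95 90   # prints: ALERT: disk usage 95% exceeds threshold 90%
```

Scheduled from cron, a script like this is how simple threshold monitoring gets automated without a full monitoring stack.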
Working closely with Web Administrators to set up an automated deployment for SharePoint applications using SVN and Git Tools.
Worked with the automated scripts for the Build and Deployment of the applications
Monitor and administer the automated build and continuous integration process to ensure correct build execution, and facilitate the resolution of build failures
Experience with Docker containers and Git; deployed secure code.
Knowledge of Agile and Scrum software development methodologies and the SDLC.
Maintained configuration files for each application for the purpose of building and installing on different environments
Suggested and implemented the process of nightly builds and auto deployments, for fast-paced applications
Work closely with Business analysts and Project managers to meet release and build schedule deadlines.
Experience in Automation, Source control, configuration management.
Created user-defined types (UDTs) to store specialized data structures in Cassandra.
Implemented a distributed messaging queue to integrate with Cassandra using Apache Kafka and ZooKeeper.
Worked with application teams to install an operating system, Hadoop updates, patches, and Kafka version upgrades as required.
Environment: Red Hat Linux, Oracle, Docker, Maven, Anthill Pro, Jenkins, Java, Ant, Puppet, SVN (Subversion), WebSphere, Cassandra, Kafka, Cloud Foundry.

Pravara IT Solutions. Jun 2014 - Jan 2017.
Java /DevOps Engineer
Responsibilities:
Developed 32 UI screens using HTML, JSP, and JavaScript.
Client-side validation throughout the application is done with JavaScript, and server-side validation is performed inside Action classes.
Implemented MVC design pattern using Spring MVC Framework in cardholder application.
Responsible for requirement gathering and analysis.
Prepared Design documents.
Used Java Server Pages for content layout and presentation.
Encapsulated Business Rules in PL/SQL packages and the data was written to the database in accordance with the business rules.
Used JDBC API for interaction with the Oracle Database.
Debugging and testing of the applications & fine-tuning performance.
Provided maintenance support in a production environment.
Responsible for coding the corresponding controllers.
Prepared and executed test plans -Involved in unit, system, and Integration testing.
Supported the QA and UAT bug fixes.
Responsible for the Branching and Merging with SVN SCM.
Responsible for maintaining the Ant build.xml files for all the projects.
Environment: HTML, Java, J2EE, Oracle, JSP, JavaScript, PL/SQL, Ant, SVN.

EDUCATION & CERTIFICATION:
B.Tech in Computer Science Engineering from Vagdevi College of Engineering, Warangal.
