Pradeep Kumar KB
Lead/Senior Data Engineer (Sr. Lead Data/MLOps)
Irving, Texas, United States
Email: [email protected]
Phone: +1 (469) 639-0484
Relocation: Open | Visa: H1B

PROFESSIONAL SUMMARY:
13+ years of IT experience as a Technology Lead managing a team of 9 in CI (Continuous Integration) and CD (Continuous Delivery), with a strong background in Build and Release Management and cloud implementation under an MLOps culture.
7+ years of experience as a Build and Release Engineer applying MLOps methodologies, with a primary focus on CI/CD pipelines and build and deploy automation.
Good experience in Site Reliability Engineering (SRE) activities, including SLAs, SLIs, and SLOs.
Change management experience: planning, coordinating, following up on approvals, and implementing technical changes.
Problem management: identified root causes of reactive and proactive problems and prevented recurrence.
Documented support interactions for future reference and added them to the knowledge base.
Arranged bridge calls for critical issues, bringing all parties together to provide resolution within the agreed SLA.
Expertise in Windows and Linux environments; worked within the Agile software development life cycle.
Hands-on support of multiple environments, communicating with clients and distributed teams globally.
Experience with password authentication tools such as PWP and CyberArk.
Hands-on experience with AWS services such as SageMaker, ECS, ECR, and EKS for deploying ML models.
Hands-on experience with containerization platforms such as Docker and container orchestration tools such as Kubernetes.
Experience administering and maintaining source control management systems such as GitHub: created tags and branches, managed commits and pull requests, fixed merge issues, and administered software repositories.
Provisioned virtual environments using Docker and Kubernetes; hands-on experience with Docker volumes and Docker networks.
Managed Kubernetes deployments and objects for high availability and scalability using HPA and resource management.
Good experience with configuration management tools such as Ansible; good understanding of ArgoCD, Istio (service mesh), IaC tools such as Terraform, and OpenShift.
Good experience with Amazon Web Services, including EC2, ECS, S3, EBS, IAM, Auto Scaling, VPC, Cloud CDP POC, Security Groups, KMS, ECR, and API PrivateLink.
Basic knowledge of CircleCI for continuous integration and deployment of mobile applications.
Hands-on experience with mobile MLOps tools focused on continuous building, integration, deployment, delivery, and monitoring of releases.
Accelerated mobile builds, releases, and teams through an MLOps/DevOps platform built for mobile deployments.
Extensively worked with Jenkins for continuous integration, continuous delivery, and continuous deployment by building pipeline jobs.
Experience in build automation using tools such as Maven to produce deployable artifacts (PIP, JAR, WAR, and EAR) from source code.
Exposed to all aspects of the software development life cycle (SDLC): analysis, planning, development, testing, implementation, and post-production analysis.
Hands-on experience with monitoring tools such as ELK, Prometheus, and Grafana.
Good experience with Datadog monitoring; excellent knowledge of Agile methodology and the ITIL process.
Led the support team on production support activities, tracking defects raised during release deployments with the development team and working with developers to ensure defects were cleared before code reached the production environment.
Troubleshot and resolved issues raised by the functional and performance testing teams, following ITIL guidelines for incident management and the SLA specifications agreed with the client.
Extensive experience in Selenium automation testing and API testing. Experience in Python, R, Groovy, Core Java, and JavaScript.
Worked on inbound and outbound web services. Participated in the migration process, moving release code from the DEV environment to the TEST/UAT environments.
Excellent interpersonal, project management, communication, and documentation skills. Prepared the monthly PMR report and drove the monthly PMR call.

EDUCATION DETAILS:
Master of Science, Computer Science and Information Technology,
Southern Arkansas University, 2017.
Bachelor of Technology, Robotics Engineering,
Jawaharlal Nehru Technological University Anantapur, 2008.

TECHNICAL SKILLS:
Languages : Python, R, Groovy, Scala, Ruby, C
Operating Systems : Red Hat Enterprise Linux, Ubuntu, Windows 10
Application/ Web Servers : Apache Tomcat, Apache HTTP Server
Automation/ Monitoring Tools : Selenium WebDriver, Jenkins, ELK, CloudWatch
Source/ Version Control Tools : Git, GitHub
Data Visualization/ Presentations : Tableau, Looker, Power BI, MS PowerPoint
Machine Learning/ Analytics : Linear & Logistic Regression, Clustering algorithms
CM/ IaC : Ansible, Terraform
Cloud Services : Azure, AWS, Containerization (Docker, Kubernetes), Cloudera CDP
Build/ Quality/ Repository : Maven, SonarQube, Nexus
Databases & Modelling : Teradata, NoSQL, MySQL, Cassandra, BigQuery
ETL Platforms : Talend, Google Analytics (GCP), Adobe Analytics, Azure, Informatica, REST API
Monitoring Project Progress : Jira, Kanban, Scrum

CERTIFICATION:
AWS Certified Solutions Architect - Associate (Udemy).
CCA 175: Cloudera Certified Spark and Hadoop Developer.
Microsoft Certified Data Scientist (edX).
Data Analytics certification, Delft University of Technology (edX).

PROFESSIONAL EXPERIENCE:

Client #1: Verizon Business, Miami, Florida (Mar 2022 - Present)
Role: Lead Data/MLOps Engineer

Responsibilities:
Worked on the Customer 360 analysis team and gained an understanding of the customer membership ML models and the methodology used. Designed and built a model deployment and monitoring solution to assess whether a customer's membership should be issued (AAI), renewed (Rev), or cancelled (AAC) depending on various parameters.
Designed and built production ML pipelines, deployment and monitoring scripts, and dashboards using Azure components.
Designed the architecture, integration, and deployment of ML code for a classification model on Azure for fraud detection and prevention.
Experienced in discussing technology availability, advantages, and budgeting. Experienced in data modeling, data mapping, and data warehousing.
Built ETL processes from Oracle and Teradata, and built batch-processing data pipelines using PySpark, Hive, and Hadoop (a brief PySpark sketch follows this section).
Created a classification model that adds value in fraud detection.
Created a clustering model that identifies anomalous patterns for fraud prevention.
Involved in designing the architecture of a data pipeline providing a continuous flow of data from Kafka to HDFS.
Scheduled jobs with Oozie and performed version control activities using Git.
Installed CDP 7.1.2 in the data center.
Migrated the Hadoop cluster from HDP 3.1.5 to CDP 7.1.2.
Extensive use of Python libraries: TensorFlow, H2O, Azure ML, pickle, shutil, scikit-learn, pandas, NumPy, os, json, etc.
Proficient in Azure services such as Blob Storage, Machine Learning workspace, Container Registry, Key Vault, Container Instances, Kubernetes Service, etc.
Wrote templates for Azure infrastructure as code using Terraform to build staging and production environments.
Responsible for implementing the continuous integration and continuous deployment process using Jenkins, along with Python, shell scripts, and serverless functions to automate routine jobs.
Worked on data extraction, aggregation, and consolidation of Adobe data within AWS Glue using PySpark.
Implemented continuous integration using Jenkins and Git from scratch.
Responsible for branching, tagging, and release activities on version control tools such as SVN and Git.
Environment: Azure, SQL, Hue, Oozie, Teradata (BTEQ), Cassandra, Hive, Spark, Hadoop, Sqoop, Git, Python, Oracle, Jenkins, GitLab, supervised and unsupervised learning, Tableau, Looker.
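
Illustrative sketch of the kind of PySpark batch pipeline described above. This is a minimal example, not the actual production job; the database, table, column, and path names are hypothetical.

# Minimal PySpark batch ETL sketch (hypothetical names throughout).
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("membership_batch_etl")
         .enableHiveSupport()
         .getOrCreate())

# Read raw membership events from a Hive table.
events = spark.table("customer360.membership_events")

# Aggregate per customer for downstream model scoring.
features = (events
            .filter(F.col("event_date") >= "2022-01-01")
            .groupBy("customer_id")
            .agg(F.count("*").alias("event_count"),
                 F.max("event_date").alias("last_event_date")))

# Persist the curated features to HDFS as Parquet.
features.write.mode("overwrite").parquet("/data/curated/membership_features")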

Client #2: The Janssen Pharmaceutical Companies of Johnson & Johnson, Tempe, Arizona (Jun 2021 - Mar 2022)
Role: Lead Engineer / Data Scientist

Responsibilities:
Developed an effective statistical model to find problems with customer self-installation (CSI), further improving KPIs and the understanding of device setup.
Improved table performance through load testing using the Cassandra stress tool. Designed and documented CI/CD tool configuration management.
Migrated data from AWS S3 buckets and Hive and applied natural language processing (NLP) techniques. Used a classification algorithm to predict customer behavior from NLP data (a brief sketch follows this section).
Implemented a distributed messaging queue integrated with Cassandra using Apache Kafka and ZooKeeper.
Responsible for designing and developing Python, R, and Scala programs and scripts to prepare, transform, and harmonize datasets for modeling.
Designed and developed ETL integration patterns using PySpark.
Developed a framework for converting existing PowerCenter mappings to PySpark jobs.
Extensive use of Python libraries: pandas, NumPy, scikit-learn, Matplotlib, Seaborn, statsmodels, SciPy, MLlib, NLTK, spaCy, etc.
Completed Azure cloud training covering cloud governance.
Handled importing data from various sources, performed transformations using Hive and MapReduce, and loaded data into HDFS.
Involved in loading data from the UNIX file system to HDFS. Involved in designing schemas, writing CQL, and loading data in Cassandra.
Used ODBC to connect to Hive for visualization in Tableau.
Environment: Python, Azure Cloud, Hive, Oracle, SQL Server, MS Excel, MS Visio, Hadoop, Apache Spark, SQL, Tableau, AWS, S3, Scala, NLP, R, PyCharm, AWS SageMaker, Adobe Analytics, Azure Machine Learning Module, Tamr, etc.
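
Illustrative sketch of the kind of NLP classification pipeline described above. This is a minimal scikit-learn example under assumed inputs; the file, column names, and labels are hypothetical, not the actual project data.

# Minimal NLP classification sketch (hypothetical data and columns).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Assume 'notes' holds free-text installation feedback and 'label' the outcome class.
df = pd.read_parquet("csi_feedback.parquet")
X_train, X_test, y_train, y_test = train_test_split(
    df["notes"], df["label"], test_size=0.2, random_state=42)

model = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english", max_features=20000)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))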

Client #3: ArcherDX, Boulder, Colorado (Nov 2019 - May 2021)
Role: Data Analytics / DevOps Engineer

Responsibilities:
Created Helm charts from the existing Kubernetes templates and modified them according to requirements.
Deployed the Helm charts and migrated Kubernetes manifests to Helm charts.
Used Jenkins pipelines to drive all microservice builds out to the Docker registry, then deployed to Kubernetes; created and managed pods with Kubernetes.
Built and maintained Docker container clusters managed by Kubernetes in EKS.
Utilized Kubernetes and Docker as the runtime environment of the CI/CD system to build, test, and deploy.
Set up the NGINX ingress controller to manage ingress/egress routing rules for Kubernetes.
Responsible for implementing containerized applications using Elastic Kubernetes Service (EKS).
Participated in on-call activities as needed for major incident resolution and problem management.
Involved in troubleshooting packaging and deployment issues.
Involved in building, deploying, maintaining, and troubleshooting various applications.
Followed the ITIL process; handled incidents and change requests using ServiceNow.
Monitored the ticketing tool and assigned open incidents within the team using a round-robin method.
Arranged bridge calls for critical issues, bringing all parties together to provide resolution within the agreed SLA.
Kept the incident lifecycle and status updated via ServiceNow reports.
Managed Docker orchestration and Docker containerization using Kubernetes.
Used Kubernetes to orchestrate the deployment, scaling, and management of Docker Containers.
Responsible for operational activities after deploying applications into production.
Responsible for maintaining certificates and renewing them in a timely manner.
Documented the entire build and release process and provided support.
Configured and exposed application endpoints to Prometheus and created graphs in Grafana for visualization.
Implemented AWS solutions using ECS, S3, EBS, Elastic Load Balancer, and Auto Scaling groups; optimized volumes and EC2 instances.
Deployed Docker containers in AWS ECS using services such as CloudFormation.
Used ECR as the image repository for ECS (Elastic Container Service).
Set up encryption and decryption with AWS KMS, Base64-encoding the ciphertext for storage and transport (a brief KMS sketch follows this section).
Participated in design reviews of architecture patterns for service/application deployment in the cloud (AWS).
Designed, developed, documented, tested, and debugged new and existing configuration management and infrastructure as code (Terraform).
Experience with Jenkins, AWS, Terraform, and Git as source control.
Automated processes using Python, created pipelines, and stored them in Git.
Delivered and maintained standard operating procedures and assisted in troubleshooting issues.
Documented operational and security standards.
Automated the resulting scripts and workflows using Apache Airflow and ensured daily execution in production.
Installed and configured Apache Airflow for the S3 bucket and created DAGs to run the workflows (a brief DAG sketch follows this section).
Created pipeline jobs for continuous delivery through Jenkins.
Responsible for maintaining source code in GitHub.
Responsible for creating the EKS cluster using a Terraform template.
Managed the logging mechanism through Splunk.
Established monitoring using Prometheus and Grafana.
Assigned goals to team members, tracked them in a timely manner, and performed year-end appraisals.
Managed the team's timesheets.
Environment: Bitbucket, Docker, Kubernetes, Maven, Jenkins, JIRA, Adobe Analytics, Nexus, SonarQube, Linux, AWS, ITIL, DevOps security tools, Agile tools, Terraform, shell scripting, Python, SQL, and Prometheus (monitoring).
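
Illustrative sketch of the KMS encryption/decryption pattern referenced above, using boto3. The key alias and region are hypothetical; KMS itself produces the ciphertext, and Base64 is used only to make the binary blob safe to store or pass around as text.

# Minimal AWS KMS encrypt/decrypt sketch with Base64 transport encoding (hypothetical key alias).
import base64
import boto3

kms = boto3.client("kms", region_name="us-east-1")

def encrypt(plaintext: str, key_id: str = "alias/app-secrets") -> str:
    # KMS returns a binary ciphertext blob; Base64-encode it for text storage.
    resp = kms.encrypt(KeyId=key_id, Plaintext=plaintext.encode("utf-8"))
    return base64.b64encode(resp["CiphertextBlob"]).decode("ascii")

def decrypt(ciphertext_b64: str) -> str:
    # KMS infers the key from metadata embedded in the ciphertext blob.
    resp = kms.decrypt(CiphertextBlob=base64.b64decode(ciphertext_b64))
    return resp["Plaintext"].decode("utf-8")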
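
Illustrative sketch of an Airflow DAG of the kind described above, ensuring daily execution of an S3-related workflow. The DAG id, bucket, task logic, and schedule are hypothetical placeholders, not the actual production DAG.

# Minimal Apache Airflow DAG sketch (hypothetical names and logic).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def sync_s3_bucket():
    # Placeholder for the actual S3 processing logic (e.g. via boto3).
    print("processing s3://example-bucket ...")

with DAG(
    dag_id="daily_s3_sync",
    start_date=datetime(2020, 1, 1),
    schedule_interval="@daily",   # ensure daily execution in production
    catchup=False,
) as dag:
    sync_task = PythonOperator(
        task_id="sync_s3_bucket",
        python_callable=sync_s3_bucket,
    )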

Client #4: AT&T Inc., Dallas, TX (Feb 2019 - Nov 2019)
Role: DevOps Engineer

Responsibilities:
Created branches and tags in the Git repository and provided branch access permissions to the dev team.
Monitored the ticketing tool and assigned open incidents within the team using a round-robin method.
Interacted with customers to understand their requirements.
Acted as a bridge between customers and the L2/L3 teams to resolve issues per customer requirements.
Proposed and implemented a branching strategy suitable for agile development in GitHub.
Escalated to the L2 and L3 teams depending on customer requirements.
Responsible for tagging and maintaining code in Git version control.
Managed Sonatype Nexus repositories to download artifacts (JAR, WAR, and EAR) during builds.
Managed Kubernetes deployments and objects for high availability and scalability using HPA and resource management (a brief HPA sketch follows this section).
Managed Docker orchestration and Docker containerization using Kubernetes.
Used Kubernetes to orchestrate the deployment, scaling and management of Docker Containers.
Installed Jenkins plugins for the Git repository and set up SCM polling for immediate builds with Maven.
Developed build, auto-build, and deployment scripts using Maven as the build tool in Jenkins to promote builds from one environment to another, and created new jobs and branches through Jenkins.
Integrated Jenkins & Kubernetes cluster.
Set up the NGINX ingress controller to manage ingress/egress routing rules for Kubernetes.
Responsible for implementing containerized applications using Elastic Kubernetes Service (EKS).
Implemented a continuous delivery framework using Jenkins, Maven, and Nexus in a Linux environment. Experience with container-based deployments using Docker.
Expertise in using Docker Engine and Docker Hub and building Docker images from the root of the repository.
Documented the entire build and release process and provided support.
Installed and configured Jenkins to automate builds, deployments, and test execution, providing a complete automation solution.
Added new modules to the Jenkins pipeline and configured jobs.
Automated the build process using Jenkins jobs.
Took weekly backups of the Jenkins home directory.
Configured application servers (Tomcat) to deploy the code.
Automated the build process using GitHub, Maven, SonarQube, Nexus artifacts, and Jenkins.
Virtualized servers using Docker and Kubernetes for test and dev environment needs, including configuration automation with Docker containers.
Experience with container-based deployments using Docker, working with Docker images, Docker Hub, and Docker registries.
Implemented AWS solutions using EC2, S3, EBS, Elastic Load Balancer, and Auto Scaling groups; optimized volumes and EC2 instances.
Managed weekly calls with clients on behalf of the team to resolve issues and update project status.
Environment: Bitbucket, Docker, Kubernetes, Maven, Jenkins, JIRA, Nexus, SonarQube, Linux, Windows 10, AWS, DevOps security tools, Agile tools, Terraform, shell scripting, Python, SQL, and Prometheus (monitoring).
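
Illustrative sketch of creating a Horizontal Pod Autoscaler with the official Kubernetes Python client, as referenced above. The deployment name, namespace, and thresholds are hypothetical; in practice the same object is often applied as YAML via kubectl or Helm.

# Minimal HPA creation sketch using the kubernetes Python client (hypothetical names).
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa", namespace="default"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out above 70% average CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)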

Client #5: TCF Bank, Minneapolis, MN (Jan 2018 - Feb 2019)
Role: Cloud Systems Integrator

Responsibilities:
Led and managed a team of 5 members, covering technical and resource management.
Integrated Jenkins with the Kubernetes cluster.
Set up Jenkins pipelines to drive all microservice builds out to the Docker registry, then deployed to Kubernetes; created and managed pods with Kubernetes.
Handled release management activities.
Performed deployments using Ansible.
Set up environments for new customers through Terraform and Ansible in AWS and on VMs.
Worked on assigned enhancement/development tasks and tracked them regularly in Jira.
Implemented a continuous delivery framework using Jenkins, Maven, and Nexus in a Linux environment.
Documented the entire build and release process and provided support.
Configured and exposed application endpoints to Prometheus and created graphs in Grafana for visualization.
Took weekly backups of the Jenkins home directory.
Assigned goals to team members, tracked them in a timely manner, and performed year-end appraisals.
Managed the team's timesheets.
Environment: AWS (ECS, EMR, Redshift, Lambda serverless), Git, PyCharm, Bitbucket, Docker, Kubernetes, Maven, Jenkins, JIRA, Nexus, SonarQube, Linux, Windows 10.

Client #6: Regeneron, Tarrytown, New York (Jan 2017 - Dec 2017)
Role: Site Reliability Engineer

Responsibilities:
Handled release management activities.
Installed Jenkins plugins for the Git repository and set up SCM polling for immediate builds with Maven.
Implemented a continuous delivery framework using Jenkins, Maven, and Nexus in a Linux environment.
Documented the entire build and release process and provided support.
Configured and exposed application endpoints to Prometheus.
Took weekly backups of the Jenkins home directory.
Proposed and implemented a branching strategy suitable for agile development.
Prepared test scenarios.
Prepared low-level test cases.
Involved in UAT, PPE, regression, and functional testing.
Actively participated in review meetings.
Performed database testing when needed.
Responsible for operational activities after deploying applications into production.
Responsible for maintaining certificates and renewing them in a timely manner.
Environment: Bitbucket, Maven, Jenkins, JIRA, Nexus, SonarQube, Linux, Windows 10.

Client #7: Siemens Industry Software Inc., Hyderabad, India (Jul 2010 - Dec 2015)
Role: Software Engineer

Responsibilities:
Actively involved in resolving customer technical issues, communicating solutions, and interacting with customers to deliver solutions.
Fixed defects, resolved service requests, migrated repositories to different environments, and handled deployments.
Created branches in Git and Subversion for the parallel development process.
Downloaded repositories and pushed configuration files to application servers as part of release activities.
Performed health checks at the test team's request to ensure the applications were healthy.
Performed data load activities.
Proficient in user account maintenance, backup and recovery, auto-mounting, and printer configuration.
Managed weekly calls with clients on behalf of the team to resolve issues and update project status.
Performed all necessary day-to-day maintenance.
Used Jenkins to automate most of the build related tasks.
Strong knowledge of the ITIL process and documentation.
Automated functional test cases using Selenium.
Created automated test scripts using the JBehave BDD framework.
Executed the scripts using Core Java.
Environment: GitHub, Maven, Jenkins, JIRA, Nexus, SonarQube, Linux, Windows, AWS, Selenium testing, Core Java, BDD, ITIL.

Client #8: Sonata Software Limited, Hyderabad, India (Jan 2009 - Jun 2010)
Role: Jr. Software Engineer

Responsibilities:
Actively involved in resolving customer technical issues, communicating solutions, and interacting with customers to deliver solutions.
Fixed defects, resolved service requests, migrated repositories to different environments, and handled deployments.
Created a workflow to send emails to sales engineers based on requirements using Outbound Communication Manager.
Implemented a continuous delivery framework using Jenkins, Maven, and Nexus in a Linux environment.
Documented the entire build and release process and provided support.
Took weekly backups of the Jenkins home directory.
Added new fields to meet the business requirements.
Enhanced the functionality of Siebel business components using joins, links, and drilldown applets.
Worked on an L1/L2, 24x7 support team handling customer data loading and application performance tuning.
Created users manually and added positions.
Implemented drilldowns in applets.
Worked on user properties.
Environment: GitHub, Maven, Jenkins, JIRA, Nexus, SonarQube, Linux, Windows, AWS, Selenium testing, Core Java, BDD, ITIL.

I hereby declare that the details furnished above are true to the best of my knowledge.

Sincerely,

Pradeep Kumar KB