Ethesham Uddin || Cloud DevOps Engineer
469-804-8995 || [email protected]
LinkedIn: linkedin.com/in/ethesham-uddin-081ab7202
Location: Dallas, Texas, USA
Relocation: Yes
Visa: H1B

Summary:
IT professional with 9+ years of experience, including over 5 years in cloud computing, build/release management, and DevOps engineering. Bridges the space between operations and development to deliver code to customers quickly, with experience in cloud platforms and DevOps automation for Linux systems. Seeking a DevOps/AWS/Azure position where I can contribute my technical knowledge.
Experienced in Linux Administration, Configuration Management, Continuous Integration (CI), Continuous Deployment, and Cloud Implementations.
Configuration management using AWS CloudFormation, continuous integration with Jenkins, and AWS management (EC2, EBS, RDS, Route 53).
Experience with the AWS cloud platform and its services, including EC2, S3, VPC, EBS, ELB, IAM, AMI, SNS, RDS, DynamoDB, CloudTrail, CloudWatch, EKS, CloudFormation, Auto Scaling, Lambda, and Route 53.
Expertise in querying RDBMSs such as Oracle and MySQL using SQL for data integrity.
Experience in execution of Shell and Python scripts to automate tasks.
Developed and maintained continuous integration and deployment systems using Git, Jenkins, Maven, and Nexus.
Experience in branching, merging, tagging, and maintaining versions across environments using SCM tools such as Subversion (SVN) and Git (GitHub, GitLab).
Experience with Amazon EC2: setting up instances, configuring security groups, and creating AMIs for launching instances (an illustrative CLI sketch follows this summary).
Supported operational efforts to migrate all legacy services to a fully virtualized infrastructure.
Experience with build tools such as Maven, including writing pom.xml build files.
Hands-on experience with deployment and configuration management tools such as Ansible.
Used Ansible and Ansible Tower as configuration management tools to automate repetitive tasks, quickly deploy critical applications, and proactively manage change.
Extensive experience writing Ansible playbooks and modules to automate the build/deployment process and improve on otherwise manual processes.
Experience with container-based deployments using Docker, Docker images, Dockerfiles, Docker Hub, Docker Compose, and Docker registries.
Proficient in using Terraform to define, provision, and manage cloud infrastructure.
Experience with Terraform Cloud and Enterprise for remote execution, VCS integration, and team management.
Experience with Jenkins/Maven deployment and build management system.
Experienced in working with various operating systems, including Linux (Red Hat, Ubuntu, CentOS), UNIX, and Windows.
Experience with cloud technologies such as AWS in both Windows and Linux environments.
Integrated automated builds with the deployment pipeline; installed an Ansible control server to pick up builds from the Jenkins repository and deploy them to target environments (Integration, QA, and Production).
Experienced in maintaining Kubernetes clusters and managing Docker containers across worker nodes.
Experience with OpenStack services such as Compute, Network, Storage, Dashboard, Image, Monitoring, and Orchestration.
Launched and fully set up Kubernetes clusters and maintained their infrastructure across cloud services (EKS, AKS, GKE), performing operations through the respective CLIs (AWS CLI, Azure CLI, and Google Cloud SDK).
Experienced in troubleshooting and automating deployments to web and application servers such as WebLogic, WebSphere, and Tomcat on the AWS cloud.
Expertise in AWS provisioning and solid knowledge of AWS services such as EC2, S3, ELB, RDS, IAM, Route 53, VPC, Auto Scaling, CloudFront, CloudWatch, CloudFormation, security groups, EKS, and ECR, as well as coding applications in Python.
Experienced with the ELK/EFK stack (Elasticsearch, Logstash, Fluentd, Kibana) and with Prometheus and Grafana for logging and monitoring applications deployed in AWS and Kubernetes clusters.
Good understanding of the OSI model and TCP/IP protocol suite; experienced in creating static IP entries in DNS, creating DHCP scopes, and backing up DNS and DHCP data.
Used JIRA to keep track of all the ongoing tasks and maintain bug resolutions.
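
Illustrative only: a minimal AWS CLI sketch of the EC2 workflow referenced above (security group, instance launch, AMI creation); all IDs, names, and the key pair are hypothetical placeholders, not values from any actual engagement.
  # Create a security group and allow inbound SSH (VPC and group IDs are placeholders)
  aws ec2 create-security-group --group-name web-sg --description "Web tier SG" --vpc-id vpc-0abc123
  aws ec2 authorize-security-group-ingress --group-id sg-0abc123 --protocol tcp --port 22 --cidr 10.0.0.0/16
  # Launch an instance from an existing AMI (AMI, key pair, and subnet are placeholders)
  aws ec2 run-instances --image-id ami-0abc123 --instance-type t3.micro \
      --key-name my-keypair --security-group-ids sg-0abc123 --subnet-id subnet-0abc123
  # Bake a new AMI from the configured instance for future launches
  aws ec2 create-image --instance-id i-0abc123 --name "baseline-web-ami" --no-reboot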

TECHNICAL SKILLS:

Scripting languages: PowerShell, Shell script, Bash, Python, YAML, JSON, JavaScript.
Build tools: ANT, Maven, Gradle.
Bug tracking & monitoring tools: JIRA, Splunk (5.x, 6.x), ELK, EFK, CloudWatch, Nagios, Datadog.
Cloud technologies: Amazon Web Services (EC2, IAM, AMI, S3, RDS, VPC, SNS, SQS, ElastiCache, EBS, CloudWatch, CloudFormation).
Tools: MS Office Suite, Nexus, Atlassian Confluence, JFrog.
Languages: C, C++, SQL, Python, Objective-C, Java/J2EE.
Network/Protocols: DNS, DHCP, Cisco routers/switches, WAN, TCP/IP, NIS, NFS, LAN, FTP/TFTP.
Operating systems: Windows 98/XP/NT/2000/2003/2008, UNIX 6/7, IOS, Red Hat Linux 4/5/6/7, Ubuntu 12/13/14.
SCM tools: Subversion, GitHub, Bitbucket.
CI & provisioning tools: Jenkins, Jira, Ansible, Chef, Docker.
Web/App servers: WebLogic, WebSphere, Apache Tomcat, RHEL.

Professional experience:
Client: Staples  Nov 2022 to Present
Role: AWS DevOps Engineer
Boston, MA

Responsibilities:
Configured various Jenkins jobs for deploying Java-based applications and running test suites.
Created a Jenkinsfile pipeline as code for CI/CD deployments. Integrated Jenkins with GitLab to trigger builds on code commits and poll SCM.
Worked on Ansible playbooks, modules, roles, and tasks to deploy the application to Apache Tomcat servers, with environment-specific configuration files stored in inventory files.
Uploaded all source code artifacts from Jenkins to different repositories in JFrog Artifactory.
Developed and implemented branching and release strategies in GitHub.
Worked on merging strategies in GitHub using commands such as git cherry-pick and git merge.
Used Git to manage Terraform code and collaborate with team members.
Experience in and demonstrated understanding of source control management concepts such as branching, merging, labeling and integration.
Experience in administering and supporting source code repositories like Subversion, Git on Linux and Windows environments.
Created Ansible playbooks to install third party software on Jenkins agents and different environment servers.
Integrated Terraform with CI/CD pipelines for automated infrastructure deployment.
Dockerized a Java application by writing a Dockerfile, built a Docker image, and deployed it to a Kubernetes cluster using a Kubernetes deployment YAML file (illustrative commands follow this section).
Worked on Kubernetes deployment YAML files for replica sets, autoscaling, and rollout status.
Worked on Kubernetes namespaces and services for microservices communication.
Worked on installing and configuring SonarQube for code quality reports. Configured quality gates in SonarQube such as code complexity, code coverage, and code duplication.
Created AWS CloudFormation templates with creation policies, helper scripts, and update policies.
Configured various AWS resources such as EC2 instances, EBS volumes, Elastic Load Balancers, and Auto Scaling using CloudFormation templates.
Experience installing Filebeat agents on EC2 and configuring application logs to forward to the ELK stack.
Implemented AWS services such as EMR clusters, SQS for queueing, and SNS for alerting on error scenarios.
Implemented Jenkins as a full-cycle continuous delivery tool involving package creation, distribution, and deployment onto Tomcat application servers via shell scripts embedded in Jenkins jobs.
Supported Java and .NET applications for deployment, resolving hot tickets on demand and providing solutions.
Managed CI/CD pipelines in AWS services, optimizing automated deployment processes.
Utilized Amazon Web Services for scalable and efficient cloud infrastructure, leading to improved performance and cost savings.
Implemented comprehensive monitoring and alerting solutions with Azure Monitor and AWS CloudWatch for proactive incident management.
Triggered Lambda functions using API Gateway, Apigee gateway, and CloudWatch Events.
Set up and built AWS infrastructure resources such as S3 buckets, API Gateway, DynamoDB, KMS, Secrets Manager, VPCs, and subnets.
Integrated Grafana with Prometheus as a data source, set up responsive dashboards with custom formulas, and configured threshold-based alerts integrated with alerting channels such as email.
Planned and built frameworks in the AWS environment using services such as VPC, EC2, S3, IAM, Route 53, CloudWatch, and ElastiCache.
Designed and developed new microservices while maintaining and expanding our AWS and Azure infrastructure.
Environment: AWS (EC2, VPC, ELB, S3, RDS, CloudTrail, EBS, IAM, CloudWatch, EventBridge, AWS Lambda, API Gateway, Route 53, CloudFormation, Aurora DB, MySQL, AWS CLI, AWS Auto Scaling), Linux, Azure DevOps, ECR, ECS, EKS, Docker, Kubernetes, OpenStack, Shell, Python.
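
Illustrative only: a minimal command-line sketch of the Dockerize-and-deploy flow referenced in the responsibilities above; the image name, registry, namespace, and manifest path are hypothetical placeholders.
  # Build the Java application image from its Dockerfile and push it to the registry
  docker build -t registry.example.com/orders-service:1.0.0 .
  docker push registry.example.com/orders-service:1.0.0
  # Apply the Kubernetes deployment manifest and watch the rollout
  kubectl apply -f k8s/deployment.yaml -n orders
  kubectl rollout status deployment/orders-service -n orders
  # Autoscale replicas on CPU utilization
  kubectl autoscale deployment orders-service -n orders --min=2 --max=6 --cpu-percent=70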

Client: ADT  Jan 2022 to Oct 2022
Role: AWS DevOps Engineer
Boca Raton, FL

Project Description:
Worked as a technology resource, guiding different development teams through difficult spots in migrations.
Deployed and managed infrastructure for multiple environments using Terraform, ensuring consistency and reliability.
Automated deployment of all resources related to my work as a DevOps software engineer.
Implemented a CI/CD pipeline involving GitLab, Jenkins, Sonar, Coverity, Ansible, Docker, Kubernetes and Selenium for complete automation from commit to deployment.
Created fully automated CI build and deployment infrastructure and processes for multiple projects.
Created and maintained fully automated CI/CD pipelines for code deployment using the Groovy scripting language (Jenkins DSL library).
Created simple solutions using Docker, Ansible, Groovy, and shell scripting to launch new Jenkins instances and take backups from the host servers.
Used Maven as the build tool to produce build artifacts from the source code; experienced in writing pom.xml files.
Developed scripts for build, deployment, maintenance and related tasks using Jenkins, Docker, Maven, Python and Bash.
Developed automation and deployment utilities using Bash, PowerShell and Python.
Wrote Dockerfiles for creating containers and custom images according to client requirements.
Created Docker containers for Jenkins slaves, with each Docker container running as a Jenkins slave for the Jenkins instances.
Managed Kubernetes charts using Helm and handled local deployments in Kubernetes, creating a local cluster and deploying application containers (an illustrative Helm sketch follows this section).
Used Jenkins pipelines to push all microservice builds to the Docker registry and then deploy to Kubernetes; created and managed Pods in Kubernetes.
Created Jenkins pipelines using Docker images and Dockerfiles through Jenkins declarative pipelines written in the Groovy scripting language.
Created a continuous delivery process that supports building Docker images and publishing them to a private repository, Artifactory.
Worked on Ansible, using YAML playbooks to install and configure packages and push changes on time.
Used Ansible for infrastructure configuration and automated many manual processes through playbooks.
Used Ansible and Chef as configuration management tools to automate repetitive tasks and proactively manage change.
Used Nagios and Prometheus as server monitoring tools, setting them up from installation through to production.
Managed software releases through the branching strategy.
Used the ELK stack (Elasticsearch, Logstash, and Kibana) for customer name search patterns and for Jenkins log tracking.
Documented all developed continuous integration and automated testing procedures in Confluence.
Achieved cost savings by optimizing resource provisioning and implementing efficient infrastructure designs.
Environment: Jenkins, GitLab, GitSwarm, Sonar, Coverity, Maven, Kubernetes, Docker, Docker Compose, Ansible, Chef, Groovy, Shell, Python, Rally, Prometheus, Nagios, Helm, SUSE Linux.
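
Illustrative only: a minimal Helm command sketch for the chart-based deployments referenced in this section; the release name, chart path, namespace, values file, and image tag are hypothetical placeholders.
  # Render the chart locally to verify templates before deploying
  helm template payments ./charts/payments --values values-dev.yaml
  # Install or upgrade the release in its namespace, overriding the image tag
  helm upgrade --install payments ./charts/payments \
      --namespace payments --create-namespace \
      --set image.tag=1.4.2 --values values-dev.yaml
  # Review release history and roll back if the new revision misbehaves
  helm history payments --namespace payments
  helm rollback payments 1 --namespace payments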

Client: Pindrop  Oct 2020 to Dec 2021
Role: AWS DevOps Engineer
Atlanta, GA

Responsibilities:
Launched Amazon EC2 cloud instances on Amazon Web Services (Linux/Ubuntu/RHEL/Windows) and configured the launched instances for specific applications.
Responsible for managing infrastructure provisioning (S3, ELB, EC2, RDS, Route 53, IAM, security groups) and deployment via Scalr, and for EC2 installs with CentOS, Ubuntu, RHEL 6, and Scientific Linux.
Updated certificates using AWS Certificate Manager and Venafi Certificate Manager for private and public SSL/TLS certificates used to establish the secure identity of websites.
Involved in maintaining user accounts (IAM) and service templates for RDS, Route 53, VPC, RDB, DynamoDB, and SNS.
Worked on EC2 Instance Savings Plans and Reserved Instances, which give the flexibility to shift usage between instances.
Implemented and maintained monitoring and alerting of production and corporate servers/storage using AWS CloudWatch.
Created CloudWatch alarms that initiate Amazon EC2 Auto Scaling and Amazon Simple Notification Service (Amazon SNS) actions (an illustrative CLI sketch follows this section).
Virtualized servers using Docker for test and development environment needs, and automated configuration using Docker, Kubernetes, and OpenShift containers.
Created the AWS VPC network for the installed instances and configured the security groups and Elastic IPs accordingly. Installed load balancers and a VPC with public and private subnets.
Set up ECS (Elastic Container Service) and EKS (Elastic Kubernetes Service) for orchestrating, linking, and deploying container-related services.
Ensured infrastructure compliance with security and regulatory standards by incorporating best practices and governance policies in Terraform configurations.
Coordinated resources by working closely with project managers for releases and for all operational projects. Coordinated with the Development, QA, and IT Operations teams during deployments to ensure there were no conflicts.
Managed routine system backups and scheduled jobs such as disabling and enabling cron jobs, and enabled system and network logging of servers for maintenance, performance tuning, and testing.
Successfully managed large-scale infrastructure deployments using Terraform, supporting critical business applications.
Environment: Amazon Web Services (EC2, CloudWatch, VPC, Route 53, S3), Agile, GitHub, GitLab, Jenkins, Puppet, Chef, Docker, Artifactory, JIRA, RHEL, Python, Shell, and Ruby scripting.
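
Illustrative only: a minimal AWS CLI sketch of a CloudWatch alarm wired to Auto Scaling and SNS actions as described in this section; the alarm name, Auto Scaling group, account ID, ARNs, and threshold are hypothetical placeholders.
  # Alarm on sustained high CPU for an Auto Scaling group; on ALARM, trigger a
  # scale-out policy and notify an SNS topic (both ARNs are placeholders)
  aws cloudwatch put-metric-alarm \
      --alarm-name asg-high-cpu \
      --namespace AWS/EC2 --metric-name CPUUtilization \
      --dimensions Name=AutoScalingGroupName,Value=web-asg \
      --statistic Average --period 300 --evaluation-periods 2 \
      --threshold 70 --comparison-operator GreaterThanThreshold \
      --alarm-actions \
          arn:aws:autoscaling:us-east-1:111122223333:scalingPolicy:example:autoScalingGroupName/web-asg:policyName/scale-out \
          arn:aws:sns:us-east-1:111122223333:ops-alerts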

MetLife Insurance  July 2015 to Dec 2017
Role: Data Analyst/DevOps Engineer
Hyderabad, Telangana - India
Project Description:
Used AWS CloudWatch as a monitoring tool.
Designed, configured, and managed public/private cloud infrastructure using Amazon Web Services (AWS), including VPC, EC2, S3, Internet Gateway (IGW), NAT Gateway, CloudTrail, CloudWatch, ELB, and Lambda.
Created alarms in CloudWatch for monitoring server performance, CPU utilization, and disk usage.
Utilized CloudFormation to create a DevOps process for a consistent and reliable deployment methodology.
Managed the AWS cloud and migrated applications to the Amazon cloud.
Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT to ensure successful deployment of web applications and database templates.
Managed SVN/GIT repositories for branching, merging, and tagging of the source code.
Created snapshots and Amazon Machine Images (AMIs) of the instances for backup and for cloning new (similar) instances.
Worked with software Build automation and standardization tools like Maven.
Used Nexus as an artifact repository to store artifacts like WAR, JAR files.
Managed Maven project dependencies by creating parent-child relationships between projects.
Initiated Regular Build jobs using the Continuous Integration tool, Jenkins.
Installed several plugins in Jenkins to integrate the multiple tools required for implementing our projects.
Used the concept of upstream and downstream jobs in Jenkins and Created Continuous Integration and Continuous Delivery Pipelines for the build and deployment process.
Implemented CI/CD pipelines integrating Terraform for automated infrastructure deployment, reducing deployment times.
Deployed configurations across development servers, which run on AWS EC2 instances, using an Ansible control server.
Worked with Terraform to manage AWS components such as EC2, IAM, VPC, ELB, and security groups. Used S3 for Terraform state management (an illustrative init sketch follows this section).
Configured RDS instances using CloudFormation and Terraform.
Wrote Ansible playbooks for software and hardware provisioning, using YAML.
Experience working with Docker containers, Kubernetes, running/managing containers, container snapshots and managing images.
Familiar with container orchestration using Kubernetes.
Managed Kubernetes charts using Helm, reproduced builds of Kubernetes applications, and managed Kubernetes clusters.
Configured SSL (Secure Sockets Layer), obtaining digital certificates and a private key for the WebLogic server to provide secure connections monitored in Nagios.
Wrote shell (Bash), Ruby, Python, and PowerShell scripts for setting up baselines, branching, merging, and automation processes across environments using SCM tools such as Git and SVN on Windows and Linux platforms.

Environment: Git, Maven, Ansible, Jenkins, Docker, Kubernetes, Nagios, Artifactory, AWS CloudWatch, RDS, UNIX, EC2, AMI, Route 53, S3, Ruby, shell scripts, ELK, Lambda, Auto Scaling, Python.
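
Illustrative only: a minimal shell sketch of initializing Terraform against an S3 state backend and applying a plan, per the state-management bullet above; the bucket, state key, region, lock table, and var file are hypothetical placeholders.
  # Initialize the working directory, pointing the S3 backend at the remote state file
  terraform init \
      -backend-config="bucket=tf-state-example" \
      -backend-config="key=envs/dev/terraform.tfstate" \
      -backend-config="region=us-east-1" \
      -backend-config="dynamodb_table=tf-state-lock"
  # Preview and apply the change set for the dev environment
  terraform plan -var-file=dev.tfvars -out=dev.plan
  terraform apply dev.plan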

Creditsafe Group  July 2012 to July 2015
Hyderabad, India
Role: Business/Data Analyst

Responsibilities:
Created dashboards and visualizations of different types for analysis, monitoring, management, and better understanding of business performance metrics.
Provided maintenance support for the software used by hundreds of users.
As part of the service support team, helped end users by resolving issues they faced while performing operations such as opening new accounts. Played a vital role in improving the usability of the software.
Involved in creating dashboards, scorecards, views, pivot tables, and charts for further data analysis.
Defined and communicated project scope, milestones, deliverables, and projected requirements from clients.
Used Tableau on SQL-queried data for data analysis, generating reports, graphics, and statistical analysis.
Involved in report generation using advanced SQL techniques such as RANK and ROW_NUMBER.
Analyzed data using Tableau for automation and determined business data trends.
Provided guidance for transitioning from Access to SQL Server.
Committed to staying up to date with the latest Terraform features, best practices, and industry trends.

Environment: MS SQL Server, SQL Enterprise Manager, MS Excel, cloud technologies, Erwin, ETL, MS Access, Java, Jira, MS Office, Outlook.

Education:
Bachelor of Commerce with Computers, Osmania University