Gayatri - PYTHON DEVELOPER
[email protected]
Location: New York, New York, USA
Relocation: Open
Visa: H1B
Experienced in Python with proven expertise in adopting new tools and technical developments (libraries used: Beautiful Soup, Jasy, NumPy, SciPy, Matplotlib, Pickle, PySide, python-twitter, Pandas DataFrame, NetworkX, urllib2, and MySQLdb for database connectivity).
Good experience in developing web applications implementing the Model-View-Controller (MVC) architecture using the Django, Flask, and Pyramid Python web application frameworks.
Implemented a real-time health monitoring FastAPI endpoint for robust system reliability and performance insights.
Crafted a Flask REST API for continuous health checks on microservices, deployed on AWS Fargate for high
availability.
Developed a Django Rest Framework (DRF)-based Data Catalog API, providing centralized catalog management.
Engineered Python Spark jobs for efficient data extraction from S3, leveraging Databricks File System for optimized
access.
Orchestrated data processing workflows with Apache Airflow and Databricks scheduler for streamlined execution.
Successfully migrated Snowflake PySpark jobs and data to the Databricks File System (DBFS), ensuring compatibility and optimization.
Employed CloudFormation scripts for creating and managing cloud resources, following infrastructure-as-code (IaC) principles.
Deployed Data Catalog API on AWS Fargate, utilizing AWS services like S3, Secrets Manager, Fargate, and Django
for seamless management.
Implemented AWS IAM, KMS, WAF, CloudTrail, and Config for secure API access and compliance.
Configured Amazon VPC for isolated and secure API resources, enhancing network security.
Architected AWS Step Functions workflows for seamless microservices orchestration in Python.
Engineered Python-based AWS Lambda functions for precise task execution within AWS serverless architecture.
Integrated API Gateway with Python Lambda functions, establishing RESTful APIs for streamlined communication.
Utilized AWS Route 53 to expose Flask and Django APIs to frontend clients, optimizing accessibility and user
experience.
Developed and executed comprehensive test cases using Pytest to validate data processing correctness post-migration (see the Pytest sketch after this summary).
Implemented a robust CI/CD pipeline using Groovy in Jenkins, Ansible, and Docker for PySpark jobs and
Flask/Django API applications.
Enhanced development workflows with an efficient CI/CD pipeline, incorporating automated unit tests using Pytest.
Leveraged AWS Lambda functions for developing and deploying serverless applications, optimizing code execution.
Applied advanced data transformation techniques using Python's Pandas and NumPy libraries for effective
preprocessing of raw datasets.
Automated testing, building, and deployment processes for Spark jobs and APIs, ensuring consistency through CI/CD
Jenkins pipeline.
Containerized and deployed ETL and REST services on AWS ECS, embracing DevOps practices.
Experienced in creating AWS IAM roles and Security Groups in public and private subnets within a VPC. Created AWS Route 53 records to route traffic between different regions.
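
Below is a minimal sketch of the kind of Pytest case referenced in the summary for validating data-processing correctness after a migration; the normalize_prices transform and its column names are hypothetical stand-ins, not actual project code.

```python
# Hypothetical Pytest cases validating a data-processing step after migration.
# `normalize_prices` is an illustrative transform, not the actual project code.
import pandas as pd


def normalize_prices(df: pd.DataFrame) -> pd.DataFrame:
    """Example transform: drop null rows and scale prices to integer cents."""
    out = df.dropna(subset=["price"]).copy()
    out["price_cents"] = (out["price"] * 100).round().astype(int)
    return out


def test_nulls_are_dropped():
    df = pd.DataFrame({"price": [1.50, None, 2.25]})
    result = normalize_prices(df)
    assert len(result) == 2


def test_prices_scaled_to_cents():
    df = pd.DataFrame({"price": [1.50]})
    result = normalize_prices(df)
    assert result.loc[0, "price_cents"] == 150
```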
TECHNICAL SKILLS
Programming Languages: C, C++, Python 3.x & 2.x, SQL, Gremlin, and Shell Scripting
Python Libraries: Django, Flask, Beautiful Soup, PySpark, PyMongo, SQLAlchemy, Pandas, NumPy, httplib2, Jinja2, Matplotlib, Pickle, SciPy, wxPython, PyTables, pdb
Frameworks: Django, PySpark, web2py, Pyramid, Flask
Technologies: Databricks, Apache Airflow, FastAPI, Flask, Django Rest Framework, Elasticsearch, Kibana, CloudFormation, Fargate, S3, Secrets Manager, IAM, KMS, WAF, CloudTrail, Config, Amazon VPC, Step Functions, Lambda, API Gateway, ECS, CI/CD Jenkins, Snowflake, Test Automation, DevOps, GitHub, MySQL, Kubernetes, Datadog, Route 53, Athena, Glue, PyCharm, Visual Studio Code, Linux, Shell Scripting, JIRA, PostgreSQL, NumPy, Pandas, Kafka, XML, Avro, JSON, GitLab, GitHub Actions
Protocols: TCP/IP, HTTP/HTTPS, SMTP
IDEs / Development Tools: VS Code, PyCharm, and Sublime Text
Version Control: Git (GitHub)
Deployment Tools: Jenkins, Ansible, Docker
Tracking Tools: JIRA, Azure DevOps
Methodologies: Agile, Kanban
Databases: Microsoft SQL Server, MySQL, PostgreSQL, Oracle, DynamoDB, MongoDB, Elasticsearch, Amazon Neptune (graph database)
Operating Systems: Linux/Unix, CentOS, Amazon Linux, Windows variants
Cloud Environment: AWS (EC2, ELB, VPC, RDS, AMI, IAM, Fargate, S3, Secrets Manager, KMS, WAF, CloudTrail, CloudWatch, ALB, Config, Step Functions, Lambda, API Gateway, ECS, Redshift), CI/CD Jenkins
Professional Experience:
Client: American Airlines March 2023 - Present
PySpark/Python Developer
Responsibilities:
Developed Python Spark jobs to extract and process data from S3, leveraging the Databricks File System layer for optimized data access (see the PySpark sketch after this list).
Implemented data processing workflows, scheduled and orchestrated using Apache Airflow and the Databricks scheduler for efficient job execution (Airflow DAG sketched after this list).
Implemented a Python FastAPI endpoint dedicated to monitoring and reporting the health of the various services within the architecture, providing real-time insight into system reliability and performance (endpoint sketched after this list).
Automated the testing, building, and deployment processes, ensuring consistent and reliable releases of Spark jobs
upon pipeline completion.
Utilized Elasticsearch to centralize and analyze Spark job logs, enhancing visibility into job execution.
Visualized the status of Spark jobs through Kibana dashboards, providing real-time insights for monitoring and
performance analysis.
Created and managed cloud resources using CloudFormation scripts, ensuring infrastructure as code for scalability
and reproducibility.
Developed a Data Catalog API using Python and Django Rest Framework, providing a centralized catalog for data
assets.
Deployed the Data Catalog API on AWS Fargate services, ensuring high availability and scalability.
Leveraged AWS services such as S3, Secrets Manager, and Fargate for seamless deployment and management of
applications.
Implemented AWS Identity and Access Management (IAM) to control and manage access to APIs, enhancing security
and compliance.
Utilized AWS Key Management Service (KMS) for encryption of sensitive API data, ensuring data protection in
transit and at rest.
Employed AWS WAF (Web Application Firewall) to protect APIs against common web exploits and security
vulnerabilities.
Implemented AWS CloudTrail for logging API activities, ensuring auditability and compliance with security policies.
Employed AWS Config to assess, audit, and evaluate the configurations of API-related AWS resources, ensuring
adherence to best practices.
Configured Amazon VPC (Virtual Private Cloud) to isolate and secure API resources, enhancing network security
and control.
Architected AWS Step Functions workflows to orchestrate microservices seamlessly in Python.
Engineered Python-based AWS Lambda functions for precise task execution within AWS serverless architecture.
Integrated API Gateway with Python Lambda functions, establishing RESTful APIs for streamlined communication.
Containerized and deployed the ETL and REST services on AWS ECS through the CI/CD Jenkins pipeline.
Successfully migrated existing Snowflake PySpark jobs and associated data to the Databricks File System (DBFS)
layer.
Developed comprehensive test cases to validate the correctness of data processing after migrating from Snowflake to
Databricks.
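
A minimal sketch of a PySpark job of the kind described above, reading from S3 through the Databricks File System layer; the mount point, paths, and column names are illustrative assumptions.

```python
# Minimal sketch of a PySpark job reading from S3 through the DBFS layer.
# Mount point, paths, and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-extract").getOrCreate()

# On Databricks, an S3 bucket mounted at /mnt/raw is read via the dbfs: scheme.
df = spark.read.parquet("dbfs:/mnt/raw/flights/")

# Example transformation: keep delayed flights and aggregate by carrier.
delayed = (
    df.filter(F.col("dep_delay") > 15)
      .groupBy("carrier")
      .agg(F.count("*").alias("delayed_flights"))
)

delayed.write.mode("overwrite").parquet("dbfs:/mnt/curated/delayed_by_carrier/")
```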
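
The orchestration bullet could correspond to a DAG along these lines, assuming Airflow 2.x with the Databricks provider installed; the job id and connection name are placeholders.

```python
# Sketch of an Airflow DAG triggering a Databricks job run; the job_id and
# connection id are placeholders, not values from the actual project.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

with DAG(
    dag_id="spark_extract_daily",
    start_date=datetime(2023, 3, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_spark_job = DatabricksRunNowOperator(
        task_id="run_s3_extract",
        databricks_conn_id="databricks_default",
        job_id=12345,  # placeholder Databricks job id
    )
```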
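
And a minimal sketch of the health-check endpoint, assuming httpx for outbound probes; the downstream service URLs are hypothetical.

```python
# Minimal sketch of a FastAPI health-check endpoint; the downstream service
# list is hypothetical.
import httpx
from fastapi import FastAPI

app = FastAPI()

SERVICES = {"catalog": "http://catalog.internal/health"}  # placeholder URLs


@app.get("/health")
async def health() -> dict:
    """Report liveness of this service and reachability of its dependencies."""
    statuses = {}
    async with httpx.AsyncClient(timeout=2.0) as client:
        for name, url in SERVICES.items():
            try:
                resp = await client.get(url)
                statuses[name] = "up" if resp.status_code == 200 else "degraded"
            except httpx.HTTPError:
                statuses[name] = "down"
    return {"status": "ok", "dependencies": statuses}
```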
Environment: AWS, macOS, Python 3.7, PySpark, Databricks File System (DBFS), Apache Airflow, FastAPI, Elasticsearch, Kibana, CloudFormation, Django Rest Framework, AWS Fargate, AWS S3, Secrets Manager, AWS IAM, AWS KMS, AWS WAF, AWS CloudTrail, AWS Config, Amazon VPC, AWS Step Functions, AWS Lambda, AWS API Gateway, AWS ECS, CI/CD Jenkins, Snowflake, Test Automation.
Client: Target, India March 2020 - Oct 2022
Python Developer
Responsibilities:
Designed and developed PySpark jobs for data processing, transforming raw data into meaningful insights.
Deployed PySpark jobs on AWS Glue, ensuring efficient and scalable data processing.
Visualized processed data by creating AWS Athena tables, providing easy access for analysis and reporting.
Developed a robust Flask REST API using Flask-RESTful for performing constant health checks on the microservices present in the architecture (sketched after this list).
Deployed the Flask API on AWS Fargate services, ensuring high availability and scalability.
Utilized AWS Route 53 to expose the Flask API to frontend clients, enhancing accessibility and user experience.
Integrated CloudWatch with PySpark jobs and Flask API to monitor and analyze system logs.
Utilized CloudWatch logs for effective debugging and troubleshooting of PySpark jobs and API applications.
Created a robust CI/CD pipeline using Groovy in Jenkins, Ansible, and Docker.
Automated the testing, building, and deployment processes, ensuring consistent and reliable releases.
Enhanced development workflows with an efficient CI/CD pipeline for both PySpark jobs and Flask API applications.
Implemented comprehensive unit tests using Pytest, ensuring the robustness and reliability of both PySpark jobs and
Flask API applications.
Orchestrated data transformation processes using AWS Step Functions to execute Lambda functions.
Developed Lambda functions to convert raw XML data into Avro JSON format, ensuring efficient and standardized data representation (handler sketched after this list).
Utilized AWS Step Functions to sequence and coordinate the execution of Lambda functions, optimizing the data
transformation workflow.
Implemented Kafka topics to serve as data pipelines for downstream teams, facilitating real-time data distribution (producer sketched after this list).
Collaborated with downstream teams to understand data consumption requirements, tailoring Kafka topics for efficient
data transfer.
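
A minimal sketch of the Flask-RESTful health-check API described above; the service URLs are placeholders.

```python
# Sketch of a Flask-RESTful health-check resource; endpoint URLs are placeholders.
import requests
from flask import Flask
from flask_restful import Api, Resource

app = Flask(__name__)
api = Api(app)

SERVICES = {"orders": "http://orders.internal/health"}  # placeholder URLs


class Health(Resource):
    def get(self):
        statuses = {}
        for name, url in SERVICES.items():
            try:
                ok = requests.get(url, timeout=2).status_code == 200
                statuses[name] = "up" if ok else "degraded"
            except requests.RequestException:
                statuses[name] = "down"
        return {"status": "ok", "dependencies": statuses}, 200


api.add_resource(Health, "/health")

if __name__ == "__main__":
    app.run()
```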
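
The XML-to-JSON Lambda might look roughly like this sketch, assuming the Step Functions input carries the S3 object location; the bucket, key, and field names are hypothetical.

```python
# Sketch of a Lambda handler converting raw XML records into JSON following an
# Avro-style schema; the field names and S3 locations are assumptions.
import json
import xml.etree.ElementTree as ET

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    # Expect the source object location in the Step Functions input.
    bucket, key = event["bucket"], event["key"]
    xml_body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    root = ET.fromstring(xml_body)
    records = [
        {"id": item.findtext("id"), "name": item.findtext("name")}
        for item in root.iter("item")
    ]

    out_key = key.rsplit(".", 1)[0] + ".json"
    s3.put_object(Bucket=bucket, Key=out_key, Body=json.dumps(records))
    return {"bucket": bucket, "key": out_key, "count": len(records)}
```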
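
Publishing to the downstream Kafka topics could be sketched as follows with kafka-python; the broker address and topic name are placeholders.

```python
# Sketch of publishing transformed records to a Kafka topic for downstream
# consumers; broker address and topic name are placeholders.
import json

from kafka import KafkaProducer  # kafka-python

producer = KafkaProducer(
    bootstrap_servers=["localhost:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

record = {"id": "42", "name": "example"}
producer.send("curated-records", value=record)
producer.flush()
```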
Environment: Python 3.6, AWS, DevOps, Flask, Django, PySpark, GitHub, Jenkins, Ansible, MySQL, Kubernetes, Amazon Web Services (AWS), S3, Lambda, Step Functions, Datadog, Route 53, Athena, Glue, PyCharm, Visual Studio Code, Linux, Shell Scripting, JIRA.
Client: S&P GLOBAL, India July 2016 - Feb 2020
Python Developer
Responsibilities:
Leveraged AWS Lambda functions to develop and deploy serverless applications, optimizing code execution and
resource utilization.
Applied advanced data transformation techniques using Python's Pandas and NumPy libraries to preprocess and reshape raw datasets effectively (see the Pandas sketch after this list).
Improved the efficiency of business processes by implementing and orchestrating complex workflows with AWS Step
Functions.
Designed and implemented Python RESTful APIs using AWS API Gateway, fostering seamless communication
between different components of the system.
Integrated AWS API Gateway with Lambda functions to create scalable and cost-effective solutions.
Designed and developed a data management system using PostgreSQL.
Used the Django ORM to access database objects and store data.
Wrote Python scripts to parse XML documents and load the data into the database (model and loader sketched after this list).
Developed robust and scalable backend services using both Flask and Django frameworks.
Implemented APIs, authentication, and authorization mechanisms within Flask and Django applications.
Expertise in writing constraints, indexes, views, SQL stored procedures, cursors, triggers, and user-defined functions.
Created unit tests using the Pytest framework for existing and new code.
Responsible for debugging and troubleshooting the REST APIs.
Developed views and models with Python and Django to create backend Rest APIs.
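
A minimal sketch of the Pandas/NumPy preprocessing described above; the column names and cleaning rules are illustrative.

```python
# Sketch of Pandas/NumPy preprocessing; columns and rules are illustrative.
import numpy as np
import pandas as pd

raw = pd.DataFrame(
    {"ticker": ["AAA", "BBB", None], "price": ["10.5", "bad", "7.25"]}
)

# Drop rows missing an identifier, then clean the price column.
clean = raw.dropna(subset=["ticker"]).copy()
# Coerce malformed prices to NaN, then impute with the column median.
clean["price"] = pd.to_numeric(clean["price"], errors="coerce")
clean["price"] = clean["price"].fillna(clean["price"].median())
# Derive a log-scaled feature with NumPy for downstream use.
clean["log_price"] = np.log1p(clean["price"])
print(clean)
```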
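
And a sketch of a Django ORM model plus an XML loader of the kind described above, assuming it lives inside a configured Django app; the model fields and XML tags are assumptions.

```python
# Sketch of a Django model plus a loader that parses XML and stores rows via
# the ORM; model fields, app label, and XML tags are assumptions.
import xml.etree.ElementTree as ET

from django.db import models


class Filing(models.Model):
    ticker = models.CharField(max_length=10)
    filed_on = models.DateField()

    class Meta:
        app_label = "catalog"  # placeholder app


def load_filings(xml_path: str) -> int:
    """Parse an XML document and persist each <filing> element."""
    root = ET.parse(xml_path).getroot()
    rows = [
        Filing(ticker=el.findtext("ticker"), filed_on=el.findtext("filed_on"))
        for el in root.iter("filing")
    ]
    Filing.objects.bulk_create(rows)
    return len(rows)
```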
Environment: Python 3, Django, Flask, PostgreSQL, AWS S3, AWS API Gateway, AWS Step Functions, AWS Lambda, RESTful APIs.