
Ranaga - AWS Data Engineer / AWS Data Architect - Snowflake and ETL
[email protected]
Location: Arizona City, Arizona, USA
Relocation: Open
Visa: H1B
Experience in business information systems, focusing on database architecture,
data modeling, data analysis, programming, and application integration.
Experience in creating tables, views, stored procedures, and indexes; troubleshooting
database issues; and performance tuning of stored procedures and databases.
Strong data warehousing ETL experience using Informatica PowerCenter 10 tools:
Mapping Designer, Repository Manager, and Workflow Manager/Monitor.
Good experience migrating data solutions from other databases to Snowflake, GCP
BigQuery, and AWS Redshift using various cloud services.
Experience with Snowflake multi-cluster warehouses and in-depth knowledge of
Snowflake database, schema, and table structures.
Expertise in Snowflake data modeling, ELT using Snowflake SQL, and implementing complex
stored procedures and standard DWH and ETL concepts using the Snowpark API with
Python (see the sketch after this summary).
Created tables, views, complex stored procedures, and functions in Redshift, BigQuery,
and Snowflake for consumption by client applications responsible for BI reporting and
analytics.
Developed data ingestion modules to load data into various layers in S3 and Redshift using AWS
Glue, AWS Lambda, and AWS Step Functions.
Designed and implemented a CI/CD pipeline using GitLab, ensuring efficient and
seamless deployment of code changes.
Created Spark Glue jobs to implement data transformation logic in AWS and stored the output
in a Redshift cluster.
Extensively worked on Spark with Scala on EMR and developed data ingestion using the Spark
big data framework with Scala and Python.
Hands-on experience creating real-time data streaming solutions using Apache Spark
Core, Spark SQL, DataFrames, Kafka, and Spark Streaming.
Utilized Apache Spark with Python to develop and execute big data analytics and
machine learning applications, and implemented machine learning use cases with Spark ML and
MLlib.
Utilized Spark, Scala, Spark Streaming, MLlib, and Python with a broad variety of machine learning
methods, including classification, regression, clustering, and dimensionality reduction.
Intermediate-level expertise with the NoSQL database MongoDB.
Experience building and architecting multiple data pipelines and transformations using
GCP services: Cloud Storage, BigQuery, Dataflow, Composer, and Dataproc.
Experience in creating Data Governance Policies, Data Dictionary, Reference Data,
Metadata, Data Lineage, and Data Quality Rules.
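Illustrative sketch of the Snowpark ELT pattern referenced in this summary. The connection parameters, table names, and transformation step are placeholders for illustration only, not actual project objects.

# Minimal Snowpark sketch: register a Python stored procedure that copies
# active rows from a staging table into a reporting table.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, current_timestamp
from snowflake.snowpark.types import StringType

# Placeholder connection details -- replace with real account values.
connection_parameters = {
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}
session = Session.builder.configs(connection_parameters).create()

def load_active_rows(session: Session, source_table: str, target_table: str) -> str:
    """Copy active rows from a staging table into a reporting table."""
    staged = session.table(source_table).filter(col("IS_ACTIVE") == True)
    transformed = staged.with_column("LOAD_TS", current_timestamp())
    transformed.write.mode("overwrite").save_as_table(target_table)
    return f"Loaded {transformed.count()} rows into {target_table}"

# Register the function as a stored procedure so it can also be called from SQL.
session.sproc.register(
    func=load_active_rows,
    name="LOAD_ACTIVE_ROWS",
    input_types=[StringType(), StringType()],
    return_type=StringType(),
    packages=["snowflake-snowpark-python"],
    replace=True,
)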
Software Profile:
Operating Systems: Windows 10, 11
Big Data and Cloud Technologies:
    Snowflake - SnowSQL, Snowpipe, Snowpark API, REST endpoint APIs
    AWS - S3, Lambda, Glue, EMR, Athena, Redshift, Step Functions
    GCP - Cloud Storage, BigQuery, Dataflow, Composer, Dataproc
BI and Scheduler Tools: SSDT, SSIS, SSRS, SSAS, Power BI, DAX, MDX, AutoSys, Airflow
Databases: SQL Server 2008/2012/2016, Oracle 11g
API Integration Tools: MuleSoft Anypoint Studio, RAML, CloudHub, Runtime Manager
Programming Languages: VB.NET, C#.NET, ASP.NET, Web Services, Scala, Python
Internet Technologies: ReactJS, HTTP, REST, XML, Flask, JSON
Repository Tools: Microsoft TFS 2013, Maven, SVN, GitHub
Panasonic Avionics Corp Irvine, California. 03/2023-12/2023
Role: Data Solution Architect
Roles & Responsibilities:
Liaised with users to analyze business requirements and translate them into detailed
conceptual, logical, and physical data models for the web application, using a snowflake
schema in PostgreSQL.
Develop a detailed data mapping document for the data interface between the source and
target data systems.
Designed a data warehouse using the Data Vault 2.0 methodology to provide
long-term historical storage of data coming in from multiple systems.
Worked with the data warehouse team on the development and execution of data conversion, data
cleansing, and standardization strategies and plans as several small tables were combined into
a single Master Data Management (MDM) repository.
Designed and implemented data integration processes between cloud and on-premises
systems.
Designed and implemented a CI/CD pipeline using GitLab, ensuring efficient and
seamless deployment of code changes.
Created and demonstrated the data sharing architecture using AWS Lake Formation, AWS Data
Exchange, and data mesh principles.
Designed and developed analytics reports using Redshift Spectrum, external tables,
materialized views, views, and QuickSight.
Converted and transformed complex Glue jobs into Redshift user-defined functions and
views.
Designed, developed, and implemented the web application using Flask, Python, ReactJS,
HTML, JavaScript, and CSS, which provides a web interface to map the Panasonic products
sold or installed on each aircraft.
Implemented authentication and authorization for the web app using AWS Cognito tokens
via next cloud.
Created PySpark Glue jobs to implement data transformation logic in AWS and stored
the data in a Redshift cluster (a sketch follows this list).
Developed data ingestion modules to load data into various layers in S3 and Redshift using AWS
Glue, AWS Lambda, and AWS Step Functions.
Involved in creating Data Governance Policies, Data Dictionary, Reference Data, Metadata,
Data Lineage, and Data Quality Rules.
Implemented a serverless architecture using API Gateway and Lambda for data consumption
from Amazon Redshift (see the sketch at the end of this section).
Developed Spark programs using Scala and Python to process complex structured and
unstructured data sets.
Created data pipelines integrating Kafka with Spark Streaming applications written in
Scala.
Utilized Spark, Scala, Spark Streaming, MLlib, and Python with a broad variety of machine
learning methods, including classification, regression, clustering, and dimensionality reduction.
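Illustrative sketch of the PySpark Glue pattern described in this section. The database, table, connection, bucket, and column names are placeholders rather than the actual project objects.

# Minimal Glue PySpark sketch: read a raw Data Catalog table, transform it,
# and write the curated output to Redshift through a Glue connection.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.dynamicframe import DynamicFrame
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw layer registered in the Glue Data Catalog (placeholder names).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="aircraft_products"
).toDF()

# Example transformation: standardize a column name, drop bad records, stamp the load date.
curated = (
    raw.withColumnRenamed("prod_cd", "product_code")
       .filter(F.col("product_code").isNotNull())
       .withColumn("load_date", F.current_date())
)

# Write to Redshift; the Redshift writer stages data through S3.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=DynamicFrame.fromDF(curated, glue_context, "curated"),
    catalog_connection="redshift-connection",  # assumed Glue connection name
    connection_options={"dbtable": "curated.aircraft_products", "database": "analytics"},
    redshift_tmp_dir="s3://example-bucket/temp/",
)
job.commit()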
Environment: AWS S3, Step Functions, Glue, Athena, Redshift, Apache Spark, PySpark, Python,
ReactJS, Flask, HTML, JavaScript, Django, CSS, Redshift Spectrum, Lake Formation,
Data Exchange, ECS, ECR, Boto3, PostgreSQL, GitLab, Secrets Manager, QuickSight,
JSON, YAML, API Gateway, Lambda, Scala, Kafka.
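A minimal sketch of the serverless consumption pattern noted above: API Gateway invokes a Lambda function that queries Redshift through the boto3 Redshift Data API. The cluster, database, user, and SQL are assumptions for illustration only.

import json
import time
import boto3

client = boto3.client("redshift-data")

def lambda_handler(event, context):
    # Submit the query to the cluster (placeholder identifiers).
    submitted = client.execute_statement(
        ClusterIdentifier="analytics-cluster",
        Database="analytics",
        DbUser="api_reader",
        Sql="SELECT product_code, install_count FROM curated.aircraft_products LIMIT 100",
    )
    statement_id = submitted["Id"]

    # Poll until the statement finishes (simplified; a production design might
    # hand this off to Step Functions instead of sleeping in the handler).
    while True:
        status = client.describe_statement(Id=statement_id)
        if status["Status"] in ("FINISHED", "FAILED", "ABORTED"):
            break
        time.sleep(0.5)

    if status["Status"] != "FINISHED":
        return {"statusCode": 500, "body": json.dumps({"error": status.get("Error")})}

    # Flatten the Data API result records into plain lists for the API response.
    result = client.get_statement_result(Id=statement_id)
    rows = [[list(field.values())[0] for field in record] for record in result["Records"]]
    return {"statusCode": 200, "body": json.dumps({"rows": rows})}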
Invesco Atlanta 01/2018-02/2023
Offshore - Hyderabad, India.
Role: Sr. Engineer, Technology / Data Architect
GCCP (Global Client Communication Platform) is a client reporting application used to generate
various kinds of reports, such as client books and factsheets, for Invesco's global institutional clients
located across APAC, EMEA, and the USA. The system allows the Reporting, Marketing, and
Compliance teams to review and approve documents and finally publish them to external
clients and the Invesco website.
Roles & Responsibilities:
Liaise with EMEA and APAC business subject matter experts in analyzing business
requirements and translating them into detailed conceptual data models, process models,
logical models, physical models, and schema development in the database.
Recommend the appropriate reporting solution to end-users based on need. Assist
business units with prioritization of outstanding DW development projects.
Responsible for understanding the existing legacy Reporting platform, documenting, and
translating the requirements into system solutions, and developing the migration plan to
cloud platform as per the schedule.
Migrated data solutions from other databases to Snowflake and
AWS Redshift using various AWS services.
Implemented incremental loads inside Snowflake using Tasks, Streams, Pipes, and
stored procedures.
Created complex ETL Snowflake stored procedures and functions using the Snowpark API
with Python.
Built and maintained data pipelines using dbt, Snowflake, and Airflow (see the sketch after this list).
Created complex reporting and analytical Snowflake stored procedures and functions
using the Snowpark API with Python for consumption by Mule APIs.
Created tables, views, complex stored procedures, and functions in Redshift for the
consumption of client applications responsible for BI Reporting and Analytics.
Developed data ingestion modules to load data into various layers in S3 and Redshift using AWS
Glue, AWS Lambda, and AWS Step Functions.
Created PySpark Glue jobs to implement data transformation logic in AWS and stored the
output in a Redshift cluster.
Implemented multiple data pipeline DAGs and maintenance DAGs in Airflow.
Created job flows in Airflow using Python and automated the jobs.
Validated the data migrated from SQL Server to Snowflake to ensure an apples-to-apples
match.
Built and architected multiple data pipelines and end-to-end ETL and ELT processes for data
ingestion and transformation using GCP services: Cloud Storage, BigQuery, Dataflow, Composer,
and Dataproc (see the sketch at the end of this section).
Designed and architected various layers of the data lake and designed the star schema in BigQuery.
Developed Spark programs using Scala and Python to process complex structured and
unstructured data sets.
Involved in extracting, transforming, and loading (ETL) data from various sources into
data warehouses and data marts using Informatica PowerCenter (Repository Manager,
Designer, Workflow Manager, Workflow Monitor, Metadata Manager) on Oracle and
SQL Server databases.
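Illustrative sketch of the dbt/Snowflake/Airflow orchestration referenced above. The DAG id, schedule, file paths, and commands are placeholders, not the production DAG.

# Minimal Airflow DAG sketch: stage source extracts, build Snowflake models
# with dbt, then run dbt tests before the reporting layer is consumed.
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-engineering",
    "retries": 1,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="gccp_daily_load",  # hypothetical DAG name
    default_args=default_args,
    start_date=datetime(2022, 1, 1),
    schedule_interval="0 6 * * *",
    catchup=False,
) as dag:
    stage_sources = BashOperator(
        task_id="stage_sources",
        bash_command="python /opt/pipelines/stage_sources.py",  # assumed staging script
    )
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/gccp --profiles-dir /opt/dbt",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/gccp --profiles-dir /opt/dbt",
    )

    stage_sources >> dbt_run >> dbt_test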
Environment: AWS S3, Glue, Athena, Redshift, PySpark, SQL Server 2016, Scala, SSIS, dbt,
Snowflake, Python, Snowpark API, REST endpoint APIs, Apache Airflow,
Informatica 10, GCP, BigQuery, Cloud Dataflow, Dataproc, Cloud SQL, Pub/Sub,
Composer.
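A minimal sketch of the GCP ingestion pattern described in this section: load a Cloud Storage extract into a BigQuery staging table, then populate a star-schema fact table. The project, dataset, table, and bucket names are hypothetical.

from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # assumed project id

# 1. Load the raw extract from Cloud Storage into a staging table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/exports/holdings.csv",
    "example-project.staging.holdings_raw",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    ),
)
load_job.result()  # wait for the load to finish

# 2. Transform the staging data into the fact table of the star schema.
transform_sql = """
    INSERT INTO `example-project.reporting.fact_holdings`
        (client_key, fund_key, as_of_date, market_value)
    SELECT c.client_key, f.fund_key, r.as_of_date, r.market_value
    FROM `example-project.staging.holdings_raw` r
    JOIN `example-project.reporting.dim_client` c ON c.client_id = r.client_id
    JOIN `example-project.reporting.dim_fund` f ON f.fund_id = r.fund_id
"""
client.query(transform_sql).result()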
Invesco Atlanta. 07/2012-12/2017
Offshore - Hyderabad, India.
Role: Sr Engineer, Technology
Roles & Responsibilities:
Liaise with EMEA and APAC business subject matter experts in analyzing business
requirements and translating them into detailed conceptual data models, process models,
logical models, physical models, and schema development in the database.
Recommend the appropriate reporting solution to end-users based on need. Assist
business units with prioritization of outstanding DW development projects.
Architect the report data mart; review and maintain the schema, its tables, indexes, views,
and functions in SQL Server.
Gather requirements, and design and maintain ETL processes involving data quality,
testing, and information delivery and access to the data warehouse. Coordinate quality
assurance and system testing, assist with systems implementations, and evaluate the
results.
Expert in OLAP and OLTP processes and UNIX shell scripts.
Interact with vendors (technical issues, project initiatives) independently as necessary and
act as the point person for issue escalation.
Responsible for application upgrades and implementation: identify new functionality
and/or hardware requirements, create test plans, review and validate functionality,
report any problems, and create and manage cutover plans including
downtime.
Write, analyze, and modify complex reports using Desknet and Microsoft Power BI. Tune
and troubleshoot complex SQL queries to decrease execution runtimes for online
reporting.
Design ETL packages using SSIS for ETL loads and generate Excel extracts on a scheduled basis.
Created reports in Power BI utilizing SSAS Tabular models for the Invesco
Marketing/Presentations team.
Involved in analyzing, designing, building, and testing OLAP cubes with SSAS and in
adding calculations using MDX.
Extensively involved in SSAS storage, partitions, and aggregations, and in developing
calculations and reports using MDX and SQL.
Extensive use of stored procedures: wrote new stored procedures, T-SQL, and UDFs,
and modified and tuned existing ones so that they perform well.
Assisted with documentation and code deployments across Development, QA, and
Production environments.
Environment: SQL Server 2012, SSIS, SSAS, shell scripts, XML, TFS, MS .NET web services,
Desknet Content Welder 8.2, Power BI, MDX, DAX.
CSSI (currently Vertafore), Hyderabad, India. 05/2009-06/2012
Role: Sr. Software Engineer
Client: Golden Rule Insurance, USA.
VUE Compensation Management is a powerful, flexible, and intuitive tool that makes it easy for
insurance organizations to organize and streamline complex commission and incentive programs.
The system creates the reports and information necessary to manage the business and build
strategic business knowledge. Agent Management, Agent License, Agent Appointment, Agent
Contract, Carrier, Product, Insured, Plan, Financial, Reports, Tracking and Controlling, Admin, and
Document Management are the different components present in VUE.
Roles & Responsibilities:
Working with business/functional analysts to determine ETL requirements.
Involved in the ETL process and developed 40+ medium-to-complex SSIS packages.
Migrated data from Flat files, CSV files, XML and Excel spreadsheets to SQL Server
databases using Integration Services (SSIS).
Extensive use of stored procedures: wrote new stored procedures, T-SQL, and UDFs,
addressed and fixed performance issues associated with stored procedures, and
created proper indexes to support queries.
Create package configurations, deploy Integration Services projects and schedule SSIS
packages via jobs.
Created and maintained reports in SSRS, writing various types of reports such as drill-down,
parameterized, cascading, table, matrix, chart, and sub-reports using SQL Server Reporting
Services.
Assisted with documentation and code deployments across Development, QA, and
Production environments.
Environment: SQL Server 2005/2008, SSIS, SSRS, flat files, XML, CSV, VBScript.
HSBC - Hyderabad, India. 06/2008-04/2009
Role: Software Engineer
The Broker Unit Database is used to record data related to the brokers who bring business to the
company in the form of customers, who are referred to as Introductions. The main purpose and
scope of the Broker Unit Database system is to record the details of brokers, professionals, and
Introductions and to generate the different types of reports required by the business.
Roles & Responsibilities:
Involved in the design and development of all modules of the application.
Designed and coded the presentation layer using ASP.NET Web Forms, JavaScript, and
C#.Net.
Used ADO.NET for database interactions.
Extensively wrote stored procedures for database manipulations.
Environment: ASP.NET 2.0, ADO.NET 2.0, SQL Server 2005, IIS 5.0, C#.NET, JavaScript.
Aegis BPO Services Ltd, Hyderabad, India. 02/2007-05/2008
Role: Software Programmer
Verizon Wireless owns and operates the nation's most reliable wireless network.
Headquartered in Bedminster, NJ, Verizon Wireless is a joint venture of Verizon Communications
(NYSE: VZ) and Vodafone (NYSE and LSE: VOD). A Dow 30 company, it is a leader in delivering
broadband and other communication innovations to wireline and wireless customers.
Major Responsibilities:
Involved in the ETL process and developed medium-to-complex SSIS packages.
Migrated data from flat files and Excel spreadsheets to SQL Server databases using Integration
Services (SSIS).
Extensive use of stored procedures: wrote new stored procedures, T-SQL, and UDFs,
and modified and tuned existing ones so that they perform well.
Create package configurations, deploy Integration Services projects and schedule SSIS
packages via jobs.
Environment: SQL Server 2005, SSIS, Excel, VBScript.