
Kedar Nanda - Data Architect
[email protected]
Location: Phoenix, Arizona, USA
Relocation: NO
Visa: H1B
Kedar Kumar Nanda

CCA 175 Spark and Hadoop Developer Certified
AWS Certified Cloud Practitioner
OCA 1Z0-051 Oracle Database 11g: SQL/PLSQL Certified
Data Mining Certified from Arizona State University (ASU)

[email protected] |(276) 323-5230

PROFILE


Over 15 years of professional experience in data architecture and engineering functions: database architecture, modelling and development, data warehousing, data analytics, data models, and reporting.
Good knowledge of US healthcare data.
Working experience with SQL and MPP systems (SQL, Spark SQL, AWS Redshift, SnowSQL, Hive, etc.).
Working Experience in RDBMS Database: Oracle PL/SQL.
Working Experience in Python and PySpark.
Proficiency in advanced SQL, performance tuning.
Experience using the Cassandra NoSQL database with Spark to build complex pipelines.
Scheduling tools: Autosys and Control-M.
Working Experience on Data Warehousing Systems Using Snowflake, AWS Redshift and Oracle PL/SQL.
Build and deploy ETL and data pipelines using Spark, Python, Apache NiFi and Snowflake in conjunction with AWS services such as SQS and SNS; use PySpark to process and transform distributed data.
Expertise with relational databases and experience with schema design and dimensional data modeling.
Experience using business intelligence reporting tools (Alteryx).
Extensive domain knowledge of United States healthcare data.
Good knowledge of AWS 3-tier web application architecture and AWS services: EC2, ELB, ASG, RDS, ElastiCache, S3, CLI, Elastic Beanstalk, CI/CD, CloudFormation, CloudWatch, X-Ray, CloudTrail, SQS, SNS, Kinesis, AWS Lambda, DynamoDB, etc.
Good understanding of OOP concepts and data structures.
Good understanding of Big Data Technology Concepts.
Strong analytical, interpersonal, communication, and problem-solving skills.
Talent Management - I lead a team of 6 Database Developers, Analysts and Senior Analysts.

PROFESSIONAL EXPERIENCE


PRA Health Sciences, Inc. Jan 05, 2021 - Present
Designation: Sr Data Architect
Location: Phoenix, AZ, USA


Primary Skills Used:
Databricks and Snowflake cloud computing
Cloud Services: AWS (S3, CLI, IAM, EC2, ELB, RDS, Redshift, ElastiCache, Elastic Beanstalk, etc.) and Microsoft Azure
Relational databases/tools: Oracle 12c, Toad, SQL Developer, PostgreSQL
Programming Languages: Python, Scala, Oracle PL/SQL, Unix Shell Scripting, PRO*C/C++, Splunk
dbt (Data Build Tool)
Azure Data Factory and Databricks
Scripting Languages/Frameworks: Unix Shell Scripting, Django
SQL and MPP Systems: SQL, Spark SQL & DataFrames, GraphQL, Hive, Pig
ETL Tools: Apache NiFi, Matillion
Data Warehousing Systems: Snowflake, Databricks, Oracle PL/SQL, Toad Data Modeler, Oracle Warehouse Builder (OWB), SQL*Loader
Big Data Technologies: Cloudera Spark, Hadoop, Apache Hive, HDFS, Sqoop
Data Visualization Libraries: NumPy, Pandas, Matplotlib, Seaborn, Plotly, Cufflinks, Choropleth Maps
Reporting Tool: Alteryx
Automation/Job Scheduling Tools: Airflow, Autosys and Control-M
File Transfers: AWS S3 CLI, Microsoft Azure, FTP/SFTP, FileZilla, WinSCP
Operating Systems: Unix, Linux, Windows
Cloud Migration (AWS, GCP)

Job Duties and Responsibilities:
Collaborate with the operations team, business analysts, production support, design analysts, and functional teams to understand, analyze, and gather requirements.
Data Integration and Transformation using Snowflake and Matillion.
Responsible for designing, building, and maintaining various applications' data infrastructure, ensuring application data is stored, managed, and utilized efficiently and securely.
Build robust and scalable data warehouses and data integration ETL pipelines. Use Snowflake to make several layers of the data warehouse available for reporting, data science, and analytics, and build and deliver high-quality data architecture to support business analysis and customer reporting needs.
Build file transfer applications using GCP and AWS Services.
Build large-scale ETL applications using PySpark and Scala Spark.
Migrate data from legacy systems to new systems.
Develop and maintain data governance policies and procedures.
Design and develop operational data storage solutions (ODS) and Dimensional DataMart model storage solutions.
Design and develop data integration ETL data pipelines and streaming pipelines using advanced technologies.
Design and develop highly scalable and extensible EDW, ODSPRD, staging and PR1 applications and self-service platforms to solve business problems by writing advanced and complex SQL queries, with query optimization and join strategies.
Design and develop Oracle fine-grained access control: row/column-level security models with masking and encryption methods.
Design and develop different data models, such as ER models, conceptual/logical/physical models, and relational models, to efficiently build and optimize data warehouses.
Generate QC reports to identify system bottlenecks, data anomalies, trend differences, and frequency counts, and determine whether they are within the specifications of the final delivery for high-revenue, business-critical warehouses.
Develop automation frameworks to expedite QC reports, trending reports, and ad hoc reporting requirements to validate the data, while building and deploying large-scale, complex data processing pipelines using advanced technologies.
Create descriptive reports and trend reports, and automate these processes to run at weekly, monthly, or custom intervals.
Design, build and deploy large-scale, complex data processing pipelines (see the illustrative sketch below).
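Illustrative sketch only: a minimal PySpark job of the kind described above, moving raw files into a cleaned, partitioned staging layer. Bucket names, paths, and column names are hypothetical placeholders, not details from any client project.

from pyspark.sql import SparkSession, functions as F

# Minimal illustrative ETL: raw claims CSV from S3 -> cleaned, partitioned Parquet staging layer.
spark = SparkSession.builder.appName("claims_staging_etl").getOrCreate()

raw = (spark.read
       .option("header", True)
       .option("inferSchema", True)
       .csv("s3://example-raw-bucket/claims/2021/"))  # hypothetical source path

cleaned = (raw
           .dropDuplicates(["claim_id"])  # hypothetical key column
           .filter(F.col("claim_amount") > 0)
           .withColumn("service_date", F.to_date("service_date", "yyyy-MM-dd"))
           .withColumn("load_ts", F.current_timestamp()))

# Staging layer partitioned by service month; downstream layers (e.g., a Snowflake
# reporting schema) would be loaded from this output.
(cleaned
 .withColumn("service_month", F.date_format("service_date", "yyyy-MM"))
 .write.mode("overwrite")
 .partitionBy("service_month")
 .parquet("s3://example-staging-bucket/claims/"))  # hypothetical target path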



Harman (A Samsung Company) July 08, 2013 - Jan 04, 2021

Designation: Sr Data Engineer (Architect)
Duration: 7 Years 7 Months
Location: Phoenix, AZ, USA


Primary Skills Used:
Oracle SQL, PL/SQL, Oracle 12c, Apache Hadoop, HDFS, TOAD, Toad Data Modeler, Spark, Oracle Warehouse Builder, SQL Developer, Apache Hive, Sqoop, Autosys, Amazon Web Services (AWS), Microsoft Azure, Snowflake, Putty, Unix Shell Scripting, SQL*Loader, Python, PRO*C/C++, RPM, Summus Ticketing Tool, PHAST, Integrated Data Verse (IDV), Prescriber Patient tool, Graphical User Interface tool, Market Definition Tool, SHS Graphical User Interface tool.

SQL and MPP Systems: SQL, Spark SQL & DataFrames, Hive, SnowSQL
RDBMS databases/tools: Oracle 11g/12c, Toad, SQL Developer
NoSQL Database: Cassandra (CQL) in conjunction with Spark
Programming Languages: Python, Oracle PL/SQL, Shell Scripting
Data Warehousing Systems: Snowflake, Oracle PL/SQL, Toad Data Modeler, Oracle Warehouse Builder (OWB), SQL*Loader
Scripting Language: Unix Shell Scripting
Data processing, transformation, and pipelines using PySpark and Python
Big Data Technologies: Cloudera Spark, Hadoop, Apache Hive, HDFS, Sqoop
Data analysis and visualization using Python libraries (NumPy, Pandas, Matplotlib, Seaborn, Plotly, Cufflinks, Choropleth Maps)
AWS Services: IAM, EC2, ELB, Route 53, RDS, ElastiCache, S3, CLI, Elastic Beanstalk, etc.
ETL and Reporting Tool: Alteryx
Workflow Management/Job Scheduling Tools: Autosys and Control-M
File Transfers: AWS S3 CLI, Microsoft Azure, FTP/SFTP, FileZilla, WinSCP

Job Duties and Responsibilities:
Lead multiple database technical teams (Data Warehousing Development, Target and Compensation (T & C), Brand Analytics (BA), Dynamic Claim Analyzer (DCA)).
Collaborate with business analysts, design analysts and other technical/functional teams to understand and gather requirements.
Design and build robust and scalable data integration warehouses and ETL pipelines using advanced technologies. Design operational data storage and multi-dimensional model storage solutions.
Use Snowflake to make several layers of data warehouse available for reporting, Data Science and Analytics.
Build and deliver high quality data architecture to support business analysis, customer reporting needs.
Design and develop conceptual/logical/physical data models. Design ER models, relational models, object-oriented models, etc., for data warehouses. Design multidimensional schemas such as star schema, snowflake schema, or galaxy schema.
Build and deploy large-scale, complex data processing warehouse pipelines using advanced technologies.
Design and develop highly scalable and extensible staging and data warehouse applications and self-service platforms which enable collection, storage, modelling and analysis of massive data sets from structured, semi-structured and unstructured sources (see the illustrative sketch below).
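Illustrative sketch only: a simplified PySpark load of a star-schema dimension and fact table of the kind referenced above; table names, keys, and columns are hypothetical and not taken from any client warehouse.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("star_schema_load").getOrCreate()

# Hypothetical staging input with product attributes plus sales facts.
stg = spark.read.parquet("s3://example-staging/sales/")

# Product dimension with a deterministic surrogate key (a single-partition window is
# acceptable here only because the dimension is small).
dim_product = (stg.select("product_code", "product_name", "brand")
               .dropDuplicates(["product_code"])
               .withColumn("product_key", F.row_number().over(Window.orderBy("product_code"))))

# Fact table references the dimension through the surrogate key.
fact_sales = (stg.join(dim_product.select("product_code", "product_key"),
                       on="product_code", how="inner")
              .groupBy("product_key", "sale_date")
              .agg(F.sum("units").alias("total_units"),
                   F.sum("amount").alias("total_amount")))

dim_product.write.mode("overwrite").parquet("s3://example-warehouse/dim_product/")
fact_sales.write.mode("overwrite").parquet("s3://example-warehouse/fact_sales/")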




Sonata Software July 02, 2012 - July 05, 2013
Designation: Senior Systems Analyst
Client: Sony
Location: Bengaluru, India


Primary Skills Used: Oracle PL/SQL, SQL, Unix Shell Scripting, Toad, Toad Data Modeler, Python, PRO*C/C++

Job Duties and Responsibilities:
Understand and collect requirements from business users or business analysts and convert those requirements into SQL queries and PL/SQL code.
Design and develop various Sony Supply Chain Data Warehouses and Storage solutions using Oracle, SQL, Unix Shell Scripting, Python and PRO*C.
Work in SQL and PL/SQL programming, developing complex code units: PL/SQL packages, procedures, functions, triggers, views and exception handling for retrieving, manipulating, checking and migrating complex data sets in Oracle. Partitioned large tables using the range partitioning technique. Worked extensively on ref cursors, external tables and collections.
Involved in all phases of the SDLC (Software Development Life Cycle), from analysis and design through development, testing, implementation and maintenance, with timely delivery against aggressive deadlines.
Involved in data flow diagrams, data dictionaries, database normalization techniques, entity-relationship modelling, and logical/physical data models using various design techniques.
Work in SQL performance tuning using cost-based optimization (CBO). Good knowledge of key Oracle performance features such as the query optimizer, execution plans and indexes. Performance tuning for the Oracle RDBMS using EXPLAIN PLAN and hints (see the illustrative sketch after this list).
Troubleshoot production automated job failures: investigate the data, provide a solution, fix the issue and deploy the updated code into production. Ensure daily, weekly and monthly jobs run smoothly in Autosys.
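Illustrative sketch only: a small Python (cx_Oracle) example of the range partitioning and EXPLAIN PLAN work described above; connection details, table names, and columns are hypothetical.

import cx_Oracle  # assumes the Oracle client libraries are installed

# Hypothetical connection details for illustration only.
conn = cx_Oracle.connect(user="app_user", password="change_me", dsn="dbhost/ORCLPDB1")
cur = conn.cursor()

# Range-partitioned table of the kind described above (one partition per year).
cur.execute("""
    CREATE TABLE sales_history (
        sale_id   NUMBER,
        sale_date DATE,
        amount    NUMBER(12,2)
    )
    PARTITION BY RANGE (sale_date) (
        PARTITION p2012 VALUES LESS THAN (DATE '2013-01-01'),
        PARTITION p2013 VALUES LESS THAN (DATE '2014-01-01'),
        PARTITION pmax  VALUES LESS THAN (MAXVALUE)
    )""")

# Inspect the optimizer's plan for a partition-pruned query (hint shown only as an example).
cur.execute("EXPLAIN PLAN FOR "
            "SELECT /*+ FULL(s) */ SUM(amount) FROM sales_history s "
            "WHERE sale_date >= DATE '2013-01-01'")
for (line,) in cur.execute("SELECT plan_table_output FROM TABLE(DBMS_XPLAN.DISPLAY())"):
    print(line)

conn.close()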


Federal Bank Limited Apr 19, 2010 - June 27, 2012
Designation: Assistant Manager
Location: Bengaluru, India


Job Duties and Responsibilities:
Provide banking solutions and support using the Finacle banking product.
Collaborate with different team members and manager to understand, gather requirements and provide customer solutions.
Support and develop various Banking Customer Data Modules, Accounts and storage solutions using Finacle product and other proprietary tools, platforms and databases.
Handle transactions and various customer activities. Resolve open tickets related to Finacle data applications.
Handle all phases and solutions of Finacle banking modules.
Develop both ad hoc and static reporting solutions. Administer data quality mechanisms.


Tata Consultancy Services July 12, 2007 - Oct 15, 2009
Designation: Asst. Systems Engineer
Location: Bengaluru, India


Primary Skills Used: Oracle PL/SQL, Toad, Oracle Warehouse Builder (OWB), I-DEAS

Job Duties and Responsibilities:

Collaborate with business analysts and team leader to understand and gather requirements.
B & W data modelling, design and development of Nissan data warehouses and storage solutions using different tools, platforms and databases.
Deliver robust test cases, plans and strategies. Handle all phases of the software development life cycle: facilitation, collaboration, knowledge transfer and process improvement.
Develop comprehensive data integration solutions.
Troubleshoot issues related to existing data warehouses to meet client deliverable SLAs.

EDUCATION



Qualification | Institute | University/Board
Bachelor of Technology | CET, Bhubaneswar, India | BPUT
XII | Khallikote College School, Bhubaneswar, India | CBSE
X | BKH School, Bhubaneswar, India | CBSE


TOOLS & TECHNOLOGY EXPERTISE


Databases: Oracle Database 10g,11g/12c, Toad
SQL and MPP: SQL, Spark SQL & DataFrames, Hive, SnowSQL, Redshift
Programming Languages: Python, Oracle PL/SQL, Unix Shell Scripting, PRO*C/C++
Data Warehousing Systems: Snowflake, Redshift, PL/SQL, Toad Data Modeler, Oracle Warehouse Builder (OWB), SQL Developer, SQL*Loader
ETL Tool: Apache NiFi
Scripting Language: Unix Shell Scripting
Big Data Technologies: Apache Hadoop HDFS, Hive, Sqoop, Pig, Spark, PySpark
Reporting/Other Tools: Alteryx
File Transfer Tools: AWS, Microsoft Azure, FTP/SFTP, FileZilla, WinSCP
AWS Services: S3, CLI, Redshift, EMR, Glue, Athena, Lambda, RDS, Kinesis
Visualization Libraries: NumPy, Pandas, Matplotlib, Seaborn, Plotly, Cufflinks, Choropleth Maps
IT Processes: Agile, Software Development Life Cycle (SDLC)
Automation Tools: Autosys, Control-M, Airflow
OS: Unix, Linux, Windows






CERTIFICATION


CCA Spark and Hadoop Developer (CCA175)
AWS Certified Cloud Practitioner
OCA (Oracle Certified Associate) in SQL from Oracle Corporation.
OCA (Oracle Certified Associate) in PL/SQL from Oracle Corporation.
Data Mining certified from Arizona State University, AZ, USA.
Operating Systems certified from Arizona State University, AZ, USA.
Intro to HIPAA for Business Associates certified
Earned Badge for Oracle Cloud Infrastructure (OCI Explorer)