Srichandan A
Senior Snowflake/Cloud Data Engineer
603-265-0657 | [email protected]
Location: Chicago, Illinois, USA
Relocation: IN, TX, NC, MA, WI, VA
Visa: H1B

Professional Summary:
AWS Certified Solutions Architect - Associate with 11 years of IT experience as a Snowflake Developer, AWS Data Engineer, and BI (Spotfire) Developer, with demonstrable experience and a deep understanding of cloud technologies.
Experienced in optimizing data pipelines, improving performance, and ensuring data integrity.
Experience in implementing Snowflake Data Warehouse.
Experience in Snowflake cloud development and Snowflake administration.
In-depth understanding of Snowflake Architecture.
Experience in migrating data from various sources to Snowflake Data Warehouse.
Experience in building Snowpipe for ingesting streaming data into Snowflake.
Experience in using Informatica Cloud (IICS), AWS Glue, and Fivetran to extract, transform, and load data (ETL/ELT).
Strong experience on the AWS platform and its dimensions of scalability, including S3, RDS, Redshift, AWS Glue, SQS, SNS, EC2, Lambda, VPC, ELB, IAM, Auto Scaling, CloudWatch, CloudTrail, and Security Groups.
Expert in writing SQL queries and stored procedures.
Automated data pipelines using Snowflake streams and tasks.
Collaborated with cross-functional teams, including data engineers, developers, and system administrators, to investigate and resolve issues affecting the data pipelines and ensure smooth data flow.
Experience in using and tuning relational databases (e.g., Microsoft SQL Server, Oracle, MySQL) and columnar databases (e.g., Snowflake, Amazon Redshift).
Designed and implemented data pipelines using Python, AWS services (such as AWS Glue or Lambda), and Snowflake to efficiently move and transform data between various data sources and the Snowflake data warehouse.
Ensured data quality and integrity during the ingestion process by implementing appropriate data validation and cleansing mechanisms.
Experienced with event-driven and scheduled AWS Lambda functions to trigger various AWS resources (see the Lambda sketch after this list).
Experience in creating complex Spotfire dashboards with sophisticated visualizations involving information links and calculated columns using TIBCO Spotfire Professional.
Experience in Agile environments, ensuring efficient project delivery.
Knowledge of Data Warehousing concepts in OLTP/OLAP system analysis and of developing database schemas such as Star Schema and Snowflake Schema for relational and dimensional modelling.
Provided 24/7 technical support to Production and development environments.
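
As one illustration of the event-driven pattern above, the sketch below shows a minimal AWS Lambda handler that fans S3 object-created events out to SQS for downstream processing; the queue URL and event wiring are hypothetical placeholders, not taken from any specific engagement.

```python
# Minimal event-driven Lambda sketch: S3 PUT events in, SQS messages out.
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-queue"  # placeholder

def lambda_handler(event, context):
    # Each S3 notification can carry several records; forward each one.
    for record in event.get("Records", []):
        payload = {
            "bucket": record["s3"]["bucket"]["name"],
            "key": record["s3"]["object"]["key"],
        }
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(payload))
    return {"statusCode": 200}
```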

Technical Skills:
Cloud Platforms: Snowflake, AWS, Databricks
Monitoring Tools: Splunk, CloudWatch
Databases: MySQL, SQL Server, Teradata, Oracle, PostgreSQL
Version Control Tools: Subversion (SVN), Git, GitHub
Bug Tracking Tools: JIRA, ServiceNow, Bugzilla
Languages & Scripts: SQL, Python, JavaScript, HTML, CSS, XML, Java, C
SDLC: Agile, Scrum
ETL/ELT Tools: AWS Glue, Informatica Cloud (IICS), Talend, Fivetran

Certifications:
AWS Certified Solutions Architect - Associate
Coursera Certified - Python Data Structures from the University of Michigan

Professional Experience:

Office of the Attorney General - Texas, Remote Nov 2022 to Present
Senior Snowflake/AWS Cloud Data Engineer
Responsibilities:
Developed and maintained data ingestion pipelines to ensure the timely and accurate transfer of data from AWS RDS PostgreSQL to Snowflake using AWS Glue workflows.
Improved performance of data processing in AWS Glue using data partitioning.
Developed and optimized PySpark jobs within AWS Glue to transform and process large-scale datasets, ensuring efficient data extraction, transformation, and loading (see the Glue PySpark sketch after this list).
Developed data pipelines to move data from a Drupal database to Snowflake using GoAnywhere managed file transfer and AWS S3.
Leveraged Python's data processing libraries, such as Pandas and NumPy, to perform data transformations, data cleansing, and data enrichment as part of the ETL (Extract, Transform, Load) process.
Utilized Snowflake's SnowSQL command-line tool and integrated it with Python scripts for interacting with Snowflake from the command line.
Deployed database changes through Liquibase scripts, using Bitbucket as the repository to store the scripts and Jenkins to validate the code.
Implemented streams to capture and track all changes made to the data in real-time, providing a reliable source of change data for further processing.
Developed tasks and workflows to orchestrate the execution of queries based on the captured change data, ensuring efficient and synchronized data processing (a streams-and-tasks sketch follows this list).
Ensured data quality and integrity during the ingestion process by implementing appropriate data validation and cleansing mechanisms.
Collaborated with stakeholders to identify and implement appropriate tags for different types of data, such as source system, data sensitivity, business unit, etc.
Conducted performance analysis and tuning of Snowflake queries and data processing tasks to optimize query execution time and resource utilization.
Implemented query and workload optimization techniques, such as query rewriting, indexing, query caching, and materialized views, to enhance Snowflake's performance.
Collaborated with database administrators and data engineers to identify and resolve performance bottlenecks, ensuring efficient data processing and query response times.
Implemented appropriate logging and error handling mechanisms within the data pipelines to facilitate troubleshooting and debugging.
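
The sketch below outlines the shape of an AWS Glue PySpark job of the kind described above: reading a catalog table, applying light transformations, and writing date-partitioned Parquet back to S3. The database, table, column, and bucket names are illustrative assumptions.

```python
# AWS Glue PySpark job sketch: catalog read -> transform -> partitioned write.
import sys
from awsglue.transforms import DropNullFields
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glueContext = GlueContext(SparkContext())
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read the source table registered in the Glue Data Catalog (placeholder names).
dyf = glueContext.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders")

# Light cleanup: drop null-only fields and standardize a column name.
dyf = DropNullFields.apply(frame=dyf)
dyf = dyf.rename_field("order_dt", "order_date")

# Partitioning the output by date lets downstream jobs prune whole partitions.
glueContext.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={
        "path": "s3://example-curated-bucket/orders/",  # placeholder bucket
        "partitionKeys": ["order_date"],
    },
    format="parquet",
)
job.commit()
```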
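
As a companion to the streams-and-tasks items above, here is a minimal sketch of a stream plus a scheduled task issued through the Snowflake Python connector; the account details, object names, and processing SQL are placeholders.

```python
# Streams-and-tasks sketch via the Snowflake Python connector.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",  # placeholders
    warehouse="ETL_WH", database="EDW", schema="STAGING",
)
cur = conn.cursor()

# Stream: records inserts/updates/deletes on the staging table.
cur.execute("CREATE OR REPLACE STREAM orders_stream ON TABLE orders_raw")

# Task: polls on a schedule but runs only when the stream has captured changes.
cur.execute("""
    CREATE OR REPLACE TASK process_orders
      WAREHOUSE = ETL_WH
      SCHEDULE = '5 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
    AS
      INSERT INTO orders_clean
      SELECT order_id, order_date, amount
      FROM orders_stream
      WHERE METADATA$ACTION = 'INSERT'
""")
cur.execute("ALTER TASK process_orders RESUME")  # tasks are created suspended
conn.close()
```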

Environment: Snowflake EDW, AWS Glue, AWS RDS, AWS SQS, AWS EC2, Python, SnowSQL, Liquibase, Jenkins, Bitbucket, GoAnywhere.

Drinkworks, Remote Aug 2021 to Oct 2022
Senior Snowflake Consultant
Responsibilities:
Setup and configuration of Snowflake Data Warehouse as Enterprise Data Platform.
Designed the architecture of ELT/ETL data pipelines from source to target following best practices, covering batch processing, near-real-time data ingestion, and micro-batch processing.
Extracted data from multiple sources, including Salesforce, flat files received through email, Redshift, S3 buckets, and APIs, and ingested the data into Snowflake.
Developed Python scripts to perform data quality checks and validation on incoming data, ensuring data accuracy and consistency within Snowflake.
Implemented Snowpipe for continuous ingestion of streaming data, such as from IoT devices (see the Snowpipe sketch after this list).
Created resource monitors to manage data platform costs and Snowflake credit consumption.
Managed and maintained proper data governance with RBAC (role-based access control).
Recommended technical solutions for complex development and integration problems.
Implemented dynamic data masking at query runtime (a masking-policy sketch follows this list).
Shared data with other consumers by creating reader accounts.
Improved performance by implementing clustering keys, automating warehouse sizing, and tuning queries.
Followed and implemented Snowflake's best practices for securing data in Snowflake.
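
A minimal sketch of the Snowpipe setup referenced above, issued through the Snowflake Python connector; the stage, integration, table, and bucket names are illustrative, and the target table is assumed to have a single VARIANT column.

```python
# Snowpipe sketch: external stage + auto-ingest pipe for streaming JSON.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",  # placeholders
    warehouse="INGEST_WH", database="EDW", schema="RAW",
)
cur = conn.cursor()

# External stage over the landing bucket (storage integration assumed to exist).
cur.execute("""
    CREATE OR REPLACE STAGE iot_stage
      URL = 's3://example-iot-landing/'
      STORAGE_INTEGRATION = s3_int
      FILE_FORMAT = (TYPE = JSON)
""")

# AUTO_INGEST = TRUE lets S3 event notifications (via SQS) trigger the loads.
cur.execute("""
    CREATE OR REPLACE PIPE iot_pipe AUTO_INGEST = TRUE AS
      COPY INTO iot_events
      FROM @iot_stage
""")
conn.close()
```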
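And for the dynamic data masking item, a sketch of a masking policy bound to a PII column; the role, table, and column names are assumptions.

```python
# Dynamic data masking sketch: unprivileged roles see a redacted value.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="sec_admin", password="***",  # placeholders
    warehouse="ADMIN_WH", database="EDW", schema="CORE",
)
cur = conn.cursor()

cur.execute("""
    CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING)
    RETURNS STRING ->
      CASE
        WHEN CURRENT_ROLE() IN ('PII_ADMIN') THEN val
        ELSE '***MASKED***'
      END
""")
cur.execute("""
    ALTER TABLE customers MODIFY COLUMN email
      SET MASKING POLICY email_mask
""")
conn.close()
```
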
Environment: Snowflake EDW, AWS Glue, AWS S3, AWS Kinesis, Lambda, AWS SES, AWS EC2, Amazon AppFlow, Python, Salesforce, Fivetran, Airflow.

USAA, San Antonio, TX Feb 2019 to July 2021
Snowflake Consultant
Responsibilities:
Managed RBAC controls in building the security model for the Snowflake data warehouse and used Resource Monitors.
Shared data with customers by creating shares and granting usage.
Worked on performance optimization of the Snowflake DW using dedicated and multi-cluster warehouses, and used cluster keys to partition large tables.
Involved in migrating database objects from Teradata and MS SQL Server to Snowflake.
Created Snowpipe for continuous loading of data into Snowflake from AWS S3 buckets by configuring SQS events to trigger Snowpipe.
Performed bulk loading into Snowflake using the COPY command.
Implemented streams and tasks for continuous ELT workflows to process recently changed data (CDC).
Performed data masking while sharing confidential data with users.
Successfully migrated logic and data from a legacy database to Snowflake by developing new tables based on stakeholder needs using SnowSQL.
Created internal and external stages and transformed data during load.
Performed data transformations using AWS Glue and PySpark/Python scripts to ingest data into Snowflake.
Used functions such as LATERAL FLATTEN to convert loaded JSON data into columns in Snowflake (see the COPY/FLATTEN sketch after this list).
Worked with both Maximized and Auto-scale functionality.
Used temporary and transient tables on different datasets.
Cloned Production data for code modifications and testing.
Shared sample data with customers for UAT by granting access.
Used the Time Travel feature to recover data.
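
The sketch below pairs the bulk COPY and LATERAL FLATTEN items above: staged JSON is bulk-loaded into a single-VARIANT landing table, then the nested array is exploded into typed columns. All object names and the JSON shape are illustrative.

```python
# Bulk COPY + LATERAL FLATTEN sketch via the Snowflake Python connector.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",  # placeholders
    warehouse="LOAD_WH", database="EDW", schema="RAW",
)
cur = conn.cursor()

# Bulk load staged JSON files; claims_raw is assumed to have one VARIANT column v.
cur.execute("""
    COPY INTO claims_raw
    FROM @claims_stage
    FILE_FORMAT = (TYPE = JSON)
    ON_ERROR = 'SKIP_FILE'
""")

# Explode the nested array so each line item becomes a row with typed columns.
cur.execute("""
    SELECT v:claim_id::STRING        AS claim_id,
           line.value:code::STRING   AS procedure_code,
           line.value:amount::NUMBER AS amount
    FROM claims_raw,
         LATERAL FLATTEN(input => v:lines) line
""")
for row in cur:
    print(row)
conn.close()
```
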
Environment: Snowflake EDW, AWS Glue, AWS S3, AWS Kinesis, AWS EC2, Informatica Cloud (IICS), Teradata, Oracle, Power BI, MS SQL Server.

Kaiser Permanente, Pleasanton, CA July 2017 to Dec 2018
AWS Data Engineer
Responsibilities:
Designed and set up an Enterprise Data Lake to support various use cases, including analytics, processing, storage, and reporting of voluminous, rapidly changing data.
Designed and built multi-terabyte, full end-to-end data warehouse infrastructure from the ground up on Amazon Redshift, handling millions of records every day.
Implemented and managed ETL solutions and automated operational processes.
Optimized and tuned the Redshift environment, enabling queries to perform up to 100x faster for Tableau and SAS Visual Analytics.
Implemented Workload Management (WLM) in Redshift to prioritize basic dashboard queries over more complex, longer-running ad-hoc queries, allowing for a more reliable and faster reporting interface with sub-second query response for basic queries.
Worked on integrating data from cloud sources such as Salesforce and NetSuite into Redshift and AWS RDS (MySQL).
Designed and developed ETL jobs using AWS Glue and Informatica to extract data from Salesforce and NetSuite and load it into a data mart in Redshift.
Built on-premises data pipelines using Kafka and Spark Streaming with the feed from the API streaming gateway REST service.
Built pipelines from Salesforce and NetSuite to MySQL (AWS RDS).
Developed the PySpark code for AWS Glue jobs and for EMR.
Extensively worked on configuring S3 versioning and lifecycle policies to back up files and archive them to Glacier (see the lifecycle sketch after this list).
Implemented a centralized logging system using Logstash configured as an ELK stack (Elasticsearch, Logstash, and Kibana) to monitor system logs, Amazon CloudWatch, VPC Flow Logs, CloudTrail events, and changes in S3.
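
A short sketch of the versioning-plus-lifecycle configuration mentioned above, using boto3; the bucket name and day thresholds are illustrative assumptions.

```python
# S3 versioning + Glacier lifecycle sketch with boto3.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-backup-bucket"  # placeholder

# Keep prior object versions around.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Archive current objects to Glacier after 90 days; expire noncurrent
# versions after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
        }]
    },
)
```
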
Environment: AWS, Redshift, Amazon RDS, VPC, IAM, S3, EC2, Lambda, AWS Glue, CloudWatch, CloudFormation, CloudTrail, CloudFront, Python, MySQL, Elasticsearch, Docker, Jenkins, GIT, Splunk, ELB, Informatica

T. Rowe Price, Baltimore, MD Jan 2017 to June 2017
Sr. Spotfire Developer
Project Name: CRM360
Remediation of legacy dashboards along with enhancements.
Responsibilities:
Development and modification of dashboards as per business requirements.
Developed Spotfire dashboards for customer, employee, financial, and operational metrics.
Created database links and views, accessing the views through the Information Designer component.
Created information links through the Information Designer component, with joins, filters, and parameterized filters as per requirements.
Created advanced and customizable visualizations using calculated columns, custom expressions, and IronPython scripting.
Served as a primary point of contact for production support issue resolution.
Extensively worked with formatting the reports according to the user requirements.
Used filtering schemes to isolate filter behaviors on visualizations.
Used OVER statements for calculations.
Used dynamic calculations based on the markings the user selects in the visualizations.
Deployed TIBCO Spotfire DXP files to UAT/Production environments.
Supported in promotion of dashboards to various environments.
Performed UAT to ensure that the solution meets the expectations of the user community as well as the application lead.
Environment: Spotfire 7.0, Spotfire 7.8, MS SQL, SSMS, JavaScript, IronPython

Merck & CO., Branchburg, NJ Jan 2016 to Dec 2016
Sr. Spotfire Developer
Project Name: AppFIT
The AppFIT dashboard project provides insights into the health of applications. The dashboards allow filtering applications on many parameters to focus on areas of interest. IT application owners, application support leaders, service leads, tower leads, client service leaders, and IT leadership find these dashboards valuable for application-health investment decision-making.
Responsibilities:
Gathered requirements from the business team and translated them into technical architecture and design documents.
Involved in estimation, analysis, and solution documents for building dashboards using Spotfire.
Responsible for the analysis, design, and building of dashboards using Spotfire, creating information links from Oracle to display data in reports.
Created dashboards and reports with bar charts, scatter plots, map charts, pie charts, cross tables, and graphical tables, using key concepts like filtering schemes (global and local filters) in Spotfire.
Created dynamic visualizations using advanced trellises, scatter plots, 3D scatter plots, map charts, tree maps, text areas, input fields, lists, drop-down lists, properties, document properties, on-demand data, and in-database data.
Created visualizations with complex Spotfire features like calculations/functions.
Worked on Information Designer, building Information links, elements, filters, joins, prompts, prompt groups etc.
Code migration across environments (Dev > QA > Prod).
Performed root cause analysis and fixed issues with the Spotfire environment.
Analyzed server issues, library issues, etc.
IronPython scripting to refresh data tables, change filtering schemes, reset axes, reset marker sizes, apply jittering, refresh calculations, and drive on-demand data (see the IronPython sketch after this list).
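
A minimal IronPython sketch of the kind of text-area script described above; `Document` is the global object Spotfire injects into every IronPython script, and the property and page names are placeholders.

```python
# IronPython (Spotfire) sketch: update a document property and switch pages.
# On-demand data tables and calculated columns bound to the property
# recompute automatically when it changes.
Document.Properties["SelectedRegion"] = "EMEA"

# Navigate to the page whose visualizations consume that property.
for page in Document.Pages:
    if page.Title == "Regional Detail":
        Document.ActivePageReference = page
        break
```
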
Environment: Spotfire 6.5, Spotfire 7.0 [7.0.1], Oracle SQL Developer, Oracle 11g, IronPython

Procter & Gamble, Mason, OH Feb 2015 to Dec 2015
Senior TIBCO Spotfire Developer
Responsibilities:
Translated product requirements and business questions into report requirements and visualization dashboards.
Designed and developed information links, data sources, columns, joins, filtering, and procedures.
Created dashboards, reports, analytics, and advanced graphical visualizations using key concepts such as filtering schemes, data on demand, and property controls.
Created visualizations with complex calculations/functions.
Created information links and managed process analysts' work to realize the required solution from a data perspective.
Pivoted and transformed data in Spotfire Professional before building visualizations.
Implemented security for write-back data from Spotfire.
Conveyed problems, solutions, updates, and project status to the project manager on a timely and regular basis.
Tested and debugged the dashboards for quality assurance.
Served as a primary point of contact for production support issue resolution.
Analyzed and documented processes, along with quick resolutions for typical production issues critical to the process.
Performed day-to-day administration of Spotfire, including group/library management, upgrades, monitoring log files, performance tuning, troubleshooting, and reporting.
Optimized TIBCO Spotfire and Background Tasks for Targeted Interactions Analytics and Visual Inventories.
Environment: Spotfire 5.5, Spotfire 6.5, SQL Server 2008, Oracle 11g, IronPython

FGS - Hyderabad, India May 2011 - July 2013
Systems Engineer
Responsibilities:
Maintained and implemented Windows 2000 and 2003 Servers, Active Directory, DNS, and DHCP, and troubleshot server and client issues.
Installed, upgraded, and configured servers on the Linux platform: network configuration, firmware installation, OS installation, and file system configuration.
Extensively worked on the Linux 4.0x and 5.0x series, performing system and network administration on Red Hat Linux.
Installed and configured Red Hat Enterprise Linux 4.0 and 5.0 network servers with a SAMBA file server and an FTP server.
Configured Linux server hardening and a SQUID proxy server with access control lists for Internet access.
Recognized and troubleshot problems with server hardware and application software.
Documented troubleshooting standards and procedures.
Recommended and used hybrid clustering (horizontal and vertical) to make efficient use of resources on a single system and to provide hardware failover and load balancing, and worked with developers.
Provided system administration support, including backups, recovery, monitoring, and controlling access permissions and privileges.
Responsible for adding and creating new users and groups, setting up home directories, and applying appropriate access restrictions to software, directories, and files using access modes.
Installed and configured Windows servers and troubleshot server-related issues.
Performed security health checks of Windows servers and other compliance activities.
Performed timely patch management per the schedule and was involved in change management per the process.
Managed website content and application releases, created virtual directories, and provided application support for ASP, .NET, and Java-based hosted internal and external websites.
Deployed .NET-based and Java-based applications on IIS 6.0.
Troubleshot application issues hosted on IIS and provided RCA for the issues.
Set up and maintained new infrastructure: Domain Controller, DNS servers, DHCP servers, and file servers.
Installed, configured, and managed AD/DHCP/DNS services.
Resolved technical issues for Microsoft Exchange servers.
Environment: Windows 2008, 2012 Servers, MySQL, SQL Server, IIS, Apache, Linux, AD

Education:
Master's in Computer Science, University of Central Missouri, Warrensburg, Missouri - 2014
Bachelor's in Computer Science Engineering, JNTU Hyderabad, India - 2011