
Shravya - Data Analyst
[email protected] | 817-678-0872
Location: Irving, Texas, USA
Relocation: Yes
Visa: H4 EAD
Only Corp to Corp
Professional Summary:
Sr. Data Analyst with 8+ years of experience spanning Data Analysis, Quantitative Methods, Statistical Computing, Data Visualization, Design, and Business Intelligence (BI).
Expertise in all aspects of the Software Development Life Cycle (SDLC), from requirement analysis and design through development, coding, testing, implementation, and maintenance.
Hands-on experience working with various Data Warehouse (DW)/Business Intelligence (BI) tools and databases, specializing in Informatica, Python, Tableau, Snowflake, Power BI, and Teradata.
Experience in the development and maintenance of operational reports and dashboards in Adobe Analytics, Quantum Metric, Tableau, and Power BI.
Experienced in using Python to manipulate data for loading and extraction, working with libraries such as Pandas, NumPy, Scikit-learn, Matplotlib, Beautiful Soup, Seaborn, and Keras for data analysis.
Experience in Data Modeling, Evaluating Data Sources, and strong understanding of Data Warehouse, Data Mart Design, ETL, BI, Data visualization, OLAP, and Client/ Server applications.
Excellent in Data Analysis, Data Profiling, Data Validation, Data Cleansing, Data Verification, and Data Mismatch Identification.
Comprehensive knowledge and experience in process improvement, normalization/de-normalization, data extraction, data cleansing, data manipulation.
Hands on Experience in Amazon Web Services (AWS) like Amazon EC2, Amazon S3, Amazon Redshift, Amazon EMR and Amazon SQS.
Experience in designing Star and Snowflake schemas for Data Warehouses, using tools such as Erwin Data Modeler, PowerDesigner, and Embarcadero ER/Studio.
Experience in modeling with both OLTP/OLAP systems and Data warehousing environments.
Extensive experience in development of T-SQL, DML, DDL, DTS, Stored Procedures, Triggers, Sequences, Functions and Packages.
Experience in data profiling and analysis, following and applying appropriate database standards and processes in the definition and design of enterprise business data hierarchies.
Proficient in using automation tools such as Apache Airflow and Power Automate to streamline data processing, workflow orchestration, and task automation.
Proficient in web scraping techniques using libraries such as Beautiful Soup and Scrapy, enabling data extraction from websites and online sources for analysis and integration into analytical workflows (a brief sketch follows this summary).
Hands-on experience with the Data Catalog tool for proficiently managing metadata and data assets.
Implemented and configured Data Catalog to support data governance initiatives, and ensure compliance with regulatory requirements and industry standards.
Experience in extracting, transforming, and loading (ETL) data from spreadsheets, database tables, and other sources using Informatica.
Hands-on experience with version control systems such as Git and Subversion for code management and collaboration in a team environment.
Proficient in Excel for data analysis, manipulation, and visualization, utilizing advanced functions such as VLOOKUP and HLOOKUP to extract insights from large datasets.
Strong expertise in creating interactive dashboards and visualizations using Tableau and Power BI, enabling stakeholders to understand and explore data effectively.
Proficient in utilizing PyCharm IDE, Jupyter Notebook, and Spyder to develop Python scripts and applications for data analysis, visualization, and machine learning tasks.
Experience in creating visually appealing and interactive Dashboards, and reports as needed using Tableau Desktop and Tableau Server.
Experience in connecting to various data sources, designing complex calculations, and implementing LOD expressions, parameters, and filters in Tableau to manipulate data and perform advanced analysis.
Experience working with Data Warehouse concepts such as Data Warehouse architecture, Star schema, Snowflake schema, Data Marts, and Dimension and Fact tables.
Expertise in Power BI, Power BI Pro, and Power BI Mobile, and expert in creating and developing visually rich Power BI dashboards.
Experience in creating calculated columns, measures, and DAX functions to perform complex calculations and analysis, implementing security features to ensure data integrity.
Experience in monitoring and analyzing all the database activities using Dynatrace.
Experience managing tasks, sprints, and workflows in Jira to streamline project progress, issue tracking, and resolution.
Excellent interpersonal and communication skills, with the ability to manage and balance multiple priorities in a fast-paced, complex business environment and to manage time effectively to consistently meet deadlines.
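Illustrative only: a minimal sketch of the web-scraping pattern referenced in this summary, using Beautiful Soup with pandas. The URL, table layout, and column names are hypothetical assumptions, not from any client engagement.

import requests
import pandas as pd
from bs4 import BeautifulSoup

URL = "https://example.com/reports"  # hypothetical source page

html = requests.get(URL, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

# Pull each data row of the first HTML table into a list of dicts.
rows = []
for tr in soup.select("table tr")[1:]:
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if len(cells) >= 2:
        rows.append({"metric": cells[0], "value": cells[1]})

df = pd.DataFrame(rows)  # hand off to the usual pandas analysis workflow
print(df.head())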

Technical Skills:
Programming Languages: Python (Pandas, NumPy, Matplotlib, SciPy, Beautiful Soup), SQL
ETL: Informatica, AWS Glue
Data Modeling: Erwin, ER Studio
Cloud Technology: Amazon Web Services (AWS)
AWS Services: Amazon EC2, Amazon S3, Amazon Redshift, Amazon EMR, Amazon SQS, AWS Glue, AWS Data Pipeline
Databases: Cassandra, MongoDB, Oracle, Teradata, MS SQL Server, SQL
Automation Tools: Apache Airflow, Power Automate
Data Warehouse: Snowflake, Star Schema
Microsoft Packages: MS Office - Word, Excel (VLOOKUP, HLOOKUP), PowerPoint
IDEs: PyCharm, Jupyter Notebook, Spyder
BI: Tableau, Power BI, Adobe Analytics, Quantum Metric, Dynatrace
SDLC: Agile/Scrum
Operating Systems: Windows, Linux, Mac

Professional Experience:

Client: TBK Bank, Dallas, TX July 2022 - Till Date
Role: Data Analyst
Description: TBK Bank, a Texas state savings bank, offers commercial and consumer banking products focused on meeting client needs. Its capabilities include commercial real estate, mortgage warehouse lending, and general business lending. It specializes in Asset Based Lending, Equipment Finance, Insurance Premium Financing, Deposit Products, and Commercial Lending.

Responsibilities:
Responsible for gathering business requirements, following SDLC processes, and designing data maps and data models.
Followed the Agile method, using daily scrums to discuss project-related information.
Performed complex data analysis in support of ad-hoc and standing customer requests.
Conducted data lineage analysis to track data flow across systems and processes.
Designed and developed automation test scripts using Python.
Used Pandas, NumPy, Seaborn, SciPy, Matplotlib, Scikit-Learn, and NLTK in Python for developing various machine-learning algorithms.
Utilized Pandas library to manipulate large datasets efficiently, ensuring data integrity and accuracy.
Deployed and monitored scalable infrastructure on the cloud environment of Amazon Web Services (AWS).
Used AWS Glue to crawl the data lake in S3 to populate the Data Catalog.
Worked on the data lake in AWS S3, copied data to Redshift, and implemented business logic with custom SQL using Unix and Python script orchestration for analytics solutions.
Created custom data visualizations using matplotlib and seaborn libraries to communicate findings effectively.
Designed data profiles for processing, running SQL and procedural SQL queries, and used Python for data acquisition and data-integrity checks such as dataset comparison and schema validation (see the sketch after this list).
Developed and maintained Excel-based dashboards and reports to track key performance indicators (KPIs) and provide actionable insights to stakeholders.
Conducted Excel data cleansing and validation activities to ensure data accuracy and consistency, utilizing VLOOKUP to reconcile data across multiple sources.
Used Adobe Analytics for reporting of business metrics and Quantum Metric for customer experience analytics.
Used Erwin Data Modeler tool for relational database and dimensional data warehouse designs.
Involved in various projects related to Data Modeling, Data Analysis, Design, and Development for both OLTP and Data warehousing environments.
Utilized Data Catalog tool for proficiently managing metadata and data assets.
Implemented and configured Data Catalog to support data governance initiatives, and ensure compliance with regulatory requirements and industry standards.
Used Visio to create comprehensive data flow diagrams and visual representations of metadata structures.
Designed and implemented effective Analytics solutions and models with Snowflake.
Utilized Power BI (Power View) to create analytical dashboards depicting critical KPIs such as legal case matters, billing hours, and case proceedings, with slicers and dicers that enable end users to apply filters.
Wrote SQL queries to validate the database systems and for backend database testing.
Designed and developed various ad hoc reports for different business teams (Teradata and Oracle SQL, MS Access, MS Excel).
Designed dashboards and reports, parameterized reports, and predictive analysis in Power BI.
Automated data extraction processes from various sources using Power Automate, reducing manual effort.
Conducted descriptive and inferential statistical analysis to derive insights from data and support decision-making.
Deployed and managed user permissions for reports and dashboards on the Power BI web portal.
Monitored, optimized, and managed the performance of the data in Dynatrace.
Created DAX queries to generate computed columns in Power BI.
Managed tasks, sprints, workflows to streamline project progress, and issue tracking and resolution using Jira.
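Illustrative only: a minimal pandas sketch of the dataset-comparison and schema checks described above. The file names, business key, and column names are assumptions for the example, not actual TBK Bank artifacts.

import pandas as pd

source = pd.read_csv("source_extract.csv")     # hypothetical extract from the source system
target = pd.read_csv("redshift_extract.csv")   # hypothetical extract unloaded from Redshift

# Schema check: both sides should expose the same columns.
schema_diff = set(source.columns) ^ set(target.columns)
assert not schema_diff, f"Column mismatch: {schema_diff}"

# Row-level comparison on the business key to flag records that disagree.
merged = source.merge(target, on="account_id", suffixes=("_src", "_tgt"))
mismatches = merged[merged["balance_src"] != merged["balance_tgt"]]
print(f"{len(mismatches)} mismatched records")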

Environment: Agile Method, Python, Pandas, NumPy, Seaborn, SciPy, Matplotlib, Scikit-Learn, NLTK, AWS, AWS Glue, S3, Redshift, SQL, Excel, VLOOKUP, Erwin Data Modeler, Data Modeling, Snowflake, Power BI, Adobe Analytics, Quantum Metric, Dynatrace, DAX Queries, Power Automate.

Client: State of IA Education Department, Des Moines, IA Oct 2020 - June 2022
Role: Data Analyst
Description: The Iowa Department of Education is the state agency responsible for overseeing public education in Iowa. It is dedicated to ensuring high-quality education for all students, promoting accountability and continuous improvement in schools, and providing support and resources to educators and districts. The department sets standards, develops curriculum frameworks, and administers assessments to measure student progress and school performance.

Responsibilities:
Participated in the daily scrum (Agile meetings) to organize sprint planning, sprint retrospectives and sprint demos.
Worked with different datasets, including both structured and unstructured data, and participated in all phases of data mining, data cleaning, data collection, variable selection, feature engineering, model development, validation, and visualization.
Performed data wrangling to clean, transform, and reshape the data using the Pandas library.
Analyzed data using SQL, and Python, and presented analytical reports to management and technical teams.
Analyzed existing internal and external data, worked on entry errors and classification errors, and defined criteria for missing values using Python.
Used TensorFlow, Gensim, spaCy, Keras, Pandas, NumPy, Seaborn, Matplotlib, Scikit-learn, SciPy, and NLTK in Python.
Involved in migrating the data model from one database to a Teradata database and prepared a Teradata staging model.
Involved in writing, testing, and implementing Teradata FastLoad, MultiLoad, and BTEQ scripts, DDL, and DML.
Wrote Teradata Macros and used various Teradata analytic functions.
Performed daily integration and Informatica/ ETL tasks by extracting, transforming, and loading data to and from different RDBMS.
Conducted exploratory data analysis (EDA) to uncover hidden patterns and insights in datasets.
Used Normalization methods up to 3NF and De-normalization techniques for effective performance in OLTP and OLAP systems.
Designed and implemented complex data pipelines using Apache Airflow to orchestrate and automate ETL processes, ensuring timely and reliable data delivery (see the sketch after this list).
Implemented data modeling techniques in Excel to perform forecasting, trend analysis, and scenario planning, using VLOOKUP to retrieve historical data for analysis.
Developed and optimized SQL queries in Snowflake to extract, transform, and analyze large-scale datasets efficiently, ensuring high performance and scalability.
Identified, analyzed, and interpreted trends and patterns in large data sets using Power BI.
Developed test cases and scenarios to validate data integrity and accuracy in analytics solutions.
Designed and implemented data models in Snowflake, including schemas, tables, views, and materialized views, to support analytical reporting and business intelligence needs.
Utilized Data Catalog tool for proficiently managing metadata and data assets.
Implemented and configured Data Catalog to support data governance initiatives, and ensure compliance with regulatory requirements and industry standards.
Designed and documented data management processes and workflows using Visio for better understanding and communication among project stakeholders.
Worked with NoSQL databases like MongoDB, creating MongoDB collections to load large sets of semi-structured data.
Developed workflows in Power Automate to streamline data cleansing and transformation tasks, improving data quality and consistency.
Designed and developed visualizations and dashboards in Microsoft Power BI.
Designed interactive reports and dashboards in Power BI to visualize business performance metrics and trends.
Used Jira for managing tasks and customized workflows to align with project requirements and facilitate efficient issue tracking and resolution.
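Illustrative only: a minimal Apache Airflow (2.x) sketch of the kind of ETL orchestration described above. The DAG id, schedule, and task callables are placeholders, not the department's actual pipeline.

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # e.g., pull flat files or query the source RDBMS

def transform():
    ...  # e.g., cleanse and reshape with pandas

def load():
    ...  # e.g., write to the Teradata staging model

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load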


Environment: Agile Method, Pandas, SQL, Python, TensorFlow, Gensim, Spacy, Keras, NumPy, Seaborn, Matplotlib, Scikit-learn, SciPy, NLTK, Teradata database, Informatica, ETL, Apache Airflow, Excel, VLOOKUP, Snowflake, MongoDB, Power Automate, Power BI.

Client: eBay, Austin, TX Jan 2019 - Sep 2020
Role: Data Analyst
Description: eBay is a multinational e-commerce corporation with millions of active users, known for its online marketplace connecting buyers and sellers globally. eBay facilitates consumer-to-consumer and business-to-consumer sales through its platform. It offers a diverse range of products, including electronics, fashion, and collectibles. eBay's mission is to create economic opportunity for individuals, entrepreneurs, and businesses worldwide.

Responsibilities:
Implemented Agile methodologies to enhance project management efficiency, facilitating seamless collaboration among cross-functional teams.
Employed Scrum framework to streamline project workflows, ensuring timely delivery of data analysis solutions and meeting stakeholders' requirements.
Leveraged Pandas library to manipulate and analyze large datasets, extracting valuable insights to drive strategic decision-making processes.
Integrated Python with SQL databases to streamline data retrieval and analysis workflows.
Utilized Python programming language along with NumPy and SciPy libraries to perform statistical analysis, hypothesis testing, and predictive modeling.
Visualized data effectively using Seaborn and Matplotlib libraries, creating informative charts, graphs, and dashboards to communicate findings to stakeholders.
Applied the Scikit-learn library for machine learning tasks, including classification, regression, clustering, and model evaluation, to derive actionable insights (see the sketch after this list).
Employed NLTK (Natural Language Toolkit) for text mining and sentiment analysis, extracting meaningful information from unstructured textual data.
Managed data storage and retrieval efficiently using Cassandra database, ensuring high availability and scalability for large-scale applications.
Implemented ETL (Extract, Transform, Load) processes using Informatica, automating data integration tasks and ensuring data quality and consistency.
Used advanced Excel functionality, including VLOOKUP, pivot tables, and macros, for data manipulation, analysis, and reporting.
Worked with Snowflake data warehouse to handle large volumes of structured and semi-structured data, optimizing performance and scalability.
Worked with Dynatrace to monitor, explore, and analyze the performance of business data.
Leveraged Jupyter Notebook as a versatile IDE for exploratory data analysis (EDA), creating interactive data visualizations and documenting analysis workflows.
Executed complex SQL queries to retrieve and manipulate data from relational databases such as PostgreSQL, MySQL, and SQL Server.
Worked with MongoDB for handling and analyzing unstructured data, enabling efficient storage and retrieval of documents and JSON data.
Conducted data cleansing, transformation, and normalization tasks as part of ETL (Informatica) processes to ensure data integrity and accuracy.
Conducted exploratory data analysis (EDA) to uncover hidden patterns and relationships in data, guiding business strategy formulation.
Developed interactive and insightful visualizations using Power BI, enabling stakeholders to gain actionable insights from data analysis.
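Illustrative only: a minimal scikit-learn sketch of the classification-and-evaluation workflow mentioned above, run on a bundled sample dataset rather than any eBay data.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Scale features, fit a simple classifier, and evaluate on held-out data.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))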

Environment: Python, Pandas, SQL, NumPy, Seaborn, Matplotlib, Scikit-learn, SciPy, Agile, Scrum, NLTK, Informatica, ETL, Excel, VLOOKUP, Snowflake, Dynatrace, MongoDB, Power BI.

Client: Apple Inc, Austin, TX Jun 2017 - Dec 2018
Role: Data Analyst
Description: Apple Inc. is a multinational technology company renowned for its innovative products and services, and has become one of the world's most valuable companies. Apple is famous for its iconic products such as the iPhone, iPad, Mac, and Apple Watch, as well as its software ecosystem including iOS, macOS, and iCloud. With a focus on design, user experience, and technological advancement, Apple has revolutionized industries and shaped consumer behavior globally.

Responsibilities:
Participated in all phases of data mining, data cleaning, data collection, developing models, validation, visualization, and performed Gap analysis.
Developed SQL Queries, and Python programs, to fetch complex data from different tables in remote databases using joins, database links and Bulk collects.
Used Pandas, NumPy, Seaborn, Matplotlib, Scikit-learn, SciPy, and NLTK in Python.
Worked extensively on ER Studio in several projects in both OLAP and OLTP applications.
Developed and maintained data dictionaries and documentation to ensure data lineage and transparency.
Analyzed complex data sets, performing ad-hoc analysis and data manipulation in Landing, Staging and Warehouse schemas using SQL.
Utilized business requirements to create ad-hoc reports using SQL.
Worked with complete data warehouse life cycle, testing methodologies, OLAP and OLTP applications in agile approaches.
Involved in extensive data validation, writing several complex SQL queries for back-end testing and working through data quality issues (see the sketch after this list).
Conducted data reconciliation and discrepancy analysis in Excel, leveraging VLOOKUP to identify and resolve discrepancies between different data sources.
Designed different types of STAR schemas using ERWIN, with dimensions such as time, services, and customers, and the associated FACT tables.
Implemented CI/CD pipelines using tools like Jenkins, and GitLab to automate the deployment and testing of data analytics solutions, ensuring rapid and reliable delivery of insights.
Designed databases and ER diagrams using multiple data modeling tools such as ERWIN, ER Studio, and Informatica.
Performed user acceptance testing (UAT) to validate analytics solutions against business requirements.
Documented and tracked defects and issues identified during testing, ensuring timely resolution.
Leveraged agile methodologies to manage and prioritize data analytics projects effectively.
Prototyped data visualizations with charts, drill-downs, and parameterized controls in Tableau to highlight the value of analytics in executive decision support.
Expertise in Business intelligence and Data Visualization tools like Tableau.
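Illustrative only: a minimal sketch of backend SQL validation driven from Python. SQLite and the table and column names are stand-ins for the actual warehouse platform and schema.

import sqlite3

conn = sqlite3.connect("warehouse.db")  # hypothetical local copy of the warehouse
cur = conn.cursor()

# Reconcile row counts between a staging table and its warehouse target.
src_count = cur.execute("SELECT COUNT(*) FROM stg_orders").fetchone()[0]
tgt_count = cur.execute("SELECT COUNT(*) FROM dw_orders").fetchone()[0]
print("staging:", src_count, "warehouse:", tgt_count, "delta:", src_count - tgt_count)

# Spot-check a data quality rule: no null or negative order amounts.
bad = cur.execute(
    "SELECT COUNT(*) FROM dw_orders WHERE order_amount IS NULL OR order_amount < 0"
).fetchone()[0]
print("rows violating the amount rule:", bad)
conn.close()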

Environment: SQL, Python, Pandas, NumPy, Seaborn, Matplotlib, Scikit-learn, SciPy, NLTK, ER Studio, OLAP, OLTP, Excel, VLOOKUP, STAR schemas, ERWIN, CI/CD pipelines, Jenkins, GitLab, ER Diagrams, Informatica, User acceptance testing (UAT), Agile methodologies, Tableau.

Client: Luxoft, Hyderabad, India Jan 2016 - Apr 2017
Role: Data Analyst
Description: Luxoft, a DXC Technology Company, is a digital transformation services and software engineering firm providing bespoke IT solutions that drive business change for customers globally. Luxoft enables digital business transformation, enhances customer experiences, and boosts operational efficiency through its strategy, consulting, and engineering services.

Responsibilities:
Managed end-to-end data analytics projects from requirement gathering to implementation and maintenance.
Involved in generating various graphs and charts for analyzing the data using Python Libraries.
Performed data cleaning, feature scaling, and feature selection using the Pandas, NumPy, and scikit-learn packages in Python (see the sketch after this list).
Collaborated with data engineers and DevOps teams to design and optimize CI/CD workflows for data analytics projects, ensuring scalability, reliability, and security.
Worked on database design, relational integrity constraints, OLAP, OLTP, Cubes and Normalization (3NF), and De-normalization of the database.
Developed dimensional model for Data Warehouse/OLAP applications by identifying required facts and dimensions.
Created conceptual, logical, and physical models for databases that are required for supporting services within the enterprise data architecture (conceptual data model for defining the major subject areas used, logical model for defining standard business meaning for entities/fields, and physical models for DDL).
Created high-level ETL design documents and assisted ETL developers in the detailed design and development of ETL maps using Informatica.
Tested complex ETL mappings and sessions based on business user requirements and business rules to load data from source flat files and RDBMS tables to target database systems.
Used Excel sheets, flat files, and CSV files to generate Tableau ad-hoc reports.
Designed a STAR schema for the detailed data marts and Plan data marts involving shared dimensions.
Wrote SQL data migration scripts and used data manipulating techniques to extract and process data into the core data warehouse from various data sources as well as generated data from various databases.
Utilized SQL and Python for all ETL processes and maintained ETL jobs for the business streams.
Developed Tableau visualizations and dashboards using Tableau Desktop, and Tableau workbooks from multiple data sources using Data Blending.
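Illustrative only: a minimal sketch of the cleaning, feature-scaling, and feature-selection steps mentioned above, on a hypothetical pandas DataFrame rather than any client data; the file and label column names are assumptions.

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif

df = pd.read_csv("training_data.csv").dropna()   # hypothetical input, with a simple cleaning step

X = df.drop(columns=["target"])                  # assumed label column
y = df["target"]

X_scaled = StandardScaler().fit_transform(X)     # feature scaling
selector = SelectKBest(score_func=f_classif, k=10)
X_selected = selector.fit_transform(X_scaled, y)
print("selected feature indices:", selector.get_support(indices=True))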

Environment: Python, Pandas, NumPy, Scikit-learn, CI/CD workflows, OLAP, OLTP, Cubes, ETL design and development, Informatica, SQL, Excel, Tableau, STAR schema.


Education:

Bachelor's degree in Electronics and Communications Engineering from Jawaharlal Nehru Technological University (JNTU), Hyderabad in 2012.