Indrasena M
Sr. Data Engineer
[email protected] | +1 732 429 1935 | Dallas, TX
Relocation: Yes | Visa: H1B
__________________________________________________________________________________________
Professional Summary:

8+ years of IT experience across multiple industries working on Big Data technologies, with Hadoop environments that include Spark, MapReduce, Kafka, Hive, Ambari, Sqoop, HBase, and Impala.
Experience in programming with Scala, Java, Python, SQL, T-SQL, and R.
Deep understanding of common Machine Learning techniques related to time series analysis and regression.
Experience implementing real-time and batch data pipelines using AWS Services, Lambda, S3, DynamoDB, Kinesis, EC2, Glue, Athena, AWS Step Function, and Redshift.
Hands-on experience in developing and deploying enterprise-based applications using major Hadoop ecosystem components like MapReduce, YARN, Hive, HBase, and Flume.
Adept at configuring and installing Hadoop/Spark Ecosystem Components.
Proficient with Spark Core, Spark SQL, Spark MLlib, Spark GraphX, Data Frame, Pair RDD, Spark YARN, and Spark Streaming for processing and transforming complex data using in-memory computing capabilities written in Scala.
Proven track record of efficiently manipulating and maintaining databases using DBX, ensuring data accuracy and accessibility.
Implemented version control for ETL code and maintained comprehensive documentation for EMR, Redshift, and Glue workflows.
Proficient in designing, implementing, and maintaining PostgreSQL and Aurora databases for optimal performance.
Proficient in creating complex Excel models and spreadsheets to analyze and visualize large datasets, improving data-driven decision-making processes
Expertise in Adobe Experience Platform Real-Time CDP, ensuring clients derive maximum value from their investment.
Experience working with various data sources such as Oracle SE2, SQL Server, flat files, and unstructured files in a data warehouse.
Encoded and decoded JSON objects using PySpark to create and modify data frames in Apache Spark (an illustrative sketch appears at the end of this summary).
Hands-on experience in architecting, implementing, and optimizing data solutions within the Snowflake data warehousing environment.
Able to use Sqoop to migrate data between RDBMS, NoSQL databases, and HDFS.
Experience in Extraction, Transformation, and Loading (ETL) data from various sources into Data Warehouses, as well as data processing like collecting, aggregating, and moving data from multiple sources using Apache Flume, Kafka, and Microsoft SSIS.
Experienced working on Master Data Management (MDM) using Profisee MDM.
Proficient in utilizing Profisee to manage and optimize data assets, along with a proven track record of designing and implementing effective data engineering solutions.
Adept at modifying C# code for configuring and administering Profisee admin tool.
Exceptional skills in Power BI for data visualization, dashboard development, and report generation.
Skilled in connecting Power BI to various data sources, including SQL databases, Excel, and cloud services.
Proficient in managing and maintaining Power BI workspaces, datasets, and reports.
Skilled in DAX (Data Analysis Expressions) for creating calculated columns, measures, and advanced calculations in Power BI.
Implemented highly available and scalable Aurora and PostgreSQL databases for real-time data processing and analytics workloads.
Skilled in using Hudi for data extraction, transformation, and loading into data lakes and AWS Redshift.
Experience in web scraping, an automated method of obtaining large amounts of data from websites.
Hands-on experience with Hadoop architecture and components such as the Hadoop Distributed File System (HDFS), Job Tracker, Task Tracker, Name Node, Data Node, and Hadoop MapReduce programming.
Comprehensive experience developing simple to complex MapReduce and Streaming jobs using Scala and Java for data cleansing, filtering, and aggregation, along with detailed knowledge of the MapReduce framework.
Knowledge of Adobe RTCDP capabilities and best practices to drive continuous improvement in data engineering processes.
Used IDEs like Eclipse, IntelliJ IDEA, PyCharm, Notepad++, and Visual Studio for development.
Experienced in Machine Learning algorithms and Predictive Modeling such as Linear Regression, Logistic Regression, Naïve Bayes, Decision Tree, Random Forest, KNN, Neural Networks, and K-means Clustering.
Proficient in designing and implementing scalable search and analytics solutions using AWS Elasticsearch.
Optimized SQL queries and database performance to meet the stringent demands of data processing using PostgreSQL and Aurora.
Designed, configured, and tested Adobe Experience Platform connections, data sets, XDM schemas, and identity namespaces to support data integration.
Ample knowledge of data architecture, including data ingestion pipeline design, Hadoop/Spark architecture, data modelling, data mining, machine learning, and advanced data processing.
Experience working with NoSQL databases like Cassandra and HBase and developing real-time read/write access to large datasets via HBase.
Developed Spark Applications that can handle data from various RDBMS (MySQL, Oracle Database) and Streaming sources.
Provided troubleshooting and support for EMR, Redshift, and Glue-related issues, ensuring minimal downtime and optimal system performance.
Worked on processes to transfer data from AWS sources and flat files into common staging tables in various formats and load it as meaningful data into Snowflake.
Experienced in managing and optimizing relational databases on AWS RDS, with a preference for PostgreSQL.
Experience working with GitHub/Git 2.12 source and version control systems.
Experience in privacy and data security laws and regulations, including GDPR, COPPA, and VPPA
Knowledge of best practices prevalent in data governance, data quality, and data privacy. Familiarity with data management principles and practices.
Specialized in Mulesoft API development and Data Governance tools like Axon, EDC, and IDQ.
Deep understanding of Oracle GoldenGate mechanisms and features such as integrated capture, native DDL capture, and multi-threaded Replicat.
Proven ability to collaborate with cross-functional teams, including data analysts, data scientists, and business stakeholders, to deliver data-driven solutions.
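Below is a minimal, illustrative sketch of the PySpark JSON encode/decode pattern referenced earlier in this summary: parsing JSON strings into typed columns with from_json and serializing rows back with to_json. The schema, column names, and S3 paths are hypothetical placeholders, not details from any specific engagement.

    # Minimal PySpark sketch: decode JSON strings into columns, then encode rows back to JSON.
    # Schema, column names, and paths are hypothetical examples.
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("json-encode-decode").getOrCreate()

    event_schema = StructType([
        StructField("store_id", StringType()),
        StructField("sku", StringType()),
        StructField("amount", DoubleType()),
    ])

    raw = spark.read.text("s3://example-bucket/raw/events/")   # one JSON string per line
    decoded = raw.select(F.from_json("value", event_schema).alias("e")).select("e.*")

    # ...transformations on the decoded data frame would go here...
    encoded = decoded.select(F.to_json(F.struct(*decoded.columns)).alias("payload"))
    encoded.write.mode("overwrite").text("s3://example-bucket/processed/events/")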


Technical Skills:

Big Data Technologies: Spark, Cloudera, Hive, Impala, HBase, Oozie, Kafka, Databricks, Airflow, Adobe Analytics
Languages: SQL, Python, C#, Java, and Scala
Cloud: Azure Data Factory, Azure Stream Analytics, Databricks, Azure PowerShell, Azure HDInsight, ADLS, Blob Storage, AWS, S3, EMR, Elasticsearch, Glue, Athena, Hudi, Redshift, GCS, DataProc, Data Studio
MDM & Data Governance Tools: Axon, EDC, IDQ, and Profisee MDM
Databases: Oracle, MySQL, SQL Server, Teradata, PostgreSQL, Aurora
IDEs and Notebooks: Eclipse, IntelliJ, PyCharm, Jupyter, Databricks notebooks
Data Formats: JSON, Parquet, Avro, XML, and CSV
Search and BI Tools: Power BI, Data Studio, Tableau
Web Technologies: JDBC, JSP, Servlets, Struts (Tomcat, JBoss)

Professional experience:

Client: 7-Eleven, Irving, TX May 2022 - Till Date
Role: Sr. Data Engineer

Responsibilities:
Built and architected multiple data pipelines and end-to-end ETL and ELT processes for data ingestion and transformation in AWS.
Designed, developed, and maintained complex ETL (Extract, Transform, Load) pipelines on DBX to ensure seamless data flow, from raw data ingestion to the creation of structured datasets.
Collaborated with internal stakeholders to define and refine MDM strategies that align with the organization's business objectives.
Used AWS API Gateway and AWS Lambda to retrieve AWS cluster inventory using the AWS Python API.
Led end-to-end design and implementation of MDM solutions using Profisee, ensuring accurate and consistent master data across the organization.
Utilized DBX (DataBricks) for data processing and manipulation to support various data-driven initiatives within the organization.
Provided production support for existing products including SSIS, SQL Server, interim data marts, Matillion, AWS, and Snowflake.
Collaborated with cross-functional teams, including Consultants, Solution Architects, Data Scientists, and Digital Marketers.
Implemented a Continuous Delivery pipeline with Docker and GitHub.
Worked with Lambda functions to load data into Redshift on the arrival of CSV files in an S3 bucket (an illustrative sketch appears at the end of this section).
Devised simple and complex SQL scripts to check and validate Dataflow in various applications.
Performed Data Analysis, Migration, Cleansing, Transformation, Integration, Data Import, and Data Export through Python.
Developed custom workflows and scripts to extend Hudi functionality and improve data processing.
Led the design and implementation of PostgreSQL databases to support critical applications.
Worked on JSON schemas to define tables and column mappings from AWS S3 data to AWS Redshift, and used AWS Data Pipeline to configure data loads from S3 to Redshift.
Performed data engineering functions: data extraction, transformation, loading, and integration in support of enterprise data infrastructures.
Architected several DAGs (Directed Acyclic Graph) for automating ETL pipelines.
Hands-on experience in architecting the ETL transformation layers and writing Spark jobs to do the processing.
Implemented data security and compliance measures using Elasticsearch access control features.
Crafted and executed queries for segmentation, reporting, analysis, and machine learning models using Adobe's query service.
Gathered and processed raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, and writing applications).
Imported data from AWS S3 into Spark RDD and performed actions/transformations on them.
Created Partitions, Bucketing, and Indexing for optimization as part of Hive data modelling.
Involved in developing Hive DDLs to create, alter, and drop Hive tables.
Worked on different RDDs to transform the data coming from other data sources and transform data into required formats.
Maintained up-to-date knowledge of Adobe RTCDP capabilities and best practices to drive continuous improvement in data engineering processes.
Integrated and leveraged DataBricks (DBX) as a central platform for data manipulation, transformation, and analysis within 7-Eleven's data ecosystem.
Hands-on experience in web scraping, building crawlers and scrapers to extract and structure data from target sites.
Deployed and configured EMR clusters to process large-scale data sets, optimizing cluster performance for efficient data processing.
Designed, developed, and implemented robust APIs using Mulesoft, contributing to enhancing data integration processes and improving system interoperability.
Created RESTful APIs to expose functionalities and enable third-party integrations.
Collaborated with cross-functional teams to gather and analyze requirements, ensuring seamless integration of various systems and applications through well-designed APIs.
Developed and maintained robust C# applications, ensuring optimal performance and adherence to coding standards.
Designed and developed Spark applications using PySpark and Spark SQL for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns.
Developed scripts (Python) to streamline routine database maintenance tasks.
Worked on data migration and conversion using PySpark and Spark SQL, extracting, transforming, and aggregating data from multiple file formats in Python.
Developed and maintained ETL (Extract, Transform, Load) pipelines using Spark to efficiently process and transform large volumes of data.
Wrote complex SQL queries to extract, analyze, and manipulate data from relational databases, ensuring data accuracy and integrity.
Created data frames in SPARK SQL from data in HDFS and performed transformations, analyzed the data, and stored the data in HDFS.
Deployed various microservices such as Cassandra, Spark, and MongoDB in Kubernetes and Hadoop clusters using Docker.
Worked with data analysts and data scientists to develop and deploy data models and workflows using Hudi's data validation and data quality features
Automated and streamlined the process of extracting, transforming, and loading marketing and customer data into the Adobe RTCDP platform.
Worked with Spark Core, Spark Streaming, and Spark SQL modules of Spark for faster processing of data.
Developed Spark code and SQL for faster testing and processing of real-time data.
Worked on Talend ETL to load data from various sources to Data Lake.
Designed and developed interactive and user-friendly Power BI dashboards and reports that provide insights into key performance indicators (KPIs), sales trends, and operational metrics.
Created complex DAX calculations to support various business logic requirements, such as year-over-year comparisons, running totals, and forecasting.
Developed data integration strategies and oversaw the integration of data from diverse source systems into the Profisee MDM platform.
Automated Elasticsearch deployments and management using tools like Terraform.
Conducted performance tuning on EMR jobs to enhance overall job execution time and resource utilization.
Strong experience in data migration from RDBMS to Snowflake cloud data warehouse.
Optimized DBX-based workflows for performance, resource utilization, and cost efficiency, ensuring timely data delivery while minimizing operational overhead.
Used Amazon DynamoDB to gather and track the event-based metrics.
Used Amazon Elastic Beanstalk with Amazon EC2 to deploy the project in AWS.
Used Spark for interactive queries, processing of streaming data, and integrating with popular NoSQL databases for a massive volume of data.
Consumed the data from the Kafka queue using Spark.
Involved in regular standup meetings, status calls, and Business owner meetings with stakeholders.
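A minimal sketch of the S3-triggered Lambda-to-Redshift load mentioned above. It assumes the Redshift Data API as the load mechanism; the bucket, cluster, IAM role ARN, and table names are hypothetical examples, not details from this engagement.

    # Hypothetical AWS Lambda handler: when a CSV lands in S3, issue a Redshift COPY
    # through the Redshift Data API. Cluster, database, role ARN, and table are examples.
    import boto3

    redshift_data = boto3.client("redshift-data")

    def lambda_handler(event, context):
        record = event["Records"][0]
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        copy_sql = (
            f"COPY analytics.sales_staging "
            f"FROM 's3://{bucket}/{key}' "
            f"IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role' "
            f"CSV IGNOREHEADER 1;"
        )
        response = redshift_data.execute_statement(
            ClusterIdentifier="example-cluster",
            Database="analytics",
            DbUser="etl_user",
            Sql=copy_sql,
        )
        return {"statement_id": response["Id"]}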

Environment: Spark, Python, C#, AWS, S3, EMR, Glue, Hudi, Redshift, Elasticsearch, Adobe RTCDP, Kafka, DynamoDB, Hive, Spark SQL, Docker, Kubernetes, Profisee MDM, Databricks, PySpark, SSIS, Data Warehouse, Snowflake, Mulesoft, Airflow, Power BI, ETL workflows.



Client: Centene Corporation, St Louis, MO. March 2021 - April 2022
Role: Data Engineer

Responsibilities:
Managed the Profisee MDM platform, overseeing data mapping, transformation, and integration efforts for 50+ data sources.
Conducted data profiling and analysis to identify data quality issues and implemented corrective actions, resulting in a 15% increase in data accuracy.
Collaborated with business stakeholders to define data governance standards and establish data ownership responsibilities.
Designed and developed APIs using Mulesoft's Anypoint Platform, enabling seamless communication between disparate systems and applications.
Contributed to the successful implementation of Adobe Real-Time CDP for the client Centene, enabling them to leverage their data for enhanced customer experiences.
Conducted API testing, debugging, and troubleshooting, ensuring optimal API performance and reliability.
Leveraged Axon, EDC, and IDQ to establish and enforce data governance policies, resulting in improved data quality, accuracy, and compliance across the organization.
Contributed to the documentation of API specifications, usage guidelines, and best practices for fellow developers and stakeholders.
Designed and maintained data lineage documentation, ensuring transparency and compliance with data regulations.
Stayed up-to-date with the latest developments in the Adobe RTCDP and customer data platform space to provide clients with innovative solutions.
Leveraged expertise with Profisee, Microsoft Azure MDM solutions, and related technologies to drive successful MDM initiatives.
Developed a C# application that aggregates financial data from multiple sources and presents it in a unified dashboard.
Optimized SQL queries and database performance to meet the stringent demands of processing healthcare-related data using PostgreSQL
Participated in cross-functional projects, providing PostgreSQL expertise to enhance data-driven decision-making.
Worked with the Snowflake cloud data warehouse and AWS S3 buckets to integrate data from multiple source systems into Snowflake tables (sketched at the end of this section).
Designed, configured, and tested Adobe Experience Platform source and destination connections, data sets, XDM schemas, and identity namespaces, ensuring seamless data flow and integration.
Performed data extraction, transformation, loading, and integration in data warehouse, operational data stores, and master data management.
Implemented Copy Activity and custom Azure Data Factory pipeline activities.
Primarily involved in data migration using SQL, Azure SQL, Azure Storage, Azure Data Factory, SSIS, and PowerShell.
Maintained comprehensive documentation on Aurora database structures and configurations.
Architected and implemented medium to large-scale BI solutions on Azure using Azure Data Platform services (Azure Data Lake, Data Factory, Data Lake Analytics, Stream Analytics, Azure SQL DW, HDInsight/Databricks, NoSQL DB).
Migrated on-premises data (Oracle, SQL Server, DB2, MongoDB) to Azure Data Lake Storage (ADLS) using Azure Data Factory (ADF V1/V2) and the Azure self-hosted integration runtime.
Extracted, transformed, and loaded data from source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, Spark SQL, and U-SQL (Azure Data Lake Analytics).
Developed a detailed project plan and helped manage the data conversion and migration from the legacy system to Snowflake.
Mentored junior analysts, providing training on MDM concepts and data engineering best practices.
Designed and built infrastructure for the Google Cloud environment from scratch.
Leveraged cloud and GPU computing technologies for automated machine learning and analytics pipelines.
Worked with Confluence and Jira.
Used monitoring tools to track and analyze PostgreSQL performance metrics for healthcare databases.
Performed data quality issue analysis using SQL by building an analytical warehouse on Snowflake.
Ensured data quality, accuracy, and consistency in the Adobe RTCDP platform, resulting in reliable insights and improved marketing campaigns.
Designed and implemented a configurable data delivery pipeline for scheduled updates to customer-facing data stores built with Python.
Integrated Aurora with other data technologies like Kafka, Spark, and Hadoop for streamlined data pipelines
Implemented a Continuous Delivery pipeline with Docker and GitHub.
Built performant, scalable ETL processes to load, cleanse and validate data.
Participated in the entire software development lifecycle with requirements, solution design, development, QA implementation, and product support using Scrum and other Agile methodologies.
Collaborated with team members and stakeholders in the design and development of the data environment.
Prepared associated documentation for specifications, requirements, and testing.
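A hedged sketch of the S3-to-Snowflake integration noted above, using the snowflake-connector-python library and a COPY INTO from an external stage. The connection parameters, stage, and table names are illustrative assumptions rather than details from the engagement.

    # Hypothetical sketch: load staged S3 extracts into a Snowflake table with COPY INTO.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="example_account",
        user="ETL_USER",
        password="********",          # in practice, pulled from a secrets manager
        warehouse="LOAD_WH",
        database="STAGING",
        schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        # S3_SOURCE_STAGE is assumed to be an external stage pointing at the source bucket.
        cur.execute("""
            COPY INTO STAGING.PUBLIC.MEMBER_CLAIMS
            FROM @S3_SOURCE_STAGE/claims/
            FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
            ON_ERROR = 'CONTINUE'
        """)
        print(cur.fetchall())          # per-file load results
    finally:
        conn.close()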

Environment: Azure, Azure Data Factory, MDM, Axon, EDC, C#, Lambda Architecture, Profisee, Stream Analytics, Snowflake, MySQL, Docker, Databricks, Data Warehouse, SQL Server, Adobe RTCDP, Python, Scala, Spark, Hive, Spark SQL

Client: Wells Fargo, Phoenix, AZ Jan 2020 - Feb 2021
Role: Data Engineer

Responsibilities:
Assisted in the development of ETL pipelines using Apache Spark, processing, and transforming large datasets for analytics purposes.
Conducted data validation and testing to ensure accurate data transformations and proper integration into the data warehouse.
Collaborated with the team to identify performance bottlenecks and implemented optimizations, improving pipeline efficiency by 20%.
Collaborated effectively with cross-functional teams within Wells Fargo, including business analysts, data stewards, IT professionals, and compliance officers, to ensure the successful implementation of MDM projects.
Optimized Hudi queries and data processing workflows for performance and scalability
Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs, and Python.
Created Hive tables as per the requirement as managed or external tables, intended for efficiency.
Proactively and continuously drive system-wide quality improvements by undertaking thorough root cause analysis for significant incidents with component engineering teams.
Implemented Schema extraction for parquet and Avro file formats in Hive.
Used Spark SQL to load JSON data, create schema RDDs, load them into Hive tables, and handle structured data.
Performed SSIS performance tuning using counters, error handling, event handling, re-running failed SSIS packages with checkpoints, and scripting with ActiveX and VB.NET.
Developed SSIS Script Tasks, Lookup transformations, and Data Flow Tasks using T-SQL and Visual Basic (VB) scripts.
Hands-on experience with Google Cloud Platform (GCP): BigQuery, GCS buckets, Cloud Functions, Cloud Dataflow, Pub/Sub, Cloud Shell, the gsutil and bq command-line utilities, Dataproc, and Stackdriver.
Built data pipelines with Airflow on GCP for ETL jobs using various Airflow operators.
Used the Cloud Shell SDK in GCP to configure services like Dataproc, BigQuery, and Cloud Storage.
Implemented Hive UDFs for data evaluation, filtering, loading, and storing.
Collaborated with the IT team to integrate MDM processes with existing systems, optimizing data flow, and supporting efficient data access and retrieval using Profisee.
Utilized Spark SQL API in PySpark to extract and load data and perform SQL queries.
Worked on developing a PySpark script to encrypt raw data by applying hashing to client-specified columns (sketched at the end of this section).
Responsible for database design, development, and testing; developed stored procedures, views, and triggers.
Developed Python-based API (RESTful Web Service) to track revenue and perform revenue analysis.
Created Tableau reports with complex calculations and worked on Ad-hoc reporting using PowerBI.
Created a data model that correlates all the metrics and produces valuable output.
Worked on tuning SQL Queries to reduce run time by working on Indexes and Execution Plan.
Performed ETL testing activities such as running jobs, extracting data from the database with the necessary queries, transforming it, and loading it into data warehouse servers.
Designed, developed, and tested dimensional data models using the Kimball method's star and snowflake schema methodologies.
Integrated Hudi with other data technologies like Kafka, Spark, and Hive for streamlined data pipelines
Designed, implemented, and continuously improved data quality rules, standards, and processes to monitor, enhance, and maintain high data quality levels within the MDM system.
Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs, Python, and Scala.
Ensured deliverables (daily, weekly, and monthly MIS reports) were prepared to satisfy project requirements, cost, and schedule.
Designed SSIS packages to extract, transform, and load (ETL) existing data into SQL Server from different environments for SSAS cubes (OLAP).
Used SQL Server Reporting Services (SSRS) to create and format Cross-Tab, Conditional, Drill-down, Top N, Summary, Form, OLAP, sub-reports, ad-hoc reports, parameterized reports, interactive reports, and custom reports.
Created action filters, parameters, and calculated sets for preparing dashboards and worksheets using PowerBI.
Created dashboards for analyzing POS data using Power BI.
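An illustrative PySpark sketch of the column-hashing script described above. The input path, sensitive-column list, and salt handling are hypothetical; salted SHA-256 is shown as one common way to hash client-specified columns.

    # Hypothetical PySpark sketch: mask client-specified columns with salted SHA-256.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("column-hashing").getOrCreate()

    SENSITIVE_COLUMNS = ["ssn", "account_number", "email"]   # example column list

    df = spark.read.parquet("hdfs:///data/raw/customers/")
    for col_name in SENSITIVE_COLUMNS:
        # Prepend a salt before hashing so values are not trivially reversible,
        # while equal inputs still hash to the same output and remain joinable.
        df = df.withColumn(
            col_name,
            F.sha2(F.concat(F.lit("example_salt"), F.col(col_name).cast("string")), 256),
        )

    df.write.mode("overwrite").parquet("hdfs:///data/masked/customers/")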

Environment: Spark, Python, ETL, Profisee, Power BI, PySpark, Hive, GCP, BigQuery, DataProc, Data Pipeline, IBM Cognos 10.1, Cognos Report Studio 10.1, Cognos 8 & 10 BI, Cognos Connection, Cognos Office Connection, Cognos 8.2/3/4, Hudi, Data Warehouse, Airflow, MDM, DataStage and QualityStage 7.5, MS SQL Server 2016, T-SQL, SQL Server Integration Services (SSIS), SQL Server Reporting Services (SSRS), SQL Server Analysis Services (SSAS), Management Studio (SSMS), Advanced Excel

Client: Humana Insurance, Louisville, KY Sep 2017 - Dec 2019
Role: Data Analyst

Responsibilities:
Handled importing of data from various sources, loading data into HDFS and importing data from MySQL into HDFS using Sqoop.
Involved in loading data from the edge node to HDFS using shell scripting.
Hands-on experience in architecting ETL transformation layers and writing Spark jobs to do the processing.
Integrated Elasticsearch with other data technologies like Kafka, Spark, and Hadoop for streamlined data pipelines.
Gathered and processed raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, and writing applications).
Optimized SQL queries to ensure fast and reliable query performance on Redshift clusters, considering distribution and sort keys.
Implemented and optimized distributed data processing workflows using Hadoop and Spark on EMR to handle and analyze massive datasets.
Imported data from AWS S3 into Spark RDD and performed actions/transformations on them.
Created Partitions, Bucketing, and Indexing for optimization as part of Hive data modeling.
Involved in developing Hive DDLs to create, alter, and drop Hive tables.
Worked on different RDDs to transform the data coming from other data sources and transform data into required formats.
Developed Spark applications using PySpark and Spark SQL for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns (sketched at the end of this section).
Developed and maintained ETL jobs using AWS Glue to extract, transform, and load data from various sources into data warehouses.
Involved in analyzing system failures, identifying root causes, and recommending course of action.
Managing and Scheduling jobs on Hadoop Cluster using Oozie workflows.
Used Spark stream processing to bring data into memory, implementing RDD transformations and actions to process it in units.
Designed and implemented Redshift data warehouses to support analytical queries and reporting requirements.
Created and worked with Sqoop jobs with total load to populate Hive External tables.
Worked extensively on Hive to create, alter, and drop tables and was involved in writing hive queries.
Created and altered HBase tables on top of data in the data lake.
Analyzed SQL scripts and redesigned them using PySpark SQL for faster performance.
Wrote Shell scripts to automate Hive or Sqoop jobs to fetch, transform or query the data.
Worked on performance tuning for the HIVE queries.
Integrated EMR with other AWS services, leveraging S3 for data storage
Collected the log data from web servers and integrated it into HDFS using Flume.
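A minimal sketch of the multi-format extraction and aggregation pattern described above: reading CSV and JSON extracts from S3, unioning them, and aggregating usage per customer with Spark SQL functions. The paths, columns, and output location are hypothetical.

    # Hypothetical PySpark sketch: combine CSV and JSON usage extracts and aggregate.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("usage-aggregation").getOrCreate()

    csv_usage = (spark.read.option("header", "true")
                           .csv("s3://example-bucket/usage/csv/")
                           .withColumn("amount", F.col("amount").cast("double")))
    json_usage = spark.read.json("s3://example-bucket/usage/json/")

    usage = csv_usage.select("customer_id", "event_type", "amount").unionByName(
        json_usage.select("customer_id", "event_type", "amount")
    )

    summary = (usage.groupBy("customer_id", "event_type")
                    .agg(F.count("*").alias("events"),
                         F.sum("amount").alias("total_amount")))
    summary.write.mode("overwrite").parquet("s3://example-bucket/curated/usage_summary/")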

Environment: Hive, Sqoop, Spark, HDFS, PySpark, MySQL, Oozie, Flume, HBase, Python, YARN, Cloudera, RDDs, AWS, EMR, Redshift, Elasticsearch, Glue.

Client: Momentum Business Solutions, Hyderabad, India. Jun 2014 - Nov 2016
Role: Data Analyst

Responsibilities:
Devised simple and complex SQL scripts to check and validate Dataflow in various applications.
Performed Data Analysis, Migration, Cleansing, Transformation, Integration, Data Import, and Data Export through Python.
Continuously updated Excel skills through training and self-learning to stay current with the latest features and techniques for data analysis.
Devised PL/SQL Stored Procedures, Functions, Triggers, Views, and packages. Made use of Indexing, Aggregation, and Materialized views to optimize query performance.
Developed logistic regression models (using R and Python) to predict subscription response rate based on customer variables like past transactions, response to initial mailings, promotions, demographics, interests, and hobbies (sketched at the end of this section).
Created Tableau dashboards/reports for data visualization, Reporting, and Analysis and presented them to Business.
Redefined many attributes and relationships in the reverse engineered model and cleansed unwanted tables/columns as part of data analysis responsibilities.
Interacted with the database administrators and business analysts for data type and class words.
Conducted design sessions with business analysts and ETL developers to come up with a design that satisfies the organization's requirements.
Created Data Connections, Published on Tableau Server for usage with Operational or Monitoring Dashboards.
Developed data cleaning processes using Excel, including data consolidation, text-to-columns, and conditional formatting, to ensure data accuracy.
Knowledge of the Tableau Administration Tool for configuration, adding users, managing licenses and data connections, scheduling tasks, and embedding views by integrating with other platforms.
Worked with senior management to plan, define and clarify dashboard goals, objectives, and requirements.
Responsible for daily communications to management and internal organizations regarding the status of all assigned projects and tasks.
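An illustrative Python (scikit-learn) sketch of the subscription-response logistic regression described above. The input file, feature columns, and target column are hypothetical placeholders for the customer variables mentioned.

    # Hypothetical sketch: logistic regression to predict subscription response rate.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    data = pd.read_csv("customer_history.csv")   # past transactions, demographics, etc.
    features = ["past_transactions", "initial_mailing_response", "promotions", "age"]
    X = data[features]
    y = data["subscribed"]                       # 1 = responded to the subscription offer

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42
    )

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)

    pred_proba = model.predict_proba(X_test)[:, 1]
    print("Test AUC:", roc_auc_score(y_test, pred_proba))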

Environment: SQL, Tableau, R, Python, Excel, Lookups, Dataflow, PL/SQL, ETL.