Madhu Bikkineni
DataOps/BI Data Engineer
Austin, TX | (972) 505-3896
[email protected]

Professional Summary
A Microsoft-certified associate with 11 years of IT experience in DataOps, with strong exposure to data, analytics, and operations as a Data and BI Engineer.
Proficient in the design and implementation of scalable data solutions that drive business intelligence and operational efficiency.
Demonstrated ability to lead end-to-end data projects from conception to execution.
Expertise in data integrations, BI reporting solutions, and cloud-based technologies, with a strong focus on ensuring data integrity and security.
An experienced data engineer specializing in cloud-based data integrations that feed data and analytics applications.
Proficient in building data pipelines using PySpark and Spark SQL in Databricks for data extraction, transformation, and aggregation across multiple file formats to feed data analytics applications (see the pipeline sketch at the end of this summary).
Experienced in building a unified data catalog (Databricks Unity Catalog) to simplify the management of data assets and serve the data requirements of analytics applications and of functional and business teams across the organization.
Expertise in Databricks administration and maintenance activities, and in enforcing best practices across Databricks workspaces and data catalogs.
Proficient in creating and migrating Databricks workflows, delivering feature enhancements, managing cluster policies and configurations, and troubleshooting issues reported by application users.
Proficient in securing Databricks catalog objects for Azure Active Directory users, custom groups, managed identities, and service principals through SCIM provisioning.
Proficient in configuring and maintaining CI/CD pipelines to automate resource provisioning, access management, and code deployment release cycles.
Experienced in migrating on-prem solutions to Azure and to a private cloud.
Implemented data and storage management solutions in Azure.
Proficient in building data pipelines in Azure Data Factory using various Azure resources.
Experienced in building complex reporting solutions and operational dashboards that deliver actionable insights and aid business decision-making.
Good knowledge of the design and implementation of scalable data observability systems that can handle increased data volumes without compromising performance.
Contributed to the data architecture platform, enabling it to be easily adopted by data governance, data science, third-party data consumers, business users, and BI applications.
Expertise in requirements gathering, stakeholder management, proofs of concept for feasibility analysis, and implementation of scalable data and reporting solutions.
Adept at collaborating with cross-functional teams and translating business requirements into actionable insights.
Experience in data ingestion using Azure Data Factory from multiple sources, including on-premises systems, ADLS, Azure SQL Database, Azure Databricks, and Databricks Delta Lake.
Deep understanding of Azure Data Factory (ADF) objects: activities, data flows, pipelines, parameters, variables, integration runtime services, and triggers.
Experience in establishing connectivity from Databricks to Azure Blob Storage, Azure Synapse Analytics, ADLS Gen2, JDBC connectors, on-prem databases, and external sources through APIs.
Experience in BI reporting solution design, development, implementation, and application server management using Power BI, the Microsoft BI stack (SSIS, SSAS, and SSRS), TIBCO Spotfire, Tableau, SAP BusinessObjects, and OBIEE.
Proficient with Power BI: importing data from multiple sources (Excel, flat files, JSON, SQL), cleansing and transforming it with Power Query (M), and ensuring query folding for better performance.
Expertise with DAX to create calculated columns and measures using functions including CALCULATE, FILTER, ALL, RELATED, SUMMARIZE, and time-intelligence functions.
Expertise in Power BI reports and dashboards using column/bar/line charts, donut charts, treemaps, and KPIs, with interactive capabilities including drill up/down/through, filters, slicers, and bookmarks.
Managing workspaces, datasets, reports, and schedules, and configuring gateways in the Power BI Service.
Experienced with SSIS ETL packages: extracting data from diverse sources including Excel, flat files, and OLE DB; transforming with various control flow and data flow items; and loading to destinations including SQL Server, Excel, flat files, and OLE DB.
Implemented Row-Level Security (RLS) by applying filters on datasets.
Experienced in building operational SSRS reports, including tabular, matrix, sub-reports, and parameterized reports, using expressions.
Built data extracts and self-service models to serve ad hoc business requirements.
Deep understanding of the usage of trend lines, reference lines, and forecasting techniques.
Worked on the installation and configuration of application servers and node manager servers, clustering activities, upgrades, and the scheduling of dashboards and ad hoc refresh requests.
Extensive experience in library content migration activities across environments.
Application server monitoring and maintenance, debugging server logs to analyze platform-level issues, and working closely with end users on root-cause analysis and solutions.
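The following is a minimal sketch of the kind of PySpark/Spark SQL pipeline referenced above; the paths, column names, and target table are hypothetical assumptions for illustration, not production code.

```python
# Minimal sketch of a Databricks-style PySpark pipeline: extract raw files in
# different formats, transform/aggregate, and load a Delta table.
# All paths and names below are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales_ingest").getOrCreate()

# Extract: read raw files in different formats.
orders = spark.read.json("/mnt/raw/orders/")
accounts = spark.read.option("header", True).csv("/mnt/raw/accounts/")

# Transform: standardize types, join, and aggregate.
daily = (
    orders.withColumn("order_date", F.to_date("order_ts"))
          .join(accounts, "account_id")
          .groupBy("order_date", "region")
          .agg(F.sum("amount").alias("total_amount"),
               F.countDistinct("order_id").alias("order_count"))
)

# Load: persist an aggregated Delta table for downstream analytics
# (assumes a Databricks or delta-spark environment).
daily.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_sales")
```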

Technical Skills:
Azure Services: Data Factory, Data Lake, Delta Lake, Azure AD, Azure Integration Services (Logic Apps, APIM, Functions), DevOps
ETL Tools: Databricks, SSIS, Power Query
Reporting Tools: Power BI, Tableau, SSRS, TIBCO Spotfire, SAP BOBJ, OBIEE
Databases: SQL Server, Snowflake, SingleStore, MySQL, PostgreSQL, Oracle, Cosmos DB, MongoDB
Programming/Scripting: Python, PySpark, Spark SQL, PL/SQL, DAX, shell scripting
Version Control: Azure DevOps, TFS, GitLab, GitHub, SVN
Methodologies: DevOps, Agile, JIRA
Client: Kestra Financial | Jun 2022 – Present
Role: BI Data Engineer
Description:
The scope of the project includes Databricks administration and workspace consolidation, building a unified data catalog, migrating Azure resources into a private cloud, configuring CI/CD pipelines, analyzing data pipeline failures, resolving security vulnerabilities in data processes, and the tech modernization of legacy data processes.
Responsibilities:
Transitioned to Unity Catalog to replace the legacy Hive metastore, simplifying data asset maintenance and consolidating Databricks workspaces.
Performed Databricks administration activities, including catalog maintenance, security and access management, and provisioning of resources such as cluster pools, policies, serverless SQL warehouses, workflows, and feature enhancements.
Designed a process to migrate existing data processes from the Hive metastore to Unity Catalog (a migration sketch follows this list).
Set up guidelines for the new architecture and helped cross-functional and data teams adhere to these standards as part of enforcing best practices.
Led the effort to move Azure data resources to a private cloud in alignment with the new hybrid-cloud strategy.
Automated resource provisioning and code deployment release cycles by configuring CI/CD pipelines at the process level.
Implemented a new data strategy to ingest FINRA files into Delta Lake.
Scheduled workflows implementing batch, micro-batch, and real-time streaming solutions for the data pipelines.
Collaborated on ETL tasks, maintaining data integrity and verifying pipeline stability.
Ingested data from multiple sources into Azure SQL Database using ADF capabilities.
Created pipelines, data flows, datasets, linked services, activities, and data flow tasks in ADF.
Applied various ADF data flow transformations such as data conversion, conditional split, derived column, lookup, join, union, aggregate, pivot, and filter.
Implemented SCD Type 1 and Type 2 using MERGE functionality and created a dimension framework driven by JSON config files (see the MERGE sketch after this list).
Troubleshot overnight job and Azure data pipeline failures by analyzing Azure Monitor logs with Kusto queries.
Coordinated with data governance teams on the integration of Alation with Databricks catalogs.
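As a rough illustration of the Hive-metastore-to-Unity-Catalog migration described above, here is a hedged sketch using Databricks' DEEP CLONE; the catalog, schema, and table names are hypothetical, and DEEP CLONE is only one of several supported migration paths.

```python
# Hypothetical sketch: copy managed Hive metastore Delta tables into Unity
# Catalog with DEEP CLONE, then verify row counts. Assumes a Databricks
# notebook, where `spark` is predefined; names (main, finance, ...) are
# placeholders, not the actual catalog layout.
tables = ["trades", "positions"]

for t in tables:
    spark.sql(f"""
        CREATE TABLE IF NOT EXISTS main.finance.{t}
        DEEP CLONE hive_metastore.finance.{t}
    """)
    src = spark.table(f"hive_metastore.finance.{t}").count()
    dst = spark.table(f"main.finance.{t}").count()
    assert src == dst, f"Row count mismatch while migrating {t}"
```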
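And a minimal sketch of the SCD Type 2 pattern via the Delta Lake MERGE API; the dimension, staging table, and tracked column are illustrative assumptions, not the project's actual model.

```python
# Hedged sketch of an SCD Type 2 upsert using the Delta Lake MERGE API;
# table and column names are illustrative only.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

dim = DeltaTable.forName(spark, "gold.dim_advisor")   # target dimension
updates = spark.table("staging.advisor_updates")      # incoming batch

(dim.alias("d")
    .merge(updates.alias("u"),
           "d.advisor_id = u.advisor_id AND d.is_current = true")
    # Expire the current row when a tracked attribute changes.
    .whenMatchedUpdate(
        condition="d.branch <> u.branch",
        set={"is_current": F.lit(False), "end_date": F.current_date()})
    # Insert rows for advisors not seen before.
    .whenNotMatchedInsert(values={
        "advisor_id": "u.advisor_id",
        "branch": "u.branch",
        "start_date": F.current_date(),
        "end_date": F.lit(None).cast("date"),
        "is_current": F.lit(True)})
    .execute())
# Note: a second insert pass (or a pre-staged union of "new version" rows)
# is still needed to add the fresh versions of the expired records.
```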
Client: Dell Technologies | Oct 2019 – May 2021
Role: Senior BI Developer
Description:
The project was about building a data lake to cater to reporting and analytics requirements by bringing together inventory-related information scattered across various systems, including Oracle EBS and Novora; building operational dashboards using Power BI and non-operational reports using SSRS with enhanced, interactive capabilities as part of tech modernization; and building operational dashboards on SingleStore and non-operational reports on Greenplum/Teradata per business requirements.

Responsibilities:
Requirements gathering and analysis, and providing mock layouts for review calls.
Ingested data from multiple sources into Azure Data Lake Storage by orchestrating data pipelines.
Created data pipelines, datasets, linked services, activities, and data flow tasks using Azure Data Factory capabilities.
Modernized SSIS packages using Azure and Databricks capabilities.
Applied mapping and transformation logic, data cleansing, and filtering, and standardized datasets to feed Power BI dashboards (see the sketch after this list).
Applied various SSIS/ADF data flow transformations such as data conversion, conditional split, derived column, lookup, multicast, union, aggregate, and script component.
Built operational reports for real-time analysis to improve process efficiency.
Produced ad hoc extracts per requirements.
Built Proof of Concept (POC) models to help business/functional teams assess the impacts of the proposed design changes.
Built BI reports/dashboards with interactive capabilities for end users using Power BI features.
Created calculated columns and measures using DAX functions for KPIs.
Enabled self-service Power BI models for the business team to meet their ad hoc needs.
Generated ad hoc reports with Power BI, sourcing Excel, flat files, CSV, JSON files, and APIs.
Built interactive reports with rich visualizations, KPIs, and scorecards.
Enabled auto-refresh of reports, both standard and data-driven.
Employed data extraction methods to feed downstream systems/applications.
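Below is a minimal sketch, under assumed paths and column names, of the cleansing/standardization step that fed the Power BI dashboards; the actual pipelines ran in ADF and Databricks, so this is illustrative only.

```python
# Minimal sketch of a cleansing/standardization step feeding Power BI;
# source path, column names, and rules are assumptions for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("inventory_cleanse").getOrCreate()

raw = spark.read.option("header", True).csv("/mnt/raw/inventory/")

clean = (
    raw.filter(F.col("part_number").isNotNull())          # drop unusable rows
       .withColumn("plant", F.upper(F.trim("plant")))      # standardize codes
       .withColumn("qty_on_hand", F.col("qty_on_hand").cast("int"))
       .withColumn("snapshot_date", F.to_date("snapshot_ts"))
       .dropDuplicates(["part_number", "plant", "snapshot_date"])
)

# Persist a curated table that a Power BI dataset can query directly.
clean.write.format("delta").mode("overwrite").saveAsTable("curated.inventory_daily")
```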
Environment: Azure Data Factory, Azure Data Lake Storage, Databricks, Key Vault, Power BI, DAX, SSRS, SSIS, SQL Server Analysis Services (SSAS), T-SQL, SingleStore, Greenplum, Kafka, Teradata, Azure SQL, Azure Blob, Azure Storage Explorer, SQL Server 2012, TFS, GitLab
Client: Anadarko Petroleum Corporation (IPSO) | Jul 2019 – Sep 2019
Role: Data Visualization Engineer
Description:
The scope of the project is to build a Tableau dashboard on production forecasting and improve operational efficiency with the help of key metrics, including total oil output vs. net oil output values, using historical data. The dashboard also features visuals of well locations by operator, with variables including production of gas, oil, and water over time.
Responsibilities:
Designed and implemented proof-of-concept solutions and created advanced BI visualizations.
Used Data Blending, groups, combine fields, calculated fields, and aggregated fields to compare and analyze data from different perspectives.
Developed Interactive Tableau Dashboards with filters, quick filters, context filters, global filters, parameters, and calculated fields on Tableau reports.
Developed Tableau dashboards featuring trend lines, reference lines, and drill-down/drill-up report functionality.
Published workbooks by creating user filters and security so that intended teams can view them.
Developed parameter and dimension-based reports, drill-down reports, matrix reports, charts, and Tabular reports using Tableau Desktop.
Generated context filters and used performance actions to manage the data volumes.
Involved in troubleshooting and performance tuning of reports and resolving issues within Tableau Server and Reports.
Environment: Tableau Desktop, Tableau Server, MySQL, Excel, Windows Server

Client: General Electric | Mar 2016 – May 2019
Role: BI Consultant
Description:
GE Global Operations EDM is an integrated BI platform comprising Tableau, TIBCO Spotfire, and BusinessObjects applications that serve various GE business units. The role covered data modeling, the design and implementation of data load strategies to power BI reports/dashboards, BI report creation per business requirements, BI server monitoring and maintenance, and end-user support on application performance issues.
Responsibilities:
Gathered and analyzed business requirements and created mock reports for review calls.
Created conditional filters and action links to filter data on dashboards.
Developed worksheets using calculated fields, hierarchies, parameters, groups, sets, quick table calculations, and filters.
Developed worksheets with built-in and advanced visualizations, including funnel charts, box plots, word clouds, donut charts, crosstabs, bar charts, treemaps, heat maps, and filled maps.
Developed reports against multiple data sources, including PostgreSQL, SQL Server, Oracle, and spreadsheets, using joins/data blending methods.
Managed different sites, created users, and provided access to different site roles.
Monitored server activity/usage statistics to identify performance bottlenecks.
Responsible for Tableau Server monitoring and maintenance, and for troubleshooting issues.
Scheduled extract refreshes and subscriptions based on business requirements.
Monitored extracts and cleaned up unused extracts, improving server performance.
Involved in the upgrade of Spotfire from version 7.0 to 7.6 and 7.11.
Installation, configuration, hotfix deployment, content migration, dashboard scheduling, user security management, and node and resource pool management in Spotfire.
Migrated library elements, including dashboards, columns, filters, joins, information links, and data sources.
Performed weekend server restarts and monitored server performance.
Troubleshot migration issues and schedule update failures by analyzing logs.
Implemented security setup, including the creation of custom access levels and granular rights at different levels, including folder, application, universe, and connection.
Involved in the upgrade of the BO environment from XI 3.1 to BI 4.1, including cleanup activities for storage reduction.
Experience in migrating objects between environments using the Import Wizard and Promotion Management.
Troubleshot issues users faced while logging into BO applications.

Environment: TIBCO Spotfire Server 7.x, TIBCO Spotfire node manager server, Tableau Desktop, Tableau Server 10.x, SAP BusinessObjects 3.1/4.1, Oracle, PostgreSQL, ServiceNow, Windows Server 2012

Client: GE Corporate | Apr 2014 – Feb 2016
Role: MSBI Developer
Description:
The scope of the project was to cater to the reporting requirements of functional users and to aid the overall decision-making process with customized BI reporting solutions, building robust support systems using data warehouse modeling techniques.
Responsibilities:
Involved in the design and implementation of reporting requirements using SSRS/SSIS per business requirements, and provided documentation.
Created SSIS packages for uploading different formats of files (Excel, Access) and databases (SQL Server, flat files) into the SQL Server data warehouse.
Extracted large volumes of data from various data sources and loaded it into target data sources, performing various kinds of transformations using SQL Server Integration Services (a simplified sketch follows this list).
Developed and deployed SSIS packages for ETL from OLTP and various sources to staging, and from staging to the data warehouse, using Lookup, Fuzzy Lookup, Derived Column, Conditional Split, Term, Slowly Changing Dimension, and more.
Performed ETL mappings using MS SQL Server Integration Services.
Created jobs in SSIS and was responsible for the ETL job scheduled to run daily.
Resolved issues associated with ETL data warehouse failures.
Worked with business analysts and functional users on requirements gathering and translated requirements into technical specifications.
Developed stored procedures, views, and T-SQL scripts for complex business logic.
Used SQL Server Reporting Services (SSRS) to deliver enterprise, web-enabled reporting, creating reports that draw content from a variety of data sources.
Designed and implemented a variety of SSRS reports, such as parameterized, drill-down, ad hoc, and sub-reports, using Report Designer and Report Builder, based on requirements.
Designed SSRS reports with dynamic sorting, data source definitions, and subtotals.
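The actual implementation above was SSIS; purely to illustrate the lookup/conditional-split staging pattern, here is a rough Python approximation with a hypothetical DSN and hypothetical table and column names.

```python
# Rough Python approximation of the staging-to-warehouse SSIS pattern
# (Lookup + Conditional Split). The real work used SSIS, not Python;
# DSN, tables, and columns below are placeholders.
import pyodbc

conn = pyodbc.connect("DSN=warehouse")  # assumed ODBC data source
cur = conn.cursor()

# "Lookup": fetch existing business keys from the dimension.
cur.execute("SELECT customer_key, customer_id FROM dbo.DimCustomer")
existing = {row.customer_id: row.customer_key for row in cur.fetchall()}

# "Conditional split": route staged rows to update or insert paths.
cur.execute("SELECT customer_id, name, city FROM stg.Customer")
for row in cur.fetchall():
    if row.customer_id in existing:
        cur.execute(
            "UPDATE dbo.DimCustomer SET name = ?, city = ? WHERE customer_key = ?",
            row.name, row.city, existing[row.customer_id])
    else:
        cur.execute(
            "INSERT INTO dbo.DimCustomer (customer_id, name, city) VALUES (?, ?, ?)",
            row.customer_id, row.name, row.city)

conn.commit()
conn.close()
```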
Environment: SQL Server 2012, SQL BI Suite (SSIS, SSAS, SSRS), ETL, T-SQL, Windows Server

Client: Herbalife | Aug 2012 – Mar 2014
Role: OBIEE Developer
Description:
The aim of the project is to support clients and management by building customized reporting solutions that leverage business intelligence for effective, data-driven decision making.
Responsibilities:
Designed complex models with multiple facts using LTS and proper aggregates content definition.
Developed level, value, and parent-child hierarchies, and time-series calculations.
Deployed the shared repository to DEV, UAT, and PROD environments, with manual upload and download of the RPD using EM.
Moved catalog objects across environments (dev, stage, and prod instances).
Worked in a multi-user development environment (MUDE) for RPD changes.
Worked on cache configuration and management activities, involving purging via event polling tables and manually, and unit testing against the RPD, reports, and dashboards.
Created reports using various layouts/views, including charts and pivots, with report-level calculations.
Developed prompts and dashboards with conditions, actions, and agents, and performed cache seeding.
Involved in creating and managing session, repository, and presentation variables.
Assisted project teams in deploying the RPD and migrating code across environments.
Implemented Oracle BI security setup (groups, data access/query privileges) for metadata objects and web catalog objects (dashboards, pages, folders, and reports), as well as user security management.
Restarted services from EM, the console, and the command line.
Updated configuration files, including instanceconfig and NQSConfig, per business requirements.
Actively engaged on end-user application usage issues through to resolution.
Environment: OBIEE 10.1.3.4, OBIEE 11.1.1.3, 11.1.1.6, Oracle, Remedy, Windows

Education:
Master of Science in Computer and Information Science from Southern Arkansas University, Arkansas
Master of Science in Industrial Technology from University of Central Missouri, Missouri
Bachelor of Technology in Production Engineering from Kakatiya University, India

Certifications:
Microsoft Certified: Power BI Data Analyst Associate
Microsoft Certified: Azure Data Fundamentals
Microsoft Certified: Azure Fundamentals