Chandra Bhan - Senior Data Engineer
bhaanchandra87@gmail.com
Location: Seattle, Washington, USA
Relocation: Open
Visa: H-1B
Chandra Bhan
Seattle, WA 98391
Bhaanchandra87@gmail.com | Ph: 980-485-4544*104

Career Overview:
I have extensive expertise in the data domain and have worked with industry leaders such as Amazon, Apple, Oracle, UnitedHealth Group (UHG), Xfinity, and Agilent Technologies in various roles, including Architect, Software Developer, Data Engineer, Data/Business Analyst, and Data Modeler. I specialize in building and optimizing Data Lakehouses, Data Lakes, and Data Warehouses. My expertise includes designing large-scale cloud infrastructure, developing robust data platforms, implementing CI/CD pipelines and BI systems, and migrating complex ETL workflows while ensuring high-performance optimization.

Education and Professional Certifications:
Master of Computer Applications - 2006

Technical Skills:
Programming Languages: Python, PySpark, Spark, Terraform, Ruby, Scala, PL/SQL, Java, shell script, Perl
Databases: Oracle ADW, ADB, MySQL, Redshift, Aurora RDS, DynamoDB, EMR, Redshift Spectrum, MS SQL Server, Delta/Data Lake
AWS Tools: Redshift, Redshift Spectrum, S3, EMR, Lambda, Crawler, Glue, ETLM, Hoot, Andes, Data Lake, SQS, SNS, QuickSight
OCI Tools: OCI-ADW, OCI-JDW, Data Integrator, DataFlow, Data Catalog, OKE, Container Registry, Oracle Data Science
SQL Query Tools: Toad, SQL Data Modeler, SQL Developer, PL/SQL Developer, MySQL Workbench, SQL Workbench
Virtualization/Containerization: Docker, Podman, Kubernetes
Batch Scheduling: Apache Airflow, DJS, Autosys, Cron, Datanet
Methodologies: Waterfall, Agile
Source Control: Bitbucket, Git, SVN, CVS, VSS
Configuration Management: Wiki, Quip, Confluence, SharePoint, JIRA, Rally, SIM, TWiki
Operating Systems: macOS, Windows, Linux, UNIX
Domain Expertise: Cloud Security, Transportation, Supply Chain, Video Data Processing (Telecom), Point of Sale (Retail), Healthcare

Relevant Experience and Accomplishments:
Cloud & Data Architecture: Expertise in designing and implementing distributed, scalable, and fault-tolerant software systems on OCI and AWS infrastructure. Led architectural reviews, quality assurance, and compliance (CSAP, ECAR).
Data Engineering & Platform Development: Extensive experience building Data Platforms, Data Warehouses, and Data Lakes, handling structured and unstructured data, and migrating large-scale ETL pipelines from legacy systems (Mainframe, SAS) to cloud architectures.
ETL & Automation: Proficient in building and optimizing ETL pipelines using AWS, OCI, Python, PySpark, Ruby, Shell/Perl scripting, and other cloud-native tools. Expertise in batch processing, orchestration, and automation for high-performance data processing. Write code and perform code reviews to ensure code quality and adherence to coding standards.
Database Engineering & Optimization: Deep knowledge of database design and performance tuning across Oracle, Redshift, PostgreSQL, DynamoDB, EMR, and others. Experienced with stored procedures, triggers, materialized views, and index optimization.
Industry Expertise: Proven experience in Risk Management, Cloud Security, POS Systems, Risk Data Aggregation (RDA), Transportation, Fulfillment Data Processing, and Healthcare Data Projects.
AI & Analytics: Designed and implemented high-volume data structures for AI-driven analytics.
Technical Leadership & Communication: Skilled in code reviews, coding standards enforcement, and translating complex technical concepts into business-friendly insights. Strong experience in technical documentation (BRDs, HLDs, test cases, user stories, traceability documents).
WORK EXPERIENCE:

Oracle Cloud - Sr. Principal Data Software Developer | Seattle, WA | Oct 2021 - Present
Data Architecture and Data Platform: Designed and built a Data Lake and Data Warehouse for Oracle Cloud security. Developed Terraform scripts to automate infrastructure deployment across Oracle Cloud in multiple realms. Virtualized and containerized the Lakehouse solution for multi-region deployment and developed a business continuity plan.
Collaboration and Cross-functional Teamwork: Collaborated with cross-functional teams to gather data requirements and facilitate data usage. Worked closely with Security Engineers to transform and load data into the Lakehouse for vulnerability scanning. Partnered with other teams to scrape, transform, and load data to meet reporting needs.
Technology Selection and CI/CD: Evaluated and selected tools, established CI/CD pipelines, and implemented Docker and Kubernetes for containerization.
ETL Pipeline Development: Built scalable ETL pipelines using Python, PySpark, Spark, and other OCI technologies. Migrated complex ETL pipelines, transforming data from multiple sources for reporting and dashboards.
Security and Access Control: Designed an API security system and implemented IAM policies with row-level authentication for secure data access. Researched source systems to extract data that security teams use to report open vulnerabilities by criticality.
Dashboard and Reporting: Designed dashboards for monitoring vulnerabilities, tracking security trends, and visualizing metrics.

Amazon Corporate LLC - Data Software Engineer | Seattle, WA | Nov 2016 - Oct 2021
ETL Pipeline Development & Migration: Designed and built complex ETL pipelines using Redshift, EMR, Spectrum, Glue, Lambda, DynamoDB, and PostgreSQL. Migrated large-scale ETL workflows from Oracle to AWS with improved performance. Developed and deployed packages using Amazon's proprietary Apollo Cloud Control (ACC).
Performance Optimization & Data Integrity: Optimized business-critical queries and pipelines for efficiency. Improved data ingestion models and ETL workflows to enhance data integrity and availability. Migrated terabyte-scale tables using advanced algorithms to optimize database resource utilization. Identified and resolved data quality issues in AWS technologies.
Data Modeling & Automation: Designed data warehouse models for seamless migration across Oracle, PostgreSQL, DynamoDB, and S3. Automated data ingestion and ETL processes using Python, PySpark, Ruby, and Shell scripting. Developed streaming and batch data ingestion capabilities.
Database Management & Scaling: Led DB capacity planning, scaling, and performance tuning for high-volume processing. Designed ETLM, BI Metadata, and BICON functionalities to enhance database tools. Planned DB space estimation and archival strategies for future growth.
Cloud & Data Engineering Leadership: Mentored Data Engineers on AWS cloud technologies. Migrated thousands of ETL jobs to AWS with minimal operational overhead. Applied best practices that improved query performance by 10x over traditional databases. Processed and transformed unstructured datasets for multiple consumers.

UnitedHealth Group (Optum) - Data Engineer | Eden Prairie, MN | Apr 2016 - Nov 2016
Handled project planning and coordination with all stakeholders and boundary teams. Coordinated with clients on requirement gathering and design clarification. Mapped source-to-target systems and translated functional requirements into technical tasks.
Designed and developed PL/SQL stored procedures, functions, packages, triggers, dynamic SQL, views, and materialized views. Optimized SQL and PL/SQL performance using EXPLAIN PLAN, SQL*TRACE, TKPROF, AUTOTRACE, and AWR reports. Presented database designs to DBAs, SMEs, Security, and Production teams for review and approval. Coordinated with QA/Test teams for dry runs and deployment. Conducted root cause analysis for production issues. Designed DB space estimation and archival strategies for future data growth. Managed data migration, validation, mapping, synchronization, and cleanup.

Comcast - Data Engineer | Denver, CO | Jan 2013 - Mar 2016
Requirement Gathering & Design: Coordinated with clients on requirement gathering and created architecture, data flow, and class diagrams for schema mapping.
Database Development: Designed and implemented tables, views, materialized views, PL/SQL stored procedures, functions, packages, triggers, and dynamic SQL for new requirements.
Performance Optimization: Optimized SQL and PL/SQL using EXPLAIN PLAN, TKPROF, AUTOTRACE, AWR reports, bulk collections, and partition swapping for better performance.
ETL & Data Processing: Fetched data via FTP/SCP and DB links, performed ETL using PL/SQL and Perl, and automated data loading with SQL*Loader scripts.
Deployment & Scheduling: Prepared DB rollout and rollback plans, scheduled Crontab and Oracle Scheduler jobs, and optimized query performance for efficiency.
Data Migration: Led environment migrations, ensuring smooth data transition between systems with minimal downtime.

Apple Inc. (Contract) - PL/SQL Developer | Cupertino, CA | Jun 2011 - Jan 2013
Database Design & Optimization: Designed, redesigned, and optimized database objects for each release, automating DB tasks and improving performance.
Development of DB Objects: Created tables, queues, materialized views, packages, procedures, triggers, PL/SQL objects, nested tables, roles, synonyms, and Unix scripts for new requirements.
Security & Access Management: Managed grants, roles, and public/private synonyms, and analyzed EXPLAIN PLAN output for optimized SQL performance.
Release & Impact Analysis: Tracked DB object changes, performed impact analysis, and prepared DB rollout and rollback plans for each release.
Automation & Performance Tuning: Developed Unix scripts for automating DB tasks and data reconciliation, reviewed AWR reports, and optimized production queries.
ETL & Data Loading: Wrote SQL*Loader scripts to process feed files via middleware, created external tables for file-based data ingestion, and optimized OLTP table partitions.
Scheduling & Data Validation: Automated data flow and reconciliation scripts using Autosys, Crontab, and Unix scheduling to ensure smooth downstream processing.

Agilent Technologies (Contract via TCS) - PL/SQL Developer | India / San Jose, CA | Dec 2006 - Jun 2011
Designed the archival strategy and provided the project plan. Coordinated with the front-end design team to provide the necessary stored procedures and packages. Created PL/SQL scripts to extract data from the operational database and load it into archival tables per the archival criteria. Wrote UNIX shell scripts to archive files from the PROD DB to the ARCHIVAL DB at the server level. Performed unit, system, and integration testing.
Tools & Technologies: SQL, PL/SQL, HP-UX, ASP, JavaScript, VBScript, XML, Oracle 9i, PL/SQL Developer.
Additional Information:
Programs or Courses: AWS Architect, Oracle Cloud Architect, Oracle Certified Developer