Data Engineer with Oracle Fusion Cloud # San Francisco, CA # Contract # Onsite at Oracle, Arizona, USA
Email: [email protected]
Greetings! My name is Atul, and I am a recruiter at TekNavigators Staffing. Please go through the job description below and let me know if you are interested. If so, kindly share the following details:

- Full Name
- Contact Number
- Email Address
- Current Location with Zip
- Highest Education / University / Year
- LinkedIn ID
- Availability to Interview
- Availability to Start
- Work Authorization / Visa Status
- Rate Expectation

Must-have skills: Snowflake, dbt, and Python
Experience: 12+ years

Data Engineer with Oracle Fusion Cloud # San Francisco, CA

The client is a pioneer in the development and commercialization of Intravascular Lithotripsy (IVL) to treat complex calcified cardiovascular disease. Shockwave Medical aims to establish a new standard of care for medical device treatment of atherosclerotic cardiovascular disease through its differentiated and proprietary local delivery of sonic pressure waves for the treatment of calcified plaque.

Position Overview
We are searching for a talented and experienced Modern Data Stack Data Engineer to join our growing team. You will be responsible for designing, building, and maintaining data pipelines that load data into our Snowflake data warehouse using technologies such as Fivetran, Azure Data Factory, and Python, and for transforming that data using tools such as dbt and Coalesce. You will also be responsible for the quality, security, and performance of the data pipelines.

Essential Job Functions
- Design, develop, and maintain data pipelines from varied sources using Fivetran, Python, dbt/Coalesce, and Airflow. Primary sources include Oracle Fusion Cloud and Salesforce, among many others.
- Write efficient SQL queries to transform and clean data for optimal consumption in Snowflake.
- Implement data quality checks and monitoring processes to ensure data accuracy and consistency (see the illustrative sketch after the requirements below).
- Collaborate with data analysts and business stakeholders to understand data requirements and design data models.
- Document data pipelines and processes for maintainability and knowledge sharing.
- Monitor and optimize data pipeline performance for efficiency and scalability.
- Troubleshoot and resolve data pipeline issues to ensure data availability.
- Create dashboards and advanced visualizations in Power BI.
- Implement data observability tools such as Monte Carlo.
- Stay up to date on the latest trends and technologies in the modern data stack (Snowflake, Fivetran, dbt, etc.).

Requirements
- 5+ years of experience as a Data Engineer or in a similar role.
- Proven experience designing and building data pipelines.
- Must-have skills: Snowflake, dbt, and Python.
- Advanced knowledge of Snowflake, Fivetran, Airflow, and dbt.
- Experience with Monte Carlo or other data observability tools is a plus.
- Strong knowledge of SQL and experience with data transformation techniques.
- Excellent problem-solving and analytical skills.
- Proficiency in extracting and loading data from the following two source applications is a huge plus: Oracle Fusion Cloud (Finance, Supply Chain, Manufacturing) and Salesforce.
- Strong communication and collaboration skills.
- Ability to work independently and as part of a team.
- Excellent project management and team leadership skills.
- Strong interpersonal and communication skills to collaborate effectively with business partners and cross-functional teams.
- Strong strategic mindset with the ability to align analytics initiatives with overall business goals and proactively identify opportunities to leverage data to drive business value.
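The job functions above mention data quality checks on tables loaded into Snowflake. Purely as an illustration (not part of the client's posting), here is a minimal Python sketch of such a check using the snowflake-connector-python package; the connection parameters and the ORDERS table / ORDER_DATE column are hypothetical placeholders.

```python
# Minimal illustrative sketch of a Snowflake data quality check.
# All connection parameters and the ORDERS/ORDER_DATE names are hypothetical.
import snowflake.connector


def count_null_rows(conn, table: str, column: str) -> int:
    """Return how many rows in `table` have a NULL value in `column`."""
    cur = conn.cursor()
    try:
        cur.execute(f"SELECT COUNT(*) FROM {table} WHERE {column} IS NULL")
        return cur.fetchone()[0]
    finally:
        cur.close()


if __name__ == "__main__":
    # Real credentials would come from a secrets manager or environment variables.
    conn = snowflake.connector.connect(
        account="my_account",
        user="my_user",
        password="my_password",
        warehouse="ANALYTICS_WH",
        database="RAW",
        schema="SALES",
    )
    try:
        nulls = count_null_rows(conn, "ORDERS", "ORDER_DATE")
        if nulls:
            raise ValueError(f"Quality check failed: {nulls} rows with NULL ORDER_DATE")
        print("Quality check passed: no NULL ORDER_DATE rows")
    finally:
        conn.close()
```

In practice, a check like this would typically run as an Airflow task or a dbt test within the pipeline rather than as a standalone script.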
Posted: Fri May 17 21:22:00 UTC 2024