Hiring: Databricks Feature Store Data Engineer ::: 13+ years of experience | Edison, New Jersey, USA
Email: [email protected]
Hello,

Hope you are doing well. Below is the job description; kindly go through it and let me know if you are interested.

Role: Databricks Feature Store Data Engineer
Location: Edison, NJ (Onsite/Hybrid)
Job Type: Contract

Job Description:
As a Data Engineer supporting Machine Learning (ML) initiatives, you will be responsible for using the Databricks Lakehouse Platform to complete advanced data engineering tasks. You will work closely with our data scientists and ML engineers to ensure that data is available, reliable, and optimized for their needs.

Key Responsibilities:
1. Cloud Data Architecture: Design and build robust data pipelines using Spark SQL and Python, in both batch and incremental processing paradigms, orchestrated via Azure Data Factory.
2. Feature Engineering (Mandatory): Collaborate with data scientists to understand the features needed for ML models, and implement feature extraction and transformation logic in the data pipelines.
3. FeatureOps (Mandatory): Implement FeatureOps to manage the lifecycle of features, including their discovery, validation, and serving for training and inference.
4. Training Dataset Support: Work with data scientists to understand their requirements for training datasets, and ensure these datasets are accurately prepared, cleaned, and made available in a timely manner.
5. Data Pipeline Automation: Automate the data pipelines using CI/CD approaches to ensure seamless deployment and updates, including automated tests, deployments, and monitoring of these pipelines.
6. Data Quality: Implement data quality frameworks and monitoring to ensure high data accuracy and reliability; identify and resolve any data inconsistencies or anomalies.
7. Collaboration: Work closely with data scientists and ML engineers to understand their data needs, and provide them with the necessary data in the right format to facilitate their work.
8. Optimization: Continually optimize pipelines and databases for improved performance and efficiency, including implementing real-time processing where necessary.
9. Data Governance: Ensure compliance with data privacy regulations and best practices; implement appropriate access controls and security measures.
10. Data APIs

Qualifications:
- Experience supporting machine learning projects.
- Familiarity with ML platforms (e.g., TensorFlow, PyTorch).
- Experience with cloud platforms (e.g., Azure, AWS).
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as a Data Engineer or in a similar role.
- Experience with big data tools (e.g., Hadoop, Spark) and databases (e.g., SQL, NoSQL).
- Knowledge of machine learning concepts and workflows.
- Strong programming skills (e.g., Python, Java).
- Excellent problem-solving abilities and attention to detail.
- Strong communication skills to collaborate effectively with other teams.

Kumar Roushan | Talent Acquisition Specialist
Amaze Systems Inc
USA: 8951 Cypress Waters Blvd, Suite 160, Dallas, TX 75019
Canada: 55 York Street, Suite 401, Toronto, Ontario M5J 1R7, Canada
E: [email protected] | www.amaze-systems.com/
USA | Canada | UK | India

Amaze Systems is an Equal Opportunity Employer (EOE) and does not discriminate based on age, gender, religion, disability, marital status, or race, and adheres to laws relating to non-discrimination on the basis of national origin and citizenship status.
Mon Feb 26 21:01:00 UTC 2024 |