Databricks Architect (Remote, USA)

From: Sravani, NityaINC ([email protected])
Reply to: [email protected]

Role: Databricks Architect 

Location: Remote

Job Description:

We are looking for a highly skilled Data Engineering Solution Architect to lead the design and implementation of data engineering solutions using the Databricks platform. In this role, you will collaborate with cross-functional teams, including data scientists, software engineers, and business stakeholders, to architect scalable and efficient data processing pipelines.

Responsibilities:

Work closely with clients and internal stakeholders to understand their data engineering requirements and translate them into scalable Databricks-based solutions.
Design and implement end-to-end data processing pipelines using Databricks, integrating structured and unstructured data sources into a unified data lake or data warehouse (a minimal pipeline sketch follows this list).
Optimize and fine-tune data engineering workflows to ensure efficient data processing, data quality, and high-performance analytics.
Collaborate with data scientists and business stakeholders to design and implement machine learning pipelines, leveraging Databricks MLflow and model serving capabilities (see the MLflow sketch after this list).
Provide technical leadership and mentorship to the data engineering team, ensuring best practices and standards are followed.
Stay up to date with the latest advancements in Databricks and related technologies, and evaluate their applicability for improving our data engineering solutions.
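
As a minimal sketch of the pipeline work described above (the bucket path and table name are illustrative assumptions, not details from this posting), a Databricks batch job in PySpark might land raw JSON into a bronze Delta table:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # On Databricks, a SparkSession named `spark` already exists in notebooks
    # and jobs; getOrCreate() simply returns it in that environment.
    spark = SparkSession.builder.appName("raw-to-delta").getOrCreate()

    # Hypothetical source and target: raw JSON events in cloud storage,
    # appended to a Delta table partitioned by ingest date.
    raw = (
        spark.read.json("s3://example-bucket/raw/events/")
        .withColumn("ingest_date", F.current_date())
    )

    (
        raw.write.format("delta")
        .mode("append")
        .partitionBy("ingest_date")
        .saveAsTable("analytics.events_bronze")
    )

For the MLflow bullet, one plausible shape of experiment tracking (the experiment path, parameter, and metric value are placeholders invented for the example) is:

    import mlflow

    # MLflow ships with the Databricks runtime; runs started inside a
    # workspace are tracked automatically under the given experiment.
    mlflow.set_experiment("/Shared/demand-forecast")  # illustrative path

    with mlflow.start_run(run_name="baseline"):
        mlflow.log_param("model", "gradient_boosting")
        mlflow.log_metric("rmse", 42.0)  # placeholder value, not a real result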

Qualifications:

Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
12+ years of proven experience as a Data Engineer or Data Engineering Solution Architect, with a focus on designing and implementing solutions on the Databricks platform.
In-depth understanding and hands-on experience with the Databricks platform, including Databricks Runtime, Delta Lake, and Apache Spark.
Experience with streaming data processing using technologies like Apache Kafka, Apache Flink, or Apache Beam (see the streaming sketch after this list).
Proficiency in programming languages such as Python, Scala, or SQL for data manipulation, transformation, and integration.
Hands-on experience with cloud platforms such as AWS, Azure, or GCP, and familiarity with deploying and managing Databricks workspaces on these platforms.
In-depth knowledge of data integration techniques, data modelling, and ETL/ELT processes.
Experience with containerization technologies (e.g., Docker, Kubernetes) and deployment of data engineering solutions in a containerized environment.
Excellent problem-solving and analytical skills, with a strong attention to detail.
Effective communication skills with the ability to explain complex technical concepts to both technical and non-technical stakeholders.
Familiarity with data governance, data security, and compliance best practices in the context of data engineering solutions.
Relevant certifications in Databricks or related technologies would be a plus.
Knowledge of GxP compliance in the Life Sciences and BioTech industries.
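
As one illustration of the streaming qualification above (a hedged sketch: the broker address, topic, checkpoint path, and table name are invented for the example, and the Kafka connector is assumed to be bundled with the Databricks runtime, as it is by default), Spark Structured Streaming can consume Kafka into a Delta table:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

    # Hypothetical Kafka source; broker and topic are placeholders.
    stream = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker1.example.com:9092")
        .option("subscribe", "orders")
        .load()
        .select(
            F.col("key").cast("string").alias("key"),
            F.col("value").cast("string").alias("payload"),
            "timestamp",
        )
    )

    # The checkpoint location gives the stream restart/recovery semantics;
    # toTable() starts the query and writes continuously into a Delta table.
    (
        stream.writeStream.format("delta")
        .option("checkpointLocation", "/tmp/checkpoints/orders")
        .outputMode("append")
        .toTable("analytics.orders_raw")
    )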

Posted: Thu Nov 09 20:56:00 UTC 2023
