Databricks / Python Engineer (Remote, USA)
Email: [email protected]
From: Saurabh Rai, Cloud Think Tech Inc. [email protected]
Reply to: [email protected]
Work authorization: US Citizens and Green Card holders only (USC/GC)
Official Title/Assigned Title: Databricks / Python Engineer
Direct Hire/Contract/Contract to Hire: Contract (12 months)

Job Description: Databricks / Python Engineer

We are looking for a savvy Databricks / Python Engineer to join our growing team of analytics experts. This person will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will develop and support a broad range of software capabilities, including building data pipelines, managing ETL/ELT processes, receiving and delivering data through various interfaces, and processing significant amounts of data related to railcar movements, railcar liability, and financial calculations. The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.

Databricks / Python Engineer Responsibilities

To perform this job successfully, an individual must be able to perform the following essential duties satisfactorily. Other duties may be assigned to address business needs and changing business practices.
- Participate as a member of an Agile team developing data engineering solutions.
- Engage in requirements gathering and technical design discussions to meet business needs.
- Design and develop generic, scalable data pipelines in Azure Data Factory and Databricks with Python for on-prem and cloud data sources (a minimal sketch of this kind of pipeline follows the posting).
- Assemble large, complex sets of data that meet functional and non-functional business requirements.
- Leverage your curiosity for solving unstructured data problems and your ability to manipulate and optimize large data sets to advance business problem-solving.
- Contribute to documentation, testing, and cross-training of other team members.
- Work closely with others to assist with and resolve production issues.

Databricks / Python Engineer Qualifications

- Bachelor's degree in computer science, computer engineering, a related field, or equivalent experience.
- 5+ years of data engineering or equivalent experience.
- 5+ years of hands-on experience developing and deploying data architecture strategies or engineering practices.
- 5+ years of experience with complex SQL queries and knowledge of database technologies.
- Expert-level coding experience with PySpark and Python.
- Expert-level technical experience with Apache Spark / Azure Databricks.
- Proficient in using and designing solutions on Azure cloud infrastructure (particularly Azure Data Factory) and Azure DevOps.
- Proficient with core business intelligence and data warehousing technology.
- Proficient in designing and developing data integration solutions using ETL tools such as Azure Data Factory and/or SSIS.
- Proficient with software development practices such as Agile, TDD, and CI/CD.
- Ability to collaborate and communicate professionally, both verbally and in writing, at all levels of the organization, particularly bridging conversations between data and business stakeholders.

Preferred Qualifications

- Experience with Snowflake.
- Experience with graph databases or graph libraries.
- Experience with Kafka or other streaming technologies.
- Experience with Elasticsearch.
- Experience in the rail or other commodities-driven industries.
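As a rough illustration only (not part of the original posting), the following is a minimal PySpark sketch of the kind of batch pipeline the responsibilities above describe: it reads raw railcar-movement CSV files, de-duplicates the events, derives a partition date, and writes a curated Delta table. The paths and column names (car_id, event_ts) are hypothetical, and Delta Lake support is assumed to come from the Databricks runtime.

from pyspark.sql import SparkSession, functions as F

# On Databricks a `spark` session already exists; building one here keeps the
# sketch self-contained for local testing.
spark = SparkSession.builder.appName("railcar-movements-pipeline").getOrCreate()

# Hypothetical raw landing zone: CSV drops of railcar movement events.
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("/mnt/raw/railcar_movements/")
)

# Basic cleanup: keep rows with a car id, drop duplicate events, and derive a
# partition date. `car_id` and `event_ts` are illustrative column names.
cleaned = (
    raw
    .filter(F.col("car_id").isNotNull())
    .dropDuplicates(["car_id", "event_ts"])
    .withColumn("event_date", F.to_date("event_ts"))
)

# Write a curated Delta table partitioned by event date (assumes a Databricks
# runtime or a Spark build with Delta Lake available).
(
    cleaned.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("/mnt/curated/railcar_movements/")
)

In practice such a job would typically be parameterized (source path, target table, load date) and orchestrated from Azure Data Factory or Databricks Workflows rather than hard-coded as above.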
Tue Oct 22 02:21:00 UTC 2024