
Databricks Support Engineer at Remote, USA
Email: [email protected]
From:

Pallavi,

nss

[email protected]

Reply to:   [email protected]

Candidate Requirements

Years of Experience Required: 3+ years of overall experience in the field.

Degrees or certifications required: No degree is required for this role, but a relevant degree is preferred.

Disqualifiers: Candidates whose experience is mostly in developer work, who have poor communication skills, who have not worked in a customer-facing support role, or who are missing any of the required technologies will not be eligible for the role.

Best vs. Average: The ideal resume shows multiple years of experience in support engineering and troubleshooting, and the ideal candidate is well articulated both on paper and in person.

Performance Indicators: Performance will be assessed based on quality of work and ticketing metrics.

Required Skills

With increasingly vast seas of digital information, smart organizations can today do things that were until now impossible: spot unseen business trends, prevent diseases, make our roads safer, and so on. At Version 1, our mission is to prove IT can make a real difference. We prove this every day, and due to continued expansion, we're searching for like-minded individuals to help us take it to the next level.

This is an exciting opportunity for an experienced developer of large-scale data solutions. You will join a team delivering a transformative cloud-hosted data platform for a key Version 1 customer.

The ideal candidate will have a proven track record of implementing data ingestion and transformation pipelines for large-scale organizations. We are seeking someone with deep technical skills in a variety of technologies to play an important role in developing and delivering early proofs of concept and production implementations.

You will gain solid experience building solutions using a variety of open-source tools and Microsoft Azure services, and a proven track record of delivering high-quality work to tight deadlines.

Your main responsibilities will be:

Designing and implementing highly performant data ingestion pipelines from multiple sources using Apache Spark and/or Azure Databricks and/or HDInsight (see the ingestion sketch after this list)

Delivering and presenting proofs of concept of key technology components to project stakeholders.

Developing scalable and reusable frameworks for ingesting geospatial data sets

Integrating the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is maintained at all times

Working with event-based / streaming technologies to ingest and process data

Working with other members of the project team to support delivery of additional project components (API interfaces, Search)

Evaluating the performance and applicability of multiple tools against customer requirements

Working within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints.
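
The responsibilities above centre on building ingestion pipelines with Spark on Azure Databricks. As a rough illustration only, here is a minimal batch-ingestion sketch in PySpark; the landing path, the column cleanup, and the bronze.source_a Delta table are assumptions made for the example, not details from this posting.

# Minimal PySpark batch-ingestion sketch (illustrative; paths and
# table names are hypothetical, not from the posting).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-example").getOrCreate()

# Read raw CSV files landed from a source system.
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("/mnt/landing/source_a/"))

# Light transformation: normalize column names and stamp the load time.
clean = (raw
         .select([F.col(c).alias(c.strip().lower().replace(" ", "_"))
                  for c in raw.columns])
         .withColumn("ingested_at", F.current_timestamp()))

# Append to a Delta table, the usual target on Databricks
# (assumes the bronze schema already exists).
(clean.write
      .format("delta")
      .mode("append")
      .saveAsTable("bronze.source_a"))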

Skills:

Databricks, Azure, Spark, design engineering, data management


Additional Skills & Qualifications:

Strong knowledge of Data Management principles

Experience in building ETL / data warehouse transformation processes

Direct experience building data pipelines using HDInsight and Apache Spark (preferably Databricks).

Experience using geospatial frameworks on Apache Spark and associated design and development patterns

Microsoft Azure Big Data Architecture certification.

Hands on experience designing and delivering solutions using the Azure Data Analytics platform (Cortana Intelligence Platform) including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, Azure Stream Analytics

Experience with Apache Kafka / NiFi for use with streaming / event-based data (see the streaming sketch after this list)

Experience with other open-source big data products such as Hadoop (incl. Hive, Pig, Impala)

Experience with Open Source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4J)

Experience working with structured and unstructured data including imaging & geospatial data.

Experience working in a DevOps environment with tools such as Microsoft Visual Studio Team Services, Chef, Puppet, or Terraform
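
For the Kafka / event-based requirement above, here is a minimal Spark Structured Streaming sketch that reads a Kafka topic and streams it into a Delta table. It is a sketch under stated assumptions: the broker address, topic, checkpoint path, and target table are all hypothetical.

# Minimal Structured Streaming sketch for Kafka ingestion (illustrative).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream-example").getOrCreate()

# Subscribe to a Kafka topic; Kafka delivers key/value as raw bytes.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
          .option("subscribe", "events")                     # hypothetical topic
          .load()
          .selectExpr("CAST(value AS STRING) AS payload", "timestamp"))

# Stream into a Delta table; the checkpoint location gives restart
# and exactly-once bookkeeping.
query = (events.writeStream
         .format("delta")
         .option("checkpointLocation", "/mnt/checkpoints/events")  # hypothetical path
         .toTable("bronze.events"))                                # hypothetical table

query.awaitTermination()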

Cannot support C2C
