
Data Engineer (Remote, USA)
From: Vaishnavi, XFORIA
Email: [email protected]
Reply to: [email protected]

Hello Professional,

Greetings from XFORIA.

I found your resume on Job Portal and noticed that you meet our client's requirements.

If you are interested, please review the job description below and share your updated resume.

Job Title: Data Engineer

Location: Remote

Job Description:

Must Have:

Azure Databricks

Azure Data Factory 

Azure Synapse Analytics

We are expanding our efforts into complementary data technologies for decision support, focused on ingesting and processing large data sets, including data commonly referred to as semi-structured or unstructured. Our interest is in enabling data science and search-based applications on large, low-latency data sets in both batch and streaming contexts. To that end, this role will engage with team counterparts to explore and deploy technologies for creating data sets using a combination of batch and streaming transformation processes. These data sets support both offline and inline machine learning training and model execution.

Code, test, deploy, orchestrate, monitor, document, and troubleshoot cloud-based data engineering processes and associated automation in accordance with best practices and security standards throughout the development lifecycle.

Work closely with data scientists, data architects, ETL developers, other IT counterparts, and business partners to identify, capture, collect, and format data from external sources, internal systems, and the data warehouse to extract features of interest.

Contribute to evaluation, research, and experimentation efforts with batch and streaming data engineering technologies in a lab environment to keep pace with industry innovation.

Work with data engineering groups to present and showcase the capabilities of emerging technologies, and to enable the adoption of these technologies and their associated techniques.
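The responsibilities above center on building shared data sets with both batch and streaming transformations. As a rough, hypothetical illustration only (the storage paths, column names, and use of Databricks Auto Loader are assumptions for this sketch, not details from the posting), a minimal PySpark version of that pattern might look like:

```python
# Minimal sketch only: paths, schema fields, and table layout are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dataset-build-sketch").getOrCreate()

# Batch: ingest semi-structured JSON and shape it into a curated Delta table
# used for offline model training.
raw_batch = spark.read.json("/mnt/landing/events/batch/")          # hypothetical path
curated = (
    raw_batch
    .withColumn("event_date", F.to_date("event_ts"))               # hypothetical column
    .select("customer_id", "event_type", "event_date", "payload")
)
curated.write.format("delta").mode("overwrite").save("/mnt/curated/events")

# Streaming: apply the same shaping continuously for in-line scoring,
# using Databricks Auto Loader (assumed available on the cluster).
raw_stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/events_schema")
    .load("/mnt/landing/events/stream/")
)
(
    raw_stream
    .withColumn("event_date", F.to_date("event_ts"))
    .writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")
    .start("/mnt/curated/events_stream")
)
```

In practice, a batch job like this would typically be scheduled or orchestrated (for example, from Azure Data Factory), while the streaming query runs continuously on a Databricks cluster.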

Qualifications

What makes you a dream candidate

Experience ingesting data from various source formats and systems, such as JSON, Parquet, SequenceFile, cloud databases, message queues (MQ), and relational databases such as Oracle (see the ingestion sketch after this list)

Experience with cloud technologies (such as Azure, AWS, GCP) and native toolsets such as Azure ARM Templates, HashiCorp Terraform, and AWS CloudFormation

Experience with Azure cloud services, including but not limited to Synapse Analytics, Data Factory, Databricks, and Delta Lake

Understanding of cloud computing technologies, business drivers and emerging computing trends

Thorough understanding of hybrid cloud computing: virtualization technologies; Infrastructure as a Service, Platform as a Service, and Software as a Service delivery models; and the current competitive landscape
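As a loose illustration of the ingestion experience described in this list, the sketch below reads JSON, Parquet, and an Oracle table over JDBC with PySpark and lands each as Delta; every path, host, credential, and table name here is a hypothetical placeholder.

```python
# Minimal sketch only: connection details, paths, and table names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("multi-format-ingest-sketch").getOrCreate()

json_df = spark.read.json("/mnt/raw/clickstream/*.json")     # semi-structured files
parquet_df = spark.read.parquet("/mnt/raw/transactions/")    # columnar batch extracts

# Relational source such as Oracle, read over JDBC (the Oracle JDBC driver
# must be available on the cluster).
oracle_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1")   # hypothetical host/service
    .option("dbtable", "SALES.ORDERS")                           # hypothetical table
    .option("user", "etl_user")
    .option("password", "****")
    .option("driver", "oracle.jdbc.OracleDriver")
    .load()
)

# Land each source in a common Delta layer so downstream Databricks and
# Synapse workloads can share one format.
for name, df in [("clickstream", json_df), ("transactions", parquet_df), ("orders", oracle_df)]:
    df.write.format("delta").mode("append").save(f"/mnt/bronze/{name}")
```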

Experience:

High School Diploma or equivalent required

Bachelor's Degree in a related field or equivalent work experience required.

2-4 years of hands-on software engineering experience, including but not limited to Spark, PySpark, Java, Scala, and/or Python, required.

2-4 years of hands-on experience with ETL/ELT data pipelines to process big data in data lake ecosystems, on-premises and/or in the cloud, required.

2-4 years of hands-on experience with SQL, data modeling, relational databases, and NoSQL databases required.

Keywords: message queue information technology
Thu Nov 02 21:00:00 UTC 2023
