
Immediate Hiring for a Data Engineer || McLean, VA, Hybrid at McLean, Virginia, USA
Email: [email protected]
From:

Gokulraj R

Exaways Corporation

[email protected]

Reply to: [email protected]

Hi,

I hope you are doing well.

This is Gokul Raj from Exaways Corporation. We have an urgent requirement for the position below; if you are interested, please let me know what you think.

Role: Data Engineer

Location: McLean, VA, Hybrid, 3 days onsite

Client: Pyramid Consulting || Freddie Mac

Combination of: coding + data engineering + Python + PySpark + Ops concepts

Supplier Vetting Questions

1) Please describe your data engineering experience. Please provide examples of big data problems you've solved using PySpark.

2) Please describe your experience using OLAP databases. Which databases have you used, and what for?

HM Thoughts:

This is a data platform team.

They need data engineering and data architecture work, all of it done in Python.

The team is looking for someone who can help them build data pipelines driven by YAML configuration files.

Strong experience with both Python and PySpark is a must.

At least 3-5 years of experience as a Data Engineer.

All the work is Python-focused, mainly on the programming side.

Good SQL is mandatory.

Basic knowledge of AWS would be good.

Any database experience is fine as long as the candidate is strong in SQL.

The person will be making API calls using Python. Boto3, the AWS SDK for Python, is good to have.
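The hiring-manager notes above mention building pipelines driven by YAML files. As a rough sketch of that config-driven pattern (all step names, column names, and data here are hypothetical, not taken from the posting), a YAML file would typically be loaded with PyYAML's yaml.safe_load into a dict and each configured step dispatched to a handler:

```python
# Hypothetical sketch of a YAML-driven pipeline runner (stdlib only).
# In practice the config would come from yaml.safe_load(open("pipeline.yml")),
# where pipeline.yml might look like:
#
#   pipeline:
#     - step: filter
#       column: status
#       equals: active
#     - step: rename
#       from: amt
#       to: amount
#
# The dict below is what that YAML would parse to.
config = {
    "pipeline": [
        {"step": "filter", "column": "status", "equals": "active"},
        {"step": "rename", "from": "amt", "to": "amount"},
    ]
}

def run_pipeline(rows, config):
    """Apply each configured step, in order, to a list of dict rows."""
    for step in config["pipeline"]:
        if step["step"] == "filter":
            # Keep only rows whose column matches the configured value.
            rows = [r for r in rows if r.get(step["column"]) == step["equals"]]
        elif step["step"] == "rename":
            # Rename one key in every row.
            rows = [
                {step["to"] if k == step["from"] else k: v for k, v in r.items()}
                for r in rows
            ]
        else:
            raise ValueError(f"unknown step: {step['step']}")
    return rows

rows = [
    {"status": "active", "amt": 100},
    {"status": "closed", "amt": 50},
]
result = run_pipeline(rows, config)
# result == [{"status": "active", "amount": 100}]
```

In a real deployment the handlers would wrap PySpark DataFrame operations rather than list comprehensions, but the dispatch-on-config structure is the same.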

Job Description:

Position Overview:

Develop data filtering, transformational and loading requirements

Define and execute ETLs using Apache Spark on Hadoop, among other data technologies

Determine appropriate translations and validations between source data and target databases

Implement business logic to cleanse & transform data

Design and implement appropriate error handling procedures

Develop project, documentation and storage standards in conjunction with data architects

Monitor performance, troubleshoot, and tune ETL processes as appropriate using tools in the AWS ecosystem.

Create and automate ETL mappings to consume loan-level data from source applications to target applications

Execute end-to-end implementation of the underlying data ingestion workflows.
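Two of the duties above, cleansing/transforming data and designing error handling, are often combined so that one bad record lands in an error bucket instead of failing the whole batch. A minimal illustration in plain Python (field names and data are hypothetical; in this role the same logic would typically run as PySpark transformations on EMR):

```python
# Hypothetical sketch: cleanse loan-level records and route failures
# to an error bucket instead of crashing the batch.
def cleanse(record):
    """Validate and normalize one record; raise on bad input."""
    if record.get("loan_id") is None:
        raise ValueError("missing loan_id")
    amount = float(record["amount"])  # raises ValueError on junk input
    return {"loan_id": str(record["loan_id"]).strip(), "amount": round(amount, 2)}

def transform_batch(records):
    """Split a batch into cleansed rows and per-record error reports."""
    good, errors = [], []
    for rec in records:
        try:
            good.append(cleanse(rec))
        except (ValueError, KeyError, TypeError) as exc:
            errors.append({"record": rec, "error": str(exc)})
    return good, errors

good, errors = transform_batch([
    {"loan_id": " A1 ", "amount": "100.456"},
    {"loan_id": None, "amount": "5"},
])
# good == [{"loan_id": "A1", "amount": 100.46}]; one record in errors
```

The error bucket would then be written to its own target (a quarantine table, an S3 prefix) for reprocessing, which is the usual shape of the "appropriate error handling procedures" duty.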

Operations and Technology:

Leverage and align work to appropriate resources across the team to ensure work is completed in the most efficient and impactful way

Understand capabilities of and current trends in Data Engineering domain

Qualifications

At least 5 years of experience developing in Python and SQL (Postgres/Snowflake preferred)

Bachelor's degree, or equivalent work experience, in computer science, data science, or a related field.

Experience working with different Databases and understanding of data concepts (including data warehousing, data lake patterns, structured and unstructured data)

3+ years of experience with data storage/Hadoop platform implementation, including 3+ years of hands-on experience implementing and performance-tuning Hadoop/Spark.

Implementation and tuning experience specifically using Amazon Elastic MapReduce (EMR).

Experience implementing AWS services in a variety of distributed computing enterprise environments.

Experience writing automated unit, integration, regression, performance and acceptance tests

Solid understanding of software design principles

Tue Dec 12 22:27:00 UTC 2023
