
Job Title: Lead Data Engineer (Remote, USA)
Email: [email protected]
Client: Circle K

Job Title: Lead Data Engineer

Job Type: Contract to Hire (6 months, then convert to FTE)

Interview Process: Introduction call with Lisa at Progilisys, plus 3 rounds with Circle K (1 - HackerRank, 2 - Hiring Manager and Data Architect, 3 - Director)

Work Location: Hybrid (2 days onsite, 3 days work from home). Must be able to reliably commute to one of the Circle K locations in Charlotte, NC or Tempe/Phoenix, AZ (Charlotte, NC is preferred).

Top skills, experience, background, etc.

Must have: Python, SQL. Good to have: Spark. Tech stack: Azure preferred, but open to others (AWS, GCP, etc.). Must have: Databricks, Snowflake, or Fabric.

How many years of experience needed: Minimum 10 years.

Any nice-to-have skills that would make them exceptional: Certifications, 2+ years of data architecture, data engineering patterns, Git repositories, a self-starter and go-getter attitude, software engineering background, ML experience.

JOB DESCRIPTION:

As the Technical Lead Data Engineer, your primary responsibility will be to lead the design, development, and implementation of data solutions that empower our organization to derive actionable insights from complex datasets. You will guide a team of data engineers, foster collaboration with cross-functional teams, and spearhead initiatives to strengthen our data infrastructure, CI/CD pipelines, and analytics capabilities.

Responsibilities:

Apply advanced knowledge of Data Engineering principles, methodologies, and techniques to design and implement data loading and aggregation frameworks across broad areas of the organization.

Gather and process raw, structured, semi-structured and
unstructured data using batch and real-time data processing frameworks.

Implement and optimize data solutions in enterprise
data warehouses and big data repositories, focusing primarily on movement
to the cloud.

Drive new and enhanced capabilities to Enterprise Data
Platform partners to meet the needs of product / engineering / business.

Bring experience building enterprise systems, especially using Databricks, Snowflake, and cloud platforms such as Azure, AWS, and GCP.

Leverage strong Python, Spark, SQL programming skills
to construct robust pipelines for efficient data processing and analysis.

Implement CI/CD pipelines for automating build, test,
and deployment processes to accelerate the delivery of data solutions.

Implement data modeling techniques to design and
optimize data schemas, ensuring data integrity and performance.

Drive continuous improvement initiatives to enhance
performance, reliability, and scalability of our data infrastructure.

Collaborate with data scientists, analysts, and other
stakeholders to understand business requirements and translate them into
technical solutions.

Implement best practices for data governance, security,
and compliance to ensure the integrity and confidentiality of our data
assets.

Qualifications:

Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Proven experience (8+ years) in a data engineering role, with expertise in designing and building data pipelines, ETL processes, and data warehouses.

Strong proficiency in SQL, Python, and Spark programming.

Strong experience with cloud platforms such as AWS,
Azure, or GCP is a must.

Hands-on experience with big data technologies such as
Hadoop, Spark, Kafka, and distributed computing frameworks.

Knowledge of data lake and data warehouse solutions,
including Databricks, Snowflake, Amazon Redshift, Google BigQuery, Azure
Data Factory, Airflow etc.

Experience in implementing CI/CD pipelines for
automating build, test, and deployment processes.

Solid understanding of data modeling concepts, data
warehousing architectures, and data management best practices.

Excellent communication and leadership skills, with the
ability to effectively collaborate with cross-functional teams and drive
consensus on technical decisions.

Relevant certifications (e.g., Azure, Databricks, Snowflake) would be a plus.

Thu Jun 20 02:25:00 UTC 2024
