
Direct Client :: Big Data Engineer :: Location: Dallas, Texas (only local candidates may apply)
Email: [email protected]
Hi Partner,

We have an immediate job opening; the JD is below. Let me know if you are interested.

We are looking for a Data Engineer or Big Data Engineer with strong experience on the Databricks platform on AWS cloud, strong coding skills in Python/PySpark on Spark for building data pipelines, and experience working with streaming data using Kafka or other tools.

Role: Data Engineer

Client: Direct Client

Location: Dallas, Texas

Data Engineer:

The Data Engineer is responsible for building Data Engineering Solutions using next generation data techniques. The individual will be working directly
with product owners, customers and technologists to deliver data products/solutions in a collaborative and agile environment.

Responsibilities:

Responsible for the design and development of big data solutions; partner with domain experts, product managers, analysts, and data scientists to develop Big Data pipelines in Hadoop

Responsible for moving all legacy workloads to the cloud platform

Work with data scientists to build Client pipelines using heterogeneous sources and provide engineering services for data science applications

Ensure automation through CI/CD across platforms both in cloud and on-premises

Define needs around maintainability, testability, performance, security, quality and usability for data platform

Drive implementation, consistent patterns, reusable components, and coding standards for data engineering processes

Convert SAS-based pipelines into languages like PySpark or Scala to execute on Hadoop and non-Hadoop ecosystems

Tune Big data applications on Hadoop and non-Hadoop platforms for optimal performance

Evaluate new IT developments and evolving business requirements and recommend appropriate systems alternatives and/or
enhancements to current systems by analyzing business processes, systems and industry standards.

Apply an in-depth understanding of how data analytics collectively integrate within the sub-function, and coordinate and contribute to the objectives of the entire function

Produce detailed analyses of issues where the best course of action is not evident from the available information but actions must still be recommended or taken

Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's
reputation and safeguarding Client, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating,
managing and reporting control issues with transparency.

Qualifications:

8+ years of total IT experience

5+ years of experience with Hadoop (Cloudera)/big data technologies

Advanced knowledge of the Hadoop ecosystem and Big Data technologies

Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, Solr)

Experience in designing and developing data pipelines for data ingestion or transformation using Java, Scala, or Python

Experience with Spark programming (PySpark, Scala, or Java)

Expert level building pipelines using Apache Spark

Familiarity with core provider services from AWS, Azure, or GCP, preferably having supported deployments on one or more of these platforms

Hands-on experience with Python/PySpark/Scala and basic machine learning libraries is required

Exposure to containerization and related technologies (e.g. Docker, Kubernetes)

Exposure to aspects of DevOps (source control, continuous integration, deployments, etc.)

Proficient in programming in Java or Python; prior Apache Beam/Spark experience is a plus

System-level understanding of data structures, algorithms, and distributed storage and compute

Can-do attitude toward solving complex business problems; good interpersonal and teamwork skills

Possess team management experience and have led a team of data engineers and analysts.

Experience in Snowflake is a plus.

Education:

Bachelor's degree/University degree or equivalent experience

Please share your details along with your visa and DL copy.

Full Name:

Best Mobile Number:

Best Email id:

LinkedIn id:

Current Location:

Visa Status: H1B

i140 approved:

Reason for change:

Availability Start Date:

Salary range:

Interview available:

[email protected]

Wed Oct 16 02:35:00 UTC 2024



