
Need Big Data Engineer - San Jose, CA (Hybrid) at San Jose, California, USA
Email: [email protected]
From:

Rahul.B,

SPAR Information Systems

[email protected]

Reply to: [email protected]

Hi Associate,

Hope you are doing well.

I have an urgent requirement; kindly let me know if you have any resources available.

Role: Big Data Engineer

Location: San Jose, CA (Hybrid)

Duration: Long term

Job Description:

Key Skills: SQL, PL/SQL, Hadoop, Hive, Spark, Databricks

What you'll do

Design, develop, and tune data products, applications, and integrations on large-scale data platforms (Hadoop, Kafka streaming, HANA, SQL Server, etc.) with an emphasis on performance, reliability, scalability, and, most of all, quality.
Analyze business needs, profile large data sets, and build custom data models and applications to drive Adobe's business decision-making and customer experience.
Develop and extend design patterns, processes, standards, frameworks and reusable components for various data engineering functions/areas.
Collaborate with key stakeholders, including business teams, engineering leads, architects, BSAs, and program managers.

The ideal candidate will have:

MS/BS in Computer Science or a related technical field with 4+ years of strong hands-on experience in enterprise data warehousing / big data implementations and complex data solutions and frameworks
Strong SQL, ETL, scripting, and/or programming skills, with a preference for Python, Java, Scala, and shell scripting
Demonstrated ability to clearly form and communicate ideas to both technical and non-technical audiences.
Strong problem-solving skills with an ability to isolate, deconstruct and resolve complex data / engineering challenges
Results-driven, with attention to detail, a strong sense of ownership, and a commitment to up-leveling the broader IDS engineering team through mentoring, innovation, and thought leadership

Desired skills:

Familiarity with streaming applications
Experience in development methodologies like Agile / Scrum
Strong experience with Hadoop ETL / data ingestion: Sqoop, Flume, Hive, Spark, HBase
Strong experience with SQL and PL/SQL
Nice to have: experience in real-time data ingestion using Kafka, Storm, Spark, or complex event processing
Experience with Hadoop data consumption and other components: Hive, Hue, HBase, Spark, Pig, Impala, Presto
Experience monitoring, troubleshooting, and tuning services and applications, along with operational expertise: good troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks
Experience in the design and development of API frameworks using Python/Java is a plus
Experience in developing BI dashboards and reports is a plus

Rahul.B

Team Lead

SPAR Information Systems

Email: [email protected]
