Big Data/Hadoop Engineer :: Phoenix, AZ (local candidates only) :: Contract at Phoenix, Arizona, USA
From: Deeksha Rawat, kk Software Associates LLC ([email protected])
Reply to: [email protected]

Job Title: Big Data/Hadoop Engineer
Location: Phoenix, AZ (local candidates only)
Duration: 12+ Months
Experience: 6-8 years, including Hive, SQL, and Spark

Job Summary:
We are seeking a skilled and experienced Big Data Developer to join our team. The ideal candidate will have expertise in building scalable, high-performance data pipelines and platforms to support real-time and batch data processing. You will work closely with data engineers, data scientists, and business analysts to design solutions that enable data-driven decision-making across the organization.

Technical Expertise:
- Strong hands-on experience with Big Data technologies such as Hadoop, Spark, Hive, Kafka, and HBase.
- Proficiency in programming languages such as Java, Python, or Scala.
- Experience with distributed data storage systems such as HDFS, Amazon S3, or Google Cloud Storage.
- Familiarity with real-time data processing tools such as Apache Flink, Storm, or Samza.
- Strong understanding of ETL processes, data pipeline creation, and data modeling.

Database Skills:
- Knowledge of both SQL and NoSQL databases such as PostgreSQL, Cassandra, and MongoDB.
- Familiarity with query optimization techniques and database performance tuning.

Data Engineering & Cloud Platforms:
- Experience with cloud platforms such as AWS, GCP, or Azure, specifically their data storage, computation, and analytics services (e.g., EMR, BigQuery, Redshift).
- Experience with containerization tools (e.g., Docker, Kubernetes) is a plus.
Posted: Wed Oct 23 04:09:00 UTC 2024