100% Remote >> Urgent requirement for Bigdata Developer (Spark/Scala), Remote, USA
Email: [email protected]
From: Ankit Kalia, HMG America [email protected]
Reply to: [email protected]

Hi,

Hope you are doing well! My name is Ankit Kalia and I am a Staffing Specialist at HMG America. I am reaching out to you about an exciting job opportunity with one of our clients. Please review the requirement; if you are interested, reply with your contact details ASAP to [email protected] or call me at +1 (732) 790 5493.

Title: Bigdata Developer (Spark/Scala)
Location: Remote (work in EST)
Customer: IQVIA

Job Description:

Technical
- Develop and debug Spring Boot microservices (primarily in Scala) integrating with components such as Elastic, Vault, NoSQL databases (MongoDB), and RDBMS
- Develop and debug end-to-end data pipelines using Databricks and Snowflake to fetch large data volumes from source systems
- Work on data engineering and application development
- Debug and tune Spark (version 3.0+) applications to run efficiently with billions of records
- Design and deploy auto-scaling features for the application to optimize performance using Kubernetes and Docker on Azure Cloud
- Design, implement, and upgrade code written in Scala (version 2.12+) and Python (version 3.0+)
- Work on Delta table performance optimizations
- Work with different file formats such as Parquet, text, and Delta
- Build applications using Maven and SBT, integrated with continuous integration servers like Jenkins/Azure DevOps to build jobs
- Write shell scripts, cron automation, and regular expressions
- Perform migration of on-prem Cloudera CDH data to Azure storage accounts
- Create various database objects like tables,
views, UDFs, and UDAFs

Others
- Coordinate development, integration, and production deployments
- Take ownership of tasks; work independently after an initial KT period of ~4 weeks
- Translate business requirements (working with BAs) into technical specifications and design
- Debug and understand the project's codebase with minimal help
- Perform peer review of the codebase and highlight issues/improvements
- Follow coding best practices; write well-commented, readable code
- Communicate and articulate effectively; explain technical issues succinctly and precisely
- Work in an agile setup with knowledge of tools like Jira, Confluence, etc.

Skills and Experience
- Overall 10+ years of experience in data engineering and SDLC
- 5+ years of experience in production delivery using Cloud, the Big Data technology stack, *nix, and Spring Boot (JPA, Web, and Core)
- 3+ years of experience managing applications on the Azure cloud platform; hands-on experience using Azure cloud services, Dockerizing applications, and Databricks
- 7+ years of hands-on experience in Java, Spark, Scala, SQL, and Maven; strong knowledge of Spark internals and performance tuning
- 2+ years of experience in Elasticsearch and Python
- Excellent communication skills with both technical and business audiences
- Experience working in an agile environment, with familiarity with Jira and Confluence
- Ability to work in a fast-paced, team-oriented environment; willingness and ability to travel if needed
- Strong interpersonal skills, including a positive, solution-oriented attitude

Desired skills
- Akka, Kafka, healthcare and/or reference data
- Snowflake
- Message Queue

Keywords: access management, database, information technology
Thu Jan 05 23:12:00 UTC 2023