Senior Scala Developer at Remote, Remote, USA
Email: [email protected]
From: Pooja, ICS Globalsoft [email protected]
Reply to: [email protected]
Role: Senior Scala Developer
Location: Remote
Visa: USC/GC/H1B
Duration: Long Term Contract

Responsibilities:
As a Senior Software Engineer (Scala, Big Data) you will join a highly skilled software team delivering innovative big data pipelines that provide data to upstream consumers, such as the mobile and web applications that make up CNH Industrial's next-generation digital platform. The digital platform will enable products that integrate with connected CNH Industrial tractors, sprayers, and combines, and will support a wide range of farm management capabilities.

Responsibilities include:
- Leading a small team of software engineers and data engineers, while also contributing individually to the design, development, and testing of data pipelines for data parsing, enrichment, and processing
- Designing, developing, testing, and documenting quality pipelines for real-time and batch data processing that meet user/consumer and functional requirements within specified timeframes and in accordance with CNHI coding standards
- Designing and implementing complex real-time streaming and data visualization technologies
- Generating rapid prototypes for feasibility testing
- Generating all documentation relevant to software and data operations
- Contributing to growing team members and building a strong, cohesive team; providing guidance and mentorship
- Performing tasks as specified by the Delivery Lead/Team Lead

Qualifications:
- Bachelor's degree in Computer Science or Computer Engineering from an accredited university
- 5+ years of relevant industry experience after completing education
- 5+ years of Scala application design and testing experience in the industry
- Experience working with large amounts of data; Big Data processing (batch and streaming) and writing server applications would be a plus
- Strong working knowledge of the functional programming paradigm and category theory in a language like Scala or Haskell
- Working experience with real-time streaming and batch processing with Apache Spark and Apache Flink (experience with one platform is also fine)

Preferred Qualifications:
- Strong knowledge of distributed file systems, memory management, and sharding/partitioning of datasets/data frames
- Strong fundamentals in functional programming, object-oriented programming, RESTful architectures, design patterns, data structures, and algorithms
- Experience with RESTful API development would be a plus, as would actor-model concurrency frameworks (Akka)
- Experience with microservices infrastructure management for development: Docker, Kubernetes, Helm/Terraform
- Experience with Microsoft Azure and cloud services, including exposure to PaaS services such as Service Bus, Event Hub, blob stores, Key Vault, API Management, Function Apps (serverless), and Azure Databricks
- Expertise in Scala is mandatory; Java is optional
- Batch and stream data processing: Apache Kafka, Apache Flink, Apache Spark, Azure Service Bus, Azure Event Hub (experience with any one of them would be a plus)
- Experience with OAuth 2.0 (JWT), Swagger, Postman, and the OpenAPI Specification
- Relational databases (SQL Server/Postgres); NoSQL (HBase); Delta tables (Parquet and Avro formats)
- Big Data/Geospatial: HBase 2.1.6 (HDI 4.0), GeoMesa 3.0.0
- Caching (Redis, Play cache, Caffeine, or others)
- Experience working with cloud platform services such as Azure or AWS
- Good working knowledge of CI/CD environments (preferably Azure DevOps), Git or similar configuration management software; build automation (Maven)
- Knowledge of testing tools such as ScalaTest, JUnit, Mockito
- Microservices implementation skills would be a plus
- Asynchronous programming experience would be a plus
Mon Jan 09 20:42:00 UTC 2023