
DATA ENGINEER at Remote, USA
Email: [email protected]
From:

SAI PREM,

ICS GLOBAL SOFT

[email protected]

Reply to:   [email protected]

JOB SUMMARY

The Senior Data Engineer is responsible for expanding and optimizing data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The Senior Data Engineer develops and supports a broad range of software capabilities, including building data pipelines, data warehousing, managing ETL/ELT processes, receiving and delivering data through various interfaces, and processing significant amounts of data related to railcar movements, railcar liability, and financial calculations.

DUTIES & RESPONSIBILITIES

    To perform this job successfully, an individual must be able to perform the following essential duties satisfactorily. Other duties may be assigned to address business needs and changing business practices.

    Participates as a member of an Agile team developing data engineering solutions.

    Engages in requirements gathering and technical design discussions to meet business needs.

    Designs and develops generic, scalable data pipelines in Azure Data Factory and Databricks with Python for on-prem and cloud data sources.

    Assembles large, complex sets of data that meet non-functional and functional business requirements.

    Solves unstructured data problems and manipulates and optimizes large data sets to advance business problem-solving.

    Contributes to documentation, testing, and cross-training of other team members.

    Works closely with others to assist and resolve production issues.

QUALIFICATIONS

    Bachelor's degree in computer science, computer engineering, or a related field, or 4 years of equivalent experience in a related computer science field.

    5+ years of data engineering experience, or 5 years of equivalent experience in a related computer science field.

    5+ years of hands-on experience in developing and deploying data architecture strategies, data warehousing concepts, or engineering practices.

    5+ years of experience with complex SQL queries and knowledge of database technologies.

    Expert-level coding experience with PySpark and Python.

    Expert-level technical experience with Apache Spark / Azure Databricks.

    Proficient in using and designing solutions on Azure Cloud infrastructure (particularly Azure Data Factory) and Azure DevOps.

    Proficient with core business intelligence and data warehousing technology.

    Proficient in designing and developing data integration solutions using ETL tools such as Azure Data Factory and/or SSIS.

    Proficient with software development practices such as Agile, TDD, and CI/CD.

    Ability to collaborate and communicate professionally, both verbally and in writing, at all levels of the organization, particularly bridging conversations between data and business stakeholders.

Preferred Qualifications

    Experience with Snowflake

    Experience with graph databases or graph libraries

    Experience with Kafka or other streaming technologies

    Experience with Elasticsearch

    Experience in the rail or other commodities-driven industries.

Keywords: continuous integration, continuous deployment
[email protected]
View all
Mon Dec 04 21:18:00 UTC 2023


