Fixed Income Data Engineer : Newark, NJ (Hybrid) : No H1B at Newark, New Jersey, USA
Email: [email protected]
From: Vivek Paliwal, KPG99, [email protected]

Reply to: [email protected]

Mention visa status and location when applying.

Must have fixed income experience.

No H1B.

Role: Fixed Income Data Engineer

Location: Newark, NJ (Hybrid)

Duration: Long term

Visa: No H1B

Job Description:      

A global financial firm with offices in Newark, NJ is seeking a Fixed Income Data Engineer to design, develop, and maintain ETL processes and data pipelines that collect and transform data from various sources (e.g., databases, APIs, logs) into a structured and usable format. You will work in an agile environment through collaboration, ownership, and innovation.

MUST HAVE:

- 5+ years of experience building data pipelines in Python.

- Experience developing and deploying PySpark/Python scripts in a cloud environment (a minimal sketch follows this list).

- Experience working in the AWS cloud, especially services such as S3, Glue, Lambda, Step Functions, DynamoDB, and ECS.

- Strong knowledge of data warehousing and ETL processes.

- Python API development and/or Snowflake Snowpark coding experience.

- Understanding of capital markets within Fixed Income (structured products, bonds, FX, etc.).
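
As a rough illustration of the PySpark-on-AWS work described above, here is a minimal sketch (not from the posting) of a Glue job that reads raw JSON trade records from S3, applies a light transformation, and writes partitioned Parquet; the bucket paths and column names are hypothetical.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue boilerplate: resolve job arguments and build the contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical source: raw fixed-income trade records landed in S3 as JSON.
raw = spark.read.json("s3://example-raw-bucket/fixed-income/trades/")

# Light transformation: type the trade date and keep only settled trades.
structured = (
    raw.withColumn("trade_date", F.to_date("trade_date", "yyyy-MM-dd"))
       .filter(F.col("status") == "SETTLED")
)

# Write a partitioned, columnar copy for downstream warehouse loads.
(structured.write
    .mode("overwrite")
    .partitionBy("trade_date")
    .parquet("s3://example-curated-bucket/fixed-income/trades/"))

job.commit()
```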

Our Role:

- Design, develop, and maintain ETL processes and data pipelines to collect and transform data from various sources (e.g., databases, APIs, logs) into a structured and usable format.

- Create and maintain data storage solutions, such as data warehouses, data lakes, and databases. Optimize data storage structures for performance and cost-effectiveness.

- Integrate and merge data from different sources while ensuring data quality, consistency, and accuracy.

- Manage and optimize data warehouses to store and organize data for efficient retrieval and analysis.

- Cleanse, preprocess, and transform data to meet business requirements and maintain data quality.

- Monitor data pipelines and database performance to ensure data processing efficiency.

- Implement and maintain security measures to protect sensitive data and ensure compliance with data privacy regulations.

- Align with the Product Owner and Scrum Master in assessing business needs and transforming them into scalable applications.

- Build and maintain code to manage data received from heterogeneous sources, including web-based sources, internal/external databases, and flat files in a variety of formats (binary, ASCII).

- Build a new enterprise data warehouse and maintain the existing one.

- Design and support effective storage and retrieval of very large internal and external data sets, and think ahead about the convergence strategy with our AWS cloud migration.

- Assess the impact of scaling up and scaling out, and ensure sustained data management and data delivery performance.

- Build interfaces to support evolving and new applications and to accommodate new data sources and types of data.

Your Required Skills:

- 5+ years of experience building data pipelines in Python.

- Strong knowledge of data warehousing, ETL processes, and database management.

- Proficiency in data modeling, database design, and SQL.

- 3+ years of experience developing and deploying PySpark/Python scripts in a cloud environment.

- 3+ years of experience working in the AWS cloud, especially services such as S3, Glue, Lambda, Step Functions, DynamoDB, and ECS.

- 1+ years of hands-on experience in the design and development of data ingress/egress patterns on Snowflake (a minimal sketch follows this list).

- Proficiency in Aurora Postgres database clusters on AWS.

- Familiarity with orchestration tools such as Airflow, Autosys, etc.

- Experience with data lakes, data marts, and data warehouses.

- Proficiency in SQL, data querying, and performance optimization techniques.

- Ability to communicate status, challenges, and proposed solutions to the team.

- Demonstrated ability to learn new skills and work as part of a team.

- Knowledge of data security and privacy best practices.

- Working knowledge of data governance and the ability to ensure high data quality throughout the data lifecycle of a project.

- Knowledge of data visualization and business intelligence tools (e.g., Tableau, Power BI).

- Ability to prioritize multiple tasks and projects and work effectively under pressure; exceptional organizational and administrative skills; at ease with an abundance of detail, yet mindful of the big picture at all times.

- Strong analytical and problem-solving skills, with the ability to conduct root cause analysis on system, process, or production problems and to provide viable solutions.

- Experience working in an Agile environment with a Scrum Master/Product Owner and the ability to deliver.
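
As a rough illustration of the Snowflake ingress/egress patterns mentioned above, here is a minimal Snowpark sketch (not from the posting) that copies staged Parquet files into a staging table, applies a light transformation, and saves a curated table; the account, stage, and table names are hypothetical.

```python
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, to_date

# Hypothetical connection parameters; in practice these would come from a
# secrets manager rather than being hard-coded.
session = Session.builder.configs({
    "account": "example_account",
    "user": "example_user",
    "password": "example_password",
    "warehouse": "EXAMPLE_WH",
    "database": "EXAMPLE_DB",
    "schema": "STAGING",
}).create()

# Ingress: copy staged Parquet files from an (assumed) external stage into a staging table.
session.sql(
    "COPY INTO staging_trades FROM @fixed_income_stage/trades/ "
    "FILE_FORMAT = (TYPE = PARQUET) MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE"
).collect()

# Transform with the Snowpark DataFrame API and persist a curated table.
curated = (
    session.table("staging_trades")
    .with_column("trade_date", to_date(col("trade_date")))
    .filter(col("status") == "SETTLED")
)
curated.write.mode("overwrite").save_as_table("CURATED.FIXED_INCOME_TRADES")

session.close()
```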

Your Desired Skills:

- Good exposure to containers such as ECS or Docker.

- Python API development and/or Snowflake Snowpark coding experience.

- Streaming or messaging knowledge with Kafka or Kinesis is desirable.

- Understanding of capital markets within Fixed Income.
