
Grab Interview (Need WA local) || Data Engineer || Issaquah, WA (Onsite - Hybrid) at Issaquah, Washington, USA
Email: [email protected]
http://bit.ly/4ey8w48
https://jobs.nvoids.com/job_details.jsp?id=2265115&uid=

From:

Rahul Kumar,

SPAR Information Systems

[email protected]

Reply to: [email protected]

Hello Folks,

(Must have strong experience in ETL, Informatica, SQL, and Data Modelling.) Note: we need candidates local to Washington who can share a copy of a WA state ID or driver's license.
Hope you are all doing well.
Please go through the job description below and let me know your interest.
Title: Data Engineer
Work Location: Issaquah, WA (Onsite/ Hybrid)
Duration: Long Term Contract
Job Description:
Must-Have Skills (Data Engineer):
Skill 1: 8+ years of experience in ETL
Skill 2: 8+ years of experience in Informatica
Skill 3: 8+ years of experience in SQL queries
Skill 4: 5+ years of experience in Data Modelling
We are looking for a talented Data Engineer with expertise in Data Warehouse Appliances, SQL Queries, Data Modeling, and Informatica to join our growing team.
As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines, ensuring seamless integration of data from various sources, and managing data storage and processing frameworks.
You will work with cutting-edge data warehouse technologies to create scalable and efficient data architectures that support business intelligence and analytics initiatives.
Key Responsibilities:
Data Warehouse Design & Implementation: Design, implement, and optimize data warehouse solutions using Data Warehouse Appliances such as Teradata, Snowflake, Amazon Redshift, or similar technologies. Ensure efficient data storage, retrieval, and processing capabilities.
ETL Development with Informatica: Build and manage ETL processes using Informatica to automate the extraction, transformation, and loading of data into the data warehouse. Ensure data is transformed in line with business needs.
SQL Query Optimization: Write complex SQL queries to extract, manipulate, and analyze large volumes of data across relational and cloud-based data sources. Optimize queries for performance in large-scale environments.
Data Modeling: Design and implement data models (conceptual, logical, and physical) to ensure data is structured efficiently for both storage and analytics. Collaborate with data architects and analysts to create flexible and scalable data models.
Data Integration: Integrate and consolidate data from multiple sources (e.g., transactional systems, APIs, cloud storage, and flat files) into a centralized data warehouse for analysis and reporting.
Data Pipeline Development: Develop, maintain, and optimize robust data pipelines that can handle large volumes of data and support real-time or batch processing needs.
Performance Tuning & Optimization: Monitor and optimize the performance of data pipelines and ETL workflows, ensuring efficient data processing, low latency, and high scalability.
Data Quality & Governance: Ensure the accuracy, consistency, and reliability of data within the data warehouse by implementing data validation and data quality checks.
Collaboration & Communication: Work closely with data scientists, business analysts, and other technical teams to understand data requirements and deliver solutions that support data-driven decision-making.
Documentation & Best Practices: Maintain comprehensive documentation for ETL workflows, data models, and data integration processes. Follow industry best practices and company standards for data architecture and engineering.
Qualifications:
Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field (or equivalent experience).
Proven experience as a Data Engineer, Data Architect, or similar role in a data engineering capacity.
Strong expertise in Data Warehouse Appliances such as Teradata, Snowflake, Redshift, or similar platforms.
Extensive experience writing SQL queries to extract and analyze data from relational databases and cloud environments.
In-depth experience with Informatica for developing ETL processes.
Proficiency in Data Modeling, including designing and implementing logical, physical, and conceptual data models for data warehousing.
Experience with cloud-based data platforms (e.g., AWS, Azure, GCP) is a plus.
Strong knowledge of data pipeline development, batch processing, and real-time data integration.
Excellent understanding of data governance and data quality practices.
Experience with version control systems (e.g., Git) and collaborative development practices.
Ability to optimize and troubleshoot complex SQL queries and ETL processes for performance.
Strong problem-solving skills and the ability to work in a collaborative, cross-functional team environment.
Good communication skills, both verbal and written, with the ability to present complex data engineering concepts to both technical and non-technical stakeholders.
Preferred Skills:
Familiarity with tools like Apache Kafka, Airflow, or other data orchestration frameworks.
Knowledge of data analytics and reporting tools (e.g., Tableau, Power BI, or similar).
Familiarity with big data technologies (e.g., Hadoop, Spark, Hive) is a plus.
Experience with automation and scripting languages (e.g., Python).

Thanks & Regards,
Rahul Kumar
Sr. Technical Recruiter
SPAR Information Systems
(an E-Verify Company)
Email : [email protected]






