Principal Data Engineer, Parsippany, New Jersey, USA
Email: [email protected]
From: Jyothi, NCS <[email protected]>
Reply to: [email protected]

Hi,

Hope you're doing great today. This is Jyothi from National Computer Systems Inc.

Job Detail:
Job Title: Principal Data Engineer
Location: Parsippany, NJ (3 days onsite, 2 days remote)
Duration: 3-6+ months, contract-to-hire

Must have:
- 8+ years of overall experience; strong in PySpark/Python, understanding of ML concepts, building data pipelines for unstructured data, Databricks
- Good experience working with AWS and its services
- Performance-related analysis
- Experience building data ingestion pipelines for both structured and unstructured data, for storage and optimal retrieval
- Experience working with cloud data stores and NoSQL, graph, and vector databases
- Good experience with languages such as Python, SQL, and PySpark
- Experience working with Databricks and Snowflake technologies
- Experience with relevant code repository and project tools such as GitHub, JIRA, and Confluence
- Working experience with Continuous Integration and Continuous Deployment, with hands-on expertise in Jenkins, Terraform, Splunk, and Dynatrace
- Highly innovative, with an aptitude for foresight, systems thinking, and design thinking, and a bias toward simplifying processes
- Detail-oriented, with strong analytical, problem-solving, and organizational skills
- Ability to communicate clearly with both technical and business teams

Job Description / Responsibilities:
- Build a data ingestion framework and data pipelines to ingest unstructured and structured data from various data sources such as SharePoint, Confluence, chat bots, Jira, and external sites into our existing One Data platform
- Work closely with cross-functional teams, including product managers, data scientists, and engineers, to understand project requirements and objectives, ensuring alignment with overall business goals
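As a rough illustration of the ingestion work the role describes (not part of the posting itself), here is a minimal pure-Python sketch of routing structured and unstructured items through a single pipeline. All names here, such as `Record` and `ingest`, are hypothetical; a real implementation at this scale would use PySpark or Databricks jobs rather than plain Python.

```python
from dataclasses import dataclass, field
import json

@dataclass
class Record:
    """Normalized record produced by the ingestion step (hypothetical schema)."""
    source: str                # e.g. "sharepoint", "jira"
    kind: str                  # "structured" or "unstructured"
    payload: dict = field(default_factory=dict)

def ingest(raw_items):
    """Route raw (source, item) pairs: valid JSON strings become structured
    records; anything else is wrapped as unstructured text."""
    records = []
    for source, item in raw_items:
        try:
            payload = json.loads(item)
            kind = "structured"
        except (json.JSONDecodeError, TypeError):
            payload = {"text": str(item)}
            kind = "unstructured"
        records.append(Record(source=source, kind=kind, payload=payload))
    return records

items = [("jira", '{"key": "DATA-1", "status": "Done"}'),
         ("confluence", "Free-form page text")]
for rec in ingest(items):
    print(rec.source, rec.kind)
```

The point of the sketch is only the routing decision: one entry point accepts both kinds of data and normalizes them into a common record shape before downstream storage and retrieval.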
- Design a scalable target-state architecture for data processing based on document content (data types may include, but are not limited to: XML, HTML, DOC, PDF, XLS, JPEG, TIFF, and PPT), including PII/CII handling, policy-based hierarchy rules, and metadata tagging
- Design, develop, and deploy optimal data pipelines, including an incremental data ingestion strategy, by taking advantage of leading-edge technologies through experimentation and iterative refinement
- Design and implement vector databases to efficiently store and retrieve high-dimensional vectors
- Conduct research to stay up to date with the latest advancements in generative AI services and identify opportunities to integrate them into our products and services
- Implement data quality and validation checks to ensure accuracy and consistency of data
- Build automation that effectively and repeatably ensures the quality, security, integrity, and maintainability of our solutions
- Monitor and troubleshoot data pipeline performance, identifying and resolving bottlenecks and issues
- Define and implement data access policies; implement and maintain data security measures and access policies for cloud storage buckets and vector databases

Keywords: artificial intelligence, machine learning, New Jersey
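For context on the vector-database responsibility above, here is a toy in-memory sketch of storing and retrieving high-dimensional vectors by cosine similarity. The `VectorStore` name and brute-force search are illustrative assumptions only; a production system would use a dedicated vector database with approximate nearest-neighbor indexing.

```python
import math

class VectorStore:
    """Toy in-memory vector store with brute-force cosine-similarity search."""
    def __init__(self):
        self._vectors = {}  # doc_id -> list[float]

    def add(self, doc_id, vector):
        self._vectors[doc_id] = vector

    def search(self, query, k=3):
        """Return the top-k (doc_id, similarity) pairs for the query vector."""
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm = (math.sqrt(sum(x * x for x in a))
                    * math.sqrt(sum(y * y for y in b)))
            return dot / norm if norm else 0.0
        scored = [(doc_id, cosine(query, v))
                  for doc_id, v in self._vectors.items()]
        return sorted(scored, key=lambda p: p[1], reverse=True)[:k]

store = VectorStore()
store.add("doc-a", [1.0, 0.0, 0.0])
store.add("doc-b", [0.0, 1.0, 0.0])
print(store.search([0.9, 0.1, 0.0], k=1)[0][0])  # doc-a is the closest match
```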
Posted: Tue Feb 13 19:42:00 UTC 2024