Senior Data Engineer | Need Local to MD, VA
Email: [email protected]
Senior Data Engineer
Work location: Fulton, MD 20759 (primary; Virginia and Washington, DC will also work)
Type: 6-month C2H (C2C)
Experience: Minimum 13 years
Work authorization: GC/USC
Work style: Hybrid (3-4 days remote)
Client: Project with the Department of Transportation
Note: The client will not provide relocation expenses.

Additional notes (must have):
- Extensive coding experience with both Python and PySpark.
- Senior-level experience creating data models and data pipelines.
- Extensive experience with AWS services such as Amazon Redshift, Amazon EMR, AWS Lambda, AWS Glue, Amazon RDS, AWS DMS, and the others mentioned below.

Role Description:
At a high level, the candidate must be a hands-on data engineer with the ability to lead the work, which includes working directly with the client, understanding their requirements, and designing and developing the solution. Strong communication and the ability to collaborate effectively are essential.

Technical requirements:
- Data Pipeline Development: Design, implement, and manage robust data pipelines using Python, PySpark, and SQL to efficiently extract, transform, and load data from diverse sources (batch and streaming).
- AWS Expertise: Demonstrate expertise in core AWS services such as AWS DMS, AWS Glue, AWS Step Functions, Amazon S3, Amazon Redshift, Amazon RDS, Amazon EMR, AWS IAM, and AWS Lambda, and apply them to build scalable and reliable data solutions.
- Data Modeling: Develop and maintain efficient data models to support analytical and reporting needs.
- Database Management: Administer databases using AWS services like Amazon RDS or Amazon Redshift, focusing on schema design, performance optimization, and monitoring.
- Data Warehousing: Use Amazon Redshift or Snowflake to create high-performing analytical databases that empower data-driven decision-making.
- ETL Best Practices: Implement industry best practices for ETL processes, including data validation, error handling, and data quality checks.
- Performance Optimization: Optimize query performance through continuous database tuning and by leveraging AWS's scalability capabilities.
- Monitoring and Logging: Establish robust monitoring and logging using Amazon CloudWatch, AWS CloudTrail, or comparable tools to ensure pipeline reliability.
- Security and Compliance: Ensure adherence to security best practices and relevant compliance standards, tailoring solutions to meet GDPR, HIPAA, or other regulatory requirements.
- Automation: Drive automation of the deployment and scaling of data pipelines using infrastructure-as-code (IaC) tools like AWS CloudFormation and Terraform.
- Collaboration: Collaborate closely with cross-functional teams, including data scientists, analysts, and other stakeholders, to understand their data needs and provide effective solutions.
- Continuous Learning: Stay current on the latest developments in AWS services and data engineering methodologies, applying new insights to enhance the data infrastructure.
- Soft Skills: Exhibit strong communication skills to facilitate effective teamwork and interaction with diverse groups.

Regards,
Vivek Sah | Technical Recruiter
Largeton INC. | Tel: (571) 568-4156 | Email: [email protected]
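To give candidates a concrete picture, here is a minimal sketch of the kind of ETL step the requirements describe (data validation, error handling, and data quality checks). All record fields and values are hypothetical, and plain Python stands in for what would typically be a PySpark job on AWS Glue or EMR.

```python
# Illustrative only: a minimal batch ETL transform with data quality
# checks and error routing. Field names ("trip_date", "miles") are
# hypothetical; a production pipeline would use PySpark DataFrames.
from datetime import date

def transform(records):
    """Validate and transform raw records; route bad rows to an error list."""
    good, errors = [], []
    for row in records:
        try:
            # Data quality checks: parseable date, positive numeric distance.
            trip_date = date.fromisoformat(row["trip_date"])
            miles = float(row["miles"])
            if miles <= 0:
                raise ValueError("miles must be positive")
            good.append({"trip_date": trip_date, "miles": miles})
        except (KeyError, ValueError) as exc:
            # Error handling: keep the bad row and its reason for review.
            errors.append({"row": row, "reason": str(exc)})
    return good, errors

raw = [
    {"trip_date": "2024-01-10", "miles": "12.5"},
    {"trip_date": "2024-01-11", "miles": "-3"},   # fails validation
    {"trip_date": "bad-date", "miles": "7"},      # fails parsing
]
good, errors = transform(raw)
```

In a real pipeline, the `errors` list would typically land in a quarantine location (e.g. an S3 error prefix) rather than being silently dropped.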
Wed Jan 24 21:15:00 UTC 2024