Data Engineer role - Python, PySpark | Remote, USA
Email: [email protected]
From: Vinod Kumar, Excel Hire Staffing, LLC ([email protected])
Reply to: [email protected]

Role: Data Engineer - Python, PySpark
Location: Sunnyvale, California
Work authorization: USC / GC
Contract type: C2C
Note: Please submit Sunnyvale-local candidates only, with DL proof.

Skills & Qualifications:

Essential Skills:
- Programming Languages: Strong proficiency in Python for data manipulation and automation tasks.
- Big Data Frameworks: In-depth knowledge of Apache Spark and hands-on experience with PySpark for distributed data processing.
- Data Engineering Tools: Experience with data pipeline orchestration tools such as Airflow, Luigi, or similar.
- ETL Processes: Experience designing, building, and maintaining ETL processes for both batch and streaming data.
- Databases: Strong working knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Cloud Platforms: Familiarity with cloud platforms such as AWS, Google Cloud, or Azure, specifically their data engineering tools such as S3, Redshift, BigQuery, or Azure Data Lake.
- Data Warehousing: Experience with data warehousing concepts and technologies (e.g., Snowflake, Redshift, Google BigQuery).
- Version Control: Familiarity with Git or other version control systems for collaborative development.

Preferred Skills:
- Data Governance: Knowledge of data governance practices, data security, and compliance standards.
- Data Visualization: Basic knowledge of data visualization tools such as Tableau, Power BI, or Looker.
- Machine Learning: Exposure to machine learning frameworks, or experience working with data scientists to deploy models to production.
- Containerization: Knowledge of container technologies such as Docker and Kubernetes for deploying data applications.
Posted: Thu Nov 14 04:28:00 UTC 2024