Need - Data Engineer - Remote (USA)
Email: [email protected]
From: Pradeep, Shrive Technologies ([email protected])
Reply to: [email protected]

Role: Data Engineer
Location: Remote
Experience: 10+ years
Mandatory skills: Python, Databricks, PySpark, Scala

Job Description:

Responsibilities:
- Apply broad knowledge of technology options, platforms, design techniques, and approaches across the data engineering ecosystem to build systems that meet quality needs.
- Build systems and datasets using software engineering best practices, data management fundamentals, data storage principles, recent advances in distributed systems, and operational excellence best practices.
- Analyze systems, define transformation requirements, design suitable data models, and document the design and specifications.
- Demonstrate a passion for quality and productivity through efficient development techniques, standards, and guidelines.
- Design, build, execute, and maintain automated tests and/or manage deep data profiling runs to ensure data products and pipelines meet expectations.
- Support business analysts by providing technical insights to close gaps in understanding.
- Work collaboratively with analysts, engineers, subject matter experts, and product managers to apply WBD's analytical and quality methods to satisfy client needs.
- Participate in the growth of the DQE practice by sharing knowledge and lessons learned, continually improving best practices, and contributing to methods that systematically advance workforce capabilities.
- Communicate effectively through technical documentation, commented code, and interactions with stakeholders and adjacent teams.
- Contribute to building a vibrant workplace where teams can thrive, and model the organization's positive, supportive culture of respect and excellence.

Requirements:
- Direct experience with any or all of the following: data quality, data management, data governance, analytics, business intelligence, dashboards or reporting, data science, and/or data-intensive software development projects (including exposure to and familiarity with the relevant technologies and tools).
- Demonstrated familiarity or experience navigating data ecosystems (data warehouses, data lakes, structured and unstructured data, file types, BI systems) and familiarity with the terms and concepts therein.
- Ability to coordinate and collaborate effectively with data engineers, team members, and other internal subject matter experts to leverage and continually improve WBD's tools, processes, and resources.
- Aptitude for learning new tools and technologies.
- Ability to work independently as well as in a remote team environment; ability to work effectively and collaboratively within your team, participating in Scrum/Agile and other meetings.
- Ability to communicate frequently and openly with leads and team members.

Qualifications:
- A Bachelor's degree or higher in Computer Science, Engineering, or a related field.
- Proficiency in programming languages such as Python, as well as declarative query languages (e.g., SQL/MySQL).
- Good understanding of and experience with a big data processing platform such as Databricks.
- Good understanding of and experience with Airflow or an equivalent orchestration tool.
- Good understanding of and experience with a cloud database such as Snowflake or an equivalent.
- 3+ years of hands-on experience working on data-centric systems using MPP database technologies and Python, Spark, or AWS equivalents.
- 3+ years of experience with AWS cloud data technologies, including familiarity with EMR, Kinesis, and Athena.

Keywords: business intelligence
Fri Dec 23 18:16:00 UTC 2022