Immediate need: 15+ years Azure Data Engineer, Wisconsin Onsite Role
Email: [email protected]
From: Shobhana Kulhade, Sydata Inc ([email protected]) | Reply to: [email protected]

Senior Azure Data Engineer (Onsite Role)
Location: Wisconsin
Databricks: 4-5 years
Azure Unity Catalog (UC) experience

Job Description Summary
The Sr Azure Databricks Data Engineer will use comprehensive modern data engineering techniques and methods with Advanced Analytics to support business decisions for our clients. Your goal is to support the use of data-driven insights to help our clients achieve business outcomes and objectives. You can collect, aggregate, and analyze structured/unstructured data from multiple internal and external sources and communicate patterns, insights, and trends to decision-makers. You will help design and build data pipelines, data streams, reporting tools, information dashboards, data service APIs, data generators, and other end-user information portals and insight tools. You will be a critical part of the data supply chain, ensuring that stakeholders can access and manipulate data for routine and ad hoc analysis to drive business outcomes using Advanced Analytics. You are expected to function as a productive member of a team, working and communicating proactively with engineering peers, technical leads, project managers, product owners, and resource managers.

Specific requirements for the role include:
- Strong experience as an Azure Data Engineer; Azure Databricks experience is a must.
- Expert proficiency in Python and PySpark.
- Data migration experience from on-premises to cloud is a must.
- Hands-on experience with Stream Analytics, Event/IoT Hubs, and Cosmos DB.
- In-depth understanding of Azure cloud, Data Lake, and analytics solutions on Azure.
- Expert-level hands-on development: design and develop applications on Databricks, Azure Data Factory, Azure Data Lake, Azure SQL DW, Azure SQL, and SSIS; Airflow is required.
- In-depth understanding of Spark architecture, including Spark Streaming, Spark Core, Spark SQL, DataFrames, RDD caching, and Spark MLlib.
- Hands-on knowledge of data frameworks, data lakes, and open-source projects such as Apache Spark, MLflow, and Delta Lake.
- Knowledge of different programming and scripting languages.
- Good working knowledge of code versioning tools such as Git, Bitbucket, or SVN.
- Hands-on experience using Spark SQL with various data sources such as JSON, Parquet, and key-value pairs (see the sketch after this list).
- Experience preparing data for Data Science and Machine Learning; experience preparing data for use in Azure Machine Learning and/or Azure Databricks is a plus.
- Demonstrated experience preparing data and automating and building data pipelines for AI use cases (text, voice, image, IoT data, etc.).
- Good to have: programming experience with .NET or Spark/Scala.
- Experience creating tables, partitioning, bucketing, loading, and aggregating data using Spark SQL/PySpark.
- Broad experience with Microsoft SQL technologies, including SSAS Tabular models, DAX, T-SQL, Service Broker, Replication, and performance tuning.
- Implementation experience across different data stores (e.g., Azure Data Lake Gen2, Azure SQL Data Warehouse, Azure Blob Storage, HDFS), messaging systems (e.g., Azure Event Hubs, Apache Kafka), and data processing engines (e.g., Azure Data Lake Analytics, Apache Hadoop, Apache Spark, Apache Storm, Azure HDInsight).
- Knowledge of Azure DevOps processes such as CI/CD, as well as Agile tools and processes including Git, Jenkins, Jira, and Confluence.
- Working experience with Visual Studio, PowerShell scripting, and ARM templates.
- Able to build ingestion to ADLS and enable the BI layer for analytics.
- Strong understanding of data modeling and defining conceptual, logical, and physical data models.
- Big data / analytics / information analysis / database management in the cloud.
- IoT / event-driven / microservices in the cloud; experience with private and public cloud architectures, their pros and cons, and migration considerations.
- Ability to stay up to date with industry standards and technological advancements that enhance data quality and reliability in support of strategic initiatives.
- Basic experience with or knowledge of Agile methodologies.
- Working knowledge of RESTful APIs, the OAuth2 authorization framework, and security best practices for API gateways.
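To illustrate the Spark SQL and Delta Lake items above, here is a minimal PySpark sketch of reading JSON and Parquet sources, aggregating with Spark SQL, and saving a partitioned Delta table. It assumes a Databricks cluster with Delta Lake available; the lake paths, database, and table names are hypothetical placeholders and are not part of the original posting.

```python
# Minimal sketch only: batch read JSON/Parquet, aggregate with Spark SQL, write a partitioned Delta table.
# Assumptions: Databricks runtime with Delta Lake; all paths and table names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders-batch-sketch").getOrCreate()

# Read semi-structured JSON and columnar Parquet sources from the data lake.
orders = spark.read.json("/mnt/datalake/raw/orders/")           # hypothetical path
customers = spark.read.parquet("/mnt/datalake/raw/customers/")  # hypothetical path

# Register temporary views so the aggregation can be expressed in Spark SQL.
orders.createOrReplaceTempView("orders")
customers.createOrReplaceTempView("customers")

daily_revenue = spark.sql("""
    SELECT c.region,
           CAST(o.order_ts AS DATE) AS order_date,
           SUM(o.amount)            AS revenue
    FROM orders o
    JOIN customers c ON o.customer_id = c.customer_id
    GROUP BY c.region, CAST(o.order_ts AS DATE)
""")

# Persist as a partitioned Delta table so the BI layer can query it directly.
spark.sql("CREATE DATABASE IF NOT EXISTS analytics")
(daily_revenue.write
 .format("delta")
 .mode("overwrite")
 .partitionBy("order_date")
 .saveAsTable("analytics.daily_revenue"))
```

Partitioning by the date column keeps routine and ad hoc queries over recent days cheap, which is the usual reason to partition a reporting table like this.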
Responsibilities:
- Work closely with team members to lead and drive enterprise solutions, advising on key decision points, trade-offs, best practices, and risk mitigation.
- Manage data-related requests, analyze issues, and provide efficient resolution.
- Design all program specifications and perform required tests.
- Design and develop data ingestion using ADF and the processing layer using Databricks (see the streaming sketch after the Qualifications section).
- Work with the SMEs to implement data strategies and build data flows.
- Prepare code for all modules according to the required specifications.
- Monitor all production issues and inquiries and provide efficient resolution.
- Evaluate all functional requirements, map documents, and troubleshoot all development processes.
- Document all technical specifications and associated project deliverables.
- Design all test cases to provide support for all systems and perform unit tests.

Qualifications:
- 2+ years of hands-on experience designing and implementing multi-tenant solutions using Azure Databricks for data governance, data pipelines for a near-real-time data warehouse, and machine learning solutions.
- 5+ years of experience in software development, data engineering, or data analytics using Python, Scala, Spark, Java, or equivalent technologies.
- Bachelor's or Master's degree in Big Data, Computer Science, Engineering, Mathematics, or a similar area of study, or equivalent work experience.
- Strong written and verbal communication skills.
- Ability to manage competing priorities in a fast-paced environment.
- Ability to resolve issues.
- Self-motivated, with the ability to work independently.
- Nice to have: Microsoft Certified: Azure Data Engineer Associate.
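To make the near-real-time ingestion responsibility concrete, here is a minimal Structured Streaming sketch that lands IoT telemetry in a Delta table. It assumes a Databricks cluster with Delta Lake and an Azure Event Hubs namespace reached through its Kafka-compatible endpoint; the namespace, event hub name, message schema, paths, and connection-string placeholder are all hypothetical and not part of the original posting.

```python
# Minimal sketch only: stream IoT telemetry from Event Hubs (Kafka endpoint) into a Delta table.
# Assumptions: Databricks runtime with Delta Lake; names, paths, and the connection string are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("iot-streaming-sketch").getOrCreate()

# Event Hubs connection string placeholder, passed to the Kafka client via SASL/PLAIN.
connection_string = "<event-hubs-connection-string>"
jaas = (
    'org.apache.kafka.common.security.plain.PlainLoginModule required '
    f'username="$ConnectionString" password="{connection_string}";'
)

# Expected shape of each telemetry message (hypothetical schema).
schema = StructType([
    StructField("device_id", StringType()),
    StructField("temperature", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "<namespace>.servicebus.windows.net:9093")
       .option("subscribe", "iot-telemetry")              # hypothetical event hub name
       .option("kafka.security.protocol", "SASL_SSL")
       .option("kafka.sasl.mechanism", "PLAIN")
       .option("kafka.sasl.jaas.config", jaas)
       .option("startingOffsets", "latest")
       .load())

# Parse the JSON payload into typed columns.
events = (raw
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Append to a Delta table that the downstream warehouse / BI layer can query.
(events.writeStream
 .format("delta")
 .option("checkpointLocation", "/mnt/checkpoints/iot_telemetry")
 .outputMode("append")
 .trigger(processingTime="1 minute")
 .start("/mnt/datalake/silver/iot_telemetry"))
```

The checkpoint location gives the stream exactly-once progress tracking across restarts; the trigger interval is just one plausible setting for a near-real-time warehouse feed.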
Thanks & Regards,
Shobhana Kulhade
Sr. IT Recruiter
Sydata Inc
Email: [email protected]
6494 Weathers Place, Suite #100, San Diego, California 92121
Website: www.sydatainc.com

Notice: This email contains confidential or proprietary information which may be legally privileged. It is intended only for the named recipient(s). If an addressing or transmission error has misdirected the email, please notify the author by replying to this message. To be removed from this mailing list, reply with "REMOVE".
Mon Nov 27 22:37:00 UTC 2023 |