Urgent Hiring || Data Engineer || Onsite, Cincinnati, Ohio, USA (local candidates only)
Email: [email protected] |
From: Pankaj Tiwary, Sibitalent Corp ([email protected])
Reply to: [email protected]

Job Title: Data Engineer
Visa: Any visa except H1B/CPT
Rate: $60/hr C2C
Location: Onsite (Monday through Friday), Cincinnati, Ohio
Duration: 3+ Months
Experience: 8+ Years
Client:
Note: Local candidates preferred.

Job Description:
The data engineer designs and builds platforms, tools, and solutions that help the bank manage, secure, and generate value from its data. The person in this role creates scalable and reusable solutions for gathering, collecting, storing, processing, and serving data at both small and very large (i.e., Big Data) scales. These solutions can include on-premise and cloud-based data platforms, and solutions in any of the following domains: ETL, business intelligence, analytics, persistence (relational, NoSQL, data lakes), search, messaging, data warehousing, stream processing, and machine learning. The role is responsible and accountable for risk by openly exchanging ideas and opinions, elevating concerns, and personally following policies and procedures as defined. It is accountable for always doing the right thing for customers and colleagues, and for ensuring that actions and behaviors drive a positive customer experience. While operating within the bank's risk appetite, the role achieves results by consistently identifying, assessing, managing, monitoring, and reporting risks of all types.

ESSENTIAL DUTIES AND RESPONSIBILITIES:
- Design, develop, and support data solutions, APIs, tools, and processes to enable rapid delivery of business capabilities.
- Work closely with IT application teams, enterprise architecture, infrastructure, information security, and LOB stakeholders to translate business and technical strategies into data-driven solutions for the bank.
- Act as a technical expert addressing problems related to system and application design, performance, integration, security, etc.
- Conduct research and development on current trends and technologies related to the banking industry, data engineering and architecture, data security, and related topics.
- Work with developers to build CI/CD pipelines, self-service build tools, and automated deployment processes.
- Evaluate software products and provide documented recommendations as needed.
- Provide support and troubleshooting for data platforms; must be willing to provide escalated on-call support for complicated and/or critical incidents.
- Participate in the planning process for hardware and software.
- Plan and work on internal projects as needed, including legacy system replacement, monitoring and analytics improvements, tool development, and technical documentation.
- Provide technical guidance and mentoring for other team members.
- Manage and prioritize multiple assignments.

SUPERVISORY RESPONSIBILITIES: None

MINIMUM KNOWLEDGE, SKILLS, AND ABILITIES REQUIRED:
- Bachelor's degree in Computer Science/Information Systems, or an equivalent combination of education and experience.
- Must be able to communicate ideas both verbally and in writing to management, business and IT sponsors, and technical resources, in language appropriate for each group.
- Fundamental understanding of distributed computing principles.
- Knowledge of application and data security concepts, best practices, and common vulnerabilities.
- Conceptual understanding of one or more of the following disciplines preferred: big data technologies and distributions, metadata management products, commercial ETL tools, BI and reporting tools, messaging systems, data warehousing, Java (language and runtime environment), major version control systems, continuous integration/delivery tools, infrastructure automation and virtualization tools, major cloud platforms, or REST API design and development.
Nice to Have:
- Apache Kafka
- Cloud data warehousing: Snowflake preferred
- ETL tools: Talend Big Data 7.x/6.x, Talend Integration Suite 7.x/6.x, Talend ESB, Matillion ETL for Redshift 1.39.9, DataStage 11.x/9.x/8.x/7.x PX, QualityStage, ProfileStage, Information Analyzer, FastTrack, Business Glossary, Metadata Workbench, Informatica
- Languages: SQL, PL/SQL, UNIX shell scripting, Java, Python, Scala, basic C and C++; Amazon Redshift databases
- Databases: Big Data/Hadoop, Hive, NoSQL, Oracle 8i/9i/10g, IBM DB2, Teradata, SQL Server, Sybase, MS Access 2007
- Other DW/BI application tools: Toad 7.x, DB2 AQT 7.x, DataStage version control tool, HP PVCS versioning tool, MicroStrategy 8.1.0, Business Objects, Cognos, QlikView, Tableau
- Environment: Spark/Hadoop v2.4.2, AIX UNIX 5.2, Windows 98/XP/2K/7/10, AWS
- Methodologies: ERwin Data Modeler, dimensional modeling, ER modeling

Thanks and Regards,
Pankaj Tiwary
Technical Recruiter
Email: [email protected]
Web: www.sibitalent.com
[email protected] View all |
Wed Sep 20 20:58:00 UTC 2023