Big Data Architect — Hybrid role in CA, No H1B (Remote, USA)
Email: [email protected] |
From: Ayush, iStaffX <[email protected]>
Reply to: [email protected]

Hi,

Hope you are doing well. Please find the job description below. If you are comfortable with the role, send me your updated resume or call me at 575-236-4255.

Title: Big Data Architect
Client: Kaiser Permanente
Location: Hybrid/remote, occasionally onsite at 2000 Broadway, Oakland, CA 94612 (see notes below)
Duration: Longer-term contract, 10-month SOWs
Visa status: No H1B

When submitting a candidate, include all of the following. Every item must be completed; incomplete submittals will be deleted and possibly routed to the junk folder.
- First legal name
- Last legal name
- Current location
- Rate needed
- U.S. worker status
- Is there a LinkedIn profile? If so, what is the LinkedIn URL?
- Does the profile have a photo of the candidate's face?
- Is the resume attached?
- Name of the employer the candidate will be paid from, or the company on the I-9
- Did you place this person on their current assignment?
- MM/DD of birth (not year)
- Last 4 of SSN

Notes from call with the manager:
- Implementation of Traefik
- Implementation of Kubernetes (on-prem), OpenShift, or Rancher
- Implementation of R Server / RStudio / Jupyter Notebooks
- Expert Linux administration
- MapR storage implementation or administration
- Versed in YAML file configuration
- Understands how to architect and build out a complete Big Data solution for use by data scientists
- Must know how to set up an environment for a data scientist workbench
- Must be an expert in Python and understand HDFS systems and how to implement them
- This is with the Division of Research
- This is not an analyst position; it is an architect position
- At a minimum, the person needs to come onsite once a month (mandatory), and potentially a few more times
- There is some crossover with a data scientist role, but the person does not need to know how to run models.
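As a rough illustration of the Traefik, Kubernetes, and YAML skills listed above, a minimal Traefik IngressRoute manifest exposing a notebook service might look like the following (the namespace, hostname, and Service name here are hypothetical, not taken from the posting):

```yaml
# Sketch of a Traefik IngressRoute on Kubernetes (traefik.io/v1alpha1 CRD).
# Routes HTTPS traffic for a hypothetical JupyterHub Service.
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: jupyterhub            # hypothetical resource name
  namespace: data-science     # hypothetical namespace
spec:
  entryPoints:
    - websecure               # Traefik's HTTPS entry point
  routes:
    - match: Host(`jupyter.example.internal`)   # hypothetical host
      kind: Rule
      services:
        - name: jupyterhub    # hypothetical Kubernetes Service
          port: 8000
  tls: {}                     # terminate TLS with the default certificate
```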
Knowing how to run models would be nice, but they need to know how to build a platform that data scientists can use to run models, and which types of systems fit to make it work. They should be very experienced with GPUs, how they are used, and the scenarios in which to use them.

Job Description:
Architect-level Kubernetes; Linux expert; Traefik and MapR expert; PostgreSQL expert; Python expert. Understands how to build these systems from the ground up on-premises and knows storage systems very well. Understands how to build out a platform with Spark, Jupyter Notebooks, and R Server / RStudio with interactions with SAS.

5 or more years of experience designing, collecting, storing, processing, and analyzing huge sets of data. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. Responsible for integrating them with the architecture used across the company.

Responsibilities:
- Selecting and integrating any Big Data tools and frameworks required to provide the requested capabilities
- Implementing ETL processes
- Monitoring performance and advising on any necessary infrastructure changes
- Defining data retention policies

Skills and Qualifications:
- Proficient understanding of distributed computing principles
- Management of a Hadoop cluster with all included services, and the ability to resolve any ongoing issues with operating the cluster
- Experience building stream-processing systems
- Good knowledge of Big Data querying tools such as Pig, Hive, and Impala
- Experience with Spark
- Experience integrating data from multiple data sources
- Experience with NoSQL databases such as HBase, Cassandra, and MongoDB
- Knowledge of various ETL techniques and frameworks
- Experience with messaging systems such as Kafka
- Experience with Big Data ML toolkits

Regards,
Ayush Kumar | iStaffX LLC
IT Recruiter
Email: [email protected]
Phone: 575-236-4255
Website: https://istaffx.com/
[email protected] View all |
Fri Feb 17 22:39:00 UTC 2023