
Senior Big Data Architect-16+ - Local to Washington DC, Maryland at Washington, DC, USA
Email: [email protected]
From: Sandeep, Key Infotek ([email protected])

Reply to: [email protected]

Main information:

Job title: Senior Big Data Architect

Job type: Contract / C2C (temporary)

On-site/Remote: On-site

Locals required: yes

Location: OCTO - 200 I Street, SE, Washington, District Of Columbia 20003, US

Job industry: Public Sector and Government

Job category: IT, Computer & Mathematical

Requirements:

Minimum education: Bachelor

Degree required: Bachelor's degree in IT or related field, or equivalent experience

Special examinations / certificates: Databricks Certified Data Engineer Professional is highly desired.

Years of work experience: 16 years

Responsibilities:

Veteran Firm Seeking a Senior Big Data Architect for Onsite Assignment in Washington, DC

We are looking to hire a Senior Big Data Architect for the DC government's Office of the Chief Technology Officer (OCTO).

The ideal candidate has 16+ years of experience planning, coordinating, and monitoring project activities in an enterprise environment; 5+ years of knowledge of Big Data, Data Architecture, and implementation best practices; and 5+ years of expertise in data-centric systems for the analysis and visualization of data, such as Tableau, MicroStrategy, ArcGIS, Kibana, and Oracle.

If you're interested, I'll gladly provide more details about the role and further discuss your qualifications.

Position Description: This role will provide expertise to support the development of a Big Data / Data Lake system architecture that supports enterprise data operations for the District of Columbia government, including Internet of Things (IoT) / Smart City projects, the enterprise data warehouse, the open data portal, and data science applications. This is an exciting opportunity to work as part of a collaborative senior data team supporting DC's Chief Data Officer. This architecture includes Databricks, Microsoft Azure platform tools (including Data Lake and Synapse), Apache platform tools (including Hadoop, Hive, Impala, Spark, Sedona, and Airflow), and data pipeline/ETL development tools (including StreamSets, Apache NiFi, and Azure Data Factory). The platform will be designed for District-wide use and integration with other OCTO Enterprise Data tools such as Esri, Tableau, MicroStrategy, API Gateways, and Oracle databases and integration tools.
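For a rough sense of the day-to-day work this stack implies (not part of the posting's requirements), the sketch below shows a minimal PySpark curation job of the kind typically built on Databricks over an Azure Data Lake: read raw landed files, lightly clean them, and publish a Delta table that BI tools such as Tableau or MicroStrategy can query. The storage path, column names, and table name are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("octo-curation-sketch").getOrCreate()

# Hypothetical raw-zone path in Azure Data Lake Storage Gen2
raw_path = "abfss://raw@<storage-account>.dfs.core.windows.net/sensors/"

raw_df = spark.read.json(raw_path)

curated_df = (
    raw_df
    .withColumn("ingest_date", F.current_date())   # partition column for the lake
    .dropDuplicates(["device_id", "event_time"])   # basic de-duplication of events
)

# Publish a curated Delta table for downstream analytics and visualization tools
(
    curated_df.write
    .format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .saveAsTable("curated.sensor_events")
)
```

Partitioning the curated table by ingest date is a common choice here because it keeps downstream reporting queries from scanning the full history of the lake.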

Position Responsibilities:
Coordinates IT project management, engineering, maintenance, QA, and risk management.
Plans, coordinates, and monitors project activities.
Develops technical applications to support users.
Develops, implements, maintains, and enforces documented standards and procedures for assigned systems' design, development, installation, modification, and documentation.
Provides training for system products and procedures.
Performs application upgrades.
Performs monitoring, maintenance, or reporting on real-time databases, real-time network and serial data communications, and real-time graphics and logic applications.
Troubleshoots problems.
Ensures project life-cycle follows District standards and procedures.

General skills (must have):

Experience implementing modern Big Data storage and analytics platforms such as Databricks and Data Lakes: 5 years of experience

Knowledge of modern Big Data and Data Architecture and Implementation best practices: 5 years of experience

Knowledge of architecture and implementation of networking, security and storage on cloud platforms such as Microsoft Azure: 5 years of experience

Experience with deployment of data tools and storage on cloud platforms such as Microsoft Azure: 5 years of experience

Knowledge of Data-centric systems for the analysis and visualization of data, such as Tableau, MicroStrategy, ArcGIS, Kibana, Oracle: 5 years of experience

Experience querying structured and unstructured data sources, including SQL and NoSQL databases: 5 years of experience

Experience modeling and ingesting data into and between various data systems through the use of Data Pipelines: 5 years of experience

Experience with API / Web Services (REST/SOAP): 3 years of experience

Experience with complex event processing and real-time streaming data: 3 years of experience

Experience with deployment and management of data science tools and modules such as JupyterHub: 3 years of experience

Experience with ETL, data processing, and analytics using languages such as Python, Java, or R: 3 years of experience

Planning, coordinating, and monitoring project activities: 16+ years of experience

Leading projects and ensuring they comply with established standards and procedures: 16+ years of experience

General skills (nice to have):

Experience with Cloudera Data Platform: 3 years of experience

Language skills (must have):

English: Native or bilingual proficiency

Experience required:

Minimum Education/Certification Requirements: Bachelor's degree in Information Technology or related field, or equivalent experience.

Skills Matrix:

Skill | Required/Desired | Years
Experience implementing modern Big Data storage and analytics platforms such as Databricks and Data Lakes | Required | 5
Knowledge of modern Big Data and Data Architecture and Implementation best practices | Required | 5
Knowledge of architecture and implementation of networking, security and storage on cloud platforms such as Microsoft Azure | Required | 5
Experience with deployment of data tools and storage on cloud platforms such as Microsoft Azure | Required | 5
Knowledge of Data-centric systems for the analysis and visualization of data, such as Tableau, MicroStrategy, ArcGIS, Kibana, Oracle | Required | 5
Experience querying structured and unstructured data sources including SQL and NoSQL databases | Required | 5
Experience modeling and ingesting data into and between various data systems through the use of Data Pipelines | Required | 5
Experience with API / Web Services (REST/SOAP) | Required | 3
Experience with complex event processing and real-time streaming data | Required | 3
Experience with deployment and management of data science tools and modules such as JupyterHub | Required | 3
Experience with ETL, data processing, analytics using languages such as Python, Java or R | Required | 3
Databricks Certified Data Engineer Professional | Highly desired |
Experience with Cloudera Data Platform | Highly desired | 3
Planning, coordinating, and monitoring project activities | Required | 16
Leading projects, ensuring they are in compliance with established standards/procedures | Required | 16
Bachelor's degree in IT or related field or equivalent experience | Required |

Additional job information:

- The target annual salary is $131,000-$141,000.
- The role is onsite at OCTO - 200 I Street, SE, Washington, DC 20003, so candidates must live in the DMV area.

Date information:

Start date: 10/01/2024

Job duration: 12 months (expected)

Hours per week: 40

Vendor information:

Work permits accepted:
- US citizen
- Green card
- Green card-EAD
- TN
- H-1B

Work permit: Other

W-2 employment at vendor: not required

Interview: Phone or video call

Required:
- Candidate name
- Email address
- Work authorization
- Candidate location
- CV
- Bill rate
- Vendor note
- Phone number
- Availability to start
- Availability to interview

Posted: Thu Sep 12 03:03:00 UTC 2024


