
free job posting and resume search at jobs.nvoids.com - add [email protected] to googlegroups - Remote, USA
Email: [email protected]
RATES FOR ALL POSITIONS: $60/HR MAX!

Hi,

Position: Data Architect with healthcare experience

Location: Remote

Job Description: The Information Architect (IA) is a visionary who translates business requirements into technical requirements and defines data standards and principles. The IA is responsible for the data architecture framework and data models, and for designing, deploying, and maintaining data warehouse and analytics solutions. The IA also provides a standard common business vocabulary and outlines high-level integrated designs that meet requirements and align with the enterprise analytics strategy.

Translate business requirements into data models that lay the foundation for data warehouses, reports and analytics

Define the data architecture framework, standards, and principles, including modeling, metadata, security, and reference data such as customers, tests, vendors, and employees.

Model, design, and deploy data warehouse solutions leveraging fact and dimension models (a minimal star-schema sketch follows this list)

Design and deploy cubes for multi-dimensional or tabular analysis

Create de-normalized data models and data marts in column-store databases

Create ER diagrams using industry-standard tools
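
As a purely illustrative aid for the fact/dimension modeling mentioned above, the sketch below creates a tiny star schema in SQLite from Python. The table and column names (dim_patient, fact_encounter, etc.) are hypothetical and not taken from this job description.

```python
# Minimal star-schema sketch (illustrative only): one fact table with two
# dimension tables, using SQLite so the example is self-contained.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension tables hold descriptive attributes.
cur.execute("""
    CREATE TABLE dim_patient (
        patient_key INTEGER PRIMARY KEY,
        patient_id  TEXT,
        birth_year  INTEGER,
        gender      TEXT
    )
""")
cur.execute("""
    CREATE TABLE dim_date (
        date_key  INTEGER PRIMARY KEY,
        full_date TEXT,
        year      INTEGER,
        month     INTEGER
    )
""")

# The fact table stores measures plus foreign keys to the dimensions.
cur.execute("""
    CREATE TABLE fact_encounter (
        encounter_key INTEGER PRIMARY KEY,
        patient_key   INTEGER REFERENCES dim_patient(patient_key),
        date_key      INTEGER REFERENCES dim_date(date_key),
        charge_amount REAL
    )
""")

# A typical analytic query joins the fact to its dimensions and aggregates.
cur.execute("""
    SELECT d.year, d.month, SUM(f.charge_amount)
    FROM fact_encounter f
    JOIN dim_date d ON d.date_key = f.date_key
    GROUP BY d.year, d.month
""")
conn.close()
```

A column-store or MPP warehouse would use its own DDL and distribution settings, but the fact-to-dimension join pattern stays the same.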

Qualifications:

14+ years of experience in data warehouse, ETL, and BI projects, with at least 5 years working as a Data Architect

Must have experience in the healthcare domain

Must have experience with HL7 & FHIR standards (a minimal FHIR Patient sketch follows this list)

Must have hands-on experience in RDBMS, SQL, OLAP, OLTP and MDX models

Must have hands-on experience in designing data marts and star schema models

Strong oral and written communication skills
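
For context on the HL7/FHIR requirement above, here is a minimal, hedged sketch of a FHIR Patient resource built as a plain Python dict. The field values are invented; real work would rely on a FHIR server and profile validation rather than this light check.

```python
# Minimal FHIR sketch (illustrative only): a bare-bones Patient resource and a
# very light sanity check. Values are made up, not from any real system.
import json

patient = {
    "resourceType": "Patient",
    "id": "example-patient-1",          # hypothetical identifier
    "name": [{"family": "Diaz", "given": ["Ana"]}],
    "gender": "female",
    "birthDate": "1980-04-12",
}


def looks_like_patient(resource: dict) -> bool:
    """Light sanity check; not a substitute for full FHIR profile validation."""
    return resource.get("resourceType") == "Patient" and "name" in resource


if __name__ == "__main__":
    assert looks_like_patient(patient)
    print(json.dumps(patient, indent=2))
```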

Position: Python/Django Developer

Location: Charlotte, NC (Hybrid)

Type: Contract

Job Description:

Vaccinated candidates only. A screenshot is required during a video call showing the candidate's face and ID proof.

Responsibilities:

Develop highly scalable applications in the Python/Django framework (a minimal view and unit-test sketch follows this list).

Create and deploy applications with various interconnected Azure components in an Azure environment.

Understand and enhance front-end applications using React JS, HTML5, and CSS3.

Identify and fix bottlenecks that may arise from inefficient code.

Apply knowledge of user authentication and authorization across multiple systems, servers, and environments.

Ensure that programs are written to the highest standards (e.g., Unit Tests) and technical specifications.

Exposure to Power BI tools is highly desirable.

Document the key aspects of the project.

Ability to collaborate on projects and work independently when required.
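
As a rough illustration of the Django and unit-testing responsibilities above, the sketch below configures a minimal standalone Django environment, defines a hypothetical `health` view, and exercises it with `RequestFactory`. A real project would use a settings module, URLconf, and `manage.py`; this is only a sketch under those assumptions.

```python
# Minimal Django view plus a unit-test-style check (illustrative only).
import django
from django.conf import settings

# Configure the bare minimum so the example runs standalone.
settings.configure(DEBUG=True, ALLOWED_HOSTS=["*"], USE_TZ=True)
django.setup()

from django.http import JsonResponse
from django.test import RequestFactory


def health(request):
    """Return a small JSON payload; a stand-in for a real application view."""
    return JsonResponse({"status": "ok"})


def test_health_returns_ok():
    """Unit-test-style check using Django's RequestFactory."""
    request = RequestFactory().get("/health/")
    response = health(request)
    assert response.status_code == 200
    assert b"ok" in response.content


if __name__ == "__main__":
    test_health_returns_ok()
    print("health view test passed")
```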

Qualifications:

5+ years of prior experience as a developer in the required technologies.

Solid organizational skills, and ability to multi-task across different projects.

Experience with Agile methodologies.

Skilled at independently researching topics using all means available to discover relevant information.

Ability to work in a team environment.

Excellent verbal and written communication skills.

Self-starter with the ability to multi-task and maintain momentum.

Comments for Suppliers: React.js (P4 - Expert)

Kindly find the job description below.

Role: Principal Cloud/SRE Engineer - Disaster Recovery

Location: Remote

Responsibilities:

Design and Implement DR Solutions: Develop comprehensive disaster recovery plans tailored to data-heavy applications and data engineering pipelines hosted on AWS. Ensure that all critical systems are recoverable within agreed Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO).

AWS Cloud Expertise: Utilize AWS-native services such as AWS Backup, S3, RDS, DynamoDB, EBS snapshots, and Terraform to build scalable and reliable backup and disaster recovery frameworks.

Data Engineering Pipeline DR: Collaborate with Data Engineering teams to set up failover solutions and backup strategies for ETL pipelines and streaming data architectures using services like EMR, Glue, Redshift, Kinesis, and Lambda.

Automated Backup and Restore Processes: Implement automated and scheduled backups, ensuring data integrity across large-scale environments. Develop and document failover strategies for continuous operation during disasters (an illustrative automation sketch follows this list).

Monitoring and Testing: Regularly test the disaster recovery plans, simulating failure scenarios to ensure operational readiness. Identify and resolve gaps through continuous testing, including full-scale failover tests.

Failover and Redundancy Strategies: Implement advanced redundancy strategies (such as multi-region failover, cross-region replication, and autoscaling) to maintain service availability and minimize downtime during disaster recovery events.

Disaster Recovery Playbooks: Create comprehensive playbooks with detailed, step-by-step recovery procedures for both engineering and operations teams, ensuring clear guidance during an actual disaster event.

Collaboration with Stakeholders: Work closely with development, operations, and data teams to ensure DR plans are integrated into broader application and data pipeline architectures. Ensure alignment with business continuity goals.

Cost Optimization: Ensure that DR solutions are cost-effective, leveraging AWS's pricing model while optimizing for storage and data replication.
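
To make the automated backup/restore responsibility concrete, here is a hedged Python (boto3) sketch that checks whether the newest RDS snapshot of a database falls within an assumed 4-hour RPO and starts a cross-region copy to a DR region. The region names, instance identifier, and RPO value are placeholders; a production setup would more likely use AWS Backup plans or Terraform-managed replication.

```python
# Illustrative DR automation sketch only; identifiers and RPO are hypothetical.
from datetime import datetime, timedelta, timezone

import boto3

PRIMARY_REGION = "us-east-1"      # assumed primary region
DR_REGION = "us-west-2"           # assumed disaster-recovery region
DB_INSTANCE_ID = "example-db"     # hypothetical database identifier
RPO = timedelta(hours=4)          # hypothetical recovery point objective


def latest_snapshot():
    """Return the newest available snapshot for the database, or None."""
    rds = boto3.client("rds", region_name=PRIMARY_REGION)
    snaps = rds.describe_db_snapshots(DBInstanceIdentifier=DB_INSTANCE_ID)["DBSnapshots"]
    available = [s for s in snaps if s["Status"] == "available"]
    return max(available, key=lambda s: s["SnapshotCreateTime"], default=None)


def copy_to_dr_region(snapshot):
    """Start a cross-region copy so a restore is possible if the primary region fails."""
    rds_dr = boto3.client("rds", region_name=DR_REGION)
    rds_dr.copy_db_snapshot(
        SourceDBSnapshotIdentifier=snapshot["DBSnapshotArn"],
        TargetDBSnapshotIdentifier="dr-" + snapshot["DBSnapshotIdentifier"].replace(":", "-"),
        SourceRegion=PRIMARY_REGION,
    )


if __name__ == "__main__":
    snap = latest_snapshot()
    if snap is None or datetime.now(timezone.utc) - snap["SnapshotCreateTime"] > RPO:
        print("WARNING: no snapshot within the RPO window; investigate backups")
    if snap is not None:
        copy_to_dr_region(snap)
```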

Qualifications:

11-15 years of experience in Cloud, SRE, or a similar role with a strong focus on disaster recovery for large-scale, data-heavy environments.

Proven experience in setting up and managing DR solutions on AWS, including in-depth knowledge of AWS services like S3, EC2, RDS, EBS, Redshift, Glue, and Terraform.

Expertise in handling data-intensive applications and creating resilient solutions for data pipelines, including ETL, streaming, and batch processing.

Strong understanding of high availability, resilience patterns, multi-region failover, and AWS fault-tolerant architectures.

Experience with data backup, archival strategies, and restoration processes for high-volume data systems.

Familiarity with automation tools like Terraform for DR environment setup and scaling.

Experience conducting disaster recovery drills, simulations, and root cause analyses to continuously improve DR effectiveness.

Strong skills in incident management and collaborating with cross-functional teams to mitigate risks and ensure system uptime.

Exceptional problem-solving skills and meticulous attention to detail.

Excellent leadership, communication, and interpersonal skills, with a proven ability to inspire and lead teams.

Role: Principal DevOps/SRE Engineer - Application-Centric Observability

Location: Remote

Kindly find the job description below.

Responsibilities:

Design and Implement Observability Framework: Develop and implement an end-to-end observability framework that extends beyond infrastructure to focus on application-specific metrics. Ensure comprehensive visibility into the performance of key business applications.

Datadog Integration and Enhancement: Leverage Datadog to instrument application-level monitoring, integrating golden signals (SLIs/SLOs) for performance, availability, and reliability.

Develop SLI/SLO Blueprints: Create and maintain SLI/SLO blueprints for key business applications, defining and measuring golden signals (latency, traffic, errors, saturation) to ensure optimal system health (a minimal error-budget sketch follows this list).

System Performance Optimization: Proactively monitor and assess application performance, identifying areas for improvement. Collaborate with development and SRE teams to implement performance optimization measures.

Dashboard and Visualization: Develop centralized dashboards with drill-down capabilities, providing real-time visibility into the health of applications and enabling quick identification of performance issues.

Business Journey Mapping: Work closely with business and engineering teams to map out critical business journeys and ensure that observability systems capture relevant metrics for each journey.

Gap Analysis and Continuous Improvement: Perform baseline measurements, identify gaps in existing monitoring systems, and work to close those gaps by integrating additional telemetry data.

Incident Response and Alerting: Define and implement alerting mechanisms based on SLI/SLO thresholds. Ensure the observability system can trigger appropriate alerts and escalations in case of performance degradation.

Collaboration with Development Teams: Work alongside development and data engineering teams to embed observability practices into the SDLC, ensuring that monitoring is an integral part of the application architecture from the ground up.

Knowledge Sharing: Provide training and guidance to teams on best practices for application observability, ensuring consistent adoption of tools and methodologies across the organization.
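
As a minimal sketch of the SLI/SLO blueprint idea referenced above, the following Python computes an availability SLI and error-budget consumption from request counts. The counts and the 99.9% target are made-up examples; in practice the inputs would come from Datadog metrics rather than hard-coded numbers.

```python
# Illustrative SLI/SLO error-budget math only; all numbers are hypothetical.


def error_budget_report(good_requests: int, total_requests: int, slo_target: float) -> dict:
    """Return the SLI, allowed failures, and fraction of error budget consumed."""
    sli = good_requests / total_requests                      # observed success ratio
    allowed_failures = (1.0 - slo_target) * total_requests    # budgeted bad requests
    actual_failures = total_requests - good_requests
    budget_consumed = actual_failures / allowed_failures if allowed_failures else float("inf")
    return {
        "sli": sli,
        "slo_target": slo_target,
        "allowed_failures": allowed_failures,
        "actual_failures": actual_failures,
        "error_budget_consumed": budget_consumed,  # >1.0 means the SLO is burned
    }


if __name__ == "__main__":
    # Example: 999,100 successful requests out of 1,000,000 against a 99.9% SLO.
    report = error_budget_report(good_requests=999_100, total_requests=1_000_000, slo_target=0.999)
    print(report)  # error_budget_consumed == 0.9, i.e. 90% of the budget used
```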

Qualifications:

11-15 years of hands-on experience in DevOps/SRE, with a strong focus on observability for large-scale, high-performance applications.

Expertise in using and enhancing observability tools like Datadog, including deep experience with metrics collection, alerting, and dashboard creation.

Proven ability to create and implement SLI/SLO frameworks to track application performance, availability, and reliability.

Strong understanding of monitoring application health across various services, containers, and microservices architectures.

Experience in business journey mapping and ensuring observability captures relevant metrics at every stage of the user experience.

Expertise in root cause analysis and providing insights into system performance through observability data.

Proficiency in programming/scripting languages (e.g., Python, Bash) for automation and tool integration.

Proven track record of driving performance improvements and maintaining system health through proactive monitoring and alerting.

Role: Identity Access Management Engineer (Saviynt)

Work location: Plano, TX (permanent remote)

Duration: 6-12+ months contract

We have an excellent opportunity for an Identity Access Management (IAM) Engineer to provide value to one of our major clients.

Responsibilities will include:

Transition existing IAM solutions, architecture, builds, and deployments implemented using Microsoft Identity Manager (MIM)

Lead the SailPoint and Saviynt project work and participate in project planning meetings

Lead the requirements track and use cases, working with the business and application teams

Lead the Design track, Testing track, Deployment and Release Plan

Define the application on-boarding plan with SailPoint and Saviynt to enable faster integration of applications

Design and develop the identity and accounts aggregation process to onboard accounts from the HR system, AD, Azure AD, and other applications onto SailPoint and Saviynt (a simplified correlation sketch follows this list)

Design and development of access review process and workflows

Design and development of Provisioning track for Accounts provisioning

Participate in all SailPoint and Saviynt deployment activities: connector configuration, custom rule development, workflow configuration and development, and third-party system integration.
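
As a simplified, tool-agnostic illustration of the account aggregation responsibility above (this is not Saviynt or SailPoint API code), the sketch below correlates hypothetical Active Directory accounts to HR identities on a shared employee ID, which is the kind of normalization an onboarding/aggregation process performs before loading accounts into an IGA platform.

```python
# Illustrative account-correlation sketch only; all records are made up.

hr_records = [
    {"employee_id": "E100", "name": "Ana Diaz", "department": "Finance"},
    {"employee_id": "E101", "name": "Raj Patel", "department": "IT"},
]

ad_accounts = [
    {"sAMAccountName": "adiaz", "employeeID": "E100", "enabled": True},
    {"sAMAccountName": "rpatel", "employeeID": "E101", "enabled": False},
]


def aggregate(hr_rows, ad_rows):
    """Correlate AD accounts to HR identities on the shared employee ID."""
    identities = {row["employee_id"]: {**row, "accounts": []} for row in hr_rows}
    for account in ad_rows:
        identity = identities.get(account["employeeID"])
        if identity is not None:
            identity["accounts"].append(account)
        else:
            print(f"Orphan account (no HR match): {account['sAMAccountName']}")
    return identities


if __name__ == "__main__":
    for identity in aggregate(hr_records, ad_accounts).values():
        print(identity["employee_id"], [a["sAMAccountName"] for a in identity["accounts"]])
```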

Requirements:

Undergraduate degree and 10+ years relevant identity access management solutions design experience, or equivalent combination of education and work experience.

5 years of experience developing Saviynt Solution models across business, process and technical viewpoints.

Must have successfully delivered at least three Saviynt implementation projects

Experience developing custom connectors between Saviynt and target applications using Java technologies.

NEED ANSWERS TO ALL QUESTIONS BELOW

1. Work visa (submit copy):

2. Current Location: 

3. Availability:

4. Bill rate:

5. Skype ID: 

6. Direct contact number:

Thanks!

FOR CONSIDERATION: PLEASE INCLUDE THE CANDIDATE'S NAME, RATE, AND DIRECT CONTACT NUMBER:

DON MANDELL 

DMANDELL@CALSIERRA.COM
