
Dr. Hagedorn - Azure Cloud Architect
[email protected]
Location: Remote, USA
Relocation: Remote
Visa: USC
Dr. Hagedorn
C2C $70/hr.
REMOTE ONLY, NO HYBRID, C2C ONLY
AZURE/AWS ARCHITECT DATABRICKS SYNAPSE HANDS-ON
GOVERNMENT AND BANK LEVEL 5 SECURITY CLEARANCES
CONTACT BY EMAIL ONLY
AWS, CLOUDERA CDH/HORTONWORKS HDP
TERADATA/DB2/ORACLE GLOBAL PROFESSIONAL SERVICES

Immediate Start
Times for Interview 2-5 PM EASTERN ANY DAY OF THE WEEK
PLEASE SEND A CALENDAR INVITE WITH THE JD TO SCHEDULE ANY CALLS, even if just a 2-minute call.
Admin: Megan Bui [email protected]

Email: [email protected]
Skype: Dr.RichardHagedorn
LINKEDIN: https://www.linkedin.com/in/dr-richard-hagedorn-19967716/
My Company web page: www.alphacontech.com

Highest Degree:
PhD Cal Southern University Irvine, Clinical Psychology, AI/ML 2023
PhD WALDEN UNIVERSITY Dissertation 2019
MS WALDEN UNIVERSITY 2013
BS University of Washington, 1975
Certifications:
MICROSOFT CERTIFIED AZURE SOLUTION ARCHITECT
MICROSOFT CERTIFIED AZURE DEVELOPER ASSOCIATE
AWS MASTER SOLUTION ARCHITECT CERTIFICATION
TERADATA CERTIFICATION & PROFESSIONAL SERVICES
IBM CERTIFICATION & PROFESSIONAL SERVICES
ORACLE CERTIFICATION

AZURE ARCHITECTURE & SECURITY ADMIN for Microsoft SaaS, PaaS & IaaS
AZURE ACTIVE DIRECTORY both basic and premium tiers
Azure roles, groups, and policies
Azure RBAC for role-based access control
Azure Link to custom Domains
Azure Identity Management (IAM)
Azure single sign-on capabilities (SSO)
Azure multi-factor authentication capabilities (MFA)
Azure Security Logging
Azure Security Monitoring with machine learning (ML & AI) with Workbooks
Azure Defender and Sentinel
Azure Virtual Networks (VNet) & subnets, VPN, VPC, tenant, spoke/hub designs
Azure connectivity between networks, Region, and Subscription
Azure hybrid identities
StreamSets Control Hub (resilient pipelines) for data ingestion into the data lake (HDFS/Hive)
Azure Databricks, best practices for clusters and Python notebooks with connections to Power BI and Tableau (see the PySpark sketch below)
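Below is a minimal, hedged PySpark sketch of the Databricks notebook pattern noted above; the storage path, database/table, and column names are illustrative assumptions only, not taken from a specific engagement.

# Minimal Databricks-style PySpark sketch: read raw files from a data-lake
# path, add an ingest date, and publish a Delta table that Power BI or
# Tableau can then query through the cluster / SQL endpoint.
# The path and table names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

raw_path = "abfss://raw@examplelake.dfs.core.windows.net/events/"  # placeholder ADLS path

events = (
    spark.read.format("parquet")
    .load(raw_path)
    .withColumn("ingest_date", F.current_date())
)

# Write a managed Delta table for downstream BI consumption.
(events.write.format("delta")
       .mode("overwrite")
       .saveAsTable("analytics.events_curated"))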
AWS SOLUTION
IAM security through AWS Identity and Access Management (IAM)
RANGER - data security across the Hadoop platform.
Name Node Heap Memory Management
S3 Buckets, Simple Storage Service at CAS and State of MN
Data Lake Arch/Design DMS Data Migration Services and Direct Connect
HDFS & HIVE ingest into S3 buckets (GLUE ETL for the data lake) and into Redshift (warehouse); see the boto3 sketch after this list
Redshift 4+ years at Chemical Abstracts Service (CAS) & State of MN
EMR Elastic Map Reduce and EC2 Elastic Compute Cloud Cluster
VPC Virtual Private Cloud Utilization
RDS Relational Database Service, Metastore
ELB Elastic Load Balancing
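A hedged boto3 sketch of the S3-ingest plus Glue-ETL step referenced in the list above; the bucket, key, and Glue job names are hypothetical placeholders, not actual client resources.

# Land a raw extract in the data-lake bucket, then kick off the Glue ETL job
# that transforms it for the warehouse. Names are placeholders.
import boto3

s3 = boto3.client("s3")
glue = boto3.client("glue")

s3.upload_file("daily_extract.csv", "example-datalake-raw", "hive/daily_extract.csv")

run = glue.start_job_run(JobName="raw-to-redshift-etl")
print("Started Glue job run:", run["JobRunId"])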
AWS Data Pipeline/Data Lake/Enterprise Warehouse Architect Functions Performed
Worked directly with end users, business stakeholders, and principal engineers to align the data team's strategy with company goals
Led the data engineering team to design and implement data stores, pipelines, ETL routines, and API access
Led data product teams to consume data and make it available to both internal and external customers for analysis, troubleshooting, BI, predictive use cases, etc.
Designed and implemented a data warehouse for large-scale, high-volume data loads for customer action and churn analysis
Designed and implemented features built on machine learning, such as customer behavior, churn, next best action, next best offer, etc.
BIG DATA
Cloudera CDH 5.7, 5.8, 5.9, 5.10 & 5.15 Architect & Administration
Cloudera Navigator 2.9
Cloudera Director (Installation in cloud AWS)
KERBEROS Specialist
Hortonworks 2.0 Architect certified
MapR Administration
Hadoop/MapR certified
Enterprise NoSQL
SPSS v23 Scientific Data
UNIX ADMINISTRATION- SCRIPTS/CRONTABS

Senior Azure Security Specialist with additional Big Data platform skills (12 years overall): AWS (6 yrs.), Cloudera (4 yrs.), and Azure (4 yrs.) Big Data architecture/design, covering not only data lake design and development but also a total solution architecture approach to Big Data implementation. 10-plus years of experience in HADOOP, both development and architecture, with 6-plus years as an architect. My initial work included work at Berkeley in 2003 on the initial release of Hadoop, and next with AWS (Amazon Web Services), Cloudera Navigator 2.9, Cloudera CDH 5.7-5.10, Impala/KUDU, Cloudera Director, and Hortonworks/Azure.
Tooling experience: TALEND MDM 4 yrs; ETL processes including scripting 14+ years; Zookeeper, HMaster, HBase database, HFile; Apache Flume (log files) ingest 2 years; Oozie (scheduled workflows) 3 years; Sqoop (data transfers) 3 years; Python (2.7 & 3.6 w/SPSS Statistics 23) 5 years; dev tools such as Spark (with performance tuning & caching) 2 years; HBase 4 years; Pig 4 years; analysis with Drill (SQL) 2 years, Hive (HQL) 4 years, Mahout (clustering, classification, collaborative filtering) 6 mos.; additionally C & C++ and shell scripting 5 years. I have extensive use of MDM tools and Erwin, and additionally Power Designer and IBM's ER tool.
I have extensive work on Apache Hadoop, a highly scalable storage platform designed to process exceptionally large data sets across hundreds to thousands of computing nodes operating in parallel. Hadoop provides a cost-effective storage solution on commodity hardware for large data volumes with no format requirements. Additionally, I have extensive work with MapReduce, the programming paradigm that allows for this massive scalability and is the heart of Hadoop. Note that the term MapReduce refers to two separate and distinct tasks that Hadoop programs perform. Hadoop has two main components: HDFS and YARN.
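As a concrete illustration of the two MapReduce tasks mentioned above, here is a minimal Hadoop Streaming word-count pair in Python; the file names and the example streaming invocation are generic, not from a specific project.

# mapper.py -- the map task: emit "word<TAB>1" for every token on stdin.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")

# reducer.py -- the reduce task: Hadoop sorts mapper output by key, so each
# word's counts arrive together and can be summed in a single pass.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t")
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)
if current_word is not None:
    print(f"{current_word}\t{current_count}")

# Example invocation (paths are placeholders):
# hadoop jar hadoop-streaming.jar -input /data/in -output /data/out \
#   -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py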
Significant experience with several of the following technologies: Teradata, Tableau, Cognos, Oracle, SAS, Hadoop, Hive, SQL Server, DB2, SSIS, Essbase, and Microsoft Analysis Services
Experience & Skills
Expert level solution architecture skills in the following:
1. Databricks & Synapse
2. Data Governance
3. Data Security
4. Big Data
5. Data Quality and Recovery
Expert level skills with hands-on experience in the following:
1. Migrate on-prem databases to AWS S3 with AWS Database Migration Service (DMS)
2. Set up AWS Glue to prepare data for analysis through automated extract, transform, and load (ETL) processes and load it to the enterprise warehouse on EMR EC2 instances
3. Set up AWS Kinesis to process hundreds of terabytes per hour of high-volume streaming data from various sources
4. Develop event-driven data processing pipeline code and execute it on AWS Lambda
5. Develop interactive queries with AWS Athena to analyze the data in AWS S3 (see the sketch after this list)
6. Set up AWS Elastic MapReduce (EMR) EC2 instances with Hadoop installed and configured on them, to process big data and perform analysis
7. Build and train machine learning models for predictive or analytical applications in AWS SageMaker, with experience creating notebook instances, preparing data, training models from the data, deploying them, and evaluating model performance
8. Set up a data warehouse with Amazon Redshift, with experience creating Redshift clusters, uploading data sets, and performing data analysis queries
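A hedged boto3 sketch of the Athena interactive-query step in item 5 above; the database, table, and S3 result location are hypothetical placeholders.

# Run an Athena query against data in S3, poll for completion, print rows.
import time
import boto3

athena = boto3.client("athena")

query = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS n FROM raw_events GROUP BY status",
    QueryExecutionContext={"Database": "datalake_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
qid = query["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])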
Years of Expertise
9+ years of Azure architecture/admin experience with Databricks & Synapse
5+ years of experience working with AWS DMS, S3, GLUE, KINESIS, LAMBDA, ATHENA

Professional Experience:
CDC
SEP 2022 to JAN 2023 (Remote)
AZURE CHIEF SOLUTION CLOUD ARCHITECT
MICROSOFT CYBERSECURITY ENGINEER
The chief architect for the CDC's migration from on-premises to the Microsoft Azure Cloud. Provided the initial analysis for the migration effort, including PowerPoint presentations and recommendations. Provided detailed architecture analysis for this effort, including high-level architecture (HLA) and detailed directed data flow diagrams (DFD). Completed discovery and analysis and provided detailed reporting with signoff at every phase of implementation for the CDC. These were the primary deliverables for the project.

Optum Health Care (Remote)
NOV 2020 to SEP 2022
LEAD AZURE ARCHITECT & SECURITY ADMINISTRATOR
I was the lead Azure Architect and Security Admin for this healthcare provider. My team included both onshore and offshore components, and the work included providing the security architecture for the migration of existing OHI applications from on-prem to the Azure cloud, along with extensive Cosmos DB monitoring metrics and threshold alerts. Considerations included RBAC security architecture and Active Directory architecture. Also of concern were the implementation considerations for firewalls, virtual networks, subnets, and IP access involving both API and UI web interfaces. Architectural designs and documentation standards were provided for non-production environments (NPE). Utilized Kubernetes extensively for migration, development, and administration; it is a container orchestration system for Docker containers that is more extensive than Docker Swarm and is meant to coordinate clusters of nodes at scale in production in an efficient manner.
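A hedged sketch with the official Kubernetes Python client showing the kind of cluster inspection used in the migration work above; the namespace name is a hypothetical placeholder.

# Inspect Deployments and Pods in one namespace of the cluster.
from kubernetes import client, config

config.load_kube_config()  # reads the local kubeconfig (e.g., for an AKS cluster)

apps = client.AppsV1Api()
core = client.CoreV1Api()

for dep in apps.list_namespaced_deployment(namespace="ohi-npe").items:
    ready = dep.status.ready_replicas or 0
    print(f"{dep.metadata.name}: {ready}/{dep.spec.replicas} replicas ready")

for pod in core.list_namespaced_pod(namespace="ohi-npe").items:
    print(pod.metadata.name, pod.status.phase)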

United Bank of Switzerland (UBS) GLOBAL (Remote)
NOV 2019 to OCT 2020
LEAD AZURE SECURITY ENGINEER COSMOS DB
As the lead Cosmos DB engineer, I was responsible for the creation and training of the Azure Cosmos DB environment while at UBS. UBS had begun the transition from on-prem to the cloud for all applications, and I was the first and primary consultant hired to initiate the successful move. I set up and worked with clients at UBS to access Azure Cosmos DB worldwide for the Swiss bank, including regional operations in Switzerland, the UK, Germany, Hong Kong, Australia, China, and the US (Nashville, TN). I set up and tested data ingestion to all the database types available in Cosmos DB, which included Core (SQL), the MongoDB API, Cassandra, Azure Table, and Gremlin for graph data, through the Azure subscription and resource group. Azure Cosmos DB is a globally distributed database service; it allowed me to manage data kept in data centers scattered throughout the world and provides the tools needed to scale both the global distribution pattern and computational resources, delivered through Azure microservices. The Azure cloud platform comprises more than 200 products and cloud services, and I utilized many of these services as a lead engineer at UBS to build, run, and manage applications across multiple clouds, on-premises, and at the edge, with the expansive list of tools and frameworks provided by Azure.
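A hedged azure-cosmos (Core/SQL API) sketch of the globally distributed pattern described above; the endpoint, key, database, container, and item fields are placeholders, not UBS resources.

# Create a database/container if missing, upsert a document, and query it.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://example-account.documents.azure.com:443/",
                      credential="<account-key>")

db = client.create_database_if_not_exists("trading")
container = db.create_container_if_not_exists(
    id="positions",
    partition_key=PartitionKey(path="/region"),
)

# Writes go to the account's write region; reads can be served from the nearest replica.
container.upsert_item({"id": "pos-001", "region": "CH", "desk": "fx", "qty": 100})

for item in container.query_items(
    "SELECT * FROM c WHERE c.region = 'CH'",
    enable_cross_partition_query=True,
):
    print(item["id"], item["qty"])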
Additionally, I provided the operational runbook for Cosmos DB and the security documentation for control and auditing of Azure Cosmos DB, in addition to providing general Azure administration and support in the new cloud environment for UBS. Also, I administered BigFix, which allowed for the tracking of releases, versions, and packages of PostgreSQL on on-prem hardware.

Florida Power & Light (Ft Lauderdale)
DEC 2018 to NOV 2019
SENIOR AZURE SECURITY SOLUTION ARCHITECT
On-prem to the cloud: POC for both AZURE and AWS (with additional analysis of Google Cloud's fit for the project). Responsible for architecting the recommendations (pros & cons) for both the AZURE and AWS solutions, producing architectural diagrams, high-level solution architecture, conceptual, logical & physical models with ERwin, Directed Data Flow (DDF) for application architecture, and IaaS, SaaS, and PaaS architecture. This was for the EDP/SSUP involving the on-prem Oracle Work Request tables (to be followed by Customer tables and Labor tables as the identified, modeled subject matter areas), considering the strengths of both AZURE and AWS Cloud for the company's first phase, Work Request processing. This involved setting up Virtual Machines (VMs) in AZURE, establishing encryption, container blobs (S3 buckets for AWS), and the Aurora database (PostgreSQL/Citus for structured data & MongoDB/Percona for unstructured data) as the data lake raw-data repositories for both AZURE and AWS. I have extensive use of Data Factory and Redbrick. This included delivery of the three primary element component details for IaaS, SaaS, and PaaS for the new cloud solution architecture.
The architecture migration path included multiple source entities for ELT raw-data ingestion via AZURE and AWS Data Migration Services (DMS) processes to Blob Containers (AZURE), S3 (AWS), and Aurora (raw data). Lambda was utilized for data cleansing and data validation, with messaging via AZURE and AWS SNS (test use cases) for load notification and validation reporting. Alation was additionally employed as the data catalog and metadata repository. Performance tuning on Aurora PostgreSQL via index modifications and geographic sharding yielded sub-second (millisecond) latencies for the validation queries. APIs were established for both AZURE and AWS Aurora to microservices and Apigee (I obtained an average of 435 ms latency for all APIs), with initial bulk loads followed by incremental and CDC implementation (I obtained a CDC latency of 5 seconds for table loads). DevOps was established with EC2 instances, ECS/ECR, and Docker setup (all access authorities established for the offshore dev team), implemented real-time (CI/CD) with a Jenkins pipeline. Guidelines were established for the S3 buckets to implement migration paths to production, such as naming standards for S3 buckets. I developed conceptual, logical, and physical models utilizing Erwin and used Confluence as the documentation repository for the project. I functioned as the SCRUM Master/AWS Solution Architect for the team of 14 offshore developers, with hands-on development, governance, and business analysis.

VERTEX INC, AWS-HDP SUPPORT (Remote)
NOV 2018 to DEC 2018
AZURE SECURITY SOLUTION ARCHITECTURE
Onsite Philadelphia
Evaluation of AZURE and AWS cloud capabilities was the primary architectural goal here. This included my recommendations and hands-on problem resolution, including monitoring and responding to cluster issues over 18 clusters ranging from 7-10 nodes. My activities included establishing AZURE VM development clusters, POC clusters, stage clusters, UAT clusters, utility clusters, and production clusters. RANGER installation and administration for data security across the Hadoop platform; Ranger can support a true data lake architecture, and I have extensive use of Ranger, a framework to enable, monitor, and manage comprehensive data security across the Hadoop platform. Resolved issues such as authorizations to AWS S3 buckets and Hive insert/overwrite issues in production. EMR with Service Catalog deployments, and identification of cluster version drift (to make sure versions of Hive UDF, Red Hat, etc. were at the same level between clusters). I have extensive use of Data Factory and Redbrick. I completed documentation to suggest methods to ensure stability between clusters and documentation to identify all versioning across all clusters. Provided 24/7 support as the only Hadoop Admin for Vertex. Worked with AMIs for release verification via a shell script (sample script is available). Completed the project and turned it over to the offshore Chania team.

STATE OF MINNESOTA, DEPT. OF HUMAN SERVICES (Minneapolis)
JUN 2017 to NOV 2018
AZURE/AWS SECURITY ADMINISTRATION for Infrastructure, SOLUTION ARCHITECTURE, DESIGN, PROGRAMMING
Onsite St Paul, Minnesota
I completed migration from on-prem Oracle of the audit trail data for the last 10 years. I considered pros and cons for both AZURE and AWS as recommendations for migration to the cloud from the Cloudera infrastructure, after initial migration of the Oracle audit trail data to Cloudera clusters which I created. Design and architecture of the state's cloud data lake solution in both AZURE and AWS; for AWS, Redshift, EMR, and S3 buckets served as the primary tools for migration and integration. Also, I designed the Cloudera data lake infrastructure. The vision with Ranger was to provide comprehensive security services across the Apache Hadoop ecosystem; with the advent of Apache YARN, the Hadoop platform can now support a true data lake architecture. I completed installation and administration of AWS, AZURE, and Cloudera 5.11.1 for production and development clusters. Involvement with audit reporting at the state and support for application programming, including the installation and utilization of Eclipse and associated plugin packages (SCALA, Python, Java, R). SQL/SSIS Data Audit & Compliance Project, where I was the architect, designer, analyst, capacity planner, programmer, and production implementer: 100+ terabytes of all state audit and compliance current and historical data moved from Oracle Exadata databases to HDFS Hadoop files for data analysis and reporting. I utilized Azure with extensive use of Data Factory and Redbrick. The CDH 5.11.1 production cluster was established with 12 nodes and full services provided. This included analysis for the Teradata Connector in implementation of Sqoop1 and JDBC drivers. Utilized Copy2Hadoop (data pump files for Oracle data type control) to JOIN the large audit tables and consolidate them into one HIVE table. Also, I configured real-time updates from 12 additional tables.
I provided solutions for performance issues and resource utilization. I was responsible for system operations, networking, operating systems, and storage, and have a strong knowledge of computer hardware and operations in the state's complex network.
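A hedged PyHive sketch of querying the consolidated audit HIVE table described above; the hostname, credentials, and table/column names are placeholders.

# Connect to HiveServer2 and run an example compliance query.
from pyhive import hive

conn = hive.Connection(host="hive-gateway.example.local", port=10000,
                       username="audit_svc", database="audit")
cursor = conn.cursor()

cursor.execute("""
    SELECT user_id, COUNT(*) AS access_events
    FROM audit_trail_consolidated
    WHERE event_date >= '2018-01-01'
    GROUP BY user_id
    ORDER BY access_events DESC
    LIMIT 20
""")
for user_id, events in cursor.fetchall():
    print(user_id, events)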
Amazon Redshift Cluster Management.
Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. We first began with a few hundred gigabytes of data and scaled to a petabyte. This enabled us to use the data to acquire new insights and provide better customer service.
At the state I provided:
Responsible for implementation and ongoing administration of Hadoop infrastructure.
Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments.
Working with data delivery teams to setup new Hadoop users. This job includes setting up Linux users, setting up Kerberos principals and testing HDFS, Hive, Pig and MapReduce access for the new users.
Cluster maintenance as well as creation and removal of nodes using Ganglia, Nagios, Cloudera Manager Enterprise, Dell OpenManage, and other tools.
Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
Screen Hadoop cluster job performances and capacity planning
Monitor Hadoop cluster connectivity and security
Manage and review Hadoop log files.
File system management and monitoring.
Ingest SQL Server SSIS
HDFS support and maintenance.
Diligently teaming with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability.
Collaborating with application teams to install operating system and Hadoop updates, patches, version upgrades when required.
Point of Contact for Vendor escalation
RedShift 2+ years at Chemical Abstract Services & State of MN
Utilized Amazon Redshift clusters. A cluster is a set of nodes consisting of a leader node and one or more compute nodes; the type and number of compute nodes needed depend on the size of your data, the number of queries you will execute, and the query execution performance that you need.
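A hedged boto3 sketch of provisioning a small Redshift cluster of the kind described above; the identifiers, node type, and credentials are placeholders.

# Create a two-node Redshift cluster and wait until it is available.
import boto3

redshift = boto3.client("redshift")

redshift.create_cluster(
    ClusterIdentifier="audit-warehouse",
    NodeType="dc2.large",
    NumberOfNodes=2,                     # compute nodes; the leader node is managed for you
    MasterUsername="admin_user",
    MasterUserPassword="<strong-password>",
    DBName="analytics",
    PubliclyAccessible=False,
)

waiter = redshift.get_waiter("cluster_available")
waiter.wait(ClusterIdentifier="audit-warehouse")
print("Cluster ready; data can now be loaded, e.g., via COPY from S3")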
EMR Elastic Map Reduce 2+ years
Utilized EMR, which provides a managed Hadoop framework that made it easy, fast, and cost-effective at CAS and the State of MN to process vast amounts of data across dynamically scalable Amazon EC2 instances. We ran distributed frameworks such as Apache Spark, HBase, Presto, and Flink in Amazon EMR and interacted with data in other AWS data stores such as S3.
S3 Buckets, Simple Storage Service at CAS and State of MN
Utilized S3 buckets to provide comprehensive security and compliance capabilities that meet even the most stringent regulatory requirements while managing cost. It allowed us to run powerful analytics directly on data at rest in S3.
Extensive use of Data Factory and Redbrick in AZURE POC.

AMERICAN CHEMICAL SOCIETY (ACS) (Onsite Columbus, OH)
CHEMICAL ABSTRACTS SERVICE (CAS)
MAR 2016 to JUN 2017
SECURITY SOLUTION ARCHITECTURE
POC for AWS, AZURE SECURITY & CLOUDERA ADMIN & SECURITY
Responsibilities:
AZURE/AWS POC: Design and architecture of the CAS's cloud data lake solution in AWS
Use of AWS with some Lambda and Amazon S3
RANGER - data security across the Hadoop platform.
Utilized Redshift, EMR and S3 buckets
Cloudera administrator version 4-5.10 and Kerberos 2.0 security administrator at CAS working with a small team of Hadoop administrators.
TALEND MDM for data lineage and master data management
Extensive SPSS
I mentored and assisted the team with Cloudera administration and Cloudera Navigator.
Cloud installation utilizing Cloudera Director with AWS provider.
Extensive use of Data Factory and Redbrick in Azure POC.
Performance and Tuning:
Assisted with establishment of queue architecture through the Fair Scheduler.
Tuning MapReduce jobs for enhanced throughput (Java Heap Adjustments)
Block Size Adjustments
Spark Performance Adjustments
Ingest SQL Server SSIS
I assisted with setup and administration of Kerberos to allow trusted, secure communications between trusted entities. Hadoop Security, Kerberos & Sentry together: for Hadoop operators in finance, government, healthcare, and other highly regulated industries to enable access to sensitive data under proper compliance, each of four functional requirements must be achieved:
1. Perimeter Security: guarding access to the cluster through network security, firewalls, and, ultimately, authentication to confirm user identities
2. Data Security: protecting the data in the cluster from unauthorized visibility through masking and encryption, both at rest and in transit
3. Access Security: defining what authenticated users and applications can do with the data in the cluster through filesystem ACLs and fine-grained authorization
4. Visibility: reporting on the origins of data and on data usage through centralized auditing and lineage capabilities
Requirements 1 and 2 are now addressed through Kerberos authentication, encryption, and masking. Cloudera Navigator supports requirement 4 via centralized auditing for files, records, and metadata. But requirement 3, access security, had been largely unaddressed until Sentry.

GILEAD, (Foster City CA)
JAN 2016 to MAR 2016
SOLUTION ARCHITECTURE
POC FOR AWS/AZURE Data Lake Architect, Onsite
Responsibilities:
Lead Architect for the POC of the AZURE & AWS cloud architecture & data model at Gilead, which provided a framework to assess the HADOOP solution architecture upgrade as a staging-area repository for unstructured, semi-structured, and structured data. Installed and developed a 5-node cluster for the POC on AZURE, AWS & Hortonworks. SPSS data scientist calculations for multivariate linear regression analysis. This work was done with the primary target in mind and in regard to the medical model, which is specifically concerned with issues regarding signal refinement and positive results. Positive results are defined as when an association is detected between a medical product and an adverse outcome that exceeds a pre-specified threshold in the direction of increased risk.
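For illustration, a minimal Python (scikit-learn) sketch of a multivariate linear regression comparable to the SPSS analysis mentioned above; this is not the SPSS workflow itself, and the synthetic data and variable roles are illustrative only.

# Fit a multivariate linear regression on synthetic data and report the fit.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

X = rng.normal(size=(500, 3))      # e.g., dose, age, exposure duration (synthetic)
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(scale=0.5, size=500)

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_)
print("intercept:", model.intercept_)
print("R^2:", model.score(X, y))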
Talend was utilized on several projects to simplify and automate big data integration with graphical tools and wizards that generate native code. This allowed the teams to start working with Apache Hadoop, Apache Spark, Spark Streaming, and NoSQL databases (MongoDB/Percona for sharding). The Talend Big Data Integration platform was utilized to deliver high-scale, in-memory, fast data processing as part of the Talend Data Fabric solution, allowing the project's enterprise systems to bring more data into real-time decisions.
Selection bias is a distortion in an effect estimate due to the way the study sample is selected from the source population. To avoid case selection bias, the cases (outcomes) that contributed to a safety signal must represent cases in the source population.

OPTUM HEALTH CARE, Remote
JULY 2015 to JAN 2016
ON-PREM & AZURE DATA LAKE Analysis/Architecture
Responsibilities:
Data Lake work included development with AWS and was completed on a 9-node clustered data lake architecture: primarily unstructured and semi-structured data with utilization of Sqoop, MongoDB/Percona, PostgreSQL/Citus, Spark (Hive, Python & Java), Flume, Cloudera Search, Talend as the MDM repository, and Apache Sentry for authorization of Impala and Hive access. Lead Hadoop Architect for the de-normalization project at Optum Corporation, which involved the simplification of 3rd normal form tables to enhance performance and usability for the end-user business community. Extensive consideration was given to Hadoop as the staging-area repository for ingesting source data, with the thought that this data could then be identified and used for marketing analysis. Additionally of interest was logging information, which might potentially be mined to determine better monitoring of issues related to anomalies in the data. Erwin was a primary tool used for the de-normalization/simplification project; both Logical Data Models (LDM) and Physical Data Models (PDM) were generated on all platforms, through development, to UAT, and finally to production. Use of the ALM (Application Lifecycle Management) tool greatly assisted in the reporting and tracking of the project fixes as required, and the Rally tool allowed for tracking and timely reporting of the deliverable products to the business. Involved the business users at all points of decision making and signoff processes; projects were delivered on time and on budget. MongoDB: one of the most popular document stores, it is a document-oriented database. All data in MongoDB is treated in JSON/BSON format. It is a schema-less database that scales over terabytes of data and supports master-slave replication for making multiple copies of data over servers, making the integration of data in certain types of applications easier and faster.
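A hedged pymongo sketch of the document-store pattern described above; the connection string, database, collection, and field names are placeholders.

# Insert a schema-less document, index it, and query it back.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
db = client["staging_lake"]
events = db["member_events"]

events.insert_one({"member_id": "M1001", "event": "claim_submitted", "amount": 250.0})

events.create_index("member_id")
for doc in events.find({"member_id": "M1001"}):
    print(doc["event"], doc.get("amount"))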

Boeing Aviation, Space & Security Systems, (Newport Beach Ca)
FEB 2015 to JULY 2015
Solution Architect, Senior Modeler,
MICROSOFT AZURE, Onsite/Remote
Responsibilities:
AZURE at Boeing Space and Security. They needed skilled modelers, data architects, and HADOOP/Oracle implementers to transition systems from Oracle and other System of Record (SOR) data to a data lake (PostgreSQL/Citus). Work included development with Cloudera CDH and was completed on a 6-node clustered data lake architecture, with Python coding. Ingested unstructured and semi-structured data with utilization of Sqoop, HBase, Spark (Hive, Python & Java), Flume, and the Talend platform in the cloud, with security for the data lake via Apache Sentry. This implementation required interfacing with end users and business units to migrate data and data attributes to the newly modeled enterprise architecture. This effort involved extensive user interaction to determine the correct mappings for attributes and their datatypes, with metadata information passed to the new staging areas and on to the base, 3rd normal form table architecture.
This activity consisted of interfaces, email, and formal meetings to establish the correct lineage of data through its initial attribute discovery level and on through the Agile development process to ensure data integrity. As funding was cut at Boeing, the work was concluded and final turnover meetings took place.

Freescale (Austin/Phoenix)
Mar 2014 to Feb 2015
Solution Architecture
Senior Teradata & Big Data Architect & Data Lake, Onsite
Responsibilities:
This project utilized AWS and was an extensive evaluation of the HADOOP systems and infrastructure at Freescale (FSL), providing detailed evaluation and recommendations for the modeling environment, the current modeling architecture at FSL, and the EDW. Of concern on this project were the scalability and reliability of daily operations; these issues were among the most significant requirements, along with data quality (directly from the originating source) and capability for extremely high performance, which is accomplished with the MapR distribution for Hadoop.
Additionally, we investigated Cloudera CDH 5.6 with a POC completed on a 2 node clustered Data Lake architecture as a POC for Freescale. Ingested unstructured and semi-structured data with utilization of Sqoop, Spark (Hive, Python & Java), and Flume and Talend platform in the cloud.
TALEND administration for big data Data Lake.
Rather than employ HBase, the authentication system uses MapR-DB, a NoSQL database that supports the HBase API and is part of MapR. It meets strict availability requirements, provides robustness in the face of machine failure, operates across multiple datacenters, and delivers sub-second performance.
Provided extensive executive-level reporting regarding findings and recommendations. Implemented and evaluated additional tools such as PDCR, MDS, MDM, ANTANASUITE, APPFLUENT, and HADOOP 14.10 functions and features, and migrated from Erwin & Model Mart v7.2 to 8.2 and then finally to v9.5. I functioned as the lead consultant for the 6-month effort at FSL, assuming responsibility for delivery and executive status reporting regarding all aspects of the project.
Provided numerous PowerPoint presentations, including delivery of the scorecard evaluation of the as-is ongoing modeling, DBA, and support activities at FSL. Identified areas to improve upon, especially in the modeling area, and rendered assistance with the BI semantic-layer performance tuning effort and the MDS glossary deliverable for metadata. Designed and assisted with the development of the executive dashboard reporting process.
Recommended and provided information regarding three new primary tools at FSL: Appfluent, Antanasuite, and MDS/MDM (HADOOP). These tools were recommended as part of the agile improvement process to increase productivity and ROI, estimated to yield a 73% overall realized benefit.

ADDITIONAL ENGAGEMENTS:
Johnson & Johnson, Raritan NJ, JUN 2013 to MAR 2014: Senior Teradata Architect & Big Data (ON-PREM and AZURE) Data Lake Analysis, Hortonworks
IBM, Raleigh NC, SEPT 2011 to JUN 2013: Senior Hadoop & Data Lake Architect
Navistar, Chicago IL, APR 2012 to OCT 2012 (overlap w/IBM): Senior Teradata/Hadoop Architect
Dept. of Defense, Department of the Army, Huntsville AL, AESIP (Army Enterprise System Integration Program), DEC 2010 to AUG 2011: Senior Teradata Architect, Modeling, Performance
Dept. of Defense, Navair Logistics, River, Maryland, JUL 2010 to DEC 2010: Senior Teradata MDM Architect, Modeling, Performance
Bank of America, Corp Center, Charlotte NC, APR 2009 to MAY 2010: Senior Teradata/Hadoop Evaluation Architect, Modeling, Performance
Verizon Communications, Washington DC, JUN 2008 to APR 2009: Senior Teradata/Hadoop Architect
EDS International, East Coast & Global Operations, SF California, SEP 2006 to JUN 2008: DB2/Teradata Architect & Modeler
University of California Berkeley, SF Bay Area, NOV 2005 to AUG 2006: DB2/Teradata/Hadoop (initial studies) Architect & Modeling, IBM
Accenture, San Francisco CA, APR 2005 to OCT 2005: DB2 Architect, Modeling & Administration
Mercury Interactive, Sunnyvale CA, AUG 2003 to APR 2005: DB2/UDB, Teradata Development, Performance Tuning
Teradata Corporation, US Postal Service, Rockville MD, JUL 2002 to JUN 2003: Modeling and Warehouse Architect & Senior Architect
Bank of America, Concord CA, JAN 2002 to JUL 2002: Developer and Modeler
Education:
PhD, Psychology, CALSOUTHERN UNIVERSITY IRVINE, candidate GPA 4.0
PhD, Psychology, Walden University, candidate GPA 3.9
PHI CHI, National Graduate School Honor Society
MS, Computer Science, San Francisco State University GPA 3.9
BS, Computer Science/Psychology, University of Washington GPA 3.75
Juris Doctor, Law Doctorate, Howard Taft University, 3rd Yr. GPA 4.0
Dale Carnegie courses and awards, including the Best Class Presentation Award.
CLIENT LIST, LAST 23 YEARS: Bank of America (5 times, several projects), Wells Fargo, CITI Group in Mexico, Monterey Savings & Loan, Coca-Cola/TERADATA, Standard Oil of California, Ford Aerospace, Sprint (started at Sprint as the first tech analysis from the former Southern Pacific Communications), Stanford University, Berkeley University, Pacific Bell, Westinghouse, Apple Computer, Pacific Gas & Electric, AT&T, City of San Francisco, Atari, FMC, American President Lines
PLEASE contact by email ONLY until an interview is set!
MILITARY US MARINE CORPS, 26TH EXPEDITIONARY FORCE, Hawaii, 1st Marine Division, Vietnam, COMBAT DECORATIONS.
