| Naga M Pragada - Sr. Data Modeler / Data Architect | AWS | Snowflake | Guidewire |
| [email protected] |
| Location: Columbus, Ohio, USA |
| Relocation: Open |
| Visa: H1B |
| Resume file: Sr. Data Modeler_Data Architect_AWS_Naga Pragada_1777561526229.docx |
Naga M Pragada
________________________________________

DATA MODELER | DATA ANALYST | DATA PROFILING & WAREHOUSING | SYSTEM ANALYSIS & DESIGN | DATA MODELING

PROFILE & VALUE

13+ years of experience designing and governing enterprise data architectures, with deep expertise across data engineering, data modeling, analytics platforms, and dimensional architectures. Proven ability to translate complex business requirements into scalable, secure, and analytics-ready data solutions across OLTP, OLAP, and cloud data warehouse environments, including Guidewire P&C ecosystems.

- Architected and integrated heterogeneous data sources across on-premises and cloud platforms, defining end-to-end data flows, integration patterns, and ETL/ELT frameworks that ensure data integrity, interoperability, and long-term scalability for enterprise analytics and reporting.
- Designed and governed logical and physical data models aligned with enterprise standards, supporting both transactional and analytical workloads while enabling performance reporting, advanced analytics, and downstream consumption by BI and data science teams.
- Provided architectural leadership and cross-functional collaboration, partnering with business stakeholders, analysts, engineers, and DBAs to deliver well-defined data architectures, comprehensive metadata assets, and clearly documented data definitions that improve consistency and usability across the organization.
- Established and enforced data quality, transformation, and validation standards, applying disciplined analytical thinking to resolve complex data issues, improve reliability, and ensure trusted datasets for decision-making, regulatory reporting, and enterprise analytics.

CORE COMPETENCIES

Data Warehousing | Data Profiling and Cleansing | Integration and Extraction Tools | Logical and Physical Data Modeling | Snowflake Cloud Database | Dimensional Modeling | Data Visualization | Guidewire P&C | Data Transformation | Data Mapping | Business Analysis | Designing Database Diagrams | Requirements Gathering | Extract, Transform and Load | Informatica Data Quality (IDQ)

PROFESSIONAL TRAITS

- Strong analytical and problem-solving mindset, with the ability to translate complex business requirements into scalable data models, optimized ETL pipelines, and reliable analytics solutions that support enterprise decision-making.
- Highly collaborative and detail-oriented professional, experienced in working with cross-functional teams including business stakeholders, data engineers, analysts, and architects to deliver governed, high-quality data platforms and enterprise reporting solutions.
- Developed and implemented ETL data pipelines and created centralized data models on a cloud-native data warehouse for the Policy and Claims domains, ensuring structured data management across Guidewire PolicyCenter and ClaimCenter applications.
- Strong experience in ETL pipeline development, with deep expertise in building end-to-end data integration solutions, including source-to-target mapping, complex transformation logic, and performance-optimized pipelines across on-prem and cloud environments, ensuring high data quality, reliability, and scalability for enterprise analytics.
- Developed data models that serve both OLTP and OLAP functionality as business needs require.
- Expert in writing advanced SQL scripts, stored procedures, triggers, views, and performance-optimized indexes, enabling efficient data transformation, integrity enforcement, and high-performance query execution across complex transactional and analytical systems.
- Enhanced data integrity and integration: successfully integrated Guidewire PolicyCenter P&C data with multiple source systems, implementing robust data mapping, validation logic, and referential integrity checks to ensure consistency, accuracy, and alignment across enterprise platforms.
- Demonstrated expertise in leveraging Python for data quality and cleansing, using libraries such as pandas and NumPy to develop custom validation rules, detect and correct inconsistencies, handle missing values, and eliminate redundancies; implemented automated data profiling and anomaly-detection logic to ensure clean, reliable datasets for downstream analytics and machine learning, significantly improving data integrity and insight accuracy (a brief sketch of this pattern follows this list).
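An illustrative sketch of the pandas/NumPy validation and anomaly-detection pattern described above. The DataFrame, column names, rules, and thresholds are hypothetical placeholders, not artifacts of any actual engagement:

import pandas as pd
import numpy as np

def profile_and_clean(df: pd.DataFrame) -> pd.DataFrame:
    """Apply basic deduplication, validation, imputation, and outlier flagging."""
    # Eliminate redundant records on the (hypothetical) business key.
    df = df.drop_duplicates(subset=["policy_id"]).copy()

    # Custom validation rule: premium amounts must be positive.
    df["premium_valid"] = df["premium_amount"] > 0

    # Handle missing values: impute numeric gaps with the column median.
    df["premium_amount"] = df["premium_amount"].fillna(df["premium_amount"].median())

    # Simple anomaly detection: flag values beyond three standard deviations.
    z = (df["premium_amount"] - df["premium_amount"].mean()) / df["premium_amount"].std()
    df["premium_outlier"] = np.abs(z) > 3
    return df

# Example run against a tiny hypothetical extract.
sample = pd.DataFrame({"policy_id": [1, 2, 2, 3],
                       "premium_amount": [1200.0, 980.0, 980.0, None]})
print(profile_and_clean(sample))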
CERTIFICATIONS

- AWS Certified Cloud Practitioner, demonstrating proficiency in Amazon Web Services cloud technologies.
- Microsoft Certified Azure Data Engineer, showcasing expertise in Azure data engineering solutions.
- Power BI certification, demonstrating proficiency in data visualization, report creation, and data analysis using Microsoft's business intelligence platform.
- Databricks Certified Data Analyst Associate: certified in building data analytics solutions using Databricks, including SQL analytics, Delta Lake data processing, data visualization, and performance optimization for large-scale cloud data platforms.

CAREER PROGRESSION

June 2020 - Present | Nationwide Insurance, Columbus, OH | Sr. Data Modeler / Sr. Data Architect

For the Hercules cloud migration initiative, spearheaded OLAP physical data modeling efforts, establishing a highly normalized database schema in compliance with Third Normal Form (3NF) principles. The project centered on the seamless integration and transformation of P&C insurance data from diverse sources, prominently Guidewire PolicyCenter (SQL Server, Oracle, mainframes), into Snowflake and Amazon S3 cloud databases.

- Served as Data Architect for enterprise P&C analytics platforms, defining end-to-end data architecture standards, modeling conventions, integration patterns, and design principles aligned with business and regulatory requirements.
- Developed and governed enterprise logical, physical, and canonical data models, ensuring consistency across transactional (OLTP), analytical (OLAP), and cloud-native platforms.
- Defined data flow architectures and system integration patterns for ingesting Guidewire PolicyCenter and ClaimCenter data into Snowflake, Amazon S3, and Redshift, supporting enterprise reporting, actuarial, and ML workloads.
- Designed and developed logical data models for the data warehouse using Ralph Kimball methodologies, ensuring optimal data structures for efficient processing and retrieval.
- Designed and reviewed logical and physical database architectures optimized for scalability, query performance, and data growth, partnering with DBAs on indexing strategies, partitioning, workload isolation, and capacity planning.
- Established performance baselines and optimization strategies to meet enterprise SLAs for data availability, latency, and throughput across analytical platforms.
- Led data governance initiatives by defining enterprise data standards, quality rules, naming conventions, and stewardship practices across the policy, claims, billing, and reference data domains.
- Implemented metadata management and lineage tracking using Collibra, maintaining enterprise data dictionaries, business glossaries, and technical metadata for improved discoverability and compliance.
- Enforced data quality controls including validation rules, reconciliation checks, anomaly detection, and audit metrics embedded within ETL and ELT pipelines.
- Collaborated with business analysts, developers, data scientists, and business stakeholders to gather analytical requirements and translate them into scalable data models and curated datasets for predictive modeling and feature-engineering use cases.
- Designed secure data architectures implementing role-based access control (RBAC), encryption at rest and in transit, schema-level security, and audit logging to meet enterprise security and compliance requirements.
- Partnered with security and compliance teams to ensure data platforms aligned with regulatory standards, privacy controls, and internal risk management policies.
- Architected and optimized enterprise ETL/ELT frameworks supporting batch and streaming ingestion across on-prem and cloud systems, ensuring interoperability, fault tolerance, and scalability.
- Defined reusable integration patterns for APIs, CDC pipelines, and event-driven data flows to support near-real-time analytics and downstream BI consumption.
- Acted as architectural advisor to business analysts, engineers, and data scientists, translating complex business requirements into scalable, governed data solutions.
- Conducted architecture and data model design reviews, providing guidance on best practices for normalization, dimensional modeling, performance optimization, and data governance.
- Guided detailed source-to-target mappings and implemented complex transformation logic, incorporating support for temporal data handling, Change Data Capture (CDC), and Slowly Changing Dimensions (SCDs) to ensure accurate historical data tracking across ETL pipelines.
- Developed modular, reusable dbt models for raw, staged, and curated layers, implementing tests, tags, snapshots, and documentation to enforce data quality, validation rules, and lineage across the ML data pipeline.
- Designed Slowly Changing Dimension (SCD) Type 2 structures to preserve full historical attribute changes across policy, claims, customer, and agent dimensions (a simplified sketch follows this position).
- Leveraged Redshift as a core analytical engine, designing high-performance table structures, materialized views, stored procedures, and loading strategies to support ML model training and real-time scoring environments.
- Conducted data profiling, cleansing, and data validation using SQL queries and Python scripts within the Databricks environment.
- Collaborated with business stakeholders to define the precise grain of each fact table (policy term, coverage level, transaction level, or claim line item) to ensure consistent reporting and accurate aggregation across actuarial, underwriting, and financial analytics.
- Implemented Guidewire PolicyCenter and ClaimCenter applications for P&C personal and commercial lines of business, and designed a centralized data warehouse and data models for the Policy and Claims domains.
- Developed conceptual and logical data models using Erwin Data Modeler, generating DDL statements and collaborating with the database team for table creation.
- Designed and implemented Master Data Management (MDM) solutions to standardize and maintain policyholder, claims, agent, and vendor master data across Guidewire PolicyCenter and ClaimCenter.
- Developed centralized reference data models to manage key P&C insurance entities, ensuring data consistency and accuracy across policy, billing, and claims systems.
- Collaborated with business SMEs, underwriters, and claims managers to document and standardize metadata across Guidewire PolicyCenter and ClaimCenter modules, enhancing the clarity and accessibility of key data assets.
- Implemented Collibra Data Catalog and Business Glossary as the central governance platform, defining business terms, stewardship ownership, and data domain hierarchies for critical P&C insurance entities such as policies, claims, exposures, transactions, reserves, and recoveries.
- Designed and enforced a data governance operating model, including data ownership, stewardship roles, governance councils, and domain accountability structures, to ensure enterprise-wide data quality, compliance, and lifecycle management.
- Designed and implemented end-to-end data migration pipelines to extract, transform, and load historical and transactional claims data from Guidewire ClaimCenter, ensuring accurate mapping of policy, exposure, reserve, payment, and recovery records to target cloud data platforms.
- Architected and executed complex SQL extraction frameworks to retrieve large volumes of transactional claims data from legacy SQL Server, Oracle, and mainframe systems, supporting high-volume batch migration workloads while maintaining performance and consistency.
- Developed AWS-based data migration pipelines utilizing Amazon S3 as a staging layer, enabling scalable ingestion and processing of legacy ClaimCenter policy and claims datasets before loading them into cloud data warehouse environments.
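A simplified SCD Type 2 sketch in the spirit of the dimension-design bullets above, issued through the Snowflake Python connector. The tables (dim_policy, stg_policy), the tracked attribute, and the connection parameters are hypothetical placeholders, not the project's actual objects:

import snowflake.connector

# Step 1: close out current dimension rows whose tracked attribute changed.
EXPIRE_CHANGED_ROWS = """
UPDATE dim_policy
SET    effective_end_date = CURRENT_DATE, is_current = FALSE
FROM   stg_policy s
WHERE  dim_policy.policy_id = s.policy_id
AND    dim_policy.is_current
AND    dim_policy.policy_status <> s.policy_status
"""

# Step 2: open a new current version for changed or newly arrived keys.
INSERT_NEW_VERSIONS = """
INSERT INTO dim_policy (policy_id, policy_status, effective_start_date,
                        effective_end_date, is_current)
SELECT s.policy_id, s.policy_status, CURRENT_DATE, DATE '9999-12-31', TRUE
FROM   stg_policy s
LEFT JOIN dim_policy d
       ON d.policy_id = s.policy_id AND d.is_current
WHERE  d.policy_id IS NULL
"""

conn = snowflake.connector.connect(account="myacct", user="etl_user",
                                   password="***", warehouse="ETL_WH",
                                   database="EDW", schema="CURATED")
try:
    cur = conn.cursor()
    cur.execute(EXPIRE_CHANGED_ROWS)
    cur.execute(INSERT_NEW_VERSIONS)
    conn.commit()
finally:
    conn.close()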
April 2018 - June 2020 | Nationwide Insurance, Columbus, OH | Sr. Data Modeler / Sr. Data Architect

The Actuarial Modernization program develops and maintains the data pipelines and supporting data model for the P&C, annuity, and retirement plans, reducing redundancy and improving advanced analytics and reporting data consumption through consistent use of future-state tools.

- Played a key Solution Architect role in the Actuarial Modernization program, designing future-state, cloud-ready data architectures to migrate legacy actuarial, P&C, annuity, and retirement platforms toward an AWS-based analytics ecosystem.
- Defined target-state AWS data architecture patterns to modernize legacy Guidewire, actuarial, and relational platforms into scalable, analytics-ready cloud solutions, reducing dependency on tightly coupled on-prem ETL and warehouse systems.
- Designed hybrid integration architectures enabling phased migration from on-prem SQL Server, Oracle, Netezza, and mainframe-based enterprise systems supporting legacy actuarial and GEN application datasets to cloud-based AWS data lake and analytics platforms (Amazon S3, Redshift, and Glue).
- Established enterprise logical, physical, and canonical highly normalized data models, supporting integration of transactional datasets originating from legacy insurance processing systems and GEN-based operational applications.
- Architected large-scale data ingestion, transformation, and batch-processing frameworks using Spark-based processing on AWS, supporting high-volume actuarial, policy, claims, and financial datasets (an illustrative pattern follows this position).
- Defined migration mappings and source-to-target strategies to move legacy normalized, dimensional, and Data Vault 2.0 models into cloud-optimized schemas for Amazon Redshift and downstream analytics and reporting environments.
- Worked with enterprise application teams to analyze and integrate data structures generated by legacy model-driven systems (including GEN-based operational platforms) supporting actuarial calculations, rating logic, and policy-processing workflows.
- Led data quality, profiling, and governance strategies using Informatica Data Quality (IDQ), ensuring data accuracy, completeness, and auditability throughout the cloud migration lifecycle for actuarial and regulatory reporting datasets.
- Designed secure cloud data-handling patterns for sensitive and PII data, implementing masking, restricted-access schemas, and controlled data exposure aligned with enterprise compliance and regulatory standards.
- Partnered with security, infrastructure, and compliance teams to define IAM-driven access models, encryption standards, and audit controls for AWS-based data platforms supporting enterprise insurance data.
- Acted as a trusted Solution Architect and advisor to actuarial teams, business analysts, and ETL engineers, translating complex actuarial, rating, and financial business requirements into scalable, governed data solutions.
- Reviewed data models, ETL designs, and integration workflows to enforce enterprise architecture standards, performance-optimization strategies, and cost-efficient processing patterns.
- Implemented orchestration and scheduling patterns using AWS data pipeline orchestration tools and workflow automation, enabling reliable management of end-to-end batch and analytical data pipelines.
- Established atomic fact grains and dimensional standards to ensure actuarial accuracy, historical traceability, and scalable analytics in cloud-native environments.
- Delivered architectural guidance and documentation enabling a smooth transition from legacy Hadoop and on-prem data platforms to AWS-based analytics solutions.
- Worked extensively on forward- and reverse-engineering processes and created DDL scripts to implement enterprise data modeling changes.
- Performed domain-driven data architecture analysis across P&C insurance subject areas (policies, policyholders, agreements, claims, transactions), identifying data-flow gaps, architectural inconsistencies, and integration issues, and led design workshops with business stakeholders and SMEs to define scalable resolution strategies.
- Defined atomic fact grains by aligning Guidewire and actuarial source-system processes to real business events, establishing enterprise-level grain standards to prevent aggregation inconsistencies and enable scalable analytical models.
- Architected source-to-target data structures and canonical schemas, governing table definitions and data contracts for ingestion from flat files, SQL Server, Netezza, and legacy GEN-driven operational data stores, while enforcing secure data-zone segregation for PII-sensitive datasets.
- Designed enterprise PII protection and data-masking architectures, orchestrating automated masking workflows and data-profiling strategies to identify, remediate, and prevent data quality issues in regulated datasets.
- Led the architecture and design of enterprise 3NF normalized data models for Guidewire PolicyCenter personal lines, integrating multiple upstream systems to deliver a governed relational data foundation supporting enterprise reporting and analytics.
- Architected large-scale ingestion and transformation frameworks enabling efficient data movement from relational databases and mainframe platforms hosting legacy GEN application datasets into AWS-based big data environments, supporting enterprise analytics and reporting.
- Applied enterprise modeling patterns across dimensional, Data Vault, and normalized architectures, selecting fit-for-purpose models to support operational reporting, actuarial analytics, and enterprise data platforms.
- Led end-to-end requirements documentation for the full claims life cycle by creating detailed BRDs, FRDs, user stories, AS-IS and TO-BE process flows, and data mapping documents covering FNOL, claim setup, adjudication, reserves, payout, subrogation, litigation, and indemnity workflows within Guidewire ClaimCenter.
- Conducted stakeholder workshops with claims adjusters, underwriting, finance, and legal teams to perform gap analysis between current- and future-state processes, translating business rules into functional specifications and configuring workflows and validation logic in Guidewire ClaimCenter.
- Developed comprehensive source-to-target data mapping documents supporting integrations between ClaimCenter and downstream systems (Policy, Billing, Payments, and third-party vendors), ensuring accurate claims financial transactions, reserve calculations, recoveries, and regulatory compliance.
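An illustrative PySpark batch-transformation pattern of the kind referenced above; the S3 paths, column names, and derived measure are hypothetical placeholders:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("actuarial-batch-transform").getOrCreate()

# Ingest raw policy transactions landed in the S3 data lake.
raw = spark.read.parquet("s3://example-lake/raw/policy_transactions/")

# Standardize, derive, and filter before publishing to the curated zone.
curated = (raw
           .withColumn("txn_date", F.to_date("txn_timestamp"))
           .withColumn("earned_premium", F.col("written_premium") * F.col("earned_factor"))
           .filter(F.col("txn_status") == "POSTED")
           .dropDuplicates(["policy_id", "txn_id"]))

# Write partitioned output optimized for downstream analytical loads.
curated.write.mode("overwrite").partitionBy("txn_date").parquet(
    "s3://example-lake/curated/policy_transactions/")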
October 2017 - April 2018 | J.B. Hunt Transportation Services, Inc., Lowell, AR | Sr. Data Modeler / Sr. Data Engineer

Worked on a next-generation transportation platform to streamline end-to-end logistics operations, including account setup, member enrollment, product routing, provider networks, contracting, reimbursement, and claims adjudication. The project focused on building a unified data architecture to support operational analytics, data integration, and scalable warehousing across all logistics functions.

- Participated in the development of a modern logistics data platform, supporting end-to-end data integration and warehousing for provider, enrollment, reimbursement, and claims systems.
- Delivered high-quality ETL solutions by leveraging SQL, PL/SQL, Informatica, and advanced data modeling techniques.
- Designed and implemented logical and physical data models to support data integration across operational systems, using 3NF, star schema, and Data Vault 2.0 methodologies to align with enterprise data warehousing standards.
- Developed source-to-target data mappings and complex transformation logic, including nested joins, multi-step derivations, and translation rules, using Informatica PowerCenter and custom SQL procedures.
- Translated functional and business requirements into executable data pipelines, mapping API payloads and legacy feeds to dimensional warehouse targets while identifying and reconciling structural mismatches.
- Performed detailed data profiling and validation using Informatica and SQL to assess data integrity, distribution, and anomalies across the member, provider, and product domains.
- Identified and modeled fact and dimension tables; defined grain, surrogate keys, and slowly changing dimensions (SCDs); and established referential integrity constraints for optimized query performance.
- Generated DDL scripts for target schema deployment and managed environment promotion via version-controlled SQL artifacts and Jira-based deployment workflows.
- Developed and implemented AWS Glue-based ETL jobs utilizing PySpark to process large-scale claims and policy datasets, enabling scalable transformation and migration of high-volume insurance data into cloud-native analytics platforms (a skeletal example follows this position).
- Built AWS Glue workflows and job orchestration frameworks integrated with AWS Step Functions, automating multi-stage data migration pipelines including extraction, transformation, validation, and loading.
- Designed cloud migration pipelines leveraging Amazon S3 staging zones, AWS Glue ETL processing layers, and Amazon Aurora PostgreSQL target databases, ensuring secure and efficient migration of legacy insurance datasets.
- Optimized PySpark-based transformation pipelines to process millions of claims transactions, improving data migration throughput and reducing execution time during large-scale batch migration cycles.
- Deployed and validated data models across development environments, integrating DDL with CI/CD practices where applicable and coordinating schema change impact analysis.
- Established end-to-end data lineage and metadata traceability by integrating Collibra with Snowflake, AWS S3, and Databricks pipelines, enabling visibility from Guidewire transactional systems through ETL transformations into analytical reporting layers.
- Developed enterprise metadata management processes, synchronizing technical metadata from Erwin data models and Snowflake schemas into Collibra to enable automated lineage mapping, impact analysis, and regulatory audit support.
- Implemented data classification and sensitive-data protection policies, tagging and governing PII, PCI, and confidential policyholder data to comply with regulatory frameworks including GDPR, CCPA, and NAIC insurance data regulations.
- Architected logical and physical data models using Erwin, maintaining standardized Logical Data Models (LDM) and Physical Data Models (PDM) aligned with governance standards and enabling metadata synchronization with Collibra for enterprise data discovery.
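A skeletal AWS Glue PySpark job consistent with the bullets above; the job arguments, Data Catalog names, column mappings, and the Aurora JDBC connection are hypothetical placeholders:

import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read staged claims data from the S3 staging zone via the Glue Data Catalog.
claims = glue_context.create_dynamic_frame.from_catalog(
    database="staging_db", table_name="claims_raw")

# Map source columns onto the target schema.
mapped = ApplyMapping.apply(frame=claims, mappings=[
    ("claim_no", "string", "claim_id", "string"),
    ("loss_dt", "string", "loss_date", "string"),
    ("paid_amt", "double", "paid_amount", "double"),
])

# Load into the Aurora PostgreSQL target through a catalogued JDBC connection.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=mapped, catalog_connection="aurora-postgres-conn",
    connection_options={"dbtable": "claims_curated", "database": "claims"})

job.commit()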
Jan 2015 - Dec 2015 | Whataburger, San Antonio, TX (offshore) | Sr. Data Modeler / Sr. Data Analyst

Designed and delivered full-stack data solutions, from source ingestion to reporting, supporting enterprise-level analytics and decision-making. Built scalable ETL pipelines, curated data models, and produced executive-ready dashboards aligned with complex business requirements.

- Developed detailed conceptual, logical, and physical data models using Erwin, translating business use cases into normalized and dimensional schemas aligned with star and snowflake architectures.
- Performed forward and reverse engineering in Erwin to create database-ready schemas and generate DDL scripts, including index strategies, partitioning rules, and referential integrity enforcement.
- Designed and implemented ETL jobs in Talend Open Studio, reading data from CSV and fixed-width files, applying transformation logic, and loading cleansed outputs into Oracle data warehouse targets.
- Created ETL workflows to populate dimension and fact tables, applying business rules, lookups, and surrogate-key handling to support consistent data mart structures.
- Developed and executed test strategies for ETL processes, validating source-to-target mappings and transformation logic to ensure high data integrity in the staging and reporting layers.
- Created dynamic reports and dashboards in Power BI, using story points, drilldowns, KPIs, and pivot tables to visualize operational metrics and trends across business domains.
- Built and published SQL Server Reporting Services (SSRS) reports and ad hoc queries using T-SQL, supporting upper management with insights into sales, inventory, and operational performance.
- Designed complex SQL datasets and stored procedures for Power BI reports, optimizing queries using joins, subqueries, and window functions across multi-table sources (an illustrative dataset query follows this position).
- Migrated and transformed legacy datasets for Power BI, collaborating with application developers and stakeholders to provide optimized SQL scripts and data views for visualization.
- Performed detailed data profiling and data quality analysis using SQL and Talend profiling features to identify anomalies, missing values, and inconsistencies, ensuring reliable, trusted datasets for downstream reporting and analytics.
- Collaborated with business stakeholders and analysts to gather reporting requirements, define KPIs, and translate business logic into technical specifications, ensuring dashboards and analytical reports aligned with operational and strategic decision-making needs.
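An illustrative window-function dataset of the kind built for those Power BI reports, executed here through pyodbc; the DSN, table, and columns are hypothetical placeholders:

import pyodbc

SALES_RANKING = """
SELECT store_id,
       sales_date,
       daily_sales,
       SUM(daily_sales) OVER (PARTITION BY store_id
                              ORDER BY sales_date
                              ROWS UNBOUNDED PRECEDING) AS running_total,
       RANK() OVER (PARTITION BY sales_date
                    ORDER BY daily_sales DESC)          AS daily_rank
FROM   dbo.store_daily_sales
"""

conn = pyodbc.connect("DSN=reporting_dw")  # placeholder DSN
rows = conn.cursor().execute(SALES_RANKING).fetchall()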
Feb 2012 - Dec 2014 | Lloyds Banking Group, Hyderabad, India | Data Modeler / Data Analyst

- Established comprehensive metadata definitions for enterprise databases by applying data analysis, profiling, and cleansing techniques with SQL and data management tools.
- Produced logical and physical data models and mapping spreadsheets by gathering business requirements and applying Erwin Data Modeler, streamlining the development of departmental and functional data marts.
- Collaborated with analytical and reporting infrastructure teams to align data warehouse design with OLAP reporting needs, using SQL Server and data modeling techniques.
- Streamlined data-flow processes during transformation initiatives by standardizing metadata definitions and implementing data profiling with SQL, supporting clear traceability in regulatory environments.
- Collaborated with cross-departmental teams to design logical data models and document traceability workflows connecting business processes to physical databases, leveraging data modeling tools and business-requirements analysis.
- Designed and implemented SSIS packages to extract, transform, and load data from multiple sources, ensuring seamless data integration into data warehouses.
- Developed SSIS packages to move data from OLTP to OLAP environments, designing workflow steps and automating alert notifications for job status, which enhanced system monitoring capabilities.
- Built a complete product analytics pipeline over a six-month period using Microsoft SSIS for ETL and Power BI for daily reporting, streamlining data flow from extraction to visualization for ongoing business insights.
- Integrated advanced analytics solutions using Microsoft SSIS and Power BI for continuous product performance tracking, facilitating the delivery of actionable business insights that contributed to significant revenue growth across enterprise channels.
- Implemented a self-serve analytics dashboard using Power BI and Microsoft SSIS, enabling real-time access to data insights for over 80 business-unit users and streamlining decision-making.

May 2011 - Jan 2012 | United Online, Hyderabad, India | Data Analyst

- Translated stakeholder requirements into actionable documentation for project planning and quality assurance, using workflow mapping and structured outlines.
- Created and implemented test plans to confirm adherence to business and functional specifications, using SQL and report-building tools for validation.
- Created conceptual, logical, and physical data models using data modeling tools, applying business analysis techniques to align technical solutions with project requirements.
- Converted project data requirements into structured data models by reviewing stakeholder documentation and collaborating with the technical team, applying ER modeling tools for logical and physical schema creation.
- Collaborated with development and ETL teams to review data quality outcomes from SOR files, leveraging SQL for efficient anomaly detection and resolution.
- Served on the development team providing business data-requirements analysis services, producing logical and physical data models.
- Demonstrated a clear understanding of each logical data model and articulated the intent and vision of each model through consumer and supplier model walkthroughs.
- Collaborated with the ETL process development team to refine data extraction and transformation workflows, applying SQL to validate and improve integration processes.
- Applied SQL to create and modify database objects, aligning result sets with evolving business reporting requirements and ensuring reusable components for future analytics initiatives.

Aug 2009 - Apr 2011 | Krish Radiant Solutions, Hyderabad, India | Oracle Developer

- Developed and implemented triggers, stored procedures, views, and SQL scripts using PL/SQL and Oracle, supporting transaction processing and automation of business rules (a minimal sketch follows this position).
- Implemented updates to existing PL/SQL packages, triggers, procedures, and functions to accommodate newly defined business functionality, ensuring integration with core application workflows.
- Developed PL/SQL procedures by closely analyzing business requirements, supporting inventory allocation and bidding workflows to streamline shipper operations.
- Collaborated with business analysts and quality analysts to gather and document business requirements, translating them into a comprehensive Business Object Model (BOM) using data modeling tools such as Erwin.
- Performed impact analysis and contributed to system analysis and design using Oracle PL/SQL and Erwin Data Modeler, ensuring alignment with evolving business requirements.
- Translated business requirements into logical and physical data models using Erwin Data Modeler, ensuring alignment of database structures with project objectives.
- Tested and evaluated modifications to database structures, leveraging SQL and Oracle utilities to identify and resolve potential performance issues prior to implementation.
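A minimal sketch of the PL/SQL trigger pattern described above, shown deployed through the python-oracledb driver for consistency with the other examples; the schema objects and connection details are hypothetical placeholders:

import oracledb

# Audit trigger automating a business rule: record every quantity change.
AUDIT_TRIGGER = """
CREATE OR REPLACE TRIGGER trg_inventory_audit
AFTER UPDATE OF quantity ON inventory
FOR EACH ROW
BEGIN
    INSERT INTO inventory_audit (item_id, old_qty, new_qty, changed_on)
    VALUES (:OLD.item_id, :OLD.quantity, :NEW.quantity, SYSDATE);
END;
"""

with oracledb.connect(user="app", password="***", dsn="localhost/XEPDB1") as conn:
    conn.cursor().execute(AUDIT_TRIGGER)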
EDUCATION & PROFESSIONAL DEVELOPMENT

Jawaharlal Nehru Technological University, India (March 2005 - May 2009)
Bachelor of Technology in Electronics & Communications

Southern Arkansas University, Magnolia, AR (Jan 2016 - March 2017)
Master of Computer and Information Sciences

TECHNICAL SKILLS

Data Modeling Tools: Erwin r7/r7.1/r7.2/r8.2/r9.1/9.5/9.6, Embarcadero ER/Studio, Enterprise Architect, Oracle Designer, Sybase PowerDesigner
ETL Tools: Talend, Informatica 6.2/7.1, PowerCenter 8.6/7.1.1 (Informatica Designer, Workflow Manager, Workflow Monitor), Data Junction, Ab Initio, DataStage, SSIS
Database Tools: Microsoft SQL Server, MySQL, Oracle, DB2, MS Access 2000, Teradata V2R6.1
OLAP Tools: Microsoft Analysis Services, Business Objects, Crystal Reports
Other Tools: SAS Enterprise Guide, SAP ECC, Panorama web service
Packages: Microsoft Office Suite, Microsoft Visual Studio, Microsoft Project 2010, Microsoft Visio
Programming Languages: Python, R, SQL, T-SQL, PL/SQL, Base SAS, HTML, XML, UNIX and shell scripting
Reporting Tools: Tableau, Power BI, QlikView v9+, SQL/Oracle DW/BI concepts

REFERENCES AVAILABLE ON REQUEST