
Rajakrishna Reddy Koppela
Cloud Data Engineer
[email protected]
Memphis, Tennessee, USA

PROFESSIONAL SUMMARY

8+ years of IT experience in data warehousing, from requirements gathering through implementation.
Scalability Expert: Designed systems for processing large, diverse datasets.
ETL & Analytics: Proficient in ETL, data mining, and enabling data-driven decisions.
Cloud Mastery: Built data lakes on AWS and Hadoop; migrated Teradata warehouses to Snowflake.
Leadership: Managed teams, delivered end-to-end solutions, and executed large-scale BI projects.
Comprehensive Skills: Strong SQL, Python, and Scala; expertise in Palantir Ontology.
Cloud Proficiency: Utilized AWS (EMR, S3, Athena) for cloud implementations.
Migration Success: Smooth transition from Teradata to Snowflake without affecting reports.
Automation: Automated jobs with TWS Maestro, Autosys, and more.
ETL Tools: Experienced in Informatica, Alteryx, SSIS, and SAP BODS.
Modeling Skills: Strong data warehousing modeling, ERWIN proficiency.
Diverse Industries: Retail, supply chain, and banking (Nike, Unilever, Walmart, Barclays).
Release Coordination: Managed code deployments across environments.
Reporting: Proficient in IBM Cognos 8 and Tableau.
Database Management: Expertise in indexes, statistics, and temporary tables.
Scripting: Unix scripting and command proficiency.
Training: Conducted Teradata training for professionals.
Full SDLC: Involved in design, coding, testing, documentation, and maintenance.


TECHNICAL SKILLS:

Tools:
AWS, Azure Cloud, Cloudera Hadoop (CDH), Hive/Impala, Teradata SQL, Oracle, Palantir Foundry, dbt, Snowflake Cloud Data Warehouse, Airflow, Tableau, Cognos, Databricks, GitHub, Jira, Jenkins, Splunk, Ansible, Chef, Nagios.

Languages:
Scala, Python, UNIX Shell Scripting.

Frameworks:
Apache Spark, Erwin 4.0, Relational and Star/Snowflake Dimensional Modeling, Logical Volume Manager (LVM).

Education:
MS in Data Science - May 2018
Christian Brothers University - GPA 3.88/4
Bachelor of Engineering in Computer Science and Engineering - May 2014
Jawaharlal Nehru Technological University, Hyderabad - GPA 7.38/10

ACHIEVEMENTS:

Managed a team of 10, delivering the project two weeks ahead of schedule.
Developed a software program that increased system effectiveness and reduced the business's labor needs by 20 hours per week.


PROFESSIONAL EXPERIENCE:

Walmart (Dallas, TX) April 2021 - Current
Role: Sr. Data Engineer

Responsibilities:
Led data requirements gathering and documentation efforts for big data and advanced analytics projects.
Designed and implemented data pipelines, seamlessly moving data from cloud storage and relational databases to Google BigQuery.
Developed Airflow DAGs to automate Dataproc cluster management for compute and ETL tasks.
Collaborated closely with data science teams to implement advanced analytical models.
Created custom Looker dashboards on BigQuery fact tables to meet specific business needs.
Developed efficient PySpark ETL solutions for handling large datasets.
Demonstrated strong quantitative, analytical, and SQL skills in building Spark SQL ETL processes.
Utilized Azure tools (Azure Functions, Data Factory, Data Lake) to transition from Databricks.
Conducted data analysis and worked on Fivetran and dbt for data extraction, loading, and transformation.
Designed architecture for processing and storing high-volume datasets.
Collaborated with stakeholders to implement product features.
Deployed applications using CI/CD (Jenkins, GitHub) and followed Agile methodologies in project execution.

Environment: GitHub, Jenkins, Google BigQuery, Apache PySpark, Apache Airflow, Databricks, Azure Data Factory, Azure Data Lake, SQL, Python, Fivetran, dbt.


JPMC (Nashville, TN) July 2019 - April 2021
Role: Data Engineer, Enterprise Data Analytics

Responsibilities:
Gathered and documented data requirements for big data and analytics.
Conducted data analysis and shaped data for effective visualization.
Designed architecture for high-volume data processing and storage.
Collaborated with business and IT to implement product features.
Developed Azure functions for data loading into Snowflake.
Created Azure Data Factory ETL pipelines and used dbt for transformations.
Led technical implementation of data warehousing solutions.
Scheduled tasks with Airflow DAGs for ingestions, ETL, and reporting.
Configured PostgreSQL and managed master-slave clusters.
Migrated workflows from Oozie to Apache Airflow DAGs.
Collaborated on software integration and critical-path designs.
Created and modified database objects, including Tables, Views, and Stored procedures.
Utilized Airflow workflows for automation on Amazon EMR.
Provided recommendations for software issue resolution and risk mitigation.

Environment: Apache Hadoop, Apache Spark, Power BI, Azure Services, Amazon Web Services, Snowflake, ETL Pipelines, Apache Airflow, PostgreSQL, Python, Airflow DAGs.


Toyota (Plano, Texas) Jun 2018 - May 2019
Role: DevOps/Release Engineer

Responsibilities:
AWS Infrastructure Design: Designed AWS infrastructure using VPC, EC2, S3, Route 53, EBS, Security Group, Auto Scaling, and RDS via CloudFormation.
DevOps Automation: Specialized in Puppet for cloud automation, handling server configuration, software installation, and configuration management within AWS environments.
Continuous Integration/Deployment: Implemented CI/CD solutions with Jenkins, Docker, XL Deploy, and AWS CodeDeploy, ensuring streamlined build and deployment processes.
Configuration Management: Utilized Ansible and Chef for managing configurations, applications, and deployments across various environments.
Monitoring and Cloud Migration: Deployed Splunk for monitoring, integrated it with AWS using Puppet, and migrated services to Azure and Azure Cloud Environments, ensuring scalability and automation.

Environment: ANT, Maven, Subversion, CVS, Chef, Azure, Linux, Shell/Perl Scripts, Python, DB2, LDAP, Git, Jenkins, Tomcat, Nagios, JIRA.


Genpact (Hyderabad, India) Jun 2014 - May 2017
Role: DevOps Engineer

Responsibilities:
CI/CD Implementation: Spearheaded CI/CD pipeline development using Jenkins and created a Splunk SLI/SLO dashboard for stakeholder visibility.
System Maintenance: Collaborated with stakeholders to ensure efficient system maintenance for production servers.
Version Control: Proficient in Git and SVN for software version control.
Patch Management: Successfully implemented Linux server patch management using Ansible playbooks.
Deployment Expertise: Managed deployment of releases to QA, UAT, and Amazon EC2 Linux environments, handling code builds and configurations.
Build Automation: Created ANT scripts for building and deploying Java/J2EE, Oracle ADF, and Oracle SOA-based enterprise applications to WebLogic Application Servers.
Configuration Management: Set up Jenkins CI/CD processes, Jenkins master-slave architecture, and integrated Jenkins with Deploy for consistent and automated build and deployment processes.

Environment: Git, SVN, Jenkins, Java/J2EE, ANT, Maven, Chef, Python Scripts, Shell Scripts, Sonar, RHEL, CentOS.