
Afreen Najam - Data Engineer
[email protected]
Location: Kansas City, Kansas, USA
Relocation: Yes
Visa: H1B
Dear Recruiters,

Hope you are doing great. We have the following experienced consultant, a Sr. Big Data Engineer, available for C2C positions; please let me know if you have any open C2C requirements.

Please send C2C positions to [email protected] or reach me at 8325484963.

Our consultant has genuine experience.



Hari.G
Sr. Big Data / Data Engineer

PROFESSIONAL SUMMARY
7+ years of IT experience in software design, development, implementation, and support of business applications for the telecom, healthcare, and insurance industries.
Experience with Big Data Hadoop and Hadoop ecosystem components such as MapReduce, Sqoop, Flume, Kafka, Pig, Hive, Spark, Storm, HBase, Airflow, Oozie, and ZooKeeper.
Worked extensively on installing and configuring Hadoop ecosystem components including Hive, Sqoop, HBase, ZooKeeper, and Flume.
Good knowledge of writing Spark applications in Python (PySpark); a brief PySpark sketch follows this summary.
Experience with data extraction, transformation, and loading using Hive, Sqoop, and HBase.
Hands-on experience designing and developing Spark applications in Scala to compare the performance of Spark with Hive and SQL/Oracle.
Implemented ETL operations on a Big Data platform.
Hands-on experience with streaming data ingestion and processing.
Experienced in designing time-driven and data-driven automated workflows using Airflow; see the Airflow sketch after this summary.
Skilled at choosing the most efficient Hadoop ecosystem components and providing effective solutions to Big Data problems.
Well versed in design and architecture principles for implementing Big Data systems.
Experience in configuring ZooKeeper to coordinate servers in clusters and maintain data consistency.
Skilled in data migration from relational databases to the Hadoop platform using Sqoop.
Experienced in migrating ETL logic to Pig Latin scripts, including transformations and join operations.
Good understanding of MPP databases such as HP Vertica and Impala.
Hands-on experience configuring and working with Flume to load data from multiple sources directly into HDFS.
Expertise in relational databases such as Oracle, MySQL, and SQL Server.
Strong analytical and problem-solving skills; a highly motivated team player with excellent communication and interpersonal skills.
Experience in developing data pipelines using AWS services including EC2, S3, Redshift, Glue, Lambda, Step Functions, CloudWatch, SNS, DynamoDB, and SQS; a Lambda handler sketch appears after this summary.
Proficient with multiple databases including MongoDB, MySQL, Oracle, and MS SQL Server.
Worked as the team's JIRA administrator, providing access, working assigned tickets, and partnering with project developers to test product requirements, bugs, and new improvements.
Created snowflake schemas by normalizing dimension tables as appropriate, including a Demographic sub-dimension created as a subset of the Customer dimension.
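
A minimal PySpark sketch of the kind of Hive-backed ETL referenced above; the table and column names are illustrative assumptions, not drawn from an actual engagement:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a Spark session with Hive support so tables can be read and written by name.
spark = SparkSession.builder.appName("claims_etl").enableHiveSupport().getOrCreate()

# Read a Hive staging table, aggregate approved claims per day, and write back to Hive.
claims = spark.table("staging.claims")  # assumed source table
daily_totals = (
    claims
    .filter(F.col("status") == "APPROVED")
    .groupBy("claim_date")
    .agg(F.sum("amount").alias("total_amount"))
)
daily_totals.write.mode("overwrite").saveAsTable("warehouse.daily_claim_totals")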
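
A minimal Airflow sketch of a time-driven (scheduled) workflow with a data-driven step (a file sensor); the DAG id, file path, and load logic are placeholders assumed for illustration:

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.sensors.filesystem import FileSensor

def load_to_hive(**context):
    # Placeholder for the actual load step (e.g., a spark-submit or HiveQL call).
    pass

with DAG(
    dag_id="daily_claims_ingest",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",   # time-driven: runs once per day
    catchup=False,
) as dag:
    wait_for_file = FileSensor(   # data-driven: wait until the source file lands
        task_id="wait_for_source_file",
        filepath="/landing/claims/claims.csv",
        poke_interval=300,
    )
    load = PythonOperator(task_id="load_to_hive", python_callable=load_to_hive)
    wait_for_file >> load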
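
A minimal AWS Lambda handler sketch for an S3-triggered pipeline step that records newly landed objects in DynamoDB; the table name and item layout are assumptions for illustration only:

import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ingest_audit")  # assumed DynamoDB audit table

def handler(event, context):
    # Record each newly landed S3 object so downstream stages can pick it up.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        table.put_item(Item={"object_key": key, "bucket": bucket, "status": "LANDED"})
    return {"statusCode": 200, "body": json.dumps("ok")}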