Sravan - Sr. Big Data Developer
[email protected]
Location: Plano, Texas, USA
Relocation: Yes
Visa: H1B

SUMMARY

13 years of experience in software development as a Big Data Developer and Technical Lead in product development, design, architecture and implementation of enterprise Big Data solutions using Hadoop, Spark, Scala, Kafka, NoSQL databases, etc.
Working at AT&T as a Lead Developer for the last 6 years, responsible for the design and development of enterprise solutions in the telecom domain on a Big Data platform.
Hands-on experience in Big Data and real-time streaming technologies: Hadoop 2.x, Spark 2.x, Kafka, PySpark, MapReduce 2, Pig, Hive, HBase, Cassandra, HDFS, YARN, Sqoop, Control-M and HDP 2.x.
Experience migrating on-premises ETL processes (HDFS) to the Azure cloud using Azure Data Factory and Databricks.
Good experience as an Azure developer, building and implementing Azure data solutions; successfully handled the migration of Big Data applications to an Azure-based cloud platform for AT&T.
Extensive knowledge of the Hadoop ecosystem, with experience in data storage, query writing, processing and analysis.
Designed and developed real-time streaming and batch pipelines for data from various sources, defining strategies for data lakes, data flow, retention, aggregation and summarization to optimize the performance of analytics products.
Experience working with Apache NiFi to ingest data into the cloud platform by extracting data from different source systems.
Developed Scala and PySpark scripts and UDFs using DataFrames/SQL/Datasets and RDDs in Spark 2.3 for data aggregation and queries, and wrote data back into OLTP systems through Sqoop (see the sketch after this list).
Optimized Hive queries and Spark jobs using best practices and the right parameters, with technologies such as Hadoop, YARN, Scala and PySpark.
Experienced in the design and implementation of workflows, data pipelines and data processing frameworks for billions of clickstream and log events.
Responsible for creating and loading Hive tables with appropriate static and dynamic partitions for efficiency.
Responsible for delivering highly complex projects using Agile and Scrum methodologies.
Coordinated with business customers to gather business requirements, interacted with technical peers to derive technical requirements, and delivered BRD and TDD documents.
Good experience in Core Java, Struts, Hibernate, web services, the Spring Framework and other Java/J2EE technologies.
Designed and implemented distributed messaging and streaming services using Apache Kafka.
Good experience in build and integration using Maven, and well-versed in version control tools such as SVN, CVS and Git.
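
A minimal PySpark sketch of the DataFrame aggregation and partitioned Hive loading referenced above; the table and column names (raw_zone.click_events, analytics.daily_click_summary, customer_id, session_id) are hypothetical, not actual project objects.

    # Minimal sketch, assuming hypothetical table and column names:
    # aggregate raw click events per customer and day, then save the result
    # as a date-partitioned table for downstream analytics.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (SparkSession.builder
             .appName("daily_click_aggregation")
             .enableHiveSupport()
             .getOrCreate())

    clicks = spark.table("raw_zone.click_events")         # hypothetical source table

    daily = (clicks
             .groupBy("customer_id", "event_date")
             .agg(F.count("*").alias("click_count"),
                  F.countDistinct("session_id").alias("sessions")))

    (daily.write
          .mode("overwrite")
          .format("parquet")
          .partitionBy("event_date")                      # one partition per day
          .saveAsTable("analytics.daily_click_summary"))  # hypothetical target table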

TECHNICAL SKILLS

Big Data & Related Technologies: Hadoop, Spark, Scala, HDFS, PySpark, YARN, Pig, Hive, Tez, ZooKeeper, Sqoop, Kafka, TWS, Control-M
NoSQL Databases: HBase, Cassandra
Cloud Platform: Microsoft Azure
Operating Systems: Windows NT/2000/XP, Linux
Hadoop Distributions: Cloudera CDH4, Hortonworks HDP 2.x
Programming Languages: Java SDK, PySpark, Scala, Python
Frameworks: NiFi, Struts, Spring 3.0, Jersey, JPA & Hibernate, Azure Databricks
Web Services: RESTful, SOAP (JAX-RPC & JAX-WS)
RDBMS Databases: Oracle, MySQL
Java Technologies: J2EE, JDBC, Servlets, JSP, Hibernate, AJAX, XML, XSL
Tools: PuTTY, WinSCP, TOAD, Axis, SQL Developer, SVN & Git
IDEs: RAD, Eclipse
Web/Application Servers: Tomcat, JBoss, WebSphere & WebLogic
Scripting Languages: JavaScript, jQuery


EDUCATION
Bachelor of Engineering (B.E.) with distinction from Anna University, India 2004

CERTIFICATIONS
Microsoft Certified Azure Developer for Implementing an Azure Data Solution (DP-200)
Sun (Oracle) Certified Java Programmer (SCJP) for the Java Platform (CX-310-035)
Databricks Certified Associate Developer for Apache Spark 3.0 - Python


PROFESSIONAL EXPERIENCE

Developer Oct 2019 - Till Date
Smart Data Lake AT&T, Dallas, USA

Smart Data Lake is an integral part of the AT&T ecosystem that provides mechanisms for associating multi-structured data with structured data sourced from various warehouses, applications and external engines. The Data Lake effectively increases ROI by leveraging technologies within the overall AT&T ecosystem. The business recognizes the value of the Data Lake and continues to invest in the Big Data infrastructure to extract customer insights in a more cost-effective manner.

Roles & Responsibilities
Designed and implemented real-time and batch processing pipelines and applications using Apache Storm, Spark, Spark Streaming and Hadoop.
Developed PySpark scripts and UDFs using DataFrames/SQL/Datasets and RDDs in Spark 2.3 for data aggregation and queries, and wrote data back into OLTP systems through Sqoop.
Developed Scala/PySpark jobs in Databricks to extract data from external databases and load it into Azure containers.
Created a Scala framework to bring data from DB2 to Azure containers.
Implemented PySpark scripts for pre- and post-processing using Spark in Databricks, loading data in Delta Lake format.
Used Azure Databricks to connect to Azure containers, perform transformations on the raw data and load it in Delta Lake format into the target container (see the sketch after this list).
Optimized Spark jobs to run on the cluster for faster processing.
Worked with Snowpipe to enable automatic loading of data from files in structured/semi-structured formats such as XML, JSON, CSV, Avro and Parquet, using Snowflake SnowSQL and writing user-defined functions.
Performance-tuned Spark applications by setting the right batch interval, the correct level of parallelism and appropriate memory settings.
Designed solutions to export and import data between RDBMS and Hadoop using Sqoop.
Created Control-M folders and workflows to automate loading data into HDFS and Hive, pre-process it and apply complex transformations.
Used the Spark API over Cloudera Hadoop YARN to perform analytics on data in Hive.
Assisted team members with technical impediments and provided optimal solutions.
Coordinated with multiple teams and stakeholders on business/functional requirements.
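
The Databricks/Delta Lake bullet above refers to a pattern roughly like the following sketch; the storage account, container paths and record_id column are assumptions for illustration only, not actual project names.

    # Minimal Databricks sketch, assuming hypothetical container paths and a
    # hypothetical record_id key column: read raw CSV from a source container,
    # apply light cleansing, and write the result in Delta format to the
    # target container. `spark` is provided by the Databricks runtime.
    from pyspark.sql import functions as F

    raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/usage/"         # assumed path
    target_path = "abfss://curated@examplestorage.dfs.core.windows.net/usage/"  # assumed path

    raw_df = (spark.read
              .option("header", "true")
              .csv(raw_path))

    curated_df = (raw_df
                  .filter(F.col("record_id").isNotNull())      # drop rows missing the key
                  .dropDuplicates(["record_id"])               # de-duplicate on the key
                  .withColumn("load_date", F.current_date()))  # stamp the load date

    (curated_df.write
               .format("delta")
               .mode("overwrite")
               .save(target_path))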

Technology: Hadoop, HDFS, Hive, Azure, Databricks, Sqoop, Scala, PySpark, Apache Spark, Kafka, Control-M, Git

Developer Nov 2018 - Oct 2019
Event Log Management AT&T, Dallas, USA

IPTV log detection is an event-based, near real-time video program viewing analytics project built on the regular tuning intervals reported by DirecTV video set-top boxes. Set-top boxes (Bali and X1) continuously send Linear, DVR and Video events based on the viewer's channel operations. The massive event log system receives open intervals/sessions as input, and these are decoded into tuning intervals/closed sessions.
Roles and Responsibilities
Coordinated with multiple teams and stakeholders on business/functional requirements.
Evaluated real-time processing tools and machines (VMs) that process millions of tuples every day.
Built the infrastructure required for optimal extraction, transformation and loading (ETL) of data from a wide variety of data sources such as Salesforce, SQL Server, Oracle and Teradata using Spark, Scala, Hive, Kafka and other Big Data technologies (see the sketch after this list).
Loaded and transformed large sets of structured and semi-structured data from multiple data sources into the raw data zone (HDFS) using Sqoop imports and Spark jobs.
Developed Hive queries for data sampling and analysis for the analysts.
Ingested data using Sqoop from various sources such as Informatica and Oracle.
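
As a rough sketch of the near real-time ingestion path described above, the following Spark Structured Streaming job consumes set-top-box tuning events from Kafka and lands them in the HDFS raw zone; the broker address, topic name and HDFS paths are hypothetical.

    # Minimal sketch, assuming a hypothetical Kafka broker, topic and HDFS
    # paths: stream tuning events from Kafka and append them as parquet files
    # in the raw zone, checkpointing for fault tolerance.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("tuning_event_ingest").getOrCreate()

    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker1:9092")  # assumed broker
              .option("subscribe", "tuning-events")               # assumed topic
              .load())

    decoded = events.select(
        F.col("key").cast("string").alias("box_id"),
        F.col("value").cast("string").alias("event_json"),
        F.col("timestamp").alias("event_ts"))

    query = (decoded.writeStream
             .format("parquet")
             .option("path", "hdfs:///data/raw/tuning_events")           # assumed path
             .option("checkpointLocation", "hdfs:///chk/tuning_events")  # assumed path
             .trigger(processingTime="1 minute")
             .start())
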
Environment: Hadoop 2.x, HDFS, MR2, YARN, Pig, Hive, Storm, Kafka, HDP 2.x, TWS, ZooKeeper, Tez, Ambari, Control-M, Java, Git, Maven, Eclipse

Developer June 2016 - Oct 2018
Product Analysis AT&T, Dallas, USA

The project analyzes and processes product reports from various online e-commerce sites. The system successfully overcomes the problems the earlier RDBMS-based approach had with storing and processing large volumes of structured, semi-structured and unstructured data.

Roles and Responsibilities
Ingested data into HDFS from the data sources and wrote Scala programs for data transformations.
Applied partitioning and bucketing concepts in Hive and designed both managed and external tables to optimize performance (see the sketch after this list).
Resolved performance issues in Hive and Pig scripts through an understanding of joins, grouping and aggregation and how they translate to MapReduce jobs.
Moved data from Hive to MySQL using Sqoop.
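
The Hive partitioning and bucketing mentioned above could look roughly like the following Spark SQL sketch; the database, table and column names and the HDFS location are hypothetical.

    # Minimal sketch, assuming hypothetical database/table/column names and an
    # assumed HDFS location: an external table partitioned by report date, a
    # managed table bucketed by product_id, and a dynamic-partition load.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("product_reports_tables")
             .enableHiveSupport()
             .getOrCreate())

    # External, date-partitioned table; the data files live at an external path.
    spark.sql("""
        CREATE EXTERNAL TABLE IF NOT EXISTS analytics.product_reports (
            product_id STRING,
            site       STRING,
            rating     DOUBLE
        )
        PARTITIONED BY (report_date STRING)
        STORED AS ORC
        LOCATION '/data/analytics/product_reports'
    """)

    # Managed table bucketed by product_id to help joins and sampling.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS analytics.product_reports_bucketed (
            product_id STRING,
            site       STRING,
            rating     DOUBLE
        )
        CLUSTERED BY (product_id) INTO 32 BUCKETS
        STORED AS ORC
    """)

    # Dynamic-partition load into the partitioned table from a staging table
    # (assumed to exist).
    spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
    spark.sql("""
        INSERT OVERWRITE TABLE analytics.product_reports PARTITION (report_date)
        SELECT product_id, site, rating, report_date
        FROM staging.product_reports_raw
    """)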

Environment: Hadoop, HDFS, MapReduce, Java, Pig, Spark, Scala, Hive, Sqoop

J2EE Developer Oct 2013 - Jan 2016
Consumer Relations System (CRS3) TransUnion LLC, Chicago, USA

Consumer Relations System is a web-based application that helps TransUnion serve its consumers with services such as sending disclosures and corrected copies, filing disputes, adding security alerts, freezing the credit report, etc. CRS3 is graphics-rich, with a large number of screens that populate various details of the credit report. Credit reports are pulled based on information given by the consumer, through web services connected to the credit data repository.

Role and Responsibilities
Assisted team members with technical impediments and provided optimal solutions.
Coordinated with multiple teams and stakeholders on business/functional requirements.
Evaluated the Hadoop platform and its ecosystem tools for batch processing.
Developed on a Struts-like Model View Controller (MVC) framework.
Developed controller classes, view resolvers, JSPs and Spring configuration files.
Developed RESTful APIs for applications to consume.
Used Java Message Service (JMS) for reliable, asynchronous exchange of messages.
Responsible for building scalable distributed data solutions using Hadoop.

Environment: Java, J2EE, Struts, Spring 3, RESTful services, Hibernate, Teradata, shell scripting, Git, Hadoop (HDFS)




System Analyst Sep 2011 - Oct 2013
Online Dispute Application (ODA) TransUnion LLC, Chicago, USA

Online Dispute Application (ODA) is used by consumers of CIBIL to raise disputes online. The application allows consumers (citizens of India) to dispute any inaccurate credit or personal information on their credit report by completing an online form. It also lets consumers upload proof documents to make the dispute process hassle-free.

Role and Responsibilities

Responsible for the analysis, design and testing phases of the project.
Developed on a Struts-like Model View Controller (MVC) framework.
Designed and implemented business logic using entity and session beans to handle transactions and update data in the Oracle database.
Developed middle-layer business methods that incorporated the core business functionality using stateless session beans.
Developed the front-end UI using JSP, Servlets, HTML and JavaScript.
Built and executed batch files.
Performed code reviews and mentored the team, providing technical guidance.
Responsible for third-level production support activities.

Environment: Windows, Core Java & J2EE, Servlets, JSP, Struts 2, JSF, Façade design pattern, Oracle, RAD, ClearCase, shell scripting, ClearQuest, Tomcat, CA Wily Introscope.

Sr. Java/J2EE Programmer Mar 2010 - Aug 2011
Strategic Refresh TransUnion LLC, Chicago, USA

Strategic Refresh (SR) is a project targeting two major goals of Elsevier: formatting the logs of its backbone application in a structured way while eliminating PII and securing them, and migrating applications from RAD 7.0 to RAD 9.0 and from WAS 7 to WAS 8.5.

Role and Responsibilities
Implemented compliance changes mandated by Information Security.
Modified log files for PCI/PII compliance so that they do not contain any personally identifiable information.
Developed MVC controller classes, view resolvers and JSPs using the Spring Framework.
Designed and developed the action classes using collections and multithreading.
Successfully implemented Spring Dependency Injection (DI) in many parts of the project.
Worked with the Veracode platform, which scans the application and reports existing flaws for triage, and fixed the identified flaws.

Environment: Core Java, J2EE, Servlets, Spring 3.x, JSP, Struts 2, DB2, RAD 9.0, WebSphere Application Server 8.5, ClearCase, ClearQuest, CA Wily Introscope


Java/J2EE Programmer Mar 2008 - Dec 2009
Electronic Fee & Billing System (eFBS) Standard Chartered Bank

Role and Responsibilities
Analyzed business requirements and designed and developed the technical solution.
Responsible for development of the online module, which carries the major functionality in eFBS.
Developed the user interface using JSPs, Servlets, HTML, DHTML and CSS.
Implemented front-end validations using JavaScript.
Worked on error handling, exceptions, cursor management, subprograms and packages.
Developed the persistence layer using the Hibernate framework.
Mapped products to the customer scheme.
Performance-tuned the application using the CA Wily Introscope tool.

Environment: Windows, Swing, EJB, JSP, JMS, Servlets, Struts 1.x, Spring 2.x, JavaScript, WebSphere, WSAD 5.0, SCM Source Safe and Oracle.


Java/J2EE Programmer Apr 2007 - Mar 2008
Finacle eBanking Saudi Hollandi Bank, Arab National Bank & Banque Saudi Fransi

Role and Responsibilities
Involved in project support and enhancements for implementing the base versions of Finacle eBanking (versions 611.03, 64 and 623.05).
Developed the front-end UI using JSP, Servlets, HTML and JavaScript.
Responsible for interacting with clients for requirements gathering and understanding.
Created base modules using the Struts application framework.
Responsible for impact analysis of major issues and quick deliverables.

Environment: Windows NT, JDK, JSP, Servlets, Struts 1.X, EJB, WebSphere, WSAD 5.0, VSS and SQL Server 2000