Adarsh Gogineni
Data Scientist / AI Engineer / Machine Learning / Python / RAG-LLM Engineering
Email: [email protected]
Phone: 571-441-0403
Location: Newark, New Jersey, USA
Relocation: Yes
Visa: H1B


Professional Summary:
I have 5+ years of experience as a data scientist and a strong academic foundation, with a Bachelor's in Electrical and Computer Engineering from Rutgers University. My expertise in Python, SQL, and AI and machine learning frameworks spans building, refining, and deploying machine learning models, architecting data-centric solutions to business problems, and applying deep learning to complex problems across industries. I am also skilled at distilling complex data insights into clear, coherent, and actionable business intelligence.


Technical Skills:

Programming Languages and Frameworks:
1. Python; R; SQL; JavaScript: Proficient in leveraging Python for data science and machine learning projects, R for statistical analysis, SQL for database management, and JavaScript for developing interactive web applications.
2. Jupyter Notebook: Skilled in using Jupyter Notebook for interactive computing and sharing of live code, equations, visualizations, and narrative text.

Machine Learning and AI Frameworks:
1. Scikit-Learn: Adept at employing Scikit-Learn for a variety of machine learning algorithms including regression, classification, and clustering tasks.
2. TensorFlow; PyTorch; Keras: Proficient in developing deep learning models using TensorFlow and PyTorch, including neural networks, CNNs, RNNs, and GANs for advanced predictive modeling and AI applications.
3. HuggingFace Transformers: Experienced in utilizing HuggingFace Transformers for cutting-edge natural language processing (NLP), enabling the development of applications requiring language understanding and generation.

Databases and Data Curation:
1. MySQL; MongoDB; PostgreSQL: Well-versed in managing relational and NoSQL databases, performing CRUD operations, and designing database schemas.
2. NumPy; Pandas; Matplotlib: Experienced in using NumPy and Pandas for high-performance data manipulation and analysis in Python as well as Matplotlib for data visualizations.
3. Power BI; Tableau: Skilled in building interactive reports and dashboards using Power BI, Python, R, Excel, and Tableau.
4. SQL Server Integration Services (SSIS): Experienced in using SSIS for data extraction, transformation, and loading (ETL) tasks.

Web Development and Cloud Platforms:
1. React; Flask; Django: Proficient in developing web applications using React for the frontend, and Flask and Django for the backend.
2. Git; GitHub: Proficient in using Git for version control and GitHub for code collaboration and repository management.
3. Amazon Web Services (AWS); Microsoft Azure; Google Cloud Platform (GCP): Adept at leveraging cloud platforms for scalable computing resources, data storage, and deploying machine learning models.
4. Docker; Kubernetes: Experienced in containerization with Docker and orchestrating containers with Kubernetes to streamline development and deployment processes.


Work Experience:

Client: CDW - Chicago, IL
Period: July 2021 to Present
Role: Data Scientist/ ML Engineer
Tech stack: Python, SQL, SSIS, Azure Data Factory, Azure Databricks, TensorFlow, Keras, AWS, Docker, Kubernetes, Snowflake, PySpark

Project: Marketing Direct - ETL Pipeline Optimization for Sales Data Analysis (Extract, Transform, Load)
The aim of this project was to extract, clean, and transform key customer activity data, such as click counts, sale counts, advertisement watch time, support chats, sales and service calls, and feedback surveys, to enable in-depth analysis and valuable customer insights.
Responsibilities:
Developed and optimized big data ETL pipelines with SSIS, Azure Data Factory, and Azure Databricks, streamlining the data preprocessing phase for TensorFlow and Keras model training on over 1 billion records and improving model performance by 9% (a minimal preprocessing sketch follows this project).
Conducted comprehensive data analysis and performed SQL transformations on customer sales datasets that exceeded 500 million entries using Python (Pandas, NumPy, and Scikit-learn). This process boosted the accuracy of predictive analytics models for customer behavior insights by 5%.
Coordinated cloud-based machine learning deployments on AWS for 3+ major projects, achieving a 15% reduction in data schema update times. Automated deployment tasks with Docker and Kubernetes, significantly enhancing the efficiency of Continuous Integration/Continuous Deployment (CI/CD) pipelines.
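A minimal sketch of the kind of Databricks/PySpark preprocessing feeding a Keras model described above; the table name customer_activity and the columns clicks, watch_time, and converted are illustrative placeholders, not the actual CDW schema.

# Sketch: PySpark preprocessing feeding a small Keras model.
# Table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
import tensorflow as tf

spark = SparkSession.builder.appName("marketing-etl").getOrCreate()

activity = (
    spark.table("customer_activity")          # hypothetical curated table
    .dropna(subset=["clicks", "watch_time"])  # basic cleaning
    .withColumn("watch_minutes", F.col("watch_time") / 60.0)
)

# Pull a sampled subset to the driver for model training.
pdf = activity.select("clicks", "watch_minutes", "converted").sample(fraction=0.01).toPandas()
X, y = pdf[["clicks", "watch_minutes"]].values, pdf["converted"].values

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=256)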

Project: Cloud-Based Deployments
Responsibilities:
Streamlined cloud-based machine learning deployments, significantly improving project delivery times and system reliability on AWS.
Implemented automated deployment strategies using Docker and Kubernetes, reducing manual intervention and improving the efficiency of CI/CD pipelines.
Designed and deployed scalable machine learning models on cloud platforms, leveraging AWS services such as SageMaker, EC2, and S3 for robust and efficient AI solutions.
Optimized the performance of deployed models by integrating advanced monitoring tools and automated scaling policies, ensuring high availability and responsiveness.
Developed and maintained infrastructure-as-code (IaC) scripts using Terraform and AWS CloudFormation, facilitating reproducible and maintainable deployment environments.
Collaborated with data scientists and engineers to ensure seamless integration and continuous improvement of AI models in production environments.
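A condensed sketch of a SageMaker deployment flow like the one described above, using boto3; the model name, container image URI, model artifact path, and IAM role ARN are hypothetical placeholders rather than values from the actual projects.

# Sketch: deploying a trained model to a SageMaker endpoint via boto3.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

sm.create_model(
    ModelName="churn-model",  # placeholder name
    PrimaryContainer={
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/churn:latest",
        "ModelDataUrl": "s3://example-bucket/models/churn/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
)

sm.create_endpoint_config(
    EndpointConfigName="churn-config",
    ProductionVariants=[{
        "VariantName": "primary",
        "ModelName": "churn-model",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

sm.create_endpoint(EndpointName="churn-endpoint", EndpointConfigName="churn-config")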

Project: Marketing Analytics Sales Data Analysis and AI Model Enhancement
Responsibilities:
Led the preprocessing of a comprehensive dataset of sales records, leveraging advanced machine learning frameworks such as TensorFlow and Keras to significantly boost model accuracy and performance.
Applied deep learning techniques and neural network architectures to refine predictive models, achieving a substantial increase in forecasting precision.
Conducted extensive data analysis and SQL transformations to extract actionable insights from datasets exceeding 500 million entries, enhancing the predictive capabilities of AI-driven analytics models.
Implemented feature engineering, hyperparameter tuning, and model optimization strategies to improve the robustness and efficiency of AI models.
Collaborated with cross-functional teams to integrate AI solutions into the marketing analytics workflow, driving data-driven decision-making processes.
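A short sketch of the feature engineering and hyperparameter tuning workflow described above, here shown with scikit-learn's GridSearchCV; the file name and column names (order_value, days_since_last_purchase, churned) are assumptions for illustration.

# Sketch: feature engineering plus hyperparameter tuning on sales data.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_parquet("sales_features.parquet")   # hypothetical curated dataset
df["spend_per_day"] = df["order_value"] / (df["days_since_last_purchase"] + 1)

X = df[["order_value", "days_since_last_purchase", "spend_per_day"]]
y = df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

pipeline = Pipeline([("scale", StandardScaler()),
                     ("model", GradientBoostingClassifier())])
search = GridSearchCV(
    pipeline,
    param_grid={"model__n_estimators": [100, 300],
                "model__learning_rate": [0.05, 0.1]},
    cv=3,
    scoring="roc_auc",
)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))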

Client: LIGL - Austin, TX
Period: July 2020 to July 2021
Role: Data Scientist/ ML Engineer
Tech stack: Python, SQL, Tableau, Snowflake, PySpark

Project: TotalHold - Data Integration and Dashboard Development
This project aimed to provide a fully interactive, end-to-end dashboard that streamlines eDiscovery reporting across the legal case filing lifecycle.
Responsibilities:
Designed and executed Python scripts and SQL queries to consolidate data from over 10 distinct sources into a comprehensive, interactive dashboard. Significantly improved the efficiency of data integration and analytics capabilities.
Optimized tool performance and maintained SQL Server, resolving over 100 support tickets, which contributed to 99% uptime and ensured reliable data access for decision-making models.
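A minimal sketch of consolidating several sources into one dashboard table, as described above; the connection string, table names, and file path are hypothetical placeholders, not the actual TotalHold schema.

# Sketch: merging SQL and flat-file sources into a single dashboard table.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine(
    "mssql+pyodbc://user:pass@server/ediscovery?driver=ODBC+Driver+17+for+SQL+Server"
)

cases = pd.read_sql("SELECT case_id, status, filed_on FROM cases", engine)
holds = pd.read_sql("SELECT case_id, custodian, hold_issued_on FROM legal_holds", engine)
exports = pd.read_csv("review_platform_export.csv")   # flat-file source

dashboard = (
    cases.merge(holds, on="case_id", how="left")
         .merge(exports, on="case_id", how="left")
)
dashboard.to_sql("dashboard_totalhold", engine, if_exists="replace", index=False)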

Project: Data Processing Automation and PII Extraction Workflows
Responsibilities:
Developed Python scripts to automate the integration and processing of data from multiple sources, streamlining the creation of an interactive dashboard and facilitating more efficient data analytics.
Implemented SQL scripts for the accurate extraction of Personally Identifiable Information (PII), significantly improving the quality and integrity of data by 20%, which is instrumental for the accuracy of future ML model predictions.
Leveraged machine learning algorithms to enhance data preprocessing workflows, ensuring high-quality input for downstream AI models.
Utilized natural language processing (NLP) techniques to automate the extraction and classification of PII, improving efficiency and accuracy.
Collaborated with cross-functional teams to define and enforce data governance policies, ensuring compliance with data security standards and privacy regulations during PII handling.
Conducted regular audits and implemented automated monitoring to maintain data integrity and compliance with evolving data protection laws.
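A small sketch of NLP-assisted PII extraction along the lines described above, combining spaCy named-entity recognition with regex patterns; the pipeline name and sample text are assumptions for illustration.

# Sketch: automated PII extraction with spaCy NER plus regex patterns.
import re
import spacy

nlp = spacy.load("en_core_web_sm")   # small English pipeline with an NER component

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def extract_pii(text: str) -> dict:
    """Return detected person names, organizations, emails, and phone numbers."""
    doc = nlp(text)
    return {
        "persons": [ent.text for ent in doc.ents if ent.label_ == "PERSON"],
        "orgs": [ent.text for ent in doc.ents if ent.label_ == "ORG"],
        "emails": EMAIL_RE.findall(text),
        "phones": PHONE_RE.findall(text),
    }

print(extract_pii("Contact Jane Doe at [email protected] or 555-123-4567."))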

Project: SQL Server Maintenance and Optimization
Responsibilities:
Conducted comprehensive maintenance of SQL servers, troubleshooting and resolving over 100 support tickets related to performance and security issues.
Implemented security updates and patches, ensuring the SQL server infrastructure remained robust against vulnerabilities, thereby maintaining a 99% uptime record.
Designed and executed regular backup and disaster recovery plans, minimizing potential data loss and contributing to business continuity and the reliability of data-driven decision-making processes.
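A brief sketch of a scripted SQL Server full backup of the kind referenced above, driven from Python via pyodbc; the server name, database name, and backup path are hypothetical.

# Sketch: scheduled SQL Server full backup via pyodbc.
from datetime import datetime
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sql01;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,   # BACKUP DATABASE cannot run inside a user transaction
)
backup_path = rf"D:\backups\ediscovery_{datetime.now():%Y%m%d}.bak"
conn.cursor().execute(
    f"BACKUP DATABASE [ediscovery] TO DISK = N'{backup_path}' WITH INIT, COMPRESSION"
)
conn.close()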



Education:

Rutgers University - New Brunswick, NJ
Period: Sep - 2016 to May - 2020
Degree: Bachelor of Science (B.S.) in Electrical and Computer Engineering; Minor in Computer Science
Relevant Coursework:
Software Engineering, Operating Systems, Data Management, Numerical Analysis, Algorithms, Intro AI
