
Venkat K
Oracle Cloud Admin
+1 770-406-0124
[email protected]
Richmond, VA, USA
Relocation: Yes
Visa: H1B

SUMMARY:

13+ years of experience as a Senior Database Administrator in planning, designing, installing, configuring, upgrading, and patching, with performance tuning, RMAN backup and recovery, cloning, and database security, administering multiple Oracle databases with Exadata, RAC, Data Guard, and GoldenGate on Oracle Enterprise Linux, Red Hat, and Sun Solaris UNIX (SAN storage, RAID).
Strong experience in the Software Development Life Cycle (SDLC), including Requirements Analysis, Design Specification, and Testing, in both Waterfall and Agile methodologies.
Extensively used Python libraries including PySpark, Pytest, PyMongo, cx_Oracle, PyExcel, Boto3, Psycopg, embedPy, NumPy, and Beautiful Soup.
Hands-on use of the Spark and Scala APIs to compare the performance of Spark with Hive and SQL, and Spark SQL to manipulate DataFrames in Scala.
Experienced in big data analysis and developing data models using Hive, Pig, MapReduce, and SQL, with strong data architecting skills designing data-centric solutions.
Experience working with data modeling tools like Erwin and ER/Studio.
Strong experience in writing scripts using the Python, PySpark, and Spark APIs for analyzing data.
Experience in developing MapReduce programs using Apache Hadoop for analyzing big data as per requirements.
Experience in designing Star and Snowflake schemas for Data Warehouse and ODS architectures.
Expertise in the Amazon Web Services (AWS) cloud platform, including EC2, S3, AWS Glue, VPC, ELB, IAM, DynamoDB, CloudFront, CloudWatch, Route 53, Elastic Beanstalk, EBS, Auto Scaling, Security Groups, EC2 Container Service (ECS), CodeCommit, CodePipeline, CodeBuild, CodeDeploy, Redshift, CloudFormation, CloudTrail, OpsWorks, Kinesis, SQS, SNS, and SES.
Experience in Data Analysis, Data Profiling, Data Integration, Migration, Data governance and Metadata Management, Master Data Management and Configuration Management.
Hands-on with Spark MLlib utilities including classification, regression, clustering, collaborative filtering, and dimensionality reduction.
Skilled in System Analysis, E-R/Dimensional Data Modeling, Database Design, and implementing RDBMS-specific features.
Knowledge of working with Proofs of Concept (PoCs) and gap analysis; gathered necessary data for analysis from different sources and prepared data for exploration using data munging and Teradata.
Well experienced in Normalization and De-Normalization techniques for optimum performance in relational and dimensional database environments.
Experience in developing customized UDFs in Python to extend Hive and Pig Latin functionality (see the UDF sketch after this summary).
Expertise in designing complex mappings, with expertise in performance tuning and Slowly Changing Dimension and Fact tables.
Extensively worked with the Teradata utilities FastExport and MultiLoad to export and load data to/from different source systems, including flat files.
Experienced in building automation regression scripts for validation of ETL processes between multiple databases such as Oracle, SQL Server, Hive, and MongoDB using Python.
Proficiency in SQL across several dialects, including MySQL, PostgreSQL, Redshift, SQL Server, and Oracle.
Good knowledge of Data Marts, OLAP, and Dimensional Data Modeling with the Ralph Kimball methodology (Star Schema and Snowflake modeling for Fact and Dimension tables).
Excellent in performing data transfer activities between SAS and various databases and data file formats like XLS, CSV, etc.
Experienced in creating shell scripts to push data loads from various sources from the edge nodes onto HDFS.
Experienced in development and support of Oracle SQL, PL/SQL, and T-SQL queries.
Experience in designing and implementing data structures and commonly used business intelligence tools for data analysis.
Expert in building Enterprise Data Warehouses or data warehouse appliances from scratch using both the Kimball and Inmon approaches.
Experience in working with Excel Pivot and VBA macros for various business scenarios.
Expertise in SQL Server Analysis Services (SSAS) and SQL Server Reporting Services (SSRS).
Experience in Amazon Web Services (AWS) cloud services like S3, EC2, and EMR, and in Microsoft Azure.
Expertise in Azure infrastructure management (Azure Web Roles, Worker Roles, SQL Azure, Azure Storage, Azure AD licenses, Office 365).
Experience in dealing with Windows Azure IaaS: Virtual Networks, Virtual Machines, Cloud Services, Resource Groups, ExpressRoute, Traffic Manager, VPN, Load Balancing, Application Gateways, and Auto-Scaling.
Exposure to using Apache Kafka to develop data pipelines of logs as streams of messages using producers and consumers.
Excellent understanding and knowledge of NoSQL databases like MongoDB and Cassandra.
Created Cassandra tables to store various data formats of data coming from different sources.
Extensive knowledge and experience of real-time data streaming techniques like Kafka and Spark Streaming.
Configured Spark Streaming to receive real-time data from Kafka and store the stream data to HDFS (see the streaming sketch after this summary).
Experienced in data formats like JSON, Parquet, Avro, RC, and ORC.
Utilized Flume to analyze log files and write into HDFS.
Experience in importing and exporting data with Sqoop between HDFS and RDBMS and migrating according to clients' requirements.
Used the GitHub version control tool to push and pull functions to get updated code from the repository.
Solid skills in database design, optimization, and performance tuning; designed and developed applications and SQL queries; Flashback Recovery, Data Pump, ASM, ASH, AWR, ADDM, Automatic Undo Management, OEM Grid monitoring, Recycle Bin, SCAN, FAN, TAF, RAC One Node, and server pools.
Experience in working on AWS and its services like AWS IAM, VPC, EC2, ECS, EBS, RDS, S3, Lambda, ELB, Auto Scaling, Route 53, CloudFront, CloudWatch, CloudTrail, SQS, and SNS, and experienced in cloud automation using AWS CloudFormation templates to create custom-sized VPCs, subnets, NAT, EC2 instances, ELBs, and security groups.
24x7 Production support of Oracle 9i/10g/11g/12c databases. Managed very large (Multiple TB) size databases.
Oracle DBA (Oracle RDBMS 9i/10g/11g/12c).
Created infrastructure in a coded manner (infrastructure as code) using Puppet, Chef and Ansible for configuration management of virtual environments, and Vagrant for virtual machine and resource control
Work on Oracle Data Masking and Subsetting to help database customers improve security, accelerate compliance, and reduce IT costs by sanitizing copies of production data for testing, development, and other activities and by easily discarding unnecessary data.
Experienced in Planning, Installation, Physical and Logical Database Design, Backup and Recovery, Cloning or Refreshing Database and Oracle Applications.
Experienced in upgrading Oracle databases from 10.2.0.4 to 11.2.0.4 to 12c.
Experience in setting up and maintaining Oracle Data Guard for production databases.
Worked on AWS OpsWorks, AWS Lambda, AWS CodeDeploy, AWS CloudFormation, and Cloud Foundry.
Installed, configured, troubleshot, and maintained all components of multiple Oracle databases.
Perform day-to-day database administration and ensure integrity, security and availability of applications environments.
Implemented active Oracle Data Guard to facilitate data protection, site redundancy and disaster recovery.
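
To illustrate the custom PySpark UDF work above, a minimal sketch; the session name, sample data, and cleaning rule are hypothetical stand-ins:

from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("udf-demo").getOrCreate()

# Hypothetical sample data standing in for a Hive table.
df = spark.createDataFrame([("venkat k",), ("  oracle admin ",)], ["raw_name"])

# A simple cleaning UDF: trim and title-case a column value.
@udf(returnType=StringType())
def clean_name(value):
    return value.strip().title() if value else None

df.withColumn("name", clean_name("raw_name")).show()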
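
The Kafka-to-HDFS bullet above could be realized in several ways; this sketch uses Spark Structured Streaming, with broker, topic, and HDFS paths as hypothetical placeholders:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-to-hdfs").getOrCreate()

# Subscribe to a Kafka topic and decode message values as strings.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "app-logs")
          .load()
          .selectExpr("CAST(value AS STRING) AS line"))

# Persist the stream to HDFS as Parquet, with a checkpoint for fault tolerance.
query = (events.writeStream
         .format("parquet")
         .option("path", "hdfs:///data/raw/app_logs")
         .option("checkpointLocation", "hdfs:///checkpoints/app_logs")
         .start())
query.awaitTermination()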

TECHNICAL SKILLS:

Applications: DBCA, DBUA, Recovery Manager (RMAN), Oracle Enterprise Manager (OEM), OEM Grid Control, Oracle Data Guard, Oracle Management Service (OMS), Real Application Clusters (RAC), ASM, SQL Navigator, Erwin, TOAD, Data Pump (expdp, impdp), Autosys, Query Analyzer, VERITAS NetBackup, Developer 2000, SQL*Plus, SQL*Loader, GoldenGate, Commvault, SQL Developer
Databases: Installation, implementation, administration, upgrading, and troubleshooting of Oracle 8i, 9i, 10g RAC, 11g RAC, 11gR2 Grid Control RAC, and 12c; PostgreSQL 8/9; MS SQL Server 2005/2008
Tools/Utilities: TKPROF, STATSPACK, Explain Plan, Oracle Bug DB, Oracle Demantra, OEM
Backend Scripting: SQL*Plus, PL/SQL, UNIX shell scripting, Perl scripting
Programming: Python (Pandas, NumPy, SciPy, Scikit-Learn, Seaborn, Matplotlib, NLTK), R (Caret, Glmnet, XGBoost, rpart, ggplot2, sqldf), RStudio, PL/SQL, Linux shell scripts, Scala, Java, SQL, NoSQL, PySpark, PySpark SQL, SAS
Operating Systems: Windows NT/2000/2003/XP, RHEL, Sun Solaris, AIX, HP-UX
Others: AD Utilities, SQL*Loader, TOAD, XML/BI Publisher, XtermIO, Web-to-Go mobile tool, Workflow Builder, SQL Developer
Data Engineer/Big Data Tools / Cloud / Visualization / Other Tools: Databricks, Hadoop Distributed File System (HDFS), Hive, Pig, Sqoop, MapReduce, Spring Boot, Flume, YARN, Hortonworks, Cloudera, Mahout, MLlib, Oozie, Zookeeper, AWS, Azure Databricks, Azure Data Explorer, Azure HDInsight, Salesforce, GCP, Google Cloud Shell, Linux, PuTTY, Bash shell, Unix, Tableau, Power BI, SAS, Web Intelligence, Crystal Reports, Dashboard Design



PROJECT EXPERIENCE:

DCHF Sept 2020-Till Date
Cloud Infrastructure Admin

Automated the monitoring, Configuration Management, and Log Aggregation of the systems and centralized SIEM with AWS CloudFormation, Chef, Nagios, and Elasticsearch.
Collaborated with Business Analysts, SMEs across departments to gather business requirements, and identify workable items for further development.
Partnered with ETL developers to ensure that data is well cleaned and the data warehouse is up to date for reporting purposes using Pig.
Worked on Import Management and Plumslice's Assortment Planning using Oracle PL/SQL and Pro*C to process terabytes of data in an Oracle RDBMS in Linux and Windows environments.
Extensive use of embedded SQL database calls in C code.
Created Pro*C programs using pointers, structures, macros, and functions.
Experience in creating new Pro*C batch programs with embedded SQL.
Selected and generated data into CSV files, stored them in AWS S3 from AWS EC2, and then structured and loaded them into AWS Redshift (see the upload sketch after this list).
Hands-on experience in storage, compute, and networking services, with implementation experience in data engineering using key AWS services such as EC2, S3, ELB, EBS, RDS, IAM, EFS, CloudFormation, Redshift, DynamoDB, Glue, Lambda, Step Functions, Kinesis, Route 53, SQS, SNS, SES, AWS Systems Manager, etc.
Performed simple statistical analysis for data profiling, such as cancel rate, variance, skew, and kurtosis of trades, and runs of each stock per day, grouped by 1-, 5-, and 15-minute intervals.
Used PySpark and Pandas to calculate the moving average and RSI score of the stocks and loaded them into the data warehouse (see the sketch after this list).
Explored Spark to improve the performance and optimization of the existing algorithms in Hadoop using Spark Context, Spark SQL, PostgreSQL, DataFrames, OpenShift, Talend, and pair RDDs.
Involved in integration of the Hadoop cluster with the Spark engine to perform batch and GraphX operations.
Performed data preprocessing and feature engineering for further predictive analytics using Python Pandas.
Developed and validated machine learning models including Ridge and Lasso regression for predicting total amount of trade.
Creating pipelines in AWS Glue using linked AWS Lambda, AWS S3, AWS RDS (Postgres), AWS Redshift, AWS SFTP, and AWS Kinesis to extract, transform, and load data from different sources such as the Redshift warehouse, the S3 data lake, a third-party SFTP server, and streaming data.
Boosted the performance of regression models by applying polynomial transformation and feature selection and used those methods to select stocks.
Generated report on predictive analytics using Python and Tableau including visualizing model performance and prediction results.
Extensive experience in IT data analytics projects; hands-on experience migrating on-premise ETLs to Google Cloud Platform (GCP) using cloud-native tools such as BigQuery, Cloud Dataproc, Google Cloud Storage, and Composer.
Practical understanding of data modeling (dimensional and relational) concepts like Star Schema modeling, Snowflake Schema modeling, and Fact and Dimension tables.
Utilized Agile and Scrum methodology for team and project management.
Used Git for version control with colleagues.
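
As a sketch of the moving-average and RSI calculation mentioned above (prices, window lengths, and column names are hypothetical; the production job ran on PySpark at scale):

import pandas as pd

# Hypothetical daily closing prices for one stock.
prices = pd.Series([101.2, 102.5, 101.8, 103.1, 104.0, 103.3, 105.2,
                    106.1, 105.5, 107.0, 106.4, 108.2, 109.0, 108.1, 110.3])

# 5-day simple moving average.
sma_5 = prices.rolling(window=5).mean()

# 14-period RSI: ratio of average gain to average loss over the window.
delta = prices.diff()
gain = delta.clip(lower=0).rolling(window=14).mean()
loss = (-delta.clip(upper=0)).rolling(window=14).mean()
rsi_14 = 100 - 100 / (1 + gain / loss)

print(pd.DataFrame({"close": prices, "sma_5": sma_5, "rsi_14": rsi_14}))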
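
And a minimal sketch of the CSV-to-S3 step, assuming boto3 with a hypothetical bucket and object key:

import boto3
import pandas as pd

BUCKET = "trade-data-lake"      # hypothetical bucket name
KEY = "daily/stocks.csv"        # hypothetical object key

# Write a DataFrame to a local CSV file, then upload it to S3.
df = pd.DataFrame({"symbol": ["ABC", "XYZ"], "close": [101.2, 55.4]})
df.to_csv("/tmp/stocks.csv", index=False)

s3 = boto3.client("s3")
s3.upload_file("/tmp/stocks.csv", BUCKET, KEY)
print(f"Uploaded s3://{BUCKET}/{KEY}")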

Environment: Spark (PySpark, Spark SQL, Spark MLlib), Python 3.x (Scikit-learn, NumPy, Pandas), Tableau 10.1, GitHub, AWS EMR/EC2/S3/Redshift/SQS/SNS, and Pig.

American Diabetes Association (ADA) Jan 2018 - Sept 2020
Cloud Project Manager/Cloud Admin

Sole Oracle resource supporting all databases and applications on both in-house and cloud servers.
Automated the monitoring, Configuration Management, and Log Aggregation of the systems and centralized SIEM with AWS CloudFormation, Chef, Nagios, and Elasticsearch.
Added and removed nodes in the Cassandra cluster.
Implemented Oracle Cloud ERP, HCM, BI, Projects, and SCM services at ADA.
Migrated all data from 11i EBS databases to Oracle ERP Cloud.
Designed the business requirement collection approach based on the project scope and SDLC methodology.
Installing, configuring, and maintaining data pipelines.
Creating pipelines in AWS Glue using linked AWS Lambda, AWS S3, AWS RDS (Postgres), AWS Redshift, AWS SFTP, and AWS Kinesis to extract, transform, and load data from different sources such as the Redshift warehouse, the S3 data lake, a third-party SFTP server, and streaming data.
Extracted files from Hadoop and dropped them into S3 on a daily and hourly basis.
Working with Data governance and Data quality to design various models and processes.
Involved in all steps and the scope of the project's reference data approach to MDM; created the Data Dictionary and source-to-target mappings in the MDM data model.
Developed automation regression scripts for validation of ETL processes between multiple databases such as AWS Redshift, Oracle, MongoDB, and SQL Server (T-SQL) using Python (see the validation sketch after this list).
Automated the data processing with Oozie to automate data loading into the Hadoop Distributed File System.
Designing and Developing Oracle PL/SQL and Shell Scripts, Data Import/Export, Data Conversions and Data Cleansing.
Performing data analysis, statistical analysis, generated reports, listings and graphs using SAS tools, SAS/Graph, SAS/SQL, SAS/Connect and SAS/Access.
Developing Spark applications using Scala and Spark-SQL for data extraction, transformation and aggregation from multiple file formats.
Used Kafka and integrated it with Spark Streaming.
Developed data analysis tools using SQL and Python code.
Authoring Python (PySpark) scripts for custom UDFs for row/column manipulations, merges, aggregations, stacking, data labeling, and all cleaning and conforming tasks.
Working with relational database systems (RDBMS) such as Oracle and database systems like HBase.
Using ORC and Parquet file formats on HDInsight, Azure Blobs, and Azure Tables to store raw data.
Involved in writing T-SQL working on SSIS, SSAS, Data Cleansing, Data Scrubbing and Data Migration.
Working on dimensional and relational data modeling using Star and Snowflake schemas, OLTP/OLAP systems, and conceptual, logical, and physical data modeling using Erwin.
Performing PoCs for a big data solution using Hadoop for data loading and data querying.
Writing Pig scripts to generate MapReduce jobs and performing ETL procedures on the data in HDFS.
Using Sqoop to channel data between different sources, HDFS, and RDBMS.
To meet specific business requirements, wrote UDFs in Scala and stored procedures, and replaced the existing MapReduce programs and Hive queries with Spark applications using Scala.
Involved in Normalization and De-Normalization of existing tables for faster query retrieval.
Developed Python programs and Excel functions using VBScript to move and transform data.
Developing JSON scripts for deploying the pipeline in Azure Data Factory (ADF) that processes the data using the Cosmos activity.
Extensively using MS Access to pull data from various databases and integrate the data.
Writing HiveQL as per requirements, processing data in the Spark engine, and storing it in Hive tables (see the Spark SQL sketch after this list).
Responsible for importing data from PostgreSQL to HDFS and Hive using the Sqoop tool.
Experienced in migrating HiveQL into Impala to minimize query response time.
Implemented Avro and Parquet data formats for Apache Hive computations to handle custom business requirements.
Responsible for performing extensive data validation using Hive.
Created Sqoop jobs and Hive scripts for data ingestion from relational databases to compare with historical data.
Implemented Spark using Scala and Spark SQL for faster testing and processing of data.
Support and maintain 11i EBS applications and databases.
Upgrade Oracle databases from version 11.2.0.3 to 11.2.0.4.
Implement Oracle Fusion environment databases for Financials.
Install and maintain the Banner ERP application on Oracle databases.
Work closely with Banner developers to resolve production issues.
Implemented Oracle BI, Projects, and other cloud environments.
Implement Oracle PaaS Java Cloud Services.
Migrating databases to AWS Cloud Environment.
Work on Finance administration and support activities.
Used Ant and Puppet/Chef scripts with Ivy to build the application and deploy it.
Designed, configured, and managed public/private cloud infrastructure utilizing Amazon Web Services (AWS), including EC2, S3, CloudFront, Elastic File System, RDS, VPC, Direct Connect, Route 53, CloudWatch, CloudTrail, CloudFormation, and IAM, which allowed automated operations.
Worked closely with the Finance team and resolved their issues.
Worked closely with the HCM team and resolved issues.
Worked on the Oracle Fusion audit system and provided details to auditors.
Used AWS CloudFormation templates to deploy other AWS services.
Set up build and deployment automation for Terraform scripts using Jenkins.
Automated cloud deployments using Chef, Python (boto and Fabric), and AWS CloudFormation templates.
Capable of doing Ansible setup, managing the hosts file, using a YAML linter, and authoring various playbooks and custom modules with Ansible.
Retrofitted version 12.1.3 custom code for the 12.2.6 version upgrade, making the database editioning-compatible.
Created AWS CloudFormation templates to create custom-sized VPCs, subnets, EC2 instances, ELBs, and security groups.
Resolved integrated services issues and provided solutions on a daily basis.
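
A minimal sketch of the cross-database ETL validation above, using row counts as the check; connection details and table names are hypothetical, and a real script would also compare checksums or sampled rows:

import cx_Oracle   # Oracle source
import psycopg2    # Redshift target (Redshift speaks the Postgres wire protocol)

def oracle_count(user, password, dsn, table):
    # Count rows in the Oracle source table.
    with cx_Oracle.connect(user, password, dsn) as conn:
        cur = conn.cursor()
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]

def redshift_count(conn_str, table):
    # Count rows in the Redshift target table.
    with psycopg2.connect(conn_str) as conn:
        with conn.cursor() as cur:
            cur.execute(f"SELECT COUNT(*) FROM {table}")
            return cur.fetchone()[0]

src = oracle_count("etl_user", "secret", "dbhost/ORCL", "SALES.ORDERS")
tgt = redshift_count("host=rs-cluster dbname=dw user=etl password=secret", "dw.orders")
print(f"orders: source={src} target={tgt} -> {'PASS' if src == tgt else 'FAIL'}")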
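
The HiveQL-on-Spark bullet could look like the following sketch (database, table, and column names are hypothetical):

from pyspark.sql import SparkSession

# Hive-enabled Spark session.
spark = (SparkSession.builder
         .appName("hiveql-on-spark")
         .enableHiveSupport()
         .getOrCreate())

# Run HiveQL in the Spark engine and persist the result to a Hive table.
daily = spark.sql("""
    SELECT member_id, to_date(event_ts) AS event_day, COUNT(*) AS events
    FROM raw.member_events
    GROUP BY member_id, to_date(event_ts)
""")
daily.write.mode("overwrite").saveAsTable("analytics.member_daily_events")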

Environment: Erwin 9.8, Big Data 3.0, Hadoop 3.0, Oracle 12c, PL/SQL, Scala, Spark SQL, PySpark, Python, Kafka 1.1, SAS, Azure SQL, MDM, Oozie 4.3, SSIS, T-SQL, ETL, HDFS, Cosmos, Pig 0.17, Sqoop 1.4.


Akamai Feb 2017 - Jan 2018
Sr. Oracle DBA/Cloud DBA

Akamai is the global leader in Content Delivery Network (CDN) services, making the Internet fast, reliable and secure for its customers. The company's advanced web performance, mobile performance, cloud security and media delivery solutions are revolutionizing how businesses optimize consumer, enterprise and entertainment experiences for any device, anywhere.

Responsibilities:
Upgraded databases from 11.2.0.4 to 12c.
Applied Exadata patches and quarterly PSU patches on all Oracle databases running on 4-node Exadata RAC.
Created new RAC databases and implemented Data Guard for the newly created databases in a 4-node RAC environment.
Installed 12c Real Application Cluster and involved in migration of standalone databases to RAC Cluster.
Supporting Oracle/SQL server databases on AWS cloud environment.
Worked on GoldenGate installs and resolved issues when GoldenGate services went down or replication failed.
Monitored resources and applications using AWS CloudWatch, including creating alarms to monitor metrics such as EBS, EC2, ELB, RDS, S3, and SNS, and configured notifications for the alarms generated based on defined events (see the alarm sketch after this list).
Spearheaded migration from Puppet environment to Docker-based service architecture.
Converted existing AWS infrastructure to a serverless architecture deployed via CloudFormation.
Worked on daily performance issues and provided appropriate solutions to users to resolve them.
Designed and implemented a Cassandra NoSQL based database.
Implemented best practices for migrating databases to Oracle Exadata X3-2.
Used AWS CloudFormation templates to deploy other AWS services.
Provisioned highly available EC2 instances using Terraform and CloudFormation, and wrote new plugins to support new functionality in Terraform.
Used Ansible and Terraform for creating subnets, security groups, route tables, and ACLs for VPC creation.
Worked on ASM grid installation and maintained disk groups and storage allocation for all databases.
Maintained production and standby databases and performed maintenance activities on all Exadata machines.
Used GoldenGate to move large volumes of transactional data with minimal impact to online systems.
Implemented backup and recovery procedures (cold/hot backups, RMAN incremental backups, and Import/Export).
Installed APEX 4.5 services on the database and configured the APEX services successfully for application access.
Installed WebLogic services and database configuration to provide application setup.
Worked on EBS performance issues and provided solutions to users to resolve them.
Worked on daily tickets and the on-call schedule to resolve production-related activities.
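
A minimal boto3 sketch of the CloudWatch alarm setup described above; the instance ID and SNS topic ARN are hypothetical placeholders:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average EC2 CPU stays above 80% for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:dba-alerts"],
)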


Harvard Pilgrim Health Care March 2014 - Feb 2017
Sr. Oracle DBA/Cloud DBA

Harvard Pilgrim is a not-for-profit health services company serving members throughout Connecticut, Maine, Massachusetts, and New Hampshire. Its mission is to improve the quality and value of health care for the people and communities it serves.

Responsibilities:
Upgraded databases from 11.2.0.3 to 11.2.0.4 to 12c.
Performed Oracle 11g/12c RMAN refreshes of different environments, such as development and testing, from the production environment.
Installed, configured, and maintained Oracle 10g and 11gR2 on Linux, and upgraded from Oracle 10g to 11g and 12c.
Good experience in patching and upgrading from 11.2.0.x to 12.1.0.x.
Responsible for the upgrade and migration of data from 10g RAC to 11g RAC.
Used configuration management tools Puppet and Ansible.
Worked on setting up/installation and configuration of release 11g R2 RAC with ASM and Raw Devices and 11g CRS on IBM AIX 6.1.
Used AWS CloudFormation templates to deploy other AWS services.
Created Ansible playbooks to install and setup Artifactory.
Documented all Oracle 12c installation/upgrade work and delivered knowledge transfer (KT) to all team peers.
Installed and configured Oracle 12c GoldenGate on an Oracle 12c database.
Strong script-writing skills in shell and Perl; corrected SQL/PLSQL queries and guided users with understandable methods.
Merged user traffic to the replicated database and kept the physical standby for reporting in the GoldenGate environment.
Automate provisioning and repetitive tasks using Terraform and Python, Docker container, Service Orchestration.
Cloning of production Databases to the test environment using Recovery Manager (RMAN).
Monitored sync lag between the primary and read replicas of AWS RDS MySQL using AWS CloudWatch metrics such as replica lag, created alarms triggered by these events, and sent notifications using AWS SNS (see the replica-lag sketch after this list).
Export and Import of User, Tables and Database using exp/imp and Data Pump.
Maintained and monitored patching on Oracle Exadata X5-2 machines and environments.
Provisioned Fusion and EBS databases on the Exadata RAC configuration.
Applied quarterly CPU patches on Exadata databases.
Performed Non-RAC to RAC and RAC to Non-RAC cloning.
Performance tuning of Oracle databases, applications, and queries.
Conducted pre-patch analysis for different types of patches as requested by the customer.
Configured Workflow Notification Mailer for inbound and outbound processing.
Maintenance and support of Development, Test, and Production environments.
Converted a single-instance e-Business database to Oracle 11g RAC.
Worked with technical, functional teams and Oracle Support to troubleshoot and resolve issues.
Working on System Administrator Activities.
Responsible for supporting multiple instances, activities include installation, maintenance and upgrade of Oracle databases.
Supported cross-product support across database/middle-tier issues with the development and release of Application products.
Installed, configured, and troubleshot all components of multiple Oracle databases.
Perform day-to-day database administration and ensure integrity, security and availability of database environments.
Create, maintain and provide support to Oracle Cluster databases.
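
A minimal boto3 sketch of the replica-lag monitoring described above; the instance identifier, topic ARN, and 60-second threshold are hypothetical:

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")
sns = boto3.client("sns")

# Fetch the last 10 minutes of ReplicaLag for a read replica.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="ReplicaLag",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mysql-replica-1"}],
    StartTime=datetime.utcnow() - timedelta(minutes=10),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)

# Notify the DBA topic if average lag exceeds 60 seconds.
for point in stats["Datapoints"]:
    if point["Average"] > 60:
        sns.publish(
            TopicArn="arn:aws:sns:us-east-1:123456789012:dba-alerts",
            Subject="RDS replica lag alert",
            Message=f"mysql-replica-1 lag {point['Average']:.0f}s at {point['Timestamp']}",
        )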



Oracle Corp, Redwood Shores, CA April 2011 - March 2014
Sr. Oracle DBA
Project: EFOPS (EBS Fusion Environment Operations)

EFOPS is a fast-growing global team responsible for supporting both Fusion and EBS applications development around the world. The team currently maintains thousands of Fusion and EBS development environments across the globe. It is also responsible for hosting, managing, supporting, and administering all development, unit testing, system testing, maintenance, quality analysis, and certification testing environments, as well as for provisioning Oracle EBS and Oracle Fusion environments.

Environment:
Red Hat Linux 5, OEL 5.4, Windows 2003, Oracle 11gR1, Oracle Applications EBS 11i/R12.1.2/R12.2, 11gR1/11gR2 Grid Control RAC, 12c, TOAD, Developer 6i.

Responsibilities:
Provided 24x7 production support for several databases.
Working on System Administrator activities.
Installation of Oracle databases.
Supported cross-product support across database/middle-tier issues with the development and release of Application products.
Installed, configured, and troubleshot all components of multiple Oracle databases.
Installed, upgraded, and configured Oracle Enterprise databases from 11.1 to 11.2 and configured 11g Data Guard for DR with two physical standby databases for upgrades, migrations, and patch set application.
Creation and maintenance of Active Data Guard in 11g.
Performed switchover and failover between primary and standby whenever needed.
Set up Data Guard Broker to enable Fast-Start Failover.
Worked on Fusion applications and Oracle Spatial with SaaS pod environments.
Worked on Oracle 12c databases for Fusion applications.
Installed OEM 12c and configured it for seven production Windows databases.
Configured the RMAN backups for several Production databases with one catalog database.
Implemented best practices for migrating E-Business Suite and regular databases to Oracle Exadata.
Create, maintain and provide support to Oracle Cluster databases.
Managed databases on Exadata and GoldenGate.
Utilized Oracle Streams and GoldenGate for data replication and synchronization.
Configured Extract/Replicat with GoldenGate for an active-passive live setup.
Used import/export utilities for cloning/migration of small databases, and Data Pump import/export to move data between 10g and 10g/11g environments.
Worked on data loading and extraction using Export/Import, Data Pump, SQL*Loader, and external tables.
Worked on high-availability Oracle GoldenGate (OGG) systems in Oracle 11g and 11gR2 RAC environments (front end: Linux 5; back end: Oracle Exadata).
Worked on setup and configuration of OEM 11g/12c.
Experience in registering services through OEM 11g/12c Grid Control.
Automated the Zip DB (compressed Oracle DB) master setup, reducing setup time by 70%.
Performed regular database activities, backup monitoring, and issue resolution through EMS (Environment Management System tool).
Performed database and SQL performance tuning using utilities such as TKPROF, EXPLAIN PLAN, AWR, ADDM, and ASH, and tuned the SGA, disk I/O distribution, and the sizing of tables and indexes (see the tuning sketch at the end of this section).
Used hints for SQL tuning and optimization.
Highly experienced in Oracle 10g/11g Automatic Storage Management (ASM) to fulfill storage needs.
Experienced in ASM administration.
Conduct training and mentor new hires on the core environment management platform.
Upgraded Database to 11g (11.2.0.4) RAC in both TEST and PRODUCTION environment.
Performed non-RAC to RAC and RAC to non-RAC cloning.
Performance tuning of Oracle databases, applications, and queries.
Conducted pre-patch analysis for different types of patches as requested by the customer.
Configured Workflow Notification Mailer for inbound and outbound processing.
Maintenance and support of Development, Test, and Production environments.
Converted a single-instance e-Business database to Oracle 11g RAC.
Oracle Data Guard Implementation for High Availability and remote Disaster Recovery (DR) site.
Development of automated RMAN scripts for both backups and cloning of Oracle Databases.
Performance tuning of Oracle Database, applications and queries.
Worked in a NUMA environment; NUMA is fully supported by Linux and Windows, and Oracle can better exploit high-end NUMA hardware in SMP servers.
Performed cloning of databases with RMAN and other cloning options to create in-house instances for internal research and development.
Worked on Oracle Data Masking and Subsetting to help database customers improve security, accelerate compliance, and reduce IT costs by sanitizing copies of production data for testing, development, and other activities and by easily discarding unnecessary data.
Performance Monitoring, Space Monitoring / Capacity planning, concurrent manager tuning.
Conducted pre-patch analysis for different types of patches as requested by the customer.
Responded to different types of alerts and automated responses using Oracle Alerts.
Worked with technical, functional teams and Oracle Support to troubleshoot and resolve issues.
Performing pre- and post-health checks of Oracle databases.
Researched and documented available options to create a DB link between Oracle and SQL Server development databases to rewrite the existing interface.
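
As one concrete illustration of the SQL tuning workflow in this section, a cx_Oracle sketch that lists the most expensive statements from v$sql, a common starting point before drilling into AWR/ADDM; the credentials are hypothetical, and the FETCH FIRST clause assumes a 12c+ database:

import cx_Oracle

# Hypothetical DBA connection.
conn = cx_Oracle.connect("system", "secret", "dbhost/ORCL")
cur = conn.cursor()

# Top 5 statements by elapsed seconds per execution - tuning candidates.
cur.execute("""
    SELECT sql_id,
           executions,
           ROUND(elapsed_time / GREATEST(executions, 1) / 1e6, 2) AS sec_per_exec,
           SUBSTR(sql_text, 1, 60) AS sql_text
    FROM   v$sql
    WHERE  executions > 0
    ORDER  BY elapsed_time / GREATEST(executions, 1) DESC
    FETCH FIRST 5 ROWS ONLY
""")
for sql_id, execs, sec, text in cur:
    print(f"{sql_id} execs={execs} sec/exec={sec} {text}")
conn.close()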


Genex Technologies, Hyderabad, India June 2008 - April 2011
Oracle DBA

Environment:
Unix/Linux, HP-UX and AIX, Oracle 10g and 9i, Oracle RAC, Oracle Streams, TOAD, Enterprise Manager, EXPLAIN PLAN, TKPROF, SQL Trace, Query Analyzer, Oracle Fail Safe

Responsibilities:

Installation, Implementation and maintenance of Oracle databases.
Created users, roles, and schemas with suitable permissions.
Created and managed database objects such as tables, indexes, and views.
Created and managed database storage structures, including index tablespaces, rollback tablespaces, and rollback segments.
Cloning using Rapid clone, Autoconfig and adclone utility.
Pre-patch or Impact analysis for the patches and document preparation with pre and post patch installation steps.
Experience in writing shell scripts and SQL scripts and implementing auto-manageable scripts.
Planned storage space requirements and placed files for optimum I/O performance (see the space-planning sketch at the end of this section).
Tuning the databases for optimal performance.
Data migration using Data Pump export/import to bring historical data from the application in order to analyze database growth over a period.
Wrote custom PL/SQL packages for DBA related jobs such as purging custom tables, loading data into custom tables etc.
Successfully upgraded Oracle Applications Database from 10.2.0.2 to 10.2.0.4
Worked closely with technical, functional teams and Oracle Support to troubleshoot and resolve issues during upgrade.
Performed maintenance tasks like Purging Concurrent manager log files, Gather schema statistics, Purging Workflow Obsolete Runtime Data.
24*7 on-call production support handling weekday, weekend, nighttime, and holiday outages with focused effort to minimize downtime.
System Administration responsibilities include application security, scheduling work shifts, registering new concurrent programs / request Sets.
Researched and resolved JInitiator performance issues and suggested migration to Sun JDK.
Worked on Data Masking and Subsetting performed on a cloned copy of the original data, eliminating any overhead on production systems.
Upgraded JInitiator and other technology stack components in development environment.
Performance Monitoring, Space Monitoring / Capacity planning, concurrent manager tuning.
Configured Workflow Notification Mailer for inbound and outbound processing.
Conducted pre-patch analysis for different types of patches as requested by the customer.
Responded to different types of alerts and automated responses using Oracle Alerts.
Applied Oracle Projects Maintenance pack to Development environments.
Worked with technical, functional teams and Oracle Support to troubleshoot and resolve issues.
Researched and documented available options to create a DB Link between oracle and SQL Development databases to rewrite the existing interface.
Created and implemented Oracle database backup and recovery strategies using RMAN.
Performed Database and SQL tuning by using various Tools like STATSPACK, TKPROF, EXPLAIN PLAN, optimizer hints and SQL tuning advisor.
Performed daily, weekly, and monthly database backups.
User management and cloning several instances for UAT and testing and development purposes from the Production database.
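
A small cx_Oracle sketch of the space-planning checks in this section, reporting allocated versus free space per tablespace; connection details are hypothetical:

import cx_Oracle

# Hypothetical DBA connection for capacity checks.
conn = cx_Oracle.connect("system", "secret", "dbhost/ORCL")
cur = conn.cursor()

# Allocated vs. free megabytes per tablespace, smallest free space first.
cur.execute("""
    SELECT d.tablespace_name,
           ROUND(SUM(d.bytes) / 1024 / 1024)         AS allocated_mb,
           ROUND(NVL(f.free_bytes, 0) / 1024 / 1024) AS free_mb
    FROM   dba_data_files d
    LEFT JOIN (SELECT tablespace_name, SUM(bytes) AS free_bytes
               FROM dba_free_space GROUP BY tablespace_name) f
           ON f.tablespace_name = d.tablespace_name
    GROUP  BY d.tablespace_name, f.free_bytes
    ORDER  BY free_mb
""")
for name, allocated_mb, free_mb in cur:
    print(f"{name:<20} allocated={allocated_mb}MB free={free_mb}MB")
conn.close()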

EDUCATION DETAILS:

Completed Bachelor of Engineering in Computer Science from Anna University, Chennai, India.

Trainings Attended:

Attended Oracle WebLogic Server training with Oracle University.
Attended Oracle Database 11g training on RAC and Grid Administration with Oracle University.
Attended Oracle 11g database and Data Guard concepts training with Oracle University.
Attended GoldenGate 12c training with Oracle University.
Attended big database trainings from Akamai.