Tuesday, 21 May 2013
Exadata Smart Flash Cache - a note of understanding
Exadata Smart Flash Cache
Oracle Exadata is designed to deliver high performance through its smart features, and one of the most smartly carved-out of these is the intelligent flash cache. The flash cache is a hardware component configured in the Exadata storage cell server which accelerates read and write operations. There are four flash cards in each storage cell, and each flash card contains four flash disks; in total, a single storage cell server holds 16 flash disks.
In the Exadata X3 generation, the flash cache capacity has been quadrupled, with response speeds claimed to be up to 40 times faster and scan rates of 100GB/sec. In the X3, the Sun Flash F40 PCIe cards total up to 22.4TB of flash, compared to 5.4TB in the X2, and they are capable of scanning data 1.4 times faster than the X2. The flash belongs to the eMLC (enterprise-grade multi-level cell) category, which employs various techniques to improve flash health and endurance and, most importantly, to sustain more write cycles (20k to 30k). (A write cycle is one program/erase pass, in which a flash cell is erased to make room for new incoming data and rewritten.)
Flash Cache Hardware details
Exadata X2 version – [Sun F20 cards]
Capacity = 4 (flash cards per cell) * 4 (flash disks per card) * 24GB (capacity per flash disk) = 384GB per storage cell
Exadata X3 version – [Sun F40 cards]
Capacity = 4 (flash cards per cell) * 4 (flash disks per card) * 100GB (capacity per flash disk) = 1600GB per storage cell
The motivation – how does the flash cache work?
IO operations on a hard disk are mechanical: a block has to travel a specific path and sit at a specific location, which makes the disk operation to place a block significantly costlier. The flash cache enables a huge reduction in disk IO operations by replacing them with fast cache operations. In an OLTP system, the database is read-write intensive and data grows at an uneven rate. The flash cache is designed to relieve the IO bottlenecks and yield at least 20x benefits by speeding up IO operations. The cache operations are fully redundant and transparent to the end user, except for the statistics. The flash cache can also work in coordination with IORM to control the use of flash when multiple databases share the cells. This feature enables customers to reserve flash for critical databases and ensure transparent performance benefits.
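Because the cache is visible only through statistics, a simple way to observe it from the database side is to query the flash cache statistics; a minimal sketch (the statistic names below are the ones exposed on Exadata systems):
select name, value
from v$sysstat
where name in ('cell flash cache read hits', 'physical read total IO requests');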
The smartness of the flash cache comes from its peculiar ability to understand the type of each IO. It is because of this intelligence that it knows which IOs to cache and which ones to skip. The flash cache caches IOs pertaining to control files, file headers, data blocks and index blocks, while it skips IOs incurred by backups, mirroring, Data Pump operations, and large table scans.
Important: do not confuse the Exadata smart flash cache with the flash cache option of Oracle Database 11gR2 on Solaris or Linux. That "database flash cache" is an extension to the SGA which expands the buffer cache area on the database server. The Exadata smart flash cache, in contrast, resides on the storage server to cache frequently accessed data and speed up read (and, with X3, also write) operations by reducing disk IOs.
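For contrast, here is how the database-side flash cache is configured; a minimal sketch, assuming a hypothetical flash device path (these two initialization parameters belong to the database server feature, not to Exadata):
-- Database Smart Flash Cache on the DB server (not the Exadata feature)
alter system set db_flash_cache_file = '/dev/flash_dev1' scope=spfile; -- hypothetical device
alter system set db_flash_cache_size = 64G scope=spfile;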
The Exadata X3 series makes considerable software-level changes to the flash cache, speeding up not only read operations but write operations as well. It is capable of supporting 1 million 8K flash writes per second and 1.5 million 8K reads per second, which, by Oracle's figures, makes the new flash 20x more efficient than the X2 or V2 series flash. The cache is now capable of keeping not just the hot data but, in effect, all the data that comes in from the database. The credit goes to the new working policy adopted by the flash, known as Write Back Cache.
Making the flash persistent by partitioning flash disks into grid disks
In its caching role, the flash holds only copies of cacheable hot data, and those cached copies are not preserved across a power cycle or cell restart. Optionally, a portion of the flash cache can be partitioned and utilized as persistent logical flash disks. Thereafter, the flash disks can be used to store (not cache) the hot data permanently. Grid disks can be carved out of the flash-based cell disks and then assigned to an ASM disk group. The partitioning and assignment process is very similar to physical disk partitioning; a sketch follows.
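A minimal sketch of the carving process from CellCLI and ASM (the 300g cache size, the FLASH prefix and the FLASH_DG disk group name are illustrative assumptions; the flash cache is recreated with a smaller size to free flash space for the grid disks):
CellCLI> drop flashcache
CellCLI> create flashcache all size=300g
CellCLI> create griddisk all flashdisk prefix='FLASH'
Then, from an ASM instance:
SQL> create diskgroup flash_dg normal redundancy disk 'o/*/FLASH*';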
Although flash-based grid disks defeat the purpose of the cache, they can serve as reserve disks when write operations are so intensive that the existing disk configuration cannot keep up. Another point that discourages partitioning the flash is that the usable flash storage is halved to accommodate ASM mirroring.
Flash cache working policies – WTC and WBC
There are two working mechanisms for the flash cache: write through and write back. Exadata systems before the X3 release worked with the write-through policy only. It was during the announcement of the X3 systems at OOW 2012 that we learnt the flash cache would support the write-back caching mechanism too.
Write Through Cache (Read/Write)
The older mechanism is quite direct and straightforward. The data is written directly to the disk without any intervention by the flash. The acknowledgment is sent back to the database by CELLSRV via iDB. Thereafter, if the data block qualifies under the caching criteria, it is written to the flash as well.
While reading data blocks, CELLSRV maintains a hash lookup that maps data blocks to their target destination, i.e. flash or disk. On a cache hit, the requested data is served from flash to the database. On a cache miss, the data is read from the disk and once again validated against the caching criteria; if the block qualifies as "hot", it is retained in the flash cache.
What are the caching criteria? An IO carries two pieces of metadata: 1) the CELL_FLASH_CACHE parameter setting defined at the segment or partition level, and 2) the cache hint attached by the database. Based on the CELL_FLASH_CACHE value, the data block may be cached. DEFAULT means the smart flash cache has the authority to decide whether to cache it or not; a huge object with the DEFAULT setting will not be cached. KEEP means the smart flash cache must cache the data on priority. NONE means the data block need not be cached. KEEP and DEFAULT have different retention policies: the upper ceiling for KEEP-cached blocks is 80% of the total flash cache size, and unused KEEP-cached blocks are flushed from the cache if they fail the aging criteria. The database adds its caching hint based on the purpose of the IO; it can be CACHE, NOCACHE or EVICT. The first two are self-explanatory, while EVICT hints that the specific block has to be flushed out of the cache.
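At the segment level, the attribute is set through the storage clause; a minimal sketch (the table name is illustrative):
create table hot_lookup ( id number ) storage (cell_flash_cache keep);
alter table hot_lookup storage (cell_flash_cache none);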
Write Back Cache (Read/Write)
With the Exadata X3 announcement, the smart flash cache adopted the WBC mechanism to speed up read as well as write operations, with backward compatibility support. This means the WBC feature can also be enabled on earlier Exadata systems (X2 or V2) by upgrading the storage cell software and database versions: WBC is supported from cell storage software version 11.2.3.2 onwards and database version 11.2.0.3 BP9 onwards. By default, it is disabled.
The flash cache can directly service write operations from the database. During first-time inserts, a block written to the flash cache is marked as "dirty" to signify that it holds the latest copy of the block. If the database requests an update of a block which doesn't reside in flash, the block is pulled from disk into the cache, updated and marked as "dirty". When the database requests a block that is present in the cache, it is read directly from the cache, thus avoiding heavy disk IO operations. A block written to flash and frequently accessed can stay there for years; however, if the block is rarely accessed, only its primary copy is retained in the flash and the rest of the data is copied back to disk.
Steps to enable the flash cache in Write Back mode -
CellCLI> drop flashcache
CellCLI> alter cell shutdown services cellsrv
CellCLI> alter cell flashCacheMode = WriteBack
CellCLI> alter cell startup services cellsrv
CellCLI> create flashcache all
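Once the cache is recreated, the new mode can be verified from CellCLI:
CellCLI> list cell attributes flashcachemode
CellCLI> list flashcache detail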
The Write Back Cache mode can be reverted to the Write Through Cache mode by manually flushing all the dirty blocks back to the disk first:
CellCLI> alter flashcache all flush
CellCLI> drop flashcache
CellCLI> alter cell shutdown services cellsrv
CellCLI> alter cell flashCacheMode=Writethrough
CellCLI> alter cell startup services cellsrv
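Before dropping the flash cache in the revert procedure above, the flush can be confirmed as complete by checking that the cell metric FC_BY_DIRTY (megabytes of unflushed data in the flash cache) has dropped to zero; a minimal sketch:
CellCLI> list metriccurrent where name = 'FC_BY_DIRTY' attributes name, metricvalue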
As I finish this post, I realize there is scope for one more post detailing the Write Back Cache. I shall be back with more on how the Write Back Cache works, along with some hands-on examples.
References
http://www.oracle.com/technetwork/server-storage/engineered-systems/exadata/exadata-smart-flash-cache-366203.pdf
http://www.youtube.com/watch?v=6Sv70I9UMYo
http://flashdba.com/history-of-exadata/smart-flash-cache/
http://structureddata.org/2011/10/12/exadata-smart-flash-logging-explained/
http://www.infosysblogs.com/oracle/2011/07/exadata_smart_flash_cache.html
http://uhesse.com/2011/02/02/exadata-part-iv-flash-cache/
Saturday, 4 May 2013
Exadata Hybrid Columnar Compression
How does EHCC work? What is a Compression Unit?
EHCC is one of the exclusive smart features of Exadata, targeting storage savings and performance at the same time. EHCC can also be enabled on other Oracle storage systems such as Pillar Axiom and the ZFS storage servers. Traditionally, the rows within a block are placed sequentially in row format, one next to another; mixing columns of unlike data types within a block limits how well the data can be compressed. EHCC analyzes a set of rows and encapsulates them into a compression unit in which like columns are compressed together. Since Oracle designates a column vector for each column, compressing like columns holding like values ensures considerable savings in space. Column compression gives a much better compression ratio than row compression.
Don't run away with the idea that Exadata offers columnar storage through EHCC. It is still row-based database storage; the stress is on the word "hybrid" columnar. The rows are placed in a Compression Unit where like columns are compressed together efficiently. Kevin Closson explains the structure of a CU in one of his blog posts (http://kevinclosson.wordpress.com/2009/09/01/oracle-switches-to-columnar-store-technology-with-oracle-database-11g-release-2/) as: "A compression unit is a collection of data blocks. Multiple rows are stored in a compression unit. Columns are stored separately within the compression unit. Likeness amongst the column values within a compression unit yields the space savings. There are still rowids (that change when a row is updated by the way) and row locks. It is a hybrid."
Note that EHCC is effective only for direct path operations, i.e. those that bypass the buffer cache.
A table or partition segment on an Exadata system can contain a mix of compression units, OLTP-compressed blocks and uncompressed blocks. A CU is independent of the block size, but it is certainly larger than a single block, since it spans multiple blocks. Read performance benefits from the fact that a row can be retrieved in a single IO by picking up its specific CU instead of scanning the complete table. Hence, EHCC reduces both the storage space, through compression, and the disk IOs, by a considerable factor. A compression unit cannot be compressed further.
Compression Algorithms – The three compression algorithms used by EHCC are LZO, ZLIB, and BZ2. LZO is the lightest and fastest, giving the lowest compression ratio (used for query low); ZLIB offers a fair balance of compression ratio and CPU cost (used for query high and archive low); BZ2 delivers the highest compression ratio at the highest CPU cost (used for archive high).
CU Size – On average, a typical CU is 32k to 64k for warehouse compression, while for archival compression the CU size is between 32k and 256k. In warehouse compression, roughly 1MB of row data is analyzed to fill a single CU; in archival compression, roughly 3MB to 10MB of row data is analyzed to build a CU.
EHCC types – EHCC comes in two formats: warehouse compression and archival compression. Warehouse compression is aimed at data warehouse applications, and its compression ratio hovers between 6x and 10x. Archival compression suits historical data which has a low probability of updates and transactions.
EHCC DDLs – Here are a few scripts to demonstrate basic compression operations on tables
--Create new tables/partitions with different compression techniques--
create table t_comp_dwh_h ( a number ) compress for query high;
create table t_comp_dwh_l ( a number ) compress for query low;
create table t_comp_arch_h ( a number ) compress for archive high;
create table t_comp_arch_l ( a number ) compress for archive low;
--Query compression type for a table--
select compression,compress_for from user_tables where table_name = '[table name]';
--Enable EHCC on an existing table/partition (applies to newly loaded data only)--
alter table t_comp_dwh compress for query low;
--Rebuild a table/partition so that the existing data is compressed too--
alter table t_comp_dwh move compress for query low;
--Disable EHCC feature--
alter table t_comp_dwh nocompress;
--Specify multiple compression types in single table--
Create table t_comp_dwh_arch
(id number,
name varchar2(100),
yr number(4))
PARTITION BY RANGE (yr)
(PARTITION P1 VALUES LESS THAN (2001) compress for archive high,
PARTITION P2 VALUES LESS THAN (2002) compress for query high);
Feature support for CUs – A CU is fully compatible with indexes (B-tree and bitmap), materialized views, partitioning, and Data Guard. It fully supports DML, DDL, parallel queries, and parallel DML and DDL. Let us examine certain operations on a CU.
Select – EHCC with Smart Scan enables query offloading to the Exadata storage servers. All read operations are direct path reads, i.e. they bypass the buffer cache. If the database reads many columns of the table or performs frequent transactions, the benefits of EHCC are compromised. This is how the read operation proceeds -
A CU is buffered => Predicates processed => Predicate columns decompressed => Predicates evaluated => CUs rejected if no row satisfies the predicate => For satisfying rows, the projected columns are decompressed => A small CU is created with only the projected and predicate columns => Returned to the DB server.
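A hedged way to confirm that the offload actually happens is to sample the cell statistics around the query (the statistic names below are the ones exposed on Exadata; the values are cumulative):
select name, value
from v$sysstat
where name in ('cell physical IO interconnect bytes returned by smart scan', 'cell IO uncompressed bytes');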
Locking – When a row in a compression unit is locked, the whole compression unit is locked until the lock is released.
Inserts – Hybrid columnar compression kicks in only at load time, and only for direct path operations. The load can use any data warehouse load technique or a bulk load. For conventional or single-row inserts, data still lands in blocks which are either uncompressed or OLTP compressed. New CUs are only created during bulk inserts or when the table is moved into the columnar compressed state.
Updates – Updating a row in a CU causes the CU to be locked, and the row moves out of the CU into a less compressed state. This hinders the concurrency of the CU and negatively affects the compression. The effect can be observed with warehouse compression, but it is certainly more pronounced with archival compression. The ROWID of the updated row changes after the transaction.
Delete – Every row in a CU has an uncompressed delete bit which is set when the row is marked for deletion.
Compression Adviser – The DBMS_COMPRESSION package serves as the compression adviser. You can find the compression paradigm of a row by using the DBMS_COMPRESSION.GET_COMPRESSION_TYPE subprogram, which returns a number indicating the compression technique for the input ROWID. Possible return values are 1 (no compression), 2 (OLTP compression), 4 (EHCC query high), 8 (EHCC query low), 16 (EHCC archive high), and 32 (EHCC archive low). In addition, the GET_COMPRESSION_RATIO subprogram can be used to suggest a compression technique based on the estimated compression ratio for a segment.
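A minimal sketch against the t_comp_dwh_h table created above (the call follows the 11gR2 signature of GET_COMPRESSION_TYPE):
set serveroutput on
declare
  l_rid  rowid;
  l_type number;
begin
  select rowid into l_rid from t_comp_dwh_h where rownum = 1;
  l_type := dbms_compression.get_compression_type(
              ownname => user,
              tabname => 'T_COMP_DWH_H',
              row_id  => l_rid);
  dbms_output.put_line('Compression type code: ' || l_type);
end;
/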
Critical look
EHCC is one of the most discussed smart features of the Exadata database systems. It promises at least 10x storage benefits, and certain benchmarks have shown even better results. A famous citation I see in almost every session on EHCC goes: a 100TB database can be compressed to 10TB, saving 90TB of space on the storage, so nine more databases of 100TB each can be placed on the same storage; thus IT management can be relieved of storage purchases for at least 3-4 years, assuming the data grows by a factor of two. I'd say the claim looks pretty convincing from a marketing perspective but quite impractical on technical grounds. I would rather read it as: 1000TB of historical data can be accommodated on 100TB of storage.
A lot has been written and discussed on whether Oracle is on its way to embracing columnar storage techniques. I'd say no, because EHCC merely looks like a harmless application of the concept. The biggest hurdle for the EHCC feature is its own comfort zone, i.e. databases with few transactions and low concurrency; on a database that frequently updates and reads the data, the feature stands defeated.
References - Some of the best blog references on the topic over the web
http://dbmsmusings.blogspot.com/2010/01/exadatas-columnar-compression.html
http://www.oracle.com/technetwork/issue-archive/2010/10-jan/o10compression-082302.html
http://www.rittmanmead.com/2010/01/hybrid-columnar-compression-in-oracle-exadata-v2/
http://flashdba.com/history-of-exadata/66-2/
Conflicting views, comments, observations and feedback on this write-up are welcome.
Saturday, 23 March 2013
Book Review: OCA Oracle Database 11g: Database Administration I: A Real-World Certification Guide by Steve Ries
The book OCA Oracle Database 11g: Database Administration I: A Real-World Certification Guide has been published by Packt Publishing. The book was authored by Steve Ries, and I was its technical reviewer. It was a pleasure reviewing the book, as the style of presentation, coverage of topics and authenticity of content were pretty impressive. The book targets the OCA certification exam 1Z0-052, which is step 2 of completing the Associate level. For background, Steve had earlier authored the book on step 1, OCA Oracle Database 11g: SQL Fundamentals I: A Real World Certification Guide (1Z0-051), also with Packt Publishing.
One of the distinctive features of the book is that it doesn't mention the certification exam code on the cover. Instead, the cover carries a quite soothing tagline: "Learn how to become an Oracle-certified Database Administrator". Usually, one reaches for certification guides only when the exam is near, but readers will find this book a handy compilation of basics and practice at any time. The book covers all the certification objectives prescribed by Oracle for this exam, though it doesn't stick to the prescribed sequence; still, the structure of the book is perfectly fine to follow for the exam.
The book starts with a fair introduction to Oracle as an RDBMS solution. In the basics section, readers learn how to install the database software, create a database and find their way around the Oracle architecture. At the next level, the book covers core concepts like managing Oracle storage structures, the instance, security and concurrency. Later, it gives insights into network configuration and database performance. Core DBA concepts like backup and recovery are well explained and demonstrated.
Here is the outline of chapters in the book -
Chapter 1: Introducing the Oracle Relational Database System
Chapter 2: Installing the Oracle Database Software
Chapter 3: Creating the Oracle Database
Chapter 4: Examining the Oracle Architecture
Chapter 5: Managing Oracle Storage Structures
Chapter 6: Managing the Oracle Instance
Chapter 7: Managing Security
Chapter 8: Managing Concurrency
Chapter 9: Configuring an Oracle Network
Chapter 10: Managing Database Performance
Chapter 11: Understanding Backup and Recovery Concepts
Chapter 12: Performing Database Backups
Chapter 13: Performing Database Recovery
Chapter 14: Migrating Data
Steve has a unique style of writing and directing the content. He follows the perfect writing principle: keep it short and simple. He presents a situation, gives the reader space to think it over, and then gradually draws out the conclusion, which helps readers strengthen their concepts in the area. For beginners and even mid-level DBAs, the book is well worth a look, as it takes a deep dive into DBA basics without complicating the material or muddling the concepts. Overall, it is a highly commendable product with lots of learning, demos and illustrations. If you are looking for a book which is not only a certification guide but also a daily reference, this one is for you.
Place your orders from the link below
http://www.packtpub.com/oracle-database-11g-administration1-certification-guide/book
The book is also available at all major bookstores such as Amazon, Safari, and Barnes & Noble.
Tuesday, 12 February 2013
Oracle – Emerging Technology trends and Innovation
Introduction
Innovation is the need of the hour. Innovation walks through a storm of thoughts and lays down the platform for fresh efforts, stability and business. Years of innovative practice have elevated Oracle from a pure database software provider to a giant software enterprise. For the last three decades, Oracle has been undoubtedly associated with relational database software, not surprisingly. With the turn of time and technology, Teradata, Netezza, IBM and many more software firms emerged with combined offerings of hardware and software. Once again, an "innovative" kick and the urge to stay in the market brought Oracle into the field of engineered systems. Later, with the acquisitions of Sun Microsystems, Siebel, PeopleSoft and many more, Oracle solidified its footing among the companies providing bundled offerings of hardware with software.
In this paper, I shall take a walking tour of some of the recent innovations at Oracle. We shall see the emerging trends and offerings from Oracle that can help IT leaders differentiate Oracle from the others.
The Talks on the Walks
I can recall the slogans tagged under the "Oracle" logo, which reflect exactly what the company's heart beat for at the time. Until 2008 (the oldest I can remember), the tagline read "The Information Company". By that time, its RDBMS solution had reached the pinnacle of maturity, being perhaps the most trusted and proven relational database software in the industry. Meanwhile, the developments in engineered systems and other verticals like middleware, applications and BI were building up for a big roar. Hardware-based database machines running Oracle database software were on their way to provide cohesive benefits to the world. It was in 2010 that a new tagline was adopted to showcase these latest strengths: "Software. Hardware. Complete." Later in the same year, it was strategically reframed as "Hardware and Software, Engineered to Work Together" to highlight the engineering part.
In recent times, the terms "engineering" and "innovation" have moved in parallel to deliver quality and ensure customer satisfaction. Apart from high performance database machines, visible efforts were made to adopt cloud computing. Oracle had earlier replaced "Internet" technology with "Grid Computing" in order to multiply the resources available to a task. Following the huge success of grid computing, RAC systems became the base for modern database implementations as well as for database machines like Exadata and the Oracle Database Appliance. RAC systems promised Maximum Availability Architecture (MAA), load balancing, and scalability.
Ever since Web 2.0 companies started facing issues handling humongous amounts of unstructured data, there has been a lot of buzz around Big Data technology. Among many vendors, Oracle too took a responsible step to address these requirements by devising the Oracle Big Data Appliance.
That is the story of how Oracle uses innovation to renovate technology. And the facts back it up: new products and innovations have been credited with 40 to 50% of the annual revenues in a company. These driving changes help a company stay competitive, stay in the news and pave new ways for growth, development and recognition. A definite portion of investment is set aside solely to create and transform the business, apart from growing and running it.
Oracle Cloud: Consolidating the technology stack and reducing IT cost
Cloud computing has been one of the most hyped topics of the last few years, not surprisingly, given its benefits and business acceleration. What is cloud computing? The cloud computing model can be thought of as a workbench with multiple utilities mounted on it: end users can make use of any of these utilities at any time, from any location. The Software as a Service (SaaS) model delivers ample benefits at both ends. As a cloud service provider, you ensure effective utilization of the available resources through shared resource pools, consolidate standalone servers and offer a wide range of services to end users. As a cloud service user, you have full authority to use any utility as a service whenever you need it: technology at your doorstep. The pay-as-you-go licensing model gives you management flexibility and relief from the capital expenses that used to be incurred in requesting, procuring and maintaining database applications.
Cloud computing well deserves the accolades, as it provides a high quality of service at a much reduced cost. Company management would love to have access to a wide range of software on consolidated private-cloud systems, enabling them to offer better service to their customers and explore new dimensions of growth.
Oracle Engineered Systems: Consolidating and simplifying data centers
Oracle RDBMS software needs no introduction in the industry. While it was maturing in the market, Oracle planned its step into the enterprise storage and hardware business. The objective was to cut the cost and maintenance pressure while providing high performance, much more storage, high scalability, and maximum availability. The Oracle engineered systems family includes Exadata (database machine), Exalogic (elastic cloud), Exalytics (in-memory machine), Oracle Database Appliance, Oracle Big Data Appliance, and the SPARC SuperCluster T4-4. All of these engineered systems share common characteristics: easy to manage and upgrade, low TCO (total cost of ownership), fewer change management requests (CMRs), and, most importantly, single-vendor support.
It was at OOW (Oracle OpenWorld) 2008 that Larry Ellison first announced the Exadata database machine, which ran Oracle database software on HP hardware servers. With the acquisition of Sun Microsystems in 2010, Oracle moved to Sun's hardware servers for the Exadata machines. Two flavors were made available, X2-2 and X2-8, differing in the number of processors, storage capacity, and network configuration. At OOW 2012, Oracle announced the availability of the X3 database machines as the next-generation systems. As per Larry, “If you thought the old Exadatas were fast, you ain’t seen nothing yet”. That authoritative, commanding statement said enough to capture the confidence of the market, and thus the business.
Exadata is a fully integrated and optimized database machine which combines the compute, storage and networking tiers in a single piece of hardware. Advanced features like hybrid columnar compression, smart flash cache, intelligent storage and storage indexes are the credit points behind its extreme performance (almost 10x) in OLTP systems. Several customer studies and benchmarking results have revealed that the Exadata machine comfortably satisfies data warehousing, OLTP and database consolidation requirements with low cost overheads.
Oracle Database Appliance is a shorter version of Exadata in terms of objectives and positioning, but it focuses squarely on simplicity and affordability. Like Exadata, the Database Appliance employs RAC technology for high availability. As stated, simplicity is the key factor differentiating a quarter-rack Exadata from the Database Appliance: minimal skills are required to install, manage and troubleshoot an Oracle Database Appliance. Customers will be pleased with the standardized troubleshooting workflow under the Automatic Service Request (ASR) service, where an SR is raised automatically for specific hardware faults.
Exalogic is another fully integrated engineered system from Oracle, built for running middleware and applications. Being an engineered system, Exalogic provides incredible performance and scalability while lowering the TCO and maintenance cost. IT firms looking to deploy Java-based or other middleware applications on WebLogic Server can realize substantial performance gains with Exalogic. Exalogic assures low latency, high throughput and accelerated coordination with the database by a huge margin, making it a reliable offering.
Exalytics is a machine deployed in environments which require high performance analysis, planning and modeling. It combines the efficiencies of business intelligence and in-memory database technology to deliver a high-speed analytic experience to end users, “at the speed of thought”. It is often deployed alongside the Oracle Exadata database machine to enjoy the best of both worlds.
Big Data: The next big thing for enterprises
The face of information has changed tremendously over the years. This change is attributed to the steep increase in the volume of semi-structured and unstructured data coming from industries, social platforms and sensors, collectively falling under the “Big Data” category. Imagine the data flowing in from emails, Twitter, Facebook or blog aggregators in a single day. The real challenge is to capture the data, store it securely and analyze it for business implications. Large-scale companies in real-time business can be hurt by inaccurate and incomplete analysis of their consumers.
Let us see how Oracle sees it through. The Oracle Big Data platform is an integrated and engineered system that drives big data strategies through a comprehensive software stack comprising the Big Data Appliance, Big Data Connectors, Exadata and Exalytics. It follows a logical cycle of acquiring, organizing and analyzing the sprawl of big data to ensure predictable performance, high availability and tight security. The appliance employs Cloudera's Hadoop solution (HDFS, the Hadoop Distributed File System) for processing the distributed data, whereas Oracle NoSQL Database is responsible for real-time data processing, assuring low latency and high throughput. The data is then organized by the Oracle Big Data Connectors using map-reduce methods and decent parallelism; the data loading is done using either Oracle Loader for Hadoop (OLH), the Direct Connector for HDFS, or Oracle Data Integrator for Hadoop. The data warehouse is kept on an Oracle Exadata machine so as to lose nothing on performance at any moment. The Big Data Appliance and the other engineered systems are connected through a high-speed InfiniBand network. For advanced analysis, the system uses Exalytics and Oracle R for graphical and statistical overviews.
Big data strategies are already on the move as the next potential development in technology. Surely, there are many more challenges for Oracle to face in this area; it is important to iron out the weak spots and build on the strengths to stand firm. Big data customers will be pleased to know that Oracle offers a single point of support for the complete technology stack, including upgrades and troubleshooting.
Focus areas on target
Progressive efforts continue in the areas of data security, automation and testing, and maximum availability, keeping them in sync with ongoing developments. The 12c release is on the verge of market roll-out. Enterprise Manager 12c has been part of the discussion as the most accomplished integrated and integrable database management platform. Grid and RAC have earned the market's trust, while cloud is the talk of the town. In the coming years, Oracle is likely to make considerable moves in Oracle Fusion Middleware across the complete product stack. With the arrival of cloud in the market, it might cover the Applications Unlimited plan and Fusion Applications too.
Conclusion
Oracle has been able to drive innovation in the right direction, building a huge customer base across the globe and staying the cutting-edge leader among the technology differentiators. This paper discussed the salient prevailing trends as seen at Oracle over the last few years. Be it software or hardware, Oracle has been able to cultivate the best out of technology and read the tone of the market. The fact that holds throughout is that innovation played a vital role in nurturing these technologies from thoughts to business. It defines Oracle's journey from “The Information Company” to “Hardware and Software, Engineered to Work Together”.
Monday, 14 January 2013
Interview with Saurabh K. Gupta, Author of 'Oracle Advanced PL/SQL Developer Professional Guide'
Saurabh is the author of our recently published Oracle Advanced PL/SQL Developer Professional Guide, which helps readers master the advanced features of PL/SQL to design and optimize code using real-time demonstrations.
Find the complete interview at the link below
http://authors.packtpub.com/content/interview-saurabh-k-gupta-author-oracle-advanced-plsql-developer-professional-guide
Tuesday, 1 January 2013
SbhOracle welcomes 2013!!
Wish you all a great New Year 2013. I hope the year brings lots of happiness, joy, success and opportunities in all your endeavors. SbhOracle enters its third year. The last year saw some great heights, mixed responses from viewers and streamlined reach within the community.
The blog consistently clocked 500+ views every month, which is a pretty decent count. The book "Oracle Advanced PL/SQL Developer Professional Guide" received motivating, if mixed, reviews. My sincere thanks to Packt Publishing, who got me into this area and helped produce one of the first books for OCP (1Z0-146) exam preparation. As a newbie author, I took a naive first step yet managed to produce quality content. There were some reviews criticizing the grammatical side of the language, but that is how we learn and improve; surely, there is no growth without the quibbles. At the same time, the book was praised by some reputed names in the community and received motivating reviews for its planned, structured content, examples and purposeful presentation. I am happy that the book made a good buzz among the readers and justified its objectives pretty well. Apart from being listed in all major bookstores, the book was honored to find a place in Oracle Magazine (July/August 2012 issue, Book Beat section), the Oracle ACE newsletter, and the library catalogs of Stanford, GW and Wollongong universities.
Overall, it has been a great year of introspection and self-discovery. Surely, there are several milestones yet to be set and many more to be achieved; that's how life is. Thank you all for what I am and where I am. I will be back with more useful content in 2013. Till then, enjoy blogging, enjoy reading!! Bye for now.