I noticed an article posted to
Slashdot the other week that talked about
the approach Oracle (and BEA) have taken to multi-core processors.
What this boils down to is Oracle regarding a multi-core
processor as equivalent to two (or more) regular
processors, and charging license fees accordingly. What I
found particularly interesting, though, was where the discussion
turned to the benefits of portable code, which was put forward
as an argument on the basis that portable code would allow
customers to move off Oracle if licensing became too expensive.
This area (whether code should be database independent, or
whether it should use as many proprietary features as possible)
is an interest of mine, as I've worked with a number of customers
recently where this argument was put forward, and opinion is
generally divided as to whether it's a good or a bad thing.
The particular thread
starts off with the comment:
"Absolutely. But how many can easily switch?
For a long time I have had (occasionally heated) arguments
with SQL addicts who insist that almost everything about an
application should be coded in SQL and stored procedures.
Meanwhile I have been moving all my logic away from the
database engine, using APIs such as Java Data Objects, which
makes my code very rapidly portable between databases. Now I
am in a position to switch my code (and data) easily between
different database vendors if there is a licensing or price issue.
I strongly believe we should start to think of databases
simply as engines for storing and retrieving inter-related
objects and not as platforms for writing applications."
which is the essence of the database independence argument. I
was particularly interested in the comment about Java Data
Objects - this technology was put forward by a recent client
as a way of resolving the "lowest common denominator" problem
when writing database-independent code. Java Data Objects
provides a level of abstraction away from the particular
database platform, allowing you to write database calls using a
generalised API; the API itself then fires optimised code at
the particular database platform, using a kind of "expert
system" to write the most appropriate code for the database
that's being used - for example, using sequences and autonumber
columns where they're supported, and falling back to triggers
and sequence tables where they're not. Java Data Objects is a
standard rather than a product, and
Oracle Toplink is an example of an implementation of this standard.
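To make the "expert system" idea concrete, here's a minimal sketch of how a mapping layer might pick a key-generation strategy per database. This is my own illustration, not actual JDO or Toplink internals - the class name, vendor strings and SQL fragments are all assumptions for the example:

```java
// Illustrative sketch only: how an O/R mapping layer might choose the
// most appropriate key-generation SQL for the database in use. This is
// NOT the actual JDO or Toplink implementation, just the general idea.
public class KeyStrategy {

    /** Return a SQL fragment for fetching the next primary key value. */
    public static String nextKeySql(String database) {
        switch (database.toLowerCase()) {
            case "oracle":
                // Oracle supports sequences natively
                return "SELECT emp_seq.NEXTVAL FROM dual";
            case "postgresql":
                // PostgreSQL also has sequences, different syntax
                return "SELECT nextval('emp_seq')";
            case "sqlserver":
                // No sequence objects here: rely on an IDENTITY
                // column and read the generated key back
                return "SELECT @@IDENTITY";
            default:
                // Lowest common denominator: a sequence table,
                // incremented within the application's transaction
                return "UPDATE seq_table SET next_id = next_id + 1";
        }
    }
}
```

The point is that the application never sees any of this - it asks the API to persist an object, and the dialect-specific SQL is generated underneath.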
Another poster then
goes on to say:
"Whereas for my part I am absolutely sick of dealing
with software that does not perform well on ANY platform and
cannot be moved rapidly to a new technology. "We need to
deploy on PalmPilots? Too bad they don't support the neato
language where we put the business logic. I guess 3 years to
reimplement" - whereas if the business logic had been stored
in the database, reimplementation would be a few weeks' work.
Tom Kyte [oracle.com] is of course not a disinterested
observer, but his opinion based on 20 years of experience is
that he can re-implement an application in (say) DB2 faster
than you can move your "portable" business logic to a new
platform - and the result will be 2 systems each faster, more
scalable, and more secure than your portable system. Which is
pretty much my experience with every "platform independent"
package I have worked with.
Which doesn't even touch on the topic of data integrity..."
to which another poster replied, point by point:
"Whereas for my part I am absolutely sick of dealing with
software that does not perform well on ANY platform and cannot
be moved rapidly to a new technology.
Me too. That is why I use Java+JDO, and not DB-specific SQL.
Too bad they don't support the neato language where we put
the business logic.
Good point. Show me a platform that does not support Java. I
would rather have the logic there than in some neato DB
language that has to be ported, at great expense.
whereas if the business logic had been stored in the
database, reimplementation would be a few weeks' work.
A few weeks work? Have you actually worked on such a
re-implementation? This is nonsense. A moderate project can
take months, and a large scale project years, especially on a
live system. I know this from personal experience.
and the result will be 2 systems each faster, more
scalable, and more secure than your portable system.
This is simply a statement with no foundation.
There are no security, scalability or speed issues with the
system I use - JDO. It is designed to be secure and scalable,
to work at high performance on clustered systems and to
generate optimal SQL for each version of major databases.
Large corporations use it for this purpose.
Which doesn't even touch on the topic of data integrity...
Why should the matter of data integrity be relevant? Systems
like JDO and Hibernate and Toplink fully support all aspects
of transactions, clustering and cache management. Data
integrity is, of course, not an issue. If it were, these
products would not be so widely and successfully used.
Other posters also sang the praises of Java Data Objects:
"These days, developing portable code does not mean you
aren't using the database optimally. For example, I use a Java
Data Objects product that has detailed knowledge of certain
databases, including Oracle, MySQL, PostgreSQL, SQL Server. My
portable code is translated into very optimised SQL that makes
good use of the specific database (if there is a way to do a
particular action very fast in Oracle, the product will use it).
I am certainly not advocating anything as primitive as
sticking to a basic SQL dialect and a subset of database
features - that is a waste of the database"
and, in a
posting very relevant to my article last year about a client
who was reluctant to use Oracle sequences, as this would make
their code database-specific:
"As for id-generators: hibernate has about 10 built-in
methods, a.o. oracle sequences, but also a table where the
application may reserve id-s in (large) blocks thus removing
the potential bottleneck with that. (sequences have their
drawbacks too). A third way is to generate a unique 16-byte id
by mixing in ip address, JVM starting time and a counter
(however id's generated in the DB would have to use some other
scheme without overlap in this case)."
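The third scheme the poster mentions - a 16-byte id built from the host's IP address, the JVM start time and a counter - can be sketched in a few lines of plain Java. The exact field layout here (4 bytes of address, 8 bytes of start time, 4 bytes of counter) is my assumption; real implementations differ in detail:

```java
import java.lang.management.ManagementFactory;
import java.net.InetAddress;
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a 16-byte application-generated id: IP address + JVM start
// time + counter, as described in the quoted post. Field layout is an
// assumption for illustration, not any particular product's format.
public class PortableIdGenerator {
    private static final AtomicInteger counter = new AtomicInteger();

    public static byte[] nextId() {
        byte[] ip;
        try {
            ip = InetAddress.getLocalHost().getAddress();
        } catch (Exception e) {
            ip = new byte[] {127, 0, 0, 1};  // fall back to loopback
        }
        long jvmStart = ManagementFactory.getRuntimeMXBean().getStartTime();
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.put(ip, 0, 4);                      // 4 bytes: host address
        buf.putLong(jvmStart);                  // 8 bytes: JVM start time (ms)
        buf.putInt(counter.incrementAndGet());  // 4 bytes: in-JVM counter
        return buf.array();
    }
}
```

Because no database round-trip is involved, ids can be handed out in memory without the contention a shared sequence table would suffer - at the cost, as the poster notes, of having to keep any database-generated ids in a non-overlapping range.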
a point echoed by:
"As an example, the JDO implementation I use does take
advantage of Oracle sequences if the target is an Oracle
database. My application code doesn't know anything about that
though, so when the target database is DB2, no code change is required.
This is a good thing, because our customers dictate what
database our product runs on - not us."
Well, that was a bit of an eye-opener for someone who up
until now has been a pretty strong advocate of database-specific
code. So does Java Data Objects mean that you can write database-independent
code and still take advantage of all the
optimisations and proprietary features in the Oracle database?
Does it mean that we can move our business logic to the middle
tier and treat the database as just a persistence layer? Is Tom only
telling part of the story, or is it still the case that
even JDO-generated code needs to be tuned? Any
comments from anyone who's used JDO and successfully ported
applications between databases, while keeping performance pretty
optimal? I'd be interested to hear more.