Jan 31, 2010

"Oracle has finalized the Sun transaction and the deal has closed."


The title of this blog post comes from Oracle's official page "Overview and Frequently Asked Questions for the Developer Community".
So what are the important answers to these questions?

• For the near future, all these sites [Sun Developer Network, java.sun.com, and BigAdmin] will remain in their current form.
• Java.net is an important part of the community, and Oracle will continue to invest in it—as well as look for new and better ways to support its membership.
• NetBeans.org will continue to be available at the usual URL—no changes.
• We will also communicate important admin/Solaris-related news through BigAdmin's existing newsletter.
But what has changed now that the deal is closed?
If you try www.sun.com you get redirected to www.oracle.com, and you can find the hardware here:

And after some clicks you will see something like this:

If you want to get to the Sun hardware more quickly, you can use http://catalogs.sun.com.

Jan 23, 2010

Oracle's SQL Developer 2.1...

On December 15th, Oracle released a new version of its database tool SQL Developer:
Version 2.1.0.63.73

They jumped from version 1.5.5 to 2.1.0, so there should be some substantial new features built in. The release notes tell us:
  • Oracle SQL Developer Unit Testing provides a unit testing framework that allows you to build a set of sequential steps to create test cases for testing your PL/SQL code. These tests can optionally be collected into test suites, which can then be run and rerun to verify required functionality after any changes to your PL/SQL code. Command line access for executing, exporting, and importing suites or tests is provided for integration with your build and version control environment.
  • SQL Developer 2.1 incorporates a Data Modeler viewer, integrated into SQL Developer. The Data Modeler Viewer also supports visualizing tables, views and object types on read-only Data Modeler diagrams. The SQL Developer Data Modeler Viewer extension is a free extension to SQL Developer. For an updatable data model, download and review the Oracle SQL Developer Data Modeler, a standalone product that supports logical, relational and conceptual modeling. The tool supports forward and reverse engineering and import and export from various sources. The Data Modeler supports an additional standalone model viewer, which allows users to open models built in the full Data Modeler.
  • The SQL Worksheet has been redesigned for SQL Developer 2.1 to support concurrent task processing for long-running operations. Updates to the worksheet include the support of multiple data grids off the F9 (Run Statement) command and dockable OWA, DBMS Output and SQL History windows.
A detailed explanation is given here.
Let's focus on the second point: the Data Modeler.

I tried to work with that tool, but without documentation or a tutorial you are really lost. The official documentation does not explain anything useful about the Data Modeler...
After searching for a while I found some nice links to start with:
After working through at least the first tutorial you will be able to maintain your ERD with SQL Developer 2.1...

Jan 16, 2010

Backup on Linux: storeBackup / storeBackupRecover

Did you get a good start into 2010? What about New Year's resolutions?
Here is one thing you can add to your list:
• BACKUP / RESTORE for your private data
  (your company should already be doing backups ;-)
I am using storeBackup, which has some really nice features:
• The first measure to decrease the necessary hard drive storage space is the compression of data - if that makes sense. storeBackup allows the use of any compression algorithm as an external program; the default is bzip2.
• Within storeBackup, a hard link is used for referencing: files that already exist in a previous backup are simply hard-linked again, so each file is present in each backup although it exists physically on the hard drive only once. Copying and renaming of files or directories takes only the storage space of the hard links - nearly nothing. (See the small demo after this list.)
• Unlike with traditional backups, there is no need to consider whether an incremental backup depends on previous backups. The options permit the deletion or retention of backups on specific workdays, or of the first or last existing backup of the week, month or year.
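To make the hard-link trick more concrete, here is a tiny shell demo (the file names are just examples, nothing storeBackup-specific): two directory entries can point to the same inode, so the data is stored only once.

    # create a file and give it a second name via a hard link (example names)
    echo "some data" > file_a
    ln file_a file_b
    ls -li file_a file_b    # both names show the same inode number and a link count of 2
    rm file_a               # the data is still there, reachable via file_b

storeBackup uses exactly this mechanism across backup generations: an unchanged file in a new backup is just another hard link to the copy that an earlier run already stored.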
So what do you have to do?
    storeBackup -s /home/schroff -t /home/backup/truecrypt/schroff --progressReport 1000
and everything inside /home/schroff is backed up to /home/backup/truecrypt/schroff (if you want to exclude some directories, use -e .wine for example). Here is the output of one such run:

    STATISTIC 2009.10.18 07:56:11 20965 [sec] | user| system
    STATISTIC 2009.10.18 07:56:11 20965 -------+----------+----------
    STATISTIC 2009.10.18 07:56:11 20965 process| 31.84| 59.91
    STATISTIC 2009.10.18 07:56:11 20965 childs | 16.49| 0.84
    STATISTIC 2009.10.18 07:56:11 20965 -------+----------+----------
    STATISTIC 2009.10.18 07:56:11 20965 sum | 48.33| 60.75 => 109.08 (1m49s)
    STATISTIC 2009.10.18 07:56:11 20965 directories = 8923
    STATISTIC 2009.10.18 07:56:11 20965 files = 53726
    STATISTIC 2009.10.18 07:56:11 20965 symbolic links = 417
    STATISTIC 2009.10.18 07:56:11 20965 named pipes = 0
    STATISTIC 2009.10.18 07:56:11 20965 new internal linked files = 0
    STATISTIC 2009.10.18 07:56:11 20965 old linked files = 16
    STATISTIC 2009.10.18 07:56:11 20965 unchanged files = 53260
    STATISTIC 2009.10.18 07:56:11 20965 copied files = 10
    STATISTIC 2009.10.18 07:56:11 20965 compressed files = 23
    STATISTIC 2009.10.18 07:56:11 20965 excluded files because pattern = 0
    STATISTIC 2009.10.18 07:56:11 20965 included files because pattern = 0
    STATISTIC 2009.10.18 07:56:11 20965 max size of copy queue = 3
    STATISTIC 2009.10.18 07:56:11 20965 max size of compression queue = 4
    STATISTIC 2009.10.18 07:56:11 20965 calced md5 sums = 49
    STATISTIC 2009.10.18 07:56:11 20965 forks total = 46
    STATISTIC 2009.10.18 07:56:11 20965 forks md5 = 23
    STATISTIC 2009.10.18 07:56:11 20965 forks copy = 0
    STATISTIC 2009.10.18 07:56:11 20965 forks bzip2 = 23
    STATISTIC 2009.10.18 07:56:11 20965 sum of source = 4.6G (4943663018)
    STATISTIC 2009.10.18 07:56:11 20965 sum of target all = 3.8G (4131620957)
    STATISTIC 2009.10.18 07:56:11 20965 sum of target all = 83.57%
    STATISTIC 2009.10.18 07:56:11 20965 sum of target new = 10M (10961305)
    STATISTIC 2009.10.18 07:56:11 20965 sum of target new = 0.22%
    STATISTIC 2009.10.18 07:56:11 20965 sum of md5ed files = 14M (14712199)
    STATISTIC 2009.10.18 07:56:11 20965 sum of md5ed files = 0.30%
    STATISTIC 2009.10.18 07:56:11 20965 sum internal linked (copy) = 0.0 (0)
    STATISTIC 2009.10.18 07:56:11 20965 sum internal linked (compr) = 0.0 (0)
    STATISTIC 2009.10.18 07:56:11 20965 sum old linked (copy) = 405 (405)
    STATISTIC 2009.10.18 07:56:11 20965 sum old linked (compr) = 411k (420508)
    STATISTIC 2009.10.18 07:56:11 20965 sum unchanged (copy) = 2.4G (2545886135)
    STATISTIC 2009.10.18 07:56:11 20965 sum unchanged (compr) = 1.5G (1574352604)
    STATISTIC 2009.10.18 07:56:11 20965 sum new (copy) = 19k (19126)
    STATISTIC 2009.10.18 07:56:11 20965 sum new (compr) = 10M (10942179)
    STATISTIC 2009.10.18 07:56:11 20965 sum new (compr), orig size = 14M (14272160)
    STATISTIC 2009.10.18 07:56:11 20965 sum new / orig = 76.70%
    STATISTIC 2009.10.18 07:56:11 20965 size of md5CheckSum file = 1.7M (1810889)
    STATISTIC 2009.10.18 07:56:11 20965 size of temporary db files = 7.6M (7921664)
    STATISTIC 2009.10.18 07:56:11 20965 deleted old backups = 0
    STATISTIC 2009.10.18 07:56:11 20965 deleted directories = 0
    STATISTIC 2009.10.18 07:56:11 20965 deleted files = 0
    STATISTIC 2009.10.18 07:56:11 20965 (only) removed links = 0
    STATISTIC 2009.10.18 07:56:11 20965 freed space in old directories = 0.0 (0)
    STATISTIC 2009.10.18 07:56:11 20965 add. used space in files = 12M (12772194)
    STATISTIC 2009.10.18 07:56:11 20965 backup duration = 3m12s
    STATISTIC 2009.10.18 07:56:11 20965 over all files/sec (real time) = 279.82
    STATISTIC 2009.10.18 07:56:11 20965 over all files/sec (CPU time) = 492.54
    STATISTIC 2009.10.18 07:56:11 20965 CPU usage = 56.81%
    END 2009.10.18 07:56:11 20965 backing up directory </home/schroff> to </home/backup/2009.10.18_07.52.59>

The first execution ran for several hours because all files had to be compressed. A run a few days later finished within a few minutes and consumed only a little additional space...
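If you want such backups to run regularly, a small wrapper script plus a cron entry is enough. This is only a sketch: the script name and its location are hypothetical, the schedule is arbitrary, and the paths and the -e .wine exclusion are just the examples from above.

    #!/bin/sh
    # nightly-backup.sh - wrapper around the storeBackup call shown above
    storeBackup -s /home/schroff \
                -t /home/backup/truecrypt/schroff \
                -e .wine \
                --progressReport 1000

    # example crontab entry (crontab -e): run the script every night at 02:30
    30 2 * * * /usr/local/bin/nightly-backup.sh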
And recovery? Just use storeBackupRecover.
    This program recovers files saved with storeBackup.pl.
    usage:
    storeBackupRecover -r restore [-b root] -t targetDir [--flat]
    [-o] [--tmpdir] [--noHardLinks] [-p number] [-v] [-n]
    --restoreTree -r file or (part of) the tree to restore
    when restoring a file, the file name in the backup has
    to be used (eg. with compression suffix)
    --backupRoot -b root of storeBackup tree, normally not needed
    --targetDir -t directory for unpacking
    --flat do not create subdirectories
    --overwrite -o overwrite existing files
    --tmpdir -T directory for temporary file, default is </tmp>
    --noHardLinks do not reconstruct hard links in restore tree
    --noRestoreParallel -p max no of paralell programs to unpack, default is 12
    --verbose -v print verbose messages
    --noRestored -n print number of restored dirs, hardlinks, symlinks, files
    Copyright (c) 2002-2004 by Heinz-Josef Claes
    Published under the GNU General Public License
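Based on this usage text, restoring a single file could look like the following sketch. The backup directory /home/backup/2009.10.18_07.52.59 is the one from the END line of the statistics above; the file Documents/notes.txt.bz2 inside it is purely hypothetical, and the exact layout below the backup directory may differ on your system. Remember that compressed files have to be addressed with their compression suffix.

    # restore one file from the backup into /tmp/restore (example paths)
    storeBackupRecover -r /home/backup/2009.10.18_07.52.59/Documents/notes.txt.bz2 \
                       -t /tmp/restore

    # or restore a whole subtree
    storeBackupRecover -r /home/backup/2009.10.18_07.52.59/Documents \
                       -t /tmp/restore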



Jan 9, 2010

WLAN problems: ping: sendmsg: No buffer space available ???

Last week I installed Linux on a new laptop. Everything went fine, but after the first hibernate cycle downloads just stopped. Restarting the wireless interface cured the problem for some minutes, and then the connectivity went down again...
I started to ping the router and saw only round-trip times larger than 50ms, where I expected <1ms. Each time the connection went down, ping failed with
    ping: sendmsg: No buffer space available
After some googling around I found the solution:
Install ndiswrapper including ndiswrapper-utils.
After doing a
    modprobe ndiswrapper
the ping round-trip times were <1ms and the connectivity never went down again...
To ensure that ndiswrapper is loaded during bootup, do
    ndiswrapper -m
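For completeness, the whole ndiswrapper setup boils down to a few commands (run as root). The .inf path is only a placeholder for the Windows driver that ships with your WLAN card:

    ndiswrapper -i /path/to/windows-driver/netwlan.inf   # install the Windows driver
    ndiswrapper -l                                        # check: driver installed, hardware present
    modprobe ndiswrapper                                  # load the kernel module
    ndiswrapper -m                                        # write the modprobe alias for loading at boot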