
File system fragmentation

  1. File system fragmentation

    I tested this again after a couple of years, and the behavior doesn't
    seem to have changed: if a Berkeley DB table is written using TDS with a
    reasonably sized cache, data is written from the cache to the file
    system in what appears to be a random fashion. Apparently, a lot of
    holes are created, which are then filled. This degrades file system
    performance and makes hot backups somewhat difficult (because read
    performance is a fraction of what can actually be achieved).

    Is there still no way to preallocate the contents of B-tree files?

    (Without TDS, the problem disappears; it seems to be related to TDS or
    the cache size.)
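
    A possible workaround sketch (untested here, and not a Berkeley DB
    feature): on Linux, reserve blocks for the database file up front with
    fallocate(2) and FALLOC_FL_KEEP_SIZE, so that DB->open still sees an
    empty file but later page writes land in an already-allocated extent.
    The path, size, and helper name below are placeholders.

        /*
         * Sketch only: reserve blocks for the not-yet-initialized database
         * file without changing its length, so Berkeley DB still treats it
         * as empty while the filesystem can hand out contiguous blocks.
         * Requires a reasonably recent kernel and glibc.
         */
        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <unistd.h>

        static int
        reserve_db_file(const char *path, off_t size)
        {
            int fd, ret;

            if ((fd = open(path, O_RDWR | O_CREAT, 0644)) == -1)
                return (-1);
            ret = fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, size);
            (void)close(fd);
            return (ret);
        }

        /* E.g. reserve_db_file("test.db", (off_t)256 * 1024 * 1024)
         * before the first DB->open of "test.db". */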

  2. Re: File system fragmentation

    > (Without TDS, the problem disappears; it seems to be related to TDS or
    > the cache size.)


    I believe it's purely related to cache size, not to TDS. The issue is
    Berkeley DB's approximation to LRU in the cache. We don't maintain a
    pure LRU list because it's a concurrency bottleneck.

    You could try defining HAVE_FILESYSTEM_NOTZERO in db_config.h, and
    changing the __os_fs_notzero function to return 1. This won't change
    the order in which pages are flushed from the cache, but will ensure
    that the file grows without holes.
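
    Roughly, the change looks like this (a sketch; the source file that
    defines __os_fs_notzero differs between releases):

        /* In db_config.h: enable the zero-fill code path. */
        #define HAVE_FILESYSTEM_NOTZERO 1

        /*
         * In the OS-layer source that defines __os_fs_notzero: report that
         * the filesystem does not zero-fill new blocks, so Berkeley DB
         * writes the zeroed pages itself and the file is extended without
         * holes.
         */
        int
        __os_fs_notzero()
        {
            return (1);
        }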

    Please let us know if this does make a difference.

    Regards,
    Michael Cahill, Oracle.

  3. Re: File system fragmentation

    * Michael Cahill:

    >> (Without TDS, the problem disappears; it seems to be related to TDS or
    >> the cache size.)

    >
    > I believe it's purely related to cache size, not to TDS. The issue is
    > Berkeley DB's approximation to LRU in the cache. We don't maintain a
    > pure LRU list because it's a concurrency bottleneck.


    Couldn't you write all previous dirty pages (in file order) when you
    extend a database?

    > You could try defining HAVE_FILESYSTEM_NOTZERO in db_config.h, and
    > changing the __os_fs_notzero function to return 1. This won't change
    > the order in which pages are flushed from the cache, but will ensure
    > that the file grows without holes.
    >
    > Please let us know if this does make a difference.


    The results are mixed. With DS and a small cache, defining
    HAVE_FILESYSTEM_NOTZERO significantly reduces the number of fragments.
    However, loading the database takes longer because there are now
    intervening fdatasync calls (and lseek/write is used instead of
    pwrite). With a large cache, I don't see much difference: the number of
    fragments without the change happens to be almost the same as in the
    -DHAVE_FILESYSTEM_NOTZERO case.

    (I tested this with Berkeley DB 4.7.25 on GNU/Linux.)
