Hi Robert,
We have a program that has been using LevelDB, but its write throughput no
longer meets our needs. I found HyperLevelDB and was excited about it: according
to the official guide, multi-threaded writes should be more efficient than in
LevelDB, but the benchmark let me down. I ran db_bench with threads=4 against
both LevelDB and HyperLevelDB; the results are below:
LevelDB:
fillseq : 45.564 micros/op; 9.5 MB/s
fillsync : 2485.968 micros/op; 0.2 MB/s (1000 ops)
fillrandom : 72.795 micros/op; 6.0 MB/s
overwrite : 81.077 micros/op; 5.4 MB/s
readrandom : 260.956 micros/op; (1000000 of 1000000 found)
readrandom : 85.416 micros/op; (1000000 of 1000000 found)
readseq : 0.562 micros/op; 755.0 MB/s
readreverse : 1.240 micros/op; 353.9 MB/s
compact : 16799649.000 micros/op;
readrandom : 82.920 micros/op; (1000000 of 1000000 found)
readseq : 0.716 micros/op; 609.8 MB/s
readreverse : 1.115 micros/op; 393.1 MB/s
fill100K : 73507.079 micros/op; 5.2 MB/s (1000 ops)
crc32c : 5.664 micros/op; 2686.7 MB/s (4K per op)
snappycomp : 8044.500 micros/op; (snappy failure)
snappyuncomp : 30891.000 micros/op; (snappy failure)
acquireload : 29.294 micros/op; (each op is 1000 loads)
HyperLevelDB:
fillseq : 105.254 micros/op; 4.2 MB/s
fillsync : 2454.652 micros/op; 0.2 MB/s (1000 ops)
fillrandom : 106.570 micros/op; 4.1 MB/s
overwrite : 101.861 micros/op; 4.3 MB/s
readrandom : 116.297 micros/op; (1000000 of 1000000 found)
readrandom : 80.939 micros/op; (1000000 of 1000000 found)
readseq : 4.684 micros/op; 94.5 MB/s
readreverse : 1.353 micros/op; 324.3 MB/s
compact : 28697551.000 micros/op;
readrandom : 95.498 micros/op; (1000000 of 1000000 found)
readseq : 1.519 micros/op; 290.8 MB/s
readreverse : 1.685 micros/op; 262.2 MB/s
fill100K : 18759.587 micros/op; 20.2 MB/s (1000 ops)
crc32c : 13.482 micros/op; 1113.6 MB/s (4K per op)
snappycomp : 32.232 micros/op; 483.5 MB/s (output: 55.1%)
snappyuncomp : 2.832 micros/op; 5415.9 MB/s
acquireload : 0.770 micros/op; (each op is 1000 loads)
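For reference, results like the above come from LevelDB's bundled db_bench tool; a sketch of the invocation is below. Only --threads=4 is stated in the original run — the database directory, key count, and benchmark list are assumptions, not taken from this post:

```shell
# Hypothetical db_bench invocation (adjust paths for your build tree).
# --threads=4 runs each benchmark with four concurrent worker threads,
# which is where HyperLevelDB is expected to pull ahead.
./db_bench --threads=4 \
           --num=1000000 \
           --db=/tmp/dbbench \
           --benchmarks=fillseq,fillsync,fillrandom,overwrite,readrandom,readseq
```

Note that db_bench reports micros/op averaged per thread, so single-threaded and multi-threaded runs are not directly comparable line by line.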
Can you tell me the reason?
On Tuesday, January 7, 2014, at 6:06:19 PM UTC+8, Robert Escriva wrote:
Post by Robert Escriva
Hi JT,
I'm the maintainer of HyperLevelDB so I can weigh in here. Part of the
reason for our end of the fork is that it gives us 100% control to push
changes and cut new releases. We also have some features like LiveBackup
that the LevelDB project doesn't "need" (the features aren't required to
meet the authors' use cases).
We work to minimize divergence so that the two code bases are
interchangeable with only a change of include and linker commands. We also
have to merge the ldb extension for SST files. Right now you can work around
that by renaming .ldb to .sst.
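The rename workaround mentioned above can be scripted; a minimal sketch, assuming the database lives at a placeholder path and the DB is closed while you rename:

```shell
# Rename every .ldb table file in a (closed) database directory to .sst
# so the other fork can open it. DB_DIR is a placeholder, not a real path
# from this thread.
DB_DIR=/path/to/db
for f in "$DB_DIR"/*.ldb; do
  [ -e "$f" ] || continue        # skip cleanly if no .ldb files match
  mv "$f" "${f%.ldb}.sst"        # strip the .ldb suffix, append .sst
done
```

The `${f%.ldb}` expansion removes the shortest trailing match of `.ldb`, so `000005.ldb` becomes `000005.sst`.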
I'm the dev who first pointed out that mmap was the cause of the reported
issue. The next (non-trivial) HyperLevelDB release will include a fix that
satisfies me as far as mmap goes.
Happy hacking!
Post by JT Olds
Hello LevelDB community.
This is relatively old news, but as most of you are aware, the HyperDex
people forked LevelDB and got significantly better compaction performance.
http://hyperdex.org/performance/leveldb/
Why is this a separate project? It appears that the two projects have
started to diverge more (HyperLevelDB still uses mmap, I guess?), but what
are the possibilities of getting some of the ideas (at least) from
HyperLevelDB merged into mainline LevelDB?
Their graphs definitely compel me; the project fragmentation does not.
Any ideas, thoughts?
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
--
You received this message because you are subscribed to the Google Groups "leveldb" group.
To unsubscribe from this group and stop receiving emails from it, send an email to leveldb+***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.