Martin Simmons wrote:
> The schema you used for the native types would be useful to know too...
Oh, sorry, I thought I included it. Schema follows, but first:
I just tested COPY ... WITH BINARY in PostgreSQL as well.
Results:
                   Base64       Expanded     Diff        %
Dump size      1679111554     2323408248   644296694    38
Import time           211            274          63    30
So expanding the fields is a significant slowdown when using WITH
BINARY as well, which suggests it isn't parsing the numbers that takes
the time. In fact, I no longer have any idea what might be slowing
things down. I almost wonder if it's just disk I/O reading the files -
I'll have to try this from a machine with gigabit Ethernet, where I can
put the files on a different host.
Importing with COPY ... WITH BINARY isn't actually significantly faster
than importing from tab-separated text. In fact, for the expanded schema
the "with binary" import is *SLOWER* than the text form. That seems
REALLY strange to me. I'm going to repeat the tests a few times and poke
around a bit, because I don't get it.
Anyway, the expanded schema:
CREATE TABLE file2 (
    fileid      bigint,
    fileindex   integer,
    jobid       integer,
    pathid      integer,
    filenameid  integer,
    markid      integer,
    st_dev      integer,
    st_ino      integer,
    st_mod      integer,
    st_nlink    integer,
    st_uid      integer,
    st_gid      integer,
    st_rdev     bigint,
    st_size     integer,
    st_blksize  integer,
    st_blocks   integer,
    st_atime    integer,
    st_mtime    integer,
    st_ctime    integer,
    linkfi      integer,
    md5         text
);
_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users