Today started with having to copy 100+ gigs of JPEGs to a USB-mounted drive. I ruled out two obvious choices right away.
– cp: too slow
– rsync: no point; this was a first-time copy, so everything needed to be copied anyway.
So I hit Google looking for the fastest way to copy files locally. Surprisingly, it's not as simple as it sounds. Ninety percent of the results are about server-to-server copying, and they all say rsync. One diamond in the rough was a posting on 4bcj.com by Brett Jones. The post shows how to use netcat and tar to send files server-to-server at a more than acceptable rate (I've sketched that variant at the bottom of this post). It reminded me of something similar I had done a while ago for disk-to-disk copies. It goes like this:

# cd <srcdir>
# tar cf - . | ( cd <dstdir> ; tar xf - )

For example:

# cd /data/images
# tar cf - . | ( cd /mnt/usb/images ; tar xf - )

One reason this is so fast is that tar is not preserving the owner, group, or permission information. (Adding the “p” option on the extracting side preserves it, but slows things down.)
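So a permission-preserving version of the example above would look something like this (I haven't timed this variant, and ownership only carries over if you run it as root):

# cd /data/images
# tar cf - . | ( cd /mnt/usb/images ; tar xpf - )
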
Unlike rsync (or cp -u), this technique copies the whole contents of the source directory every time, so I tend to use it for the initial copy and then fall back to cp or rsync for subsequent updates.
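For those later top-ups the rsync command is nothing fancy; something like this does it (the trailing slashes tell rsync to copy the contents of the source directory into the destination rather than nesting a new images directory inside it):

# rsync -av /data/images/ /mnt/usb/images/
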
Edit:
If your version of tar supports the -C option (change to directory), you can probably change the syntax to:

tar -C <srcdir> -cf - . | tar -C <dstdir> -xf -

I have yet to try this syntax, though.
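For completeness, the netcat version from Brett Jones's post splits the same pipeline across two machines. These aren't his exact commands, just the general shape, with an arbitrary port of my choosing. On the receiving box:

# cd <dstdir>
# nc -l -p 7000 | tar xf -

And on the sending box:

# cd <srcdir>
# tar cf - . | nc <desthost> 7000

(Some netcat builds want nc -l 7000 without the -p.)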