Okay, yes, we are using a disk volume for the initial backups, and then it gets moved to tape later ('next storage pool').
But what happens when the data gets copied from the disk volume to the copy-pool tape, or migrated to the primary tape pool? Does the resourceutil value from the node's stanza still apply, or does it only apply when the client first sends to the disk volume? If not, won't you be in the same boat when you have to offload the disk volume to tape? Then again, maybe not, since the data already lives on the disk volume and the tape library is right there, so there's no network bottleneck, etc.
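For what it's worth, my understanding (hedged; the pool names and values below are made up for illustration) is that RESOURCEUTILIZATION is a client option, so it only shapes client-to-server sessions. Once the data lands in the disk pool, migration and copy-pool backup are server processes, and their fan-out is set server-side with MIGPROCESS and MAXPROCESS, something like:

```
* Client side (dsm.sys / option file) - governs client sessions only:
RESOURCEUTILIZATION 5

* Server side - disk-to-tape movement is driven by server processes,
* not the client option above (names/values illustrative):
DEFINE STGPOOL diskpool DISK NEXTSTGPOOL=tapepool HIGHMIG=90 LOWMIG=70 MIGPROCESS=2
BACKUP STGPOOL diskpool copypool MAXPROCESS=2
```

If that's right, the disk-to-tape leg shouldn't care about the node's resourceutil at all.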
That's interesting about not being able to write multiple sessions to the same drive. I'm used to doing that with EMC, where you might have a drive parallelism of, say, 6, and a client parallelism of maybe 10: the client sends up to 10 file systems simultaneously to the server, 6 of which go to one drive (assuming an idle one is available) and the other 4 to another drive. A file system will not be split between drives, however; all file systems arriving at a drive are wrapped (multiplexed) together. Higher drive parallelism can help keep the drive streaming optimally, and thus gives faster backup times but slower recovery times, since it has to unmultiplex all that. Higher parallelism on the client can get backups done faster, but again with a similar drawback. Obviously, in this case, if a drive is already running 4 sessions it can only accept 2 more until one frees up.
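For comparison, the EMC (NetWorker) knobs I'm describing look roughly like this in nsradmin-style resource output (attribute names are the real ones, values purely illustrative):

```
NSR client:  name: clientA;    parallelism: 10;
NSR device:  name: /dev/nst0;  target sessions: 6;  max sessions: 6;
```

`parallelism` caps how many save sets the client streams at once; `target sessions` is how many sessions the server packs onto a drive before spilling over to the next idle one, which is exactly the multiplexing behavior above.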