Using mbuffer to speed up zfs send/receive has been discussed in many other places, but I wanted to add my own experience to the mix. I used mbuffer on each end while dumping 3.4TB of data between pools with lz4 compression enabled. I set the buffer to 512MB, as the target system is tight on memory.

Without mbuffer, the zfs send averaged about 504Mbps. With mbuffer, I was seeing 972Mbps of network traffic coming across, and mbuffer itself showed 111MB/s passing through it, or 88% of the theoretical maximum Gigabit Ethernet bandwidth of 125MB/s. Even allowing for the bandwidth you inevitably lose to network overhead (framing, TCP headers, etc.), I was absolutely ecstatic at the throughput.

Unfortunately, even at that speed it still took just over nine hours to complete the send of my snapshot. At least I can do incrementals afterwards, as long as I keep the original snapshot around.
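For reference, the basic setup looks something like the following sketch. The hostnames, pool, dataset, and snapshot names are placeholders, and the port is arbitrary; the `-m 512M` option matches the 512MB buffer I used:

```
# On the receiving box: listen on a TCP port with a 512MB buffer,
# then feed the incoming stream into zfs receive
mbuffer -s 128k -m 512M -I 9090 | zfs receive tank/backup

# On the sending box: pipe the snapshot through mbuffer to the receiver
zfs send sourcepool/data@snap1 | mbuffer -s 128k -m 512M -O receiver:9090
```

Here `-s 128k` sets the block size mbuffer reads and writes with, `-I` listens for an incoming connection, and `-O` connects out to the listener. Going over a raw TCP connection like this also means no ssh in the path, so there's no encryption overhead eating into the wire speed.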
Just thought I'd share.