I have to make images of hard drives. I have trouble with large drives because the compressed files are massive. The way I make images, the blank space gets compressed along with the used space. Today I tried, for the first time, writing zeros to the free space before making the image. The drive in question had a 230 gigabyte partition I wanted to image, but the data on it only added up to about 20 gigabytes.
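For context, imaging a partition with compression boils down to reading the raw device and piping the stream through a compressor. A minimal sketch of the idea, assuming dd and gzip (the device name and output file are placeholders, not necessarily what I actually use):

$> dd if=/dev/sdb1 bs=1M | gzip -c > sdb1-image.gz

Zeroing the free space first matters because dd reads every block of the partition, used or not, and long runs of zeros compress down to almost nothing.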
The commands, from memory, for writing zeros to the free space:

$> dd if=/dev/zero of=/tmp.txt bs=1G count=194
$> rm /tmp.txt
$> reboot
Next time I'll do bs=1M count=194000 or something. I worry that the huge block size may not have zeroed as much of the free space as I hoped. Live and learn.
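One variant that would sidestep the block-size guesswork entirely: drop count and let dd run until the filesystem fills up. It exits with a "No space left on device" error, which in this case is the point. Then flush and delete the file. Assuming GNU dd (note the uppercase M):

$> dd if=/dev/zero of=/tmp.txt bs=1M
$> sync
$> rm /tmp.txt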
Once you start the imaging process, you can watch it hit the zeroed blocks: the compression speed goes through the roof, and the bytes per second climb very quickly. All zeros are easy to compress. This should help not only the size of the file but the speed of the backup as well.
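If you want real numbers instead of eyeballing it, pv dropped into the middle of the pipe prints the throughput as it runs. A sketch using the same placeholder names as above, assuming pv is installed:

$> dd if=/dev/sdb1 bs=1M | pv | gzip -c > sdb1-image.gz

When the pipeline reaches the zeroed region, the rate pv reports should jump, since gzip chews through zeros far faster than real data.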
The first 5% was really stressful because it was only barely working. I had hoped the data sat mostly toward the front of the drive; when I used to look at data layouts on drives, that was the case. I believe the head just doesn't have to move as far to read and write that part of the drive. If the data wasn't up front, I would be screwed. As I watched the size of the file climb and the percentage crawl, I started to sweat. Things were looking good at the halfway point: at 53%, the file was just over 5 gigabytes.
It took several hours, but the result was awesome. The 230G drive, with 20G of data and a snoot full of zeros applied liberally to the "empty" space, compressed down to a 9.5G file, roughly 24:1 against the raw partition. This is much better than past performance using similar techniques without the zeros; many times I would not even get 3:2 compression on unzeroed partitions.