Lightsout wrote:
I'll give my feedback on 2012 dedup best practices:

- Format the disk using the command-line "/L" switch for "large size file records".
- Apply all patches, as some rollups have improvements to dedup.
- Use active full jobs with incrementals.
- Turn Veeam's compression off and use the "LAN" block size. This gave the best overall space savings for me.
- If possible, spread your active full backups over the entire week.
- Modify the garbage collection schedule to run daily rather than weekly. I have a script to do it if you're interested.
- Try to keep your VBK files below 1TB in size - Microsoft doesn't officially support files bigger than this. Large files take a long time to dedup and will have to be fully reprocessed if the process is interrupted. (I've had a 4TB VBK process fine, it just takes a long time!)
- Windows dedup is single threaded, but it can process multiple volumes at once. Although bigger volumes mean better dedup ratios!
- Configure your dedup process to run once a day, and for as long as possible.

Cool, thanks for sharing these tips in a concise format. I've been going through these recommendations and I wonder if our situation is slightly different.

We have several customers where we back up their VMs to our own Veeam host, which has 21TB of usable space (after RAID, hot spare, etc., and before compression & deduplication). After we back up their data to our host we send it on to a Veeam cloud host. They use their own storage and don't use Windows Server 2012. Our goals:

1. Use as little space as possible on our Veeam host on-prem at the customer, since we pay for this unit out of pocket and adding more space is not free.
2. Restore windows are reasonable and not typically a sticking point.
3. Use as little space as possible with the hosting provider. We get charged, and thus the customer gets charged - the less expensive it is, the more likely the customer will choose our backup solution.

As of now we're doing incrementals with a weekly synthetic full to help with the backup window issue, and we have deduplication running on our repository server. Because cost is usually the first sticking point, we're trying to figure out how to keep the size of the backups at the cloud provider as small as possible. Unfortunately we have to assume they'd be using cheap disk with no compression/deduplication built in, so it's up to us to keep the files small. We have a WAN accelerator, as does the cloud host, which helps reduce the copy job time.

The backup jobs are set to dedupe-friendly compression with "Local target" storage optimization, and the copy job is set to high or extreme compression. We have Win2012R2 deduplication set to 0 days so that it dedupes files right away, keeping our size on disk low. If we had a big enough buffer I could see us letting it go 2 days so that the copy job isn't recompacting the data on the way out. However, the copy jobs start within minutes or hours of the backup jobs finishing, at night, whereas the deduplication doesn't start until 10am, so it's pretty unlikely it would be unpacking deduped data just to copy it offsite.

So our questions are:

a. What combination results in the smallest files at the cloud provider, assuming they don't do their own compression & deduplication?
b. What speeds up the backups the best, given our goals?
c. Will meeting a & b necessitate using more space on our onsite host?

* Side note: our host is dedicated to Veeam with internal storage, so there's plenty of CPU and memory; disk is the limiting factor without buying another set of repository space.

Hopefully this dump of info wasn't too much and was easy to navigate.
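For reference, the repository-side tuning discussed in this thread (format with /L, dedup files immediately, daily garbage collection, one long daily optimization window) can be sketched in PowerShell on Windows Server 2012 R2. This is only a sketch under assumptions: the drive letter E:, the schedule names, and the start times/durations are illustrative and not from the thread - adjust them to your own window (the poster's optimization runs from 10am).

```powershell
# Hypothetical example for a new Veeam repository volume.
# Assumptions: drive letter E:, schedule names, times. Run elevated.

# Format with large file record segments (/L) so NTFS copes with huge VBKs.
format E: /FS:NTFS /L /Q /V:VeeamRepo

# Enable dedup; MinimumFileAgeDays 0 makes files eligible immediately
# (the "set to 0 days" setting mentioned above).
Import-Module Deduplication
Enable-DedupVolume -Volume E:
Set-DedupVolume -Volume E: -MinimumFileAgeDays 0

# Replace the default weekly garbage collection with a daily run.
Get-DedupSchedule -Type GarbageCollection | Remove-DedupSchedule
New-DedupSchedule -Name "DailyGC" -Type GarbageCollection `
    -Days Monday,Tuesday,Wednesday,Thursday,Friday,Saturday,Sunday `
    -Start 06:00 -DurationHours 4

# One optimization pass per day, running for as long as possible
# (here a 10-hour window starting at 10:00).
New-DedupSchedule -Name "DailyOptimization" -Type Optimization `
    -Start 10:00 -DurationHours 10
```

After a few runs you can check the savings with `Get-DedupStatus -Volume E:` and `Get-DedupVolume -Volume E:` to see whether the ratios justify the schedule.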