The company is looking to save money (surprise, surprise) and wants an option other than the FDR product we currently have in house (a wonderful product). Since DFDSS is part of the operating system, they asked me to look into building an incremental backup scheme with it.
There's not much in the way of documentation. The DSCHA filter criterion accepts:
YES (or 1)
NO (or 0)
For a multivolume data set, the value used for checking is 1 if any of
the indicators from all of its VTOCs is 1. Otherwise, the value is 0.
This seems to be what to code: DSCHA EQ YES (or 1).
Has anybody used this or have any experience with it?
I've used it several times at a past employer. It works fine IIRC but it is not a pure incremental backup -- if the data set has changed, it will be copied / dumped in its entirety (not just changed records). I seem to recall the syntax being a little tricky -- DUMP DS(INCL(A.**) BY((DSCHA,EQ,YES)) <other options>); I'm not sure why the double parentheses were needed.
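To make that a repeatable incremental cycle you also need the RESET keyword, which turns off the data-set-changed indicator in the VTOC after a successful dump -- otherwise every run keeps picking up the same data sets. A minimal sketch (job card, dataset names, and output unit are placeholders; verify the keywords against the DFSMSdss reference for your release):

```
//INCRBKUP JOB (ACCT),'DSS INCR',CLASS=A,MSGCLASS=X
//DUMP     EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//TAPE     DD DSN=BACKUP.INCR.FILE1,DISP=(NEW,CATLG,DELETE),
//            UNIT=TAPE
//SYSIN    DD *
  DUMP DATASET(INCLUDE(A.**) -
       BY((DSCHA,EQ,YES))) -
       OUTDDNAME(TAPE) -
       RESET -
       TOLERATE(ENQFAILURE)
/*
```

TOLERATE(ENQFAILURE) lets the dump proceed past data sets it cannot serialize; whether that is acceptable depends on your recovery requirements.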
What you're being asked to do is much more complicated than you or management realise. You also don't say whether you're planning to move to DFHSM instead of FDR/ABR, which does make a difference.
FDR/ABR is a properly developed product that tracks and maintains its backup copies, as well as Archive copies if that feature is enabled. Developing an equivalent with DFDSS will be very involved. FDR/ABR is also not just a backup tool: it does archiving with auto-recall, it has the incredibly powerful FDREPORT built in, and it includes the Compaktor functions. It is more flexible than DFDSS and DFHSM in function and scheduling ability, and it is A LOT less CPU hungry than DFHSM. I'd suggest making a very detailed comparison of FDR/ABR vs DFDSS and DFHSM (if applicable); otherwise the switch might end up being more costly. Some things to think about:
1. What if some datasets are migrated/archived? DFDSS will not back those up and in some cases won't even warn you that's the case unless you use a SET PATCH command.
2. You will need to have periodic FULL backups with incrementals in between to cater for loss of backups. These will need to be managed to ensure obsolete versions are released.
3. Will the backups be to GDGs, or will you have to devise your own naming scheme and some sort of version management?
4. How will the backups be retrieved if needed? You will need some sort of indexing record and an automated means to recover the correct version from the correct backup. Who will do the restores?
5. How will you manage the size and number of backups that can run concurrently? You don't want any one backup to be too large or they become unwieldy. You possibly don't want to be running too many concurrently. The scheduling could be quite complex to fit in with Application schedules.
6. Whatever design you make has to cater for massive future growth.
7. How do you decide where the backups get created? Will it always be tape, or could some go to DASD if they're small? Each option requires different management techniques.
8. If required, how will your backups be replicated across to Disaster Recovery sites and properly managed there?
9. How do you prove you're backing up everything that is required? Sometimes you get datasets that are not opened (so change flag not on) but still need a backup in case they're expired because they may be referenced in JCL somewhere.
10. What is the scope of the backup? Does it need to include DB2 for example when there are Image Copies being taken?
11. Will the backup be driven mainly by dataset name, by volume or storage group, or a combination?
12. Some datasets, such as zFS's, may need special treatment, for example using Concurrent Copy to minimise the enqueue time against them.
13. There are some benefits in using DFDSS COPY with Fast Replicate to create backup copies on DASD but this would require another layer of management for the naming standards. The benefit though is that the backup version is accessible as a normal dataset and doesn't have to be restored, although it could be migrated. The copying can be extremely quick with Flashcopy. (Flashcopy only works intra-controller though)
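On points 2 and 3: GDGs give you version roll-off for free. Define a base with a LIMIT equal to the number of generations you want to keep, and let each run catalog a new one. A hedged sketch (names and the limit are illustrative):

```
//DEFGDG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE GDG(NAME(BACKUP.INCR.APPA) LIMIT(35) SCRATCH NOEMPTY)
/*
```

Then the dump job's output DD points at the next generation, e.g. DSN=BACKUP.INCR.APPA(+1),DISP=(NEW,CATLG,DELETE). When the limit is exceeded, the oldest generation rolls off and, with SCRATCH, is deleted -- which handles releasing obsolete versions automatically.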
Another option you might consider is using ABARS for your backups. This is initiated through DFHSM and has some advantages, such as the ability to include migrated datasets and tape datasets in the backups.
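For what it's worth, an ABARS backup is driven by an aggregate group defined in SMS plus a selection data set listing what to include; migrated and tape data sets can be named there directly. A rough sketch (the aggregate group name and patterns are made up; check the DFSMShsm manuals for the full setup):

```
Selection data set (e.g. HSM.PAYAPP.SELECT):

  INCLUDE( -
    PAY.PROD.** -
    )

Then, from TSO, to run the aggregate backup:

  HSEND ABACKUP PAYAPP EXECUTE
```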
Or, if you have DFHSM, you could just use its incremental backup facility.
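That is by far the least work if DFHSM is already installed: incremental backup is built in, driven by SETSYS/DEFINE parameters in the ARCCMDxx parmlib member plus management class attributes for SMS-managed data sets. An illustrative fragment (keywords from memory, values are examples only; verify against the DFSMShsm Storage Administration documentation):

```
/* ARCCMDxx - illustrative values */
SETSYS BACKUP(TAPE)
SETSYS VERSIONS(5)
DEFINE BACKUP(CYCLE(YYYYYYN))
```

An individual data set can also be backed up on demand with the TSO HBACKDS command and restored with HRECOVER, and DFHSM keeps track of the versions for you.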