JPVRoff
New User
Joined: 06 Oct 2009 Posts: 45 Location: Melbourne, Australia
Hi,
Within Rexx, we normally use IGGCSI00 to obtain catalog info for many datasets (as it's quicker than most other methods).
But for one particular use (where we want to migrate datasets at the end of a batch run to save on DASD usage) I've had to rely on LISTDSI up to this point, simply because the requirement is only to migrate those datasets bigger than a certain size. The benefit is twofold: we don't tie up HSM with thousands of requests to migrate small datasets, and we still save a lot of DASD space after each batch run.
Now LISTDSI is a wonderful Rexx function for one or two or ten files. But when we're looking through 5,000 to 10,000, the small amount of time it takes to allocate and deallocate each dataset for each LISTDSI call adds up to a very significant amount of time.
Question is, is there any other way to find out how much space is allocated or used by PS/PO datasets without having to allocate them?
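For context, the current loop is essentially the sketch below (the dsn. stem, the 100-track threshold and the HMIGRATE step are illustrative, not our production code):
Code:
/* REXX - per-dataset LISTDSI approach (sketch)                     */
threshold = 100                        /* tracks - tune to suit     */
do i = 1 to dsn.0                      /* dsn. stem built earlier,  */
                                       /* e.g. from IGGCSI00 output */
  lrc = LISTDSI("'"dsn.i"' NORECALL")  /* this allocate/deallocate  */
                                       /* is the per-dataset cost   */
  if lrc = 0 & sysunits = 'TRACK' & sysalloc > threshold then
    address TSO "HMIGRATE '"dsn.i"'"   /* queue an HSM migration    */
end
It's the LISTDSI call itself, not the size test, that I'd like to eliminate.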
enrico-sorichetti
Superior Member
Joined: 14 Mar 2007 Posts: 10888 Location: italy
use something like
Code:
//DSS      EXEC PGM=ADRDSSU,REGION=0M,
//             PARM='TYPRUN=NORUN'
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DUMP -
    DATASET( -
      INCLUDE( -
        XXXXXX.** -
      ) -
      BY( REFDT LT (*,-365) ) -
    ) -
    OUTDDNAME(OUT)
//OUT      DD DUMMY
change the
Code:
BY( REFDT LT (*,-365) )
to the appropriate construct
DFDSS filtering capabilities are quite sophisticated.
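for the size requirement there should also be a size filter keyword (FSIZE, if I remember correctly - verify in the filtering section of the ADRDSSU reference)
the NORUN SYSPRINT is easy to post-process in Rexx, something along these lines (a sketch only - the SYSPRINT dataset name and the prefix scan are assumptions, adjust to your output):
Code:
/* REXX - sketch: pull the selected dataset names out of an        */
/* ADRDSSU TYPRUN=NORUN SYSPRINT that was routed to a dataset      */
"ALLOC FI(IN) DA('MY.DSS.SYSPRINT') SHR REUSE" /* name illustrative */
"EXECIO * DISKR IN (STEM line. FINIS"
"FREE FI(IN)"
n = 0
do i = 1 to line.0                      /* scan every message line  */
  do w = 1 to words(line.i)
    if abbrev(word(line.i, w), 'XXXXXX.') then do /* SYSIN prefix   */
      n = n + 1
      dsn.n = word(line.i, w)           /* remember the dataset     */
    end
  end
end
dsn.0 = n
say n 'datasets would be selected'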
expat
Global Moderator
Joined: 14 Mar 2007 Posts: 8796 Location: Welsh Wales
I have used DCOLLECT in the past for things like this; the records are pretty easy to understand and work with.
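A rough sketch of reading the D records in Rexx (the offsets are assumptions: EXECIO strips the 4-byte RDW, so the offsets from the IDCDOUT mapping shift by 4 - verify the field positions there before relying on this):
Code:
/* REXX - sketch: list dataset names from DCOLLECT 'D ' records      */
"ALLOC FI(DCOUT) DA('MY.DCOLLECT.OUTPUT') SHR REUSE" /* illustrative */
"EXECIO * DISKR DCOUT (STEM rec. FINIS"
"FREE FI(DCOUT)"
do i = 1 to rec.0
  if substr(rec.i, 1, 2) = 'D ' then do  /* DCURCTYP, RDW stripped   */
    dsn = strip(substr(rec.i, 21, 44))   /* DCDDSNAM, assumed offset */
    /* the space fields (DCDALLSP / DCDUSESP, in KB) are further     */
    /* into the record - see IDCDOUT for the exact positions         */
    say dsn
  end
end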
Willy Jensen
Active Member
Joined: 01 Sep 2015 Posts: 734 Location: Denmark
If you know which disk(s) the datasets are on, then I recommend the VTOC command from CBT file 112. The command will scan disks based on a volser prefix, or all of them (which will be a bit too much at most sites), and can filter on dataset name, used space and more. The generated list is easily post-processed.
vasanthz
Global Moderator
Joined: 28 Aug 2007 Posts: 1744 Location: Tirupur, India
DCOLLECT D-type records have this information. I can provide working code if you have SAS-MXG at your shop.
JPVRoff
New User
Joined: 06 Oct 2009 Posts: 45 Location: Melbourne, Australia
Thank you for the suggestions.
I'll have a go at DFDSS first, as it doesn't involve downloading anything (financial institutions are so fussy about that... sarcasm) and I've got a few macros that pull info from DFDSS output.
I might have a CBT tape from an earlier time on my 'home' LPAR, so I'll try to have a look at both VTOC and DCOLLECT.
Pedro
Global Moderator
Joined: 01 Sep 2006 Posts: 2594 Location: Silicon Valley
I would also consider using the 'modified' date. If a data set has not been updated since your last check, then just ignore it.
steve-myers
Active Member
Joined: 30 Nov 2013 Posts: 917 Location: The Universe
Pedro wrote:
I would also consider using the 'modified' date. If a data set has not been updated since your last check, then just ignore it.
There is no such thing as a "modified date" data set attribute. There is a flag - mainly for storage management products such as HSM - to indicate that a data set has been modified since the last backup by the storage management product.
sergeyken
Senior Member
Joined: 29 Apr 2008 Posts: 2141 Location: USA
There are only 3 date fields stored in the format-1 DSCB of the VTOC:
- DS1CREDT - Creation date ('YDD'), discontinuous binary.
- DS1EXPDT - Expiration date ('YDD'), discontinuous binary.
- DS1REFD - Date last referenced ('YDD', or zero if not maintained).
For some reason the last update date is not maintained...
steve-myers
Active Member
Joined: 30 Nov 2013 Posts: 917 Location: The Universe
There is NO last modified date field in the format 1 DSCB. Period. End of story. There never has been.
There is one bit that notes the data set has been modified. Obviously that is not a date. This bit is maintained for storage management products like HSM.
The YDD date data sergeyken mentions is 3 bytes of binary data. The first byte is the year, counted from 1900: 2019, for example, is X'77', or 119 in decimal. The next two bytes are the day of the year. April 19 2019 (today, in other words) is X'77006D'.
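Decoding that in Rexx, using the value above:
Code:
/* REXX - decode the 3-byte 'YDD' DSCB date format                 */
yydd = '77006D'x                        /* April 19 2019            */
year = 1900 + c2d(substr(yydd, 1, 1))   /* byte 1: years since 1900 */
day  = c2d(substr(yydd, 2, 2))          /* bytes 2-3: day of year   */
say year day                            /* -> 2019 109              */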
Pedro
Global Moderator
Joined: 01 Sep 2006 Posts: 2594 Location: Silicon Valley
Quote:
There is NO last modified date field in the format 1 DSCB
Sorry, I am not sure what I was thinking about.
JPVRoff
New User
Joined: 06 Oct 2009 Posts: 45 Location: Melbourne, Australia
steve-myers wrote:
There is NO last modified date field in the format 1 DSCB. Period. End of story. There never has been.
There is one bit that notes the data set has been modified. Obviously that is not a date. This bit is maintained for storage management products like HSM.
The YDD date data sergeyken mentions is 3 bytes of binary data. The first byte is the year, counted from 1900: 2019, for example, is X'77', or 119 in decimal. The next two bytes are the day of the year. April 19 2019 (today, in other words) is X'77006D'.
I was just going over this while looking for another answer (or, at least, I think it's an old answer) when I saw this bit about the bit.
Interestingly, we've been having difficult-to-pin-down issues with this bit. It appears that in some IAM files it is not being set after an update, and you can guess what fun ensues when the file gets migrated. I know the best option would be not to migrate any of the files, but there are at least 16 regions of our software, plus unknown numbers of other regions, all on this one little LPAR.
One day someone will take it seriously...
Pete Wilson
Active Member
Joined: 31 Dec 2009 Posts: 592 Location: London
Seems to me this could be managed by DFHSM naturally with appropriate MGMTCLASes set up. If it's the same sets of data you're manually migrating every time, then perhaps they need a specific MGMTCLAS assigned that does early migration as part of the normal migration cycle.
DFHSM already selects data to migrate (or expire) during Primary Space Management based on GDG status, size (descending), last-reference date, creation date and so on, to minimise the amount of data it has to move to reach the volume's low free-space threshold. So it's best to leave it to do that once the MGMTCLASes are set up correctly.
Or, if they're being migrated to ML2 anyway, why not just write the QSAM ones direct to tape and avoid the big DFHSM CPU overhead altogether?
JPVRoff
New User
Joined: 06 Oct 2009 Posts: 45 Location: Melbourne, Australia
Pete Wilson wrote:
Seems to me this could be managed by DFHSM naturally with appropriate MGMTCLASes set up. If it's the same sets of data you're manually migrating every time, then perhaps they need a specific MGMTCLAS assigned that does early migration as part of the normal migration cycle.
DFHSM already selects data to migrate (or expire) during Primary Space Management based on GDG status, size (descending), last-reference date, creation date and so on, to minimise the amount of data it has to move to reach the volume's low free-space threshold. So it's best to leave it to do that once the MGMTCLASes are set up correctly.
Or, if they're being migrated to ML2 anyway, why not just write the QSAM ones direct to tape and avoid the big DFHSM CPU overhead altogether?
Hi Pete,
I didn't see this before. At this particular site, we're a third-party provider on the client's site, run by IBM in Latin America (unsure which actual country), so getting any changes made for our benefit is problematic.
Thanks for the suggestion, though.