Nileshkul
New User
Joined: 09 May 2016 Posts: 43 Location: India
Hi,
I am trying to use a STEM variable in REXX to get a record count. I am able to get the record count for files of up to 50 K records (roughly 40 MB). I just want to make sure whether there is any limit on the number of records, or on storage in bytes, that can be successfully processed with a STEM to get the record count.
Any help would be great.
Many Thanks - Nilesh
sergeyken
Senior Member
Joined: 29 Apr 2008 Posts: 2141 Location: USA
What did you get so far?
What did you try?
What are your results, if any?
Please clarify this sentence: "Trying to use STEM to get record count..." It sounds like
Code:
Record_Count = STEM
Nileshkul
New User
Joined: 09 May 2016 Posts: 43 Location: India
Thanks for the reply. Apologies, I did not provide enough details earlier.
I have tried code something like the below:
"ALLOC DA('DSN1.DSN2.DSN3') F(INPUT) SHR REUSE"
"EXECIO * DISKR INPUT (STEM LINES. FINIS"
SAY "REC COUNT:" LINES.0
My query is: the above code works fine up to 50 K records (roughly 40 MB), so will it work for any number of records, like 1 million or so? Or is there a limit on storage, so that the above code will probably fail with huge files?
Many Thanks - Nilesh
sergeyken
Senior Member
Joined: 29 Apr 2008 Posts: 2141 Location: USA
Nileshkul wrote:
Thanks for the reply. Apologies, I did not provide enough details earlier.
I have tried code something like the below:
Code:
"ALLOC DA('DSN1.DSN2.DSN3') F(INPUT) SHR REUSE"
"EXECIO * DISKR INPUT (STEM LINES. FINIS"
SAY "REC COUNT:" LINES.0
My query is: the above code works fine up to 50 K records (roughly 40 MB), so will it work for any number of records, like 1 million or so? Or is there a limit on storage, so that the above code will probably fail with huge files?
Many Thanks - Nilesh
FYI:
1) Learn how to use the Code button when posting your samples.
2) Your code actually tries to read the whole dataset content into memory. For a huge dataset you will get a memory-related ABEND long before LINES.0 is assigned its value...
3) There is no way to get a dataset's number of records without reading it record by record.
Nileshkul
New User
Joined: 09 May 2016 Posts: 43 Location: India
Thanks for the quick response, and thanks for the details about using the Code button.
Is there any known limit or threshold I can use to estimate how many records will work without memory issues? I mean the memory-size limit in MB that I can use for my calculations.
Many Thanks - Nilesh
sergeyken
Senior Member
Joined: 29 Apr 2008 Posts: 2141 Location: USA
Nileshkul wrote:
Thanks for the quick response, and thanks for the details about using the Code button.
Is there any known limit or threshold I can use to estimate how many records will work without memory issues? I mean the memory-size limit in MB that I can use for my calculations.
Many Thanks - Nilesh
It is a very, very bad idea to load the whole dataset into memory only to find the number of records.
The available memory limit depends on multiple settings provided by your z/OS administrator; ask your own Helpdesk, or whoever is available.
There is no other way (at a non-system-administration level) to get the number of records in a dataset besides reading the records one by one (or in relatively small groups), WITHOUT KEEPING THEIR CONTENT IN MEMORY. You need to organize a loop of some sort to do this.
sergeyken
Senior Member
Joined: 29 Apr 2008 Posts: 2141 Location: USA
Something like this:
Code:
NumRecords = 0
EndOfFile = 0
Do While EndOfFile = 0
   "MAKEBUF"                          /* create a new data-stack buffer          */
   "EXECIO 1000 DISKR INPUT"          /* read up to 1000 records onto the stack  */
   If RC <> 0 Then EndOfFile = 1      /* RC=2 means end of file was reached      */
   NumRecords = NumRecords + Queued() /* accumulate the number of records        */
   "DROPBUF"                          /* discard the records in the buffer       */
End
"EXECIO 0 DISKR INPUT (FINIS"         /* close the input file                    */
Say "Number of records is" NumRecords
|
sergeyken
Senior Member
Joined: 29 Apr 2008 Posts: 2141 Location: USA
I cannot guess what the goal is: to calculate the number of records in advance, before reading them?
If it is only required to add new records at the end, then DISP=MOD is used for that. Otherwise, this seems to be a somewhat useless activity...
Willy Jensen
Active Member
Joined: 01 Sep 2015 Posts: 734 Location: Denmark
Not very elegant, but you can use ICETOOL like so:
Code:
cc=bpxwdyn('alloc dd(dd1) da(your.dataset) shr reuse')
cc=bpxwdyn('alloc dd(toolin) new delete reuse',
           'lrecl(80) recfm(f,b) blksize(4080)',
           'tracks space(1,1) unit(sysda)')
cc=bpxwdyn('alloc dd(toolmsg) new delete reuse',
           'tracks space(1,1) unit(sysda)')
cc=bpxwdyn('alloc dd(dfsmsg) dummy reuse')
parse value 1 ' COUNT FROM(DD1)' with r.0 r.1
"execio 1 diskw toolin (stem r. finis)"
Address Attchmvs "ICETOOL"
"execio * diskr toolmsg (stem lst. finis)"
do n=1 to lst.0
  if word(lst.n,1)='ICE628I' then say lst.n
end
"free dd(toolmsg dfsmsg dd1 toolin)"
|
sergeyken
Senior Member
Joined: 29 Apr 2008 Posts: 2141 Location: USA
|
|
|
|
Quote:
You don't need a musket to shoot a butterfly
I'm pretty sure the real problem is a wrong general design of the application.
Pedro
Global Moderator
Joined: 01 Sep 2006 Posts: 2594 Location: Silicon Valley
I am also unsure of the need to get the number of records. Please clarify.
But I have this suggestion: use the LISTDSI function to retrieve overall data set information. Use SYSALLOC and SYSUSED to determine how much of the data set is used, and maybe also use SYSLRECL, SYSBLKSIZE and SYSBLKSTRK to estimate how many records are in the dataset. (Sorry, I do not have an example of the math required.)
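The arithmetic behind that suggestion could be sketched roughly as follows. This is an untested sketch under stated assumptions: 'YOUR.DATA.SET' is a placeholder name, RECFM=FB is assumed, and the formula assumes LISTDSI reports space in tracks; it yields an upper bound, since the last block and track may be only partly filled.
Code:
/* Rough record-count estimate from LISTDSI (assumes RECFM=FB,  */
/* space in tracks; 'YOUR.DATA.SET' is a placeholder)           */
lrc = LISTDSI("'YOUR.DATA.SET'")
If lrc = 0 & SYSUNITS = 'TRACK' & SYSBLKSTRK > 0 Then Do
   RecsPerBlk = SYSBLKSIZE % SYSLRECL          /* records per block     */
   EstRecs    = SYSUSED * SYSBLKSTRK * RecsPerBlk
   Say 'Estimated record count (upper bound):' EstRecs
End
Else Say 'LISTDSI rc='lrc', or space is not reported in tracks'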
Nileshkul
New User
Joined: 09 May 2016 Posts: 43 Location: India
sergeyken wrote:
Something like this:
Code:
NumRecords = 0
EndOfFile = 0
Do While EndOfFile = 0
   "MAKEBUF"                          /* create a new data-stack buffer          */
   "EXECIO 1000 DISKR INPUT"          /* read up to 1000 records onto the stack  */
   If RC <> 0 Then EndOfFile = 1      /* RC=2 means end of file was reached      */
   NumRecords = NumRecords + Queued() /* accumulate the number of records        */
   "DROPBUF"                          /* discard the records in the buffer       */
End
"EXECIO 0 DISKR INPUT (FINIS"         /* close the input file                    */
Say "Number of records is" NumRecords
Thanks a lot for this simple way of getting the record count. This is very helpful; thanks again.
Many Thanks - Nilesh
Nileshkul
New User
Joined: 09 May 2016 Posts: 43 Location: India
Willy Jensen wrote:
Not very elegant, but you can use ICETOOL like so:
Code:
cc=bpxwdyn('alloc dd(dd1) da(your.dataset) shr reuse')
cc=bpxwdyn('alloc dd(toolin) new delete reuse',
           'lrecl(80) recfm(f,b) blksize(4080)',
           'tracks space(1,1) unit(sysda)')
cc=bpxwdyn('alloc dd(toolmsg) new delete reuse',
           'tracks space(1,1) unit(sysda)')
cc=bpxwdyn('alloc dd(dfsmsg) dummy reuse')
parse value 1 ' COUNT FROM(DD1)' with r.0 r.1
"execio 1 diskw toolin (stem r. finis)"
Address Attchmvs "ICETOOL"
"execio * diskr toolmsg (stem lst. finis)"
do n=1 to lst.0
  if word(lst.n,1)='ICE628I' then say lst.n
end
"free dd(toolmsg dfsmsg dd1 toolin)"
Thanks for sharing this; it is helpful.
don.leahy
Active Member
Joined: 06 Jul 2010 Posts: 765 Location: Whitby, ON, Canada
I would bet that the ICETOOL approach would be the fastest. Why not try both approaches and share the results?
sergeyken
Senior Member
Joined: 29 Apr 2008 Posts: 2141 Location: USA
don.leahy wrote:
I would bet that the ICETOOL approach would be the fastest. Why not try both approaches and share the results?
Either yes or no.
For relatively small datasets, the allocation of DD names and the dynamic LINK to the utility might take longer than straightforwardly reading the records.
For large enough datasets, the utility-call approach may be faster.