surya anem
New User
Joined: 20 Dec 2007 Posts: 54 Location: Hyderabad
Hi,
We have an application that writes data to MQ (with a commit frequency of 1000), and another application reads that data for further processing.
The writing application is on the mainframe (writing at about 5,000 records per second) and the reading application is PowerCenter (reading at about 50 records per second).
The queue depth is 60,000. When the writing application has to write more than about 65,000 records, the job abends because the queue limit is reached before PowerCenter has read the data.
Can you please suggest a suitable approach for this?
Giving a time delay after writing a certain number of records was one approach we considered, but it looked slightly complicated.
Thanks,
Surya
Garry Carroll
Senior Member
Joined: 08 May 2006 Posts: 1205 Location: Dublin, Ireland
Quote:
The commit frequency is 1000
This suggests that the application is committing messages in batches of 1,000. While these may appear to be on the target queue, they are inaccessible until the commit happens, so the receiving application doesn't start processing them until after the commit.
Are you really getting 5,000 MQPUTs/sec and only 50 gets/sec in PowerCenter, or are these figures distorted by the need to commit?
One option you could consider is to redesign so that the mainframe application, after committing 1,000 messages, waits for some response indicating that PowerCenter is ready for more work. Once that response is received, it sends the next batch of messages, and so on.
Probably the best approach, though, is to have multiple instances of PowerCenter servicing the one queue. There are facilities in MQ for ensuring that all the related messages in a batch are returned to the one instance of the servicing application.
Garry.
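Garry's last suggestion - several reader instances draining the one queue - can be sketched with Python's standard library standing in for MQ. This is only a toy model: the message count, the instance count, and the `queue.Queue` stand-in are illustrative, not the real MQ API.

```python
import queue
import threading

# Shared in-process queue standing in for the MQ queue (illustrative volume).
mq = queue.Queue()
for i in range(1000):
    mq.put(f"msg-{i}")

processed = []
lock = threading.Lock()

def consumer():
    # Each instance drains the same queue; the queue serialises the gets,
    # just as multiple reader instances would share one MQ queue.
    while True:
        try:
            msg = mq.get_nowait()
        except queue.Empty:
            return
        with lock:
            processed.append(msg)

# Four reader instances instead of one roughly quadruples the drain rate.
workers = [threading.Thread(target=consumer) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(len(processed), len(set(processed)))  # 1000 1000: all consumed, none twice
```

Each message is still delivered to exactly one instance; if related messages must stay together, that is where the MQ batching facilities Garry mentions come in.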
surya anem
Thanks a lot, Garry. I have checked with the PowerCenter team about the possible solutions above and am waiting for a reply from them.
Meanwhile, I was thinking of some other options.
On a given day, both batch and online messages will be written to the queue simultaneously.
So one option I am thinking of is writing the batch messages to a file and the online messages to the queue, and then transferring the batch file to the customer messaging system through Connect:Direct.
Can you please suggest some other possible options?
Thanks,
Surya
Garry Carroll
Without a more detailed outline of the relationships between online, batch, PowerCenter and the messaging system, it's not possible to analyse this with any degree of confidence.
Garry.
surya anem
Garry,
This is a system that sends out messages to customers informing them of certain activity.
Some of the messages are sent via batch and some of the alerts are sent via online. My system's only job is to send the messages to PowerCenter; it is PowerCenter's responsibility to despatch them to the customer.
Batch runs at 6:00 AM and 2:00 PM. Online is 24*7.
Once the batch run is complete, all the batch messages are written to MQ, and PowerCenter reads them once they have reached the queue. The issue is during batch, because we get lakhs (hundreds of thousands) of records in one go and PowerCenter is unable to keep up.
Since online keeps writing messages to MQ, PowerCenter is always active, reading data from MQ and emptying it.

Message System -----------> MQ ---------> PowerCenter
(both batch and
online messages)

Please let me know in case you are looking for some other information.
Thanks,
Surya
enrico-sorichetti
Superior Member
Joined: 14 Mar 2007 Posts: 10888 Location: italy
Quote:
The issue is during batch, because we get lakhs of records in one go and PowerCenter is unable to keep up.

The general paradigm of client-server and queues is:
if the service time is greater than the inter-arrival time, the queue will grow indefinitely.
Nothing to do about it... it looks like the whole process must be reviewed.
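Enrico's point, put in numbers using the rates quoted at the start of the thread (5,000 puts/sec against 50 gets/sec), shows just how quickly the queue fills during a batch run:

```python
put_rate = 5000      # mainframe writes per second (figure from the thread)
get_rate = 50        # PowerCenter reads per second (figure from the thread)
max_depth = 60000    # current MAXDEPTH of the queue

growth = put_rate - get_rate          # net queue growth per second
seconds_to_full = max_depth / growth  # sustained batch writing until queue-full

print(growth)                         # 4950 messages/second
print(round(seconds_to_full, 1))      # 12.1 seconds until the queue is full
```

At a 100:1 rate mismatch, no realistic queue depth buys more than a few seconds; only raising the service rate (or slowing the producer) changes the outcome.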
Garry Carroll
I agree with Enrico, and will reiterate my previous question about the need to commit in batches of 1,000. I presume it's just the batch system that's committing this way?
Up to 999 messages of each batch of 1,000 will contribute to queue depth before the commit happens, at which time PowerCenter will start depleting them. Meantime, batch adds another 1,000, and so on. By the time 100 of the first 1,000 have been removed, batch may have added another 1,000, so the depth is now 1,900 - and the story continues.....
If the commit level is lower, then PowerCenter has a better opportunity of depleting the queue - though that's not to say it won't still reach queue-full.
Garry.
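The interaction Garry walks through can be put into a small toy simulation (rates and volumes are the illustrative figures from the thread, not a model of real MQ internals): uncommitted puts count toward depth but are invisible to the getter until the batch commits.

```python
def simulate(total_msgs, commit_batch, put_per_tick, get_per_tick):
    """Toy model: uncommitted puts add to depth but are invisible
    to the getter until the batch commits. Returns peak depth."""
    uncommitted = 0   # put but not yet committed
    visible = 0       # committed, available to the consumer
    peak = 0
    put = 0
    while put < total_msgs or visible > 0 or uncommitted > 0:
        n = min(put_per_tick, total_msgs - put)
        put += n
        uncommitted += n
        while uncommitted >= commit_batch:      # commit in batches
            uncommitted -= commit_batch
            visible += commit_batch
        if put == total_msgs:                   # final partial batch commits at end of job
            visible += uncommitted
            uncommitted = 0
        visible -= min(get_per_tick, visible)   # consumer drains what it can see
        peak = max(peak, visible + uncommitted)
    return peak

# With a 100:1 put/get ratio, shrinking the commit batch barely moves
# the peak depth -- which is Enrico's point: the rate mismatch dominates.
print(simulate(65000, 1000, 5000, 50))   # 64350
print(simulate(65000, 100, 5000, 50))    # 64350
```

Smaller commits (or NO-SYNCPOINT) make messages available sooner, which helps latency; in this model they do not change the peak depth, which is driven by the consumer's rate.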
surya anem
Yeah, Gary. It's just the batch system that is committing this way. Initially we had a commit frequency of 3,000, and because of this problem it was changed to 1,000, but that didn't solve much of our problem.
So we increased the queue depth from 20,000 to 60,000, but we still hit this issue when the number of records is around 65,000.
I don't think increasing the queue depth further is a good option.
Thanks,
Surya
Garry Carroll
Quote:
Initially we had a commit frequency of 3,000, and because of this problem it was changed to 1,000, but that didn't solve much of our problem.

But why do you need to commit so many messages at a time? Indeed, do you need to MQPUT these messages within syncpoint at all? If the batch job fails, can the messages be resent? Can PowerCenter handle receiving duplicates if you resend? Does it matter? These are all design-related questions you need to address.
Ga RR y.
surya anem
Quote:
If the batch job fails, can the messages be resent? Can PowerCenter handle receiving duplicates if you resend? Does it matter? These are all design-related questions you need to address.

The batch has restart logic: in case of any abends, the messages will be reprocessed from the last successful commit.
If we don't put the MQPUT within syncpoint, won't we have an issue if the job abends? Will the records that were read up to the point of the abend be auto-committed?
Also, would changing the commit frequency to a still smaller number help us in any way?
Thanks,
Surya
Garry Carroll
Quote:
If we don't put the MQPUT within syncpoint, won't we have an issue if the job abends? Will the records that were read up to the point of the abend be auto-committed?

If you MQPUT with MQPMO-NO-SYNCPOINT, the messages are committed immediately to the queue and can be processed by PowerCenter.
Now, you mention records being read up to the point of the abend: where are these records being read from - a file? a database? If a database, the database commit does not have to include the MQPUT, so you can issue thousands of MQPUTs before committing the database. On restart, you may resend (duplicate) messages put since the last database commit; PowerCenter would then have to handle those duplicates.
A similar approach would apply to processing records from a file - you need to know a commit point.

Quote:
Will changing the commit frequency to a still smaller number help us in any way?

The answer is: maybe. If you use MQPMO-NO-SYNCPOINT, then the messages put are available earlier for PowerCenter to process. This may be enough to alleviate the situation (but no guarantees).
GaRRy.
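The duplicate window Garry describes is easy to quantify in a toy calculation: everything put (outside syncpoint) since the last file/database commit gets resent on restart. The function name and figures below are illustrative only.

```python
def restart_window(records_put, commit_interval):
    """Messages put with NO-SYNCPOINT since the last checkpoint commit
    are resent on restart -- these are the potential duplicates."""
    last_commit = (records_put // commit_interval) * commit_interval
    return records_put - last_commit

# An abend after 4,321 puts, with a checkpoint taken every 1,000 records:
print(restart_window(4321, 1000))   # 321 messages would be sent twice
# An abend exactly on a checkpoint boundary resends nothing:
print(restart_window(5000, 1000))   # 0
```

The smaller the checkpoint interval, the smaller the worst-case duplicate batch the downstream side has to tolerate.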
surya anem
Gary,
So using MQPUT with MQPMO-NO-SYNCPOINT is one option I think I should try.
Secondly, the data is read from a file, and I just checked with the PowerCenter team: they said it would be difficult at their end to handle duplicates.
In that case, I think we need a method to avoid sending duplicates to the queue.
Thanks,
Surya
Garry Carroll
Quote:
In that case, I think we need a method to avoid sending duplicates to the queue.

Yes - perhaps a method where you update the file header (or something similar) with the record number or key of the last record processed.....
Ga RR y..
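A minimal sketch of that idea, assuming a sequential input file with ascending record keys and a small side checkpoint file (the file name and helper names here are hypothetical): after each message is put, record the key of the last record sent; on restart, skip everything up to that key so no duplicates reach the queue.

```python
import os

CHECKPOINT = "alerts.chkpt"   # hypothetical side file holding the last key sent

def last_sent():
    """Key of the last record successfully put, or 0 on a fresh run."""
    if not os.path.exists(CHECKPOINT):
        return 0
    with open(CHECKPOINT) as f:
        return int(f.read().strip() or 0)

def send_all(records, put):
    """Resume-safe send: skip records already sent, checkpoint after each put."""
    start = last_sent()
    for key, payload in records:
        if key <= start:
            continue                  # already on the queue from a previous run
        put(payload)                  # e.g. MQPUT with NO-SYNCPOINT
        with open(CHECKPOINT, "w") as f:
            f.write(str(key))         # record progress before moving on

# Demo: the first run abends halfway; the rerun sends only the remainder.
if os.path.exists(CHECKPOINT):
    os.remove(CHECKPOINT)             # start the demo from a clean state
records = [(i, f"alert-{i}") for i in range(1, 11)]
sent = []
def flaky(msg):
    if len(sent) == 5:
        raise RuntimeError("abend")   # simulated abend mid-run
    sent.append(msg)
try:
    send_all(records, flaky)
except RuntimeError:
    pass
send_all(records, sent.append)        # restart picks up after the checkpoint
print(len(sent), len(set(sent)))      # 10 10 -> every alert sent exactly once
os.remove(CHECKPOINT)                 # tidy up the demo file
```

Checkpointing after every put is the safest but slowest variant; checkpointing every N records trades a small, bounded duplicate window (as in the earlier calculation) for fewer file writes.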
surya anem
Thanks a lot, Gary.
I will try this as well.
Thanks,
Surya
Nic Clouston
Global Moderator
Joined: 10 May 2007 Posts: 2454 Location: Hampshire, UK
Surya - it is Garry - with 2 Rs