This is for general discussion if anyone has thoughts on the subject, so no great urgency. It's something that gave me a lot of grief in a previous incarnation, and I never really liked the answer(s) we came up with...
We had a large mainframe testing environment covering several thousand applications, almost a terabyte of data, and a full set of OEM products. Test cycles covered a range of dates and typically ran for about two weeks before we'd reset and run the next cycle. Populating and setting up the test environment at the start of each exercise was a pretty large effort, and the total number of test cycles in a stream could mean 3 months or more before things were released to production.
Meanwhile, in production, some 1500-2000 change records were being processed every week - everything from JCL changes, to deletion of a dodgy record, to product updates, to new applications. The people in change management had no regular interaction with the test environment teams or the testers, and no real understanding of what was being tested.
In practice, that left us at risk of implementing changes into a production environment that could be 3 months or more "out of synch" with the test environment they had been verified against.
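To make the drift risk concrete, here's a small sketch: compare the component versions snapshotted when the test environment was populated against the production change records processed since then, and flag anything that has moved. The component names, version strings, and change-record layout are invented for illustration - in practice this data would come from the change-management system:

```python
# Hypothetical sketch: flag components changed in production since the
# test environment was last baselined. Names and record layout are
# invented for illustration only.

# Snapshot taken when the test environment was populated: component -> version
test_baseline = {"PAYROLL1": "v12", "BILLJCL": "v03", "CUSTUPD": "v07"}

# Production change records processed since that snapshot was taken
prod_changes = [
    {"component": "BILLJCL", "version": "v04", "change_id": "CR-1841"},
    {"component": "CUSTUPD", "version": "v08", "change_id": "CR-1902"},
]

def drift_report(baseline, changes):
    """List components whose production version has moved past the test baseline."""
    drifted = []
    for rec in changes:
        name = rec["component"]
        if name in baseline and baseline[name] != rec["version"]:
            drifted.append((name, baseline[name], rec["version"], rec["change_id"]))
    return drifted

for name, tested, current, cr in drift_report(test_baseline, prod_changes):
    print(f"{name}: tested against {tested}, production now {current} ({cr})")
```

Running a report like this at the start of each test cycle at least makes the gap visible, even if it doesn't close it.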
Has anyone dealt with, or is anyone dealing with, this kind of situation - and, if so, how are you addressing it?
Joined: 11 Dec 2008 Posts: 59 Location: Pune , India
These are from my personal experiences.
1) Before releasing the code into production, after testing in the test environment, we generally do an audit and freeze of the components through version control tools like Changeman or Endevor. If the components are out of synch, the audit fails. We then check for the issue, correct it, and re-audit until the audit passes; only then is it baselined in production. The audit-and-freeze process is always a pain, but during any development and testing project it is included in the delivery timelines as either 'other activity' or 'buffer time'.
2) Once the component is in production, it is the responsibility of the development/testing team that created it to support it in production for any issues for at least one month. As production support, we need only give a temporary fix and pass the buck to the respective development/testing team for the permanent solution. It's their headache, and it reduces the burden on production support.
3) Suppose two teams are working on the same component; the one that implements first in production is king. It is the responsibility of the second team to retrofit the changes made by the first team into their component and baseline later.
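The audit idea in point 1 can be sketched in miniature: a component is "out of synch" if its load module was linked before its source (or an included copybook) was last changed. This is a simplified illustration of the concept only - not the actual Changeman or Endevor audit logic, and the component records are invented:

```python
# Simplified illustration of an audit check (not real Changeman/Endevor
# logic): a load module fails if it was linked before its source, or any
# copybook it includes, was last changed.

from datetime import datetime

def audit(components):
    """Return names of components whose load module predates a source change."""
    failures = []
    for comp in components:
        deps = [comp["source_changed"]] + comp.get("copybooks_changed", [])
        if any(comp["load_linked"] < ts for ts in deps):
            failures.append(comp["name"])
    return failures

ts = datetime.fromisoformat
components = [
    {"name": "PAYCALC",
     "source_changed": ts("2009-03-01T10:00"),
     "load_linked": ts("2009-03-01T11:30")},   # linked after the source change: in synch
    {"name": "INVOICE",
     "source_changed": ts("2009-03-02T09:00"),
     "load_linked": ts("2009-03-01T17:00")},   # source changed after the link: fails audit
]

print(audit(components))  # -> ['INVOICE']
```

The real tools track far more relationships than this, but the pass/fail principle is the same: nothing gets baselined while any relationship is stale.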
Hope this helps. We can discuss further.
We're not quite talking about the same things. The site I was at had Endevor and was very good as far as production implementation and post-implementation support processes went.
What I'm talking about is where you have very large scale long running testing going on, and how you keep the test environment reasonably concurrent with production, for executables and all other changes.
(Executables gave us some grief, mainly around scheduling them into the refreshes between test cycles and having to regression test any production changes alongside the changes we were testing for implementation 2 or 3 months down the line... But they were simple compared with many of the "other" changes we had to track and potentially implement.)
Joined: 20 Feb 2009 Posts: 108 Location: Kansas City
One approach is to require all production changes (in programs, procs, JCL) to migrate up through the test environment before they reach production. The idea is you'd have complete, functioning systems running every day on the alpha and beta test platforms, populated with test data, generating output files and reports, and utilizing test control cards.
This keeps production from being out of sync with test. The problem is keeping good test data in the test systems; it can grow stale if no real transaction processing occurs on it. The other issue is security: mirroring all data from prod to test means my customer account numbers and addresses end up in test, where potentially unauthorized users could see them.
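One common way to tackle that security issue is to mask sensitive fields as data is copied from prod to test - deterministically, so the same account number always maps to the same pseudonym and records still join consistently across files, but the real values never land in test. The field names and the keyed-hash scheme below are illustrative assumptions, not a site standard:

```python
# Hedged sketch of deterministic data masking for prod-to-test copies.
# The keyed-hash scheme and field names are illustrative assumptions.

import hashlib
import hmac

MASK_KEY = b"test-env-cycle-key"  # hypothetical key, rotated per test cycle

def mask_account(acct: str) -> str:
    """Replace an account number with a same-length, all-digit pseudonym.

    Uses a keyed hash so masking is deterministic (same input -> same
    pseudonym) but not reversible without the key.
    """
    digest = hmac.new(MASK_KEY, acct.encode(), hashlib.sha256).hexdigest()
    # Map pairs of hex characters to single digits so the masked value
    # keeps the original all-numeric format and length.
    digits = "".join(str(int(digest[i:i + 2], 16) % 10)
                     for i in range(0, 2 * len(acct), 2))
    return digits[:len(acct)]

record = {"acct": "4417123456789113", "name": "J SMITH", "balance": "102.50"}
masked = {**record, "acct": mask_account(record["acct"])}
print(masked["acct"])  # same length and format as the original, but not the real number
```

Because the mapping is stable within a key, referential integrity across the copied files survives the masking, which matters when the test cycles exercise whole application chains rather than single programs.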