Can anyone explain Virtual Storage to me?
In a 31-bit environment, S/390 systems are capable of addressing up to 2GB
of main storage. In 64-bit mode the architecture can theoretically reach up to 16 EB; however, current z900 server models are limited to 64GB of addressable main memory.
Can anyone explain what a processor complex is?
Is it possible in a 31-bit environment to increase the addressing capability to 4GB or 8GB?
Kidding, right? A youth of today doesn't understand "virtual"?
In a 31-bit environment, S/390 systems are capable of addressing up to 2GB of main storage. In 64-bit mode the architecture can theoretically reach up to 16 EB; however, current z900 server models are limited to 64GB of addressable main memory.
I recall that not all of the 64 bits are devoted to addressing, Googling might help.
Can anyone explain what a processor complex is?
I got no idea, try Google.
Is it possible in a 31-bit environment to increase the addressing capability to 4GB or 8GB?
Just off the top of my head, where would you "borrow" the bits to represent "4GB or 8GB"?
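The point behind that rhetorical question is plain bit arithmetic: with n address bits you can address 2**n bytes, so 31 bits caps out at 2GB and there are no spare bits to "borrow". A quick sketch in Python:

```python
# With n address bits you can address 2**n distinct bytes.
GB = 2 ** 30  # one gigabyte (binary)

print(2 ** 31 // GB)  # 31-bit addressing reaches 2 GB, no more
print(2 ** 32 // GB)  # 4 GB would require a 32nd address bit
print(2 ** 33 // GB)  # 8 GB would require 33 address bits
```

That is why 4GB or 8GB of addressability is simply out of reach in a 31-bit environment; the architecture itself has to change, which is exactly what 64-bit z/Architecture did.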
If you try to google "processor complex" you'll do better searching for "Parallel Sysplex" - sysplex is a shortening of "system complex".
For a different take on "virtual storage", I've included a writeup of the "Paging Game". Simply said, virtual storage is some DASD set aside for the system to use when it needs more memory than physically exists. To keep the system running as well as it can, "pages" are brought into real storage and released as smoothly as possible.
THE PAGING GAME
*** RULES ***
Each player gets several million *things*.
Things are kept in *crates* that hold 4096 things each. Things in the same crate are called *crate-mates*.
Crates are stored either in the *workshop* or in the *warehouse*. The workshop is almost always too small to hold all the crates.
There is only one workshop, but there may be several warehouses. Everybody shares them.
Each thing has its own *thing number*.
What you do with a thing is to *zark* it. Everybody takes turns zarking.
You can only zark your own things, not anybody else's.
Things may only be zarked when they are in the workshop.
Only the *Thing King* knows whether a thing is in the workshop or in a warehouse.
The longer a thing goes without being zarked, the *grubbier* it is said to become.
The way to get things is to ask the Thing King. He only gives out things in crates. This is to keep royal overhead down.
The way to zark a thing is to give its thing number. If you give the number of a thing that happens to be in the workshop, it gets zarked right away. If it is in a warehouse, the Thing King moves the crate containing your thing into the workshop. If there is no room in the workshop, he first finds the grubbiest crate in the workshop, whether it be yours or somebody else's, and packs it off with all its crate-mates to a warehouse.
In its place he puts the crate containing your thing. Your thing gets zarked and you never even know that it wasn't in the workshop all along.
Each player's stock of things has the same numbers as everybody else's. The Thing King always knows who owns what thing and whose turn it is, so you can't ever accidentally zark somebody else's thing even if it has the same number as one of yours.
*** NOTES ***
Traditionally, the Thing King sits at a large, segmented table and is attended by pages (the so-called "table pages") whose job it is to help the king remember where all the things are and to whom they belong.
One consequence of the last Rule is that everybody's thing numbers will be similar from game to game, regardless of the number of players.
The Thing King has a few things of his own, some of which move back and forth between workshop and warehouse just like anybody else's, but some of which are just too heavy to move out of the workshop.
With the given set of rules, oft-zarked things tend to get kept mostly in the workshop, while little-zarked things stay mostly in a warehouse.
Sometimes even the warehouses get full. The Thing King then has to start piling things on the dump out back. This makes the game slow because it takes a long time to get things off the dump when they are needed in the workshop. A forthcoming change in the rules will allow the Thing King to select the grubbiest things in the warehouses and send them to the dump in his spare time, thus keeping the warehouses from getting too full. This means that the most infrequently-zarked things will end up in the dump so the Thing King won't have to get things from the dump so often (this is no longer "forthcoming" - it was implemented long ago). This should speed up the game when there are lots of users and the warehouses are getting full.
Long Live the Thing King!
That's the humorous description of virtual & paging.
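Stripped of the whimsy, the crate-swapping rule is least-recently-used (LRU) page replacement: the "grubbiest" crate is the one zarked least recently, and it is the one evicted when the workshop (real storage) is full. A minimal sketch in Python (the class and names are illustrative, not any real API):

```python
from collections import OrderedDict

class Workshop:
    """Toy LRU page cache: crates = pages, workshop = real storage."""

    def __init__(self, capacity):
        self.capacity = capacity      # how many crates fit in the workshop
        self.crates = OrderedDict()   # crate number -> contents, grubbiest first

    def zark(self, crate_no):
        if crate_no in self.crates:
            # Freshly zarked: move to the "least grubby" end.
            self.crates.move_to_end(crate_no)
            return "hit"
        if len(self.crates) >= self.capacity:
            # Workshop full: pack the grubbiest crate off to a warehouse.
            self.crates.popitem(last=False)
        self.crates[crate_no] = "things"
        return "page fault"

w = Workshop(capacity=2)
print(w.zark(1), w.zark(2), w.zark(1), w.zark(3))
# -> page fault page fault hit page fault  (crate 2, the grubbiest, was evicted)
```

Real systems approximate LRU with cheaper reference-bit schemes, but the behavior the game describes is the same: oft-zarked things stay in the workshop, little-zarked things drift to a warehouse.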
Here's a (rather large) bit more. . . .
Exabyte. Now there's a word you probably don't use in everyday conversation. What is an exabyte? It's one quintillion bytes. It's one million terabytes. It's one billion gigabytes. It's one-sixteenth of the amount of byte-addressable server storage that can be accessed via a 64-bit addressing scheme. Friends, that addressing capability is available today on the IBM zSeries platform running the z/OS operating system, and DB2 can take advantage of it.
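Those equivalences fall straight out of powers of two and can be checked directly (a quick sketch):

```python
# 64-bit byte addressing gives 2**64 distinct byte addresses.
total_bytes = 2 ** 64

EB = 2 ** 60  # one exabyte (binary)
TB = 2 ** 40  # one terabyte
GB = 2 ** 30  # one gigabyte

print(total_bytes // EB)  # 16 exabytes
print(total_bytes // TB)  # 16,777,216 -- roughly 16 million terabytes
print(total_bytes // GB)  # roughly 17 billion gigabytes
```

So an exabyte is indeed one-sixteenth of the 64-bit address range, and a single exabyte is about a billion gigabytes.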
I recently had the opportunity to visit my old friend Jim Teng at IBM's Silicon Valley Lab (home of DB2 for z/OS). Dr. Teng is an IBM Distinguished Engineer on the DB2 development team and, although his knowledge of DB2 for z/OS is quite broad, I've long thought of him as Dr. Buffer Manager. He was a lead architect of the part of DB2 that manages the caching of data and index pages in mainframe system memory. Jim and I took a stroll down memory lane (sorry, bad pun). I'll share with you some of what we talked about, plus other random (sorry again) thoughts on the subject of DB2's use of real and virtual storage.
Spread Out Versus Up
Virtual storage (the operating system's ability to simulate an amount of system memory larger than the hardware offered) was very important when the mainframe real storage resource was limited to 16MB. MVS (short for Multiple Virtual Storage) allowed many active 16MB address spaces on a system that really had room for only one address space in its entirety. The key to making that work was the operating system's ability to determine how much of that 16MB an address space really needed at any given time (usually much less than 16MB). The part that was "in memory" but not needed at the moment was paged out to files on disk.
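The mechanics behind that paging are address translation: each virtual address splits into a page number and an offset, and a page table maps the page to a real storage frame or marks it as paged out to disk. A minimal illustration (4KB pages as on the mainframe; the table and function names are hypothetical, not the actual MVS implementation):

```python
PAGE_SIZE = 4096  # the system pages storage in 4KB units

def split_address(virtual_addr):
    """Split a virtual address into (page number, offset within page)."""
    return virtual_addr // PAGE_SIZE, virtual_addr % PAGE_SIZE

# Toy page table: virtual page -> real frame, or None if paged out to disk.
page_table = {0: 7, 1: None, 2: 3}

def translate(virtual_addr):
    page, offset = split_address(virtual_addr)
    frame = page_table.get(page)
    if frame is None:
        # Page fault: the OS would bring this page in from disk,
        # evicting another frame's contents if real storage is full.
        raise RuntimeError(f"page fault on virtual page {page}")
    return frame * PAGE_SIZE + offset

print(translate(2 * PAGE_SIZE + 100))  # virtual page 2 -> frame 3 -> 12388
```

Because translation happens on every reference, an address space can appear to own its full 16MB (or, later, 2GB) while only its actively touched pages occupy real storage.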
DB2 took advantage of MVS technology by spreading its functional components across three address spaces: database services (storage pools such as the buffer pools and the EDM pool), system services (where threads are created), and IRLM (the lock manager). The DB2 developers, however, really had their eye on the operating system that was soon to follow MVS: MVS/XA (the XA stood for eXtended Architecture). MVS/XA and the new 3080-series mainframe servers offered a whopping 2GB of central storage. DB2 was quick to take advantage of MVS/XA 31-bit addressing, allowing for great big (so it seemed at the time) buffer pools and a much larger EDM pool. In fact, it seemed to me that DB2 was much quicker in offering this large-memory exploitation capability than DB2 users were in deciding to use it. I remember telling clients in the early 1990s, during my time on IBM's DB2 National Technical Support team, "That's right. I'm recommending that you increase the size of your buffer pool configuration to 10,000 buffers." That was 40MB! In hindsight, that was actually a pretty conservative recommendation, because 40MB was just 1/50th of the capacity of the DB2 for MVS/XA database services address space. Still, some folks kind of gulped on hearing it.
Of course, people did come to realize that it was okay to use way more than 16MB of DB2 DBM1 virtual storage (DBM1 being another name for the database services address space). And then what happened? OS/390 followed MVS/XA, the 3080 mainframes were superseded by the 3090 series, and DB2 gave us the ability to cache table and index pages in expanded storage hiperpools. As if 2GB weren't enough, we now had an extra 8GB of system memory available for page caching. Only "clean" pages (either nonupdated, or updated and subsequently "hardened" to disk) could be placed in hiperpools, but most cached pages were clean anyway. Going to an expanded storage hiperpool to retrieve a page pushed out of a central storage buffer pool sure beat getting the page from a slow (by comparison) disk subsystem. Again, DB2 provided this capability before most people were ready to take advantage of it. "Hey, folks! Does your mainframe system have expanded storage? It does? Are you using any of it for DB2 hiperpools? You're not? Hello! Your company paid for that expanded storage. Get some more bang for those bucks!"
And still DB2's storage-exploiting capabilities continued to expand. In the mid-1990s, the very cool DB2 data sharing technology made the scene. With a DB2 data sharing group on a multimainframe parallel sysplex, you had a single logical system that was in fact multiple DB2 subsystems with shared read/write access to one database. Why have one DB2 subsystem with x gigabytes of virtual and hiperpool space when you could have several subsystems (up to 32) in a data sharing group, with x gigabytes of buffer space in each subsystem? Oh, and don't forget the group buffer pools in the sysplex coupling facilities. Good-bye, memory-constraint worries, right?
When did that ocean of virtual storage start to look like the local swimming pool on the first day of summer vacation? When did we start running out of that 2GB resource that initially seemed so enormous? And why?
In some cases, it was a matter of letting our guard down. People had increased the size of their DB2 buffer pools for years with impunity; some lost sight of the fact that even a very large space has its limits. "I'm going to bump up the size of my DB2 buffer pool configuration again, and there should be no problem because..." Wham! "Out of storage? What do you mean, out of storage? My buffer pool configuration was only 1.8GB, not 2GB! What's that? Other things have to fit within that 2GB, too?" The rapid and recent growth of those "other things" in DBM1 made the difference between "2GB! Woo-hoo!" and "Please, IBM, I'd like some more."
One of the other users of DBM1 space that experienced a growth spurt (at many DB2 shops) is the EDM pool. For a long time, people got by with pretty small EDM pools (just a few megabytes). Then came packaged applications featuring lots and lots of dynamic SQL and a DB2 feature that allowed the caching of prepared dynamic statements in the EDM pool to boost the CPU efficiency of such workloads, along with a strong desire by many DB2 users to exploit that functionality. Suddenly, some folks were defining EDM pools of a few hundred megabytes for statement caching purposes. EDM pools were also enlarged to enable the binding of more DB2 packages with RELEASE(DEALLOCATE), a practice that can significantly reduce the overhead of DB2 data sharing, but at a cost of less stealable space in the EDM pool. Sections of packages bound with RELEASE(DEALLOCATE) stay fixed in the EDM pool until the DB2 thread through which the package was executed is deallocated. A decrease in stealable EDM pool space generally requires an increase in the overall size of the EDM pool.
In addition to EDM pool growth, DBM1 space was pressured by DB2 tablespace compression (compression dictionaries are kept in the database services address space). Throw in the RID pool (for sorting row IDs obtained from indexes in list prefetch and multi-index access operations), the sort pool (for result set row sorts), space for each open data set, and space for each concurrent DB2 user, and you can see how even a large DBM1 storage resource could end up being almost completely utilized.
Big Blue to the Rescue
IBM saw this coming a while back, and delivered a two-part remedy for those seeking DB2 virtual storage-constraint relief. The first part of this solution was exploitation of extended hardware addressing.
Extended hardware addressing refers to a zSeries server and z/OS enhancement that allows a mainframe to have more than 2GB of byte-addressable storage (also known as central storage). You were still limited to 2GB of virtual storage in an address space, but with more than 2GB of available central storage, you could have more active address spaces in the system without running into a paging problem. DB2 took advantage of extended hardware addressing by spreading into additional address spaces. First, IBM allowed DB2 users to place the portion of the EDM pool used for dynamic statement caching (several hundred megabytes for some DB2 systems) in a z/OS data space (a data-only address space). A subsequent enhancement allowed users to place the DB2 virtual storage buffer pools in data spaces (at CheckFree, we freed up a large amount of DBM1 space by doing this).
Part 2 of the virtual storage constraint relief solution came in the form of DB2 for z/OS version 8 support of the long-awaited (by mainframers) 64-bit virtual storage addressing capability recently delivered for the zSeries server line. (DB2 for Linux, Unix, and Windows had previously provided 64-bit addressing support for nonmainframe platforms.)
Really, Really Big
As I mentioned, a 64-bit addressing scheme enables access to 16 million terabytes of server storage. You can't go out right now and try to fill an address space that large, though. For now, the size of the DB2 buffer pool configuration is limited to 1TB (that should be okay for a while). Expanded storage hiperpools? Buffer pools in data spaces? So long to that stuff: it's time to pile back into DBM1, where we most like to have the buffer pools anyway.
Note that although we have long spoken of the 16MB virtual storage "line," it's time now to refer to the 2GB storage "bar." The DB2 for z/OS virtual buffer pools go above the bar in a 64-bit system. Some things, such as the space needed for each DB2 thread, stay below the bar. There's still a need for space below the 16MB line (a little is needed for each open data set), but not much.
Is there any cost associated with the use of DB2 for z/OS 64-bit addressing? Some, yes. For one thing, you will need more real memory (perhaps about 20 percent more) to support a given amount of DB2 virtual storage. (I don't have a problem with that: memory is a lot less expensive than it used to be.) There's also an increase in CPU utilization (attributable to the cost of handling larger addresses), although some of this can be reclaimed through the use of the new PGFIX buffer pool parameter, which causes pages to be fixed in memory (what we used to call V=R, or virtual equals real).
Will History Repeat Itself?
Will we be sitting around years from now, talking about the challenge of having to live with only 16 million terabytes of virtual storage, and reminiscing about how it seemed like so much back in '05? You might, kiddo. By the time people start complaining about the constraint of 64-bit addressing, I expect to be spending a lot of my time puttering about the garden and generally enjoying retirement.