
Memory
Memory. The word brings to mind both human memory and machine memory - computer memory.
Memory is vital to machines and animals alike. Without it, we as humans would have no consciousness and no ability to create, and animals would not be able to survive.
In machines, and especially in computers, memory is what allows the machine to function in various ways - for example, software to be run and data to be saved and processed.
What is memory?
Memory. Something just about everyone could use more of, including your computer. Memory is the ability to retain data for a period of time, short or long. That data can be as rich as the imagery, sounds, smells and other sensations of human memory, or as precisely defined as the binary data held in computer memory.
One of the differences between human and machine memory is that we can program and access machine memory through the use of software, but we cannot access human memory in the same straightforward manner. Yet.
Let's now talk about computer memory.
To start with, there are basically two kinds of memory in a computer: storage space (the hard drive) and active memory (RAM).
We will focus on active memory or RAM.
Computer Memory - RAM
People in the computer industry commonly use the term "memory" to refer to RAM (Random Access Memory). As your processor cranks away on your game, it uses RAM to store some of the data needed to make the game work. While all forms of memory work together, RAM is considered the main memory, since most data, regardless of its source, is stored in RAM before it is written to any other storage device. Consequently, RAM is used millions of times every second. A computer uses RAM to hold the temporary instructions and data needed to complete tasks, which lets the computer's CPU (Central Processing Unit) access instructions and data stored in memory very quickly.
Computer memory is extremely important to computer operation. Files and programs are loaded into memory from external media such as fixed disks (hard drives) and removable disks (floppies, tapes). Memory can be built right into a system board, but it is more typically attached to the system board in the form of a chip or module. Inside these chips are microscopic digital switches which are used to represent binary data.
A good example of this is when the CPU loads an application program - such as a word processing or page layout program - into memory, thereby allowing the application program to work as quickly and efficiently as possible. In practical terms, having the program loaded into memory means that you can get work done more quickly with less time spent waiting for the computer to perform tasks.
The process begins when you enter a command from your keyboard. The CPU interprets the command and instructs the hard drive to load the command or program into memory. Once the data is loaded into memory, the CPU is able to access it much more quickly than if it had to retrieve it from the hard drive.
This process of putting things the CPU needs in a place where it can get at them more quickly is similar to placing various electronic files and documents you're using on the computer into a single file folder or directory. By doing so, you keep all the files you need handy and avoid searching in several places every time you need them.
In general, the more RAM a computer has, the faster it operates. Why? RAM is where all the information is kept just before the computer needs to use it.
Think of it this way. During a conversation a person can speak without interruption if everything being talked about is in his or her memory. However, if a person does not have enough memory and has to look something up during the course of the conversation, in a book or newspaper, then the conversation stops until the needed information is found.
Computers are very similar; they can continue processing without interruption as long as all needed information is in memory (RAM). When that is not the case, the computer stops, retrieves the needed information from storage (i.e. Hard drive, CD, disk) and places it into memory and then continues processing. The more interruptions the computer receives to retrieve information the slower the computer. The more memory a computer has, the fewer interruptions and the faster the computer operates. More memory equates to more speed.
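To put rough numbers on this idea, here is a small Python sketch of the average cost per access as the fraction of data already sitting in RAM grows. The latency figures are illustrative assumptions, not measurements of any particular machine.

# Illustrative model of the "interruption" cost described above.
# The latency figures are assumptions chosen for demonstration only.
RAM_NS = 100                # assumed time to fetch data already in RAM
DISK_NS = 10_000_000        # assumed time to fetch data from the hard drive

def average_access_ns(hit_rate):
    """Average cost per access when a fraction hit_rate is already in RAM."""
    return hit_rate * RAM_NS + (1 - hit_rate) * DISK_NS

for hit_rate in (0.90, 0.99, 0.999):    # more RAM -> more of the data is resident
    print(f"{hit_rate:.1%} in RAM -> {average_access_ns(hit_rate):,.0f} ns per access on average")

Even at a 99% hit rate, the rare trips to disk dominate the average, which is why adding RAM (and so raising the hit rate) feels like such a large speed-up.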
These days, no matter how much memory your computer has, it never seems to be quite enough. Not long ago, it was unheard of for a PC (Personal Computer) to have more than 1 or 2 MB (Megabytes) of memory. Today, most systems require 64MB to run basic applications, and 256MB or more is needed for optimal performance when using graphical and multimedia programs.
As an indication of how much things have changed over the past two decades, consider this: in 1981, referring to computer memory, Bill Gates reportedly said, "640K ought to be enough for anybody" - and 640K is roughly two-thirds of a megabyte.
For some, the memory equation is simple: more is good; less is bad. However, for those who want to know more, this reference guide contains answers to the most common questions, plus much, much more.
Different RAM Types and Their Uses
Intro
The type of RAM doesn't matter nearly as much as how much of it you've got, but using plain old SDRAM memory today will slow you down. There are three main types of RAM: SDRAM, DDR and Rambus DRAM.
SDRAM (Synchronous DRAM)
Almost all systems used to ship with 3.3-volt, 168-pin SDRAM DIMMs. SDRAM is not an extension of the older EDO DRAM but a new type of DRAM altogether. SDRAM started out running at 66 MHz, while older fast page mode DRAM and EDO max out at 50 MHz. SDRAM is able to scale to 133 MHz (PC133) officially, and unofficially up to 180 MHz or higher. As processors get faster, new generations of memory such as DDR and RDRAM are required to get proper performance.
DDR (Double Data Rate SDRAM)
DDR basically doubles the data transfer rate of standard SDRAM by transferring data on both the up and down ticks of the clock cycle. DDR memory rated at 333 MHz actually operates at 166 MHz x 2 (aka PC333 / PC2700), and DDR266 at 133 MHz x 2 (PC266 / PC2100). DDR is a 2.5-volt technology that uses 184 pins in its DIMMs. It is physically incompatible with SDRAM, but uses a similar parallel bus, making it easier to implement than RDRAM, which is a different technology altogether.
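As a rough sanity check on these ratings, the short Python sketch below reproduces the arithmetic behind the DDR module names. The marketing names round the results slightly (166 MHz x 2 x 8 bytes is 2656 MB/s, sold as PC2700), so treat the output as approximate.

# Arithmetic behind DDR naming: the module rating (PC1600, PC2100, ...)
# is roughly the peak bandwidth in MB/s of a 64-bit (8-byte) DDR DIMM.
BUS_WIDTH_BYTES = 8          # a standard DDR DIMM has a 64-bit data path

def ddr_effective_mhz(base_clock_mhz):
    """DDR transfers on both clock edges, doubling the effective rate."""
    return base_clock_mhz * 2

def peak_bandwidth_mb_s(base_clock_mhz):
    return ddr_effective_mhz(base_clock_mhz) * BUS_WIDTH_BYTES

for base in (100, 133, 166, 200):    # PC1600, PC2100, PC2700, PC3200 respectively
    print(f"{base} MHz base clock -> {ddr_effective_mhz(base)} MT/s "
          f"-> ~{peak_bandwidth_mb_s(base)} MB/s peak")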
Rambus DRAM (RDRAM)
Despite its higher price, Intel has given RDRAM its blessing for the consumer market, and it will be the sole memory choice for Intel's Pentium 4. RDRAM is a serial memory technology that arrived in three flavors: PC600, PC700 and PC800. PC800 RDRAM has double the maximum throughput of old PC100 SDRAM, but higher latency. RDRAM designs with multiple channels, such as those in Pentium 4 motherboards, are currently at the top of the heap in memory throughput, especially when paired with PC1066 RDRAM.
DIMMs vs. RIMMs
DRAM comes in two major form factors: DIMMs and RIMMs.
DIMMs are 64-bit components, but if used in a motherboard with a dual-channel configuration (like with an Nvidia nForce chipset) you must pair them to get maximum performance. So far there aren't many DDR chipsets that use dual channels. Typically, if you want to add 512 MB of DIMM memory to your machine, you just pop in a 512 MB DIMM if you've got an available slot. DIMMs for SDRAM and DDR are different, and not physically compatible. SDRAM DIMMs have 168 pins and run at 3.3 volts, while DDR DIMMs have 184 pins and run at 2.5 volts.
RIMMs use only a 16-bit interface but run at higher speeds than DDR. To get maximum performance, Intel RDRAM chipsets require the use of RIMMs in pairs over a dual-channel 32-bit interface. You have to plan more when upgrading and purchasing RDRAM.
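The same back-of-the-envelope arithmetic works for RDRAM. The sketch below assumes a 16-bit (2-byte) channel and uses the effective data rates implied by the PC600/PC800/PC1066 names.

# Rough peak-bandwidth arithmetic for RDRAM (illustrative only).
CHANNEL_WIDTH_BYTES = 2      # a RIMM channel is 16 bits wide

def rdram_peak_mb_s(effective_mhz, channels=1):
    """Peak throughput for PC600/PC700/PC800/PC1066-style ratings."""
    return effective_mhz * CHANNEL_WIDTH_BYTES * channels

print(rdram_peak_mb_s(800))               # PC800, single channel  -> 1600 MB/s
print(rdram_peak_mb_s(800, channels=2))   # dual-channel PC800     -> 3200 MB/s
print(rdram_peak_mb_s(1066, channels=2))  # dual-channel PC1066    -> ~4.2 GB/s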
From the top: SIMM, DIMM and SODIMM memory modules
Memory Speed
SDRAM initially shipped at a speed of 66 MHz. As memory buses got faster, it was pumped up to 100 MHz and then 133 MHz. The speed grades are referred to as PC66 (unofficially), PC100 and PC133 SDRAM respectively. Some manufacturers are shipping a PC150 speed grade. However, this is an unofficial speed rating, and of little use unless you plan to overclock your system.
DDR comes in PC1600, PC2100, PC2700 and PC3200 DIMMs. A PC1600 DIMM is made up of PC200 DDR chips, while a PC2100 DIMM is made up of PC266 chips. PC2700 uses PC333 DDR chips, and PC3200 uses PC400 chips, which haven't yet gained widespread support. Go for PC2700 DDR: it costs about the same as PC2100 memory and will give you better performance.
RDRAM comes in PC600, PC700, PC800 and PC1066 speeds. Go for PC1066 RDRAM if you can find it. If you can't, PC800 RDRAM is widely available.
CAS Latency
SDRAM comes with latency ratings, or "CAS (Column Address Strobe) latency" ratings. Standard PC100 / PC133 SDRAM comes in CAS 2 or CAS 3 speed ratings. The lower latency of CAS 2 memory will give you more performance. It also costs a bit more, but it's worth it.
DDR memory comes in CAS 2 and CAS 2.5 ratings, with CAS 2 costing more and performing better.
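Since a CAS rating is counted in clock cycles, it only translates into actual time relative to the bus clock. The short sketch below does that conversion using the nominal clocks quoted above; for DDR, the CAS count applies to the base clock, not the doubled data rate.

# Convert CAS latency (in clock cycles) to nanoseconds: cycles / frequency.
def cas_latency_ns(cas_cycles, bus_mhz):
    return cas_cycles / bus_mhz * 1000   # 1 / MHz = microseconds, x1000 = ns

print(f"PC133 SDRAM, CAS 2:  {cas_latency_ns(2, 133):.1f} ns")
print(f"PC133 SDRAM, CAS 3:  {cas_latency_ns(3, 133):.1f} ns")
print(f"DDR266, CAS 2:       {cas_latency_ns(2, 133):.1f} ns")
print(f"DDR266, CAS 2.5:     {cas_latency_ns(2.5, 133):.1f} ns")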
RDRAM has no CAS latency ratings, but may eventually come in 32-bank and 4-bank forms, with 32-bank RDRAM costing more and performing better. For now, it's all 32-bank RDRAM.
Understanding Cache
Cache memory is fast memory that serves as a buffer between the processor and main memory. The cache holds data that was recently used by the processor and saves a trip all the way back to slower main memory. The memory structure of PCs is often thought of as just main memory, but it's really a five- or six-level structure:
The first two levels of memory are contained in the processor itself: the processor's small internal memory, or registers, and the L1 cache, which is the first level of cache.
The third level of memory is the L2 cache, usually contained on the motherboard. However, the Celeron chip from Intel actually contains 128K of L2 cache within the form factor of the chip. More and more chip makers are planning to put this cache on board the processor itself. The benefit is that it will then run at the same speed as the processor, and cost less to put on the chip than to set up a bus and logic externally from the processor.
The fourth level is referred to as L3 cache. This cache used to be the L2 cache on the motherboard, but now that some processors include L1 and L2 cache on the chip, it becomes L3 cache. Usually it runs slower than the processor, but faster than main memory.
The fifth level (or fourth if you have no "L3 cache") of memory is the main memory itself.
The sixth level is a piece of the hard disk used by the Operating System, usually called virtual memory. Most operating systems use this when they run out of main memory, but some use it in other ways as well.
This six-tiered structure is designed to efficiently speed data to the processor when it needs it, and also to allow the operating system to function when levels of main memory are low. You might ask, "Why is all this necessary?" The answer is cost. If there were one type of super-fast, super-cheap memory, it could theoretically satisfy the needs of this entire memory architecture. This will probably never happen since you don't need very much cache memory to drastically improve performance, and there will always be a faster, more expensive alternative to the current form of main memory.
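To make the idea of a cache concrete, here is a toy Python model of a small, fast store sitting in front of slower main memory. The capacity and latency numbers are made-up assumptions; the point is how a small cache absorbs most accesses when a program keeps reusing the same data.

from collections import OrderedDict

CACHE_LINES = 8              # assumed cache capacity, in "lines"
CACHE_NS, MEMORY_NS = 5, 100 # assumed latencies for the cache and main memory

class ToyCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()   # addresses currently cached, in LRU order
        self.hits = self.misses = 0

    def access(self, address):
        if address in self.lines:
            self.lines.move_to_end(address)   # mark as most recently used
            self.hits += 1
            return CACHE_NS
        self.misses += 1                      # must go to main memory
        self.lines[address] = True
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)    # evict the least recently used line
        return MEMORY_NS

cache = ToyCache(CACHE_LINES)
workload = [addr for _ in range(100) for addr in range(6)]   # a small, reused working set
total_ns = sum(cache.access(a) for a in workload)
print(f"hits={cache.hits} misses={cache.misses} "
      f"average access={total_ns / len(workload):.1f} ns")

With a working set of six addresses and eight cache lines, only the first pass misses, so the average access time stays close to the cache latency rather than the main-memory latency.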
Memory Redundancy
One important aspect to consider in memory is what level of redundancy you want. There are a few different levels of redundancy available in memory. Depending on your motherboard, it may support all or only some of these types of memory:
The cheapest and most prevalent level of redundancy is non-parity memory. When non-parity memory encounters a memory error, the operating system has no way of knowing; the machine will most likely crash, but it could also silently corrupt data. This is the most common type of memory, and unless otherwise specified, that's what you're getting. It works fine for most applications, but I wouldn't run life-support systems on it.
The second level of redundancy is parity memory (also called true parity). Parity memory has extra chips that act as parity chips, so the module can detect when a memory error has occurred and signal the operating system. You'll probably still crash, but at least you'll know why.
The third level of redundancy is ECC (Error Checking and Correcting) memory. This requires even more logic and is usually more expensive. Not only does it detect memory errors, it also corrects 1-bit errors; a 2-bit error will still cause problems. Some motherboards let you use ECC memory.
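For the curious, here is a minimal Python illustration of how a single parity bit detects (but cannot correct) a one-bit error. Real parity modules store one such bit per byte; ECC modules add enough extra bits to correct single-bit errors, which this sketch does not attempt.

def parity_bit(byte):
    """Even parity: the extra bit makes the total count of 1 bits even."""
    return bin(byte).count("1") % 2

stored_byte = 0b10110010
stored_parity = parity_bit(stored_byte)

corrupted = stored_byte ^ 0b00000100     # simulate a single flipped bit
if parity_bit(corrupted) != stored_parity:
    print("memory error detected (but not corrected)")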
RAM Memory Technology
Memory Types
In order to enable computers to work faster, there are several types of memory available today. Within a single computer there is no longer just one type of memory. Because the types of memory relate to speed, it is important to understand the differences when comparing the components of a computer.
SIMM (Single In-line Memory Modules)
SIMMs hold a single row of DRAM, EDO or BEDO chips soldered onto a PCB, and one SIMM can contain several chips. When you add more memory to a computer, most likely you are adding a SIMM. The first SIMMs transferred 8 bits of data at a time and had 30 pins. When CPUs began to read data in 32-bit chunks, a wider SIMM was developed with 72 pins. 72-pin SIMMs are 3/4" longer than 30-pin SIMMs, have a notch in the lower middle of the PCB, and install at a slight angle.
DIMM (Dual In-line Memory Modules)
DIMMs hold two rows of DRAM, EDO or BEDO chips, so they can contain twice as much memory on the same size circuit board. DIMMs have 168 pins, transfer data in 64-bit chunks, install straight up and down, and have two notches on the bottom of the PCB.
SODIMM (Small Outline DIMM)
SODIMMs are commonly used in notebooks and are smaller than normal DIMMs. There are two types of SODIMM: 72 pins with a 32-bit data width, or 144 pins with a 64-bit data width.
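A quick way to remember those data widths is to work out how many modules it takes to fill one 64-bit memory bank, as in this small sketch (the widths are the ones quoted above):

# Modules needed to fill one 64-bit bank, given each module's data width.
MODULE_DATA_BITS = {"30-pin SIMM": 8, "72-pin SIMM": 32, "168-pin DIMM": 64}
BANK_WIDTH_BITS = 64     # the data bus width of Pentium-class processors

for module, bits in MODULE_DATA_BITS.items():
    print(f"{module}: {BANK_WIDTH_BITS // bits} module(s) per 64-bit bank")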
RDRAM - RIMM
Rambus, Inc., in conjunction with Intel, created a new technology, Direct RDRAM, to increase memory access speed. The in-line memory modules are called RIMMs, and they appeared on motherboards sometime during 1999. They have 184 pins and provide 1.6 GB per second of peak bandwidth in 16-bit chunks. As chip speed gets faster, so does access to memory, along with the amount of heat produced. An aluminum sheath, called a heat spreader, covers the module to protect the chips from overheating.
SO RIMM
Similar in appearance to a SODIMM, but uses Rambus technology.
Technology
DRAM (Dynamic Random Access Memory)
One of the most common types of computer memory (RAM). It can only hold data for a short period of time and must be refreshed periodically. DRAMs are measured by storage capacity and access time. Storage is rated in megabytes (8 MB, 16 MB, etc.). Access time is rated in nanoseconds (60ns, 70ns, 80ns, etc.) and represents the time needed to save or return information. With 60ns DRAM, it takes 60 billionths of a second to save or return information. The lower the nanosecond rating, the faster the memory operates. DRAM chips require two CPU wait states for each operation and can only execute either a read or a write at one time.
FPM (Fast Page Mode)
At one time this was the most common type and was often just referred to as DRAM. It offered faster access to data located within the same row.
EDO (Extended Data Out)
Newer than FPM DRAM (1995), it requires only one CPU wait state. You can gain a 10 to 15% improvement in performance with EDO memory.
BEDO (Burst Extended Data Out)
A step up from EDO chips. It requires zero wait states and provides at least another 13 percent increase in performance.
SDRAM (Synchronous DRAM)
Introduced in late 1996, SDRAM synchronizes itself with the timing of the CPU. It also takes advantage of interleaving and burst mode functions. SDRAM is faster and more expensive than earlier DRAM. It comes in speeds of 66, 100, 133, 200 and 266 MHz.
DDR SDRAM (Double Data Rate Synchronous DRAM)
Allows transactions on both the rising and falling edges of the clock cycle. With a bus clock speed of 100 MHz it yields an effective data transfer rate of 200 MHz.
Direct Rambus
Extraordinarily fast. By using a doubled clock it provides a transfer rate of up to 1.6 GB/s, an effective 800 MHz over a narrow 16-bit bus.
Cache RAM
This is where SRAM is used, to store information required by the CPU. It comes in kilobyte sizes such as 128 KB, 256 KB, etc.
Other Memory Types
VRAM (Video RAM)
VRAM is a video version of FPM and is most often used on video accelerator cards. Because it has two ports, it provides the extra benefit over DRAM of being able to execute read and write operations simultaneously: one channel is used to refresh the screen while the other manages image changes. VRAM tends to be more expensive.
Flash Memory
This is a solid-state, nonvolatile, rewritable memory that functions like RAM and a hard disk combined. If power is lost, all data remains in memory. Because of its high speed, durability and low voltage requirements, it is ideal for digital cameras, cell phones, printers, handheld computers, pagers and audio recorders.
Shadow RAM
When your computer starts up (boots), minimal instructions for performing the startup procedures and video controls are stored in ROM (Read Only Memory), in what is commonly called the BIOS. ROM executes slowly. Shadow RAM makes it possible to move selected parts of the BIOS code from ROM into faster RAM.
Memory (RAM) and its influence on performance
It is well established that adding more memory to a computer system increases its performance. If there isn't enough room in memory for all the information the CPU needs, the computer has to set up what's known as a virtual memory file. To do so, the operating system reserves space on the hard disk to simulate additional RAM. This process, referred to as "swapping", slows the system down. In an average computer, it takes the CPU approximately 200ns (nanoseconds) to access RAM, compared to roughly 12,000,000ns to access the hard drive. To put this into perspective, it is the equivalent of a normally 3 1/2 minute task taking about 4 1/2 months to complete!
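If you want to check that analogy, the arithmetic is easy to reproduce; the result lands close to the 4 1/2 months quoted above.

# Scaling the RAM vs. hard-drive access times up to human terms.
RAM_NS, DISK_NS = 200, 12_000_000
slowdown = DISK_NS / RAM_NS              # 60,000x slower

task_minutes = 3.5
scaled_minutes = task_minutes * slowdown
scaled_months = scaled_minutes / 60 / 24 / 30   # minutes -> hours -> days -> months
print(f"slowdown: {slowdown:,.0f}x")
print(f"a {task_minutes}-minute task becomes ~{scaled_months:.1f} months")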
Why does RAM influence computer performance?
Strictly speaking, RAM has no direct influence on the processor's own performance: RAM cannot make the processor work faster, that is, adding RAM does not increase the processor's processing power.
So what is the relationship between RAM and performance? The story is not as simple as it seems, and we need to look a little more closely at how the computer works to understand it.
The processor fetches the instructions it executes from the computer's RAM. If those instructions are not in RAM, they have to be transferred from the hard disk (or from some other storage system, such as floppy disks, CD-ROMs or Zip disks) to RAM - the well-known process of "loading" a program.
So more RAM means that more instructions fit into memory and, therefore, bigger programs (or more of them) can be loaded at once. All current operating systems are multitasking, meaning we can run more than one program at a time: you can, for example, have a word processor and a spreadsheet open ("loaded") in RAM at the same time. However, depending on how much RAM your computer has, those programs may simply hold too many instructions to fit in RAM at the same time (or even on their own, depending on the program).
In principle, if you asked the computer to load a program that did not fit in RAM - because little RAM is installed or because what is installed is already full - the operating system would have to show a message like "Insufficient Memory".
But this does not happen, thanks to a feature that all processors since the 386 support, called virtual memory. With this feature, the operating system creates a file on the hard disk, called the swap file, that is used to hold data from RAM. If you attempt to load a program that does not fit in RAM, the operating system moves to the swap file parts of programs that are currently in RAM but not being accessed, freeing space and allowing the new program to be loaded. When you need to access a part of a program that has been moved to the hard disk, the opposite happens: the system swaps out parts of memory that are not in use at that moment and transfers the original content back into RAM.
The problem is that the hard disk is a mechanical device, not an electronic one, so data transfer between the hard disk and RAM is much slower than data transfer between the processor and RAM. To give an idea of the magnitude: the processor typically communicates with RAM at around 800 MB/s (on a 100 MHz bus), while hard disks transfer data at rates such as 33 MB/s, 66 MB/s or 100 MB/s, depending on their technology (DMA/33, DMA/66 and DMA/100, respectively).
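Using those same figures, the sketch below shows how long it takes to move a 100 MB chunk of data over each path. The rates are the nominal ones quoted above, so treat the results as order-of-magnitude only.

# Time to move a fixed amount of data at each quoted transfer rate.
RATES_MB_S = {"RAM (100 MHz bus)": 800, "DMA/33 disk": 33,
              "DMA/66 disk": 66, "DMA/100 disk": 100}
CHUNK_MB = 100

for path, rate in RATES_MB_S.items():
    print(f"{path}: {CHUNK_MB / rate:.2f} s to move {CHUNK_MB} MB")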
So every time the computer has to exchange data between memory and the swap file on the hard disk, you notice a slowdown, since this exchange is not instantaneous.
When we install more RAM, the chance of "running out" of RAM and needing to swap with the hard disk is smaller, and as a result the computer feels faster than before.
To make this clearer, suppose your computer has 64 MB of RAM and all the programs loaded (open) at the same time use 100 MB. The system has to use the virtual memory feature, swapping with the hard disk. If that same computer had 128 MB, no swapping would be needed with the same programs loaded, and the computer would be faster.
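Here is that example in numbers, as a tiny sketch. The optional operating-system reserve is a hypothetical parameter added for illustration and defaults to zero.

def swap_needed_mb(installed_ram_mb, working_set_mb, reserved_for_os_mb=0):
    """Rough estimate of how much data must spill to the swap file."""
    free = installed_ram_mb - reserved_for_os_mb
    return max(0, working_set_mb - free)

print(swap_needed_mb(64, 100))    # ~36 MB must live in the swap file
print(swap_needed_mb(128, 100))   # 0 MB - everything fits in RAM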
The more peripherals you add to a computer, or the more advanced applications you ask it to perform, the more RAM it needs to operate smoothly.
Virtual Memory and its influences on performance
While virtual memory makes it possible for computers to handle larger and more complex applications, as with any powerful tool it comes at a price. The price in this case is performance: a virtual memory operating system has a lot more to do than an operating system that is not capable of virtual memory. This means that performance will never be as good with virtual memory as with the same application 100% memory-resident.
However, this is no reason to throw up one's hands and give up. The benefits of virtual memory are too great for that, and with a bit of effort good performance is possible. What must be done is to look at the system resources that are impacted by heavy use of the virtual memory subsystem.
Worst Case Performance Scenario
For a moment, take what you have read earlier, and consider what system resources are used by extremely heavy page fault and swapping activity:
· RAM -- It stands to reason that available RAM will be low (otherwise there would be no need to page fault or swap).
· Disk -- While disk space would not be impacted, I/O bandwidth would be.
· CPU -- The CPU will be expending cycles doing the necessary processing to support memory management and setting up the necessary I/O operations for paging and swapping.
The interrelated nature of these loads makes it easy to see how resource shortages can lead to severe performance problems. All it takes is:
· A system with too little RAM
· Heavy page fault activity
· A system running near its limit in terms of CPU or disk I/O
At this point, the system will be thrashing, with performance rapidly decreasing.
Best Case Performance Scenario
At best, the virtual memory subsystem presents only a minimal additional load to a well-configured system:
· RAM -- Sufficient RAM for all working sets with enough left over to handle any page faults
· Disk -- Because of the limited page fault activity, disk I/O bandwidth would be minimally impacted
· CPU -- The majority of CPU cycles will be dedicated to actually running applications, instead of memory management
From this, the overall point to keep in mind is that the performance impact of virtual memory is minimal when it is used as little as possible. This means that the primary determinant of good virtual memory subsystem performance is having enough RAM.
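One practical way to see how hard the virtual memory subsystem is working is to look at your own process's page-fault counters. On Unix-like systems, Python exposes them through the standard resource module (not available on Windows); major faults are the ones that required disk I/O.

import resource   # Unix-only standard library module

usage = resource.getrusage(resource.RUSAGE_SELF)
print("minor page faults (no disk I/O):", usage.ru_minflt)
print("major page faults (required disk I/O):", usage.ru_majflt)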