
Talk:Memory paging

Latest comment: 3 months ago by Guy Harris in topic Paging on Windows

x86-architecture specific


Clearly there are portions of this article that assume the x86 architecture is understood. I say, remove these architecture-specific parts as the concepts in the article apply universally. — Preceding unsigned comment added by 128.138.124.156 (talk) 21:51, 3 May 2004 (UTC)Reply

I've removed the "only on 286" and "only on 386" bits and replaced them with a 'for example' note for Intel. Could probably be removed entirely. The example looks like an Intel system, but since it's only an example it can probably stay. - Lady Lysine Ikinsile 12:19, Jun 9, 2004 (UTC)

Paging on Windows


Add that this definition doesn't hold for Windows: "Because RAM is faster than auxiliary storage, paging is avoided until there is not enough RAM to store all the data needed."

Windows starts to swap even when it still has 30% or more RAM free. Even if you have a ton of RAM (e.g. 2GB under XP) it still swaps, slowing down all operations, and makes the system unstable if you turn it off. It should be noted to have the worst swapping of all and the biggest swap file fragmentation. Linux and AmigaOS use a partition as the solution, which is more efficient even if it sounds weird. —Preceding unsigned comment added by 79.175.123.185 (talk) 22:47, 7 February 2011 (UTC)Reply

"Because RAM is faster than auxiliary storage, paging is avoided until there is not enough RAM to store all the data needed." A page fault occurs if a reference is made to data that's in a page that's not currently in main memory; servicing page faults by bringing pages into memory is part of paging.
Another part is removing pages from main memory (writing them back to secondary storage if they've been modified since they were last written to secondary storage). That is done if memory is needed to, for example, store a page that's being paged into memory; it might also be done in order to keep enough free pages to more quickly handle page faults. That's discussed in Memory paging § Page replacement techniques under "Free page queue, stealing, and reclamation".
Paging in, as described in my first paragraph, is done when a page isn't in memory, whether it's not in memory because it was in memory earlier but was evicted or because it was never fetched into memory in the first place. For example, if a program is started, the operating system might map the entire code section of the program into the address space, pre-load the first few pages of code, and rely on the rest of the code to be brought into memory as needed via page faults. I suspect Windows does that, just as most if not all UN*Xes do.
Paging out, as described in my second paragraph, is done either if a free page of main memory is needed immediately or in order to keep a pool of free pages available. Perhaps Windows doesn't do the latter, but the article doesn't claim that it does; it just says that "Some operating systems periodically look for pages that have not been recently referenced and then free the page frame and add it to the free page queue, a process known as "page stealing".", without indicating which operating systems do that.
As for swapping, that's a term that's used in various operating systems that do paging, but I'm not sure it's used in the same sense for all operating systems. Guy Harris (talk) 06:27, 31 July 2024 (UTC)Reply
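The fault-in/steal-out cycle described above can be sketched as a toy model (a Python sketch with invented names like `PagedMemory`; no real OS implements it this simply):

```python
from collections import OrderedDict

class PagedMemory:
    """Toy model of demand paging with a free-frame reserve and page stealing."""
    def __init__(self, frames, low_water=1):
        self.frames = frames            # number of physical page frames
        self.low_water = low_water      # keep at least this many frames free
        self.resident = OrderedDict()   # page -> dirty flag, in LRU order
        self.faults = 0
        self.writebacks = 0

    def steal_pages(self):
        # Evict least-recently-used pages until the free pool is big enough.
        while self.frames - len(self.resident) < self.low_water:
            page, dirty = self.resident.popitem(last=False)
            if dirty:
                self.writebacks += 1    # modified pages go back to backing store

    def access(self, page, write=False):
        if page not in self.resident:
            self.faults += 1            # page fault: bring the page in
            self.steal_pages()          # first make sure a frame is free
            self.resident[page] = False # paged in, initially clean
        self.resident.move_to_end(page) # most recently used
        if write:
            self.resident[page] = True  # mark dirty

mem = PagedMemory(frames=3)
for p in [1, 2, 1, 3, 4]:
    mem.access(p)             # 1, 2, 3 and 4 fault in; the second 1 hits
mem.access(1, write=True)     # page 1 is now dirty: stealing it costs a writeback
```

Clean pages can simply be dropped when stolen; only the dirty page 1 would need writing back if further accesses forced it out.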

Paging and Page (computer science)


I've separated out references to Paging and Page (computer science). I've also basically gotten rid of references to memory paging, replacing them with Page (computer science).

For now, Page (computer science) is a redirect to Paging, but I've kept them separate so that someone in the future can make a proper article for Page (computer science).

Anyone feel like starting on the article Page (computer science)? — Preceding unsigned comment added by Pengo (talkcontribs) 08:09, 19 January 2005 (UTC)Reply

SEEMS REALLY FINE NOW. — Preceding unsigned comment added by 167.206.128.33 (talk) 00:37, 26 January 2006 (UTC)Reply

Removed wrong statement


I edited out the following line:

(in the Intel x86 family, for example, only i386 and higher CPUs possess MMUs)

That's wrong. The 286 lacked support for paging (used segmentation instead), but it did have an MMU. — Preceding unsigned comment added by 80.36.7.132 (talk) 23:46, 27 September 2006 (UTC)Reply

re-wrote entire article


The original article mixed in discussion of advantages and disadvantages that really applied to a discussion of operating system design, not the paging system. For example, one disadvantage listed stated that inter-task communication was more difficult. This would only be the case if two tasks running in different address spaces wanted to share data in RAM.

Address space isolation is usually restricted to different users sharing the same computer, not two tasks run by the same user. A database system might have a task reading a file and another writing data from the file to the database. These two tasks would run in the same address space and would be able to see all of the RAM addressable by the other.

Large commercial installations will have many users sharing the computer simultaneously. In this case, each user would have its own address space. A task run by one user could not see the memory used by a task run by another user; that memory is totally invisible to it. A task cannot generate a page fault by trying to address memory in a different address space; it simply can't do it.

--Eric 14:13, 11 February 2007 (UTC)Reply
"Address space isolation is usually restricted to different users sharing the same computer, not two tasks run by the same user."
Huh?! Separate processes always run in their own address spaces, no matter which user runs them. Applications might decide to use shared memory for communication; however, they will only share some mappings in that case, and not the entire address space. -- intgr 15:21, 11 February 2007 (UTC)Reply
As I said earlier, a database system that has two tasks or processes, one that reads from a file and one that writes to a database, will run both in the same address space. They will not need to use shared memory. Operating systems and many applications could not run if all their subtasks each had their own address space. It really depends on the operating system though, so which operating system are you referring to? Many systems will create an address space for each application, so there are many ways to mix and match. In most cases, memory protection between tasks is enough security.

I am not sure about a Java virtual machine running under Windows; are you familiar with that? --Eric 16:12, 11 February 2007 (UTC)Reply

Yes, Java can spawn separate virtual machines for a single user — and even if it didn't, it would be some trickery within the JVM, and irrelevant to the operating system. -- intgr 18:59, 11 February 2007 (UTC)Reply
I've only worked on two operating systems myself and neither would provide a discrete address space to a subtask: there is too much overhead in updating the page tables every time there is a context switch between tasks that are tightly coupled. I am not sure where we are going with this discussion or if you agree with me. I am reacting to "Separate processes always run in their own address spaces". Perhaps we are just discussing semantics: processes, tasks, users ... Thoughts? —The preceding unsigned comment was added by Sailorman2003 (talkcontribs) 19:26, 11 February 2007 (UTC).Reply
Sorry, I missed your other earlier comment due to the confusing indentation.
"As I said earlier, a database system that has two tasks or proceses"
Perhaps I come from a different background than you, but as far as I can tell, the term "task" is never solidly defined. I realize now that you were referring to threads, and not processes. Yes, separate threads under a single process do typically share the memory space (such as the DBMS server/daemon process). However, not all DBMSes use threads — the PostgreSQL server, for example, forks new processes instead of threads. Or with Berkeley DB/SQLite, there is no server at all; database updates are done within the address space of the calling process, and concurrent processes accessing the same database use memory-mapped files or shared memory for concurrency control.
"so which operating system are you referring to?"
Pretty much all the mainstream ones: Unices, Linux (since it got the clone() syscall), *BSD, Mac OS X, Windows NT, Windows 9x.
"there is too much overhead in updating the page tables every time there is a context switch between tasks that are tightly coupled."
"Tightly coupled"? You mean message-passing between threads? How many DBMSes use synchronous message-passing for communication? Normally, performance-critical or heavily concurrent code handles concurrency with locking and occasional semaphores, to keep the number of context switches at minimum, and to increase memory locality and concurrency on SMP computers (at the cost of complexity). While graphical user interfaces normally use message passing, usually one thread is processing all GUI messages within a single process — again, no context switching.
As far as I can tell, there is no difference from the kernel level, whether it's switching contexts between different processes, or threads within a single process. However, the old Linux threading library (before the clone() syscall appeared), and a few programming language interpreters, do implement concurrency at a higher level, dividing a single process and a single OS thread into several logical threads, managing the "context switching" in user space, so that indeed no context switches are actually made, and no updates to page tables are needed. Sometimes, this approach is called "microthreading" (though this term also has other meanings). This is, however, increasingly rare these days. The standard Java and Python interpreters, for example, use native OS threads where available. But indeed, microthreads do yield better performance under specific workloads, for the reason you pointed out.
-- intgr 22:32, 11 February 2007 (UTC)Reply
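intgr's point that threads of one process share a single address space (while separate processes do not) can be seen directly from user space; a minimal Python sketch:

```python
import threading

shared = []  # one object in the single address space all threads share

def worker(n):
    # Every thread writes through the same page tables -- no IPC and no
    # shared-memory setup is needed, unlike between separate processes.
    shared.append(n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All four values end up in `shared`; two separate processes would each see only their own copy of the list unless they explicitly set up shared memory.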

My operating experience is with VM, IBM's multi user operating system and it was 15 years ago. That said, it was then and probably is now one of the most powerful multi user operating systems available.

After reading your latest comments, I think we are mostly in agreement about when there is a context switch and when not.

VM ran the kernel in real addressing mode and the code resided in low memory. The rest of the operating system resided in a shared segment that was mapped into every user's virtual machine and ran within the user's address space.

DBMS systems ran partly in the user's virtual machine and partly in their own virtual machine, each with its own discrete address space. The user virtual machine ran the code that created schemas, parsed SQL statements, generated stored procedures...

The DBMS virtual machine did the query optimization, ran the queries that materialized the set and ran any necessary stored procedures. It ran entirely in one address space. It handled concurrency as you suggested with semaphores. Communication between the user portion of the DBMS and the DBMS virtual machine was through message passing, as I believe is the case with Windows.

I don't have an in depth knowledge of UNIX, but I believe it does essentially the same thing.--Eric 18:38, 15 February 2007 (UTC)Reply

Seems like this confusion is caused purely due to a difference in terminology. :)
Conceptually, the Java VM has very little in common with the term "virtual machine" in VM-based operating systems.
Unix doesn't have a concept of virtual machines. Essentially, the VM approach seems to imply a strict process hierarchy of OS→user→process or OS→VM→process, while Unices have a "flat" process model, followed by threads on the lower level: OS→process→thread. "User" is just an attribute of the process. A process can interact with other processes only through syscalls (calls to the kernel), no matter who owns the processes. Though syscalls can also be used to set up shared memory regions.
So if I understood you correctly: in this OS, all processes in a single VM shared the address space; interaction between separate VMs had to go through the kernel. In this case, the VMs are analogous to processes in Unix, and VM processes are analogous to OS threads in Unix. Switches between separate threads of a single Unix process don't go through a context switch either (though schedulers do not seem to optimize for this due to fairness considerations).
"Communication between the user portion of the DBMS and the DBMS virtual machine was through message passing as I believe is the case with windows."
Communication between the server and client is indeed done over sockets, most likely implementing a custom messaging protocol. When I said they use locks/semaphores, I meant concurrency within the DBMS server. -- intgr 10:46, 22 February 2007 (UTC)Reply

Thanks for the clarification; now I think we are on the same page:). --Eric 19:28, 22 February 2007 (UTC)Reply

"so which operating system are you referring to?"
Pretty much all the mainstream ones: Unices, Linux (since it got the clone() syscall), *BSD, Mac OS X, Windows NT, Windows 9x.
And also VMS. Jeh (talk) 16:15, 20 February 2008 (UTC)Reply
"there is too much overhead in updating the page tables every time there is a context switch between tasks that are tightly coupled."
On most modern processors (and the VAX, and even x86 ;) ) the "overhead" amounts to loading a single register -- e.g. CR3 on x86 -- with the base address of the first-level page table (each process has a different one). To be sure there is some slight additional cost associated with this, as the non-global entries in the translation cache have to be invalidated... but there are not that many of them in the first place and so the cache gets repopulated quickly. A process-to-process (that is, address space to address space) context switch is therefore only marginally more costly than an intra-process, thread-to-thread context switch. Jeh (talk) 16:15, 20 February 2008 (UTC)Reply
Separate processes always run in their own address spaces? That may be true for today's most popular operating systems. However, several historically important operating systems were single address space operating systems. I presume that's because they were designed on early CPUs where it was too much hassle to switch from one address space to another -- unlike the "modern" CPUs that Jeh mentioned. --DavidCary (talk) 18:39, 3 December 2011 (UTC)Reply
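Jeh's cost argument can be illustrated with a toy model (a Python sketch; `ToyMMU`, the dictionary TLB, and the string CR3 values are all invented for illustration):

```python
class ToyMMU:
    """Toy model: a process switch reloads CR3 and drops non-global TLB
    entries; a thread switch (same address space) touches neither."""
    def __init__(self):
        self.cr3 = None
        self.tlb = {}                                 # cached va -> pa entries
        self.kernel_tlb = {"kernel_va": "kernel_pa"}  # entries marked 'global'

    def switch_process(self, new_cr3):
        self.cr3 = new_cr3
        # Loading CR3 invalidates all non-global translation cache entries.
        self.tlb = dict(self.kernel_tlb)

    def switch_thread(self):
        pass  # same page tables: CR3 unchanged, TLB stays warm

mmu = ToyMMU()
mmu.switch_process("page_table_A")
mmu.tlb["user_va"] = "user_pa"      # filled in by page walks while running
mmu.switch_thread()                 # thread-to-thread: cache survives
mmu.switch_process("page_table_B")  # only the 'global' entry survives
```

The extra cost of the cross-process switch is exactly the lost non-global entries, which get repopulated as the new process runs, matching the "only marginally more costly" claim above.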

Definition of terms


Given two ratios: 2:10 and 8:10, which is high and which is low? —The preceding unsigned comment was added by Sailorman2003 (talkcontribs) 14:51, 3 March 2007 (UTC).Reply

Hockey stick


"the graph will look like a hockey stick"? —The preceding unsigned comment was added by 71.202.164.218 (talk)

Who was first


Jacek Karpinski claimed the K-202 (his computer) was the first to use paging. Was there any earlier one? Szopen 06:02, 21 May 2007 (UTC)Reply

Err, umm, the IBM System/360 Model 67 and the GE 645, to name two commercial machines? Or, earlier, the IBM M44/44X and the Manchester/Ferranti Atlas? 1971 is a bit late to be claiming to have pioneered paged virtual memory; the Atlas was decommissioned in 1971. Guy Harris 18:21, 18 September 2007 (UTC)Reply
Or perhaps they meant "first minicomputer to use paging"? I've updated the article to say that. Guy Harris 18:27, 18 September 2007 (UTC)Reply
I would say it's hoa^W unverifiable information. The K-202 is some kind of an urban legend here in Poland. Various press articles contain contradictory technical specs, and I am not aware of any proper sources (yet). --Kubanczyk 20:53, 19 September 2007 (UTC)Reply
I still think it might be interesting to check why k-202 had 8MB of memory whilst for example SuperNova had only 64KB or something like that. —Preceding unsigned comment added by 81.168.138.71 (talk) 17:50, 28 November 2007 (UTC)Reply
The claim about 8 MB of memory is blatantly false; please provide reliable sources, as required by Wikipedia policy WP:RS. The K-202's marketing information vaguely mentioned that it might scale up to 8 MB, but this was a theoretical possibility never verified in practice. --Kubanczyk (talk) 10:18, 20 February 2008 (UTC)Reply
Kubanczyk, Karpinski himself said it used "paging". http://www.historycy.org/index.php?showtopic=33075&pid=274657&st=0 (from your nickname, I presume you know Polish)
[translated from Polish] "I used a complete novelty there: expanding memory capacity through page addressing. That's my invention. In London, at the exhibition in Olympia, they stood side by side: the British Modular One, the American machines, and the K-202, all 16-bit. And all of them had 64 kilo of memory, while the K-202 had 8 mega! Everyone asked how I did it. I answered that I had done it and, as you can see, it works. Some time later a designer from CDC came to see me in Warsaw. At the time it was one of the biggest American computer companies. He wanted to find out how I had performed the miracle. I told him to work it out himself, because it's very simple. He thought for two days, and nothing. So in the end I told him. Then another engineer came, from DEC. I told him too."
Szopen (talk) 13:21, 26 May 2008 (UTC)Reply
What troubles me is that Karpiński lied in this interview. Not only is this unreliable original research, it contains at least one big lie. He said "I used a complete novelty (...) paging." when referring to his computer from the 1970s. In fact, virtual memory using paging was first introduced 10 years before in the Atlas Computer, and in the 1970s it was included in the S/370 line, probably the most popular mainframe at the time. So it was a widely, publicly known invention by 1970! Karpiński might be the first one to use this concept in a minicomputer (as opposed to a mainframe computer), but this is yet to be proven by a third-party source. After such a statement, Karpiński's contemporary statements should never be cited as a source here. --Kubanczyk (talk) 14:01, 26 May 2008 (UTC)Reply

Last paragraph


"If one graphs the number of faults on one axis and working set size on the other, the graph will look like a hockey stick." Why don't we get an actual graph or pseudograph instead? I had difficulty imagining that, because I imagined the "hockey stick" as pointing rightwards instead of leftwards. For now, I'm just going to edit that so it reads "will look like a leftward pointing hockey stick." Miggyb 05:33, 24 August 2007 (UTC)Reply

Use the {{reqdiagram}} template on talk pages; maybe someone can create it for us. -- intgr #%@! 23:06, 25 August 2007 (UTC)Reply
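In lieu of a diagram, a small simulation shows the shape (a Python sketch using LRU replacement over a random reference string; the knee appears once the frame count covers the working set):

```python
from collections import OrderedDict
import random

def lru_faults(trace, frames):
    """Count page faults for a reference string under LRU replacement."""
    resident = OrderedDict()
    faults = 0
    for page in trace:
        if page in resident:
            resident.move_to_end(page)        # mark most recently used
        else:
            faults += 1
            if len(resident) >= frames:
                resident.popitem(last=False)  # evict least recently used
            resident[page] = None
    return faults

random.seed(0)
trace = [random.randrange(10) for _ in range(10_000)]  # working set: 10 pages

for frames in (2, 5, 8, 9, 10, 12):
    print(frames, lru_faults(trace, frames))
```

Fault counts fall roughly linearly until the frame count reaches the 10-page working set, then collapse to the handful of compulsory faults; plotted with the axes the comment describes, that sharp knee is the hockey stick.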

Virtual Memory VS Physical memory


Since the bus speed of physical memory is far greater than that of virtual memory stored on hard disks, why is virtual memory used long before physical memory is exhausted? For example, I have 1.5GB of RAM, yet only 512MB of that is used, while at the same time 300-500MB of virtual memory is used. It's NOT more efficient because of the difference in speed, so why isn't most of my RAM being used?

PS: I'm using Windows XP

anon 16:48 GMT 09/09/2007

Good question, bad place - ask a Windows expert... Or just don't worry, it's only Windows after all. --Kubanczyk 20:55, 19 September 2007 (UTC)Reply
Really, it is not a case of "virtual is used before physical" at all. This confusion derives from Windows' misleading displays and nomenclature. The best reference for this would be Windows Internals by Russinovich and Solomon. Or you could ask in the forums at arstechnica.com. Jeh (talk) 01:32, 8 January 2008 (UTC)Reply

Paging vs. swapping


I'm afraid that this article is once again blurring the term "swapping" (like virtual memory once did). Paging can mean two different things:

  • In computer architecture, a technique for implementing virtual memory, where the virtual address space is divided into fixed-sized blocks called pages, each of which can be mapped onto any physical addresses available on the system.
  • In operating systems it's the act of managing disk-backed data in main memory. The terms "paging in" and "paging out" respectively refer to loading and dropping of disk-backed pages from main memory (usually the page cache, but also applied to swap space — I am not sure whether this is correct usage or not).

However, the term "swapping" is only applied to moving dynamic application memory from and to a pre-allocated pool in secondary storage, not memory that is already backed on the disk. From the practical perspective, one difference is that paging out disk-backed pages only involves writing if the page is marked dirty (in the general case, it's just read-only executable code). Swapping something out always involves writing to the disk.

Does this make sense? -- intgr [talk] 11:04, 5 October 2007 (UTC)Reply
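The write-only-if-dirty distinction above can be stated as a few lines of toy Python (invented names; a real kernel's page-out path is far more involved):

```python
class Page:
    def __init__(self, file_backed, dirty=False):
        self.file_backed = file_backed  # True: has a copy in a file on disk
        self.dirty = dirty              # True: modified since last written out

def evict(page, io_log):
    """Free a page frame, doing disk I/O only when necessary."""
    if page.file_backed and not page.dirty:
        io_log.append("drop")           # disk copy is current: no I/O at all
    elif page.file_backed:
        io_log.append("write to file")  # page out: update the backing file
    else:
        io_log.append("write to swap")  # anonymous memory needs swap space

io = []
evict(Page(file_backed=True), io)               # clean code page
evict(Page(file_backed=True, dirty=True), io)   # modified mmap'ed data
evict(Page(file_backed=False, dirty=True), io)  # heap page: swapped out
print(io)
```

Only the first case is free; the last one is the "swapping out always involves writing" case from the comment above.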

I've tried to point out meaning 1 in the article, hmmm have I failed at this? It is not WP:COMMONNAME now, so it's only a side note. About meaning 2, I don't know, but please be very, very careful to stay with the reliable sources. Provide your own sources. Let the article be NPOV (not only Unix-like terminology). This is a controversial topic, so in a second we could have some people here in a Linux/Windows war (again). --Kubanczyk 12:40, 5 October 2007 (UTC)Reply
"I've tried to point out meaning 1 in the article, hmmm have I failed at this?"
No, this is not what I intended to say.
I can live with both concepts explained on this page, for now. My immediate concern is that this article should not state that "paging" is the same thing as "swapping"; these terms are used interchangeably in some parts of the article, especially these two quotes: "paging, sometimes called swapping" in the lead section, and "Most Unix-like systems, including Linux, use term swapping instead of paging".
All modern Unix-like OSes implement paging for disk files that are cached or buffered in RAM, just like Windows (per my second definition). The difference is that Windows does not have a distinct term for "swap" (in their user interfaces anyway), so they arbitrarily say "virtual memory" or "paging" instead. (Windows 3.x did actually call the swap file "WIN386.SWP", go figure). To put it simply, what I'm saying is that Windows's meaning of "paging" is a superset of the Unix meanings of "paging" and "swapping".
Can you agree with me this far? If not, what in particular do I provide sources for? -- intgr [talk] 18:09, 5 October 2007 (UTC)Reply
Well, alternate sources are definitely required if you want to change some well-sourced sentences. But I hope that's obvious. I mean the lead section here.
For the Windows part I agree.
For Unix-like: "Most Unix-like systems, including Linux, use term swapping instead of paging" - this I've got wrong, I admit. However, not being an expert, my experience is that paging=pagingS+caching. PagingS==swapping. Do you state otherwise: paging==caching? In such case I think you should provide a source too. --Kubanczyk 22:00, 5 October 2007 (UTC)Reply

Sorry, I haven't found the time to do any useful work on this article, but I just noticed another awkward claim creep in: "there is a concept known as page cache, of using the same single mechanism for both virtual memory and disk caching."

I thought we already established that swapping is not the same as virtual memory? And the page cache is in fact not related to swapping; you may or may not say that swapping and paging are related, but the page cache keeps track of pages that are already present on the disk with copies in main memory. Obviously "swappable" memory is not already present on the disk, hence it needs a preallocated storage space. And once it's written to the disk, it is deallocated from main memory. -- intgr [talk] 01:33, 19 October 2007 (UTC)Reply

Duh, in fact I'm somewhat proud of this sentence and I'll defend it. First of all, forget about swapping - it mentions "virtual memory mechanism", a term used purposely and in the purest sense. In a machine with virtual memory and all the mechanisms that support it, you could still use any other disk cache mechanism to utilize your free memory. A page cache is special in the sense that it re-uses exactly the same mechanism as virtual memory does: the same (or very similar) kernel code that did swapping can now perform paging, the same mechanism that implemented "VM address translation" can now implement mmap(), and you have a very similar page fault procedure in normal and mmaped memory (either read a swap file or a normal file, respectively). That is what I was trying to express.
Going further, if I understand correctly, you suggest that the main difference is that page cache is read-only cache, while swappable memory is read-write one. Next you state that normal resident memory cannot have a copy already swapped out. I think those definitions would be too narrow: there is nothing inherently wrong in having a page in a disk cache that has (yet) no copy on the disk; there is nothing wrong with pre-fetching a page from swap space for no apparent reason, just for the pleasure of keeping a non-dirty copy in the memory (like ck does). --Kubanczyk 18:43, 19 October 2007 (UTC)Reply


I find this article confusing. All references to swapping should be removed from the main article. Swapping is a method of workload control, i.e. the operating system determines that the resources required by this task (CPU cycles, RAM, whatever) would be more productive if freed and applied to another task. Therefore, the workload will be stopped and its RAM contents copied to another medium until resources are available for its continuation.

Swapping does not require paging but may be easier to implement if pages exist.
MVS/ESA operating systems (and follow-on versions) implemented swapping to RAM storage that was multi-processor locked at the page level instead of the byte level OR to disk.
The DEC RT-11 operating system permitted swapping to any block structured device including DECtape. RT-11 did not support pages and could swap a swappable area of arbitrary length from any location.

Paging is a mechanism for resource allocation, i.e. this collection of work needs more real RAM than is available. The operating system will provide a mechanism to allocate real RAM to active workloads and withdraw real RAM from less active or less important workloads. Ccalvin 13:19, 21 October 2007 (UTC)Reply

"Swapping is a method of workload control" - I would like to point out, that it is a valid definition of the term, but a very old one. The second definition is somewhat more common now as it is used in Unix-likes, including Linux. Now, per general Wikipedia rules, both meanings should be stated here. I tried to do it, honestly. Feel free to expand on the more traditional meaning (it is there already). But, do not remove "all references to swapping", especially if it is a well sourced material.
Your paging definition is unclear, probably wrong. Please provide sources. --Kubanczyk 20:10, 21 October 2007 (UTC)Reply


I concur with the opinion that this difference needs to be clearly delineated in the text; it is an error to say ~"well, they're pretty much the same", even though some contexts do certainly use the terms interchangeably...
One point of differentiation is that paging systems are going to work hand-in-hand with address translation. This is because paging is triggered by "page faults", which in turn are triggered by lack of a "valid" bit in the virtual page's page table entry... which will not be in use in the first place if address translation is not in effect. Swapping otoh can most certainly occur without page tables being enabled...
Incidentally, VMS (now "OpenVMS", if you must insist) uses both paging and swapping: swapping applies to entire working sets at a time, while paging is done on a page-by-page basis -- only, of course, to pages within an inswapped working set...
The NT family does not do whole-process "swapping" exactly, but implements a similar net result: Its "Balance set manager", at times of extreme memory pressure, will gradually reduce the limit on an idle process's working set size to nearly zero, thereby forcing it to lose nearly all of its pages (other than shared pages still in other working sets) to the standby and modified page lists, much as in normal w.s. replacement. Thus only modified pages need be written to backing store, unlike a traditional outswap. There is no whole-working-set inswap either; the limit is simply set back to a larger value and the process is allowed to demand-page what it needs back into its working set. All of this is a lot less work for the disk subsystem than a traditional whole-process outswap/inswap would be. ...
I believe the example of OpenVMS, and related functionality in Windows, highlights the need to differentiate the terms. One could at least say "where an operating system uses both mechanisms, paging means x and swapping means y." No? Jeh (talk) 01:46, 8 January 2008 (UTC)Reply
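A toy sketch of the trimming Jeh describes (Python; `trim_working_set` and the page names are invented, and real Windows policy is much richer -- see Windows Internals):

```python
def trim_working_set(working_set, new_limit, standby, modified):
    """Shrink a working set to new_limit without immediate disk I/O.

    Trimmed pages stay in RAM on the standby (clean) or modified (dirty)
    list; only the modified list ever implies a write to backing store.
    """
    while len(working_set) > new_limit:
        page, dirty = working_set.popitem()  # toy: trim in arbitrary order
        (modified if dirty else standby).append(page)

# An idle process: two clean code pages, two dirty data pages.
ws = {"code1": False, "code2": False, "data1": True, "data2": True}
standby, modified = [], []
trim_working_set(ws, new_limit=0, standby=standby, modified=modified)
```

Nothing is written at trim time; if memory stays tight only `modified` needs flushing, and the process later soft-faults pages back from either list instead of a whole-working-set inswap.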


I'm appending to this section as the discussion section for a {{dubious}} template that I've added. The specific issues that I'm addressing are:

  1. At least one system with segmentation did not use the term swap for reading and writing segments; the Master Control Program (MCP) for the B5000 used the term overlay.
  2. The historical use of swap as referring to entire processes is still alive; MVS has both paging and swapping. Current versions of MVS no longer have dedicated swap files, but they still use swapping as a mechanism for regulating the size of the working set.

I can provide references for both points if that would not be TMI. Shmuel (Seymour J.) Metz Username:Chatul (talk) 15:34, 21 November 2010 (UTC)Reply

merge


I suggest merging the "demand paging" article into a section of "paging" article.

Most of the "demand paging" article repeats information already covered in the "paging" article, so this merge will make the "paging" article a few sentences longer. Merging would also make it easier to compare and contrast demand paging with the alternatives. --68.0.124.33 (talk) 15:35, 3 April 2008 (UTC)Reply

Oppose - I would say that the best thing about "Demand paging" is that it is already a nice Wikipedia:Summary style article, as it does not repeat information from this one. Could you quote what information is repeated, as we seem to disagree? Also demand paging is pretty advanced idea if compared to traditional paging, I would not want it to vanish somewhere in a big article. --Kubanczyk (talk) 16:46, 3 April 2008 (UTC)Reply
I am confused.
Wikipedia:Summary style says that "Summary sections are linked to the detailed article with a {{main|<name of detailed article>}} or comparable template".
I don't see where the "demand paging" article links to any more detailed article in that way.
Are you alluding to some other article that links to the more detailed "demand paging" article in some summary style? I don't see that, either.
Are you suggesting that some other article -- paging or virtual memory or some other article -- ought to link to the more detailed "demand paging" article in a summary style? That may be a good idea.
--68.0.124.33 (talk) 23:13, 3 April 2008 (UTC)Reply
Yep, "Paging" ought to link to more detailed "Demand paging" in a summary style. That may be a good idea. --Kubanczyk (talk) 12:33, 4 April 2008 (UTC)Reply
OK, done. (Plus I tried to fill in the other alternatives to "demand paging" -- did I miss any?) --68.0.124.33 (talk) 03:43, 9 April 2008 (UTC)Reply

"Demand paging" misleading


Please see the discussion I started on Talk:Demand paging. -- intgr [talk] 13:09, 24 September 2008 (UTC)Reply

Different Types of Paging | merge


Anticipatory paging is repeated in the form of Swap prefetch. I think both should be merged. — Preceding unsigned comment added by 196.12.53.9 (talk) 16:16, 20 December 2008 (UTC)Reply

fragmentation

"The fragmentation of the pagefile that occurs when it expands is temporary" From context, that is talking about some version of Windows, but it's not particularly true. Specifically, a page file on Windows XP will probably be fragmented on a clean disk with free space, be it fixed size or system managed, even if it is not used, and even if the disk is entirely defragmented. Windows doesn't normally create a contiguous pagefile.sys. —Preceding unsigned comment added by 218.214.18.240 (talk) 14:22, 5 June 2010 (UTC)Reply

Whether Windows creates a perfectly contiguous pagefile at install time isn't the point. The claim quoted above regarding "fragmentation that occurs when it expands" is absolutely true. When all processes using the expanded areas are gone - at OS shutdown time if nothing else - the expansion extents are freed and the pagefile is back to whatever state it was in prior to expansion. Jeh (talk) 17:02, 17 November 2010 (UTC)Reply

That is bull. Until recently I was wondering why my WindowsXP machine of 5+ years was running particularly slow, even taking into account my perception of speed having changed. Its page file was in 58 (fifty-eight) fragments. — Preceding unsigned comment added by 76.100.18.105 (talk) 18:52, 17 May 2012 (UTC)Reply

Recentism

The Paging#Implementations section only discusses recent systems; should that section not mention at least these following notable instances?

It depends on what our purpose is in this article. Are we trying to show the evolution and different fundamental concepts, or are we making a train-spotter's list of instances of the concept? What is notable about these particular machines that a general reader must know about? --Wtshymanski (talk) 15:48, 17 November 2010 (UTC)Reply
In the case of the Atlas, it's notable because it was the first paging system. CP-67, Multics and TSS were notable for innovations in the exploitation of virtual memory, although those weren't related to paging per se. The others were notable due to their prevalence. Shmuel (Seymour J.) Metz Username:Chatul (talk) 20:04, 17 November 2010 (UTC)Reply
I encourage you to write up these points in the article. I don't like laundry lists, but if we talk about milestones and turning points, that is, I think, the point of an encyclopedia article. There's never enough history in Wikipedia technology articles. --Wtshymanski (talk) 21:55, 17 November 2010 (UTC)Reply
I've created a {{todo}} list and taken a first crack at a section for the Ferranti Atlas; I could use some guidance on what level of detail to provide for the IBM implementations, some of which I plan to write up. Also, I'm soliciting people with a DEC background to write up the PDP-10, PDP-11 and VAX-11 systems. Shmuel (Seymour J.) Metz Username:Chatul (talk) 17:55, 23 November 2010 (UTC)Reply
If you're asking my opinion - in *this* article, I'd recommend just an overview in the Children's Museum tour-guide level - accurate, fairly comprehensive, but simple. Hit the high points about why the Binford 6100 was a turning point or notable development in the history of memory paging, but don't get into a cycle-by-cycle account of what each backplane pin was doing during a page fault. (It's too easy to get into minute details that unbalance an article, in my opinion. This isn't the article to teach people how to wire up a mainframe.) If there are notable developments, say, within the IBM line that affected their commercial life, perhaps the IBM computer articles are the place to develop more details. Pitfalls, flaws, errors, dead-ends, mistakes, and blunders, and why they were considered such, are always interests of mine, and I think they contribute perspective to an article. And as always, good references are essential; we wouldn't let Bill Gates Himself write a history of MS DOS without published references. — Preceding unsigned comment added by Wtshymanski (talkcontribs) 12:17, 23 November 2010 (UTC)Reply

The name of this article is confusing

I came here looking for information about the mechanisms of memory paging. I.e. the way in which memory is split into pages and noncontiguous areas of memory can be mapped into contiguous spaces different for each process. But instead, this article deals mostly with swapping pages between main memory and hard drives. There is no information at all about the paging mechanisms themselves. CodeCat (talk) 23:36, 23 October 2011 (UTC)Reply

"Paging" does refer to moving pages between main memory and backing store. You probably want the Virtual memory article, specifically the material on page tables (address translation). Jeh (talk) 02:14, 24 October 2011 (UTC)Reply
Maybe it does for some people but it's not very clear where to find the information I'm looking for. I think it would be better if this page is moved to Swapping, and Paging is turned into a disambiguation page. As far as I know, 'swapping' is not ambiguous so it would be a less confusing name. CodeCat (talk) 10:33, 24 October 2011 (UTC)Reply
"Swapping" can refer to swapping of entire processes or of segments into or out of memory on systems that don't support paging. It can also be used to refer to marking a process on a paging system as "swapped out" so that it's not scheduled to run until "swapped in" again; that way it doesn't compete with other processes for pages, to avoid thrashing. Guy Harris (talk) 16:49, 24 October 2011 (UTC)Reply

These algorithms only allow for paging. — Preceding unsigned comment added by 117.199.181.3 (talk) 09:56, 15 November 2011 (UTC)Reply

To which algorithms are you referring? Paging algorithms obviously only allow for paging, but not all forms of virtual memory involve paging. Most if not all current systems implement virtual memory with paging, but I have the impression the Burroughs/Unisys Master Control Program for the Burroughs large systems and its Unisys successors swaps memory objects pointed to by descriptors into and out of memory. I've seen comments indicating that memory objects that are arrays can be paged rather than swapped as a whole, at least in the B6000-series machines and its successors. Guy Harris (talk) 18:16, 15 November 2011 (UTC)Reply
Yes, an array descriptor on the B6x00 and B7x00 has a bit to indicate that the array is split into fixed length pages, and MCP has code to handle the presence fault when there is a reference to a page of the array that is not in memory. I don't have any information on the successor systems at Unisys, but it seems unlikely that they would have dropped the facility. Shmuel (Seymour J.) Metz Username:Chatul (talk) 22:39, 16 November 2011 (UTC)Reply

OS X swap files

I amended the article text as it did not clearly distinguish swap partitions as used in other systems and regular (filesystem) partitions set aside for swap files, which is doable in OS X. The reference did not (and I have seen no other reference which does) support the interpretation that an actual swap partition can be used with OS X. n.b. man fstab (in OS X 10.5) does list swap (along with procfs and others) as a recognised vfstype, but that is not the same as it being directly usable. (I would however be delighted to see it done as it might be even faster than a separate non-journaled HFS+ or UFS partition.) Anihl (talk) 05:41, 31 January 2012 (UTC)Reply

And, given that there is no procfs in Mac OS X, the fstab man page is not entirely to be trusted. I'll look at the paging code to see if it can page to a raw device. Note that swap files aren't partitions, they're just ordinary files, and are created and destroyed on demand by the dynamic_pager daemon. Guy Harris (talk) 06:54, 31 January 2012 (UTC)Reply
The listing probably is trustworthy so long as we understand that it doesnʼt mean OSX will usually use all these vfstypes, only that they will be recognised if found or created. So it is common to use OSX utilities (pdisk or now diskutil) to create identified swap partitions for use by on-board Linux or BSD installations. Incidentally, while OSX does not use procfs under normal circumstances, it might for all I know still be usable under abnormal conditions (OS development?) for debugging, as in ancestral BSDs. (This old article is interesting.) Anihl (talk) 16:05, 31 January 2012 (UTC)Reply
Actually, no, it's not 100% trustworthy; as noted, it lists procfs, but Mac OS X doesn't have a procfs implementation, so procfs won't be recognized if found. (The section in question from the "Mac OS X Internals" Web site explicitly says "Mac OS X does not have procfs." and there is, in fact, no procfs code in XNU and there's no procfs kext in Mac OS X.) In addition, as there's no swapon command, I'm not sure there's any program that would do anything with a swap partition (dynamic_pager doesn't know anything about swap partitions). The macx_swapon() system call rejects a swapon for anything that's not a regular file, meaning it'd reject a swapon for a character special file, so it's not clear what, if anything, could be used to swap to a raw partition.
(It also doesn't list a number of file system types that are supported, e.g. smbfs, autofs, afpfs, cddafs, and devfs.) Guy Harris (talk) 18:35, 31 January 2012 (UTC)Reply

Size of address space

To answer a query from Guy Harris, the address space of early MVS was limited to 2^24 8-bit bytes and ultimately IBM supported processors with 2^25 octets, although MVS systems with less than 2^24 were common. With the advent of MVS/XA and MVS/ESA, the address space size became substantially larger than the physical memory limit.

I know of no Multics processor with a physical capacity anywhere near the size of an address space.

TSS/360 supported a 24-bit mode and a 32-bit mode. In 24-bit mode, the address space was close to the maximum physical memory, especially with LCS. In 32-bit mode, the address space was substantially larger. I don't know whether the TSS/370 PRPQ had 31-bit mode (there was no 32-bit mode on S/370). Shmuel (Seymour J.) Metz Username:Chatul (talk) 17:03, 14 November 2013 (UTC)Reply

Paging vs segmentation in intro

Regarding this, in the intro: "The main advantage of paging over memory segmentation is that it allows the physical address space of a process to be noncontiguous. Before paging came into use, systems had to fit whole programs into storage contiguously, which caused various storage and fragmentation problems."

I could certainly be wrong, but isn't it actually virtual memory which allows a process space to be noncontiguous? Paging adds the ability to map secondary storage to virtual memory, but once you have virtual memory you've already addressed contiguity. I read the linked source in Google Books, and it describes the Ferranti Atlas computer as introducing paging+virtual memory as a solution to the earlier practice of "overlaying." In other words, it may be misleading or incorrect to refer to paging as an alternative to memory segmentation. Proxyma (talk) 03:12, 17 November 2013 (UTC)Reply

Paging is a virtual memory system, and segmentation is another. "Virtual memory" refers to the translation between physical memory addresses and nonphysical (thus "virtual") addresses. Virtual memory allows swapping to secondary storage, but it certainly isn't synonymous with that, nor is it the only thing it's used for. So it's really a common misconception that "virtual memory" and/or "paging" means "swapping". "Paging" just refers to the idea of dividing the address space into fixed-size pages, so that they can be mapped to physical memory in any random order, or not mapped at all. It's this last point that makes swapping possible, but paging is useful even if there is no secondary storage. The article is really misleading in this respect. I pointed this out in the discussion above but it was never changed. CodeCat (talk) 03:37, 17 November 2013 (UTC)Reply
Segmentation also allows the physical memory used by a task to be non-contiguous... just not as non-contiguous as paging does. As a matter of fact, the address translation aspect of segmenting is not that different from that of a page-oriented VM system. It is just that in paging, the address translation table entry to use is driven by the high bits of the virtual address, while in segmentation, it is determined by the segment descriptor that is in effect. But in both schemes you find a starting physical address from the translation table (either the page table entry or the segment descriptor) and use that to modify the address that was asserted by the instruction being executed. Paging does allow for far greater degrees of "discontiguousness" because most architectures only allow a very small number of segments to be active at one time (hence much coarser granularity) but allow for a far larger number of pages.
In any case, I prefer to refer to this aspect of mm as "address translation", not "paging." I agree that paging does not require moving pages between disk and RAM (x86's "paging enable" bit certainly does not) but this association is so widely believed that I think it is better to stick with the term that more obviously describes the operation. Jeh (talk) 04:09, 17 November 2013 (UTC)Reply
I too prefer address translation for the process of translating virtual addresses to real, especially since paging often refers to the reading of pages from backing store and to the writing of pages from page frames into backing store. Where it is necessary to distinguish the type of translation, the terms page translation and segment translation seem unambiguous.
It's unfortunate that nomenclature for virtual memory is not consistent, but after half a century of inconsistent notation we should take the inconsistent nomenclature into account in our technical writing. Shmuel (Seymour J.) Metz Username:Chatul (talk) 21:41, 17 November 2013 (UTC)Reply
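The address-translation step discussed in this thread (high bits of the virtual address select a page-table entry, low bits pass through unchanged) can be sketched in a few lines. This is a hypothetical, simplified model for illustration only: 4 KiB pages, a single-level page table held in a Python dict, and no real MMU or OS fault handler.

```python
# Minimal sketch of page-based address translation (hypothetical model,
# not any real MMU): the virtual page number indexes a page table, and
# the byte offset within the page is carried over unchanged.
PAGE_SHIFT = 12                       # assume 4 KiB pages
PAGE_SIZE = 1 << PAGE_SHIFT

def translate(vaddr, page_table):
    """Map a virtual address to a physical one, or fault if not present."""
    vpn = vaddr >> PAGE_SHIFT         # virtual page number (high bits)
    offset = vaddr & (PAGE_SIZE - 1)  # offset within the page (low bits)
    if vpn not in page_table:
        # In a real system this is a page fault, handled by the OS.
        raise LookupError(f"page fault at vpn {vpn}")
    pfn = page_table[vpn]             # physical frame number
    return (pfn << PAGE_SHIFT) | offset

# Contiguous virtual pages can map to scattered physical frames, in any
# order -- which is the point being made above about paging.
table = {0: 7, 1: 2, 2: 9}
assert translate(0x1ABC, table) == (2 << PAGE_SHIFT) | 0xABC
```

Note that nothing in this sketch involves secondary storage: translation and the "paging to disk" sense of the word are separable, which is exactly the terminology problem being discussed.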

Paging vs RAM drive (See also)

  • Paging: scheme by which a computer can store and retrieve data from secondary storage for use in primary storage
  • RAM drive: block of primary storage that a computer's software uses as secondary storage

I think that, at least conceptually, paging is the opposite of RAM drive. So RAM drive should appear at least in "See also". --Edupedro (talk) 08:06, 21 July 2014 (UTC)Reply

Let's have this discussion in one place. I have already answered at the RAM drive talk page. Thanks. Jeh (talk) 08:37, 21 July 2014 (UTC)Reply

Dubious claim about Win8

"Before Windows 8,[dubious – discuss] the file used for paging in Windows NT was pagefile.sys." The implication is that it has changed. I don't claim much expertise in Win8, but a quick google search doesn't find anything to support such a change. JMP EAX (talk) 02:22, 1 August 2014 (UTC)Reply

Based on [1] it seems Win8 supports both paging (still to pagefile.sys) and swapping (to the new swapfile.sys), with the latter added exclusively for the Metro junk. JMP EAX (talk) 02:27, 1 August 2014 (UTC)Reply

A more authoritative article is here, with the nutshell about swapfile.sys being "This process is analogous to hibernating a specific app". JMP EAX (talk) 02:32, 1 August 2014 (UTC)Reply

Yes. But the swapfile doesn't have much to do with paging: It isn't read in response to page faults, and it isn't written from stuff that is on the modified page list that was put there because of page replacement that happened after a page fault. The description at your second link raises several questions, but as far as paging (the subject of this article) is concerned, the backing store is pagefile.sys. Jeh (talk) 06:48, 1 August 2014 (UTC)Reply
"Well, with the introduction of the Modern App, we needed a way to manage their memory outside of the traditional Virtual Memory/Pagefile method." Hmm... Perhaps that's related to the fact Windows 8 is also used on mobile devices, which have only one application facing the user? — Dsimic (talk | contribs) 08:55, 1 August 2014 (UTC)Reply

Added diagram

I have added a slightly poor diagram that illustrates virtual memory paging with reference to pages of a book, and removed 'Computing diagram requested' tag. Egmason (talk) 12:02, 22 February 2016 (UTC)Reply

I'm sorry but "pages of a book" is not really a good analogy for paging in computer memory. I've been working with OSs that do paging, and virtual memory OSs in particular, for quite a while and I can't fit your diagram to anything I know. Maybe if you added some text labels? Jeh (talk) 16:39, 22 February 2016 (UTC)Reply
@Egmason: Thanks for your effort, but I agree with Jeh, the diagram doesn't add much to the article. -- intgr [talk] 00:43, 29 February 2016 (UTC)Reply
@Jeh and Intgr: After completing a quick B.Comp.Sci. (joking; just browsing Google images "paging site:.ac"), I have replaced the original image with one that I hope overviews the article without misleading anyone. Most of the diagrams I found were either older ones trying to explain some sort of page register memory addressing schemes, or they were illustrated using formal process imagery that I don't think is very accessible to the average user. Let me know what you think. Egmason (talk) 05:31, 10 April 2016 (UTC)Reply
I think this story could be better told in words (which, by the way, it has been). A "thought bubble" emanating from a chip as it "thinks"? Do readers know (or care) that that is what a microprocessor looks like, and that that is what a SIMM/DIMM memory looks like? (With a caption with inexplicable capitalization and a run-on sentence.) Although a single concept, such as a page fault, might be illustrated, I think it is a mistake to render the entire paging process in comic-book form, especially in the History section, which covers present and past implementations including some that are not similar at all. Spike-from-NH (talk) 15:10, 10 April 2016 (UTC)Reply
Sorry but I agree with Spike. Someone who knows what the diagram is showing will be able to follow it but it's not going to be helpful to the newcomer to the concept. Besides, this... diagram... cartoon... something... looks like something you'd find in a children's book. No, you'd never find information like this in a children's book; it's the art style I'm carping at. Besides being childish in style, it also looks like it was cobbled together from several different images, all drawn with different styles. It's very non-encyclopedic. Jeh (talk) 16:10, 10 April 2016 (UTC)Reply
Okay. Deleted. If you don't want a picture then don't reinstate the Diagram requested tag. 02:17, 11 April 2016 (UTC) — Preceding unsigned comment added by Egmason (talkcontribs)
I think we do need a diagram. Egmason's offering is here: https://commons.wikimedia.org/wiki/File:Paging.svg While I would say that it does a reasonable job of showing the concept of swap, it doesn't go as far as to capture the concept of paging. Anyone have anything better? Regards, Ben Aveling 03:04, 19 February 2018 (UTC)Reply
Sorry, I don't think it does a "reasonable job" of anything, for reasons I already mentioned. Jeh (talk) 03:25, 19 February 2018 (UTC)Reply
I could explain how I understand Egmason's diagram, but I don't think that would be useful. I had to explain the concept of paging earlier today, to explain why low free memory wasn't necessarily a problem. What would have been useful to me would have been a FSM diagram showing how and when pages move between different states - although I was explaining active/inactive/cached, so maybe going beyond the scope of this article. Anyway, you know this better than I - how would you propose showing what happens? Regards, Ben Aveling 07:31, 19 February 2018 (UTC)Reply
Perhaps there can't be "a" diagram; the topic might simply be too complicated for a single diagram. Guy Harris (talk) 10:03, 19 February 2018 (UTC)Reply
And there are multiple possibilities of what might be meant by "paging". The PDP-8 and the early HP 2100 series used a "paged" organization of memory, but they had no address translation unit, no page tables, etc. Before anyone goes haring off to produce diagrams we should probably agree on what they should be portraying. Jeh (talk) 12:02, 19 February 2018 (UTC)Reply

pagefile.sys vs. swapfile.sys

Windows 10 has both, so what is the difference? ZFT (talk) 19:13, 11 April 2016 (UTC)Reply

It's a Windows 8 thing, described in this blog post. Guy Harris (talk) 19:42, 11 April 2016 (UTC)Reply

"Swapping" in the history section, and elsewhere

It's a little off. Windows NT does do something that's sort of like swapping, but it isn't as drastic as VMS's; it's done gradually by shrinking the working set limit, a step at a time. Nor is there a specific inswap; if the process ever needs to be brought in again it's demand-paged in just like always.

The last sentence is problematic also. In systems that support memory-mapped files, the memory-mapped files themselves are usually the backing store, so no separate "swapfile" is required. In Windows, if the pages in question are copy-on-write pages and they get modified, then the pagefile is used as the backing store for the modified pages and that's still called paging.

I don't know how to fix this quickly. Jeh (talk) 07:06, 4 June 2016 (UTC)Reply

The copy-on-write behavior for mapped files is not specific to NT - it's standard behavior on UN*Xes as well. The term used in at least some UN*Xes for pages backed by swap space is "anonymous pages", and if you map a file with PROT_WRITE set in the protections and MAP_PRIVATE set in flags, and store into a page, a copy is made and the copy is an anonymous page.
But it's not called "swapping" on UN*X, either. That's a behavior that might be worth discussing, but, unless somebody can supply a citation for the claim that it's called "swapping", we shouldn't discuss it under the name "swapping". Perhaps the fact that backing store for anonymous pages is often called "swap space" (if it's a partition) or "swap files" (if it's one or more files) in UN*Xes causes some people to somehow use "swap" in connection with it, but it's not the same as "swapping" in the sense of "evicting regions from memory" in a non-paged system or "evicting the pages from a region and not letting it back in until the demand for pages drops" or whatever form "swapping" takes on a particular paged VM system. Guy Harris (talk) 08:13, 4 June 2016 (UTC)Reply
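The MAP_PRIVATE copy-on-write behavior described above can be demonstrated with Python's mmap module on a POSIX system (mmap.MAP_PRIVATE is not available on Windows). The temporary file and its contents are made up for the demonstration; the point is that the store dirties a private (anonymous) copy of the page while the file on disk is left untouched.

```python
# Illustration of copy-on-write for a MAP_PRIVATE file mapping
# (POSIX-only; hypothetical temp file used purely as a demo).
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"original")
m = mmap.mmap(fd, 8, flags=mmap.MAP_PRIVATE,
              prot=mmap.PROT_READ | mmap.PROT_WRITE)
m[0:8] = b"modified"              # store triggers copy-on-write
private_view = m[0:8]             # the mapping sees our private copy
with open(path, "rb") as f:
    file_contents = f.read()      # what is actually on disk
m.close()
os.close(fd)
os.remove(path)

assert private_view == b"modified"   # anonymous copy holds the store
assert file_contents == b"original"  # backing file is unchanged
```

If memory pressure forced that dirty private page out of RAM, it could not be written back to the mapped file; it would have to go to swap space (or, on Windows, the pagefile), which is why such pages are "anonymous".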
Agreed. I'd say that if we don't hear from someone who says it IS called "swapping" on some OS within a few days, the last sentence of that graf should go. Maybe we copy it onto the talk page and say "preserved here in case evidence surfaces". Jeh (talk) 08:15, 4 June 2016 (UTC)Reply
Doesn't Windows do some kind of true swapping, for Modern UI applications? See this for more details. — Dsimic (talk | contribs) 19:55, 9 June 2016 (UTC)Reply
Yes, in Win8 and later. Jeh (talk) 14:07, 19 July 2017 (UTC)Reply

Terminology in general

It's a little suprising that this article has no coverage at all of address translation. Address translation faults (page faults) are what drive a virtual memory manager. The "paging enabled" bit in the x86/x64 control registers actually just enables pagetable-based address translation; actually moving stuff between RAM and backing store is the OS's job. Virtual memory on most platforms includes three mechanisms: Address translation; paging; and file mapping. Jeh (talk) 07:11, 4 June 2016 (UTC)Reply

Address translation is probably best discussed in the memory management unit article, although it needs to be at least mentioned in an article about paging. Guy Harris (talk) 08:16, 4 June 2016 (UTC)Reply
And there is the page table article too. As in many other large and complex subject areas here, the information is spread across several articles. Jeh (talk) 09:38, 4 June 2016 (UTC)Reply

Redundant section swap death

The problem described in Paging#Swap death is simply thrashing, described in Paging#Thrashing. The discussion in Paging#Linux should simply link to Paging#Thrashing, note the difference in nomenclature and give any information specific to Linux. Shmuel (Seymour J.) Metz Username:Chatul (talk) 20:06, 2 July 2017 (UTC)Reply

@Chatul: Agreed, you're welcome to merge these sections. -- intgr [talk] 13:25, 19 July 2017 (UTC)Reply
Agreed. Jeh (talk) 14:08, 19 July 2017 (UTC)Reply
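For readers following this thread, the relationship between working-set size and thrashing (the phenomenon "swap death" names) can be sketched with a toy LRU page-frame simulator. This is entirely hypothetical; real reclaim logic in Linux or any other OS is far more involved.

```python
# Toy simulation (not any real kernel's algorithm): count page faults
# for a reference string under LRU replacement with a fixed frame count.
from collections import OrderedDict

def count_faults(refs, num_frames):
    """Return the number of page faults incurred by the reference string."""
    frames = OrderedDict()                 # page -> None, kept in LRU order
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)       # hit: now most recently used
        else:
            faults += 1                    # miss: page must be brought in
            if len(frames) >= num_frames:
                frames.popitem(last=False) # evict least recently used
            frames[page] = None
    return faults

# A process cycling through 5 pages: fits in 5 frames, thrashes in 4.
refs = [0, 1, 2, 3, 4] * 100
assert count_faults(refs, 5) == 5      # only the initial cold faults
assert count_faults(refs, 4) == 500    # every single reference faults
```

The cliff between the two cases is the essence of thrashing: once the working set exceeds available frames, the fault rate jumps from near zero to near one fault per reference, and the system spends its time paging rather than computing.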

Merging Swappiness into Paging

I propose that Swappiness be merged into Paging. Swappiness is a single parameter of the Linux kernel related to swap usage. The Linux kernel has hundreds of such parameters. Moreover, half of the Swappiness article is a general tutorial on how to set a Linux sysctl parameter. Arguably swappiness is more important than other kernel parameters and deserves a mention, but I don't think it deserves a whole article, rather a few lines in this more general article.

For comparison, here is what I did on the French Wikipedia, which had a Swappiness article translated from the English one, after duly submitting the idea to the community of course:

  • I created the missing fr:Espace d'échange (literally "swap space") article to connect to en:Paging.
  • In a "Performance" section I wrote a few lines about the general idea behind swappiness.
  • In a "Linux kernel" subsection I put a few sentences about the swappiness parameter and the value table.
  • I discarded the "tutorial" part about how to set the parameter.
  • I made fr:Swappiness into a redirect to fr:Espace d'échange.

--Rinaku (t · c) 15:29, 25 February 2018 (UTC)Reply

Most of that article should be deleted per WP:NOTHOWTO, WP:NOTMANUAL, etc. What's left would be a couple of sentences that could certainly go into this one. Jeh (talk) 21:20, 25 February 2018 (UTC)Reply
The notion of "swapping" needs a better treatment in various pages, including here and virtual memory.
Some systems didn't support "virtual memory" in the sense of a single address space being larger than physical memory, but did allow more active address spaces than would simultaneously fit in physical memory, swapping the address spaces in and out as necessary. PDP-6/PDP-10 OSes on -6's and -10's without paging did this, as did several PDP-11 OSes. OS/360 rollout/rollin sounds as if it's a similar concept on a system without "address spaces" (everything ran in unmapped physical memory, as there was no memory-mapping hardware on most S/360's) - if a job (process) needs more memory than fit in the region of physical memory it was assigned, other jobs could be "rolled out" to allow the job to take more space, and "rolled in" when the job ends or releases the additional memory.
No, OS/360 MVT Rollin/Rollout was at the region level; if a job required a region that wouldn't fit, the Initiator rolled out other jobs to make room for the region, and didn't roll them back in until the step ended, regardless of how much free space there was within the region. Shmuel (Seymour J.) Metz Username:Chatul (talk) 14:29, 28 March 2018 (UTC)Reply
Some systems supported virtual memory but not paging, such as the Burroughs B5000; not all items to which descriptors referred had to fit in memory for a process to be able to run - a descriptor had a presence bit, and a reference through a descriptor with the presence bit set to "not present" would take a fault, and the item referred to by the descriptor could be swapped in.
Systems that support paging can choose to "swap out" a process in order to, for example, make sure that sum of the sizes of the working sets of all non-swapped-out processes doesn't exceed the size of physical memory; I think I remember that 4.2 BSD had the notion of a process being "swapped out", which meant that it wasn't eligible to run even if it wasn't blocked and some of its pages were in memory. I don't think "swapping out" a process meant that its pages would be forcibly evicted from memory; other systems of that sort might either forcibly evict all pages, or write out all resident pages, in a single operation, to a "swap space" separate from the secondary storage space used for paging.
Obviously, the first two of those wouldn't be discussed here, and the first of those would probably not be discussed under virtual memory, given that it didn't allow a process to have "virtual memory" larger than available physical memory. The second is mentioned in various places in virtual memory, but perhaps needs some cleanup.
Swappiness doesn't actually say what "swapping" is; I don't know whether Linux implements the third flavor of swapping, or whether the "swapping" in question is something else.
(None of this is helped by the convention that paging space for anonymous pages is usually called "swap space" on UN*Xes. The name reflects PDP-11 UNIX's swap-based memory management, and continues to be used on systems with paging-based memory management.) Guy Harris (talk) 23:07, 25 February 2018 (UTC)Reply

Demand paging and program loading

Not all executable program files are memory mapped. In particular, IBM mainframe operating systems have fetch and load components that read programs into virtual storage prior to execution. As an example, while MVS program objects in a PDSE program library and executable files in a UNIX System Services file system are memory mapped, load modules in a PDS are not. Shmuel (Seymour J.) Metz Username:Chatul (talk) 14:29, 8 May 2018 (UTC)Reply

Apples & Oranges

I see much confusion of swapping and paging here, definitely not the same thing. Let me show by some examples:

  • An ancient UNIX system which doesn't support memory pages swaps process memory to the disk
  • A modern Linux machine with much RAM and no swap configured; it uses paging to access RAM only.

So, the first example swaps but does no paging; the latter one uses paging but not swapping. The two terms are definitely NOT the same! However, I see that "Memory swapping" redirects to this page, which would not help much if someone wanted to find out about the swapping from my first example. To make matters worse, there is also a separate article on Virtual memory.

I see there has already been much debate on this page about this problem, but the article hasn't improved since. I'll tag it as needing expert attention in hope someone may help organize the topic better. --Arny (talk) 09:32, 6 June 2019 (UTC)Reply

It doesn't help that some operating systems have files called "swap" that are actually used for paging. Shmuel (Seymour J.) Metz Username:Chatul (talk) 15:56, 6 June 2019 (UTC)Reply
Is "A modern Linux machine with much RAM and no swap configured" using demand paging to read code from executable images and shared libraries? Presumably it's not using demand paging to allow the sum of the sizes of the data regions of all processes to exceed the available RAM as, with no swap space (swap partitions or swap files), "anonymous" pages, such as are used for the stack and heap, have no backing store.
If it's not using demand paging to read code from executable images and shared libraries (or other files), as well as not using it for anonymous pages, it's not "paging" in the sense of this article.
There's a separate article on virtual memory because virtual memory can be implemented by techniques other than paging.
For some UN*Xes, the terms "swap partitions" or "swap files" are used for historical reasons, dating back to PDP-11 UNIX which is an example of the ancient UNIX systems that swapped but didn't page.
I'd have to dig into old BSD code to see whether the BSDs that did paging had a notion of a "swapped-out" process, which would have to be marked as "swapped in" in order to be scheduled; that may have been a technique to reduce thrashing. I think they did; they didn't, as I remember, use a measurement of process working set sizes to ensure that the sum of the sizes of the working sets of the "swapped-in" processes was <= the amount of main memory in the page pool, "swapping out" processes as necessary to make that the case, but I have a vague memory of a process structure's flags word having a "swapped out" flag. Guy Harris (talk) 18:36, 6 June 2019 (UTC)Reply
AFAIK, Linux always uses paging, regardless of having "swap" areas. The memory is by default overcommitted, so a portion of virtual memory doesn't have any backing physical store. I also believe demand paging of executable images is always used (unless they're UPX-ed!)
Concerning BSD, IIRC they introduced paged VM in 3BSD, when they got a VAX, which does have a paged MMU. Before that, most UNIX installs ran on the PDP-11 and did whole-process swapping because of its lack of a PMMU. --Arny (talk) 14:30, 7 June 2019 (UTC)Reply
I don't know whether Unix or BSD ever supported it, but the PDP-11 had mapping registers for paging on some models. Shmuel (Seymour J.) Metz Username:Chatul (talk) 15:49, 7 June 2019 (UTC)Reply
I think MINI-UNIX was the only PDP-11 UNIX that didn't use the mapping registers. However, neither UNIX nor, as far as I know, any PDP-11 operating system from DEC used them for paging, rather than having all the virtual memory of runnable processes being in main memory, and swapping out processes to make room for other processes. Guy Harris (talk) 18:03, 7 June 2019 (UTC)Reply
No - at least according to History of Unix, the first PDP-11 version ran on the PDP-11/20, which didn't have the mapping registers, so it presumably had only one in-memory process, and would swap that out and swap another one in on a context switch, as was the case for MINI-UNIX. Guy Harris (talk) 01:26, 12 February 2021 (UTC)Reply
Do you think we need an article dedicated to swapping or just a clearer separation of the concepts within this article? BernardoSulzbach (talk) 11:29, 29 January 2022 (UTC)Reply
I believe that it would be best to split the article, with appropriate {{distinguish}} templates, and to discuss the anachronistic nomenclature swapping for paging files on some systems. The swapping article should mention OS/360 TSO and earlier systems, as well as Unix on machines without paging, and should discuss systems with both paging and swapping.
Does Burroughs MCP use the nomenclature swapping? --Shmuel (Seymour J.) Metz Username:Chatul (talk) 01:31, 30 January 2022 (UTC).Reply

Section title Addressing limits on 32-bit hardware


The issues discussed in #Addressing limits on 32-bit hardware also apply to, e.g., 24-bit hardware, 36-bit hardware, 48-bit hardware.

#Main memory the same size as virtual memory should discuss the case where individual address spaces are the same size as physical memory but there are multiple address spaces. --Shmuel (Seymour J.) Metz Username:Chatul (talk) 01:39, 30 January 2022 (UTC)Reply

I renamed it to § Physical and virtual address space sizes. Guy Harris (talk) 04:05, 3 January 2024 (UTC)Reply
#Main memory the same size as virtual memory should discuss the case where individual address spaces are the same size as physical memory but there are multiple address spaces. That section said

However, even in this case, paging can be used to create a virtual memory of over 4 GB. For instance, many programs may be running concurrently. Together, they may require more than 4 GB, but not all of it will have to be in RAM at once. A paging system makes efficient decisions on which memory to relegate to secondary storage, leading to the best use of the installed RAM.

which sounds as if that's what it's discussing. I changed it to say

However, even in this case, paging can be used to support more virtual memory than physical memory. For instance, many programs may be running concurrently. Together, they may require more physical memory than can be installed on the system, but not all of it will have to be in RAM at once. A paging system makes efficient decisions on which memory to relegate to secondary storage, leading to the best use of the installed RAM.

which is a bit less "all the world's an IA-32 processor without PAE"ish. It doesn't deal with multiple address spaces other than multiple address spaces for multiple processes/programs; are there examples of that worthy of note in this particular discussion? Guy Harris (talk) 04:24, 3 January 2024 (UTC)Reply

this article confuses memory swapping and paging


it would be a good idea to separate the ideas of memory swapping (which I have created a draft Wikipedia page of) and paging, as they are fundamentally different things. I do not possess the time to edit the articles so that the info is accurate (hence the draft page is just a redirect), but I do believe it is necessary to separate the two. Thewindboi (talk) 18:30, 2 January 2024 (UTC)Reply

Presumably by "swapping" you're referring to the mechanism described by the first paragraph of Memory paging § History, which says:

In the 1960s, swapping was an early virtual memory technique. An entire program or entire segment would be "swapped out" (or "rolled out") from RAM to disk or drum, and another one would be swapped in (or rolled in).[1][2] A swapped-out program would be current but its execution would be suspended while its RAM was in use by another program; a program with a swapped-out segment could continue running until it needed that segment, at which point it would be suspended until the segment was swapped in.

as opposed to "swapping" in a system that uses paging. For example, 4.1BSD and 4.2BSD had a notion of "[kicking] out deadwood", to quote the comment in sched() in src/sys/sys/vm_sched.c, where a process would be marked as "not loaded" if the paging system thought there wasn't enough free memory and some process had been sleeping for a while (the full criteria are more complicated than that, but that's a summary).
In what places does the article discuss swapping - other than the places that note that (for historical reasons coming from UNIX's history of being a swapping-based system) terms involving "swap" are used in the context of the paging system, e.g. "swap partition" and "swap file" - in a way that confuses it with paging? Guy Harris (talk) 19:06, 2 January 2024 (UTC)Reply
It also appears that, at least for Linux, paging individual anonymous pages (pages whose backing store isn't a file but is what, in UN*Xes, tends to be called, for historical reasons, "swap space") out to backing store sometimes appears to be called "swapping" them out, so "swapping" seems to be used to refer just to some cases of paging, rather than to "address space swapping" as per Virtual memory § Address space swapping.
Apple's documentation doesn't do much better.
At least one document about Oracle Solaris seems to speak of "swapping" as meaning paging of anonymous pages, but at least it says "Paging activities are not necessarily bad, but constantly paging out pages and bringing in new pages, especially when the free column is low, is bad for performance.", rather than speaking of "swapping" as bad.
This page seems to speak of FreeBSD and System V Release 3 as doing "desperation swapping", which is address-space swapping to try to free up pages, and to speak of Linux as not swapping.
Support for memory-mapped files makes the terminological mess worse, because pages not in memory can be paged into memory from a regular file rather than from a "swap partition" or "swap file", and modified pages can be paged out of memory to a regular file rather than to a "swap partition" or "swap file". If pages that a process is currently using get evicted from memory, regardless of whether they're backed by a regular file or swap space, that will slow down the process, as, if it refers to the evicted page, a page fault will occur, and the page will have to be read from the backing store. This means that "swapping is bad" should not be interpreted as meaning "moving pages between RAM and swap space" is bad; yes, that's bad, but if some program has memory-mapped a large file and is processing the data from that file, if the pages from that file get evicted from memory while the program is working on them, that's bad, too.
Ideally, "swapping" should be used only to refer to "address space swapping", and "paging" should be used for all other forms of page I/O; perhaps this article should mention, in the beginning, uses of "swapping" to mean "paging", whether it's all paging or just paging from/to swap space, with references for those usages, as alternate meanings of "swapping", and use "paging" after that. Guy Harris (talk) 06:46, 3 January 2024 (UTC)Reply
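The memory-mapped-file behavior described above can be made concrete with a minimal Python sketch (the file and its contents are invented here for illustration; this is not from the article). It shows that pages of a mapped regular file are faulted in from, and written back to, the file itself rather than to swap space:

```python
# Map a one-page regular file, fault its page in by reading it, dirty it by
# writing through the mapping, and force writeback to the file (as msync(2)
# would). The backing store for these pages is the file, not swap space.
import mmap
import os
import tempfile

page_size = mmap.PAGESIZE
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"A" * page_size)
    with mmap.mmap(fd, page_size) as mapping:
        assert mapping[:4] == b"AAAA"   # touching the mapping faults the page in
        mapping[:4] = b"BBBB"           # modifying the mapping dirties the page
        mapping.flush()                 # force writeback to the file
    with open(path, "rb") as f:
        data = f.read(4)
finally:
    os.close(fd)
    os.unlink(path)
print(data)  # b'BBBB'
```

The same eviction-and-refault cost applies whether the backing store is this file or a swap area, which is the terminological point being made above.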
On systems with segmentation,[a] the OS can transfer an individual segment between RAM and backing store, while leaving other segments of the same process where they are. That is also known as swapping. The reference to processes larger than RAM is likewise applicable to Memory segmentation, even on systems without paging. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 15:03, 3 January 2024 (UTC)Reply

Notes

  1. ^ E.g., MCP on the Burroughs B5000 and B6500.

References

  1. ^ Belzer, Jack; Holzman, Albert G.; Kent, Allen, eds. (1981). "Operating systems". Encyclopedia of computer science and technology. Vol. 11. CRC Press. p. 442. ISBN 0-8247-2261-2. Archived from the original on 2017-02-27.
  2. ^ Cragon, Harvey G. (1996). Memory Systems and Pipelined Processors. Jones and Bartlett Publishers. p. 109. ISBN 0-86720-474-5. Archived from the original on 2017-02-27.

“For use in main memory” should be removed


In the beginning of the article, the part which says “For use in main memory” should be removed. This is because main memory itself is divided into pages. For example, the 6502 processor's first memory page is $0000-$00ff, the second memory page is $0100-$01ff, and so on. LukeTheAwesomePro (talk) 00:05, 31 July 2024 (UTC)Reply

The pages of the 6502's address space are not "pages" in the sense of this article. This article discusses systems in which the hardware enables, and the operating system implements, a mechanism by which addresses used by software are passed through a mapping mechanism that either maps the address to a physical address in memory or provides a "page not present in main memory" indication. The "page not present" indication causes a trap to code in the operating system, which attempts to find the corresponding page's worth of memory in secondary storage; if it finds that page, it attempts to find an unused page's worth of main memory and, if it doesn't find one, chooses an in-use page, writes it out to secondary storage if it's been modified since it was last read from secondary storage, marks it as "not present in main memory" in the mapping mechanism, and then uses it for the page to be read in. Once the read is complete, the operating system updates the mapping mechanism to map the page of software addresses to that page in main memory, and then restarts the code that got the "page not present in main memory" indication so that it retries the instruction that got that indication. (There are many more details, but those are discussed in this and other Wikipedia pages.)
The pages in main memory and in address spaces referred to by software are similar to the pages of the 6502's memory only in that they both have a fixed size, with addresses starting at 0, but the relevant parts in this article are the ones where they are not the same. Guy Harris (talk) 05:43, 31 July 2024 (UTC)Reply
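The fault-handling sequence described above can be sketched as a toy simulation. All names and sizes here are invented for illustration; no real kernel works at this level of simplicity:

```python
# Toy demand-paging simulation: a reference to a page not present in main
# memory triggers a "fault"; the handler finds a free frame or steals one
# (writing the victim back if dirty), reads the wanted page in, updates the
# mapping, and the access is retried.
from collections import OrderedDict

NUM_FRAMES = 2  # deliberately tiny so eviction is exercised

backing_store = {n: f"contents of page {n}" for n in range(4)}
page_table = {}            # virtual page -> frame number ("present" entries)
frames = OrderedDict()     # frame -> (page, dirty); insertion order ~ LRU
faults = 0

def access(page, write=False):
    """Reference a virtual page, faulting it in if necessary."""
    global faults
    if page not in page_table:               # "page not present" indication
        faults += 1
        if len(frames) < NUM_FRAMES:
            frame = len(frames)              # an unused frame is available
        else:
            # Steal the least-recently-used frame; write back if dirty.
            frame, (victim, dirty) = frames.popitem(last=False)
            if dirty:
                backing_store[victim] = f"contents of page {victim} (modified)"
            del page_table[victim]           # mark victim "not present"
        frames[frame] = (page, write)        # "read" the page into the frame
        page_table[page] = frame             # map page -> frame, then retry
    else:
        frame = page_table[page]
        _, dirty = frames.pop(frame)
        frames[frame] = (page, dirty or write)  # move to MRU position
    return backing_store[page]

access(0); access(1, write=True); access(2)   # third access evicts page 0
access(0)                                     # page 0 must be faulted back in
print(faults)  # 4
```

The fourth fault evicts dirty page 1, which is written back to the backing store first, mirroring the "write it out to secondary storage if it's been modified" step in the description above.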