author	Jeremy Fitzhardinge <jeremy@goop.org>	2008-06-20 21:32:12 +0000
committer	Greg Kroah-Hartman <gregkh@suse.de>	2008-06-24 14:08:30 -0700
commit	14e2b0d02fb9ebd54166c1d76c3eeb15c00524c2 (patch)
tree	fcaf77f27849adbda929583de2c4c6308bf0906d
parent	6433ff677312c44e46336d208b44a1f99aac11f4 (diff)
x86: set PAE PHYSICAL_MASK_SHIFT to 44 bits.
commit ad524d46f36bbc32033bb72ba42958f12bf49b06 upstream

When a 64-bit x86 processor runs in 32-bit PAE mode, a pte can
potentially have the same number of physical address bits as the
64-bit host ("Enhanced Legacy PAE Paging").  This means, in theory,
we could have up to 52 bits of physical address in a pte.

The 32-bit kernel uses a 32-bit unsigned long to represent a pfn.
This means that it can only represent physical addresses up to
32+12=44 bits wide.

Rather than widening pfns everywhere, just set 2^44 as the Linux
x86_32-PAE architectural limit for physical address size.

This is a bugfix for two cases:
1. running a 32-bit PAE kernel on a machine with more than 64GB RAM.
2. running a 32-bit PAE Xen guest on a host machine with more than
   64GB RAM

In both cases, a pte could need to have more than 36 bits of
physical address, and masking it to 36 bits will cause fairly
severe havoc.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-rw-r--r--	include/asm-x86/page_32.h	3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/include/asm-x86/page_32.h b/include/asm-x86/page_32.h
index 5f7257fd589b..8f8085b12dea 100644
--- a/include/asm-x86/page_32.h
+++ b/include/asm-x86/page_32.h
@@ -14,7 +14,8 @@
 #define __PAGE_OFFSET		_AC(CONFIG_PAGE_OFFSET, UL)
 
 #ifdef CONFIG_X86_PAE
-#define __PHYSICAL_MASK_SHIFT	36
+/* 44=32+12, the limit we can fit into an unsigned long pfn */
+#define __PHYSICAL_MASK_SHIFT	44
 #define __VIRTUAL_MASK_SHIFT	32
 #define PAGETABLE_LEVELS	3
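
For reference, a minimal user-space C sketch (not part of the patch, and not
kernel code) of the arithmetic the commit message describes: a 32-bit
unsigned long pfn plus the 12-bit page offset can round-trip at most
32+12=44 physical address bits, and the old 36-bit mask truncates any
address above 64GB.  The macro and variable names below are invented for
the example; only the shift values mirror the kernel's definitions.

/* pae_mask_sketch.c - illustrative only, assumes 4 KiB pages */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT		12
#define PHYSICAL_MASK_SHIFT	44	/* was 36 before the patch */
#define PHYSICAL_MASK		(((uint64_t)1 << PHYSICAL_MASK_SHIFT) - 1)

int main(void)
{
	/* A physical address above 64 GiB, i.e. one that needs > 36 bits. */
	uint64_t phys = (uint64_t)65 << 30;	/* 65 GiB = 0x1040000000 */

	/* 44 - 12 = 32 bits, so the pfn still fits in a 32-bit long. */
	uint32_t pfn  = (uint32_t)(phys >> PAGE_SHIFT);
	uint64_t back = (uint64_t)pfn << PAGE_SHIFT;

	printf("phys        = %#llx\n", (unsigned long long)phys);
	printf("44-bit mask = %#llx\n", (unsigned long long)(phys & PHYSICAL_MASK));
	printf("pfn         = %#x, back = %#llx\n",
	       pfn, (unsigned long long)back);

	/* With the old 36-bit mask the same address is silently truncated. */
	uint64_t old_mask = ((uint64_t)1 << 36) - 1;
	printf("36-bit mask = %#llx (truncated)\n",
	       (unsigned long long)(phys & old_mask));
	return 0;
}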