path: root/arch/ia64/kernel/vmlinux.lds.S
Age         Commit message         Author
2010-06-21  [IA64] beautify vmlinux.lds.h  (Sam Ravnborg)
Use the same style as used for C code in vmlinux.lds.h. This is the same format as has been gradually introduced for the other architectures in the kernel. This patch does not introduce any functional changes. Note: use "git diff -w" to suppress whitespace noise.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2010-03-03  Rename .text.lock to .text..lock.  (Denys Vlasenko)
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>
2010-03-03  Rename .text.ivt to .text..ivt.  (Denys Vlasenko)
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>
2010-03-03  Rename .data.patch.XXX to .data..patch.XXX.  (Denys Vlasenko)
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>
2010-03-03  Rename .data.gate to .data..gate.  (Denys Vlasenko)
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>
2010-03-03  Rename .data.page_aligned to .data..page_aligned.  (Tim Abbott)
Signed-off-by: Tim Abbott <tabbott@ksplice.com>
Cc: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>
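Taken together, the renames above move the script to the double-dot naming convention, which keeps kernel-defined section names from colliding with the .text.<function> and .data.<variable> sections that gcc emits under -ffunction-sections/-fdata-sections. A minimal sketch of how the renamed sections appear in vmlinux.lds.S (the __start_ivt_text/__end_ivt_text markers match the kprobes check mentioned further down this log; companion sections elided):

    .text..ivt : AT(ADDR(.text..ivt) - LOAD_OFFSET) {
        __start_ivt_text = .;       /* delimit ivt code so kprobes can refuse it */
        *(.text..ivt)
        __end_ivt_text = .;
    }
    .text..lock : AT(ADDR(.text..lock) - LOAD_OFFSET) {
        *(.text..lock)              /* out-of-line lock text */
    }
    . = ALIGN(PAGE_SIZE);
    .data..page_aligned : AT(ADDR(.data..page_aligned) - LOAD_OFFSET) {
        *(.data..page_aligned)
    }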
2009-10-02  ia64: allocate percpu area for cpu0 like percpu areas for other cpus  (Tejun Heo)
cpu0 used a special percpu area reserved by the linker, __cpu0_per_cpu, which is set up early in boot by head.S. However, this doesn't guarantee that the area will be on the same node as cpu0, and the percpu area for cpu0 ends up very far away from the percpu areas for other cpus, which causes problems for the congruent percpu allocator. This patch makes percpu area initialization allocate a percpu area for cpu0 like any other cpu and copy it from __cpu0_per_cpu, which now resides in the __init area. This means that for cpu0 the percpu area is first set up at __cpu0_per_cpu early by head.S and then moved to an area in the linear mapping during memory initialization; taking a pointer to a percpu variable between head.S and memory initialization is not allowed.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: linux-ia64 <linux-ia64@vger.kernel.org>
2009-09-18  Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6  (Linus Torvalds)
* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6:
  [IA64] Clean up linker script using standard macros.
  [IA64] Use standard macros for page-aligned data.
  [IA64] Use .ref.text, not .text.init for start_ap.
  [IA64] sgi-xp: fix printk format warnings
  [IA64] ioc4_serial: fix printk format warnings
  [IA64] mbcs: fix printk format warnings
  [IA64] pci_br, fix infinite loop in find_free_ate()
  [IA64] kdump: Short path to freeze CPUs
  [IA64] kdump: Try INIT regardless of
  [IA64] kdump: Mask INIT first in panic-kdump path
  [IA64] kdump: Don't return APs to SAL from kdump
  [IA64] kexec: Unregister MCA handler before kexec
  [IA64] kexec: Make INIT safe while transition to
  [IA64] kdump: Mask MCA/INIT on frozen cpus
Fix up conflict in arch/ia64/kernel/vmlinux.lds.S as per Tony's suggestion.
2009-09-15  [IA64] Clean up linker script using standard macros.  (Nelson Elhage)
Aside from using fewer output sections and moving some data around, the main side effect of this change is changing the alignment of some sections. In particular:
  * Cacheline-aligned and read_mostly data are now aligned to SMP_CACHE_BYTES. (Previously, they were laid out consecutively after a PAGE_SIZE alignment.)
  * .init.ramfs is now page-aligned, per the INIT_RAM_FS macro. (Previously it had no explicit alignment.)
Signed-off-by: Nelson Elhage <nelhage@ksplice.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
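A sketch of what the cleanup converges on, assuming the 2009-era vmlinux.lds.h helpers (the macro names are real; the exact argument lists and surrounding context are from memory and worth checking against the tree):

    .data : AT(ADDR(.data) - LOAD_OFFSET) {
        INIT_TASK_DATA(PAGE_SIZE)                  /* page-aligned init task */
        CACHELINE_ALIGNED_DATA(SMP_CACHE_BYTES)    /* was: consecutive after PAGE_SIZE align */
        READ_MOSTLY_DATA(SMP_CACHE_BYTES)
        DATA_DATA
        *(.data1)
        CONSTRUCTORS
    }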
2009-09-15  [IA64] Use standard macros for page-aligned data.  (Nelson Elhage)
Signed-off-by: Nelson Elhage <nelhage@ksplice.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2009-09-15  [IA64] Use .ref.text, not .text.init for start_ap.  (Tim Abbott)
It seems that start_ap doesn't need to be in a special location in the kernel, but it references some init code, so it should be in .ref.text. Since this is the only thing in the .text.head section, eliminate .text.head from the linker script.
Signed-off-by: Tim Abbott <tabbott@ksplice.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2009-07-09  linker script: unify usage of discard definition  (Tejun Heo)
Discarded sections in different archs share some commonality but have considerable differences. This led to the linker script for each arch implementing its own /DISCARD/ definition, which makes maintenance tedious and adding new entries error-prone. This patch makes all linker scripts move their discard definitions to the end of the linker script and use the common DISCARDS macro. Because ld uses the first matching section definition, an arch can still keep a section that would otherwise be discarded by including it earlier in the linker script. ia64 is notable because it first throws away some ia64 specific subsections and then includes the rest of those sections into the final image, so those subsections must be discarded before the inclusion. defconfig compile tested for x86, x86-64, powerpc, powerpc64, ia64, alpha, sparc, sparc64 and s390. Michal Simek tested microblaze.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Tested-by: Michal Simek <monstr@monstr.eu>
Cc: linux-arch@vger.kernel.org
Cc: Michal Simek <monstr@monstr.eu>
Cc: microblaze-uclinux@itee.uq.edu.au
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Tony Luck <tony.luck@intel.com>
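A sketch of the pattern this commit establishes for ia64 (the DISCARDS body shown approximates the 2009-era vmlinux.lds.h definition):

    /* vmlinux.lds.h (approximate) */
    #define DISCARDS          \
        /DISCARD/ : {         \
            EXIT_TEXT         \
            EXIT_DATA         \
            EXIT_CALL         \
            *(.discard)       \
        }

    SECTIONS {
        /* ia64 throws away its unwanted subsections first, since ld honors
         * the first matching definition... */
        /DISCARD/ : {
            *(.IA_64.unwind.exit.text)
            *(.IA_64.unwind_info.exit.text)
        }
        /* ...then the normal output sections, then the common macro last */
        DISCARDS
    }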
2009-06-24  linker script: throw away .discard section  (Tejun Heo)
x86 throws away the .discard section but no other archs do. Also, .discard is not thrown away while linking modules. Make every arch and module linking throw it away. This will be used to define dummy variables for percpu declarations and definitions. This patch is based on Ivan Kokshaysky's alpha percpu patch.
[ Impact: always throw away everything in .discard ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: Bryan Wu <cooloney@kernel.org>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Ingo Molnar <mingo@elte.hu>
2009-03-31  Pull pvops into release branch  (Tony Luck)
2009-03-26  ia64/pv_op/binarypatch: add helper functions to support binary patching for paravirt_ops.  (Isaku Yamahata)
Add helper functions to support binary patching for paravirt_ops.
Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2009-03-26  ia64/pv_ops/xen: define xen specific gate page.  (Isaku Yamahata)
Define a xen specific gate page. At this phase the bits in the gate page are the same as the native ones. In the next phase they will be paravirtualized.
Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2009-03-10  linker script: define __per_cpu_load on all SMP capable archs  (Tejun Heo)
Impact: __per_cpu_load available on all SMP capable archs
Percpu now requires three symbols to be defined: __per_cpu_load, __per_cpu_start and __per_cpu_end. There were three archs which didn't define all of them. Update them as follows.
  * powerpc: can use the generic PERCPU() macro. Compile tested for powerpc32, compile/boot tested for powerpc64.
  * ia64: can use the generic PERCPU_VADDR() macro. __phys_per_cpu_start is identical to __per_cpu_load. Compile tested, and the symbol table looks identical after the change except for the additional __per_cpu_load.
  * arm: added an explicit __per_cpu_load definition. arm currently uses a unified .init output section, so it can't use the generic macro. I don't know whether the unified .init output section is required by an arch peculiarity, so I left it alone. Please break it up and use PERCPU() if possible.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Pat Gefre <pfg@sgi.com>
Cc: Russell King <rmk@arm.linux.org.uk>
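For ia64 the change boils down to replacing the hand-rolled percpu output section with the generic helper; a sketch, assuming the 2009-era two-argument PERCPU_VADDR(vaddr, phdr) form:

    SECTIONS {
        ...
        /* percpu template lives at a fixed virtual address, in its own phdr */
        PERCPU_VADDR(PERCPU_ADDR, :percpu)
        __phys_per_cpu_start = __per_cpu_load;    /* old ia64 symbol, kept as an alias */
        ...
    }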
2009-01-17  linker script: add missing .data.percpu.page_aligned  (Tejun Heo)
arm, arm/mach-integrator and powerpc were missing .data.percpu.page_aligned in their percpu output section definitions. Add it.
Signed-off-by: Tejun Heo <tj@kernel.org>
2008-09-29  [IA64] Put the space for cpu0 per-cpu area into .data section  (Tony Luck)
The initial fix for making sure that we can access percpu variables in all C code (commit 10617bbe84628eb18ab5f723d3ba35005adde143) inadvertently allocated the memory in the "percpu" section of the vmlinux ELF executable. This confused kexec/dump.
Signed-off-by: Tony Luck <tony.luck@intel.com>
2008-08-12  [IA64] Ensure cpu0 can access per-cpu variables in early boot code  (Tony Luck)
ia64 handles per-cpu variables a little differently from other architectures in that it maps the physical memory allocated for each cpu at a constant virtual address (0xffffffffffff0000). This mapping is not enabled until the architecture specific cpu_init() function is run, which causes problems since some generic code runs before that point. In particular, when CONFIG_PRINTK_TIME is enabled, the boot cpu will trap on the access to per-cpu memory at the first printk() call, so the boot fails without the kernel printing anything to the console. Fix this by allocating percpu memory for cpu0 in the kernel data section and doing all the initialization needed to enable percpu access in head.S, before calling any generic code. Other cpus must take care not to access per-cpu variables too early, but their code path from start_secondary() to cpu_init() is all in arch/ia64.
Signed-off-by: Tony Luck <tony.luck@intel.com>
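A sketch of the reservation this describes, as it might appear inside the .data output section (the __cpu0_per_cpu symbol matches the 2009-10-02 entry above; exact placement and alignment are from memory):

    . = ALIGN(PERCPU_PAGE_SIZE);
    __cpu0_per_cpu = .;              /* head.S points cpu0's percpu mapping here */
    . = . + PERCPU_PAGE_SIZE;        /* reserve cpu0's per-cpu space in .data */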
2008-05-27  [IA64] pvops: preparation: move the constants, LOAD_OFFSET, to a header file.  (Isaku Yamahata)
Move the LOAD_OFFSET definition from vmlinux.lds.S into system.h. On paravirtualized environments it is necessary to detect the execution environment. One of the solutions is a multi entry point: it allows a boot loader to start kernel execution from an entry point different from the ELF entry point. The non-standard entry point will be defined as a specialized ELF note which contains the LMA of the entry point symbol. The constant LOAD_OFFSET is needed to calculate the symbol's LMA, so move the definition into a public header file to make it available to the multi entry point support.
Cc: "He, Qing" <qing.he@intel.com>
Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Signed-off-by: Tony Luck <tony.luck@intel.com>
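LOAD_OFFSET is the constant every AT() clause in the script uses to turn a section's virtual address (VMA) into its load address (LMA); a generic sketch of the pattern:

    /* for each output section: LMA = VMA - LOAD_OFFSET */
    .text : AT(ADDR(.text) - LOAD_OFFSET) {
        *(.text)
    }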
2008-05-27  [IA64] Workaround for RSE issue  (Tony Luck)
Problem: An application violating the architectural rules regarding operation dependencies, and having specific Register Stack Engine (RSE) state at the time of the violation, may result in an illegal operation fault and invalid RSE state. Such faults may initiate a cascade of repeated illegal operation faults within OS interruption handlers. The specific behavior is OS dependent.
Implication: An application causing an illegal operation fault with specific RSE state may result in a series of illegal operation faults and an eventual OS stack overflow condition.
Workaround: OS interruption handlers that switch to the kernel backing store implement a check for invalid RSE state to avoid the series of illegal operation faults. The core of the workaround is the RSE_WORKAROUND code sequence inserted into each invocation of the SAVE_MIN_WITH_COVER and SAVE_MIN_WITH_COVER_R19 macros. This sequence includes hard-coded constants that depend on the number of stacked physical registers being 96. The rest of this patch consists of code to disable the workaround should this not be the case (with the presumption that if a future Itanium processor increases the number of registers, it would also remove the need for this patch). Move the start of the RBS up to a mod32 boundary to avoid some corner cases. The dispatch_illegal_op_fault code outgrew the spot it was squatting in when built with this patch and CONFIG_VIRT_CPU_ACCOUNTING=y, so move it out to the end of the ivt.
Signed-off-by: Tony Luck <tony.luck@intel.com>
2008-01-28  all archs: consolidate init and exit sections in vmlinux.lds.h  (Sam Ravnborg)
This patch consolidates all definitions of the .init.text, .init.data and .exit.text, .exit.data sections in the generic vmlinux.lds.h. This is a preparatory patch; on its own it does not buy us much.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
2007-12-07  [IA64] rename _bss to __bss_start  (Bernhard Walle)
Rename _bss to __bss_start, as on other architectures. That makes it possible to use <linux/sections.h> instead of our own declarations. Also add __bss_stop, because that symbol exists on other architectures.
Signed-off-by: Bernhard Walle <bwalle@suse.de>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2007-08-13  [IA64] get back PT_IA_64_UNWIND program header  (David Mosberger-Tang)
Explicitly put the unwind section into its own program header. This used to be unnecessary (probably because binutils did it for us), but with current binutils (e.g., v2.17.50.20070804) we won't get the PT_IA_64_UNWIND header without this patch, which breaks unwinding in a debugger and in simulators such as Ski.
Signed-off-by: David Mosberger-Tang <dmosberger@gmail.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
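A sketch of the explicit program header and the section that lands in it (the PHDRS entry names follow the script's convention; the numeric type value and the exact phdr list are from memory, so treat this as an approximation):

    PHDRS {
        code   PT_LOAD;
        data   PT_LOAD;
        note   PT_NOTE;
        unwind 0x70000001;    /* PT_IA_64_UNWIND */
    }
    SECTIONS {
        ...
        .IA_64.unwind : AT(ADDR(.IA_64.unwind) - LOAD_OFFSET) {
            __start_unwind = .;
            *(.IA_64.unwind*)
            __end_unwind = .;
        } :code :unwind       /* keep it in the text load segment too */
        ...
    }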
2007-08-13  [IA64] need NOTES in vmlinux.lds.S  (David Mosberger-Tang)
Add NOTES to the linker script so that the kernel can be built with recent versions of binutils. Without this patch, the final link fails with this error:
    ld: .tmp_vmlinux1: section `.text' can't be allocated in segment 0
    ld: final link failed: Bad value
This error is due to the fact that the --build-id option is used with newer linkers to include a .notes section in the kernel; without the NOTES macro, that section won't be included in the kernel, which then leads to the above error message.
Signed-off-by: David Mosberger-Tang <dmosberger@gmail.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
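The fix itself is close to a one-liner in the script; a sketch, with the NOTES body approximating its vmlinux.lds.h definition of the time:

    /* vmlinux.lds.h (approximate):
     *   #define NOTES  .notes : { *(.note.*) } :note
     */
    NOTES :code :note    /* put .notes in text and mark it in a PT_NOTE phdr */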
2007-07-25  [IA64] fix section mismatch warnings  (Tony Luck)
In 741f98fe298a73c9d47ed53703c1279a29718581 Sam added full checking across the entire vmlinux image. This flushed out a dozen new section mismatch warnings. Start the whack-a-mole game again to stomp them out.
Signed-off-by: Tony Luck <tony.luck@intel.com>
2007-07-19  define new percpu interface for shared data  (Fenghua Yu)
The per cpu data section contains two types of data: one set that is exclusively accessed by the local cpu, and another set that is per cpu but also shared by remote cpus. In the current kernel these two sets are not clearly separated, which can cause the same data cacheline to be shared between the two sets, resulting in unnecessary bouncing of the cacheline between cpus. One way to fix the problem is to cacheline align the remotely accessed per cpu data, both at the beginning and at the end. Because of the padding at both ends, this will likely cause some memory wastage, and the interface to achieve it is not clean. This patch moves the remotely accessed per cpu data (currently marked as ____cacheline_aligned_in_smp) into a different section, where all the data elements are cacheline aligned. As such, it cleanly differentiates the local-only data from the remotely accessed data.
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: <linux-arch@vger.kernel.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
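A sketch of the resulting percpu layout on ia64 (single-dot section names per the pre-2010 convention; details from memory):

    . = ALIGN(PERCPU_PAGE_SIZE);
    __phys_per_cpu_start = .;
    .data.percpu PERCPU_ADDR : AT(__phys_per_cpu_start - LOAD_OFFSET) {
        __per_cpu_start = .;
        *(.data.percpu)                  /* cpu-local data */
        *(.data.percpu.shared_aligned)   /* cacheline-aligned, remotely accessed data */
        __per_cpu_end = .;
    }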
2007-05-19  all-archs: consolidate .data section definition in asm-generic  (Sam Ravnborg)
With this consolidation we can now modify the .data section definition in one spot for all archs.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
2007-05-19  all-archs: consolidate .text section definition in asm-generic  (Sam Ravnborg)
Move definition of .text section to asm-generic.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
2007-04-30  Pull percpu-dtc into release branch  (Tony Luck)
2007-02-11  [PATCH] disable init/initramfs.c: architectures  (Jean-Paul Saman)
Update all arch/*/kernel/vmlinux.lds.S files to not include space for the initramfs when CONFIG_BLK_DEV_INITRAMFS is not selected. This saves another 4 kbytes on most platforms (some reserve PAGE_SIZE for the initramfs).
Signed-off-by: Jean-Paul Saman <jean-paul.saman@nxp.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
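A sketch of the per-arch change (the config symbol is taken from the message above; the guard actually merged may differ):

    #ifdef CONFIG_BLK_DEV_INITRAMFS
        . = ALIGN(PAGE_SIZE);
        .init.ramfs : AT(ADDR(.init.ramfs) - LOAD_OFFSET) {
            __initramfs_start = .;
            *(.init.ramfs)
            __initramfs_end = .;
        }
    #endif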
2007-02-06  [IA64] remove per-cpu ia64_phys_stacked_size_p8  (Chen, Kenneth W)
It's not efficient to use a per-cpu variable just to store how many physical stack registers a cpu has. Ever since the incarnation of ia64, up through the upcoming Montecito processor, that value has been glued to 96. Having a variable in memory means that the kernel is burning an extra cacheline access on every syscall and kernel exit path. Such a "static" value is better served by the instruction patching utility that exists today. Convert ia64_phys_stacked_size_p8 into dynamic insn patching. This also has the pleasant side effect of eliminating access to the per-cpu area while psr.ic=0 in the kernel exit path. (That is fixable with the per-cpu DTC work, but why bother?) There are some concerns about the default value of the instruction encoded in the kernel image. It shouldn't be a concern, for two reasons: (1) cpu_init() is called at CPU initialization. In there, we find out the physical stack register size from PAL and patch two instructions in the kernel exit code. The code in question cannot be executed before the patching is done. (2) The current implementation stores zero in ia64_phys_stacked_size_p8, and that's what the current kernel exit path loads the value with. With the new code, it is as if we store register size 96 in ia64_phys_stacked_size_p8, thus creating a better safety net. Given that (1) above can never fail, having (2) is just a bonus. All in all, this patch allows one less memory reference in the kernel exit path, thus reducing syscall and interrupt return latency, and avoids polluting potentially useful data in the CPU cache.
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2007-02-05  [IA64] alignment bug in ldscript  (Kirill Korotaev)
Occasionally the FSYS_RETURN patch list can have an odd length, causing other data structures to get out of alignment. In OpenVZ it is odd, and we get a misaligned kernel image which does not boot.
Signed-off-by: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
Signed-off-by: Kirill Korotaev <dev@openvz.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-10-27  [PATCH] vmlinux.lds: consolidate initcall sections  (Andrew Morton)
Add a vmlinux.lds.h helper macro for defining the eight-level initcall table, and teach all the architectures to use it. This is a prerequisite for a patch which performs initcall synchronisation for multithreaded probing.
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
[ Added AVR32 as well ]
Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
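A sketch of the helper and its per-arch use (the level list approximates the macro as merged; exact levels and symbol names are from memory):

    /* include/asm-generic/vmlinux.lds.h (approximate) */
    #define INITCALLS           \
        *(.initcall1.init)      \
        *(.initcall2.init)      \
        *(.initcall3.init)      \
        *(.initcall4.init)      \
        *(.initcall5.init)      \
        *(.initcall6.init)      \
        *(.initcall7.init)

    /* per-arch usage, e.g. in arch/ia64/kernel/vmlinux.lds.S */
    .initcall.init : AT(ADDR(.initcall.init) - LOAD_OFFSET) {
        __initcall_start = .;
        INITCALLS
        __initcall_end = .;
    }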
2006-09-26  [IA64] minor reformatting to vmlinux.lds.S  (Al Stone)
Minor reformatting to vmlinux.lds.S to make it 80-column usable, in accordance with Linux coding style.
Signed-off-by: Al Stone <ahs3@fc.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-06-30  Remove obsolete #include <linux/config.h>  (Jörn Engel)
Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
2006-03-29  [IA64] Move __mca_table out of the __init section  (Russ Anderson)
Move __mca_table out of the __init section.
Signed-off-by: Tony Luck <tony.luck@intel.com>
2006-03-24  [IA64] MCA recovery: kernel context recovery table  (Russ Anderson)
Memory errors encountered by user applications may surface when the CPU is running in kernel context. The current code will not attempt recovery if the MCA surfaces in kernel context (privilege mode 0). This patch adds a check for cases where the user initiated a load that surfaces in kernel interrupt code. An example is a user process launching a load from memory where the data in memory has bad ECC. Before the bad data gets to the CPU register, an interrupt comes in. The code jumps to the IVT interrupt entry point and begins execution in kernel context. The process of saving the user registers (SAVE_REST) causes the bad data to be loaded into a CPU register, triggering the MCA. The MCA surfaces in kernel context, even though the load was initiated from user context. As suggested by David and Tony, this patch uses an exception-table-like approach, putting the tagged recovery addresses in a searchable table. One difference from the exception table is that MCAs do not surface in precise places (such as with a TLB miss), so instead of tagging specific instructions, address ranges are registered. A single macro is used to do the tagging, with the input parameter being the label of the starting address and the location of the macro itself marking the ending address. This limits clutter in the code. This patch only tags one spot, the interrupt ivt entry. Testing showed that spot to be a "heavy hitter", with MCAs surfacing while saving user registers. Other spots can be added as needed, each with a single macro.
Signed-off-by: Russ Anderson (rja@sgi.com)
Signed-off-by: Tony Luck <tony.luck@intel.com>
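On the linker script side such a table is just a delimited output section the MCA handler can search; a sketch (the __start_/__stop_ delimiter convention is standard; the exact symbol names are from memory):

    __mca_table : AT(ADDR(__mca_table) - LOAD_OFFSET) {
        __start___mca_table = .;
        *(__mca_table)               /* (start, end) address-range entries */
        __stop___mca_table = .;
    }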
2006-03-22  [IA64] move patchlist and machvec into init section  (Chen, Kenneth W)
ia64_mv is initialized based on the platform detected or specified. However, there is only one instantiation of each platform type, and we don't expect to switch platform vectors at run time. Move those platform specific instances into the init section, since a copy is made into the global ia64_mv at initialization. Also move the instruction patch list into the init section.
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
2005-12-16  [IA64] Add __read_mostly support for IA64  (Christoph Lameter)
sparc64, i386 and x86_64 have support for a special data section dedicated to rarely updated data that is frequently read. The section was created to avoid false sharing between that read-mostly data and frequently written kernel data. This patch creates such a data section for ia64 and will group rarely written data into it.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
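A sketch of the ia64 addition (single-dot name per 2005 conventions; the section was later renamed .data..read_mostly by the 2010 series at the top of this log):

    . = ALIGN(SMP_CACHE_BYTES);
    .data.read_mostly : AT(ADDR(.data.read_mostly) - LOAD_OFFSET) {
        *(.data.read_mostly)    /* variables tagged __read_mostly land here */
    }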
2005-09-07  [PATCH] Kprobes: prevent possible race conditions ia64 changes  (Prasanna S Panchamukhi)
This patch contains the ia64 architecture specific changes to prevent the possible race conditions.
Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-27  [PATCH] kprobes/ia64: refuse kprobe on ivt code  (Keshavamurthy Anil S)
It is not safe to insert kprobes on IVT code. This patch checks whether the address at which a kprobe is being inserted falls within the ivt code and, if so, refuses to register the kprobe.
Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Acked-by: David Mosberger <davidm@napali.hpl.hp.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-04-16  Linux-2.6.12-rc2 (tag: v2.6.12-rc2)  (Linus Torvalds)
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!