[Linux-ia64] [PATCH] kernel updated (relative to 2.3.99pre6)

From: David Mosberger <davidm_at_hpl.hp.com>
Date: 2000-05-26 17:49:50
Attached below is the long overdue kernel update.  The diff below is
relative to the previous IA-64 kernel.  A full diff relative to
Linus's 2.3.99pre6 is at:

 ftp://ftp.kernel.org/pub/linux/kernel/ports/ia64/

in file "linux-2.3.99-pre6-ia64-000525.diff.gz".  Note: this kernel
hasn't been tested in a lot of configurations so I'd not recommend to
use it in a distribution (not without further testing at least).  In
particular, I haven't had time to test SMP.  UP works fine on Big Sur
and the HP simulator.  The reason I'm making the patch available now
is that we need to sync up with Linus's tree so if this patch happens
not to work on some systems, not much is lost as I plan to put out a
patch soon that brings us to 2.4-test1.

An updated kdb patch is available at:

 ftp://ftp.hpl.hp.com/pub/linux-ia64/kdb-2.3.99-pre6-000525.diff.gz

The kdb patch is known not to work with SMP (it crashes during boot
with a "CPU already initialized" error; it looks like a stupid
problem, but I haven't had time to fix it; if someone could make this
work again, that would be great).

Summary of changes:

 o The big change this time is lots of new code to support full stack
   unwinding.  This is fairly intricate stuff and the code is not 100%
   complete yet.  For example, more debugging needs to be done, the
   memory/cache management is stupid, and SMP locking is
   missing. Having said that, I'm quite happy with how things are
   shaping up so far.  It looks like we should be able to unwind both
   reliably and efficiently.  The "backtrace" command in kdb works
   better than it ever did and there is complete support for all
   unwind directives (modulo bugs that is... ;-).  The new unwind
   support is enabled only if CONFIG_IA64_NEW_UNWIND is turned on.  Do
   NOT turn this on right now unless you have a toolchain that has all
   the unwind fixes applied (this goes for both gcc and gas).  Without
   those fixes, the unwind info will be incorrect and things won't
   work properly.

   As a consequence, I added unwind directives to all the assembly
   files (I think I missed mca_asm.S, though).  Please be sure to keep
   this info in sync with the assembly code.  For a description of the
   unwind directives, see the Assembly Reference Guide (available at
   http://developer.intel.com/design/ia64).

 o Workaround for the global TLB flush hardware erratum (Asit).

 o Kernel module support (Stephen Zeisset).

 o SIGFPE now passes ISR in the si_isr field of siginfo to permit
   user-level fp emulation/testing etc. (Goutham).
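
   For illustration, a user-level handler could pick up the ISR bits
   roughly like this (a minimal sketch, assuming the ia64 siginfo_t
   exposes the new si_isr member; everything else is plain sigaction()
   usage):

	#include <signal.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	/* Sketch of a SIGFPE handler that looks at the interruption
	 * status register (ISR) bits now passed by the kernel. */
	static void
	fpe_handler (int sig, siginfo_t *info, void *context)
	{
		unsigned long isr = info->si_isr;	/* ia64-specific field */

		fprintf(stderr, "SIGFPE: si_code=%d isr=0x%lx\n",
			info->si_code, isr);
		/* a user-level fp emulator would decode the ISR here and
		   emulate or fix up the faulting instruction */
		exit(1);
	}

	int
	main (void)
	{
		struct sigaction sa;

		memset(&sa, 0, sizeof(sa));
		sa.sa_sigaction = fpe_handler;
		sa.sa_flags = SA_SIGINFO;
		sigaction(SIGFPE, &sa, NULL);
		/* ... code that raises an fp fault or trap goes here ... */
		return 0;
	}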

 o Fixes to the emu10k1 sound driver (Bill, I think).

 o New kernel option: if you have a CPU that has an A2 or later
   stepping, you can turn off CONFIG_ITANIUM_A1_SPECIFIC and enjoy
   reduced interrupt latency etc.

 o Increased the number of PCI busses to scan to 255 (the fix necessary
   to make this work came from Asit).

 o Changes required for the AzusA platform (kouchi).  Please note that
   I did not include the azusa_vectors table nor did I change the
   isa_irq_to_vector_map.  For the former, please hack (or better
   still: fix) the bootloader.  For the latter, I think we need a
   better solution.  Either we can find an ISA vector map that will
   work on all platforms or, if that's not possible, we'll need to
   assign the ISA vectors dynamically.  Anyone want to take a stab at
   a clean solution for this?
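
   To make the second option concrete, dynamic assignment could look
   something like the sketch below (purely illustrative, not existing
   code; the function names and the idea of handing out a contiguous
   block of free vectors at boot are assumptions):

	/* Sketch: replace the static isa_irq_to_vector_map[] with a
	 * table filled in at boot time from whatever vectors are free
	 * on the platform at hand. */
	#define NR_ISA_IRQS	16

	static unsigned char isa_vector_map[NR_ISA_IRQS];

	/* called once at boot, after the platform code has picked a
	 * free vector range for ISA use */
	void
	assign_isa_vectors (unsigned char first_free_vector)
	{
		int i;

		for (i = 0; i < NR_ISA_IRQS; i++)
			isa_vector_map[i] = first_free_vector + i;
	}

	unsigned char
	isa_irq_to_vector (int isa_irq)
	{
		return isa_vector_map[isa_irq & (NR_ISA_IRQS - 1)];
	}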

 o Timer-management related updates by Walt.  I chose a different
   approach to handling "race conditions" with the itm.  Basically, the
   new code turns off interrupts while checking whether the itc is past
   the itm, and additionally allows for a slack of 1000 cycles.  I
   haven't done much experimenting with it yet, but I do think this
   should be a good way to reduce the number of spurious "lost tick"
   messages.
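
   The idea, as a rough C sketch (not the actual time.c code; read_itc()
   stands in for reading ar.itc, and the caller is assumed to have
   interrupts disabled as described above):

	/* Return nonzero if the newly programmed match value (itm) has
	 * already been passed by the cycle counter (itc), allowing for
	 * roughly 1000 cycles of slack, so the tick can be accounted
	 * for directly instead of being reported as lost. */
	#define TICK_SLACK	1000UL			/* cycles */

	extern unsigned long read_itc (void);		/* stand-in for ar.itc */

	static int
	itm_already_passed (unsigned long new_itm)
	{
		unsigned long now = read_itc();

		/* signed difference copes with counter wraparound */
		return (long) (now + TICK_SLACK - new_itm) >= 0;
	}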

 o IA-32 updates by Don and myself.  Don, note that the IA-32
   state-switching is now in ia32_support.c.  I rearranged that code a
   little.  I think it should be much faster now, but I haven't
   measured it.

 o New header file <asm/asmmacro.h> defines a couple of convenience
   macros to assist in assembly programming.

 o Removed support for old compilers that didn't know how to return
   values in r9, r10, and r11.

 o ptrace updates in preparation for the new unwind code.  Kevin, note
   that PT_PRI_UNAT now exists but it does NOT yet return the correct
   NaT bits.  This will be one of the next things I'll work on.

 o Fix execve() code so that running gdb on vmlinux and typing "run"
   no longer freezes the system.  ;-)

That should be it.  Please give it a spin and let me know how it
works.

If someone is looking for a small but potentially interesting project:
in the SMP case, we currently do not implement a fine-grained
gettimeofday().  This is easy to do if you're willing to send an IPI to
the bootstrap processor on every gettimeofday(), but it seems to me we
should be able to do better than that (a lot better, actually).  It
would be great if someone could come up with a good way of solving this
(to get started, take a look at
arch/ia64/kernel/time.c:gettimeoffset()).
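
One possible direction, as a very rough sketch (this is not existing
code; all names below are made up and memory-ordering details are
glossed over): let the bootstrap processor publish a (wall time, itc)
snapshot at every tick, calibrate each CPU's itc offset against the
bootstrap processor once at boot, and then interpolate locally:

	#include <sys/time.h>

	/* snapshot published by the bootstrap processor's tick handler */
	struct time_anchor {
		unsigned long	seq;	/* bumped before and after each update */
		struct timeval	tv;	/* wall time at the last tick */
		unsigned long	itc;	/* BSP cycle counter at the last tick */
	};

	extern struct time_anchor anchor;
	extern unsigned long itc_offset[];		/* per-CPU skew vs. the BSP */
	extern unsigned long read_itc (void);		/* this CPU's ar.itc */
	extern unsigned long cycles_to_usec (unsigned long cycles);

	void
	fine_gettimeofday (int cpu, struct timeval *tv)
	{
		unsigned long seq, delta;

		do {
			seq = anchor.seq;
			*tv = anchor.tv;
			delta = (read_itc() - itc_offset[cpu]) - anchor.itc;
		} while (seq != anchor.seq || (seq & 1));	/* retry if updated */

		tv->tv_usec += cycles_to_usec(delta);
		while (tv->tv_usec >= 1000000) {
			tv->tv_usec -= 1000000;
			tv->tv_sec++;
		}
	}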

Enjoy,

	--david

diff -urN linux-davidm/arch/ia64/Makefile linux-2.3.99-pre6-lia/arch/ia64/Makefile
--- linux-davidm/arch/ia64/Makefile	Thu May 25 23:22:10 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/Makefile	Thu May 25 22:52:30 2000
@@ -12,16 +12,11 @@
 AWK := awk
 
 LINKFLAGS = -static -T arch/$(ARCH)/vmlinux.lds
-# next line is for HP compiler backend:
-#AFLAGS += -DGCC_RETVAL_POINTER_IN_R8
-# The next line is needed when compiling with the July snapshot of the Cygnus compiler:
-#EXTRA	= -D__GCC_DOESNT_KNOW_IN_REGS__
-# next two lines are for the September snapshot of the Cygnus compiler:
-AFLAGS += -D__GCC_MULTIREG_RETVALS__ -Wa,-x
-EXTRA	= -D__GCC_MULTIREG_RETVALS__
+AFLAGS += -Wa,-x
+EXTRA	=
 
-#CFLAGS := $(CFLAGS) -pipe -mconstant-gp $(EXTRA) -Wa,-x -ffixed-r13 -mfixed-range=f10-f15,f32-f127
-CFLAGS := $(CFLAGS) -pipe $(EXTRA) -Wa,-x -ffixed-r13 -mfixed-range=f10-f15,f32-f127
+CFLAGS := $(CFLAGS) -pipe $(EXTRA) -Wa,-x -ffixed-r13 -mfixed-range=f10-f15,f32-f127 \
+	  -mconstant-gp -funwind-tables
 
 ifdef CONFIG_IA64_GENERIC
 	CORE_FILES      :=      arch/$(ARCH)/hp/hp.a	\
diff -urN linux-davidm/arch/ia64/boot/Makefile linux-2.3.99-pre6-lia/arch/ia64/boot/Makefile
--- linux-davidm/arch/ia64/boot/Makefile	Thu Mar 30 16:56:04 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/boot/Makefile	Thu May 25 22:53:10 2000
@@ -25,7 +25,8 @@
 all:	$(TARGETS)
 
 bootloader: $(OBJECTS)
-	$(LD) $(LINKFLAGS) $(OBJECTS) $(LIBS) -o bootloader
+	$(LD) $(LINKFLAGS) $(OBJECTS) $(TOPDIR)/lib/lib.a $(TOPDIR)/arch/$(ARCH)/lib/lib.a \
+	-o bootloader
 
 clean:
 	rm -f $(TARGETS)
diff -urN linux-davidm/arch/ia64/config.in linux-2.3.99-pre6-lia/arch/ia64/config.in
--- linux-davidm/arch/ia64/config.in	Thu May 25 23:22:10 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/config.in	Thu May 25 22:54:04 2000
@@ -24,9 +24,9 @@
 if [ "$CONFIG_IA64_DIG" = "y" ]; then
 	bool '  Enable Itanium A-step specific code' CONFIG_ITANIUM_ASTEP_SPECIFIC
 	bool '  Enable Itanium A1-step specific code' CONFIG_ITANIUM_A1_SPECIFIC
+	bool '  Enable use of global TLB purge instruction (ptc.g)' CONFIG_ITANIUM_PTCG
 	bool '  Enable SoftSDV hacks' CONFIG_IA64_SOFTSDV_HACKS
-	bool '  Enable BigSur hacks' CONFIG_IA64_BIGSUR_HACKS
-	bool '  Enable Lion hacks' CONFIG_IA64_LION_HACKS
+	bool '  Enable AzusA hacks' CONFIG_IA64_AZUSA_HACKS
 	bool '  Emulate PAL/SAL/EFI firmware' CONFIG_IA64_FW_EMU
 	bool '  Enable IA64 Machine Check Abort' CONFIG_IA64_MCA
 fi
@@ -188,5 +188,6 @@
 bool 'Turn on compare-and-exchange bug checking (slow!)' CONFIG_IA64_DEBUG_CMPXCHG
 bool 'Turn on irq debug checks (slow!)' CONFIG_IA64_DEBUG_IRQ
 bool 'Print possible IA64 hazards to console' CONFIG_IA64_PRINT_HAZARDS
+bool 'Enable new unwind support' CONFIG_IA64_NEW_UNWIND
 
 endmenu
diff -urN linux-davidm/arch/ia64/dig/iosapic.c linux-2.3.99-pre6-lia/arch/ia64/dig/iosapic.c
--- linux-davidm/arch/ia64/dig/iosapic.c	Thu May 25 23:22:10 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/dig/iosapic.c	Thu May 25 22:54:40 2000
@@ -67,6 +67,12 @@
 		 (delivery << IO_SAPIC_DELIVERY_SHIFT) |
 		 vector);
 
+#ifdef CONFIG_IA64_AZUSA_HACKS
+	/* set Flush Disable bit */
+	if (iosapic_addr != 0xc0000000fec00000)
+		low32 |= (1 << 17);
+#endif
+
 	/* dest contains both id and eid */
 	high32 = (dest << IO_SAPIC_DEST_SHIFT);	
 
@@ -216,29 +222,31 @@
 }
 
 void
-iosapic_init (unsigned long address)
+iosapic_init (unsigned long address, int irqbase)
 {
 	struct hw_interrupt_type *irq_type;
 	struct pci_vector_struct *vectors;
 	int i, irq;
 
-	/* 
-	 * Map the legacy ISA devices into the IOSAPIC data.  Some of
-	 * these may get reprogrammed later on with data from the ACPI
-	 * Interrupt Source Override table.
-	 */
-	for (i = 0; i < 16; i++) {
-		irq = isa_irq_to_vector(i);
-		iosapic_pin(irq) = i; 
-		iosapic_bus(irq) = BUS_ISA;
-		iosapic_busdata(irq) = 0;
-		iosapic_dmode(irq) = IO_SAPIC_LOWEST_PRIORITY;
-		iosapic_trigger(irq)  = IO_SAPIC_EDGE;
-		iosapic_polarity(irq) = IO_SAPIC_POL_HIGH;
+	if (irqbase == 0)
+		/* 
+		 * Map the legacy ISA devices into the IOSAPIC data.
+		 * Some of these may get reprogrammed later on with
+		 * data from the ACPI Interrupt Source Override table.
+		 */
+		for (i = 0; i < 16; i++) {
+			irq = isa_irq_to_vector(i);
+			iosapic_pin(irq) = i; 
+			iosapic_bus(irq) = BUS_ISA;
+			iosapic_busdata(irq) = 0;
+			iosapic_dmode(irq) = IO_SAPIC_LOWEST_PRIORITY;
+			iosapic_trigger(irq)  = IO_SAPIC_EDGE;
+			iosapic_polarity(irq) = IO_SAPIC_POL_HIGH;
 #ifdef DEBUG_IRQ_ROUTING
-		printk("ISA: IRQ %02x -> Vector %02x IOSAPIC Pin %d\n", i, irq, iosapic_pin(irq));
+			printk("ISA: IRQ %02x -> Vector %02x IOSAPIC Pin %d\n",
+			       i, irq, iosapic_pin(irq));
 #endif
-	}
+		}
 
 #ifndef CONFIG_IA64_SOFTSDV_HACKS
 	/* 
@@ -251,6 +259,8 @@
 		irq = vectors[i].irq;
 		if (irq < 16)
 			irq = isa_irq_to_vector(irq);
+		if (iosapic_baseirq(irq) != irqbase)
+			continue;
 
 		iosapic_bustype(irq) = BUS_PCI;
 		iosapic_pin(irq) = irq - iosapic_baseirq(irq);
@@ -274,6 +284,9 @@
 #endif /* CONFIG_IA64_SOFTSDV_HACKS */
 
 	for (i = 0; i < NR_IRQS; ++i) {
+		if (iosapic_baseirq(i) != irqbase)
+			continue;
+
 		if (iosapic_pin(i) != -1) {
 			if (iosapic_trigger(i) == IO_SAPIC_LEVEL)
 			  irq_type = &irq_type_iosapic_level;
diff -urN linux-davidm/arch/ia64/ia32/ia32_entry.S linux-2.3.99-pre6-lia/arch/ia64/ia32/ia32_entry.S
--- linux-davidm/arch/ia64/ia32/ia32_entry.S	Fri Apr 21 15:21:23 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/ia32/ia32_entry.S	Thu May 25 22:55:27 2000
@@ -1,14 +1,15 @@
+#include <asm/asmmacro.h>
 #include <asm/offsets.h>
 #include <asm/signal.h>
 
+#include "../kernel/entry.h"
+
 	//
 	// Get possibly unaligned sigmask argument into an aligned
 	//   kernel buffer
 	.text
-	.proc ia32_rt_sigsuspend
-	.global ia32_rt_sigsuspend
-ia32_rt_sigsuspend:
 
+GLOBAL_ENTRY(ia32_rt_sigsuspend)
 	// We'll cheat and not do an alloc here since we are ultimately
 	// going to do a simple branch to the IA64 sys_rt_sigsuspend.
 	// r32 is still the first argument which is the signal mask.
@@ -32,24 +33,22 @@
 	st4 [r32]=r2
 	st4 [r10]=r3
 	br.cond.sptk.many sys_rt_sigsuspend
+END(ia32_rt_sigsuspend)
 
 	.section __ex_table,"a"
 	data4 @gprel(1b)
 	data4 (2b-1b)|1
 	.previous
 
+GLOBAL_ENTRY(ia32_ret_from_syscall)
+	PT_REGS_UNWIND_INFO
 
-	.endp ia32_rt_sigsuspend
-
-	.global ia32_ret_from_syscall
-	.proc ia32_ret_from_syscall
-ia32_ret_from_syscall:
 	cmp.ge p6,p7=r8,r0                      // syscall executed successfully?
 	adds r2=IA64_PT_REGS_R8_OFFSET+16,sp    // r2 = &pt_regs.r8
 	;; 
 	st8 [r2]=r8                             // store return value in slot for r8
 	br.cond.sptk.few ia64_leave_kernel
-	.endp ia32_ret_from_syscall
+END(ia32_ret_from_syscall)
 
 	//
 	// Invoke a system call, but do some tracing before and after the call.
@@ -61,9 +60,8 @@
 	//	r15 = syscall number
 	//	b6  = syscall entry point
 	//
-	.global ia32_trace_syscall
-	.proc ia32_trace_syscall
-ia32_trace_syscall:
+GLOBAL_ENTRY(ia32_trace_syscall)
+	PT_REGS_UNWIND_INFO
 	br.call.sptk.few rp=invoke_syscall_trace // give parent a chance to catch syscall args
 .Lret4:	br.call.sptk.few rp=b6			// do the syscall
 .Lret5:	cmp.lt p6,p0=r8,r0			// syscall failed?
@@ -72,42 +70,38 @@
 	st8.spill [r2]=r8			// store return value in slot for r8
 	br.call.sptk.few rp=invoke_syscall_trace // give parent a chance to catch return value
 .Lret6:	br.cond.sptk.many ia64_leave_kernel	// rp MUST be != ia64_leave_kernel!
+END(ia32_trace_syscall)
 
-	.endp ia32_trace_syscall
-
-	.align 16
-	.global sys32_vfork
-	.proc sys32_vfork
-sys32_vfork:
+GLOBAL_ENTRY(sys32_vfork)
 	alloc r16=ar.pfs,2,2,3,0;;
 	mov out0=IA64_CLONE_VFORK|IA64_CLONE_VM|SIGCHLD	// out0 = clone_flags
 	br.cond.sptk.few .fork1			// do the work
-	.endp sys32_vfork
+END(sys32_vfork)
 
-	.align 16
-	.global sys32_fork
-	.proc sys32_fork
-sys32_fork:
-	alloc r16=ar.pfs,2,2,3,0;;
+GLOBAL_ENTRY(sys32_fork)
+	UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(2))
+	alloc r16=ar.pfs,2,2,3,0
 	mov out0=SIGCHLD			// out0 = clone_flags
+	;;
 .fork1:
-	movl r28=1f
-	mov loc1=rp
-	br.cond.sptk.many save_switch_stack
-1:
-	mov loc0=r16				// save ar.pfs across do_fork
+	mov loc0=rp
+	mov loc1=r16				// save ar.pfs across do_fork
+	DO_SAVE_SWITCH_STACK
+
+	UNW(.body)
+
 	adds out2=IA64_SWITCH_STACK_SIZE+16,sp
 	adds r2=IA64_SWITCH_STACK_SIZE+IA64_PT_REGS_R12_OFFSET+16,sp
 	;;
 	ld8 out1=[r2]				// fetch usp from pt_regs.r12
 	br.call.sptk.few rp=do_fork
 .ret1:
-	mov ar.pfs=loc0
+	mov ar.pfs=loc1
+	UNW(.restore sp)
 	adds sp=IA64_SWITCH_STACK_SIZE,sp	// pop the switch stack
-	mov rp=loc1
-	;;
+	mov rp=loc0
 	br.ret.sptk.many rp
-	.endp sys32_fork
+END(sys32_fork)
 
 	.rodata
 	.align 8
@@ -304,3 +298,8 @@
 	data8 sys_ni_syscall		  /* streams1 */
 	data8 sys_ni_syscall		  /* streams2 */
 	data8 sys32_vfork	  /* 190 */
+	/*
+	 *  CAUTION: If any system calls are added beyond this point
+	 *	then the check in `arch/ia64/kernel/ivt.S' will have
+	 *	to be modified also.  You've been warned.
+	 */
diff -urN linux-davidm/arch/ia64/ia32/ia32_signal.c linux-2.3.99-pre6-lia/arch/ia64/ia32/ia32_signal.c
--- linux-davidm/arch/ia64/ia32/ia32_signal.c	Fri Apr 21 15:21:23 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/ia32/ia32_signal.c	Wed May 10 13:50:50 2000
@@ -226,7 +226,7 @@
 
        /* Set up to return from userspace.  If provided, use a stub
           already in userspace.  */
-       err |= __put_user(frame->retcode, &frame->pretcode);
+       err |= __put_user((long)frame->retcode, &frame->pretcode);
        /* This is popl %eax ; movl $,%eax ; int $0x80 */
        err |= __put_user(0xb858, (short *)(frame->retcode+0));
 #define __IA32_NR_sigreturn            119
@@ -281,8 +281,8 @@
                           ? current->exec_domain->signal_invmap[sig]
                           : sig),
                          &frame->sig);
-       err |= __put_user(&frame->info, &frame->pinfo);
-       err |= __put_user(&frame->uc, &frame->puc);
+       err |= __put_user((long)&frame->info, &frame->pinfo);
+       err |= __put_user((long)&frame->uc, &frame->puc);
        err |= __copy_to_user(&frame->info, info, sizeof(*info));
 
        /* Create the ucontext.  */
@@ -296,7 +296,7 @@
                                regs, set->sig[0]);
        err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
        
-       err |= __put_user(frame->retcode, &frame->pretcode);
+       err |= __put_user((long)frame->retcode, &frame->pretcode);
        /* This is movl $,%eax ; int $0x80 */
        err |= __put_user(0xb8, (char *)(frame->retcode+0));
 #define __IA32_NR_rt_sigreturn         173
diff -urN linux-davidm/arch/ia64/ia32/ia32_support.c linux-2.3.99-pre6-lia/arch/ia64/ia32/ia32_support.c
--- linux-davidm/arch/ia64/ia32/ia32_support.c	Tue Feb  8 12:01:59 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/ia32/ia32_support.c	Mon May 22 18:03:20 2000
@@ -16,6 +16,43 @@
 
 extern void die_if_kernel (char *str, struct pt_regs *regs, long err);
 
+void
+ia32_save_state (struct thread_struct *thread)
+{
+	unsigned long eflag, fsr, fcr, fir, fdr;
+
+	asm ("mov %0=ar.eflag;"
+	     "mov %1=ar.fsr;"
+	     "mov %2=ar.fcr;"
+	     "mov %3=ar.fir;"
+	     "mov %4=ar.fdr"
+	     : "=r"(eflag), "=r"(fsr), "=r"(fcr), "=r"(fir), "=r"(fdr));
+	thread->eflag = eflag;
+	thread->fsr = fsr;
+	thread->fcr = fcr;
+	thread->fir = fir;
+	thread->fdr = fdr;
+}
+
+void
+ia32_load_state (struct thread_struct *thread)
+{
+	unsigned long eflag, fsr, fcr, fir, fdr;
+
+	eflag = thread->eflag;
+	fsr = thread->fsr;
+	fcr = thread->fcr;
+	fir = thread->fir;
+	fdr = thread->fdr;
+
+	asm volatile ("mov ar.eflag=%0;"
+		      "mov ar.fsr=%1;"
+		      "mov ar.fcr=%2;"
+		      "mov ar.fir=%3;"
+		      "mov ar.fdr=%4"
+		      :: "r"(eflag), "r"(fsr), "r"(fcr), "r"(fir), "r"(fdr));
+}
+
 /*
  * Setup IA32 GDT and TSS 
  */
diff -urN linux-davidm/arch/ia64/ia32/sys_ia32.c linux-2.3.99-pre6-lia/arch/ia64/ia32/sys_ia32.c
--- linux-davidm/arch/ia64/ia32/sys_ia32.c	Thu May 25 23:22:10 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/ia32/sys_ia32.c	Thu May 25 22:55:38 2000
@@ -7,6 +7,8 @@
  * Copyright (C) 1999 		Arun Sharma <arun.sharma@intel.com>
  * Copyright (C) 1997,1998 	Jakub Jelinek (jj@sunsite.mff.cuni.cz)
  * Copyright (C) 1997 		David S. Miller (davem@caip.rutgers.edu)
+ * Copyright (C) 2000		Hewlett-Packard Co.
+ * Copyright (C) 2000		David Mosberger-Tang <davidm@hpl.hp.com>
  *
  * These routines maintain argument size conversion between 32bit and 64bit
  * environment.
@@ -55,24 +57,29 @@
 #include <net/sock.h>
 #include <asm/ia32.h>
 
-#define A(__x) ((unsigned long)(__x))
-#define AA(__x) ((unsigned long)(__x))
+#define A(__x)		((unsigned long)(__x))
+#define AA(__x)		((unsigned long)(__x))
+#define ROUND_UP(x,a)	((__typeof__(x))(((unsigned long)(x) + ((a) - 1)) & ~((a) - 1)))
+#define NAME_OFFSET(de) ((int) ((de)->d_name - (char *) (de)))
+
+extern asmlinkage long sys_execve (char *, char **, char **, struct pt_regs *);
+extern asmlinkage long sys_munmap (unsigned long, size_t len);
+extern asmlinkage long sys_mprotect (unsigned long, size_t, unsigned long);
 
 static int
 nargs(unsigned int arg, char **ap)
 {
-	char *ptr;
-	int n, err;
+	int n, err, addr;
 
 	n = 0;
 	do {
-		if (err = get_user(ptr, (int *)arg))
+		if ((err = get_user(addr, (int *)A(arg))) != 0)
 			return(err);
 		if (ap)
-			*ap++ = ptr;
+			*ap++ = (char *)A(addr);
 		arg += sizeof(unsigned int);
 		n++;
-	} while (ptr);
+	} while (addr);
 	return(n - 1);
 }
 
@@ -106,14 +113,14 @@
 	down(&current->mm->mmap_sem);
 	lock_kernel();
 
-	av = do_mmap_pgoff(0, NULL, len,
-		PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, 0);
+	av = (char **) do_mmap_pgoff(0, 0UL, len, PROT_READ | PROT_WRITE,
+				     MAP_PRIVATE | MAP_ANONYMOUS, 0);
 
 	unlock_kernel();
 	up(&current->mm->mmap_sem);
 
 	if (IS_ERR(av))
-		return(av);
+		return (long)av;
 	ae = av + na + 1;
 	av[na] = (char *)0;
 	ae[ne] = (char *)0;
@@ -121,7 +128,7 @@
 	(void)nargs(envp, ae);
 	r = sys_execve(filename, av, ae, regs);
 	if (IS_ERR(r))
-		sys_munmap(av, len);
+		sys_munmap((unsigned long) av, len);
 	return(r);
 }
 
@@ -146,9 +153,9 @@
 	return err;
 }
 
-extern asmlinkage int sys_newstat(char * filename, struct stat * statbuf);
+extern asmlinkage long sys_newstat(char * filename, struct stat * statbuf);
 
-asmlinkage int
+asmlinkage long
 sys32_newstat(char * filename, struct stat32 *statbuf)
 {
 	int ret;
@@ -163,9 +170,9 @@
 	return ret;
 }
 
-extern asmlinkage int sys_newlstat(char * filename, struct stat * statbuf);
+extern asmlinkage long sys_newlstat(char * filename, struct stat * statbuf);
 
-asmlinkage int
+asmlinkage long
 sys32_newlstat(char * filename, struct stat32 *statbuf)
 {
 	int ret;
@@ -180,9 +187,9 @@
 	return ret;
 }
 
-extern asmlinkage int sys_newfstat(unsigned int fd, struct stat * statbuf);
+extern asmlinkage long sys_newfstat(unsigned int fd, struct stat * statbuf);
 
-asmlinkage int
+asmlinkage long
 sys32_newfstat(unsigned int fd, struct stat32 *statbuf)
 {
 	int ret;
@@ -223,14 +230,14 @@
 	}
 	if (addr && ((addr + len) & ~PAGE_MASK) && get_user(c, (char *)(addr + len)) == 0) {
 		back = kmalloc(PAGE_SIZE - ((addr + len) & ~PAGE_MASK), GFP_KERNEL);
-		memcpy(back, addr + len, PAGE_SIZE - ((addr + len) & ~PAGE_MASK));
+		memcpy(back, (char *)addr + len, PAGE_SIZE - ((addr + len) & ~PAGE_MASK));
 	}
 	if ((r = do_mmap(0, baddr, len + (addr - baddr), prot, flags | MAP_ANONYMOUS, 0)) < 0)
 		return(r);
 	if (addr == 0)
 		addr = r;
 	if (back) {
-		memcpy(addr + len, back, PAGE_SIZE - ((addr + len) & ~PAGE_MASK));
+		memcpy((char *)addr + len, back, PAGE_SIZE - ((addr + len) & ~PAGE_MASK));
 		kfree(back);
 	}
 	if (front) {
@@ -238,7 +245,7 @@
 		kfree(front);
 	}
 	if (flags & MAP_ANONYMOUS) {
-		memset(addr, 0, len);
+		memset((char *)addr, 0, len);
 		return(addr);
 	}
 	if (!file)
@@ -272,7 +279,7 @@
 	unsigned int offset;
 };
 
-asmlinkage int
+asmlinkage long
 sys32_mmap(struct mmap_arg_struct *arg)
 {
 	int error = -EFAULT;
@@ -337,7 +344,7 @@
 	return(sys_mprotect(start & PAGE_MASK, len & PAGE_MASK, prot));
 }
 
-asmlinkage int
+asmlinkage long
 sys32_rt_sigaction(int sig, struct sigaction32 *act,
 		   struct sigaction32 *oact,  unsigned int sigsetsize)
 {
@@ -396,10 +403,10 @@
 }
 
 
-extern asmlinkage int sys_rt_sigprocmask(int how, sigset_t *set, sigset_t *oset,
-					 size_t sigsetsize);
+extern asmlinkage long sys_rt_sigprocmask(int how, sigset_t *set, sigset_t *oset,
+					  size_t sigsetsize);
 
-asmlinkage int
+asmlinkage long
 sys32_rt_sigprocmask(int how, sigset32_t *set, sigset32_t *oset,
 		     unsigned int sigsetsize)
 {
@@ -454,9 +461,9 @@
 	return err;
 }
 
-extern asmlinkage int sys_statfs(const char * path, struct statfs * buf);
+extern asmlinkage long sys_statfs(const char * path, struct statfs * buf);
 
-asmlinkage int
+asmlinkage long
 sys32_statfs(const char * path, struct statfs32 *buf)
 {
 	int ret;
@@ -471,9 +478,9 @@
 	return ret;
 }
 
-extern asmlinkage int sys_fstatfs(unsigned int fd, struct statfs * buf);
+extern asmlinkage long sys_fstatfs(unsigned int fd, struct statfs * buf);
 
-asmlinkage int
+asmlinkage long
 sys32_fstatfs(unsigned int fd, struct statfs32 *buf)
 {
 	int ret;
@@ -540,7 +547,7 @@
 
 extern int do_getitimer(int which, struct itimerval *value);
 
-asmlinkage int
+asmlinkage long
 sys32_getitimer(int which, struct itimerval32 *it)
 {
 	struct itimerval kit;
@@ -555,7 +562,7 @@
 
 extern int do_setitimer(int which, struct itimerval *, struct itimerval *);
 
-asmlinkage int
+asmlinkage long
 sys32_setitimer(int which, struct itimerval32 *in, struct itimerval32 *out)
 {
 	struct itimerval kin, kout;
@@ -600,7 +607,7 @@
 extern struct timezone sys_tz;
 extern int do_sys_settimeofday(struct timeval *tv, struct timezone *tz);
 
-asmlinkage int
+asmlinkage long
 sys32_gettimeofday(struct timeval32 *tv, struct timezone *tz)
 {
 	if (tv) {
@@ -616,7 +623,7 @@
 	return 0;
 }
 
-asmlinkage int
+asmlinkage long
 sys32_settimeofday(struct timeval32 *tv, struct timezone *tz)
 {
 	struct timeval ktv;
@@ -634,56 +641,135 @@
 	return do_sys_settimeofday(tv ? &ktv : NULL, tz ? &ktz : NULL);
 }
 
-struct dirent32 {
-	unsigned int	d_ino;
-	unsigned int	d_off;
-	unsigned short	d_reclen;
-	char		d_name[NAME_MAX + 1];
+struct linux32_dirent {
+	u32	d_ino;
+	u32	d_off;
+	u16	d_reclen;
+	char	d_name[1];
 };
 
-static void
-xlate_dirent(void *dirent64, void *dirent32, long n)
+struct old_linux32_dirent {
+	u32	d_ino;
+	u32	d_offset;
+	u16	d_namlen;
+	char	d_name[1];
+};
+
+struct getdents32_callback {
+	struct linux32_dirent * current_dir;
+	struct linux32_dirent * previous;
+	int count;
+	int error;
+};
+
+struct readdir32_callback {
+	struct old_linux32_dirent * dirent;
+	int count;
+};
+
+static int
+filldir32 (void *__buf, const char *name, int namlen, off_t offset, ino_t ino)
 {
-	long off;
-	struct dirent *dirp;
-	struct dirent32 *dirp32;
-
-	off = 0;
-	while (off < n) {
-		dirp = (struct dirent *)(dirent64 + off);
-		dirp32 = (struct dirent32 *)(dirent32 + off);
-		off += dirp->d_reclen;
-		dirp32->d_ino = dirp->d_ino;
-		dirp32->d_off = (unsigned int)dirp->d_off;
-		dirp32->d_reclen = dirp->d_reclen;
-		strncpy(dirp32->d_name, dirp->d_name, dirp->d_reclen - ((3 * 4) + 2));
-	}
-	return;
+	struct linux32_dirent * dirent;
+	struct getdents32_callback * buf = (struct getdents32_callback *) __buf;
+	int reclen = ROUND_UP(NAME_OFFSET(dirent) + namlen + 1, 4);
+
+	buf->error = -EINVAL;	/* only used if we fail.. */
+	if (reclen > buf->count)
+		return -EINVAL;
+	dirent = buf->previous;
+	if (dirent)
+		put_user(offset, &dirent->d_off);
+	dirent = buf->current_dir;
+	buf->previous = dirent;
+	put_user(ino, &dirent->d_ino);
+	put_user(reclen, &dirent->d_reclen);
+	copy_to_user(dirent->d_name, name, namlen);
+	put_user(0, dirent->d_name + namlen);
+	((char *) dirent) += reclen;
+	buf->current_dir = dirent;
+	buf->count -= reclen;
+	return 0;
 }
 
 asmlinkage long
-sys32_getdents(unsigned int fd, void * dirent32, unsigned int count)
+sys32_getdents (unsigned int fd, void * dirent, unsigned int count)
 {
-	long n;
-	void *dirent64;
+	struct file * file;
+	struct linux32_dirent * lastdirent;
+	struct getdents32_callback buf;
+	int error;
 
-	dirent64 = (unsigned long)(dirent32 + (sizeof(long) - 1)) & ~(sizeof(long) - 1);
-	if ((n = sys_getdents(fd, dirent64, count - (dirent64 - dirent32))) < 0)
-		return(n);
-	xlate_dirent(dirent64, dirent32, n);
-	return(n);
+	error = -EBADF;
+	file = fget(fd);
+	if (!file)
+		goto out;
+
+	buf.current_dir = (struct linux32_dirent *) dirent;
+	buf.previous = NULL;
+	buf.count = count;
+	buf.error = 0;
+
+	lock_kernel();
+	error = vfs_readdir(file, filldir32, &buf);
+	if (error < 0)
+		goto out_putf;
+	error = buf.error;
+	lastdirent = buf.previous;
+	if (lastdirent) {
+		put_user(file->f_pos, &lastdirent->d_off);
+		error = count - buf.count;
+	}
+
+out_putf:
+	unlock_kernel();
+	fput(file);
+out:
+	return error;
 }
 
-asmlinkage int
-sys32_readdir(unsigned int fd, void * dirent32, unsigned int count)
+static int
+fillonedir32 (void * __buf, const char * name, int namlen, off_t offset, ino_t ino)
 {
-	int n;
-	struct dirent dirent64;
+	struct readdir32_callback * buf = (struct readdir32_callback *) __buf;
+	struct old_linux32_dirent * dirent;
 
-	if ((n = old_readdir(fd, &dirent64, count)) < 0)
-		return(n);
-	xlate_dirent(&dirent64, dirent32, dirent64.d_reclen);
-	return(n);
+	if (buf->count)
+		return -EINVAL;
+	buf->count++;
+	dirent = buf->dirent;
+	put_user(ino, &dirent->d_ino);
+	put_user(offset, &dirent->d_offset);
+	put_user(namlen, &dirent->d_namlen);
+	copy_to_user(dirent->d_name, name, namlen);
+	put_user(0, dirent->d_name + namlen);
+	return 0;
+}
+
+asmlinkage long
+sys32_readdir (unsigned int fd, void * dirent, unsigned int count)
+{
+	int error;
+	struct file * file;
+	struct readdir32_callback buf;
+
+	error = -EBADF;
+	file = fget(fd);
+	if (!file)
+		goto out;
+
+	buf.count = 0;
+	buf.dirent = dirent;
+
+	lock_kernel();
+	error = vfs_readdir(file, fillonedir32, &buf);
+	if (error >= 0)
+		error = buf.count;
+	unlock_kernel();
+
+	fput(file);
+out:
+	return error;
 }
 
 /*
@@ -696,9 +782,9 @@
  */
 #define MAX_SELECT_SECONDS \
 	((unsigned long) (MAX_SCHEDULE_TIMEOUT / HZ)-1)
-#define ROUND_UP(x,y) (((x)+(y)-1)/(y))
+#define ROUND_UP_TIME(x,y) (((x)+(y)-1)/(y))
 
-asmlinkage int
+asmlinkage long
 sys32_select(int n, fd_set *inp, fd_set *outp, fd_set *exp, struct timeval32 *tvp32)
 {
 	fd_set_bits fds;
@@ -718,7 +804,7 @@
 			goto out_nofds;
 
 		if ((unsigned long) sec < MAX_SELECT_SECONDS) {
-			timeout = ROUND_UP(usec, 1000000/HZ);
+			timeout = ROUND_UP_TIME(usec, 1000000/HZ);
 			timeout += sec * (unsigned long) HZ;
 		}
 	}
@@ -795,13 +881,15 @@
 	unsigned int tvp;
 };
 
-asmlinkage int old_select(struct sel_arg_struct *arg)
+asmlinkage long
+old_select(struct sel_arg_struct *arg)
 {
 	struct sel_arg_struct a;
 
 	if (copy_from_user(&a, arg, sizeof(a)))
 		return -EFAULT;
-	return sys32_select(a.n, a.inp, a.outp, a.exp, a.tvp);
+	return sys32_select(a.n, (fd_set *)A(a.inp), (fd_set *)A(a.outp), (fd_set *)A(a.exp),
+			    (struct timeval32 *)A(a.tvp));
 }
 
 struct timespec32 {
@@ -809,10 +897,9 @@
 	int	tv_nsec;
 };
 
-extern asmlinkage int sys_nanosleep(struct timespec *rqtp,
-				    struct timespec *rmtp); 
+extern asmlinkage long sys_nanosleep(struct timespec *rqtp, struct timespec *rmtp); 
 
-asmlinkage int
+asmlinkage long
 sys32_nanosleep(struct timespec32 *rqtp, struct timespec32 *rmtp)
 {
 	struct timespec t;
@@ -993,9 +1080,9 @@
 	int	rlim_max;
 };
 
-extern asmlinkage int sys_getrlimit(unsigned int resource, struct rlimit *rlim);
+extern asmlinkage long sys_getrlimit(unsigned int resource, struct rlimit *rlim);
 
-asmlinkage int
+asmlinkage long
 sys32_getrlimit(unsigned int resource, struct rlimit32 *rlim)
 {
 	struct rlimit r;
@@ -1012,9 +1099,9 @@
 	return ret;
 }
 
-extern asmlinkage int sys_setrlimit(unsigned int resource, struct rlimit *rlim);
+extern asmlinkage long sys_setrlimit(unsigned int resource, struct rlimit *rlim);
 
-asmlinkage int
+asmlinkage long
 sys32_setrlimit(unsigned int resource, struct rlimit32 *rlim)
 {
 	struct rlimit r;
@@ -1035,118 +1122,6 @@
 	return ret;
 }
 
-/* Argument list sizes for sys_socketcall */
-#define AL(x) ((x) * sizeof(u32))
-static unsigned char nas[18]={AL(0),AL(3),AL(3),AL(3),AL(2),AL(3),
-                                AL(3),AL(3),AL(4),AL(4),AL(4),AL(6),
-                                AL(6),AL(2),AL(5),AL(5),AL(3),AL(3)};
-#undef AL
-
-extern asmlinkage int sys_bind(int fd, struct sockaddr *umyaddr, int addrlen);
-extern asmlinkage int sys_connect(int fd, struct sockaddr *uservaddr,
-				  int addrlen);
-extern asmlinkage int sys_accept(int fd, struct sockaddr *upeer_sockaddr,
-				 int *upeer_addrlen); 
-extern asmlinkage int sys_getsockname(int fd, struct sockaddr *usockaddr,
-				      int *usockaddr_len);
-extern asmlinkage int sys_getpeername(int fd, struct sockaddr *usockaddr,
-				      int *usockaddr_len);
-extern asmlinkage int sys_send(int fd, void *buff, size_t len, unsigned flags);
-extern asmlinkage int sys_sendto(int fd, u32 buff, __kernel_size_t32 len,
-				   unsigned flags, u32 addr, int addr_len);
-extern asmlinkage int sys_recv(int fd, void *ubuf, size_t size, unsigned flags);
-extern asmlinkage int sys_recvfrom(int fd, u32 ubuf, __kernel_size_t32 size,
-				     unsigned flags, u32 addr, u32 addr_len);
-extern asmlinkage int sys_setsockopt(int fd, int level, int optname,
-				     char *optval, int optlen);
-extern asmlinkage int sys_getsockopt(int fd, int level, int optname,
-				       u32 optval, u32 optlen);
-
-extern asmlinkage int sys_socket(int family, int type, int protocol);
-extern asmlinkage int sys_socketpair(int family, int type, int protocol,
-				     int usockvec[2]);
-extern asmlinkage int sys_shutdown(int fd, int how);
-extern asmlinkage int sys_listen(int fd, int backlog);
-
-asmlinkage int sys32_socketcall(int call, u32 *args)
-{
-	int i, ret;
-	u32 a[6];
-	u32 a0,a1;
-				 
-	if (call<SYS_SOCKET||call>SYS_RECVMSG)
-		return -EINVAL;
-	if (copy_from_user(a, args, nas[call]))
-		return -EFAULT;
-	a0=a[0];
-	a1=a[1];
-	
-	switch(call) 
-	{
-		case SYS_SOCKET:
-			ret = sys_socket(a0, a1, a[2]);
-			break;
-		case SYS_BIND:
-			ret = sys_bind(a0, (struct sockaddr *)A(a1), a[2]);
-			break;
-		case SYS_CONNECT:
-			ret = sys_connect(a0, (struct sockaddr *)A(a1), a[2]);
-			break;
-		case SYS_LISTEN:
-			ret = sys_listen(a0, a1);
-			break;
-		case SYS_ACCEPT:
-			ret = sys_accept(a0, (struct sockaddr *)A(a1),
-					  (int *)A(a[2]));
-			break;
-		case SYS_GETSOCKNAME:
-			ret = sys_getsockname(a0, (struct sockaddr *)A(a1),
-					       (int *)A(a[2]));
-			break;
-		case SYS_GETPEERNAME:
-			ret = sys_getpeername(a0, (struct sockaddr *)A(a1),
-					       (int *)A(a[2]));
-			break;
-		case SYS_SOCKETPAIR:
-			ret = sys_socketpair(a0, a1, a[2], (int *)A(a[3]));
-			break;
-		case SYS_SEND:
-			ret = sys_send(a0, (void *)A(a1), a[2], a[3]);
-			break;
-		case SYS_SENDTO:
-			ret = sys_sendto(a0, a1, a[2], a[3], a[4], a[5]);
-			break;
-		case SYS_RECV:
-			ret = sys_recv(a0, (void *)A(a1), a[2], a[3]);
-			break;
-		case SYS_RECVFROM:
-			ret = sys_recvfrom(a0, a1, a[2], a[3], a[4], a[5]);
-			break;
-		case SYS_SHUTDOWN:
-			ret = sys_shutdown(a0,a1);
-			break;
-		case SYS_SETSOCKOPT:
-			ret = sys_setsockopt(a0, a1, a[2], (char *)A(a[3]),
-					      a[4]);
-			break;
-		case SYS_GETSOCKOPT:
-			ret = sys_getsockopt(a0, a1, a[2], a[3], a[4]);
-			break;
-		case SYS_SENDMSG:
-			ret = sys32_sendmsg(a0, (struct msghdr32 *)A(a1),
-					     a[2]);
-			break;
-		case SYS_RECVMSG:
-			ret = sys32_recvmsg(a0, (struct msghdr32 *)A(a1),
-					     a[2]);
-			break;
-		default:
-			ret = EINVAL;
-			break;
-	}
-	return ret;
-}
-
 /*
  *  Declare the IA32 version of the msghdr
  */
@@ -1169,13 +1144,13 @@
 	if (!access_ok(VERIFY_READ, mp32, sizeof(*mp32)))
 		return(-EFAULT);
 	__get_user(i, &mp32->msg_name);
-	mp->msg_name = (void *)i;
+	mp->msg_name = (void *)A(i);
 	__get_user(mp->msg_namelen, &mp32->msg_namelen);
 	__get_user(i, &mp32->msg_iov);
-	mp->msg_iov = (struct iov *)i;
+	mp->msg_iov = (struct iovec *)A(i);
 	__get_user(mp->msg_iovlen, &mp32->msg_iovlen);
 	__get_user(i, &mp32->msg_control);
-	mp->msg_control = (void *)i;
+	mp->msg_control = (void *)A(i);
 	__get_user(mp->msg_controllen, &mp32->msg_controllen);
 	__get_user(mp->msg_flags, &mp32->msg_flags);
 	return(0);
@@ -1221,7 +1196,7 @@
 	iov32 = (struct iovec32 *)iov;
 	for (ct = m->msg_iovlen; ct-- > 0; ) {
 		iov[ct].iov_len = (__kernel_size_t)iov32[ct].iov_len;
-		iov[ct].iov_base = (void *)iov32[ct].iov_base;
+		iov[ct].iov_base = (void *) A(iov32[ct].iov_base);
 		err += iov[ct].iov_len;
 	}
 out:
@@ -1246,7 +1221,7 @@
  *	BSD sendmsg interface
  */
 
-asmlinkage int sys32_sendmsg(int fd, struct msghdr32 *msg, unsigned flags)
+int sys32_sendmsg(int fd, struct msghdr32 *msg, unsigned flags)
 {
 	struct socket *sock;
 	char address[MAX_SOCK_ADDR];
@@ -1325,7 +1300,8 @@
  *	BSD recvmsg interface
  */
 
-asmlinkage int sys32_recvmsg(int fd, struct msghdr32 *msg, unsigned int flags)
+int
+sys32_recvmsg (int fd, struct msghdr32 *msg, unsigned int flags)
 {
 	struct socket *sock;
 	struct iovec iovstack[UIO_FASTIOV];
@@ -1407,6 +1383,118 @@
 	return err;
 }
 
+/* Argument list sizes for sys_socketcall */
+#define AL(x) ((x) * sizeof(u32))
+static unsigned char nas[18]={AL(0),AL(3),AL(3),AL(3),AL(2),AL(3),
+                                AL(3),AL(3),AL(4),AL(4),AL(4),AL(6),
+                                AL(6),AL(2),AL(5),AL(5),AL(3),AL(3)};
+#undef AL
+
+extern asmlinkage long sys_bind(int fd, struct sockaddr *umyaddr, int addrlen);
+extern asmlinkage long sys_connect(int fd, struct sockaddr *uservaddr,
+				  int addrlen);
+extern asmlinkage long sys_accept(int fd, struct sockaddr *upeer_sockaddr,
+				 int *upeer_addrlen); 
+extern asmlinkage long sys_getsockname(int fd, struct sockaddr *usockaddr,
+				      int *usockaddr_len);
+extern asmlinkage long sys_getpeername(int fd, struct sockaddr *usockaddr,
+				      int *usockaddr_len);
+extern asmlinkage long sys_send(int fd, void *buff, size_t len, unsigned flags);
+extern asmlinkage long sys_sendto(int fd, u32 buff, __kernel_size_t32 len,
+				   unsigned flags, u32 addr, int addr_len);
+extern asmlinkage long sys_recv(int fd, void *ubuf, size_t size, unsigned flags);
+extern asmlinkage long sys_recvfrom(int fd, u32 ubuf, __kernel_size_t32 size,
+				     unsigned flags, u32 addr, u32 addr_len);
+extern asmlinkage long sys_setsockopt(int fd, int level, int optname,
+				     char *optval, int optlen);
+extern asmlinkage long sys_getsockopt(int fd, int level, int optname,
+				       u32 optval, u32 optlen);
+
+extern asmlinkage long sys_socket(int family, int type, int protocol);
+extern asmlinkage long sys_socketpair(int family, int type, int protocol,
+				     int usockvec[2]);
+extern asmlinkage long sys_shutdown(int fd, int how);
+extern asmlinkage long sys_listen(int fd, int backlog);
+
+asmlinkage long sys32_socketcall(int call, u32 *args)
+{
+	int ret;
+	u32 a[6];
+	u32 a0,a1;
+				 
+	if (call<SYS_SOCKET||call>SYS_RECVMSG)
+		return -EINVAL;
+	if (copy_from_user(a, args, nas[call]))
+		return -EFAULT;
+	a0=a[0];
+	a1=a[1];
+	
+	switch(call) 
+	{
+		case SYS_SOCKET:
+			ret = sys_socket(a0, a1, a[2]);
+			break;
+		case SYS_BIND:
+			ret = sys_bind(a0, (struct sockaddr *)A(a1), a[2]);
+			break;
+		case SYS_CONNECT:
+			ret = sys_connect(a0, (struct sockaddr *)A(a1), a[2]);
+			break;
+		case SYS_LISTEN:
+			ret = sys_listen(a0, a1);
+			break;
+		case SYS_ACCEPT:
+			ret = sys_accept(a0, (struct sockaddr *)A(a1),
+					  (int *)A(a[2]));
+			break;
+		case SYS_GETSOCKNAME:
+			ret = sys_getsockname(a0, (struct sockaddr *)A(a1),
+					       (int *)A(a[2]));
+			break;
+		case SYS_GETPEERNAME:
+			ret = sys_getpeername(a0, (struct sockaddr *)A(a1),
+					       (int *)A(a[2]));
+			break;
+		case SYS_SOCKETPAIR:
+			ret = sys_socketpair(a0, a1, a[2], (int *)A(a[3]));
+			break;
+		case SYS_SEND:
+			ret = sys_send(a0, (void *)A(a1), a[2], a[3]);
+			break;
+		case SYS_SENDTO:
+			ret = sys_sendto(a0, a1, a[2], a[3], a[4], a[5]);
+			break;
+		case SYS_RECV:
+			ret = sys_recv(a0, (void *)A(a1), a[2], a[3]);
+			break;
+		case SYS_RECVFROM:
+			ret = sys_recvfrom(a0, a1, a[2], a[3], a[4], a[5]);
+			break;
+		case SYS_SHUTDOWN:
+			ret = sys_shutdown(a0,a1);
+			break;
+		case SYS_SETSOCKOPT:
+			ret = sys_setsockopt(a0, a1, a[2], (char *)A(a[3]),
+					      a[4]);
+			break;
+		case SYS_GETSOCKOPT:
+			ret = sys_getsockopt(a0, a1, a[2], a[3], a[4]);
+			break;
+		case SYS_SENDMSG:
+			ret = sys32_sendmsg(a0, (struct msghdr32 *)A(a1),
+					     a[2]);
+			break;
+		case SYS_RECVMSG:
+			ret = sys32_recvmsg(a0, (struct msghdr32 *)A(a1),
+					     a[2]);
+			break;
+		default:
+			ret = EINVAL;
+			break;
+	}
+	return ret;
+}
+
 /*
  * sys32_ipc() is the de-multiplexer for the SysV IPC calls in 32bit emulation..
  *
@@ -1601,7 +1689,7 @@
 static int
 do_sys32_msgctl (int first, int second, void *uptr)
 {
-	int err, err2;
+	int err = -EINVAL, err2;
 	struct msqid_ds m;
 	struct msqid64_ds m64;
 	struct msqid_ds32 *up = (struct msqid_ds32 *)uptr;
@@ -1632,7 +1720,7 @@
 	case MSG_STAT:
 		old_fs = get_fs ();
 		set_fs (KERNEL_DS);
-		err = sys_msgctl (first, second, &m64);
+		err = sys_msgctl (first, second, (void *) &m64);
 		set_fs (old_fs);
 		err2 = put_user (m64.msg_perm.key, &up->msg_perm.key);
 		err2 |= __put_user(m64.msg_perm.uid, &up->msg_perm.uid);
@@ -1713,7 +1801,7 @@
 	case SHM_STAT:
 		old_fs = get_fs ();
 		set_fs (KERNEL_DS);
-		err = sys_shmctl (first, second, &s64);
+		err = sys_shmctl (first, second, (void *) &s64);
 		set_fs (old_fs);
 		if (err < 0)
 			break;
@@ -1741,7 +1829,7 @@
 	case SHM_INFO:
 		old_fs = get_fs ();
 		set_fs (KERNEL_DS);
-		err = sys_shmctl (first, second, &si);
+		err = sys_shmctl (first, second, (void *)&si);
 		set_fs (old_fs);
 		if (err < 0)
 			break;
@@ -1761,7 +1849,7 @@
 	return err;
 }
 
-asmlinkage int
+asmlinkage long
 sys32_ipc (u32 call, int first, int second, int third, u32 ptr, u32 fifth)
 {
 	int version, err;
@@ -1886,10 +1974,10 @@
 	return err;
 }
 
-extern asmlinkage int sys_wait4(pid_t pid,unsigned int * stat_addr,
+extern asmlinkage long sys_wait4(pid_t pid,unsigned int * stat_addr,
 				int options, struct rusage * ru);
 
-asmlinkage int
+asmlinkage long
 sys32_wait4(__kernel_pid_t32 pid, unsigned int *stat_addr, int options,
 	    struct rusage32 *ru)
 {
@@ -1911,17 +1999,17 @@
 	}
 }
 
-asmlinkage int
+asmlinkage long
 sys32_waitpid(__kernel_pid_t32 pid, unsigned int *stat_addr, int options)
 {
 	return sys32_wait4(pid, stat_addr, options, NULL);
 }
 
 
-extern asmlinkage int
+extern asmlinkage long
 sys_getrusage(int who, struct rusage *ru);
 
-asmlinkage int
+asmlinkage long
 sys32_getrusage(int who, struct rusage32 *ru)
 {
 	struct rusage r;
@@ -2417,9 +2505,9 @@
 
 /* 32-bit timeval and related flotsam.  */
 
-extern asmlinkage int sys_ioperm(unsigned long from, unsigned long num, int on);
+extern asmlinkage long sys_ioperm(unsigned long from, unsigned long num, int on);
 
-asmlinkage int
+asmlinkage long
 sys32_ioperm(u32 from, u32 num, int on)
 {
 	return sys_ioperm((unsigned long)from, (unsigned long)num, on);
@@ -2491,10 +2579,10 @@
     __kernel_time_t32 dqb_itime;
 };
                                 
-extern asmlinkage int sys_quotactl(int cmd, const char *special, int id,
+extern asmlinkage long sys_quotactl(int cmd, const char *special, int id,
 				   caddr_t addr);
 
-asmlinkage int
+asmlinkage long
 sys32_quotactl(int cmd, const char *special, int id, unsigned long addr)
 {
 	int cmds = cmd >> SUBCMDSHIFT;
@@ -2538,13 +2626,13 @@
 	return err;
 }
 
-extern asmlinkage int sys_utime(char * filename, struct utimbuf * times);
+extern asmlinkage long sys_utime(char * filename, struct utimbuf * times);
 
 struct utimbuf32 {
 	__kernel_time_t32 actime, modtime;
 };
 
-asmlinkage int
+asmlinkage long
 sys32_utime(char * filename, struct utimbuf32 *times)
 {
 	struct utimbuf t;
@@ -2628,10 +2716,10 @@
 		__put_user(*fdset, ufdset);
 }
 
-extern asmlinkage int sys_sysfs(int option, unsigned long arg1,
+extern asmlinkage long sys_sysfs(int option, unsigned long arg1,
 				unsigned long arg2);
 
-asmlinkage int
+asmlinkage long
 sys32_sysfs(int option, u32 arg1, u32 arg2)
 {
 	return sys_sysfs(option, arg1, arg2);
@@ -2729,7 +2817,7 @@
 #define SMBFS_NAME	"smbfs"
 #define NCPFS_NAME	"ncpfs"
 
-asmlinkage int
+asmlinkage long
 sys32_mount(char *dev_name, char *dir_name, char *type,
 	    unsigned long new_flags, u32 data)
 {
@@ -2803,9 +2891,9 @@
         char _f[22];
 };
 
-extern asmlinkage int sys_sysinfo(struct sysinfo *info);
+extern asmlinkage long sys_sysinfo(struct sysinfo *info);
 
-asmlinkage int
+asmlinkage long
 sys32_sysinfo(struct sysinfo32 *info)
 {
 	struct sysinfo s;
@@ -2831,10 +2919,10 @@
 	return ret;
 }
                 
-extern asmlinkage int sys_sched_rr_get_interval(pid_t pid,
+extern asmlinkage long sys_sched_rr_get_interval(pid_t pid,
 						struct timespec *interval);
 
-asmlinkage int
+asmlinkage long
 sys32_sched_rr_get_interval(__kernel_pid_t32 pid, struct timespec32 *interval)
 {
 	struct timespec t;
@@ -2850,10 +2938,10 @@
 	return ret;
 }
 
-extern asmlinkage int sys_sigprocmask(int how, old_sigset_t *set,
+extern asmlinkage long sys_sigprocmask(int how, old_sigset_t *set,
 				      old_sigset_t *oset);
 
-asmlinkage int
+asmlinkage long
 sys32_sigprocmask(int how, old_sigset_t32 *set, old_sigset_t32 *oset)
 {
 	old_sigset_t s;
@@ -2869,9 +2957,9 @@
 	return 0;
 }
 
-extern asmlinkage int sys_sigpending(old_sigset_t *set);
+extern asmlinkage long sys_sigpending(old_sigset_t *set);
 
-asmlinkage int
+asmlinkage long
 sys32_sigpending(old_sigset_t32 *set)
 {
 	old_sigset_t s;
@@ -2885,9 +2973,9 @@
 	return ret;
 }
 
-extern asmlinkage int sys_rt_sigpending(sigset_t *set, size_t sigsetsize);
+extern asmlinkage long sys_rt_sigpending(sigset_t *set, size_t sigsetsize);
 
-asmlinkage int
+asmlinkage long
 sys32_rt_sigpending(sigset_t32 *set, __kernel_size_t32 sigsetsize)
 {
 	sigset_t s;
@@ -2990,11 +3078,11 @@
 	return d;
 }
 
-extern asmlinkage int
+extern asmlinkage long
 sys_rt_sigtimedwait(const sigset_t *uthese, siginfo_t *uinfo,
 		    const struct timespec *uts, size_t sigsetsize);
 
-asmlinkage int
+asmlinkage long
 sys32_rt_sigtimedwait(sigset_t32 *uthese, siginfo_t32 *uinfo,
 		      struct timespec32 *uts, __kernel_size_t32 sigsetsize)
 {
@@ -3031,10 +3119,10 @@
 	return ret;
 }
 
-extern asmlinkage int
+extern asmlinkage long
 sys_rt_sigqueueinfo(int pid, int sig, siginfo_t *uinfo);
 
-asmlinkage int
+asmlinkage long
 sys32_rt_sigqueueinfo(int pid, int sig, siginfo_t32 *uinfo)
 {
 	siginfo_t info;
@@ -3052,9 +3140,9 @@
 	return ret;
 }
 
-extern asmlinkage int sys_setreuid(uid_t ruid, uid_t euid);
+extern asmlinkage long sys_setreuid(uid_t ruid, uid_t euid);
 
-asmlinkage int sys32_setreuid(__kernel_uid_t32 ruid, __kernel_uid_t32 euid)
+asmlinkage long sys32_setreuid(__kernel_uid_t32 ruid, __kernel_uid_t32 euid)
 {
 	uid_t sruid, seuid;
 
@@ -3063,9 +3151,9 @@
 	return sys_setreuid(sruid, seuid);
 }
 
-extern asmlinkage int sys_setresuid(uid_t ruid, uid_t euid, uid_t suid);
+extern asmlinkage long sys_setresuid(uid_t ruid, uid_t euid, uid_t suid);
 
-asmlinkage int
+asmlinkage long
 sys32_setresuid(__kernel_uid_t32 ruid, __kernel_uid_t32 euid,
 		__kernel_uid_t32 suid)
 {
@@ -3077,9 +3165,9 @@
 	return sys_setresuid(sruid, seuid, ssuid);
 }
 
-extern asmlinkage int sys_getresuid(uid_t *ruid, uid_t *euid, uid_t *suid);
+extern asmlinkage long sys_getresuid(uid_t *ruid, uid_t *euid, uid_t *suid);
 
-asmlinkage int
+asmlinkage long
 sys32_getresuid(__kernel_uid_t32 *ruid, __kernel_uid_t32 *euid,
 		__kernel_uid_t32 *suid)
 {
@@ -3095,9 +3183,9 @@
 	return ret;
 }
 
-extern asmlinkage int sys_setregid(gid_t rgid, gid_t egid);
+extern asmlinkage long sys_setregid(gid_t rgid, gid_t egid);
 
-asmlinkage int
+asmlinkage long
 sys32_setregid(__kernel_gid_t32 rgid, __kernel_gid_t32 egid)
 {
 	gid_t srgid, segid;
@@ -3107,9 +3195,9 @@
 	return sys_setregid(srgid, segid);
 }
 
-extern asmlinkage int sys_setresgid(gid_t rgid, gid_t egid, gid_t sgid);
+extern asmlinkage long sys_setresgid(gid_t rgid, gid_t egid, gid_t sgid);
 
-asmlinkage int
+asmlinkage long
 sys32_setresgid(__kernel_gid_t32 rgid, __kernel_gid_t32 egid,
 		__kernel_gid_t32 sgid)
 {
@@ -3121,9 +3209,9 @@
 	return sys_setresgid(srgid, segid, ssgid);
 }
 
-extern asmlinkage int sys_getresgid(gid_t *rgid, gid_t *egid, gid_t *sgid);
+extern asmlinkage long sys_getresgid(gid_t *rgid, gid_t *egid, gid_t *sgid);
 
-asmlinkage int
+asmlinkage long
 sys32_getresgid(__kernel_gid_t32 *rgid, __kernel_gid_t32 *egid,
 		__kernel_gid_t32 *sgid) 
 {
@@ -3142,9 +3230,9 @@
 	return ret;
 }
 
-extern asmlinkage int sys_getgroups(int gidsetsize, gid_t *grouplist);
+extern asmlinkage long sys_getgroups(int gidsetsize, gid_t *grouplist);
 
-asmlinkage int
+asmlinkage long
 sys32_getgroups(int gidsetsize, __kernel_gid_t32 *grouplist)
 {
 	gid_t gl[NGROUPS];
@@ -3161,9 +3249,9 @@
 	return ret;
 }
 
-extern asmlinkage int sys_setgroups(int gidsetsize, gid_t *grouplist);
+extern asmlinkage long sys_setgroups(int gidsetsize, gid_t *grouplist);
 
-asmlinkage int
+asmlinkage long
 sys32_setgroups(int gidsetsize, __kernel_gid_t32 *grouplist)
 {
 	gid_t gl[NGROUPS];
@@ -3607,7 +3695,7 @@
 	kmsg->msg_control = (void *) orig_cmsg_uptr;
 }
 
-asmlinkage int
+asmlinkage long
 sys32_sendmsg(int fd, struct msghdr32 *user_msg, unsigned user_flags)
 {
 	struct socket *sock;
@@ -3655,7 +3743,7 @@
 	return err;
 }
 
-asmlinkage int
+asmlinkage long
 sys32_recvmsg(int fd, struct msghdr32 *user_msg, unsigned int user_flags)
 {
 	struct iovec iovstack[UIO_FASTIOV];
@@ -3746,7 +3834,7 @@
 
 extern void check_pending(int signum);
 
-asmlinkage int
+asmlinkage long
 sys32_sigaction (int sig, struct old_sigaction32 *act,
 		 struct old_sigaction32 *oact)
 {
@@ -3791,21 +3879,21 @@
 	return sys_create_module(name_user, (size_t)size);
 }
 
-extern asmlinkage int sys_init_module(const char *name_user,
+extern asmlinkage long sys_init_module(const char *name_user,
 				      struct module *mod_user); 
 
 /* Hey, when you're trying to init module, take time and prepare us a nice 64bit
  * module structure, even if from 32bit modutils... Why to pollute kernel... :))
  */
-asmlinkage int
+asmlinkage long
 sys32_init_module(const char *name_user, struct module *mod_user)
 {
 	return sys_init_module(name_user, mod_user);
 }
 
-extern asmlinkage int sys_delete_module(const char *name_user);
+extern asmlinkage long sys_delete_module(const char *name_user);
 
-asmlinkage int
+asmlinkage long
 sys32_delete_module(const char *name_user)
 {
 	return sys_delete_module(name_user);
@@ -4080,7 +4168,7 @@
 	return error;
 }
 
-asmlinkage int
+asmlinkage long
 sys32_query_module(char *name_user, int which, char *buf,
 		   __kernel_size_t32 bufsize, u32 ret) 
 {
@@ -4148,9 +4236,9 @@
 	char name[60];
 };
 		 
-extern asmlinkage int sys_get_kernel_syms(struct kernel_sym *table);
+extern asmlinkage long sys_get_kernel_syms(struct kernel_sym *table);
 
-asmlinkage int
+asmlinkage long
 sys32_get_kernel_syms(struct kernel_sym32 *table)
 {
 	int len, i;
@@ -4182,19 +4270,19 @@
 	return -ENOSYS;
 }
 
-asmlinkage int
+asmlinkage long
 sys32_init_module(const char *name_user, struct module *mod_user)
 {
 	return -ENOSYS;
 }
 
-asmlinkage int
+asmlinkage long
 sys32_delete_module(const char *name_user)
 {
 	return -ENOSYS;
 }
 
-asmlinkage int
+asmlinkage long
 sys32_query_module(const char *name_user, int which, char *buf, size_t bufsize,
 		 size_t *ret)
 {
@@ -4206,7 +4294,7 @@
 	return -ENOSYS;
 }
 
-asmlinkage int
+asmlinkage long
 sys32_get_kernel_syms(struct kernel_sym *table)
 {
 	return -ENOSYS;
@@ -4422,7 +4510,7 @@
 	return err;
 }
 
-extern asmlinkage int sys_nfsservctl(int cmd, void *arg, void *resp);
+extern asmlinkage long sys_nfsservctl(int cmd, void *arg, void *resp);
 
 int asmlinkage
 sys32_nfsservctl(int cmd, struct nfsctl_arg32 *arg32, union nfsctl_res32 *res32)
@@ -4493,9 +4581,9 @@
 	return err;
 }
 
-asmlinkage int sys_utimes(char *, struct timeval *);
+asmlinkage long sys_utimes(char *, struct timeval *);
 
-asmlinkage int
+asmlinkage long
 sys32_utimes(char *filename, struct timeval32 *tvs)
 {
 	char *kfilename;
@@ -4523,7 +4611,7 @@
 }
 
 /* These are here just in case some old ia32 binary calls it. */
-asmlinkage int
+asmlinkage long
 sys32_pause(void)
 {
 	current->state = TASK_INTERRUPTIBLE;
@@ -4532,19 +4620,19 @@
 }
 
 /* PCI config space poking. */
-extern asmlinkage int sys_pciconfig_read(unsigned long bus,
+extern asmlinkage long sys_pciconfig_read(unsigned long bus,
 					 unsigned long dfn,
 					 unsigned long off,
 					 unsigned long len,
 					 unsigned char *buf);
 
-extern asmlinkage int sys_pciconfig_write(unsigned long bus,
+extern asmlinkage long sys_pciconfig_write(unsigned long bus,
 					  unsigned long dfn,
 					  unsigned long off,
 					  unsigned long len,
 					  unsigned char *buf);
 
-asmlinkage int
+asmlinkage long
 sys32_pciconfig_read(u32 bus, u32 dfn, u32 off, u32 len, u32 ubuf)
 {
 	return sys_pciconfig_read((unsigned long) bus,
@@ -4554,7 +4642,7 @@
 				  (unsigned char *)AA(ubuf));
 }
 
-asmlinkage int
+asmlinkage long
 sys32_pciconfig_write(u32 bus, u32 dfn, u32 off, u32 len, u32 ubuf)
 {
 	return sys_pciconfig_write((unsigned long) bus,
@@ -4564,11 +4652,11 @@
 				   (unsigned char *)AA(ubuf));
 }
 
-extern asmlinkage int sys_prctl(int option, unsigned long arg2,
+extern asmlinkage long sys_prctl(int option, unsigned long arg2,
 				unsigned long arg3, unsigned long arg4,
 				unsigned long arg5);
 
-asmlinkage int
+asmlinkage long
 sys32_prctl(int option, u32 arg2, u32 arg3, u32 arg4, u32 arg5)
 {
 	return sys_prctl(option,
@@ -4579,9 +4667,9 @@
 }
 
 
-extern asmlinkage int sys_newuname(struct new_utsname * name);
+extern asmlinkage long sys_newuname(struct new_utsname * name);
 
-asmlinkage int
+asmlinkage long
 sys32_newuname(struct new_utsname * name)
 {
 	int ret = sys_newuname(name);
@@ -4617,9 +4705,9 @@
 }
 
 
-extern asmlinkage int sys_personality(unsigned long);
+extern asmlinkage long sys_personality(unsigned long);
 
-asmlinkage int
+asmlinkage long
 sys32_personality(unsigned long personality)
 {
 	int ret;
@@ -4636,7 +4724,7 @@
 extern asmlinkage ssize_t sys_sendfile(int out_fd, int in_fd, off_t *offset,
 				       size_t count); 
 
-asmlinkage int
+asmlinkage long
 sys32_sendfile(int out_fd, int in_fd, __kernel_off_t32 *offset, s32 count)
 {
 	mm_segment_t old_fs = get_fs();
@@ -4673,7 +4761,7 @@
 
 extern int do_adjtimex(struct timex *);
 
-asmlinkage int
+asmlinkage long
 sys32_adjtimex(struct timex32 *utp)
 {
 	struct timex txc;
diff -urN linux-davidm/arch/ia64/kernel/Makefile linux-2.3.99-pre6-lia/arch/ia64/kernel/Makefile
--- linux-davidm/arch/ia64/kernel/Makefile	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/Makefile	Thu May 25 19:44:00 2000
@@ -15,11 +15,10 @@
 all: kernel.o head.o init_task.o
 
 O_TARGET := kernel.o
-O_OBJS	 := acpi.o entry.o gate.o efi.o efi_stub.o irq.o irq_ia64.o irq_sapic.o ivt.o \
-	    pal.o pci-dma.o process.o perfmon.o ptrace.o sal.o sal_stub.o semaphore.o setup.o \
+O_OBJS	 := acpi.o entry.o gate.o efi.o efi_stub.o irq.o irq_ia64.o irq_sapic.o ivt.o	\
+	    pal.o pci-dma.o process.o perfmon.o ptrace.o sal.o semaphore.o setup.o	\
 	    signal.o sys_ia64.o traps.o time.o unaligned.o unwind.o
-#O_OBJS   := fpreg.o
-#OX_OBJS  := ia64_ksyms.o
+OX_OBJS  := ia64_ksyms.o
 
 ifdef CONFIG_IA64_GENERIC
 O_OBJS	+= machvec.o
diff -urN linux-davidm/arch/ia64/kernel/acpi.c linux-2.3.99-pre6-lia/arch/ia64/kernel/acpi.c
--- linux-davidm/arch/ia64/kernel/acpi.c	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/acpi.c	Thu May 25 22:56:31 2000
@@ -89,16 +89,16 @@
 #ifdef CONFIG_IA64_DIG
 	acpi_entry_iosapic_t *iosapic = (acpi_entry_iosapic_t *) p;
 	unsigned int ver, v;
-	int l, pins;
+	int l, max_pin;
 
 	ver = iosapic_version(iosapic->address);
-	pins = (ver >> 16) & 0xff;
+	max_pin = (ver >> 16) & 0xff;
 	
 	printk("IOSAPIC Version %x.%x: address 0x%lx IRQs 0x%x - 0x%x\n", 
 	       (ver & 0xf0) >> 4, (ver & 0x0f), iosapic->address, 
-	       iosapic->irq_base, iosapic->irq_base + pins);
+	       iosapic->irq_base, iosapic->irq_base + max_pin);
 	
-	for (l = 0; l < pins; l++) {
+	for (l = 0; l <= max_pin; l++) {
 		v = iosapic->irq_base + l;
 		if (v < 16)
 			v = isa_irq_to_vector(v);
@@ -110,7 +110,7 @@
 		iosapic_addr(v) = (unsigned long) ioremap(iosapic->address, 0);
 		iosapic_baseirq(v) = iosapic->irq_base;
 	}
-	iosapic_init(iosapic->address);
+	iosapic_init(iosapic->address, iosapic->irq_base);
 #endif
 }
 
diff -urN linux-davidm/arch/ia64/kernel/efi_stub.S linux-2.3.99-pre6-lia/arch/ia64/kernel/efi_stub.S
--- linux-davidm/arch/ia64/kernel/efi_stub.S	Sun Feb  6 18:42:40 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/efi_stub.S	Thu May 25 22:56:41 2000
@@ -1,7 +1,8 @@
 /*
  * EFI call stub.
  *
- * Copyright (C) 1999 David Mosberger <davidm@hpl.hp.com>
+ * Copyright (C) 1999-2000 Hewlett-Packard Co
+ * Copyright (C) 1999-2000 David Mosberger <davidm@hpl.hp.com>
  *
  * This stub allows us to make EFI calls in physical mode with interrupts
  * turned off.  We need this because we can't call SetVirtualMap() until
@@ -30,6 +31,7 @@
 	(IA64_PSR_BN)
 
 #include <asm/processor.h>
+#include <asm/asmmacro.h>
 
 	.text
 	.psr abi64
@@ -44,8 +46,7 @@
  * Inputs:
  *	r16 = new psr to establish
  */
-	.proc switch_mode
-switch_mode:
+ENTRY(switch_mode)
  {
 	alloc r2=ar.pfs,0,0,0,0
 	rsm psr.i | psr.ic		// disable interrupts and interrupt collection
@@ -83,7 +84,7 @@
 	;;
 1:	mov rp=r14
 	br.ret.sptk.few rp
-	.endp switch_mode
+END(switch_mode)
 
 /*
  * Inputs:
@@ -94,13 +95,12 @@
  *	r8 = EFI_STATUS returned by called function
  */
 
-	.global efi_call_phys
-	.proc efi_call_phys
-efi_call_phys:
-
-	alloc loc0=ar.pfs,8,5,7,0
+GLOBAL_ENTRY(efi_call_phys)
+	UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8))
+	alloc loc1=ar.pfs,8,5,7,0
 	ld8 r2=[in0],8			// load EFI function's entry point
-	mov loc1=rp
+	mov loc0=rp
+	UNW(.body)
 	;;
 	mov loc2=gp			// save global pointer
 	mov loc4=ar.rsc			// save RSE configuration
@@ -133,9 +133,8 @@
 	br.call.sptk.few rp=switch_mode	// return to virtual mode
 .ret2:
 	mov ar.rsc=loc4			// restore RSE configuration
-	mov ar.pfs=loc0
-	mov rp=loc1
+	mov ar.pfs=loc1
+	mov rp=loc0
 	mov gp=loc2
 	br.ret.sptk.few rp
-	
-	.endp efi_call_phys
+END(efi_call_phys)
diff -urN linux-davidm/arch/ia64/kernel/entry.S linux-2.3.99-pre6-lia/arch/ia64/kernel/entry.S
--- linux-davidm/arch/ia64/kernel/entry.S	Thu May 25 23:22:10 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/entry.S	Thu May 25 22:57:38 2000
@@ -13,8 +13,6 @@
 /*
  * Global (preserved) predicate usage on syscall entry/exit path:
  *
- * 
- *	pEOI:		See entry.h.
  *	pKern:		See entry.h.
  *	pSys:		See entry.h.
  *	pNonSys:	!pSys
@@ -30,6 +28,7 @@
 #include <asm/offsets.h>
 #include <asm/processor.h>
 #include <asm/unistd.h>
+#include <asm/asmmacro.h>
 
 #include "entry.h"
 
@@ -42,11 +41,11 @@
 	 * execve() is special because in case of success, we need to
 	 * setup a null register window frame.
 	 */
-	.align 16
-	.proc ia64_execve
-ia64_execve:
-	alloc loc0=ar.pfs,3,2,4,0
-	mov loc1=rp
+ENTRY(ia64_execve)
+	UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(3))
+	alloc loc1=ar.pfs,3,2,4,0
+	mov loc0=rp
+	UNW(.body)
 	mov out0=in0			// filename
 	;;				// stop bit between alloc and call
 	mov out1=in1			// argv
@@ -54,25 +53,22 @@
 	add out3=16,sp			// regs
 	br.call.sptk.few rp=sys_execve
 .ret0:	cmp4.ge p6,p0=r8,r0
-	mov ar.pfs=loc0			// restore ar.pfs
+	mov ar.pfs=loc1			// restore ar.pfs
 	;;
 (p6)	mov ar.pfs=r0			// clear ar.pfs in case of success
 	sxt4 r8=r8			// return 64-bit result
-	mov rp=loc1
+	mov rp=loc0
 
 	br.ret.sptk.few rp
-	.endp ia64_execve
+END(ia64_execve)
 
-	.align 16
-	.global sys_clone
-	.proc sys_clone
-sys_clone:
+GLOBAL_ENTRY(sys_clone)
+	UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(2))
 	alloc r16=ar.pfs,2,2,3,0;;
-	movl r28=1f
-	mov loc1=rp
-	br.cond.sptk.many save_switch_stack
-1:
-	mov loc0=r16				// save ar.pfs across do_fork
+	mov loc0=rp
+	DO_SAVE_SWITCH_STACK
+	mov loc1=r16				// save ar.pfs across do_fork
+	UNW(.body)
 	adds out2=IA64_SWITCH_STACK_SIZE+16,sp
 	adds r2=IA64_SWITCH_STACK_SIZE+IA64_PT_REGS_R12_OFFSET+16,sp
 	cmp.eq p8,p9=in1,r0			// usp == 0?
@@ -82,24 +78,22 @@
 (p9)	mov out1=in1
 	br.call.sptk.few rp=do_fork
 .ret1:
-	mov ar.pfs=loc0
+	mov ar.pfs=loc1
+	UNW(.restore sp)
 	adds sp=IA64_SWITCH_STACK_SIZE,sp	// pop the switch stack
-	mov rp=loc1
+	mov rp=loc0
 	;;
 	br.ret.sptk.many rp
-	.endp sys_clone
+END(sys_clone)
 
 /*
- * prev_task <- switch_to(struct task_struct *next)
+ * prev_task <- ia64_switch_to(struct task_struct *next)
  */
-	.align 16
-	.global ia64_switch_to
-	.proc ia64_switch_to
-ia64_switch_to:
+GLOBAL_ENTRY(ia64_switch_to)
+	UNW(.prologue)
 	alloc r16=ar.pfs,1,0,0,0
-	movl r28=1f
-	br.cond.sptk.many save_switch_stack
-1:
+	DO_SAVE_SWITCH_STACK
+	UNW(.body)
 	// disable interrupts to ensure atomicity for next few instructions:
 	mov r17=psr		// M-unit
 	;;
@@ -126,63 +120,58 @@
 
 	movl r28=1f
 	br.cond.sptk.many load_switch_stack
-1:
+1:	UNW(.restore sp)
+	adds sp=IA64_SWITCH_STACK_SIZE,sp	// pop switch_stack
 	br.ret.sptk.few rp
-	.endp ia64_switch_to
+END(ia64_switch_to)
 
 	/*
 	 * Like save_switch_stack, but also save the stack frame that is active
 	 * at the time this function is called.
 	 */
-	.align 16
-	.proc save_switch_stack_with_current_frame
-save_switch_stack_with_current_frame:
-1:	{
-	  alloc r16=ar.pfs,0,0,0,0		// pass ar.pfs to save_switch_stack
-	  mov r28=ip
-	}
-	;;
-	adds r28=1f-1b,r28
-	br.cond.sptk.many save_switch_stack
-1:	br.ret.sptk.few rp
-	.endp save_switch_stack_with_current_frame
+ENTRY(save_switch_stack_with_current_frame)
+	UNW(.prologue)
+	alloc r16=ar.pfs,0,0,0,0		// pass ar.pfs to save_switch_stack
+	DO_SAVE_SWITCH_STACK
+	br.ret.sptk.few rp
+END(save_switch_stack_with_current_frame)
 /*
  * Note that interrupts are enabled during save_switch_stack and
  * load_switch_stack.  This means that we may get an interrupt with
  * "sp" pointing to the new kernel stack while ar.bspstore is still
  * pointing to the old kernel backing store area.  Since ar.rsc,
  * ar.rnat, ar.bsp, and ar.bspstore are all preserved by interrupts,
- * this is not a problem.
+ * this is not a problem.  Also, we don't need to specify unwind
+ * information for preserved registers that are not modified in
+ * save_switch_stack as the right unwind information is already
+ * specified at the call-site of save_switch_stack.
  */
 
 /*
  * save_switch_stack:
  *	- r16 holds ar.pfs
- *	- r28 holds address to return to
+ *	- b7 holds address to return to
  *	- rp (b0) holds return address to save
  */
-	.align 16
-	.global save_switch_stack
-	.proc save_switch_stack
-save_switch_stack:
+GLOBAL_ENTRY(save_switch_stack)
+	UNW(.prologue)
+	UNW(.altrp b7)
 	flushrs			// flush dirty regs to backing store (must be first in insn group)
 	mov r17=ar.unat		// preserve caller's
-	adds r2=-IA64_SWITCH_STACK_SIZE+16,sp	// r2 = &sw->caller_unat
+	adds r2=16,sp		// r2 = &sw->caller_unat
 	;;
 	mov r18=ar.fpsr		// preserve fpsr
 	mov ar.rsc=r0		// put RSE in mode: enforced lazy, little endian, pl 0
 	;;
 	mov r19=ar.rnat
-	adds r3=-IA64_SWITCH_STACK_SIZE+24,sp	// r3 = &sw->ar_fpsr
-
-	// Note: the instruction ordering is important here: we can't
-	// store anything to the switch stack before sp is updated
-	// as otherwise an interrupt might overwrite the memory!
-	adds sp=-IA64_SWITCH_STACK_SIZE,sp
+	adds r3=24,sp		// r3 = &sw->ar_fpsr
 	;;
+	.savesp ar.unat,SW(CALLER_UNAT)
 	st8 [r2]=r17,16
+	.savesp ar.fpsr,SW(AR_FPSR)
 	st8 [r3]=r18,24
 	;;
+	UNW(.body)
 	stf.spill [r2]=f2,32
 	stf.spill [r3]=f3,32
 	mov r21=b0
@@ -259,16 +248,17 @@
 	st8 [r3]=r21		// save predicate registers
 	mov ar.rsc=3		// put RSE back into eager mode, pl 0
 	br.cond.sptk.few b7
-	.endp save_switch_stack
+END(save_switch_stack)
 
 /*
  * load_switch_stack:
- *	- r28 holds address to return to
+ *	- b7 holds address to return to
  */
-	.align 16
-	.proc load_switch_stack
-load_switch_stack:
+ENTRY(load_switch_stack)
+	UNW(.prologue)
+	UNW(.altrp b7)
 	invala			// invalidate ALAT
+	UNW(.body)
 	adds r2=IA64_SWITCH_STACK_B0_OFFSET+16,sp	// get pointer to switch_stack.b0
 	mov ar.rsc=r0		// put RSE into enforced lazy mode
 	adds r3=IA64_SWITCH_STACK_B0_OFFSET+24,sp	// get pointer to switch_stack.b1
@@ -360,14 +350,10 @@
 	mov ar.unat=r18				// restore caller's unat
 	mov ar.fpsr=r19				// restore fpsr
 	mov ar.rsc=3				// put RSE back into eager mode, pl 0
-	adds sp=IA64_SWITCH_STACK_SIZE,sp	// pop switch_stack
 	br.cond.sptk.few b7
-	.endp load_switch_stack
+END(load_switch_stack)
 
-	.align 16
-	.global __ia64_syscall
-	.proc __ia64_syscall
-__ia64_syscall:
+GLOBAL_ENTRY(__ia64_syscall)
 	.regstk 6,0,0,0
 	mov r15=in5				// put syscall number in place
 	break __BREAK_SYSCALL
@@ -377,30 +363,30 @@
 (p6)	st4 [r2]=r8
 (p6)	mov r8=-1
 	br.ret.sptk.few rp
-	.endp __ia64_syscall
+END(__ia64_syscall)
 
 	//
 	// We invoke syscall_trace through this intermediate function to
 	// ensure that the syscall input arguments are not clobbered.  We
 	// also use it to preserve b6, which contains the syscall entry point.
 	//
-	.align 16
-	.global invoke_syscall_trace
-	.proc invoke_syscall_trace
-invoke_syscall_trace:
-	alloc loc0=ar.pfs,8,3,0,0
+GLOBAL_ENTRY(invoke_syscall_trace)
+	UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8))
+	alloc loc1=ar.pfs,8,3,0,0
 	;;			// WAW on CFM at the br.call
-	mov loc1=rp
+	mov loc0=rp
+	.fframe IA64_SWITCH_STACK_SIZE
+	adds sp=-IA64_SWITCH_STACK_SIZE,sp
 	br.call.sptk.many rp=save_switch_stack_with_current_frame	// must preserve b6!!
 .ret2:	mov loc2=b6
 	br.call.sptk.few rp=syscall_trace
 .ret3:	adds sp=IA64_SWITCH_STACK_SIZE,sp	// drop switch_stack frame
-	mov rp=loc1
-	mov ar.pfs=loc0
+	mov rp=loc0
+	mov ar.pfs=loc1
 	mov b6=loc2
 	;;
 	br.ret.sptk.few rp
-	.endp invoke_syscall_trace
+END(invoke_syscall_trace)
 
 	//
 	// Invoke a system call, but do some tracing before and after the call.
@@ -414,19 +400,19 @@
 	//
 	.global ia64_trace_syscall
 	.global ia64_strace_leave_kernel
-	.global ia64_strace_clear_r8
 
-	.proc ia64_strace_clear_r8
-ia64_strace_clear_r8:		// this is where we return after cloning when PF_TRACESYS is on
+GLOBAL_ENTRY(ia64_strace_clear_r8)
+	// this is where we return after cloning when PF_TRACESYS is on
+	PT_REGS_UNWIND_INFO
 # ifdef CONFIG_SMP
 	br.call.sptk.few rp=invoke_schedule_tail
 # endif
 	mov r8=0
 	br strace_check_retval
-	.endp ia64_strace_clear_r8
+END(ia64_strace_clear_r8)
 
-	.proc ia64_trace_syscall
-ia64_trace_syscall:
+ENTRY(ia64_trace_syscall)
+	PT_REGS_UNWIND_INFO
 	br.call.sptk.few rp=invoke_syscall_trace // give parent a chance to catch syscall args
 .ret4:	br.call.sptk.few rp=b6			// do the syscall
 strace_check_retval:
@@ -454,7 +440,7 @@
 (p6)	mov r10=-1
 (p6)	mov r8=r9
 	br.cond.sptk.few strace_save_retval
-	.endp ia64_trace_syscall
+END(ia64_trace_syscall)
 
 /*
  * A couple of convenience macros to help implement/understand the state
@@ -472,12 +458,8 @@
 #define rKRBS		r22
 #define rB6		r21
 
-	.align 16
-	.global ia64_ret_from_syscall
-	.global ia64_ret_from_syscall_clear_r8
-	.global ia64_leave_kernel
-	.proc ia64_ret_from_syscall
-ia64_ret_from_syscall_clear_r8:
+GLOBAL_ENTRY(ia64_ret_from_syscall_clear_r8)
+	PT_REGS_UNWIND_INFO
 #ifdef CONFIG_SMP
 	// In SMP mode, we need to call schedule_tail to complete the scheduling process.
 	// Called by ia64_switch_to after do_fork()->copy_thread().  r8 contains the
@@ -487,7 +469,10 @@
 #endif                  
 	mov r8=0
 	;;					// added stop bits to prevent r8 dependency
-ia64_ret_from_syscall:
+END(ia64_ret_from_syscall_clear_r8)
+	// fall through
+GLOBAL_ENTRY(ia64_ret_from_syscall)
+	PT_REGS_UNWIND_INFO
 	cmp.ge p6,p7=r8,r0			// syscall executed successfully?
 	adds r2=IA64_PT_REGS_R8_OFFSET+16,sp	// r2 = &pt_regs.r8
 	adds r3=IA64_PT_REGS_R8_OFFSET+32,sp	// r3 = &pt_regs.r10
@@ -497,19 +482,21 @@
 	.mem.offset 8,0
 (p6)	st8.spill [r3]=r0	// clear error indication in slot for r10 and set unat bit
 (p7)	br.cond.spnt.few handle_syscall_error	// handle potential syscall failure
-
-ia64_leave_kernel:
+END(ia64_ret_from_syscall)
+	// fall through
+GLOBAL_ENTRY(ia64_leave_kernel)
 	// check & deliver software interrupts:
 
+	PT_REGS_UNWIND_INFO
 #ifdef CONFIG_SMP
-	adds	r2=IA64_TASK_PROCESSOR_OFFSET,r13
-	movl	r3=softirq_state
+	adds r2=IA64_TASK_PROCESSOR_OFFSET,r13
+	movl r3=softirq_state
 	;;
-	ld4	r2=[r2]
+	ld4 r2=[r2]
 	;;
-	shl	r2=r2,SMP_LOG_CACHE_BYTES	// can't use shladd here...
+	shl r2=r2,SMP_LOG_CACHE_BYTES	// can't use shladd here...
 	;;
-	add	r3=r2,r3
+	add r3=r2,r3
 #else
 	movl r3=softirq_state
 #endif
@@ -538,18 +525,8 @@
 	ld4 r14=[r14]
 	mov rp=r3			// arrange for schedule() to return to back_from_resched
 	;;
-	/*
-	 * If pEOI is set, we need to write the cr.eoi now and then
-	 * clear pEOI because both invoke_schedule() and
-	 * handle_signal_delivery() may call the scheduler.  Since
-	 * we're returning to user-level, we get at most one nested
-	 * interrupt of the same priority level, which doesn't tax the
-	 * kernel stack too much.
-	 */
-(pEOI)	mov cr.eoi=r0
 	cmp.ne p6,p0=r2,r0
 	cmp.ne p2,p0=r14,r0		// NOTE: pKern is an alias for p2!!
-(pEOI)	cmp.ne pEOI,p0=r0,r0		// clear pEOI before calling schedule()
 	srlz.d
 (p6)	br.call.spnt.many b6=invoke_schedule	// ignore return value
 2:
@@ -557,13 +534,19 @@
 (p2)	br.call.spnt.few rp=handle_signal_delivery
 #if defined(CONFIG_SMP) || defined(CONFIG_IA64_SOFTSDV_HACKS)
 	// Check for lost ticks
+	rsm psr.i
 	mov r2 = ar.itc
+	movl r14 = 1000			// latency tolerance
 	mov r3 = cr.itm
 	;;
 	sub r2 = r2, r3
 	;;
+	sub r2 = r2, r14
+	;;
 	cmp.ge p6,p7 = r2, r0
 (p6)	br.call.spnt.few rp=invoke_ia64_reset_itm
+	;;
+	ssm psr.i
 #endif 
 restore_all:
 
@@ -692,18 +675,6 @@
 	;;
 	add r18=r16,r18			// adjust the loadrs value
 	;;
-#ifdef CONFIG_IA64_SOFTSDV_HACKS
-	// Reset ITM if we've missed a timer tick.  Workaround for SoftSDV bug
-	mov r16 = r2
-	mov r2 = ar.itc
-	mov r17 = cr.itm
-	;; 
-	cmp.gt p6,p7 = r2, r17
-(p6)	addl r17 = 100, r2
-	;;
-	mov cr.itm = r17
-	mov r2 = r16
-#endif
 dont_preserve_current_frame:
 	alloc r16=ar.pfs,0,0,0,0	// drop the current call frame (noop for syscalls)
 	;;
@@ -724,14 +695,14 @@
 	mov ar.rsc=rARRSC
 	mov ar.unat=rARUNAT
 	mov cr.ifs=rCRIFS	// restore cr.ifs only if not a (synchronous) syscall
-(pEOI)	mov cr.eoi=r0
 	mov pr=rARPR,-1
 	mov cr.iip=rCRIIP
 	mov cr.ipsr=rCRIPSR
 	;;
 	rfi;;			// must be last instruction in an insn group
+END(ia64_leave_kernel)
 
-handle_syscall_error:
+ENTRY(handle_syscall_error)
 	/*
 	 * Some system calls (e.g., ptrace, mmap) can return arbitrary
 	 * values which could lead us to mistake a negative return
@@ -740,6 +711,7 @@
 	 * If pt_regs.r8 is zero, we assume that the call completed
 	 * successfully.
 	 */
+	PT_REGS_UNWIND_INFO
 	ld8 r3=[r2]		// load pt_regs.r8
 	sub r9=0,r8		// negate return value to get errno
 	;;
@@ -753,84 +725,87 @@
 .mem.offset 0,0; st8.spill [r2]=r9	// store errno in pt_regs.r8 and set unat bit
 .mem.offset 8,0; st8.spill [r3]=r10	// store error indication in pt_regs.r10 and set unat bit
 	br.cond.sptk.many ia64_leave_kernel
-	.endp handle_syscall_error
+END(handle_syscall_error)
 
 #ifdef CONFIG_SMP
 	/*
 	 * Invoke schedule_tail(task) while preserving in0-in7, which may be needed
 	 * in case a system call gets restarted.
 	 */
-	.proc invoke_schedule_tail
-invoke_schedule_tail:
-	alloc loc0=ar.pfs,8,2,1,0
-	mov loc1=rp
+ENTRY(invoke_schedule_tail)
+	UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8))
+	alloc loc1=ar.pfs,8,2,1,0
+	mov loc0=rp
 	mov out0=r8				// Address of previous task
 	;;
 	br.call.sptk.few rp=schedule_tail
 .ret8:
-	mov ar.pfs=loc0
-	mov rp=loc1
+	mov ar.pfs=loc1
+	mov rp=loc0
 	br.ret.sptk.many rp
-	.endp invoke_schedule_tail
+END(invoke_schedule_tail)
+
 #endif /* CONFIG_SMP */
 
 #if defined(CONFIG_SMP) || defined(CONFIG_IA64_SOFTSDV_HACKS)
-	.proc invoke_ia64_reset_itm
-invoke_ia64_reset_itm:
-	alloc loc0=ar.pfs,8,2,0,0
-	mov loc1=rp
+
+ENTRY(invoke_ia64_reset_itm)
+	UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8))
+	alloc loc1=ar.pfs,8,2,0,0
+	mov loc0=rp
 	;;
+	UNW(.body)
 	br.call.sptk.many rp=ia64_reset_itm
 	;;
-	mov ar.pfs=loc0
-	mov rp=loc1
+	mov ar.pfs=loc1
+	mov rp=loc0
 	br.ret.sptk.many rp
-	.endp invoke_ia64_reset_itm
+END(invoke_ia64_reset_itm)
+
 #endif /* defined(CONFIG_SMP) || defined(CONFIG_IA64_SOFTSDV_HACKS) */
 
 	/*
 	 * Invoke do_softirq() while preserving in0-in7, which may be needed
 	 * in case a system call gets restarted.
 	 */
-	.proc invoke_do_softirq
-invoke_do_softirq:
-	alloc loc0=ar.pfs,8,2,0,0
-	mov loc1=rp
-(pEOI)	mov cr.eoi=r0
+ENTRY(invoke_do_softirq)
+	UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8))
+	alloc loc1=ar.pfs,8,2,0,0
+	mov loc0=rp
 	;;
-(pEOI)	cmp.ne pEOI,p0=r0,r0
+	UNW(.body)
 	br.call.sptk.few rp=do_softirq
 .ret9:
-	mov ar.pfs=loc0
-	mov rp=loc1
+	mov ar.pfs=loc1
+	mov rp=loc0
 	br.ret.sptk.many rp
-	.endp invoke_do_softirq
+END(invoke_do_softirq)
 
 	/*
 	 * Invoke schedule() while preserving in0-in7, which may be needed
 	 * in case a system call gets restarted.
 	 */
-	.proc invoke_schedule
-invoke_schedule:
-	alloc loc0=ar.pfs,8,2,0,0
-	mov loc1=rp
+ENTRY(invoke_schedule)
+	UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8))
+	alloc loc1=ar.pfs,8,2,0,0
+	mov loc0=rp
 	;;
+	UNW(.body)
 	br.call.sptk.few rp=schedule
 .ret10:
-	mov ar.pfs=loc0
-	mov rp=loc1
+	mov ar.pfs=loc1
+	mov rp=loc0
 	br.ret.sptk.many rp
-	.endp invoke_schedule
+END(invoke_schedule)
 
 	//
 	// Setup stack and call ia64_do_signal.  Note that pSys and pNonSys need to
 	// be set up by the caller.  We declare 8 input registers so the system call
 	// args get preserved, in case we need to restart a system call.
 	//
-	.align 16
-	.proc handle_signal_delivery
-handle_signal_delivery:
-	alloc loc0=ar.pfs,8,2,3,0 // preserve all eight input regs in case of syscall restart!
+ENTRY(handle_signal_delivery)
+	UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8))
+	alloc loc1=ar.pfs,8,2,3,0 // preserve all eight input regs in case of syscall restart!
 	mov r9=ar.unat
 
 	// If the process is being ptraced, the signal may not actually be delivered to
@@ -852,38 +827,36 @@
 (p17)	adds sp=-IA64_SWITCH_STACK_SIZE,sp	// make space for (dummy) switch_stack
 	;;
 (p17)	st8 [r3]=r9				// save ar.unat in sw->caller_unat
-	mov loc1=rp				// save return address
+	mov loc0=rp				// save return address
+	UNW(.body)
 	br.call.sptk.few rp=ia64_do_signal
 .ret11:
 	adds r3=IA64_SWITCH_STACK_CALLER_UNAT_OFFSET+16,sp
 	;;
 	ld8 r9=[r3]				// load new unat from sw->caller_unat
-	mov rp=loc1
+	mov rp=loc0
 	;;
 (p17)	adds sp=IA64_SWITCH_STACK_SIZE,sp	// drop (dummy) switch_stack
 (p17)	mov ar.unat=r9
-(p17)	mov ar.pfs=loc0
+(p17)	mov ar.pfs=loc1
 (p17)	br.ret.sptk.many rp
 
-	// restore the switch stack (ptrace may have modified it):
-	movl r28=1f
-	br.cond.sptk.many load_switch_stack
-1:	br.ret.sptk.many rp
+	DO_LOAD_SWITCH_STACK( )		// restore the switch stack (ptrace may have modified it)
+	br.ret.sptk.many rp
 	// NOT REACHED
 
 setup_switch_stack:
-	movl r28=back_from_setup_switch_stack
+	UNW(.prologue)
 	mov r16=loc0
-	br.cond.sptk.many save_switch_stack
-	// NOT REACHED
-
-	.endp handle_signal_delivery
-
-	.align 16
-	.proc sys_rt_sigsuspend
-	.global sys_rt_sigsuspend
-sys_rt_sigsuspend:
-	alloc loc0=ar.pfs,2,2,3,0
+	DO_SAVE_SWITCH_STACK
+	UNW(.body)
+	br.cond.sptk.many back_from_setup_switch_stack
+
+END(handle_signal_delivery)
+
+GLOBAL_ENTRY(sys_rt_sigsuspend)
+	UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(2))
+	alloc loc1=ar.pfs,2,2,3,0
 
 	// If the process is being ptraced, the signal may not actually be delivered to
 	// the process.  Instead, SIGCHLD will be sent to the parent.  We need to
@@ -895,34 +868,36 @@
 	mov out1=in1				// sigsetsize
 	;;
 	adds out2=16,sp				// out1=&pt_regs
-	movl r28=back_from_sigsuspend_setup_switch_stack
-	mov r16=loc0
-	br.cond.sptk.many save_switch_stack
-	;;
-back_from_sigsuspend_setup_switch_stack:
-	mov loc1=rp				// save return address
+	mov r16=loc1
+	DO_SAVE_SWITCH_STACK
+	mov loc0=rp				// save return address
+	UNW(.body)
 	br.call.sptk.many rp=ia64_rt_sigsuspend
 .ret12:
 	adds r3=IA64_SWITCH_STACK_CALLER_UNAT_OFFSET+16,sp
 	;;
 	ld8 r9=[r3]				// load new unat from sw->caller_unat
-	mov rp=loc1
+	mov rp=loc0
 	;;
-
-	// restore the switch stack (ptrace may have modified it):
-	movl r28=1f
-	br.cond.sptk.many load_switch_stack
-1:	br.ret.sptk.many rp
+	// restore the switch stack (ptrace may have modified it)
+	DO_LOAD_SWITCH_STACK(PT_REGS_UNWIND_INFO)
+	br.ret.sptk.many rp
 	// NOT REACHED
-	.endp sys_rt_sigsuspend
+END(sys_rt_sigsuspend)
 
-	.align 16
-	.proc sys_rt_sigreturn
-sys_rt_sigreturn:
+ENTRY(sys_rt_sigreturn)
 	.regstk 0,0,3,0	// inherited from gate.s:invoke_sighandler()
+	PT_REGS_UNWIND_INFO
 	adds out0=16,sp				// out0 = &pt_regs
+	UNW(.prologue)
+	UNW(.fframe IA64_PT_REGS_SIZE+IA64_SWITCH_STACK_SIZE)
+	UNW(.spillsp rp, PT(CR_IIP)+IA64_SWITCH_STACK_SIZE)
+	UNW(.spillsp ar.pfs, PT(CR_IFS)+IA64_SWITCH_STACK_SIZE)
+	UNW(.spillsp ar.unat, PT(AR_UNAT)+IA64_SWITCH_STACK_SIZE)
+	UNW(.spillsp pr, PT(PR)+IA64_SWITCH_STACK_SIZE)
 	adds sp=-IA64_SWITCH_STACK_SIZE,sp	// make space for unat and padding
 	;;
+	UNW(.body)
 	cmp.eq pNonSys,p0=r0,r0			// sigreturn isn't a normal syscall...
 	br.call.sptk.few rp=ia64_rt_sigreturn
 .ret13:
@@ -931,28 +906,25 @@
 	ld8 r9=[r3]			// load new ar.unat
 	mov rp=r8
 	;;
+	PT_REGS_UNWIND_INFO
 	adds sp=IA64_SWITCH_STACK_SIZE,sp	// drop (dummy) switch-stack frame
 	mov ar.unat=r9
 	br rp
-	.endp sys_rt_sigreturn
+END(sys_rt_sigreturn)
 
-	.align 16
-	.global ia64_prepare_handle_unaligned
-	.proc ia64_prepare_handle_unaligned
-ia64_prepare_handle_unaligned:
-	movl r28=1f
+GLOBAL_ENTRY(ia64_prepare_handle_unaligned)
 	//
 	// r16 = fake ar.pfs, we simply need to make sure 
 	// privilege is still 0
 	//
+	PT_REGS_UNWIND_INFO
 	mov r16=r0 				
-	br.cond.sptk.few save_switch_stack
-1: 	br.call.sptk.few rp=ia64_handle_unaligned // stack frame setup in ivt
+	DO_SAVE_SWITCH_STACK
+	br.call.sptk.few rp=ia64_handle_unaligned // stack frame setup in ivt
 .ret14:
-	movl r28=2f
-	br.cond.sptk.many load_switch_stack
-2:	br.cond.sptk.many rp			  // goes to ia64_leave_kernel
-	.endp ia64_prepare_handle_unaligned
+	DO_LOAD_SWITCH_STACK(PT_REGS_UNWIND_INFO)
+	br.cond.sptk.many rp			  // goes to ia64_leave_kernel
+END(ia64_prepare_handle_unaligned)
 
 	.rodata
 	.align 8
@@ -1066,7 +1038,7 @@
 	data8 sys_setdomainname
 	data8 sys_newuname			// 1130
 	data8 sys_adjtimex
-	data8 sys_create_module
+	data8 ia64_create_module
 	data8 sys_init_module
 	data8 sys_delete_module
 	data8 sys_get_kernel_syms		// 1135
@@ -1213,4 +1185,3 @@
 	data8 ia64_ni_syscall
 	data8 ia64_ni_syscall
 	data8 ia64_ni_syscall
-
diff -urN linux-davidm/arch/ia64/kernel/entry.h linux-2.3.99-pre6-lia/arch/ia64/kernel/entry.h
--- linux-davidm/arch/ia64/kernel/entry.h	Sun Feb  6 18:42:40 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/entry.h	Thu May 25 23:00:20 2000
@@ -2,7 +2,60 @@
  * Preserved registers that are shared between code in ivt.S and entry.S.  Be
  * careful not to step on these!
  */
-#define pEOI		p1	/* should leave_kernel write EOI? */
 #define pKern		p2	/* will leave_kernel return to kernel-mode? */
 #define pSys		p4	/* are we processing a (synchronous) system call? */
 #define pNonSys		p5	/* complement of pSys */
+
+#define PT(f)		(IA64_PT_REGS_##f##_OFFSET + 16)
+#define SW(f)		(IA64_SWITCH_STACK_##f##_OFFSET + 16)
+
+#define PT_REGS_UNWIND_INFO			\
+	UNW(.prologue);				\
+	UNW(.unwabi @svr4, 105);		\
+	UNW(.fframe IA64_PT_REGS_SIZE);		\
+	UNW(.spillsp rp, PT(CR_IIP));		\
+	UNW(.spillsp ar.pfs, PT(CR_IFS));	\
+	UNW(.spillsp ar.unat, PT(AR_UNAT));	\
+	UNW(.spillsp pr, PT(PR));		\
+	UNW(.body)
+
+#define SAVE_SWITCH_STACK_UNWIND_INFO							\
+	.savesp ar.unat,SW(CALLER_UNAT); .savesp ar.fpsr,SW(AR_FPSR);			\
+	UNW(.spillsp f2,SW(F2)); UNW(.spillsp f3,SW(F3));				\
+	UNW(.spillsp f4,SW(F4)); UNW(.spillsp f5,SW(F5));				\
+	UNW(.spillsp f16,SW(F16)); UNW(.spillsp f17,SW(F17));				\
+	UNW(.spillsp f18,SW(F18)); UNW(.spillsp f19,SW(F19));				\
+	UNW(.spillsp f20,SW(F20)); UNW(.spillsp f21,SW(F21));				\
+	UNW(.spillsp f22,SW(F22)); UNW(.spillsp f23,SW(F23));				\
+	UNW(.spillsp f24,SW(F24)); UNW(.spillsp f25,SW(F25));				\
+	UNW(.spillsp f26,SW(F26)); UNW(.spillsp f27,SW(F27));				\
+	UNW(.spillsp f28,SW(F28)); UNW(.spillsp f29,SW(F29));				\
+	UNW(.spillsp f30,SW(F30)); UNW(.spillsp f31,SW(F31));				\
+	UNW(.spillsp r4,SW(R4)); UNW(.spillsp r5,SW(R5));				\
+	UNW(.spillsp r6,SW(R6)); UNW(.spillsp r7,SW(R7));				\
+	UNW(.spillsp b1,SW(B1)); UNW(.spillsp b2,SW(B2));				\
+	UNW(.spillsp b3,SW(B3)); UNW(.spillsp b4,SW(B4));				\
+	UNW(.spillsp b5,SW(B5));							\
+	UNW(.spillsp ar.pfs,SW(AR_PFS)); UNW(.spillsp ar.lc,SW(AR_LC));			\
+	UNW(.spillsp @priunat,SW(AR_UNAT));						\
+	UNW(.spillsp ar.rnat,SW(AR_RNAT)); UNW(.spillsp ar.bspstore,SW(AR_BSPSTORE));	\
+	UNW(.spillsp pr,SW(PR))
+
+#define DO_SAVE_SWITCH_STACK			\
+	movl r28=1f;				\
+	;;					\
+	.fframe IA64_SWITCH_STACK_SIZE;		\
+	adds sp=-IA64_SWITCH_STACK_SIZE,sp;	\
+	mov b7=r28;				\
+	SAVE_SWITCH_STACK_UNWIND_INFO;		\
+	br.cond.sptk.many save_switch_stack;	\
+1:
+
+#define DO_LOAD_SWITCH_STACK(extra)		\
+	movl r28=1f;				\
+	;;					\
+	mov b7=r28;				\
+	br.cond.sptk.many load_switch_stack;	\
+1:	UNW(.restore sp);			\
+	extra;					\
+	adds sp=IA64_SWITCH_STACK_SIZE,sp
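
For reference, the ENTRY/GLOBAL_ENTRY/END and UNW() macros used throughout
these assembly changes come from <asm/asmmacro.h>.  Roughly, they should
expand as sketched below -- this is an illustration only, and the UNW()
behavior in particular is an assumption (it presumably compiles away unless
the new unwind support, CONFIG_IA64_NEW_UNWIND, is configured in, since the
unwind directives require a fixed toolchain):

	/* hypothetical sketch -- see <asm/asmmacro.h> for the real definitions */
	#include <linux/config.h>

	#ifdef CONFIG_IA64_NEW_UNWIND
	# define UNW(args...)	args	/* emit the unwind directive(s) */
	#else
	# define UNW(args...)		/* old toolchain: drop unwind info */
	#endif

	#define ENTRY(name)		\
		.align 16;		\
		.proc name;		\
	name:

	#define GLOBAL_ENTRY(name)	\
		.global name;		\
		ENTRY(name)

	#define END(name)		\
		.endp name
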
diff -urN linux-davidm/arch/ia64/kernel/gate.S linux-2.3.99-pre6-lia/arch/ia64/kernel/gate.S
--- linux-davidm/arch/ia64/kernel/gate.S	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/gate.S	Thu May 25 22:57:57 2000
@@ -3,10 +3,11 @@
  * each task's text region.  For now, it contains the signal
  * trampoline code only.
  *
- * Copyright (C) 1999 Hewlett-Packard Co
- * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999-2000 Hewlett-Packard Co
+ * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
  */
 
+#include <asm/asmmacro.h>
 #include <asm/offsets.h>
 #include <asm/sigcontext.h>
 #include <asm/system.h>
@@ -75,15 +76,12 @@
 	 *	[sp+16] = sigframe
 	 */
 
-	.global ia64_sigtramp
-	.proc ia64_sigtramp
-ia64_sigtramp:
+GLOBAL_ENTRY(ia64_sigtramp)
 	ld8 r10=[r3],8				// get signal handler entry point
 	br.call.sptk.many rp=invoke_sighandler
-	.endp ia64_sigtramp
+END(ia64_sigtramp)
 
-	.proc invoke_sighandler
-invoke_sighandler:
+ENTRY(invoke_sighandler)
 	ld8 gp=[r3]			// get signal handler's global pointer
 	mov b6=r10
 	cover				// push args in interrupted frame onto backing store
@@ -152,10 +150,9 @@
 	ldf.fill f15=[base1],32
 	mov r15=__NR_rt_sigreturn
 	break __BREAK_SYSCALL
-	.endp invoke_sighandler
+END(invoke_sighandler)
 
-	.proc setup_rbs
-setup_rbs:
+ENTRY(setup_rbs)
 	flushrs					// must be first in insn
 	mov ar.rsc=r0				// put RSE into enforced lazy mode
 	adds r16=(RNAT_OFF+SIGCONTEXT_OFF),sp
@@ -167,9 +164,9 @@
 	mov ar.rsc=0xf				// set RSE into eager mode, pl 3
 	invala					// invalidate ALAT
 	br.cond.sptk.many back_from_setup_rbs
+END(setup_rbs)
 
-	.proc restore_rbs
-restore_rbs:
+ENTRY(restore_rbs)
 	flushrs
 	mov ar.rsc=r0				// put RSE into enforced lazy mode
 	adds r16=(RNAT_OFF+SIGCONTEXT_OFF),sp
@@ -181,5 +178,4 @@
 	mov ar.rsc=0xf				// (will be restored later on from sc_ar_rsc)
 	// invala not necessary as that will happen when returning to user-mode
 	br.cond.sptk.many back_from_restore_rbs
-
-	.endp restore_rbs
+END(restore_rbs)
diff -urN linux-davidm/arch/ia64/kernel/head.S linux-2.3.99-pre6-lia/arch/ia64/kernel/head.S
--- linux-davidm/arch/ia64/kernel/head.S	Sun Feb 13 10:30:38 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/head.S	Thu May 25 22:58:07 2000
@@ -16,6 +16,7 @@
 
 #include <linux/config.h>
 
+#include <asm/asmmacro.h>
 #include <asm/fpu.h>
 #include <asm/pal.h>
 #include <asm/offsets.h>
@@ -54,10 +55,12 @@
 	stringz "Halting kernel\n"
 
 	.text
-	.align 16
-	.global _start
-	.proc _start
-_start:
+
+GLOBAL_ENTRY(_start)
+	UNW(.prologue)
+	UNW(.save rp, r4)		// terminate unwind chain with a NULL rp
+	UNW(mov r4=r0)
+	.body
 	// set IVT entry point---can't access I/O ports without it
 	movl r3=ia64_ivt
 	;;
@@ -156,12 +159,9 @@
 	ld8 out0=[r2]
 	br.call.sptk.few b0=console_print
 self:	br.sptk.few self		// endless loop
-	.endp _start
+END(_start)
 
-	.align 16
-	.global ia64_save_debug_regs
-	.proc ia64_save_debug_regs
-ia64_save_debug_regs:
+GLOBAL_ENTRY(ia64_save_debug_regs)
 	alloc r16=ar.pfs,1,0,0,0
 	mov r20=ar.lc			// preserve ar.lc
 	mov ar.lc=IA64_NUM_DBG_REGS-1
@@ -177,13 +177,10 @@
 	br.cloop.sptk.few 1b
 	;;
 	mov ar.lc=r20			// restore ar.lc
-	br.ret.sptk.few b0
-	.endp ia64_save_debug_regs
+	br.ret.sptk.few rp
+END(ia64_save_debug_regs)
 
-	.align 16
-	.global ia64_load_debug_regs
-	.proc ia64_load_debug_regs
-ia64_load_debug_regs:
+GLOBAL_ENTRY(ia64_load_debug_regs)
 	alloc r16=ar.pfs,1,0,0,0
 	lfetch.nta [in0]
 	mov r20=ar.lc			// preserve ar.lc
@@ -200,13 +197,10 @@
 	br.cloop.sptk.few 1b
 	;;
 	mov ar.lc=r20			// restore ar.lc
-	br.ret.sptk.few b0
-	.endp ia64_load_debug_regs
+	br.ret.sptk.few rp
+END(ia64_load_debug_regs)
 
-	.align 16
-	.global __ia64_save_fpu
-	.proc __ia64_save_fpu
-__ia64_save_fpu:
+GLOBAL_ENTRY(__ia64_save_fpu)
 	alloc r2=ar.pfs,1,0,0,0
 	adds r3=16,in0
 	;;
@@ -354,12 +348,9 @@
 	stf.spill.nta [in0]=f126,32
 	stf.spill.nta [ r3]=f127,32
 	br.ret.sptk.few rp
-	.endp __ia64_save_fpu
+END(__ia64_save_fpu)
 
-	.align 16
-	.global __ia64_load_fpu
-	.proc __ia64_load_fpu
-__ia64_load_fpu:
+GLOBAL_ENTRY(__ia64_load_fpu)
 	alloc r2=ar.pfs,1,0,0,0
 	adds r3=16,in0
 	;;
@@ -507,12 +498,9 @@
 	ldf.fill.nta f126=[in0],32
 	ldf.fill.nta f127=[ r3],32
 	br.ret.sptk.few rp
-	.endp __ia64_load_fpu
+END(__ia64_load_fpu)
 
-	.align 16
-	.global __ia64_init_fpu
-	.proc __ia64_init_fpu
-__ia64_init_fpu:
+GLOBAL_ENTRY(__ia64_init_fpu)
 	alloc r2=ar.pfs,0,0,0,0
 	stf.spill [sp]=f0
 	mov      f32=f0
@@ -644,4 +632,4 @@
 	ldf.fill f126=[sp]
 	mov      f127=f0
 	br.ret.sptk.few rp
-	.endp __ia64_init_fpu
+END(__ia64_init_fpu)
diff -urN linux-davidm/arch/ia64/kernel/ia64_ksyms.c linux-2.3.99-pre6-lia/arch/ia64/kernel/ia64_ksyms.c
--- linux-davidm/arch/ia64/kernel/ia64_ksyms.c	Wed Dec 31 16:00:00 1969
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/ia64_ksyms.c	Mon May 15 11:30:23 2000
@@ -0,0 +1,45 @@
+/*
+ * Architecture-specific kernel symbols
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+
+#include <asm/processor.h>
+EXPORT_SYMBOL(cpu_data);
+EXPORT_SYMBOL(kernel_thread);
+
+#include <asm/uaccess.h>
+EXPORT_SYMBOL(__copy_user);
+
+#include <linux/string.h>
+EXPORT_SYMBOL(memset);
+EXPORT_SYMBOL(memcmp);
+EXPORT_SYMBOL(memcpy);
+EXPORT_SYMBOL(strcat);
+EXPORT_SYMBOL(strchr);
+EXPORT_SYMBOL(strcmp);
+EXPORT_SYMBOL(strlen);
+EXPORT_SYMBOL(strncat);
+EXPORT_SYMBOL(strncmp);
+EXPORT_SYMBOL(strncpy);
+EXPORT_SYMBOL(strtok);
+
+#include <linux/pci.h>
+EXPORT_SYMBOL(pci_alloc_consistent);
+EXPORT_SYMBOL(pci_free_consistent);
+
+#include <asm/irq.h>
+EXPORT_SYMBOL(enable_irq);
+EXPORT_SYMBOL(disable_irq);
+
+/* from arch/ia64/lib */
+extern void __divdi3(void);
+extern void __udivdi3(void);
+extern void __moddi3(void);
+extern void __umoddi3(void);
+
+EXPORT_SYMBOL_NOVERS(__divdi3);
+EXPORT_SYMBOL_NOVERS(__udivdi3);
+EXPORT_SYMBOL_NOVERS(__moddi3);
+EXPORT_SYMBOL_NOVERS(__umoddi3);
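
The new ia64_ksyms.c above provides the architecture-specific exports that
loadable modules link against.  As a quick illustration, a hypothetical test
module using a couple of these symbols might look like this (example only;
the module name and messages are made up):

	/* hypothetical test module exercising symbols exported by ia64_ksyms.c */
	#include <linux/module.h>
	#include <linux/kernel.h>
	#include <linux/string.h>

	static char buf[16];

	int
	init_module (void)
	{
		const char *msg = "ia64 modules";

		memcpy(buf, msg, strlen(msg) + 1);	/* memcpy/strlen are now exported */
		printk("ksyms test: %s (len=%lu)\n", buf, (unsigned long) strlen(buf));
		return 0;
	}

	void
	cleanup_module (void)
	{
		printk("ksyms test: unloaded\n");
	}
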
diff -urN linux-davidm/arch/ia64/kernel/irq_ia64.c linux-2.3.99-pre6-lia/arch/ia64/kernel/irq_ia64.c
--- linux-davidm/arch/ia64/kernel/irq_ia64.c	Thu May 25 23:22:10 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/irq_ia64.c	Wed May 10 16:47:53 2000
@@ -33,6 +33,8 @@
 #include <asm/pgtable.h>
 #include <asm/system.h>
 
+#define IRQ_DEBUG	0
+
 #ifdef CONFIG_ITANIUM_A1_SPECIFIC
 spinlock_t ivr_read_lock;
 #endif
@@ -73,12 +75,7 @@
 void
 ia64_handle_irq (unsigned long vector, struct pt_regs *regs)
 {
-	unsigned long bsp, sp, saved_tpr;
-# ifndef CONFIG_SMP
-	static unsigned int max_prio = 0;
-	unsigned int prev_prio;
-# endif
-
+	unsigned long saved_tpr;
 #ifdef CONFIG_ITANIUM_A1_SPECIFIC
 	unsigned long eoi_ptr;
  
@@ -116,66 +113,59 @@
 # endif
 #endif /* CONFIG_ITANIUM_A1_SPECIFIC */
 
-# ifndef CONFIG_SMP
-	prev_prio = max_prio;
-	if (vector < max_prio) {
-		printk ("ia64_handle_irq: got vector %lu while %u "
-			"was in progress!\n", vector, max_prio);
-	} else
-		max_prio = vector;
-# endif /* !CONFIG_SMP */
-
-	asm ("mov %0=ar.bsp" : "=r"(bsp));
-	asm ("mov %0=sp" : "=r"(sp));
-
-	if ((sp - bsp) < 1024) {
-		static unsigned char count;
-		static long last_time;
-
-		if (count > 5 && jiffies - last_time > 5*HZ)
-			count = 0;
-		if (++count < 5) {
-			last_time = jiffies;
-			printk("ia64_handle_irq: DANGER: less than "
-			       "1KB of free stack space!!\n"
-			       "(bsp=0x%lx, sp=%lx)\n", bsp, sp);
+#if IRQ_DEBUG
+	{
+		unsigned long bsp, sp;
+
+		asm ("mov %0=ar.bsp" : "=r"(bsp));
+		asm ("mov %0=sp" : "=r"(sp));
+
+		if ((sp - bsp) < 1024) {
+			static unsigned char count;
+			static long last_time;
+
+			if (count > 5 && jiffies - last_time > 5*HZ)
+				count = 0;
+			if (++count < 5) {
+				last_time = jiffies;
+				printk("ia64_handle_irq: DANGER: less than "
+				       "1KB of free stack space!!\n"
+				       "(bsp=0x%lx, sp=%lx)\n", bsp, sp);
+			}
 		}
 	}
+#endif /* IRQ_DEBUG */
 
 	/*
-	 * Always set TPR to limit maximum interrupt nesting
-	 * depth to 16 (without this, it would be ~240, which
-	 * could easily lead to kernel stack overflows.
+	 * Always set TPR to limit maximum interrupt nesting depth to
+	 * 16 (without this, it would be ~240, which could easily lead
+	 * to kernel stack overflows).
 	 */
 	saved_tpr = ia64_get_tpr();
 	ia64_srlz_d();
 	do {
-		ia64_set_tpr(vector);
-		ia64_srlz_d();
-
-		/*
-		 * The interrupt is now said to be in service
-		 */
 		if (vector >= NR_IRQS) {
 			printk("handle_irq: invalid vector %lu\n", vector);
-			goto out;
+			ia64_set_tpr(saved_tpr);
+			ia64_srlz_d();
+			return;
 		}
+		ia64_set_tpr(vector);
+		ia64_srlz_d();
+
 		do_IRQ(vector, regs);
 
+		/*
+		 * Disable interrupts and send EOI:
+		 */
+		local_irq_disable();
+		ia64_set_tpr(saved_tpr);
+		ia64_eoi();
 #ifdef CONFIG_ITANIUM_A1_SPECIFIC
 		break;
 #endif
 		vector = ia64_get_ivr();
 	} while (vector != IA64_SPURIOUS_INT);
-  out:
-# ifndef CONFIG_SMP
-	max_prio = prev_prio;
-# endif /* !CONFIG_SMP */
-
-	local_irq_disable();
-	ia64_srlz_d();
-	ia64_set_tpr(saved_tpr);
-	ia64_srlz_d();
 }
 
 #ifdef CONFIG_SMP
diff -urN linux-davidm/arch/ia64/kernel/ivt.S linux-2.3.99-pre6-lia/arch/ia64/kernel/ivt.S
--- linux-davidm/arch/ia64/kernel/ivt.S	Thu May 25 23:22:10 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/ivt.S	Thu May 25 22:59:08 2000
@@ -170,9 +170,31 @@
 	 * The ITLB basically does the same as the VHPT handler except
 	 * that we always insert exactly one instruction TLB entry.
 	 */
+#if 0
+	/*
+	 * This code works, but I don't want to enable it until I have numbers
+	 * that prove this to be a win.
+	 */
+	mov r31=pr				// save predicates
+	;;
+	thash r17=r16				// compute virtual address of L3 PTE
+	;;
+	ld8.s r18=[r17]				// try to read L3 PTE
+	;;
+	tnat.nz p6,p0=r18			// did read succeed?
+(p6)	br.cond.spnt.many 1f
+	;;
+	itc.i r18
+	;;
+	mov pr=r31,-1
+	rfi
+
+1:	rsm psr.dt				// use physical addressing for data
+#else
 	mov r16=cr.ifa				// get address that caused the TLB miss
 	;;
 	rsm psr.dt				// use physical addressing for data
+#endif
 	mov r31=pr				// save the predicate registers
 	mov r19=ar.k7				// get page table base address
 	shl r21=r16,3				// shift bit 60 into sign bit
@@ -222,9 +244,31 @@
 	 * that we always insert exactly one data TLB entry.
 	 */
 	mov r16=cr.ifa				// get address that caused the TLB miss
+#if 0
+	/*
+	 * This code works, but I don't want to enable it until I have numbers
+	 * that prove this to be a win.
+	 */
+	mov r31=pr				// save predicates
+	;;
+	thash r17=r16				// compute virtual address of L3 PTE
+	;;
+	ld8.s r18=[r17]				// try to read L3 PTE
+	;;
+	tnat.nz p6,p0=r18			// did read succeed?
+(p6)	br.cond.spnt.many 1f
 	;;
+	itc.d r18
+	;;
+	mov pr=r31,-1
+	rfi
+
+1:	rsm psr.dt				// use physical addressing for data
+#else
 	rsm psr.dt				// use physical addressing for data
 	mov r31=pr				// save the predicate registers
+	;;
+#endif
 	mov r19=ar.k7				// get page table base address
 	shl r21=r16,3				// shift bit 60 into sign bit
 	shr.u r17=r16,61			// get the region number into r17
@@ -265,37 +309,6 @@
 	mov pr=r31,-1				// restore predicate registers
 	rfi
 
-	//-----------------------------------------------------------------------------------
-	// call do_page_fault (predicates are in r31, psr.dt is off, r16 is faulting address)
-page_fault:
-	SAVE_MIN_WITH_COVER
-	//
-	// Copy control registers to temporary registers, then turn on psr bits,
-	// then copy the temporary regs to the output regs.  We have to do this
-	// because the "alloc" can cause a mandatory store which could lead to
-	// an "Alt DTLB" fault which we can handle only if psr.ic is on.
-	//
-	mov r8=cr.ifa
-	mov r9=cr.isr
-	adds r3=8,r2				// set up second base pointer
-	;;
-	ssm psr.ic | psr.dt
-	;;
-	srlz.i					// guarantee that interrupt collection is enabled
-	;;
-(p15)	ssm psr.i				// restore psr.i
-	movl r14=ia64_leave_kernel
-	;;
-	alloc r15=ar.pfs,0,0,3,0		// must be first in insn group
-	mov out0=r8
-	mov out1=r9
-	;;
-	SAVE_REST
-	mov rp=r14
-	;;
-	adds out2=16,r12			// out2 = pointer to pt_regs
-	br.call.sptk.few b6=ia64_do_page_fault	// ignore return address
-
 	.align 1024
 /////////////////////////////////////////////////////////////////////////////////////////
 // 0x0c00 Entry 3 (size 64 bundles) Alt ITLB (19)
@@ -330,6 +343,37 @@
 	itc.d r16		// insert the TLB entry
 	rfi
 
+	//-----------------------------------------------------------------------------------
+	// call do_page_fault (predicates are in r31, psr.dt is off, r16 is faulting address)
+page_fault:
+	SAVE_MIN_WITH_COVER
+	//
+	// Copy control registers to temporary registers, then turn on psr bits,
+	// then copy the temporary regs to the output regs.  We have to do this
+	// because the "alloc" can cause a mandatory store which could lead to
+	// an "Alt DTLB" fault which we can handle only if psr.ic is on.
+	//
+	mov r8=cr.ifa
+	mov r9=cr.isr
+	adds r3=8,r2				// set up second base pointer
+	;;
+	ssm psr.ic | psr.dt
+	;;
+	srlz.i					// guarantee that interrupt collection is enabled
+	;;
+(p15)	ssm psr.i				// restore psr.i
+	movl r14=ia64_leave_kernel
+	;;
+	alloc r15=ar.pfs,0,0,3,0		// must be first in insn group
+	mov out0=r8
+	mov out1=r9
+	;;
+	SAVE_REST
+	mov rp=r14
+	;;
+	adds out2=16,r12			// out2 = pointer to pt_regs
+	br.call.sptk.few b6=ia64_do_page_fault	// ignore return address
+
 	.align 1024
 /////////////////////////////////////////////////////////////////////////////////////////
 // 0x1400 Entry 5 (size 64 bundles) Data nested TLB (6,45)
@@ -338,7 +382,7 @@
 	// Access-bit, or Data Access-bit faults cause a nested fault because the
 	// dTLB entry for the virtual page table isn't present.  In such a case,
 	// we lookup the pte for the faulting address by walking the page table
-	// and return to the contination point passed in register r30.
+	// and return to the continuation point passed in register r30.
 	// In accessing the page tables, we don't need to check for NULL entries
 	// because if the page tables didn't map the faulting address, it would not
 	// be possible to receive one of the above faults.
@@ -486,7 +530,6 @@
 	;;
 	srlz.d			// ensure everyone knows psr.dt is off...
 	cmp.eq p0,p7=r16,r17	// is this a system call? (p7 <- false, if so)
-
 #if 1
 	// Allow syscalls via the old system call number for the time being.  This is
 	// so we can transition to the new syscall number in a relatively smooth
@@ -495,7 +538,6 @@
 	;;
 (p7)	cmp.eq.or.andcm p0,p7=r16,r17		// is this the old syscall number?
 #endif
-
 (p7)	br.cond.spnt.many non_syscall
 
 	SAVE_MIN				// uses r31; defines r2:
@@ -572,7 +614,6 @@
 	ssm psr.ic | psr.dt	// turn interrupt collection and data translation back on
 	;;
 	adds r3=8,r2		// set up second base pointer for SAVE_REST
-	cmp.eq pEOI,p0=r0,r0	// set pEOI flag so that ia64_leave_kernel writes cr.eoi
 	srlz.i			// ensure everybody knows psr.ic and psr.dt are back on
 	;;
 	SAVE_REST
@@ -640,14 +681,17 @@
 (p6)    br.call.dpnt.few b6=non_ia32_syscall
 
 	adds r14=IA64_PT_REGS_R8_OFFSET + 16,sp	// 16 byte hole per SW conventions
-
+	adds r15=IA64_PT_REGS_R1_OFFSET + 16,sp
+	;;
+	cmp.eq pSys,pNonSys=r0,r0 // set pSys=1, pNonSys=0
+	st8 [r15]=r8		// save original EAX in r1 (IA32 procs don't use the GP)
 	;;
 	alloc r15=ar.pfs,0,0,6,0	// must first in an insn group
 	;; 
 	ld4 r8=[r14],8          // r8 == EAX (syscall number)
-	mov r15=0xff
+	mov r15=190		// sys_vfork - last implemented system call
 	;;
-	cmp.ltu.unc p6,p7=r8,r15
+	cmp.leu.unc p6,p7=r8,r15
 	ld4 out1=[r14],8        // r9 == ecx
 	;;
 	ld4 out2=[r14],8         // r10 == edx
diff -urN linux-davidm/arch/ia64/kernel/minstate.h linux-2.3.99-pre6-lia/arch/ia64/kernel/minstate.h
--- linux-davidm/arch/ia64/kernel/minstate.h	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/minstate.h	Thu May 25 23:00:25 2000
@@ -69,7 +69,7 @@
 (p7)	mov rARBSPSTORE=ar.bspstore;			/* save ar.bspstore */			  \
 (p7)	dep rKRBS=-1,rKRBS,61,3;			/* compute kernel virtual addr of RBS */  \
 	;;											  \
-(pKern)	addl r1=-IA64_PT_REGS_SIZE,r1;		/* if in kernel mode, use sp (r12) */		  \
+(pKern)	addl r1=16-IA64_PT_REGS_SIZE,r1;	/* if in kernel mode, use sp (r12) */		  \
 (p7)	mov ar.bspstore=rKRBS;			/* switch to kernel RBS */			  \
 	;;											  \
 (p7)	mov r18=ar.bsp;										  \
@@ -101,7 +101,6 @@
 	;;											  \
 	st8 [r16]=r18,16;	/* save ar.rsc value for "loadrs" */				  \
 	st8.spill [r17]=rR1,16;	/* save original r1 */						  \
-	cmp.ne pEOI,p0=r0,r0	/* clear pEOI by default */					  \
 	;;											  \
 .mem.offset 0,0;	st8.spill [r16]=r2,16;							  \
 .mem.offset 8,0;	st8.spill [r17]=r3,16;							  \
diff -urN linux-davidm/arch/ia64/kernel/pal.S linux-2.3.99-pre6-lia/arch/ia64/kernel/pal.S
--- linux-davidm/arch/ia64/kernel/pal.S	Sun Feb  6 18:42:40 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/pal.S	Thu May 25 23:00:37 2000
@@ -4,9 +4,11 @@
  *
  * Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
  * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
- * Copyright (C) 1999 David Mosberger <davidm@hpl.hp.com>
+ * Copyright (C) 1999-2000 David Mosberger <davidm@hpl.hp.com>
  */
 
+#include <asm/asmmacro.h>
+
 	.text
 	.psr abi64
 	.psr lsb
@@ -24,29 +26,23 @@
  *
  * in0		Address of the PAL entry point (text address, NOT a function descriptor).
  */
-	.align 16
-	.global ia64_pal_handler_init
-	.proc ia64_pal_handler_init
-ia64_pal_handler_init:
+GLOBAL_ENTRY(ia64_pal_handler_init)
 	alloc r3=ar.pfs,1,0,0,0
 	movl r2=pal_entry_point
 	;;
 	st8 [r2]=in0
 	br.ret.sptk.few rp
-	
-	.endp ia64_pal_handler_init	
+END(ia64_pal_handler_init)
 
 /*
  * Default PAL call handler.  This needs to be coded in assembly because it uses
  * the static calling convention, i.e., the RSE may not be used and calls are
  * done via "br.cond" (not "br.call").
  */
-	.align 16
-	.global ia64_pal_default_handler
-	.proc ia64_pal_default_handler
-ia64_pal_default_handler:
+GLOBAL_ENTRY(ia64_pal_default_handler)
 	mov r8=-1
 	br.cond.sptk.few rp
+END(ia64_pal_default_handler)
 
 /*
  * Make a PAL call using the static calling convention.
@@ -56,44 +52,23 @@
  * in2 - in4   Remaning PAL arguments
  *
  */
-
-#ifdef __GCC_MULTIREG_RETVALS__
-# define arg0	in0
-# define arg1	in1
-# define arg2	in2
-# define arg3	in3
-# define arg4	in4
-#else
-# define arg0	in1
-# define arg1	in2
-# define arg2	in3
-# define arg3	in4
-# define arg4	in5
-#endif
-
-	.text
-	.psr abi64
-	.psr lsb
-	.lsb
-
-	.align 16
-	.global	ia64_pal_call_static
-	.proc ia64_pal_call_static
-ia64_pal_call_static:
-	alloc	loc0 = ar.pfs,6,90,0,0
+GLOBAL_ENTRY(ia64_pal_call_static)
+	UNW(.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(6))
+	alloc	loc1 = ar.pfs,6,90,0,0
 	movl	loc2 = pal_entry_point
 1:	{
-	  mov	r28 = arg0
-	  mov	r29 = arg1
+	  mov	r28 = in0
+	  mov	r29 = in1
 	  mov	r8 = ip
 	}
 	;;
 	ld8	loc2 = [loc2]		// loc2 <- entry point
-	mov	r30 = arg2
-	mov	r31 = arg3
+	mov	r30 = in2
+	mov	r31 = in3
 	;;
 	mov	loc3 = psr
-	mov	loc1 = rp
+	mov	loc0 = rp
+	.body
 	adds	r8 = .ret0-1b,r8
 	;; 
 	rsm	psr.i
@@ -102,18 +77,9 @@
 	;; 
 	br.cond.sptk.few b7
 .ret0:	mov	psr.l = loc3
-#ifndef __GCC_MULTIREG_RETVALS__
-	st8	[in0] = r8, 8
-	;;
-	st8	[in0] = r9, 8 
-	;;
-	st8	[in0] = r10, 8
-	;;
-	st8	[in0] = r11, 8
-#endif
-	mov	ar.pfs = loc0
-	mov	rp = loc1
+	mov	ar.pfs = loc1
+	mov	rp = loc0
 	;;
 	srlz.d				// seralize restoration of psr.l
 	br.ret.sptk.few	b0
-	.endp ia64_pal_call_static
+END(ia64_pal_call_static)
diff -urN linux-davidm/arch/ia64/kernel/pci.c linux-2.3.99-pre6-lia/arch/ia64/kernel/pci.c
--- linux-davidm/arch/ia64/kernel/pci.c	Fri Mar 10 15:24:02 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/pci.c	Wed May 10 10:33:26 2000
@@ -133,7 +133,7 @@
  * Initialization. Uses the SAL interface
  */
 
-#define PCI_BUSSES_TO_SCAN 2	/* On "real" ;) hardware this will be 255 */
+#define PCI_BUSSES_TO_SCAN 255
 
 void __init 
 pcibios_init(void)
diff -urN linux-davidm/arch/ia64/kernel/process.c linux-2.3.99-pre6-lia/arch/ia64/kernel/process.c
--- linux-davidm/arch/ia64/kernel/process.c	Thu May 25 23:22:10 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/process.c	Thu May 25 23:01:11 2000
@@ -98,6 +98,7 @@
 		if (pm_idle)
 			(*pm_idle)();
 #ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
+		local_irq_disable();
 		{
 			u64 itc, itm;
 
@@ -106,15 +107,40 @@
 			if (time_after(itc, itm)) {
 				extern void ia64_reset_itm (void);
 
-				printk("cpu_idle: ITM in past, resetting it (itc=%lx,itm=%lx:%lums)...\n",
+				printk("cpu_idle: ITM in past (itc=%lx,itm=%lx:%lums)\n",
 				       itc, itm, (itc - itm)/500000);
 				ia64_reset_itm();
 			}
 		}
+		local_irq_enable();
 #endif
 	}
 }
 
+void
+ia64_save_extra (struct task_struct *task)
+{
+	extern void ia64_save_debug_regs (unsigned long *save_area);
+	extern void ia32_save_state (struct thread_struct *thread);
+
+	if ((task->thread.flags & IA64_THREAD_DBG_VALID) != 0)
+		ia64_save_debug_regs(&task->thread.dbr[0]);
+	if (IS_IA32_PROCESS(ia64_task_regs(task)))
+		ia32_save_state(&task->thread);
+}
+
+void
+ia64_load_extra (struct task_struct *task)
+{
+	extern void ia64_load_debug_regs (unsigned long *save_area);
+	extern void ia32_load_state (struct thread_struct *thread);
+
+	if ((task->thread.flags & IA64_THREAD_DBG_VALID) != 0)
+		ia64_load_debug_regs(&task->thread.dbr[0]);
+	if (IS_IA32_PROCESS(ia64_task_regs(task)))
+		ia32_load_state(&task->thread);
+}
+
 /*
  * Copy the state of an ia-64 thread.
  *
@@ -391,7 +417,7 @@
 unsigned long
 get_wchan (struct task_struct *p)
 {
-	struct ia64_frame_info info;
+	struct unw_frame_info info;
 	unsigned long ip;
 	int count = 0;
 	/*
@@ -410,11 +436,11 @@
 	 * gracefully if the process wasn't really blocked after all.
 	 * --davidm 99/12/15
 	 */
-	ia64_unwind_init_from_blocked_task(&info, p);
+	unw_init_from_blocked_task(&info, p);
 	do {
-		if (ia64_unwind_to_previous_frame(&info) < 0)
+		if (unw_unwind(&info) < 0)
 			return 0;
-		ip = ia64_unwind_get_ip(&info);
+		ip = unw_get_ip(&info);
 		if (ip < first_sched || ip >= last_sched)
 			return ip;
 	} while (count++ < 16);
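
get_wchan() above shows the usage pattern for the renamed unwinder
interface.  For illustration, a backtrace dump of a blocked task would
follow the same unw_init_from_blocked_task/unw_get_ip/unw_unwind loop (just
a sketch; it assumes the unw_* declarations are visible, e.g. via
<asm/unwind.h>, and the iteration cap is arbitrary):

	/* sketch: walk and print the frames of a blocked task */
	static void
	show_blocked_task_backtrace (struct task_struct *task)
	{
		struct unw_frame_info info;
		unsigned long ip;
		int count = 0;

		unw_init_from_blocked_task(&info, task);
		do {
			ip = unw_get_ip(&info);
			if (!ip)
				break;
			printk("  [<%016lx>]\n", ip);
		} while (unw_unwind(&info) >= 0 && ++count < 32);
	}
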
diff -urN linux-davidm/arch/ia64/kernel/ptrace.c linux-2.3.99-pre6-lia/arch/ia64/kernel/ptrace.c
--- linux-davidm/arch/ia64/kernel/ptrace.c	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/ptrace.c	Thu May 25 23:01:27 2000
@@ -30,7 +30,8 @@
  *	dd (data debug fault disable; one bit)
  *	ri (restart instruction; two bits)
  */
-#define CR_IPSR_CHANGE_MASK 0x06a00100003eUL
+#define IPSR_WRITE_MASK	0x000006a00100003eUL
+#define IPSR_READ_MASK	IPSR_WRITE_MASK
 
 /*
  * Collect the NaT bits for r1-r31 from sw->caller_unat and
@@ -465,15 +466,138 @@
 	}
 }
 
+static int
+access_uarea (struct task_struct *child, unsigned long addr, unsigned long *data, int write_access)
+{
+	unsigned long *ptr, *rbs, *bspstore, ndirty, regnum;
+	struct switch_stack *sw;
+	struct pt_regs *pt;
+
+	if ((addr & 0x7) != 0)
+		return -1;
+
+	if (addr < PT_F127+16) {
+		/* accessing fph */
+		sync_fph(child);
+		ptr = (unsigned long *) ((unsigned long) &child->thread.fph + addr);
+	} else if (addr < PT_F9+16) {
+		/* accessing switch_stack or pt_regs: */
+		pt = ia64_task_regs(child);
+		sw = (struct switch_stack *) pt - 1;
+
+		switch (addr) {
+		      case PT_AR_BSP:
+			if (write_access)
+				/* FIXME? Account for lack of ``cover'' in the syscall case */
+				return sync_kernel_register_backing_store(child, *data, 1);
+			else {
+				rbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
+				bspstore = (unsigned long *) pt->ar_bspstore;
+				ndirty = ia64_rse_num_regs(rbs, rbs + (pt->loadrs >> 19));
+
+				/*
+				 * If we're in a system call, no ``cover'' was done.  So to
+				 * make things uniform, we'll add the appropriate displacement
+				 * onto bsp if we're in a system call.
+				 */
+				if (!(pt->cr_ifs & (1UL << 63)))
+					ndirty += sw->ar_pfs & 0x7f;
+				*data = (unsigned long) ia64_rse_skip_regs(bspstore, ndirty);
+				return 0;
+			}
+
+		      case PT_CFM:
+			if (write_access) {
+				pt = ia64_task_regs(child);
+				sw = (struct switch_stack *) pt - 1;
+
+				if (pt->cr_ifs & (1UL << 63))
+					pt->cr_ifs = ((pt->cr_ifs & ~0x3fffffffffUL)
+						      | (*data & 0x3fffffffffUL));
+				else
+					sw->ar_pfs = ((sw->ar_pfs & ~0x3fffffffffUL)
+						      | (*data & 0x3fffffffffUL));
+				return 0;
+			} else {
+				if ((pt->cr_ifs & (1UL << 63)) == 0)
+					*data = sw->ar_pfs;
+				else
+					/* return only the CFM */
+					*data = pt->cr_ifs & 0x3fffffffffUL;
+				return 0;
+			}
+
+		      case PT_CR_IPSR:
+			if (write_access)
+				pt->cr_ipsr = ((*data & IPSR_WRITE_MASK)
+					       | (pt->cr_ipsr & ~IPSR_WRITE_MASK));
+			else
+				*data = (pt->cr_ipsr & IPSR_READ_MASK);
+			return 0;
+
+		      case PT_R1: case PT_R2: case PT_R3:
+		      case PT_R4: case PT_R5: case PT_R6: case PT_R7:
+		      case PT_R8: case PT_R9: case PT_R10: case PT_R11:
+		      case PT_R12: case PT_R13: case PT_R14: case PT_R15:
+		      case PT_R16: case PT_R17: case PT_R18: case PT_R19:
+		      case PT_R20: case PT_R21: case PT_R22: case PT_R23:
+		      case PT_R24: case PT_R25: case PT_R26: case PT_R27:
+		      case PT_R28: case PT_R29: case PT_R30: case PT_R31:
+		      case PT_B0: case PT_B1: case PT_B2: case PT_B3:
+		      case PT_B4: case PT_B5: case PT_B6: case PT_B7:
+		      case PT_F2: case PT_F3:
+		      case PT_F4:  case PT_F5:  case PT_F6:  case PT_F7:
+		      case PT_F8:  case PT_F9:  case PT_F10: case PT_F11:
+		      case PT_F12: case PT_F13: case PT_F14: case PT_F15:
+		      case PT_F16: case PT_F17: case PT_F18: case PT_F19:
+		      case PT_F20: case PT_F21: case PT_F22: case PT_F23:
+		      case PT_F24: case PT_F25: case PT_F26: case PT_F27:
+		      case PT_F28: case PT_F29: case PT_F30: case PT_F31:
+		      case PT_AR_LC: case PT_AR_UNAT: case PT_AR_PFS: case PT_AR_RNAT:
+		      case PT_AR_CCV: case PT_AR_FPSR:
+		      case PT_CR_IIP: case PT_PR:
+			ptr = (unsigned long *) ((long) sw + addr - PT_PRI_UNAT);
+			break;
+
+		      default:
+			/* disallow accessing anything else... */
+			return -1;
+		}
+	} else {
+		/* access debug registers */
+
+		if (!(child->thread.flags & IA64_THREAD_DBG_VALID)) {
+			child->thread.flags |= IA64_THREAD_DBG_VALID;
+			memset(child->thread.dbr, 0, sizeof child->thread.dbr);
+			memset(child->thread.ibr, 0, sizeof child->thread.ibr);
+		}
+		if (addr >= PT_IBR) {
+			regnum = (addr - PT_IBR) >> 3;
+			ptr = &child->thread.ibr[0];
+		} else {
+			regnum = (addr - PT_DBR) >> 3;
+			ptr = &child->thread.dbr[0];
+		}
+
+		if (regnum >= 8)
+			return -1;
+
+		ptr += regnum;
+	}
+	if (write_access)
+		*ptr = *data;
+	else
+		*data = *ptr;
+	return 0;
+}
+
 asmlinkage long
 sys_ptrace (long request, pid_t pid, unsigned long addr, unsigned long data,
 	    long arg4, long arg5, long arg6, long arg7, long stack)
 {
 	struct pt_regs *regs = (struct pt_regs *) &stack;
-	struct switch_stack *child_stack;
-	struct pt_regs *child_regs;
 	struct task_struct *child;
-	unsigned long flags, regnum, *base;
+	unsigned long flags;
 	long ret;
 
 	lock_kernel();
@@ -558,139 +682,18 @@
 		goto out;
 
 	      case PTRACE_PEEKUSR:		/* read the word at addr in the USER area */
-		ret = -EIO;
-		if ((addr & 0x7) != 0)
+		if (access_uarea(child, addr, &data, 0) < 0) {
+			ret = -EIO;
 			goto out;
-
-		if (addr < PT_CALLER_UNAT) {
-			/* accessing fph */
-			sync_fph(child);
-			addr += (unsigned long) &child->thread.fph;
-			ret = *(unsigned long *) addr;
-		} else if (addr < PT_F9+16) {
-			/* accessing switch_stack or pt_regs: */
-			child_regs = ia64_task_regs(child);
-			child_stack = (struct switch_stack *) child_regs - 1;
-			ret = *(unsigned long *) ((long) child_stack + addr - PT_CALLER_UNAT);
-
-			if (addr == PT_AR_BSP) {
-				/* ret currently contains pt_regs.loadrs */
-				unsigned long *rbs, *bspstore, ndirty;
-
-				rbs = (unsigned long *) child + IA64_RBS_OFFSET/8;
-				bspstore = (unsigned long *) child_regs->ar_bspstore;
-				ndirty = ia64_rse_num_regs(rbs, rbs + (ret >> 19));
-				ret = (unsigned long) ia64_rse_skip_regs(bspstore, ndirty);
-
-				/*
-				 * If we're in a system call, no ``cover'' was done.  So
-				 * to make things uniform, we'll add the appropriate
-				 * displacement onto bsp if we're in a system call.
-				 *
-				 * Note: It may be better to leave the system call case
-				 * alone and subtract the amount of the cover for the
-				 * non-syscall case.  That way the reported bsp value
-				 * would actually be the correct bsp for the child
-				 * process.
-				 */
-				if (!(child_regs->cr_ifs & (1UL << 63))) {
-					ret = (unsigned long)
-						ia64_rse_skip_regs((unsigned long *) ret,
-						                   child_stack->ar_pfs & 0x7f);
-				}
-			} else if (addr == PT_CFM) {
-				/* ret currently contains pt_regs.cr_ifs */
-				if ((ret & (1UL << 63)) == 0)
-					ret = child_stack->ar_pfs;
-				ret &= 0x3fffffffffUL;		/* return only the CFM */
-			}
-		} else {
-			if (!(child->thread.flags & IA64_THREAD_DBG_VALID)) {
-				child->thread.flags |= IA64_THREAD_DBG_VALID;
-				memset(child->thread.dbr, 0, sizeof child->thread.dbr);
-				memset(child->thread.ibr, 0, sizeof child->thread.ibr);
-			}
-			if (addr >= PT_IBR) {
-				regnum = (addr - PT_IBR) >> 3;
-				base = &child->thread.ibr[0];
-			} else {
-				regnum = (addr - PT_DBR) >> 3;
-				base = &child->thread.dbr[0];
-			}
-			if (regnum >= 8)
-				goto out;
-			ret = base[regnum];
 		}
+		ret = data;
 		regs->r8 = 0;	/* ensure "ret" is not mistaken as an error code */
 		goto out;
 
 	      case PTRACE_POKEUSR:	      /* write the word at addr in the USER area */
-		ret = -EIO;
-		if ((addr & 0x7) != 0)
-			goto out;
-
-		if (addr < PT_CALLER_UNAT) {
-			/* accessing fph */
-			sync_fph(child);
-			addr += (unsigned long) &child->thread.fph;
-			*(unsigned long *) addr = data;
-		} else if (addr == PT_AR_BSPSTORE || addr == PT_CALLER_UNAT 
-			|| addr == PT_KERNEL_FPSR || addr == PT_K_B0 || addr == PT_K_AR_PFS 
-			|| (PT_K_AR_UNAT <= addr && addr <= PT_K_PR)) {
-			/*
-			 * Don't permit changes to certain registers.
-			 * 
-			 * We don't allow bspstore to be modified because doing
-			 * so would mess up any modifications to bsp.  (See
-			 * sync_kernel_register_backing_store for the details.)
-			 */
+		if (access_uarea(child, addr, &data, 1) < 0) {
+			ret = -EIO;
 			goto out;
-		} else if (addr == PT_AR_BSP) {
-			/* FIXME? Account for lack of ``cover'' in the syscall case */
-			ret = sync_kernel_register_backing_store(child, data, 1);
-			goto out;
-		} else if (addr == PT_CFM) {
-			child_regs = ia64_task_regs(child);
-			child_stack = (struct switch_stack *) child_regs - 1;
-
-			if (child_regs->cr_ifs & (1UL << 63)) {
-				child_regs->cr_ifs = (child_regs->cr_ifs & ~0x3fffffffffUL)
-				                   | (data & 0x3fffffffffUL);
-			} else {
-				child_stack->ar_pfs = (child_stack->ar_pfs & ~0x3fffffffffUL)
-				                    | (data & 0x3fffffffffUL);
-			}
-		} else if (addr < PT_F9+16) {
-			/* accessing switch_stack or pt_regs */
-			child_regs = ia64_task_regs(child);
-			child_stack = (struct switch_stack *) child_regs - 1;
-
-			if (addr == PT_CR_IPSR)
-				data = (data & CR_IPSR_CHANGE_MASK)
-				     | (child_regs->cr_ipsr & ~CR_IPSR_CHANGE_MASK);
-			
-			*(unsigned long *) ((long) child_stack + addr - PT_CALLER_UNAT) = data;
-		} else {
-			if (!(child->thread.flags & IA64_THREAD_DBG_VALID)) {
-				child->thread.flags |= IA64_THREAD_DBG_VALID;
-				memset(child->thread.dbr, 0, sizeof child->thread.dbr);
-				memset(child->thread.ibr, 0, sizeof child->thread.ibr);
-			}
-
-			if (addr >= PT_IBR) {
-				regnum = (addr - PT_IBR) >> 3;
-				base = &child->thread.ibr[0];
-			} else {
-				regnum = (addr - PT_DBR) >> 3;
-				base = &child->thread.dbr[0];
-			}
-			if (regnum >= 8)
-				goto out;
-			if (regnum & 1) {
-				/* force breakpoint to be effective only for user-level: */
-				data &= ~(0x7UL << 56);
-			}
-			base[regnum] = data;
 		}
 		ret = 0;
 		goto out;
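
With PTRACE_PEEKUSR/PTRACE_POKEUSR now funneled through access_uarea(), a
debugger can read or write any of the PT_* slots through one code path.  A
user-level sketch of reading the stopped child's cr.iip (assuming the PT_*
register offsets are provided by <asm/ptrace_offsets.h>):

	/* hypothetical user-level snippet; error handling kept minimal */
	#include <stdio.h>
	#include <errno.h>
	#include <sys/types.h>
	#include <sys/ptrace.h>
	#include <asm/ptrace_offsets.h>		/* PT_CR_IIP et al. */

	static long
	peek_cr_iip (pid_t pid)
	{
		long val;

		errno = 0;
		val = ptrace(PTRACE_PEEKUSER, pid, (void *) PT_CR_IIP, 0);
		if (val == -1 && errno != 0)
			perror("PTRACE_PEEKUSER");
		return val;	/* the word read via access_uarea() in the kernel */
	}
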
diff -urN linux-davidm/arch/ia64/kernel/sal_stub.S linux-2.3.99-pre6-lia/arch/ia64/kernel/sal_stub.S
--- linux-davidm/arch/ia64/kernel/sal_stub.S	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/sal_stub.S	Wed Dec 31 16:00:00 1969
@@ -1,118 +0,0 @@
-/*
- * Copyright (C) 1998-2000 Hewlett-Packard Co
- * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
- */
-#ifndef __GCC_MULTIREG_RETVALS__
-	/*
-	 * gcc currently does not conform to the ia-64 calling
-	 * convention as far as returning function values are
-	 * concerned.  Instead of returning values up to 32 bytes in
-	 * size in r8-r11, gcc returns any value bigger than a
-	 * doubleword via a structure that's allocated by the caller
-	 * and whose address is passed into the function.  Since
-	 * SAL_PROC returns values according to the calling
-	 * convention, this stub takes care of copying r8-r11 to the
-	 * place where gcc expects them.
-	 */
-	.text
-	.psr abi64
-	.psr lsb
-	.lsb
-
-	.align 16
-	.global ia64_sal_stub
-ia64_sal_stub:
-	/*
-	 * Sheesh, the Cygnus backend passes the pointer to a return value structure in
-	 * in0 whereas the HP backend passes it in r8.  Don't you hate those little
-	 * differences...
-	 */
-#ifdef GCC_RETVAL_POINTER_IN_R8
-	adds r2=-24,sp
-	adds sp=-48,sp
-	mov r14=rp
-	;;
-	st8	[r2]=r8,8	// save pointer to return value
-	addl	r3=@ltoff(ia64_sal),gp
-	;;
-	ld8	r3=[r3]
-	st8	[r2]=gp,8	// save global pointer
-	;;
-	ld8	r3=[r3]		// fetch the value of ia64_sal
-	st8	[r2]=r14	// save return pointer
-	;;
-	ld8	r2=[r3],8	// load function's entry point
-	;;
-	ld8	gp=[r3]		// load function's global pointer
-	;;
-	mov	b6=r2
-	br.call.sptk.few rp=b6
-.ret0:	adds	r2=24,sp
-	;;
-	ld8	r3=[r2],8	// restore pointer to return value
-	;;
-	ld8	gp=[r2],8	// restore global pointer
-	st8	[r3]=r8,8
-	;;
-	ld8	r14=[r2]	// restore return pointer
-	st8	[r3]=r9,8
-	;;
-	mov	rp=r14
-	st8	[r3]=r10,8
-	;;
-	st8	[r3]=r11,8
-	adds	sp=48,sp
-	br.sptk.few rp
-#else
-	/*
-	 * On input:
-	 *	in0 = pointer to return value structure
-	 *	in1 = index of SAL function to call
-	 *	in2..inN = remaining args to SAL call
-	 */
-	/*
-	 * We allocate one input and eight output register such that the br.call instruction
-	 * will rename in1-in7 to in0-in6---exactly what we want because SAL doesn't want to
-	 * see the pointer to the return value structure.
-	 */
-	alloc	r15=ar.pfs,1,0,8,0
-
-	adds	r2=-24,sp
-	adds	sp=-48,sp
-	mov	r14=rp
-	;;
-	st8	[r2]=r15,8	// save ar.pfs
-	addl	r3=@ltoff(ia64_sal),gp
-	;;
-	ld8	r3=[r3]		// get address of ia64_sal
-	st8	[r2]=gp,8	// save global pointer
-	;;
-	ld8	r3=[r3]		// get value of ia64_sal
-	st8	[r2]=r14,8	// save return address (rp)
-	;;
-	ld8	r2=[r3],8	// load function's entry point
-	;;
-	ld8	gp=[r3]		// load function's global pointer
-	mov	b6=r2
-	br.call.sptk.few rp=b6	// make SAL call
-.ret0:	adds	r2=24,sp
-	;;
-	ld8	r15=[r2],8	// restore ar.pfs
-	;;
-	ld8	gp=[r2],8	// restore global pointer
-	st8	[in0]=r8,8	// store 1. dword of return value
-	;;
-	ld8	r14=[r2]	// restore return address (rp)
-	st8	[in0]=r9,8	// store 2. dword of return value
-	;;
-	mov	rp=r14
-	st8	[in0]=r10,8	// store 3. dword of return value
-	;;
-	st8	[in0]=r11,8
-	adds	sp=48,sp	// pop stack frame
-	mov	ar.pfs=r15
-	br.ret.sptk.few rp
-#endif
-
-	.endp ia64_sal_stub
-#endif /* __GCC_MULTIREG_RETVALS__ */
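
With the stub deleted, it may help to spell out what it was papering
over.  A SAL call returns up to four 64-bit values which, per the ia-64
software conventions, come back in r8-r11; in C that corresponds to
returning a small structure by value, roughly (field names illustrative,
not the real sal.h declaration):

	struct sal_ret {
		long status;	/* r8  */
		long v0;	/* r9  */
		long v1;	/* r10 */
		long v2;	/* r11 */
	};

A conforming compiler returns such an aggregate directly in r8-r11;
older gcc instead passed a hidden pointer to caller-allocated memory,
which is exactly the copy the stub used to do after the br.call.  With a
toolchain that defines __GCC_MULTIREG_RETVALS__ the plain C call does
the right thing, which is presumably why the stub can go.
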
diff -urN linux-davidm/arch/ia64/kernel/setup.c linux-2.3.99-pre6-lia/arch/ia64/kernel/setup.c
--- linux-davidm/arch/ia64/kernel/setup.c	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/setup.c	Thu May 25 23:02:08 2000
@@ -276,29 +276,9 @@
 		cpuid.bits[i] = ia64_get_cpuid(i);
 	}
 
-#ifdef CONFIG_SMP
-	/*
-	 * XXX Instead of copying the ITC info from the bootstrap
-	 * processor, ia64_init_itm() should be done per CPU.  That
-	 * should get you the right info.  --davidm 1/24/00
-	 */
-	if (c != &cpu_data[bootstrap_processor]) {
-		memset(c, 0, sizeof(struct cpuinfo_ia64));
-		c->proc_freq = cpu_data[bootstrap_processor].proc_freq;
-		c->itc_freq = cpu_data[bootstrap_processor].itc_freq;
-		c->cyc_per_usec = cpu_data[bootstrap_processor].cyc_per_usec;
-		c->usec_per_cyc = cpu_data[bootstrap_processor].usec_per_cyc;
-	}
-#else
 	memset(c, 0, sizeof(struct cpuinfo_ia64));
-#endif
 
 	memcpy(c->vendor, cpuid.field.vendor, 16);
-#ifdef CONFIG_IA64_SOFTSDV_HACKS
-        /* BUG: SoftSDV doesn't support the cpuid registers. */
-	if (c->vendor[0] == '\0') 
-		memcpy(c->vendor, "Intel", 6);
-#endif                                   
 	c->ppn = cpuid.field.ppn;
 	c->number = cpuid.field.number;
 	c->revision = cpuid.field.revision;
@@ -306,8 +286,11 @@
 	c->family = cpuid.field.family;
 	c->archrev = cpuid.field.archrev;
 	c->features = cpuid.field.features;
-#ifdef CONFIG_SMP
-	c->loops_per_sec = loops_per_sec;
+
+#ifdef CONFIG_IA64_SOFTSDV_HACKS
+	/* BUG: SoftSDV doesn't support the cpuid registers. */
+	if (c->vendor[0] == '\0') 
+		memcpy(c->vendor, "Intel", 6);
 #endif
 }
 
diff -urN linux-davidm/arch/ia64/kernel/signal.c linux-2.3.99-pre6-lia/arch/ia64/kernel/signal.c
--- linux-davidm/arch/ia64/kernel/signal.c	Thu May 25 23:22:10 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/signal.c	Thu May 25 23:02:20 2000
@@ -390,6 +390,7 @@
 	struct k_sigaction *ka;
 	siginfo_t info;
 	long restart = in_syscall;
+	long errno = pt->r8;
 
 	/*
 	 * In the ia64_leave_kernel code path, we want the common case
@@ -402,6 +403,16 @@
 	if (!oldset)
 		oldset = &current->blocked;
 
+#ifdef CONFIG_IA32_SUPPORT
+	if (IS_IA32_PROCESS(pt)) {
+		if (in_syscall) {
+			if (errno >= 0)
+				restart = 0;
+			else
+				errno = -errno;
+		}
+	} else
+#endif
 	if (pt->r10 != -1) {
 		/*
 		 * A system calls has to be restarted only if one of
@@ -511,7 +522,7 @@
 		}
 
 		if (restart) {
-			switch (pt->r8) {
+			switch (errno) {
 			      case ERESTARTSYS:
 				if ((ka->sa.sa_flags & SA_RESTART) == 0) {
 			      case ERESTARTNOHAND:
@@ -520,6 +531,12 @@
 					break;
 				}
 			      case ERESTARTNOINTR:
+#ifdef CONFIG_IA32_SUPPORT
+				if (IS_IA32_PROCESS(pt)) {
+					pt->r8 = pt->r1;
+					pt->cr_iip -= 2;
+				} else
+#endif
 				ia64_decrement_ip(pt);
 			}
 		}
@@ -534,9 +551,13 @@
 	/* Did we come from a system call? */
 	if (restart) {
 		/* Restart the system call - no handlers present */
-		if (pt->r8 == ERESTARTNOHAND ||
-		    pt->r8 == ERESTARTSYS ||
-		    pt->r8 == ERESTARTNOINTR) {
+		if (errno == ERESTARTNOHAND || errno == ERESTARTSYS || errno == ERESTARTNOINTR) {
+#ifdef CONFIG_IA32_SUPPORT
+			if (IS_IA32_PROCESS(pt)) {
+				pt->r8 = pt->r1;
+				pt->cr_iip -= 2;
+			} else
+#endif
 			/*
 			 * Note: the syscall number is in r15 which is
 			 * saved in pt_regs so all we need to do here
diff -urN linux-davidm/arch/ia64/kernel/smp.c linux-2.3.99-pre6-lia/arch/ia64/kernel/smp.c
--- linux-davidm/arch/ia64/kernel/smp.c	Thu May 25 23:22:10 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/smp.c	Thu May 25 23:29:32 2000
@@ -21,11 +21,13 @@
 #include <linux/smp.h>
 #include <linux/kernel_stat.h>
 #include <linux/mm.h>
+#include <linux/delay.h>
 
 #include <asm/atomic.h>
 #include <asm/bitops.h>
 #include <asm/current.h>
 #include <asm/delay.h>
+
 #include <asm/io.h>
 #include <asm/irq.h>
 #include <asm/page.h>
@@ -39,6 +41,7 @@
 
 extern int cpu_idle(void * unused);
 extern void _start(void);
+extern void machine_halt(void);
 
 extern int cpu_now_booting;			     /* Used by head.S to find idle task */
 extern volatile unsigned long cpu_online_map;	     /* Bitmap of available cpu's */
@@ -66,7 +69,7 @@
 	atomic_t unstarted_count;
 	atomic_t unfinished_count;
 };
-static struct smp_call_struct *smp_call_function_data;
+static volatile struct smp_call_struct *smp_call_function_data;
 
 #ifdef	CONFIG_ITANIUM_A1_SPECIFIC
 extern spinlock_t ivr_read_lock;
@@ -75,6 +78,9 @@
 #define IPI_RESCHEDULE	        0
 #define IPI_CALL_FUNC	        1
 #define IPI_CPU_STOP	        2
+#ifndef CONFIG_ITANIUM_PTCG
+# define IPI_FLUSH_TLB		3
+#endif	/* !CONFIG_ITANIUM_PTCG */
 
 /*
  *	Setup routine for controlling SMP activation
@@ -126,6 +132,29 @@
 
 }
 
+static inline void
+pointer_unlock(void **lock, void **data)
+{
+	*data = *lock;
+	*lock = NULL;
+}
+
+static inline int
+pointer_lock(void *lock, void *data, int retry)
+{
+ again:
+	if (cmpxchg_acq(lock, 0, data) == 0)
+		return 0;
+
+	if (!retry)
+		return -EBUSY;
+
+	while (*(void**) lock)
+		;
+
+	goto again;
+}
+
 void
 handle_IPI(int irq, void *dev_id, struct pt_regs *regs) 
 {
@@ -160,7 +189,8 @@
 				void *info;
 				int wait;
 
-				data = smp_call_function_data;
+				/* release the 'pointer lock' */
+				pointer_unlock((void **) &smp_call_function_data, (void **) &data);
 				func = data->func;
 				info = data->info;
 				wait = data->wait;
@@ -182,6 +212,48 @@
 			halt_processor();
 			break;
 
+#ifndef CONFIG_ITANIUM_PTCG
+		case IPI_FLUSH_TLB:
+                {
+			extern unsigned long flush_start, flush_end, flush_nbits, flush_rid;
+			extern atomic_t flush_cpu_count;
+			unsigned long saved_rid = ia64_get_rr(flush_start);
+			unsigned long end = flush_end;
+			unsigned long start = flush_start;
+			unsigned long nbits = flush_nbits;
+
+			/*
+			 * Current CPU may be running with different
+			 * RID so we need to reload the RID of flushed
+			 * address.  Purging the translation also
+			 * needs ALAT invalidation; we do not need
+			 * "invala" here since it is done in
+			 * ia64_leave_kernel.
+			 */
+			ia64_srlz_d();
+			if (saved_rid != flush_rid) {
+				ia64_set_rr(flush_start, flush_rid);
+				ia64_srlz_d();
+			}
+			
+			do {
+				/*
+				 * Purge local TLB entries.
+				 */
+				__asm__ __volatile__ ("ptc.l %0,%1" ::
+						      "r"(start), "r"(nbits<<2) : "memory");
+				start += (1UL << nbits);
+			} while (start < end);
+
+			if (saved_rid != flush_rid) {
+				ia64_set_rr(flush_start, saved_rid);
+				ia64_srlz_d();
+			}
+			atomic_dec(&flush_cpu_count);
+			break;
+		}
+#endif	/* !CONFIG_ITANIUM_PTCG */
+
 		default:
 			printk(KERN_CRIT "Unknown IPI on CPU %d: %lu\n", this_cpu, which);
 			break;
@@ -199,7 +271,7 @@
 	if (dest_cpu == -1) 
                 return;
         
-        ipi_op[dest_cpu] |= (1 << op);
+	set_bit(op, &ipi_op[dest_cpu]);
 	ipi_send(dest_cpu, IPI_IRQ, IA64_IPI_DM_INT, 0);
 }
 
@@ -243,6 +315,14 @@
 	send_IPI_allbutself(IPI_CPU_STOP);
 }
 
+#ifndef CONFIG_ITANIUM_PTCG
+void
+smp_send_flush_tlb(void)
+{
+	send_IPI_allbutself(IPI_FLUSH_TLB);
+}
+#endif	/* !CONFIG_ITANIUM_PTCG */
+
 /*
  * Run a function on all other CPUs.
  *  <func>	The function to run. This must be fast and non-blocking.
@@ -268,30 +348,8 @@
 	atomic_set(&data.unstarted_count, smp_num_cpus - 1);
 	atomic_set(&data.unfinished_count, smp_num_cpus - 1);
 
-	if (retry) {
-		while (1) {
-			if (smp_call_function_data) {
-				schedule ();  /*  Give a mate a go  */
-				continue;
-			}
-			spin_lock (&lock);
-			if (smp_call_function_data) {
-				spin_unlock (&lock);  /*  Bad luck  */
-				continue;
-			}
-			/*  Mine, all mine!  */
-			break;
-		}
-	}
-	else {
-		if (smp_call_function_data) 
-			return -EBUSY;
-		spin_lock (&lock);
-		if (smp_call_function_data) {
-			spin_unlock (&lock);
-			return -EBUSY;
-		}
-	}
+	if (pointer_lock(&smp_call_function_data, &data, retry))
+		return -EBUSY;
 
 	smp_call_function_data = &data;
 	spin_unlock (&lock);
@@ -395,6 +453,23 @@
 	identify_cpu(c);
 }
 
+static inline void __init
+smp_calibrate_delay(int cpuid)
+{
+	struct cpuinfo_ia64 *c = &cpu_data[cpuid];
+#if 0
+	unsigned long old = loops_per_sec;
+	extern void calibrate_delay(void);
+	
+	loops_per_sec = 0;
+	calibrate_delay();
+	c->loops_per_sec = loops_per_sec;
+	loops_per_sec = old;
+#else
+	c->loops_per_sec = loops_per_sec;
+#endif
+}
+
 /* 
  * SAL shoves the AP's here when we start them.  Physical mode, no kernel TR, 
  * no RRs set, better than even chance that psr is bogus.  Fix all that and 
@@ -453,27 +528,27 @@
 	cpu_init();
 	__flush_tlb_all();
 
+	normal_xtp();
+
 	smp_store_cpu_info(smp_processor_id());
 	smp_setup_percpu_timer(smp_processor_id());
 
-	if (test_and_set_bit(smp_processor_id(), &cpu_online_map)) {
-		printk("CPU#%d already initialized!\n", smp_processor_id());
-		machine_halt();
-	}  
-	while (!smp_threads_ready) 
-		mb();
-
-	normal_xtp();
-
 	/* setup the CPU local timer tick */
-	ia64_cpu_local_tick();
+	ia64_init_itm();
 
 	/* Disable all local interrupts */
 	ia64_set_lrr0(0, 1);	
 	ia64_set_lrr1(0, 1);	
 
-	__sti();		/* Interrupts have been off till now. */
+	if (test_and_set_bit(smp_processor_id(), &cpu_online_map)) {
+		printk("CPU#%d already initialized!\n", smp_processor_id());
+		machine_halt();
+	}  
+	while (!smp_threads_ready) 
+		mb();
 
+	local_irq_enable();		/* Interrupts have been off until now */
+	smp_calibrate_delay(smp_processor_id());
 	printk("SMP: CPU %d starting idle loop\n", smp_processor_id());
 
 	cpu_idle(NULL);
@@ -593,6 +668,7 @@
 
 	/* And generate an entry in cpu_data */
 	smp_store_cpu_info(bootstrap_processor);
+	smp_calibrate_delay(smp_processor_id());
 #if 0
 	smp_tune_scheduling();
 #endif
@@ -709,3 +785,4 @@
 	}
 
 }
+
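
For anyone reading the new smp_call_function() path: the open-coded
spinlock dance is gone and the smp_call_function_data pointer itself is
now the lock (NULL means free, claimed with an acquire-semantics
cmpxchg).  handle_IPI() snapshots the pointer and clears the slot via
pointer_unlock() before using its local copy.  A portable sketch of the
same idiom, with a GCC builtin standing in for the kernel's cmpxchg_acq
(simplified; no scheduling while spinning):

	/* returns 0 once *lock has been claimed with `data', -1 if busy and !retry */
	static int
	pointer_lock_sketch (void **lock, void *data, int retry)
	{
		for (;;) {
			/* atomically install `data' iff the slot is currently NULL */
			if (__sync_bool_compare_and_swap(lock, (void *) 0, data))
				return 0;
			if (!retry)
				return -1;
			/* spin until the current owner clears the pointer */
			while (*(void * volatile *) lock)
				;
		}
	}
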
diff -urN linux-davidm/arch/ia64/kernel/sys_ia64.c linux-2.3.99-pre6-lia/arch/ia64/kernel/sys_ia64.c
--- linux-davidm/arch/ia64/kernel/sys_ia64.c	Sun Feb 13 10:30:38 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/sys_ia64.c	Mon May 15 11:29:56 2000
@@ -196,6 +196,20 @@
         return -ENOSYS;
 }
 
+asmlinkage unsigned long
+ia64_create_module (const char *name_user, size_t size, long arg2, long arg3,
+		    long arg4, long arg5, long arg6, long arg7, long stack)
+{
+	extern unsigned long sys_create_module (const char *, size_t);
+	struct pt_regs *regs = (struct pt_regs *) &stack;
+	unsigned long   addr;
+
+	addr = sys_create_module (name_user, size);
+	if (!IS_ERR(addr))
+		regs->r8 = 0;	/* ensure large addresses are not mistaken as failures... */
+	return addr;
+}
+
 #ifndef CONFIG_PCI
 
 asmlinkage long
@@ -211,6 +225,5 @@
 {
 	return -ENOSYS;
 }
-
 
 #endif /* CONFIG_PCI */
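
The IS_ERR() check in ia64_create_module() relies on the usual kernel
convention that pointer-returning functions encode failure as a small
negative number cast to a pointer, i.e. a value in the last few thousand
bytes of the address space; clearing the saved r8 on success is, per the
comment, meant to keep a legitimately large module address from being
taken for such an error value on the way back out of the syscall.  A
simplified sketch of the convention (not the kernel's actual macro, and
the exact bound has varied between kernel versions):

	static inline int
	is_err_sketch (unsigned long val)
	{
		/* error codes occupy the topmost values of the address space */
		return val >= (unsigned long) -4095L;
	}
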
diff -urN linux-davidm/arch/ia64/kernel/time.c linux-2.3.99-pre6-lia/arch/ia64/kernel/time.c
--- linux-davidm/arch/ia64/kernel/time.c	Thu May 25 23:22:10 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/time.c	Thu May 25 23:04:08 2000
@@ -222,14 +222,13 @@
 
 #ifdef CONFIG_ITANIUM_ASTEP_SPECIFIC
 
+/*
+ * Interrupts must be disabled before calling this routine.
+ */
 void
 ia64_reset_itm (void)
 {
-	unsigned long flags;
-
-	local_irq_save(flags);
 	timer_interrupt(0, 0, ia64_task_regs(current));
-	local_irq_restore(flags);
 }
 
 #endif /* CONFIG_ITANIUM_ASTEP_SPECIFIC */
@@ -277,25 +276,7 @@
 		itc_ratio.num = 3;
 		itc_ratio.den = 1;
 	}
-#if defined(CONFIG_IA64_LION_HACKS)
-	/* Our Lion currently returns base freq 104.857MHz, which
-	   ain't right (it really is 100MHz).  */
-	printk("SAL/PAL returned: base-freq=%lu, itc-ratio=%lu/%lu, proc-ratio=%lu/%lu\n",
-	       platform_base_freq, itc_ratio.num, itc_ratio.den,
-	       proc_ratio.num, proc_ratio.den);
-	platform_base_freq = 100000000;
-#elif 0 && defined(CONFIG_IA64_BIGSUR_HACKS)
-	/* BigSur with 991020 firmware returned itc-ratio=9/2 and base
-	   freq 75MHz, which wasn't right.  The 991119 firmware seems
-	   to return the right values, so this isn't necessary
-	   anymore... */
-	printk("SAL/PAL returned: base-freq=%lu, itc-ratio=%lu/%lu, proc-ratio=%lu/%lu\n",
-	       platform_base_freq, itc_ratio.num, itc_ratio.den,
-	       proc_ratio.num, proc_ratio.den);
-	platform_base_freq = 100000000;
-	proc_ratio.num = 5; proc_ratio.den = 1;
-	itc_ratio.num  = 5; itc_ratio.den  = 1;
-#elif defined(CONFIG_IA64_SOFTSDV_HACKS)
+#ifdef CONFIG_IA64_SOFTSDV_HACKS
 	platform_base_freq = 10000000;
 	proc_ratio.num = 4; proc_ratio.den = 1;
 	itc_ratio.num  = 4; itc_ratio.den  = 1;
@@ -313,8 +294,9 @@
 
         itc_freq = (platform_base_freq*itc_ratio.num)/itc_ratio.den;
         itm.delta = itc_freq / HZ;
-        printk("timer: base freq=%lu.%03luMHz, ITC ratio=%lu/%lu, ITC freq=%lu.%03luMHz\n",
-               platform_base_freq / 1000000, (platform_base_freq / 1000) % 1000,
+        printk("timer: CPU %d base freq=%lu.%03luMHz, ITC ratio=%lu/%lu, ITC freq=%lu.%03luMHz\n",
+	       smp_processor_id(),
+	       platform_base_freq / 1000000, (platform_base_freq / 1000) % 1000,
                itc_ratio.num, itc_ratio.den, itc_freq / 1000000, (itc_freq / 1000) % 1000);
 
 	my_cpu_data.proc_freq = (platform_base_freq*proc_ratio.num)/proc_ratio.den;
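
To make the arithmetic in ia64_init_itm() concrete: itc_freq is the
platform base frequency scaled by the ITC ratio, and itm.delta is the
number of ITC cycles each CPU adds to its interval timer match register
per tick.  With made-up numbers (100MHz base clock, ITC ratio 4/1,
HZ=1024):

	itc_freq  = 100000000 * 4 / 1 = 400000000	(400MHz ITC)
	itm.delta = 400000000 / 1024  = 390625		ITC cycles per tick

so each CPU reprograms its ITM 390625 cycles into the future on every
timer interrupt.  (The real values come from SAL/PAL, as the printk
above shows.)
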
diff -urN linux-davidm/arch/ia64/kernel/traps.c linux-2.3.99-pre6-lia/arch/ia64/kernel/traps.c
--- linux-davidm/arch/ia64/kernel/traps.c	Thu May 25 23:22:10 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/traps.c	Mon May 15 11:59:08 2000
@@ -3,8 +3,12 @@
  *
  * Copyright (C) 1998-2000 Hewlett-Packard Co
  * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 05/12/00 grao <goutham.rao@intel.com> : added isr in siginfo for SIGFPE
  */
 
+#define FPSWA_DEBUG	1
+
 /*
  * The fpu_fault() handler needs to be able to access and update all
  * floating point registers.  Those saved in pt_regs can be accessed
@@ -300,6 +304,7 @@
 	if (copy_from_user(bundle, (void *) fault_ip, sizeof(bundle)))
 		return -1;
 
+#ifdef FPSWA_DEBUG
 	if (fpu_swa_count > 5 && jiffies - last_time > 5*HZ)
 		fpu_swa_count = 0;
 	if (++fpu_swa_count < 5) {
@@ -307,7 +312,7 @@
 		printk("%s(%d): floating-point assist fault at ip %016lx\n",
 		       current->comm, current->pid, regs->cr_iip + ia64_psr(regs)->ri);
 	}
-
+#endif
 	exception = fp_emulate(fp_fault, bundle, &regs->cr_ipsr, &regs->ar_fpsr, &isr, &regs->pr,
  			       &regs->cr_ifs, regs);
 	if (fp_fault) {
@@ -331,6 +336,7 @@
 			} else if (isr & 0x44) {
 				siginfo.si_code = FPE_FLTDIV;
 			}
+			siginfo.si_isr = isr;
 			send_sig_info(SIGFPE, &siginfo, current);
 		}
 	} else {
@@ -350,6 +356,7 @@
 			} else if (isr & 0x2200) {
 				siginfo.si_code = FPE_FLTRES;
 			}
+			siginfo.si_isr = isr;
 			send_sig_info(SIGFPE, &siginfo, current);
 		}
 	}
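
Since si_isr is new in the SIGFPE siginfo, here is roughly what a
user-level consumer could look like -- a sketch only, assuming a libc
whose ia64 siginfo_t already exposes the si_isr member:

	#include <signal.h>
	#include <stdio.h>
	#include <string.h>

	static void
	fpe_handler (int sig, siginfo_t *si, void *ctx)
	{
		/* si_code says which FP trap fired; si_isr carries the raw ISR bits.
		   (fprintf from a handler isn't strictly safe; fine for a test.)  */
		fprintf(stderr, "SIGFPE: si_code=%d si_isr=0x%lx\n",
			si->si_code, (unsigned long) si->si_isr);
	}

	static void
	install_fpe_handler (void)
	{
		struct sigaction sa;

		memset(&sa, 0, sizeof(sa));
		sa.sa_sigaction = fpe_handler;
		sa.sa_flags = SA_SIGINFO;
		sigaction(SIGFPE, &sa, NULL);
	}
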
diff -urN linux-davidm/arch/ia64/kernel/unwind.c linux-2.3.99-pre6-lia/arch/ia64/kernel/unwind.c
--- linux-davidm/arch/ia64/kernel/unwind.c	Sun Feb  6 18:42:40 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/unwind.c	Thu May 25 23:05:40 2000
@@ -1,16 +1,1462 @@
 /*
- * Copyright (C) 1999 Hewlett-Packard Co
- * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999-2000 Hewlett-Packard Co
+ * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
  */
+#include <linux/config.h>
 #include <linux/kernel.h>
 #include <linux/sched.h>
+#include <linux/slab.h>
 
 #include <asm/unwind.h>
 
+#ifdef CONFIG_IA64_NEW_UNWIND
+
+#include <asm/ptrace.h>
+#include <asm/ptrace_offsets.h>
+#include <asm/rse.h>
+#include <asm/system.h>
+
+#include "entry.h"
+#include "unwind_i.h"
+
+#define MIN(a,b)	((a) < (b) ? (a) : (b))
+#define p5		5
+
+/*
+ * The unwind tables are supposed to be sorted, but the GNU toolchain
+ * currently fails to produce a sorted table in the presence of
+ * functions that go into sections other than .text.  For example, the
+ * kernel likes to put initialization code into .text.init, which
+ * messes up the sort order.  Hopefully, this will get fixed sometime
+ * soon.  --davidm 00/05/23
+ */
+#define UNWIND_TABLE_SORT_BUG
+
+#define UNW_DEBUG	1
+
+#if UNW_DEBUG
+# define dprintk(format...)	printk(format)
+#else
+# define dprintk(format...)
+#endif
+
+#define alloc_reg_state()	kmalloc(sizeof(struct unw_state_record), GFP_ATOMIC)
+#define free_reg_state(usr)	kfree(usr)
+
+typedef unsigned long unw_word;
+
+static const enum unw_register_index save_order[] = {
+	UNW_REG_RP, UNW_REG_PFS, UNW_REG_PSP, UNW_REG_PR,
+	UNW_REG_UNAT, UNW_REG_LC, UNW_REG_FPSR, UNW_REG_PRI_UNAT_GR
+};
+
+#define struct_offset(str,fld)	((char *)&((str *)NULL)->fld - (char *) 0)
+
+static unsigned short preg_index[UNW_NUM_REGS] = {
+	struct_offset(struct unw_frame_info, pri_unat)/8,	/* PRI_UNAT_GR */
+	struct_offset(struct unw_frame_info, pri_unat)/8,	/* PRI_UNAT_MEM */
+	struct_offset(struct unw_frame_info, pbsp)/8,
+	struct_offset(struct unw_frame_info, bspstore)/8,
+	struct_offset(struct unw_frame_info, pfs)/8,
+	struct_offset(struct unw_frame_info, rnat)/8,
+	struct_offset(struct unw_frame_info, psp)/8,
+	struct_offset(struct unw_frame_info, rp)/8,
+	struct_offset(struct unw_frame_info, r4)/8,
+	struct_offset(struct unw_frame_info, r5)/8,
+	struct_offset(struct unw_frame_info, r6)/8,
+	struct_offset(struct unw_frame_info, r7)/8,
+	struct_offset(struct unw_frame_info, unat)/8,
+	struct_offset(struct unw_frame_info, pr)/8,
+	struct_offset(struct unw_frame_info, lc)/8,
+	struct_offset(struct unw_frame_info, fpsr)/8,
+	struct_offset(struct unw_frame_info, b1)/8,
+	struct_offset(struct unw_frame_info, b2)/8,
+	struct_offset(struct unw_frame_info, b3)/8,
+	struct_offset(struct unw_frame_info, b4)/8,
+	struct_offset(struct unw_frame_info, b5)/8,
+	struct_offset(struct unw_frame_info, f2)/8,
+	struct_offset(struct unw_frame_info, f3)/8,
+	struct_offset(struct unw_frame_info, f4)/8,
+	struct_offset(struct unw_frame_info, f5)/8,
+	struct_offset(struct unw_frame_info, fr[16])/8,
+	struct_offset(struct unw_frame_info, fr[17])/8,
+	struct_offset(struct unw_frame_info, fr[18])/8,
+	struct_offset(struct unw_frame_info, fr[19])/8,
+	struct_offset(struct unw_frame_info, fr[20])/8,
+	struct_offset(struct unw_frame_info, fr[21])/8,
+	struct_offset(struct unw_frame_info, fr[22])/8,
+	struct_offset(struct unw_frame_info, fr[23])/8,
+	struct_offset(struct unw_frame_info, fr[24])/8,
+	struct_offset(struct unw_frame_info, fr[25])/8,
+	struct_offset(struct unw_frame_info, fr[26])/8,
+	struct_offset(struct unw_frame_info, fr[27])/8,
+	struct_offset(struct unw_frame_info, fr[28])/8,
+	struct_offset(struct unw_frame_info, fr[29])/8,
+	struct_offset(struct unw_frame_info, fr[30])/8,
+	struct_offset(struct unw_frame_info, fr[31])/8,
+};
+
+#if UNW_DEBUG
+
+static const char *preg_name[UNW_NUM_REGS] = {
+	"pri_unat_gr", "pri_unat_mem", "bsp", "bspstore", "ar.pfs", "ar.rnat", "psp", "rp",
+	"r4", "r5", "r6", "r7",
+	"ar.unat", "pr", "ar.lc", "ar.fpsr",
+	"b1", "b2", "b3", "b4", "b5",
+	"f2", "f3", "f4", "f5",
+	"f16", "f17", "f18", "f19", "f20", "f21", "f22", "f23",
+	"f24", "f25", "f26", "f27", "f28", "f29", "f30", "f31"
+};
+
+#endif /* UNW_DEBUG */
+
+/* Maps a preserved register index (preg_index) into the corresponding switch_stack offset: */
+static unsigned short sw_offset[sizeof (struct unw_frame_info) / 8];
+
+static struct unw_table *unw_tables;
+
+static void
+push (struct unw_state_record *sr)
+{
+	struct unw_reg_state *rs;
+
+	rs = alloc_reg_state();
+	memcpy(rs, &sr->curr, sizeof(*rs));
+	rs->next = sr->stack;
+	sr->stack = rs;
+}
+
+static void
+pop (struct unw_state_record *sr)
+{
+	struct unw_reg_state *rs;
+
+	if (!sr->stack) {
+		printk ("unwind: stack underflow!\n");
+		return;
+	}
+	rs = sr->stack;
+	sr->stack = rs->next;
+	free_reg_state(rs);
+}
+
+static enum unw_register_index __attribute__((const))
+decode_abreg (unsigned char abreg, int memory)
+{
+	switch (abreg) {
+	      case 0x04 ... 0x07: return UNW_REG_R4 + (abreg - 0x04);
+	      case 0x22 ... 0x25: return UNW_REG_F2 + (abreg - 0x22);
+	      case 0x30 ... 0x3f: return UNW_REG_F16 + (abreg - 0x30);
+	      case 0x41 ... 0x45: return UNW_REG_B1 + (abreg - 0x41);
+	      case 0x60: return UNW_REG_PR;
+	      case 0x61: return UNW_REG_PSP;
+	      case 0x62: return memory ? UNW_REG_PRI_UNAT_MEM : UNW_REG_PRI_UNAT_GR;
+	      case 0x63: return UNW_REG_RP;
+	      case 0x64: return UNW_REG_BSP;
+	      case 0x65: return UNW_REG_BSPSTORE;
+	      case 0x66: return UNW_REG_RNAT;
+	      case 0x67: return UNW_REG_UNAT;
+	      case 0x68: return UNW_REG_FPSR;
+	      case 0x69: return UNW_REG_PFS;
+	      case 0x6a: return UNW_REG_LC;
+	      default:
+		break;
+	}
+	dprintk("unwind: bad abreg=0x%x\n", abreg);
+	return UNW_REG_LC;
+}
+
+static void
+set_reg (struct unw_reg_info *reg, enum unw_where where, int when, unsigned long val)
+{
+	reg->val = val;
+	reg->where = where;
+	if (reg->when == UNW_WHEN_NEVER)
+		reg->when = when;
+}
+
+static void
+alloc_spill_area (unsigned long *offp, unsigned long regsize,
+		  struct unw_reg_info *lo, struct unw_reg_info *hi)
+{
+	struct unw_reg_info *reg;
+
+	for (reg = hi; reg >= lo; --reg) {
+		if (reg->where == UNW_WHERE_SPILL_HOME) {
+			reg->where = UNW_WHERE_PSPREL;
+			reg->val = *offp;
+			*offp += regsize;
+		}
+	}
+}
+
+static void
+spill_next_when (struct unw_reg_info **regp, struct unw_reg_info *lim, unw_word t)
+{
+	struct unw_reg_info *reg;
+
+	for (reg = *regp; reg <= lim; ++reg) {
+		if (reg->where == UNW_WHERE_SPILL_HOME) {
+			reg->when = t;
+			*regp = reg + 1;
+			return;
+		}
+	}
+	dprintk("unwind: excess spill!\n");
+}
+
+static void
+finish_prologue (struct unw_state_record *sr)
+{
+	struct unw_reg_info *reg;
+	unsigned long off;
+	int i;
+
+	/*
+	 * First, resolve implicit register save locations
+	 * (see Section "11.4.2.3 Rules for Using Unwind
+	 * Descriptors", rule 3):
+	 */
+	for (i = 0; i < (int) sizeof(save_order)/sizeof(save_order[0]); ++i) {
+		reg = sr->curr.reg + save_order[i];
+		if (reg->where == UNW_WHERE_GR_SAVE) {
+			reg->where = UNW_WHERE_GR;
+			reg->val = sr->gr_save_loc++;
+		}
+	}
+
+	/*
+	 * Next, compute when the fp, general, and branch registers get
+	 * saved.  This must come before alloc_spill_area() because
+	 * we need to know which registers are spilled to their home
+	 * locations.
+	 */
+	if (sr->imask) {
+		unsigned char kind, mask = 0, *cp = sr->imask;
+		unsigned long t;
+		static const unsigned char limit[3] = {
+			UNW_REG_F31, UNW_REG_R7, UNW_REG_B5
+		};
+		struct unw_reg_info *(regs[3]);
+
+		regs[0] = sr->curr.reg + UNW_REG_F2;
+		regs[1] = sr->curr.reg + UNW_REG_R4;
+		regs[2] = sr->curr.reg + UNW_REG_B1;
+
+		for (t = 0; t < sr->region_len; ++t) {
+			if ((t & 3) == 0)
+				mask = *cp++;
+			kind = (mask >> 2*(3-(t & 3))) & 3;
+			if (kind > 0)
+				spill_next_when(&regs[kind - 1], sr->curr.reg + limit[kind - 1],
+						sr->region_start + t);
+		}
+	}
+	/*
+	 * Next, lay out the memory stack spill area:
+	 */
+	if (sr->any_spills) {
+		off = sr->spill_offset;
+		alloc_spill_area(&off, 16, sr->curr.reg + UNW_REG_F2, sr->curr.reg + UNW_REG_F31); 
+		alloc_spill_area(&off,  8, sr->curr.reg + UNW_REG_B1, sr->curr.reg + UNW_REG_B5);
+		alloc_spill_area(&off,  8, sr->curr.reg + UNW_REG_R4, sr->curr.reg + UNW_REG_R7);
+	}
+}
+
+/*
+ * Region header descriptors.
+ */
+
+static inline void
+desc_prologue (int body, unw_word rlen, unsigned char mask, unsigned char grsave,
+	       struct unw_state_record *sr)
+{
+	int i;
+
+	if (!(sr->in_body || sr->first_region))
+		finish_prologue(sr);
+	sr->first_region = 0;
+
+	/* check if we're done: */
+	if (body && sr->when_target < sr->region_start + sr->region_len) {
+		sr->done = 1;
+		return;
+	}
+
+	for (i = 0; i < sr->epilogue_count; ++i)
+		pop(sr);
+	sr->epilogue_count = 0;
+	sr->epilogue_start = UNW_WHEN_NEVER;
+
+	if (!body)
+		push(sr);
+
+	sr->region_start += sr->region_len;
+	sr->region_len = rlen;
+	sr->in_body = body;
+
+	if (!body) {
+		for (i = 0; i < 4; ++i) {
+			if (mask & 0x8)
+				set_reg(sr->curr.reg + save_order[i], UNW_WHERE_GR,
+					sr->region_start + sr->region_len - 1, grsave++);
+			mask <<= 1;
+		}
+		sr->gr_save_loc = grsave;
+		sr->any_spills = 0;
+		sr->imask = 0;
+		sr->spill_offset = 0x10;	/* default to psp+16 */
+	}
+}
+
+/*
+ * Prologue descriptors.
+ */
+
+static inline void
+desc_abi (unsigned char abi, unsigned char context, struct unw_state_record *sr)
+{
+	dprintk("unwind: ignoring unwabi(abi=0x%x,context=0x%x)\n", abi, context);
+	sr->flags |= UNW_FLAG_INTERRUPT_FRAME;
+}
+
+static inline void
+desc_br_gr (unsigned char brmask, unsigned char gr, struct unw_state_record *sr)
+{
+	int i;
+
+	for (i = 0; i < 5; ++i) {
+		if (brmask & 1)
+			set_reg(sr->curr.reg + UNW_REG_B1 + i, UNW_WHERE_GR,
+				sr->region_start + sr->region_len - 1, gr++);
+		brmask >>= 1;
+	}
+}
+
+static inline void
+desc_br_mem (unsigned char brmask, struct unw_state_record *sr)
+{
+	int i;
+
+	for (i = 0; i < 5; ++i) {
+		if (brmask & 1) {
+			set_reg(sr->curr.reg + UNW_REG_B1 + i, UNW_WHERE_SPILL_HOME,
+				sr->region_start + sr->region_len - 1, 0);
+			sr->any_spills = 1;
+		}
+		brmask >>= 1;
+	}
+}
+
+static inline void
+desc_frgr_mem (unsigned char grmask, unw_word frmask, struct unw_state_record *sr)
+{
+	int i;
+
+	for (i = 0; i < 4; ++i) {
+		if ((grmask & 1) != 0) {
+			set_reg(sr->curr.reg + UNW_REG_R4 + i, UNW_WHERE_SPILL_HOME,
+				sr->region_start + sr->region_len - 1, 0);
+			sr->any_spills = 1;
+		}
+		grmask >>= 1;
+	}
+	for (i = 0; i < 20; ++i) {
+		if ((frmask & 1) != 0) {
+			set_reg(sr->curr.reg + UNW_REG_F2 + i, UNW_WHERE_SPILL_HOME,
+				sr->region_start + sr->region_len - 1, 0);
+			sr->any_spills = 1;
+		}
+		frmask >>= 1;
+	}
+}
+
+static inline void
+desc_fr_mem (unsigned char frmask, struct unw_state_record *sr)
+{
+	int i;
+
+	for (i = 0; i < 4; ++i) {
+		if ((frmask & 1) != 0) {
+			set_reg(sr->curr.reg + UNW_REG_F2 + i, UNW_WHERE_SPILL_HOME,
+				sr->region_start + sr->region_len - 1, 0);
+			sr->any_spills = 1;
+		}
+		frmask >>= 1;
+	}
+}
+
+static inline void
+desc_gr_gr (unsigned char grmask, unsigned char gr, struct unw_state_record *sr)
+{
+	int i;
+
+	for (i = 0; i < 4; ++i) {
+		if ((grmask & 1) != 0)
+			set_reg(sr->curr.reg + UNW_REG_R4 + i, UNW_WHERE_GR,
+				sr->region_start + sr->region_len - 1, gr++);
+		grmask >>= 1;
+	}
+}
+
+static inline void
+desc_gr_mem (unsigned char grmask, struct unw_state_record *sr)
+{
+	int i;
+
+	for (i = 0; i < 4; ++i) {
+		if ((grmask & 1) != 0) {
+			set_reg(sr->curr.reg + UNW_REG_R4 + i, UNW_WHERE_SPILL_HOME,
+				sr->region_start + sr->region_len - 1, 0);
+			sr->any_spills = 1;
+		}
+		grmask >>= 1;
+	}
+}
+
+static inline void
+desc_mem_stack_f (unw_word t, unw_word size, struct unw_state_record *sr)
+{
+	set_reg(sr->curr.reg + UNW_REG_PSP, UNW_WHERE_NONE,
+		sr->region_start + MIN((int)t, sr->region_len - 1), 16*size);
+}
+
+static inline void
+desc_mem_stack_v (unw_word t, struct unw_state_record *sr)
+{
+	sr->curr.reg[UNW_REG_PSP].when = sr->region_start + MIN((int)t, sr->region_len - 1);
+}
+
+static inline void
+desc_reg_gr (unsigned char reg, unsigned char dst, struct unw_state_record *sr)
+{
+	set_reg(sr->curr.reg + reg, UNW_WHERE_GR, sr->region_start + sr->region_len - 1, dst);
+}
+
+static inline void
+desc_reg_psprel (unsigned char reg, unw_word pspoff, struct unw_state_record *sr)
+{
+	set_reg(sr->curr.reg + reg, UNW_WHERE_PSPREL, sr->region_start + sr->region_len - 1,
+		0x10 - 4*pspoff);
+}
+
+static inline void
+desc_reg_sprel (unsigned char reg, unw_word spoff, struct unw_state_record *sr)
+{
+	set_reg(sr->curr.reg + reg, UNW_WHERE_SPREL, sr->region_start + sr->region_len - 1,
+		4*spoff);
+}
+
+static inline void
+desc_rp_br (unsigned char dst, struct unw_state_record *sr)
+{
+	sr->return_link_reg = dst;
+}
+
+static inline void
+desc_reg_when (unsigned char regnum, unw_word t, struct unw_state_record *sr)
+{
+	struct unw_reg_info *reg = sr->curr.reg + regnum;
+
+	if (reg->where == UNW_WHERE_NONE)
+		reg->where = UNW_WHERE_GR_SAVE;
+	reg->when = sr->region_start + MIN((int)t, sr->region_len - 1);
+}
+
+static inline void
+desc_spill_base (unw_word pspoff, struct unw_state_record *sr)
+{
+	sr->spill_offset = 0x10 - 4*pspoff;
+}
+
+static inline unsigned char *
+desc_spill_mask (unsigned char *imaskp, struct unw_state_record *sr)
+{
+	sr->imask = imaskp;
+	return imaskp + (2*sr->region_len + 7)/8;
+}
+
+/*
+ * Body descriptors.
+ */
+static inline void
+desc_epilogue (unw_word t, unw_word ecount, struct unw_state_record *sr)
+{
+	sr->epilogue_start = sr->region_start + sr->region_len - 1 - t;
+	sr->epilogue_count = ecount + 1;
+}
+
+static inline void
+desc_copy_state (unw_word label, struct unw_state_record *sr)
+{
+	struct unw_reg_state *rs;
+
+	for (rs = sr->reg_state_list; rs; rs = rs->next) {
+		if (rs->label == label) {
+			memcpy (&sr->curr, rs, sizeof(sr->curr));
+			return;
+		}
+	}
+	printk("unwind: failed to find state labelled 0x%lx\n", label);
+}
+
+static inline void
+desc_label_state (unw_word label, struct unw_state_record *sr)
+{
+	struct unw_reg_state *rs;
+
+	rs = alloc_reg_state();
+	memcpy(rs, &sr->curr, sizeof(*rs));
+	rs->label = label;
+	rs->next = sr->reg_state_list;
+	sr->reg_state_list = rs;
+}
+
+/*
+ * General descriptors.
+ */
+
+static inline int
+desc_is_active (unsigned char qp, unw_word t, struct unw_state_record *sr)
+{
+	if (sr->when_target <= sr->region_start + MIN((int)t, sr->region_len - 1))
+		return 0;
+	if (qp > 0) {
+		if ((sr->pr_val & (1UL << qp)) == 0) 
+			return 0;
+		sr->pr_mask |= (1UL << qp);
+	}
+	return 1;
+}
+
+static inline void
+desc_restore_p (unsigned char qp, unw_word t, unsigned char abreg, struct unw_state_record *sr)
+{
+	struct unw_reg_info *r;
+
+	if (!desc_is_active(qp, t, sr))
+		return;
+
+	r = sr->curr.reg + decode_abreg(abreg, 0);
+	r->where = UNW_WHERE_NONE;
+	r->when = sr->region_start + MIN((int)t, sr->region_len - 1);
+	r->val = 0;
+}
+
+static inline void
+desc_spill_reg_p (unsigned char qp, unw_word t, unsigned char abreg, unsigned char x,
+		     unsigned char ytreg, struct unw_state_record *sr)
+{
+	enum unw_where where = UNW_WHERE_GR;
+	struct unw_reg_info *r;
+
+	if (!desc_is_active(qp, t, sr))
+		return;
+
+	if (x)
+		where = UNW_WHERE_BR;
+	else if (ytreg & 0x80)
+		where = UNW_WHERE_FR;
+
+	r = sr->curr.reg + decode_abreg(abreg, 0);
+	r->where = where;
+	r->when = sr->region_start + MIN((int)t, sr->region_len - 1);
+	r->val = (ytreg & 0x7f);
+}
+
+static inline void
+desc_spill_psprel_p (unsigned char qp, unw_word t, unsigned char abreg, unw_word pspoff,
+		     struct unw_state_record *sr)
+{
+	struct unw_reg_info *r;
+
+	if (!desc_is_active(qp, t, sr))
+		return;
+
+	r = sr->curr.reg + decode_abreg(abreg, 1);
+	r->where = UNW_WHERE_PSPREL;
+	r->when = sr->region_start + MIN((int)t, sr->region_len - 1);
+	r->val = 0x10 - 4*pspoff;
+}
+
+static inline void
+desc_spill_sprel_p (unsigned char qp, unw_word t, unsigned char abreg, unw_word spoff,
+		       struct unw_state_record *sr)
+{
+	struct unw_reg_info *r;
+
+	if (!desc_is_active(qp, t, sr))
+		return;
+
+	r = sr->curr.reg + decode_abreg(abreg, 1);
+	r->where = UNW_WHERE_SPREL;
+	r->when = sr->region_start + MIN((int)t, sr->region_len - 1);
+	r->val = 4*spoff;
+}
+
+#define UNW_DEC_BAD_CODE(code)			printk("unwind: unknown code 0x%02x\n", code);
+
+/*
+ * region headers:
+ */
+#define UNW_DEC_PROLOGUE_GR(fmt,r,m,gr,arg)	desc_prologue(0,r,m,gr,arg)
+#define UNW_DEC_PROLOGUE(fmt,b,r,arg)		desc_prologue(b,r,0,32,arg)
+/*
+ * prologue descriptors:
+ */
+#define UNW_DEC_ABI(fmt,a,c,arg)		desc_abi(a,c,arg)
+#define UNW_DEC_BR_GR(fmt,b,g,arg)		desc_br_gr(b,g,arg)
+#define UNW_DEC_BR_MEM(fmt,b,arg)		desc_br_mem(b,arg)
+#define UNW_DEC_FRGR_MEM(fmt,g,f,arg)		desc_frgr_mem(g,f,arg)
+#define UNW_DEC_FR_MEM(fmt,f,arg)		desc_fr_mem(f,arg)
+#define UNW_DEC_GR_GR(fmt,m,g,arg)		desc_gr_gr(m,g,arg)
+#define UNW_DEC_GR_MEM(fmt,m,arg)		desc_gr_mem(m,arg)
+#define UNW_DEC_MEM_STACK_F(fmt,t,s,arg)	desc_mem_stack_f(t,s,arg)
+#define UNW_DEC_MEM_STACK_V(fmt,t,arg)		desc_mem_stack_v(t,arg)
+#define UNW_DEC_REG_GR(fmt,r,d,arg)		desc_reg_gr(r,d,arg)
+#define UNW_DEC_REG_PSPREL(fmt,r,o,arg)		desc_reg_psprel(r,o,arg)
+#define UNW_DEC_REG_SPREL(fmt,r,o,arg)		desc_reg_sprel(r,o,arg)
+#define UNW_DEC_REG_WHEN(fmt,r,t,arg)		desc_reg_when(r,t,arg)
+#define UNW_DEC_PRIUNAT_WHEN_GR(fmt,t,arg)	desc_reg_when(UNW_REG_PRI_UNAT_GR,t,arg)
+#define UNW_DEC_PRIUNAT_WHEN_MEM(fmt,t,arg)	desc_reg_when(UNW_REG_PRI_UNAT_MEM,t,arg)
+#define UNW_DEC_PRIUNAT_GR(fmt,r,arg)		desc_reg_gr(UNW_REG_PRI_UNAT_GR,r,arg)
+#define UNW_DEC_PRIUNAT_PSPREL(fmt,o,arg)	desc_reg_psprel(UNW_REG_PRI_UNAT_MEM,o,arg)
+#define UNW_DEC_PRIUNAT_SPREL(fmt,o,arg)	desc_reg_sprel(UNW_REG_PRI_UNAT_MEM,o,arg)
+#define UNW_DEC_RP_BR(fmt,d,arg)		desc_rp_br(d,arg)
+#define UNW_DEC_SPILL_BASE(fmt,o,arg)		desc_spill_base(o,arg)
+#define UNW_DEC_SPILL_MASK(fmt,m,arg)		(m = desc_spill_mask(m,arg))
+/*
+ * body descriptors:
+ */
+#define UNW_DEC_EPILOGUE(fmt,t,c,arg)		desc_epilogue(t,c,arg)
+#define UNW_DEC_COPY_STATE(fmt,l,arg)		desc_copy_state(l,arg)
+#define UNW_DEC_LABEL_STATE(fmt,l,arg)		desc_label_state(l,arg)
+/*
+ * general unwind descriptors:
+ */
+#define UNW_DEC_SPILL_REG_P(f,p,t,a,x,y,arg)	desc_spill_reg_p(p,t,a,x,y,arg)
+#define UNW_DEC_SPILL_REG(f,t,a,x,y,arg)	desc_spill_reg_p(0,t,a,x,y,arg)
+#define UNW_DEC_SPILL_PSPREL_P(f,p,t,a,o,arg)	desc_spill_psprel_p(p,t,a,o,arg)
+#define UNW_DEC_SPILL_PSPREL(f,t,a,o,arg)	desc_spill_psprel_p(0,t,a,o,arg)
+#define UNW_DEC_SPILL_SPREL_P(f,p,t,a,o,arg)	desc_spill_sprel_p(p,t,a,o,arg)
+#define UNW_DEC_SPILL_SPREL(f,t,a,o,arg)	desc_spill_sprel_p(0,t,a,o,arg)
+#define UNW_DEC_RESTORE_P(f,p,t,a,arg)		desc_restore_p(p,t,a,arg)
+#define UNW_DEC_RESTORE(f,t,a,arg)		desc_restore_p(0,t,a,arg)
+
+#include "unwind_decoder.c"
+
+static struct unw_table_entry *
+lookup (struct unw_table *table, unsigned long rel_ip)
+{
+	struct unw_table_entry *e = 0;
+	unsigned long lo, hi, mid;
+
+	/* do a binary search for right entry: */
+	for (lo = 0, hi = table->length; lo < hi; ) {
+		mid = (lo + hi) / 2;
+		e = &table->array[mid];
+		if (rel_ip < e->start_offset)
+			hi = mid;
+		else if (rel_ip >= e->end_offset)
+			lo = mid + 1;
+		else
+			break;
+	}
+	return e;
+}
+
+static struct unw_script *
+script_new (void)
+{
+	struct unw_script *script;
+
+	script = kmalloc(sizeof(*script), GFP_ATOMIC);	/* XXX fix me */
+	if (script)
+		memset(script, 0, sizeof(*script));
+	return script;
+}
+
+static void
+script_emit (struct unw_script *script, struct unw_insn insn)
+{
+	if (script->count >= UNW_MAX_SCRIPT_LEN) {
+		dprintk("unwind: script exceeds maximum size of %u instructions!\n",
+			UNW_MAX_SCRIPT_LEN);
+		return;		/* don't write past the end of the script buffer */
+	}
+	script->insn[script->count++] = insn;
+}
+
+static void
+script_finalize (struct unw_script *script, struct unw_state_record *sr)
+{
+	script->pr_mask = sr->pr_mask;
+	script->pr_val = sr->pr_val;
+}
+
+static inline void
+emit_nat_info (struct unw_state_record *sr, int i, struct unw_script *script)
+{
+	struct unw_reg_info *r = sr->curr.reg + i;
+	enum unw_insn_opcode opc;
+	struct unw_insn insn;
+	unsigned long val;
+
+	switch (r->where) {
+	      case UNW_WHERE_GR:
+		if (r->val >= 32) {
+			/* register got spilled to a stacked register */
+			opc = UNW_INSN_SETNAT_TYPE;
+			val = UNW_NAT_STACKED;
+		} else {
+			/* register got spilled to a scratch register */
+			opc = UNW_INSN_SETNAT_TYPE;
+			val = UNW_NAT_SCRATCH;
+		}
+		break;
+
+	      case UNW_WHERE_FR:
+		opc = UNW_INSN_SETNAT_TYPE;
+		val = UNW_NAT_VAL;
+		break;
+
+	      case UNW_WHERE_BR:
+		opc = UNW_INSN_SETNAT_TYPE;
+		val = UNW_NAT_NONE;
+		break;
+
+	      case UNW_WHERE_PSPREL:
+	      case UNW_WHERE_SPREL:
+		opc = UNW_INSN_SETNAT_PRI_UNAT;
+		val = 0;
+		break;
+
+	      default:
+		dprintk("unwind: don't know how to emit nat info for where = %u\n", r->where);
+		return;
+	}
+	insn.opc = opc;
+	insn.dst = preg_index[i];
+	insn.val = val;
+	script_emit(script, insn);
+}
+
+static void
+compile_reg (struct unw_state_record *sr, int i, struct unw_script *script)
+{
+	struct unw_reg_info *r = sr->curr.reg + i;
+	enum unw_insn_opcode opc;
+	unsigned long val, rval;
+	struct unw_insn insn;
+	long need_nat_info;
+
+	if (r->where == UNW_WHERE_NONE || r->when >= sr->when_target)
+		return;
+
+	opc = UNW_INSN_MOVE;
+	val = rval = r->val;
+	need_nat_info = (i >= UNW_REG_R4 && i <= UNW_REG_R7);
+
+	switch (r->where) {
+	      case UNW_WHERE_GR:
+		if (rval >= 32) {
+			opc = UNW_INSN_MOVE_STACKED;
+			val = rval - 32;
+		} else if (rval >= 4 && rval <= 7) {
+			if (need_nat_info) {
+				opc = UNW_INSN_MOVE2;
+				need_nat_info = 0;
+			}
+			val = preg_index[UNW_REG_R4 + (rval - 4)];
+		} else {
+			opc = UNW_INSN_LOAD_SPREL;
+			val = 0x10 - sizeof(struct pt_regs); 
+			if (rval >= 1 && rval <= 3)
+				val += struct_offset(struct pt_regs, r1) + 8*(rval - 1);
+			else if (rval <= 11)
+				val += struct_offset(struct pt_regs, r8) + 8*(rval - 8);
+			else if (rval <= 15)
+				val += struct_offset(struct pt_regs, r12) + 8*(rval - 12);
+			else if (rval <= 31)
+				val += struct_offset(struct pt_regs, r16) + 8*(rval - 16);
+			else
+				dprintk("unwind: bad scratch reg r%lu\n", rval);
+		}
+		break;
+
+	      case UNW_WHERE_FR:
+		if (rval <= 5)
+			val = preg_index[UNW_REG_F2  + (rval -  1)];
+		else if (rval >= 16 && rval <= 31)
+			val = preg_index[UNW_REG_F16 + (rval - 16)];
+		else {
+			opc = UNW_INSN_LOAD_SPREL;
+			val = 0x10 - sizeof(struct pt_regs);
+			if (rval <= 9)
+				val += struct_offset(struct pt_regs, f6) + 16*(rval - 6);
+			else
+				dprintk("unwind: kernel may not touch f%lu\n", rval);
+		}
+		break;
+
+	      case UNW_WHERE_BR:
+		if (rval >= 1 && rval <= 5)
+			val = preg_index[UNW_REG_B1 + (rval - 1)];
+		else {
+			opc = UNW_INSN_LOAD_SPREL;
+			val = 0x10 - sizeof(struct pt_regs);
+			if (rval == 0)
+				val += struct_offset(struct pt_regs, b0);
+			else if (rval == 6)
+				val += struct_offset(struct pt_regs, b6);
+			else
+				val += struct_offset(struct pt_regs, b7);
+		}
+		break;
+
+	      case UNW_WHERE_SPREL:
+		opc = UNW_INSN_LOAD_SPREL;
+		break;
+
+	      case UNW_WHERE_PSPREL:
+		opc = UNW_INSN_LOAD_PSPREL;
+		break;
+
+	      default:
+		dprintk("unwind: register %u has unexpected `where' value of %u\n", i, r->where);
+		break;
+	}
+	insn.opc = opc;
+	insn.dst = preg_index[i];
+	insn.val = val;
+	script_emit(script, insn);
+	if (need_nat_info)
+		emit_nat_info(sr, i, script);
+}
+
+/*
+ * Build an unwind script that unwinds from state OLD_STATE to the
+ * entrypoint of the function that called OLD_STATE.
+ */
+static struct unw_script *
+build_script (struct unw_frame_info *info)
+{
+	struct unw_reg_state *rs, *next;
+	struct unw_table_entry *e = 0;
+	struct unw_script *script = 0;
+	struct unw_state_record sr;
+	struct unw_table *table;
+	struct unw_reg_info *r;
+	struct unw_insn insn;
+	u8 *dp, *desc_end;
+	unsigned long ip;
+	u64 hdr;
+	int i;
+
+	/* build state record */
+	memset(&sr, 0, sizeof(sr));
+	for (r = sr.curr.reg; r < sr.curr.reg + UNW_NUM_REGS; ++r)
+		r->when = UNW_WHEN_NEVER;
+	sr.pr_val = info->pr_val;
+
+	script = script_new ();
+	if (!script) {
+		dprintk("unwind: failed to create unwind script\n");
+		return 0;
+	}
+	ip = script->ip = info->ip;
+
+	/* search the kernel's and the modules' unwind tables for IP: */
+
+	for (table = unw_tables; table; table = table->next) {
+		if (ip >= table->start && ip < table->end) {
+			e = lookup(table, ip - table->segment_base);
+			break;
+		}
+	}
+	if (!e) {
+		/* no info, return default unwinder (leaf proc, no mem stack, no saved regs)  */
+		dprintk("unwind: no unwind info for ip=%lx\n", ip);
+		sr.curr.reg[UNW_REG_RP].where = UNW_WHERE_BR;
+		sr.curr.reg[UNW_REG_RP].when = -1;
+		sr.curr.reg[UNW_REG_RP].val = 0;
+		compile_reg(&sr, UNW_REG_RP, script);
+		script_finalize(script, &sr);
+		return script;
+	}
+
+	sr.when_target = (3*((ip & ~0xfUL) - (table->segment_base + e->start_offset))
+			  + (ip & 0xfUL));
+	hdr = *(u64 *) (table->segment_base + e->info_offset);
+	dp =   (u8 *)  (table->segment_base + e->info_offset + 8);
+	desc_end = dp + 8*UNW_LENGTH(hdr);
+
+	while (!sr.done && dp < desc_end)
+		dp = unw_decode(dp, sr.in_body, &sr);
+
+	if (sr.when_target > sr.epilogue_start) {
+		/*
+		 * sp has been restored and all values on the memory stack below
+		 * psp also have been restored.
+		 */
+		sr.curr.reg[UNW_REG_PSP].where = UNW_WHERE_NONE;
+		sr.curr.reg[UNW_REG_PSP].val = 0;
+		for (r = sr.curr.reg; r < sr.curr.reg + UNW_NUM_REGS; ++r)
+			if ((r->where == UNW_WHERE_PSPREL && r->val <= 0x10)
+			    || r->where == UNW_WHERE_SPREL)
+				r->where = UNW_WHERE_NONE;
+	}
+
+	script->flags = sr.flags;
+
+	/*
+	 * If RP didn't get saved, generate an entry for the return link
+	 * register.
+	 */
+	if (sr.curr.reg[UNW_REG_RP].when >= sr.when_target) {
+		sr.curr.reg[UNW_REG_RP].where = UNW_WHERE_BR;
+		sr.curr.reg[UNW_REG_RP].when = -1;
+		sr.curr.reg[UNW_REG_RP].val = sr.return_link_reg;
+	}
+
+#if UNW_DEBUG
+	printk ("unwind: state record for func 0x%lx, t=%u:\n",
+		table->segment_base + e->start_offset, sr.when_target);
+	for (r = sr.curr.reg; r < sr.curr.reg + UNW_NUM_REGS; ++r) {
+		if (r->where != UNW_WHERE_NONE || r->when != UNW_WHEN_NEVER) {
+			printk("  %s <- ", preg_name[r - sr.curr.reg]);
+			switch (r->where) {
+			      case UNW_WHERE_GR:     printk("r%lu", r->val); break;
+			      case UNW_WHERE_FR:     printk("f%lu", r->val); break;
+			      case UNW_WHERE_BR:     printk("b%lu", r->val); break;
+			      case UNW_WHERE_SPREL:  printk("[sp+0x%lx]", r->val); break;
+			      case UNW_WHERE_PSPREL: printk("[psp+0x%lx]", 0x10 - r->val); break;
+			      case UNW_WHERE_NONE:
+				printk("%s+0x%lx", preg_name[r - sr.curr.reg], r->val);
+				break; 
+			      default:		     printk("BADWHERE(%d)", r->where); break;
+			}
+			printk ("\t\t%d\n", r->when);
+		}
+	}
+#endif
+
+	/* translate state record into unwinder instructions: */
+
+	if (sr.curr.reg[UNW_REG_PSP].where == UNW_WHERE_NONE
+	    && sr.when_target > sr.curr.reg[UNW_REG_PSP].when && sr.curr.reg[UNW_REG_PSP].val != 0)
+	{
+		/* new psp is sp plus frame size */
+		insn.opc = UNW_INSN_ADD;
+		insn.dst = preg_index[UNW_REG_PSP];
+		insn.val = sr.curr.reg[UNW_REG_PSP].val;
+		script_emit(script, insn);
+	}
+
+	/* determine where the primary UNaT is: */
+	if (sr.when_target < sr.curr.reg[UNW_REG_PRI_UNAT_GR].when)
+		i = UNW_REG_PRI_UNAT_MEM;
+	else if (sr.when_target < sr.curr.reg[UNW_REG_PRI_UNAT_MEM].when)
+		i = UNW_REG_PRI_UNAT_GR;
+	else if (sr.curr.reg[UNW_REG_PRI_UNAT_MEM].when > sr.curr.reg[UNW_REG_PRI_UNAT_GR].when)
+		i = UNW_REG_PRI_UNAT_MEM;
+	else
+		i = UNW_REG_PRI_UNAT_GR;
+
+	compile_reg(&sr, i, script);
+
+	for (i = UNW_REG_BSP; i < UNW_NUM_REGS; ++i)
+		compile_reg(&sr, i, script);
+
+	/* free labelled register states & stack: */
+
+	for (rs = sr.reg_state_list; rs; rs = next) {
+		next = rs->next;
+		free_reg_state(rs);
+	}
+	while (sr.stack)
+		pop(&sr);
+
+	script_finalize(script, &sr);
+	return script;
+}
+
+int
+unw_access_gr (struct unw_frame_info *info, int regnum, unsigned long *val, char *nat, int write)
+{
+	unsigned long *addr, *nat_addr, nat_mask = 0, dummy_nat;
+	struct unw_ireg *ireg;
+	struct pt_regs *pt;
+
+	if ((unsigned) regnum - 1 >= 127)
+		return -1;
+
+	if (regnum < 32) {
+		if (regnum >= 4 && regnum <= 7) {
+			/* access a preserved register */
+			ireg = &info->r4 + (regnum - 4);
+			addr = ireg->loc;
+			if (addr) {
+				nat_addr = addr + ireg->nat.off;
+				switch (ireg->nat.type) {
+				      case UNW_NAT_VAL:
+					/* simulate getf.sig/setf.sig */
+					if (write) {
+						if (*nat) {
+							/* write NaTVal and be done with it */
+							addr[0] = 0;
+							addr[1] = 0x1fffe;
+							return 0;
+						}
+						addr[1] = 0x1003e;
+					} else {
+						if (addr[0] == 0 && addr[1] == 0x1fffe) {
+							/* return NaT and be done with it */
+							*val = 0;
+							*nat = 1;
+							return 0;
+						}
+					}
+					/* fall through */
+				      case UNW_NAT_NONE:
+					nat_addr = &dummy_nat;
+					break;
+
+				      case UNW_NAT_SCRATCH:
+					if (info->unat)
+						nat_addr = info->unat;
+					else
+						nat_addr = &info->sw->caller_unat;
+				      case UNW_NAT_PRI_UNAT:
+					nat_mask = (1UL << ((long) addr & 0x1f8)/8);
+					break;
+
+				      case UNW_NAT_STACKED:
+					nat_addr = ia64_rse_rnat_addr(addr);
+					if ((unsigned long) addr < info->regstk.limit
+					    || (unsigned long) addr >= info->regstk.top)
+						return -1;
+					if ((unsigned long) nat_addr >= info->regstk.top)
+						nat_addr = &info->sw->ar_rnat;
+					nat_mask = (1UL << ia64_rse_slot_num(addr));
+					break;
+				}
+			} else {
+				addr = &info->sw->r4 + (regnum - 4);
+				nat_addr = &info->sw->ar_unat;
+				nat_mask = (1UL << ((long) addr & 0x1f8)/8);
+			}
+		} else {
+			/* access a scratch register */
+			pt = (struct pt_regs *) info->sp - 1;
+			if (regnum <= 3)
+				addr = &pt->r1 + (regnum - 1);
+			else if (regnum <= 11)
+				addr = &pt->r8 + (regnum - 8);
+			else if (regnum <= 15)
+				addr = &pt->r12 + (regnum - 12);
+			else
+				addr = &pt->r16 + (regnum - 16);
+			if (info->unat)
+				nat_addr = info->unat;
+			else
+				nat_addr = &info->sw->caller_unat;
+			nat_mask = (1UL << ((long) addr & 0x1f8)/8);
+		}
+	} else {
+		/* access a stacked register */
+		addr = ia64_rse_skip_regs((unsigned long *) info->bsp, regnum);
+		nat_addr = ia64_rse_rnat_addr(addr);
+		if ((unsigned long) addr < info->regstk.limit
+		    || (unsigned long) addr >= info->regstk.top)
+		{
+			dprintk("unwind: ignoring attempt to access register outside of rbs\n");
+			return -1;
+		}
+		if ((unsigned long) nat_addr >= info->regstk.top)
+			nat_addr = &info->sw->ar_rnat;
+		nat_mask = (1UL << ia64_rse_slot_num(addr));
+	}
+
+	if (write) {
+		*addr = *val;
+		*nat_addr = (*nat_addr & ~nat_mask) | nat_mask;
+	} else {
+		*val = *addr;
+		*nat = (*nat_addr & nat_mask) != 0;
+	}
+	return 0;
+}
+
+int
+unw_access_br (struct unw_frame_info *info, int regnum, unsigned long *val, int write)
+{
+	unsigned long *addr;
+	struct pt_regs *pt;
+
+	pt = (struct pt_regs *) info->sp - 1;
+	switch (regnum) {
+		/* scratch: */
+	      case 0: addr = &pt->b0; break;
+	      case 6: addr = &pt->b6; break;
+	      case 7: addr = &pt->b7; break;
+
+		/* preserved: */
+	      case 1: case 2: case 3: case 4: case 5:
+		addr = *(&info->b1 + (regnum - 1));
+		break;
+
+	      default:
+		return -1;
+	}
+	if (write)
+		*addr = *val;
+	else
+		*val = *addr;
+	return 0;
+}
+
+int
+unw_access_fr (struct unw_frame_info *info, int regnum, struct ia64_fpreg *val, int write)
+{
+	struct ia64_fpreg *addr = 0;
+	struct pt_regs *pt;
+
+	if ((unsigned) (regnum - 2) >= 30)
+		return -1;
+
+	pt = (struct pt_regs *) info->sp - 1;
+
+	if (regnum <= 5) {
+		addr = *(&info->f2 + (regnum - 2));
+		if (!addr)
+			addr = &info->sw->f2 + (regnum - 2);
+	} else if (regnum <= 15) {
+		if (regnum <= 9)
+			addr = &pt->f6  + (regnum - 6);
+		else
+			addr = &info->sw->f10 + (regnum - 10);
+	} else if (regnum <= 31) {
+		addr = *(&info->fr[regnum - 16]);
+		if (!addr)
+			addr = &info->sw->f16 + (regnum - 16);
+	}
+
+	if (write)
+		*addr = *val;
+	else
+		*val = *addr;
+	return 0;
+}
+
+int
+unw_access_ar (struct unw_frame_info *info, int regnum, unsigned long *val, int write)
+{
+	unsigned long *addr;
+	struct pt_regs *pt;
+
+	pt = (struct pt_regs *) info->sp - 1;
+
+	switch (regnum) {
+	      case UNW_AR_BSP:
+		addr = info->pbsp;
+		if (!addr)
+			addr = &info->sw->ar_bspstore;
+		break;
+
+	      case UNW_AR_BSPSTORE:
+		addr = info->bspstore;
+		if (!addr)
+			addr = &info->sw->ar_bspstore;
+		break;
+
+	      case UNW_AR_PFS:
+		addr = info->pfs;
+		if (!addr)
+			addr = &info->sw->ar_pfs;
+		break;
+
+	      case UNW_AR_RNAT:
+		addr = info->rnat;
+		if (!addr)
+			addr = &info->sw->ar_rnat;
+		break;
+
+	      case UNW_AR_UNAT:
+		addr = info->unat;
+		if (!addr)
+			addr = &info->sw->ar_unat;
+		break;
+
+	      case UNW_AR_LC:
+		addr = info->lc;
+		if (!addr)
+			addr = &info->sw->ar_lc;
+		break;
+
+	      case UNW_AR_FPSR:
+		addr = info->fpsr;
+		if (!addr)
+			addr = &info->sw->ar_fpsr;
+		break;
+
+	      case UNW_AR_RSC:
+		addr = &pt->ar_rsc;
+		break;
+
+	      case UNW_AR_CCV:
+		addr = &pt->ar_ccv;
+		break;
+
+	      default:
+		return -1;
+	}
+
+	if (write)
+		*addr = *val;
+	else
+		*val = *addr;
+	return 0;
+}
+
+inline int
+unw_access_pr (struct unw_frame_info *info, unsigned long *val, int write)
+{
+	unsigned long *addr;
+
+	addr = info->pr;
+	if (!addr)
+		addr = &info->sw->pr;
+
+	if (write)
+		*addr = *val;
+	else
+		*val = *addr;
+	return 0;
+}
+
+/*
+ * Apply the unwinding actions represented by OPS and update SR to
+ * reflect the state that existed upon entry to the function that this
+ * unwinder represents.
+ */
+static void
+run_script (struct unw_script *script, struct unw_frame_info *state)
+{
+	struct unw_insn *ip, *limit, next_insn;
+	unsigned long opc, dst, val, off;
+	unsigned long *s = (unsigned long *) state;
+
+	state->flags = script->flags;
+	ip = script->insn;
+	limit = script->insn + script->count;
+	next_insn = *ip;
+
+	while (ip++ < limit) {
+		opc = next_insn.opc;
+		dst = next_insn.dst;
+		val = next_insn.val;
+		next_insn = *ip;
+
+	  redo:
+		switch (opc) {
+		      case UNW_INSN_ADD:
+			s[dst] += val;
+			break;
+
+		      case UNW_INSN_MOVE2:
+			if (!s[val])
+				goto lazy_init;
+			s[dst+1] = s[val+1];
+			s[dst] = s[val];
+			break;
+
+		      case UNW_INSN_MOVE:
+			if (!s[val])
+				goto lazy_init;
+			s[dst] = s[val];
+			break;
+
+		      case UNW_INSN_MOVE_STACKED:
+			s[dst] = (unsigned long) ia64_rse_skip_regs((unsigned long *)state->bsp,
+								    val);
+			break;
+
+		      case UNW_INSN_LOAD_PSPREL:
+			s[dst] = state->psp + val;
+			break;
+
+		      case UNW_INSN_LOAD_SPREL:
+			s[dst] = state->sp + val;
+			break;
+
+		      case UNW_INSN_SETNAT_PRI_UNAT:
+			if (!state->pri_unat)
+				state->pri_unat = &state->sw->caller_unat;
+			s[dst+1] = ((*state->pri_unat - s[dst]) << 32) | UNW_NAT_PRI_UNAT;
+			break;
+
+		      case UNW_INSN_SETNAT_TYPE:
+			s[dst+1] = val;
+			break;
+		}
+	}
+	return;
+
+  lazy_init:
+	off = sw_offset[val];
+	s[val] = (unsigned long) state->sw + off;
+	if (off >= struct_offset (struct unw_frame_info, r4)
+	    && off <= struct_offset (struct unw_frame_info, r7))
+		/*
+		 * We're initializing a general register: init NaT info, too.  Note that we
+		 * rely on the fact that caller_unat is the first field in struct switch_stack:
+		 */
+		s[val+1] = (-off << 32) | UNW_NAT_PRI_UNAT;
+	goto redo;
+}
+
+static void
+find_save_locs (struct unw_frame_info *info)
+{
+	static struct unw_script *script_cache;
+	struct unw_script *scr;
+
+	/* XXX BIG FIXME---need hash table here... */
+	scr = script_cache;
+	if (scr) {
+		while (scr->ip != info->ip || ((info->pr_val ^ scr->pr_val) & scr->pr_mask) != 0) {
+			if (scr->hint == -1) {
+				scr = 0;
+				break;
+			}
+			scr = (struct unw_script *) ((int *) scr + scr->hint);
+		}
+	}
+
+	if (!scr) {
+		scr = build_script(info);
+		if (scr) {
+			if (script_cache)
+				scr->hint = (int *) script_cache - (int *) scr;
+			else
+				scr->hint = -1;
+			script_cache = scr;
+		}
+	}
+
+	if (!scr) {
+		dprintk("unwind: failed to locate/build unwind script for ip %lx\n", info->ip);
+		return;
+	}
+	run_script(scr, info);
+}
+
+int
+unw_unwind (struct unw_frame_info *info)
+{
+	unsigned long ip, pr, num_regs;
+	
+	/* restore the ip */
+	if (!info->rp) {
+		dprintk("unwind: failed to locate return link!\n");
+		return -1;
+	}
+	ip = info->ip = *info->rp;
+	if (ip <= TASK_SIZE) {
+		dprintk("unwind: reached user-space (ip=%lx)\n", ip);
+		return -1;
+	}
+
+	/* restore the cfm: */
+	if (!info->pfs) {
+		dprintk("unwind: failed to locate ar.pfs!\n");
+		return -1;
+	}
+	info->cfm = *info->pfs;
+
+	/* restore the bsp: */
+	pr = info->pr_val;
+	num_regs = 0;
+	if ((info->flags & UNW_FLAG_INTERRUPT_FRAME)) {
+		if ((pr & (1UL << pNonSys)) != 0)
+			num_regs = info->cfm & 0x7f;		/* size of frame */
+		info->pfs =
+			(unsigned long *) (info->sp + 16 + struct_offset(struct pt_regs, ar_pfs));
+	} else
+		num_regs = (info->cfm >> 7) & 0x7f;	/* size of locals */
+	info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) info->bsp, -num_regs);
+	if (info->bsp < info->regstk.limit || info->bsp > info->regstk.top) {
+		dprintk("unwind: bsp (0x%lx) out of range [0x%lx-0x%lx]\n",
+			info->bsp, info->regstk.limit, info->regstk.top);
+		return -1;
+	}
+
+	/* restore the sp: */
+	info->sp = info->psp;
+	if (info->sp < info->memstk.top || info->sp > info->memstk.limit) {
+		dprintk("unwind: sp (0x%lx) out of range [0x%lx-0x%lx]\n",
+			info->sp, info->memstk.top, info->memstk.limit);
+		return -1;
+	}
+
+	/* finally, restore the predicates: */
+	unw_get_pr(info, &info->pr_val);
+
+	find_save_locs(info);
+	return 0;
+}
+
+static void
+unw_init_frame_info (struct unw_frame_info *info, struct task_struct *t, struct switch_stack *sw)
+{
+	unsigned long rbslimit, rbstop, stklimit, stktop, sol;
+
+	/*
+	 * Subtle stuff here: we _could_ unwind through the
+	 * switch_stack frame but we don't want to do that because it
+	 * would be slow as each preserved register would have to be
+	 * processed.  Instead, what we do here is zero out the frame
+	 * info and start the unwind process at the function that
+	 * created the switch_stack frame.  When a preserved value in
+	 * switch_stack needs to be accessed, run_script() will
+	 * initialize the appropriate pointer on demand.
+	 */
+	memset(info, 0, sizeof(*info));
+
+	rbslimit = (unsigned long) t + IA64_RBS_OFFSET;
+	rbstop   = sw->ar_bspstore;
+	if (rbstop - (unsigned long) t >= IA64_STK_OFFSET)
+		rbstop = rbslimit;
+
+	stklimit = (unsigned long) t + IA64_STK_OFFSET;
+	stktop   = (unsigned long) sw - 16;
+	if (stktop <= rbstop)
+		stktop = rbstop;
+
+	info->regstk.limit = rbslimit;
+	info->regstk.top   = rbstop;
+	info->memstk.limit = stklimit;
+	info->memstk.top   = stktop;
+	info->sw  = sw;
+	info->sp = info->psp = (unsigned long) (sw + 1) - 16;
+	info->cfm = sw->ar_pfs;
+	sol = (info->cfm >> 7) & 0x7f;
+	info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) info->regstk.top, -sol);
+	info->ip = sw->b0;
+	info->pr_val = sw->pr;
+
+	find_save_locs(info);
+}
+
+#endif /* CONFIG_IA64_NEW_UNWIND */
+
 void
-ia64_unwind_init_from_blocked_task (struct ia64_frame_info *info, struct task_struct *t)
+unw_init_from_blocked_task (struct unw_frame_info *info, struct task_struct *t)
 {
 	struct switch_stack *sw = (struct switch_stack *) (t->thread.ksp + 16);
+
+#ifdef CONFIG_IA64_NEW_UNWIND
+	unw_init_frame_info(info, t, sw);
+#else
 	unsigned long sol, limit, top;
 
 	memset(info, 0, sizeof(*info));
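
One piece of the new unwinder worth calling out is how an ip gets mapped
to its unwind info: unw_add_unwind_table() keeps the per-object tables
on a list, and lookup() binary-searches the entries, which are sorted by
start offset.  Schematically (standalone sketch with simplified types;
unlike the kernel routine it returns NULL on a miss):

	struct entry {
		unsigned long start, end;	/* segment-relative ip range */
	};

	static struct entry *
	find_entry (struct entry *tab, long n, unsigned long rel_ip)
	{
		long lo = 0, hi = n;	/* search the half-open range [lo,hi) */

		while (lo < hi) {
			long mid = (lo + hi) / 2;

			if (rel_ip < tab[mid].start)
				hi = mid;
			else if (rel_ip >= tab[mid].end)
				lo = mid + 1;
			else
				return &tab[mid];	/* start <= rel_ip < end */
		}
		return 0;		/* no unwind info for this ip */
	}

This is also why the UNWIND_TABLE_SORT_BUG bubble sort is there: the
search is only correct on a sorted table, and the current toolchain
doesn't guarantee that when functions end up in sections like .text.init.
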
@@ -22,17 +1468,25 @@
 	if (top - (unsigned long) t >= IA64_STK_OFFSET)
 		top = limit;
 
-	info->regstk.limit = (unsigned long *) limit;
-	info->regstk.top   = (unsigned long *) top;
-	info->bsp	   = ia64_rse_skip_regs(info->regstk.top, -sol);
-	info->top_rnat	   = sw->ar_rnat;
-	info->cfm	   = sw->ar_pfs;
-	info->ip	   = sw->b0;
+	info->regstk.limit = limit;
+	info->regstk.top   = top;
+	info->sw  = sw;
+	info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) info->regstk.top, -sol);
+	info->cfm = sw->ar_pfs;
+	info->ip  = sw->b0;
+#endif
 }
 
 void
-ia64_unwind_init_from_current (struct ia64_frame_info *info, struct pt_regs *regs)
+unw_init_from_current (struct unw_frame_info *info, struct pt_regs *regs)
 {
+#ifdef CONFIG_IA64_NEW_UNWIND
+	struct switch_stack *sw = (struct switch_stack *) regs - 1;
+
+	unw_init_frame_info(info, current, sw);
+	/* skip over interrupt frame: */
+	unw_unwind(info);
+#else
 	struct switch_stack *sw = (struct switch_stack *) regs - 1;
 	unsigned long sol, sof, *bsp, limit, top;
 
@@ -44,34 +1498,40 @@
 	memset(info, 0, sizeof(*info));
 
 	sol = (sw->ar_pfs >> 7) & 0x7f;	/* size of frame */
-	info->regstk.limit = (unsigned long *) limit;
-	info->regstk.top   = (unsigned long *) top;
-	info->top_rnat	   = sw->ar_rnat;
 
 	/* this gives us the bsp top level frame (kdb interrupt frame): */
 	bsp = ia64_rse_skip_regs((unsigned long *) top, -sol);
 
 	/* now skip past the interrupt frame: */
 	sof = regs->cr_ifs & 0x7f;	/* size of frame */
+
+	info->regstk.limit = limit;
+	info->regstk.top   = top;
+	info->sw = sw;
+	info->bsp = (unsigned long) ia64_rse_skip_regs(bsp, -sof);
 	info->cfm = regs->cr_ifs;
-	info->bsp = ia64_rse_skip_regs(bsp, -sof);
 	info->ip  = regs->cr_iip;
+#endif
 }
 
+#ifndef CONFIG_IA64_NEW_UNWIND
+
 static unsigned long
-read_reg (struct ia64_frame_info *info, int regnum, int *is_nat)
+read_reg (struct unw_frame_info *info, int regnum, int *is_nat)
 {
 	unsigned long *addr, *rnat_addr, rnat;
 
-	addr = ia64_rse_skip_regs(info->bsp, regnum);
-	if (addr < info->regstk.limit || addr >= info->regstk.top || ((long) addr & 0x7) != 0) {
+	addr = ia64_rse_skip_regs((unsigned long *) info->bsp, regnum);
+	if ((unsigned long) addr < info->regstk.limit
+	    || (unsigned long) addr >= info->regstk.top || ((long) addr & 0x7) != 0)
+	{
 		*is_nat = 1;
 		return 0xdeadbeefdeadbeef;
 	}
 	rnat_addr = ia64_rse_rnat_addr(addr);
 
-	if (rnat_addr >= info->regstk.top)
-		rnat = info->top_rnat;
+	if ((unsigned long) rnat_addr >= info->regstk.top)
+		rnat = info->sw->ar_rnat;
 	else
 		rnat = *rnat_addr;
 	*is_nat = (rnat & (1UL << ia64_rse_slot_num(addr))) != 0;
@@ -83,7 +1543,7 @@
  * store for r32.
  */
 int
-ia64_unwind_to_previous_frame (struct ia64_frame_info *info)
+unw_unwind (struct unw_frame_info *info)
 {
 	unsigned long sol, cfm = info->cfm;
 	int is_nat;
@@ -113,6 +1573,110 @@
 	sol = (cfm >> 7) & 0x7f;
 
 	info->cfm = cfm;
-	info->bsp = ia64_rse_skip_regs(info->bsp, -sol);
+	info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) info->bsp, -sol);
 	return 0;
+}
+#endif /* !CONFIG_IA64_NEW_UNWIND */
+
+#ifdef CONFIG_IA64_NEW_UNWIND
+
+void *
+unw_add_unwind_table (const char *name, unsigned long segment_base, unsigned long gp,
+		      void *table_start, void *table_end)
+{
+	struct unw_table *table;
+	struct unw_table_entry *start = table_start, *end = table_end;
+
+	if (end - start <= 0) {
+		dprintk("unwind: ignoring attempt to insert empty unwind table\n");
+		return 0;
+	}
+
+#ifdef UNWIND_TABLE_SORT_BUG
+	{
+		struct unw_table_entry *e1, *e2, tmp;
+
+		/* stupid bubble sort... */
+
+		for (e1 = start; e1 < end; ++e1) {
+			for (e2 = e1 + 1; e2 < end; ++e2) {
+				if (e2->start_offset < e1->start_offset) {
+					tmp = *e1;
+					*e1 = *e2;
+					*e2 = tmp;
+				}
+			}
+		}
+	}
+#endif
+
+	table = kmalloc(sizeof(*table), GFP_USER);
+	table->name = name;
+	table->segment_base = segment_base;
+	table->gp = gp;
+	table->start = segment_base + start[0].start_offset;
+	table->end = segment_base + end[-1].end_offset;
+	table->array = start;
+	table->length = end - start;
+
+	table->next = unw_tables;
+	unw_tables = table;
+
+	return table;
+}
+
+void
+unw_remove_unwind_table (void *handle)
+{
+	struct unw_table *table, *prev;
+
+	if (!handle) {
+		dprintk("unwind: ignoring attempt to remove non-existent unwind table\n");
+		return;
+	}
+
+	table = handle;
+	for (prev = (struct unw_table *) &unw_tables; prev; prev = prev->next)
+		if (prev->next == table)
+			break;
+	if (!prev) {
+		dprintk("unwind: failed to find unwind table %p\n", table);
+		return;
+	}
+	prev->next = table->next;
+	kfree(table);
+
+	/* XXX need to implement this... */
+	dprintk("unwind: don't forget to clear cache entries for this module!\n");
+}
+#endif /* CONFIG_IA64_NEW_UNWIND */
+
+void
+unw_init (void)
+{
+#ifdef CONFIG_IA64_NEW_UNWIND
+	extern int ia64_unw_start, ia64_unw_end, __gp;
+	long i, off;
+#	define SW(f)	struct_offset(struct switch_stack, f)
+
+	sw_offset[preg_index[UNW_REG_PRI_UNAT_GR]] = SW(ar_unat);
+	sw_offset[preg_index[UNW_REG_BSPSTORE]] = SW(ar_bspstore);
+	sw_offset[preg_index[UNW_REG_PFS]] = SW(ar_pfs);
+	sw_offset[preg_index[UNW_REG_RP]] = SW(b0);
+	sw_offset[preg_index[UNW_REG_UNAT]] = SW(ar_unat);
+	sw_offset[preg_index[UNW_REG_PR]] = SW(pr);
+	sw_offset[preg_index[UNW_REG_LC]] = SW(ar_lc);
+	sw_offset[preg_index[UNW_REG_FPSR]] = SW(ar_fpsr);
+	for (i = UNW_REG_R4, off = SW(r4); i <= UNW_REG_R7; ++i, off += 8)
+		sw_offset[preg_index[i]] = off;
+	for (i = UNW_REG_B1, off = SW(b1); i <= UNW_REG_B5; ++i, off += 8)
+		sw_offset[preg_index[i]] = off;
+	for (i = UNW_REG_F2, off = SW(f2); i <= UNW_REG_F5; ++i, off += 16)
+		sw_offset[preg_index[i]] = off;
+	for (i = UNW_REG_F16, off = SW(f16); i <= UNW_REG_F31; ++i, off += 16)
+		sw_offset[preg_index[i]] = off;
+
+	unw_add_unwind_table("kernel", KERNEL_START, (unsigned long) &__gp,
+			     &ia64_unw_start, &ia64_unw_end);
+#endif /* CONFIG_IA64_NEW_UNWIND */
 }
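
As an aside (not part of the patch): the unwinder expects the entries of
a registered table to be sorted by start_offset (the bubble sort above
works around tables that come in unsorted), which lets the descriptors
for a given IP be located with a simple binary search.  The real lookup
is not in this hunk; with a made-up function name, it amounts to roughly:

	/* Illustrative sketch of an IP lookup over a registered unwind table. */
	static struct unw_table_entry *
	example_lookup (struct unw_table *table, unsigned long ip)
	{
		unsigned long rel_ip = ip - table->segment_base;
		unsigned long lo = 0, hi = table->length, mid;
		struct unw_table_entry *e;

		while (lo < hi) {
			mid = (lo + hi) / 2;
			e = &table->array[mid];
			if (rel_ip < e->start_offset)
				hi = mid;
			else if (rel_ip >= e->end_offset)
				lo = mid + 1;
			else
				return e;	/* start_offset <= rel_ip < end_offset */
		}
		return NULL;
	}
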
diff -urN linux-davidm/arch/ia64/kernel/unwind_decoder.c linux-2.3.99-pre6-lia/arch/ia64/kernel/unwind_decoder.c
--- linux-davidm/arch/ia64/kernel/unwind_decoder.c	Wed Dec 31 16:00:00 1969
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/unwind_decoder.c	Tue May 23 17:35:51 2000
@@ -0,0 +1,459 @@
+/*
+ * Copyright (C) 2000 Hewlett-Packard Co
+ * Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * Generic IA-64 unwind info decoder.
+ *
+ * This file is used both by the Linux kernel and objdump.  Please keep
+ * the two copies of this file in sync.
+ *
+ * You need to customize the decoder by defining the following
+ * macros/constants before including this file:
+ *
+ *  Types:
+ *	unw_word	Unsigned integer type with at least 64 bits 
+ *
+ *  Register names:
+ *	UNW_REG_BSP
+ *	UNW_REG_BSPSTORE
+ *	UNW_REG_FPSR
+ *	UNW_REG_LC
+ *	UNW_REG_PFS
+ *	UNW_REG_PR
+ *	UNW_REG_RNAT
+ *	UNW_REG_PSP
+ *	UNW_REG_RP
+ *	UNW_REG_UNAT
+ *
+ *  Decoder action macros:
+ *	UNW_DEC_BAD_CODE(code)
+ *	UNW_DEC_ABI(fmt,abi,context,arg)
+ *	UNW_DEC_BR_GR(fmt,brmask,gr,arg)
+ *	UNW_DEC_BR_MEM(fmt,brmask,arg)
+ *	UNW_DEC_COPY_STATE(fmt,label,arg)
+ *	UNW_DEC_EPILOGUE(fmt,t,ecount,arg)
+ *	UNW_DEC_FRGR_MEM(fmt,grmask,frmask,arg)
+ *	UNW_DEC_FR_MEM(fmt,frmask,arg)
+ *	UNW_DEC_GR_GR(fmt,grmask,gr,arg)
+ *	UNW_DEC_GR_MEM(fmt,grmask,arg)
+ *	UNW_DEC_LABEL_STATE(fmt,label,arg)
+ *	UNW_DEC_MEM_STACK_F(fmt,t,size,arg)
+ *	UNW_DEC_MEM_STACK_V(fmt,t,arg)
+ *	UNW_DEC_PRIUNAT_GR(fmt,r,arg)
+ *	UNW_DEC_PRIUNAT_WHEN_GR(fmt,t,arg)
+ *	UNW_DEC_PRIUNAT_WHEN_MEM(fmt,t,arg)
+ *	UNW_DEC_PRIUNAT_WHEN_PSPREL(fmt,pspoff,arg)
+ *	UNW_DEC_PRIUNAT_WHEN_SPREL(fmt,spoff,arg)
+ *	UNW_DEC_PROLOGUE(fmt,body,rlen,arg)
+ *	UNW_DEC_PROLOGUE_GR(fmt,rlen,mask,grsave,arg)
+ *	UNW_DEC_REG_PSPREL(fmt,reg,pspoff,arg)
+ *	UNW_DEC_REG_REG(fmt,src,dst,arg)
+ *	UNW_DEC_REG_SPREL(fmt,reg,spoff,arg)
+ *	UNW_DEC_REG_WHEN(fmt,reg,t,arg)
+ *	UNW_DEC_RESTORE(fmt,t,abreg,arg)
+ *	UNW_DEC_RESTORE_P(fmt,qp,t,abreg,arg)
+ *	UNW_DEC_SPILL_BASE(fmt,pspoff,arg)
+ *	UNW_DEC_SPILL_MASK(fmt,imaskp,arg)
+ *	UNW_DEC_SPILL_PSPREL(fmt,t,abreg,pspoff,arg)
+ *	UNW_DEC_SPILL_PSPREL_P(fmt,qp,t,abreg,pspoff,arg)
+ *	UNW_DEC_SPILL_REG(fmt,t,abreg,x,ytreg,arg)
+ *	UNW_DEC_SPILL_REG_P(fmt,qp,t,abreg,x,ytreg,arg)
+ *	UNW_DEC_SPILL_SPREL(fmt,t,abreg,spoff,arg)
+ *	UNW_DEC_SPILL_SPREL_P(fmt,qp,t,abreg,pspoff,arg)
+ */
+
+static unw_word
+unw_decode_uleb128 (unsigned char **dpp)
+{
+  unsigned shift = 0;
+  unw_word byte, result = 0;
+  unsigned char *bp = *dpp;
+
+  while (1)
+    {
+      byte = *bp++;
+      result |= (byte & 0x7f) << shift;
+      if ((byte & 0x80) == 0)
+	break;
+      shift += 7;
+    }
+  *dpp = bp;
+  return result;
+}
+
+static unsigned char *
+unw_decode_x1 (unsigned char *dp, unsigned char code, void *arg)
+{
+  unsigned char byte1, abreg;
+  unw_word t, off;
+
+  byte1 = *dp++;
+  t = unw_decode_uleb128 (&dp);
+  off = unw_decode_uleb128 (&dp);
+  abreg = (byte1 & 0x7f);
+  if (byte1 & 0x80)
+	  UNW_DEC_SPILL_SPREL(X1, t, abreg, off, arg);
+  else
+	  UNW_DEC_SPILL_PSPREL(X1, t, abreg, off, arg);
+  return dp;
+}
+
+static unsigned char *
+unw_decode_x2 (unsigned char *dp, unsigned char code, void *arg)
+{
+  unsigned char byte1, byte2, abreg, x, ytreg;
+  unw_word t;
+
+  byte1 = *dp++; byte2 = *dp++;
+  t = unw_decode_uleb128 (&dp);
+  abreg = (byte1 & 0x7f);
+  ytreg = byte2;
+  x = (byte1 >> 7) & 1;
+  if ((byte1 & 0x80) == 0 && ytreg == 0)
+    UNW_DEC_RESTORE(X2, t, abreg, arg);
+  else
+    UNW_DEC_SPILL_REG(X2, t, abreg, x, ytreg, arg);
+  return dp;
+}
+
+static unsigned char *
+unw_decode_x3 (unsigned char *dp, unsigned char code, void *arg)
+{
+  unsigned char byte1, byte2, abreg, qp;
+  unw_word t, off;
+
+  byte1 = *dp++; byte2 = *dp++;
+  t = unw_decode_uleb128 (&dp);
+  off = unw_decode_uleb128 (&dp);
+
+  qp = (byte1 & 0x3f);
+  abreg = (byte2 & 0x7f);
+
+  if (byte1 & 0x80)
+    UNW_DEC_SPILL_SPREL_P(X3, qp, t, abreg, off, arg);
+  else
+    UNW_DEC_SPILL_PSPREL_P(X3, qp, t, abreg, off, arg);
+  return dp;
+}
+
+static unsigned char *
+unw_decode_x4 (unsigned char *dp, unsigned char code, void *arg)
+{
+  unsigned char byte1, byte2, byte3, qp, abreg, x, ytreg;
+  unw_word t;
+
+  byte1 = *dp++; byte2 = *dp++; byte3 = *dp++;
+  t = unw_decode_uleb128 (&dp);
+
+  qp = (byte1 & 0x3f);
+  abreg = (byte2 & 0x7f);
+  x = (byte2 >> 7) & 1;
+  ytreg = byte3;
+
+  if ((byte2 & 0x80) == 0 && byte3 == 0)
+    UNW_DEC_RESTORE_P(X4, qp, t, abreg, arg);
+  else
+    UNW_DEC_SPILL_REG_P(X4, qp, t, abreg, x, ytreg, arg);
+  return dp;
+}
+
+static unsigned char *
+unw_decode_r1 (unsigned char *dp, unsigned char code, void *arg)
+{
+  int body = (code & 0x20) != 0;
+  unw_word rlen;
+
+  rlen = (code & 0x1f);
+  UNW_DEC_PROLOGUE(R1, body, rlen, arg);
+  return dp;
+}
+
+static unsigned char *
+unw_decode_r2 (unsigned char *dp, unsigned char code, void *arg)
+{
+  unsigned char byte1, mask, grsave;
+  unw_word rlen;
+
+  byte1 = *dp++;
+
+  mask = ((code & 0x7) << 1) | ((byte1 >> 7) & 1);
+  grsave = (byte1 & 0x7f);
+  rlen = unw_decode_uleb128 (&dp);
+  UNW_DEC_PROLOGUE_GR(R2, rlen, mask, grsave, arg);
+  return dp;
+}
+
+static unsigned char *
+unw_decode_r3 (unsigned char *dp, unsigned char code, void *arg)
+{
+  unw_word rlen;
+
+  rlen = unw_decode_uleb128 (&dp);
+  UNW_DEC_PROLOGUE(R3, ((code & 0x3) == 1), rlen, arg);
+  return dp;
+}
+
+static unsigned char *
+unw_decode_p1 (unsigned char *dp, unsigned char code, void *arg)
+{
+  unsigned char brmask = (code & 0x1f);
+
+  UNW_DEC_BR_MEM(P1, brmask, arg);
+  return dp;
+}
+
+static unsigned char *
+unw_decode_p2_p5 (unsigned char *dp, unsigned char code, void *arg)
+{
+  if ((code & 0x10) == 0)
+    {
+      unsigned char byte1 = *dp++;
+
+      UNW_DEC_BR_GR(P2, ((code & 0xf) << 1) | ((byte1 >> 7) & 1),
+		    (byte1 & 0x7f), arg);
+    }
+  else if ((code & 0x08) == 0)
+    {
+      unsigned char byte1 = *dp++, r, dst;
+
+      r = ((code & 0x7) << 1) | ((byte1 >> 7) & 1);
+      dst = (byte1 & 0x7f);
+      switch (r)
+	{
+	case 0: UNW_DEC_REG_GR(P3, UNW_REG_PSP, dst, arg); break;
+	case 1: UNW_DEC_REG_GR(P3, UNW_REG_RP, dst, arg); break;
+	case 2: UNW_DEC_REG_GR(P3, UNW_REG_PFS, dst, arg); break;
+	case 3: UNW_DEC_REG_GR(P3, UNW_REG_PR, dst, arg); break;
+	case 4: UNW_DEC_REG_GR(P3, UNW_REG_UNAT, dst, arg); break;
+	case 5: UNW_DEC_REG_GR(P3, UNW_REG_LC, dst, arg); break;
+	case 6: UNW_DEC_RP_BR(P3, dst, arg); break;
+	case 7: UNW_DEC_REG_GR(P3, UNW_REG_RNAT, dst, arg); break;
+	case 8: UNW_DEC_REG_GR(P3, UNW_REG_BSP, dst, arg); break;
+	case 9: UNW_DEC_REG_GR(P3, UNW_REG_BSPSTORE, dst, arg); break;
+	case 10: UNW_DEC_REG_GR(P3, UNW_REG_FPSR, dst, arg); break;
+	case 11: UNW_DEC_PRIUNAT_GR(P3, dst, arg); break;
+	default: UNW_DEC_BAD_CODE(r); break;
+	}
+    }
+  else if ((code & 0x7) == 0)
+    UNW_DEC_SPILL_MASK(P4, dp, arg);
+  else if ((code & 0x7) == 1)
+    {
+      unw_word grmask, frmask, byte1, byte2, byte3;
+
+      byte1 = *dp++; byte2 = *dp++; byte3 = *dp++;
+      grmask = ((byte1 >> 4) & 0xf);
+      frmask = ((byte1 & 0xf) << 16) | (byte2 << 8) | byte3;
+      UNW_DEC_FRGR_MEM(P5, grmask, frmask, arg);
+    }
+  else
+    UNW_DEC_BAD_CODE(code);
+  return dp;
+}
+
+static unsigned char *
+unw_decode_p6 (unsigned char *dp, unsigned char code, void *arg)
+{
+  int gregs = (code & 0x10) != 0;
+  unsigned char mask = (code & 0x0f);
+
+  if (gregs)
+    UNW_DEC_GR_MEM(P6, mask, arg);
+  else
+    UNW_DEC_FR_MEM(P6, mask, arg);
+  return dp;
+}
+
+static unsigned char *
+unw_decode_p7_p10 (unsigned char *dp, unsigned char code, void *arg)
+{
+  unsigned char r, byte1, byte2;
+  unw_word t, size;
+
+  if ((code & 0x10) == 0)
+    {
+      r = (code & 0xf);
+      t = unw_decode_uleb128 (&dp);
+      switch (r)
+	{
+	case 0:
+	  size = unw_decode_uleb128 (&dp);
+	  UNW_DEC_MEM_STACK_F(P7, t, size, arg);
+	  break;
+
+	case 1: UNW_DEC_MEM_STACK_V(P7, t, arg); break;
+	case 2: UNW_DEC_SPILL_BASE(P7, t, arg); break;
+	case 3: UNW_DEC_REG_SPREL(P7, UNW_REG_PSP, t, arg); break;
+	case 4: UNW_DEC_REG_WHEN(P7, UNW_REG_RP, t, arg); break;
+	case 5: UNW_DEC_REG_PSPREL(P7, UNW_REG_RP, t, arg); break;
+	case 6: UNW_DEC_REG_WHEN(P7, UNW_REG_PFS, t, arg); break;
+	case 7: UNW_DEC_REG_PSPREL(P7, UNW_REG_PFS, t, arg); break;
+	case 8: UNW_DEC_REG_WHEN(P7, UNW_REG_PR, t, arg); break;
+	case 9: UNW_DEC_REG_PSPREL(P7, UNW_REG_PR, t, arg); break;
+	case 10: UNW_DEC_REG_WHEN(P7, UNW_REG_LC, t, arg); break;
+	case 11: UNW_DEC_REG_PSPREL(P7, UNW_REG_LC, t, arg); break;
+	case 12: UNW_DEC_REG_WHEN(P7, UNW_REG_UNAT, t, arg); break;
+	case 13: UNW_DEC_REG_PSPREL(P7, UNW_REG_UNAT, t, arg); break;
+	case 14: UNW_DEC_REG_WHEN(P7, UNW_REG_FPSR, t, arg); break;
+	case 15: UNW_DEC_REG_PSPREL(P7, UNW_REG_FPSR, t, arg); break;
+	default: UNW_DEC_BAD_CODE(r); break;
+	}
+    }
+  else
+    {
+      switch (code & 0xf)
+	{
+	case 0x0: /* p8 */
+	  {
+	    r = *dp++;
+	    t = unw_decode_uleb128 (&dp);
+	    switch (r)
+	      {
+	      case  1: UNW_DEC_REG_SPREL(P8, UNW_REG_RP, t, arg); break;
+	      case  2: UNW_DEC_REG_SPREL(P8, UNW_REG_PFS, t, arg); break;
+	      case  3: UNW_DEC_REG_SPREL(P8, UNW_REG_PR, t, arg); break;
+	      case  4: UNW_DEC_REG_SPREL(P8, UNW_REG_LC, t, arg); break;
+	      case  5: UNW_DEC_REG_SPREL(P8, UNW_REG_UNAT, t, arg); break;
+	      case  6: UNW_DEC_REG_SPREL(P8, UNW_REG_FPSR, t, arg); break;
+	      case  7: UNW_DEC_REG_WHEN(P8, UNW_REG_BSP, t, arg); break;
+	      case  8: UNW_DEC_REG_PSPREL(P8, UNW_REG_BSP, t, arg); break;
+	      case  9: UNW_DEC_REG_SPREL(P8, UNW_REG_BSP, t, arg); break;
+	      case 10: UNW_DEC_REG_WHEN(P8, UNW_REG_BSPSTORE, t, arg); break;
+	      case 11: UNW_DEC_REG_PSPREL(P8, UNW_REG_BSPSTORE, t, arg); break;
+	      case 12: UNW_DEC_REG_SPREL(P8, UNW_REG_BSPSTORE, t, arg); break;
+	      case 13: UNW_DEC_REG_WHEN(P8, UNW_REG_RNAT, t, arg); break;
+	      case 14: UNW_DEC_REG_PSPREL(P8, UNW_REG_RNAT, t, arg); break;
+	      case 15: UNW_DEC_REG_SPREL(P8, UNW_REG_RNAT, t, arg); break;
+	      case 16: UNW_DEC_PRIUNAT_WHEN_GR(P8, t, arg); break;
+	      case 17: UNW_DEC_PRIUNAT_PSPREL(P8, t, arg); break;
+	      case 18: UNW_DEC_PRIUNAT_SPREL(P8, t, arg); break;
+	      case 19: UNW_DEC_PRIUNAT_WHEN_MEM(P8, t, arg); break;
+	      default: UNW_DEC_BAD_CODE(r); break;
+	    }
+	  }
+	  break;
+
+	case 0x1:
+	  byte1 = *dp++; byte2 = *dp++;
+	  UNW_DEC_GR_GR(P9, (byte1 & 0xf), (byte2 & 0x7f), arg);
+	  break;
+
+	case 0xf: /* p10 */
+	  byte1 = *dp++; byte2 = *dp++;
+	  UNW_DEC_ABI(P10, byte1, byte2, arg);
+	  break;
+
+	case 0x9:
+	  return unw_decode_x1 (dp, code, arg);
+
+	case 0xa:
+	  return unw_decode_x2 (dp, code, arg);
+
+	case 0xb:
+	  return unw_decode_x3 (dp, code, arg);
+
+	case 0xc:
+	  return unw_decode_x4 (dp, code, arg);
+
+	default:
+	  UNW_DEC_BAD_CODE(code);
+	  break;
+	}
+    }
+  return dp;
+}
+
+static unsigned char *
+unw_decode_b1 (unsigned char *dp, unsigned char code, void *arg)
+{
+  unw_word label = (code & 0x1f);
+
+  if ((code & 0x20) != 0)
+    UNW_DEC_COPY_STATE(B1, label, arg);
+  else
+    UNW_DEC_LABEL_STATE(B1, label, arg);
+  return dp;
+}
+
+static unsigned char *
+unw_decode_b2 (unsigned char *dp, unsigned char code, void *arg)
+{
+  unw_word t;
+
+  t = unw_decode_uleb128 (&dp);
+  UNW_DEC_EPILOGUE(B2, t, (code & 0x1f), arg);
+  return dp;
+}
+
+static unsigned char *
+unw_decode_b3_x4 (unsigned char *dp, unsigned char code, void *arg)
+{
+  unw_word t, ecount, label;
+
+  if ((code & 0x10) == 0)
+    {
+      t = unw_decode_uleb128 (&dp);
+      ecount = unw_decode_uleb128 (&dp);
+      UNW_DEC_EPILOGUE(B3, t, ecount, arg);
+    }
+  else if ((code & 0x07) == 0)
+    {
+      label = unw_decode_uleb128 (&dp);
+      if ((code & 0x08) != 0)
+	UNW_DEC_COPY_STATE(B4, label, arg);
+      else
+	UNW_DEC_LABEL_STATE(B4, label, arg);
+    }
+  else
+    switch (code & 0x7)
+      {
+      case 1: return unw_decode_x1 (dp, code, arg);
+      case 2: return unw_decode_x2 (dp, code, arg);
+      case 3: return unw_decode_x3 (dp, code, arg);
+      case 4: return unw_decode_x4 (dp, code, arg);
+      default: UNW_DEC_BAD_CODE(code); break;
+      }
+  return dp;
+}
+
+typedef unsigned char *(*unw_decoder) (unsigned char *, unsigned char, void *);
+
+static unw_decoder unw_decode_table[2][8] =
+{
+  /* prologue table: */
+  {
+    unw_decode_r1,	/* 0 */
+    unw_decode_r1,
+    unw_decode_r2,
+    unw_decode_r3,
+    unw_decode_p1,	/* 4 */
+    unw_decode_p2_p5,
+    unw_decode_p6,
+    unw_decode_p7_p10
+  },
+  {
+    unw_decode_r1,	/* 0 */
+    unw_decode_r1,
+    unw_decode_r2,
+    unw_decode_r3,
+    unw_decode_b1,	/* 4 */
+    unw_decode_b1,
+    unw_decode_b2,
+    unw_decode_b3_x4
+  }
+};
+
+/*
+ * Decode one descriptor and return address of next descriptor.
+ */
+static inline unsigned char *
+unw_decode (unsigned char *dp, int inside_body, void *arg)
+{
+  unw_decoder decoder;
+  unsigned char code;
+
+  code = *dp++;
+  decoder = unw_decode_table[inside_body][code >> 5];
+  dp = (*decoder) (dp, code, arg);
+  return dp;
+}
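
As an aside (not part of the patch): most descriptor operands are
ULEB128-encoded, which is what unw_decode_uleb128() above consumes:
each byte contributes its low 7 bits, least-significant group first,
and the high bit flags a continuation byte.  A self-contained
user-level example:

	/* Standalone ULEB128 example; mirrors the decoder above. */
	#include <assert.h>

	typedef unsigned long long unw_word;

	static unw_word
	uleb128 (const unsigned char **dpp)
	{
		const unsigned char *bp = *dpp;
		unw_word byte, result = 0;
		unsigned shift = 0;

		do {
			byte = *bp++;
			result |= (byte & 0x7f) << shift;
			shift += 7;
		} while (byte & 0x80);
		*dpp = bp;
		return result;
	}

	int
	main (void)
	{
		/* 0x98765 == 624485 encodes as e5 8e 26 */
		const unsigned char buf[] = { 0xe5, 0x8e, 0x26 };
		const unsigned char *p = buf;

		assert(uleb128(&p) == 624485);
		return 0;
	}
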
diff -urN linux-davidm/arch/ia64/kernel/unwind_i.h linux-2.3.99-pre6-lia/arch/ia64/kernel/unwind_i.h
--- linux-davidm/arch/ia64/kernel/unwind_i.h	Wed Dec 31 16:00:00 1969
+++ linux-2.3.99-pre6-lia/arch/ia64/kernel/unwind_i.h	Thu May 25 23:05:53 2000
@@ -0,0 +1,151 @@
+/*
+ * Copyright (C) 2000 Hewlett-Packard Co
+ * Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * Kernel unwind support.
+ */
+
+#define UNW_VER(x)		((x) >> 48)
+#define UNW_FLAG_MASK		0x0000ffff00000000
+#define UNW_FLAG_OSMASK		0x0000f00000000000
+#define UNW_FLAG_EHANDLER(x)	((x) & 0x0000000100000000L)
+#define UNW_FLAG_UHANDLER(x)	((x) & 0x0000000200000000L)
+#define UNW_LENGTH(x)		((x) & 0x00000000ffffffffL)
+
+enum unw_register_index {
+	/* primary unat: */
+	UNW_REG_PRI_UNAT_GR,
+	UNW_REG_PRI_UNAT_MEM,
+
+	/* register stack */
+	UNW_REG_BSP,					/* register stack pointer */
+	UNW_REG_BSPSTORE,
+	UNW_REG_PFS,					/* previous function state */
+	UNW_REG_RNAT,
+	/* memory stack */
+	UNW_REG_PSP,					/* previous memory stack pointer */
+	/* return pointer: */
+	UNW_REG_RP,
+
+	/* preserved registers: */
+	UNW_REG_R4, UNW_REG_R5, UNW_REG_R6, UNW_REG_R7,
+	UNW_REG_UNAT, UNW_REG_PR, UNW_REG_LC, UNW_REG_FPSR,
+	UNW_REG_B1, UNW_REG_B2, UNW_REG_B3, UNW_REG_B4, UNW_REG_B5,
+	UNW_REG_F2, UNW_REG_F3, UNW_REG_F4, UNW_REG_F5,
+	UNW_REG_F16, UNW_REG_F17, UNW_REG_F18, UNW_REG_F19,
+	UNW_REG_F20, UNW_REG_F21, UNW_REG_F22, UNW_REG_F23,
+	UNW_REG_F24, UNW_REG_F25, UNW_REG_F26, UNW_REG_F27,
+	UNW_REG_F28, UNW_REG_F29, UNW_REG_F30, UNW_REG_F31,
+	UNW_NUM_REGS
+};
+
+struct unw_info_block {
+	u64 header;
+	u64 desc[0];		/* unwind descriptors */
+	/* personality routine and language-specific data follow behind descriptors */
+};
+
+struct unw_table_entry {
+	u64 start_offset;
+	u64 end_offset;
+	u64 info_offset;
+};
+
+struct unw_table {
+	struct unw_table *next;		/* must be first member! */
+	const char *name;
+	unsigned long gp;		/* global pointer for this load-module */
+	unsigned long segment_base;	/* base for offsets in the unwind table entries */
+	unsigned long start;
+	unsigned long end;
+	struct unw_table_entry *array;
+	unsigned long length;
+};
+
+enum unw_where {
+	UNW_WHERE_NONE,			/* register isn't saved at all */
+	UNW_WHERE_GR,			/* register is saved in a general register */
+	UNW_WHERE_FR,			/* register is saved in a floating-point register */
+	UNW_WHERE_BR,			/* register is saved in a branch register */
+	UNW_WHERE_SPREL,		/* register is saved on memstack (sp-relative) */
+	UNW_WHERE_PSPREL,		/* register is saved on memstack (psp-relative) */
+	/*
+	 * At the end of each prologue these locations get resolved to
+	 * UNW_WHERE_PSPREL and UNW_WHERE_GR, respectively:
+	 */
+	UNW_WHERE_SPILL_HOME,		/* register is saved in its spill home */
+	UNW_WHERE_GR_SAVE		/* register is saved in next general register */
+};
+
+#define UNW_WHEN_NEVER	0x7fffffff
+
+struct unw_reg_info {
+	unsigned long val;		/* save location: register number or offset */
+	enum unw_where where;		/* where the register gets saved */
+	int when;			/* when the register gets saved */
+};
+
+struct unw_state_record {
+	unsigned int first_region : 1;	/* is this the first region? */
+	unsigned int done : 1;		/* are we done scanning descriptors? */
+	unsigned int any_spills : 1;	/* got any register spills? */
+	unsigned int in_body : 1;	/* are we inside a body (as opposed to a prologue)? */
+	unsigned long flags;		/* see UNW_FLAG_* in unwind.h */
+
+	u8 *imask;			/* imask of spill_mask record or NULL */
+	unsigned long pr_val;		/* predicate values */
+	unsigned long pr_mask;		/* predicate mask */
+	long spill_offset;		/* psp-relative offset for spill base */
+	int region_start;
+	int region_len;
+	int epilogue_start;
+	int epilogue_count;
+	int when_target;
+
+	u8 gr_save_loc;			/* next general register to use for saving a register */
+	u8 return_link_reg;		/* branch register in which the return link is passed */
+
+	struct unw_reg_state {
+		struct unw_reg_state *next;
+		unsigned long label;		/* label of this state record */
+		struct unw_reg_info reg[UNW_NUM_REGS];
+	} curr, *stack, *reg_state_list;
+};
+
+enum unw_nat_type {
+	UNW_NAT_NONE,		/* NaT not represented */
+	UNW_NAT_VAL,		/* NaT represented by NaT value (fp reg) */
+	UNW_NAT_PRI_UNAT,	/* NaT value is in unat word at offset OFF  */
+	UNW_NAT_SCRATCH,	/* NaT value is in scratch.pri_unat */
+	UNW_NAT_STACKED		/* NaT is in rnat */
+};
+
+enum unw_insn_opcode {
+	UNW_INSN_ADD,			/* s[dst] += val */
+	UNW_INSN_MOVE,			/* s[dst] = s[val] */
+	UNW_INSN_MOVE2,			/* s[dst] = s[val]; s[dst+1] = s[val+1] */
+	UNW_INSN_MOVE_STACKED,		/* s[dst] = ia64_rse_skip(*s.bsp, val) */
+	UNW_INSN_LOAD_PSPREL,		/* s[dst] = *(*s.psp + 8*val) */
+	UNW_INSN_LOAD_SPREL,		/* s[dst] = *(*s.sp + 8*val) */
+	UNW_INSN_SETNAT_PRI_UNAT,	/* s[dst+1].nat.type = PRI_UNAT;
+					   s[dst+1].nat.off = *s.pri_unat - s[dst] */
+	UNW_INSN_SETNAT_TYPE		/* s[dst+1].nat.type = val */
+};
+
+struct unw_insn {
+	unsigned int opc	:  4;
+	unsigned int dst	:  9;
+	signed int val		: 19;
+};
+
+#define UNW_MAX_SCRIPT_LEN	(2*UNW_NUM_REGS)
+
+struct unw_script {
+	unsigned long ip;		/* ip this script is for */
+	unsigned long pr_mask;		/* mask of predicates script depends on */
+	unsigned long pr_val;		/* predicate values this script is for */
+	unsigned long flags;		/* see UNW_FLAG_* in unwind.h */
+	unsigned int count;		/* number of instructions in script */
+	int hint;			/* hint for next script to try */
+	struct unw_insn insn[UNW_MAX_SCRIPT_LEN];
+};
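
As an aside (not part of the patch): the UNW_INSN_* comments above are
essentially the whole contract between the script builder and
run_script().  A toy interpreter for the simplest opcodes would look
like the sketch below; the NaT and backing-store handling that the
remaining opcodes need is left out, and "s" stands for the array of
save-location words in the frame-info state:

	/* Toy interpreter, illustrative only. */
	static void
	toy_run_script (struct unw_script *script, unsigned long *s)
	{
		struct unw_insn *insn = script->insn;
		unsigned int i;

		for (i = 0; i < script->count; ++i, ++insn) {
			switch (insn->opc) {
			      case UNW_INSN_ADD:
				s[insn->dst] += insn->val;
				break;
			      case UNW_INSN_MOVE:
				s[insn->dst] = s[insn->val];
				break;
			      case UNW_INSN_MOVE2:
				s[insn->dst] = s[insn->val];
				s[insn->dst + 1] = s[insn->val + 1];
				break;
			      default:
				/* needs state this sketch doesn't model */
				break;
			}
		}
	}
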
diff -urN linux-davidm/arch/ia64/lib/Makefile linux-2.3.99-pre6-lia/arch/ia64/lib/Makefile
--- linux-davidm/arch/ia64/lib/Makefile	Thu Mar 30 16:56:04 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/lib/Makefile	Thu May 25 23:06:07 2000
@@ -5,15 +5,18 @@
 .S.o:
 	$(CC) $(AFLAGS) -c $< -o $@
 
-OBJS  = __divdi3.o __divsi3.o __udivdi3.o __udivsi3.o		\
+L_TARGET = lib.a
+
+L_OBJS  = __divdi3.o __divsi3.o __udivdi3.o __udivsi3.o		\
 	__moddi3.o __modsi3.o __umoddi3.o __umodsi3.o		\
 	checksum.o clear_page.o csum_partial_copy.o copy_page.o	\
 	copy_user.o clear_user.o memset.o strncpy_from_user.o	\
 	strlen.o strlen_user.o strnlen_user.o			\
 	flush.o do_csum.o
 
-lib.a: $(OBJS)
-	$(AR) rcs lib.a $(OBJS)
+LX_OBJS = io.o
+
+include $(TOPDIR)/Rules.make
 
 __divdi3.o: idiv.S
 	$(CC) $(AFLAGS) -c -o $@ $<
@@ -38,5 +41,3 @@
 
 __umodsi3.o: idiv.S
 	$(CC) $(AFLAGS) -c -DMODULO -DUNSIGNED -DSINGLE -c -o $@ $<
-
-include $(TOPDIR)/Rules.make
diff -urN linux-davidm/arch/ia64/lib/clear_page.S linux-2.3.99-pre6-lia/arch/ia64/lib/clear_page.S
--- linux-davidm/arch/ia64/lib/clear_page.S	Sun Feb  6 18:42:40 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/lib/clear_page.S	Thu May 25 23:06:16 2000
@@ -10,10 +10,11 @@
  * Output:
  * 	none
  *
- * Copyright (C) 1999 Hewlett-Packard Co
+ * Copyright (C) 1999-2000 Hewlett-Packard Co
  * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
- * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
  */
+#include <asm/asmmacro.h>
 #include <asm/page.h>
 
 	.text
@@ -21,12 +22,14 @@
 	.psr lsb
 	.lsb
 
-	.align 32
-	.global clear_page
-	.proc clear_page
-clear_page:
+GLOBAL_ENTRY(clear_page)
+	UNW(.prologue)
 	alloc r11=ar.pfs,1,0,0,0
+	UNW(.save ar.lc, r16)
 	mov r16=ar.lc		// slow
+
+	UNW(.body)
+
 	mov r17=PAGE_SIZE/32-1	// -1 = repeat/until
 	;;
 	adds r18=16,in0
@@ -38,5 +41,4 @@
 	;;
 	mov ar.lc=r16		// restore lc
 	br.ret.sptk.few rp
-
-	.endp clear_page
+END(clear_page)
diff -urN linux-davidm/arch/ia64/lib/clear_user.S linux-2.3.99-pre6-lia/arch/ia64/lib/clear_user.S
--- linux-davidm/arch/ia64/lib/clear_user.S	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/lib/clear_user.S	Thu May 25 23:06:22 2000
@@ -11,6 +11,8 @@
  * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
  */
 
+#include <asm/asmmacro.h>
+
 //
 // arguments
 //
@@ -23,11 +25,10 @@
 #define cnt		r16
 #define buf2		r17
 #define saved_lc	r18
-#define saved_pr	r19
-#define saved_pfs	r20
-#define tmp		r21
-#define len2		r22
-#define len3		r23
+#define saved_pfs	r19
+#define tmp		r20
+#define len2		r21
+#define len3		r22
 
 //
 // Theory of operations:
@@ -65,14 +66,14 @@
 	.psr lsb
 	.lsb
 
-	.align 32
- 	.global	__do_clear_user
- 	.proc	__do_clear_user
-
-__do_clear_user:
+GLOBAL_ENTRY(__do_clear_user)
+	UNW(.prologue)
+	UNW(.save ar.pfs, saved_pfs)
  	alloc	saved_pfs=ar.pfs,2,0,0,0
 	cmp.eq p6,p0=r0,len		// check for zero length
+	UNW(.save ar.lc, saved_lc)
 	mov saved_lc=ar.lc		// preserve ar.lc (slow)
+	UNW(.body)
 	;;				// avoid WAW on CFM
 	adds tmp=-1,len			// br.ctop is repeat/until
 	mov ret0=len			// return value is length at this point
@@ -222,4 +223,4 @@
 	mov ret0=len
 	mov ar.lc=saved_lc
 	br.ret.dptk.few rp
- 	.endp
+END(__do_clear_user)
diff -urN linux-davidm/arch/ia64/lib/copy_page.S linux-2.3.99-pre6-lia/arch/ia64/lib/copy_page.S
--- linux-davidm/arch/ia64/lib/copy_page.S	Fri Mar 10 15:24:02 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/lib/copy_page.S	Thu May 25 23:06:29 2000
@@ -13,6 +13,7 @@
  * Copyright (C) 1999 Hewlett-Packard Co
  * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
  */
+#include <asm/asmmacro.h>
 #include <asm/page.h>
 
 #define PIPE_DEPTH	6
@@ -32,19 +33,21 @@
 	.psr lsb
 	.lsb
 
-	.align 32
-	.global copy_page
-	.proc copy_page
-
-copy_page:
+GLOBAL_ENTRY(copy_page)
+	UNW(.prologue)
+	UNW(.save ar.pfs, saved_pfs)
 	alloc saved_pfs=ar.pfs,3,((2*PIPE_DEPTH+7)&~7),0,((2*PIPE_DEPTH+7)&~7)
 
 	.rotr t1[PIPE_DEPTH], t2[PIPE_DEPTH]
 	.rotp p[PIPE_DEPTH]
 
+	UNW(.save ar.lc, saved_lc)
 	mov saved_lc=ar.lc	// save ar.lc ahead of time
+	UNW(.save pr, saved_pr)
 	mov saved_pr=pr		// rotating predicates are preserved
 				// registers we must save.
+	UNW(.body)
+
 	mov src1=in1		// initialize 1st stream source 
 	adds src2=8,in1		// initialize 2nd stream source 
 	mov lcount=PAGE_SIZE/16-1 // as many 16bytes as there are on a page
@@ -87,5 +90,4 @@
 	mov ar.pfs=saved_pfs	// restore ar.ec 
 	mov ar.lc=saved_lc	// restore saved lc
 	br.ret.sptk.few rp	// bye...
-
-	.endp copy_page
+END(copy_page)
diff -urN linux-davidm/arch/ia64/lib/copy_user.S linux-2.3.99-pre6-lia/arch/ia64/lib/copy_user.S
--- linux-davidm/arch/ia64/lib/copy_user.S	Fri Mar 10 15:24:02 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/lib/copy_user.S	Thu May 25 23:06:38 2000
@@ -29,6 +29,8 @@
  * 	- fix extraneous stop bit introduced by the EX() macro.
  */
 
+#include <asm/asmmacro.h>
+
 // The label comes first because our store instruction contains a comma
 // and would otherwise confuse the preprocessor
 //
@@ -81,10 +83,9 @@
  	.psr	abi64
  	.psr	lsb
 
- 	.align	16
- 	.global	__copy_user
- 	.proc	__copy_user
-__copy_user:
+GLOBAL_ENTRY(__copy_user)
+	UNW(.prologue)
+	UNW(.save ar.pfs, saved_pfs)
 	alloc saved_pfs=ar.pfs,3,((2*PIPE_DEPTH+7)&~7),0,((2*PIPE_DEPTH+7)&~7)
 
 	.rotr val1[PIPE_DEPTH],val2[PIPE_DEPTH]
@@ -95,13 +96,17 @@
 
 	;;			// RAW of cfm when len=0
 	cmp.eq p8,p0=r0,len	// check for zero length
+	UNW(.save ar.lc, saved_lc)
 	mov saved_lc=ar.lc	// preserve ar.lc (slow)
(p8)	br.ret.spnt.few rp	// empty memcpy()
 	;;
 	add enddst=dst,len	// first byte after end of destination
 	add endsrc=src,len	// first byte after end of source
+	UNW(.save pr, saved_pr)
 	mov saved_pr=pr		// preserve predicates
 
+	UNW(.body)
+
 	mov dst1=dst		// copy because of rotation
 	mov ar.ec=PIPE_DEPTH
 	mov pr.rot=1<<16	// p16=true all others are false
@@ -400,7 +405,4 @@
 
 	mov ar.pfs=saved_pfs
 	br.ret.dptk.few rp
-
-
- 	.endp __copy_user
-
+END(__copy_user)
diff -urN linux-davidm/arch/ia64/lib/do_csum.S linux-2.3.99-pre6-lia/arch/ia64/lib/do_csum.S
--- linux-davidm/arch/ia64/lib/do_csum.S	Sun Feb  6 18:42:40 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/lib/do_csum.S	Thu May 25 23:06:47 2000
@@ -13,6 +13,8 @@
  *
  */
 
+#include <asm/asmmacro.h>
+
 //
 // Theory of operations:
 //	The goal is to go as quickly as possible to the point where
@@ -100,10 +102,9 @@
 
 // unsigned long do_csum(unsigned char *buf,int len)
 
-	.align 32
-	.global do_csum
-	.proc do_csum
-do_csum:
+GLOBAL_ENTRY(do_csum)
+	UNW(.prologue)
+	UNW(.save ar.pfs, saved_pfs)
 	alloc saved_pfs=ar.pfs,2,8,0,8
 
 	.rotr p[4], result[3]
@@ -125,6 +126,7 @@
 	;;
 	and lastoff=7,tmp1	// how many bytes off for last element
 	andcm last=tmp2,tmp3	// address of word containing last byte
+	UNW(.save pr, saved_pr)
 	mov saved_pr=pr		// preserve predicates (rotation)
 	;;
 	sub tmp3=last,first	// tmp3=distance from first to last
@@ -145,8 +147,12 @@
 	shl hmask=hmask,tmp2 	// build head mask, mask off [0,firstoff[
 	;;
 	shr.u tmask=tmask,tmp1	// build tail mask, mask off ]8,lastoff]
+	UNW(.save ar.lc, saved_lc)
 	mov saved_lc=ar.lc	// save lc
 	;;
+
+	UNW(.body)
+
 (p8)	and hmask=hmask,tmask	// apply tail mask to head mask if 1 word only
 (p9)	and p[1]=lastval,tmask	// mask last it as appropriate
 	shr.u tmp3=tmp3,3	// we do 8 bytes per loop
@@ -228,3 +234,4 @@
 	mov ar.lc=saved_lc
 (p10)	shr.u ret0=ret0,64-16	// + shift back to position = swap bytes
 	br.ret.sptk.few rp
+END(do_csum)
diff -urN linux-davidm/arch/ia64/lib/flush.S linux-2.3.99-pre6-lia/arch/ia64/lib/flush.S
--- linux-davidm/arch/ia64/lib/flush.S	Sun Feb  6 18:42:40 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/lib/flush.S	Thu May 25 23:07:00 2000
@@ -1,9 +1,10 @@
 /*
  * Cache flushing routines.
  *
- * Copyright (C) 1999 Hewlett-Packard Co
- * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999-2000 Hewlett-Packard Co
+ * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
  */
+#include <asm/asmmacro.h>
 #include <asm/page.h>
 
 	.text
@@ -11,12 +12,14 @@
 	.psr lsb
 	.lsb
 
-	.align 16
-	.global ia64_flush_icache_page
-	.proc ia64_flush_icache_page
-ia64_flush_icache_page:
+GLOBAL_ENTRY(ia64_flush_icache_page)
+	UNW(.prologue)
 	alloc r2=ar.pfs,1,0,0,0
+	UNW(.save ar.lc, r3)
 	mov r3=ar.lc			// save ar.lc	
+
+	UNW(.body)
+
 	mov r8=PAGE_SIZE/64-1		// repeat/until loop
 	;;
 	mov ar.lc=r8
@@ -34,4 +37,4 @@
 	;;	
 	mov ar.lc=r3			// restore ar.lc
 	br.ret.sptk.few rp
-	.endp ia64_flush_icache_page
+END(ia64_flush_icache_page)
diff -urN linux-davidm/arch/ia64/lib/idiv.S linux-2.3.99-pre6-lia/arch/ia64/lib/idiv.S
--- linux-davidm/arch/ia64/lib/idiv.S	Tue Feb  8 12:01:59 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/lib/idiv.S	Thu May 25 23:07:06 2000
@@ -31,6 +31,7 @@
 	  nops while maximizing parallelism
 */
 
+#include <asm/asmmacro.h>
 #include <asm/break.h>
 
 	.text
@@ -73,12 +74,10 @@
 #define PASTE(a,b)	PASTE1(a,b)
 #define NAME		PASTE(PASTE(__,SGN),PASTE(OP,PASTE(PREC,3)))
 
-	.align 32
-	.global NAME
-	.proc NAME
-NAME:
-
+GLOBAL_ENTRY(NAME)
+	UNW(.prologue)
 	alloc r2=ar.pfs,2,6,0,8
+	UNW(.save pr, r18)
 	mov r18=pr
 #ifdef SINGLE
 # ifdef UNSIGNED
@@ -101,6 +100,10 @@
 #endif
 
 	setf.sig f8=in0
+	UNW(.save ar.lc, r3)
+
+	UNW(.body)
+
 	mov r3=ar.lc		// save ar.lc
 	setf.sig f9=in1
 	;;
@@ -156,3 +159,4 @@
 	mov ar.lc=r3		// restore ar.lc
 	mov pr=r18,0xffffffffffff0000	// restore p16-p63
 	br.ret.sptk.few rp
+END(NAME)
diff -urN linux-davidm/arch/ia64/lib/io.c linux-2.3.99-pre6-lia/arch/ia64/lib/io.c
--- linux-davidm/arch/ia64/lib/io.c	Wed Dec 31 16:00:00 1969
+++ linux-2.3.99-pre6-lia/arch/ia64/lib/io.c	Thu May 25 23:07:14 2000
@@ -0,0 +1,54 @@
+#include <linux/module.h>
+#include <linux/types.h>
+
+#include <asm/io.h>
+
+/*
+ * Copy data from IO memory space to "real" memory space.
+ * This needs to be optimized.
+ */
+void
+__ia64_memcpy_fromio (void * to, unsigned long from, long count)
+{
+	while (count) {
+		count--;
+		*(char *) to = readb(from);
+		((char *) to)++;
+		from++;
+	}
+}
+
+/*
+ * Copy data from "real" memory space to IO memory space.
+ * This needs to be optimized.
+ */
+void
+__ia64_memcpy_toio (unsigned long to, void * from, long count)
+{
+	while (count) {
+		count--;
+		writeb(*(char *) from, to);
+		((char *) from)++;
+		to++;
+	}
+}
+
+/*
+ * "memset" on IO memory space.
+ * This needs to be optimized.
+ */
+void
+__ia64_memset_c_io (unsigned long dst, unsigned long c, long count)
+{
+	unsigned char ch = (char)(c & 0xff);
+
+	while (count) {
+		count--;
+		writeb(ch, dst);
+		dst++;
+	}
+}
+
+EXPORT_SYMBOL(__ia64_memcpy_fromio);
+EXPORT_SYMBOL(__ia64_memcpy_toio);
+EXPORT_SYMBOL(__ia64_memset_c_io);
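
As an aside (not part of the patch): the "needs to be optimized"
comments above are serious -- a byte-at-a-time loop over uncached space
is painful.  One obvious shape for a faster copy is sketched below; it
is illustrative only and assumes readl() may be used on the region and
that the destination ends up 4-byte aligned once the IO address is:

	/* Illustrative word-at-a-time variant of __ia64_memcpy_fromio(). */
	static void
	example_memcpy_fromio (void *to, unsigned long from, long count)
	{
		char *dst = to;

		/* byte copies until the IO address is 4-byte aligned */
		while (count > 0 && (from & 3)) {
			*dst++ = readb(from++);
			--count;
		}
		/* bulk of the copy, one 32-bit word at a time */
		while (count >= 4) {
			*(u32 *) dst = readl(from);
			dst += 4;
			from += 4;
			count -= 4;
		}
		/* trailing bytes */
		while (count-- > 0)
			*dst++ = readb(from++);
	}
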
diff -urN linux-davidm/arch/ia64/lib/memset.S linux-2.3.99-pre6-lia/arch/ia64/lib/memset.S
--- linux-davidm/arch/ia64/lib/memset.S	Sun Feb  6 18:42:40 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/lib/memset.S	Thu May 25 23:07:21 2000
@@ -14,6 +14,7 @@
  * Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
  */
 
+#include <asm/asmmacro.h>
 
 // arguments
 //
@@ -28,22 +29,23 @@
 #define cnt		r18
 #define buf2		r19
 #define saved_lc	r20
-#define saved_pr	r21
-#define tmp		r22
+#define tmp		r21
 
  	.text
  	.psr	abi64
  	.psr	lsb
 
- 	.align	16
- 	.global	memset
- 	.proc	memset
-
-memset:
+GLOBAL_ENTRY(memset)
+	UNW(.prologue)
+	UNW(.save ar.pfs, saved_pfs)
  	alloc	saved_pfs=ar.pfs,3,0,0,0	// cnt is sink here
 	cmp.eq p8,p0=r0,len	// check for zero length
+	UNW(.save ar.lc, saved_lc)
 	mov saved_lc=ar.lc	// preserve ar.lc (slow)
 	;; 
+
+	UNW(.body)
+
 	adds tmp=-1,len		// br.ctop is repeat/until
 	tbit.nz p6,p0=buf,0	// odd alignment
 (p8)	br.ret.spnt.few rp
@@ -108,4 +110,4 @@
 	;;
 (p6)	st1 [buf]=val		// only 1 byte left
 	br.ret.dptk.few rp
- 	.endp
+END(memset)
diff -urN linux-davidm/arch/ia64/lib/strlen.S linux-2.3.99-pre6-lia/arch/ia64/lib/strlen.S
--- linux-davidm/arch/ia64/lib/strlen.S	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/lib/strlen.S	Thu May 25 23:07:27 2000
@@ -16,6 +16,8 @@
  * 09/24/99 S.Eranian add speculation recovery code
  */
 
+#include <asm/asmmacro.h>
+
 //
 //
// This is an enhanced version of the basic strlen. It includes a combination
@@ -82,10 +84,9 @@
 	.psr lsb
 	.lsb
 
-	.align 32
-	.global strlen
-	.proc strlen
-strlen:
+GLOBAL_ENTRY(strlen)
+	UNW(.prologue)
+	UNW(.save ar.pfs, saved_pfs)
 	alloc saved_pfs=ar.pfs,11,0,0,8 // rotating must be multiple of 8
 
 	.rotr v[2], w[2]	// declares our 4 aliases
@@ -93,8 +94,12 @@
 	extr.u tmp=in0,0,3	// tmp=least significant 3 bits
 	mov orig=in0		// keep track of initial byte address
 	dep src=0,in0,0,3	// src=8byte-aligned in0 address
+	UNW(.save pr, saved_pr)
 	mov saved_pr=pr		// preserve predicates (rotation)
 	;;
+
+	UNW(.body)
+
 	ld8 v[1]=[src],8	// must not speculate: can fail here
 	shl tmp=tmp,3		// multiply by 8bits/byte
 	mov mask=-1		// our mask
@@ -194,5 +199,4 @@
 	sub ret0=ret0,tmp	// length=now - back -1
 	mov ar.pfs=saved_pfs	// because of ar.ec, restore no matter what
 	br.ret.sptk.few rp	// end of successful recovery code
-
-	.endp strlen
+END(strlen)
diff -urN linux-davidm/arch/ia64/lib/strlen_user.S linux-2.3.99-pre6-lia/arch/ia64/lib/strlen_user.S
--- linux-davidm/arch/ia64/lib/strlen_user.S	Sun Feb  6 18:42:40 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/lib/strlen_user.S	Thu May 25 23:07:33 2000
@@ -15,6 +15,8 @@
  * 09/24/99 S.Eranian added speculation recovery code
  */
 
+#include <asm/asmmacro.h>
+
 //
 // int strlen_user(char *)
 // ------------------------
@@ -93,10 +95,9 @@
 	.psr lsb
 	.lsb
 
-	.align 32
-	.global __strlen_user
-	.proc __strlen_user
-__strlen_user:
+GLOBAL_ENTRY(__strlen_user)
+	UNW(.prologue)
+	UNW(.save ar.pfs, saved_pfs)
 	alloc saved_pfs=ar.pfs,11,0,0,8
 
 	.rotr v[2], w[2]	// declares our 4 aliases
@@ -104,8 +105,12 @@
 	extr.u tmp=in0,0,3	// tmp=least significant 3 bits
 	mov orig=in0		// keep track of initial byte address
 	dep src=0,in0,0,3	// src=8byte-aligned in0 address
+	UNW(.save pr, saved_pr)
 	mov saved_pr=pr		// preserve predicates (rotation)
 	;;
+
+	UNW(.body)
+
 	ld8.s v[1]=[src],8	// load the initial 8bytes (must speculate)
 	shl tmp=tmp,3		// multiply by 8bits/byte
 	mov mask=-1		// our mask
@@ -209,5 +214,4 @@
 	mov pr=saved_pr,0xffffffffffff0000
 	mov ar.pfs=saved_pfs	// because of ar.ec, restore no matter what
 	br.ret.sptk.few rp
-	
-	.endp __strlen_user
+END(__strlen_user)
diff -urN linux-davidm/arch/ia64/lib/strncpy_from_user.S linux-2.3.99-pre6-lia/arch/ia64/lib/strncpy_from_user.S
--- linux-davidm/arch/ia64/lib/strncpy_from_user.S	Fri Mar 10 15:24:02 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/lib/strncpy_from_user.S	Thu May 25 23:07:45 2000
@@ -16,6 +16,8 @@
  *			 by Andreas Schwab <schwab@suse.de>).
  */
 
+#include <asm/asmmacro.h>
+
 #define EX(x...)				\
 99:	x;					\
 	.section __ex_table,"a";		\
@@ -28,10 +30,7 @@
 	.psr lsb
 	.lsb
 
-	.align 32
-	.global __strncpy_from_user
-	.proc __strncpy_from_user
-__strncpy_from_user:
+GLOBAL_ENTRY(__strncpy_from_user)
 	alloc r2=ar.pfs,3,0,0,0
 	mov r8=0
 	mov r9=in1
@@ -53,5 +52,4 @@
 
 .Lexit:
 	br.ret.sptk.few rp
-
-	.endp __strncpy_from_user
+END(__strncpy_from_user)
diff -urN linux-davidm/arch/ia64/lib/strnlen_user.S linux-2.3.99-pre6-lia/arch/ia64/lib/strnlen_user.S
--- linux-davidm/arch/ia64/lib/strnlen_user.S	Sun Feb  6 18:42:40 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/lib/strnlen_user.S	Thu May 25 23:07:55 2000
@@ -12,6 +12,8 @@
  * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
  */
 
+#include <asm/asmmacro.h>
+
 /* If a fault occurs, r8 gets set to -EFAULT and r9 gets cleared.  */
 #define EX(x...)				\
 	.section __ex_table,"a";		\
@@ -25,12 +27,14 @@
 	.psr lsb
 	.lsb
 
-	.align 32
-	.global __strnlen_user
-	.proc __strnlen_user
-__strnlen_user:
+GLOBAL_ENTRY(__strnlen_user)
+	UNW(.prologue)
 	alloc r2=ar.pfs,2,0,0,0
+	UNW(.save ar.lc, r16)
 	mov r16=ar.lc			// preserve ar.lc
+
+	UNW(.body)
+
 	add r3=-1,in1
 	;;
 	mov ar.lc=r3
@@ -51,5 +55,4 @@
 	mov r8=r9
 	mov ar.lc=r16			// restore ar.lc
 	br.ret.sptk.few rp
-
-	.endp __strnlen_user
+END(__strnlen_user)
diff -urN linux-davidm/arch/ia64/mm/init.c linux-2.3.99-pre6-lia/arch/ia64/mm/init.c
--- linux-davidm/arch/ia64/mm/init.c	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/mm/init.c	Wed May 17 20:50:19 2000
@@ -314,54 +314,6 @@
 	ia64_set_pta(pta | (0<<8) | ((3*(PAGE_SHIFT-3)+3)<<2) | 1);
 }
 
-#ifdef CONFIG_IA64_VIRTUAL_MEM_MAP
-
-static int
-create_mem_map_page_table (u64 start, u64 end, void *arg)
-{
-	unsigned long address, start_page, end_page;
-	struct page *map_start, *map_end;
-	pgd_t *pgd;
-	pmd_t *pmd;
-	pte_t *pte;
-	void *page;
-
-	map_start = mem_map + MAP_NR(start);
-	map_end   = mem_map + MAP_NR(end);
-
-	start_page = (unsigned long) map_start & PAGE_MASK;
-	end_page = PAGE_ALIGN((unsigned long) map_end);
-
-	printk("[%lx,%lx) -> %lx-%lx\n", start, end, start_page, end_page);
-
-	for (address = start_page; address < end_page; address += PAGE_SIZE) {
-		pgd = pgd_offset_k(address);
-		if (pgd_none(*pgd)) {
-			pmd = alloc_bootmem_pages(PAGE_SIZE);
-			clear_page(pmd);
-			pgd_set(pgd, pmd);
-			pmd += (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
-		} else
-			pmd = pmd_offset(pgd, address);
-		if (pmd_none(*pmd)) {
-			pte = alloc_bootmem_pages(PAGE_SIZE);
-			clear_page(pte);
-			pmd_set(pmd, pte);
-			pte += (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
-		} else
-			pte = pte_offset(pmd, address);
-
-		if (pte_none(*pte)) {
-			page = alloc_bootmem_pages(PAGE_SIZE);
-			clear_page(page);
-			set_pte(pte, mk_pte_phys(__pa(page), PAGE_KERNEL));
-		}
-	}
-	return 0;
-}
-
-#endif /* CONFIG_IA64_VIRTUAL_MEM_MAP */
-
 /*
  * Set up the page tables.
  */
diff -urN linux-davidm/arch/ia64/mm/tlb.c linux-2.3.99-pre6-lia/arch/ia64/mm/tlb.c
--- linux-davidm/arch/ia64/mm/tlb.c	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/mm/tlb.c	Thu May 25 23:08:05 2000
@@ -42,6 +42,66 @@
  */
 spinlock_t ptcg_lock = SPIN_LOCK_UNLOCKED; /* see <asm/pgtable.h> */
 
+#if defined(CONFIG_SMP) && !defined(CONFIG_ITANIUM_PTCG)
+
+#include <linux/irq.h>
+
+unsigned long 	flush_end, flush_start, flush_nbits, flush_rid;
+atomic_t flush_cpu_count;
+
+/*
+ * flush_tlb_no_ptcg is called with ptcg_lock locked
+ */
+static inline void
+flush_tlb_no_ptcg (__u64 start, __u64 end, __u64 nbits)
+{
+	__u64 flags;
+	__u64 saved_tpr;
+
+	/*
+	 * Sometimes this is called with interrupts disabled, which can
+	 * deadlock; to avoid that, we enable interrupts and raise the TPR
+	 * so that ONLY the IPI gets through.
+	 */
+	__save_flags(flags);
+	if (!(flags & IA64_PSR_I)) {
+		saved_tpr = ia64_get_tpr();
+		ia64_srlz_d();
+		ia64_set_tpr(IPI_IRQ - 16);
+		ia64_srlz_d();
+		local_irq_enable();
+	}
+
+	flush_rid = ia64_get_rr(start);
+	ia64_srlz_d();
+	flush_start = start;
+	flush_end = end;
+	flush_nbits = nbits;
+	atomic_set(&flush_cpu_count, smp_num_cpus - 1);
+	smp_send_flush_tlb();
+	/*
+	 * Purge local TLB entries. ALAT invalidation is done in ia64_leave_kernel.
+	 */
+	do {
+		__asm__ __volatile__ ("ptc.l %0,%1" :: "r"(start), "r"(nbits<<2) : "memory");
+		start += (1UL << nbits);
+	} while (start < end);
+
+	/*
+	 * Wait for other CPUs to finish purging entries.
+	 */
+	while (atomic_read(&flush_cpu_count)) {
+		/* Nothing */
+	}
+	if (!(flags & IA64_PSR_I)) {
+		local_irq_disable();
+		ia64_set_tpr(saved_tpr);
+		ia64_srlz_d();
+	}
+}
+
+#endif /* CONFIG_SMP && !CONFIG_ITANIUM_PTCG */
+
 void
 get_new_mmu_context (struct mm_struct *mm)
 {
@@ -143,15 +203,22 @@
 	start &= ~((1UL << nbits) - 1);
 
 	spin_lock(&ptcg_lock);
+#if defined(CONFIG_SMP) && !defined(CONFIG_ITANIUM_PTCG)
+	flush_tlb_no_ptcg(start, end, nbits);
+#else
 	do {
-#ifdef CONFIG_SMP
-		__asm__ __volatile__ ("ptc.g %0,%1;;srlz.i;;"
+# ifdef CONFIG_SMP
+		/*
+		 * Flush ALAT entries also.
+		 */
+		__asm__ __volatile__ ("ptc.ga %0,%1;;srlz.i;;"
 				      :: "r"(start), "r"(nbits<<2) : "memory");
-#else
+# else
 		__asm__ __volatile__ ("ptc.l %0,%1" :: "r"(start), "r"(nbits<<2) : "memory");
-#endif
+# endif
 		start += (1UL << nbits);
 	} while (start < end);
+#endif /* CONFIG_SMP && !CONFIG_ITANIUM_PTCG */
 	spin_unlock(&ptcg_lock);
 	ia64_insn_group_barrier();
 	ia64_srlz_i();			/* srlz.i implies srlz.d */
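
As an aside (not part of the patch): the other half of this protocol --
the IPI handler kicked off by smp_send_flush_tlb() -- lives in smp.c
and is not in this hunk.  The handler name below is made up, and the
real one also installs flush_rid in the region register around the
purge, but the gist is:

	/* Illustrative receiver side of the ptc.g-less flush. */
	static void
	example_flush_tlb_ipi_handler (void)
	{
		unsigned long start = flush_start;
		unsigned long end = flush_end;
		unsigned long nbits = flush_nbits;

		do {
			__asm__ __volatile__ ("ptc.l %0,%1"
					      :: "r"(start), "r"(nbits<<2) : "memory");
			start += (1UL << nbits);
		} while (start < end);

		atomic_dec(&flush_cpu_count);	/* lets the initiator stop spinning */
	}
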
diff -urN linux-davidm/arch/ia64/tools/Makefile linux-2.3.99-pre6-lia/arch/ia64/tools/Makefile
--- linux-davidm/arch/ia64/tools/Makefile	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/tools/Makefile	Mon May 15 12:09:49 2000
@@ -48,4 +48,4 @@
 
 endif
 
-.PHONY: all
+.PHONY: all modules
diff -urN linux-davidm/arch/ia64/tools/print_offsets.c linux-2.3.99-pre6-lia/arch/ia64/tools/print_offsets.c
--- linux-davidm/arch/ia64/tools/print_offsets.c	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/tools/print_offsets.c	Thu May 25 23:08:14 2000
@@ -58,11 +58,95 @@
     { "IA64_TASK_PID_OFFSET",		offsetof (struct task_struct, pid) },
     { "IA64_TASK_MM_OFFSET",		offsetof (struct task_struct, mm) },
     { "IA64_PT_REGS_CR_IPSR_OFFSET",	offsetof (struct pt_regs, cr_ipsr) },
+    { "IA64_PT_REGS_CR_IIP_OFFSET",	offsetof (struct pt_regs, cr_iip) },
+    { "IA64_PT_REGS_CR_IFS_OFFSET",	offsetof (struct pt_regs, cr_ifs) },
+    { "IA64_PT_REGS_AR_UNAT_OFFSET",	offsetof (struct pt_regs, ar_unat) },
+    { "IA64_PT_REGS_AR_PFS_OFFSET",	offsetof (struct pt_regs, ar_pfs) },
+    { "IA64_PT_REGS_AR_RSC_OFFSET",	offsetof (struct pt_regs, ar_rsc) },
+    { "IA64_PT_REGS_AR_RNAT_OFFSET",	offsetof (struct pt_regs, ar_rnat) },
+    { "IA64_PT_REGS_AR_BSPSTORE_OFFSET",offsetof (struct pt_regs, ar_bspstore) },
+    { "IA64_PT_REGS_PR_OFFSET",		offsetof (struct pt_regs, pr) },
+    { "IA64_PT_REGS_B6_OFFSET",		offsetof (struct pt_regs, b6) },
+    { "IA64_PT_REGS_LOADRS_OFFSET",	offsetof (struct pt_regs, loadrs) },
+    { "IA64_PT_REGS_R1_OFFSET",		offsetof (struct pt_regs, r1) },
+    { "IA64_PT_REGS_R2_OFFSET",		offsetof (struct pt_regs, r2) },
+    { "IA64_PT_REGS_R3_OFFSET",		offsetof (struct pt_regs, r3) },
     { "IA64_PT_REGS_R12_OFFSET",	offsetof (struct pt_regs, r12) },
+    { "IA64_PT_REGS_R13_OFFSET",	offsetof (struct pt_regs, r13) },
+    { "IA64_PT_REGS_R14_OFFSET",	offsetof (struct pt_regs, r14) },
+    { "IA64_PT_REGS_R15_OFFSET",	offsetof (struct pt_regs, r15) },
     { "IA64_PT_REGS_R8_OFFSET",		offsetof (struct pt_regs, r8) },
+    { "IA64_PT_REGS_R9_OFFSET",		offsetof (struct pt_regs, r9) },
+    { "IA64_PT_REGS_R10_OFFSET",	offsetof (struct pt_regs, r10) },
+    { "IA64_PT_REGS_R11_OFFSET",	offsetof (struct pt_regs, r11) },
     { "IA64_PT_REGS_R16_OFFSET",	offsetof (struct pt_regs, r16) },
-    { "IA64_SWITCH_STACK_B0_OFFSET",	offsetof (struct switch_stack, b0) },
-    { "IA64_SWITCH_STACK_CALLER_UNAT_OFFSET", offsetof (struct switch_stack, caller_unat) },
+    { "IA64_PT_REGS_R17_OFFSET",	offsetof (struct pt_regs, r17) },
+    { "IA64_PT_REGS_R18_OFFSET",	offsetof (struct pt_regs, r18) },
+    { "IA64_PT_REGS_R19_OFFSET",	offsetof (struct pt_regs, r19) },
+    { "IA64_PT_REGS_R20_OFFSET",	offsetof (struct pt_regs, r20) },
+    { "IA64_PT_REGS_R21_OFFSET",	offsetof (struct pt_regs, r21) },
+    { "IA64_PT_REGS_R22_OFFSET",	offsetof (struct pt_regs, r22) },
+    { "IA64_PT_REGS_R23_OFFSET",	offsetof (struct pt_regs, r23) },
+    { "IA64_PT_REGS_R24_OFFSET",	offsetof (struct pt_regs, r24) },
+    { "IA64_PT_REGS_R25_OFFSET",	offsetof (struct pt_regs, r25) },
+    { "IA64_PT_REGS_R26_OFFSET",	offsetof (struct pt_regs, r26) },
+    { "IA64_PT_REGS_R27_OFFSET",	offsetof (struct pt_regs, r27) },
+    { "IA64_PT_REGS_R28_OFFSET",	offsetof (struct pt_regs, r28) },
+    { "IA64_PT_REGS_R29_OFFSET",	offsetof (struct pt_regs, r29) },
+    { "IA64_PT_REGS_R30_OFFSET",	offsetof (struct pt_regs, r30) },
+    { "IA64_PT_REGS_R31_OFFSET",	offsetof (struct pt_regs, r31) },
+    { "IA64_PT_REGS_AR_CCV_OFFSET",	offsetof (struct pt_regs, ar_ccv) },
+    { "IA64_PT_REGS_AR_FPSR_OFFSET",	offsetof (struct pt_regs, ar_fpsr) },
+    { "IA64_PT_REGS_B0_OFFSET",		offsetof (struct pt_regs, b0) },
+    { "IA64_PT_REGS_B7_OFFSET",		offsetof (struct pt_regs, b7) },
+    { "IA64_PT_REGS_F6_OFFSET",		offsetof (struct pt_regs, f6) },
+    { "IA64_PT_REGS_F7_OFFSET",		offsetof (struct pt_regs, f7) },
+    { "IA64_PT_REGS_F8_OFFSET",		offsetof (struct pt_regs, f8) },
+    { "IA64_PT_REGS_F9_OFFSET",		offsetof (struct pt_regs, f9) },
+    { "IA64_SWITCH_STACK_CALLER_UNAT_OFFSET",	offsetof (struct switch_stack, caller_unat) },
+    { "IA64_SWITCH_STACK_AR_FPSR_OFFSET",	offsetof (struct switch_stack, ar_fpsr) },
+    { "IA64_SWITCH_STACK_F2_OFFSET",		offsetof (struct switch_stack, f2) },
+    { "IA64_SWITCH_STACK_F3_OFFSET",		offsetof (struct switch_stack, f3) },
+    { "IA64_SWITCH_STACK_F4_OFFSET",		offsetof (struct switch_stack, f4) },
+    { "IA64_SWITCH_STACK_F5_OFFSET",		offsetof (struct switch_stack, f5) },
+    { "IA64_SWITCH_STACK_F10_OFFSET",		offsetof (struct switch_stack, f10) },
+    { "IA64_SWITCH_STACK_F11_OFFSET",		offsetof (struct switch_stack, f11) },
+    { "IA64_SWITCH_STACK_F12_OFFSET",		offsetof (struct switch_stack, f12) },
+    { "IA64_SWITCH_STACK_F13_OFFSET",		offsetof (struct switch_stack, f13) },
+    { "IA64_SWITCH_STACK_F14_OFFSET",		offsetof (struct switch_stack, f14) },
+    { "IA64_SWITCH_STACK_F15_OFFSET",		offsetof (struct switch_stack, f15) },
+    { "IA64_SWITCH_STACK_F16_OFFSET",		offsetof (struct switch_stack, f16) },
+    { "IA64_SWITCH_STACK_F17_OFFSET",		offsetof (struct switch_stack, f17) },
+    { "IA64_SWITCH_STACK_F18_OFFSET",		offsetof (struct switch_stack, f18) },
+    { "IA64_SWITCH_STACK_F19_OFFSET",		offsetof (struct switch_stack, f19) },
+    { "IA64_SWITCH_STACK_F20_OFFSET",		offsetof (struct switch_stack, f20) },
+    { "IA64_SWITCH_STACK_F21_OFFSET",		offsetof (struct switch_stack, f21) },
+    { "IA64_SWITCH_STACK_F22_OFFSET",		offsetof (struct switch_stack, f22) },
+    { "IA64_SWITCH_STACK_F23_OFFSET",		offsetof (struct switch_stack, f23) },
+    { "IA64_SWITCH_STACK_F24_OFFSET",		offsetof (struct switch_stack, f24) },
+    { "IA64_SWITCH_STACK_F25_OFFSET",		offsetof (struct switch_stack, f25) },
+    { "IA64_SWITCH_STACK_F26_OFFSET",		offsetof (struct switch_stack, f26) },
+    { "IA64_SWITCH_STACK_F27_OFFSET",		offsetof (struct switch_stack, f27) },
+    { "IA64_SWITCH_STACK_F28_OFFSET",		offsetof (struct switch_stack, f28) },
+    { "IA64_SWITCH_STACK_F29_OFFSET",		offsetof (struct switch_stack, f29) },
+    { "IA64_SWITCH_STACK_F30_OFFSET",		offsetof (struct switch_stack, f30) },
+    { "IA64_SWITCH_STACK_F31_OFFSET",		offsetof (struct switch_stack, f31) },
+    { "IA64_SWITCH_STACK_R4_OFFSET",		offsetof (struct switch_stack, r4) },
+    { "IA64_SWITCH_STACK_R5_OFFSET",		offsetof (struct switch_stack, r5) },
+    { "IA64_SWITCH_STACK_R6_OFFSET",		offsetof (struct switch_stack, r6) },
+    { "IA64_SWITCH_STACK_R7_OFFSET",		offsetof (struct switch_stack, r7) },
+    { "IA64_SWITCH_STACK_B0_OFFSET",		offsetof (struct switch_stack, b0) },
+    { "IA64_SWITCH_STACK_B1_OFFSET",		offsetof (struct switch_stack, b1) },
+    { "IA64_SWITCH_STACK_B2_OFFSET",		offsetof (struct switch_stack, b2) },
+    { "IA64_SWITCH_STACK_B3_OFFSET",		offsetof (struct switch_stack, b3) },
+    { "IA64_SWITCH_STACK_B4_OFFSET",		offsetof (struct switch_stack, b4) },
+    { "IA64_SWITCH_STACK_B5_OFFSET",		offsetof (struct switch_stack, b5) },
+    { "IA64_SWITCH_STACK_AR_PFS_OFFSET",	offsetof (struct switch_stack, ar_pfs) },
+    { "IA64_SWITCH_STACK_AR_LC_OFFSET",		offsetof (struct switch_stack, ar_lc) },
+    { "IA64_SWITCH_STACK_AR_UNAT_OFFSET",	offsetof (struct switch_stack, ar_unat) },
+    { "IA64_SWITCH_STACK_AR_RNAT_OFFSET",	offsetof (struct switch_stack, ar_rnat) },
+    { "IA64_SWITCH_STACK_AR_BSPSTORE_OFFSET",	offsetof (struct switch_stack, ar_bspstore) },
+    { "IA64_SWITCH_STACK_PR_OFFSET",	offsetof (struct switch_stack, b0) },
     { "IA64_SIGCONTEXT_AR_BSP_OFFSET",	offsetof (struct sigcontext, sc_ar_bsp) },
     { "IA64_SIGCONTEXT_AR_RNAT_OFFSET",	offsetof (struct sigcontext, sc_ar_rnat) },
     { "IA64_SIGCONTEXT_FLAGS_OFFSET",	offsetof (struct sigcontext, sc_flags) },
diff -urN linux-davidm/arch/ia64/vmlinux.lds.S linux-2.3.99-pre6-lia/arch/ia64/vmlinux.lds.S
--- linux-davidm/arch/ia64/vmlinux.lds.S	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/arch/ia64/vmlinux.lds.S	Thu May 25 23:09:33 2000
@@ -32,6 +32,13 @@
 #endif
   _etext = .;
 
+  /* Read-only data */
+
+  __gp = ALIGN(8) + 0x200000;
+
+  /* Global data */
+  _data = .;
+
   /* Exception table */
   . = ALIGN(16);
   __start___ex_table = .;
@@ -39,19 +46,26 @@
 	{ *(__ex_table) }
   __stop___ex_table = .;
 
-  /* Kernel symbol names for modules: */
-  .kstrtab : AT(ADDR(.kstrtab) - PAGE_OFFSET)
-	{ *(.kstrtab) }
+  /* Unwind table */
+  ia64_unw_start = .;
+  .IA_64.unwind : AT(ADDR(.IA_64.unwind) - PAGE_OFFSET)
+	{ *(.IA_64.unwind) }
+  ia64_unw_end = .;
+  .IA_64.unwind_info : AT(ADDR(.IA_64.unwind_info) - PAGE_OFFSET)
+	{ *(.IA_64.unwind_info) }
 
-  /* The initial task and kernel stack */
-  . = ALIGN(PAGE_SIZE);
-  init_task : AT(ADDR(init_task) - PAGE_OFFSET)
-	{ *(init_task) }
+  .rodata : AT(ADDR(.rodata) - PAGE_OFFSET)
+	{ *(.rodata) }
+  .opd : AT(ADDR(.opd) - PAGE_OFFSET)
+	{ *(.opd) }
+
+  /* Initialization code and data: */
 
-  /* Startup code */
+  . = ALIGN(PAGE_SIZE);
   __init_begin = .;
   .text.init : AT(ADDR(.text.init) - PAGE_OFFSET)
 	{ *(.text.init) }
+
   .data.init : AT(ADDR(.data.init) - PAGE_OFFSET)
 	{ *(.data.init) }
    . = ALIGN(16);
@@ -66,6 +80,10 @@
   . = ALIGN(PAGE_SIZE);
   __init_end = .;
 
+  /* The initial task and kernel stack */
+  init_task : AT(ADDR(init_task) - PAGE_OFFSET)
+	{ *(init_task) }
+
   .data.page_aligned : AT(ADDR(.data.page_aligned) - PAGE_OFFSET)
         { *(.data.idt) }
 
@@ -73,17 +91,12 @@
   .data.cacheline_aligned : AT(ADDR(.data.cacheline_aligned) - PAGE_OFFSET)
         { *(.data.cacheline_aligned) }
 
-  /* Global data */
-  _data = .;
+  /* Kernel symbol names for modules: */
+  .kstrtab : AT(ADDR(.kstrtab) - PAGE_OFFSET)
+	{ *(.kstrtab) }
 
-  .rodata : AT(ADDR(.rodata) - PAGE_OFFSET)
-	{ *(.rodata) }
-  .opd : AT(ADDR(.opd) - PAGE_OFFSET)
-	{ *(.opd) }
   .data : AT(ADDR(.data) - PAGE_OFFSET)
 	{ *(.data) *(.gnu.linkonce.d*) CONSTRUCTORS }
-
-  __gp = ALIGN (8) + 0x200000;
 
   .got : AT(ADDR(.got) - PAGE_OFFSET)
 	{ *(.got.plt) *(.got) }
diff -urN linux-davidm/drivers/sound/emu10k1/main.c linux-2.3.99-pre6-lia/drivers/sound/emu10k1/main.c
--- linux-davidm/drivers/sound/emu10k1/main.c	Fri Apr 21 16:08:45 2000
+++ linux-2.3.99-pre6-lia/drivers/sound/emu10k1/main.c	Mon May 15 21:19:42 2000
@@ -80,7 +80,7 @@
 	"EMU10K1",
 };
 
-static struct pci_device_id emu10k1_pci_tbl[] __initdata = {
+static struct pci_device_id emu10k1_pci_tbl[] = {
 	{PCI_VENDOR_ID_CREATIVE, PCI_DEVICE_ID_CREATIVE_EMU10K1,
 	 PCI_ANY_ID, PCI_ANY_ID, 0, 0, EMU10K1},
 	{0,}
@@ -532,7 +532,7 @@
 	return CTSTATUS_SUCCESS;
 }
 
-static void __devexit audio_exit(struct emu10k1_card *card)
+static void audio_exit(struct emu10k1_card *card)
 {
 	kfree(card->waveout);
 	kfree(card->wavein);
@@ -550,7 +550,7 @@
 	return;
 }
 
-static void __devexit emu10k1_exit(struct emu10k1_card *card)
+static void emu10k1_exit(struct emu10k1_card *card)
 {
 	int ch;
 
@@ -764,7 +764,7 @@
 MODULE_AUTHOR("Bertrand Lee, Cai Ying. (Email to: emu10k1-devel@opensource.creative.com)");
 MODULE_DESCRIPTION("Creative EMU10K1 PCI Audio Driver v" DRIVER_VERSION "\nCopyright (C) 1999 Creative Technology Ltd.");
 
-static struct pci_driver emu10k1_pci_driver __initdata = {
+static struct pci_driver emu10k1_pci_driver = {
 	name:"emu10k1",
 	id_table:emu10k1_pci_tbl,
 	probe:emu10k1_probe,
diff -urN linux-davidm/fs/binfmt_elf.c linux-2.3.99-pre6-lia/fs/binfmt_elf.c
--- linux-davidm/fs/binfmt_elf.c	Thu May 25 23:22:10 2000
+++ linux-2.3.99-pre6-lia/fs/binfmt_elf.c	Mon May 22 13:35:15 2000
@@ -280,11 +280,13 @@
 
 #ifdef CONFIG_BINFMT_ELF32
 	    set_brk(ia32_mm_addr(vaddr + load_addr), vaddr + load_addr + eppnt->p_memsz);
-	    memset((char *) vaddr + load_addr + eppnt->p_filesz, 0,
-		   eppnt->p_memsz - eppnt->p_filesz);
-	    kernel_read(interpreter, eppnt->p_offset, (char *)(vaddr + load_addr),
-			eppnt->p_filesz);
 	    map_addr = vaddr + load_addr;
+	    if (eppnt->p_memsz < (1UL<<32) && map_addr <= (1UL<<32) - eppnt->p_memsz) {
+		    memset((char *) map_addr + eppnt->p_filesz, 0,
+			   eppnt->p_memsz - eppnt->p_filesz);
+		    kernel_read(interpreter, eppnt->p_offset, (char *)map_addr, eppnt->p_filesz);
+	    } else
+		    map_addr = -EINVAL;
 #else /* !CONFIG_BINFMT_ELF32 */
 	    map_addr = do_mmap(interpreter,
 			    load_addr + ELF_PAGESTART(vaddr),
@@ -293,16 +295,6 @@
 			    elf_type,
 			    eppnt->p_offset - ELF_PAGEOFFSET(eppnt->p_vaddr));
 #endif /* !CONFIG_BINFMT_ELF32 */
-	    if (IS_ERR(map_addr)) {
-		    printk("elf_interp:0x%lx+%lx, 0x%lx(0x%lx)\n",
-			   (unsigned long)vaddr, (unsigned long)load_addr,
-			   (unsigned long) eppnt->p_memsz, (unsigned long) eppnt->p_filesz);
-		    map_addr = vaddr + load_addr;
-		    do_brk(map_addr & PAGE_MASK, eppnt->p_filesz);
-		    if (kernel_read(interpreter, eppnt->p_offset, (char *) map_addr,
-				    eppnt->p_filesz) < 0)
-			    goto out_close;
-	    }
 
 	    if (!load_addr_set && interp_elf_ex->e_type == ET_DYN) {
 		load_addr = map_addr - ELF_PAGESTART(vaddr);
@@ -507,17 +499,20 @@
 			if (strcmp(elf_interpreter,"/usr/lib/libc.so.1") == 0 ||
 			    strcmp(elf_interpreter,"/usr/lib/ld.so.1") == 0)
 				ibcs2_interpreter = 1;
-#ifdef CONFIG_BINFMT_ELF32
-#define INTRP32	"/lib/i386/ld-linux.so.2"
-			if (strcmp(elf_interpreter,"/lib/ld-linux.so.2") == 0) {
+#if defined(__ia64__) && !defined(CONFIG_BINFMT_ELF32)
+			/*
+			 * XXX temporary gross hack until all IA-64 Linux binaries
+			 * use /lib/ld-linux-ia64.so.1 as the linker name.
+			 */
+#define INTRP64	"/lib/ld-linux-ia64.so.1"
+			if (strcmp(elf_interpreter,"/lib/ld-linux.so.2") == 0) {
 				kfree(elf_interpreter);
-				elf_interpreter=(char *)kmalloc(sizeof(INTRP32),
-								GFP_KERNEL);
-				if (!elf_interpreter)
-					goto out_free_file;
-				strcpy(elf_interpreter, INTRP32);
-			}
-#endif /* CONFIG_BINFMT_ELF32 */
+				elf_interpreter=(char *)kmalloc(sizeof(INTRP64), GFP_KERNEL);
+ 				if (!elf_interpreter)
+ 					goto out_free_file;
+				strcpy(elf_interpreter, INTRP64);
+ 			}
+#endif /* defined(__ia64__) && !defined(CONFIG_BINFMT_ELF32) */
 #if 0
 			printk("Using ELF interpreter %s\n", elf_interpreter);
 #endif
@@ -659,11 +654,14 @@
 
 #ifdef CONFIG_BINFMT_ELF32
 		set_brk(ia32_mm_addr(vaddr + load_bias), vaddr + load_bias + elf_ppnt->p_memsz);
-		memset((char *) vaddr + load_bias + elf_ppnt->p_filesz, 0,
-		       elf_ppnt->p_memsz - elf_ppnt->p_filesz);
-		kernel_read(bprm->file, elf_ppnt->p_offset, (char *) (vaddr + load_bias),
-			    elf_ppnt->p_filesz);
 		error = vaddr + load_bias;
+		if (elf_ppnt->p_memsz < (1UL<<32) && error <= (1UL<<32) - elf_ppnt->p_memsz) {
+			memset((char *) error + elf_ppnt->p_filesz, 0,
+			       elf_ppnt->p_memsz - elf_ppnt->p_filesz);
+			kernel_read(bprm->file, elf_ppnt->p_offset, (char *) error,
+				    elf_ppnt->p_filesz);
+		} else
+			error = -EINVAL;
 #else /* CONFIG_BINFMT_ELF32 */
 		error = do_mmap(bprm->file, ELF_PAGESTART(load_bias + vaddr),
 		                (elf_ppnt->p_filesz +
@@ -671,16 +669,6 @@
 		                elf_prot, elf_flags, (elf_ppnt->p_offset -
 		                ELF_PAGEOFFSET(elf_ppnt->p_vaddr)));
 #endif /* CONFIG_BINFMT_ELF32 */
-		if (IS_ERR(error)) {
-			printk("do_load:0x%lx+%lx, 0x%lx(0x%lx)\n",
-			       (unsigned long)vaddr, (unsigned long)load_bias,
-			       (unsigned long)elf_ppnt->p_memsz,
-			       (unsigned long)elf_ppnt->p_filesz);
-			error = vaddr + load_bias;
-			do_brk(error & PAGE_MASK, elf_ppnt->p_filesz);
-			error = kernel_read(bprm->file, elf_ppnt->p_offset, (char *) error,
-					    elf_ppnt->p_filesz);
-		}
 
 		if (!load_addr_set) {
 			load_addr_set = 1;
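
For illustration, the 4GB checks added in the CONFIG_BINFMT_ELF32 paths above can be read as the following stand-alone sketch (the names fits_below_4g, addr and memsz are mine, not part of the patch):

    /*
     * Sketch of the bounds check used above: testing memsz first keeps
     * the subtraction from wrapping, and addr <= 2^32 - memsz is the
     * same as addr + memsz <= 2^32, i.e. the whole segment stays
     * within the 32-bit address range.
     */
    static int fits_below_4g (unsigned long addr, unsigned long memsz)
    {
            return memsz < (1UL << 32) && addr <= (1UL << 32) - memsz;
    }
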
diff -urN linux-davidm/fs/filesystems.c linux-2.3.99-pre6-lia/fs/filesystems.c
--- linux-davidm/fs/filesystems.c	Mon Mar 13 12:43:48 2000
+++ linux-2.3.99-pre6-lia/fs/filesystems.c	Sat May  6 00:44:13 2000
@@ -52,7 +52,7 @@
 #ifdef CONFIG_NFSD_MODULE
 int (*do_nfsservctl)(int, void *, void *) = NULL;
 #endif
-int
+long
 asmlinkage sys_nfsservctl(int cmd, void *argp, void *resp)
 {
 #ifndef CONFIG_NFSD_MODULE
diff -urN linux-davidm/fs/nfsd/nfsctl.c linux-2.3.99-pre6-lia/fs/nfsd/nfsctl.c
--- linux-davidm/fs/nfsd/nfsctl.c	Mon Apr 10 23:02:46 2000
+++ linux-2.3.99-pre6-lia/fs/nfsd/nfsctl.c	Sat May  6 00:43:51 2000
@@ -218,7 +218,7 @@
 };
 #define CMD_MAX (sizeof(sizes)/sizeof(sizes[0])-1)
 
-int
+long
 asmlinkage handle_sys_nfsservctl(int cmd, void *opaque_argp, void *opaque_resp)
 {
 	struct nfsctl_arg *	argp = opaque_argp;
diff -urN linux-davidm/fs/readdir.c linux-2.3.99-pre6-lia/fs/readdir.c
--- linux-davidm/fs/readdir.c	Mon Apr 24 18:56:33 2000
+++ linux-2.3.99-pre6-lia/fs/readdir.c	Sat May  6 16:46:37 2000
@@ -32,6 +32,10 @@
 	return res;
 }
 
+#define NAME_OFFSET(de) ((int) ((de)->d_name - (char *) (de)))
+#define ROUND_UP(x) (((x)+sizeof(long)-1) & ~(sizeof(long)-1))
+
+#if !defined(__ia64__)
 /*
  * Traditional linux readdir() handling..
  *
@@ -40,8 +44,6 @@
  * anyway. Thus the special "fillonedir()" function for that
  * case (the low-level handlers don't need to care about this).
  */
-#define NAME_OFFSET(de) ((int) ((de)->d_name - (char *) (de)))
-#define ROUND_UP(x) (((x)+sizeof(long)-1) & ~(sizeof(long)-1))
 
 struct old_linux_dirent {
 	unsigned long	d_ino;
@@ -96,6 +98,8 @@
 out:
 	return error;
 }
+
+#endif /* !defined(__ia64__) */
 
 /*
  * New, all-improved, singing, dancing, iBCS2-compliant getdents()
diff -urN linux-davidm/include/asm-ia64/asmmacro.h linux-2.3.99-pre6-lia/include/asm-ia64/asmmacro.h
--- linux-davidm/include/asm-ia64/asmmacro.h	Wed Dec 31 16:00:00 1969
+++ linux-2.3.99-pre6-lia/include/asm-ia64/asmmacro.h	Thu May 25 23:10:15 2000
@@ -0,0 +1,48 @@
+#ifndef _ASM_IA64_ASMMACRO_H
+#define _ASM_IA64_ASMMACRO_H
+
+/*
+ * Copyright (C) 2000 Hewlett-Packard Co
+ * Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ */
+
+#if 1
+
+/*
+ * This is a hack that's necessary as long as we support old versions
+ * of gas, that have no unwind support.
+ */
+#include <linux/config.h>
+
+#ifdef CONFIG_IA64_NEW_UNWIND
+# define UNW(args...)	args
+#else
+# define UNW(args...)
+#endif
+
+#endif
+
+#define ENTRY(name)				\
+	.align 16;				\
+	.proc name;				\
+name:
+
+#define GLOBAL_ENTRY(name)			\
+	.global name;				\
+	ENTRY(name)
+
+#define END(name)				\
+	.endp name
+
+/*
+ * Helper macros to make unwind directives more readable:
+ */
+
+/* prologue_gr: */
+#define ASM_UNW_PRLG_RP			0x8
+#define ASM_UNW_PRLG_PFS		0x4
+#define ASM_UNW_PRLG_PSP		0x2
+#define ASM_UNW_PRLG_PR			0x1
+#define ASM_UNW_PRLG_GRSAVE(ninputs)	(32+(ninputs))
+
+#endif /* _ASM_IA64_ASMMACRO_H */
diff -urN linux-davidm/include/asm-ia64/iosapic.h linux-2.3.99-pre6-lia/include/asm-ia64/iosapic.h
--- linux-davidm/include/asm-ia64/iosapic.h	Tue Feb  8 12:01:59 2000
+++ linux-2.3.99-pre6-lia/include/asm-ia64/iosapic.h	Thu May 25 23:10:29 2000
@@ -92,7 +92,7 @@
  * }
  */
 extern unsigned int iosapic_version(unsigned long);
-extern void iosapic_init(unsigned long);
+extern void iosapic_init(unsigned long, int);
 
 struct iosapic_vector {
 	unsigned long iosapic_base; /* IOSAPIC Base address */
diff -urN linux-davidm/include/asm-ia64/offsets.h linux-2.3.99-pre6-lia/include/asm-ia64/offsets.h
--- linux-davidm/include/asm-ia64/offsets.h	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/include/asm-ia64/offsets.h	Thu May 25 23:10:59 2000
@@ -25,11 +25,95 @@
 #define IA64_TASK_PID_OFFSET		188	/* 0xbc */
 #define IA64_TASK_MM_OFFSET		88	/* 0x58 */
 #define IA64_PT_REGS_CR_IPSR_OFFSET	0	/* 0x0 */
+#define IA64_PT_REGS_CR_IIP_OFFSET	8	/* 0x8 */
+#define IA64_PT_REGS_CR_IFS_OFFSET	16	/* 0x10 */
+#define IA64_PT_REGS_AR_UNAT_OFFSET	24	/* 0x18 */
+#define IA64_PT_REGS_AR_PFS_OFFSET	32	/* 0x20 */
+#define IA64_PT_REGS_AR_RSC_OFFSET	40	/* 0x28 */
+#define IA64_PT_REGS_AR_RNAT_OFFSET	48	/* 0x30 */
+#define IA64_PT_REGS_AR_BSPSTORE_OFFSET	56	/* 0x38 */
+#define IA64_PT_REGS_PR_OFFSET		64	/* 0x40 */
+#define IA64_PT_REGS_B6_OFFSET		72	/* 0x48 */
+#define IA64_PT_REGS_LOADRS_OFFSET	80	/* 0x50 */
+#define IA64_PT_REGS_R1_OFFSET		88	/* 0x58 */
+#define IA64_PT_REGS_R2_OFFSET		96	/* 0x60 */
+#define IA64_PT_REGS_R3_OFFSET		104	/* 0x68 */
 #define IA64_PT_REGS_R12_OFFSET		112	/* 0x70 */
+#define IA64_PT_REGS_R13_OFFSET		120	/* 0x78 */
+#define IA64_PT_REGS_R14_OFFSET		128	/* 0x80 */
+#define IA64_PT_REGS_R15_OFFSET		136	/* 0x88 */
 #define IA64_PT_REGS_R8_OFFSET		144	/* 0x90 */
+#define IA64_PT_REGS_R9_OFFSET		152	/* 0x98 */
+#define IA64_PT_REGS_R10_OFFSET		160	/* 0xa0 */
+#define IA64_PT_REGS_R11_OFFSET		168	/* 0xa8 */
 #define IA64_PT_REGS_R16_OFFSET		176	/* 0xb0 */
-#define IA64_SWITCH_STACK_B0_OFFSET	464	/* 0x1d0 */
+#define IA64_PT_REGS_R17_OFFSET		184	/* 0xb8 */
+#define IA64_PT_REGS_R18_OFFSET		192	/* 0xc0 */
+#define IA64_PT_REGS_R19_OFFSET		200	/* 0xc8 */
+#define IA64_PT_REGS_R20_OFFSET		208	/* 0xd0 */
+#define IA64_PT_REGS_R21_OFFSET		216	/* 0xd8 */
+#define IA64_PT_REGS_R22_OFFSET		224	/* 0xe0 */
+#define IA64_PT_REGS_R23_OFFSET		232	/* 0xe8 */
+#define IA64_PT_REGS_R24_OFFSET		240	/* 0xf0 */
+#define IA64_PT_REGS_R25_OFFSET		248	/* 0xf8 */
+#define IA64_PT_REGS_R26_OFFSET		256	/* 0x100 */
+#define IA64_PT_REGS_R27_OFFSET		264	/* 0x108 */
+#define IA64_PT_REGS_R28_OFFSET		272	/* 0x110 */
+#define IA64_PT_REGS_R29_OFFSET		280	/* 0x118 */
+#define IA64_PT_REGS_R30_OFFSET		288	/* 0x120 */
+#define IA64_PT_REGS_R31_OFFSET		296	/* 0x128 */
+#define IA64_PT_REGS_AR_CCV_OFFSET	304	/* 0x130 */
+#define IA64_PT_REGS_AR_FPSR_OFFSET	312	/* 0x138 */
+#define IA64_PT_REGS_B0_OFFSET		320	/* 0x140 */
+#define IA64_PT_REGS_B7_OFFSET		328	/* 0x148 */
+#define IA64_PT_REGS_F6_OFFSET		336	/* 0x150 */
+#define IA64_PT_REGS_F7_OFFSET		352	/* 0x160 */
+#define IA64_PT_REGS_F8_OFFSET		368	/* 0x170 */
+#define IA64_PT_REGS_F9_OFFSET		384	/* 0x180 */
 #define IA64_SWITCH_STACK_CALLER_UNAT_OFFSET 0	/* 0x0 */
+#define IA64_SWITCH_STACK_AR_FPSR_OFFSET	8	/* 0x8 */
+#define IA64_SWITCH_STACK_F2_OFFSET	16	/* 0x10 */
+#define IA64_SWITCH_STACK_F3_OFFSET	32	/* 0x20 */
+#define IA64_SWITCH_STACK_F4_OFFSET	48	/* 0x30 */
+#define IA64_SWITCH_STACK_F5_OFFSET	64	/* 0x40 */
+#define IA64_SWITCH_STACK_F10_OFFSET	80	/* 0x50 */
+#define IA64_SWITCH_STACK_F11_OFFSET	96	/* 0x60 */
+#define IA64_SWITCH_STACK_F12_OFFSET	112	/* 0x70 */
+#define IA64_SWITCH_STACK_F13_OFFSET	128	/* 0x80 */
+#define IA64_SWITCH_STACK_F14_OFFSET	144	/* 0x90 */
+#define IA64_SWITCH_STACK_F15_OFFSET	160	/* 0xa0 */
+#define IA64_SWITCH_STACK_F16_OFFSET	176	/* 0xb0 */
+#define IA64_SWITCH_STACK_F17_OFFSET	192	/* 0xc0 */
+#define IA64_SWITCH_STACK_F18_OFFSET	208	/* 0xd0 */
+#define IA64_SWITCH_STACK_F19_OFFSET	224	/* 0xe0 */
+#define IA64_SWITCH_STACK_F20_OFFSET	240	/* 0xf0 */
+#define IA64_SWITCH_STACK_F21_OFFSET	256	/* 0x100 */
+#define IA64_SWITCH_STACK_F22_OFFSET	272	/* 0x110 */
+#define IA64_SWITCH_STACK_F23_OFFSET	288	/* 0x120 */
+#define IA64_SWITCH_STACK_F24_OFFSET	304	/* 0x130 */
+#define IA64_SWITCH_STACK_F25_OFFSET	320	/* 0x140 */
+#define IA64_SWITCH_STACK_F26_OFFSET	336	/* 0x150 */
+#define IA64_SWITCH_STACK_F27_OFFSET	352	/* 0x160 */
+#define IA64_SWITCH_STACK_F28_OFFSET	368	/* 0x170 */
+#define IA64_SWITCH_STACK_F29_OFFSET	384	/* 0x180 */
+#define IA64_SWITCH_STACK_F30_OFFSET	400	/* 0x190 */
+#define IA64_SWITCH_STACK_F31_OFFSET	416	/* 0x1a0 */
+#define IA64_SWITCH_STACK_R4_OFFSET	432	/* 0x1b0 */
+#define IA64_SWITCH_STACK_R5_OFFSET	440	/* 0x1b8 */
+#define IA64_SWITCH_STACK_R6_OFFSET	448	/* 0x1c0 */
+#define IA64_SWITCH_STACK_R7_OFFSET	456	/* 0x1c8 */
+#define IA64_SWITCH_STACK_B0_OFFSET	464	/* 0x1d0 */
+#define IA64_SWITCH_STACK_B1_OFFSET	472	/* 0x1d8 */
+#define IA64_SWITCH_STACK_B2_OFFSET	480	/* 0x1e0 */
+#define IA64_SWITCH_STACK_B3_OFFSET	488	/* 0x1e8 */
+#define IA64_SWITCH_STACK_B4_OFFSET	496	/* 0x1f0 */
+#define IA64_SWITCH_STACK_B5_OFFSET	504	/* 0x1f8 */
+#define IA64_SWITCH_STACK_AR_PFS_OFFSET	512	/* 0x200 */
+#define IA64_SWITCH_STACK_AR_LC_OFFSET	520	/* 0x208 */
+#define IA64_SWITCH_STACK_AR_UNAT_OFFSET	528	/* 0x210 */
+#define IA64_SWITCH_STACK_AR_RNAT_OFFSET	536	/* 0x218 */
+#define IA64_SWITCH_STACK_AR_BSPSTORE_OFFSET 544	/* 0x220 */
+#define IA64_SWITCH_STACK_PR_OFFSET	464	/* 0x1d0 */
 #define IA64_SIGCONTEXT_AR_BSP_OFFSET	72	/* 0x48 */
 #define IA64_SIGCONTEXT_AR_RNAT_OFFSET	80	/* 0x50 */
 #define IA64_SIGCONTEXT_FLAGS_OFFSET	0	/* 0x0 */
diff -urN linux-davidm/include/asm-ia64/pal.h linux-2.3.99-pre6-lia/include/asm-ia64/pal.h
--- linux-davidm/include/asm-ia64/pal.h	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/include/asm-ia64/pal.h	Thu May 25 23:11:11 2000
@@ -640,23 +640,9 @@
  * (generally 0) MUST be passed.  Reserved parameters are not optional
  * parameters.
  */
-#ifdef __GCC_MULTIREG_RETVALS__
-  extern struct ia64_pal_retval ia64_pal_call_static (u64, u64, u64, u64); 
-  /*
-   * If multi-register return values are returned according to the
-   * ia-64 calling convention, we can call ia64_pal_call_static
-   * directly.
-   */
-# define PAL_CALL(iprv,a0,a1,a2,a3)	iprv = ia64_pal_call_static(a0,a1, a2, a3)
-#else
-  extern void ia64_pal_call_static (struct ia64_pal_retval *, u64, u64, u64, u64);
-  /*
-   * If multi-register return values are returned through an aggregate
-   * allocated in the caller, we need to use the stub implemented in
-   * sal-stub.S.
-   */
-# define PAL_CALL(iprv,a0,a1,a2,a3)	ia64_pal_call_static(&iprv, a0, a1, a2, a3)
-#endif
+extern struct ia64_pal_retval ia64_pal_call_static (u64, u64, u64, u64); 
+
+#define PAL_CALL(iprv,a0,a1,a2,a3)	iprv = ia64_pal_call_static(a0,a1, a2, a3)
 
 typedef int (*ia64_pal_handler) (u64, ...);
 extern ia64_pal_handler ia64_pal;
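
With the __GCC_MULTIREG_RETVALS__ case gone, PAL_CALL() always assigns the aggregate returned by ia64_pal_call_static().  A minimal sketch of a caller follows; it assumes the PAL_VERSION index and the status/v0/v1 members that pal.h declares elsewhere, so treat it as illustration only:

    /*
     * Sketch only: issue a static PAL call through the simplified
     * PAL_CALL() macro and report the two version words it returns.
     */
    #include <linux/kernel.h>
    #include <asm/pal.h>

    static void show_pal_version (void)
    {
            struct ia64_pal_retval iprv;

            PAL_CALL(iprv, PAL_VERSION, 0, 0, 0);
            if (iprv.status == 0)
                    printk("PAL version words: 0x%lx 0x%lx\n", iprv.v0, iprv.v1);
    }
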
diff -urN linux-davidm/include/asm-ia64/pgtable.h linux-2.3.99-pre6-lia/include/asm-ia64/pgtable.h
--- linux-davidm/include/asm-ia64/pgtable.h	Fri Apr 21 16:38:55 2000
+++ linux-2.3.99-pre6-lia/include/asm-ia64/pgtable.h	Tue May 23 17:52:09 2000
@@ -133,7 +133,7 @@
 #define PAGE_READONLY	__pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_R)
 #define PAGE_COPY	__pgprot(__ACCESS_BITS | _PAGE_PL_3 | _PAGE_AR_RX)
 #define PAGE_GATE	__pgprot(__ACCESS_BITS | _PAGE_PL_0 | _PAGE_AR_X_RX)
-#define PAGE_KERNEL	__pgprot(__DIRTY_BITS  | _PAGE_PL_0 | _PAGE_AR_RW)
+#define PAGE_KERNEL	__pgprot(__DIRTY_BITS  | _PAGE_PL_0 | _PAGE_AR_RWX)
 
 /*
  * Next come the mappings that determine how mmap() protection bits
diff -urN linux-davidm/include/asm-ia64/processor.h linux-2.3.99-pre6-lia/include/asm-ia64/processor.h
--- linux-davidm/include/asm-ia64/processor.h	Thu May 25 23:22:10 2000
+++ linux-2.3.99-pre6-lia/include/asm-ia64/processor.h	Tue May 23 17:52:08 2000
@@ -319,6 +319,7 @@
 	set_fs(USER_DS);							\
 	ia64_psr(regs)->cpl = 3;	/* set user mode */			\
 	ia64_psr(regs)->ri = 0;		/* clear return slot number */		\
+	ia64_psr(regs)->is = 0;		/* IA-64 instruction set */		\
 	regs->cr_iip = new_ip;							\
 	regs->ar_rsc = 0xf;		/* eager mode, privilege level 3 */	\
 	regs->r12 = new_sp - 16;	/* allocate 16 byte scratch area */	\
@@ -437,6 +438,14 @@
 	__asm__ __volatile__ (";; srlz.d" ::: "memory");
 }
 
+extern inline __u64
+ia64_get_rr (__u64 reg_bits)
+{
+	__u64 r;
+	__asm__ __volatile__ ("mov %0=rr[%1]" : "=r"(r) : "r"(reg_bits) : "memory");
+	return r;
+}
+
 extern inline void
 ia64_set_rr (__u64 reg_bits, __u64 rr_val)
 {
@@ -646,14 +655,14 @@
 extern inline unsigned long
 thread_saved_pc (struct thread_struct *t)
 {
-	struct ia64_frame_info info;
+	struct unw_frame_info info;
 	/* XXX ouch: Linus, please pass the task pointer to thread_saved_pc() instead! */
 	struct task_struct *p = (void *) ((unsigned long) t - IA64_TASK_THREAD_OFFSET);
 
-	ia64_unwind_init_from_blocked_task(&info, p);
-	if (ia64_unwind_to_previous_frame(&info) < 0)
+	unw_init_from_blocked_task(&info, p);
+	if (unw_unwind(&info) < 0)
 		return 0;
-	return ia64_unwind_get_ip(&info);
+	return unw_get_ip(&info);
 }
 
 /*
diff -urN linux-davidm/include/asm-ia64/ptrace_offsets.h linux-2.3.99-pre6-lia/include/asm-ia64/ptrace_offsets.h
--- linux-davidm/include/asm-ia64/ptrace_offsets.h	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/include/asm-ia64/ptrace_offsets.h	Thu May 25 23:11:22 2000
@@ -118,8 +118,8 @@
 #define PT_F126			0x05e0
 #define PT_F127			0x05f0
 /* switch stack: */
-#define PT_CALLER_UNAT		0x0600
-#define PT_KERNEL_FPSR		0x0608
+#define PT_PRI_UNAT		0x0600
+
 #define PT_F2			0x0610
 #define PT_F3			0x0620
 #define PT_F4			0x0630
@@ -150,23 +150,19 @@
 #define PT_R5			0x07b8
 #define PT_R6			0x07c0
 #define PT_R7			0x07c8
-#define PT_K_B0			0x07d0
+
 #define PT_B1			0x07d8
 #define PT_B2			0x07e0
 #define PT_B3			0x07e8
 #define PT_B4			0x07f0
 #define PT_B5			0x07f8
-#define PT_K_AR_PFS		0x0800
+
 #define PT_AR_LC		0x0808
-#define PT_K_AR_UNAT		0x0810
-#define PT_K_AR_RNAT		0x0818
-#define PT_K_AR_BSPSTORE	0x0820
-#define PT_K_PR			0x0828
+
 /* pt_regs */
 #define PT_CR_IPSR		0x0830
 #define PT_CR_IIP		0x0838
 #define PT_CFM			0x0840
-#define PT_CR_IFS		PT_CFM		/* Use of PT_CR_IFS is deprecated */
 #define PT_AR_UNAT		0x0848
 #define PT_AR_PFS		0x0850
 #define PT_AR_RSC		0x0858
diff -urN linux-davidm/include/asm-ia64/sal.h linux-2.3.99-pre6-lia/include/asm-ia64/sal.h
--- linux-davidm/include/asm-ia64/sal.h	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/include/asm-ia64/sal.h	Thu May 25 23:11:39 2000
@@ -23,17 +23,7 @@
 
 extern spinlock_t sal_lock;
 
-#ifdef __GCC_MULTIREG_RETVALS__
-  /* If multi-register return values are returned according to the
-     ia-64 calling convention, we can call ia64_sal directly.  */
-# define __SAL_CALL(result,args...)	result = (*ia64_sal)(args)
-#else
-  /* If multi-register return values are returned through an aggregate
-     allocated in the caller, we need to use the stub implemented in
-     sal-stub.S.  */
-  extern struct ia64_sal_retval ia64_sal_stub (u64 index, ...);
-# define __SAL_CALL(result,args...)	result = ia64_sal_stub(args)
-#endif
+#define __SAL_CALL(result,args...)	result = (*ia64_sal)(args)
 
 #ifdef CONFIG_SMP
 # define SAL_CALL(result,args...) do {		\
@@ -494,7 +484,19 @@
 ia64_sal_pci_config_read (u64 pci_config_addr, u64 size, u64 *value)
 {
 	struct ia64_sal_retval isrv;
+#ifdef CONFIG_ITANIUM_A1_SPECIFIC
+	extern spinlock_t ivr_read_lock;
+	unsigned long flags;
+
+	/*
+	 * Avoid PCI configuration read/write overwrite -- A0 Interrupt loss workaround
+	 */
+	spin_lock_irqsave(&ivr_read_lock, flags);
+#endif
 	SAL_CALL(isrv, SAL_PCI_CONFIG_READ, pci_config_addr, size);
+#ifdef CONFIG_ITANIUM_A1_SPECIFIC
+	spin_unlock_irqrestore(&ivr_read_lock, flags);
+#endif
 	if (value)
 		*value = isrv.v0;
 	return isrv.status;
@@ -505,7 +507,7 @@
 ia64_sal_pci_config_write (u64 pci_config_addr, u64 size, u64 value)
 {
 	struct ia64_sal_retval isrv;
-#if defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) && !defined(SAPIC_FIXED)
+#ifdef CONFIG_ITANIUM_A1_SPECIFIC
 	extern spinlock_t ivr_read_lock;
 	unsigned long flags;
 
@@ -515,7 +517,7 @@
 	spin_lock_irqsave(&ivr_read_lock, flags);
 #endif
 	SAL_CALL(isrv, SAL_PCI_CONFIG_WRITE, pci_config_addr, size, value);
-#if defined(CONFIG_ITANIUM_ASTEP_SPECIFIC) && !defined(SAPIC_FIXED)
+#ifdef CONFIG_ITANIUM_A1_SPECIFIC
 	spin_unlock_irqrestore(&ivr_read_lock, flags);
 #endif
 	return isrv.status;
diff -urN linux-davidm/include/asm-ia64/siginfo.h linux-2.3.99-pre6-lia/include/asm-ia64/siginfo.h
--- linux-davidm/include/asm-ia64/siginfo.h	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/include/asm-ia64/siginfo.h	Mon May 15 12:13:17 2000
@@ -56,6 +56,8 @@
 		struct {
 			void *_addr;		/* faulting insn/memory ref. */
 			int _imm;		/* immediate value for "break" */
+			int _pad0;
+			unsigned long _isr;	/* isr */
 		} _sigfault;
 
 		/* SIGPOLL */
@@ -79,6 +81,7 @@
 #define si_ptr		_sifields._rt._sigval.sival_ptr
 #define si_addr		_sifields._sigfault._addr
 #define si_imm		_sifields._sigfault._imm	/* as per UNIX SysV ABI spec */
+#define si_isr		_sifields._sigfault._isr	/* valid if si_code==FPE_FLTxxx */
 #define si_band		_sifields._sigpoll._band
 #define si_fd		_sifields._sigpoll._fd
 
diff -urN linux-davidm/include/asm-ia64/system.h linux-2.3.99-pre6-lia/include/asm-ia64/system.h
--- linux-davidm/include/asm-ia64/system.h	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/include/asm-ia64/system.h	Mon May 22 18:06:31 2000
@@ -33,9 +33,9 @@
 
 struct pci_vector_struct {
 	__u16 bus;	/* PCI Bus number */
-        __u32 pci_id;	/* ACPI split 16 bits device, 16 bits function (see section 6.1.1) */
-        __u8 pin;	/* PCI PIN (0 = A, 1 = B, 2 = C, 3 = D) */
-        __u8 irq;	/* IRQ assigned */
+	__u32 pci_id;	/* ACPI split 16 bits device, 16 bits function (see section 6.1.1) */
+	__u8 pin;	/* PCI PIN (0 = A, 1 = B, 2 = C, 3 = D) */
+	__u8 irq;	/* IRQ assigned */
 };
 
 extern struct ia64_boot_param {
@@ -394,32 +394,13 @@
 
 #ifdef __KERNEL__
 
-extern void ia64_save_debug_regs (unsigned long *save_area);
-extern void ia64_load_debug_regs (unsigned long *save_area);
-
 #define prepare_to_switch()    do { } while(0)
 
 #ifdef CONFIG_IA32_SUPPORT
 # define IS_IA32_PROCESS(regs)	(ia64_psr(regs)->is != 0)
-# define IA32_STATE(prev,next)							\
-	if (IS_IA32_PROCESS(ia64_task_regs(prev))) {				\
-	    __asm__ __volatile__("mov %0=ar.eflag":"=r"((prev)->thread.eflag));	\
-	    __asm__ __volatile__("mov %0=ar.fsr":"=r"((prev)->thread.fsr));	\
-	    __asm__ __volatile__("mov %0=ar.fcr":"=r"((prev)->thread.fcr));	\
-	    __asm__ __volatile__("mov %0=ar.fir":"=r"((prev)->thread.fir));	\
-	    __asm__ __volatile__("mov %0=ar.fdr":"=r"((prev)->thread.fdr));	\
-	}									\
-	if (IS_IA32_PROCESS(ia64_task_regs(next))) {				\
-	    __asm__ __volatile__("mov ar.eflag=%0"::"r"((next)->thread.eflag));	\
-	    __asm__ __volatile__("mov ar.fsr=%0"::"r"((next)->thread.fsr));	\
-	    __asm__ __volatile__("mov ar.fcr=%0"::"r"((next)->thread.fcr));	\
-	    __asm__ __volatile__("mov ar.fir=%0"::"r"((next)->thread.fir));	\
-	    __asm__ __volatile__("mov ar.fdr=%0"::"r"((next)->thread.fdr));	\
-	}
-#else /* !CONFIG_IA32_SUPPORT */
-# define IA32_STATE(prev,next)
+#else
 # define IS_IA32_PROCESS(regs)		0
-#endif /* CONFIG_IA32_SUPPORT */
+#endif
 
 /*
  * Context switch from one thread to another.  If the two threads have
@@ -432,15 +413,18 @@
  * ia64_ret_from_syscall_clear_r8.
  */
 extern struct task_struct *ia64_switch_to (void *next_task);
+
+extern void ia64_save_extra (struct task_struct *task);
+extern void ia64_load_extra (struct task_struct *task);
+
 #define __switch_to(prev,next,last) do {					\
+	if (((prev)->thread.flags & IA64_THREAD_DBG_VALID)			\
+	    || IS_IA32_PROCESS(ia64_task_regs(prev)))				\
+		ia64_save_extra(prev);						\
+	if (((next)->thread.flags & IA64_THREAD_DBG_VALID)			\
+	    || IS_IA32_PROCESS(ia64_task_regs(next)))				\
+		ia64_load_extra(next);						\
 	ia64_psr(ia64_task_regs(next))->dfh = (ia64_get_fpu_owner() != (next));	\
-	if ((prev)->thread.flags & IA64_THREAD_DBG_VALID) {			\
-		ia64_save_debug_regs(&(prev)->thread.dbr[0]);			\
-	}									\
-	if ((next)->thread.flags & IA64_THREAD_DBG_VALID) {			\
-		ia64_load_debug_regs(&(next)->thread.dbr[0]);			\
-	}									\
-	IA32_STATE(prev,next);							\
 	(last) = ia64_switch_to((next));					\
 } while (0)
 
diff -urN linux-davidm/include/asm-ia64/unistd.h linux-2.3.99-pre6-lia/include/asm-ia64/unistd.h
--- linux-davidm/include/asm-ia64/unistd.h	Fri Apr 21 15:21:24 2000
+++ linux-2.3.99-pre6-lia/include/asm-ia64/unistd.h	Mon May 15 12:13:41 2000
@@ -269,7 +269,7 @@
 name (type1 arg1, type2 arg2, type3 arg3, type4 arg4, type5 arg5)			\
 {											\
 	return __ia64_syscall((long) arg1, (long) arg2, (long) arg3,			\
-			      (long) arg4, (long), __NR_##name);			\
+			      (long) arg4, (long) arg5, __NR_##name);			\
 }
 
 #ifdef __KERNEL_SYSCALLS__
diff -urN linux-davidm/include/asm-ia64/unwind.h linux-2.3.99-pre6-lia/include/asm-ia64/unwind.h
--- linux-davidm/include/asm-ia64/unwind.h	Sun Feb  6 18:42:40 2000
+++ linux-2.3.99-pre6-lia/include/asm-ia64/unwind.h	Thu May 25 23:12:06 2000
@@ -2,8 +2,8 @@
 #define _ASM_IA64_UNWIND_H
 
 /*
- * Copyright (C) 1999 Hewlett-Packard Co
- * Copyright (C) 1999 David Mosberger-Tang <davidm@hpl.hp.com>
+ * Copyright (C) 1999-2000 Hewlett-Packard Co
+ * Copyright (C) 1999-2000 David Mosberger-Tang <davidm@hpl.hp.com>
  *
  * A simple API for unwinding kernel stacks.  This is used for
  * debugging and error reporting purposes.  The kernel doesn't need
@@ -16,27 +16,68 @@
 struct task_struct;	/* forward declaration */
 struct switch_stack;	/* forward declaration */
 
+enum unw_application_register {
+	UNW_AR_BSP,
+	UNW_AR_BSPSTORE,
+	UNW_AR_PFS,
+	UNW_AR_RNAT,
+	UNW_AR_UNAT,
+	UNW_AR_LC,
+	UNW_AR_FPSR,
+	UNW_AR_RSC,
+	UNW_AR_CCV
+};
+
 /*
  * The following declarations are private to the unwind
  * implementation:
  */
 
-struct ia64_stack {
-	unsigned long *limit;
-	unsigned long *top;
+struct unw_stack {
+	unsigned long limit;
+	unsigned long top;
 };
 
+#define UNW_FLAG_INTERRUPT_FRAME	(1UL << 0)
+
 /*
  * No user of this module should every access this structure directly
  * as it is subject to change.  It is declared here solely so we can
  * use automatic variables.
  */
-struct ia64_frame_info {
-	struct ia64_stack regstk;
-	unsigned long *bsp;
-	unsigned long top_rnat;		/* RSE NaT collection at top of backing store */
+struct unw_frame_info {
+	struct unw_stack regstk;
+	struct unw_stack memstk;
+	unsigned long flags;
+	unsigned long bsp;
+	unsigned long sp;		/* stack pointer */
+	unsigned long psp;		/* previous sp */
 	unsigned long cfm;
 	unsigned long ip;		/* instruction pointer */
+	unsigned long pr_val;		/* current predicates */
+
+	struct switch_stack *sw;
+
+	/* preserved state: */
+	unsigned long *pbsp;		/* previous bsp */
+	unsigned long *bspstore;
+	unsigned long *pfs;
+	unsigned long *rnat;
+	unsigned long *rp;
+	unsigned long *pri_unat;
+	unsigned long *unat;
+	unsigned long *pr;
+	unsigned long *lc;
+	unsigned long *fpsr;
+	struct unw_ireg {
+		unsigned long *loc;
+		struct unw_ireg_nat {
+			int type : 3;		/* enum unw_nat_type */
+			signed int off;		/* NaT word is at loc+nat.off */
+		} nat;
+	} r4, r5, r6, r7;
+	unsigned long *b1, *b2, *b3, *b4, *b5;
+	struct ia64_fpreg *f2, *f3, *f4, *f5, *fr[16];
 };
 
 /*
@@ -44,10 +85,19 @@
  */
 
 /*
+ * Initialize unwind support.
+ */
+extern void unw_init (void);
+
+extern void *unw_add_unwind_table (const char *name, unsigned long segment_base, unsigned long gp,
+				   void *table_start, void *table_end);
+
+extern void unw_remove_unwind_table (void *handle);
+
+/*
  * Prepare to unwind blocked task t.
  */
-extern void ia64_unwind_init_from_blocked_task (struct ia64_frame_info *info,
-						struct task_struct *t);
+extern void unw_init_from_blocked_task (struct unw_frame_info *info, struct task_struct *t);
 
 /*
  * Prepare to unwind the current task.  For this to work, the kernel
@@ -63,15 +113,36 @@
  *	| struct switch_stack |
  *	+---------------------+
  */
-extern void ia64_unwind_init_from_current (struct ia64_frame_info *info, struct pt_regs *regs);
+extern void unw_init_from_current (struct unw_frame_info *info, struct pt_regs *regs);
 
 /*
  * Unwind to previous to frame.  Returns 0 if successful, negative
  * number in case of an error.
  */
-extern int ia64_unwind_to_previous_frame (struct ia64_frame_info *info);
+extern int unw_unwind (struct unw_frame_info *info);
 
-#define ia64_unwind_get_ip(info)	((info)->ip)
-#define ia64_unwind_get_bsp(info)	((unsigned long) (info)->bsp)
+#define unw_get_ip(info)	((info)->ip)
+#define unw_get_sp(info)	((unsigned long) (info)->sp)
+#define unw_get_psp(info)	((unsigned long) (info)->psp)
+#define unw_get_bsp(info)	((unsigned long) (info)->bsp)
+#define unw_get_cfm(info)	((info)->cfm)
+
+extern int unw_access_gr (struct unw_frame_info *, int, unsigned long *, char *, int);
+extern int unw_access_br (struct unw_frame_info *, int, unsigned long *, int);
+extern int unw_access_fr (struct unw_frame_info *, int, struct ia64_fpreg *, int);
+extern int unw_access_ar (struct unw_frame_info *, int, unsigned long *, int);
+extern int unw_access_pr (struct unw_frame_info *, unsigned long *, int);
+
+#define unw_set_gr(i,n,v,nat)	unw_access_gr(i,n,v,nat,1)
+#define unw_set_br(i,n,v)	unw_access_br(i,n,v,1)
+#define unw_set_fr(i,n,v)	unw_access_fr(i,n,v,1)
+#define unw_set_ar(i,n,v)	unw_access_ar(i,n,v,1)
+#define unw_set_pr(i,v)		unw_access_pr(i,v,1)
+
+#define unw_get_gr(i,n,v,nat)	unw_access_gr(i,n,v,nat,0)
+#define unw_get_br(i,n,v)	unw_access_br(i,n,v,0)
+#define unw_get_fr(i,n,v)	unw_access_fr(i,n,v,0)
+#define unw_get_ar(i,n,v)	unw_access_ar(i,n,v,0)
+#define unw_get_pr(i,v)		unw_access_pr(i,v,0)
 
-#endif /* _ASM_IA64_UNWIND_H */
+#endif /* _ASM_UNWIND_H */
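
The thread_saved_pc() conversion in processor.h above shows the minimal calling sequence; the sketch below extends it to a full backtrace of a blocked task (the function name and printk format are illustrative, not part of the patch):

    /*
     * Sketch: walk every frame of a blocked task with the new unw_* API.
     * unw_init_from_blocked_task() sets up the innermost frame and
     * unw_unwind() steps outward until it returns a negative value.
     */
    #include <linux/kernel.h>
    #include <linux/sched.h>
    #include <asm/unwind.h>

    static void show_backtrace (struct task_struct *p)
    {
            struct unw_frame_info info;
            unsigned long ip;

            unw_init_from_blocked_task(&info, p);
            do {
                    ip = unw_get_ip(&info);
                    if (!ip)
                            break;
                    printk(" [<%016lx>]\n", ip);
            } while (unw_unwind(&info) >= 0);
    }
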
diff -urN linux-davidm/include/linux/nfsd/syscall.h linux-2.3.99-pre6-lia/include/linux/nfsd/syscall.h
--- linux-davidm/include/linux/nfsd/syscall.h	Wed Apr 26 15:29:54 2000
+++ linux-2.3.99-pre6-lia/include/linux/nfsd/syscall.h	Tue May 23 17:52:57 2000
@@ -133,7 +133,7 @@
  * Kernel syscall implementation.
  */
 #if defined(CONFIG_NFSD) || defined(CONFIG_NFSD_MODULE)
-extern asmlinkage int	sys_nfsservctl(int, void *, void *);
+extern asmlinkage long	sys_nfsservctl(int, void *, void *);
 #else
 #define sys_nfsservctl		sys_ni_syscall
 #endif
diff -urN linux-davidm/init/main.c linux-2.3.99-pre6-lia/init/main.c
--- linux-davidm/init/main.c	Thu May 25 23:22:10 2000
+++ linux-2.3.99-pre6-lia/init/main.c	Wed May 17 21:01:27 2000
@@ -561,6 +561,7 @@
 #endif
 	mem_init();
 	kmem_cache_sizes_init();
+	unw_init();		/* XXX remove reliance on kmalloc and move to setup_arch() */
 #ifdef CONFIG_PERFMON
 	perfmon_init();
 #endif