[PATCH] Fix for bits_wanted in sba_iommu.c

From: Croxon, Nigel <nigel.croxon_at_hp.com>
Date: 2004-11-04 09:05:56
bits_wanted is expanded to a byte count using the wrong shift value
(PAGE_SHIFT instead of iovp_shift), so whenever iovp_shift != PAGE_SHIFT
the allocator claims far more IOMMU resources than were requested.

This can cause the system to spuriously run out of DMA mapping
resources under *heavy* I/O load.
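For anyone who wants to see the arithmetic, here is a minimal user-space
sketch (not kernel code; the shift values and the simplified
get_iovp_order() below are illustrative assumptions, e.g. a 16KB-page
ia64 kernel with 4KB IOMMU pages) showing how shifting by PAGE_SHIFT
instead of iovp_shift inflates the allocation:

#include <stdio.h>

#define PAGE_SHIFT  14      /* assumed CPU page size: 16KB  */
#define IOVP_SHIFT  12      /* assumed IOMMU page size: 4KB */

/*
 * Simplified stand-in for the kernel's get_iovp_order(): the
 * power-of-two order (in IOVA pages) needed to cover 'size' bytes.
 */
static unsigned long get_iovp_order(unsigned long size)
{
        unsigned long order = 0;

        while ((1UL << (order + IOVP_SHIFT)) < size)
                order++;
        return order;
}

int main(void)
{
        unsigned long bits_wanted = 8;  /* caller asks for 8 IOVA pages */

        /* Old code: converts IOVA pages to bytes with the CPU page shift. */
        unsigned long with_page_shift =
                1UL << get_iovp_order(bits_wanted << PAGE_SHIFT);

        /* Fixed code: converts with the IOMMU page shift. */
        unsigned long with_iovp_shift =
                1UL << get_iovp_order(bits_wanted << IOVP_SHIFT);

        printf("requested %lu IOVA pages: old rounding takes %lu bits, "
               "fixed rounding takes %lu bits\n",
               bits_wanted, with_page_shift, with_iovp_shift);
        /* prints 32 vs. 8: a PAGE_SIZE/iovp_size (here 4x) over-allocation */
        return 0;
}

With these assumed values every mapping request burns four times the
IOVA bitmap bits it needs, which matches the resource exhaustion
described above.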

This patch is against 2.6.9 (kernel.org)

Signed-off-by: Nigel Croxon <nigel.croxon@hp.com>
Signed-off-by: Alex Williamson <alex.williamson@hp.com>



--- linux/arch/ia64/hp/common/sba_iommu.c.      2004-11-03 14:41:08.000000000 -0500
+++ linux/arch/ia64/hp/common/sba_iommu.c       2004-11-03 14:49:37.000000000 -0500
@@ -478,7 +478,7 @@
         * purges IOTLB entries in power-of-two sizes, so we also
         * allocate IOVA space in power-of-two sizes.
         */
-       bits_wanted = 1UL << get_iovp_order(bits_wanted << PAGE_SHIFT);
+       bits_wanted = 1UL << get_iovp_order(bits_wanted << iovp_shift);

        if (likely(bits_wanted == 1)) {
                unsigned int bitshiftcnt;
@@ -687,7 +687,7 @@
        unsigned long m;

        /* Round up to power-of-two size: see AR2305 note above */
-       bits_not_wanted = 1UL << get_iovp_order(bits_not_wanted << PAGE_SHIFT);
+       bits_not_wanted = 1UL << get_iovp_order(bits_not_wanted << iovp_shift);
        for (; bits_not_wanted > 0 ; res_ptr++) {

                if (unlikely(bits_not_wanted > BITS_PER_LONG)) {
