Re: Hugetlb demanding paging for -mm tree

From: William Lee Irwin III <wli_at_holomorphy.com>
Date: 2004-08-07 18:36:13
On Thu, Aug 05, 2004 at 06:39:59AM -0700, Chen, Kenneth W wrote:
> +static void scrub_one_pmd(pmd_t * pmd)
> +{
> +	struct page *page;
> +
> +	if (pmd && !pmd_none(*pmd) && !pmd_huge(*pmd)) {
> +		page = pmd_page(*pmd);
> +		pmd_clear(pmd);
> +		dec_page_state(nr_page_table_pages);
> +		page_cache_release(page);
> +	}
> +}

This is needed because we only free pagetables at pgd granularity at
munmap() time. It would make more sense to refine the freeing to pmd
granularity than to add this cleanup pass, since the stale pmd page is a
memory leak in its own right, beyond the hugetlb data structure
corruption.

I wonder why this bugfix was rolled into the demand paging patch instead
of being shipped separately. For that matter, the fix applies to
mainline as well.


On Thu, Aug 05, 2004 at 06:39:59AM -0700, Chen, Kenneth W wrote:
> +int
> +handle_hugetlb_mm_fault(struct mm_struct *mm, struct vm_area_struct * vma,
> +	unsigned long addr, int write_access)
> +{
> +	hugepte_t *pte;
> +	struct page *page;
> +	struct address_space *mapping;
> +	int idx, ret;

Well, to go along with the general theme of this: a hugepte_t type, with
macros in generic code trivially defined to the normal pte operations on
other arches, could easily consolidate this fault handler with the
others. Of course, that much consolidation makes some rather limiting
assumptions, which will either prevent further improvements or have to
be partially undone to accommodate them.


-- wli
Received on Sat Aug 7 04:36:31 2004