Re: [PATCH] [0/6] HUGETLB memory commitment

From: Andrew Morton <>
Date: 2004-03-26 11:22:32
Keith Owens <> wrote:
> FWIW, lkcd (crash dump) treats hugetlb pages as normal kernel pages and
> dumps them, which is pointless and wastes a lot of time.  To avoid
> dumping these pages in lkcd, I had to add a PG_hugetlb flag.  lkcd runs
> at the page level, not mm or vma, so VM_hugetlb was not available.  In
> set_hugetlb_mem_size()
> 	for (j = 0; j < (HPAGE_SIZE / PAGE_SIZE); j++) {
> 		SetPageReserved(map);
> 		SetPageHugetlb(map);
> 		map++;
> 	}
> In dump_base.c, I changed kernel_page(), referenced_page() and
> unreferenced_page() to test for PageHugetlb() before PageReserved().

That makes sense.

> Since you are looking at identifying hugetlb pages, could any other
> code benefit from a PG_hugetlb flag?

In the overcommit code we don't actually have the page yet.  We're asking
"do we have enough memory available to honour this mmap() invocation when
it later faults in real pages?".

Received on Thu Mar 25 19:36:57 2004

This archive was generated by hypermail 2.1.8 : 2005-08-02 09:20:24 EST