Re: [PATCH] [0/6] HUGETLB memory commitment

From: Keith Owens <>
Date: 2004-03-26 11:10:44
On Thu, 25 Mar 2004 23:59:21 +0000, 
Andy Whitcroft <> wrote:
>--On 25 March 2004 15:51 -0800 Andrew Morton <> wrote:
>> I think it's simply:
>> - Make normal overcommit logic skip hugepages completely
>> - Teach the overcommit_memory=2 logic that hugepages are basically
>>   "pinned", so subtract them from the arithmetic.
>> And that's it.  The hugepages are semantically quite different from normal
>> memory (prefaulted, preallocated, unswappable) and we've deliberately
>> avoided pretending otherwise.
>True currently.  Though the thread that prompted this was a response to 
>the time taken by this prefault, and to the wish to fault the pages in 
>on demand instead.
>I'll have a poke about at it and see how small I can make it.

FWIW, lkcd (crash dump) treats hugetlb pages as normal kernel pages and
dumps them, which is pointless and wastes a lot of time.  To avoid
dumping these pages in lkcd, I had to add a PG_hugetlb flag.  lkcd runs
at the page level, not mm or vma, so VM_hugetlb was not available.  Inside
the loop that walks the constituent pages of each huge page,

	for (j = 0; j < (HPAGE_SIZE / PAGE_SIZE); j++) {

I set the new flag on each page.

In dump_base.c, I changed kernel_page(), referenced_page() and
unreferenced_page() to test for PageHugetlb() before PageReserved().

Since you are looking at identifying hugetlb pages, could any other
code benefit from a PG_hugetlb flag?

Received on Thu Mar 25 19:20:45 2004
