Re: Hugetlbpages in very large memory machines.......

From: Hirokazu Takahashi <taka_at_valinux.co.jp>
Date: 2004-03-13 15:56:38
Hello,

The following patch of mine might help you. It includes a page-fault routine
for hugetlbpages. If you want to use it for your purpose, you need to
remove the code in hugetlb_prefault() that calls hugetlb_fault().
http://people.valinux.co.jp/~taka/patches/va01-hugepagefault.patch

But it's just for IA32.
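To illustrate the benefit the report below is after: with fault-time
allocation, each CPU that first touches a huge page pays the allocate-and-zero
cost for that page, so the work can be spread over many threads instead of
being done serially inside mmap(). Here is a minimal user-space sketch (not
part of the patch) that first-touches a hugetlbfs mapping from several
threads; the /mnt/huge mount point, the 2 MB huge page size, and the mapping
size are only example assumptions.

/*
 * Illustrative only, not part of the patch: parallel first-touch of a
 * hugetlbfs mapping.  Assumes a hugetlbfs mount at /mnt/huge (example
 * path) and a kernel that allocates and zeroes huge pages at fault time
 * instead of in hugetlb_prefault().
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_SIZE   (1UL << 30)   /* 1 GB mapping, example size        */
#define HPAGE_SIZE (2UL << 20)   /* 2 MB huge pages, adjust per arch  */
#define NTHREADS   8

static char *area;

/* Write one byte per huge page in this thread's slice, so the fault
 * (allocate + zero) for each page runs on the touching CPU. */
static void *first_touch(void *arg)
{
	size_t slice = MAP_SIZE / NTHREADS;
	char *p = area + (long)arg * slice;
	size_t off;

	for (off = 0; off < slice; off += HPAGE_SIZE)
		p[off] = 1;
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	long i;
	int fd = open("/mnt/huge/demo", O_CREAT | O_RDWR, 0600);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	area = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (area == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/*
	 * With hugetlb_prefault() the whole allocate-and-zero cost is paid
	 * inside the mmap() call above, by one thread.  With fault-time
	 * allocation it is spread across the threads below.
	 */
	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, first_touch, (void *)i);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);

	munmap(area, MAP_SIZE);
	close(fd);
	unlink("/mnt/huge/demo");
	return 0;
}

Build with something like "gcc -O2 -o hugetouch hugetouch.c -lpthread"; the
huge page pool has to be populated first (e.g. via
/proc/sys/vm/nr_hugepages) and hugetlbfs mounted.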

I heard that n-yoshida@pst.fujitsu.com was porting this patch
to IA64.

> We've run into a scaling problem using hugetlbpages in very large memory machines, e.g. machines 
> with 1TB or more of main memory.  The problem is that hugetlbpage pages are not faulted in; rather, 
> they are zeroed and mapped in by hugetlb_prefault() (at least on ia64), which is called in 
> response to the user's mmap() request.  The net is that all of the hugetlb pages end up being 
> allocated and zeroed by a single thread, and if most of the machine's memory is allocated to hugetlb 
> pages, and there is 1 TB or more of main memory, zeroing and allocating all of those pages can take 
> a long time (500 s or more).
> 
> We've looked at allocating and zeroing hugetlbpages at fault time, which would at least allow 
> multiple processors to be thrown at the problem.  Question is, has anyone else been working on
> this problem and might they have prototype code they could share with us?
> 
> Thanks,
> -- 
> Best Regards,
> Ray


Thank you,
Hirokazu Takahashi.