Bug #5601
Closed: regression in 5498, too many TLB pages invalidated
Description
In 5498 the TLB invalidation code was modified to allow a
contiguous range of pages to be invalidated by a single xcall,
instead of one xcall per page. A new function,
hat_tlb_inval_range(), was introduced, and the old function,
hat_tlb_inval(), was redefined in terms of the new one. However,
the range function takes the number of pages as its third
argument, while hat_tlb_inval() passes the page size (4K). This
could invalidate far more pages than intended.
Since the original semantics of hat_tlb_inval() were to
invalidate a single page, the third argument should be 1.
Updated by Ryan Zezeski almost 9 years ago
- Subject changed from TLB invalidation regression in 5498 to regression in 5498, too many TLB pages invalidated
- Assignee set to Ryan Zezeski
Updated by Ryan Zezeski almost 9 years ago
- Status changed from New to Rejected
It turns out I misread the for loop in hati_demap_func(): it
increments i by MMU_PAGESIZE as well, so passing MMU_PAGESIZE as
the length will indeed unmap only one page. I'm sorry for the
noise. Next time I'll ask someone to confirm my findings before
filing a bug like this.