PR sync from: Kefeng Wang <wangkefeng.wang@huawei.com>
https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/XR2MZZ4OEDXF72JOWEZSFLWPV3JGDABG/
This series optimizes fork(), unmap and zap() for PTE-mapped THP.

Catalin Marinas (1):
arm64: Mark the 'addr' argument to set_ptes() and __set_pte_at() as unused

David Hildenbrand (24):
arm/pgtable: define PFN_PTE_SHIFT
nios2/pgtable: define PFN_PTE_SHIFT
powerpc/pgtable: define PFN_PTE_SHIFT
riscv/pgtable: define PFN_PTE_SHIFT
s390/pgtable: define PFN_PTE_SHIFT
sparc/pgtable: define PFN_PTE_SHIFT
mm/pgtable: make pte_next_pfn() independent of set_ptes()
arm/mm: use pte_next_pfn() in set_ptes()
powerpc/mm: use pte_next_pfn() in set_ptes()
mm/memory: factor out copying the actual PTE in copy_present_pte()
mm/memory: pass PTE to copy_present_pte()
mm/memory: optimize fork() with PTE-mapped THP
mm/memory: ignore dirty/accessed/soft-dirty bits in folio_pte_batch()
mm/memory: ignore writable bit in folio_pte_batch()
mm/memory: factor out zapping of present pte into zap_present_pte()
mm/memory: handle !page case in zap_present_pte() separately
mm/memory: further separate anon and pagecache folio handling in zap_present_pte()
mm/memory: factor out zapping folio pte into zap_present_folio_pte()
mm/mmu_gather: pass "delay_rmap" instead of encoded page to __tlb_remove_page_size()
mm/mmu_gather: define ENCODED_PAGE_FLAG_DELAY_RMAP
mm/mmu_gather: add tlb_remove_tlb_entries()
mm/mmu_gather: add __tlb_remove_folio_pages()
mm/mmu_gather: improve cond_resched() handling with large folios and expensive page freeing
mm/memory: optimize unmap/zap with PTE-mapped THP

Kefeng Wang (7):
s390: use pfn_swap_entry_folio() in ptep_zap_swap_entry()
mm: use pfn_swap_entry_folio() in __split_huge_pmd_locked()
mm: use pfn_swap_entry_to_folio() in zap_huge_pmd()
mm: use pfn_swap_entry_folio() in copy_nonpresent_pte()
mm: convert should_zap_page() to should_zap_folio()
mm: convert mm_counter() to take a folio
mm: convert mm_counter_file() to take a folio

Matthew Wilcox (Oracle) (2):
mm: add pfn_swap_entry_folio()
mprotect: use pfn_swap_entry_folio

Ryan Roberts (2):
arm64/mm: Hoist synchronization out of set_ptes() loop
arm64/mm: make set_ptes() robust when OAs cross 48-bit boundary

--
2.27.0

#I9CHB4:[OLK-6.6] mm/memory: optimize fork/unmap/zap