Show patches with: Submitter = Wei Yang | State = Action Required | 312 patches
Patch | Series | A/R/T (Acked-by / Reviewed-by / Tested-by) | S/W/F (Success / Warning / Fail checks) | Date | Submitter | Delegate | State
mm/vmscan: sc->reclaim_idx must be a valid zone index | mm/vmscan: sc->reclaim_idx must be a valid zone index | - - - | --- | 2022-03-17 | Wei Yang | - | New
[v3] mm/memcg: mz already removed from rb_tree if not NULL | [v3] mm/memcg: mz already removed from rb_tree if not NULL | 1 - - | --- | 2022-03-14 | Wei Yang | - | New
[v2,3/3] mm/memcg: add next_mz back to soft limit tree if not reclaimed yet | [v2,1/3] mm/memcg: mz already removed from rb_tree in mem_cgroup_largest_soft_limit_node() | 1 - - | --- | 2022-03-12 | Wei Yang | - | New
[v2,2/3] mm/memcg: __mem_cgroup_remove_exceeded could handle a !on-tree mz properly | [v2,1/3] mm/memcg: mz already removed from rb_tree in mem_cgroup_largest_soft_limit_node() | - - - | --- | 2022-03-12 | Wei Yang | - | New
[v2,1/3] mm/memcg: mz already removed from rb_tree in mem_cgroup_largest_soft_limit_node() | [v2,1/3] mm/memcg: mz already removed from rb_tree in mem_cgroup_largest_soft_limit_node() | 1 - - | --- | 2022-03-12 | Wei Yang | - | New
[3/3] mm/memcg: add next_mz back if not reclaimed yet | [1/3] mm/memcg: mz already removed from rb_tree in mem_cgroup_largest_soft_limit_node() | - - - | --- | 2022-03-08 | Wei Yang | - | New
[2/3] mm/memcg: __mem_cgroup_remove_exceeded could handle a !on-tree mz properly | [1/3] mm/memcg: mz already removed from rb_tree in mem_cgroup_largest_soft_limit_node() | - - - | --- | 2022-03-08 | Wei Yang | - | New
[1/3] mm/memcg: mz already removed from rb_tree in mem_cgroup_largest_soft_limit_node() | [1/3] mm/memcg: mz already removed from rb_tree in mem_cgroup_largest_soft_limit_node() | - - - | --- | 2022-03-08 | Wei Yang | - | New
[3/3] mm/memcg: move generation assignment and comparison together | mm/memcg: some cleanup for mem_cgroup_iter() | 1 - - | --- | 2022-02-25 | Wei Yang | - | New
[2/3] mm/memcg: set pos to prev unconditionally | mm/memcg: some cleanup for mem_cgroup_iter() | - - - | --- | 2022-02-25 | Wei Yang | - | New
[1/3] mm/memcg: set memcg after css verified and got reference | mm/memcg: some cleanup for mem_cgroup_iter() | 1 - - | --- | 2022-02-25 | Wei Yang | - | New
mm/page_alloc: add zone to zonelist if populated | mm/page_alloc: add zone to zonelist if populated | - - - | --- | 2022-02-03 | Wei Yang | - | New
[2/2] mm/memcg: retrieve parent memcg from css.parent | [1/2] mm/memcg: mem_cgroup_per_node is already set to 0 on allocation | 1 3 - | --- | 2022-02-01 | Wei Yang | - | New
[1/2] mm/memcg: mem_cgroup_per_node is already set to 0 on allocation | [1/2] mm/memcg: mem_cgroup_per_node is already set to 0 on allocation | 1 4 - | --- | 2022-02-01 | Wei Yang | - | New
mm/memory_hotplug: build zonelist for managed_zone | mm/memory_hotplug: build zonelist for managed_zone | - - - | --- | 2022-01-27 | Wei Yang | - | New
[2/2] mm/page_alloc: add penalty to local_node | [1/2] mm/page_alloc: add same penalty is enough to get round-robin order | - - - | --- | 2022-01-23 | Wei Yang | - | New
[1/2] mm/page_alloc: add same penalty is enough to get round-robin order | [1/2] mm/page_alloc: add same penalty is enough to get round-robin order | - - - | --- | 2022-01-23 | Wei Yang | - | New
mm/page_alloc: clear node_load[] in function build_zonelists | mm/page_alloc: clear node_load[] in function build_zonelists | - - - | --- | 2022-01-15 | Wei Yang | - | New
[4/4] mm/memcg: refine mem_cgroup_threshold_ary->current_threshold calculation | [1/4] mm/memcg: use NUMA_NO_NODE to indicate allocation from unspecified node | - - - | --- | 2022-01-11 | Wei Yang | - | New
[3/4] mm/memcg: retrieve parent memcg from css.parent | [1/4] mm/memcg: use NUMA_NO_NODE to indicate allocation from unspecified node | 1 3 - | --- | 2022-01-11 | Wei Yang | - | New
[2/4] mm/memcg: mem_cgroup_per_node is already set to 0 on allocation | [1/4] mm/memcg: use NUMA_NO_NODE to indicate allocation from unspecified node | 1 4 - | --- | 2022-01-11 | Wei Yang | - | New
[1/4] mm/memcg: use NUMA_NO_NODE to indicate allocation from unspecified node | [1/4] mm/memcg: use NUMA_NO_NODE to indicate allocation from unspecified node | 1 3 - | --- | 2022-01-11 | Wei Yang | - | New
[3/3] mm/swapfile.c: count won't be bigger than SWAP_MAP_MAX | [1/3] mm/swapfile.c: classify SWAP_MAP_XXX to make it more readable | - - - | --- | 2020-05-01 | Wei Yang | - | New
[2/3] mm/swapfile.c: __swap_entry_free() always free 1 entry | [1/3] mm/swapfile.c: classify SWAP_MAP_XXX to make it more readable | - - - | --- | 2020-05-01 | Wei Yang | - | New
[1/3] mm/swapfile.c: classify SWAP_MAP_XXX to make it more readable | [1/3] mm/swapfile.c: classify SWAP_MAP_XXX to make it more readable | - - - | --- | 2020-05-01 | Wei Yang | - | New
[v2] mm/swapfile.c: simplify the scan loop in scan_swap_map_slots() | [v2] mm/swapfile.c: simplify the scan loop in scan_swap_map_slots() | - - - | --- | 2020-04-22 | Wei Yang | - | New
[v2,3/3] mm/swapfile.c: omit a duplicate code by compare tmp and max first | [v2,1/3] mm/swapfile.c: found_free could be represented by (tmp < max) | - - - | --- | 2020-04-21 | Wei Yang | - | New
[v2,2/3] mm/swapfile.c: tmp is always smaller than max | [v2,1/3] mm/swapfile.c: found_free could be represented by (tmp < max) | - - - | --- | 2020-04-21 | Wei Yang | - | New
[v2,1/3] mm/swapfile.c: found_free could be represented by (tmp < max) | [v2,1/3] mm/swapfile.c: found_free could be represented by (tmp < max) | - 1 - | --- | 2020-04-21 | Wei Yang | - | New
[4/4] mm/swapfile.c: move new_cluster to check free_clusters directly | [1/4] mm/swapfile.c: found_free could be represented by (tmp < max) | - - - | --- | 2020-04-19 | Wei Yang | - | New
[2/4] mm/swapfile.c: tmp is always smaller than max | [1/4] mm/swapfile.c: found_free could be represented by (tmp < max) | - - - | --- | 2020-04-19 | Wei Yang | - | New
[1/4] mm/swapfile.c: found_free could be represented by (tmp < max) | [1/4] mm/swapfile.c: found_free could be represented by (tmp < max) | - - - | --- | 2020-04-19 | Wei Yang | - | New
[v3,5/5] mm/page_alloc.c: extract check_[new|free]_page_bad() common part to page_bad_reason() | mm/page_alloc.c: cleanup on check page | - - - | --- | 2020-04-11 | Wei Yang | - | New
[v3,4/5] mm/page_alloc.c: rename free_pages_check() to check_free_page() | mm/page_alloc.c: cleanup on check page | - - - | --- | 2020-04-11 | Wei Yang | - | New
[v3,3/5] mm/page_alloc.c: rename free_pages_check_bad() to check_free_page_bad() | mm/page_alloc.c: cleanup on check page | - - - | --- | 2020-04-11 | Wei Yang | - | New
[v3,2/5] mm/page_alloc.c: bad_flags is not necessary for bad_page() | mm/page_alloc.c: cleanup on check page | - - - | --- | 2020-04-11 | Wei Yang | - | New
[v3,1/5] mm/page_alloc.c: bad_[reason|flags] is not necessary when PageHWPoison | mm/page_alloc.c: cleanup on check page | 1 1 - | --- | 2020-04-11 | Wei Yang | - | New
mm/vmscan.c: use update_lru_size() in update_lru_sizes() | mm/vmscan.c: use update_lru_size() in update_lru_sizes() | 1 1 - | --- | 2020-03-31 | Wei Yang | - | New
[v4] mm/page_alloc.c: use NODE_MASK_NONE in build_zonelists() | [v4] mm/page_alloc.c: use NODE_MASK_NONE in build_zonelists() | - 3 - | --- | 2020-03-30 | Wei Yang | - | New
mm: rename gfpflags_to_migratetype to gfp_migratetype for same convention | mm: rename gfpflags_to_migratetype to gfp_migratetype for same convention | - 1 - | --- | 2020-03-29 | Wei Yang | - | New
[v3] mm/page_alloc.c: use NODE_MASK_NONE define used_mask | [v3] mm/page_alloc.c: use NODE_MASK_NONE define used_mask | - 2 - | --- | 2020-03-29 | Wei Yang | - | New
[3/3] mm/swapfile.c: remove the unnecessary goto for SSD case | Cleanup scan_swap_map_slots() a little | - - - | --- | 2020-03-28 | Wei Yang | - | New
[2/3] mm/swapfile.c: explicitly show ssd/non-ssd is handled mutually exclusive | Cleanup scan_swap_map_slots() a little | - - - | --- | 2020-03-28 | Wei Yang | - | New
[1/3] mm/swapfile.c: offset is only used when there is more slots | Cleanup scan_swap_map_slots() a little | - - - | --- | 2020-03-28 | Wei Yang | - | New
[v2,2/2] mm/page_alloc.c: define node_order with all zero | [v2,1/2] mm/page_alloc.c: use NODE_MASK_NONE define used_mask | - - - | --- | 2020-03-27 | Wei Yang | - | New
[v2,1/2] mm/page_alloc.c: use NODE_MASK_NONE define used_mask | [v2,1/2] mm/page_alloc.c: use NODE_MASK_NONE define used_mask | - - - | --- | 2020-03-27 | Wei Yang | - | New
[2/2] mm/page_alloc.c: define node_order with all zero | [1/2] mm/page_alloc.c: leverage compiler to zero out used_mask | - - - | --- | 2020-03-26 | Wei Yang | - | New
[1/2] mm/page_alloc.c: leverage compiler to zero out used_mask | [1/2] mm/page_alloc.c: leverage compiler to zero out used_mask | - - - | --- | 2020-03-26 | Wei Yang | - | New
[2/2] mm/swapfile.c: remove the extra check in scan_swap_map_slots() | [1/2] mm/swapfile.c: simplify the calculation of n_goal | - - - | --- | 2020-03-25 | Wei Yang | - | New
[1/2] mm/swapfile.c: simplify the calculation of n_goal | [1/2] mm/swapfile.c: simplify the calculation of n_goal | - - - | --- | 2020-03-25 | Wei Yang | - | New
[v2] mm/swap_state.c: use the same way to count page in [add_to|delete_from]_swap_cache | [v2] mm/swap_state.c: use the same way to count page in [add_to|delete_from]_swap_cache | - - - | --- | 2020-03-15 | Wei Yang | - | New
mm/swap_state.c: use the same way to count page in [add_to|delete_from]_swap_cache | mm/swap_state.c: use the same way to count page in [add_to|delete_from]_swap_cache | - - - | --- | 2020-03-14 | Wei Yang | - | New
mm/swapfile.c: simplify the scan loop in scan_swap_map_slots() | mm/swapfile.c: simplify the scan loop in scan_swap_map_slots() | - - - | --- | 2020-02-29 | Wei Yang | - | New
mm: fix some typo scatter in mm directory | mm: fix some typo scatter in mm directory | 1 1 - | --- | 2019-01-18 | Wei Yang | - | New
mm, page_alloc: cleanup usemap_size() when SPARSEMEM is not set | mm, page_alloc: cleanup usemap_size() when SPARSEMEM is not set | - - - | --- | 2019-01-18 | Wei Yang | - | New
[v4] mm: remove extra drain pages on pcp list | [v4] mm: remove extra drain pages on pcp list | 2 1 - | --- | 2019-01-05 | Wei Yang | - | New
[v3] mm: remove extra drain pages on pcp list | [v3] mm: remove extra drain pages on pcp list | 1 - - | --- | 2018-12-21 | Wei Yang | - | New
[v2] mm, page_isolation: remove drain_all_pages() in set_migratetype_isolate() | [v2] mm, page_isolation: remove drain_all_pages() in set_migratetype_isolate() | 1 1 - | --- | 2018-12-18 | Wei Yang | - | New
mm, page_alloc: clear zone_movable_pfn if the node doesn't have ZONE_MOVABLE | mm, page_alloc: clear zone_movable_pfn if the node doesn't have ZONE_MOVABLE | - - - | --- | 2018-12-16 | Wei Yang | - | New
mm: remove unused page state adjustment macro | mm: remove unused page state adjustment macro | 1 1 - | --- | 2018-12-14 | Wei Yang | - | New
mm, page_isolation: remove drain_all_pages() in set_migratetype_isolate() | mm, page_isolation: remove drain_all_pages() in set_migratetype_isolate() | - - - | --- | 2018-12-14 | Wei Yang | - | New
mm, memory_hotplug: pass next_memory_node to new_page_nodemask() | mm, memory_hotplug: pass next_memory_node to new_page_nodemask() | - - - | --- | 2018-12-13 | Wei Yang | - | New
[v2] mm, page_alloc: enable pcpu_drain with zone capability | [v2] mm, page_alloc: enable pcpu_drain with zone capability | 1 2 - | --- | 2018-12-12 | Wei Yang | - | New
mm, page_alloc: enable pcpu_drain with zone capability | mm, page_alloc: enable pcpu_drain with zone capability | 1 - - | --- | 2018-12-12 | Wei Yang | - | New
mm, sparse: remove check with __highest_present_section_nr in for_each_present_section_nr() | mm, sparse: remove check with __highest_present_section_nr in for_each_present_section_nr() | - - - | --- | 2018-12-11 | Wei Yang | - | New
mm, page_alloc: calculate first_deferred_pfn directly | mm, page_alloc: calculate first_deferred_pfn directly | - - - | --- | 2018-12-07 | Wei Yang | - | New
[v2,2/2] core-api/memory-hotplug.rst: divide Locking Internal section by different locks | [v2,1/2] admin-guide/memory-hotplug.rst: remove locking internal part from admin-guide | - 1 - | --- | 2018-12-06 | Wei Yang | - | New
[v2,1/2] admin-guide/memory-hotplug.rst: remove locking internal part from admin-guide | [v2,1/2] admin-guide/memory-hotplug.rst: remove locking internal part from admin-guide | 1 1 - | --- | 2018-12-06 | Wei Yang | - | New
[2/2] mm, page_alloc: cleanup usemap_size() when SPARSEMEM is not set | [1/2] mm, pageblock: make sure pageblock won't exceed mem_sectioin | - - - | --- | 2018-12-05 | Wei Yang | - | New
[1/2] mm, pageblock: make sure pageblock won't exceed mem_sectioin | [1/2] mm, pageblock: make sure pageblock won't exceed mem_sectioin | - - - | --- | 2018-12-05 | Wei Yang | - | New
[2/2] core-api/memory-hotplug.rst: divide Locking Internal section by different locks | [1/2] admin-guide/memory-hotplug.rst: remove locking internal part from admin-guide | - - - | --- | 2018-12-05 | Wei Yang | - | New
[1/2] admin-guide/memory-hotplug.rst: remove locking internal part from admin-guide | [1/2] admin-guide/memory-hotplug.rst: remove locking internal part from admin-guide | 1 1 - | --- | 2018-12-05 | Wei Yang | - | New
[v4,2/2] mm, sparse: pass nid instead of pgdat to sparse_add_one_section() | [v4,1/2] mm, sparse: drop pgdat_resize_lock in sparse_add/remove_one_section() | 1 1 - | --- | 2018-12-04 | Wei Yang | - | New
[v4,1/2] mm, sparse: drop pgdat_resize_lock in sparse_add/remove_one_section() | [v4,1/2] mm, sparse: drop pgdat_resize_lock in sparse_add/remove_one_section() | 1 1 - | --- | 2018-12-04 | Wei Yang | - | New
[v4] mm, hotplug: move init_currently_empty_zone() under zone_span_lock protection | [v4] mm, hotplug: move init_currently_empty_zone() under zone_span_lock protection | 1 1 - | --- | 2018-12-03 | Wei Yang | - | New
[v3] mm, hotplug: move init_currently_empty_zone() under zone_span_lock protection | [v3] mm, hotplug: move init_currently_empty_zone() under zone_span_lock protection | 1 1 - | --- | 2018-11-30 | Wei Yang | - | New
[v2] mm, show_mem: drop pgdat_resize_lock in show_mem() | [v2] mm, show_mem: drop pgdat_resize_lock in show_mem() | 1 1 - | --- | 2018-11-29 | Wei Yang | - | New
[v3,2/2] mm, sparse: pass nid instead of pgdat to sparse_add_one_section() | [v3,1/2] mm, sparse: drop pgdat_resize_lock in sparse_add/remove_one_section() | 1 1 - | --- | 2018-11-29 | Wei Yang | - | New
[v3,1/2] mm, sparse: drop pgdat_resize_lock in sparse_add/remove_one_section() | [v3,1/2] mm, sparse: drop pgdat_resize_lock in sparse_add/remove_one_section() | 1 - - | --- | 2018-11-29 | Wei Yang | - | New
mm, show_mem: drop pgdat_resize_lock in show_mem() | mm, show_mem: drop pgdat_resize_lock in show_mem() | - - - | --- | 2018-11-28 | Wei Yang | - | New
[v2] mm, sparse: drop pgdat_resize_lock in sparse_add/remove_one_section() | [v2] mm, sparse: drop pgdat_resize_lock in sparse_add/remove_one_section() | 1 1 - | --- | 2018-11-28 | Wei Yang | - | New
[RFC] mm: update highest_memmap_pfn based on exact pfn | [RFC] mm: update highest_memmap_pfn based on exact pfn | - - - | --- | 2018-11-28 | Wei Yang | - | New
mm, sparse: drop pgdat_resize_lock in sparse_add/remove_one_section() | mm, sparse: drop pgdat_resize_lock in sparse_add/remove_one_section() | - - - | --- | 2018-11-27 | Wei Yang | - | New
drivers/base/memory.c: remove an unnecessary check on NR_MEM_SECTIONS | drivers/base/memory.c: remove an unnecessary check on NR_MEM_SECTIONS | - - - | --- | 2018-11-23 | Wei Yang | - | New
[v2] mm, hotplug: move init_currently_empty_zone() under zone_span_lock protection | [v2] mm, hotplug: move init_currently_empty_zone() under zone_span_lock protection | 1 1 - | --- | 2018-11-22 | Wei Yang | - | New
mm: check nr_initialised with PAGES_PER_SECTION directly in defer_init() | mm: check nr_initialised with PAGES_PER_SECTION directly in defer_init() | - 1 - | --- | 2018-11-22 | Wei Yang | - | New
[v2] mm/slub: improve performance by skipping checked node in get_any_partial() | [v2] mm/slub: improve performance by skipping checked node in get_any_partial() | - 1 - | --- | 2018-11-20 | Wei Yang | - | New
mm, hotplug: protect nr_zones with pgdat_resize_lock() | mm, hotplug: protect nr_zones with pgdat_resize_lock() | 1 - - | --- | 2018-11-20 | Wei Yang | - | New
[RFC] mm, meminit: remove init_reserved_page() | [RFC] mm, meminit: remove init_reserved_page() | - - - | --- | 2018-11-19 | Wei Yang | - | New
mm, page_alloc: fix calculation of pgdat->nr_zones | mm, page_alloc: fix calculation of pgdat->nr_zones | 1 1 - | --- | 2018-11-17 | Wei Yang | - | New
mm: use managed_zone() for more exact check in zone iteration | mm: use managed_zone() for more exact check in zone iteration | - - - | --- | 2018-11-14 | Wei Yang | - | New
[v2] mm/slub: skip node in case there is no slab to acquire | [v2] mm/slub: skip node in case there is no slab to acquire | - - - | --- | 2018-11-13 | Wei Yang | - | New
[v2] vmscan: return NODE_RECLAIM_NOSCAN in node_reclaim() when CONFIG_NUMA is n | [v2] vmscan: return NODE_RECLAIM_NOSCAN in node_reclaim() when CONFIG_NUMA is n | 1 1 - | --- | 2018-11-13 | Wei Yang | - | New
vmscan: return NODE_RECLAIM_NOSCAN in node_reclaim() when CONFIG_NUMA is n | vmscan: return NODE_RECLAIM_NOSCAN in node_reclaim() when CONFIG_NUMA is n | - - - | --- | 2018-11-13 | Wei Yang | - | New
mm, page_alloc: skip to set lowmem_reserve[] for empty zones | mm, page_alloc: skip to set lowmem_reserve[] for empty zones | - - - | --- | 2018-11-13 | Wei Yang | - | New
mm, page_alloc: skip zone who has no managed_pages in calculate_totalreserve_pages() | mm, page_alloc: skip zone who has no managed_pages in calculate_totalreserve_pages() | - - - | --- | 2018-11-12 | Wei Yang | - | New
mm/slub: skip node in case there is no slab to acquire | mm/slub: skip node in case there is no slab to acquire | - - - | --- | 2018-11-08 | Wei Yang | - | New
mm/slub: record final state of slub action in deactivate_slab() | mm/slub: record final state of slub action in deactivate_slab() | - - - | --- | 2018-11-07 | Wei Yang | - | New
mm/slub: page is always non-NULL for node_match() | mm/slub: page is always non-NULL for node_match() | 1 - - | --- | 2018-11-06 | Wei Yang | - | New
mm/slub: remove validation on cpu_slab in __flush_cpu_slab() | mm/slub: remove validation on cpu_slab in __flush_cpu_slab() | - - - | --- | 2018-11-03 | Wei Yang | - | New