<h2>Basic Concepts</h2>
<ol>
<li>
<p>Basic structure: node -> zone -> page</p>
</li>
<li>
<p>Each CPU, together with the memory directly attached to it, forms a node; nodes can be inspected with <code>dmidecode</code> or <code>numactl --hardware</code></p>
</li>
<li>
<p>Each node is divided into several zones; common zone types are ZONE_DMA, ZONE_DMA32 and ZONE_NORMAL. Each zone in turn contains many pages. Zone details can be viewed with <code>cat /proc/zoneinfo</code></p>
</li>
<li>
<p>The free pages of each zone are managed by the buddy system; <code>cat /proc/pagetypeinfo</code> shows how many free contiguous blocks of each order the buddy system currently holds (a short allocation sketch follows this list)</p>
</li>
<li>On top of the buddy system the kernel implements the slab (or slub) allocator; a single slab only hands out objects of one specific size. All kmem_cache instances can be listed with <code>cat /proc/slabinfo</code>; <code>slabtop</code> gives an interactive view</li>
</ol>
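<p>As a concrete example of the buddy system from the caller's side, below is a minimal kernel-module sketch (untested, written purely for illustration; the module name and log message are invented here) that asks the buddy allocator for an order-2 block, i.e. 4 physically contiguous pages, and returns it on unload:</p>
<pre><code class="language-c">// buddy_demo.c - hypothetical example module, not from the original article
#include &lt;linux/module.h&gt;
#include &lt;linux/gfp.h&gt;
#include &lt;linux/mm.h&gt;

static struct page *pages;

static int __init buddy_demo_init(void)
{
    /* Ask the buddy allocator for one order-2 block (2^2 = 4 contiguous pages). */
    pages = alloc_pages(GFP_KERNEL, 2);
    if (!pages)
        return -ENOMEM;

    pr_info("buddy_demo: got 4 pages at kernel address %p\n",
            page_address(pages));
    return 0;
}

static void __exit buddy_demo_exit(void)
{
    /* Return the order-2 block to the buddy system's free lists. */
    __free_pages(pages, 2);
}

module_init(buddy_demo_init);
module_exit(buddy_demo_exit);
MODULE_LICENSE("GPL");</code></pre>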
<h2>zone Structure</h2>
<pre><code class="language-c">// file: include/linux/mmzone.h
#define MAX_ORDER 11

struct zone {
    struct free_area free_area[MAX_ORDER];
    // ...
};

struct free_area {
    struct list_head free_list[MIGRATE_TYPES];
    unsigned long nr_free;
};

enum {
    MIGRATE_UNMOVABLE,
    MIGRATE_RECLAIMABLE,
    MIGRATE_MOVABLE,
    MIGRATE_PCPTYPES,   /* the number of types on the pcp lists */
    MIGRATE_RESERVE = MIGRATE_PCPTYPES,
#ifdef CONFIG_CMA
    /*
     * MIGRATE_CMA migration type is designed to mimic the way
     * ZONE_MOVABLE works. Only movable pages can be allocated
     * from MIGRATE_CMA pageblocks and page allocator never
     * implicitly change migration type of MIGRATE_CMA pageblock.
     *
     * The way to use it is to change migratetype of a range of
     * pageblocks to MIGRATE_CMA which can be done by
     * __free_pageblock_cma() function. What is important though
     * is that a range of pageblocks must be aligned to
     * MAX_ORDER_NR_PAGES should biggest page be bigger then
     * a single pageblock.
     */
    MIGRATE_CMA,
#endif
#ifdef CONFIG_MEMORY_ISOLATION
    MIGRATE_ISOLATE,    /* can't allocate from here */
#endif
    MIGRATE_TYPES
};</code></pre>
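<p>To show how <code>free_area[]</code> is actually used, here is a tiny user-space toy model (not kernel code; the names <code>nr_free</code> and <code>alloc_block</code> are simplifications invented here). It only mimics the core idea: an order-n request walks the array from small to large and, if only a bigger block is free, splits it down, leaving one buddy half on each intermediate free list:</p>
<pre><code class="language-c">/* free_area_toy.c - user-space toy model, illustration only */
#include &lt;stdio.h&gt;

#define MAX_ORDER 11

/* Simplified stand-in for zone-&gt;free_area[order].nr_free:
 * start with four free blocks of the largest order only. */
static unsigned long nr_free[MAX_ORDER] = { [MAX_ORDER - 1] = 4 };

/* Take one block of 2^order pages, splitting a larger block if needed. */
static int alloc_block(int order)
{
    for (int cur = order; cur &lt; MAX_ORDER; cur++) {
        if (nr_free[cur] == 0)
            continue;           /* nothing of this size, look one order up */
        nr_free[cur]--;         /* remove one 2^cur block from its free list */
        while (cur &gt; order) {   /* split it down to the requested order;     */
            cur--;              /* each split leaves one buddy half behind   */
            nr_free[cur]++;
        }
        return 0;
    }
    return -1;                  /* no block large enough anywhere */
}

int main(void)
{
    if (alloc_block(0) == 0)
        puts("order-0 page allocated by splitting a larger block");
    for (int i = 0; i &lt; MAX_ORDER; i++)
        printf("order %2d: %lu free\n", i, nr_free[i]);
    return 0;
}</code></pre>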
<p>Tencent Cloud device information:</p>
<p><img src="https://www.showdoc.com.cn/server/api/attachment/visitFile?sign=20ab15ed3d0b732545ae7cc7afb01d3d&amp;file=file.png" alt="" /></p>
<h2>slab Structure</h2>
<pre><code class="language-c">// file: include/linux/slab_def.h
struct kmem_cache {
    struct kmem_cache_node **node;
    // ...
};

/*
 * The slab lists for all objects.
 */
struct kmem_cache_node {
    spinlock_t list_lock;

#ifdef CONFIG_SLAB
    struct list_head slabs_partial; /* partial list first, better asm code */
    struct list_head slabs_full;
    struct list_head slabs_free;
    unsigned long free_objects;
    unsigned int free_limit;
    unsigned int colour_next;       /* Per-node cache coloring */
    struct array_cache *shared;     /* shared per node */
    struct array_cache **alien;     /* on other nodes */
    unsigned long next_reap;        /* updated without locking */
    int free_touched;               /* updated without locking */
#endif

#ifdef CONFIG_SLUB
    unsigned long nr_partial;
    struct list_head partial;
#ifdef CONFIG_SLUB_DEBUG
    atomic_long_t nr_slabs;
    atomic_long_t total_objects;
    struct list_head full;
#endif
#endif
};</code></pre>
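<p>To see how these per-node slab lists get populated from the caller's side, here is a short, untested kernel-module sketch (the cache name and object type are invented for illustration). It creates a dedicated kmem_cache and allocates one object from it; while the module is loaded, the cache appears by name in <code>cat /proc/slabinfo</code> and in <code>slabtop</code>:</p>
<pre><code class="language-c">// kmem_demo.c - hypothetical example module, not from the original article
#include &lt;linux/module.h&gt;
#include &lt;linux/slab.h&gt;

struct demo_obj {
    int id;
    char name[32];
};

static struct kmem_cache *demo_cache;
static struct demo_obj *obj;

static int __init kmem_demo_init(void)
{
    /* One slab cache that only hands out sizeof(struct demo_obj) objects. */
    demo_cache = kmem_cache_create("demo_obj_cache",
                                   sizeof(struct demo_obj), 0,
                                   SLAB_HWCACHE_ALIGN, NULL);
    if (!demo_cache)
        return -ENOMEM;

    obj = kmem_cache_alloc(demo_cache, GFP_KERNEL);
    if (!obj) {
        kmem_cache_destroy(demo_cache);
        return -ENOMEM;
    }
    obj-&gt;id = 1;
    return 0;
}

static void __exit kmem_demo_exit(void)
{
    kmem_cache_free(demo_cache, obj);   /* object returns to a partial/free slab   */
    kmem_cache_destroy(demo_cache);     /* cache disappears from /proc/slabinfo    */
}

module_init(kmem_demo_init);
module_exit(kmem_demo_exit);
MODULE_LICENSE("GPL");</code></pre>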