XXXX kmem: double-calling kmem_depot_ws_update isn't obvious
While the double-call is documented in a comment, it is not obvious what
exactly it is trying to accomplish. The easiest way to address this is to
introduce a new function that "zeroes out" the working set statistics,
forcing everything to be eligible for reaping.
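
A sketch of what such a helper could look like is shown below. The function
name and fields assume the kmem_maglist_t layout already used by
kmem_depot_ws_update(); this is illustrative, not necessarily the exact
committed change. Zeroing the working set makes every magazine in the depot
appear unused, so a subsequent reap may free all of them:

    /*
     * Illustrative sketch: mark the entire depot as outside the working
     * set so that a subsequent depot reap may free every magazine.
     */
    static void
    kmem_depot_ws_zero(kmem_cache_t *cp)
    {
            mutex_enter(&cp->cache_depot_lock);
            cp->cache_full.ml_reaplimit = cp->cache_full.ml_total;
            cp->cache_full.ml_min = cp->cache_full.ml_total;
            cp->cache_empty.ml_reaplimit = cp->cache_empty.ml_total;
            cp->cache_empty.ml_min = cp->cache_empty.ml_total;
            mutex_exit(&cp->cache_depot_lock);
    }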
--- old/usr/src/uts/common/os/kmem.c
+++ new/usr/src/uts/common/os/kmem.c
1 1 /*
2 2 * CDDL HEADER START
3 3 *
4 4 * The contents of this file are subject to the terms of the
5 5 * Common Development and Distribution License (the "License").
6 6 * You may not use this file except in compliance with the License.
7 7 *
8 8 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 9 * or http://www.opensolaris.org/os/licensing.
10 10 * See the License for the specific language governing permissions
11 11 * and limitations under the License.
12 12 *
13 13 * When distributing Covered Code, include this CDDL HEADER in each
14 14 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 15 * If applicable, add the following below this CDDL HEADER, with the
16 16 * fields enclosed by brackets "[]" replaced with your own identifying
17 17 * information: Portions Copyright [yyyy] [name of copyright owner]
18 18 *
19 19 * CDDL HEADER END
20 20 */
21 21 /*
22 22 * Copyright (c) 1994, 2010, Oracle and/or its affiliates. All rights reserved.
23 + * Copyright 2015 Nexenta Systems, Inc. All rights reserved.
23 24 */
24 25
25 26 /*
26 27 * Kernel memory allocator, as described in the following two papers and a
27 28 * statement about the consolidator:
28 29 *
29 30 * Jeff Bonwick,
30 31 * The Slab Allocator: An Object-Caching Kernel Memory Allocator.
31 32 * Proceedings of the Summer 1994 Usenix Conference.
32 33 * Available as /shared/sac/PSARC/1994/028/materials/kmem.pdf.
33 34 *
34 35 * Jeff Bonwick and Jonathan Adams,
35 36 * Magazines and vmem: Extending the Slab Allocator to Many CPUs and
36 37 * Arbitrary Resources.
37 38 * Proceedings of the 2001 Usenix Conference.
38 39 * Available as /shared/sac/PSARC/2000/550/materials/vmem.pdf.
39 40 *
40 41 * kmem Slab Consolidator Big Theory Statement:
41 42 *
42 43 * 1. Motivation
43 44 *
44 45 * As stated in Bonwick94, slabs provide the following advantages over other
45 46 * allocation structures in terms of memory fragmentation:
46 47 *
47 48 * - Internal fragmentation (per-buffer wasted space) is minimal.
48 49 * - Severe external fragmentation (unused buffers on the free list) is
49 50 * unlikely.
50 51 *
51 52 * Segregating objects by size eliminates one source of external fragmentation,
52 53 * and according to Bonwick:
53 54 *
54 55 * The other reason that slabs reduce external fragmentation is that all
55 56 * objects in a slab are of the same type, so they have the same lifetime
56 57 * distribution. The resulting segregation of short-lived and long-lived
57 58 * objects at slab granularity reduces the likelihood of an entire page being
58 59 * held hostage due to a single long-lived allocation [Barrett93, Hanson90].
59 60 *
60 61 * While unlikely, severe external fragmentation remains possible. Clients that
61 62 * allocate both short- and long-lived objects from the same cache cannot
62 63 * anticipate the distribution of long-lived objects within the allocator's slab
63 64 * implementation. Even a small percentage of long-lived objects distributed
64 65 * randomly across many slabs can lead to a worst case scenario where the client
65 66 * frees the majority of its objects and the system gets back almost none of the
66 67 * slabs. Despite the client doing what it reasonably can to help the system
67 68 * reclaim memory, the allocator cannot shake free enough slabs because of
68 69 * lonely allocations stubbornly hanging on. Although the allocator is in a
69 70 * position to diagnose the fragmentation, there is nothing that the allocator
70 71 * by itself can do about it. It only takes a single allocated object to prevent
71 72 * an entire slab from being reclaimed, and any object handed out by
72 73 * kmem_cache_alloc() is by definition in the client's control. Conversely,
73 74 * although the client is in a position to move a long-lived object, it has no
74 75 * way of knowing if the object is causing fragmentation, and if so, where to
75 76 * move it. A solution necessarily requires further cooperation between the
76 77 * allocator and the client.
77 78 *
78 79 * 2. Move Callback
79 80 *
80 81 * The kmem slab consolidator therefore adds a move callback to the
81 82 * allocator/client interface, improving worst-case external fragmentation in
82 83 * kmem caches that supply a function to move objects from one memory location
83 84 * to another. In a situation of low memory kmem attempts to consolidate all of
84 85 * a cache's slabs at once; otherwise it works slowly to bring external
85 86 * fragmentation within the 1/8 limit guaranteed for internal fragmentation,
86 87 * thereby helping to avoid a low memory situation in the future.
87 88 *
88 89 * The callback has the following signature:
89 90 *
90 91 * kmem_cbrc_t move(void *old, void *new, size_t size, void *user_arg)
91 92 *
92 93 * It supplies the kmem client with two addresses: the allocated object that
93 94 * kmem wants to move and a buffer selected by kmem for the client to use as the
94 95 * copy destination. The callback is kmem's way of saying "Please get off of
95 96 * this buffer and use this one instead." kmem knows where it wants to move the
96 97 * object in order to best reduce fragmentation. All the client needs to know
97 98 * about the second argument (void *new) is that it is an allocated, constructed
98 99 * object ready to take the contents of the old object. When the move function
99 100 * is called, the system is likely to be low on memory, and the new object
100 101 * spares the client from having to worry about allocating memory for the
101 102 * requested move. The third argument supplies the size of the object, in case a
102 103 * single move function handles multiple caches whose objects differ only in
103 104 * size (such as zio_buf_512, zio_buf_1024, etc). Finally, the same optional
104 105 * user argument passed to the constructor, destructor, and reclaim functions is
105 106 * also passed to the move callback.
106 107 *
107 108 * 2.1 Setting the Move Callback
108 109 *
109 110 * The client sets the move callback after creating the cache and before
110 111 * allocating from it:
111 112 *
112 113 * object_cache = kmem_cache_create(...);
113 114 * kmem_cache_set_move(object_cache, object_move);
114 115 *
115 116 * 2.2 Move Callback Return Values
116 117 *
117 118 * Only the client knows about its own data and when is a good time to move it.
118 119 * The client is cooperating with kmem to return unused memory to the system,
119 120 * and kmem respectfully accepts this help at the client's convenience. When
120 121 * asked to move an object, the client can respond with any of the following:
121 122 *
122 123 * typedef enum kmem_cbrc {
123 124 * KMEM_CBRC_YES,
124 125 * KMEM_CBRC_NO,
125 126 * KMEM_CBRC_LATER,
126 127 * KMEM_CBRC_DONT_NEED,
127 128 * KMEM_CBRC_DONT_KNOW
128 129 * } kmem_cbrc_t;
129 130 *
130 131 * The client must not explicitly kmem_cache_free() either of the objects passed
131 132 * to the callback, since kmem wants to free them directly to the slab layer
132 133 * (bypassing the per-CPU magazine layer). The response tells kmem which of the
133 134 * objects to free:
134 135 *
135 136 * YES: (Did it) The client moved the object, so kmem frees the old one.
136 137 * NO: (Never) The client refused, so kmem frees the new object (the
137 138 * unused copy destination). kmem also marks the slab of the old
138 139 * object so as not to bother the client with further callbacks for
139 140 * that object as long as the slab remains on the partial slab list.
140 141 * (The system won't be getting the slab back as long as the
141 142 * immovable object holds it hostage, so there's no point in moving
142 143 * any of its objects.)
143 144 * LATER: The client is using the object and cannot move it now, so kmem
144 145 * frees the new object (the unused copy destination). kmem still
145 146 * attempts to move other objects off the slab, since it expects to
146 147 * succeed in clearing the slab in a later callback. The client
147 148 * should use LATER instead of NO if the object is likely to become
148 149 * movable very soon.
149 150 * DONT_NEED: The client no longer needs the object, so kmem frees the old along
150 151 * with the new object (the unused copy destination). This response
151 152 * is the client's opportunity to be a model citizen and give back as
152 153 * much as it can.
153 154 * DONT_KNOW: The client does not know about the object because
154 155 * a) the client has just allocated the object and not yet put it
155 156 * wherever it expects to find known objects
156 157 * b) the client has removed the object from wherever it expects to
157 158 * find known objects and is about to free it, or
158 159 * c) the client has freed the object.
159 160 * In all these cases (a, b, and c) kmem frees the new object (the
160 161 * unused copy destination) and searches for the old object in the
161 162 * magazine layer. If found, the object is removed from the magazine
162 163 * layer and freed to the slab layer so it will no longer hold the
163 164 * slab hostage.
164 165 *
165 166 * 2.3 Object States
166 167 *
167 168 * Neither kmem nor the client can be assumed to know the object's whereabouts
168 169 * at the time of the callback. An object belonging to a kmem cache may be in
169 170 * any of the following states:
170 171 *
171 172 * 1. Uninitialized on the slab
172 173 * 2. Allocated from the slab but not constructed (still uninitialized)
173 174 * 3. Allocated from the slab, constructed, but not yet ready for business
174 175 * (not in a valid state for the move callback)
175 176 * 4. In use (valid and known to the client)
176 177 * 5. About to be freed (no longer in a valid state for the move callback)
177 178 * 6. Freed to a magazine (still constructed)
178 179 * 7. Allocated from a magazine, not yet ready for business (not in a valid
179 180 * state for the move callback), and about to return to state #4
180 181 * 8. Deconstructed on a magazine that is about to be freed
181 182 * 9. Freed to the slab
182 183 *
183 184 * Since the move callback may be called at any time while the object is in any
184 185 * of the above states (except state #1), the client needs a safe way to
185 186 * determine whether or not it knows about the object. Specifically, the client
186 187 * needs to know whether or not the object is in state #4, the only state in
187 188 * which a move is valid. If the object is in any other state, the client should
188 189 * immediately return KMEM_CBRC_DONT_KNOW, since it is unsafe to access any of
189 190 * the object's fields.
190 191 *
191 192 * Note that although an object may be in state #4 when kmem initiates the move
192 193 * request, the object may no longer be in that state by the time kmem actually
193 194 * calls the move function. Not only does the client free objects
194 195 * asynchronously, kmem itself puts move requests on a queue where they are
195 196 * pending until kmem processes them from another context. Also, objects freed
196 197 * to a magazine appear allocated from the point of view of the slab layer, so
197 198 * kmem may even initiate requests for objects in a state other than state #4.
198 199 *
199 200 * 2.3.1 Magazine Layer
200 201 *
201 202 * An important insight revealed by the states listed above is that the magazine
202 203 * layer is populated only by kmem_cache_free(). Magazines of constructed
203 204 * objects are never populated directly from the slab layer (which contains raw,
204 205 * unconstructed objects). Whenever an allocation request cannot be satisfied
205 206 * from the magazine layer, the magazines are bypassed and the request is
206 207 * satisfied from the slab layer (creating a new slab if necessary). kmem calls
207 208 * the object constructor only when allocating from the slab layer, and only in
208 209 * response to kmem_cache_alloc() or to prepare the destination buffer passed in
209 210 * the move callback. kmem does not preconstruct objects in anticipation of
210 211 * kmem_cache_alloc().
211 212 *
212 213 * 2.3.2 Object Constructor and Destructor
213 214 *
214 215 * If the client supplies a destructor, it must be valid to call the destructor
215 216 * on a newly created object (immediately after the constructor).
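 *
 * A constructor/destructor pair satisfying that requirement might look like
 * the sketch below (o_lock is a hypothetical field; because the destructor
 * only undoes what the constructor did, it can safely run immediately after
 * the constructor):
 *
 *	static int
 *	object_constructor(void *buf, void *arg, int kmflags)
 *	{
 *		object_t *object = buf;
 *
 *		mutex_init(&object->o_lock, NULL, MUTEX_DEFAULT, NULL);
 *		object->o_container = NULL;
 *		return (0);
 *	}
 *
 *	static void
 *	object_destructor(void *buf, void *arg)
 *	{
 *		object_t *object = buf;
 *
 *		mutex_destroy(&object->o_lock);
 *	}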
216 217 *
217 218 * 2.4 Recognizing Known Objects
218 219 *
219 220 * There is a simple test to determine safely whether or not the client knows
220 221 * about a given object in the move callback. It relies on the fact that kmem
221 222 * guarantees that the object of the move callback has only been touched by the
222 223 * client itself or else by kmem. kmem does this by ensuring that none of the
223 224 * cache's slabs are freed to the virtual memory (VM) subsystem while a move
224 225 * callback is pending. When the last object on a slab is freed, if there is a
225 226 * pending move, kmem puts the slab on a per-cache dead list and defers freeing
226 227 * slabs on that list until all pending callbacks are completed. That way,
227 228 * clients can be certain that the object of a move callback is in one of the
228 229 * states listed above, making it possible to distinguish known objects (in
229 230 * state #4) using the two low order bits of any pointer member (with the
230 231 * exception of 'char *' or 'short *' which may not be 4-byte aligned on some
231 232 * platforms).
232 233 *
233 234 * The test works as long as the client always transitions objects from state #4
234 235 * (known, in use) to state #5 (about to be freed, invalid) by setting the low
235 236 * order bit of the client-designated pointer member. Since kmem only writes
236 237 * invalid memory patterns, such as 0xbaddcafe to uninitialized memory and
237 238 * 0xdeadbeef to freed memory, any scribbling on the object done by kmem is
238 239 * guaranteed to set at least one of the two low order bits. Therefore, given an
239 240 * object with a back pointer to a 'container_t *o_container', the client can
240 241 * test
241 242 *
242 243 * container_t *container = object->o_container;
243 244 * if ((uintptr_t)container & 0x3) {
244 245 * return (KMEM_CBRC_DONT_KNOW);
245 246 * }
246 247 *
247 248 * Typically, an object will have a pointer to some structure with a list or
248 249 * hash where objects from the cache are kept while in use. Assuming that the
249 250 * client has some way of knowing that the container structure is valid and will
250 251 * not go away during the move, and assuming that the structure includes a lock
251 252 * to protect whatever collection is used, then the client would continue as
252 253 * follows:
253 254 *
254 255 * // Ensure that the container structure does not go away.
255 256 * if (container_hold(container) == 0) {
256 257 * return (KMEM_CBRC_DONT_KNOW);
257 258 * }
258 259 * mutex_enter(&container->c_objects_lock);
259 260 * if (container != object->o_container) {
260 261 * mutex_exit(&container->c_objects_lock);
261 262 * container_rele(container);
262 263 * return (KMEM_CBRC_DONT_KNOW);
263 264 * }
264 265 *
265 266 * At this point the client knows that the object cannot be freed as long as
266 267 * c_objects_lock is held. Note that after acquiring the lock, the client must
267 268 * recheck the o_container pointer in case the object was removed just before
268 269 * acquiring the lock.
269 270 *
270 271 * When the client is about to free an object, it must first remove that object
271 272 * from the list, hash, or other structure where it is kept. At that time, to
272 273 * mark the object so it can be distinguished from the remaining, known objects,
273 274 * the client sets the designated low order bit:
274 275 *
275 276 * mutex_enter(&container->c_objects_lock);
276 277 * object->o_container = (void *)((uintptr_t)object->o_container | 0x1);
277 278 * list_remove(&container->c_objects, object);
278 279 * mutex_exit(&container->c_objects_lock);
279 280 *
280 281 * In the common case, the object is freed to the magazine layer, where it may
281 282 * be reused on a subsequent allocation without the overhead of calling the
282 283 * constructor. While in the magazine it appears allocated from the point of
283 284 * view of the slab layer, making it a candidate for the move callback. Most
284 285 * objects unrecognized by the client in the move callback fall into this
285 286 * category and are cheaply distinguished from known objects by the test
286 287 * described earlier. Since recognition is cheap for the client, and searching
287 288 * magazines is expensive for kmem, kmem defers searching until the client first
288 289 * returns KMEM_CBRC_DONT_KNOW. As long as the needed effort is reasonable, kmem
289 290 * elsewhere does what it can to avoid bothering the client unnecessarily.
290 291 *
291 292 * Invalidating the designated pointer member before freeing the object marks
292 293 * the object to be avoided in the callback, and conversely, assigning a valid
293 294 * value to the designated pointer member after allocating the object makes the
294 295 * object fair game for the callback:
295 296 *
296 297 * ... allocate object ...
297 298 * ... set any initial state not set by the constructor ...
298 299 *
299 300 * mutex_enter(&container->c_objects_lock);
300 301 * list_insert_tail(&container->c_objects, object);
301 302 * membar_producer();
302 303 * object->o_container = container;
303 304 * mutex_exit(&container->c_objects_lock);
304 305 *
305 306 * Note that everything else must be valid before setting o_container makes the
306 307 * object fair game for the move callback. The membar_producer() call ensures
307 308 * that all the object's state is written to memory before setting the pointer
308 309 * that transitions the object from state #3 or #7 (allocated, constructed, not
309 310 * yet in use) to state #4 (in use, valid). That's important because the move
310 311 * function has to check the validity of the pointer before it can safely
311 312 * acquire the lock protecting the collection where it expects to find known
312 313 * objects.
313 314 *
314 315 * This method of distinguishing known objects observes the usual symmetry:
315 316 * invalidating the designated pointer is the first thing the client does before
316 317 * freeing the object, and setting the designated pointer is the last thing the
317 318 * client does after allocating the object. Of course, the client is not
318 319 * required to use this method. Fundamentally, how the client recognizes known
319 320 * objects is completely up to the client, but this method is recommended as an
320 321 * efficient and safe way to take advantage of the guarantees made by kmem. If
321 322 * the entire object is arbitrary data without any markable bits from a suitable
322 323 * pointer member, then the client must find some other method, such as
323 324 * searching a hash table of known objects.
324 325 *
325 326 * 2.5 Preventing Objects From Moving
326 327 *
327 328 * Besides a way to distinguish known objects, the other thing that the client
328 329 * needs is a strategy to ensure that an object will not move while the client
329 330 * is actively using it. The details of satisfying this requirement tend to be
330 331 * highly cache-specific. It might seem that the same rules that let a client
331 332 * remove an object safely should also decide when an object can be moved
332 333 * safely. However, any object state that makes a removal attempt invalid is
333 334 * likely to be long-lasting for objects that the client does not expect to
334 335 * remove. kmem knows nothing about the object state and is equally likely (from
335 336 * the client's point of view) to request a move for any object in the cache,
336 337 * whether prepared for removal or not. Even a low percentage of objects stuck
337 338 * in place by unremovability will defeat the consolidator if the stuck objects
338 339 * are the same long-lived allocations likely to hold slabs hostage.
339 340 * Fundamentally, the consolidator is not aimed at common cases. Severe external
340 341 * fragmentation is a worst case scenario manifested as sparsely allocated
341 342 * slabs, by definition a low percentage of the cache's objects. When deciding
342 343 * what makes an object movable, keep in mind the goal of the consolidator: to
343 344 * bring worst-case external fragmentation within the limits guaranteed for
344 345 * internal fragmentation. Removability is a poor criterion if it is likely to
345 346 * exclude more than an insignificant percentage of objects for long periods of
346 347 * time.
347 348 *
348 349 * A tricky general solution exists, and it has the advantage of letting you
349 350 * move any object at almost any moment, practically eliminating the likelihood
350 351 * that an object can hold a slab hostage. However, if there is a cache-specific
351 352 * way to ensure that an object is not actively in use in the vast majority of
352 353 * cases, a simpler solution that leverages this cache-specific knowledge is
353 354 * preferred.
354 355 *
355 356 * 2.5.1 Cache-Specific Solution
356 357 *
357 358 * As an example of a cache-specific solution, the ZFS znode cache takes
358 359 * advantage of the fact that the vast majority of znodes are only being
359 360 * referenced from the DNLC. (A typical case might be a few hundred in active
360 361 * use and a hundred thousand in the DNLC.) In the move callback, after the ZFS
361 362 * client has established that it recognizes the znode and can access its fields
362 363 * safely (using the method described earlier), it then tests whether the znode
363 364 * is referenced by anything other than the DNLC. If so, it assumes that the
364 365 * znode may be in active use and is unsafe to move, so it drops its locks and
365 366 * returns KMEM_CBRC_LATER. The advantage of this strategy is that everywhere
366 367 * else znodes are used, no change is needed to protect against the possibility
367 368 * of the znode moving. The disadvantage is that it remains possible for an
368 369 * application to hold a znode slab hostage with an open file descriptor.
369 370 * However, this case ought to be rare and the consolidator has a way to deal
370 371 * with it: If the client responds KMEM_CBRC_LATER repeatedly for the same
371 372 * object, kmem eventually stops believing it and treats the slab as if the
372 373 * client had responded KMEM_CBRC_NO. Having marked the hostage slab, kmem can
373 374 * then focus on getting it off of the partial slab list by allocating rather
374 375 * than freeing all of its objects. (Either way of getting a slab off the
375 376 * free list reduces fragmentation.)
376 377 *
377 378 * 2.5.2 General Solution
378 379 *
379 380 * The general solution, on the other hand, requires an explicit hold everywhere
380 381 * the object is used to prevent it from moving. To keep the client locking
381 382 * strategy as uncomplicated as possible, kmem guarantees the simplifying
382 383 * assumption that move callbacks are sequential, even across multiple caches.
383 384 * Internally, a global queue processed by a single thread supports all caches
384 385 * implementing the callback function. No matter how many caches supply a move
385 386 * function, the consolidator never moves more than one object at a time, so the
386 387 * client does not have to worry about tricky lock ordering involving several
387 388 * related objects from different kmem caches.
388 389 *
389 390 * The general solution implements the explicit hold as a read-write lock, which
390 391 * allows multiple readers to access an object from the cache simultaneously
391 392 * while a single writer is excluded from moving it. A single rwlock for the
392 393 * entire cache would lock out all threads from using any of the cache's objects
393 394 * even though only a single object is being moved, so to reduce contention,
394 395 * the client can fan out the single rwlock into an array of rwlocks hashed by
395 396 * the object address, making it probable that moving one object will not
396 397 * prevent other threads from using a different object. The rwlock cannot be a
397 398 * member of the object itself, because the possibility of the object moving
398 399 * makes it unsafe to access any of the object's fields until the lock is
399 400 * acquired.
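 *
 * For illustration, the hashed-lock arrangement described above, and the
 * OBJECT_RWLOCK() macro used in the examples that follow, might be set up
 * as sketched below (the lock count and hash shift are arbitrary
 * illustrative choices, not requirements):
 *
 *	#define	OBJECT_LOCK_COUNT	64	// any power of two
 *	static krwlock_t object_rwlock[OBJECT_LOCK_COUNT];
 *	#define	OBJECT_RWLOCK(op)	\
 *	    (&object_rwlock[((uintptr_t)(op) >> 3) & (OBJECT_LOCK_COUNT - 1)])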
400 401 *
401 402 * Assuming a small, fixed number of locks, it's possible that multiple objects
402 403 * will hash to the same lock. A thread that needs to use multiple objects in
403 404 * the same function may acquire the same lock multiple times. Since rwlocks are
404 405 * reentrant for readers, and since there is never more than a single writer at
405 406 * a time (assuming that the client acquires the lock as a writer only when
406 407 * moving an object inside the callback), there would seem to be no problem.
407 408 * However, a client locking multiple objects in the same function must handle
408 409 * one case of potential deadlock: Assume that thread A needs to prevent both
409 410 * object 1 and object 2 from moving, and thread B, the callback, meanwhile
410 411 * tries to move object 3. It's possible, if objects 1, 2, and 3 all hash to the
411 412 * same lock, that thread A will acquire the lock for object 1 as a reader
412 413 * before thread B sets the lock's write-wanted bit, preventing thread A from
413 414 * reacquiring the lock for object 2 as a reader. Unable to make forward
414 415 * progress, thread A will never release the lock for object 1, resulting in
415 416 * deadlock.
416 417 *
417 418 * There are two ways of avoiding the deadlock just described. The first is to
418 419 * use rw_tryenter() rather than rw_enter() in the callback function when
419 420 * attempting to acquire the lock as a writer. If tryenter discovers that the
420 421 * same object (or another object hashed to the same lock) is already in use, it
421 422 * aborts the callback and returns KMEM_CBRC_LATER. The second way is to use
422 423 * rprwlock_t (declared in common/fs/zfs/sys/rprwlock.h) instead of rwlock_t,
423 424 * since it allows a thread to acquire the lock as a reader in spite of a
424 425 * waiting writer. This second approach insists on moving the object now, no
425 426 * matter how many readers the move function must wait for in order to do so,
426 427 * and could delay the completion of the callback indefinitely (blocking
427 428 * callbacks to other clients). In practice, a less insistent callback using
428 429 * rw_tryenter() returns KMEM_CBRC_LATER infrequently enough that there seems
429 430 * little reason to use anything else.
430 431 *
431 432 * Avoiding deadlock is not the only problem that an implementation using an
432 433 * explicit hold needs to solve. Locking the object in the first place (to
433 434 * prevent it from moving) remains a problem, since the object could move
434 435 * between the time you obtain a pointer to the object and the time you acquire
435 436 * the rwlock hashed to that pointer value. Therefore the client needs to
436 437 * recheck the value of the pointer after acquiring the lock, drop the lock if
437 438 * the value has changed, and try again. This requires a level of indirection:
438 439 * something that points to the object rather than the object itself, that the
439 440 * client can access safely while attempting to acquire the lock. (The object
440 441 * itself cannot be referenced safely because it can move at any time.)
441 442 * The following lock-acquisition function takes whatever is safe to reference
442 443 * (arg), follows its pointer to the object (using function f), and tries as
443 444 * often as necessary to acquire the hashed lock and verify that the object
444 445 * still has not moved:
445 446 *
446 447 * object_t *
447 448 * object_hold(object_f f, void *arg)
448 449 * {
449 450 * object_t *op;
450 451 *
451 452 * op = f(arg);
452 453 * if (op == NULL) {
453 454 * return (NULL);
454 455 * }
455 456 *
456 457 * rw_enter(OBJECT_RWLOCK(op), RW_READER);
457 458 * while (op != f(arg)) {
458 459 * rw_exit(OBJECT_RWLOCK(op));
459 460 * op = f(arg);
460 461 * if (op == NULL) {
461 462 * break;
462 463 * }
463 464 * rw_enter(OBJECT_RWLOCK(op), RW_READER);
464 465 * }
465 466 *
466 467 * return (op);
467 468 * }
468 469 *
469 470 * The OBJECT_RWLOCK macro hashes the object address to obtain the rwlock. The
470 471 * lock reacquisition loop, while necessary, almost never executes. The function
471 472 * pointer f (used to obtain the object pointer from arg) has the following type
472 473 * definition:
473 474 *
474 475 * typedef object_t *(*object_f)(void *arg);
475 476 *
476 477 * An object_f implementation is likely to be as simple as accessing a structure
477 478 * member:
478 479 *
479 480 * object_t *
480 481 * s_object(void *arg)
481 482 * {
482 483 * something_t *sp = arg;
483 484 * return (sp->s_object);
484 485 * }
485 486 *
486 487 * The flexibility of a function pointer allows the path to the object to be
487 488 * arbitrarily complex and also supports the notion that depending on where you
488 489 * are using the object, you may need to get it from someplace different.
489 490 *
490 491 * The function that releases the explicit hold is simpler because it does not
491 492 * have to worry about the object moving:
492 493 *
493 494 * void
494 495 * object_rele(object_t *op)
495 496 * {
496 497 * rw_exit(OBJECT_RWLOCK(op));
497 498 * }
498 499 *
499 500 * The caller is spared these details so that obtaining and releasing an
500 501 * explicit hold feels like a simple mutex_enter()/mutex_exit() pair. The caller
501 502 * of object_hold() only needs to know that the returned object pointer is valid
502 503 * if not NULL and that the object will not move until released.
503 504 *
504 505 * Although object_hold() prevents an object from moving, it does not prevent it
505 506 * from being freed. The caller must take measures before calling object_hold()
506 507 * (afterwards is too late) to ensure that the held object cannot be freed. The
507 508 * caller must do so without accessing the unsafe object reference, so any lock
508 509 * or reference count used to ensure the continued existence of the object must
509 510 * live outside the object itself.
510 511 *
511 512 * Obtaining a new object is a special case where an explicit hold is impossible
512 513 * for the caller. Any function that returns a newly allocated object (either as
513 514 * a return value, or as an in-out parameter) must return it already held; after
514 515 * the caller gets it is too late, since the object cannot be safely accessed
515 516 * without the level of indirection described earlier. The following
516 517 * object_alloc() example uses the same code shown earlier to transition a new
517 518 * object into the state of being recognized (by the client) as a known object.
518 519 * The function must acquire the hold (rw_enter) before that state transition
519 520 * makes the object movable:
520 521 *
521 522 * static object_t *
522 523 * object_alloc(container_t *container)
523 524 * {
524 525 * object_t *object = kmem_cache_alloc(object_cache, 0);
525 526 * ... set any initial state not set by the constructor ...
526 527 * rw_enter(OBJECT_RWLOCK(object), RW_READER);
527 528 * mutex_enter(&container->c_objects_lock);
528 529 * list_insert_tail(&container->c_objects, object);
529 530 * membar_producer();
530 531 * object->o_container = container;
531 532 * mutex_exit(&container->c_objects_lock);
532 533 * return (object);
533 534 * }
534 535 *
535 536 * Functions that implicitly acquire an object hold (any function that calls
536 537 * object_alloc() to supply an object for the caller) need to be carefully noted
537 538 * so that the matching object_rele() is not neglected. Otherwise, leaked holds
538 539 * prevent all objects hashed to the affected rwlocks from ever being moved.
539 540 *
540 541 * The pointer to a held object can be hashed to the holding rwlock even after
541 542 * the object has been freed. Although it is possible to release the hold
542 543 * after freeing the object, you may decide to release the hold implicitly in
543 544 * whatever function frees the object, so as to release the hold as soon as
544 545 * possible, and for the sake of symmetry with the function that implicitly
545 546 * acquires the hold when it allocates the object. Here, object_free() releases
546 547 * the hold acquired by object_alloc(). Its implicit object_rele() forms a
547 548 * matching pair with object_hold():
548 549 *
549 550 * void
550 551 * object_free(object_t *object)
551 552 * {
552 553 * container_t *container;
553 554 *
554 555 * ASSERT(object_held(object));
555 556 * container = object->o_container;
556 557 * mutex_enter(&container->c_objects_lock);
557 558 * object->o_container =
558 559 * (void *)((uintptr_t)object->o_container | 0x1);
559 560 * list_remove(&container->c_objects, object);
560 561 * mutex_exit(&container->c_objects_lock);
561 562 * object_rele(object);
562 563 * kmem_cache_free(object_cache, object);
563 564 * }
564 565 *
565 566 * Note that object_free() cannot safely accept an object pointer as an argument
566 567 * unless the object is already held. Any function that calls object_free()
567 568 * needs to be carefully noted since it similarly forms a matching pair with
568 569 * object_hold().
569 570 *
570 571 * To complete the picture, the following callback function implements the
571 572 * general solution by moving objects only if they are currently unheld:
572 573 *
573 574 * static kmem_cbrc_t
574 575 * object_move(void *buf, void *newbuf, size_t size, void *arg)
575 576 * {
576 577 * object_t *op = buf, *np = newbuf;
577 578 * container_t *container;
578 579 *
579 580 * container = op->o_container;
580 581 * if ((uintptr_t)container & 0x3) {
581 582 * return (KMEM_CBRC_DONT_KNOW);
582 583 * }
583 584 *
584 585 * // Ensure that the container structure does not go away.
585 586 * if (container_hold(container) == 0) {
586 587 * return (KMEM_CBRC_DONT_KNOW);
587 588 * }
588 589 *
589 590 * mutex_enter(&container->c_objects_lock);
590 591 * if (container != op->o_container) {
591 592 * mutex_exit(&container->c_objects_lock);
592 593 * container_rele(container);
593 594 * return (KMEM_CBRC_DONT_KNOW);
594 595 * }
595 596 *
596 597 * if (rw_tryenter(OBJECT_RWLOCK(op), RW_WRITER) == 0) {
597 598 * mutex_exit(&container->c_objects_lock);
598 599 * container_rele(container);
599 600 * return (KMEM_CBRC_LATER);
600 601 * }
601 602 *
602 603 * object_move_impl(op, np); // critical section
603 604 * rw_exit(OBJECT_RWLOCK(op));
604 605 *
605 606 * op->o_container = (void *)((uintptr_t)op->o_container | 0x1);
606 607 * list_link_replace(&op->o_link_node, &np->o_link_node);
607 608 * mutex_exit(&container->c_objects_lock);
608 609 * container_rele(container);
609 610 * return (KMEM_CBRC_YES);
610 611 * }
611 612 *
612 613 * Note that object_move() must invalidate the designated o_container pointer of
613 614 * the old object in the same way that object_free() does, since kmem will free
614 615 * the object in response to the KMEM_CBRC_YES return value.
615 616 *
616 617 * The lock order in object_move() differs from object_alloc(), which locks
617 618 * OBJECT_RWLOCK first and &container->c_objects_lock second, but as long as the
618 619 * callback uses rw_tryenter() (preventing the deadlock described earlier), it's
619 620 * not a problem. Holding the lock on the object list in the example above
620 621 * through the entire callback not only prevents the object from going away, it
621 622 * also allows you to lock the list elsewhere and know that none of its elements
622 623 * will move during iteration.
623 624 *
624 625 * Adding an explicit hold everywhere an object from the cache is used is tricky
625 626 * and involves much more change to client code than a cache-specific solution
626 627 * that leverages existing state to decide whether or not an object is
627 628 * movable. However, this approach has the advantage that no object remains
628 629 * immovable for any significant length of time, making it extremely unlikely
629 630 * that long-lived allocations can continue holding slabs hostage; and it works
630 631 * for any cache.
631 632 *
632 633 * 3. Consolidator Implementation
633 634 *
634 635 * Once the client supplies a move function that a) recognizes known objects and
635 636 * b) avoids moving objects that are actively in use, the remaining work is up
636 637 * to the consolidator to decide which objects to move and when to issue
637 638 * callbacks.
638 639 *
639 640 * The consolidator relies on the fact that a cache's slabs are ordered by
640 641 * usage. Each slab has a fixed number of objects. Depending on the slab's
641 642 * "color" (the offset of the first object from the beginning of the slab;
642 643 * offsets are staggered to mitigate false sharing of cache lines) it is either
643 644 * the maximum number of objects per slab determined at cache creation time or
644 645 * else the number closest to the maximum that fits within the space remaining
645 646 * after the initial offset. A completely allocated slab may contribute some
646 647 * internal fragmentation (per-slab overhead) but no external fragmentation, so
647 648 * it is of no interest to the consolidator. At the other extreme, slabs whose
648 649 * objects have all been freed to the slab are released to the virtual memory
649 650 * (VM) subsystem (objects freed to magazines are still allocated as far as the
650 651 * slab is concerned). External fragmentation exists when there are slabs
651 652 * somewhere between these extremes. A partial slab has at least one but not all
652 653 * of its objects allocated. The more partial slabs, and the fewer allocated
653 654 * objects on each of them, the higher the fragmentation. Hence the
654 655 * consolidator's overall strategy is to reduce the number of partial slabs by
655 656 * moving allocated objects from the least allocated slabs to the most allocated
656 657 * slabs.
657 658 *
658 659 * Partial slabs are kept in an AVL tree ordered by usage. Completely allocated
659 660 * slabs are kept separately in an unordered list. Since the majority of slabs
660 661 * tend to be completely allocated (a typical unfragmented cache may have
661 662 * thousands of complete slabs and only a single partial slab), separating
662 663 * complete slabs improves the efficiency of partial slab ordering, since the
663 664 * complete slabs do not affect the depth or balance of the AVL tree. This
664 665 * ordered sequence of partial slabs acts as a "free list" supplying objects for
665 666 * allocation requests.
666 667 *
667 668 * Objects are always allocated from the first partial slab in the free list,
668 669 * where the allocation is most likely to eliminate a partial slab (by
669 670 * completely allocating it). Conversely, when a single object from a completely
670 671 * allocated slab is freed to the slab, that slab is added to the front of the
671 672 * free list. Since most free list activity involves highly allocated slabs
672 673 * coming and going at the front of the list, slabs tend naturally toward the
673 674 * ideal order: highly allocated at the front, sparsely allocated at the back.
674 675 * Slabs with few allocated objects are likely to become completely free if they
675 676 * keep a safe distance away from the front of the free list. Slab misorders
676 677 * interfere with the natural tendency of slabs to become completely free or
677 678 * completely allocated. For example, a slab with a single allocated object
678 679 * needs only a single free to escape the cache; its natural desire is
679 680 * frustrated when it finds itself at the front of the list where a second
680 681 * allocation happens just before the free could have released it. Another slab
681 682 * with all but one object allocated might have supplied the buffer instead, so
682 683 * that both (as opposed to neither) of the slabs would have been taken off the
683 684 * free list.
684 685 *
685 686 * Although slabs tend naturally toward the ideal order, misorders allowed by a
686 687 * simple list implementation defeat the consolidator's strategy of merging
687 688 * least- and most-allocated slabs. Without an AVL tree to guarantee order, kmem
688 689 * needs another way to fix misorders to optimize its callback strategy. One
689 690 * approach is to periodically scan a limited number of slabs, advancing a
690 691 * marker to hold the current scan position, and to move extreme misorders to
691 692 * the front or back of the free list and to the front or back of the current
692 693 * scan range. By making consecutive scan ranges overlap by one slab, the least
693 694 * allocated slab in the current range can be carried along from the end of one
694 695 * scan to the start of the next.
695 696 *
696 697 * Maintaining partial slabs in an AVL tree relieves kmem of this additional
697 698 * task, however. Since most of the cache's activity is in the magazine layer,
698 699 * and allocations from the slab layer represent only a startup cost, the
699 700 * overhead of maintaining a balanced tree is not a significant concern compared
700 701 * to the opportunity of reducing complexity by eliminating the partial slab
701 702 * scanner just described. The overhead of an AVL tree is minimized by
702 703 * maintaining only partial slabs in the tree and keeping completely allocated
703 704 * slabs separately in a list. To avoid increasing the size of the slab
704 705 * structure the AVL linkage pointers are reused for the slab's list linkage,
705 706 * since the slab will always be either partial or complete, never stored both
706 707 * ways at the same time. To further minimize the overhead of the AVL tree the
707 708 * compare function that orders partial slabs by usage divides the range of
708 709 * allocated object counts into bins such that counts within the same bin are
709 710 * considered equal. Binning partial slabs makes it less likely that allocating
710 711 * or freeing a single object will change the slab's order, requiring a tree
711 712 * reinsertion (an avl_remove() followed by an avl_add(), both potentially
712 713 * requiring some rebalancing of the tree). Allocation counts closest to
713 714 * completely free and completely allocated are left unbinned (finely sorted) to
714 715 * better support the consolidator's strategy of merging slabs at either
715 716 * extreme.
716 717 *
717 718 * 3.1 Assessing Fragmentation and Selecting Candidate Slabs
718 719 *
719 720 * The consolidator piggybacks on the kmem maintenance thread and is called on
720 721 * the same interval as kmem_cache_update(), once per cache every fifteen
721 722 * seconds. kmem maintains a running count of unallocated objects in the slab
722 723 * layer (cache_bufslab). The consolidator checks whether that number exceeds
723 724 * 12.5% (1/8) of the total objects in the cache (cache_buftotal), and whether
724 725 * there is a significant number of slabs in the cache (arbitrarily a minimum
725 726 * 101 total slabs). Unused objects that have fallen out of the magazine layer's
726 727 * working set are included in the assessment, and magazines in the depot are
727 728 * reaped if those objects would lift cache_bufslab above the fragmentation
728 729 * threshold. Once the consolidator decides that a cache is fragmented, it looks
729 730 * for a candidate slab to reclaim, starting at the end of the partial slab free
730 731 * list and scanning backwards. At first the consolidator is choosy: only a slab
731 732 * with fewer than 12.5% (1/8) of its objects allocated qualifies (or else a
732 733 * single allocated object, regardless of percentage). If there is difficulty
733 734 * finding a candidate slab, kmem raises the allocation threshold incrementally,
734 735 * up to a maximum 87.5% (7/8), so that eventually the consolidator will reduce
735 736 * external fragmentation (unused objects on the free list) below 12.5% (1/8),
736 737 * even in the worst case of every slab in the cache being almost 7/8 allocated.
737 738 * The threshold can also be lowered incrementally when candidate slabs are easy
738 739 * to find, and the threshold is reset to the minimum 1/8 as soon as the cache
739 740 * is no longer fragmented.
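 *
 * Stated as code, the fragmentation test described above amounts to the
 * following simplified sketch (nslabs stands in for the cache's total slab
 * count, and the reclaimable-magazine adjustment is omitted):
 *
 *	// Fragmented if unused slab-layer buffers exceed 1/8 of all buffers
 *	// and the cache has a significant number of slabs.
 *	fragmented =
 *	    (cp->cache_bufslab * kmem_frag_denom >
 *	    cp->cache_buftotal * kmem_frag_numer) &&
 *	    (nslabs >= kmem_frag_minslabs);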
740 741 *
741 742 * 3.2 Generating Callbacks
742 743 *
743 744 * Once an eligible slab is chosen, a callback is generated for every allocated
744 745 * object on the slab, in the hope that the client will move everything off the
745 746 * slab and make it reclaimable. Objects selected as move destinations are
746 747 * chosen from slabs at the front of the free list. Assuming slabs in the ideal
747 748 * order (most allocated at the front, least allocated at the back) and a
748 749 * cooperative client, the consolidator will succeed in removing slabs from both
749 750 * ends of the free list, completely allocating on the one hand and completely
750 751 * freeing on the other. Objects selected as move destinations are allocated in
751 752 * the kmem maintenance thread where move requests are enqueued. A separate
752 753 * callback thread removes pending callbacks from the queue and calls the
753 754 * client. The separate thread ensures that client code (the move function) does
754 755 * not interfere with internal kmem maintenance tasks. A map of pending
755 756 * callbacks keyed by object address (the object to be moved) is checked to
756 757 * ensure that duplicate callbacks are not generated for the same object.
757 758 * Allocating the move destination (the object to move to) prevents subsequent
758 759 * callbacks from selecting the same destination as an earlier pending callback.
759 760 *
760 761 * Move requests can also be generated by kmem_cache_reap() when the system is
761 762 * desperate for memory and by kmem_cache_move_notify(), called by the client to
762 763 * notify kmem that a move refused earlier with KMEM_CBRC_LATER is now possible.
763 764 * The map of pending callbacks is protected by the same lock that protects the
764 765 * slab layer.
765 766 *
766 767 * When the system is desperate for memory, kmem does not bother to determine
767 768 * whether or not the cache exceeds the fragmentation threshold, but tries to
768 769 * consolidate as many slabs as possible. Normally, the consolidator chews
769 770 * slowly, one sparsely allocated slab at a time during each maintenance
770 771 * interval that the cache is fragmented. When desperate, the consolidator
771 772 * starts at the last partial slab and enqueues callbacks for every allocated
772 773 * object on every partial slab, working backwards until it reaches the first
773 774 * partial slab. The first partial slab, meanwhile, advances in pace with the
774 775 * consolidator as allocations to supply move destinations for the enqueued
775 776 * callbacks use up the highly allocated slabs at the front of the free list.
776 777 * Ideally, the overgrown free list collapses like an accordion, starting at
777 778 * both ends and ending at the center with a single partial slab.
778 779 *
779 780 * 3.3 Client Responses
780 781 *
781 782 * When the client returns KMEM_CBRC_NO in response to the move callback, kmem
782 783 * marks the slab that supplied the stuck object non-reclaimable and moves it to
783 784 * the front of the free list. The slab remains marked as long as it remains on the
784 785 * free list, and it appears more allocated to the partial slab compare function
785 786 * than any unmarked slab, no matter how many of its objects are allocated.
786 787 * Since even one immovable object ties up the entire slab, the goal is to
787 788 * completely allocate any slab that cannot be completely freed. kmem does not
788 789 * bother generating callbacks to move objects from a marked slab unless the
789 790 * system is desperate.
790 791 *
791 792 * When the client responds KMEM_CBRC_LATER, kmem increments a count for the
792 793 * slab. If the client responds LATER too many times, kmem disbelieves and
793 794 * treats the response as a NO. The count is cleared when the slab is taken off
794 795 * the partial slab list or when the client moves one of the slab's objects.
795 796 *
796 797 * 4. Observability
797 798 *
798 799 * A kmem cache's external fragmentation is best observed with 'mdb -k' using
799 800 * the ::kmem_slabs dcmd. For a complete description of the command, enter
800 801 * '::help kmem_slabs' at the mdb prompt.
801 802 */
802 803
803 804 #include <sys/kmem_impl.h>
804 805 #include <sys/vmem_impl.h>
805 806 #include <sys/param.h>
806 807 #include <sys/sysmacros.h>
807 808 #include <sys/vm.h>
808 809 #include <sys/proc.h>
809 810 #include <sys/tuneable.h>
810 811 #include <sys/systm.h>
811 812 #include <sys/cmn_err.h>
812 813 #include <sys/debug.h>
813 814 #include <sys/sdt.h>
814 815 #include <sys/mutex.h>
815 816 #include <sys/bitmap.h>
816 817 #include <sys/atomic.h>
817 818 #include <sys/kobj.h>
818 819 #include <sys/disp.h>
819 820 #include <vm/seg_kmem.h>
820 821 #include <sys/log.h>
821 822 #include <sys/callb.h>
822 823 #include <sys/taskq.h>
823 824 #include <sys/modctl.h>
824 825 #include <sys/reboot.h>
825 826 #include <sys/id32.h>
826 827 #include <sys/zone.h>
827 828 #include <sys/netstack.h>
828 829 #ifdef DEBUG
829 830 #include <sys/random.h>
830 831 #endif
831 832
832 833 extern void streams_msg_init(void);
833 834 extern int segkp_fromheap;
834 835 extern void segkp_cache_free(void);
835 836 extern int callout_init_done;
836 837
837 838 struct kmem_cache_kstat {
838 839 kstat_named_t kmc_buf_size;
839 840 kstat_named_t kmc_align;
840 841 kstat_named_t kmc_chunk_size;
841 842 kstat_named_t kmc_slab_size;
842 843 kstat_named_t kmc_alloc;
843 844 kstat_named_t kmc_alloc_fail;
844 845 kstat_named_t kmc_free;
845 846 kstat_named_t kmc_depot_alloc;
846 847 kstat_named_t kmc_depot_free;
847 848 kstat_named_t kmc_depot_contention;
848 849 kstat_named_t kmc_slab_alloc;
849 850 kstat_named_t kmc_slab_free;
850 851 kstat_named_t kmc_buf_constructed;
851 852 kstat_named_t kmc_buf_avail;
852 853 kstat_named_t kmc_buf_inuse;
853 854 kstat_named_t kmc_buf_total;
854 855 kstat_named_t kmc_buf_max;
855 856 kstat_named_t kmc_slab_create;
856 857 kstat_named_t kmc_slab_destroy;
857 858 kstat_named_t kmc_vmem_source;
858 859 kstat_named_t kmc_hash_size;
859 860 kstat_named_t kmc_hash_lookup_depth;
860 861 kstat_named_t kmc_hash_rescale;
861 862 kstat_named_t kmc_full_magazines;
862 863 kstat_named_t kmc_empty_magazines;
863 864 kstat_named_t kmc_magazine_size;
864 865 kstat_named_t kmc_reap; /* number of kmem_cache_reap() calls */
865 866 kstat_named_t kmc_defrag; /* attempts to defrag all partial slabs */
866 867 kstat_named_t kmc_scan; /* attempts to defrag one partial slab */
867 868 kstat_named_t kmc_move_callbacks; /* sum of yes, no, later, dn, dk */
868 869 kstat_named_t kmc_move_yes;
869 870 kstat_named_t kmc_move_no;
870 871 kstat_named_t kmc_move_later;
871 872 kstat_named_t kmc_move_dont_need;
872 873 kstat_named_t kmc_move_dont_know; /* obj unrecognized by client ... */
873 874 kstat_named_t kmc_move_hunt_found; /* ... but found in mag layer */
874 875 kstat_named_t kmc_move_slabs_freed; /* slabs freed by consolidator */
875 876 kstat_named_t kmc_move_reclaimable; /* buffers, if consolidator ran */
876 877 } kmem_cache_kstat = {
877 878 { "buf_size", KSTAT_DATA_UINT64 },
878 879 { "align", KSTAT_DATA_UINT64 },
879 880 { "chunk_size", KSTAT_DATA_UINT64 },
880 881 { "slab_size", KSTAT_DATA_UINT64 },
881 882 { "alloc", KSTAT_DATA_UINT64 },
882 883 { "alloc_fail", KSTAT_DATA_UINT64 },
883 884 { "free", KSTAT_DATA_UINT64 },
884 885 { "depot_alloc", KSTAT_DATA_UINT64 },
885 886 { "depot_free", KSTAT_DATA_UINT64 },
886 887 { "depot_contention", KSTAT_DATA_UINT64 },
887 888 { "slab_alloc", KSTAT_DATA_UINT64 },
888 889 { "slab_free", KSTAT_DATA_UINT64 },
889 890 { "buf_constructed", KSTAT_DATA_UINT64 },
890 891 { "buf_avail", KSTAT_DATA_UINT64 },
891 892 { "buf_inuse", KSTAT_DATA_UINT64 },
892 893 { "buf_total", KSTAT_DATA_UINT64 },
893 894 { "buf_max", KSTAT_DATA_UINT64 },
894 895 { "slab_create", KSTAT_DATA_UINT64 },
895 896 { "slab_destroy", KSTAT_DATA_UINT64 },
896 897 { "vmem_source", KSTAT_DATA_UINT64 },
897 898 { "hash_size", KSTAT_DATA_UINT64 },
898 899 { "hash_lookup_depth", KSTAT_DATA_UINT64 },
899 900 { "hash_rescale", KSTAT_DATA_UINT64 },
900 901 { "full_magazines", KSTAT_DATA_UINT64 },
901 902 { "empty_magazines", KSTAT_DATA_UINT64 },
902 903 { "magazine_size", KSTAT_DATA_UINT64 },
903 904 { "reap", KSTAT_DATA_UINT64 },
904 905 { "defrag", KSTAT_DATA_UINT64 },
905 906 { "scan", KSTAT_DATA_UINT64 },
906 907 { "move_callbacks", KSTAT_DATA_UINT64 },
907 908 { "move_yes", KSTAT_DATA_UINT64 },
908 909 { "move_no", KSTAT_DATA_UINT64 },
909 910 { "move_later", KSTAT_DATA_UINT64 },
910 911 { "move_dont_need", KSTAT_DATA_UINT64 },
911 912 { "move_dont_know", KSTAT_DATA_UINT64 },
912 913 { "move_hunt_found", KSTAT_DATA_UINT64 },
913 914 { "move_slabs_freed", KSTAT_DATA_UINT64 },
914 915 { "move_reclaimable", KSTAT_DATA_UINT64 },
915 916 };
916 917
917 918 static kmutex_t kmem_cache_kstat_lock;
918 919
919 920 /*
920 921 * The default set of caches to back kmem_alloc().
921 922 * These sizes should be reevaluated periodically.
922 923 *
923 924 * We want allocations that are multiples of the coherency granularity
924 925 * (64 bytes) to be satisfied from a cache which is a multiple of 64
925 926 * bytes, so that it will be 64-byte aligned. For all multiples of 64,
926 927 * the next kmem_cache_size greater than or equal to it must be a
927 928 * multiple of 64.
928 929 *
929 930 * We split the table into two sections: size <= 4k and size > 4k. This
930 931 * saves a lot of space and cache footprint in our cache tables.
931 932 */
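/*
 * For example, 8192 / 7 = 1170, and P2ALIGN(1170, 64) rounds that down to
 * 1152, a multiple of 64, preserving the alignment guarantee noted above.
 */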
932 933 static const int kmem_alloc_sizes[] = {
933 934 1 * 8,
934 935 2 * 8,
935 936 3 * 8,
936 937 4 * 8, 5 * 8, 6 * 8, 7 * 8,
937 938 4 * 16, 5 * 16, 6 * 16, 7 * 16,
938 939 4 * 32, 5 * 32, 6 * 32, 7 * 32,
939 940 4 * 64, 5 * 64, 6 * 64, 7 * 64,
940 941 4 * 128, 5 * 128, 6 * 128, 7 * 128,
941 942 P2ALIGN(8192 / 7, 64),
942 943 P2ALIGN(8192 / 6, 64),
943 944 P2ALIGN(8192 / 5, 64),
944 945 P2ALIGN(8192 / 4, 64),
945 946 P2ALIGN(8192 / 3, 64),
946 947 P2ALIGN(8192 / 2, 64),
947 948 };
948 949
949 950 static const int kmem_big_alloc_sizes[] = {
950 951 2 * 4096, 3 * 4096,
951 952 2 * 8192, 3 * 8192,
952 953 4 * 8192, 5 * 8192, 6 * 8192, 7 * 8192,
953 954 8 * 8192, 9 * 8192, 10 * 8192, 11 * 8192,
954 955 12 * 8192, 13 * 8192, 14 * 8192, 15 * 8192,
955 956 16 * 8192
956 957 };
957 958
958 959 #define KMEM_MAXBUF 4096
959 960 #define KMEM_BIG_MAXBUF_32BIT 32768
960 961 #define KMEM_BIG_MAXBUF 131072
961 962
962 963 #define KMEM_BIG_MULTIPLE 4096 /* big_alloc_sizes must be a multiple */
963 964 #define KMEM_BIG_SHIFT 12 /* lg(KMEM_BIG_MULTIPLE) */
964 965
965 966 static kmem_cache_t *kmem_alloc_table[KMEM_MAXBUF >> KMEM_ALIGN_SHIFT];
966 967 static kmem_cache_t *kmem_big_alloc_table[KMEM_BIG_MAXBUF >> KMEM_BIG_SHIFT];
967 968
968 969 #define KMEM_ALLOC_TABLE_MAX (KMEM_MAXBUF >> KMEM_ALIGN_SHIFT)
969 970 static size_t kmem_big_alloc_table_max = 0; /* # of filled elements */
970 971
971 972 static kmem_magtype_t kmem_magtype[] = {
972 973 { 1, 8, 3200, 65536 },
973 974 { 3, 16, 256, 32768 },
974 975 { 7, 32, 64, 16384 },
975 976 { 15, 64, 0, 8192 },
976 977 { 31, 64, 0, 4096 },
977 978 { 47, 64, 0, 2048 },
978 979 { 63, 64, 0, 1024 },
979 980 { 95, 64, 0, 512 },
980 981 { 143, 64, 0, 0 },
981 982 };
982 983
983 984 static uint32_t kmem_reaping;
984 985 static uint32_t kmem_reaping_idspace;
985 986
986 987 /*
987 988 * kmem tunables
988 989 */
989 990 clock_t kmem_reap_interval; /* cache reaping rate [15 * HZ ticks] */
990 991 int kmem_depot_contention = 3; /* max failed tryenters per real interval */
991 992 pgcnt_t kmem_reapahead = 0; /* start reaping N pages before pageout */
992 993 int kmem_panic = 1; /* whether to panic on error */
993 994 int kmem_logging = 1; /* kmem_log_enter() override */
994 995 uint32_t kmem_mtbf = 0; /* mean time between failures [default: off] */
995 996 size_t kmem_transaction_log_size; /* transaction log size [2% of memory] */
996 997 size_t kmem_content_log_size; /* content log size [2% of memory] */
997 998 size_t kmem_failure_log_size; /* failure log [4 pages per CPU] */
998 999 size_t kmem_slab_log_size; /* slab create log [4 pages per CPU] */
999 1000 size_t kmem_content_maxsave = 256; /* KMF_CONTENTS max bytes to log */
1000 1001 size_t kmem_lite_minsize = 0; /* minimum buffer size for KMF_LITE */
1001 1002 size_t kmem_lite_maxalign = 1024; /* maximum buffer alignment for KMF_LITE */
1002 1003 int kmem_lite_pcs = 4; /* number of PCs to store in KMF_LITE mode */
1003 1004 size_t kmem_maxverify; /* maximum bytes to inspect in debug routines */
1004 1005 size_t kmem_minfirewall; /* hardware-enforced redzone threshold */
1005 1006
1006 1007 #ifdef _LP64
1007 1008 size_t kmem_max_cached = KMEM_BIG_MAXBUF; /* maximum kmem_alloc cache */
1008 1009 #else
1009 1010 size_t kmem_max_cached = KMEM_BIG_MAXBUF_32BIT; /* maximum kmem_alloc cache */
1010 1011 #endif
1011 1012
1012 1013 #ifdef DEBUG
1013 1014 int kmem_flags = KMF_AUDIT | KMF_DEADBEEF | KMF_REDZONE | KMF_CONTENTS;
1014 1015 #else
1015 1016 int kmem_flags = 0;
1016 1017 #endif
1017 1018 int kmem_ready;
1018 1019
1019 1020 static kmem_cache_t *kmem_slab_cache;
1020 1021 static kmem_cache_t *kmem_bufctl_cache;
1021 1022 static kmem_cache_t *kmem_bufctl_audit_cache;
1022 1023
1023 1024 static kmutex_t kmem_cache_lock; /* inter-cache linkage only */
1024 1025 static list_t kmem_caches;
1025 1026
1026 1027 static taskq_t *kmem_taskq;
1027 1028 static kmutex_t kmem_flags_lock;
1028 1029 static vmem_t *kmem_metadata_arena;
1029 1030 static vmem_t *kmem_msb_arena; /* arena for metadata caches */
1030 1031 static vmem_t *kmem_cache_arena;
1031 1032 static vmem_t *kmem_hash_arena;
1032 1033 static vmem_t *kmem_log_arena;
1033 1034 static vmem_t *kmem_oversize_arena;
1034 1035 static vmem_t *kmem_va_arena;
1035 1036 static vmem_t *kmem_default_arena;
1036 1037 static vmem_t *kmem_firewall_va_arena;
1037 1038 static vmem_t *kmem_firewall_arena;
1038 1039
1039 1040 /*
1040 1041 * Define KMEM_STATS to turn on statistic gathering. By default, it is only
1041 1042 * turned on when DEBUG is also defined.
1042 1043 */
1043 1044 #ifdef DEBUG
1044 1045 #define KMEM_STATS
1045 1046 #endif /* DEBUG */
1046 1047
1047 1048 #ifdef KMEM_STATS
1048 1049 #define KMEM_STAT_ADD(stat) ((stat)++)
1049 1050 #define KMEM_STAT_COND_ADD(cond, stat) ((void) (!(cond) || (stat)++))
1050 1051 #else
1051 1052 #define KMEM_STAT_ADD(stat) /* nothing */
1052 1053 #define KMEM_STAT_COND_ADD(cond, stat) /* nothing */
1053 1054 #endif /* KMEM_STATS */
1054 1055
1055 1056 /*
1056 1057 * kmem slab consolidator thresholds (tunables)
1057 1058 */
1058 1059 size_t kmem_frag_minslabs = 101; /* minimum total slabs */
1059 1060 size_t kmem_frag_numer = 1; /* free buffers (numerator) */
1060 1061 size_t kmem_frag_denom = KMEM_VOID_FRACTION; /* buffers (denominator) */
1061 1062 /*
1062 1063 * Maximum number of slabs from which to move buffers during a single
1063 1064 * maintenance interval while the system is not low on memory.
1064 1065 */
1065 1066 size_t kmem_reclaim_max_slabs = 1;
1066 1067 /*
1067 1068 * Number of slabs to scan backwards from the end of the partial slab list
1068 1069 * when searching for buffers to relocate.
1069 1070 */
1070 1071 size_t kmem_reclaim_scan_range = 12;
1071 1072
1072 1073 #ifdef KMEM_STATS
1073 1074 static struct {
1074 1075 uint64_t kms_callbacks;
1075 1076 uint64_t kms_yes;
1076 1077 uint64_t kms_no;
1077 1078 uint64_t kms_later;
1078 1079 uint64_t kms_dont_need;
1079 1080 uint64_t kms_dont_know;
1080 1081 uint64_t kms_hunt_found_mag;
1081 1082 uint64_t kms_hunt_found_slab;
1082 1083 uint64_t kms_hunt_alloc_fail;
1083 1084 uint64_t kms_hunt_lucky;
1084 1085 uint64_t kms_notify;
1085 1086 uint64_t kms_notify_callbacks;
1086 1087 uint64_t kms_disbelief;
1087 1088 uint64_t kms_already_pending;
1088 1089 uint64_t kms_callback_alloc_fail;
1089 1090 uint64_t kms_callback_taskq_fail;
1090 1091 uint64_t kms_endscan_slab_dead;
1091 1092 uint64_t kms_endscan_slab_destroyed;
1092 1093 uint64_t kms_endscan_nomem;
1093 1094 uint64_t kms_endscan_refcnt_changed;
1094 1095 uint64_t kms_endscan_nomove_changed;
1095 1096 uint64_t kms_endscan_freelist;
1096 1097 uint64_t kms_avl_update;
1097 1098 uint64_t kms_avl_noupdate;
1098 1099 uint64_t kms_no_longer_reclaimable;
1099 1100 uint64_t kms_notify_no_longer_reclaimable;
1100 1101 uint64_t kms_notify_slab_dead;
1101 1102 uint64_t kms_notify_slab_destroyed;
1102 1103 uint64_t kms_alloc_fail;
1103 1104 uint64_t kms_constructor_fail;
1104 1105 uint64_t kms_dead_slabs_freed;
1105 1106 uint64_t kms_defrags;
1106 1107 uint64_t kms_scans;
1107 1108 uint64_t kms_scan_depot_ws_reaps;
1108 1109 uint64_t kms_debug_reaps;
1109 1110 uint64_t kms_debug_scans;
1110 1111 } kmem_move_stats;
1111 1112 #endif /* KMEM_STATS */
1112 1113
1113 1114 /* consolidator knobs */
1114 1115 static boolean_t kmem_move_noreap;
1115 1116 static boolean_t kmem_move_blocked;
1116 1117 static boolean_t kmem_move_fulltilt;
1117 1118 static boolean_t kmem_move_any_partial;
1118 1119
1119 1120 #ifdef DEBUG
1120 1121 /*
1121 1122 * kmem consolidator debug tunables:
1122 1123 * Ensure code coverage by occasionally running the consolidator even when the
1123 1124 * caches are not fragmented (they may never be). These intervals are mean time
1124 1125 * in cache maintenance intervals (kmem_cache_update).
1125 1126 */
1126 1127 uint32_t kmem_mtb_move = 60; /* defrag 1 slab (~15min) */
1127 1128 uint32_t kmem_mtb_reap = 1800; /* defrag all slabs (~7.5hrs) */
1128 1129 #endif /* DEBUG */
1129 1130
1130 1131 static kmem_cache_t *kmem_defrag_cache;
1131 1132 static kmem_cache_t *kmem_move_cache;
1132 1133 static taskq_t *kmem_move_taskq;
1133 1134
1134 1135 static void kmem_cache_scan(kmem_cache_t *);
1135 1136 static void kmem_cache_defrag(kmem_cache_t *);
1136 1137 static void kmem_slab_prefill(kmem_cache_t *, kmem_slab_t *);
1137 1138
1138 1139
1139 1140 kmem_log_header_t *kmem_transaction_log;
1140 1141 kmem_log_header_t *kmem_content_log;
1141 1142 kmem_log_header_t *kmem_failure_log;
1142 1143 kmem_log_header_t *kmem_slab_log;
1143 1144
1144 1145 static int kmem_lite_count; /* # of PCs in kmem_buftag_lite_t */
1145 1146
1146 1147 #define KMEM_BUFTAG_LITE_ENTER(bt, count, caller) \
1147 1148 if ((count) > 0) { \
1148 1149 pc_t *_s = ((kmem_buftag_lite_t *)(bt))->bt_history; \
1149 1150 pc_t *_e; \
1150 1151 /* memmove() the old entries down one notch */ \
1151 1152 for (_e = &_s[(count) - 1]; _e > _s; _e--) \
1152 1153 *_e = *(_e - 1); \
1153 1154 *_s = (uintptr_t)(caller); \
1154 1155 }
1155 1156
1156 1157 #define KMERR_MODIFIED 0 /* buffer modified while on freelist */
1157 1158 #define KMERR_REDZONE 1 /* redzone violation (write past end of buf) */
1158 1159 #define KMERR_DUPFREE 2 /* freed a buffer twice */
1159 1160 #define KMERR_BADADDR 3 /* freed a bad (unallocated) address */
1160 1161 #define KMERR_BADBUFTAG 4 /* buftag corrupted */
1161 1162 #define KMERR_BADBUFCTL 5 /* bufctl corrupted */
1162 1163 #define KMERR_BADCACHE 6 /* freed a buffer to the wrong cache */
1163 1164 #define KMERR_BADSIZE 7 /* alloc size != free size */
1164 1165 #define KMERR_BADBASE 8 /* buffer base address wrong */
1165 1166
1166 1167 struct {
1167 1168 hrtime_t kmp_timestamp; /* timestamp of panic */
1168 1169 int kmp_error; /* type of kmem error */
1169 1170 void *kmp_buffer; /* buffer that induced panic */
1170 1171 void *kmp_realbuf; /* real start address for buffer */
1171 1172 kmem_cache_t *kmp_cache; /* buffer's cache according to client */
1172 1173 kmem_cache_t *kmp_realcache; /* actual cache containing buffer */
1173 1174 	kmem_slab_t	*kmp_slab;	/* slab according to kmem_findslab() */
1174 1175 kmem_bufctl_t *kmp_bufctl; /* bufctl */
1175 1176 } kmem_panic_info;
1176 1177
1177 1178
1178 1179 static void
1179 1180 copy_pattern(uint64_t pattern, void *buf_arg, size_t size)
1180 1181 {
1181 1182 uint64_t *bufend = (uint64_t *)((char *)buf_arg + size);
1182 1183 uint64_t *buf = buf_arg;
1183 1184
1184 1185 while (buf < bufend)
1185 1186 *buf++ = pattern;
1186 1187 }
1187 1188
1188 1189 static void *
1189 1190 verify_pattern(uint64_t pattern, void *buf_arg, size_t size)
1190 1191 {
1191 1192 uint64_t *bufend = (uint64_t *)((char *)buf_arg + size);
1192 1193 uint64_t *buf;
1193 1194
1194 1195 for (buf = buf_arg; buf < bufend; buf++)
1195 1196 if (*buf != pattern)
1196 1197 return (buf);
1197 1198 return (NULL);
1198 1199 }
1199 1200
1200 1201 static void *
1201 1202 verify_and_copy_pattern(uint64_t old, uint64_t new, void *buf_arg, size_t size)
1202 1203 {
1203 1204 uint64_t *bufend = (uint64_t *)((char *)buf_arg + size);
1204 1205 uint64_t *buf;
1205 1206
1206 1207 for (buf = buf_arg; buf < bufend; buf++) {
1207 1208 if (*buf != old) {
1208 1209 copy_pattern(old, buf_arg,
1209 1210 (char *)buf - (char *)buf_arg);
1210 1211 return (buf);
1211 1212 }
1212 1213 *buf = new;
1213 1214 }
1214 1215
1215 1216 return (NULL);
1216 1217 }
1217 1218
1218 1219 static void
1219 1220 kmem_cache_applyall(void (*func)(kmem_cache_t *), taskq_t *tq, int tqflag)
1220 1221 {
1221 1222 kmem_cache_t *cp;
1222 1223
1223 1224 mutex_enter(&kmem_cache_lock);
1224 1225 for (cp = list_head(&kmem_caches); cp != NULL;
1225 1226 cp = list_next(&kmem_caches, cp))
1226 1227 if (tq != NULL)
1227 1228 (void) taskq_dispatch(tq, (task_func_t *)func, cp,
1228 1229 tqflag);
1229 1230 else
1230 1231 func(cp);
1231 1232 mutex_exit(&kmem_cache_lock);
1232 1233 }
1233 1234
1234 1235 static void
1235 1236 kmem_cache_applyall_id(void (*func)(kmem_cache_t *), taskq_t *tq, int tqflag)
1236 1237 {
1237 1238 kmem_cache_t *cp;
1238 1239
1239 1240 mutex_enter(&kmem_cache_lock);
1240 1241 for (cp = list_head(&kmem_caches); cp != NULL;
1241 1242 cp = list_next(&kmem_caches, cp)) {
1242 1243 if (!(cp->cache_cflags & KMC_IDENTIFIER))
1243 1244 continue;
1244 1245 if (tq != NULL)
1245 1246 (void) taskq_dispatch(tq, (task_func_t *)func, cp,
1246 1247 tqflag);
1247 1248 else
1248 1249 func(cp);
1249 1250 }
1250 1251 mutex_exit(&kmem_cache_lock);
1251 1252 }
1252 1253
1253 1254 /*
1254 1255 * Debugging support. Given a buffer address, find its slab.
1255 1256 */
1256 1257 static kmem_slab_t *
1257 1258 kmem_findslab(kmem_cache_t *cp, void *buf)
1258 1259 {
1259 1260 kmem_slab_t *sp;
1260 1261
1261 1262 mutex_enter(&cp->cache_lock);
1262 1263 for (sp = list_head(&cp->cache_complete_slabs); sp != NULL;
1263 1264 sp = list_next(&cp->cache_complete_slabs, sp)) {
1264 1265 if (KMEM_SLAB_MEMBER(sp, buf)) {
1265 1266 mutex_exit(&cp->cache_lock);
1266 1267 return (sp);
1267 1268 }
1268 1269 }
1269 1270 for (sp = avl_first(&cp->cache_partial_slabs); sp != NULL;
1270 1271 sp = AVL_NEXT(&cp->cache_partial_slabs, sp)) {
1271 1272 if (KMEM_SLAB_MEMBER(sp, buf)) {
1272 1273 mutex_exit(&cp->cache_lock);
1273 1274 return (sp);
1274 1275 }
1275 1276 }
1276 1277 mutex_exit(&cp->cache_lock);
1277 1278
1278 1279 return (NULL);
1279 1280 }
1280 1281
1281 1282 static void
1282 1283 kmem_error(int error, kmem_cache_t *cparg, void *bufarg)
1283 1284 {
1284 1285 kmem_buftag_t *btp = NULL;
1285 1286 kmem_bufctl_t *bcp = NULL;
1286 1287 kmem_cache_t *cp = cparg;
1287 1288 kmem_slab_t *sp;
1288 1289 uint64_t *off;
1289 1290 void *buf = bufarg;
1290 1291
1291 1292 kmem_logging = 0; /* stop logging when a bad thing happens */
1292 1293
1293 1294 kmem_panic_info.kmp_timestamp = gethrtime();
1294 1295
1295 1296 sp = kmem_findslab(cp, buf);
1296 1297 if (sp == NULL) {
1297 1298 for (cp = list_tail(&kmem_caches); cp != NULL;
1298 1299 cp = list_prev(&kmem_caches, cp)) {
1299 1300 if ((sp = kmem_findslab(cp, buf)) != NULL)
1300 1301 break;
1301 1302 }
1302 1303 }
1303 1304
1304 1305 if (sp == NULL) {
1305 1306 cp = NULL;
1306 1307 error = KMERR_BADADDR;
1307 1308 } else {
1308 1309 if (cp != cparg)
1309 1310 error = KMERR_BADCACHE;
1310 1311 else
1311 1312 buf = (char *)bufarg - ((uintptr_t)bufarg -
1312 1313 (uintptr_t)sp->slab_base) % cp->cache_chunksize;
1313 1314 if (buf != bufarg)
1314 1315 error = KMERR_BADBASE;
1315 1316 if (cp->cache_flags & KMF_BUFTAG)
1316 1317 btp = KMEM_BUFTAG(cp, buf);
1317 1318 if (cp->cache_flags & KMF_HASH) {
1318 1319 mutex_enter(&cp->cache_lock);
1319 1320 for (bcp = *KMEM_HASH(cp, buf); bcp; bcp = bcp->bc_next)
1320 1321 if (bcp->bc_addr == buf)
1321 1322 break;
1322 1323 mutex_exit(&cp->cache_lock);
1323 1324 if (bcp == NULL && btp != NULL)
1324 1325 bcp = btp->bt_bufctl;
1325 1326 if (kmem_findslab(cp->cache_bufctl_cache, bcp) ==
1326 1327 NULL || P2PHASE((uintptr_t)bcp, KMEM_ALIGN) ||
1327 1328 bcp->bc_addr != buf) {
1328 1329 error = KMERR_BADBUFCTL;
1329 1330 bcp = NULL;
1330 1331 }
1331 1332 }
1332 1333 }
1333 1334
1334 1335 kmem_panic_info.kmp_error = error;
1335 1336 kmem_panic_info.kmp_buffer = bufarg;
1336 1337 kmem_panic_info.kmp_realbuf = buf;
1337 1338 kmem_panic_info.kmp_cache = cparg;
1338 1339 kmem_panic_info.kmp_realcache = cp;
1339 1340 kmem_panic_info.kmp_slab = sp;
1340 1341 kmem_panic_info.kmp_bufctl = bcp;
1341 1342
1342 1343 printf("kernel memory allocator: ");
1343 1344
1344 1345 switch (error) {
1345 1346
1346 1347 case KMERR_MODIFIED:
1347 1348 printf("buffer modified after being freed\n");
1348 1349 off = verify_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify);
1349 1350 if (off == NULL) /* shouldn't happen */
1350 1351 off = buf;
1351 1352 printf("modification occurred at offset 0x%lx "
1352 1353 "(0x%llx replaced by 0x%llx)\n",
1353 1354 (uintptr_t)off - (uintptr_t)buf,
1354 1355 (longlong_t)KMEM_FREE_PATTERN, (longlong_t)*off);
1355 1356 break;
1356 1357
1357 1358 case KMERR_REDZONE:
1358 1359 printf("redzone violation: write past end of buffer\n");
1359 1360 break;
1360 1361
1361 1362 case KMERR_BADADDR:
1362 1363 printf("invalid free: buffer not in cache\n");
1363 1364 break;
1364 1365
1365 1366 case KMERR_DUPFREE:
1366 1367 printf("duplicate free: buffer freed twice\n");
1367 1368 break;
1368 1369
1369 1370 case KMERR_BADBUFTAG:
1370 1371 printf("boundary tag corrupted\n");
1371 1372 printf("bcp ^ bxstat = %lx, should be %lx\n",
1372 1373 (intptr_t)btp->bt_bufctl ^ btp->bt_bxstat,
1373 1374 KMEM_BUFTAG_FREE);
1374 1375 break;
1375 1376
1376 1377 case KMERR_BADBUFCTL:
1377 1378 printf("bufctl corrupted\n");
1378 1379 break;
1379 1380
1380 1381 case KMERR_BADCACHE:
1381 1382 printf("buffer freed to wrong cache\n");
1382 1383 printf("buffer was allocated from %s,\n", cp->cache_name);
1383 1384 printf("caller attempting free to %s.\n", cparg->cache_name);
1384 1385 break;
1385 1386
1386 1387 case KMERR_BADSIZE:
1387 1388 printf("bad free: free size (%u) != alloc size (%u)\n",
1388 1389 KMEM_SIZE_DECODE(((uint32_t *)btp)[0]),
1389 1390 KMEM_SIZE_DECODE(((uint32_t *)btp)[1]));
1390 1391 break;
1391 1392
1392 1393 case KMERR_BADBASE:
1393 1394 printf("bad free: free address (%p) != alloc address (%p)\n",
1394 1395 bufarg, buf);
1395 1396 break;
1396 1397 }
1397 1398
1398 1399 printf("buffer=%p bufctl=%p cache: %s\n",
1399 1400 bufarg, (void *)bcp, cparg->cache_name);
1400 1401
1401 1402 if (bcp != NULL && (cp->cache_flags & KMF_AUDIT) &&
1402 1403 error != KMERR_BADBUFCTL) {
1403 1404 int d;
1404 1405 timestruc_t ts;
1405 1406 kmem_bufctl_audit_t *bcap = (kmem_bufctl_audit_t *)bcp;
1406 1407
1407 1408 hrt2ts(kmem_panic_info.kmp_timestamp - bcap->bc_timestamp, &ts);
1408 1409 printf("previous transaction on buffer %p:\n", buf);
1409 1410 printf("thread=%p time=T-%ld.%09ld slab=%p cache: %s\n",
1410 1411 (void *)bcap->bc_thread, ts.tv_sec, ts.tv_nsec,
1411 1412 (void *)sp, cp->cache_name);
1412 1413 for (d = 0; d < MIN(bcap->bc_depth, KMEM_STACK_DEPTH); d++) {
1413 1414 ulong_t off;
1414 1415 char *sym = kobj_getsymname(bcap->bc_stack[d], &off);
1415 1416 printf("%s+%lx\n", sym ? sym : "?", off);
1416 1417 }
1417 1418 }
1418 1419 if (kmem_panic > 0)
1419 1420 panic("kernel heap corruption detected");
1420 1421 if (kmem_panic == 0)
1421 1422 debug_enter(NULL);
1422 1423 kmem_logging = 1; /* resume logging */
1423 1424 }
1424 1425
1425 1426 static kmem_log_header_t *
1426 1427 kmem_log_init(size_t logsize)
1427 1428 {
1428 1429 kmem_log_header_t *lhp;
1429 1430 int nchunks = 4 * max_ncpus;
1430 1431 size_t lhsize = (size_t)&((kmem_log_header_t *)0)->lh_cpu[max_ncpus];
1431 1432 int i;
1432 1433
1433 1434 /*
1434 1435 * Make sure that lhp->lh_cpu[] is nicely aligned
1435 1436 * to prevent false sharing of cache lines.
1436 1437 */
1437 1438 lhsize = P2ROUNDUP(lhsize, KMEM_ALIGN);
1438 1439 lhp = vmem_xalloc(kmem_log_arena, lhsize, 64, P2NPHASE(lhsize, 64), 0,
1439 1440 NULL, NULL, VM_SLEEP);
1440 1441 bzero(lhp, lhsize);
1441 1442
1442 1443 mutex_init(&lhp->lh_lock, NULL, MUTEX_DEFAULT, NULL);
1443 1444 lhp->lh_nchunks = nchunks;
1444 1445 lhp->lh_chunksize = P2ROUNDUP(logsize / nchunks + 1, PAGESIZE);
1445 1446 lhp->lh_base = vmem_alloc(kmem_log_arena,
1446 1447 lhp->lh_chunksize * nchunks, VM_SLEEP);
1447 1448 lhp->lh_free = vmem_alloc(kmem_log_arena,
1448 1449 nchunks * sizeof (int), VM_SLEEP);
1449 1450 bzero(lhp->lh_base, lhp->lh_chunksize * nchunks);
1450 1451
1451 1452 for (i = 0; i < max_ncpus; i++) {
1452 1453 kmem_cpu_log_header_t *clhp = &lhp->lh_cpu[i];
1453 1454 mutex_init(&clhp->clh_lock, NULL, MUTEX_DEFAULT, NULL);
1454 1455 clhp->clh_chunk = i;
1455 1456 }
1456 1457
1457 1458 for (i = max_ncpus; i < nchunks; i++)
1458 1459 lhp->lh_free[i] = i;
1459 1460
1460 1461 lhp->lh_head = max_ncpus;
1461 1462 lhp->lh_tail = 0;
1462 1463
1463 1464 return (lhp);
1464 1465 }
1465 1466
1466 1467 static void *
1467 1468 kmem_log_enter(kmem_log_header_t *lhp, void *data, size_t size)
1468 1469 {
1469 1470 void *logspace;
1470 1471 kmem_cpu_log_header_t *clhp = &lhp->lh_cpu[CPU->cpu_seqid];
1471 1472
1472 1473 if (lhp == NULL || kmem_logging == 0 || panicstr)
1473 1474 return (NULL);
1474 1475
1475 1476 mutex_enter(&clhp->clh_lock);
1476 1477 clhp->clh_hits++;
1477 1478 if (size > clhp->clh_avail) {
1478 1479 mutex_enter(&lhp->lh_lock);
1479 1480 lhp->lh_hits++;
1480 1481 lhp->lh_free[lhp->lh_tail] = clhp->clh_chunk;
1481 1482 lhp->lh_tail = (lhp->lh_tail + 1) % lhp->lh_nchunks;
1482 1483 clhp->clh_chunk = lhp->lh_free[lhp->lh_head];
1483 1484 lhp->lh_head = (lhp->lh_head + 1) % lhp->lh_nchunks;
1484 1485 clhp->clh_current = lhp->lh_base +
1485 1486 clhp->clh_chunk * lhp->lh_chunksize;
1486 1487 clhp->clh_avail = lhp->lh_chunksize;
1487 1488 if (size > lhp->lh_chunksize)
1488 1489 size = lhp->lh_chunksize;
1489 1490 mutex_exit(&lhp->lh_lock);
1490 1491 }
1491 1492 logspace = clhp->clh_current;
1492 1493 clhp->clh_current += size;
1493 1494 clhp->clh_avail -= size;
1494 1495 bcopy(data, logspace, size);
1495 1496 mutex_exit(&clhp->clh_lock);
1496 1497 return (logspace);
1497 1498 }
1498 1499
1499 1500 #define KMEM_AUDIT(lp, cp, bcp) \
1500 1501 { \
1501 1502 kmem_bufctl_audit_t *_bcp = (kmem_bufctl_audit_t *)(bcp); \
1502 1503 _bcp->bc_timestamp = gethrtime(); \
1503 1504 _bcp->bc_thread = curthread; \
1504 1505 _bcp->bc_depth = getpcstack(_bcp->bc_stack, KMEM_STACK_DEPTH); \
1505 1506 _bcp->bc_lastlog = kmem_log_enter((lp), _bcp, sizeof (*_bcp)); \
1506 1507 }
1507 1508
1508 1509 static void
1509 1510 kmem_log_event(kmem_log_header_t *lp, kmem_cache_t *cp,
1510 1511 kmem_slab_t *sp, void *addr)
1511 1512 {
1512 1513 kmem_bufctl_audit_t bca;
1513 1514
1514 1515 bzero(&bca, sizeof (kmem_bufctl_audit_t));
1515 1516 bca.bc_addr = addr;
1516 1517 bca.bc_slab = sp;
1517 1518 bca.bc_cache = cp;
1518 1519 KMEM_AUDIT(lp, cp, &bca);
1519 1520 }
1520 1521
1521 1522 /*
1522 1523 * Create a new slab for cache cp.
1523 1524 */
1524 1525 static kmem_slab_t *
1525 1526 kmem_slab_create(kmem_cache_t *cp, int kmflag)
1526 1527 {
1527 1528 size_t slabsize = cp->cache_slabsize;
1528 1529 size_t chunksize = cp->cache_chunksize;
1529 1530 int cache_flags = cp->cache_flags;
1530 1531 size_t color, chunks;
1531 1532 char *buf, *slab;
1532 1533 kmem_slab_t *sp;
1533 1534 kmem_bufctl_t *bcp;
1534 1535 vmem_t *vmp = cp->cache_arena;
1535 1536
1536 1537 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock));
1537 1538
1538 1539 color = cp->cache_color + cp->cache_align;
1539 1540 if (color > cp->cache_maxcolor)
1540 1541 color = cp->cache_mincolor;
1541 1542 cp->cache_color = color;
1542 1543
1543 1544 slab = vmem_alloc(vmp, slabsize, kmflag & KM_VMFLAGS);
1544 1545
1545 1546 if (slab == NULL)
1546 1547 goto vmem_alloc_failure;
1547 1548
1548 1549 ASSERT(P2PHASE((uintptr_t)slab, vmp->vm_quantum) == 0);
1549 1550
1550 1551 /*
1551 1552 * Reverify what was already checked in kmem_cache_set_move(), since the
1552 1553 * consolidator depends (for correctness) on slabs being initialized
1553 1554 * with the 0xbaddcafe memory pattern (setting a low order bit usable by
1554 1555 * clients to distinguish uninitialized memory from known objects).
1555 1556 */
1556 1557 ASSERT((cp->cache_move == NULL) || !(cp->cache_cflags & KMC_NOTOUCH));
1557 1558 if (!(cp->cache_cflags & KMC_NOTOUCH))
1558 1559 copy_pattern(KMEM_UNINITIALIZED_PATTERN, slab, slabsize);
1559 1560
1560 1561 if (cache_flags & KMF_HASH) {
1561 1562 if ((sp = kmem_cache_alloc(kmem_slab_cache, kmflag)) == NULL)
1562 1563 goto slab_alloc_failure;
1563 1564 chunks = (slabsize - color) / chunksize;
1564 1565 } else {
1565 1566 sp = KMEM_SLAB(cp, slab);
1566 1567 chunks = (slabsize - sizeof (kmem_slab_t) - color) / chunksize;
1567 1568 }
1568 1569
1569 1570 sp->slab_cache = cp;
1570 1571 sp->slab_head = NULL;
1571 1572 sp->slab_refcnt = 0;
1572 1573 sp->slab_base = buf = slab + color;
1573 1574 sp->slab_chunks = chunks;
1574 1575 sp->slab_stuck_offset = (uint32_t)-1;
1575 1576 sp->slab_later_count = 0;
1576 1577 sp->slab_flags = 0;
1577 1578
1578 1579 ASSERT(chunks > 0);
1579 1580 while (chunks-- != 0) {
1580 1581 if (cache_flags & KMF_HASH) {
1581 1582 bcp = kmem_cache_alloc(cp->cache_bufctl_cache, kmflag);
1582 1583 if (bcp == NULL)
1583 1584 goto bufctl_alloc_failure;
1584 1585 if (cache_flags & KMF_AUDIT) {
1585 1586 kmem_bufctl_audit_t *bcap =
1586 1587 (kmem_bufctl_audit_t *)bcp;
1587 1588 bzero(bcap, sizeof (kmem_bufctl_audit_t));
1588 1589 bcap->bc_cache = cp;
1589 1590 }
1590 1591 bcp->bc_addr = buf;
1591 1592 bcp->bc_slab = sp;
1592 1593 } else {
1593 1594 bcp = KMEM_BUFCTL(cp, buf);
1594 1595 }
1595 1596 if (cache_flags & KMF_BUFTAG) {
1596 1597 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf);
1597 1598 btp->bt_redzone = KMEM_REDZONE_PATTERN;
1598 1599 btp->bt_bufctl = bcp;
1599 1600 btp->bt_bxstat = (intptr_t)bcp ^ KMEM_BUFTAG_FREE;
1600 1601 if (cache_flags & KMF_DEADBEEF) {
1601 1602 copy_pattern(KMEM_FREE_PATTERN, buf,
1602 1603 cp->cache_verify);
1603 1604 }
1604 1605 }
1605 1606 bcp->bc_next = sp->slab_head;
1606 1607 sp->slab_head = bcp;
1607 1608 buf += chunksize;
1608 1609 }
1609 1610
1610 1611 kmem_log_event(kmem_slab_log, cp, sp, slab);
1611 1612
1612 1613 return (sp);
1613 1614
1614 1615 bufctl_alloc_failure:
1615 1616
1616 1617 while ((bcp = sp->slab_head) != NULL) {
1617 1618 sp->slab_head = bcp->bc_next;
1618 1619 kmem_cache_free(cp->cache_bufctl_cache, bcp);
1619 1620 }
1620 1621 kmem_cache_free(kmem_slab_cache, sp);
1621 1622
1622 1623 slab_alloc_failure:
1623 1624
1624 1625 vmem_free(vmp, slab, slabsize);
1625 1626
1626 1627 vmem_alloc_failure:
1627 1628
1628 1629 kmem_log_event(kmem_failure_log, cp, NULL, NULL);
1629 1630 atomic_inc_64(&cp->cache_alloc_fail);
1630 1631
1631 1632 return (NULL);
1632 1633 }
1633 1634
1634 1635 /*
1635 1636 * Destroy a slab.
1636 1637 */
1637 1638 static void
1638 1639 kmem_slab_destroy(kmem_cache_t *cp, kmem_slab_t *sp)
1639 1640 {
1640 1641 vmem_t *vmp = cp->cache_arena;
1641 1642 void *slab = (void *)P2ALIGN((uintptr_t)sp->slab_base, vmp->vm_quantum);
1642 1643
1643 1644 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock));
1644 1645 ASSERT(sp->slab_refcnt == 0);
1645 1646
1646 1647 if (cp->cache_flags & KMF_HASH) {
1647 1648 kmem_bufctl_t *bcp;
1648 1649 while ((bcp = sp->slab_head) != NULL) {
1649 1650 sp->slab_head = bcp->bc_next;
1650 1651 kmem_cache_free(cp->cache_bufctl_cache, bcp);
1651 1652 }
1652 1653 kmem_cache_free(kmem_slab_cache, sp);
1653 1654 }
1654 1655 vmem_free(vmp, slab, cp->cache_slabsize);
1655 1656 }
1656 1657
1657 1658 static void *
1658 1659 kmem_slab_alloc_impl(kmem_cache_t *cp, kmem_slab_t *sp, boolean_t prefill)
1659 1660 {
1660 1661 kmem_bufctl_t *bcp, **hash_bucket;
1661 1662 void *buf;
1662 1663 boolean_t new_slab = (sp->slab_refcnt == 0);
1663 1664
1664 1665 ASSERT(MUTEX_HELD(&cp->cache_lock));
1665 1666 /*
1666 1667 * kmem_slab_alloc() drops cache_lock when it creates a new slab, so we
1667 1668 * can't ASSERT(avl_is_empty(&cp->cache_partial_slabs)) here when the
1668 1669 * slab is newly created.
1669 1670 */
1670 1671 ASSERT(new_slab || (KMEM_SLAB_IS_PARTIAL(sp) &&
1671 1672 (sp == avl_first(&cp->cache_partial_slabs))));
1672 1673 ASSERT(sp->slab_cache == cp);
1673 1674
1674 1675 cp->cache_slab_alloc++;
1675 1676 cp->cache_bufslab--;
1676 1677 sp->slab_refcnt++;
1677 1678
1678 1679 bcp = sp->slab_head;
1679 1680 sp->slab_head = bcp->bc_next;
1680 1681
1681 1682 if (cp->cache_flags & KMF_HASH) {
1682 1683 /*
1683 1684 * Add buffer to allocated-address hash table.
1684 1685 */
1685 1686 buf = bcp->bc_addr;
1686 1687 hash_bucket = KMEM_HASH(cp, buf);
1687 1688 bcp->bc_next = *hash_bucket;
1688 1689 *hash_bucket = bcp;
1689 1690 if ((cp->cache_flags & (KMF_AUDIT | KMF_BUFTAG)) == KMF_AUDIT) {
1690 1691 KMEM_AUDIT(kmem_transaction_log, cp, bcp);
1691 1692 }
1692 1693 } else {
1693 1694 buf = KMEM_BUF(cp, bcp);
1694 1695 }
1695 1696
1696 1697 ASSERT(KMEM_SLAB_MEMBER(sp, buf));
1697 1698
1698 1699 if (sp->slab_head == NULL) {
1699 1700 ASSERT(KMEM_SLAB_IS_ALL_USED(sp));
1700 1701 if (new_slab) {
1701 1702 ASSERT(sp->slab_chunks == 1);
1702 1703 } else {
1703 1704 ASSERT(sp->slab_chunks > 1); /* the slab was partial */
1704 1705 avl_remove(&cp->cache_partial_slabs, sp);
1705 1706 sp->slab_later_count = 0; /* clear history */
1706 1707 sp->slab_flags &= ~KMEM_SLAB_NOMOVE;
1707 1708 sp->slab_stuck_offset = (uint32_t)-1;
1708 1709 }
1709 1710 list_insert_head(&cp->cache_complete_slabs, sp);
1710 1711 cp->cache_complete_slab_count++;
1711 1712 return (buf);
1712 1713 }
1713 1714
1714 1715 ASSERT(KMEM_SLAB_IS_PARTIAL(sp));
1715 1716 /*
1716 1717 * Peek to see if the magazine layer is enabled before
1717 1718 * we prefill. We're not holding the cpu cache lock,
1718 1719 * so the peek could be wrong, but there's no harm in it.
1719 1720 */
1720 1721 if (new_slab && prefill && (cp->cache_flags & KMF_PREFILL) &&
1721 1722 (KMEM_CPU_CACHE(cp)->cc_magsize != 0)) {
1722 1723 kmem_slab_prefill(cp, sp);
1723 1724 return (buf);
1724 1725 }
1725 1726
1726 1727 if (new_slab) {
1727 1728 avl_add(&cp->cache_partial_slabs, sp);
1728 1729 return (buf);
1729 1730 }
1730 1731
1731 1732 /*
1732 1733 * The slab is now more allocated than it was, so the
1733 1734 * order remains unchanged.
1734 1735 */
1735 1736 ASSERT(!avl_update(&cp->cache_partial_slabs, sp));
1736 1737 return (buf);
1737 1738 }
1738 1739
1739 1740 /*
1740 1741 * Allocate a raw (unconstructed) buffer from cp's slab layer.
1741 1742 */
1742 1743 static void *
1743 1744 kmem_slab_alloc(kmem_cache_t *cp, int kmflag)
1744 1745 {
1745 1746 kmem_slab_t *sp;
1746 1747 void *buf;
1747 1748 boolean_t test_destructor;
1748 1749
1749 1750 mutex_enter(&cp->cache_lock);
1750 1751 test_destructor = (cp->cache_slab_alloc == 0);
1751 1752 sp = avl_first(&cp->cache_partial_slabs);
1752 1753 if (sp == NULL) {
1753 1754 ASSERT(cp->cache_bufslab == 0);
1754 1755
1755 1756 /*
1756 1757 * The freelist is empty. Create a new slab.
1757 1758 */
1758 1759 mutex_exit(&cp->cache_lock);
1759 1760 if ((sp = kmem_slab_create(cp, kmflag)) == NULL) {
1760 1761 return (NULL);
1761 1762 }
1762 1763 mutex_enter(&cp->cache_lock);
1763 1764 cp->cache_slab_create++;
1764 1765 if ((cp->cache_buftotal += sp->slab_chunks) > cp->cache_bufmax)
1765 1766 cp->cache_bufmax = cp->cache_buftotal;
1766 1767 cp->cache_bufslab += sp->slab_chunks;
1767 1768 }
1768 1769
1769 1770 buf = kmem_slab_alloc_impl(cp, sp, B_TRUE);
1770 1771 ASSERT((cp->cache_slab_create - cp->cache_slab_destroy) ==
1771 1772 (cp->cache_complete_slab_count +
1772 1773 avl_numnodes(&cp->cache_partial_slabs) +
1773 1774 (cp->cache_defrag == NULL ? 0 : cp->cache_defrag->kmd_deadcount)));
1774 1775 mutex_exit(&cp->cache_lock);
1775 1776
1776 1777 if (test_destructor && cp->cache_destructor != NULL) {
1777 1778 /*
1778 1779 * On the first kmem_slab_alloc(), assert that it is valid to
1779 1780 * call the destructor on a newly constructed object without any
1780 1781 * client involvement.
1781 1782 */
1782 1783 if ((cp->cache_constructor == NULL) ||
1783 1784 cp->cache_constructor(buf, cp->cache_private,
1784 1785 kmflag) == 0) {
1785 1786 cp->cache_destructor(buf, cp->cache_private);
1786 1787 }
1787 1788 copy_pattern(KMEM_UNINITIALIZED_PATTERN, buf,
1788 1789 cp->cache_bufsize);
1789 1790 if (cp->cache_flags & KMF_DEADBEEF) {
1790 1791 copy_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify);
1791 1792 }
1792 1793 }
1793 1794
1794 1795 return (buf);
1795 1796 }
1796 1797
1797 1798 static void kmem_slab_move_yes(kmem_cache_t *, kmem_slab_t *, void *);
1798 1799
1799 1800 /*
1800 1801 * Free a raw (unconstructed) buffer to cp's slab layer.
1801 1802 */
1802 1803 static void
1803 1804 kmem_slab_free(kmem_cache_t *cp, void *buf)
1804 1805 {
1805 1806 kmem_slab_t *sp;
1806 1807 kmem_bufctl_t *bcp, **prev_bcpp;
1807 1808
1808 1809 ASSERT(buf != NULL);
1809 1810
1810 1811 mutex_enter(&cp->cache_lock);
1811 1812 cp->cache_slab_free++;
1812 1813
1813 1814 if (cp->cache_flags & KMF_HASH) {
1814 1815 /*
1815 1816 * Look up buffer in allocated-address hash table.
1816 1817 */
1817 1818 prev_bcpp = KMEM_HASH(cp, buf);
1818 1819 while ((bcp = *prev_bcpp) != NULL) {
1819 1820 if (bcp->bc_addr == buf) {
1820 1821 *prev_bcpp = bcp->bc_next;
1821 1822 sp = bcp->bc_slab;
1822 1823 break;
1823 1824 }
1824 1825 cp->cache_lookup_depth++;
1825 1826 prev_bcpp = &bcp->bc_next;
1826 1827 }
1827 1828 } else {
1828 1829 bcp = KMEM_BUFCTL(cp, buf);
1829 1830 sp = KMEM_SLAB(cp, buf);
1830 1831 }
1831 1832
1832 1833 if (bcp == NULL || sp->slab_cache != cp || !KMEM_SLAB_MEMBER(sp, buf)) {
1833 1834 mutex_exit(&cp->cache_lock);
1834 1835 kmem_error(KMERR_BADADDR, cp, buf);
1835 1836 return;
1836 1837 }
1837 1838
1838 1839 if (KMEM_SLAB_OFFSET(sp, buf) == sp->slab_stuck_offset) {
1839 1840 /*
1840 1841 * If this is the buffer that prevented the consolidator from
1841 1842 * clearing the slab, we can reset the slab flags now that the
1842 1843 * buffer is freed. (It makes sense to do this in
1843 1844 * kmem_cache_free(), where the client gives up ownership of the
1844 1845 * buffer, but on the hot path the test is too expensive.)
1845 1846 */
1846 1847 kmem_slab_move_yes(cp, sp, buf);
1847 1848 }
1848 1849
1849 1850 if ((cp->cache_flags & (KMF_AUDIT | KMF_BUFTAG)) == KMF_AUDIT) {
1850 1851 if (cp->cache_flags & KMF_CONTENTS)
1851 1852 ((kmem_bufctl_audit_t *)bcp)->bc_contents =
1852 1853 kmem_log_enter(kmem_content_log, buf,
1853 1854 cp->cache_contents);
1854 1855 KMEM_AUDIT(kmem_transaction_log, cp, bcp);
1855 1856 }
1856 1857
1857 1858 bcp->bc_next = sp->slab_head;
1858 1859 sp->slab_head = bcp;
1859 1860
1860 1861 cp->cache_bufslab++;
1861 1862 ASSERT(sp->slab_refcnt >= 1);
1862 1863
1863 1864 if (--sp->slab_refcnt == 0) {
1864 1865 /*
1865 1866 * There are no outstanding allocations from this slab,
1866 1867 * so we can reclaim the memory.
1867 1868 */
1868 1869 if (sp->slab_chunks == 1) {
1869 1870 list_remove(&cp->cache_complete_slabs, sp);
1870 1871 cp->cache_complete_slab_count--;
1871 1872 } else {
1872 1873 avl_remove(&cp->cache_partial_slabs, sp);
1873 1874 }
1874 1875
1875 1876 cp->cache_buftotal -= sp->slab_chunks;
1876 1877 cp->cache_bufslab -= sp->slab_chunks;
1877 1878 /*
1878 1879 * Defer releasing the slab to the virtual memory subsystem
1879 1880 * while there is a pending move callback, since we guarantee
1880 1881 * that buffers passed to the move callback have only been
1881 1882 * touched by kmem or by the client itself. Since the memory
1882 1883 * patterns baddcafe (uninitialized) and deadbeef (freed) both
1883 1884 * set at least one of the two lowest order bits, the client can
1884 1885 * test those bits in the move callback to determine whether or
1885 1886 * not it knows about the buffer (assuming that the client also
1886 1887 * sets one of those low order bits whenever it frees a buffer).
1887 1888 */
1888 1889 if (cp->cache_defrag == NULL ||
1889 1890 (avl_is_empty(&cp->cache_defrag->kmd_moves_pending) &&
1890 1891 !(sp->slab_flags & KMEM_SLAB_MOVE_PENDING))) {
1891 1892 cp->cache_slab_destroy++;
1892 1893 mutex_exit(&cp->cache_lock);
1893 1894 kmem_slab_destroy(cp, sp);
1894 1895 } else {
1895 1896 list_t *deadlist = &cp->cache_defrag->kmd_deadlist;
1896 1897 /*
1897 1898 * Slabs are inserted at both ends of the deadlist to
1898 1899 * distinguish between slabs freed while move callbacks
1899 1900 * are pending (list head) and a slab freed while the
1900 1901 * lock is dropped in kmem_move_buffers() (list tail) so
1901 1902 * that in both cases slab_destroy() is called from the
1902 1903 * right context.
1903 1904 */
1904 1905 if (sp->slab_flags & KMEM_SLAB_MOVE_PENDING) {
1905 1906 list_insert_tail(deadlist, sp);
1906 1907 } else {
1907 1908 list_insert_head(deadlist, sp);
1908 1909 }
1909 1910 cp->cache_defrag->kmd_deadcount++;
1910 1911 mutex_exit(&cp->cache_lock);
1911 1912 }
1912 1913 return;
1913 1914 }
1914 1915
1915 1916 if (bcp->bc_next == NULL) {
1916 1917 /* Transition the slab from completely allocated to partial. */
1917 1918 ASSERT(sp->slab_refcnt == (sp->slab_chunks - 1));
1918 1919 ASSERT(sp->slab_chunks > 1);
1919 1920 list_remove(&cp->cache_complete_slabs, sp);
1920 1921 cp->cache_complete_slab_count--;
1921 1922 avl_add(&cp->cache_partial_slabs, sp);
1922 1923 } else {
1923 1924 #ifdef DEBUG
1924 1925 if (avl_update_gt(&cp->cache_partial_slabs, sp)) {
1925 1926 KMEM_STAT_ADD(kmem_move_stats.kms_avl_update);
1926 1927 } else {
1927 1928 KMEM_STAT_ADD(kmem_move_stats.kms_avl_noupdate);
1928 1929 }
1929 1930 #else
1930 1931 (void) avl_update_gt(&cp->cache_partial_slabs, sp);
1931 1932 #endif
1932 1933 }
1933 1934
1934 1935 ASSERT((cp->cache_slab_create - cp->cache_slab_destroy) ==
1935 1936 (cp->cache_complete_slab_count +
1936 1937 avl_numnodes(&cp->cache_partial_slabs) +
1937 1938 (cp->cache_defrag == NULL ? 0 : cp->cache_defrag->kmd_deadcount)));
1938 1939 mutex_exit(&cp->cache_lock);
1939 1940 }
1940 1941
1941 1942 /*
1942 1943 * Return -1 if kmem_error, 1 if constructor fails, 0 if successful.
1943 1944 */
1944 1945 static int
1945 1946 kmem_cache_alloc_debug(kmem_cache_t *cp, void *buf, int kmflag, int construct,
1946 1947 caddr_t caller)
1947 1948 {
1948 1949 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf);
1949 1950 kmem_bufctl_audit_t *bcp = (kmem_bufctl_audit_t *)btp->bt_bufctl;
1950 1951 uint32_t mtbf;
1951 1952
1952 1953 if (btp->bt_bxstat != ((intptr_t)bcp ^ KMEM_BUFTAG_FREE)) {
1953 1954 kmem_error(KMERR_BADBUFTAG, cp, buf);
1954 1955 return (-1);
1955 1956 }
1956 1957
1957 1958 btp->bt_bxstat = (intptr_t)bcp ^ KMEM_BUFTAG_ALLOC;
1958 1959
1959 1960 if ((cp->cache_flags & KMF_HASH) && bcp->bc_addr != buf) {
1960 1961 kmem_error(KMERR_BADBUFCTL, cp, buf);
1961 1962 return (-1);
1962 1963 }
1963 1964
1964 1965 if (cp->cache_flags & KMF_DEADBEEF) {
1965 1966 if (!construct && (cp->cache_flags & KMF_LITE)) {
1966 1967 if (*(uint64_t *)buf != KMEM_FREE_PATTERN) {
1967 1968 kmem_error(KMERR_MODIFIED, cp, buf);
1968 1969 return (-1);
1969 1970 }
1970 1971 if (cp->cache_constructor != NULL)
1971 1972 *(uint64_t *)buf = btp->bt_redzone;
1972 1973 else
1973 1974 *(uint64_t *)buf = KMEM_UNINITIALIZED_PATTERN;
1974 1975 } else {
1975 1976 construct = 1;
1976 1977 if (verify_and_copy_pattern(KMEM_FREE_PATTERN,
1977 1978 KMEM_UNINITIALIZED_PATTERN, buf,
1978 1979 cp->cache_verify)) {
1979 1980 kmem_error(KMERR_MODIFIED, cp, buf);
1980 1981 return (-1);
1981 1982 }
1982 1983 }
1983 1984 }
1984 1985 btp->bt_redzone = KMEM_REDZONE_PATTERN;
1985 1986
1986 1987 if ((mtbf = kmem_mtbf | cp->cache_mtbf) != 0 &&
1987 1988 gethrtime() % mtbf == 0 &&
1988 1989 (kmflag & (KM_NOSLEEP | KM_PANIC)) == KM_NOSLEEP) {
1989 1990 kmem_log_event(kmem_failure_log, cp, NULL, NULL);
1990 1991 if (!construct && cp->cache_destructor != NULL)
1991 1992 cp->cache_destructor(buf, cp->cache_private);
1992 1993 } else {
1993 1994 mtbf = 0;
1994 1995 }
1995 1996
1996 1997 if (mtbf || (construct && cp->cache_constructor != NULL &&
1997 1998 cp->cache_constructor(buf, cp->cache_private, kmflag) != 0)) {
1998 1999 atomic_inc_64(&cp->cache_alloc_fail);
1999 2000 btp->bt_bxstat = (intptr_t)bcp ^ KMEM_BUFTAG_FREE;
2000 2001 if (cp->cache_flags & KMF_DEADBEEF)
2001 2002 copy_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify);
2002 2003 kmem_slab_free(cp, buf);
2003 2004 return (1);
2004 2005 }
2005 2006
2006 2007 if (cp->cache_flags & KMF_AUDIT) {
2007 2008 KMEM_AUDIT(kmem_transaction_log, cp, bcp);
2008 2009 }
2009 2010
2010 2011 if ((cp->cache_flags & KMF_LITE) &&
2011 2012 !(cp->cache_cflags & KMC_KMEM_ALLOC)) {
2012 2013 KMEM_BUFTAG_LITE_ENTER(btp, kmem_lite_count, caller);
2013 2014 }
2014 2015
2015 2016 return (0);
2016 2017 }
2017 2018
2018 2019 static int
2019 2020 kmem_cache_free_debug(kmem_cache_t *cp, void *buf, caddr_t caller)
2020 2021 {
2021 2022 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf);
2022 2023 kmem_bufctl_audit_t *bcp = (kmem_bufctl_audit_t *)btp->bt_bufctl;
2023 2024 kmem_slab_t *sp;
2024 2025
2025 2026 if (btp->bt_bxstat != ((intptr_t)bcp ^ KMEM_BUFTAG_ALLOC)) {
2026 2027 if (btp->bt_bxstat == ((intptr_t)bcp ^ KMEM_BUFTAG_FREE)) {
2027 2028 kmem_error(KMERR_DUPFREE, cp, buf);
2028 2029 return (-1);
2029 2030 }
2030 2031 sp = kmem_findslab(cp, buf);
2031 2032 if (sp == NULL || sp->slab_cache != cp)
2032 2033 kmem_error(KMERR_BADADDR, cp, buf);
2033 2034 else
2034 2035 kmem_error(KMERR_REDZONE, cp, buf);
2035 2036 return (-1);
2036 2037 }
2037 2038
2038 2039 btp->bt_bxstat = (intptr_t)bcp ^ KMEM_BUFTAG_FREE;
2039 2040
2040 2041 if ((cp->cache_flags & KMF_HASH) && bcp->bc_addr != buf) {
2041 2042 kmem_error(KMERR_BADBUFCTL, cp, buf);
2042 2043 return (-1);
2043 2044 }
2044 2045
2045 2046 if (btp->bt_redzone != KMEM_REDZONE_PATTERN) {
2046 2047 kmem_error(KMERR_REDZONE, cp, buf);
2047 2048 return (-1);
2048 2049 }
2049 2050
2050 2051 if (cp->cache_flags & KMF_AUDIT) {
2051 2052 if (cp->cache_flags & KMF_CONTENTS)
2052 2053 bcp->bc_contents = kmem_log_enter(kmem_content_log,
2053 2054 buf, cp->cache_contents);
2054 2055 KMEM_AUDIT(kmem_transaction_log, cp, bcp);
2055 2056 }
2056 2057
2057 2058 if ((cp->cache_flags & KMF_LITE) &&
2058 2059 !(cp->cache_cflags & KMC_KMEM_ALLOC)) {
2059 2060 KMEM_BUFTAG_LITE_ENTER(btp, kmem_lite_count, caller);
2060 2061 }
2061 2062
2062 2063 if (cp->cache_flags & KMF_DEADBEEF) {
2063 2064 if (cp->cache_flags & KMF_LITE)
2064 2065 btp->bt_redzone = *(uint64_t *)buf;
2065 2066 else if (cp->cache_destructor != NULL)
2066 2067 cp->cache_destructor(buf, cp->cache_private);
2067 2068
2068 2069 copy_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify);
2069 2070 }
2070 2071
2071 2072 return (0);
2072 2073 }
2073 2074
2074 2075 /*
2075 2076 * Free each object in magazine mp to cp's slab layer, and free mp itself.
2076 2077 */
2077 2078 static void
2078 2079 kmem_magazine_destroy(kmem_cache_t *cp, kmem_magazine_t *mp, int nrounds)
2079 2080 {
2080 2081 int round;
2081 2082
2082 2083 ASSERT(!list_link_active(&cp->cache_link) ||
2083 2084 taskq_member(kmem_taskq, curthread));
2084 2085
2085 2086 for (round = 0; round < nrounds; round++) {
2086 2087 void *buf = mp->mag_round[round];
2087 2088
2088 2089 if (cp->cache_flags & KMF_DEADBEEF) {
2089 2090 if (verify_pattern(KMEM_FREE_PATTERN, buf,
2090 2091 cp->cache_verify) != NULL) {
2091 2092 kmem_error(KMERR_MODIFIED, cp, buf);
2092 2093 continue;
2093 2094 }
2094 2095 if ((cp->cache_flags & KMF_LITE) &&
2095 2096 cp->cache_destructor != NULL) {
2096 2097 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf);
2097 2098 *(uint64_t *)buf = btp->bt_redzone;
2098 2099 cp->cache_destructor(buf, cp->cache_private);
2099 2100 *(uint64_t *)buf = KMEM_FREE_PATTERN;
2100 2101 }
2101 2102 } else if (cp->cache_destructor != NULL) {
2102 2103 cp->cache_destructor(buf, cp->cache_private);
2103 2104 }
2104 2105
2105 2106 kmem_slab_free(cp, buf);
2106 2107 }
2107 2108 ASSERT(KMEM_MAGAZINE_VALID(cp, mp));
2108 2109 kmem_cache_free(cp->cache_magtype->mt_cache, mp);
2109 2110 }
2110 2111
2111 2112 /*
2112 2113 * Allocate a magazine from the depot.
2113 2114 */
2114 2115 static kmem_magazine_t *
2115 2116 kmem_depot_alloc(kmem_cache_t *cp, kmem_maglist_t *mlp)
2116 2117 {
2117 2118 kmem_magazine_t *mp;
2118 2119
2119 2120 /*
2120 2121 * If we can't get the depot lock without contention,
2121 2122 * update our contention count. We use the depot
2122 2123 * contention rate to determine whether we need to
2123 2124 * increase the magazine size for better scalability.
2124 2125 */
2125 2126 if (!mutex_tryenter(&cp->cache_depot_lock)) {
2126 2127 mutex_enter(&cp->cache_depot_lock);
2127 2128 cp->cache_depot_contention++;
2128 2129 }
2129 2130
2130 2131 if ((mp = mlp->ml_list) != NULL) {
2131 2132 ASSERT(KMEM_MAGAZINE_VALID(cp, mp));
2132 2133 mlp->ml_list = mp->mag_next;
2133 2134 if (--mlp->ml_total < mlp->ml_min)
2134 2135 mlp->ml_min = mlp->ml_total;
2135 2136 mlp->ml_alloc++;
2136 2137 }
2137 2138
2138 2139 mutex_exit(&cp->cache_depot_lock);
2139 2140
2140 2141 return (mp);
2141 2142 }
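(Editor's sketch, not from this webrev.) The contention counter bumped above only records failed tryenters; per the comment, the depot contention rate is what later drives the decision to grow the magazine size, with kmem_depot_contention acting as the per-interval threshold. A rough, hypothetical rendering of that comparison — the helper name and the snapshot argument are invented for illustration:

    /*
     * Hypothetical helper: return B_TRUE if the depot lock was contended
     * more than kmem_depot_contention times since the last snapshot,
     * suggesting a larger magazine size would reduce depot traffic.
     */
    static boolean_t
    example_depot_contended(kmem_cache_t *cp, uint64_t prev_contention)
    {
    	return ((cp->cache_depot_contention - prev_contention) >
    	    (uint64_t)kmem_depot_contention);
    }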
2142 2143
2143 2144 /*
2144 2145 * Free a magazine to the depot.
2145 2146 */
2146 2147 static void
2147 2148 kmem_depot_free(kmem_cache_t *cp, kmem_maglist_t *mlp, kmem_magazine_t *mp)
2148 2149 {
2149 2150 mutex_enter(&cp->cache_depot_lock);
2150 2151 ASSERT(KMEM_MAGAZINE_VALID(cp, mp));
2151 2152 mp->mag_next = mlp->ml_list;
2152 2153 mlp->ml_list = mp;
2153 2154 mlp->ml_total++;
2154 2155 mutex_exit(&cp->cache_depot_lock);
2155 2156 }
2156 2157
2157 2158 /*
2158 2159 * Update the working set statistics for cp's depot.
2159 2160 */
2160 2161 static void
2161 2162 kmem_depot_ws_update(kmem_cache_t *cp)
2162 2163 {
2163 2164 mutex_enter(&cp->cache_depot_lock);
2164 2165 cp->cache_full.ml_reaplimit = cp->cache_full.ml_min;
2165 2166 cp->cache_full.ml_min = cp->cache_full.ml_total;
2166 2167 cp->cache_empty.ml_reaplimit = cp->cache_empty.ml_min;
2167 2168 cp->cache_empty.ml_min = cp->cache_empty.ml_total;
2168 2169 mutex_exit(&cp->cache_depot_lock);
2169 2170 }
2170 2171
2171 2172 /*
2173 + * Set the working set statistics for cp's depot to zero. (Everything is
2174 + * eligible for reaping.)
2175 + */
2176 +static void
2177 +kmem_depot_ws_zero(kmem_cache_t *cp)
2178 +{
2179 + mutex_enter(&cp->cache_depot_lock);
2180 + cp->cache_full.ml_reaplimit = cp->cache_full.ml_total;
2181 + cp->cache_full.ml_min = cp->cache_full.ml_total;
2182 + cp->cache_empty.ml_reaplimit = cp->cache_empty.ml_total;
2183 + cp->cache_empty.ml_min = cp->cache_empty.ml_total;
2184 + mutex_exit(&cp->cache_depot_lock);
2185 +}
2186 +
2187 +/*
2172 2188 * Reap all magazines that have fallen out of the depot's working set.
2173 2189 */
2174 2190 static void
2175 2191 kmem_depot_ws_reap(kmem_cache_t *cp)
2176 2192 {
2177 2193 long reap;
2178 2194 kmem_magazine_t *mp;
2179 2195
2180 2196 ASSERT(!list_link_active(&cp->cache_link) ||
2181 2197 taskq_member(kmem_taskq, curthread));
2182 2198
2183 2199 reap = MIN(cp->cache_full.ml_reaplimit, cp->cache_full.ml_min);
2184 2200 while (reap-- && (mp = kmem_depot_alloc(cp, &cp->cache_full)) != NULL)
2185 2201 kmem_magazine_destroy(cp, mp, cp->cache_magtype->mt_magsize);
2186 2202
2187 2203 reap = MIN(cp->cache_empty.ml_reaplimit, cp->cache_empty.ml_min);
2188 2204 while (reap-- && (mp = kmem_depot_alloc(cp, &cp->cache_empty)) != NULL)
2189 2205 kmem_magazine_destroy(cp, mp, 0);
2190 2206 }
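(Editor's sketch, not part of this webrev; the caller name is hypothetical and the real call sites are elsewhere in kmem.c.) Putting the two routines above together: kmem_depot_ws_zero() sets ml_reaplimit and ml_min to ml_total for both lists, so a following kmem_depot_ws_reap() reaps every magazine in the depot — the same end state that two back-to-back calls to kmem_depot_ws_update() produce, since the second call copies the just-updated ml_min (== ml_total) into ml_reaplimit. Note the ASSERT above: ws_reap is only expected from the kmem taskq or while the cache is being torn down.

    /*
     * Hypothetical caller, for illustration only: make everything in
     * cp's depot reap-eligible, then reap it.
     */
    static void
    example_cache_drain_depot(kmem_cache_t *cp)
    {
    	kmem_depot_ws_zero(cp);		/* reaplimit = min = total */
    	kmem_depot_ws_reap(cp);		/* destroys all depot magazines */
    }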
2191 2207
2192 2208 static void
2193 2209 kmem_cpu_reload(kmem_cpu_cache_t *ccp, kmem_magazine_t *mp, int rounds)
2194 2210 {
2195 2211 ASSERT((ccp->cc_loaded == NULL && ccp->cc_rounds == -1) ||
2196 2212 (ccp->cc_loaded && ccp->cc_rounds + rounds == ccp->cc_magsize));
2197 2213 ASSERT(ccp->cc_magsize > 0);
2198 2214
2199 2215 ccp->cc_ploaded = ccp->cc_loaded;
2200 2216 ccp->cc_prounds = ccp->cc_rounds;
2201 2217 ccp->cc_loaded = mp;
2202 2218 ccp->cc_rounds = rounds;
2203 2219 }
2204 2220
2205 2221 /*
2206 2222 * Intercept kmem alloc/free calls during crash dump in order to avoid
2207 2223 * changing kmem state while memory is being saved to the dump device.
2208 2224 * Otherwise, ::kmem_verify will report "corrupt buffers". Note that
2209 2225 * there are no locks because only one CPU calls kmem during a crash
2210 2226 * dump. To enable this feature, first create the associated vmem
2211 2227 * arena with VMC_DUMPSAFE.
2212 2228 */
2213 2229 static void *kmem_dump_start; /* start of pre-reserved heap */
2214 2230 static void *kmem_dump_end; /* end of heap area */
2215 2231 static void *kmem_dump_curr; /* current free heap pointer */
2216 2232 static size_t kmem_dump_size; /* size of heap area */
2217 2233
2218 2234 /* append to each buf created in the pre-reserved heap */
2219 2235 typedef struct kmem_dumpctl {
2220 2236 void *kdc_next; /* cache dump free list linkage */
2221 2237 } kmem_dumpctl_t;
2222 2238
2223 2239 #define KMEM_DUMPCTL(cp, buf) \
2224 2240 ((kmem_dumpctl_t *)P2ROUNDUP((uintptr_t)(buf) + (cp)->cache_bufsize, \
2225 2241 sizeof (void *)))
2226 2242
2227 2243 /* Keep some simple stats. */
2228 2244 #define KMEM_DUMP_LOGS (100)
2229 2245
2230 2246 typedef struct kmem_dump_log {
2231 2247 kmem_cache_t *kdl_cache;
2232 2248 uint_t kdl_allocs; /* # of dump allocations */
2233 2249 uint_t kdl_frees; /* # of dump frees */
2234 2250 uint_t kdl_alloc_fails; /* # of allocation failures */
2235 2251 uint_t kdl_free_nondump; /* # of non-dump frees */
2236 2252 uint_t kdl_unsafe; /* cache was used, but unsafe */
2237 2253 } kmem_dump_log_t;
2238 2254
2239 2255 static kmem_dump_log_t *kmem_dump_log;
2240 2256 static int kmem_dump_log_idx;
2241 2257
2242 2258 #define KDI_LOG(cp, stat) { \
2243 2259 kmem_dump_log_t *kdl; \
2244 2260 if ((kdl = (kmem_dump_log_t *)((cp)->cache_dumplog)) != NULL) { \
2245 2261 kdl->stat++; \
2246 2262 } else if (kmem_dump_log_idx < KMEM_DUMP_LOGS) { \
2247 2263 kdl = &kmem_dump_log[kmem_dump_log_idx++]; \
2248 2264 kdl->stat++; \
2249 2265 kdl->kdl_cache = (cp); \
2250 2266 (cp)->cache_dumplog = kdl; \
2251 2267 } \
2252 2268 }
2253 2269
2254 2270 /* set non zero for full report */
2255 2271 uint_t kmem_dump_verbose = 0;
2256 2272
2257 2273 /* stats for oversize heap */
2258 2274 uint_t kmem_dump_oversize_allocs = 0;
2259 2275 uint_t kmem_dump_oversize_max = 0;
2260 2276
2261 2277 static void
2262 2278 kmem_dumppr(char **pp, char *e, const char *format, ...)
2263 2279 {
2264 2280 char *p = *pp;
2265 2281
2266 2282 if (p < e) {
2267 2283 int n;
2268 2284 va_list ap;
2269 2285
2270 2286 va_start(ap, format);
2271 2287 n = vsnprintf(p, e - p, format, ap);
2272 2288 va_end(ap);
2273 2289 *pp = p + n;
2274 2290 }
2275 2291 }
2276 2292
2277 2293 /*
2278 2294 * Called when dumpadm(1M) configures dump parameters.
2279 2295 */
2280 2296 void
2281 2297 kmem_dump_init(size_t size)
2282 2298 {
2283 2299 if (kmem_dump_start != NULL)
2284 2300 kmem_free(kmem_dump_start, kmem_dump_size);
2285 2301
2286 2302 if (kmem_dump_log == NULL)
2287 2303 kmem_dump_log = (kmem_dump_log_t *)kmem_zalloc(KMEM_DUMP_LOGS *
2288 2304 sizeof (kmem_dump_log_t), KM_SLEEP);
2289 2305
2290 2306 kmem_dump_start = kmem_alloc(size, KM_SLEEP);
2291 2307
2292 2308 if (kmem_dump_start != NULL) {
2293 2309 kmem_dump_size = size;
2294 2310 kmem_dump_curr = kmem_dump_start;
2295 2311 kmem_dump_end = (void *)((char *)kmem_dump_start + size);
2296 2312 copy_pattern(KMEM_UNINITIALIZED_PATTERN, kmem_dump_start, size);
2297 2313 } else {
2298 2314 kmem_dump_size = 0;
2299 2315 kmem_dump_curr = NULL;
2300 2316 kmem_dump_end = NULL;
2301 2317 }
2302 2318 }
2303 2319
2304 2320 /*
2305 2321  * Set flag for each kmem_cache_t if it is safe to use alternate dump
2306 2322 * memory. Called just before panic crash dump starts. Set the flag
2307 2323 * for the calling CPU.
2308 2324 */
2309 2325 void
2310 2326 kmem_dump_begin(void)
2311 2327 {
2312 2328 ASSERT(panicstr != NULL);
2313 2329 if (kmem_dump_start != NULL) {
2314 2330 kmem_cache_t *cp;
2315 2331
2316 2332 for (cp = list_head(&kmem_caches); cp != NULL;
2317 2333 cp = list_next(&kmem_caches, cp)) {
2318 2334 kmem_cpu_cache_t *ccp = KMEM_CPU_CACHE(cp);
2319 2335
2320 2336 if (cp->cache_arena->vm_cflags & VMC_DUMPSAFE) {
2321 2337 cp->cache_flags |= KMF_DUMPDIVERT;
2322 2338 ccp->cc_flags |= KMF_DUMPDIVERT;
2323 2339 ccp->cc_dump_rounds = ccp->cc_rounds;
2324 2340 ccp->cc_dump_prounds = ccp->cc_prounds;
2325 2341 ccp->cc_rounds = ccp->cc_prounds = -1;
2326 2342 } else {
2327 2343 cp->cache_flags |= KMF_DUMPUNSAFE;
2328 2344 ccp->cc_flags |= KMF_DUMPUNSAFE;
2329 2345 }
2330 2346 }
2331 2347 }
2332 2348 }
2333 2349
2334 2350 /*
2335 2351 * finished dump intercept
2336 2352 * print any warnings on the console
2337 2353 * return verbose information to dumpsys() in the given buffer
2338 2354 */
2339 2355 size_t
2340 2356 kmem_dump_finish(char *buf, size_t size)
2341 2357 {
2342 2358 int kdi_idx;
2343 2359 int kdi_end = kmem_dump_log_idx;
2344 2360 int percent = 0;
2345 2361 int header = 0;
2346 2362 int warn = 0;
2347 2363 size_t used;
2348 2364 kmem_cache_t *cp;
2349 2365 kmem_dump_log_t *kdl;
2350 2366 char *e = buf + size;
2351 2367 char *p = buf;
2352 2368
2353 2369 if (kmem_dump_size == 0 || kmem_dump_verbose == 0)
2354 2370 return (0);
2355 2371
2356 2372 used = (char *)kmem_dump_curr - (char *)kmem_dump_start;
2357 2373 percent = (used * 100) / kmem_dump_size;
2358 2374
2359 2375 kmem_dumppr(&p, e, "%% heap used,%d\n", percent);
2360 2376 kmem_dumppr(&p, e, "used bytes,%ld\n", used);
2361 2377 kmem_dumppr(&p, e, "heap size,%ld\n", kmem_dump_size);
2362 2378 kmem_dumppr(&p, e, "Oversize allocs,%d\n",
2363 2379 kmem_dump_oversize_allocs);
2364 2380 kmem_dumppr(&p, e, "Oversize max size,%ld\n",
2365 2381 kmem_dump_oversize_max);
2366 2382
2367 2383 for (kdi_idx = 0; kdi_idx < kdi_end; kdi_idx++) {
2368 2384 kdl = &kmem_dump_log[kdi_idx];
2369 2385 cp = kdl->kdl_cache;
2370 2386 if (cp == NULL)
2371 2387 break;
2372 2388 if (kdl->kdl_alloc_fails)
2373 2389 ++warn;
2374 2390 if (header == 0) {
2375 2391 kmem_dumppr(&p, e,
2376 2392 "Cache Name,Allocs,Frees,Alloc Fails,"
2377 2393 "Nondump Frees,Unsafe Allocs/Frees\n");
2378 2394 header = 1;
2379 2395 }
2380 2396 kmem_dumppr(&p, e, "%s,%d,%d,%d,%d,%d\n",
2381 2397 cp->cache_name, kdl->kdl_allocs, kdl->kdl_frees,
2382 2398 kdl->kdl_alloc_fails, kdl->kdl_free_nondump,
2383 2399 kdl->kdl_unsafe);
2384 2400 }
2385 2401
2386 2402 /* return buffer size used */
2387 2403 if (p < e)
2388 2404 bzero(p, e - p);
2389 2405 return (p - buf);
2390 2406 }
2391 2407
2392 2408 /*
2393 2409 * Allocate a constructed object from alternate dump memory.
2394 2410 */
2395 2411 void *
2396 2412 kmem_cache_alloc_dump(kmem_cache_t *cp, int kmflag)
2397 2413 {
2398 2414 void *buf;
2399 2415 void *curr;
2400 2416 char *bufend;
2401 2417
2402 2418 /* return a constructed object */
2403 2419 if ((buf = cp->cache_dumpfreelist) != NULL) {
2404 2420 cp->cache_dumpfreelist = KMEM_DUMPCTL(cp, buf)->kdc_next;
2405 2421 KDI_LOG(cp, kdl_allocs);
2406 2422 return (buf);
2407 2423 }
2408 2424
2409 2425 /* create a new constructed object */
2410 2426 curr = kmem_dump_curr;
2411 2427 buf = (void *)P2ROUNDUP((uintptr_t)curr, cp->cache_align);
2412 2428 bufend = (char *)KMEM_DUMPCTL(cp, buf) + sizeof (kmem_dumpctl_t);
2413 2429
2414 2430 /* hat layer objects cannot cross a page boundary */
2415 2431 if (cp->cache_align < PAGESIZE) {
2416 2432 char *page = (char *)P2ROUNDUP((uintptr_t)buf, PAGESIZE);
2417 2433 if (bufend > page) {
2418 2434 bufend += page - (char *)buf;
2419 2435 buf = (void *)page;
2420 2436 }
2421 2437 }
2422 2438
2423 2439 /* fall back to normal alloc if reserved area is used up */
2424 2440 if (bufend > (char *)kmem_dump_end) {
2425 2441 kmem_dump_curr = kmem_dump_end;
2426 2442 KDI_LOG(cp, kdl_alloc_fails);
2427 2443 return (NULL);
2428 2444 }
2429 2445
2430 2446 /*
2431 2447 * Must advance curr pointer before calling a constructor that
2432 2448 * may also allocate memory.
2433 2449 */
2434 2450 kmem_dump_curr = bufend;
2435 2451
2436 2452 /* run constructor */
2437 2453 if (cp->cache_constructor != NULL &&
2438 2454 cp->cache_constructor(buf, cp->cache_private, kmflag)
2439 2455 != 0) {
2440 2456 #ifdef DEBUG
2441 2457 printf("name='%s' cache=0x%p: kmem cache constructor failed\n",
2442 2458 cp->cache_name, (void *)cp);
2443 2459 #endif
2444 2460 /* reset curr pointer iff no allocs were done */
2445 2461 if (kmem_dump_curr == bufend)
2446 2462 kmem_dump_curr = curr;
2447 2463
2448 2464 /* fall back to normal alloc if the constructor fails */
2449 2465 KDI_LOG(cp, kdl_alloc_fails);
2450 2466 return (NULL);
2451 2467 }
2452 2468
2453 2469 KDI_LOG(cp, kdl_allocs);
2454 2470 return (buf);
2455 2471 }
2456 2472
2457 2473 /*
2458 2474 * Free a constructed object in alternate dump memory.
2459 2475 */
2460 2476 int
2461 2477 kmem_cache_free_dump(kmem_cache_t *cp, void *buf)
2462 2478 {
2463 2479 /* save constructed buffers for next time */
2464 2480 if ((char *)buf >= (char *)kmem_dump_start &&
2465 2481 (char *)buf < (char *)kmem_dump_end) {
2466 2482 KMEM_DUMPCTL(cp, buf)->kdc_next = cp->cache_dumpfreelist;
2467 2483 cp->cache_dumpfreelist = buf;
2468 2484 KDI_LOG(cp, kdl_frees);
2469 2485 return (0);
2470 2486 }
2471 2487
2472 2488 /* count all non-dump buf frees */
2473 2489 KDI_LOG(cp, kdl_free_nondump);
2474 2490
2475 2491 /* just drop buffers that were allocated before dump started */
2476 2492 if (kmem_dump_curr < kmem_dump_end)
2477 2493 return (0);
2478 2494
2479 2495 /* fall back to normal free if reserved area is used up */
2480 2496 return (1);
2481 2497 }
2482 2498
2483 2499 /*
2484 2500 * Allocate a constructed object from cache cp.
2485 2501 */
2486 2502 void *
2487 2503 kmem_cache_alloc(kmem_cache_t *cp, int kmflag)
2488 2504 {
2489 2505 kmem_cpu_cache_t *ccp = KMEM_CPU_CACHE(cp);
2490 2506 kmem_magazine_t *fmp;
2491 2507 void *buf;
2492 2508
2493 2509 mutex_enter(&ccp->cc_lock);
2494 2510 for (;;) {
2495 2511 /*
2496 2512 * If there's an object available in the current CPU's
2497 2513 * loaded magazine, just take it and return.
2498 2514 */
2499 2515 if (ccp->cc_rounds > 0) {
2500 2516 buf = ccp->cc_loaded->mag_round[--ccp->cc_rounds];
2501 2517 ccp->cc_alloc++;
2502 2518 mutex_exit(&ccp->cc_lock);
2503 2519 if (ccp->cc_flags & (KMF_BUFTAG | KMF_DUMPUNSAFE)) {
2504 2520 if (ccp->cc_flags & KMF_DUMPUNSAFE) {
2505 2521 ASSERT(!(ccp->cc_flags &
2506 2522 KMF_DUMPDIVERT));
2507 2523 KDI_LOG(cp, kdl_unsafe);
2508 2524 }
2509 2525 if ((ccp->cc_flags & KMF_BUFTAG) &&
2510 2526 kmem_cache_alloc_debug(cp, buf, kmflag, 0,
2511 2527 caller()) != 0) {
2512 2528 if (kmflag & KM_NOSLEEP)
2513 2529 return (NULL);
2514 2530 mutex_enter(&ccp->cc_lock);
2515 2531 continue;
2516 2532 }
2517 2533 }
2518 2534 return (buf);
2519 2535 }
2520 2536
2521 2537 /*
2522 2538 * The loaded magazine is empty. If the previously loaded
2523 2539 * magazine was full, exchange them and try again.
2524 2540 */
2525 2541 if (ccp->cc_prounds > 0) {
2526 2542 kmem_cpu_reload(ccp, ccp->cc_ploaded, ccp->cc_prounds);
2527 2543 continue;
2528 2544 }
2529 2545
2530 2546 /*
2531 2547 * Return an alternate buffer at dump time to preserve
2532 2548 * the heap.
2533 2549 */
2534 2550 if (ccp->cc_flags & (KMF_DUMPDIVERT | KMF_DUMPUNSAFE)) {
2535 2551 if (ccp->cc_flags & KMF_DUMPUNSAFE) {
2536 2552 ASSERT(!(ccp->cc_flags & KMF_DUMPDIVERT));
2537 2553 /* log it so that we can warn about it */
2538 2554 KDI_LOG(cp, kdl_unsafe);
2539 2555 } else {
2540 2556 if ((buf = kmem_cache_alloc_dump(cp, kmflag)) !=
2541 2557 NULL) {
2542 2558 mutex_exit(&ccp->cc_lock);
2543 2559 return (buf);
2544 2560 }
2545 2561 break; /* fall back to slab layer */
2546 2562 }
2547 2563 }
2548 2564
2549 2565 /*
2550 2566 * If the magazine layer is disabled, break out now.
2551 2567 */
2552 2568 if (ccp->cc_magsize == 0)
2553 2569 break;
2554 2570
2555 2571 /*
2556 2572 * Try to get a full magazine from the depot.
2557 2573 */
2558 2574 fmp = kmem_depot_alloc(cp, &cp->cache_full);
2559 2575 if (fmp != NULL) {
2560 2576 if (ccp->cc_ploaded != NULL)
2561 2577 kmem_depot_free(cp, &cp->cache_empty,
2562 2578 ccp->cc_ploaded);
2563 2579 kmem_cpu_reload(ccp, fmp, ccp->cc_magsize);
2564 2580 continue;
2565 2581 }
2566 2582
2567 2583 /*
2568 2584 * There are no full magazines in the depot,
2569 2585 * so fall through to the slab layer.
2570 2586 */
2571 2587 break;
2572 2588 }
2573 2589 mutex_exit(&ccp->cc_lock);
2574 2590
2575 2591 /*
2576 2592 * We couldn't allocate a constructed object from the magazine layer,
2577 2593 * so get a raw buffer from the slab layer and apply its constructor.
2578 2594 */
2579 2595 buf = kmem_slab_alloc(cp, kmflag);
2580 2596
2581 2597 if (buf == NULL)
2582 2598 return (NULL);
2583 2599
2584 2600 if (cp->cache_flags & KMF_BUFTAG) {
2585 2601 /*
2586 2602 * Make kmem_cache_alloc_debug() apply the constructor for us.
2587 2603 */
2588 2604 int rc = kmem_cache_alloc_debug(cp, buf, kmflag, 1, caller());
2589 2605 if (rc != 0) {
2590 2606 if (kmflag & KM_NOSLEEP)
2591 2607 return (NULL);
2592 2608 /*
2593 2609 * kmem_cache_alloc_debug() detected corruption
2594 2610 * but didn't panic (kmem_panic <= 0). We should not be
2595 2611 * here because the constructor failed (indicated by a
2596 2612 * return code of 1). Try again.
2597 2613 */
2598 2614 ASSERT(rc == -1);
2599 2615 return (kmem_cache_alloc(cp, kmflag));
2600 2616 }
2601 2617 return (buf);
2602 2618 }
2603 2619
2604 2620 if (cp->cache_constructor != NULL &&
2605 2621 cp->cache_constructor(buf, cp->cache_private, kmflag) != 0) {
2606 2622 atomic_inc_64(&cp->cache_alloc_fail);
2607 2623 kmem_slab_free(cp, buf);
2608 2624 return (NULL);
2609 2625 }
2610 2626
2611 2627 return (buf);
2612 2628 }
2613 2629
2614 2630 /*
2615 2631 * The freed argument tells whether or not kmem_cache_free_debug() has already
2616 2632 * been called so that we can avoid the duplicate free error. For example, a
2617 2633 * buffer on a magazine has already been freed by the client but is still
2618 2634 * constructed.
2619 2635 */
2620 2636 static void
2621 2637 kmem_slab_free_constructed(kmem_cache_t *cp, void *buf, boolean_t freed)
2622 2638 {
2623 2639 if (!freed && (cp->cache_flags & KMF_BUFTAG))
2624 2640 if (kmem_cache_free_debug(cp, buf, caller()) == -1)
2625 2641 return;
2626 2642
2627 2643 /*
2628 2644 * Note that if KMF_DEADBEEF is in effect and KMF_LITE is not,
2629 2645 * kmem_cache_free_debug() will have already applied the destructor.
2630 2646 */
2631 2647 if ((cp->cache_flags & (KMF_DEADBEEF | KMF_LITE)) != KMF_DEADBEEF &&
2632 2648 cp->cache_destructor != NULL) {
2633 2649 if (cp->cache_flags & KMF_DEADBEEF) { /* KMF_LITE implied */
2634 2650 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf);
2635 2651 *(uint64_t *)buf = btp->bt_redzone;
2636 2652 cp->cache_destructor(buf, cp->cache_private);
2637 2653 *(uint64_t *)buf = KMEM_FREE_PATTERN;
2638 2654 } else {
2639 2655 cp->cache_destructor(buf, cp->cache_private);
2640 2656 }
2641 2657 }
2642 2658
2643 2659 kmem_slab_free(cp, buf);
2644 2660 }
2645 2661
2646 2662 /*
2647 2663 * Used when there's no room to free a buffer to the per-CPU cache.
2648 2664 * Drops and re-acquires &ccp->cc_lock, and returns non-zero if the
2649 2665 * caller should try freeing to the per-CPU cache again.
2650 2666 * Note that we don't directly install the magazine in the cpu cache,
2651 2667 * since its state may have changed wildly while the lock was dropped.
2652 2668 */
2653 2669 static int
2654 2670 kmem_cpucache_magazine_alloc(kmem_cpu_cache_t *ccp, kmem_cache_t *cp)
2655 2671 {
2656 2672 kmem_magazine_t *emp;
2657 2673 kmem_magtype_t *mtp;
2658 2674
2659 2675 ASSERT(MUTEX_HELD(&ccp->cc_lock));
2660 2676 ASSERT(((uint_t)ccp->cc_rounds == ccp->cc_magsize ||
2661 2677 ((uint_t)ccp->cc_rounds == -1)) &&
2662 2678 ((uint_t)ccp->cc_prounds == ccp->cc_magsize ||
2663 2679 ((uint_t)ccp->cc_prounds == -1)));
2664 2680
2665 2681 emp = kmem_depot_alloc(cp, &cp->cache_empty);
2666 2682 if (emp != NULL) {
2667 2683 if (ccp->cc_ploaded != NULL)
2668 2684 kmem_depot_free(cp, &cp->cache_full,
2669 2685 ccp->cc_ploaded);
2670 2686 kmem_cpu_reload(ccp, emp, 0);
2671 2687 return (1);
2672 2688 }
2673 2689 /*
2674 2690 * There are no empty magazines in the depot,
2675 2691 * so try to allocate a new one. We must drop all locks
2676 2692 * across kmem_cache_alloc() because lower layers may
2677 2693 * attempt to allocate from this cache.
2678 2694 */
2679 2695 mtp = cp->cache_magtype;
2680 2696 mutex_exit(&ccp->cc_lock);
2681 2697 emp = kmem_cache_alloc(mtp->mt_cache, KM_NOSLEEP);
2682 2698 mutex_enter(&ccp->cc_lock);
2683 2699
2684 2700 if (emp != NULL) {
2685 2701 /*
2686 2702 * We successfully allocated an empty magazine.
2687 2703 * However, we had to drop ccp->cc_lock to do it,
2688 2704 * so the cache's magazine size may have changed.
2689 2705 * If so, free the magazine and try again.
2690 2706 */
2691 2707 if (ccp->cc_magsize != mtp->mt_magsize) {
2692 2708 mutex_exit(&ccp->cc_lock);
2693 2709 kmem_cache_free(mtp->mt_cache, emp);
2694 2710 mutex_enter(&ccp->cc_lock);
2695 2711 return (1);
2696 2712 }
2697 2713
2698 2714 /*
2699 2715 * We got a magazine of the right size. Add it to
2700 2716 * the depot and try the whole dance again.
2701 2717 */
2702 2718 kmem_depot_free(cp, &cp->cache_empty, emp);
2703 2719 return (1);
2704 2720 }
2705 2721
2706 2722 /*
2707 2723 * We couldn't allocate an empty magazine,
2708 2724 * so fall through to the slab layer.
2709 2725 */
2710 2726 return (0);
2711 2727 }
2712 2728
2713 2729 /*
2714 2730 * Free a constructed object to cache cp.
2715 2731 */
2716 2732 void
2717 2733 kmem_cache_free(kmem_cache_t *cp, void *buf)
2718 2734 {
2719 2735 kmem_cpu_cache_t *ccp = KMEM_CPU_CACHE(cp);
2720 2736
2721 2737 /*
2722 2738 * The client must not free either of the buffers passed to the move
2723 2739 * callback function.
2724 2740 */
2725 2741 ASSERT(cp->cache_defrag == NULL ||
2726 2742 cp->cache_defrag->kmd_thread != curthread ||
2727 2743 (buf != cp->cache_defrag->kmd_from_buf &&
2728 2744 buf != cp->cache_defrag->kmd_to_buf));
2729 2745
2730 2746 if (ccp->cc_flags & (KMF_BUFTAG | KMF_DUMPDIVERT | KMF_DUMPUNSAFE)) {
2731 2747 if (ccp->cc_flags & KMF_DUMPUNSAFE) {
2732 2748 ASSERT(!(ccp->cc_flags & KMF_DUMPDIVERT));
2733 2749 /* log it so that we can warn about it */
2734 2750 KDI_LOG(cp, kdl_unsafe);
2735 2751 } else if (KMEM_DUMPCC(ccp) && !kmem_cache_free_dump(cp, buf)) {
2736 2752 return;
2737 2753 }
2738 2754 if (ccp->cc_flags & KMF_BUFTAG) {
2739 2755 if (kmem_cache_free_debug(cp, buf, caller()) == -1)
2740 2756 return;
2741 2757 }
2742 2758 }
2743 2759
2744 2760 mutex_enter(&ccp->cc_lock);
2745 2761 /*
2746 2762 * Any changes to this logic should be reflected in kmem_slab_prefill()
2747 2763 */
2748 2764 for (;;) {
2749 2765 /*
2750 2766 * If there's a slot available in the current CPU's
2751 2767 * loaded magazine, just put the object there and return.
2752 2768 */
2753 2769 if ((uint_t)ccp->cc_rounds < ccp->cc_magsize) {
2754 2770 ccp->cc_loaded->mag_round[ccp->cc_rounds++] = buf;
2755 2771 ccp->cc_free++;
2756 2772 mutex_exit(&ccp->cc_lock);
2757 2773 return;
2758 2774 }
2759 2775
2760 2776 /*
2761 2777 * The loaded magazine is full. If the previously loaded
2762 2778 * magazine was empty, exchange them and try again.
2763 2779 */
2764 2780 if (ccp->cc_prounds == 0) {
2765 2781 kmem_cpu_reload(ccp, ccp->cc_ploaded, ccp->cc_prounds);
2766 2782 continue;
2767 2783 }
2768 2784
2769 2785 /*
2770 2786 * If the magazine layer is disabled, break out now.
2771 2787 */
2772 2788 if (ccp->cc_magsize == 0)
2773 2789 break;
2774 2790
2775 2791 if (!kmem_cpucache_magazine_alloc(ccp, cp)) {
2776 2792 /*
2777 2793 * We couldn't free our constructed object to the
2778 2794 * magazine layer, so apply its destructor and free it
2779 2795 * to the slab layer.
2780 2796 */
2781 2797 break;
2782 2798 }
2783 2799 }
2784 2800 mutex_exit(&ccp->cc_lock);
2785 2801 kmem_slab_free_constructed(cp, buf, B_TRUE);
2786 2802 }
2787 2803
2788 2804 static void
2789 2805 kmem_slab_prefill(kmem_cache_t *cp, kmem_slab_t *sp)
2790 2806 {
2791 2807 kmem_cpu_cache_t *ccp = KMEM_CPU_CACHE(cp);
2792 2808 int cache_flags = cp->cache_flags;
2793 2809
2794 2810 kmem_bufctl_t *next, *head;
2795 2811 size_t nbufs;
2796 2812
2797 2813 /*
2798 2814 * Completely allocate the newly created slab and put the pre-allocated
2799 2815 * buffers in magazines. Any of the buffers that cannot be put in
2800 2816 * magazines must be returned to the slab.
2801 2817 */
2802 2818 ASSERT(MUTEX_HELD(&cp->cache_lock));
2803 2819 ASSERT((cache_flags & (KMF_PREFILL|KMF_BUFTAG)) == KMF_PREFILL);
2804 2820 ASSERT(cp->cache_constructor == NULL);
2805 2821 ASSERT(sp->slab_cache == cp);
2806 2822 ASSERT(sp->slab_refcnt == 1);
2807 2823 ASSERT(sp->slab_head != NULL && sp->slab_chunks > sp->slab_refcnt);
2808 2824 ASSERT(avl_find(&cp->cache_partial_slabs, sp, NULL) == NULL);
2809 2825
2810 2826 head = sp->slab_head;
2811 2827 nbufs = (sp->slab_chunks - sp->slab_refcnt);
2812 2828 sp->slab_head = NULL;
2813 2829 sp->slab_refcnt += nbufs;
2814 2830 cp->cache_bufslab -= nbufs;
2815 2831 cp->cache_slab_alloc += nbufs;
2816 2832 list_insert_head(&cp->cache_complete_slabs, sp);
2817 2833 cp->cache_complete_slab_count++;
2818 2834 mutex_exit(&cp->cache_lock);
2819 2835 mutex_enter(&ccp->cc_lock);
2820 2836
2821 2837 while (head != NULL) {
2822 2838 void *buf = KMEM_BUF(cp, head);
2823 2839 /*
2824 2840 * If there's a slot available in the current CPU's
2825 2841 * loaded magazine, just put the object there and
2826 2842 * continue.
2827 2843 */
2828 2844 if ((uint_t)ccp->cc_rounds < ccp->cc_magsize) {
2829 2845 ccp->cc_loaded->mag_round[ccp->cc_rounds++] =
2830 2846 buf;
2831 2847 ccp->cc_free++;
2832 2848 nbufs--;
2833 2849 head = head->bc_next;
2834 2850 continue;
2835 2851 }
2836 2852
2837 2853 /*
2838 2854 * The loaded magazine is full. If the previously
2839 2855 * loaded magazine was empty, exchange them and try
2840 2856 * again.
2841 2857 */
2842 2858 if (ccp->cc_prounds == 0) {
2843 2859 kmem_cpu_reload(ccp, ccp->cc_ploaded,
2844 2860 ccp->cc_prounds);
2845 2861 continue;
2846 2862 }
2847 2863
2848 2864 /*
2849 2865 * If the magazine layer is disabled, break out now.
2850 2866 */
2851 2867
2852 2868 if (ccp->cc_magsize == 0) {
2853 2869 break;
2854 2870 }
2855 2871
2856 2872 if (!kmem_cpucache_magazine_alloc(ccp, cp))
2857 2873 break;
2858 2874 }
2859 2875 mutex_exit(&ccp->cc_lock);
2860 2876 if (nbufs != 0) {
2861 2877 ASSERT(head != NULL);
2862 2878
2863 2879 /*
2864 2880 * If there was a failure, return remaining objects to
2865 2881 * the slab
2866 2882 */
2867 2883 while (head != NULL) {
2868 2884 ASSERT(nbufs != 0);
2869 2885 next = head->bc_next;
2870 2886 head->bc_next = NULL;
2871 2887 kmem_slab_free(cp, KMEM_BUF(cp, head));
2872 2888 head = next;
2873 2889 nbufs--;
2874 2890 }
2875 2891 }
2876 2892 ASSERT(head == NULL);
2877 2893 ASSERT(nbufs == 0);
2878 2894 mutex_enter(&cp->cache_lock);
2879 2895 }
2880 2896
2881 2897 void *
2882 2898 kmem_zalloc(size_t size, int kmflag)
2883 2899 {
2884 2900 size_t index;
2885 2901 void *buf;
2886 2902
2887 2903 if ((index = ((size - 1) >> KMEM_ALIGN_SHIFT)) < KMEM_ALLOC_TABLE_MAX) {
2888 2904 kmem_cache_t *cp = kmem_alloc_table[index];
2889 2905 buf = kmem_cache_alloc(cp, kmflag);
2890 2906 if (buf != NULL) {
2891 2907 if ((cp->cache_flags & KMF_BUFTAG) && !KMEM_DUMP(cp)) {
2892 2908 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf);
2893 2909 ((uint8_t *)buf)[size] = KMEM_REDZONE_BYTE;
2894 2910 ((uint32_t *)btp)[1] = KMEM_SIZE_ENCODE(size);
2895 2911
2896 2912 if (cp->cache_flags & KMF_LITE) {
2897 2913 KMEM_BUFTAG_LITE_ENTER(btp,
2898 2914 kmem_lite_count, caller());
2899 2915 }
2900 2916 }
2901 2917 bzero(buf, size);
2902 2918 }
2903 2919 } else {
2904 2920 buf = kmem_alloc(size, kmflag);
2905 2921 if (buf != NULL)
2906 2922 bzero(buf, size);
2907 2923 }
2908 2924 return (buf);
2909 2925 }
2910 2926
2911 2927 void *
2912 2928 kmem_alloc(size_t size, int kmflag)
2913 2929 {
2914 2930 size_t index;
2915 2931 kmem_cache_t *cp;
2916 2932 void *buf;
2917 2933
2918 2934 if ((index = ((size - 1) >> KMEM_ALIGN_SHIFT)) < KMEM_ALLOC_TABLE_MAX) {
2919 2935 cp = kmem_alloc_table[index];
2920 2936 /* fall through to kmem_cache_alloc() */
2921 2937
2922 2938 } else if ((index = ((size - 1) >> KMEM_BIG_SHIFT)) <
2923 2939 kmem_big_alloc_table_max) {
2924 2940 cp = kmem_big_alloc_table[index];
2925 2941 /* fall through to kmem_cache_alloc() */
2926 2942
2927 2943 } else {
2928 2944 if (size == 0)
2929 2945 return (NULL);
2930 2946
2931 2947 buf = vmem_alloc(kmem_oversize_arena, size,
2932 2948 kmflag & KM_VMFLAGS);
2933 2949 if (buf == NULL)
2934 2950 kmem_log_event(kmem_failure_log, NULL, NULL,
2935 2951 (void *)size);
2936 2952 else if (KMEM_DUMP(kmem_slab_cache)) {
2937 2953 /* stats for dump intercept */
2938 2954 kmem_dump_oversize_allocs++;
2939 2955 if (size > kmem_dump_oversize_max)
2940 2956 kmem_dump_oversize_max = size;
2941 2957 }
2942 2958 return (buf);
2943 2959 }
2944 2960
2945 2961 buf = kmem_cache_alloc(cp, kmflag);
2946 2962 if ((cp->cache_flags & KMF_BUFTAG) && !KMEM_DUMP(cp) && buf != NULL) {
2947 2963 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf);
2948 2964 ((uint8_t *)buf)[size] = KMEM_REDZONE_BYTE;
2949 2965 ((uint32_t *)btp)[1] = KMEM_SIZE_ENCODE(size);
2950 2966
2951 2967 if (cp->cache_flags & KMF_LITE) {
2952 2968 KMEM_BUFTAG_LITE_ENTER(btp, kmem_lite_count, caller());
2953 2969 }
2954 2970 }
2955 2971 return (buf);
2956 2972 }
2957 2973
2958 2974 void
2959 2975 kmem_free(void *buf, size_t size)
2960 2976 {
2961 2977 size_t index;
2962 2978 kmem_cache_t *cp;
2963 2979
2964 2980 if ((index = (size - 1) >> KMEM_ALIGN_SHIFT) < KMEM_ALLOC_TABLE_MAX) {
2965 2981 cp = kmem_alloc_table[index];
2966 2982 /* fall through to kmem_cache_free() */
2967 2983
2968 2984 } else if ((index = ((size - 1) >> KMEM_BIG_SHIFT)) <
2969 2985 kmem_big_alloc_table_max) {
2970 2986 cp = kmem_big_alloc_table[index];
2971 2987 /* fall through to kmem_cache_free() */
2972 2988
2973 2989 } else {
2974 2990 EQUIV(buf == NULL, size == 0);
2975 2991 if (buf == NULL && size == 0)
2976 2992 return;
2977 2993 vmem_free(kmem_oversize_arena, buf, size);
2978 2994 return;
2979 2995 }
2980 2996
2981 2997 if ((cp->cache_flags & KMF_BUFTAG) && !KMEM_DUMP(cp)) {
2982 2998 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf);
2983 2999 uint32_t *ip = (uint32_t *)btp;
2984 3000 if (ip[1] != KMEM_SIZE_ENCODE(size)) {
2985 3001 if (*(uint64_t *)buf == KMEM_FREE_PATTERN) {
2986 3002 kmem_error(KMERR_DUPFREE, cp, buf);
2987 3003 return;
2988 3004 }
2989 3005 if (KMEM_SIZE_VALID(ip[1])) {
2990 3006 ip[0] = KMEM_SIZE_ENCODE(size);
2991 3007 kmem_error(KMERR_BADSIZE, cp, buf);
2992 3008 } else {
2993 3009 kmem_error(KMERR_REDZONE, cp, buf);
2994 3010 }
2995 3011 return;
2996 3012 }
2997 3013 if (((uint8_t *)buf)[size] != KMEM_REDZONE_BYTE) {
2998 3014 kmem_error(KMERR_REDZONE, cp, buf);
2999 3015 return;
3000 3016 }
3001 3017 btp->bt_redzone = KMEM_REDZONE_PATTERN;
3002 3018 if (cp->cache_flags & KMF_LITE) {
3003 3019 KMEM_BUFTAG_LITE_ENTER(btp, kmem_lite_count,
3004 3020 caller());
3005 3021 }
3006 3022 }
3007 3023 kmem_cache_free(cp, buf);
3008 3024 }
3009 3025
3010 3026 void *
3011 3027 kmem_firewall_va_alloc(vmem_t *vmp, size_t size, int vmflag)
3012 3028 {
3013 3029 size_t realsize = size + vmp->vm_quantum;
3014 3030 void *addr;
3015 3031
3016 3032 /*
3017 3033 * Annoying edge case: if 'size' is just shy of ULONG_MAX, adding
3018 3034 * vm_quantum will cause integer wraparound. Check for this, and
3019 3035 * blow off the firewall page in this case. Note that such a
3020 3036 * giant allocation (the entire kernel address space) can never
3021 3037 * be satisfied, so it will either fail immediately (VM_NOSLEEP)
3022 3038 * or sleep forever (VM_SLEEP). Thus, there is no need for a
3023 3039 * corresponding check in kmem_firewall_va_free().
3024 3040 */
3025 3041 if (realsize < size)
3026 3042 realsize = size;
3027 3043
3028 3044 /*
3029 3045 * While boot still owns resource management, make sure that this
3030 3046 * redzone virtual address allocation is properly accounted for in
3031 3047 	 * OBP's "virtual-memory" "available" lists because we're
3032 3048 * effectively claiming them for a red zone. If we don't do this,
3033 3049 * the available lists become too fragmented and too large for the
3034 3050 * current boot/kernel memory list interface.
3035 3051 */
3036 3052 addr = vmem_alloc(vmp, realsize, vmflag | VM_NEXTFIT);
3037 3053
3038 3054 if (addr != NULL && kvseg.s_base == NULL && realsize != size)
3039 3055 (void) boot_virt_alloc((char *)addr + size, vmp->vm_quantum);
3040 3056
3041 3057 return (addr);
3042 3058 }
3043 3059
3044 3060 void
3045 3061 kmem_firewall_va_free(vmem_t *vmp, void *addr, size_t size)
3046 3062 {
3047 3063 ASSERT((kvseg.s_base == NULL ?
3048 3064 va_to_pfn((char *)addr + size) :
3049 3065 hat_getpfnum(kas.a_hat, (caddr_t)addr + size)) == PFN_INVALID);
3050 3066
3051 3067 vmem_free(vmp, addr, size + vmp->vm_quantum);
3052 3068 }
3053 3069
3054 3070 /*
3055 3071 * Try to allocate at least `size' bytes of memory without sleeping or
3056 3072 * panicking. Return actual allocated size in `asize'. If allocation failed,
3057 3073 * try final allocation with sleep or panic allowed.
3058 3074 */
3059 3075 void *
3060 3076 kmem_alloc_tryhard(size_t size, size_t *asize, int kmflag)
3061 3077 {
3062 3078 void *p;
3063 3079
3064 3080 *asize = P2ROUNDUP(size, KMEM_ALIGN);
3065 3081 do {
3066 3082 p = kmem_alloc(*asize, (kmflag | KM_NOSLEEP) & ~KM_PANIC);
3067 3083 if (p != NULL)
3068 3084 return (p);
3069 3085 *asize += KMEM_ALIGN;
3070 3086 } while (*asize <= PAGESIZE);
3071 3087
3072 3088 *asize = P2ROUNDUP(size, KMEM_ALIGN);
3073 3089 return (kmem_alloc(*asize, kmflag));
3074 3090 }
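
Editor's note: a brief usage sketch of the contract above (hypothetical caller; KM_SLEEP is the standard sleeping-allocation flag). The key point is that the buffer must later be freed with the actual size returned in asize, which is rounded up to a multiple of KMEM_ALIGN and may exceed the requested size:

	size_t asize;
	void *buf = kmem_alloc_tryhard(100, &asize, KM_SLEEP);

	/* asize >= 100, rounded up to KMEM_ALIGN; pass it to the free */
	kmem_free(buf, asize);
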
3075 3091
3076 3092 /*
3077 3093 * Reclaim all unused memory from a cache.
3078 3094 */
3079 3095 static void
3080 3096 kmem_cache_reap(kmem_cache_t *cp)
3081 3097 {
3082 3098 ASSERT(taskq_member(kmem_taskq, curthread));
3083 3099 cp->cache_reap++;
3084 3100
3085 3101 /*
3086 3102 * Ask the cache's owner to free some memory if possible.
3087 3103 * The idea is to handle things like the inode cache, which
3088 3104 * typically sits on a bunch of memory that it doesn't truly
3089 3105 * *need*. Reclaim policy is entirely up to the owner; this
3090 3106 * callback is just an advisory plea for help.
3091 3107 */
3092 3108 if (cp->cache_reclaim != NULL) {
3093 3109 long delta;
3094 3110
3095 3111 /*
3096 3112 * Reclaimed memory should be reapable (not included in the
3097 3113 * depot's working set).
3098 3114 */
3099 3115 delta = cp->cache_full.ml_total;
3100 3116 cp->cache_reclaim(cp->cache_private);
3101 3117 delta = cp->cache_full.ml_total - delta;
3102 3118 if (delta > 0) {
3103 3119 mutex_enter(&cp->cache_depot_lock);
3104 3120 cp->cache_full.ml_reaplimit += delta;
3105 3121 cp->cache_full.ml_min += delta;
3106 3122 mutex_exit(&cp->cache_depot_lock);
3107 3123 }
3108 3124 }
3109 3125
3110 3126 kmem_depot_ws_reap(cp);
3111 3127
3112 3128 if (cp->cache_defrag != NULL && !kmem_move_noreap) {
3113 3129 kmem_cache_defrag(cp);
3114 3130 }
3115 3131 }
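
Editor's note, with hypothetical numbers, to illustrate the delta accounting above: suppose the full-magazine list has ml_total = 10 with ml_reaplimit = ml_min = 2 when the reclaim callback runs, and the callback frees enough objects that 5 additional full magazines land in the depot. Then delta = 15 - 10 = 5, both ml_reaplimit and ml_min are bumped to 7, and the kmem_depot_ws_reap() that follows may take up to MIN(7, 7) = 7 magazines (mirroring the MIN(ml_reaplimit, ml_min) computation that also appears in kmem_cache_kstat_update() below). The memory the owner just gave back is therefore reapable immediately rather than waiting for the next working-set update.
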
3116 3132
3117 3133 static void
3118 3134 kmem_reap_timeout(void *flag_arg)
3119 3135 {
3120 3136 uint32_t *flag = (uint32_t *)flag_arg;
3121 3137
3122 3138 ASSERT(flag == &kmem_reaping || flag == &kmem_reaping_idspace);
3123 3139 *flag = 0;
3124 3140 }
3125 3141
3126 3142 static void
3127 3143 kmem_reap_done(void *flag)
3128 3144 {
3129 3145 if (!callout_init_done) {
3130 3146 /* can't schedule a timeout at this point */
3131 3147 kmem_reap_timeout(flag);
3132 3148 } else {
3133 3149 (void) timeout(kmem_reap_timeout, flag, kmem_reap_interval);
3134 3150 }
3135 3151 }
3136 3152
3137 3153 static void
3138 3154 kmem_reap_start(void *flag)
3139 3155 {
3140 3156 ASSERT(flag == &kmem_reaping || flag == &kmem_reaping_idspace);
3141 3157
3142 3158 if (flag == &kmem_reaping) {
3143 3159 kmem_cache_applyall(kmem_cache_reap, kmem_taskq, TQ_NOSLEEP);
3144 3160 /*
3145 3161 * if we have segkp under heap, reap segkp cache.
3146 3162 */
3147 3163 if (segkp_fromheap)
3148 3164 segkp_cache_free();
3149 3165 }
3150 3166 else
3151 3167 kmem_cache_applyall_id(kmem_cache_reap, kmem_taskq, TQ_NOSLEEP);
3152 3168
3153 3169 /*
3154 3170 * We use taskq_dispatch() to schedule a timeout to clear
3155 3171 * the flag so that kmem_reap() becomes self-throttling:
3156 3172 * we won't reap again until the current reap completes *and*
3157 3173 * at least kmem_reap_interval ticks have elapsed.
3158 3174 */
3159 3175 if (!taskq_dispatch(kmem_taskq, kmem_reap_done, flag, TQ_NOSLEEP))
3160 3176 kmem_reap_done(flag);
3161 3177 }
3162 3178
3163 3179 static void
3164 3180 kmem_reap_common(void *flag_arg)
3165 3181 {
3166 3182 uint32_t *flag = (uint32_t *)flag_arg;
3167 3183
3168 3184 if (MUTEX_HELD(&kmem_cache_lock) || kmem_taskq == NULL ||
3169 3185 atomic_cas_32(flag, 0, 1) != 0)
3170 3186 return;
3171 3187
3172 3188 /*
3173 3189 * It may not be kosher to do memory allocation when a reap is called
3174 3190 * (for example, if vmem_populate() is in the call chain). So we
3175 3191 * start the reap going with a TQ_NOALLOC dispatch. If the dispatch
3176 3192 * fails, we reset the flag, and the next reap will try again.
3177 3193 */
3178 3194 if (!taskq_dispatch(kmem_taskq, kmem_reap_start, flag, TQ_NOALLOC))
3179 3195 *flag = 0;
3180 3196 }
3181 3197
3182 3198 /*
3183 3199 * Reclaim all unused memory from all caches. Called from the VM system
3184 3200 * when memory gets tight.
3185 3201 */
3186 3202 void
3187 3203 kmem_reap(void)
3188 3204 {
3189 3205 kmem_reap_common(&kmem_reaping);
3190 3206 }
3191 3207
3192 3208 /*
3193 3209 * Reclaim all unused memory from identifier arenas, called when a vmem
3194 3210  * arena not backed by memory is exhausted. Since reaping memory-backed caches
3195 3211 * cannot help with identifier exhaustion, we avoid both a large amount of
3196 3212 * work and unwanted side-effects from reclaim callbacks.
3197 3213 */
3198 3214 void
3199 3215 kmem_reap_idspace(void)
3200 3216 {
3201 3217 kmem_reap_common(&kmem_reaping_idspace);
3202 3218 }
3203 3219
3204 3220 /*
3205 3221 * Purge all magazines from a cache and set its magazine limit to zero.
3206 3222 * All calls are serialized by the kmem_taskq lock, except for the final
3207 3223 * call from kmem_cache_destroy().
3208 3224 */
3209 3225 static void
3210 3226 kmem_cache_magazine_purge(kmem_cache_t *cp)
3211 3227 {
3212 3228 kmem_cpu_cache_t *ccp;
3213 3229 kmem_magazine_t *mp, *pmp;
3214 3230 int rounds, prounds, cpu_seqid;
3215 3231
3216 3232 ASSERT(!list_link_active(&cp->cache_link) ||
3217 3233 taskq_member(kmem_taskq, curthread));
3218 3234 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock));
3219 3235
3220 3236 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) {
3221 3237 ccp = &cp->cache_cpu[cpu_seqid];
3222 3238
3223 3239 mutex_enter(&ccp->cc_lock);
3224 3240 mp = ccp->cc_loaded;
3225 3241 pmp = ccp->cc_ploaded;
3226 3242 rounds = ccp->cc_rounds;
3227 3243 prounds = ccp->cc_prounds;
3228 3244 ccp->cc_loaded = NULL;
3229 3245 ccp->cc_ploaded = NULL;
3230 3246 ccp->cc_rounds = -1;
3231 3247 ccp->cc_prounds = -1;
3232 3248 ccp->cc_magsize = 0;
3233 3249 mutex_exit(&ccp->cc_lock);
3234 3250
3235 3251 if (mp)
3236 3252 kmem_magazine_destroy(cp, mp, rounds);
3237 3253 if (pmp)
3238 3254 kmem_magazine_destroy(cp, pmp, prounds);
3239 3255 }
3240 3256
3241 - /*
3242 - * Updating the working set statistics twice in a row has the
3243 - * effect of setting the working set size to zero, so everything
3244 - * is eligible for reaping.
3245 - */
3246 - kmem_depot_ws_update(cp);
3247 - kmem_depot_ws_update(cp);
3248 -
3257 + kmem_depot_ws_zero(cp);
3249 3258 kmem_depot_ws_reap(cp);
3250 3259 }
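
Editor's note for reviewers reading only this hunk: kmem_depot_ws_zero() is the new helper introduced by this change, defined earlier in the file (outside the lines shown here). A minimal sketch of what it presumably does, based on the ml_reaplimit/ml_min/ml_total working-set fields that kmem_depot_ws_update() maintains, would be:

static void
kmem_depot_ws_zero(kmem_cache_t *cp)
{
	mutex_enter(&cp->cache_depot_lock);
	/* Make every magazine in the depot appear outside the working set. */
	cp->cache_full.ml_reaplimit = cp->cache_full.ml_total;
	cp->cache_full.ml_min = cp->cache_full.ml_total;
	cp->cache_empty.ml_reaplimit = cp->cache_empty.ml_total;
	cp->cache_empty.ml_min = cp->cache_empty.ml_total;
	mutex_exit(&cp->cache_depot_lock);
}

With both fields pinned to ml_total, the reap count derived from MIN(ml_reaplimit, ml_min) covers the entire depot, which is the same state the removed back-to-back kmem_depot_ws_update() calls used to reach.
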
3251 3260
3252 3261 /*
3253 3262 * Enable per-cpu magazines on a cache.
3254 3263 */
3255 3264 static void
3256 3265 kmem_cache_magazine_enable(kmem_cache_t *cp)
3257 3266 {
3258 3267 int cpu_seqid;
3259 3268
3260 3269 if (cp->cache_flags & KMF_NOMAGAZINE)
3261 3270 return;
3262 3271
3263 3272 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) {
3264 3273 kmem_cpu_cache_t *ccp = &cp->cache_cpu[cpu_seqid];
3265 3274 mutex_enter(&ccp->cc_lock);
3266 3275 ccp->cc_magsize = cp->cache_magtype->mt_magsize;
3267 3276 mutex_exit(&ccp->cc_lock);
3268 3277 }
3269 3278
3270 3279 }
3271 3280
3272 3281 /*
3273 - * Reap (almost) everything right now. See kmem_cache_magazine_purge()
3274 - * for explanation of the back-to-back kmem_depot_ws_update() calls.
3282 + * Reap (almost) everything right now.
3275 3283 */
3276 3284 void
3277 3285 kmem_cache_reap_now(kmem_cache_t *cp)
3278 3286 {
3279 3287 ASSERT(list_link_active(&cp->cache_link));
3280 3288
3281 - kmem_depot_ws_update(cp);
3282 - kmem_depot_ws_update(cp);
3289 + kmem_depot_ws_zero(cp);
3283 3290
3284 3291 (void) taskq_dispatch(kmem_taskq,
3285 3292 (task_func_t *)kmem_depot_ws_reap, cp, TQ_SLEEP);
3286 3293 taskq_wait(kmem_taskq);
3287 3294 }
3288 3295
3289 3296 /*
3290 3297 * Recompute a cache's magazine size. The trade-off is that larger magazines
3291 3298 * provide a higher transfer rate with the depot, while smaller magazines
3292 3299 * reduce memory consumption. Magazine resizing is an expensive operation;
3293 3300 * it should not be done frequently.
3294 3301 *
3295 3302 * Changes to the magazine size are serialized by the kmem_taskq lock.
3296 3303 *
3297 3304 * Note: at present this only grows the magazine size. It might be useful
3298 3305 * to allow shrinkage too.
3299 3306 */
3300 3307 static void
3301 3308 kmem_cache_magazine_resize(kmem_cache_t *cp)
3302 3309 {
3303 3310 kmem_magtype_t *mtp = cp->cache_magtype;
3304 3311
3305 3312 ASSERT(taskq_member(kmem_taskq, curthread));
3306 3313
3307 3314 if (cp->cache_chunksize < mtp->mt_maxbuf) {
3308 3315 kmem_cache_magazine_purge(cp);
3309 3316 mutex_enter(&cp->cache_depot_lock);
3310 3317 cp->cache_magtype = ++mtp;
3311 3318 cp->cache_depot_contention_prev =
3312 3319 cp->cache_depot_contention + INT_MAX;
3313 3320 mutex_exit(&cp->cache_depot_lock);
3314 3321 kmem_cache_magazine_enable(cp);
3315 3322 }
3316 3323 }
3317 3324
3318 3325 /*
3319 3326 * Rescale a cache's hash table, so that the table size is roughly the
3320 3327 * cache size. We want the average lookup time to be extremely small.
3321 3328 */
3322 3329 static void
3323 3330 kmem_hash_rescale(kmem_cache_t *cp)
3324 3331 {
3325 3332 kmem_bufctl_t **old_table, **new_table, *bcp;
3326 3333 size_t old_size, new_size, h;
3327 3334
3328 3335 ASSERT(taskq_member(kmem_taskq, curthread));
3329 3336
3330 3337 new_size = MAX(KMEM_HASH_INITIAL,
3331 3338 1 << (highbit(3 * cp->cache_buftotal + 4) - 2));
3332 3339 old_size = cp->cache_hash_mask + 1;
3333 3340
3334 3341 if ((old_size >> 1) <= new_size && new_size <= (old_size << 1))
3335 3342 return;
3336 3343
3337 3344 new_table = vmem_alloc(kmem_hash_arena, new_size * sizeof (void *),
3338 3345 VM_NOSLEEP);
3339 3346 if (new_table == NULL)
3340 3347 return;
3341 3348 bzero(new_table, new_size * sizeof (void *));
3342 3349
3343 3350 mutex_enter(&cp->cache_lock);
3344 3351
3345 3352 old_size = cp->cache_hash_mask + 1;
3346 3353 old_table = cp->cache_hash_table;
3347 3354
3348 3355 cp->cache_hash_mask = new_size - 1;
3349 3356 cp->cache_hash_table = new_table;
3350 3357 cp->cache_rescale++;
3351 3358
3352 3359 for (h = 0; h < old_size; h++) {
3353 3360 bcp = old_table[h];
3354 3361 while (bcp != NULL) {
3355 3362 void *addr = bcp->bc_addr;
3356 3363 kmem_bufctl_t *next_bcp = bcp->bc_next;
3357 3364 kmem_bufctl_t **hash_bucket = KMEM_HASH(cp, addr);
3358 3365 bcp->bc_next = *hash_bucket;
3359 3366 *hash_bucket = bcp;
3360 3367 bcp = next_bcp;
3361 3368 }
3362 3369 }
3363 3370
3364 3371 mutex_exit(&cp->cache_lock);
3365 3372
3366 3373 vmem_free(kmem_hash_arena, old_table, old_size * sizeof (void *));
3367 3374 }
3368 3375
3369 3376 /*
3370 3377 * Perform periodic maintenance on a cache: hash rescaling, depot working-set
3371 3378 * update, magazine resizing, and slab consolidation.
3372 3379 */
3373 3380 static void
3374 3381 kmem_cache_update(kmem_cache_t *cp)
3375 3382 {
3376 3383 int need_hash_rescale = 0;
3377 3384 int need_magazine_resize = 0;
3378 3385
3379 3386 ASSERT(MUTEX_HELD(&kmem_cache_lock));
3380 3387
3381 3388 /*
3382 3389 * If the cache has become much larger or smaller than its hash table,
3383 3390 * fire off a request to rescale the hash table.
3384 3391 */
3385 3392 mutex_enter(&cp->cache_lock);
3386 3393
3387 3394 if ((cp->cache_flags & KMF_HASH) &&
3388 3395 (cp->cache_buftotal > (cp->cache_hash_mask << 1) ||
3389 3396 (cp->cache_buftotal < (cp->cache_hash_mask >> 1) &&
3390 3397 cp->cache_hash_mask > KMEM_HASH_INITIAL)))
3391 3398 need_hash_rescale = 1;
3392 3399
3393 3400 mutex_exit(&cp->cache_lock);
3394 3401
3395 3402 /*
3396 3403 * Update the depot working set statistics.
3397 3404 */
3398 3405 kmem_depot_ws_update(cp);
3399 3406
3400 3407 /*
3401 3408 * If there's a lot of contention in the depot,
3402 3409 * increase the magazine size.
3403 3410 */
3404 3411 mutex_enter(&cp->cache_depot_lock);
3405 3412
3406 3413 if (cp->cache_chunksize < cp->cache_magtype->mt_maxbuf &&
3407 3414 (int)(cp->cache_depot_contention -
3408 3415 cp->cache_depot_contention_prev) > kmem_depot_contention)
3409 3416 need_magazine_resize = 1;
3410 3417
3411 3418 cp->cache_depot_contention_prev = cp->cache_depot_contention;
3412 3419
3413 3420 mutex_exit(&cp->cache_depot_lock);
3414 3421
3415 3422 if (need_hash_rescale)
3416 3423 (void) taskq_dispatch(kmem_taskq,
3417 3424 (task_func_t *)kmem_hash_rescale, cp, TQ_NOSLEEP);
3418 3425
3419 3426 if (need_magazine_resize)
3420 3427 (void) taskq_dispatch(kmem_taskq,
3421 3428 (task_func_t *)kmem_cache_magazine_resize, cp, TQ_NOSLEEP);
3422 3429
3423 3430 if (cp->cache_defrag != NULL)
3424 3431 (void) taskq_dispatch(kmem_taskq,
3425 3432 (task_func_t *)kmem_cache_scan, cp, TQ_NOSLEEP);
3426 3433 }
3427 3434
3428 3435 static void kmem_update(void *);
3429 3436
3430 3437 static void
3431 3438 kmem_update_timeout(void *dummy)
3432 3439 {
3433 3440 (void) timeout(kmem_update, dummy, kmem_reap_interval);
3434 3441 }
3435 3442
3436 3443 static void
3437 3444 kmem_update(void *dummy)
3438 3445 {
3439 3446 kmem_cache_applyall(kmem_cache_update, NULL, TQ_NOSLEEP);
3440 3447
3441 3448 /*
3442 3449 * We use taskq_dispatch() to reschedule the timeout so that
3443 3450 * kmem_update() becomes self-throttling: it won't schedule
3444 3451 * new tasks until all previous tasks have completed.
3445 3452 */
3446 3453 if (!taskq_dispatch(kmem_taskq, kmem_update_timeout, dummy, TQ_NOSLEEP))
3447 3454 kmem_update_timeout(NULL);
3448 3455 }
3449 3456
3450 3457 static int
3451 3458 kmem_cache_kstat_update(kstat_t *ksp, int rw)
3452 3459 {
3453 3460 struct kmem_cache_kstat *kmcp = &kmem_cache_kstat;
3454 3461 kmem_cache_t *cp = ksp->ks_private;
3455 3462 uint64_t cpu_buf_avail;
3456 3463 uint64_t buf_avail = 0;
3457 3464 int cpu_seqid;
3458 3465 long reap;
3459 3466
3460 3467 ASSERT(MUTEX_HELD(&kmem_cache_kstat_lock));
3461 3468
3462 3469 if (rw == KSTAT_WRITE)
3463 3470 return (EACCES);
3464 3471
3465 3472 mutex_enter(&cp->cache_lock);
3466 3473
3467 3474 kmcp->kmc_alloc_fail.value.ui64 = cp->cache_alloc_fail;
3468 3475 kmcp->kmc_alloc.value.ui64 = cp->cache_slab_alloc;
3469 3476 kmcp->kmc_free.value.ui64 = cp->cache_slab_free;
3470 3477 kmcp->kmc_slab_alloc.value.ui64 = cp->cache_slab_alloc;
3471 3478 kmcp->kmc_slab_free.value.ui64 = cp->cache_slab_free;
3472 3479
3473 3480 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) {
3474 3481 kmem_cpu_cache_t *ccp = &cp->cache_cpu[cpu_seqid];
3475 3482
3476 3483 mutex_enter(&ccp->cc_lock);
3477 3484
3478 3485 cpu_buf_avail = 0;
3479 3486 if (ccp->cc_rounds > 0)
3480 3487 cpu_buf_avail += ccp->cc_rounds;
3481 3488 if (ccp->cc_prounds > 0)
3482 3489 cpu_buf_avail += ccp->cc_prounds;
3483 3490
3484 3491 kmcp->kmc_alloc.value.ui64 += ccp->cc_alloc;
3485 3492 kmcp->kmc_free.value.ui64 += ccp->cc_free;
3486 3493 buf_avail += cpu_buf_avail;
3487 3494
3488 3495 mutex_exit(&ccp->cc_lock);
3489 3496 }
3490 3497
3491 3498 mutex_enter(&cp->cache_depot_lock);
3492 3499
3493 3500 kmcp->kmc_depot_alloc.value.ui64 = cp->cache_full.ml_alloc;
3494 3501 kmcp->kmc_depot_free.value.ui64 = cp->cache_empty.ml_alloc;
3495 3502 kmcp->kmc_depot_contention.value.ui64 = cp->cache_depot_contention;
3496 3503 kmcp->kmc_full_magazines.value.ui64 = cp->cache_full.ml_total;
3497 3504 kmcp->kmc_empty_magazines.value.ui64 = cp->cache_empty.ml_total;
3498 3505 kmcp->kmc_magazine_size.value.ui64 =
3499 3506 (cp->cache_flags & KMF_NOMAGAZINE) ?
3500 3507 0 : cp->cache_magtype->mt_magsize;
3501 3508
3502 3509 kmcp->kmc_alloc.value.ui64 += cp->cache_full.ml_alloc;
3503 3510 kmcp->kmc_free.value.ui64 += cp->cache_empty.ml_alloc;
3504 3511 buf_avail += cp->cache_full.ml_total * cp->cache_magtype->mt_magsize;
3505 3512
3506 3513 reap = MIN(cp->cache_full.ml_reaplimit, cp->cache_full.ml_min);
3507 3514 reap = MIN(reap, cp->cache_full.ml_total);
3508 3515
3509 3516 mutex_exit(&cp->cache_depot_lock);
3510 3517
3511 3518 kmcp->kmc_buf_size.value.ui64 = cp->cache_bufsize;
3512 3519 kmcp->kmc_align.value.ui64 = cp->cache_align;
3513 3520 kmcp->kmc_chunk_size.value.ui64 = cp->cache_chunksize;
3514 3521 kmcp->kmc_slab_size.value.ui64 = cp->cache_slabsize;
3515 3522 kmcp->kmc_buf_constructed.value.ui64 = buf_avail;
3516 3523 buf_avail += cp->cache_bufslab;
3517 3524 kmcp->kmc_buf_avail.value.ui64 = buf_avail;
3518 3525 kmcp->kmc_buf_inuse.value.ui64 = cp->cache_buftotal - buf_avail;
3519 3526 kmcp->kmc_buf_total.value.ui64 = cp->cache_buftotal;
3520 3527 kmcp->kmc_buf_max.value.ui64 = cp->cache_bufmax;
3521 3528 kmcp->kmc_slab_create.value.ui64 = cp->cache_slab_create;
3522 3529 kmcp->kmc_slab_destroy.value.ui64 = cp->cache_slab_destroy;
3523 3530 kmcp->kmc_hash_size.value.ui64 = (cp->cache_flags & KMF_HASH) ?
3524 3531 cp->cache_hash_mask + 1 : 0;
3525 3532 kmcp->kmc_hash_lookup_depth.value.ui64 = cp->cache_lookup_depth;
3526 3533 kmcp->kmc_hash_rescale.value.ui64 = cp->cache_rescale;
3527 3534 kmcp->kmc_vmem_source.value.ui64 = cp->cache_arena->vm_id;
3528 3535 kmcp->kmc_reap.value.ui64 = cp->cache_reap;
3529 3536
3530 3537 if (cp->cache_defrag == NULL) {
3531 3538 kmcp->kmc_move_callbacks.value.ui64 = 0;
3532 3539 kmcp->kmc_move_yes.value.ui64 = 0;
3533 3540 kmcp->kmc_move_no.value.ui64 = 0;
3534 3541 kmcp->kmc_move_later.value.ui64 = 0;
3535 3542 kmcp->kmc_move_dont_need.value.ui64 = 0;
3536 3543 kmcp->kmc_move_dont_know.value.ui64 = 0;
3537 3544 kmcp->kmc_move_hunt_found.value.ui64 = 0;
3538 3545 kmcp->kmc_move_slabs_freed.value.ui64 = 0;
3539 3546 kmcp->kmc_defrag.value.ui64 = 0;
3540 3547 kmcp->kmc_scan.value.ui64 = 0;
3541 3548 kmcp->kmc_move_reclaimable.value.ui64 = 0;
3542 3549 } else {
3543 3550 int64_t reclaimable;
3544 3551
3545 3552 kmem_defrag_t *kd = cp->cache_defrag;
3546 3553 kmcp->kmc_move_callbacks.value.ui64 = kd->kmd_callbacks;
3547 3554 kmcp->kmc_move_yes.value.ui64 = kd->kmd_yes;
3548 3555 kmcp->kmc_move_no.value.ui64 = kd->kmd_no;
3549 3556 kmcp->kmc_move_later.value.ui64 = kd->kmd_later;
3550 3557 kmcp->kmc_move_dont_need.value.ui64 = kd->kmd_dont_need;
3551 3558 kmcp->kmc_move_dont_know.value.ui64 = kd->kmd_dont_know;
3552 3559 kmcp->kmc_move_hunt_found.value.ui64 = kd->kmd_hunt_found;
3553 3560 kmcp->kmc_move_slabs_freed.value.ui64 = kd->kmd_slabs_freed;
3554 3561 kmcp->kmc_defrag.value.ui64 = kd->kmd_defrags;
3555 3562 kmcp->kmc_scan.value.ui64 = kd->kmd_scans;
3556 3563
3557 3564 reclaimable = cp->cache_bufslab - (cp->cache_maxchunks - 1);
3558 3565 reclaimable = MAX(reclaimable, 0);
3559 3566 reclaimable += ((uint64_t)reap * cp->cache_magtype->mt_magsize);
3560 3567 kmcp->kmc_move_reclaimable.value.ui64 = reclaimable;
3561 3568 }
3562 3569
3563 3570 mutex_exit(&cp->cache_lock);
3564 3571 return (0);
3565 3572 }
3566 3573
3567 3574 /*
3568 3575 * Return a named statistic about a particular cache.
3569 3576 * This shouldn't be called very often, so it's currently designed for
3570 3577 * simplicity (leverages existing kstat support) rather than efficiency.
3571 3578 */
3572 3579 uint64_t
3573 3580 kmem_cache_stat(kmem_cache_t *cp, char *name)
3574 3581 {
3575 3582 int i;
3576 3583 kstat_t *ksp = cp->cache_kstat;
3577 3584 kstat_named_t *knp = (kstat_named_t *)&kmem_cache_kstat;
3578 3585 uint64_t value = 0;
3579 3586
3580 3587 if (ksp != NULL) {
3581 3588 mutex_enter(&kmem_cache_kstat_lock);
3582 3589 (void) kmem_cache_kstat_update(ksp, KSTAT_READ);
3583 3590 for (i = 0; i < ksp->ks_ndata; i++) {
3584 3591 if (strcmp(knp[i].name, name) == 0) {
3585 3592 value = knp[i].value.ui64;
3586 3593 break;
3587 3594 }
3588 3595 }
3589 3596 mutex_exit(&kmem_cache_kstat_lock);
3590 3597 }
3591 3598 return (value);
3592 3599 }
3593 3600
3594 3601 /*
3595 3602 * Return an estimate of currently available kernel heap memory.
3596 3603 * On 32-bit systems, physical memory may exceed virtual memory,
3597 3604  * so we just truncate the result at 1GB.
3598 3605 */
3599 3606 size_t
3600 3607 kmem_avail(void)
3601 3608 {
3602 3609 spgcnt_t rmem = availrmem - tune.t_minarmem;
3603 3610 spgcnt_t fmem = freemem - minfree;
3604 3611
3605 3612 return ((size_t)ptob(MIN(MAX(MIN(rmem, fmem), 0),
3606 3613 1 << (30 - PAGESHIFT))));
3607 3614 }
3608 3615
3609 3616 /*
3610 3617 * Return the maximum amount of memory that is (in theory) allocatable
3611 3618 * from the heap. This may be used as an estimate only since there
3612 3619  * is no guarantee this space will still be available when an allocation
3613 3620  * request is made, nor that the space can be allocated in one big request
3614 3621 * due to kernel heap fragmentation.
3615 3622 */
3616 3623 size_t
3617 3624 kmem_maxavail(void)
3618 3625 {
3619 3626 spgcnt_t pmem = availrmem - tune.t_minarmem;
3620 3627 spgcnt_t vmem = btop(vmem_size(heap_arena, VMEM_FREE));
3621 3628
3622 3629 return ((size_t)ptob(MAX(MIN(pmem, vmem), 0)));
3623 3630 }
3624 3631
3625 3632 /*
3626 3633 * Indicate whether memory-intensive kmem debugging is enabled.
3627 3634 */
3628 3635 int
3629 3636 kmem_debugging(void)
3630 3637 {
3631 3638 return (kmem_flags & (KMF_AUDIT | KMF_REDZONE));
3632 3639 }
3633 3640
3634 3641 /* binning function, sorts finely at the two extremes */
3635 3642 #define KMEM_PARTIAL_SLAB_WEIGHT(sp, binshift) \
3636 3643 ((((sp)->slab_refcnt <= (binshift)) || \
3637 3644 (((sp)->slab_chunks - (sp)->slab_refcnt) <= (binshift))) \
3638 3645 ? -(sp)->slab_refcnt \
3639 3646 : -((binshift) + ((sp)->slab_refcnt >> (binshift))))
3640 3647
3641 3648 /*
3642 3649 * Minimizing the number of partial slabs on the freelist minimizes
3643 3650 * fragmentation (the ratio of unused buffers held by the slab layer). There are
3644 3651 * two ways to get a slab off of the freelist: 1) free all the buffers on the
3645 3652 * slab, and 2) allocate all the buffers on the slab. It follows that we want
3646 3653 * the most-used slabs at the front of the list where they have the best chance
3647 3654 * of being completely allocated, and the least-used slabs at a safe distance
3648 3655 * from the front to improve the odds that the few remaining buffers will all be
3649 3656 * freed before another allocation can tie up the slab. For that reason a slab
3650 3657  * with a higher slab_refcnt sorts less than a slab with a lower
3651 3658 * slab_refcnt.
3652 3659 *
3653 3660 * However, if a slab has at least one buffer that is deemed unfreeable, we
3654 3661 * would rather have that slab at the front of the list regardless of
3655 3662 * slab_refcnt, since even one unfreeable buffer makes the entire slab
3656 3663 * unfreeable. If the client returns KMEM_CBRC_NO in response to a cache_move()
3657 3664 * callback, the slab is marked unfreeable for as long as it remains on the
3658 3665 * freelist.
3659 3666 */
3660 3667 static int
3661 3668 kmem_partial_slab_cmp(const void *p0, const void *p1)
3662 3669 {
3663 3670 const kmem_cache_t *cp;
3664 3671 const kmem_slab_t *s0 = p0;
3665 3672 const kmem_slab_t *s1 = p1;
3666 3673 int w0, w1;
3667 3674 size_t binshift;
3668 3675
3669 3676 ASSERT(KMEM_SLAB_IS_PARTIAL(s0));
3670 3677 ASSERT(KMEM_SLAB_IS_PARTIAL(s1));
3671 3678 ASSERT(s0->slab_cache == s1->slab_cache);
3672 3679 cp = s1->slab_cache;
3673 3680 ASSERT(MUTEX_HELD(&cp->cache_lock));
3674 3681 binshift = cp->cache_partial_binshift;
3675 3682
3676 3683 /* weight of first slab */
3677 3684 w0 = KMEM_PARTIAL_SLAB_WEIGHT(s0, binshift);
3678 3685 if (s0->slab_flags & KMEM_SLAB_NOMOVE) {
3679 3686 w0 -= cp->cache_maxchunks;
3680 3687 }
3681 3688
3682 3689 /* weight of second slab */
3683 3690 w1 = KMEM_PARTIAL_SLAB_WEIGHT(s1, binshift);
3684 3691 if (s1->slab_flags & KMEM_SLAB_NOMOVE) {
3685 3692 w1 -= cp->cache_maxchunks;
3686 3693 }
3687 3694
3688 3695 if (w0 < w1)
3689 3696 return (-1);
3690 3697 if (w0 > w1)
3691 3698 return (1);
3692 3699
3693 3700 /* compare pointer values */
3694 3701 if ((uintptr_t)s0 < (uintptr_t)s1)
3695 3702 return (-1);
3696 3703 if ((uintptr_t)s0 > (uintptr_t)s1)
3697 3704 return (1);
3698 3705
3699 3706 return (0);
3700 3707 }
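
Editor's note: a worked example may help here (hypothetical slab counts; assume cache_maxchunks = 64, so cache_partial_binshift = highbit(64 / 16) + 1 = 4, and no slab is marked KMEM_SLAB_NOMOVE):

	refcnt 60 of 64:  64 - 60 <= 4, so weight = -60          (nearly full)
	refcnt 30 of 64:  weight = -(4 + (30 >> 4)) = -5
	refcnt  2 of 64:  2 <= 4, so weight = -2                  (nearly empty)

Since kmem_partial_slab_cmp() sorts by ascending weight, the nearly full slab ends up at the front of the freelist and the nearly empty one toward the back, matching the intent described in the block comment above.
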
3701 3708
3702 3709 /*
3703 3710 * It must be valid to call the destructor (if any) on a newly created object.
3704 3711 * That is, the constructor (if any) must leave the object in a valid state for
3705 3712 * the destructor.
3706 3713 */
3707 3714 kmem_cache_t *
3708 3715 kmem_cache_create(
3709 3716 char *name, /* descriptive name for this cache */
3710 3717 size_t bufsize, /* size of the objects it manages */
3711 3718 size_t align, /* required object alignment */
3712 3719 int (*constructor)(void *, void *, int), /* object constructor */
3713 3720 void (*destructor)(void *, void *), /* object destructor */
3714 3721 void (*reclaim)(void *), /* memory reclaim callback */
3715 3722 void *private, /* pass-thru arg for constr/destr/reclaim */
3716 3723 vmem_t *vmp, /* vmem source for slab allocation */
3717 3724 int cflags) /* cache creation flags */
3718 3725 {
3719 3726 int cpu_seqid;
3720 3727 size_t chunksize;
3721 3728 kmem_cache_t *cp;
3722 3729 kmem_magtype_t *mtp;
3723 3730 size_t csize = KMEM_CACHE_SIZE(max_ncpus);
3724 3731
3725 3732 #ifdef DEBUG
3726 3733 /*
3727 3734 * Cache names should conform to the rules for valid C identifiers
3728 3735 */
3729 3736 if (!strident_valid(name)) {
3730 3737 cmn_err(CE_CONT,
3731 3738 "kmem_cache_create: '%s' is an invalid cache name\n"
3732 3739 "cache names must conform to the rules for "
3733 3740 "C identifiers\n", name);
3734 3741 }
3735 3742 #endif /* DEBUG */
3736 3743
3737 3744 if (vmp == NULL)
3738 3745 vmp = kmem_default_arena;
3739 3746
3740 3747 /*
3741 3748 * If this kmem cache has an identifier vmem arena as its source, mark
3742 3749 * it such to allow kmem_reap_idspace().
3743 3750 */
3744 3751 ASSERT(!(cflags & KMC_IDENTIFIER)); /* consumer should not set this */
3745 3752 if (vmp->vm_cflags & VMC_IDENTIFIER)
3746 3753 cflags |= KMC_IDENTIFIER;
3747 3754
3748 3755 /*
3749 3756 * Get a kmem_cache structure. We arrange that cp->cache_cpu[]
3750 3757 * is aligned on a KMEM_CPU_CACHE_SIZE boundary to prevent
3751 3758 * false sharing of per-CPU data.
3752 3759 */
3753 3760 cp = vmem_xalloc(kmem_cache_arena, csize, KMEM_CPU_CACHE_SIZE,
3754 3761 P2NPHASE(csize, KMEM_CPU_CACHE_SIZE), 0, NULL, NULL, VM_SLEEP);
3755 3762 bzero(cp, csize);
3756 3763 list_link_init(&cp->cache_link);
3757 3764
3758 3765 if (align == 0)
3759 3766 align = KMEM_ALIGN;
3760 3767
3761 3768 /*
3762 3769 * If we're not at least KMEM_ALIGN aligned, we can't use free
3763 3770 * memory to hold bufctl information (because we can't safely
3764 3771 * perform word loads and stores on it).
3765 3772 */
3766 3773 if (align < KMEM_ALIGN)
3767 3774 cflags |= KMC_NOTOUCH;
3768 3775
3769 3776 if (!ISP2(align) || align > vmp->vm_quantum)
3770 3777 panic("kmem_cache_create: bad alignment %lu", align);
3771 3778
3772 3779 mutex_enter(&kmem_flags_lock);
3773 3780 if (kmem_flags & KMF_RANDOMIZE)
3774 3781 kmem_flags = (((kmem_flags | ~KMF_RANDOM) + 1) & KMF_RANDOM) |
3775 3782 KMF_RANDOMIZE;
3776 3783 cp->cache_flags = (kmem_flags | cflags) & KMF_DEBUG;
3777 3784 mutex_exit(&kmem_flags_lock);
3778 3785
3779 3786 /*
3780 3787 * Make sure all the various flags are reasonable.
3781 3788 */
3782 3789 ASSERT(!(cflags & KMC_NOHASH) || !(cflags & KMC_NOTOUCH));
3783 3790
3784 3791 if (cp->cache_flags & KMF_LITE) {
3785 3792 if (bufsize >= kmem_lite_minsize &&
3786 3793 align <= kmem_lite_maxalign &&
3787 3794 P2PHASE(bufsize, kmem_lite_maxalign) != 0) {
3788 3795 cp->cache_flags |= KMF_BUFTAG;
3789 3796 cp->cache_flags &= ~(KMF_AUDIT | KMF_FIREWALL);
3790 3797 } else {
3791 3798 cp->cache_flags &= ~KMF_DEBUG;
3792 3799 }
3793 3800 }
3794 3801
3795 3802 if (cp->cache_flags & KMF_DEADBEEF)
3796 3803 cp->cache_flags |= KMF_REDZONE;
3797 3804
3798 3805 if ((cflags & KMC_QCACHE) && (cp->cache_flags & KMF_AUDIT))
3799 3806 cp->cache_flags |= KMF_NOMAGAZINE;
3800 3807
3801 3808 if (cflags & KMC_NODEBUG)
3802 3809 cp->cache_flags &= ~KMF_DEBUG;
3803 3810
3804 3811 if (cflags & KMC_NOTOUCH)
3805 3812 cp->cache_flags &= ~KMF_TOUCH;
3806 3813
3807 3814 if (cflags & KMC_PREFILL)
3808 3815 cp->cache_flags |= KMF_PREFILL;
3809 3816
3810 3817 if (cflags & KMC_NOHASH)
3811 3818 cp->cache_flags &= ~(KMF_AUDIT | KMF_FIREWALL);
3812 3819
3813 3820 if (cflags & KMC_NOMAGAZINE)
3814 3821 cp->cache_flags |= KMF_NOMAGAZINE;
3815 3822
3816 3823 if ((cp->cache_flags & KMF_AUDIT) && !(cflags & KMC_NOTOUCH))
3817 3824 cp->cache_flags |= KMF_REDZONE;
3818 3825
3819 3826 if (!(cp->cache_flags & KMF_AUDIT))
3820 3827 cp->cache_flags &= ~KMF_CONTENTS;
3821 3828
3822 3829 if ((cp->cache_flags & KMF_BUFTAG) && bufsize >= kmem_minfirewall &&
3823 3830 !(cp->cache_flags & KMF_LITE) && !(cflags & KMC_NOHASH))
3824 3831 cp->cache_flags |= KMF_FIREWALL;
3825 3832
3826 3833 if (vmp != kmem_default_arena || kmem_firewall_arena == NULL)
3827 3834 cp->cache_flags &= ~KMF_FIREWALL;
3828 3835
3829 3836 if (cp->cache_flags & KMF_FIREWALL) {
3830 3837 cp->cache_flags &= ~KMF_BUFTAG;
3831 3838 cp->cache_flags |= KMF_NOMAGAZINE;
3832 3839 ASSERT(vmp == kmem_default_arena);
3833 3840 vmp = kmem_firewall_arena;
3834 3841 }
3835 3842
3836 3843 /*
3837 3844 * Set cache properties.
3838 3845 */
3839 3846 (void) strncpy(cp->cache_name, name, KMEM_CACHE_NAMELEN);
3840 3847 strident_canon(cp->cache_name, KMEM_CACHE_NAMELEN + 1);
3841 3848 cp->cache_bufsize = bufsize;
3842 3849 cp->cache_align = align;
3843 3850 cp->cache_constructor = constructor;
3844 3851 cp->cache_destructor = destructor;
3845 3852 cp->cache_reclaim = reclaim;
3846 3853 cp->cache_private = private;
3847 3854 cp->cache_arena = vmp;
3848 3855 cp->cache_cflags = cflags;
3849 3856
3850 3857 /*
3851 3858 * Determine the chunk size.
3852 3859 */
3853 3860 chunksize = bufsize;
3854 3861
3855 3862 if (align >= KMEM_ALIGN) {
3856 3863 chunksize = P2ROUNDUP(chunksize, KMEM_ALIGN);
3857 3864 cp->cache_bufctl = chunksize - KMEM_ALIGN;
3858 3865 }
3859 3866
3860 3867 if (cp->cache_flags & KMF_BUFTAG) {
3861 3868 cp->cache_bufctl = chunksize;
3862 3869 cp->cache_buftag = chunksize;
3863 3870 if (cp->cache_flags & KMF_LITE)
3864 3871 chunksize += KMEM_BUFTAG_LITE_SIZE(kmem_lite_count);
3865 3872 else
3866 3873 chunksize += sizeof (kmem_buftag_t);
3867 3874 }
3868 3875
3869 3876 if (cp->cache_flags & KMF_DEADBEEF) {
3870 3877 cp->cache_verify = MIN(cp->cache_buftag, kmem_maxverify);
3871 3878 if (cp->cache_flags & KMF_LITE)
3872 3879 cp->cache_verify = sizeof (uint64_t);
3873 3880 }
3874 3881
3875 3882 cp->cache_contents = MIN(cp->cache_bufctl, kmem_content_maxsave);
3876 3883
3877 3884 cp->cache_chunksize = chunksize = P2ROUNDUP(chunksize, align);
3878 3885
3879 3886 /*
3880 3887 * Now that we know the chunk size, determine the optimal slab size.
3881 3888 */
3882 3889 if (vmp == kmem_firewall_arena) {
3883 3890 cp->cache_slabsize = P2ROUNDUP(chunksize, vmp->vm_quantum);
3884 3891 cp->cache_mincolor = cp->cache_slabsize - chunksize;
3885 3892 cp->cache_maxcolor = cp->cache_mincolor;
3886 3893 cp->cache_flags |= KMF_HASH;
3887 3894 ASSERT(!(cp->cache_flags & KMF_BUFTAG));
3888 3895 } else if ((cflags & KMC_NOHASH) || (!(cflags & KMC_NOTOUCH) &&
3889 3896 !(cp->cache_flags & KMF_AUDIT) &&
3890 3897 chunksize < vmp->vm_quantum / KMEM_VOID_FRACTION)) {
3891 3898 cp->cache_slabsize = vmp->vm_quantum;
3892 3899 cp->cache_mincolor = 0;
3893 3900 cp->cache_maxcolor =
3894 3901 (cp->cache_slabsize - sizeof (kmem_slab_t)) % chunksize;
3895 3902 ASSERT(chunksize + sizeof (kmem_slab_t) <= cp->cache_slabsize);
3896 3903 ASSERT(!(cp->cache_flags & KMF_AUDIT));
3897 3904 } else {
3898 3905 size_t chunks, bestfit, waste, slabsize;
3899 3906 size_t minwaste = LONG_MAX;
3900 3907
3901 3908 for (chunks = 1; chunks <= KMEM_VOID_FRACTION; chunks++) {
3902 3909 slabsize = P2ROUNDUP(chunksize * chunks,
3903 3910 vmp->vm_quantum);
3904 3911 chunks = slabsize / chunksize;
3905 3912 waste = (slabsize % chunksize) / chunks;
3906 3913 if (waste < minwaste) {
3907 3914 minwaste = waste;
3908 3915 bestfit = slabsize;
3909 3916 }
3910 3917 }
3911 3918 if (cflags & KMC_QCACHE)
3912 3919 bestfit = VMEM_QCACHE_SLABSIZE(vmp->vm_qcache_max);
3913 3920 cp->cache_slabsize = bestfit;
3914 3921 cp->cache_mincolor = 0;
3915 3922 cp->cache_maxcolor = bestfit % chunksize;
3916 3923 cp->cache_flags |= KMF_HASH;
3917 3924 }
3918 3925
3919 3926 cp->cache_maxchunks = (cp->cache_slabsize / cp->cache_chunksize);
3920 3927 cp->cache_partial_binshift = highbit(cp->cache_maxchunks / 16) + 1;
3921 3928
3922 3929 /*
3923 3930 * Disallowing prefill when either the DEBUG or HASH flag is set or when
3924 3931 * there is a constructor avoids some tricky issues with debug setup
3925 3932 * that may be revisited later. We cannot allow prefill in a
3926 3933 * metadata cache because of potential recursion.
3927 3934 */
3928 3935 if (vmp == kmem_msb_arena ||
3929 3936 cp->cache_flags & (KMF_HASH | KMF_BUFTAG) ||
3930 3937 cp->cache_constructor != NULL)
3931 3938 cp->cache_flags &= ~KMF_PREFILL;
3932 3939
3933 3940 if (cp->cache_flags & KMF_HASH) {
3934 3941 ASSERT(!(cflags & KMC_NOHASH));
3935 3942 cp->cache_bufctl_cache = (cp->cache_flags & KMF_AUDIT) ?
3936 3943 kmem_bufctl_audit_cache : kmem_bufctl_cache;
3937 3944 }
3938 3945
3939 3946 if (cp->cache_maxcolor >= vmp->vm_quantum)
3940 3947 cp->cache_maxcolor = vmp->vm_quantum - 1;
3941 3948
3942 3949 cp->cache_color = cp->cache_mincolor;
3943 3950
3944 3951 /*
3945 3952 * Initialize the rest of the slab layer.
3946 3953 */
3947 3954 mutex_init(&cp->cache_lock, NULL, MUTEX_DEFAULT, NULL);
3948 3955
3949 3956 avl_create(&cp->cache_partial_slabs, kmem_partial_slab_cmp,
3950 3957 sizeof (kmem_slab_t), offsetof(kmem_slab_t, slab_link));
3951 3958 /* LINTED: E_TRUE_LOGICAL_EXPR */
3952 3959 ASSERT(sizeof (list_node_t) <= sizeof (avl_node_t));
3953 3960 /* reuse partial slab AVL linkage for complete slab list linkage */
3954 3961 list_create(&cp->cache_complete_slabs,
3955 3962 sizeof (kmem_slab_t), offsetof(kmem_slab_t, slab_link));
3956 3963
3957 3964 if (cp->cache_flags & KMF_HASH) {
3958 3965 cp->cache_hash_table = vmem_alloc(kmem_hash_arena,
3959 3966 KMEM_HASH_INITIAL * sizeof (void *), VM_SLEEP);
3960 3967 bzero(cp->cache_hash_table,
3961 3968 KMEM_HASH_INITIAL * sizeof (void *));
3962 3969 cp->cache_hash_mask = KMEM_HASH_INITIAL - 1;
3963 3970 cp->cache_hash_shift = highbit((ulong_t)chunksize) - 1;
3964 3971 }
3965 3972
3966 3973 /*
3967 3974 * Initialize the depot.
3968 3975 */
3969 3976 mutex_init(&cp->cache_depot_lock, NULL, MUTEX_DEFAULT, NULL);
3970 3977
3971 3978 for (mtp = kmem_magtype; chunksize <= mtp->mt_minbuf; mtp++)
3972 3979 continue;
3973 3980
3974 3981 cp->cache_magtype = mtp;
3975 3982
3976 3983 /*
3977 3984 * Initialize the CPU layer.
3978 3985 */
3979 3986 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) {
3980 3987 kmem_cpu_cache_t *ccp = &cp->cache_cpu[cpu_seqid];
3981 3988 mutex_init(&ccp->cc_lock, NULL, MUTEX_DEFAULT, NULL);
3982 3989 ccp->cc_flags = cp->cache_flags;
3983 3990 ccp->cc_rounds = -1;
3984 3991 ccp->cc_prounds = -1;
3985 3992 }
3986 3993
3987 3994 /*
3988 3995 * Create the cache's kstats.
3989 3996 */
3990 3997 if ((cp->cache_kstat = kstat_create("unix", 0, cp->cache_name,
3991 3998 "kmem_cache", KSTAT_TYPE_NAMED,
3992 3999 sizeof (kmem_cache_kstat) / sizeof (kstat_named_t),
3993 4000 KSTAT_FLAG_VIRTUAL)) != NULL) {
3994 4001 cp->cache_kstat->ks_data = &kmem_cache_kstat;
3995 4002 cp->cache_kstat->ks_update = kmem_cache_kstat_update;
3996 4003 cp->cache_kstat->ks_private = cp;
3997 4004 cp->cache_kstat->ks_lock = &kmem_cache_kstat_lock;
3998 4005 kstat_install(cp->cache_kstat);
3999 4006 }
4000 4007
4001 4008 /*
4002 4009 * Add the cache to the global list. This makes it visible
4003 4010 * to kmem_update(), so the cache must be ready for business.
4004 4011 */
4005 4012 mutex_enter(&kmem_cache_lock);
4006 4013 list_insert_tail(&kmem_caches, cp);
4007 4014 mutex_exit(&kmem_cache_lock);
4008 4015
4009 4016 if (kmem_ready)
4010 4017 kmem_cache_magazine_enable(cp);
4011 4018
4012 4019 return (cp);
4013 4020 }
4014 4021
4015 4022 static int
4016 4023 kmem_move_cmp(const void *buf, const void *p)
4017 4024 {
4018 4025 const kmem_move_t *kmm = p;
4019 4026 uintptr_t v1 = (uintptr_t)buf;
4020 4027 uintptr_t v2 = (uintptr_t)kmm->kmm_from_buf;
4021 4028 return (v1 < v2 ? -1 : (v1 > v2 ? 1 : 0));
4022 4029 }
4023 4030
4024 4031 static void
4025 4032 kmem_reset_reclaim_threshold(kmem_defrag_t *kmd)
4026 4033 {
4027 4034 kmd->kmd_reclaim_numer = 1;
4028 4035 }
4029 4036
4030 4037 /*
4031 4038 * Initially, when choosing candidate slabs for buffers to move, we want to be
4032 4039 * very selective and take only slabs that are less than
4033 4040 * (1 / KMEM_VOID_FRACTION) allocated. If we have difficulty finding candidate
4034 4041 * slabs, then we raise the allocation ceiling incrementally. The reclaim
4035 4042 * threshold is reset to (1 / KMEM_VOID_FRACTION) as soon as the cache is no
4036 4043 * longer fragmented.
4037 4044 */
4038 4045 static void
4039 4046 kmem_adjust_reclaim_threshold(kmem_defrag_t *kmd, int direction)
4040 4047 {
4041 4048 if (direction > 0) {
4042 4049 /* make it easier to find a candidate slab */
4043 4050 if (kmd->kmd_reclaim_numer < (KMEM_VOID_FRACTION - 1)) {
4044 4051 kmd->kmd_reclaim_numer++;
4045 4052 }
4046 4053 } else {
4047 4054 /* be more selective */
4048 4055 if (kmd->kmd_reclaim_numer > 1) {
4049 4056 kmd->kmd_reclaim_numer--;
4050 4057 }
4051 4058 }
4052 4059 }
4053 4060
4054 4061 void
4055 4062 kmem_cache_set_move(kmem_cache_t *cp,
4056 4063 kmem_cbrc_t (*move)(void *, void *, size_t, void *))
4057 4064 {
4058 4065 kmem_defrag_t *defrag;
4059 4066
4060 4067 ASSERT(move != NULL);
4061 4068 /*
4062 4069 * The consolidator does not support NOTOUCH caches because kmem cannot
4063 4070 * initialize their slabs with the 0xbaddcafe memory pattern, which sets
4064 4071 * a low order bit usable by clients to distinguish uninitialized memory
4065 4072 * from known objects (see kmem_slab_create).
4066 4073 */
4067 4074 ASSERT(!(cp->cache_cflags & KMC_NOTOUCH));
4068 4075 ASSERT(!(cp->cache_cflags & KMC_IDENTIFIER));
4069 4076
4070 4077 /*
4071 4078 * We should not be holding anyone's cache lock when calling
4072 4079 * kmem_cache_alloc(), so allocate in all cases before acquiring the
4073 4080 * lock.
4074 4081 */
4075 4082 defrag = kmem_cache_alloc(kmem_defrag_cache, KM_SLEEP);
4076 4083
4077 4084 mutex_enter(&cp->cache_lock);
4078 4085
4079 4086 if (KMEM_IS_MOVABLE(cp)) {
4080 4087 if (cp->cache_move == NULL) {
4081 4088 ASSERT(cp->cache_slab_alloc == 0);
4082 4089
4083 4090 cp->cache_defrag = defrag;
4084 4091 defrag = NULL; /* nothing to free */
4085 4092 bzero(cp->cache_defrag, sizeof (kmem_defrag_t));
4086 4093 avl_create(&cp->cache_defrag->kmd_moves_pending,
4087 4094 kmem_move_cmp, sizeof (kmem_move_t),
4088 4095 offsetof(kmem_move_t, kmm_entry));
4089 4096 /* LINTED: E_TRUE_LOGICAL_EXPR */
4090 4097 ASSERT(sizeof (list_node_t) <= sizeof (avl_node_t));
4091 4098 /* reuse the slab's AVL linkage for deadlist linkage */
4092 4099 list_create(&cp->cache_defrag->kmd_deadlist,
4093 4100 sizeof (kmem_slab_t),
4094 4101 offsetof(kmem_slab_t, slab_link));
4095 4102 kmem_reset_reclaim_threshold(cp->cache_defrag);
4096 4103 }
4097 4104 cp->cache_move = move;
4098 4105 }
4099 4106
4100 4107 mutex_exit(&cp->cache_lock);
4101 4108
4102 4109 if (defrag != NULL) {
4103 4110 kmem_cache_free(kmem_defrag_cache, defrag); /* unused */
4104 4111 }
4105 4112 }
4106 4113
4107 4114 void
4108 4115 kmem_cache_destroy(kmem_cache_t *cp)
4109 4116 {
4110 4117 int cpu_seqid;
4111 4118
4112 4119 /*
4113 4120 * Remove the cache from the global cache list so that no one else
4114 4121 * can schedule tasks on its behalf, wait for any pending tasks to
4115 4122 * complete, purge the cache, and then destroy it.
4116 4123 */
4117 4124 mutex_enter(&kmem_cache_lock);
4118 4125 list_remove(&kmem_caches, cp);
4119 4126 mutex_exit(&kmem_cache_lock);
4120 4127
4121 4128 if (kmem_taskq != NULL)
4122 4129 taskq_wait(kmem_taskq);
4123 4130 if (kmem_move_taskq != NULL)
4124 4131 taskq_wait(kmem_move_taskq);
4125 4132
4126 4133 kmem_cache_magazine_purge(cp);
4127 4134
4128 4135 mutex_enter(&cp->cache_lock);
4129 4136 if (cp->cache_buftotal != 0)
4130 4137 cmn_err(CE_WARN, "kmem_cache_destroy: '%s' (%p) not empty",
4131 4138 cp->cache_name, (void *)cp);
4132 4139 if (cp->cache_defrag != NULL) {
4133 4140 avl_destroy(&cp->cache_defrag->kmd_moves_pending);
4134 4141 list_destroy(&cp->cache_defrag->kmd_deadlist);
4135 4142 kmem_cache_free(kmem_defrag_cache, cp->cache_defrag);
4136 4143 cp->cache_defrag = NULL;
4137 4144 }
4138 4145 /*
4139 4146 * The cache is now dead. There should be no further activity. We
4140 4147 * enforce this by setting land mines in the constructor, destructor,
4141 4148 * reclaim, and move routines that induce a kernel text fault if
4142 4149 * invoked.
4143 4150 */
4144 4151 cp->cache_constructor = (int (*)(void *, void *, int))1;
4145 4152 cp->cache_destructor = (void (*)(void *, void *))2;
4146 4153 cp->cache_reclaim = (void (*)(void *))3;
4147 4154 cp->cache_move = (kmem_cbrc_t (*)(void *, void *, size_t, void *))4;
4148 4155 mutex_exit(&cp->cache_lock);
4149 4156
4150 4157 kstat_delete(cp->cache_kstat);
4151 4158
4152 4159 if (cp->cache_hash_table != NULL)
4153 4160 vmem_free(kmem_hash_arena, cp->cache_hash_table,
4154 4161 (cp->cache_hash_mask + 1) * sizeof (void *));
4155 4162
4156 4163 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++)
4157 4164 mutex_destroy(&cp->cache_cpu[cpu_seqid].cc_lock);
4158 4165
4159 4166 mutex_destroy(&cp->cache_depot_lock);
4160 4167 mutex_destroy(&cp->cache_lock);
4161 4168
4162 4169 vmem_free(kmem_cache_arena, cp, KMEM_CACHE_SIZE(max_ncpus));
4163 4170 }
4164 4171
4165 4172 /*ARGSUSED*/
4166 4173 static int
4167 4174 kmem_cpu_setup(cpu_setup_t what, int id, void *arg)
4168 4175 {
4169 4176 ASSERT(MUTEX_HELD(&cpu_lock));
4170 4177 if (what == CPU_UNCONFIG) {
4171 4178 kmem_cache_applyall(kmem_cache_magazine_purge,
4172 4179 kmem_taskq, TQ_SLEEP);
4173 4180 kmem_cache_applyall(kmem_cache_magazine_enable,
4174 4181 kmem_taskq, TQ_SLEEP);
4175 4182 }
4176 4183 return (0);
4177 4184 }
4178 4185
4179 4186 static void
4180 4187 kmem_alloc_caches_create(const int *array, size_t count,
4181 4188 kmem_cache_t **alloc_table, size_t maxbuf, uint_t shift)
4182 4189 {
4183 4190 char name[KMEM_CACHE_NAMELEN + 1];
4184 4191 size_t table_unit = (1 << shift); /* range of one alloc_table entry */
4185 4192 size_t size = table_unit;
4186 4193 int i;
4187 4194
4188 4195 for (i = 0; i < count; i++) {
4189 4196 size_t cache_size = array[i];
4190 4197 size_t align = KMEM_ALIGN;
4191 4198 kmem_cache_t *cp;
4192 4199
4193 4200 /* if the table has an entry for maxbuf, we're done */
4194 4201 if (size > maxbuf)
4195 4202 break;
4196 4203
4197 4204 /* cache size must be a multiple of the table unit */
4198 4205 ASSERT(P2PHASE(cache_size, table_unit) == 0);
4199 4206
4200 4207 /*
4201 4208 * If they allocate a multiple of the coherency granularity,
4202 4209 * they get a coherency-granularity-aligned address.
4203 4210 */
4204 4211 if (IS_P2ALIGNED(cache_size, 64))
4205 4212 align = 64;
4206 4213 if (IS_P2ALIGNED(cache_size, PAGESIZE))
4207 4214 align = PAGESIZE;
4208 4215 (void) snprintf(name, sizeof (name),
4209 4216 "kmem_alloc_%lu", cache_size);
4210 4217 cp = kmem_cache_create(name, cache_size, align,
4211 4218 NULL, NULL, NULL, NULL, NULL, KMC_KMEM_ALLOC);
4212 4219
4213 4220 while (size <= cache_size) {
4214 4221 alloc_table[(size - 1) >> shift] = cp;
4215 4222 size += table_unit;
4216 4223 }
4217 4224 }
4218 4225
4219 4226 ASSERT(size > maxbuf); /* i.e. maxbuf <= max(cache_size) */
4220 4227 }
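
The table-filling loop is easiest to follow with concrete numbers. A userland sketch, assuming an 8-byte table unit (KMEM_ALIGN_SHIFT of 3) and a made-up subset of kmem_alloc_sizes; the point is only that every table slot up to a cache's size ends up pointing at the smallest cache that can satisfy it:

#include <stdio.h>

#define	SHIFT	3		/* assumed KMEM_ALIGN_SHIFT */
#define	UNIT	(1 << SHIFT)	/* one alloc_table entry covers 8 bytes */

/* made-up subset of kmem_alloc_sizes, for illustration only */
static const int cache_sizes[] = { 8, 16, 24, 32, 40, 48, 64 };

int
main(void)
{
	int table[64 >> SHIFT] = { 0 };
	size_t size = UNIT;
	size_t i;

	/* same fill loop as kmem_alloc_caches_create() */
	for (i = 0; i < sizeof (cache_sizes) / sizeof (int); i++) {
		while (size <= (size_t)cache_sizes[i]) {
			table[(size - 1) >> SHIFT] = cache_sizes[i];
			size += UNIT;
		}
	}

	/* a 20-byte request indexes slot (20 - 1) >> 3 == 2 */
	printf("20-byte alloc -> kmem_alloc_%d\n", table[(20 - 1) >> SHIFT]);
	return (0);
}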
4221 4228
4222 4229 static void
4223 4230 kmem_cache_init(int pass, int use_large_pages)
4224 4231 {
4225 4232 int i;
4226 4233 size_t maxbuf;
4227 4234 kmem_magtype_t *mtp;
4228 4235
4229 4236 for (i = 0; i < sizeof (kmem_magtype) / sizeof (*mtp); i++) {
4230 4237 char name[KMEM_CACHE_NAMELEN + 1];
4231 4238
4232 4239 mtp = &kmem_magtype[i];
4233 4240 (void) sprintf(name, "kmem_magazine_%d", mtp->mt_magsize);
4234 4241 mtp->mt_cache = kmem_cache_create(name,
4235 4242 (mtp->mt_magsize + 1) * sizeof (void *),
4236 4243 mtp->mt_align, NULL, NULL, NULL, NULL,
4237 4244 kmem_msb_arena, KMC_NOHASH);
4238 4245 }
4239 4246
4240 4247 kmem_slab_cache = kmem_cache_create("kmem_slab_cache",
4241 4248 sizeof (kmem_slab_t), 0, NULL, NULL, NULL, NULL,
4242 4249 kmem_msb_arena, KMC_NOHASH);
4243 4250
4244 4251 kmem_bufctl_cache = kmem_cache_create("kmem_bufctl_cache",
4245 4252 sizeof (kmem_bufctl_t), 0, NULL, NULL, NULL, NULL,
4246 4253 kmem_msb_arena, KMC_NOHASH);
4247 4254
4248 4255 kmem_bufctl_audit_cache = kmem_cache_create("kmem_bufctl_audit_cache",
4249 4256 sizeof (kmem_bufctl_audit_t), 0, NULL, NULL, NULL, NULL,
4250 4257 kmem_msb_arena, KMC_NOHASH);
4251 4258
4252 4259 if (pass == 2) {
4253 4260 kmem_va_arena = vmem_create("kmem_va",
4254 4261 NULL, 0, PAGESIZE,
4255 4262 vmem_alloc, vmem_free, heap_arena,
4256 4263 8 * PAGESIZE, VM_SLEEP);
4257 4264
4258 4265 if (use_large_pages) {
4259 4266 kmem_default_arena = vmem_xcreate("kmem_default",
4260 4267 NULL, 0, PAGESIZE,
4261 4268 segkmem_alloc_lp, segkmem_free_lp, kmem_va_arena,
4262 4269 0, VMC_DUMPSAFE | VM_SLEEP);
4263 4270 } else {
4264 4271 kmem_default_arena = vmem_create("kmem_default",
4265 4272 NULL, 0, PAGESIZE,
4266 4273 segkmem_alloc, segkmem_free, kmem_va_arena,
4267 4274 0, VMC_DUMPSAFE | VM_SLEEP);
4268 4275 }
4269 4276
4270 4277 /* Figure out what our maximum cache size is */
4271 4278 maxbuf = kmem_max_cached;
4272 4279 if (maxbuf <= KMEM_MAXBUF) {
4273 4280 maxbuf = 0;
4274 4281 kmem_max_cached = KMEM_MAXBUF;
4275 4282 } else {
4276 4283 size_t size = 0;
4277 4284 size_t max =
4278 4285 sizeof (kmem_big_alloc_sizes) / sizeof (int);
4279 4286 /*
4280 4287 * Round maxbuf up to an existing cache size. If maxbuf
4281 4288 * is larger than the largest cache, we truncate it to
4282 4289 * the largest cache's size.
4283 4290 */
4284 4291 for (i = 0; i < max; i++) {
4285 4292 size = kmem_big_alloc_sizes[i];
4286 4293 if (maxbuf <= size)
4287 4294 break;
4288 4295 }
4289 4296 kmem_max_cached = maxbuf = size;
4290 4297 }
4291 4298
4292 4299 /*
4293 4300 * The big alloc table may not be completely overwritten, so
4294 4301 * we clear out any stale cache pointers from the first pass.
4295 4302 */
4296 4303 bzero(kmem_big_alloc_table, sizeof (kmem_big_alloc_table));
4297 4304 } else {
4298 4305 /*
4299 4306 * During the first pass, the kmem_alloc_* caches
4300 4307 * are treated as metadata.
4301 4308 */
4302 4309 kmem_default_arena = kmem_msb_arena;
4303 4310 maxbuf = KMEM_BIG_MAXBUF_32BIT;
4304 4311 }
4305 4312
4306 4313 /*
4307 4314 * Set up the default caches to back kmem_alloc()
4308 4315 */
4309 4316 kmem_alloc_caches_create(
4310 4317 kmem_alloc_sizes, sizeof (kmem_alloc_sizes) / sizeof (int),
4311 4318 kmem_alloc_table, KMEM_MAXBUF, KMEM_ALIGN_SHIFT);
4312 4319
4313 4320 kmem_alloc_caches_create(
4314 4321 kmem_big_alloc_sizes, sizeof (kmem_big_alloc_sizes) / sizeof (int),
4315 4322 kmem_big_alloc_table, maxbuf, KMEM_BIG_SHIFT);
4316 4323
4317 4324 kmem_big_alloc_table_max = maxbuf >> KMEM_BIG_SHIFT;
4318 4325 }
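
The pass-2 rounding of kmem_max_cached above is independent of the exact contents of kmem_big_alloc_sizes; a small sketch with an invented size list shows the two cases (round up to an existing cache size, or clamp to the largest one):

#include <stdio.h>

/* invented stand-in for kmem_big_alloc_sizes; only the rounding rule matters */
static const int big_sizes[] = { 16384, 24576, 32768, 65536, 131072 };

static int
round_maxbuf(int maxbuf)
{
	int i, size = 0;
	int n = sizeof (big_sizes) / sizeof (int);

	/* same loop as the pass == 2 branch above */
	for (i = 0; i < n; i++) {
		size = big_sizes[i];
		if (maxbuf <= size)
			break;
	}
	return (size);
}

int
main(void)
{
	printf("%d -> %d\n", 30000, round_maxbuf(30000));	/* rounds up to 32768 */
	printf("%d -> %d\n", 500000, round_maxbuf(500000));	/* clamped to 131072 */
	return (0);
}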
4319 4326
4320 4327 void
4321 4328 kmem_init(void)
4322 4329 {
4323 4330 kmem_cache_t *cp;
4324 4331 int old_kmem_flags = kmem_flags;
4325 4332 int use_large_pages = 0;
4326 4333 size_t maxverify, minfirewall;
4327 4334
4328 4335 kstat_init();
4329 4336
4330 4337 /*
4331 4338 * Small-memory systems (< 24 MB) can't handle kmem_flags overhead.
4332 4339 */
4333 4340 if (physmem < btop(24 << 20) && !(old_kmem_flags & KMF_STICKY))
4334 4341 kmem_flags = 0;
4335 4342
4336 4343 /*
4337 4344 * Don't do firewalled allocations if the heap is less than 1TB
4338 4345 * (i.e. on a 32-bit kernel).
4339 4346 * The resulting VM_NEXTFIT allocations would create too much
4340 4347 * fragmentation in a small heap.
4341 4348 */
4342 4349 #if defined(_LP64)
4343 4350 maxverify = minfirewall = PAGESIZE / 2;
4344 4351 #else
4345 4352 maxverify = minfirewall = ULONG_MAX;
4346 4353 #endif
4347 4354
4348 4355 /* LINTED */
4349 4356 ASSERT(sizeof (kmem_cpu_cache_t) == KMEM_CPU_CACHE_SIZE);
4350 4357
4351 4358 list_create(&kmem_caches, sizeof (kmem_cache_t),
4352 4359 offsetof(kmem_cache_t, cache_link));
4353 4360
4354 4361 kmem_metadata_arena = vmem_create("kmem_metadata", NULL, 0, PAGESIZE,
4355 4362 vmem_alloc, vmem_free, heap_arena, 8 * PAGESIZE,
4356 4363 VM_SLEEP | VMC_NO_QCACHE);
4357 4364
4358 4365 kmem_msb_arena = vmem_create("kmem_msb", NULL, 0,
4359 4366 PAGESIZE, segkmem_alloc, segkmem_free, kmem_metadata_arena, 0,
4360 4367 VMC_DUMPSAFE | VM_SLEEP);
4361 4368
4362 4369 kmem_cache_arena = vmem_create("kmem_cache", NULL, 0, KMEM_ALIGN,
4363 4370 segkmem_alloc, segkmem_free, kmem_metadata_arena, 0, VM_SLEEP);
4364 4371
4365 4372 kmem_hash_arena = vmem_create("kmem_hash", NULL, 0, KMEM_ALIGN,
4366 4373 segkmem_alloc, segkmem_free, kmem_metadata_arena, 0, VM_SLEEP);
4367 4374
4368 4375 kmem_log_arena = vmem_create("kmem_log", NULL, 0, KMEM_ALIGN,
4369 4376 segkmem_alloc, segkmem_free, heap_arena, 0, VM_SLEEP);
4370 4377
4371 4378 kmem_firewall_va_arena = vmem_create("kmem_firewall_va",
4372 4379 NULL, 0, PAGESIZE,
4373 4380 kmem_firewall_va_alloc, kmem_firewall_va_free, heap_arena,
4374 4381 0, VM_SLEEP);
4375 4382
4376 4383 kmem_firewall_arena = vmem_create("kmem_firewall", NULL, 0, PAGESIZE,
4377 4384 segkmem_alloc, segkmem_free, kmem_firewall_va_arena, 0,
4378 4385 VMC_DUMPSAFE | VM_SLEEP);
4379 4386
4380 4387 /* temporary oversize arena for mod_read_system_file */
4381 4388 kmem_oversize_arena = vmem_create("kmem_oversize", NULL, 0, PAGESIZE,
4382 4389 segkmem_alloc, segkmem_free, heap_arena, 0, VM_SLEEP);
4383 4390
4384 4391 kmem_reap_interval = 15 * hz;
4385 4392
4386 4393 /*
4387 4394 * Read /etc/system. This is a chicken-and-egg problem because
4388 4395 * kmem_flags may be set in /etc/system, but mod_read_system_file()
4389 4396 * needs to use the allocator. The simplest solution is to create
4390 4397 * all the standard kmem caches, read /etc/system, destroy all the
4391 4398 * caches we just created, and then create them all again in light
4392 4399 * of the (possibly) new kmem_flags and other kmem tunables.
4393 4400 */
4394 4401 kmem_cache_init(1, 0);
4395 4402
4396 4403 mod_read_system_file(boothowto & RB_ASKNAME);
4397 4404
4398 4405 while ((cp = list_tail(&kmem_caches)) != NULL)
4399 4406 kmem_cache_destroy(cp);
4400 4407
4401 4408 vmem_destroy(kmem_oversize_arena);
4402 4409
4403 4410 if (old_kmem_flags & KMF_STICKY)
4404 4411 kmem_flags = old_kmem_flags;
4405 4412
4406 4413 if (!(kmem_flags & KMF_AUDIT))
4407 4414 vmem_seg_size = offsetof(vmem_seg_t, vs_thread);
4408 4415
4409 4416 if (kmem_maxverify == 0)
4410 4417 kmem_maxverify = maxverify;
4411 4418
4412 4419 if (kmem_minfirewall == 0)
4413 4420 kmem_minfirewall = minfirewall;
4414 4421
4415 4422 /*
4416 4423 * give segkmem a chance to figure out if we are using large pages
4417 4424 * for the kernel heap
4418 4425 */
4419 4426 use_large_pages = segkmem_lpsetup();
4420 4427
4421 4428 /*
4422 4429 * To protect against corruption, we keep the actual number of callers
4423 4430 * KMF_LITE records separate from the tunable. We arbitrarily clamp
4424 4431 * to 16, since the overhead for small buffers quickly gets out of
4425 4432 * hand.
4426 4433 *
4427 4434 * The real limit would depend on the needs of the largest KMC_NOHASH
4428 4435 * cache.
4429 4436 */
4430 4437 kmem_lite_count = MIN(MAX(0, kmem_lite_pcs), 16);
4431 4438 kmem_lite_pcs = kmem_lite_count;
4432 4439
4433 4440 /*
4434 4441 * Normally, we firewall oversized allocations when possible, but
4435 4442 * if we are using large pages for kernel memory, and we don't have
4436 4443 * any non-LITE debugging flags set, we want to allocate oversized
4437 4444 * buffers from large pages, and so skip the firewalling.
4438 4445 */
4439 4446 if (use_large_pages &&
4440 4447 ((kmem_flags & KMF_LITE) || !(kmem_flags & KMF_DEBUG))) {
4441 4448 kmem_oversize_arena = vmem_xcreate("kmem_oversize", NULL, 0,
4442 4449 PAGESIZE, segkmem_alloc_lp, segkmem_free_lp, heap_arena,
4443 4450 0, VMC_DUMPSAFE | VM_SLEEP);
4444 4451 } else {
4445 4452 kmem_oversize_arena = vmem_create("kmem_oversize",
4446 4453 NULL, 0, PAGESIZE,
4447 4454 segkmem_alloc, segkmem_free, kmem_minfirewall < ULONG_MAX?
4448 4455 kmem_firewall_va_arena : heap_arena, 0, VMC_DUMPSAFE |
4449 4456 VM_SLEEP);
4450 4457 }
4451 4458
4452 4459 kmem_cache_init(2, use_large_pages);
4453 4460
4454 4461 if (kmem_flags & (KMF_AUDIT | KMF_RANDOMIZE)) {
4455 4462 if (kmem_transaction_log_size == 0)
4456 4463 kmem_transaction_log_size = kmem_maxavail() / 50;
4457 4464 kmem_transaction_log = kmem_log_init(kmem_transaction_log_size);
4458 4465 }
4459 4466
4460 4467 if (kmem_flags & (KMF_CONTENTS | KMF_RANDOMIZE)) {
4461 4468 if (kmem_content_log_size == 0)
4462 4469 kmem_content_log_size = kmem_maxavail() / 50;
4463 4470 kmem_content_log = kmem_log_init(kmem_content_log_size);
4464 4471 }
4465 4472
4466 4473 kmem_failure_log = kmem_log_init(kmem_failure_log_size);
4467 4474
4468 4475 kmem_slab_log = kmem_log_init(kmem_slab_log_size);
4469 4476
4470 4477 /*
4471 4478 * Initialize STREAMS message caches so allocb() is available.
4472 4479 * This allows us to initialize the logging framework (cmn_err(9F),
4473 4480 * strlog(9F), etc) so we can start recording messages.
4474 4481 */
4475 4482 streams_msg_init();
4476 4483
4477 4484 /*
4478 4485 * Initialize the ZSD framework in Zones so modules loaded henceforth
4479 4486 * can register their callbacks.
4480 4487 */
4481 4488 zone_zsd_init();
4482 4489
4483 4490 log_init();
4484 4491 taskq_init();
4485 4492
4486 4493 /*
4487 4494 * Warn about invalid or dangerous values of kmem_flags.
4488 4495 * Always warn about unsupported values.
4489 4496 */
4490 4497 if (((kmem_flags & ~(KMF_AUDIT | KMF_DEADBEEF | KMF_REDZONE |
4491 4498 KMF_CONTENTS | KMF_LITE)) != 0) ||
4492 4499 ((kmem_flags & KMF_LITE) && kmem_flags != KMF_LITE))
4493 4500 cmn_err(CE_WARN, "kmem_flags set to unsupported value 0x%x. "
4494 4501 "See the Solaris Tunable Parameters Reference Manual.",
4495 4502 kmem_flags);
4496 4503
4497 4504 #ifdef DEBUG
4498 4505 if ((kmem_flags & KMF_DEBUG) == 0)
4499 4506 cmn_err(CE_NOTE, "kmem debugging disabled.");
4500 4507 #else
4501 4508 /*
4502 4509 * For non-debug kernels, the only "normal" flags are 0, KMF_LITE,
4503 4510 * KMF_REDZONE, and KMF_CONTENTS (the last because it is only enabled
4504 4511 * if KMF_AUDIT is set). We should warn the user about the performance
4505 4512 * penalty of KMF_AUDIT or KMF_DEADBEEF if they are set and KMF_LITE
4506 4513 * isn't set (since that disables AUDIT).
4507 4514 */
4508 4515 if (!(kmem_flags & KMF_LITE) &&
4509 4516 (kmem_flags & (KMF_AUDIT | KMF_DEADBEEF)) != 0)
4510 4517 cmn_err(CE_WARN, "High-overhead kmem debugging features "
4511 4518 "enabled (kmem_flags = 0x%x). Performance degradation "
4512 4519 "and large memory overhead possible. See the Solaris "
4513 4520 "Tunable Parameters Reference Manual.", kmem_flags);
4514 4521 #endif /* not DEBUG */
4515 4522
4516 4523 kmem_cache_applyall(kmem_cache_magazine_enable, NULL, TQ_SLEEP);
4517 4524
4518 4525 kmem_ready = 1;
4519 4526
4520 4527 /*
4521 4528 * Initialize the platform-specific aligned/DMA memory allocator.
4522 4529 */
4523 4530 ka_init();
4524 4531
4525 4532 /*
4526 4533 * Initialize 32-bit ID cache.
4527 4534 */
4528 4535 id32_init();
4529 4536
4530 4537 /*
4531 4538 * Initialize the networking stack so modules loaded can
4532 4539 * register their callbacks.
4533 4540 */
4534 4541 netstack_init();
4535 4542 }
4536 4543
4537 4544 static void
4538 4545 kmem_move_init(void)
4539 4546 {
4540 4547 kmem_defrag_cache = kmem_cache_create("kmem_defrag_cache",
4541 4548 sizeof (kmem_defrag_t), 0, NULL, NULL, NULL, NULL,
4542 4549 kmem_msb_arena, KMC_NOHASH);
4543 4550 kmem_move_cache = kmem_cache_create("kmem_move_cache",
4544 4551 sizeof (kmem_move_t), 0, NULL, NULL, NULL, NULL,
4545 4552 kmem_msb_arena, KMC_NOHASH);
4546 4553
4547 4554 /*
4548 4555 * kmem guarantees that move callbacks are sequential and that even
4549 4556 * across multiple caches no two moves ever execute simultaneously.
4550 4557 * Move callbacks are processed on a separate taskq so that client code
4551 4558 * does not interfere with internal maintenance tasks.
4552 4559 */
4553 4560 kmem_move_taskq = taskq_create_instance("kmem_move_taskq", 0, 1,
4554 4561 minclsyspri, 100, INT_MAX, TASKQ_PREPOPULATE);
4555 4562 }
4556 4563
4557 4564 void
4558 4565 kmem_thread_init(void)
4559 4566 {
4560 4567 kmem_move_init();
4561 4568 kmem_taskq = taskq_create_instance("kmem_taskq", 0, 1, minclsyspri,
4562 4569 300, INT_MAX, TASKQ_PREPOPULATE);
4563 4570 }
4564 4571
4565 4572 void
4566 4573 kmem_mp_init(void)
4567 4574 {
4568 4575 mutex_enter(&cpu_lock);
4569 4576 register_cpu_setup_func(kmem_cpu_setup, NULL);
4570 4577 mutex_exit(&cpu_lock);
4571 4578
4572 4579 kmem_update_timeout(NULL);
4573 4580
4574 4581 taskq_mp_init();
4575 4582 }
4576 4583
4577 4584 /*
4578 4585 * Return the slab of the allocated buffer, or NULL if the buffer is not
4579 4586 * allocated. This function may be called with a known slab address to determine
4580 4587 * whether or not the buffer is allocated, or with a NULL slab address to obtain
4581 4588 * an allocated buffer's slab.
4582 4589 */
4583 4590 static kmem_slab_t *
4584 4591 kmem_slab_allocated(kmem_cache_t *cp, kmem_slab_t *sp, void *buf)
4585 4592 {
4586 4593 kmem_bufctl_t *bcp, *bufbcp;
4587 4594
4588 4595 ASSERT(MUTEX_HELD(&cp->cache_lock));
4589 4596 ASSERT(sp == NULL || KMEM_SLAB_MEMBER(sp, buf));
4590 4597
4591 4598 if (cp->cache_flags & KMF_HASH) {
4592 4599 for (bcp = *KMEM_HASH(cp, buf);
4593 4600 (bcp != NULL) && (bcp->bc_addr != buf);
4594 4601 bcp = bcp->bc_next) {
4595 4602 continue;
4596 4603 }
4597 4604 ASSERT(sp != NULL && bcp != NULL ? sp == bcp->bc_slab : 1);
4598 4605 return (bcp == NULL ? NULL : bcp->bc_slab);
4599 4606 }
4600 4607
4601 4608 if (sp == NULL) {
4602 4609 sp = KMEM_SLAB(cp, buf);
4603 4610 }
4604 4611 bufbcp = KMEM_BUFCTL(cp, buf);
4605 4612 for (bcp = sp->slab_head;
4606 4613 (bcp != NULL) && (bcp != bufbcp);
4607 4614 bcp = bcp->bc_next) {
4608 4615 continue;
4609 4616 }
4610 4617 return (bcp == NULL ? sp : NULL);
4611 4618 }
4612 4619
4613 4620 static boolean_t
4614 4621 kmem_slab_is_reclaimable(kmem_cache_t *cp, kmem_slab_t *sp, int flags)
4615 4622 {
4616 4623 long refcnt = sp->slab_refcnt;
4617 4624
4618 4625 ASSERT(cp->cache_defrag != NULL);
4619 4626
4620 4627 /*
4621 4628 * For code coverage we want to be able to move an object within the
4622 4629 * same slab (the only partial slab) even if allocating the destination
4623 4630 * buffer resulted in a completely allocated slab.
4624 4631 */
4625 4632 if (flags & KMM_DEBUG) {
4626 4633 return ((flags & KMM_DESPERATE) ||
4627 4634 ((sp->slab_flags & KMEM_SLAB_NOMOVE) == 0));
4628 4635 }
4629 4636
4630 4637 /* If we're desperate, we don't care if the client said NO. */
4631 4638 if (flags & KMM_DESPERATE) {
4632 4639 return (refcnt < sp->slab_chunks); /* any partial */
4633 4640 }
4634 4641
4635 4642 if (sp->slab_flags & KMEM_SLAB_NOMOVE) {
4636 4643 return (B_FALSE);
4637 4644 }
4638 4645
4639 4646 if ((refcnt == 1) || kmem_move_any_partial) {
4640 4647 return (refcnt < sp->slab_chunks);
4641 4648 }
4642 4649
4643 4650 /*
4644 4651 * The reclaim threshold is adjusted at each kmem_cache_scan() so that
4645 4652 * slabs with a progressively higher percentage of used buffers can be
4646 4653 * reclaimed until the cache as a whole is no longer fragmented.
4647 4654 *
4648 4655 * sp->slab_refcnt         kmd_reclaim_numer
4649 4656 * ---------------    <    ------------------
4650 4657 * sp->slab_chunks         KMEM_VOID_FRACTION
4651 4658 */
4652 4659 return ((refcnt * KMEM_VOID_FRACTION) <
4653 4660 (sp->slab_chunks * cp->cache_defrag->kmd_reclaim_numer));
4654 4661 }
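
A concrete reading of the final comparison: with the default numerator of 1, only slabs less than 1/KMEM_VOID_FRACTION allocated qualify, and raising the numerator makes progressively fuller slabs eligible. A userland sketch (KMEM_VOID_FRACTION assumed to be 8, as in kmem_impl.h):

#include <stdio.h>

#define	KMEM_VOID_FRACTION	8	/* assumed value; see kmem_impl.h */

/* the last test in kmem_slab_is_reclaimable(), with the inputs made explicit */
static int
reclaimable(long refcnt, long chunks, long numer)
{
	return ((refcnt * KMEM_VOID_FRACTION) < (chunks * numer));
}

int
main(void)
{
	/* a 32-chunk slab with 3 allocated buffers: 24 < 32, reclaimable */
	printf("3/32,  numer 1: %d\n", reclaimable(3, 32, 1));
	/* the same slab with 10 allocated buffers: 80 < 32 fails */
	printf("10/32, numer 1: %d\n", reclaimable(10, 32, 1));
	/* after the threshold is raised to 3/8: 80 < 96, reclaimable again */
	printf("10/32, numer 3: %d\n", reclaimable(10, 32, 3));
	return (0);
}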
4655 4662
4656 4663 static void *
4657 4664 kmem_hunt_mag(kmem_cache_t *cp, kmem_magazine_t *m, int n, void *buf,
4658 4665 void *tbuf)
4659 4666 {
4660 4667 int i; /* magazine round index */
4661 4668
4662 4669 for (i = 0; i < n; i++) {
4663 4670 if (buf == m->mag_round[i]) {
4664 4671 if (cp->cache_flags & KMF_BUFTAG) {
4665 4672 (void) kmem_cache_free_debug(cp, tbuf,
4666 4673 caller());
4667 4674 }
4668 4675 m->mag_round[i] = tbuf;
4669 4676 return (buf);
4670 4677 }
4671 4678 }
4672 4679
4673 4680 return (NULL);
4674 4681 }
4675 4682
4676 4683 /*
4677 4684 * Hunt the magazine layer for the given buffer. If found, the buffer is
4678 4685 * removed from the magazine layer and returned, otherwise NULL is returned.
4679 4686 * removed from the magazine layer and returned, otherwise NULL is returned.
4680 4687 * The returned buffer is in the free, constructed state.
4680 4687 */
4681 4688 static void *
4682 4689 kmem_hunt_mags(kmem_cache_t *cp, void *buf)
4683 4690 {
4684 4691 kmem_cpu_cache_t *ccp;
4685 4692 kmem_magazine_t *m;
4686 4693 int cpu_seqid;
4687 4694 int n; /* magazine rounds */
4688 4695 void *tbuf; /* temporary swap buffer */
4689 4696
4690 4697 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock));
4691 4698
4692 4699 /*
4693 4700 * Allocate a buffer to swap with the one we hope to pull out of a
4694 4701 * magazine when found.
4695 4702 */
4696 4703 tbuf = kmem_cache_alloc(cp, KM_NOSLEEP);
4697 4704 if (tbuf == NULL) {
4698 4705 KMEM_STAT_ADD(kmem_move_stats.kms_hunt_alloc_fail);
4699 4706 return (NULL);
4700 4707 }
4701 4708 if (tbuf == buf) {
4702 4709 KMEM_STAT_ADD(kmem_move_stats.kms_hunt_lucky);
4703 4710 if (cp->cache_flags & KMF_BUFTAG) {
4704 4711 (void) kmem_cache_free_debug(cp, buf, caller());
4705 4712 }
4706 4713 return (buf);
4707 4714 }
4708 4715
4709 4716 /* Hunt the depot. */
4710 4717 mutex_enter(&cp->cache_depot_lock);
4711 4718 n = cp->cache_magtype->mt_magsize;
4712 4719 for (m = cp->cache_full.ml_list; m != NULL; m = m->mag_next) {
4713 4720 if (kmem_hunt_mag(cp, m, n, buf, tbuf) != NULL) {
4714 4721 mutex_exit(&cp->cache_depot_lock);
4715 4722 return (buf);
4716 4723 }
4717 4724 }
4718 4725 mutex_exit(&cp->cache_depot_lock);
4719 4726
4720 4727 /* Hunt the per-CPU magazines. */
4721 4728 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) {
4722 4729 ccp = &cp->cache_cpu[cpu_seqid];
4723 4730
4724 4731 mutex_enter(&ccp->cc_lock);
4725 4732 m = ccp->cc_loaded;
4726 4733 n = ccp->cc_rounds;
4727 4734 if (kmem_hunt_mag(cp, m, n, buf, tbuf) != NULL) {
4728 4735 mutex_exit(&ccp->cc_lock);
4729 4736 return (buf);
4730 4737 }
4731 4738 m = ccp->cc_ploaded;
4732 4739 n = ccp->cc_prounds;
4733 4740 if (kmem_hunt_mag(cp, m, n, buf, tbuf) != NULL) {
4734 4741 mutex_exit(&ccp->cc_lock);
4735 4742 return (buf);
4736 4743 }
4737 4744 mutex_exit(&ccp->cc_lock);
4738 4745 }
4739 4746
4740 4747 kmem_cache_free(cp, tbuf);
4741 4748 return (NULL);
4742 4749 }
4743 4750
4744 4751 /*
4745 4752 * May be called from the kmem_move_taskq, from kmem_cache_move_notify_task(),
4746 4753 * or when the buffer is freed.
4747 4754 */
4748 4755 static void
4749 4756 kmem_slab_move_yes(kmem_cache_t *cp, kmem_slab_t *sp, void *from_buf)
4750 4757 {
4751 4758 ASSERT(MUTEX_HELD(&cp->cache_lock));
4752 4759 ASSERT(KMEM_SLAB_MEMBER(sp, from_buf));
4753 4760
4754 4761 if (!KMEM_SLAB_IS_PARTIAL(sp)) {
4755 4762 return;
4756 4763 }
4757 4764
4758 4765 if (sp->slab_flags & KMEM_SLAB_NOMOVE) {
4759 4766 if (KMEM_SLAB_OFFSET(sp, from_buf) == sp->slab_stuck_offset) {
4760 4767 avl_remove(&cp->cache_partial_slabs, sp);
4761 4768 sp->slab_flags &= ~KMEM_SLAB_NOMOVE;
4762 4769 sp->slab_stuck_offset = (uint32_t)-1;
4763 4770 avl_add(&cp->cache_partial_slabs, sp);
4764 4771 }
4765 4772 } else {
4766 4773 sp->slab_later_count = 0;
4767 4774 sp->slab_stuck_offset = (uint32_t)-1;
4768 4775 }
4769 4776 }
4770 4777
4771 4778 static void
4772 4779 kmem_slab_move_no(kmem_cache_t *cp, kmem_slab_t *sp, void *from_buf)
4773 4780 {
4774 4781 ASSERT(taskq_member(kmem_move_taskq, curthread));
4775 4782 ASSERT(MUTEX_HELD(&cp->cache_lock));
4776 4783 ASSERT(KMEM_SLAB_MEMBER(sp, from_buf));
4777 4784
4778 4785 if (!KMEM_SLAB_IS_PARTIAL(sp)) {
4779 4786 return;
4780 4787 }
4781 4788
4782 4789 avl_remove(&cp->cache_partial_slabs, sp);
4783 4790 sp->slab_later_count = 0;
4784 4791 sp->slab_flags |= KMEM_SLAB_NOMOVE;
4785 4792 sp->slab_stuck_offset = KMEM_SLAB_OFFSET(sp, from_buf);
4786 4793 avl_add(&cp->cache_partial_slabs, sp);
4787 4794 }
4788 4795
4789 4796 static void kmem_move_end(kmem_cache_t *, kmem_move_t *);
4790 4797
4791 4798 /*
4792 4799 * The move callback takes two buffer addresses, the buffer to be moved, and a
4793 4800 * newly allocated and constructed buffer selected by kmem as the destination.
4794 4801 * It also takes the size of the buffer and an optional user argument specified
4795 4802 * at cache creation time. kmem guarantees that the buffer to be moved has not
4796 4803 * been unmapped by the virtual memory subsystem. Beyond that, it cannot
4797 4804 * guarantee the present whereabouts of the buffer to be moved, so it is up to
4798 4805 * the client to safely determine whether or not it is still using the buffer.
4799 4806 * The client must not free either of the buffers passed to the move callback,
4800 4807 * since kmem wants to free them directly to the slab layer. The client response
4801 4808 * tells kmem which of the two buffers to free:
4802 4809 *
4803 4810 * YES kmem frees the old buffer (the move was successful)
4804 4811 * NO kmem frees the new buffer, marks the slab of the old buffer
4805 4812 * non-reclaimable to avoid bothering the client again
4806 4813 * LATER kmem frees the new buffer, increments slab_later_count
4807 4814 * DONT_KNOW kmem frees the new buffer, searches mags for the old buffer
4808 4815 * DONT_NEED kmem frees both the old buffer and the new buffer
4809 4816 *
4810 4817 * The pending callback argument now being processed contains both of the
4811 4818 * buffers (old and new) passed to the move callback function, the slab of the
4812 4819 * old buffer, and flags related to the move request, such as whether or not the
4813 4820 * system was desperate for memory.
4814 4821 *
4815 4822 * Slabs are not freed while there is a pending callback, but instead are kept
4816 4823 * on a deadlist, which is drained after the last callback completes. This means
4817 4824 * that slabs are safe to access until kmem_move_end(), no matter how many of
4818 4825 * their buffers have been freed. Once slab_refcnt reaches zero, it stays at
4819 4826 * zero for as long as the slab remains on the deadlist and until the slab is
4820 4827 * freed.
4821 4828 */
4822 4829 static void
4823 4830 kmem_move_buffer(kmem_move_t *callback)
4824 4831 {
4825 4832 kmem_cbrc_t response;
4826 4833 kmem_slab_t *sp = callback->kmm_from_slab;
4827 4834 kmem_cache_t *cp = sp->slab_cache;
4828 4835 boolean_t free_on_slab;
4829 4836
4830 4837 ASSERT(taskq_member(kmem_move_taskq, curthread));
4831 4838 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock));
4832 4839 ASSERT(KMEM_SLAB_MEMBER(sp, callback->kmm_from_buf));
4833 4840
4834 4841 /*
4835 4842 * The number of allocated buffers on the slab may have changed since we
4836 4843 * last checked the slab's reclaimability (when the pending move was
4837 4844 * enqueued), or the client may have responded NO when asked to move
4838 4845 * another buffer on the same slab.
4839 4846 */
4840 4847 if (!kmem_slab_is_reclaimable(cp, sp, callback->kmm_flags)) {
4841 4848 KMEM_STAT_ADD(kmem_move_stats.kms_no_longer_reclaimable);
4842 4849 KMEM_STAT_COND_ADD((callback->kmm_flags & KMM_NOTIFY),
4843 4850 kmem_move_stats.kms_notify_no_longer_reclaimable);
4844 4851 kmem_slab_free(cp, callback->kmm_to_buf);
4845 4852 kmem_move_end(cp, callback);
4846 4853 return;
4847 4854 }
4848 4855
4849 4856 /*
4850 4857 * Hunting magazines is expensive, so we'll wait to do that until the
4851 4858 * client responds KMEM_CBRC_DONT_KNOW. However, checking the slab layer
4852 4859 * is cheap, so we might as well do that here in case we can avoid
4853 4860 * bothering the client.
4854 4861 */
4855 4862 mutex_enter(&cp->cache_lock);
4856 4863 free_on_slab = (kmem_slab_allocated(cp, sp,
4857 4864 callback->kmm_from_buf) == NULL);
4858 4865 mutex_exit(&cp->cache_lock);
4859 4866
4860 4867 if (free_on_slab) {
4861 4868 KMEM_STAT_ADD(kmem_move_stats.kms_hunt_found_slab);
4862 4869 kmem_slab_free(cp, callback->kmm_to_buf);
4863 4870 kmem_move_end(cp, callback);
4864 4871 return;
4865 4872 }
4866 4873
4867 4874 if (cp->cache_flags & KMF_BUFTAG) {
4868 4875 /*
4869 4876 * Make kmem_cache_alloc_debug() apply the constructor for us.
4870 4877 */
4871 4878 if (kmem_cache_alloc_debug(cp, callback->kmm_to_buf,
4872 4879 KM_NOSLEEP, 1, caller()) != 0) {
4873 4880 KMEM_STAT_ADD(kmem_move_stats.kms_alloc_fail);
4874 4881 kmem_move_end(cp, callback);
4875 4882 return;
4876 4883 }
4877 4884 } else if (cp->cache_constructor != NULL &&
4878 4885 cp->cache_constructor(callback->kmm_to_buf, cp->cache_private,
4879 4886 KM_NOSLEEP) != 0) {
4880 4887 atomic_inc_64(&cp->cache_alloc_fail);
4881 4888 KMEM_STAT_ADD(kmem_move_stats.kms_constructor_fail);
4882 4889 kmem_slab_free(cp, callback->kmm_to_buf);
4883 4890 kmem_move_end(cp, callback);
4884 4891 return;
4885 4892 }
4886 4893
4887 4894 KMEM_STAT_ADD(kmem_move_stats.kms_callbacks);
4888 4895 KMEM_STAT_COND_ADD((callback->kmm_flags & KMM_NOTIFY),
4889 4896 kmem_move_stats.kms_notify_callbacks);
4890 4897 cp->cache_defrag->kmd_callbacks++;
4891 4898 cp->cache_defrag->kmd_thread = curthread;
4892 4899 cp->cache_defrag->kmd_from_buf = callback->kmm_from_buf;
4893 4900 cp->cache_defrag->kmd_to_buf = callback->kmm_to_buf;
4894 4901 DTRACE_PROBE2(kmem__move__start, kmem_cache_t *, cp, kmem_move_t *,
4895 4902 callback);
4896 4903
4897 4904 response = cp->cache_move(callback->kmm_from_buf,
4898 4905 callback->kmm_to_buf, cp->cache_bufsize, cp->cache_private);
4899 4906
4900 4907 DTRACE_PROBE3(kmem__move__end, kmem_cache_t *, cp, kmem_move_t *,
4901 4908 callback, kmem_cbrc_t, response);
4902 4909 cp->cache_defrag->kmd_thread = NULL;
4903 4910 cp->cache_defrag->kmd_from_buf = NULL;
4904 4911 cp->cache_defrag->kmd_to_buf = NULL;
4905 4912
4906 4913 if (response == KMEM_CBRC_YES) {
4907 4914 KMEM_STAT_ADD(kmem_move_stats.kms_yes);
4908 4915 cp->cache_defrag->kmd_yes++;
4909 4916 kmem_slab_free_constructed(cp, callback->kmm_from_buf, B_FALSE);
4910 4917 /* slab safe to access until kmem_move_end() */
4911 4918 if (sp->slab_refcnt == 0)
4912 4919 cp->cache_defrag->kmd_slabs_freed++;
4913 4920 mutex_enter(&cp->cache_lock);
4914 4921 kmem_slab_move_yes(cp, sp, callback->kmm_from_buf);
4915 4922 mutex_exit(&cp->cache_lock);
4916 4923 kmem_move_end(cp, callback);
4917 4924 return;
4918 4925 }
4919 4926
4920 4927 switch (response) {
4921 4928 case KMEM_CBRC_NO:
4922 4929 KMEM_STAT_ADD(kmem_move_stats.kms_no);
4923 4930 cp->cache_defrag->kmd_no++;
4924 4931 mutex_enter(&cp->cache_lock);
4925 4932 kmem_slab_move_no(cp, sp, callback->kmm_from_buf);
4926 4933 mutex_exit(&cp->cache_lock);
4927 4934 break;
4928 4935 case KMEM_CBRC_LATER:
4929 4936 KMEM_STAT_ADD(kmem_move_stats.kms_later);
4930 4937 cp->cache_defrag->kmd_later++;
4931 4938 mutex_enter(&cp->cache_lock);
4932 4939 if (!KMEM_SLAB_IS_PARTIAL(sp)) {
4933 4940 mutex_exit(&cp->cache_lock);
4934 4941 break;
4935 4942 }
4936 4943
4937 4944 if (++sp->slab_later_count >= KMEM_DISBELIEF) {
4938 4945 KMEM_STAT_ADD(kmem_move_stats.kms_disbelief);
4939 4946 kmem_slab_move_no(cp, sp, callback->kmm_from_buf);
4940 4947 } else if (!(sp->slab_flags & KMEM_SLAB_NOMOVE)) {
4941 4948 sp->slab_stuck_offset = KMEM_SLAB_OFFSET(sp,
4942 4949 callback->kmm_from_buf);
4943 4950 }
4944 4951 mutex_exit(&cp->cache_lock);
4945 4952 break;
4946 4953 case KMEM_CBRC_DONT_NEED:
4947 4954 KMEM_STAT_ADD(kmem_move_stats.kms_dont_need);
4948 4955 cp->cache_defrag->kmd_dont_need++;
4949 4956 kmem_slab_free_constructed(cp, callback->kmm_from_buf, B_FALSE);
4950 4957 if (sp->slab_refcnt == 0)
4951 4958 cp->cache_defrag->kmd_slabs_freed++;
4952 4959 mutex_enter(&cp->cache_lock);
4953 4960 kmem_slab_move_yes(cp, sp, callback->kmm_from_buf);
4954 4961 mutex_exit(&cp->cache_lock);
4955 4962 break;
4956 4963 case KMEM_CBRC_DONT_KNOW:
4957 4964 KMEM_STAT_ADD(kmem_move_stats.kms_dont_know);
4958 4965 cp->cache_defrag->kmd_dont_know++;
4959 4966 if (kmem_hunt_mags(cp, callback->kmm_from_buf) != NULL) {
4960 4967 KMEM_STAT_ADD(kmem_move_stats.kms_hunt_found_mag);
4961 4968 cp->cache_defrag->kmd_hunt_found++;
4962 4969 kmem_slab_free_constructed(cp, callback->kmm_from_buf,
4963 4970 B_TRUE);
4964 4971 if (sp->slab_refcnt == 0)
4965 4972 cp->cache_defrag->kmd_slabs_freed++;
4966 4973 mutex_enter(&cp->cache_lock);
4967 4974 kmem_slab_move_yes(cp, sp, callback->kmm_from_buf);
4968 4975 mutex_exit(&cp->cache_lock);
4969 4976 }
4970 4977 break;
4971 4978 default:
4972 4979 panic("'%s' (%p) unexpected move callback response %d\n",
4973 4980 cp->cache_name, (void *)cp, response);
4974 4981 }
4975 4982
4976 4983 kmem_slab_free_constructed(cp, callback->kmm_to_buf, B_FALSE);
4977 4984 kmem_move_end(cp, callback);
4978 4985 }
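
Seen from the client's side, the responses handled above might be produced by a callback along these lines. This is a sketch only: foo_t, its fields, foo_list_lock, foo_is_known(), and foo_repoint() are invented names, and a real callback must also cope with the object being freed or reused at any moment, which is exactly what DONT_KNOW is for.

/* invented global lock protecting the client's view of its objects */
static kmutex_t foo_list_lock;

static kmem_cbrc_t
foo_move(void *old, void *new, size_t size, void *arg)
{
	foo_t *fp = old;

	mutex_enter(&foo_list_lock);
	if (!foo_is_known(fp)) {
		/* can't tell whether 'old' is still a live object */
		mutex_exit(&foo_list_lock);
		return (KMEM_CBRC_DONT_KNOW);	/* kmem hunts the magazines */
	}
	if (fp->foo_hold != 0) {
		mutex_exit(&foo_list_lock);
		return (KMEM_CBRC_LATER);	/* busy now; ask again later */
	}
	if (fp->foo_dead) {
		mutex_exit(&foo_list_lock);
		return (KMEM_CBRC_DONT_NEED);	/* kmem frees both buffers */
	}
	bcopy(old, new, size);			/* relocate the object */
	foo_repoint(fp, new);			/* fix external references */
	mutex_exit(&foo_list_lock);
	return (KMEM_CBRC_YES);			/* kmem frees the old buffer */
}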
4979 4986
4980 4987 /* Return B_FALSE if there is insufficient memory for the move request. */
4981 4988 static boolean_t
4982 4989 kmem_move_begin(kmem_cache_t *cp, kmem_slab_t *sp, void *buf, int flags)
4983 4990 {
4984 4991 void *to_buf;
4985 4992 avl_index_t index;
4986 4993 kmem_move_t *callback, *pending;
4987 4994 ulong_t n;
4988 4995
4989 4996 ASSERT(taskq_member(kmem_taskq, curthread));
4990 4997 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock));
4991 4998 ASSERT(sp->slab_flags & KMEM_SLAB_MOVE_PENDING);
4992 4999
4993 5000 callback = kmem_cache_alloc(kmem_move_cache, KM_NOSLEEP);
4994 5001 if (callback == NULL) {
4995 5002 KMEM_STAT_ADD(kmem_move_stats.kms_callback_alloc_fail);
4996 5003 return (B_FALSE);
4997 5004 }
4998 5005
4999 5006 callback->kmm_from_slab = sp;
5000 5007 callback->kmm_from_buf = buf;
5001 5008 callback->kmm_flags = flags;
5002 5009
5003 5010 mutex_enter(&cp->cache_lock);
5004 5011
5005 5012 n = avl_numnodes(&cp->cache_partial_slabs);
5006 5013 if ((n == 0) || ((n == 1) && !(flags & KMM_DEBUG))) {
5007 5014 mutex_exit(&cp->cache_lock);
5008 5015 kmem_cache_free(kmem_move_cache, callback);
5009 5016 return (B_TRUE); /* there is no need for the move request */
5010 5017 }
5011 5018
5012 5019 pending = avl_find(&cp->cache_defrag->kmd_moves_pending, buf, &index);
5013 5020 if (pending != NULL) {
5014 5021 /*
5015 5022 * If the move is already pending and we're desperate now,
5016 5023 * update the move flags.
5017 5024 */
5018 5025 if (flags & KMM_DESPERATE) {
5019 5026 pending->kmm_flags |= KMM_DESPERATE;
5020 5027 }
5021 5028 mutex_exit(&cp->cache_lock);
5022 5029 KMEM_STAT_ADD(kmem_move_stats.kms_already_pending);
5023 5030 kmem_cache_free(kmem_move_cache, callback);
5024 5031 return (B_TRUE);
5025 5032 }
5026 5033
5027 5034 to_buf = kmem_slab_alloc_impl(cp, avl_first(&cp->cache_partial_slabs),
5028 5035 B_FALSE);
5029 5036 callback->kmm_to_buf = to_buf;
5030 5037 avl_insert(&cp->cache_defrag->kmd_moves_pending, callback, index);
5031 5038
5032 5039 mutex_exit(&cp->cache_lock);
5033 5040
5034 5041 if (!taskq_dispatch(kmem_move_taskq, (task_func_t *)kmem_move_buffer,
5035 5042 callback, TQ_NOSLEEP)) {
5036 5043 KMEM_STAT_ADD(kmem_move_stats.kms_callback_taskq_fail);
5037 5044 mutex_enter(&cp->cache_lock);
5038 5045 avl_remove(&cp->cache_defrag->kmd_moves_pending, callback);
5039 5046 mutex_exit(&cp->cache_lock);
5040 5047 kmem_slab_free(cp, to_buf);
5041 5048 kmem_cache_free(kmem_move_cache, callback);
5042 5049 return (B_FALSE);
5043 5050 }
5044 5051
5045 5052 return (B_TRUE);
5046 5053 }
5047 5054
5048 5055 static void
5049 5056 kmem_move_end(kmem_cache_t *cp, kmem_move_t *callback)
5050 5057 {
5051 5058 avl_index_t index;
5052 5059
5053 5060 ASSERT(cp->cache_defrag != NULL);
5054 5061 ASSERT(taskq_member(kmem_move_taskq, curthread));
5055 5062 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock));
5056 5063
5057 5064 mutex_enter(&cp->cache_lock);
5058 5065 VERIFY(avl_find(&cp->cache_defrag->kmd_moves_pending,
5059 5066 callback->kmm_from_buf, &index) != NULL);
5060 5067 avl_remove(&cp->cache_defrag->kmd_moves_pending, callback);
5061 5068 if (avl_is_empty(&cp->cache_defrag->kmd_moves_pending)) {
5062 5069 list_t *deadlist = &cp->cache_defrag->kmd_deadlist;
5063 5070 kmem_slab_t *sp;
5064 5071
5065 5072 /*
5066 5073 * The last pending move completed. Release all slabs from the
5067 5074 * front of the dead list except for any slab at the tail that
5068 5075 * needs to be released from the context of kmem_move_buffers().
5069 5076 * kmem deferred unmapping the buffers on these slabs in order
5070 5077 * to guarantee that buffers passed to the move callback have
5071 5078 * been touched only by kmem or by the client itself.
5072 5079 */
5073 5080 while ((sp = list_remove_head(deadlist)) != NULL) {
5074 5081 if (sp->slab_flags & KMEM_SLAB_MOVE_PENDING) {
5075 5082 list_insert_tail(deadlist, sp);
5076 5083 break;
5077 5084 }
5078 5085 cp->cache_defrag->kmd_deadcount--;
5079 5086 cp->cache_slab_destroy++;
5080 5087 mutex_exit(&cp->cache_lock);
5081 5088 kmem_slab_destroy(cp, sp);
5082 5089 KMEM_STAT_ADD(kmem_move_stats.kms_dead_slabs_freed);
5083 5090 mutex_enter(&cp->cache_lock);
5084 5091 }
5085 5092 }
5086 5093 mutex_exit(&cp->cache_lock);
5087 5094 kmem_cache_free(kmem_move_cache, callback);
5088 5095 }
5089 5096
5090 5097 /*
5091 5098 * Move buffers from least used slabs first by scanning backwards from the end
5092 5099 * of the partial slab list. Scan at most max_scan candidate slabs and move
5093 5100 * buffers from at most max_slabs slabs (0 for all partial slabs in both cases).
5094 5101 * If desperate to reclaim memory, move buffers from any partial slab, otherwise
5095 5102 * skip slabs with a ratio of allocated buffers at or above the current
5096 5103 * threshold. Return the number of unskipped slabs (at most max_slabs, -1 if the
5097 5104 * scan is aborted) so that the caller can adjust the reclaimability threshold
5098 5105 * depending on how many reclaimable slabs it finds.
5099 5106 *
5100 5107 * kmem_move_buffers() drops and reacquires cache_lock every time it issues a
5101 5108 * move request, since it is not valid for kmem_move_begin() to call
5102 5109 * kmem_cache_alloc() or taskq_dispatch() with cache_lock held.
5103 5110 */
5104 5111 static int
5105 5112 kmem_move_buffers(kmem_cache_t *cp, size_t max_scan, size_t max_slabs,
5106 5113 int flags)
5107 5114 {
5108 5115 kmem_slab_t *sp;
5109 5116 void *buf;
5110 5117 int i, j; /* slab index, buffer index */
5111 5118 int s; /* reclaimable slabs */
5112 5119 int b; /* allocated (movable) buffers on reclaimable slab */
5113 5120 boolean_t success;
5114 5121 int refcnt;
5115 5122 int nomove;
5116 5123
5117 5124 ASSERT(taskq_member(kmem_taskq, curthread));
5118 5125 ASSERT(MUTEX_HELD(&cp->cache_lock));
5119 5126 ASSERT(kmem_move_cache != NULL);
5120 5127 ASSERT(cp->cache_move != NULL && cp->cache_defrag != NULL);
5121 5128 ASSERT((flags & KMM_DEBUG) ? !avl_is_empty(&cp->cache_partial_slabs) :
5122 5129 avl_numnodes(&cp->cache_partial_slabs) > 1);
5123 5130
5124 5131 if (kmem_move_blocked) {
5125 5132 return (0);
5126 5133 }
5127 5134
5128 5135 if (kmem_move_fulltilt) {
5129 5136 flags |= KMM_DESPERATE;
5130 5137 }
5131 5138
5132 5139 if (max_scan == 0 || (flags & KMM_DESPERATE)) {
5133 5140 /*
5134 5141 * Scan as many slabs as needed to find the desired number of
5135 5142 * candidate slabs.
5136 5143 */
5137 5144 max_scan = (size_t)-1;
5138 5145 }
5139 5146
5140 5147 if (max_slabs == 0 || (flags & KMM_DESPERATE)) {
5141 5148 /* Find as many candidate slabs as possible. */
5142 5149 max_slabs = (size_t)-1;
5143 5150 }
5144 5151
5145 5152 sp = avl_last(&cp->cache_partial_slabs);
5146 5153 ASSERT(KMEM_SLAB_IS_PARTIAL(sp));
5147 5154 for (i = 0, s = 0; (i < max_scan) && (s < max_slabs) && (sp != NULL) &&
5148 5155 ((sp != avl_first(&cp->cache_partial_slabs)) ||
5149 5156 (flags & KMM_DEBUG));
5150 5157 sp = AVL_PREV(&cp->cache_partial_slabs, sp), i++) {
5151 5158
5152 5159 if (!kmem_slab_is_reclaimable(cp, sp, flags)) {
5153 5160 continue;
5154 5161 }
5155 5162 s++;
5156 5163
5157 5164 /* Look for allocated buffers to move. */
5158 5165 for (j = 0, b = 0, buf = sp->slab_base;
5159 5166 (j < sp->slab_chunks) && (b < sp->slab_refcnt);
5160 5167 buf = (((char *)buf) + cp->cache_chunksize), j++) {
5161 5168
5162 5169 if (kmem_slab_allocated(cp, sp, buf) == NULL) {
5163 5170 continue;
5164 5171 }
5165 5172
5166 5173 b++;
5167 5174
5168 5175 /*
5169 5176 * Prevent the slab from being destroyed while we drop
5170 5177 * cache_lock and while the pending move is not yet
5171 5178 * registered. Flag the pending move while
5172 5179 * kmd_moves_pending may still be empty, since we can't
5173 5180 * yet rely on a non-zero pending move count to prevent
5174 5181 * the slab from being destroyed.
5175 5182 */
5176 5183 ASSERT(!(sp->slab_flags & KMEM_SLAB_MOVE_PENDING));
5177 5184 sp->slab_flags |= KMEM_SLAB_MOVE_PENDING;
5178 5185 /*
5179 5186 * Recheck refcnt and nomove after reacquiring the lock,
5180 5187 * since these control the order of partial slabs, and
5181 5188 * we want to know if we can pick up the scan where we
5182 5189 * left off.
5183 5190 */
5184 5191 refcnt = sp->slab_refcnt;
5185 5192 nomove = (sp->slab_flags & KMEM_SLAB_NOMOVE);
5186 5193 mutex_exit(&cp->cache_lock);
5187 5194
5188 5195 success = kmem_move_begin(cp, sp, buf, flags);
5189 5196
5190 5197 /*
5191 5198 * Now, before the lock is reacquired, kmem could
5192 5199 * process all pending move requests and purge the
5193 5200 * deadlist, so that upon reacquiring the lock, sp has
5194 5201 * been remapped. Or, the client may free all the
5195 5202 * objects on the slab while the pending moves are still
5196 5203 * on the taskq. Therefore, the KMEM_SLAB_MOVE_PENDING
5197 5204 * flag causes the slab to be put at the end of the
5198 5205 * deadlist and prevents it from being destroyed, since
5199 5206 * we plan to destroy it here after reacquiring the
5200 5207 * lock.
5201 5208 */
5202 5209 mutex_enter(&cp->cache_lock);
5203 5210 ASSERT(sp->slab_flags & KMEM_SLAB_MOVE_PENDING);
5204 5211 sp->slab_flags &= ~KMEM_SLAB_MOVE_PENDING;
5205 5212
5206 5213 if (sp->slab_refcnt == 0) {
5207 5214 list_t *deadlist =
5208 5215 &cp->cache_defrag->kmd_deadlist;
5209 5216 list_remove(deadlist, sp);
5210 5217
5211 5218 if (!avl_is_empty(
5212 5219 &cp->cache_defrag->kmd_moves_pending)) {
5213 5220 /*
5214 5221 * A pending move makes it unsafe to
5215 5222 * destroy the slab, because even though
5216 5223 * the move is no longer needed, the
5217 5224 * context where that is determined
5218 5225 * requires the slab to exist.
5219 5226 * Fortunately, a pending move also
5220 5227 * means we don't need to destroy the
5221 5228 * slab here, since it will get
5222 5229 * destroyed along with any other slabs
5223 5230 * on the deadlist after the last
5224 5231 * pending move completes.
5225 5232 */
5226 5233 list_insert_head(deadlist, sp);
5227 5234 KMEM_STAT_ADD(kmem_move_stats.
5228 5235 kms_endscan_slab_dead);
5229 5236 return (-1);
5230 5237 }
5231 5238
5232 5239 /*
5233 5240 * Destroy the slab now if it was completely
5234 5241 * freed while we dropped cache_lock and there
5235 5242 * are no pending moves. Since slab_refcnt
5236 5243 * cannot change once it reaches zero, no new
5237 5244 * pending moves from that slab are possible.
5238 5245 */
5239 5246 cp->cache_defrag->kmd_deadcount--;
5240 5247 cp->cache_slab_destroy++;
5241 5248 mutex_exit(&cp->cache_lock);
5242 5249 kmem_slab_destroy(cp, sp);
5243 5250 KMEM_STAT_ADD(kmem_move_stats.
5244 5251 kms_dead_slabs_freed);
5245 5252 KMEM_STAT_ADD(kmem_move_stats.
5246 5253 kms_endscan_slab_destroyed);
5247 5254 mutex_enter(&cp->cache_lock);
5248 5255 /*
5249 5256 * Since we can't pick up the scan where we left
5250 5257 * off, abort the scan and say nothing about the
5251 5258 * number of reclaimable slabs.
5252 5259 */
5253 5260 return (-1);
5254 5261 }
5255 5262
5256 5263 if (!success) {
5257 5264 /*
5258 5265 * Abort the scan if there is not enough memory
5259 5266 * for the request and say nothing about the
5260 5267 * number of reclaimable slabs.
5261 5268 */
5262 5269 KMEM_STAT_COND_ADD(s < max_slabs,
5263 5270 kmem_move_stats.kms_endscan_nomem);
5264 5271 return (-1);
5265 5272 }
5266 5273
5267 5274 /*
5268 5275 * The slab's position changed while the lock was
5269 5276 * dropped, so we don't know where we are in the
5270 5277 * sequence any more.
5271 5278 */
5272 5279 if (sp->slab_refcnt != refcnt) {
5273 5280 /*
5274 5281 * If this is a KMM_DEBUG move, the slab_refcnt
5275 5282 * may have changed because we allocated a
5276 5283 * destination buffer on the same slab. In that
5277 5284 * case, we're not interested in counting it.
5278 5285 */
5279 5286 KMEM_STAT_COND_ADD(!(flags & KMM_DEBUG) &&
5280 5287 (s < max_slabs),
5281 5288 kmem_move_stats.kms_endscan_refcnt_changed);
5282 5289 return (-1);
5283 5290 }
5284 5291 if ((sp->slab_flags & KMEM_SLAB_NOMOVE) != nomove) {
5285 5292 KMEM_STAT_COND_ADD(s < max_slabs,
5286 5293 kmem_move_stats.kms_endscan_nomove_changed);
5287 5294 return (-1);
5288 5295 }
5289 5296
5290 5297 /*
5291 5298 * Generating a move request allocates a destination
5292 5299 * buffer from the slab layer, bumping the first partial
5293 5300 * slab if it is completely allocated. If the current
5294 5301 * slab becomes the first partial slab as a result, we
5295 5302 * can't continue to scan backwards.
5296 5303 *
5297 5304 * If this is a KMM_DEBUG move and we allocated the
5298 5305 * destination buffer from the last partial slab, then
5299 5306 * the buffer we're moving is on the same slab and our
5300 5307 * slab_refcnt has changed, causing us to return before
5301 5308 * reaching here if there are no partial slabs left.
5302 5309 */
5303 5310 ASSERT(!avl_is_empty(&cp->cache_partial_slabs));
5304 5311 if (sp == avl_first(&cp->cache_partial_slabs)) {
5305 5312 /*
5306 5313 * We're not interested in a second KMM_DEBUG
5307 5314 * move.
5308 5315 */
5309 5316 goto end_scan;
5310 5317 }
5311 5318 }
5312 5319 }
5313 5320 end_scan:
5314 5321
5315 5322 KMEM_STAT_COND_ADD(!(flags & KMM_DEBUG) &&
5316 5323 (s < max_slabs) &&
5317 5324 (sp == avl_first(&cp->cache_partial_slabs)),
5318 5325 kmem_move_stats.kms_endscan_freelist);
5319 5326
5320 5327 return (s);
5321 5328 }
5322 5329
5323 5330 typedef struct kmem_move_notify_args {
5324 5331 kmem_cache_t *kmna_cache;
5325 5332 void *kmna_buf;
5326 5333 } kmem_move_notify_args_t;
5327 5334
5328 5335 static void
5329 5336 kmem_cache_move_notify_task(void *arg)
5330 5337 {
5331 5338 kmem_move_notify_args_t *args = arg;
5332 5339 kmem_cache_t *cp = args->kmna_cache;
5333 5340 void *buf = args->kmna_buf;
5334 5341 kmem_slab_t *sp;
5335 5342
5336 5343 ASSERT(taskq_member(kmem_taskq, curthread));
5337 5344 ASSERT(list_link_active(&cp->cache_link));
5338 5345
5339 5346 kmem_free(args, sizeof (kmem_move_notify_args_t));
5340 5347 mutex_enter(&cp->cache_lock);
5341 5348 sp = kmem_slab_allocated(cp, NULL, buf);
5342 5349
5343 5350 /* Ignore the notification if the buffer is no longer allocated. */
5344 5351 if (sp == NULL) {
5345 5352 mutex_exit(&cp->cache_lock);
5346 5353 return;
5347 5354 }
5348 5355
5349 5356 /* Ignore the notification if there's no reason to move the buffer. */
5350 5357 if (avl_numnodes(&cp->cache_partial_slabs) > 1) {
5351 5358 /*
5352 5359 * So far the notification is not ignored. Ignore the
5353 5360 * notification if the slab is not marked by an earlier refusal
5354 5361 * to move a buffer.
5355 5362 */
5356 5363 if (!(sp->slab_flags & KMEM_SLAB_NOMOVE) &&
5357 5364 (sp->slab_later_count == 0)) {
5358 5365 mutex_exit(&cp->cache_lock);
5359 5366 return;
5360 5367 }
5361 5368
5362 5369 kmem_slab_move_yes(cp, sp, buf);
5363 5370 ASSERT(!(sp->slab_flags & KMEM_SLAB_MOVE_PENDING));
5364 5371 sp->slab_flags |= KMEM_SLAB_MOVE_PENDING;
5365 5372 mutex_exit(&cp->cache_lock);
5366 5373 /* see kmem_move_buffers() about dropping the lock */
5367 5374 (void) kmem_move_begin(cp, sp, buf, KMM_NOTIFY);
5368 5375 mutex_enter(&cp->cache_lock);
5369 5376 ASSERT(sp->slab_flags & KMEM_SLAB_MOVE_PENDING);
5370 5377 sp->slab_flags &= ~KMEM_SLAB_MOVE_PENDING;
5371 5378 if (sp->slab_refcnt == 0) {
5372 5379 list_t *deadlist = &cp->cache_defrag->kmd_deadlist;
5373 5380 list_remove(deadlist, sp);
5374 5381
5375 5382 if (!avl_is_empty(
5376 5383 &cp->cache_defrag->kmd_moves_pending)) {
5377 5384 list_insert_head(deadlist, sp);
5378 5385 mutex_exit(&cp->cache_lock);
5379 5386 KMEM_STAT_ADD(kmem_move_stats.
5380 5387 kms_notify_slab_dead);
5381 5388 return;
5382 5389 }
5383 5390
5384 5391 cp->cache_defrag->kmd_deadcount--;
5385 5392 cp->cache_slab_destroy++;
5386 5393 mutex_exit(&cp->cache_lock);
5387 5394 kmem_slab_destroy(cp, sp);
5388 5395 KMEM_STAT_ADD(kmem_move_stats.kms_dead_slabs_freed);
5389 5396 KMEM_STAT_ADD(kmem_move_stats.
5390 5397 kms_notify_slab_destroyed);
5391 5398 return;
5392 5399 }
5393 5400 } else {
5394 5401 kmem_slab_move_yes(cp, sp, buf);
5395 5402 }
5396 5403 mutex_exit(&cp->cache_lock);
5397 5404 }
5398 5405
5399 5406 void
5400 5407 kmem_cache_move_notify(kmem_cache_t *cp, void *buf)
5401 5408 {
5402 5409 kmem_move_notify_args_t *args;
5403 5410
5404 5411 KMEM_STAT_ADD(kmem_move_stats.kms_notify);
5405 5412 args = kmem_alloc(sizeof (kmem_move_notify_args_t), KM_NOSLEEP);
5406 5413 if (args != NULL) {
5407 5414 args->kmna_cache = cp;
5408 5415 args->kmna_buf = buf;
5409 5416 if (!taskq_dispatch(kmem_taskq,
5410 5417 (task_func_t *)kmem_cache_move_notify_task, args,
5411 5418 TQ_NOSLEEP))
5412 5419 kmem_free(args, sizeof (kmem_move_notify_args_t));
5413 5420 }
5414 5421 }
5415 5422
5416 5423 static void
5417 5424 kmem_cache_defrag(kmem_cache_t *cp)
5418 5425 {
5419 5426 size_t n;
5420 5427
5421 5428 ASSERT(cp->cache_defrag != NULL);
5422 5429
5423 5430 mutex_enter(&cp->cache_lock);
5424 5431 n = avl_numnodes(&cp->cache_partial_slabs);
5425 5432 if (n > 1) {
5426 5433 /* kmem_move_buffers() drops and reacquires cache_lock */
5427 5434 KMEM_STAT_ADD(kmem_move_stats.kms_defrags);
5428 5435 cp->cache_defrag->kmd_defrags++;
5429 5436 (void) kmem_move_buffers(cp, n, 0, KMM_DESPERATE);
5430 5437 }
5431 5438 mutex_exit(&cp->cache_lock);
5432 5439 }
5433 5440
5434 5441 /* Is this cache above the fragmentation threshold? */
5435 5442 static boolean_t
5436 5443 kmem_cache_frag_threshold(kmem_cache_t *cp, uint64_t nfree)
5437 5444 {
5438 5445 /*
5439 5446 *       nfree              kmem_frag_numer
5440 5447 * ------------------   >   ---------------
5441 5448 * cp->cache_buftotal       kmem_frag_denom
5442 5449 */
5443 5450 return ((nfree * kmem_frag_denom) >
5444 5451 (cp->cache_buftotal * kmem_frag_numer));
5445 5452 }
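
With the default tunables this works out to a simple percentage test. The sketch below assumes kmem_frag_numer/kmem_frag_denom of 1/8 (i.e. a cache counts as fragmented once more than 12.5% of its buffers are free); it also shows how adding reapable magazine rounds to nfree in the caller can tip the result:

#include <stdio.h>

#define	FRAG_NUMER	1	/* assumed kmem_frag_numer */
#define	FRAG_DENOM	8	/* assumed kmem_frag_denom */

/* the same cross-multiplied comparison as kmem_cache_frag_threshold() */
static int
frag_threshold(unsigned long long buftotal, unsigned long long nfree)
{
	return ((nfree * FRAG_DENOM) > (buftotal * FRAG_NUMER));
}

int
main(void)
{
	/* 1000 buffers total, 100 free in the slab layer: 800 > 1000 fails */
	printf("slab free only:       %d\n", frag_threshold(1000, 100));
	/* plus 64 free buffers in reapable magazines: 1312 > 1000 passes */
	printf("with magazine rounds: %d\n", frag_threshold(1000, 164));
	return (0);
}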
5446 5453
5447 5454 static boolean_t
5448 5455 kmem_cache_is_fragmented(kmem_cache_t *cp, boolean_t *doreap)
5449 5456 {
5450 5457 boolean_t fragmented;
5451 5458 uint64_t nfree;
5452 5459
5453 5460 ASSERT(MUTEX_HELD(&cp->cache_lock));
5454 5461 *doreap = B_FALSE;
5455 5462
5456 5463 if (kmem_move_fulltilt) {
5457 5464 if (avl_numnodes(&cp->cache_partial_slabs) > 1) {
5458 5465 return (B_TRUE);
5459 5466 }
5460 5467 } else {
5461 5468 if ((cp->cache_complete_slab_count + avl_numnodes(
5462 5469 &cp->cache_partial_slabs)) < kmem_frag_minslabs) {
5463 5470 return (B_FALSE);
5464 5471 }
5465 5472 }
5466 5473
5467 5474 nfree = cp->cache_bufslab;
5468 5475 fragmented = ((avl_numnodes(&cp->cache_partial_slabs) > 1) &&
5469 5476 kmem_cache_frag_threshold(cp, nfree));
5470 5477
5471 5478 /*
5472 5479 * Free buffers in the magazine layer appear allocated from the point of
5473 5480 * view of the slab layer. We want to know if the slab layer would
5474 5481 * appear fragmented if we included free buffers from magazines that
5475 5482 * have fallen out of the working set.
5476 5483 */
5477 5484 if (!fragmented) {
5478 5485 long reap;
5479 5486
5480 5487 mutex_enter(&cp->cache_depot_lock);
5481 5488 reap = MIN(cp->cache_full.ml_reaplimit, cp->cache_full.ml_min);
5482 5489 reap = MIN(reap, cp->cache_full.ml_total);
5483 5490 mutex_exit(&cp->cache_depot_lock);
5484 5491
5485 5492 nfree += ((uint64_t)reap * cp->cache_magtype->mt_magsize);
5486 5493 if (kmem_cache_frag_threshold(cp, nfree)) {
5487 5494 *doreap = B_TRUE;
5488 5495 }
5489 5496 }
5490 5497
5491 5498 return (fragmented);
5492 5499 }
5493 5500
5494 5501 /* Called periodically from kmem_taskq */
5495 5502 static void
5496 5503 kmem_cache_scan(kmem_cache_t *cp)
5497 5504 {
5498 5505 boolean_t reap = B_FALSE;
5499 5506 kmem_defrag_t *kmd;
5500 5507
5501 5508 ASSERT(taskq_member(kmem_taskq, curthread));
5502 5509
5503 5510 mutex_enter(&cp->cache_lock);
5504 5511
5505 5512 kmd = cp->cache_defrag;
5506 5513 if (kmd->kmd_consolidate > 0) {
5507 5514 kmd->kmd_consolidate--;
5508 5515 mutex_exit(&cp->cache_lock);
5509 5516 kmem_cache_reap(cp);
5510 5517 return;
5511 5518 }
5512 5519
5513 5520 if (kmem_cache_is_fragmented(cp, &reap)) {
5514 5521 int slabs_found;
5515 5522
5516 5523 /*
5517 5524 * Consolidate reclaimable slabs from the end of the partial
5518 5525 * slab list (scan at most kmem_reclaim_scan_range slabs to find
5519 5526 * reclaimable slabs). Keep track of how many candidate slabs we
5520 5527 * looked for and how many we actually found so we can adjust
5521 5528 * the definition of a candidate slab if we're having trouble
5522 5529 * finding them.
5523 5530 *
5524 5531 * kmem_move_buffers() drops and reacquires cache_lock.
5525 5532 */
5526 5533 KMEM_STAT_ADD(kmem_move_stats.kms_scans);
5527 5534 kmd->kmd_scans++;
5528 5535 slabs_found = kmem_move_buffers(cp, kmem_reclaim_scan_range,
5529 5536 kmem_reclaim_max_slabs, 0);
5530 5537 if (slabs_found >= 0) {
5531 5538 kmd->kmd_slabs_sought += kmem_reclaim_max_slabs;
5532 5539 kmd->kmd_slabs_found += slabs_found;
5533 5540 }
5534 5541
5535 5542 if (++kmd->kmd_tries >= kmem_reclaim_scan_range) {
5536 5543 kmd->kmd_tries = 0;
5537 5544
5538 5545 /*
5539 5546 * If we had difficulty finding candidate slabs in
5540 5547 * previous scans, adjust the threshold so that
5541 5548 * candidates are easier to find.
5542 5549 */
5543 5550 if (kmd->kmd_slabs_found == kmd->kmd_slabs_sought) {
5544 5551 kmem_adjust_reclaim_threshold(kmd, -1);
5545 5552 } else if ((kmd->kmd_slabs_found * 2) <
5546 5553 kmd->kmd_slabs_sought) {
5547 5554 kmem_adjust_reclaim_threshold(kmd, 1);
5548 5555 }
5549 5556 kmd->kmd_slabs_sought = 0;
5550 5557 kmd->kmd_slabs_found = 0;
5551 5558 }
5552 5559 } else {
5553 5560 kmem_reset_reclaim_threshold(cp->cache_defrag);
5554 5561 #ifdef DEBUG
5555 5562 if (!avl_is_empty(&cp->cache_partial_slabs)) {
5556 5563 /*
5557 5564 * In a debug kernel we want the consolidator to
5558 5565 * run occasionally even when there is plenty of
5559 5566 * memory.
5560 5567 */
5561 5568 uint16_t debug_rand;
5562 5569
5563 5570 (void) random_get_bytes((uint8_t *)&debug_rand, 2);
5564 5571 if (!kmem_move_noreap &&
5565 5572 ((debug_rand % kmem_mtb_reap) == 0)) {
5566 5573 mutex_exit(&cp->cache_lock);
5567 5574 KMEM_STAT_ADD(kmem_move_stats.kms_debug_reaps);
5568 5575 kmem_cache_reap(cp);
5569 5576 return;
5570 5577 } else if ((debug_rand % kmem_mtb_move) == 0) {
5571 5578 KMEM_STAT_ADD(kmem_move_stats.kms_scans);
5572 5579 KMEM_STAT_ADD(kmem_move_stats.kms_debug_scans);
5573 5580 kmd->kmd_scans++;
5574 5581 (void) kmem_move_buffers(cp,
5575 5582 kmem_reclaim_scan_range, 1, KMM_DEBUG);
5576 5583 }
5577 5584 }
5578 5585 #endif /* DEBUG */
5579 5586 }
5580 5587
5581 5588 mutex_exit(&cp->cache_lock);
5582 5589
5583 5590 if (reap) {
5584 5591 KMEM_STAT_ADD(kmem_move_stats.kms_scan_depot_ws_reaps);
5585 5592 kmem_depot_ws_reap(cp);
5586 5593 }
5587 5594 }
2295 lines elided