/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_JUMP_LABEL_H
#define _LINUX_JUMP_LABEL_H

/*
 * Jump label support
 *
 * Copyright (C) 2009-2012 Jason Baron <[email protected]>
 * Copyright (C) 2011-2012 Red Hat, Inc., Peter Zijlstra
 *
 * DEPRECATED API:
 *
 * The use of 'struct static_key' directly is now DEPRECATED. In addition,
 * static_key_{true,false}() is also DEPRECATED. I.e., DO NOT use the following:
 *
 * struct static_key false = STATIC_KEY_INIT_FALSE;
 * struct static_key true = STATIC_KEY_INIT_TRUE;
 * static_key_true()
 * static_key_false()
 *
 * The updated API replacements are:
 *
 * DEFINE_STATIC_KEY_TRUE(key);
 * DEFINE_STATIC_KEY_FALSE(key);
 * DEFINE_STATIC_KEY_ARRAY_TRUE(keys, count);
 * DEFINE_STATIC_KEY_ARRAY_FALSE(keys, count);
 * static_branch_likely()
 * static_branch_unlikely()
 *
 * Jump labels provide an interface to generate dynamic branches using
 * self-modifying code. Assuming toolchain and architecture support, if we
 * define a "key" that is initially false via "DEFINE_STATIC_KEY_FALSE(key)",
 * an "if (static_branch_unlikely(&key))" statement is an unconditional branch
 * (which defaults to false - and the true block is placed out of line).
 * Similarly, we can define an initially true key via
 * "DEFINE_STATIC_KEY_TRUE(key)", and use it in the same
 * "if (static_branch_unlikely(&key))", in which case we will generate an
 * unconditional branch to the out-of-line true branch. Keys that are
 * initially true or false can be used in both static_branch_unlikely()
 * and static_branch_likely() statements.
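 *
 * A minimal usage sketch (the key and helper names here are purely
 * illustrative, not taken from any subsystem):
 *
 * DEFINE_STATIC_KEY_FALSE(my_feature_key);
 *
 * void hot_path(void)
 * {
 *         if (static_branch_unlikely(&my_feature_key))
 *                 do_rarely_enabled_work();
 * }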
 *
 * At runtime we can change the branch target by setting the key
 * to true via a call to static_branch_enable(), or false using
 * static_branch_disable(). If the direction of the branch is switched by
 * these calls then we run-time modify the branch target via a
 * no-op -> jump or jump -> no-op conversion. For example, for an
 * initially false key that is used in an "if (static_branch_unlikely(&key))"
 * statement, setting the key to true requires us to patch in a jump
 * to the out-of-line true branch.
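 *
 * Continuing the illustrative sketch above, enabling the key patches the
 * NOP into a jump, and disabling it patches the jump back to a NOP:
 *
 * static_branch_enable(&my_feature_key);
 * static_branch_disable(&my_feature_key);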
 *
 * In addition to static_branch_{enable,disable}, we can also reference count
 * the key or branch direction via static_branch_{inc,dec}. Thus,
 * static_branch_inc() can be thought of as a 'make more true' and
 * static_branch_dec() as a 'make more false'.
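 *
 * For instance (sketch, continuing the example above; the branch is true
 * whenever the count is non-zero):
 *
 * static_branch_inc(&my_feature_key);   (count 0 -> 1, branch patched to true)
 * static_branch_inc(&my_feature_key);   (count 1 -> 2, no code change)
 * static_branch_dec(&my_feature_key);   (count 2 -> 1, no code change)
 * static_branch_dec(&my_feature_key);   (count 1 -> 0, branch patched to false)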
 *
 * Since this relies on modifying code, the branch modifying functions
 * must be considered absolute slow paths (machine wide synchronization etc.).
 * OTOH, since the affected branches are unconditional, their runtime overhead
 * will be absolutely minimal, esp. in the default (off) case where the total
 * effect is a single NOP of appropriate size. The on case will patch in a jump
 * to the out-of-line block.
 *
 * When the control is directly exposed to userspace, it is prudent to delay the
 * decrement to avoid high frequency code modifications which can (and do)
 * cause significant performance degradation. Struct static_key_deferred and
 * static_key_slow_dec_deferred() provide for this.
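 *
 * A rough sketch of the deferred form (the deferred API is declared in
 * linux/jump_label_ratelimit.h; the key name is illustrative):
 *
 * static struct static_key_deferred my_deferred_key;
 *
 * jump_label_rate_limit(&my_deferred_key, HZ);     (defer decrements ~1 second)
 * static_key_slow_inc(&my_deferred_key.key);
 * static_key_slow_dec_deferred(&my_deferred_key);  (decrement applied later)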
 *
 * Lacking toolchain and/or architecture support, static keys fall back to a
 * simple conditional branch.
 *
 * Additional babbling in: Documentation/staging/static-keys.rst
 */

#ifndef __ASSEMBLY__

#include <linux/types.h>
#include <linux/compiler.h>

extern bool static_key_initialized;

#define STATIC_KEY_CHECK_USE(key) WARN(!static_key_initialized,		      \
				    "%s(): static key '%pS' used before call to jump_label_init()", \
				    __func__, (key))

struct static_key {
	atomic_t enabled;
#ifdef CONFIG_JUMP_LABEL
/*
 * Note:
 *   To make anonymous unions work with old compilers, the static
 *   initialization of them requires brackets. This creates a dependency
 *   on the order of the struct with the initializers. If any fields
 *   are added, STATIC_KEY_INIT_TRUE and STATIC_KEY_INIT_FALSE may need
 *   to be modified.
 *
 * bit 0 => 1 if key is initially true
 *	    0 if initially false
 * bit 1 => 1 if points to struct static_key_mod
 *	    0 if points to struct jump_entry
 */
	union {
		unsigned long type;
		struct jump_entry *entries;
		struct static_key_mod *next;
	};
#endif	/* CONFIG_JUMP_LABEL */
};

#endif /* __ASSEMBLY__ */

#ifdef CONFIG_JUMP_LABEL
#include <asm/jump_label.h>

#ifndef __ASSEMBLY__
#ifdef CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE

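/*
 * With relative references, each field stores a signed offset from its own
 * address instead of an absolute pointer: 'code' and 'target' fit in 32 bits,
 * the table needs no relocation under KASLR, and the accessors below recover
 * the absolute addresses by adding the field's own address back. The low two
 * bits of the 'key' offset are re-used as flags (see jump_entry_is_branch()
 * and jump_entry_is_init() below), which is why they are masked off in
 * jump_entry_key().
 */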
struct jump_entry {
	s32 code;
	s32 target;
	long key;	// key may be far away from the core kernel under KASLR
};

static inline unsigned long jump_entry_code(const struct jump_entry *entry)
{
	return (unsigned long)&entry->code + entry->code;
}

static inline unsigned long jump_entry_target(const struct jump_entry *entry)
{
	return (unsigned long)&entry->target + entry->target;
}

static inline struct static_key *jump_entry_key(const struct jump_entry *entry)
{
	long offset = entry->key & ~3L;

	return (struct static_key *)((unsigned long)&entry->key + offset);
}

#else

static inline unsigned long jump_entry_code(const struct jump_entry *entry)
{
	return entry->code;
}

static inline unsigned long jump_entry_target(const struct jump_entry *entry)
{
	return entry->target;
}

static inline struct static_key *jump_entry_key(const struct jump_entry *entry)
{
	return (struct static_key *)((unsigned long)entry->key & ~3UL);
}

#endif

static inline bool jump_entry_is_branch(const struct jump_entry *entry)
{
	return (unsigned long)entry->key & 1UL;
}

static inline bool jump_entry_is_init(const struct jump_entry *entry)
{
	return (unsigned long)entry->key & 2UL;
}

static inline void jump_entry_set_init(struct jump_entry *entry, bool set)
{
	if (set)
		entry->key |= 2;
	else
		entry->key &= ~2;
}

static inline int jump_entry_size(struct jump_entry *entry)
{
#ifdef JUMP_LABEL_NOP_SIZE
	return JUMP_LABEL_NOP_SIZE;
#else
	return arch_jump_entry_size(entry);
#endif
}

#endif
#endif

#ifndef __ASSEMBLY__

enum jump_label_type {
	JUMP_LABEL_NOP = 0,
	JUMP_LABEL_JMP,
};

struct module;

#ifdef CONFIG_JUMP_LABEL

#define JUMP_TYPE_FALSE		0UL
#define JUMP_TYPE_TRUE		1UL
#define JUMP_TYPE_LINKED	2UL
#define JUMP_TYPE_MASK		3UL

static __always_inline bool static_key_false(struct static_key *key)
{
	return arch_static_branch(key, false);
}

static __always_inline bool static_key_true(struct static_key *key)
{
	return !arch_static_branch(key, true);
}

extern struct jump_entry __start___jump_table[];
extern struct jump_entry __stop___jump_table[];

extern void jump_label_init(void);
extern void jump_label_lock(void);
extern void jump_label_unlock(void);
extern void arch_jump_label_transform(struct jump_entry *entry,
				      enum jump_label_type type);
extern bool arch_jump_label_transform_queue(struct jump_entry *entry,
					    enum jump_label_type type);
extern void arch_jump_label_transform_apply(void);
extern int jump_label_text_reserved(void *start, void *end);
extern void static_key_slow_inc(struct static_key *key);
extern void static_key_slow_dec(struct static_key *key);
extern void static_key_slow_inc_cpuslocked(struct static_key *key);
extern void static_key_slow_dec_cpuslocked(struct static_key *key);
extern int static_key_count(struct static_key *key);
extern void static_key_enable(struct static_key *key);
extern void static_key_disable(struct static_key *key);
extern void static_key_enable_cpuslocked(struct static_key *key);
extern void static_key_disable_cpuslocked(struct static_key *key);
extern enum jump_label_type jump_label_init_type(struct jump_entry *entry);

/*
 * We should be using ATOMIC_INIT() for initializing .enabled, but
 * the inclusion of atomic.h is problematic for inclusion of jump_label.h
 * in 'low-level' headers. Thus, we are initializing .enabled with a
 * raw value, but have added a BUILD_BUG_ON() in jump_label_init() to
 * catch any issues; see kernel/jump_label.c.
 */
#define STATIC_KEY_INIT_TRUE					\
	{ .enabled = { 1 },					\
	  { .type = JUMP_TYPE_TRUE } }
#define STATIC_KEY_INIT_FALSE					\
	{ .enabled = { 0 },					\
	  { .type = JUMP_TYPE_FALSE } }

#else  /* !CONFIG_JUMP_LABEL */

#include <linux/atomic.h>
#include <linux/bug.h>

static __always_inline int static_key_count(struct static_key *key)
{
	return arch_atomic_read(&key->enabled);
}

static __always_inline void jump_label_init(void)
{
	static_key_initialized = true;
}

static __always_inline bool static_key_false(struct static_key *key)
{
	if (unlikely_notrace(static_key_count(key) > 0))
		return true;
	return false;
}

static __always_inline bool static_key_true(struct static_key *key)
{
	if (likely_notrace(static_key_count(key) > 0))
		return true;
	return false;
}

static inline void static_key_slow_inc(struct static_key *key)
{
	STATIC_KEY_CHECK_USE(key);
	atomic_inc(&key->enabled);
}

static inline void static_key_slow_dec(struct static_key *key)
{
	STATIC_KEY_CHECK_USE(key);
	atomic_dec(&key->enabled);
}

#define static_key_slow_inc_cpuslocked(key) static_key_slow_inc(key)
#define static_key_slow_dec_cpuslocked(key) static_key_slow_dec(key)

static inline int jump_label_text_reserved(void *start, void *end)
{
	return 0;
}

static inline void jump_label_lock(void) {}
static inline void jump_label_unlock(void) {}

static inline void static_key_enable(struct static_key *key)
{
	STATIC_KEY_CHECK_USE(key);

	if (atomic_read(&key->enabled) != 0) {
		WARN_ON_ONCE(atomic_read(&key->enabled) != 1);
		return;
	}
	atomic_set(&key->enabled, 1);
}

static inline void static_key_disable(struct static_key *key)
{
	STATIC_KEY_CHECK_USE(key);

	if (atomic_read(&key->enabled) != 1) {
		WARN_ON_ONCE(atomic_read(&key->enabled) != 0);
		return;
	}
	atomic_set(&key->enabled, 0);
}

#define static_key_enable_cpuslocked(k)		static_key_enable((k))
#define static_key_disable_cpuslocked(k)	static_key_disable((k))

#define STATIC_KEY_INIT_TRUE	{ .enabled = ATOMIC_INIT(1) }
#define STATIC_KEY_INIT_FALSE	{ .enabled = ATOMIC_INIT(0) }

#endif	/* CONFIG_JUMP_LABEL */

#define STATIC_KEY_INIT STATIC_KEY_INIT_FALSE
#define jump_label_enabled static_key_enabled

/* -------------------------------------------------------------------------- */

/*
 * Two type wrappers around static_key, such that we can use compile time
 * type differentiation to emit the right code.
 *
 * All the below code is macros in order to play type games.
 */

struct static_key_true {
	struct static_key key;
};

struct static_key_false {
	struct static_key key;
};

#define STATIC_KEY_TRUE_INIT  (struct static_key_true) { .key = STATIC_KEY_INIT_TRUE,  }
#define STATIC_KEY_FALSE_INIT (struct static_key_false){ .key = STATIC_KEY_INIT_FALSE, }

#define DEFINE_STATIC_KEY_TRUE(name)	\
	struct static_key_true name = STATIC_KEY_TRUE_INIT

#define DEFINE_STATIC_KEY_TRUE_RO(name)	\
	struct static_key_true name __ro_after_init = STATIC_KEY_TRUE_INIT

#define DECLARE_STATIC_KEY_TRUE(name)	\
	extern struct static_key_true name

#define DEFINE_STATIC_KEY_FALSE(name)	\
	struct static_key_false name = STATIC_KEY_FALSE_INIT

#define DEFINE_STATIC_KEY_FALSE_RO(name)	\
	struct static_key_false name __ro_after_init = STATIC_KEY_FALSE_INIT

#define DECLARE_STATIC_KEY_FALSE(name)	\
	extern struct static_key_false name

#define DEFINE_STATIC_KEY_ARRAY_TRUE(name, count)		\
	struct static_key_true name[count] = {			\
		[0 ... (count) - 1] = STATIC_KEY_TRUE_INIT,	\
	}

#define DEFINE_STATIC_KEY_ARRAY_FALSE(name, count)		\
	struct static_key_false name[count] = {			\
		[0 ... (count) - 1] = STATIC_KEY_FALSE_INIT,	\
	}
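
/*
 * Typical split (illustrative names): the key is defined once in a .c file
 * and declared for other users in a header:
 *
 *	// foo.c
 *	DEFINE_STATIC_KEY_FALSE(foo_key);
 *
 *	// foo.h
 *	DECLARE_STATIC_KEY_FALSE(foo_key);
 */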

#define _DEFINE_STATIC_KEY_1(name)	DEFINE_STATIC_KEY_TRUE(name)
#define _DEFINE_STATIC_KEY_0(name)	DEFINE_STATIC_KEY_FALSE(name)
#define DEFINE_STATIC_KEY_MAYBE(cfg, name)			\
	__PASTE(_DEFINE_STATIC_KEY_, IS_ENABLED(cfg))(name)

#define _DEFINE_STATIC_KEY_RO_1(name)	DEFINE_STATIC_KEY_TRUE_RO(name)
#define _DEFINE_STATIC_KEY_RO_0(name)	DEFINE_STATIC_KEY_FALSE_RO(name)
#define DEFINE_STATIC_KEY_MAYBE_RO(cfg, name)			\
	__PASTE(_DEFINE_STATIC_KEY_RO_, IS_ENABLED(cfg))(name)

#define _DECLARE_STATIC_KEY_1(name)	DECLARE_STATIC_KEY_TRUE(name)
#define _DECLARE_STATIC_KEY_0(name)	DECLARE_STATIC_KEY_FALSE(name)
#define DECLARE_STATIC_KEY_MAYBE(cfg, name)			\
	__PASTE(_DECLARE_STATIC_KEY_, IS_ENABLED(cfg))(name)

extern bool ____wrong_branch_error(void);

#define static_key_enabled(x)							\
({										\
	if (!__builtin_types_compatible_p(typeof(*x), struct static_key) &&	\
	    !__builtin_types_compatible_p(typeof(*x), struct static_key_true) &&\
	    !__builtin_types_compatible_p(typeof(*x), struct static_key_false)) \
		____wrong_branch_error();					\
	static_key_count((struct static_key *)x) > 0;				\
})

#ifdef CONFIG_JUMP_LABEL

/*
 * Combine the right initial value (type) with the right branch order
 * to generate the desired result.
 *
 *
 * type\branch|	likely (1)	      |	unlikely (0)
 * -----------+-----------------------+------------------
 *            |                       |
 *  true (1)  |	   ...		      |	   ...
 *            |    NOP		      |	   JMP L
 *            |    <br-stmts>	      |	1: ...
 *            |	L: ...		      |
 *            |			      |
 *            |			      |	L: <br-stmts>
 *            |			      |	   jmp 1b
 *            |                       |
 * -----------+-----------------------+------------------
 *            |                       |
 *  false (0) |	   ...		      |	   ...
 *            |    JMP L	      |	   NOP
 *            |    <br-stmts>	      |	1: ...
 *            |	L: ...		      |
 *            |			      |
 *            |			      |	L: <br-stmts>
 *            |			      |	   jmp 1b
 *            |                       |
 * -----------+-----------------------+------------------
 *
 * The initial value is encoded in the LSB of static_key::entries,
 * type: 0 = false, 1 = true.
 *
 * The branch type is encoded in the LSB of jump_entry::key,
 * branch: 0 = unlikely, 1 = likely.
 *
 * This gives the following logic table:
 *
 *	enabled	type	branch	  instruction
 * -----------------------------+-----------
 *	0	0	0	| NOP
 *	0	0	1	| JMP
 *	0	1	0	| NOP
 *	0	1	1	| JMP
 *
 *	1	0	0	| JMP
 *	1	0	1	| NOP
 *	1	1	0	| JMP
 *	1	1	1	| NOP
 *
 * Which gives the following functions:
 *
 *   dynamic: instruction = enabled ^ branch
 *   static:  instruction = type ^ branch
 *
 * See jump_label_type() / jump_label_init_type().
 */

#define static_branch_likely(x)							\
({										\
	bool branch;								\
	if (__builtin_types_compatible_p(typeof(*x), struct static_key_true))	\
		branch = !arch_static_branch(&(x)->key, true);			\
	else if (__builtin_types_compatible_p(typeof(*x), struct static_key_false)) \
		branch = !arch_static_branch_jump(&(x)->key, true);		\
	else									\
		branch = ____wrong_branch_error();				\
	likely_notrace(branch);							\
})

#define static_branch_unlikely(x)						\
({										\
	bool branch;								\
	if (__builtin_types_compatible_p(typeof(*x), struct static_key_true))	\
		branch = arch_static_branch_jump(&(x)->key, false);		\
	else if (__builtin_types_compatible_p(typeof(*x), struct static_key_false)) \
		branch = arch_static_branch(&(x)->key, false);			\
	else									\
		branch = ____wrong_branch_error();				\
	unlikely_notrace(branch);						\
})

#else /* !CONFIG_JUMP_LABEL */

#define static_branch_likely(x)		likely_notrace(static_key_enabled(&(x)->key))
#define static_branch_unlikely(x)	unlikely_notrace(static_key_enabled(&(x)->key))

#endif /* CONFIG_JUMP_LABEL */

#define static_branch_maybe(config, x)					\
	(IS_ENABLED(config) ? static_branch_likely(x)			\
			    : static_branch_unlikely(x))
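
/*
 * Illustrative sketch only (CONFIG_MY_FEATURE_DEFAULT_ON is a made-up Kconfig
 * symbol): the initial key value and the branch bias both follow the config
 * option, so the compiled-in default path costs a single NOP either way:
 *
 *	DEFINE_STATIC_KEY_MAYBE(CONFIG_MY_FEATURE_DEFAULT_ON, my_maybe_key);
 *
 *	if (static_branch_maybe(CONFIG_MY_FEATURE_DEFAULT_ON, &my_maybe_key))
 *		do_feature_work();
 */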

/*
 * Advanced usage; refcount, branch is enabled when: count != 0
 */

#define static_branch_inc(x)		static_key_slow_inc(&(x)->key)
#define static_branch_dec(x)		static_key_slow_dec(&(x)->key)
#define static_branch_inc_cpuslocked(x)	static_key_slow_inc_cpuslocked(&(x)->key)
#define static_branch_dec_cpuslocked(x)	static_key_slow_dec_cpuslocked(&(x)->key)

/*
 * Normal usage; boolean enable/disable.
 */

#define static_branch_enable(x)			static_key_enable(&(x)->key)
#define static_branch_disable(x)		static_key_disable(&(x)->key)
#define static_branch_enable_cpuslocked(x)	static_key_enable_cpuslocked(&(x)->key)
#define static_branch_disable_cpuslocked(x)	static_key_disable_cpuslocked(&(x)->key)

#endif /* __ASSEMBLY__ */

#endif	/* _LINUX_JUMP_LABEL_H */