Linux Interrupt Management January 1, 2013
When set_ioapic_affinity_irq() is called with the interrupt number and a CPU mask as arguments, it in turn calls io_apic_write() to modify the corresponding entry in the interrupt redirection table, completing the setup of interrupt affinity. When a ping command runs, the NIC raises an interrupt. Based on the values in the redirection table and its arbitration mechanism, the multi-APIC system selects one of CPU0-CPU3 and delivers the signal to that CPU's local APIC, and the local APIC then interrupts its CPU; the event is not reported to any of the other CPUs.
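From user space, the same redirection-table update is driven by writing a hexadecimal CPU mask to /proc/irq/&lt;n&gt;/smp_affinity. As a minimal sketch, the helper below (its name and buffer handling are my own, not kernel code) builds the path and the mask string such a write would use:

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/*
 * Hypothetical helper (not kernel code): build the /proc path and the
 * hexadecimal CPU-mask string that would be written, as root, to bind
 * IRQ `irq` to the CPUs set in `mask`. Inside the kernel, that write
 * ends up updating the I/O APIC redirection entry, just as
 * set_ioapic_affinity_irq() does.
 */
static void affinity_strings(unsigned int irq, unsigned long mask,
                             char *path, size_t plen,
                             char *val, size_t vlen)
{
    snprintf(path, plen, "/proc/irq/%u/smp_affinity", irq);
    snprintf(val, vlen, "%lx", mask);
}
```

Writing "3" to /proc/irq/19/smp_affinity, for example, restricts IRQ 19 to CPU0 and CPU1.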
1.5.5 Interrupt Load Balancing on SMP Architectures
Linux tries to distribute interrupts across the CPUs in a round-robin fashion. In some cases the hardware cannot distribute interrupts fairly, so when necessary the kirqd kernel thread corrects the automatic IRQ assignment by periodically calling do_irq_balance(). The kernel can also set the CPU affinity of each IRQ.
The implementation of interrupt load balancing lives mainly in arch/i386/kernel/io_apic.c. If the CONFIG_IRQBALANCE option is selected when the kernel is compiled, interrupt load balancing for SMP systems is included in the kernel as a module.
The balancing code is registered as a late initcall:

    late_initcall(balanced_irq_init);
    #define late_initcall(fn)  module_init(fn)  /* include/linux/init.h */

In balanced_irq_init(), a kernel thread is created to take charge of interrupt load balancing:
    static int __init balanced_irq_init(void)
    {
        ......
        printk(KERN_INFO "Starting balanced_irq\n");
        if (kernel_thread(balanced_irq, NULL, CLONE_KERNEL) >= 0)
            return 0;
        else
            printk(KERN_ERR "balanced_irq_init: failed to spawn balanced_irq");
        ......
    }

In balanced_irq(), do_irq_balance() is called once every 5*HZ jiffies (that is, every 5 seconds) to migrate interrupts, moving them from heavily loaded CPUs onto relatively idle ones.
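At the heart of each do_irq_balance() pass is the choice of a source and a target CPU. The userspace sketch below illustrates only that decision (the function name and the plain "most/least loaded" rule are simplifications; the real code also weighs per-IRQ load and CPU package topology):

```c
#include <stddef.h>
#include <assert.h>

/*
 * Simplified stand-in for the core decision in do_irq_balance():
 * given per-CPU interrupt counts accumulated since the last pass,
 * choose the busiest CPU as the migration source and the idlest
 * as the migration target.
 */
static void pick_migration(const unsigned long *irq_count, size_t ncpus,
                           size_t *src, size_t *dst)
{
    size_t i;

    *src = *dst = 0;
    for (i = 1; i < ncpus; i++) {
        if (irq_count[i] > irq_count[*src])
            *src = i;               /* most loaded so far */
        if (irq_count[i] < irq_count[*dst])
            *dst = i;               /* least loaded so far */
    }
}
```

An IRQ is then moved from the source CPU to the target by rewriting its affinity, as described in the previous section.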
1.5.6 Threaded Interrupts
1.5.6.1 Overview
In Linux, interrupts have the highest priority. Whenever an interrupt occurs, the kernel immediately executes the corresponding interrupt handler, and normal tasks resume only after all pending interrupts and softirqs have been processed; a real-time task may therefore not be handled in time. Once interrupts are threaded, each interrupt runs as a kernel thread with its own real-time priority, and a real-time task can be given a higher priority than an interrupt thread. The real-time task with the highest priority is then served first, preserving real-time guarantees even under heavy load. Not every interrupt can be threaded, however. The timer interrupt, for example, maintains the system time and drives the timers, and timers are the heartbeat of the operating system: if the timer interrupt were threaded it could be suspended, with disastrous consequences, so it must not be threaded.
To run an interrupt routine in a threaded context, the kernel provides:
    int request_threaded_irq(unsigned int irq, irq_handler_t handler,
                             irq_handler_t thread_fn, unsigned long flags,
                             const char *name, void *dev)
        /* thread_fn: the handler run in the interrupt's kernel thread */

    int request_any_context_irq(unsigned int irq, irq_handler_t handler,
                                unsigned long flags, const char *name,
                                void *dev_id)
        /* request an IRQ that may be handled in any context */

1.5.6.2 Implementation
For IRQs, during kernel initialization init (init/main.c) calls init_hardirqs to create a kernel thread for each IRQ: IRQ 0 is given real-time priority 49, IRQ 1 priority 48, and so on down to 25, so the lowest real-time priority of any IRQ thread is 25.
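That numbering scheme is easy to state as code. The helper below is hypothetical (mine, not the kernel's) and merely mirrors the priorities described above:

```c
#include <assert.h>

/*
 * Hypothetical helper mirroring the scheme described in the text:
 * IRQ 0 maps to real-time priority 49, IRQ 1 to 48, and so on,
 * clamped so that no IRQ thread falls below priority 25.
 */
static int irq_thread_prio(unsigned int irq)
{
    int prio = 49 - (int)irq;

    return prio < 25 ? 25 : prio;
}
```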
When the kernel handles an interrupt and the interrupt routine returns IRQ_WAKE_THREAD, the interrupt has been threaded and the corresponding interrupt kernel thread must be woken up.
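This return-value contract can be illustrated outside the kernel. In the userspace simulation below, every name is a simplified stand-in for the kernel's: the hard handler runs first, and the threaded handler runs only when it asks for a wakeup:

```c
#include <assert.h>

/* Simplified stand-ins for the kernel's irqreturn_t values. */
enum sim_irqreturn { SIM_IRQ_NONE, SIM_IRQ_HANDLED, SIM_IRQ_WAKE_THREAD };

struct sim_action {
    enum sim_irqreturn (*handler)(void *dev); /* hard-irq part */
    void (*thread_fn)(void *dev);             /* threaded part */
    void *dev;
};

/*
 * Run the hard handler; if it asks for the thread, "wake" it by
 * calling it directly (the kernel instead calls irq_wake_thread(),
 * which wakes a dedicated kernel thread). Returns 1 if the threaded
 * handler ran.
 */
static int sim_handle_irq(struct sim_action *a)
{
    if (a->handler(a->dev) == SIM_IRQ_WAKE_THREAD && a->thread_fn) {
        a->thread_fn(a->dev);
        return 1;
    }
    return 0;
}

static int thread_ran;

static enum sim_irqreturn fast_half(void *dev)
{
    (void)dev;
    return SIM_IRQ_WAKE_THREAD;  /* defer the real work */
}

static void slow_half(void *dev)
{
    (void)dev;
    thread_ran = 1;              /* the deferred work */
}

static int run_demo(void)
{
    struct sim_action a = { fast_half, slow_half, 0 };

    thread_ran = 0;
    return sim_handle_irq(&a) && thread_ran;
}
```

The real kernel path, shown next, adds the checks this sketch omits: a missing thread function, a dying thread, and serialization against synchronize_irq().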
    irqreturn_t handle_irq_event_percpu(struct irq_desc *desc, struct irqaction *action)
    {
        irqreturn_t retval = IRQ_NONE;
        unsigned int flags = 0, irq = desc->irq_data.irq;

        do {
            irqreturn_t res;

            trace_irq_handler_entry(irq, action);
            res = action->handler(irq, action->dev_id);
            trace_irq_handler_exit(irq, action, res);

            if (WARN_ONCE(!irqs_disabled(),
                          "irq %u handler %pF enabled interrupts\n",
                          irq, action->handler))
                local_irq_disable();

            switch (res) {
            case IRQ_WAKE_THREAD:
                /*
                 * Catch drivers which return WAKE_THREAD but
                 * did not set up a thread function
                 */
                if (unlikely(!action->thread_fn)) {
                    warn_no_thread(irq, action);
                    break;
                }

                irq_wake_thread(desc, action);

                /* Fall through to add to randomness */
            case IRQ_HANDLED:
                flags |= action->flags;
                break;

            default:
                break;
            }

            retval |= res;
            action = action->next;
        } while (action);

        add_interrupt_randomness(irq, flags);

        if (!noirqdebug)
            note_interrupt(irq, desc, retval);
        return retval;
    }

    static void irq_wake_thread(struct irq_desc *desc, struct irqaction *action)
    {
        /*
         * In case the thread crashed and was killed we just pretend that
         * we handled the interrupt. The hardirq handler has disabled the
         * device interrupt, so no irq storm is lurking.
         */
        if (action->thread->flags & PF_EXITING)
            return;

        /*
         * Wake up the handler thread for this action. If the
         * RUNTHREAD bit is already set, nothing to do.
         */
        if (test_and_set_bit(IRQTF_RUNTHREAD, &action->thread_flags))
            return;

        /*
         * It's safe to OR the mask lockless here. We have only two
         * places which write to threads_oneshot: This code and the
         * irq thread.
         *
         * This code is the hard irq context and can never run on two
         * cpus in parallel. If it ever does we have more serious
         * problems than this bitmask.
         *
         * The irq threads of this irq which clear their "runthread"
         * bit in threads_oneshot are serialized via desc->lock against
         * each other and they are serialized against this code by
         * IRQS_INPROGRESS.
         *
         * Hard irq handler:
         *
         *    spin_lock(desc->lock);
         *    desc->state |= IRQS_INPROGRESS;
         *    spin_unlock(desc->lock);
         *    set_bit(IRQTF_RUNTHREAD, &action->thread_flags);
         *    desc->threads_oneshot |= mask;
         *    spin_lock(desc->lock);
         *    desc->state &= ~IRQS_INPROGRESS;
         *    spin_unlock(desc->lock);
         *
         * irq thread:
         *
         * again:
         *    spin_lock(desc->lock);
         *    if (desc->state & IRQS_INPROGRESS) {
         *        spin_unlock(desc->lock);
         *        while(desc->state & IRQS_INPROGRESS)
         *            cpu_relax();
         *        goto again;
         *    }
         *    if (!test_bit(IRQTF_RUNTHREAD, &action->thread_flags))
         *        desc->threads_oneshot &= ~mask;
         *    spin_unlock(desc->lock);
         *
         * So either the thread waits for us to clear IRQS_INPROGRESS
         * or we are waiting in the flow handler for desc->lock to be
         * released before we reach this point. The thread also checks
         * IRQTF_RUNTHREAD under desc->lock. If set it leaves
         * threads_oneshot untouched and runs the thread another time.
         */
        desc->threads_oneshot |= action->thread_mask;

        /*
         * We increment the threads_active counter in case we wake up
         * the irq thread. The irq thread decrements the counter when
         * it returns from the handler or in the exit path and wakes
         * up waiters which are stuck in synchronize_irq() when the
         * active count becomes zero. synchronize_irq() is serialized
         * against this code (hard irq handler) via IRQS_INPROGRESS
         * like the finalize_oneshot() code. See comment above.
         */
        atomic_inc(&desc->threads_active);

        wake_up_process(action->thread);
    }

1.5.7 Autodetecting the Interrupt Number
A driver needs this information to install its handler correctly, so autodetecting the interrupt number is a basic usability requirement for a driver. Linux provides a low-level facility for probing interrupt numbers. It only works for non-shared interrupts, but most hardware capable of operating in shared-interrupt mode also offers better ways to discover the configured interrupt number.
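The kernel facility is the probe_irq_on()/probe_irq_off() pair: enable the candidate lines, force the device to raise an interrupt, then ask which line fired. The userspace sketch below simulates only the decision step, assuming the documented semantics (the IRQ number if exactly one line triggered, 0 if none, a negative value if several did); the function name is a stand-in, not the kernel API:

```c
#include <assert.h>

/*
 * Simulation of the probe_irq_off() decision: `fired_mask` has one
 * bit set per candidate line that triggered while probing. Exactly
 * one bit set -> that IRQ number; no bits -> 0; several bits -> a
 * negative value, meaning the probe was ambiguous. (IRQ 0, the timer,
 * is never probed, so returning 0 for "nothing fired" is unambiguous.)
 */
static int sim_probe_irq_off(unsigned long fired_mask)
{
    int irq = -1, i;

    for (i = 0; i < 32; i++) {
        if (fired_mask & (1UL << i)) {
            if (irq >= 0)
                return -irq;   /* more than one line fired */
            irq = i;
        }
    }
    return irq < 0 ? 0 : irq;
}
```

A driver typically retries the probe a few times, or falls back to a default IRQ, when the ambiguous case occurs.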