| created | 2023-03-12T09:10:46Z |
|---|---|
| begin | 2023-03-09T00:00:00Z |
| end | 2023-03-10T00:00:00Z |
| path | src/sys |
| commits | 6 |
| date | 2023-03-09T00:13:47Z |
|---|---|
| author | chris |
| files | src/sys/dev/pci/if_igc.c |

message:

Fix comment to reflect the disabled status of Energy Efficient Ethernet. Intel just disabled EEE for 1Gbps and 2.5Gbps modes on both i225 and i226 chips due to bugs. We already had it this way.

ok patrick@ kevlo@
| date | 2023-03-09T03:50:38Z |
|---|---|
| author | cheloha |
| files | src/sys/kern/kern_clockintr.c |
|  | src/sys/sys/clockintr.h |

message:

clockintr: add a priority queue

- Add cq_pend to struct clockintr_queue. cq_pend is the list of clock interrupts pending to run, sorted in ascending order by cl_expiration (earliest deadline first; EDF). If the cl_expiration of two clockintrs is equal, the first clock interrupt scheduled has priority (FIFO). We may need to switch to an RB tree or a min-heap in the future. For now, there are only three clock interrupts, so a linked list is fine.
- Add cl_flags to struct clockintr. We have one flag, CLST_PENDING. It indicates whether the given clockintr is enqueued on cq_pend.
- Rewrite clockintr_dispatch() to operate on cq_pend. Clock interrupts are run in EDF order until the most imminent clockintr expires in the future.
- Add code to clockintr_establish(), clockintr_advance() and clockintr_schedule() to enqueue/dequeue the given clockintr on cq_est and cq_pend as needed.
- Add cq_est to struct clockintr_queue. cq_est is the list of all clockintrs established on a clockintr_queue.
- Add a new counter, cs_spurious, to clockintr_stat. A dispatch is "spurious" if no clockintrs are on cq_pend when we call clockintr_dispatch().

With input from aisha@. Heavily tested by mlarkin@. Shared with hackers@.

ok aisha@ mlarkin@
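To make the EDF ordering concrete, here is a minimal C sketch of inserting a clockintr into a deadline-sorted pending queue with FIFO tie-breaking. The field and flag names (cq_pend, cl_expiration, cl_flags, CLST_PENDING) follow the commit message, but the struct layouts and the helper name clockintr_pend_insert() are illustrative assumptions, not the actual kern_clockintr.c code.

```c
/*
 * Hedged sketch: earliest-deadline-first (EDF) insertion into a pending
 * queue, with FIFO ordering for equal expirations.  Field and flag names
 * follow the commit message; the layouts here are assumptions.
 */
#include <sys/queue.h>
#include <stdint.h>

#define CLST_PENDING	0x01	/* clockintr is enqueued on cq_pend */

struct clockintr {
	uint64_t		cl_expiration;	/* absolute deadline */
	uint32_t		cl_flags;
	TAILQ_ENTRY(clockintr)	cl_plink;	/* cq_pend glue */
};

TAILQ_HEAD(clockintr_pend, clockintr);

struct clockintr_queue {
	struct clockintr_pend	cq_pend;	/* pending, EDF order */
};

/*
 * Insert cl into cq_pend so the list stays sorted by cl_expiration.
 * Walking from the head and inserting before the first entry with a
 * strictly later deadline keeps equal deadlines in FIFO order.
 */
static void
clockintr_pend_insert(struct clockintr_queue *cq, struct clockintr *cl)
{
	struct clockintr *iter;

	TAILQ_FOREACH(iter, &cq->cq_pend, cl_plink) {
		if (cl->cl_expiration < iter->cl_expiration) {
			TAILQ_INSERT_BEFORE(iter, cl, cl_plink);
			cl->cl_flags |= CLST_PENDING;
			return;
		}
	}
	TAILQ_INSERT_TAIL(&cq->cq_pend, cl, cl_plink);
	cl->cl_flags |= CLST_PENDING;
}
```

Inserting before the first entry with a strictly later deadline is what preserves FIFO order among equal expirations, matching the behaviour the commit describes; with only a handful of established clockintrs, a linked list walk is cheap enough.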
| date | 2023-03-09T05:56:58Z |
|---|---|
| author | dlg |
| files | src/sys/net/bpf.c |
|  | src/sys/net/bpf.h |
|  | src/sys/net/bpfdesc.h |

message:

add a timeout between capturing a packet and making the buffer readable.

before this, there were three reasons that a bpf read will finish.

the first is the obvious one: the bpf packet buffer in the kernel fills up. by default this is about 32k, so if you're only capturing a small packet every few seconds, it can take a long time for the buffer to fill up before you can read them.

the second is if bpf has been configured to enable immediate mode with ioctl(BIOCIMMEDIATE). this means that when any packet is written into the bpf buffer, the buffer is immediately readable. this is fine if the packet rate is low, but if the packet rate is high you don't get the benefit of buffering many packets that bpf is supposed to provide.

the third mechanism is if bpf has been configured with the BIOCSRTIMEOUT ioctl, which sets a maximum wait time on a bpf read. BIOCSRTIMEOUT means that a clock starts ticking down when a program (eg pflogd) reads from bpf. when the clock reaches zero then the read returns with whatever is in the bpf packet buffer. however, there could be nothing in the buffer, and the read will still complete.

deraadt@ noticed this behaviour with pflogd. it wants packets logged by pf to end up on disk in a timely fashion, but it's fine with tolerating a bit of delay so it can take advantage of buffering to amortise the cost of the reads per packet. it currently does this with BIOCSRTIMEOUT set to half a second, which means it's always waking up every half second even if there's nothing to log.

this diff adds BIOCSWTIMEOUT, which specifies a timeout between when bpf first puts a packet in the capture buffer and when the buffer becomes readable. by default this wait timeout is infinite, meaning the buffer has to be filled before it becomes readable. BIOCSWTIMEOUT can be set to enable the new functionality.

BIOCIMMEDIATE is turned into a variation of BIOCSWTIMEOUT with the wait time set to 0, ie, wait 0 seconds between when a packet is written to the buffer and when the buffer becomes readable. combining BIOCSWTIMEOUT and BIOCIMMEDIATE simplifies the code a lot.

for pflogd, this means if there are no packets to capture, pflogd won't wake up every half second to do nothing. however, when a packet is logged by pf, bpf will wait another half second to see if any more packets arrive (or the buffer fills up) before the read fires.

discussed a lot with deraadt@ and sashan@

ok sashan@
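As a usage illustration, here is a hedged userland sketch of how a reader such as pflogd might arm the new wait timeout. It assumes BIOCSWTIMEOUT takes a struct timeval argument the way BIOCSRTIMEOUT does; the helper name open_bpf_reader() and the half-second value are illustrative only.

```c
/*
 * Hedged sketch: a bpf reader arming the new wait timeout.  Assumes
 * BIOCSWTIMEOUT takes a struct timeval, mirroring BIOCSRTIMEOUT; the
 * helper name and timeout value are illustrative.
 */
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <sys/time.h>

#include <net/if.h>
#include <net/bpf.h>

#include <err.h>
#include <fcntl.h>
#include <string.h>

int
open_bpf_reader(const char *ifname)
{
	struct ifreq ifr;
	struct timeval wtimeout = { .tv_sec = 0, .tv_usec = 500000 };
	int fd;

	if ((fd = open("/dev/bpf", O_RDONLY)) == -1)
		err(1, "open /dev/bpf");

	/* Attach the descriptor to the capture interface. */
	memset(&ifr, 0, sizeof(ifr));
	strlcpy(ifr.ifr_name, ifname, sizeof(ifr.ifr_name));
	if (ioctl(fd, BIOCSETIF, &ifr) == -1)
		err(1, "BIOCSETIF %s", ifname);

	/*
	 * Wait at most half a second between the first packet landing in
	 * the capture buffer and the buffer becoming readable.  Unlike
	 * BIOCSRTIMEOUT, a read with nothing captured keeps sleeping
	 * instead of waking up with an empty buffer.
	 */
	if (ioctl(fd, BIOCSWTIMEOUT, &wtimeout) == -1)
		err(1, "BIOCSWTIMEOUT");

	return fd;
}
```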
| date | 2023-03-09T10:29:04Z |
|---|---|
| author | claudio |
| files | src/sys/arch/sparc64/dev/vnet.c |

message:

Improve vnet(4) to work better in busy conditions. No longer limit the ifq size to a low number, increase the slots on the DMA ring a bit and abstract the VNET buffer size into a define.

Enqueue packets on the ring but mark the initial packet ready at the end. This way the other ldom is not able to rush ahead and overconsume packets. The dring indexes are passed between ldoms and can get out of sync, which causes the TX ring to stall.

Tested by myself and jan@

OK kettenis@ jan@ kn@
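A minimal sketch of the "mark the initial packet ready at the end" idea described above. The descriptor layout, the VIO_DESC_* states, and the use of a compiler builtin as the producer barrier are illustrative assumptions, not the actual vnet(4) dring code.

```c
/*
 * Hedged sketch: publish a batch of transmit descriptors but flag the
 * first one ready last, so the peer ldom cannot start consuming a
 * half-built batch.  Layout, state values and barrier are assumptions.
 */
#include <stddef.h>
#include <stdint.h>

#define VIO_DESC_FREE	0	/* slot owned by the producer */
#define VIO_DESC_READY	1	/* slot handed to the consumer */

struct vnet_desc {
	volatile uint8_t	dstate;		/* ownership/state flag */
	uint32_t		nbytes;		/* payload length */
	uint64_t		cookie;		/* LDC address of the buffer */
};

/*
 * Queue 'count' already-filled slots starting at index 'prod'.  Every
 * slot except the first is flagged ready up front; the first slot is
 * flagged only after a barrier, so the consumer sees either none of
 * the batch or all of it and cannot rush ahead of the producer.
 */
static void
vnet_dring_publish(struct vnet_desc *ring, size_t nslots,
    size_t prod, size_t count)
{
	size_t i;

	for (i = 1; i < count; i++)
		ring[(prod + i) % nslots].dstate = VIO_DESC_READY;

	/* Make descriptor contents visible before the initial flag. */
	__sync_synchronize();

	ring[prod % nslots].dstate = VIO_DESC_READY;
}
```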
| date | 2023-03-09T13:17:28Z |
|---|---|
| author | jsg |
| files | src/sys/arch/amd64/amd64/cpu.c |

message:

workaround Intel Braswell/Cherry Trail mwait hang

dlg has a Dell Wyse 3040 with

cpu0: Intel(R) Atom(TM) x5-Z8350 CPU @ 1.44GHz, 480.02 MHz, 06-4c-04
cpu0: mwait min=64, max=64, C-substates=0.2.0.0.0.0.3.3, IBE

which hangs soon after the login prompt with MP kernels

This is a hardware bug described in:

Intel Atom Z8000 Processor Series Specification Update
Document Number: 332067-012
"CHT45 Processor May Not Wake From C6 or Deeper Sleep State"

tested by dlg@, ok guenther@
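For illustration only, a hedged sketch of what a family/model quirk check for this erratum could look like. Only the 06-4c display family/model from the commit message is taken as given; the struct, the helper name, and how amd64/cpu.c actually masks the deep C-states are assumptions.

```c
/*
 * Hedged sketch: detect the affected Atom (Braswell/Cherry Trail)
 * model so deep MWAIT states can be avoided.  Only the 06-4c
 * family/model comes from the commit message; the rest is assumed.
 */
#include <stdbool.h>
#include <stdint.h>

struct cpu_ident {
	uint32_t	family;		/* display family, 0x06 here */
	uint32_t	model;		/* display model, 0x4c here */
};

/*
 * Intel Atom Z8000 Processor Series Specification Update, 332067-012,
 * erratum CHT45: "Processor May Not Wake From C6 or Deeper Sleep
 * State".  When this returns true, the idle loop should not request
 * C6 or deeper via MWAIT.
 */
static bool
cpu_mwait_c6_broken(const struct cpu_ident *ci)
{
	return ci->family == 0x06 && ci->model == 0x4c;
}
```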
| date | 2023-03-09T19:48:42Z |
|---|---|
| author | kettenis |
| files | src/sys/arch/arm64/dev/aplpcie.c |

message:

Check that a PCIe port isn't disabled in the device tree.

ok patrick@
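A hedged sketch of the kind of device-tree check the commit describes: skipping a node whose "status" property marks it disabled. OF_getprop() matches the sys/dev/ofw/openfirm.h interface; the helper name and the exact strings compared are assumptions, not the actual aplpcie.c change.

```c
/*
 * Hedged sketch: skip a device-tree node whose "status" property marks
 * it disabled.  The helper name and accepted strings are assumptions.
 */
#include <string.h>

/* Prototype as in sys/dev/ofw/openfirm.h. */
int	OF_getprop(int handle, char *prop, void *buf, int buflen);

/*
 * Treat a node as enabled when it has no "status" property or when the
 * property reads "okay"/"ok"; anything else (typically "disabled")
 * means the PCIe port should not be brought up.
 */
static int
node_is_enabled(int node)
{
	char status[32];
	int len;

	len = OF_getprop(node, "status", status, sizeof(status));
	if (len <= 0)
		return 1;
	status[sizeof(status) - 1] = '\0';
	return strcmp(status, "okay") == 0 || strcmp(status, "ok") == 0;
}
```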