created: 2018-12-19T12:48:13Z
begin:   2018-04-26T06:00:00Z
end:     2018-04-26T12:00:00Z
path:    src/sys
commits: 8
date:    2018-04-26T06:28:43Z
author:  mpi
files:   src/sys/kern/kern_descrip.c
message: Rewrite fdcopy() to avoid memcpy()s. With and ok visa@
date:    2018-04-26T06:51:48Z
author:  mpi
files:   src/sys/arch/amd64/amd64/pmap.c
         src/sys/kern/kern_lock.c
message: Drop into ddb(4) if pmap_tlb_shoot*() take too much time in
         MP_LOCKDEBUG kernels. While here sync all MP_LOCKDEBUG while
         loops. ok mlarkin@, visa@
date:    2018-04-26T09:30:08Z
author:  deraadt
files:   src/sys/sys/pledge.h
message: prot_exec is the correct name; spotted by landry
date:    2018-04-26T10:14:26Z
author:  mpi
files:   src/sys/dev/usb/xhci.c
message: Follow section 6.2.3.6 to compute endpoint interval. ok stsp@
date:    2018-04-26T10:19:31Z
author:  mpi
files:   src/sys/dev/usb/xhci.c
message: Reduce differences between isoch & bulk/intr routines. ok stsp@
date:    2018-04-26T10:43:58Z
author:  mlarkin
files:   src/sys/arch/amd64/amd64/vmm.c
message: vmm(4): ensure SVM_INTERCEPT_INTR is always enabled before
         entering guest VM.
date:    2018-04-26T10:45:45Z
author:  pirofti
files:   src/sys/kern/sys_socket.c
message: Remove solock() surrounding PRU_CONTROL in soo_ioctl(). We do
         not need the lock there. Missed this in my former commit
         pushing NET_LOCK() down the stack. Found the hard way by
         naddy@, sorry! OK mpi@.
date:    2018-04-26T11:37:25Z
author:  mlarkin
files:   src/sys/arch/amd64/amd64/vmm.c
message: vmm(4): remove unnecessary kernel lock code from the SVM guest
         loop. This code delays relocking the kernel lock until after
         interrupts are processed during external interrupt exits, but
         SVM handles this differently: external interrupts are handled
         by the CPU as soon as stgi() is performed after exit. (The
         original code came from the VMX/Intel guest loop.)
         ok guenther@