OpenBSD cvs log

created 2021-12-19T23:16:17Z
begin 2021-12-15T00:00:00Z
end 2021-12-16T00:00:00Z
path src/sys
commits 5

date 2021-12-15T00:37:21Z
author deraadt
files src/sys/arch/arm64/stand/efiboot/efiboot.c log diff annotate
message typo in previous

date 2021-12-15T12:53:53Z
author mpi
files src/sys/dev/pci/drm/i915/gem/i915_gem_shmem.c log diff annotate
src/sys/dev/pci/drm/radeon/radeon_ttm.c log diff annotate
src/sys/uvm/uvm_aobj.c log diff annotate
src/sys/uvm/uvm_device.c log diff annotate
src/sys/uvm/uvm_fault.c log diff annotate
src/sys/uvm/uvm_km.c log diff annotate
src/sys/uvm/uvm_map.c log diff annotate
src/sys/uvm/uvm_map.h log diff annotate
src/sys/uvm/uvm_object.c log diff annotate
src/sys/uvm/uvm_object.h log diff annotate
src/sys/uvm/uvm_page.c log diff annotate
src/sys/uvm/uvm_pager.c log diff annotate
src/sys/uvm/uvm_pdaemon.c log diff annotate
src/sys/uvm/uvm_vnode.c log diff annotate
message Use a per-UVM object lock to serialize the lower part of the fault handler.

Like the per-amap lock, the `vmobjlock' is principally used to serialize
access to objects in the fault handler, allowing faults occurring on
different CPUs and different objects to be processed in parallel.

The fault handler now acquires the `vmobjlock' of a given UVM object as
soon as it finds one. For now, a write-lock is always acquired, even if
some operations could use a read-lock.
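
As an illustration only (not the actual uvm_fault() code), the
acquisition pattern with OpenBSD's rwlock API looks roughly like this,
assuming `uobj' points to the UVM object backing the fault:

    /* sketch: lower part of the fault handler, once an object is found */
    rw_enter(uobj->vmobjlock, RW_WRITE);  /* always a write-lock for now */
    /* ... look up or bring in the pages backing the fault ... */
    rw_exit(uobj->vmobjlock);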

Every pager, each corresponding to a different kind of UVM object, now
expects the UVM object to be locked, and some operations, like *_get(),
return it unlocked. This is enforced by assertions checking for
rw_write_held().
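
A hedged sketch of that contract, with a hypothetical pager name and a
simplified signature (the real entry points are the pgo_* operations):

    /* sketch: a pager get routine; the caller passes the object locked */
    int
    somepager_get(struct uvm_object *uobj, voff_t offset)
    {
            KASSERT(rw_write_held(uobj->vmobjlock));
            /* ... locate or page in the requested pages ... */
            rw_exit(uobj->vmobjlock);   /* *_get() returns it unlocked */
            return (0);
    }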

The KERNEL_LOCK() is now pushed to the VFS boundary in the vnode pager.
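
A minimal sketch of that boundary, assuming a read path into the vnode;
VOP_READ() and its arguments stand in for the real pager I/O code:

    /* sketch: vnode pager I/O; the kernel lock now wraps only the VFS call */
    KERNEL_LOCK();
    error = VOP_READ(vp, &uio, 0, cred);
    KERNEL_UNLOCK();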

To ensure the correct amap or object lock is held when modifying a page,
many uvm_page* operations now assert that the "owner" lock is held.
However, fields of "struct vm_page" are still protected by the global
`pageqlock'. To prevent lock ordering issues with the new `vmobjlock',
and to reduce differences with NetBSD, this lock is now taken and
released for each page instead of around the whole loop.
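
The per-page pattern, sketched under the assumption of a `pglist' queue
of pages to process (names illustrative):

    struct vm_page *pg;

    TAILQ_FOREACH(pg, &pglist, pageq) {
            uvm_lock_pageq();       /* global page-queue lock */
            /* ... modify queue-protected fields of pg ... */
            uvm_unlock_pageq();     /* drop before the next page */
    }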

This commit does not remove the KERNEL_LOCK/UNLOCK() dance. Unlocking
will follow if there is no fallout.

Ported from NetBSD, tested by many, thanks!

ok kettenis@, kn@

date 2021-12-15T15:30:47Z
author visa
files src/sys/kern/tty.c log diff annotate
src/sys/kern/tty_pty.c log diff annotate
message Adjust pty and tty event filters

* Implement EVFILT_EXCEPT for ttys to detect the HUP condition.
This filter is used when pollfd.events has no read/write events;
see the poll(2) sketch after this message.

* Add HUP condition detection to filt_ptcwrite() and filt_ttywrite()
to reflect ptcpoll() and ttpoll(). Only poll(2) and select(2) can
utilize the code; kevent(2) should behave as before with EVFILT_WRITE.

* Clear EV_EOF and __EV_HUP if the EOF/HUP condition ends.

OK mpi@
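
For context, the behavior this enables can be exercised from userland
with poll(2); a minimal sketch, assuming `mfd' is the master side of a
pty whose other side has been closed:

    #include <poll.h>

    struct pollfd pfd;

    pfd.fd = mfd;
    pfd.events = 0;     /* no read/write events: the EVFILT_EXCEPT path */
    if (poll(&pfd, 1, INFTIM) > 0 && (pfd.revents & POLLHUP)) {
            /* HUP condition detected */
    }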

date 2021-12-15T15:58:01Z
author bluhm
files src/sys/netinet/igmp.c log diff annotate
message Syzkaller found a NULL dereference in igmp_leavegroup(), where
inm->inm_rti is NULL. It should be set in rti_fill(), but is not if
malloc(9) fails. There is no rollback after malloc failure, so the
field stays uninitialized. The code is only called from ioctl,
setsockopt or a task. Malloc should wait instead of failing; otherwise
syscalls would be unreliable. While there, also put an M_WAIT in the
init code. During init, malloc must not fail.
OK mvs@
Reported-by: [email protected]
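
The pattern of the fix, sketched (allocation site and malloc type as in
netinet/igmp.c, shown from memory rather than as the literal diff):

    /* before: M_NOWAIT can fail and leave inm->inm_rti NULL */
    rti = malloc(sizeof(*rti), M_MRTABLE, M_NOWAIT);
    if (rti == NULL)
            return;         /* no rollback; field stays uninitialized */

    /* after: ioctl/setsockopt/task context may sleep, and malloc(9)
       with M_WAITOK never returns NULL */
    rti = malloc(sizeof(*rti), M_MRTABLE, M_WAITOK);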

date 2021-12-15T17:21:08Z
author deraadt
files src/sys/netinet/ip_mroute.c log diff annotate
src/sys/netinet6/ip6_mroute.c log diff annotate
message structure pads can leak uninitialized memory to userland via copyout;
therefore the mandatory idiom is completely clearing structs before
building them for copyout -- that means ALMOST ALL STRUCTS, because
we never know when some architecture will pad a struct. In two more
cases, the clearing wasn't performed.
from Reno Robert ZDI
ok millert bluhm
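
The idiom, as a minimal sketch (struct and variable names hypothetical):

    struct some_stat ss;

    memset(&ss, 0, sizeof(ss));     /* clears padding, not just fields */
    ss.field = value;
    error = copyout(&ss, user_addr, sizeof(ss));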