OpenBSD cvs log

created 2018-11-30T02:24:20Z
begin 2018-08-21T00:00:00Z
end 2018-08-22T00:00:00Z
path src/sys
commits 10

date 2018-08-21T06:03:34Z
author jsg
files src/sys/arch/i386/i386/machdep.c
src/sys/arch/i386/include/cpu.h
message print sefflags_edx cpuid bits on i386 as well
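
The sefflags here are the structured extended feature flags reported by CPUID leaf 7, subleaf 0, with the newer flags in the EDX word. A minimal userland sketch, not the machdep.c code, assuming a compiler that ships cpuid.h with __get_cpuid_count:

#include <stdio.h>
#include <cpuid.h>

int
main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* CPUID leaf 7, subleaf 0: structured extended feature flags. */
	if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
		fprintf(stderr, "CPUID leaf 7 not supported\n");
		return 1;
	}
	printf("SEFF EDX bits: 0x%08x\n", edx);
	return 0;
}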

date 2018-08-21T12:34:11Z
author bluhm
files src/sys/kern/uipc_socket.c
message branches: 1.227.2;
If the control message of IP_SENDSRCADDR did not fit into the socket
buffer together with a UDP packet, sosend(9) returned EWOULDBLOCK.
As the condition is persistent rather than transient, EMSGSIZE is the
correct error code.
Split the AF_UNIX case into a separate condition and do not change
its logic. For atomic protocols, check that both data and control
message length fit into the socket buffer.
original bug report from Alexander Markert
discussed with jca@; OK vgross@
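
A minimal sketch of the fit check described above, with simplified, hypothetical names (hiwat, resid and clen stand in for the real socket buffer accounting; this is not the uipc_socket.c code):

#include <errno.h>
#include <stddef.h>

/*
 * Illustrative only.  hiwat is the socket buffer's maximum capacity,
 * resid the data length, clen the control message length.
 */
int
check_atomic_fit(size_t hiwat, size_t resid, size_t clen)
{
	/*
	 * For an atomic protocol, data and control message must fit
	 * into the socket buffer together.  If they can never fit,
	 * blocking will not help, so the error is permanent: return
	 * EMSGSIZE rather than EWOULDBLOCK.
	 */
	if (resid + clen > hiwat)
		return EMSGSIZE;
	return 0;
}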

date 2018-08-21T12:44:13Z
author jsg
files src/sys/arch/i386/i386/machdep.c
src/sys/arch/i386/include/specialreg.h
message print rdtscp and xsave_ext cpuid bits on i386 as well
move printing of ecxfeatures bits to match amd64

date 2018-08-21T13:10:13Z
author bluhm
files src/sys/arch/amd64/amd64/vm_machdep.c
message If a kernel thread was created by a user land system call, the
user land FPU context was saved to proc0. This was an information leak,
as proc0 is used to initialize the FPU at exec and for signal handlers.
Never save the FPU context to proc0; it holds the initialization value.
Also check whether the FPU has valid user land state that has to be
forked.
This bug is a regression from the eager FPU commit. OK guenther@
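
A minimal sketch of the idea, using simplified stand-in types rather than the real struct proc or the amd64 FPU code:

#include <stdbool.h>

/* Pared-down stand-ins; purely illustrative, not the kernel's types. */
struct fpu_state { unsigned char regs[512]; };
struct proc { struct fpu_state p_fpu; bool p_fpu_valid; };

static struct proc proc0;	/* keeps the clean initial FPU image */

/*
 * Sketch of the fix's idea: never treat proc0 as a place to save live
 * FPU state, and only copy state into the child when the parent really
 * has valid user land state to fork.
 */
void
fpu_fork_sketch(struct proc *parent, struct proc *child)
{
	if (parent != &proc0 && parent->p_fpu_valid) {
		/* Parent has valid user land FPU state: fork it. */
		child->p_fpu = parent->p_fpu;
		child->p_fpu_valid = true;
	} else {
		/* Child starts from the pristine initialization value. */
		child->p_fpu = proc0.p_fpu;
		child->p_fpu_valid = false;
	}
}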

date 2018-08-21T13:50:31Z
author visa
files src/sys/kern/kern_descrip.c
message Use explicit fd indexing to access fd_ofiles, to clarify the code.

OK mpi@
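
A minimal sketch of the pattern, with a pared-down stand-in for struct filedesc rather than the real kern_descrip.c code:

#include <stddef.h>

struct file;
struct filedesc { struct file **fd_ofiles; int fd_nfiles; };

struct file *
fd_lookup_sketch(struct filedesc *fdp, int fd)
{
	if (fd < 0 || fd >= fdp->fd_nfiles)
		return NULL;
	/*
	 * Explicit indexing: the descriptor number is visible at the
	 * point of use, instead of being hidden behind a roving
	 * pointer that was advanced earlier.
	 */
	return fdp->fd_ofiles[fd];
}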

date 2018-08-21T16:40:23Z
author akoshibe
files src/sys/net/switchofp.c
message Fix alignment fault in switchd(8) on sparc64. Use memcpy to set oxm_value,
which isn't aligned to 64 bits.

Based on pointers from Ori Bernstein
Reported by Ryan Keating
ok yasuoka@ deraadt@
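
A minimal sketch of the alignment-safe pattern in general terms (not the switchofp.c code): on a strict-alignment architecture such as sparc64, storing through a cast like *(uint64_t *)p faults when p is not 8-byte aligned, so the value is copied in with memcpy instead.

#include <endian.h>	/* OpenBSD's htobe64(); other systems may differ */
#include <stdint.h>
#include <string.h>

void
put_be64_unaligned(void *oxm_value, uint64_t val)
{
	uint64_t be = htobe64(val);

	/*
	 * Not: *(uint64_t *)oxm_value = be;  -- that can trap on
	 * sparc64 when oxm_value is not 64-bit aligned.
	 */
	memcpy(oxm_value, &be, sizeof(be));
}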

date 2018-08-21T18:06:12Z
author anton
files src/sys/arch/amd64/conf/GENERIC
src/sys/arch/amd64/conf/Makefile.amd64
src/sys/arch/amd64/conf/files.amd64
src/sys/arch/i386/conf/GENERIC
src/sys/arch/i386/conf/Makefile.i386
src/sys/arch/i386/conf/files.i386
src/sys/conf/files
src/sys/dev/kcov.c
src/sys/kern/kern_exit.c
message Rework kcov kernel config. Instead of treating kcov as both an option and a
pseudo-device, get rid of the option. Enabling kcov now requires the following
line to be added to the kernel config:

pseudo-device kcov 1

This is how pseudo devices are enabled in general. A side-effect of this change
is that dev/kcov.c will no longer be compiled by default.

Prodded by deraadt@; ok mpi@ visa@
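
For illustration only, one way such a line is typically carried in a custom kernel configuration that includes GENERIC; the file name is hypothetical:

# sys/arch/amd64/conf/GENERIC.KCOV (illustrative name)
include "arch/amd64/conf/GENERIC"

pseudo-device kcov 1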

date 2018-08-21T19:04:38Z
author deraadt
files src/sys/arch/amd64/amd64/identcpu.c
src/sys/arch/amd64/amd64/vmm.c
src/sys/arch/amd64/amd64/vmm_support.S
message Perform mitigations for the Intel L1TF screwup. There are three
options: (1) future cpus which don't have the bug, (2) cpus with
microcode containing an L1D flush operation, (3) stuffing the L1D cache
with fresh data and expiring old content. This stuffing loop is
complicated and interesting; no details on the mitigation have been
released by Intel, so Mike and I studied other systems for inspiration.
The replacement algorithm for the L1D is described in the TLBleed paper.
We use a 64K PA-linear region filled with trapsleds (in case there is
L1D->L1I data movement). The TLBs covering the region are loaded first,
because TLB loading apparently flows through the D cache. Before
performing vmlaunch or vmresume, the cachelines covering the guest
registers are also flushed.
with mlarkin, additional testing by pd, handy comments from the
kettenis and guenther peanuts
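
A minimal C sketch of the stuffing loop as described above, purely illustrative (the real code is assembly in vmm_support.S): touch one byte per page first so the TLB entries covering the 64K region are loaded, then read every cache line to displace the old L1D contents.

#include <stddef.h>
#include <stdint.h>

#define FILL_SIZE	(64 * 1024)	/* 64K region, per the commit */
#define LINE_SIZE	64		/* typical L1D line size */
#define FILL_PAGE	4096

void
l1d_stuff_sketch(volatile const uint8_t *fill)
{
	size_t off;
	volatile uint8_t sink;

	/*
	 * Load the TLB entries first: TLB loading apparently flows
	 * through the D cache, so doing this up front keeps the
	 * second pass from being undone by page-table walks.
	 */
	for (off = 0; off < FILL_SIZE; off += FILL_PAGE)
		sink = fill[off];

	/* Read every cache line to push out the old L1D contents. */
	for (off = 0; off < FILL_SIZE; off += LINE_SIZE)
		sink = fill[off];

	(void)sink;
}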

date 2018-08-21T19:04:40Z
author deraadt
files src/sys/arch/amd64/include/cpu.h
src/sys/arch/amd64/include/specialreg.h
src/sys/arch/amd64/include/vmmvar.h
message Perform mitigations for the Intel L1TF screwup. There are three
options: (1) future cpus which don't have the bug, (2) cpus with
microcode containing an L1D flush operation, (3) stuffing the L1D cache
with fresh data and expiring old content. This stuffing loop is
complicated and interesting; no details on the mitigation have been
released by Intel, so Mike and I studied other systems for inspiration.
The replacement algorithm for the L1D is described in the TLBleed paper.
We use a 64K PA-linear region filled with trapsleds (in case there is
L1D->L1I data movement). The TLBs covering the region are loaded first,
because TLB loading apparently flows through the D cache. Before
performing vmlaunch or vmresume, the cachelines covering the guest
registers are also flushed.
with mlarkin, additional testing by pd, handy comments from the
kettenis and guenther peanuts

date 2018-08-21T22:16:42Z
author kettenis
files src/sys/dev/fdt/dwpcie.c
message Implement address translation. Makes I/O space access work.
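
A minimal, generic sketch of range-based address translation of the kind an FDT PCIe driver performs; the structure and names are illustrative, not the dwpcie.c implementation:

#include <stddef.h>
#include <stdint.h>

/* Simplified range entry: one window from PCI space to CPU space. */
struct range {
	uint64_t pci_base;	/* address on the PCI side */
	uint64_t cpu_base;	/* corresponding CPU address */
	uint64_t size;
};

/* Translate a PCI address to a CPU address; returns 1 on success. */
int
translate_sketch(const struct range *r, size_t n, uint64_t pci_addr,
    uint64_t *cpu_addr)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (pci_addr >= r[i].pci_base &&
		    pci_addr < r[i].pci_base + r[i].size) {
			*cpu_addr = r[i].cpu_base +
			    (pci_addr - r[i].pci_base);
			return 1;
		}
	}
	return 0;
}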