| created | 2019-05-18T11:40:32Z |
|---|---|
| begin | 2019-05-17T00:00:00Z |
| end | 2019-05-18T00:00:00Z |
| path | src/sys |
| commits | 6 |
| date | 2019-05-17T01:05:20Z |
|---|---|
| author | kevlo |
| files | src/sys/dev/ic/athn.c |
| | src/sys/dev/ic/athnreg.h |
| message | For AR9271, use correct clock control register and add a macro to access it. ok stsp@ |
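As a rough illustration of the kind of macro this commit describes, a revision-dependent register accessor might look like the sketch below. All offsets and flag names here are invented for illustration and do not reflect the actual athn(4) or athnreg.h contents.

```c
#include <stdint.h>

/*
 * Hypothetical sketch: select the clock control register offset by
 * chip revision.  Offsets and flag names are made up for illustration.
 */
#define AR_CLOCK_CTRL		0x7014	/* pre-AR9271 offset (invented) */
#define AR9271_CLOCK_CTRL	0x7050	/* AR9271 offset (invented) */

#define ATHN_FLAG_AR9271	0x01	/* invented capability flag */

struct athn_softc {
	uint32_t flags;
};

/* Pick the clock control register appropriate for the attached chip. */
#define AR_CLOCK_CONTROL(sc) \
	(((sc)->flags & ATHN_FLAG_AR9271) ? AR9271_CLOCK_CTRL : AR_CLOCK_CTRL)
```

Centralizing the revision check in one macro keeps every access site chip-correct without repeating the conditional.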
| date | 2019-05-17T03:53:08Z |
|---|---|
| author | visa |
| files | src/sys/kern/kern_smr.c |
| | src/sys/kern/subr_xxx.c |
| message | Add SMR_ASSERT_NONCRITICAL() in assertwaitok(). This eases debugging because now the error is detected before the context switch. The sleep code path eventually calls assertwaitok() in mi_switch(), so the assertwaitok() in the SMR barrier function is somewhat redundant and can be removed. OK mpi@ |
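The idea behind the assertion can be modeled in isolation: track a critical-section depth and check that it is zero before anything that may sleep, so the bug is reported at the offending call site rather than later in the context-switch path. This is a minimal sketch with invented names, not OpenBSD's actual kern_smr.c implementation (which tracks state per-CPU).

```c
#include <assert.h>

/*
 * Minimal model of "assert we are not inside an SMR critical section".
 * A real kernel keeps this state per-CPU or per-thread; a single
 * global counter suffices for illustration.
 */
static int smr_crit_depth;

static void
smr_enter_sketch(void)
{
	smr_crit_depth++;
}

static void
smr_leave_sketch(void)
{
	assert(smr_crit_depth > 0);
	smr_crit_depth--;
}

/*
 * The sketch's equivalent of the check added to assertwaitok():
 * failing here flags the illegal sleep before any context switch.
 */
static void
assert_noncritical_sketch(void)
{
	assert(smr_crit_depth == 0);
}
```

Sleeping inside a read-side critical section would stall readers arbitrarily long, so catching the attempt early makes the bug far easier to attribute.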
| date | 2019-05-17T07:12:32Z |
|---|---|
| author | jmatthew |
| files | src/sys/dev/pci/if_mcx.c |
| message | Implement mcx_down() and use it to unwind unsuccessful mcx_up() attempts. |
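Reusing the teardown routine to unwind a failed bring-up is a common driver pattern: if the down path tolerates partially initialized state, every failure branch in the up path can share one exit. A hedged sketch, with made-up state and helper names that do not correspond to the real if_mcx.c code:

```c
#include <stdbool.h>

/* Invented driver state for illustration. */
struct mcx_state {
	bool queues;
	bool irq;
	bool running;
};

static bool
alloc_queues(struct mcx_state *s, bool ok)
{
	s->queues = ok;
	return ok;
}

static bool
setup_irq(struct mcx_state *s, bool ok)
{
	s->irq = ok;
	return ok;
}

/* Teardown tolerates partial state, so it doubles as the error unwind. */
static void
mcx_down_sketch(struct mcx_state *s)
{
	if (s->irq)
		s->irq = false;
	if (s->queues)
		s->queues = false;
	s->running = false;
}

static bool
mcx_up_sketch(struct mcx_state *s, bool q_ok, bool irq_ok)
{
	if (!alloc_queues(s, q_ok))
		goto down;
	if (!setup_irq(s, irq_ok))
		goto down;
	s->running = true;
	return true;
down:
	mcx_down_sketch(s);	/* one unwind path for every failure */
	return false;
}
```

The benefit is that resource release order lives in exactly one place, so adding a new bring-up step cannot leak on only some error branches.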
| date | 2019-05-17T19:07:15Z |
|---|---|
| author | guenther |
| files | src/sys/arch/amd64/amd64/cpu.c |
| | src/sys/arch/amd64/amd64/genassym.cf |
| | src/sys/arch/amd64/amd64/identcpu.c |
| | src/sys/arch/amd64/amd64/locore.S |
| | src/sys/arch/amd64/amd64/mainbus.c |
| | src/sys/arch/amd64/amd64/mds.S |
| | src/sys/arch/amd64/amd64/vmm.c |
| message | Mitigate Intel's Microarchitectural Data Sampling vulnerability. If the CPU has the new VERW behavior then that is used; otherwise the proper sequence from Intel's "Deep Dive" doc is used in the return-to-userspace and enter-VMM-guest paths. The enter-C3-idle path is not mitigated because it's only a problem when SMT/HT is enabled: mitigating everything when that's enabled would be a _huge_ set of changes that we see no point in doing. Update vmm(4) to pass through the MSR bits so that guests can apply the optimal mitigation. VMM help and specific feedback from mlarkin@; vendor-portability help from jsg@ and kettenis@. ok kettenis@ mlarkin@ deraadt@ jsg@ |
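The choice between the two mitigation paths the message describes can be modeled as a small decision function. The bit names below (MDS_NO in IA32_ARCH_CAPABILITIES, MD_CLEAR in CPUID leaf 7) are the architecturally documented ones, but the function itself is only an illustrative sketch of the selection logic, not the actual identcpu.c code:

```c
#include <stdbool.h>

/* Possible MDS mitigation strategies (enum names invented for the sketch). */
enum mds_mitigation {
	MDS_MITIGATION_NONE,	/* CPU reports MDS_NO: not affected */
	MDS_MITIGATION_VERW,	/* microcode makes VERW flush CPU buffers */
	MDS_MITIGATION_SOFTWARE	/* fall back to the documented sequence */
};

/*
 * mds_no:   IA32_ARCH_CAPABILITIES.MDS_NO is set.
 * md_clear: CPUID leaf 7 advertises MD_CLEAR (new VERW behavior).
 */
static enum mds_mitigation
mds_choose_sketch(bool mds_no, bool md_clear)
{
	if (mds_no)
		return MDS_MITIGATION_NONE;
	if (md_clear)
		return MDS_MITIGATION_VERW;
	return MDS_MITIGATION_SOFTWARE;
}
```

Passing the relevant MSR bits through to vmm(4) guests, as the commit does, lets a guest run this same decision and pick the cheapest mitigation that is actually safe on the host CPU.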
| date | 2019-05-17T19:07:16Z |
|---|---|
| author | guenther |
| files | src/sys/arch/amd64/amd64/vmm_support.S |
| | src/sys/arch/amd64/conf/Makefile.amd64 |
| | src/sys/arch/amd64/conf/files.amd64 |
| | src/sys/arch/amd64/include/codepatch.h |
| | src/sys/arch/amd64/include/cpu.h |
| | src/sys/arch/amd64/include/specialreg.h |
| | src/sys/arch/amd64/include/vmmvar.h |
| message | Mitigate Intel's Microarchitectural Data Sampling vulnerability. If the CPU has the new VERW behavior then that is used; otherwise the proper sequence from Intel's "Deep Dive" doc is used in the return-to-userspace and enter-VMM-guest paths. The enter-C3-idle path is not mitigated because it's only a problem when SMT/HT is enabled: mitigating everything when that's enabled would be a _huge_ set of changes that we see no point in doing. Update vmm(4) to pass through the MSR bits so that guests can apply the optimal mitigation. VMM help and specific feedback from mlarkin@; vendor-portability help from jsg@ and kettenis@. ok kettenis@ mlarkin@ deraadt@ jsg@ |
| date | 2019-05-17T19:07:47Z |
|---|---|
| author | guenther |
| files | src/sys/arch/amd64/include/cpu_full.h |
| message | Oops, forgot to include a copyright year when originally added. |