Discussion:
[RFC PATCH v4 00/28] x86: Secure Memory Encryption (AMD)
Dave Young
2017-03-01 09:17:25 UTC
Hi Tom,
This RFC patch series provides support for AMD's new Secure Memory
Encryption (SME) feature.
SME can be used to mark individual pages of memory as encrypted through the
page tables. A page of memory that is marked encrypted will be automatically
decrypted when read from DRAM and will be automatically encrypted when
written to DRAM. Details on SME can be found in the links below.
The SME feature is identified through a CPUID function and enabled through
the SYSCFG MSR. Once enabled, page table entries will determine how the
memory is accessed. If a page table entry has the memory encryption mask set,
then that memory will be accessed as encrypted memory. The memory encryption
mask (as well as other related information) is determined from settings
returned through the same CPUID function that identifies the presence of the
feature.
The approach that this patch series takes is to encrypt everything possible
starting early in the boot, where the kernel itself is encrypted. Using the
page table macros, the encryption mask can be incorporated into all page
table entries and page allocations. By updating the protection map, userspace
allocations are also marked encrypted. Certain data must be accounted for
as having been placed in memory before SME was enabled (EFI, initrd, etc.)
and accessed accordingly.
This patch series is a precursor to another AMD processor feature called
Secure Encrypted Virtualization (SEV). The support for SEV will build upon
the SME support and will be submitted later. Details on SEV can be found
in the links below.
http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/12/AMD_Memory_Encryption_Whitepaper_v7-Public.pdf
http://support.amd.com/TechDocs/24593.pdf
SME is section 7.10
SEV is section 15.34
This patch series is based off of the master branch of tip.
Commit a27cb9e1b2b4 ("Merge branch 'WIP.sched/core'")
---
Still to do: IOMMU enablement support
- Broke out some of the patches into smaller individual patches
- Updated Documentation
- Added a message to indicate why the IOMMU was disabled
- Updated CPU feature support for SME by taking into account whether
BIOS has enabled SME
- Eliminated redundant functions
- Added some warning messages for DMA usage of bounce buffers when SME
is active
- Added support for persistent memory
- Added support to determine when setup data is being mapped and be sure
to map it un-encrypted
- Added CONFIG support to set the default action of whether to activate
SME if it is supported/enabled
- Added support for (re)booting with kexec
Could you please add the kexec list in cc when you update the patches so
that kexec/kdump people do not miss them?
- Updated Documentation
- Make the encryption mask available outside of arch/x86 through a
standard include file
- Conversion of assembler routines to C where possible (not everything
could be converted, e.g. the routine that does the actual encryption
needs to be copied into a safe location and it is difficult to
determine the actual length of the function in order to copy it)
- Fix SME feature use of scattered CPUID feature
- Creation of SME specific functions for things like encrypting
the setup data, ramdisk, etc.
- New take on early_memremap / memremap encryption support
- Additional support for accessing video buffers (fbdev/gpu) as
un-encrypted
- Disable IOMMU for now - need to investigate further in relation to
how it needs to be programmed relative to accessing physical memory
- Added Documentation.
- Removed AMD vendor check for setting the PAT write protect mode
- Updated naming of trampoline flag for SME as well as moving of the
SME check to before paging is enabled.
- Change to early_memremap to identify the data being mapped as either
boot data or kernel data. The idea being that boot data will have
been placed in memory as un-encrypted data and would need to be accessed
as such.
- Updated debugfs support for the bootparams to access the data properly.
- Do not set the SYSCFG[MEME] bit, only check it. The setting of the
MemEncryptionModeEn bit results in a reduction of physical address size
of the processor. It is possible that the BIOS could have configured
resources into a range that will now not be addressable. To prevent this,
rely on BIOS to set the SYSCFG[MEME] bit and only then enable memory
encryption support in the kernel.
x86: Documentation for AMD Secure Memory Encryption (SME)
x86: Set the write-protect cache mode for full PAT support
x86: Add the Secure Memory Encryption CPU feature
x86: Handle reduction in physical address size with SME
x86: Add Secure Memory Encryption (SME) support
x86: Add support to enable SME during early boot processing
x86: Provide general kernel support for memory encryption
x86: Extend the early_memremap support with additional attrs
x86: Add support for early encryption/decryption of memory
x86: Insure that boot memory areas are mapped properly
x86: Add support to determine the E820 type of an address
efi: Add an EFI table address match function
efi: Update efi_mem_type() to return defined EFI mem types
Add support to access boot related data in the clear
Add support to access persistent memory in the clear
x86: Add support for changing memory encryption attribute
x86: Decrypt trampoline area if memory encryption is active
x86: DMA support for memory encryption
swiotlb: Add warnings for use of bounce buffers with SME
iommu/amd: Disable AMD IOMMU if memory encryption is active
x86: Check for memory encryption on the APs
x86: Do not specify encrypted memory for video mappings
x86/kvm: Enable Secure Memory Encryption of nested page tables
x86: Access the setup data through debugfs decrypted
x86: Access the setup data through sysfs decrypted
x86: Allow kexec to be used with SME
x86: Add support to encrypt the kernel in-place
x86: Add support to make use of Secure Memory Encryption
Documentation/admin-guide/kernel-parameters.txt | 11 +
Documentation/x86/amd-memory-encryption.txt | 57 ++++
arch/x86/Kconfig | 26 ++
arch/x86/boot/compressed/pagetable.c | 7 +
arch/x86/include/asm/cacheflush.h | 5
arch/x86/include/asm/cpufeature.h | 7 -
arch/x86/include/asm/cpufeatures.h | 5
arch/x86/include/asm/disabled-features.h | 3
arch/x86/include/asm/dma-mapping.h | 5
arch/x86/include/asm/e820/api.h | 2
arch/x86/include/asm/e820/types.h | 2
arch/x86/include/asm/fixmap.h | 20 +
arch/x86/include/asm/init.h | 1
arch/x86/include/asm/io.h | 3
arch/x86/include/asm/kvm_host.h | 3
arch/x86/include/asm/mem_encrypt.h | 108 ++++++++
arch/x86/include/asm/msr-index.h | 2
arch/x86/include/asm/page.h | 4
arch/x86/include/asm/pgtable.h | 26 +-
arch/x86/include/asm/pgtable_types.h | 54 +++-
arch/x86/include/asm/processor.h | 3
arch/x86/include/asm/realmode.h | 12 +
arch/x86/include/asm/required-features.h | 3
arch/x86/include/asm/setup.h | 8 +
arch/x86/include/asm/vga.h | 13 +
arch/x86/kernel/Makefile | 3
arch/x86/kernel/cpu/common.c | 23 ++
arch/x86/kernel/e820.c | 26 ++
arch/x86/kernel/espfix_64.c | 2
arch/x86/kernel/head64.c | 46 +++
arch/x86/kernel/head_64.S | 65 ++++-
arch/x86/kernel/kdebugfs.c | 30 +-
arch/x86/kernel/ksysfs.c | 27 +-
arch/x86/kernel/machine_kexec_64.c | 3
arch/x86/kernel/mem_encrypt_boot.S | 156 ++++++++++++
arch/x86/kernel/mem_encrypt_init.c | 310 +++++++++++++++++++++++
arch/x86/kernel/pci-dma.c | 11 +
arch/x86/kernel/pci-nommu.c | 2
arch/x86/kernel/pci-swiotlb.c | 8 -
arch/x86/kernel/process.c | 43 +++
arch/x86/kernel/setup.c | 43 +++
arch/x86/kernel/smp.c | 4
arch/x86/kvm/mmu.c | 8 -
arch/x86/kvm/vmx.c | 3
arch/x86/kvm/x86.c | 3
arch/x86/mm/Makefile | 1
arch/x86/mm/ident_map.c | 6
arch/x86/mm/ioremap.c | 157 ++++++++++++
arch/x86/mm/kasan_init_64.c | 4
arch/x86/mm/mem_encrypt.c | 218 ++++++++++++++++
arch/x86/mm/pageattr.c | 71 +++++
arch/x86/mm/pat.c | 6
arch/x86/platform/efi/efi.c | 4
arch/x86/platform/efi/efi_64.c | 16 +
arch/x86/realmode/init.c | 16 +
arch/x86/realmode/rm/trampoline_64.S | 17 +
drivers/firmware/efi/efi.c | 33 ++
drivers/gpu/drm/drm_gem.c | 2
drivers/gpu/drm/drm_vm.c | 4
drivers/gpu/drm/ttm/ttm_bo_vm.c | 7 -
drivers/gpu/drm/udl/udl_fb.c | 4
drivers/iommu/amd_iommu_init.c | 7 +
drivers/video/fbdev/core/fbmem.c | 12 +
include/asm-generic/early_ioremap.h | 2
include/asm-generic/pgtable.h | 8 +
include/linux/dma-mapping.h | 11 +
include/linux/efi.h | 7 +
include/linux/mem_encrypt.h | 53 ++++
include/linux/swiotlb.h | 1
init/main.c | 13 +
kernel/kexec_core.c | 24 ++
kernel/memremap.c | 11 +
lib/swiotlb.c | 59 ++++
mm/early_ioremap.c | 28 ++
74 files changed, 1880 insertions(+), 128 deletions(-)
create mode 100644 Documentation/x86/amd-memory-encryption.txt
create mode 100644 arch/x86/include/asm/mem_encrypt.h
create mode 100644 arch/x86/kernel/mem_encrypt_boot.S
create mode 100644 arch/x86/kernel/mem_encrypt_init.c
create mode 100644 arch/x86/mm/mem_encrypt.c
create mode 100644 include/linux/mem_encrypt.h
--
Tom Lendacky
--
Thanks a lot!
Dave
Dave Young
2017-03-01 09:27:44 UTC
Add kexec list..
Post by Dave Young
Hi Tom,
Provide support so that kexec can be used to boot a kernel when SME is
enabled.
Is the point of kexec and kdump to, ehh, dump memory? But if the
rest of the memory is encrypted you won't get much, will you?
Kexec can be used to reboot a system without going back through BIOS.
So you can use kexec without using kdump.
For kdump, just taking a quick look, the option to enable memory
encryption can be provided on the crash kernel command line and then
Is there a simple way to get the SME status? Probably add some sysfs
file for this purpose.
the crash kernel would be able to copy the memory decrypted if the
page table is set up properly. It looks like currently ioremap_cache()
is used to map the old memory page. That might be able to be changed
to a memremap() so that the encryption bit is set in the mapping. That
will mean that memory that is not marked encrypted (EFI tables, swiotlb
memory, etc) would not be read correctly.
Could we manage to store info about those ranges which are not encrypted
so that memremap can handle them?
Would it make sense to include some printk to warn the user, if they
are setting up kdump, that they won't get anything out of it?
Probably a good idea to add something like that.
It will break kdump functionality; it should be fixed instead of
just adding a printk to warn the user.
Thanks
Dave
Tom Lendacky
2017-03-01 15:36:35 UTC
+kexec list
Provide support so that kexec can be used to boot a kernel when SME is
enabled.
Support is needed to allocate pages for kexec without encryption. This
is needed in order to be able to reboot into the kernel in the same manner
as it was originally booted.
Additionally, when shutting down all of the CPUs we need to be sure to
disable caches, flush the caches and then halt. This is needed when booting
from a state where SME was not active into a state where SME is active.
Without these steps, it is possible for cache lines to exist for the same
physical location but tagged both with and without the encryption bit. This
can cause random memory corruption when caches are flushed depending on
which cacheline is written last.
---
arch/x86/include/asm/cacheflush.h | 2 ++
arch/x86/include/asm/init.h | 1 +
arch/x86/include/asm/mem_encrypt.h | 10 ++++++++
arch/x86/include/asm/pgtable_types.h | 1 +
arch/x86/kernel/machine_kexec_64.c | 3 ++
arch/x86/kernel/process.c | 43 +++++++++++++++++++++++++++++++++-
arch/x86/kernel/smp.c | 4 ++-
arch/x86/mm/ident_map.c | 6 +++--
arch/x86/mm/pageattr.c | 2 ++
include/linux/mem_encrypt.h | 10 ++++++++
kernel/kexec_core.c | 24 +++++++++++++++++++
11 files changed, 100 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
index 33ae60a..2180cd5 100644
--- a/arch/x86/include/asm/cacheflush.h
+++ b/arch/x86/include/asm/cacheflush.h
@@ -48,8 +48,10 @@
int set_memory_rw(unsigned long addr, int numpages);
int set_memory_np(unsigned long addr, int numpages);
int set_memory_4k(unsigned long addr, int numpages);
+#ifdef CONFIG_AMD_MEM_ENCRYPT
int set_memory_encrypted(unsigned long addr, int numpages);
int set_memory_decrypted(unsigned long addr, int numpages);
+#endif
int set_memory_array_uc(unsigned long *addr, int addrinarray);
int set_memory_array_wc(unsigned long *addr, int addrinarray);
Hmm, why is this ifdeffery creeping in now?
Just supply !CONFIG_AMD_MEM_ENCRYPT versions which don't do anything but
return the address.
This was added because the set_memory_decrypted() call is now called
from kernel/kexec_core.c. And since all the set_memory() functions
are defined in an arch include I had to swizzle things around. I think
I should probably do something similar to the SWIOTLB support and have
a __weak function to alter the memory area attributes.
diff --git a/arch/x86/include/asm/init.h b/arch/x86/include/asm/init.h
index 737da62..b2ec511 100644
--- a/arch/x86/include/asm/init.h
+++ b/arch/x86/include/asm/init.h
@@ -6,6 +6,7 @@ struct x86_mapping_info {
void *context; /* context for alloc_pgt_page */
unsigned long pmd_flag; /* page flag for PMD entry */
unsigned long offset; /* ident mapping offset */
+ unsigned long kernpg_flag; /* kernel pagetable flag override */
};
int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 5a17f1b..1fd5426 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -64,6 +64,16 @@ static inline u64 sme_dma_mask(void)
return 0ULL;
}
+static inline int set_memory_encrypted(unsigned long vaddr, int numpages)
+{
+ return 0;
+}
+
+static inline int set_memory_decrypted(unsigned long vaddr, int numpages)
+{
+ return 0;
+}
+
#endif
static inline void __init sme_early_encrypt(resource_size_t paddr,
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index f00e70f..456c5cc 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -213,6 +213,7 @@ enum page_cache_mode {
#define PAGE_KERNEL __pgprot(__PAGE_KERNEL | _PAGE_ENC)
#define PAGE_KERNEL_RO __pgprot(__PAGE_KERNEL_RO | _PAGE_ENC)
#define PAGE_KERNEL_EXEC __pgprot(__PAGE_KERNEL_EXEC | _PAGE_ENC)
+#define PAGE_KERNEL_EXEC_NOENC __pgprot(__PAGE_KERNEL_EXEC)
#define PAGE_KERNEL_RX __pgprot(__PAGE_KERNEL_RX | _PAGE_ENC)
#define PAGE_KERNEL_NOCACHE __pgprot(__PAGE_KERNEL_NOCACHE | _PAGE_ENC)
#define PAGE_KERNEL_LARGE __pgprot(__PAGE_KERNEL_LARGE | _PAGE_ENC)
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index 307b1f4..b01648c 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -76,7 +76,7 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
set_pmd(pmd, __pmd(__pa(pte) | _KERNPG_TABLE));
}
pte = pte_offset_kernel(pmd, vaddr);
- set_pte(pte, pfn_pte(paddr >> PAGE_SHIFT, PAGE_KERNEL_EXEC));
+ set_pte(pte, pfn_pte(paddr >> PAGE_SHIFT, PAGE_KERNEL_EXEC_NOENC));
return 0;
free_transition_pgtable(image);
@@ -104,6 +104,7 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
.alloc_pgt_page = alloc_pgt_page,
.context = image,
.pmd_flag = __PAGE_KERNEL_LARGE_EXEC,
+ .kernpg_flag = _KERNPG_TABLE_NOENC,
};
unsigned long mstart, mend;
pgd_t *level4p;
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 3ed869c..9b01261 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -279,8 +279,43 @@ bool xen_set_default_idle(void)
return ret;
}
#endif
-void stop_this_cpu(void *dummy)
+
+static bool is_smt_thread(int cpu)
{
+#ifdef CONFIG_SCHED_SMT
+ if (cpumask_test_cpu(smp_processor_id(), cpu_smt_mask(cpu)))
+ return true;
+#endif
No, no sched stuff in here. Just
if (cpumask_test_cpu(smp_processor_id(), topology_sibling_cpumask(cpu)))
Ah, ok, much nicer.
+ return false;
+}
+
+void stop_this_cpu(void *data)
+{
+ atomic_t *stopping_cpu = data;
+ bool do_cache_disable = false;
+ bool do_wbinvd = false;
+
+ if (stopping_cpu) {
+ int stopping_id = atomic_read(stopping_cpu);
+ struct cpuinfo_x86 *c = &cpu_data(stopping_id);
+
+ /*
+ * If the processor supports SME then we need to clear
+ * out cache information before halting it because we could
+ * be performing a kexec. With kexec, going from SME
+ * inactive to SME active requires clearing cache entries
+ * so that addresses without the encryption bit set don't
+ * corrupt the same physical address that has the encryption
+ * bit set when caches are flushed. If this is not an SMT
+ * thread of the stopping CPU then we disable caching at this
+ * point to keep the cache clean.
+ */
+ if (cpu_has(c, X86_FEATURE_SME)) {
+ do_cache_disable = !is_smt_thread(stopping_id);
+ do_wbinvd = true;
+ }
+ }
Let's simplify this (diff ontop of yours). Notice the sme_active() call
in there - I believe we want to do this only when SME is active - not on
any CPU which merely supports SME.
No, because we could be going from SME not active and rebooting using
kexec to a state where SME is active. This is where the cache line
issue can arise.
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 9b012612698d..e771d7a42e49 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -296,9 +296,6 @@ void stop_this_cpu(void *data)
bool do_wbinvd = false;
if (stopping_cpu) {
- int stopping_id = atomic_read(stopping_cpu);
- struct cpuinfo_x86 *c = &cpu_data(stopping_id);
-
/*
* If the processor supports SME then we need to clear
* out cache information before halting it because we could
@@ -310,8 +307,8 @@ void stop_this_cpu(void *data)
* thread of the stopping CPU then we disable caching at this
* point to keep the cache clean.
*/
- if (cpu_has(c, X86_FEATURE_SME)) {
- do_cache_disable = !is_smt_thread(stopping_id);
+ if (sme_active()) {
+ do_cache_disable = !is_smt_thread(atomic_read(stopping_cpu));
do_wbinvd = true;
}
}
local_irq_disable();
/*
@@ -289,6 +324,12 @@ void stop_this_cpu(void *dummy)
disable_local_APIC();
mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
+ if (do_cache_disable)
+ write_cr0(read_cr0() | X86_CR0_CD);
Question: what clears CD back again? The CPU online path?
Yes, when the CPU comes back online the cache is re-enabled.
+
+ if (do_wbinvd)
+ wbinvd();
+
Ok, so this whole shebang is pretty much crippling the machine.
And, AFAICT, you're doing this now from smp_stop_nmi_callback() and
smp_reboot_interrupt() as they both pass a !NULL arg to stop_this_cpu().
I'll take a closer look at the sysfs support to see how moving a cpu
to/from online is affected.
And AFAICT those are not all cases where we kexec.
Yes, kexec can be invoked through a reboot command.
What you need instead, IMO, is __crash_kexec() ->
machine_crash_shutdown() -> native_machine_crash_shutdown() and put all
the SME special handling there.
I *think*.
I'll take a closer look at the kexec path to see what can be done. I
might be able to do something with determining if SME is configured
in the kernel and what the default SME state is combined with checking
the command line of the kernel being kexeced.

Thanks,
Tom
for (;;)
halt();
}
...
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 9710f5c..46cc89d 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1742,6 +1742,7 @@ int set_memory_4k(unsigned long addr, int numpages)
__pgprot(0), 1, 0, NULL);
}
+#ifdef CONFIG_AMD_MEM_ENCRYPT
static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
{
struct cpa_data cpa;
@@ -1807,6 +1808,7 @@ int set_memory_decrypted(unsigned long addr, int numpages)
return __set_memory_enc_dec(addr, numpages, false);
}
EXPORT_SYMBOL(set_memory_decrypted);
+#endif /* CONFIG_AMD_MEM_ENCRYPT */
Btw, I don't see those things used in modules to justify the
EXPORT_SYMBOL(). And it should be EXPORT_SYMBOL_GPL() since it is a new
symbol.
Ok.
So you could put those wrappers in a header and do the ifdeffery there and
I'll look at making it cleaner. It would be simple if all the
set_memory() functions weren't arch specific. The earlier response about
making it an arch callback function might be best.
static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
{
if (!IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT))
return 0;
...
}
so that you can save yourself the ifdeffery. The compiler would still
parse the function body so everything else used in there would have to
be defined too, even in the !CONFIG_AMD_MEM_ENCRYPT case.
int set_pages_uc(struct page *page, int numpages)
{
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 6829ff1..913cf80 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -34,6 +34,16 @@ static inline u64 sme_dma_mask(void)
return 0ULL;
}
+static inline int set_memory_encrypted(unsigned long vaddr, int numpages)
+{
+ return 0;
+}
+
+static inline int set_memory_decrypted(unsigned long vaddr, int numpages)
+{
+ return 0;
+}
+
#endif
#endif /* CONFIG_AMD_MEM_ENCRYPT */
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 5617cc4..ab62f41 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -38,6 +38,7 @@
#include <linux/syscore_ops.h>
#include <linux/compiler.h>
#include <linux/hugetlb.h>
+#include <linux/mem_encrypt.h>
#include <asm/page.h>
#include <asm/sections.h>
@@ -315,6 +316,18 @@ static struct page *kimage_alloc_pages(gfp_t gfp_mask, unsigned int order)
count = 1 << order;
for (i = 0; i < count; i++)
SetPageReserved(pages + i);
+
+ /*
+ * If SME is active we need to be sure that kexec pages are
+ * not encrypted because when we boot to the new kernel the
+ * pages won't be accessed encrypted (initially).
+ */
+ if (sme_active()) {
+ void *vaddr = page_address(pages);
+
+ set_memory_decrypted((unsigned long)vaddr, count);
+ memset(vaddr, 0, count * PAGE_SIZE);
Why the memset?
Since the memory attribute was changed, a page with all zeroes in memory
when it was an encrypted page would now read as random data (since the
data on the page won't be decrypted). So after setting the attribute
the memset just clears it to zero. I guess I can do the memset only if
__GFP_ZERO is present in gfp_mask.
+ }
}
return pages;
@@ -326,6 +339,17 @@ static void kimage_free_pages(struct page *page)
order = page_private(page);
count = 1 << order;
+
+ /*
+ * If SME is active we need to reset the pages back to being an
+ * encrypted mapping before freeing them.
+ */
+ if (sme_active()) {
+ void *vaddr = page_address(page);
+
+ set_memory_encrypted((unsigned long)vaddr, count);
if (sme_active())
set_memory_encrypted((unsigned long)page_address(page), count);
looks ok to me too.
Ok.

Thanks,
Tom
Tom Lendacky
2017-03-01 17:51:40 UTC
Post by Dave Young
Hi Tom,
Hi Dave,
... SNIP ...
Post by Dave Young
- Added support for (re)booting with kexec
Could you please add kexec list in cc when you updating the patches so
that kexec/kdump people do not miss them?
Sorry about that, I'll be sure to add it to the cc list.

Thanks,
Tom
Tom Lendacky
2017-03-06 18:04:28 UTC
+kexec-list
Post by Tom Lendacky
Post by Dave Young
Hi Tom,
Hi Dave,
Post by Dave Young
Provide support so that kexec can be used to boot a kernel when SME is
enabled.
Is the point of kexec and kdump to, ehh, dump memory? But if the
rest of the memory is encrypted you won't get much, will you?
Kexec can be used to reboot a system without going back through BIOS.
So you can use kexec without using kdump.
For kdump, just taking a quick look, the option to enable memory
encryption can be provided on the crash kernel command line and then
Is there a simple way to get the SME status? Probably add some sysfs
file for this purpose.
Currently there is not. I can look at adding something, maybe just the
sme_me_mask value, which if non-zero, would indicate SME is active.
Post by Dave Young
the crash kernel would be able to copy the memory decrypted if the
pagetable is set up properly. It looks like currently ioremap_cache()
is used to map the old memory page. That might be able to be changed
to a memremap() so that the encryption bit is set in the mapping. That
will mean that memory that is not marked encrypted (EFI tables, swiotlb
memory, etc) would not be read correctly.
Could we manage to store info about those ranges which are not encrypted
so that memremap can handle them?
I can look into whether something can be done in this area. Any input
you can provide as to what would be the best way/place to store the
range info so kdump can make use of it, would be greatly appreciated.
Post by Dave Young
Would it make sense to include some printk to warn the user, if they
are setting up kdump, that they won't get anything out of it?
Probably a good idea to add something like that.
It will break kdump functionality; it should be fixed instead of
just adding a printk to warn the user.
I do want kdump to work. I'll investigate further what can be done in
this area.
Thanks,
Tom
Post by Dave Young
Thanks
Dave