| 087c6aa2 | 17-Dec-2019 |
Etienne Carriere <etienne.carriere@st.com> |
plat-stm32mp1: shared resources: remove unused stm32mp_clock_is_*()
Remove unused functions stm32mp_clock_is_shareable(), stm32mp_clock_is_shared() and stm32mp_clock_is_non_secure(). These were initially designed to allow a secure service to expose clocks to the non-secure world. These functions are now deprecated since stm32mp_nsec_can_access_clock() was introduced.
Signed-off-by: Etienne Carriere <etienne.carriere@st.com> Acked-by: Jerome Forissier <jerome@forissier.org>
|
| 2c14ebf5 | 02-Dec-2019 |
Etienne Carriere <etienne.carriere@st.com> |
plat-stm32mp1: shared resources: helper for shareable clocks
stm32mp_nsec_can_access_clock() reports whether a clock is assigned to the secure world only or whether it can be manipulated by the non-secure world through some service exposed by the secure world, such as an SCMI server.
Signed-off-by: Etienne Carriere <etienne.carriere@st.com> Acked-by: Jerome Forissier <jerome@forissier.org>
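The helper's contract (secure-only clocks versus clocks the non-secure world may touch) can be sketched as below. Only the function name and its reported behavior come from the commit message; the lookup table, clock IDs, and internal logic are purely hypothetical illustrations, not the stm32mp1 implementation.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical assignment table: which clocks are secure-only.
 * The real assignments live in the stm32mp1 shared-resources code. */
struct clock_entry {
	unsigned long clock_id;
	bool secure_only;
};

static const struct clock_entry clock_table[] = {
	{ .clock_id = 1, .secure_only = true },  /* e.g. a crypto block clock */
	{ .clock_id = 2, .secure_only = false }, /* e.g. a UART clock */
};

/* Sketch of the contract: true when the non-secure world may
 * manipulate the clock (directly or through an SCMI service). */
bool stm32mp_nsec_can_access_clock(unsigned long clock_id)
{
	for (size_t i = 0; i < sizeof(clock_table) / sizeof(clock_table[0]); i++)
		if (clock_table[i].clock_id == clock_id)
			return !clock_table[i].secure_only;

	return false; /* unknown clocks stay secure by default */
}
```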
|
| 70224f58 | 05-Apr-2020 |
Etienne Carriere <etienne.carriere@linaro.org> |
ta: pkcs11: drop derive from AES_ECB
Drop key derivation as a capability of mechanism AES_ECB since it is not part of the PKCS#11 specification.
Reported-by: Ricardo Salveti <ricardo@foundries.io> Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Rouven Czerwinski <r.czerwinski@pengutronix.de> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 6459f267 | 08-Apr-2020 |
Etienne Carriere <etienne.carriere@linaro.org> |
ta: pkcs11: fix MECHANISM_IDS to return OK when no output buffer
Fix PKCS11 TA command PKCS11_CMD_MECHANISM_IDS to handle the case where the client provides a NULL buffer reference when querying the list of supported mechanism IDs. In such a case the TA should return OK, not PKCS11_CKR_BUFFER_TOO_SMALL.
This change is needed since commit [1] that makes an OP-TEE TA able to receive memref parameters with a NULL buffer reference.
Link: [1] 20b567068a37 ("libutee: flag NULL pointer using invalid shm id") Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Rouven Czerwinski <r.czerwinski@pengutronix.de>
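The NULL-buffer convention this fix implements is the common size-query pattern. A minimal standalone sketch, with hypothetical status codes and an illustrative ID list (not the TA's real codes or mechanisms):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical status codes mirroring the convention described above */
#define RC_OK                0
#define RC_BUFFER_TOO_SMALL  1

/* Illustrative mechanism ID list, not the TA's real one */
static const uint32_t mechanism_ids[] = { 0x1081, 0x1082, 0x210 };

/* A NULL output buffer means "report the required size" and must
 * return OK; only a non-NULL but undersized buffer is an error. */
int get_mechanism_ids(uint32_t *out, size_t *out_size)
{
	size_t needed = sizeof(mechanism_ids);

	if (!out) {
		*out_size = needed;
		return RC_OK; /* query-only call succeeds */
	}
	if (*out_size < needed) {
		*out_size = needed;
		return RC_BUFFER_TOO_SMALL;
	}
	memcpy(out, mechanism_ids, needed);
	*out_size = needed;
	return RC_OK;
}
```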
|
| 5b25c76a | 07-Apr-2020 |
Jerome Forissier <jerome@forissier.org> |
Squashed commit upgrading to mbedtls-2.16.5
Squash merging branch import/mbedtls-2.16.5
058aefb2bfa4 ("core: mbedtls: use SHA-256 crypto accelerated routines")
bcef9baed8f1 ("core: mbedtls: use SHA-1 crypto accelerated routines")
c9359f31db12 ("core: mbedtls: use AES crypto accelerated routines")
0e6c1e2642c7 ("core: merge tee_*_get_digest_size() into a single function")
0cb3c28a2f4d ("libmbedtls: mbedtls_mpi_exp_mod(): optimize mempool usage")
5abf0e6ab72e ("libmbedtls: mbedtls_mpi_exp_mod(): reduce stack usage")
2ccc08ac7fef ("libmbedtls: preserve mempool usage on reinit")
cd2a24648569 ("libmbedtls: mbedtls_mpi_exp_mod() initialize W")
7727182ecb56 ("libmbedtls: fix no CRT issue")
120737075dcf ("libmbedtls: add interfaces in mbedtls for context memory operation")
1126250b3af8 ("libmbedtls: add missing source file chachapoly.c")
23972e9f1c98 ("libmedtls: mpi_miller_rabin: increase count limit")
1fcbc05b3cd2 ("libmbedtls: add mbedtls_mpi_init_mempool()")
66e03f068078 ("libmbedtls: make mbedtls_mpi_mont*() available")
d07e0ce56236 ("libmbedtls: refine mbedtls license header")
491ee2cd0ff4 ("mbedtls: configure mbedtls to reach for config")
9b6cee685d9a ("mbedtls: remove default include/mbedtls/config.h")
84f7467a0a91 ("Import mbedtls-2.16.5")
Signed-off-by: Jerome Forissier <jerome@forissier.org> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| ee3e1c54 | 07-Apr-2020 |
Cedric Neveux <cedric.neveux@nxp.com> |
core: utee_param_to_param(): set mobj to NULL when NULL memrefs of size 0
Set the tee_ta_param mobj to NULL if the user parameter is a NULL memref of size 0. A NULL mobj pointer also identifies the last parameter of the list.
Fixes: 9d2e798360b5 ("core: TEE capability for null sized memrefs support")
Signed-off-by: Cedric Neveux <cedric.neveux@nxp.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Tested-by: Jerome Forissier <jerome@forissier.org> (HiKey960) Tested-by: Etienne Carriere <etienne.carriere@linaro.org>
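The rule the fix adds can be sketched with simplified stand-in types; the struct names and fields below are hypothetical and do not reflect the core's real tee_ta_param or mobj layout:

```c
#include <stddef.h>

/* Simplified, hypothetical stand-ins for the core's types */
struct mobj {
	int dummy;
};

struct memref_param {
	void *buffer;       /* user-supplied pointer */
	size_t size;
	struct mobj *mobj;  /* backing memory object */
};

/* A NULL memref of size 0 maps to a NULL mobj rather than a dummy
 * memory object; NULL also marks the last parameter of the list. */
void assign_mobj(struct memref_param *p, struct mobj *backing)
{
	if (!p->buffer && !p->size)
		p->mobj = NULL;
	else
		p->mobj = backing;
}
```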
|
| 2288b071 | 06-Apr-2020 |
Jerome Forissier <jerome@forissier.org> |
core: lockdep: introduce CFG_LOCKDEP_RECORD_STACK
The lockdep algorithm uses quite a bit of heap memory to record the call stacks. This commit adds a configuration flag so that this may be turned off. When CFG_LOCKDEP_RECORD_STACK=n, deadlock detection still works but the diagnostic messages will obviously show no call stack.
Signed-off-by: Jerome Forissier <jerome@forissier.org> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
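The flag's effect can be illustrated with a minimal sketch; all names besides CFG_LOCKDEP_RECORD_STACK are hypothetical and do not reflect the real lockdep code:

```c
#include <stddef.h>

/* Hypothetical sketch: the flag gates only the heap-hungry stack
 * recording, not the detection itself. Build with
 * -DCFG_LOCKDEP_RECORD_STACK to enable recording. */
struct lock_record {
	void *lock;         /* enough for deadlock detection itself */
	void **call_stack;  /* extra diagnostics, costs heap memory */
};

#ifdef CFG_LOCKDEP_RECORD_STACK
static void *fake_stack[4]; /* stand-in for a captured call stack */

static void **capture_stack(void)
{
	return fake_stack;
}
#endif

void record_lock(struct lock_record *rec, void *lock)
{
	rec->lock = lock;
#ifdef CFG_LOCKDEP_RECORD_STACK
	rec->call_stack = capture_stack();
#else
	rec->call_stack = NULL; /* detection works, diagnostics lose the stack */
#endif
}
```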
|
| 80f47278 | 30-Mar-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: drop __weak from internal_aes_gcm_update_payload_blocks()
Removes the __weak attribute from internal_aes_gcm_update_payload_blocks() now that both AArch32 and AArch64 have an optimized replacement.
The previous __weak internal_aes_gcm_update_payload_blocks() is now moved into core/crypto/aes-gcm-sw.c with its helper functions.
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 76dd08ed | 30-Mar-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: optimize AArch64 AES-GCM routines
Optimize handling of the last odd AES-GCM block by reusing a function recently added to boost AArch32 performance, resulting in a small gain in performance and fewer lines of code.
With this patch together with the recent changes the throughput of AArch64 AES-GCM has increased from around 400MiB/s to 470MiB/s with blocks of 4096 bytes.
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 9cd2e73b | 30-Mar-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: optimize AArch32 AES-GCM routines
In AArch32 there are not enough SIMD registers to make a fused GHASH and AES-CTR assembly function. But we can do better than using the default implementation. By carefully using the GHASH and AES primitive assembly functions there's some gain in performance.
Before this patch throughput was around 12MiB/s to now a bit more than 110MiB/s with blocks of 4096 bytes.
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 7756183f | 30-Mar-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: add ce_aes_xor_block()
Adds ce_aes_xor_block() which xors two memory blocks of size TEE_AES_BLOCK_SIZE and saves the result back into memory. The operations are done with SIMD instructions so the memory blocks may be unaligned, but VFP must be enabled with thread_kernel_enable_vfp().
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
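A portable reference for the operation ce_aes_xor_block() performs; the real routine does the same with SIMD loads and stores, which is why it tolerates unaligned blocks yet requires VFP to be enabled. The function name and prototype below are illustrative, not the real interface:

```c
#include <stdint.h>
#include <stddef.h>

#define TEE_AES_BLOCK_SIZE 16

/* Plain-C equivalent of the SIMD block xor: out = in0 ^ in1,
 * one AES block (16 bytes) at a time. */
void aes_xor_block_ref(void *out, const void *in0, const void *in1)
{
	uint8_t *o = out;
	const uint8_t *a = in0;
	const uint8_t *b = in1;

	for (size_t i = 0; i < TEE_AES_BLOCK_SIZE; i++)
		o[i] = a[i] ^ b[i];
}
```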
|
| 1df59751 | 30-Mar-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: crypto: remove internal_aes_gcm_expand_enc_key()
Removes internal_aes_gcm_expand_enc_key() which is replaced by crypto_aes_expand_enc_key().
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 8a15c688 | 30-Mar-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: update AArch64 GHASH acceleration routines
Update AArch64 GHASH acceleration routines for improved performance.
The core parts of assembly and wrapper updates are written by Ard Biesheuvel <ard.biesheuvel@linaro.org>, see [1].
Link: [1] https://github.com/torvalds/linux/commit/22240df7ac6d76a271197571a7be45addef2ba15 Acked-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 8f848cdb | 30-Mar-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: crypto: add internal_aes_gcm_{en,de}crypt_block()
Adds internal_aes_gcm_encrypt_block() and internal_aes_gcm_decrypt_block() to encrypt or decrypt a well aligned AES-GCM payload block.
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 4f6d7160 | 30-Mar-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: crypto: remove internal_aes_gcm_encrypt_block()
Replaces calls to internal_aes_gcm_encrypt_block() with calls to crypto_aes_enc_block(). Removes internal_aes_gcm_encrypt_block().
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| d7fd8f87 | 30-Mar-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: crypto: unaligned aes-gcm acceleration
The Arm CE code supports working with unaligned data. To make full use of that, the generic __weak function internal_aes_gcm_update_payload_block_aligned() is replaced with internal_aes_gcm_update_payload_blocks(), which supports working with unaligned buffers.
Acked-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 6898b2ca | 01-Apr-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: arm: pmull_ghash_update_*() accepts unaligned payload
Updates the relevant ld1 and vld1 instructions for AArch64 and AArch32 respectively to allow unaligned src and head parameters.
Reviewed-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| b314df1f | 30-Mar-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: crypto: refactor aes-gcm implementation
Adds struct internal_ghash_key to represent the GHASH key instead of some loose fields inside struct internal_aes_gcm_state.
Software or CE configuration is done explicitly in core/crypto/aes-gcm-sw.c, dropping the __weak attribute for all functions but internal_aes_gcm_update_payload_block_aligned(), which is only overridden with CFG_CRYPTO_WITH_CE=y on AArch64.
Content of aes-gcm-private.h is moved into internal_aes-gcm.h.
internal_aes_gcm_gfmul() is made available for generic GF multiplication.
The CE versions of internal_aes_gcm_expand_enc_key() and internal_aes_gcm_encrypt_block() are now only wrappers around crypto_accel_aes_expand_keys() and crypto_accel_aes_ecb_enc().
Acked-by: Jerome Forissier <jerome@forissier.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 5b2aaa11 | 30-Mar-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
libutee: optimize memcpy() for speed
Overrides the -Os flag with -O2 in order to compile a speed optimized version of memcpy().
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
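In OP-TEE's build system, such a per-file override is typically expressed in a sub.mk fragment. The variable names below are assumed from the mk conventions (see mk/compile.mk), not quoted from the commit:

```make
# Hypothetical sub.mk sketch: drop the global -Os for memcpy.c only,
# then compile that one file for speed instead of size.
srcs-y += memcpy.c
cflags-remove-memcpy.c-y += -Os
cflags-memcpy.c-y += -O2
```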
|
| 01ffca57 | 30-Mar-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
ldelf: ldelf.ld.S: make sure _ldelf_start() is first
Makes sure that _ldelf_start() which is the entry point of ldelf is first in the binary. _ldelf_start() depends on this to perform relocation.
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 7395539f | 30-Mar-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: fobj.c: use crypto_aes_expand_enc_key()
fobj_generate_authenc_key() now uses crypto_aes_expand_enc_key() to prepare the key used for paging.
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 2fc5dc95 | 30-Mar-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: mbedtls: use SHA-256 crypto accelerated routines
Uses the recently provided accelerated SHA-256 routine.
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 734545da | 30-Mar-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: mbedtls: use SHA-1 crypto accelerated routines
Uses the recently provided accelerated SHA-1 routine.
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 10b90791 | 30-Mar-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: mbedtls: use AES crypto accelerated routines
Uses the recently provided accelerated AES crypto routines in mbedtls.
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| a828d70f | 02-Apr-2020 |
Jens Wiklander <jens.wiklander@linaro.org> |
core: ltc: use SHA-256 crypto accelerated function
Uses the recently provided accelerated SHA-256 function in LTC.
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|