History log of /optee_os/ (Results 4976 – 5000 of 8578)
Revision Date Author Comments
ee3e1c54 07-Apr-2020 Cedric Neveux <cedric.neveux@nxp.com>

core: utee_param_to_param(): set mobj to NULL when NULL memrefs of size 0

Set the tee_ta_param mobj to NULL if the user parameter is a NULL
memref of size 0.
A NULL mobj pointer also identifies the last parameter in the list.
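
The described behaviour can be sketched in plain C. The types and helper name below are simplified, hypothetical stand-ins for illustration, not the actual optee_os code:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the real struct mobj / parameter types. */
struct mobj { int unused; };

struct param_mem {
	struct mobj *mobj;
	size_t size;
	size_t offs;
};

/*
 * Hypothetical helper mirroring the described behaviour: a NULL
 * memref of size 0 yields mobj == NULL, which downstream code can
 * also use to spot the end of the parameter list.
 */
static void set_param_memref(struct param_mem *mem, const void *buf,
			     size_t size, struct mobj *backing)
{
	if (!buf && !size) {
		mem->mobj = NULL;	/* NULL memref of size 0 */
		mem->size = 0;
		mem->offs = 0;
		return;
	}
	mem->mobj = backing;
	mem->size = size;
	mem->offs = 0;
}
```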

Fixes: 9d2e798360b5 ("core: TEE capability for null sized memrefs support")

Signed-off-by: Cedric Neveux <cedric.neveux@nxp.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Tested-by: Jerome Forissier <jerome@forissier.org> (HiKey960)
Tested-by: Etienne Carriere <etienne.carriere@linaro.org>

2288b071 06-Apr-2020 Jerome Forissier <jerome@forissier.org>

core: lockdep: introduce CFG_LOCKDEP_RECORD_STACK

The lockdep algorithm uses quite a bit of heap memory to record the
call stacks. This commit adds a configuration flag so that this may be
turned off. With CFG_LOCKDEP_RECORD_STACK=n, deadlock detection still
works, but the diagnostic messages naturally contain no call stacks.

Signed-off-by: Jerome Forissier <jerome@forissier.org>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>

80f47278 30-Mar-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: drop __weak from internal_aes_gcm_update_payload_blocks()

Removes the __weak attribute from internal_aes_gcm_update_payload_blocks()
now that both AArch32 and AArch64 have an optimized replacement.

The previous __weak internal_aes_gcm_update_payload_blocks() is now
moved into core/crypto/aes-gcm-sw.c with its helper functions.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

76dd08ed 30-Mar-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: optimize AArch64 AES-GCM routines

Optimize handling of the last odd AES-GCM block by reusing a function
recently added to boost AArch32 performance. This results in a small
performance gain and fewer lines of code.

With this patch, together with the recent changes, the throughput of
AArch64 AES-GCM has increased from around 400 MiB/s to 470 MiB/s with
blocks of 4096 bytes.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

9cd2e73b 30-Mar-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: optimize AArch32 AES-GCM routines

In AArch32 there are not enough SIMD registers to make a fused GHASH and
AES-CTR assembly function. But we can do better than using the default
implementation. By carefully using the GHASH and AES primitive assembly
functions there's some gain in performance.

Before this patch the throughput was around 12 MiB/s; it is now a bit
more than 110 MiB/s with blocks of 4096 bytes.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

7756183f 30-Mar-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: add ce_aes_xor_block()

Adds ce_aes_xor_block() which xors two memory blocks of size
TEE_AES_BLOCK_SIZE and saves the result back into memory. The operations
are done with SIMD instructions so the memory blocks may be unaligned,
but VFP must be enabled with thread_kernel_enable_vfp().
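
As a rough illustration, a portable C version of the same operation might look like this. This is a sketch only; the real routine does the same with SIMD instructions, which is also why it requires VFP to be enabled:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TEE_AES_BLOCK_SIZE 16

/*
 * Plain-C sketch of what ce_aes_xor_block() computes:
 * out = a ^ b, byte by byte, over one AES block.
 * Like the SIMD version, this has no alignment requirement.
 */
static void aes_xor_block_sketch(void *out, const void *a, const void *b)
{
	uint8_t *o = out;
	const uint8_t *x = a;
	const uint8_t *y = b;
	size_t n;

	for (n = 0; n < TEE_AES_BLOCK_SIZE; n++)
		o[n] = x[n] ^ y[n];
}
```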

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

1df59751 30-Mar-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: crypto: remove internal_aes_gcm_expand_enc_key()

Removes internal_aes_gcm_expand_enc_key() which is replaced by
crypto_aes_expand_enc_key().

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

8a15c688 30-Mar-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: update AArch64 GHASH acceleration routines

Update AArch64 GHASH acceleration routines for improved performance.

The core parts of assembly and wrapper updates are written by
Ard Biesheuvel <ard.biesheuvel@linaro.org>, see [1].

Link: [1] https://github.com/torvalds/linux/commit/22240df7ac6d76a271197571a7be45addef2ba15
Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

8f848cdb 30-Mar-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: crypto: add internal_aes_gcm_{en,de}crypt_block()

Adds internal_aes_gcm_encrypt_block() and
internal_aes_gcm_decrypt_block() to encrypt or decrypt a well aligned
AES-GCM payload block.

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

4f6d7160 30-Mar-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: crypto: remove internal_aes_gcm_encrypt_block()

Replaces calls to internal_aes_gcm_encrypt_block() with calls to
crypto_aes_enc_block(). Removes internal_aes_gcm_encrypt_block().

Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

d7fd8f87 30-Mar-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: crypto: unaligned aes-gcm acceleration

The Arm CE code supports working with unaligned data. To make full use
of that, the generic __weak function
internal_aes_gcm_update_payload_block_aligned() is replaced with
internal_aes_gcm_update_payload_blocks(), which supports working with
unaligned buffers.

Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

6898b2ca 01-Apr-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: arm: pmull_ghash_update_*() accepts unaligned payload

Updates the relevant ld1 and vld1 instructions for AArch64 and AArch32
respectively to allow unaligned src and head parameters.

Reviewed-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

b314df1f 30-Mar-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: crypto: refactor aes-gcm implementation

Adds struct internal_ghash_key to represent the GHASH key instead of
some loose fields inside struct internal_aes_gcm_state.

Software or CE configuration is done explicitly in
core/crypto/aes-gcm-sw.c, dropping the __weak attribute for all
functions but internal_aes_gcm_update_payload_block_aligned(), which
is only overridden with CFG_CRYPTO_WITH_CE=y on AArch64.

Content of aes-gcm-private.h is moved into internal_aes-gcm.h.

internal_aes_gcm_gfmul() is made available for generic GF
multiplication.
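
For reference, generic multiplication in GF(2^128) using the bit-reflected GHASH convention (NIST SP 800-38D, Algorithm 1) can be sketched as below. This illustrates the operation such a function exposes; it is not the actual internal_aes_gcm_gfmul() code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/*
 * GHASH-style multiplication in GF(2^128): z = x * y, with bits
 * ordered MSB-first within each byte and the field reduced by the
 * polynomial represented as R = 0xe1 || 0^120.
 */
static void gfmul128_sketch(const uint8_t x[16], const uint8_t y[16],
			    uint8_t z[16])
{
	uint8_t v[16];
	uint8_t acc[16] = { 0 };
	int i, j;

	memcpy(v, y, sizeof(v));
	for (i = 0; i < 128; i++) {
		/* If bit i of x is set (MSB-first), accumulate v */
		if (x[i / 8] & (0x80 >> (i % 8)))
			for (j = 0; j < 16; j++)
				acc[j] ^= v[j];
		/* v = v >> 1 in GHASH bit order, reduce if a bit fell off */
		int lsb = v[15] & 1;
		for (j = 15; j > 0; j--)
			v[j] = (uint8_t)((v[j] >> 1) | (v[j - 1] << 7));
		v[0] >>= 1;
		if (lsb)
			v[0] ^= 0xe1;
	}
	memcpy(z, acc, 16);
}
```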

The CE versions of internal_aes_gcm_expand_enc_key() and
internal_aes_gcm_encrypt_block() are now only wrappers around
crypto_accel_aes_expand_keys() and crypto_accel_aes_ecb_enc().

Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

5b2aaa11 30-Mar-2020 Jens Wiklander <jens.wiklander@linaro.org>

libutee: optimize memcpy() for speed

Overrides the -Os flag with -O2 in order to compile a speed optimized
version of memcpy().
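
In OP-TEE's make-based build system, such a per-file override could look roughly like the following sub.mk fragment. The variable names are an assumption for illustration, not quoted from the commit:

```make
# Hypothetical libutee sub.mk fragment: build memcpy.c with -O2
# instead of the size-optimizing -Os used for the rest of the library.
srcs-y += memcpy.c
cflags-remove-memcpy.c-y += -Os
cflags-memcpy.c-y += -O2
```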

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

01ffca57 30-Mar-2020 Jens Wiklander <jens.wiklander@linaro.org>

ldelf: ldelf.ld.S: make sure _ldelf_start() is first

Makes sure that _ldelf_start(), which is the entry point of ldelf, is
first in the binary. _ldelf_start() depends on this to perform
relocation.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

7395539f 30-Mar-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: fobj.c: use crypto_aes_expand_enc_key()

fobj_generate_authenc_key() now uses crypto_aes_expand_enc_key() to
prepare the key used for paging.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

2fc5dc95 30-Mar-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: mbedtls: use SHA-256 crypto accelerated routines

Uses the recently provided accelerated SHA-256 routine.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

734545da 30-Mar-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: mbedtls: use SHA-1 crypto accelerated routines

Uses the recently provided accelerated SHA-1 routine.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

10b90791 30-Mar-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: mbedtls: use AES crypto accelerated routines

Uses the recently provided accelerated AES crypto routines in mbedtls.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

a828d70f 02-Apr-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: ltc: use SHA-256 crypto accelerated function

Uses the recently provided accelerated SHA-256 function in LTC.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

2b49b295 02-Apr-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: ltc: use SHA1 crypto accelerated function

Uses the recently provided accelerated SHA1 function in LTC.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

f9429266 30-Mar-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: ltc: use AES crypto accelerated routines

Uses the recently provided accelerated AES crypto routines in LTC.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

75fea8a9 30-Mar-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: add accelerated SHA-256 routines

Adds an Arm CE accelerated SHA-256 function to core/arch/arm/crypto.
The code originates from the previous implementation inside the LTC
library. With this, multiple crypto libraries can share the function.

The old CFG_CRYPTO_SHA256_ARM64_CE and CFG_CRYPTO_SHA256_ARM32_CE are
replaced by CFG_CRYPTO_SHA256_ARM_CE.

CFG_CORE_CRYPTO_SHA256_ACCEL is introduced to indicate that some kind
of SHA-256 acceleration is available, not necessarily based on Arm CE.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

858d5279 30-Mar-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: add accelerated SHA1 routines

Adds an Arm CE accelerated SHA1 function to core/arch/arm/crypto. The
code originates from the previous implementation inside the LTC
library. With this, multiple crypto libraries can share the function.

The old CFG_CRYPTO_SHA1_ARM64_CE and CFG_CRYPTO_SHA1_ARM32_CE are
replaced by CFG_CRYPTO_SHA1_ARM_CE.

CFG_CORE_CRYPTO_SHA1_ACCEL is introduced to indicate that some kind of
SHA-1 acceleration is available, not necessarily based on Arm CE.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

06d2e416 30-Mar-2020 Jens Wiklander <jens.wiklander@linaro.org>

core: add accelerated AES routines

Adds Arm CE accelerated AES routines to core/arch/arm/crypto. The code
originates from the previous implementation inside the LTC library.
With this, multiple crypto libraries can share these routines.

A new header file, <crypto/crypto_accel.h>, is added with primitive
functions implementing crypto accelerated ciphers.

The old CFG_CRYPTO_AES_ARM64_CE and CFG_CRYPTO_AES_ARM32_CE are
replaced by CFG_CRYPTO_AES_ARM_CE.

CFG_CORE_CRYPTO_AES_ACCEL is introduced to indicate that some kind of
AES acceleration is available, not necessarily based on Arm CE.

Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
