10298621 | 23-Sep-2025 | Rayan Hu <rayan.hu@mediatek.com>
core: crypto: fix AES-GCM in-place decryption order
Fix AES-GCM in-place decryption to ensure GHASH always uses the original ciphertext. Previously, plaintext could overwrite ciphertext before GHASH, causing authentication failures. Now GHASH is processed before decryption, so in-place and non in-place decryption both work correctly without extra buffering or conditional checks.
Tested with both in-place and non in-place decryption; all cases now produce correct authentication tags.
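As an illustration of the corrected ordering (a minimal sketch with made-up helper names, not the actual OP-TEE code), the point is that the authentication state absorbs the original ciphertext before the plaintext is written, so dst == src stays safe:

    #include <stddef.h>
    #include <stdint.h>

    #define BLK 16

    /* Toy model of one decrypted block: XOR the ciphertext into the GHASH
     * accumulator first (the GF(2^128) multiply that follows is omitted),
     * then produce the plaintext, possibly over the same buffer. */
    static void absorb_ciphertext(uint8_t hash_state[BLK], const uint8_t c[BLK])
    {
        for (size_t i = 0; i < BLK; i++)
            hash_state[i] ^= c[i];
        /* ... multiply hash_state by the hash subkey here ... */
    }

    static void decrypt_block(uint8_t hash_state[BLK],
                              const uint8_t keystream[BLK],
                              const uint8_t *src, uint8_t *dst)
    {
        absorb_ciphertext(hash_state, src);   /* GHASH over ciphertext first */
        for (size_t i = 0; i < BLK; i++)      /* then the CTR decryption */
            dst[i] = src[i] ^ keystream[i];
    }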
Fixes: 1fca7e269b13 ("core: crypto: add new AES-GCM implementation")
Signed-off-by: Rayan Hu <rayan.hu@mediatek.com>
Reviewed-by: Menson Chen <menson.chen@mediatek.com>
Reviewed-by: ChingMing Chen <chingming.chen@mediatek.com>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>

76dd08ed | 30-Mar-2020 | Jens Wiklander <jens.wiklander@linaro.org>
core: optimize AArch64 AES-GCM routines
Optimize handling of the last odd AES-GCM block by reusing a function recently added to boost AArch32 performance. This results in a small performance gain and fewer lines of code.
With this patch, together with the recent changes, the throughput of AArch64 AES-GCM has increased from around 400 MiB/s to 470 MiB/s with blocks of 4096 bytes.
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

9cd2e73b | 30-Mar-2020 | Jens Wiklander <jens.wiklander@linaro.org>
core: optimize AArch32 AES-GCM routines
In AArch32 there are not enough SIMD registers to make a fused GHASH and AES-CTR assembly function, but we can do better than the default implementation: by carefully combining the GHASH and AES primitive assembly functions there is some gain in performance.
Before this patch throughput was around 12 MiB/s; it is now a bit more than 110 MiB/s with blocks of 4096 bytes.
Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

1df59751 | 30-Mar-2020 | Jens Wiklander <jens.wiklander@linaro.org>
core: crypto: remove internal_aes_gcm_expand_enc_key()
Removes internal_aes_gcm_expand_enc_key() which is replaced by crypto_aes_expand_enc_key().
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

8a15c688 | 30-Mar-2020 | Jens Wiklander <jens.wiklander@linaro.org>
core: update AArch64 GHASH acceleration routines
Update AArch64 GHASH acceleration routines for improved performance.
The core parts of assembly and wrapper updates are written by Ard Biesheuvel <ard.biesheuvel@linaro.org>, see [1].
Link: [1] https://github.com/torvalds/linux/commit/22240df7ac6d76a271197571a7be45addef2ba15
Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

4f6d7160 | 30-Mar-2020 | Jens Wiklander <jens.wiklander@linaro.org>
core: crypto: remove internal_aes_gcm_encrypt_block()
Replaces calls to internal_aes_gcm_encrypt_block() with calls to crypto_aes_enc_block(). Removes internal_aes_gcm_encrypt_block().
Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

d7fd8f87 | 30-Mar-2020 | Jens Wiklander <jens.wiklander@linaro.org>
core: crypto: unaligned aes-gcm acceleration
The Arm CE code supports working with unaligned data. In order to make full use of that, the generic __weak function internal_aes_gcm_update_payload_block_aligned() is replaced with internal_aes_gcm_update_payload_blocks(), which supports working with unaligned buffers.
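As a side note on what supporting unaligned buffers typically means for the generic C path (an illustrative sketch, not the OP-TEE code), unaligned input is read through memcpy() so the compiler can emit whatever load the target allows:

    #include <stdint.h>
    #include <string.h>

    /* Portable unaligned 64-bit load: memcpy() is lowered to a plain load
     * on targets that permit unaligned access. */
    static uint64_t load_u64_unaligned(const void *p)
    {
        uint64_t v;

        memcpy(&v, p, sizeof(v));
        return v;
    }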
Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

b314df1f | 30-Mar-2020 | Jens Wiklander <jens.wiklander@linaro.org>
core: crypto: refactor aes-gcm implementation
Adds struct internal_ghash_key to represent the GHASH key instead of some loose fields inside struct internal_aes_gcm_state.
Software or CE configuration is done explicitly in core/crypto/aes-gcm-sw.c, dropping the __weak attribute for all functions but internal_aes_gcm_update_payload_block_aligned(), which is only overridden with CFG_CRYPTO_WITH_CE=y in AArch64.
Content of aes-gcm-private.h is moved into internal_aes-gcm.h.
internal_aes_gcm_gfmul() is made available for generic GF multiplication.
The CE versions of internal_aes_gcm_expand_enc_key() and internal_aes_gcm_encrypt_block() are now only wrappers around crypto_accel_aes_expand_keys() and crypto_accel_aes_ecb_enc().
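For reference, the generic GF(2^128) multiplication that a gfmul helper performs is the bit-by-bit algorithm from NIST SP 800-38D; the sketch below only illustrates the operation (the real internal_aes_gcm_gfmul() is expected to be a faster equivalent):

    #include <stdint.h>
    #include <string.h>

    /* Reference GHASH multiplication: z = x * y in GF(2^128) with the
     * polynomial x^128 + x^7 + x^2 + x + 1. Blocks are 16 bytes, with
     * bit 0 taken as the most significant bit of the first byte. */
    static void ghash_gfmul_ref(const uint8_t x[16], const uint8_t y[16],
                                uint8_t z[16])
    {
        uint8_t v[16];
        int i, j, carry;

        memset(z, 0, 16);
        memcpy(v, y, 16);

        for (i = 0; i < 128; i++) {
            if (x[i / 8] & (0x80u >> (i % 8)))
                for (j = 0; j < 16; j++)
                    z[j] ^= v[j];

            carry = v[15] & 1;          /* v = v >> 1, then reduce */
            for (j = 15; j > 0; j--)
                v[j] = (v[j] >> 1) | (uint8_t)(v[j - 1] << 7);
            v[0] >>= 1;
            if (carry)
                v[0] ^= 0xe1;           /* the reduction polynomial */
        }
    }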
Acked-by: Jerome Forissier <jerome@forissier.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

d1f66029 | 01-Mar-2018 | Edison Ai <edison.ai@arm.com>
core/crypto/aes-gcm-ce.c: Remove unused included header file
Remove tomcrypt.h from aes-gcm-ce.c, where it is unused.
Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
Signed-off-by: Edison Ai <edison.ai@arm.com>

6d0fd331 | 11-Jan-2018 | Jerome Forissier <jerome.forissier@linaro.org>
core: crypto: rename CFG_HWSUPP_PMULL to CFG_HWSUPP_PMULT_64
CFG_HWSUPP_PMULL is used to determine whether the CPU supports long polynomial multiplies of 64-bit values, which means:
- for AArch64: PMULL and PMULL2 with the 1Q arrangement specifier
- for AArch32: VMULL.P64
Otherwise, 8-bit polynomial multiplication is used instead.
Therefore, CFG_HWSUPP_PMULT_64 is a better name because it does not seem to imply AArch64 (no PMULL) and clearly states the 64-bit size.
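To make the capability concrete, the instruction being probed for is the 64x64 -> 128-bit carry-less multiply; a minimal intrinsics sketch (illustration only, the OP-TEE routines use hand-written assembly) only builds when the compiler targets a CPU with PMULL/VMULL.P64:

    #include <arm_neon.h>

    /* Carry-less multiply of two 64-bit polynomials into a 128-bit
     * product; the GF(2^128) reduction used by GHASH is not shown. */
    static inline poly128_t clmul_64x64(poly64_t a, poly64_t b)
    {
        return vmull_p64(a, b);
    }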
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>

fb7ef469 | 15-Dec-2017 | Jerome Forissier <jerome.forissier@linaro.org>
Reformat copyright/license header in files with an SPDX ID
Some files were committed with an SPDX license identifier before the rules were defined [1]. Reformat them accordingly.
[1] documentation/copyright_and_license_headers.rst
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Acked-by: Joakim Bech <joakim.bech@linaro.org>

54af8d67 | 21-Nov-2017 | Jens Wiklander <jens.wiklander@linaro.org>
core: crypto: AES-GCM: separate encryption key
Separates the AES (CTR) encryption key from the rest of the context to allow more efficient key handling.
Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

424cb386 | 21-Nov-2017 | Jens Wiklander <jens.wiklander@linaro.org>
core: arm64: crypto: AES-GCM: add internal key expansion
Adds internal encryption key expansion when internal AES-GCM uses AES crypto extensions. This avoids a dependency on the crypto library to use the same endian on the expanded encryption key.
Copies code from core/lib/libtomcrypt/src/ciphers/aes_armv8a_ce.c and aes_modes_armv8a_ce_a64.S and makes some small changes to make it fit in the new place.
Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

61b4cd9c | 21-Nov-2017 | Jens Wiklander <jens.wiklander@linaro.org>
core: crypto: AES-GCM: remove tomcrypt.h dependency
Removes tomcrypt.h dependency by replacing the "symmetric_key skey" field in struct internal_aes_gcm_ctx with a raw key. Replaces calls to the LTC functions aes_setup() and aes_ecb_encrypt() with calls to crypto_aes_expand_enc_key() and crypto_aes_enc_block() respectively.
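A rough sketch of the shape of the change (field names here are made up, not the actual struct layout): the context keeps the raw expanded key and the round count directly instead of libtomcrypt's symmetric_key:

    #include <stdint.h>

    /* Hypothetical illustration: enough room for an AES-256 key schedule
     * (4 * (14 + 1) = 60 32-bit words) plus the round count. */
    struct aes_gcm_enc_key_sketch {
        uint32_t data[60];
        unsigned int rounds;
    };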
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

f6cbe5da | 16-Nov-2017 | Jens Wiklander <jens.wiklander@linaro.org>
core: arm: crypto: fix AES-GCM counter increase
In pmull_gcm_encrypt() and pmull_gcm_decrypt() it was assumed that it's enough to only increase the least significant 64 bits of the counter fed to the block cipher. This can hold for 96-bit IVs, but not for IVs of any other length, as the number stored in the least significant 64 bits of the counter can't be easily predicted.
In this patch pmull_gcm_encrypt() and pmull_gcm_decrypt() are updated to increase the entire counter; at the same time the interface is changed to accept the counter in little-endian format instead.
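The substance of the fix can be shown in a few lines of C (a sketch only; the real change is in the assembly routines): a carry out of the low 64 bits must propagate into the high 64 bits of the counter block:

    #include <stdint.h>

    /* Increment the full 128-bit GCM counter; ctr[0] holds the most
     * significant half and ctr[1] the least significant half. */
    static void gcm_inc_counter(uint64_t ctr[2])
    {
        ctr[1]++;
        if (!ctr[1])    /* carry into the upper 64 bits */
            ctr[0]++;
    }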
Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Tested-by: Jens Wiklander <jens.wiklander@linaro.org> (QEMU, Hikey)
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

1fca7e26 | 16-Nov-2017 | Jens Wiklander <jens.wiklander@linaro.org>
core: crypto: add new AES-GCM implementation
Adds a new AES-GCM implementation optimized for hardware acceleration.
This implementation is enabled by default; to use the implementation in libTomCrypt instead, set CFG_CRYPTO_AES_GCM_FROM_CRYPTOLIB=y.
Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey960)
Acked-by: Jerome Forissier <jerome.forissier@linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>