| b1f3797d | 06-Feb-2019 |
Volodymyr Babchuk <vlad.babchuk@gmail.com> |
bget: fix nex_ pool building with disabled stats
gen_malloc_reset_stats() and gen_malloc_get_stats() are only available when BufStats is defined.
Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
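For illustration, a minimal sketch of the kind of guard implied above: the nex_ stats wrappers only exist when BufStats is defined, so builds without stats never reference gen_malloc_reset_stats() or gen_malloc_get_stats(). The wrapper names and the nex_malloc_ctx context are assumptions, not the actual bget_malloc.c code.

#ifdef BufStats
/* Only compiled when bget keeps buffer statistics. */
void nex_malloc_reset_stats(void)
{
	gen_malloc_reset_stats(&nex_malloc_ctx);
}

void nex_malloc_get_stats(struct malloc_stats *stats)
{
	gen_malloc_get_stats(&nex_malloc_ctx, stats);
}
#endif /* BufStats */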
|
| 8cd8a629 | 06-Feb-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
Remove memalign()
Removes the unused memalign() function. Usage of this function will cause severe fragmentation of the heap.
Another problem is with the implementation, which is added on top of bget while still depending heavily on internals of bget. The implementation was somewhat buggy since it could sometimes cause:
E/TC:0 0 assertion 'bn->prevfree == 0' failed at lib/libutils/isoc/bget_malloc.c:423 <create_free_block>
E/TC:0 0 Panic at core/kernel/assert.c:28 <_assert_break>
Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| b2dd8747 | 05-Feb-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
Fix alignment of data for mempool_alloc_pool()
Prior to this patch, _TEE_MathAPI_Init() in lib/libutee/tee_api_arith_mpi.c supplied a data buffer which was only 4-byte aligned, while mempool_alloc_pool() requires the alignment of long. This works in 32-bit mode, but could lead to alignment problems in 64-bit mode. The same problem can happen with lib/libutee/tee_api_arith_mpa.c, but so far it has remained hidden.
Incorrect alignment can result in errors like:
E/TA: assertion '!((vaddr_t)data & (POOL_ALIGN - 1))' failed at lib/libutils/ext/mempool.c:134 in mempool_alloc_pool()
This fix introduces MEMPOOL_ALIGN, which specifies the required alignment of data supplied to mempool_alloc_pool().
Fixes: 062e3d01c039 ("ta: switch to mbedtls for bignum") Reviewed-by: Joakim Bech <joakim.bech@linaro.org> Tested-by: Joakim Bech <joakim.bech@linaro.org> (QEMU v8) Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
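As a minimal, self-contained sketch of the requirement described above (POOL_SIZE and the exact MEMPOOL_ALIGN definition are assumptions, not the actual libutee/libutils code):

#include <stdalign.h>
#include <stdint.h>

#define MEMPOOL_ALIGN	alignof(long)	/* assumed: at least the alignment of long */
#define POOL_SIZE	4096		/* arbitrary example size */

/* Backing storage that satisfies mempool_alloc_pool(): aligned to
 * MEMPOOL_ALIGN rather than to just 4 bytes. */
static alignas(MEMPOOL_ALIGN) uint8_t pool_data[POOL_SIZE];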
|
| 1131d3c5 | 18-Dec-2018 |
Volodymyr Babchuk <vlad.babchuk@gmail.com> |
libutils: add nex_strdup() function
This is the same as strdup() but it uses nex_malloc(), so it can be used in the nexus part of OP-TEE.
Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
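A sketch of what such a helper typically looks like, given the description above (illustrative only; the real libutils implementation may differ in details):

#include <string.h>

void *nex_malloc(size_t size);	/* nexus heap allocator from libutils */

char *nex_strdup(const char *s)
{
	size_t len = strlen(s) + 1;
	char *p = nex_malloc(len);	/* nexus heap instead of malloc() */

	if (p)
		memcpy(p, s, len);
	return p;
}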
|
| c211d0a4 | 06-Feb-2018 |
Volodymyr Babchuk <vlad.babchuk@gmail.com> |
virt: tag variables with __nex_data and __nex_bss
Variables that are needed by the OP-TEE nexus will be moved to nexus memory.
Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 15216d4d | 06-Feb-2018 |
Volodymyr Babchuk <vlad.babchuk@gmail.com> |
virt: add nexus memory area
This patch is the first in a series of patches that split OP-TEE RW memory into two regions: nexus memory and TEE memory. Nexus memory will always be mapped and will be used to store all data that is vital for the OP-TEE core and is not bound to virtual guests.
TEE memory holds data specific to a certain guest. There will be a TEE memory bank for every guest, and it will be mapped into the OP-TEE address space only during calls from that guest.
This patch adds the nexus memory area and moves the stacks into it. It also provides the __nex_bss and __nex_data macros, so one can easily set the right section for a variable.
Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
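Usage sketch of the two macros (the variables are made-up examples, not code from the patch): data that must stay available across guest switches is tagged so it ends up in the nexus sections rather than in per-guest TEE memory.

/* Zero-initialized nexus data goes to the nexus BSS, initialized data
 * to the nexus data section (assumed behaviour of the macros). */
static unsigned int nexus_event_count __nex_bss;
static int nexus_log_level __nex_data = 2;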
|
| 386fc264 | 05-Feb-2018 |
Volodymyr Babchuk <vlad.babchuk@gmail.com> |
bget_malloc: add nex_malloc pool
If virtualization is enabled, this pool will be used to allocate memory for OP-TEE nexus needs. Without virtualization, the generic malloc pool will be used.
Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
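A conceptual sketch of the fallback described above (not the actual bget_malloc.c code; CFG_VIRTUALIZATION is assumed to be the configuration switch gating the feature):

#ifdef CFG_VIRTUALIZATION
void *nex_malloc(size_t size);		/* allocates from the nexus pool */
void nex_free(void *ptr);
#else
/* No separate nexus pool: fall back to the generic malloc pool. */
#define nex_malloc(size)	malloc(size)
#define nex_free(ptr)		free(ptr)
#endif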
|
| 741b437f | 05-Feb-2018 |
Volodymyr Babchuk <vlad.babchuk@gmail.com> |
bget_malloc: hold all malloc state in malloc_ctx structure
This patch moves all bget_malloc.c state into the malloc_ctx structure. malloc_lock.c is removed because the spinlock is now also stored in malloc_ctx.
Multiple malloc pools can now be used.
Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 7539e8c3 | 31-Jan-2019 |
PeiKan Tsai <mark1990301@gmail.com> |
bget: Check for size overflow
Check for size overflow to avoid size <= 0, which may be caused by the calculations "size += sizeof(struct bhead)" and "size = (size + (SizeQuant - 1)) & (~(SizeQuant - 1))".
Signed-off-by: Peikan Tsai <mark1990301@gmail.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
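A self-contained sketch of this kind of guard (the helper name and parameters are invented for illustration): reject a request when adding the header or rounding up to the allocation granularity would wrap around.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Returns true when req plus a header of hdr bytes, rounded up to a
 * multiple of quant, still fits in size_t without wrapping. */
static bool request_size_ok(size_t req, size_t hdr, size_t quant)
{
	if (req > SIZE_MAX - hdr)
		return false;
	req += hdr;
	if (req > SIZE_MAX - (quant - 1))
		return false;
	return true;
}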
|
| b6bc49ca | 17-Jan-2019 |
Sumit Garg <sumit.garg@linaro.org> |
trace: fix core id print if in non-atomic context
Make "?" print repetitive equivalent to number of digits needed to display core id rather than extra spaces as it causes symbolize.py script parsing failure for call stack addresses in case number of cores is greater than 10.
Also change symbolize.py to detect repetitive "?".
Signed-off-by: Sumit Garg <sumit.garg@linaro.org> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
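An illustrative, standalone take on the idea (not the actual trace.c code): the unknown core id is printed as one '?' per digit of the largest possible core id, so the field keeps the width a real id would have.

#include <stdio.h>

static void print_unknown_core_id(int nb_cores)
{
	int digits = 1;
	int n = nb_cores - 1;	/* highest possible core id */

	while (n >= 10) {
		n /= 10;
		digits++;
	}
	while (digits--)
		putchar('?');
}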
|
| 60b39904 | 16-Jan-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
mempool: fix race in get_pool()
Fixes a race in get_pool() which could leave the pool with zero references but still owned by the last thread using the pool.
Some performance numbers on HiKey with the default configuration:
github/master (edbb89f, before this commit):
4006 real 1m 41.11s
4007 real 1m 14.51s
4008 real 0m 0.13s
4009 real 1m 5.68s
Revert "mempool: optimize reference counting", before this commit:
4006 real 3m 27.78s
4007 real 0m 50.03s
4008 real 0m 0.13s
4009 real 2m 24.07s
With this commit, two runs:
4006 real 1m 37.51s
4007 real 0m 56.67s
4008 real 0m 0.09s
4009 real 1m 3.18s

4006 real 1m 37.61s
4007 real 0m 35.32s
4008 real 0m 0.13s
4009 real 1m 3.15s
Numbers are gathered with this script:
for a in 4006 4007 4008 4009 ; do \
  echo -n $a " " >> time.txt ;\
  time -o time.txt.tmp xtest -l 15 $a || break ;\
  grep real time.txt.tmp >> time.txt
done
cat time.txt
Reviewed-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 91334787 | 16-Jan-2019 |
Jens Wiklander <jens.wiklander@linaro.org> |
atomic.h: add atomic_{load,store}_int()
Adds atomic_load_int() and atomic_store_int().
Reviewed-by: Volodymyr Babchuk <vlad.babchuk@gmail.com> Acked-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
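A sketch of the kind of wrappers this adds, assuming the usual GCC/Clang atomic builtins (the memory ordering here is a guess, not taken from the actual atomic.h):

static inline int atomic_load_int(int *p)
{
	return __atomic_load_n(p, __ATOMIC_RELAXED);
}

static inline void atomic_store_int(int *p, int val)
{
	__atomic_store_n(p, val, __ATOMIC_RELAXED);
}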
|
| e7d51f42 | 12-Nov-2018 |
Jens Wiklander <jens.wiklander@linaro.org> |
mempool: add mempool_calloc()
Acked-by: Etienne Carriere <etienne.carriere@linaro.org> Reviewed-by: Joakim Bech <joakim.bech@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| b31756b3 | 15-Nov-2018 |
Jerome Forissier <jerome.forissier@linaro.org> |
lib.mk: centralize profiling flag (-pg)
Code cleanup, no functional change. This commit avoids the duplication of the -pg flag in the library makefiles.
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| b69b86b6 | 08-Nov-2018 |
Jens Wiklander <jens.wiklander@linaro.org> |
mempool: report max memory usage
Adds CFG_MEMPOOL_REPORT_LAST_OFFSET which, if set to y, causes mempool to report each time the maximum amount of memory used has increased. This helps to determine the required size of a mempool.
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
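A self-contained illustration of the reporting idea (the struct fields and function are made up, not the mempool.c internals): whenever current usage exceeds the recorded high-water mark, the new maximum is reported.

#include <stdio.h>
#include <stddef.h>

struct pool_stats {
	size_t last_offset;	/* current usage */
	size_t max_offset;	/* highest usage reported so far */
};

static void report_if_new_max(struct pool_stats *s)
{
	if (s->last_offset > s->max_offset) {
		s->max_offset = s->last_offset;
		printf("mempool: max usage now %zu bytes\n", s->max_offset);
	}
}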
|
| d4f909c0 | 08-Nov-2018 |
Jens Wiklander <jens.wiklander@linaro.org> |
mempool: optimize reference counting
Optimizes reference counting in mempool by using refcount_inc() and refcount_dec() so that the mutex can be avoided in the quick case.
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
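A rough sketch of the lock-avoidance pattern (the pool fields and the surrounding logic are assumptions; refcount_inc()/refcount_dec() are the libutils helpers named above):

/* Quick case: if the pool is already in use, just bump the count
 * without touching the mutex. refcount_inc() is assumed to return
 * false when the count was zero. */
if (refcount_inc(&pool->refc))
	return pool;

/* Slow case: the count was zero, so take the mutex to claim the
 * pool before publishing a non-zero count. */
mutex_lock(&pool->mu);
/* ... claim/initialize the pool ... */
refcount_set(&pool->refc, 1);
mutex_unlock(&pool->mu);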
|
| b54b9a98 | 09-Nov-2018 |
Jens Wiklander <jens.wiklander@linaro.org> |
mempool: add out of memory message
Adds a helpful message when a memory allocation with mempool_alloc() fails. If this occurs, it's because the memory pool size isn't tuned properly with regard to the user of the pool.
Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
|
| 3f58e4ec | 05-Nov-2018 |
Ovidiu Mihalachi <ovidiu_mihalachi@mentor.com> |
trace levels: Redefine TRACE_MIN level to 0
The global `trace_level` session-wide indicator, which is set by `trace_set_level()` [1], could get a wrong value when the input `level` is 0, i.e. when the user wants all logs disabled by defining `CFG_TEE_TA_LOG_LEVEL=0` when building TA applications.
This inconsistency is caused by the `TRACE_MIN` low boundary being set to 1. According to [1], `trace_level` is set to `TRACE_MAX` (4) whenever the input level is smaller than `TRACE_MIN` or larger than `TRACE_MAX`. So when the desired log level is 0, `trace_level` ends up set to `TRACE_MAX`, which causes a lot of flow-level log information to be dumped by the trace functions/macros that use the `trace_printf()` primitive.
This patch sets `TRACE_MIN` to 0 in order to ensure a proper trace level setting and to completely disable all logs when `CFG_TEE_TA_LOG_LEVEL=0`.
[1]
void trace_set_level(int level)
{
	if (((int)level >= TRACE_MIN) && (level <= TRACE_MAX))
		trace_level = level;
	else
		trace_level = TRACE_MAX;
}
Acked-by: Christoph Gellner <cgellner@de.adit-jv.com> Signed-off-by: Ovidiu Mihalachi <ovidiu_mihalachi@mentor.com> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org> Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
|
| 7445d9ac | 13-Nov-2018 |
Jerome Forissier <jerome.forissier@linaro.org> |
Move __early_ta from <compiler.h> to <kernel/early_ta.h>
The __early_ta macro is used only in C files generated by scripts/ta_bin_to_c.py. There is no reason to have it defined in a widely used header like <compiler.h>.
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
|
| fd118772 | 12-Nov-2018 |
Jerome Forissier <jerome.forissier@linaro.org> |
core: force read-only flag on .rodata.* sections
This commit fixes a warning with GCC 8.2 that did not occur with GCC 6.2:
$ make out/arm-plat-vexpress/core/arch/arm/kernel/user_ta.o
  CHK     out/arm-plat-vexpress/conf.mk
  CHK     out/arm-plat-vexpress/include/generated/conf.h
  CHK     out/arm-plat-vexpress/core/include/generated/asm-defines.h
  CC      out/arm-plat-vexpress/core/arch/arm/kernel/user_ta.o
{standard input}: Assembler messages:
{standard input}:4087: Warning: setting incorrect section attributes for .rodata.__unpaged
The message is printed as the assembler processes this code fragment, generated by the C compiler:
.section .rodata.__unpaged,"aw"
The older compiler (GCC 6.2) would generate instead:
.section .rodata.__unpaged,"a",%progbits
The problem with .rodata.__unpaged,"aw" is that the "w" (writeable) flag is not consistent with the section name (.rodata.*), which by convention is supposed to be read-only.
- The section name (".rodata.__unpaged") is given by our macro: __rodata_unpaged.
- The "w" flag is added by GCC, not sure why exactly. One reason [1] is when a relocatable binary is being generated and the structure contains relocatable data. But we are not explicitly asking for a relocatable binary, so this might as well be a bug or a counter-intuitive feature of the compiler.
Anyway, to avoid the warning, we need to fix the section flags. The section type (%progbits) is optional, it is deduced from the section name by default. %progbits indicates that the section contains data (i.e., is not empty).
Link: [1] https://gcc.gnu.org/ml/gcc/2004-05/msg01016.html Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (QEMU) Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey960) Acked-by: Jens Wiklander <jens.wiklander@linaro.org> Acked-by: Etienne Carriere <etienne.carriere@linaro.org>
|
| b38854bd | 09-Nov-2018 |
Bryan O'Donoghue <bryan.odonoghue@linaro.org> |
libutils: Import strtoul from newlib
This patch imports strtoul from newlib, which the latest version of libfdt depends on.
Some modification of the original source is required to do this, specifically:
This is an import of the newlib 1.19.0 version of strtoul dropping
- Headers and prototypes for re-entrancy
- Any reliance on errno
Signed-off-by: Bryan O'Donoghue <bryan.odonoghue@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
|
| 9fdd6c3c | 10-Nov-2018 |
Bryan O'Donoghue <bryan.odonoghue@linaro.org> |
libutils: isoc: implement isalpha(), isspace() and isupper()
This patch implements isalpha(), isspace() and isupper(), which are dependencies for a subsequent patch that brings in strtoul from newlib.
Signed-off-by: Bryan O'Donoghue <bryan.odonoghue@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
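Minimal ASCII-only sketches of the three helpers (the actual libutils implementations may differ):

int isupper(int c)
{
	return c >= 'A' && c <= 'Z';
}

int isalpha(int c)
{
	return isupper(c) || (c >= 'a' && c <= 'z');
}

int isspace(int c)
{
	return c == ' ' || c == '\t' || c == '\n' ||
	       c == '\v' || c == '\f' || c == '\r';
}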
|
| da1d55f3 | 09-Nov-2018 |
Bryan O'Donoghue <bryan.odonoghue@linaro.org> |
libutils: Import strrchr from newlib
libfdt 1.4.7 depends on strrchr; this patch imports it from newlib.
Signed-off-by: Bryan O'Donoghue <bryan.odonoghue@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org>
|
| 5810998e | 15-Oct-2018 |
Jerome Forissier <jerome.forissier@linaro.org> |
libutils: sys/queue.h: add STAILQ_FOREACH_SAFE()
Import macro STAILQ_FOREACH_SAFE from FreeBSD.
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Joakim Bech <joakim.bech@linaro.org> Reviewed-by: Etienne Carriere <etienne.carriere@linaro.org> Acked-by: Jens Wiklander <jens.wiklander@linaro.org>
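A typical use of the imported macro, safe removal while iterating (the element type and list are made-up examples):

#include <stdlib.h>
#include <sys/queue.h>

struct item {
	int val;
	STAILQ_ENTRY(item) link;
};

STAILQ_HEAD(item_list, item);

static void free_all(struct item_list *list)
{
	struct item *it = NULL;
	struct item *next = NULL;

	/* 'next' is saved before 'it' may be freed, which is what makes
	 * removal during traversal safe. */
	STAILQ_FOREACH_SAFE(it, list, link, next) {
		STAILQ_REMOVE(list, it, item, link);
		free(it);
	}
}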
|
| bde8a250 | 02-Oct-2018 |
Joakim Bech <joakim.bech@linaro.org> |
pager: enable BestFit allocation when using the pager
When running xtest 6018 we have seen panics caused by TEE_ERROR_OUT_OF_MEMORY errors when trying to allocate memory (using malloc and calloc). The reason for this seems to be a fragmented heap when running with the pager enabled. By enabling the BestFit algorithm in bget we have seen much improved use of the heap, with a lot less fragmentation. We have been running xtest on QEMU v8 and HiKey 6220 and the performance difference seems to be negligible.
Fixes: https://github.com/OP-TEE/optee_os/issues/2580
Signed-off-by: Joakim Bech <joakim.bech@linaro.org> Tested-by: Joakim Bech <joakim.bech@linaro.org> (HiKey 6220, QEMU v8) Reviewed-by: Jerome Forissier <jerome.forissier@linaro.org> Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
|