
Searched full:jobs (Results 1 – 25 of 398) sorted by relevance


/OK3568_Linux_fs/kernel/drivers/gpu/arm/midgard/
mali_kbase_js_defs.h
55 /** Callback function run on all of a context's jobs registered with the Job
60 * @brief Maximum number of jobs that can be submitted to a job slot whilst
63 * This is important because GPU NULL jobs can complete whilst the IRQ handler
65 * jobs to be submitted inside the IRQ handler, which increases IRQ latency.
89 /** Attribute indicating a context that contains Compute jobs. That is,
90 * the context has jobs of type @ref BASE_JD_REQ_ONLY_COMPUTE
93 * both types of jobs.
97 /** Attribute indicating a context that contains Non-Compute jobs. That is,
98 * the context has some jobs that are \b not of type @ref
102 * both types of jobs.
[all …]
mali_kbase_hwaccess_jm.h
41 * Inspect the jobs in the slot ringbuffers and update state.
43 * This will cause jobs to be submitted to hardware if they are unblocked
168 * kbase_backend_reset() - The GPU is being reset. Cancel all jobs on the GPU
218 * kbase_backend_ctx_count_changed() - Number of contexts ready to submit jobs
237 * kbase_backend_slot_free() - Return the number of jobs that can be currently
242 * Return : Number of jobs that can be submitted.
258 * kbase_backend_jm_kill_jobs_from_kctx - Kill all jobs that are currently
262 * This is used in response to a page fault to remove all jobs from the faulting
268 * kbase_jm_wait_for_zero_jobs - Wait for context to have zero jobs running, and
291 * This function just soft-stops all the slots to ensure that as many jobs as
[all …]
mali_kbase_config_defaults.h
141 * Default minimum number of scheduling ticks before jobs are soft-stopped.
149 * Default minimum number of scheduling ticks before CL jobs are soft-stopped.
154 * Default minimum number of scheduling ticks before jobs are hard-stopped
160 * Default minimum number of scheduling ticks before CL jobs are hard-stopped.
165 * Default minimum number of scheduling ticks before jobs are hard-stopped
171 * Default timeout for some software jobs, after which the software event wait
172 * jobs will be cancelled.
196 * Default number of milliseconds given for other jobs on the GPU to be
204 * When a context has used up this amount of time across its jobs, it is
mali_kbase_js.c
613 /* The caller must de-register all jobs before calling this */ in kbasep_js_kctx_term()
731 * This function should be used when a context has been scheduled, but no jobs
761 * are no jobs remaining on the specified slot.
801 * This function should be used when a context has no jobs on the GPU, and no
802 * jobs remaining for the specified slot.
1274 * kbasep_js_release_result - Try running more jobs after releasing a context
1284 * This includes running more jobs when:
1321 * run more jobs than before */ in kbasep_js_run_jobs_after_ctx_and_atom_release()
1335 * This also starts more jobs running in the case of an ctx-attribute state
1404 * there are no jobs, in this case we have to handle the in kbasep_js_runpool_release_ctx_internal()
[all …]
mali_kbase_js.h
99 * It does not register any jobs owned by the struct kbase_context with the scheduler.
119 * It is a Programming Error to call this whilst there are still jobs
129 * - Update the numbers of jobs information
149 * It is a programming error to have more than U32_MAX jobs in flight at a time.
185 * Do not use this for removing jobs being killed by kbase_jd_cancel() - use
216 * should call kbase_js_sched_all() to try to run more jobs
278 * - If the context is not dying and has jobs, it gets re-added to the policy
282 * In addition, if the context is dying the jobs are killed asynchronously.
304 * out of jobs).
309 * - If the context is in the processing of dying (all the jobs are being
[all …]
mali_kbase_defs.h
95 …* actually being reset to give other contexts time for their jobs to be soft-stopped and removed f…
570 * 2. List of waiting soft jobs.
574 /* Used to keep track of all JIT free/alloc jobs in submission order
617 /** Tracks all job-dispatch jobs. This includes those not tracked by
618 * the scheduler: 'not ready to run' and 'dependency-only' jobs. */
621 /** Waitq that reflects whether there are no jobs (including SW-only
622 * dependency jobs). This is set when no jobs are present on the ctx,
623 * and clear when there are jobs.
625 * @note: Job Dispatcher knows about more jobs than the Job Scheduler:
626 * the Job Scheduler is unaware of jobs that are blocked on dependencies,
[all …]
/OK3568_Linux_fs/kernel/scripts/
jobserver-exec
6 # with PARALLELISM environment variable set, and releases the jobs back again.
15 jobs = b"" variable
37 jobs += slot
42 # If something went wrong, give back the jobs.
43 if len(jobs):
44 os.write(writer, jobs)
48 claim = len(jobs) + 1
63 if len(jobs):
64 os.write(writer, jobs)
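
The hits above are from scripts/jobserver-exec, a Python helper that claims job-slot tokens from the GNU make jobserver, runs a command with PARALLELISM set, and writes the tokens back when done. As a rough illustration of the same token protocol (not the kernel script itself), a minimal C sketch could look like the following; it assumes make exported the modern "--jobserver-auth=R,W" form in MAKEFLAGS and passed the descriptors to this process, and it keeps error handling minimal:

/* Minimal sketch of the GNU make jobserver token protocol that
 * scripts/jobserver-exec drives from Python; illustrative only.
 * Assumes "--jobserver-auth=R,W" in MAKEFLAGS (older makes spell it
 * --jobserver-fds) and that the descriptors were inherited. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *flags = getenv("MAKEFLAGS");
	const char *auth;
	int rfd, wfd;
	char token;

	if (!flags || !(auth = strstr(flags, "--jobserver-auth=")))
		return 1;
	if (sscanf(auth, "--jobserver-auth=%d,%d", &rfd, &wfd) != 2)
		return 1;

	/* Every byte read from rfd is one extra job slot on top of the
	 * implicit slot this process already owns. */
	if (read(rfd, &token, 1) != 1)
		return 1;

	/* ... run the extra parallel work here ... */

	/* Release the slot so other jobs can run. */
	if (write(wfd, &token, 1) != 1)
		return 1;
	return 0;
}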
/OK3568_Linux_fs/kernel/drivers/gpu/arm/bifrost/jm/
mali_kbase_js_defs.h
38 * jobs registered with the Job Scheduler
44 * @brief Maximum number of jobs that can be submitted to a job slot whilst
47 * This is important because GPU NULL jobs can complete whilst the IRQ handler
49 * jobs to be submitted inside the IRQ handler, which increases IRQ latency.
56 * Compute jobs.
58 * Non-Compute jobs.
82 * Attribute indicating a context that contains Compute jobs. That is,
83 * the context has jobs of type @ref BASE_JD_REQ_ONLY_COMPUTE
86 * both types of jobs.
89 * Attribute indicating a context that contains Non-Compute jobs. That is,
[all …]
mali_kbase_jm_js.h
88 * It does not register any jobs owned by the struct kbase_context with
112 * It is a Programming Error to call this whilst there are still jobs
171 * * Update the numbers of jobs information
191 * It is a programming error to have more than U32_MAX jobs in flight at a time.
235 * Do not use this for removing jobs being killed by kbase_jd_cancel() - use
271 * should call kbase_js_sched_all() to try to run more jobs and
288 * * If the context is not dying and has jobs, it gets re-added to the policy
292 * In addition, if the context is dying the jobs are killed asynchronously.
317 * run out of jobs).
322 * * If the context is in the processing of dying (all the jobs are being
[all …]
/OK3568_Linux_fs/kernel/Documentation/core-api/
padata.rst
9 Padata is a mechanism by which the kernel can farm jobs out to be done in
16 Padata also supports multithreaded jobs, splitting up the job evenly while load
19 Running Serialized Jobs
25 The first step in using padata to run serialized jobs is to set up a
26 padata_instance structure for overall control of how jobs are to be run::
39 jobs to be serialized independently. A padata_instance may have one or more
40 padata_shells associated with it, each allowing a separate series of jobs.
45 The CPUs used to run jobs can be changed in two ways, programatically with
52 parallel cpumask describes which processors will be used to execute jobs
116 true parallelism is achieved by submitting multiple jobs. parallel() runs with
[all …]
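
The padata.rst hits describe the serialized-jobs flow: set up a padata_instance for overall control, attach one or more padata_shells, then submit jobs whose parallel() callbacks may run concurrently while serial() completions come back in submission order. The sketch below is illustrative only and follows that flow; my_job, my_parallel, my_serial and my_submit are hypothetical names, and the instance/shell allocation helpers are left out because their signatures vary between kernel versions:

/* Illustrative padata job following the flow documented in padata.rst;
 * not code from this tree.  Assumes a padata_instance and padata_shell
 * were already set up (padata_alloc()/padata_alloc_shell(), whose exact
 * signatures differ across kernel versions). */
#include <linux/padata.h>

struct my_job {
	struct padata_priv padata;	/* embedded padata bookkeeping */
	/* ... job-specific state ... */
};

static void my_parallel(struct padata_priv *padata)
{
	/* May run concurrently with other jobs on the parallel cpumask. */
	/* ... heavy work ... */
	padata_do_serial(padata);	/* queue the job for in-order completion */
}

static void my_serial(struct padata_priv *padata)
{
	/* Runs strictly in submission order once the parallel part is done. */
}

static int my_submit(struct padata_shell *ps, struct my_job *job)
{
	int cb_cpu = 0;	/* ideally a CPU from the instance's serial cpumask */

	job->padata.parallel = my_parallel;
	job->padata.serial = my_serial;
	return padata_do_parallel(ps, &job->padata, &cb_cpu);
}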
/OK3568_Linux_fs/kernel/tools/perf/
Makefile
22 # Do a parallel build with multiple jobs, based on the number of CPUs online
25 # (To override it, run 'make JOBS=1' and similar.)
27 ifeq ($(JOBS),)
28 JOBS := $(shell (getconf _NPROCESSORS_ONLN || egrep -c '^processor|^CPU[0-9]' /proc/cpuinfo) 2>/de… macro
29 ifeq ($(JOBS),0)
30 JOBS := 1 macro
55 @printf ' BUILD: Doing '\''make \033[33m-j'$(JOBS)'\033[m'\'' parallel build\n'
59 @$(MAKE) -f Makefile.perf --no-print-directory -j$(JOBS) O=$(FULL_O) $(SET_DEBUG) $@
93 # The build-test target is not really parallel, don't print the jobs info,
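The perf Makefile defaults JOBS to the number of online CPUs (via getconf _NPROCESSORS_ONLN, falling back to 1 when detection fails) and then runs make -j$(JOBS). For reference, the same value can be queried programmatically; this small C sketch is illustrative and not part of the perf build:

/* Report the number of online CPUs, mirroring the JOBS default the
 * perf Makefile derives from getconf _NPROCESSORS_ONLN. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	long jobs = sysconf(_SC_NPROCESSORS_ONLN);

	if (jobs < 1)
		jobs = 1;	/* same fallback: never go below one job */
	printf("make -j%ld\n", jobs);
	return 0;
}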
/OK3568_Linux_fs/kernel/drivers/gpu/arm/mali400/mali/common/
mali_soft_job.h
26 * Soft jobs of type MALI_SOFT_JOB_TYPE_USER_SIGNALED will only complete after activation if either
29 * Soft jobs of type MALI_SOFT_JOB_TYPE_SELF_SIGNALED will release job resource automatically
43 * For soft jobs of type MALI_SOFT_JOB_TYPE_USER_SIGNALED the state is changed to
48 * state is changed to MALI_SOFT_JOB_STATE_TIMED_OUT. This can only happen to soft jobs in state
69 …_mali_osk_atomic_t refcount; /**< Soft jobs are reference counted to prev…
84 * The soft job system is used to manage all soft jobs that belongs to a session.
88 _MALI_OSK_LIST_HEAD(jobs_used); /**< List of all allocated soft jobs. */
90 …_irq_t *lock; /**< Lock used to protect soft job system and its soft jobs. */
106 * @note The soft job must not have any started or activated jobs. Call @ref
136 * Create soft jobs with @ref mali_soft_job_create before starting them.
[all …]
mali_session.h
42 _MALI_OSK_LIST_HEAD(pp_job_list); /**< List of all PP jobs on this session */
45 …_mali_osk_atomic_t number_of_window_jobs; /**< Record the window jobs completed on this session in…
47 _mali_osk_atomic_t number_of_pp_jobs; /** < Record the pp jobs on this session */
49 …LIST_SIZE]; /**< List of PP job lists per frame builder id. Used to link jobs from same frame bui…
54 …mali_bool use_high_priority_job_queue; /**< If MALI_TRUE, jobs added from this session will use th…
127 * Get the max completed window jobs from all active session,
mali_scheduler.c
59 /* Queue of jobs to be executed on the GP group */
62 /* Queue of PP jobs */
201 * Count how many physical sub jobs are present from the head of queue in mali_scheduler_job_physical_head_count()
209 /* Check for partially started normal pri jobs */ in mali_scheduler_job_physical_head_count()
220 * Remember; virtual jobs can't be queued and started in mali_scheduler_job_physical_head_count()
276 /* Check for partially started normal pri jobs */ in mali_scheduler_job_pp_next()
358 * For PP jobs we favour partially started jobs in normal in mali_scheduler_job_pp_physical_peek()
359 * priority queue over unstarted jobs in high priority queue in mali_scheduler_job_pp_physical_peek()
616 /* With ZRAM feature enabled, all pp jobs will be force to use deferred delete. */ in mali_scheduler_complete_pp_job()
632 MALI_DEBUG_PRINT(3, ("Mali scheduler: Aborting all queued jobs from session 0x%08X.\n", in mali_scheduler_abort_session()
[all …]
/OK3568_Linux_fs/kernel/drivers/md/
dm-kcopyd.c
178 * We maintain four lists of jobs:
180 * i) jobs waiting for pages
181 * ii) jobs that have pages, and are waiting for the io to be issued.
182 * iii) jobs that don't need to do any IO and just run a callback
183 * iv) jobs that have completed.
528 static struct kcopyd_job *pop_io_job(struct list_head *jobs, in pop_io_job() argument
534 * For I/O jobs, pop any read, any write without sequential write in pop_io_job()
537 list_for_each_entry(job, jobs, list) { in pop_io_job()
553 static struct kcopyd_job *pop(struct list_head *jobs, in pop() argument
561 if (!list_empty(jobs)) { in pop()
[all …]
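
The dm-kcopyd hits describe jobs migrating between four lists (waiting for pages, waiting for I/O, callback-only, complete) and being taken off a list with pop()/pop_io_job() under a lock. A generic sketch of that pop-under-lock pattern, using the kernel list API but with hypothetical names rather than the driver's own, could look like:

/* Generic "pop a job from a locked list" pattern, in the spirit of the
 * dm-kcopyd comments above.  demo_job and demo_pop are hypothetical. */
#include <linux/list.h>
#include <linux/spinlock.h>

struct demo_job {
	struct list_head list;
	/* ... pages, I/O region, completion callback ... */
};

static DEFINE_SPINLOCK(demo_job_lock);

static struct demo_job *demo_pop(struct list_head *jobs)
{
	struct demo_job *job = NULL;
	unsigned long flags;

	spin_lock_irqsave(&demo_job_lock, flags);
	if (!list_empty(jobs)) {
		job = list_first_entry(jobs, struct demo_job, list);
		list_del(&job->list);	/* job now belongs to the caller */
	}
	spin_unlock_irqrestore(&demo_job_lock, flags);

	return job;
}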
/OK3568_Linux_fs/yocto/meta-openembedded/meta-initramfs/recipes-devtools/klibc/files/
0001-dash-Specify-format-string-in-fmtstr.patch
8 usr/dash/jobs.c:429:3: error: format not a string literal and no format arguments [-Werror=format-s…
14 usr/dash/jobs.c | 2 +-
17 diff --git a/usr/dash/jobs.c b/usr/dash/jobs.c
19 --- a/usr/dash/jobs.c
20 +++ b/usr/dash/jobs.c
/OK3568_Linux_fs/u-boot/tools/
genboardscfg.py
227 def scan_defconfigs(jobs=1): argument
233 jobs: The number of jobs to run simultaneously
245 for i in range(jobs):
246 defconfigs = all_defconfigs[total_boards * i / jobs :
247 total_boards * (i + 1) / jobs]
411 def gen_boards_cfg(output, jobs=1, force=False): argument
416 jobs: The number of jobs to run simultaneously
425 params_list = scan_defconfigs(jobs)
439 parser.add_option('-j', '--jobs', type='int', default=cpu_count,
440 help='the number of jobs to run simultaneously')
[all …]
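
The genboardscfg.py hits show the defconfig list being split evenly across the requested number of jobs by index arithmetic (total * i / jobs up to total * (i + 1) / jobs). The same even-split idiom, expressed as a small illustrative C helper with hypothetical names, is sketched below:

/* Even split of `total` work items across `jobs` workers, the same
 * index arithmetic genboardscfg.py uses for its defconfig slices.
 * chunk_bounds() is a hypothetical helper, not from the tree. */
#include <stdio.h>

static void chunk_bounds(int total, int jobs, int i, int *start, int *end)
{
	*start = (int)((long long)total * i / jobs);
	*end   = (int)((long long)total * (i + 1) / jobs);
}

int main(void)
{
	int start, end, i;

	for (i = 0; i < 4; i++) {	/* e.g. 10 boards over 4 jobs */
		chunk_bounds(10, 4, i, &start, &end);
		printf("job %d: boards [%d, %d)\n", i, start, end);
	}
	return 0;
}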
/OK3568_Linux_fs/yocto/meta-browser/meta-chromium/recipes-browser/chromium/files/
0001-limit-number-of-LTO-jobs.patch
4 Subject: [PATCH] limit number of LTO jobs.
6 --thinlto-jobs accepts "all" only since llvm 13. Dunfell
40 - ldflags += [ "-Wl,--thinlto-jobs=all" ]
42 + # linker jobs. This is still suboptimal to a potential dynamic
44 + ldflags += [ "-Wl,--thinlto-jobs=" + max_jobs_per_link ]
57 + # Limit the number of jobs (threads/processes) the linker is allowed
/OK3568_Linux_fs/kernel/drivers/gpu/arm/bifrost/
mali_kbase_hwaccess_jm.h
44 * Inspect the jobs in the slot ringbuffers and update state.
46 * This will cause jobs to be submitted to hardware if they are unblocked
171 * kbase_backend_reset() - The GPU is being reset. Cancel all jobs on the GPU
209 * kbase_backend_ctx_count_changed() - Number of contexts ready to submit jobs
228 * kbase_backend_slot_free() - Return the number of jobs that can be currently
233 * Return: Number of jobs that can be submitted.
249 * kbase_backend_jm_kill_running_jobs_from_kctx - Kill all jobs that are
253 * This is used in response to a page fault to remove all jobs from the faulting
261 * kbase_jm_wait_for_zero_jobs - Wait for context to have zero jobs running, and
285 * jobs from the context)
mali_kbase_config_defaults.h
121 /* Default minimum number of scheduling ticks before jobs are soft-stopped.
128 /* Default minimum number of scheduling ticks before CL jobs are soft-stopped. */
131 /* Default minimum number of scheduling ticks before jobs are hard-stopped */
134 /* Default minimum number of scheduling ticks before CL jobs are hard-stopped. */
137 /* Default minimum number of scheduling ticks before jobs are hard-stopped
142 /* Default timeout for some software jobs, after which the software event wait
143 * jobs will be cancelled.
219 /* Default number of milliseconds given for other jobs on the GPU to be
238 * When a context has used up this amount of time across its jobs, it is
/OK3568_Linux_fs/kernel/tools/testing/kunit/
kunit.py
28 ['jobs', 'build_dir', 'alltests',
34 KunitRequest = namedtuple('KunitRequest', ['raw_output','timeout', 'jobs',
74 request.jobs,
144 build_request = KunitBuildRequest(request.jobs, request.build_dir,
187 parser.add_argument('--jobs',
189 'jobs (commands) to run simultaneously."',
190 type=int, default=8, metavar='jobs')
266 cli_args.jobs,
300 request = KunitBuildRequest(cli_args.jobs,
/OK3568_Linux_fs/kernel/drivers/gpu/arm/midgard/backend/gpu/
mali_kbase_jm_hw.c
185 * fact that we got an IRQ for the previous set of completed jobs.
339 * JOB_IRQ_JS_STATE. However since both jobs in kbase_job_done()
344 * assume that _both_ jobs had completed in kbase_job_done()
348 * So at this point if there are no active jobs in kbase_job_done()
360 * remaining jobs because the failed job will in kbase_job_done()
361 * have prevented any futher jobs from starting in kbase_job_done()
402 * will not allow further jobs in a job in kbase_job_done()
445 * it early (without waiting for a timeout) because some jobs in kbase_job_done()
537 /* For HW_ISSUE_8316, only 'bad' jobs attacking in kbasep_job_slot_soft_or_hard_stop_do_action()
543 * Whilst such 'bad' jobs can be cleared by a in kbasep_job_slot_soft_or_hard_stop_do_action()
[all …]
mali_kbase_pm_defs.h
73 * @time_busy: number of ns the GPU was busy executing jobs since the
76 * jobs since the @time_period_start timestamp.
81 * @gpu_active: true when the GPU is executing jobs. false when
84 * @busy_cl: number of ns the GPU was busy executing CL jobs. Note that
85 * if two CL jobs were active for 400ns, this value would be updated
87 * @busy_gl: number of ns the GPU was busy executing GL jobs. Note that
88 * if two GL jobs were active for 400ns, this value would be updated
90 * @active_cl_ctx: number of CL jobs active on the GPU. Array is per-device.
91 * @active_gl_ctx: number of GL jobs active on the GPU. Array is per-slot. As
92 * GL jobs never run on slot 2 this slot is not recorded.
[all …]
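
The (truncated) @busy_cl/@busy_gl notes appear to describe per-job accumulation: each concurrently active job contributes its own busy time, so overlapping jobs add up to more than wall-clock time. A tiny illustrative sketch of that accounting convention, with hypothetical names and not the kbase code itself:

/* Illustrative accumulation matching the reading of the @busy_cl note
 * above: each active job adds the elapsed period, so two jobs active
 * for 400 ns contribute 800 ns in total. */
#include <stdio.h>

int main(void)
{
	unsigned long long busy_cl_ns = 0;
	unsigned int active_cl_jobs = 2;
	unsigned long long period_ns = 400;

	busy_cl_ns += (unsigned long long)active_cl_jobs * period_ns;
	printf("busy_cl = %llu ns\n", busy_cl_ns);	/* prints 800 */
	return 0;
}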
/OK3568_Linux_fs/kernel/tools/testing/selftests/net/
udpgro.sh
17 local -r jobs="$(jobs -p)"
20 [ -n "${jobs}" ] && kill -1 ${jobs} 2>/dev/null
56 wait $(jobs -p)
102 wait $(jobs -p)
126 wait $(jobs -p)
/OK3568_Linux_fs/kernel/tools/memory-model/scripts/
parseargs.sh
40 echo " --jobs N (number of jobs, default one per CPU)"
43 …echo "Defaults: --destdir '$LKMM_DESTDIR_DEF' --herdopts '$LKMM_HERD_OPTIONS_DEF' --jobs '$LKMM_JO…
108 --jobs|--job|-j)
109 checkarg --jobs "(number)" "$#" "$2" '^[1-9][0-9]\+$' '^--'
