author    Michel Thierry <michel.thierry@intel.com>  2014-07-24 17:04:38 +0100
committer Daniel Vetter <daniel.vetter@ffwll.ch>     2014-08-14 22:10:59 +0200
commit    acdd884a2e1b873995c120d5eabd8cab77f48f20 (patch)
tree      36a39235bd890e4f7ca5ea081c6453891d91fcb7 /drivers/gpu/drm/i915/intel_ringbuffer.h
parent    ae1250b9da308acd16554365d125b4afb795b825 (diff)
drm/i915/bdw: Two-stage execlist submit process
Context switch (and execlist submission) should happen only when other contexts are not active; otherwise pre-emption occurs. To ensure this, we place context switch requests in a queue, and those requests are later consumed when the right context switch interrupt is received (still TODO).

v2: Use a spinlock, do not remove the requests on unqueue (wait for context switch completion).
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>

v3: Several rebases and code changes. Use unique ID.

v4:
- Move the queue/lock init to the late ring initialization.
- Damien's kmalloc review comments: check return, use sizeof(*req), do not cast.

v5:
- Do not reuse drm_i915_gem_request. Instead, create our own.
- New namespace.

Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v1)
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com> (v2-v5)
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Checkpatch + wash-up s/BUG_ON/WARN_ON/.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
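The queue/consume split described above maps to a simple producer/consumer pattern. Below is a minimal sketch of that pattern, assuming the kernel's standard list and spinlock primitives plus the two intel_engine_cs fields this patch adds; the request type and function names (example_request, example_submit_request, example_ctx_switch_irq) are illustrative placeholders, not the patch's actual identifiers.

/*
 * Sketch only: stage 1 queues a request under execlist_lock; stage 2
 * (the context switch interrupt handler, still TODO in this patch)
 * consumes the queue. Only execlist_lock and execlist_queue come from
 * this diff; everything else here is hypothetical.
 */
#include <linux/list.h>
#include <linux/spinlock.h>
#include "intel_ringbuffer.h"

struct example_request {
	struct list_head execlist_link;
	/* context and ring tail pointer would live here */
};

/* Stage 1: submitters only queue; they never touch the hardware. */
static void example_submit_request(struct intel_engine_cs *ring,
				   struct example_request *req)
{
	unsigned long flags;

	spin_lock_irqsave(&ring->execlist_lock, flags);
	list_add_tail(&req->execlist_link, &ring->execlist_queue);
	/* if no context is currently active, kick the first submission here */
	spin_unlock_irqrestore(&ring->execlist_lock, flags);
}

/*
 * Stage 2: the context switch interrupt consumes the queue. Per v2 of
 * the commit message, the head request is only peeked at, not removed,
 * until the context switch completes.
 */
static void example_ctx_switch_irq(struct intel_engine_cs *ring)
{
	unsigned long flags;
	struct example_request *req;

	spin_lock_irqsave(&ring->execlist_lock, flags);
	req = list_first_entry_or_null(&ring->execlist_queue,
				       struct example_request,
				       execlist_link);
	if (req) {
		/* write req's context into the ELSP submit ports here */
	}
	spin_unlock_irqrestore(&ring->execlist_lock, flags);
}

The irqsave variants matter here: because stage 2 runs in interrupt context, stage 1 must disable local interrupts while holding the lock, which is presumably why the commit message calls out the spinlock in v2.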
Diffstat (limited to 'drivers/gpu/drm/i915/intel_ringbuffer.h')
-rw-r--r--  drivers/gpu/drm/i915/intel_ringbuffer.h | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 26785ca72530..670262dabb6c 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -231,6 +231,8 @@ struct intel_engine_cs {
 	} semaphore;
 
 	/* Execlists */
+	spinlock_t execlist_lock;
+	struct list_head execlist_queue;
 	u32		irq_keep_mask; /* bitmask for interrupts that should not be masked */
 	int		(*emit_request)(struct intel_ringbuffer *ringbuf);
 	int		(*emit_flush)(struct intel_ringbuffer *ringbuf,
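Per the v4 note in the commit message, the lock and queue are initialized during late ring initialization. The init function itself is outside this diff, but the setup plausibly reduces to the standard idiom for these two field types:

	/* sketch of the late-ring-init setup implied by v4; exact location not shown in this diff */
	spin_lock_init(&ring->execlist_lock);
	INIT_LIST_HEAD(&ring->execlist_queue);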