cregit-Linux: how code gets into the kernel

Release 4.8, drivers/gpu/drm/i915/intel_lrc.c

/*
 * Copyright © 2014 Intel Corporation
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice (including the next
 * paragraph) shall be included in all copies or substantial portions of the
 * Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 *
 * Authors:
 *    Ben Widawsky <ben@bwidawsk.net>
 *    Michel Thierry <michel.thierry@intel.com>
 *    Thomas Daniel <thomas.daniel@intel.com>
 *    Oscar Mateo <oscar.mateo@intel.com>
 *
 */

/**
 * DOC: Logical Rings, Logical Ring Contexts and Execlists
 *
 * Motivation:
 * GEN8 brings an expansion of the HW contexts: "Logical Ring Contexts".
 * These expanded contexts enable a number of new abilities, especially
 * "Execlists" (also implemented in this file).
 *
 * One of the main differences with the legacy HW contexts is that logical
 * ring contexts incorporate many more things into the context's state, such
 * as PDPs or ringbuffer control registers:
 *
 * The reason why PDPs are included in the context is straightforward: as
 * PPGTTs (per-process GTTs) are actually per-context, having the PDPs
 * contained there means you don't need to do a ppgtt->switch_mm yourself;
 * instead, the GPU will do it for you on the context switch.
 *
 * But what about the ringbuffer control registers (head, tail, etc.)?
 * Shouldn't we need just one set of those per engine command streamer? This is
 * where the name "Logical Rings" starts to make sense: by virtualizing the
 * rings, the engine cs shifts to a new "ring buffer" with every context
 * switch. When you want to submit a workload to the GPU you: A) choose your
 * context, B) find its appropriate virtualized ring, C) write commands to it
 * and then, finally, D) tell the GPU to switch to that context.
 *
 * Instead of the legacy MI_SET_CONTEXT, the way you tell the GPU to switch
 * to a context is via a context execution list, ergo "Execlists".
 *
 * LRC implementation:
 * Regarding the creation of contexts, we have:
 *
 * - One global default context.
 * - One local default context for each opened fd.
 * - One local extra context for each context create ioctl call.
 *
 * Now that ringbuffers belong per-context (and not per-engine, like before)
 * and that contexts are uniquely tied to a given engine (and not reusable,
 * like before) we need:
 *
 * - One ringbuffer per-engine inside each context.
 * - One backing object per-engine inside each context.
 *
 * The global default context starts its life with these new objects fully
 * allocated and populated. The local default context for each opened fd is
 * more complex, because we don't know at creation time which engine is going
 * to use them. To handle this, we have implemented a deferred creation of LR
 * contexts:
 *
 * The local context starts its life as a hollow or blank holder, that only
 * gets populated for a given engine once we receive an execbuffer. If later
 * on we receive another execbuffer ioctl for the same context but a different
 * engine, we allocate/populate a new ringbuffer and context backing object and
 * so on.
 *
 * Finally, regarding local contexts created using the ioctl call: as they are
 * only allowed with the render ring, we can allocate & populate them right
 * away (no need to defer anything, at least for now).
 *
 * Execlists implementation:
 * Execlists are the new method by which, on gen8+ hardware, workloads are
 * submitted for execution (as opposed to the legacy, ringbuffer-based, method).
 * This method works as follows:
 *
 * When a request is committed, its commands (the BB start and any leading or
 * trailing commands, like the seqno breadcrumbs) are placed in the ringbuffer
 * for the appropriate context. The tail pointer in the hardware context is not
 * updated at this time, but instead, kept by the driver in the ringbuffer
 * structure. A structure representing this request is added to a request queue
 * for the appropriate engine: this structure contains a copy of the context's
 * tail after the request was written to the ring buffer and a pointer to the
 * context itself.
 *
 * If the engine's request queue was empty before the request was added, the
 * queue is processed immediately. Otherwise the queue will be processed during
 * a context switch interrupt. In any case, elements on the queue will get sent
 * (in pairs) to the GPU's ExecLists Submit Port (ELSP, for short) with a
 * globally unique 20-bit submission ID.
 *
 * When execution of a request completes, the GPU updates the context status
 * buffer with a context complete event and generates a context switch interrupt.
 * During the interrupt handling, the driver examines the events in the buffer:
 * for each context complete event, if the announced ID matches that on the head
 * of the request queue, then that request is retired and removed from the queue.
 *
 * After processing, if any requests were retired and the queue is not empty
 * then a new execution list can be submitted. The two requests at the front of
 * the queue are next to be submitted but since a context may not occur twice in
 * an execution list, if subsequent requests have the same ID as the first then
 * the two requests must be combined. This is done simply by discarding requests
 * at the head of the queue until either only one request is left (in which case
 * we use a NULL second context) or the first two requests have unique IDs.
 *
 * By always executing the first two requests in the queue the driver ensures
 * that the GPU is kept as busy as possible. In the case where a single context
 * completes but a second context is still executing, the request for this second
 * context will be at the head of the queue when we remove the first one. This
 * request will then be resubmitted along with a new request for a different context,
 * which will cause the hardware to continue executing the second request and queue
 * the new request (the GPU detects the condition of a context getting preempted
 * with the same context and optimizes the context switch flow by not doing
 * preemption, but just sampling the new tail pointer).
 *
 */
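
A minimal user-space sketch of the pairing rule described above may help: same-context requests at the head of the queue are merged, and at most two requests with distinct contexts are picked for the ELSP. All names here (struct request, pick_elsp_pair) are hypothetical stand-ins, not the driver's API; the real logic, including the elsp_submitted bookkeeping this sketch omits, lives in execlists_context_unqueue() below.

/* Illustrative only: a user-space model of the ELSP pairing rule from the
 * DOC comment above. Not driver code. */
#include <stddef.h>
#include <stdio.h>

struct request {
        int ctx_id;             /* context this request belongs to */
        struct request *next;   /* FIFO link, head = oldest */
};

/* Collapse same-context heads, then return up to two requests with
 * distinct contexts; slot[1] stays NULL if only one context is queued. */
static void pick_elsp_pair(struct request **queue, struct request *slot[2])
{
        slot[0] = slot[1] = NULL;
        while (*queue) {
                struct request *req = *queue;

                if (!slot[0] || slot[0]->ctx_id == req->ctx_id) {
                        /* Same context: the newer request's tail supersedes
                         * the older one, so the older head is discarded. */
                        slot[0] = req;
                        *queue = req->next;
                } else {
                        slot[1] = req;  /* second, distinct context */
                        *queue = req->next;
                        break;
                }
        }
}

int main(void)
{
        struct request c = { 2, NULL };
        struct request b = { 1, &c };
        struct request a = { 1, &b };
        struct request *queue = &a, *slot[2];

        pick_elsp_pair(&queue, slot);
        printf("slot0 ctx=%d slot1 ctx=%d\n",
               slot[0]->ctx_id, slot[1] ? slot[1]->ctx_id : -1);
        return 0;       /* prints: slot0 ctx=1 slot1 ctx=2 */
}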
#include <linux/interrupt.h>

#include <drm/drmP.h>
#include <drm/i915_drm.h>
#include "i915_drv.h"
#include "intel_mocs.h"


#define GEN9_LR_CONTEXT_RENDER_SIZE (22 * PAGE_SIZE)
#define GEN8_LR_CONTEXT_RENDER_SIZE (20 * PAGE_SIZE)
#define GEN8_LR_CONTEXT_OTHER_SIZE (2 * PAGE_SIZE)

#define RING_EXECLIST_QFULL		(1 << 0x2)
#define RING_EXECLIST1_VALID		(1 << 0x3)
#define RING_EXECLIST0_VALID		(1 << 0x4)
#define RING_EXECLIST_ACTIVE_STATUS	(3 << 0xE)
#define RING_EXECLIST1_ACTIVE		(1 << 0x11)
#define RING_EXECLIST0_ACTIVE		(1 << 0x12)

#define GEN8_CTX_STATUS_IDLE_ACTIVE	(1 << 0)
#define GEN8_CTX_STATUS_PREEMPTED	(1 << 1)
#define GEN8_CTX_STATUS_ELEMENT_SWITCH	(1 << 2)
#define GEN8_CTX_STATUS_ACTIVE_IDLE	(1 << 3)
#define GEN8_CTX_STATUS_COMPLETE	(1 << 4)
#define GEN8_CTX_STATUS_LITE_RESTORE	(1 << 15)

#define CTX_LRI_HEADER_0		0x01
#define CTX_CONTEXT_CONTROL		0x02
#define CTX_RING_HEAD			0x04
#define CTX_RING_TAIL			0x06
#define CTX_RING_BUFFER_START		0x08
#define CTX_RING_BUFFER_CONTROL		0x0a
#define CTX_BB_HEAD_U			0x0c
#define CTX_BB_HEAD_L			0x0e
#define CTX_BB_STATE			0x10
#define CTX_SECOND_BB_HEAD_U		0x12
#define CTX_SECOND_BB_HEAD_L		0x14
#define CTX_SECOND_BB_STATE		0x16
#define CTX_BB_PER_CTX_PTR		0x18
#define CTX_RCS_INDIRECT_CTX		0x1a
#define CTX_RCS_INDIRECT_CTX_OFFSET	0x1c
#define CTX_LRI_HEADER_1		0x21
#define CTX_CTX_TIMESTAMP		0x22
#define CTX_PDP3_UDW			0x24
#define CTX_PDP3_LDW			0x26
#define CTX_PDP2_UDW			0x28
#define CTX_PDP2_LDW			0x2a
#define CTX_PDP1_UDW			0x2c
#define CTX_PDP1_LDW			0x2e
#define CTX_PDP0_UDW			0x30
#define CTX_PDP0_LDW			0x32
#define CTX_LRI_HEADER_2		0x41
#define CTX_R_PWR_CLK_STATE		0x42
#define CTX_GPGPU_CSR_BASE_ADDRESS	0x44

#define GEN8_CTX_VALID (1<<0)
#define GEN8_CTX_FORCE_PD_RESTORE (1<<1)
#define GEN8_CTX_FORCE_RESTORE (1<<2)
#define GEN8_CTX_L3LLC_COHERENT (1<<5)
#define GEN8_CTX_PRIVILEGE (1<<8)

#define ASSIGN_CTX_REG(reg_state, pos, reg, val) do { \
        (reg_state)[(pos)+0] = i915_mmio_reg_offset(reg); \
        (reg_state)[(pos)+1] = (val); \
} while (0)


#define ASSIGN_CTX_PDP(ppgtt, reg_state, n) do {            \
        const u64 _addr = i915_page_dir_dma_addr((ppgtt), (n)); \
        reg_state[CTX_PDP ## n ## _UDW+1] = upper_32_bits(_addr); \
        reg_state[CTX_PDP ## n ## _LDW+1] = lower_32_bits(_addr); \
} while (0)


#define ASSIGN_CTX_PML4(ppgtt, reg_state) do { \
        reg_state[CTX_PDP0_UDW + 1] = upper_32_bits(px_dma(&ppgtt->pml4)); \
        reg_state[CTX_PDP0_LDW + 1] = lower_32_bits(px_dma(&ppgtt->pml4)); \
} while (0)
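
A hedged illustration of what these macros do: the context image is an array of dwords holding (register offset, value) pairs, and ASSIGN_CTX_REG fills one such pair. The sketch below is a self-contained user-space stand-in; mock_mmio_reg_offset() and the MOCK_* names are invented for the example and only mirror the real i915_mmio_reg_offset() and CTX_RING_TAIL.

/* Illustrative only: user-space model of the (offset, value) pair layout
 * that ASSIGN_CTX_REG writes into the context image. */
#include <stdint.h>
#include <stdio.h>

#define MOCK_CTX_RING_TAIL 0x06 /* mirrors CTX_RING_TAIL above */

static uint32_t mock_mmio_reg_offset(uint32_t reg) { return reg; }

#define MOCK_ASSIGN_CTX_REG(reg_state, pos, reg, val) do { \
        (reg_state)[(pos)+0] = mock_mmio_reg_offset(reg); \
        (reg_state)[(pos)+1] = (val); \
} while (0)

int main(void)
{
        uint32_t reg_state[0x60] = { 0 };

        /* Each slot is two dwords: the MMIO offset of the register,
         * then the value the HW should restore into it. */
        MOCK_ASSIGN_CTX_REG(reg_state, MOCK_CTX_RING_TAIL, 0x2030, 0x1000);
        printf("offset=0x%x value=0x%x\n",
               (unsigned)reg_state[MOCK_CTX_RING_TAIL],
               (unsigned)reg_state[MOCK_CTX_RING_TAIL+1]);
        return 0;
}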

enum {
        FAULT_AND_HANG = 0,
        FAULT_AND_HALT, /* Debug only */
        FAULT_AND_STREAM,
        FAULT_AND_CONTINUE /* Unsupported */
};

#define GEN8_CTX_ID_SHIFT 32
#define GEN8_CTX_ID_WIDTH 21
#define GEN8_CTX_RCS_INDIRECT_CTX_OFFSET_DEFAULT	0x17
#define GEN9_CTX_RCS_INDIRECT_CTX_OFFSET_DEFAULT	0x26

/* Typical size of the average request (2 pipecontrols and a MI_BB) */
#define EXECLISTS_REQUEST_SIZE 64 /* bytes */

static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
					    struct intel_engine_cs *engine);
static int intel_lr_context_pin(struct i915_gem_context *ctx,
				struct intel_engine_cs *engine);

/**
 * intel_sanitize_enable_execlists() - sanitize i915.enable_execlists
 * @dev_priv: i915 device private
 * @enable_execlists: value of i915.enable_execlists module parameter.
 *
 * Only certain platforms support Execlists (the prerequisites being
 * support for Logical Ring Contexts and Aliasing PPGTT or better).
 *
 * Return: 1 if Execlists is supported and has to be enabled.
 */

int intel_sanitize_enable_execlists(struct drm_i915_private *dev_priv,
                                    int enable_execlists)
{
        /* On platforms with execlist available, vGPU will only
         * support execlist mode, no ring buffer mode.
         */
        if (HAS_LOGICAL_RING_CONTEXTS(dev_priv) && intel_vgpu_active(dev_priv))
                return 1;

        if (INTEL_GEN(dev_priv) >= 9)
                return 1;

        if (enable_execlists == 0)
                return 0;

        if (HAS_LOGICAL_RING_CONTEXTS(dev_priv) &&
            USES_PPGTT(dev_priv) &&
            i915.use_mmio_flip >= 0)
                return 1;

        return 0;
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
oscar mateo             36   48.65%        2      33.33%
zhiyuan lv              14   18.92%        1      16.67%
damien lespiau          10   13.51%        1      16.67%
chris wilson             8   10.81%        1      16.67%
daniel vetter            6    8.11%        1      16.67%
Total                   74  100.00%        6     100.00%


static void logical_ring_init_platform_invariants(struct intel_engine_cs *engine)
{
        struct drm_i915_private *dev_priv = engine->i915;

        if (IS_GEN8(dev_priv) || IS_GEN9(dev_priv))
                engine->idle_lite_restore_wa = ~0;

        engine->disable_lite_restore_wa =
                (IS_SKL_REVID(dev_priv, 0, SKL_REVID_B0) ||
                 IS_BXT_REVID(dev_priv, 0, BXT_REVID_A1)) &&
                (engine->id == VCS || engine->id == VCS2);

        engine->ctx_desc_template = GEN8_CTX_VALID;
        if (IS_GEN8(dev_priv))
                engine->ctx_desc_template |= GEN8_CTX_L3LLC_COHERENT;
        engine->ctx_desc_template |= GEN8_CTX_PRIVILEGE;

        /* TODO: WaDisableLiteRestore when we start using semaphore
         * signalling between Command Streamers */
        /* ring->ctx_desc_template |= GEN8_CTX_FORCE_RESTORE; */

        /* WaEnableForceRestoreInCtxtDescForVCS:skl */
        /* WaEnableForceRestoreInCtxtDescForVCS:bxt */
        if (engine->disable_lite_restore_wa)
                engine->ctx_desc_template |= GEN8_CTX_FORCE_RESTORE;
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
tvrtko ursulin          68   57.63%        3      27.27%
michel thierry          20   16.95%        1       9.09%
jani nikula             10    8.47%        1       9.09%
chris wilson             8    6.78%        1       9.09%
nicholas hoath           4    3.39%        1       9.09%
mika kuoppala            3    2.54%        1       9.09%
dave gordon              2    1.69%        1       9.09%
ben widawsky             2    1.69%        1       9.09%
tim gore                 1    0.85%        1       9.09%
Total                  118  100.00%       11     100.00%

/**
 * intel_lr_context_descriptor_update() - calculate & cache the descriptor
 * for a pinned context
 *
 * @ctx: Context to work on
 * @engine: Engine the descriptor will be used with
 *
 * The context descriptor encodes various attributes of a context,
 * including its GTT address and some flags. Because it's fairly
 * expensive to calculate, we'll just do it once and cache the result,
 * which remains valid until the context is unpinned.
 *
 * This is what a descriptor looks like, from LSB to MSB:
 *    bits  0-11: flags, GEN8_CTX_* (cached in ctx_desc_template)
 *    bits 12-31: LRCA, GTT address of (the HWSP of) this context
 *    bits 32-52: ctx ID, a globally unique tag
 *    bits 53-54: mbz, reserved for use by hardware
 *    bits 55-63: group ID, currently unused and set to 0
 */
static void intel_lr_context_descriptor_update(struct i915_gem_context *ctx,
                                               struct intel_engine_cs *engine)
{
        struct intel_context *ce = &ctx->engine[engine->id];
        u64 desc;

        BUILD_BUG_ON(MAX_CONTEXT_HW_ID > (1<<GEN8_CTX_ID_WIDTH));

        desc = ctx->desc_template;                      /* bits  3-4  */
        desc |= engine->ctx_desc_template;              /* bits  0-11 */
        desc |= ce->lrc_vma->node.start + LRC_PPHWSP_PN * PAGE_SIZE;
                                                        /* bits 12-31 */
        desc |= (u64)ctx->hw_id << GEN8_CTX_ID_SHIFT;   /* bits 32-52 */

        ce->lrc_desc = desc;
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
chris wilson            44   47.83%        4      33.33%
tvrtko ursulin          16   17.39%        2      16.67%
michel thierry          10   10.87%        1       8.33%
ben widawsky             8    8.70%        1       8.33%
zhi wang                 7    7.61%        1       8.33%
alex dai                 4    4.35%        1       8.33%
nicholas hoath           2    2.17%        1       8.33%
mika kuoppala            1    1.09%        1       8.33%
Total                   92  100.00%       12     100.00%
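
A small sketch of the bit packing documented above, assuming only the layout in that comment (bits 0-11 flags, bits 12-31 LRCA, bits 32-52 ctx ID); pack_lrc_desc() is a hypothetical stand-in for what intel_lr_context_descriptor_update() computes.

/* Illustrative only: user-space packing/unpacking of the descriptor layout
 * from the kerneldoc above. Values are made up. */
#include <stdint.h>
#include <stdio.h>

#define MOCK_CTX_ID_SHIFT 32    /* mirrors GEN8_CTX_ID_SHIFT */
#define MOCK_CTX_ID_WIDTH 21    /* mirrors GEN8_CTX_ID_WIDTH */

static uint64_t pack_lrc_desc(uint64_t flags, uint64_t lrca, uint64_t ctx_id)
{
        uint64_t desc = flags & 0xfff;          /* bits  0-11 */
        desc |= lrca & 0xfffff000;              /* bits 12-31, page aligned */
        desc |= ctx_id << MOCK_CTX_ID_SHIFT;    /* bits 32-52 */
        return desc;
}

int main(void)
{
        uint64_t desc = pack_lrc_desc(0x19, 0x7f000, 42);
        uint64_t ctx_id = (desc >> MOCK_CTX_ID_SHIFT) &
                          ((1ull << MOCK_CTX_ID_WIDTH) - 1);

        printf("desc=%#llx ctx_id=%llu\n",
               (unsigned long long)desc, (unsigned long long)ctx_id);
        return 0;
}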


uint64_t intel_lr_context_descriptor(struct i915_gem_context *ctx,
                                     struct intel_engine_cs *engine)
{
        return ctx->engine[engine->id].lrc_desc;
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
tvrtko ursulin          25   92.59%        2      50.00%
ben widawsky             1    3.70%        1      25.00%
chris wilson             1    3.70%        1      25.00%
Total                   27  100.00%        4     100.00%


static void execlists_elsp_write(struct drm_i915_gem_request *rq0,
                                 struct drm_i915_gem_request *rq1)
{
        struct intel_engine_cs *engine = rq0->engine;
        struct drm_i915_private *dev_priv = rq0->i915;
        uint64_t desc[2];

        if (rq1) {
                desc[1] = intel_lr_context_descriptor(rq1->ctx, rq1->engine);
                rq1->elsp_submitted++;
        } else {
                desc[1] = 0;
        }

        desc[0] = intel_lr_context_descriptor(rq0->ctx, rq0->engine);
        rq0->elsp_submitted++;

        /* You must always write both descriptors in the order below. */
        I915_WRITE_FW(RING_ELSP(engine), upper_32_bits(desc[1]));
        I915_WRITE_FW(RING_ELSP(engine), lower_32_bits(desc[1]));

        I915_WRITE_FW(RING_ELSP(engine), upper_32_bits(desc[0]));
        /* The context is automatically loaded after the following */
        I915_WRITE_FW(RING_ELSP(engine), lower_32_bits(desc[0]));

        /* ELSP is a wo register, use another nearby reg for posting */
        POSTING_READ_FW(RING_EXECLIST_STATUS_LO(engine));
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
ben widawsky            89   51.45%        1       8.33%
mika kuoppala           50   28.90%        3      25.00%
tvrtko ursulin          14    8.09%        4      33.33%
dave gordon             12    6.94%        1       8.33%
chris wilson             7    4.05%        2      16.67%
ville syrjala            1    0.58%        1       8.33%
Total                  173  100.00%       12     100.00%
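
The ELSP writes above split each 64-bit descriptor into two 32-bit MMIO writes, port 1 before port 0. A trivial user-space sketch of that split, with made-up descriptor values:

/* Illustrative only: the upper/lower dword split used for the two 64-bit
 * ELSP descriptors above. User-space stand-in, values invented. */
#include <stdint.h>
#include <stdio.h>

static uint32_t mock_upper_32_bits(uint64_t v) { return (uint32_t)(v >> 32); }
static uint32_t mock_lower_32_bits(uint64_t v) { return (uint32_t)v; }

int main(void)
{
        uint64_t desc[2] = { 0x123456789abcdef0ull, 0 };

        /* Port 1 first, then port 0; the low dword of desc[0] is the
         * write that actually triggers submission. */
        printf("%08x %08x %08x %08x\n",
               (unsigned)mock_upper_32_bits(desc[1]),
               (unsigned)mock_lower_32_bits(desc[1]),
               (unsigned)mock_upper_32_bits(desc[0]),
               (unsigned)mock_lower_32_bits(desc[0]));
        return 0;
}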


static void execlists_update_context_pdps(struct i915_hw_ppgtt *ppgtt,
                                          u32 *reg_state)
{
        ASSIGN_CTX_PDP(ppgtt, reg_state, 3);
        ASSIGN_CTX_PDP(ppgtt, reg_state, 2);
        ASSIGN_CTX_PDP(ppgtt, reg_state, 1);
        ASSIGN_CTX_PDP(ppgtt, reg_state, 0);
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
tvrtko ursulin          43   84.31%        1      25.00%
ben widawsky             4    7.84%        1      25.00%
chris wilson             3    5.88%        1      25.00%
deepak s                 1    1.96%        1      25.00%
Total                   51  100.00%        4     100.00%


static void execlists_update_context(struct drm_i915_gem_request *rq)
{
        struct intel_engine_cs *engine = rq->engine;
        struct i915_hw_ppgtt *ppgtt = rq->ctx->ppgtt;
        uint32_t *reg_state = rq->ctx->engine[engine->id].lrc_reg_state;

        reg_state[CTX_RING_TAIL+1] = rq->tail;

        /* True 32b PPGTT with dynamic page allocation: update PDP
         * registers and point the unallocated PDPs to scratch page.
         * PML4 is allocated during ppgtt init, so this is not needed
         * in 48-bit mode.
         */
        if (ppgtt && !USES_FULL_48BIT_PPGTT(ppgtt->base.dev))
                execlists_update_context_pdps(ppgtt, reg_state);
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
mika kuoppala           30   37.04%        1      11.11%
michel thierry          24   29.63%        2      22.22%
oscar mateo             15   18.52%        1      11.11%
tvrtko ursulin           9   11.11%        4      44.44%
thomas daniel            3    3.70%        1      11.11%
Total                   81  100.00%        9     100.00%


static void execlists_submit_requests(struct drm_i915_gem_request *rq0,
                                      struct drm_i915_gem_request *rq1)
{
        struct drm_i915_private *dev_priv = rq0->i915;
        unsigned int fw_domains = rq0->engine->fw_domains;

        execlists_update_context(rq0);

        if (rq1)
                execlists_update_context(rq1);

        spin_lock_irq(&dev_priv->uncore.lock);
        intel_uncore_forcewake_get__locked(dev_priv, fw_domains);

        execlists_elsp_write(rq0, rq1);

        intel_uncore_forcewake_put__locked(dev_priv, fw_domains);
        spin_unlock_irq(&dev_priv->uncore.lock);
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
tvrtko ursulin          53   58.89%        3      30.00%
mika kuoppala           14   15.56%        3      30.00%
thomas daniel           13   14.44%        2      20.00%
ben widawsky             9   10.00%        1      10.00%
dave gordon              1    1.11%        1      10.00%
Total                   90  100.00%       10     100.00%


static inline void execlists_context_status_change(
                struct drm_i915_gem_request *rq,
                unsigned long status)
{
        /*
         * Only used when GVT-g is enabled now. When GVT-g is disabled,
         * the compiler should eliminate this function as dead-code.
         */
        if (!IS_ENABLED(CONFIG_DRM_I915_GVT))
                return;

        atomic_notifier_call_chain(&rq->ctx->status_notifier, status, rq);
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
zhi wang                40  100.00%        1     100.00%
Total                   40  100.00%        1     100.00%


static void execlists_context_unqueue(struct intel_engine_cs *engine)
{
        struct drm_i915_gem_request *req0 = NULL, *req1 = NULL;
        struct drm_i915_gem_request *cursor, *tmp;

        assert_spin_locked(&engine->execlist_lock);

        /*
         * If irqs are not active generate a warning as batches that finish
         * without the irqs may get lost and a GPU Hang may occur.
         */
        WARN_ON(!intel_irqs_enabled(engine->i915));

        /* Try to read in pairs */
        list_for_each_entry_safe(cursor, tmp, &engine->execlist_queue,
                                 execlist_link) {
                if (!req0) {
                        req0 = cursor;
                } else if (req0->ctx == cursor->ctx) {
                        /* Same ctx: ignore first request, as second request
                         * will update tail past first request's workload */
                        cursor->elsp_submitted = req0->elsp_submitted;
                        list_del(&req0->execlist_link);
                        i915_gem_request_unreference(req0);
                        req0 = cursor;
                } else {
                        if (IS_ENABLED(CONFIG_DRM_I915_GVT)) {
                                /*
                                 * req0 (after merged) ctx requires single
                                 * submission, stop picking
                                 */
                                if (req0->ctx->execlists_force_single_submission)
                                        break;
                                /*
                                 * req0 ctx doesn't require single submission,
                                 * but next req ctx requires, stop picking
                                 */
                                if (cursor->ctx->execlists_force_single_submission)
                                        break;
                        }
                        req1 = cursor;
                        WARN_ON(req1->elsp_submitted);
                        break;
                }
        }

        if (unlikely(!req0))
                return;

        execlists_context_status_change(req0, INTEL_CONTEXT_SCHEDULE_IN);

        if (req1)
                execlists_context_status_change(req1, INTEL_CONTEXT_SCHEDULE_IN);

        if (req0->elsp_submitted & engine->idle_lite_restore_wa) {
                /*
                 * WaIdleLiteRestore: make sure we never cause a lite restore
                 * with HEAD==TAIL.
                 *
                 * Apply the wa NOOPS to prevent ring:HEAD == req:TAIL as we
                 * resubmit the request. See gen8_emit_request() for where we
                 * prepare the padding after the end of the request.
                 */
                struct intel_ringbuffer *ringbuf;

                ringbuf = req0->ctx->engine[engine->id].ringbuf;
                req0->tail += 8;
                req0->tail &= ringbuf->size - 1;
        }

        execlists_submit_requests(req0, req1);
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
thomas daniel           90   37.34%        2      15.38%
michel thierry          48   19.92%        1       7.69%
zhi wang                47   19.50%        2      15.38%
tvrtko ursulin          34   14.11%        4      30.77%
peter antoine           10    4.15%        1       7.69%
oscar mateo              9    3.73%        1       7.69%
nicholas hoath           2    0.83%        1       7.69%
chris wilson             1    0.41%        1       7.69%
Total                  241  100.00%       13     100.00%


static unsigned int
execlists_check_remove_request(struct intel_engine_cs *engine, u32 ctx_id)
{
        struct drm_i915_gem_request *head_req;

        assert_spin_locked(&engine->execlist_lock);

        head_req = list_first_entry_or_null(&engine->execlist_queue,
                                            struct drm_i915_gem_request,
                                            execlist_link);

        if (WARN_ON(!head_req || (head_req->ctx_hw_id != ctx_id)))
                return 0;

        WARN(head_req->elsp_submitted == 0, "Never submitted head request\n");

        if (--head_req->elsp_submitted > 0)
                return 0;

        execlists_context_status_change(head_req, INTEL_CONTEXT_SCHEDULE_OUT);

        list_del(&head_req->execlist_link);
        i915_gem_request_unreference(head_req);

        return 1;
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
thomas daniel           48   44.44%        2      20.00%
tvrtko ursulin          32   29.63%        5      50.00%
oscar mateo             19   17.59%        1      10.00%
zhi wang                 7    6.48%        1      10.00%
nicholas hoath           2    1.85%        1      10.00%
Total                  108  100.00%       10     100.00%


static u32
get_context_status(struct intel_engine_cs *engine, unsigned int read_pointer,
                   u32 *context_id)
{
        struct drm_i915_private *dev_priv = engine->i915;
        u32 status;

        read_pointer %= GEN8_CSB_ENTRIES;

        status = I915_READ_FW(RING_CONTEXT_STATUS_BUF_LO(engine, read_pointer));

        if (status & GEN8_CTX_STATUS_IDLE_ACTIVE)
                return 0;

        *context_id = I915_READ_FW(RING_CONTEXT_STATUS_BUF_HI(engine,
                                                              read_pointer));

        return status;
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
ben widawsky            45   62.50%        1      25.00%
tvrtko ursulin          26   36.11%        2      50.00%
chris wilson             1    1.39%        1      25.00%
Total                   72  100.00%        4     100.00%

/**
 * intel_lrc_irq_handler() - handle Context Switch interrupts
 * @data: tasklet handler passed in unsigned long
 *
 * Check the unread Context Status Buffers and manage the submission of new
 * contexts to the ELSP accordingly.
 */
static void intel_lrc_irq_handler(unsigned long data)
{
        struct intel_engine_cs *engine = (struct intel_engine_cs *)data;
        struct drm_i915_private *dev_priv = engine->i915;
        u32 status_pointer;
        unsigned int read_pointer, write_pointer;
        u32 csb[GEN8_CSB_ENTRIES][2];
        unsigned int csb_read = 0, i;
        unsigned int submit_contexts = 0;

        intel_uncore_forcewake_get(dev_priv, engine->fw_domains);

        status_pointer = I915_READ_FW(RING_CONTEXT_STATUS_PTR(engine));

        read_pointer = engine->next_context_status_buffer;
        write_pointer = GEN8_CSB_WRITE_PTR(status_pointer);
        if (read_pointer > write_pointer)
                write_pointer += GEN8_CSB_ENTRIES;

        while (read_pointer < write_pointer) {
                if (WARN_ON_ONCE(csb_read == GEN8_CSB_ENTRIES))
                        break;
                csb[csb_read][0] = get_context_status(engine, ++read_pointer,
                                                      &csb[csb_read][1]);
                csb_read++;
        }

        engine->next_context_status_buffer = write_pointer % GEN8_CSB_ENTRIES;

        /* Update the read pointer to the old write pointer. Manual ringbuffer
         * management ftw </sarcasm> */
        I915_WRITE_FW(RING_CONTEXT_STATUS_PTR(engine),
                      _MASKED_FIELD(GEN8_CSB_READ_PTR_MASK,
                                    engine->next_context_status_buffer << 8));

        intel_uncore_forcewake_put(dev_priv, engine->fw_domains);

        spin_lock(&engine->execlist_lock);

        for (i = 0; i < csb_read; i++) {
                if (unlikely(csb[i][0] & GEN8_CTX_STATUS_PREEMPTED)) {
                        if (csb[i][0] & GEN8_CTX_STATUS_LITE_RESTORE) {
                                if (execlists_check_remove_request(engine, csb[i][1]))
                                        WARN(1, "Lite Restored request removed from queue\n");
                        } else
                                WARN(1, "Preemption without Lite Restore\n");
                }

                if (csb[i][0] & (GEN8_CTX_STATUS_ACTIVE_IDLE |
                    GEN8_CTX_STATUS_ELEMENT_SWITCH))
                        submit_contexts +=
                                execlists_check_remove_request(engine, csb[i][1]);
        }

        if (submit_contexts) {
                if (!engine->disable_lite_restore_wa ||
                    (csb[i][0] & GEN8_CTX_STATUS_ACTIVE_IDLE))
                        execlists_context_unqueue(engine);
        }

        spin_unlock(&engine->execlist_lock);

        if (unlikely(submit_contexts > 2))
                DRM_ERROR("More than two context complete events?\n");
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
tvrtko ursulin         216   60.85%        5      29.41%
thomas daniel           64   18.03%        1       5.88%
oscar mateo             43   12.11%        2      11.76%
ben widawsky            16    4.51%        3      17.65%
michel thierry          12    3.38%        2      11.76%
chris wilson             1    0.28%        1       5.88%
daniel vetter            1    0.28%        1       5.88%
ville syrjala            1    0.28%        1       5.88%
mika kuoppala            1    0.28%        1       5.88%
Total                  355  100.00%       17     100.00%
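
The read/write pointer arithmetic in the handler above wraps modulo the six CSB entries. A minimal stand-alone sketch, with invented pointer values (MOCK_CSB_ENTRIES mirrors GEN8_CSB_ENTRIES):

/* Illustrative only: the modular walk over the Context Status Buffer used
 * by intel_lrc_irq_handler() above. User-space stand-in. */
#include <stdio.h>

#define MOCK_CSB_ENTRIES 6      /* mirrors GEN8_CSB_ENTRIES */

int main(void)
{
        unsigned int read_pointer = 4;  /* engine->next_context_status_buffer */
        unsigned int write_pointer = 1; /* from RING_CONTEXT_STATUS_PTR */

        /* The HW write pointer may have wrapped past the read pointer. */
        if (read_pointer > write_pointer)
                write_pointer += MOCK_CSB_ENTRIES;

        while (read_pointer < write_pointer)
                printf("process CSB[%u]\n", ++read_pointer % MOCK_CSB_ENTRIES);

        return 0;       /* processes CSB[5], CSB[0], CSB[1] */
}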


static void execlists_context_queue(struct drm_i915_gem_request *request)
{
        struct intel_engine_cs *engine = request->engine;
        struct drm_i915_gem_request *cursor;
        int num_elements = 0;

        spin_lock_bh(&engine->execlist_lock);

        list_for_each_entry(cursor, &engine->execlist_queue, execlist_link)
                if (++num_elements > 2)
                        break;

        if (num_elements > 2) {
                struct drm_i915_gem_request *tail_req;

                tail_req = list_last_entry(&engine->execlist_queue,
                                           struct drm_i915_gem_request,
                                           execlist_link);

                if (request->ctx == tail_req->ctx) {
                        WARN(tail_req->elsp_submitted != 0,
                             "More than 2 already-submitted reqs queued\n");
                        list_del(&tail_req->execlist_link);
                        i915_gem_request_unreference(tail_req);
                }
        }

        i915_gem_request_reference(request);
        list_add_tail(&request->execlist_link, &engine->execlist_queue);
        request->ctx_hw_id = request->ctx->hw_id;
        if (num_elements == 0)
                execlists_context_unqueue(engine);

        spin_unlock_bh(&engine->execlist_lock);
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
oscar mateo             68   41.21%        1       8.33%
michel thierry          45   27.27%        1       8.33%
tvrtko ursulin          32   19.39%        6      50.00%
john harrison           10    6.06%        1       8.33%
nicholas hoath           7    4.24%        2      16.67%
thomas daniel            3    1.82%        1       8.33%
Total                  165  100.00%       12     100.00%


static int logical_ring_invalidate_all_caches(struct drm_i915_gem_request *req)
{
        struct intel_engine_cs *engine = req->engine;
        uint32_t flush_domains;
        int ret;

        flush_domains = 0;
        if (engine->gpu_caches_dirty)
                flush_domains = I915_GEM_GPU_DOMAINS;

        ret = engine->emit_flush(req, I915_GEM_GPU_DOMAINS, flush_domains);
        if (ret)
                return ret;

        engine->gpu_caches_dirty = false;
        return 0;
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
oscar mateo             58   84.06%        1      20.00%
tvrtko ursulin           5    7.25%        2      40.00%
john harrison            4    5.80%        1      20.00%
nicholas hoath           2    2.90%        1      20.00%
Total                   69  100.00%        5     100.00%


static int execlists_move_to_gpu(struct drm_i915_gem_request *req,
                                 struct list_head *vmas)
{
        const unsigned other_rings = ~intel_engine_flag(req->engine);
        struct i915_vma *vma;
        uint32_t flush_domains = 0;
        bool flush_chipset = false;
        int ret;

        list_for_each_entry(vma, vmas, exec_list) {
                struct drm_i915_gem_object *obj = vma->obj;

                if (obj->active & other_rings) {
                        ret = i915_gem_object_sync(obj, req->engine, &req);
                        if (ret)
                                return ret;
                }

                if (obj->base.write_domain & I915_GEM_DOMAIN_CPU)
                        flush_chipset |= i915_gem_clflush_object(obj, false);

                flush_domains |= obj->base.write_domain;
        }

        if (flush_domains & I915_GEM_DOMAIN_GTT)
                wmb();

        /* Unconditionally invalidate gpu caches and ensure that we do flush
         * any residual writes from the previous batch.
         */
        return logical_ring_invalidate_all_caches(req);
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
oscar mateo            105   75.54%        1      14.29%
chris wilson            18   12.95%        1      14.29%
john harrison           10    7.19%        2      28.57%
nicholas hoath           3    2.16%        1      14.29%
tvrtko ursulin           3    2.16%        2      28.57%
Total                  139  100.00%        7     100.00%


int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request)
{
        struct intel_engine_cs *engine = request->engine;
        struct intel_context *ce = &request->ctx->engine[engine->id];
        int ret;

        /* Flush enough space to reduce the likelihood of waiting after
         * we start building the request - in which case we will just
         * have to repeat work.
         */
        request->reserved_space += EXECLISTS_REQUEST_SIZE;

        if (!ce->state) {
                ret = execlists_context_deferred_alloc(request->ctx, engine);
                if (ret)
                        return ret;
        }

        request->ringbuf = ce->ringbuf;

        if (i915.enable_guc_submission) {
                /*
                 * Check that the GuC has space for the request before
                 * going any further, as the i915_add_request() call
                 * later on mustn't fail ...
                 */
                ret = i915_guc_wq_check_space(request);
                if (ret)
                        return ret;
        }

        ret = intel_lr_context_pin(request->ctx, engine);
        if (ret)
                return ret;

        ret = intel_ring_begin(request, 0);
        if (ret)
                goto err_unpin;

        if (!ce->initialised) {
                ret = engine->init_context(request);
                if (ret)
                        goto err_unpin;

                ce->initialised = true;
        }

        /* Note that after this point, we have committed to using
         * this request as it is being used to both track the
         * state of engine initialisation and liveness of the
         * golden renderstate above. Think twice before you try
         * to cancel/unwind this request now.
         */

        request->reserved_space -= EXECLISTS_REQUEST_SIZE;
        return 0;

err_unpin:
        intel_lr_context_unpin(request->ctx, engine);
        return ret;
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
chris wilson           136   70.47%        6      37.50%
alex dai                17    8.81%        1       6.25%
oscar mateo             16    8.29%        2      12.50%
dave gordon              9    4.66%        2      12.50%
mika kuoppala            7    3.63%        1       6.25%
john harrison            4    2.07%        2      12.50%
tvrtko ursulin           4    2.07%        2      12.50%
Total                  193  100.00%       16     100.00%

/*
 * intel_logical_ring_advance_and_submit() - advance the tail and submit the workload
 * @request: Request to advance the logical ringbuffer of.
 *
 * The tail is updated in our logical ringbuffer struct, not in the actual context. What
 * really happens during submission is that the context and current tail will be placed
 * on a queue waiting for the ELSP to be ready to accept a new context submission. At that
 * point, the tail *inside* the context is updated and the ELSP written to.
 */
static int
intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
{
        struct intel_ringbuffer *ringbuf = request->ringbuf;
        struct intel_engine_cs *engine = request->engine;

        intel_logical_ring_advance(ringbuf);
        request->tail = ringbuf->tail;

        /*
         * Here we add two extra NOOPs as padding to avoid
         * lite restore of a context with HEAD==TAIL.
         *
         * Caller must reserve WA_TAIL_DWORDS for us!
         */
        intel_logical_ring_emit(ringbuf, MI_NOOP);
        intel_logical_ring_emit(ringbuf, MI_NOOP);
        intel_logical_ring_advance(ringbuf);

        /* We keep the previous context alive until we retire the following
         * request. This ensures that the context object is still pinned
         * for any residual writes the HW makes into it on the context switch
         * into the next object following the breadcrumb. Otherwise, we may
         * retire the context too early.
         */
        request->previous_context = engine->last_context;
        engine->last_context = request->ctx;

        if (i915.enable_guc_submission)
                i915_guc_submit(request);
        else
                execlists_context_queue(request);

        return 0;
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
chris wilson            30   30.30%        2      18.18%
tvrtko ursulin          24   24.24%        2      18.18%
alex dai                17   17.17%        1       9.09%
oscar mateo             16   16.16%        3      27.27%
john harrison           10   10.10%        2      18.18%
dave gordon              2    2.02%        1       9.09%
Total                   99  100.00%       11     100.00%

/**
 * intel_execlists_submission() - submit a batchbuffer for execution, Execlists style
 * @params: execbuffer call parameters.
 * @args: execbuffer call arguments.
 * @vmas: list of vmas.
 *
 * This is the evil twin version of i915_gem_ringbuffer_submission. It abstracts
 * away the submission details of the execbuffer ioctl call.
 *
 * Return: non-zero if the submission fails.
 */
int intel_execlists_submission(struct i915_execbuffer_params *params,
                               struct drm_i915_gem_execbuffer2 *args,
                               struct list_head *vmas)
{
        struct drm_device *dev = params->dev;
        struct intel_engine_cs *engine = params->engine;
        struct drm_i915_private *dev_priv = to_i915(dev);
        struct intel_ringbuffer *ringbuf = params->ctx->engine[engine->id].ringbuf;
        u64 exec_start;
        int instp_mode;
        u32 instp_mask;
        int ret;

        instp_mode = args->flags & I915_EXEC_CONSTANTS_MASK;
        instp_mask = I915_EXEC_CONSTANTS_MASK;
        switch (instp_mode) {
        case I915_EXEC_CONSTANTS_REL_GENERAL:
        case I915_EXEC_CONSTANTS_ABSOLUTE:
        case I915_EXEC_CONSTANTS_REL_SURFACE:
                if (instp_mode != 0 && engine != &dev_priv->engine[RCS]) {
                        DRM_DEBUG("non-0 rel constants mode on non-RCS\n");
                        return -EINVAL;
                }

                if (instp_mode != dev_priv->relative_constants_mode) {
                        if (instp_mode == I915_EXEC_CONSTANTS_REL_SURFACE) {
                                DRM_DEBUG("rel surface constants mode invalid on gen5+\n");
                                return -EINVAL;
                        }

                        /* The HW changed the meaning on this bit on gen6 */
                        instp_mask &= ~I915_EXEC_CONSTANTS_REL_SURFACE;
                }
                break;
        default:
                DRM_DEBUG("execbuf with unknown constants: %d\n", instp_mode);
                return -EINVAL;
        }

        if (args->flags & I915_EXEC_GEN7_SOL_RESET) {
                DRM_DEBUG("sol reset is gen7 only\n");
                return -EINVAL;
        }

        ret = execlists_move_to_gpu(params->request, vmas);
        if (ret)
                return ret;

        if (engine == &dev_priv->engine[RCS] &&
            instp_mode != dev_priv->relative_constants_mode) {
                ret = intel_ring_begin(params->request, 4);
                if (ret)
                        return ret;

                intel_logical_ring_emit(ringbuf, MI_NOOP);
                intel_logical_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(1));
                intel_logical_ring_emit_reg(ringbuf, INSTPM);
                intel_logical_ring_emit(ringbuf, instp_mask << 16 | instp_mode);
                intel_logical_ring_advance(ringbuf);

                dev_priv->relative_constants_mode = instp_mode;
        }

        exec_start = params->batch_obj_vm_offset +
                     args->batch_start_offset;

        ret = engine->emit_bb_start(params->request, exec_start,
                                    params->dispatch_flags);
        if (ret)
                return ret;

        trace_i915_gem_ring_dispatch(params->request, params->dispatch_flags);

        i915_gem_execbuffer_move_to_active(vmas, params->request);

        return 0;
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
john harrison          238   66.85%       12      57.14%
oscar mateo             81   22.75%        3      14.29%
thomas daniel           24    6.74%        1       4.76%
tvrtko ursulin           8    2.25%        2       9.52%
chris wilson             4    1.12%        2       9.52%
ville syrjala            1    0.28%        1       4.76%
Total                  356  100.00%       21     100.00%


void intel_execlists_cancel_requests(struct intel_engine_cs *engine)
{
        struct drm_i915_gem_request *req, *tmp;
        LIST_HEAD(cancel_list);

        WARN_ON(!mutex_is_locked(&engine->i915->drm.struct_mutex));

        spin_lock_bh(&engine->execlist_lock);
        list_replace_init(&engine->execlist_queue, &cancel_list);
        spin_unlock_bh(&engine->execlist_lock);

        list_for_each_entry_safe(req, tmp, &cancel_list, execlist_link) {
                list_del(&req->execlist_link);
                i915_gem_request_unreference(req);
        }
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
john harrison           39   42.86%        1      12.50%
oscar mateo             31   34.07%        1      12.50%
tvrtko ursulin          16   17.58%        3      37.50%
chris wilson             4    4.40%        2      25.00%
dave gordon              1    1.10%        1      12.50%
Total                   91  100.00%        8     100.00%


void intel_logical_ring_stop(struct intel_engine_cs *engine)
{
        struct drm_i915_private *dev_priv = engine->i915;
        int ret;

        if (!intel_engine_initialized(engine))
                return;

        ret = intel_engine_idle(engine);
        if (ret)
                DRM_ERROR("failed to quiesce %s whilst cleaning up: %d\n",
                          engine->name, ret);

        /* TODO: Is this correct with Execlists enabled? */
        I915_WRITE_MODE(engine, _MASKED_BIT_ENABLE(STOP_RING));
        if (intel_wait_for_register(dev_priv,
                                    RING_MI_MODE(engine->mmio_base),
                                    MODE_IDLE, MODE_IDLE,
                                    1000)) {
                DRM_ERROR("%s: timed out trying to stop ring\n", engine->name);
                return;
        }
        I915_WRITE_MODE(engine, _MASKED_BIT_DISABLE(STOP_RING));
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
oscar mateo             50   47.17%        1      10.00%
john harrison           31   29.25%        1      10.00%
tvrtko ursulin          11   10.38%        3      30.00%
chris wilson            10    9.43%        2      20.00%
dave gordon              2    1.89%        1      10.00%
nicholas hoath           2    1.89%        2      20.00%
Total                  106  100.00%       10     100.00%


int logical_ring_flush_all_caches(struct drm_i915_gem_request *req)
{
        struct intel_engine_cs *engine = req->engine;
        int ret;

        if (!engine->gpu_caches_dirty)
                return 0;

        ret = engine->emit_flush(req, 0, I915_GEM_GPU_DOMAINS);
        if (ret)
                return ret;

        engine->gpu_caches_dirty = false;
        return 0;
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
oscar mateo             36   59.02%        1      16.67%
john harrison           18   29.51%        2      33.33%
tvrtko ursulin           5    8.20%        2      33.33%
nicholas hoath           2    3.28%        1      16.67%
Total                   61  100.00%        6     100.00%


static int intel_lr_context_pin(struct i915_gem_context *ctx,
                                struct intel_engine_cs *engine)
{
        struct drm_i915_private *dev_priv = ctx->i915;
        struct intel_context *ce = &ctx->engine[engine->id];
        void *vaddr;
        u32 *lrc_reg_state;
        int ret;

        lockdep_assert_held(&ctx->i915->drm.struct_mutex);

        if (ce->pin_count++)
                return 0;

        ret = i915_gem_obj_ggtt_pin(ce->state, GEN8_LR_CONTEXT_ALIGN,
                                    PIN_OFFSET_BIAS | GUC_WOPCM_TOP);
        if (ret)
                goto err;

        vaddr = i915_gem_object_pin_map(ce->state);
        if (IS_ERR(vaddr)) {
                ret = PTR_ERR(vaddr);
                goto unpin_ctx_obj;
        }

        lrc_reg_state = vaddr + LRC_STATE_PN * PAGE_SIZE;

        ret = intel_pin_and_map_ringbuffer_obj(dev_priv, ce->ringbuf);
        if (ret)
                goto unpin_map;

        i915_gem_context_reference(ctx);
        ce->lrc_vma = i915_gem_obj_to_ggtt(ce->state);
        intel_lr_context_descriptor_update(ctx, engine);

        lrc_reg_state[CTX_RING_BUFFER_START+1] = ce->ringbuf->vma->node.start;
        ce->lrc_reg_state = lrc_reg_state;
        ce->state->dirty = true;

        /* Invalidate GuC TLB. */
        if (i915.enable_guc_submission)
                I915_WRITE(GEN8_GTCR, GEN8_GTCR_INVALIDATE);

        return 0;

unpin_map:
        i915_gem_object_unpin_map(ce->state);
unpin_ctx_obj:
        i915_gem_object_ggtt_unpin(ce->state);
err:
        ce->pin_count = 0;
        return ret;
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
tvrtko ursulin          88   35.34%        6      33.33%
chris wilson            74   29.72%        6      33.33%
oscar mateo             39   15.66%        1       5.56%
john harrison           19    7.63%        1       5.56%
alex dai                17    6.83%        1       5.56%
nicholas hoath          10    4.02%        2      11.11%
mika kuoppala            2    0.80%        1       5.56%
Total                  249  100.00%       18     100.00%


void intel_lr_context_unpin(struct i915_gem_context *ctx,
                            struct intel_engine_cs *engine)
{
        struct intel_context *ce = &ctx->engine[engine->id];

        lockdep_assert_held(&ctx->i915->drm.struct_mutex);
        GEM_BUG_ON(ce->pin_count == 0);

        if (--ce->pin_count)
                return;

        intel_unpin_ringbuffer_obj(ce->ringbuf);

        i915_gem_object_unpin_map(ce->state);
        i915_gem_object_ggtt_unpin(ce->state);

        ce->lrc_vma = NULL;
        ce->lrc_desc = 0;
        ce->lrc_reg_state = NULL;

        i915_gem_context_unreference(ctx);
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
chris wilson            40   38.83%        4      28.57%
tvrtko ursulin          29   28.16%        5      35.71%
john harrison           15   14.56%        1       7.14%
oscar mateo             13   12.62%        1       7.14%
daniel vetter            4    3.88%        1       7.14%
nicholas hoath           1    0.97%        1       7.14%
mika kuoppala            1    0.97%        1       7.14%
Total                  103  100.00%       14     100.00%
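
The pin/unpin pair above follows a plain reference-counting pattern: the first pin maps the context objects, nested pins only bump the count, and the last unpin releases everything. A hedged user-space sketch of just that pattern (no locking, all names invented):

/* Illustrative only: the pin_count pattern used by intel_lr_context_pin()
 * and intel_lr_context_unpin() above. */
#include <stdio.h>

struct mock_ctx { unsigned int pin_count; };

static int mock_pin(struct mock_ctx *ce)
{
        if (ce->pin_count++)
                return 0;       /* already resident, just take a reference */
        printf("map context backing object\n");
        return 0;
}

static void mock_unpin(struct mock_ctx *ce)
{
        if (--ce->pin_count)
                return;         /* other users remain */
        printf("unmap context backing object\n");
}

int main(void)
{
        struct mock_ctx ce = { 0 };

        mock_pin(&ce);
        mock_pin(&ce);          /* nested pin: no extra work */
        mock_unpin(&ce);
        mock_unpin(&ce);        /* last unpin releases */
        return 0;
}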


static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
{
        int ret, i;
        struct intel_engine_cs *engine = req->engine;
        struct intel_ringbuffer *ringbuf = req->ringbuf;
        struct i915_workarounds *w = &req->i915->workarounds;

        if (w->count == 0)
                return 0;

        engine->gpu_caches_dirty = true;
        ret = logical_ring_flush_all_caches(req);
        if (ret)
                return ret;

        ret = intel_ring_begin(req, w->count * 2 + 2);
        if (ret)
                return ret;

        intel_logical_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(w->count));
        for (i = 0; i < w->count; i++) {
                intel_logical_ring_emit_reg(ringbuf, w->reg[i].addr);
                intel_logical_ring_emit(ringbuf, w->reg[i].value);
        }
        intel_logical_ring_emit(ringbuf, MI_NOOP);

        intel_logical_ring_advance(ringbuf);

        engine->gpu_caches_dirty = true;
        ret = logical_ring_flush_all_caches(req);
        if (ret)
                return ret;

        return 0;
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
michel thierry         167   87.43%        1      12.50%
john harrison           14    7.33%        1      12.50%
chris wilson             4    2.09%        2      25.00%
tvrtko ursulin           4    2.09%        2      25.00%
nicholas hoath           1    0.52%        1      12.50%
ville syrjala            1    0.52%        1      12.50%
Total                  191  100.00%        8     100.00%

#define wa_ctx_emit(batch, index, cmd)                                  \
        do {                                                            \
                int __index = (index)++;                                \
                if (WARN_ON(__index >= (PAGE_SIZE / sizeof(uint32_t)))) { \
                        return -ENOSPC;                                 \
                }                                                       \
                batch[__index] = (cmd);                                 \
        } while (0)

#define wa_ctx_emit_reg(batch, index, reg) \
        wa_ctx_emit((batch), (index), i915_mmio_reg_offset(reg))

/*
 * In this WA we need to set GEN8_L3SQCREG4[21:21] and reset it after
 * PIPE_CONTROL instruction. This is required for the flush to happen correctly
 * but there is a slight complication as this is applied in WA batch where the
 * values are only initialized once so we cannot take register value at the
 * beginning and reuse it further; hence we save its value to memory, upload a
 * constant value with bit21 set and then we restore it back with the saved value.
 * To simplify the WA, a constant value is formed by using the default value
 * of this register. This shouldn't be a problem because we are only modifying
 * it for a short period and this batch is non-preemptible. We can of course
 * use additional instructions that read the actual value of the register
 * at that time and set our bit of interest but it makes the WA complicated.
 *
 * This WA is also required for Gen9 so extracting as a function avoids
 * code duplication.
 */
static inline int gen8_emit_flush_coherentl3_wa(struct intel_engine_cs *engine,
                                                uint32_t *const batch,
                                                uint32_t index)
{
        struct drm_i915_private *dev_priv = engine->i915;
        uint32_t l3sqc4_flush = (0x40400000 | GEN8_LQSC_FLUSH_COHERENT_LINES);

        /*
         * WaDisableLSQCROPERFforOCL:skl,kbl
         * This WA is implemented in skl_init_clock_gating() but since
         * this batch updates GEN8_L3SQCREG4 with default value we need to
         * set this bit here to retain the WA during flush.
         */
        if (IS_SKL_REVID(dev_priv, 0, SKL_REVID_E0) ||
            IS_KBL_REVID(dev_priv, 0, KBL_REVID_E0))
                l3sqc4_flush |= GEN8_LQSC_RO_PERF_DIS;

        wa_ctx_emit(batch, index, (MI_STORE_REGISTER_MEM_GEN8 |
                                   MI_SRM_LRM_GLOBAL_GTT));
        wa_ctx_emit_reg(batch, index, GEN8_L3SQCREG4);
        wa_ctx_emit(batch, index, engine->scratch.gtt_offset + 256);
        wa_ctx_emit(batch, index, 0);

        wa_ctx_emit(batch, index, MI_LOAD_REGISTER_IMM(1));
        wa_ctx_emit_reg(batch, index, GEN8_L3SQCREG4);
        wa_ctx_emit(batch, index, l3sqc4_flush);

        wa_ctx_emit(batch, index, GFX_OP_PIPE_CONTROL(6));
        wa_ctx_emit(batch, index, (PIPE_CONTROL_CS_STALL |
                                   PIPE_CONTROL_DC_FLUSH_ENABLE));
        wa_ctx_emit(batch, index, 0);
        wa_ctx_emit(batch, index, 0);
        wa_ctx_emit(batch, index, 0);
        wa_ctx_emit(batch, index, 0);

        wa_ctx_emit(batch, index, (MI_LOAD_REGISTER_MEM_GEN8 |
                                   MI_SRM_LRM_GLOBAL_GTT));
        wa_ctx_emit_reg(batch, index, GEN8_L3SQCREG4);
        wa_ctx_emit(batch, index, engine->scratch.gtt_offset + 256);
        wa_ctx_emit(batch, index, 0);

        return index;
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
arun siluvery          218   87.55%        3      33.33%
mika kuoppala           19    7.63%        2      22.22%
jani nikula              5    2.01%        1      11.11%
ville syrjala            3    1.20%        1      11.11%
tvrtko ursulin           3    1.20%        1      11.11%
dave airlie              1    0.40%        1      11.11%
Total                  249  100.00%        9     100.00%
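
For reference, a user-space stand-in showing the contract wa_ctx_emit() relies on: append one dword per call and fail once the WA page is full. mock_wa_ctx_emit() is hypothetical; the real macro returns -ENOSPC from the enclosing function instead.

/* Illustrative only: the append-with-bounds-check behaviour of the
 * wa_ctx_emit() macro above, as a plain function. */
#include <stdint.h>
#include <stdio.h>

#define MOCK_PAGE_DWORDS 1024
#define MOCK_MI_NOOP 0

static int mock_wa_ctx_emit(uint32_t *batch, uint32_t *index, uint32_t cmd)
{
        if (*index >= MOCK_PAGE_DWORDS)
                return -1;      /* mirrors the -ENOSPC check in the macro */
        batch[(*index)++] = cmd;
        return 0;
}

int main(void)
{
        uint32_t batch[MOCK_PAGE_DWORDS];
        uint32_t index = 0;

        mock_wa_ctx_emit(batch, &index, MOCK_MI_NOOP);
        mock_wa_ctx_emit(batch, &index, MOCK_MI_NOOP);
        printf("emitted %u dwords\n", (unsigned)index);        /* 2 */
        return 0;
}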


static inline uint32_t wa_ctx_start(struct i915_wa_ctx_bb *wa_ctx,
                                    uint32_t offset,
                                    uint32_t start_alignment)
{
        return wa_ctx->offset = ALIGN(offset, start_alignment);
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
arun siluvery           20   66.67%        1      33.33%
oscar mateo             10   33.33%        2      66.67%
Total                   30  100.00%        3     100.00%


static inline int wa_ctx_end(struct i915_wa_ctx_bb *wa_ctx,
                             uint32_t offset,
                             uint32_t size_alignment)
{
        wa_ctx->size = offset - wa_ctx->offset;

        WARN(wa_ctx->size % size_alignment,
             "wa_ctx_bb failed sanity checks: size %d is not aligned to %d\n",
             wa_ctx->size, size_alignment);
        return 0;
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
arun siluvery           33   68.75%        1      33.33%
oscar mateo             13   27.08%        1      33.33%
thomas daniel            2    4.17%        1      33.33%
Total                   48  100.00%        3     100.00%

/**
 * gen8_init_indirectctx_bb() - initialize indirect ctx batch with WA
 *
 * @engine: only applicable for RCS
 * @wa_ctx: structure representing wa_ctx
 * @batch: page in which WA are loaded
 * @offset: This field specifies the start of the batch, it should be
 *  cache-aligned otherwise it is adjusted accordingly.
 *  Typically we only have one indirect_ctx and per_ctx batch buffer which are
 *  initialized at the beginning and shared across all contexts but this field
 *  helps us to have multiple batches at different offsets and select them based
 *  on a criteria. At the moment this batch always starts at the beginning of
 *  the page and we don't have multiple wa_ctx batch buffers yet.
 *
 * The number of WA applied is not known at the beginning; we use this field
 * to return the number of DWORDS written.
 *
 * It is to be noted that this batch does not contain MI_BATCH_BUFFER_END
 * so it adds NOOPs as padding to make it cacheline aligned.
 * MI_BATCH_BUFFER_END will be added to the perctx batch and both of them
 * together make a complete batch buffer.
 *
 * Return: non-zero if we exceed the PAGE_SIZE limit.
 */
static int gen8_init_indirectctx_bb(struct intel_engine_cs *engine,
                                    struct i915_wa_ctx_bb *wa_ctx,
                                    uint32_t *const batch,
                                    uint32_t *offset)
{
        uint32_t scratch_addr;
        uint32_t index = wa_ctx_start(wa_ctx, *offset, CACHELINE_DWORDS);

        /* WaDisableCtxRestoreArbitration:bdw,chv */
        wa_ctx_emit(batch, index, MI_ARB_ON_OFF | MI_ARB_DISABLE);

        /* WaFlushCoherentL3CacheLinesAtContextSwitch:bdw */
        if (IS_BROADWELL(engine->i915)) {
                int rc = gen8_emit_flush_coherentl3_wa(engine, batch, index);
                if (rc < 0)
                        return rc;
                index = rc;
        }

        /* WaClearSlmSpaceAtContextSwitch:bdw,chv */
        /* Actual scratch location is at 128 bytes offset */
        scratch_addr = engine->scratch.gtt_offset + 2*CACHELINE_BYTES;

        wa_ctx_emit(batch, index, GFX_OP_PIPE_CONTROL(6));
        wa_ctx_emit(batch, index, (PIPE_CONTROL_FLUSH_L3 |
                                   PIPE_CONTROL_GLOBAL_GTT_IVB |
                                   PIPE_CONTROL_CS_STALL |
                                   PIPE_CONTROL_QW_WRITE));
        wa_ctx_emit(batch, index, scratch_addr);
        wa_ctx_emit(batch, index, 0);
        wa_ctx_emit(batch, index, 0);
        wa_ctx_emit(batch, index, 0);

        /* Pad to end of cacheline */
        while (index % CACHELINE_DWORDS)
                wa_ctx_emit(batch, index, MI_NOOP);

        /*
         * MI_BATCH_BUFFER_END is not required in Indirect ctx BB because
         * execution depends on the length specified in terms of cache lines
         * in the register CTX_RCS_INDIRECT_CTX
         */

        return wa_ctx_end(wa_ctx, *offset = index, CACHELINE_DWORDS);
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
arun siluvery          159   79.90%        6      54.55%
oscar mateo             23   11.56%        1       9.09%
andrzej hajda           10    5.03%        1       9.09%
tvrtko ursulin           4    2.01%        1       9.09%
michel thierry           2    1.01%        1       9.09%
chris wilson             1    0.50%        1       9.09%
Total                  199  100.00%       11     100.00%

/**
 * gen8_init_perctx_bb() - initialize per ctx batch with WA
 *
 * @engine: only applicable for RCS
 * @wa_ctx: structure representing wa_ctx
 * @batch: page in which WA are loaded
 * @offset: This field specifies the start of this batch.
 *  This batch is started immediately after the indirect_ctx batch. Since we
 *  ensure that indirect_ctx ends on a cacheline this batch is aligned
 *  automatically.
 *
 * The number of DWORDS written is returned using this field.
 *
 * This batch is terminated with MI_BATCH_BUFFER_END and so we need not add
 * padding to align it with cacheline as padding after MI_BATCH_BUFFER_END is
 * redundant.
 */
static int gen8_init_perctx_bb(struct intel_engine_cs *engine,
                               struct i915_wa_ctx_bb *wa_ctx,
                               uint32_t *const batch,
                               uint32_t *offset)
{
        uint32_t index = wa_ctx_start(wa_ctx, *offset, CACHELINE_DWORDS);

        /* WaDisableCtxRestoreArbitration:bdw,chv */
        wa_ctx_emit(batch, index, MI_ARB_ON_OFF | MI_ARB_ENABLE);

        wa_ctx_emit(batch, index, MI_BATCH_BUFFER_END);

        return wa_ctx_end(wa_ctx, *offset = index, 1);
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
arun siluvery           58   80.56%        3      60.00%
damien lespiau          13   18.06%        1      20.00%
tvrtko ursulin           1    1.39%        1      20.00%
Total                   72  100.00%        5     100.00%


static int gen9_init_indirectctx_bb(struct intel_engine_cs *engine,
                                    struct i915_wa_ctx_bb *wa_ctx,
                                    uint32_t *const batch,
                                    uint32_t *offset)
{
        int ret;
        struct drm_i915_private *dev_priv = engine->i915;
        uint32_t index = wa_ctx_start(wa_ctx, *offset, CACHELINE_DWORDS);

        /* WaDisableCtxRestoreArbitration:skl,bxt */
        if (IS_SKL_REVID(dev_priv, 0, SKL_REVID_D0) ||
            IS_BXT_REVID(dev_priv, 0, BXT_REVID_A1))
                wa_ctx_emit(batch, index, MI_ARB_ON_OFF | MI_ARB_DISABLE);

        /* WaFlushCoherentL3CacheLinesAtContextSwitch:skl,bxt */
        ret = gen8_emit_flush_coherentl3_wa(engine, batch, index);
        if (ret < 0)
                return ret;
        index = ret;

        /* WaClearSlmSpaceAtContextSwitch:kbl */
        /* Actual scratch location is at 128 bytes offset */
        if (IS_KBL_REVID(dev_priv, 0, KBL_REVID_A0)) {
                uint32_t scratch_addr =
                        engine->scratch.gtt_offset + 2*CACHELINE_BYTES;

                wa_ctx_emit(batch, index, GFX_OP_PIPE_CONTROL(6));
                wa_ctx_emit(batch, index, (PIPE_CONTROL_FLUSH_L3 |
                                           PIPE_CONTROL_GLOBAL_GTT_IVB |
                                           PIPE_CONTROL_CS_STALL |
                                           PIPE_CONTROL_QW_WRITE));
                wa_ctx_emit(batch, index, scratch_addr);
                wa_ctx_emit(batch, index, 0);
                wa_ctx_emit(batch, index, 0);
                wa_ctx_emit(batch, index, 0);
        }

        /* WaMediaPoolStateCmdInWABB:bxt */
        if (HAS_POOLED_EU(engine->i915)) {
                /*
                 * EU pool configuration is setup along with golden context
                 * during context initialization. This value depends on
                 * device type (2x6 or 3x6) and needs to be updated based
                 * on which subslice is disabled especially for 2x6
                 * devices, however it is safe to load default
                 * configuration of 3x6 device instead of masking off
                 * corresponding bits because HW ignores bits of a disabled
                 * subslice and drops down to appropriate config. Please
                 * see render_state_setup() in i915_gem_render_state.c for
                 * possible configurations, to avoid duplication they are
                 * not shown here again.
                 */
                u32 eu_pool_config = 0x00777000;
                wa_ctx_emit(batch, index, GEN9_MEDIA_POOL_STATE);
                wa_ctx_emit(batch, index, GEN9_MEDIA_POOL_ENABLE);
                wa_ctx_emit(batch, index, eu_pool_config);
                wa_ctx_emit(batch, index, 0);
                wa_ctx_emit(batch, index, 0);
                wa_ctx_emit(batch, index, 0);
        }

        /* Pad to end of cacheline */
        while (index % CACHELINE_DWORDS)
                wa_ctx_emit(batch, index, MI_NOOP);

        return wa_ctx_end(wa_ctx, *offset = index, CACHELINE_DWORDS);
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
arun siluvery          113   37.54%        3      30.00%
mika kuoppala           99   32.89%        2      20.00%
tim gore                73   24.25%        2      20.00%
jani nikula             10    3.32%        1      10.00%
dave airlie              4    1.33%        1      10.00%
tvrtko ursulin           2    0.66%        1      10.00%
Total                  301  100.00%       10     100.00%


static int gen9_init_perctx_bb(struct intel_engine_cs *engine,
                               struct i915_wa_ctx_bb *wa_ctx,
                               uint32_t *const batch,
                               uint32_t *offset)
{
        uint32_t index = wa_ctx_start(wa_ctx, *offset, CACHELINE_DWORDS);

        /* WaSetDisablePixMaskCammingAndRhwoInCommonSliceChicken:skl,bxt */
        if (IS_SKL_REVID(engine->i915, 0, SKL_REVID_B0) ||
            IS_BXT_REVID(engine->i915, 0, BXT_REVID_A1)) {
                wa_ctx_emit(batch, index, MI_LOAD_REGISTER_IMM(1));
                wa_ctx_emit_reg(batch, index, GEN9_SLICE_COMMON_ECO_CHICKEN0);
                wa_ctx_emit(batch, index,
                            _MASKED_BIT_ENABLE(DISABLE_PIXEL_MASK_CAMMING));
                wa_ctx_emit(batch, index, MI_NOOP);
        }

        /* WaClearTdlStateAckDirtyBits:bxt */
        if (IS_BXT_REVID(engine->i915, 0, BXT_REVID_B0)) {
                wa_ctx_emit(batch, index, MI_LOAD_REGISTER_IMM(4));

                wa_ctx_emit_reg(batch, index, GEN8_STATE_ACK);
                wa_ctx_emit(batch, index,
                            _MASKED_BIT_DISABLE(GEN9_SUBSLICE_TDL_ACK_BITS));

                wa_ctx_emit_reg(batch, index, GEN9_STATE_ACK_SLICE1);
                wa_ctx_emit(batch, index,
                            _MASKED_BIT_DISABLE(GEN9_SUBSLICE_TDL_ACK_BITS));

                wa_ctx_emit_reg(batch, index, GEN9_STATE_ACK_SLICE2);
                wa_ctx_emit(batch, index,
                            _MASKED_BIT_DISABLE(GEN9_SUBSLICE_TDL_ACK_BITS));

                wa_ctx_emit_reg(batch, index, GEN7_ROW_CHICKEN2);
                /* dummy write to CS, mask bits are 0 to ensure the register
                 * is not modified */
                wa_ctx_emit(batch, index, 0x0);
                wa_ctx_emit(batch, index, MI_NOOP);
        }

        /* WaDisableCtxRestoreArbitration:skl,bxt */
        if (IS_SKL_REVID(engine->i915, 0, SKL_REVID_D0) ||
            IS_BXT_REVID(engine->i915, 0, BXT_REVID_A1))
                wa_ctx_emit(batch, index, MI_ARB_ON_OFF | MI_ARB_ENABLE);

        wa_ctx_emit(batch, index, MI_BATCH_BUFFER_END);

        return wa_ctx_end(wa_ctx, *offset = index, 1);
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
arun siluvery          129   45.42%        3      33.33%
tim gore               118   41.55%        2      22.22%
jani nikula             20    7.04%        1      11.11%
chris wilson            15    5.28%        1      11.11%
tvrtko ursulin           1    0.35%        1      11.11%
ville syrjala            1    0.35%        1      11.11%
Total                  284  100.00%        9     100.00%


static int lrc_setup_wa_ctx_obj(struct intel_engine_cs *engine, u32 size)
{
        int ret;

        engine->wa_ctx.obj = i915_gem_object_create(&engine->i915->drm,
                                                    PAGE_ALIGN(size));
        if (IS_ERR(engine->wa_ctx.obj)) {
                DRM_DEBUG_DRIVER("alloc LRC WA ctx backing obj failed.\n");
                ret = PTR_ERR(engine->wa_ctx.obj);
                engine->wa_ctx.obj = NULL;
                return ret;
        }

        ret = i915_gem_obj_ggtt_pin(engine->wa_ctx.obj, PAGE_SIZE, 0);
        if (ret) {
                DRM_DEBUG_DRIVER("pin LRC WA ctx backing obj failed: %d\n",
                                 ret);
                drm_gem_object_unreference(&engine->wa_ctx.obj->base);
                return ret;
        }

        return 0;
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
arun siluvery           90   72.58%        1      16.67%
chris wilson            27   21.77%        3      50.00%
tvrtko ursulin           6    4.84%        1      16.67%
dave gordon              1    0.81%        1      16.67%
Total                  124  100.00%        6     100.00%


static void lrc_destroy_wa_ctx_obj(struct intel_engine_cs *engine)
{
        if (engine->wa_ctx.obj) {
                i915_gem_object_ggtt_unpin(engine->wa_ctx.obj);
                drm_gem_object_unreference(&engine->wa_ctx.obj->base);
                engine->wa_ctx.obj = NULL;
        }
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
arun siluvery           45   90.00%        1      50.00%
tvrtko ursulin           5   10.00%        1      50.00%
Total                   50  100.00%        2     100.00%


static int intel_init_workaround_bb(struct intel_engine_cs *engine)
{
        int ret;
        uint32_t *batch;
        uint32_t offset;
        struct page *page;
        struct i915_ctx_workarounds *wa_ctx = &engine->wa_ctx;

        WARN_ON(engine->id != RCS);

        /* update this when WA for higher Gen are added */
        if (INTEL_GEN(engine->i915) > 9) {
                DRM_ERROR("WA batch buffer is not initialized for Gen%d\n",
                          INTEL_GEN(engine->i915));
                return 0;
        }

        /* some WA perform writes to scratch page, ensure it is valid */
        if (engine->scratch.obj == NULL) {
                DRM_ERROR("scratch page not allocated for %s\n", engine->name);
                return -EINVAL;
        }

        ret = lrc_setup_wa_ctx_obj(engine, PAGE_SIZE);
        if (ret) {
                DRM_DEBUG_DRIVER("Failed to setup context WA page: %d\n", ret);
                return ret;
        }

        page = i915_gem_object_get_dirty_page(wa_ctx->obj, 0);
        batch = kmap_atomic(page);
        offset = 0;

        if (IS_GEN8(engine->i915)) {
                ret = gen8_init_indirectctx_bb(engine,
                                               &wa_ctx->indirect_ctx,
                                               batch,
                                               &offset);
                if (ret)
                        goto out;

                ret = gen8_init_perctx_bb(engine,
                                          &wa_ctx->per_ctx,
                                          batch,
                                          &offset);
                if (ret)
                        goto out;
        } else if (IS_GEN9(engine->i915)) {
                ret = gen9_init_indirectctx_bb(engine,
                                               &wa_ctx->indirect_ctx,
                                               batch,
                                               &offset);
                if (ret)
                        goto out;

                ret = gen9_init_perctx_bb(engine,
                                          &wa_ctx->per_ctx,
                                          batch,
                                          &offset);
                if (ret)
                        goto out;
        }

out:
        kunmap_atomic(batch);
        if (ret)
                lrc_destroy_wa_ctx_obj(engine);

        return ret;
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
arun siluvery          261   91.58%        4      57.14%
tvrtko ursulin          15    5.26%        1      14.29%
chris wilson             8    2.81%        1      14.29%
dave gordon              1    0.35%        1      14.29%
Total                  285  100.00%        7     100.00%


static void lrc_init_hws(struct intel_engine_cs *engine)
{
        struct drm_i915_private *dev_priv = engine->i915;

        I915_WRITE(RING_HWS_PGA(engine->mmio_base),
                   (u32)engine->status_page.gfx_addr);
        POSTING_READ(RING_HWS_PGA(engine->mmio_base));
}

Contributors

Person              Tokens     Prop  Commits  CommitProp
tvrtko ursulin          48   97.96%        1      50.00%
chris wilson             1    2.04%        1      50.00%
Total                   49  100.00%        2     100.00%


static int gen8_init_common_ring(struct intel_engine_cs *engine)
{
        struct drm_i915_private *dev_priv = engine->i915;
        unsigned int next_context_status_buffer_hw;

        lrc_init_hws(engine);

        I915_WRITE_IMR(engine,
                       ~(engine->irq_enable_mask | engine->irq_keep_mask));
        I915_WRITE(RING_HWSTAM(engine->mmio_base), 0xffffffff);

        I915_WRITE(RING_MODE_GEN7(engine),
                   _MASKED_BIT_DISABLE(GFX_REPLAY_MODE) |
                   _MASKED_BIT_ENABLE(GFX_RUN_LIST_ENABLE));
        POSTING_READ(RING_MODE_GEN7(engine));

        /*
         * Instead of resetting the Context Status Buffer (CSB) read pointer to
         * zero, we need to read the write pointer from hardware and use its
         * value because "this register is power context save restored".
         * Effectively, these states have been observed:
         *
         *      | Suspend-to-idle (freeze) | Suspend-to-RAM (mem) |
         * BDW  | CSB regs not reset       | CSB regs reset       |
         * CHT  | CSB regs not reset       | CSB regs not reset   |
         * SKL  |         ?                |         ?            |
         * BXT  |         ?                |         ?            |
         */
        next_context_status_buffer_hw =
                GEN8_CSB_WRITE_PTR(I915_READ(RING_CONTEXT_STATUS_PTR(engine)));

        /*
         * When the CSB registers are reset (also after power-up / gpu reset),
         * CSB write pointer is set to all 1's, which is not valid, use '5' in
         * this special case, so the first element read is CSB[0].
         */
        if (next_context_status_buffer_hw == GEN8_CSB_PTR_MASK)
                next_context_status_buffer_hw = (GEN8_CSB_ENTRIES - 1);

        engine->next_context_status_buffer = next_context_status_buffer_hw;
        DRM_DEBUG_DRIVER("Execlists enabled for %s\n", engine->name);

        intel_engine_init_hangcheck(engine);

        return intel_mocs_init_engine(engine);
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
arun siluvery             82   59.42%        1      10.00%
michel thierry            27   19.57%        1      10.00%
tvrtko ursulin            15   10.87%        3      30.00%
ben widawsky               4    2.90%        1      10.00%
peter antoine              4    2.90%        1      10.00%
nicholas hoath             3    2.17%        1      10.00%
chris wilson               2    1.45%        1      10.00%
tomas elf                  1    0.72%        1      10.00%
Total                    138  100.00%       10     100.00%
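
The CSB write-pointer normalization in gen8_init_common_ring() is subtle: after power-up or reset the pointer field reads back as all 1's, which is not a valid slot index. Below is a minimal standalone sketch of the same arithmetic; GEN8_CSB_ENTRIES and GEN8_CSB_PTR_MASK are assumed values chosen so the example compiles, not quoted from an i915 header.

#include <stdint.h>
#include <stdio.h>

#define GEN8_CSB_ENTRIES  6          /* assumed: six CSB slots */
#define GEN8_CSB_PTR_MASK 0x07       /* assumed: 3-bit pointer field */

static unsigned int normalize_csb_write_ptr(uint32_t hw_value)
{
        unsigned int ptr = hw_value & GEN8_CSB_PTR_MASK;

        /* All 1's means "reset"; start so the first element read is CSB[0]. */
        if (ptr == GEN8_CSB_PTR_MASK)
                ptr = GEN8_CSB_ENTRIES - 1;

        return ptr;
}

int main(void)
{
        printf("%u\n", normalize_csb_write_ptr(0x7));  /* prints 5 */
        printf("%u\n", normalize_csb_write_ptr(0x3));  /* prints 3 */
        return 0;
}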


static int gen8_init_render_ring(struct intel_engine_cs *engine)
{
        struct drm_i915_private *dev_priv = engine->i915;
        int ret;

        ret = gen8_init_common_ring(engine);
        if (ret)
                return ret;

        /* We need to disable the AsyncFlip performance optimisations in order
         * to use MI_WAIT_FOR_EVENT within the CS. It should already be
         * programmed to '1' on all products.
         *
         * WaDisableAsyncFlipPerfMode:snb,ivb,hsw,vlv,bdw,chv
         */
        I915_WRITE(MI_MODE, _MASKED_BIT_ENABLE(ASYNC_FLIP_PERF_DISABLE));

        I915_WRITE(INSTPM, _MASKED_BIT_ENABLE(INSTPM_FORCE_ORDERING));

        return init_workarounds_ring(engine);
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
arun siluvery             59   92.19%        1      33.33%
tvrtko ursulin             3    4.69%        1      33.33%
chris wilson               2    3.12%        1      33.33%
Total                     64  100.00%        3     100.00%


static int gen9_init_render_ring(struct intel_engine_cs *engine)
{
        int ret;

        ret = gen8_init_common_ring(engine);
        if (ret)
                return ret;

        return init_workarounds_ring(engine);
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
arun siluvery             27   79.41%        1      33.33%
damien lespiau             4   11.76%        1      33.33%
tvrtko ursulin             3    8.82%        1      33.33%
Total                     34  100.00%        3     100.00%


static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
{
        struct i915_hw_ppgtt *ppgtt = req->ctx->ppgtt;
        struct intel_engine_cs *engine = req->engine;
        struct intel_ringbuffer *ringbuf = req->ringbuf;
        const int num_lri_cmds = GEN8_LEGACY_PDPES * 2;
        int i, ret;

        ret = intel_ring_begin(req, num_lri_cmds * 2 + 2);
        if (ret)
                return ret;

        intel_logical_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(num_lri_cmds));
        for (i = GEN8_LEGACY_PDPES - 1; i >= 0; i--) {
                const dma_addr_t pd_daddr = i915_page_dir_dma_addr(ppgtt, i);

                intel_logical_ring_emit_reg(ringbuf,
                                            GEN8_RING_PDP_UDW(engine, i));
                intel_logical_ring_emit(ringbuf, upper_32_bits(pd_daddr));
                intel_logical_ring_emit_reg(ringbuf,
                                            GEN8_RING_PDP_LDW(engine, i));
                intel_logical_ring_emit(ringbuf, lower_32_bits(pd_daddr));
        }

        intel_logical_ring_emit(ringbuf, MI_NOOP);
        intel_logical_ring_advance(ringbuf);

        return 0;
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
michel thierry           163   95.88%        1      20.00%
tvrtko ursulin             4    2.35%        2      40.00%
ville syrjala              2    1.18%        1      20.00%
chris wilson               1    0.59%        1      20.00%
Total                    170  100.00%        5     100.00%
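
The dword budget passed to intel_ring_begin() above follows directly from the command layout: one MI_LOAD_REGISTER_IMM header, a (register, value) pair per write, and a trailing MI_NOOP. A compilable sketch, assuming GEN8_LEGACY_PDPES is 4:

#include <stdio.h>

#define GEN8_LEGACY_PDPES 4     /* assumed: four page-directory pointers */

int main(void)
{
        /* each PDP needs an upper and a lower register write */
        int num_lri_cmds = GEN8_LEGACY_PDPES * 2;
        /* 1 LRI header + a (reg, value) pair per write + 1 MI_NOOP */
        int dwords = 1 + num_lri_cmds * 2 + 1;

        printf("%d dwords reserved\n", dwords);  /* 18 == num_lri_cmds * 2 + 2 */
        return 0;
}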


static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
                              u64 offset, unsigned dispatch_flags)
{
        struct intel_ringbuffer *ringbuf = req->ringbuf;
        bool ppgtt = !(dispatch_flags & I915_DISPATCH_SECURE);
        int ret;

        /* Don't rely on hw updating PDPs, especially in lite-restore.
         * Ideally, we should set Force PD Restore in ctx descriptor,
         * but we can't. Force Restore would be a second option, but
         * it is unsafe in case of lite-restore (because the ctx is
         * not idle). PML4 is allocated during ppgtt init so this is
         * not needed in 48-bit.
         */
        if (req->ctx->ppgtt &&
            (intel_engine_flag(req->engine) & req->ctx->ppgtt->pd_dirty_rings)) {
                if (!USES_FULL_48BIT_PPGTT(req->i915) &&
                    !intel_vgpu_active(req->i915)) {
                        ret = intel_logical_ring_emit_pdps(req);
                        if (ret)
                                return ret;
                }

                req->ctx->ppgtt->pd_dirty_rings &= ~intel_engine_flag(req->engine);
        }

        ret = intel_ring_begin(req, 4);
        if (ret)
                return ret;

        /* FIXME(BDW): Address space and security selectors. */
        intel_logical_ring_emit(ringbuf, MI_BATCH_BUFFER_START_GEN8 |
                                (ppgtt << 8) |
                                (dispatch_flags & I915_DISPATCH_RS ?
                                 MI_BATCH_RESOURCE_STREAMER : 0));
        intel_logical_ring_emit(ringbuf, lower_32_bits(offset));
        intel_logical_ring_emit(ringbuf, upper_32_bits(offset));
        intel_logical_ring_emit(ringbuf, MI_NOOP);
        intel_logical_ring_advance(ringbuf);

        return 0;
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
oscar mateo               86   44.79%        1       9.09%
michel thierry            66   34.38%        2      18.18%
john harrison             14    7.29%        2      18.18%
abdiel janulgue           10    5.21%        1       9.09%
zhiyuan lv                 8    4.17%        1       9.09%
tvrtko ursulin             4    2.08%        2      18.18%
nicholas hoath             3    1.56%        1       9.09%
chris wilson               1    0.52%        1       9.09%
Total                    192  100.00%       11     100.00%
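
For illustration, the first batch-buffer-start dword assembled above combines the opcode with a PPGTT/GGTT address-space bit and an optional resource-streamer bit. A hedged sketch follows; the MI_* encodings here are placeholder assumptions, not quoted from the driver's headers.

#include <stdint.h>
#include <stdio.h>

#define MI_BATCH_BUFFER_START_GEN8 ((0x31u << 23) | 1)  /* assumed encoding */
#define MI_BATCH_RESOURCE_STREAMER (1u << 10)           /* assumed bit */

static uint32_t bb_start_dword(int ppgtt, int rs)
{
        return MI_BATCH_BUFFER_START_GEN8 |
               ((uint32_t)ppgtt << 8) |          /* PPGTT vs. GGTT addressing */
               (rs ? MI_BATCH_RESOURCE_STREAMER : 0);
}

int main(void)
{
        printf("0x%08x\n", bb_start_dword(1, 0));
        return 0;
}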


static void gen8_logical_ring_enable_irq(struct intel_engine_cs *engine)
{
        struct drm_i915_private *dev_priv = engine->i915;

        I915_WRITE_IMR(engine,
                       ~(engine->irq_enable_mask | engine->irq_keep_mask));
        POSTING_READ_FW(RING_IMR(engine->mmio_base));
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
oscar mateo               36   78.26%        1      25.00%
tvrtko ursulin             5   10.87%        1      25.00%
chris wilson               5   10.87%        2      50.00%
Total                     46  100.00%        4     100.00%


static void gen8_logical_ring_disable_irq(struct intel_engine_cs *engine)
{
        struct drm_i915_private *dev_priv = engine->i915;

        I915_WRITE_IMR(engine, ~engine->irq_keep_mask);
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
oscar mateo               24   80.00%        1      25.00%
tvrtko ursulin             3   10.00%        1      25.00%
chris wilson               3   10.00%        2      50.00%
Total                     30  100.00%        4     100.00%


static int gen8_emit_flush(struct drm_i915_gem_request *request,
                           u32 invalidate_domains,
                           u32 unused)
{
        struct intel_ringbuffer *ringbuf = request->ringbuf;
        struct intel_engine_cs *engine = ringbuf->engine;
        struct drm_i915_private *dev_priv = request->i915;
        uint32_t cmd;
        int ret;

        ret = intel_ring_begin(request, 4);
        if (ret)
                return ret;

        cmd = MI_FLUSH_DW + 1;

        /* We always require a command barrier so that subsequent
         * commands, such as breadcrumb interrupts, are strictly ordered
         * wrt the contents of the write cache being flushed to memory
         * (and thus being coherent from the CPU).
         */
        cmd |= MI_FLUSH_DW_STORE_INDEX | MI_FLUSH_DW_OP_STOREDW;

        if (invalidate_domains & I915_GEM_GPU_DOMAINS) {
                cmd |= MI_INVALIDATE_TLB;
                if (engine == &dev_priv->engine[VCS])
                        cmd |= MI_INVALIDATE_BSD;
        }

        intel_logical_ring_emit(ringbuf, cmd);
        intel_logical_ring_emit(ringbuf,
                                I915_GEM_HWS_SCRATCH_ADDR |
                                MI_FLUSH_DW_USE_GTT);
        intel_logical_ring_emit(ringbuf, 0); /* upper addr */
        intel_logical_ring_emit(ringbuf, 0); /* value */
        intel_logical_ring_advance(ringbuf);

        return 0;
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
oscar mateo              110   74.83%        1      12.50%
chris wilson              18   12.24%        3      37.50%
john harrison             12    8.16%        1      12.50%
tvrtko ursulin             4    2.72%        2      25.00%
nicholas hoath             3    2.04%        1      12.50%
Total                    147  100.00%        8     100.00%


static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
                                  u32 invalidate_domains,
                                  u32 flush_domains)
{
        struct intel_ringbuffer *ringbuf = request->ringbuf;
        struct intel_engine_cs *engine = ringbuf->engine;
        u32 scratch_addr = engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
        bool vf_flush_wa = false, dc_flush_wa = false;
        u32 flags = 0;
        int ret;
        int len;

        flags |= PIPE_CONTROL_CS_STALL;

        if (flush_domains) {
                flags |= PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH;
                flags |= PIPE_CONTROL_DEPTH_CACHE_FLUSH;
                flags |= PIPE_CONTROL_DC_FLUSH_ENABLE;
                flags |= PIPE_CONTROL_FLUSH_ENABLE;
        }

        if (invalidate_domains) {
                flags |= PIPE_CONTROL_TLB_INVALIDATE;
                flags |= PIPE_CONTROL_INSTRUCTION_CACHE_INVALIDATE;
                flags |= PIPE_CONTROL_TEXTURE_CACHE_INVALIDATE;
                flags |= PIPE_CONTROL_VF_CACHE_INVALIDATE;
                flags |= PIPE_CONTROL_CONST_CACHE_INVALIDATE;
                flags |= PIPE_CONTROL_STATE_CACHE_INVALIDATE;
                flags |= PIPE_CONTROL_QW_WRITE;
                flags |= PIPE_CONTROL_GLOBAL_GTT_IVB;

                /*
                 * On GEN9: before VF_CACHE_INVALIDATE we need to emit a NULL
                 * pipe control.
                 */
                if (IS_GEN9(request->i915))
                        vf_flush_wa = true;

                /* WaForGAMHang:kbl */
                if (IS_KBL_REVID(request->i915, 0, KBL_REVID_B0))
                        dc_flush_wa = true;
        }

        len = 6;

        if (vf_flush_wa)
                len += 6;

        if (dc_flush_wa)
                len += 12;

        ret = intel_ring_begin(request, len);
        if (ret)
                return ret;

        if (vf_flush_wa) {
                intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
                intel_logical_ring_emit(ringbuf, 0);
                intel_logical_ring_emit(ringbuf, 0);
                intel_logical_ring_emit(ringbuf, 0);
                intel_logical_ring_emit(ringbuf, 0);
                intel_logical_ring_emit(ringbuf, 0);
        }

        if (dc_flush_wa) {
                intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
                intel_logical_ring_emit(ringbuf, PIPE_CONTROL_DC_FLUSH_ENABLE);
                intel_logical_ring_emit(ringbuf, 0);
                intel_logical_ring_emit(ringbuf, 0);
                intel_logical_ring_emit(ringbuf, 0);
                intel_logical_ring_emit(ringbuf, 0);
        }

        intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
        intel_logical_ring_emit(ringbuf, flags);
        intel_logical_ring_emit(ringbuf, scratch_addr);
        intel_logical_ring_emit(ringbuf, 0);
        intel_logical_ring_emit(ringbuf, 0);
        intel_logical_ring_emit(ringbuf, 0);

        if (dc_flush_wa) {
                intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
                intel_logical_ring_emit(ringbuf, PIPE_CONTROL_CS_STALL);
                intel_logical_ring_emit(ringbuf, 0);
                intel_logical_ring_emit(ringbuf, 0);
                intel_logical_ring_emit(ringbuf, 0);
                intel_logical_ring_emit(ringbuf, 0);
        }

        intel_logical_ring_advance(ringbuf);

        return 0;
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
oscar mateo              160   39.41%        1       8.33%
mika kuoppala            148   36.45%        1       8.33%
imre deak                 58   14.29%        1       8.33%
john harrison             12    2.96%        1       8.33%
ben widawsky              11    2.71%        1       8.33%
chris wilson               7    1.72%        3      25.00%
francisco jerez            4    0.99%        1       8.33%
tvrtko ursulin             3    0.74%        2      16.67%
nicholas hoath             3    0.74%        1       8.33%
Total                    406  100.00%       12     100.00%
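
The length budget in gen8_emit_flush_render() is worth spelling out: each GFX_OP_PIPE_CONTROL(6) occupies six dwords, so the two optional workarounds grow the reservation by 6 and 12 dwords respectively. A small sketch of the same computation:

#include <stdbool.h>
#include <stdio.h>

static int flush_render_len(bool vf_flush_wa, bool dc_flush_wa)
{
        int len = 6;                 /* the main PIPE_CONTROL */
        if (vf_flush_wa)
                len += 6;            /* NULL PIPE_CONTROL before VF invalidate */
        if (dc_flush_wa)
                len += 12;           /* DC flush before + CS stall after */
        return len;
}

int main(void)
{
        printf("%d %d %d\n",
               flush_render_len(false, false),   /* 6  */
               flush_render_len(true, false),    /* 12 */
               flush_render_len(true, true));    /* 24 */
        return 0;
}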


static void bxt_a_seqno_barrier(struct intel_engine_cs *engine)
{
        /*
         * On BXT A steppings there is a HW coherency issue whereby the
         * MI_STORE_DATA_IMM storing the completed request's seqno
         * occasionally doesn't invalidate the CPU cache. Work around this by
         * clflushing the corresponding cacheline whenever the caller wants
         * the coherency to be guaranteed. Note that this cacheline is known
         * to be clean at this point, since we only write it in
         * bxt_a_set_seqno(), where we also do a clflush after the write. So
         * this clflush in practice becomes an invalidate operation.
         */
        intel_flush_status_page(engine, I915_GEM_HWS_INDEX);
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
imre deak                 15   78.95%        1      33.33%
chris wilson               2   10.53%        1      33.33%
tvrtko ursulin             2   10.53%        1      33.33%
Total                     19  100.00%        3     100.00%

/*
 * Reserve space for 2 NOOPs at the end of each request to be
 * used as a workaround for not being allowed to do lite
 * restore with HEAD==TAIL (WaIdleLiteRestore).
 */
#define WA_TAIL_DWORDS 2
static int gen8_emit_request(struct drm_i915_gem_request *request)
{
        struct intel_ringbuffer *ringbuf = request->ringbuf;
        int ret;

        ret = intel_ring_begin(request, 6 + WA_TAIL_DWORDS);
        if (ret)
                return ret;

        /* w/a: bit 5 needs to be zero for MI_FLUSH_DW address. */
        BUILD_BUG_ON(I915_GEM_HWS_INDEX_ADDR & (1 << 5));

        intel_logical_ring_emit(ringbuf,
                                (MI_FLUSH_DW + 1) | MI_FLUSH_DW_OP_STOREDW);
        intel_logical_ring_emit(ringbuf,
                                intel_hws_seqno_address(request->engine) |
                                MI_FLUSH_DW_USE_GTT);
        intel_logical_ring_emit(ringbuf, 0);
        intel_logical_ring_emit(ringbuf, request->seqno);
        intel_logical_ring_emit(ringbuf, MI_USER_INTERRUPT);
        intel_logical_ring_emit(ringbuf, MI_NOOP);
        return intel_logical_ring_advance_and_submit(request);
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
oscar mateo               67   57.76%        1      11.11%
chris wilson              33   28.45%        4      44.44%
john harrison              9    7.76%        1      11.11%
nicholas hoath             6    5.17%        2      22.22%
tvrtko ursulin             1    0.86%        1      11.11%
Total                    116  100.00%        9     100.00%
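
Note how the reservation above over-allocates by WA_TAIL_DWORDS: six dwords are emitted for the breadcrumb itself, and the two spare NOOP slots are what allow the submission path to bump the tail for WaIdleLiteRestore. A trivial sketch of that arithmetic:

#include <stdio.h>

#define WA_TAIL_DWORDS 2    /* from the listing above */

int main(void)
{
        int emitted = 6;                          /* flush + seqno + interrupt + pad */
        int reserved = emitted + WA_TAIL_DWORDS;  /* what intel_ring_begin() asks for */

        printf("reserve %d, emit %d, spare %d\n",
               reserved, emitted, reserved - emitted);
        return 0;
}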


static int gen8_emit_request_render(struct drm_i915_gem_request *request)
{
        struct intel_ringbuffer *ringbuf = request->ringbuf;
        int ret;

        ret = intel_ring_begin(request, 8 + WA_TAIL_DWORDS);
        if (ret)
                return ret;

        /* We're using qword write, seqno should be aligned to 8 bytes. */
        BUILD_BUG_ON(I915_GEM_HWS_INDEX & 1);

        /* w/a for post sync ops following a GPGPU operation we
         * need a prior CS_STALL, which is emitted by the flush
         * following the batch.
         */
        intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
        intel_logical_ring_emit(ringbuf,
                                (PIPE_CONTROL_GLOBAL_GTT_IVB |
                                 PIPE_CONTROL_CS_STALL |
                                 PIPE_CONTROL_QW_WRITE));
        intel_logical_ring_emit(ringbuf, intel_hws_seqno_address(request->engine));
        intel_logical_ring_emit(ringbuf, 0);
        intel_logical_ring_emit(ringbuf, i915_gem_request_get_seqno(request));
        /* We're thrashing one dword of HWS. */
        intel_logical_ring_emit(ringbuf, 0);
        intel_logical_ring_emit(ringbuf, MI_USER_INTERRUPT);
        intel_logical_ring_emit(ringbuf, MI_NOOP);
        return intel_logical_ring_advance_and_submit(request);
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
chris wilson              84   64.62%        3      42.86%
michal winiarski          25   19.23%        1      14.29%
michel thierry            16   12.31%        1      14.29%
oscar mateo                4    3.08%        1      14.29%
tvrtko ursulin             1    0.77%        1      14.29%
Total                    130  100.00%        7     100.00%


static int intel_lr_context_render_state_init(struct drm_i915_gem_request *req)
{
        struct render_state so;
        int ret;

        ret = i915_gem_render_state_prepare(req->engine, &so);
        if (ret)
                return ret;

        if (so.rodata == NULL)
                return 0;

        ret = req->engine->emit_bb_start(req, so.ggtt_offset,
                                         I915_DISPATCH_SECURE);
        if (ret)
                goto out;

        ret = req->engine->emit_bb_start(req,
                                         (so.ggtt_offset + so.aux_batch_offset),
                                         I915_DISPATCH_SECURE);
        if (ret)
                goto out;

        i915_vma_move_to_active(i915_gem_obj_to_ggtt(so.obj), req);

out:
        i915_gem_render_state_fini(&so);
        return ret;
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
damien lespiau            85   68.00%        1      25.00%
arun siluvery             29   23.20%        1      25.00%
john harrison              8    6.40%        1      25.00%
tvrtko ursulin             3    2.40%        1      25.00%
Total                    125  100.00%        4     100.00%


static int gen8_init_rcs_context(struct drm_i915_gem_request *req)
{
        int ret;

        ret = intel_logical_ring_workarounds_emit(req);
        if (ret)
                return ret;

        ret = intel_rcs_context_init_mocs(req);
        /*
         * Failing to program the MOCS is non-fatal. The system will not
         * run at peak performance. So generate an error and carry on.
         */
        if (ret)
                DRM_ERROR("MOCS failed to program: expect performance issues.\n");

        return intel_lr_context_render_state_init(req);
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
thomas daniel             30   58.82%        1      33.33%
peter antoine             17   33.33%        1      33.33%
john harrison              4    7.84%        1      33.33%
Total                     51  100.00%        3     100.00%

/**
 * intel_logical_ring_cleanup() - deallocate the Engine Command Streamer
 * @engine: Engine Command Streamer.
 */
void intel_logical_ring_cleanup(struct intel_engine_cs *engine)
{
        struct drm_i915_private *dev_priv;

        if (!intel_engine_initialized(engine))
                return;

        /*
         * Tasklet cannot be active at this point due to intel_mark_active/idle
         * so this is just for documentation.
         */
        if (WARN_ON(test_bit(TASKLET_STATE_SCHED, &engine->irq_tasklet.state)))
                tasklet_kill(&engine->irq_tasklet);

        dev_priv = engine->i915;

        if (engine->buffer) {
                intel_logical_ring_stop(engine);
                WARN_ON((I915_READ_MODE(engine) & MODE_IDLE) == 0);
        }

        if (engine->cleanup)
                engine->cleanup(engine);

        i915_cmd_parser_fini_ring(engine);
        i915_gem_batch_pool_fini(&engine->batch_pool);
        intel_engine_fini_breadcrumbs(engine);

        if (engine->status_page.obj) {
                i915_gem_object_unpin_map(engine->status_page.obj);
                engine->status_page.obj = NULL;
        }
        intel_lr_context_unpin(dev_priv->kernel_context, engine);

        engine->idle_lite_restore_wa = 0;
        engine->disable_lite_restore_wa = false;
        engine->ctx_desc_template = 0;

        lrc_destroy_wa_ctx_obj(engine);
        engine->i915 = NULL;
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
oscar mateo               75   41.90%        3      18.75%
tvrtko ursulin            62   34.64%        6      37.50%
chris wilson              23   12.85%        4      25.00%
dave gordon               11    6.15%        1       6.25%
arun siluvery              4    2.23%        1       6.25%
john harrison              4    2.23%        1       6.25%
Total                    179  100.00%       16     100.00%


static void logical_ring_default_vfuncs(struct intel_engine_cs *engine)
{
        /* Default vfuncs which can be overridden by each engine. */
        engine->init_hw = gen8_init_common_ring;
        engine->emit_request = gen8_emit_request;
        engine->emit_flush = gen8_emit_flush;
        engine->irq_enable = gen8_logical_ring_enable_irq;
        engine->irq_disable = gen8_logical_ring_disable_irq;
        engine->emit_bb_start = gen8_emit_bb_start;
        if (IS_BXT_REVID(engine->i915, 0, BXT_REVID_A1))
                engine->irq_seqno_barrier = bxt_a_seqno_barrier;
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
tvrtko ursulin            58   86.57%        2      33.33%
chris wilson               9   13.43%        4      66.67%
Total                     67  100.00%        6     100.00%


static inline void
logical_ring_default_irqs(struct intel_engine_cs *engine, unsigned shift)
{
        engine->irq_enable_mask = GT_RENDER_USER_INTERRUPT << shift;
        engine->irq_keep_mask = GT_CONTEXT_SWITCH_INTERRUPT << shift;
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
tvrtko ursulin            30   96.77%        2      66.67%
oscar mateo                1    3.23%        1      33.33%
Total                     31  100.00%        3     100.00%


static int
lrc_setup_hws(struct intel_engine_cs *engine,
              struct drm_i915_gem_object *dctx_obj)
{
        void *hws;

        /* The HWSP is part of the default context object in LRC mode. */
        engine->status_page.gfx_addr = i915_gem_obj_ggtt_offset(dctx_obj) +
                                       LRC_PPHWSP_PN * PAGE_SIZE;
        hws = i915_gem_object_pin_map(dctx_obj);
        if (IS_ERR(hws))
                return PTR_ERR(hws);
        engine->status_page.page_addr = hws + LRC_PPHWSP_PN * PAGE_SIZE;
        engine->status_page.obj = dctx_obj;

        return 0;
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
tvrtko ursulin            79  100.00%        3     100.00%
Total                     79  100.00%        3     100.00%


static int logical_ring_init(struct intel_engine_cs *engine)
{
        struct i915_gem_context *dctx = engine->i915->kernel_context;
        int ret;

        ret = intel_engine_init_breadcrumbs(engine);
        if (ret)
                goto error;

        ret = i915_cmd_parser_init_ring(engine);
        if (ret)
                goto error;

        ret = execlists_context_deferred_alloc(dctx, engine);
        if (ret)
                goto error;

        /* As this is the default context, always pin it */
        ret = intel_lr_context_pin(dctx, engine);
        if (ret) {
                DRM_ERROR("Failed to pin context for %s: %d\n",
                          engine->name, ret);
                goto error;
        }

        /* And setup the hardware status page. */
        ret = lrc_setup_hws(engine, dctx->engine[engine->id].state);
        if (ret) {
                DRM_ERROR("Failed to set up hws %s: %d\n", engine->name, ret);
                goto error;
        }

        return 0;

error:
        intel_logical_ring_cleanup(engine);
        return ret;
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
tvrtko ursulin            60   39.74%        6      35.29%
chris wilson              39   25.83%        6      35.29%
nicholas hoath            24   15.89%        1       5.88%
oscar mateo               16   10.60%        2      11.76%
dave gordon               12    7.95%        2      11.76%
Total                    151  100.00%       17     100.00%


static int logical_render_ring_init(struct intel_engine_cs *engine)
{
        struct drm_i915_private *dev_priv = engine->i915;
        int ret;

        if (HAS_L3_DPF(dev_priv))
                engine->irq_keep_mask |= GT_RENDER_L3_PARITY_ERROR_INTERRUPT;

        /* Override some for render ring. */
        if (INTEL_GEN(dev_priv) >= 9)
                engine->init_hw = gen9_init_render_ring;
        else
                engine->init_hw = gen8_init_render_ring;
        engine->init_context = gen8_init_rcs_context;
        engine->cleanup = intel_fini_pipe_control;
        engine->emit_flush = gen8_emit_flush_render;
        engine->emit_request = gen8_emit_request_render;

        ret = intel_init_pipe_control(engine, 4096);
        if (ret)
                return ret;

        ret = intel_init_workaround_bb(engine);
        if (ret) {
                /*
                 * We continue even if we fail to initialize WA batch
                 * because we only expect rare glitches but nothing
                 * critical to prevent us from using GPU
                 */
                DRM_ERROR("WA batch buffer initialization failed: %d\n", ret);
        }

        ret = logical_ring_init(engine);
        if (ret) {
                lrc_destroy_wa_ctx_obj(engine);
        }

        return ret;
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
tvrtko ursulin           135   95.74%        2      50.00%
nicholas hoath             4    2.84%        1      25.00%
chris wilson               2    1.42%        1      25.00%
Total                    141  100.00%        4     100.00%

static const struct logical_ring_info {
        const char *name;
        unsigned exec_id;
        unsigned guc_id;
        u32 mmio_base;
        unsigned irq_shift;
        int (*init)(struct intel_engine_cs *engine);
} logical_rings[] = {
        [RCS] = {
                .name = "render ring",
                .exec_id = I915_EXEC_RENDER,
                .guc_id = GUC_RENDER_ENGINE,
                .mmio_base = RENDER_RING_BASE,
                .irq_shift = GEN8_RCS_IRQ_SHIFT,
                .init = logical_render_ring_init,
        },
        [BCS] = {
                .name = "blitter ring",
                .exec_id = I915_EXEC_BLT,
                .guc_id = GUC_BLITTER_ENGINE,
                .mmio_base = BLT_RING_BASE,
                .irq_shift = GEN8_BCS_IRQ_SHIFT,
                .init = logical_ring_init,
        },
        [VCS] = {
                .name = "bsd ring",
                .exec_id = I915_EXEC_BSD,
                .guc_id = GUC_VIDEO_ENGINE,
                .mmio_base = GEN6_BSD_RING_BASE,
                .irq_shift = GEN8_VCS1_IRQ_SHIFT,
                .init = logical_ring_init,
        },
        [VCS2] = {
                .name = "bsd2 ring",
                .exec_id = I915_EXEC_BSD,
                .guc_id = GUC_VIDEO_ENGINE2,
                .mmio_base = GEN8_BSD2_RING_BASE,
                .irq_shift = GEN8_VCS2_IRQ_SHIFT,
                .init = logical_ring_init,
        },
        [VECS] = {
                .name = "video enhancement ring",
                .exec_id = I915_EXEC_VEBOX,
                .guc_id = GUC_VIDEOENHANCE_ENGINE,
                .mmio_base = VEBOX_RING_BASE,
                .irq_shift = GEN8_VECS_IRQ_SHIFT,
                .init = logical_ring_init,
        },
};
static struct intel_engine_cs *
logical_ring_setup(struct drm_i915_private *dev_priv, enum intel_engine_id id)
{
        const struct logical_ring_info *info = &logical_rings[id];
        struct intel_engine_cs *engine = &dev_priv->engine[id];
        enum forcewake_domains fw_domains;

        engine->id = id;
        engine->name = info->name;
        engine->exec_id = info->exec_id;
        engine->guc_id = info->guc_id;
        engine->mmio_base = info->mmio_base;

        engine->i915 = dev_priv;

        /* Intentionally left blank. */
        engine->buffer = NULL;

        fw_domains = intel_uncore_forcewake_for_reg(dev_priv,
                                                    RING_ELSP(engine),
                                                    FW_REG_WRITE);

        fw_domains |= intel_uncore_forcewake_for_reg(dev_priv,
                                                     RING_CONTEXT_STATUS_PTR(engine),
                                                     FW_REG_READ | FW_REG_WRITE);

        fw_domains |= intel_uncore_forcewake_for_reg(dev_priv,
                                                     RING_CONTEXT_STATUS_BUF_BASE(engine),
                                                     FW_REG_READ);

        engine->fw_domains = fw_domains;

        INIT_LIST_HEAD(&engine->active_list);
        INIT_LIST_HEAD(&engine->request_list);
        INIT_LIST_HEAD(&engine->buffers);
        INIT_LIST_HEAD(&engine->execlist_queue);
        spin_lock_init(&engine->execlist_lock);

        tasklet_init(&engine->irq_tasklet,
                     intel_lrc_irq_handler, (unsigned long)engine);

        logical_ring_init_platform_invariants(engine);
        logical_ring_default_vfuncs(engine);
        logical_ring_default_irqs(engine, info->irq_shift);

        intel_engine_init_hangcheck(engine);
        i915_gem_batch_pool_init(&dev_priv->drm, &engine->batch_pool);

        return engine;
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
tvrtko ursulin           180   74.07%        3      18.75%
oscar mateo               23    9.47%        4      25.00%
arun siluvery             12    4.94%        1       6.25%
nicholas hoath             9    3.70%        1       6.25%
daniel vetter              5    2.06%        1       6.25%
chris wilson               4    1.65%        2      12.50%
damien lespiau             3    1.23%        1       6.25%
michel thierry             3    1.23%        1       6.25%
imre deak                  3    1.23%        1       6.25%
jani nikula                1    0.41%        1       6.25%
Total                    243  100.00%       16     100.00%

/**
 * intel_logical_rings_init() - allocate, populate and init the Engine
 * Command Streamers
 * @dev: DRM device.
 *
 * This function inits the engines for an Execlists submission style (the
 * equivalent in the legacy ringbuffer submission world would be
 * i915_gem_init_engines). It does it only for those engines that are present
 * in the hardware.
 *
 * Return: non-zero if the initialization failed.
 */
int intel_logical_rings_init(struct drm_device *dev)
{
        struct drm_i915_private *dev_priv = to_i915(dev);
        unsigned int mask = 0;
        unsigned int i;
        int ret;

        WARN_ON(INTEL_INFO(dev_priv)->ring_mask &
                GENMASK(sizeof(mask) * BITS_PER_BYTE - 1, I915_NUM_ENGINES));

        for (i = 0; i < ARRAY_SIZE(logical_rings); i++) {
                if (!HAS_ENGINE(dev_priv, i))
                        continue;

                if (!logical_rings[i].init)
                        continue;

                ret = logical_rings[i].init(logical_ring_setup(dev_priv, i));
                if (ret)
                        goto cleanup;

                mask |= ENGINE_MASK(i);
        }

        /*
         * Catch failures to update logical_rings table when the new engines
         * are added to the driver by a warning and disabling the forgotten
         * engines.
         */
        if (WARN_ON(mask != INTEL_INFO(dev_priv)->ring_mask)) {
                struct intel_device_info *info =
                        (struct intel_device_info *)&dev_priv->info;
                info->ring_mask = mask;
        }

        return 0;

cleanup:
        for (i = 0; i < I915_NUM_ENGINES; i++)
                intel_logical_ring_cleanup(&dev_priv->engine[i]);

        return ret;
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
tvrtko ursulin           122   61.62%        2      50.00%
oscar mateo               73   36.87%        1      25.00%
chris wilson               3    1.52%        1      25.00%
Total                    198  100.00%        4     100.00%
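
The mask bookkeeping in intel_logical_rings_init() accumulates a bit per successfully initialized engine and compares it with the device's expected ring_mask, disabling any engine the table forgot. A compilable sketch of that pattern, with a hypothetical ring_mask value standing in for INTEL_INFO()->ring_mask:

#include <stdio.h>

#define ENGINE_MASK(id) (1u << (id))

int main(void)
{
        unsigned int expected = 0x13;   /* hypothetical: RCS | BCS | VECS */
        unsigned int mask = 0, i;

        for (i = 0; i < 5; i++)
                if (expected & ENGINE_MASK(i))  /* stands in for HAS_ENGINE() */
                        mask |= ENGINE_MASK(i); /* engine init succeeded */

        printf("%s\n", mask == expected ? "all engines initialized"
                                        : "disabling forgotten engines");
        return 0;
}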


static u32 make_rpcs(struct drm_i915_private *dev_priv)
{
        u32 rpcs = 0;

        /*
         * No explicit RPCS request is needed to ensure full
         * slice/subslice/EU enablement prior to Gen9.
         */
        if (INTEL_GEN(dev_priv) < 9)
                return 0;

        /*
         * Starting in Gen9, render power gating can leave
         * slice/subslice/EU in a partially enabled state. We
         * must make an explicit request through RPCS for full
         * enablement.
         */
        if (INTEL_INFO(dev_priv)->has_slice_pg) {
                rpcs |= GEN8_RPCS_S_CNT_ENABLE;
                rpcs |= INTEL_INFO(dev_priv)->slice_total <<
                        GEN8_RPCS_S_CNT_SHIFT;
                rpcs |= GEN8_RPCS_ENABLE;
        }

        if (INTEL_INFO(dev_priv)->has_subslice_pg) {
                rpcs |= GEN8_RPCS_SS_CNT_ENABLE;
                rpcs |= INTEL_INFO(dev_priv)->subslice_per_slice <<
                        GEN8_RPCS_SS_CNT_SHIFT;
                rpcs |= GEN8_RPCS_ENABLE;
        }

        if (INTEL_INFO(dev_priv)->has_eu_pg) {
                rpcs |= INTEL_INFO(dev_priv)->eu_per_subslice <<
                        GEN8_RPCS_EU_MIN_SHIFT;
                rpcs |= INTEL_INFO(dev_priv)->eu_per_subslice <<
                        GEN8_RPCS_EU_MAX_SHIFT;
                rpcs |= GEN8_RPCS_ENABLE;
        }

        return rpcs;
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
jeff mcgee               118   90.77%        1      33.33%
chris wilson              11    8.46%        1      33.33%
oscar mateo                1    0.77%        1      33.33%
Total                    130  100.00%        3     100.00%
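
To make the bit packing in make_rpcs() concrete, here is a standalone sketch for a hypothetical 1-slice, 3-subslice, 8-EU-per-subslice configuration. The shift and enable values below are assumptions chosen for the example, not copied from i915 headers.

#include <stdint.h>
#include <stdio.h>

#define GEN8_RPCS_ENABLE        (1u << 31)   /* assumed bit positions */
#define GEN8_RPCS_S_CNT_ENABLE  (1u << 18)
#define GEN8_RPCS_S_CNT_SHIFT   15
#define GEN8_RPCS_SS_CNT_ENABLE (1u << 11)
#define GEN8_RPCS_SS_CNT_SHIFT  8
#define GEN8_RPCS_EU_MIN_SHIFT  0
#define GEN8_RPCS_EU_MAX_SHIFT  4

int main(void)
{
        uint32_t rpcs = 0;

        rpcs |= GEN8_RPCS_S_CNT_ENABLE  | (1u << GEN8_RPCS_S_CNT_SHIFT);  /* slices */
        rpcs |= GEN8_RPCS_SS_CNT_ENABLE | (3u << GEN8_RPCS_SS_CNT_SHIFT); /* subslices */
        rpcs |= (8u << GEN8_RPCS_EU_MIN_SHIFT) | (8u << GEN8_RPCS_EU_MAX_SHIFT);
        rpcs |= GEN8_RPCS_ENABLE;

        printf("RPCS = 0x%08x\n", rpcs);
        return 0;
}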


static u32 intel_lr_indirect_ctx_offset(struct intel_engine_cs *engine)
{
        u32 indirect_ctx_offset;

        switch (INTEL_GEN(engine->i915)) {
        default:
                MISSING_CASE(INTEL_GEN(engine->i915));
                /* fall through */
        case 9:
                indirect_ctx_offset = GEN9_CTX_RCS_INDIRECT_CTX_OFFSET_DEFAULT;
                break;
        case 8:
                indirect_ctx_offset = GEN8_CTX_RCS_INDIRECT_CTX_OFFSET_DEFAULT;
                break;
        }

        return indirect_ctx_offset;
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
michel thierry            49   87.50%        1      33.33%
chris wilson               4    7.14%        1      33.33%
tvrtko ursulin             3    5.36%        1      33.33%
Total                     56  100.00%        3     100.00%


static int
populate_lr_context(struct i915_gem_context *ctx,
                    struct drm_i915_gem_object *ctx_obj,
                    struct intel_engine_cs *engine,
                    struct intel_ringbuffer *ringbuf)
{
        struct drm_i915_private *dev_priv = ctx->i915;
        struct i915_hw_ppgtt *ppgtt = ctx->ppgtt;
        void *vaddr;
        u32 *reg_state;
        int ret;

        if (!ppgtt)
                ppgtt = dev_priv->mm.aliasing_ppgtt;

        ret = i915_gem_object_set_to_cpu_domain(ctx_obj, true);
        if (ret) {
                DRM_DEBUG_DRIVER("Could not set to CPU domain\n");
                return ret;
        }

        vaddr = i915_gem_object_pin_map(ctx_obj);
        if (IS_ERR(vaddr)) {
                ret = PTR_ERR(vaddr);
                DRM_DEBUG_DRIVER("Could not map object pages! (%d)\n", ret);
                return ret;
        }
        ctx_obj->dirty = true;

        /* The second page of the context object contains some fields which must
         * be set up prior to the first execution.
         */
        reg_state = vaddr + LRC_STATE_PN * PAGE_SIZE;

        /* A context is actually a big batch buffer with several
         * MI_LOAD_REGISTER_IMM commands followed by (reg, value) pairs. The
         * values we are setting here are only for the first context restore:
         * on a subsequent save, the GPU will recreate this batchbuffer with new
         * values (including all the missing MI_LOAD_REGISTER_IMM commands that
         * we are not initializing here).
         */
        reg_state[CTX_LRI_HEADER_0] =
                MI_LOAD_REGISTER_IMM(engine->id == RCS ? 14 : 11) |
                MI_LRI_FORCE_POSTED;
        ASSIGN_CTX_REG(reg_state, CTX_CONTEXT_CONTROL,
                       RING_CONTEXT_CONTROL(engine),
                       _MASKED_BIT_ENABLE(CTX_CTRL_INHIBIT_SYN_CTX_SWITCH |
                                          CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT |
                                          (HAS_RESOURCE_STREAMER(dev_priv) ?
                                           CTX_CTRL_RS_CTX_ENABLE : 0)));
        ASSIGN_CTX_REG(reg_state, CTX_RING_HEAD, RING_HEAD(engine->mmio_base), 0);
        ASSIGN_CTX_REG(reg_state, CTX_RING_TAIL, RING_TAIL(engine->mmio_base), 0);
        /* Ring buffer start address is not known until the buffer is pinned.
         * It is written to the context image in execlists_update_context()
         */
        ASSIGN_CTX_REG(reg_state, CTX_RING_BUFFER_START,
                       RING_START(engine->mmio_base), 0);
        ASSIGN_CTX_REG(reg_state, CTX_RING_BUFFER_CONTROL,
                       RING_CTL(engine->mmio_base),
                       ((ringbuf->size - PAGE_SIZE) & RING_NR_PAGES) | RING_VALID);
        ASSIGN_CTX_REG(reg_state, CTX_BB_HEAD_U,
                       RING_BBADDR_UDW(engine->mmio_base), 0);
        ASSIGN_CTX_REG(reg_state, CTX_BB_HEAD_L,
                       RING_BBADDR(engine->mmio_base), 0);
        ASSIGN_CTX_REG(reg_state, CTX_BB_STATE,
                       RING_BBSTATE(engine->mmio_base), RING_BB_PPGTT);
        ASSIGN_CTX_REG(reg_state, CTX_SECOND_BB_HEAD_U,
                       RING_SBBADDR_UDW(engine->mmio_base), 0);
        ASSIGN_CTX_REG(reg_state, CTX_SECOND_BB_HEAD_L,
                       RING_SBBADDR(engine->mmio_base), 0);
        ASSIGN_CTX_REG(reg_state, CTX_SECOND_BB_STATE,
                       RING_SBBSTATE(engine->mmio_base), 0);
        if (engine->id == RCS) {
                ASSIGN_CTX_REG(reg_state, CTX_BB_PER_CTX_PTR,
                               RING_BB_PER_CTX_PTR(engine->mmio_base), 0);
                ASSIGN_CTX_REG(reg_state, CTX_RCS_INDIRECT_CTX,
                               RING_INDIRECT_CTX(engine->mmio_base), 0);
                ASSIGN_CTX_REG(reg_state, CTX_RCS_INDIRECT_CTX_OFFSET,
                               RING_INDIRECT_CTX_OFFSET(engine->mmio_base), 0);
                if (engine->wa_ctx.obj) {
                        struct i915_ctx_workarounds *wa_ctx = &engine->wa_ctx;
                        uint32_t ggtt_offset = i915_gem_obj_ggtt_offset(wa_ctx->obj);

                        reg_state[CTX_RCS_INDIRECT_CTX+1] =
                                (ggtt_offset + wa_ctx->indirect_ctx.offset * sizeof(uint32_t)) |
                                (wa_ctx->indirect_ctx.size / CACHELINE_DWORDS);

                        reg_state[CTX_RCS_INDIRECT_CTX_OFFSET+1] =
                                intel_lr_indirect_ctx_offset(engine) << 6;

                        reg_state[CTX_BB_PER_CTX_PTR+1] =
                                (ggtt_offset + wa_ctx->per_ctx.offset * sizeof(uint32_t)) |
                                0x01;
                }
        }
        reg_state[CTX_LRI_HEADER_1] = MI_LOAD_REGISTER_IMM(9) | MI_LRI_FORCE_POSTED;
        ASSIGN_CTX_REG(reg_state, CTX_CTX_TIMESTAMP,
                       RING_CTX_TIMESTAMP(engine->mmio_base), 0);
        /* PDP values will be assigned later if needed */
        ASSIGN_CTX_REG(reg_state, CTX_PDP3_UDW, GEN8_RING_PDP_UDW(engine, 3), 0);
        ASSIGN_CTX_REG(reg_state, CTX_PDP3_LDW, GEN8_RING_PDP_LDW(engine, 3), 0);
        ASSIGN_CTX_REG(reg_state, CTX_PDP2_UDW, GEN8_RING_PDP_UDW(engine, 2), 0);
        ASSIGN_CTX_REG(reg_state, CTX_PDP2_LDW, GEN8_RING_PDP_LDW(engine, 2), 0);
        ASSIGN_CTX_REG(reg_state, CTX_PDP1_UDW, GEN8_RING_PDP_UDW(engine, 1), 0);
        ASSIGN_CTX_REG(reg_state, CTX_PDP1_LDW, GEN8_RING_PDP_LDW(engine, 1), 0);
        ASSIGN_CTX_REG(reg_state, CTX_PDP0_UDW, GEN8_RING_PDP_UDW(engine, 0), 0);
        ASSIGN_CTX_REG(reg_state, CTX_PDP0_LDW, GEN8_RING_PDP_LDW(engine, 0), 0);

        if (USES_FULL_48BIT_PPGTT(ppgtt->base.dev)) {
                /* 64b PPGTT (48bit canonical)
                 * PDP0_DESCRIPTOR contains the base address to PML4 and
                 * other PDP Descriptors are ignored.
                 */
                ASSIGN_CTX_PML4(ppgtt, reg_state);
        } else {
                /* 32b PPGTT
                 * PDP*_DESCRIPTOR contains the base address of space supported.
                 * With dynamic page allocation, PDPs may not be allocated at
                 * this point. Point the unallocated PDPs to the scratch page
                 */
                execlists_update_context_pdps(ppgtt, reg_state);
        }

        if (engine->id == RCS) {
                reg_state[CTX_LRI_HEADER_2] = MI_LOAD_REGISTER_IMM(1);
                ASSIGN_CTX_REG(reg_state, CTX_R_PWR_CLK_STATE, GEN8_R_PWR_CLK_STATE,
                               make_rpcs(dev_priv));
        }

        i915_gem_object_unpin_map(ctx_obj);

        return 0;
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
oscar mateo              328   43.62%        2      10.00%
ville syrjala            194   25.80%        2      10.00%
arun siluvery             94   12.50%        1       5.00%
tvrtko ursulin            60    7.98%        3      15.00%
michel thierry            37    4.92%        4      20.00%
thomas daniel             20    2.66%        1       5.00%
chris wilson               5    0.66%        2      10.00%
daniel vetter              5    0.66%        2      10.00%
jeff mcgee                 5    0.66%        1       5.00%
abdiel janulgue            2    0.27%        1       5.00%
zhi wang                   2    0.27%        1       5.00%
Total                    752  100.00%       20     100.00%
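
The register-state layout populated above pairs each MI_LOAD_REGISTER_IMM header with (register offset, value) dwords. A hedged sketch of what ASSIGN_CTX_REG is understood to do follows; the macro body is an assumption modelled on that layout (the real driver macro also converts an i915_reg_t to its MMIO offset), and the index and offset used in main() are made up for illustration.

#include <stdint.h>
#include <stdio.h>

/* assumed layout: even slot = MMIO offset, odd slot = initial value */
#define ASSIGN_CTX_REG(reg_state, pos, reg, val) do {   \
        (reg_state)[(pos) + 0] = (reg);                 \
        (reg_state)[(pos) + 1] = (val);                 \
} while (0)

int main(void)
{
        uint32_t reg_state[16] = { 0 };
        enum { CTX_RING_TAIL = 0x06 };          /* illustrative index */

        ASSIGN_CTX_REG(reg_state, CTX_RING_TAIL, 0x2030, 0);  /* offset is made up */
        printf("reg=0x%04x val=%u\n",
               reg_state[CTX_RING_TAIL], reg_state[CTX_RING_TAIL + 1]);
        return 0;
}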

/**
 * intel_lr_context_size() - return the size of the context for an engine
 * @engine: which engine to find the context size for
 *
 * Each engine may require a different amount of space for a context image,
 * so when allocating (or copying) an image, this function can be used to
 * find the right size for the specific engine.
 *
 * Return: size (in bytes) of an engine-specific context image
 *
 * Note: this size includes the HWSP, which is part of the context image
 * in LRC mode, but does not include the "shared data page" used with
 * GuC submission. The caller should account for this if using the GuC.
 */
uint32_t intel_lr_context_size(struct intel_engine_cs *engine)
{
        int ret = 0;

        WARN_ON(INTEL_GEN(engine->i915) < 8);

        switch (engine->id) {
        case RCS:
                if (INTEL_GEN(engine->i915) >= 9)
                        ret = GEN9_LR_CONTEXT_RENDER_SIZE;
                else
                        ret = GEN8_LR_CONTEXT_RENDER_SIZE;
                break;
        case VCS:
        case BCS:
        case VECS:
        case VCS2:
                ret = GEN8_LR_CONTEXT_OTHER_SIZE;
                break;
        }

        return ret;
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
oscar mateo               56   70.89%        2      33.33%
michael h. nguyen         14   17.72%        1      16.67%
tvrtko ursulin             4    5.06%        1      16.67%
chris wilson               4    5.06%        1      16.67%
dave gordon                1    1.27%        1      16.67%
Total                     79  100.00%        6     100.00%
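
For reference, the allocation path in execlists_context_deferred_alloc() below sizes the backing object from this function's result: round up to a page, then add one extra page for the GuC shared data. A compilable sketch of that computation; PAGE_SIZE, LRC_PPHWSP_PN and the per-engine size are assumed or hypothetical values.

#include <stdio.h>

#define PAGE_SIZE     4096
#define LRC_PPHWSP_PN 1          /* assumed: one per-process HWSP page */
#define ROUND_UP(x, a) (((x) + (a) - 1) / (a) * (a))

int main(void)
{
        unsigned int lr_context_size = 22 * PAGE_SIZE;  /* hypothetical engine value */
        unsigned int context_size = ROUND_UP(lr_context_size, 4096);

        /* one extra page as the sharing data between driver and GuC */
        context_size += PAGE_SIZE * LRC_PPHWSP_PN;

        printf("%u bytes\n", context_size);
        return 0;
}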

/**
 * execlists_context_deferred_alloc() - create the LRC specific bits of a context
 * @ctx: LR context to create.
 * @engine: engine to be used with the context.
 *
 * This function can be called more than once, with different engines, if we
 * plan to use the context with them. The context backing objects and the
 * ringbuffers (especially the ringbuffer backing objects) use a lot of
 * memory, which is why the creation is a deferred call: it's better to make
 * sure first that we need to use a given ring with the context.
 *
 * Return: non-zero on error.
 */
static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
                                            struct intel_engine_cs *engine)
{
        struct drm_i915_gem_object *ctx_obj;
        struct intel_context *ce = &ctx->engine[engine->id];
        uint32_t context_size;
        struct intel_ringbuffer *ringbuf;
        int ret;

        WARN_ON(ce->state);

        context_size = round_up(intel_lr_context_size(engine), 4096);

        /* One extra page as the sharing data between driver and GuC */
        context_size += PAGE_SIZE * LRC_PPHWSP_PN;

        ctx_obj = i915_gem_object_create(&ctx->i915->drm, context_size);
        if (IS_ERR(ctx_obj)) {
                DRM_DEBUG_DRIVER("Alloc LRC backing obj failed.\n");
                return PTR_ERR(ctx_obj);
        }

        ringbuf = intel_engine_create_ringbuffer(engine, ctx->ring_size);
        if (IS_ERR(ringbuf)) {
                ret = PTR_ERR(ringbuf);
                goto error_deref_obj;
        }

        ret = populate_lr_context(ctx, ctx_obj, engine, ringbuf);
        if (ret) {
                DRM_DEBUG_DRIVER("Failed to populate LRC: %d\n", ret);
                goto error_ringbuf;
        }

        ce->ringbuf = ringbuf;
        ce->state = ctx_obj;
        ce->initialised = engine->init_context == NULL;

        return 0;

error_ringbuf:
        intel_ringbuffer_free(ringbuf);
error_deref_obj:
        drm_gem_object_unreference(&ctx_obj->base);
        ce->ringbuf = NULL;
        ce->state = NULL;
        return ret;
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
oscar mateo              127   56.95%        6      24.00%
chris wilson              49   21.97%        8      32.00%
nicholas hoath            13    5.83%        1       4.00%
thomas daniel              9    4.04%        1       4.00%
alex dai                   7    3.14%        1       4.00%
tvrtko ursulin             5    2.24%        1       4.00%
dave gordon                4    1.79%        3      12.00%
michel thierry             3    1.35%        1       4.00%
zhi wang                   3    1.35%        1       4.00%
daniel vetter              2    0.90%        1       4.00%
dan carpenter              1    0.45%        1       4.00%
Total                    223  100.00%       25     100.00%


void intel_lr_context_reset(struct drm_i915_private *dev_priv,
                            struct i915_gem_context *ctx)
{
        struct intel_engine_cs *engine;

        for_each_engine(engine, dev_priv) {
                struct intel_context *ce = &ctx->engine[engine->id];
                struct drm_i915_gem_object *ctx_obj = ce->state;
                void *vaddr;
                uint32_t *reg_state;

                if (!ctx_obj)
                        continue;

                vaddr = i915_gem_object_pin_map(ctx_obj);
                if (WARN_ON(IS_ERR(vaddr)))
                        continue;

                reg_state = vaddr + LRC_STATE_PN * PAGE_SIZE;
                ctx_obj->dirty = true;

                reg_state[CTX_RING_HEAD+1] = 0;
                reg_state[CTX_RING_TAIL+1] = 0;

                i915_gem_object_unpin_map(ctx_obj);

                ce->ringbuf->head = 0;
                ce->ringbuf->tail = 0;
        }
}

Contributors

Person                Tokens     Prop  Commits  CommitProp
thomas daniel             91   66.42%        1      14.29%
tvrtko ursulin            33   24.09%        3      42.86%
chris wilson              12    8.76%        2      28.57%
alex dai                   1    0.73%        1      14.29%
Total                    137  100.00%        7     100.00%


Overall Contributors

Person                Tokens     Prop  Commits  CommitProp
oscar mateo             1999   20.53%       23      10.90%
tvrtko ursulin          1966   20.19%       24      11.37%
arun siluvery           1449   14.88%       13       6.16%
chris wilson             802    8.23%       31      14.69%
michel thierry           715    7.34%       11       5.21%
john harrison            476    4.89%       24      11.37%
thomas daniel            453    4.65%        7       3.32%
mika kuoppala            375    3.85%       13       6.16%
ville syrjala            231    2.37%        7       3.32%
ben widawsky             228    2.34%        4       1.90%
tim gore                 192    1.97%        3       1.42%
jeff mcgee               123    1.26%        1       0.47%
damien lespiau           115    1.18%        3       1.42%
nicholas hoath           109    1.12%        6       2.84%
zhi wang                 106    1.09%        5       2.37%
imre deak                 76    0.78%        2       0.95%
alex dai                  63    0.65%        2       0.95%
dave gordon               59    0.61%       10       4.74%
jani nikula               46    0.47%        1       0.47%
peter antoine             34    0.35%        3       1.42%
michal winiarski          25    0.26%        1       0.47%
daniel vetter             23    0.24%        7       3.32%
zhiyuan lv                22    0.23%        2       0.95%
michael h. nguyen         18    0.18%        1       0.47%
abdiel janulgue           12    0.12%        1       0.47%
andrzej hajda             10    0.10%        1       0.47%
dave airlie                5    0.05%        1       0.47%
francisco jerez            4    0.04%        1       0.47%
deepak s                   1    0.01%        1       0.47%
tomas elf                  1    0.01%        1       0.47%
dan carpenter              1    0.01%        1       0.47%
Total                   9739  100.00%      211     100.00%