These reworks were combined into this patch:
* Matt Turner: i965: Disable NoDDChk/NoDDClr test on Gen12+
* Francisco Jerez: intel/eu/validate/gen12: Disable
qword_low_power_no_depctrl eu_validate test.
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The number of entries was calculated for the allocation, but the
space was never actually reserved.
Fixes: 2117c53b72 "radv: Add temporary datastructure for submissions."
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Variables spilled on both branch legs need to be assigned to the same spilling slot.
These affinities can be transitive through multiple merge blocks.
Reviewed-by: Rhys Perry <pendingchaos02@gmail.com>
This patch makes the live variable analysis more precise
w.r.t. killed phi operands and the block's register pressure.
Reviewed-by: Rhys Perry <pendingchaos02@gmail.com>
Converting to 'Conventional SSA Form' ensures correctness w.r.t. spilling of phi nodes.
Previously, it was possible that phi operands had intersecting live ranges and thus
couldn't be spilled to the same spill slot. For this reason, ACO tried to avoid
spilling phis, even when it would have been beneficial.
This patch implements a conversion pass which is currently only called if spilling is necessary.
Reviewed-by: Rhys Perry <pendingchaos02@gmail.com>
Fixes these deqp tests (and more):
dEQP-GLES2.functional.draw.draw_arrays.points.single_attribute
dEQP-GLES2.functional.draw.draw_arrays.points.multiple_attributes
dEQP-GLES2.functional.draw.draw_arrays.points.default_attribute
dEQP-GLES2.functional.draw.draw_elements.points.single_attribute
dEQP-GLES2.functional.draw.draw_elements.points.multiple_attributes
dEQP-GLES2.functional.draw.draw_elements.points.default_attribute
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
The final version of the previous stencil fix patch ended up breaking one-sided
stencil.
Fixes remaining failures in these deqp tests (tested on GC3000/GC7000L):
dEQP-GLES2.functional.fragment_ops.depth_stencil.*
Note: deqp tests require --deqp-gl-config-name=rgba8888d24s8ms0
Fixes: 05da025f ("etnaviv: fix two-sided stencil")
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Fixes remaining failures in these deqp tests (tested on GC3000/GC7000L):
dEQP-GLES2.functional.polygon_offset.*
Fixes: 6c3c05dc ("etnaviv: fix polygon offset")
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
On gen11 and older, compressed images are tiled and aligned to 4K. On
gen12 this 4K alignment restriction was removed. However, aligning
the fast clear color buffer to only 64B (a cacheline, as stated in the
documentation) causes bugs where the fast clear color is not
converted during the fast clear operation. Aligning things to 4K seems
to fix it.
v2: Fix typo case in the comment (Nanley)
v3: Rebase and fix conflicts.
v4: Fix rebase mistake (Nanley).
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
On gen11 and older, compressed images are tiled and aligned to 4K. On
gen12 this 4K alignment restriction was removed. However, aligning
the fast clear color buffer to only 64B (a cacheline, as stated in the
documentation) causes bugs where the fast clear color is not
converted during the fast clear operation. Aligning things to 4K seems
to fix it.
v2: Assert that image->planes[plane].offset is 4K aligned (Nanley)
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
While we're at it, make sure we error out if it's not supported when
required.
This brings us a bit closer to being able to test on SwiftShader, which
doesn't currently support KHR_external_memory_fd.
TGL will have separate tables for src0 and src1, so the shared function
will no longer make sense.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
The EU compaction unit test fuzzes the compaction code by flipping bits.
We use a simple skip_bits() function with a list of reserved bits to
ignore, but for more complex cases like invalid combinations of register
file:type, we need either machinery to check validity or for these
functions to simply inform us whether a combination was valid.
enum brw_reg_type is a 4-bit field in brw_reg, so rather than expanding it
with an "INVALID" value, just return -1 and let the caller check for
that.
Scott suggested redefining unreachable() within the unit test to
longjmp() which would allow driver code like this to still use it and
allow the test to handle expected failures like this. If that plan works
out, I plan to revert this.
Mostly for vertex formats, but they are supported as texture formats too
(untested however).
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Rob Clark <robdclark@gmail.com>
If the src/dst addresses are dword-aligned and the size is > 4, we align
the byte count to a dword as well.
The PAL implementation works like this.
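A minimal sketch of that rule, with a hypothetical helper name (not the
actual driver code):

    #include <stdint.h>

    /* If both addresses are dword-aligned and more than 4 bytes are copied,
     * do the bulk of the copy in dwords and leave only the tail byte-sized. */
    static unsigned
    sdma_dword_copy_bytes(uint64_t src_va, uint64_t dst_va, unsigned size)
    {
       if ((src_va & 3) == 0 && (dst_va & 3) == 0 && size > 4)
          return size & ~3u;   /* dword-aligned portion */
       return 0;               /* fall back to a pure byte copy */
    }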
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Previously, the scheduler tried to move instructions that depend on VMEM
instructions up from below, only to move them down again when scheduling
the VMEM instruction.
Reviewed-by: Rhys Perry <pendingchaos02@gmail.com>
These got lost due to some refactoring.
Due to the way our scheduler works currently, for now
we add back the reorder flag for divergent loads only.
Reviewed-by: Rhys Perry <pendingchaos02@gmail.com>
This patch changes VMEM scheduling so that VMEM instructions can only
be moved upwards past previous VMEM instructions, but not downwards.
This improves the order of VMEM instructions in relation
to their users.
Reviewed-by: Rhys Perry <pendingchaos02@gmail.com>
Previously, we allowed all shaders to reduce max_waves to as low as 5.
Restricting this for shaders with low register demand increases the total number of waves
while the VMEM def-use distances hardly change.
This patch also changes the max number of move operations per MEM instruction.
Reviewed-by: Rhys Perry <pendingchaos02@gmail.com>
This shaves around 4-5% off of a CPU-limited example running with the
Dawn WebGPU implementation.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
In 0e4a75f917, Ken added a flag to brw_stage_prog_data which indicates
whether any UBO pulls ever occur. Unfortunately, he neglected to set
the bit in the vec4 back-end. This was fine at the time because the
optimization was intended for iris, which does not support gen7, and using
the vec4 back-end on Gen8+ requires an environment variable. We want to
use this in Vulkan, which does support Gen7, so we want the information
from the vec4 back-end as well as the scalar one.
Fixes: 0e4a75f917 "intel/compiler: Record whether any pull constant..."
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
RADV_PERFTEST=outoforder was removed a while ago. This fixes
dumping the options into hang reports.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This is actually a non-threaded implementation; I'd summarize it
as event-based submission.
When a submit happens, we walk a tree of submissions that depend on
the syncobj signal operations being submitted, and if those submissions
have no other dependencies we start to execute them immediately.
Well, I still use a list to avoid issues with long chains and
the stack size when using recursion.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
This does not fully implement wait-before-submit; that will be done in a
follow-up patch.
For kernels without support for timeline syncobjs, this adds an
implementation of non-shareable timelines using legacy syncobjs.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
This will lead to fewer pipelines in the cache, which is assumed to
become our most unavoidable performance bottleneck down the line.
Reviewed-by: Dave Airlie <airlied@redhat.com>
This is a function with timeout support for reading from the pipe
between processes used for secure compile.
Initially we hardcode the timeout to 5 seconds. We can adjust the
timeout limit in the future if needed.
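A minimal sketch of the idea, assuming a poll()-based helper (names are
illustrative, not the actual radv function):

    #include <poll.h>
    #include <unistd.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Wait up to timeout_ms for data on the pipe, then read; returns false
     * on timeout, error, or short read. */
    static bool
    read_from_pipe_timeout(int fd, void *buf, size_t size, int timeout_ms)
    {
       struct pollfd pfd = { .fd = fd, .events = POLLIN };
       if (poll(&pfd, 1, timeout_ms) <= 0)
          return false;
       return read(fd, buf, size) == (ssize_t)size;
    }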
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This will be used in the following patch to support timeouts for
reading the pipe between processes.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This skips touching %ebx most of the time; glGetString performance
increased from 114M/s to 120M/s on my desktop.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Signed-off-by: Lepton Wu <lepton@chromium.org>
Remove the hard-coded 16 and use entry_generate_or_patch to patch
public stubs. The generated code is actually slightly tighter
than before since the "nop" instructions before the final "jmp"
get removed.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Signed-off-by: Lepton Wu <lepton@chromium.org>
The code works exactly the same as before. Just split this function
out so we can reuse it.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Signed-off-by: Lepton Wu <lepton@chromium.org>
The x86 assembly language stub in src/mapi/entry_x86_tsd.h does not
generate PIC (position-independent code). This causes text relocations,
which cause trouble on recent versions of FreeBSD, OpenBSD, and Android.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108541
Reviewed-by: Matt Turner <mattst88@gmail.com>
Signed-off-by: Lepton Wu <lepton@chromium.org>
Vulkan requires that only one bit for the ordering is set, but old
versions of GLSLang just set all the bits. This was fixed as part of
c51287d744
but we can still find older versions (or shaders compiled with it)
around.
So instead of failing, emit a warning and fall back to the effective
result of any combination of multiple bits: AcquireRelease.
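Conceptually the fallback looks something like the sketch below
(illustrative only, assuming the spirv.h SpvMemorySemantics* masks and
util_bitcount() from util/bitscan.h; not the exact vtn code):

    #include "spirv.h"          /* SpvMemorySemantics* masks (assumed path) */
    #include "util/bitscan.h"   /* util_bitcount() */

    /* If more than one ordering bit is set, behave as AcquireRelease. */
    static SpvMemorySemanticsMask
    canonicalize_ordering(SpvMemorySemanticsMask semantics)
    {
       const SpvMemorySemanticsMask order_bits =
          SpvMemorySemanticsAcquireMask |
          SpvMemorySemanticsReleaseMask |
          SpvMemorySemanticsAcquireReleaseMask |
          SpvMemorySemanticsSequentiallyConsistentMask;

       if (util_bitcount(semantics & order_bits) > 1) {
          /* warn here: multiple ordering bits set, assuming AcquireRelease */
          semantics = (semantics & ~order_bits) |
                      SpvMemorySemanticsAcquireReleaseMask;
       }
       return semantics;
    }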
Closes: https://gitlab.freedesktop.org/mesa/mesa/issues/2018
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
We have to resolve destination surfaces if we are blitting to and from
the same surface.
v2: Revert unrelated change (Nanley Chery)
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
Let the aux surface state tracker track the stencil buffer's aux state while
clearing the depth stencil buffer.
v2: Fix condition check (Nanley Chery)
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
Even though stencil buffer compression looks like regular lossless color
compression without fast clear support, we have to resolve the stencil buffer
with the WM_HZ_OP packet.
v2: Check if resource is stencil with helper function (Nanley Chery)
v3: Remove unnecessary included file (Nanley Chery)
v4: (Nanley Chery)
- Avoid stencil buffer aux state transition by improving condition check
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
On Gen12+, the stencil buffer's lossless compression should be resolved
with the WM_HZ_OP packet.
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
We never saw any failures caused by this typo, but it's good to assign
the correct stencil view while constructing blorp_params.
Fixes: 0cabf93b80 "intel/blorp: Add an entrypoint for clearing depth and stencil"
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
On Gen12, the CCS buffer address doesn't have to be referenced in state
packets. In the case of a stencil buffer with CCS, the kernel won't know
the location of the CCS unless an extra call is made to pin its address.
To avoid this extra call, make the CCS part of the main surface.
v2. Update comment above bo_size. (Jordan)
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
The functions used during aux buffer configuration and creation only
return false for exceptional errors. Don't proceed with surface creation
in those cases.
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
The original value of 256 was chosen under the assumption that the list
belongs to a batch buffer, which is likely to have a large number of
relocations. However, pipeline objects on Gen7 will have at most 6
relocations (one per shader stage and one for the workaround BO), so this
is a lot of wasted space per pipeline.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
The old relocation list code always allocated 256 relocations and a hash
set up-front without knowing whether or not we really need them. In
particular, in the softpin case, this is two fairly large allocations
that we don't need to be making. Also, for pipeline objects on Haswell,
where we don't have softpin, we don't need relocations unless scratch is
used, so this is extra data per pipeline. Instead, we should do it
on-demand. This shaves 3.5% off of a CPU-limited example running with
the Dawn WebGPU implementation.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
For gen12 we set the streamout buffers using 4 separate
commands instead of 3DSTATE_SO_BUFFER.
Signed-off-by: Plamena Manolova <plamena.manolova@intel.com>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Extract out values for the handful of unknown registers which have
different values across different a6xx models, to simplify adding
support for new a6xx's.
Signed-off-by: Rob Clark <robdclark@chromium.org>
E.g. documentation-only changes cannot affect the outcome of the
pipeline, so don't waste resources on running it.
The thing we need to be careful about here is that the container stage
jobs must always run if any later stage jobs using the corresponding
docker images run. We're currently using the same .ci-run-policy
template for all jobs, so this is trivially true.
v2:
* Add bin/ and common.py (Eric Engestrom)
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com> # v1
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
A few equations/programming changes for ICL.
v2: Fix a couple of issues in naming and floating/integer operations (Ken)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
Commit 9edcce2a32 bumped the required libdrm-amdgpu version to
2.4.100. Update the version we use in our CI scripts to avoid CI
build failures.
Also bump the debian image name for this change to take effect.
Note that amdgpu is only built with the debian-buster image,
so only this image requires an update.
Fixes: 9edcce2a ("ac: get tcc_harvested from the kernel")
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
The anv_batch_bo contents are linked one to another, and when printing
we have to start with the first of those. Since in `u_vector` new
elements are added to the head, to get the first element we need the
vector's tail.
Fixes: 32ffd90002 ("anv: add support for INTEL_DEBUG=bat")
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Previously, subgroup shuffle was implemented using the bpermute
instruction, which only works across half-waves, so by itself it's
not suitable for implementing subgroup shuffle when the shader is
running in wave64 mode.
This commit adds a trick using shared VGPRs that allows implementing
subgroup shuffle relatively efficiently in this mode.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
Fixes p_reduce (all cluster sizes), p_inclusive_scan and p_exclusive_scan
with all reduction operations.
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
The existing "fallback" code didn't actually do anything, so this
removes it, and instead we just always fallback to `iris` for future
PCI IDs.
Suggested-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
When this code was merged, this wasn't necessary because the
state-tracker would do it later anyway. But this recently got changed,
without changing the code that depended on this.
Arguably, this was a mistake in the lowering pass to begin with. Either
way, let's fix it by not assuming that the lowering code gets called
later when it's not needed.
This fixes user-defined clip-planes in Zink.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Fixes: eaffdad108 ("st/mesa: don't lower_global_vars_to_local for VS if there are no dead inputs")
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Add a case for MCS_CCS so that we get the correct aux usage during the copy
operation.
v2: Fix commit subject (Nanley Chery)
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
Depending on MCS_CCS or MCS, we can emit blorp blit shaders.
As we support both MCS_CCS and MCS, it makes sense to use the
isl_aux_usage_has_mcs function.
v2: Fix commit message (Nanley Chery)
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
If aux for MCS is already configured, don't configure again.
v2: Fix missing period in commit message (Nanley Chery)
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
The Vulkan spec says that an implementation has to support one of
VK_FORMAT_X8_D24_UNORM_PACK32 and VK_FORMAT_D32_SFLOAT, as well as
one of VK_FORMAT_D24_UNORM_S8_UINT and VK_FORMAT_D32_SFLOAT_S8_UINT.
So let's keep track of which one of each pair is supported, and emulate
one on top of the other one.
This won't give the exact result for comparisons, or when mapping and
unmapping the resources. But it's better than flat out failing to create
the resource, and we can fix the map/unmap issue later if needed.
Tested-by: Duncan Hopkins <duncan@thefoundry.co.uk>
If a modifier specifies an aux, it must be created.
Fixes: 75a3947af4 ("iris/resource: Fall back to no aux if creation fails")
Reviewed-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Make sure the res struct is freed before returning.
Fixes: 2dce0e94a3 ("iris: Initial commit of a new 'iris' driver for Intel Gen8+ GPUs.")
Reviewed-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Store the converted depth value into two dwords. Avoids regressing the
piglit test "fbo-depth-array depth-clear", when HIZ_CCS sampling is
enabled in a later commit.
Reviewed-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Write through to the CCS if the surface is used as a texture and can be
sampled by the HW with CCS.
Reviewed-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Check that the alignment requirements for HIZ_CCS are satisfied by using
this function.
Reviewed-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Prevent the piglit test,
amd_vertex_shader_layer-layered-depth-texture-render, from regressing
in a future commit.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Prepare this function to be used in iris and to handle new Gen12 behavior.
Reviewed-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Add a helper to determine if an ISL surface supports the write-through
mode of HIZ_CCS.
Reviewed-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The HIZ_CCS and MCS_CCS auxiliary surface modes require that drivers
store information about two aux buffers. We choose to represent this as
HiZ/MCS being the primary aux surface and the CCS as a secondary/extra
aux surface. This representation has the effect of placing most of the
code that will have to choose between the two aux surfaces around the
aux-map entry points.
Reviewed-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Instead of guessing an aux_usage, then confirming it if the
isl_surf_get_*_surf functions are successful, just call the ISL
functions up-front. This will help us to more easily determine if a
depth buffer supports HIZ_CCS.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Add an extra aux parameter which will be filled out with CCS if the
first two isl_surf parameters fit the requirements for HiZ_CCS.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
While this format isn't listed in BSpec: 53911, other documentation and
empirical evidence suggest that it's fine to remap it to R32_FLOAT. I've
filed a bug for the BSpec page.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
We'll start doing slow depth clears more often on HIZ_CCS buffers in a
future commit. Reduce the performance impact by making them use less
bandwidth.
From the Depth Test section of the BSpec:
This function is enabled by the Depth Test Enable state variable. If
enabled, the pixel's ("source") depth value is first computed. After
computation the pixel's depth value is clamped to the range defined
by Minimum Depth and Maximum Depth in the selected CC_VIEWPORT state.
Then the current ("destination") depth buffer value for this pixel is
read.
and from the Depth Buffer Updates section of the BSpec:
If depth testing is disabled or the depth test passed, the incoming
pixel's depth value is written to the Depth Buffer.
Taken together, it's clear that depth testing isn't necessary to perform
a depth buffer clear. Mark Janes and I analyzed this patch with
frameretrace and a depthrange piglit test. I disabled HiZ to ensure we'd
get slow depth clears. We've observed the bandwidth consumption by the
depth buffer access to be cut ~50% on BDW and SKL during depth clears.
On a more graphically intensive workload, the Shadowmapping Sascha
benchmark, I took the average of 3 runs on a BDW with a display
resolution of about 1920x1200 (minus some desktop environment
decorations). I measured a 22.61% FPS improvement when HiZ is disabled.
v2. The BSpec doesn't mandate this behavior, update comment accordingly.
(Ken)
Fixes: bc4bb5a7e3 ("intel/blorp: Emit more complete DEPTH_STENCIL state")
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
In ISL:
Update the format table to add CCS_E support for some 8BPP formats,
some 16BPP formats, and R10G10B10A2_UNORM_SRGB.
In the helper for determining CCS_E support, we return false for some
16BPP formats because they aren't properly handled in blorp_copy().
In BLORP:
Allow the new and non-problematic formats for CCS_E-enabled copies.
v2. Update other fields for A1B5G5R5_UNORM and A4B4G4R4_UNORM in table.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com> (v1)
The CCS could be described in a number of ways, but this format was
chosen to minimize churn in the drivers. We may decide on a different
direction in the future.
v2. Increase alignment for display surfaces. (Nanley)
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com> (v1)
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
Use a helper that will automatically handle Gen12's CCS tiling when
creating a CCS isl_surf.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
In the function which translates ISL tilings to i915 tilings, map ISL's
HiZ and CCS tilings to Y instead of NONE (linear). The HW docs describe
HiZ and pre-Gen12 CCS surfaces as being Y-tiled in memory.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Avoid the compiler warnings for the new enums that will be introduced in
a future commit.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
The isl_surf structs for Gen12's CCS won't describe how many slices in
the main surface can be compressed. All slices will be compressible if
CCS is enabled, so look up the main surface's logical dimension.
v2. Add a space before a `?`. (Jordan)
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
This isn't accurate enough for HiZ which can have a discontiguous range
of supported aux slices. This also won't work with the plan to represent
Gen12 CCS as a single slice surface.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
From "Render Target Fast Clear" description for Gen12:
"SW must store clear color using MI_STORE_DATA_IMM with
ForceWriteCompletionCheck bit set."
From Instruction_MI_STORE_DATA_IMM, bitfield 10 (when set to 1):
"Following the last write from this command, Command Streamer
will wait for all previous writes are completed and in global
observable domain before moving to next command."
We use 4 SDIs to store the clear color (one per channel). From the
description, it looks to me like setting that flag only on the last SDI
should be enough.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Gen12's CCS requires that the main surface have a pitch aligned to 512B.
v2. Provide a BSpec citation. (Ken)
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
add_aux_state_tracking_buffer() actually checks the aux usage when
determining how many dwords to allocate for state tracking. Move the
function call to the point after the CCS_E aux usage is assigned.
Fixes: de3be61801 ("anv/cmd_buffer: Rework aux tracking")
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Avoid failing the `info->use_clear_address` assertion in ISL on Gen12+.
Fixes: 6c9f9a82d7 ("intel/genxml,isl: Add gen12 render surface state changes")
Reported-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
In gen12 we use the 3DSTATE_DEPTH_BOUNDS instruction
to enable depth bounds testing.
Signed-off-by: Plamena Manolova <plamena.manolova@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
In gen12 we use the 3DSTATE_DEPTH_BOUNDS instruction
to enable depth bounds testing.
Signed-off-by: Plamena Manolova <plamena.manolova@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
In gen12 we add the 3DSTATE_DEPTH_BOUNDS instruction
which enables support for depth bounds testing.
Signed-off-by: Plamena Manolova <plamena.manolova@gmail.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
When resolving a merge conflict, I accidentally only updated the
ARM64 tag. Let's correct this.
Fixes: 3d529c1739 ("gitlab-ci: also build Zink on CI")
Some implementations don't support the lineWidth feature, so let's
avoid setting invalid state on them. But since we don't have a fallback
for this, inform the user.
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
The driver can report a minimum alignment for UBOs, and that can be
larger than 64, which we've currently been using. Let's play ball, and
use the reported value instead.
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
There are two things that go wrong in this code on some drivers:
1. Rounding off the line width to the granularity can push it outside the
legal range.
2. A granularity of 0.0 results in NaN, because we divide by zero.
So let's make this code a bit more robust.
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
We're now adding interface-types during code-emitting, so we need to
defer emitting the entry-point. No biggie, spirv_builder is prepared for
this.
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
This is the only call-site that wants to specify unique values per
component for any of the get_*_constant functions. So let's give this
its own implementation instead, so we can ease the burden for the rest.
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
While we're at it, let's move emit_float_const to the location where
this needs to be defined.
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
This is going to make it easier to verify that 1-bit float sizes don't
leak into the rest of the code.
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
We don't implement the get_timestamp context-method, so this is just
going to crash if anyone tries to use it. Let's implement it later.
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
This isn't as inaccurate as the comment says; the Vulkan documentation
even seems to suggest this is the same. Let's drop the comment.
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
Because si.waitSemaphoreCount is 0, this won't even be looked at by the
driver, so let's just drop it.
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
This inlines submit_cmdbuf into zink_end_batch, the only place it's
used. This makes the code a bit more straightforward to read.
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
These aren't guaranteed to be vectors, they can also be scalars. The
var-part is the significant part here, not the vector-ness. So let's
rename these.
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
These track nir-registers, so it's clearer if we refer to them by that
name instead. There are potentially more vars than these.
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
We don't need to track the resources for the samplers any longer, as
the sampler view holds a reference instead.
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
The primitive topology is a bit of an odd-ball, as it's the only
truly draw-call specific state that needs to be passed to the program to
get a pipeline.
So let's make this a bit more explicit, by passing it separately. This
makes the flow of data a bit easier to wrap your head around.
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
This adds bitcasting to uint everywhere for now,
and stores all SPIR-V SSA values as uints.
It also casts bool to 0/0xffffffff for now
(NIR 1-bit bools may be coming in the future).
This makes a lot of piglit tests pass now.
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
In Vulkan, the Z-range of clip-space goes from 0..W instead of -W..+W
as is the case in OpenGL. So we need to transform the Z-range to
account for this.
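The transform itself is simple; a minimal sketch (the helper name is
illustrative, not the actual zink lowering):

    /* OpenGL clips -w <= z <= w, Vulkan clips 0 <= z <= w, so:
     *   z_vk = (z_gl + w) / 2
     * applied to the vertex position before rasterization. */
    static void
    remap_gl_clip_z(float pos[4])
    {
       pos[2] = (pos[2] + pos[3]) * 0.5f;
    }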
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
Here's zink, a so-far pretty simple Vulkan gallium driver that is able
to translate some applications from OpenGL to Vulkan.
The compiler is quite limited for now; this will be improved on later.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Acked-by: Jordan Justen <jordan.l.justen@intel.com>
Do not flush NaN to 0.
Fixes
dEQP-VK.spirv_assembly.instruction.compute.opquantize.propagated_nans
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
It's similar to GFX9+. Shadow of Mordor (Vulkan beta) hits that
path and it works fine.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This interface allows the aux-map code in the intel/common library to
allocate and free buffers.
Reworks:
* free gen_buffer in gen_aux_map_buffer_free. (Rafael)
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Commit d2b60e433e introduced restrictions (as per GLES spec) on the
internal format. We need to set up a sized format for the texture image
so framebuffers created with that are considered complete.
This change fixes the following Android CTS test in the
AHardwareBufferNativeTests category:
SingleLayer_ColorTest_GpuColorOutputAndSampledImage_R10G10B10A2_UNORM
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Fixes: d2b60e433e ("mesa/main: R10G10B10_(A2) formats are not color renderable in ES")
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
I thought there was hardware support for this, but it seems to be broken,
or at least more complex than I believed.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This memory needs to still be available after all the drawing is done
and forgotten about, so it cannot be transient.
Also clear the result so that a query with no rendering returns zero.
Signed-off-by: Urja Rannikko <urjaman@gmail.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Serialized NIR is required for clover with the SPIR-V pipeline. With
this change and PAN_MESA_DEBUG=deqp, clinfo is able to successfully
probe panfrost.
Code from Nouveau (commit 7955fabcf8 by
Karol Herbst).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
A device supported by kmsro will not automatically probe kmsro since the
driver name will be panfrost/lima/v3d/..., not "kmsro". Since kmsro is a
bit of a catch-all for generic (mostly embedded) GPUs, add a fallback on
kmsro for the dynamic loader.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Acked-by: Karol Herbst <kherbst@redhat.com>
kmsro is used by numerous embedded GPUs for a common winsys abstraction.
Let's add support for it for the dynamic pipe loader, so clover can
probe on these drivers.
We build the target with Panfrost. When other drivers need kmsro+clover,
we can revisit the build system part; my mesonfu is wanting.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Acked-by: Karol Herbst <kherbst@redhat.com>
This can be enabled via an environment variable which tells the
driver how many compilation threads are expected to be used,
and therefore how many forked processes the driver should
create.
For example we would expect to call fossilize replay with
something like this:
RADV_SECURE_COMPILE_THREADS=8 ./fossilize-replay --num-threads 8 \
--shader-cache-size 0 --ignore-derived-pipelines pipeline_cache.foz
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This adds support for the fork, the installation of the seccomp
filter, and the main loop for the actual compilation, called
from e.g. run_secure_compile_device().
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This function will be called by the parent process when doing a
secure compile. It first selects a free process to work with then
passes it all the information it needs to compile the pipeline.
Once the pipeline information has been passed to the secure
process, it then waits around to read/write any disk cache entries
required before exiting.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This allows the secure process to read and write to the disk cache
via the parent process. This commit just adds the functionality
needed for the secure process, the following commit will add the
functionality for the parent process.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
These will be used by the following commits to hold information about
the forked secure compile processes.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This will be used to identify information being passed between the
parent and secure process during a secure compile.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This can be useful for debugging the on-disk cache, but is also
useful in the following patch for secure compiles, which will be
used to compile huge pipeline collections. These pipeline
collections can be multiple GBs, and the in-memory cache grows to
multiple GBs very quickly when they are compiled, so we want to
be able to turn off the in-memory cache.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This is cleaner and avoids having to read/write an additional copy of
topology for use with secure compile.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This adds a new CI job that runs on windows with MSVC. It currently
builds softpipe and osmesa, and runs the related unit tests. It does
rely on meson's wraps for zlib, but I've set up caching of the wrap
dependencies so hopefully that won't be a problem.
I really wanted to use PowerShell for this, but there just isn't an
easy way to do that; it's much easier to use batch scripts, so that's
what I used.
The leading `/` for .gitlab-ci/lava... must be removed because Windows
doesn't understand it, and when it reads the file the job ends in error.
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
All of these (bug titles, patch titles, features, and people's names)
can contain characters that are not valid html. Just escape everything
for safety.
Fixes: 86079447da
("scripts: Add a gen_release_notes.py script")
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Juan A. Suarez <jasuarez@igalia.com>
I made a bad assumption; I assumed this would be run in the release
branch. But we don't do that; we run it in the master branch. As a result
we need to pass the version as an argument.
Fixes: 3226b12a09
("release: Add an update_release_calendar.py script")
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Juan A. Suarez <jasuarez@igalia.com>
Which is very likely .Z > 0 releases.
Fixes: 86079447da
("scripts: Add a gen_release_notes.py script")
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Juan A. Suarez <jasuarez@igalia.com>
If they use the `Fixes: #1` form.
Fixes: 86079447da
("scripts: Add a gen_release_notes.py script")
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Juan A. Suarez <jasuarez@igalia.com>
Previously this would result in the .0 warning being generated for .z > 0
releases, while .z == 0 would get the other message.
Fixes: 86079447da
("scripts: Add a gen_release_notes.py script")
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Juan A. Suarez <jasuarez@igalia.com>
After the discussion in
https://github.com/KhronosGroup/OpenGL-API/issues/45
the section 8.17 (texture completeness) of the OpenGL 4.6 core profile
was changed to explicitly say that multisample texture completeness
ignores filter state of the texture.
"Using the preceding definitions, a texture is complete unless any of the
following conditions hold true:
...
- The minification filter requires a mipmap (is neither NEAREST nor LINEAR),
  the texture is not multisample, and the texture is not mipmap complete.
- The texture is not multisample; either the magnification filter is not
  NEAREST, or the minification filter is neither NEAREST nor
  NEAREST_MIPMAP_NEAREST; and any of
  – The internal format of the texture is integer (see table 8.12).
  – The internal format is STENCIL_INDEX.
  – The internal format is DEPTH_STENCIL, and the value of
    DEPTH_STENCIL_TEXTURE_MODE for the texture is STENCIL_INDEX."
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Signed-off-by: Illia Iorin <illia.iorin@globallogic.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
MAX_VARYINGS_INCL_PATCH subtracts VARYING_SLOT_VAR0 giving us a size
that's too small, so BITSET_SET writes words out of bounds, corrupting
the stack and causing all kinds of chaos. VARYING_SLOT_TESS_MAX is
the right value to use here, as it's the largest location.
Closes: 2002
Fixes: ee2050b111 ("nir: Use BITSET for tracking varyings in lower_io_arrays")
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
[12/60] Compiling C object 'src/gallium/auxiliary/eb820e8@@gallium@sta/rbug_rbug_texture.c.o'.
FAILED: src/gallium/auxiliary/eb820e8@@gallium@sta/rbug_rbug_texture.c.o
[...]
../src/gallium/auxiliary/rbug/rbug_texture.c: In function 'rbug_send_texture_info_reply':
../src/gallium/auxiliary/rbug/rbug_texture.c:302:21: error: implicit declaration of function 'alloca'; did you mean 'malloc'? [-Werror=implicit-function-declaration]
uint32_t *height = alloca(sizeof(uint32_t) * height_len);
^~~~~~
malloc
../src/gallium/auxiliary/rbug/rbug_texture.c:302:21: warning: initialization makes pointer from integer without a cast [-Wint-conversion]
../src/gallium/auxiliary/rbug/rbug_texture.c:303:20: warning: initialization makes pointer from integer without a cast [-Wint-conversion]
uint32_t *depth = alloca(sizeof(uint32_t) * height_len);
^~~~~~
cc1: some warnings being treated as errors
Include c99_alloca.h to portably make the alloca() prototype available.
See also: 498d9d0f, adfb9c5c, fc8139b1
Fixes: 6174cba7 ("rbug: fix transmitted texture sizes")
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Rather than supplying a mask/swizzle to compose with the original, just
supply the offset of the allocated register so we can directly offset
the mask/swizzle, without resorting to composition.
This is simpler, cleaner, and will generalize to non-32-bit.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
GFX10 hazards require a different approach compared to previous
generations; for example, they don't need s_nop, and most hazards
can't be solved by adding NOPs at all. Also, they are not
resolved by branch instructions.
This commit reorganizes aco_insert_NOPs so that there is now a
separate pass for GFX10. The new GFX10 pass also respects the
control flow of the shader.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
This commit refines the VMEMtoScalarWriteHazard mitigation, based
upon a closer look at what LLVM does. Also changes the code to
match the structure of the other hazard mitigations.
* The hazard is not only triggered by VMEM, FLAT, and GLOBAL
instructions, but also by SCRATCH and DS instructions.
* The SMEM/SALU instructions only cause a hazard when they
write a register that the VMEM/etc. are reading.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
There is a hazard caused by a branch between a
VMEM/GLOBAL/SCRATCH instruction and a DS instruction.
This commit adds a workaround that avoids the problem.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
There is a hazard that happens when an SMEM instruction
reads an SGPR and then a VALU instruction writes that same SGPR.
This commit adds a workaround that avoids the problem.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
There is a hazard when a non-VALU instruction reads the EXEC mask
and then a VALU instruction writes the EXEC mask.
This commit adds a workaround that avoids the problem.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
Any permlane instruction that follows any VOPC instruction can cause a hazard;
this commit implements a workaround that avoids the problem.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
ACO currently mitigates VMEMtoScalarWriteHazard and Offset3fBug
(names from LLVM). There are some bugs that ACO needn't care about.
Just to be on the safe side, add an assertion that makes sure
that we aren't hit by FlatSegmentOffsetBug.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
From the Vulkan spec 1.1.126:
"VK_SHADER_FLOAT_CONTROLS_INDEPENDENCE_32_BIT_ONLY_KHR specifies
that shader float controls for 32-bit floating point can be set
independently; other bit widths must be set identically to each
other."
Forgot to update this when I enabled that extension recently.
Fixes dEQP-VK.spirv_assembly.instruction.compute.float_controls.independence_settings.independence_setting
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This saves one return. A simple benchmark which calls glGetString
repeatedly on my desktop shows calls per second improving from 118M
to 128M.
Signed-off-by: Lepton Wu <lepton@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Most GPUs require the sample count to be a power of 2. Just remove those
formats with unusual sample counts. This decreases dEQP EGL test run
time a lot.
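A minimal sketch of such a filter (the helper name is illustrative):

    #include <stdbool.h>

    /* Keep only configs whose sample count is zero or a power of two. */
    static bool
    sample_count_is_usable(unsigned samples)
    {
       return samples == 0 || (samples & (samples - 1)) == 0;
    }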
Signed-off-by: Lepton Wu <lepton@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
MAX_VARYINGS_INCL_PATCH is greater than 64, so we'll need more than 64
bits (per component) to track which vars have indirects. This pass was
trying to track patch varyings (which start at bit 63) in a separate
64-bit word, but failed to subtract VARYING_SLOT_PATCH0 and accessed
out of bounds.
Do away with the ad-hoc bit mask tracking and just use a BITSET.
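For reference, usage of the util BITSET macros looks roughly like this
(a sketch assuming util/bitset.h; the helper name is hypothetical):

    #include "util/bitset.h"

    /* One bit per varying location, sized for the full slot range, e.g.
     *   BITSET_DECLARE(indirects, VARYING_SLOT_TESS_MAX);
     * so patch varyings no longer need a separate, error-prone 64-bit word. */
    static void
    mark_indirect_slot(BITSET_WORD *indirects, unsigned location)
    {
       BITSET_SET(indirects, location);
    }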
Fixes: dEQP-GLES31.functional.tessellation.user_defined_io.per_patch_block.vertex_io_array_size_implicit.triangles
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Signed-off-by: Kristian H. Kristensen <hoegsberg@google.com>
In some cases, in particular when you have things that can be src
modifiers ((abs)/(neg)), eliminating one mov opens up the
possibility of removing another. Handle this by re-visiting an
instruction after eliminating a copy on one of its srcs.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
These date back to relatively early days of ir3, when a lot was still
not well understood. But according to CI (and what I've seen the blob
driver do), these are not actually real restrictions.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Now that we fixed the sharp edges that this was papering over, we can
relax the restriction about eliminating a mov coming out of a fanout
(for example from result of texture fetch).
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
This avoids copy-propagating a high register into an instruction which
cannot consume it.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
We already did this properly for split/fanout, but collect was missed.
Extract out a helper to share.
This way we avoid copy-propagating a mov from a high or half reg into an
instruction which cannot consume a high/half reg.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
1) deduplicate IR3_SHADER_DEBUG=disasm versus fs/vs/etc handling
2) standardize shader stage name prints, in particular VERT vs BVERT
3) don't mix stderr and stdout
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Avoid keeping track of the idx and all possible image operands for
each operation. Note for convenience we split up the handling of
ImageOperandsOffsetMask and ImageOperandsConstOffsetMask.
Suggested by Jason Ekstrand.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Change the information to also include the category, so that the
particulars of BitEnum enumeration can be handled in the template.
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Emit barriers with semantics matching the access operand and the
storage class of the pointer.
v2: Fix order of visible / available emission relative to the
operations. (Bas)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Set the memory semantics and scope for later emitting the barrier.
Note the barrier emission code already exists in vtn_handle_image for
the Image atomics.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Add a helper to split the memory semantics into before and after the
operation, and use that result to emit memory barriers.
v2: Be more explicit about which bits we are keeping around when
splitting memory semantics into a before and after. For now
we are ignoring Volatile. (Jason)
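A conceptual sketch of the split (an illustration of the idea, not the
actual vtn helper; spirv.h path is assumed):

    #include "spirv.h"   /* SpvMemorySemantics* */

    /* Release-like semantics order prior accesses, so they belong to the
     * barrier emitted before the atomic; Acquire-like semantics order later
     * accesses, so they belong to the barrier emitted after it. */
    static void
    split_semantics(SpvMemorySemanticsMask s,
                    SpvMemorySemanticsMask *before,
                    SpvMemorySemanticsMask *after)
    {
       const SpvMemorySemanticsMask storage =
          s & ~(SpvMemorySemanticsAcquireMask |
                SpvMemorySemanticsReleaseMask |
                SpvMemorySemanticsAcquireReleaseMask);

       *before = *after = SpvMemorySemanticsMaskNone;
       if (s & (SpvMemorySemanticsReleaseMask |
                SpvMemorySemanticsAcquireReleaseMask))
          *before = SpvMemorySemanticsReleaseMask | storage;
       if (s & (SpvMemorySemanticsAcquireMask |
                SpvMemorySemanticsAcquireReleaseMask))
          *after = SpvMemorySemanticsAcquireMask | storage;
    }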
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Including the right storage memory semantic based on the storage class
of the operation. These will be used later to emit memory barriers.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Three groups of tests, effectively defining in which cases the
optimization is allowed or prevented:
- Redundant loads (a load generated the value)
- Propagate SSA values (a store generated the value)
- Propagate a var (a copy generated the value)
Change the shader type of the tests to COMPUTE so
nir_var_mem_shared can also be used. This doesn't affect the semantics of
the copy propagation.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Add a NIR intrinsic that represents a memory barrier in SPIR-V /
Vulkan Memory Model, with extra attributes that describe the barrier:
- Ordering: whether it is an Acquire or Release;
- "Cache control": availability ("ensure this gets written in the memory")
and visibility ("ensure my cache is up to date when I'm reading");
- Variable modes: which memory types this barrier applies to;
- Scope: how far this barrier applies.
Note that unlike in SPIR-V, the "Storage Semantics" and the "Memory
Semantics" are split into two different attributes so we can use
variable modes for the former.
NIR passes that took barriers into consideration were also changed:
- nir_opt_copy_prop_vars: clean up the values for the mode of an
ACQUIRE barrier. Copy propagation effect is to "pull up a load" (by
not performing it), which is what ACQUIRE restricts.
- nir_opt_dead_write_vars and nir_opt_combine_writes: clean up the
pending writes for the modes of a RELEASE barrier. Dead writes
effect is to "push down a store", which is what RELEASE restricts.
- nir_opt_access: treat the ACQUIRE and RELEASE as a full barrier for
the modes. This is conservative, but since this is a GL-specific
pass, doesn't make a difference for now.
v2: Fix the scoped barrier handling in copy propagation. (Jason)
Add scoped barrier handling to nir_opt_access and
nir_opt_combine_writes. (Rhys)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This warning is different. Meson support for Windows is less mature than
for other platforms, and the goal here is to alert people that
eventually we plan to drop scons and move to meson, and that they should
try out meson and report issues.
Reviewed-by: Eric Anholt <eric@anholt.net>
At this point meson should be able to handle all of the non-Windows
platforms just fine; we'd like to be able to stop maintaining scons for
those platforms sooner rather than later.
Reviewed-by: Eric Anholt <eric@anholt.net>
This ensures that we get python3's print() function behavior even in
python2, instead of python2's print statement behavior. We'll be using
this in the next patch.
Reviewed-by: Eric Anholt <eric@anholt.net>
On GFX8 the number of records is in bytes while on other chips
it's in units of "stride".
Fixes dEQP-VK.robustness.vertex_access.*.draw.vertex_* on RAVEN.
Tested on GFX6, GFX8, GFX10 and RAVEN.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Flagged by UBSan:
../src/util/tests/fast_idiv_by_const/fast_idiv_by_const_test.cpp:233:14: runtime error: negation of -2147483648 cannot be represented in type 'int'; cast to an unsigned type to negate this value to itself
#0 0x55b4c1a2a428 in rand_sint ../src/util/tests/fast_idiv_by_const/fast_idiv_by_const_test.cpp:233
#1 0x55b4c1a2ad3a in random_sdiv_test ../src/util/tests/fast_idiv_by_const/fast_idiv_by_const_test.cpp:308
#2 0x55b4c1a2b837 in fast_idiv_by_const_int32_Test::TestBody() ../src/util/tests/fast_idiv_by_const/fast_idiv_by_const_test.cpp:410
#3 0x55b4c1abc13f in void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) ../src/gtest/src/gtest.cc:2402
#4 0x55b4c1aa7a4d in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) ../src/gtest/src/gtest.cc:2438
#5 0x55b4c1a4ce57 in testing::Test::Run() ../src/gtest/src/gtest.cc:2474
#6 0x55b4c1a4f530 in testing::TestInfo::Run() ../src/gtest/src/gtest.cc:2656
#7 0x55b4c1a51cbe in testing::TestCase::Run() ../src/gtest/src/gtest.cc:2774
#8 0x55b4c1a6d698 in testing::internal::UnitTestImpl::RunAllTests() ../src/gtest/src/gtest.cc:4649
#9 0x55b4c1abfd58 in bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) ../src/gtest/src/gtest.cc:2402
#10 0x55b4c1aab425 in bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) ../src/gtest/src/gtest.cc:2438
#11 0x55b4c1a64cba in testing::UnitTest::Run() ../src/gtest/src/gtest.cc:4257
#12 0x55b4c1ae4b73 in RUN_ALL_TESTS() ../src/gtest/include/gtest/gtest.h:2233
#13 0x55b4c1ae4a33 in main ../src/gtest/src/gtest_main.cc:37
#14 0x7ff172d1dbba in __libc_start_main ../csu/libc-start.c:308
#15 0x55b4c1a28dc9 in _start (/home/daenzer/src/mesa-git/mesa/build-amd64-sanitize/src/util/tests/fast_idiv_by_const/fast_idiv_by_const_test+0x96dc9)
../src/util/tests/fast_idiv_by_const/fast_idiv_by_const_test.cpp:309:52: runtime error: negation of -9223372036854775808 cannot be represented in type 'long int'; cast to an unsigned type to negate this value to itself
#0 0x563b24dafd2d in random_sdiv_test ../src/util/tests/fast_idiv_by_const/fast_idiv_by_const_test.cpp:309
#1 0x563b24db0f0f in fast_idiv_by_const_int64_Test::TestBody() ../src/util/tests/fast_idiv_by_const/fast_idiv_by_const_test.cpp:473
#2 0x563b24e41111 in void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) ../src/gtest/src/gtest.cc:2402
#3 0x563b24e2ca1f in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) ../src/gtest/src/gtest.cc:2438
#4 0x563b24dd1e29 in testing::Test::Run() ../src/gtest/src/gtest.cc:2474
#5 0x563b24dd4502 in testing::TestInfo::Run() ../src/gtest/src/gtest.cc:2656
#6 0x563b24dd6c90 in testing::TestCase::Run() ../src/gtest/src/gtest.cc:2774
#7 0x563b24df266a in testing::internal::UnitTestImpl::RunAllTests() ../src/gtest/src/gtest.cc:4649
#8 0x563b24e44d2a in bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) ../src/gtest/src/gtest.cc:2402
#9 0x563b24e303f7 in bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) ../src/gtest/src/gtest.cc:2438
#10 0x563b24de9c8c in testing::UnitTest::Run() ../src/gtest/src/gtest.cc:4257
#11 0x563b24e69b45 in RUN_ALL_TESTS() ../src/gtest/include/gtest/gtest.h:2233
#12 0x563b24e69a05 in main ../src/gtest/src/gtest_main.cc:37
#13 0x7f9a90330bba in __libc_start_main ../csu/libc-start.c:308
#14 0x563b24daddc9 in _start (/home/daenzer/src/mesa-git/mesa/build-amd64-sanitize/src/util/tests/fast_idiv_by_const/fast_idiv_by_const_test+0x96dc9)
v2:
* Use INT64_MIN instead of LLONG_MIN (Jason Ekstrand)
* Simpler test for INT64_MIN result from rand_sint (Jason Ekstrand)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Shifting int64_t values left into the sign bit has undefined behaviour:
../src/util/fast_idiv_by_const.c:175:14: runtime error: left shift of 131 by 56 places cannot be represented in type 'long int'
#0 0x561337ed10c1 in sign_extend ../src/util/fast_idiv_by_const.c:175
#1 0x561337ed1335 in util_compute_fast_sdiv_info ../src/util/fast_idiv_by_const.c:239
#2 0x561337e17519 in fast_idiv_by_const_int8_Test::TestBody() ../src/util/tests/fast_idiv_by_const/fast_idiv_by_const_test.cpp:357
#3 0x561337ea815d in void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) ../src/gtest/src/gtest.cc:2402
#4 0x561337e93a6b in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) ../src/gtest/src/gtest.cc:2438
#5 0x561337e38e75 in testing::Test::Run() ../src/gtest/src/gtest.cc:2474
#6 0x561337e3b54e in testing::TestInfo::Run() ../src/gtest/src/gtest.cc:2656
#7 0x561337e3dcdc in testing::TestCase::Run() ../src/gtest/src/gtest.cc:2774
#8 0x561337e596b6 in testing::internal::UnitTestImpl::RunAllTests() ../src/gtest/src/gtest.cc:4649
#9 0x561337eabd76 in bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) ../src/gtest/src/gtest.cc:2402
#10 0x561337e97443 in bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) ../src/gtest/src/gtest.cc:2438
#11 0x561337e50cd8 in testing::UnitTest::Run() ../src/gtest/src/gtest.cc:4257
#12 0x561337ed0b91 in RUN_ALL_TESTS() ../src/gtest/include/gtest/gtest.h:2233
#13 0x561337ed0a51 in main ../src/gtest/src/gtest_main.cc:37
#14 0x7f85ba483bba in __libc_start_main ../csu/libc-start.c:308
#15 0x561337e14dc9 in _start (/home/daenzer/src/mesa-git/mesa/build-amd64-sanitize/src/util/tests/fast_idiv_by_const/fast_idiv_by_const_test+0x96dc9)
../src/util/tests/fast_idiv_by_const/fast_idiv_by_const_test.cpp:51:14: runtime error: left shift of negative value -63
#0 0x55fc3c0e67cc in strunc ../src/util/tests/fast_idiv_by_const/fast_idiv_by_const_test.cpp:51
#1 0x55fc3c0e6d93 in smul_high ../src/util/tests/fast_idiv_by_const/fast_idiv_by_const_test.cpp:140
#2 0x55fc3c0e7067 in fast_sdiv ../src/util/tests/fast_idiv_by_const/fast_idiv_by_const_test.cpp:181
#3 0x55fc3c0e858b in fast_idiv_by_const_int8_Test::TestBody() ../src/util/tests/fast_idiv_by_const/fast_idiv_by_const_test.cpp:358
#4 0x55fc3c17915d in void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) ../src/gtest/src/gtest.cc:2402
#5 0x55fc3c164a6b in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) ../src/gtest/src/gtest.cc:2438
#6 0x55fc3c109e75 in testing::Test::Run() ../src/gtest/src/gtest.cc:2474
#7 0x55fc3c10c54e in testing::TestInfo::Run() ../src/gtest/src/gtest.cc:2656
#8 0x55fc3c10ecdc in testing::TestCase::Run() ../src/gtest/src/gtest.cc:2774
#9 0x55fc3c12a6b6 in testing::internal::UnitTestImpl::RunAllTests() ../src/gtest/src/gtest.cc:4649
#10 0x55fc3c17cd76 in bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) ../src/gtest/src/gtest.cc:2402
#11 0x55fc3c168443 in bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) ../src/gtest/src/gtest.cc:2438
#12 0x55fc3c121cd8 in testing::UnitTest::Run() ../src/gtest/src/gtest.cc:4257
#13 0x55fc3c1a1b91 in RUN_ALL_TESTS() ../src/gtest/include/gtest/gtest.h:2233
#14 0x55fc3c1a1a51 in main ../src/gtest/src/gtest_main.cc:37
#15 0x7fd224759bba in __libc_start_main ../csu/libc-start.c:308
#16 0x55fc3c0e5dc9 in _start (/home/daenzer/src/mesa-git/mesa/build-amd64-sanitize/src/util/tests/fast_idiv_by_const/fast_idiv_by_const_test+0x96dc9)
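For illustration only (hypothetical names, not the literal patch), the
usual fix is to do the shift on the unsigned type and convert back:

  /* left-shifting a negative value is undefined; shift as uint64_t instead */
  v = (int64_t)((uint64_t)v << shift);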
v2:
* Use two casts instead of changing the argument type (Jason Ekstrand)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Otherwise a smaller type may be promoted to int, which can hit undefined
behaviour:
../src/gallium/auxiliary/util/u_half.h:126:29: runtime error: left shift of 32768 by 16 places cannot be represented in type 'int'
#0 0x5646ff63d488 in util_half_to_float ../src/gallium/auxiliary/util/u_half.h:126
#1 0x5646ff63d749 in _mesa_half_to_float ../src/util/half_float.c:145
#2 0x5646ff54d557 in nir_const_value_negative_equal ../src/compiler/nir/nir_instr_set.c:372
#3 0x5646ff44d29a in const_value_negative_equal_test_nir_type_float16_trivially_true_Test::TestBody() ../src/compiler/nir/tests/negative_equal_tests.cpp:121
#4 0x5646ff505c05 in void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) ../src/gtest/src/gtest.cc:2402
#5 0x5646ff4f1513 in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) ../src/gtest/src/gtest.cc:2438
#6 0x5646ff4979b5 in testing::Test::Run() ../src/gtest/src/gtest.cc:2474
#7 0x5646ff49a08e in testing::TestInfo::Run() ../src/gtest/src/gtest.cc:2656
#8 0x5646ff49c81c in testing::TestCase::Run() ../src/gtest/src/gtest.cc:2774
#9 0x5646ff4b81f6 in testing::internal::UnitTestImpl::RunAllTests() ../src/gtest/src/gtest.cc:4649
#10 0x5646ff50981e in bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) ../src/gtest/src/gtest.cc:2402
#11 0x5646ff4f4eeb in bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) ../src/gtest/src/gtest.cc:2438
#12 0x5646ff4af818 in testing::UnitTest::Run() ../src/gtest/src/gtest.cc:4257
#13 0x5646ff52e639 in RUN_ALL_TESTS() ../src/gtest/include/gtest/gtest.h:2233
#14 0x5646ff52e4f9 in main ../src/gtest/src/gtest_main.cc:37
#15 0x7f6bacb78bba in __libc_start_main ../csu/libc-start.c:308
#16 0x5646ff448019 in _start (/home/daenzer/src/mesa-git/mesa/build-amd64-sanitize/src/compiler/nir/negative_equal+0x17c019)
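For illustration only (variable names are made up), the pattern that
avoids the promotion-to-int problem is to widen to an unsigned 32-bit
type before shifting:

  /* without the cast, the uint16_t operand is promoted to (signed) int,
   * and shifting the 0x8000 sign bit left by 16 overflows it */
  uint32_t bits = (uint32_t)sign << 16;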
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Otherwise a smaller type may be promoted to int, which can hit undefined
behaviour:
../src/intel/compiler/brw_packed_float.c:66:17: runtime error: left shift of 128 by 24 places cannot be represented in type 'int'
#0 0x5604a03969aa in brw_vf_to_float ../src/intel/compiler/brw_packed_float.c:66
#1 0x5604a0391305 in vf_float_conversion_test_test_vf_to_float_Test::TestBody() ../src/intel/compiler/test_vf_float_conversions.cpp:70
#2 0x5604a041a323 in void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) ../src/gtest/src/gtest.cc:2402
#3 0x5604a0405c31 in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) ../src/gtest/src/gtest.cc:2438
#4 0x5604a03ab03b in testing::Test::Run() ../src/gtest/src/gtest.cc:2474
#5 0x5604a03ad714 in testing::TestInfo::Run() ../src/gtest/src/gtest.cc:2656
#6 0x5604a03afea2 in testing::TestCase::Run() ../src/gtest/src/gtest.cc:2774
#7 0x5604a03cb87c in testing::internal::UnitTestImpl::RunAllTests() ../src/gtest/src/gtest.cc:4649
#8 0x5604a041df3c in bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) ../src/gtest/src/gtest.cc:2402
#9 0x5604a0409609 in bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) ../src/gtest/src/gtest.cc:2438
#10 0x5604a03c2e9e in testing::UnitTest::Run() ../src/gtest/src/gtest.cc:4257
#11 0x5604a0442d57 in RUN_ALL_TESTS() ../src/gtest/include/gtest/gtest.h:2233
#12 0x5604a0442c17 in main ../src/gtest/src/gtest_main.cc:37
#13 0x7f9a1983dbba in __libc_start_main ../csu/libc-start.c:308
#14 0x5604a0390d89 in _start (/home/daenzer/src/mesa-git/mesa/build-amd64-sanitize/src/intel/compiler/vf_float_conversions+0x8dd89)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Adam Jackson <ajax@redhat.com>
To avoid it, use the modulo of the number of bits in the value being
shifted, which is presumably what ended up happening on x86.
Flagged by UBSan:
../src/intel/compiler/brw_eu_validate.c:974:33: runtime error: shift exponent 64 is too large for 64-bit type 'long unsigned int'
#0 0x561abb612ab3 in general_restrictions_on_region_parameters ../src/intel/compiler/brw_eu_validate.c:974
#1 0x561abb617574 in brw_validate_instructions ../src/intel/compiler/brw_eu_validate.c:1851
#2 0x561abb53bd31 in validate ../src/intel/compiler/test_eu_validate.cpp:106
#3 0x561abb555369 in validation_test_source_cannot_span_more_than_2_registers_Test::TestBody() ../src/intel/compiler/test_eu_validate.cpp:486
#4 0x561abb742651 in void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) ../src/gtest/src/gtest.cc:2402
#5 0x561abb72e64d in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) ../src/gtest/src/gtest.cc:2438
#6 0x561abb6d5451 in testing::Test::Run() ../src/gtest/src/gtest.cc:2474
#7 0x561abb6d7b2a in testing::TestInfo::Run() ../src/gtest/src/gtest.cc:2656
#8 0x561abb6da2b8 in testing::TestCase::Run() ../src/gtest/src/gtest.cc:2774
#9 0x561abb6f5c92 in testing::internal::UnitTestImpl::RunAllTests() ../src/gtest/src/gtest.cc:4649
#10 0x561abb74626a in bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) ../src/gtest/src/gtest.cc:2402
#11 0x561abb732025 in bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) ../src/gtest/src/gtest.cc:2438
#12 0x561abb6ed2b4 in testing::UnitTest::Run() ../src/gtest/src/gtest.cc:4257
#13 0x561abb768b3b in RUN_ALL_TESTS() ../src/gtest/include/gtest/gtest.h:2233
#14 0x561abb7689fb in main ../src/gtest/src/gtest_main.cc:37
#15 0x7f525e5a9bba in __libc_start_main ../csu/libc-start.c:308
#16 0x561abb538ed9 in _start (/home/daenzer/src/mesa-git/mesa/build-amd64-sanitize/src/intel/compiler/eu_validate+0x1b8ed9)
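For illustration only (names are made up, not the literal patch):

  /* a shift count of 64 is undefined for a 64-bit type; reducing the
   * count modulo 64 presumably matches what x86 was doing anyway */
  uint64_t bits = value << (count % 64);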
Reviewed-by: Adam Jackson <ajax@redhat.com>
`strerror()` takes an `errno`, not the negative value returned by the
`ioctl()`.
Instead of fixing this as `"%s", strerror(errno)`, let's just use the
`"%m"` shortcut for it.
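For illustration only (placeholder names, and a generic fprintf instead
of the driver's actual logging helper):

  int ret = ioctl(fd, request, &data);
  if (ret < 0)
     fprintf(stderr, "ioctl failed: %m\n"); /* %m expands to strerror(errno) in glibc */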
Fixes: 2b5f30b1d9 ("anv: implement VK_INTEL_performance_query")
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
The total kernel GMR/DMA size is limited, but it's definitely possible for
the kernel to allow a larger buffer allocation to succeed; command
submission using that buffer as a GMR would then fail, typically causing an
application crash.
So have the winsys limit the size of GMR/DMA buffers. The pipe driver will
then resort to allocating smaller buffers and perform the DMA transfer in
multiple bands, also allowing for the pre-flush mechanism to kick in.
This avoids the related application crashes.
Fixes: e7843273fa ("winsys/svga: Update to vmwgfx kernel module 2.1")
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Charmaine Lee <charmainel@vmware.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Even with banded DMA uploads, st->hwbuf is always non-NULL, but when we've
allocated a software buffer to hold the full upload, unmapping of the
hardware buffer has already been done before
svga_texture_transfer_unmap_dma(), and the code was performing an unmap of
an already unmapped buffer.
Fix this by testing for software buffer not present.
Fixes: a9c4a861d5 ("svga: refactor svga_texture_transfer_map/unmap functions")
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Charmaine Lee <charmainel@vmware.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Update to 5.4-rc4 so we can test Panfrost on devices with Mali T720 and
T820.
A bug was found that prevented things working at all on RK3288 devices,
so we carry a patch for now in my personal fork.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Acked-by: Daniel Stone <daniels@collabora.com>
If there are queued shaders to be written to disk, wait for that.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
We were relying on specific pass ordering in st to avoid setting
inputs_read/outputs_written for edge flags. Instead, just assume
that it happens and throw out the results we don't want.
We should probably revisit this and try and add a vertex element
property like I originally wanted so we can avoid having it be
associated with the VS altogether.
I recently changed the slow depth/stencil clear path to make sure
depth values are explicitly exported by the fragment shader. This
is actually only useful when VK_EXT_depth_range_unrestricted is
enabled.
While this path is correct, it introduced a performance regression
with Heroes of the Storm, Shadow of Mordor (Vulkan beta) and
probably more titles. This is because it prevents the hardware
from doing some optimizations like discarding fragments.
This commit re-introduces the previous (a bit faster) slow
depth/stencil clear path and it selects the unrestricted path
only if VK_EXT_depth_range_unrestricted is enabled.
Closes: https://gitlab.freedesktop.org/mesa/mesa/issues/863
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
descriptorCount is the number of bytes into the descriptor, so
it shouldn't be used as an index. srcArrayElement/dstArrayElement
specify the starting byte offset within the binding to copy from/to.
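As an illustration of the semantics (hypothetical values, not taken from
the CTS tests below):

  /* copy 16 bytes of inline uniform data, starting at byte 8 of the
   * source binding, into byte 0 of the destination binding */
  VkCopyDescriptorSet copy = {
     .sType = VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET,
     .srcSet = src_set, .srcBinding = 0, .srcArrayElement = 8, /* byte offset */
     .dstSet = dst_set, .dstBinding = 0, .dstArrayElement = 0, /* byte offset */
     .descriptorCount = 16, /* number of bytes, not an array index */
  };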
This fixes new CTS tests:
dEQP-VK.binding_model.descriptor_copy.*.inline_uniform_block_*
dEQP-VK.binding_model.descriptor_copy.*.mix_3
dEQP-VK.binding_model.descriptor_copy.*.mix_array1
Fixes: 8d2654a419 ("radv: Support VK_EXT_inline_uniform_block.")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
It used to cause weird issues on GFX10 in the past with vkmark and
Wreckfest, and they can't be reproduced now. Shadow Of Mordor
(Vulkan beta) hits that path and it works fine.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Fixes some crashes with dEQP-VK.geometry.layered.*.secondary_cmd_buffer
on Raven and other chips that allow rbplus.
This just prevents a crash and rbplus probably needs more work.
Cc: 19.2 <mesa-stable@lists.freedesktop.org>
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The driver only supports up to 8 samples, so it's useless to
create more pipelines than needed.
This fixes a conditional jump reported by Valgrind on GFX10:
==194282== Conditional jump or move depends on uninitialised value(s)
==194282== at 0xDBF925A: radv_gfx10_compute_bin_size (radv_pipeline.c:3242)
==194282== by 0xDBF95A6: radv_pipeline_generate_binning_state (radv_pipeline.c:3334)
==194282== by 0xDBFC1A0: radv_pipeline_generate_pm4 (radv_pipeline.c:4440)
==194282== by 0xDBFD15E: radv_pipeline_init (radv_pipeline.c:4764)
==194282== by 0xDBFD23E: radv_graphics_pipeline_create (radv_pipeline.c:4788)
==194282== by 0xDBB95A3: create_pipeline (radv_meta_clear.c:114)
==194282== by 0xDBB9AC5: create_color_pipeline (radv_meta_clear.c:297)
==194282== by 0xDBBCF05: radv_device_init_meta_clear_state (radv_meta_clear.c:1277)
==194282== by 0xDB9ACD9: radv_device_init_meta (radv_meta.c:363)
==194282== by 0xDB7FE3A: radv_CreateDevice (radv_device.c:2080
This is caused by an out-of-bounds access of 'fmask_array' (ie. the index
is 4, as for 16 samples).
Cc: <mesa-stable@lists.freedesktop.org>
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
v2: Introduce the appropriate pipe controls
Properly deal with changes in metric sets (using execbuf parameter)
Record marker at query end
v3: Fill out PerfCntr1&2
v4: Introduce vkUninitializePerformanceApiINTEL
v5: Use new execbuf extension mechanism
v6: Fix comments in genX_query.c (Rafael)
Use PIPE_CONTROL workarounds (Rafael)
Refactor on the last kernel series update (Lionel)
v7: Only I915_PERF_IOCTL_CONFIG when perf stream is already opened (Lionel)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
We have 2 of those we can configure to source programmable events.
Those are not part of the OA reports. Configuration happens in i915
through the metric set selected by the application. On the Mesa side
we'll just sample those and do a diff.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
Pull new updates from drm-next as of the following commit:
commit f1b4a9217efd61d0b84c6dc404596c8519ff6f59
Merge: 400e91347e1d f3a36d469621
Author: Dave Airlie <airlied@redhat.com>
Date: Tue Oct 22 15:04:00 2019 +1000
Merge tag 'du-next-20191016' of git://linuxtv.org/pinchartl/media into drm-next
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
We want to query the content of register configurations from the
kernel. Let's pull this out of the query.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
The Vulkan performance query extension is a bit lower level than the
GL one. Expose some of the functions to do the result accumulation
directly in the Anv driver.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
This is useful for PBO texture upload with GL_RGB and GL_UNSIGNED_BYTE.
v2: Vasily Khoruzhick provided an update for the Lima CI expectations.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
p_extract_vector's second operand is in units of the definition size, not
dwords.
v2: move extract_subvector() to right before ds_write_helper
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
Small typo resulted in not converting footprint to vec4, meaning that we
could potentially ask for quite a few more registers than required
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
If the load_interpolated_input is scalarized, we would be too
conservative about deciding the tex instruction wasn't a candidate to
pre-fetch:
vec1 32 ssa_0 = load_const (0x00000000 /* 0.000000 */)
vec2 32 ssa_1 = intrinsic load_barycentric_pixel () (0) /* interp_mode=0 */
vec1 32 ssa_2 = intrinsic load_interpolated_input (ssa_1, ssa_0) (0, 0) /* base=0 */ /* component=0 */ /* packed:v_uv,v_uv1 */
vec1 32 ssa_3 = intrinsic load_interpolated_input (ssa_1, ssa_0) (0, 1) /* base=0 */ /* component=1 */ /* packed:v_uv,v_uv1 */
vec2 32 ssa_8 = vec2 ssa_2, ssa_3
vec4 32 ssa_9 = tex ssa_8 (coord), 0 (texture), 0 (sampler)
Really we don't care that the texcoord components come from different
load_interpolated_input instructions, just that they have consecutive
varying offsets.
Reported-by: Eduardo Lima Mitev <elima@igalia.com>
Reviewed-by: Eduardo Lima Mitev <elima@igalia.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Previously, we used one hashset per BB, so that we could
always initialize the current hashset from the immediate
dominator. This patch changes the behavior to a single
hashmap using the block index per instruction to resolve
dominance.
Reviewed-by: Rhys Perry <pendingchaos02@gmail.com>
Some of these lowerings aren't supported for drivers that support
tessellation and geometry shaders. Let's add a couple of asserts to make
it obvious if these have been enabled when it's not possible.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
v2:
* Use LLVM 8 from buster-backports
v3:
* Use LLVM 7 again for armhf, llvmpipe is still broken there with LLVM 8
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
This allows running the regression tests.
One downside is that we can't easily build the Vulkan overlay layer,
because only x86 binaries of the glslang validator are available. If
that's important, we could either use those binaries via qemu, or build
it from source.
v2:
* Add :amd64 suffix to existing debian-9/10 job names (Eric Engestrom)
Acked-by: Eric Engestrom <eric.engestrom@intel.com> # v1
Apparently needs: in a definition overwrites inherited ones. So
.deqp-test effectively didn't declare needs: for debian-10, which means
any jobs based on .deqp-test could spuriously run after the debian-10
job failed or was cancelled.
Use https:// URLs in the APT configuration.
Drop --no-install-recommends, the image generation template disables
installation of recommended packages in /etc/apt/apt.conf.
Run apt-get autoremove at the end, cleaning up packages which were
installed to satisfy dependencies but are no longer needed.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
On GFX9, the driver is able to do an optimized fast depth/stencil
clear with only one aspect (ie. clear the stencil part of a
depth/stencil image). When this happens, the driver should only
update the clear values of the given aspect.
Note that it's currently only supported on GFX9 but I have some
local patches that extend this optimized path for other gens.
Closes: https://gitlab.freedesktop.org/mesa/mesa/issues/1967
Cc: 19.2 <mesa-stable@lists.freedesktop.org>
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
On Gen12, we support mixed mode HF/F operands, and 3-source
instructions also support immediate values, so keep the immediate as it
is, if it fits properly in the 16-bit field.
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
On Gen >= 12, if src0 or src2 holds an immediate value, we need to set
the src[0/2]_is_imm bits instead of the register file.
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
On Gen >= 10, either src0 or src2 can use a 16-bit immediate value, but
not both.
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
It's been throwing the following error today:
"<Fault -32603: 'Internal Server Error (contact server administrator
for details): could not extend file "base/17952/18226": No space left
on device\nHINT: Check free disk space.\n'>"
Reviewed-by: Daniel Stone <daniels@collabora.com>
If you set LP_NUM_THREADS=0, compute shaders would hang;
just execute the workloads in sequence if there are no threads
in the pool.
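Roughly (hypothetical names, not the actual llvmpipe code):

  /* with an empty pool, run the work item synchronously instead of
   * queueing it for a worker thread that will never pick it up */
  if (pool->num_threads == 0)
     task->work(task->data);
  else
     enqueue_task(pool, task);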
Fixes: 1b24e3ba75 ("llvmpipe: add compute threadpool + mutex")
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
This file is created in 2a0d45ae6c but
the addition to the Android makefiles was omitted. This breaks the build with
missing references which are defined in this file.
List the file in ir3_SOURCES to make the build succeed.
Signed-off-by: Marijn Suijten <marijns95@gmail.com>
This fixes some crashes with dEQP-VK.descriptor_indexing.* when
read_first_invocation has its source from a descriptor.
Most of these tests still fail because of an LLVM bug (they work
with ACO).
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
v2: make variable names snake_case
v2: minor cleanups in emit_udiv()
v2: fix Panfrost build failure
v3: use an enum instead of a boolean flag in nir_lower_idiv()'s signature
v4: remove nir_op_urcp
v5: drop nv50 path
v5: rebase
v6: add back nv50 path
v6: add comment for nir_lower_idiv_path enum
v7: rename _nv50/_llvm to _fast/_precise
v8: fix etnaviv build failure
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
Remove emit_alpha_to_coverage workaround from backend compiler and start
using ported workaround from NIR.
v2: Copy comment from brw_fs_visitor (Caio Marcelo de Oliveira Filho)
Fixes piglit test on HSW:
- arb_sample_shading-builtin-gl-sample-mask-mrt-alpha-to-coverage-combinations
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Importing this pass from fs_visitor::emit_alpha_to_coverage_workaround()
in intel/compiler.
v2 (Caio Marcelo de Oliveira Filho):
- Track store output and sample mask instruction
- Nest math instructions for more readability
- Bail out early if no gl_SampleMask
v3: (Caio Marcelo de Oliveira Filho):
- Do math instructions after instruction block
- Restructure code
- Move pass under src/intel/compiler
v4: (Caio Marcelo de Oliveira Filho):
- Organize dither mask calculation
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
0.49.0 can compile most of mesa with ICC or ICL, but not SWR without
additional workarounds in our meson.build files. Bumping patch version
is easier and shouldn't be a big burden anyway, especially to cover a
niche compiler. The check originally only covered ICC, but now covers
ICL as well.
Fixes: 3740ffb59c
("meson: add switches for SWR with MSVC")
Closes: https://gitlab.freedesktop.org/mesa/mesa/issues/1937
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Due to a bug in GFX10 hardware, s_nop instructions must be added
if a branch is at 0x3f. We already do this, but forgot to also update
the constant addresses that come after this instruction.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
Currently if you have an SMEM store followed by an SMEM load that
loads the same location as was written, it won't work because the
store isn't finished before the load is executed. This is NOT
mitigated by an s_nop instruction on GFX10.
Since we currently don't have proper alias analysis, this commit adds
a workaround which will insert an s_waitcnt lgkmcnt(0) before each
SSBO load if they follow a store. We should further refine this in
the future when we can make sure to only add the wait when we load the
same thing as has been stored.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
The current implementation does not synchronize on BO readiness when
DISCARD_WHOLE_RES flag is set, which can lead to misbehaviours when the
resource being updated is being used by one of the pending or already
flushed batches.
Adding unconditional BO synchronization would do the trick, but we can
sometimes optimize this path by re-allocating a new BO instead of
waiting for the existing one to be ready.
Reported-by: Daniel Stone <daniels@collabora.com>
Reported-by: Heinrich Fink <heinrich.fink@daqri.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
According to the OES_geometry_shader spec, section Dependencies:
"OpenGL ES 3.1 and OpenGL ES Shading Language 3.10
are required."
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
We currently don't maintain it correctly and the buffer gets leaked if
the surface is destroyed before swapping buffers.
From Android frameworks/native/libs/nativewindow/include/system/window.h:
The window holds a reference to the buffer between dequeueBuffer and
either queueBuffer or cancelBuffer, so clients only need their own
reference if they might use the buffer after queueing or canceling it.
v2: Remove our own reference.
Fixes: 0212db3504 ("egl/android: Cancel any outstanding ANativeBuffer in surface destructor")
Reviewed-by: Chia-I Wu <olvaffe@gmail.com> (v1)
Reviewed-By: Tapani Pälli <tapani.palli@intel.com>
Signed-off-by: Lepton Wu <lepton@chromium.org>
If a pipeline has both graphics and compute, descriptors are the same.
While we are at it, use queue->device for simplicity.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
To make sure a trace file is generated in case the driver crashes
during the hang report generation (which happens sometimes).
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This information has never been useful. All descriptors are
already dumped with colors etc, which is more useful.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
A bunch of blend tests fixed on T760. A single blend test regressed on
both T760/T860 but I am unable to reproduce locally so am just
documenting the regression and moving on.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We would like to eliminate not just entire dead instructions, but also
dead components, which increases scheduler flexibility (since some
vector instructions can become scalar after eliminating dead
components). This also will allow better RA in the future.
Results are meh.
total instructions in shared programs: 3453 -> 3451 (-0.06%)
instructions in affected programs: 60 -> 58 (-3.33%)
helped: 2
HURT: 0
total bundles in shared programs: 1826 -> 1824 (-0.11%)
bundles in affected programs: 33 -> 31 (-6.06%)
helped: 2
HURT: 0
total quadwords in shared programs: 3144 -> 3144 (0.00%)
quadwords in affected programs: 0 -> 0
helped: 0
HURT: 0
total registers in shared programs: 321 -> 321 (0.00%)
registers in affected programs: 45 -> 45 (0.00%)
helped: 11
HURT: 11
helped stats (abs) min: 1 max: 1 x̄: 1.00 x̃: 1
helped stats (rel) min: 16.67% max: 50.00% x̄: 39.70% x̃: 50.00%
HURT stats (abs) min: 1 max: 1 x̄: 1.00 x̃: 1
HURT stats (rel) min: 100.00% max: 100.00% x̄: 100.00% x̃: 100.00%
95% mean confidence interval for registers value: -0.45 0.45
95% mean confidence interval for registers %-change: -1.87% 62.18%
Inconclusive result (value mean confidence interval includes 0).
total threads in shared programs: 445 -> 447 (0.45%)
threads in affected programs: 2 -> 4 (100.00%)
helped: 1
HURT: 0
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This allows for vec16 dependencies in the scheduler, not that we have
any yet (thankfully).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Now that we have notion of byte masks, liveness tracking can be updated
to reflect this extra granularity without loss of correctness.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Read component masks don't have a particular type associated, since the
type of the ALU operation may not match the type of the operands in
question. So let's generate byte masks instead, and update the rest of
the compiler to use byte masks when analyzing reads.
Preparation for mixed types.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
There are essentially two formats of masks in play beginning with this
commit: masks per-channel and masks per-byte. The former make sense
within a given fixed-size instruction; the latter are
typesize-independent. It turns out you need the latter to meaningfully
manipulate instructions containing multiple sizes (which is quite
possible with ALU operations).
Similarly, we have mir_srcsize. We calculate the size of the source by
analyzing the size of the instruction itself and stepping down if there
is a half-modifier.
Finally, we have mir_round_bytemask_down, for when we want to take a
byte mask and "round it down" to a given component size, so that we can
use it as a component mask.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This will allow us to encode properties about the load/store ops like we
do for ALU ops. We now include properties about whether we have a store,
and if there are special cases on the load/store op. We also tag each
instruction by its natural size... this is probably not totally right,
but it's a start.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The trick is realizing that even with a destination override, the masks
are encoded in the same mode as the instruction itself, rather than
stepping down. The override means that the smaller type is used, but the
mask is parsed as if it were the higher type. Overriding down is handled
by simply printing the mask this way. Overriding up can be thought of as
printing in the upper size, but shifting the alphabet to use the upper
half, i.e. shifting xyzw to become abcd.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Add some comments explaining what's going on in a more natural flow in
order to solve the actual bug.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Fixes: 2d914ebe81 ("pan/midgard: Fix memory corruption in register spilling")
This allows a write to proceed to an uninitialized part of a buffer
even when the GPU is using the previously-initialized portions.
Such a situation can be triggered with the following API usage example:
glBufferSubData(..., offset, size, data1);
glDrawArrays(...);
// append new vertex data
glBufferSubData(..., offset+size, size, data2);
glDrawArrays(...);
Same is done for freedreno, nouveau and radeon.
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
This is the layout used in the GL API, and maps directly to PIPE
formats with no endianness trickery. As with the LA change, this
fixes big-endian fetching from texbos. Also cleans up some endian
shenanigans in shader images.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Now that Mesa is also using an array format for LA, nothing was using
these. (And, clearly, no HW driver had exposed them).
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The array format is what the GL API wants (fixing texbos on
big-endian), and matches directly to gallium's corresponding array
format. The only driver exposing A8L8 was radeon/r200 in big-endian,
where the HW's underlying format was trying to read as array and we
needed to flip things around to make our packed format come out right
(note that while the radeon format tables had both AL and LA,
ChooseTextureFormat would only pick one of them based on endianness).
v2: Don't make r200/radeon use endian swaps.
v3: Rebase on dropping the r200 _be/_le format table removal patch
v4: reword commit message to explain why we can drop both formats
from radeon.
Reviewed-by: Marek Olšák <marek.olsak@amd.com> (v1)
The array format is what the GL API wants (and we made a mistake in
the format returned for texbos on big-endian!), and it's exactly what
the gallium-side PIPE_FORMAT_L16A16 is. The only downside is that
dri_util tries to fall back to sampling RG16 using LA16, which doesn't
have a match for big-endian any more. No HW drivers supported A16L16
anyway.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The first arg to OUT_BATCH_RELOC is ignored, we actually wanted these
in the third arg. They're always 0 so far, so it didn't matter.
v2: Reword commit message that I don't end up using the tile bits, but
keep the commit as a cleanup anyway.
Reviewed-by: Marek Olšák <marek.olsak@amd.com> (v1)
No matter what, we deref the texFormat from the table, except for a
mistake in cpp=4 where we pulled a 0 out of the table either way.
v2: Rebase on dropping r200 table deduplication patch.
Reviewed-by: Marek Olšák <marek.olsak@amd.com> (v1)
PP stack size should be set to the maximum PP stack size, not to the stack
size of the last shader.
Fixes: 27e7603c34 ("lima: fix ppir spill stack allocation")
Tested-by: Icenowy Zheng <icenowy@aosc.io>
Reviewed-by: Erico Nunes <nunes.erico@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Similar to iadd, we can fold an added constant value from an imad24_ir3
into the load_uniform's constant offset. This avoids some cases where
the addition of imad24_ir3 could otherwise be a regression in instr
count.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Eduardo Lima Mitev <elima@igalia.com>
We can't encode immed sources for cat3 (mad) instructions, but we can
use const in first or third src. We handled this case already, but we
weren't considering that we could lower immed to const.
For manhattan:
total instructions in shared programs: 35202 -> 34718 (-1.37%)
instructions in affected programs: 14931 -> 14447 (-3.24%)
helped: 90
HURT: 0
total full in shared programs: 2451 -> 2359 (-3.75%)
full in affected programs: 653 -> 561 (-14.09%)
helped: 69
HURT: 2
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Lower amul to either imul or imul24, depending on whether 24b is enough
bits to calculate an offset within the thing being dereferenced.
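Roughly, the decision boils down to whether the largest offset that can
be produced still fits in 24 bits (illustrative check with made-up names,
not the actual pass):

  /* idx * stride stays within a 24-bit multiply as long as the
   * addressed range is below 1 << 24 bytes (16 MB) */
  bool use_imul24 = max_offset < (1ull << 24);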
Signed-off-by: Rob Clark <robdclark@chromium.org>
Used for address/offset calculation (ie. array derefs), where we can
potentially use less than 32b for the multiply of array idx by element
size. For backends that support `imul24`, this gives a lowering pass
an easy way to find multiplies that potentially can be converted to
`imul24`.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Eduardo Lima Mitev <elima@igalia.com>
Some hardware can do 24b multiply in a single instruction, but not 32b.
However in most cases 24b is sufficient for address/offset calculation.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Eduardo Lima Mitev <elima@igalia.com>
ir3 compiler has a signed integer multiply-add instruction (MAD_S24)
that is used for different offset calculations in the backend.
Since we intend to move some of these calculations to NIR, we need
a new ALU op that can directly represent it.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Eduardo Lima Mitev <elima@igalia.com>
Otherwise, if the base type is (for example) uint32, we would
incorrectly think that PoT optimizations could not apply.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Eduardo Lima Mitev <elima@igalia.com>
The pass should run once at the end of shader compilation, for a4xx
onwards. It iterates texture sampling instructions and marks those
eligible for pre-dispatch by changing the tex op from 'tex' to
'tex_prefetch'. An instruction is eligible if:
* The coordinate is a vector where all its components come from a
shader input.
* The order of the components match exactly that of the input (no
swizzles).
* The instruction is in the 'main' function, and in the outer
most-block.
The first two restrictions were arrived at empirically, so more
testing could tighten or loosen them.
The 3rd restriction is there to allow moving the instructions
eligible for pre-dispatch to the beginning of the shader, so
that we don't block the registers holding the result for too
long.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
It seems that pre-fs texture fetch only works if ij_pix ends up in r0.x.
I've tried unknown zero bits, to no avail, and blob also seems to force
r0.x when this feature is used.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Useful to see in disassembly listing texture fetches that were moved to
pre-dispatch.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
If the only use of varyings is a pre-shader texture-fetch, we still need
to issue a bary.f with the end-input flag, otherwise we'll block further
VS invocations, as the hw will think varying storage is still busy.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
It is possible that the result of a pre-fs texture fetch is an output
(or partially an output) of the FS. Since the meta:tex_prefetch
instructions are dropped before the assembler, we need to account for
this when we fixup the register footprint.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Add a placeholder instruction to track texture fetches made prior to FS
shader dispatch. These, like meta:input instructions are scheduled
before any real instructions, so that RA realizes their result values
are live before the first real instruction. And to give legalize a way
to track usage of fetched sample requiring (sy) sync flags.
There is some related special handling for varying texcoord inputs used
for pre-fs-fetch, so that they are not DCE'd and remain in linkage
between FS and previous stage. Note that we could almost avoid this
special handling by giving meta:tex_prefetch real src arguments, except
that in the FS stage, inputs are actual bary.f/ldlv instructions.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
When we enable pre-dispatch texture fetch, we could have a scenario
where the barycentric i/j coord sysval is not used in the shader, but
only used for the varying fetch for the pre-dispatch texture fetch.
In this case we need to take care not to DCE this sysval.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Will be needed for special handling of SYSTEM_VALUE_BARYCENTRIC_PIXEL
(ij_pix) when pre-fs texture fetch is enabled.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Not sure I remember how long this has been unused for. But it's unused
now.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
This is like nir_texop_tex, but signals that the sampling coordinates
are immutable during the shader stage, in a way that allows the HW
that supports pre-dispatching sampling operations to pre-fetch
the result prior to scheduling the shader stage.
This is introduced to support the feature in Freedreno. Adreno HW
from a4xx supports it.
A NIR pass introduced later in this series will detect sampling
operations that are eligible for pre-dispatch, and replace
nir_texop_tex by this new op, to tell the backend to enable
pre-fetch.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
We don't use cmake normally because it always results in static linking.
This is very problematic for *nix OSes which expect shared linking by
default, but for windows this isn't a problem as LLVM doesn't support
shared linking on windows anyway.
Reviewed-by: Adam Jackson <ajax@redhat.com>
For building on Windows (when not using Cygwin), users may want to use a
binary wrap of LLVM. This provides a fallback to the LLVM dependency
which may be used in this case.
Reviewed-by: Adam Jackson <ajax@redhat.com>
MIN filter is only used when LOD MAX is at least 4 (I guess the 2 LSB don't
actually exist).
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
The etnaviv kernel driver will only ever flush write caches. As both
the TX descriptor and instruction cache are read caches they must be
flushed from the user cmdstream at an appropriate time.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Jonathan Marek <jonathan@marek.ca>
It's just a matter of writing the addressing mode into the
texture descriptor.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Jonathan Marek <jonathan@marek.ca>
Create a separate implementation file with texture-descriptor-based
sampler views and sampler states. Initialize one or the other based on
the GPU. There is so little in common that this seemed more appropriate,
as keeping them as one type of state object would only be confusing.
This commit is actually a combination of the original commit by
Wladimir, fixes and the TS implementation from Jonathan, and changes to
use softpin by Lucas.
Signed-off-by: Wladimir J. van der Laan <laanwj@gmail.com>
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
Reviewed-by: Jonathan Marek <jonathan@marek.ca>
Halti5 uses texture descriptors to control the samplers, and thus needs to
know the GPU virtual address for the texture buffers to fill into the
descriptor buffer. Without softpin userspace has no control over the GPU
VM and also no way to fix up the texture descriptor buffer, so there is
no point in creating a screen on a Halti5 device without softpin being
available.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Jonathan Marek <jonathan@marek.ca>
If softpin is available on the kernel side, we transparently replace the
relocs with self-managed GPU virtual addresses. This allows skipping some
work on the kernel side, as it doesn't need to touch the command stream
anymore before submitting it to the hardware.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Jonathan Marek <jonathan@marek.ca>
Replace the per-screen locking of flushing with per-context one and
add per-context lock around command stream buffer accesses, to prevent
cross-context flushing from corrupting these command stream buffers.
Signed-off-by: Marek Vasut <marex@denx.de>
Reallocate the command stream buffer in case it is too small.
The older kernel versions are limited to a 64 kiB buffer, so
limit the size to avoid oversized buffers.
Signed-off-by: Marek Vasut <marex@denx.de>
Have each context track which resources it marked as pending read and
pending write. Have each resource track in which context it is pending.
This way, it is possible to identify when a resource is both pending
read and pending write at the same time. Moreover, the status field
can be correctly calculated and updated when necessary.
Signed-off-by: Marek Vasut <marex@denx.de>
This way we can ensure that the pipe driver tracking of pending resources
stays in sync with the actual command buffer state, even if a space
reservation triggers a forced flush.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Jonathan Marek <jonathan@marek.ca>
As long as a resource is pending in any context we must not destroy
it, otherwise we'll hit a classical use-after-free with fireworks.
To avoid this take a reference when the resource is first added to
the pending set and put the reference when no longer pending.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Jonathan Marek <jonathan@marek.ca>
Currently, the screen tracks all resources for all contexts, but this
is not correct. Each context should track the resources it uses. This
also allows a context to detect whether a resource is used by another
context and to notify another context using a resource that the current
context is done using the resource.
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: Christian Gmeiner <christian.gmeiner@gmail.com>
Cc: Guido Günther <guido.gunther@puri.sm>
Cc: Lucas Stach <l.stach@pengutronix.de>
This exposes what's required for DX and this is what we already
configure. The driver flushes denorms for FP32 and preserves them
for FP16/FP64. Note that we can't allow both preserving and
flushing denorms because this won't work for merged shaders. This
will require LLVM to update the float mode register to make it work.
Only enabled on GFX8+ with the LLVM path because it's untested on
previous chips and ACO doesn't support it.
This extension is required for SPIRV 1.4.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Because some instructions will be optimized by the backend compiler,
the driver has to manually flush to zero to keep the result exact.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The new Mac OS X images apparently already have python2 and python3,
and `brew` considers asking to install something already installed
as a fatal error...
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
This will expose GL_EXT_primitive_bounding_box and
GL_OES_primitive_bounding_box after previous commits
expose OpenGL ES 3.1 once Compute Shaders are available.
Reviewed-by: Eric Anholt <eric@anholt.net>
This adapts the v3d driver to the new CL submit ioctl interface that
allows the driver to request a flush of the caches after the render
job has completed. This seems to eliminate the kernel write violation
errors reported during CTS and Piglit executions, fixing some CTS tests
and GPU resets along the way.
v2:
- Adapt to changes in the kernel side.
- Disable shader storage and shader images if the kernel doesn't
implement cache flushing.
Fixes CTS tests:
KHR-GLES31.core.shader_image_size.basic-nonMS-fs-float
KHR-GLES31.core.shader_image_size.basic-nonMS-fs-int
KHR-GLES31.core.shader_image_size.basic-nonMS-fs-uint
KHR-GLES31.core.shader_image_size.advanced-nonMS-fs-float
KHR-GLES31.core.shader_image_size.advanced-nonMS-fs-int
KHR-GLES31.core.shader_image_size.advanced-nonMS-fs-uint
KHR-GLES31.core.shader_atomic_counters.advanced-usage-many-draw-calls2
KHR-GLES31.core.shader_atomic_counters.advanced-usage-draw-update-draw
KHR-GLES31.core.shader_storage_buffer_object.advanced-unsizedArrayLength-fs-int
KHR-GLES31.core.shader_storage_buffer_object.advanced-unsizedArrayLength-fs-std140-matR
KHR-GLES31.core.shader_storage_buffer_object.advanced-unsizedArrayLength-fs-std140-struct
KHR-GLES31.core.shader_storage_buffer_object.advanced-unsizedArrayLength-fs-std430-matC-pad
KHR-GLES31.core.shader_storage_buffer_object.advanced-unsizedArrayLength-fs-std430-vec
Reviewed-by: Eric Anholt <eric@anholt.net>
Now that the UAPI has landed, add the pipe_context function for
dispatching compute shaders. This is the last major feature for GLES 3.1,
though it's not enabled quite yet.
We set that for any TMU write, on spills and general TMU. It is then
used as part of v3d_emit_gl_shader_state later.
v2: add a new flag instead at v3d_compiler instead of dirty the flag
at v3dx if there is any spill (change suggested by Eric, added by
Alejandro)
v3: set this for anything that is not a load and do it also in
v3d40_vir_emit_image_load_store (Eric)
Reviewed-by: Eric Anholt <eric@anholt.net>
The SCR_INIT macro used to install the rbug resource_changed method
will only do so when the driver below rbug exposes this method, so
the check will always evaluate to true.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
All the other context method initializations follow the order of the pipe_context
structure definition making it easy to find unimplemented methods in rbug.
Move the flush_resource init to follow the same order.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
All resources passed to the drivers below rbug need to be unwrapped before
being passed down. We failed to do this for the index buffer resource when
it was made part of the draw_info structure.
Fixes: 330d0607ed (gallium: remove pipe_index_buffer and set_index_buffer)
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
The rbug wire format defines the texture size parameters to be uint32_t sized
and uses memcpy to move the function parameters to the message structure.
This caused totally wrong transmitted texture sizes since the height and depth
parameters have been changed to uint16_t in the gallium API. Fix this by doing
an explicit conversion to the correct representation before packing into the
wire message.
Fixes: e6428092f5 (gallium: decrease the size of pipe_resource - 64 -> 48 bytes)
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Using 0 as the backlog argument to listen() is exploiting implementation
defined behavior and will lead to no connections being accepted on some
libc implementations.
Quote of the listen manpage: "A backlog argument of 0 may allow the socket to
accept connections, in which case the length of the listen queue may be set to
an implementation-defined minimum value."
Fix this by using a more sensible backlog value.
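For illustration (placeholder socket variable, and the backlog value
chosen by the actual patch may differ):

  /* request a real backlog instead of relying on the implementation-
   * defined behaviour of listen(fd, 0) */
  listen(sockfd, 1);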
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
It seems that for desktop GL this was included with ARB_gpu_shader5, but
for OpenGL ES this is already included with the base extension and there is
a CTS test that checks this.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Implement the 3 functions using the texturestorage_error() helper.
_mesa_lookup_or_create_texture is always called to make sure that 'texture'
is initialized (even if the texturestorage_error() generates an error afterwards).
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
When we import a resource through Gallium, we need to take account of
the offset parameter passed.
Fixes a failure seen with the VIVID V4L2 driver, which would create NV12
resources within the same BO, with an offset. Sample pipeline to
reproduce (replace videoN with your actual VIVID device node):
gst-launch-1.0 v4l2src device=/dev/videoN ! video/x-raw,format=NV12 ! glimagesink
Signed-off-by: Daniel Stone <daniels@collabora.com>
Reviewed-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reported-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Tested-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Reworks:
* Change subject from "iris: Align main surface allocation to 64k on gen12+"
* Make use of isl surf alignment. (Nanley)
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reworks:
* Fill out the format's entry in the ISL format table. (Nanley)
* Support CCS_E-enabled BLORP copies with the format. (Nanley)
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This can be useful to measure whether memory access optimizations are
having the desired effect. For example, we might see a reduction in
image loads/stores, or constant buffer loads. We can already see this
in cycle estimates to some extent, but this is a more direct approach,
minus a lot of the noise of random scheduler shuffling.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The removed st_nir_opts calls are mostly redundant.
There is an improvement with shader-db on radeonsi:
Before:
real 1m54.047s
user 28m37.857s
sys 0m7.573s
After:
real 1m52.012s
user 28m3.412s
sys 0m7.808s
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
DPH isn't actually commutative, so this doesn't work. If the immediate
in src0 would be a VF candidate, we could do better. *shrug*
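(For reference: DPH computes src0.x*src1.x + src0.y*src1.y +
src0.z*src1.z + src1.w, so swapping the two sources changes which w term
is added and generally changes the result.)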
No shader-db changes on any Intel platform.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Fixes: b04beaf41d ("intel/vec4: Try both sources as candidates for being immediates")
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Fixes: 09705747d7 ("nir/algebraic: Reassociate fadd into fmul in DPH-like pattern")
This function was difficult to implement for new formats due to the
combination of endianness and swapbytes support. Since it's mostly
used for fast paths, bugs in it were often missed during testing.
Just reimplement it on top of the recent
_mesa_format_from_format_and_type() which can give us a canonical
MESA_FORMAT for a format and type enum (while respecting endianness).
Fixes:
- R4G4B4A4_UNORM, B4G4R4_UINT, R4G4B4A4_UINT incorrectly matched with
swapBytes (you can't just reverse the channels if the channels
aren't bytes)
- A4R4G4B4_UNORM and A4R4G4B4_UINT missing BGRA/4444_REV matches
- failing to match RGB/BGR unorm8 array formats on BE
- 2101010 formats incorrectly matching with swapBytes set.
- UINT/SINT byte formats failed to match with swapBytes set.
This deletes the part of tests/mesa_formats.cpp that called
_mesa_format_matches_format_and_type() to make sure it didn't
assertion fail, as it now would assertion fail due to the fact that we
were passing an invalid format (GL_RG) for most types.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
In desktop GL, you can specify things like GL_DEPTH_COMPONENT/GL_BYTE as a
ReadPixels format, and we need to be able to represent that to see if we
have proper MESA_FORMATs for them. That's exactly what the
mesa_array_format enum is for.
v2: Drop _mesa from static fn.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
We had missed this case where GLES3 allows glReadPixels(DEPTH, UINT_24_8),
and just got lucky by the readpixels path never asking for the matching
format from this function.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The GL spec says the 24-bit component is in the high bits, and
format_unpack.c looks at the high 24 bits in the S8Z24 case, not
Z24S8.
Avoids a regression in the next commit.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The unreachable() that follows isn't very useful for debug, and by adding
this here we get a nice description of the failure in debug builds.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
When we don't have streamout enabled, we have to read this register to
get the number of primitives emitted.
Signed-off-by: Kristian H. Kristensen <hoegsberg@google.com>
When used in a GS pipeline, the VS doesn't end with the END
instruction. Instead it chains to the GS, which continues running with
the same register allocation. The intended use case seems to be that
you can compile a regular VS (ie outputs in registers and ending with
END) but then tack on link-time generated code past the END to write
the outputs using STLW, in case the VS is used with GS.
Signed-off-by: Kristian H. Kristensen <hoegsberg@google.com>
We don't know what kind of loads we might have to wait on when coming
in from chsh in the VS, so set both sync flags.
Signed-off-by: Kristian H. Kristensen <hoegsberg@google.com>
These sysvals have to be unclobbered by VS and in the same registers
in both VS and GS, since the chsh from VS to GS doesn't reload the
values. We use the pre-color argument to ir3_ra() to always place
these values in r0.x and r0.y.
Signed-off-by: Kristian H. Kristensen <hoegsberg@google.com>
Inputs are the GS header, which contains vertex ID, local primitive ID
and thread ID as well as primitive ID. The setup is a little different
from other sysvals, since we always have to receive them in the VS so
that it can pass them on into the GS.
The vertex flag output from the GS is set up as a proper nir output in
the lowering pass and doesn't need special handling here.
Signed-off-by: Kristian H. Kristensen <hoegsberg@google.com>
This implements the load_vs_primitive_stride_ir3,
load_vs_vertex_stride_ir3 and load_primitive_location_ir3 intrinsics,
used for getting the primitive layout strides and locations.
Signed-off-by: Kristian H. Kristensen <hoegsberg@google.com>
This introduces two new lowering passes. One to lower VS to explicit
outputs using STLW and one to lower GS to load input using LDLW and
implement the GS specific functionality.
Signed-off-by: Kristian H. Kristensen <hoegsberg@google.com>
Since the presence of GS changes how the VS operates we need to track
that in the shader key.
Signed-off-by: Kristian H. Kristensen <hoegsberg@google.com>
These intrinsics will let us do all the offset calculations in nir,
which is nicer to work with and lets nir_opt_algebraic eat it all up.
Signed-off-by: Kristian H. Kristensen <hoegsberg@google.com>
Before, offset held the offset, which could be either an immediate or
a register. Use a third register to hold the offset so that we can use
a register.
Signed-off-by: Kristian H. Kristensen <hoegsberg@google.com>
Just add the constructors for now and special case similar to END so
we don't remove them.
Signed-off-by: Kristian H. Kristensen <hoegsberg@google.com>
We know what these do and either write them in the program stateobj or
don't need to write them.
Signed-off-by: Kristian H. Kristensen <hoegsberg@google.com>
Tests the combinations of cases of RAW, WAW and WAR hazards involving
both inorder and outoforder instructions. Also tests that
dependencies combine and propagate correctly through control
flow (loops and conditionals).
v2: Add an extra test illustrating that the non-logical CFG edge
between then-block and else-block is being taken into
account. (Curro)
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
LLVM 8 removed both the signed and unsigned sse2/avx intrinsics in
the end, and provides arch-independent llvm intrinsics instead.
Fixes a crash when using snorm framebuffers (tested with piglit
arb_color_buffer_float-render GL_RGBA8_SNORM -auto).
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
CC: <mesa-stable@lists.freedesktop.org>
If two jobs use the same GEM object at the same time, the job that
finishes first will (previous to this commit) close the GEM object, even
if there's a job still referencing it.
To prevent this, have all jobs use the same panfrost_bo for a given GEM
object, so it's only closed once the last job is done with it.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Reviewed-by: Rohan Garg <rohan.garg@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
A new throttle fence was initialized to 1, and increased by 1
again when it was put in drawable->throttle_fence; the ref was
decreased by 1 when it was removed from drawable->throttle_fence,
so it never reached 0, causing a leak.
Fixes: ff77bf5cbf7 ("gallium: simplify throttle implementation")
Closes: https://gitlab.freedesktop.org/mesa/mesa/issues/1949
Signed-off-by: James Xiong <james.xiong@intel.com>
Reported-by: Florian Wesch <fw@info-beamer.com>
Reviewed-by: Jose Maria Casanova Crespo <jmcasanova@igalia.com>
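The refcount imbalance described above can be illustrated with a minimal C
sketch (hypothetical fence type and helper, not the actual gallium frontend
code): a fence born with one reference gets referenced again when stored, but
is only unreferenced once when removed, so its count never drops to zero.

  #include <stdio.h>

  struct fence { int refcount; };

  static void fence_reference(struct fence **dst, struct fence *src)
  {
     if (src)
        src->refcount++;                  /* take a ref for the new holder */
     if (*dst && --(*dst)->refcount == 0)
        printf("fence destroyed\n");      /* never printed in this scenario */
     *dst = src;
  }

  int main(void)
  {
     struct fence f = { .refcount = 1 };        /* created with an initial ref */
     struct fence *throttle_fence = NULL;

     fence_reference(&throttle_fence, &f);      /* stored: refcount == 2 */
     fence_reference(&throttle_fence, NULL);    /* removed: refcount == 1, leaked */
     printf("remaining refcount: %d\n", f.refcount);
     return 0;
  }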
This allows us to make sure clipdist is emitted as a scalar array rather
than two vec4s. This matches SPIR-V semantics, and will be useful for
Zink.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This actually corresponds to legal GL depth-ranges, because depth-clear
values are always in the 0..1 range in OpenGL.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This will prevent us from accidentally falling back to the wrap-db
instead of using locally installed versions.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
The one debian provides is broken in buster+, so I've just written my
own. This allows meson to find the installed zlib and prevents it from
falling back to wraps.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
It's not really needed, and there's no debian package for it so we're
forced to fall back to wraps in mesa's CI. This can be problematic in
itself.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
nvc0, and I assume radeonsi as well, hit an assert inside glsl_to_tgsi as atan
instructions get inserted into the shader.
Fixes: cece947a8d ("glsl/builtin: Add alternate versions of atan using new ops")
Cc: Neil Roberts <nroberts@igalia.com>
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
We're trying to cast the return type to the type of the var, but instead
we were casting `sizeof(*v)`.
Fixes: 6df72e970c ("util: Make u_atomic.h typeless.")
Fixes: 0a7f17cf5b ("util/u_atomic: add p_atomic_xchg")
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
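A minimal sketch of the cast-placement bug, using a GCC-style builtin and
hypothetical macro names (not the exact u_atomic.h definitions): the cast has
to convert the exchange's return value to the variable's type, not to the
type of sizeof(*v).

  #include <stdio.h>

  /* Buggy shape: __typeof__ is taken of sizeof(*(v)), i.e. size_t, so the
   * result is converted to the wrong type. */
  #define XCHG_BUGGY(v, i) \
     ((__typeof__(sizeof(*(v))))__sync_lock_test_and_set((v), (i)))

  /* Fixed shape: the result is cast to the type of the pointed-to variable. */
  #define XCHG_FIXED(v, i) \
     ((__typeof__(*(v)))__sync_lock_test_and_set((v), (i)))

  int main(void)
  {
     int x = 3;
     int old = XCHG_FIXED(&x, 7);   /* atomically swap, return the old value */
     printf("old=%d new=%d\n", old, x);
     return 0;
  }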
It's already defined in `m_debug_util.h`, along with an explanation of
what it is and how to use it.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
'struct lima_context' has to be declared before usage in lima_program.h
Signed-off-by: Andreas Baierl <ichgeh@imkreisrum.de>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Fixes build errors of:
In file included from ../src/intel/vulkan/anv_private.h:48,
from ../src/intel/vulkan/genX_blorp_exec.c:26:
../src/intel/common/gen_gem.h: In function ‘gen_ioctl’:
../src/intel/common/gen_gem.h:68:15: error: implicit declaration of function ‘ioctl’ [-Werror=implicit-function-declaration]
68 | ret = ioctl(fd, request, arg);
| ^~~~~
In file included from ../include/c11/threads_posix.h:35,
from ../include/c11/threads.h:66,
from ../src/mesa/main/mtypes.h:39,
from ../src/intel/compiler/brw_compiler.h:30,
from ../src/intel/vulkan/anv_private.h:51,
from ../src/intel/vulkan/genX_blorp_exec.c:26:
/usr/include/unistd.h: At top level:
/usr/include/unistd.h:471:12: error: conflicting types for ‘ioctl’
471 | extern int ioctl(int, int, ...);
| ^~~~~
/usr/include/unistd.h:471:1: note: a parameter list with an ellipsis can’t match an empty parameter name list declaration
471 | extern int ioctl(int, int, ...);
| ^~~~~~
In file included from ../src/intel/vulkan/anv_private.h:48,
from ../src/intel/vulkan/genX_blorp_exec.c:26:
../src/intel/common/gen_gem.h:68:15: note: previous implicit declaration of ‘ioctl’ was here
68 | ret = ioctl(fd, request, arg);
| ^~~~~
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
gcc is very particular about where you place the (void) cast
The previous placement made it error out with:
In file included from disk_cache.c:40:0:
../../src/util/u_atomic.h:203:29: error: void value not ignored as it ought to be
#define p_atomic_add(v, i) ((void) \
^
disk_cache.c:658:4: note: in expansion of macro ‘p_atomic_add’
p_atomic_add(cache->size, size);
^
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Fixes build failures on Solaris in C++ files using gcc:
../src/util/u_math.h:628:41: error: expected ‘,’ or ‘...’ before ‘dest’
628 | util_memcpy_cpu_to_le32(void * restrict dest, const void * restrict src, size_t n)
| ^~~~
../src/util/u_math.h: In function ‘void* util_memcpy_cpu_to_le32(void*)’:
../src/util/u_math.h:641:18: error: ‘dest’ was not declared in this scope
641 | return memcpy(dest, src, n);
| ^~~~
../src/util/u_math.h:641:24: error: ‘src’ was not declared in this scope
641 | return memcpy(dest, src, n);
| ^~~
../src/util/u_math.h:641:29: error: ‘n’ was not declared in this scope; did you mean ‘yn’?
641 | return memcpy(dest, src, n);
| ^
| yn
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
It doesn't make sense. You already spilled it once, and it didn't help.
Don't try again, or you'll end up in a loop.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Essentially an off-by-one error ... bit of an edge case, but seems to
occur in some glamor shaders.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The new frame throttling implementation interacts unfortunately with
pipelining, leading to fence fds leaking like crazy and ultimately apps
crashing quickly.
With this patch, apps still crash but not as quickly. We need to either
figure out the real cause or revert the core changes.
Nevertheless, we don't want frame throttling in the first place, so.
Fixes: a65e29ccb2 ("gallium: simplify throttle implementation")
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This commit moves the target check before using _mesa_get_current_tex_object
to fix a "Mesa implementation error: bad target in _mesa_get_current_tex_object()"
error.
Fixes: 9dd1f7cec0 ("mesa: pass gl_texture_object as arg to not depend on state")
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
We don't really need to impose this condition, but we do need to cope
with the slightly more general case.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
A buffer and its aux are imported separately. If the aux import is
not completed yet when resource_get_param is called, merge the
separate aux (a.k.a. the 2nd image) into the main image.
Fixes: 246eebba4a ("iris: Export and import surfaces with modifiers that have aux data")
Signed-off-by: James Xiong <james.xiong@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Once again, we were handling back-to-front in the GLES3 case, but not
the desktop GL case.
Fixes GTF-GL46.gtf30.GL3Tests.framebuffer_srgb.framebuffer_srgb_default_encoding when run with --deqp-surface-type=pbuffer --deqp-gl-context-type=egl.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
We were looking at ctx->DrawBuffer when asking about the read buffer,
which was good enough for CTS purposes, but definitely not right.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The first time you call glXMakeCurrent, current != ctx. As a result we
would never look up whether the drawable already had an XMesaDrawable,
and would instead always create one. Then XMesaBufferList would have two
different buffers for the same XID, and you'd be reading and drawing to
different places, and that's not what you want at all.
Instead just always look up the drawable.
Fixes: db8be355 (gallium/xlib: Remove drawable caching from the MakeCurrent path)
Closes: https://gitlab.freedesktop.org/mesa/mesa/issues/1196
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
This reverts commit 2ca8629fa9.
This was initially ported from RadeonSI, but in the meantime it has
been reverted because it might hang. Be conservative and re-introduce
this packet emission.
Unfortunately this doesn't fix anything known.
Cc: 19.2 <mesa-stable@lists.freedesktop.org>
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Variables with the same location should use the same driver_location.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Taken from nir_lower_samplers. Sampler arrays don't work though, this is
just to avoid an assert fail in ir3.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Notably includes centroid varying bits that were missing.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Noticed while debugging a tiling-looking issue by comparing our gmem
blit setup to freedreno's.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Fixes an ir3 compiler failure in
dEQP-VK.renderpass.dedicated_allocation.formats.r8g8b8a8_unorm.clear.clear_draw
(now just a rendering failure where the subpass clear isn't happening)
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Fixes assertion failures in
dEQP-VK.api.image_clearing.core.clear_color_image.2d.* for these
formats, though the test set as a whole is still failing.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Deal with tiled r8g8 having different alignment and other updates taken
from fd6_resource. Additionally track image samples/cpp.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Avoids hangs and some texture tests are happy with just this.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Changes to make compressed, tiled, 3d, etc textures work
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
* Fix R16G16 SCALED and R16G16B16A16 SCALED having texture format
* Fix B5G6R5 swap value
* Use R8_UINT instead of R8_UNORM for S8_UINT rb format
* Disable 96-bit texture formats instead of having a check for NPOT formats
* Don't fail assert on D24X8 format
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Not supported, so always set pointer to NULL
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Note: for output type U32, negative LOD is not sign extended from 16 bits
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Rob Clark <robclark@gmail.com>
GPUs with a single supported vertex stream must use the single state
address to program the stream.
Fixes: 3d09bb390a (etnaviv: GC7000: State changes for HALTI3..5)
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Jonathan Marek <jonathan@marek.ca>
This gs_iface doesn't seem to require a dependence on the tgsi
context, except for the swr end prim code.
This refactors the API to include all the info that the swr
code needs in the interface rather than having to dig it out of
the struct inheritance.
This is a precursor to adding NIR support to llvmpipe.
Reviewed-by: Jan Zielinski <jan.zielinski@intel.com>
Fixes the following building error:
external/mesa/src/gallium/winsys/amdgpu/drm/amdgpu_winsys.c:42:10:
fatal error: 'ac_llvm_util.h' file not found
^~~~~~~~~~~~~~~~
1 error generated.
Fixes: 3a08110 ("amd: Move all amd/common code that depends on LLVM to amd/llvm.")
Signed-off-by: Mauro Rossi <issor.oruam@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
All gallium drivers currently set MAX_FRAME_IN_FLIGHT to either 1
or 0, which means that the drivers either throttle on the previous
render or don't throttle. The current implementation is more
complicated than necessary and can be simplified.
Signed-off-by: James Xiong <james.xiong@intel.com>
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Adds alternate versions of the atan builtin functions that use
ir_unop_atan and ir_binop_atan2 instead of inlining to the IR
implementation of the function. These alternatives are selected if the
IR is going to be consumed by NIR. In that case the IR ops will be
translated to the appropriate NIR op.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Adds ir_binop_atan2 and ir_unop_atan. When converting to NIR these are
expanded out using the appropriate builtin generator. If they are used
with anything else then it will just hit an assert.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Moves build_atan and build_atan2 into nir_builtin_builder. The goal is
to be able to use this from the GLSL translator too.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
When users pass a config to `eglCreateWindowSurface`, it requests double
buffering, but if the config doesn't have the appropriate `__DRIconfig`,
`eglCreateWindowSurface` fails with `EGL_BAD_MATCH`.
Given that such behaviour is completely unacceptable, we drop the
`EGL_WINDOW_BIT` if we don't have at least one `__DRIconfig` supporting double
buffering, otherwise dropping the `EGL_PIXMAP_BIT`.
Fixes: 049f343e8a "egl: Allow 24-bit visuals for 32-bit RGBA8888 configs"
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=67676
Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hal Gentz <zegentzy@protonmail.com>
This commit does this by allowing both RGB and RGBA visuals to match with
EGL configs. We also expose the `EGL_MESA_config_select_group` egl
extension, which is similar to GLX's visual select group extension, to
allow the RGBA visuals to get less priority.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=67676
Fixes: 049f343e8a "egl: Allow 24-bit visuals for 32-bit RGBA8888 configs"
Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hal Gentz <zegentzy@protonmail.com>
The bit moved on gen12 in order to prepare for dual-SIMD8 dispatch.
This implementation isn't entirely complete, as it only works on SIMD8
and SIMD16 and not dual-SIMD8.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Apparently the ts_request_type and ts_resource_select thread spawner
message descriptor bits were removed from the hardware at least since
ICL. Drop them in order to avoid assertion failures on Gen12+
platforms which don't have any encoding for this. On Gen9+ these are
probably just ignored by the hardware, so this is unlikely to have had
any functional implications prior to Gen12.
v2: Mark TS message fields as non-existing in brw_inst.h on ICL. (Caio)
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
The WAIT instruction has been removed, but SYNC.bar can be used
instead to wait for a notification on n0.0.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Apparently this field was removed on SKL, and according to the
hardware docs for previous platforms "This field is only valid for a
ForwardMsg message. It is ignored for other messages. The BarrierMsg
message always increments the N0 notification counter".
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Confirmed no regressions after a full Piglit run on TGL with the
brw_fs_test_dispatch_packing() test enabled.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
They look like a NULL source if you don't look at the address mode.
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The following fix-up by Jordan Justen is squashed in:
intel/eu/validate: gen12 send instruction doesn't have a dst type field
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Due to hardware bug filed as HSDES#1604601757.
v2: Only return if result of fs_inst::can_do_source_mods() is known to
be false, for the case where new orthogonal restrictions are implemented
below in the future. (Caio)
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Kept as a separate commit in order to avoid distracting reviewers of
the software scoreboard pass with memory management boilerplate.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Gen12+ hardware lacks the register scoreboard logic that used to
guarantee data coherency between register reads and writes in previous
generations. This lowering pass runs after register allocation in
order to make up for it.
It works by performing global dataflow analysis in order to determine
the set of potential dependencies of every instruction in the shader,
and then inserts any required SWSB annotations and additional SYNC
instructions in order to guarantee data coherency.
v2: Drop unnecessary _safe list iteration (Caio).
v3: Temporarily workaround potential WaR hazard between FPU
instruction and subsequent out-of-order write, pending
clarification from the hardware team. Drop redundant tracking of
implicit access of acc0-1, since the hardware guarantees coherency
of these (but not the other accumulators...).
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewers are encouraged to audit the code generation pass
independently, in case I missed some potential data hazard or new
code has been added in the meantime.
v2: Add SYNC instruction to cr0 workaround in brw_float_controls_mode().
v3: Drop likely redundant (and potentially harmful) RegDist SWSB
annotation from ce0 read in brw_find_live_channel() (Caio).
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
An effect similar to the one formerly provided by setting thread
control to "switch" can be achieved now by setting a RegDist of 1 on
the SWSB field.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
A future lowering pass will simulate the same behavior originally
provided by NoDDChk/NoDDClr at the IR level by using appropriate SWSB
annotations.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The new SEND instruction behaves like the former SENDS instruction.
The original single-payload SEND instruction is gone.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The SEND instruction is now four-source. The descriptor is no longer
part of source 1, so avoid touching it to avoid corruption while
initializing the descriptor.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Quite a lot of churn because the encoding of most hardware opcodes has
changed unfortunately.
v2: Split dot-product description fixes to separate patch (Caio).
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The encoding of almost every instruction field has changed in Gen12,
so this involves adding a Gen12+ bitfield spec to every brw_inst
macro. In addition some new macros are required to handle certain
discontiguous and variable-length fields.
This commit doesn't actually include the Gen12 updated bitfield specs,
only the macros are extended here for reviewability.
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
v2: Rename FDC() to FFDC() and FDC1() to FDC() for consistency with
the existing F() and FF() macros.
This edge doesn't exist in the original scalar program, but it
represents a potential control flow path the EU will take in cases
where control flow isn't uniform across channels of the same SIMD
thread.
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This edge doesn't exist in the original scalar program, but it
represents a potential control flow path the EU will take in cases
where the condition isn't uniform across channels of the same SIMD
thread.
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Currently only the physical back-edge is represented, which
incidentally also leads to the exit block of the loop, but we need the
direct logical edge in addition for our logical CFG representation to
be complete.
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This represents two control flow graphs in the same cfg_t data
structure: The physical CFG that will include all possible control
flow paths the EU can physically take, and the logical CFG restricted
to the control flow paths that exist in the original scalar program.
The latter is a subset of the former because in case of divergence the
SIMD vectorized program will take control flow paths that aren't part
of the original scalar program.
The bblock_link constructor and bblock_t::add_successor() now take a
"kind" parameter that specifies whether the edge is purely physical or
whether it's part of both the logical and physical CFGs (a logical
edge is of course always guaranteed to be in the physical CFG as
well). bblock_t::is_predecessor_of() and ::is_successor_of() also
take a kind parameter specifying which CFG is being queried. The '~>'
notation will be used now in order to represent purely physical edges
in IR dumps.
This commit doesn't actually add nor remove any edges from the CFG
(the only edges marked as purely physical here are the two WHILE loop
ones that already existed). Optimization passes should continue using
the same (incomplete) physical CFG they were using before until
they're fixed to do something smarter in a later commit, so this
shouldn't lead to any functional changes.
v2: Remove tabs from lines changed in this file (Caio).
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Having the IR opcodes locked to their hardware representation is risky
because it causes opcodes as different as BRC and IFF to compare equal
at the IR level (luckily the back-end only ever uses one opcode from
each group, right now), and it prevents us from supporting
instructions that change their hardware representation across
generations, which will become a problem on Gen12+ platforms.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Change brw_inst_set_opcode() and brw_inst_opcode() to call
brw_opcode_encode/decode() transparently in order to translate between
hardware and IR opcodes, and update the EU compaction code in order to
do the same as needed, so we can eventually drop the one-to-one
correspondence between hardware and IR opcodes.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This rewrites the current opcode description tables as a more compact
flat data structure. The purpose is to allow efficient constant-time
look-up by either HW or IR opcode, which will allow us to drop the
hard-coded correspondence between HW and IR opcodes -- See the next
commits for the rationale.
brw_eu.c is now built as C++ source so we can take advantage of
pointers to member in order to make the look-up function work
regardless of the opcode_desc member used as look-up key.
v2: Optimize devinfo struct comparison (Caio)
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The brw_inst opcode accessors are going away in one of the following
commits. We could potentially replace them with the new helpers that
do opcode remapping, but that would lead to a circular dependency
between brw_inst.h and brw_eu.h. This way we also avoid ordering
issues that can cause the semantics of the ex_desc accessors to change
depending on whether the ex_desc field is set after or before the
opcode instruction field.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This is required because SEND message payload sources are fetched
asynchronously by the hardware, which can lead to WaR data corruption
on Gen12+ platforms if not handled specially by the compiler to
guarantee proper synchronization.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
And after discard-only loops. Otherwise we end up with dead code
which confuses nir_repair_ssa into adding a whole bunch of uses
of undefined. However, for derefs, we sometimes expect to always
get a variable instead of undefined.
Fixes dEQP-VK.graphicsfuzz.write-red-in-loop-nest on radv.
Fixes: c832820ce9 "nir/dead_cf: Repair SSA if the pass makes progress"
Closes: https://gitlab.freedesktop.org/mesa/mesa/issues/1928
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
p_as_uniform can get CSE'd, which can be incorrect and break some
dEQP-VK.descriptor_indexing.* tests.
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
Fixes the UBO/SSBO dEQP-VK.descriptor_indexing.* tests
v2: remove bld.copy() usage
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
This can happen when bcsel is used between the results of two
vulkan_resource_index. It's also probably needed for non-uniform
descriptor indexing
Fixes dEQP-VK.spirv_assembly.instruction.compute.variable_pointers.compute.reads_opselect_two_buffers
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
v2: always assert on the texture/sampler handle's num_components
v3: replicate the deref inside the loop
v4: remove a case of useless line wrapping
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Now that the base resource is allowed to be incompatible with PE, we can
make a smarter choice of tiling mode to avoid allocating a PE compatible
base that is never used for regular textures. This affects GPUs like GC2000
where there is no tiling compatible with both PE and TE.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
For PE-incompatible layouts, use a mechanism similar to what texture does
to create a compatible base resource.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Remove the "addressing_mode" state, which is currently set incorrectly, and
instead deduce the addressing mode from the tiling layout.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
It simplifies the definitions of jobs using the Debian 10 image.
The needs: was previously missing from the llvmpipe/softpipe test jobs,
so they could spuriously run if the debian-10 job failed or was
cancelled.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
The coroutine split pass is missing a dependency before LLVM 9.0,
and fails to initialise properly if the CallGraphWrapperPass hasn't
been initialised earlier (x86 does it because some of its passes
require it).
This is a workaround for llvm 8 (coroutines are only supported in 8
and higher). It adds another pass that has a dependency on the pass
the coroutines split requires. This pass shouldn't have any real
effects.
Fixes: d32690b43c (gallivm: add coroutine pass manager support)
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
This is needed as part of GLES3.1 and helps for ARB_gpu_shader5.
Fixes: KHR-GLES31.core.texture_gather.* cases
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
This will encode the component selection value (0, 1, 2, 3) into
the X swizzle of the sampler, if the driver requests it.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
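The encoding idea can be sketched with a hedged toy in C (illustrative types
and names, not the actual gallium/llvmpipe code): the gather component is
parked in the otherwise-unused X slot of the sampler swizzle so the backend
can read it back without a separate immediate.

  #include <stdio.h>

  enum swizzle { SWZ_X, SWZ_Y, SWZ_Z, SWZ_W };

  struct sampler_state {
     unsigned char swizzle[4];               /* per-channel source selects */
  };

  /* Stash the TG4 component select (0-3) in the X swizzle slot. */
  static void set_gather_component(struct sampler_state *s, unsigned comp)
  {
     s->swizzle[0] = (unsigned char)comp;
  }

  static unsigned get_gather_component(const struct sampler_state *s)
  {
     return s->swizzle[0];
  }

  int main(void)
  {
     struct sampler_state s = { .swizzle = { SWZ_X, SWZ_Y, SWZ_Z, SWZ_W } };
     set_gather_component(&s, 2);            /* gather the Z component */
     printf("component = %u\n", get_gather_component(&s));
     return 0;
  }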
Accessing the TG4 component via immediates in the llvmpipe backend is quite
messy (like really messy). Roland suggested we change the instruction encoding,
so introduce a cap to allow the selected component to be stored in the
sampler swizzle, which should be otherwise unused.
I could probably switch all drivers over, but virgl would need some work that
I'd prefer not to rush.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
This job uses the vs2017 backend of meson (msbuild) as opposed to the
ninja backend used on MacOS and Linux.
v7: - rebase on master
- remove llvm (we'll add that back later)
- remove cygwin (we'll add that back later too)
v6: - rebase on master, including the addition of cygwin
- consolidate 3 appveyor patches into this one patch
v5: - use the new b_vscrt option instead of manually specifying the crt
v4: - rebase on python3 generators
- cache meson wraps
- Build x86 instead of x86_64, since that's what the pre-built LLVM
is
- update to vs2017 from vs2015
- set the default-library to static
- use the new vscrt override
- add the /m switch to msbuild to make the build somewhat faster
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
Currently meson doesn't correctly handle passing compiled binaries to
scripts in tests. This patch looks to the future (0.53) when meson will
have this functionality, but also immediately it fixes these tests in
cross compiles by causing them to return 77, which meson interprets as
skip.
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
MSVC is generally happy, but mingw errors. I've spent a lot of time
(several days) trying to squash all of these warnings and I'm done with
it; just leave them as warnings with MinGW.
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
This has always been present in the scons build, so it should be in
the meson build as well.
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
Mesa uses the lib prefix, and doesn't use a version for its dynamic
libraries, which meson defaults to.
v2: - this patch
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
It crashes hard (pop-up window and all).
v2: - Change comment to FIXME
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
I can't figure out why symbols are being exposed that shouldn't.
v2: - change comment to FIXME
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
They require the pipe-loaders, which require xmlconfig, which doesn't
build with msvc.
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
There are quite a few tests that require getopt, when using MSVC we need
to use the bundled version of getopt since there isn't a system version.
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
Because the macros for exporting dll symbols and using TLS are mutually
exclusive.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
This makes two changes for SWR,
The first is that it reorders the arguments to try to put the ICL ones
first. This is required to support older versions of meson that don't
add enough "error in this case" switches to ICL, which causes it to
happily accept -mavx (for example) even though it doesn't support them,
resulting in compilation failures.
The second is to fix the names of the libraries: setting the soversion
to '' will result in <lib>.dll instead of <lib>-0.dll. Since these are
not versioned dlls but implement an internal API, we should communicate
that. It's also what scons does.
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
There isn't an obvious command line switch here, /arch:AVX *might* be
the right thing, but meson doesn't know what to do here either and
leaves the -msse4.1 and -mstackrealign.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
v2: - Add missing D to pound define
- Simply define the variable rather than set it to 1 (mirrors
android.mk not scons)
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
There's a mingw bug for this: it exports __builtin_posix_memalign but
not posix_memalign, so the check will succeed, but compiling will fail.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
v2: - set so_version to '' (only affects windows)
- always set lib prefix to 'lib', even on msvc
v5: - key NO_EXPORTS on shared glapi instead of gles.
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
v4: - Fix check for broken mingw (should be for x86 not x86_64)
- Add comment about why check is needed
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
v4: - Handle enable gles properly
- Add comments about what various #defines do
v5: - key NO_EXPORTS on shared glapi instead of gles.
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
v4: - Retain scons comments for windows specific defines
v5: - key GLAPI_NO_EXPORTS off of shared-glapi instead of gles
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
These are needed to control the export of symbols due to differences
between the way windows and *nix handle symbol exports.
Reviewed-by: Eric Anholt <eric@anholt.net> (v2)
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
v5: - key NO_EXPORT off of shared-glapi instead of gles
v4: - Fix typo in warning code (4246 -> 4267)
- Copy comments from scons for what MSVC warnings codes do
- Merge linker argument changes into this commit
v5: - Add /GR- on windows if LLVM is built without rtti (equivalent to
GCC's -fno-rtti)
- Add /wd4291, which is catching the same thing that
-Wno-non-virtual-dtor is on GCC/Clang
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
MinGW defines only _WIN32, but doesn't have fcntl, so we need to use the
windows path.
Reviewed-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Acked-by: Kristian H. Kristensen <hoegsberg@google.com>
Not sure if this is a bug in the user or not, but some CTS
tests fail due to using an 8 byte constant buffer.
Fixes: KHR-GLES31.core.layout_binding.block_layout_binding_block_VertexShader
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Since images are a single level, minify before passing the w/h
to draw.
Fixes: KHR-GLES31.core.shader_image_size.basic-nonMS-vs-*
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Due to the use of vmovdqa instructions in the asm, which require 16-byte
aligned buffers.
This fixes a crash in
KHR-GLES31.core.texture_buffer.texture_buffer_texture_buffer_range
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
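For illustration, a small self-contained C sketch of the alignment
requirement (buffer size and names are arbitrary): movdqa/vmovdqa-style
aligned loads fault on unaligned pointers, so the backing allocation must be
at least 16-byte aligned.

  #define _POSIX_C_SOURCE 200112L
  #include <stdlib.h>
  #include <string.h>
  #include <emmintrin.h>

  int main(void)
  {
     void *buf = NULL;
     /* Allocate the buffer with 16-byte alignment so aligned SSE loads
      * (the intrinsic below maps to movdqa/vmovdqa) are safe. */
     if (posix_memalign(&buf, 16, 4096) != 0)
        return 1;
     memset(buf, 0, 4096);

     __m128i v = _mm_load_si128((const __m128i *)buf);   /* aligned load */
     (void)v;
     free(buf);
     return 0;
  }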
Preparation for a later commit.
Fixes: 93df862b6a ("meson: re-add incorrect pkg-config files with GLVND for backward compatibility")
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
This reflects better what is provided by glvnd or not.
Fixes: 93df862b6a ("meson: re-add incorrect pkg-config files with GLVND for backward compatibility")
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
SCons and Meson have never supported that feature, and Autotools was
deleted over 6 months ago and no-one complained yet, so it's pretty
obvious nobody cares about it.
Fixes: 95aefc94a9 ("Delete autotools")
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Dylan Baker <dylan@pnwbakers.com>
This is a security feature to disallow malicious apps from passing
a buffer that is too small.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
To abstract things a bit, this adds a helper function in radv_android.c.
However, this means we have to link in radv_android.c on non-android as
well, which means some scaffolding changes.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Since we really cannot share them ever.
Also remove an unused switch.
Fixes: b70829708a "radv: Implement VK_KHR_external_memory"
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Derived from the Intel code.
For the internal format we just use the internal Vulkan format,
as we have Vulkan formats for all android formats we care about.
For the ycbcr properties we just do something. I do not have a real
clue what would be recommended.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
The minigbm comment really says it all. We should
fix minigbm as well, but for now this is the more
robust solution.
Note that this only changes width and height for
the surface creation, not for the image and hence
also not for the sampler, where it would wreak
havoc due to the normalized coords.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
We want this flexibility because in GFX10 we lose any stride fields,
so we have to make sure our width/height are in alignment with
the external image we import.
Furthermore, we need the ability to inject tiling modifiers at import
time, which is strictly after create time for Android. So, with the
layout & patch functions being fully independent of pCreateInfo, we
can delay it until import/bind time.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Run dEQP on boards with Mali 400 and 450 in Baylibre's lab.
There's lots of skipped tests because of crashes and undetermined
behavior. May be a good idea to run the tests with valgrind and fix any
issues found.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: Neil Armstrong <narmstrong@baylibre.com>
As the non-LAVA runner script does, have per-GPU version files listing
the tests that are to be skipped, due to being very slow, unstable, etc.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: Neil Armstrong <narmstrong@baylibre.com>
Create basic aub_context on GEM_CONTEXT_CREATE.
Set it up and submit a context + ring + pphwsp during execbuf
submission, if it has not been initialized yet.
v2: Write the HWSP only once per engine (Lionel).
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
v2:
- Only dump context if there were no errors (Lionel).
- Store counter for context handles in aub_file (Lionel).
v3:
- Add a comment about aub_context -> GEM context (Lionel).
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
We want to be able to create contexts on demand, and increase the GGTT
as needed for that. Use the aub_map_ggtt() function for that.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
We want to reuse it in execlists_setup().
v2: Rename it to write_ggtt_ptes() (Lionel).
v3: Rename it to aub_map_ggtt() (Lionel).
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
When the timestamp is not ready (i.e. UINT64_MAX), the availability bit
should be zero. The previous code used to copy the timestamp value
as the availability bit and that's completely wrong.
Because it's not that simple to emit a conditional with the CP, the
driver now uses a compute shader for copying timestamp query results.
Fixes dEQP-VK.pipeline.timestamp.misc_tests.reset_query_before_copy.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
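The copy rule can be summarised with a small host-side C sketch (the real
fix runs as a compute shader; the names here are illustrative): availability
is 1 only once the slot no longer holds the UINT64_MAX "not ready" marker,
and is never a copy of the timestamp itself.

  #include <stdint.h>
  #include <stdio.h>

  #define TIMESTAMP_NOT_READY UINT64_MAX

  static void copy_timestamp_result(uint64_t ts, uint64_t *dst_value,
                                    uint64_t *dst_available)
  {
     uint64_t available = (ts != TIMESTAMP_NOT_READY);
     *dst_available = available;                /* 0 or 1, never the raw value */
     *dst_value = available ? ts : 0;
  }

  int main(void)
  {
     uint64_t value, avail;
     copy_timestamp_result(TIMESTAMP_NOT_READY, &value, &avail);
     printf("available=%llu\n", (unsigned long long)avail);          /* 0 */
     copy_timestamp_result(12345, &value, &avail);
     printf("available=%llu value=%llu\n",
            (unsigned long long)avail, (unsigned long long)value);   /* 1, 12345 */
     return 0;
  }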
Otherwise, the GPU might write timestamp queries after the reset
operation. This is similar to other query operations.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
These are not needed anymore, since PhyReg has an implicit
conversion operator that can convert it to unsigned int,
which is equivalent to accessing this field.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
According to LLVM, branches with an offset of 0x3f are buggy.
v2: (by Timur Kristóf)
- extract the GFX10 specific part to its own function
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
The DLC bit is now set to 1 for all loads when GLC is also set,
but cleared to 0 for all stores (otherwise it causes issues),
and also cleared to 0 for atomics.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
Also remove img_format from aco_ir, since it can be calculated
from dfmt and nfmt. So only the assembler needs to deal with it.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
We'd like to use some functions, for example some
ac_shader_util functions in ACO, so we need to link
ACO to AC.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
We'd like to include some of these in C++ code later.
Specifically, ACO is written in C++ and we would like to use
some of this code in ACO in order to avoid code duplication.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
Specifically when reading the primitive counters.
This fixed ~700 CTS tests using this pattern:
dEQP-GLES3.functional.transform_feedback.*
when run after tests like
dEQP-GLES3.functional.prerequisite.read_pixels on the same
caselist. When run individually those tests were passing because
prim_counts_offset was zero.
Fixes: 0f2d1dfe65 ("v3d: use the GPU to
record primitives written to transform feedback")
Reviewed-by: Jose Maria Casanova Crespo <jmcasanova@igalia.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
The initial patch only fixed up the NIR path, but forgot
the TGSI path needed fixing as well.
Fixes: f92226931b ("st/mesa: Prefer R8 for bitmap textures")
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
With libc++ (LLVM's STL implementation), the original code does not compile because an
appropriate vector constructor cannot be found (for the _ForwardIterator one, requirement
is_constructible is not satisfied).
<sys/param.h> is required for NetBSD version detection,
and __NetBSD__ must be used to detect even on older releases.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
eglGetDisplay is awful because you have to inspect the pointer you're
given and guess what type of native display it corresponds to. We make
it worse by caching the type of the first such display we detect, so if
the second call to eglGetDisplay is to a different display type, kaboom.
Fortunately this is a problem that can be solved with the delete key.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Closes: https://gitlab.freedesktop.org/mesa/mesa/issues/156
Previously, this could have made the resource divergent in code like
that which is generated by nir_lower_non_uniform_access.
Fixes: da8ed68a ('nir: replace nir_move_load_const() with nir_opt_sink()')
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
Previously, for code like:
loop {
   loop {
      a = load_ubo()
   }
   use(a)
}
adjust_block_for_loops() would return the block before the first loop.
Now we compute the range of allowed blocks and then walk the dominance
tree directly, guaranteeing that we always choose a block that
dominates all the uses and is dominated by the definition.
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
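A toy C sketch of that strategy (not the actual nir_opt_sink code; the CFG,
idom array and loop depths below are made up): starting from the common
dominator of the uses, walk up the dominance tree, never past the
definition's block, and keep the candidate with the smallest loop depth.

  #include <stdio.h>

  #define NUM_BLOCKS 6

  /* Immediate dominator and loop depth per block of a toy CFG. */
  static const int idom[NUM_BLOCKS]       = { -1, 0, 1, 2, 2, 1 };
  static const int loop_depth[NUM_BLOCKS] = {  0, 1, 2, 2, 2, 1 };

  /* Walk from the uses' common dominator up to (and including) the def
   * block, picking the candidate with the smallest loop depth. */
  static int pick_sink_block(int def_block, int use_dom)
  {
     int best = use_dom;
     for (int b = use_dom; b != -1; b = idom[b]) {
        if (loop_depth[b] < loop_depth[best])
           best = b;
        if (b == def_block)
           break;   /* never move above the definition */
     }
     return best;
  }

  int main(void)
  {
     /* def in block 1 (outer loop), uses dominated by block 4 (inner loop):
      * the best place to sink the load is block 1, not the inner loop. */
     printf("sink to block %d\n", pick_sink_block(1, 4));
     return 0;
  }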
And use a new p_discard_early_exit instruction. This fixes some cases
where a definition having the same register as an operand causes issues.
v2: rename instruction to p_exit_early_if
v2: modify the existing instruction instead of creating a new one
v3: merge the "i == num - 1" IFs
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
Cloning texture loads isn't a good idea since we may move them into
a block that is not shared between all the invocations of the shader.
We'd like to avoid that since it may result in undefined behavior.
Reviewed-by: Andreas Baierl <ichgeh@imkreisrum.de>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Without this, the test jobs could spuriously run after the container
job failed or was cancelled, even if the build job didn't run at all.
Reviewed-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
The spec has probably been misinterpreted during RADV bringup.
This fixes GPU hangs with dEQP-VK.binding_model.*offset_nonzero*.
Fixes: f4e499ec79 ("radv: add initial non-conformant radv vulkan driver")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
If it's not available, we fall back to A8. This should work on all drivers,
because we depend on it in the display-list code already.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This should help avoid stalls in the pixel mask array in certain
non-promoted depth cases. It especially helps for Z16, as each bit
in the PMA corresponds to two pixels when using Z16, as opposed to
the usual one pixel.
Improves performance in GFXBench5 TRex by 22% (n=1).
Fixes Piglit's gl-2.1-polygon-stipple-fs on iris.
Fixes: 63f24c3c01 ("gallium: Enable MESA_framebuffer_flip_y")
Reviewed-by: Fritz Koenig <frkoenig@google.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Implement glFramebufferParameteriMESA on GLES 3 so
that the extension is not dependent on GLES 3.1
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
bound_vertex_buffers doesn't include extra draw parameters buffers.
Tracking this correctly is kind of complicated, and iris_destroy_state
isn't exactly in a hot path, so just loop over all VBO bindings.
Fixes: 4122665dd9 (iris: Enable ARB_shader_draw_parameters support)
Reported-by: Sergii Romantsov <sergii.romantsov@globallogic.com>
On Solaris, sys/sysmacros.h has long-deprecated copies of major() & minor()
but not makedev().
sys/mkdev.h has all three and is the preferred choice.
Let's make sure we check for all three: major(), minor() and makedev().
Reported-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Tested-by: Alan Coopersmith <alan.coopersmith@oracle.com>
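A hedged sketch of the resulting header selection (a plain OS ifdef is used
here for brevity; the actual build does a configure-time check as the commit
describes):

  #include <sys/types.h>
  #if defined(__sun)
  #  include <sys/mkdev.h>        /* Solaris: major(), minor() and makedev() */
  #else
  #  include <sys/sysmacros.h>    /* glibc and friends */
  #endif
  #include <stdio.h>

  int main(void)
  {
     dev_t dev = makedev(226, 0);            /* e.g. a DRM node's device number */
     printf("major=%u minor=%u\n", (unsigned)major(dev), (unsigned)minor(dev));
     return 0;
  }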
The list of AMD/ATI devices supported by radeon/r200/r300/r600 is
complete, so anything else must use radeonsi.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
When only the depth/stencil bufs are cleared, we should make sure the
color content is reloaded into the tile buffers if we want to preserve
their content.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
glClear()s are expected to be the first thing GL apps do before drawing
new things. If there's already an existing batch targeting the same
FBO that has draws attached to it, we should make sure the new clear
gets a new batch assigned to guarantee that the FB content is actually
cleared with the requested color/depth/stencil values.
We create a panfrost_get_fresh_batch_for_fbo() helper for that and
call it from panfrost_clear().
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Hitting any fallback path on Broxton is costly, as we require clflushing
the whole buffer even for an upload of a subtexture. However, since gallium
provides a pbo upload path, allow it to sample packed RGB if supported.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
It gets used by the gallium auxiliary draw module, which gets used
pretty much always when LLVM is used as JIT.
At the same time most builds don't hit the issue here because the
shared library of LLVM contains all modules.
Fixes: d32690b43c ("gallivm: add coroutine pass manager support")
Closes: https://gitlab.freedesktop.org/mesa/mesa/issues/951
Reviewed-by: Gert Wollny <gert.wollny@collabora.com>
This commit is a step towards the goal of being able to build RADV
without LLVM. In the future we would like to offer the option to
use RADV solely with ACO. There is still a need for the common AMD
code located in amd/common but the LLVM specific parts need to be
separated.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Acked-by: Marek Olšák <marek.olsak@amd.com>
Acked-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
This mirrors the intrinsics in the GLSL IR. One could imagine an
alternate definition where reading the semantic would account for the
READ_HELPER functionality, but that feels potentially dodgy and could be
subject to CSE unpleasantness.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
u_upload_mgr sets it, so that util_range_add can skip the lock.
The time spent in tc_transfer_flush_region decreases from 0.8% to 0.2%
in torcs on radeonsi.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Meson automatically tracks any file included by a file it already tracks,
and `pci_id_driver_map.h` & `loader.h` are included by `loader.c`, while
`loader_dri3_helper.h` is included by `loader_dri3_helper.c`.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
These can appear after loop unrolling.
v2: stylistic changes
v2: replace state->mem_ctx with state->shader
v2: add bounds checking
v3: use nir_intrinsic_range() for bounds checking
v3: fix issue where partially out-of-bounds reads are replaced with undefs
v4: fix merge conflicts during rebase
v5: split into two commits
v6: set constant_data to NULL after freeing (fixes nir_sweep()/Iris)
v7: don't remove the constant data if there are no constant loads
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com> (v6)
Acked-by: Ian Romanick <ian.d.romanick@intel.com>
We only have the subgroup variant in NIR (equivalent to clockARB), so
only support that for now.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
In the test stage, we can use any of the two container images as we
aren't going to do anything architecture-dependent when submitting the
jobs to LAVA.
But if we are in a pipeline in which the images need to be rebuilt and
one finishes much earlier than the other, it could happen that the test
job that executes first fails to find the container image.
To avoid that, have each job in the test stage to use the image that has
been already implicitly built by depending on the build job for the
given arch.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
This reverts commit 19546108d3.
This commit breaks the build because lima implements
->set_damage_region(). I guess we'll need more discussion before
removing the ->set_damage_region() hook.
This reverts commit 492ffbed63.
BACK_LEFT attachment can be outdated when the user calls
KHR_partial_update(), leading to a damage region update on the
wrong pipe_resource object.
Let's not expose the ->set_damage_region() method until the core is
fixed to handle that properly.
Cc: mesa-stable@lists.freedesktop.org
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Acked-by: Daniel Stone <daniels@collabora.com>
In preparation for testing drivers other than Panfrost in LAVA labs.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Include Panfrost's gitlab.ci.yml file from Mesa's main .gitlab-ci.yml so
we test on devices with Panfrost.
This uses LAVA to schedule jobs in the devices and will be the base for
testing Etnaviv, Lima, etc.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This is a port of Nanley's 904c2a617d
from i965 to iris.
One concern is that iris uses larger batches, and also emits far fewer
commands, so we may come closer to the 500 limit within a batch, and
could need to supplement this with actual counting. Manhattan 3.0 had
239 3DSTATE_CONSTANT_PS packets in a batch, Unigine Valley had 155.
So it seems like we're still in the realm of safety.
We're missing the offset of the slice in the subslice mask...
This worked for most platforms that don't have the first slice fused off
because we would reread the same mask from slice0 again and again...
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: c1900f5b0f ("intel: devinfo: add helper functions to fill fusing masks values")
Gitlab: https://gitlab.freedesktop.org/mesa/mesa/issues/1869
Reviewed-by: Mark Janes <mark.a.janes@intel.com>
We were supplying __DRI2_THROTTLE_SWAPBUFFER, rather than the obvious
choice of __DRI2_THROTTLE_COPYSUBBUFFER. This meant that we hit the
swap-based frame throttling. glXCopySubBuffer doesn't seem like it's
intended to be a frame boundary, so we'd like to avoid this throttling.
Tested-by: Michel Dänzer <mdaenzer@redhat.com> # DRI3 only
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
glXCopySubBufferMESA copies data from the back buffer to the front,
so it needs to perform a MSAA downsampling operation just like
glXSwapBuffers would.
Currently, the CopySubBuffer implementations supply a throttle reason
of __DRI2_THROTTLE_SWAPBUFFER, so they hit this path and work today.
But we'd like to avoid swapbuffer throttling in this case, so the next
patch will change that reason.
Tested-by: Michel Dänzer <mdaenzer@redhat.com> # DRI3 only
Reviewed-by: Michel Dänzer <mdaenzer@redhat.com>
I thought I fixed this, but I guess I must have broken it again.
Fixes various dEQP-VK.draw.* tests
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
Working on the algebraic implementation, I was being driven nuts by my
editor not highlighting and handling indentation for the C code. It turns
out that it's basically not pass-specific code, and we can move it over to
the relevant .c file. Replaces 30KB of code with 34KB of data on my i965
build. No perf diff on shader-db (n=3)
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
This lets us memoize range analysis work across instructions. Reduces
runtime of shader-db on Intel by -30.0288% +/- 2.1693% (n=3).
Fixes: 405de7ccb6 ("nir/range-analysis: Rudimentary value range analysis pass")
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Having passes generate these is just making more work for copy
propagation (and thus probably calling more optimization passes)
later. Noticed while trying to debug nir_opt_algebraic()'s
top-to-bottom O(n^2) behavior caused by not finding new matches in
replacement code.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
This matches what we do for uses_sample_qualifier, and what we
do in ir_set_program_inouts.cpp as well.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This simplifies ACO and allows the lowered code to be optimized (in
particular, constant folded).
Totals from affected shaders:
SGPRS: 1776 -> 1776 (0.00 %)
VGPRS: 1436 -> 1436 (0.00 %)
Spilled SGPRs: 0 -> 0 (0.00 %)
Spilled VGPRs: 0 -> 0 (0.00 %)
Private memory VGPRs: 0 -> 0 (0.00 %)
Scratch size: 0 -> 0 (0.00 %) dwords per thread
Code Size: 203452 -> 203564 (0.06 %) bytes
LDS: 0 -> 0 (0.00 %) blocks
Max Waves: 103 -> 103 (0.00 %)
At least some of the code size increase seems to be from literals being
applied to instructions as a result of constant folding.
v2: remove fmod/frem handling in init_context()
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
If the instruction interpolateAtCentroid is used, the extra interpolator
must also be enabled in the state.
Fixes: fs-interpolateatcentroid-block
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Now that we have live_out calculated per block as metadata, calculating
liveness of an instruction at a given point in the program becomes O(n)
in the size of the block in the worst case, rather than O(n) in the size
of the program.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Callers should have liveness info ready. Ideally we'd have a nice
metadata tracking framework like NIR to handle this automatically, but
for now this will allow us to make forward progress... when we're about
to do something with liveness, invalidate everything ahead to force a
clean calculation.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This will allow us to explicitly invalidate liveness analysis results so
we can cache liveness results.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
By definition, once liveness analysis has occurred:
live_out = OR {succ} succ->live_in
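As a rough illustration (using a hypothetical fixed-size bitset rather than
the compiler's actual data structures), the per-block update looks like:

#include <stdint.h>
#include <string.h>

#define LIVE_WORDS 8

struct block {
   uint32_t live_in[LIVE_WORDS];
   uint32_t live_out[LIVE_WORDS];
   struct block *successors[2];
   unsigned nr_successors;
};

/* live_out = union of live_in over all successors */
static void
compute_live_out(struct block *blk)
{
   memset(blk->live_out, 0, sizeof(blk->live_out));
   for (unsigned s = 0; s < blk->nr_successors; s++)
      for (unsigned w = 0; w < LIVE_WORDS; w++)
         blk->live_out[w] |= blk->successors[s]->live_in[w];
}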
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
There are unfortunately two distinct liveness analysis passes in the
compiler right now -- one good (but complex) pass used by RA based on
solving data flow equations, and one awful (but simple) pass used for
dead code elimination and bundling based on an abstract walk of the AST.
Let's move RA's pass into shared code so we can work on unifying.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This allows us to fill in ctx->temp_count explicitly, even if we haven't
squished down the MIR.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We already enforce this with the SSA/register distinction in the
backend. There is no need to duplicate this logic merely for an assert.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Now that we track inter-batch dependencies, the flush done in
panfrost_set_framebuffer_state() is no longer needed. Let's get rid of
it.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Now that we have all the pieces in place to support pipelining batches
we can get rid of the drmSyncobjWait() at the end of
panfrost_batch_submit().
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We don't have to flush all batches when we're only interested in
reading/writing a specific BO. Thanks to the
panfrost_flush_batches_accessing_bo() and panfrost_bo_wait() helpers
we can now flush only the batches touching the BO we want to access
from the CPU.
This fixes the dEQP-GLES2.functional.fbo.render.texsubimage.* tests.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This is needed if we want to free the panfrost_batch object at submit
time in order to not have to GC the batch on the next job submission.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Will be useful to make the ioctl(WAIT_BO) call conditional on BOs that
are not exported/imported (meaning that all GPU accesses are known
by the context).
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This will allow us to only flush batches touching a specific resource,
which is particularly useful when the CPU needs to access a BO.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
And use it in panfrost_flush() to flush all batches, and not only the
one currently bound to the context.
We also replace all internal calls to panfrost_flush() by
panfrost_flush_all_batches() ones.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The panfrost_fence logic currently waits on the last submitted batch,
but the batch serialization that was enforced in
panfrost_batch_submit() is about to go away, allowing for several
batches to be pipelined, and the last submitted one is not necessarily
the one that will finish last.
We need to make sure the fence logic waits on all flushed batches, not
only the last one.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The idea is to track which BOs are being accessed and the type of access,
to determine when a dependency exists. Thanks to that we can build a
dependency graph that will allow us to flush batches in the correct
order.
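A rough sketch of that rule, using made-up types and helpers rather than
the actual panfrost code: a write serializes against every previous access
to the BO, while a read only serializes against the previous writer.

#include <stdbool.h>

#define MAX_DEPS 16

/* Hypothetical, simplified stand-ins for the real batch structures. */
struct batch {
   struct batch *deps[MAX_DEPS];
   unsigned nr_deps;
};

struct bo_access {
   struct batch *writer;            /* last batch writing the BO */
   struct batch *readers[MAX_DEPS]; /* batches currently reading it */
   unsigned nr_readers;
};

static void
add_dep(struct batch *batch, struct batch *dep)
{
   if (dep && dep != batch && batch->nr_deps < MAX_DEPS)
      batch->deps[batch->nr_deps++] = dep;
}

static void
track_bo_access(struct bo_access *a, struct batch *batch, bool write)
{
   /* Any new access must be ordered after the previous writer. */
   add_dep(batch, a->writer);

   if (write) {
      /* A writer must additionally wait for all current readers... */
      for (unsigned i = 0; i < a->nr_readers; i++)
         add_dep(batch, a->readers[i]);
      /* ...and becomes the new "last writer". */
      a->writer = batch;
      a->nr_readers = 0;
   } else if (a->nr_readers < MAX_DEPS) {
      /* Readers can run in parallel with each other. */
      a->readers[a->nr_readers++] = batch;
   }
}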
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We'll soon need to freeze a batch not only when it's flushed, but also
when another batch depends on it, so let's add a helper to avoid
duplicating the logic.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We just replace the per-context out_sync object with a pointer to the
fence of the last submitted batch. Pipelining of batches will
come later.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
So we can store the flags as data and keep the BO as a key. This way
we keep track of the type of access done on BOs.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The type of access being done on a BO has an impact on job scheduling
(shared resources being written enforce serialization, while those only
being read allow for job parallelization) and on BO lifetime (the
fragment job might last longer than the vertex/tiler ones; when we can,
it's good to release BOs earlier so that others can re-use them
through the BO re-use cache).
Let's pass extra access flags to panfrost_batch_add_bo() and
panfrost_batch_create_bo() so the batch submission logic can take the
appropriate decisions when submitting batches. Note that this information
is not used yet; we're just patching callers to pass the correct flags here.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The CTS has finally agreed to drop the requirement for a
565-no-depth-no-stencil config for ES 3.0. Hence we can now remove the
code that satisfied this requirement using a pbuffer-only visual with
whatever other buffers the driver happened to have given us.
This reverts commit 82607f8a90,
commit 6ad31c4ff3 and
commit dacb11a585.
v2:
- Reference the VK-GL-CTS issue (Eric E.).
v3:
- Don't revert
fc21394bc4 ("egl: Quiet warning about front buffer rendering for pixmaps/pbuffers")
(Kenneth).
References: VK-GL-CTS issue 1601.
Cc: mesa-stable@lists.freedesktop.org
Signed-off-by: Andres Gomez <agomez@igalia.com>
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This script is responsible for generating an entire page in the
docs/relnotes/ directory. It includes a template for the page, and uses
mako to fill in the necessary bits. It is designed to be purely fire and
forget, calculating previous versions, shortlogs, bug fixes, and dates.
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Juan A. Suarez <jasuarez@igalia.com>
The next patch is going to introduce a tool that creates the entire
release html page for us, without any user intervention. As such we
can't be editing it. To that end the script will read the
new_features.txt file to get a list of new features.
This is a flat text file, one entry per line.
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Juan A. Suarez <jasuarez@igalia.com>
On 64-bit platforms, some atomic operations like __sync_fetch_and_add()
take constant time, but on 32-bit platforms they are implemented with a
loop and might take much longer.
Additionally, it seems that if their operands are not aligned to 64
bits, they also require extra memory accesses. From the Intel
Architecture's Developer Manual Vol. 1, 4.1.1:
"A word or doubleword operand that crosses a 4-byte boundary or a
quadword operand that crosses an 8-byte boundary is considered
unaligned and requires two separate memory bus cycles for access."
Forcing the u64 field to be aligned to 64 bits seems to make the unit
tests that are stressing this finish much faster.
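A minimal sketch of the kind of fix this implies, assuming GCC/clang
alignment attributes and an illustrative counter type (not the actual
Mesa field):

#include <stdint.h>

struct stats {
   /* Force 8-byte alignment so the atomic operand never crosses an
    * 8-byte boundary (see the manual quote above). */
   uint64_t count __attribute__((aligned(8)));
};

static inline uint64_t
stats_add(struct stats *s, uint64_t v)
{
   return __sync_fetch_and_add(&s->count, v);
}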
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This replaces the old Bugzilla: tag, which no longer makes sense because
we don't use Bugzilla anymore.
Reviewed-by: Eric Anholt <eric@anholt.net> (v1)
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
v1 by Topi Pohjolainen
v2,v3 by Anuj Phogat:
- Apply for gen >= 11
- Remove wa_bug_xxx function
- Use helper functions
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This lowers fmod and frem at the NIR level like RadeonSI. fmod is
already lowered directly in NIR->LLVM, and frem will be lowered by
LLVM anyway.
This fixes a LLVM crash with:
dEQP-VK.glsl.builtin.precision_fp16_storage32b.frem.compute.scalar.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
... because it's wrong to do so. The error path out of
dri2_initialize_drm ends with dri2_display_destroy, which calls
functions in the vtable we're trying to set up, so if we dlclose the
driver then those function pointers will point off into space and things
crash.
Noticed this because after !1923 eglinfo would crash when setting up the
GBM platform. This was something of a cascade failure, because my kernel
is too old for DRM_IOCTL_I915_GETPARAM to work without DRM_AUTH, so i965
wouldn't load. platform_drm.c then got very confused when it tried to
load swrast as a dri2 driver.
Reviewed-by: Eric Anholt <eric@anholt.net>
uintptr_t is 32 bits in a 32-bit build, resulting in shifting out
of bounds.
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
We need the continue CS for referencing the tess/GDS/sample position BOs.
Fixes: 46e52df34d "radv: add tessellation ring allocation support. (v2)"
Fixes: e1dc3ab753 "radv/gfx10: allocate GDS/OA buffer objects for NGG streamout"
Fixes: 1171b304f3 "radv: overhaul fragment shader sample positions."
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Use a separate cache per job instead of a single cache shared between
all jobs, and reduce the maximum cache size to 1.5G (from 5G).
Rationale for smaller cache:
Pulling & pushing a 5G cache could take a long time. Consider
https://gitlab.freedesktop.org/mesa/mesa/-/jobs/684010 (click the "Show
complete raw" button to see timestamps): Pulling the cache took
1569927241-1569927194 = 47 seconds, pushing it 1569927671-1569927519
= 152, for a total of 199 seconds. The actual build took comparable
1569927518-1569927243 = 275 seconds, despite no cache hits from ccache.
In other words, the cache transfers almost doubled the job duration,
and they would have negated any build time benefits from ccache even
with a high cache hit rate.
Also, the smaller caches avoid blowing up storage requirements for them
too much.
Rationale for per-job caches:
Making a single cache significantly smaller might result in cached
build products from one job getting evicted by another job, reducing
the likelihood of cache hits from previous pipelines.
v2:
* Move up "ccache --max-size=1500M" call (Eric Engestrom)
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
To truly do this correctly, we'll have to fix the discrepancy between
drm_virtgpu_3d_transfer_to_host and virtio_gpu_transfer_host_3d. However,
this is a good starting point.
Since virtio-gpu only supports self-import and export, this should be fine.
Let's only do WINSYS_HANDLE_TYPE_FD for this currently.
Reviewed-by: Robert Tarasov <tutankhamen@chromium.org>
The winsys might supply dimensions that are different from
those we calculate. In addition, it may supply virtualized
modifiers.
In practice, a stride != bpp * width and virtualized modifiers don't
happen yet, but the plan is to move in that direction.
Also make virgl_resource_layout static.
Reviewed-by: Robert Tarasov <tutankhamen@chromium.org>
i915 will report ENODEV on generations prior to Haswell because there
is no point in reporting values on those. This is prior to any fusing
that could happen on parts with identical PCI ids.
This query call was previously only triggered on generations that
support performance queries, which happens to match generation for
which i915 reports topology, but the commit pointed below started
using it on all generations.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Gitlab: https://gitlab.freedesktop.org/mesa/mesa/issues/1860
Cc: <mesa-stable@lists.freedesktop.org>
Fixes: 96e1c945f2 ("i965: Move device info initialization to common code")
Reviewed-by: Mark Janes <mark.a.janes@intel.com>
Random hangs no longer happen; I'm actually not sure if they were
related to this.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Make sure to export the expected clear values to the depth
stencil attachment.
This fixes dEQP-VK.pipeline.depth_range_unrestricted.* on GFX10.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The number of vertices has to be adjusted with the output primitive
type.
This fixes dEQP-VK.transform_feedback.simple.triangle_strip_*.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The GS outputs are stored differently in the LDS storage; they
are indexed by out_idx, which is incremented for each stored DWORD.
Thus, we need a different path for exporting the stream outputs.
This fixes a bunch of CTS failures when NGG GS is force enabled.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The LDS storage allocated for stream outputs is 4 * N, where N
is the number of outputs. So, we have to store/load with N as index
and not with the output location as index.
This doesn't fix anything known but it should fix out-of-bounds
access and it also reduces the number of outputs written to the
LDS storage.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Some hardware has a bug with triangle strips, and the BUG_FIXED8 flag
signals whether this bug has been fixed. So only enable triangle
strips when this flag is set.
Thanks: Jonathan Marek and Christian Gmeiner for the pointers
v2: Add TODO to indicate that the handling should be refined
(Jonathan & Christian)
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
It's unused here, and undefined in scons. It is used in targets/osmesa,
but it's properly defined there already.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
On the Android Antutu benchmark we ran into an assert in ISL where the
(base layer + num layers) > total layers. It turns out the core of
mesa forgot to clear the _Layer variable, potentially leaving an
inconsistent value.
v2: Pull setting u->_Layer out of the conditional blocks (Jason)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: <mesa-stable@lists.freedesktop.org>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
In converting to shift/size-based validation, we lost a condition from
the ARGB/XRGB equivalence check, which left it working one way round
but not the other, and broke applications like glmark2-es2-drm on some
platforms. Restore the equivalent check that *both* configs actually
have an alpha channel before considering a mismatch.
Fixes: 7b4ed2b513 ("egl: Convert configs to use shifts and sizes instead of masks")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
1. The hgl.c file is a read-only file versus read-write.
Ref: src/gallium/state_trackers/hgl/hgl.c
2. I've included the Haiku-specific patches I used to get a successful
build of Mesa 19.1.7 on Haiku using the meson/ninja build procedure.
Shows "[764/764] linking target ... libswpipe.so" at build completion.
v2:
Remove autotools files (Eric)
v3:
Update the patch
Reported-by: Ken Mays <kmays2000@gmail.com>
Tested-by: Ken Mays <kmays2000@gmail.com>
CC: mesa-stable@lists.freedesktop.org
Reviewed-by: Alexander von Gluck IV <kallisti5@unixzen.com>
After 41549a18e6 ("i965: Enable OpenGL 4.6 for Gen8+"), i965
implements GL_ARB_gl_spirv, GL_ARB_spirv_extensions and OpenGL 4.6.
After 15e439071d ("iris: Enable ARB_gl_spirv and ARB_spirv_extensions"),
iris implements GL_ARB_gl_spirv, GL_ARB_spirv_extensions and OpenGL
4.6.
v2:
- Explicit the support is for i965 and iris.
v3:
- Add also GL_ARB_spirv_extensions to the release notes (Alejandro).
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Found when building for Android in C99 mode. Include bitscan.h to ensure ffs is
available.
Fixes: 7b4ed2b5 ("egl: Convert configs to use shifts and sizes instead of masks")
Signed-off-by: Kevin Strasser <kevin.strasser@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
The order of comparison has changed, so we need to invert the logic of
"insert_left" when using rb_tree_insert_at().
Fixes: dae33052db ("util/rb_tree: Reverse the order of comparison functions")
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
To enable EXT_demote_to_helper_invocation:
This extension adds a "demote" keyword that is similar to "discard" but
only suppresses subsequent writes and outputs to the framebuffer, and
does not terminate the execution of the invocation. For the remainder
of the execution, the invocation is "demoted" to act like a helper
invocation.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
From EXT_demote_to_helper_invocation, implemented with the existing
nir_intrinsic_is_helper_invocation.
Such a builtin is necessary when using `demote` because we can't
redefine the value of gl_HelperInvocation (since it is an input
variable).
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
When the EXT_demote_to_helper_invocation extension is enabled,
`demote` is treated as a keyword, and produces an ir_demote.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
To represent the new `demote` keyword when using
EXT_demote_to_helper_invocation extension. Most of the changes are to
include it in the visitors.
Demote is not considered control flow, so also include an empty
visit member function in ir_control_flow_visitor.
Only NIR actually supports `demote`, so assert the translations for
TGSI and Mesa's gl_program -- since the demote is not expected to
appear for those.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
We can't just check for the BO base address; we need to check for the
full address, including any offset we may have applied. When updating
the address, we need to include the offset again.
Fixes: 5ad0c88dbe ("iris: Replace buffer backing storage and rebind to update addresses.")
A while back, Michael Larabel noticed that Paraview's Wavelet Volume
case runs significantly slower on iris than i965. It turns out this
is because we enable CCS_E for 32-bit floating point formats, while
i965 disables it, with an oblique comment saying that we benchmarked
it (on what exactly?) and determined that it was a loss.
Paraview uses both R32_FLOAT and R32G32B32A32_FLOAT, and I observed
large framerate drops when enabling CCS_E for either format. However,
several other benchmarks (Aztec Ruins, many Synmark cases) use 16-bit
floating point formats, with no apparent ill effects.
So, disable compression for 32-bit float formats for now, but leave it
enabled for 16-bit float formats as they seem to be working fine.
Improves performance in Paraview's Wavelet Volume test by 62% on a
Skylake GT4e.
Fixes: 3cfc6a207b ("iris: Fill out res->aux.possible_usages")
Tests done with llvm-config indicate that there are only two libraries in
irreader and not in engine, LLVMAsmParser and LLVMIRReader, and both of them
are part of coroutines, so I replaced irreader with coroutines and added
the libraries unique to coroutines.
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
Now that we have constant adjustment logic abstracted, we can do this
safely. Along with the csel inversion patch, this allows many more
common csel ops to inline their condition in the bundle.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
If we can reuse constant slots from other instructions, we would like to
do so to include more instructions per bundle.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
If an instruction could be scheduled to vmul to satisfy the writeout
conditions, let's do that and save an instruction+cycle per fragment
shader.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We still emit in-order but we switch to using the bundles created from
the new scheduler, which will allow greater flexibility and room for
out-of-order optimization.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We require chosen instructions to be "close", to avoid ballooning
register pressure. This is a kludge that will go away once we have
proper liveness tracking in the scheduler, but for now it prevents a lot
of needless spilling.
v2: Lower threshold to 6 (from 8). Schedule is hurt, but a few shaders
that spilled excessively are fixed.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Derp
We can bundle two load/store instructions together. This also eliminates
the need for explicit load/store pairing in a prepass.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Conditions for branches don't have a swizzle explicitly in the emitted
binary, but they do implicitly get swizzled in whatever instruction
wrote r31, so we need to handle that.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Conditional instructions (csel and conditional branches) require their
condition to be written to a special condition pipeline register (r31.w
for scalar, r31.xyzw for vector). However, pipeline registers are live
only for the duration of a single bundle. As such, the logic to schedule
conditionals correctly is surprisingly complex. Essentially, we see if we
could stuff the conditional within the same bundle as the csel/branch
without breaking anything; if we can, we do that. If we can't, we add a
dummy move to make room.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
A bit of a kludge but allows setting an implicit dependency of synthetic
conditional moves on the actual condition, fixing code generated like:
vmul.feq r0, ..
sadd.imov r31, .., r0
vadd.fcsel [...]
The imov runs simultaneously with the feq so it gets garbage results, but it's
too late to add an actual dependency practically speaking, since the new
synthetic imov doesn't have a node associated.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
In the future, we will want to keep track of which components of
constants of various sizes correspond to which parts of the bundle
constants, like in the old scheduler. For now, let's just stub it out
for a simple rule of one instruction with embedded constants per bundle.
We can eventually do better, of course.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We don't actually do any scheduling here yet, but add per-tag helpers to
consume an instruction, print it, and pop it off the worklist.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It's not always obvious what the optimal bundle type should be. Let's
break out the logic to decide.
Currently set for purely in-order operation.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
After we've chosen an instruction, popped it off, and processed it, it's
time to update the worklist, removing that instruction from the
dependency graph to allow its dependents to be put onto the worklist.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
In the future, this routine will implement the core scheduling logic to
decide which instruction out of the worklist will be scheduled next, in
a way that minimizes cycle count and register pressure.
In the present, we are more interested in replicating in-order
scheduling with the much-more-powerful out-of-order model. So rather
than discriminating by a register pressure estimate, we simply choose
the latest possible instruction in the worklist.
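A bare-bones sketch of that choice, assuming a worklist kept in original
program order and a per-entry "ready" flag (names are illustrative, not the
actual Midgard scheduler code):

#include <stdbool.h>

/* Return the index of the latest worklist entry whose dependencies are
 * all satisfied, or -1 if nothing is ready to schedule yet. */
static int
choose_latest_ready(const bool *ready, int count)
{
   for (int i = count - 1; i >= 0; i--)
      if (ready[i])
         return i;
   return -1;
}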
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We would like to flatten a linked list of midgard_instructions into an
array of midgard_instruction pointers on the heap.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It's not based on the writemask and it can't be inferred; it's just
intrinsic to the op itself.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This worked better than my original v3d-local pass for just subs, and is a
huge win over not producing subs.
total instructions in shared programs: 6408469 -> 6167932 (-3.75%)
total threads in shared programs: 153784 -> 154104 (0.21%)
total uniforms in shared programs: 2157078 -> 1905823 (-11.65%)
total max-temps in shared programs: 904546 -> 895796 (-0.97%)
total spills in shared programs: 4959 -> 4993 (0.69%)
total fills in shared programs: 6558 -> 6670 (1.71%)
total sfu-stalls in shared programs: 25845 -> 25175 (-2.59%)
total inst-and-stalls in shared programs: 6434314 -> 6193107 (-3.75%)
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
There are some optimizations which are only implemented for additions
and some optimizations which assume that subtractions have been lowered.
By lowering all subtractions first and later recombine for backends
which prefer this option, we don't have to implement them twice.
This patch also moves lower_negate to nir_opt_algebraic_late() to enable
these optimizations for backends which make use of it.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Upcoming changes to sub optimization will make this pass required. Over
the course of that series, we see uniforms +.46%, instructions -.24%
(seems like a fine tradeoff -- uniforms are 1/2 the size of instructions
as far as cache occupancy)
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Without this, it was theoretically possible for the jobs to run before
the docker image was ready.
v2:
* Use - list syntax instead of [] (Eric Engestrom)
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This allows most build jobs to run before the stretch or arm64 docker
images are ready.
v2:
* Use - list syntax instead of [] (Eric Engestrom)
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This allows the *-old-llvm jobs to run before the buster docker images
are ready.
v2:
* Use - list syntax instead of [] (Eric Engestrom)
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
This patch is based on 28e3f85e09/mingw-w64-mesa/link-ole32.patch but with tweaks to avoid MSVC build break when applied.
v2: Create Mingw platform alias pointing to windows host platform define to avoid spurious crosscompilation;
v3: Fix obviously wrong compiler flags for swr driver;
v4: Update original patch URL because it has been relocated;
v5: Don't bother patching autotools stuff as it's not used by the MSYS2 Mingw-w64 build and its days are numbered anyway;
v6: After Mingw posix flag fix in 295851eb things are far simpler as we don't need more linking of uuid, ole32, version and shell32 than what is already in place.
As X86AsmPrinter component is gone, LLVMX86AsmPrinter got replaced
with LLVMRemarks, LLVMBitstreamReader and LLVMDebugInfoDWARF.
Tests done with llvm-config on both LLVM 8 and 9 indicate that
mcjit, bitwriter and x86asmprinter fully fit inside engine component.
On other platforms and with meson build mcdisassembler was used to replace
X86AsmPrinter but mcdisassembler also fully fits inside engine component
for LLVM>=8 according to same tests.
v2: Avoid duplicating code related to Mingw pthreads.
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
Cc: 19.1 19.2 <mesa-stable@lists.freedesktop.org>
On 19.1 this patch does not apply cleanly without 88eb2a1f
Looks like the blob uses the following values for the uniforms buffer:
0 for 8 bytes
1 for 16 bytes
2 for 24 bytes
2 for 32 bytes
3 for 40 bytes
3 for 48 bytes
3 for 56 bytes
3 for 64 bytes
4 for 72 bytes
It all looks like log2(size / 8) rounded up, so let's do the same.
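In code form, that encoding would look roughly like this (a sketch of the
rule above, not the exact lima helper):

/* ceil(log2(size / 8)): 8 bytes -> 0, 16 -> 1, 24..32 -> 2,
 * 40..64 -> 3, 72 -> 4, matching the table above. */
static unsigned
uniforms_size_field(unsigned size_in_bytes)
{
   unsigned n = size_in_bytes / 8; /* the sizes above are multiples of 8 */
   unsigned field = 0;

   while ((1u << field) < n)
      field++;
   return field;
}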
Fixes: 931fc2a7b3f9("lima: do not set the PP uniforms address lowest bits")
Reviewed-by: Icenowy Zheng <icenowy@aosc.io>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Android build rules are added in src/amd/Android.compiler.mk.
The libmesa_aco static library is built conditionally for radeonsi,
as is done for the vulkan.radv module; this prevents Android build
errors on non-x86 systems.
The compiler/aco_instruction_selection_setup.cpp source is filtered
out, as it is already included by compiler/aco_instruction_selection.cpp
and would otherwise cause several multiple definition linker errors.
NOTE: libLLVM requires the AMDGPU Disassembler to build radv with aco
Fixes: 93c8ebf ("aco: Initial commit of independent AMD compiler")
Fixes: a70a998 ("radv/aco: Setup alternate path in RADV to support the experimental ACO compiler")
Signed-off-by: Mauro Rossi <issor.oruam@gmail.com>
Fixes a few building errors similar to the following:
In file included from external/mesa/src/amd/compiler/aco_instruction_selection.cpp:26:
In file included from external/libcxx/include/algorithm:639:
external/libcxx/include/utility:321:9:
error: implicit instantiation of undefined template 'std::__1::array<aco::Temp, 4>'
_T2 second;
^
Fixes: 93c8ebf ("aco: Initial commit of independent AMD compiler")
Signed-off-by: Mauro Rossi <issor.oruam@gmail.com>
Fixes the following piglit test: fragdepth_gles2 (for ETNA_MESA_DEBUG=nir)
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Note it can still be improved a bit:
* Use alu swizzle to determine if src is scalar
* Take into account new immediates in the multiple uniform src lowering
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
This can improve performance by allowing the LAST_VARYING_2X bit to be
set when possible (and possibly more benefits on HALTI5 where the
number of components is set for each varying).
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
LOAD starts reading into the first enabled destination component, and
doesn't skip disabled components, so we need to allocate a destination with
contiguous components.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Only invert front facing when glFrontFace is GL_CW.
Fixes following deqp test:
dEQP-GLES2.functional.shaders.builtin_variable.frontfacing
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
The PP uniforms address register in render state is not a direct pointer
to the uniforms storage -- instead, it points to a one-item array, and
the array item is the real pointer to the uniforms storage.
This register reuses some of its LSBs as a size field. Currently the
size is set according to the length of the real uniforms storage.
However, as the register itself contains only a pointer to the one-item
array, the size field should be set to the length of the one-item array
minus 1, which means a fixed value of 0. That means we can
just omit it now.
Test shows this should be the correct approach to set this register.
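Schematically (with made-up names for illustration; this is not the actual
lima code):

#include <stdint.h>

static void
set_pp_uniforms(uint32_t *render_state_uniforms_reg,
                uint32_t *indirection_cpu,   /* CPU view of the one-item array */
                uint32_t indirection_gpu_va, /* GPU address of that array */
                uint32_t uniforms_gpu_va)    /* real uniforms storage */
{
   /* The single array entry holds the real pointer to the uniforms. */
   indirection_cpu[0] = uniforms_gpu_va;

   /* The size field in the low bits would be (array length - 1) = 0,
    * so it can simply be omitted. */
   *render_state_uniforms_reg = indirection_gpu_va;
}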
Signed-off-by: Icenowy Zheng <icenowy@aosc.io>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
glsl 4.4 spec section '5.9 expressions':
"The operator is multiply (*), where both operands are matrices or one operand is a vector and the
other a matrix. A right vector operand is treated as a column vector and a left vector operand as a
row vector. In all these cases, it is required that the number of columns of the left operand is equal
to the number of rows of the right operand. Then, the multiply (*) operation does a linear
algebraic multiply, yielding an object that has the same number of rows as the left operand and the
same number of columns as the right operand. Section 5.10 “Vector and Matrix Operations”
explains in more detail how vectors and matrices are operated on."
This fix disallows a multiplication of incompatible matrices like:
mat4x3(..) * mat4x3(..)
mat4x2(..) * mat4x2(..)
mat3x2(..) * mat3x2(..)
....
CC: <mesa-stable@lists.freedesktop.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111664
Signed-off-by: Andrii Simiklit <andrii.simiklit@globallogic.com>
According to the 1.1.123 spec:
"The implementation will attempt to create all pipelines, and only
return VK_NULL_HANDLE values for those that actually failed."
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
I was inheriting the one from src/freedreno with funny tabs, while
this driver is written with normal Mesa 3-space indents.
Unfortunately I have to add both files, because I use emacs and emacs
prefers .dir-locals to .editorconfig :(
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
We include shader_enums.h from freedreno's compiler for both GL and
Vulkan, and the main/config.h include resulted in polluting the
namespace with things like MAX_VIEWPORTS that other Vulkan drivers use
as their driver-specific maximums.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Without this, we were DCEing flag writes because we didn't think their
results were used because we didn't understand that an ANY32 predicate
actually read all the flags.
Fixes: df1aec763e "i965/fs: Define methods to calculate the flag..."
Reviewed-by: Matt Turner <mattst88@gmail.com>
Passes most of piglit's tests regarding arb_framebuffer_object
and unlocks some more piglit tests.
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Jonathan Marek <jonathan@marek.ca>
We are using util_resource_copy_region(..) as fallback which supports
different formats for src and dst. Improves the experience when running
deqp or piglit with a debug build.
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Jonathan Marek <jonathan@marek.ca>
Prior to xvmc 1.0.12, libxvmc incorrectly required libxv, but that was
fixed. That fix results in compilation failures for the gallium xvmc state
tracker and tools. This patch fixes that by explicitly linking to libxv.
Fixes: 22a817af8a
("meson: build gallium xvmc state tracker")
Closes: https://gitlab.freedesktop.org/mesa/mesa/issues/1844
Reviewed-by: Adam Jackson <ajax@redhat.com>
We don't want to require a visual for the drawable, because there exist
fbconfigs that don't correspond to any visual (say a 565 pixmap|pbuffer
config on a depth-24 display). Fortunately, we don't need one either.
Passing the visual to XCreateImage serves only to fill in the XImage's
{red,green,blue}_mask fields, which libX11 itself never uses, they exist
only for the client's convenience, and we don't care. And we already
have the drawable depth in glx_config::rgbBits. So replace the
XVisualInfo field in the drawable private with a pointer to the
glx_config.
Having done that driswCreateGCs becomes trivial, so inline it into its
caller.
Gitlab: https://gitlab.freedesktop.org/mesa/mesa/issues/1194
Reviewed-by: Eric Anholt <eric@anholt.net>
There's no reason to have two GCs here. The only difference between
them is that swapgc would generate graphics exposures, except we only
ever use this GC for PutImage, and PutImage doesn't generate graphics
exposures. We also don't need to explicitly ChangeGC to GXCopy, because
that's the default.
Reviewed-by: Eric Anholt <eric@anholt.net>
Looks like r16_unorm might have precision issues.
dEQP-VK.api.copy_and_blit.core.image_to_image.all_formats.color.r16_unorm.r16_unorm.general_general
fails, but the dumped images in the xml are the same so
I'd guess the low bits are the issue.
r8_unorm and r16_uint work.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
3D blits & format reinterpretation are still TBD.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
We need the loop header phis for the outer exec masks. Needed for
dEQP-VK.glsl.demote.dynamic_loop_texture
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
NIR may emit a single instrinsic to load several packed varyings,
but that's suboptimal for Utgard PP for several reasons:
- varyings that are used as sampler inputs can be passed using a
pipeline register with increased precision
- we have a small number of regs, so using a vec4 reg for storing
two vec2 varyings increases reg pressure.
Add NIR pass to split a single load into several loads and utilize
it in lima.
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
According to radeonsi, GLM doesn't support WB alone, so
we have to set INV too when WB is set.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Avoids getting a "load_output" in a case like this:
gl_Position = ubuf.MVP * ubuf.position[gl_VertexIndex];
frag_pos = gl_Position.xyz;
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Acked-by: Eric Anholt <eric@anholt.net>
Lower these to something compatible with ir3, and save the descriptor set
and binding information.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Acked-by: Eric Anholt <eric@anholt.net>
The old version of the iterators relied on a &iter->field != NULL check,
which works fine on older GCC, but newer GCC versions and clang have
optimizations that break if you do pointer math on a null pointer. The
correct solution is to do the null comparisons before we do any sort of
&iter->field, or to use rb_node_data to do the reverse operation.
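A condensed sketch of the hazard and the safe pattern, using a made-up
container rather than the real rb_tree iterators:

#include <stddef.h>

struct rb_node { struct rb_node *left, *right, *parent; };
struct entry   { int key; struct rb_node node; };

/* Computing &n->node (or equivalent pointer math) when n may be NULL is
 * undefined behavior, so the compiler is allowed to drop a later NULL
 * check on the derived pointer. */
#define entry_of(n) \
   ((struct entry *)((char *)(n) - offsetof(struct entry, node)))

static struct entry *
entry_or_null(struct rb_node *n)
{
   /* Safe pattern: compare against NULL before any pointer arithmetic. */
   return n ? entry_of(n) : NULL;
}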
Acked-by: Michel Dänzer <mdaenzer@redhat.com>
Tested-by: Michel Dänzer <mdaenzer@redhat.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This is based on the fix I used for the same problem on V3D. In this
case, it fixes all of the dEQP-GLES2.functional.texture.filtering.2d.*
failures except the *_npot cases.
Acked-by: Rob Clark <robdclark@chromium.org>
As Vasily discovered, bit 7 of word 1 of the texture descriptor
is set when reloading the framebuffer, to use a framebuffer-based offset
rather than a normalized one. This bit also works for regular textures to
enable accessing with non-normalized offset.
Add support for rectangle texture by setting this bit for
PIPE_TEXTURE_RECT.
Suggested-by: Vasily Khoruzhick <anarsoul@gmail.com>
Signed-off-by: Icenowy Zheng <icenowy@aosc.io>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Per the valgrind output below, we were returning the pointer to freed
memory if none of the later conditional pointer assignments were
executed. This caused dEQP CI jobs to crash on certain runners,
presumably due to a double-free down the line.
Also, we were skipping to the out: label before the vendor_id & chip_id
variables used by it were initialized, resulting in broken
LIBGL_DEBUG=verbose output such as
libGL: pci id for fd 4: 51108f00:51108f00, driver radeonsi
Fixes: 5a545e355b "loader: always map the "amdgpu" kernel driver name to radeonsi (v2)"
==403== Invalid read of size 1
==403== at 0x4AFD576: surfaceless_probe_device (platform_surfaceless.c:316)
==403== by 0x4AFD915: dri2_initialize_surfaceless (platform_surfaceless.c:391)
==403== by 0x4AF5EEA: dri2_initialize (egl_dri2.c:984)
==403== by 0x4AF5EEA: dri2_initialize (egl_dri2.c:958)
==403== by 0x4AF1EEC: _eglMatchAndInitialize (egldriver.c:75)
==403== by 0x4AF1F3B: _eglMatchDriver (egldriver.c:96)
==403== by 0x4AE9367: eglInitialize (eglapi.c:617)
==403== by 0x1D99C9: tcu::surfaceless::EglRenderContext::EglRenderContext(glu::RenderConfig const&, tcu::CommandLine const&) [clone .constprop.57] (in /deqp/modules/gles2/deqp-gles2)
==403== by 0x1DABB0: tcu::surfaceless::ContextFactory::createContext(glu::RenderConfig const&, tcu::CommandLine const&, glu::RenderContext const*) const (in /deqp/modules/gles2/deqp-gles2)
==403== by 0x53EBD1: glu::createRenderContext(tcu::Platform&, tcu::CommandLine const&, glu::RenderConfig const&, glu::RenderContext const*) (in /deqp/modules/gles2/deqp-gles2)
==403== by 0x53EFE9: glu::createDefaultRenderContext(tcu::Platform&, tcu::CommandLine const&, glu::ApiType) (in /deqp/modules/gles2/deqp-gles2)
==403== by 0x1DE07A: deqp::gles2::Context::Context(tcu::TestContext&) (in /deqp/modules/gles2/deqp-gles2)
==403== by 0x1DB5EF: deqp::gles2::TestPackage::init() (in /deqp/modules/gles2/deqp-gles2)
==403== Address 0x56bd340 is 0 bytes inside a block of size 4 free'd
==403== at 0x48369AB: free (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==403== by 0x4B01767: loader_get_driver_for_fd (loader.c:464)
==403== by 0x4AFD553: surfaceless_probe_device (platform_surfaceless.c:308)
==403== by 0x4AFD915: dri2_initialize_surfaceless (platform_surfaceless.c:391)
==403== by 0x4AF5EEA: dri2_initialize (egl_dri2.c:984)
==403== by 0x4AF5EEA: dri2_initialize (egl_dri2.c:958)
==403== by 0x4AF1EEC: _eglMatchAndInitialize (egldriver.c:75)
==403== by 0x4AF1F3B: _eglMatchDriver (egldriver.c:96)
==403== by 0x4AE9367: eglInitialize (eglapi.c:617)
==403== by 0x1D99C9: tcu::surfaceless::EglRenderContext::EglRenderContext(glu::RenderConfig const&, tcu::CommandLine const&) [clone .constprop.57] (in /deqp/modules/gles2/deqp-gles2)
==403== by 0x1DABB0: tcu::surfaceless::ContextFactory::createContext(glu::RenderConfig const&, tcu::CommandLine const&, glu::RenderContext const*) const (in /deqp/modules/gles2/deqp-gles2)
==403== by 0x53EBD1: glu::createRenderContext(tcu::Platform&, tcu::CommandLine const&, glu::RenderConfig const&, glu::RenderContext const*) (in /deqp/modules/gles2/deqp-gles2)
==403== by 0x53EFE9: glu::createDefaultRenderContext(tcu::Platform&, tcu::CommandLine const&, glu::ApiType) (in /deqp/modules/gles2/deqp-gles2)
==403== Block was alloc'd at
==403== at 0x483577F: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==403== by 0x4EE5E09: strndup (strndup.c:43)
==403== by 0x4B010B1: loader_get_kernel_driver_name (loader.c:101)
==403== by 0x4B016AF: loader_get_driver_for_fd (loader.c:462)
==403== by 0x4AFD553: surfaceless_probe_device (platform_surfaceless.c:308)
==403== by 0x4AFD915: dri2_initialize_surfaceless (platform_surfaceless.c:391)
==403== by 0x4AF5EEA: dri2_initialize (egl_dri2.c:984)
==403== by 0x4AF5EEA: dri2_initialize (egl_dri2.c:958)
==403== by 0x4AF1EEC: _eglMatchAndInitialize (egldriver.c:75)
==403== by 0x4AF1F3B: _eglMatchDriver (egldriver.c:96)
==403== by 0x4AE9367: eglInitialize (eglapi.c:617)
==403== by 0x1D99C9: tcu::surfaceless::EglRenderContext::EglRenderContext(glu::RenderConfig const&, tcu::CommandLine const&) [clone .constprop.57] (in /deqp/modules/gles2/deqp-gles2)
==403== by 0x1DABB0: tcu::surfaceless::ContextFactory::createContext(glu::RenderConfig const&, tcu::CommandLine const&, glu::RenderContext const*) const (in /deqp/modules/gles2/deqp-gles2)
Acked-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
This new option can help debug shader compiler problems when
there are issues with the meta shaders.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Add a function called ac_get_fs_input_vgpr_cnt which will return
the number of input VGPRs used by an AMD shader. Previously,
radv and radeonsi had the same code duplicated; this commit
allows them to share it.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This commit allows RADV to set the shared VGPR count according to
the shader config.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This commit moves ac_get_tbuffer_format, ac_get_sampler_dim and
ac_get_image_dim into ac_shader_util, thus enabling them to be used
by compilers other than LLVM.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The aim of this commit is to keep ac_shader_util LLVM-free,
since we would like to use it in ACO later.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
v2: rename pass_temp to pass_flags
v2: also CSE reductions
v3: add ds_swizzle_b32 support
v3: check gds/offset0/offset1 fields
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
We want to generate PC files for non-glvnd builds and for builds with
old glvnd, but the current logic doesn't do that, it builds them
unconditionally, and for GLES it builds the shared libraries, which is
also not what we want. This does not generate .pc files for gles1 or
gles2, which we weren't doing before either, making this not a
regression but a return to the status quo.
Closes: https://gitlab.freedesktop.org/mesa/mesa/issues/1838
Fixes: 93df862b6a
("meson: re-add incorrect pkg-config files with GLVND for backward compatibility")
Reviewed-by: Matt Turner <mattst88@gmail.com>
This allows the result of mov and bcsel to be separately interpreted as
float or int depending on the use.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Some shaders are hurt by this change because now a
load_const(0x00000000) is not recognized as eq_zero when loaded as a
float. This behavior is restored in a later patch (nir/range-analysis:
Use types to provide better ranges from bcsel and mov).
v2: Add a comment about reinterpretation of int/uint/bool. Suggested by
Caio. Rewrite the condition to check for types being float instead of
checking for types not being all the things that aren't float.
Fixes: 405de7ccb6 ("nir/range-analysis: Rudimentary value range analysis pass")
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
All Gen7+ platforms had similar results. (Ice Lake shown)
total instructions in shared programs: 16327543 -> 16328255 (<.01%)
instructions in affected programs: 55928 -> 56640 (1.27%)
helped: 0
HURT: 208
HURT stats (abs) min: 1 max: 16 x̄: 3.42 x̃: 3
HURT stats (rel) min: 0.33% max: 6.74% x̄: 1.31% x̃: 1.12%
95% mean confidence interval for instructions value: 3.06 3.79
95% mean confidence interval for instructions %-change: 1.17% 1.46%
Instructions are HURT.
total cycles in shared programs: 363682759 -> 363683977 (<.01%)
cycles in affected programs: 325758 -> 326976 (0.37%)
helped: 44
HURT: 133
helped stats (abs) min: 1 max: 179 x̄: 33.61 x̃: 5
helped stats (rel) min: 0.06% max: 14.21% x̄: 2.47% x̃: 0.29%
HURT stats (abs) min: 1 max: 157 x̄: 20.28 x̃: 14
HURT stats (rel) min: 0.07% max: 14.44% x̄: 1.42% x̃: 0.73%
95% mean confidence interval for cycles value: 0.38 13.39
95% mean confidence interval for cycles %-change: -0.06% 0.96%
Inconclusive result (%-change mean confidence interval includes 0).
Sandy Bridge
total instructions in shared programs: 10787433 -> 10787443 (<.01%)
instructions in affected programs: 1842 -> 1852 (0.54%)
helped: 0
HURT: 10
HURT stats (abs) min: 1 max: 1 x̄: 1.00 x̃: 1
HURT stats (rel) min: 0.33% max: 1.85% x̄: 0.73% x̃: 0.49%
95% mean confidence interval for instructions value: 1.00 1.00
95% mean confidence interval for instructions %-change: 0.36% 1.10%
Instructions are HURT.
total cycles in shared programs: 153724543 -> 153724563 (<.01%)
cycles in affected programs: 8407 -> 8427 (0.24%)
helped: 1
HURT: 3
helped stats (abs) min: 18 max: 18 x̄: 18.00 x̃: 18
helped stats (rel) min: 0.98% max: 0.98% x̄: 0.98% x̃: 0.98%
HURT stats (abs) min: 4 max: 18 x̄: 12.67 x̃: 16
HURT stats (rel) min: 0.21% max: 0.75% x̄: 0.56% x̃: 0.72%
95% mean confidence interval for cycles value: -21.31 31.31
95% mean confidence interval for cycles %-change: -1.11% 1.46%
Inconclusive result (value mean confidence interval includes 0).
No shader-db changes on Iron Lake or GM45.
We're using vs and fs now, and adding hs, ds and gs soon. It's
confusing enough that we have both DS/TCS and HS/TES. At least for VS
and FS there doesn't have to be multiple names.
Signed-off-by: Kristian H. Kristensen <hoegsberg@google.com>
RET as the last instruction can be safely ignored.
Remove it to prevent crashes/warnings in case the underlying driver
doesn't implement arbitrary returns.
A better way would be to remove the RET after the whole shader
is parsed, which would handle the possible case where the last RET is
followed by a comment.
CC: <mesa-stable@lists.freedesktop.org>
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Reviewed-by: Axel Davy <davyaxel0@gmail.com>
--oneline shortens hashes, while --pretty=oneline doesn't; otherwise
they are the same. Having full hashes is convenient as that is the
format that the bin/.cherry-ignore script requires to work correctly.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Juan A. Suarez <jasuarez@igalia.com>
The main reason to do this is that 19.2 has slipped by two weeks, and
such the 19.3 branch is due to happen extremely close to the release of
19.2.0. I think it would be better to have a little more time between
releases for developers and for packagers.
This would still have the 19.3 release out before December, even if it
slips by 1 week.
Acked-By: Karol Herbst <kherbst@redhat.com>
Acked-by: Juan A. Suarez <jasuarez@igalia.com>
This is a bit counter-intuitive, but the issue is that GLVND is broken
in versions <= 1.1.1, so we need to keep wrongly providing these files
to cover up their mistake, otherwise the rest of the world ends up
broken.
Suggested-by: Dylan Baker <dylan@pnwbakers.com>
Cc: mesa-stable@lists.freedesktop.org
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
We currently lower them, but nir_opt_algebraic() can add new ones because
lower_sub=true.
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
When handling two variables with overlapping locations, we process the
one with lower location first, and then extend the location ->
driver_location map to guarantee that it's contiguous for the second
variable too. But the loop had the wrong bound, so we weren't extending
the map 100%, which could lead to problems later such as an incorrect
num_inputs. The loop index i is an index into the slots of the variable,
so we need to stop at the final slot of the variable (var_size) instead
of the number of unassigned slots.
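Schematically, the fix is just the loop bound (a paraphrase with made-up
names, not the actual linker code):

static void
map_overlapping_var(unsigned *location_map, unsigned var_base_location,
                    unsigned base_driver_location, unsigned var_size)
{
   /* Walk every slot of the variable (var_size), not just the slots that
    * were still unassigned, so the location -> driver_location map stays
    * contiguous for the whole variable. */
   for (unsigned i = 0; i < var_size; i++)
      location_map[var_base_location + i] = base_driver_location + i;
}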
This fixes
spec@arb_enhanced_layouts@execution@component-layout@vs-fs-array-interleave-range
on radeonsi NIR.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
both clang and gcc warn with:
"moving a local object in a return statement prevents copy elision"
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Pierre Moreau <dev@pmoreau.org>
This moves the fix from commit 361f3d19f1 to happen in get_param
(used now instead of get_handle by st/dri). This fixes artifacts
seen with Xorg and CCS_E.
Fixes: fc12fd05f5 "iris: Implement pipe_screen::resource_get_param"
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Without this, we'll incorrectly round off huge values to the nearest
representable double instead of keeping them at the exact value as
we're supposed to.
Found by inspecting compiler warnings.
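As a standalone illustration of the failure mode (not the actual glsl code):
a 64-bit integer that takes a detour through double silently loses its low
bits, because double only has 53 bits of mantissa.

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
     uint64_t exact = UINT64_C(9007199254740993);  /* 2^53 + 1 */
     double rounded = (double)exact;               /* rounds to 2^53 */
     printf("%llu -> %.0f\n", (unsigned long long)exact, rounded);
     return 0;
  }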
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Fixes: 85faf5082f ("glsl: Add 64-bit integer support for constant expressions")
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Currently we add dependencies in 3 cases:
1) One node consumes a value produced by another node
2) Sequence dependencies
3) Write-after-read dependencies
2) and 3) only affect scheduler decisions since we can still use the pipeline
register if we have only 1 dependency of type 1).
Add 3 dependency types and mark dependencies as we add them.
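A minimal sketch of what the tagged dependency could look like (the names here
are illustrative, not the actual ppir definitions):

  typedef enum {
     PPIR_DEP_SRC,               /* 1) consumer reads the producer's value */
     PPIR_DEP_SEQUENCE,          /* 2) ordering-only dependency */
     PPIR_DEP_WRITE_AFTER_READ,  /* 3) WAR dependency */
  } ppir_dep_type;

  typedef struct {
     void *pred, *succ;          /* producing / consuming node */
     ppir_dep_type type;         /* the scheduler may still use the pipeline
                                    register when the only dependency is of
                                    type 1) */
  } ppir_dep;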
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
It makes no sense to clone texture coords if they're not a varying; moreover,
we don't support cloning ALU nodes.
Fixes: 1c1890fa70 ("lima/ppir: clone uniforms and load_coords into each successor")
Reviewed-by: Andreas Baierl <ichgeh@imkreisrum.de>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
because vl doesn't call flush_resource and I wasn't able to find
all places where flush_resource needs to be called.
This fixes corrupted / unflushed surfaces with fullscreen videos on Raven.
Cc: 19.1 19.2 <mesa-stable@lists.freedesktop.org>
This fixes some piglit tests on radeonsi NIR where a varying is
initialized to a constant array in the vertex shader. Varying packing
after nir_lower_io_to_temporaries creates writemasked stores which
persist after pulling the constant initialization down into the fragment
shader.
While we're here, rewrite handle_constant_store() to do the loop over
components outside the switch, so that we don't have to duplicate the
writemask checking for every bitsize.
Fixes: 1235850522 ("nir: Add a large constants optimization pass")
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Fixes a compilation error when building libnouveau:
In file included from ../src/gallium/drivers/nouveau/nv50/nv50_program.c:25:
../src/compiler/nir/nir.h:1115:10: fatal error: nir_intrinsics.h: No such file or directory
#include "nir_intrinsics.h"
^~~~~~~~~~~~~~~~~~
compilation terminated.
Fixes: f014ae3c7c ("nouveau: add support for nir")
Signed-off-by: Stephen Barber <smbarber@chromium.org>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Released today and hangs on RADV. We don't have the root cause yet,
but this should unblock people playing the game.
No drirc because the radv debugflags are not usable from drirc and
I want this backported.
CC: <mesa-stable@lists.freedesktop.org>
Reviewed-by: Dave Airlie <airlied@redhat.com>
flrp was forgotten when the rounding mode was added for the other
instructions.
Fixes: ba1e25e1aa ("i965/fs: set rounding mode when emitting fadd, fmul and ffma instructions")
Suggested-by: Ian Romanick <ian.d.romanick@intel.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
After
1711bf6cf2 ("intel/fs: Generate better code for fsign multiplied by a value"),
the conflict resolution for setting the rounding mode after the
fused fmul and fsign optimization is non-obvious.
Basically, the optimization doesn't really result in a MUL, or any
other operation which would need to have the rounding mode set. Hence,
we set it just before the actual MUL in the treatment of fmul.
Fixes: ba1e25e1aa ("i965/fs: set rounding mode when emitting fadd, fmul and ffma instructions")
Suggested-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
The script only handles commits with "Fixes: <sha1>" where <sha1> is
equal to or greater than 8 chars. But <sha1> can be smaller, like 7 chars.
This commit relaxes the restriction to handle <sha1> of 4 or more chars.
Fixes: 533fead423 ("bin/get-pick-list.sh: tweak the commit sha matching pattern")
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Signed-off-by: Juan A. Suarez Romero <jasuarez@igalia.com>
The scheduler doesn't expect them. To do this, I had to refactor the
registration part of gpir_node_create_dest() to be separate from
creating and inserting the node, since the last two now aren't done when
handling moves. This adds more code but creates the possibility of
automatically inserting input dependencies when inserting nodes, similar
to what's done in NIR with the use-def lists (this isn't done yet).
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
We guarantee that a complex1 op is always used by postlog2 directly by
rewriting the postlog2 op to be a move when there would be a move
inserted between them. But we weren't doing this in all circumstances
where there might be a move. Move the logic to place_move() so that it
always happens. Fixes a few log tests that happened to start failing due
to changes in the register allocator leading to a different scheduling
order.
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
This commit adds the framework for cross-basic-block register
allocation. Like ARM's compiler, we assume that the value registers
aren't usable across branches, which means we have to use physical
registers to store any value that crosses a basic block. There are three
parts to this:
1. When translating from NIR, we rely on the NIR out-of-ssa pass to
coalesce values into registers. We insert store_reg instructions for
values used in more than one basic block, and load_reg instructions for
values not defined in the same basic block (or defined after their use,
for loops). So by the time we've translated out of NIR we've already
split things into values (which are only used in the same basic block)
and registers (which are only used in different basic blocks than where
they're defined).
2. We allocate the registers at the same time that we allocate the
values, before the final scheduler. Unlike the values, where the
assigned color is fake, we assign the actual physical index & component
to physregs at this stage. load_reg and store_reg are treated as moves
in the allocator and when creating write-after-read dependencies.
3. Finally, in the main scheduler we have to avoid overwriting existing
live physregs when spilling. First, we have to tell the scheduler which
physical registers are live at the end of each block, to avoid
overwriting those. If a register is only live at the beginning, we can
reuse it for spilling after the last original use in the final program
happens, i.e. before any original use is scheduled, but we have to be
careful to add the proper dependencies so that the spill write is
scheduled before the original reads. To handle this we repurpose
reg_link for uses to be used by the scheduler.
A few register-related things copied over from NIR or from other
drivers can be dropped.
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Because branch conditions have to be in the pass slot, there is no
unconditional branch, and realistically the pass slot has to contain a
move when branching (there's nothing it does that would be useful for
operating on booleans, so we can't use it for anything when computing
the branch condition), we put the branch instruction in the pass slot
and at codegen time turn it into a move of the branch condition. This
means that it doesn't have to be special-cased like store instructions
are in the scheduler. Because of this decision we can remove the
half-implemented BRANCH codegen slot. Finally, we (ab)use the existing
schedule_first mechanism to make sure that branches are always last in
the basic block.
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
When picking a node to be scheduled, we try to schedule its children as
well. But we shouldn't try to schedule nodes which only have a fake
dependency on the original node, since this isn't the point of
scheduling children at the same time and can break some expectations of
the rest of the code.
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Somewhat terrifyingly, we never sent this for direct contexts, which
means the server never knew the context/drawable bindings. To handle
this sanely, pull the request code up out of the indirect backend, and
rewrite the context switch path to call it as appropriate. This
attempts to preserve the existing behavior of not calling unbind() on
the context if its refcount would not drop to zero.
Of course, you can't just do this indiscriminately, because this is GLX
and extant X servers have bugs and everything is terrible. To wit:
- For 1.20.x prior to 1.20.6, you can bind a direct context once, but
the second time you try to modify the context's binding you will get
GLXBadContextTag. This includes unbinding the context. And "deleting"
the context will leak memory, because it will still appear to be
current.
- For 1.19 and earlier, glXMakeCurrent(dpy, None, ctx) should be legal
for GL 3.0+ contexts, but the server will throw BadMatch.
To guard against this, we only send the request for indirect contexts
unless the server is known good, and only mention one context at a time
in such a request; if switching between contexts, we first unbind the
old, and then bind the new. Note that the second VendorRelease() version
is to catch XFree86 4.x and Xorg [67].x, which almost certainly have the
above bugs. Other servers might report different version numbers here,
but we can't do direct rendering against them, so this should be safe.
Fixes glx-make-context, glx-multi-window-single-context and
glx-query-drawable-glx_fbconfig_id-window. Sufficiently old piglit will
regress on glx-make-glxdrawable-current (throwing BadMatch), which is
fixed by mesa/piglit!116.
From the MEDIA_VFE_STATE docs:
"Starting with this configuration, the Maximum Number of Threads must
be set to (#EU * 8) for GPGPU dispatches.
Although there are only 7 threads per EU in the configuration, the
FFTID is calculated as if there are 8 threads per EU, which in turn
requires a larger amount of Scratch Space to be allocated by the
driver."
It's pretty clear that we need to increase this for scratch address
calculations, because the FFTID has a certain bit-pattern. The quote
above seems to indicate that we should increase the actual thread count
programmed in MEDIA_VFE_STATE as well, but we think the intention is to
only bump the scratch space.
Fixes GPU hangs in Bioshock Infinite and Synmark's CSDof on Icelake 8x8.
Fixes: 5ac804bd9a ("intel: Add a preliminary device for Ice Lake")
Reviewed-by: Matt Turner <mattst88@gmail.com>
This reverts commit 729de1488f.
It turns out that, although the register is in the logical context,
it isn't whitelisted, so we can't actually write it from userspace
batch buffers. The write just becomes a noop, which is why we saw
no performance changes.
I manually whitelisted it, and still observed no performance gains, but
it did regress KHR-GL46.texture_cube_map_array.color_depth_attachments
on the iris driver. So we might need to fix something before enabling
this. To prevent it randomly getting turned on should the kernel ever
whitelist this register, we revert the patch for now.
'α' has never appeared in any genxml files, so there's no need to
replace it with the word "alpha".
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Use VPC_SO_OVERRIDE to control whether we do streamout in the binning or
draw pass. Normally we want to do streamout in the binning pass, except
when there is a single tile and the binning pass is skipped.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
We could be doing streamout from the binning pass. In this case we want to
use the full VS, which doesn't have (potentially streamed out) varyings
stripped out.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
In a3268599f3, I attempted to fix nir_repair_ssa for unreachable
blocks. However, that commit missed the possibility that the use is in
a block which, itself, is unreachable. In this case, we can end up in
an infinite loop trying to replace a def with itself. Even though a
no-op replacement is a fine operation, it keeps extending the end of the
uses list as we're walking it. Instead of explicitly checking for the
group of conditions, just check if the phi builder gives us a different
def. That's guaranteed to be 100% reliable and, while it lacks symmetry
with the is_valid checks, should be more reliable.
Fixes: a3268599 "nir/repair_ssa: Repair dominance for unreachable..."
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
pipe->clear() is not called for partial clears, which mesa emulates by
drawing a quad.
Furthermore, drivers should not use rasterizer state information for
scissor information (which was being used to handle the partial clears).
So, remove the partial clear support since it was not supposed to be
handled by pipe->clear() anyway.
This fixes issues with clearing after switching to different sized
framebuffers.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
->padded_count should be large enough to cover all vertices pointed to by
the index array. Use the local vertex_count variable that contains the
updated vertex_count value for the indexed draw case.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
fixes "sorry, unimplemented: non-trivial designated initializers not supported"
Fixes: deb04adf2a ("clover: add support for passing kernels as nir to the driver")
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Allocating BOs is expensive, so we should avoid doing that by caching
freed BOs.
The BO cache is modelled after the one in the v3d driver and works as follows:
- in lima_bo_create(), check if we have a matching BO in the cache and return
it if there is one; allocate a new BO otherwise.
- in lima_bo_unreference() (renamed from lima_bo_free()): put the BO in the
cache instead of freeing it, and remove all stale BOs from the cache.
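A rough sketch of the lookup path (only lima_bo_create() is a real name here;
the cache helpers and types are made up for illustration):

  #include <stdint.h>

  struct lima_screen;
  struct lima_bo;

  /* Hypothetical cache helpers, for illustration only. */
  struct lima_bo *lima_bo_cache_get(struct lima_screen *s, uint32_t size, uint32_t flags);
  struct lima_bo *lima_bo_alloc_new(struct lima_screen *s, uint32_t size, uint32_t flags);

  struct lima_bo *
  lima_bo_create(struct lima_screen *screen, uint32_t size, uint32_t flags)
  {
     struct lima_bo *bo = lima_bo_cache_get(screen, size, flags);
     if (bo)
        return bo;                                  /* reuse a previously freed BO */
     return lima_bo_alloc_new(screen, size, flags); /* fall back to a fresh allocation */
  }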
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
os_time_get_absolute_timeout(0) returns the current time, while the kernel
driver expects 0 as the value to poll BO status and return immediately.
Fix it by setting abs_timeout to 0 if timeout_ns is 0.
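The fix boils down to something like this (a sketch; the wrapper and the
include path are assumptions, os_time_get_absolute_timeout() is the real util
helper):

  #include <stdint.h>
  #include "util/os_time.h"

  /* Pass 0 straight through so the kernel just polls the BO status,
   * instead of turning it into "now". */
  static int64_t lima_abs_timeout(uint64_t timeout_ns)
  {
     return timeout_ns ? os_time_get_absolute_timeout(timeout_ns) : 0;
  }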
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Sometimes weston sets a full damage region. It is
more efficient to use the cached pp stream instead
of dynamically creating one.
Reviewed-and-Tested-by: Vasily Khoruzhick <anarsoul@gmail.com>
Signed-off-by: Qiang Yu <yuq825@gmail.com>
This extension sets a damage region for each
buffer swap, which can be used to reduce buffer
reload cost by only feeding the damage region's
tile buffer addresses to the PP.
Reviewed-and-Tested-by: Vasily Khoruzhick <anarsoul@gmail.com>
Signed-off-by: Qiang Yu <yuq825@gmail.com>
The PLBU expects the viewport's 4 borders' coordinates, however
currently we're feeding the coordinate of the left-bottom point and the
size to it, which leads to misrendering when the left-bottom point is
not (0,0).
Change the macros for the viewport PLBU command, and the data fed to
it. The code to calculate the 4 borders is ported from Panfrost.
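For reference, with gallium's pipe_viewport_state the four borders can be
derived roughly like this (a sketch of the Panfrost-style computation, not the
exact lima code):

  #include <math.h>
  #include "pipe/p_state.h"

  static void
  viewport_borders(const struct pipe_viewport_state *vp,
                   float *left, float *right, float *bottom, float *top)
  {
     *left   = vp->translate[0] - fabsf(vp->scale[0]);
     *right  = vp->translate[0] + fabsf(vp->scale[0]);
     *bottom = vp->translate[1] - fabsf(vp->scale[1]);
     *top    = vp->translate[1] + fabsf(vp->scale[1]);
  }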
Signed-off-by: Icenowy Zheng <icenowy@aosc.io>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
ACO depends on C++14, but radeonsi/radv with LLVM 8,9 do not. Let us
only require it for RADV, since that is the only user.
Fixes: a70a998718 "radv/aco: Setup alternate path in RADV to support the experimental ACO compiler"
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
required for OpenCL
v2: adjust to changes in previous commits
v3: properly convert to NIR in nvc0_cp_state_create
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Pierre Moreau <pierre.morrow@free.fr> (v1)
v2: minor formatting fixes
v3: call glsl_type_singleton_init_or_ref and glsl_type_singleton_decref
v4: capitalize and punctuate comments
fix text_executable -> text_intermediate in TODO
make glsl_type_singleton wrapper static
v5: rewrite how we run the nir passes
v6: fix unhandled case switch warning in st/mesa
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Francisco Jerez <currojerez@riseup.net> (v4)
v2: rework arguments to compiler::compile_program
add assert to device::ir_format
v3: remove PIPE_SHADER_IR_SPIRV
change title
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Francisco Jerez <currojerez@riseup.net> (v2)
Reviewed-by: Pierre Moreau <pierre.morrow@free.fr>
Most drivers have actually no binary format and just store the IR directly
as a single entry point blob.
v2: add a cap to switch between single or multi entry point binaries
v3: remove the entry_point field
v4: remove PIPE_CAP_MULTI_ENTRY_POINT_BINARIES
v5: remove supports_multiple_entry_points
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Pierre Moreau <pierre.morrow@free.fr>
Changes since:
* v12:
- remove autotools (Karol Herbst)
- Remove the callback in format_validation_msg. (Francisco Jerez)
- Removed is_binary_spirv. (Francisco Jerez)
- Pass a string reference to is_valid_spirv instead of the
notification callback. (Francisco Jerez)
* v11: Fix compilation error introduced in v11.
* v10:
- Reuse format_validation_msg in is_valid_spirv.
- Remove LVL2STR macro in format_validation_msg.
* v9: Add `clover_cpp_std` to the overrides of the `libclspirv` target
in Meson.
* v7: Add DEFINES to libclspirv and libclover, in autotools, as they
would otherwise never know whether CLOVER_ALLOW_SPIRV has been
defined (Dave Airlie)
* v6: Update the dependency name (meson) and the libs variable
(Makefile) due to the replacement of llvm-spirv to the new
official SPIRV-LLVM-Translator.
* v5: Changed to match the updated “clover/llvm: Allow translating from
SPIR-V to LLVM IR” in the v6.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
Changes since:
* v12 (Karol Herbst):
- rename CLOVER_ALLOW_SPIRV to HAVE_CLOVER_SPIRV
* v11 (Karol Herbst):
- only set new defines for clover to speed up recompilation
- remove autotools
* v10:
- Add a new flag (`--enable-opencl-spirv` for autotools, and
`-Dopencl-spirv=true` for meson) for enabling SPIR-V support in
clover, and never automagically enable it without that flag. (Dylan Baker)
- When enabling the SPIR-V support, the SPIRV-Tools and
SPIRV-LLVM-Translator libraries are now required dependencies.
* v7:
- Properly align LLVMSPIRVLib comment (Dylan Baker)
- Only define CLOVER_ALLOW_SPIRV when **both** dependencies are found:
autotools was only requiring one or the other.
* v6: Replace the llvm-spirv repository by the new official
SPIRV-LLVM-Translator.
* v4: Add a comment saying where to find llvm-spirv (Karol Herbst).
* v3:
- make SPIRV-Tools and llvm-spirv optional (Francisco Jerez);
- bump requirement for llvm-spirv to version 0.2
* v2:
- Bump the required version of SPIRV-Tools to the latest release;
- Add a dependency on llvm-spirv.
Reviewed-by: Dylan Baker <dylan@pnwbakers.com> (v10)
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
Gen11 doesn't require us to bypass the L2 cache for BC* images anymore.
The documentation is a bit hard to follow on this point, but the Windows
driver clearly only applies this workaround on Gen9, and their commit
history indicates that this was an intentional change to drop the
workaround for Gen11+.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Currently there is no way to make no context current w/gallium + osmesa.
The non-gallium version of osmesa does this if the context and buffer
passed to `OSMesaMakeCurrent` are both null. This small change makes it
so that this is also the case with the gallium version.
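Usage sketch (assuming the type and size arguments are irrelevant when
unbinding):

  #include <GL/osmesa.h>

  static void release_current_context(void)
  {
     /* NULL context + NULL buffer now unbinds the current context,
      * matching the classic (non-gallium) osmesa behaviour. */
     OSMesaMakeCurrent(NULL, NULL, GL_UNSIGNED_BYTE, 0, 0);
  }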
Cc: mesa-stable@lists.freedesktop.org
Signed-off-by: Hal Gentz <zegentzy@protonmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
We can't really handle it in the little-core 64-bit case but it's not
really needed there. Where we really want this is for when we need to
do 16 -> 8-bit conversions.
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Because byte immediates aren't a thing on GEN hardware, we return a
signed or unsigned word immediate in the byte case.
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
During generate_shuffle(), when we use byte sized registers we end up
with a destination stride of 2. We don't take the stride into
consideration when selecting the group offset for the last MOV
operation, which means we end up moving things to the wrong place,
leaving the last few channels untouched. Take the destination stride
in consideration so we don't miss the last channels.
v2: Assert this is not necessary for the IVB special case (Jason).
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
The new order matches that of the comparison functions accepted by the C
standard library qsort() functions. Being consistent with qsort will
hopefully help avoid developer confusion.
The only current user of the red-black tree is aub_mem.c which is pretty
easy to fix up.
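For reference, the convention is the one qsort() comparators follow:

  #include <stdlib.h>

  /* Negative if a < b, zero if equal, positive if a > b -- the tree's
   * comparison callbacks now use the same convention. */
  static int cmp_int(const void *pa, const void *pb)
  {
     int a = *(const int *)pa;
     int b = *(const int *)pb;
     return (a > b) - (a < b);
  }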
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
When I wrote the red-black tree implementation, I wrote tests for it but
they never got imported into mesa.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This effectively splits the instance dispatch table in two, with entry
points taking a physical device as their first argument getting their own
dispatch table.
As a result we now have to check the instance & physical device dispatch
tables instead of just the instance dispatch table as before.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
We were using the current drawable of the context to name the
appropriate screen for creating the bitmaps. But one, the current
drawable can be None, and two, it can be a GLXDrawable. Passing either
one as the second argument to XCreatePixmap will throw BadDrawable. Use
the root window of the context's screen instead.
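Roughly (a sketch, not the exact glx code; the depth-1 pixmap is just an
example for a font bitmap):

  #include <X11/Xlib.h>

  static Pixmap
  create_font_bitmap(Display *dpy, int screen, unsigned width, unsigned height)
  {
     /* The root window always exists and is never a GLXDrawable, so it is a
      * safe drawable argument for XCreatePixmap(). */
     return XCreatePixmap(dpy, RootWindow(dpy, screen), width, height, 1);
  }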
Gitlab: https://gitlab.freedesktop.org/mesa/mesa/issues/89
LOLed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
atof() is locale-dependent (sigh), which means 1.3 becomes 1.0 if the
locale's decimal separator isn't a full-stop. Just use the protocol
major/minor instead. This would be slightly broken if the server
generically implements 1.3+ but a particular screen is only capable of
less, but in practice no such servers exist.
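A standalone illustration of the pitfall (assuming a comma-decimal locale such
as de_DE is installed):

  #include <locale.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
     setlocale(LC_ALL, "de_DE.UTF-8");  /* decimal separator becomes ',' */
     /* Parsing stops at the '.', so this prints 1.000000 instead of 1.300000. */
     printf("%f\n", atof("1.3"));
     return 0;
  }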
Gitlab: https://gitlab.freedesktop.org/mesa/mesa/issues/74
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Some shaders do not use 'invariant' in vertex and (possibly) geometry
shader stages on some outputs that are intended to be invariant. For
various reasons, this optimization may not be fully applied in all
shaders used for different rendering passes of the same geometry. This
can result in Z-fighting artifacts (at best). For now, disable this
optimization in these stages.
In tessellation stages applications seem to use 'precise' when
necessary, so allow the optimization in those stages.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111490
Fixes: 09705747d7 ("nir/algebraic: Reassociate fadd into fmul in DPH-like pattern")
All Gen8+ platforms had similar results. (Ice Lake shown)
total instructions in shared programs: 16194726 -> 16344745 (0.93%)
instructions in affected programs: 2855172 -> 3005191 (5.25%)
helped: 6
HURT: 20279
helped stats (abs) min: 1 max: 3 x̄: 1.33 x̃: 1
helped stats (rel) min: 0.44% max: 1.00% x̄: 0.54% x̃: 0.44%
HURT stats (abs) min: 1 max: 32 x̄: 7.40 x̃: 7
HURT stats (rel) min: 0.14% max: 42.86% x̄: 8.58% x̃: 6.56%
95% mean confidence interval for instructions value: 7.34 7.45
95% mean confidence interval for instructions %-change: 8.48% 8.67%
Instructions are HURT.
total cycles in shared programs: 364471296 -> 365014683 (0.15%)
cycles in affected programs: 32421530 -> 32964917 (1.68%)
helped: 2925
HURT: 16144
helped stats (abs) min: 1 max: 403 x̄: 18.39 x̃: 5
helped stats (rel) min: <.01% max: 22.61% x̄: 1.97% x̃: 1.15%
HURT stats (abs) min: 1 max: 18471 x̄: 36.99 x̃: 15
HURT stats (rel) min: 0.02% max: 52.58% x̄: 5.60% x̃: 3.87%
95% mean confidence interval for cycles value: 21.58 35.41
95% mean confidence interval for cycles %-change: 4.36% 4.52%
Cycles are HURT.
There's nothing whatsoever compiler-specific about it other than that's
currently where it's used.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
There's a missing prev_ldst = NULL; assignment in the new logic,
but even with this fixed it seems to regress some applications,
so let's revert the change until we find the real problem.
This reverts commit c9bebae287.
BLORP always turns off TCS/TES/GS. If regular drawing also has them
disabled (the overwhelmingly common case), then leaving them disabled
is just fine by us and we can skip dirtying them, as that would just
re-disable them a second time on the next draw.
If they are actually enabled, however, we do need to flag them.
Cuts 52% of the 3DSTATE_HS packets in an Aztec Ruins trace.
Later generations support bindless for samplers, images, and buffers and
thus per-stage descriptors are not limited by the binding table size.
However, gen8 doesn't support bindless images and thus needs to report a
lower per-stage limit so that all combinations of descriptors that fit
within the advertised limits are reported as supported by
vkGetDescriptorSetLayoutSupport.
Fixes test dEQP-VK.api.maintenance3_check.descriptor_set
Fixes: 79fb0d27f3 ("anv: Implement SSBOs bindings with GPU addresses in the descriptor BO")
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Commit d1e1563bb6 added a NULL check for eglGetSyncAttribKHR,
but eglGetSyncAttrib does not do this. This patch adds the same
check to eglGetSyncAttrib.
Fixes crashes in (when exposing EGL 1.5):
dEQP-EGL.functional.fence_sync.invalid.get_invalid_value
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Cc: mesa-stable@lists.freedesktop.org
This improves a couple of things:
1. We now only update anything if the shader actually cares.
Previously, is_indexed_draw was causing us to flag dirty vertex
buffers, elements, and SGVs every time the shader switched between
indexed and non-indexed draws. This is a very common situation,
but we only need that information if the shader uses gl_BaseVertex.
We were also flagging things when switching between indirect/direct
draws as well, and now we only bother if it matters.
2. We upload new draw parameters only when necessary.
When we detect that the draw parameters have changed, we upload a
new copy, and use that. Previously we were uploading it every time
the vertex buffers were dirty (for possibly unrelated reasons) and
the shader needed that info. Tying these together also makes the
code a bit easier to follow.
In Civilization VI's benchmark, this code was flagging dirty state
many times per frame (49 average, 16 median, 614 maximum). Now it
occurs exactly once for the entire run.
This makes use of the total job size limiting feature added in the
previous patch.
The idea is to avoid an excessive build up in memory use due to the
use of both the UTIL_QUEUE_INIT_RESIZE_IF_FULL and
UTIL_QUEUE_INIT_USE_MINIMUM_PRIORITY flags.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
When both UTIL_QUEUE_INIT_RESIZE_IF_FULL and
UTIL_QUEUE_INIT_USE_MINIMUM_PRIORITY are set, we can get into a
situation where the queue never executes and grows to a huge size
due to all other threads being busy.
This is the case with the shader cache when attempting to compile a
huge number of shaders up front. If all threads are busy compiling
shaders, the cache queue's memory use can climb into the many GBs
very fast.
The use of these two flags with the shader cache is intended to
allow shaders compiled at runtime to be compiled as fast as possible.
To avoid huge memory use but still allow the queue to perform
optimally in the run time compilation case, we now add the ability
to track memory consumed by the jobs in the queue and limit it to
a hardcoded 256MB which should be more than enough.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Since we set the UTIL_QUEUE_INIT_USE_MINIMUM_PRIORITY flag this should
have little impact on low core systems. However just about all modern
CPUs currently available that run Mesa have *at least* 4 cores. For
these CPUs allowing more threads can result in the queue being
processed faster and avoid excessive memory use due to a backlog of
cache entries building up in the queue.
This change helps avoid a huge build-up of cache entries in the queue
due to using both the UTIL_QUEUE_INIT_USE_MINIMUM_PRIORITY and
UTIL_QUEUE_INIT_RESIZE_IF_FULL flags.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The current code can create functions with a width of 32, which is not
supported by our hardware. Add some code to simplify how we express
what we want and prevent such cases.
For some unknown reason, all the tests I could run seem to work even
with these unsupported MOVs.
Fixes: b0858c1cc6 "intel/fs: Add a couple of simple helper opcodes"
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
There are cases where we try to generate registers with a stride of
32, while the hardware maximum is just 16. This happens, for example,
when using 8 bit integers on SIMD32. This results in a crash because
the variable 'width' has a value of 32:
../../src/intel/compiler/brw_reg.h:550: brw_reg brw_vecn_reg(unsigned
int, brw_reg_file, unsigned int, unsigned int): Assertion `!"Invalid
register width"' failed.
This change prevents the crash and makes the tests pass.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
IMHO the code is easier to understand this way, being explicit that
we're doing exactly the same thing every time.
No functional changes.
v2: Adjust the loop breaking condition (Jason).
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
When dealing with uint16_t and uint8_t on SIMD32 we can do all the
operations using just 2 registers, so we don't hit the recursion at
the beginning of emit_scan(). Because of that, we need to actually
compute scan/reduce for channels 31:16.
v2: Still missed instructions (Jason).
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
We want this for tessellation eventually, but we can turn it on now.
Shader-db results:
total instructions in shared programs: 8612905 -> 8611387 (-0.02%)
instructions in affected programs: 164952 -> 163434 (-0.92%)
total dwords in shared programs: 11952000 -> 11950560 (-0.01%)
dwords in affected programs: 68096 -> 66656 (-2.11%)
total full in shared programs: 315019 -> 315009 (<.01%)
full in affected programs: 1642 -> 1632 (-0.61%)
total constlen in shared programs: 2463654 -> 2463654 (0.00%)
constlen in affected programs: 0 -> 0
total (ss) in shared programs: 152379 -> 152409 (0.02%)
(ss) in affected programs: 1503 -> 1533 (2.00%)
total (sy) in shared programs: 96473 -> 96525 (0.05%)
(sy) in affected programs: 654 -> 706 (7.95%)
total max_sun in shared programs: 1172454 -> 1172472 (<.01%)
max_sun in affected programs: 104 -> 122 (17.31%)
Signed-off-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Also, swap the vs and fs constructors so fs comes first.
Signed-off-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
When using xfb and rasterizing, the fragment shader may have fewer
inputs than the vertex shader outputs. We can't rely on gl_Position to
be placed at fs->total_in, but have to instead remember where we add
it in the link map and use that location.
Fixes 100+ tessellation dEQPs under
dEQP-GLES31.functional.tessellation.primitive_discard.*
dEQP-GLES31.functional.tessellation.user_defined_io.*
Reviewed-by: Eric Anholt <eric@anholt.net>
Newly added cases "stole" the previous break.
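I.e. a pattern like this (purely illustrative, not the actual spirv code): the
new cases were added between an existing case and its break.

  static int handle(int cap)
  {
     int result = 0;
     switch (cap) {
     case 1:
        result = 10;
        /* the break that used to be here ... */
     case 2:              /* ... now sits after the newly added cases, */
     case 3:              /* so case 1 falls through into them.        */
        result = 20;
        break;
     default:
        break;
     }
     return result;
  }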
Fixes: 420ad0a1a3 ("spirv: check support for SPV_KHR_float_controls capabilities")
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
If we can entirely push uniform data, we don't need a SURFACE_STATE
descriptor for pulling data. Since constant uploads are a very common
operation, and being able to push all data is also very common, we would
like to avoid the overhead in this case.
This patch defers uploading new descriptors. Instead of handling that
at iris_set_constant_buffer, we do it at iris_update_compiled_shaders,
where we can see the currently bound shader variants. If any need pull
descriptors, and descriptors are missing, we update them and flag that
the binding table also needs to be refreshed.
Improves performance in GFXBench5 gl_driver2 on an i7-6770HQ by
31.9774% +/- 1.12947% (n=15).
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
I would like for iris to be able to avoid setting up SURFACE_STATE
for UBOs in the common case where all constants are pushed.
Unfortunately, we don't know up front whether everything will be
pushed: the backend is allowed to demote pushed UBOs to pull loads
fairly late in the process. This is probably desirable though, as
we'd like the backend to be able to re-pull pushed data to break up
long live ranges in response to register pressure.
Here we simply add a "are there any pull loads at all" boolean to
prog_data, which is a bit crude but at least allows us to skip work
in the common "everything pushed" case. We could skip more work by
tracking exactly which UBO surfaces are pulled in a bitmask, but I
wanted to avoid bringing back the old mark_surface_used() mechanism.
Finer-grained tracking could allow us to skip a bit more work when
multiple UBOs are in use and /some/ are 100% pushed, but others are
accessed via pulls. However, I'm not sure how common this is and
it would save at most 4 pull descriptors, so we defer that for now.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
We now track per-stage bind history for constant and shader buffers,
shader images, and sampler views by adding an extra res->bind_stages
field to go with res->bind_history.
This lets us flag IRIS_DIRTY_CONSTANTS for only the specific stages
involved, and also skip some CPU overhead in iris_rebind_buffer.
Cuts 4% of 3DSTATE_CONSTANT_XS packets in a Shadow of Mordor trace
on Icelake.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
The underlying buffer isn't changing - so we don't need to update any
SURFACE_STATE descriptors - we just might have new constants, meaning
we need to re-emit 3DSTATE_CONSTANT_XS. On Gen9, this means we need
to update 3DSTATE_BINDING_TABLE_POINTERS_XS too, but that's now handled
by the explicit check in the previous patch.
On Gen9, this should cause us to re-emit the binding table /pointer/ on
writing to a buffer with PIPE_BIND_CONSTANT_BUFFER, rather than emitting
a whole new /table/.
On Gen8 and Gen11, this avoids binding table churn altogether.
Cuts 61% of 3DSTATE_BINDING_TABLE_POINTERS_XS packets in a Shadow of
Mordor trace on Icelake.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Right now, we usually flag both IRIS_DIRTY_{CONSTANTS,BINDINGS}_XS,
because we have SURFACE_STATE for constant buffers in case the shaders
access them via pull mode.
But this flagging is overkill in many cases. Gen8 and Gen11 don't need
it at all. Gen9 doesn't need that large of a hammer in all cases.
Just handle it explicitly so the right thing happens.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
We upload a new SURFACE_STATE for the UBO/SSBO in question, which
means that we need new binding tables as well.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Apparently we already enabled it without having support ...
Not sure if we also need to set disable_start_of_prim when the PS
has memory writes, but this mirrors radeonsi.
Doubles fillrate in my dual_quad_bench from ~16 pixels/cycles to
~32 pixels/cycle on a Raven.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
The pass assumed that "Most ALU ops produce an undefined result if any
source is undef" which is completely untrue. Due to how we lower if
statements to selects and then optimize on those selects later, we
simply cannot make that assumption. In particular this pass tried to
replace an ior of undef and true, which had been generated by
optimizing a select which itself came from flattening an if statement,
with undef, causing a miscompilation for a CTS test with radeonsi NIR.
We fix this by always doing what the non-undef path did, i.e. duplicate
the instruction twice. If there are cases where the instruction before
the loop can be folded away due to having an undef source, we should add
these to opt_undef instead.
The comment above the pass says that if the phi source from before the
loop is undef, and we can fold the instruction before the loop to undef,
then we can ignore sources of the original instruction that don't
dominate the block before the loop because we don't need them to create
the instruction before the loop. This is incorrect, because the
instruction at the bottom of the loop would get those sources from the
wrong loop iteration. The code never actually did what the comment said,
so we only have to update the comment to match what the pass actually
does. We also update the example to more closely match what most actual
loops look like after vtn and peephole_select.
There are no shader-db changes with i965, radeonsi NIR, or radv. With
anv and my vkpipeline-db there's only one change:
total instructions in shared programs: 14125290 -> 14125300 (<.01%)
instructions in affected programs: 2598 -> 2608 (0.38%)
helped: 0
HURT: 1
total cycles in shared programs: 2051473437 -> 2051473397 (<.01%)
cycles in affected programs: 36697 -> 36657 (-0.11%)
helped: 1
HURT: 0
Fixes
KHR-GL45.shader_subroutine.control_flow_and_returned_subroutine_values_used_as_subroutine_input
with radeonsi NIR.
Akin to 1a25980c46 ("egl: drop incorrect pkg-config file for
glvnd") and b01524fff0 ("meson: don't build libGLES*.so with
GLVND"), this removes a pkg-config file that shouldn't have been there in
the first place, but was needed because of that GLVND bug.
Now that the glvnd bug has been fixed, it became apparent that this gl.pc
pkg-config file had been forgotten, so let's remove it :)
Suggested-by: Matt Turner <mattst88@gmail.com>
Cc: mesa-stable@lists.freedesktop.org
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Having Python and C variables sharing a name in the same block of code
makes it a bit confusing to understand. Make it explicit that the
Python bit_size variable refers to the destination bit size.
Suggested-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
No GS copy shader if a pipeline enables NGG GS.
This fixes
dEQP-VK.pipeline.executable_properties.graphics.*geometry_stage*.
Fixes: 86864eedd2 ("radv: Implement radv_GetPipelineExecutablePropertiesKHR.")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
* Set missing STENCIL_CONFIG_EXT2 bits
* Swap stencil sides when rendering CCW
Fixes following deqp tests (which were 99% failing):
dEQP-GLES2.functional.fragment_ops.depth_stencil.*
Note: deqp tests require --deqp-gl-config-name=rgba8888d24s8ms0
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
This is needed in particular to get a recent enough version of meson in
the stretch image, but should be generally beneficial.
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Pros:
* Less fragile due to not mixing packages from stretch and buster
* No longer need to use third-party LLVM packages
* The buster image now uses GCC 8 for C++ as well (previously 6 for C++,
8 for C), allowing us to drop some hacks
Cons:
* The stretch image now only uses GCC 6 for C as well as C++
* Need separate jobs for testing old LLVM versions
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
If installing new packages would require removing previously installed
ones, this flag causes apt-get to abort with an error instead,
preventing later obscure failures due to the missing packages.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
If we want to execute several batches in parallel they need to have
their own tiler and scratchpad BOs. Let's move those objects to
panfrost_batch and allocate them on a per-batch basis.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
If we want the batch dependency tracking to work correctly we must
make sure all BOs are added to the batch->bos set early enough. Adding
FBO BOs when generating the fragment job is clearly too late. Add a
panfrost_batch_add_fbo_bos helper and call it in the clear/draw path.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This helper automates the panfrost_bo_create()+panfrost_batch_add_bo()+
panfrost_bo_unreference() sequence that's done for all per-batch BOs.
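A sketch of what such a helper looks like (only the three function names above
come from this series; the helper name, the signatures and the screen lookup
are assumptions):

  static struct panfrost_bo *
  panfrost_batch_create_bo(struct panfrost_batch *batch, size_t size,
                           uint32_t create_flags)
  {
     struct panfrost_bo *bo = panfrost_bo_create(batch->screen, size, create_flags);

     panfrost_batch_add_bo(batch, bo);  /* the batch takes its own reference */
     panfrost_bo_unreference(bo);       /* drop the creation reference */

     return bo;
  }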
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Thanks to that we avoid the recursive call into panfrost_bo_create()
and we can get rid of panfrost_bo_release() by inlining the code in
panfrost_bo_unreference().
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
panfrost_bo_unreference() should be used instead.
The only difference caused by this change is that the scratchpad,
tiler_heap and tiler_dummy BOs are now returned to the cache instead
of being freed when a context is destroyed. This is only a problem if
we care about context isolation, which apparently is not the case since
transient BOs are already returned to the per-FD cache (and all contexts
share the same address space anyway, so enforcing context isolation
is almost impossible).
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Store a screen pointer in panfrost_bo so we don't have to pass a screen
object to all functions manipulating the BO.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Right now, the BO API is spread over pan_{allocate,resource,screen}.h.
Let's move all BO related definitions to a separate header file.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
pan_drm.c was only meaningful when we were supporting 2 kernel drivers
(mali_kbase, and the drm one). Now that there's no kernel-driver
abstraction we're better off moving those functions where they belong:
* BO related functions in pan_bo.c
* fence related functions + query_gpu_version() in pan_screen.c
* submit related functions in pan_job.c
While at it, we rename the functions according to the place they're
being moved to.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
ctx is allocated with rzalloc() which takes care of zero-ing the memory
region. No need to call memset(0) on top.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
That's what we do for other per-batch BOs, and we'll soon add a helper
to automate this create_bo()+add_bo()+bo_unreference() sequence, so
let's prepare the code to ease this transition.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Some BOs are used by batches but never explicitly added to the BO set.
This is currently not a problem because we wait for the execution of
a batch to be finished before releasing a BO, but we will soon relax
this rule.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The DRM driver expects an array of u32, let's use the correct type, even
if using an int works in practice because it's still a 32-bit integer.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Looks like only HALTI2 GPUs have support for it, but that is not yet
implemented, so disable ARB_shadow for now.
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Eric Anholt <eric@anholt.net>
drmIoctl handles EAGAIN itself and actually it always returns -1 on errors.
Remove the wrong handling of its return value. Also, print a warning when
it fails.
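The resulting call-site pattern is roughly (a sketch; the wrapper name is a
placeholder, drmIoctl() and _debug_printf() are the real libdrm/util entry
points):

  #include <errno.h>
  #include <string.h>
  #include <xf86drm.h>
  #include "util/u_debug.h"

  static int virgl_ioctl(int fd, unsigned long request, void *args)
  {
     /* drmIoctl() already retries on EINTR/EAGAIN and returns -1 on failure,
      * so a simple "< 0" check plus a warning is all that's needed. */
     int ret = drmIoctl(fd, request, args);
     if (ret < 0)
        _debug_printf("%s: drmIoctl failed: %s\n", __func__, strerror(errno));
     return ret;
  }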
v2: - use _debug_printf instead of fprintf (Gurchetan Singh)
Signed-off-by: Lepton Wu <lepton@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net> (v1)
The compiler now sets the "Null Render Target" bit in the RT write
extended message descriptor, causing it to write to an implicit null
surface without us needing to set one up in the binding table.
Together with the last patch, this improves performance in Car Chase on
an Icelake 8x8 (locked to 700Mhz) by 0.0445526% +/- 0.0132736% (n=832).
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
When there are no color regions (i.e. a depth only pass), we can set
the "Null Render Target" bit in the Gen11 RT write extended message
descriptor to indicate that it should behave as if it's writing to a
null render target, without the need for a binding table entry.
This lets drivers avoid setting up that null RT binding table entry,
but more importantly means the HW doesn't actually have to bother
looking up the surface state.
Together with the next patch, this improves performance in Car Chase on
an Icelake 8x8 (locked to 700Mhz) by 0.0445526% +/- 0.0132736% (n=832).
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This adds support for
VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FLOAT_CONTROLS_PROPERTIES_KHR and
enables the Vulkan and SPIR-V extensions.
Also, notice that this includes the updates applied to the
VkPhysicalDeviceFloatControlsPropertiesKHR structure in the extension
VK_KHR_shader_float_controls v4 and Vulkan 1.1.116.
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
The remove_extra_rounding_modes() optimization will remove duplicated
rounding mode changes.
v2:
- Fix bug in the rounding mode change (Alejandro).
v3:
- Fix rounding modes.
v4:
- Updated to renamed shader info member and enum values (Andres).
v5:
- Simplify flags logic operations (Caio).
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
We need this function to emit code that sets up the control register
later with the defined execution mode for the shader. Therefore, we
emit it as the first instruction.
v2:
- Fix bug in setting the default mode mask in brw_rnd_mode_from_nir().
- Fix support for rounding modes in brw_rnd_mode_from_nir().
v3:
- Updated to renamed shader info member and enum values (Andres).
v4:
- Add actual emission as first instruction of emit_nir_code (Caio).
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Before this commit, we had only FPRoundingMode decoration (the per
instruction one) that is applied during the SPIR-V handling. In
vtn_alu we find out the rounding mode, and generate the code
accordingly that later will be used to look for the respective
nir_op_f2f16_{rtz,rtne}.
Per-instruction gets prioritized because we make them explicit
conversions (with RTZ or RTNE nir opcodes) and they will override the
default execution mode defined with float controls. However, we need
to come back to the mode defined by float controls after the execution
of the FP Rounding instruction.
Therefore, the new SHADER_OPCODE_FLOAT_CONTROL_MODE opcode will be
used to set the default rounding mode and denorms treatment in the
whole shader while the pre-existent SHADER_OPCODE_RND_MODE, will be
used as prioritized rounding mode in a per-instruction basis.
v2:
- Fix bug in defining BRW_CR0_FP_MODE_MASK.
v3:
- Update comment (Caio).
v4:
- Split the patch into the helper and the new opcode (this
one) (Caio).
v5:
- Add an explanation on the actual purpose and priority of the newly
introduced opcode in the commit log (Caio).
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
v2:
- Fix bug in defining BRW_CR0_FP_MODE_MASK.
v3:
- Update comment (Caio).
v4:
- Split the patch into the helper (this one) and the new
opcode (Caio).
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
The denorm mode is set in the control register, no need to do
something else.
v2:
- Add an assert to make sure that we realize if this assumption is
broken in the future (Caio).
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
If we have fsin or fcos trigonometric operations with constant values
as inputs, we will multiply the result by 0.99997 in
brw_nir_apply_trig_workarounds, making the result wrong.
By adjusting the rules so they do not apply to const values, we let a
later constant folding pass deal with it.
v2:
- Do not early constant fold but only apply the trig workaround for
non constants (Caio).
- Add fixes tag to commit log (Caio).
Fixes: bfd17c76c1 "i965: Port INTEL_PRECISE_TRIG=1 to NIR."
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Until now, it was using the floating point version of fmin/fmax,
instead of the double version.
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
v2:
- Replace hard coded value with DBL_MIN (Connor).
v3:
- Take into account the FLOAT_CONTROLS_DENORM_PRESERVE_FP64
flag (Caio).
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com> [v2]
According to VK_KHR_shader_float_controls:
"Denormalized values obtained via unpacking an integer into a vector
of values with smaller bit width and interpreting those values as
floating-point numbers must: be flushed to zero, unless the entry
point is declared with the code:DenormPreserve execution mode."
v2:
- Add nir_op_unpack_half_2x16_flush_to_zero opcode (Connor).
v3:
- Adapt to use the new NIR lowering framework (Andres).
v4:
- Updated to renamed shader info member and enum values (Andres).
v5:
- Simplify flags logic operations (Caio).
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com> [v2]
If FLOAT_CONTROLS_SIGNED_ZERO_INF_NAN_PRESERVE or
FLOAT_CONTROLS_DENORM_FLUSH_TO_ZERO are enabled, do not apply the
inexact optimizations so the VK_KHR_shader_float_controls execution
mode is respected.
v2:
- Do not apply inexact optimizations if SHADER_DENORM_FLUSH_TO_ZERO is
enabled (Andres).
v3:
- Updated to renamed shader info member (Andres).
v4:
- Directly access execution mode instead of dragging it by parameter (Caio).
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com> [v1]
With the arrival of VK_KHR_shader_float_controls algebraic
optimizations for float types of the form (('fop', a, b), a) become
inexact depending on the execution mode.
For example, if we have activated SHADER_DENORM_FLUSH_TO_ZERO, in case
of a denorm value for the "a" parameter, we cannot return it still as
a denorm, it needs to be flushed to zero. Therefore, we mark now all
those operations as inexact.
Suggested-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
v2:
- Move the op-code specific knowledge to nir_opcodes.py even if it
means a round trip conversion (Connor).
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
According to Vulkan spec, the new execution modes affect only
correctly rounded SPIR-V instructions, which includes fadd, fsub and
fmul.
v2:
- Fix fmul, fsub and fadd round-to-zero definitions, they should use
auxiliary functions to calculate the proper value because Mesa uses
round-to-nearest-even rounding mode by default (Connor).
v3:
- Do an actual fused multiply-add at ffma (Connor).
v4:
- Simplify fadd and fmul for bit sizes < 64 (Connor).
- Do not use double ffma for 32 bits float (Connor).
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com> [v3]
f2f16's rounding modes are already handled, and f2f64 doesn't need it
as there is no floating point type with a higher bit size than 64 for
now.
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
In order to be coherent with the pre-existent API for half floats,
this new API for double is the one meant to be used when doing double
to float conversions. It is no more than a wrapper for the softfloat.h
API but we meant to keep that one private.
v2:
- Fix bug in _mesa_double_to_float_rtz() in the inf/nan detection
using the exponent value.
v3:
- Replace custom f64 -> f32 implementations with the softfloat
one (Andres).
v4:
- Added API usage clarifying comments (Caio).
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
In order to be coherent with the pre-existent functions, this new API
is the one meant to be used when doing half float to float
conversions. It is no more than a wrapper for the softfloat.h API but
we meant to keep that one private.
v2:
- Replace custom f32 -> f16 RTZ implementation with the softfloat
one (Andres).
v3:
- Added API usage clarifying comments (Caio).
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Implemented fadd, fsub, fmul and ffma for doubles and ffma for floats,
rounding to zero, using a modified implementation from the Berkeley
SoftFloat 3e library.
Their implementation correctness has been checked with the Berkeley
TestFloat Release 3e tool for x86_64.
v2:
- Reuse util_last_bit64() in _mesa_count_leading_zeros64()
implementation (Connor).
v3:
- Add a specific ffma for floats version (Connor).
- Implement the ffma for doubles version (Andres).
- Lots of fixes in fadd, fsub and fmul (Andres).
- Improved documentation (Andres).
v4:
- Added f64 -> f32 conversion function (Andres).
- Added f32 -> f16 RTZ conversion function (Andres).
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Tested-by: Andres Gomez <agomez@igalia.com>
Acked-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
v2:
- Refactor conditions and shared function (Connor).
- Move code to nir_eval_const_opcode() (Connor).
- Don't flush to zero on fquantize2f16
From Vulkan spec, VK_KHR_shader_float_controls section:
"3) Do denorm and rounding mode controls apply to OpSpecConstantOp?
RESOLVED: Yes, except when the opcode is OpQuantizeToF16."
v3:
- Fix bit size (Connor).
- Fix execution mode on nir_loop_analyze (Connor).
v4:
- Adapt after API changes to nir_eval_const_opcode (Andres).
v5:
- Simplify constant_denorm_flush_to_zero (Caio).
v6:
- Adapt after API changes and to use the new constant
constructors (Andres).
- Replace MAYBE_UNUSED with UNUSED as the first is going
away (Andres).
v7:
- Adapt to newly added calls (Andres).
- Simplified the auxiliary to flush denorms to zero (Caio).
- Updated to renamed supported capabilities member (Andres).
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com> [v4]
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
v2:
- Added more functions.
v3:
- Simplify most of the functions (Caio).
v4:
- Updated to renamed enum values (Andres).
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com> [v2]
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com> [v3]
v2:
- Add support for rounding modes for each floating point bit size.
v3:
- Commit e68871f6a4 ("spirv: Handle constants and types before
execution modes") changed when the execution modes are handled,
which affects the result of the floating point constants when the
rounding mode is set in the execution mode. Moved the handling of
the rounding modes before we handle the constants.
v4:
- Rename vtn_decoration "literals" to "operands" (Andres).
- Simplify execution mode parsing util function (Caio).
- Extend the comment about the timing of the handling of the rounding
modes (Caio).
v5:
- Correct extension name (Caio).
- Rename shader info member (Andres).
- Rename float controls enum (Andres).
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com> [v3]
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Yes, some tests fail, but we can turn those into XFAILs at meson time.
Better to keep the things that work working than not cover them at all.
Unfortunately XPASS results will not cause the build to fail until we
update CI to meson 0.51 or newer.
Reviewed-by: Daniel Stone <daniels@collabora.com>
Since struct timespec's tv_sec member is of type time_t, adjust the
expected value to allow for the truncation which will occur with 32-bit
time_t.
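A tiny standalone sketch of the truncation being allowed for (the
values are illustrative, not taken from the test):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t expected_sec = INT64_C(0x7FFFFFFFFF);  /* does not fit in 32 bits */
    int32_t tv_sec_32 = (int32_t)expected_sec;     /* what a 32-bit time_t stores */

    printf("64-bit: %lld, truncated: %d\n", (long long)expected_sec, tv_sec_32);
    return 0;
}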
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Otherwise it never gets closed, this fixes errors seen with deqp-egl
where we end up opening 1024 files.
Fixes: 2dce0e94 ("iris: Initial commit of a new 'iris' driver for Intel Gen8+ GPUs.")
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
In case the GLSL version is 130 or higher, we've already enabled
ARB_shader_bit_encoding a bit earlier in this same function. So this
condition will always be true.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The PLBU seems to preserve scissor state between draws, and since lima doesn't
emit PLBU_CMD_SCISSORS() if scissor test is disabled, it uses state from previous draw.
Fix it by emitting PLBU_CMD_SCISSORS() for full fb if scissor test is disabled.
Signed-off-by: Icenowy Zheng <icenowy@aosc.io>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
SPIR-V 1.5 incorporated SPV_EXT_shader_viewport_index_layer, splitting
it into the two capabilities above. Just handle them, as we already
support the extension.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
We can't deref list_(first/last)_entries unless we know we have at least
one. Instead, just use our IP we've been tracking as we go to set up the
start ip, and fill in the end IP as we walk instructions.
Fixes a complaint in valgrind on
dEQP-GLES3.functional.transform_feedback.* which sometimes has an
empty main (non-END) block when the VS inputs are just directly mapped
to outputs without any ALU ops.
Reviewed-by: Rob Clark <robdclark@chromium.org>
Table 23.54 of the OpenGL 4.5 spec lists the minimum values for
GL_POINT_SIZE_RANGE as [1, 1]. So zero is not allowed (even though
arguably this could be useful for MSAA rendering, where a sub-1px
point might cover only some samples...)
This fixes the WebGL 2.0 conformance suite's state.gl-get-calls test
on Chromium on Linux, which uses desktop OpenGL. The test checks that
the minimum value of GL_ALIASED_POINT_SIZE_RANGE is 1. Unfortunately,
that query doesn't exist in desktop GL, so it checks POINT_SIZE_RANGE,
which is the anti-aliased value. There's not really anything better
for Chromium to do here, unfortunately. When running Chromium with
--api=es3, it maps it to the correct query and the test already works.
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Previously, internalformat GL_RGBA and type GL_UNSIGNED_SHORT_5_5_5_1
was promoted to RGBA8888 as the table entry with the 5551 formats
is listed below the 8888 entry, and it also doesn't have GL_RGBA as
a possible internalformat.
Using actual 5551 fixes the following dEQP-EGL test:
- dEQP-EGL.functional.image.modify.tex_rgb5_a1_tex_subimage_rgba8
Reviewed-by: Eric Anholt <eric@anholt.net>
Currently scons puts them in src/mapi/glapi, while meson puts them in
src/mapi/glapi/gen. This results in some things being compilable only by
one or the other; put them in the same place so that everyone is happy.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
It's useful for analyzing shader binaries produced by ARM mali offline
compiler which outputs files in MBS format. MBS is mali binary shader,
currently parser just extracts shader binary and ignores everything else.
Reviewed-and-tested-by: Connor Abbott <cwabbott0@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Update GL headers and xml API from upstream Khronos registry (commit
3d0c3eb). Keep `BUILDING_MESA` quirk in glext.h.
mesa/extensions: Expose EXT_EGL_sync instead of MESA_EGL_sync to reflect
Khronos request of changing this extension's scope from MESA to EXT.
EGL_EGL_sync is also the name of the extension that has been merged into
the upstream Khronos GL registry.
Remove MESA_EGL_sync spec txt from Mesa tree as it is now published as
EXT by Khronos.
v1: Remove MESA_EGL_sync spec and squash commits (Eric E)
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
This might allow the arm64 tests to start running earlier.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
v2:
* Preserve setting NIR_VALIDATE=0 for all arm64_* jobs
* Preserve setting DEQP_SKIPS=deqp-default-skips.txt for
arm64_a306_gles2 jobs
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com> # v1
Reviewed-by: Eric Anholt <eric@anholt.net>
Support for multiple inheritance was added to GitLab recently.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This allows the arm64_a306_gles2 jobs to run as soon as the meson-arm64
job has finished.
Fixes: 6f0dc087b7 "freedreno: Introduce gitlab-based CI."
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Commit f3e978db incorrectly assumed the maximum sampler slot was
equal to the number of defined samplers, which is not true
e.g. where bindings skip slots.
This fixes an assert in si_nir_load_sampler_desc() for an
Enemy Territory: Quake Wars shader. And fixes potential bugs with
incorrect bounds limiting in the same code for production builds
of mesa.
Fixes: f3e978db ("radeonsi/nir: Remove uniform variable scanning")
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
It's still disabled by default because transform feedback randomly
hangs and it seems like it's related to GDS (cf. RadeonSI).
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Otherwise the next streamout operation will overwrite GDS. This
can be improved by tracking if there is a streamout operation in
flight. Currently the driver unconditionally flushes but that
doesn't matter much as NGG streamout is disabled by default.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
NGG streamout uses GDS and we have to make sure that another
process isn't going to overwrite GDS while our shaders are busy.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Otherwise the wave IDs are probably 0 and it hangs. NGG_WAVE_ID_EN
generates wave IDs for GDS OA.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This internal option is turned off by default because NGG streamout
still hangs. It seems like it's related to GDS, as in RadeonSI.
That option will be turned on once all issues are resolved.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Documentation for pipe_context::flush states:
"NOTE: use screen->fence_reference() (or equivalent) to transfer
new fence ref to **fence, to ensure that previous fence is unref'd"
Hence we need to unref previous out_fence.
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The nir_variable.location field may be equal to -1.
That may cause copying to an invalid list-node address,
corrupting some internal fields.
This patch fixes a segfault when freeing the context due to a
corrupted ralloc_header.destructor address.
v2: copy data if var is constant (Connor Abbott)
CC: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Fixes: b6d4753568 (nir/large_constants: De-duplicate constants)
Signed-off-by: Sergii Romantsov <sergii.romantsov@globallogic.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111676
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Vulkan applications can register with the following structure :
typedef struct VkApplicationInfo {
VkStructureType sType;
const void* pNext;
const char* pApplicationName;
uint32_t applicationVersion;
const char* pEngineName;
uint32_t engineVersion;
uint32_t apiVersion;
} VkApplicationInfo;
This enables the Vulkan implementations to apply workarounds based off
matching this description.
Here we add a new parameter for matching the driconfig options with
the following :
<device driver="anv">
<application engine_name_match="MyOwnEngine.*" engine_versions="10:12,40:42">
<option name="blaaah" value="true" />
</application>
</device>
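A hedged sketch of how such a match can be evaluated with POSIX regexes
(the helper name is illustrative, not the actual driconf code; the v3
note below refers to treating REG_NOMATCH as the failure case):

#include <regex.h>
#include <stdbool.h>

static bool engine_name_matches(const char *pattern, const char *engine_name)
{
    regex_t re;
    if (regcomp(&re, pattern, REG_EXTENDED) != 0)
        return false;                       /* bad pattern: treat as no match */

    int ret = regexec(&re, engine_name, 0, NULL, 0);
    regfree(&re);
    return ret != REG_NOMATCH;              /* only REG_NOMATCH counts as a miss */
}

With that helper, engine_name_matches("MyOwnEngine.*", "MyOwnEngine 1.2")
would apply the workaround while a different engine name would not.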
v2: switch engine name match to use regexps
v3: Verify that the regexec returns REG_NOMATCH for match failure (Eric)
v4: Add missing bit that went to the following commit (Eric)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Cc: 19.2 <mesa-stable@lists.freedesktop.org>
Seen a couple flakes on this one so far. Not sure if it is a real
driver problem or not, but skip it to unblock things.
Signed-off-by: Rob Clark <robdclark@chromium.org>
It was calloc'd to 0 which is PIPE_PRIM_POINTS, which means that we
fail to notice an initial primitive of points being new, and fail at
updating the "primitive is points or lines" field.
We do not need to reset this on device loss because we're tracking
the last primitive mode sent to us on the CPU via draw_vbo, not the
last primitive mode sent to the GPU.
Fixes several tests:
- dEQP-GLES3.functional.clipping.point.wide_point_clip
- dEQP-GLES3.functional.clipping.point.wide_point_clip_viewport_center
- dEQP-GLES3.functional.clipping.point.wide_point_clip_viewport_corner
Fixes: dcfca0af7c ("iris: Set XY Clipping correctly.")
If people fix bugs without updating the expected-fails list, then we
end up with a lack of coverage of those failures in the future. Also,
some day down the line another developer ends up trying to figure out
if the bug was actually fixed or their environment is just failing to
reproduce it.
Suggested-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Acked-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
This hasn't failed for me in ~5 minutes of looping over
dEQP-GLES3.functional.fbo.msaa.*
Reviewed-by: Adam Jackson <ajax@redhat.com>
Acked-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Add a ppir dummy node for nir_ssa_undef_instr, create a reg for it and mark
it as undefined, so that regalloc can set it non-interfering to avoid
register pressure.
Signed-off-by: Andreas Baierl <ichgeh@imkreisrum.de>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: Erico Nunes <nunes.erico@gmail.com>
Building w/ AOSP, I was hitting the following error:
external/mesa3d/src/amd/Android.common.mk:95: error: missing separator.
Which was due to the changes to mesa-build-with-llvm missing
a line continuation.
Fixes: 96b592696f
Signed-off-by: John Stultz <john.stultz@linaro.org>
We are about to patch panfrost_flush() to flush all pending batches,
not only the current one. In order to do that, we need to move the
'flush single batch' code to panfrost_batch_submit().
While at it, we get rid of the existing pipelining logic, which is
currently unused and replace it by an unconditional wait at the end of
panfrost_batch_submit(). A new pipeline logic will be introduced later
on.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
panfrost_flush() is about to be reworked to flush all pending batches,
but we want the fence to block on the last one. Let's move the fence
creation logic in panfrost_flush() to prepare for this situation.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
panfrost_draw_vbo() might call the primconvert/without_prim_restart
helpers, which will enter ->draw_vbo() again. Let's delay
payloads[].offset_start initialization so we don't initialize them
twice.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
panfrost_attach_vt_xxx() functions are now passed a batch, and the
generated FB desc is kept in panfrost_batch so we can switch FBs
without forcing a flush. The postfix->framebuffer field is restored
on the next attach_vt_framebuffer() call if the batch already has an
FB desc.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
So we can emit SET_VALUE jobs for a batch that's not currently bound
to the context.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
We'll soon be able to flush a batch that's not currently bound to the
context, which means ctx->pipe_framebuffer will not necessarily be the
FBO targeted by the wallpaper draw. Let's prepare for this case and
use ctx->wallpaper_batch in panfrost_blit_wallpaper().
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
We need that if we want to upload transient buffers to a batch that's
not currently bound to the context, which in turn will be needed if we
want to relax the batch serialization we have right now (only flush
batches when we need to: on a flush request, or when one batch depends
on the result of other batches).
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Rename panfrost_is_scanout() into panfrost_batch_is_scanout(), pass it
a batch instead of a context and move the code to pan_job.c.
With this in place, we can now test if a batch is targeting a scanout
FB even if this batch is not bound to the context.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Will be replaced by something similar but using a BOs as keys instead
of resources.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This way we have all the fb_state information directly attached to a
batch and can pass only the batch to functions emitting CMDs, which is
needed if we want to be able to queue CMDs to a batch that's not
currently bound to the context.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
mir_foreach_instr_in_block_safe() is based on list_for_each_entry_safe()
which is designed to protect against removal of the current entry, but
removing the entry placed just after the current one will lead to a
use-after-free situation.
Luckily, the midgard_pair_load_store() logic guarantees that the
instruction being removed (if any) is never placed just after ins which
in turn guarantees that the hidden __next variable always points to a
valid object.
Took me a bit of time to realize that this code was safe, so I'm
suggesting to get rid of the inner mir_foreach_instr_in_block_from()
loop and rework the code so that the removed instruction is always the
current one (which is what the list_for_each_entry_safe() API was
initially designed for).
While at it, we also get rid of the unnecessary insert(ins)/remove(ins)
dance by simply moving the instruction around.
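A generic standalone illustration of the constraint (plain C, not
Midgard code): a *_safe iterator caches the next pointer up front, so
it only tolerates removal of the current entry.

#include <stdio.h>

struct node { int v; struct node *next; };

#define FOREACH_SAFE(pos, nxt, head) \
    for ((pos) = (head), (nxt) = (pos) ? (pos)->next : NULL; \
         (pos) != NULL; \
         (pos) = (nxt), (nxt) = (pos) ? (pos)->next : NULL)

int main(void)
{
    struct node n2 = { 2, NULL };
    struct node n1 = { 1, &n2 };
    struct node n0 = { 0, &n1 };
    struct node *prev = NULL, *pos, *nxt;

    FOREACH_SAFE(pos, nxt, &n0) {
        printf("visit %d\n", pos->v);
        if (pos->v == 1) {
            /* Safe: unlink the *current* entry, "nxt" was captured earlier. */
            prev->next = pos->next;
        } else {
            /* Unsafe would be unlinking/freeing pos->next here: the loop
             * would then step onto the stale "nxt" pointer. */
            prev = pos;
        }
    }
    return 0;
}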
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
list_for_each_entry() does not allow modifying the current item pointer.
Let's rework the skip-instructions logic in schedule_block() to not
break this rule.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The V3D documentation states that primitive counters are reset when
we emit Tile Binning Mode Configuration items, which we do at the start
of each draw call, however, in the actual hardware this doesn't seem to
take effect when transform feedback is not active (this doesn't happen in
the simulator). This causes a problem in the following scenario:
glBeginTransformFeedback()
glDrawArrays()
glPauseTransformFeedback()
glDrawArrays()
glResumeTransformFeedback()
glEndTransformFeedback()
The TF pause will trigger a flush of the primitive counters, which results
in a correct number of primitives up to that point. In theory, the counter
should then be reset when we execute the draw after pausing TF, but that
doesn't happen, and since TF is enabled again by the resume command before
we end recording, by the time we end the transform feedback recording we
again check the counters, but instead of reading 0, we read again the same
value we read at the time we paused, incorrectly accumulating that value
again.
In theory, we should be able to avoid this by using the other method to
reset the primitive counters: using operation 1 instead of 0 when we
flush the counts to the buffer at the time we pause, but again, this
doesn't seem to be work and we still see obsolete counts by the time we
end transform feedback.
This patch fixes the problem by not accumulating TF primitive counts
unless we know we have actually queued draw calls during transform
feedback, since that seems to effectively reset the counters. This should
also be more performant, since it saves unnecessary stalls for the
primitive counters to be updated when we know there haven't been any
new primitives drawn.
Fixes CTS tests:
dEQP-GLES3.functional.transform_feedback.*
Reviewed-by: Eric Anholt <eric@anholt.net>
This was updating the counter for the indexed draw path only, but we are
already updating the counter for all paths a bit later, so this is only
duplicating counts for indexed paths.
Reviewed-by: Eric Anholt <eric@anholt.net>
Instead of running it with the Wayland platform, which introduces
unwanted dependencies and complexity.
Makes tests run 30% faster, as well.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
streamout_buffers is assigned after that function, so the previous
fix was completely wrong. This probably fixes something when streamout
buffers and push constants are used/inlined in the same shader.
Fixes: 378e2d2414 ("radv: fix computing number of user SGPRs for streamout buffers")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
_mesa_texstore_z32f_x24s8 calculates the source rowStride in 64-bit
units, which yields an incorrect offset when the width of the source
image is odd. Change the src pointer to int32_t * since the source
image format is GL_FLOAT, which is 32 bits per pixel.
Reviewed by Ilia Mirkin
Signed-off-by: Jiadong Zhu <Jiadong.Zhu@amd.com>
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
The AnTuTu "garden" benchmark overflows the fixed size constbuffer
stateobject, so let's be more clever and calculate (a potentially
slightly pessimistic) actual size.
Signed-off-by: Rob Clark <robdclark@chromium.org>
fd6_blitter.c:724:31: warning: passing argument 1 of ‘fd_resource_level_linear’ discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Since freedreno's kernel and GPU reset seem to be totally solid, we
don't need to have the complexity of the LAVA setup that panfrost has.
Instead, we can register some boards as shared gitlab runners and have
the jobs run out of a docker container just like we do for llvmpipe.
Just make sure that the DRI device node is passed through to the
containers in the gitlab config ('devices = ["/dev/dri"]' under
runners.docker).
If a runner fails (networking dies, kernel panic, etc.) it'll take out
one build but the rest can keep going since gitlab-runner is what
pulls jobs. Since the runner pulls jobs, it also means that they can
live behind firewalls instead of needing some public address to be
accessed by gitlab.fd.o.
For now, enable it just on db410c (A307) and cheza (A630) as those are
the hardware that I have plenty of. A307 is only testing GLES2 since
running all of GLES3 takes too long for the number of boards I've
brought up.
Acked-by: Rob Clark <robdclark@chromium.org>
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
Sometimes you just want confirmation that dEQP really picked up the
driver build you thought it did. This is not as good as one might like,
because git isn't present in the cross-build image.
Acked-by: Rob Clark <robdclark@chromium.org>
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
A handful of tests on freedreno have been close to the watchdog
timeout, and now sporadically fail since range analysis has slowed
down the compiler for them.
Acked-by: Rob Clark <robdclark@chromium.org>
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
This brings back the fallback previously present in
st_nir_lookup_parameter_index(): if there's no parameter associated
with the variable, use a parameter from a variable with the same
prefix.
We'll have to sort out something for SPIR-V, but in the meantime let's
fix GLSL.
Fixes: b6384e57f5 ("mesa/st: Lookup parameters without using names")
Reviewed-by: Eric Anholt <eric@anholt.net>
Tested-by: Eric Anholt <eric@anholt.net>
As introduced in "v3d: flag dirty state when binding new sampler states",
we need to add support for compute states. New flags VC5_DIRTY_COMPTEX and
VC5_DIRTY_UNCOMPILED_CS are introduced.
Reaching 33 flags in the dirty field forces us to change its type to
uint64_t. Flags are reordered and contiguous empty bits are available
for future pipeline stages.
v2: Update flag conditions to compile the CS shader. (Eric Anholt)
Now dirty flags use uint64_t and flags are reordered.
Added the VC5_DIRTY_UNCOMPILED_CS flag.
Reviewed-by: Eric Anholt <eric@anholt.net>
Translating TGSI_INTERPOLATE_COLOR as INTERP_MODE_SMOOTH made it
impossible for drivers to have flatshaded color inputs.
Translate it to INTERP_MODE_NONE which drivers interpret as
smooth or flat depending on flatshading state.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111467
Fixes: 770faf54 ("tgsi_to_nir: Improve interpolation modes.")
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
The patch adds support for HAL_PIXEL_FORMAT_RGBA_1010102 on
Android platform.
Fixes android.media.cts.DecoderTest#testVp9HdrStaticMetadata
which failed in egl due to "Unsupported native buffer format 0x2b"
on Android.
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Signed-off-by: Chenglei Ren <chenglei.ren@intel.com>
We have only two defines that aren't from DRM_FORMAT_*: SARGB and
SABGR. Keep only those as __DRI_IMAGE_FOURCC and garbage collect the
rest.
While this header is also used from the X server, the X server doesn't
use any __DRI_IMAGE enums.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Taken from drm-misc-next 268de6530aa1 ("drm: mst: Fix query_payload
ack reply struct")
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Previously, ReadPixels, PBO upload/download, and clears would call
cso_save_state with CSO_PAUSE_QUERIES, causing cso_context to call
pipe->set_active_query_state() twice for each operation. This can
potentially cause driver work to enable/disable statistics counters.
But often, there are no queries happening which need to be paused.
By keeping a simple tally of active queries, we can skip this work.
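A hedged sketch of the idea (field and function names are illustrative,
not the actual cso_context layout):

#include <stdbool.h>

struct query_tally {
    unsigned num_active_queries;        /* bumped on begin, dropped on end */
};

static void on_begin_query(struct query_tally *t) { t->num_active_queries++; }
static void on_end_query(struct query_tally *t)   { t->num_active_queries--; }

static void pause_queries(struct query_tally *t,
                          void (*set_active_query_state)(bool active))
{
    if (t->num_active_queries == 0)
        return;                         /* nothing to pause: skip the driver work */
    set_active_query_state(false);
}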
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Initial benchmarking didn't show any performance benefits. But it might eventually.
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
For example, the surfaceless platform only supports pbuffers. If the
driver supports MSAA, we would still create a config, but it would have
no supported surface types. That's meaningless, so don't do it.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
To go any further than this would be to break the current version of
Android.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
[ Michel Dänzer: Dropped jessie line from debian-install.sh again ]
Upgrading to a newer g++ causes older LLVM/clang packages to be
removed.
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Something seems to have changed in Debian buster causing installation
of the other foreign packages to fail without this.
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
IIRC, designated initializers are not legal C++.
Fixes the MSVC build.
Fixes: 83fd1e58 ("glsl/nir: Add and use a gl_nir_link() function")
Reviewed-by: Neha Bhende <bhenden@vmware.com>
This is unsupported by meson and may become a hard error in the future.
Fixes: 5adfc8602c
("lima/ppir: move sin/cos input scaling into NIR")
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Looks like .out_sync wasn't set in lima_submit_start(); as a result,
the submit completion fence was never signalled.
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
This is a pipe format, not a boolean.
Fixes: 5849e0612c ("gallium/auxiliary: Add util_format_get_depth_only() helper.")
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Fixes dEQP-GLES3.functional.texture.specification.texstorage3d.size.3d_2x2x2_2_levels
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
If the lowest (largest) mipmap level is too small to tile, then don't
bother pretending.
Note that this requires initializing pipe->screen before
fd_resource_level_linear() is called.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
This will also "unlock" OpenGL 4.6 for Iris!
v2: Also enable PIPE_CAP_GL_SPIRV_VARIABLE_POINTERS.
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com> [v1]
Perform all the NIR linking steps in order. Change iris and i965 to
use it. Suggested by Alejandro.
v2: Add gl_nir_linker_options struct.
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com> [v1]
The PIPE_CAP_GL_SPIRV capability enables ARB_gl_spirv and
ARB_spirv_extensions, and will make sure the corresponding SPIR-V
capabilities and extensions lists are initialized.
The additional PIPE_CAP_GL_SPIRV_VARIABLE_POINTERS capability enables
the support for Variable Pointers in SPIR-V shaders. This depends on
the driver and is not mandatory for ARB_gl_spirv support.
v2: Add a PIPE_CAP for Variable Pointers. (Marek)
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com> [v1]
There's no such case: if we load prog->nir from the shader cache, we
shouldn't hit this path.
Suggested-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
The SPIR-V codepath uses NIR linking, so we have to preprocess after
the linking steps, which makes things slightly different than GLSL.
To make more clear when the preprocess is happening, I've ended up
inlining st_nir_get_mesa_program() into its caller.
The goal was to make both GLSL and SPIR-V to use the same preprocess
function, the exceptions are:
- SPIR-V codepath don't support NIR state slots yet;
- GLSL lowers shared memory early, so we don't do the deref lowering
for those.
For now I didn't bother to rename other functions and files (now that
many of them apply to both GLSL and SPIR-V), but we should do this in
further patches.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Refactor to split the glsl_to_nir conversion from the preprocessing
NIR passes into separate functions, so we can use them in SPIR-V.
Unlike in GLSL, there we'll need to perform a few passes with the NIR
linker before doing the individual preprocess calls.
No behavior should change with this patch.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Use the new MainUniformStorageIndex field in Parameter instead. It
was added so we could match those in the SPIR-V case, where names are
optional.
v2: Use MainUniformStorageIndex for all cases.
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com> [v1]
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Use the new UniformStorageIndex field in Parameter instead. This
mechanism was added so we could match those in the SPIR-V case, where
names are optional.
v2: Use UniformStorageIndex for all cases. (Timothy)
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
When creating Parameters, fill in the associated uniform storage
indices, like it is done with the NIR linker used for SPIR-V. This
will allow later code to not rely on names (which would never work for
SPIR-V where names are optional).
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
The parameter lists were not being created nor filled since i965
doesn't use them. In Gallium they are used for uniform handling, so
add a way to fill them.
The gl_uniform_storage struct got two new fields that let us go
- from a Parameter to the matching UniformStorage and,
- from the variable to the *first* UniformStorage
without relying on names -- since they are optional for ARB_gl_spirv.
Later patches will make use of them.
v2: Do not fill parameters for i965. (Timothy)
Use uint32_t for the new attributes. (Marek)
v3: Serialize the new fields. (Timothy)
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
The gl_register_file doesn't need 16 bits, so shorten it and use the
extra room for 'Padded' (also mark it as a single bit). This shrinks
the struct size from 32 bytes to 24 bytes.
See also 4794fbc86e ("mesa: reduce the size of gl_program_parameter")
that shrank it from 40 to 24, and later 7536af670b ("glsl: fix shader
cache for packed param list") that added `Padded`.
v2: Use just 5 bits for gl_register_file. (Timothy)
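A rough sketch of the packing (field names and widths other than the
5-bit register file are illustrative, not the exact Mesa layout):

#include <stdint.h>

struct gl_program_parameter_sketch {
    const char *Name;
    unsigned Type:5;               /* gl_register_file value: 5 bits is enough */
    unsigned Size:16;
    unsigned Padded:1;             /* single bit, shares the word with Type/Size */
    uint32_t UniformStorageIndex;
    uint32_t MainUniformStorageIndex;
};

On a 64-bit build a layout like this comes out to 24 bytes (an 8-byte
pointer, one 32-bit bitfield word and two 32-bit indices, padded to
pointer alignment), which is one way the figure above can fall out.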
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Every uniform that has a "gl_" name also has some state slots. So
use the state_slots like we did in 57b6184931 ("i965: account for NIR
uniforms without name").
This removes the dependency on names, which are optional when using
ARB_gl_spirv.
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
Don't use the UNMAPPED_UNIFORM_LOC (-1) to set the unsigned
max_uniform_location. Those unmapped uniforms don't have to be
accounted at this point.
Fixes: 7a9e5cdfbb ("nir/linker: Add gl_nir_link_uniforms()")
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
This mirrors the haiku build which uses a platform.
v2: - Fix some rebase problems
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
v4: - Don't wrap a single file in a list to match mesa style
- Use null_dep instead of empty list
Reviewed-by: Eric Anholt <eric@anholt.net> (v3)
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
v4: - Don't run checks on Windows that will always fail
Reviewed-by: Eric Anholt <eric@anholt.net> (v3)
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Which will allow meson to build a shared glapi build with mingw.
v2: - Add symbol to symbol check test
Reviewed-by: Eric Anholt <eric@anholt.net> (v1)
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
It doesn't compile due to undefined symbols, which are in
libglapi_static, so I don't understand the problem.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Currently the parser for s-expressions assumes that newlines will be \n,
resulting in incorrect parsing on Windows, where the newline is \r\n.
This patch just adds \r? to the regular expression used to parse the
s-expressions, which fixes at least 1 test on Windows.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Since the system value refactor, we've accidentally only been setting
cbuf->buffer_size in the UBO case, and not in the uploaded-constants
case. We use cbuf->buffer_size to fill out the SURFACE_STATE entry,
so it needs to be initialized in both cases.
Fixes: 3b6d787e40 ("iris: move sysvals to their own constant buffer")
This fixes some interactions when NGG GS is enabled. It fixes:
- dEQP-VK.clipping.user_defined.clip_cull_distance_dynamic_index.*geom*
- dEQP-VK.tessellation.geometry_interaction.passthrough.*
For some reason, using the computed ESGS ring size randomly hangs
with CTS. For now, just use the maximum LDS size for ESGS.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This shouldn't be in NIR->LLVM because ACO also needs the shader
info. This will also help for computing some NGG values that are
necessary for declaring LDS symbols.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
ac_surface computes it for amdgpu.
radeon_drm_surface computes it for radeon.
Acked-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
The VBO module maps a buffer with GL_MAP_FLUSH_EXPLICIT, and keeps
appending data, and calling glFlushMappedBufferRange(). We were
invalidating the VF cache each time it flushed a new range, which
results in a ton of VF flushes.
If the contents of the destination in the target range are undefined
(never even possibly written), this patch makes us assume that it's
likely not in the cache and so cache invalidations are required. If
the destination range is defined, we continue cache flushing as we may
need to expunge stale data.
This eliminates 88% of the VF cache invalidates on Manhattan 3.0.
Improves performance in Manhattan 3.0 on my Icelake 8x8 with the GPU
frequency locked to 700Mhz by 0.376724% +/- 0.0989183% (n=10).
This cuts roughly 85% of the 3DSTATE_SAMPLER_STATE_POINTERS_PS calls in
the J2DBench images test. For some reason, the state tracker is calling
bind_sampler_state with the same sampler state in a bunch of cases.
The line stipple pattern and factor only matter if line stippling is
actually enabled. Otherwise, we can safely ignore it.
PBO upload may give us zero for line stipple information, while normal
drawing tends to give us an actual stipple pattern such as 0xffff. This
was causing us to flag IRIS_DIRTY_LINE_STIPPLE way too often, leading to
useless 3DSTATE_LINE_STIPPLE commands, which are non-pipelined and thus
very expensive.
Improves performance in Manhattan 3.0 on Skylake GT4e by
0.149261% +/- 0.0380796% (n=210). On an Icelake 8x8 with the GPU
frequency locked at 700Mhz, improves by 0.423756% +/- 0.222843% (n=3).
The entire point of schedule_first is that the node has to be scheduled
as soon as possible without any moves because it doesn't produce a
proper floating-point value, or its value changes depending on where you
read it. We were still introducing a move for preexp2 in some cases
though, even if it got scheduled as soon as possible, which broke some
exp() tests. Fix that.
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Tested-by: Vasily Khoruzhick <anarsoul@gmail.com>
The whole point of schedule_first nodes is that they need to be
scheduled as soon as possible, so if a schedule_first node is the
successor in a fake dependency that prevents it from being scheduled
after its parent, that can cause problems. We need to add these fake
dependencies to the parent as well, and we need to guarantee that the
pre-RA scheduler puts schedule_first nodes right before their parents in
order to prevent this from adding cycles to the dependency graph.
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Tested-by: Vasily Khoruzhick <anarsoul@gmail.com>
The idea was to make sure schedule_first nodes were always first in the
ready list. I made sure they were inserted first, but not that other
nodes wouldn't later be scheduled ahead of them. Fixes
spec@glsl-1.10@execution@built-in-functions@vs-exp-float and probably
others.
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Tested-by: Vasily Khoruzhick <anarsoul@gmail.com>
The point of the function is to avoid creating a complex move which is
used by certain slots in the next instruction, but unscheduled
successors will never be in the next instruction. Found while debugging
a crash that the previous commit fixed.
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Tested-by: Vasily Khoruzhick <anarsoul@gmail.com>
The scheduler assumes that load nodes are always duplicated so that they
can always be scheduled eventually and therefore they never need to be
spilled. But some lowerings were running after the pre-RA scheduler,
whereas duplication has to happen before then since it's needed for the
scheduler to do a better job reducing register pressure. This meant
that lowerings were introducing multiple uses of a load instruction,
which broke the scheduler's expectation and resulted in infinite loops
in situations where the only nodes available to spill were load nodes.
Spilling load nodes would be silly, so we want to fix the lowerings
rather than the scheduler. Just do all lowerings before the pre-RA
scheduler, which also helps with reducing pressure since the scheduler
can more accurately compute the pressure.
Fixes lima/mesa#104.
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Tested-by: Vasily Khoruzhick <anarsoul@gmail.com>
Change needed to fix the following building error:
In file included from external/mesa/src/intel/vulkan/anv_device.c:43:
external/mesa/src/util/xmlpool.h:115:10: fatal error: 'xmlpool/options.h' file not found
^~~~~~~~~~~~~~~~~~~
1 error generated.
Fixes: 4dcb1ff ("anv: add support for driconf")
Signed-off-by: Mauro Rossi <issor.oruam@gmail.com>
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
So we can move all the BO logic into this file instead of having it
spread over pan_resource.c, pan_drm.c and pan_bo_cache.c.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
The last users have been converted to use plain BOs. Let's get rid of
this abstraction. We can always consider adding it back if we need it
at some point.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Some fields in panfrost_context are unused (probably leftovers from
previous refactor). Let's get rid of them.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
ctx->{scratchpad,tiler_heap,tiler_dummy} are allocated using
panfrost_drm_allocate_slab() but they never use any of the SLAB-based
allocation logic. Let's convert those fields to plain BOs.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Right now, the transient memory allocator implements its own BO caching
mechanism, which is not really needed since we already have a generic
BO cache. Let's simplify things a bit.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
What we currently call a job is actually a batch containing several jobs
all attached to a rendering operation targeting a specific FBO.
Let's rename structs, functions, variables and fields to reflect this
fact.
Suggested-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This commit follows OES_EGL_sync to universally enable the use of EGL
sync objects with desktop OpenGL contexts.
Reviewed-by: Daniel Stone <daniels@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The dead_cf pass calls into the CF manipulation helpers which attempt to
keep NIR's SSA form sane. However, when the only break is removed from
a loop, dominance gets messed up anyway because the CF SSA clean-up code
only looks at phis and doesn't consider the case of code becoming
unreachable. One solution to this would be to put the loop into LCSSA
form before we modify any of its contents. Another (and the approach
taken by this pass) is to just run the repair_ssa pass afterwards
because the CF manipulation helpers are smart enough to keep all the
use/def stuff sane; they just don't always preserve dominance
properties.
While we're here, we clean up some bogus indentation.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111405
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111069
Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
NIR currently assumes that unreachable blocks are trivially dominated by
everything. However, when considering well-formed SSA, there is no path
from any block to an unreachable block. Therefore, we can break any
use-def chains where the use is in an unreachable block. This removes
any dependencies on code created by uses in unreachable blocks and lets
DCE do a better job of cleaning it up.
Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
We already bail and don't split the vars but we were passing a NULL to
_mesa_hash_table_search which is not allowed.
Fixes: f1cb3348f1 "nir/split_vars: Properly bail in the presence of ..."
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
In the case where the stencil clear is nicely aligned, we can clear
stencil much more efficiently by mapping it as a wide format (say
RGBA32_UINT) and blasting out the stencil clear value with a repclear.
On Unigine Heaven, this makes one stencil clear go from non-trivial to
unnoticeable when looking at per-draw timings.
In order for this change to work properly, ANV needs to do a bit more
flushing around depth and stencil clears. i965 and iris already have
the cache tracking logic to handle this so no changes are required
there.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This isn't known to fix any current bugs but it does prevent a
regression in a subsequent commit.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Fixes: 02bc4aabb48 ('nir/lower_io_to_vector: allow FS outputs to be vectorized')
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
u_endian.h needs to be included, otherwise PIPE_ARCH_BIG_ENDIAN might not
be defined on big-endian architectures and the endian conversion macros
will be incorrect.
I don't think anything is broken because of this, I just noticed this when
looking at the file.
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This bit redirects the state cache from the unified/RO sections of the
L3 cache to the "CS command buffer" section of the cache, which would
be set up via TCCNTLREG. The documentation says:
"Additionaly, this redirection should be enabled only if there is a
non-zero allocation for the CS command buffer section."
We don't allocate any cache to the CS command buffer section, so
enabling this redirection effectively disabled the state cache.
The Windows driver only sets up that section when using POSH, which
we do not currently use. So, leave it unallocated and disable the
redirection to get a functional state cache again.
Improves performance in Civilization VI by 18%, Manhattan 3.0 by 6%,
and Car Chase by 2%.
Jason pointed out that the caches likely refer to offsets from dynamic
and surface state base addresses, so when we change those, we need to
invalidate the caches.
Comment borrowed from src/intel/vulkan/genX_cmd_buffer.c.
The driver can't determine PIPE_QUERY_PRIMITIVES_GENERATED or
PIPE_QUERY_PRIMITIVES_EMITTED once we support geometry or
tessellation, since these stages add primitives at runtime. Use the
WRITE_PRIMITIVE_COUNTS event to write back the primitive counts and
implement a hw query for this.
Reviewed-by: Rob Clark <robdclark@gmail.com>
The GPU writes out streamout offsets as it goes to the FLUSH_BASE
pointer. We use that value with CP_MEM_TO_REG when appending to the
stream so that we don't have to track the offsets with the CPU in the
driver. This ensures that streamout continues to work once we enable
geometry and tessellation shader stages that add geometry.
Reviewed-by: Rob Clark <robdclark@gmail.com>
Should fix some issues we're seeing. And use REALLOC instead of realloc.
Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
This has lower_io_to_vector try to turn variables into arrays of 4-sized
vectors when possible and fall back to the old approach when that isn't
possible.
This is so that lower_io_to_vector can guarantee that only one variable is
used for each fragment shader output.
v2: handle dual-source blending
v3: don't try to merge structs and non-32-bit types in get_flat_type()
v3: fix per-vertex inputs
v3: fix and cleanup location advancement in get_flat_type() and its
calling code
v4: prioritize the original mode over the flat mode
v4: don't create flat variables to merge only one variable
v5: don't skip an entire slot when encountering structs in the old mode
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
It shouldn't matter much because output varyings should have been
compacted during NIR shader linking but it mirrors what the driver
does when emitting NGG GS vertex parameters.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
If the fragment shader needs the layer index, we have to allocate
one more dword in the NGG GS storage. Found by inspection. This
doesn't fix anything known.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Sometimes LAVA jobs will timeout due to transient issues, and the Gitlab
job will fail in that case. Increase the timeouts to reduce the
likeliness of that happening and reduce false positives.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
So repositories don't need to be specially configured with a token to
access LAVA, store this token in a bind volume for a special runner.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
For loops whose condition is false on the first iteration, the
iteration count was incorrectly calculated under the assumption
that the loop's condition is true until it becomes false, meaning
it is true at least once.
Now such loops are reported as having 0 iterations.
Similar to the fix e71fc7f2 done in NIR.
Fixes tests/shaders/glsl-fs-loop-while-false-02.shader_test
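A trivial example of the pattern (illustrative only): the condition is
already false on entry, so the body never runs and the analysis must
report 0 iterations rather than assuming at least one.

void example(void)
{
    int i = 0;
    while (i > 0) {   /* false on the very first check: 0 iterations */
        i--;
    }
}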
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
This commit removes the GLSL dependency in TTN by manually recording
the textures used and calling nir_lower_samplers
instead of its GL counterpart.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Lowering samplers is needed to produce NIR that can actually be
consumed by some gallium drivers, so it doesn't make sense to
keep it only in the GLSL code.
This commit introduces nir_lower_samplers to compiler/nir,
while maintains the GL-specific function too.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
This fixes a memory leak in the flush code:
Direct leak of 128 byte(s) in 1 object(s) allocated from:
#0 in __interceptor_realloc .../gcc-8.3.0/libsanitizer/asan/asan_malloc_linux.cc:105
#1 in si_buffer_do_flush_region src/gallium/drivers/radeonsi/si_buffer.c:573
#2 in si_buffer_flush_region src/gallium/drivers/radeonsi/si_buffer.c:608
#3 in si_buffer_flush_region src/gallium/drivers/radeonsi/si_buffer.c:597
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This patch partially reverts 20294dc ("mesa: Enable asm unconditionally, ...")
Android makefile build logic needs to disable assembler optimization
in 32-bit builds to avoid text relocations for the libglapi.so shared library.
Fixes the following build error with Android x86 32bit target:
[ 0% 4/477] target SharedLib: libglapi (out/target/product/x86/obj/SHARED_LIBRARIES/libglapi_intermediates/LINKED/libglapi.so)
FAILED: out/target/product/x86/obj/SHARED_LIBRARIES/libglapi_intermediates/LINKED/libglapi.so
...
prebuilts/gcc/linux-x86/x86/x86_64-linux-android-4.9/x86_64-linux-android/bin/ld: warning: shared library text segment is not shareable
prebuilts/gcc/linux-x86/x86/x86_64-linux-android-4.9/x86_64-linux-android/bin/ld: error: treating warnings as errors
clang-6.0: error: linker command failed with exit code 1 (use -v to see invocation)
Fixes: 20294dc ("mesa: Enable asm unconditionally, now that gen_matypes is gone.")
Signed-off-by: Mauro Rossi <issor.oruam@gmail.com>
Acked-by: Eric Engestrom <eric@engestrom.ch>
The codegen handles it and it adds the correct casts. This fixes
a bunch of LLVM validation errors when enabling Wave32 for compute.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Load a 32-bit value then convert to 1-bit. Convert 1-bit to 32-bit
value, then Store it.
These cases started to appear when we changed Anvil to use derefs for
shared memory.
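A conceptual C sketch of the two lowerings (the exact 32-bit encoding
of "true" used in memory is an assumption here, not taken from the
patch):

#include <stdbool.h>
#include <stdint.h>

static bool load_bool_from_shared(const uint32_t *word)
{
    return *word != 0;             /* 32-bit load, then convert to 1-bit */
}

static void store_bool_to_shared(uint32_t *word, bool value)
{
    *word = value ? 1u : 0u;       /* convert 1-bit to 32-bit, then store */
}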
v2: Use `bit_size` in a couple of places we were missing. (Jason)
Reassign `value` instead of `src[0]`. (Jason)
Fixes: 024a46a407 ("anv: use derefs for shared memory access")
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This reverts commit c0504569ea. Now that
we're doing interpolation lowering in NIR, we can continue to stride the
FS input registers directly in the brw_fs_nir code like we did before.
This fixes SIMD32 fragment shaders which broke because lower_simd_width
depended on the 0 stride to split PLN instructions correctly.
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
This commit does two things. First, it simplifies the way we compute
the FB write group bit. There's no reason to use a ternary because
inst->group / 16 can only be 0 or 1. Second, it fixes an order-of-
operations bug where the ternary wasn't selecting between (1 << 11) and
0 but between (1 << 11) and 0 | brw_dp_write_desc(...).
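A standalone illustration of that order-of-operations trap (the values
are made up): "?:" binds looser than "|", so the OR lands inside the
else branch instead of applying to the whole ternary result.

#include <stdio.h>

int main(void)
{
    unsigned desc = 0x100, in_second_half = 1, group_bit = 1u << 11;

    unsigned buggy = in_second_half ? group_bit : 0 | desc;   /* desc lost when the ternary is taken */
    unsigned fixed = (in_second_half ? group_bit : 0) | desc; /* desc always included */

    printf("buggy: 0x%x, fixed: 0x%x\n", buggy, fixed);
    return 0;
}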
Fixes: 0d9648416 "intel/compiler: Use generic SEND for Gen7+ FB writes"
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Utgard PP is vec4 architecture, so lowering phis to scalars
increases instruction count and potentially interferes with
spilling.
Tested-by: Andreas Baierl <ichgeh@imkreisrum.de>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
For render formats, update fd2_pipe2color to only work with HW supported
render formats, and remove the format whitelist is_format_supported. This
patch enables float render formats (which work).
For vertex/texture formats, use a generic function which translates using
the bitsize of the channels. Since we fake support for some vertex formats,
check for these in is_format_supported to avoid enabling them as sampler
formats.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Rob Clark <robdclark@chromium.org>
Use fd_gmem_restore_format() to avoid trying to use unsupported Z24S8/Z16
render formats for gmem restore.
Also apply this change to gmem2mem so it doesn't depend on fd2_pipe2color
working with depth formats.
gmem2mem/mem2gmem also don't need to use the swap/swizzle, since dst/src
formats are the same.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Rob Clark <robdclark@chromium.org>
Fixes failures in the following deqp tests:
dEQP-GLES2.functional.polygon_offset.*
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Fixes failures in the following deqp tests:
dEQP-GLES2.functional.fragment_ops.*src_alpha_saturate*
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Fixes the following deqp test:
dEQP-GLES2.functional.shaders.builtin_variable.pointcoord
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Some instructions generated by int/bool float lowering need to be lowered
by opt_algebraic.
Fixes: 43dbd7d6
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Utgard PP has vector fcsel operation, but its condition is scalar. Add
filtering callback that checks whether {b,f}csel condition is not scalar
to lower {b,f}csel to scalar only in this case.
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Set of opcodes doesn't have enough flexibility in certain cases. E.g.
Utgard PP has vector conditional select operation, but condition is always
scalar. Lowering all the vector selects to scalar increases instruction
number, so we need a way to filter only those ops that can't be handled
in hardware.
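A rough sketch of such a filter callback (the exact NIR entry point and
helper signatures are assumptions here, not copied from the patch):
request scalarization only when the select condition is itself a vector.

#include "nir.h"

static bool select_with_vector_condition(const nir_instr *instr, const void *data)
{
    if (instr->type != nir_instr_type_alu)
        return false;

    const nir_alu_instr *alu = nir_instr_as_alu((nir_instr *)instr);
    if (alu->op != nir_op_bcsel && alu->op != nir_op_fcsel)
        return false;

    /* src[0] is the condition: keep the op vector when it is already scalar. */
    return nir_src_num_components(alu->src[0].src) > 1;
}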
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
This appears to work fine (with the additional constraint of keeping the
indirect load in the same block in which a0.x was loaded).
We can probably lift this restriction on earlier gens after testing.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Need to use ir3_instr_set_address(), otherwise the instruction might not
get added to the indirects table. This becomes a problem when we turn
on copy propagation for relative accesses, as check_instr() in the sched
pass won't realize there is an indirect consumer of address register
load that is ready to be scheduled.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
An instruction can reference only a single address register value.
Add an assert to catch bugs.
Also, the address value should be local to the same block as the
instruction.
(The one spot where changing the instruction address is actually legit
needs to clear the address first.)
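A simplified sketch of the helper with these assertions (the indirects
bookkeeping is elided; not the exact ir3 code):
void
ir3_instr_set_address(struct ir3_instruction *instr,
                      struct ir3_instruction *addr)
{
   if (instr->address == addr)
      return;
   /* Only a single address register value may be referenced. */
   assert(!instr->address);
   /* The address value must be local to the same block. */
   assert(instr->block == addr->block);
   instr->address = addr;
   /* ... record instr in the shader's indirects table ... */
}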
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
After the next patch enabling copy propagation for relative sources,
we'll need to dereference the n'th src in valid_flags(), so we actually
need to swap the sources before calling valid_flags().
But the logic was already a bit cumbersome, so move it into a helper
function.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
The live_values and use_count were not being properly updated. This
starts triggering problems with the next patch, where we allow copy
propagation for RELATIV access.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Move the constant part of the indirect offset into nir intrinsic base.
When we have multiple indirect accesses with different constant offsets,
this lets other opt passes clean up things to use a single address
register value.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Now that spilling ops can be inserted into existing instructions, it
makes sense to increase the cost of spilling registers that would cause
the creation of a new instruction.
Experimental results showed that penalizing too much for this caused
worse results; however, it is beneficial as a tiebreaker between
registers with the same number of components.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Avoid creating unnecessary instructions for the load/store temp nodes
when not required, to further reduce register pressure.
The store_temp operation seems to be unable to do any swizzling.
At least the offline shader seems to never output instructions accessing
swizzled components, and attempting to output that in ppir results in
errors. So, force spilled registers to allocate a full vec4 register.
This seems to be the optimal way as it is possible to always keep stores
and temps in a single instruction that can be pipelined.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
One ssa created in the spilling code in ppir_update_spilled_src was not
properly being marked 'spilled', which made it a candidate for future
spilling attempts.
Since it was being inserted by the spilling code itself, let's mark it
unspillable to avoid an infinite spilling loop.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Shaders must not attempt to write to the register files in the last
three instructions, but that doesn't include the magic registers:
nop ; nop ; thrsw; ldtmu.- *** ERROR ***
nop ; nop
nop ; nop
v2: Simplify validation rules. (Eric Anholt)
v3: Adjust validation even more. (Eric Anholt)
Reviewed-by: Eric Anholt <eric@anholt.net>
For radeonsi, we will prefer the NIR pass as it'll generate better code
(some index calculation and a single load vs. a load, then index
calculation, then another load) and oftentimes NIR optimization can kick
in and make all the access indices constant.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
This prevents regressions when disabling indirect lowering. Sometimes
the only use of an input array was copying it to the array created by
nir_lower_io_to_temporaries, and without lowering indirects we wouldn't
have eliminated the temporary array until after linking, which was too
late to remove unused code in the producer.
No shader-db changes with radeonsi NIR.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Setup a constant global variable that LLVM will stick in a .rodata
section and generate PC-relative loads for.
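Roughly, with the LLVM C API this looks like the following sketch (ctx,
module, elems and num_elems are assumed to exist; not the exact radeonsi
code):
LLVMValueRef init = LLVMConstArray(LLVMInt32TypeInContext(ctx), elems, num_elems);
LLVMValueRef gv = LLVMAddGlobal(module, LLVMTypeOf(init), "const_data");
LLVMSetInitializer(gv, init);
LLVMSetGlobalConstant(gv, 1);             /* lands in .rodata */
LLVMSetLinkage(gv, LLVMPrivateLinkage);   /* allows PC-relative addressing */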
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
We usually use these counts as a simple way to figure out if a change
reduces the number of instructions or shrinks an instruction. However,
since .rodata sections aren't executed, we shouldn't be counting their
size for this analysis. Make the linker return the total executable
size, and use it to report the more useful size in both drivers.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Removing GL_FRAMEBUFFER_FLIP_Y_MESA token from glheader.h as it is now
provided by glext.h
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Sync extension spec of MESA_framebuffer_flip_y to what has been merged
upstream in the GL registry. Update now carries the accepted GL
extension no.
v2: split GL headers update off to separate commit
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Integrating headers from upstream registry [0] master branch. Effective
GL registry commit integrated:
9d534f9312e56c72df763207e449c6719576fd54
Keeping the following quirks local to Mesa:
- glext.h: BUILDING_MESA guard (see !1492)
- glxext.h: glXQueryGLXPbufferSGIX: 'int' return type (Mesa) vs.
'void' (GL registry)
- glxext.h: GLX_RENDERER_ID_MESA is still expected by some mesa tests,
even though its token has been removed from the spec (see
docs/specs/MESA_query_renderer.spec)
- glxext.h: glXGetTransparentIndexSUN / PFNGLXGETTRANSPARENTINDEXSUNPROC
argument pTransparentIndex has type 'unsigned long *' (Mesa) vs. 'long
*' (GL registry)
[0] https://github.com/KhronosGroup/OpenGL-Registry
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Given that we occasionally touch this code and probably nobody really
wants to think about it, introduce a minimal test so that we know we
haven't completely broken OSMesa.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
When run in optirun, applications that linked to `libGLX.so` and then
proceeded to query Mesa for extension strings caused a SEGV in Mesa.
`glXQueryExtensionsString` was calling a chain of functions that
eventually led to `__glXQueryServerString`. This function would call
`xcb_glx_query_server_string` then `xcb_glx_query_server_string_reply`.
The latter for some unknown reason returned `NULL`. Passing this `NULL`
to `xcb_glx_query_server_string_string_length` would cause a SEGV as the
function tried to dereference it.
The reason behind the function returning `NULL` is yet to be determined,
however, simply checking that the ptr is not `NULL` resolves this. A
similar check has been added to `__glXGetString` for completeness' sake,
although not immediately necessary.
In addition to that, we stumbled into a similar problem in
`AllocAndFetchScreenConfigs` which tries to access the configs to free
them if `__glXQueryServerString` fails. This, of course, SEGVs, because the
configs are yet to have been allocated. Simply continuing past the configs
if their config ptrs are `NULL` resolves this. We also switch to `calloc`
to make sure that the config ptrs are `NULL` by default, and not some
uninitialized value.
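A simplified sketch of the added check (not the exact Mesa code):
xcb_glx_query_server_string_cookie_t cookie =
   xcb_glx_query_server_string(c, screen, name);
xcb_glx_query_server_string_reply_t *reply =
   xcb_glx_query_server_string_reply(c, cookie, NULL);
/* The reply can come back NULL (seen under optirun); bail out instead of
 * handing it to xcb_glx_query_server_string_string_length(). */
if (!reply)
   return NULL;
int length = xcb_glx_query_server_string_string_length(reply);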
Cc: mesa-stable@lists.freedesktop.org
Fixes: 24b8a8cfe8 "glx: implement __glXGetString, hide __glXGetStringFromServer"
Fixes: cb3610e37c "Import the GLX client side library, formerly from xc/lib/GL/glx. Build it "
Reviewed-by: Adam Jackson <ajax@redhat.com>
Signed-off-by: Hal Gentz <zegentzy@protonmail.com>
The precision for a function return type is now stored in
ir_function_signature. This will later be useful to implement mediump
to float16 lowering. In the meantime it is also useful to catch errors
where a function is redeclared with a different precision.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
This adds the dispatch code. It creates a job for the number
of blocks in the grid, and dispatches them to the threadpool
implementation. The threadpool then calls the JIT code to
execute the coroutines.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
This creates the coroutine execution environment and the
main compute shaders that get executed inside it.
Each compute shader block is executed in its own coroutine
execution shader, with each "thread" being a coroutine executed
inside it in sequence.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
This doesn't actually build any of the shaders yet, but just
builds up the framework necessary to start building the shaders
and variants.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
The compute shader will need its own context like the frag shader
has, this just introduces the framework struct and allocates/frees
for it in the right places.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
When the code is executing and hits a barrier, it will suspend
the coroutine and return control to the coroutine dispatcher.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
In order to efficiently run a number of compute blocks, use
a threadpool that just allows for jobs with unique sequential
ids to be dispatched.
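As a rough illustration of the interface idea (hypothetical names, not
the actual llvmpipe API):
/* A pool runs a work function once per job id in [0, num_jobs), so each
 * compute block gets a unique sequential id. */
typedef void (*cs_job_func)(void *data, int job_id);
struct cs_threadpool *cs_threadpool_create(unsigned num_threads);
void cs_threadpool_destroy(struct cs_threadpool *pool);
/* Queues num_jobs jobs and lets the caller wait for completion;
 * func(data, 0..num_jobs-1) is executed across the worker threads. */
void cs_threadpool_run(struct cs_threadpool *pool,
                       cs_job_func func, void *data, int num_jobs);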
In order to share the texture/image/sampler code with compute
shaders, we need to reorganize it to be at the front of the context,
the same as draw does for vs/gs sharing.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
These wrap the coroutine intrinsics and also add some higher
level wrappers around coroutine begin, end and suspend procedures
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
We were previously not doing at least some of the checks. This uses the
same logic that is used in glTexImage*.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Each BSD has slightly different sysctl for retrieving per-CPU times.
FreeBSD returns long while NetBSD returns uint64_t. On OpenBSD return
type differs between summation and per-CPU times. DragonFly is
compatible with FreeBSD.
Signed-off-by: Jan Beich <jbeich@FreeBSD.org>
Based on the vc4 implementation.
Fixes Android RenderEngine::flush() routine:
android.googlesource.com/platform/frameworks/native/+/refs/tags/android-o-mr1-iot-release-smart-clock-fcs/services/surfaceflinger/RenderEngine/RenderEngine.cpp#225
Signed-off-by: Roman Stratiienko <roman.stratiienko@globallogic.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Try a more aggressive approach with cloning uniform and coord loads.
A uniform load can be inserted into any instruction, so let's do that. ARM's
site claims that the penalty for a cache miss is one clock, so we don't lose
anything if we merge it into the instruction that uses the result. As a side
effect we can also pipeline it and thus decrease register pressure.
Do the same for varyings that hold texture coords, but for a different reason:
it looks like there's a special path for coords that increases precision if the
varying that holds them is pipelined. If we don't pipeline it and load coords
from a register, its precision is fp16 and thus only 10 bits, which is not
enough to accurately sample textures of size 1024 or larger.
Since an instruction can hold only one uniform load and one varying load,
node_to_instr now creates a move using the helper introduced in the previous
commit if the slot is already taken. As a side effect of this change we can
also try to pipeline texture loads and create a move if the attempt fails.
Reviewed-by: Erico Nunes <nunes.erico@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
It can load a value from a varying directly as well. Also, load_regs is the
only op that has a source, so add a src_num field to the load node and set
it accordingly.
Reviewed-by: Erico Nunes <nunes.erico@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
When lowering from ubo, use the constant base field in the load_uniform
instruction for the constant part of the offset. Doesn't change much
for constant indexing, but this will help for indirect indexing because
constant-folding can't completely clean up the result.
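The idea, as a hedged sketch using core NIR helpers (const_part,
indirect_offset and shader are assumptions; the real lima lowering
differs in detail):
nir_intrinsic_instr *load =
   nir_intrinsic_instr_create(shader, nir_intrinsic_load_uniform);
nir_intrinsic_set_base(load, const_part);         /* constant offset part */
load->src[0] = nir_src_for_ssa(indirect_offset);  /* only the indirect part */
load->num_components = num_components;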
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
This looks like clear copy-and-pasteos, and fixes:
dEQP-GLES2.functional.draw.random.40
(on A307 and A630, both tested in the new CI farm)
Reviewed-by: Rob Clark <robdclark@chromium.org>
We can get all the information we need from NIR. It's slightly less
accurate, but radeonsi doesn't use the extra information. The old code
also overcounted atomic counters, which led to problems when everything
was used at once.
Fixes KHR-GL45.compute_shader.resources-max.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Otherwise it's impossible to know the maximum SSBO index for both
internal TGSI shaders from TTN (which don't have any notion of atomic
counters and no offset) as well as shaders from GLSL.
I fixed everything I could find while grepping for num_ssbos and
num_abos, which hopefully is everything (iris was the only user I could
find that uses it in a meaningful way).
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This adds a bit of unnecessary code on radeonsi, since whether
unnormalized coordinates are used is known at compile time with GL, but
I wasn't sure if it was worth the few instructions to plumb everything
through, especially for something so rare -- my shader-db doesn't have
any instances where this changes anything.
Fixes CTS tests I created at
https://github.com/cwabbott0/VK-GL-CTS/tree/unnorm-gather-tests
Acked-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The workaround was originally written based on amdgpu-pro traces, but
since then radeonsi has got its own slightly different version. Use the
radeonsi version instead, to be consistent and because it'll be slightly
more convenient for handling unnormalized coordinates.
Acked-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Technically, the user might have set EGL_DISPLAY instead of
EGL_PLATFORM, but since the former is deprecated let's just mention the
latter in the warning message.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This routine was made obsolete over a series of reworks of memory
allocation; Tomeu's changes to shader memory allocation finally made
this unused as cppcheck noted.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
swr_shader.cpp: In function ‘void (* swr_compile_gs(swr_context*, swr_jit_gs_key&))(HANDLE, HANDLE, SWR_GS_CONTEXT*)’:
swr_shader.cpp:732:44: error: ‘make_unique’ was not declared in this scope
ctx->gs->map.insert(std::make_pair(key, make_unique<VariantGS>(builder.gallivm, func)));
^~~~~~~~~~~
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Reviewed-by: Jan Zielinski <jan.zielinski@intel.com>
While the documentation for _BitScanReverse64 on MSDN says that it's
available on ARM, this isn't true. It's only available on ARM64. So
let's match reality.
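A sketch of the corrected guard (simplified from the real header):
#if defined(_MSC_VER) && (defined(_M_X64) || defined(_M_ARM64))
#include <intrin.h>
/* _BitScanReverse64 exists on x64 and ARM64 MSVC targets, not 32-bit ARM. */
static inline unsigned last_bit64(unsigned __int64 u)
{
   unsigned long index;
   if (_BitScanReverse64(&index, u))
      return index + 1;
   return 0;
}
#endif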
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Acked-by: Matt Turner <mattst88@gmail.com>
This code generates CVTSD2SI, which requires SSE2. So let's fix the
required SSE-version.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Fixes: 5de29ae (util: try to use SSE instructions with MSVC and 32-bit gcc)
Reviewed-by: Matt Turner <mattst88@gmail.com>
This has been unused since 183db3a645 ("glsl: move half<->float
convertion to util"), Oct 10 2015. Let's drop needlessly including it.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Lionel found actual documentation for this at long last. Apparently
it actually is a sampler cache limitation that was mostly fixed on
Icelake. Unfortunately, it seems there are still issues with ASTC
and non-ASTC sampler views. Still, we can lessen the flush condition
from "format mismatch" to "ASTC mismatch", which eliminates most of
the flushing here.
We also update the documentation to refer to the workaround name.
strchrnul is not available on macOS.
pipe_loader.c:141:14: error: implicit declaration of function 'strchrnul' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
next = strchrnul(library_paths, ':');
^
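One possible fix is an open-coded fallback, sketched here (the name is
illustrative):
/* Return a pointer to the first occurrence of c in s, or to the
 * terminating NUL if c does not occur (glibc strchrnul() semantics). */
static const char *
util_strchrnul(const char *s, char c)
{
   for (; *s && *s != c; s++)
      ;
   return s;
}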
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
The majority of these only apply the start argument to the input, but a
few of them also apply it to the output array. util_primconvert, the only
user of this argument that passes a non-zero start argument, does not
expect it to be applied to the output; if it is, it will write outside of
allocated memory, leading to VRAM corruption.
The reason this doesn't seem to have been noticed before, is that no
driver currently use util_primconvert to convert a primitive-type to
itself, which is the cases where this was broken. But for Zink, this
will no longer be true, because we need to eliminate the use of 8-bit
index-buffers.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Fixes: 28f3f8d413 ("gallium/auxiliary/indices: add start param")
Reviewed-by: Rob Clark <robdclark@chromium.org>
Commit 6f7306c029 ("swr/rast: Refactor memory API between rasterizer
core and swr") unintentionally removed changes for llvm-9.0.
Fixes: 6f7306c029 ("swr/rast: Refactor memory API between rasterizer core and swr")
Fixes: 5dd9ad1570 ("swr/rasterizer: Better implementation of scatter")
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Reviewed-by: Jan Zielinski <jan.zielinski@intel.com>
We already had a perfectly cromulent pass for this, but one landed in
common NIR code so let's switch and lighten our tree.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This optimization depended on RA running before scheduling. It therefore
no longer applies and is now unused.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This is a tradeoff.
Scheduling before RA means we don't do RA on what-will-become pipeline
registers. Importantly, it means the scheduler is able to reorder
instructions, as registers have not been decided yet.
Unfortunately, it also complicates register spilling, since the spills
themselves won't get bundled optimally and we can only spill twice per
ALU bundle (only one spill per bundle allowed here). It also prevents us
from eliminating dead moves introduced by register allocation, as they
are not dead before RA. The shader-db regressions are from poor spilling
choices introduced by the new bundling requirements. These could be
solved by the combination of a post-scheduler (to combine adjacent
spills into bundles) with a VLIW-aware spill cost calculation.
Nevertheless, the change is small enough that I feel it's worth it to
eat a tiny shader-db regression for the sake of flexibility.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Rather than using a pile of hacks and awkward constructs in MIR to
ensure the writeout parameter gets written into r0, let's add a
dedicated shadow register class for writeout (interfering with work
register r0) so we can express the writeout condition succintly and
directly.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
There's no slot for it; you'll end up writing into the void and
clobbering stuff. Don't do it.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
When running the register allocator after scheduling, the MIR looks a
little different, so we need to extend the RA to handle a few of these
extra cases correctly.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
After scheduling, we still have valid MIR, but we have additional
bundling annotations which we would like to keep for debugging, so print
these.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Rather than a vague "br.??" line, annotate the branch with its target
type (useful for disambiguating discards) and whether it was inverted.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
I'm not sure if this is strictly necessary but it makes debugging easier
and minimizes the diff with the experimental scheduler.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Scheduling occurs on a per-block basis, strongly assuming that a given
block contains at most a single branch. This does not always map to the
source NIR control flow, particularly when discard intrinsics are
involved. The solution is to allow scheduling barriers, which will
terminate a block early in code generation and open a new block.
To facilitate this, we need to move some post-block processing to a new
pass, rather than relying hackily on the current_block pointer.
This allows us to clean up some logic analyzing branches in other parts
of the driver as well, now that the MIR is much more well-formed.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It's sometimes convenient to call this with no instruction specified. By
definition, a missing instruction cannot reference any argument, so
let's check for NULL and short-circuit to false.
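A minimal sketch of the shape of that check (the real helper lives in the
Midgard compiler; the argument walk is elided):
static bool
mir_has_arg(const midgard_instruction *ins, unsigned arg)
{
   if (!ins)
      return false;   /* no instruction, so it can't reference arg */
   /* ... otherwise check each source of *ins against arg ... */
   return false;
}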
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The branch has the writeout specified in its source list, making this
special even if it's not explicitly part of r0.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
In order to run register allocation after scheduling, it is sometimes
necessary to be able to insert instructions into an already-scheduled
program. This is suboptimal, since it forces us to do a worst-case
scheduling, but it is nevertheless required for correct handling of
spills/fills. Let's add helpers to insert instructions as standalone
bundles for use in spilling code.
These helpers are minimal -- they *only* work on load/store ops or
moves. They should not be used for anything but register spilling; any
other instructions should be added prior to the schedule.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
While it doesn't matter with an unconditional move to the conditional
register (r31), when we try to elide that move we'll need to track the
swizzle explicitly, and there is no slot for that yet since ALU ops are
normally binary.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Oh boy. Midgard scheduling is crazy... These are all just the
requirements, not even the algorithm yet.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This does not affect shaders in any way. Rather, it makes the shader-db
instruction count recorded in the compiler accurate with the in-order
scheduler, matching up with what we calculate from pandecode.
Though shaders are the same, instruction counts cannot be compared
across this commit for this reason.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Allow a direct link to the PDF itself from the authors themselves,
rather than a paywall splash page.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Acked-by: Rob Clark <robdclark@chromium.org>
The GLX extension strings are independent of any context, so abusing the
direct_support bit to control this extension's visibility is wrong.
This reverts commit 079d0717fc896bc8086b037d0ed22642274986c7.
Reported-by: Michel Dänzer <michel@daenzer.net>
Reviewed-by: Michel Dänzer <michel@daenzer.net>
Memory allocated through panfrost_allocate_transient() is likely to
come from the transient pool. Let's add the BO backing the allocated
memory region to the job batch so the kernel can retain this BO while
jobs are executed.
In practice that has never been a problem because the transient pool
is never shrunk, and even if it was, we still control the lifetime of
the job, so there's no reason for this BO to be freed before the GPU is
done executing the batch. But it still makes sense to add the BO for
debugging purposes.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This fixes two dEQP tests for virgl:
dEQP-EGL.functional.image.create.gles2_cubemap_positive_x_rgba_texture
dEQP-EGL.functional.image.render_multiple_contexts.gles2_cubemap_positive_x_rgba8_texture
Signed-off-by: Lepton Wu <lepton@chromium.org>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
* Fix 2D/2DArray/3D tiling parameters:
There is a bottom threshold for width and height.
* Re-enable tiling for Cubemap, after setting the right parameters.
Reviewed-by: Rob Clark <robdclark@gmail.com>
Equivalent of 0c1dd9dee "broadcom/vc4: Allow importing linear BOs with
arbitrary offset/stride." for v3d.
Allows YUV buffers with a single buffer and plane offsets to be
passed in.
Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Input to GS is just a set of attributes, so remove explicit setup of
'position' which is meaningless for GS input processing.
Reviewed-by: Alok Hota <alok.hota@intel.com>
RADV no longer uses specific LLVM options compared to the common code.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
The patch adds support for the 64-bit HAL_PIXEL_FORMAT_RGBA_FP16
format on the Android platform.
Fixes android.graphics.cts.BitmapColorSpaceTest#test16bitHardware
which failed in egl due to "Unsupported native buffer format 0x16"
on chromebooks.
Signed-off-by: Nataraj Deshpande <nataraj.deshpande@intel.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
v2: Update several of the comments. Drop some redundant uses of
ASSERT_UNION_OF_OTHERS_MATCHES_UNKNOWN_*_SOURCE source. Suggested by
Caio.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Suggested-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
One shader from Metro Last Light and the rest from Rochard. In the
Rochard cases, something like:
min(1.0, max(pow(saturate(x), y), z))
was transformed to
saturate(max(pow(saturate(x), y), z))
because the result of the pow must be >= 0.
The Metro Last Light case was similar. An instance of
min(pow(abs(x), y), 1.0)
became
saturate(pow(abs(x), y))
v2: Fix some comments. Suggested by Caio.
v3: Fix setting is_integral when the exponent might be negative. See
also Mesa MR !1778.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
All Intel platforms had similar results. (Ice Lake shown)
total instructions in shared programs: 16280670 -> 16280659 (<.01%)
instructions in affected programs: 1130 -> 1119 (-0.97%)
helped: 11
HURT: 0
helped stats (abs) min: 1 max: 1 x̄: 1.00 x̃: 1
helped stats (rel) min: 0.72% max: 1.43% x̄: 1.03% x̃: 0.97%
95% mean confidence interval for instructions value: -1.00 -1.00
95% mean confidence interval for instructions %-change: -1.19% -0.86%
Instructions are helped.
total cycles in shared programs: 367168430 -> 367168270 (<.01%)
cycles in affected programs: 10281 -> 10121 (-1.56%)
helped: 10
HURT: 1
helped stats (abs) min: 16 max: 18 x̄: 17.00 x̃: 17
helped stats (rel) min: 1.31% max: 2.43% x̄: 1.79% x̃: 1.70%
HURT stats (abs) min: 10 max: 10 x̄: 10.00 x̃: 10
HURT stats (rel) min: 3.10% max: 3.10% x̄: 3.10% x̃: 3.10%
95% mean confidence interval for cycles value: -20.06 -9.04
95% mean confidence interval for cycles %-change: -2.36% -0.32%
Cycles are helped.
I discovered this while looking at a shader that was hurt by some other
work I'm doing. When I examined the changes, I was confused that one
instance of a comparison that was used in a discard_if was (incorrectly)
eliminated, while another instance used by a bcsel was (correctly) not
eliminated. I had to use NIR_PRINT=true to see exactly where things
went wrong.
A bunch of shaders in Goat Simulator, Dungeon Defenders, Sanctum 2, and
Strike Suit Zero were impacted.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Fixes: 405de7ccb6 ("nir/range-analysis: Rudimentary value range analysis pass")
All Intel platforms had similar results. (Ice Lake shown)
total instructions in shared programs: 16280659 -> 16281075 (<.01%)
instructions in affected programs: 21042 -> 21458 (1.98%)
helped: 0
HURT: 136
HURT stats (abs) min: 1 max: 9 x̄: 3.06 x̃: 3
HURT stats (rel) min: 1.16% max: 6.12% x̄: 2.23% x̃: 2.03%
95% mean confidence interval for instructions value: 2.93 3.19
95% mean confidence interval for instructions %-change: 2.08% 2.37%
Instructions are HURT.
total cycles in shared programs: 367168270 -> 367170313 (<.01%)
cycles in affected programs: 172020 -> 174063 (1.19%)
helped: 14
HURT: 111
helped stats (abs) min: 2 max: 80 x̄: 21.21 x̃: 9
helped stats (rel) min: 0.10% max: 4.47% x̄: 1.35% x̃: 0.79%
HURT stats (abs) min: 2 max: 584 x̄: 21.08 x̃: 5
HURT stats (rel) min: 0.12% max: 17.28% x̄: 1.55% x̃: 0.40%
95% mean confidence interval for cycles value: 5.41 27.28
95% mean confidence interval for cycles %-change: 0.64% 1.81%
Cycles are HURT.
Found by inspection. I tried really, really hard to make a test case
that would trigger this problem, but I was unsuccessful. It's very hard
to get an instruction to produce a ne_zero result without ne_zero
sources. The most plausible way is using bcsel. That proves
problematic because bcsel interprets its sources as integers, so it
cannot currently be used to "clean" values for floating point
instructions.
No shader-db changes on any Intel platform.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Fixes: 405de7ccb6 ("nir/range-analysis: Rudimentary value range analysis pass")
Fixes piglit tests (new in piglit!110):
- fs-underflow-fma-compare-zero.shader_test
- fs-underflow-mul-compare-zero.shader_test
v2: Add back part of comment accidentally deleted. Noticed by
Caio. Remove is_not_zero function as it is no longer used.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111308
Fixes: fa116ce357 ("nir/range-analysis: Range tracking for ffma and flrp")
Fixes: 405de7ccb6 ("nir/range-analysis: Rudimentary value range analysis pass")
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
All Gen7+ platforms** had similar results. (Ice Lake shown)
total instructions in shared programs: 16278465 -> 16279492 (<.01%)
instructions in affected programs: 16765 -> 17792 (6.13%)
helped: 0
HURT: 23
HURT stats (abs) min: 7 max: 275 x̄: 44.65 x̃: 8
HURT stats (rel) min: 1.15% max: 17.51% x̄: 4.23% x̃: 1.62%
95% mean confidence interval for instructions value: 9.57 79.74
95% mean confidence interval for instructions %-change: 1.85% 6.61%
Instructions are HURT.
total cycles in shared programs: 367135159 -> 367154270 (<.01%)
cycles in affected programs: 279306 -> 298417 (6.84%)
helped: 0
HURT: 23
HURT stats (abs) min: 13 max: 6029 x̄: 830.91 x̃: 54
HURT stats (rel) min: 0.17% max: 45.67% x̄: 7.33% x̃: 0.49%
95% mean confidence interval for cycles value: 100.89 1560.94
95% mean confidence interval for cycles %-change: 0.94% 13.71%
Cycles are HURT.
total spills in shared programs: 8870 -> 8869 (-0.01%)
spills in affected programs: 19 -> 18 (-5.26%)
helped: 1
HURT: 0
total fills in shared programs: 21904 -> 21901 (-0.01%)
fills in affected programs: 81 -> 78 (-3.70%)
helped: 1
HURT: 0
LOST: 0
GAINED: 1
** On Broadwell, a shader was hurt for spills / fills instead of
helped.
No changes on any earlier platforms.
Fix the a / b ordering in some compares. Delete duplicate patterns.
Add a table explaining things. While I was cleaning this up, I managed
to confuse myself. The table helped sort that out.
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This didn't fix bug #111308, but it was found while trying to find the
actual cause of that bug.
Fixes piglit tests (new in piglit!110):
- fs-fract-of-NaN.shader_test
- fs-lt-nan-tautology.shader_test
- fs-ge-nan-tautology.shader_test
No shader-db changes on any Intel platform.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111308
Fixes: b77070e293 ("nir/algebraic: Use value range analysis to eliminate tautological compares")
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
We enabled fast clears at level > 0, but didn't minify the dimensions
when comparing the box size, so we always thought it was a partial
clear and as a result never actually enabled any.
This eliminates some slow clears in Civilization VI, but they are mostly
during initialization and not the main rendering.
Thanks to Dan Walsh for noticing we had too many slow clears.
Fixes: 393f659ed8 ("iris: Enable fast clears on other miplevels and layers than 0.")
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
Otherwise it doesn't exist and can't be parsed, so everything dies at
screen init time.
Fixes: 6dc4ddc5f8 ("iris: use driconf for 'bo_reuse' parameter")
Some functionality has been added to deqp-volt to only print
regressions, so update our version of it and use the new options.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
LLVM 7.0 ditched the pmulu intrinsics.
This is only a trivial patch to use the fallback code instead.
It'll likely produce atrocious code since the pattern doesn't match what
llvm itself uses in its autoupgrade paths, hence the pattern won't be
recognized.
Should fix https://bugs.freedesktop.org/show_bug.cgi?id=111496
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Fixes a possible data race spotted while debugging on other EGL
related failures where glFinish and eglCreateContext are going on at
the same time:
==11558== Possible data race during read of size 1 at 0x5E78CD0 by thread #23
==11558== Locks held: 1, at address 0x5E77CA8
==11558== at 0x61B71D4: bo_alloc_internal (brw_bufmgr.c:639)
==11558== by 0x61B7328: brw_bo_alloc (brw_bufmgr.c:669)
==11558== by 0x61EF975: recreate_growing_buffer (intel_batchbuffer.c:231)
==11558== by 0x61EFAAE: intel_batchbuffer_reset (intel_batchbuffer.c:255)
==11558== by 0x61EFB85: intel_batchbuffer_reset_and_clear_render_cache (intel_batchbuffer.c:280)
==11558== by 0x61F0507: brw_new_batch (intel_batchbuffer.c:551)
==11558== by 0x61F12C1: _intel_batchbuffer_flush_fence (intel_batchbuffer.c:888)
==11558== by 0x61BDD6B: intel_glFlush (brw_context.c:296)
==11558== by 0x61BDDB9: intel_finish (brw_context.c:307)
==11558== by 0x623831B: _mesa_Finish (context.c:1906)
==11558== by 0x46D556: deqp::egl::GLES2ThreadTest::Operation::execute(tcu::ThreadUtil::Thread&)
==11558== by 0x721502: tcu::ThreadUtil::Thread::run()
==11558==
==11558== This conflicts with a previous write of size 1 by thread #26
==11558== Locks held: 1, at address 0x5D09878
==11558== at 0x61B98A9: brw_bufmgr_enable_reuse (brw_bufmgr.c:1541)
==11558== by 0x61BF09D: brw_process_driconf_options (brw_context.c:854)
==11558== by 0x61BF6CA: brwCreateContext (brw_context.c:993)
==11558== by 0x621181F: driCreateContextAttribs (dri_util.c:473)
==11558== by 0x53FE87B: dri2_create_context (egl_dri2.c:1388)
==11558== by 0x53EE7BE: eglCreateContext (eglapi.c:807)
==11558== by 0x5C8AB9: eglw::FuncPtrLibrary::createContext(void*, void*, void*, int const*) const
==11558== by 0x46E027: deqp::egl::GLES2ThreadTest::CreateContext::exec(tcu::ThreadUtil::Thread&)
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
When u_upload_mgr fills up a buffer, it unmaps and destroys it. Our
unmap function was automatically performing the equivalent of a
FlushMappedBufferRange call in this case. Because the buffer mapping
is persistent and coherent, we don't actually do any flushing when we
do the rest of the writes to the buffer - we were just doing one final
one at the end. But we would be using the uploaded contents on the
GPU the whole time.
This certainly shouldn't be necessary for streaming buffers, and if
such flushing and dirtying is necessary for coherent buffers, this is
wildly insufficient.
Drops a small number of constant packets and PIPE_CONTROL flushes from
most benchmarks that I've looked at. Doesn't seem to make much of an
impact on performance, however.
Thanks to Felix Degrood for noticing that we were emitting more
3DSTATE_CONSTANT_* packets than we needed to.
NIR shaders use GLSL types (note: these live outside libglsl), and
nine needs to properly initialize these just like the other state
trackers. This fixes an assertion failure when TTN is used.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Fixes:
dEQP-GLES3.functional.shaders.switch.switch_in_do_while_loop_dynamic_vertex
dEQP-GLES3.functional.shaders.switch.switch_in_do_while_loop_dynamic_fragment
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
While resolving jumps to skip intermediate jumps from the structured
CFG, maintain the successors and predecessors correctly.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
This adds the ability for Intel devices to:
* Only load on i965
* Only load on iris
* First attempt i965, and try iris next
* First attempt iris, and try i965 next
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
If a field name differs slightly between two generations then this
change will still add the fields into the same group.
For example, these will be treated as equal:
* "Software Exception" and "Software Exception"
* "Per Thread" and "Per-Thread"
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
We were clamping the LOD to force non-mipmap filtering, but that means
that the HW doesn't get to select between the min and mag filters.
Setting MIPFILTER_LINEAR_FAR appears to force non-mipmap filtering.
Fixes all failures in dEQP-GLES2.functional.texture.filtering.2d.*
Reviewed-by: Rob Clark <robdclark@chromium.org>
See the previous commit for the explanation of the Fixes tag.
Hurts 21 shaders in shader-db. All of the hurt shaders are in Unreal
Engine 4 tech demos.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Fixes: 7afa26d4e3 ("nir: Add lowering for nir_op_bitfield_reverse.")
This caused a problem on Sandybridge where an open-coded
bitfieldReverse() function could be optimized to a
nir_op_bitfield_reverse that would generate an unsupported BFREV
instruction in the backend. This was encountered in some Unreal4 tech
demos in shader-db. The bug was not previously noticed because we don't
actually try to run those demos on Sandybridge.
The fixes tag is a bit of a lie. The actual bug was introduced about
26,000 commits earlier in 371c4b3c48 ("nir: Recognize open-coded
bitfield_reverse."). Without the NIR lowering pass, the flag needed to
avoid the optimization does not exist. Hopefully nobody will care to
fix this on an earlier Mesa release.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Fixes: 7afa26d4e3 ("nir: Add lowering for nir_op_bitfield_reverse.")
Reduces the size of the u_format_table.c file by 140k (out of 1.64M)
and makes me less confused about endianness in gallium.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Acked-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The formats affected are:
- LA x (16_FLOAT, 32_FLOAT, 32_UINT, 32_SINT)
- R8G8B8 x (UNORM, SNORM, SRGB, USCALED, SSCALED, UINT, SINT)
- RG/RGB/RGBA x (64_FLOAT, 32_FLOAT, 16_FLOAT, 32_UNORM, 32_SNORM,
32_USCALED, 32_SSCALED, 32_FIXED, 32_UINT, 32_SINT)
- RGB/RGBA x (16_UNORM, 16_SNORM, 16_USCALED, 16_SSCALED,
16_UINT, 16_SINT)
- RGBx16 x (UNORM, SNORM, FLOAT, UINT, SINT)
- RGBx32 x (FLOAT, UINT, SINT)
- RA x (16_FLOAT, 32_FLOAT, 32_UINT, 32_SINT)
The updated st_formats.c unit test checks that the formats affected by
this change are all array formats in the equivalent Mesa format (if
any). Mesa's array format definition is clear: the value stored is an
array (increasing memory address) of values of the channel's type.
It's also the only thing that makes sense for the RGB types, or very
large types like RGBA64_FLOAT (A should not move to the low address
because the cpu is BE).
Acked-by: Roland Scheidegger <sroland@vmware.com>
Acked-by: Adam Jackson <ajax@redhat.com>
Tested-by: Matt Turner <mattst88@gmail.com> (unit tests on BE)
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Nothing accessed the .value field, just the .chan. Unwrap all the
code from the union, for clarity (and 13k less generated code).
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Acked-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Shaves 30k off of the 1.6M .c file, and makes for less noise for me
trying to understand how gallium formats actually work.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Acked-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Instructions attached to blocks are never explicitly freed. Let's
use ralloc() to attach those objects to the compiler context so that
they are automatically freed when the ctx object is freed.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
I don't know how Meson didn't hit this issue, when it too already uses
-Werror=incompatible-pointer-types
Fixes: 3dd299c3d5 ("glx: Sync <GL/glxext.h> with Khronos")
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Looks like the initial RE was wrong and some fields have a different purpose.
I.e. there's no "disable_mipmap" field, it's actually part of another field
that selects mipmap filtering.
Also fix layout position.
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
This fixes the following CTS test on 32-bit systems:
GTF-GL46.gtf30.GL3Tests.packed_depth_stencil.packed_depth_stencil_init
It does glGetTexImage of a 16-bit SNORM image, requesting 32-bit UNORM
data. In get_tex_rgba_uncompressed, we round trip through float to
handle image transfer ops for clamping. _mesa_format_convert does:
_mesa_float_to_unorm(0.571428597f, 32)
which translated to:
_mesa_lroundevenf(0.571428597f * 0xffffffffu)
which produced different results on 64-bit and 32-bit systems:
64-bit: result = 0x92492500
32-bit: result = 0x80000000
This is because the size of "long" varies between the two systems, and
0x92492500 is too large to fit in a signed 32-bit integer. To fix this,
we switch to the new _mesa_i64roundevenf function which always does the
64-bit operation.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=104395
Fixes: 594fc0f859 ("mesa: Replace F_TO_I() with _mesa_lroundevenf().")
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
This always returns an int64_t, translating to _mesa_lroundevenf on
systems where long is 64-bit, and llrintf where "long long" is needed.
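A sketch of such a helper, assuming the default round-to-nearest-even
floating-point environment:
#include <limits.h>
#include <math.h>
#include <stdint.h>
static inline int64_t
_mesa_i64roundevenf(float x)
{
#if LONG_MAX == INT64_MAX
   return lrintf(x);    /* long is already 64-bit */
#else
   return llrintf(x);   /* need "long long" */
#endif
}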
Fixes: 594fc0f859 ("mesa: Replace F_TO_I() with _mesa_lroundevenf().")
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
GLX_EXT_import_context operates only on indirect contexts, a direct
context cannot possibly support it. Without this change the extension
will appear in the combined GLX extension string even if it is missing
from the server string, indicating a lack of required server support.
At least on Linux, we can use the ELF auxiliary vector to
detect the presence of AltiVec, VSX and other CPU features
without having to go through handling SIGILL, which has
various problems of its own.
A similar thing is already being done for ARM to detect NEON.
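On Linux this boils down to reading AT_HWCAP, sketched here (the
PPC_FEATURE_* bits come from <asm/cputable.h>; not the exact Mesa code):
#include <stdbool.h>
#include <sys/auxv.h>
#include <asm/cputable.h>
static void
detect_ppc_features(bool *has_altivec, bool *has_vsx)
{
   unsigned long hwcap = getauxval(AT_HWCAP);
   *has_altivec = hwcap & PPC_FEATURE_HAS_ALTIVEC;
   *has_vsx     = hwcap & PPC_FEATURE_HAS_VSX;
}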
Reviewed-by: Matt Turner <mattst88@gmail.com>
Signed-off-by: Daniel Kolesa <daniel@octaforge.org>
Gen11 adds support for specifying the render target index and src0
alpha present bits in the extended message descriptor. Previously,
we had to use a message header for this, requiring extra instructions
to write the fields, and two registers of extra payload.
Improves performance on my ICL 8x8 frequency locked to 700Mhz, on iris:
GfxBench5 Manhattan 3.0: 2.13635% +/- 0.159859% (n=5)
GfxBench5 Aztec Ruins: 1.57173% +/- 0.128749% (n=5)
Synmark2 OglDeferred: 2.86914% +/- 0.191211% (n=10)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This takes care of generate_fb_write/fire_fb_write/brw_fb_WRITE's stuff
earlier in the visitor. It will also make it easier to generate SENDSC
messages with indirect extended descriptors in a few patches.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Annoyingly, these bits exist in some extended message descriptors
(in particular render target writes), but they don't have any
corresponding bits in the ISA encoding. So we can't use an immediate
and have to fall back to an indirect extended descriptor.
Thanks to Jason Ekstrand for reminding me that you can still set these
bits via an indirect descriptor, even if they don't exist in the ISA.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
src0 vstride and type overlap with bits of the extended descriptor.
brw_set_desc() also sets the extended descriptor to 0. So by setting
the descriptor, then setting src0, we were accidentally setting a bunch
of extended descriptor bits unintentionally.
When using this infrastructure for framebuffer writes (in a future
patch), this ended up setting the extended descriptor bit 20, which is
"Null Render Target" on Icelake, causing nothing to be written to the
framebuffer.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
We need two different values of the register, one for NGG and one for
legacy, in order to fix edge flags for the legacy pipeline.
Passing the ngg flag to emit_clip_regs would be too complicated,
so CONTEXT_REG_RMW is used for partial register updates.
Acked-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Right now we're leaking all block and instruction objects allocated by
the compiler. Let's clean things up before leaving
midgard_compile_shader_nir().
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
src/gallium/winsys/svga/drm/vmw_screen.c: In function ‘vmw_dev_compare’:
src/gallium/winsys/svga/drm/vmw_screen.c:48:12: warning: implicit declaration of function ‘major’ [-Wimplicit-function-declaration]
48 | return (major(*(dev_t *)key1) == major(*(dev_t *)key2) &&
| ^~~~~
src/gallium/winsys/svga/drm/vmw_screen.c:49:12: warning: implicit declaration of function ‘minor’ [-Wimplicit-function-declaration]
49 | minor(*(dev_t *)key1) == minor(*(dev_t *)key2)) ? 0 : 1;
| ^~~~~
That file (and many others) already has the proper #include with their
respective guards, but scons wasn't defining them, resulting in implicit
functions being used instead (and an always-true check that's probably
breaking something down the line).
Note that I'm cheating a bit here because Scons doesn't seem to have
a clean way to detect the existence of major() et al. as functions or
macros, so I'm taking the shortcut of just detecting the presence of the
header and assuming its contents are what we expect.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-By: Jose Fonseca <jfonseca@vmware.com>
This mirrors the vs/gs keys, and will be needed when adding images
support.
The const changes also mirror how the draw code works (as is needed
when we add images).
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Not sure how I missed this before, but compswap was hitting an
assert here as it is its own special case.
Fixes: b5ac381d8f ("gallivm: add buffer operations to the tgsi->llvm conversion.")
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Looks like a copy/paste error. This patch prevents a segfault when
running the following on BDW:
INTEL_DEBUG=no8,no16,do32 ./deqp-vk -n \
dEQP-VK.subgroups.arithmetic.compute.subgroupmin_dvec4
For the curious, the message we're getting is:
CS compile failed: Failure to register allocate. Reduce number
of live scalar values to avoid this.
Fixes: 864737ce6c ("i965/fs: Build 32-wide compute shader when needed.")
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
No driver implements them yet, but this is a long way toward gallium
having matching format enums for Mesa formats.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
To add ASTC 3D compression formats, we need to be able to express the
block depth. While I'm touching every line, line up the columns of
the CSV again as they've drifted over time.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
When moving constants, if switching to a floating-point representation
doesn't break anything, we'd rather have an fmov than an imov,
permitting inlining the constant in many circumstances.
total quadwords in shared programs: 3408 -> 3366 (-1.23%)
quadwords in affected programs: 1188 -> 1146 (-3.54%)
helped: 41
HURT: 0
helped stats (abs) min: 1 max: 2 x̄: 1.02 x̃: 1
helped stats (rel) min: 0.19% max: 25.00% x̄: 9.65% x̃: 11.11%
95% mean confidence interval for quadwords value: -1.07 -0.98
95% mean confidence interval for quadwords %-change: -11.38% -7.93%
Quadwords are helped.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Storing constants as float doesn't make sense when we have integer
instructions; better to store them as integers natively and coerce
to/from float rather than the opposite.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This fixes dEQP-GLES3.functional.texture.specification subtests on iris:
- texsubimage3d_depth.depth24_stencil8_2d_array
- texsubimage3d_depth.depth32f_stencil8_2d_array
- texsubimage3d_depth.depth_component32f_2d_array
- texsubimage3d_depth.depth_component24_2d_array
- texstorage2d.format.depth24_stencil8_2d
- texstorage2d.format.depth32f_stencil8_2d
- texstorage2d.format.depth_component24_2d
- texstorage2d.format.depth_component32f_2d
- texstorage3d.format.depth24_stencil8_2d_array
- texstorage3d.format.depth32f_stencil8_2d_array
- texstorage3d.format.depth_component24_2d_array
- texstorage3d.format.depth_component32f_2d_array
Here, something appears to be going wrong with having this bit set
during blorp_copy operations for texture upload, which override the
format to R8G8B8A8_UINT.
AFAICT this bit should have no effect for integer surfaces, as it has
to do with blending, and integer blending is not a thing. So it should
be harmless to disable it.
The Windows driver appears to be setting this bit universally, so
I am unclear why we would need to. Perhaps they simply haven't run
into this issue.
Fixes: f741de236b ("isl: Enable Unorm Path in Color Pipe")
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Jason suggested I remove this in review, and he's right. AFAICT this
affects blending, and that just isn't going to happen on buffers.
Fixes: f741de236b ("isl: Enable Unorm Path in Color Pipe")
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
For each mipmap level, the driver will store the clear values (8 bytes)
and the TC-compat zrange value (4 bytes).
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The workaround was entirely in common code, and it's needed in radeonsi
too so just always do it when necessary. Fixes
KHR-GL45.shader_image_load_store.advanced-allStages-oneImage on gfx9
with LLVM 8.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
GL and Vulkan allow you to bind a single layer of a 3D texture to a 2D
image, and we weren't implementing a workaround for that on gfx9 that
TGSI was. Copy it over.
Fixes KHR-GL45.shader_image_load_store.non-layered_binding with radeonsi
NIR.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
16-bit and 32-bit values match hardware values but 8-bit doesn't.
This fixes dEQP-VK.pipeline.input_assembly.* with 8-bit index.
Fixes: 372c3dcfdb ("radv: implement VK_EXT_index_type_uint8")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The virgl formats are fixed-in-time snapshots of the gallium ones;
we just need to provide a translation table between them when
we enter the hardware.
This fixes a regression since Eric renumbered the gallium table.
Fixes: c45c33a5a2 (gallium: Remove manual defining of PIPE_FORMAT enum values.)
Bugzilla: https://bugs.freedesktop.org/111454
v1 by Dave Airlie <airlied@redhat.com>
v2: virgl: Add a number of formats to the table that are used, e.g. for vertex
attributes
v3: cover some more missing formats from a piglit run
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
pp has vector units and some operations can be optimized when bundled
together.
Benchmarking this with piglit shaders shows that the instruction count
can be greatly reduced on many examples with vectorization.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
nir vec4 fcsel assumes that each component of the condition will be used
to select the same component from the options, but pp can't implement
that since it only has 1 component for the condition.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
The previous spill stack was fixed and too small, and caused instability
in programs requiring spilling for roughly more than one value.
This patch adds a dynamic calculation of the buffer size based on stack
utilization and switches it to a separate allocation at flush time that
will fit the shader that requires the largest buffer.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
It's not used by anything anymore now that so much lowering has been
moved into NIR. Sadly, we still need one in brw_compile_gs() for
geometry shaders on Sandy Bridge. Short of a lot of pointless work,
that one's probably not going away.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Create a unified table to handle pipe format to texture
and render target format lookup.
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: Erico Nunes <nunes.erico@gmail.com>
Signed-off-by: Qiang Yu <yuq825@gmail.com>
Put the uncached GTT type at a higher index than the visible VRAM type,
rather than having GTT first.
When we don't have dedicated VRAM, we don't have a non-visible VRAM
type, and the property flags for GTT and visible VRAM are identical.
According to the spec, for types with identical flags, we should give
the one with better performance a lower index.
Previously, apps which follow the spec guidance for choosing a memory
type would have picked the GTT type in preference to visible VRAM (all
Feral games will do this), and end up with lower performance.
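For reference, a sketch of the kind of spec-guided selection loop such
apps use, which is why the type order matters:
#include <stdint.h>
#include <vulkan/vulkan.h>
/* Take the lowest-index type whose flags satisfy the request, so types
 * with identical flags should be ordered fastest-first by the driver. */
static uint32_t
choose_memory_type(const VkPhysicalDeviceMemoryProperties *props,
                   uint32_t allowed_type_bits, VkMemoryPropertyFlags wanted)
{
   for (uint32_t i = 0; i < props->memoryTypeCount; i++) {
      if ((allowed_type_bits & (1u << i)) &&
          (props->memoryTypes[i].propertyFlags & wanted) == wanted)
         return i;
   }
   return UINT32_MAX;
}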
On a Ryzen 5 2500U laptop (Raven Ridge), this improves average FPS in
the Rise of the Tomb Raider benchmark by up to ~30%. Tested a couple of
other (Feral) games and saw similar improvement on those as well.
Signed-off-by: Alex Smith <asmith@feralinteractive.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Cc: 19.2 <mesa-stable@lists.freedesktop.org>
(Bas: CCing this to 19.2-rc due to high impact and limited complexity)
Add better liveness analysis that was modelled after the one in vc4.
It uses live ranges and is aware of multiple blocks, which is a
prerequisite for adding CF support.
Tested-by: Andreas Baierl <ichgeh@imkreisrum.de>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Erico Nunes <nunes.erico@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Mali4x0 supports only gl_FragColor. gl_FragDepth is not supported.
Check that we don't get anything but gl_FragColor in shader outputs.
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
We don't have a special OP to store color in PP; all we need to do is
store gl_FragColor into reg0, thus it's just a mov and therefore an ALU
node. Yet we still need to indicate that it's a store_color op so
regalloc ignores its destination.
Tested-by: Andreas Baierl <ichgeh@imkreisrum.de>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Erico Nunes <nunes.erico@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Create a ppir block for each corresponding NIR block and populate
its successors. They will be used later in liveness analysis and
in CF support.
Tested-by: Andreas Baierl <ichgeh@imkreisrum.de>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Erico Nunes <nunes.erico@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
We can get the following from NIR:
(1) r1 = r2
(2) r2 = ssa1
Note that r2 is read before it's assigned, so there's no node for
it in comp->var_nodes. We need to create a dummy node in this case
whose sole purpose is to hold a ppir_dest with the reg in it.
Tested-by: Andreas Baierl <ichgeh@imkreisrum.de>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Erico Nunes <nunes.erico@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
There can be several root nodes, i.e.:
(1) r0 = r1
(2) r2 = r3
(3) branch if (ssa1)
We need to make (3) depend on (1) and (2); the old code added a
dependency only for (2), and (1) was kept as a root node since there
is no branch/discard or store color between the two movs.
Tested-by: Andreas Baierl <ichgeh@imkreisrum.de>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Erico Nunes <nunes.erico@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
ppir_lower_load() and ppir_lower_load_texture() assume that the node
is in the same block as its successors; fix it by cloning each
ld_uni and ld_tex into every block that uses it.
This also reduces register pressure, since the values never cross block
boundaries and thus never appear in the live_in or live_out of any block,
so do it for varyings as well.
Tested-by: Andreas Baierl <ichgeh@imkreisrum.de>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Erico Nunes <nunes.erico@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Const nodes are now cloned for each user, i.e. const is guaranteed to have
exactly one successor, so we can use ppir_do_one_node_to_instr() and
drop insert_to_each_succ_instr()
Tested-by: Andreas Baierl <ichgeh@imkreisrum.de>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Erico Nunes <nunes.erico@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
On commit f6e7de41d7, we started emitting 3DSTATE_LINE_STIPPLE as part
of the non-dynamic state. That gets re-emitted every time we bind a new
VkPipeline. But that instruction is non-pipelined, and it caused a perf
regression of about 9-10% on Dota2.
This commit makes anv_dynamic_state_copy() return a mask with only the
state that has changed when copying it. 3DSTATE_LINE_STIPPLE won't be
emitted anymore unless it has changed, fixing the problem above.
v2: Improve commit message and add documentation about skipped checks
(Jason)
Fixes: f6e7de41d7 ("anv: Implement VK_EXT_line_rasterization")
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
We can statically determine from the disassembly if helper invocations
will be needed, so we can validate the corresponding bit in the
cmdstream and thus avoid printing the bit itself in the decode.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We check for texture ops which calculate derivatives (either explicitly
via dFd* or implicitly) and mark the shader as requiring helper
invocations.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The variables level and start_layer are only initialized if
BUFFER_BIT_DEPTH is set. We assert on them later using the same
check, which should be enough, but GCC 9.1.1 is not convinced, so
let's initialize the variables.
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Code that used it was removed in 4ebe6b2e72 ("tgsi: Drop the SSE2
constants setup that's been dead code since 2011.")
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Initialize `next_batch_addr` and `second_level`. If the batch is well
formed, those values will be overridden; if not, they are as good as
uninitialized garbage.
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The helper check_node_type() is only used when DEBUG is set (in the
function below), but the ASSERTED macro uses NDEBUG. So just guard the
helper with #ifdef. If we see more such cases we might consider an
ASSERTED-like macro for the DEBUG case.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The compiler can't see that d is initialized.
../src/intel/compiler/brw_vec4_nir.cpp: In function ‘int brw::try_immediate_source(const nir_alu_instr*, brw::src_reg*, bool, const gen_device_info*)’:
../src/intel/compiler/brw_vec4_nir.cpp:984:12: warning: ‘d’ may be used uninitialized in this function [-Wmaybe-uninitialized]
984 | d = MAX2(-d, d);
Assert that we expect at least one component -- hence d is going to be
set. That by itself is not enough, so also zero-initialize the
variable.
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
...by copying the implementation of anv_get_absolute_timeout().
Appears to fix a CTS test with 32-bit builds:
GTF-GL46.gtf32.GL3Tests.sync.sync_functionality_clientwaitsync_flush
Fixes: f459c56be6 ("iris: Add fence support using drm_syncobj")
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Rafael Antognolli tracked down a performance gap between i965 and iris
in Synmark2's OglCSDof microbenchmark, noting that iris was performing
substantially more memory reads and writes, with substantially fewer
L3 hits. He suggested that something might be wrong with MOCS, or L3
configs, at which point I came up with a theory...
It would appear that the STATE_BASE_ADDRESS command updates the MOCS
settings for various base addresses even if you don't specify the
"Modify Enable" bit for that address. Until now, we had been setting
only the MOCS for bases we intended to change, leaving the others
"blank" which is MOCS table entry 0, which is uncached.
Most data access has a more specific MOCS (e.g. in SURFACE_STATE),
but scratch access uses the Stateless Data Port Access MOCS from
STATE_BASE_ADDRESS. So this meant all scratch access was uncached.
Improves performance in Synmark2's OglCSDof by 2x, bringing iris
on par with the existing i965 driver.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fix this build error on macOS.
../src/glx/apple/glx_empty.c:158:4: error: void function 'glXQueryGLXPbufferSGIX' should not return a value [-Wreturn-type]
return 0;
^ ~
Fixes: 3dd299c3d5 ("glx: Sync <GL/glxext.h> with Khronos")
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Similarly to before, this didn't properly handle varying structs with
doubles in them.
This doesn't fix any tests, but was noticed while looking at the code.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The old version wasn't as accurate as it could be, and didn't handle
double variables inside structs correctly. Walk the path to compute the
actual components affected.
In combination with the previous commit fixes
KHR-GL45.enhanced_layouts.varying_structure_locations.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This is already done in get_deref_offset() in the common code. We were
adding it twice accidentally.
Fixes KHR-GL45.enhanced_layouts.varying_array_locations.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Some users of this function (e.g. GS inputs) currently only work with
constant offsets. We got lucky since all the tests used an array index
of 0, so the non-constant part was always 0. But we still need to handle
this.
This doesn't fix any CTS test, but was noticed while debugging one.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Now that LLVM 9 will be released soon, we will only support
LLVM 8, 9 and master (10).
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Fixes errors seen with eglSetBlobCacheFuncsANDROID on Android when
running dEQP that terminates and reinitializes a display.
Fixes: 6f5b57093b "egl: add support for EGL_ANDROID_blob_cache"
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
We were always resolving the buffer as if we were accessing it via
CPU maps, which don't understand any auxiliary surfaces. But we often
copy to a temporary using BLORP, which understands compression just
fine. So we can avoid the resolve, and accelerate the copy as well.
Fixes: 9d1334d2a0 ("iris: Use copy_region and staging resources to avoid transfer stalls")
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
This doesn't work for compressed formats, as the source texture and
temporary texture would have different block sizes. (Forcing the driver
to always take the GPU path would expose the bug.) Instead, just use
the source format for the temporary, and let blorp_copy deal with
overrides.
The one case where we can't do this is ASTC, because isl won't let us
create a linear ASTC surface. Fall back to the CPU paths there for now.
Fixes: 9d1334d2a0 ("iris: Use copy_region and staging resources to avoid transfer stalls")
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
Gen11 stores the fast clear color in an "indirect clear buffer", as
a packed pixel value. Gen9 hardware stores it as a float or integer
value, which is interpreted via the format. We were trying to store
that in a buffer, for similarity with Icelake, and MI_COPY_MEM_MEM
it from there to the actual SURFACE_STATE bytes where it's stored.
This unfortunately doesn't work for blorp_copy(), which does bit-for-bit
copies, and overrides the format to a CCS-compatible UINT format. This
causes the clear color to be interpreted in the overridden format.
Normally, we provide the clear color on the CPU, and blorp_blit.c:2611
converts it to a packed pixel value in the original format, then unpacks
it in the overridden format, so the clear color we use expands to the
bits we originally desired.
However, BLORP doesn't support this pack/unpack with an indirect clear
buffer, as it would need to do the math on the GPU. On Gen11+, it isn't
necessary, as the hardware does the right thing.
This patch changes Gen9 to stop using an indirect clear buffer and
simply do PIPE_CONTROLs with post-sync write immediate operations
to store the new color over the surface states for regular drawing.
BLORP continues streaming out surface states, and handles fast clear
colors on the CPU.
Fixes: 53c484ba8a ("iris: blorp using resolve hooks")
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
For renderable surfaces, we allocate SURFACE_STATEs for each bit in
res->aux.possible_usages. Sampler views use res->aux.sampler_usages.
When pinning buffers, we call surf_state_offset_for_aux() to calculate
the offset to the desired surface state. surf_state_offset_for_aux()
took an aux_modes parameter, which should be one of those two fields.
However...it was not using that parameter. It always used the broader
res->aux.possible_usages field directly.
One of the callers, update_clear_value(), was passing incorrect masks
for this parameter. It iterated through the bits in order, using
u_bit_scan(), which destructively modifies the mask. So each time we
called it, the count of bits before our selected mode was 0, which would
cause us to always update the SURFACE_STATE for ISL_AUX_USAGE_NONE,
rather than updating each in turn. This was hidden by the earlier bug
where surf_state_offset_for_aux() ignored the parameter.
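For illustration, a minimal standalone sketch of the pitfall (with a
hypothetical bit_scan() standing in for u_bit_scan(), and a simplified
index computation standing in for surf_state_offset_for_aux()):

#include <stdio.h>
#include <strings.h> /* ffs */

/* Stand-in for a u_bit_scan-style helper: returns the index of the lowest
 * set bit and clears it from *mask -- i.e. it is destructive. */
static int
bit_scan(unsigned *mask)
{
    int i = ffs(*mask) - 1;
    *mask &= ~(1u << i);
    return i;
}

/* Count how many enabled aux modes precede 'usage'; this is only correct
 * if 'aux_modes' still contains *all* possible usages at the call site. */
static unsigned
surf_state_index(unsigned aux_modes, unsigned usage)
{
    unsigned count = 0;
    for (unsigned i = 0; i < usage; i++)
        count += (aux_modes >> i) & 1;
    return count;
}

int
main(void)
{
    const unsigned possible_usages = 0xb; /* modes 0, 1 and 3 enabled */
    unsigned scan = possible_usages;

    while (scan) {
        int usage = bit_scan(&scan);
        /* BUG: 'scan' has already lost the bits we visited, so the
         * index below is computed against a shrunken mask. */
        printf("buggy index for mode %d: %u\n",
               usage, surf_state_index(scan, usage));
        /* Correct: pass the unmodified mask instead. */
        printf("fixed index for mode %d: %u\n",
               usage, surf_state_index(possible_usages, usage));
    }
    return 0;
}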
Fixes: 7339660e80 ("iris: Add aux.sampler_usages.")
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
This is genxml, we can compile out this code.
Fixes: 2660667284 ("iris/gen8: Re-emit the SURFACE_STATE if the clear color changed.")
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
Rather than passing through the transformed gl_Position, we can use the
hardware-level varying for this, which will correctly handle
gl_FragCoord.w
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The offset is added to the base address, so we need to subtract it from
the size to maintain the same end address and thus prevent a buffer
overflow:
end_address = start_address + size
start_address' = start_address + offset
size' = size - offset
end_address' = start_address' + size'
= (start_address + offset) + (size - offset)
= (start_address + size) + (offset - offset)
= start_address + size
= end_address
QED.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We need a special path for special varyings so we parse them correctly
instead of throwing an error when they inevitably point to bad memory.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The hardware doesn't care, and a lot of Panfrost code relies on an
oversized buffer. The important part is that (stride *
padded_num_vertices) is no greater than size, which we'll need to check
once we validate instancing.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We don't necessarily need to dump the contents, but having the stub with
the address is useful.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
I don't know who thought this mask was a good idea but unfortunately it
must have been me.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
If we permit more $whatever through than the shader needs, that's a bit
of a waste, but it isn't an error.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We don't actually care about the *contents* of the index buffer, but we
would rather like to ensure it is present and of the correct size.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We can infer these stats in many cases from the disassembly, so we
should try to sanity check where we can. We may need to be fuzzy about
the analysis, since it only gives us a bound, and we don't mind if the
shader doesn't use it fully.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We could do better by forcing the checks to *equal* zero (right now, an
indeterminate answer will pass the checks), but this is a start to guard
against some egregious cases.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
There are a number of conditions we need to test for to statically check
for TILE_RANGE_FAULTs, but once these checks are in order, we can print
as-is.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
These tags need to match up with what's actually described by the MFBD,
so check this. Once this is checked, since the type and contents of the
FBD are obvious from printing above, there's no need to explicitly mark
off the framebuffer line.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
For shaders using exclusively direct attribute/varyings, we can work
this out statically. For shaders with indirect access, we just set an
upper bound of 16 (the max attributes/varyings we support) and the
actual count will be reported regardless.
We proceed similarly for textures/samplers, as well as for UBOs. While
UBOs can be *indexed* indirectly, the *UBO itself* -- which is what we
count in the shader descriptor (rather than the UBO descriptors) -- is
statically determinable.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This one is a little tricky, but the idea is that:
r16-r23 are always uniforms
r8-r15 are sometimes work, sometimes uniforms...
...but as work, they are always written before use
...and as uniforms, they are never written before use
So we use that heuristic to determine the count to feed the machine.
We'll record work register use in the next commit.
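A rough sketch of that heuristic over a hypothetical instruction
representation (not the actual compiler data structures):

#include <stdbool.h>

/* Hypothetical, simplified instruction view: which registers an
 * instruction reads and writes, as bitmasks over r0..r23. */
struct insn {
    unsigned read_mask;
    unsigned write_mask;
};

/* Heuristic from above: r16-r23 are always uniforms, and a register in
 * r8..r15 is a uniform exactly when it is read while never having been
 * written, so the lowest such register marks the start of the uniform
 * window. */
static unsigned
count_uniform_registers(const struct insn *prog, unsigned count)
{
    unsigned written = 0;
    unsigned lowest_uniform = 16; /* r16 is always a uniform */

    for (unsigned i = 0; i < count; i++) {
        for (unsigned r = 8; r < 16; r++) {
            bool reads = prog[i].read_mask & (1u << r);
            bool was_written = written & (1u << r);
            if (reads && !was_written && r < lowest_uniform)
                lowest_uniform = r;
        }
        written |= prog[i].write_mask;
    }

    /* Registers from lowest_uniform up to r23 are treated as uniforms. */
    return 24 - lowest_uniform;
}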
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Panfrost is the only user of the macro; we are better off expanding than
having random stuff in nir.h.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Right now it always returns zero, but as of:
commit a48a6b8a40
Author: Adam Jackson <ajax@redhat.com>
Date: Tue Nov 14 15:13:05 2017 -0500
glx: Prepare driFetchDrawable for no-config contexts
We were hoping it would return true if the drawable could actually be
looked up. It wasn't, so that didn't go very well. With the most recent
update to <GL/glxext.h> glXQueryGLXPbufferSGIX (correctly) returns void,
so there's no longer anything else besides driFetchDrawable that depends
on the return value from __glXGetDrawableAttribute.
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Minor fixups required to keep the prototypes matching and to remove
mention of retired enums.
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
We added this utility for vulkan where all timeouts are given as
uint64_t values. We can switch from signed to unsigned as this is the
only user and if we ever deal with signed integers somewhere else
we'll have to be careful to use the corresponding
timespec_(add|sub)_msec and always pass absolute values.
v2: Forgot to drop the test calling add_nsec() with a negative number
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reported-by: Juan A. Suarez Romero <jasuarez@igalia.com>
Fixes: d2d70c3bb5 ("util: add a timespec helper")
Acked-by: Daniel Stone <daniels@collabora.com>
This fixes a regression introduced with scan&reduce operations
on GFX10. Note that some subgroups CTS still fail on GFX10 but
I assume it's a different issue.
This fixes dEQP-VK.subgroups.arithmetic.*.subgroupexclusive*.
Fixes: 227c29a80d "amd/common/gfx10: implement scan & reduce operations"
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This commit fixes current crashes with Vulkan applications on Android.
Fixes: c0376a1234 "util: add anon_file.h for all memfd/temp file usage"
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
v2: Pass through to oscreen rather than faking it (review from Marek).
Fixes: 0346b70083 ("gallium/screen: Add pipe_screen::resource_get_param")
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
In compressed_tex_sub_image we can only obtain the tex_object after
compressed_subtexture_target_check has been validated for TEX_MODE_CURRENT,
so if the target is wrong the error is raised to the user.
This completes the fix for the regression introduced in "mesa: refactor
compressed_tex_sub_image function", addressing the remaining failing tests:
dEQP-GLES3.functional.negative_api.texture.compressedtexsubimage3d
dEQP-GLES31.functional.debug.negative_coverage.get_error.texture.compressedtexsubimage3d
v2: Fix warning that texObj might be used uninitialized (Gert Wollny)
Fixes: 7df233d68d ("mesa: refactor compressed_tex_sub_image function")
Reviewed-By: Gert Wollny <gert.wollny@collabora.com>
Expose configs when allow_fp16_configs has been enabled and
DRI_LOADER_CAP_FP16 is set in the loader.
Also, make kms_swrast_dri respect format bpp, to allow for allocating
buffers wider than 32 bpp.
Make fp16 opt-in for gallium.
Signed-off-by: Kevin Strasser <kevin.strasser@intel.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
Expose configs when allow_fp16_configs has been enabled and
DRI_LOADER_CAP_FP16 is set in the loader.
Also, define a new dri configuration option so users can disable exposure of
fp16 formats. Make fp16 opt-in for i965.
Signed-off-by: Kevin Strasser <kevin.strasser@intel.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
Add dri formats for RGBA ordered 64 bpp IEEE 754 half precision floating
point. Leverage existing offscreen render support for
MESA_FORMAT_RGBA_FLOAT16 and MESA_FORMAT_RGBX_FLOAT16.
Signed-off-by: Kevin Strasser <kevin.strasser@intel.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
In the case that __DRI_ATTRIB_FLOAT_BIT is set in the dri config, set
EGL_COLOR_COMPONENT_TYPE_FLOAT_EXT in the egl config. Add a field to the
platform driver visual to indicate if it has components that are in floating
point form.
Signed-off-by: Kevin Strasser <kevin.strasser@intel.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
In order to handle pixel formats that consist of floating point data, enable
the floatMode field in the dri config, and set __DRI_ATTRIB_FLOAT_BIT in the
render type attribute.
Signed-off-by: Kevin Strasser <kevin.strasser@intel.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Change dri2_add_config to take arrays of shifts and sizes, and compare with
those set in the dri config. Convert all platform driver masks
to shifts and sizes.
In order to handle older drivers, where shift attributes aren't available,
we fall back to the mask attributes and compute the shifts with ffs.
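As a rough sketch of that fallback (the mask-to-shift computation only,
with hypothetical names; not the actual dri2_add_config code):

#include <stdio.h>
#include <strings.h> /* ffs */

/* Fallback described above: when a driver only reports channel masks,
 * derive the shift (bit offset from the lsb) with ffs(); a mask of 0
 * means the channel is absent, represented here as shift -1. */
static int
mask_to_shift(unsigned mask)
{
    if (mask == 0)
        return -1;
    return ffs(mask) - 1;
}

int
main(void)
{
    /* Classic ARGB8888 masks as an example. */
    unsigned masks[4] = { 0x00ff0000, 0x0000ff00, 0x000000ff, 0xff000000 };

    for (int i = 0; i < 4; i++)
        printf("channel %d: shift %d\n", i, mask_to_shift(masks[i]));
    return 0;
}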
Signed-off-by: Kevin Strasser <kevin.strasser@intel.com>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
bitcount is free from the pipe header dependencies that make u_math.h hard
to include by non-gallium specific code, so move it to bitscan.h. bitscan.h
is included by u_math.h so existing references will continue working.
Signed-off-by: Kevin Strasser <kevin.strasser@intel.com>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
The existing mask attributes can only support up to 32 bpp. Introduce
per-channel SHIFT attributes that indicate how many bits, from lsb towards
msb, the bit field is offset. A shift of -1 will indicate that there is no
bit field set for the channel.
As old loaders will still be looking for masks, we set the masks to 0 for
any formats wider than 32 bpp.
Signed-off-by: Kevin Strasser <kevin.strasser@intel.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
The driver checks dri config options and loader caps to filter out certain
formats during config creation. Fold 4 call sites under a single helper
function.
Signed-off-by: Kevin Strasser <kevin.strasser@intel.com>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
This debug option allows vkGet[Instance/Device]ProcAddr() to succeed
even if the extension associated with the requested entrypoint was not
enabled.
This has come in handy in a few instances when debugging VR
applications, so I thought it would be good to have a cleaned up version
upstreamed.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This should fix glDepthRangef issues. Eventually, something similar
should allow implementing the depth bounds test.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
A pair of special flags can turn the texture/sampler handle fields into
register selects. This means code like:
texture(uTextures[hr28.w], ...)
can be compiled to something like:
texture ..., fsampler[hr28.w], texture[hr28.w]
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This data structure is shared in other parts of the texture word, so
let's streamline printing.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This allows nodes to be unsigned and prevents a class of weird
signedness bugs identified by Coverity.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This path shouldn't be possible for in-spec shaders, but let's be
defensive. (Because security, right? Mostly because Coverity.)
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This better matches all the other atomic intrinsics such as those for
SSBOs and shared variables where the sign is part of the intrinsic
opcode. Both generators (GLSL and SPIR-V) know the sign from the type
of the image variable or handle. In SPIR-V, signed min/max are separate
opcodes from unsigned.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
We can smush this into one line per record as per usual. We still need
more validation and cleanup, especially around instancing. But
for LINEAR records, it works okay already.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This consolidates texture format and dimensionality into something simple:
tiled rgba8_unorm.rgb1: 512x512
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Textures of a smaller dimension don't need higher dimensions printed.
This allows us to be more compact, while enforcing verification that
higher dimensions must be zero.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
unknown3A I think I've actually seen on T6xx but.. we'll see what
happens in traces going forward. We don't want the zero noise normally,
and if they show up in the wild, we want to draw attention to them.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This dramatically reduces visual clutter: now an entire
attribute/varying record looks something like:
rgba32f attribute_0[16].bgra;
which is equivalent to the raw structure:
{
.index = 0,
.format = MALI_FORMAT_RGBA32F,
.swizzle = (MALI_CHANNEL_BLUE << 9) | ....,
.src_offset = 16,
}
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We want to make sure we don't access a component in the swizzle that
doesn't exist in the format, since that is (as far as I know) undefined.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We've never seen them, so if they come up in trace, we want to draw
attention to that.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Varying discard is not used by Panfrost, but the blob uses it sometimes
to have some padding in the varyings table, probably to minimize
per-draw overhead. (...We should maybe consider this ourselves!)
Let's check for this and ensure the rest of the record is consistent
with a discarded varying.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It's a legacy GL thing... we don't really need to handle it *right* now,
but we shouldn't crash..
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This CAP controls a desktop-only extension. If the corresponding support
exists in the hardware, we don't know how to use it.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Subtle issue masked by how we emitted SET_VALUE jobs, but this case can
and does occur, so let's fix it.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This normalizes the printed format. It also makes it easier for the
future when we may introduce semantic _warn and _error handlers.
A tripped zero is essentially a hazard to check for.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
If this bit is clear, MFBD preload will be enabled, and you.. don't want
that. (At least, when the bit is clear, the old contents of the
framebuffer will be preserved. I'm assuming this is what "MFBD preload"
refers to in kbase.)
Validate that this bit is always set.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
There is no "chunknown" structure; that part of the union is an artefact
from falsely believing vertex/tiler MFBDs could have render targets
attached (they can't). These are just plain old AFBC fields, and if
there is no AFBC, it's an error to set these fields.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
For our purposes of driver debugging, the contents of uniform buffers
are rarely interesting; we're more concerned about the metadata setting
them up.
We do need to be careful to validate the sizes of both uniforms and
uniform buffers.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Many structures in the command stream have a GPU address and size
determined statically. We should check that the pointers we are passed
are valid and the buffers they point to are big enough for the given
size. If they're not, an MMU fault would be raised.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Verify sizes / masks / etc against something logical to cull down the
trace space and automatically guard against a number of potential
hazards.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
While the algorithm for computing the header size has been correct for a
while, we used a major hack to conservatively guess the body size. Let's
scrap that and figure out the algorithm we actually need to use to be
bit-identical with what the hardware expects.
We do have to be careful to add the header size to the total computed BO
size.
It's not clear how big the polygon list needs to be in practice -- but
it has to be somewhat bigger than the polygon list itself. This needs
more investigation. If we size the polygon list exactly based on the
polygon_list_size field, we get faults like:
[ 1224.219886] panfrost ff9a0000.gpu: Unhandled Page fault in AS0 at VA 0x000000001BDE8000
Reason: TODO
raw fault status: 0x660003C3
decoded fault status: SLAVE FAULT
exception type 0xC3: TRANSLATION_FAULT_LEVEL3
access type 0x3: WRITE
source id 0x6600
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The other commented lines just add noise/entropy we don't want, and can
in fact crash the trace due to asserts failing.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The polygon sizes are computed from the width/height/flags, so we can
reverse the computation and verify that the two computation algorithms
are bit-identical. If they are, we can omit the computed fields.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We have the BOs available; ensure that the bounds specified in the
command stream are actually the correct bounds.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This allows the caller to call track_mmap multiple times for the same
gpu_va for the purpose of updating the mmap. This is used to trace
invisible BOs with kbase and doesn't apply to native traces.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This allows us to catch a class of errors (for negative offsets, etc)
automatically.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The on-the-wire representation of workgroups is not 1:1 to the decoded
Gallium-level workgroups (there are multiple valid encodings; see the
previous commit). Nevertheless, since we're now bit-identical in packing
vs the blob, we can check for a canonical form and only print the
verbose trace if we fail the canonical form.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This is a blob quirk; as far as I know, the hardware doesn't care.
But we're trying to be bit-identical to take as much entropy out of
traces as possible, so let's introduce the quirk.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The routines in this file have no dependency on Gallium. Let's share
them so they can be used for a theoretical future Vulkan driver or, more
immediately, consulted when tracing.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It's obvious that it's linked by virtue of us printing the struct it
links against. No need to repeat ourselves.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The last remaining bits were ARB_gl_spirv and ARB_spirv_extensions.
Note that it is really likely that we can enable it for some Gen7 (as
4.5 was), but it has not been tested yet.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
To help make sure we are running tests with the ideal number of threads,
print load stats to make it obvious when there's a problem with
utilization.
This will be especially useful when we run tests on a wider variety of
devices.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Some runners may be configured such that the qemu binary might not be
available by the time we need to start running commands within the
chroot.
So make sure that it's there to avoid surprising problems in that case.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
A number of things can go wrong when building the rootfs from within a
non-native chroot, so make sure to print the bootstrap.log so we can
tell what's going on.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It's able to run tests in parallel, fully utilizing the HW and
considerably shortening the time it takes.
Needed to disable tests on RK3288 for now because Volt doesn't support
armhf yet, though this should be fixed soon.
Tests are now run with --deqp-gl-config-name=rgba8888d24s8ms0, so we are
hitting a few more failures in tests that previously were being skipped.
The time to run the tests decreases from around 8 minutes to 1:45
minutes, allowing for extending coverage without increasing CI times too
much.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This gives a nice boost, +20% at this time on my Vega 56. Shader
ballot should be enabled by default at some point but it reduces
performance a bit (-6%) with Wolfenstein II. Enable it only for
Youngblood at the moment, like what we did for Talos in the past.
As a bonus point, it gets rid of some minor artifacts that only
happen when ballot is disabled for some reason.
Cc: 19.2 <mesa-stable@lists.freedesktop.org>
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Loops like:
block block_0:
vec1 32 ssa_2 = load_const (0x00000020)
vec1 32 ssa_3 = load_const (0x00000001)
loop {
vec1 32 ssa_7 = phi block_0: ssa_3, block_4: ssa_9
vec1 1 ssa_8 = ige ssa_2, ssa_7
if ssa_8 {
break
} else {
}
vec1 32 ssa_9 = iadd ssa_7, ssa_1
}
Were treated as having more than one iteration and produced wrong
results after unrolling; however, such a loop will exit during the
first iteration if not unrolled.
So we check whether the loop will actually loop.
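A minimal sketch of the idea, reduced to the "if (limit >= induction)
break" pattern in the example above (hypothetical helper, not the actual
NIR loop-analysis code):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* If the break condition already holds for the initial value of the
 * induction variable, the loop exits on its first iteration and must
 * not be treated as having multiple iterations by the unroller. */
static bool
loop_actually_loops(int32_t initial_value, int32_t limit)
{
    bool breaks_first_iteration = limit >= initial_value;
    return !breaks_first_iteration;
}

int
main(void)
{
    /* Values from the NIR snippet: ssa_3 = 1 (initial), ssa_2 = 0x20. */
    printf("loops more than once: %s\n",
           loop_actually_loops(1, 0x20) ? "yes" : "no");
    return 0;
}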
Fixes tests/shaders/glsl-fs-loop-while-false-02.shader_test
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
The comments say that we should remove a continue if it is the last
instruction in a loop; however, we remove any kind of jump.
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Otherwise hangs are possible. This register was already set for
GS and NGG.
Fixes: 5eaed7ecfc "radv/gfx10: enable support for NAVI10, NAVI12 and NAVI14"
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Should take the max of the 2.
Fixes: ea337c8b7e "radv/gfx10: fix VS input VGPRs with the legacy path"
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
An application quitting before destroying its GL context and
binding a NULL context might still have a radeonsi compiler thread
running and potentially still accessing the types.
Therefore take a reference for the duration of the thread's lifetime.
v2: Only ref the glsl types, the builtins should be used by the time
shader data gets to a gallium driver.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
The issue we're running into when running CTS is that glsl types are
deleted while builtins depending on them are not.
This happens because on one hand we have glsl types ref counted, but
builtins are not. Instead builtins are destroyed when unloading libGL
or explicitly calling glReleaseShaderCompiler().
This change removes almost entirely any dealing with glsl types
ref/unref by letting the builtins deal with it instead. In turn we
introduce a builtin ref count mechanism. Each GL context takes a
reference on the builtins when compiling a shader for the first time.
It releases the reference when the context is destroyed. It can also
explicitly release those when glReleaseShaderCompiler() is called.
Finally we also take a reference on the glsl types when loading libGL
to avoid recreating glsl types too often.
v2: Ensure we take a reference if we don't have one in link step (Lionel)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110796
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
The compute paths in vl are a bit AMD-specific. For example, they (on
nouveau), try to use a BGRX8 image format, which is not supported.
Fixing all this is probably possible, but since the compute paths aren't
in any way better, it's difficult to care.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111213
Fixes: 9364d66cb7 ("gallium/auxiliary/vl: Add video compositor compute shader render")
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
ra_get_best_spill_node is what other users of the mesa register
allocator use.
Switching to it now also fixes an infinite loop issue with ppir regalloc
with the ppir control flow patchset, and also provides a small gain over
the previous heuristic in the number of spilled nodes when testing with
shader-db.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
The GLES2 CTS takes about 8 minutes of total runtime (at parallelism 4 it
is ~2 minutes in the test stage if runners are free), while GLES3 takes
about 25. Since the GLES3 run is pretty expensive, just do a cheap
touch test of 1 out of every 10 tests in the test list on MRs, until
we can get the runtime down.
v2: Drop the full run for now until we can bring runtime down or bring
up a dedicated mesa runner.
Reviewed-by: Eric Engestrom <eric@engestrom.ch> (v1)
Reviewed-By: Gert Wollny <gert.wollny@collabora.com> (v1)
This fixes the regression introduced in "mesa: refactor
compressed_tex_sub_image function" that started to crash
KHR-GLES2.texture_3d.compressed_texture.negative_compressed_tex_sub_image
Fixes: 7df233d68d ("mesa: refactor compressed_tex_sub_image function")
Reviewed-by: Eric Anholt <eric@anholt.net>
In a descriptor set inline uniform blocks don't use up any bindings.
However, the presence of any inline uniform blocks does require the
use of the descriptor buffer, which takes up one binding.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This pass expects the shader to be in LCSSA form.
The algorithm is based on 'The Simple Divergence Analysis' from
Diogo Sampaio, Rafael De Souza, Sylvain Collange, Fernando Magno Quintão Pereira.
Divergence Analysis. ACM Transactions on Programming Languages and Systems (TOPLAS)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
ACO depends on LCSSA phis for divergent booleans to work correctly.
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The implementation introduced in "tgsi_to_nir: be careful about not
losing any TGSI properties silently (v2)" updates all the TGSI properties,
but it didn't take into account that the shader_info structure uses a union
to store the different attributes for each shader stage.
Now we only update the attributes if they affect the current shader stage,
avoiding overwriting members of the union that belong to other stages.
The previous behaviour created hundreds of regressions in v3d.
For example, TGSI_PROPERTY_VS_BLIT_SGPRS_AMD was overwriting the
same position used by TGSI_PROPERTY_CS_FIXED_BLOCK_DEPTH.
Fixes: e300365197 ("tgsi_to_nir: be careful about not losing any TGSI properties silently (v2)")
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
If a drawbuffer is an fbo without an attachment then its 'Height' will be zero,
and we have to take its 'DefaultGeometry.Height' into account.
Fixes on softpipe (with the exception of tests that use multisample):
dEQP-GLES31.functional.fbo.no_attachments.*
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Create a separate SURFACE_STATE for render target reads in order to support
non-coherent framebuffer fetch on Broadwell.
We also need to resolve the framebuffer in order to support CCS_D.
v2: Add outputs_read check (Kenneth Graunke)
v3: 1) Import Curro's comment from get_isl_surf
2) Rename get_isl_surf method
3) Clean up allocation in case of failure
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This will be used in the next patches to support non-coherent
framebuffer fetch on Broadwell.
v2: Fix comment (Kenneth Graunke)
v3: 1) Fix a few nits (Caio)
2) Add comment (Caio)
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
When building Mesa against a recent LLVM 10 with C++11, the build fails
if the AMD common code is built as well due to "std::index_sequence"
being undeclared.
LLVM requires a minimum of C++14.
Signed-off-by: Kai Wasserbäch <kai@dev.carbon-project.org>
Acked-by: Eric Engestrom <eric@engestrom.ch>
The kernel now supports madvise ioctl to indicate which BOs can be freed
when there is memory pressure. Mark BOs purgeable when they are in the
BO cache. The BOs must also be munmapped when they are in the cache or
they cannot be purged.
We could optimize by avoiding the madvise ioctl on older kernels once the
driver version bump lands, but it's probably not worth it given the other
driver features also being added.
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Combine compressed_tex_sub_image, compressed_tex_sub_image_error and
compressed_tex_sub_image_no_error in a single function.
The added "enum tex_mode mode" parameter allows to implement the
DSA / non-DSA variants and their error/no_error combination.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
I took the liberty of enabling dfsm even though I don't have benchmark
results yet, but it seems Raven-like.
The rest is from radeonsi.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
This only appears to happen on Raven2.
Possible way to reproduce:
resource_get_handle(WINSYS_HANDLE_TYPE_KMS) --> sets is_shared = true
resource_get_handle(WINSYS_HANDLE_TYPE_DMABUF) --> fail
Cc: 19.1 19.2 <mesa-stable@lists.freedesktop.org>
This shrinks the table, avoids needing to update the table with NULL
entries on every MESA_FORMAT addition, and removes a surprising,
non-unit-tested format number ordering dependency.
Acked-by: Jose Fonseca <jfonseca@vmware.com>
Reviewed-by: Charmaine Lee <charmainel@vmware.com>
Now that SVGA doesn't have a table that has to be in PIPE_FORMAT
order, we can let the enums have whatever values they naturally would
without worrying about holes.
Acked-by: Jose Fonseca <jfonseca@vmware.com>
Reviewed-by: Charmaine Lee <charmainel@vmware.com>
Now that we're using the array initializers, we don't need to manually
fill out all these stub entries.
Produced with "sed -i '/.*INVALID.*INVALID.*INVALID/d'
src/gallium/drivers/svga/svga_format.c"
Acked-by: Jose Fonseca <jfonseca@vmware.com>
Reviewed-by: Charmaine Lee <charmainel@vmware.com>
By using the [ ] = {} array initializer syntax, we no longer need the
entries to be listed in PIPE_FORMAT_* value order. This means that
people adding new gallium formats don't need to cargo-cult changes to
this driver or regress that non-unit-tested requirement.
While I'm here, drop the lines for formats that no longer exist (the
numbered ones in the table).
Acked-by: Jose Fonseca <jfonseca@vmware.com>
Reviewed-by: Charmaine Lee <charmainel@vmware.com>
Seemed like a sensible cleanup, while I was looking at whether I could
make the table sparse.
To make the svga table not require fixups on every new gallium format,
we may want to change how it's populated.
Acked-by: Jose Fonseca <jfonseca@vmware.com>
Reviewed-by: Charmaine Lee <charmainel@vmware.com>
Rather than using a regalloc based on live intervals, computed hastily
with repeated invocations of a forward-analysis pass, we switch to
computing liveness information on a per-block basis.
Within a given basic block, we compute liveness backwards with a
linear-time algorithm; for common shaders, this may help RA terminate
quicker.
Across blocks, we use a work list (really a work set) and check if we're
making progress. This isn't terribly efficient, but it gets the job
done. Point is, we get the live_in/live_out for each block.
From there, it's simple to rerun the linear-time update algorithm to
compute the interference graph.
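As a rough sketch of the data-flow step described above (hypothetical
block representation, not the driver's actual structures):

#include <stdbool.h>

/* Hypothetical CFG node: per-block use/def bitmasks (computed by a
 * backwards scan inside the block) and successor indices. */
struct block {
    unsigned use;      /* read before written within the block */
    unsigned def;      /* written within the block */
    int succ[2];       /* successor indices, -1 if unused */
    unsigned live_in;
    unsigned live_out;
};

/* Fixed point: live_out(b) = union of live_in over successors,
 * live_in(b) = use(b) | (live_out(b) & ~def(b)). The whole set is
 * re-swept until nothing changes, mirroring the "work set, check if
 * we're making progress" description above. */
static void
compute_liveness(struct block *blocks, int count)
{
    bool progress = true;

    while (progress) {
        progress = false;
        for (int i = count - 1; i >= 0; i--) {
            struct block *b = &blocks[i];
            unsigned out = 0;

            for (int s = 0; s < 2; s++)
                if (b->succ[s] >= 0)
                    out |= blocks[b->succ[s]].live_in;

            unsigned in = b->use | (out & ~b->def);

            if (in != b->live_in || out != b->live_out) {
                b->live_in = in;
                b->live_out = out;
                progress = true;
            }
        }
    }
}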
The benefit of this technique is the ability to ignore "gaps" in
liveness across intermediate blocks that are never executed. On simple
shaders like the loops in glmark, this results in a minor reduction in
register pressure. The motivation was a complex shader in Krita that
failed register allocation due to an unfortunate interaction between
texture pipeline registers and control flow. This shader now compiles
successfully.
total instructions in shared programs: 3439 -> 3438 (-0.03%)
instructions in affected programs: 22 -> 21 (-4.55%)
helped: 1
HURT: 0
total bundles in shared programs: 2077 -> 2076 (-0.05%)
bundles in affected programs: 12 -> 11 (-8.33%)
helped: 1
HURT: 0
total quadwords in shared programs: 3457 -> 3456 (-0.03%)
quadwords in affected programs: 20 -> 19 (-5.00%)
helped: 1
HURT: 0
total registers in shared programs: 341 -> 338 (-0.88%)
registers in affected programs: 9 -> 6 (-33.33%)
helped: 3
HURT: 0
helped stats (abs) min: 1 max: 1 x̄: 1.00 x̃: 1
helped stats (rel) min: 33.33% max: 33.33% x̄: 33.33% x̃: 33.33%
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
If there's a nontrivial swizzle fed into an extra (shortened) argument,
we bail on copyprop. No glmark changes (since it doesn't use fancy
texturing/loads).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It's always been ambiguous which they are, but their primary register is
their output, not their input; therefore, they are loads.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Same issue with liveness analysis. If we store out a vec3, we should not
reference the .w component.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The texture coordinate for a 2D texture could be a vec2 or a vec3,
depending if it's an array texture or not. If it's vec2 (non-array
texture), we should not reference the z component; otherwise, liveness
analysis will get very confused when z is never written.
v2: Fix typo (Ilia).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
If we need to lower a move for a read from a vec2 texture coordinate, we
shouldn't write zw, even incidentally.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Fixes shaders with control flow like:
out = 0;
if (A) {
if (B)
out = texture(A, ...)
} else {
out = texture(B, ...)
}
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The exit block has been 'dangling' in the successors graph, so let's
ensure it's linked in.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
While we already compute the successors array, for backwards data flow
analysis, it is useful to walk the control flow graph backwards based on
predecessors, so let's compute that information as well.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
By adding one more helper to ac_llvm_build, we can also easily keep
vector stores together.
Fixes the
tests/spec/glsl-1.30/execution/fs-large-local-array-vec4.shader_test
piglit test.
Fixes: 74470baebb ("ac/nir: Lower large indirect variables to scratch")
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
PIPE_FORMAT_YV12 is not handled so switching to PIPE_FORMAT_IYUV and
adding back YVU support.
Signed-off-by: James Xiong <james.xiong@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
PIPE_TIMEOUT_INFINITE is unsigned and gets assigned to signed fields
where it ends up as -1. When this reaches the kernel as a timeout it
gets translated as no timeout, which causes the waiting functions to
return immediately and not actually wait for a completion.
This seems to cause unstable results with lima where even piglit tests
randomly fail.
Handle this by setting the signed max value in case of infinite timeout.
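A minimal sketch of the clamping, assuming the all-ones definition of
PIPE_TIMEOUT_INFINITE (the define below is a stand-in, not the gallium
header):

#include <stdint.h>

#define PIPE_TIMEOUT_INFINITE 0xffffffffffffffffull

/* When the gallium timeout is "infinite", don't let the all-ones value
 * wrap to -1 in the kernel's signed timeout field; clamp it to the
 * largest signed value instead. */
static int64_t
to_kernel_timeout(uint64_t pipe_timeout)
{
    if (pipe_timeout == PIPE_TIMEOUT_INFINITE)
        return INT64_MAX;
    return (int64_t)pipe_timeout;
}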
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Quite useless without DCC for LAYOUT_GENERAL.
Fixes: b4dad3afaa Revert "radv: Do not decompress on LAYOUT_GENERAL."
Acked-by: Dave Airlie <airlied@redhat.com>
Causes issues with a bunch of games with DXVK.
Fixes: 50add1b33a "radv: Do not decompress on LAYOUT_GENERAL."
Acked-by: Dave Airlie <airlied@redhat.com>
Control-flow enforcement technology adds a new instruction on x86
processors to denote where indirect jumps can land. GCC automatically adds
the instruction (which encodes as a NOP on older CPUs) to entry points,
but assembler files need it added manually. This adds it to all the
entry points in the mesa x86/x86-64 assembler files.
This will only happen if mesa is built with the -fcf-protection flag
to gcc as some distros are wanting to do.
Acked-by: Eric Anholt <eric@anholt.net>
Fixes errors for some people building Mesa:
../src/panfrost/bifrost/bifrost_sched.c:32:31: error: initializer
element is not constant
const unsigned max_vec2_reg = max_primary_reg / 2;
../src/panfrost/bifrost/bifrost_sched.c:33:31: error: initializer
element is not constant
const unsigned max_vec3_reg = max_primary_reg / 4; // XXX: Do we need
to align vec3 to vec4 boundary?
../src/panfrost/bifrost/bifrost_sched.c:34:31: error: initializer
element is not constant
const unsigned max_vec4_reg = max_primary_reg / 4;
../src/panfrost/bifrost/bifrost_sched.c:35:32: error: initializer
element is not constant
const unsigned max_registers = max_primary_reg +
../src/panfrost/bifrost/bifrost_sched.c:40:28: error: initializer
element is not constant
const unsigned vec2_base = primary_base + max_primary_reg;
../src/panfrost/bifrost/bifrost_sched.c:41:28: error: initializer
element is not constant
const unsigned vec3_base = vec2_base + max_vec2_reg;
../src/panfrost/bifrost/bifrost_sched.c:42:28: error: initializer
element is not constant
const unsigned vec4_base = vec3_base + max_vec3_reg;
../src/panfrost/bifrost/bifrost_sched.c:43:27: error: initializer
element is not constant
const unsigned vec4_end = vec4_base + max_vec4_reg;
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This method returns size_t, but the multiplication is performed on two
integers, leading to overflow rather than type widening.
Noticed by compiling with MSVC, which emits a warning.
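A minimal sketch of the fix (hypothetical function, not the actual
method):

#include <stddef.h>

/* Without the cast, 'width * height * bytes_per_pixel' is evaluated as a
 * 32-bit int multiplication and can overflow before being widened to the
 * size_t return type; casting one operand first makes the whole
 * multiplication happen in size_t. */
static size_t
image_size(int width, int height, int bytes_per_pixel)
{
    return (size_t)width * height * bytes_per_pixel;
}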
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
On Windows, p_atomic_inc_return returns an unsigned long long rather
than the type the pointer refers to, so let's make sure we cast the
result to the right type. Otherwise, we'll trigger a warning about
the wrong format-string for the type.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
There were two incompatible definitions of strcasecmp, which led to a
compiler warning. Let's clean this up by only leaving one of them, and
using that one all the time.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This generates a warning on some 64-bit systems, so let's cast to a
properly sized integer first.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This intentionally-bogus pointer generates a warning on some 64-bit
systems, so let's cast to a properly-sized integer first.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Similarly to the unsigned version, we need to first cast the result to a
suitable integer before negating the number, otherwise we'll trigger a
warning.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
When subslices_delta == 0 and we take the early return,
device->slice_hash is not initialized on GEN11. It then causes a
segfault when going through anv_DestroyDevice, if compiled with
valgrind.
Fixes: 7bc022b4bb ("anv/gen11: Emit SLICE_HASH_TABLE when pipes are unbalanced.")
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
We started honouring the normalized_coords flag in the texture
descriptor, but a bisection revealed that broke RECT textures -- since
we were *also* lowering them in the shader. So just remove the
shader-based lowering, use native RECT textures, and enjoy the nominal
reduction in complexity and performance boost.
Fixes: 3e47a1181b ("panfrost: Add MALI_SAMP_NORM_COORDS flag")
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We only know how to promote aligned accesses, although theoretically we
should be able to promote unaligned to swizzles in the future. Check
this.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We'll want to be smarter about unaligned reads, so let's get this code
all in one place.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Our hardware supports independent (per-RT) blending, but we need to
route those settings through from Gallium.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We'll need multiple branches for MRT, so we can't defer. Also, we need
to track dependencies to ensure r0 is set to the correct value for each
store_output.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Fixes DATA_INVALID_FAULTs with multiple render targets.
We do always allocate space for 4 cbufs just to keep things sane. This
may not be strictly necessary.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We don't have a good way to confirm this, but it parallels the kernel
definitions for MMU faults nicely.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This was supposed to read heap_start. It's the same value but still,
better get this right.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We don't actually have a standalone compiler in-tree yet, but let's get
prepared for when we do.
Signed-off-by: Ryan Houdek <Sonicadvance1@gmail.com>
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The disassembler was updated to move common code with the compiler into
a shared header. Additionally, some new ops and control registers relating
to rounding were added.
Signed-off-by: Ryan Houdek <Sonicadvance1@gmail.com>
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Now that panwrap has gained the ability to trace directly without
dumping to the filesystem, there's no need to lug around this tool.
I can assure you nobody will miss it.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
I don't think the hardware cares, but this adds a lot of noise to traces
that we would rather not need to look at.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Some memory corruption / etc issues led to an accidental "fuzzing" of
the disassembler ;) This uncovered some issues leading to a disassembler
hang, so let's fix that.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We want a defined ABI for tracing; this set of functions should be as
small as strictly necessary to minimize panwrap shenanigans.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This allows us to have multiple spill moves, whereas otherwise for N
spill moves, the first N-1 would be clobbered. Issue found in Krita.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It doesn't... make a ton of sense to need to assert and this routine is
hotter than you might expect. Doesn't matter for release builds, of
course.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
copy_deref for wildcard dereferences requires the same array lengths;
otherwise it leads to a crash in optimizations
like 'nir_opt_copy_prop_vars', because these optimizations expect
'copy_deref' only for arrays with the same lengths.
v2: check was moved to 'try_match_deref' to fix aoa cases
(Jason Ekstrand <jason@jlekstrand.net>)
v3: -fixed comment
-the condition merged with other one
(Jason Ekstrand <jason@jlekstrand.net>)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111286
Signed-off-by: Andrii Simiklit <andrii.simiklit@globallogic.com>
We wire through some shader-db-style stats on the current shader in the
disassembly so we can get a quick estimate of shader complexity from a
trace.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Suggested-by: Rob Clark <robdclark@chromium.org>
We'll want to dump some stats after the shader, and I refuse to use one
teensy little goto.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Have a correct answer to GL_MAX_FRAGMENT_UNIFORM_VECTORS and
GL_MAX_VERTEX_UNIFORM_VECTORS.
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
The flush callback may be called on the same pipe context, and thus
the same stream, from two different threads of execution. However,
etna_cmd_stream_flush{,2}() must not be called on the same stream
from two different threads of execution as that would mess up the
etna_bo refcounting and likely have other ugly side effects.
Fix this by using a reentrant screen lock around the flush callback.
Signed-off-by: Marek Vasut <marex@denx.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
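For illustration, the reentrant-lock approach could look roughly like this (a minimal sketch in plain C; the lock and helper names are hypothetical, not the actual etnaviv code):
#include <pthread.h>
#include <etnaviv_drmif.h>
/* Created once per screen. PTHREAD_MUTEX_RECURSIVE lets the flush callback
 * run even when the calling thread already holds the screen lock, while
 * still serializing flushes issued from other threads. */
static void screen_flush_lock_init(pthread_mutex_t *lock)
{
   pthread_mutexattr_t attr;
   pthread_mutexattr_init(&attr);
   pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
   pthread_mutex_init(lock, &attr);
   pthread_mutexattr_destroy(&attr);
}
/* Flush callback body wrapped by the reentrant lock, so the same stream is
 * never flushed concurrently from two threads. */
static void flush_locked(pthread_mutex_t *lock, struct etna_cmd_stream *stream)
{
   pthread_mutex_lock(lock);
   etna_cmd_stream_flush(stream);
   pthread_mutex_unlock(lock);
}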
The following situation can happen in a multithreaded OpenGL application.
A BO is submitted from etna_cmd_stream #1 with flags set for read.
A BO is submitted from etna_cmd_stream #2 with flags set for write.
This triggers a flush on stream #1 and clears the BO's current_stream
pointer. If at this point stream #2 attempts to queue the BO again, which
does happen, the BO will be added to the submit list twice. The Linux
kernel driver correctly detects this and warns about it with "BO at
index %u already on submit list" kernel message.
However, when cleaning the BO cache in etna_bo_cache_free(), the BO
which was submitted twice will also be free()d twice, thus triggering
the glibc double-free detector.
The fix is easy: even if the BO does not have current_stream set,
iterate over the current stream's list of BOs before adding the BO to it
and verify that the BO is not yet there.
Signed-off-by: Marek Vasut <marex@denx.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
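A rough sketch of the duplicate check in plain C (the submit-list fields are hypothetical and stand in for whatever bookkeeping the stream keeps):
#include <stdbool.h>
#include <stdint.h>
struct etna_bo;
/* Walk the BOs already queued for this stream's submit and bail out on a
 * match, so a BO whose current_stream was cleared by a racing flush is not
 * appended (and later freed) twice. */
static bool stream_has_bo(struct etna_bo * const *bos, uint32_t nr_bos,
                          struct etna_bo *bo)
{
   for (uint32_t i = 0; i < nr_bos; i++)
      if (bos[i] == bo)
         return true;
   return false;
}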
Now that we have accessors for ppir srcs, it's possible to easily
print all srcs and dests while dumping the ppir representation.
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Sometimes we need to walk through ppir_node sources; a common
accessor for all node types will simplify the code a lot.
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
The DRI interface for modifiers with aux data treats the aux data as a
separate plane of the main surface.
When the dri layer requests the plane associated with the aux data, we
save the required information into the dri aux plane image.
Later when the image is used, the dri plane image will be available in
the pipe_resource structure's `next` field. Therefore in iris, we
reconstruct the aux setup from this separate dri plane image when the
image is used.
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reworks:
* If the aux-state is not ISL_AUX_STATE_AUX_INVALID, then use memset
even when memset_value is zero. The hiz buffer initial aux-state
will be set to invalid, and therefore we can skip the memset. But,
for CCS it will be set to ISL_AUX_STATE_PASS_THROUGH, and therefore
the aux data must be cleared to 0 with the memset. Previously we
would use BO_ALLOC_ZEROED with the CCS aux data, so this memset
wasn't required. Now, the CCS aux data may be part of the main
surface. We prefer to not use BO_ALLOC_ZEROED excessively, so the
memset is needed for the CCS case. (Nanley)
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This is not currently required because the hiz buffer is in a separate
buffer, and therefore the offset is 0. If we combine the aux buffer
with the main surface buffer, then the hiz offset may become non-zero.
Suggested-by: Nanley Chery <nanley.g.chery@intel.com>
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Instead of "genX_bits.h" use "genxml/genX_bits.h"
as already done in other similar cases.
Besides being more correct, it also fixes a build error on Android.
Fixes: f0d2923 ("i965/gen11: Emit SLICE_HASH_TABLE when pipes are unbalanced.")
Signed-off-by: Mauro Rossi <issor.oruam@gmail.com>
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
We can't intersect with empty regions.
Fixes: 65ae86b854 ("panfrost: Add support for KHR_partial_update()")
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This is the start of doing CTS tests on merges to Mesa master. We use
the surfaceless platform so that we don't need to bother bringing up
weston or X11. The surface size is kept low to reduce runtime, but
this comes at the cost of many rendering tests skipping due to
too-small render targets (as we see the impact of Mesa on the shared
runner pool, we can reevaluate this and what set of CTS tests we want
to run).
We split the job up across 4 runners (each at 4 llvmpipe threads), so
that the job can load-balance across our shared runners and finish
sooner (since dEQP is very single-thread-performance bound).
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Now that we're running the drivers we build, building with
optimization is important for keeping our runtime down. Shaves about
4 minutes of runtime off of GLES2 CTS of llvmpipe at 64x64.
v2: Only switch meson-main until we enable CTS for other builds
on request by Michel.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
If we don't set DESTDIR, then the DEFAULT_DRIVER_DIR built into the
libraries is correct and we don't need to use LIBGL_DRIVERS_PATH and
friends for CI usage. Incidentally, this moves our installed paths
from /builds/anholt/mesa/install/usr/local/lib (for example) to
/builds/anholt/mesa/install/lib for simplicity.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
If we're hitting the swrast fallback path here, it's probably because
we stumbled across a KMS-only device (such as the ASpeed that some of
our CI runners have) that will then return a NULL driver_name. Don't
crash in that case.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
We get a getDrawableInfo() call in the MakeCurrent path, which
platform_device was handling correctly by returning the pbuffer's
width/height but platform_surfaceless segfaulted for. Reuse
platform_device's implementation.
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
I want to enable CI of llvmpipe out of the meson-main build. So, kick
classic swrast/osmesa to meson-i386, then promote llvmpipe to
meson-main (along with nine, now that classic osmesa isn't keeping it
out of there).
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Midgard has no hardware support for transform feedback, so we simulate
it in software. Lucky us.
What Midgard does do is write out vertex shader outputs to main memory
unconditionally. Fragment shaders read varyings back from main memory;
there's no on-chip storage for varyings. Whether this was a reasonable
design is a question I will not be engaging in this commit message.
What that does mean is that, in some sense, Midgard *always* does
transform feedback unconditionally, and there's no way to turn off
transform feedback. Normally, we would allocate some scratch memory
every frame to store the varyings in an arbitrary format (interleaved
for simplicity), and then feed that scratch to the fragment shader and
discard when the rendering completes.
The only difference now is that sometimes, for some buffers, we use a BO
provided to us by Gallium and a format provided by Gallium, instead of
allocating the memory and choosing the format ourselves. This has some
limitations -- in particular, it only works at vec4 granularity, so a
corresponding GLSL linkage patch is needed to correctly implement
transform feedback for non-vec4 types. Nevertheless, given the hardware
already works in this admittedly-bizarre fashion, transform feedback is
"free". Or, at least, it's no more expensive than any other rendering.
Specifically not implemented is dynamically-sized transform feedback
(i.e. with geometry/tessellation shaders).
Spoiler alert: Midgard has no support for geometry *or* tessellation
shaders, despite advertising support. They get compiled to *massive*
compute shaders. How's that for checkbox compliance?
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
We could probably get away with doing this once per pipe_shader_state
but let's not jump down that rabbit hole quite yet.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
This is a huge hack to workaround incomplete BO flushing logic, but it's
enough for the dEQP transform feedback tests, and doing the resource
management to get this right is out-of-scope for this patch series.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
It doesn't really make sense, since we don't have special texture
coordinate varyings, but it'll make some code simpler for XFB and it
doesn't hurt us, even if I lose a bit of my soul setting it.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
We're just going to compute them in the driver but let's get the
structures set up to handle them. Implementation from v3d.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
If driver-params are required, we really should emit them on every draw
for correctness. And if not required, we should emit a DISABLE so that
un-applied state updates from previous draws don't corrupt the const
state.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Which takes ownership of the stateobj. Useful for streaming state-
objs, to avoid an extra ref/unref.
Worth ~5% at gl_driver2.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Should be no functional change. Next step is to re-arrange various
const state into different stateobjs.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Hoist them out of code-paths that will eventually be called directly for
various a6xx+ const related stateobjs.
This ends up duplicating one constlen check in ir3_emit_vs_consts(), to
avoid what could otherwise be an unnecessary WFI on older gens.
Signed-off-by: Rob Clark <robdclark@chromium.org>
These don't need to be in context, and we'll need them in screen in a
later patch. Plus it's a good cleanup.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Move DP emit to its own function. No functional change, just code
motion to prepare for splitting up const state into multiple state-
objs on a6xx.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Implement ->set_damage_region() to support partial updates.
This is a dummy implementation in that it does not try to merge
damage rects. It also does not deal with distinct regions and instead
picks the largest quad as the only damage rect and generates up to 4
reload rects out of it (the left/right/top/bottom regions surrounding
the biggest damage rect).
We also do not try to reduce the number of draws by passing all quad
vertices to the blit request (that would require extending u_blitter).
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Add a pipe_screen->set_damage_region() hook to propagate
set-damage-region requests to the driver; it's then up to the driver to
decide what to do with this piece of information.
If the hook is left unassigned, the buffer-damage extension is
considered unsupported.
Signed-off-by: Daniel Stone <daniels@collabora.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Add a new DRI2_BufferDamage interface to support the
EGL_KHR_partial_update extension, informing the driver of an overriding
scissor region for a particular drawable.
Based on a commit originally authored by:
Harish Krupo <harish.krupo.kps@intel.com>
renamed extension, retargeted at DRI drawable instead of context,
rewritten description
Signed-off-by: Daniel Stone <daniels@collabora.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Tested-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The intention of KHR_partial_update was not to send the damage back
to the platform but to send the damage to the driver to ensure that the
following rendering could be restricted to those regions.
This patch removes set_damage_region from the egl_dri vtbl and all
the platform_*.c files.
Upcoming patches then add a new dri2 interface for the drivers to
implement.
Signed-off-by: Harish Krupo <harishkrupo@gmail.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Tested-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This refactor will let us more easily use
pipe_screen::resource_get_param as an alternative to
pipe_screen::resource_get_handle.
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Acked-by: Eric Anholt <eric@anholt.net>
This function retrieves individual parameters selected by enum
pipe_resource_param. It can be used as a more direct alternative to
pipe_screen::resource_get_handle.
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Acked-by: Eric Anholt <eric@anholt.net>
The OpenGL ES spec requires that the value of gl_PointSize is clamped
to an implementation-dependent range matching what is advertised by
GL_ALIASED_POINT_SIZE_RANGE. For VC4 this is [1.0, 512.0], but the
hardware won't clamp to the minimum side of the range and won't render
points with a size strictly smaller than 1.0 either, so we need to
clamp manually. For points larger than the maximum size of the range
the hardware clamps automatically.
Fixes piglit test:
spec/!opengl 2.0/vs-point_size-zero
Reviewed-by: Eric Anholt <eric@anholt.net>
The OpenGL ES spec requires that the value of gl_PointSize is clamped
to an implementation-dependent range matching what is advertised by
GL_ALIASED_POINT_SIZE_RANGE. For V3D this is [1.0, 512.0], but the
hardware won't clamp to the minimum side of the range and won't render
points with a size strictly smaller than 1.0 either, so we need to
clamp manually. For points larger than the maximum size of the range
the hardware clamps automatically.
Fixes piglit test:
spec/!opengl 2.0/vs-point_size-zero
Reviewed-by: Eric Anholt <eric@anholt.net>
The OpenGL and OpenGL ES specs require that implementations clamp the
value of gl_PointSize to an implementation-dependent range. This pass
is useful for any GPU hardware that doesn't do this automatically
for either one or both sides of the range, such as V3D.
v2:
- Turn into a generic NIR pass (Eric).
- Make the pass work before lower I/O so we can use the deref variable
to inspect if we are writing to gl_PointSize (Eric).
- Make the pass take the range to clamp as parameter and allow it
to clamp to both sides of the range or just one side.
- Make the pass report progress.
v3:
- Fix copyright header (Eric)
- use fmin/fmax instead of bcsel to clamp (Eric)
Reviewed-by: Eric Anholt <eric@anholt.net>
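As a sketch of what the clamp ends up emitting in NIR builder terms (assuming `b` is the builder, `psiz` is the value being written to gl_PointSize, and `clamp_min`/`clamp_max`/`min_size`/`max_size` stand for the options passed to the pass; the names are illustrative, not the pass's actual variables):
nir_ssa_def *clamped = psiz;
if (clamp_min)
   clamped = nir_fmax(&b, clamped, nir_imm_float(&b, min_size));
if (clamp_max)
   clamped = nir_fmin(&b, clamped, nir_imm_float(&b, max_size));
/* ... then store 'clamped' to the gl_PointSize deref instead of 'psiz'. */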
v2:
- Fix incremental update of the const offset when we need to emit a sequence
with more than one write because of the writemask.
- Do not move the tmu write emission to a separate helper.
v3:
- Get the store writemask before the loop, use ffs to get the first component
to write and clear writemask bits as we process the components (Eric).
- Simplified the code that figured out the number of components for the TMU
config based on the number of tmu writes for stores and atomics.
v4:
- Code clean-ups (Eric).
Fixes:
KHR-GLES31.core.shader_image_load_store.advanced-cast-cs
KHR-GLES31.core.shader_image_load_store.advanced-cast-fs
KHR-GLES31.core.shader_storage_buffer_object.advanced-switchBuffers-cs
KHR-GLES31.core.shader_storage_buffer_object.advanced-switchPrograms-cs
KHR-GLES31.core.shader_storage_buffer_object.basic-operations-case1-cs
Reviewed-by: Eric Anholt <eric@anholt.net>
When we implement write masks on store operations we might need to
emit multiple write sequences for a given store intrinsic. To make
that easier, let's split the emission of the tmud instructions to
their own block after we are done with the code that only needs to
run once no matter how many write sequences we need to emit.
Reviewed-by: Eric Anholt <eric@anholt.net>
If the current job has a sequence of draw calls involving SSBOs and/or
shader images, we would flush the job in between each draw call.
With this change, we won't flush the current job and we rely on the
application inserting correct barriers by issuing glMemoryBarrier()
when needed.
v2 (Eric):
- When mapping a buffer for writing, we always need to flush.
Reviewed-by: Eric Anholt <eric@anholt.net>
PIPE_BARRIER_UPDATE is defined as:
PIPE_BARRIER_UPDATE_BUFFER | PIPE_BARRIER_UPDATE_TEXTURE
Which means we were flushing for any flags other than these two, but
this was intended to only flush for ssbos and images.
Actually, the driver automatically flushes jobs as we need, including
writes/reads involving SSBOs and images, so we don't really need to
flush anything when the program emits a barrier. However, this may
lead to excessive flushing in some cases, so we will soon change this
to avoid automatic flushing of the current job for SSBOs and images,
meaning that we will rely on the application to emit correct memory
barriers for these that we should make sure to process here.
Reviewed-by: Eric Anholt <eric@anholt.net>
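A sketch of the intended check using the gallium barrier flags (the surrounding driver function is paraphrased, not quoted from the actual code):
static void memory_barrier(struct pipe_context *pctx, unsigned flags)
{
   /* Only SSBO and image traffic needs an explicit flush here; plain
    * buffer/texture updates are already covered by the automatic
    * inter-job flushing described above. */
   if (!(flags & (PIPE_BARRIER_SHADER_BUFFER | PIPE_BARRIER_IMAGE)))
      return;
   (void)pctx; /* the real implementation flushes jobs on this context */
}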
If the current draw call includes SSBOs, then we must flush any jobs
that are writing to the same SSBOs (so that our SSBOs reads are correct),
as well as jobs reading from the same SSBO (so that our SSBO writes don't
stomp previous SSBO reads).
The exact same logic applies to shader images. In this case we were already
flushing previous writes, but we should also flush previous reads.
Note that we don't need to call v3d_flush_jobs_reading_resource() and
v3d_flush_jobs_writing_resource() separately though, since flushing
jobs that read a resource also flushes those writing to it.
Suggested-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Eric Anholt <eric@anholt.net>
The `device` can be set earlier either by a command-line option or by
intercepting an ioctl call to get the I915_PARAM_CHIPSET_ID done by
the application early. In both cases `aub_file` and `devinfo` would
not be initialized.
Fix by splitting the conditions
- `device == 0`: use the FD to get both device and devinfo.
- Or `devinfo.gen == 0`: use `device` to initialize it.
And separately, initialize aub_file the first time it is needed.
Fixes: d594d2a052 ("intel/tools: use device info initializer")
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
If the pixel pipes have a different number of subslices, emit a slice
hashing table that will ensure proper workload distribution.
v2: Set Mask field to 0xffff for workaround (Ken).
If the pixel pipes have a different number of subslices, emit a slice
hashing table that will ensure proper workload distribution.
v2: Don't need to set the mask - it's mbo (Ken).
If the pixel pipes have a different number of subslices, emit a slice
hashing table that will ensure proper workload distribution.
v2: Don't need to set the mask - it's mbo (Ken).
v3: Don't keep a reference to the resource used for emitting the table
(Ken).
Add these fields and the 3DSTATE_SLICE_TABLE_STATE_POINTERS instruction
so we can properly configure the slice and subslice hashing on ICL+
v2: Make 'Mask' field a mbo (Ken).
We don't need it for state setup but it's a useful statistic we want to
pass on to developers.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This commit is all annoying plumbing work which just adds support for a
new brw_compile_stats struct. This struct provides a binary driver
readable form of the same statistics we dump out to stderr when
INTEL_DEBUG is set with a shader stage.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
While NIR's lower_imul64() solves the case of 64 bit integer multiplications
generated early, we don't have a way to lower such instructions when they are
generated by our own backend, such as the scan/reduce intrinsics. We'll need
this soon, so implement it now.
An easy way to test this is to simply disable nir_lower_imul64 to let
those operations reach the backend.
v2:
- Fix Q/UQ copy/paste errors (Caio).
- Transform an 'if' into 'else if' (Caio).
- Add an extra comment to clarify the need for 64b = 32b * 32b
(Caio).
- Make private functions private (Caio).
v3:
- Remove ambiguity with 'b' and 'd' variables (Caio).
- Allocate potentially less regs for the dwords (Caio).
Cc: Jason Ekstrand <jason.ekstrand@intel.com>
Cc: Matt Turner <matt.turner@intel.com>
Cc: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
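The '64b = 32b * 32b' decomposition the lowering relies on, written out in plain C rather than backend IR (a worked sketch, not the actual lowering code):
#include <stdint.h>
static uint64_t mul64_from_32bit_halves(uint64_t a, uint64_t b)
{
   uint32_t a_lo = (uint32_t)a, a_hi = (uint32_t)(a >> 32);
   uint32_t b_lo = (uint32_t)b, b_hi = (uint32_t)(b >> 32);
   /* The low dword comes from the full 32x32->64 product of the low halves. */
   uint64_t lo = (uint64_t)a_lo * b_lo;
   /* The cross terms plus the carry out of 'lo' only affect the high dword;
    * a_hi * b_hi lands entirely above bit 63 and is dropped. */
   uint32_t hi = (uint32_t)(lo >> 32) + a_lo * b_hi + a_hi * b_lo;
   return ((uint64_t)hi << 32) | (uint32_t)lo;
}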
Invert the logic of how progress is handled: remove the continue
statements and mark progress inside the places where it actually
happens.
We're going to add a new lowering that also looks for BRW_OPCODE_MUL,
so inverting the logic here makes the resulting code much easier to
follow.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Don't instantiate a builder for each instruction during
lower_integer_multiplication(). Instantiate one only when needed.
On the other hand, these unneeded builders don't seem to cost much to
init, so I don't expect any significant difference in performance:
this is mostly about code organization.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
The lower_integer_multiplication() function is already a little too
big. I want to add more to it, so let's reorganize the existing code
first. Let's start with just extracting the current code to
subfunctions. Later we'll change them a little more.
v2: Make private functions private (Caio).
v3: Fix typo (Caio).
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
v2: add to series
v3: update Makefile.sources
v4: don't remove a comment and break statement
v4: use nir_can_move_instr
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This is mostly the same as nir_move_load_const() but can also move
undef instructions, comparisons and some intrinsics (being careful with
loops).
v2: actually delete nir_move_load_const.c
v3: fix nir_opt_sink() usage in freedreno
v3: update Makefile.sources
v4: replace get_move_def with nir_can_move_instr and nir_instr_ssa_def
v4: handle if uses
v4: fix handling of nested loops
v5: re-write adjust_block_for_loops
v5: re-write setting of use_block for if uses
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Co-authored-by: Daniel Schürmann <daniel@schuermann.dev>
Reviewed-by: Eric Anholt <eric@anholt.net>
See "i965/gen9: Optimize slice and subslice load balancing behavior."
for the rationale. According to Jason, improves Aztec Ruins
performance by 2.7%.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org> (v1)
v2: Undo CPU performance micro-optimization done in i965 and iris due
to lack of data justifying it on anv. Use
cmd_buffer_apply_pipe_flushes wrapper instead of emitting pipe
control command directly. (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This allows enabling the shader info keeping on a per-shader basis.
It also disables the cache on a per-shader basis.
Reviewed-by: Dave Airlie <airlied@redhat.com>
The default pixel hashing mode settings used for slice and subslice
load balancing are far from optimal under certain conditions (see the
comments below for the gory details). The top-of-the-line GT4 parts
suffer from a particularly severe performance problem currently due to
a subslice load balancing issue. Fixing this seems to improve
graphics performance across the board for most of the benchmarks in my
test set, up to ~20% in some cases, e.g. from SKL GT4:
unigine/valley: 3.44% ±0.11%
gfxbench/gl_manhattan31: 3.99% ±0.13%
gputest/pixmark_piano: 7.95% ±0.33%
synmark/OglTexFilterAniso: 15.22% ±0.07%
synmark/OglTexMem128: 22.26% ±0.06%
Lower-end platforms are also affected by some subslice load imbalance
to a lesser degree, especially during CCS resolve and fast clear
operations, which are handled specially here due to rasterization
occurring in reduced CCS coordinates, which changes the semantics of
the pixel hashing mode settings.
No regressions seen during my tests on some SKL, KBL and BXT
configurations. Additional benchmark reports welcome on any Gen9
platforms (that includes anything with Skylake, Broxton, Kabylake,
Geminilake, Coffeelake, Whiskey Lake, Comet Lake or Amber Lake in your
renderer string).
P.S.: A similar problem is likely to be present on other non-Gen9
platforms, especially for CCS resolve and fast clear operations.
Will follow-up with additional patches fixing the hashing mode
for those once I have enough performance data to justify it.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This is a bit of a hack, but it'll hold us over until we have 64-bit
support wired through.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This used a delicate hack to try to find indirect inputs and skip them
as candidates for pairing. Let's use a better criterion -- no sources --
and pair based on that.
We could do better, but that would require more complex data flow
analysis than we're interested in doing here.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We implement gl_WorkGroupID and gl_LocalInvocationID, which map to
ld_compute_id with special sources.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This allows liveness analysis within a loop to be more fine grained,
fixing RA failures with partial spilled movs within a loop, as well as
enabling a slight reduction of register pressure more generally:
total registers in shared programs: 350 -> 347 (-0.86%)
registers in affected programs: 12 -> 9 (-25.00%)
helped: 3
HURT: 0
helped stats (abs) min: 1 max: 1 x̄: 1.00 x̃: 1
helped stats (rel) min: 25.00% max: 25.00% x̄: 25.00% x̃: 25.00%
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Just laying the groundwork. Reads and writes should be supported (both
direct and indirect, either int or float, vec1/2/3/4), but no bounds
checking is done at the moment.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This is a corner case that happens a lot with SSBOs. Basically, if we
only read a few components of a uniform, we need to only spill a few
components or otherwise we try to spill what we spilled and RA hangs.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We don't want to load a 128-bit sysval when 64-bits will do. Fixes RA
failures with SSBO indirect writes.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Sometimes a sysval is used to facilitate an instruction but is not the
instruction itself.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This is of course suboptimal for performance, forcing each
glDispatchCompute call to be submitted separately to the kernel and run
to completion. However, for the initial bring-up of compute jobs,
this simplifies quite a bit.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
For each SSBO index we get from Gallium/NIR, we need two pieces of
information in the shader:
1. The address of the SSBO in GPU memory. Within the shader, we'll be
accessing it with raw memory load/store, so we need the actual address,
not just an index.
2. The size of the SSBO. This is not strictly necessary, but at some
point, we may like to do bounds checking on SSBO accesses.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
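Conceptually, the two pieces of per-SSBO information provided to the shader look like this (an illustrative struct only; the driver actually uploads them as sysval uniforms rather than a literal struct):
#include <stdint.h>
struct ssbo_sysval_info {
   uint64_t gpu_address; /* 1. raw address consumed by raw memory load/store */
   uint32_t size;        /* 2. buffer size, kept for future bounds checking */
};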
This u_prim.h helper determines the number of outputs for stream output,
given a particular primitive type and a vertex count. This is useful for
statically calculating sizes of stream output buffers (i.e. when there
is no geometry/tessellation shader in use).
This helper will be used in Panfrost's transform feedback
implementation, as you can probably guess since why else would I be
submitting it....
See also dEQP's getTransformFeedbackOutputCount routine.
v2: Simplify definition using new helpers, which also extends to non-ES2
primitive types (Eric).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
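A sketch of how such a helper can be composed from existing u_prim.h building blocks (the in-tree helper may differ in name and details):
#include "util/u_prim.h"
static inline unsigned
stream_outputs_for_vertices(enum pipe_prim_type prim, unsigned nr_vertices)
{
   /* Stream output records the vertices of each *decomposed* primitive, so
    * trailing vertices that don't complete a primitive contribute nothing. */
   unsigned nr_prims = u_decomposed_prims_for_vertices(prim, nr_vertices);
   return nr_prims * u_vertices_per_prim(prim);
}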
By optimizing the shader before inlining, we avoid having to redo this
work for each inlined copy of a function. It should also reduce the
memory consumption a bit.
This cuts the KHR-GL46.arrays_of_arrays_gl.SubroutineFunctionCalls2
runtime by 25% on my Icelake. That test compiles many shaders, which
contain large types (dmat4) and division (expensive operations).
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
[27/31] Compiling C object 'src/gallium/drivers/etnaviv/df32d18@@etnaviv@sta/etnaviv_compiler_nir.c.o'.
In file included from ../../src/gitlab_mesa/src/gallium/drivers/etnaviv/etnaviv_compiler_nir.c:552:
../../src/gitlab_mesa/src/gallium/drivers/etnaviv/etnaviv_compiler_nir_emit.h: In function 'ra_assign':
../../src/gitlab_mesa/src/gallium/drivers/etnaviv/etnaviv_compiler_nir_emit.h:903:9: warning: unused variable 'ok' [-Wunused-variable]
bool ok = ra_allocate(g);
^~
../../src/gitlab_mesa/src/gallium/drivers/etnaviv/etnaviv_compiler_nir.c: In function 'etna_compile_shader_nir':
../../src/gitlab_mesa/src/gallium/drivers/etnaviv/etnaviv_compiler_nir.c:663:9: warning: unused variable 'ok' [-Wunused-variable]
bool ok = emit_shader(c->nir, &options, &v->num_temps, &num_consts);
^~
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Jonathan Marek <jonathan@marek.ca>
Test that found this: dEQP-VK.geometry.layered.1d_array.secondary_cmd_buffer
Fixes: 49e6c2fb78 "radv: Store color/depth surface info in attachment info instead of framebuffer."
Reviewed-by: Dave Airlie <airlied@redhat.com>
The version bump adds a proper features struct.
Fixes: d10de25309 "anv: Implement VK_EXT_subgroup_size_control"
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Can result in different shaders.
Fixes: 8a86908e9a "radv/gfx10: add Wave32 support for vertex, tessellation and geometry shaders"
Reviewed-by: Dave Airlie <airlied@redhat.com>
Instead of having the three values everywhere. This is also more
future proof if we want the driver to make those decisions eventually.
Reviewed-by: Dave Airlie <airlied@redhat.com>
Intel drivers are not using this anymore, and turnip still doesn't have
compute shaders, so it won't make a difference to stop using this option.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Acked-by: Rob Clark <robdclark@chromium.org>
Instead of asking spirv_to_nir to lower the workgroup (shared memory)
to offsets, keep them as derefs longer, then lower them later on.
Because Workgroup memory doesn't have explicit offsets, we need to set
those using nir_lower_vars_to_explicit_types before calling the I/O
lowering pass.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This is the first call that provides the iris context to the monitor
implementation. On the first call, use the iris context to initialize
the monitor context.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
With this commit, Iris will report that AMD_performance_monitor is
supported, and will allow the caller to query the available metrics.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Rather than hardcoding certain varying buffer indices "by convention",
work it out at draw time. This added flexibility is needed for
futureproofing and will enable streamout.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This code is fairly self-contained, so let's factor it out of the giant
pan_context.c monster.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Register allocation for varying stores is a bit different, since the
instructions ignore the writemask (varyings are normalized,
packed and vectorized).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
When GL_SCISSOR_TEST is enabled glClear is handled by state tracker
and there is no need to do this in gallium driver.
Reviewed-by: Alok Hota <alok.hota@intel.com>
This reverts commit c73988300f.
Reason: Made a fix for this, then saw @eric's change
("util/anon_file: add missing"), but some sequence of events
I don't really remember caused this to get merged. So revert ;-)
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Remove etna_bo_from_handle() as there are no known users.
Signed-off-by: Marek Vasut <marex@denx.de>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Currently I am seeing a handful of the following debug messages:
translate_texture_target:495: Unhandled texture target: 0
PIPE_BUFFER is not handled in translate_texture_target(..) which makes
sense as it is used to translate from PIPE_XXX to a GPU-specific value
during etna_create_sampler_view_state(..).
To fix this problem introduce gpu_supports_texture_target(..) which just
checks if the texture target is supported.
Fixes: dfe048058f ("etnaviv: support 3D and 2D array textures")
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Jonathan Marek <jonathan@marek.ca>
Android 9 loader conditionally advertises VK_KHR_shared_presentable_image
extension based on this property and it looks like it does not
initialize the struct before query.
Pragmas are added to ignore warnings with Android specific structure
types in same manner as commit 8d386e6eef did.
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Use a struct with bitfields to construct the texture descriptor
instead of poking bits into an array of uint32_t. It improves code
readability and makes it easier to experiment with unknown fields.
Also fix mipmapping while we're at it - Utgard can have up to 13
levels, but 64 bytes is enough only for 10. Calculate descriptor
size dynamically to account for extra levels if we need them.
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Introduce a table for supported texel formats and use it to check
whether a format is supported and for converting a pipe format to the lima
texel format.
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Otherwise I get:
../src/util/anon_file.c: In function ‘create_tmpfile_cloexec’:
../src/util/anon_file.c:75:9: error: implicit declaration of function ‘mkostemp’
[-Werror=implicit-function-declaration]
fd = mkostemp(tmpname, O_CLOEXEC);
^~~~~~~~
../src/util/anon_file.c:133:7: error: implicit declaration of function ‘asprintf’
[-Werror=implicit-function-declaration]
asprintf(&name, "%s/mesa-shared-%s-XXXXXX", path, debug_name);
^~~~~~~~
../src/util/anon_file.c:141:4: error: implicit declaration of function ‘free’
[-Werror=implicit-function-declaration]
free(name)
Fixes: c0376a ("util: add anon_file.h for all memfd/temp file usage")
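The fix presumably amounts to something along these lines (a guess at the required feature-test macro and headers, not the actual commit):
/* mkostemp() and asprintf() are GNU extensions, so the feature-test macro
 * must be defined before any system header is included; free() just needs
 * <stdlib.h>. */
#define _GNU_SOURCE
#include <stdio.h>   /* asprintf() */
#include <stdlib.h>  /* mkostemp(), free() */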
Otherwise, virgl will report renderable or texturable formats as
also scan-out formats.
v2: drop host feature check (@kusma)
Reviewed-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
vkpipeline-db for my Skylake GPU:
total instructions in shared programs: 8847602 -> 8847896 (<.01%)
instructions in affected programs: 10165 -> 10459 (2.89%)
helped: 8
HURT: 2
total cycles in shared programs: 1606273555 -> 1606251634 (<.01%)
cycles in affected programs: 2201803 -> 2179882 (-1.00%)
helped: 7
HURT: 3
The shaders with more instructions are due to a loop over a shared array
in Three Kingdoms being unrolled (and creating a lot of nested ifs). Not sure
if that's good or bad.
One of the shaders with worse cycles is only worse by 0.04% and the other
two are the shaders with loops unrolled.
v2: add patch
v4: don't set spirv_options.shared_addr_format
v4: move comment concerning the shared address format used and NULL
v4: add vkpipeline-db results
v5: rename to nir_lower_vars_to_explicit_types
v5: move setting of total_shared to outside brw_compile_cs
v6: set shared_addr_format
v6: formatting changes
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com> (v5)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
v2: use glsl_type_size_align_func
v2: move get_explicit_type() to glsl_types.cpp/nir_types.cpp
v2: use align() instead of util_align_npot()
v2: pack arrays a bit tighter
v2: rename mem_* to field_*
v2: don't attempt to handle when struct offsets are already set
v2: use column_type() instead of recreating it
v2: use a branch instead of |= in nir_lower_to_explicit_impl()
v2: assign locations to variables and update shared_size and num_shared
v2: allow the pass to be used with nir_var_{shader_temp,function_temp}
v4: rebase
v5: add TODO
v5: small formatting changes
v5: remove incorrect assert in get_explicit_type()
v5: rename to nir_lower_vars_to_explicit_types
v5: correctly update progress when only variables are updated
v5: rename get_explicit_type() to get_explicit_shared_type()
v5: add comment explaining how get_explicit_shared_type() is different
v5: update cast strides
v6: update progress when lowering nir_var_function_temp variables
v6: formatting changes
v6: add more detailed documentation comment for get_explicit_shared_type
v6: rename get_explicit_shared_type to get_explicit_type_for_size_align
v7: fix comment in nir_lower_vars_to_explicit_types_impl()
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com> (v5)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
v2: require nir_address_format_32bit_offset instead
v3: don't call nir_intrinsic_set_access() for shared atomics
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
On Windows, p_atomic_inc_return returns an unsigned long long rather
than the type the pointer refers to, so let's make sure we cast the
result to the right type. Otherwise, we'll trigger a warning about
the wrong format-string for the type.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Acked-by: Eric Engestrom <eric@engestrom.ch>
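A sketch of the kind of call site being fixed (made-up variable names):
/* Without the cast, p_atomic_inc_return() may evaluate to an
 * unsigned long long on Windows, which then mismatches the "%u"
 * conversion in the format string. */
unsigned serial = (unsigned) p_atomic_inc_return(&screen->object_serial);
debug_printf("object %u\n", serial);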
This avoids a warning about implicitly casting away the constness of the
pointer.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Acked-by: Eric Engestrom <eric@engestrom.ch>
This avoids a warning on some compilers, complaining about implicitly
casting the function-pointer.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Fixes: d482a8f "spirv: Update the OpenCL.std.h header"
Acked-by: Eric Engestrom <eric@engestrom.ch>
Imported resources might not start at offset 0 into the buffer object.
Make sure to remember the offset that is provided with the handle on
import.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
1. fix build issues with MSVC 2019 compiler
The MSVC 2019 compiler seems to have an issue with optimized code-gen
when using the _mm256_and_si256() intrinsic.
Only disable use of integer vpand on buggy versions of MSVC 2019.
Otherwise allow use of the integer vpand intrinsic.
2. Remove unused vec/matrix functionality
Reviewed-by: Alok Hota <alok.hota@intel.com>
We can use the PRIMITIVE_COUNTS_FEEDBACK packet to write various primitive
counts to a buffer, including the number of primitives written to transform
feedback buffers, which will handle buffer overflow correctly.
There are a couple of caveats with this:
Primitive counters are reset when we emit a 'Tile Binning Mode Configuration'
packet, which can happen in the middle of a primitives query, so we need to
read the buffer when we submit a job and accumulate the counts in the context
so we don't lose them.
We also need to do the same when we switch primitive type during transform
feedback so we can compute the correct number of recorded vertices from
the number of primitives. This is necessary so we can provide an accurate
vertex count for draw from transform feedback.
v2:
- When computing the number of vertices for a primitive, pass in the base
primitive, since that is what the hardware will count.
- No need to update primitive counts when switching primitive types if
the base primitives are the same.
- Log perf warning when mapping the primitive counts BO for readback (Eric).
- Only emit the primitive counts packet once at job end (Eric).
- Use u_upload mechanism for the primitive counts buffer (Eric).
- Use the XML to generate indices into the primitive counters buffer (Eric).
Fixes piglit tests:
spec/ext_transform_feedback/overflow-edge-cases
spec/ext_transform_feedback/query-primitives_written-bufferrange
spec/ext_transform_feedback/query-primitives_written-bufferrange-discard
spec/ext_transform_feedback/change-size base-shrink
spec/ext_transform_feedback/change-size base-grow
spec/ext_transform_feedback/change-size offset-shrink
spec/ext_transform_feedback/change-size offset-grow
spec/ext_transform_feedback/change-size range-shrink
spec/ext_transform_feedback/change-size range-grow
spec/ext_transform_feedback/intervening-read prims-written
Reviewed-by: Eric Anholt <eric@anholt.net>
These were not being compiled because of the lack of __gen_unpack_address.
v2:
- Shift raw address correctly (Eric).
Reviewed-by: Eric Anholt <eric@anholt.net>
What we call GROWABLE in Mesa corresponds to the HEAP BO flag in the
kernel. These buffers cannot be memory mapped on the CPU side at the
moment, so make sure they are also marked INVISIBLE.
This allows us to allocate a big heap upfront (16MB) without actually
reserving space unless it's needed.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This will be useful right now so we avoid retrieving a non-executable
buffer when an executable one is needed.
As we support more flags, this logic will need to be extended to
consider the different trade-offs to be made when matching BO
specifications to BOs in the cache.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Instead of all shaders being stored in a single BO, have each shader in
its own.
This removes the need for a 16MB allocation per context, and allows us
to place transient blend shaders in BOs marked as executable (before
they were allocated in the transient pool, which shouldn't be
executable).
v2: - Store compiled blend shaders in a malloc'ed buffer, to avoid
reading from GPU-accessible memory when patching (Alyssa).
- Free struct panfrost_blend_shader (Alyssa).
- Give the job a reference to regular shaders when emitting
(Alyssa).
v3: - Split out the allocation flags change (Rob).
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Some hash functions (eg. key_u64_hash) will attempt to dereference the
key, causing an invalid access when passed DELETED_KEY_VALUE (0x1) or
FREED_KEY_VALUE (0x0).
On a 32-bit arch a 64-bit key value doesn't fit into a pointer, so
hash_table_u64 internally uses a pointer to a struct containing the
64-bit key value.
Fix _mesa_hash_table_u64_clear() to handle the 32-bit case by creating a
temporary hash_key_u64 to pass to the hash function.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Suggested-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Cc: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Cc: Nicolai Hähnle <nicolai.haehnle@amd.com>
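Illustratively, the 32-bit path in the clear callback ends up doing something like this (internal names assumed from the description above, so treat this as a sketch):
/* On 32-bit builds the table stores pointers to boxed 64-bit keys, so never
 * hand the raw 0x0/0x1 sentinels straight to a hash function such as
 * key_u64_hash; box the sentinel in a temporary first. */
struct hash_key_u64 tmp = { .value = DELETED_KEY_VALUE };
uint32_t hash = ht->table->key_hash_function(&tmp);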
The new function supports gralloc1 usage flags that get set separately
for producer and consumer. As we still need to support the old method too,
let's share common code and use the android_convertGralloc0To1Usage helper.
Bump the VK_ANDROID_native_buffer version to indicate support for the
new call.
Changes were tested on Android Celadon P with Basemark GPU and various
Sascha Willems Vulkan demos.
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
INTEL_DEBUG=perfmon will iterate over the perf queries, printing
information about the state of each query. Some of this information
will be private to intel/perf, and needs a dump routine that can be
called from i965.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Now that all references from i965 have been moved to perf, we can make
internal methods private again.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
By encapsulating this implementation within perf, we can eventually
make struct gen_perf_ctx private.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This refactor moves several helper functions for get_query_data as
well:
- accumulate_oa_reports
- read_gt_frequency
- get_pipeline_stats_data
- get_oa_counter_data
Functions which are no longer referenced in brw_performance_query.c
have been removed.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The following methods have duplicate implementations of read_oa_samples_until in
brw_performance_query.c:
- read_oa_samples_for_query
- read_oa_samples_until
They are still referenced by other methods in the file and will be
removed in a subsequent commit.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
To move more operations into intel/perf, several state items are
needed. Save references to that state in the perf_ctxt, rather than
passing them in for every operation.
This commit includes an initializer for gen_perf_context, to set those
references and also encapsulate the initialization of the sample
buffer state.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Most accesses to perf state were made through repeated dereferences of
brw_context members. Preferring temporary variables of perf_ctx and
perf_cfg has the following advantages:
- more concise implementation
- easier refactor when moving subsequent methods to perf
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The "context" that is necessary to submit and process perf commands to
the hardware was previously present in the brw_context.perfquery
struct. This commit moves it into perf and provides a more
understandable name.
The intention is for this struct to be private, when all methods that
access it are migrated into perf.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
oa_sample_buf holds the data provided by the kernel that will be
collated into performance metrics. Since this functionality will be
implemented in perf, the struct needs to be defined there.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Iris and i965 both need to enumerate the available metrics, so these
routines must be located in perf.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The perf subsystem needs several macro definitions that were
duplicated in Iris and i965 headers. Place these macros within perf,
if the perf implementation contains the only references to the values.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Performance metrics collection requires several actions (eg bo_map())
that have different implementations for Iris and i965. The perf
subsystem needs a vtable for each of these actions, so it can invoke
the corresponding implementation for each driver.
The first call to be added to the table is bo_alloc.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
There were multiple ioctl-wrapper functions, so a common
implementation was put in gen_gem.h. With a common implementation,
perf no longer needs the caller to configure one for it.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This structure contains the configurations of the metrics for the
current platform, and the settings needed for the perf subsystem to
query that configuration from the device. This data is available
without a rendering context, and needed to support MDAPI metrics for
Vulkan.
A gen_perf_context struct will be added later, which holds additional
state from the rendering context necessary for metric data
collection. The gen_perf struct needs a more precise name to reduce
confusion.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This debug situation is unfortunate. debug_printf only does something
with DEBUG set, but in practice all of that needs to be moved to !NDEBUG.
For now, use _debug_printf, which always prints. However, the whole
function is guarded by !NDEBUG.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Both AMD and NVIDIA hardware define it this way. Instead of replicating
the logic everywhere, just fix it up in one place.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
There's no reason to bring the format-less load requirement into this
extension. It requires a size to be provided, and a compatible format is
computed from the size + data type. For example
layout(size1x32) uniform iimage1D image;
becomes
DCL IMAGE[0], 1D, PIPE_FORMAT_R32_SINT, WR
whereas PIPE_CAP_IMAGE_LOAD_FORMATTED is designed to allow
PIPE_FORMAT_NONE to be provided as a format and still enable LOAD
operations to be performed.
So the shader has all the information it needs about the format.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Registration of mdapi metrics based on statistics query registers was
inadvertently removed in the commit that checks for OA kernel support.
The statistics queries are not dependent on OA.
Fixes: 96e1c945f2 ("i965: Move device info initialization to common code")
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Move the Weston os_create_anonymous_file code from egl/wayland into util,
add support for Linux memfd and FreeBSD SHM_ANON,
use that code in anv/aubinator instead of explicit memfd calls for portability.
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Also only set FLUSH_ON_BINNING_TRANSITION for GPU families that need it (matches
what si_emit_dpbb_disable is doing).
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The DBG macro in brw_blorp.c ends up calling an Android log function:
error: undefined reference to '__android_log_print'
v2: On suggestion from Lionel, hang the Android dependency onto a new
libintel_common dependency.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
We also need to update wayland-protocols and libXrandr (and randrproto),
as they are too old for gdk3 (which gtk3 depends on).
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
The current Android.registers.mk file causes build failures that
look like:
FAILED:
external/mesa3d/src/freedreno/Android.registers.mk:49: error: implicit rules are obsolete: out/target/product/linaro_db845c/gen/STATIC_LIBRARIES/libfreedreno_registers_intermediates/registers/%.xml.h
Caused by the following Android build rule change:
https://android.googlesource.com/platform/build/+/HEAD/Changes.md#implicit_rules
I tried to replace this with something similar to the static
pattern suggested in the URL above, but ended up getting all the
xml.h files generated using only the first a2xx.xml source file.
So I've fallen back to explicitly defining the make rules for
each.
Additionally, we needed to provide the proper
LOCAL_EXPORT_C_INCLUDE_DIRS and add the defined static library
to the components that depend on the register headers.
Acked-by: Eric Anholt <eric@anholt.net>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Midgard does not accept an index_bias directly and relies instead on a
bias correction offset (offset_bias_correction) in order to calculate
the unbiased vertex index.
We need to make sure we adjust offset_start and vertex_count in order to
take into account the index_bias as required by a
glDrawElementsBaseVertex call and then supply an additional
offset_bias_correction to the hardware.
Signed-off-by: Rohan Garg <rohan.garg@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Previously we relied on stores not using DCC but that is going to
change, so disable compression explicitly.
Reviewed-by: Dave Airlie <airlied@redhat.com>
For extra args. Unlike image creation, I'm not embedding the vk
struct in there, so all the inline structs can be kept.
Reviewed-by: Dave Airlie <airlied@redhat.com>
VK spec 7.3:
"Applications must ensure that all accesses to memory that backs
image subresources used as attachments in a given renderpass instance
either happen-before the load operations for those attachments, or
happen-after the store operations for those attachments."
So the only render loops we can have are with input attachments. Detect
these.
Reviewed-by: Dave Airlie <airlied@redhat.com>
Using the wrong bounds
Fixes: "219d6939df8 radv: add more assertions to make sure packets are correctly emitted"
Reviewed-by: Andres Rodriguez <andresx7@gmail.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
for internal radeonsi shaders
v2 (Connor):
- Split out prep work from adding opcodes, and rewrite the former
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
for internal radeonsi shaders
v2 (Connor):
- Split this out from the prep work, and rework the former
- Add support for U64SNE
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
We shouldn't be using the versions that output a 32-bit boolean, since
nir_opt_algebraic won't optimize them as well. Drivers will lower these
to the 32-bit versions after optimizing, if appropriate. Also, this will
make implementing 64-bit comparisons easier.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This extension has 2 functions that are missing from the ARB versions:
- imageAtomicIncWrap
- imageAtomicDecWrap
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Lowering PS inputs can eliminate some of them, which messes up
persp/linear barycentric coord usage info.
Acked-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
They are exactly like _mesa_GetTexEnvfv/_mesa_GetTexEnviv except they take
a GLuint texunit parameter instead of relying on ctx->Texture.CurrentUnit.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Based on the 'static get_texobj_by_target' function from texparam.c,
but extended to also take the texunit as a parameter.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The new function implements the same feature but doesn't depend
on ctx->Texture.CurrentUnit.
This change allows using it from the indexed functions.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Lowers BaseVertex to the correct system value for OpenGL.
v2: use options->environment rather than adding a new flag to
spirv_to_nir_options
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Fix uploading of 3D textures and 2D array textures:
* Remove asserts in BLT and RS checking z
* Use box->z/box->depth in etna_copy_resource_box and CPU tile/untile
* Track mip level depth and use it in etna_copy_resource
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Allow UBO relocs and only emitting uniforms that are actually used.
GC7000Lite has no address register, so upload uniforms to a UBO object to
LOAD from.
I removed the code to check for changes to individual uniforms and just
reupload the entire uniform state when it is dirty. I think there
was very limited benefit to it and it isn't compatible with relocs.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Very basic summary; loops and gpir spills:fills are not updated yet and
are only there to comply with the strings expected by the shader-db report.py regex.
For now it can be used to analyze the impact of changes in instruction
count in both gpir and ppir.
The LIMA_DEBUG=shaderdb setting can be useful to output stats on
applications other than shader-db.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
This adds support for glDebugMessageCallback which is required to
support shader-db reports.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
These tests have been fixed by:
b514f41183 ("glcpp: use pre-expansion line number for __LINE__")
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
8af1990a exposed dri2_get_mapping_by_fourcc() in dri_helpers.h, so it
could be used by dri_get_egl_image(), but didn't move it. This breaks
the build in the with_dri=false case (e.g. when building for a target
which doesn't have libdrm, so swrast is the only dri driver built)
Fixes the following deqp tests:
dEQP-GLES2.functional.shaders.preprocessor.predefined_macros.line_2_*
I don't see the spec requiring this, but it seems better, as the
clang preprocessor for example has this behavior.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This just eliminates tautological / contradictory compares that are used
for bcsel and other non-if-statement cases. If-statements are not
affected because removing flow control can cause the i965 instruction
scheduler to create some very long live ranges resulting in unnecessary
spilling. This causes some shaders to fall off a performance cliff.
Since many small if-statements are already flattened to bcsel, this
optimization covers more than 68% of the possible cases (2417 shaders
helped for instructions on Skylake vs. 3554).
v2: Reorder and add whitespace to make the relationship between the
patterns more obvious. Suggested by Caio.
All Gen7+ platforms had similar results. (Ice Lake shown)
total instructions in shared programs: 16333474 -> 16322028 (-0.07%)
instructions in affected programs: 438559 -> 427113 (-2.61%)
helped: 1765
HURT: 0
helped stats (abs) min: 1 max: 275 x̄: 6.48 x̃: 4
helped stats (rel) min: 0.20% max: 36.36% x̄: 4.07% x̃: 1.82%
95% mean confidence interval for instructions value: -6.87 -6.10
95% mean confidence interval for instructions %-change: -4.30% -3.84%
Instructions are helped.
total cycles in shared programs: 367608554 -> 367511103 (-0.03%)
cycles in affected programs: 8368829 -> 8271378 (-1.16%)
helped: 1541
HURT: 129
helped stats (abs) min: 1 max: 4468 x̄: 66.78 x̃: 39
helped stats (rel) min: 0.01% max: 45.69% x̄: 4.10% x̃: 2.17%
HURT stats (abs) min: 1 max: 973 x̄: 42.25 x̃: 10
HURT stats (rel) min: 0.02% max: 64.39% x̄: 2.15% x̃: 0.60%
95% mean confidence interval for cycles value: -64.90 -51.81
95% mean confidence interval for cycles %-change: -3.89% -3.36%
Cycles are helped.
total spills in shared programs: 8867 -> 8868 (0.01%)
spills in affected programs: 18 -> 19 (5.56%)
helped: 0
HURT: 1
total fills in shared programs: 21900 -> 21903 (0.01%)
fills in affected programs: 78 -> 81 (3.85%)
helped: 0
HURT: 1
All Gen6 and earlier platforms had similar results. (Sandy Bridge shown)
total instructions in shared programs: 10829877 -> 10829247 (<.01%)
instructions in affected programs: 30240 -> 29610 (-2.08%)
helped: 177
HURT: 0
helped stats (abs) min: 1 max: 15 x̄: 3.56 x̃: 3
helped stats (rel) min: 0.37% max: 17.39% x̄: 2.68% x̃: 1.94%
95% mean confidence interval for instructions value: -3.93 -3.18
95% mean confidence interval for instructions %-change: -3.04% -2.32%
Instructions are helped.
total cycles in shared programs: 154036580 -> 154035437 (<.01%)
cycles in affected programs: 352402 -> 351259 (-0.32%)
helped: 96
HURT: 28
helped stats (abs) min: 1 max: 128 x̄: 14.73 x̃: 6
helped stats (rel) min: 0.03% max: 24.00% x̄: 1.51% x̃: 0.46%
HURT stats (abs) min: 1 max: 117 x̄: 9.68 x̃: 4
HURT stats (rel) min: 0.03% max: 2.24% x̄: 0.43% x̃: 0.23%
95% mean confidence interval for cycles value: -13.40 -5.03
95% mean confidence interval for cycles %-change: -1.62% -0.53%
Cycles are helped.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
A similar technique could be used for fmin3, fmax3, and fmid3.
This could be squashed with the previous commit. I kept it separate to
ease review.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This could be squashed with the previous commit. I kept it separate to
ease review.
v2: Add some missing cases. Use nir_src_is_const helper. Both
suggested by Caio. Use a table for mapping source ranges to a result
range.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This could be squashed with the previous commit. I kept it separate to
ease review.
v2: Use a switch statement and add more comments. Both suggested by
Caio.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Most integer operations are omitted because dealing with integer
overflow is hard. There are a few things that could be smarter if there
was a small amount more tracking of ranges of integer types (i.e.,
operands are Boolean, operand values fit in 16 bits, etc.).
The changes to nir_search_helpers.h are included in this patch to
simplify reordering the changes to nir_opt_algebraic.py.
v2: Memoize range analysis results. Without this, some shaders appear
to get stuck in infinite loops.
v3: Rebase on many months of Mesa changes, including 1-bit Boolean
changes.
v4: Rebase on "nir: Drop imov/fmov in favor of one mov instruction".
v5: Use nir_alu_srcs_equal for detecting (a*a). Previously just the SSA
value was compared, and this incorrectly matched (a.x*a.y).
v6: Many code improvements including (but not limited to) better names,
more comments, and better use of helper functions. All suggested by
Caio. Rework the handling of several opcodes to use a table for mapping
source ranges to a result range. This change fixed a bug that caused
fmax(gt_zero, ge_zero) to be incorrectly recognized as ge_zero.
Slightly tighten the range of fmul by recognizing that x*x is gt_zero if
x is gt_zero. Add similar handling for -x*x.
v7: Use _______ in the tables as an alias for unknown. Suggested by
Caio.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
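A toy, table-driven illustration of the idea described above in C (only fmul,
only a few range classes, and made-up names): not the actual pass, just the
shape of the "map source ranges to a result range" tables, including the
_______ alias for unknown and the x*x special case.

    enum range { lt_zero, gt_zero, le_zero, ge_zero, unknown, num_ranges };

    #define _______ unknown

    static const enum range fmul_table[num_ranges][num_ranges] = {
       /*            lt_zero  gt_zero  le_zero  ge_zero  unknown */
       /* lt_zero */ {gt_zero, lt_zero, ge_zero, le_zero, _______},
       /* gt_zero */ {lt_zero, gt_zero, le_zero, ge_zero, _______},
       /* le_zero */ {ge_zero, le_zero, ge_zero, le_zero, _______},
       /* ge_zero */ {le_zero, ge_zero, le_zero, ge_zero, _______},
       /* unknown */ {_______, _______, _______, _______, _______},
    };

    static enum range
    fmul_range(enum range a, enum range b, int srcs_are_same_value)
    {
       /* x*x cannot be negative, and is strictly positive if x is non-zero. */
       if (srcs_are_same_value)
          return (a == lt_zero || a == gt_zero) ? gt_zero : ge_zero;

       return fmul_table[a][b];
    }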
Previously, we assumed that the dirty bit was always 1 << VK_DYNAMIC_*
and this assumption is about to be false. Extensions which define new
VK_DYNAMIC_* enums won't be nice and tightly packed which this really
requires. Instead, add functions to do the conversions and rework the
bits a bit.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
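A minimal C sketch of the conversion-function idea; the internal enum names
are invented for the sketch and the real anv helpers and bit layout may
differ. The point is to stop assuming 1 << VK_DYNAMIC_STATE_* is a usable
dirty bit once extension enums with sparse values appear.

    #include <stdint.h>
    #include <vulkan/vulkan.h>

    /* Densely packed internal dirty bits (names invented for the sketch). */
    enum internal_dyn_state {
       INTERNAL_DYN_VIEWPORT,
       INTERNAL_DYN_SCISSOR,
       INTERNAL_DYN_LINE_WIDTH,
       /* ... */
    };

    /* Map the sparse VK enum to a dirty bit instead of shifting by it. */
    static uint32_t
    dyn_state_dirty_bit(VkDynamicState state)
    {
       switch (state) {
       case VK_DYNAMIC_STATE_VIEWPORT:   return 1u << INTERNAL_DYN_VIEWPORT;
       case VK_DYNAMIC_STATE_SCISSOR:    return 1u << INTERNAL_DYN_SCISSOR;
       case VK_DYNAMIC_STATE_LINE_WIDTH: return 1u << INTERNAL_DYN_LINE_WIDTH;
       default:                          return 0; /* extend as enums get wired up */
       }
    }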
Look ma, we're a real driver now! I was waiting until Panfrost
stabilises a bit for this, but now that 19.2 is almost here, let's make
us official :)
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
nir_lower_int_to_float is currently only meant to run once, and some ops
must be lowered after being converted from int ops to be implementable,
so re-run nir_opt_algebraic after lowering ints to floats.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
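A minimal sketch of the pass ordering described above; the real lima
optimization loop runs more passes than shown here.

    #include <stdbool.h>
    #include "nir.h"

    static void
    lower_ints_and_reoptimize(nir_shader *s)
    {
       bool progress;

       NIR_PASS_V(s, nir_lower_int_to_float);

       /* Some ops produced by the int-to-float lowering only become
        * implementable after another algebraic pass, so run to a fixpoint. */
       do {
          progress = false;
          NIR_PASS(progress, s, nir_opt_algebraic);
       } while (progress);
    }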
Symptom: the sky is black in SuperTuxKart (flashbacks to SMB/NES
emulation intensify).
Essentially, what happened is a fixed (special) move to r0 was
eliminated but scheduling did not factor this in, so
can_run_concurrent_ssa returned true even when there was a logical data
dependency that needed to be resolved.
Fixes: 20771ede1c ("pan/midgard: Add post-RA move elimination")
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
When dumping a shader's assembly with INTEL_DEBUG=vs,tcs,..., the
sha1 of the resulting assembly is also printed. If the environment
variable INTEL_SHADER_ASM_READ_PATH is present, the driver will try to
load a "%sha1%.bin" file from that path and substitute the current
assembly with the one from the file.
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Reviewed-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Add '-t,--type' command line option to specify the output type
which can be 'bin', 'c_literal' or 'hex'.
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Reviewed-by: Sagar Ghuge <sagar.ghuge@intel.com>
In preparation for an initial 19.2 release, add a blacklist for apps
known to be buggy under Panfrost to protect users. Panfrost is NOT a
conformant implementation at this time.
Distros: please do not revert this patch. If blacklisted apps are run
using Panfrost, dragons will bite you. Thanks :)
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Acked-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
A while ago, we started deferring GEM object closure and VMA release
until buffers were idle. This had some unforeseen interactions with
external buffers.
We keep imported buffers in hash tables, so if we have repeated imports
of the same GEM object, we map those to the same iris_bo structure.
This is critical for several reasons. Unfortunately, we broke this
assumption. When freeing a non-idle external buffer, we would drop it
from the hash tables, then move it to the zombie list. If someone
reimported the same GEM object, we would not find it in the hash tables,
and go ahead and make a second iris_bo for that GEM object. But the old
iris_bo would still be in the zombie list, and so we would eventually
call GEM_CLOSE on it - closing a BO that should have still been live.
To work around this, we defer removing a BO from the hash tables until
it's actually fully closed. This has the strange effect that an
external BO may be on the zombie list, and yet be resurrected before
it can be properly cleaned up. In this case, we remove it from the
list so it won't be freed.
Fixes severe instability in Weston, which was hitting EINVALs and
ENOENTs from execbuf2, due to batches referring to a GEM object that
had been closed, or at least had its VMA torched.
Fixes: 457a55716e ("iris: Defer closing and freeing VMA until buffers are idle.")
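A minimal C sketch of the reimport path with hypothetical structure and field
names; the real iris code keeps BOs in hash tables keyed by GEM handle and an
explicit zombie list, while a flat array keeps the sketch small. The key point
is that a deferred-close BO stays findable and is resurrected on reimport.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct bo {
       uint32_t gem_handle;
       int refcount;
       bool zombie;   /* true while queued for a deferred GEM_CLOSE */
    };

    static struct bo *
    import_gem_handle(struct bo **table, size_t len, uint32_t gem_handle)
    {
       for (size_t i = 0; i < len; i++) {
          struct bo *bo = table[i];
          if (!bo || bo->gem_handle != gem_handle)
             continue;

          if (bo->zombie)
             bo->zombie = false;   /* resurrect: cancel the deferred close */

          bo->refcount++;
          return bo;
       }
       return NULL;   /* not found: caller creates a fresh BO */
    }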
Everybody importing an external buffer was looking it up in the hash
table, then referencing it. We can just do that in the helper instead,
which also gives us a convenient spot to stash extra code shortly.
The STM32MP157 features a Vivante GC400 GPU supported by etnaviv.
Add a DRM entry point for the STM display controller, so mesa
can be used with it.
Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The load uniform/temporary operations output only to a pipeline
register, which must be consumed by another op in the same instruction
later.
The current implementation delays the decision of who will consume this
result until the scheduling step. If the consumer node is not able to
use the pipeline register, a mov node may have to be created during the
scheduling step.
As part of the ppir scheduler simplification, and now that the ppir
scheduler supports pipeline register dependencies, this can be
simplified by always creating a single mov node outputting to a normal
register that can be used directly by all consumers.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
The select operation relies on the select condition coming from the
result of the alu scalar mult slot, in the same instruction.
The current implementation creates a mov node to be the predecessor of
select, and then relies on an exception during scheduling to ensure that
both ops are inserted in the same instruction.
Now that the ppir scheduler supports pipeline register dependencies,
this can be simplified by making the mov explicitly output to the fmul
pipeline register, and the scheduler can place it without an exception.
Since the select condition can only be placed in the scalar mult slot,
differently than a regular mov, define a separate op for it.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
The ppir scheduler grew to be rather complicated and containing many
exceptions as it also has to take care of inserting additional nodes
when it is mandatory for nodes to be in the same instruction.
As such, the lima lowering and scheduling process can be difficult to
understand and maintain.
The ppir lowering step created nodes hoping that the scheduler would
notice the exception and do the right thing.
This proposal adds a simple refactor to the scheduler so that it places
nodes with pipeline registers in the same instruction.
With the scheduler handling this in a general way, it is possible to
create same-instruction dependencies by using pipeline registers during
the lowering stage.
This is simpler to maintain because now we can make these dependencies
explicit in a single place (lowering), and we can drop exceptions from
scheduling.
Reducing the complexity of the scheduler is also useful as preparatory
work to support control flow in ppir.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
On recent versions of Meson (0.47+) these are synonymous, but we still
support older versions than that, so let's use the correct syntax to
avoid confusing users of old Meson versions.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Right now, all it does is provide the new standard `static_assert()` name.
Fixes: fbf7c38da3 ("egl/wayland: use bitset.h for `formats` bit set")
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Tested-by: Bhushan Shah <bshah@kde.org>
According to Mac OSX's man page [1], this is how we should get the list
of exported symbols:
nm -g -P foo.dylib
-g to only show the exported symbols
-P to show it in a "portable" format, ie. readable by a script
Since this is supported by GNU nm as well, let's use that everywhere,
although some care needs to be taken as there are some differences in
the output.
[1] https://www.unix.com/man-page/osx/1/nm/
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Tested-by: Vinson Lee <vlee@freedesktop.org>
Utgard PP is vec4, but some operations are scalar, utilize
NIR vec to scalar lowering pass and indicate operations that we
want to lower.
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
The brw_wm_prog_data_dispatch_grf_start_reg and _prog_offset helpers
read the _NPixelDispatchEnable fields from 3DSTATE_PS to figure out
which bits to pull out of the prog data and stuff where. Therefore,
they need to be called with the final set of _NPixelDispatchEnable bits
after we've done the workaround for SIMD32 and 16x MSAA. Otherwise, if
you end up with a somewhat odd combination of enables, the GRF start reg
and KSP data ends up in the wrong slots. In particular, running
SIMD32-only is broken but several other combinations are as well.
Fixes: 5445c176e2 "iris: Disable SIMD32 when using a 16x MSAA..."
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The asm code expects a specific kind of implementation, but Android
uses something different (emutls).
Turns out mesa has a fallback with pthread_getspecific, with an
optimization if only a single thread is used. emutls also uses
getspecific, so let's just use the optimized mesa implementation.
Fixes: 20294dceeb "mesa: Enable asm unconditionally, now that gen_matypes is gone."
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
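A rough C sketch of the pthread_getspecific fallback with the single-thread
fast path; the names and structure are illustrative, not the actual Mesa
implementation, and ctx_key is assumed to be created with
pthread_key_create() at library init.

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_key_t ctx_key;
    static void *single_thread_ctx;  /* fast path while only one thread runs */
    static bool multithreaded;

    static void *
    get_current_context(void)
    {
       if (!multithreaded)
          return single_thread_ctx;
       return pthread_getspecific(ctx_key);
    }

    static void
    set_current_context(void *ctx)
    {
       if (!multithreaded)
          single_thread_ctx = ctx;
       pthread_setspecific(ctx_key, ctx);
    }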
This automates the include_directories and dependencies tracking so that
all users of libmesa_util don't need to add them manually.
Next commit will remove the ones that were only added for that reason.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Eric Anholt <eric@anholt.net>
Tested-by: Vinson Lee <vlee@freedesktop.org>
No functional changes, just breaks out a megamonster function and fixes
the indentation.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We need three independent sources to support indirect SSBO writes (as
well as textures with both LOD/bias and offsets). Now is a good time to
make sources just an array so we don't have to rewrite a ton of code if
we ever needed a fourth source for some reason.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
As far as I know, there's no such thing as a load/store op that only
takes its argument in r27. We just need to set the appropriate arg_1
field in the RA to specify other registers if we want them.
To facilitate this, various RA-related changes are needed across the
compiler; this should also fix indirect offsets which were implicitly
interpreted as "r27-only" despite not even passing through RA yet. One
ripple effect change is switching the move insertion point and adjusting
the liveness analysis accordingly, so while this was intended as a
purely functional change, there are some shader-db changes:
total instructions in shared programs: 3511 -> 3498 (-0.37%)
instructions in affected programs: 563 -> 550 (-2.31%)
helped: 12
HURT: 0
helped stats (abs) min: 1 max: 2 x̄: 1.08 x̃: 1
helped stats (rel) min: 0.93% max: 5.00% x̄: 2.58% x̃: 2.33%
95% mean confidence interval for instructions value: -1.27 -0.90
95% mean confidence interval for instructions %-change: -3.23% -1.93%
Instructions are helped.
total bundles in shared programs: 2067 -> 2067 (0.00%)
bundles in affected programs: 398 -> 398 (0.00%)
helped: 7
HURT: 4
helped stats (abs) min: 1 max: 1 x̄: 1.00 x̃: 1
helped stats (rel) min: 1.54% max: 10.00% x̄: 5.04% x̃: 5.56%
HURT stats (abs) min: 1 max: 2 x̄: 1.75 x̃: 2
HURT stats (rel) min: 2.13% max: 4.26% x̄: 3.72% x̃: 4.26%
95% mean confidence interval for bundles value: -0.95 0.95
95% mean confidence interval for bundles %-change: -5.21% 1.50%
Inconclusive result (value mean confidence interval includes 0).
total quadwords in shared programs: 3464 -> 3454 (-0.29%)
quadwords in affected programs: 1199 -> 1189 (-0.83%)
helped: 18
HURT: 4
helped stats (abs) min: 1 max: 1 x̄: 1.00 x̃: 1
helped stats (rel) min: 1.03% max: 5.26% x̄: 2.44% x̃: 1.79%
HURT stats (abs) min: 2 max: 2 x̄: 2.00 x̃: 2
HURT stats (rel) min: 2.56% max: 2.82% x̄: 2.63% x̃: 2.56%
95% mean confidence interval for quadwords value: -0.98 0.07
Inconclusive result (value mean confidence interval includes 0).
total registers in shared programs: 383 -> 373 (-2.61%)
registers in affected programs: 56 -> 46 (-17.86%)
helped: 12
HURT: 2
helped stats (abs) min: 1 max: 1 x̄: 1.00 x̃: 1
helped stats (rel) min: 9.09% max: 33.33% x̄: 29.58% x̃: 33.33%
HURT stats (abs) min: 1 max: 1 x̄: 1.00 x̃: 1
HURT stats (rel) min: 20.00% max: 50.00% x̄: 35.00% x̃: 35.00%
95% mean confidence interval for registers value: -1.13 -0.29
95% mean confidence interval for registers %-change: -35.07% -5.63%
Registers are helped.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Load/store's main "argument 0" already has its swizzle handled
correctly (for stores, that is). But the tinier arguments, the compact
ones with a component select but not a full swizzle, those are not yet
handled. Let's do something about that!
Rather than an ersatz thing that sort of looks like successors but is in
fact just the source order traversal with some backward jumps hacked in
for loops... construct an actual flow graph so we can do analysis
sanely.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The 16-bit field can be decomposed to two independent 8-bit fields, each
representing a single (additional) argument to the load/store op,
generally used for encoding registers. Addressable registers here are
substantially limited compared to the main register in a load/store op.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
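A small C sketch of the field split; the field names and which half holds
which argument are assumptions for illustration, not the documented layout.

    #include <stdint.h>

    struct midgard_ldst_args {
       uint8_t arg_1;   /* extra register argument #1 */
       uint8_t arg_2;   /* extra register argument #2 */
    };

    static uint16_t
    pack_ldst_args(struct midgard_ldst_args a)
    {
       return (uint16_t)a.arg_1 | ((uint16_t)a.arg_2 << 8);
    }

    static struct midgard_ldst_args
    unpack_ldst_args(uint16_t word)
    {
       return (struct midgard_ldst_args){
          .arg_1 = word & 0xff,
          .arg_2 = word >> 8,
       };
    }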
Rather than allocating a huge (64MB) polygon list on context creation
and sharing it across framebuffers, we instead allocate polygon lists as
BOs (which consistently hit the cache) sized appropriately; for about a
month, we've known how to calculate the polygon list size so this has
only recently become possible.
The good news is we can render to truly massive framebuffers without
crashing and, more importantly, we eliminate the 64MB upfront overhead.
If a list that size isn't actually needed, it's not allocated.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
The wallpaper blit is a bit special in that the operation is targeting
the current FB, but the u_blitter logic creates a new surface for it
which makes util_framebuffer_state_equal() return false. In that case
we don't want a new FB descriptor to be emitted/attached, so let's just
copy the new state into ctx->pipe_framebuffer and exit the function.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
If the current FB matches the new one there's nothing to be done in
panfrost_set_framebuffer_state(). By bailing out early in that case we
avoid emitting new FB descriptors (the old ones are still valid).
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
No need to emit SFBD/MFBD at frame invalidation. They can be emitted
when the framebuffer is attached, which saves us a potential FB desc
re-allocation if a new FB is bound after the swap.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
ctx->job is supposed to serve as a cache to avoid a hash table lookup
every time we access the job attached to the currently bound FB, except
it was never assigned to anything but NULL.
Fix that by adding the missing assignment in panfrost_get_job_for_fbo().
Also add a missing NULL assignment in the ->set_framebuffer_state()
path.
While at it, add extra assert()s to make sure ctx->job is consistent.
Fixes: 59c9623d0a ("panfrost: Import job data structures from v3d")
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
When the application does not ask for robust buffer access.
Only implemented the check in radv.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
This allows us to avoid having to rename all the PIPE_OS_* at once while
still making sure PIPE_OS_* and DETECT_OS_* are always in sync.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
It has nothing to do with the PIPE_SUBSYSTEM_* stuff from gallium.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
PIPE_SUBSYSTEM_DRI was introduced in dacfef1589 ("gallium: New
configuration header.") 11 years ago, and was never used.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Mostly copied from src/gallium/include/pipe/p_config.h, so I kept its
copyright and authorship.
Other than the obvious rename, the big difference is that these are
always defined, to be used as `#if DETECT_OS_LINUX`.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
We can have a scenario like:
A -> B
A -> C -> B
When adding the A->C dependency, it doesn't really matter that C depends
on something that A depends on, that isn't a necessary condition for a
dependency loop.
Instead what we want to know is that nothing C depends on, directly or
indirectly, depends on A. We can detect this by recursively OR'ing the
dependents_mask of C and all it's dependencies.
Signed-off-by: Rob Clark <robdclark@chromium.org>
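A minimal C sketch of the recursive check, assuming each batch occupies one
bit in the masks; names are illustrative, not the freedreno ones.

    #include <stdbool.h>
    #include <stdint.h>
    #include <strings.h>   /* ffs() */

    struct batch {
       unsigned idx;                /* this batch's bit position */
       uint32_t dependents_mask;    /* batches that depend on this one */
       uint32_t dependencies_mask;  /* batches this one depends on */
    };

    /* OR together the dependents of `b` and of everything `b` depends on. */
    static uint32_t
    recursive_dependents_mask(const struct batch *batches, const struct batch *b)
    {
       uint32_t mask = b->dependents_mask;
       uint32_t deps = b->dependencies_mask;

       while (deps) {
          unsigned i = ffs(deps) - 1;
          deps &= ~(1u << i);
          mask |= recursive_dependents_mask(batches, &batches[i]);
       }
       return mask;
    }

    /* Adding A -> C forms a loop only if A already shows up among C's
     * direct or indirect dependents. */
    static bool
    would_create_cycle(const struct batch *batches,
                       const struct batch *a, const struct batch *c)
    {
       return recursive_dependents_mask(batches, c) & (1u << a->idx);
    }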
If no clear, and no geometry according to VSC_STATE[pipe] we can skip
the tile entirely. If there is a fast-clear, we can't skip restore
(clear) or resolve IBs, but we can still skip draw IB.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Check VSC_SIZE/VSC_SIZE2 regs from cmdstream to detect overflow, and
skip use of VSC visibility stream when overflow is detected, to avoid
GPU hangs. This is done w/ introduction of some CP_REG_TEST/
CP_COND_REG_EXEC packet pairs.
In addition, eventually (after a frame or two) detect the condition and
resize the VSC buffers until overflow no longer happens.
Note that this significantly reduces the initial size of the VSC
buffers, backing out a previous hack to make them 16x larger than
what should be typically required (the previous "solution" for
VSC overflow).
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
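A conceptual C sketch of the CPU-side part of the recovery, with assumed
struct and field names; the in-frame skip itself is done in the command
stream with the CP_REG_TEST/CP_COND_REG_EXEC pairs mentioned above.

    #include <stdbool.h>
    #include <stdint.h>

    struct vsc_state {
       uint32_t bo_size;   /* current size of the VSC stream buffer */
       bool overflow;      /* set once overflow has been observed */
    };

    /* Called once the CPU sees how much VSC data a frame produced. */
    static void
    handle_vsc_overflow(struct vsc_state *vsc, uint32_t used_size)
    {
       if (used_size < vsc->bo_size)
          return;

       vsc->overflow = true;    /* this frame's draw pass ignores the stream */
       vsc->bo_size *= 2;       /* grow the buffer for subsequent frames */
    }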
Seems this isn't needed anymore on a6xx to control whether visibility
stream is used. And it would be hard to deal with if it was, for
disabling use of VSC stream in draw pass. So just remove it and
simplify things.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Rename to "control_mem", and switch to using a struct to manage the
layout, rather than just ad-hoc hard-coded offsets.
For recovering from VSC stream overflow, we'll need to add more, but
best to clean it up first.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Fix some #ifdef'd bitrot, and get rid of #ifdef so it doesn't bitrot
again.
And add prints for per-tile state.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Since it ends up contended, it is a bit of a bottleneck for workloads
with high driver overhead. Worth nearly +10% at gfxbench driver2.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Not all flush paths come thru fd_context_flush(), so we should also set
last_fence in the batch flush path. This avoids some no-op flushes just
to get a fence. For example when pctx->flush_resource() triggers a
flush.
We should probably keep the last_fence update in fd_context_flush() as
well to handle deferred flush case.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
The pscreen param was just there to satisfy pipe_screen::fence_reference
But some of the internal uses passed NULL for the screen, which is a
bit ugly. Instead drop the param and add a shim function to plug into the
screen.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
We would like to flip ops to have a constant in the second place to
enable inlining of the constant.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
nir_lower_alu_to_scalar can now be used to only lower certain ops, so we
don't need the custom pass. And we can lower fall_equal/fany_nequal with
lower_vector_cmp instead.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Rob Clark <robdclark@chromium.org>
Previously we would get a fmov with modifiers, but now that mov has no type
these opcodes need to be supported.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Rob Clark <robdclark@chromium.org>
int_to_float needs to come after bool_to_float, and lower_to_source_mods
needs to come after both, since they don't deal with source mods.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Rob Clark <robdclark@chromium.org>
Not sure how this happened, but apparently all cubemaps need swapped XY.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Rob Clark <robdclark@chromium.org>
This line was mistakenly added while there is already a `-D tools=all`
a few lines below.
Fixes: f60defa72d ("gitlab-ci: Add a shader-db run using v3d on drm-shim.")
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
The driver should now rely on cmask_offset because CMASK can be
disabled by the driver for various reasons (e.g. mipmaps). Apply the
same change for FMASK, although it should be useless.
Fixes: ad1bc8621d ("radv: remove radv_get_image_fmask_info()")
Fixes: 10d08da52c ("radv/gfx10: add missing dcc_tile_swizzle tweak")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
LLVM 9 does not have a 64-bit buffer compswap intrinsic, so this
extracts the ptr, does a bound check and then uses a cmpxchg LLVM
instruction.
Not ideal, but the earliest release we're going to get a proper
intrinsic is LLVM 10.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
From the EGL_KHR_create_context spec:
"* If OpenGL 3.1 is requested, the context returned may implement
any of the following versions:
* Version 3.1. The GL_ARB_compatibility extension may or may
not be implemented, as determined by the implementation.
* The core profile of version 3.2 or greater."
Fixes CTS tests:
dEQP-EGL.functional.create_context_ext.gl_31.rgb888_depth_stencil
dEQP-EGL.functional.create_context_ext.robust_gl_31.rgb888_depth_stencil
dEQP-EGL.functional.create_context_ext.gl_31.rgb888_depth_no_stencil
dEQP-EGL.functional.create_context_ext.robust_gl_31.rgb888_depth_no_stencil
dEQP-EGL.functional.create_context_ext.gl_31.rgba8888_depth_no_stencil
dEQP-EGL.functional.create_context_ext.gl_31.rgb888_no_depth_no_stencil
dEQP-EGL.functional.create_context_ext.robust_gl_31.rgba8888_depth_no_stencil
dEQP-EGL.functional.create_context_ext.robust_gl_31.rgb888_no_depth_no_stencil
dEQP-EGL.functional.create_context_ext.gl_31.rgba8888_no_depth_no_stencil
dEQP-EGL.functional.create_context_ext.robust_gl_31.rgba8888_no_depth_no_stencil
dEQP-EGL.functional.create_context_ext.gl_31.rgba8888_depth_stencil
dEQP-EGL.functional.create_context_ext.robust_gl_31.rgba8888_depth_stencil
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This makes it cheaper to just change the dynamic offsets with
the same descriptor sets.
This optimization was reverted a while back because of random GPU
hangs on GFX9; now it looks fine, at least CTS no longer hangs on GFX9
and it doesn't hang on GFX10 either.
It fixes a performance problem with Wolfenstein Youngblood.
Suggested-by: Philip Rebohle <philip.rebohle@tu-dortmund.de>
This exposes the textureSamplesIdenticalEXT function in GLSL.
We enable it for iris and radeonsi, because their compilers already
have support for this. Tested on Intel Kabylake and AMD Vega 64.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Specifically the optimization of a conditional BREAK + WHILE sequence
into a conditional WHILE seems pretty broken. The list of successors
of "earlier_block" (where the conditional BREAK was found) is emptied
and then re-created with the same edges for no apparent reason. On
top of that the list of predecessors of the block immediately after
the WHILE loop is emptied, but only one of the original edges will be
added back, which means that potentially several blocks that still
have it on their list of successors won't be on its list of
predecessors anymore, causing all sorts of hilarity due to the
inconsistency in the control flow graph.
The solution is to remove the code that's removing valid edges from
the CFG. cfg_t::remove_block() will already clean up after itself.
The assert in bblock_t::combine_with() also needs to be removed since
we will be merging a block with multiple children into the first one
of them.
Found the issue on a hardware enabling branch originally, but
apparently somebody reproduced the same problem independently on
master in the meantime.
Fixes: d13bcdb3a9 ("i965/fs: Extend predicated break pass to predicate WHILE.")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111009
Cc: jiradet.jd@gmail.com
Cc: Sergii Romantsov <sergii.romantsov@globallogic.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: mesa-stable@lists.freedesktop.org
Tested-by: Paul Chelombitko <qamonstergl@gmail.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
The device info initializer makes several functions internal:
- handling of device override
- updating topology from kernel information
The implementation file is slightly reordered due to the renamed
functions being static.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Rename the original device info initialization routine so callers
don't mistakenly call the wrong one:
gen_get_device_info_from_fd:
Queries kernel for full device info, including topology
details.
gen_get_device_info_from_pci_id:
Partially initializes device info based on PCI ID lookup, when
the kernel is not available.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
With perf queries, initializing the device info is much more complex
than just getting a PCI ID and calling gen_get_device_info. This commit
adds a new gen_get_device_info_from_fd helper in common code which does
all of the requisite kernel queries to get device info including all of
the topology information.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
When gen_device_info updates the topology in its initializer, the
kernel queries will fail silently. Iris and anv have minimum
kernel requirements that support the queries. i965 must verify kernel
support before reporting OA metrics.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
i965 links against libdrm for drmIoctl, but anv and iris both
re-implement this routine to avoid the dependency.
intel/dev also needs an ioctl wrapper, so let's share the same
implementation everywhere.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes a hang (and abort) on empty shaders, which you shouldn't have
anyway but better safe than sorry. DCE going on the fritz is no reason
to freeze the system.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Important fields relating to shader state and UBOs are filled out from
this (misnomer) function.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The path for compute shader compiles resembles the graphic shader
compile path, although it is substantially simpler as we don't need any
shader keying.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We want this routine to be generic across graphics and compute, so let
the caller deal with the typing.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We already have helpers for packing invocations (due to its role in
instanced vertex shaders), so we can reuse this drop in for compute
shaders.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Squint at it hard enough and you realize it's the beginning of an
SFBD... I guess...
A compute shader with register spilling would be able to confirm this,
but we would expect to see the first field | 1 and an address splattered
later, setting up TLS.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It's a little verbose, but this way we can support other shader stages
without too much contortion.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It's still incomplete, but we're able to hook into launch_grid to
create a stub COMPUTE job.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Rather than disparate variables, let's use an array of payloads indexed
by the shader stage.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
last_tiler.gpu may be NULL at flush time despite no clear and existing
jobs -- if we executed a compute-only workload.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We *could* expose TGSI as well -- we pipe it through tgsi_to_nir for
Gallium-internal shaders anyway -- but we'd rather not.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Values reported here aren't remotely correct, but it's a start to just
get the entrypoint stubbed out.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Interestingly, this requires no compiler changes. It's just exposed as a
special varying.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This allows us to decode asymmetric varyings correctly, which occurs
with e.g. gl_FrontFacing.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We have a cap bit for gallium and a GLSL compiler flag to control this.
Just trust what GLSL gives us and stop forcing it. In order for this to
be safe, we have to advertise another cap in some of the gallium
drivers.
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Conceptually follows util_set_vertex_buffers_mask but for SSBOs.
v2: Fix missing ~ when clearing mask. Adjust mask behaviour to match
freedreno/v3d when buffer == NULL.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
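A minimal C sketch of the mask-tracking pattern (hypothetical helper name,
and it assumes start_slot + count stays below 32 so the shift is defined),
including the '~' that the v2 note refers to.

    #include <stdbool.h>
    #include <stdint.h>

    static void
    set_shader_buffers_mask(uint32_t *enabled_mask, const bool *has_buffer,
                            unsigned start_slot, unsigned count)
    {
       uint32_t slot_mask = ((1u << count) - 1) << start_slot;
       uint32_t new_bits = 0;

       for (unsigned i = 0; i < count; i++) {
          if (has_buffer[i])
             new_bits |= 1u << (start_slot + i);
       }

       *enabled_mask &= ~slot_mask;  /* clear the whole updated range first */
       *enabled_mask |= new_bits;    /* then set bits only for bound buffers */
    }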
These drivers are kmsro drivers so they should be part of the kmsro #if
This fixes missing imx_drm driver when building with only freedreno+kmsro
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Eric Anholt <eric@anholt.net>
Currently we use the python package to manage repositories. At the same
time we also do that by hand - since it's a trivial echo to a file.
Stay consistent, remove the package and manage things manually.
Acked-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Use value "2" to signal that lowering is needed and supported and enable
it accordingly.
v2: - Note in CAP description that this lowering currently requires TGSI
- use "true" instead of GL_TRUE (both Erik)
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
v1 implemented by Erik Faye-Lund <erik.faye-lund@collabora.com>
v2: - Add GS and TES
- fix constants state update flags (Erik)
v3: don't update rasterizer when depth_clamp is lowered (Erik)
v4: Correct NewDepthClamp and also set flags for NewClipControl (Erik)
v5: Also set shader_has_one_variant property according to possible
depth_clamp lowering (Marek)
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
v2: Use file scope defined depth_range_state in common
v3: - don't use the one_shader_variant property, as this is
not correct (Marek)
- also use tests on available shader stages to enable
depth_clamp lowering
v4: Don't use key.st, use st directly (Marek)
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
v1: implemented by Erik Faye-Lund <erik.faye-lund@collabora.com>
v2: Add handling of the ARB_clip_control depth mode
v3: Move depth_range_state to file scope and remove trailing zeros (Erik)
v4: - don't use the one_shader_variant property, as this is not correct (Marek)
- also use tests on available shader stages to enable depth_clamp lowering
V5: Don't use key.st, use st directly (Marek)
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This is a TGSI pass that lowers depth-clamping into shader-operations,
by replacing the depth-value with 0 (a z-coordinate of zero will always
pass the OpenGL depth test conditions), and using a dedicated varying to
interpolate the real depth-value instead. Finally we replace the
depth-output in the fragment shader.
v1 implemented by Erik Faye-Lund <erik.faye-lund@collabora.com>
v2: Add support for handling depth clip mode, and refactor code
v3: - Rename *_vs functions to *_last_vertex_stage (Erik)
- Use 0.0 depth to avoid clipping (Erik)
v4: Fix inversion of bool value for clip control property
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
These were left after a rebase and happen to make
NIR_INTRINSIC_SWIZZLE_MASK == NIR_INTRINSIC_SRC_ACCESS, which is how it
was noticed.
Fixes: 6f20643b47 ("nir: Allow qualifiers on copy_deref and image instructions")
Cc: Connor Abbott <cwabbott0@gmail.com>
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
We were emitting 3DSTATE_INDEX_BUFFER on every indexed draw, even if
back-to-back draws referred to the same index buffer. This improves
drawoverhead scores in the DrawElements cases by about 10%, by giving
us even more minimal batches.
this makes dri2_get_mapping_by_fourcc accessible from dri_helpers.h
and does a direct lookup on the fourcc id to match the pipe format
v2 (Ken): Allow map to be NULL, use img->texture->format.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
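A tiny C stand-in showing the direct-lookup idea; the struct and field names
are assumed for the sketch, not the real dri2_format_mapping layout.

    #include <stddef.h>

    struct format_mapping {
       int fourcc;        /* DRM fourcc code */
       int pipe_format;   /* corresponding pipe format */
    };

    static const struct format_mapping *
    get_mapping_by_fourcc(const struct format_mapping *table, size_t len,
                          int fourcc)
    {
       for (size_t i = 0; i < len; i++) {
          if (table[i].fourcc == fourcc)
             return &table[i];
       }
       return NULL;   /* unknown fourcc */
    }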
The mali pp doesn't support integers and some nir_algebraic
optimizations may result in ops that are not easily lowerable to floats,
so disable optimizations resulting in bitops.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Jonathan Marek <jonathan@marek.ca>
Optimizations that insert bitshift or bitwise operations should not be
applied on GPUs that don't support integer operations.
The .lower_bitshift could be used to control the bitshift related ones,
but there was also one bitwise optimization uncovered.
Since only lima and freedreno use this option and the use case is that
no bit operations are wanted, let's rename it to .lower_bitops and use
it to control all bitops related optimizations.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Jonathan Marek <jonathan@marek.ca>
Now that we have fsum in nir, we can move fdot lowering there.
This helps reduce ppir complexity and enables the lowered ops to be part
of other nir optimizations in the optimization loop.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
The Mali400 pp doesn't implement fdot but has fsum3 and fsum4, which can
be used to optimize fdot lowering. fsum2 is not implemented and can be
further lowered to an add with the vector components.
Currently lima ppir handles this lowering internally, however this
happens in a very late stage and requires a big chunk of code compared
to a nir_opt_algebraic lowering.
By having fsum in nir, we can reduce ppir complexity and enable the
lowered ops to be part of other nir optimizations in the optimization
loop.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The 'varying fetch' pp instruction deals only with coordinates, and
'texture fetch' deals only with the sampler index.
Previously it was not possible to clearly map ppir_op_load_coords and
ppir_op_load_texture to pp instructions as the source coordinates were
kept in the ppir_op_load_texture node, making this harder to maintain.
The refactor is made with the attempt to clearly map ppir_op_load_coords
to the 'varying fetch' and ppir_op_load_texture to the 'texture fetch'.
The coordinates are still temporarily kept in the ppir_op_load_texture
node as nir has both sampler and coordinates in a single instruction and
it is only possible to output one ppir node during emit. But now after
lowering, the sources are transferred to the (always) created
ppir_op_load_coords node, and it should be possible to directly map them
to their pp instructions from there onwards.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Lower texture projection in ppir using nir_lower_tex.
This will insert a mul with the coordinate division before the load
varying.
Even though the lima pp supports projection in the load varying
instruction while loading the coordinates (from a register or a
varying), it requires that both the coordinates and projector be
components in a single register.
nir currently handles them in separate ssa, and attempting to merge them
manually may end up in worse code than just doing the coordinate
division manually. So for now let's just lower the projection to add
support for it in lima.
In the future, an optimization pass may be implemented in lima to ensure
that both coords and projector come in the same register, then this
lowering may be disabled and in this case lima may use the built-in
projection and save the mul instruction from lowering.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
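A minimal sketch of enabling the projector lowering through nir_lower_tex,
assuming the lower_txp bitmask is the relevant option here; all other options
are left at their defaults.

    #include <stdbool.h>
    #include "nir.h"

    static void
    lower_tex_projection(nir_shader *s)
    {
       nir_lower_tex_options options = {
          .lower_txp = ~0u,   /* lower projection for every sampler dim */
       };
       NIR_PASS_V(s, nir_lower_tex, &options);
    }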
Corresponds to the normalized coordinates? flag on images in OpenCL and
evidently also shows up in GL, so let's wire it in.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It's just a bit field containing some flags; there's no need for all the
macro magic.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We still don't know what it is, but from a newer trace we now know it's
half the size we thought it was.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We had them backwards in both the command stream and the Midgard stack.
In OpenGL ES 2.0, they're always the same, but in Vulkan/later-GL/CL
they diverge so we can fix this.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Images are implemented (in part) as special attributes, so include
support for decoding this.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
this makes dri2_get_mapping_by_fourcc accessible from dri_helpers.h
and does a direct lookup on the fourcc id to match the pipe format
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
st/dri:
this adds a table (similar to the one in i965) which provides
mappings for turning various planar formats into multiple sampler views.
whereas only NV12 and IYUV were supported, now many more formats are
supported here:
* P0XX
* YUV4XX
* YVU4XX
* AYUV
* XYUV
* YUYV
* UYVY
the table is used directly to handle image creation, simplifying
a lot of code and resolving related TODO/FIXME items where workarounds were
previously in place to manage NV12 and IYUV formats exclusively
st/mesa:
the changes here relate to setting up samplers for the planar formats.
this requires:
* checking for driver support for all the sampler formats
* creating the samplers with the corresponding formats and swizzling
* running nir_lower_tex with the appropriate options to trigger the lowering
for each plane->sampler
Fixes kwg/mesa#36
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
These were seriously messed up beyond all recognition. How we're passing
shaders.random.* is a mystery.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
If any scissor rectangles are enabled, then we need to set proper
scissor rectangles for all viewports. But if the scissor test is
entirely disabled, then we can skip updating any scissor rectangles.
Without this step, we were updating the scissor rectangles based on
the current framebuffer size. So if an app rendered to a variety of
render targets at different sizes, with scissor test disabled each
time, we'd still be continually updating the scissor rectangles,
even though it's not necessary.
In Civilization VI, this drops us from 310-350 set_scissor_state
calls per frame to 0, as it doesn't appear to use scissor testing.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Randomly came across this file, which was likely only used by autotools
to pass arguments to the test.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Otherwise that variable is only used in an assert() and would need an
ASSERTED to avoid the warning.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
MAYBE_UNUSED is going away, so let's replace legitimate uses of it with
UNUSED, which the former aliased to so far anyway.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
MAYBE_UNUSED is going away, so let's replace legitimate uses of it with
UNUSED, which the former aliased to so far anyway.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
MAYBE_UNUSED is going away, so let's replace legitimate uses of it with
UNUSED, which the former aliased to so far anyway.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
MAYBE_UNUSED is going away, so let's replace legitimate uses of it with
UNUSED, which the former aliased to so far anyway.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
MAYBE_UNUSED is going away, so let's replace legitimate uses of it with
UNUSED, which the former aliased to so far anyway.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
MAYBE_UNUSED is going away, so let's replace legitimate uses of it with
UNUSED, which the former aliased to so far anyway.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Often times, the depth buffer is entirely disabled, but color render
targets change. For example, GenerateMipmaps will change the color
render target for each miplevel, but there is no depth buffer.
In the Civilization VI benchmark, this drops the median number of
3DSTATE_DEPTH_BUFFER etc. packets emitted per frame from 472 to 34.
These functions don't support display lists, as specified:
Should the selector-free versions of various OpenGL 3.0 and
EXT_framebuffer_object framebuffer object commands not be allowed
in display lists [...]?
RESOLVED: Yes
When transitioning gl_FragCoord over to a system value, we missed one
instance of VARYING_SLOT_POS in i965. As of this commit, i965 has no
references to VARYING_SLOT_POS.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111263
Fixes: 4bb6e6817e "intel: Use a system value for gl_FragCoord"
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The issue here was discovered by a set of Vulkan CTS tests:
dEQP-VK.glsl.derivate.*.dynamic_*
These tests use ballot ops to construct a branch condition that takes
the same path for each 2x2 quad but may not be uniform across the whole
subgroup. They then tests that derivatives work and give the correct
value even when executed inside such a branch. Because the derivative
isn't executed in uniform control-flow and the values coming into the
derivative aren't smooth (or worse, linear), they nicely catch bugs that
aren't uncovered by simpler derivative tests.
Unfortunately, these tests require Vulkan and the equivalent GL test
would require the GL_ARB_shader_ballot extension which requires int64.
Because the requirements for these tests are so high, it's not easy to
test on older hardware and the bug is only proven to exist on gen7;
gen4-6 are a conjecture.
Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Matt Turner <mattst88@gmail.com>
It'll grow further, and we'd like to avoid adding an additional
parameter to fs_generator() for each new piece of data.
v2 (idr): Rebase on 17 months. Track a visitor instead of a cfg.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
log2 is tricky because there cannot be a move between complex1 and
postlog2. We can't guarantee that scheduling complex1 will succeed when
we schedule postlog2, so we try to schedule complex1 and if it fails we
back out by rewriting the postlog2 as a move and introducing a new
postlog2 so that we can try again later.
Signed-off-by: Connor Abbott <cwabbott0@gmail.com>
Acked-by: Qiang Yu <yuq825@gmail.com>
See https://gitlab.freedesktop.org/lima/mesa/issues/94 for the gory
details of why this is needed. For *_impl this is easy, since it never
increases register pressure and it goes in the complex slot hence it
never counts against max nodes. It's a bit more challenging for
complex2, since it does count against max nodes, so we need to change
the reservation logic to reserve an extra slot for complex2 when
scheduling complex1. This second part isn't strictly necessary yet, but
it will be for exp2.
Signed-off-by: Connor Abbott <cwabbott0@gmail.com>
Acked-by: Qiang Yu <yuq825@gmail.com>
Set all the handles to VK_NULL_HANDLE:
"If the creation of any of those descriptor sets fails, then the implementation
must destroy all successfully created descriptor set objects from this command,
set all entries of the pDescriptorSets array to VK_NULL_HANDLE and return the
error."
(Vulkan 1.1.117 Spec, section 13.2)
CC: <mesa-stable@lists.freedesktop.org>
Reviewed-by: Dave Airlie <airlied@redhat.com>
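A minimal C sketch of the required failure behaviour; the per-set allocation
and free callbacks stand in for the driver-specific code and are not radv
function names.

    #include <vulkan/vulkan.h>

    typedef VkResult (*alloc_one_fn)(VkDevice dev, uint32_t i, VkDescriptorSet *out);
    typedef void (*free_one_fn)(VkDevice dev, VkDescriptorSet set);

    static VkResult
    allocate_sets(VkDevice dev, uint32_t count, VkDescriptorSet *sets,
                  alloc_one_fn alloc_one, free_one_fn free_one)
    {
       VkResult result = VK_SUCCESS;
       uint32_t i;

       for (i = 0; i < count; i++) {
          result = alloc_one(dev, i, &sets[i]);
          if (result != VK_SUCCESS)
             break;
       }

       if (result != VK_SUCCESS) {
          for (uint32_t j = 0; j < i; j++)
             free_one(dev, sets[j]);        /* destroy what was created */
          for (uint32_t j = 0; j < count; j++)
             sets[j] = VK_NULL_HANDLE;      /* required by the spec text above */
       }
       return result;
    }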
When vkGetQueryPoolResults() is called with VK_QUERY_RESULT_WAIT_BIT
set, the driver is supposed to wait for the query to become available
before returning.
Currently, radv returns once the query is indeed ready, but it returns
VK_NOT_READY. It also fails to populate the results.
The problem is a missing volatile in the secondary check for query
availability. This patch removes the secondary check altogether since it
is redundant with the preceding loop.
This bug was found with an unreleased version of SteamVR.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
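A minimal C sketch of why the volatile matters on the wait path; a real
driver would yield or sleep rather than spin like this.

    #include <stdint.h>

    /* The GPU writes the availability word; read it through a volatile
     * pointer so the compiler cannot cache the load, and use that same
     * value for the ready / not-ready decision. */
    static void
    wait_for_available(const uint64_t *availability_word)
    {
       volatile const uint64_t *avail = availability_word;

       while (*avail == 0) {
          /* busy-wait; the real driver should back off here */
       }
    }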
program_invocation_name and program_invocation_short_name are both GNU
extensions. I don't believe one can exist without the other, so only
check for program_invocation_name.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
It's better to test for needed functions instead of using external
knowledge about presence in this or that C library.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Same rationale as the previous patch, but additionally these checks just
seem entirely unnecessary. pthread_self() has been used in Mesa since at
least 1999.
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
glibc-2.12 was released in 2010. No one is building new Mesa against 9
year old glibc, and removing these checks allows the code to work on
other C libraries like musl.
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
We can have a access flag already set here so just augment the
existing ones.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 0fb61dfdeb ("spirv: propagate access qualifiers through ssa & pointer")
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
With the help of Sagar, Ian and Ivan.
v2: Fix dependencies (Ian Romanick)
v3: 1) fix function name (Marek Olsak)
2) Add check for extension enable (Marek Olsak)
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This will be used to support one of the function from
Ext_texture_shadow_lod specification.
With the help of Sagar, Ian and Ivan.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Conceptually, r28-r29 (as used for reading) and r28-r29 (as used for
writing) aren't registers at all, merely push/pull arrangements. So you
can't feed a texture result back into itself without explicitly moving
in the middle.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
instr->type is the type of the array element, not the type of the array
being dereferenced. Rather than fishing out the parent type, just use
parent->num_children which should be the length plus 1. While we're here
add another assert for the issue fixed by the previous commit.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111251
Fixes: 156306e5e6 ("nir/find_array_copies: Handle wildcards and overlapping copies")
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Added support for avx512 scatter instruction. Non-avx512 will
now call into a C function to do the scatter emulation.
This has better jit compile performance than
the previous approach of jitting scalar loops.
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
This move gets us back to parity with global manager
in that we can dump render context buckets now.
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
In most cases this is not needed because usually, when a separate
stencil is written, the parent resource is also written.
This is needed if we have a separate stencil, no depth buffer, and the
source and destination are the same, as in that case the stencil can be
updated but not the parent resource (for example if you are blitting
only the stencil buffer). In that situation, a following access to the
stencil buffer would see it cleared (overwriting the previous blit)
because the parent resource has v3d_resource.writes set to 0.
As far as I see, that situation only happens with the
GL_DEPTH32F_STENCIL8 format.
Note that one alternative would be to consider that if the separate_stencil
has been written, the parent should also be considered written (and
update its "writes" field accordingly), but I found this patch more
natural.
Fixes the following piglit tests:
spec/arb_depth_buffer_float/fbo-stencil-gl_depth32f_stencil8-blit
spec/arb_depth_buffer_float/fbo-stencil-gl_depth32f_stencil8-copypixels
The latter regressed when the glCopyPixels implementation internally
started to use blitting. So:
Fixes: 131d40cfc9 ("st/mesa: accelerate glCopyPixels(STENCIL)")
Reviewed-by: Eric Anholt <eric@anholt.net>
In fact, the description of the workaround states that the mask field
doesn't work correctly on gen10, and we need to set it to 0xffff even if
we only want to update a single field:
"The mask bits are not implemented properly on 3DSTATE_3D_MODE. Driver
must always program bits 31:16 of DW1 a value of 0xFFFF. This means
if it is only updating 1 field, it must update all the fields to the
correct value."
So unless we want to change any of the fields of 3DSTATE_3D_MODE,
there's no need to emit it. Additionally, it seems this workaround is not
required on gen11. And last but not least, this workaround is not
implemented on iris or anv, and it doesn't seem to be missed there.
So let's just remove the whole thing.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
We accidentally started copying a full 64-bit value rather than copying
a 32-bit offset and zeroing the top 32-bits. This caused us to compute
bogus vertex counts which could lead to GPU hangs in some cases.
Thanks to Clayton Craft for catching the regressions!
Fixes: 0e24d10ff5 ("iris: Use gen_mi_builder to handle CS ALU operations.")
It's kind-of an anomaly that the Intel drivers are still treating
gl_FragCoord as an input. It also makes zero sense because we have to
special-case it in the back-end.
Because ANV is the only user of nir_lower_wpos_center, we go ahead and
just update it to look for nir_intrinsic_load_frag_coord as part of this
patch.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This fixes glsl-fcoord-invariant-pass.shader_test on drivers that set
GLSLFragCoordIsSysVal which includes radeonsi among others.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Seems like RB_BLIT_SCISSOR needs to be aligned to (minimum?) tile size.
Fixes intermittent GPU hangs triggered by some of the three.js samples
on https://threejs.org/
Signed-off-by: Rob Clark <robdclark@chromium.org>
fishgl.com has a shader which does roughly:
foo = texture(...);
if (bar)
foo = texture(...);
After lowering phi webs to regs we end up w/ a vec4 reg (array). But
since it was not an indirect access, we try to skip the extra mov. This
results in the per-component fanout (split) meta instructions storing
directly to the reg (array), which doesn't work out in RA.
Signed-off-by: Rob Clark <robdclark@chromium.org>
The Arcturus CHIP enum is less than Navi10's, since Arcturus is still gfx9,
but its VCN version belongs to VCN2.x.
Signed-off-by: Leo Liu <leo.liu@amd.com>
Reviewed-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Now that helgrind is less upset and I've completed many successful
full shader-db runs, we should be able to enable freedreno shader-db
runs for Mesa checkins on the tiny public shader-db.
Reviewed-by: Rob Clark <robdclark@gmail.com>
Even if the data race wasn't real (I'm not great at reasoning about
this), helgrind is a nice enough tool that keeping noise out of it is
probably worthwhile. Besides, typing out the numbers keeps the data
in the read-only data section instead of emitting code to initialize
it every time.
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Shaders are shared across contexts in gallium (part of making it so
that you get shader compile at link time, for shader-db and to reduce
compiles at draw time). So, we need to protect from variant creation
for a shader from multiple threads at the same time.
Reviewed-by: Rob Clark <robdclark@gmail.com>
There is a single ir3_compiler in the screen, and each context may be
compiling ir3 shaders, which call ir3_create. ralloc doesn't do any
locking on its own, so eventually you can end up racing to break
ralloc's linked lists.
We really don't want struct ir3 to live as long as the compiler (maybe
struct ir3_shader's lifetime, if anything), so you'd better be freeing
it anyway.
Fixes: 8fe2076243 ("freedreno/ir3: convert over to ralloc")
Reviewed-by: Rob Clark <robdclark@gmail.com>
If the variable's going to be static, we shouldn't be memsetting it
from every thread and instead just have it in the data section.
Reviewed-by: Rob Clark <robdclark@gmail.com>
On Haswell, the format works but it doesn't properly do an sRGB decode.
It appears to act identically to R8G8B8_UNORM. Only Vulkan uses this
format so this only affects Vulkan on HSW.
Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
To select the correct layer the z-coordinate must be rounded before it
is multiplied by six.
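As a minimal illustration (plain C, not the actual gallivm code), take an
array layer coordinate of 0.9, which should select layer 1:

#include <math.h>

static int cube_array_base_face(float z)
{
   /* Multiplying first gives (int)(0.9 * 6) = 5, i.e. face 5 of layer 0,
    * which is wrong.  Rounding first gives round(0.9) * 6 = 6, i.e. face 0
    * of layer 1, as expected. */
   return (int)roundf(z) * 6;
}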
Fixes a number of tests out of
dEQP-GLES31.functional.texture.filtering.cube_array.formats.*
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
v2: Eric's nits
v3: Reuse timespec utils (Daniel)
Deal with ppoll being interrupted by a signal (Daniel)
v4: Remove unnecessary time check
v5: Deal with EAGAIN from wl_display_prepare_read_queue() (Daniel)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com> (v2)
Reviewed-by: Daniel Stone <daniels@collabora.com>
It causes issues on GFX10.
This fixes rendering issues with vkmark and Wreckfest at least.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This might happen when a pipeline doesn't define the vertex input
state, so the buffer data format is 0 (aka INVALID).
This fixes crashes when compiling some shaders on GFX10.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This commit rewrites opt_find_array_copies to be able to handle
an array copy sequence with other intervening operations in between. In
particular, this handles the case where we OpLoad an array of structs
and then OpStore it, which generates code like:
foo[0].a = bar[0].a
foo[0].b = bar[0].b
foo[1].a = bar[1].a
foo[1].b = bar[1].b
...
that wasn't recognized by the previous pass.
In order to correctly handle copying arrays of arrays, and in particular
to correctly handle copies involving wildcards, we need to use a tree
structure similar to lower_vars_to_ssa so that we can walk all the
partial array copies invalidated by a particular write, including
ones where one of the common indices is a wildcard. I actually think
that when factoring in the needed hashing/comparing code, a hash table
based approach wouldn't be a lot smaller anyways.
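Very roughly, the lookup structure is something like this sketch (made-up
names, not the pass's actual types):

struct copy_entry;

struct copy_node {
   struct copy_node **children;    /* one slot per constant array index   */
   struct copy_node *wildcard;     /* dedicated child for a [*] deref     */
   struct copy_entry *first_copy;  /* partial copies whose path ends here */
};

/* A write walks the tree along its deref path, following both the matching
 * constant-index child and the wildcard child at each level, so every
 * partial copy sharing a prefix with the write gets invalidated. */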
All of the changes come from tessellation control shaders in Strange
Brigade, where we're able to remove the DXVK-inserted copy at the
beginning of the shader. These are the results for radv:
Totals from affected shaders:
SGPRS: 4576 -> 4576 (0.00 %)
VGPRS: 13784 -> 5560 (-59.66 %)
Spilled SGPRs: 0 -> 0 (0.00 %)
Spilled VGPRs: 0 -> 0 (0.00 %)
Private memory VGPRs: 0 -> 0 (0.00 %)
Scratch size: 8696 -> 6876 (-20.93 %) dwords per thread
Code Size: 329940 -> 263268 (-20.21 %) bytes
LDS: 0 -> 0 (0.00 %) blocks
Max Waves: 330 -> 898 (172.12 %)
Wait states: 0 -> 0 (0.00 %)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
We were missing handling for a few other ops that rearrange their
sources somehow in codegen, namely complex2 and select.
This should fix spec@glsl-1.10@execution@built-in-functions@vs-asin-vec3
and possibly other random regressions from the new scheduler which were
supposed to be fixed in the commit right after.
Fixes: 54434fe670 ("lima/gpir: Rework the scheduler")
Signed-off-by: Connor Abbott <cwabbott0@gmail.com>
Acked-by: Qiang Yu <yuq825@gmail.com>
Remove an unnecessary nir_lower_regs_to_ssa as that should be done by
the state tracker, and add a missing DCE pass after running copy
propagation in order to remove the dead copies. This shouldn't fix
anything but the second part will reduce shader sizes.
Signed-off-by: Connor Abbott <cwabbott0@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
In try_node(), we assume that the node we pick can still be scheduled
successfully after speculatively trying all the other nodes. Normally we
always undo every node after speculating it, so that when we finally
schedule best_node the scheduler state is exactly the same and it
succeeds. However, we also try to spill nodes, which can change the
state and in a corner case that can make scheduling best_node fail. In
particular, the following sequence of events happened with piglit
shaders@glsl-vs-if-nested: a partially-ready node N was spilled and a
register store node S, which is a use of N, was created and then later
the other uses of N were scheduled, so that S is now ready and N is
partially ready. First we try to schedule S and succeed, then we try to
schedule another node M, which fails, so we try to spill the remaining
uses of N. This succeeds, but scheduling M still fails so that best_node
is still S. However since one of the uses of N is one cycle ago, and
therefore we inserted a read dependent on S one cycle ago when spilling
N, S can no longer be scheduled as read-after-write latency is three
cycles.
While we could ad-hoc try to catch cases like this, or (the best option
but very complicated) treat the spill as speculative and roll it back if
we decide not to schedule the node, a simpler solution is to just
give up on spilling if we've already successfully speculatively
scheduled another node. We'd give up a few cases where we discover that
by spilling even harder we could schedule a more desirable node, but
that seems like it would be pretty rare in practice. With this we
guarantee that nothing has been touched after best_node was successfully
scheduled. We also cut down on pointless spilling, since if we already
scheduled a node it's unlikely that spilling harder will let us schedule
an even better node, and hence any spilling at this point is probably
useless.
While we're here, clean up the code around spilling by flattening the
two if's and getting rid of the second unnecessary check for INT_MIN.
Fixes: 54434fe670 ("lima/gpir: Rework the scheduler")
Acked-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Connor Abbott <cwabbott0@gmail.com>
With OpenCL, kernels can take arguments and return values (?). However
in practice, there is no more TGSI compute implementation, and even if
there were, it would probably have named functions and no explicit main.
This improves RA considerably for compute shaders, since temps are not
kept around as return values.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Karol Herbst <kherbst@redhat.com>
The meson conversion changed the meaning of DEBUG from "used for
debugging" to "used for expensive things for debugging", primarily
for nir_validate. Flip things over so that we get nice things with
optimizations enabled.
While we're at it, also kill off nouveau_statebuf.h which is unused (and
has a mention of DEBUG which is how I found it).
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Karol Herbst <kherbst@redhat.com>
This will enable us to fuse inverts in various ways. Marginal hurt:
total instructions in shared programs: 3610 -> 3611 (0.03%)
instructions in affected programs: 67 -> 68 (1.49%)
helped: 0
HURT: 1
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Rather than putting registers after SSA in the MIR indexing, put them
side-by-side, shifted 1, using the bottom bit as the SSA/reg select.
This will allow us to generate SSA temps in the compiler.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
MaxPicOrderCntLsb should be at least 16 according to the spec,
therefore add a minimum value check.
Also use the poc value passed from st instead of calculating it
in slice header encoding.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110673
Cc: mesa-stable@lists.freedesktop.org
V2: Fix typo
V3: Use the MAX2 macro instead of open-coding it. Also, MaxPicOrderCntLsb
should be a power of 2 according to the spec.
Signed-off-by: Boyuan Zhang <boyuan.zhang@amd.com>
Acked-by: Leo Liu <leo.liu@amd.com>
MaxPicOrderCntLsb should be at least 16 according to the spec,
therefore add a minimum value check.
Also use the poc value passed from st instead of calculating it
in slice header encoding.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110673
Cc: mesa-stable@lists.freedesktop.org
V2: Fix typo
V3: Use the MAX2 macro instead of open-coding it. Also, MaxPicOrderCntLsb
should be a power of 2 according to the spec.
Signed-off-by: Boyuan Zhang <boyuan.zhang@amd.com>
Acked-by: Leo Liu <leo.liu@amd.com>
We don't have to calculate the final quotient in order to calculate the
unsigned modulo result. Once we are done with error correction we have a
partial result which can be used to find the result of the modulo
operation.
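Conceptually (a hedged sketch in plain C, not the generated i965 code, and
assuming the error-corrected partial remainder is within one divisor of the
exact remainder):

#include <stdint.h>

static uint32_t umod_from_partial(uint32_t a, uint32_t b, uint32_t q_approx)
{
   uint32_t r = a - q_approx * b;   /* partial result after error correction */
   if (r >= b)
      r -= b;                       /* a single conditional subtract is enough */
   return r;                        /* a % b, without the final quotient */
}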
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Rather than bailing if we see something that's not SSA, do the
analysis to check if we can pipeline, and do so if we can.
total registers in shared programs: 392 -> 391 (-0.26%)
registers in affected programs: 3 -> 2 (-33.33%)
helped: 1
HURT: 0
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Rather than always emitting an extra move for fragments, check the
actual criteria and emit accordingly. (This was lost during the RA
improvements at the end of May).
total bundles in shared programs: 2210 -> 2176 (-1.54%)
bundles in affected programs: 501 -> 467 (-6.79%)
helped: 34
HURT: 0
helped stats (abs) min: 1 max: 1 x̄: 1.00 x̃: 1
helped stats (rel) min: 1.59% max: 33.33% x̄: 13.13% x̃: 12.50%
95% mean confidence interval for bundles value: -1.00 -1.00
95% mean confidence interval for bundles %-change: -16.06% -10.21%
Bundles are helped.
total quadwords in shared programs: 3639 -> 3605 (-0.93%)
quadwords in affected programs: 795 -> 761 (-4.28%)
helped: 34
HURT: 0
helped stats (abs) min: 1 max: 1 x̄: 1.00 x̃: 1
helped stats (rel) min: 0.96% max: 33.33% x̄: 11.22% x̃: 8.33%
95% mean confidence interval for quadwords value: -1.00 -1.00
95% mean confidence interval for quadwords %-change: -14.31% -8.13%
Quadwords are helped.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Think of this pass as register coalescing part 2. After RA runs, but
before scheduling, we scan for code of the form:
mov rN, rN
and delete the move, since it's totally redundant. This pass helps
already, but it'd of course be much more effective paired with
register coalescing to encourage moves in general to end up in this
form. Nevertheless, even by itself:
total instructions in shared programs: 3665 -> 3613 (-1.42%)
instructions in affected programs: 2046 -> 1994 (-2.54%)
helped: 52
HURT: 0
helped stats (abs) min: 1 max: 1 x̄: 1.00 x̃: 1
helped stats (rel) min: 0.19% max: 25.00% x̄: 8.02% x̃: 4.00%
95% mean confidence interval for instructions value: -1.00 -1.00
95% mean confidence interval for instructions %-change: -10.26% -5.79%
Instructions are helped.
total bundles in shared programs: 2256 -> 2213 (-1.91%)
bundles in affected programs: 1154 -> 1111 (-3.73%)
helped: 43
HURT: 0
helped stats (abs) min: 1 max: 1 x̄: 1.00 x̃: 1
helped stats (rel) min: 0.33% max: 25.00% x̄: 9.10% x̃: 5.56%
95% mean confidence interval for bundles value: -1.00 -1.00
95% mean confidence interval for bundles %-change: -11.60% -6.60%
Bundles are helped.
total quadwords in shared programs: 3689 -> 3642 (-1.27%)
quadwords in affected programs: 2025 -> 1978 (-2.32%)
helped: 47
HURT: 0
helped stats (abs) min: 1 max: 1 x̄: 1.00 x̃: 1
helped stats (rel) min: 0.19% max: 25.00% x̄: 7.86% x̃: 3.85%
95% mean confidence interval for quadwords value: -1.00 -1.00
95% mean confidence interval for quadwords %-change: -10.30% -5.42%
Quadwords are helped.
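For illustration, the pass amounts to something like this sketch (made-up
types and names, not the actual Midgard IR):

#include <stdbool.h>

struct insn {
   int op, dest, src0;
   bool has_swizzle, has_modifier;
};

#define OP_MOV 1

static unsigned remove_self_moves(struct insn *insns, unsigned count)
{
   unsigned kept = 0;
   for (unsigned i = 0; i < count; i++) {
      bool self_mov = insns[i].op == OP_MOV &&
                      insns[i].dest == insns[i].src0 &&
                      !insns[i].has_swizzle && !insns[i].has_modifier;
      if (!self_mov)            /* keep everything except "mov rN, rN" */
         insns[kept++] = insns[i];
   }
   return kept;                 /* new instruction count */
}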
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The source and destination were incorrectly flipped in the move, but
some details of our internal regalloc made this function anyway. Now
that we're changing the regalloc, we need to fix this to avoid
regressing blend shaders.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This is a special case of DCE designed to run after the out-of-ssa pass
to cleanup special register lowering.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We mixed up component_lo and full, which made it appear that we had
less freedom in RA than we actually do. Fix this to fix some
disassemblies as well as prepare for RA with the bias field.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Following the RA work, we apply the same technique to eliminate the move
to r27 when loading cubemaps.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
As per the previous commit, Meson doesn't support using uninstalled libs;
they're simply not ready until `ninja install` is run, so delete them.
Suggested-by: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net> # for anv
Reviewed-by: Eric Anholt <eric@anholt.net> # for tu
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl> # for radv
Meson doesn't support using uninstalled libs; they're simply not ready
until `ninja install` is run, at which point one might as well use the
proper icd.json file in the install folder.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Otherwise the wait only happens at flip time, which messes with
keeping idle buffers around if the GPU work makes the image miss
the next flip.
I decided not to use the wait fences as those are still xshm fences,
so that means we'd still have to wait in the application. Just doing
it before presenting makes things simpler.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Not only variables but also expressions can be flagged as NonUniformEXT.
We're currently ignoring it in an expression such as:
imageLoad(data[nonuniformEXT(rIndex)], 0)
The associated SPIR-V:
OpDecorate %69 NonUniformEXT
...
%69 = OpLoad %61 %68
This change propagates access qualifiers through ssa & pointers so
that when it hits OpLoad/OpStore style instructions, qualifiers are
not forgotten.
Fixes failures in the following tests:
dEQP-VK.descriptor_indexing.*
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 8ed583fe52 ("spirv: Handle the NonUniformEXT decoration")
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This refactor allows common code to apply decorations on all
ssa/pointer values. In particular this will allow us to propagate access
qualifiers.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Suggested-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This needs to take the vertex count from the provided transform
feedback buffer.
v2:
- don't take the vertex count from the underlying buffer, instead,
take it from a v3d subclass of pipe_stream_output_target (Eric).
Fixes piglit tests:
spec/ext_transform_feedback2/draw-auto
spec/ext_transform_feedback2/draw-auto instanced
Reviewed-by: Eric Anholt <eric@anholt.net>
This reverts commit 4508f43eed, which
broke a bunch of dEQP tests (e.g. in
dEQP-GLES2.functional.draw.draw_arrays.*)
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It's totally implementable, it's just that the plumbing is a bit
different and we never hooked it up. Don't advertise a broken feature.
Fixes: 36ee2fd61c "anv: Implement the basic form of VK_EXT_transform_feedback"
Iago Toral Quiroga fixed this in commit 94f740e3fc,
but it recently regressed in 0d8826f723.
Quoting Iago's original commit message for the fix:
GetTex*Image should return INVALID_ENUM if target is not valid, however,
GetTextureImage does not receive a target, and instead should return
INVALID_OPERATION if the effective target is not valid. From the
OpenGL 4.6 core profile spec, section 8.11 Texture Queries:
"An INVALID_OPERATION error is generated by GetTextureImage if the
effective target is not one of TEXTURE_1D, TEXTURE_2D, TEXTURE_3D,
TEXTURE_1D_ARRAY, TEXTURE_2D_ARRAY, TEXTURE_CUBE_MAP_ARRAY,
TEXTURE_RECTANGLE, or TEXTURE_CUBE_MAP (for GetTextureImage only)."
Note that this differs from the original ARB_direct_state_access spec.
However, the EXT_direct_state_access version does take a target
parameter, so it should continue reporting INVALID_ENUM.
Fixes KHR-GL45.direct_state_access.textures_image_query_errors.
Fixes: 0d8826f723 ("mesa: refactor get_texture_image to remove duplicate code")
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
In a few cases, we switch to MI_MATH instead of MI_PREDICATE,
just because we were already doing math and it's easier to chain
together.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This performs predicated MI_STORE_REGISTER_MEM commands, assuming that
the condition is already loaded into MI_PREDICATE_DATA.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
We might want to resolve something to be in a particular register,
so we can access it outside of the gen_mi framework...but it may already
be in that register, at which point there's no work to do.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This will let us put the genxml boilerplate in one place, before we
expand genxml to more files shortly. Like i965/genX_boilerplate.h.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This lets us specify the prototypes once, instead of cut and pasting
them per generation. isl uses a similar approach (isl_genX_priv.h).
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This provides significant compiler coverage during CI at a fairly low
cost in CPU time (~17s per thread for 4 threads on
gst-gitlab-htz-runner3).
I'm leaving wget in the docker image, as once this is in master I'm
planning on having an automatic shader-db comparison between master
and the branch included in the artifacts. I also haven't done
freedreno yet, because it has some races when run in multithreaded
mode that I'm still tracking down.
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
On a build failure, we were tarring up the whole ccache directory,
build.ninja, build products, etc. This was over 400MB compressed on a
recent early meson-main build failure, which fd.o then has to hang on
to for 4 weeks. The build logs are probably the interesting part, are
potentially useful regardless ("how did CI's build flags differ from
mine?"), and are <500k uncompressed on my personal meson build.
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
I introduced libdir for cross-builds so we could point at the
resulting drivers without per-arch dependencies, but I'd rather not
have to type x86_64-linux-whatever for non-cross-builds either.
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
The goal is to enable testing of parts of drivers without depending on any
particular kernel version or hardware being present.
Simply set LD_PRELOAD=$PREFIX/lib/libv3d_drm_shim.so in your environment,
and we'll fake a /dev/dri/renderD128 (or whatever the next available node
is) using v3dv3. That node can then be used with the surfaceless or gbm
EGL platforms.
Acked-by: Iago Toral Quiroga <itoral@igalia.com>
When generating the error message for a missing function error where
all available overloads were missing due to a too low GLSL version, we
used to report something like this:
---8<---
0:224(14): error: no matching function for call to
`textureCubeLod(samplerCube, vec3, float)'; candidates are:
0:224(14): error: type mismatch
---8<---
This is a pretty confusing error message, and can throw people off when
debugging. So let's instead check if any overload is available before we
decide what to print. This allows us to report something like this
instead:
---8<---
0:224(14): error: no function with name 'textureCubeLod'
0:224(14): error: type mismatch
---8<---
This is arguably easier to understand for programmers, and doesn't send
you on a wild goose chase to figure out what argument is wrong just
because you stopped reading the message prematurely. I'm of course
referring to a friend, not me. For sure. I would never do that.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Without the correct size, radeonsi assumes the metadata is incorrect,
which can and will cause issues.
Since the metadata is really incorrect without the size, let us
fix that.
Fixes: e43cc3e3af "radv/gfx9: handle GFX9 opaque metadata"
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Report VK_EXTERNAL_MEMORY_HANDLE_TYPE_HOST_ALLOCATION_BIT_EXT as
supported for images. It was being shown supported for buffers, but not
images.
Fixes: 69cc6272fb ("anv: Implement VK_EXT_external_memory_host")
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This fixes
dEQP-VK.rasterization.primitive_size.points.point_size_*
This also fixes some black squares with the Sascha SSAO demo.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
It's coherent and faster. GFX7-GFX9 should also support this, but
for now we only use L2 on GFX10 because it's untested on previous gens.
This fixes dEQP-VK.memory.pipeline_barrier.transfer_*
This also fixes some missing geometry in Dawn Of War III because
VBOs weren't updated correctly.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
We add a new opt pass fusing perspective projection with varyings. Minor
win..? We don't combine non-varying projections, since if we're too
aggressive, the extra load/store traffic will hurt us, so it's not really
a win in practice.
total instructions in shared programs: 3915 -> 3913 (-0.05%)
instructions in affected programs: 76 -> 74 (-2.63%)
helped: 1
HURT: 0
total bundles in shared programs: 2520 -> 2519 (-0.04%)
bundles in affected programs: 46 -> 45 (-2.17%)
helped: 1
HURT: 0
total quadwords in shared programs: 4027 -> 4025 (-0.05%)
quadwords in affected programs: 80 -> 78 (-2.50%)
helped: 1
HURT: 0
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We don't use it yet, since it's actually a shader-db regression. This is
primarily helpful as an intermediate step for attaching projection to
varyings.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
While load/store ops like st_vary can take an argument in either
r26/r27, ops like those for perspective projection must specifically
take their argument in r27.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The load/store pipes can't take a uniform register in, so an explicit
move is necessary here.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Given the constraints on special registers, we add a helper for lowering
these by inserting moves (copies) where needed to satisfy the ISA
constraints.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We generalize the constant emission helper used in fragment writeout as
we'll also need it for vertex outputs.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This ensures the rules for accessing special register classes are
satisfied. This is asserted as a prepass should have lowered offending
uses to something satisfying these rules. Special register classes are
*not* work registers and cannot be used for RMW operations; they are
essentially 1-way pipes straight into/from fixed-function logic in the
shader cores.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We reuse the same register spilling mechanism as for work->memory to
spill special->work registers, e.g. to allow writing out more than 2
vec4 varyings (without better scheduling anyway).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This does not yet support special->work spilling, nor does it support
multiclass breakup. These corner cases will be handled in succeeding
commits.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Checks for x/xy/xyz/xyzw style swizzles (slightly more general but you
get the idea).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We don't need to guesstimate this ourselves. This will help when we
bring up derivatives.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We include a zsbuf attachment function based on how the corresponding
MFBD code works, as well as extending cbufs to mipmapped rendering while
we're at it.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Complements the existing getters and the setter for node class. To be
used in the Panfrost RA refactor.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This fixes eglExportDMABUFImageQueryMESA() so it will report the
modifiers of the underlying image. Without this information,
re-importing will likely be broken as it is rare these days that no
modifiers are used.
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Fixes: 8f7338f284 ("egl: add initial EGL_MESA_image_dma_buf_export v2.4")
Extend MESA_framebuffer_flip_y to be used with OpenGL versions 4.3 and
higher. OpenGL 4.3 adds FramebufferParameteri needed by this extension.
Reviewed-by: Fritz Koenig <frkoenig@google.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
This is why we can't have nice things. I'm sure there's some way
to do this with {0} but I really don't have time for that.
Fixes: 2631fd3b0b ("gallivm: rework lp_build_tgsi_soa to take a struct")
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
When 'x' is the result of a scmp op:
x != 0.0 or x == 1.0: passthrough
x == 0.0 or x != 1.0: invert
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Add generic lowerings for fall_equalN/fany_nequalN. These should be optimal
for vec4 backends that don't have any special instructions for it, as
long as they support saturate.
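For example, one way such a saturate-based lowering can look (illustrative
pseudo-expressions, not the exact nir_opt_algebraic rules):

/* sne(a, b) yields 1.0/0.0 per component; the dot sums the components and
 * the saturate clamps the sum back down to a 0.0/1.0 boolean. */
fany_nequal4(a, b) = fsat(fdot4(sne(a, b), vec4(1.0)))
fall_equal4(a, b)  = 1.0 - fany_nequal4(a, b)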
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Matt Turner <mattst88@gmail.com>
This will effectively enable the optimization in anv.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This allows us to drop legacy userclip plane handling in both the vec4
and FS backends, and simplifies a few interfaces.
v2 (Jason Ekstrand):
- Move brw_nir_lower_legacy_clipping to brw_nir_uniforms.cpp because
it's i965-specific.
- Handle adding the params in brw_nir_lower_legacy_clipping
- Call brw_nir_lower_legacy_clipping from brw_codegen_vs_prog
Co-authored-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Instead of building a constant mask (which depends on knowing the
subgroup size), we build an expression. Because the pass uses the
nir_shader_lower_instructions helper, subgroup lowering will be run on
any newly emitted instructions as well as the previously existing
instructions. In particular, if the subgroup size is known, the newly
emitted subgroup_size intrinsic will get turned into a constant and a
later constant folding pass will clean it up.
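A hedged sketch of the equivalent scalar math (the pass emits the
corresponding NIR ALU expression; names here are illustrative):

#include <stdint.h>

static uint64_t gt_mask(uint32_t invocation, uint32_t subgroup_size)
{
   uint64_t size_mask = (subgroup_size >= 64) ? ~0ull
                                              : (1ull << subgroup_size) - 1;
   uint64_t le_mask   = (invocation >= 63) ? ~0ull
                                           : (1ull << (invocation + 1)) - 1;
   /* Bits strictly above this invocation, limited to the subgroup size;
    * nothing here requires the subgroup size to be a compile-time constant. */
   return size_mask & ~le_mask;
}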
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
The rules for gl_SubgroupSize in Vulkan require that it be a constant
that can be queried through the API. However, all GL requires is that
it's a uniform. Instead of always claiming that the subgroup size in
the shader is 32 in GL like we have to do for Vulkan, claim 8 for
geometry stages, the maximum for fragment shaders, and the actual size
for compute.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Instead of lowering the subgroup size so early, wait until we have more
information. In particular, we're going to want different subgroup
sizes from different stages depending on the API. We also defer
lowering of subgroup masks because the ge/gt masks require the subgroup
size to generate a subgroup mask.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Tested on Gen > 9.
v2: 1) Fix lowering
2) Keep a consistent i/u order (Matt Turner)
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Add freeing of SubroutineIndexes to the _mesa_free_shader_state.
Fixes: 4566aaaa5b ("mesa/subroutines: start adding per-context
subroutine index support (v1.1)")
Signed-off-by: Yevhenii Kolesnikov <yevhenii.kolesnikov@globallogic.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
When a pipeline uses transform feedback, the driver falls back to
the legacy path because NGG support for streamout is a non-trivial
amount of work.
AMDVLK also uses the legacy path for streamout, while RadeonSI
uses the new NGG path.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
NGG GS for streamout requires a bunch of work, so enable it with
the legacy path only for now.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The parameters were getting messy and I have to add a few more
for compute shaders, so clean it up before proceeding.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
I can't find a single place where nir_lower_io is called after going out
of SSA which is the only real reason why you wouldn't do this. Returning
SSA defs is more idiomatic and is required for the next commit.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Numbers for Talos:
gfx10 without binning: 77.0 77.7 77.2 77.6
gfx10 with binning: 82.3 82.0 82.7 82.4
Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
The assign_extra_samplers() adds the needed extra samplers but they need
to be used in the nir_tex_instr.
Otherwise the plane information is simply lost and all nir_tex_instr use the
same sampler.
Here's an example of the bug:
NIR before st_nir_lower_tex_src_plane:
vec1 32 ssa_8 = load_const (0x00000000 /* 0.000000 */)
vec4 32 ssa_9 = tex ssa_0 (texture_deref), ssa_0 (sampler_deref), ssa_5 (coord), ssa_8 (plane)
vec1 32 ssa_10 = load_const (0x00000001 /* 0.000000 */)
vec4 32 ssa_11 = tex ssa_0 (texture_deref), ssa_0 (sampler_deref), ssa_5 (coord), ssa_10 (plane)
After:
vec4 32 ssa_9 = tex ssa_0 (texture_deref), ssa_0 (sampler_deref), ssa_5 (coord)
vec4 32 ssa_11 = tex ssa_0 (texture_deref), ssa_0 (sampler_deref), ssa_5 (coord)
This fixes the following piglit test for radeonsi + NIR:
- ext_image_dma_buf_import-sample_nv12
- ext_image_dma_buf_import-sample_yuv420
- ext_image_dma_buf_import-sample_yvu420
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
Commit ea5b7de138 broke some piglit tests on radeonsi (Bonaire hardware).
This commit fixes half of the regression by enabling msaa if the dest surface has
more than 1 sample (instead of hardcoding it to false).
Fixes: ea5b7de138 ("radeonsi: make gl_SampleMaskIn = 0x1 when MSAA is disabled")
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The one obvious omission here is gl_HelperInvocation itself. However,
the spec doesn't require that we generate them when gl_HelperInvocation
is used, it merely mandates that we report them if they are there.
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Otherwise, as we add things to the switch, we're going to forget and add
some 64-bit op at some point in the future and it'll stop getting
flagged. There's no reason why we can't do the check for derivatives.
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Make sure that a <group> tag within another <group> tag works just fine.
v2: rename 'halfbyte' to 'byte' to match the size (Lionel).
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Now we can decode a <group> tag inside another <group> tag, and properly
print its indices and content.
v2: Use push/pop stack to fields, groups and iters (Lionel).
v3: Add assert(iter->level < DECODE_MAX_ARRAY_DEPTH) (Lionel).
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
We currently use the group->next pointer to iterate through the <group>
tags. This changes them to be a type of field, so we can descend into
them while iterating, and then go back to the original position. Will be
useful when we want to decode <group>'s inside <group>'s, and when there
are more <field>'s after a <group> tag.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
A gen_group (group in most of the code) can be of several types:
- instruction
- struct
- register
- group (?!?)
The <group> tag actually represents an array of elements. So at least
in our code, lets call it an array to avoid confusion with gen_group.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Refactor the code from gen_spec_load_from_path() into a separate
function that can be used with an xml file that doesn't fit the genX.xml
filename format.
Will be used soon for implementing unit tests for gen_decoder.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
When using gen_spec_load_from_path, only abort decoding if the read
length is 0. Previously, we were aborting upon finding an EOF, even if
something was read from the file.
Also only kill the decoded file if no commands or structs were found,
and print a message in such case.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This allows using the LCDIF display controllers (with the mxsfb drm
modesetting driver) along with the Etnaviv render-only drivers. LCDIF is
found on i.MX SoCs.
Signed-off-by: Guido Günther <agx@sigxcpu.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
This should save a lot of per-compile time by using the RA the way it's
actually supposed to be used.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This doesn't fix any bug at the moment because the next statement is
'true' which happens to be APIMODE_D3D, but if that changes it could.
The Fixes tag is as far back as I could go, but the error predates it (2016 is
probably far enough).
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 8db6f2e6eb ("anv/pipeline: Roll genX_pipeline_util.h into genX_pipeline.c")
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
While testing kmscube with mesa master, it turned out that kmscube is not
working anymore. After bisecting, commit
5a7688fdec is the culprit. A short trial
and error session allowed me to find the removed bit of code that makes
kmscube work again.
This patch adds it back.
Fixes: 5a7688fde ("panfrost: Use 64-bit descriptors globally")
v2: Add comment pointing out this is magic. [Alyssa, trivial]
Signed-off-by: Arnaud Patard <arnaud.patard@rtp-net.org>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Rather than anything "early Midgard", limit us specifically to T6XX, as
certain workarounds only apply to genuine T6XX, not T7XX.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This can be the case if the resource was obtained from
st_vdpau_output/video_surface_gallium.
st_vdpau_output/video_surface_dma_buf do a similar dance internally.
v2:
* Pass PIPE_HANDLE_USAGE_FRAMEBUFFER_WRITE instead of 0 for usage.
Bugzilla: https://bugs.freedesktop.org/111099
Acked-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com> # v1
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
We still have some big ticket items left on GLES 3.0, but it's often
helpful to be able to access higher dEQP levels for debugging features
that just don't quite match a particular API.
Plus, this opens up a whole slew of new features to poke at if boredom
overtakes, ahem.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The branch instruction has 6 bits per register operand which allows it
to specify a component in the register.
Fix codegen so that it outputs the right component, otherwise it always
outputs the x component.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
The macros already prepend "ppir: ", remove them from the actual strings
so it doesn't appear duplicated.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
The spilling code spills entire vec4 registers regardless of the
components used by the spilled uses.
The inserted store code forces all 4 components, but these loads were
using a variable number of components, causing bugs when loading the
spilled registers.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
This is a relatively minimal change to adjust all the gallium interfaces
to use bool instead of boolean. I tried to avoid making unrelated
changes inside of drivers to flip boolean -> bool to reduce the risk of
regressions (the compiler will much more easily allow "dirty" values
inside a char-based boolean than a C99 _Bool).
This has been build-tested on amd64 with:
Gallium drivers: nouveau r300 r600 radeonsi freedreno swrast etnaviv v3d
vc4 i915 svga virgl swr panfrost iris lima kmsro
Gallium st: mesa xa xvmc vdpau va
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The comment even justifies the wrongness wrongly.
We should be translating to pipe values properly here or else
fragment maps to tess ctrl.
Fixes: 3d7611e9a6 ("st/nir: use NIR for asm programs")
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
st_extensions.c sets const->MaxImageSamples (GL_MAX_IMAGE_SAMPLES) by
looping over [16, 15, .. 1x] MSAA modes, and RGBA/BGRA/ARGB/ABGR 8888
color formats, calling pipe->is_format_supported() for each, with
the usage set to PIPE_BIND_SHADER_IMAGE. If any are supported, it
selects that number of samples.
We were checking if sample_count <= 1, which meant that we were getting
a value of 1x MSAA, rather than the expected 0x (feature doesn't exist).
But, only on Icelake because Gen11 adds support for typed read messages
for R8G8B8A8_UNORM. The lack of typed read messages for these formats
happened to make the check on Gen9 correctly say no. This caused some
Icelake conformance failures, because we don't implement this feature.
Just check for sample_count == 0 instead.
Glamor in xorg-server 1.20 cannot expose 16bpp pixmaps when running in
the usual 24bpp mode. This meant our 565 pbuffer configs would
ultimately fail to create a backing pixmap, leading to crashes.
To hack around this, make a 16bpp pixmap and try to export it.
If it works, expose the configs. Otherwise, just skip them.
This also disables them on DRI2. These configs were only added to pass
conformance requirements, and I doubt anybody cares about testing out
565 pbuffer visuals on DRI2-only drivers.
v2: Don't leak the fds (caught by Eric Anholt)
v3: Don't free(fds), it's not malloc'd
Fixes: dacb11a585 ("egl: Add a 565 pbuffer-only EGL config under X11.")
Reviewed-by: Eric Anholt <eric@anholt.net>
In commit dacb11a585, Eric found the first
matching 565 pbuffer config, and stopped. Our double-buffered configs
come first in the list, so we added that, making a pbuffer-only config
that claimed to be double buffered. This doesn't make sense, since
pixmaps/pbuffers are fundamentally not double buffered.
When using that config, every call to eglCreatePbufferSurface would fail
with EGL_BAD_MATCH. The call chain looks like this:
- eglCreatePbufferSurface
- dri3_create_pbuffer_surface
- dri3_create_surface
- dri2_get_dri_config
which eventually does:
const bool double_buffer = surface_type == EGL_WINDOW_BIT;
and then fails to find a matching config, because it ends up looking
for a single-buffered config - and there aren't any.
To fix this, make the 565 pbuffer config single-buffered. This fixes
at least 51 dEQP-EGL.* tests.
Fixes: dacb11a585 ("egl: Add a 565 pbuffer-only EGL config under X11.")
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
pbuffer configs cause a million of these warnings to trigger, but
when using pixmaps or buffers, there is only one surface, so this
warning doesn't make much sense. Retain it for window surfaces for now.
Fixes: dacb11a585 ("egl: Add a 565 pbuffer-only EGL config under X11.")
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
pbuffers are internally single-buffered. Marek fixed DrawBuffers to
handle this case, but we need to fix ReadBuffers too. Otherwise,
pretty much every conformance test fails because glReadPixels breaks.
v2: Refactor the switch into a helper (suggested by Eric Anholt)
Fixes: 35294f2eca ("mesa: fix pbuffers because internally they are front buffers")
Acked-by: Eric Engestrom <eric.engestrom@intel.com> (v1)
Reviewed-by: Eric Anholt <eric@anholt.net>
Normally, we haven't worried too much about stack sizes as Linux tends
to be fairly friendly towards large stacks. However, when running DXVK
apps under wine, we're suddenly subject to Windows' more stringent stack
limitations and can run out of space more easily. In particular, some
of the shaders in Elite Dangerous: Horizons have quite a few registers
and the arrays in split_virtual_grfs are large enough to blow a 1 MiB
stack leading to crashes during shader compilation.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108662
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Cc: mesa-stable@lists.freedesktop.org
color_buffers[] is currently hard coded to 3 for android which fails
in droid_window_dequeue_buffer when ANativeWindow creates color_buffers
>3 while querying buffer age during dEQP partial_update tests on chromeOS.
The patch removes static color_buffers[], queries for MIN_UNDEQUEUED_BUFFERS,
sets native window buffer count and allocates the correct number of
color_buffers as per android.
Fixes dEQP-EGL.functional.partial_update* tests on chromebooks with
enabling EGL_KHR_partial_update.
v2: update comment instead of removing (Eric Engestrom)
v3: change static array to dynamic allocated color_buffers
querying MIN_UNDEQUEUED_BUFFERS (Chia-I Wu olv@chromium.org)
Fixes: 2acc69da8c "EGL/Android: Add EGL_EXT_buffer_age extension"
Signed-off-by: Nataraj Deshpande <nataraj.deshpande@intel.com>
Acked-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
Memory corruption (for both legitimate and illegitimate reasons) causes
this to hang pantrace.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This was disabled to permit regression-free RA work. Now that the spill
code is in place, we can reenable, with some caveats about efficacy.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Pipe through the number of bytes of spilled memory used from the
compiler into the main driver, where it will be used to allocate the
Thread Local Storage buffer.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Indirect linear writes were not being marked as initialized, causing the
back blit to be dropped, breaking the listed tests.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We just use the pointers of the midgard_block*, which is crude, but it
gets the point across and will help debug successor related issues.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Now that we run RA in a loop, before each iteration after a failed
allocation we choose a spill node and spill it to Thread Local Storage
using st_int4/ld_int4 instructions (for spills and fills respectively).
This allows us to compile complex shaders that normally would not fit
within the 16 work register limits, although it comes at a fairly steep
performance penalty.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
If we write to an index before reading it, the old copy we're checking
liveness for isn't live in this block, even if it does get read later.
Fixes abnormally high register pressure in shaders with loops.
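In other words (an illustrative snippet, not actual Midgard IR):

/* within one block: */
r5 = fadd r1, r2      /* the write to r5 comes first        */
r6 = fmul r5, r3      /* this read sees the new value of r5 */

/* The copy of r5 that was live on entry to the block is therefore not
 * live-in here, even though r5 is read later in the block. */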
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Midgard bundles contain a tag, as well as a copy of the tag of the next
bundle to facilitate prefetch. Do some simple static analysis to detect
certain tag errors (particularly on shaders without branching).
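For instance, in the branch-free case the check is essentially (made-up
types and field names, not the actual compiler structures):

#include <assert.h>

struct bundle { unsigned tag, next_tag; };

/* Each bundle's copy of the next tag must match the tag of the bundle that
 * actually follows it, otherwise prefetch is fed the wrong tag. */
static void check_tags(const struct bundle *bundles, unsigned count)
{
   for (unsigned i = 0; i + 1 < count; ++i)
      assert(bundles[i].next_tag == bundles[i + 1].tag);
}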
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Rather than rewriting an index away across the whole block, we expose
finer (per-instruction) granularity for rewrites.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
These are used to load/store from Thread Local Storage, which is memory
allocated per-thread (corresponding to ctx->scratchpad in the command
stream) and used for register spilling.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It was a crazy idea that didn't pan out. We're better served by a good
copyprop pass. It's also unused now.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Rather than creating either a load or a uniform register read with a
fixed beginning offset, we always create a load and then promote to a
uniform register later. This will allow us to promote in a register
pressure aware manner.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This will allow us to insert instructions as a result of register
allocation, permitting spilling to be implemented. As a side effect,
with the assert commented out this would fix a bunch of glamor crashes
(due to RA failures) so MATE becomes useable.
Ideally we'll have scheduling or RA actually sorted out before the
branch point but if not this gives us a one-line out to get X working...
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
RadeonSI only uses Z32_FLOAT_CLAMP for upgraded depth textures
on GFX10, and RADV doesn't promote Z16 or Z24.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This late optimization pass is only affected by nir_opt_if() and handles all cases
in a single pass. It's enough to call it once after the optimization loop.
No changes on vkpipeline-db.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Since logicop_func 0 is PIPE_LOGICOP_CLEAR, we were triggering lowering
of logic ops on precompiled shaders, which we don't want to do. Also, this
had the side effect of making shader-db crash, as during this lowering we
would try to read the color format swizzle information from the fragment
shader key, which we don't populate in precompiled shaders because right
now we only need it when logic operations are enabled.
Reviewed-by: Eric Anholt <eric@anholt.net>
If we detect that a scheduling candidate will stall because it has a
register source that was written by the SFU unit in the previous
instruction, we reduce its priority so that any non-stalling operation
would be chosen.
The latency of SFU operations is defined as 2, so they would be scheduled
earlier if other candidates have the same priority.
Finally, we won't merge instructions that stall into a previously chosen
one, as the result of the previous one would need an extra cycle of waiting.
Although shader-db results show that instructions are hurt with an increase
of 0.35%, the sum of instructions + stalls is reduced by 0.52%, and
the total of sfu-stalls is reduced by 63.51%. It also implies a small
increase in the max-temps metric because of scheduling SFU operations
earlier.
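Roughly, the heuristic looks like this sketch (plain C with made-up names;
only the stall condition itself is taken from the description above):

#include <stdbool.h>

struct candidate { int priority; bool reads_prev_sfu_result; };

/* A candidate that would consume the previous instruction's SFU result on
 * the very next cycle gets its priority reduced, so that a non-stalling
 * candidate wins ties. */
static int effective_priority(const struct candidate *c, bool prev_is_sfu)
{
   int score = c->priority;
   if (prev_is_sfu && c->reads_prev_sfu_result)
      score -= 1;   /* stall penalty */
   return score;
}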
total instructions in shared programs: 9102719 -> 9117851 (0.17%)
instructions in affected programs: 4324628 -> 4339760 (0.35%)
helped: 4162
HURT: 12128
helped stats (abs) min: 1 max: 10 x̄: 1.28 x̃: 1
helped stats (rel) min: 0.09% max: 4.76% x̄: 0.66% x̃: 0.51%
HURT stats (abs) min: 1 max: 27 x̄: 1.69 x̃: 1
HURT stats (rel) min: 0.05% max: 7.69% x̄: 0.87% x̃: 0.68%
95% mean confidence interval for instructions value: 0.90 0.96
95% mean confidence interval for instructions %-change: 0.47% 0.50%
Instructions are HURT.
total max-temps in shared programs: 1327728 -> 1327812 (<.01%)
max-temps in affected programs: 4730 -> 4814 (1.78%)
helped: 61
HURT: 134
helped stats (abs) min: 1 max: 2 x̄: 1.08 x̃: 1
helped stats (rel) min: 2.70% max: 13.33% x̄: 4.89% x̃: 4.17%
HURT stats (abs) min: 1 max: 3 x̄: 1.12 x̃: 1
HURT stats (rel) min: 1.54% max: 20.00% x̄: 6.10% x̃: 5.26%
95% mean confidence interval for max-temps value: 0.28 0.58
95% mean confidence interval for max-temps %-change: 1.80% 3.52%
Max-temps are HURT.
total sfu-stalls in shared programs: 99551 -> 36324 (-63.51%)
sfu-stalls in affected programs: 95029 -> 31802 (-66.53%)
helped: 25882
HURT: 0
helped stats (abs) min: 1 max: 27 x̄: 2.44 x̃: 2
helped stats (rel) min: 5.26% max: 100.00% x̄: 79.86% x̃: 100.00%
95% mean confidence interval for sfu-stalls value: -2.47 -2.42
95% mean confidence interval for sfu-stalls %-change: -80.18% -79.54%
Sfu-stalls are helped.
total inst-and-stalls in shared programs: 9202270 -> 9154175 (-0.52%)
inst-and-stalls in affected programs: 5618516 -> 5570421 (-0.86%)
helped: 22728
HURT: 855
helped stats (abs) min: 1 max: 31 x̄: 2.16 x̃: 1
helped stats (rel) min: 0.07% max: 16.67% x̄: 1.14% x̃: 0.92%
HURT stats (abs) min: 1 max: 5 x̄: 1.25 x̃: 1
HURT stats (rel) min: 0.12% max: 5.26% x̄: 1.24% x̃: 0.86%
95% mean confidence interval for inst-and-stalls value: -2.07 -2.01
95% mean confidence interval for inst-and-stalls %-change: -1.07% -1.05%
Inst-and-stalls are helped.
v2: Rename v3d_qpu_generates_sfu_stalls to v3d_qpu_instr_is_sfu (Eric)
Reviewed-by: Eric Anholt <eric@anholt.net>
SFU operations have a latency of 2 cycles, so if their results
are used in the cycle immediately following the SFU instruction, the GPU
stalls for an extra cycle until the result is available.
This adds the number of stalls to the shader-db debug mode, as well as
the sum of instructions + stalls, to evaluate optimizations that schedule
instructions so as to avoid generating sfu-stalls.
v2: Rename v3d_qpu_generates_sfu_stalls to v3d_qpu_instr_is_sfu (Eric)
Reviewed-by: Eric Anholt <eric@anholt.net>
In virgl_buffer_transfer_extend, when no flush is needed, it tries
to extend a previously queued transfer instead if it can find one.
Compared to virgl_resource_transfer_prepare, it fails to check if
the resource is busy.
The existence of a previously queued transfer normally implies that
the resource is not busy, maybe except for when the transfer is
PIPE_TRANSFER_UNSYNCHRONIZED. Rather than burdening us with a
lengthy comment, and potential concerns over breaking it as the
transfer code evolves, this commit makes the valid_buffer_range
check the only condition to take the fast path.
In the real world, we hit the fast path almost only because of the
valid_buffer_range check. In micro benchmarks, the condition should
always be true; otherwise the benchmarks are not very representative
of meaningful workloads. I think this fix is justified.
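A sketch of what the simplified condition amounts to, assuming the usual
gallium util_range helper and a valid_buffer_range field on the resource
(illustrative, not the exact code):

#include "util/u_range.h"

/* A write that lands entirely outside the already-valid range cannot
 * conflict with anything the GPU may still be reading, so it may take
 * the fast path. */
static bool
can_take_fast_path(const struct util_range *valid_buffer_range,
                   unsigned box_x, unsigned box_width)
{
    return !util_ranges_intersect(valid_buffer_range,
                                  box_x, box_x + box_width);
}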
The recent change to PIPE_TRANSFER_MAP_DIRECTLY usage disables the
fast path. This commit re-enables it as well.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
Do not take a transfer and do the memcpy. Add a _buffer suffix to
the function name to make it clear that it is only for buffers.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
Without setting hw_res, virgl_transfer_queue_extend never finds a
match and always returns NULL.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
The implementation uses _mesa_ActiveTexture to change the active texture unit and
then resets it.
This causes an unnecessary _NEW_TEXTURE_STATE but:
- adding an index argument to _mesa_set_enable causes a lot of changes (~140 callers)
- enable_texture (called by _mesa_set_enable) might cause a _NEW_TEXTURE_STATE
anyway.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Added functions:
- glTextureImage1DEXT
- glTextureImage2DEXT
- glTextureImage3DEXT
- glTextureSubImage1DEXT
- glTextureSubImage3DEXT
- glCopyTextureImage1DEXT
- glCopyTextureImage2DEXT
- glCopyTextureSubImage1DEXT
- glCopyTextureSubImage2DEXT
- glCopyTextureSubImage3DEXT
- glGetTextureImageEXT
All but the last one can be compiled in a display list.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Move shared code into a new function (_get_texture_image) and use it instead
of duplicating the same lines.
It will also be used by the EXT_dsa functions (GetTextureImageEXT and GetMultiTexImageEXT).
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The amdgpu dri is used for the closed source AMD driver. Since this driver
does not implement multimedia, we fall back to radeonsi in mesa to do
multimedia. This corrects the dri driver name for when it is set to amdgpu.
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com> (v1)
Signed-off-by: Jeremy Newton <Jeremy.Newton@amd.com>
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
With b01524fff0 ("meson: don't build libGLES*.so with GLVND")
we dropped the incorrect pkg-config files for GLES*.
Since then, the glvnd issue of its missing files has become painfully
apparent, as it breaks the build for everyone using glvnd.
NVIDIA has had a fix for a few years now, but has yet to accept it:
https://github.com/NVIDIA/libglvnd/pull/86
Since the breakage is already there, let's clean up everything on our side
while we wait for NVIDIA to accept the fix.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Semantically, the memory barrier has to come first to wait
for the completion of pending memory requests.
Afterwards, the workgroups can be synchronized.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
ppir_node_replace_child is used by the const lowering routine in ppir.
All types need to be handled here, otherwise the src node is not updated
properly when one of the lowered nodes is a const, which results in, for
example, regalloc not assigning registers correctly.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
The branch instruction has sources which must be handled in src handling
paths so that regalloc assigns registers to them properly.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
If we don't have clear state (which gfx10 doesn't currently),
we fail to reset the scissor. AMDVLK will leave it set
to something else.
Marek also has this fix for radeonsi pending.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Enabling tracing, and then having a vmfault, can lead to a segfault
before we print out the traces, for example if a meta shader is executing
and we don't have the NIR for it.
Just pass the stage and give back a default.
Fixes: 9b9ccee4d6 ("radv: take LDS into account for compute shader occupancy stats")
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
This will be reused in the following patch to add support for clip
vertex lowering in geometry shaders.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This will allow code sharing in a following patch that adds support
for lowering in geometry shaders. It also allows us to exit early
if there is no lowering to do which allows a small code tidy up.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This doesn't quite work yet, but it illustrates how MRT is implemented
in the MFBD: rt_count is set appropriately based on the number of render
targets, while additional render target descriptors are appended on with
an index variable in them (not quite decoded since there are some aspects
we don't understand there, but conceptually this should be right).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
If tiler_heap_end == tiler_heap_start, ensure it's printed the same
rather than one erroring out as hex.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
There are no polygons, so you can't have any size to the polygon list,
although there is a minimal header.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Fixes a bunch of NULL dereferences, although it does cause GPU faults of
course.
This is caused by color buffers masked out in MRT, which we'll
eventually have to solve the right way... one thing at a time.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
128MB is excessive and 16MB is still plenty. Saves 112MB/context on
kernels without growable/heap support.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
If a function has a constant and is called more than once, after
inlining we may end up with different variables representing the same
constant. This commit looks into the data and de-duplicates them.
The first pass will now collect the constant data in a per-variable
buffer, then de-duplication happens (by sorting then doing a linear
walk), and the second pass will use the data in var->data.location.
One side-effect of the current implementation is that constants will
be reordered. If this turns out to be a problem, it is something that
can be fixed.
An alternative strategy considered was to perform this on a
per-function basis and then merge the results; the problem is that we
would have to fix up the offsets during the merge. Given the data we
have, the current patch is good enough.
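A rough sketch of the sort-then-linear-walk de-duplication (the struct and
field names are illustrative, not the pass's actual data structures):

#include <stdlib.h>
#include <string.h>

struct const_blob {
    const void *data;
    size_t size;
    struct const_blob *canonical; /* first blob with identical data */
};

static int
cmp_blob(const void *pa, const void *pb)
{
    const struct const_blob *a = pa, *b = pb;
    if (a->size != b->size)
        return a->size < b->size ? -1 : 1;
    return memcmp(a->data, b->data, a->size);
}

static void
dedup_blobs(struct const_blob *blobs, unsigned count)
{
    /* Sorting makes identical data adjacent; a single walk then points
     * every duplicate at its first occurrence. */
    qsort(blobs, count, sizeof(*blobs), cmp_blob);
    for (unsigned i = 0; i < count; i++)
        blobs[i].canonical = &blobs[i];
    for (unsigned i = 1; i < count; i++) {
        if (cmp_blob(&blobs[i - 1], &blobs[i]) == 0)
            blobs[i].canonical = blobs[i - 1].canonical;
    }
}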
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This will be used later on to allocate constant data for each
variable (and then deduplicate). Also drop initializing found_read,
as it is already implicitly false in the literal.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
v3d's NIR txf_ms lowering wants to swizzle around the input coordinates in
NIR, but doesn't generate a new txf_ms instruction as a replacement. It's
pretty easy to allow that in nir_shader_lower_instructions, and it may be
common in lowering passes.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
nir_lower_io leaves around deref_var instructions after lowering away
deref intrinsics. This ends up breaking validation after v3d_nir_lower_io
removes variables not actually being stored by the shader's
store_output()s.
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Z32F uses a dedicated float path. Z32F_S8 uses separate stencil planes
in the hardware, lowered via u_transfer_helper.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It is legal to load a shader from a NULL address, particularly when the
TILER job is used strictly for effects on the Z/S buffer with 0x0 color
mask. Don't crash the decoder in this case.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
When backside stenciling is disabled, backfacing primitives just do the
same thing as frontfacing primitives.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This commit cleans up API between the core of the rasterizer and swr.
Some formatting changes are also done.
Reviewed-by: Alok Hota <alok.hota@intel.com>
Treat gl_PointCoord as a system value and
add the necessary bits for correct codegen.
Signed-off-by: Andreas Baierl <ichgeh@imkreisrum.de>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
gl_PointCoord handling needs some special bits set in lima/ppir code
generation. Treating gl_PointCoord as a system value makes it easier
to distinguish from a regular varying.
Signed-off-by: Andreas Baierl <ichgeh@imkreisrum.de>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
When writing the scheduler, we forgot that you can't read the complex
unit in certain sources because it gets overwritten to 0 or 1. Fixing
this turned out to be possible without giving up and reducing
GPIR_VALUE_REG_NUM to 10, although it was difficult in a way I didn't
expect. There can be at most 4 next-max nodes that can't have moves
scheduled in the complex slot, so it actually isn't a problem for
getting the number of next-max nodes at 5 or lower. However, it is a
problem for stores. If a given node is a next-max node whose move cannot
go in the complex slot *and* is used by a store that we decide to
schedule, we have to reserve one of the non-complex slots for a move
instead of all the slots, or we can wind up in a situation where only
the complex slot is free and we fail the move. This means that we have
to add another term to the reservation logic, for stores whose children
cannot be in the complex slot.
Acked-by: Qiang Yu <yuq825@gmail.com>
Now, we do scheduling at the same time as value register allocation. The
ready list now acts similarly to the array of registers in
value_regalloc, keeping us from running out of slots. Before this, the
value register allocator wasn't aware of the scheduling constraints of
the actual machine, which meant that it sometimes chose the wrong false
dependencies to insert. Now, we assign value registers at the same time
as we actually schedule instructions, making its choices reflect reality
much better. It was also conservative in some cases where the new scheme
doesn't have to be. For example, in something like:
1 = ld_att
2 = ld_uni
3 = add 1, 2
It's possible that one of 1 and 2 can't be scheduled in the same
instruction as 3, meaning that a move needs to be inserted, so the value
register allocator needs to assume that this sequence requires two
registers. But when actually scheduling, we could discover that 1, 2,
and 3 can all be scheduled together, so that they only require one
register. The new scheduler speculatively inserts the instruction under
consideration, as well as all of its child load instructions, and then
counts the number of live value registers after all is said and done.
This lets us be more aggressive with scheduling when we're close to the
limit.
With the new scheduler, the kmscube vertex shader is now scheduled in 40
instructions, versus 66 before.
Acked-by: Qiang Yu <yuq825@gmail.com>
The location is unused for shader_temp and function_temp variables, and
due to the way nir_lower_io_to_temporaries demotes shader_out
variables to shader_temp variables, it happened to equal
VARYING_SLOT_POS for the gl_Position temporary, which made this pass
fail with the offline compiler due to this coming before vars_to_ssa.
Reviewed-by: Qiang Yu <yuq825@gmail.com>
With NGG, empty waves may still be required to export data.
This fixes dEQP-VK.ycbcr.format.*_unorm.geometry_*.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
For per-sample color writes we need the output intrinsic to pack the
sample index, which is not provided with regular store_output intrinsics
unless we figure out a way to encode it into the base or the offset.
v2:
- Drop the writemask (Eric)
Reviewed-by: Eric Anholt <eric@anholt.net>
We want to split the tlb specifier setup from the color writes, because when
we implement per-sample color writes we want to do the latter for all the
samples, but the former only once.
Reviewed-by: Eric Anholt <eric@anholt.net>
We will soon be adding per-sample color writes, which means additional
complexity and more indentation (we will need another loop to emit
the writes for each individual sample), so this will help keep
things simple and a bit more readable.
Reviewed-by: Eric Anholt <eric@anholt.net>
anv_format is supposed to have a pointer back to the associated
VkFormat; we were missing this for depth/stencil formats.
This doesn't fix anything afaict, but will be needed for future
changes.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 465de47bad ("anv: associate vulkan formats with aspects")
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
I can find no evidence that removing this is a good idea.
Fixes: 9b116173b6 ("radv: do not emit VGT_FLUSH on GFX10")
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
unorm and snorm require that the border color values are clamped, so when
picking the sampler view, copy and clamp the border color from the sampler and
use these adjusted values.
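A small sketch of the clamping (the format checks are illustrative):

#include <math.h>
#include <stdbool.h>

/* unorm formats can only represent [0, 1] and snorm formats [-1, 1], so
 * border color components outside that range must be clamped. */
static float
clamp_border_component(float v, bool is_unorm, bool is_snorm)
{
    if (is_unorm)
        return fminf(fmaxf(v, 0.0f), 1.0f);
    if (is_snorm)
        return fminf(fmaxf(v, -1.0f), 1.0f);
    return v;
}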
Fixes:
dEQP-GLES31.functional.texture.border_clamp.range_clamp.linear_compressed_color
dEQP-GLES31.functional.texture.border_clamp.range_clamp.linear_snorm_color
dEQP-GLES31.functional.texture.border_clamp.range_clamp.linear_srgb_color
dEQP-GLES31.functional.texture.border_clamp.range_clamp.linear_unorm_color
dEQP-GLES31.functional.texture.border_clamp.range_clamp.nearest_compressed_color
dEQP-GLES31.functional.texture.border_clamp.range_clamp.nearest_snorm_color
dEQP-GLES31.functional.texture.border_clamp.range_clamp.nearest_srgb_color
dEQP-GLES31.functional.texture.border_clamp.range_clamp.nearest_unorm_color
dEQP-GLES31.functional.texture.border_clamp.range_clamp.nearest_unorm_depth
dEQP-GLES31.functional.texture.border_clamp.range_clamp.nearest_unorm_depth_uint_stencil_sample_depth
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
The value of -0.5f is not small enough to produce negative coordinates,
so lower the minimum clamp value to -1.0f. This fixes a number of tests
from
dEQP-GLES31.functional.texture.border_clamp.*
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
When mirroring the texture coordinates, the indices must be mirrored as
well and the half-pixel shift must be applied in reverse.
Fixes a number of tests from:
dEQP-GLES31.functional.texture.gather.offset.*
dEQP-GLES31.functional.texture.gather.offsets.*
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
At this point all the draw caches are flushed to the old attached textures,
so the read caches of these textures will need to be updated too.
Fixes:
dEQP-GLES3.functional.fbo.color.repeated_clear.sample.tex2d.*
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
This fixes a rendering glitch observed in the SDL testscale test, where alpha
blending samples with value (1.0, 1.0, 1.0, 0.0) whitens the target instead
of having no effect.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
We need to check rgb_func/alpha_func when determining if blend or separate
alpha is required.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Dividing the fui result by 65535 is obviously wrong, and from testing, on
GC7000L at least there is no division by 65535.
Fixes dEQP-GLES2.functional.polygon_offset.fixed16_displacement_with_units
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
This is ported from AMDVLK; it's probably not required unless
we want to use "real time queues", but it might be nice to just have
it in place.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Rob Clark thinks this was likely a workaround for our const buffer
update bugs, and now that it's passing tests, we should be able to
drop it.
renderdoc-traces results:
traces/android/clashofclans.rdc: +6.1% +/- 1.1%
traces/android/candycrush.rdc: +5.2% +/- 1.6%
Reviewed-by: Rob Clark <robdclark@gmail.com>
Now that the bin vs render constlen is fixed, we can skip these waits.
Improves webgl aquarium performance at 10k fish from 27fps to 33.
Some highlights from renderdoc-traces:
traces/android/minecraft.rdc: +17.1% +/- 3.4%
traces/glmark2/ideas-speed=duration.rdc: +11.6% +/- 2.4%
traces/android/candycrush.rdc: +5.4% +/- 1.1%
traces/android/clashofclans.rdc: +4.4% +/- 1.3%
Reviewed-by: Rob Clark <robdclark@gmail.com>
We actually could go up to vs->constlen in the binning shader on a6xx,
but for sanity let's make sure that we're always under constlen. This
would have caught the bug fixed in 572c76fd88 ("freedreno: Clamp UBO
uploads to the constlen decided by the shader.")
Reviewed-by: Rob Clark <robdclark@gmail.com>
Fixes constlen overflow in
dEQP-GLES31.functional.shaders.builtin_var.compute.num_work_groups and
dEQP-GLES31.functional.image_load_store.buffer.image_size.readonly_32
and probably others.
Reviewed-by: Rob Clark <robdclark@gmail.com>
Spec says:
"timestampComputeAndGraphics specifies support for timestamps on all
graphics and compute queues. If this limit is set to VK_TRUE, all
queues that advertise the VK_QUEUE_GRAPHICS_BIT or
VK_QUEUE_COMPUTE_BIT in the VkQueueFamilyProperties::queueFlags
support VkQueueFamilyProperties::timestampValidBits of at least 36."
On gen7+ this should be true (we only have 32bits of timestamp on
gen6 and below).
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 802f00219a ("anv/device: Update features and limits")
Reported-by: Timothy Strelchun <timothy.strelchun@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Until now we only supported fast clear colors on the first miplevel and
layer. The main reason for it is that we can't have different fast clear
values at different levels/layers, since the surface state only supports
one clear value.
We can, however, enable it if we make sure we only use the same value
for all levels/layers, and if one of them changes, we resolve all the
others. We already do that for depth fast clears so hopefully it will be
fine for color fast clears too.
v2: Add check for partial clear too (Ken).
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Relax the restriction that all the writes need to be in the first
block: now accept variables that have all the writes in the same
block, and all the reads are dominated by that block.
This lets the pass identify large constants that are local to a helper
function. The writes will be at the place where the function is
inlined, possibly not in the first block (but still all in the same
block).
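A sketch of the relaxed acceptance check; nir_block_dominates() is the real
NIR helper (dominance metadata must be up to date), the rest is illustrative:

#include "nir.h"

/* All writes were already verified to live in write_block; the variable
 * qualifies when every read is dominated by that block. */
static bool
reads_dominated_by_write_block(nir_block *write_block,
                               nir_block *const *read_blocks,
                               unsigned num_reads)
{
    for (unsigned i = 0; i < num_reads; i++) {
        if (!nir_block_dominates(write_block, read_blocks[i]))
            return false;
    }
    return true;
}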
Results for vkpipeline-db in SKL:
total instructions in shared programs: 3624891 -> 3623145 (-0.05%)
instructions in affected programs: 79416 -> 77670 (-2.20%)
helped: 16
HURT: 0
total cycles in shared programs: 1458149667 -> 1458147273 (<.01%)
cycles in affected programs: 30154164 -> 30151770 (<.01%)
helped: 14
HURT: 2
total loops in shared programs: 2437 -> 2437 (0.00%)
loops in affected programs: 0 -> 0
helped: 0
HURT: 0
total spills in shared programs: 8813 -> 8745 (-0.77%)
spills in affected programs: 2894 -> 2826 (-2.35%)
helped: 8
HURT: 0
total fills in shared programs: 23470 -> 23392 (-0.33%)
fills in affected programs: 12248 -> 12170 (-0.64%)
helped: 6
HURT: 2
LOST: 0
GAINED: 0
Results for shader-db in SKL with Iris:
total instructions in shared programs: 15379442 -> 15379392 (<.01%)
instructions in affected programs: 837 -> 787 (-5.97%)
helped: 2
HURT: 2
helped stats (abs) min: 27 max: 27 x̄: 27.00 x̃: 27
helped stats (rel) min: 10.47% max: 10.67% x̄: 10.57% x̃: 10.57%
HURT stats (abs) min: 2 max: 2 x̄: 2.00 x̃: 2
HURT stats (rel) min: 1.23% max: 1.23% x̄: 1.23% x̃: 1.23%
95% mean confidence interval for instructions value: -39.14 14.14
95% mean confidence interval for instructions %-change: -15.51% 6.17%
Inconclusive result (value mean confidence interval includes 0).
total loops in shared programs: 4880 -> 4880 (0.00%)
loops in affected programs: 0 -> 0
helped: 0
HURT: 0
total cycles in shared programs: 370677237 -> 370676567 (<.01%)
cycles in affected programs: 17852 -> 17182 (-3.75%)
helped: 2
HURT: 1
helped stats (abs) min: 338 max: 356 x̄: 347.00 x̃: 347
helped stats (rel) min: 13.98% max: 14.64% x̄: 14.31% x̃: 14.31%
HURT stats (abs) min: 24 max: 24 x̄: 24.00 x̃: 24
HURT stats (rel) min: 0.18% max: 0.18% x̄: 0.18% x̃: 0.18%
total spills in shared programs: 11772 -> 11772 (0.00%)
spills in affected programs: 0 -> 0
helped: 0
HURT: 0
total fills in shared programs: 24948 -> 24948 (0.00%)
fills in affected programs: 0 -> 0
helped: 0
HURT: 0
LOST: 0
GAINED: 0
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
In many cases, the compiler can just copy-prop the strided MOV whereas
the conversion is a bit trickier. This cuts 5% of the instructions off
of one particular Vulkan CTS test which does lots of load_ssbo.
Reviewed-by: Matt Turner <mattst88@gmail.com>
These seem like obvious enough optimizations in the world of multiple
integer bit sizes. The only known thing which hits these at the moment
is some Vulkan CTS tests for 16-bit SSBO values which like to up-cast
and check for equality. However, it's something that's bound to come up
as we start seeing more integers in shaders.
The optimizations of comparisons of casted values with constants are
something which we would ideally do with range analysis. However,
lacking that, we can do it in opt_algebraic as long as one side is a
constant.
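An illustrative source-level view of the transform (not the actual
nir_opt_algebraic rule syntax): when the constant fits in the narrow type,
the comparison can be performed before the up-cast.

#include <stdbool.h>
#include <stdint.h>

static bool
eq_upcast_example(uint16_t a)
{
    uint32_t wide = (uint32_t)a;       /* u2u32 a                      */
    bool before = (wide == 7u);        /* 32-bit comparison            */
    bool after  = (a == (uint16_t)7);  /* equivalent 16-bit comparison */
    return before == after;            /* always true: 7 fits in 16 bits */
}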
In dEQP-VK.ssbo.phys.layout.random.16bit.scalar.13, this commit, along
with the previous commit, reduce the number of instructions emitted on
Skylake from 55328 to 44546, a reduction of 20%.
Acked-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
We could, in theory, add the same optimization for 64-bit unpack
operations, but that's likely to fight with 64-bit integer lowering on
platforms which require it, so it will require more infrastructure before
it will be a good idea.
Reviewed-by: Matt Turner <mattst88@gmail.com>
This helps greatly when debugging algebraic transform generators because
you can now actually see the output and verify that your transforms are
getting generated.
Acked-by: Matt Turner <mattst88@gmail.com>
This fixes some validation errors generated by certain D->W conversions
but is likely not a full solution. Calculating an actual register
stride is a far more complex problem in general and should probably be
handled by the brw_fs_generator.
Reviewed-by: Matt Turner <mattst88@gmail.com>
It was checking if the dest or src[0] SSA values were vectors, rather than
whether the ALU op was using the source as a vector, resulting in a
nir_fdot4 making it through to vc4 and v3d:
vec1 32 ssa_6 = fdot4 ssa_4.xxxx, ssa_5
Fixes: c1cffa4249 ("nir/alu_to_scalar: Use the new NIR lowering framework")
v2: Use Jason's recommendation to look at input_sizes.
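For reference, a one-line sketch of that per-source check (nir_op_infos is
the real NIR opcode table; the variable name is illustrative):

/* A fixed input size greater than 1 means the opcode consumes that source
 * as a whole vector (e.g. fdot4), even when the destination is scalar. */
bool src0_used_as_vector = nir_op_infos[alu->op].input_sizes[0] > 1;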
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Theoretically we would like these split, since varyings can have
specially optimized flags (no map, coherent local). For now, since
neither of these flags is particularly meaningful, merge them
together instead of special-casing varyings_mem.
Saves upwards of 64MB of RAM per context.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
With the following chain of events :
vkQueuePresent()
<- Surface resize
vkQueuePresent()
We should be able to report SUBOPTIMAL or OUT_OF_DATE on the second
vkQueuePresent() call. Currently we only look at X11 events in the
vkAcquireNextImage() path so we're not able to report this.
This change checks the queue of events and processes any available ones
to update the swapchain status.
v2: Be consistent about reporting the current error state of the
swapchain (Jason)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111097
Cc: <mesa-stable@lists.freedesktop.org>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Add a struct to maintain which SPIR-V extensions are supported, and a
utility method to initialize it based on
nir_spirv_supported_capabilities.
v2:
* Fixing code style (Ian Romanick)
* Adding a prefix (spirv) to fill_supported_spirv_extensions (Ian Romanick)
v3: rebase update (nir_spirv_supported_extensions renamed)
v4: include AMD_gcn_shader support
v5: move spirv_fill_supported_spirv_extensions to
src/mesa/main/spirv_extensions.c
Signed-off-by: Alejandro Piñeiro <apinheiro@igalia.com>
Signed-off-by: Arcady Goldmints-Orlov <agoldmints@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Ideally this should be generated somehow. One option would be to gather
all the extension dependencies listed on the core grammar, but there
would be the possibility of not including some of the extensions.
Note that spirv-tools does it just slightly better, as it has a
hardcoded list of extensions manually taken from the registry, which
they parse to get the enum and the to_string method (see
generate_grammar_tables.py).
v2:
* Use a macro to improve readability. (Tapani Pälli)
* Add unreachable on the switch, no default (Eric Engestrom)
* No typedef enum (Ian Romanick)
* Sort extensions names (Ian Romanick)
* Don't add extensions unlikely to be supported by Mesa at any point
(Ian Romanick)
v3: rebase update
v4: Include AMD_gcn_shader
v5: move spirv_extensions_to_string to src/mesa/main/spirv_extensions.c
Signed-off-by: Alejandro Piñeiro <apinheiro@igalia.com>
Signed-off-by: Arcady Goldmints-Orlov <agoldmints@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
v2:
* Mention extension gap at gl_API.xml (Emil Velikov)
* Bail with INVALID_ENUM if extension not available on getStringi (Emil Velikov)
* Use EXTRA_EXT macro when defining the extension at
get.c/get_hash_params.py (Emil Velikov)
* Rename source files (spirvextensions.[ch] -> spirv_extensions.[ch]) (Ian)
v3:
* Fix GL_PROGRAM_BINARY_FORMATS glGet query, broken by error on a
previous rebase
v4:
* Fix rebase conflicts on getstring.c after
GL_SHADING_LANGUAGE_VERSION query was added
v5:
* Remove src/mapi/glapi/gen/Makefile.am as it no longer exists in
master
Signed-off-by: Alejandro Piñeiro <apinheiro@igalia.com>
Signed-off-by: Arcady Goldmints-Orlov <agoldmints@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
I did implement this extension a while ago but it didn't work
on pre-GFX10 for some reason. Now all CTS tests pass.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This is unsupported and hangs.
This fixes GPU hangs with
dEQP-VK.tessellation.geometry_interaction.limits.output_required_*.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
It should be possible to build it on-demand too but it requires
more work. On GFX10, the GS copy shader is required when tess
is enabled with extreme geometry.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Thanks to Eric Engestrom for pointing out that there was something wrong
with that function.
Fixes: 724a73509e ("softpipe: Prepare handling explicit gradients")
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This decoration can be ignored, so we can just skip the next steps.
Otherwise we'd have to also handle it in apply_var_decoration.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Sources of phi instructions act as if they occur at the very end of the
predecessor block not the block in which the phi lives. In order to
handle them correctly, we have to skip phi sources on the normal
instruction walk and handle them as a separate walk over the successor
phis. While registers in phi instructions are a bit of an oddity, they can
happen when we temporarily go out-of-SSA for control-flow manipulations.
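A sketch of the separate walk over successor phis (real NIR iterators and
types; the callback and function name are illustrative):

#include "nir.h"

static void
visit_phi_srcs_of_successors(nir_block *block,
                             void (*cb)(nir_src *src, void *data),
                             void *data)
{
    for (unsigned i = 0; i < 2; i++) {
        nir_block *succ = block->successors[i];
        if (!succ)
            continue;
        nir_foreach_instr(instr, succ) {
            if (instr->type != nir_instr_type_phi)
                break; /* phis are always at the top of a block */
            nir_phi_instr *phi = nir_instr_as_phi(instr);
            nir_foreach_phi_src(phi_src, phi) {
                /* This source logically lives at the end of "block". */
                if (phi_src->pred == block)
                    cb(&phi_src->src, data);
            }
        }
    }
}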
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111075
Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
It's disallowed according to the SPIR-V spec, or at least I think that's
what the spec says. It's in a section explicitly about explicit layout
of things in the StorageBuffer, Uniform, and PushConstant storage
classes so it's not 100% clear that it applies with other storage
classes. However, it seems like it should apply in general and
violating it can trigger (fairly harmless) asserts in NIR.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
When running on ICL, the
dEQP-VK.ssbo.phys.layout.random.16bit.scalar.13 test needs more than 1M for
the shader, so bump it.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This avoids needing format_pack to have access to the GLenum return
functions for mesa_format. It seems like an odd function and unlikely
to be reused.
Reviewed-by: Thomas Helland <thomashelland90@gmail.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
_mesa_get_srgb_format_linear() just returns the original format if it
wasn't sRGB.
Reviewed-by: Thomas Helland <thomashelland90@gmail.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
I'm considering moving most of this code to src/util/, and I want that
code to not expose GL types in its interfaces.
Reviewed-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
We want errors in the table to show up as unit test failures in MRs.
Also keeps unit test code out of the built drivers.
Reviewed-by: Thomas Helland <thomashelland90@gmail.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
The two implementations differ across the entire input range only in
that u_half.h preserves mantissa bits for NaNs. The u_half.h version
shaves 15% off of the text size of half_float.o.
Reviewed-by: Thomas Helland <thomashelland90@gmail.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
You could break the test and meson test wouldn't complain, since we
returned success either way.
Reviewed-by: Thomas Helland <thomashelland90@gmail.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
This patch adds the missing build rules for Android
to avoid the following build errors:
In file included from external/mesa/src/amd/vulkan/radv_debug.c:35:
In file included from external/mesa/src/amd/vulkan/radv_debug.h:27:
external/mesa/src/amd/vulkan/radv_private.h:95:10:
fatal error: 'gfx10_format_table.h' file not found
^~~~~~~~~~~~~~~~~~~~~~
1 error generated.
In file included from external/mesa/src/amd/vulkan/radv_android.c:31:
external/mesa/src/amd/vulkan/radv_private.h:95:10:
fatal error: 'gfx10_format_table.h' file not found
^~~~~~~~~~~~~~~~~~~~~~
1 error generated.
Fixes: 3dc5ec5d16 ("radv/gfx10: generate gfx10_format_table.h")
Signed-off-by: Mauro Rossi <issor.oruam@gmail.com>
Acked-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Add sampler uniforms for the UV plane(s), so the driver can count the
uniforms and get the correct sampler count.
Fixes lowered YUV on a6xx which actually wants to know # of samplers.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
We can hit the multi-fd EGL_NATIVE_BUFFER_ANDROID case when the native
android buffer is YUV. So we need to handle that.
So far this has gone unnoticed because, even though we have two or
three fd's for YUV native android buffers, they all reference the
same backing buffer. But we really shouldn't rely on that.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Now that the 64-bit lowering passes do a complete lowering in one go, we
don't need to loop anymore. We do, however, have to ensure that int64
lowering happens after double lowering because double lowering can
produce int64 ops.
Reviewed-by: Eric Anholt <eric@anholt.net>
One advantage of this is that we no longer need to run in a loop because
the new framework handles lowering instructions added by lowering.
Reviewed-by: Eric Anholt <eric@anholt.net>
One advantage of this is that we no longer need to run in a loop because
the new framework handles lowering instructions added by lowering.
Reviewed-by: Eric Anholt <eric@anholt.net>
Instead of only lowering system values from variables, lower most of them to intrinsics
and let the lowering framework immediately lower the intrinsic. This
will result in a bit more instruction churn but it means that NIR code
builders can just use intrinsics instead of everything having to go
through variables.
Reviewed-by: Eric Anholt <eric@anholt.net>
Instead of having context-aware builder functions, just provide lowering
for the system value intrinsics and let nir_shader_lower_instructions
handle the recursion for us. This makes everything a bit simpler and
means that the lowering can also be used if something comes in as a
system value intrinsic rather than a load_deref.
Reviewed-by: Eric Anholt <eric@anholt.net>
When debugging, we're given the fault_pointer unresolved, so it is
helpful to have more context in the decode.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Midgard supports two modes of operation, 32-bit mode and 64-bit mode.
The GPU is natively 64-bit, but job descriptors can be submitted in
32-bit mode. Among other changes, 32-bit mode shortens pointer sizes to
use 32-bit pointers rather than the full 64-bit range.
The blob decides which mode to use based on the CPU bitness, so an armhf
system uses 32-bit descriptors and an aarch64 system uses 64-bit
descriptors. For a while, we mimicked this, but inevitably this caused
the 32-bit support to lag behind as our reference platform is 64-bit.
To combat the code staleness, we traced an older GPU paired with a 64-bit
CPU (the Midgard T720 on-board the sunxi H64). From there, we could tell
which fields were really about hardware and which fields were simply
reflections of the descriptor bitness.
From there, we decided to remove support for 32-bit descriptors
entirely, using 64-bit descriptors unconditionally. There is minimal
performance penalty for this in practice, and it allows us to unify
these disparate code paths. This fixes:
- T860 + armhf
- T820 + armhf
- T760 + aarch64
And will help bringup of 1st/2nd generation Midgard regardless of CPU.
[Work done by Tomeu. Commit message written by Alyssa.]
v2: Add comments preserving information about the old behaviour for
future reference. Fix a compiler warning. (Alyssa)
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
In 6ce8592836 we started looking at the dynamic stencil state and
disabling stencil writes when the stencil mask is zero. Unfortunately,
we never updated the PMA fix code accordingly so 3DSTATE_WM_DEPTH_STENCIL
and the PMA fix were getting out-of-sync causing hangs.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109203
Fixes: 6ce8592836 "anv: Disable stencil writes when both write..."
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Rather than hardcoding a BO layout at creation-time, we implement the
ability to hint layouts at various points in a BO's lifetime,
potentially reallocating and switching layouts if it's heuristically
deemed useful to do so.
In this patch, we add a simple hinting implementation, opportunistically
compressing FBOs.
Support is hidden behind PAN_MESA_DEBUG=afbc as the implementation is
incomplete (software access to AFBC is unimplemented at the moment) and
therefore would regress significantly.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The CopyPixels meta operation generates a buffer object
but does not free it on cleanup.
Fixes: 37d11b13ce ("meta: Don't pollute the buffer object namespace in _mesa_meta_setup_vertex_objects")
Signed-off-by: Sergii Romantsov <sergii.romantsov@globallogic.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
The NGG GS epilogue no longer calls that function, so the assertion
is just useless now.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
For NGG, the driver relies on the VS outinfo struct.
This fixes
dEQP-VK.clipping.user_defined.clip_*_vert_tess_geom_*
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
We emit code to saturate texture coordinates when using clamp wrapping
mode so if we don't flag the dirty state here we don't get to recompile
the shaders when the wrapping mode changes.
v2:
- Do the same when setting sampler views (Eric)
- Use a switch statement instead of an if ladder.
- Swap the shader stage assertion with an unreachable.
Fixes:
spec/!opengl 1.1/texwrap 1d bordercolor/gl_rgba8, border color only
spec/!opengl 1.1/texwrap 1d proj bordercolor/gl_rgba8, projected, border color only
spec/!opengl 1.1/texwrap 2d bordercolor/gl_rgba8, border color only
spec/!opengl 1.1/texwrap 2d proj bordercolor/gl_rgba8, projected, border color only
spec/!opengl 1.1/texwrap formats bordercolor-swizzled/gl_alpha12, swizzled, border color only
spec/!opengl 1.1/texwrap formats bordercolor-swizzled/gl_alpha16, swizzled, border color only
spec/!opengl 1.1/texwrap formats bordercolor-swizzled/gl_alpha4, swizzled, border color only
spec/!opengl 1.1/texwrap formats bordercolor-swizzled/gl_alpha8, swizzled, border color only
spec/!opengl 1.1/texwrap formats bordercolor-swizzled/gl_intensity8, swizzled, border color only
spec/!opengl 1.1/texwrap formats bordercolor-swizzled/gl_luminance4_alpha4, swizzled, border color only
spec/!opengl 1.1/texwrap formats bordercolor-swizzled/gl_luminance6_alpha2, swizzled, border color only
spec/!opengl 1.1/texwrap formats bordercolor-swizzled/gl_luminance8, swizzled, border color only
spec/!opengl 1.1/texwrap formats bordercolor-swizzled/gl_luminance8_alpha8, swizzled, border color only
spec/!opengl 1.1/texwrap formats bordercolor-swizzled/gl_r3_g3_b2, swizzled, border color only
spec/!opengl 1.1/texwrap formats bordercolor-swizzled/gl_rgb10, swizzled, border color only
spec/!opengl 1.1/texwrap formats bordercolor-swizzled/gl_rgb10_a2, swizzled, border color only
spec/!opengl 1.1/texwrap formats bordercolor-swizzled/gl_rgb4, swizzled, border color only
spec/!opengl 1.1/texwrap formats bordercolor-swizzled/gl_rgb5, swizzled, border color only
spec/!opengl 1.1/texwrap formats bordercolor-swizzled/gl_rgb5_a1, swizzled, border color only
spec/!opengl 1.1/texwrap formats bordercolor-swizzled/gl_rgb8, swizzled, border color only
spec/!opengl 1.1/texwrap formats bordercolor-swizzled/gl_rgba4, swizzled, border color only
spec/!opengl 1.1/texwrap formats bordercolor-swizzled/gl_rgba8, swizzled, border color only
spec/!opengl 1.1/texwrap formats bordercolor/gl_alpha12, border color only
spec/!opengl 1.1/texwrap formats bordercolor/gl_alpha16, border color only
spec/!opengl 1.1/texwrap formats bordercolor/gl_alpha4, border color only
spec/!opengl 1.1/texwrap formats bordercolor/gl_alpha8, border color only
spec/!opengl 1.1/texwrap formats bordercolor/gl_intensity8, border color only
spec/!opengl 1.1/texwrap formats bordercolor/gl_luminance4_alpha4, border color only
spec/!opengl 1.1/texwrap formats bordercolor/gl_luminance6_alpha2, border color only
spec/!opengl 1.1/texwrap formats bordercolor/gl_luminance8, border color only
spec/!opengl 1.1/texwrap formats bordercolor/gl_luminance8_alpha8, border color only
spec/!opengl 1.1/texwrap formats bordercolor/gl_r3_g3_b2, border color only
spec/!opengl 1.1/texwrap formats bordercolor/gl_rgb10, border color only
spec/!opengl 1.1/texwrap formats bordercolor/gl_rgb10_a2, border color only
spec/!opengl 1.1/texwrap formats bordercolor/gl_rgb4, border color only
spec/!opengl 1.1/texwrap formats bordercolor/gl_rgb5, border color only
spec/!opengl 1.1/texwrap formats bordercolor/gl_rgb5_a1, border color only
spec/!opengl 1.1/texwrap formats bordercolor/gl_rgb8, border color only
spec/!opengl 1.1/texwrap formats bordercolor/gl_rgba4, border color only
spec/!opengl 1.1/texwrap formats bordercolor/gl_rgba8, border color only
spec/!opengl 1.2/texwrap 3d bordercolor/gl_rgba8, border color only
spec/!opengl 1.2/texwrap 3d proj bordercolor/gl_rgba8, projected, border color only
spec/arb_es2_compatibility/texwrap formats bordercolor-swizzled/gl_rgb565, swizzled, border color only
spec/arb_es2_compatibility/texwrap formats bordercolor/gl_rgb565, border color only
spec/arb_texture_compression/texwrap formats bordercolor-swizzled/gl_compressed_alpha, swizzled, border color only
spec/arb_texture_compression/texwrap formats bordercolor-swizzled/gl_compressed_luminance_alpha, swizzled, border color only
spec/arb_texture_compression/texwrap formats bordercolor-swizzled/gl_compressed_rgb, swizzled, border color only
spec/arb_texture_compression/texwrap formats bordercolor/gl_compressed_alpha, border color only
spec/arb_texture_compression/texwrap formats bordercolor/gl_compressed_luminance_alpha, border color only
spec/arb_texture_compression/texwrap formats bordercolor/gl_compressed_rgb, border color only
spec/arb_texture_float/texwrap formats bordercolor-swizzled/gl_alpha16f_arb, swizzled, border color only
spec/arb_texture_float/texwrap formats bordercolor-swizzled/gl_intensity16f_arb, swizzled, border color only
spec/arb_texture_float/texwrap formats bordercolor-swizzled/gl_luminance16f_arb, swizzled, border color only
spec/arb_texture_float/texwrap formats bordercolor-swizzled/gl_luminance_alpha16f_arb, swizzled, border color only
spec/arb_texture_float/texwrap formats bordercolor-swizzled/gl_rgb16f, swizzled, border color only
spec/arb_texture_float/texwrap formats bordercolor-swizzled/gl_rgba16f, swizzled, border color only
spec/arb_texture_float/texwrap formats bordercolor/gl_alpha16f_arb, border color only
spec/arb_texture_float/texwrap formats bordercolor/gl_intensity16f_arb, border color only
spec/arb_texture_float/texwrap formats bordercolor/gl_luminance16f_arb, border color only
spec/arb_texture_float/texwrap formats bordercolor/gl_luminance_alpha16f_arb, border color only
spec/arb_texture_float/texwrap formats bordercolor/gl_rgb16f, border color only
spec/arb_texture_float/texwrap formats bordercolor/gl_rgba16f, border color only
spec/arb_texture_rectangle/texwrap rect bordercolor/gl_rgba8, border color only
spec/arb_texture_rectangle/texwrap rect proj bordercolor/gl_rgba8, projected, border color only
spec/arb_texture_rg/texwrap formats bordercolor-swizzled/gl_r8, swizzled, border color only
spec/arb_texture_rg/texwrap formats bordercolor-swizzled/gl_rg8, swizzled, border color only
spec/arb_texture_rg/texwrap formats bordercolor/gl_r8, border color only
spec/arb_texture_rg/texwrap formats bordercolor/gl_rg8, border color only
spec/arb_texture_rg/texwrap formats-float bordercolor-swizzled/gl_r16f, swizzled, border color only
spec/arb_texture_rg/texwrap formats-float bordercolor-swizzled/gl_rg16f, swizzled, border color only
spec/arb_texture_rg/texwrap formats-float bordercolor/gl_r16f, border color only
spec/arb_texture_rg/texwrap formats-float bordercolor/gl_rg16f, border color only
spec/ext_packed_float/texwrap formats bordercolor-swizzled/gl_r11f_g11f_b10f, swizzled, border color only
spec/ext_packed_float/texwrap formats bordercolor/gl_r11f_g11f_b10f, border color only
spec/ext_texture_shared_exponent/texwrap formats bordercolor-swizzled/gl_rgb9_e5, swizzled, border color only
spec/ext_texture_shared_exponent/texwrap formats bordercolor/gl_rgb9_e5, border color only
spec/ext_texture_snorm/texwrap formats bordercolor-swizzled/gl_alpha8_snorm, swizzled, border color only
spec/ext_texture_snorm/texwrap formats bordercolor-swizzled/gl_intensity8_snorm, swizzled, border color only
spec/ext_texture_snorm/texwrap formats bordercolor-swizzled/gl_luminance8_alpha8_snorm, swizzled, border color only
spec/ext_texture_snorm/texwrap formats bordercolor-swizzled/gl_luminance8_snorm, swizzled, border color only
spec/ext_texture_snorm/texwrap formats bordercolor-swizzled/gl_r8_snorm, swizzled, border color only
spec/ext_texture_snorm/texwrap formats bordercolor-swizzled/gl_rg8_snorm, swizzled, border color only
spec/ext_texture_snorm/texwrap formats bordercolor-swizzled/gl_rgb8_snorm, swizzled, border color only
spec/ext_texture_snorm/texwrap formats bordercolor-swizzled/gl_rgba8_snorm, swizzled, border color only
spec/ext_texture_snorm/texwrap formats bordercolor/gl_alpha8_snorm, border color only
spec/ext_texture_snorm/texwrap formats bordercolor/gl_intensity8_snorm, border color only
spec/ext_texture_snorm/texwrap formats bordercolor/gl_luminance8_alpha8_snorm, border color only
spec/ext_texture_snorm/texwrap formats bordercolor/gl_luminance8_snorm, border color only
spec/ext_texture_snorm/texwrap formats bordercolor/gl_r8_snorm, border color only
spec/ext_texture_snorm/texwrap formats bordercolor/gl_rg8_snorm, border color only
spec/ext_texture_snorm/texwrap formats bordercolor/gl_rgb8_snorm, border color only
spec/ext_texture_snorm/texwrap formats bordercolor/gl_rgba8_snorm, border color only
spec/ext_texture_srgb/texwrap formats bordercolor-swizzled/gl_sluminance8, swizzled, border color only
spec/ext_texture_srgb/texwrap formats bordercolor-swizzled/gl_sluminance8_alpha8, swizzled, border color only
spec/ext_texture_srgb/texwrap formats bordercolor-swizzled/gl_srgb8, swizzled, border color only
spec/ext_texture_srgb/texwrap formats bordercolor-swizzled/gl_srgb8_alpha8, swizzled, border color only
spec/ext_texture_srgb/texwrap formats bordercolor/gl_sluminance8, border color only
spec/ext_texture_srgb/texwrap formats bordercolor/gl_sluminance8_alpha8, border color only
spec/ext_texture_srgb/texwrap formats bordercolor/gl_srgb8, border color only
spec/ext_texture_srgb/texwrap formats bordercolor/gl_srgb8_alpha8, border color only
Reviewed-by: Eric Anholt <eric@anholt.net>
This fixes
dEQP-VK.spirv_assembly.instruction.graphics.16bit_storage.input_output_*
Found with RADV_DEBUG=checkir
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Setting this seems to be broken; amdvlk only sets it for quilted
textures, and I'm not sure what those are.
Fixes dEQP-VK.glsl.texture_functions.query.texturesize*3d*
Fixes: bf11f1c3a4 ("radv/gfx10: add gfx10_make_texture_descriptor")
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
v2: Include '-s' to suppress the diff.
v3: use the git config command (Ken), use < (Eric)
Reviewed-by: Matt Turner <mattst88@gmail.com> (v1)
Acked-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The stride was already overridden when using
lower_workgroup_access_to_offsets, so elaborate a bit on the commentary
there.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Use the alignment to calculate the stride associated with the pointer
types. That stride is used when the pointers are cast to arrays.
Note that size alone is not sufficient, e.g. struct { vec2 a; vec1 b;
} will have an element size of 12 bytes, but the stride needs
to be 16 bytes to respect the 8-byte alignment.
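A sketch of the calculation for power-of-two alignments, as in the example
above (the function name is illustrative):

/* The stride is the element size rounded up to the element alignment,
 * e.g. size 12 with 8-byte alignment gives a 16-byte stride. */
static unsigned
type_stride(unsigned size, unsigned align)
{
    return (size + align - 1) & ~(align - 1);
}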
Fixes: 050eb6389a "spirv: Ignore ArrayStride in OpPtrAccessChain for Workgroup"
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
We don't implement batch splitting quite yet which is necessary for the
ludicrous number of draw calls these tests invoke. Blacklist them for
now.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
When we allocate them, we allocate with two references accidentally,
causing them to leak uncontrollably.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This destructor will be used to legitimately free the BOs, now that a BO
free with cacheable=0 is only a "fake" free.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Saves a call to calloc (the maximum size is small and known at
compile-time) and fixes a leak.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
For drivers supporting PIPE_CAP_SIGNED_VERTEX_BUFFER_OFFSET, the buffer_offset value
will be interpreted as a signed int.
An example of application code causing a negative offset:
float b[] = { ... }; // 3 floats for pos, 3 for color
glBufferData(GL_ARRAY_BUFFER, ..., b, ...);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), 0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), &b[3]);
                                                                    ^ should be 3 * sizeof(float)
The offset is a ptr so when interpreted as a signed int it can be negative.
This commit adds a verification that (int) buffer_offset is not negative; this would
indicate an application bug. Since it's too late to emit a GL_INVALID_VALUE error,
we replace the negative offset with 0 and emit a debug message.
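A sketch of the guard described above (names are illustrative; debug_printf
is the usual gallium debug helper):

if ((int)buffer_offset < 0) {
    /* Too late for GL_INVALID_VALUE: warn and fall back to offset 0. */
    debug_printf("negative vertex buffer offset, likely an application bug\n");
    buffer_offset = 0;
}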
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
It can be useful to call the decoder on a single batch. But, that batch
may not contain STATE_BASE_ADDRESS, at which point the decoder will have
no idea how to find any buffers. We can initialize the two static bases
at the beginning of time, so it has them even if it never sees SBA.
Surface base address changes dynamically, possibly in the middle of a
batch. So we update it at the start of each batch, making it always
start at the value we inherited from the previous one. SBA commands
inside the batch can update it to a proper value.
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Specifically needed for nativewindow for some VK_EXT_external_memory_android_hardware_buffers
functions, where we call into some AHardwareBuffer functions.
The legacy Android ext did not have us call into any Android function
at all and hence it was not noticed.
Fixes: 755c633b8d "anv: Fix vulkan build in meson."
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Chad Versace <chadversary@chromium.org>
The old algorithm is still used (and the same issue -- namely, leaking
all shaders -- applies) but we're way more concise about it since we're
*only* using the routine for shaders nowadays; everything else is a
a proper BO or transient.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
I don't think this is still necessary, and if it is, we'll have to
figure out how to fix it the right way.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It's not legal to reuse the vertex shader descriptor across frames now
that we patch it at draw-time, so upload to transient memory.
Ideally, we could be smarter about this such that subsequent draws with
the same vertex shader and same patched state would reuse the
descriptor, but for now, let's simply achieve correctness.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We use the new PAN_ALLOCATE_DELAY_MMAP flag to only map resources
on-demand, which should avoid mapping FBOs.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
While we're at it, prompted by a semantics issue around INVISIBLE, also
add a separate DELAY_MMAP flag.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
On the new kernel, mmapping doesn't *hurt* per se, but it's still
wasteful for buffers explicitly marked as not needing an mmap.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
anv_render_pass_compile() turns an unused attachment into a NULL
depth_stencil_attachment pointer so check that pointer before
accessing it.
Found with updates to existing CTS tests.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 208be8eafa ("anv: Make subpass::depth_stencil_attachment a pointer")
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Juan A. Suarez <jasuarez@igalia.com>
It will be used for stream output, but for now it is only declared
for VS and when the PrimitiveID needs to be exported.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Only VS needs that. We shouldn't hardcode these values, but
avoiding that is complicated for now.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
ac_nir_context is initialized after the driver emits the NGG GS
prologue so it's likely to crash.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
"unknown_2" field is actually a size of instruction that branch
points to. If it's set to a smaller size than actual instruction
branch behavior is not defined (and it usually wedges the GPU).
Fix it by setting this field correctly.
Fixes: af0de6b91c ("lima/ppir: implement discard and discard_if")
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
This corresponds to what the GC3000 blob does. The USED / UNUSED enums are
wrong, at least for GC2000/GC3000.
Without this the 3rd texture component is not interpolated correctly (flat?)
in the following test (and others):
dEQP-GLES2.functional.texture.mipmap.cube.generate.rgba8888_nicest
Strangely, when the texture is sampled from OpenGL it works correctly,
the problem only shows up for sampling by gallium/blitter. This fixes other
cube map tests which use util_blitter_blit.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
The rs alignment doesn't have to be multiplied by # of pixel pipes.
This works on GC2000 which doesn't have the SINGLE_BUFFER feature.
This fixes some cubemaps (NPOT / small mipmap levels) because aligning by 8
breaks the expected alignment of 4 for tiled format. We don't want to mess
with the alignment of tiled formats.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
The MIN filter is never used when not using mipmaps. This fixes that.
Interestingly, only GC3000 needs this (GC2000 works without this fix).
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
ROUND_UV rounding breaks nearest filtering.
Enable it only when nearest filtering isn't used.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
I don't particularly care about getting x86/ARM cross-build coverage
of all the window systems, but we do want to be building src/mesa/
(for x86 asm) and gallium drivers (for vc4 NEON asm). I'm also hoping
to use these build products for testing freedreno on actual HW (which
we do using surfaceless).
This increases the docker image from 1.4G to 1.5G.
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Acked-by: Eric Engestrom <eric@engestrom.ch>
When using softpin, the first allocation was not calculating the
padding and offset correctly for the case the first allocation needed
to grow. We failed to initialize state.end right after
expanding the pool for the first time.
This is not a problem for non-softpin since there we don't use
leftover padding so the ends would re-arrange incrementally.
This fixes running dEQP-VK.ssbo.phys.layout.random.16bit.scalar.13 in
SKL -- the test uses a shader larger than the initial size for the
instruction pool.
Fixes: dfc9ab2ccd "anv/allocator: Add padding information."
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
There is widespread consensus that simple_list should go away.
This patch converts one more use to the modern kernel-style list.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
For bindless SSBO access, we have to do 64-bit address calculations. On
ICL and above, we don't have 64-bit integer support so we have to lower
the address calculations to 32-bit arithmetic. If we don't run the
optimization loop before lowering, we won't fold any of the address
chain calculations before lowering 64-bit arithmetic and they aren't
really foldable afterwards. This cuts the size of the generated code in
the compute shader in dEQP-VK.ssbo.phys.layout.random.16bit.scalar.13 by
around 30%.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
We don't even support replay anymore; this is just wasting characters
and adding clutter.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This allows dumping memory properties directly without dereferencing an
address, allowing us to fix more -Waddress-of-packed-member warnings.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It could be midgard_outmod_float or midgard_outmod_int; don't assume
it's one or the other. Fixes -Wenum-conversion warnings.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Mali job dependency graphs, at least for GLES3.0, have the special
property that a given node will only have at most a single dependent.
This allows us to efficiently precompute the dependent array and
replace an inner loop's O(N) search with an O(1) lookup, bringing the
algorithmic complexity of scoreboarding from O(N^2) to O(N).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
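As an illustration of the precomputation described above, here is a minimal
sketch in C with hypothetical job-node types (not the actual panfrost
scoreboarding structures):

#include <assert.h>

#define EXAMPLE_MAX_DEPS 2

struct example_job {
   unsigned dependency_count;
   unsigned dependencies[EXAMPLE_MAX_DEPS];
};

/* Invert the dependency edges once: dependent_of[j] becomes the single
 * node depending on j, or -1 if none. This is O(N) total work, after
 * which the scoreboarding inner loop is a plain O(1) array lookup
 * instead of an O(N) search. */
static void
example_precompute_dependents(const struct example_job *jobs,
                              unsigned job_count, int *dependent_of)
{
   for (unsigned i = 0; i < job_count; ++i)
      dependent_of[i] = -1;

   for (unsigned i = 0; i < job_count; ++i) {
      for (unsigned d = 0; d < jobs[i].dependency_count; ++d) {
         unsigned dep = jobs[i].dependencies[d];
         assert(dependent_of[dep] == -1); /* at most one dependent */
         dependent_of[dep] = (int)i;
      }
   }
}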
Now that it has been totally replaced by the borrow mechanism, this is
unused code.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The whole purpose of the transient memory model is to make subdivision
stupidly easy, so let's handle that.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The batch now temporarily possesses the transient buffer, so it'll need
to remember that to free it later.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We use a fixed size slab if we can, otherwise we create a dedicated
("oversized") BO and add that to the job. In the latter case we'll get
reference counting for free so we can forget about this corner case for
the rest of the series.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We would like transient allocations to occur on the screen (borrowed by
the batch) rather than on the context. Add fields to track this.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The latter upload is correct, but the former upload is unassociated with
any particular FBO and therefore becomes orphaned. We do have to upload
at draw-time at the latest, if we haven't by then.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Zero-sized allocations will fail with an unhelpful errno from the
kernel; check size explicitly in userspace before it gets that far.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
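A minimal sketch of the userspace guard, with hypothetical names standing in
for the real BO-creation path:

#include <errno.h>
#include <stddef.h>

/* Hypothetical wrapper around the kernel's BO-create ioctl. */
extern int example_bo_create_ioctl(size_t size, unsigned *out_handle);

/* Reject zero-sized allocations before they reach the kernel, where
 * they would only produce an unhelpful errno. */
static int
example_bo_create(size_t size, unsigned *out_handle)
{
   if (size == 0)
      return -EINVAL;

   return example_bo_create_ioctl(size, out_handle);
}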
The test is based on link_shaders().
For example, it allows the following test (when run on SPIR-V mode) to
pass:
spec/arb_compute_shader/linker/mix_compute_and_non_compute.shader_test
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
The shader combination tests are copied from link_shaders().
For example, it allows the following tests (when run on SPIR-V mode) to
pass:
spec/arb_tessellation_shader/linker/no-vs
spec/arb_tessellation_shader/linker/tcs-no-vs
spec/arb_tessellation_shader/linker/tes-no-vs
spec/glsl-1.50/linker/gs-without-vs
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Right now we don't support disk cache for SPIR-V shaders (from
ARB_gl_spirv), so let's avoid writing the program data to or reading
it from the disk if any in-use shaders use SPIR-V.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Right now we don't have cache support for SPIR-V shaders (from
ARB_gl_spirv). They are properly skipped because they fall on the ff
shader code path (no key, no name), but it would be better to update
the current comments and add some guards.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Allocate UniformDataDefaults and fill in the data defaults when
linking a SPIR-V program. Among other things, this allows program
serialization to work.
It allows the following piglit test (when run on SPIR-V mode) to pass:
spec/arb_get_program_binary/execution/uniform-after-restore.shader_test
v2: use memcpy to initialize UniformDataDefaults
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
When querying MAX_NUM_ACTIVE_VARIABLES, NUM_ACTIVE_VARIABLES and
ACTIVE_VARIABLES over SSBO and UBO interfaces, we filter the variables
which are active using the variable's name and looking for it in the
program resource list. If it is in the program resource list, the
variable will be considered active.
However, due to ARB_gl_spirv, where name reflection information is not
mandatory, we can use the UBO/SSBO binding and variable offset instead
to filter which variables are active.
v2: use RESOURCE_UBO/UNI macros instead of direct castings, update
comment (Alejandro)
v3: Change signature of _mesa_program_resource_find_active_variable
to simplify calling it. Also, squash the fix for find_binding_offset
for arrays of blocks (Arcady)
Signed-off-by: Antia Puentes <apuentes@igalia.com>
Signed-off-by: Alejandro Piñeiro <apinheiro@igalia.com>
Signed-off-by: Arcady Goldmints-Orlov <agoldmints@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
When querying GL_LOCATION_INDEX using glGetProgramResourceiv
we already know the index of the resource, we do not need to find
it using the name, which is convenient for shaders coming from
SPIR-V binaries where names are optional.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Fixes the program queries API (glGetProgramiv):
TRANSFORM_FEEDBACK_VARYINGS and TRANSFORM_FEEDBACK_VARYING_MAX_LENGTH
in two cases:
1. ARB_enhanced_layouts:
The queries were not working for GLSL shaders which specify the
varyings using enhanced layouts. We were returning the info as if the
varyings could only be specified using the API.
2. ARB_gl_spirv:
TRANSFORM_FEEDBACK_VARYING_MAX_LENGTH should return 1 if there is no
name reflection information available.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
From the ARB_gl_spirv specification, glGetUniformLocation should
return -1 when no name reflection is available.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
For shaders constructed from SPIR-V binaries, it is possible that
no name reflection information is available. In that case,
- glGetProgramInterfaceiv(.., pname=MAX_NAME_LENGTH, ..)
- glGetProgramResourceiv(.., props=NAME_LENGTH, ..)
should return 1.
Signed-off-by: Antia Puentes <apuentes@igalia.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Since ARB_gl_spirv, a lot of name reflection information may be
missing, so we need to add NULL name checks for several queries and
return a specific value in those cases. This commit adds
them for ACTIVE_UNIFORM_BLOCK_MAX_NAME_LENGTH,
ACTIVE_ATTRIBUTE_MAX_LENGTH and ACTIVE_UNIFORM_MAX_LENGTH.
From ARB_gl_spirv spec:
"If pname is ACTIVE_UNIFORM_BLOCK_MAX_NAME_LENGTH, the length of
the longest active uniform block name, including the null
terminator, is returned. If no active uniform blocks exist, zero
is returned. If no name reflection information is available, one
is returned.
If pname is ACTIVE_ATTRIBUTE_MAX_LENGTH, the length of the longest
active attribute name, including a null terminator, is returned.
If no active attributes exist, zero is returned. If no name
reflection information is available, one is returned.
If pname is ACTIVE_UNIFORM_MAX_LENGTH, the length of the longest
active uniform name, including a null terminator, is returned. If
no active uniforms exist, zero is returned. If no name reflection
information is available, one is returned."
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
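A hedged sketch of the rule quoted above, using a hypothetical resource array
rather than Mesa's real program-resource machinery:

#include <string.h>

/* Hypothetical resource record: Name may be NULL when the program was
 * built from SPIR-V with no name reflection information. */
struct example_resource {
   const char *Name;
};

static int
example_max_name_length(const struct example_resource *res, unsigned count)
{
   if (count == 0)
      return 0; /* no active resources exist -> zero */

   int max = 0;
   for (unsigned i = 0; i < count; ++i) {
      if (!res[i].Name)
         return 1; /* no name reflection information -> one */
      int len = (int)strlen(res[i].Name) + 1; /* include the terminator */
      if (len > max)
         max = len;
   }
   return max;
}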
From the ARB_program_interface_query specification:
"For the property TOP_LEVEL_ARRAY_SIZE, a single integer
identifying the number of active array elements of the top-level
shader storage block member containing to the active variable is
written to <params>. If the top-level block member is not
declared as an array, the value one is written to <params>. If
the top-level block member is an array with no declared size, the
value zero is written to <params>."
"For the property TOP_LEVEL_ARRAY_STRIDE, a single integer
identifying the stride between array elements of the top-level
shader storage block member containing the active variable is
written to <params>. For top-level block members declared as
arrays, the value written is the difference, in basic machine
units, between the offsets of the active variable for consecutive
elements in the top-level array. For top-level block members not
declared as an array, zero is written to <params>."
v2: move top_level_array_size and stride into nir_link_uniforms_state
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
ARB_gl_spirv states that the offset must be explicit; however, this is
only true for 'root' types. For complex types, like struct members or
arrays of arrays, it needs to be computed.
We are not using the offset stored in the gl_buffer_variables during
the uniform blocks linking because currently we do not have a way to
relate a gl_buffer_variable with its corresponding gl_uniform_storage.
The GLSL path uses the name for that, but we can not rely on that
because names are optional in SPIR-V.
Notice that uniforms non-backed by a buffer object will have an offset
equal to -1, like in the GLSL path.
v2: add offset and var_is_in_block as per-variable state in
nir_link_uniforms_state (Arcady)
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
v2: added TODO comment hinting possible future refactoring of
nir_build_program_resource_list and build_program_resource_list,
to avoid code duplication (Alejandro, to explicitly reflect a
valid concern from Timothy during the review).
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
v2: "nir/linker: Use the stageref when adding UBO/SSBO resources"
squashed on this one (Timothy)
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Binding comparison is used to determine the block the uniform is part
of. Note that to do the binding comparison we need the information in
UniformBlocks[] and ShaderStorageBlocks[] to be available, so we have
to call gl_nir_link_uniform_blocks() before linking the uniforms.
v2: add missing break (Timothy)
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
c8665005: ("intel/compiler: Don't always require precise lowering of flrp")
forgot to remove some comments that didn't apply any more after the
change.
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This was probably not caught before because no supported test was
exercising the flrp lowering with a bit size other than 32.
With the arrival of VK_KHR_shader_float_controls we will have some of
those and, unless we keep the bit size, we will end with something
like:
../src/compiler/nir/nir_builder.h:420: nir_builder_alu_instr_finish_and_insert: Assertion `src_bit_size == bit_size' failed.
Fixes: 158370ed2a ("nir/flrp: Add new lowering pass for flrp instructions")
Fixes: ae02622d8f ("nir/flrp: Lower flrp(a, b, c) differently if another flrp(_, b, c) exists")
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
With separate stencil usage, we can't just grab the usage from the image
directly and have to consider the per-aspect usage instead.
Fixes: 1be38f9178 "anv:Use VK_EXT_separate_stencil_usage to avoid..."
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Don't rely on them being preinitialized to zero; this can cause junk to
appear on the wire.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
A bunch of these are from asserts not being compiled in 32-bit mode
(once Erik's ASSERTABLE stuff is merged, we'll want to switch).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This includes metadata as well. On GFX10, we have to invalidate
the L2 metadata cache when shaders read DCC.
Note that we still have to implement GFX10 coherency by
introducing INV_L2_METADATA but for now just flush L2.
This fixes a corruption with DCC and Talos.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This allows removing a mov of 1/-1, as it is implicit in the
operation.
As with atomic inc/dec/add, the usual shader-db set doesn't include any
GLES shaders using it. So, using vk-gl-cts shaders as a workaround, we
get this:
total instructions in shared programs: 1217013 -> 1217006 (<.01%)
instructions in affected programs: 53 -> 46 (-13.21%)
helped: 2
HURT: 0
One of the helped shader went from 40 to 34 instructions.
Reviewed-by: Eric Anholt <eric@anholt.net>
And moved to a new auxiliary method, v3d40_image_load_store_tmu_op,
equivalent to the nir_to_vir v3d_general_tmu_op, to clean up a little.
Reviewed-by: Eric Anholt <eric@anholt.net>
Among other things, this avoids the need to load 1/-1 constants (so
one less operation).
The removed comment suggests the option of adding support in NIR for
inc/dec. Intel just uses an auxiliary method to get which hw operation
is needed, so no lowering is needed, and, being so small, it seems
unreasonable to try to add a general one to NIR itself. It is easier
to just adapt the method here (which is what this patch does).
It is worth noting that we are not getting any change on shader-db
stats because those methods are only used by shaders needing GLSL >
4.2, which the usual shader-db set doesn't include. In general there
aren't too many GLSL ES 3.1 tests either.
As an alternative, we captured the GLES3/GLSL31/GLSL32 shaders used on
vk-gl-cts, even if that is not real-life shader usage. With
those we get the following:
total instructions in shared programs: 1217022 -> 1217013 (<.01%)
instructions in affected programs: 117 -> 108 (-7.69%)
helped: 6
HURT: 0
helped stats (abs) min: 1 max: 2 x̄: 1.50 x̃: 1
helped stats (rel) min: 3.57% max: 10.00% x̄: 8.09% x̃: 9.09%
95% mean confidence interval for instructions value: -2.07 -0.93
95% mean confidence interval for instructions %-change: -10.54% -5.64%
Instructions are helped.
Note that the number of helped shaders is really low because most of
the vk-gl-cts tests using AtomicInc/Dec/Add are compute shaders.
Although right now there is a branch around with CS support, the usual
practice is to do the stats against master.
Reviewed-by: Eric Anholt <eric@anholt.net>
They are already defined, although in a slightly different format in
the generated packet headers, so it was necessary to change how they
are used in nir_to_vir.
In addition to allowing the removal of some duplicated headers, it will
allow defining just one get_op_for_atomic_add aux method later to
support using inc/dec instead of add of 1/-1.
Reviewed-by: Eric Anholt <eric@anholt.net>
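A sketch of what such a get_op_for_atomic_add helper can look like; the
V3D_TMU_OP_* names are assumed from the generated packet headers mentioned
above, and the NIR helpers are the standard ones:

#include "nir.h"

/* V3D_TMU_OP_* values come from the generated packet headers (assumed
 * spellings). If the value being atomically added is a constant +1/-1,
 * use the dedicated increment/decrement TMU ops so the 1/-1 constant
 * load can be dropped; otherwise fall back to the plain add op. */
static uint32_t
example_op_for_atomic_add(nir_intrinsic_instr *instr, unsigned src)
{
   if (nir_src_is_const(instr->src[src])) {
      int64_t add_val = nir_src_as_int(instr->src[src]);
      if (add_val == 1)
         return V3D_TMU_OP_WRITE_AND_READ_INC;
      if (add_val == -1)
         return V3D_TMU_OP_WRITE_OR_READ_DEC;
   }

   return V3D_TMU_OP_WRITE_ADD_READ_PREFETCH;
}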
This implements support for OpenGL logic operations by emitting code to read
from the TLB if needed and blending the fragment output accordingly. It is
similar to VC4's blend lowering pass, but exclusive to logic operations, since
blending is otherwise supported in hardware.
The pass doesn't handle MSAA targets yet.
Fixes the following piglit tests:
spec/!opengl 1.0/gl-1.0-logicop/*
spec/!opengl 1.1/gl-1.1-xor
spec/!opengl 1.1/gl-1.1-xor-copypixels
It also fixes text cursor rendering in Libreoffice with the GTK+2 theme, which
is rendered via glamor using the XOR logic operation.
v2: fix checks for allowed variable location and maximum render target (Eric)
Reviewed-by: Eric Anholt <eric@anholt.net>
Until now we have always been emitting our scoreboard locks on the last thread
switch to improve parallelism. We did this by emitting our last thread switch
right before our tlb writes at the very end of the program, where we know that
we are outside control flow.
Unfortunately, this strategy is not valid when we have tlb color reads too, as
these will happen before this point in the program and can happen inside
control flow.
To fix this we always emit a thread switch before the first tlb load and if we
see additional thread switches after that point, we change the strategy to lock
on the first thread switch.
v2: change the solution so it is expected to work in more scenarios (Eric).
Reviewed-by: Eric Anholt <eric@anholt.net>
We will be emitting this intrinsic to signal TLB color loads when we implement
OpenGL logic operations, where we need to blend the fragment shader color
output with the existing color in the render target.
Per-sample TLB reads are not supported yet.
v2: fix the offset into the color_reads array (Eric).
Reviewed-by: Eric Anholt <eric@anholt.net>
This is intended to be used, for example, with OpenGL logic operations. It
takes a render target as source and a sample index in the base index for
MSAA color reads.
v2: drop the CAN_ELIMINATE and CAN_REORDER flags (Eric).
Reviewed-by: Eric Anholt <eric@anholt.net>
We need to scale the size of these arrays to consider up to
V3D_MAX_DRAW_BUFFERS render targets and 4 components per color.
v2: we want to store each color component separately, so scale by 4 too.
Reviewed-by: Eric Anholt <eric@anholt.net>
The ldtlbu version will read an implicit uniform with the TLB read
specifier and should be used for the first read in a sequence
of TLB reads (unless the default configuration is valid, in which
case we can use ldtlb). The ldtlb version is used for any subsequent
TLB read in the sequence.
Reviewed-by: Eric Anholt <eric@anholt.net>
We already have code to print these signals but the early return in the code
that checks if any signals are present was missing the checks for them,
so it would skip printing them unless they were paired with other signals.
Reviewed-by: Eric Anholt <eric@anholt.net>
Kind of a funky corner case that does not (as far as I know) apply to
organic shaders from GLES but does pop up in generated shaders from the
fixed-function desktop pipeline.
Fixes: bb483a9166 ("panfrost: Clamp point size")
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The screen header includes the common xml, and otherwise we might race
to build before it's done.
Fixes: e03259974e ("freedreno: Generate headers from xml files")
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
A couple patches later in this series use the flag to avoid a few
thousand shader-db regressions on all vec4 platforms.
I'm not particularly enamored with the name of this flag. However, I
suspect the Intel vec4 backend is the only backend that will benefit
from it. Specifically, the cases where this helps are all cases where
we want to prevent nir_opt_algebraic from rearranging instructions to
create 3-source instructions, such as ffma and flrp, with additional
immediate value or uniform sources.
The earlier commit "intel/vec4: Try to emit a single load for multiple
3-src instruction operands" solves most of the problems caused by
additional immediate values, but the restrictions on register strides
that cause problems for uniforms and shader inputs persist.
Reviewed-by: Matt Turner <mattst88@gmail.com>
If a 3-source instruction uses immediate values 1.0 and -1.0, just load
1.0 into a register. Use the negation source modifier to get -1.0.
This has trivial impact now, but it prevents a few thousand regressions
on vec4 platforms with "nir/algebraic: Recognize open-coded flrp(-1, 1,
a) and flrp(1, -1, a)"
All Gen6 and Gen7 platforms had similar results. (Haswell shown)
total instructions in shared programs: 13487412 -> 13487406 (<.01%)
instructions in affected programs: 541 -> 535 (-1.11%)
helped: 6
HURT: 0
helped stats (abs) min: 1 max: 1 x̄: 1.00 x̃: 1
helped stats (rel) min: 0.36% max: 2.08% x̄: 1.65% x̃: 1.80%
95% mean confidence interval for instructions value: -1.00 -1.00
95% mean confidence interval for instructions %-change: -2.33% -0.97%
Instructions are helped.
total cycles in shared programs: 376402564 -> 376402500 (<.01%)
cycles in affected programs: 10348 -> 10284 (-0.62%)
helped: 10
HURT: 1
helped stats (abs) min: 2 max: 26 x̄: 7.00 x̃: 2
helped stats (rel) min: 0.13% max: 2.05% x̄: 0.89% x̃: 0.79%
HURT stats (abs) min: 6 max: 6 x̄: 6.00 x̃: 6
HURT stats (rel) min: 0.29% max: 0.29% x̄: 0.29% x̃: 0.29%
95% mean confidence interval for cycles value: -11.72 0.08
95% mean confidence interval for cycles %-change: -1.20% -0.36%
Inconclusive result (value mean confidence interval includes 0).
No shader-db changes on any other Intel platform.
Reviewed-by: Matt Turner <mattst88@gmail.com>
I'm not sure why I thought there was an off-by-one, other than maybe bad
data collection.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Gen11 SLM is not on L3 anymore, so now the hardware has two separate
fences. Add a way to control which fence types to use.
At this time, we don't have enough information in NIR to control the
visibility of the memory being fenced, so for now be conservative and
assume that fences will need a stall. With more information later
we'll be able to reduce those.
Fixes Vulkan CTS tests in ICL:
dEQP-VK.memory_model.message_passing.core11.u32.coherent.fence_fence.atomicwrite.device.payload_nonlocal.workgroup.guard_local.buffer.comp
dEQP-VK.memory_model.message_passing.core11.u32.coherent.fence_fence.atomicwrite.device.payload_local.buffer.guard_nonlocal.workgroup.comp
dEQP-VK.memory_model.message_passing.core11.u32.coherent.fence_fence.atomicwrite.device.payload_local.image.guard_nonlocal.workgroup.comp
dEQP-VK.memory_model.message_passing.core11.u32.coherent.fence_fence.atomicwrite.workgroup.payload_local.buffer.guard_nonlocal.workgroup.comp
dEQP-VK.memory_model.message_passing.core11.u32.coherent.fence_fence.atomicwrite.workgroup.payload_local.image.guard_nonlocal.workgroup.comp
The whole set of supported tests in dEQP-VK.memory_model.* group
should be passing in ICL now.
v2: Pass BTI around instead of having an enum. (Jason)
Emit two SHADER_OPCODE_MEMORY_FENCE instead of one that gets
transformed into two. (Jason)
List tests fixed. (Lionel)
v3: For clarity, split the decision of which fences to emit from the
emission code. (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This reverts commit 812ce2ce9e.
We massively regress with the reverted patch. So in the meantime, take
it out.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Only Z24S8 is properly supported right now, so let's be careful. Fixes a
number of issues relating to improper Z/S handling. The most obvious is
depth buffers with incorrect strides, which manifests in truly bizarre
ways and can happen commonly with FBOs.
Fixes WebGL (Aquarium runs, etc).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
In emit_vertex we optimize storage if the output mask does not
have all bits set. Do the same in the epilogue so the indices
actually match up.
Fixes dEQP-VK.geometry.input.basic_primitive.points because it
outputs PSIZE with an output mask of 1, which causes the generic
attribute for the color to be loaded from the wrong indices.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
The dimensions also have to be adjusted if the number of supported
mip levels is changed.
This fixes dEQP-VK.api.info.image_format_properties.3d.*.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
For some reason, D32_SFLOAT is also affected on GFX10; it works
fine on previous generations.
This fixes some dEQP-VK.renderpass2.depth_stencil_resolve.*.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
We were failing to flag the program dirty when it changed. Also, we
were unnecessarily setting key->input_vertices for SINGLE_PATCH mode,
which would reduce program cache hits. Only set it if needed.
There are two constructors for glsl_struct_field with different
parameters. Instead of repeating them for both constructors, this
patch adds a convenience macro. This will make it easier to add a
third constructor in a later patch.
Reviewed-by: Eric Anholt <eric@anholt.net>
All of the builtin variables mentioned in the GLSL ES spec and the
extensions include a precision declaration which is different
depending on what the variable is used for. This patch makes it set
the corresponding precision when creating the variable. This will make
a difference once we start using the precision information for
optimisation. Previously all of the builtin variables ended up with a
precision of NONE.
v2: Made gl_PointSize and gl_FragCoord highp since GLSL ES 3.00. Fixed
gl_MaxViewPorts to always be highp. (Eric Anholt)
Reviewed-by: Eric Anholt <eric@anholt.net>
We were recompiling the softfp64 library of functions from GLSL to NIR
every time we compiled a shader that used fp64. Worse, we were ralloc
stealing it to the GL context. This meant that we'd accumulate lots of
copies for the lifetime of the context, which was a big space leak.
Instead, we can simply stash a single copy in the GL context, and use
it for subsequent compiles. Having a single copy should be fine from
a memory context point of view: nir_inline_function_impl already clones
the necessary nir_function_impl's as it inlines.
KHR-GL45.enhanced_layouts.ssb_member_align_non_power_of_2 was previously
OOM'ing a system with 16GB of RAM when using softfp64. Now it finishes
much more quickly and uses only ~200MB of RAM.
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
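The caching idea as a minimal sketch, using a hypothetical context struct and
compile helper rather than Mesa's real gl_context field and GLSL frontend
entry points:

#include "nir.h"

/* Hypothetical per-context state holding the one shared copy. */
struct example_context {
   nir_shader *softfp64;
};

/* Hypothetical helper that compiles the GLSL fp64 library to NIR. */
extern nir_shader *example_compile_fp64_funcs(const nir_shader_compiler_options *options);

/* Build the softfp64 library once per context and reuse it for every
 * later fp64 compile; nir_inline_function_impl clones the function
 * impls it needs, so sharing a single copy is safe. */
static nir_shader *
example_get_softfp64(struct example_context *ctx,
                     const nir_shader_compiler_options *options)
{
   if (!ctx->softfp64)
      ctx->softfp64 = example_compile_fp64_funcs(options);

   return ctx->softfp64;
}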
Right now, all keys have two things in common: a program string ID and a
sampler_prog_key_data. I'd like to add another thing or two and need a
place to put it. This commit adds a new brw_base_prog_key struct which
contains those two common bits.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
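Roughly the layout described above; the struct names here are illustrative
stand-ins rather than the exact upstream definitions:

#include "brw_compiler.h" /* for brw_sampler_prog_key_data */

/* The two members every stage key currently shares. */
struct example_base_prog_key {
   unsigned program_string_id;
   struct brw_sampler_prog_key_data tex;
};

/* Stage keys embed the base as their first member, so a pointer to the
 * base can alias any stage key when only the common bits are needed. */
struct example_wm_prog_key {
   struct example_base_prog_key base;
   /* ... stage-specific fields ... */
};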
We're about to change the type of key to be brw_base_prog_key and that
will mean blindly adding the key size without a cast will lead to the
wrong calculation. It's safer to cast to char * first anyway.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
I'm not 100% sure how this ever worked because gem_create usually shoots
you if the BO size isn't page-aligned.
Reviewed-by: Chad Versace <chadversary@chromium.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This is the MOCS setting used for the A64 stateless messages which we
sometimes use for SSBO operations.
Fixes: 48ed2a7bb0 "anv: Implement VK_EXT_buffer_device_address"
Fixes: 79fb0d27f3 "anv: Implement SSBOs bindings with GPU addr..."
Reviewed-by: Chad Versace <chadversary@chromium.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
It's not clear whether the hardware really has a maximum, which confuses
dEQP; clamp to whatever we report as our maximum.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
In preparation for a Panfrost-based non-Gallium driver (maybe
Vulkan...?), hoist everything except for the Gallium driver into a
shared src/panfrost. Practically, that means the compilers, the headers,
and pandecode.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The reason for doing this is two-fold:
1. These passes are likely to be shared with the Bifrost compiler
Therefore, we don't want to restrict them to Midgard
2. The coding style is different (NIR-style vs Panfrost-style)
The NIR passes are candidates for moving upstream into
compiler/nir, so don't block that off for stylistic reasons
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
PIPE_CAP_SM3 has always been an odd one out of all our caps. While most
other caps are fine-grained and single-purpose, this cap encode several
features in one. And since OpenGL cares more about single features, it'd
be nice to get rid of this one.
As it turns out, this is now relatively simple. We only really care about
three features using this cap, and those already got their own caps. So
we can remove it, and make sure all current drivers just give the same
response to all of them.
The only place we *really* care about SM3 is in nine, and there we can
instead just re-construct the information based on the finer-grained
caps. This keeps DX9 semantics from needlessly leaking into all of the
drivers, most of which don't care a whole lot about DX9 specifically.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
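A sketch of how nine can reconstruct the SM3 answer from the finer-grained
caps described in the follow-up patches; the cap names used here are assumed
from those patch descriptions:

#include <stdbool.h>
#include "pipe/p_defines.h"
#include "pipe/p_screen.h"

/* SM3 is only advertised when all of the individual features the old
 * PIPE_CAP_SM3 bundled together are present. */
static bool
example_screen_supports_sm3(struct pipe_screen *screen)
{
   return screen->get_param(screen, PIPE_CAP_FRAGMENT_SHADER_TEXTURE_LOD) &&
          screen->get_param(screen, PIPE_CAP_FRAGMENT_SHADER_DERIVATIVES) &&
          screen->get_param(screen, PIPE_CAP_VERTEX_SHADER_SATURATE);
}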
Shader Model 3.0 is a big promise to make to the state-tracker, and
for instance mobile hardware might support vertex-shader saturate but
not some of the other features of SM3. So let's give this its own cap
for simplicity.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Shader Model 3.0 is a big promise to make to the state-tracker, and
for instance mobile hardware might support fragment-shader derivatives
but not some of the other features of SM3. So let's give this its own
cap for simplicity.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Shader Model 3.0 is a big promise to make to the state-tracker, and
for instance mobile hardware might support texture lod but not some
of the other features of SM3. So let's give this its own cap for
simplicity.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This boolean is only consulted once during init, so there's nothing
much saved by storing this in the context. So let's just check directly
when we need it instead.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Panfrost is known to only work on a select few CPU/GPU combinations at
the moment (tested system-on-chips: RK3288, RK3399, and S912). Whitelist
the combinations known to work and refuse to load on others where
nothing works yet to avoid user confusion.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
A lot of the pan_screen.c code was cargoculted from other drivers. The
upshot is that we return true for a lot of PIPE_CAPs that we don't
actually support, resulting in us exposing way too many extensions that
we don't actually support. Be more careful.
Some CAPs we do need to fake to access higher dEQP versions (i.e. in
order to debug the features we're hiding behind the CAP). For these, we
hide the CAP behind a special PAN_MESA_DEBUG=deqp option to avoid
apps randomly using these in-development features.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
It's easy to forget about, but shader size does matter for things like
i-cache, so let's include it in the analysis.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We have to emit it anyway for the report to be happy (with respect to
unrolling), so return an actual count rather than dummy numbers.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
bany/ball type ops read from all 4 channels even though they only write
to 1; specify this in the opcode table like we do for dot products.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Now that the output usage mask is set to 0x1 the LayerID is
correctly exported in the loop above.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
When the stage preceding FS doesn't export it the fragment shader
might read it, even if it's 0.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Much of the format selection code was inherited from softpipe (!) of all
places, and a lot of it is accordingly cruft. Later if-elses were added
in random places to workaround missing formats at various points in
history. Clean up some of this.
Theoretically, any format we can texture from we can also render to. In
practice, there are a few corner cases that we need to disable
explicitly.
For one, we do have to restrict SCANOUT formats to workaround
buggy apps (in particular, dEQP which with --deqp-surface-type=window
under Weston will end up with RGB10_A2 and complain about low alpha
precision). Just be clearer about how/why.
Also, RGB5_A1 support is still broken; let's not worry about that quite
yet.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Rather than have random variables flying around and a long if-else
chain, use a switch. They're literally *designed* for this.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We add support for writing out (via a blend shader):
- RGBA4
- RGB10_A2_UNORM
- RGB10_A2_UINT
- RGB5_A1_UNORM
- R11G11B10_FLOAT
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We would like to permit keying blend shaders against the framebuffer
format, which requires some new blending abstractions.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We would like the offset field to be unsigned, letting 0 represent "no
offset" and positive represent an offset.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
I'm not sure I'm totally comfortable with this, but conceptually neither
float nor pure-int formats require any format conversion, except size
conversion. Going from a shaderable format (fp32 or i16, for instance)
into a blendable format (fp16) is a separate question, one we can defer
momentarily while we're not interested in actually blending.
As an aside, I'd be fascinated by an integer-based blending
implementation.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We'll need some careful handling, but for now, get some baseline code
out for handling float formats in a blend shader.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Normally, disabled blend can definitely be fixed-function'd away, but
if a blend shader is used merely for format conversion rather than
blending, this code path can be nevertheless hit.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Ideally, we would keep Galliumisms far away from the compiler;
unfortunately, Mesa hasn't standardized on a system of format codes to be
shared across APIs and across drivers, so using Gallium formats is our
best bet in the short run.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We now have some preliminary fp16 support available. We're not able to
expose this for GLSL quite yet, but for internal blend shaders, we're
able to control bitness ourselves just fine. So let's fp16 that
stuff!
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Eventually this should be replaced by proper tex RA / not emitting so
many silly moves to begin with / better general copy prop. For now
remove it since it breaks things.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Share a single mask field in midgard_instruction with a unified format,
rather than using separate masks for each instruction tag with
hardware-specific formats. Eliminates quite a bit of duplicated code and
will enable vec8/vec16 masks as well (which don't map as cleanly to the
hardware as we might like).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We can handle differing, but we'd prefer not to because there are
restrictions on sizing which aren't accounted for yet.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It's not clear where the extra indirection was from (older hardware or
just older blobs?)
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The scale and type-convert can now be expressed in NIR, rather than MIR,
which is significantly more maintainable and demonstrates correctness of
the type conversion patches.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Rather than using a dest_override, we upscale integers by using a half
field with a sign-extend bit. A variant of this trick should also work
for floats, but one step at a time!
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We have dedicated intrinsics to access the raw contents of the tile
buffer so we can use a dedicated NIR pass to lower appropriately for
blend shaders, rather than introducing a bizarre hardcoded blend
epilogue that only works for RGBA8_UNORM.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Oh, dear. No turning back now.
We begin implementing non-32-bit types, using downsizing integer type
conversions as the initial instructions. We implement them naively as
type-converting moves; substantially more efficient operation is
possible by copypropping the type conversion modifier, but this
optimization is not implemented here.
Size converting modifiers on Midgard allow an instruction to write to a
destination 1/2 the size, or to read from a source 1/2 the size. If we
need an extreme conversion (32-bit to 8-bit, for instance), multiple
type converting ops are chained together, which here is handled via an
algebraic pass.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This begins the process of removing blend shader specific MIR into a
more general NIR lowering pass for formats.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Eventually, this will allow packing clear colours for all formats,
including floating-point framebuffers, pure integer buffers, and special
formats. Currently, a few of these formats are supported, and many more
are handled through a generic Gallium colour packing path (which is not
a perfect fit for the hardware, but works for many formats and is a sane
default for the moment.)
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We see the hardware doesn't actually support float framebuffers in the
native sense -- rather, it just allows higher bpp framebuffers and lets
a blend shader / additional clear_color fields sort out the formats.
This will be.. interesting.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Full MRT support is a while away, but in the mean time, we can remove
code that explicitly assumes nr_cbufs <= 0, to minimize the obstacles
we'll face later when we add the whole thing.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
According to the spec [1], `__egl_Main` is the only symbol that needs to
be exported. We don't want applications directly linking against
libEGL_mesa.so (apps should always go through libEGL.so, regardless of
who is providing it), so we shouldn't export any other symbols either.
[1] https://github.com/NVIDIA/libglvnd/blob/master/include/glvnd/libeglabi.h
(this header is the closest there is to a spec)
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Part of the effort to replace shell scripts with portable python scripts.
I could've used a trivial `assert lines == sorted(lines)`, but this way
the caller is shown which entrypoint is out of order.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by Dylan Baker <dylan@pnwbakers.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Note: the list in gbm-symbols.txt is the same as the one that was in
gbm-symbols-check, I just took the opportunity to sort it.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by Dylan Baker <dylan@pnwbakers.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
I've re-written this in bash a couple times over the years, and then
I realised python is much more portable and already required by Mesa, so
we might as well make use of it.
I decided to still use the build system's NM instead of re-implementing
symbols extraction, to offload the complexity of keeping it compatible
with many systems (Linux, Unix, BSD, MacOS, etc.), especially when
cross-building.
This new script checks not only that nothing is exported when it
shouldn't be, but also that everything that should be exported is.
Sometimes, some symbols _can_ be exported but don't have to be, in which
case they can be prefixed with `(optional)`.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by Dylan Baker <dylan@pnwbakers.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Drivers only use lower_io for modes where pointers don't have a
meaningful value, and dereferences can always be traced back to a
variable. But there can be other modes, like global mode with
VK_EXT_buffer_device_address, where pointers cannot be traced back to a
variable, and lower_io would segfault on loads/stores of these since
nir_deref_instr_get_variable() would return NULL.
Just use the mode on the deref itself to filter out these modes before
we try to get the variable.
Fixes: 118a66df99 ("radv: Use NIR barycentric coordinates")
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
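A hedged sketch of the filter: check the deref's own mode against the modes
being lowered before trying to chase it back to a variable (field and helper
names per NIR of this era):

#include "nir.h"

/* Loads/stores of e.g. VK_EXT_buffer_device_address global pointers
 * can't be traced back to a nir_variable, so filter on the deref mode
 * first instead of letting nir_deref_instr_get_variable() return NULL
 * and crashing downstream. */
static bool
example_should_lower(nir_deref_instr *deref, nir_variable_mode modes)
{
   if (!(deref->mode & modes))
      return false;

   return nir_deref_instr_get_variable(deref) != NULL;
}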
Currently this is done rather late in radv, after lowering booleans, so
it isn't safe to run additional optimizations that may add e.g. 1-bit
booleans. We could move the lowering parts earlier, but since right now
we only lower FS inputs and by this point all indirects have been
lowered away, there's no reason we should need to optimize anything.
One shader from Devil May Cry 5 was getting optimized, but only because
the optimization loop was working on 32-bit booleans which revealed an
opportunity that was hidden with 1-bit booleans, and we generated a
1-bit boolean which is invalid.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111092
Fixes: 118a66df99
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Fix the following building error:
external/mesa/src/amd/addrlib/src/gfx10/gfx10addrlib.cpp:35:10:
fatal error: 'gfx10_gb_reg.h' file not found
^~~~~~~~~~~~~~~~
1 error generated.
Fixes: 78cdf9a ("amd/addrlib: add gfx10 support")
Signed-off-by: Mauro Rossi <issor.oruam@gmail.com>
Acked-by: Marek Olšák <marek.olsak@amd.com>
The necessary Android makefile building rules are added
and the generation rules are simplified for readability.
Fixes the following building errors:
external/mesa/src/amd/common/ac_llvm_build.c:1496:45:
error: use of undeclared identifier 'V_008F0C_IMG_FORMAT_8_UINT'
case V_008F0C_BUF_DATA_FORMAT_8: format = V_008F0C_IMG_FORMAT_8_UINT; break;
^
Fixes: 74a26af ("amd/common/gfx10: add register JSON")
Signed-off-by: Mauro Rossi <issor.oruam@gmail.com>
Acked-by: Marek Olšák <marek.olsak@amd.com>
The gen_enum_to_str.py generates vk_enum_to_str.c and its header at once.
However, the makefiles incorrectly list both files in parallel with the
same recipes. That means the two files may be generated simultaneously
by two processes, and a file being generated may be truncated by the
other process, as shown below:
$ cd $OUT/obj/STATIC_LIBRARIES/libmesa_vulkan_util_intermediates/util
$ ls -l
-rw-rw-r-- 1 lh lh 193713 Jul 5 13:31 vk_enum_to_str.c
-rw-rw-r-- 1 lh lh 4609 Jul 5 13:31 vk_enum_to_str.d
-rw-rw-r-- 1 lh lh 0 Jul 5 16:21 vk_enum_to_str.h
Let one file depend on the other with an empty recipe to avoid the issue.
Signed-off-by: Chih-Wei Huang <cwhuang@linux.org.tw>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Add libmesa_nir to a common LOCAL_STATIC_LIBRARIES defined by
ANV_STATIC_LIBRARIES so that its include path can be imported
automatically. Then ANV_INCLUDES is unnecessary and could be
eliminated.
Signed-off-by: Chih-Wei Huang <cwhuang@linux.org.tw>
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Emil Velikov <emil.velikov@collabora.com>
The dummy library libmesa_anv_entrypoints is totally unnecessary.
The four VULKAN_GENERATED_FILES could be generated and built in
libmesa_vulkan_common directly. The libraries using the generated
headers should get them via the exported include path.
Signed-off-by: Chih-Wei Huang <cwhuang@linux.org.tw>
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Emil Velikov <emil.velikov@collabora.com>
The libmesa_git_sha1 is a dummy library. There is no reason to put
it into LOCAL_WHOLE_STATIC_LIBRARIES.
Move libmesa_vulkan_util to the vulkan.radv which really needs it.
Signed-off-by: Chih-Wei Huang <cwhuang@linux.org.tw>
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Emil Velikov <emil.velikov@collabora.com>
The libmesa_anv_entrypoints and libmesa_genxml are dummy libraries.
There is no reason to put them into LOCAL_WHOLE_STATIC_LIBRARIES.
Move libmesa_vulkan_util to the vulkan HAL which really needs it.
Signed-off-by: Chih-Wei Huang <cwhuang@linux.org.tw>
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Emil Velikov <emil.velikov@collabora.com>
This commit re-plumbs all of nir_loop_analyze to use nir_ssa_scalar for
all intermediate values so that we can properly handle swizzles. Even
though if conditions are required to be scalars, they may still consume
swizzles so you could have ((a.yzw < b.zzx).xz && c.xx).y == 0 as your
loop termination condition. The old code would just bail the moment it
saw its first non-zero swizzle but we can now properly chase the scalar
from the if condition to all the way to a, b, and c.
Shader-db results on Kaby Lake:
total loops in shared programs: 4388 -> 4364 (-0.55%)
loops in affected programs: 29 -> 5 (-82.76%)
helped: 29
HURT: 5
Shader-db results on Haswell:
total loops in shared programs: 4370 -> 4373 (0.07%)
loops in affected programs: 2 -> 5 (150.00%)
helped: 2
HURT: 5
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
This commit reworks both get_induction_and_limit_vars() and
try_find_trip_count_vars_in_iand to return true on success and not
modify their output parameters on failure. This makes their callers
significantly simpler.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
There are various cases in which we want to chase SSA values through ALU
ops ranging from hand-written optimizations to back-end translation
code. In all these cases, it can be very tricky to do properly because
of swizzles. This set of helpers lets you easily work with a single
component of an SSA def and chase through ALU ops safely.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
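For illustration, a small use of the helpers as described: chase one
component of a value backwards through copies while honouring per-source
swizzles. The exact helper and field names added by this series are assumed
here, not verified:

#include "nir.h"

/* Follow a single component of `def` backwards through mov ALU ops,
 * picking the right source component at each step even when the mov
 * carries a swizzle. */
static nir_ssa_scalar
example_chase_component(nir_ssa_def *def, unsigned comp)
{
   nir_ssa_scalar s = { .def = def, .comp = comp };

   while (nir_ssa_scalar_is_alu(s) &&
          nir_ssa_scalar_alu_op(s) == nir_op_mov)
      s = nir_ssa_scalar_chase_alu_src(s, 0);

   return s;
}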
None of the current code knows what to do with swizzles. Take the safe
option for now and bail if we see one. This does have a small shader-db
impact but it is at least safe.
Shader-db results on Kaby Lake:
total loops in shared programs: 4364 -> 4388 (0.55%)
loops in affected programs: 5 -> 29 (480.00%)
helped: 5
HURT: 29
Shader-db results on Haswell:
total loops in shared programs: 4373 -> 4370 (-0.07%)
loops in affected programs: 5 -> 2 (-60.00%)
helped: 5
HURT: 2
Fixes: 6772a17acc "nir: Add a loop analysis pass"
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
The current code assumes everything is 32-bit which is very likely true
but not guaranteed by any means. Instead, use nir_eval_const_opcode to
do the calculations in a bit-size-agnostic way. We also use the new
constant constructors to build the correct size constants.
Fixes: 6772a17acc "nir: Add a loop analysis pass"
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
One issue was that the original version didn't check that swizzles
matched when comparing ALU instructions so it could end up matching
very different instructions. Using the nir_instrs_equal function from
nir_instr_set.c which we use for CSE should be much more reliable.
Another was that the loop assumes it will only run two iterations which
may not be true. If there's something which guarantees that this case
only happens for phis after ifs, it wasn't documented.
Fixes: 9e6b39e1d5 "nir: detect more induction variables"
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Now that we have the nir_const_value_as_* helpers, every one of these
functions is effectively the same except for the suffix they use so we
can easily define them with a repeated macro. This also means that
they're inline and the fact that the nir_src is being passed by-value
should no longer really hurt anything.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
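An illustrative version of defining such a family with a single repeated
macro, as described above; the in-tree header may spell the details slightly
differently:

#include <assert.h>
#include <stdbool.h>
#include "nir.h"

/* One macro stamps out the whole family; only the suffix and return
 * type differ between instantiations. */
#define EXAMPLE_DEFINE_SRC_AS(suffix, type)                          \
static inline type                                                   \
example_src_comp_as_##suffix(nir_src src, unsigned comp)             \
{                                                                    \
   assert(nir_src_is_const(src));                                    \
   return nir_const_value_as_##suffix(                               \
      nir_src_as_const_value(src)[comp], nir_src_bit_size(src));     \
}

EXAMPLE_DEFINE_SRC_AS(int, int64_t)
EXAMPLE_DEFINE_SRC_AS(uint, uint64_t)
EXAMPLE_DEFINE_SRC_AS(bool, bool)
EXAMPLE_DEFINE_SRC_AS(float, double)
#undef EXAMPLE_DEFINE_SRC_AS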
Now that virgl_transfer_queue_is_queued does not search
COMPLETED_LIST, we don't need to move transfers to that list.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Search only the pending list and return immediately on the first
hit.
When the transfer queue was introduced, the function was used to
deal with
write transfer -> draw -> write transfer
sequence. It was used to tell if the second transfer intersects
with the first transfer. If yes, the transfer queue avoided
reordering the second transfer to before the draw (by flushing) in
case the draw uses the transferred data.
With the recent changes to the transfer code, the function is used
to deal with
write transfer -> readback transfer
We want to avoid reordering the readback transfer to before the
first transfer (also by flushing).
In the old code, we needed to track the completed transfers as well
to avoid reordering. But in the new code, a readback transfer is
guaranteed to see the data from the completed transfers (in other
words, it cannot be reordered to before the already completed
transfers). We don't need to search the COMPLETED_LIST.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Rewrite the function and check z/depth more carefully. We
intentionally avoid u_box_test_intersection_2d because it returns
true when two boxes touch but do not intersect and can be confusing.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
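An illustrative version of the stricter test: touching boxes do not count as
intersecting, and the z/depth range is checked as well (the pipe_box fields
are the real ones; the function itself is a sketch, not the actual virgl
code):

#include <stdbool.h>
#include "pipe/p_state.h"

static bool
example_boxes_intersect(const struct pipe_box *a, const struct pipe_box *b)
{
   /* Strict inequalities: if the boxes merely touch, no texel overlaps,
    * so no flush is required. */
   return a->x < b->x + b->width  && b->x < a->x + a->width  &&
          a->y < b->y + b->height && b->y < a->y + a->height &&
          a->z < b->z + b->depth  && b->z < a->z + a->depth;
}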
The highest used index determines the stride for shader outputs in shaders
that use LDS or memory for outputs.
Acked-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Acked-by: Dave Airlie <airlied@redhat.com>
This can decrease LDS and/or memory usage for shader outputs when geometry
shaders or tessellation is used.
Only PS inputs support higher indices and those aren't eliminated by
kill_outputs.
Acked-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Acked-by: Dave Airlie <airlied@redhat.com>
- don't pass it via a parameter if it can be derived from other parameters
- set shader_type for ac_rtld_open
- use enum pipe_shader_type instead of unsigned
Acked-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Acked-by: Dave Airlie <airlied@redhat.com>
Useful for formats that would work with the same driver code path as
RGBA8 UNORM but that don't meet the util_format_is_rgba8_variant
criteria due to a smaller channel count.
v2: Use simpler logic (suggested by Iago).
v3: Fix spelling error. boolean->bool (thank you airlied).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
This gives more flexibility than the normal store_deref/store_output
versions (particularly, it allows us to abuse the type system in awful
ways, which is necessary for efficient format conversion in blend
shaders.)
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Acked-by: Karol Herbst <kherbst@redhat.com>
Fix si_vid_is_format_supported to expose support
for 10-bit VP9 decode using P016 format. Without
this change, 10-bit decode will be exposed only
for HEVC even though newer hardware supports
10-bit decode for VP9.
Signed-off-by: Pratik Vishwakarma <Pratik.Vishwakarma@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
We already have nir_imm_float16 and nir_imm_vec4; let's add the ability
to easily make immediate fp16 vectors as well, now that fp16 support is
maturing in NIR/GLSL.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
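A hedged sketch of such a builder in the style of nir_imm_vec4, built on
nir_const_value_for_float and nir_build_imm; the exact name this patch adds
is assumed, not taken from the source:

#include "nir_builder.h"

static inline nir_ssa_def *
example_imm_f16vec4(nir_builder *b, float x, float y, float z, float w)
{
   /* Pack each component as a 16-bit float constant, then emit a single
    * 4-component, 16-bit load_const. */
   const nir_const_value v[4] = {
      nir_const_value_for_float(x, 16),
      nir_const_value_for_float(y, 16),
      nir_const_value_for_float(z, 16),
      nir_const_value_for_float(w, 16),
   };

   return nir_build_imm(b, 4, 16, v);
}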
This reverts commit 63e0675d98.
The GS is merged with the preceding shader and since the preceding
shader will have as_ngg set the final binary will have is_ngg set.
So we do not need the gs key here.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Until now we were just asking for entries in the bo hash table, not
worrying if the handle was NULL, as we were just expecting to get NULL
in return. It seems that now the hash table asserts on some reserved
pointers, including NULL. This commit just returns early when the
handle is 0.
This change fixes several crashes on vk-gl-cts GLES tests when using
the v3d simulator, like:
KHR-GLES3.core.internalformat.copy_tex_image.*
Reviewed-by: Eric Anholt <eric@anholt.net>
We should use the primitives output by the TES in that case.
There is always a separate TES if there is no GS.
Reviewed-by: Dave Airlie <airlied@redhat.com>
These seem to have been radeonsi-specific callbacks that are no
longer needed now that these drivers no longer share this code
path.
These callbacks were removed from radeonsi in c0d44fe0e9.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
It is legal to call vkFreeCommandBuffers() on NULL command buffers.
This fix requires eb41ce1b01 ("util/hash_table: Properly handle
the NULL key in hash_table_u64").
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 4438188f49 ("vulkan/overlay: record stats in command buffers and accumulate on exec/submit")
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Set the absolute minimum possible GLSL version. API_OPENGL_CORE can
mean an OpenGL 3.0 forward-compatible context, so that implies a minimum
possible version of 1.30. Otherwise, the minimum possible version is 1.20.
Since Mesa unconditionally advertises GL_ARB_shading_language_100 and
GL_ARB_shader_objects, every driver has GLSL 1.20... even if they don't
advertise any extensions to enable any shader stages (e.g.,
GL_ARB_vertex_shader).
Converts about 2,500 piglit tests from crash to skip on NV18.
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109524
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110955
Cc: mesa-stable@lists.freedesktop.org
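In effect, a sketch of the version floor described above (the surrounding
consts bookkeeping is omitted; only the API check and the two GLSL versions
are taken from the description):

#include "main/mtypes.h"

/* API_OPENGL_CORE can mean a 3.0 forward-compatible context, so GLSL
 * 1.30 is the floor there; everywhere else GL_ARB_shading_language_100
 * and GL_ARB_shader_objects guarantee at least GLSL 1.20. */
static unsigned
example_min_glsl_version(const struct gl_context *ctx)
{
   return ctx->API == API_OPENGL_CORE ? 130 : 120;
}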
This value is supported since gen7. See also 8514c75a26 "i965: Set
compute shader shared memory max to 64k".
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Effectively unused since dd7135d55d ("intel/compiler: Use the flrp
lowering pass for all stages on Gen4 and Gen5"). I had intended to
remove this code as part of that series, but I forgot.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Existing users only operate on instructions with SSA destinations. Some
later patches add new direct calls and indirect calls (via existing NIR
functions) on instructions after going out of SSA. At the very least,
these calls are added by:
intel/vec4: Try to emit a VF source in try_immediate_source
intel/vec4: Try to emit a single load for multiple 3-src instruction operands
The first commit adds direct calls, and the second adds calls via
nir_alu_srcs_equal and nir_alu_srcs_negative_equal.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Matt Turner <mattst88@gmail.com>
When I added this function, I was not sure if swizzles of immediate
values were a thing that occurred in NIR. The only existing user of
these functions is the partial redundancy elimination for compares.
Since comparison instructions are inherently scalar, this does not
occur.
However, a couple later patches, "nir/algebraic: Recognize
open-coded flrp(-1, 1, a) and flrp(1, -1, a)" combined with "intel/vec4:
Try to emit a single load for multiple 3-src instruction operands",
collaborate to create a few thousand instances.
No shader-db changes on any Intel platform.
v2: Handle the swizzle in nir_alu_srcs_negative_equal and leave
nir_const_value_negative_equal unchanged. Suggested by Jason.
v3: Correctly handle write masks. Add note (and assertion) that the
caller is responsible for various compatibility checks. The single
existing caller only calls this for combinations of scalar fadd and
float comparison instructions, so all of the requirements are met. A
later patch (intel/vec4: Try to emit a single load for multiple 3-src
instruction operands) will call this for sources of the same
instruction, so all of the requirements are met.
v4: Add unit test for nir_opt_comparison_pre that is fixed by this
commit.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Previously, an instruction like
mul(8) vgrf29.xy:F, vgrf25.yxxx:F, [-1F, 1F, 0F, 0F]
would get rewritten as
mul(8) vgrf0.yz:F, vgrf25.yyxx:F, [-1F, 1F, 0F, 0F]
The latter does not produce the correct result. The VF immediate in the
second should be either [-1F, -1F, 1F, 1F] or [0F, -1F, 1F, 0F]. This
commit produces the former.
Fixes: 1ee1d8ab46 ("i965/vec4: Reswizzle sources when necessary.")
Reviewed-by: Matt Turner <mattst88@gmail.com>
Each tests has a comment with the expected before and after NIR. The
tests don't actually check this. The tests only check whether or not
the optimization pass reported progress. I couldn't think of a robust,
future-proof way to check the before and after code.
Reviewed-by: Matt Turner <mattst88@gmail.com>
set bit15 (Disable Repacking for Compression) of CACHE_MODE_0 register
if the gen attribute, 'disable_ccs_repack' is set.
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
set bit15 (Disable Repacking for Compression) of CACHE_MODE_0 register
if the gen attribute, 'disable_ccs_repack' is set.
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
set bit15 (Disable Repacking for Compression) of CACHE_MODE_0 register
if the gen attribute, 'disable_ccs_repack' is set.
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
add a new attribute, 'disable_ccs_repack', to gen_device_info, which
indicates whether repacking of components in certain pixel formats
before compression needs to be disabled to stay compatible with the
decompression capability of the display controller (gen11+)
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
correct the bit field information for the CACHE_MODE_0 reg in the current gen11.xml
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
The "demote" intrinsic works like "discard" but don't change the
control flow, allowing derivative operations to work. This is the
semantics of D3D discard.
The "is_helper_invocation" intrinsic will return true for helper
invocations -- both the ones that started as helpers and the ones that
where demoted. This is needed to avoid changing the behavior of
gl_HelperInvocation which is an input (so not expected to change
during shader execution).
v2: Emit the discard jump and comment why it is safe. (Jason)
Rework the is_helper_invocation() that was stomping f0.1. (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
From SPV_EXT_demote_to_helper_invocation. Demote will be implemented
as a variant of discard, so mark uses_discard if it is used.
v2: Add CAN_ELIMINATE flag to the new intrinsic. (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This is simpler than radv, since the driver_location is already assigned
for us.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
We always get gl_FragCoord as a system value, not a varying, so this is
never hit. We already set PIXEL_CENTER_INTEGER elsewhere.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This is nice to have with radeonsi, where color varyings are handled
specially to avoid recompiles.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
We have to add a few lowering passes to deal with things that used to be
dealt with inline when creating inputs. We also move the code that fills out
the radv_shader_variant_info struct for linking purposes to
radv_shader.c, as it's no longer tied to the NIR->LLVM lowering.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Pretty much every driver using nir_lower_io_to_temporaries followed by
nir_lower_io is going to want this. In particular, radv and radeonsi in
the next commits.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
These weren't properly supported. This does pretty much the same thing
that the radv code did.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Right now nir_copy_prop_vars is effectively undoing
nir_lower_io_to_temporaries for inputs by propagating the original
variable through the copy created in lower_io_to_temporaries. A
theoretical variable coalescing pass would have the same issue with
output variables, although that doesn't exist yet. To fix this, add a
new bit to nir_variable, and disable copy propagation when it's set.
This doesn't seem to affect any drivers now, probably since no one
uses lower_io_to_temporaries for inputs as well as copy_prop_vars, but
it will fix radv once we flip on lower_io_to_temporaries for fs inputs.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
It was double-counting cases where multiple variables were assigned to
the same slot, and not handling the case where the last variable is a
compact variable.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
These are used in Vulkan for clip/cull distances, instead of the GLSL
lowering when the clip/cull arrays are shared.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
It isn't really doing anything Gallium-specific, and it's needed for
handling component packing, overlapping, etc.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
load_fragcoord is already handled in common code for radeonsi, so we
don't need to do anything to handle it. However, there were some passes
creating NIR with the varying, so we switch them over to the sysval. In
the case of nir_lower_input_attachments which is used by both radv and
anv, we add handling for both until intel switches to using a sysval.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
On AMD, FragCoord should be a sysval because it is handled separately
from all the other inputs. We were already doing this in radeonsi, but
we weren't doing it with radv. It'll be much more annoying to handle
VARYING_SLOT_POS in fragment shaders when we let NIR lower FS inputs for
us, so here we add an option so that radv can get it as a system value.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This was done for all previous GPUs.
This fixes Talos Principle launch hangs.
Fixes: 7e43022e8c (radv/gfx10: add gfx10_cs_emit_cache_flush)
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
GFX10 has two rings, so UMR wants to know which one to halt.
Select the first one by default.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
In merged shaders we put a big if around each shader, so both stages
can have a different number of threads. However, the NGG output code
still needs to run if the first shader is not executed.
This can happen when there are more gs threads than vs/es threads, or
when there are 0 es/vs threads (why? no clue).
Fixes: ee21bd7440 "radv/gfx10: implement NGG support (VS only)"
Reviewed-by: Dave Airlie <airlied@redhat.com>
It was reported as unsupported previously. It should be importable
and is compatible with itself.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Fixes: 69cc6272fb ("anv: Implement VK_EXT_external_memory_host")
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This needs to be cleaned up a bit, and it probably contains
missing stuff and/or bugs.
This doesn't fix the "half of the triangles" issue.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Each gfx10 shader engine corresponds to two gfx9 shader engines, so scale
the number of offchip buffers accordingly.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The cache flush logic on GFX10 is quite different and it's
implemented with a new function.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
DCC alignment can be less than the alignment of the main surface. In that
case, the DCC tile swizzle needs to be masked accordingly. Should have no
impact on pre-gfx10.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Begin/Reset of command buffer both reset the content of the command
buffer. Don't forget to wipe them on Begin.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 4438188f49 ("vulkan/overlay: record stats in command buffers and accumulate on exec/submit")
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This add support for setting shader buffers and passing them
to draw or binding them to the fragment shader jit.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
This adds load, store and atomic operations. These operations
have to respect the exec_mask, and can't operate in lanes where
the execute is off. This is needed to avoid side effects seen
outside the shaders.
There is also bounds checking on the ssbo accesses vs the size
ptr.
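As a scalar model of those semantics (illustrative only; the names are
hypothetical and the real code emits SIMD code in the JIT), the behavior
is roughly:

    #include <stdint.h>

    /* A lane only performs the SSBO store when its exec bit is set and the
     * access is within the buffer size; inactive or out-of-bounds lanes
     * produce no side effects. */
    static void
    masked_ssbo_store(uint32_t *ssbo, uint32_t ssbo_size_bytes,
                      const uint32_t *byte_offsets, const uint32_t *values,
                      uint32_t exec_mask, int num_lanes)
    {
       for (int lane = 0; lane < num_lanes; lane++) {
          if (!(exec_mask & (1u << lane)))
             continue;                               /* lane disabled */
          if (byte_offsets[lane] + 4 > ssbo_size_bytes)
             continue;                               /* out of bounds */
          ssbo[byte_offsets[lane] / 4] = values[lane];
       }
    }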
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
As suggested by Roland this is just a compare of fetch_max
vs the counter, much simpler than my original spaghetti code.
We require the vertex shader to have an exec mask to get proper
ssbo/image load/store/atomics semantics.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Since the transition to virgl_resource_transfer_map(), several
previously public virgl_resource functions are not required to be public
anymore.
We also move the functions earlier in the file so they can be used
without forward declarations.
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
Normal mapping of buffers and textures uses almost identical logic.
This commit extracts this logic in the form of the
virgl_resource_transfer_map() helper function.
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
Vivante GPUs have lossless buffer compression using the tile-status bits,
which can reduce memory access and thus improve performance.
This patch only enables compression for "V4" compression GPUs, but the
implementation is tested on GC2000(V1) and GC3000(V2). V1/V2 compression
looks absolutely useless, so it is not enabled.
I couldn't test if this patch breaks MSAA, because it looks like MSAA is
already broken.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
This mirrors the change in blt. RS cares about this for msaa/compression.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Both translate the same thing, so just add the missing cases into one.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
BLT engine uses all ones to clear TS, set ts_clear_value to match that.
Note: ts_clear_value is never used with BLT engine.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Since we have "ts_valid" to avoid using uncleared ts, this memset serves
no purpose. Also it is broken because it doesn't use cpu_prep/cpu_fini.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
GC7000L has a TS mode with larger tiles, which improves performance.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
The size of the TS is screen->specs.bits_per_tile bits per tile, with each
tile being 64 bytes of the resource.
This gives the same result for 32bpp formats, but halves the size of the
TS for 16bpp formats.
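A rough sketch of that size relation (hypothetical helper; the real code
derives the sizes from the resource layout):

    #include <stddef.h>

    /* Each 64-byte tile of the resource is covered by bits_per_tile bits of
     * TS, rounded up to whole bytes. */
    static size_t
    ts_size_bytes(size_t resource_size, unsigned bits_per_tile)
    {
       size_t tiles = (resource_size + 63) / 64;
       return (tiles * bits_per_tile + 7) / 8;
    }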
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Back when autotools and scons were the two build systems, it kinda made
sense to call scons "not autoconf", but autoconf's been gone for a while
now and other build systems have been added (android.mk and meson), so
the name really doesn't make any sense anymore.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This simplifies a bunch of stuff by
(1) Keeping all the things in a single allocation, making things easier
for the cache.
(2) Creating a shader_variant creation helper.
This is immediately put to use by creating rtld shader binaries. This
is the main reason for the binaries, as we need to do the linking at
upload time, i.e. post caching. We do not enable rtld yet.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Some headers were not dragged in the last update(s).
Fixes: 465ec0b145 ("vulkan: Update the XML and headers to 1.1.113")
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Fixes the following build problem:
Message: libdrm 2.4.99 needed because amdgpu has the highest requirement
Dependency libdrm_intel found: NO found '2.4.97' but need: '>=2.4.99'
Dependency libdrm_intel found: NO
meson.build:1178:4: ERROR: Invalid version of dependency, need 'libdrm_intel' ['>=2.4.99'] found '2.4.97'.
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
The scissor state -is- set up, but the scissor test is not enabled. This
can prevent certain optimizations from occurring on tilers where
unaffected tiles are thrown out entirely.
v2: Only enable scissor test if the scissor test is actually set by the
app, to avoid regressing quad-based clears used for other reasons (like
a color mask).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
LLVM doesn't insert s_waitcnt_vscnt before GS_DONE.
There was also a crash in the legacy GS copy shader.
Acked-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
We need to tell PA to accept edge flags generated by the input assembler,
because decomposed primitives shouldn't draw inner edges.
Acked-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The NGG hardware pipeline doesn't track these statistics automatically,
and in fact *cannot* track them automatically when API geometry shaders
are involved, so we accumulate statistics in the shader using atomic
adds.
This implementation accumulates statistics via the memory system and
the RW buffer descriptor setup. We could use GDS, but since these
atomics aren't latency-sensitive, that basically just trades off
L2$ bandwidth vs. export bus bandwidth. One single memory transaction
per shader workgroup doesn't seem too bad. The result ring buffer in
memory is needed either way to avoid pipeline stalls.
The shader code contains the atomic unconditionally, though the
GFX10_GS_QUERY_BUF is a null buffer when no queries are active. The
atomic is simply discarded by the shader hardware in that case.
Acked-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Yes, really. Note that non-format buffer loads are unaffected and work
just fine with unaligned pointers (as long as SH_MEM_CONFIG is setup
correctly, which amdgpu ensures).
Fixes e.g. KHR-GL45.vertex_attrib_64bit.vao
Acked-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Each gfx10 shader engine corresponds to two gfx9 shader engines, so scale
the number of offchip buffers accordingly.
Acked-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
DCC alignment can be less than the alignment of the main surface. In that
case, the DCC tile swizzle needs to be masked accordingly. Should have no
impact on pre-gfx10.
Acked-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
MSAA is only supported for 64KB_{R,Z}_X modes, so the micro tile
optimization that we use on gfx9 and earlier does not work.
Be very explicit about how the swizzle mode of the temporary surface is
selected.
Acked-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
With NGG, the VGT_GS_OUT_PRIM_TYPE can change without a shader change.
The VS_STATE is required for both streamout and culling from a vertex
shader without pre-compiling outprim-specific variants.
We could consider compiling specialized variants in the future. We
could also consider compiling the NGG logic as an epilog.
Acked-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
For pipelines without API GS. We will later expand this to cover NGG
geometry shaders as well.
Note that the vtx offset passed into the GS part is just the
vertex index multiplied by VGT_ESGS_RING_ITEMSIZE.
Acked-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This does not support geometry shading yet. Also missing are streamout
and NGG-specific optimizations.
Acked-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Also add the shader main part NGG variant, so that in principle
we can switch between legacy and NGG modes.
Acked-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
We always use NGG by default, except when tessellation is enabled with
extreme geometry shader amplification.
Acked-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The introduction of GCR_CNTL makes cache flush handling on gfx10
sufficiently different that it makes sense to just use a separate
function.
Since emit_cache_flush is called quite early during context init,
we initialize the pointer explicitly in si_create_context.
Acked-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
VCN 2.0 uses direct register space where VCN 1.0 uses some indirect registers
Signed-off-by: Boyuan Zhang <boyuan.zhang@amd.com>
Acked-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
From VCN 2.0 on, the RBC has a different view of the registers
Signed-off-by: Leo Liu <leo.liu@amd.com>
(v2: rebase -- Nicolai)
Acked-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This mainly removes and simplifies code that is no longer needed.
There were some issues with the DB->CB stencil copy on gfx10, so let's
just use a fragment shader blit for all ZS mappings. It's more reliable.
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
This is for drivers that can't map depth and stencil and need to blit
them to a color texture for CPU access.
This is also useful for drivers using separate depth and stencil.
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
- merge all 3 functions (Z, S, ZS)
- don't write the color output
- read the value from texel.x, then write it to position.z or stencil.y
(don't use the value from texel.y or texel.z)
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
gl_SampleMaskIn is 1 when R_028BE0_PA_SC_AA_CONFIG is 0, so this commit
reworks the conditions controlling this register.
Before it was set if the sctx->framebuffer had a sample count > 1.
Now we still require this condition, but we also need either:
- GL_MULTISAMPLE to be enabled
- to be executing an operation that doesn't depend on GL state, using u_blitter.
This fixes the arb_sample_shading/sample_mask piglit tests on radeonsi.
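Expressed as a condition (a sketch with hypothetical names, not the actual
state-tracker code), the new logic is roughly:

    #include <stdbool.h>

    /* Enable the MSAA config only when the framebuffer is multisampled and
     * either GL_MULTISAMPLE is on or we are running a u_blitter operation
     * that must not depend on GL state. */
    static bool
    enable_msaa_config(unsigned fb_samples, bool gl_multisample_enabled,
                       bool u_blitter_running)
    {
       return fb_samples > 1 &&
              (gl_multisample_enabled || u_blitter_running);
    }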
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
If we're doing a Z -> Z MSAA blit (for example) we need to enable
msaa rasterization when drawing the quads so that we can properly
write the per-sample values.
This fixes a number of Piglit ext_framebuffer_multisample blit tests
such as ext_framebuffer_multisample/no-color 2 depth combined with
the VMware driver.
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
If we are discarding the whole resource, we don't care about previous contents,
and the resource storage is now unused, either because we have created new
resource storage, or because we have waited for the existing resource storage
to become unused, or because the transfer is unsynchronized.
In the last two cases this commit marks the storage as uninitialized, but only
if the resource is not host writable (in which case we can't clear the valid
range, since that would result in missed readbacks in future transfers).
In the first case, when the whole resource discard involves a reallocation, the
reallocation and subsequent rebinding already update the valid buffer range
appropriately.
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
The rasterizer core supported ARB_viewport_array,
but the swr layer connecting core to Gallium state
tracker only allowed one viewport.
We add support for multiple viewports to swr layer.
Reviewed-by: Alok Hota <alok.hota@intel.com>
Basically same as external for now.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
The only case we might need to handle differently in the near future
is Raven's case of displayable DCC, which is not renderable. But
we don't support that yet.
Getting a DMA-buf fd and converting that to a handle using our duplicate
of that file descriptor (getting at which requires passing a
radeon_winsys pointer to the buffer_get_handle hook) makes sure of this,
since duplicated file descriptors reference the same file description
and therefore the same GEM handle namespace.
This is necessary because libdrm_amdgpu may use a different DRM file
descriptor with a separate handle namespace internally, e.g. because it
always reuses any existing amdgpu_device_handle for the same device.
amdgpu_bo_export returns a handle which is valid for that internal
file descriptor.
Bugzilla: https://bugs.freedesktop.org/110903
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Tested-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
It extends pipe_screen / radeon_winsys and references amdgpu_winsys.
Multiple amdgpu_screen_winsys instances may reference the same
amdgpu_winsys instance, which corresponds to an amdgpu_device_handle.
The purpose of amdgpu_screen_winsys is to keep a duplicate of the DRM
file descriptor passed to amdgpu_winsys_create, which will be needed
in the next change.
v2:
* Add comment in amdgpu_winsys_unref explaining why it always returns
true (Marek Olšák)
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Tested-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Cleanup to prevent breakage with the next change, no functional change
intended in this one.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Tested-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Do not use the view format when filling the surface state.
Fixes dEQP-VK.image.texel_view_compatible.compute.extended.texture.*
Fixes: fb1350c76f ("intel: Add and use helpers for level0 extent")
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
this can fail unexpectedly due to bugs, so it's good to provide feedback
when this occurs
Reviewed-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
From OpPtrAccessChain description in the SPIR-V spec (1.4 rev 1):
For objects in the Uniform, StorageBuffer, or PushConstant storage
classes, the element’s address or location is calculated using a
stride, which will be the Base-type’s Array Stride when the Base
type is decorated with ArrayStride. For all other objects, the
implementation will calculate the element’s address or location.
For non-CL shaders the driver should lay out the Workgroup storage
class, so override any explicitly set ArrayStride in the shader. This
currently fixes only the lower_workgroup_access_to_offsets case, which
is used by anv.
Reviewed-by: Juan A. Suarez <jasuarez@igalia.com>
If they never get used, dead code should clean them up. Also, we rework
the at_offset and at_sample intrinsics so they return a proper vec2
instead of returning things in PLN layout. Fortunately, copy-prop is
pretty good at cleaning this up and it doesn't result in any actual
extra MOVs.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Instead of manually adding the BOs from the various SLAB pools plus
the one backing the color FB, we insert them in the BO set attached
to the job and let panfrost_drm_submit_job() pass all BOs from this set
to the SUBMIT ioctl.
This means we are now passing all referenced BOs and letting the
scheduler wait on referenced BO fences if needed.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
There's no point duplicating the code, and it will help us simplify
the bo_handles[] filling logic in panfrost_drm_submit_job().
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
To avoid the panfrost_memory <-> panfrost_bo dance done in
panfrost_resource_create_bo() and panfrost_bo_unreference().
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
So we can re-use it for the panfrost_drm_create_bo() function we are
about to introduce.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Let's keep a clear split between ioctl wrappers and the rest of the
driver. All the import BO function needs is a dmabuf FD and the screen
object, and the export one should only take care of generating a dmabuf
FD out of a BO object. Winsys handle manipulation should stay in the
resource.c file.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
That's what most (all?) implementations seem to do, and my understanding
is that a BO is just a bunch of memory that can be used for anything GPU
related, not only texture/FB resources.
Let's move that metadata into panfrost_resource so we can use
panfrost_bo for all kind of memory allocation and make BO allocation
more consistent.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
panfrost_drm_submit_job() and panfrost_fence_create() are not used
outside of pan_drm.c.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
bo->imported was never set to true which means this path was never taken.
Moreover, panfrost_drm_free_imported_bo() is missing the munmap()
call, which seems wrong because the import BO function calls mmap().
Let's just kill this function along with the ->imported field.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Commit 5f81669d88 ("panfrost: Remove the panfrost_driver abstraction")
left a few things behind, remove them now.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Otherwise we get random use-after-{free,unmap} errors.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Changes in v2:
- Move the panfrost_job_add_bo() call out of the loop
It's currently only enabled if dcc_slice_size is equal to
dcc_slice_fast_clear_size because the driver assumes that
portions of multiple layers are contiguous but it's not
always true.
Still not supported on GFX9.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This will help for clearing DCC arrays because we need to know
the subresource range.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Addrlib doesn't provide this info. Because DCC is linear, at least
on GFX8, it's easy to compute the size of one slice.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
There will unfortunately be circumstances where we cannot re-use a
virtual memory address until it's no longer active on the GPU. To
facilitate this, we instead move BOs to a "dead" list, and defer
closing them and returning their VMA until they are idle. We
periodically sweep these away in cleanup_bo_cache, which triggers
every time a new object's refcount hits zero.
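A rough model of that scheme (all types and helpers below are hypothetical,
not the actual bufmgr code):

    #include <stdbool.h>

    struct dead_bo {
       struct dead_bo *next;
       /* ... address range, GEM handle, fence ... */
    };

    struct bufmgr {
       struct dead_bo *dead_list;
    };

    /* Stand-ins for the real idle check, VMA release and GEM close. */
    bool bo_is_idle(const struct dead_bo *bo);
    void vma_free(struct bufmgr *bufmgr, struct dead_bo *bo);
    void bo_close(struct bufmgr *bufmgr, struct dead_bo *bo);

    /* Called from the cache cleanup path: free dead BOs that have gone
     * idle and return their VMA; keep the still-busy ones for later. */
    static void
    sweep_dead_bos(struct bufmgr *bufmgr)
    {
       struct dead_bo **prev = &bufmgr->dead_list;
       while (*prev) {
          struct dead_bo *bo = *prev;
          if (bo_is_idle(bo)) {
             *prev = bo->next;        /* unlink from the dead list */
             vma_free(bufmgr, bo);    /* VMA may be reused from now on */
             bo_close(bufmgr, bo);
          } else {
             prev = &bo->next;
          }
       }
    }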
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Tested-by: Jordan Justen <jordan.l.justen@intel.com>
In the future, some images will need to be aligned to a larger value
than 4096. Most buffers, however, don't have any such requirement,
so for now we only add the parameter to iris_bo_alloc_tiled() and
leave the others with the simpler interface.
v2: Fix missing alignment in vma_alloc, caught by Caio!
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Tested-by: Jordan Justen <jordan.l.justen@intel.com>
Generally, we achieve this by skipping the flush on calls to
v3d_flush_jobs_writing_resource() when we detect that the resource is written
in the current job from a transform feedback write.
The exception to this is the case where the caller is about to map the
resource, in which case we need to flush immediately since we can only emit
'Wait for transform feedback' commands on rendering jobs. We add a parameter
to the function so the caller can identify that scenario.
Reviewed-by: Eric Anholt <eric@anholt.net>
The hardware can flush transform feedback writes before reads in the same
job by inserting this command.
This patch detects when the rendering state for the current draw call reads
resources that had been previously written by transform feedback in the
same job and inserts the 'Wait for transform feedback' command before
emitting the new draw.
v2 (Eric):
- this was intended to look at job->tf_write_prscs for TF jobs.
- clear job->tf_write_prscs after we emit the TF flush.
- can skip flushes for fragment shader reads from TF.
v3 (Eric):
- all resources in job->tf_write_prscs are resources written by TF so
we don't need to check if they are bound to PIPE_BIND_STREAM_OUTPUT.
- documented optimization opportunity for geometry stages.
Reviewed-by: Eric Anholt <eric@anholt.net>
The hardware provides a feature to sync reads from previous transform feedback
writes in the same job so if we use this mechanism we no longer have to flush
the job.
In order to identify this scenario we need a mechanism to identify resources
that are written by transform feedback.
v2: use _mesa_pointer_set_create (Eric)
Reviewed-by: Eric Anholt <eric@anholt.net>
the dri image format here should match the fourcc format
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
the formats handled in the switch statement will always return an
unknown mesa format, so process them directly and leave the default
case for other/unknown formats
no functional changes
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
If our resource_copy_region size is a small number of DWords, then
instead of firing up BLORP, we can simply use MI_COPY_MEM_MEM (after
a CS stall). We also try to select the optimal batch.
Improves performance in Shadow of Mordor on Low settings at 1920x1080
on Skylake GT4e by 0.689096% +/- 0.473968% (n=4). It tries to copy
4 bytes of data to a buffer which was most recently used as a writable
compute shader SSBO. Previously we were switching from compute to the
render pipeline, then firing up all of blorp_buffer_copy...for 4 bytes.
I arbitrarily decided to support 4/8/12/16 bytes. Jason thinks this
is about the right threshold where it's cheaper to use MI_COPY_MEM_MEM.
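A sketch of the dispatch decision (hypothetical helper; the thresholds are
the ones described above):

    #include <stdbool.h>

    /* Only small, dword-sized, dword-aligned copies take the
     * MI_COPY_MEM_MEM path (one DWord per command); everything else
     * falls back to the BLORP buffer copy. */
    static bool
    use_mi_copy_mem_mem(unsigned size, unsigned src_offset,
                        unsigned dst_offset)
    {
       return size <= 16 &&            /* 4, 8, 12 or 16 bytes */
              size % 4 == 0 &&
              src_offset % 4 == 0 &&
              dst_offset % 4 == 0;
    }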
We can greatly simplify our builds by just hardcoding GLvector4f and
GLmatrix's layouts.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Most of these haven't been used since the conversion from checked-in
matypes to generation. By cutting down the generated contents, this
should clarify why the file is generated: we need
architecture-specific offsets to the V4F fields in the asm that uses
it.
v2: Keep matrix offsets to prevent x86 build breakage.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Currently only 7 formats are supported, but we don't want the 16 limit
(it's an `unsigned`) to hit us by surprise :]
Let's use bitset.h's BITSET magic to allow us to have any number of
formats, with a static assert to make sure we don't forget to update it.
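A rough sketch of the approach, assuming the BITSET_DECLARE/SET/TEST macros
from util/bitset.h; the count and names here are hypothetical:

    #include <stdbool.h>
    #include "util/bitset.h"

    /* Grow this (and the static assert in the real code) when new formats
     * are added. */
    #define NUM_SUPPORTED_FORMATS 7

    static BITSET_DECLARE(supported_formats, NUM_SUPPORTED_FORMATS);

    static void
    mark_format_supported(unsigned fmt_index)
    {
       BITSET_SET(supported_formats, fmt_index);
    }

    static bool
    format_is_supported(unsigned fmt_index)
    {
       return BITSET_TEST(supported_formats, fmt_index);
    }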
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
v2: 1) Drop changes for vec4 backend as on Gen11+ we don't support
align16 mode (Matt Turner)
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
We implement GLES3.0 instanced rendering with full support for instanced
arrays (via instance divisors). To do so, we use the new invocation
helpers to invoke a triplet of (1, vertex_count, instance_count), rather
than simply (1, vertex_count, 1). We rewrite the attribute handling code
into a new pan_instancing.c file which handles both the simple LINEAR
case for non-instanced as well as each of the new instancing cases:
MODULO (for per-vertex attributes), POT and NPOT divisors.
As a side effect, we rework how vertex buffers are handled, duplicating
them to be 1:1 with vertex descriptors to simplify instancing code paths
dramatically. This might be a performance regression, but this remains
to be seen; if so, we can always deduplicate later with some added logic
in pan_instancing.c
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Rather than open-coding workgroups_shift_* type fields, we include a
general routine for packing the vertex/tiler/compute descriptor based on
the provided dispatch parameters.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Rather than using a magic lookup table with no explanations, let's add
liberal comments to the code to explain what this tiling scheme is and
how to encode/decode it efficiently.
It's not so mysterious after all -- just reordering bits with some XORs
thrown in.
v2: Correct copyright identifier. Fix spelling error. Switch space_4 to
a LUT. Fix comment typo. Use LUT instead of space_x tricks. Fallback on
generic rather than split up unaligned writes.
v3: Correct stride order (fixes crash loading). Correct coordinate
system mishap.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Tested-by: Andreas Baierl <ichgeh@imkreisrum.de>
Fixes link failure since clang r364424 "[clang/DIVar] Emit the flag for
params that have unmodified value", clangCodeGen depends on
clangASTMatchers now.
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
When using ARB_gl_spirv, the block names are optional and the uniform
blocks are referred using Bindings instead. Teach
gl_nir_lower_buffers to handle those.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Among other things, it supports arrays of arrays of UBO/SSBO (default
codepath doesn't).
Acked-by: Timothy Arceri <tarceri@itsqueeze.com>
v2: nir_address_format_vk_index_offset got renamed to
nir_address_format_32bit_index_offset (after rebase against master)
v3: the ptr_type fields in spirv_to_nir_options got changed to be of
type nir_address_format.
v4: remove phys_ssbo_addr_format and push_const_addr_format as they are
not used by glspirv
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
When using a SPIR-V shader. Note that this needs to be done before linking
uniforms, so that when creating the uniform storage entries, block_index
can be filled in properly (among other things).
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
The function had a mix of true/GL_TRUE and false/GL_FALSE
returns. Use GL_TRUE/GL_FALSE since the function returns a GLboolean.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Until now, we were using the uniform's explicit location to check if the
current nir variable was already processed while adding entries to the
uniform storage. But for UBOs/SSBOs, entries are added too, and we lack
an explicit location.
For those we need to rely on the UBO/SSBO binding and the uniform
storage block_index. In that case several uniforms would need to be
updated at once.
v2: (from Timothy review)
* Improve wording and fix typos of some long comments.
* Rename update_uniform_storage to mark_stage_as_active
v3: (from cmarcelo review)
* Fixed some comment typos
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Specifically, offset, stride (coming from arrays or matrices) and
row_major.
In GLSL, most of that info is computed from the layout qualifier, but
in ARB_gl_spirv it is explicit and, for Mesa, included in the
glsl_type.
From ARB_gl_spirv spec:
"Mapping of layouts
std140/std430 -> explicit *Offset*, *ArrayStride*, and
*MatrixStride* Decoration on struct members"
"7.6.2.spv SPIR-V Uniform Offsets and Strides
The SPIR-V decorations *GLSLShared* or *GLSLPacked* must not be
used. A variable in the *Uniform* Storage Class decorated as a
*Block* must be explicitly laid out using the *Offset*,
*ArrayStride*, and *MatrixStride* decorations"
For offset we needed to include the parent and index_in_parent while
processing the type, as the offset is maintained on glsl_struct_field
of the parent type, not on the type itself.
v2: Fix the default values for MATRIX_STRIDE, ARRAY_STRIDE and
ROW_MAJOR when the variable is not backed by a buffer object
(Antia Puentes).
v3: Update after Jason series "SPIR-V: Use NIR deref instructions for
UBO/SSBO access" that included just one explicit stride, instead
of a previous patch we wrote that had matrix_stride and
array_stride (Alejandro)
Signed-off-by: Antia Puentes <apuentes@igalia.com>
Signed-off-by: Alejandro Piñeiro <apinheiro@igalia.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
For these interfaces, the inner members are added only once as uniforms
or resources, in contrast to other cases, like a uniform array of
structs.
For those wondering why issue (16) from ARB_program_interface_query
was used instead of a quote from the core spec: the core spec is not
really clear about how members of arrays of blocks should be
enumerated.
On GLSL this was also problematic, especially when we were trying to
pass the 4.5 CTS tests. See commit "glsl: Fix program interface
queries relating to interface blocks"
(4c4d9e4f03), as a reference. That one
also needed to rely on issue (16) to justify the change, pointing out
that the core spec needs to be clarified.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Adding the ability to link uniform blocks and shader storage blocks
using NIR, intended for ARB_gl_spirv support. Among other things, this
linking needs to take into account that everything should work without
names, as they might not be present, while the GLSL IR uniform block
linking was written with names at its core.
The other major difference compared with the GLSL IR linker is that we
don't deal with layouts. There are no references to std140, std430,
etc. Layouts are expressed through explicit offset, array stride and
matrix stride. That simplifies how the buffer sizes are computed. But it
also means that we couldn't use the existing methods in glsl_types, so
we needed to implement new ones.
It is worth noting that this linking does an iteration over the
glsl_types, similar to what the uniform linking does. A possible
future improvement would be to refactor both cases to try to share more
code than they share right now. In GLSL IR there is a class visitor,
specialized for each case, for that sharing. As adding a class visitor
in C would be more complicated, for now we just iterate in both.
Signed-off-by: Alejandro Piñeiro <apinheiro@igalia.com>
Signed-off-by: Neil Roberts <nroberts@igalia.com>
Signed-off-by: Antia Puentes <apuentes@igalia.com>
v2: (from Timothy review)
* Fix variable name convention
* Stop to use _function_name convention
* Don't use // for comments
* "nir/linker: Keep track of the stages referencing an UBO/SSBO"
squashed with this patch
v3: (from Caio review)
* Don't delete the linked shader on failure
* Use rzalloc_array to avoid some explicit initializations
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Helper used to know when a glsl_type is a leaf when iterating through
a complex type. Note that GLSL IR linking also uses the concept of
leaf while doing the same iteration, although in that case it uses a
visitor. See link_uniform_blocks, process_array_leaf and others as
reference.
Signed-off-by: Alejandro Piñeiro <apinheiro@igalia.com>
Signed-off-by: Antia Puentes <apuentes@igalia.com>
v2:
* Moved from gl_nir_linker to nir_types, so it could be used on nir
xfb gathering (Timothy Arceri)
* Minor update after Timothy's series about record to struct
renaming landed in master.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
While using SPIR-V shaders (ARB_gl_spirv), layout data is not implied
by a specific qualifier (std140, std430, etc.) but explicitly included in
the type (explicit values for offset, stride and row_major).
So this method is equivalent to the existing std140_size and
std430_size, but using such explicit values.
Note that the value returned by this method is only valid if such data
is set, so when dealing with SPIR-V shaders.
v2: (all changes suggested by Jason Ekstrand)
* Iterate through all struct members, instead of assume that fields
are ordered by offset
* Use else if
* Take into account the case that explicit_stride > elem_size, to
refine the final size for arrays and matrices
* Handle different bit-sizes in general, not just 32 and 64.
v3: (change suggested by Caio Marcelo de Oliveira Filho)
* fix up explicit_size() to consider interface types
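A sketch of the array case under those explicit-stride rules (hypothetical
helper, following the v2 note about explicit_stride possibly exceeding the
element size; the real method also covers structs, matrices and interfaces):

    /* With an explicit ArrayStride, every element but the last occupies a
     * full stride; the last one only contributes its own size, which may be
     * smaller than the stride. */
    static unsigned
    array_explicit_size(unsigned length, unsigned explicit_stride,
                        unsigned last_elem_size)
    {
       if (length == 0)
          return 0;
       return (length - 1) * explicit_stride + last_elem_size;
    }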
Signed-off-by: Alejandro Piñeiro <apinheiro@igalia.com>
Signed-off-by: Antia Puentes <apuentes@igalia.com>
Signed-off-by: Neil Roberts <nroberts@igalia.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Note that the nir_types glsl_get_bit_size is not a wrapper of this
one, because for bools at the nir level, we want to return size 1, but
at the glsl_types we want to return 32.
v2: reuse the new method in order to simplify is_16bit and is_32bit
helpers (Timothy)
v3: add a comment clarifying the difference between
glsl_base_type_bit_size and glsl_get_bit_size.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Equivalent to the already existing ir_variable is_in_buffer_block and
is_in_shader_storage_block, adding the uniform buffer object one. I'm
using the short forms (ssbo, ubo) to avoid overly long method names.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
The data for some nir variables is only filled up for some specific
modes. We now need it for UBO/SSBO too, as such info will be used when
linking for OpenGL (ARB_gl_spirv).
There is an existing comment just before that code (starting with XXX)
that points out that binding still needs to be filled in for uniform
variables at that point, and that this should be fixed, although it
doesn't specify why that's a problem or what the alternative would be.
For now we do the same for UBO/SSBO, and hope that the future fix is
done for all of them.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Providing nir variables for UBO/SSBO is not required for Vulkan,
but it is needed for OpenGL (ARB_gl_spirv), for example to
gather info from the UBO/SSBO while linking.
In contrast with most cases where nir variables are created, here
the type assigned is the full type (not just the bare type). This is
needed because while linking using the nir shader we need the explicit
layout info (explicit stride, explicit offset, row_major, etc.).
Also, we need to assign an interface type, also used by the OpenGL
linker if it is a UBO/SSBO. See ir_variable::is_in_buffer_block as an
example.
v2: assign interface_type to be the variable type, not need to be
arrayness (Timothy)
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Not all drivers support TGSI_OPCODE_DIV, so we should have a cap to be able
to check this.
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Optimize the shader a bit by emitting MAD with the inverse size values
instead of DIV+ADD.
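Illustrative arithmetic only (not the actual shader code): the idea is to
precompute the reciprocal of the size so the divide plus add collapses into
a single multiply-add.

    /* Before: one DIV and one ADD per invocation. */
    static float coord_div_add(float x, float offset, float size)
    {
       return (x + offset) / size;
    }

    /* After: 1/size (and the pre-scaled offset) are computed once, so the
     * shader only needs a MAD. */
    static float coord_mad(float x, float inv_size, float offset_times_inv_size)
    {
       return x * inv_size + offset_times_inv_size;
    }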
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This fixes BGR565 blit: currently BGRA444 is used for the blit, but with
swizzles from the original BGR565 format, so the 4 alpha bits are set to 1.
We can't just use the swizzle from the 'compatible' format, since there are
cases where BGR<->RGB swap needs to happen.
We can avoid all this trouble by using the original formats and only
falling back to the 'compatible' format when we need to.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
For fast clear to happen, all bits must be cleared.
This allows using fast clear for 24bpp depth without stencil.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Not a hot path obviously, but the table still has 425 extensions, which
you can go through in just 9 steps with a binary search.
The table is already sorted, as required by other parts of the code and
enforced by mesa's `main-test`.
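A minimal sketch of such a lookup (the table type and payload are
hypothetical; the real code searches the sorted extension table):

    #include <stddef.h>
    #include <string.h>

    struct ext_entry {
       const char *name;
       int offset;               /* whatever payload the table carries */
    };

    static const struct ext_entry *
    find_extension(const struct ext_entry *table, int count, const char *name)
    {
       int lo = 0, hi = count - 1;
       while (lo <= hi) {
          int mid = lo + (hi - lo) / 2;
          int cmp = strcmp(name, table[mid].name);
          if (cmp == 0)
             return &table[mid];
          else if (cmp < 0)
             hi = mid - 1;
          else
             lo = mid + 1;
       }
       return NULL;
    }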
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Changelog in Android makefile:
- Add LOCAL_MODULE_CLASS, intermediates and LOCAL_GENERATED_SOURCES
- Use LOCAL_EXPORT_C_INCLUDE_DIRS to export $(intermediates) path
- Move generated header rules before 'include $(BUILD_STATIC_LIBRARY)'
Fixes the following building error:
In file included from external/mesa/src/gallium/targets/dri/target.c:1:
external/mesa/src/gallium/auxiliary/target-helpers/drm_helper.h:257:16:
fatal error: 'virgl/virgl_driinfo.h' file not found
#include "virgl/virgl_driinfo.h"
^~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
Fixes: cf800998a ("virgl: Add driinfo file and tie it into the build")
Signed-off-by: Mauro Rossi <issor.oruam@gmail.com>
Reviewed-by: Chih-Wei Huang <cwhuang@linux.org.tw>
The simulator complains about using byte operands, and the documentation
says the same.
Note that add operations on bytes seem to work fine on HW (like ADD).
Using dword operands with CMP & SEL fixes the following tests:
dEQP-VK.spirv_assembly.type.vec*.i8.*
v2: Drop the GLK changes (Matt)
Add validator tests (Matt)
v3: Drop GLK ref (Matt)
Don't mix float/integer in MAD (Matt)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com> (v1)
Reviewed-by: Matt Turner <mattst88@gmail.com>
BSpec: 3017
Cc: <mesa-stable@lists.freedesktop.org>
No shader-db change on any Intel platform. No shader-db run-time
difference on a certain 36-core / 72-thread system at 95% confidence
(n=20).
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
There is no reason to mark the fmul in the expression
('fmul', ('fadd', a, b), ('fadd', a, b))
as commutative. If a source of an instruction doesn't match one of the
('fadd', a, b) patterns, it won't match the other either.
This change is enough to make this pattern work:
('~fadd@32', ('fmul', ('fadd', 1.0, ('fneg', a)),
('fadd', 1.0, ('fneg', a))),
('fmul', ('flrp', a, 1.0, a), b))
This pattern has 5 commutative expressions (versus a limit of 4), but
the first fmul does not need to be commutative.
No shader-db change on any Intel platform. No shader-db run-time
difference on a certain 36-core / 72-thread system at 95% confidence
(n=20).
There are more subpatterns that could be marked as non-commutative, but
detecting these is more challenging. For example, this fadd:
('fadd', ('fmul', a, b), ('fmul', a, c))
The first fadd:
('fmul', ('fadd', a, b), ('fadd', a, b))
And this fadd:
('flt', ('fadd', a, b), 0.0)
This last case may be easier to detect. If all sources are variables
and they are the only instances of those variables, then the pattern can
be marked as non-commutative. It's probably not worth the effort now,
but if we end up with some patterns that bump up on the limit again, it
may be worth revisiting.
v2: Update the comment about the explicit "len(self.sources)" check to
be more clear about why it is necessary. Requested by Connor. Many
Python style / idiom fixes suggested by Dylan. Add missing (!!!)
opcode check in Expression::__eq__ method. This bug is the reason the
expected number of commutative expressions in the bitfield_reverse
pattern changed from 61 to 45 in the first version of this patch.
v3: Use all() in Expression::__eq__ method. Suggested by Connor.
Revert away from using __eq__ overloads. The "equality" implementation
of Constant and Variable needed for commutativity pruning is weaker than
the one needed for propagating and validating bit sizes. Using actual
equality caused the pruning to fail for my ('fmul', ('fadd', 1, a),
('fadd', 1, a)) case. I changed the name to "equivalent" rather than
the previous "same_as" to further differentiate it from __eq__.
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
Search patterns that are expected to have too many commutative
expressions (e.g., the giant bitfield_reverse pattern) can be added to a
whitelist.
This would have saved me a few hours debugging. :(
v2: Implement the expected-failure annotation as a property of the
search-replace pattern instead of as a property of the whole list of
patterns. Suggested by Connor.
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
The bfi/bfm behavior change replaced the bfi/bfm usage in
lower_bitfield_insert_to_shifts with actual shifts like the name says,
but it failed to handle the offset=0, bits==32 case in the new
lowering.
v2: Use 31 < bits instead of bits == 32, to get the 31 < (iand bits,
31) -> false optimization.
Fixes regressions in dEQP-GLES31.*bitfield_insert* on freedreno.
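For reference, the semantics the lowering has to preserve, including the
full-width special case, look roughly like this in C (illustrative only,
not the NIR lowering itself):

    #include <stdint.h>

    static uint32_t
    bitfield_insert_ref(uint32_t base, uint32_t insert,
                        uint32_t offset, uint32_t bits)
    {
       /* The 31 < bits check: a shift by 32 is undefined, and the result
        * for a full-width insert is simply 'insert'. */
       if (bits > 31)
          return insert;
       uint32_t mask = ((1u << bits) - 1u) << offset;
       return (base & ~mask) | ((insert << offset) & mask);
    }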
Fixes: 165b7f3a44 ("nir: define behavior of nir_op_bfm and nir_op_u/ibfe according to SM5 spec.")
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
This reverts commit 5157a42765.
There is a meson bug that causes llvm to always be statically linked,
which is obviously not what we want. I haven't had time to look into it
yet, but for now let's just revert it.
Turns out one of the magic bits in the texture instruction meant
'float'. Different magic bits mean int and uint then :)
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We missed the workaround for compute workloads in earlier patches.
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
SLICE_COMMON_CHICKEN3 is a privileged register not accessible from userspace.
This patch silences a simulator warning about it.
We don't need to add this workaround in the linux kernel as the WA description
says it's fixed on the latest stepping.
This reverts commit 9c421d6b47.
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
SLICE_COMMON_CHICKEN3 is a privileged register not accessible from userspace.
This patch silences a simulator warning about it.
We don't need to add this workaround in the linux kernel as the WA description
says it's fixed on the latest stepping.
This reverts commit 2be60e0c73.
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
SLICE_COMMON_CHICKEN3 is a privileged register not accessible from userspace.
This patch silences a simulator warning about it.
We don't need to add this workaround in the linux kernel as the WA description
says it's fixed on the latest stepping.
This reverts commit 85ecd14ef6.
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
An earlier change was setting the SamplerCount = 0 for Gen 11
under #if GEN_GEN < 7. This commit fixes the problem.
This WA has also been added to the linux kernel.
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
In case we need to insert a dummy bary.f for the (ei) flag, it also
needs (ss) so we don't release varying storage to the next VS wave
before the ldlv completed. Fixes random failures in:
dEQP-GLES3.functional.transform_feedback.random.interleaved.lines.*
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Fixes:
dEQP-GLES2.functional.fbo.render.recreate_colorbuffer.rebind_rbo_rgba4
dEQP-GLES2.functional.fbo.render.recreate_colorbuffer.no_rebind_rbo_rgba4
dEQP-GLES2.functional.fbo.render.recreate_colorbuffer.no_rebind_rbo_rgba4_stencil_index8
dEQP-GLES2.functional.fbo.render.recreate_depthbuffer.rebind_rbo_rgba4_depth_component16
dEQP-GLES2.functional.fbo.render.recreate_depthbuffer.no_rebind_rbo_rgba4_depth_component16
dEQP-GLES2.functional.fbo.render.recreate_stencilbuffer.rebind_rbo_rgba4_stencil_index8
dEQP-GLES2.functional.fbo.render.recreate_stencilbuffer.no_rebind_rbo_rgba4_stencil_index8
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Previously, on systems where multiple versions of Python 3 (e.g. 3.6 and 3.7)
are installed, the wrong version of Python 3 could have been used.
The proper fix requires availability of path() method in Meson's python
module, which has been added in Meson 0.50:
https://github.com/mesonbuild/meson/pull/4616
Distro Bug: https://bugs.gentoo.org/671308
Signed-off-by: Arfrever Frehtes Taifersar Arahesis <Arfrever@Apache.Org>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
v2: - Add missing `endif` keyword (Dylan)
Splits texture lookup and binding actions.
The new _mesa_lookup_or_create_texture will be useful to implement the EXT_direct_state_access extension.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
The EXT_direct_state_access spec says:
INVALID_OPERATION is generated by GetNamedBufferParameterivEXT,
GetNamedBufferPointervEXT, GetNamedBufferSubDataEXT,
MapNamedBufferEXT, NamedBufferDataEXT, NamedBufferSubDataEXT, and
UnmapNamedBufferEXT if the buffer parameter is zero.
This commit adds buffer != 0 validation to the implemented functions.
glNamedBufferStorageEXT isn't included in this list and the EXT_buffer_storage
spec doesn't say that buffer = 0 is an error either, so I didn't add the same
validation for this function.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
Since the ARB DSA function glUnmapNamedBuffer() is only exposed
for 3.1 or above we make glUnmapNamedBuffer() an alias of
glUnmapNamedBufferEXT() rather than the other way around.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
This is available in ARB_buffer_storage when
EXT_direct_state_access is present.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
When a context is destroyed the destroy_tex_sampler_cb makes sure that all the
sampler views created by that context are destroyed.
This is done by walking the ctx->Shared->TexObjects hash table.
In a multiple-context environment the texture can be deleted by a different context,
so it will be removed from the TexObjects table, preventing the above mechanism
from working.
This can result in an assertion in st_save_zombie_sampler_view because the
sampler_view owns a reference to a destroyed context.
This issue occurs in blender 2.80.
This commit fixes this by explicitly releasing the sampler views created by the
destroyed context for all texture attachments.
Fixes: 593e36f956 (st/mesa: implement "zombie" sampler views (v2))
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110944
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
We pipe_resource_reference when handling transfers in map, so we need to
do a corresponding unreference in unmap.
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
dbb4457d98 started using drmDevicesEqual(), which was
introduced in libdrm 2.4.81.
We could either copy the function locally, or bump the required version.
Since the function is non-trivial and 2.4.81 is old enough already,
I suggest the latter.
Fixes: dbb4457d98 ("egl: add EGL_EXT_device_drm support")
Cc: Emil Velikov <emil.velikov@collabora.com>
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Currently libdrm_amdgpu provides a typedef of the various handles. While
the goal was to make those opaque, it effectively became part of the API.
To the best of my knowledge there are two ways to have opaque handles:
- "typedef void *foo;" - rather messy IMHO
- "stuct foo;" and use "struct foo *" through the API
In our case amdgpu_device_handle is used only internally, plus
respective code is not used or applicable for r300 and r600. Hence we
copied the typedef.
Seemingly this will be a problem since libdrm_amdgpu wants to change the
API, while not updating the code(?).
Either way, we can safely s/amdgpu_device_handle/void */ and carry on.
Cc: Michel Dänzer <michel@daenzer.net>
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Rendering to AFBC was broken, as the HW will complain loudly if we pass
a tagged pointer in bifrost_render_target.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Fixes: 3609b50a64 ("panfrost: Merge AFBC slab with BO backing")
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
At the moment we don't have enough people to ensure that RK3288 is
regression-free, so don't fail the CI in that case.
For now we'll focus on not regressing on RK3399 and we can expand to
other SoCs as more people join the effort.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Suggested-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We were failing to pipe_resource_unreference on the failure path due
to a non-renderable format. Instead of fixing this, just move the
checks earlier, before we even bother with refcounting or calloc.
These two extensions are supported on GFX8 but the throughput
of 16-bit floats/integers is the same as 32-bit. Also, shaderInt16
is only enabled on GFX9+ for the same reason, so be more consistent.
This fixes a crash with Wolfenstein II because it expects
shaderInt16 to be enabled when VK_AMD_gpu_shader_half_float is
exposed. Note that AMDVLK only enables these extensions on GFX9+.
Cc: 19.1 <mesa-stable@lists.freedesktop.org>
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Staging buffers are now created directly by the virgl_staging_mgr. We
don't need to support creating staging pipe_resources.
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
Use an instance of virgl_staging_mgr instead of u_upload_mgr to handle
the staging buffer. This removes the need to track the availability
of the staging manager, since virgl_staging_mgr can handle concurrent
active allocations.
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
Add a manager for the staging buffer used in virgl. The staging manager
is heavily inspired by u_upload_mgr, but is simpler and is a better fit
for virgl's purposes. In particular, the staging manager:
* Allows concurrent staging allocations.
* Calls the virgl winsys directly to create and map resources, avoiding
unnecessarily going through gallium resources and transfers.
olv: make virgl_staging_alloc_buffer return a bool
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
Store the virgl_hw_res instead of the pipe_resource for copy transfer
sources. This prepares the codebase for a change to provide only the
virgl_hw_res for the staging buffers in upcoming commits.
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
We were failing to unreference the old image resource. Instead of open
coding this and doing it badly, just use the copier function which does
the right thing.
A while back, we added a new field, but failed to update the copier.
I believe iris is the only current user of the new field, and it hasn't
used the copier, so no one noticed.
Fixes: 8b626a22b2 st/mesa: Record shader access qualifiers for images
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Otherwise they are classified as pipe_martian_resource, and don't
contain any helpful information about the texture.
Reviewed-by: Eric Anholt <eric@anholt.net>
Aligning phys_level0_sa by the compression block dimension prior to
mipmap layout causes the layout of compressed surfaces to differ from
the sampler's expectations in certain cases. The hardware docs agree:
From the BDW PRM, Vol. 5, Compressed Mipmap Layout,
The compressed mipmaps are stored in a similar fashion to
uncompressed mipmaps [...]
The following exceptions apply to the layout of compressed (vs.
uncompressed) mipmaps:
* [...]
* The dimensions of the mip maps are first determined by applying
the sizing algorithm presented in Non-Power-of-Two Mipmaps
above. Then, if necessary, they are padded out to compression
block boundaries.
The last bullet indicates that alignment should not be done for
calculating a miplevel's dimensions, but rather for determining miplevel
placement/padding. Comply with this text by removing the extra
alignment.
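As a rough sketch of the distinction in C (illustrative only, not the
actual isl code; MAX2/ALIGN are the usual Mesa macros):
   uint32_t level_w  = MAX2(base_width >> level, 1);   /* sizing: plain minify */
   uint32_t padded_w = ALIGN(level_w, block_width);    /* placement: pad to block */
The padded value is only used when working out where the level is placed,
not when reporting the level's dimensions.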
Fixes some fbo-generatemipmap-formats piglit failures on all tested
platforms (SNB-KBL).
v2:
- Note fixed platforms.
- Update some consumers via a helper function.
Cc: <mesa-stable@lists.freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Prepare for a bug fix by adding and using helpers which convert
isl_surf::logical_level0_px and isl_surf::phys_level0_sa to units of
surface elements.
v2:
- Update iris (Ken).
- Update anv.
Cc: <mesa-stable@lists.freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Clang (like LLVM) very annoyingly refuses to provide pkg-config, and
only provides cmake (unlike LLVM, which at least provides llvm-config,
even if llvm-config is terrible). Meson has gained the ability to use
cmake to find dependencies, and can successfully find Clang. This change
attempts to use cmake to find clang instead of a bunch of library
searches; when paired with -Dcmake_prefix_path, we can much more reliably
control which clang we're getting. This is only enabled for
meson >= 0.51, which adds the required options.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Meson has support for using cmake as a finder for some dependencies,
including LLVM. Using cmake has a lot of advantages: it needs less meson
maintenance to keep working (even for llvm updates); it works more
sanely for cross compiles (as llvm-config is a compiled binary not a
shell script). Meson 0.51.0 also has a new generic variable getter that
can be used to get information from either cmake, pkg-config, or
config-tools dependencies, which is needed for cmake. We continue to
support using llvm-config if you don't have cmake installed, or if cmake
cannot find a suitable version.
Fixes: 0d59459432 ("meson: Force the use of config-tool for llvm")
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
This rewrites the ddy in EXECUTE_4 mode with a loop to make it more
obvious what is going on, and also sets the group each of the 4 threads
in the group is supposed to execute.
Fixes the following CTS tests :
dEQP-VK.glsl.derivate.dfdyfine.dynamic_*
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Co-Authored-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Fixes: 2134ea3800 ("intel/compiler/fs: Implement ddy without using align16 for Gen11+")
s/otions/options/, and while here let's give the full path to xmlpool.h
since `../` won't be true in the generated file.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
Today, we stream the compute shader thread IDs simply because they're
(annoyingly) relative to dynamic state base address. We could upload
them once at compile time, but we'd need a separate non-streaming
uploader for IRIS_MEMZONE_DYNAMIC, and I'm not sure it's worth it.
stream_state pins the buffer for use in the current batch, but also
returns a reference to the pipe_resource. We dropped this reference
on the floor, leaking a reference basically every time we dispatched
a compute shader after switching to a new one.
The reason it returns a reference is so that we can hold on to it and
re-pin it in iris_restore_compute_saved_bos, which we were also failing
to do. So if we actually filled up a batch with repeated dispatches to
the same compute shader, and flushed, then continued dispatching, we
would fail to pin it and likely GPU hang.
We were unconditionally uploading the new data, but then conditionally
using it with MEDIA_CURBE_LOAD. If we're not going to emit the command,
there's no point in uploading the data.
We only push the compute shader thread IDs, not any actual constant
buffer data. So we should track the compute shader variant changing,
not constbuf changes.
MEDIA_INTERFACE_DESCRIPTOR's Interface Descriptor Data Start Address
field's docs say: "This bit specifies the 64-byte aligned address..."
And we were doing 32. Superfluous thread ID uploading was apparently
saving us from GPU hangs in most cases.
When the fault_pointer field in the header is set, we can get some idea
of which descriptor the HW isn't happy with if we know their addresses.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The only exception is the GS copy shader which emits them
unconditionally.
Totals from affected shaders:
SGPRS: 71320 -> 71008 (-0.44 %)
VGPRS: 54372 -> 54240 (-0.24 %)
Code Size: 2952628 -> 2941368 (-0.38 %) bytes
Max Waves: 9689 -> 9723 (0.35 %)
This helps Dota2, Doom, GTAV and Hitman 2.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This doesn't fix anything known, but it's likely going to
break if layerCount is ~0U.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This patch changes the code which sets EmitNoIndirectSampler to check
the core profile GLSL version, rather than the ARB_gpu_shader5 extension
enable. st/mesa exposes ARB_gpu_shader5 if GLSLVersion (in core
profiles) or GLSLVersionCompat (in compat profiles) >= 400.
The Intel drivers do not currently expose ARB_gpu_shader5 in compat
profiles. But the backend can absolutely handle indirect samplers.
Looking at the core profile version number should be a good indication
of what the driver supports.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
The helpers are needed so we can use the syntax `instr(cond)` in the
algebraic rules. Add a simple rule for dropping a mul-div pair of
the same value when wrapping is guaranteed not to happen.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
When handling the specified ALU operations, check for the decorations
and set nir_alu_instr no_signed_wrap and no_unsigned_wrap flags accordingly.
v2: Add a glsl_base_type_is_unsigned_integer() helper. (Karol)
v3: Rename helper to glsl_base_type_is_uint().
v4: Use two flags, so we don't need the helper anymore. (Connor)
v5: Pass alu directly to handle function. (Jason)
Reviewed-by: Karol Herbst <kherbst@redhat.com> [v3]
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
They indicate the operation does not cause overflow or underflow.
This is motivated by SPIR-V decorations NoSignedWrap and
NoUnsignedWrap.
Change the storage of `exact` to be a single bit, so they pack
together.
v2: Handle no_wrap in nir_instr_set. (Karol)
v3: Use two separate flags, since the NIR SSA values and certain
instructions are typeless, so just no_wrap would be insufficient
to know which one was referred to. (Connor)
v4: Don't use nir_instr_set to propagate the flags, unlike `exact`,
consider the instructions different if the flags have different
values. Fix hashing/comparing. (Jason)
Reviewed-by: Karol Herbst <kherbst@redhat.com> [v1]
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
There doesn't seem to be any reason to keep these opcodes around:
* fnot/fxor are not used at all.
* fand/for are only used in lower_alu_to_scalar, but easily replaced
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
We can vectorize instructions with different constant sources by creating
a new load_const and using that.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
In Midgard, a bundle consists of a few ALU instructions. Within the
bundle, there is room for an optional 128-bit constant; this constant is
shared across all instructions in the bundle.
Unfortunately, many instructions want a 128-bit constant all to
themselves (how selfish!). If we run out of space for constants in a
bundle, the bundle has to be broken up, incurring a performance and
space penalty.
As an optimization, the scheduler now analyzes the constants coming in
per-instruction and attempts to merge shared components, adjusting the
swizzle accessing the bundle's constants appropriately. Concretely,
given the GLSL:
(a * vec4(1.5, 0.5, 0.5, 1.0)) + vec4(1.0, 2.3, 2.3, 0.5)
instead of compiling to the naive two bundles:
vmul.fmul [temp], [a], r26
fconstants 1.5, 0.5, 0.5, 1.0
vadd.fadd [out], [temp], r26
fconstants 1.0, 2.3, 2.3, 0.5
The scheduler can now fuse into a single (pipelined!) bundle:
vmul.fmul [temp], [a], r26.xyyz
vadd.fadd [out], [temp], r26.zwwy
fconstants 1.5, 0.5, 1.0, 2.3
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
In the past, each query object had their own BO. Checking if the batch
referenced that BO was an easy way to check if commands were still
queued to compute the query value. If so, we needed to flush.
More recently (c24a574e6c), we started using a u_upload_mgr for query
objects, placing multiple queries in the same BO. One side-effect is
that iris_batch_references is no longer a reasonable way to check if
commands are still queued for our query. Ours might be done, but a
later query that happens to be in the same BO might be queued. We don't
want to flush in that case.
Instead, check if the current batch's signalling syncpt is the one we
referenced when ending the query. We know the syncpt can't have been
reused because our query is holding a reference, so a simple pointer
comparison should suffice.
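Roughly (hypothetical field and function names; the real iris code may
differ slightly):
   /* Flush only if the batch that will signal our query's syncpt is
    * still the one currently being built.
    */
   if (q->syncpt == iris_batch_get_signal_syncpt(batch))
      iris_batch_flush(batch);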
Removes all batch flushing caused by query objects in Shadow of Mordor.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
This returns a pointer to the signalling syncpt, without incrementing
the reference count. This can be useful for comparisons.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
This support requires the driver to be a NIR driver as we use the
NIR lowering pass to do the clamping.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
A ton of tests were fixed by this series. A few were incorrectly passing
before (QualityError, for instance) and now are explicitly failing. A
few legitimate regressions but overwhelmingly positive.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
u_blitter gets "special treatment" and uses this mechanism to cast
cube maps to 2D textures in order to texelFetch them.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
In a vertex shader, a tex op should map to txl, as there *must* be a LOD
given to the hardware (implicitly or explicitly).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Identify the seamless cubemap bit and passthrough the Gallium state
rather than setting unconditionally.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This is similar to the AFBC merge; now all (non-imported) buffers use a
common backing buffer. Reenables checksumming, eliminating a performance
regression.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
I thought I already fixed this. Maybe that was a dream...? Then again, I
might be dreaming now.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The main ctx->blitter instance should be reserved for blits originated
from Gallium (like mipmap generation). Since wallpapering is
conceptually different -- wallpaper blits can be triggered by Gallium
blits -- the blitter pipes must be separate to avoid potential u_blitter
recursion.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Rather than tracking AFBC memory "specially", just use the same codepath
as linear and tiled. Less things to mess up, I figure. This allows us to
use the standard setup_slices() call with AFBC resources, allowing
mipmapped AFBC resources.
Unfortunately, we do have to disable AFBC (and checksumming) in the
meantime to avoid functional regressions, as we don't know _a priori_ if
we'll need to access a resource from software (which is not yet hooked
up with AFBC) and we don't yet have routines to switch the layout of a
BO at runtime.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
As far as we know, Utgard-style tiling only works for color render
targets, not depth/stencil, so ensure we don't try to tile it (rather
than compress or plain old linear) and drive ourselves into a corner.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Now the autogeneration of mipmaps is working (via u_blitter), we can
finally enable mipmaps!
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Now that all the prerequisites breaking u_blitter are fixed, we can
finally hook up panfrost_blit.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
txf instructions can result from blits, so handle them rather than
crash. Only works for 2D textures (not even 2D array textures) due to a
register allocation constraint that may not be sorted for a while.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We need the flush from u_blitter for a normal blit (e.g. for mipmaps);
it's only wallpaper-related blits that are special-cased.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
To avoid interference with the wallpaper code, we need to do some state
tracking when generating mipmaps. In particular, we need to mark the
generated layers as invalid before generating the mipmap, so we don't
try to backblit them if they already had content.
Likewise, we need to flush both before and after generating a mipmap
since our usual set_framebuffer_state flushing isn't quite there yet.
Ideally better optimizations would save the flush but I digress.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
If a mipfilter is not set, it's legal to have an incomplete mipmap; we
should handle this accordingly. An "easy way out" is to rig the LOD
clamps.
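A minimal sketch of that, assuming gallium-style sampler state (field
names illustrative):
   /* With no mip filtering, collapse the LOD range so only the base
    * level is ever sampled and an incomplete chain is harmless.
    */
   if (sampler->min_mip_filter == PIPE_TEX_MIPFILTER_NONE) {
      min_lod = 0.0f;
      max_lod = 0.0f;
   }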
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Sampler state prefetching is broken on Gen11, and WA_160668216 says
to disable it. Apparently sampler state prefetching also has basically
zero impact on performance, so we don't need to worry there.
i965, anv, and iris already handle this correctly, but we missed BLORP.
Ideally the kernel should globally disable this by writing SARCHKMD, at
which point we wouldn't have to worry about it. But let's be defensive
and handle it ourselves too.
v2: separate out from BTP workaround in case we change that eventually
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com> [v1]
When immutable samplers are set we call write_image_view with a NULL
image view. This causes issues on IVB where we have to fake texture
swizzling.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110999
Fixes: d2aa65eb18 "anv: Emulate texture swizzle in the shader when..."
This reduces the size of fill operations needed to clear CMASK
for layered color textures.
GFX9 unsupported for now.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This reduces the size of fill operations needed to clear FMASK
for layered color textures.
GFX9 unsupported for now.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
If any of the VS members uses_firstvertex, uses_baseinstance,
uses_drawid or uses_is_indexed_draw is enabled, memory may be leaked.
The call to gen6_upload_push_constants allocates
stage_state->push_const_bo. A pointer from push_const_bo is then taken
into draw_params_bo (in the call to brw_prepare_shader_draw_parameters
via brw_upload_data) and referenced, but that reference is never
released.
Fixes leak:
136 bytes in 1 blocks are definitely lost in loss record 6 of 13
at 0x4C31B25: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
by 0xC2B64B7: bo_alloc_internal (brw_bufmgr.c:596)
by 0xC2B6748: brw_bo_alloc (brw_bufmgr.c:672)
by 0xC314BB3: brw_upload_space (intel_upload.c:88)
by 0xC2EBBC5: gen6_upload_push_constants (gen6_constant_state.c:155)
by 0xC9E4FA6: gen9_upload_vs_push_constants (genX_state_upload.c:3300)
by 0xC2E0EDA: check_and_emit_atom (brw_state_upload.c:540)
by 0xC2E0EDA: brw_upload_pipeline_state (brw_state_upload.c:659)
by 0xC2E0FF1: brw_upload_render_state (brw_state_upload.c:681)
by 0xC2C5D2D: brw_draw_single_prim (brw_draw.c:1052)
by 0xC2C62CB: brw_draw_prims (brw_draw.c:1175)
by 0xC488AD1: vbo_exec_vtx_flush (vbo_exec_draw.c:386)
by 0xC485270: vbo_exec_FlushVertices_internal (vbo_exec_api.c:652)
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reported-by: Yevhenii Kolesnikov <yevhenii.kolesnikov@globallogic.com>
Signed-off-by: Sergii Romantsov <sergii.romantsov@globallogic.com>
st/egl used to support eglCreatePbufferFromClientBuffer, but now that
it's gone, any call to it would segfault.
Let's return a nice error instead.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Nobody ever uses these, so let's just hard code them instead.
If an EGL driver ever comes around that needs them they're trivial to
re-add.
Suggested-by: Emil Velikov <emil.velikov@collabora.com>
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
The driver doesn't use these values and ac_rtld has assertions
expecting the value of 0.
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
We'll have to extend this at some point, and using a bitfield union in
this way makes it easier to get the right index without excessive
branching.
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The initial prototype used a processor-specific symbol type, but
feedback suggests that an approach using processor-specific section
name that encodes the alignment analogous to SHN_COMMON symbols is
preferred.
This patch keeps both variants around for now to reduce problems
with LLVM compatibility as we switch branches around.
This also cleans up the error reporting in this function.
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Incrementing the iteration count was intended to fix an off-by-one error
when the first terminator was superseded by a later terminator. If
there is no first terminator or later terminator, there is no off-by-one
error. Incrementing the loop count creates one. This can be seen in
loops like:
do {
if (something) {
// No breaks or continues here.
}
} while (false);
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Tested-by: Abel Briggs <abelbriggs1@hotmail.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110953
Fixes: 646621c66d ("glsl: make loop unrolling more like the nir unrolling path")
We were pessimistically uploading all of it in case of indirection,
but we can just bump that when we encounter indirection.
total constlen in shared programs: 2529623 -> 2485933 (-1.73%)
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
ir3_nir_analyze_ubo_ranges() has already told us how much of cb0 we
need to upload (all of it, since it will lower indirect UBO 0 accesses
from load_ubo back to indirection on the constant buffer).
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
If the NIR-level analysis decided to move UBO loads to the constant
file, but the backend decided not to load those constants, we could
upload past the end of constlen. This is particularly relevant for
pre-a6xx, where we emit a different constlen between bin and render
variants.
(Fix by Rob, commit message by anholt)
Reviewed-by: Eric Anholt <eric@anholt.net>
UBOs and uniforms now use a common code path with an explicit `index`
argument passed, enabling UBO reads.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Prevents an assert(0) later in this (not so edge) case. We still have to
have a dummy there.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We've known about this for a while, but it was never formally in the
machine header files / decoder, so let's add them in.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Now that all the counting is sorted, it's a matter of passing along a
GPU address and going.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We already uploaded UBOs, but only a fixed number (1) for uniforms;
let's upload as many as we compute we need.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We look at the highest set bit in the UBO enable mask to work out the
maximum indexable UBO, i.e. the UBO count as we need to report to the
hardware.
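In C this is just a find-last-set on the mask (sketch; the enabled_mask
field name is illustrative):
   /* The count reported to the hardware is the index of the highest
    * enabled UBO plus one, i.e. the position of the highest set bit.
    */
   unsigned ubo_count = util_last_bit(buf->enabled_mask);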
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We refactor panfrost_constant_buffer to mirror v3d's constant buffer
handling, to enable UBOs as well as a single set of uniforms.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This doesn't handle Y-flipping, but it's good enough to render the stars
in Neverball.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
In preparation for lowering point sprites, track them like we track
alpha testing state.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Those either depend on information filled by the NIR linking steps OR
are restricted by those:
- gl_nir_lower_samplers: depends on UniformStorage being set by the
linker.
- brw_nir_lower_image_load_store: After 6981069fc8 "i965: Ignore
uniform storage for samplers or images, use binding info" we want
this pass to happen after gl_nir_lower_samplers.
- gl_nir_lower_buffers: depends on UniformBlocks and
SharedStorageBlocks being set by the linker.
For the regular GLSL code path, those data structures are filled
earlier. For NIR linking code path we need to generate the nir_shader
first then process it -- and currently the processing works with all
shaders together. So move the passes out of brw_create_nir into its
own function, called by brwProgramStringNotify and
brw_link_shader().
This patch prepares ground for ARB_gl_spirv, that will make use of NIR
linker.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The iterator `i` already walks the right amount now that it is
incremented by `dmul`, so there is no need to `* 2`. Fixes invalid memory
access in upcoming ARB_gl_spirv tests.
Failure bisected by Arcady Goldmints-Orlov.
Fixes: b019fe8a5b "glsl/nir: Fix handling of 64-bit values in uniform storage"
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
For n_columns == 1, we have a vector which is handled by the else
case. Fixes invalid memory access in upcoming ARB_gl_spirv tests.
Failure bisected by Arcady Goldmints-Orlov.
Fixes: 81e51b412e "nir: Make nir_constant a vector rather than a matrix"
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This lets us use the optimization pattern
(('ult', 31, ('iand', b, 31)), False) to remove the
bcsel instruction for code originating in D3D shaders.
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
The [iu]bfe and bfm instructions are defined to only use the five
least significant bits.
This optimizes a common pattern from D3D -> SPIR-V translation.
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
That is: the five least significant bits provide the values of
'bits' and 'offset', which is the case for all hardware currently
supported by NIR that uses the bfm/bfe instructions.
This patch also changes the lowering of bitfield_insert/extract
using shifts to not use bfm and removes the flag 'lower_bfm'.
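For reference, the assumed semantics look roughly like this in C
(sketch only, not the exact NIR constant-folding code):
   static uint32_t sketch_bfm(uint32_t bits, uint32_t offset)
   {
      /* 'bits' ones starting at 'offset'; only the five LSBs of each
       * operand are honoured.
       */
      return ((1u << (bits & 0x1f)) - 1u) << (offset & 0x1f);
   }
   static uint32_t sketch_ubfe(uint32_t value, uint32_t offset, uint32_t bits)
   {
      return (value >> (offset & 0x1f)) & ((1u << (bits & 0x1f)) - 1u);
   }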
Tested-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
These optimizations are based on the fact that
'and(a,b) <= umin(a,b)'.
For AMD, this series moves the optimization from LLVM to NIR,
so currently no vkpipeline-db changes here.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Since we cannot handle ffma in ppir, lower it on nir level already.
Signed-off-by: Andreas Baierl <ichgeh@imkreisrum.de>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
This simple extension might be useful for debugging purposes.
GAPID has support for it.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
When HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED is used, the platform
gralloc module will select a format based on the usage flags provided by
the camera device and the other endpoint of the stream.
The patch fixes a crash in Vulkan when the test is run with the camera
stream set to HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED.
Test: android.graphics.cts.CameraVulkanGpuTest#testCameraImportAndRendering
on chromebook with camera HAL3.
v2: use AHARDWAREBUFFER_FORMAT_IMPLEMENTATION_DEFINED and take
AHARDWAREBUFFER_USAGE_CAMERA_MASK into account (Gurchetan)
Fixes: f1654fa7e3 "anv/android: support creating images from external format"
Signed-off-by: Nataraj Deshpande <nataraj.deshpande@intel.com>
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
This commit moves the sysvals to a separate, new constant buffer
at the end (before the shader constants). It also allows us to
remove the special handling we had for cbuf0, and enables all
constant buffers to support user-specified resources and user
buffers.
v2: (by Kenneth Graunke)
- Rebase on the previous patch to fix system value uploading.
- Fix disk cache num_cbufs calculation
- Fix passthrough TCS to report num_cbufs = 1 so upload actually occurs
- Change upload_sysvals to assert that num_cbufs > 0 when
num_system_values > 0.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
I neglected to mark cbuf0_needs_upload = false after uploading it.
The obvious fix regressed user clip plane tests, because of a second
bug: we also forgot to mark that they may need re-uploading when
changing shader programs (which may have more or less system values).
Thanks to Timur Kristóf for catching the original issue.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Timur Kristóf <timur.kristof@gmail.com>
This reverts commits cc4b68a801 and
b27fb3eaca.
These caused a bunch of EGLSync tests to crash when they were previously
failing.
I have a hunch the tests are doing something wrong, like using
extensions without checking that they are supported, but until the issue is
investigated I'm just reverting these commits.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
total constlen in shared programs: 2485933 -> 2462236 (-0.95%)
Reviewed-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
We only ever return the shader we were passed in (but internally
modified).
Reviewed-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
We need the constants uploaded to cover the NIR offset plus the size,
not the aligned-down start of our upload range plus the size. Fixes
mistaken UBO analysis with mat3 loads.
Fixes: 893425a607 ("freedreno/ir3: Push UBOs to constant file")
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
NIR 1-bit bool dests will have a bit size of 1, and thus a calculated
"bytes" of 0. load_ubo is always loading from dwords in the source.
Fixes: 893425a607 ("freedreno/ir3: Push UBOs to constant file")
Reviewed-by: Rob Clark <robdclark@gmail.com>
We end up uploading constlen regardless, so max_const would only get
you slightly improved granularity in const usage in comparison.
Reviewed-by: Rob Clark <robdclark@gmail.com>
The non-DRM backend is gone. Let's get rid of the panfrost_driver
abstraction and call the panfrost_drm_xxx() functions directly.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The DRM backend has a dummy implementation and the non-DRM backend is
gone, so let's remove this perf counter interface.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Without this fix, LAVA isn't parsing crashes as failed tests, because
the shell logging is interspersed within the fake deqp output.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
If there are no tiling jobs and no clears, there is no need to submit a
fragment job (relevant for transform feedback).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We want to know if a given slice of a buffer is initialized at a
particular point in the execution of the program. This is accomplished
easily enough -- start out uninitialized and upon an operation writing
to the buffer, mark it initialized.
The motivation is to optimize away expensive operations (like wallpaper
blits) when reading from an uninitialized buffer; since it's
uninitialized, the results of these operations are undefined, and it's
legal to take the fast path ^_^
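A rough sketch of the bookkeeping (hypothetical names, not the actual
panfrost structures):
   /* Set on any operation that writes the slice. */
   slice->initialized = true;

   /* Before an expensive read-back (e.g. the wallpaper blit): */
   if (!slice->initialized)
      return;   /* contents are undefined -- safe to take the fast path */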
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This is a rather complex change, adding a lot of code but ideally
cleaning up quite a bit as we go.
Within a batch (single frame), there are multiple distinct Mali job
types: SET_VALUE, VERTEX, TILER, FRAGMENT for the few that we emit right
now (eventually more for compute and geometry shaders). Each hardware
job has a mali_job_descriptor_header, which contains three fields of
interest: job index, a dependencies list, and a next job pointer.
The next job pointer in each job is used to form a linked list of
submitted jobs. Easy enough.
The job index and dependencies list, however, are used to form a
dependency graph (a DAG, where each hardware job is a node and each
dependency is a directed edge). Internally, this sets up a scoreboarding
data structure for the hardware to dispatch jobs in parallel, enabling
(for example) vertex shaders from different draws to execute in parallel
while there are strict dependencies between tiling the geometry of a
draw and running that vertex shader.
For a while, we got by with an incredible series of total hacks,
manually coding indices, lists, and dependencies. That worked for a
moment, but combinatorial kaboom kicked in and it became an
unmaintainable mess of spaghetti code.
We can do better. This commit explicitly handles the scoreboarding by
providing high-level manipulation for jobs. Rather than a command like
"set dependency #2 to index 17", we can express quite naturally "add a
dependency from job T on job V". Instead of some open-coded logic to
copy a draw pointer into a delicate context array, we now have an
elegant exposed API to simply "queue a job of type XYZ".
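Conceptually, the exposed helpers look something like this (illustrative
prototypes only, not the exact panfrost API):
   /* Queue a job of the given type on the batch; the scoreboard assigns
    * the job index and links the next-job pointer internally.
    */
   unsigned panfrost_scoreboard_queue_job(struct panfrost_batch *batch,
                                          enum mali_job_type type,
                                          mali_ptr job_descriptor);

   /* Record that 'job' must wait for 'dependency' to finish. */
   void panfrost_scoreboard_add_dependency(struct panfrost_batch *batch,
                                           unsigned job, unsigned dependency);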
The design is influenced by both our current requirements (standard ES2
draws and u_blitter) as well as the need for more complex scheduling in
the future. For instance, blits can be optimized to use only a tiler
job, without a vertex job first (since the screen-space vertices are
known ahead-of-time) -- causing tiler-only jobs. Likewise, when using
transform feedback with rasterizer discard enabled, vertex jobs are
created (to run vertex shaders) with no corresponding tiler job. Both of
these cases break the original model and could not be expressed with the
open-coded logic. More generally, this will make it easier to add
support for compute shaders, geometry shaders, and fused jobs (an
optimization available on Bifrost).
Incidentally, this moves quite a bit of state from the driver context to
the batch, which helps with Rohan's refactor to eventually permit
pipelining across framebuffers (one important outstanding optimization
for FBO-heavy workloads).
v2: Add comment explaining the meaning of "primary batch" as suggested
by Tomeu (trivial - not reviewed).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Rohan Garg <rohan.garg@collabora.com>
This is the preferred clipping mode since it means points no longer
disappear the moment part of the point crosses over the edge of the
viewport, and lines no longer get weird endpoints at viewport edges. We've
just never bothered to hook it up until now.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
In workloads where there is a lot of geometry drawn that crosses over
the edge of the viewport, this should substantially improve clipper
performance. Not really sure why it's taken 3 years to turn it on but
we never got around to it.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fix Android build errors in winsys/amdgpu and radv
due to 'amdgfxregs.h' not found.
Changelog:
amd/common - generated $(intermediates)/common path is added to exports
winsys/amdgpu - libmesa_amd_common static dependency is added
radv - correct generated $(intermediates)/common path is added to includes
Fixes: f480b8a ("amd/common: use generated register header")
Signed-off-by: Mauro Rossi <issor.oruam@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
HTILE decompressions need the user sample locations if specified
in the current subpass.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The driver might need to clear one aspect of the depth/stencil
resolve attachment before performing the resolve itself.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This path supports layers but it requires decompressing HTILE
before resolving. The driver also needs to fix up HTILE after
the resolve. This path is probably slower than the graphics one.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
When using graphics, the driver doesn't need to decompress HTILE
before resolving. This path currently doesn't support layers
so we have to fall back to the compute path.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Otherwise fast color/depth clears can't work because they depend
on the framebuffer.
This fixes the following CTS (when the small hint is disabled):
- dEQP-VK.geometry.layered.1d_array.secondary_cmd_buffer
- dEQP-VK.geometry.layered.2d_array.secondary_cmd_buffer
- dEQP-VK.geometry.layered.cube.secondary_cmd_buffer
- dEQP-VK.geometry.layered.cube_array.secondary_cmd_buffer
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110810
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107986
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
These three tests pass on RK3399, but fail on RK3288:
dEQP-GLES2.functional.shaders.matrix.div.const_lowp_mat2_mat2_vertex
dEQP-GLES2.functional.shaders.operator.unary_operator.pre_increment_effect.highp_ivec4_vertex
dEQP-GLES2.functional.shaders.texture_functions.vertex.texture2dprojlod_vec3
They reliably pass when run individually, but reliably fail when run in
a full CI run.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
This can happen when any of our vertex buffers was written by a previous
transform feedback draw.
Fixes the following piglit tests:
spec/ext_transform_feedback/position-render-bufferbase
spec/ext_transform_feedback/position-render-bufferbase-discard
spec/ext_transform_feedback/position-render-bufferoffset
spec/ext_transform_feedback/position-render-bufferoffset-discard
spec/ext_transform_feedback/position-render-bufferrange
spec/ext_transform_feedback/position-render-bufferrange-discard
Reviewed-by: Eric Anholt <eric@anholt.net>
If we are about to write to a transform feedback buffer, we should
make sure that we flush any prior work that intended to read from
any of these buffers.
Fixes piglit test:
spec/ext_transform_feedback/immediate-reuse
Reviewed-by: Eric Anholt <eric@anholt.net>
Fixes regression in shaders using ball/etc by explicitly passing through
the number of channels in the NIR op and broadcasting the last
components of the channel appropriately, as the Midgard ops are all vec4
implicitly but NIR can be vec2/3.
v2: Don't also regress every other swizzle in Equestria.
v3: Don't regress the swizzles at Canterlot High either.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Acked-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Most vertex data lives in user VBOs in IRIS_MEMZONE_OTHER, which
typically have high bits set to 0xffff. The shader draw parameters were
being uploaded in IRIS_MEMZONE_DYNAMIC, which have high bits set to 0x2.
This was causing a lot of ping-ponging of high bits, leading to
unnecessary VF cache flushing.
Cuts 7.2% of the flushes in the Civilization VI demo on Kabylake GT2.
If there is no buffer, then it doesn't matter. Leave the old stale
high bits in place (for next time) and don't bother invalidating.
Cuts 5.6% of the flushes in the Civilization VI demo on Kabylake GT2.
If a buffer has no usage history, we don't have any read only cache
invalidates to do. If we've written it with the CPU, we don't need
to flush the render cache. The only bit remaining is the CS stall
from iris_flush_bits_for_history. We can just skip the PIPE_CONTROL
in this case.
This is pretty common - an app creates a buffer, fills it with data,
and then binds it for some purpose.
Cuts 36% of the flushes in Manhattan 3.0 on Kabylake GT2.
Instead of using the combined iris_flush_and_dirty_for_history, use
iris_flush_bits_for_history directly - we were already using the split
out iris_dirty_for_history. There's no need to dirty twice, and we can
avoid the looping altogether for non-buffers.
My intention was to have iris_copy_region not do flushing, and leave
that up to the callers. iris_resource_copy_region needs to do this,
but iris_transfer_flush_region was already doing it. The net result
was that we were doing it twice for transfers.
So, move the flushing from iris_copy_region to iris_resource_copy_region
so that it only happens in the callers as I intended.
When I split iris_flush_and_dirty_history into two helper functions,
I accidentally made it stop dirtying. Which was...sort of the point.
Fixes: 21688a306b iris: Split iris_flush_and_dirty_for_history into two helpers.
Otherwise, tests which loop on glMemoryBarrier may run us out of
batch space with piles of flushing. (Ideally, we'd elide those bonus
PIPE_CONTROLs, but presumably this isn't that common of a case...)
Piglit's arb_pipeline_statistics_query-comp would hit this case after
some of the next patches remove other PIPE_CONTROLs with maybe_flushes.
This prints a log of every PIPE_CONTROL flush we emit, noting which bits
were set, and also the reason for the flush. That way we can see which
are caused by hardware workarounds, render-to-texture, buffer updates,
and so on. It should make it easier to determine whether we're doing
too many flushes and why.
Looking at the scissor, we can discard some tiles. We specifically don't
care about the scissor on the wallpaper, since that's a no-op if the
entire tile is culled.
v2: Clarify clear comment (not reviewed but trivial).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
This `gen_scrn_dispatch.pl` has never existed, in the sense that NVIDIA
never published it. There have been a number (6) of commits to fix
various things in there over the years, and never anything from NVIDIA.
For all intents and purposes this file is hand-written and
hand-maintained, and we're on our own.
Let's make this clear by removing this misleading comment.
Suggested-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Acked-by: Emil Velikov <emil.velikov@collabora.com>
Lima and Panfrost both have implementations of software tiling
(the Lima one was forked off the Panfrost one which was forked off the
original Lima one...). Switch to the most recent Lima code, since it's
more complete than ours at this point.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Panfrost's tiling routines (incorrectly) ignored the source stride,
masking this bug; lima's routines respect this stride, causing issues
when tiling NPOT textures whose stride is not a multiple of 64
(for instance, NPOT textures with bpp=1).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This will allow both drivers to share this code. Both drivers
build-tested with meson. Android build not tested.
v2: Change naming from tiling->shared, in case Lima and Panfrost can
share more in the future. Fix Android build system.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-and-tested-by: Qiang Yu <yuq825@gmail.com>
We can rely on only one kind of synchronization object (drm-syncobj)
when it is available. This reduces the number of file descriptors we
use in our implementation.
This will be required later for timeline semaphores implementation, at
this point we won't ever want to use anything else but syncobjs.
v2: Only use has_syncobj for semaphores (Jason)
v3: Only has_syncobj in assert on semaphores in QueueSubmit (Jason)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Looking at internal evidence (later fields including a literal other
compute job inception-style, seeming memory corruption, no clear
function, and the field after this being a pointer to *itself*), it
looks like this is really a much smaller descriptor.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
In OpenGL, uniforms generally represent fp32 vec4s (at least in highp
mode). In OpenCL, they represent vec2s of 64-bit pointers.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
There is fundamentally not a framebuffer associated with a compute job.
Allocate a new structure for it so we don't mess up graphics when
decoding.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
These tests are failing at times, blacklist for now:
dEQP-GLES2.functional.fbo.render.shared_colorbuffer_clear.tex2d_rgba
dEQP-GLES2.functional.fbo.render.shared_colorbuffer_clear.tex2d_rgb
dEQP-GLES2.functional.shaders.matrix.mul.dynamic_highp_mat4_vec4_vertex
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
When DRI_CONF_GLES_EMULATE_BGRA was added for the virgl driver, it
missed a DRI_CONF_OPT_END.
This makes some drivers, like vc4/v3d, crash with the following
error:
Fatal error in __driConfigOptions line 99, column 2: mismatched tag.
Not sure why it doesn't fail with virgl.
Fixes: b793663449
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
This basically boils down to supporting persistent and coherent buffer
storage.
We chose to use coherent buffer storage for all persistent buffers
even if it's not explicitly specified, since using glMemoryBarrier to
obtain coherency would be particularly expensive in our driver stack,
and require a lot of additional bookkeeping.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
For svga, the use of persistent / coherent maps is typically slightly
slower than without them. It's probably a bit case-dependent and
possible to tune, but for now, make sure we can disable those.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
With SWTNL and index translation we're mapping buffers for reading. These
buffers are commonly upload_mgr buffers that might already be referenced
by another submitted or unsubmitted GPU command. A synchronous map will
then trigger a flush and sync, at least on Linux that doesn't distinguish
between read- and write referencing. So map these buffers async. If they
for some obscure reason happen to be dirty (stream-output, buffer-copy),
the resource_buffer code will read-back and sync anyway. For persistent /
coherent buffers a corresponding read-back and sync will happen in the
kernel fault handler.
Testing: Piglit quick. No regressions.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
In the case of SWTNL and index translation we were uploading index buffers
and then reading out from them using the CPU. Furthermore, when translating
indices we often cached the results with an upload_mgr buffer, causing the
cached indexes to be immediately discarded on the next write to that
upload_mgr buffer.
Fix this by only uploading when we know the index buffer is going to be
used by hardware. If translating, only cache translated indices if the
original buffer was not a user buffer. In the latter case when we're not
caching, use an upload_mgr buffer for the hardware indices.
This means we can also remove the SWTNL hand-crafted index buffer upload
mechanism in favour of the upload_mgr.
Finally avoid using util_upload_index_buffer(). It wastes index buffer
space by trying to make sure that the offset of the indices in the
upload_mgr buffer is larger than or equal to the position of the indices in
the source buffer. From what I can tell, the SVGA device does not
require that.
Testing done: Piglit quick. No regressions.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Add a flag in the surface cache key and a winsys usage flag to
specify coherent memory.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Previously, unsynchronized maps have been assumed to also be persistent.
Now distinguish between persistent and unsynchronized maps, and also support
PIPE_TRANSFER_PERSISTENT from ARB_buffer_storage.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
On GLES hosts GL_SAMPLES_PASSED is emulated by GL_ANY_SAMPLES_PASSED, which returns a boolean.
With this tweak the value that is returned if any sample passed can be set. This
may be of interest when an application decides whether some geometry is rendered based
on an amount of visibility and not just a binary decision. virglrenderer sets a default
of 1024 on the host.
v2: Remove reference from virgl and correct description (Emil)
v3: Send the tweak binary encoded instead of using strings (Gurchetan)
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
With Qemu this final swizzle is not needed, but with vtest it is, i.e. it depends on
how a program using virglrenderer uses the surface that is rendered to, hence
a tweak is added.
v2: Update description and fix spelling (Emil)
v3: Send tweak as binary value instead of using strings (Gurchetan)
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
These tweaks are used to fix rendering issues with Valve games and
at least also "The Raven Remastered" when run on a GLES host.
v2: Fix type in define and remove virgl from driconf option (Emil)
v3: Encode tweak binary instead of using strings (Gurchetan)
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
Tie in the check whether the host supports tweaks and whether this tweak
is enabled.
v2: Add comment about the emulated formats not being used directly in the
guest (Gurchetan)
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
This works only for the drm variant of virgl and not for the vtest
variant.
v2: Rebase, replace the configuration query function by a pointer to
the configuration data.
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com> (v1)
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
These were previously handled by spirv_to_nir, but that changed to
be an explicit pass in 23d30f4099 "spirv,nir: lower
frexp_exp/frexp_sig inside a new NIR pass"
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
In ARB_gl_spirv we'll be able to use variables for uniform buffers, so
don't use the descriptor intrinsics to lower the block access.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
For the block BLOCK_TEXEL_VIEW_COMPATIBLE case, this didn't matter
because the flags were already more-or-less what we wanted. However,
for gen7 stencil shadow images, it still had ISL_SURF_USAGE_STENCIL_BIT
so we were getting W-tiled which isn't what we want for the shadow. By
passing just ISL_SURF_USAGE_TEXTURE_BIT (and CUBE if we care), we now
get something that's actually texturable.
Fixes: f3ea0cf828 "anv: Add stencil texturing support for gen7"
Copies to a shadow image happen during a VkCmdPipelineBarrier or at
subpass transitions. We could potentially be a bit more conservative
but these transitions shouldn't happen often and it's better to have our
bases covered.
Fixes: f3ea0cf828 "anv: Add stencil texturing support for gen7"
Most places in NIR, we treat matrices like arrays. The one annoying
exception to this has been nir_constant where a matrix is a first-class
thing. This commit changes that so a matrix nir_constant is the same as
an array nir_constant. This makes matrix nir_constants a tiny bit more
expensive but shrinks all others by 96B.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Now that nir_const_value is a scalar, there's no reason why we need
multiple paths here and it's just extra paths to keep working. While
we're here, we also add a vtn_fail_if check that component indices are
in-bounds.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
It only accepts 32-bit integers so it should have a more descriptive
name. This patch should not be a functional change.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
All of the callers for this function are looking at interpolation
qualifiers and want to make sure they're declared flat. Any 64-bit
integer inputs need to be flat. It's also makes the function make more
sense since "integer" is fairly generic.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
All of the callers of this function really just want to know if the type
is an integer and don't care about bit size.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Even if only the variables' access flags are changed, the existing NIR
infrastructure expects metadata to be explicitly preserved, so do
that. Don't bother avoiding calling preserve twice, since the
cost is negligible.
This scenario can be triggered by dead variables, and also by other
intrinsics that read the variables -- but not cause progress to be
made when processing the intrinsics.
Fixes: f2d0e48ddc "glsl/nir: Add optimization pass for access flags"
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Unwrap any array in the variable type so we can get the sampler dim.
This fixes piglit test
spec/arb_arrays_of_arrays/execution/image_store/basic-imageStore-const-uniform-index.shader_test.
Fixes: f2d0e48ddc "glsl/nir: Add optimization pass for access flags"
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Rather than checking __GLIBC__/__UCLIBC__ macros as a proxy for
execinfo.h presence, just check directly. This allows the build to work
on musl.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
We don't execute any of the commands to record snapshots, so we can't
actually produce a real result. We do however need to avoid waiting
on a syncpt which will never be signalled. So, just return 0.
Right now, this just deduces when we can arbitrarily reorder SSBO and
image loads, matching the existing logic in radeonsi's TGSI->LLVM pass.
This approach can't handle some things that nir_opt_copy_prop_vars can,
but it can handle images, and with GCM it lets us hoist reads outside of
loops. We can also pass this information to LLVM which lets it do its
own optimizations on it.
This is GLSL only as I haven't tested it on Vulkan yet, and it would
probably need a few changes to work there.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
The spec explicitly says that volatile writes can't be removed and
volatile reads do not guarantee that the same value will still be around
after the read, as if there were a barrier after each read/write. Just
ignore them.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
We were completely ignoring these before, except for putting them on
variables. While we're here, don't set access qualifiers when converting
to bindless since glsl_to_nir will already have set a more accurate
qualifier that includes any qualifiers on struct members that are
dereferenced.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
In the next commit, we'll properly handle access qualifiers on struct
members by propagating them to load/store instructions, but these
instructions had no way to specify the qualifier.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
inaccessiblememonly means that it doesn't modify memory accessible via
normal LLVM pointers. This lets LLVM's dead store elimination, memcpy
forwarding, etc. ignore functions with this attribute. We don't
represent descriptors as pointers, so this property is always true of
buffer and image stores. There are plans to represent descriptors via
pointers, but this just means that now nothing is inaccessiblememonly,
as LLVM will then understand loads/stores via its usual alias analysis.
Radeonsi was mistakenly only setting it if the driver could prove that
there were no reads, and then it was cargo-culted into ac_llvm_build
and ac_nir_to_llvm. Rip it out of everything.
statistics with nir enabled:
Totals from affected shaders:
SGPRS: 152 -> 152 (0.00 %)
VGPRS: 128 -> 132 (3.12 %)
Spilled SGPRs: 0 -> 0 (0.00 %)
Spilled VGPRs: 0 -> 0 (0.00 %)
Private memory VGPRs: 0 -> 0 (0.00 %)
Scratch size: 0 -> 0 (0.00 %) dwords per thread
Code Size: 9324 -> 9244 (-0.86 %) bytes
LDS: 2 -> 2 (0.00 %) blocks
Max Waves: 17 -> 17 (0.00 %)
Wait states: 0 -> 0 (0.00 %)
The only difference was a manhattan31 shader.
Acked-by: Timothy Arceri <tarceri@itsqueeze.com>
Acked-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This allows us to disable the FMASK decompress pass when
transitioning from CB writes to shader reads.
This will likely be improved and enabled by default in the future.
No CTS regressions on GFX8 but a small number of multisample CTS
failures on GFX9 (they look related to the small hint).
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
We have some serious leaks, so plug some and also move to ralloc to
limit the lifetime of some objects to that of their parent.
Lots more such work to do.
For some reason, this fixes:
dEQP-GLES2.functional.lifetime.attach.deleted_output.texture_framebuffer
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Do not offer a hardware drm backed egl device if no render node
is available. The current implementation will fail on this
egl device. On top of that, it issues a warning that is actually misleading.
There are further error paths that can fail on the way to a
hardware backed egl device. Fixing all of them would essentially require
opening the drm device and seeing if there is a usable driver associated
with the device. The approach taken avoids a full probe and fixes at
least this kind of problem on the kvm virtualization hosts I observe here.
Fixes: dbb4457d98 ("egl: add EGL_EXT_device_drm support")
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
This is pointless in that we won't ever hit those paths in real life,
but coverity complains.
Fixes: f014ae3c7c ("nouveau: add support for nir")
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
GL_MAP_INVALIDATE_BUFFER_BIT cannot be treated as
GL_MAP_INVALIDATE_RANGE_BIT naively. When we run into
  ptr = glMapBufferRange(buf, 0, size,
                         GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
  memcpy(ptr, data1, size);
  glUnmapBuffer(buf);
  ptr = glMapBufferRange(buf, size, size,
                         GL_MAP_WRITE_BIT | GL_MAP_UNSYNCHRONIZED_BIT);
  memcpy(ptr, data2, size);
  glUnmapBuffer(buf);
we never want data1 to be copy_transfer'ed. Because that would mean
that data2 might overwrite valid data.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Fixes: a22c5df079 ("virgl: Use buffer copy transfers to avoid waiting when mapping")
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Now that sRGB formats are supported for both rendering and sampling,
advertise support.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The performance impact is slightly mitigated by tiling the render
target, but it's undeniably still slow compared to AFBC. Unfortunately,
it doesn't look like AFBC and sRGB play nice...
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
For fixed-function, we have hardware to handle sRGB so we just set a
flag. For blend shaders, it's rather more involved; this is currently
unimplemented. Assert it out for now; we don't need it quite yet.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We already can sample from Mali's linear/tiled encoding (the one from
Utgard -- AFBC is mostly unrelated); let's be able to render to it as
well.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
A mode for rendering tiled/uncompressed was noticed, so we reshuffle the
MFBD render target definitions to explicitly include block type.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This combines the two cmdstream bits "is_3d" and "is_not_cubemap" into a
single 2-bit texture target selection, noticing it's the same as the
2-bit selection in Midgard and Bifrost texturing ops. Accordingly, we
share this definition and add the missing entry for 1D/buffer textures.
This requires a nontrivial (but functionally similar) refactor of all
parts of the driver to use the new definitions appropriately.
Theoretically, this should add support for buffer textures, but that's
obviously not tested and probably wouldn't work.
While doing so, we notice the sRGB enable bit, which we document and
decode as well here so we don't forget about it.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Requirements for a job should be figured out in pan_job.c
v2: [Alyssa] Fix early return
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It's nice to keep these two files in sync, as they define
guest userspace <---> host userspace communication.
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Getting rid of a compiler warning :
In file included from ../src/intel/tools/aubinator_viewer.cpp:225:
../src/imgui/imgui_memory_editor.h: In member function ‘void MemoryEditor::DisplayPreviewData(size_t, const u8*, size_t, MemoryEditor::DataType, MemoryEditor::DataFormat, char*, size_t) const’:
../src/imgui/imgui_memory_editor.h:637:16: warning: enumeration value ‘DataType_COUNT’ not handled in switch [-Wswitch]
switch (data_type)
^
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This effectively does the opposite of nir_lower_alu_to_scalar, trying
to combine per-component ALU operations with the same sources but
different swizzles into one larger ALU operation. It uses a similar
model as CSE, where we do a depth-first approach and keep around a hash
set of instructions to be combined, but there are a few major
differences:
1. For now, we only support entirely per-component ALU operations.
2. Since it's not always guaranteed that we'll be able to combine
equivalent instructions, we keep a stack of equivalent instructions
around, trying to combine new instructions with instructions on the
stack.
The pass isn't comprehensive by far; it can't handle operations where
some of the sources are per-component and others aren't, and it can't
handle phi nodes. But it should handle the more common cases, and it
should be reasonably efficient.
[Alyssa: Rebase on latest master, updating with respect to typeless
moves]
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
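As a rough illustration of the transform (pseudo-code with swizzles, not
taken from the pass itself): two per-component operations on different
channels of the same sources, such as
  t.x = a.x + b.x;
  t.y = a.y + b.y;
can be fused into a single wider operation,
  t.xy = a.xy + b.xy;
provided the scalar instructions are otherwise equivalent.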
This patch adds support for nir_texop_txs instructions which are needed
to support the OpenGL textureSize() function. This is also needed to
support RECT texture sampling which is currently lowered to 2D sampling +
a TXS() instruction by the nir_lower_tex() helper.
Changes in v2:
* Split options for the 1st and 2nd tex lowering passes
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We are about to add support for the TXS (texture size) op which is not
implemented using a midgard texture instruction. Let's rename emit_tex()
into emit_texop_native() and repurpose emit_tex() as a dispatcher.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We're about to add more sysval types, and panfrost_emit_for_draw()
is big enough, so let's move the sysval upload logic into a separate
function.
We also add one sub-function per sysval type to keep
panfrost_upload_sysvals() small and readable.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We are about to add support for nir_texop_txs which requires adding a
sysval/uniform containing the texture size. Let's change the
emit_sysval_read() prototype to take a nir_instr object instead of
a nir_intrinsic_instr one so we can re-use this function when emitting
a sysval for a txs instruction.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The V3D driver has an open-coded solution for this, and we need the
same thing for Panfrost, so let's add a generic way to lower TXS(LOD)
into max(TXS(0) >> LOD, 1).
Changes in v2:
* Use == 0 instead of !
* Rework the minification logic as suggested by Jason
* Assign cursor pos at the beginning of the function
* Patch the LOD just after retrieving the old value
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
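A minimal nir_builder-style sketch of the minification step described
above, applied to a single component; this only illustrates
max(TXS(0) >> LOD, 1) and is not the exact pass code:
  #include "nir.h"
  #include "nir_builder.h"
  /* "size0" is one component of the TXS result at level 0 and "lod" is
   * the originally requested LOD; both are assumed to be 32-bit ints. */
  static nir_ssa_def *
  minify(nir_builder *b, nir_ssa_def *size0, nir_ssa_def *lod)
  {
     return nir_imax(b, nir_ushr(b, size0, lod), nir_imm_int(b, 1));
  }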
get_texture_size() will create a txs instruction with ->sampler_dim set
to the original tex->sampler_dim. The condition to call lower_rect()
only checks the value of ->sampler_dim and whether lower_rect is
requested or not. This leads to an infinite loop when calling
nir_lower_tex() with the same options until it returns false.
In order to avoid that, let's move the tex->sampler_dim patching before
get_texture_size() is called. This way the txs instruction will have
->sampler_dim set to GLSL_SAMPLER_DIM_2D and nir_lower_tex() won't try
to lower it on the subsequent passes.
Changes in v2:
* Add Jason R-b
* Add a comment explaining why we patch ->sampler_dim at the beginning
of the lower_rect() func
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The code considers that projector lowering was done even if it's not
really the case. Change the project_src() prototype to return a bool
encoding whether projector lowering happened or not and update the
progress var accordingly in nir_lower_tex_block().
---
Changes in v2:
* Add Jason R-b
* Drop the part suggesting that nir_lower_rect() could be called in
a do-while(progress) loop.
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
In other words, make use of radv_dcc_enabled() instead of
radv_image_has_dcc() all over the place.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Without them, the state tracker falls back to an RGBA format, but it
doesn't always manage to override the swizzle for us. So we lose the
information that the API expects an X channel, where alpha is garbage
and reads back as 1. We have no equivalent ISL RGBX format for these,
so we just use RGBA directly and override the swizzle in all cases.
The VaryingNames array has NumVaryings entries. But BufferStride is
a small array of MAX_FEEDBACK_BUFFERS (4) entries. Programs with
more than 4 varyings would read out of bounds.
Also, BufferStride is set based on the shader itself, which means that
it's inherently already included in the hash, and doesn't need to be
included again. At the point when shader_cache_read_program_metadata
is called, the linker hasn't even set those fields yet. So, just drop
it entirely.
Fixes valgrind errors in KHR-GL45.transform_feedback.linking_errors_test.
Fixes: 6d830940f7 "glsl/shader_cache: Allow shader cache usage with transform feedback"
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
This should fix floating-point border color on all gen7 HW. Integer is
still thoroughly busted on gen7 because it doesn't exist on IVB and it's
crazy on HSW.
Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Whenever stencil texturing is not required (most of the time), we can
use VK_EXT_separate_stencil_usage to only create the shadow image when
VK_IMAGE_USAGE_SAMPLED_BIT is required for stencil. Of course, this
depends on applications using the extension, but hopefully DXVK and
similar translators are doing so and that covers most of the apps.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Intel hardware didn't get support for sampling from W-tiled (required
for stencil) images until Broadwell so we can't directly sample from
stencil. Instead, if we want to support stencil texturing on gen7
hardware, we have to keep a texture-capable shadow copy around and use
BLORP to update when stencil changes. The one thing this commit does
not implement is self-dependencies with stencil input attachments.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99493
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This should ensure the TC invalidate happens after the stall.
Fixes KHR-GL43.copy_image.functional which does a CopyImage (blorp_copy)
from a buffer (using R8G8B8A8_UINT), then GetTexImage to read back the
original image (using R10G10B10A2_UNORM).
When copying/blitting with format reinterpretation, we invalidate the
texture cache before/after. Before is so the source of the copy works,
and after is to get rid of our new data in the "wrong" format to protect
future attempts to sample.
When I ported these hacks to iris, I tried to be cautious by only
bothering with the hacks if the batch referenced the BO. This makes
some sense for the before case. If it isn't referenced, the texture
cache can't really have any data for the BO (since it's also invalidated
between batches). But we still need to do the after case regardless,
as we've just polluted the cache with hazardous entries.
When the host virglrenderer is an older version that doesn't check the sRGB write
control feature, or when the guest kernel doesn't support CAPS v2, the guest
will only report support for GL 2.1 on a GL 3.3 host, even though earlier guest
mesa versions supported 3.3.
By also checking the host feature check version this regression can be avoided.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110921
Fixes: 2845939d6a "virgl: Set sRGB write control CAP based on host capabilities"
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
Fixes:
dEQP-GLES31.functional.stencil_texturing.format.depth24_stencil8_2d
dEQP-GLES31.functional.stencil_texturing.format.stencil_index8_2d
dEQP-GLES31.functional.stencil_texturing.misc.compare_mode_effect
Signed-off-by: Rob Clark <robdclark@chromium.org>
The stencil is actually in the .w component, but we used to use SWAP to
remap the channels. This doesn't work when tiled/ubwc.
Fixes:
dEQP-GLES31.functional.stencil_texturing.format.depth24_stencil8_2d_array
dEQP-GLES31.functional.stencil_texturing.format.depth24_stencil8_cube
dEQP-GLES31.functional.stencil_texturing.format.stencil_index8_2d_array
dEQP-GLES31.functional.stencil_texturing.format.stencil_index8_cube
dEQP-GLES31.functional.stencil_texturing.misc.base_level
dEQP-GLES31.functional.texture.border_clamp.formats.stencil_index8.nearest_size_pot
dEQP-GLES31.functional.texture.border_clamp.formats.stencil_index8.nearest_size_npot
dEQP-GLES31.functional.texture.border_clamp.formats.depth24_stencil8_sample_stencil.nearest_size_pot
dEQP-GLES31.functional.texture.border_clamp.formats.depth24_stencil8_sample_stencil.nearest_size_npot
dEQP-GLES31.functional.texture.border_clamp.sampler.uint_stencil
Signed-off-by: Rob Clark <robdclark@chromium.org>
Inline the ring buffer and signal logic into lp_scene_queue instead of
using a u_ringbuffer. The code ends up simpler since there's no need
to handle serializing data from / to packets.
This fixes a crash when compiling Mesa with LTO, which happened because
util_ringbuffer_dequeue() was writing data after the "header
packet", as shown below:
  struct scene_packet {
     struct util_packet header;
     struct lp_scene *scene;
  };
  /* Snippet of the old lp_scene_dequeue(). */
  packet.scene = NULL;
  ret = util_ringbuffer_dequeue(queue->ring,
                                &packet.header,
                                sizeof packet / 4,
                                /* ... */);
  return packet.scene;
but due to the way aliasing analysis works the compiler didn't
consider "&packet.header" to alias with "packet.scene". With
the aggressive inlining done by LTO, this would end up always
returning NULL instead of the content read by
util_ringbuffer_dequeue().
Issue found by Marco Simental and Thiago Macieira.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110884
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Just encode the Mali magic number for `replace` rather than awkwardly
forcing Gallium structures through.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We switch all fmov to (i)mov, following the NIR switch. This simplifies
some code surrounding blend shaders and should have no functional
changes elsewhere.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
When the resource to be mapped is busy and the backing storage can
be discarded, reallocate the backing storage to avoid waiting.
In this new path, we allocate a new buffer, emit a state change,
write, and add the transfer to the queue. In the
PIPE_TRANSFER_DISCARD_RANGE path, we suballocate a staging buffer,
write, and emit a copy_transfer (which may allocate, memcpy, and
blit internally). The win might not always be clear, but another
win comes from the fact that the new path clears res->valid_buffer_range
and does not clear res->clean_mask. This makes it much more preferable
in scenarios such as
  access = enough_space ? GL_MAP_UNSYNCHRONIZED_BIT :
                          GL_MAP_INVALIDATE_BUFFER_BIT;
  glMapBufferRange(..., GL_MAP_WRITE_BIT | access);
  memcpy(...); // append new data
  glUnmapBuffer(...);
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
We are going to support reallocating the HW resource for a
virgl_resource. When that happens, the virgl_resource needs to be
rebound to the context.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
When PIPE_TRANSFER_DISCARD_WHOLE_RESOURCE is properly supported,
virgl_transfer might refer to a different virgl_hw_res than
virgl_resource does. We need to save the virgl_hw_res and use the
saved one.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
It works similarly to pipe_resource_reference but is for virgl_hw_res.
It can also replace resource_unref.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
This adds a set of opcodes for performing moves and type conversions
with respect to particular rounding modes, required for OpenCL.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The hardware support for scissoring requires minimally 1 pixel to be
drawn. If the scissor culls *everything*, we need to drop the draw
entirely early on.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Pipelined rendering is important for performance but is not working
right these days. Disable it for correctness until the panfrost_job
refactor is enabled and we can do it right.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
In anticipation of more general mipmapping support, we implemented
support for rendering to linear mipmaps (a very simple case).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
In preparation for more complex mipmap operations. glGenerateMipmap() in
particular, as implemented by u_blitter, requires reading from non-zero
initial mip levels.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It turns out we have explicit control over helper invocations: if a
particular bit in the fragment shader descriptor is set, helper
invocations are launched; if it is clear, they are not. Helper invocations
are required whenever computing derivatives, whether explicitly
(dFdx/dFdy) *or* implicitly (any texturing). Accordingly, we set this
bit when texturing to fix edge case behaviour (literally, haha).
Thank you to Jason Ekstrand and Ilia Mirkin for pointing out the
representative dEQP test failed along triangle edges and for suggesting
helper invocations / derivatives as a list of suspect pieces (which led
to discovering the helper invocations enable bit in the first place).
Ideally we would use the new NIR analysis pass for this, but that hasn't
landed quite yet.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
In some cases, Gallium can give us bad info about the texture count,
counting some NULL textures. We pass Gallium's info to the hardware
blindly, which can confuse the hardware in edge cases. This patch
adjusts accordingly.
This worked around a bug in oooold versions of Panfrost. Nowadays, its
presence is, at best, *creating* bugs. Let's whack it.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
In a poorly coded app, the framebuffer can be partially drawn, an FBO
switched, switch back to the framebuffer and keep drawing, etc.
Reordering would fix this, but for now we need to just be careful about
flushing scanout too.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
On more complex apps (possibly using desktop GL specific extensions?),
our viewport code was getting wacky results for unclear reasons. Let's
be a little less wacky.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
To do so, we route some basic information through to the FBD creation
routines (currently just a binary toggle of "has draws?"). Eventually,
more refactoring will enable dynamic hierarchy mask selection, but right
now we do the most basic.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Quite a bit of refactoring in the main driver will be necessary to make
use of this effectively, so the implementation is incomplete.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
As per the notes at the beginning of pan_tiler.c, we implement a routine
to calculate the size of the polygon list header given the framebuffer
dimensions and the provided hierarchy mask.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
I'm not sure how the blob does it, but this seems to be a dead simple
test and roughly corresponds to what I've noticed from the blob, so
maybe it's good enough.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Following the research into Midgard's hierarchical tiling
infrastructure, we now understand (in broad strokes) the purpose of each
tiler field in the MFBD. Additionally, we understand more of the tiling
fields in the SFBD and in Bifrost's structures, although this knowledge
is still incomplete.
Update the names, decoder, and comments to reflect this new
understanding.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This explains how the polygon list is allocated, updating the headers
appropriately to sync the terminology.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
These names are from the replay workaround in kbase; they begin to shine
some light on the meaning of these fields. In particular, we now
understand why the "tiler_meta" field has the effect it does on
performance in certain scenes (controlling tile granularity).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Otherwise the buffer loads/stores in the bufimage meta operations fail.
If we decompress DCC then we can use the "canonical" format compatible
with the not-supported format.
CC: <mesa-stable@lists.freedesktop.org>
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
This fixes a segfault when forcing DCC decompressions on compute
because internal meta objects are not created since the on-demand
stuff.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-By: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Instead of re-computing in the driver. The 3d and cube flags
are correctly set, so the same values should be returned by
ac_compute_surface().
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-By: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Since commit 4f3c82c72c, fmod is no longer being lowered in nir, and it
ends up crashing lima programs with "unsupported nir_op: fmod" in both
ppir and gpir.
There seems to be no mod operation in Utgard hardware, and there is an
optimization in nir to lower fmod to instructions that lima already
implements, so let's use that.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Now that we can blit depth/stencil in a way that plays nicely with UBWC,
re-enable it.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@gmail.com>
Now that it can turn these blits into rendering to RB6_Z24_UNORM_S8_UINT
it can properly handle cases where only one of depth+stencil is being
blitted. And this avoids lying about the format, which completely doesn't
work when UBWC is used.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@gmail.com>
For re-written z/s blits, we want to use the re-written `pipe_blit_info`
even if we have to fallback to 3d pipe (`u_blitter`). So handle that
fallback ourself.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@gmail.com>
The name 'separate' doesn't make a whole lot of sense, as only in one of
the cases is the blit actually split. But this is split out from the
previous patch in an attempt to reduce the noise.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@gmail.com>
This maps to a special format that recent generations of adreno have,
for blitting z24s8. Conceptually it is similar to doing Z and/or S
blits by pretending it is r8g8b8a8 (with appropriate writemask). But
it differs when bandwidth compression is used, as z24 is a different
type from r8g8b8.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@gmail.com>
Apparently, we're supposed to look at the texture object's built-in
sampler object's sRGB decode setting in order to decide whether to
decode/downsample/re-encode, or simply downsample as-is. Previously,
we had just respected the pipe_resource's format.
Fixes SKQP's Skia_Unit_Tests.SRGBMipMaps test.
(This ports commit 337a808062 from i965
to st/mesa for Gallium drivers.)
Reviewed-by: Eric Anholt <eric@anholt.net>
Commit de8a919702 refactored dynarray usage and changed the size of the
allocation in lima_submit_add_bo.
That causes a segfault in programs running with lima.
This commit restores the previous allocation size.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Fixes the following building error in Android:
In file included from external/mesa/src/amd/common/ac_llvm_helper.cpp:34:
In file included from external/mesa/src/amd/common/ac_llvm_build.h:30:
In file included from external/mesa/src/compiler/nir/nir.h:40:
In file included from external/mesa/src/compiler/nir_types.h:36:
external/mesa/src/compiler/glsl_types.h:37:10: fatal error: 'main/config.h' file not found
^~~~~~~~~~~~~~~
1 error generated.
Fixes: bd4c661 ("ac,ac/nir: use a better sync scope for shared atomics")
Signed-off-by: Mauro Rossi <issor.oruam@gmail.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Fixes building errors due to libmesa_util and libexpat dependencies:
In file included from external/mesa/src/amd/vulkan/radv_device.c:52:
external/mesa/src/util/xmlpool.h:115:10: fatal error: 'xmlpool/options.h' file not found
^~~~~~~~~~~~~~~~~~~
1 error generated.
FAILED: out/target/product/x86_64/obj_x86/SHARED_LIBRARIES/vulkan.radv_intermediates/LINKED/vulkan.radv.so
...
external/mesa/src/util/xmlconfig.c:670: error: undefined reference to 'XML_ParserCreate'
...
clang.real: error: linker command failed with exit code 1 (use -v to see invocation)
Fixes: 3c2e826 ("radv: Add support for driconf.")
Signed-off-by: Mauro Rossi <issor.oruam@gmail.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Initially I was only interested in documenting NIR_PRINT, as today I
needed to check the code to find this envvar, which at the moment I
only vaguely remembered existed.
While we are here, though, let's just document all of them (assuming that
makes sense).
Reviewed-by: Eric Anholt <eric@anholt.net>
When searching for resources in the cache, we previously released all
expired resources even after having found a compatible resource.
This commit changes this behavior to return immediately when finding a
compatible resource, so that the operation finishes more quickly. This
moves more of the burden of releasing expired resources to cache
addition, which, since it happens at resource destruction time, is
less time critical.
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
Introduce a resource cache implementation that can be used by any virgl
winsys backend.
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
Record types have their own slot to store the precision for each
member in glsl_struct_field. Previously if the member didn’t have an
explicit precision qualifier this was being left as
GLSL_PRECISION_NONE. This patch makes it take into account the type’s
default precision qualifier like it does for regular variables in
apply_type_qualifier_to_variable.
This has the additional benefit of correctly reporting an error when a
float type is used in a struct without declaring the default type.
Reviewed-by: Eric Anholt <eric@anholt.net>
This function is confusingly also used to match interstage interfaces
as well as intrastage. In the interstage case it needs to avoid
comparing the precisions. This patch adds a parameter to specify
whether to take the precision into account or not so that it can be
used for both cases.
Reviewed-by: Eric Anholt <eric@anholt.net>
On GLES, the interface between vertex and fragment shaders doesn’t
need to have matching precision.
Section 4.3.10 of the GLSL ES 3.00 spec:
“The type of vertex outputs and fragment inputs with the same name
must match, otherwise the link command will fail. The precision does
not need to match.”
Reviewed-by: Eric Anholt <eric@anholt.net>
On GLES, the interface between vertex and fragment shaders doesn’t
need to have matching precision. This adds an extra argument to
glsl_types::record_compare to disable the precision comparison. This
will later be used for the shader interface check.
In order to make this work this patch also adds a helper function to
recursively compare types while ignoring the precision.
v2: Call record_compare from within compare_no_precision to avoid
duplicating code (Eric Anholt).
Reviewed-by: Eric Anholt <eric@anholt.net>
The offsets to read the query results were off-by-one, which causes the
counters to report bogus increasing values.
Also the counter result is u32, so we need to initialize the query type
to reflect that.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Either all channels executed the 'then' block, in which case all
channels will directly jump to the 'endif' block at the end of the
'then' block, or all channels execute the 'else' block (so no
execution masking is necessary).
Shader-db results:
total instructions in shared programs: 9119238 -> 9117550 (-0.02%)
instructions in affected programs: 401252 -> 399564 (-0.42%)
helped: 855
HURT: 77
total uniforms in shared programs: 3022622 -> 3022605 (<.01%)
uniforms in affected programs: 3566 -> 3549 (-0.48%)
helped: 17
HURT: 0
total max-temps in shared programs: 1327762 -> 1327774 (<.01%)
max-temps in affected programs: 619 -> 631 (1.94%)
helped: 2
HURT: 15
Reviewed-by: Eric Anholt <eric@anholt.net>
Removes a compiler warning about an uninitialized variable.
Fixes: c02ffd2700 "ir3: Use the new NIR lowering pass for integer multiplication"
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Rob Clark <robclark@gmail.com>
Reviewed-by: Eduardo Lima <elima@igalia.com>
When both the fragment and vertex shaders point to the same varying
location they expect to share the same varying slot.
Make sure vertex and fragment varyings pointing to the same loc have
->src_offset set to the same value.
[Alyssa: In addition to a patch implementing txs, this fixes GALLIUM_HUD on
Panfrost]
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The src/dst format is overridden from the pipe_blit_info, so this
logic just serves to confuse the reader.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Since we might only need a subset of the layers, and otherwise we
trigger an assert in util_max_layer().
Signed-off-by: Rob Clark <robdclark@chromium.org>
These are tests that regressed in RK3288 but still pass on RK3399.
So that we still have a CI we can rely on, add them to the flip-flop list for
now.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Some tests got fixed since the last update, but also some regressions
crept in.
To keep the CI green, add the regressions to the expected failures.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This fixes a rebase fail in ea51275e07, and prevents it from happening
again. There's no reason to do this manually.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
altsoftware.com seems to no longer be around, and is currently being
held by a domain squatter. Let's link to waybackmachine instead.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
dsbox.com now forwards to haystax.com, which is technically unrelated to
this link. Let's link to waybackmachine instead.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
sgi.com now forwards to hpe.com, which is technically unrelated to these
links. Let's link to waybackmachine instead.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Just a couple of lines above, we have this exact same link, but this
time with a leading "www.". Let's match that.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Meson is what should tell you about these issues, not the configure
script. We no longer have that.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
We no longer have an autoconf build-system to maintain, but we do have a
meson build-system. So let's mention that instead.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Automake and libtool are no longer required to build, instead we need
meson and ninja-build.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
The only build system that doesn't support Haiku is `Android.mk`,
which also doesn't support most other platforms either, so there is
no need to single it out.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
We have to emit a CACHE_FLUSH_AND_INV_TS_EVENT to be sure all
prior GPU work is done. While we are at it, also flush and
invalidate DB.
This fixes the following CTS (when the small hint is disabled):
dEQP-VK.query_pool.statistics_query.reset_before_copy.*
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-By: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The dri2 winsys also uses libdrm (and you can only enable dri3 if
you enable dri2), and the drm winsys only requires libdrm.
So if any winsys is enabled you can also enable the drm winsys, and
since we always want at least one winsys we can always enable it.
I removed the check for the drm platform for VA and OMX since they
do not care anymore. Since we still check for one of r600g, nouveau
or radeonsi, we are guaranteed to still only enable it by default
in a configuration that requires libdrm anyway. So for people using
va=auto, we don't suddenly start requiring libdrm where we did not
before.
This supersedes "vl: Enable DRM by default.", which I pushed, but
rolled back because it used dep_libdrm before its definition.
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Do not want it for perf reasons. Always have to disable DCC when
transferring to external queue.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Transitions to external queue should do the transition & make sure
it works on all queues.
Fixes: 8ebc7dcb59 "radv: Allow fast clears with concurrent queue mask for some layouts."
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
A pipe_transfer is a context object. It is fine for the
constructor/destructor to have access to the context.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
A pipe_transfer is a context object. It is fine for
virgl_transfer_queue to have access to the context.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
This dumps disassembly to the pipe_debug_callback together with shader
stats.
Can be used together with shader-db to get full disassembly of all shaders
in the database.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Compute the count since the start of the current line instead of the
count since the start of the disassembly.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This implies that the memory will always be at address 0, which allows
LLVM to generate slightly better code.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This will make it easier to use LDS for other purposes in geometry
shaders in the future.
The lifetime of the esgs_ring variable is as follows:
- declared as [0 x i32] while compiling shader parts or monolithic shaders
- just before uploading, gfx9_get_gs_info computes (among other things)
the final ESGS ring size (this depends on both the ES and the GS shader)
- during upload, the "esgs_ring" symbol is given to ac_rtld as a shared
LDS symbol, which will lead to correctly laying out the LDS including
other LDS objects that may be defined in the future
- si_shader_gs uses shader->config.lds_size as the LDS size
This change depends on the LLVM changes for emitting LDS symbols into
the ELF file.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Upcoming changes to LLVM will emit LDS objects as symbols in the ELF
symbol table, with relocations that will be resolved with this change.
Callers will also be able to define LDS symbols that are shared between
shader parts. This will be used by radeonsi for the ESGS ring in gfx9+
merged shaders.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Since it can only be used for reading the config of an individual,
non-combined shader, it is not very reusable anyway.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The compiler should be able to optimize them away, but still. There's
no point in declaring those as pointers, and if the compiler *doesn't*
optimize them away, they add unnecessary load-time relocations.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Using an explicit linker instead of just concatenating .text
sections will allow us to start using .rodata sections and
explicit descriptions of data on LDS that is shared between
stages.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Use hash_table_u64 instead of hash_table directly, since the former
will also handle the special keys (deleted and freed) and allow using
the whole u64 space.
Fixes crash in INTEL_DEBUG=bat when using a key with value 0 -- the
current value for a freed key.
Fixes: b38dab101c "util/hash_table: Assert that keys are not reserved pointers"
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The hash_table_u64 should support any uint64_t as input. It does
special handling for the "deleted" key, storing the data in the table
itself; do the same for the "freed" key.
Fixes: b38dab101c "util/hash_table: Assert that keys are not reserved pointers"
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The main motivation for this change is API ergonomics: most operations
on dynarrays are really on elements, not on bytes, so it's weird to have
grow and resize as the odd operations out.
The secondary motivation is memory safety. Users of the old byte-oriented
functions would often multiply a number of elements with the element size,
which could overflow, and checking for overflow is tedious.
With this change, we only need to implement the overflow checks once.
The checks are cheap: since eltsize is a compile-time constant and the
functions should be inlined, they only add a single comparison and an
unlikely branch.
v2:
- ensure operations are no-op when allocation fails
- in util_dynarray_clone, call resize_bytes with a compile-time constant element size
v3:
- fix iris, lima, panfrost
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
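For illustration only, a self-contained sketch of the pattern described
above; none of the names below are the actual util_dynarray API. The
element count and element size are multiplied in one place, behind a
single overflow check, instead of by every caller:
  #include <stdint.h>
  #include <stdlib.h>
  struct example_dynarray {
     void *data;
     size_t size;       /* bytes used */
     size_t capacity;   /* bytes allocated */
  };
  static void *
  example_grow_elements(struct example_dynarray *buf,
                        size_t eltsize, size_t nelts)
  {
     /* centralized overflow check for nelts * eltsize */
     if (nelts > 0 && eltsize > (SIZE_MAX - buf->size) / nelts)
        return NULL;
     size_t needed = buf->size + nelts * eltsize;
     if (needed > buf->capacity) {
        size_t newcap = buf->capacity ? buf->capacity * 2 : 64;
        if (newcap < needed)
           newcap = needed;
        void *data = realloc(buf->data, newcap);
        if (!data)
           return NULL;          /* failure leaves the array untouched */
        buf->data = data;
        buf->capacity = newcap;
     }
     void *p = (char *)buf->data + buf->size;
     buf->size = needed;
     return p;
  }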
We're not very good at handling out-of-memory conditions in general, but
this change at least gives the caller the option of handling it gracefully
and without memory leaks.
This happens to fix an error in out-of-memory handling in i965, which has
the following code in brw_bufmgr.c:
  node = util_dynarray_grow(vma_list, sizeof(struct vma_bucket_node));
  if (unlikely(!node))
     return 0ull;
Previously, allocation failure for util_dynarray_grow wouldn't actually
return NULL when the dynarray was previously non-empty.
v2:
- make util_dynarray_ensure_cap a no-op on failure, add MUST_CHECK attribute
- simplify the new capacity calculation: aside from avoiding a useless loop
when newcap is very large, this also avoids an infinite loop when newcap
is larger than 1 << 31
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
We do have native support for perspective division on the load/store
unit, but this is for the future, something ideally we would select
generally, not just for textures. Meanwhile, flipping on projector
lowering works now.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This follows the txb implementation, but requires an adjustment to how
the cont/last flags are set.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This replaces bind_vs/fs_state calls to a unified bind_shader_state
call, removing a great deal of duplicated logic related to variant
selection.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This logic is repeated in a bunch of places and will only grow worse as
we support more job types; collect it.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Paralleling emit_uniform_read, this allows varying reads to be emitted
independent of an honest-to-goodness load vary instruction in the NIR.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
I don't think these are actual structures, just figments from
cargo-culting dumped memory without making any sense of it. Nothing seems
to break if the region is zeroed out, anyway.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
History lesson! In the early days of Panfrost, we had a library
independent of the driver called `panwrap` which would be LD_PRELOAD'ed
into a driver to decode its cmdstream in real-time. When upstreaming
Panfrost, we realized that we would much rather have this decode
functionality maintained in-tree to avoid divergence, but that we could
not upstream panwrap because of its use with the legacy API. So we
instead dumped GPU memory to the filesystem with an out-of-tree panwrap,
and decoded that with the in-tree pandecode module. When we migrated to
the new kernel, we just added support for doing this memory dump
directly from the driver (via a module "pantrace").
This works, but dumping memory every frame is sloooooooooooooow and
error-prone. I figured if we have pandecode in-tree, we might as well
link to it directly in the driver, allowing us to decode Panfrost's
command streams without dumping memory to the filesystem first. This
cleans up the code *substantially* and improves dumping performance by a
HUGE margin. I'm talking "several seconds per frame" to "dumping in
real-time" kind of jump.
Note to users: this removes the environmental option "PANTRACE_BASE".
Instead, for equivalent functionality set "PAN_MESA_DEBUG=trace" and
redirect stdout to the file of your choosing.
This should make debugging Panfrost much more pleasant.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The bpp in the dumb buffer creation request is hardcoded to 32, which is an
incorrect assumption as the caller is free to pick any pipe format. Use the
bpp supplied to us through util_format_get_blocksizebits().
Fixes: 3b176c441b "gallium: Add a dumb drm/kms winsys backed swrast provider"
Signed-off-by: Kevin Strasser <kevin.strasser@intel.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
The number of render backends is 16 but the enabled mask is 0xaaaa.
As noticed by Bas, allowing disabled render backends might break
the OCCLUSION_QUERY packet. We don't use it yet but keep this in
mind.
This fixes dEQP-VK.query_pool.* and dEQP-VK.multiview.*.
Cc: 19.0 19.1 <mesa-stable@lists.freedesktop.org>
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-By: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Special care is needed to ensure that when we have two consecutive
calls with the same grid size, we only bail in the second one if it
either don't need the surface state or the surface state was already
uploaded.
v2: Instead of having a new bool in ice->state to know whether we had
a surface, check whether we have state->ref. (Ken)
Clean up the logic a little bit by adding 'grid_updated' local. (Ken)
Reviewed-by: Sagar Ghuge <sagar.ghuge@intel.com> [v1]
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This implements GLSL disk shader caching for the R300-R500 series of AMD GPUs.
Signed-off-by: Rui Salvaterra <rsalvaterra@gmail.com>
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
This avoids lowering of CS system values by GLSL (configured by state
tracker). In i965 we don't use that lowering, and we also shouldn't
need that in Iris.
Using it causes some unnecessary round trips between values, e.g.:
shader uses gl_LocalInvocationIndex, GLSL rewrites it in terms of
gl_LocalInvocationID, then driver rewrites those in terms of
gl_LocalInvocationIndex again. Copy propagation can make some of
those go away, but not all as seen below.
Intel SKL shader-db results:
total instructions in shared programs: 15595189 -> 15594556 (<.01%)
instructions in affected programs: 74880 -> 74247 (-0.85%)
helped: 81
HURT: 4
helped stats (abs) min: 2 max: 172 x̄: 7.88 x̃: 4
helped stats (rel) min: 0.19% max: 5.66% x̄: 1.71% x̃: 1.23%
HURT stats (abs) min: 1 max: 2 x̄: 1.25 x̃: 1
HURT stats (rel) min: 0.45% max: 1.65% x̄: 0.76% x̃: 0.46%
95% mean confidence interval for instructions value: -11.56 -3.34
95% mean confidence interval for instructions %-change: -1.91% -1.28%
Instructions are helped.
total loops in shared programs: 4831 -> 4831 (0.00%)
loops in affected programs: 0 -> 0
helped: 0
HURT: 0
total cycles in shared programs: 372136618 -> 372145628 (<.01%)
cycles in affected programs: 9218230 -> 9227240 (0.10%)
helped: 131
HURT: 86
helped stats (abs) min: 1 max: 798 x̄: 39.79 x̃: 12
helped stats (rel) min: <.01% max: 6.75% x̄: 0.42% x̃: 0.13%
HURT stats (abs) min: 2 max: 2442 x̄: 165.38 x̃: 6
HURT stats (rel) min: <.01% max: 20.83% x̄: 0.74% x̃: 0.12%
95% mean confidence interval for cycles value: -2.07 85.11
95% mean confidence interval for cycles %-change: -0.22% 0.30%
Inconclusive result (value mean confidence interval includes 0).
total spills in shared programs: 11956 -> 11950 (-0.05%)
spills in affected programs: 77 -> 71 (-7.79%)
helped: 3
HURT: 0
total fills in shared programs: 25619 -> 25549 (-0.27%)
fills in affected programs: 593 -> 523 (-11.80%)
helped: 4
HURT: 0
LOST: 0
GAINED: 0
Total CPU time (seconds): 1695.69 -> 1706.03 (0.61%)
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
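For reference, the round trip described above stems from how the two
built-ins relate in standard GLSL; the lowering expands the index along
these lines and the backend then has to fold it back:
  gl_LocalInvocationIndex =
     gl_LocalInvocationID.z * gl_WorkGroupSize.x * gl_WorkGroupSize.y +
     gl_LocalInvocationID.y * gl_WorkGroupSize.x +
     gl_LocalInvocationID.x;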
Tells whether or not the driver can handle gl_LocalInvocationIndex and
gl_GlobalInvocationID. If not supported (the default), the state tracker
will lower those on behalf of the driver.
v2: Add case to u_screen.c. (Anholt)
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Modern DXVK requires event support [1], but looks like it only
uses vkCmdSetEvent() + vkGetEventStatus(). So we can just
borrow the relevant code from gen8, leaving CmdWaitEvents still
unimplemented.
[1] 8c3900c533
v2: Also move CmdWaitEvents into genX_cmd_buffer.c (Jason)
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The other sources of the bcsel behave like the sources of an and or
other logical operation. However, source zero behaves differently.
It is evaluated as a Boolean, so it needs to be resolved.
No shader-db changes, but the tests mentioned in the bug get a couple
instructions added back.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110857
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Flip the FD_MESA_DEBUG flag to a disable rather than enable, drop the
obsolete comment (and bonus, drop unused softpin debug flag)
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
This is slightly annoying because it *mostly* works.. but we have some
issues to sort out about how to blit z24s8/x24s8/z24x8 with UBWC before
we can enable UBWC by default. For now it is a step forward to at least
enable it for non-z/s while we figure out how to blit z24s8+UBWC.
(The basic issue is that pretending z24s8 is an equivalently sized rgba
format for the purpose of blitting falls apart when UBWC is in the
picture.)
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
No functional change, the registers have the same layout as MRT flags
pitch reg.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
An older blob claims to support UBWC w/ r32ui and r32i, but not r32f.
Results from deqp indicate that it doesn't work with r32ui and r32i.
This *could* also just mean that use as "IBO" (image) is more limited
than as texture, although blob also doesn't seem to bother to try to use
UBWC with images at all, so hard to know for sure.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
We'll need this for a few edge cases, like image/sampler view that uses
a format that UBWC does not support with a resource originally created
in a format that UBWC does support.
NOTE we *could* in some cases do an in-place uncompress. But that has
a couple potential sharp edges:
1) the uncompressed buffer could have different layout, ie. a5xx
with meta and pixel data of layers/levels interleaved.
2) if it comes mid-batch, it would force flush, or somehow fixing
up cmdstream for draws already emitted. But with the resource
shadowing approach we can rely on batch re-ordering to avoid
splitting things.. older draws see the older compressed version,
newer draws see the new uncompressed version of the rsc.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
When uncompressing a UBWC buffer, we don't want to discard anything.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
It doesn't come up yet, as so far we only hit this path with linear
buffers. But it will when we start re-using the shadow path for
uncompressing UBWC buffers.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
To uncompress UBWC, I want to re-use the shadow path, but we'll need a
way to request that the new buffer is not compressed.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
A newly created resource can be regarded as idle. We don't care if
the RESOURCE_CREATE command has been retired, unless it is used for
fencing.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
The round trip to the kernel is expensive. Add a local cache to
avoid it when possible.
There is a race condition when two contexts access the same resource
at the same time (e.g., ctx1 submits a cmdbuf that accesses a
resource while ctx2 maps the resource). But that is probably an app
bug in the first place.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
We should not reuse a resource for other purposes when it can still
be accessed by another process or device.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
This seems to be a performance win, but more rigorous testing is
necessary to figure out the exact circumstances when this is good/bad.
Incidentally, this fixes non-aligned ZS.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
For constant LODs/biases, we can use an immediate embedded in the
texture (already decoded); for non-constant, we have to use a register
squeezed into the usual immediate field, which is decoded here.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It's not at all clear why this works for texelFetch but not texture.
Maybe the top bits are dual-purpose on other texturing ops...?
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This is clear for texelFetch, hence the confusion with Bifrost's filter
field, but it's much more general in reality.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
With an extra flag, we're able to do a perspective division "for free"
while loading a varying.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This patch identifies the two modes of offsets in a texture instruction
(immediate and register, disambiguated by the bit-once-known-as
"has_offset") and implements disassembly for both.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This eliminates some unknowns, clarifies 3D textures, and will maybe
help with array/shadow textures?
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
According to the Vulkan spec, inline uniform blocks are not allowed
to be updated through vkCmdPushDescriptorSetKHR().
These are the spec quotes from "13.2.1. Descriptor Set Layout"
that are relevant for this case:
"VK_DESCRIPTOR_SET_LAYOUT_CREATE_PUSH_DESCRIPTOR_BIT_KHR specifies
that descriptor sets must not be allocated using this layout, and
descriptors are instead pushed by vkCmdPushDescriptorSetKHR."
"If flags contains
VK_DESCRIPTOR_SET_LAYOUT_CREATE_PUSH_DESCRIPTOR_BIT_KHR, then all
elements of pBindings must not have a descriptorType of
VK_DESCRIPTOR_TYPE_INLINE_UNIFORM_BLOCK_EXT".
There is no explicit mention in vkCmdPushDescriptorSetKHR() to forbid
this case but it is implied in the creation of the descriptor set
layout as aforementioned.
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
According to the Vulkan spec, inline uniform blocks are not allowed
to be updated through vkCmdPushDescriptorSetKHR().
These are the spec quotes from "13.2.1. Descriptor Set Layout"
that are relevant for this case:
"VK_DESCRIPTOR_SET_LAYOUT_CREATE_PUSH_DESCRIPTOR_BIT_KHR specifies
that descriptor sets must not be allocated using this layout, and
descriptors are instead pushed by vkCmdPushDescriptorSetKHR."
"If flags contains
VK_DESCRIPTOR_SET_LAYOUT_CREATE_PUSH_DESCRIPTOR_BIT_KHR, then all
elements of pBindings must not have a descriptorType of
VK_DESCRIPTOR_TYPE_INLINE_UNIFORM_BLOCK_EXT".
There is no explicit mention in vkCmdPushDescriptorSetKHR() to forbid
this case but it is implied in the creation of the descriptor set
layout as aforementioned.
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
`memcmp()` compares a given number of bytes, but `EGLAttrib` is larger than a byte.
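A minimal illustration of the fix, assuming a comparison helper of this
shape (the real code compares the attrib list stored in the display):
#include <stdbool.h>
#include <stddef.h>
#include <string.h>
#include <EGL/egl.h>
/* The byte count passed to memcmp must be scaled by sizeof(EGLAttrib);
 * passing just the number of attribs compares only the first few bytes. */
static bool
attribs_equal(const EGLAttrib *a, const EGLAttrib *b, size_t num_attribs)
{
   return memcmp(a, b, num_attribs * sizeof(EGLAttrib)) == 0;
}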
Fixes: 8e991ce539 "egl: handle the full attrib list in display::options"
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
The number of elements to draw should not be affected by the offset.
A similar fix was submitted for a6xx at 79180a05.
Fixes these dEQP tests on a5xx:
dEQP-GLES31.functional.draw_indirect.compute_interop.large.drawelements_separate_grid_500x500_drawcount_8
dEQP-GLES31.functional.draw_indirect.compute_interop.large.drawelements_separate_grid_500x500_drawcount_2500
dEQP-GLES31.functional.draw_indirect.compute_interop.large.drawarrays_separate_grid_500x500_drawcount_2500
dEQP-GLES31.functional.draw_indirect.compute_interop.large.drawarrays_combined_grid_500x500_drawcount_2500
dEQP-GLES31.functional.draw_indirect.compute_interop.large.drawelements_combined_grid_500x500_drawcount_8
dEQP-GLES31.functional.draw_indirect.compute_interop.large.drawelements_combined_grid_500x500_drawcount_2500
Reviewed-by: Rob Clark <robdclark@gmail.com>
When decompressing resolve source images, we should rely on the
framebuffer layer count instead of resolving all images layers.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-By: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Use an explicit pipeline barrier for doing layout transitions
instead of duplicating some code.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-By: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
When resolving inside a subpass, we should rely on the framebuffer
layer count instead of resolving all images layers. This should
improve performance of layered resolves a bit.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-By: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
I was watching a presentation on YT where this was used, and it turns
out it is not invalid.
The only case where it is actually valid as a format in the creation of an
image or image view is with Android Hardware Buffers, which have
their format specified externally.
So we can just ignore all entries with VK_FORMAT_UNDEFINED.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
On 32-bit builds uintptr_t is 32 bits wide, and shifting it by 32 bits
results in undefined behavior IIRC.
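Illustrative only (the exact expression in the trace code may differ):
widening before the shift keeps the behavior defined on 32-bit builds.
#include <stdint.h>
static uint64_t
ptr_shifted_hi(void *ptr)
{
   /* (uintptr_t)ptr << 32 is UB when uintptr_t is only 32 bits wide;
    * cast up to uint64_t first, then shift. */
   return (uint64_t)(uintptr_t)ptr << 32;
}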
Fixes: b3c8de1c55 "radv: save all descriptor pointers into the trace BO"
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
With this commit all remaining compilation tests in Piglit for
ARB_fragment_shader_interlock will pass.
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Plamena Manolova <plamena.manolova@intel.com>
Generalize the barrier code to provide correct error messages for
other builtins.
Fixes most of piglit compilation tests for
ARB_fragment_shader_interlock.
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Plamena Manolova <plamena.manolova@intel.com>
The rules added in patch 3addd7c are inverted:
It should be:
(al * bh) << 16 + c
instead of:
(ah * bl) << 16 + c
Fixes a number of regressions under
dEQP-GLES31.functional.draw_indirect.compute_interop.large.*
on Freedreno.
Reviewed-by: Rob Clark <robdclark@gmail.com>
We postfix instructions by their size if a destination override is in
place (a la AT&T assembly), disambiguating instruction sizes.
Previously, "16-bit instruction, 16-bit dest, 16-bit sources"
disassembled identically to "32-bit instruction, 16-bit dest, 16-bit
sources", which is semantically distinct due to the lessened opportunity
for parallelism but (potentially) greater precision. Adding a postfix
removes the ambiguity and relieves mental gymnastics reading weird
disassemblies even in some cases that are not ambiguous.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Midgard ALUs can operate in one of four modes: vec2 64-bit, vec4 32-bit,
vec8 16-bit, or vec16 8-bit. Our compiler (and indeed, any OpenGL ES
shader) only uses 32-bit (and eventually vec4 16-bit) modes in normal
circumstances. Nevertheless, the other modes do exist and are easily
accessible through OpenCL; they also come up in cases like blend
shaders.
While we have had minimal support for decoding 8-bit/64-bit modes, we
did so pretending they were vec4 in each case; 16-bit registers had a
synthetically duplicated register file to separate lo/hi halves, etc.
This works for GL, but it doesn't map to what the hardware is -actually-
doing, which can cause some headscratchingly bizarre disassemblies from
OpenCL. So, we dive in the deep end and support these other modes
natively in the disassembler, using absurdly long masks/swizzles, since
the hardware is considerably more flexible than what was exposed before.
Outside of some fixed routines for blending, none of the above is
supported in the compiler yet. But it's better to have it in the ISA
definitions and disassembler than not, for future use if nothing else.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
As a source modifier, shift allows shifting a value left by the bit
size, useful in conjunction with a greater register mode, for instance
to implement `upsample`. As a concrete example, the following OpenCL:
ushort hr0 = /* ... */; uint b = /* ... */;
uint r = (convert_uint(hr0) << 16) ^ b;
compiles to the following Midgard assembly:
ixor r, (hr0) << 16, b
In reverse, the ".hi" output modifier shifts the value right by the bit
size, leaving just the carry/overflow at the bottom. To implement *_hi
functions in OpenCL (for <64-bit), we do arithmetic in the 2x higher
mode with the .hi modifier. (For 64-bit, things are hairier, since there
is no 128-bit int mode).
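For reference, the OpenCL semantics being targeted look roughly like this
in C (not Midgard code, just the intended results):
#include <stdint.h>
/* upsample(hi, lo): what the "<< 16" source modifier feeding an ior gives. */
static uint32_t
upsample_u16(uint16_t hi, uint16_t lo)
{
   return ((uint32_t)hi << 16) | lo;
}
/* mul_hi(a, b): do the multiply in the 2x wider mode, keep the ".hi" part. */
static uint32_t
mul_hi_u32(uint32_t a, uint32_t b)
{
   return (uint32_t)(((uint64_t)a * b) >> 32);
}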
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
For floats, output modifiers determine clamping behaviour. For integers,
they determine wrapping/saturation behaviour (or shifting -- see next
commit). These are very different; they are conceptually two unrelated
enums union'ed together; the distinction is responsible for many-a-bug.
While clamping behaviour for floats was clear from GL, the int behaviour
is only known from OpenCL contortions with convert_*_sat() functions.
With the underlying functions known, clean up the codebase, likely
fixing outmod type related bugs in the process.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
OP_TYPE_CONVERTS denotes an opcode that returns a different type than its
source (going from int-domain to float-domain or vice versa), named
after the f2i/i2f family of opcodes it covers. We care because source
mods are determined by the source type (i/f) but output modifiers are
determined by the output type (equals the source type, unless the op
type converts, in which case it's the opposite).
The upshot is that floating-point compares (feq/fne/etc) actually do
type-convert. That is, they take in floating-point values and output in
integer space (a boolean), so we mark them off this way to ensure the
correct output modifiers are used.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
It's just -easier- to render to aligned framebuffers. For winsys
targets, we already align, but even for an internal linear FBO we ought
to align everything nicely.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Now that we support custom strides on mipmapped textures (theoretically,
at least), extend the stride check to support mipmaps. Fixes incorrect
strides of linear windows in Weston.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
We move some code packing the texture/sampler descriptors into
dedicated functions (out of the terrifyingly long emit_for_draw
monolith), cleaning them up as we go.
The discovery triggering the cleanup is the format for including manual
strides in the presence of mipmaps/cubemaps. Rather than placed at the
end like previously assumed, they are interleaved after each address.
This difference is relevant when handling NPOT linear mipmaps.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We refactor the wallpaper rendering code to separate the
wallpaper-specific bits from the general blitting capabilities. In the
(hopefully near) future, we'll turn this on to implement real Gallium
blits, e.g. for automatic mipmap generation.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This patch does a substantial cleanup of the code for handling AFBC,
moving various disparate misplaced functions into a new central
pan_afbc.c file.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
The title of the release notes says 19.0.5 while the rest of the file
(correctly) says 19.0.6
Fixes: fe79d75ccf ("docs: Add relnotes for 19.0.6")
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Dylan Baker <dylan at pnwbakers.com>
Earlier commit converted ES1 and ES2 to a new, much simpler, dispatch
generator. At the same time, GL/glapi and the driver side are still
using the old code.
There is a hidden ABI between GL*.so and glapi.so, the former referencing
entry-points by offset in the _glapi_table. Hence an earlier commit added
the full table of entry-points, alongside a marker for other cases like
indirect GL(X) and driver-size remapping.
Yet the patches did not handle things fully, thus it was possible to
get different interpretations of the dispatch table after the marker.
This commit fixes that by adding an indicative error message to catch
future bugs.
While here correct the marker (MAX_OFFSETS) comment.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110302
Fixes: cf317bf093 ("mapi: add all _glapi_table entrypoints to static_data.py")
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
As elaborated in the next patch, there is some hidden ABI that
effectively requires most entrypoints to be listed in the file.
Cc: Marek Olšák <marek.olsak@amd.com>
Fixes: d2906293c4 ("mesa: EXT_dsa add selectorless matrix stack functions")
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
In the call arguments to dri2_create_drawable decouple loaderPrivate
from dri2_surf. For all callers of dri2_create_drawable the two
pointers are the same with the exception of the gbm backed platform.
Let the calling code of dri2_create_drawable decide what
loaderPrivate shall be.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
When alphaToCoverage is enabled, we should always write the alpha
channel of MRT0 if it's unused. This now matches RadeonSI.
This fixes the new CTS:
dEQP-VK.pipeline.multisample.alpha_to_coverage_unused_attachment.samples_*.alpha_invisible
Cc: 19.0 19.1 <mesa-stable@lists.freedesktop.org>
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-By: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Use the infrastructure in wayland/ci-templates to build the container
images.
This prevents us from getting into situations in which the images
wouldn't be rebuilt, and allows us to share some infrastructure with
other projects in freedesktop.org.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Suggested-by: Michel Dänzer <michel@daenzer.net>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We were dropping " and ' characters around arguments grouped together.
This was triggering failures with:
$ ./framemetrics -g "Memory Writes Distribution Gen9" -o /tmp/output.csv -f ./my.trace 10 11
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Since we don't normally flush before performing copy transfers, it's
possible in some scenarios to use too much memory for staging resources
and start failing. This can happen either because we exhaust the total
available memory (including system memory virtio-gpu swaps out to), or,
more commonly, because the total size of resources in a command buffer
doesn't fit in virtio-gpu video memory.
To reduce the chances of this happening, force a flush before a copy
transfer if the total size of queued staging resources exceeds a certain
limit. Since after a flush any queued staging resources will be
eventually released, this ensures both that each command buffer doesn't
require too much video memory, and that we don't end up consuming too
much memory for staging resources in total.
Fixes kernel errors reported when running texture_upload tests in glbench.
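A rough sketch of the heuristic follows; the function, field names and the
8 MiB threshold are all made up for illustration, not the driver's real ones.
#include <stddef.h>
struct staging_accounting {
   size_t queued_staging_bytes;
};
static void
account_staging_and_maybe_flush(struct staging_accounting *acct,
                                size_t staging_size, void (*flush)(void))
{
   const size_t limit = 8 * 1024 * 1024;
   if (acct->queued_staging_bytes + staging_size > limit) {
      flush();                        /* queued staging gets released */
      acct->queued_staging_bytes = 0;
   }
   acct->queued_staging_bytes += staging_size;
}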
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
Now that we have copy transfers in place, we can remove the incorrect
resource wait condition. Copy transfers and other optimizations minimize
the performance impact of this removal, while providing the correct
behavior.
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
Extend copy transfers to also be used for busy textures.
Performance results:
Unigine Valley, qemu before: 22.7 FPS after: 23.1 FPS
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
We typically need to wait for a buffer to become ready before mapping,
so that we don't write new contents while the host is still using the
old contents. However, if we are allowed to discard the contents of the
mapped buffer range, then we can avoid waiting by using a staging buffer
range which we guarantee to never be busy, copying from the staging
buffer range to the target buffer in the host.
This commit implements this optimization by utilizing a dedicated
u_upload_mgr for the staging buffer.
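A rough sketch of the map path, assuming Gallium's u_upload_mgr helpers;
the uploader field name and the surrounding driver plumbing are assumptions:
unsigned offset;
struct pipe_resource *staging = NULL;
void *ptr = NULL;
/* Slice off a piece of a staging buffer that is guaranteed not to be busy,
 * instead of waiting for the destination buffer to go idle. */
u_upload_alloc(vctx->transfer_uploader, 0, size, 256,
               &offset, &staging, &ptr);
/* The caller writes the new data through `ptr`; a copy transfer from
 * (staging, offset) into the real buffer is then emitted. */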
Performance results:
Twilight Struggle (Steam/Proton), qemu before: 7 FPS after: 25 FPS
glmark2 ubo, qemu before: 38 FPS after: 331 FPS
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Suggested-by: Gurchetan Singh <gurchetansingh@chromium.org>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
Support transfers that use a different resource as the source of data to
transfer. This will be used in upcoming commits to send data to host
buffers through a transfer upload buffer, in order to avoid waiting
when the buffer resource is busy.
Note that we don't support queueing copy transfers in the transfer
queue. Copy transfers should be emitted directly in the command queue,
allowing us to avoid flushes before them and leading to better
performance.
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
Introduce definitions for the copy_transfer3d protocol command and virgl
capability. This command transfers data to the host by copying through
another resource, and will be used in upcoming commits to avoid waiting
when transferring data for busy resources.
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
This could help performance when trying to recreate such resources for
copy transfers.
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
Support a new virgl bind type for staging buffers which don't require
dedicated host-side storage. These will be used to implement copy
transfers.
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
If we are not allowed to block, and we know that we will have to wait,
either because the resource is busy, or because it will become busy due
to a readback, return early to avoid performing an incomplete
transfer_get. Such an incomplete transfer_get may finish at any time,
during which another unsynchronized map could write to the resource
contents, leaving the contents in an undefined state.
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Suggested-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
Also fixes a missed check for VIRGL_BIND_CUSTOM in one of the duplicate
code snippets.
Note that legacy fences also use VIRGL_BIND_CUSTOM, but we ensured they
don't go through the cache in the previous commit.
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
Resources for fences should not be from the cache, since we are basing
the fence status on the resource creation busy status.
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
We will need the full info. This also speeds up
virgl_attach_res_atomic_buffers and fixes resource leaks when the
context is destroyed.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
virgl_shader_binding_state will be used to manage all per-stage
shader bindings. For now, it manages only sampler views.
This replaces virgl_textures_info and fixes some issues:
- start_slot is now honored
- views outside of [start_slot, start_slot+count) are unmodified
- views are released when the context is destroyed
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Fixes valgrind errors when running two CTS tests back to back:
- KHR-GL45.shader_image_load_store.basic-allTargets-loadStoreT*
(The first test has an actual TCS, the second uses passthrough.)
This restriction was accidentally added to the BSpec/PRM as an
unrestricted restriction starting with the HSW docs and it was never
removed. However, it only ever applied to HSW and actually potentially
causes problems on BDW and above where we have mipmapped fast-clears.
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
Split out a separate program config state group to run early before the
other groups.
This seems to help w/ intermittent "missed tiles" (although I had
assumed that was a mem2gmem issue), or at least I can't reproduce that
issue with this patch, but can without.
It has the benefit of HLSQ_VS_CNTL.CONSTLEN matching for VS and BS.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
With the newer (v1.76) fw, we were getting hangs (compared to older
v1.66 fw). Re-work the GMEM code to structure things a bit closer to
the blob. This moves some PKT7 packets from IB2 to IB1, which I think
is what was confusing SQE and causing it to get stuck in an infinite
loop. But in general structuring things at least closer to the same way
blob does makes it easier to compare cmdstream.
Note: this is a bit on the large side for what I'd normally consider for
stable... but right now it is looking like it is the newer fw that is
headed for linux-firmware. This should definitely have some soak time on
master, but probably a good idea for this patch to end up in distro mesa
builds by the time a630_sqe.fw hits linux-firmware.
Cc: mesa-stable@lists.freedesktop.org
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
This seems to be in a block of non-buffered/context regs. The blob always
WFIs before write, so probably a good idea.
Annoyingly, compared to earlier gens, it is a bit harder to tell from the
register offset whether it is a buffered reg, it isn't as simple as
everything below 0x2000, it seems.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
In some cases the draw for the text wasn't working. This seems to be
fixed by resyncing some of the "golden registers" from the blob (initial
values were based on a somewhat older blob version).
Perhaps good to have a bit of soak time on master, but would be good
to eventually land in 19.x stable branches.
Cc: mesa-stable@lists.freedesktop.org
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
On CNL+, the clear color struct is composed of RGBA channel values and
fields which are either reserved by the HW or used to control
fast-clears. Currently anv initializes the channel values to zero and
allows the other fields to be undefined.
Satisfy the MBZ field requirements by removing an optimization that
doesn't hold true for CNL+ and pulling in the number of dwords to
initialize from ISL.
Cc: <mesa-stable@lists.freedesktop.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fix compilation where the DWORD type is used with a format, after
-Werror-format added by c9c1e261.
Some Win32 API types are different fundamental types in the 32-bit and
64-bit versions. This problem is then further compounded by the fact
that whilst both 32-bit Cygwin and 32-bit MinGW use the ILP32 data
model, 64-bit MinGW uses the LLP64 data model, but 64-bit Cygwin uses
the LP64 data model. This makes it near impossible to write printf
format specifiers which are correct for all those targets.
In the Win32 API, DWORD is an unsigned, 32-bit type. So, it is defined
in terms of an unsigned long, except in the LP64 data model used by
64-bit Cygwin, where it is an unsigned int.
It should always be safe to cast it to unsigned int and use %u or %x.
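For instance (illustrative only, not the exact line from the patch):
DWORD flags = 0x1234;
printf("flags = 0x%x\n", (unsigned int)flags);   /* correct on every data model */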
Reviewed-by: Eric Anholt <eric@anholt.net>
I recently discovered that the following code led to valgrind errors:
struct isl_swizzle swizzle = ISL_SWIZZLE_IDENTITY;
VALGRIND_CHECK_MEM_IS_DEFINED(&swizzle, sizeof(swizzle));
which is surprising, because struct isl_swizzle is simply:
struct isl_swizzle {
enum isl_channel_select r:4;
enum isl_channel_select g:4;
enum isl_channel_select b:4;
enum isl_channel_select a:4;
};
and the above code initializes all of them with a C99 initializer.
Iván Briano reminded me that C99 initializers don't necessarily zero
padding. A quick inspection revealed that sizeof(struct isl_swizzle)
was 4 (rather than the expected 2). Ian Romanick suggested changing
it to uint16_t, since this is essentially dicing up an unsigned, and
that worked.
This patch marks enum isl_channel_select packed, changing its size
from 4 bytes to 1 byte. This then makes struct isl_swizzle 2 bytes,
with no bogus padding fields. This eliminates valgrind undefined
memory warnings.
These isl_swizzle values become part of our BLORP blit program keys,
which are then hashed. This undefined padding was being included in
the hashing, possibly leading to issues. I originally saw this error
when running KHR-GL45.texture_size_promotion.functional in iris under
valgrind.
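A self-contained sketch of the effect (not the real isl headers; sizes are
what GCC produces):
#include <stdio.h>
enum channel_select {
   SEL_ZERO, SEL_ONE, SEL_RED, SEL_GREEN, SEL_BLUE, SEL_ALPHA,
} __attribute__((packed));
struct swizzle {
   enum channel_select r:4;
   enum channel_select g:4;
   enum channel_select b:4;
   enum channel_select a:4;
};
int main(void)
{
   /* With the packed attribute the enum is 1 byte, so the four 4-bit
    * fields pack into 2 bytes with no undefined padding; without it,
    * GCC makes the struct 4 bytes. */
   printf("sizeof(struct swizzle) = %zu\n", sizeof(struct swizzle));
   return 0;
}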
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
We were previously lowering to inand, but the second arg was not
duplicated so inot would always return ~0. Oops.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This uses the new mesa/st functionality for NIR I/O vectorization, which
eliminates a number of corner cases (resulting in assorted dEQP
failures and regressions) and should improve performance substantially due
to lessened pressure on the load/store pipe.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This pass interfered with the more delicate path required for
non-vectorized I/O. It's also ugly and duplicates the job of an actual
honest-to-goodness scheduler.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This now boils down to just picking between binning or vertex shader
and dummy_fs or real fs, which we can do in a couple of lines of code
instead. The constlen logic isn't doing what it thinks it's doing,
both constlens at this point
MAX2(s[VS].constlen, align(state->bs->constlen, 4));
are binning shader constlens. We'll have to revisit the constlen
logic, but this commit doesn't change how it works.
Reviewed-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
We have a similar function in fd6_program.c. Move to fd6_emit.h and
share.
Reviewed-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
There's already a bit of duplicated logic here and tessellation will
add more. Build up dword 0 in fd6_draw_vbo() and drop the a4xx in the
process.
Reviewed-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
From the Vulkan spec 1.1.109:
"Some implementations may need to evaluate depth image values
while performing image layout transitions. To accommodate this,
instances of the VkSampleLocationsInfoEXT structure can be
specified for each situation where an explicit or automatic
layout transition has to take place. [...] and
VkRenderPassSampleLocationsBeginInfoEXT can be chained from
VkRenderPassBeginInfo to provide sample locations for layout
transitions performed implicitly by a render pass instance."
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-By: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
From the Vulkan spec 1.1.109,
"Some implementations may need to evaluate depth image values
while performing image layout transitions. To accommodate this,
instances of the VkSampleLocationsInfoEXT structure can be
specified for each situation where an explicit or automatic
layout transition has to take place. VkSampleLocationsInfoEXT
can be chained from VkImageMemoryBarrier structures to provide
sample locations for layout transitions performed by
vkCmdWaitEvents and vkCmdPipelineBarrier calls."
This handles explicit depth/stencil layout transitions performed
with CmdWaitEvents() or CmdPipelineBarrier().
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-By: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
If VK_EXT_sample_locations is used, the driver might need to emit
the sample locations specified during layout transitions.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-By: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This will be used for the depth decompress pass that might need
to emit variable sample locations during layout transitions.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-By: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
We run a ton of backend specific passes here (mostly brw_preprocess_nir)
and ought to sweep up any unused memory at this point, since we're going
to hang on to this NIR for as long as the linked program lives.
Currently, ir3 backend compiler is lowering integer multiplication from:
dst = a * b
to:
dst = (al * bl) + (ah * bl << 16) + (al * bh << 16)
by emitting this code:
mull.u tmp0, a, b ; mul low, i.e. al * bl
madsh.m16 tmp1, a, b, tmp0 ; mul-add shift high mix, i.e. ah * bl << 16
madsh.m16 dst, b, a, tmp1 ; i.e. al * bh << 16
which at that point has very low chances of being optimized.
This patch adds a new nir_algebraic.AlgebraicPass to performs this
lowering during NIR algebraic optimization passes, giving it a better
chance for optimizing the resulting code.
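For reference, the decomposition in plain C (the same math as the
three-instruction sequence above; the low 32 bits match a full multiply):
#include <stdint.h>
#include <stdio.h>
static uint32_t
mul32_lowered(uint32_t a, uint32_t b)
{
   uint32_t al = a & 0xffff, ah = a >> 16;
   uint32_t bl = b & 0xffff, bh = b >> 16;
   uint32_t t0 = al * bl;                  /* mull.u    */
   uint32_t t1 = t0 + ((ah * bl) << 16);   /* madsh.m16 */
   return t1 + ((al * bh) << 16);          /* madsh.m16 */
}
int main(void)
{
   printf("%u\n", mul32_lowered(123456789u, 987654321u));
   printf("%u\n", 123456789u * 987654321u);   /* same low 32 bits */
   return 0;
}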
Reviewed-by: Eric Anholt <eric@anholt.net>
For umul_low (al * bl), zero is returned if the low 16-bits word of either
source is zero.
For imadsh_mix16 (ah * bl << 16 + c), c is returned if either 'ah' or 'bl'
is zero.
A couple of nir_search_helpers are added:
is_upper_half_zero() returns true if the highest word of all components of
an integer NIR alu src are zero.
is_lower_half_zero() returns true if the lowest word of all components of
an integer nir alu src are zero.
Reviewed-by: Eric Anholt <eric@anholt.net>
'umul_low' is the low 32-bits of unsigned integer multiply. It maps
directly to ir3's MULL_U.
'imadsh_mix16' is multiply add with shift and mix, an ir3 specific
instruction that maps directly to ir3's IMADSH_M16.
Both are necessary for the lowering of integer multiplication on
Freedreno, which will be introduced later in this series.
Reviewed-by: Eric Anholt <eric@anholt.net>
Commit 2282ec0a refactored drawable creation across various platforms
into a new dri2_create_drawable helper function.
The GBM code in platform_drm.c code passed in dri2_surf->gbm_surf as the
loaderPrivate, while most other backends passed in dri2_surf directly.
To try and handle this, the patch checked if dri2_surf->gbm_surf was
non-NULL, and if so, presumed that the caller is the DRM platform and
we should use the dri2_surf->gbm_surf pointer.
This worked for most platforms, which calloc their dri2_surf structure,
zeroing the data. Unfortunately, platform_x11.c used malloc, leaving
most of the dri2_surf as garbage. In particular, dri2_surf->gbm_surf
was often non-NULL, causing dri2_create_drawable to try and use it,
passing a garbage pointer to the createNewDrawable hook, usually leading
to a SIGBUS or SIGSEGV when trying to dereference that bad pointer.
Since most callers calloc the data, make platform_x11.c follow suit.
Fixes crashes with i915_dri.so when running dEQP-GLES2.
Reviewed-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Fixes formatting errors for 32 bit compilations, eg:
error: format specifies type 'unsigned long' but the argument has
type 'uint64_t' (aka 'unsigned long long') [-Werror,-Wformat]
printf("result1 = %lu result2 = %lu\n", res1.u64, res2.u64);
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Crash introduced in "b38dab101ca7e0896255dccbd85fd510c47d84d1" but not
adding a Fixes tag since it's our bug anyway.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
HTML has the <p>-tag for this purpose. It adds some margins, but that
just makes this read better, IMO.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
This reads better if we include the asterisk in the code-block, as it's
part of the function-reference, even though it's not technically
speaking code. But as the <code>-tag isn't purely for code, this should
be fine.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Looks like I missed a few cases when I recently added more code-tags
here. So let's add these cases as well.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
When rewriting 20c56e18c2 after review, I accidentally dropped the "at"
here. Sorry for that, and let's fix it up!
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Fixes: 20c56e18c2 ("docs: use proper links instead of code-tags")
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
AHARDWAREBUFFER_FORMAT_Y8Cb8Cr8_420 is an implementation defined
flexible YUV format. Most of the time, it's NV12 or YV12.
On Intel, NV12 is preferred since it can be used by the display
engine.
This API adds a dependency between gralloc and buffer consumers,
unfortunately. Right now, the code seems to work for i915 gralloc,
but not cros_gralloc. Add a preprocessor flag to fix this.
TEST=android.graphics.cts.MediaVulkanGpuTest#testMediaImportAndRendering
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
When e9d935ed0e added force_dcc_off(), we forced it off for any
preloaded image descriptor which had stores associated with them, since
the same preloaded descriptors were used for loads and stores. However,
when the preloading was removed in 16be87c904, the existing logic was
kept despite it not being necessary anymore. The comment above
force_dcc_off() only mentions stores, so only force DCC off for stores.
Cc: Nicolai Hähnle <nicolai.haehnle@amd.com>
Cc: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Depending on whether compiled with frame-pointer or not, the temporary
memory location used for the bp parameter in these macros is referenced
relative to the stack pointer or the frame pointer.
Hence we can never reference that parameter when we've modified either
the stack pointer or the frame pointer, because then the compiler would
generate an incorrect stack reference.
Fix this by pushing the temporary memory parameter on a known location on
the stack before modifying the stack- and frame pointers.
Also, in case of failure, the RPCI channel is not closed, which leads to the
vmx running out of channels.
Cc: mesa-stable@lists.freedesktop.org
Signed-off-by: Deepak Rawat <drawat@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
This might fix initial subpass transitions when multiview is used.
Noticed while implementing sample locations during layout transitions.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-By: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Checking the value returned by isl_fmt in the assert seems appropriate
instead of the format variable.
Fixes: f1654fa7e3 "anv/android: support creating images from external format"
Signed-off-by: Nataraj Deshpande <nataraj.deshpande@intel.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Sagar Ghuge <sagar.ghuge@intel.com>
We were not accounting for small immediates in the B mux, so the scheduler
was interpreting these as regular register file accesses, which could
lead to additional (incorrect) write-read dependencies.
Shader-db changes:
total instructions in shared programs: 9163664 -> 9137263 (-0.29%)
instructions in affected programs: 3931035 -> 3904634 (-0.67%)
helped: 12457
HURT: 2563
total max-temps in shared programs: 1325787 -> 1325597 (-0.01%)
max-temps in affected programs: 5746 -> 5556 (-3.31%)
helped: 186
HURT: 16
helped stats (abs) min: 1 max: 4 x̄: 1.12 x̃: 1
helped stats (rel) min: 1.45% max: 22.22% x̄: 4.42% x̃: 3.28%
HURT stats (abs) min: 1 max: 3 x̄: 1.12 x̃: 1
HURT stats (rel) min: 2.86% max: 10.00% x̄: 5.76% x̃: 5.88%
95% mean confidence interval for max-temps value: -1.04 -0.84
95% mean confidence interval for max-temps %-change: -4.16% -3.07%
Max-temps are helped.
Reviewed-by: Eric Anholt <eric@anholt.net>
A program may need no regalloc at all, e.g. when it consists
of a single discard op.
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
If we insert a NULL key, it will appear to succeed but will mess up
entry counting. Similar errors can occur if someone accidentally
inserts the deleted key. The latter is highly unlikely but technically
possible so we should guard against it too.
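Roughly the guard being added (assuming the util hash table's deleted_key
field; the real patch may assert slightly differently):
assert(key != NULL);             /* NULL is the "free entry" marker */
assert(key != ht->deleted_key);  /* reserved "deleted entry" marker */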
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
If we insert a NULL key, it will appear to succeed but will mess up
entry counting. Similar errors can occur if someone accidentally
inserts the deleted key. The latter is highly unlikely but technically
possible so we should guard against it too.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This is the same as the need_dest parameter to
prepare_alu_destination_and_sources. This allows us to not change the
register that is expected to hold a result if an instruction is
re-emitted. This is particularly a problem if the re-emitted
instruction is a partial write. A later patch will use this feature.
No shader-db changes on any Intel platform.
v2: Don't do the Boolean resolve when there is no destination. If the
ALU instruction didn't write a register, there's nothing to resolve.
This replaces an earlier patch "intel/fs: Allocate dummy destination
register when need_dest is false".
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
There were two errors. First, the pass could propagate conditional
modifiers from an instruction that writes one flag register to an
instruction that writes a different flag register. For example,
cmp.nz.f0.0(16) null:F, vgrf6:F, vgrf5:F
cmp.nz.f0.1(16) null:F, vgrf6:F, vgrf5:F
could become
cmp.nz.f0.0(16) null:F, vgrf6:F, vgrf5:F
Second, if an instruction that writes f0.1 has its condition propagated, the
modified instruction will incorrectly write flag f0.0. For example,
linterp(16) vgrf6:F, g2:F, attr0:F
cmp.z.f0.1(16) null:F, vgrf6:F, vgrf5:F
(-f0.1) discard_jump(16) (null):UD
could become
linterp.z.f0.0(16) vgrf6:F, g2:F, attr0:F
(-f0.1) discard_jump(16) (null):UD
None of these cases will occur currently. The only time we use f0.1 is
for generating discard intrinsics. In all those cases, we generate a
sequence like:
cmp.nz.f0.0(16) vgrf7:F, vgrf6:F, vgrf5:F
(+f0.1) cmp.z(16) null:D, vgrf7:D, 0d
(-f0.1) discard_jump(16) (null):UD
Due to the mixed types and incompatible conditions, this sequence would
never see any cmod propagation. The next patch will change this.
No shader-db changes on any Intel platform.
v2: Fix typo in comment in test case subtract_delete_compare_other_flag.
Noticed by Caio.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Tests like this should have been added in 4467040cb6 ("i965/fs:
Propagate conditional modifiers from not instructions").
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
This patch makes blorp_blit handle SINT<->UINT blit value clamping.
After reading the source's integer data (which is expanded to 32-bit),
we either IMAX with 0 (for SINT -> UINT, to clamp negative numbers) or
UMIN with (1 << 31) - 1 (for UINT -> SINT, to clamp positive numbers
outside of the representable range).
Such blits are not allowed by the OpenGL or Vulkan APIs directly:
The Vulkan 1.1 spec for vkCmdBlitImage says:
"Integer formats can only be converted to other integer formats with
the same signedness."
The GL 4.5 spec for glBlitFramebuffer says:
"An INVALID_OPERATION error is generated if format conversions are
not supported, which occurs under any of the following conditions:
[...]
* The read buffer contains unsigned integer values and any draw
buffer does not contain unsigned integer values.
* The read buffer contains signed integer values and any draw buffer
does not contain signed integer values."
However, they are useful for other operations, such as texture upload
and download, which typically are implemented via blorp_blit(). i965
has code to fall back in this case (which the next commit will delete),
and Gallium expects blit() to handle this case for texture upload.
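For reference, the semantics of the two clamps described above, in plain C:
#include <stdint.h>
#include <stdio.h>
static uint32_t
sint_to_uint_clamp(int32_t v)
{
   return v < 0 ? 0u : (uint32_t)v;                 /* IMAX with 0 */
}
static int32_t
uint_to_sint_clamp(uint32_t v)
{
   return v > INT32_MAX ? INT32_MAX : (int32_t)v;   /* UMIN with (1 << 31) - 1 */
}
int main(void)
{
   printf("%u %d\n", sint_to_uint_clamp(-5), uint_to_sint_clamp(0xffffffffu));
   /* prints "0 2147483647" */
   return 0;
}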
Fixes the following tests on iris:
- GTF-GL46.gtf32.GL3Tests.packed_pixels.packed_pixels
- GTF-GL46.gtf32.GL3Tests.packed_pixels.packed_pixels_pbo
- GTF-GL46.gtf32.GL3Tests.packed_pixels.packed_pixels_pixelstore
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Both GLSL IR and NIR perform the same mod -> floor lowering for 32-bit
types. But nir_lower_double_ops is slightly more defensive against
lowered drcp precision loss, and handles mod(x, x) = 0 directly. This
works well...assuming nir_lower_double_ops actually gets an fmod op to
lower in the first place.
The previous patches enabled NIR-based lowering for the remaining
drivers, so we can stop using the GLSL IR lowering when using NIR.
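For reference, the lowering both IRs perform is mod(x, y) = x - y * floor(x / y);
a trivial C rendering of that identity:
#include <math.h>
#include <stdio.h>
static double
mod_via_floor(double x, double y)
{
   return x - y * floor(x / y);
}
int main(void)
{
   printf("%f\n", mod_via_floor(5.3, 2.0));   /* 1.300000 */
   return 0;
}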
Fixes KHR-GL45.gpu_shader_fp64.builtin.mod_dvec[234] on iris.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Currently, st/mesa is always calling the GLSL IR lower_instructions()
pass with MOD_TO_FLOOR set, so mod operations will be lowered before
ever reaching NIR. This enables the same lowering at the NIR level,
which will let me shut off the GLSL IR path for NIR-based drivers.
The AMD NIR backend also has code to handle fmod, so we could
potentially skip this and still be fine. I don't have an opinion
on that.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Currently, st/mesa is always calling the GLSL IR lower_instructions()
pass with MOD_TO_FLOOR set, so mod operations will be lowered before
ever reaching NIR. This enables the same lowering at the NIR level,
which will let me shut off the GLSL IR path for NIR-based drivers.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Acked-by: Eric Anholt <eric@anholt.net>
Currently, st/mesa is always calling the GLSL IR lower_instructions()
pass with MOD_TO_FLOOR set, so mod operations will be lowered before
ever reaching NIR. This enables the same lowering at the NIR level,
which will let me shut off the GLSL IR path for NIR-based drivers.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Acked-by: Eric Anholt <eric@anholt.net>
We originally had a single lower_fmod option. In commit 2ab2d2e5, Sam
split 32 and 64-bit lowering into separate flags, with the rationale
that some drivers might want different options there. This left 16-bit
unhandled, so Iago added a lower_fmod16 option in commit ca31df6f.
Now that lower_fmod64 is gone (in favor of nir_lower_doubles and
nir_lower_dmod), we re-combine lower_fmod16 and lower_fmod32 into a
single lower_fmod flag again. I'm not aware of any hardware which
needs lowering for one bitsize and not the other.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
nir_lower_doubles offers a wide variety of fp64 lowering, including
lowering fmod@64. The version there also better handles imprecisions
due to lowered frcp@64. Let's consolidate on one version.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
I don't think panfrost actually does doubles yet, but it at least
claims to support PIPE_CAP_DOUBLES, so at least pretend to switch
to the new lowering.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We currently have two duplicate mechanisms for lowering fmod@64.
One is a nir_opt_algebraic rule keyed off of options->lower_fmod64,
and the other is nir_lower_doubles, which offers a full gamut of
fp64 lowering. The latter works slightly better in some corner cases,
so I'm trying to eliminate lower_fmod64 and drop the redundancy.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
These checksums were obtained by downloading the releases from
ftp://ftp.freedesktop.org/pub/mesa/older-versions/9.x/9.2.2/ and
running md5sum on them.
Hopefully the server wasn't compromised since release.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
A definition list is a better semantic match for what this list is
supposed to convey, so let's use that instead. And while we're at it,
let's add some code-tags around filenames, as they stand a bit more out
that way.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
A HTML definition-list is more semantically strong than just some
unordered list, and renders a bit cleaner by default. So let's use that
instead.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
These links are a bit odd in that the URLs are simply placed in
code-tags. This makes them harder to work with. Let's use proper
links instead.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
These half-way structured sections are needlessly problematic to
translate cleanly to other markup-languages, so let's just make this
into a free-form paragraph instead.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
This makes this document a bit more structured, which is generally
considered a good thing for HTML. It will also translate a bit better
into other markup-formats.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
The different headers and header-sizes already convey the hierarchical
structure of this document, the unusual spacing arguably just looks a
bit inconsistent with the rest of the site. Let's remove it; it looks
fine without it, and will translate better to other markup languages.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
It's easier to read function-names, file-names and other
"machine"-related strings if they are formatted in a monospace font. So
let's mark these up with code-tags.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
The tt-tag has been removed from HTML5, so let's normalize this to
code-tags instead. This just makes things a bit more consistent, as we've
mixed these left and right so far anyway.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
The pipeline register creation algorithm is only valid for SSA indices;
NIR registers and such cannot be pipelined without more complex
analysis. However, there is the occasional class of "liars" -- indices
that claim to be SSA but are not. This occurs in the blend shader
prologue, for example. Detect this and just bail quietly for now.
Eventually we need to rewrite the blend shader prologue to occur in NIR
anyway (which would mitigate the issue), but that's more involved and
depends on a better understanding of pixel formats in blend shaders (for
non-RGBA8888/UNORM cases).
Fixes some blend shader regressions.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
This piece of code was cargo-culted from the ir3 standalone compiler and
made sense when we were a standalone compiler ourselves. Unfortunately,
for the online compiler, mesa/st *already handles this for us* and if we
duplicate it here, we're duplicating it *incorrectly*. So just delete
these lines and fix a heck of a lot of tests.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
If by flush time the client hasn't submitted a clear, add jobs for
reloading the framebuffer contents as the first draw in the frame.
This is required by programs such as Weston which don't do clears and
rely on the previous contents of the framebuffer being there.
Reloading the whole framebuffer on every frame without regards to what
is needed or what is going to be covered is very inefficient, but future
work will introduce support for damage regions and partial updates so we
know what needs to be actually reloaded.
Fixes quite a few tests in dEQP-EGL.functional.buffer_age.*.
[Alyssa: The context is that tilers do an implicit glClear() on every
frame, whether you asked them to or not. If you want a clear, this is
very efficient. But if you don't, you have to explicitly blit the
backbuffer back into tile memory, accomplished by a dummy texturing
draw. This patch generates that draw via u_blitter, although we could do
a bit better ourselves by eliding the vertex job. This fixes "black
rectangles in Weston/sway" as well as "video not displaying when UI
visible in mpv"]
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
mesa/st flips the viewport, so we respect that rather than
trying to flip the framebuffer ourselves, ignoring the viewport and
using a messy heuristic.
However, this brings an underlying disagreement about the interpretation
of winding order to light. The blob uses a different strategy than Mesa
for handling viewport Y flipping, so the meanings of the winding order
bit are flipped for it. To keep things clean on our end, we rename to
explicitly use Gallium (rather than flipped OpenGL) conventions.
Fixes upside-down Xwayland/egl windows.
v2: Adjust lowering configuration to correctly flip gl_PointCoord.y and
gl_FragCoord.y. v1 was R-b'd by Tomeu, but then retracted due to these
regressions which are not fixed.
Suggested-by: Rob Clark <robdclark@chromium.org>
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Sort-of-reviewed-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
This patch allows nine to read the preferred IR from pipe caps and use
NIR when that is preferred by the driver, by calling tgsi_to_nir. Also
adds some debug options that allow overriding it.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Axel Davy <davyaxel0@gmail.com>
We're currently trying to detect dynamic loading config support by
trying to remove the test config (hard coded in the i915 driver) and
checking we get ENOENT.
This can fail if the test config was updated in Mesa but not yet in
i915.
A better way to do this is to pick an invalid ID and check for ENOENT.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: <mesa-stable@lists.freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Since NIR_PASS no longer swaps out the NIR pointer when NIR_TEST_* is
enabled, we can just take a single pointer and not a pointer to pointer.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Now that NIR_TEST_* doesn't swap the shader out from under us, it's
sufficient to just modify the shader rather than having to return in
case we're testing serialization or cloning.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Instead, we add a new helper which stomps one nir_shader and replaces it
with another. The new helper effectively just changes which pointer
gets used for the base nir_shader. It should be 99% as good at testing
cloning but without requiring that everything handle having the shader
swapped out from under it constantly.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108957
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Rob Clark <robdclark@chromium.org>
EuThreadsCount is supposed to be the number of threads per EU, not the
total number of threads in the whole device.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 1fc7b95127 ("i965: Add Gen8+ INTEL_performance_query support")
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Fixes formatting errors for 32 bit compilations, eg:
error: format ‘%lx’ expects argument of type ‘long unsigned int’,
but argument 5 has type ‘uint64_t’ {aka ‘long long unsigned int’}
[-Werror=format=]
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This removes one useless SMEM load operations which pointed to
the same descriptor anyway.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-By: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Use a helper function to get the sysval/attribute/varying/output name
and make the disasm debug output independent of shader stage.
Reviewed-by: Rob Clark <robdclark@gmail.com>
In a fragment shader, r0 is written out with a special branch sequence.
r0 is not a real register here, but essentially a pipeline register --
as such, it needs to be written out in full and on time, with hanging
dependencies in the bundle. Otherwise, we break up the bundle, which
costs an extra ALU cycle and adds a move.
When the scheduler ran last thing, we could do this analysis within the
scheduler. Now that RA can run after scheduling, that's no longer valid,
so we remove the analysis and always break it up (at a performance
penalty). Future work can add a post-RA/post-schedule pass to merge
writeout blocks if possible. It's a bit of a low-priority next to fixing
conformance regressions, of course.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
In this particular instance, a struct member was used outside of the
block where it was defined. Fix this by moving the definition outside of
the block.
Signed-off-by: Deepak Rawat <drawat@vmware.com>
Fixes: 569f838987 ("winsys/svga: Add support for new surface ioctl, multisample pattern")
Reviewed-by: Brian Paul <brianp@vmware.com>
We use the shared nir_lower_idiv pass to lower integer division, fixing
144 dEQP tests. This pass was not applied in the past due to breakage
from iabs fixed earlier in the series.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-By: Ryan Houdek <Sonicadvance1@gmail.com>
Certain ops that only take one argument have an imaginary "zero"
constant for their second argument. For instance, conversions:
i2f [dest], [source], #0
Memory corruption meant that #0 was instead random noise. For some ops,
that doesn't matter (manifested as abnormally large code size and poor
scheduling due to extra constants in random places). But for others,
where a 1-op is emulated by a 2-op with an implicit 0 second argument,
that broke things.
Fixes iabs (emulated by iabsdiff).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-By: Ryan Houdek <Sonicadvance1@gmail.com>
These ops are used to accelerate various functions exposed in OpenCL.
This commit only includes the routine additions to the table. They are
not wired through the compiler; rather, they are just here to keep a
reference for the disassembler.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-By: Ryan Houdek <Sonicadvance1@gmail.com>
This new 'platform' is added by default with no guards.
It is effectively a copy of the surfaceless one, with updated function
names and a brand new probe function.
Due to the reuse, some of the ifdef HAVE_SURFACELESS_PLATFORM guards
have been dropped.
Worth mentioning are the changes in _eglFindDisplay: since the original
and dup'd fd are required, we make use of the plat_opt argument.
Note that no hacks for eglGetDisplay are added - the API works only with
the eglGetPlatformDisplay* API.
v2:
- s/_eglCompareDeviceDisplay/_eglSameDeviceDisplay/ (Eric)
- let ^^ return bool (Eric)
- fixup meson build, move files() further up (Eric)
- copy from plat. surfaceless w/o the visual cleanups
- close and free when destroying the dpy
- sprinkle a few _eglDeviceSupports
- split fd handling into separate function
- use directly the render node if no FD is given (Mathias)
v3:
- s/dpy/disp/g
- drop swap_buffers* callbacks
- drop loader_set_logger()
- drop local define
- re-introduce _eglGetDRMDeviceRenderNode()
- EGL_WARN on ForceSoftware with HW device - continue using the HW device
- bail out for "EGL_MESA_device_software" until it's fixed
- wire-up the Android build
v4:
- use new style _eglFindDisplay()
- split hw vs sw code paths
- don't close the internal fd (already handled in FiniDisplay())
- make swrast work (bit hacky but will do for now)
- Android for real, drop autotools
- Correct HW + LIBGL_ALWAYS_SOFTWARE check
- use the dri2_create_drawable() helper
v5:
- enhance comment around fd checks (Mathias)
- rebase for dri2_init_surface() changes
Cc: Mathias Fröhlich <Mathias.Froehlich@gmx.net>
Acked-by: Marek Olšák <marek.olsak@amd.com> (v4)
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
At the moment the user will pass the screen number via attribs, yet we
would throw that away. Reason being that the int *screen passed to
xcb_connect() is output only.
v2 (Emil):
- split from a larger patch
- use xcb_connect() returned screen, as fallback
- use helper function only as needed
Reviewed-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
Earlier spec is vague, although EGL 1.5 makes it clear:
Multiple calls made to eglGetPlatformDisplay with the same
parameters will return the same EGLDisplay handle.
With this commit we store and compare the full attrib list.
v2 (Emil):
- Split into separate patches
- Use EGLBoolean over int masked as such
- Don't return free'd pointed on calloc failure
Reviewed-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
This commit fixes support for and adjusts the capabilities
returned by the SWR driver, and updates the documentation,
to correctly report the GL_ARB_copy_image extension.
Reviewed-by: Alok Hota <alok.hota@intel.com>
The compiler configuration was hardened to fail on format warnings and
things stopped building.
Fixes: c9c1e26106 ("mesa: prevent common string formatting security issues")
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-By: Ryan Houdek <Sonicadvance1@gmail.com>
We do support TF now, so it's no longer valid. Besides, if we want this
optimization, we should probably have mesa/st doing it right for everyone.
Reviewed-by: Rob Clark <robdclark@gmail.com>
We have the GLSL type, so we can just ask it how many coordinates there
are. The GLSL function already has Vulkan cases that we'd probably want
eventually.
Reviewed-by: Rob Clark <robdclark@gmail.com>
When comparing our sin/cos behavior to the closed source driver, I
noticed that we were off by a bit (or, in the case of 1/2pi, 3 bits).
Fixes:
dEQP-GLES3.functional.shaders.random.trigonometric.vertex.52
dEQP-GLES3.functional.shaders.random.all_features.vertex.0
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Just provide a "(null)" literal in case driverName is NULL.
In file included from ../src/glx/dri3_glx.c:76:
../src/glx/dri3_glx.c: In function ‘dri3_create_screen’:
../src/glx/dri_common.h:70:36: error: ‘%s’ directive argument is null [-Werror=format-overflow=]
70 | #define CriticalErrorMessageF(...) dri_message(_LOADER_FATAL, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/glx/dri3_glx.c:1002:4: note: in expansion of macro ‘CriticalErrorMessageF’
1002 | CriticalErrorMessageF("failed to load driver: %s\n", driverName);
| ^~~~~~~~~~~~~~~~~~~~~
../src/glx/dri3_glx.c:1002:50: note: format string is defined here
1002 | CriticalErrorMessageF("failed to load driver: %s\n", driverName);
| ^~
cc1: some warnings being treated as errors
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
When PIPE_TRANSFER_READ requires a resolve, we blit from the host
storage to a temporary storage, and do a format conversion from the
temporary storage to the guest storage. This change makes sure we
convert to the correct level of the guest storage.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
util_format_translate_3d expects the source box to be aligned to the
block size. When resolving, make sure the size of the staging
buffer is aligned to the block size.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
The newly added compiler flag disallows it because it is a security risk.
Fixes: c9c1e26106 "mesa: prevent common string formatting security issues"
Reviewed-by: Rob Clark <robdclark@gmail.com>
A previous optimization converts fmax(x, 0.0) instructions to fmov.pos.
This pass then propagates the .pos from the move up to the source
instruction (when possible). From there, copy propagation will eliminate
the move.
In the future, we might prefer to do this in common NIR code like we do
for saturate, as Bifrost can also benefit.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Ryan Houdek <Sonicadvance1@gmail.com>
This prepass, run after scheduling but before RA, specializes to
pipeline registers where possible. It walks the IR, checking whether
sources are ever used outside of the immediate bundle in which they are
written. If they are not, they are rewritten to a pipeline register (r24
or r25), valid only within the bundle itself. This has theoretical
benefits for power consumption and register pressure (and performance by
extension). While this is tested to work, it's not clear how much of a
win it really is, especially without an out-of-order scheduler (yet!).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Ryan Houdek <Sonicadvance1@gmail.com>
First, this moves the scheduler and emitter out of midgard_compile.c
into their own dedicated files.
More interestingly, this slims down midgard_bundle to be essentially an
array of _pointers_ to midgard_instructions (plus some bundling
metadata), rather than the instructions and packing themselves. The
difference is critical, as it means that (within reason, i.e. as long as
it doesn't affect the schedule) midgard_instructions can now be modified
_after_ scheduling while having changes updated in the final binary.
On a more philosophical level, this removes an IR. Previously, the IR
before scheduling (MIR) was separate from the IR after scheduling
(post-schedule MIR), requiring a separate set of utilities to traverse,
using different idioms. There was no good reason for this, and it
restricts our flexibility with the RA. So unify all the things!
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Ryan Houdek <Sonicadvance1@gmail.com>
Mostly, this fixes a number of instances of lines >> 80 chars,
refactoring them into something legible.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Ryan Houdek <Sonicadvance1@gmail.com>
This represents a major break with the former RA design. We now use
conflicting register classes to represent the subdivision of Midgard's
128-bit registers into varying sizes and arrangement. We determine class
based on the number of components in the instructions' masks. To support
this, we include a number of helpers in the RA to allow composing
swizzles and masks, such that MIR written implicitly assuming .xyzw
sources can be transformed to use actual (non-aligned) sources.
The net result is a marked decrease in register pressure on
non-vec4-exclusive shaders. We could still be doing much better. Not
implemented yet are:
- Register spilling
- Per-component liveness
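To illustrate the kind of swizzle-composition helper described above, here
is a minimal, self-contained C sketch; the real Midgard helpers pack
swizzles into bitfields and also compose masks, and all names below are
made up for this example:

  #include <assert.h>

  /* Composing two 4-component swizzles: applying s2 after s1 is the same
   * as indexing s1 by s2. */
  static void compose_swizzle(const unsigned s1[4], const unsigned s2[4],
                              unsigned out[4])
  {
     for (unsigned i = 0; i < 4; i++)
        out[i] = s1[s2[i]];
  }

  int main(void)
  {
     /* .yzwx followed by .wxyz are inverse rotations, so the composition
      * must be the identity swizzle .xyzw. */
     const unsigned a[4] = { 1, 2, 3, 0 };   /* .yzwx */
     const unsigned b[4] = { 3, 0, 1, 2 };   /* .wxyz */
     unsigned c[4];
     compose_swizzle(a, b, c);
     assert(c[0] == 0 && c[1] == 1 && c[2] == 2 && c[3] == 3);
     return 0;
  }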
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Ryan Houdek <Sonicadvance1@gmail.com>
These masks distinguish scalar/vec2/vec3 loads from the default vec4,
which helps with assembly readability (since it's immediately obvious
how many components are _actually_ affected, rather than doing
mysterious things to an unknown number of unused components). Later in
the series, this will enable smarter register allocation, as the unused
components will not be interpreted abnormally.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Ryan Houdek <Sonicadvance1@gmail.com>
This fixes liveness analysis with respect to inline constants and
branching. In practice, the symptom is abnormally high register
pressure.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Ryan Houdek <Sonicadvance1@gmail.com>
These snippets of integer assembly are injected for various purposes.
Eventually, we'll want to implement these in NIR directly. Regardless,
the "default" output modifier is different between floats and ints, so
let's set the right one.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Ryan Houdek <Sonicadvance1@gmail.com>
This mechanism is only used by blend shaders, so just use a move here.
Ideally, it'll be copy-propped and DCE'd away; this removes a source of
considerable indirection and will simplify RA logic.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Ryan Houdek <Sonicadvance1@gmail.com>
This pattern was noticed in glmark's jellyfish scene.
v2: Add inexact qualifier due to NaN behaviour.
Minimal shader-db changes (slightly helped).
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Elie Tournier <tournier.elie@gmail.com>
Adds a compile-time error for obvious security issues like:
printf(string_var);
The proposed flag is more tolerant than -Wformat-nonliteral.
Specifically, it tolerates common mesa formatting like:
static const char *shader_template = "really long string %d";
printf(shader_template, uniform_number);
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110833
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
With 8 and 16-bit types, and anything else where we have to use
non-trivial register strides to deal with restrictions, we end up with
things that look like partial writes even though we don't care about any
values in the register except those written by that instruction. This is
particularly important when dealing with loops: because liveness sees
is_partial_write, it assumes an old value from a previous loop iteration
may still be valid at that point and extends all purely partially written
values to the entire loop.
This commit adds a new UNDEF instruction which does nothing (the
generator doesn't emit anything) but which does a fake write to the
register. This informs liveness that we don't care about any values
before that point so it won't consider those registers to be falsely
live. We can safely emit UNDEF instructions for all SSA values that
come in from NIR and nearly all temporaries generated by various stages
of the compiler. In particular, we need to insert UNDEF instructions
when we handle region restrictions because the newly allocated registers
are almost guaranteed to be partially written.
No shader-db changes.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110432
Reviewed-by: Matt Turner <mattst88@gmail.com>
This corresponds to commit 8b911bd2ba37677037b38c9bd286c7c05701bcda on
GitHub.
We previously tweaked OpenCL.std.h from upstream to be included in C
code. Now the upstream header can be included; however, the symbol names
are slightly different (they include an OpenCLstd_ prefix), so this patch
also fixes vtn_opencl.c to use those.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
If libdrm is found the pipe loader enables drm anyway, and that is
pretty much the only extra dependency this code has.
This enables creating libva display using a drm fd without having
to enable the DRM (GBM really) backend of EGL, which is completely
unrelated.
Leaving the X11 platforms alone as they would still result in the
additional inclusion of extra deps.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Otherwise LLVM can sink them and their texture coordinate calculations
into divergent branches.
v2: simplify the conditions on which the intrinsic is marked as convergent
v3: only mark as convergent in FS and CS with derivative groups
Cc: <mesa-stable@lists.freedesktop.org>
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Fixes -Woverflow warnings with GCC 9.1.1
v2: use a cast instead of a bitwise and
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-By: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This might be slightly faster since we're doing one read rather than
two before we decide to skip. The more important reason, however, is
because no_spill prevents us from re-spilling spill registers. In the
new world in which we don't re-calculate liveness every spill, we may
not have valid liveness for spill registers so we shouldn't even look
their live ranges up.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110825
Fixes: e99081e76d "intel/fs/ra: Spill without destroying the..."
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Tested-by: Tapani Pälli <tapani.palli@intel.com>
Clearing with a quad uses a normal draw, which counts towards occlusion
queries (OQ). st/meta uses set_active_query_state(..) to tell the driver
to pause queries in such cases.
Fixes spec@arb_occlusion_query@occlusion_query_meta_save piglit.
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
The driver should only perform fast depth clears with the graphics path
when the view covers all image layers, otherwise this might
corrupt layers when HTILE is enabled.
Cc: 19.0 19.1 mesa-stable@lists.freedesktop.org
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
It's unsupported; only load/store format with vec3 is supported.
Fixes: 6970a9a6ca ("ac,radv: remove the vec3 restriction with LLVM 9+")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Mesa measures in DWords. The hardware also claims to measure in DWords.
Except the SO_WRITE_OFFSET field is actually bits 31:2, with 1:0 MBZ.
Which means that it really measures in bytes. So, convert to bytes.
Without this, our offset / stride denominator was 1/4th the size it
should be, leading to 4x the vertex count that we should have had.
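A tiny self-contained illustration of the unit mismatch (the numbers are
made up; the actual fix is simply to program SO_WRITE_OFFSET and the
stride in bytes so both sides use the same unit):

  #include <stdio.h>

  int main(void)
  {
     unsigned stride_dwords = 4;                 /* one vec4 per vertex */
     unsigned stride_bytes  = stride_dwords * 4;
     unsigned offset_bytes  = 64;                /* what the HW actually wrote */

     /* Wrong: dividing a byte offset by a DWord stride over-counts by 4x. */
     printf("wrong vertex count: %u\n", offset_bytes / stride_dwords);
     /* Right: convert so numerator and denominator are both in bytes. */
     printf("right vertex count: %u\n", offset_bytes / stride_bytes);
     return 0;
  }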
Fixes GTF-GL46.gtf40.GL3Tests.transform_feedback2.transform_feedback2_two_buffers
This is the same as SpvOpCopyObject but without the type checking,
which is how vtn_composite_copy works, so we just need to hook the
operation.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
SPIR-V 1.4 supports OpSelect over any composite type, and also allows
scalar boolean condition for vector types -- a case which we already
handled to support old GLSLang.
Added a helper function to recursively perform nir_bcsel, that makes
easier to support structs.
v2: Replace asserts() with vtn_fail_if(). (Jason)
v3: Simplify Condition and Result types verifications. (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The automatic header generation unifies identical registers in a series
and only emits definitions for the first one. This is mostly to avoid
emitting excessive definitions for CB registers, but special-casing
an exception for this family of registers doesn't seem worth it.
The definition of the fields differs, but PITCH_GFX9 is a mere extension
of PITCH_GFX6 that does not conflict with any other fields.
This aligns the definitions with what will be generated from the
register JSON.
The information about how large the fields really are is preserved in
the register database.
The field layout wasn't actually changed in gfx9, so having the suffix
isn't very useful. The field *contents* were changed, but this is
reflected in the V_xxx_xxx definitions and is taken into account by
the ac_debug logic based on the register JSON.
This aligns the definitions with what will be generated from the
register JSON.
We will derive both the debugging tables and (the majority of) the
register headers from descriptions in JSON, instead of deriving the
debugging tables from an awkward parsing of the register headers.
Some of the scripts are useful for maintaining the register database
itself. The scripts are designed to output reasonably readable JSON
by default.
Don't have a separate mechanism for NIR constants to be removed from
the table. If unused, we will compact it away. The use_null_surface
is needed when INTEL_DISABLE_COMPACT_BINDING_TABLE is set.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Change the iris_binding_table to keep track of which surfaces are
actually going to be used, then assign binding table indices just for
those. Reducing unused bytes is valuable because we use a reduced space
for those tables in Iris.
The rest of the driver can go from "group indices" (i.e. UBO #2) to
BTI and vice-versa using helper functions. The value
IRIS_SURFACE_NOT_USED is returned to indicate a certain group index is
not used or a certain BTI is not valid.
The environment variable INTEL_DISABLE_COMPACT_BINDING_TABLE can be
set to skip compacting binding table.
v2: (all from Ken)
Use BITFIELD64_MASK helper. Improve comments.
Assert all group is marked as used when we have indirects.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Stop using brw_compiler to lower the final binding table indices for
surface access. This is done by simply not setting the
'prog_data->binding_table.*_start' fields. Then make the driver
perform this lowering.
This is a better place to perform the binding table assignments, since
the driver has more information and will also later consume those
assignments to upload resources.
This also prepares us for two changes: use ibc without having to
implement binding table logic there; and remove unused entries from
the binding table.
Since the `block` field in brw_ubo_range now refers to the final
binding table index, we need to adjust it before using to index
shs->constbuf.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
We'll change iris to perform lowering of the binding table indices
earlier (before the backend kicks in), but the backend compiler uses
the result of the analysis to identify load_ubo intrinsics, so we do
the analysis after the lowering to have the right indices.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
v2: Fix comparing addresses from formats that have more than one
component by using nir_ball_iequal(). (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The program parser allocates a parameter list.
In case of a parsing error, some variables would not be freed.
This patch adds the missing freeing.
Signed-off-by: Sergii Romantsov <sergii.romantsov@globallogic.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
Previously the loop for assigning registers was bailing out early if
the register had a null source. I think the intention is that in this
case it isn't necessary to assign a register. However, it was also
skipping the part that fixes up the types. This can happen if the
instruction is copy propagated to be a move from a constant half-float
input register. In that case it still needs to fix up the types.
Fixes assert in
dEQP-GLES3.functional.shaders.invariance.highp.subexpression_precision_mediump
when lowering the precision of the variables.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Though it should be fixed in RA pass, it needs to be set correctly from
the beginning according to the bitsize of NIR dest.
v2: Would be better for mad,fddx,fddy to fixup later in RA pass.
[small cleanup of fallout from the imov/fmov removal]
Signed-off-by: Rob Clark <robdclark@chromium.org>
According to the blob driver, floating point opcodes seem to handle only
32-bit values for half constant registers.
So we need to convert 16-bit values back to 32-bit values when a lower
precision pass is in effect.
Signed-off-by: Rob Clark <robdclark@chromium.org>
If the types of the dest reg and src reg of an absneg opcode are
different, it shouldn't be considered a same-type mov.
This patch becomes meaningful when we start to use mediump information for
doing precision lowering to 16bit.
Signed-off-by: Rob Clark <robdclark@chromium.org>
eg. uniform mediump vec4 f;
This patch means nothing since there's no mediump lowering pass for now,
but it will be meaningful when that pass lands in the near future.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Previously the A5XX_SP_FS_OUTPUT_REG_HALF_PRECISION was set depending
on whether half_precision was set in the shader key. With support for
mediump precision, it is possible to have different outputs use
different precisions. That means we can’t have a global shader state
to specify it. Instead it now tries to copy the half-float-ness
from the nir_variable for the output into the ir3_shader_variant. This
is then used to decide whether to set half-precision for each output.
The a6xx version is copied from the a5xx code but it has not been
tested.
v2. [Hyunjun Ko (zzoon@igalia.com)] There's the half flag recently
added, which represents precision based on IR3_REG_HALF. Now use this
flag to avoid duplication.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Previously the code to load from a constant instruction was always
using the u32 pointer. If the constant is actually a 16-bit source
this would end up with the wrong values because the pointer would be
offset by the wrong size. This fixes it to use the u16 pointer.
Signed-off-by: Rob Clark <robdclark@chromium.org>
These aren't real instructions, and they don't change the # of live
values, so there is no point in them competing with real instructions.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
For instructions that increase the # of live values, apply a threshold
to avoid scheduling them too early. And factor in the net change of # of
live values that would result from scheduling an instruction, to
prioritize instructions that reduce the number of live values as the
number of live values increases.
For manhattan:
total instructions in shared programs: 27869 -> 28413 (1.95%)
instructions in affected programs: 26756 -> 27300 (2.03%)
helped: 102
HURT: 87
total full in shared programs: 1903 -> 1719 (-9.67%)
full in affected programs: 1390 -> 1206 (-13.24%)
helped: 124
HURT: 9
The reduction in register usage nets ~20% gain in manhattan. (So
getting mediump support should be a huge win for gles gfxbench.)
Also significantly helps some of the more complex shadertoy shaders,
like IQ's Piano (32 to 18 regs, doubles fps).
The effect is less pronounced on smaller shaders.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Account for shader outputs and values live in any direct/indirect
successor block.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Allows the legacy matrix stacks to be manipulated without disturbing the
matrix mode selector.
Adapted from a patch from Chris Forbes.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Split this out from glMatrixMode since we're about to need it
independently for EXT_DSA.
Adapted from a commit by Chris Forbes.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Commit a1378639ab reordered context functions initializations but broke
sctx->b.resource_copy_region init when using AMD_DEBUG=forcedma.
In this case sctx->dma_copy was assigned a value after being used in:
sctx->b.resource_copy_region = sctx->dma_copy;
This commit moves the FORCE_DMA special case after sctx->dma_copy initialization.
See https://bugs.freedesktop.org/show_bug.cgi?id=110422
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
Recently PIPE_CAP_MAX_FRAMES_IN_FLIGHT was changed from 2
to 1:
20909284f2
No driver seems to overwrite the default value.
One user reports severe regressions for some games.
For now, revert to the value 2 for nine.
Cc: "19.1" mesa-stable@lists.freedesktop.org
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
Now that we have the util function for the default values, we can get
rid of the boilerplate.
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
From the Vulkan spec 1.1.108:
"vkCmdCopyQueryPoolResults is guaranteed to see the effect of
previous uses of vkCmdResetQueryPool in the same queue, without any
additional synchronization."
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Now that the type gathering function looks at instructions that might
have other types, return an invalid type instead of crashing. That
invalid type will be properly ignored later.
Fixes: c12750527b "nir: add type information to load uniform/input and store output intrinsics"
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Removes the bool_to_float logic from the int_to_float pass, so that both
can be used separately. By having separate passes we have better validation
and it makes it possible to use with the lower_ftrunc option (int lowering
generates ftrunc, but lower_ftrunc generates bools, ftrunc lowering should
probably be reworked). For now we always expect lower_bool to come after
lower_int.
Also fixes f2i32 to become ftrunc and adds u2f/f2u cases.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
It is treated like the vecN instructions which also have no type.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Add a "lower_bitshift" option, which disables optimizations introducing
bitshifts and lowers ishl by constant to a multiply, so that we don't have
to deal with bitshifts in int_to_float lowering.
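Separate from the actual NIR pass, here is a quick self-contained check of
the identity the lowering relies on (x << c equals x * (1 << c) in 32-bit
unsigned arithmetic):

  #include <assert.h>
  #include <stdint.h>

  int main(void)
  {
     uint32_t x = 0x1234;
     for (unsigned c = 0; c < 32; c++)
        assert((x << c) == x * (1u << c));   /* both wrap mod 2^32 */
     return 0;
  }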
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Consts and undefs can be used as different types (common with "0" constant)
so don't copy types from consts/undefs, only to them. It doesn't entirely
solve the problem that the type given to the const could be wrong, but
now the only realistic case is with "0", which is the same when cast to
float, so it doesn't matter for lower_int_to_float.
The other change is to get type information for load input/uniform and
store output, and use that to get correct results.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This type information will be used by gather_ssa_types to get usable results
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Improvements related to the patch that removed native_integers:
* In glsl_to_nir, special cases for i2f,u2f,etc are no longer needed
* In prog_to_nir, use sge/slt and let lower_scmp lower it if needed
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
We could have identical texture state for both VS and FS, which would
result in VS state getting created first and FS state mapping to the
identical cmdstream, resulting in VS state getting emitted twice and no
FS state emitted.
Fixes:
dEQP-GLES2.functional.uniform_api.value.assigned.by_pointer.render.basic_array.sampler2D_both
dEQP-GLES2.functional.uniform_api.value.assigned.by_pointer.render.struct_in_array.sampler2D_samplerCube_both
dEQP-GLES2.functional.uniform_api.value.assigned.by_pointer.render.array_in_struct.sampler2D_samplerCube_both
dEQP-GLES2.functional.uniform_api.value.assigned.by_pointer.render.nested_structs_arrays.sampler2D_samplerCube_both
dEQP-GLES2.functional.uniform_api.value.assigned.by_value.render.nested_structs_arrays.sampler2D_samplerCube_both
dEQP-GLES31.functional.program_uniform.by_pointer.render.array_in_struct.sampler2D_samplerCube_both
dEQP-GLES31.functional.program_uniform.by_pointer.render.nested_structs_arrays.sampler2D_samplerCube_both
dEQP-GLES31.functional.program_uniform.by_value.render.nested_structs_arrays.sampler2D_samplerCube_both
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
If we access the address of the UBO indirectly, and there is no higher
const emitted w/ direct access (like an immediate lowered to uniform)
the assembler won't figure out the correct constlen.
Fixes:
dEQP-GLES31.functional.shaders.opaque_type_indexing.ubo.uniform_vertex
dEQP-GLES31.functional.shaders.opaque_type_indexing.ubo.uniform_fragment
dEQP-GLES31.functional.shaders.opaque_type_indexing.ubo.dynamically_uniform_vertex
dEQP-GLES31.functional.shaders.opaque_type_indexing.ubo.dynamically_uniform_fragment
Signed-off-by: Rob Clark <robdclark@chromium.org>
Fixes dEQP-GLES2.functional.multisampled_render_to_texture.readpixels
Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Eric Anholt <eric@anholt.net>
Blob is also setting the .l bit, and it seems to solve some intermittent
failures with a couple of deqp's:
dEQP-GLES31.functional.image_load_store.2d.qualifiers.coherent_r32i
dEQP-GLES31.functional.image_load_store.2d.qualifiers.volatile_r32f
Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Eric Anholt <eric@anholt.net>
It seems like (ei) handling doesn't sync on (ss), so we could end up in
a situation where we release varying storage before an ldlv for flat
shaded varyings completes. Keep track if we've done an (ss) since the
last ldlv, and if not add (ss) flag to last_input which gets (ei).
Noticed with dEQP-GLES3.functional.fragment_out.random.24 and
dEQP-GLES3.functional.fragment_out.random.27, which previously passed by
luck because ir3_sched ordered instructions in a way that resulted in a
lucky (ss).
Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Eric Anholt <eric@anholt.net>
The special handling for last_input assumes that all the varying loads
are in the first block. Add an assert to catch if anyone breaks that
assumption.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
While we're here, copy the size table from set.c to get rid of hard tabs
in the hash_table.c version.
Reviewed-by: Eric Anholt <eric@anholt.net>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Compilation times with my shader-db database:
Difference at 95.0% confidence
-1.22312 +/- 0.726033
-0.283979% +/- 0.168254%
(Student's t, pooled s = 1.02177)
Reviewed-by: Eric Anholt <eric@anholt.net>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
This should be at least as fast as using fast_idiv_by_const, and has the
advantage that the precomputation is simple enough to be evaluated at
Mesa-compile time for hash tables and sets which have a fixed table of
possible divisors.
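A self-contained sketch of the trick (the names do not match Mesa's util
helpers exactly): the per-divisor precomputation is a single 64-bit
constant, cheap enough to store in a fixed table of hash-table sizes.

  #include <assert.h>
  #include <stdint.h>

  /* Remainder by a fixed 32-bit divisor d via a precomputed 64-bit "magic"
   * value. Requires the GCC/Clang __uint128_t extension. */
  static uint32_t fast_urem32(uint32_t n, uint32_t d, uint64_t magic)
  {
     uint64_t lowbits = magic * n;
     return (uint32_t)(((__uint128_t)lowbits * d) >> 64);
  }

  int main(void)
  {
     const uint32_t d = 37;                 /* e.g. one fixed table size */
     const uint64_t magic = ~0ull / d + 1;  /* can be a compile-time constant */
     for (uint32_t n = 0; n < 1000000; n++)
        assert(fast_urem32(n, d, magic) == n % d);
     return 0;
  }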
Acked-by: Eric Anholt <eric@anholt.net>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
A significant portion of the time spent in nir_opt_cse for the Dolphin
ubershaders was in resizing the set. When resizing a hash table, we know
in advance that each new element to be inserted will be different from
every other element, so we don't have to compare them, and there will be
no tombstone elements, so we don't have to worry about caching the
first-seen tombstone. We add a specialized add function which skips
these steps entirely, speeding up resizing.
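A toy open-addressing sketch of the shortcut (not the actual set.c code):
during a rehash we only probe for the first empty slot and never call the
key comparison function, since every entry being re-inserted is known to
be unique and the new table has no tombstones.

  #include <stddef.h>
  #include <stdint.h>

  struct toy_entry { uint32_t hash; const void *key; };

  static void toy_add_rehash(struct toy_entry *table, uint32_t size,
                             uint32_t hash, const void *key)
  {
     uint32_t i = hash % size;
     while (table[i].key != NULL) {   /* no compare, no tombstone check */
        if (++i == size)
           i = 0;
     }
     table[i].hash = hash;
     table[i].key = key;
  }

  int main(void)
  {
     struct toy_entry table[8] = { 0 };
     const char *keys[] = { "a", "b", "c" };
     for (int i = 0; i < 3; i++)
        toy_add_rehash(table, 8, 5u * i + 3u, keys[i]);
     return 0;
  }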
Compile-time results from my shader-db database:
Difference at 95.0% confidence
-2.29143 +/- 0.845534
-0.529475% +/- 0.194767%
(Student's t, pooled s = 1.08807)
Reviewed-by: Eric Anholt <eric@anholt.net>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
To keep the set and hash table in sync. Note that some of this had
already been done for hash tables, in particular pulling out the
hash % ht->size computation.
Reviewed-by: Eric Anholt <eric@anholt.net>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Unfortunately GCC can't do this for us, probably because we call the key
comparison function which GCC can't prove won't modify arbitrary memory.
This is a pretty hot function, so do the optimization manually to be
sure the compiler will get it right.
While we're here, make the computation of the new probe address use a
single conditional subtract instead of a modulo, since we know that it
won't ever get as big as 2 * ht->size before the modulo. Modulos tend to
be pretty expensive operations.
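A minimal stand-alone version of the probe-address update, under the
stated invariant that both the current address and the probe offset are
already smaller than the table size:

  #include <assert.h>
  #include <stdint.h>

  static uint32_t next_probe(uint32_t addr, uint32_t offset, uint32_t size)
  {
     uint32_t next = addr + offset;   /* < 2 * size by the invariant */
     if (next >= size)
        next -= size;                 /* one subtract instead of a modulo */
     return next;
  }

  int main(void)
  {
     const uint32_t size = 67;
     for (uint32_t addr = 0; addr < size; addr++)
        for (uint32_t off = 0; off < size; off++)
           assert(next_probe(addr, off, size) == (addr + off) % size);
     return 0;
  }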
shader-db compile time results for my database:
Difference at 95.0% confidence
-2.24934 +/- 0.69897
-0.516296% +/- 0.159993%
(Student's t, pooled s = 0.983684)
Reviewed-by: Eric Anholt <eric@anholt.net>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Before this change, we were searching for each instruction twice, once
when checking if it exists and once when figuring out where to insert
it. By using the new function, we can do everything we need to do in one
operation.
Compilation time numbers for my shader-db database:
Difference at 95.0% confidence
-4.04706 +/- 0.669508
-0.922142% +/- 0.151948%
(Student's t, pooled s = 0.95824)
Reviewed-by: Eric Anholt <eric@anholt.net>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Unlike _mesa_set_search_and_add(), it doesn't replace an entry if it's
found, returning it instead. This is useful for nir_instr_set, where
we have to know both the original instruction and its
equivalent.
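A hedged usage sketch of how a caller such as nir_instr_set could use it;
the two-argument signature and the entry->key check below are assumptions
based on this description, and the authoritative contract is in util/set.h:

  /* Sketch only, not the literal nir_instr_set code. */
  struct set_entry *entry = _mesa_set_search_or_add(instr_set, instr);
  if (entry->key != instr) {
     /* An equivalent instruction was already in the set: entry->key is
      * the original and instr is the redundant duplicate (e.g. for CSE). */
  } else {
     /* instr was newly added; nothing else to do. */
  }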
Reviewed-by: Eric Anholt <eric@anholt.net>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
ncomp is never set for vertex shaders, but a3xx and a4xx still use it.
Fixes: 831f1a05c0 freedreno/ir3: rework varying packing
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Rob Clark <robdclark@chromium.org>
On some architectures, Boolean values used to control conditional
branches or conditional selection must be propagated into a flag. This
generally means that a stored Boolean value must be compared with zero.
Rather than force the generation of extra compares with zero, re-emit
the original comparison instruction. This can save register pressure by
not needing to store the Boolean value.
There are several possible areas for future improvement to this pass:
1. Be more conservative. If both sources to the comparison instruction
are non-constants, it may be better for register pressure to emit the
extra compare. The current shader-db results on Intel GPUs (next
commit) lead me to believe that this is not currently a problem.
2. Be less conservative. Currently the pass requires that all users of
the comparison match the pattern. The idea is that after the pass is
complete, no instruction will use the resulting Boolean value. The only
uses will be of the flag value. It may be beneficial to relax this
requirement in some cases.
3. Be less conservative. Also try to rematerialize comparisons used for
discard_if intrinsics. After changing the way the Intel compiler
generates code for discard_if (see MR!935), I tried implementing this
already. The changes were pretty small. Instructions were helped in 19
shaders, but, overall, cycles were hurt. A commit "nir: Rematerialize
comparisons for nir_intrinsic_discard_if too" is on my fd.o cgit.
4. Copy the preceding ALU instruction. If the comparison is a
comparison with zero, and it is the only user of a particular ALU
instruction (e.g., (a+b) != 0.0), it may be a further improvement to also
copy the preceding ALU instruction. On Intel GPUs, this may enable
cmod propagation to make additional progress.
v2: Use much simpler method to get the prev_block for an if-statement.
Suggested by Tim.
Reviewed-by: Matt Turner <mattst88@gmail.com>
And instead, link them as they are added.
Makes things a bit clearer and prepares future work such as FB reload
jobs.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
We now bounds check properly in the uniform loading fast path, so
there's no need to disable it by pretending there are other UBO bindings
in use. The way this looks at the variable name was causing problems
when two piglit shaders, one with a name that triggered the hack and one
that didn't, got hashed to the same thing after stripping out the names.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
location is the API-level location, but driver_location is the actual
location the uniform gets passed to the driver. This apparently only
caused failures with builtins, where the location is 0 because it's
represented via the state tokens instead.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
ac expands the store to 32-bit components for us, but we still have to
deal with storing up to 8 components, and when a varying is split across
two vec4 slots we have to calculate the address again for the second
slot, since they aren't adjacent in memory. I didn't do this on the ac
level because we should generate better indexing arithmetic for the lds
store, where slots are contiguous.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Because the core principle of the vars_to_ssa pass is that it globally
(within a function) looks at all of the uses of a never-indirected path
and does a full into-SSA on that path, it can't handle a path which has
any chance of having aliasing. If a function_temp variable has a cast
or anything else which may cause aliasing, we have to assume that all
paths to that variable may alias and ignore the entire variable.
Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
We're about to change the meaning of get_deref_node returning NULL so we
need a non-NULL value to mean properly undefined.
Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This lets passes easily detect derefs which have uses that fall outside
the standard load/store/copy pattern so they can bail appropriately.
Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
When we inlined cf_node_has_side_effects into node_is_dead, all the
conditions flipped and we forgot to flip one. Fortunately, it doesn't
matter right now because no one uses this pass on shaders with more than
one function.
Fixes: b50465d197 "nir/dead_cf: Inline cf_node_has_side_effects"
Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
When creating function parameters, we create pointers from ssa
values; this creates nir casts with stride 0. However, we have
nowhere else to get this value from. Later passes that lower
explicit io need this stride value to do the right thing.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Debugging use of unsafe iterators when you should have used the _safe
version sucks. Add some DEBUG build support to catch and assert if
someone does that.
I didn't update the UPPERCASE versions of the iterators. They should
probably be deprecated/removed.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
This mode is used by PhysicalStorageBufferEXT storage class.
Fixes: 8bdf5a008b "nir: Allow derefs to be used as phi sources"
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Cannonlake hardware adds a new resolve type in 3DSTATE_PS called
FAST_CLEAR_0 which does an ambiguate. Now that the hardware can do it
directly, we should use that instead of binding the CCS as a render
target and doing it manually. This was tested with a full Vulkan CTS
run on Cannonlake.
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
No significant changes in the code were needed to enable the extension,
just updating the SWR capabilities and the documentation.
Reviewed-by: Alok Hota <alok.hota@intel.com>
We set header_present but then pass it some random garbage. Give it g0
instead. I'm not actually sure this does anything but g0 is the usual
header data and this is what the windows driver does so it seems like a
good idea.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Basically, this extension allows applications to use custom
sample locations. It doesn't support variable sample locations
during subpass. Note that we don't have to upload the user
sample locations because the spec doesn't allow this.
The extension is currently disabled because the driver needs to
support variable sample locations during layout transitions. The
depth decompress needs to know them and that's a bit invasive.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
We only need the lock for:
1. Rummaging through the cache
2. Allocating VMA
We don't need it for alloc_fresh_bo(), which does GEM_CREATE, and also
SET_DOMAIN to allocate the underlying pages. The idea behind calling
SET_DOMAIN was to avoid a lock in the kernel while allocating pages,
now we avoid our own global lock as well.
We do have to re-lock around VMA. Hopefully this shouldn't happen too
much in practice because we'll find a cached BO in the right memzone
and not have to reallocate it.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Chris pointed out that the order between SET_DOMAIN and SET_TILING
doesn't matter, so we can just do the page allocation when creating
a new BO. Simplifies the flow a bit.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Mathias Fröhlich reported that commit 6244da8e23 crashes.
list_for_each_entry_safe is safe against removing the current entry,
but iris_bo_cache_purge_bucket was potentially removing next entries
too, which broke our saved next pointer.
To fix this, don't bother with the iris_bo_cache_purge_bucket step.
We just detected a single entry where the kernel has purged the BO's
memory, and so it isn't a usable entry for our cache. We're about to
continue the search with the next BO. If that one's purged, we'll
clean it up too. And so on.
We may miss cleaning up purged BOs that are further down the list
after non-purged BOs...but that's probably fine. We still have the
time-based cleaner (cleanup_bo_cache) which will take care of them
eventually, and the kernel's already freed their memory, so it's not
that harmful to have a few kicking around a little longer.
Fixes: 6244da8e23 iris: Dig through the cache to find a BO in the right memzone
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
This saves some util_vma thrash when the first entry in the cache
happens to be in a different memory zone, but one just a tiny bit
ahead is already there and instantly reusable. Hopefully the cost
of a little extra searching won't break the bank - if it does, we
can consider having separate list heads or keeping a separate VMA
cache.
Improves OglDrvRes performance by 22%, restoring a regression from
deleting the bucket allocators in 694d1a08d3.
Thanks to Clayton Craft for alerting me to the regression.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
There's enough going on here to warrant a helper. This also simplifies
the control flow and eliminates the last non-error-case goto.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
It is unlikely that we would fail to map a cached BO in order to zero
its contents. When we did, we would free the first BO in the cache and
try again with the second. It's possible that this next BO already had
a map setup, in which case we'd succeed. But if it didn't, we'd likely
fail again in the same manner.
There's not much point in optimizing this case (and frankly, if we're
out of CPU-side VMA we should probably dump the cache entirely)...so
instead, just fall back to allocating a fresh BO from the kernel which
will already be zeroed so we don't have to try and map it.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Both the from-cache and fresh-from-GEM cases were calling SET_TILING.
In the cached case, we would retry the allocation on failure, pitching
one BO from the cache each time. This is silly, because the only time
it should fail is if the tiling or stride parameters are unacceptable,
which has nothing to do with the particular BO in question. So there's
no point in retrying - we should simply fail the allocation.
This patch moves both calls to bo_set_tiling_internal() below the
cache/fresh split, so we have it at a single point in time instead
of two.
To preserve the ordering between SET_TILING and SET_DOMAIN, we move
that below as well. (I am unsure if the order matters.)
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
We mark snooped BOs as non-reusable, so we never return them to the
cache. This means that we'd need to call I915_GEM_SET_CACHING to make
any BO we find in the cache snooped. But then again, any BO we freshly
allocate from the kernel will also be non-snooped, so it has the same
issue. There's really no reason to skip the cache - we may as well use
it to avoid the I915_GEM_CREATE overhead.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
util_vma needs to be protected by a lock. All other callers of
vma_alloc and vma_free appear to be holding a lock already.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
The goto jumped over the mtx_lock, but proceeded to hit the mtx_unlock.
We can simply set the bucket to NULL and it will skip the cache without
goto, and without messing up locking.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
debugoptimized builds don't define NDEBUG, but they also don't define
DEBUG. We want to enable cheap debug code for these builds.
I only chose those occurrences that I care about.
Reviewed-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
When we hit a GPU hang, we failed to reset Surface State Base Address
right away, and would keep hanging until we filled up the binder. Then
we'd finally get it right after a lot of repeated stumbles. Update it
right away so we hopefully hang fewer times before succeeding.
Shader-db results on Kaby Lake:
total instructions in shared programs: 15306230 -> 15304726 (<.01%)
instructions in affected programs: 4570 -> 3066 (-32.91%)
helped: 16
HURT: 0
total cycles in shared programs: 361703436 -> 361680041 (<.01%)
cycles in affected programs: 129388 -> 105993 (-18.08%)
helped: 16
HURT: 0
LOST: 0
GAINED: 2
The helped programs were in XCom 2, Deus Ex: Mankind Divided, and Kerbal
Space Program
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
It will be true for the constant/system value buffer because they use a
constant zero but it's not true in general. If we ever got here when
the source wasn't constant, nir_src_as_uint would assert.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Cc: mesa-stable@lists.freedesktop.org
Silence two unused var warnings. And init elem_size, elem_align to
zero to silence "maybe uninitialized" warnings.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Jason pointed out that we don't need to keep an entire copy of the
serialized NIR around, we just need the SHA1. This does change our
disk cache key to be taking a SHA1 of a SHA1, which is a bit odd,
but should work out and be faster and use less memory.
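A short sketch of the idea, assuming Mesa's _mesa_sha1_compute() and
_mesa_sha1_update() helpers; 'blob' and 'key_ctx' are made-up names for
this example:

  /* Hash the serialized NIR once and keep only the 20-byte digest. */
  unsigned char nir_sha1[20];
  _mesa_sha1_compute(blob.data, blob.size, nir_sha1);

  /* Later, fold just the digest (not the whole blob) into the disk-cache
   * key, i.e. a SHA1 of a SHA1. */
  _mesa_sha1_update(&key_ctx, nir_sha1, sizeof(nir_sha1));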
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
spirv_to_nir() returned the nir_function corresponding to the
entrypoint, as a way to identify it. There's now a bool is_entrypoint
in nir_function and also a helper function to get the entry_point from
a nir_shader.
The new return type better reflects what the function name suggests. It
also helps drivers avoid the mistake of reusing internal shader
references after running NIR_PASS on it. When using NIR_TEST_CLONE or
NIR_TEST_SERIALIZE, those would be invalidated right in the first pass
executed.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Replace its uses with checking for is_entrypoint and calling
nir_shader_get_entrypoint().
This is a preparation to change spirv_to_nir() return type.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Replace its use with checking for is_entrypoint.
This is a preparation to change spirv_to_nir() return type.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Replace its uses with nir_shader_get_entrypoint(), and change the
helper function to return nir_shader *.
This is a preparation to change spirv_to_nir() return type.
Reviewed-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
When readback is true, and there are pending writes in the transfer
queue, we should flush to avoid reading back outdated data. This
fixes piglit arb_copy_buffer/dlist and a subtest of
arb_copy_buffer/data-sync.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
It is possible and valid for a pointer to be selected based on a
conditional before being used, and depending on the mode, those cases will
result in a phi with derefs as sources.
To achieve this, we don't rematerialize derefs that are used by phis.
As a consequence, when converting from SSA to regs, we may have phis
that come from different blocks and are used by phis. We now convert
those to regs too.
Validation was added to ensure only derefs of certain modes can be
used as phi sources. No extra validation is needed for the presence
of cast, any instruction that uses derefs will validate the
deref-chain is complete (ending in a cast or a var).
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
KHR_blend_equation_advanced_coherent isn't exposed on OpenGL ES 1.x, so
we shouldn't allow its enums there either.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
This enum shouldn't be allowed on OpenGL ES 1.x, so let's instead
use the extension-helpers, and check for desktop and GLES extensions
separately.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
This extension isn't enabled for GLES 1.x, so we shouldn't allow the
state there. Let's use the extension-helpers instead of CHECK_EXTENSION
for this.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
This just makes the logic of the checks for this enum the same for
gl{Enable,Disable} and for glIsEnabled. They are already functionally
the same, so this is just a minor code-cleanup.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
{En,Dis}ableClientState(PRIMITIVE_RESTART_NV) should only work on
compatibility contexts. While we're at it, modernize the code a bit,
by using the extension helpers instead of open-coding.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Make sure to sync all previous work if the given command buffer
has pending active queries. Otherwise the GPU might write queries
data after the reset operation.
This fixes a bunch of new dEQP-VK.query_pool.* CTS failures.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This makes the following packets use actual driver provided sizes rather
than guessing an arbitrary number:
- CC_VIEWPORT
- SF_CLIP_VIEWPORT
- BLEND_STATE
- COLOR_CALC_STATE
- SCISSOR_RECT
Reviewed-by: Sagar Ghuge <sagar.ghuge@intel.com>
(Technically this is common code, but it doesn't affect i965 or anv.)
Improves performance of GFXBench5/gl_tess_off on Skylake GT4e at 1080p
by 9.3933% +/- 0.0305157% by eliminating all spilling in the GS.
Improves performance of GFXBench5/gl_4_off (Car Chase) on Skylake GT4e
at 1080p by 0.325208% +/- 0.0842233% (n=18).
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
We scalarize IO to enable further optimizations, such as propagating
constant components across shaders, eliminating dead components, and
so on. This patch attempts to re-vectorize those operations after
the varying optimizations are done.
Intel GPUs are a scalar architecture, but IO operations work on whole
vec4's at a time, so we'd prefer to have a single IO load per vector
rather than 4 scalar IO loads. This re-vectorization can help a lot.
Broadcom GPUs, however, really do want scalar IO. radeonsi may want
this, or may want to leave it to LLVM. So, we make a new flag in the
NIR compiler options struct, and key it off of that, allowing drivers
to pick. (It's a bit awkward because we have per-stage settings, but
this is about IO between two stages...but I expect drivers to globally
prefer one way or the other. We can adjust later if needed.)
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This fixes the egl_mesa_platform_surfaceless piglit test as well
as the new egl_ext_device_base piglit test on classic swrast.
v2: Fix swrast surfaceless contexts on the driver side.
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
This makes use of radv_meta_resolve_compute_image() by filling
a VkImageResolve region instead of duplicating code.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This reverts commit 55376cb31e.
It's been over a year and both QT 5.9.5 and 5.11.0 contained a fix for the
original issue. It seems i965 only ever applied this workaround to the
18.0 branch.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
When using the binding tables to access arrays of YCbCr descriptors we
did not consider the offset of the accessed element. We can't do a
simple multiplication because the binding table entries are tightly packed.
For example element 0 of the array could use 2 entries/planes and
element 1 could use 2 entries/planes.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 3bb8768b9d ("anv: toggle on support for VK_EXT_ycbcr_image_arrays")
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Fixes the following piglit test and does not introduce any regressions.
spec@ext_packed_depth_stencil@fbo-depth-gl_depth24_stencil8-blit
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
This commit also adds codegen for branch since we need it
for discard_if.
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
The old code was not wrong because the transitions performed
after the resolves should re-emit the framebuffer if needed.
This change is mostly a no-op but it improves consistency
regarding other meta operations that need to save/restore subpasses.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This helper will be useful for clearing HTILE after some
depth/stencil resolves.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Just move the block that checks the availability bit into the
switch like other query types.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Fix a regression I inadvertently caused by acking typeless movs before
implementing/pushing this *whistles*
Nothing to see here, move along folks.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
lima_blit will blit between resources with different levels.
When blitting from a level != 0 source, it will sample from that level
of the resource as a texture.
The current texture setup won't respect the level when not using a
mipmap filter.
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Signed-off-by: Qiang Yu <yuq825@gmail.com>
Imagine this
resource_copy_region(ctx, dst, ..., src, ...);
transfer_map(ctx, src, 0, PIPE_TRANSFER_WRITE, ...);
at the beginning of a cmdbuf. We need to flush in transfer_map so
that the transfer is not reordered before the resource copy. The
check for "vctx->num_draws == 0 && vctx->num_compute == 0" is not
enough. Removing the optimization entirely.
Because of the more precise resource tracking in the previous
commit, I hope the performance impact is minimized. We will have to
go with perfect resource tracking, or attempt a more limited
optimization, if there are specific cases we really need to optimize
for.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
This gives us more precise resource tracking. It can be beneficial
because glFlush is often followed by state changes. We don't want
to reemit resources that are going to be unbound.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
The default null_output really needs to be static, otherwise the values
we'll eventually get later are doubly random (they are not initialized,
and even if they were it's a pointer to a local stack variable).
VMware bug 2349556.
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
We are currently leaking resources if they were sampled from. Once we
are done with a sampler, we should dereference that resource.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Jump over the container stage if we haven't changed any of the files
that are involved in building the container images.
This saves 1-2 minutes in each run and helps conserve resources.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Acked-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
The difference between imov and fmov has been a constant source of
confusion in NIR for years. No one really knows why we have two or when
to use one vs. the other. The real reason is that they do different
things in the presence of source and destination modifiers. However,
without modifiers (which many back-ends don't have), they are identical.
Now that we've reworked nir_lower_to_source_mods to leave one abs/neg
instruction in place rather than replacing them with imov or fmov
instructions, we don't need two different instructions at all anymore.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Acked-by: Rob Clark <robdclark@chromium.org>
Unless source modifiers are present, fmov and imov are the same.
There's no good reason for having two helpers.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Acked-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
It's potentially a tiny bit less efficient but the helpers make it much
easier to sort out the rules for updating source modifiers.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Acked-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
A few of our very late passes can end up generating vectors accidentally
so we need to get rid of them. The only known case of this is the ffma
peephole which generates fneg and fabs as vectors. Currently, they're
not a problem because they get turned into fmov which the back-end
compiler knows how to handle as a vector. That's about to change.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Acked-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This flag has caused more confusion than good in most cases. You can
validly use imov for floats or fmov for integers because, without source
modifiers, neither modify their input in any way. Using imov for floats
is more reliable so we go that direction.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Acked-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Both of these passes predate the nir_channel helper. We should just use
it instead of hand-rolling it in both passes.
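For illustration, a minimal usage sketch (b, vec and c are placeholder names
for a nir_builder, a vector SSA value and a component index; they are not
taken from the patch):
   nir_ssa_def *chan = nir_channel(b, vec, c);  /* pick out component c of vec */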
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Acked-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
If descriptorType is VK_DESCRIPTOR_TYPE_STORAGE_IMAGE
or VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT, the imageView member of each
element of pImageInfo must have been created with the identity swizzle.
Fixes: d2aa65eb
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Verified to work properly with Iris driver on Android Celadon. Cache
files get generated as 'com.android.opengl.shaders_cache' for each
application.
v2: check that cache was returned (Eric Engestrom)
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Now that the online compiler and pandecode are reliable and upstreamed,
nobody is using this. If somebody does need it, it should be easy enough
to bring back, I suppose. At the moment, it's just a maintenance hazard,
since meson is silly and does double builds for compiler updates (triple
for disassembler changes).
If people need the standalone _disassembler_, that can be added
trivially into pandecode (pandecode already includes the disassembler).
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Ryan Houdek <Sonicadvance1@gmail.com>
src/vulkan/util/vk_enum_to_str.c: In function ‘vk_structure_type_size’:
src/vulkan/util/vk_enum_to_str.c:3335:9: warning: case value ‘1000010000’ not in enumerated type ‘VkStructureType’ {aka ‘const enum VkStructureType’} [-Wswitch]
case VK_STRUCTURE_TYPE_NATIVE_BUFFER_ANDROID: return sizeof(VkNativeBufferANDROID);
^~~~
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
st/mesa now exposes KHR_blend_equation_advanced_coherent and
EXT_shader_framebuffer_fetch if the new capability is supported.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This extension requires the ability to read from all render targets,
so we only enable it if PIPE_CAP_FBFETCH >= PIPE_CAP_MAX_RENDER_TARGETS.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
TGSI's FBFETCH instruction currently only supports reading from a single
render target, but NIR intrinsics can support multiple render targets.
radeonsi can only support fetching from RT 0, but other drivers may be
able to support fetching from any render target.
To express this, this patch renames PIPE_CAP_TGSI_FS_FBFETCH to simply
PIPE_CAP_FBFETCH, and converts it from a boolean "is FBFETCH supported?"
to an integer number of render targets which can be fetched.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Felix noticed a crash when using INTEL_DEBUG=bat decoding. It turned
out that we were sometimes placing variable length data near the end
of a buffer, and with the decoder guessing random lengths rather than
having an actual count, it was walking off the end and crashing. So
this does more than improve the decoder output.
Unfortunately, this is a bit more complicated than i965's handling,
because we don't have a single state buffer. Various places upload
data via u_upload_mgr, and so there isn't a central place to record
the size. We don't need to catch every single place, however, since
it's only important to record variable length packets (like viewports
and binding tables).
State data also lives arbitrarily long, rather than being discarded on
every batch like i965, so we don't know when to clear out old entries
either. (We also don't have a callback when an upload buffer is
released.) So, this tracking may space leak over time. That's probably
okay though, as this is only a debugging feature and it's a slow leak.
We may also get lucky and overwrite existing entries as we reuse BOs,
though I find this unlikely to happen.
The fact that the decoder works in terms of offsets from a state base
address is also not ideal, as dynamic state base address and surface
state base address differ for iris. However, because dynamic state
addresses start from the top of a 4GB region, and binding tables start
from addresses [0, 64K), it's highly unlikely that we'll get overlap.
We can always improve this, but for now it's better than what we had.
`-Wswitch` applies to `switch()`, not `case:`, and is bypassed by the
presence of a `default:` anyway, so let's drop the `default:` and move
the warning suppression to where it can make a difference, and then it
turns out that we don't need to keep a list of special cases anymore :)
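A hedged sketch of the resulting shape (only the NATIVE_BUFFER_ANDROID case is
taken from the warning above; the rest of the generated switch is elided):
   #pragma GCC diagnostic push
   #pragma GCC diagnostic ignored "-Wswitch"  /* suppress only around the out-of-enum case */
   switch (s_type) {
   case VK_STRUCTURE_TYPE_NATIVE_BUFFER_ANDROID:
      return sizeof(VkNativeBufferANDROID);
   /* ... generated cases, no default: needed ... */
   }
   #pragma GCC diagnostic pop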
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
INTEL_conservative_rasterization isn't exposed on compatibility
contexts, nor for GLES 3.0 and below. We already do this correctly for
gl{Enable,Disable}, but we should do the same for glIsEnabled as well.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
IsEnabled(FRAGMENT_PROGRAM) isn't supposed to be allowed, but our
check allowed this anyway. Let's make these checks consistent, and
while we're at it, modernize them a bit.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
IsEnabled(TEXTURE_CUBE_MAP) isn't supposed to be allowed, but our
check allowed this anyway. Let's make these checks consistent, and
while we're at it, modernize them a bit.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
The 'CAP' argument has been unused in both of these macros since
2010, so let's get rid of it from both.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
This greatly simplifies the code to calculate if we should add a
buffer to the resource list. This uses the spec rules and simple
math to decide if we should add the buffer rather than complex
string processing.
This patch refines a patch present in the ARB_gl_spirv merge
request for the NIR linker and applies it to the GLSL IR linker.
This is why we also move the function to the shared linker code,
because we will want to reuse the code for the NIR linker also.
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
virgl_transfer_queue_is_queued was used to avoid flushing. That
fails when the resource is being accessed by previous cmdbufs but
not the current one.
The new approach of tracking the valid buffer range does not apply
to textures, however. But hopefully that is fine because the goal is
to avoid waiting in this scenario
glBufferSubData(..., offset, size, data1);
glDrawArrays(...);
// append new vertex data
glBufferSubData(..., offset+size, size, data2);
glDrawArrays(...);
If glTex(Sub)Image* turns out to be an issue, we will need to track
valid level/layer ranges as well.
v2: update virgl_buffer_transfer_extend as well
v3: do not remove virgl_transfer_queue_is_queued
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com> (v1)
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org> (v2)
st/mesa does not need it and virglrenderer does not really support
it. Remove the support so that we are sure pipe_surface never
refers to a buffer resource.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
On machines with many cores, you can run into this issue:
../mesa-9999/src/vulkan/overlay-layer/overlay.cpp:42:10: fatal error: vk_enum_to_str.h: No such file or directory
v2: Move declare_dependency around (Eric)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reported-by: Jan Ziak
Cc: <mesa-stable@lists.freedesktop.org>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
The DRM_CONF_SHARE_FD code did not check for Linux, so the commit that
introduced PIPE_CAP_DMABUF broke Wayland-EGL clients on FreeBSD.
Fixes: 8ae50e60 (gallium: replace DRM_CONF_SHARE_FD with PIPE_CAP_DMABUF)
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
From the Vulkan spec 1.1.108:
"After query pool creation, each query must be reset before
it is used."
So, the driver doesn't need to do this at creation time.
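For reference, a sketch of the application-side sequence the spec requires
(command buffer and pool names are illustrative):
   vkCmdResetQueryPool(cmd, pool, first_query, query_count);
   vkCmdBeginQuery(cmd, pool, first_query, 0);
   /* ... commands being measured ... */
   vkCmdEndQuery(cmd, pool, first_query);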
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
We were checking this based on nir->info.name, but with the shader
cache enabled, nir_strip throws out the name, causing us to use IEEE
mode for ARB programs.
gl-1.0-spot-light regressed because it wants ALT mode for 0^0 behavior.
Fixes: dc5dc727d5 iris: Serialize the NIR to a blob we can use for shader cache purposes.
This lets st/nir cache the NIR for shaders, based on the shader source
string hash, allowing us to skip initial compiles altogether, and also
letting us start from there should we need to recompile for NOS.
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
We will use a hash of the serialized NIR together with brw_prog_*_key
(for NOS) as the disk cache key, where the disk cache contains actual
assembly shaders.
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
This creates the on-disk shader cache data structure, and handles the
build-id keying aspects. The next commits will fill it out so it's
actually used.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
It had been internal to iris_program.c, but with the upcoming disk cache
code, the "program module" is going to be spread across a couple source
files. Into a header it goes!
Now it lives alongside iris_compiled_shader, which makes sense.
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
SPV_GOOGLE_decorate_string and SPV_GOOGLE_hlsl_functionality1 were
incorporated to SPIR-V. Let's pick the names used by SPIR-V core.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Choose the first we see in the grammar file as the main one. This is
needed to parse SPIR-V 1.4 because it introduced opcode aliases.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Through a series of rebases, I forgot to switch a bunch of error
checks to use a macro that will show where the problem is, rather than
printing out a dumb "ERROR!".
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
The
if (!pipe && timestamp)
logic was broken. It should have been:
if (!pipe && !timestamp)
Let's just drop this condition as the following code does the right
thing for all cases.
An error was appearing with the following variables:
VK_INSTANCE_LAYERS=VK_LAYER_MESA_overlay VK_LAYER_MESA_OVERLAY_CONFIG=gpu_timing
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: ea7a6fa980 ("vulkan/overlay: add pipeline statistic & timestamps support")
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
%error-verbose has been deprecated since Bison 3.0, which was released
in 2013. In Bison 3.3.1, which was recently released, this has started
causing warnings. Let's update the code to do this in the modern way
instead, to avoid cluttering the output needlessly.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
This just adds some split var splitting tests; it verifies the results
by counting derefs and local vars.
a basic load from inputs, store to array,
same as before but with a shader temp
struct { float } [4] don't split test
a basic load from inputs, with some out of band loads.
a load/store of only half the array
two level array, load from inputs store to all levels
a basic load from inputs with an indirect store to array.
two level array, indirect store to lvl 0
two level array, indirect store to lvl 1
load from inputs, store to array twice
load from input, store to array, load from array, store to another array.
load from input and copy deref to array
create wildcard derefs, and do a copy
v2: use array_imm helpers, move derefs out of loops,
rename toplevel/secondlevel, use ints, fix lvl1 don't split test,
rename globals to shader_temp, add comment, check the derefs type
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
When running with NIR_TEST_CLONE=1, the pointer will not be valid, as
the whole shader is going to be recreated every pass. Prefer using
is_entrypoint (to query when looping) and nir_shader_get_entrypoint()
instead.
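Roughly, the safe patterns look like this (a sketch, not the exact diff):
   nir_foreach_function(func, shader) {
      if (!func->is_entrypoint)
         continue;
      /* ... work on func->impl ... */
   }
   /* or, when the impl is needed directly: */
   nir_function_impl *impl = nir_shader_get_entrypoint(shader);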
Fixes the Vulkan Piglit tests
- vulkan/glsl450/frexp-double
- vulkan/glsl450/isinf-double
- vulkan/shaders/fs-multiple-large-local-array
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108957
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
When num_state_slots is 0, don't create the array. This was
triggering the following assert when running vkcube with
NIR_TEST_CLONE=1
vkcube: ../src/compiler/nir/nir_split_per_member_structs.c:66:
split_variable: Assertion `var->state_slots == NULL' failed.
Fixes: 9fbd390dd4 "nir: Add support for cloning shaders"
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
With commit c89e8470e5, framebuffers are purged after unbinding the context,
but this change also introduces heap corruption when running the Rhino application
on the VMware svga device. Instead of purging the framebuffers after the context
is unbound, this patch first unbinds the winsys buffers, then purges the framebuffers
with the current context, and finally unbinds the context.
This fixes the heap corruption.
Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Brian Paul <brianp@vmware.com>
Use the storage class address format information to pick the right
constant values for a NULL pointer.
v2: Don't add a deref_cast to the values. (Jason)
v3: Update to use vtn_storage_class_to_mode() and
vtn_mode_to_address_format() explicitly. (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
And change vtn_storage_class_to_mode() to accept NULL as the
interface_type. In this case, if we have a SpvStorageClassUniform, we
assume it uses an ubo_addr_format, like the code being replaced by
the helper.
That assumption is a problem, but no different than the previous
code.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Corresponding to SpvStorageClassImage. We see pointers for that
storage class in tests, but don't use the storage class any further.
Adding this so that we can call vtn_mode_to_address_format() for all
supported pointers.
v2: Fail when trying to create a SpvStorageClassImage
variable. (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Handles all the modes and we can use it in combination with
nir_address_format_to_glsl_type() to replace the
vtn_ptr_type_for_mode() helper. Since the new helper is more generic,
moved the assertions from the old one to the call sites.
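Usage then looks roughly like this (a sketch; b and mode stand for whatever the
call site already has):
   const struct glsl_type *type =
      nir_address_format_to_glsl_type(vtn_mode_to_address_format(b, mode));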
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Just the mode is needed to decide whether SSA offsets are needed, so
make a function that takes that and reuse it for
vtn_pointer_uses_ssa_offset().
This will be used for constant null pointers, that won't have a
vtn_pointer handy.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
v2: Renamed from vtn_interface_type. (Jason)
Accept any type not only pointers.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This is a preparation to handle OpConstantNull for pointers, we'll use
the vtn_type to get to the address format and then the appropriate
representation of NULL pointer.
v2: Move rest of body to use vtn_type. (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Returns the nir_const_value * with the representation of the NULL
pointer for each address format.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Instead of setting the glsl types of the pointers for each resource,
set the nir_address_format, from which we can derive the glsl_type,
and in the future the bit pattern representing a NULL pointer.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This is a simple 32-bit address which is not a global address. It gives
us a format that doesn't use 0 as its null pointer value. We will need
this in anv to represent nir_var_mem_shared addresses.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
An address format representing a purely logical addressing model. In
this model, all deref chains must be complete from the dereference
operation to the variable. Cast derefs are not allowed. These
addresses will be 32-bit scalars but the format is immaterial because
you can always chase the chain. E.g. push constants in anv.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This "fixes" hangs seen w/ various android games. I think a similar
issue to with constant state, we need to avoid CP_LOAD_STATE until
previous draw completes.
It isn't entirely clear why blob doesn't need to do this, but it might
have a different way to accomplish the same thing.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Since we use the same constant state for both binning pass program state and
draw pass state, and it is possible for the binning pass shader to use fewer
consts, we need to make sure we program a large enough constlen.
Signed-off-by: Rob Clark <robdclark@chromium.org>
We assert that the entry is valid just before this, but since that doesn't
stop execution, we need to check the entry before the next validation
assert.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Some vector sysvals can't be lowered to scalar, so we need to break
them into scalars in the nir-to-gpir conversion.
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: Erico Nunes <nunes.erico@gmail.com>
Signed-off-by: Qiang Yu <yuq825@gmail.com>
This commit moves the register allocator out of midgard_compile.c and
into its own midgard_ra.c file. In doing so, a number of dependencies
are identified and moved into their own files in turn. midgard_compile.c
is still fairly monolithic, but this should help.
Code churn, but no functional changes should be introduced by this
commit.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This fixes a few miscellaneous issues with the fixed-function blending
programming, though it is far from complete. For cases known to be
buggy, we force a fallback to blend shaders.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This implements blend shaders via nir_lower_blend, by creating dummy
fragment shaders simply passing through the source color and using the
new lowering pass to inject blendability.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
To prepare for the new nir_lower_blend pass, we wire up the intrinsics
for tilebuffer reads and constant colour loading.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This new lowering pass implements the OpenGL ES blend pipeline in
shaders, applicable to hardware lacking full-featured blending hardware
(including Midgard/Bifrost and vc4). This pass is run on a fragment
shader, rewriting the store to a blended version, loading in the
framebuffer destination color and constant color via intrinsics as
necessary. This pass is sufficient for OpenGL ES 2.0 and is verified to
pass dEQP's blend tests. MIN/MAX modes are included and tested as well.
That said, at present it has the following limitations:
- MRT is not supported (ES3).
- sRGB support is missing (ES3).
- Extended blending is not yet ported from GLSL IR lowering (ES3.2)
- Dual-source blending is not supported. (N/A)
- Logic ops are not supported. (N/A)
v2: Fix code conventions (per Ian Romanick's feedback). Implement color
masks.
This pass should be in common nir/ space, but due to non-technical
reasons, for now it's in Panfrost space. In the future, depending if
other drivers need some of the functionality, we can move this back to
src/compiler/nir space.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This adds a forgotten decode line on Midgard and adds the field of a
blend constant on Bifrost. The Bifrost encoding is fairly weird; whereas
Midgard is just a regular 32-bit float, Bifrost uses a fancy
fixed-point-esque encoding. The decode logic here is experimentally
correct. The encode logic is a sort of "guesstimate", assuming that the
high byte is just int(f / 255.0) and then solving algebraically for the
low byte. This might be slightly off in some cases.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Ryan Houdek <Sonicadvance1@gmail.com>
This eliminates one major source of #ifdef parity between Midgard and
Bifrost, better representing how the struct acts on Midgard and allowing
proper decodes on Bifrost.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Ryan Houdek <Sonicadvance1@gmail.com>
We already have the Bifrost disassembler in-tree, so now that panwrap is
able to dump Bifrost command streams, hook up the disassembler to
pandecode.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Ryan Houdek <Sonicadvance1@gmail.com>
For IMMEDIATE and FIFO, most games work in a pipelined manner where they
can produce frames at a rate of 1/MAX(CPU duration, GPU duration), but
the render latency is CPU duration + GPU duration.
This means that with scanout from pageflipping we need 3 frames to run
full speed:
1) CPU rendering work
2) GPU rendering work
3) scanout
Once we have a nonblocking acquire that returns a semaphore we can merge
1 and 3. Hence the ideal implementation needs only 2 images, but games
cannot tell that we currently do not have an ideal implementation, and
hence that they need to allocate 3 images. So let us do it for them.
This is a tradeoff as it uses more memory than needed for non-fullscreen
and non-performance intensive applications.
Since this is pretty much a TODO that can use the context I added this as
a comment.
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Acked-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
EGL annoyingly defines a few variants of this token:
EGL_CONTEXT_OPENGL_RESET_NOTIFICATION_STRATEGY_EXT - 0x3138
EGL_CONTEXT_OPENGL_RESET_NOTIFICATION_STRATEGY_KHR - 0x31BD
EGL_CONTEXT_OPENGL_RESET_NOTIFICATION_STRATEGY - 0x31BD
The EGL_EXT_create_context_robustness extension specifies that the EXT
token is only valid for ES contexts, not GL. The EGL_KHR_create_context
extension defines the KHR version, and says it is only allowed for GL
contexts, and specifically calls out that it's an error for ES contexts.
But EGL 1.5 includes the new suffixless token, which has the same value
as the KHR version, and specifically calls out that it's now valid to
use with both GL and ES contexts. So we should allow this.
Fixes KHR-NoContext.es32.robustness.no_reset_notification and
KHR-NoContext.es32.robustness.lose_context_on_reset on iris, which
apparently is exposing EGL 1.5.
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
From the Vulkan 1.1.107 spec:
Sample shading is enabled for a graphics pipeline:
- If the interface of the fragment shader entry point of the
graphics pipeline includes an input variable decorated with
SampleId or SamplePosition. In this case minSampleShadingFactor
takes the value 1.0.
- Else if the sampleShadingEnable member of the
VkPipelineMultisampleStateCreateInfo structure specified when
creating the graphics pipeline is set to VK_TRUE. In this case
minSampleShadingFactor takes the value of
VkPipelineMultisampleStateCreateInfo::minSampleShading.
Otherwise, sample shading is considered disabled.
In other words, if sampleShadingEnable is set to VK_FALSE, we should
ignore minSampleShading.
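In code terms, the intended behaviour is roughly (a sketch with illustrative
variable names, not the actual anv code):
   if (ms_info && ms_info->sampleShadingEnable)
      min_sample_shading = ms_info->minSampleShading;
   else
      min_sample_shading = 0.0f;  /* sampleShadingEnable == VK_FALSE: ignore minSampleShading */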
Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This was an unintended artifact of my testing of bindless images. We
should be choosing bindless or not dynamically.
Fixes: c0d9926df7 "anv: Use bindless handles for images"
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
We need to free the PrimitiveOffsets memory allocation in draw_gs_destroy().
This fixes a memory leak found while running piglit on Windows.
Fixes: 7720ce32a ("draw: add support to tgsi paths for geometry streams. (v2)")
Tested with piglit
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Charmaine Lee <charmainel@vmware.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Now that we have the descriptor buffer mechanism, emulated texture
swizzle can be implemented in a very non-invasive way. Previous
attempts all tried to extend the push constant based image param
mechanism which was gross. This could, in theory, be done much faster
with a magic back-end instruction which does indirect MOVs but Vulkan on
IVB is already so slow this isn't going to matter much.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=104355
Cc: "19.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
The load/store optimizer pass doesn't handle WaW hazards correctly
and this is the root cause of the reflection issue with Monster
Hunter World. AFAIK, it's the only game that is affected by this
issue.
This is fixed with LLVM r361008, but we need a workaround for older
LLVM versions unfortunately.
Cc: "19.0" "19.1" <mesa-stable@lists.freedesktop.org>
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The vmwgfx driver supports emulated coherent surface memory as of version
2.16. Add an environment variable to enable this functionality for
texture- and buffer maps: SVGA_FORCE_COHERENT.
This environment variable should be used for testing only.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
In order to be able to add access modes to a pb_validate_entry, update
the pb_validate_add_buffer function to take a pointer hash table and also
to return whether the buffer was already on the validate list.
Update the svga winsys accordingly.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Charmaine Lee <charmainel@vmware.com>
The rendered-to flag indicates that the HW surface content is more recent
than the content of the mob. That's the case after a SurfaceDMA transfer
to the surface.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Charmaine Lee <charmainel@vmware.com>
SVGA_RELOC_INTERNAL indicates a transfer between surface and backing mob.
This means that if the GPU for example reads from the surface it writes
to the backing mob. But since the buffer mapping code allows for
simultaneous gpu- and cpu read access, a read from the surface to the mob
will not synchronize a subsequent map to the readback.
Fix this by inverting the mob access mode in a surface relocation with
SVGA_RELOC_INTERNAL set.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Charmaine Lee <charmainel@vmware.com>
Instead, unconditionally call SVGA3D_InvalidateGBSurface() since it's also
needed on Linux for dirty buffers and operation without SurfaceDMA.
For non-guest-backed operation, remove the surface cache surface invalidation
altogether.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Charmaine Lee <charmainel@vmware.com>
This reverts commit 865b9ddae4.
The buffer always reports format PIPE_FORMAT_R8_UNORM so with this patch only
one component would be supported. The original issue is still relevant, but
the fix should be different.
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
glsl_to_nir.cpp:276: uninit_member: Non-static class member "sig" is not initialized in this constructor nor in any functions that it calls.
Reported by coverity
Acked-by: Ilia Mirkin <imirkin@alum.mit.edu>
imgui_draw.cpp:1781: error[shiftTooManyBitsSigned]: Shifting signed 32-bit value by 31 bits is undefined behaviour
Reported by coverity
Acked-by: Ilia Mirkin <imirkin@alum.mit.edu>
link_uniforms.cpp:477: uninit_member: Non-static class member "shader_storage_blocks_write_access" is not initialized in this constructor nor in any functions that it calls.
Reported by coverity.
v2: fix 9->0 typo (Ilia)
Acked-by: Ilia Mirkin <imirkin@alum.mit.edu>
src/compiler/glsl_types.cpp:577: uninit_member: Non-static class member "packed" is not initialized in this constructor nor in any functions that it calls.
from Coverity.
Fixes: 659f333b3a (glsl: add packed for struct types)
Acked-by: Ilia Mirkin <imirkin@alum.mit.edu>
Many of these are now patched; one of them we patch here. Regardless,
this is one less thing to worry about in the code, I suppose.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Don't attempt sampling with HiZ if the sampler lacks support for it. On
ICL, the HW docs state that sampling with HiZ is not supported and that
instances of AUX_HIZ in the RENDER_SURFACE_STATE object will be
interpreted as AUX_NONE.
Cc: <mesa-stable@lists.freedesktop.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
If the SSA def produced by this instruction is only used in the block in
which it is defined and is not used by ifs or phis, then we don't have
a reason to convert it to a register in
nir_lower_ssa_defs_to_regs_block().
The special case for derefs is covered by the general case, so can be
removed: at this point all derefs in the block are
materialized (i.e. the whole deref chain is in the block) and derefs
are not used in phis.
v2: Fix wrong check for if_uses. If there's such an use, the def is
not "local_to_block". (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Normally gl_nir_lower_samplers_as_deref records info->textures_used
for us, but this pass runs after that, attempting to assign samplers
in the same order as st_atom_texture's external_samplers_used loop
so the stars align and we get the same locations.
Since we're adding textures late, we need to amend info->textures_used.
iris uses info->textures_used to set up texture bindings; this fixes
Piglit's ext_image_dma_buf_import-sample-{nv12,yuv420,yvu420} there.
Reviewed-by: Rob Clark <robdclark@gmail.com>
First, allow the case for negative powers of two. Then ensure that we
use the absolute value of the non-constant value to calculate the
quotient -- this was hinted in the code by the name 'uq'.
This fixes an issue when 'd' is positive and 'n' is negative. The
ishr will propagate the negative sign and we'll use nir_ineg() again,
incorrectly.
v2: First version used only ishr, but that isn't sufficient, since it
never can produce a zero as a result. (Jason)
Allow negative powers of two. (Caio)
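For reference, a C sketch of the math the lowered code computes for |d| = 2^k
with k >= 1 (the pass itself emits NIR, not C):
   #include <stdint.h>
   static int32_t idiv_pow2(int32_t n, int32_t d, unsigned k)
   {
      uint32_t abs_n = n < 0 ? 0u - (uint32_t)n : (uint32_t)n;   /* |n|, safe for INT32_MIN */
      uint32_t uq = abs_n >> k;                                  /* |n| / 2^k */
      return ((n < 0) != (d < 0)) ? -(int32_t)uq : (int32_t)uq;  /* reapply the sign */
   }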
Fixes: 74492ebad9 "nir: Add a pass for lowering integer division by constants"
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
shader-db's report.py will use this to see when we've changed loop
unrolling behavior on a shader and skip including other stats like
instruction count from being considered for that shader, since they won't
be useful as a proxy for real world performance in that case.
Reviewed-by: Rob Clark <robdclark@gmail.com>
Tested-by: Eduardo Lima Mitev <elima@igalia.com>
This lets us reuse their report.py, at the expense of fd-report.py no
longer working.
Reviewed-by: Rob Clark <robdclark@gmail.com>
Tested-by: Eduardo Lima Mitev <elima@igalia.com>
It was more of a hindrance, as it pretended that we could compile in the
driver with a missing screen.
Reviewed-by: Rob Clark <robdclark@gmail.com>
Tested-by: Eduardo Lima Mitev <elima@igalia.com>
The TTN path needs access to the screen to make the right decisions about
lowering, but we didn't have pctx->screen set up at fdN_prog_init time.
Reviewed-by: Rob Clark <robdclark@gmail.com>
Tested-by: Eduardo Lima Mitev <elima@igalia.com>
The prim discard compute shader bakes InstanceID into the output index buffer.
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Acked-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
If a prim discard compute shader hasn't finished compilation, we don't want
to any shader.
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Acked-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
The primitive discard compute shader will get the position output this way.
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Acked-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
This commit adjusts the capabilities returned
by the SWR driver and the documentation to correctly
report the following extensions:
GL_ARB_texture_query_lod, GL_ARB_texture_cube_map_array,
GL_ARB_gpu_shader_fp64, GL_ARB_texture_gather,
GL_ARB_vertex_attrib_64bit.
Reviewed-by: Alok Hota <alok.hota@intel.com>
For newcomers to gitlab, it is not evident that it is better to press
the "Resolve Discussion" button when you update your branch to handle
feedback.
v2:
* Fix several grammar nits, reorder, use new corrected text (Connor
Abbot)
* Use "reviewers", instead of reviewer (Eric Engestrom)
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Transform feedback draws get the number of vertices from the transform
feedback object. In draw, we'll figure this out as the number of bytes
written divided by the stride. However, it is apparently possible we end
up with a stride of 0 there (not entirely sure it could happen with GL).
Probably when nothing was actually ever written (so we don't actually
have a stride set). Just avoid the division by zero by setting the count
to 0.
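The guard amounts to something like this (a sketch; names are illustrative):
   unsigned count = stride ? bytes_written / stride : 0;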
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
Use fstat() only to pre-allocate a big enough buffer.
This fixes a race where if the file grows between fstat() and read()
we would be missing the end of the file, and if the file slims down
read() would just fail.
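A minimal sketch of the approach (not the exact os_read_file() code; allocation
failure handling is elided):
   #include <stdlib.h>
   #include <sys/stat.h>
   #include <unistd.h>

   static char *read_whole_file(int fd, size_t *size)
   {
      struct stat st;
      size_t cap = (fstat(fd, &st) == 0 && st.st_size > 0) ?
                   (size_t)st.st_size + 1 : 1024;  /* fstat() only sizes the buffer */
      char *buf = malloc(cap);
      size_t len = 0;
      for (;;) {
         if (len == cap)
            buf = realloc(buf, cap *= 2);          /* the file grew: keep going */
         ssize_t r = read(fd, buf + len, cap - len);
         if (r <= 0)
            break;                                 /* read() decides where the file ends */
         len += r;
      }
      *size = len;
      return buf;
   }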
Fixes: 316964709e "util: add os_read_file() helper"
Reported-by: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This pass moves instructions around and adds control-flow in the
middle of blocks. We need to use nir_foreach_instr_safe to ensure that
we iterate over instructions correctly anyway.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 3bd5457641 ("nir: Add a lowering pass for non-uniform resource access")
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
For a block with a contiguous chunk of 32 vars that don't need updating,
this lets us skip 32 vars at a time. Also, by using bitscan, we only
iterate for each set bit rather than testing them all one at a time.
Looking at perf (with -O0 which is unfortunately necessary to get
reasonable back-traces), this seems to cut about 50-60% of the time
spent in compute_start_end() which is, itself, about 4-6% of the
run-time. In the real world, with a release driver build, this cuts
1.34% off a full shader-db run. (I ran shader-db 5 times in each
configuration).
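The shape of the loop is roughly this (a sketch; the variable and helper names
are illustrative, not the actual fs_live_variables code):
   for (unsigned w = 0; w < BITSET_WORDS(num_vars); w++) {
      BITSET_WORD mask = defs[w] & ~seen[w];
      if (!mask)
         continue;                       /* skip a whole chunk of 32 vars */
      while (mask) {
         int i = u_bit_scan(&mask);      /* visit only the set bits */
         update_var(w * BITSET_WORDBITS + i);
      }
   }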
Reviewed-by: Matt Turner <mattst88@gmail.com>
Otherwise, we get an effectively random spill reg because we no longer
have the information from RA to guide us. Also, a completely clean
graph has undefined data in in_stack which is used for choosing the
spill reg so it really is non-deterministic.
Fixes: e99081e76d "intel/fs/ra: Spill without destroying the..."
Tested-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This line is no longer relevant now that booleans are 1-bit, and in fact
causes issues (infinite progress loop between algebraic optimizations
and copy prop) with constant vector masks.
No shader-db changes on Intel platforms (Jason).
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
This commit adds a bunch of new load/store opcodes, largely related to
OpenCL, as well as adjusting the name of existing opcodes to be more
uniform. The immediate effect is compute shaders are substantially
easier to interpret now.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Midgard ALU features two types of constants: embedded constants (128-bit
chunk, zero/one per schedule bundle) and inline constants (16-bit
splattered into the op, second source if present). Inline constants are
much more efficient from a space and scheduling freedom standpoint, so
it's desirable to inline when possible. Now that integer ops are well
understood and in use, we enable inlining of integer constants in
addition to floats (which have been inlined since forever).
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
By default, the "normal" output modifier is set on ALU ops. This is the
correct default for float outputs -- for floats, it preserves the semantic
value. Unfortunately, when used with integers, it does not preserve the
bitstream encoding, causing misbehaviour. (It's an open question what
happens when `normal` is used with integers -- does it apply some other
transformation? or does it do floating point normalization/etc on the
ints as if they were floats?).
Instead, we default to the "clamp to integer" output modifier for
ops writing integers. Semantically, this makes sense (clamping an
integer to the nearest integer is the identity function). In the
hardware with an integer opcode, this is the actual "normal".
This fixes numerous sporadic and sometimes bizarre bugs relating to
integers, especially integer moves. With this in place, we no longer
care about the types involved; it's just bits on the wire again.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
From Gallium (and our) perspective, the stride of a BO is arbitrary. For
internal buffers, we can make it something nice, but for imported linear
buffers (e.g. EGL clients), we don't always have that luxury. To cope,
we calculate the expected stride of a texture, compare it to the BO's
actual reported stride, and if they differ, set the latter as a custom
stride.
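Roughly (a sketch; the helper and field names here are illustrative rather than
the driver's exact ones):
   unsigned expected = util_format_get_stride(prsrc->format, prsrc->width0);
   if (bo_stride != expected)
      custom_stride = bo_stride;  /* emit the BO's real stride in the texture descriptor */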
Fixes rendering of windows not on tile boundaries (noticeable in Weston
with es2gears_wayland, for instance). Also, this should fix stride
issues with buffer reloading.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
With a special flag, texture descriptors can include custom stride(s).
We haven't seen a case of this used for mipmaps/cubemaps, so it's not
clear how that will be encoded, but this dumps correctly for single
one-level 2D textures.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
One field was not dumped for some reason. It's observed to be 0, but
it's still good to have it available.
Also, extra fields might be snuck into the bitmaps array (it's
variable-length at the end), and we want to guard against that
possibility, so we dump a little more.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Acked-by: Dave Airlie <airlied@redhat.com>
We already use GFX9 and I don't want us to have confusing naming
in the driver. GFXn naming is better from the driver perspective,
because it's the real version of the gfx portion of the hw. Also,
CIK means Bonaire-Kaveri-Kabini, it doesn't mean CI.
It shouldn't confuse our SDMA, UVD, VCE etc. code much. Those have
nothing to do with GFXn and they have their own version numbers.
Handle PIPE_TRANSFER_DONT_BLOCK and PIPE_TRANSFER_MAP_DIRECTLY.
Make virgl_resource_transfer_prepare return an enum instead of a
bool for extensibility (e.g., instruct the callers to map
differently).
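The return type is along these lines (a sketch; the enumerant names are
assumptions, not necessarily those used in the patch):
   enum virgl_transfer_map_type {
      VIRGL_TRANSFER_MAP_ERROR = -1,
      VIRGL_TRANSFER_MAP_HW_RES,  /* map the hardware resource, as before */
      /* room to grow, e.g. mapping a staging buffer instead */
   };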
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
virgl_resource_transfer_prepare should be called before mapping to
prepare the resource. It does flush, readback, and wait as needed.
virgl_res_needs_flush and virgl_res_needs_readback become internal
helpers to the new function.
There should be no externally visible change.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Obviously missing the instruction insertion into the SSA list.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 3bd5457641 ("nir: Add a lowering pass for non-uniform resource access")
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
There's some debate about whether we should support this on older
hardware as well. Currently i965 turns it off on Gen8- though, so
we follow suit. If this changes, we can update this as well.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Corresponding to GL_ARB_fragment_shader_interlock and
GL_NV_fragment_shader_interlock. Currently, only the NIR paths
support this functionality, but someone could conceivably add it
to TGSI too.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
In the future I want to expand this to 128 bits, for vec16 support, so
let's just put the code in place to use bitset ranges now.
v2: just declare the bitset to be the max of what we should ever see
and change the assert to reflect it.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Our tessellation control shaders can be dispatched in several modes.
- SINGLE_PATCH (Gen7+) processes a single patch per thread, with each
channel corresponding to a different patch vertex. PATCHLIST_N will
launch (N / 8) threads. If N is less than 8, some channels will be
disabled, leaving some untapped hardware capabilities. Conditionals
based on gl_InvocationID are non-uniform, which means that they'll
often have to execute both paths. However, if there are fewer than
8 vertices, all invocations will happen within a single thread, so
barriers can become no-ops, which is nice. We also burn a maximum
of 4 registers for ICP handles, so we can compile without regard for
the value of N. It also works in all cases.
- DUAL_PATCH mode processes up to two patches at a time, where the first
four channels come from patch 1, and the second group of four come
from patch 2. This tries to provide better EU utilization for small
patches (N <= 4). It cannot be used in all cases.
- 8_PATCH mode processes 8 patches at a time, with a thread launched per
vertex in the patch. Each channel corresponds to the same vertex, but
in each of the 8 patches. This utilizes all channels even for small
patches. It also makes conditions on gl_InvocationID uniform, leading
to proper jumps. Barriers, unfortunately, become real. Worse, for
PATCHLIST_N, the thread payload burns N registers for ICP handles.
This can burn up to 32 registers, or 1/4 of our register file, for
URB handles. For Vulkan (and DX), we know the number of vertices at
compile time, so we can limit the amount of waste. In GL, the patch
dimension is dynamic state, so we either would have to waste all 32
(not reasonable) or guess (badly) and recompile. This is unfortunate.
Because we can only spawn 16 thread instances, we can only use this
mode for PATCHLIST_16 and smaller. The rest must use SINGLE_PATCH.
This patch implements the new 8_PATCH TCS mode, but leaves us using
SINGLE_PATCH by default. A new INTEL_DEBUG=tcs8 flag will switch to
using 8_PATCH mode for testing and benchmarking purposes. We may
want to consider using 8_PATCH mode in Vulkan in some cases.
The data I've seen shows that 8_PATCH mode can be more efficient in
some cases, but SINGLE_PATCH mode (the one we use today) is faster
in other cases. Ultimately, the TES matters much more than the TCS
for performance, so the decision may not matter much.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The payload field is actually "instance" (thread number), which is used
to calculate the invocation ID.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
When we add 8_PATCH mode, this will get a bit more complex, so we may
as well start by putting it in a helper function.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This diverged back in f1374805a8 ("drm-uapi: use local files, not system
libdrm") to point at drm-uapi's copy, which we don't need now that we're
actually in drm-uapi.
Reviewed-by: Rob Clark <robdclark@gmail.com>
The goal is to avoid having an extra MOV instruction to perform the
saturate. Doing the subtraction first allows the saturate to be applied
to the ADD instruction making the MOV unnecessary. Values generated in
different block and values from non-ALU instructions (e.g., texture
instructions) almost always need the extra MOV.
Multiply instructions are restricted because doing this rearrangement
can interfere with the generation of flrp and ffma instructions.
v2: Now that the final method has been selected, squash three commits
into one.
All Intel platforms has similar results. (Ice Lake shown)
total instructions in shared programs: 17223214 -> 17219386 (-0.02%)
instructions in affected programs: 1524376 -> 1520548 (-0.25%)
helped: 2686
HURT: 26
helped stats (abs) min: 1 max: 32 x̄: 1.44 x̃: 1
helped stats (rel) min: 0.03% max: 16.67% x̄: 0.54% x̃: 0.37%
HURT stats (abs) min: 1 max: 2 x̄: 1.69 x̃: 2
HURT stats (rel) min: 0.33% max: 1.67% x̄: 0.54% x̃: 0.35%
95% mean confidence interval for instructions value: -1.46 -1.36
95% mean confidence interval for instructions %-change: -0.56% -0.50%
Instructions are helped.
total cycles in shared programs: 360811571 -> 360791896 (<.01%)
cycles in affected programs: 103650214 -> 103630539 (-0.02%)
helped: 1557
HURT: 675
helped stats (abs) min: 1 max: 1773 x̄: 41.44 x̃: 16
helped stats (rel) min: <.01% max: 26.77% x̄: 1.37% x̃: 0.64%
HURT stats (abs) min: 1 max: 1513 x̄: 66.44 x̃: 14
HURT stats (rel) min: <.01% max: 46.16% x̄: 2.00% x̃: 0.49%
95% mean confidence interval for cycles value: -14.82 -2.81
95% mean confidence interval for cycles %-change: -0.50% -0.20%
Cycles are helped.
LOST: 2
GAINED: 0
Reviewed-by: Matt Turner <mattst88@gmail.com> [v1]
Reviewed-by: Thomas Helland <thomashelland90@gmail.com>
The value-range tracking pass that is coming is not clever enough to
know that the result of the ffma must be non-negative. Making it that
smart will require quite a bit of work. It might be possible to add a
special case that detects that a whole tree of fadd(fmul(fsat(a),
fneg(fsat(a))), 1.0) cannot be negative.
For cases when the comparison is used in the domain guard for a
square-root (see nir/algebraic: Simplify fsqrt domain guard), the
compare may be converted to a fmax. This patch also handles that case.
All of the affected cases are in DiRT: Showdown.
All Gen7+ platforms had similar results. (Ice Lake shown)
total instructions in shared programs: 17225365 -> 17225303 (<.01%)
instructions in affected programs: 40051 -> 39989 (-0.15%)
helped: 62
HURT: 0
helped stats (abs) min: 1 max: 1 x̄: 1.00 x̃: 1
helped stats (rel) min: 0.07% max: 0.66% x̄: 0.27% x̃: 0.26%
95% mean confidence interval for instructions value: -1.00 -1.00
95% mean confidence interval for instructions %-change: -0.31% -0.22%
Instructions are helped.
total cycles in shared programs: 360842788 -> 360842595 (<.01%)
cycles in affected programs: 1818081 -> 1817888 (-0.01%)
helped: 29
HURT: 22
helped stats (abs) min: 1 max: 206 x̄: 20.66 x̃: 14
helped stats (rel) min: <.01% max: 9.55% x̄: 0.87% x̃: 0.42%
HURT stats (abs) min: 1 max: 108 x̄: 18.45 x̃: 7
HURT stats (rel) min: <.01% max: 4.48% x̄: 0.56% x̃: 0.19%
95% mean confidence interval for cycles value: -14.48 6.91
95% mean confidence interval for cycles %-change: -0.71% 0.21%
Inconclusive result (value mean confidence interval includes 0).
No changes on any other Intel platform.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Thomas Helland <thomashelland90@gmail.com>
Without this, adding an algebraic rule like
(('bcsel', ('flt', a, 0.0), 0.0, ...), ...),
will cause assertion failures inside nir_src_comp_as_float in
GTF-GL46.gtf21.GL.lessThan.lessThan_vec3_frag (and related tests) from
the OpenGL CTS and shaders/closed/steam/witcher-2/511.shader_test from
shader-db.
All of these cases have some code that ends up like
('bcsel', ('flt', a, 0.0), 'b@1', ...)
When the 'b@1' is tested, nir_src_comp_as_float fails because there's
no such thing as a 1-bit float.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Thomas Helland <thomashelland90@gmail.com>
This change also enables a later change (nir/algebraic: Replace
1-fsat(a) with fsat(1-a)) to affect more shaders.
Almost all of the affected shaders are in Bioshock Infinite, and all of
those shaders all require GLSL 4.10.
All Intel platforms had similar results. (Ice Lake shown)
total instructions in shared programs: 17228584 -> 17228376 (<.01%)
instructions in affected programs: 31438 -> 31230 (-0.66%)
helped: 105
HURT: 0
helped stats (abs) min: 1 max: 5 x̄: 1.98 x̃: 1
helped stats (rel) min: 0.08% max: 1.53% x̄: 0.73% x̃: 0.70%
95% mean confidence interval for instructions value: -2.20 -1.76
95% mean confidence interval for instructions %-change: -0.80% -0.67%
Instructions are helped.
total cycles in shared programs: 360936431 -> 360935690 (<.01%)
cycles in affected programs: 420100 -> 419359 (-0.18%)
helped: 71
HURT: 21
helped stats (abs) min: 1 max: 160 x̄: 19.28 x̃: 10
helped stats (rel) min: <.01% max: 9.78% x̄: 0.95% x̃: 0.48%
HURT stats (abs) min: 1 max: 198 x̄: 29.90 x̃: 10
HURT stats (rel) min: 0.05% max: 8.36% x̄: 1.24% x̃: 0.90%
95% mean confidence interval for cycles value: -16.77 0.66
95% mean confidence interval for cycles %-change: -0.85% -0.06%
Inconclusive result (value mean confidence interval includes 0).
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Thomas Helland <thomashelland90@gmail.com>
This doesn't make any real difference now, but future work (not in this
series) will add a LOT of ffma patterns. Having to duplicate all of
them for ffma(a, b, c) and ffma(b, a, c) is just terrible.
No shader-db changes on any Intel platform.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
v2: Instead of handling 3 sources as a special case, generalize with
loops to N sources. Suggested by Jason.
v3: Further generalize by only checking that number of sources is >= 2.
Suggested by Jason.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The meaning of the new name is that the first two sources are
commutative. Since this is only currently applied to two-source
operations, there is no change.
A future change will mark ffma as 2src_commutative.
It is also possible that future work will add 3src_commutative for
opcodes like fmin3.
v2: s/commutative_2src/2src_commutative/g. I had originally considered
this, but I discarded it because I didn't want to deal with identifiers
that (should) start with 2. Jason suggested it in review, so we decided
that _2src_commutative would be used in nir_opcodes.py. Also add some
comments documenting what 2src_commutative means. Also suggested by
Jason.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Instead of re-building the interference graph every time we spill, we
modify it in place so we can avoid recalculating liveness and the whole
O(n^2) interference graph building process. We make a simplifying
assumption in order to do so which is that all spill/fill temporary
registers live for the entire duration of the instruction around which
we're spilling. This isn't quite true because a spill into the source
of an instruction doesn't need to interfere with its destination, for
instance. Not re-calculating liveness also means that we aren't
adjusting spill costs based on the new liveness. The combination of
these things results in a bit of churn in spilling. It takes a large
cut out of the run-time of shader-db on my laptop.
Shader-db results on Kaby Lake:
total instructions in shared programs: 15311224 -> 15311360 (<.01%)
instructions in affected programs: 77027 -> 77163 (0.18%)
helped: 11
HURT: 18
total cycles in shared programs: 355544739 -> 355830749 (0.08%)
cycles in affected programs: 203273745 -> 203559755 (0.14%)
helped: 234
HURT: 190
total spills in shared programs: 12049 -> 12042 (-0.06%)
spills in affected programs: 2465 -> 2458 (-0.28%)
helped: 9
HURT: 16
total fills in shared programs: 25112 -> 25165 (0.21%)
fills in affected programs: 6819 -> 6872 (0.78%)
helped: 11
HURT: 16
Total CPU time (seconds): 2469.68 -> 2360.22 (-4.43%)
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This is slightly less convenient in some places but it will make it much
easier when we want to start adding nodes dynamically.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The old code was arranged by the type of interference being added. It
would set up payload registers and then add payload interference for all
VGRFs. It would set up MRFs and add MRF interference for all VGRFs.
This commit re-arranges things to be organized differently. It first
creates and sets up all RA nodes and then groups interference into two
new categories: live range and instruction interference. Once all the
RA nodes have been set up, it walks the list of VGRFs and sets up their
live range interference and then walks the list of instructions and sets
up instruction interference. This new arrangement will be advantageous
for a future patch but, at the moment, it cuts 2% off the run-time of
shader-db on my laptop.
Shader-db results on Kaby Lake:
total instructions in shared programs: 15311224 -> 15311224 (0.00%)
instructions in affected programs: 0 -> 0
helped: 0
HURT: 0
total cycles in shared programs: 355544739 -> 355544739 (0.00%)
cycles in affected programs: 0 -> 0
helped: 0
HURT: 0
Total CPU time (seconds): 2523.45 -> 2469.68 (-2.13%)
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The only use of the MRF hack these days is for spilling and there we
don't need the precise MRF usage information. If we're spilling then we
know pretty well how many MRFs are going to be used. It is possible, if
the only things that are spilled have fewer SIMD channels than the
dispatch width of the shader, that this may be more MRFs than needed.
That's a risk we're willing to take.
Shader-db results on Kaby Lake:
total instructions in shared programs: 15311100 -> 15311224 (<.01%)
instructions in affected programs: 16664 -> 16788 (0.74%)
helped: 1
HURT: 5
total cycles in shared programs: 355543197 -> 355544739 (<.01%)
cycles in affected programs: 731864 -> 733406 (0.21%)
helped: 3
HURT: 6
The hurt shaders are all SIMD32 compute shaders where we reserve enough
space for a 32-wide spill/fill but don't need it.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This accomplishes two things. First, it makes interfaces which are
really private to RA private to RA. Second, it gives us a place to
store some common stuff as we go through the algorithm.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
There's no reason why we need to use the calculated payload_node_count
value which is just first_non_payload_grf aligned up. The grf_used
value will be aligned up to 16 anyway (which is a much bigger alignment)
before being handed off to hardware.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
We only have one node per VGRF so this was adding way too much
interference. No idea how we didn't catch this before.
Shader-db results on Kaby Lake:
total instructions in shared programs: 15311100 -> 15311100 (0.00%)
instructions in affected programs: 0 -> 0
helped: 0
HURT: 0
total cycles in shared programs: 355468050 -> 355543197 (0.02%)
cycles in affected programs: 2472492 -> 2547639 (3.04%)
helped: 17
HURT: 20
Fixes: 014edff0d2 "intel/fs: Add interference between SENDS sources"
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
We want to be able to call ra_allocate() and, when it fails, mutate the
graph and try again rather than re-building the graph from scratch.
This commit moves all the scratch bits except the final register
allocation (which is really an out value not scratch) into sub-structs
named "tmp" to make it clear which things are scratch. It also adds
bits to the ra_select() initialization loop to initialize things (since
we can't trust rzalloc anymore) and copy q_test and forced_reg over.
Reviewed-by: Eric Anholt <eric@anholt.net>
Unfortunately, we can't quite follow the standard C conventions for
these because ralloc doesn't know the sizes of pointers.
Reviewed-by: Eric Anholt <eric@anholt.net>
In the last phase of the schedule and RA loop, the RA call is redundant
if we spill. Immediately afterwards, we're going to see that we
couldn't allocate without spilling and call back into RA and tell it to
go ahead and spill. We've known about it for a while but we've always
brushed over it on the theory that, if you're going to spill, you'll be
calling RA a bunch anyway and what does one extra RA hurt? As it turns
out, it hurts more than you'd expect. Because the RA interference graph
gets sparser with each spill and the RA algorithm is more efficient on
sparser graphs, the RA call that we're duplicating is actually the most
expensive call in the RA-and-spill loop.
There's another extra RA call we do that's a bit harder to see which
this also removes. If we try to compile a shader that isn't the minimum
dispatch width and it fails to allocate without spilling we call fail()
to set an error but then go ahead and do the first spilling RA pass and
only after that's complete do we detect the fail and bail out. By
making minimum dispatch widths part of the spill condition, we side-step
this problem.
Getting rid of these extra RA calls takes the compile time of a nasty
Aztec Ruins shader from about 28 seconds to about 26 seconds on my
laptop. It also makes shader-db 1.5% faster.
Shader-db results on Kaby Lake:
total instructions in shared programs: 15311100 -> 15311100 (0.00%)
instructions in affected programs: 0 -> 0
helped: 0
HURT: 0
total cycles in shared programs: 355468050 -> 355468050 (0.00%)
cycles in affected programs: 0 -> 0
helped: 0
HURT: 0
Total CPU time (seconds): 2524.31 -> 2486.63 (-1.49%)
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The most expensive part of register allocation is the ra_simplify step,
a fixed-point algorithm with a worst-case complexity of O(n^2) that
adds the registers to a stack which we then use later to do the
actual allocation. This commit uses bit sets and changes the core loop
of ra_simplify to first walk over 32-node chunks and then walk the nodes
within each chunk.
This lets us skip whole 32-node chunks in one go based on bit operations
and compute the minimum q value potentially 32x as fast. Of course, the
algorithm still has the same fundamental O(n^2) run-time but the
constant is now much lower.
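As a rough illustration of the chunked walk (this is not the actual
util/ra code; the names below are made up), the core idea looks like
this in C:

  #include <stdint.h>
  #include <strings.h>   /* ffs() */

  /* One 32-bit word per 32 nodes marks which nodes still need visiting;
   * an all-zero word lets the loop skip an entire 32-node chunk with a
   * single compare. */
  static void walk_remaining_nodes(const uint32_t *todo, unsigned node_count,
                                   void (*visit)(unsigned node))
  {
     for (unsigned chunk = 0; chunk < (node_count + 31) / 32; chunk++) {
        uint32_t word = todo[chunk];
        if (word == 0)
           continue;                      /* skip 32 nodes at once */
        while (word) {
           unsigned i = chunk * 32 + ffs(word) - 1;
           word &= word - 1;              /* clear lowest set bit */
           visit(i);
        }
     }
  }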
In the nasty Aztec Ruins compute shader, this shaves a full four seconds
off the 30s compile time for a release build of mesa. In a debug build
(needed for accurate stack traces), perf says that ra_select takes 20%
of runtime before this patch and only 5-6% of runtime after this patch.
It also makes shader-db runs faster.
Shader-db results on Kaby Lake:
total instructions in shared programs: 15311100 -> 15311100 (0.00%)
instructions in affected programs: 0 -> 0
helped: 0
HURT: 0
total cycles in shared programs: 355468050 -> 355468050 (0.00%)
cycles in affected programs: 0 -> 0
helped: 0
HURT: 0
Total CPU time (seconds): 2602.37 -> 2524.31 (-3.00%)
Reviewed-by: Eric Anholt <eric@anholt.net>
We only use q_total if the reg is not assigned so there's no point in
updating it if the reg is not assigned. This has no known perf benefit
but it will reduce churn in a future commit.
Reviewed-by: Eric Anholt <eric@anholt.net>
This shaves about half a second off the 30 second compile time of one of
the compute shaders in Aztec ruins.
Reviewed-by: Eric Anholt <eric@anholt.net>
Both apps and we (see virgl_buffer_transfer_flush_region) might
flush regions that are unmodified. We have to read back for those
flushes.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
It currently has no user and is probably incorrect (resource_wait is
required in some more cases). Remove it so that we can focus on
transfers first.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
No surface requires an auxiliary surface to operate correctly. Fall back
to an uncompressed surface if mesa fails to create and allocate an
auxiliary surface. This enables adding more restrictions to ISL without
having to update iris.
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
intel_tiling_supports_hiz() and intel_miptree_supports_hiz() duplicate
much of the work done by isl_surf_get_hiz_surf(). Replace them with simple
expressions.
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
Import some restrictions from intel_tiling_supports_hiz() and
intel_miptree_supports_hiz().
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
intel_tiling_supports_ccs() and intel_miptree_supports_ccs() duplicate
much of the work done by isl_surf_get_ccs_surf(). Drop them both and index
a boolean array to choose CCS_D in intel_miptree_choose_aux_usage().
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
This function duplicates much of the work done by isl_surf_get_mcs_surf().
Replace it with a simple expression.
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
Import some restrictions from intel_miptree_supports_mcs() and don't
assume that the caller knows which device generations are supported.
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
No surface requires an auxiliary surface to operate correctly. Fall back
to an uncompressed surface if mesa fails to create and allocate an
auxiliary surface. This enables adding more restrictions to ISL without
having to update i965.
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
Instead of checking the API variant on entry to set_varying_vp_inputs
to decide whether we can ever be interested in fixed-function processing,
we can check whether we are actually doing fixed-function processing.
To check this we can use the immediately updated
gl_context::VertexProgram._VPMode value, which tells us whether we have a
user-provided shader program or are in fixed-function processing,
either through an internal TNL shader or directly through hardware.
When doing so, we also need to recheck the varying_vp_inputs variable
at the time gl_context::VertexProgram._VPMode is set to VP_MODE_FF.
Put asserts at the consumers of gl_context::varying_vp_inputs to make
sure gl_context::VertexProgram._VPMode is set to VP_MODE_FF; by then
gl_context::varying_vp_inputs should be up to date.
By not looking at the OpenGL API for this decision we should actually
catch more cases where we can avoid setting a state-change flag, including
the ones where we cannot get into VP_MODE_FF by the choice of API.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
The precondition stated in the comment is not true. The values mentioned are
only set from _mesa_update_state, which in turn may not have been called yet.
For now, set the _NEW_VARYING_VP_INPUTS flag a bit more often; we will narrow
that down to a minimum again in a later patch.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
Whenever a buffer is allocated, e.g. by the first draw call or EGL call after a
buffer swap, make sure the size is up to date. Prior to this commit, we
failed to do so when querying the buffer age, or swapping buffers
without any prior EGL call or draw call.
Signed-off-by: Jonas Ådahl <jadahl@gmail.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
The spec says we can't create another surface if we already created a
surface with the given window or pixmap. Implement this check.
This behavior is exercised by piglit/egl-create-surface.
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Each platform stores this in a different place:
- platform_drm uses dri2_surf->gbm_surf->base
- platform_android uses dri2_surf->window
- platform_wayland uses dri2_surf->wl_win
- platform_x11 uses dri2_surf->drawable
- platform_x11_dri3 uses dri3_surf->loader_drawable.drawable
- haiku doesn't even store it!
We need access to the native surface since the specification asks us
to refuse creating a new surface if there's already an EGLSurface
associated with native_surface.
An alternative to this patch would be to create a new
API.GetNativeWindow callback that each platform would have to
implement. While that's something we can definitely do, I prefer
this approach.
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Otherwise we risk to read past the end of the buffer.
In addition, change the loop counters to unsigned to be consistent
with the types.
Fixes: afa8707ba9
softpipe: add SSBO/shader atomics support.
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
As with the previous value of 5000, we seemed to be reaching OOM in some
circumstances.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Acked-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Since last Friday, these two tests have been fixed:
dEQP-GLES2.functional.shaders.functions.control_flow.return_in_nested_loop_fragment
dEQP-GLES2.functional.shaders.linkage.varying_7
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Acked-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
It sure looks like we just want both of them to be nonzero, and && is
probably going to be cheaper than * anyway.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
My gcc can't see that the uninitialized value from the PIPE_BUFFER case
isn't used from the !PIPE_BUFFER cases later.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
__u64 is a ulonglong on x86_64, not uint64_t, so my gcc was complaining
about the wrong type being passed in.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
The .editorconfig helps with the tabs, but we've got this
two-tabs-from-previous-indentation line continuation style that requires
whacking the c-file-offsets. This will throw emacs warnings when first
opening a file in the directory; press '!' to silence it for the future.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
The editorconfig takes precedence over dir-locals in emacs26 with
editorconfig enabled, so the /.editorconfig was affecting these
directories.
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
There's no real work to do here since we already support scalar block
layout which is a direct superset of what this extension allows.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Other types like syncobj do not need it, so let's make things a bit more uniform.
This also reduces confusion about what signalled/submitted refer to (especially
with imported fences).
Reviewed-by: Dave Airlie <airlied@redhat.com>
VK_AMD_draw_indirect_count has been promoted with the suffix
changed to KHR.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The V3D 4.2 HW has a limit to MSAA texture sizes of 4096. With non-MSAA,
we can go up to 7680 (actually probably 8138, but that hasn't been
validated by the HW team). Exposing 7680 in X11 will allow dual 4k displays.
The _LEVELS assumes that the max is always power of two. For V3D 4.2, we
can support up to 7680 non-power-of-two MSAA textures, which will let X11
support dual 4k displays on newer hardware.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
In most places (glGetInteger, max_legal_texture_dimensions), we wanted the
number of pixels, not the number of levels. Number of levels is easily
recovered with util_next_power_of_two() and ffs(). More importantly, for
V3D we want to be able to expose a non-power-of-two maximum texture size
to cover 2x4k displays on HW that can't quite do 8192 wide.
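A small worked example of that recovery, assuming a power-of-two maximum
(util_next_power_of_two() is then a no-op); this is illustrative only:

  #include <strings.h>   /* ffs() */

  /* ffs() of a power of two is log2(x) + 1, which is exactly the mip
   * level count: e.g. 8192 -> 14 levels (8192, 4096, ..., 1). */
  static unsigned levels_for_pot_size(unsigned max_size_pixels)
  {
     return ffs(max_size_pixels);
  }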
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The shared function has some extension presence checks, but other than
that has the same switch statement contents.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
glibc < 2.27 defines OVERFLOW in /usr/include/math.h.
This patch fixes this build error.
In file included from ../include/c99_math.h:37:0,
from ../src/util/u_math.h:44,
from ../src/mesa/main/macros.h:35,
from ../src/intel/compiler/brw_reg.h:47,
from ../src/intel/tools/i965_asm.h:32,
from ../src/intel/tools/i965_gram.y:29:
src/intel/tools/i965_gram.tab.c:562:5: error: expected identifier before numeric constant
OVERFLOW = 412,
^
Fixes: 70308a5a8a ("intel/tools: New i965 instruction assembler tool")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110656
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Acked-by: Eric Engestrom <eric@engestrom.ch>
The overall goal is to support unaligned loads from vertex buffers
natively on SI.
In the unaligned case, we fall back to the general case implementation in
ac_build_opencoded_load_format. Since this function is fully general,
we will also use it going forward for cases requiring fully manual format
conversions of dwords anyway.
This requires a different encoding of the fix_fetch array, which will now
contain the entire format information if a fixup is required.
Having to check the alignment of vertex buffers is awkward. To keep the
impact on the fast path minimal, the si_context will keep track of which
vertex buffers are (not) at least dword-aligned, while the
si_vertex_elements will note which vertex buffers have some (at most dword)
alignment requirement. Vertex buffers should be dword-aligned most of the
time, which allows a fast early-out in almost all cases.
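A hypothetical sketch of that fast early-out (the mask names below are
illustrative, not the actual si_context/si_vertex_elements fields):

  #include <stdint.h>
  #include <stdbool.h>

  /* Bit i of unaligned_vb_mask: vertex buffer i is not dword-aligned.
   * Bit i of velems_alignment_mask: the bound vertex elements need some
   * alignment from buffer i.  In the common dword-aligned case the masks
   * are disjoint and the slow opencoded path is skipped entirely. */
  static bool needs_opencoded_fetch(uint32_t unaligned_vb_mask,
                                    uint32_t velems_alignment_mask)
  {
     return (unaligned_vb_mask & velems_alignment_mask) != 0;
  }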
Add the radeonsi_vs_fetch_always_opencode configuration variable for
testing purposes. Note that it can only be used reliably on LLVM >= 9,
because support for byte and short load is required.
v2:
- add a missing check to si_bind_vertex_elements
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The current SSA def validation we do in nir_validate validates three
things:
1. That each SSA def is only ever used in the function in which it is
defined.
2. That an nir_src exists in an SSA def's use list if and only if it
points to that SSA def.
3. That each nir_src is in the correct use list (uses or if_uses) based
on whether it's an if condition or not.
The way we were doing this before was that we had a hash table which
provided a map from SSA def to a small ssa_def_validate_state data
structure which contained a pointer to the nir_function_impl and two
hash sets, one for each use list. This meant piles of allocation and
creating of little hash sets. It also meant one hash lookup for each
SSA def plus one per use as well as two per src (because we have to look
up the ssa_def_validate_state and then look up the use.) It also
involved a second walk over the instructions as a post-validate step.
This commit changes us to use a single low-collision hash set of SSA
sources for all of this by being a bit more clever. We accomplish the
objectives above as follows:
1. The list is clear when we start validating a function. If the
nir_src references an SSA def which is defined in a different
function, it simply won't be in the set.
2. When validating the SSA defs, we walk the uses and verify that they
have is_ssa set and that the SSA def points to the SSA def we're
validating. This catches the case of a nir_src being in the wrong
list. We then put the nir_src in the set and, when we validate the
nir_src, we assert that it's in the set. This takes care of any
cases where a nir_src isn't in the use list. After checking that
the nir_src is in the set, we remove it from the set and, at the end
of nir_function_impl validation, we assert that the set is empty.
This takes care of any cases where a nir_src is in a use list but
the instruction is no longer in the shader.
3. When we put a nir_src in the set, we set the bottom bit of the
pointer to 1 if it's the condition of an if. This lets us detect
whether or not a nir_src is in the right list.
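A minimal sketch of the pointer-tagging trick from point 3 (illustrative
only, not the actual nir_validate code):

  #include <stdint.h>
  #include <stdbool.h>

  /* Store the "is this an if-condition?" flag in the low bit of the
   * pointer used as the set key; pointers to nir_src structs have
   * alignment greater than one byte, so the low bit is free. */
  static inline void *tag_src(void *src, bool is_if_condition)
  {
     return (void *)((uintptr_t)src | (is_if_condition ? 1u : 0u));
  }

  static inline void *untag_src(void *key, bool *is_if_condition)
  {
     *is_if_condition = (uintptr_t)key & 1;
     return (void *)((uintptr_t)key & ~(uintptr_t)1);
  }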
When running shader-db with an optimized debug build of mesa on my
laptop, I get the following shader-db CPU times:
With NIR_VALIDATE=0 3033.34 seconds
Before this commit 20224.83 seconds
After this commit 6255.50 seconds
Assuming shader-db is a representative sampling of GLSL shaders, this
means that making this change yields an 81% reduction in the time spent
in nir_validate. It still isn't cheap but enabling validation now only
increases compile times by 2x instead of 6.6x.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Thomas Helland <thomashelland90@gmail.com>
Often times you don't know how big a set will be and you want the code
to just grow it as needed. However, sometimes you do know and you can
avoid a lot of rehashing if you just specify a size up-front.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Thomas Helland <thomashelland90@gmail.com>
This function is identical to _mesa_set_add except that it takes an
extra out parameter that lets the caller detect if a replacement
happened.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Thomas Helland <thomashelland90@gmail.com>
All of our hash tables and sets are already using ralloc. There's
really no good reason why we don't just make a ralloc context rather
than try to remember to clean everything up manually.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Thomas Helland <thomashelland90@gmail.com>
The H5 hardware variant requires a specific plb_max_blk number. This
value can't be probed at the hardware level.
Signed-off-by: Patrick Lerda <patrick9876@free.fr>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Move plb_max_blk to lima_screen, and add a new debug option:
LIMA_PLB_MAX_BLK
Signed-off-by: Patrick Lerda <patrick9876@free.fr>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
While ImageFormatProperties returns the number of internal descriptors,
it turns out that applications do not need to actually allocate more
descriptors in the descriptor pool.
So if we make descriptors with more planes larger, we have to be
conservative and always allocate space for the larger descriptors,
which is a waste given the low usage of this ext.
So let us make use of the fact that 3-plane formats all have the
same formats & dimensions for the last two planes. This way we
only need the first half of the descriptor of the 3rd plane and
can share the second half of the second plane.
This allows us to use 16 bytes for the descriptor which nicely
fits into the 16 bytes that are unused right next to the sampler.
Fixes: 5564c38212 "radv: Update descriptor sets for multiple planes."
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
We use an algebraic pass for the csel optimizations, and use proper
vectorized csel ops (i/fcsel_v) for mixed, rather than lowering.
To avoid regressions along the way, we fix an issue with the copy
propagation pass (it should not attempt to propagate constants).
Similarly, we take care to break bundles when using csel to fix some
scheduler corner cases.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
iris_draw_vbo is divided into two functions to remove unnecessary
operations from the loop. This implementation of ARB_indirect_parameters
takes into account NV_conditional_render by saving MI_PREDICATE_RESULT
at the start of a draw call and restoring it at the end. Also, the result
of NV_conditional_render is taken into account when computing predicates
that limit draw calls for ARB_indirect_parameters, in a similar way
to 1952fd8d in ANV.
v2: Optimize indirect draws (suggested by Kenneth Graunke)
v3: (by Kenneth Graunke)
- Fix an issue where indirect draws wouldn't set patch information
before updating the compiled TCS.
- Move some code back to iris_draw_vbo to avoid duplicating it.
- Fix minor indentation issues.
Signed-off-by: Illia Iorin <illia.iorin@globallogic.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Shader draw parameters need updating on each iteration of a multidraw
loop, but the primitive based information only needs to be updated once.
Also, patch information needs to be recorded before filling out the TCS
program key, as it determines the number of HS instances.
The nested fma calls were supposed to implement
x_new = x + x * (1 - x*src),
but instead the current code is equivalent to
x_new = x - x * (1 - x*src).
The result is that Newton-Raphson steps don't improve precision at all.
This patch fixes this problem.
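As a sanity check, the intended refinement written with nested fma()s
looks like this (a standalone sketch, not the driver code itself):

  #include <math.h>

  /* One Newton-Raphson step of the form described above (the classic
   * reciprocal refinement), x_new = x + x * (1 - x * src):
   * fma(-x, src, 1.0) computes 1 - x*src, and the outer fma adds x. */
  static double rcp_refine(double x, double src)
  {
     return fma(x, fma(-x, src, 1.0), x);
  }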
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110435
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Only vertex inputs accessed by the vertex shader must have valid buffers
bound.
Signed-off-by: Józef Kucia <joseph.kucia@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Fixes: 5010436e09 "radv: bail out when binding the same vertex buffers"
src/mesa/state_tracker/st_tgsi_lower_yuv.c:68: void reg_dst(struct
tgsi_full_dst_register *, const struct tgsi_full_dst_register *, unsigned
int): assertion "dst->Register.WriteMask" failed
The second crash was due to insufficient allocated size for TGSI
instructions.
Cc: 19.0 19.1 <mesa-stable@lists.freedesktop.org>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Anuj fixed this in i965 and anv, but the fix never landed in iris.
Fixes tessellation corruption on Icelake. Thanks to Rafael for
bisecting this and tracking it down.
Fixes: d0996d5fab iris: Emit default L3 config for the render pipeline
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
Update various limits in
VkPhysicalDeviceDescriptorIndexingPropertiesEXT that were previously
zero to their values from VkPhysicalDeviceLimits. When using
VK_EXT_descriptor_indexing, the former limits will apply to all the
descriptor layout sets -- not only those using the new feature bits.
For the reference, VK_EXT_descriptor_indexing says
"There are new descriptor set layout and descriptor pool creation
flags that are required to opt in to the update-after-bind
functionality, and there are separate maxPerStage* and
maxDescriptorSet* limits that apply to these descriptor set
layouts which may be much higher than the pre-existing limits. The
old limits only count descriptors in non-updateAfterBind
descriptor set layouts, and the new limits count descriptors in
all descriptor set layouts in the pipeline layout."
Fixes: 6e230d7607 "anv: Implement VK_EXT_descriptor_indexing"
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The original implementation assumed that we could allocate the same
amount of command buffers as the number of images in the swapchain.
But the application could potentially render much faster and rerender
into images that have been submitted for presentation but not yet
presented.
This change keeps on allocating command buffers, vertex buffer, vertex
indices as well as a semaphore and a fence for as long as we can't
reuse a previously submitted one.
This fixes rendering issues in the overlay at high frame rates.
v2: Don't recreate semaphores constantly (Józef)
v3: Drop useless surface & FreeCommandBuffers (Józef)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110655
Cc: 19.1 <mesa-stable@lists.freedesktop.org>
Reviewed-by: Józef Kucia <joseph.kucia@gmail.com>
Non-dispatchable handles can be uint64_t. When compiling the layer on
a 32bit platform, this will lead to casting uint64_t into (void *)
which is 32bit, leading to incorrect handles being mapped internally
in the layer.
v2: Use more HKEY() (Eric)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reported-by: Józef Kucia <joseph.kucia@gmail.com>
Fixes: 2d2927938f ("vulkan/overlay-layer: fix cast errors")
Reviewed-by: Józef Kucia <joseph.kucia@gmail.com>
This was taking a reference to the 64kB upload buffer and never
returning it, leaking a reference each time this atom triggered.
This leaked lots of 64kB upload BOs, eventually running us out of
VMA space. This would usually happen when using mpv to watch a
movie, after 20-40 minutes.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110134
Fixes: 63d7b33f51 i965/cs: Setup surface binding for gl_NumWorkGroups
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This fixes video being rendered incorrectly.
The user wants a height of 360 but internally pipe_video_buffer's height
is 368 in the test below.
Test:
GST_GL_PLATFORM=egl gst-launch-1.0 videotestsrc ! video/x-raw, width=868, height=360, format=NV12 ! vaapipostproc ! glimagesink
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110443
Signed-off-by: Julien Isorce <jisorce@oblong.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Leo Liu <leo.liu@amd.com>
This avoids a conflict with the new (driver-agnostic) blend_func enum in
shader_enum.h, which broke the build of swrast (and i965 by extension).
My apologies :(
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Fixes: f41be53a ("compiler: Add enums for blend state")
Cc: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This represents a float vec4 constant color, as passed to glBlendColor.
While the existing 4 shader sysvals are retained to minimize code churn,
a single vectorized intrinsic is required for efficient blending on
vector architectures. (This may also apply to architectures like
Bifrost where ALU is scalar but load/store is vector; it largely depends
on how blending is implemented per-driver.)
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Complementing the new API-agnostic shader_enum blending style, we add
helpers to translate between the two forms. Ideally, we could just use
PIPE blending directly, but that makes Vulkan support challenging.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Eric Anholt <eric@anholt.net>
We add enums corresponding to (GLES) blend state to shader_enums.h,
complementing the existing advanced blending enums in the file. This
allows us to represent blending state in a driver-agnostic, API-agnostic
way to permit lowering.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Acked-by: Eric Anholt <eric@anholt.net>
This can be used by both etnaviv and freedreno/a2xx as they are both vec4
architectures with some instructions being scalar-only.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
In order to set up KILL sets, the dataflow code was walking the entire
array of ACPs for every instruction. If you assume the number of ACPs
increases roughly with the number of instructions, this is O(n^2). As
it turns out, regions_overlap() is not nearly as cheap as one would like
and shows up as a significant chunk on perf traces.
This commit changes things around and instead first builds an array of
exec_lists which it uses like a hash table (keyed off ACP source or
destination) similar to what's done in the rest of the copy-prop code.
By first walking the list of ACPs and populating the table and then
walking instructions and only looking at ACPs which probably have the
same VGRF number, we can reduce the complexity to O(n). This takes the
execution time of the piglit vs-isnan-dvec test from about 56.4 seconds
on an unoptimized debug build (what we run in CI) with NIR_VALIDATE=0 to
about 38.7 seconds.
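In rough, generic terms (the names below are made up and not the actual
brw_fs_copy_propagation code), the table works like this:

  /* ACP entries are chained into buckets keyed by VGRF number, so each
   * instruction operand only scans entries that could possibly refer to
   * the same register instead of the whole ACP array. */
  struct acp_entry {
     unsigned dst_vgrf;
     struct acp_entry *next;
  };

  struct acp_table {
     struct acp_entry **buckets;
     unsigned n_buckets;
  };

  static void acp_table_add(struct acp_table *t, struct acp_entry *e)
  {
     unsigned b = e->dst_vgrf % t->n_buckets;
     e->next = t->buckets[b];
     t->buckets[b] = e;
  }

  static struct acp_entry *
  acp_table_candidates(const struct acp_table *t, unsigned vgrf)
  {
     return t->buckets[vgrf % t->n_buckets];
  }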
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
If the destination of an ACP entry exists only within this block, then
there's no need to keep it for dataflow analysis. We can delete it from
the out_acp table and avoid growing the bitsets any bigger than we
absolutely have to. This reduces the maximum number of global ACP
entries in the vs-isnan-dvec with software fp64 on Kaby Lake from 8630
to 3942 and takes the execution time of the piglit vs-isnan-dvec test
from about 1:16.2 on an unoptimized debug build (what we run in CI) with
NIR_VALIDATE=0 to about 56.4 seconds.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
While the number of ACPs is generally not huge compared to the number of
blocks, 16 does seem a bit small. Bumping it to 64 takes the execution
time of the piglit vs-isnan-dvec test from about 1:18.1 on an unoptimized
debug build (what we run in CI) with NIR_VALIDATE=0 to about 1:16.2.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
When width=4096 and shift_w=0, block_w becomes 0x100 (256), which overflows
the 8-bit PLBU_CMD field that holds it.
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Signed-off-by: Qiang Yu <yuq825@gmail.com>
Just do what everybody else but Nouveau does and return 0.0f.
This prevents the repeated logging of these messages on startup:
Unexpected PIPE_CAPF 6 query
Unexpected PIPE_CAPF 7 query
Unexpected PIPE_CAPF 8 query
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
As the functions operate on 16-byte blocks.
Fixes this Valgrind error:
Invalid read of size 4
at 0x5857568: swizzle_bpp1_align16 (pan_swizzle.c:85)
by 0x585780F: panfrost_texture_swizzle (pan_swizzle.c:171)
by 0x584F587: panfrost_tile_texture (pan_resource.c:489)
by 0x584F641: panfrost_transfer_unmap (pan_resource.c:525)
by 0x587718D: u_transfer_helper_transfer_unmap (u_transfer_helper.c:516)
by 0x5875D85: pipe_transfer_unmap (u_inlines.h:515)
by 0x5875F13: u_default_texture_subdata (u_transfer.c:80)
by 0x53FFDC3: st_TexSubImage (st_cb_texture.c:1480)
by 0x54005BB: st_TexImage (st_cb_texture.c:1709)
by 0x5391353: teximage (teximage.c:3105)
by 0x5391353: teximage_err (teximage.c:3132)
by 0x5391B9B: _mesa_TexImage2D (teximage.c:3170)
by 0x5097A77: shared_dispatch_stub_183 (glapi_mapi_tmp.h:18833)
Address 0x1e94f1e8 is 0 bytes after a block of size 16 alloc'd
at 0x483F5C8: malloc (vg_replace_malloc.c:299)
by 0x584F47D: panfrost_transfer_map (pan_resource.c:467)
by 0x587694D: u_transfer_helper_transfer_map (u_transfer_helper.c:243)
by 0x5875EA7: u_default_texture_subdata (u_transfer.c:59)
by 0x53FFDC3: st_TexSubImage (st_cb_texture.c:1480)
by 0x54005BB: st_TexImage (st_cb_texture.c:1709)
by 0x5391353: teximage (teximage.c:3105)
by 0x5391353: teximage_err (teximage.c:3132)
by 0x5391B9B: _mesa_TexImage2D (teximage.c:3170)
by 0x5097A77: shared_dispatch_stub_183 (glapi_mapi_tmp.h:18833)
by 0x4DA8AB: glu::CallLogWrapper::glTexImage2D(unsigned int, int, int, int, int, int, unsigned int, unsigned int, void const*) (in /home/tomeu/deqp-build/modules/gles2/deqp-gles2)
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Cc: 19.1 <mesa-stable@lists.freedesktop.org>
Valgrind was complaining about those.
NIR_PASS only sets progress to TRUE if there was progress.
nir_const_load_to_arr() only sets as many constants as the instruction
has components.
This was causing some dEQP tests to flip-flop, such as:
dEQP-GLES2.functional.fragment_ops.blend.equation_src_func_dst_func.add_src_color_constant_color
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Fixes: 14531d676b ("nir: make nir_const_value scalar")
These tests add too much time to the total run time, and some of them
even hang the DUTs, even if I haven't been able to reproduce it locally.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
There don't seem to actually be any noticeable memory leaks in Weston
when running dEQP. We do seem to leak quite a bit in the client, so we
still have to run the dEQP runner in batches.
This removes the risk of Weston not restarting properly and introducing
spurious failures.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This matches the current state of things on both RK3288 and RK3399.
Hopefully, from now on we'll only remove stuff from this list.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Make sure we have only test case names in the list, excluding names of
test groups.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
To improve robustness, check that we got the expected number of results.
Right now we hard-code the expected number of tests run, but with some
effort we may be able to infer it.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Build artifacts for armhf and schedule them on a Veyron Chromebook with
RK3288.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Buffer needs to be reloaded every time unless explicit clear() was
called.
Fixes rendering issues with wayland compositors.
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
The key reason for that mechanism is gone: all the extra optional data
that could be in the anv_push_constants was moved elsewhere. At this
point, just put anv_push_constants directly in anv_cmd_state (part of
anv_cmd_buffer).
v2: Remove a NULL check we don't need anymore in
anv_cmd_buffer_push_constants(). (Lionel)
Fix size we consider for valid push params. (Lionel)
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This provides a way for the application to query whether any resets have
happened, which lets us expose "robust" contexts. This also enables the
KHR_robust_buffer_access_behavior tests.
This mechanism lets the driver inform the state tracker about GPU
resets, say for destroying a robust API context and reporting a "device
lost" error to the application, making it take action to deal with this.
The iris batch module now tries to detect that the kernel has banned
our GEM context, creates a new non-banned context, and informs the
iris context module that all assumptions about state are now invalid
and it needs to reinitialize the relevant state.
Based on Chris Wilson's work, but significantly rewritten by me.
Adapted from Chris Wilson's patch. The comment is largely his.
Currently, when iris hangs the GPU, it will continue sending batches
which incrementally update the state, assuming it's preserved across
batches. However, the kernel's GPU reset support reinitializes the
guilty context to the default GPU state (reasonably not wanting to
trust the current state). This ends up resetting critical things
like STATE_BASE_ADDRESS, causing memory accesses in all subsequent
batches to be garbage, and almost certainly result in more hangs
until we're banned or we kill the machine.
We now ask the kernel to ban our render context immediately, so we
notice we've gone off the rails as fast as possible. Eventually, we'll
attempt to recover and continue. For now, we just avoid torching the
GPU over and over.
Of course, legacy GL features that are broken don't trigger fails in deqp. I
should remember to test glxgears more often.
Fixes: 7ff6705b8d freedreno/ir3: convert to "new style" frag inputs
Signed-off-by: Rob Clark <robdclark@chromium.org>
We didn't notice this issue much because the 2 structs share a similar
layout, except for the additional fields...
We run into that issue in Anv:
==15236== Invalid write of size 8
==15236== at 0x8CF3939C: anv_state_table_expand_range (anv_allocator.c:211)
==15236== by 0x8CF394D5: anv_state_table_grow (anv_allocator.c:264)
==15236== by 0x8CF3967E: anv_state_table_add (anv_allocator.c:312)
==15236== by 0x8CF3B13C: anv_state_pool_alloc_no_vg (anv_allocator.c:1167)
==15236== by 0x8CF3B2B0: anv_state_pool_alloc (anv_allocator.c:1190)
==15236== by 0x8CF60871: alloc_surface_state (anv_image.c:1122)
==15236== by 0x8CF61FF9: anv_CreateImageView (anv_image.c:1519)
==15236== by 0x8BCBD2ED: vkCreateImageView (trampoline.c:1358)
==15236== Address 0x8994ef10 is 0 bytes after a block of size 128 alloc'd
==15236== at 0x4C2FB0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==15236== by 0x8D2578E6: u_vector_init (u_vector.c:47)
==15236== by 0x8CF3929A: anv_state_table_init (anv_allocator.c:168)
==15236== by 0x8CF3A99A: anv_state_pool_init (anv_allocator.c:921)
==15236== by 0x8CF56517: anv_CreateDevice (anv_device.c:1909)
==15236== by 0x8BCB4FBA: terminator_CreateDevice (loader.c:6073)
==15236== by 0x8DD2CB3D: ??? (in /home/djdeath/.steam/ubuntu12_64/libVkLayer_steam_fossilize.so)
==15236== by 0x8DF4D241: vkCreateDevice (in /home/djdeath/.steam/ubuntu12_64/steamoverlayvulkanlayer.so)
==15236== by 0x8BCB35C6: loader_create_device_chain (loader.c:5449)
==15236== by 0x8BCBC230: vkCreateDevice (trampoline.c:838)
v2: Rename mmap_cleanups to avoid confusion (Caio)
v3: s/fail_mmap_cleanups/fail_cleanups/ (Caio)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110648
Cc: <mesa-stable@lists.freedesktop.org>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
When first implemented in fefd03e16c, Mesa's behavior was aligned with the
behavior of Nvidia's driver. This caused a failing test in piglit but was ok
since the specification is unclear on this subject.
Nvidia's driver behavior has since changed: with version 410.104, the
problematic test (program_binary_retrievable_hint) now passes.
This commit defers BinaryRetrievableHint update until the next linking so the
test passes on Mesa as well.
Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
I don't know why I thought NIR_PASS always set the progress variable.
Derp.
Fixes: d41cdef2a5 ("nir: Use the flrp lowering pass instead of nir_opt_algebraic")
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Coverity CID: 1444996
Coverity CID: 1444995
Coverity CID: 1444994
Coverity CID: 1444993
Coverity CID: 1444991
Coverity CID: 1444989
We need to know the number of rectangles.
This fixes new CTS dEQP-VK.draw.discard_rectangles.dynamic_*.
Fixes: 5db0bf9994 ("radv: Implement VK_EXT_discard_rectangles.")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Propagate the failure from GEM_EXECBUFFER2, cleanup then report failure
if need be. We retain the current behaviour to abort() at the first sign
of trouble -- for a non-robustness context, arguably this is the right
thing to do as the client cannot recover, and the system state is lost.
How to properly integrate with KHR_robustness and reset-strategy is
left as a future exercise.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Pull i915_drm.h to include
kernel commit ba4fda620a5f7db521aa9e0262cf49854c1b1d9c
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Mon Feb 18 10:58:21 2019 +0000
drm/i915: Optionally disable automatic recovery after a GPU reset
for improved resilience in handling GPU hangs.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
[ Michel Dänzer: Take changes affecting the docker image from !299,
plus remove the unzip package again before generating the image ]
And consolidate it all into a single job.
It doesn't take much longer than a single version, thanks to ccache.
Overall, this single job might be faster or at least use fewer CPU
cycles than the two jobs before, while covering thrice as many versions
of LLVM.
v2:
* Move "rm -rf _build" to meson-build.sh.
* Set GALLIUM_DRIVERS the same way both times in the meson-clover job,
for symmetry.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com> # v1
No functional change intended (except for no longer running meson
--version separately, as the version appears early in meson's output
anyway).
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
We really shouldn't ever need a suffix, otherwise it indicates a failure
in coordination. :) In which case, it doesn't really matter how the tag
is disambiguated.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
meson git now has a cmake find method for llvm, but it lacks a couple of
features that we use from the config tool version. Until that reaches
parity we need to use the config-tool version.
CC: 19.0 19.1 <mesa-stable@lists.freedesktop.org>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
We use a mix of MI & PIPE_CONTROL commands to write our queries' data
(results & availability). Those commands' memory write order is not
guaranteed with regard to their order in the command stream, unless CS
stalls are inserted between them. This is problematic for 2 reasons :
1. We copy results from the device using MI commands even though
the values are generated from PIPE_CONTROL, meaning we could
copy unlanded values into the results and then copy the
availability that is inconsistent with the values.
2. We allow the user to poll on the availability values of the
query pool from the CPU. If the availability lands in memory
before the values then we could return invalid values.
This change does 2 things to address this problem :
- We use either PIPE_CONTROL or MI commands to write both
queries values and availability, so that the ordering of the
memory writes guarantees that if availability is visible,
results are also visible.
- For the occlusion & timestamp queries we apply a CS stall
before copying the results on the device, to ensure copying
with MI commands sees the correct values of previous
PIPE_CONTROL writes of availability (required by the Vulkan
spec).
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reported-by: Iago Toral Quiroga <itoral@igalia.com>
Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
It's generally frowned upon to have more than one H1 per document in
HTML4. So let's put the text directly inside the header. This means we
can drop the flex-based centering, which makes things a bit easier. We
also need to change the padding to rem instead of em, because the em has
now changed.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
We're pretty inconsistent in what the headings and titles are, especially
compared to what the articles are listed as in the sidebar. Let's
harmonize this.
There's a notable exception for meson.html, where the sidebar uses a
short-hand form that makes sense in the sidebar, but not in the article
due to the visible context being different.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
It's generally frowned upon to have multiple H1 headings in HTML4. So
let's make sure each article has a primary heading for the article, and
that that heading is the title that is used in the sidebar.
While we're at it, let's update the title in the articles to match the
title from the sidebar as well.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
It's generally frowned upon to have multiple H1 headings in HTML4. So
let's add a primary heading for the article, and source that from the
title used in the sidebar.
While we're at it, let's update the title in the article to match the
title from the sidebar as well.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
We generally use title-casing for headings in the sidebar. But not
all headings were consistently cased like that. Let's make sure this
is consistent.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
There's no need to keep this short, we can just spell out "and" here.
Besides, a slash kind of implies "or", but these articles are about
both of these, not either.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
It's quite visible that there's more docs below, we don't need to spell
it out for the reader.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
We're not short on space here, so there's little point in abbreviating
this. This also matches the heading in the article.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
We're not short on space here, so let's just spell out "and" instead of
using the ampersand. This is more consistent with the entry above in the
sidebar.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
This ports commit 9e7b0988d6 from anv
to i965. Thanks to Lionel for noticing that it was missing!
Fixes: 01058a5522 i965: Add virtual memory allocator infrastructure to brw_bufmgr.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This should happen regardless, but let's be paranoid.
Fixes: 01058a5522 i965: Add virtual memory allocator infrastructure to brw_bufmgr.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The STATE_BASE_ADDRESS "Size" fields can only hold 0xfffff in pages,
and 0xfffff * 4096 = 4294963200, which is 1 page shy of 4GB.
So we can't use the top page.
Fixes: 01058a5522 i965: Add virtual memory allocator infrastructure to brw_bufmgr.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
In the FS IR we pretend that the instruction is predicated with (+f0.1)
just for flag dependency tracking purposes. Since the instruction
doesn't support predication before Haswell, we unset the predicate so we
should also unset the flag register so that we can round-trip the
disassembly.
Reviewed-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
On Haswell, for the DIM instruction we encode the immediate float value
operand as a double float.
v2: Fix comment (Matt Turner)
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
For the W or UW (signed or unsigned word) source types, the 16-bit value
must be replicated in both the low and high words of the 32-bit
immediate value.
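For illustration, the replication amounts to this (a sketch, not the
assembler's actual helper):

  #include <stdint.h>

  /* A W/UW immediate carries its 16-bit value in both the low and high
   * words of the 32-bit immediate field. */
  static uint32_t build_word_imm(uint16_t v)
  {
     return (uint32_t)v | ((uint32_t)v << 16);
  }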
v2: Fix replication in other places as well
V3: fix a few nits (Matt Turner)
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Print the quad value the same as an unsigned quad so that we can distinguish
between quarter-control disassembled values (e.g. 1/2/3[Q]) and an
immediate quad value (e.g. 1Q). This allows round-tripping through the
assembler/disassembler.
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
This adds support for imports where the image data begins at an offset
from the start of the buffer, as used in h/x264.
Fixes kwg/mesa#47
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Brian noticed there was an uninitialized var for the 8-wide case and 128
bit blocks, which made it always crash. Likewise, the 64bit block case
had another crash bug due to type mismatch.
Color decode (used for all s3tc formats) also had a bogus shuffle for
this case, leading to decode artifacts.
Fix these all up, which makes the code actually work 8-wide. Note that
it's still not used - I've verified it works, and the generated assembly
does look quite a bit simpler actually (20-30% less instructions for the
s3tc decode part with avx2), however in practice it still seems to be
slightly slower for some unknown reason (tested with openarena) on my
haswell box, so for now continue to split things into 4-wide vectors
before decoding.
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
For a6xx, we construct/emit a single VS const state used for both
binning pass and draw pass. So far we were mostly getting lucky that
there were not (obvious) mismatches between the const_state (like
different lowered immediates) between the binning and draw pass
VS ir3_shader_variant.
And I guess this situation will come up more as GS and tess is added
into the equation.
Since really everything about the const state is not specific to the
variant, move this. The main exception is lowered immediates, but these
are the last to appear in the layout, and it doesn't hurt for each new
shader variant to just append any immed's it lowers to the end of the
immediate state.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Next patch moves const_state to ir3_shader, before the compile context
is created. So move the code around in prep to call it earlier.
Signed-off-by: Rob Clark <robdclark@chromium.org>
They are really part of the constant state, and it will simplify moving things
from ir3_shader_variant to ir3_shader if we combine them.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Combine the offsets of different parts of the constant space with (what
was formerly known as) ir3_driver_const_layout. Bunch of churn, but no
functional change.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Move to ir3_compiler so it doesn't depend on the compile context. Prep
work for moving constant state from variant (where we have compile
context) to shader (where we do not).
Signed-off-by: Rob Clark <robdclark@chromium.org>
Not quite sure what version of GCC/Clang produces errors (8.3.0
locally was fine).
v2: also fix an integer literal issue (Karol)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com> (v1)
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
There are tests in CTS for alpha to coverage without a color attachment
that are failing. This happens because we remove the shader color
outputs when we don't have a valid color attachment for them, but when
alpha to coverage is enabled we still want to preserve the output
at location 0 since we need the alpha component. In that case we will
also need to create a null render target for RT 0.
v2:
- We already create a null rt when we don't have any, so reuse that
for this case (Jason)
- Simplify the code a bit (Iago)
v3:
- Take alpha to coverage from the key and don't tie this to depth-only
rendering only, we want the same behavior if we have multiple render
targets but the one at location 0 is not used. (Jason).
- Rewrite commit message (Iago)
v4:
- Make sure we take into account the array length of the shader outputs,
which we were not handling correctly either, and make sure we also
create null render targets for any invalid array entries too.
v5:
- Simplify removal of unused outputs by using rt_used[] so we don't have
to special case alpha to coverage there too.
Fixes the following CTS tests:
dEQP-VK.pipeline.multisample.alpha_to_coverage_no_color_attachment.*
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Signed-off-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
In a previous version of this patch, Jason commented,
"Re-associating based on whether or not something has a constant
value of 1.0 seems a bit sneaky. I think it's well within the rules
but it seems like something that could bite you."
That is possibly true. The reassociation will generate different
results if fabs(b) >= 2**24 and fabs(c) < 0.5. The delta increases as
fabs(c) approaches 0.
However, i965 has done this same reassociation indirectly for years.
We would previously allow nir_op_flrp on all pre-Gen11 hardware even
though Gen4 and Gen5 do not have a LRP instruction. Optimizations in
nir_opt_algebraic would convert expressions like a+c(b-a) into flrp(a,
b, c). On Gen7+, the hardware performs the same arithmetic as
a(1-c)+bc. Gen6 seems to implement LRP as a+c(b-a). On Gen4 and
Gen5, we would lower LRP to a sequence of instructions that implement
a(1-c)+bc. The lowering happens after all constant folding, so we
would literally generate a 1+(-1) instruction sequence in this
scenario: one instruction to load either 1 or -1 in a register, and
another instruction to add either -1 or 1 to it.
This patch just cuts out the middle man. Do the reassociation that
we've always done, but do it explicitly at a time when we can benefit
from other optimizations.
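For reference, the two algebraically equivalent forms discussed here,
which differ only in floating-point rounding (a standalone sketch):

  /* flrp(a, b, c) as two multiplies vs. the reassociated one-multiply form. */
  static float flrp_two_mul(float a, float b, float c)
  {
     return a * (1.0f - c) + b * c;
  }

  static float flrp_reassociated(float a, float b, float c)
  {
     return a + c * (b - a);
  }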
A few cases that were hurt by "nir: Lower flrp(±1, b, c) and flrp(a,
±1, c) differently" are restored by this patch. This includes a few
shaders in ET:QW.
I tried a similar thing for open-coded flrp(-1, b, c), and it hurt
instructions on 35 shaders for ILK without helping any. The helped /
hurt cycles was about even.
No changes on any other Intel platforms.
Iron Lake and GM45 had similar results. (Iron Lake shown)
total instructions in shared programs: 8172020 -> 8164367 (-0.09%)
instructions in affected programs: 1089851 -> 1082198 (-0.70%)
helped: 3285
HURT: 64
helped stats (abs) min: 1 max: 6 x̄: 2.35 x̃: 2
helped stats (rel) min: 0.13% max: 12.00% x̄: 1.15% x̃: 0.83%
HURT stats (abs) min: 1 max: 1 x̄: 1.00 x̃: 1
HURT stats (rel) min: 0.24% max: 0.64% x̄: 0.39% x̃: 0.38%
95% mean confidence interval for instructions value: -2.32 -2.25
95% mean confidence interval for instructions %-change: -1.16% -1.09%
Instructions are helped.
total cycles in shared programs: 188758338 -> 188719974 (-0.02%)
cycles in affected programs: 20004922 -> 19966558 (-0.19%)
helped: 3012
HURT: 477
helped stats (abs) min: 2 max: 142 x̄: 13.41 x̃: 12
helped stats (rel) min: 0.01% max: 6.37% x̄: 0.52% x̃: 0.24%
HURT stats (abs) min: 2 max: 328 x̄: 4.27 x̃: 4
HURT stats (rel) min: <.01% max: 1.55% x̄: 0.14% x̃: 0.11%
95% mean confidence interval for cycles value: -11.38 -10.62
95% mean confidence interval for cycles %-change: -0.46% -0.41%
Cycles are helped.
Reviewed-by: Matt Turner <mattst88@gmail.com>
This doesn't help on Intel GPUs now because we always take the
"always_precise" path first. It may help on other GPUs, and it does
prevent a bunch of regressions in "intel/compiler: Don't always require
precise lowering of flrp".
Reviewed-by: Matt Turner <mattst88@gmail.com>
There is little effect on Intel GPUs now because we almost always take
the "always_precise" path first. It may help on other GPUs, and it does
prevent a bunch of regressions in "intel/compiler: Don't always require
precise lowering of flrp".
No changes on any other Intel platforms.
GM45 and Iron Lake had similar results. (Iron Lake shown)
total cycles in shared programs: 188852500 -> 188852484 (<.01%)
cycles in affected programs: 14612 -> 14596 (-0.11%)
helped: 4
HURT: 0
helped stats (abs) min: 4 max: 4 x̄: 4.00 x̃: 4
helped stats (rel) min: 0.09% max: 0.13% x̄: 0.11% x̃: 0.11%
95% mean confidence interval for cycles value: -4.00 -4.00
95% mean confidence interval for cycles %-change: -0.13% -0.09%
Cycles are helped.
Reviewed-by: Matt Turner <mattst88@gmail.com>
This doesn't help on Intel GPUs now because we always take the
"always_precise" path first. It may help on other GPUs, and it does
prevent a bunch of regressions in "intel/compiler: Don't always require
precise lowering of flrp".
No changes on any Intel platform. Before a number of large rebases this
helped cycles in a couple shaders on Iron Lake and GM45.
Reviewed-by: Matt Turner <mattst88@gmail.com>
If the magnitudes of #a and #b are such that (b-a) won't lose too much
precision, lower as a+c(b-a).
No changes on any other Intel platforms.
v2: Rebase on 424372e5dd5 ("nir: Use the flrp lowering pass instead of
nir_opt_algebraic")
Iron Lake and GM45 had similar results. (Iron Lake shown)
total instructions in shared programs: 8192503 -> 8192383 (<.01%)
instructions in affected programs: 18417 -> 18297 (-0.65%)
helped: 68
HURT: 0
helped stats (abs) min: 1 max: 18 x̄: 1.76 x̃: 1
helped stats (rel) min: 0.19% max: 7.89% x̄: 1.10% x̃: 0.43%
95% mean confidence interval for instructions value: -2.48 -1.05
95% mean confidence interval for instructions %-change: -1.56% -0.63%
Instructions are helped.
total cycles in shared programs: 188662536 -> 188661956 (<.01%)
cycles in affected programs: 744476 -> 743896 (-0.08%)
helped: 62
HURT: 0
helped stats (abs) min: 4 max: 60 x̄: 9.35 x̃: 6
helped stats (rel) min: 0.02% max: 4.84% x̄: 0.27% x̃: 0.06%
95% mean confidence interval for cycles value: -12.37 -6.34
95% mean confidence interval for cycles %-change: -0.48% -0.06%
Cycles are helped.
Reviewed-by: Matt Turner <mattst88@gmail.com>
I tried to be very careful while updating all the various drivers, but I
don't have any of that hardware for testing. :(
i965 is the only platform that sets always_precise = true, and it is
only set true for fragment shaders. Gen4 and Gen5 both set lower_flrp32
only for vertex shaders. For fragment shaders, nir_op_flrp is lowered
during code generation as a(1-c)+bc. On all other platforms 64-bit
nir_op_flrp and on Gen11 32-bit nir_op_flrp are lowered using the old
nir_opt_algebraic method.
No changes on any other Intel platforms.
v2: Add panfrost changes.
Iron Lake and GM45 had similar results. (Iron Lake shown)
total cycles in shared programs: 188647754 -> 188647748 (<.01%)
cycles in affected programs: 5096 -> 5090 (-0.12%)
helped: 3
HURT: 0
helped stats (abs) min: 2 max: 2 x̄: 2.00 x̃: 2
helped stats (rel) min: 0.12% max: 0.12% x̄: 0.12% x̃: 0.12%
Reviewed-by: Matt Turner <mattst88@gmail.com>
This pass will soon grow to include some optimizations that are
difficult or impossible to implement correctly within nir_opt_algebraic.
It also includes the ability to generate strictly correct code, which the
current nir_opt_algebraic lowering lacks (though that could be changed).
v2: Document the parameters to nir_lower_flrp. Rebase on top of
3766334923 ("compiler/nir: add lowering for 16-bit flrp")
Reviewed-by: Matt Turner <mattst88@gmail.com>
Drivers which do not support native integers should use a lowering
pass to go from integers to floats.
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This commit does a fairly large cleanup of blend descriptors, although
there should not be any functional changes. In particular, we split
apart the Midgard and Bifrost blend descriptors, since they are
radically different. From there, we can identify that the Midgard
descriptor as previously written was really two render targets'
descriptors stuck together. From this observation, we split the Midgard
descriptor into what a single RT actually needs. This enables us to
correctly dump blending configuration for MRT samples on Midgard. It
also allows the Midgard and Bifrost blend code to peacefully coexist,
with runtime selection rather than a #ifdef. So, as a bonus, this will
help the future Bifrost effort, eliminating one major source of
compile-time architectural divergence.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Neither GP nor PP in Mali4x0 support integers, so utilize new pass
and set native_integers to true for now until this flag is dropped.
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
This new pass lowers ints and bools to floats. It allows hardware
that doesn't have native integers (e.g. Mali4x0) to use the same
code paths as modern hardware.
It uses the newly introduced pass to gather SSA types and should be
used as late as possible.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
If PIPE_CAP_PACKED_UNIFORMS is not set, uniforms are vec4-aligned, so
lima_nir_lower_uniform_to_scalar should use the first channel of the vec4
for float uniforms.
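A hypothetical sketch of the addressing difference (helper names are mine,
not lima's):

  #include <stdbool.h>
  /* With packed uniforms, scalar uniform i lives at component (i % 4) of
   * vec4 slot (i / 4); without PIPE_CAP_PACKED_UNIFORMS every uniform is
   * vec4-aligned, so the value sits in component 0 of its own slot. */
  static unsigned uniform_slot(unsigned i, bool packed)      { return packed ? i / 4 : i; }
  static unsigned uniform_component(unsigned i, bool packed) { return packed ? i % 4 : 0; }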
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
We need to re-prepare the middle-end state to pick up changes to this
state to react correctly to pausing/resuming stream-out. So let's add a
flush here.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Fixes: ec8cbd79ac "draw/softpipe: EXT_transform_feedback support (v2)"
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
We currently set this state in the draw-module twice on each draw,
which trashes this state. So far that's not a problem, because we don't
really do much from that function.
But it turns out, we're going to have to do more; namely flush when the
state changes. This will incur a large performance penalty due to the
excessive setting.
Instead, let's rely on the CSO caching making sure that
llvmpipe_set_so_targets doesn't get called needlessly, and setup the
state directly there instead.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
glxinfo for Cypress XT [Radeon HD 5870] lists GL_KHR_robustness
as a supported extension. This was the last missing extension
for GL 4.5, so mark GL 4.5 as all DONE for r600.
Reviewed-by: Dave Airlie <airlied@redhat.com>
Inline writes skip transfer map/unmap at the cost of an extra copy
on the data during execbuffer. That is generally a win for small
transfers. But the heuristic to use inline writes based on buffer
sizes rather than transfer sizes makes little sense. More
importantly, inline writes miss optimizations that are done for
buffer transfers.
Let's just use transfers.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-By: Gert Wollny <gert.wollny@collabora.com>
virglrenderer has been changed such that
- VIRGL_CCMD_GET_QUERY_RESULT is fenced
- query buffers (PIPE_BIND_CUSTOM) are coherent
We can check if a query is ready using DRM_IOCTL_VIRTGPU_WAIT, and also
avoid a synchronized transfer to retrieve the query result. When
running against an older virglrenderer, it falls back to the old
behavior automatically.
TF2 @ 640x480 for pts4.dem went from 17fps to 40fps on my testing
machine.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
This makes CompressedTexSubImage from a PBO source do proper GPU
rendering to upload instead of stalling to map the PBO source on
the CPU (then copying it on the CPU).
Thanks Bas Nieuwenhuizen for pointing out that Vulkan includes this
functionality, and to Jason Ekstrand for writing the code I adapted.
Vulkan only supports a single layer, however, and this code tries to
support multiple layers as long as it's miplevel 0.
Improves performance in Sid Meier's Civilization VI:
Average frame time (ms): -3.67423% +/- 1.46201% (n=5)
99th percentile frame time (ms): -5.09910% +/- 3.87874% (n=5)
Currently ppir continues compilation when there is an unsupported
intrinsic, resulting in a shader that will surely not work as intended.
This is a problem during piglit runs as some tests don't compile
properly due to this but actually still get submitted to the gpu and
leave the system in an unstable state after executing, causing further
tests to fail.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
While lima still doesn't support some kinds of intrinsics, it is more
helpful to display the name of the unsupported instr->intrinsic to make
debugging easier.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
The current AOSP master build system breaks building mesa due to the
following error:
external/mesa3d/src/compiler/Android.glsl.gen.mk:94: error:
writing to readonly directory: "external/mesa3d/src/compiler/glsl/ir.h"
This error is bogus -- nothing "writes" to ir.h -- but the rule is
unnecessary because the generated header that is a dependency of the
non-generated header should be added to LOCAL_GENERATED_SOURCES and this
will track if the dependency needs to be regenerated.
(This change fixes a similar problem affecting nir.h too.)
Cc: Rob Clark <robdclark@chromium.org>
Cc: Emil Velikov <emil.l.velikov@gmail.com>
Cc: Amit Pundir <amit.pundir@linaro.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Alistair Strachan <astrachan@google.com>
Cc: Greg Hartman <ghartman@google.com>
Cc: Tapani Pälli <tapani.palli@intel.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Signed-off-by: Alistair Strachan <astrachan@google.com>
[jstultz: Forward ported and tweaked commit subject]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Apparently cosited_even was the required one instead of midpoint.
This adds a slight offset of 0.5 pixels to the coordinates (and we need
the image size to convert to normalized coords).
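A hedged sketch of the coordinate adjustment (the real lowering operates on
NIR, not on floats like this):

  /* Half-pixel shift for cosited-even siting; needs the image width to
   * express the shift in normalized coordinates. */
  static float cosited_even_u(float u_norm, float width)
  {
     return u_norm + 0.5f / width;
  }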
Fixes: 91702374d5 "radv: Add ycbcr lowering pass."
Acked-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
It was removed in "delete autotools .gitignore files", but the build
directory is created by scons.
[Skip CI]
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Broken on Polaris, and since I discovered NV12 is not subsampled but
a 2-plane format, I decided I don't really care.
Work to do to re-enable:
1) Figure out which devices support it natively.
2) Write some software emulation for the others.
Fixes: 52c1adda21 "radv: Add ycbcr format features."
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Basically, when the conditions of a csel diverge, we scalarize to avoid
going into weird code paths during emit. We could be doing better, but
this case can't occur organically from GLSL as far as I can tell, though it
does fix lowered atan2.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
A previous commit by Tomeu aborted RA early, which solves the memory
corruption issue, but then generates an incorrect compile. This fixes
that.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This handles the usual case. 8-bit register access parallels 16-bit
access, but with one major caveat: in 8-bit mode, only half of the
register file is actually (directly) accessible as sources. In
particular, for each 16-bit integer register (hrN), we can only index a
*single* 8-bit integer (qrN), corresponding to the lower 8-bits. To get
the upper 8-bits, it is required to do an explicit shift. For example,
to add the bytes of a 16-bit integer hr0.x and get the result as an
8-bit qr0, you'd need to do something like:
ilsr hr1.x, hr0.x, #8
iadd qr0.x, qr0.x, qr1.x
This scheme diverges from 32-bit registers, in that both the upper and
lower halves of a 32-bit register are individually accessible as a pair
of half registers. For contrast, to add the lower and upper 16-bits of a
32-bit integer r0.x, you can just:
iadd hr0.x, hr0.x, hr1.x
Since hr1.x = upper 16-bit of r0.x.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Meanwhile, we're forced to disable dest_override, since it's not yet
clear how this interacts with other bitnesses (it'll likely need to be
overhauled in any case).
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
We silently ignored certain bits of the mask, which causes issues when
disassembling 8/64-bit ops.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
In preparation for 8-bit and 64-bit operands, let's not reinforce the
32-bit-centric biases in the ISA.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Since it is dependent on the tile mode (ie. disabled for smaller mipmap
levels), we should handle it in a similar way to fd_resource_level_linear().
The code previously mostly did the right thing because the old helper
took the tile mode.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Best to keep it encapsulated in the helper which returns layer/level
offset (and actually use that helper everywhere) rather than spreading
the logic around the code.
Also add a helper to find UBWC offset, to complete the encapsulation.
Signed-off-by: Rob Clark <robdclark@chromium.org>
If someone is importing a buffer, we can't really know the state of its
contents, so assume it is valid.
Signed-off-by: Rob Clark <robdclark@chromium.org>
There are still some fallbacks we'll need to handle before we can enable
UBWC by default. I think we may need to fall back to uncompressed if
image atomic operations are used. And we still need to sort out how to
handle image and sampler views of compressed resources if the image/
sampler view is using a format that does not support compression. (I
think the latter should hopefully be uncommon outside of deqp/piglit.)
But at least this gets us to the point where supertuxkart works properly
with UBWC enabled ;-)
Signed-off-by: Rob Clark <robdclark@chromium.org>
A few fixes that get UBWC working for the games/benchmarks where I
noticed problems before (in particular manhattan and stk, modulo
image support for UBWC when compute shaders are used for post-process
effects):
+ fix the size of the UBWC meta buffer (ie, the offset to color
pixel data) that is returned by ->fill_ubwc_buffer_sizes()
+ correct size/layout for 8 and 16 byte per pixel formats
+ limit the supported formats. Not all formats that can be
tiled can be compressed.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Fixes dEQP-GLES31.functional.ubo.random.all_per_block_buffers.13 and .20
ca3eb5db66 went from silently truncating
the constant state, which was also the wrong thing to do, to an assert,
which then showed up in a couple of dEQPs. Actually there is nothing
wrong with a larger constant file, so just drop the assert.
Signed-off-by: Rob Clark <robdclark@chromium.org>
with that we can simplify code where nir vectors are created
v2: merge both lines in nir_vec
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Now that dlist compilation again knows if it is inside glBegin/glEnd,
we can leave the decision whether aliasing should occur to the vertex attribute
setter functions instead of doing that at glArrayElement time.
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
We have to use _mesa_inside_dlist_begin_end instead of
_mesa_inside_begin_end to see if we are inside a glBegin/glEnd block in
case of display lists.
So split the is_vertex_position function used in vertex attribute processing
into an imm and a dlist variant and use the appropriate _mesa_inside_begin_end
variant.
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
That seems to have been lost somewhere. It is needed for correct outside
begin/end detection in display list compilation, and for correct aliasing
in dlists re-established in the next changes.
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
It is no longer used, so we have fewer occasions where NewState is non-zero.
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
Now this part of gl_context state is unused and can be removed.
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
In glArrayElement, use the bitmask trick to walk just the enabled
vao arrays. This should be about equivalent in execution time to
walking the prepared aelt_context list. Finally, this will allow us to
reduce the _mesa_update_state calls in a few patches.
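A minimal sketch of the bitmask walk, assuming Mesa's u_bit_scan64() helper
(the field and variable names here are hypothetical):

  uint64_t mask = vao->Enabled;
  while (mask) {
     const int attr = u_bit_scan64(&mask);  /* lowest set bit's index, bit then cleared */
     /* fetch and dispatch vertex attribute 'attr' for this element */
  }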
v2: Add comments.
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
In the glArrayElement implementation, use glVertexAttrib*NV type
functions for fixed function attributes. We do the same in display list
execution when the list is replayed using immediate mode attribute
functions. Using a single set of function pointers enables us to
use a unified loop to walk the vertex array attributes.
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
For access to glArrayElement methods factor out a function to
get the table lookup index for normalized/integer/double access.
The function will be used in the next patch at least twice.
v2: Use vertex_format_to_index instead of NORM_IDX.
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
This new pass (which isn't even compile-tested) attempts to determine
the ALU type of all the SSA values in a function impl. It takes a
greedy approach and assigns intness or floatness to everything it thinks
can possibly contain an int or a float. Some values will be labeled as
both int and float and some will be labeled as neither, and it is up to
the caller to decide what to do with this information. However, for a
"nice" shader where the original source contained no bit-casts and no
implicit bit-casts were introduced by optimizations, there shouldn't be
any overlap in the two sets save for the odd CSEd zero constant.
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
These add a lot of complexity, and I currently can't measure any
performance benefit from having them. In the past, I seem to recall
seeing a benefit in drawoverhead scores, but currently it looks like
dropping them is either a wash or 1-2% faster.
Drop them to simplify allocations.
The STATE_BASE_ADDRESS "Size" fields can only hold 0xfffff in pages,
and 0xfffff * 4096 = 4294963200, which is 1 page shy of 4GB.
So we can't use the top page.
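The arithmetic, spelled out as a compile-time check (illustrative only):

  #include <stdint.h>
  #define PAGE_SIZE_4K   4096u
  #define SBA_SIZE_PAGES 0xfffffu   /* 20-bit "Size" field, in pages */
  /* 0xfffff * 4096 == 4294963200 == 4GB - 4096: one page shy of 4GB. */
  _Static_assert((uint64_t)SBA_SIZE_PAGES * PAGE_SIZE_4K == (4ull << 30) - 4096u,
                 "Size field covers 4GB minus one page");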
Both drivers are feature-complete and should be running more-or-less at
perf at this point. Drop the warning.
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
Just don't emit the transform array at all if there are no transforms
v2:
- Don't use len(array) > 0 (Dylan)
- Keep using ARRAY_SIZE to make the generated C code easier to read
(Jason).
st/mesa's PBO upload path binds a vertex shader that doesn't use any
textures, but leaves the existing sampler views bound in place. This
was tricking us into thinking the PBO destination might be bound for
texturing in some cases. In Civilization VI, this fixes a false self-
dependency issue that was preventing CCS_E compression on upload.
Fixing this slightly improves frame times.
This makes nm not required, but used if found. In general I imagine that
this means that on windows nm won't be found, and on other platforms it
will.
v2: - fix gbm and egl symbols check tests to only be run if nm is found
- reword commit message to reflect the code change
Reviewed-by: Eric Anholt <eric@anholt.net>
So that it can be implicitly disabled on windows, where it doesn't
compile.
v2: - Use an auto-option rather than automagic.
- fix shader_cache check (== -> !=)
v4: - Use new with_shader_cache instead of get_option('shader-cache')
elsewhere in the meson build
Reviewed-by: Eric Anholt <eric@anholt.net>
This allows them to default to false on windows, but default to true
elsewhere. As a side effect turning off shared-glapi now automatically
turns off gles. Shared glapi remains a boolean defaulting to true.
v5: - new in this version
Reviewed-by: Eric Anholt <eric@anholt.net>
Somewhere down in the depths of the mingw headers 'interface' is
defined, change it to iface like a similar patch did.
Signed-off-by: Dylan Baker <dylan.c.baker@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This allows the identifier to be used even if shared-glapi isn't built,
which simplifies a bunch of things.
Signed-off-by: Dylan Baker <dylan.c.baker@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Because the new raw/struct intrinsics are buggy with LLVM 8
(they weren't marked as a source of divergence), we fall back to the
old intrinsics for atomic buffer operations only. This means we need
to apply the indexing workaround for GFX9. The load/store
operations still use the new LLVM 8 intrinsics.
The fact that we need another workaround is painful, but we should
be able to clean that up a bit once LLVM 7 support is dropped.
This fixes a GPU hang with AC Odyssey and some rendering problems
with Nioh.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110573
Fixes: 31164cf5f7 ("ac/nir: only use the new raw/struct image atomic intrinsics with LLVM 9+")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Not all package managers / users will install perl into /usr/bin,
but /usr/bin/env /should/ always be present.
Using /usr/bin/env means that we can't give the -w argument to Perl,
so I added `use warnings' in the script.
Reviewed-by: Rob Clark <robclark@freedesktop.org>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Found while running Talos Principle.
As far as I can tell running a draw call with a pipeline having push
constants without the application having called vkCmdPushConstants
gives undefined push constant values.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: mesa-stable@lists.freedesktop.org
Since 09f1de97a7 "anv,i965: Lower away image derefs in the driver"
the backend compiler is not expected to handle any derefs, so let's
assert on it.
This helps identifying problems when a deref is not lowered and
"leaks" into the backend compiler.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The MASK macro is used in the RANGE macro, and it should
return the pre-bitset word mask for the (b) value.
i.e.
BITSET_MASK(0) should be undefined since it's meaningless.
BITSET_MASK(31) should give 0x7fffffff
BITSET_MASK(32) should give 0xffffffff
BITSET_MASK(33) should give 0x00000001
BITSET_MASK(64) should give 0xffffffff
However, BITSET_RANGE then ends up broken for cases where
its (b) value is 0, 32 or 64, as in that case the lower
mask would be 0, not 0xffffffff.
This fixes the unit tests that I've added, and my code that
uses bitsets.
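A minimal sketch of the intended semantics, assuming 32-bit bitset words
(this is not the exact macro from util/bitset.h):

  #include <assert.h>
  #include <stdint.h>

  static inline uint32_t bitset_last_word_mask(unsigned b)
  {
     assert(b > 0);                   /* BITSET_MASK(0) stays undefined */
     const unsigned rem = b % 32;
     return rem == 0 ? 0xffffffffu : (1u << rem) - 1u;
  }
  /* bitset_last_word_mask(31) == 0x7fffffff
   * bitset_last_word_mask(32) == 0xffffffff
   * bitset_last_word_mask(33) == 0x00000001
   * bitset_last_word_mask(64) == 0xffffffff */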
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Fixes: bb38cadb1c "More GLSL code"
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
The last test here currently fails as there is a bug in bitset.h
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This has a couple of hardcoded vec4 limits in it, change them
to the proper sizing to avoid future issues.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This is a port of Danylo's eca4a6548d
which fixed the hang on i965. It fixes GPU hangs in his new Piglit
test, arb_blend_func_extended-dual-src-blending-discard-without-src1.
I avoided my own review feedback here, and decided to simply adjust
3DSTATE_PS_BLEND rather than BLEND_STATE_ENTRY[0]. It has never been
clear to me which the hardware uses in every case. However, whacking
the enable in 3DSTATE_PS_BLEND seems to be sufficient to fix the hang,
and that packet is already dynamic, so it's easy to handle. I'd rather
avoid making BLEND_STATE_ENTRY[0] dynamic unless I have to.
It is an input but it comes in as part of the shader payload and doesn't
count towards the limits.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
During a rebase, it seems I accidentally broke the contents-menu,
leading to a duplicate link to freedesktop.org. This was obviously not
intended. Let's fix this.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Fixes: 7eee13c467 ("docs: use dl/dd instead of blockquote for
freedesktop link")
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Support nir_op_ftrunc by turning it into a mov with a round to integer
output modifier.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Merge the following into `meson-main`/`meson-loader-classic-dri`/
`meson-gallium-swr`:
- meson-vulkan
- meson-gallium-drivers-other
- meson-gallium-st-other
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
[ Michel Dänzer ]
* Rebase and fix up commit log.
* Don't set VULKAN_DRIVERS in meson-loader-classic-dri.
* Remove extraneous whitespace.
* Squash in follow-up fixes.
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
[ anholt]
* Add a note why nine and swrast landed where they did.
* Switch from s/meson-vulkan/meson-main/ to
s/meson-loader-classic-dri/meson-main/ which I think was the original
intent
Reviewed-by: Eric Anholt <eric@anholt.net>
Acked-by: Eric Engestrom <eric.engestrom@intel.com> (anholt changes)
Acked-by: Dylan Baker <dylan@pnwbakers.com>
- Add GBM_BO_IMPORT_FD_MODIFIER to documentation of supported foreign
object types
- Add newline before documentation block
- Improve language
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Daniel Stone <daniels@collabora.com>
- make sure compute shader derivatives are exposed for all extensions
- unify duplicated code
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
It's done by:
- decreasing the number of frames in flight by 1
- flushing before throttling in SwapBuffers
(instead of wait-then-flush, do flush-then-wait)
The improvement is apparent with Unigine Heaven.
Previously:
draw frame 2
wait frame 0
flush frame 2
present frame 2
The input lag is 2 frames.
Now:
draw frame 2
flush frame 2
wait frame 1
present frame 2
The input lag is 1 frame. Flushing is done before waiting, because
otherwise the device would be idle after waiting.
Nine is affected because it also uses the pipe cap.
This roughly mirrors what we get from autotools. There's a few
differences, though:
1. The "exec_prefix" output has been dropped. Meson doesn't support
this, so it makes no sense here.
2. The "llvm-config" output has been dropped. Meson abstracts dependency
discovery a bit more than our autotools build-system does, so it's
not easy to get this information as-is.
3. HUD extra stats, SWR archs, Shared/Static libs and CFLAGS / CXXFLAGS /
LDFLAGS have been dropped. These can be inspected by "meson configure".
4. How we set defines works quite differently in our Meson build-system,
and the result isn't quite the same. In particular, the DEFINES output
has been dropped, to avoid having to refactor the code too much.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109326
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Dylan Baker <dylan@pnwbakers.com>
Variables are cheap, and there's little reason for the dri and gallium
drivers to work on the same variable for the driver list. So let's split
these into two separate lists instead.
This makes it easier to inspect these after the fact, for instance
for generating a summary of build-settings.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Dylan Baker <dylan@pnwbakers.com>
This way we can mark the dri_drivers and dri_link arrays as temporary,
as all knowledge about them is contained in a single build-file with
a clearly visible, limited life-span.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Dylan Baker <dylan@pnwbakers.com>
We just need to do a sequence of commands to flush the cache.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Wire up support to sample from the fb (and force GMEM rendering when we
have fb reads). The existing GLSL IR lowering for blend_equation_advanced
does the rest.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Lower load_output to txf_ms_fb and add support for the new texture fetch
instruction.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Apparently we never hit this path. Or at least haven't for a rather
long time. But in either case (load_deref or load_frag_coord), we can
just directly use the intrinsic's ssa dest. So stop passing the
nir_variable (which would be NULL in the load_frag_coord case) around
and instead just use &intr->dest.ssa.
(This ofc means we need to setup the cursor to insert *after* the
instruction, which seems to be another bug of the original
implementation.)
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
And a comment.. since we are mixing units of bytes/dwords/vec4,
hopefully this will avoid some unit confusion.
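A small cheat-sheet for the units involved (illustrative macros, not the
driver's own):

  /* One vec4 constant slot is 4 dwords, i.e. 16 bytes. */
  #define VEC4_TO_DWORDS(n) ((n) * 4)
  #define VEC4_TO_BYTES(n)  ((n) * 16)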
Signed-off-by: Rob Clark <robdclark@chromium.org>
It isn't quite as simple as not running the pass, since with packed
varyings we get load_ubo for block==0 (ie. the "real" uniforms). So
instead run the pass normally but decline to lower anything in
block > 0
Signed-off-by: Rob Clark <robdclark@chromium.org>
Since we emit UBO regions INDIRECTly (ie. not copied into cmdstream but
emitted by EXT_SRC_ADDR) we need to keep them 4*vec4 aligned, which the
code already mostly did, except for aligning the first UBO region itself
(ie. the one after block==0 which is the "real" uniforms).
Fixes: 893425a607 freedreno/ir3: Push UBOs to constant file
Fixes: 3c8779af32 freedreno/ir3: Enable PIPE_CAP_PACKED_UNIFORMS
Signed-off-by: Rob Clark <robdclark@chromium.org>
Otherwise we zero out the state again, but all the UBO loads that we
could lower are already lowered. End result is that we didn't emit the
uniforms for lowered UBO access in any case where multiple shader
variants are used.
Fixes: 893425a607 freedreno/ir3: Push UBOs to constant file
Fixes: 3c8779af32 freedreno/ir3: Enable PIPE_CAP_PACKED_UNIFORMS
Signed-off-by: Rob Clark <robdclark@chromium.org>
This is useful to normalize the numbers written into the output file
as those numbers are accumulated over a period of time and a number of
frames.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
v2: switch to VkBase{In,Out}Structure
v3: Add timestamps at begin/end of primary command buffers to estimate
gpu time spent per submission (Lionel)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Acked-by: Eric Engestrom <eric.engestrom@intel.com> (v2)
This significantly reworks how the displayed numbers are computed. We
accumulate operations written into command buffers and add those to
the device when submitted to a queue. These collected values are then
used to compute per frame overlay data.
We also accumulate the data over the sampling fps period to produce
numbers for that period of time.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This will be used to copy chains of structures so that we can alter
some of them.
v2: Drop vk_util.h include (Eric)
Use VkBaseInStructure directly (Eric)
v3: Drop --platforms= param to generator script, instead produce a
file with #ifdef based what platforms are compiled.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
nir_opt_algebraic is currently one of the most expensive NIR passes,
because of the many different patterns we've added over the years. Even
though patterns are already sorted by opcode, there are still way too
many patterns for common opcodes like bcsel and fadd, which means that
many patterns are tried but only a few actually match. One way to fix
this is to add a pre-pass over the code that scans it using an automaton
constructed beforehand, similar to the automatons produced by lex and
yacc for parsing source code. This automaton has to walk the SSA graph
and recognize possible pattern matches.
It turns out that the theory to do this is quite mature already, having
been developed for instruction selection as well as other non-compiler
things. I followed the presentation in the dissertation cited in the
code, "Tree algorithms: Two Taxonomies and a Toolkit," trying to keep
the naming similar. To create the automaton, we have to perform
something like the classical NFA to DFA subset construction used by lex,
but it turns out that actually computing the transition table for all
possible states would be way too expensive, with the dissertation
reporting times of almost half an hour for an example of size similar to
nir_opt_algebraic. Instead, we adopt one of the "filter" approaches
explained in the dissertation, which trade much faster table generation
and table size for a few more table lookups per instruction at runtime.
I chose the filter which resulted the fastest table generation time,
with medium table size. Right now, the table generation takes around .5
seconds, despite being implemented in pure Python, which I think is good
enough. Based on the numbers in the dissertation, the other choice might
make table compilation time 25x slower to get 4x smaller table size, but
I don't think that's worth it. As of now, we get the following binary
size before and after this patch:
text data bss dec hex filename
11979455 464720 730864 13175039 c908ff before i965_dri.so
text data bss dec hex filename
12037835 616244 791792 13445871 cd2aef after i965_dri.so
There are a number of places where I've simplified the automaton by
getting rid of details in the LHS patterns rather than complicate things
to deal with them. For example, right now the automaton doesn't
distinguish between constants with different values. This means that it
isn't as precise as it could be, but the decrease in compile time is
still worth it -- these are the compilation time numbers for a shader-db
run with my (admittedly old) database on Intel skylake:
Difference at 95.0% confidence
-42.3485 +/- 1.375
-7.20383% +/- 0.229926%
(Student's t, pooled s = 1.69843)
We can always experiment with making it more precise later.
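A hypothetical sketch of the per-instruction lookup; the generated tables in
nir_algebraic are laid out differently, this only illustrates the idea of
combining an opcode "filter" with the states of the sources:

  #include <stdint.h>

  struct match_automaton {
     const uint16_t *filter;       /* opcode -> compact opcode class           */
     const uint16_t *transition;   /* [class][src0_state][src1_state] -> state */
     unsigned num_states;
  };

  static uint16_t
  instr_state(const struct match_automaton *a, unsigned opcode,
              uint16_t src0_state, uint16_t src1_state)
  {
     const unsigned cls = a->filter[opcode];
     return a->transition[(cls * a->num_states + src0_state) * a->num_states +
                          src1_state];
  }

  /* The resulting state then indexes a bitset of LHS patterns that might
   * match at this instruction, and only those are tried. */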
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
According to RadeonSI, this seems to be required by the hardware
to avoid GPU hangs. I think I just forgot to set that bit when I
implemented VK_EXT_transform_feedback.
This fixes a GPU hang with Space Engineers and DXVK.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110291
Fixes: b4eb029062 ("radv: implement VK_EXT_transform_feedback")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
valgrind crashes when we try to initialize host logging. This
env var can be used to disable logging.
v2: rebase onto "svga: move host logging to winsys".
Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Neha Bhende <bhenden@vmware.com>
All other pages have the heading as the first thing in the article. Let's
clean this up for consistency.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
The FAQ is the only article we have that uses a centered heading, which
makes it look odd compared to the other articles. Let's drop the
centering for consistency.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
HTML already have a way of doing automatically ordered lists, so let's
use that instead of open-coding one.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
This markup seems to assume paragraphs survive across block-elements,
which isn't the case. Let's rectify that.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
It's illegal to nest block-level elements such as <pre> inside <p> in
HTML. This means that when the paragraphs gets closed after a <pre>-tag,
we end up closing a non-existent tag, so the browser inserts a dummy
<p>-tag. This is entirely pointless, so let's just close these tags
before the <pre>-tag instead.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Paragraphs can't contain lists, and attempting to close them after
the list just causes an extra, empty paragraph to be created. We don't
want that, so let's close the paragraphs before the list instead.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
A list-item must be opened before it can be closed. So let's replace
this closing tag with an opening tag.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
The blockquote happens to match the indentation of the other lists for
most browsers, but this isn't a guarantee. Let's instead use a
definition-list, which is more strongly connected to a list, so it's
more likely to have the same indentation.
This also makes sure that we don't have similar padding on the
right-hand side, in case we change the text-size.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
<b>-tags aren't allowed in the root of <body>, so let's replace these
with <h2>-tags with some CSS to make them appear as bold.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Even in preformatted blocks, ampersands should be escaped. Let's correct
this, in case of strict parsers.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
The '>'-symbol should usually be escaped to avoid confusing strict
parsers. While it's very unlikely to cause issues as-is, let's quote it
for good measure.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
It previously used var->type instead of deref_instr->type and didn't
handle 64-bit outputs.
This fixes lots of CTS tests involving transform feedback and geometry
shaders (mostly dEQP-VK.transform_feedback.fuzz.random_geometry.*).
v2: fix writemask widening when comp != 0
v3: fix 64-bit variables when comp != 0, again
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Cc: 19.0 19.1 <mesa-stable@lists.freedesktop.org>
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
It's generally nicer to do this in terms of em units, as that scales
better with text-sizes, if we ever decide to change them.
The result is slightly larger than before, but only by a couple of
pixels.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
With "display: flex;" we can make this a bit more automatic, not
requiring a bunch of properties to be set to specific values to get the right
centering.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
This is a bit tidier than setting a background on the h1-text, requiring
it to be full height and all.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
While it's legal to omit the last semicolon in a CSS block, it's
generally not considered good style, as it makes it harder to add new
lines.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
These attributes have been commented out since 2005; I don't think
there's a big chance of them making a return as-is.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Tabs have been around as the indentation style of this file since it was
created. Some newer CSS has added double-spaces, but let's keep it
consistent.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
This error code typically indicates that a buffer object referenced
by the command stream is being used for CPU access by another client.
The correct action here is to retry after a while. Use usleep() until we
have proper kernel support for this wait.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
The file vmwgfx_drm.h was a bit outdated. Update to a recent version,
including defines supporting coherent memory.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Some constant- and texture upload buffer data may bounce in malloced
buffers before being transferred to hardware buffers. In the case of
texture upload buffers this seems to be an oversight. In the case of
constant buffers, code comments indicate that we want to avoid mapping
hardware buffers for reading when copying out of buffers that need
modification before being passed to hardware. In this case we avoid
data bouncing for upload manager buffers but make sure buffers that
we read out from stay in malloced memory.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
We didn't have the path using this command enabled as
typically we take an alternate path using DMA uploads.
Enable it so that we can exercise that code-path by turning off
the DMA path.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
The vmwgfx kernel module has a compatibility mode for user-space that is
not guest-backed resource aware. Add an environment variable to facilitate
testing of this mode on guest-backed aware kernels: if the environment
variable SVGA_FORCE_HOST_BACKED is defined, the driver will use host-backed
operation.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Deepak Rawat <drawat@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
For consistency with the ac_build_llvm8_buffer_{load,store}_common
helpers; this will also help a bit with removing the vec3 restriction.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Per the Vulkan spec 1.1.107, the predicate is a 32-bit value, though
the AMD hardware treats it as a 64-bit value, which means it might
fail to discard.
I don't know why this extension was drafted like that, but it
definitely does not fit AMD. The hardware doesn't seem to support
a 32-bit value for the predicate, so we need to implement a workaround.
This fixes an issue when DXVK enables conditional rendering with RADV,
this also fixes the Sasha conditionalrender demo.
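Illustrative only (the real workaround materializes the value on the GPU,
not on the CPU like this): the driver has to produce a 64-bit predicate
whose truth matches the application's 32-bit one.

  static uint64_t widen_predicate(uint32_t app_predicate)
  {
     /* Upper 32 bits forced to zero so a zero/nonzero 32-bit predicate keeps
      * the same truth value when the hardware reads 64 bits. */
     return (uint64_t)app_predicate;
  }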
Fixes: e45ba51ea4 ("radv: add support for VK_EXT_conditional_rendering")
Reported-by: Philip Rebohle <philip.rebohle@tu-dortmund.de>
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The hardware actually rounds before conversion. This now matches
what values are used when performing fast clears vs slow clears.
This fixes a rendering issue with Far Cry 3&4. This also fixes
a bunch of CTS tests that use a 8-bit UNORM format (only when
the 512*512 image size hint is manually disabled).
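A hedged illustration of the difference for an 8-bit UNORM channel (the
exact hardware formula may differ slightly):

  static uint8_t unorm8_trunc(float v) { return (uint8_t)(v * 255.0f); }        /* old behaviour */
  static uint8_t unorm8_round(float v) { return (uint8_t)(v * 255.0f + 0.5f); } /* rounds like the hw */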
Cc: "19.0" "19.1" <mesa-stable@lists.freedesktop.org>
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
I want to be able to do BITSET_TEST() != BITSET_TEST() and this isn't
currently possible because BITSET_TEST() returns a random bit. Compare
to zero to get an actual Boolean.
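Illustrative only: if BITSET_TEST() expands to the raw masked word, two set
bits at different positions compare unequal even though both are logically
true; normalizing with "!= 0" gives the intended boolean.

  static bool bits_agree(uint32_t w, unsigned i, unsigned j)
  {
     /* (w & (1u << i)) == (w & (1u << j)) would be wrong for i != j,
      * e.g. 8 != 128 even though both bits are set. */
     return ((w & (1u << i)) != 0) == ((w & (1u << j)) != 0);
  }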
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
I'm not sure what triggered this, but building with
scons platform=windows toolchain=crossmingw machine=x86 build=profile
with MinGW g++ 7.3 or 7.4 causes an internal compiler error.
We can work around it by forcing -O1 optimization.
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
Reviewed-by: Neha Bhende <bhenden@vmware.com>
It's needed by the next pbuffer fix, which changes the behavior of
draw_buffer_enum_to_bitmask, so it can't be used to help with error
checking.
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
It has been noted that the lima GP has a limit of 512 instructions,
after which the shaders don't work and fail silently.
This commit adds a check to make the shader compilation abort when the
shader exceeds this limit, so that we get a clear reason for why the
program will not work.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Small buffer subdatas which are essentially doing a memcpy were getting
bogged down by all the overhead of creating new transfers.
Signed-off-by: David Riley <davidriley@chromium.org>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
Intersecting transfer queue entries allow for the possibility of
extending an existing transfer instead of creating a new one (and all
the associated mapping/unmapping).
Signed-off-by: David Riley <davidriley@chromium.org>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
Recently we added checks to try and deny multisampled shader images.
Unfortunately, this messed up imageBuffers, which have sample_count = 0
and which are also used in PBO download, causing us to hit CPU map fallbacks.
Fixes: b15f5cfd20 iris: Do not advertise multisampled image load/store.
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
If no view is bound we still should reset the override to 0
and array mode.
This should fix misrendering in firefox WebRender since
the pbo sampler was removed.
Fixes: 1250383e36 (st/mesa: remove sampler associated with buffer texture in pbo logic)
This #include is needed for `NULL`, which is used on all OSes, not just Linux.
Reported-by: Juan A. Suarez Romero <jasuarez@igalia.com>
Fixes: 316964709e "util: add os_read_file() helper"
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Juan A. Suarez <jasuarez@igalia.com>
If we don't update this for all primitive-types, we end up rendering
slightly offset points and lines up until the point where the first
triangle gets drawn. This is obviously not correct, and violates
OpenGL's repeatability rule.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Fixes: ca9c413647 ("softpipe: Respect gl_rasterization_rules in
primitive setup.")
Reviewed-By: Gert Wollny <gert.wollny@collabora.com>
Use a different arrangement of constants to allow more ffma.
A vec4 backend will now use 3 fma for yuv_to_rgb. On freedreno/ir3, it is
down from 10 to 7 alu (4 fma, 3 mul, 3 add to 7 fma). Other backends
shouldn't be hurt.
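A hypothetical scalar form (real coefficients and the full matrix omitted):
folding the fixed Y/chroma offsets into one precomputed constant k lets each
channel become a chain of fused multiply-adds.

  #include <math.h>
  static float yuv_to_r(float y, float v, float c0, float c1, float k)
  {
     return fmaf(c0, y, fmaf(c1, v, k));   /* two ffma instead of mul+mul+add+add */
  }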
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Eric Anholt <eric@anholt.net>
Tested-by: Ian Romanick <ian.d.romanick@intel.com>
Since softpipe doesn't truly support multisample, I've not added softpipe
to "Enhanced per-sample shading", even though with the advertised GLSL
level ARB_gpu_shader5 is advertised.
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
This will enable calls to the interpolateAt* functions, but also a bunch
of other features.
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Like with interpolateAtSample, this is also not really implementing the
corresponding sampling and will only work correctly for pixels that are fully
covered, but since softpipe only supports one sample this is good enough
for now.
v2: Correct spelling (Roland Scheidegger)
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Since for this opcode the offsets are given manually the function
should actually also work for non-zero offsets, but the related piglits
only ever test with offset 0. Accordingly the patch satisfies
"fs-interpolateatoffset-*".
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Softpipe doesn't support more than one sample, so this function
implements the interpolation at sample 0 and adds a stub to make it
possible to interpolate at other samples.
As it is this makes the piglits "fs-interpolateatsample-*" pass, but
they only ever test sample 0 anyway.
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
This adds entry points for correcting the interpolation values if the
interpolation is done by using one of the interpolateAt* functions.
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Now that the LOD is evaluated up front, the cube faces can also be
evaluated on a per-sample basis instead of using the quad.
This fixes a large number of deqp gles 3 and 31 cube texture tests.
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
This enables the use of explicit gradients.
Also remove an unused parameter when changing the interfaces.
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
The shadow evaluation compare parameter is stored in different locations,
depending on the texture type. Move the values to a common location to free
the lod storage and to be able to reduce the number of parameters.
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
The value is stored in the lod components and this will be overwritten
when switching to the new code path.
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
The EGL_KHR_create_context spec says:
"If an OpenGL context is requested and the values for attributes
EGL_CONTEXT_MAJOR_VERSION_KHR and EGL_CONTEXT_MINOR_VERSION_KHR,
when considered together with the value for attribute
EGL_CONTEXT_OPENGL_FORWARD_COMPATIBLE_BIT_KHR, specify an OpenGL
version and feature set that are not defined, than an
EGL_BAD_MATCH error is generated."
This case is already correctly handled a bit below in
the same source file.
The correct handling was added by commit: 63beb3df
Reported-by: Ian Romanick <idr@freedesktop.org>
Here: https://bugzilla.freedesktop.org/show_bug.cgi?id=92552#c9
Fixes: 11cabc45b7 "egl: rework handling EGL_CONTEXT_FLAGS"
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Signed-off-by: Andrii Simiklit <andrii.simiklit@globallogic.com>
Some of the opts are not called in the general optimisation loop
in the state tracker's glsl -> nir conversion. We need to call
the radeonsi-specific optimisations once before scanning over
the nir, otherwise we can end up gathering info on code that
is later removed.
Fixes an assert in the piglit test:
./bin/varying-struct-centroid_gles3
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Needs to update max_half_reg, or be remapped to full reg and update
max_reg accordingly, depending on generation..
Signed-off-by: Rob Clark <robdclark@chromium.org>
When discard_delayed_release is set (default), we allocate more buffers
and use a different buffer wait path.
Check if it is set, and use the old paths if not
(the alternative buffer wait path could still be used, but there is no
advantage to using it in this case).
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
thread_submit's throttling depended on the number of internal
back buffers, and wasn't affected by the driver-requested
throttling value.
Now it is.
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
Optimize writeonly by passing PIPE_TRANSFER_WRITE
for these buffers instead of the safer
PIPE_TRANSFER_READ_WRITE.
This seems to improve the performance of d3d8 games
using d3d8to9.
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
We used TGSI_SEMANTIC_FOG for fog,
however on vs/ps 3, fog is allowed to have
4 components (even on the ff pipeline according
to a wine test).
Since gallium's TGSI_SEMANTIC_FOG has only one
component, use TGSI_SEMANTIC_GENERIC instead.
Fixes:
https://github.com/iXit/Mesa-3D/issues/346
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
The shader constant buffer size with the
constant compaction code can vary depending
on the shader variant compiled (for example if
fog constants are required, etc).
Thus instead of using fixed size for the shader,
add in the variant cache the size required, pass it
to the context, and use this value.
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
As with the constant compaction we map the constant
slots to new slots, we need to pass that information
to the context which is in charge of uploading
the constants.
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
When indirect addressing is not used, we know exactly
which constants are accessed, and thus can
have them located in consecutive slots.
We thus parse the shader again with a slot map
for compaction.
This patch contains the work inside nine_shader.c for this
path, but it needs some other commits to work, and thus
is not enabled yet by this commit.
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
Regroup all the param->rel assertions into one assertion for better clarity
and better coverage.
param->rel on an input can only happen with float constants for vs,
or with inputs on vs/ps 3.0.
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
Boolean and Integer constants are used in d3d9 for flow control.
Boolean are used for if/then/else and Integer constants
for loops.
The compilers can generate better code if these values are known
at compilation.
So far I haven't met a game that would change the values of these
constants frequently (and when they do, they set them to the values used
for the previous draw call, and thus the changes get filtered out).
Thus it makes sense to inline these constants and recompile the shaders.
The commit sets a bound to the number of variants for a given shader
to avoid too many shaders to be generated.
One drawback is it means more shader compilations. It would probably
make sense to compile these shaders asynchronously or let the user
control the behaviour with an env var, but this is not done here.
The games I tested hit very few shader variants, and the performance
impact was negligible, but it could help for games with uber shaders.
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
Dynamic textures seem to have a predictable stride. This stride
should be the same as for a ram buffer.
It seems some games don't check the actual stride value, assuming
it to be the expected one.
Thus this workaround (protected by a drirc option) is to use an intermediate
ram buffer.
Fixes Rayman Legends texture issues when enabled.
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
Use nine_context_box_upload instead of locking the pipe
for volume upload with format conversion.
nine_context_box_upload already handles format
conversion.
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
Use nine_context_box_upload instead of locking the pipe
for surface upload with format conversion.
nine_context_box_upload already handles format
conversion.
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
SINCOS takes an input with a replicated swizzle;
the swizzle can be on any component, not just x.
Enable it to read from any component, but also
use a temporary register to avoid dst/src aliasing.
No known game is fixed by this change as it seems
the input swizzle is commonly on x for this instruction,
and src and dst don't alias.
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
Systemmem has a specific behaviour we don't
mimic exactly.
That makes Halo feel free to use nooverwrite
with it all the time, even when reading again
at the same location.
Ignore nooverwrite to have proper synchronization.
Fixes: https://github.com/iXit/Mesa-3D/issues/348
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
For many ps 1.X instructions, we were reading the
texcoords directly, instead of through tx_src_param,
resulting in modifiers getting ignored.
Use tx_src_param for all these instructions.
Fixes: https://github.com/iXit/Mesa-3D/issues/337
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
d3d's nooverwrite and gallium's unsynchronized
have different semantics.
Indeed nooverwrite says the applications won't
write to locations needed by previous draws,
which is less strong than unsynchronized which
won't synchronize previous writes.
Thus in case the app is locking without discard/nooverwrite,
then using nooverwrite, we need to add
synchronization.
Fixes: https://github.com/iXit/wine-nine-standalone/issues/29
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
Previously nine_state_clear was not using
NineBindBufferToDevice and NineBindTextureToDevice
to unbind buffers and textures (but used nine_bind).
This resulted in an incorrect bind count for these
resources.
Combined with
0ec4e5f630
Some buffers were scheduled to be uploaded directly
after they were locked (because the bind count incorrectly
assumed they were needed for the next draw call),
which resulted in uploads before the data was written.
To simplify a bit the code (and because I needed to
add a pointer to device),
remove the stateblock usage from nine_state_clear and
rename to nine_device_state_clear.
Fixes:
https://github.com/iXit/Mesa-3D/issues/345
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
When a draw call is emitted, buffers in the
device->update_buffers list are uploaded.
This patch removes buffers from the list if they
are not bound anymore.
Behaviour found studying:
https://github.com/iXit/Mesa-3D/issues/345
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
When a draw call is emitted, textures in the
device->update_textures list are uploaded.
This patch removes textures from the list if they
are not bound anymore.
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
This seems to fix Rayman (which adds things
to the RCP result, and thus gets an Inf),
while not having regressions.
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
No-one reported bugs for that, but it seems
c442dd7890
and previous commits used APIs not defined until
nine minor version 3.
This patch should prevent a crash in this case.
Also turn off the resize feature in this case,
as we won't prevent a buffer leak anymore.
Cc: "19.0" mesa-stable@lists.freedesktop.org
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
These were updated in version 1.1.106 of vulkan.h to make more sense
with the extension names. We may as well keep with the times.
See also: 90108deb27 "anv: Update to use the new features struct names"
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
These were updated in version 1.1.106 of vulkan.h to make more sense
with the extension names. We may as well keep with the times.
See also: 90108deb27 "anv: Update to use the new features struct names"
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Newer gens (> 9) will start doing the linear -> sRGB conversion of the
clear color for us, if we use an sRGB surface format. So let's make sure
that doesn't happen and keep the same semantics as before.
Even though the hardware could convert the clear color for us during
fast clear, that converted color is only used for sampling. For resolve,
the original color would be used (without the conversion). So we convert
it ourselves and the same converted color gets used for both sampling
and resolving, simplifying the whole logic.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
We were disabling fast clears if the view format had a different
colorspace than the resource format (sRGB vs linear or vice-versa). But
we actually support them if we use the view format to decide if we
should encode the clear color into sRGB colorspace.
Also add a missing linear -> sRGB surface format conversion (we don't
want the clear color to be encoded to sRGB again during resolve).
v2: Do not track sRGB colorspace during fast clears (Nanley).
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
vmw_screen.h uses dev_t, which is defined in sys/types.h,
so that header is required to be included to get the dev_t
definition. This issue happens with the musl C library; it is hidden
on glibc since sys/types.h is included through other
system headers.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
The reverted patch claimed that the autotools build generates
libGLESv1_CM.so.1.0.0, but it doesn't:
es1api_libGLESv1_CM_la_LDFLAGS = \
-no-undefined \
-version-number 1:1 \
$(GC_SECTIONS) \
$(LD_NO_UNDEFINED)
Revert commit cc15460e18 to ensure that the
autotools and meson builds produce the same libraries.
Fixes: cc15460e18 "meson: drop GLESv1 .so version back to 1.0.0"
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
This enables the remaining capabilities in SPV_EXT_descriptor_indexing.
Fixes: 6e230d7607 "anv: Implement VK_EXT_descriptor_indexing"
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This enables the remaining capabilities in SPV_EXT_descriptor_indexing.
Fixes: 0e10790558 "radv: Enable VK_EXT_descriptor_indexing."
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The code was handling the Weak variant in some cases, but missing
others, e.g. the get_deref_nir_atomic_op. Add all the missing cases
with the same behavior of the non-Weak SpvOpAtomicCompareExchange.
Note that the Weak variant is basically an alias, as SPIR-V 1.3,
Revision 7 says
"OpAtomicCompareExchangeWeak
Deprecated (use OpAtomicCompareExchange).
Has the same semantics as OpAtomicCompareExchange."
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
These files implement running almost all of deqp-gles2 on Chromebooks of
the rk3399-gru-kevin type in Collabora's LAVA lab.
The approach follows what is currently being used for virglrenderer,
but scheduling the actual test jobs via LAVA.
We start by building a container in Docker that contains a suitable
rootfs and kernel for the DUT, deqp and all dependencies for building
Mesa itself.
Mesa is built, and the rootfs, deqp and Mesa are combined into a cpio
ramdisk. A LAVA job is generated, submitted to LAVA and the results are
processed by simply comparing them to the expectations that are stored
in git. Any code that changes the expectations (hopefully tests are
fixed) needs to also update the expectations file.
The next step is adding support for other devices, possibly in other
LAVA labs.
In order to use this, the repository has to be configured to run the
gitlab-ci.yaml file from the panfrost/ci dir, and a LAVA token needs to
be set up.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Prep work for fb_read (blend_equation_advanced)
Switch to using 'enum pipe_shader_type' everywhere, and (optional, in
non-cache / slowpath case) pass ctx instead of image/ssbo state. In the
fb_read case we also need to access the framebuffer state, so having
the ctx simplifies things.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Hardware docs say that Gen11 requires the use of two MI_ATOMICs of size
QWORD when updating the clear color. The second MI_ATOMIC also needs CS
Stall and Return Data Control set.
v2: Remove include of srgb header (Lionel)
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Change some of the single bit fields to booleans, and add an enum with
the definition of the ATOMIC_OPCODE.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
With python's int(), if the optional second parameter is 0, then
python will support the 0x prefix for hex numbers.
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
One special case, `src/util/xmlpool/.gitignore` is not entirely deleted,
as `xmlpool.pot` still gets generated (eg. by `ninja xmlpool-pot`).
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
I was setting it based off a pipe_rasterizer_state field that appears
to be entirely dead outside of the draw module respecting it.
I should be setting it when the primitive type reaching the SF is
neither points nor lines. This is, unfortunately, rather dirty,
as we have to look at the rasterizer state, the geometry shader state,
the tessellation evaluation shader state, and the primitive type...
https://reviews.llvm.org/rL356946 (present in LLVM 9 and later) changed
the meaning of the "system" sync scope, making it no longer restricted to
the memory operation's address space. So a single address space sync scope
is needed for shared atomic operations (such as "system-one-as" or
"workgroup-one-as") otherwise buffer_wbinvl1 and s_waitcnt instructions
can be created at each shared atomic operation.
This mostly reimplements LLVMBuildAtomicRMW and LLVMBuildAtomicCmpXchg
to allow for more sync scopes and uses the new functions in ac->nir with
the "workgroup-one-as" or "workgroup" sync scopes.
F1 2017 (4K, Ultra High settings, TAA), avg FPS : 59 -> 59.67 (+1.14%)
Strange Brigade (4K, ~highest settings), avg FPS : 51.5 -> 51.6 (+0.19%)
RotTR/mountain (4K, VeryHigh settings, FXAA), avg FPS : 57.2 -> 57.2 (+0.0%)
RotTR/tomb (4K, VeryHigh settings, FXAA), avg FPS : 42.5 -> 43.0 (+1.17%)
RotTR/valley (4K, VeryHigh settings, FXAA), avg FPS : 40.7 -> 41.6 (+2.21%)
Warhammer II/fallen, avg FPS : 31.63 -> 31.83 (+0.63%)
Warhammer II/skaven, avg FPS : 37.77 -> 38.07 (+0.79%)
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
To quote Uli Schlachter, who understands this stuff more than I do:
> The function __glXSendError() in mesa's src/glx/glx_error.c invents an X11
> protocol error out of thin air. For the sequence number it uses dpy->request.
> This is the sequence number of the last request that was sent. _XError() will
> then update dpy->last_request_read based on the sequence number of the error
> that just "came in".
>
> If now another something comes in with a sequence number less than
> dpy->last_request_read, since sequence numbers are monotonically increasing,
> widen() will incorrectly add 1<<32 to the sequence number and things might go
> downhill afterwards.
`__glXSendErrorForXcb` was also patched, as that's the function that
`glXCreateContextAttribsARB` actually uses.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99781
Cc: mesa-stable@lists.freedesktop.org
Fixes: ad503c41 'apple: Initial import of libGL for OSX from AppleSGLX svn repository'
Reviewed-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Signed-off-by: Hal Gentz <zegentzy@protonmail.com>
In commit 0d46e404 ("anv: limit URB reconfigurations when using
blorp") we tried to limit the number of URB reconfiguration by
checking if the last allocation is large enough to fit the blorp
dispatch.
We used the last bound pipeline to compare the allocation. The problem
with this is that the pipeline is bound but its commands might not
have been emitted into the command buffer yet.
Let's just revert commit 0d46e40467
since it didn't seem to yield any performance improvement.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 0d46e404 ("anv: limit URB reconfigurations when using blorp")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110535
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
It's perfectly legal and well-defined to render using a non-existent
or empty buffer object. The data coming out of the buffer object isn't
well defined unless we have the robustness flag set on the context, but
that's a different matter, and up to the shader hardware; it's the same
as out-of-bounds reads.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
It's legal for a buffer-object to have a NULL-resource, but let's just
skip over it, as there's nothing to do.
This patch switches the order of the conditionals in swr_update_derived,
so the logic becomes a bit more straight forward:
if (is_user_buffer)
...
else if (resource)
...
else
...
...instead of this:
if (!is_user_buffer)
if (resource)
...
else
...
else
...
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Alok Hota <alok.hota@intel.com>
It's legal for a buffer-object to have a NULL-resource, but let's just
skip over it, as there's nothing to do.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Acked-by: Karol Herbst <kherbst@redhat.com>
It's legal for a buffer-object to have a NULL-resource, but let's just
skip over it, as there's nothing to do.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
It's legal for a buffer-object to have a NULL-resource, but let's just
skip over it, as there's nothing to do.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
st_setup_current never sets this flag, and it's already checked against
right before. So let's remove this pointless check.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
From page 76 (page 80 of the PDF) of the GLSL 4.60 v.5 spec:
" No aliasing in output buffers is allowed: It is a compile-time or
link-time error to specify variables with overlapping transform
feedback offsets."
Currently, this is expected to fail, but it succeeds:
"
...
layout (xfb_offset = 0) out vec2 a;
layout (xfb_offset = 0) out vec4 b;
...
"
Fixes the following piglit test:
tests/spec/arb_enhanced_layouts/compiler/transform-feedback-layout-qualifiers/xfb_offset/invalid-overlap.vert
Fixes the following test:
KHR-GL44.enhanced_layouts.xfb_output_overlapping
v2:
- Use a data structure to track the used components instead of a
nested loop (Ilia).
v3:
- Take the BITSET_WORD array out from the
gl_transform_feedback_buffer struct and make it local to the
validation process (Timothy).
- Do not use a nested scope for the validation (Timothy).
v4:
- Add reference to the fixed piglit test in the commit log.
- Add reference to the fixed VK-GL-CTS test in the commit
log (Tapani).
- Empty initialize the BITSET_WORD pointers array (Tapani).
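A minimal sketch of the bitset-based check (helper and variable names are
illustrative, not the actual linker code):

#include "util/bitset.h"

/* One bit per component of a transform feedback buffer; a second write
 * to an already-set bit means two outputs overlap. */
static bool
xfb_components_overlap(BITSET_WORD *used, unsigned first_component,
                       unsigned num_components)
{
   for (unsigned c = first_component; c < first_component + num_components; c++) {
      if (BITSET_TEST(used, c))
         return true;
      BITSET_SET(used, c);
   }
   return false;
}

For the example above, `a` claims components 0-1 and `b` then tries to claim
0-3, so the check reports an overlap.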
Cc: Timothy Arceri <tarceri@itsqueeze.com>
Cc: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Before setting the physical device API version, we should check if the
MESA_VK_VERSION_OVERRIDE environment variable is set and take it into
account.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
While we can clean this up later, it's trivial to not generate the
stupid code in the first place, which saves some optimization work.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Matt Turner <mattst88@gmail.com>
VK_ANDROID_external_memory_android_hardware_buffer requires this
extension. It is safe to enable it since currently aux usage is
disabled for ahw buffers.
Fixes following dEQP extension dependency test on Android:
dEQP-VK.api.info.device#extensions
Cc: <mesa-stable@lists.freedesktop.org>
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Treat gl_FragCoord variable as a system value and lower the w component
with a nir pass.
Add the necessary bits for correct codegen.
Signed-off-by: Andreas Baierl <ichgeh@imkreisrum.de>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
On some hardware (e.g. Mali400) the shader needs to apply some
transformations for correct gl_FragCoord handling. The lowering
actions look like the following in pseudocode:
gl_FragCoord.xyz = gl_FragCoord_orig.xyz
gl_FragCoord.w = 1.0 / gl_FragCoord_orig.w
Add this lowering as a nir pass in preparation for using it in the driver.
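A rough sketch of that transformation using the NIR builder (not the exact
in-tree pass; names are illustrative):

#include "nir_builder.h"

static nir_ssa_def *
build_lowered_fragcoord(nir_builder *b)
{
   nir_ssa_def *coord = nir_load_frag_coord(b);

   /* Keep .xyz and replace .w with its reciprocal, per the pseudocode above. */
   return nir_vec4(b,
                   nir_channel(b, coord, 0),
                   nir_channel(b, coord, 1),
                   nir_channel(b, coord, 2),
                   nir_frcp(b, nir_channel(b, coord, 3)));
}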
Signed-off-by: Andreas Baierl <ichgeh@imkreisrum.de>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
I have *no* idea what's happening here, but let's not regress an app
that used to work in the meantime while we're figuring it out.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
In a perfect world, we'd use fp16 varyings for mediump and fp32 for
highp, allowing us to get a performance win without sacrificing
conformance. Unfortunately, we're not there (yet), so it's better we
assume always fp32 than always fp16 to avoid artefacts / breaking a lot
of deqp.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Two fixes here: one is that we tried to copyprop non-strictly-SSA values,
which was bound to blow up in our face. The other was peeling back the imov
workaround... Turns out we still need that. More research is still needed,
but let's not regress real apps.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Fixes: commit b53b4573c3.
Optimization gone wrong. In the future, we should try this again (it's a
net win if implemented right), but at the moment this just regresses.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Some of the dEQP.functional.transform_feedback tests end up doing
the following sequence of operations:
1. BeginTransformFeedback
2. PauseTransformFeedback
3. Draw
4. ResumeTransformFeedback
At step 1, we'd pack 3DSTATE_SO_BUFFER commands saying to zero the
SO_WRITE_OFFSET registers. At step 2, we disable streamout, so step 3
doesn't bother emitting those commands. Then, step 4 re-packs new
3DSTATE_SO_BUFFER commands with offset = 0xFFFFFFFF, saying to continue
appending at the existing offset. This loads the value from the BO as
the offsets - but we never actually zeroed it.
So, just maintain a flag saying "we actually emitted the commands",
and stomp offset back to zero until we emit some.
I have a platform with vc4 display but V3D 4.x. We can fall back on
kmsro's probing to bring up the v3d gallium driver.
Acked-by: Rob Clark <robdclark@chromium.org>
Like vc4, we expect to have SOCs with various displays that have a single
V3D instance for rendering.
v2: Add v3d to the list of drivers that make enabling kmsro valid.
Acked-by: Rob Clark <robdclark@chromium.org>
We can't use the QPU functions to detect this until register allocation is
done and we've moved inst->dst into inst->qpu.
Fixes bad TMU sequences from register spilling in
KHR-GLES31.core.compute_shader.shared-max.
Looks like I lost it in a rebase conflict resolution. We'd hit the
unknown intrinsic assertion in
KHR-GLES31.core.compute_shader.shared-struct.
Fixes: 6b1c659825 ("v3d: Add Compute Shader compilation support.")
There are two cases where v3d's sampler view's resource doesn't match the
base's: shadow textures for sampling from raster, and pointing at the
separate depth texture for z32f_s8x24. We only want to update shadow for
the first case.
Fixes
dEQP-GLES31.functional.stencil_texturing.render.depth32f_stencil8_draw
when run after the previous testcase.
We were emitting a dummy load for when the VS doesn't load any attributes,
but we also need to emit a dummy load for when the render VS loads
attributes but the binner VS doesn't. Fixes simulator assertion failures
and GPU hangs on KHR-GLES31.core.texture_gather.\*
We are assured that the input segment size field is ignored for
!separate_segs mode, and now the simulator wants an in-range value set
regardless of whether it's functionally ignored or not.
Fixes the following warning with clang:
warning: suggest braces around initialization of subobject
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Used same syntax as elsewhere with Mesa sources, verified result
against MSVC with godbolt.org.
Fixes the following warning with clang:
warning: suggest braces around initialization of subobject
v2: empty braces -> braces around subobject (Caio, Kristian)
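For illustration (not Mesa code), the pattern clang complains about and the
accepted form:

struct inner { int x, y; };
struct outer { struct inner in; int z; };

struct outer warns    = { 0 };     /* clang: suggest braces around initialization of subobject */
struct outer no_warns = { { 0 } }; /* inner braces spell out the subobject */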
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
- Emulation of AVX512 built into SIMDLIB
- Remove associated macros
- Remove knobs controlling AVX512 and let emulation handle it
- Refactor variable names for SIMD16
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
- Use 8x2 tiling by default
- Remove associated macros
- Use SIMDLIB emulation for SIMD16 on SIMD8 hardware
- Remove code rot in Load/StoreTile
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
If there is no last fence, due to no rendering happening yet, just
create a new signaled fence and return it, to match the expectations of
the EGL sync fence API.
Fixes random "Could not create sync fence 0x3003" assertion failures from
Skia on Android, coming from the following code:
https://android.googlesource.com/platform/frameworks/base/+/master/libs/hwui/pipeline/skia/SkiaOpenGLPipeline.cpp#427
Reproducible especially with thread count >= 4.
One could make the driver always keep the reference to the last fence,
but:
- the driver seems to explicitly destroy the fence whenever a rendering
pass completes and changing that would require a significant functional
change to the code. (Specifically, in lp_scene_end_rasterization().)
- it still wouldn't solve the problem of an EGL sync fence being created
and waited on without any rendering happening at all, which is
also likely to happen with Android code pointed to in the commit.
Therefore, the simple approach of always creating a fence is taken,
similarly to other drivers, such as radeonsi.
Tested with piglit llvmpipe suite with no regressions and following
tests fixed:
egl_khr_fence_sync
conformance
eglclientwaitsynckhr_flag_sync_flush
eglclientwaitsynckhr_nonzero_timeout
eglclientwaitsynckhr_zero_timeout
eglcreatesynckhr_default_attributes
eglgetsyncattribkhr_invalid_attrib
eglgetsyncattribkhr_sync_status
v2:
- remove the useless lp_fence_reference() dance (Nicolai),
- explain why creating the dummy fence is the right approach.
Signed-off-by: Tomasz Figa <tfiga@chromium.org>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Currently, if the timeout differs from 0, we'll end up with an infinite
wait... even if the user is perfectly clear they don't want that.
Use the new lp_fence_timedwait() helper guarding both waits in an
!lp_fence_signalled block like the rest of llvmpipe.
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
The function is analogous to lp_fence_wait() while taking a timeout
(ns) parameter, as needed for EGL fence/sync.
v2:
- use absolute UTC time, as per spec (Gustaw)
- bail out on cnd_timedwait() failure (Gustaw)
v3:
- check count/rank under mutex (Gustaw)
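A minimal sketch of the approach using C11 threads and an absolute deadline;
the count/rank fields are approximated from lp_fence and the exact code may
differ:

#include <stdbool.h>
#include <stdint.h>
#include <threads.h>
#include <time.h>

static bool
fence_timedwait_sketch(mtx_t *mutex, cnd_t *signalled,
                       const unsigned *count, unsigned rank,
                       uint64_t timeout_ns)
{
   struct timespec ts;
   timespec_get(&ts, TIME_UTC);                /* absolute UTC deadline */
   uint64_t ns = (uint64_t)ts.tv_nsec + timeout_ns;
   ts.tv_sec += ns / 1000000000;
   ts.tv_nsec = ns % 1000000000;

   mtx_lock(mutex);
   while (*count < rank) {                     /* checked under the mutex */
      if (cnd_timedwait(signalled, mutex, &ts) != thrd_success)
         break;                                /* timeout or error: bail out */
   }
   bool done = *count >= rank;
   mtx_unlock(mutex);
   return done;
}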
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com> (v1)
Reviewed-by: Gustaw Smolarczyk <wielkiegie@gmail.com>
As effectively required by the extension, we need to ensure we're master.
Currently drivers employ vendor-specific solutions, which check if the
device behind the fd is capable*, yet none of them do the master check.
*In the radv case, if acceleration is available.
Instead of duplicating the check in each driver, keep it where it's
needed and used.
Note this copies libdrm's drmIsMaster() to avoid depending on bleeding
edge version of the library.
v2: set the fd to -1 if not master (Bas)
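The copied check boils down to something like this (function name made up):
authenticating a magic token requires master, so a non-master fd is rejected
with EACCES.

#include <errno.h>
#include <stdbool.h>
#include <sys/ioctl.h>
#include <xf86drm.h>   /* DRM_IOCTL_AUTH_MAGIC, struct drm_auth */

static bool
fd_is_master(int fd)
{
   struct drm_auth auth = { .magic = 0 };

   if (ioctl(fd, DRM_IOCTL_AUTH_MAGIC, &auth) == 0)
      return true;

   /* A master fd fails for other reasons (e.g. EINVAL for the bogus magic);
    * only EACCES means the fd isn't master. */
   return errno != EACCES;
}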
Fixes: da997ebec9 ("vulkan: Add KHR_display extension using DRM [v10]")
Cc: Andres Rodriguez <andresx7@gmail.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Cc: Keith Packard <keithp@keithp.com>
Reported-by: Andres Rodriguez <andresx7@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
These have been popping up more and more with the OpenCL work and other
bits causing extra conversions to/from 64-bit.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
In 105002bd2d, we fixed a memory leak bug where we weren't properly
destroying descriptor sets when destroying/resetting a descriptor pool.
However, the only real leak that happened was that we take a
reference to the descriptor set layout in the descriptor set and
weren't dropping our reference. Everything else in the descriptor set
is tied to the pool itself and doesn't need to be freed on a per-set
basis. This commit changes the destroy/reset functions to only bother
walking the list of sets to unref the layouts and otherwise we just
assume that the whole-pool destroy/reset takes care of the rest.
Now that we're doing more non-trivial things with descriptor sets such
as allocating things with util_vma_heap, per-set destruction is starting
to show up on perf traces. This takes reset back to where it's supposed
to be as a cheap whole-pool operation.
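Roughly, the per-set work that remains on reset looks like this (list and
field names are approximate):

#include "anv_private.h"

static void
pool_unref_set_layouts(struct anv_device *device,
                       struct anv_descriptor_pool *pool)
{
   /* Only the layout references need per-set cleanup; the sets' storage is
    * reclaimed wholesale when the pool's own allocators are reset. */
   list_for_each_entry_safe(struct anv_descriptor_set, set,
                            &pool->desc_sets, pool_link)
      anv_descriptor_set_layout_unref(device, set->layout);
}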
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
In c520f4dec9, we chose to align the sizes of descriptor set buffers to
32 bytes. We have to align the descriptor set buffer to 32B so that
it's valid for use with push constants. We align the size as well so
we don't leave lots of holes with util_vma_heap_alloc. Unfortunately,
we were only aligning it for alloc and not for free so we were still
creating piles of holes when we delete descriptor sets. This causes
terrible perf for the allocator once we've deleted piles of descriptor
sets.
This commit reworks the code so that we align the descriptor set buffer
size to 32B for both alloc and free. The result is that it takes the
new crucible vkResetDescriptorPool from 104.567719 to 2.898354 seconds.
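A sketch of the fix (names are illustrative): compute the aligned size in one
helper so the alloc and free paths can never disagree.

#include <stdint.h>

static uint64_t
descriptor_set_buffer_size(uint64_t descriptor_buffer_size)
{
   /* Align to 32B, matching the alloc-side alignment. */
   return (descriptor_buffer_size + 31) & ~(uint64_t)31;
}

/* Both paths then use the same size:
 *   offset = util_vma_heap_alloc(&pool->bo_heap, descriptor_set_buffer_size(sz), 32);
 *   ...
 *   util_vma_heap_free(&pool->bo_heap, offset, descriptor_set_buffer_size(sz));
 */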
Fixes: c520f4dec9 "anv: Add a concept of a descriptor buffer"
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110497
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This fixes a case where we are expecting 64-bit but generate
32-bit consts and validate gets angry.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This fixes KHR-GL45.compute_shader.resources-max on radeonsi.
Fixes: 4e1e8f684b "glsl: remember which SSBOs are not read-only and pass it to gallium"
v2: use is_interface_array, protect against assertion failures in u_bit_consecutive
Reviewed-by: Dave Airlie <airlied@redhat.com>
Forgot to update the corresponding entries for desktop GL... kinda wish we
didn't have to update both GLES and GL tables.
Signed-off-by: Rob Clark <robdclark@chromium.org>
The compiler support for:
OES_sample_shading
OES_sample_variables
OES_shader_multisample_interpolation
Signed-off-by: Rob Clark <robdclark@chromium.org>
The so->inputs[] table is in units of vec4
Fixes: 7ff6705b8d freedreno/ir3: convert to "new style" frag inputs
Signed-off-by: Rob Clark <robdclark@chromium.org>
There are a few places that we check if a shader stage input reg is
used/valid (ie. not r63.x).. and there are about to be a bunch more.
So add some helper macros for less open-coding.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Since this is what the value actually is, clean up the name before
adding more i,j-related values for sample-shading.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Calculates i,j at the specified offset within a pixel. A new load_size_ir3
intrinsic is used in conjunction with fddx/fddy to translate the offset
into primitive space and adjust the i,j from load_barycentric_pixel
accordingly.
Signed-off-by: Rob Clark <robdclark@chromium.org>
De-duplicate the "normal" and "flags" versions of the macros, and while
at it go ahead and add "flags" versions for all the remaining macros,
since we'll at least need INSTR1F in a following commit.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Update UABI header and add FD_PP_PGTABLE and FD_NR_FAULTS params.
Robustness can be supported by a kernel which provides the new ABI if it
also indicates that per-process pagetables are in use.
Signed-off-by: Rob Clark <robdclark@chromium.org>
We'll want to unify this with main copy prop (and extend to varyings),
but that'll take more care to handle some special cases, so leave it as
a stub pass for now.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This extends copy propagation to respect output modifiers for ALU
instructions, as well as potentially fixing some bugs related to looping
(all dEQP loop tests pass).
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This enables the basic YCbCr features.
We support basic multiplane formats using 8-bit and 16-bit unorms, as
well as YUV2 formats.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
No functional changes. This temporarily uses plane 0 for
everything.
Long term plan is that only single plane images get to use
metadata like htile/dcc/cmask/fmask.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
These will be lowered by nir_lower_tex() with the
lower_tex_when_implicit_lod_not_supported option, so they don't need the
extra handling here.
Reviewed-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
We already add the LOD src, so go ahead and update the texop as well
when this option is set.
v2: Make it an option. (Rob Clark)
v3: Use a more concise name suggested by Jason.
Reviewed-by: Rob Clark <robdclark@gmail.com>
One cannot write the URB arbitrarily and therefore the message
has to be carefully constructed. The clever tricks originate
from Kenneth and Jason, I'm just writing the patch.
Fixes GPU hangs on ICL with Vulkan CTS.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Topi Pohjolainen <topi.pohjolainen@intel.com>
OpenGL 4.6 Spec:
"5.3.3 Rules
.......
Note: “Updates” via rendering or transform feedback
are treated consistently with updates via GL commands.
Once EndTransformFeedback has been issued, any subsequent
command in the same context that uses the results of the
transform feedback operation will see the results."
v2: removed a wrong comment
( Kenneth Graunke <kenneth@whitecape.org> )
v3: - flush+dirty depends on buffers usage history
- removed an old hack
( Kenneth Graunke <kenneth@whitecape.org> )
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110404
Signed-off-by: Andrii Simiklit <andrii.simiklit@globallogic.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Just enable it during init_render_context on Gen10+, and move the
Gen9 state tracking into iris_genx_state so it only exists on Gen9.
Reviewed-by: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
DRI driver loadable modules are always installed with
install_megadriver.py with names ending with '.so', irrespective of
platform.
Force the name the loadable module is built with to match, so
install_megadriver.py doesn't spin trying to remove non-existent
symlinks.
Fixes: c77acc3c "meson: remove meson-created megadrivers symlinks"
Force the driver thread to sync immediately with a compiler thread (but
compilation still happens in a separate thread).
This can be useful to simplify debugging compiler issues.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Enabling this option will create ddebug-style dumps for the aux context,
except that instead of intercepting the pipe_context layer
we just dump the IB contents on flush.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Due to asynchronous execution, it's not clear which of the draws the state
may refer to.
This also works around an issue encountered with radeonsi where dumping
the driver state itself caused a hang.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Move the definition of radeonsi_clear_db_cache_before_clear there,
as well as radeonsi_enable_nir.
This removes the AMD_DEBUG=nir option.
We currently still have two places for options: the driconf machinery
and AMD_DEBUG/R600_DEBUG. If we are to have a single place for options,
then the driconf machinery should be preferred since it's more flexible.
The only downside of the driconf machinery was that adding new options
was quite inconvenient. With this change, a simple boolean option can
be added with a single line of code, same as for AMD_DEBUG.
One technical limitation of this particular implementation is that while
almost all driconf features are available, the translation machinery doesn't
pick up the description strings for options added in si_debug_options. In
practice, translations haven't been provided anyway, and this is intended
for developer options, so I'm not too worried. It could always be added
later if anybody really cares.
v2:
- use bool instead of uint8_t for options
- si_debug_options.inc -> si_debug_options.h
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This reverts commit 40b3abb4d1.
It is not clear that this commit was entirely correct, and unfortunately
it was pushed in error.
CC: Jason Ekstrand <jason@jlekstrand.net>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
We were only setting the used mask for the first component of a
varying. Since the linking opts split vectors into scalars this
has mostly worked ok.
However this causes an issue where for example if we split a
struct on one side of the interface but not the other, then we
can possibly end up removing the first components on the side
that was split and then incorrectly remove the whole struct
on the other side of the varying.
With this change we simply mark all 4 components for each slot
used by a struct. We could possibly make this more fine grained
but that would require a more complex change.
This fixes a bug in Strange Brigade on RADV when tessellation
is enabled, all credit goes to Samuel Pitoiset for tracking down
the cause of the bug.
Fixes: f1eb5e6399 ("nir: add component level support to remove_unused_io_vars()")
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
This was a rebase issue which lost a change to a file moved from i965
to src/intel/perf.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 134e750e16 ("i965: extract performance query metrics")
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
need_cs_space may clear the buffer list.
Fixes: 951d60f8cd "radeonsi: delay adding BOs at the beginning of IBs until the first draw"
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This is needed for exposing the samplerBuffer functions under
EXT_gpu_shader4.
v2: - expose it in the compat profile only
- make it an alias of EXT_gpu_shader4
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com> (v1)
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Reviewed-by: Eric Anholt <eric@anholt.net>
The CTS fails on
dEQP-GLES31.functional.shaders.opaque_type_indexing.atomic_counter.*vertex
when they are enabled, due to the VS being run for both bin and render. I
think this behavior is expected to be valid, but I can't find text in
atomic counters or SSBO specs saying so (the closest I found was in
shader_image_load_store). Just disable it for now, since the closed
source driver doesn't expose vertex atomic counters/SSBOs either.
This is just asking for tests to get confused about the HW supporting
atomics in this shader stage or not, such as
dEQP-GLES31.functional.shaders.opaque_type_indexing.atomic_counter.const_expression_vertex.
v2: Rebase on the other atomic cleanups that have happened since posting.
v3: Commit message tweak by Marek.
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This automatically enables preemption on gen10, where it is disabled by
default but still available.
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
We create two new helpers, iris_flush_bits_for_history, and
iris_dirty_for_history, then use them in the existing function.
The first accumulates flush bits based on res->bind_history, but doesn't
actually perform a flush. This allows us to accumulate flush bits by
looping over multiple resources, but ultimately emit a single flush for
all of them.
The latter flags dirty bits without flushing, which again allows us to
handle multiple resources, but also is more convenient when writing from
the CPU where we don't need a flush (as in commit 4d12236072).
This inserts a handle for the flink name and a handle for the correct
gem handle for the bo.
v2: fix handles/names confusion (Lepton Wu)
v3: set flink name correctly (Lepton Wu)
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
Instead of aligning and then taking inline uniforms into account, we
need to take inline uniforms into account and then align to a page.
Otherwise, we may not be aligned to a page and allocation may fail.
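A sketch of the corrected order (the helper name and the 4 KiB page size are
assumptions):

#include <stdint.h>

static uint64_t
pool_buffer_size(uint64_t descriptor_size, uint64_t inline_uniform_size)
{
   /* Account for the inline uniform blocks first, then align the sum to a
    * page, so the final size stays page-aligned. */
   uint64_t size = descriptor_size + inline_uniform_size;
   return (size + 4095) & ~(uint64_t)4095;
}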
Fixes: 43f40dc7cb "anv: Implement VK_EXT_inline_uniform_block"
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
anv_descriptor_pool_free_set is called on the clean-up path of
anv_descriptor_set_create and the set may not have been added to the
pool's list of sets yet. While we're here, we move adding it to that
list into set_create for symmetry.
Fixes: 105002bd2d "anv: destroy descriptor sets when pool gets..."
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Found by GCC warning:
src/intel/compiler/brw_fs_combine_constants.cpp: In function ‘bool needs_negate(const fs_reg*, const imm*)’:
src/intel/compiler/brw_fs_combine_constants.cpp:306:34: warning: comparison of unsigned expression < 0 is always false [-Wtype-limits]
return ((reg->d & 0xffffu) < 0) != (imm->w < 0);
~~~~~~~~~~~~~~~~~~~^~~
The result of the bit-and is a 32-bit value with the top bits all zero.
This will never be < 0. Instead of masking off the bits, just cast to
int16_t and let the compiler handle the actual conversion.
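The resulting comparison, in isolation (sketch):

#include <stdbool.h>
#include <stdint.h>

static bool
signs_differ(int32_t reg_d, int16_t imm_w)
{
   /* Reinterpret the low 16 bits of the register as signed rather than
    * masking; the masked unsigned value can never be negative. */
   return ((int16_t)reg_d < 0) != (imm_w < 0);
}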
Fixes: e64be391dd ("intel/compiler: generalize the combine constants pass")
Cc: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This code is outdated and unused; now that the compiler is mature,
there's no point keeping it around in-tree (or at all).
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This code is stable and can live upstream independently while the rest
of the Bifrost stack comes up.
v2: Added a verbose flag to hide away some of the more verbose features
that nobody really needs
[The Bifrost disassembler is written by Connor Abbott, Lyude Paul, and
Ryan Houdek.]
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
We create an all-encompassing opcode table for handling name and
properties, removing a number of ad hoc opcode tables which became
brittle and quickly out of date. While we're at it, we fix some
incorrect opcodes relating to ball/bany, and move a small function out
to midgard_compile.c. Together these changes should allow compilation
without warnings, along with helping the codebase health considerably.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Most copy prop should occur at the NIR level, but we generate a fair
number of moves implicitly ourselves, etc... long story short, it's a
net win to also do simple copy prop + DCE on the MIR. As a bonus, this
fixes the weird imov precision bug once and for good, I think.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
For floating point ops, these bits determine the "negate?" and "abs?"
modifiers. For integer ops, it turns out they control how sign/zero
extension work, useful for mixing types.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
In the future, we might want to switch to a table-based approach, but
for now, at least have it current.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
We reshuffle the existing "dead move elimination" pass into a generic
dead code elimination layer, fixing bugs incurred with looping in the
process.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
The bug this worked around is no longer applicable, it seems -- remove
the hack that breaks more than it fixes.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
The hardware needs this lowered anyway; for now, might as well use
mesa's default lowering for pure conformance reasons.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This restriction makes sense logically. Not sure why it wasn't obeyed
before. In conjunction with previous commit's disclaimer, fixes
dEQP-GLES2.functional.shaders.loop.for_dynamic_iterations.*
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Along with a corresponding fix to the move elimination pass (not
included here yet -- I just have it disabled for now), this will fix
dEQP-GLES2.functional.shaders.loops.for_uniform_iterations.*
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This adds preliminary support for indirect loads of varying arrays and
uniform arrays, bringing a few new tests in shader.indexing.* to
passing, although there remains a number of cases still missing.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Varying arrays sometimes are lowered to a series of directly accessed
varyings (which we handled okay), but when indirectly accessed, they
appear as a single array; we need to handle this as well.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
The semantics of this field are not well understood; it is better to
print it unconditionally along with the other unknown state, rather than
silently eat the value. Without this change, some critical state was
being lost in some shaders (notably, the offset for load/store
scratchpad instructions found in shaders that spill registers).
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Otherwise our textures don't get color compression. Thanks to
Eero Tamminen for noticing this was missing!
Improves performance of GLB27_FillTestC24Z16 on my Apollolake
laptop with single channel RAM by 2.3x.
Reported-by: Eero Tamminen <eero.t.tamminen@intel.com>
flrp32 is also a 3-source instruction, but there is another pending
series that handles that for Gen4 and Gen5.
v2: Rebase on "intel/compiler: Don't have sepearate, per-Gen
nir_options"
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Instead, just have separate scalar vs. vector nir_options and do
per-Gen "fix ups".
Suggested-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Every file that included glsl/ir.h had a warning like:
src/compiler/glsl/ir.h: In member function ‘virtual bool ir_rvalue::is_lvalue(const _mesa_glsl_parse_state*) const’:
src/compiler/glsl/ir.h:236:64: warning: unused parameter ‘state’ [-Wunused-parameter]
virtual bool is_lvalue(const struct _mesa_glsl_parse_state *state = NULL) const
^
Cc: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Fixes: fa4ebf6b8d ("glsl: add _mesa_glsl_parse_state object to is_lvalue()")
Reviewed-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Commit 624789e370 moved the destruction of types out of atexit() and
made use of a ref count instead. This is useful for avoiding a crash
where drivers such as radeonsi are still compiling in a thread when the app
exits and has not called MakeCurrent to change from the current context.
While the above scenario is technically an app bug we shouldn't crash.
However that change caused another race condition between the shader
compilation thread in radeonsi and context teardown functions.
This patch makes two changes to fix this new problem:
First we explicitly call _mesa_destroy_shader_compiler_types() when destroying
the st context rather than calling it indirectly via _mesa_free_context_data().
We do this as we must call it after st_destroy_context_priv() so that we don't
destroy the glsl types before the compilation threads finish.
Next wait for the shader threads to finish in si_destroy_context() this
also means we need to call context destroy before destroying the queues
in si_destroy_screen().
Fixes: 624789e370 ("compiler/glsl: handle case where we have multiple users for types")
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The dri options are optional. When the dri options are not provided
the WSI will not use adaptive sync.
FWIW I think for xf86-video-amdgpu this still requires an X11 config
option, so only people who opt in can get possible regressions from this.
So then the remaining question is: why do this in the WSI?
It has been suggested in another MR that the application sets this.
However, I disagree with that as I don't think we'll ever get a
reasonable set of applications setting it.
The next question is whether this can be a layer. It definitely
can be, as implemented now. However, I think this generally fits
well with the function of the WSI. Furthermore, for e.g. the DISPLAY
WSI this is much harder to do in a layer.
Of course, most of the WSI could almost be a layer, but I think
this still fits best in the WSI.
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
This includes 0 options.
The cache parsing is located at a position where we can easily add
config filtering by VkApplicationInfo.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
This hooks up the iris gallium driver to the existing mesa bits which handle
the implementation.
Resolves kwg/mesa#8
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
if the driver (iris) indicates support for the inner_coverage pipe cap, this
will set the necessary states in the driver flags and rasterizer structs
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
this can be used by drivers which support the extension to indicate support
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
We need to subtract the starting offset from the final offset before
dividing by the stride. See src/intel/vulkan/genX_cmd_buffer.c:3142.
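The arithmetic in question, in isolation (names are illustrative):

#include <stdint.h>

static uint32_t
elements_written(uint32_t final_offset, uint32_t start_offset, uint32_t stride)
{
   /* The count comes from how far the offset advanced, so subtract the
    * starting offset before dividing by the stride. */
   return (final_offset - start_offset) / stride;
}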
Not known to fix anything.
This operation decorates with an Id instead of a Literal or String.
It is used by HlslCounterBufferGOOGLE (provided by
SPV_GOOGLE_hlsl_functionality1). Even if we don't do anything with
that decoration, we must be able to parse SPIR-V that uses it.
Fixes: 891886da2f "spirv: Add no-op support for VK_GOOGLE_hlsl_functionality1"
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Decorations (and ExecutionModes) can have not only literals, but also
Ids associated with them. So rename the field to the more general
name "Operand" used by the spec.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Several empty cmdbufs are submitted by app/xserver per frame, from
glamor_block_handler for example. Let's skip them.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
Noticed while trying to decide if pipebuffer was of any use to me, and
found that nothing has used it in the last 10 years at least.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
Reviewed-by: Kristian Høgsberg <hoegsberg@gmail.com>
Set REG_A2XX_RB_COPY_DEST_OFFSET in the tile init as it won't get touched
by the draw batch. Then gmem2mem is the same for all tiles.
Similar to what is done in a6xx, but only for gmem2mem.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Allows removing the load_deref/store_deref code in the compiler.
tgsi_to_nir now uses screen instead of options so we can simplify that too.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Rob Clark <robdclark@gmail.com>
The a2xx driver is currently broken when PIPE_CAP_PACKED_UNIFORMS is enabled;
disable it for now.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
tgsi_to_nir now requires a screen pointer and is used by fd2_prog_init.
fd2_prog_init is used before fd_context_init so set the pointer manually.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Rob Clark <robdclark@gmail.com>
so that bound compute shader resources won't be added when they are not
needed, and the same for graphics.
Acked-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
The assertion considers max_dw from the current IB in the chain, but
big_ib_buffer is a buffer for the next IB, which can be smaller.
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Acked-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Applications frequently call glBufferSubData() to consecutive regions
of a VBO to append new vertex data. If no data exists there yet, we
can promote these to unsynchronized writes, even if the buffer is busy,
since the GPU can't be doing anything useful with undefined content.
This can avoid a bunch of unnecessary blitting on the GPU.
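The idea, sketched against gallium's range tracking (field and parameter
names approximate what iris keeps per buffer):

#include "pipe/p_defines.h"
#include "pipe/p_state.h"
#include "util/u_range.h"

static unsigned
maybe_promote_to_unsynchronized(struct util_range *valid_buffer_range,
                                const struct pipe_box *box, unsigned usage)
{
   /* A write landing entirely outside the previously written range can't
    * disturb anything the GPU might still read, so skip synchronization. */
   if ((usage & PIPE_TRANSFER_WRITE) &&
       !(usage & PIPE_TRANSFER_UNSYNCHRONIZED) &&
       !util_ranges_intersect(valid_buffer_range, box->x, box->x + box->width))
      usage |= PIPE_TRANSFER_UNSYNCHRONIZED;

   return usage;
}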
u_threaded_context would do this for us, and in fact prohibits us from
doing so (see TC_TRANSFER_MAP_NO_INFER_UNSYNCHRONIZED). But we haven't
hooked that up yet, and it may be useful to disable u_threaded_context
when debugging...at which point we'd still want this optimization. At
the very least, it would let us measure the benefit of threading
independently from this optimization. And it's not a lot of code.
Removes most stall avoidance blits in "Total War: WARHAMMER."
On my Skylake GT4e at 1920x1080, this appears to improve performance
in games by the following (but I did not do many runs for proper
statistics gathering):
----------------------------------------------
| DiRT Rally | +2% (avg) | + 2% (max) |
| Bioshock Infinite | +3% (avg) | + 9% (max) |
| Shadow of Mordor | +7% (avg) | +20% (max) |
----------------------------------------------
This implements PIPE_CAP_INVALIDATE_BUFFER and invalidate_resource(),
as well as the PIPE_TRANSFER_DISCARD_WHOLE_RESOURCE flag. When either
of these happen, we swap out the backing storage of the buffer for a
new idle BO, allowing us to write to it immediately without stalling
or queueing a blit.
On my Skylake GT4e at 1920x1080, this improves performance in games:
-----------------------------------------------
| DiRT Rally | +25% (avg) | +17% (max) |
| Bioshock Infinite | +22% (avg) | +11% (max) |
| Shadow of Mordor | +27% (avg) | +83% (max) |
-----------------------------------------------
This is probably not the best place for it, but I don't feel like moving
the one out of the TGSI translator today, and we already have the other
direction here, so...*shrug*
This unifies a bunch of the UBO and SSBO code to use common structures.
Beyond iris_state_ref, pipe_shader_buffer also gives us a buffer size,
which can be useful when filling out the surface state.
I have various conditions in place to try and avoid unnecessary
PIPE_CONTROL flushes, especially to batches which may have never
used the buffer being mapped. But if we do a CPU map to a bound
constant buffer, we still need to mark push constants dirty, even
if there's nothing happening in batches that would warrant a flush.
Fixes obvious misrendering in the "XCOM 2: War of the Chosen" menus
(lots of rainbow colored triangles). Fixes lots of blinking elements
in "Shadow of Mordor". Fixes missing crowd rendering in "DiRT Rally".
Allow ATTR and IMM sources unconditionally (ATTR are just GRFs, IMM will
be handled by opt_combine_constants()). Both are already allowed by
opt_copy_propagation().
Also allow FIXED_GRF if the regioning is 8,8,1. Could also allow other
stride=1 regions (e.g., 4,4,1) and scalar regions but I don't think
those occur. This is sufficient to allow a pass added in a future commit
(fs_visitor::lower_linterp) to avoid emitting extra MOV instructions.
I removed the 'src.stride > 1' case because it seems wrong: 3-src
instructions on Gen6-9 are align16-only and can only do stride=1 or
stride=0. A run through Jenkins with an assert(src.stride <= 1) never
triggers, so it seems that it was dead code.
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
The two new unit tests verify that propagating a saturate between
instructions of different exec sizes does not happen.
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
Will allow us to test that propagation between instructions of different
exec sizes does not happen (in the next commit).
The stray-looking change in intervening_dest_write is to adjust the size
of the texture result to keep the test functioning identically when the
instructions' exec sizes are doubled. Without the change, the texture
does not overwrite the destination fully as the unit test intends.
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
We now have a lowering pass that will do this at the fs_visitor level,
so we can remove this code from gen11+.
v2: Reduce size of the "i" array from 4 to 2 (Matt).
Reviewed-by: Matt Turner <mattst88@gmail.com>
On gen11, instead of using a PLN instruction, we convert
FS_OPCODE_LINTERP to 2 or 4 multiply adds. That is done in the
fs_generator code.
This patch adds a lowering pass that does the same thing at the
fs_visitor. It also drops the usage of NF types, since we don't need the
extra precision and it lets us skip the accumulator. With all that, some
optimizations will still be run on the generated code, and we should get
better scheduling.
v2: Update comment about saturation and conditional mod (Matt)
Reviewed-by: Matt Turner <mattst88@gmail.com>
Move the scalar-region conversion from the IR to the generator, so it
doesn't affect the Gen11 path. We need the non-scalar regioning
for a later lowering pass that we are adding.
v2: Better commit message (Matt)
Reviewed-by: Matt Turner <mattst88@gmail.com>
Otherwise it could propagate the saturation from a SIMD16 instruction
into a SIMD8 instruction. With that, only part of the destination
register, which is the source of the move with saturation, would have
been updated.
Reviewed-by: Matt Turner <mattst88@gmail.com>
I left code indented one level too far in the previous commit to make
the diff easier to review. Drop that extra level now.
Fixes: 6981069fc8 i965: Ignore uniform storage for samplers or images, use binding info
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
gl_nir_lower_samplers_as_deref creates new top level sampler and image
uniforms which have been split from structure uniforms. i965 assumed
that it could walk through gl_uniform_storage slots by starting at
var->data.location and walking forward based on a simple slot count.
This assumed that structure types were walked in a particular order.
With samplers and images split out of structures, it becomes impossible
to assign meaningful locations. Consider:
struct S {
sampler2D a;
sampler2D b;
} s[2];
The gl_uniform_storage locations for these follow this map:
0 => a[0], 1 => b[0], 2 => a[1], 3 => b[1].
But the new split variables look like:
sampler2D lowered_a[2];
sampler2D lowered_b[2];
and there is no way to know that there's effectively a stride to get to
the location for successive elements of a[] or b[]. So, working with
location becomes effectively impossible.
Ultimately, the point of looking at uniform storage was to pull out the
bindings from the opaque index fields. gl_nir_lower_samplers_as_derefs
can obtain this information while doing the splitting, however, and sets
up var->data.binding to have the desired values.
We move gl_nir_lower_samplers before brw_nir_lower_image_load_store so
gl_nir_lower_samplers_as_derefs has the opportunity to set proper image
bindings. Then, we make the uniform handling code skip sampler(-array)
variables, and handle image param setup based on var->data.binding.
Fixes Piglit tests/spec/glsl-1.10/execution/samplers/uniform-struct,
this time without regressing dEQP-GLES2.functional.uniform_api.random.3.
Fixes: f003859f97 nir: Make gl_nir_lower_samplers use gl_nir_lower_samplers_as_deref
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This reverts commit 9e0c744f07, which
regressed dEQP-GLES2.functional.uniform_api.random.3. It turns out
that the newly produced location is meaningless and impossible to
consume by drivers that want to look at gl_uniform_storage, so it's
probably better to leave it unset (0) than a number that looks usable.
Leave a tombstone^Wcomment to discourage the next person from making
the obvious looking fix.
See the next commit for a longer description of the problem.
This breaks tests/spec/glsl-1.10/execution/samplers/uniform-struct
on i965, which was originally fixed by the revert. The next commit
will fix it again.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Marek recently extended pipe->set_shader_buffers() to take an extra
writable_bitmask parameter, indicating which SSBOs are writable (some
may be bound read-only). We can use this to decide whether to set
EXEC_OBJECT_WRITE when pinning. Avoiding the write flag can save us
some cross-batch flushing if the SSBO is used for reading in both the
render and compute engines.
The LLVM project made some questionable decisions about defaults for
armv7 (e.g. they enable NEON that is not there on NVIDIA and Marvell
platforms).
On top of that, getHostCPUFeatures() doesn't disable missing machine
attributes. Finally, -neon alone is not sufficient to disable emission
of NEON instructions.
Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
Cc: <mesa-stable@lists.freedesktop.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
We have a macro for this now; no reason to hand-roll it for derefs.
While we're here, move the NIR_DEFINE_CAST for derefs down to where all
the other ones are.
Reviewed-by: Eric Anholt <eric@anholt.net>
A lot has happened in those two drivers since the 19.0 release and we
keep forgetting to update release notes. Time to bring everything up to
date again before 19.1 gets released.
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Only computeDerivativeGroupLinear is supported for now.
All crucible tests pass.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Commit ad98fbc217 ("intel/fs: Refactor code generation for nir_op_fsign
to its own function") criss-crossed with c2b8fb9a81 ("anv/device:
expose VK_KHR_shader_float16_int8 in gen8+"), and I was not paying
enough attention when I rebased. This adds back the float16 changes and
enables the optimization.
v2: Incorporate more changes from 19cd2f5deb and a8d8b1a139 that I
missed in the previous version.
Fixes: ad98fbc217 ("intel/fs: Refactor code generation for nir_op_fsign to its own function")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110474
Reviewed-by: Matt Turner <mattst88@gmail.com> [v1]
Currently only meson build supported is added for lima driver.
Add Android build support for lima.
Signed-off-by: Icenowy Zheng <icenowy@aosc.io>
Acked-by: Qiang Yu <yuq825@gmail.com>
For HW cursors, "cursor.pos" doesn't hold the current position of the
pointer, just the position of the last call to SetCursorPosition().
Skip the check against stale values and bump the d3dadapter9 drm version
to expose this change of behaviour.
Signed-off-by: Andre Heider <a.heider@gmail.com>
Reviewed-by: Axel Davy <davyaxel0@gmail.com>
Previously, we were storing the per-binding create info pointer in the
immutable_samplers field temporarily so that we can switch the order in
which we walk the loop. However, now that we have multiple arrays of
structs to walk, it makes more sense to store an index of some sort.
Because we want to leave immutable_samplers as NULL for undefined
bindings, we store index + 1 and then subtract one later.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
I missed this on the first go round. The bindingCount field of
VkDescriptorSetLayoutBindingFlagsCreateInfoEXT is allowed to be zero
which means the flags array is ignored.
Fixes: d6c9bd6e01 "anv: Put binding flags in descriptor set layouts"
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Mali attribute buffers have to be 64-byte aligned. However, Gallium
enforces no such requirement; for unaligned buffers, we were previously
forced to create a shadow copy (slow!). To prevent this, we instead use
the offset buffer's address with the lower bits masked off, and then
add those masked off bits to the src_offset. Proof of correctness
included, possibly for the opportunity to say "QED" unironically.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This (fairly large) patch continues work surrounding the panfrost_job
abstraction to improve job lifetime management. In particular, we add
infrastructure to track which BOs are used by a particular job
(currently limited to the vertex buffer BOs), to reference count these
BOs, and to automatically manage the BOs memory based on the reference
count. This set of changes serves as a code cleanup, as a way of future
proofing for allowing flushing BOs, and immediately as a bugfix to
work around the missing reference counting for vertex buffer BOs.
Meanwhile, there are a few cleanups to vertex buffer handling code
itself, so in the short-term, this allows us to remove the costly VBO
staging workaround, since this patch addresses the underlying causes.
v2: Use pipe_reference for BO reference counting, rather than managing
it ourselves. Don't duplicate hash-table key removal. Fix vertex buffer
counting.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Now that everything is in place to do bindless for all resource types
except input attachments and UBOs, VK_EXT_descriptor_indexing is
"trivial".
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This commit changes anv to put bindless handles and sampler pointers
into the descriptor buffer and use those instead of bindful when we run
out of binding table space. This "spilling" of descriptors allows to to
advertise an almost unbounded number of images and samplers.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Instead of setting it manually, call the helper. When setting
descriptor sets becomes more complicated than just setting some struct
values, this will keep immutable sampler handling correct.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
We add two new texture sources for bindless surface and sampler handles.
Bindless surface handles are expected to be pre-shifted so that the
20-bit surface state table index is in the top 20 bits of the 32-bit
handle. This lets us avoid any extra shifts in the shader. Bindless
sampler handles are 32-byte aligned byte offsets from general state base
address. We use 32-byte aligned instead of 16-byte aligned to avoid
having to use more indirect messages than needed. It means we can't
tightly pack samplers but that's probably not a big deal.
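A sketch of the handle layout described above (helper names are
hypothetical): the 20-bit surface state index sits in bits [31:12] of
the 32-bit handle, and sampler handles are 32-byte-aligned offsets.
#include <stdint.h>
#include <assert.h>
static inline uint32_t
bindless_surface_handle(uint32_t surface_state_index)
{
   assert(surface_state_index < (1u << 20));
   return surface_state_index << 12;  /* index lives in the top 20 bits */
}
static inline uint32_t
bindless_sampler_handle(uint32_t offset_from_general_state_base)
{
   assert((offset_from_general_state_base & 31) == 0);  /* 32B aligned */
   return offset_from_general_state_base;
}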
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
When we have a bindless sampler, we need an instruction header. Even in
SIMD8, this pushes the instruction over the sampler message size maximum
of 11 registers. Instead, we have to lower TXD to TXL.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This commit adds a new way for ANV to do SSBO bindings by just passing a
GPU address in through the descriptor buffer and using the A64 messages
to access the GPU address directly. This means that our variable
pointers are now "real" pointers instead of a vec2(BTI, offset) pair.
This carries a few advantages:
1. It lets us support a virtually unbounded number of SSBO bindings.
2. It lets us implement VK_KHR_shader_atomic_int64 which we couldn't
implement before because those atomic messages are only available
in the bindless A64 form.
3. It's way better than messing around with bindless handles for SSBOs
which is the only other option for VK_EXT_descriptor_indexing.
4. It's more future looking, maybe? At the least, this is what NVIDIA
does (they don't have binding based SSBOs at all). This doesn't a
priori mean it's better, it just means it's probably not terrible.
The big disadvantage, of course, is that we have to start doing our own
bounds checking for robustBufferAccess again and have to push in dynamic
offsets.
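Illustrative only (not anv's actual descriptor layout): the descriptor
buffer entry for an SSBO would then carry a raw 64-bit GPU address plus
a range, so the shader can do its own bounds checking when
robustBufferAccess is enabled.
#include <stdint.h>
struct ssbo_address_descriptor {
   uint64_t address;  /* base GPU virtual address of the buffer */
   uint32_t range;    /* size in bytes, for manual bounds checking */
   uint32_t reserved;
};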
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
In order to avoid the potential overhead of A64 operations on all SSBO
ops, we look for those SSBO ops where we can get to the descriptor set
from the SSBO access operation and lower those to a binding-table
approach. When robustBufferAccess is enabled, this lets the hardware do
the bounds checking for us. It also avoids some potentially expensive
64-bit integer calculations.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This is more descriptive and a bit nicer than checking for gen >= 8 &&
use_softpin everywhere.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
We're about to start doing 64-bit pointer calculations in ANV. They
will get applied after brw_preprocess_nir which is where we currently do
64-bit integer arithmetic lowering. Because we're adding 64-bit integer
arithmetic after the initial lowering has happened, we need to lower
again.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
If the number of surfaces or samplers exceeds what we can put in a
table, we will want to spill out to bindless. There is no bindless
support yet but this gets us the basic framework that will be used by
later commits.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This commit just sorts the bindings by how often they're used vs the
array size of the binding. This will let us make more nuanced decisions
about what goes in the binding table vs. what to make bindless.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This also fixes a bug where we mis-calculate maximum binding table sizes
and may return true in vkGetDescriptorSetLayoutSupport even for sets too
large to fit in a binding table.
Fixes: ddc4069122 "anv: Implement VK_KHR_maintenance3"
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This is really where they belong; not push constants. The one downside
here is that we can't push them anymore for compute shaders. However,
that's a general problem and we should figure out how to push descriptor
sets for compute shaders. This lets us bump MAX_IMAGES to 64 on BDW and
earlier platforms because we no longer have to worry about push constant
overhead limits.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
We spend a lot of time in the driver adding things to hash sets to track
residency. The reality is that a properly built Vulkan app uses large
memory objects and sub-allocates from them. In a typical frame, most of
if not all of those allocations are going to be resident for the entire
frame so we're really not saving ourselves much by tracking fine-grained
residency. Just throwing everything in the validation list does make it
a little bit more expensive inside the kernel to walk the list and
ensure that all our VA is in order. However, without relocations, the
overhead of that is pretty small.
If we ever do run into a memory pressure situation where the fine-
grained residency could even potentially help, we would likely be
swapping one page out to make room for another within the draw call and
performance is totally lost at that point. We're better off swapping
out other apps and just letting ours run a whole frame.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Remove unused functions and mark unhandled default case with
unreachable.
Signed-off-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
This suppresses warning about calling a non-virtual destructor in a
non-final class with virtual functions:
src/compiler/glsl/ast.h:53:4: warning: destructor called on non-final 'ast_node' that has virtual functions but non-virtual destructor [-Wdelete-non-virtual-dtor]
DECLARE_LINEAR_ZALLOC_CXX_OPERATORS(ast_node);
Signed-off-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Compiler warns about overflow when assigning UINT64_MAX to something
smaller than a uint64_t:
src/compiler/nir/nir_constant_expressions.c:16909:50: warning: implicit conversion from 'unsigned long long' to 'uint1_t' (aka 'unsigned char') changes value from 18446744073709551615 to 255 [-Wconstant-conversion]
uint1_t dst = (src0 + src1) < src0 ? UINT64_MAX : (src0 + src1);
~~~ ^~~~~~~~~~
Shift UINT64_MAX down to the appropriate maximum value for the type
being assigned to.
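A sketch of the described approach: shift UINT64_MAX down so the
constant fits the destination bit size (the helper name here is
illustrative).
#include <stdint.h>
static inline uint64_t
uint_max_for_bit_size(unsigned bit_size)   /* e.g. 1, 8, 16, 32, 64 */
{
   return UINT64_MAX >> (64 - bit_size);
}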
Signed-off-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
If we want to assert on found == true when the loop exits early, we
need to initialize it to false.
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
If the last graphics pipeline bound to the command buffer has enough
space in its VS URB entries for Blorp then avoid reconfiguring the URB
partitions.
v2: s/0/MESA_SHADER_VERTEX/ (Caio)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
When looking at the dEQP nested_struct_array_dynamic_index_fragment code
after lowering, I was horrified at the amount of adding and multiplying by
0 we were doing. The builder _imm helpers handle that for you so that the
following optimization passes have less work to do. Plus, it's easier to
read.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
We were calculating the offset for the field within the struct, and just
dropping it on the floor. Fixes a regression in
KHR-GLES3.shaders.struct.local.nested_struct_array_dynamic_index_fragment
and a few of its friends since the scratch lowering commit.
Fixes: e8e159e9df ("nir/deref: Add helpers for getting offsets")
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The mali utgard pp doesn't support a sign instruction.
Use the nir lowering function for fsign to implement fsign in ppir.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
The mali utgard pp doesn't support a sign instruction.
In the ARM offline shader compiler, the sign function is implemented
using sub(gt(0.0, a), lt(0.0, a)).
This is a generic optimization, so implement it in the nir level when
lower_fsign is set, alongside the lowering for isign.
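A plain-C rendering of the lowering described above (a sketch, not the
actual NIR rule): sign(a) expressed as a subtraction of two comparisons,
so no dedicated sign instruction is needed.
static float
fsign_lowered(float a)
{
   /* (a > 0) - (a < 0): yields 1.0, -1.0, or 0.0 (also 0.0 for NaN). */
   return (float)(a > 0.0f) - (float)(a < 0.0f);
}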
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Basically just reserve the memory in the descriptor sets.
On the shader side we construct a buffer descriptor, since
AFAIU VGPR indexing on 32-bit pointers in LLVM is still broken.
This fully supports update after bind and variable descriptor set
sizes. However, the limits are somewhat arbitrary and are mostly
about finding a reasonable division of a 2 GiB max memory size over
the set.
v2: - rebased on top of master (Samuel)
- remove the loading resources rework (Samuel)
- only load UBO descriptors if it's a pointer (Samuel)
- use LLVMBuildPtrToInt to avoid IR failures (Samuel)
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl> (v2)
This is actually fixed now.
This change requires LLVM r358579. Make sure to have it in
your tree, otherwise the following piglit will hang:
tests/spec/arb_shader_storage_buffer_object/execution/ssbo-atomicCompSwap-int.shader_test
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
They are buggy with older LLVM versions, see r358579.
Fixes: 78c551aca1 ("ac/nir: use new LLVM 8 intrinsics for SSBO atomics except cmpswap")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
They are buggy with LLVM 8 because they weren't marked as source
of divergence, see r358579.
Fixes: dd0172e865 ("radv: Use structured intrinsics instead of indexing workaround for GFX9.")"
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
We empty the cache sets when flushing the batch, at which point we need
to add any framebuffer related BOs even though the bindings haven't
changed. So, we now do the cache set tracking unconditionally.
For now, we continue skipping resolve work based on the same conditions
in the predraw functions - the thinking is if we didn't trigger
resolves, there's nothing to update here. Time will tell if this works.
Partly reverts commit 365886ebe1, and
fixes Unigine Valley rendering on Gen9+. Drops drawoverhead scores
by about 10-12%.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110353
The current register allocator has a concept of "spill benefit" which is
based on the number of nodes with which a given node interferes. The
idea is that you want to spill stuff with high interference because
those are the most likely registers to help when spilling. However,
this fails to take into account the length of the live range so the
allocator frequently picks "cheap" (not many uses) registers which are
actually very short lived and so spilling them doesn't help with the
pressure situation.
This commit takes into account the length of the live range to make
long-lived registers more likely to get spilled than short-lived ones.
This encourages the spill chooser to choose slightly larger registers
which will affect a larger area of the program and hopefully we have to
spill fewer of them to get the same reduction in over-all register
pressure.
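A minimal sketch of the idea (hypothetical weighting, not the actual
allocator code): fold the live-range length into the spill benefit so
long-lived registers look more attractive to spill than short-lived
ones.
static float
spill_benefit(unsigned interference_count, unsigned live_range_length,
              float spill_cost)
{
   /* Weight interference by how long the value stays live; divide by
    * the spill cost (roughly, the number of uses) so cheap, long-lived
    * values win. */
   return (float)interference_count * (float)live_range_length /
          (spill_cost + 1.0f);
}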
Shader-db results on Kaby Lake:
total spills in shared programs: 23664 -> 12050 (-49.08%)
spills in affected programs: 19243 -> 7629 (-60.35%)
helped: 296
HURT: 8
total fills in shared programs: 32028 -> 25139 (-21.51%)
fills in affected programs: 20378 -> 13489 (-33.81%)
helped: 295
HURT: 16
Of course, most of that is in Deus Ex...
Shader-db results on Kaby Lake (without Deus Ex):
total spills in shared programs: 6479 -> 5834 (-9.96%)
spills in affected programs: 3231 -> 2586 (-19.96%)
helped: 40
HURT: 4
total fills in shared programs: 17165 -> 17099 (-0.38%)
fills in affected programs: 6951 -> 6885 (-0.95%)
helped: 40
HURT: 7
Even without Deus Ex, the spill help is pretty respectable. The worst
hurt shaders were one compute shader in Aztec Ruins and one fragment
shader in KSP that were each hurt by around 13% in fills and 9% in spills.
VkPipeline-db results on Kaby Lake:
total spills in shared programs: 9149 -> 8069 (-11.80%)
spills in affected programs: 5197 -> 4117 (-20.78%)
helped: 27
HURT: 16
total fills in shared programs: 26390 -> 25477 (-3.46%)
fills in affected programs: 12662 -> 11749 (-7.21%)
helped: 24
HURT: 22
The Vulkan results were decidedly more mixed but we don't have nearly as
many apps in that database yet.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Otherwise, there's artifacts when running Unigine Valley with
protocol version 2.
We can get away with not waiting for most buffers, but let's
be conservative.
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Reviewed-By: Gert Wollny <gert.wollny@collabora.com>
Reviewed-By: Piotr Rak <p.rak@samsung.com>
The only tricky part is with protocol 0 we can either have
a display target or resource backing store. With protocol
2 we can have both. Make the map/unmap functions only deal
with the resource backing store.
v2: Handle MSAA texture case.
v3: spelling
v4: Fix dangling else (@prak)
v5: mmap --> os_mmap (@prak) + added comments (@gerddie)
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Reviewed-By: Gert Wollny <gert.wollny@collabora.com>
Reviewed-By: Piotr Rak <p.rak@samsung.com>
This just moves everything to a helper function -- "flush_front_buffer"
will be used later.
virgl_vtest_resource_map / virgl_vtest_resource_unmap already take
care to map the display target.
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Reviewed-By: Gert Wollny <gert.wollny@collabora.com>
Reviewed-By: Piotr Rak <p.rak@samsung.com>
We really need to wait under certain circumstances, or we can end
up writing to memory the same time the host is reading.
Partial revert of d6dc68 ("virgl: use uint16_t mask instead of separate booleans").
Test cases:
- dEQP-GLES31.functional.texture.texture_buffer.render_modify.as_vertex_array.bufferdata
on vtest protocol version 2
- Flickering during Alien Isolation
Fixes: d6dc68 ("virgl: use uint16_t mask instead of separate booleans")
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Reviewed-By: Gert Wollny <gert.wollny@collabora.com>
Reviewed-By: Piotr Rak <p.rak@samsung.com>
In what might be my first case of finding a divergence between hardware
and simpenrose for v3d 4.x, it seems that despite what the spec claims,
you actually need specific values in the TYPE field for atomic ops.
Fixes dEQP-GLES31.functional.*.compswap.*
All of the affected shaders are in Mad Max. I noticed this while
looking at some other things. I tried a couple similar patterns, but
the effect on cycles was generally negative. It may be worth revisiting
this later.
v2: Rebase on 1-bit Boolean changes.
All Gen7+ platforms had similar results. (Skylake shown)
total instructions in shared programs: 15282073 -> 15282053 (<.01%)
instructions in affected programs: 1192 -> 1172 (-1.68%)
helped: 14
HURT: 0
helped stats (abs) min: 1 max: 2 x̄: 1.43 x̃: 1
helped stats (rel) min: 1.16% max: 2.17% x̄: 1.65% x̃: 1.39%
95% mean confidence interval for instructions value: -1.73 -1.13
95% mean confidence interval for instructions %-change: -1.91% -1.38%
Instructions are helped.
total cycles in shared programs: 372595954 -> 372594532 (<.01%)
cycles in affected programs: 11477 -> 10055 (-12.39%)
helped: 14
HURT: 0
helped stats (abs) min: 76 max: 122 x̄: 101.57 x̃: 104
helped stats (rel) min: 7.76% max: 15.62% x̄: 12.94% x̃: 14.78%
95% mean confidence interval for cycles value: -111.05 -92.09
95% mean confidence interval for cycles %-change: -14.90% -10.98%
Cycles are helped.
No changes on any Gen6 or earlier platforms.
Reviewed-by: Matt Turner <mattst88@gmail.com>
All of the affected shaders are in Mad Max. The inner part of the
pattern is itself an open-coded sign(a). I tried using that as a
pattern, but the results were not good. A bunch of shaders were helped
for instructions, but overall cycles, spill, and fills were hurt.
v2: Rebase on 1-bit Boolean changes.
v3: Fix order of copysign() parameters in comments and commit message.
Noticed by Matt.
All Gen7+ platforms had similar results. (Skylake shown)
total instructions in shared programs: 15282141 -> 15282073 (<.01%)
instructions in affected programs: 6106 -> 6038 (-1.11%)
helped: 17
HURT: 0
helped stats (abs) min: 4 max: 4 x̄: 4.00 x̃: 4
helped stats (rel) min: 1.02% max: 2.20% x̄: 1.15% x̃: 1.06%
95% mean confidence interval for instructions value: -4.00 -4.00
95% mean confidence interval for instructions %-change: -1.30% -1.00%
Instructions are helped.
total cycles in shared programs: 372597886 -> 372595954 (<.01%)
cycles in affected programs: 32701 -> 30769 (-5.91%)
helped: 17
HURT: 0
helped stats (abs) min: 6 max: 216 x̄: 113.65 x̃: 118
helped stats (rel) min: 0.40% max: 21.86% x̄: 6.20% x̃: 5.83%
95% mean confidence interval for cycles value: -152.84 -74.45
95% mean confidence interval for cycles %-change: -8.89% -3.51%
Cycles are helped.
No changes on any Gen6 or earlier platforms.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Normally fsign generates -1, 0, or +1. The new scale factor, S, causes
fsign to generate -S, 0, or +S.
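A plain-C equivalent of the scaled result described above (illustrative
only): fsign(x) * S collapses into a single operation producing -S, 0,
or +S.
static float
fsign_scaled(float x, float s)
{
   return (x > 0.0f ? s : 0.0f) - (x < 0.0f ? s : 0.0f);
}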
v2: Rebase on v2 changes in previous commit.
v3: Rebase on 85c35885b3 ("nir: Rework nir_src_as_alu_instr to not take
a pointer").
Reviewed-by: Matt Turner <mattst88@gmail.com> [v2]
We test the condition, declare a few variables, then test the exact
same condition again. Let's not do that.
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Other nir_src_as_* functions just take a nir_src. It's not that much
more memory copying and the constness preserving really isn't worth the
cognitive dissonance.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This 3d performance workaround was initially put in the kernel but the
media driver requires different settings so the register has been
whitelisted in i915 [1] and userspace drivers are left initializing it as
they wish.
[1] : https://patchwork.freedesktop.org/series/59494/
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
This 3d performance workaround was initially put in the kernel but the
media driver requires different settings so the register has been
whitelisted in i915 [1] and userspace drivers are left initializing it as
they wish.
[1] : https://patchwork.freedesktop.org/series/59494/
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
This 3d performance workaround was initially put in the kernel but the
media driver requires different settings so the register has been
whitelisted in i915 [1] and userspace drivers are left initializing it as
they wish.
[1] : https://patchwork.freedesktop.org/series/59494/
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
v2 (Jason):
- Merge shaderFloat16 and shaderInt8 enablement into a single patch.
- Merge extension enable.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net> (v1)
v2:
- Merge Float16 and Int8 capabilities into a single patch (Jason)
- Merged patch that enabled SPIR-V front-end checks for these caps
(except for Int8, which was already merged)
v3:
- Keep capabilities sorted (Jason)
v4:
- SpvCapabilityFloat16 support already added in master (Juan)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net> (v1)
v2:
- Adapted unit tests to make them consistent with the changes done
to the validation of half-float conversions.
v3 (Curro):
- Check all the accumulators
- Constify declarations
- Do not check src1 type in single-source instructions.
- Check for all instructions that read accumulator (either implicitly or
explicitly)
- Check restrictions in src1 too.
- Merge conditional block
- Add invalid test case.
v4 (Curro):
- Assert on 3-src instructions, as they are not validated.
- Get rid of types_are_mixed_float(), as we know the instruction is mixed
float at that point.
- Remove conditions from not verified case.
- Fix brackets on conditional.
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
v2:
- Add some tests with UB type too (Jason)
v3:
- consider implicit conversions from 2src instructions too (Curro).
v4:
- Do not check src1 type in single-source instructions (Curro).
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net> (v2)
v2:
- Consider implicit conversions in 2-src instructions too (Curro)
- For restrictions that involve destination stride requirements
only validate them for Align1, since Align16 always requires
packed data.
- Skip general rule for the dst/execution type size ratio for
mixed float instructions on CHV and SKL+, these have their own
set of rules that we'll be validated separately.
v3 (Curro):
- Do not check src1 type in single-source instructions.
- Check restriction on src1.
- Remove invalid test.
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
The section 'Execution Data Types' of 3D Media GPGPU volume, which
describes execution types, is exactly the same in BDW and SKL+.
Also, this section states that there is a single execution type, so it
makes sense that this is the wider of the two floating point types
involved in mixed float mode, which is what we do for SKL+ and CHV.
v2:
- Make sure we also account for the destination type in mixed mode (Curro).
Acked-by: Francisco Jerez <currojerez@riseup.net>
It is very likely that this optimization is never useful and we'll probably
just end up removing it, so let's not bother adding more cases to it for
now.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
NIR already has these and correctly considers exact/inexact qualification,
whereas the backend doesn't and can apply the optimizations where it
shouldn't. This happened to be the case in a handful of Tomb Raider shaders,
where NIR would skip the optimizations because of a precise qualification
but the backend would then (incorrectly) apply them anyway.
Besides this, considering that we are not emitting much math in the backend
these days it is unlikely that these optimizations are useful in general. A
shader-db run confirms that MAD and LRP optimizations, for example, were only
being triggered in cases where NIR would skip them due to precise
requirements, so in the near future we might want to remove more of these,
but for now we just remove the ones that are not completely correct.
Suggested-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
There are no 8-bit immediates, so assert in that case.
16-bit immediates are replicated in each word of a 32-bit immediate, so
we only need to check the lower 16-bits.
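A sketch of the checks this implies (simplified; the real helpers
operate on brw_reg): with the 16-bit value replicated in both halves of
the 32-bit immediate, only the low 16 bits need to be examined, and
half-float zero must also accept -0.0.
#include <stdint.h>
#include <stdbool.h>
static inline bool
hf_imm_is_zero(uint32_t ud)
{
   /* 0x8000 is -0.0 in half-float; ignore the sign bit. */
   return (ud & 0x7fff) == 0;
}
static inline bool
w_imm_is_negative_one(uint32_t ud)
{
   return (ud & 0xffff) == 0xffff;
}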
v2:
- Fix is_zero with half-float to consider -0 as well (Jason).
- Fix is_negative_one for word type.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
At the very least we need it to handle HF too, since we are doing
constant propagation for MAD and LRP, which relies on this pass
to promote the immediates to GRF in the end, but ideally
we want it to support even more types so we can take advantage
of it to improve register pressure in some scenarios.
v2 (Jason):
- Support 64-bit types too.
- Check if we need to set the half-float flag if the immediate already
existed.
- Multiply the size of the immediate by the width of the copy
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The hardware only allows a stride of 1 on a Byte destination for raw
byte MOV instructions. This is required even when the destination
is the NULL register.
Rather than making sure that we emit a proper NULL:B destination
every time we need one, just fix it at emission time.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Now that we have the regioning lowering pass we can just put all of these
opcodes together in a single block and we can just assert on the few cases
of conversion instructions that are not supported in hardware and that should
be lowered in brw_nir_lower_conversions.
The only cases we still handle separately are the conversions from float
to half-float, since the rounding variants would need to fall through and we
are already doing this for boolean opcodes (since they need to negate), plus
there is also a large comment about these opcodes that we probably want to
keep so it is just easier to keep these separate.
Suggested-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This function is used in two different scenarios that for 32-bit
instructions are the same, but for 16-bit instructions are not.
One scenario is that in which we are working at a SIMD8 register
level and we need to know if a register is fully defined or written.
This is useful, for example, in the context of liveness analysis or
register allocation, where we work with units of registers.
The other scenario is that in which we want to know if an instruction
is writing a full scalar component or just some subset of it. This is
useful, for example, in the context of some optimization passes
like copy propagation.
For 32-bit instructions (or larger), a SIMD8 dispatch will always write
at least a full SIMD8 register (32B) if the write is not partial. The
function is_partial_write() checks this to determine if we have a partial
write. However, when we deal with 16-bit instructions, that logic disables
some optimizations that should be safe. For example, a SIMD8 16-bit MOV will
only update half of a SIMD register, but it is still a complete write of the
variable for a SIMD8 dispatch, so we should not prevent copy propagation in
this scenario because we don't write all 32 bytes in the SIMD register
or because the write starts at offset 16B (where we pack components Y or
W of 16-bit vectors).
This is a problem for SIMD8 executions (VS, TCS, TES, GS) of 16-bit
instructions, which lose a number of optimizations because of this, most
important of which is copy-propagation.
This patch splits is_partial_write() into is_partial_reg_write(), which
represents the current is_partial_write(), useful for things like
liveness analysis, and is_partial_var_write(), which considers
the dispatch size to check if we are writing a full variable (rather
than a full register) to decide if the write is partial or not, which
is what we really want in many optimization passes.
Then the patch goes on and rewrites all uses of is_partial_write() to use
one or the other version. Specifically, we use is_partial_var_write()
in the following places: copy propagation, cmod propagation, common
subexpression elimination, saturate propagation and sel peephole.
Notice that the semantics of is_partial_var_write() exactly match the
current implementation of is_partial_write() for anything that is
32-bit or larger, so no changes are expected for 32-bit instructions.
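A simplified sketch of the split described above (not the real fs_inst
methods): a register-level write is partial unless it covers whole 32B
registers, while a variable-level write is partial only if it doesn't
cover every channel of the dispatch, regardless of the type size.
#include <stdbool.h>
#define REG_SIZE 32
struct write_info {
   bool predicated;          /* predicated writes are always partial */
   unsigned dst_offset;      /* byte offset of the write within the VGRF */
   unsigned bytes_written;   /* total bytes written by the instruction */
   unsigned exec_size;       /* channels written by this instruction */
   unsigned dispatch_width;  /* channels of the whole dispatch */
};
static bool
is_partial_reg_write(const struct write_info *w)
{
   return w->predicated ||
          w->bytes_written % REG_SIZE != 0 ||
          w->dst_offset % REG_SIZE != 0;
}
static bool
is_partial_var_write(const struct write_info *w)
{
   return w->predicated || w->exec_size != w->dispatch_width;
}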
Tested against ~5000 tests involving 16-bit instructions in CTS produced
the following changes in instruction counts:
        | Patched | Master  |    %    |
========|=========|=========|=========|
SIMD8   | 621,900 | 706,721 | -12.00% |
========|=========|=========|=========|
SIMD16  |  93,252 |  93,252 |   0.00% |
========|=========|=========|=========|
As expected, the change only affects SIMD8 dispatches.
Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com>
Empirical testing shows that gen8 has a bug where MAD instructions with
a half-float source starting at a non-zero offset fail to execute
properly.
This scenario usually happened in SIMD8 executions, where we used to
pack vector components Y and W in the second half of SIMD registers
(therefore, with a 16B offset). It looks like we are not currently doing
this any more but this would handle the situation properly if we ever
happen to produce code like this again.
v2 (Jason):
- Move this workaround to the lower_regioning pass as an additional case
to has_invalid_src_region()
- Do not apply the workaround if the stride of the source operand is 0,
testing suggests the problem doesn't exist in that case.
v3 (Jason):
- We want offset % REG_SIZE > 0, not just offset > 0
- Use a helper to compute the offset
Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com> (v1)
Broadwell has restrictions that apply to Align16 half-float that
make the Align16 implementation of this invalid for this platform.
Use the gen11 path for this instead, which uses Align1 mode.
The restriction is not present in cherryview, gen9 or gen10, where
the Align16 implementation seems to work just fine.
v2:
- Rework the comment in the code, move the PRM citation from the
commit message to the comment in the code (Matt)
- Cherryview isn't affected, only Broadwell (Matt)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net> (v1)
Reviewed-by: Matt Turner <mattst88@gmail.com>
We were assuming 32-bit elements. Also, In SIMD8 we pack 2 vector components
in a single SIMD register, so for example, component Y of a 16-bit vec2
starts at byte offset 16B. This means that when we compute the offset of
the elements to be differentiated we should not stomp whatever base offset we
have, but instead add to it.
v2
- Use byte_offset() helper (Jason)
- Merge the fix for SIMD8: using byte_offset() fixes that too.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net> (v1)
Reviewed-by: Matt Turner <mattst88@gmail.com>
Source0 and Destination extract the floating-point precision automatically
from the SrcType and DstType instruction fields respectively when they are
set to types :F or :HF. For Source1 and Source2 operands, we use the new
1-bit fields Src1Type and Src2Type, where 0 means normal precision and 1
means half-precision. Since we always use the type of the destination for
all operands when we emit 3-source instructions, we only need to set Src1Type
and Src2Type to 1 when we are emitting a half-precision instruction.
v2:
- Set the bit separately for each source based on its type so we can
do mixed floating-point mode in the future (Topi).
v3:
- Use regular citation style for the comment referencing the PRM (Matt).
- Decided not to add asserts in the emission code to check that only
mixed HF/F types are used since such checks would break negative tests
for brw_eu_validate.c (Matt)
Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Matt Turner <mattst88@gmail.com>
We are now using these bits, so don't assert that they are not set. In gen8,
if these bits are set, compaction is not possible. On gen9 and CHV platforms
set_3src_control_index() checks these bits (and others) against a table to
validate if the particular bit combination is eligible for compaction or not.
v2
- Add more detail in the commit message explaining the situation for SKL+
and CHV (Jason)
Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Matt Turner <mattst88@gmail.com>
This is available since gen8.
v2: restore previously existing assertion.
v3: don't use separate tables for gen7 and gen8, just assert that we
don't use half-float before gen8 (Matt)
Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com> (v1)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The original SrcType is a 3-bit field that takes a subset of the types
supported for the hardware for 3-source instructions. Since gen8,
when the half-float type was added, 3-source floating point operations
can use mixed precision mode, where not all the operands have the
same floating-point precision. While the precision for the first operand
is taken from the type in SrcType, the bits in Src1Type (bit 36) and
Src2Type (bit 35) define the precision for the other operands
(0: normal precision, 1: half precision).
Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
v2:
- make 16-bit be its own separate case (Jason)
v3:
- Drop the result_int temporary (Jason)
Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com> (v1)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Extended math with half-float operands is only supported since gen9,
but it is limited to SIMD8. In gen8 we lower it to 32-bit.
v2: quashed together the following patches (Jason):
- intel/compiler: allow extended math functions with HF operands
- intel/compiler: lower 16-bit extended math to 32-bit prior to gen9
- intel/compiler: extended Math is limited to SIMD8 on half-float
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com>
(allow extended math functions with HF operands,
extended Math is limited to SIMD8 on half-float)
There are some hardware restrictions that brw_nir_lower_conversions should
have taken care of before we get here.
v2:
- rebased on top of regioning lowering pass
Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com> (v1)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Since we handle booleans as integers this makes more sense.
v2:
- rebased to incorporate new boolean conversion opcodes
v3:
- rebased on top regioning lowering pass
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net> (v1)
Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com> (v2)
Going forward having these split is a bit more convenient since these two
groups have different restrictions.
v2:
- Rebased on top of new regioning lowering pass.
Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com> (v1)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Some conversions are not directly supported in hardware and need to be
split into two conversion instructions going through an intermediary type.
Doing this at the NIR level reduces complexity in the backend a bit.
v2:
- Consider fp16 rounding conversion opcodes
- Properly handle swizzles on conversion sources.
v3
- Run the pass earlier, right after nir_opt_algebraic_late (Jason)
- NIR alu output types already have the bit-size (Jason)
- Use 'is_conversion' to identify conversion operations (Jason)
v4:
- Be careful about the intermediate types we use so we don't lose
range and avoid incorrect rounding semantics (Jason)
Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com> (v1)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Use the raw version (ie. IDXEN=0) because vindex is unused.
Use the old intrinsic for compare&swap because the new one
hangs the GPU for some reasons.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
llvm 8 removed saturated unsigned add / sub x86 sse2 intrinsics, and
now llvm 9 removed the signed versions as well - they were proposed for
removal earlier, but the pattern to recognize those was very complex,
so it wasn't done then. However, instead of these arch-specific
intrinsics, there's now arch-independent intrinsics for saturated
add / sub, both for signed and unsigned, so use these.
They should have only advantages (work with arbitrary vector sizes,
optimal code for all archs), although I don't know how well they work
in practice for other archs (at least for x86 they do the right thing).
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110454
Reviewed-by: Brian Paul <brianp@vmware.com>
This fixes a race condition where anv_gen_files are executed before
genxml files, which causes a build failure
v2: add dependency on idep_genxml (Lionel)
Fixes: d1992255bb
("meson: Add build Intel "anv" vulkan driver")
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
The Gen10+ expected format adds an additional counter which we can't
disclose yet. We can still make the size of the expected query result
match.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Mark Janes <mark.a.janes@intel.com>
We would like to reuse performance query metrics in other APIs. Let's
make the query code that processes raw counters into human-readable
values API-agnostic.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Mark Janes <mark.a.janes@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This blit can fail, but this is not new; in the old version we
didn't even try to blit in this case. So let's just document the
limitation for now, and leave this for another day.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
When running on OpenGL ES, we can't just map any format for reading,
because of limitations on glReadPixels. So let's fall back to the
blit code-path, and translate the pixels to the correct format in the
end.
This fixes the remaining failures of KHR-GL32.packed_pixels.* apart
from the sRGB tests.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
Without this, we can't for instance convert between r8_sint and
r8g8b8a8_sint. But that's pretty useful, so let's support it as well.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
We currently don't support writing to resources that use a temporary
staging-resource to resolve the pixels. If a write-bit was set, we
forgot to blit back to the original resource and instead tried to
update the wrong resource, which lacks backing-storage. The end result
was that nothing useful happened.
This approach also fixes a few smaller bugs, like using the wrong box
(without x y and z zeroed out), which means a partial update of a
multisampled texture could result in the wrong part of the texture being
updated.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
It's hard to read the code that decides if we want to queue up an unmap
or destroy the transfer right away. So let's make it a bit simpler, by
setting a bool in case we want to queue it.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
This isn't the temporary resource itself, it's the template that we'll
create the resource from. So let's name it appropriately.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
This is a port of Jason's 8379bff6c4
from i965 to iris. We can't find anything relevant in the documentation
and no one we've talked to has been able to help us pin down a solution.
Unfortunately, we have to put the hack in both iris_blit() and
iris_copy_region(). st/mesa's CopyImage() implementation sometimes
chooses to use pipe->blit() instead of pipe->resource_copy_region().
For blits, we only do the hack if the blit source format doesn't match
the underlying resource (i.e. it's reinterpreting the bits). Hopefully
this should not be too common.
We were failing to set up payload[1] for use by LocalInvocationIndex/ID
and shared variable accesses if gl_WorkGroupID/gl_GlobalInvocationID
wasn't used (possibly because you only have one workgroup). You're always
going to use payload[1], and payload[0] is common enough and we have DCE
in the backend to clean it up if it happens to not be used.
Fixes assertion failures in the CTS since Karol's cleanup when NIR started
noticing that we were reading an invalid component.
Fixes: 5450f1c9fb ("v3d: prefer using nir_src_comp_as_int over nir_src_as_const_value")
v2: When available, include the opcode name too. (Karol)
v3: Use more to_string helpers. (Karol)
Include the wrong bit_size in those failures.
Include the capability number in spv_check_supported.
Provide vtn_fail_with_* macros to avoid noise in the call sites.
v4: Provide macros only for opcode and decoration, which have enough
usages to justify them. (Jason)
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Also, use a set to identify repeated values. The previous arrangement
worked when the repetitions were one after another, but in some of the
new cases they are not.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
This patch changes the GL_VENDOR string from "Mesa Project" to "Intel".
This makes GLX_MESA_query_renderer report "Vendor: Intel (0x8086)"
instead of "Vendor: Mesa Project (0x8086)" which is arguably wrong.
We now also use a consistent vendor string across Windows and Linux.
It also prepends "Mesa" to the GL_RENDERER string, both to credit the
community and have a distinguishing mark between the two drivers. We
drop "DRI" compared to i965, as it's not really that important.
Improves performance in Portal by 1.8x. Iris is now 3.86% faster
than i965 at the portal-d1.dem timedemo on my Kabylake laptop. One
change is that Portal selects the MapBufferRange path based on the
vendor string, and iris's BufferSubData path is still missing the
storage invalidation optimization.
This takes the stupid simplest and most reliable approach to reducing
redundancy that I could come up with: Just use the struct declaration
as the cache key. This cuts the size of the generated C file to about
half and takes about 50 KiB off the .data section.
size before (release build):
text data bss dec hex filename
5363833 336880 13584 5714297 573179 _install/lib64/libvulkan_intel.so
size after (release build):
text data bss dec hex filename
5229017 285264 13584 5527865 545939 _install/lib64/libvulkan_intel.so
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Order of operations is important, otherwise we'll find the program we
just uploaded as the "old" compile and get confused why nothing is
different between the two keys.
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
I was lazy earlier and hadn't bothered typing / refactoring this.
Now I'm hitting some extra recompiles and would like to see why.
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
The i965 driver has a bunch of code to compare two sets of program keys
and print out the differences. This can be useful for debugging why a
shader needed to be recompiled on the fly due to non-orthogonal state
dependencies. anv doesn't do recompiles, so we didn't need to share
this in the past - but I'd like to use it in iris.
This moves the bulk of the code to the compiler where it can be reused.
To make that possible, we need to decouple it from i965 - we can't get
at the brw program cache directly, nor use brw_context to print things.
Instead, we use compiler->shader_perf_log(), and simply pass in keys.
We put all of this debugging code in brw_debug_recompile.c, and only
export a single function, for simplicity. I also tidied the code a
bit while moving it, now that it all lives in one file.
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
This patch will add support for frame_cropping when the input size is not
matched with aligned size. Currently vaapi driver ignores frame cropping
values provided by client. This change will update SPS nalu with proper
cropping values.
Signed-off-by: Satyajit Sahu <satyajit.sahu@amd.com>
Reviewed-by: Leo Liu <leo.liu@amd.com>
This patch will add support for frame_cropping when the input size is not
matched with aligned size. Currently vaapi driver ignores frame cropping
values provided by client. This change will update SPS nalu with proper
cropping values.
v2: Moving default crop setting to else when enc_frame_cropping_flag is not set.
Signed-off-by: Satyajit Sahu <satyajit.sahu@amd.com>
Reviewed-by: Leo Liu <leo.liu@amd.com>
This patch adds cropping flags for H264 in pipe_h264_enc_pic_control.
Signed-off-by: Satyajit Sahu <satyajit.sahu@amd.com>
Reviewed-by: Leo Liu <leo.liu@amd.com>
Both Vulkan and OpenGL might be using glsl_types simultaneously or we
can also have multiple concurrent Vulkan instances using glsl_types.
Patch adds a one time init to track number of users and will release
types only when last user calls _glsl_type_singleton_decref().
This change fixes glsl_type memory leaks we have with anv driver.
v2: reuse hash_mutex, cleanup, apply fix also to radv driver and
rename helper functions (Jason)
v3: move init, destroy to happen on GL context init and destroy
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
bash subshells don't inherit the -e option by default, so failures in
the subshell commands wouldn't cause the CI job to fail.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
We either compile these locally, or they are dependencies of other
packages we install.
v2:
* Adapt to leaving self-compiled packages untouched.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
We now use the C frontend of GCC 8 instead of 6 (required tweaking the
before_script for the clang job). We cannot use the C++ frontend of GCC
7 or newer yet, because upstream GCC 7 changed some C++ name mangling
stuff in backwards incompatible ways, and LLVM < 6.0 packages aren't
available in buster.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
The APT archive used by the Ubuntu docker image can be slow, even timing
out sometimes, causing spurious failures of the containers-build job.
The Debian docker image uses deb.debian.org, which is backed by a
content distribution network.
One downside is that stretch only has GCC 6, whereas bionic had 7.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
virgl_drm_fence can wrap either a fence fd or a virgl_hw_res. Because a
fence fd is cheaper than a virgl_hw_res, we use it whenever it is
available.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Fence fds are cheaper than resources. We want to let winsys make the
decision and use fence fds whenever they are supported. This commit
prepares the work.
For the moment, we create a resource _and_ a fence fd when
supports_fences is true. This will be fixed such that we create a
resource _or_ a fence fd. (And because of a version check bug that we
will fix later, supports_fences is actually never true).
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
It does not need help from the driver. This also fixes one issue where
the fence is ignored when the transfer queue is full.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
0 is a valid value as max index, and the code handles it fine. This isn't
commonly seen, as it will only happen with array declarations of size 1.
Fixes piglit tests/shaders/complex-loop-analysis-bug.shader_test
Fixes: a3c898dc97 "gallivm: fix improper clamping of vertex index when fetching gs inputs"
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110441
Reviewed-by: Brian Paul <brianp@vmware.com>
Until now, we were only doing this when linking a SSO
program. However, nothing avoids linking a non SSO program which
doesn't have both a VS and FS. In those cases, we also need to report
the usual linking errors, if happening.
v2: Use a better name for the renamed function (Timothy).
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Juan A. Suarez takes his place, and the shorter loop means Dylan's
turn comes around again earlier.
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Juan A. Suarez <jasuarez@igalia.com>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
We need to preserve PIPE_TRANSFER_FLUSH_EXPLICIT, DISCARD_RANGE, and
so on, but don't want to pass them to iris_bo_map(). So, keep them all,
but mask them off when calling map.
Chris Wilson told me to do this a long time ago and he was right.
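A minimal sketch of the masking described above (the flag values are
illustrative placeholders, not the real PIPE_TRANSFER enums): keep the
full usage on the transfer, strip the map-irrelevant bits when calling
the BO map.
#include <stdint.h>
#define XFER_FLUSH_EXPLICIT (1u << 0)
#define XFER_DISCARD_RANGE  (1u << 1)
static inline uint32_t
usage_for_bo_map(uint32_t transfer_usage)
{
   return transfer_usage & ~(XFER_FLUSH_EXPLICIT | XFER_DISCARD_RANGE);
}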
Budgie Window Manager is an increasingly used alternative to GNOME and MATE.
Default in Solus OS, also used in other distros.
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
There's still a few in here, but those docs are already so out of date
that it probably makes more sense to delete them. Such as the GLES
docs which still claim we only support 1.1 and 2.0, with no mention of
3.x at all.
v2: - Add docs for testing back end (Eric Engestrom)
- Drop more autootols references
- meson is now required not recommended
- Add $PWD
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Marek Olšák <marek.olsak@amd.com>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Acked-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Acked-by: Matt Turner <mattst88@gmail.com>
Even if we don't use local buffers in general, it turns out that even
though the performance is not the best, the kernel still does it
better than our own list.
We still have to keep the radv bo list for buffers that are shared
externally.
This improves Talos on lowest quality setting (so as CPU bound as
possible) by ~10% if the global bo list is enabled.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
In radv we had a separate flag to actually use it + an env option
to experimentally use it.
The common code setting has_local_buffers to false of course broke
that experimental option.
Also the "enable on APU" did not make sense for RADV as it is still
disabled by default.
Fixes: b21a4efb55 "radv/winsys: allow local BOs on APUs"
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Seems it was missing the "/ ma + 0.5" and the order was swapped.
Fixes: a1a2a8dfda ('nir: add AMD_gcn_shader extended instructions')
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
These were updated in version 1.1.106 of vulkan.h to make more sense
with the extension names. We may as well keep with the times.
Acked-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
When gathering info for unmovable types we need to handle arrays.
While we don't support packing/moving arrays, we do support packing
scalar components with these arrays.
Fixes piglit:
tests/spec/arb_enhanced_layouts/execution/component-layout/vs-fs-array-interleave-range.shader_test
Fixes: 5eb17506e1 ("nir: do not pack varying with different types")
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Pipeline statistics queries should not count BLORP's rectangles.
(23) How do operations like Clear, TexSubImage, etc. affect the
results of the newly introduced queries?
DISCUSSION: Implementations might require "helper" rendering
commands be issued to implement certain operations like Clear,
TexSubImage, etc.
RESOLVED: They don't. Only application submitted rendering
commands should have an effect on the results of the queries.
Piglit's arb_pipeline_statistics_query-vert_adj exposes this bug when
the driver is hacked to always perform glBufferData via a GPU staging
copy (for debugging purposes).
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Now that nir_const_value is a scalar, we don't need the switch on bit
size in order to pluck off components properly.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Now that nir_const_value is a scalar, we don't need the switch on bit
size in order copy components around properly.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Now that nir_const_value is a scalar, we don't need the switch on bit
size in order to swizzle them properly.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
v2: remove & operator in a couple of memsets
add some memsets
v3: fixup lima
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net> (v2)
we already assert above that there are no more than 3 sources, so it
doesn't make sense to use an array of 4 sources
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
While we're here, fix a typo which caused it to actually return a vec4
with the third and fourth components zero.
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
On Mali hardware (supported by Panfrost and Lima), the fixed-function
transformation from world-space to screen-space coordinates is done in
the vertex shader prior to writing out the gl_Position varying, rather
than in dedicated hardware. This commit adds a shared NIR pass for
implementing coordinate transformation and lowering gl_Position writes
into screen-space gl_Position writes.
v2: Run directly on derefs before io/vars are lowered to cleanup the
code substantially. Thank you to Qiang for this suggestion!
v3: Bikeshed continues.
v4: Add to Makefile.sources (per Jason's comment). Bikeshed comment.
Ian and Qiang's reviews are from v3, but no real functional changes from
v4. Rob's review is from v4.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Suggested-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
As part of this cleanup, we use the newly-exposed
u_vbuf_get_minmax_index, deduplicating quite a bit of bookkeeping. We
also centralize the draw_flags tracking to make this code cleaner /
futureproofed; we have already had bugs regarding this field so we might
as well get it right now.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This was used as a workaround for uniform sizing which was fixed in
771adffe ("st: Lower uniforms in st in the...")
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Fixes the following building error happening with Android build system:
external/mesa/src/gallium/auxiliary/draw/draw_gs.c:740:79:
error: address of array 'draw->gs.tgsi.machine->PrimitiveOffsets' will always evaluate to 'true' [-Werror,-Wpointer-bool-conversion]
if (!draw->gs.tgsi.machine->Primitives[i] || !draw->gs.tgsi.machine->PrimitiveOffsets)
~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~
1 error generated.
Fixes: 7720ce3 ("draw: add support to tgsi paths for geometry streams. (v2)")
Signed-off-by: Mauro Rossi <issor.oruam@gmail.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Hardware supports writing back Z/S buffers and sampling from them,
so add support for that.
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Tested-by: Icenowy Zheng <icenowy@aosc.io>
Looks like it's somehow used by the subsequent PP job, so we have to
preserve its contents until the PP job is done.
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Tested-by: Icenowy Zheng <icenowy@aosc.io>
Adding \ prior to " in llvm version string fixes the following building errors:
external/mesa/src/gallium/drivers/r600/r600_pipe_common.c:1290:14:
error: expected ')'
", LLVM " MESA_LLVM_VERSION_STRING
^
<command line>:8:34: note: expanded from here
^
external/mesa/src/gallium/drivers/r600/r600_pipe_common.c:1287:10:
note: to match this '('
snprintf(rscreen->renderer_string, sizeof(rscreen->renderer_string),
^
1 error generated.
Fixes: 05b114e ("simplify LLVM version string printing")
Signed-off-by: Mauro Rossi <issor.oruam@gmail.com>
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
In 628c9ca908 I forgot to apply the same -4GB offset to the high address
of the high heap VMA. This was previously computed in
HIGH_HEAP_MAX_ADDRESS.
Many thanks to James for pointing this out.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reported-by: Xiong, James <james.xiong@intel.com>
Fixes: 628c9ca908 ("anv: store heap address bounds when initializing physical device")
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
We can use the same register spilling infrastructure for our loads/stores
of indirect access of temp variables, instead of doing an if ladder.
Cuts 50% of instructions and max-temps from 2 KSP shaders in shader-db.
Also causes several other KSP shaders with large bodies and large loop
counts to not be force-unrolled.
The change was originally motivated by NOLTIS slightly modifying register
pressure in piglit temp mat4 array read/write tests, triggering register
allocation failures.
This commit adds new nir_load/store_scratch opcodes which read and write
a virtual scratch space. It's up to the back-end to figure out what to
do with it and where to put the actual scratch data.
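Purely as an illustration (plain C, not the actual NIR produced), the
difference between the two lowerings looks roughly like this:

```c
/* Indirect read of a temp array lowered to an if ladder: one branch per
 * possible index. */
static float load_ladder(const float temps[4], int i)
{
   if (i == 0) return temps[0];
   if (i == 1) return temps[1];
   if (i == 2) return temps[2];
   return temps[3];
}

/* The same read expressed as a load from a virtual scratch space; the
 * back-end decides where "scratch" actually lives. */
static float load_scratch(const float *scratch, unsigned byte_offset)
{
   return scratch[byte_offset / sizeof(float)];
}
```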
v2: Drop const_index comments (by anholt)
Reviewed-by: Eric Anholt <eric@anholt.net>
While waiting for the CSD UABI to get reviewed, I keep having to rebase
the CS patch. Just land the compiler side for now to keep it from
diverging.
For now this covers just GLES 3.1 compute shaders, not CL kernels.
We're using ARB_debug_output for the main shader-db, but I had this env
var left around from the shader-db-2 support (vc4 apitrace-based). Keep
the env var around since it's nice sometimes to get the stats on a shader
you're optimizing without having to do a shader-db run, but drop the old
formatting that's not useful and keeps tricking me when I go to add
another measurement to the shader-db output.
The constant_index slots are named right there in the intrinsic
definition, and the comment is just a chance to get out of sync. Noticed
while reviewing the lower_to_scratch changes that copy-and-pasted wrong
comments, and load_ubo and load_per_vertex_output had incorrect comments
currently.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
gl_nir_lower_samplers_as_deref splits structure uniform variables,
creating new variables for individual fields. As part of that, it
calculates a new location. It then never set this on the new variables.
Thanks to Michael Fiano for finding this bug. Fixes crashes on i965
with Piglit's new tests/spec/glsl-1.10/execution/samplers/uniform-struct
test, which was reduced from the failing case in Michael's app.
Fixes: f003859f97 nir: Make gl_nir_lower_samplers use gl_nir_lower_samplers_as_deref
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
32-bit needs mmap64 for 64-bit offsets. We get 64-bit offsets from kernel.
Signed-off-by: Mateusz Krzak <kszaquitto@gmail.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Required for 64-bit kernel to interpret the pointer from 32-bit userspace.
Signed-off-by: Mateusz Krzak <kszaquitto@gmail.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
ac_build_image_opcode() casts if necessary and buffer images
are casted too.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
v2: handle atomics as well
make use of nir_rewrite_image_intrinsic
v3: remove call to nir_remove_dead_derefs
v4: (Timothy Arceri) don't actually call lowering yet
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net> (v3)
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Both processors of Mali Utgard are float-only, so bool is not an
acceptable data type for them. Fortunately the NIR compiler
infrastructure has a lowering pass to lower bool to float.
Call this lowering pass to lower bool to float for both GP and PP. This
makes Glamor on Xorg server 1.20.3 at least no longer hang when starting
gtk3-demo.
The old map of nir op bcsel is changed to fcsel, and the map of b2f32 in
PP is dropped because it's not needed now (it's originally only mapped
to ppir_op_mov).
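A minimal sketch of where such a call could sit (the surrounding driver
code is only illustrative; the pass name is the existing NIR helper):

```c
#include "compiler/nir/nir.h"

/* Illustrative: run the shared lowering before code generation for either
 * processor, so bcsel maps to fcsel and b2f32 is no longer needed. */
static void
lower_bools_for_utgard(nir_shader *s)
{
   NIR_PASS_V(s, nir_lower_bool_to_float);
}
```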
Signed-off-by: Icenowy Zheng <icenowy@aosc.io>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
We were pinning it for compute shaders, and pinning it when restoring
saved buffers, but we never actually pinned it in the original batch
for VS/TCS/TES/GS/FS.
Fixes rendering in GFXBench5's Tessellation demo and a bunch of Piglit
geometry shader tests.
We can then reuse those bounds to initialize the VMA heaps at logical
device creation.
This fixes an issue on EHL which has only 36bits of VMA. We were
incorrectly using the fixed 48bits upper bound to initialize the
logical device heap, resulting in addresses beyond the device's
limits.
v2: Don't confuse heap size (limited by system memory) and VMA size
(limited by number of addressing bits the platform has)
v3: Fix low heap vma_size :( (Lionel)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reported-by: James Xiong <james.xiong@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com> (v1)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net> (v2)
Our exec masking introduces lots of redundant flags updates, and even
without that there will be cases where NIR comparisons on the same sources
for different reasons may generate the same comparison instruction before
the selection.
total instructions in shared programs: 6492930 -> 6460934 (-0.49%)
total uniforms in shared programs: 2117460 -> 2115106 (-0.11%)
total spills in shared programs: 4983 -> 4987 (0.08%)
total fills in shared programs: 6408 -> 6416 (0.12%)
This allows using the Marvell Armada display controllers (with the
armada drm modesetting driver) along with the render-only drivers,
such as Etnaviv on an OLPC XO-1.75 laptop.
v2:
- Add to Android.mk too
Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
Reviewed-by: Eric Anholt <eric@anholt.net>
As we have already prepared for using util_blitter, use it to implement
lima_blit.
Signed-off-by: Icenowy Zheng <icenowy@aosc.io>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Currently the lima driver saves the framebuffer state in its
from-scratch struct lima_context_framebuffer. However, util_blitter
requires saving the framebuffer state with the standard struct
pipe_framebuffer_state.
Make the lima_context_framebuffer a subtype of the standard
pipe_framebuffer_state, thus the standard part can be used for
util_blitter framebuffer state saving.
Signed-off-by: Icenowy Zheng <icenowy@aosc.io>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
The set_sample_mask function is required in util_blitter.
Add a dummy one to make util_blitter work.
Signed-off-by: Icenowy Zheng <icenowy@aosc.io>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Fixes a couple of Coverity warnings CID 1444626.
Fixes: e30804c602 ("nir/radv: remove restrictions on opt_if_loop_last_continue()")
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
In commit 3b3653c4cf we decided not to use bare types; hence do not use
bare type when comparing with interface type to find out if the xfb
variable is an array block.
This fixes dEQP-VK.transform_feedback.* tests.
Fixes: 3b3653c4cf ("nir/spirv: don't use bare types, remove assert in
split vars for testing")
CC: Dave Airlie <airlied@redhat.com>
CC: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
We never want to display a transfer-temp surface, so let's ignore that
flag when calculating the new binding flags.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
For formats with multiple planes, the application will pass a num_planes
sized fds array, which should be initialized properly in case the number
of fds utilized by the driver is less than the number of planes.
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Enable using lima for KMS renderonly. This still needs KMS driver
name mapping to kmsro to be used automatically.
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Qiang Yu <yuq825@gmail.com>
This helper function can be used by drivers which
always need the min/max index.
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Qiang Yu <yuq825@gmail.com>
This is for the case where the user only knows a max size
it wants to append to the array and wants to enlarge the array
capacity before writing into it.
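A self-contained sketch of the pattern this enables (generic C, not the
actual util_dynarray API; error handling omitted):

```c
#include <stdlib.h>
#include <string.h>

struct dynarray { void *data; size_t size, capacity; };

/* Enlarge only the capacity; the used size stays the same. */
static void dynarray_grow_cap(struct dynarray *a, size_t newcap)
{
   if (newcap > a->capacity) {
      a->data = realloc(a->data, newcap);
      a->capacity = newcap;
   }
}

/* Appends never reallocate as long as the capacity was reserved up front. */
static void dynarray_append(struct dynarray *a, const void *elem, size_t sz)
{
   memcpy((char *)a->data + a->size, elem, sz);
   a->size += sz;
}
```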
v2:
- rename newsize to newcap
- rename util_dynarray_enlarge to util_dynarray_grow_cap
Signed-off-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Without that kmscube with GALLIUM_TRACE would segfault like:
#0 0x0000000000000000 in ()
#1 0x0000ffff8f311760 in dri2_create_fence_fd (_ctx=0xaaaae266b8b0, fd=10) at ../src/gallium/state_trackers/dri/dri_helpers.c:122
#2 0x0000ffff90788670 in dri2_create_sync (drv=0xaaaae2667910, disp=0xaaaae26691f0, type=12612, attrib_list=0xaaaae26b9290) at ../src/egl/drivers/dri2/egl_dri2.c:2993
#3 0x0000ffff90776a9c in _eglCreateSync (disp=0xaaaae26691f0, type=12612, attrib_list=0xaaaae26b9290, orig_is_EGLAttrib=0, invalid_type_error=12292) at ../src/egl/main/eglapi.c:1823
#4 0x0000ffff90776be4 in eglCreateSyncKHR (dpy=0xaaaae26691f0, type=12612, int_list=0xfffff662e828) at ../src/egl/main/eglapi.c:1848
Signed-off-by: Guido Günther <agx@sigxcpu.org>
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
libintel_common depends on libintel_compiler, but it contains debug
functionality that is needed by libintel_compiler. Break the circular
dependency by moving gen_debug files to libintel_dev.
Suggested-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Same as I did for V3D, drop all this code trying to GC the
non-indirectly-loaded uniforms from the UBO that's used for indirect
access of gallium cb[0]. While it does successfully drop some of those,
it came at the cost of uploading the VS's indirect uniforms twice, for the
bin and render versions of the shader.
With the UBO loads simplified, I was also able to easily backport V3D's
change to pack a UBO offset into the uniform_data[] field so that we don't
need to do the add of the uniform base in the shader.
As a bonus, now vc4 doesn't depend on mesa/st type_size functions.
total uniforms in shared programs: 25514 -> 25490 (-0.09%)
total instructions in shared programs: 77019 -> 76836 (-0.24%)
PIPE_CAP_PACKED_UNIFORMS conflates several things: Lowering uniforms i/o
at the st level instead of the backend, packing uniforms with no padding
at all, and lowering to UBOs.
Requiring backends to lower uniforms i/o for !PIPE_CAP_PACKED_UNIFORMS
leads to the driver needing to either link against the type size function
in mesa/st or duplicate it in the backend. Given that all backends
want this lower-io as far as I can tell, just move it to mesa/st to
resolve the link issue and avoid the driver author needing to understand
st's uniforms layout.
Incidentally, fixes uniform layout failures in nouveau in:
dEQP-GLES2.functional.shaders.struct.uniform.sampler_nested_fragment
dEQP-GLES2.functional.shaders.struct.uniform.sampler_nested_vertex
dEQP-GLES2.functional.shaders.struct.uniform.sampler_array_fragment
dEQP-GLES2.functional.shaders.struct.uniform.sampler_array_vertex
and I think in Lima as well.
v2: fix indents
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
If the user didn't provide a pipeline cache and we're using the
default internal pipeline cache, then we shouldn't consider a cache
hit for VK_EXT_pipeline_creation_feedback as the application did not
provide a cache.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 6601e5d6fc ("anv: implement VK_EXT_pipeline_creation_feedback")
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
also set some constants for SSBOs.
With that it can compile the shader from:
dEQP-GLES31.functional.ssbo.layout.random.all_per_block_buffers.18
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
While we're at it, prefix the string with "VIRGL: ", to match similar
code elsewhere in virgl.
Fixes: d7b3196976 ("virgl: Return an error if we use fp64 on top of GLES")
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Elie Tournier <elie.tournier@collabora.com>
This is needed to properly handle interpolateAt* when the input to be
interpolated is passed as array in the original GLSL.
Currently, the GLSL compiler would lower selecting the correct input so
that the interpolant parameter to interpolateAt* is a temporary, and this
cannot be used to create a valid shader on the host side, because there
the parameter must be a shader input.
Allowing this to be passed through in the created TGSI makes it possible
to create proper GLSL on the host.
This is related to the virglrenderer bug
https://gitlab.freedesktop.org/virgl/virglrenderer/issues/74
v2: Squash the two patches handling these flags into another
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
PIPE_CAP_TGSI_SKIP_SHRINK_IO_ARRAYS is added to indicate whether the TGSI
pass to shrink IO arrays should be skipped to enforce the originally declared array
sizes and locations instead.
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
Should be safe to enable as all instructions seem to support 16-bit.
Unfortunately, there is no CTS test.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Acked-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
virgl render complains about "Illegal resource" when running
dEQP-EGL.functional.color_clears.single_context.gles2.rgb888_window,
the reason is that a zero bind value was given for temp resource.
Signed-off-by: Lepton Wu <lepton@chromium.org>
Reviewed-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
This patch does it as late as possible so the potential extra
basic blocks don't inhibit other optimizations.
Big thanks to Jason for writing the lowering pass.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Otherwise nir_lower_non_uniform_access crashes when it tries
to get the access of a load_ubo.
Fixes: 8ed583fe52 "spirv: Handle the NonUniformEXT decoration"
Fixes: e50ab2c0f2 "nir: Add access flags to deref and SSBO atomics"
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
CTS: GL45-CTS.compute_shader.resources-max
Fixes: 4e1e8f684b "glsl: remember which SSBOs are not read-only and pass it to gallium"
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
So far ANV was advertising 4 bits for both subTexelPrecisionBits and
mipmapPrecisionBits. But these values were not actually verified.
But it seems the right value is actually 8 bits in both cases.
Unfortunately the Intel PRM does not clarify how many bits the hardware uses.
For the mipmap case, there is the following reference in PRM Volume 6
(3D Media GPGPU), specifically in LOD Computation Pseudocode:
```
Bias: S4.8
MinLod: U4.8
MaxLod: U4.8
Base: U4.1
MIPCnt: U4
SurfMinLod: U4.8
ResMinLod: U4.8
```
We have other clues, though:
- On one side, dEQP-VK.texture.explicit_lod.* tests fail when using 4
bits, but work when using 8 bits. These tests try to mimic the expected
behaviour as realistically as possible, and they use the reported
subTexelPrecisionBits and mipmapPrecisionBits to do this.
- On the other side, the equivalent driver for Windows is reporting 8
bits for both elements. Not sure if they got to verify it from the PRM
or from a different source.
CC: Jason Ekstrand <jason@jlekstrand.net>
CC: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
From the OpenGL 4.60.5 spec, section 4.4.1 Input Layout Qualifiers,
Page 67, (Location aliasing):
" Further, when location aliasing, the aliases sharing the location
must have the same underlying numerical type and bit
width (floating-point or integer, 32-bit versus 64-bit, etc.) and
the same auxiliary storage and interpolation qualification."
Additionally, we have improved the linker error descriptions.
Specifically, when taking structs into account we were producing a
linker error because we assumed that all components in each location
were used and that would cause component aliasing. This is not
accurate of the actual problem. Now, the failure specifies that the
underlying numerical type incompatibility is the cause for the
failure.
Fixes the following piglit test:
tests/spec/arb_enhanced_layouts/linker/component-layout/vs-to-fs-width-mismatch-double-float.shader_test
v2:
- Do not assert if we see invalid numerical types. These come
straight from shader code, so we should produce linker errors if
shaders attempt to do location aliasing on variables that are not
numerical such as records.
- While we are at it, improve error reporting for the case of
numerical type mismatch to include the shader stage.
v3:
- Allow location aliasing of images and samplers. If we get these
it means bindless support is active and they should be handled
as 64-bit integers (Ilia)
- Make sure we produce link errors for any non-numerical type
for which we attempt location aliasing, not just structs.
v4:
- Rebased with minor fixes (Andres).
- Added fixing tag to the commit log (Andres).
v5:
- Remove the helper function and check individually for the
underlying numerical type and bit width (Timothy).
- Implicitly, assume that any non-treated type which is checked for
its underlying numerical type is either integer or
float and has a defined bit width (Timothy).
- Implicitly, assume that structs are the only non-treated
non-numerical type (Timothy).
- Improve the linker error descriptions and commit log (Andres).
Fixes: 13652e7516 ("glsl/linker: Fix type checks for location aliasing")
Cc: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: Timothy Arceri <tarceri@itsqueeze.com>
Cc: Iago Toral Quiroga <itoral@igalia.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
The offset alignment must be set to 16 because the tile cache is
implemented to require this.
This enables ARB_buffer_texture_range and OES_texture_buffer for
softpipe. The according deqp-gles31 tests pass.
Also update the feature table.
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
With buffers the addressing is done on a per-byte basis, so the code
path for normal textures doesn't work properly. Also add an assert
to make sure that the bit count for storing the X coordinate is
large enough.
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
With buffers the addressing is done on a per-byte basis, and with
a maximal block size of 16 bytes we have to take into account four more
bits. For simplicity just remove the TEX_TILE_SIZE_LOG2, which is 5 bits.
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
For the gather op no magnification filter is provided, so always use
the filter given for minification (which is the linear filter).
Fixes: 0dff1533f2 ("softpipe: Use mag texture filter also for clamped lod == 0")
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
We have a pass to lower global registers to locals and many drivers
dutifully call it. However, no one ever creates a global register ever
so it's all dead code. It's time we bury it.
Acked-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
All we ever do is initialize it to zero, clone it, print it, and
validate it. No one ever sets or uses it.
Acked-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This will pass the multi draw through to the host if it has
support for it, instead of using the st to emulate it.
Reviewed-By: Gert Wollny <gert.wollny@collabora.com>
When I added indirect support I forgot this; however, to use it
we now need to check for a new enough capability on the host side.
Reviewed-By: Gert Wollny <gert.wollny@collabora.com>
As defined in SPV_NV_compute_shader_derivatives. These control how the
invocations are arranged in a CS when doing derivative and related
operations (which are also enabled by the extension).
Since we expect valid SPIR-V, we don't need to do more work at SPIR-V
level to enable the derivative and related operations to be called.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
To enable NV_compute_shader_derivatives, which allows derivatives (and
texture lookups with implicit derivatives) in compute shaders.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
This will make that step visible in NIR_PRINT=1.
v2: Also use the macro for the cleanup passes.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This was needed when certain intrinsics were lowered to other ones
that were defined by the same pass. After 060817b2 "intel,nir: Move
gl_LocalInvocationID lowering to nir_lower_system_values" we don't
need the loop anymore.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
When using quads, instead of mapping the elements to the next 4 local
invocation indices, we map the two next in the "current" row and two
next in the "next row". A side effect is that a thread will execute
the indices in a different order.
We now perform the lowering of both local invocation ID and index
together -- and don't rely anymore on lowering done by
nir_lower_system_values. That is convenient when doing the math for
quads, because we need X and Y to get the right invocation index.
When the pass progresses, fold the constants and clean up to reduce
the noise from the indexing math.
This implements the derivative_group_quadsNV semantics from
NV_compute_shader_derivatives.
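A hedged sketch of the index mapping described above (illustrative C; the
actual pass emits the equivalent NIR ALU ops and also folds in the
subgroup handling mentioned in v2):

```c
/* Map a linear invocation index to (x, y) so that each group of four
 * consecutive indices forms a 2x2 quad: two in the current row, two in
 * the next row.  local_size_x is assumed to be even. */
static void quad_invocation_xy(unsigned index, unsigned local_size_x,
                               unsigned *x, unsigned *y)
{
   unsigned quad   = index / 4;
   unsigned quad_x = quad % (local_size_x / 2);
   unsigned quad_y = quad / (local_size_x / 2);

   *x = 2 * quad_x + (index & 1);
   *y = 2 * quad_y + ((index >> 1) & 1);
}
```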
v2: Take subgroup_id into account, otherwise only values in the first
subgroup would be used. (Jason)
v3: Calculate invocation index and ID together, to avoid duplicating
some math in the quads case when both index and ID are used. (Jason)
v4: Don't call cleanup passes as part of the lowering, let that to the
call site. (Jason)
Change calculation to use less instructions. (Jason)
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com> (v3)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
When using NV_compute_shader_derivatives to set a derivative group,
a compute shader supports texture with implicit LOD calculation, so
don't set an explicit LOD.
Note if the extension is used but the derivative group is not
specified, it will default to LOD=0 as before.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
In compute shaders if no derivative group is defined, the derivatives
will always be zero. Specified in NV_compute_shader_derivatives.
To make the check more convenient, add a "info" local variable to the
generated code so we can refer to it in the Python rules. (Jason)
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
NV_compute_shader_derivatives allow selecting between two possible
arrangements (quads and linear) when calculating derivatives and
certain subgroup operations in case of Vulkan. So parse and propagate
those up to shader_info.h.
v2: Do not fail when ARB_compute_variable_group_size is being used,
since we are still clarifying what is the right thing to do here.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
When I implemented opt_if_loop_last_continue() I had restricted
this pass from moving other if-statements inside the branch opposite
the continue. At the time it was causing a bunch of spilling in
shader-db for i965.
However Samuel Pitoiset noticed that making this pass more aggressive
significantly improved the performance of Doom on RADV. Below are
the statistics he gathered.
28717 shaders in 14931 tests
Totals:
SGPRS: 1267317 -> 1267549 (0.02 %)
VGPRS: 896876 -> 895920 (-0.11 %)
Spilled SGPRs: 24701 -> 26367 (6.74 %)
Code Size: 48379452 -> 48507880 (0.27 %) bytes
Max Waves: 241159 -> 241190 (0.01 %)
Totals from affected shaders:
SGPRS: 23584 -> 23816 (0.98 %)
VGPRS: 25908 -> 24952 (-3.69 %)
Spilled SGPRs: 503 -> 2169 (331.21 %)
Code Size: 2471392 -> 2599820 (5.20 %) bytes
Max Waves: 586 -> 617 (5.29 %)
The code size increase is related to Wolfenstein II; it seems largely
due to an increase in phis rather than to the existing jumps.
This gives +10% FPS with Doom on my Vega56.
Rhys Perry also benchmarked Doom on his VEGA64:
Before: 72.53 FPS
After: 80.77 FPS
v2: disable pass on non-AMD drivers
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com> (v1)
Acked-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
This enables the ARB_gpu_shader5 vertex streams on softpipe.
v2: only enable when not using llvm.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This hooks up the geometry shader processing to the TGSI
support added in the previous commits.
It doesn't change the llvm interface other than to
keep things building.
v2: fix some regressions caused by primitiveoffsets
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
We need indexed queries to retrieve the geom shader info.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This adds support to retrieve the primitive counts
for each stream, along with the offset for each
primitive into the output array.
It also adds support for parsing the stream argument
to the emit and end instructions.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This just adds space for the member to the callback, doesn't
change anything else.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
When wl_drm is missing and the driver supports modifiers, use
zwp_linux_dmabuf_v1 for the list of supported formats and for buffer
creation.
Limit the supported formats to those with modifiers, which are
WL_DRM_FORMAT_{ARGB8888,XRGB8888} currently.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Add wsi_wl_display_dmabuf for zwp_linux_dmabuf_v1-related states.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Add wsi_wl_display_drm for wl_drm-related states. We will move
formats into the struct in a later commit.
Remove the unnecessary check for wl_registry_bind failures.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Refactor the switch statement in drm_handle_format out to
wsi_wl_display_add_wl_format.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
When modifiers are specified, we have to use dmabuf rather than
wl_drm. We don't need the wrapper in that case.
Signed-off-by: Chia-I Wu <olvaffe@gmail.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
XQueryExtension merely tells you whether the extension exists; it
doesn't tell you whether you're local enough for it to work.
XShmQueryVersion is not enough to discover this either, you need to
provoke the server to do actual work, and if it thinks you're remote it
will throw BadRequest at you. So send an invalid ShmDetach and use the
error code to distinguish local from remote.
[airlied: fixed bug not resetting xshm_error to 0 on success,
which made later stuff fail completely.]
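A rough sketch of that probe (illustrative only; the error-handler
plumbing is simplified compared to the real code):

```c
#include <X11/Xlib.h>
#include <X11/extensions/XShm.h>

static int probe_error;

static int probe_handler(Display *dpy, XErrorEvent *ev)
{
   (void)dpy;
   probe_error = ev->error_code;
   return 0;
}

/* Send a deliberately invalid ShmDetach: a remote server rejects the SHM
 * request outright with BadRequest, while a local one only complains
 * about the bogus segment. */
static Bool xshm_is_usable(Display *dpy)
{
   XShmSegmentInfo info = { .shmseg = 0 };   /* invalid on purpose */
   int (*old)(Display *, XErrorEvent *) = XSetErrorHandler(probe_handler);

   probe_error = 0;
   XShmDetach(dpy, &info);
   XSync(dpy, False);
   XSetErrorHandler(old);

   return probe_error != BadRequest;
}
```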
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
Consider the following search expression and NIR sequence:
('iadd', ('imul', a, b), b)
ssa_2 = imul ssa_0, ssa_1
ssa_3 = iadd ssa_2, ssa_0
The current algorithm is greedy and, the moment the imul finds a match,
it commits those variable names and returns success. In the above
example, it maps a -> ssa_0 and b -> ssa_1. When we then try to match
the iadd, it sees that ssa_0 is not b and fails to match. The iadd
match will attempt to flip itself and try again (which won't work) but
it cannot ask the imul to try a flipped match.
This commit instead counts the number of commutative ops in each
expression and assigns an index to each. It then does a loop and loops
over the full combinatorial matrix of commutative operations. In order
to keep things sane, we limit it to at most 4 commutative operations (16
combinations). There is only one optimization in opt_algebraic that
goes over this limit and it's the bitfieldReverse detection for some UE4
demo.
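Schematically (hypothetical names, not the real nir_search data
structures), the exhaustive matching boils down to a loop over a bitmask
where each bit decides whether one commutative operation tries its
sources swapped:

```c
#include <stdbool.h>

#define MAX_COMM_OPS 4

/* Hypothetical matcher: bit i of "combo" tells commutative op i to try
 * its two sources swapped.  Stubbed out here; the real code walks the
 * search expression against the NIR instruction. */
static bool try_match(unsigned combo)
{
   (void)combo;
   return false;
}

static bool match_with_commutative_combos(unsigned comm_ops)
{
   if (comm_ops > MAX_COMM_OPS)
      return false;  /* too many combinations: skip the pattern entirely */

   for (unsigned combo = 0; combo < (1u << comm_ops); combo++) {
      if (try_match(combo))
         return true;
   }
   return false;
}
```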
Shader-db results on Kaby Lake:
total instructions in shared programs: 15310125 -> 15302469 (-0.05%)
instructions in affected programs: 1797123 -> 1789467 (-0.43%)
helped: 6751
HURT: 2264
total cycles in shared programs: 357346617 -> 357202526 (-0.04%)
cycles in affected programs: 15931005 -> 15786914 (-0.90%)
helped: 6024
HURT: 3436
total loops in shared programs: 4360 -> 4360 (0.00%)
loops in affected programs: 0 -> 0
helped: 0
HURT: 0
total spills in shared programs: 23675 -> 23666 (-0.04%)
spills in affected programs: 235 -> 226 (-3.83%)
helped: 5
HURT: 1
total fills in shared programs: 32040 -> 32032 (-0.02%)
fills in affected programs: 190 -> 182 (-4.21%)
helped: 6
HURT: 2
LOST: 18
GAINED: 5
Reviewed-by: Thomas Helland <thomashelland90@gmail.com>
This revision allows for images to be :
- created by reusing image parameters from swapchain
- bound to memory from a swapchain
v2: Add color attachment flag
Use same implicit WSI parameters (tiling, samples, usage)
v3: Fix missing break in vk_foreach_struct_const() switch (Lionel)
v4: Fix accessing image aspects before android resolve (Tapani)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
This is an array of 1, so [0] is the only content, and meson already
flattens the list so this is unnecessary.
Also, all the other uses of vk_api_xml don't do that.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
This fixes the following LLVM error when using RADV_DEBUG=checkir:
Intrinsic name not mangled correctly for type arguments! Should be: llvm.amdgcn.buffer.atomic.add.i32
i32 (i32, <4 x i32>, i32, i32, i1)* @llvm.amdgcn.buffer.atomic.add
The cmpswap operation still uses the old intrinsic.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This structure was used maaaany moons ago as a placeholder for the
varying meta (now unified with mali_attr_meta and essentially fully
decoded). I don't know why it's still in the file. Let's wack it.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
The mechanics of this opcode are a little opaque, but essentially, it's
used in 8-bit mode to do a bit count in parallel of a uint and then
doing a ton of clever iadd/imov ops to recombine.
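For reference, the general software form of that parallel bit-count trick
(not the hardware encoding or the exact Midgard sequence) looks like:

```c
#include <stdint.h>

/* Illustrative software equivalent: accumulate partial bit counts in
 * parallel, ending up with per-byte counts that are then summed. */
static uint32_t popcount32(uint32_t x)
{
   x = x - ((x >> 1) & 0x55555555u);                 /* 2-bit partial counts */
   x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u); /* 4-bit partial counts */
   x = (x + (x >> 4)) & 0x0F0F0F0Fu;                 /* per-byte counts */
   return (x * 0x01010101u) >> 24;                   /* sum the four bytes */
}
```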
v2: Correct opcode. Thank you to jernej on IRC for noticing this awkward
typo!
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
These flags are set when reading back the tilebuffer from a fragment
shader via various mechanisms (including ARM_shader_framebuffer_fetch
and EXT_pixel_local_storage).
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
The new OR pattern has been seen in the wild and can end up being
generated by GLSLang. Not sure about the other two new patterns but we
may as well throw them in for completeness. While we're here, we can
drop the '@bool' specifier from the one pattern because specifying True
already implies 1-bit which basically implies boolean.
Shader-db results on Kaby Lake:
total instructions in shared programs: 15321227 -> 15321129 (<.01%)
instructions in affected programs: 3594 -> 3496 (-2.73%)
helped: 6
HURT: 0
total cycles in shared programs: 357481321 -> 357479725 (<.01%)
cycles in affected programs: 44109 -> 42513 (-3.62%)
helped: 6
HURT: 0
VkPipeline-DB results on Kaby Lake:
total instructions in shared programs: 3770504 -> 3769734 (-0.02%)
instructions in affected programs: 19058 -> 18288 (-4.04%)
helped: 163
HURT: 0
total cycles in shared programs: 1417583701 -> 1417569727 (<.01%)
cycles in affected programs: 750958 -> 736984 (-1.86%)
helped: 158
HURT: 1
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Now that we have one-bit booleans, we don't need to rely on looking at
parent instructions in order to figure out if a value is a Boolean most
of the time. We can drop these specifiers and now the optimizations
will apply more generally.
Shader-DB results on Kaby Lake:
total instructions in shared programs: 15321168 -> 15321227 (<.01%)
instructions in affected programs: 8836 -> 8895 (0.67%)
helped: 1
HURT: 31
total cycles in shared programs: 357481781 -> 357481321 (<.01%)
cycles in affected programs: 146524 -> 146064 (-0.31%)
helped: 22
HURT: 10
total spills in shared programs: 23675 -> 23673 (<.01%)
spills in affected programs: 11 -> 9 (-18.18%)
helped: 1
HURT: 0
total fills in shared programs: 32040 -> 32036 (-0.01%)
fills in affected programs: 27 -> 23 (-14.81%)
helped: 1
HURT: 0
No change in VkPipeline-DB
Looking at the instructions hurt, a bunch of them seem to be a case
where doing exactly the right thing in NIR ends up doing the wrong-ish
thing in the back-end because flags are dumb. In particular, there's a
case where we have a MUL followed by a CMP followed by a SEL and when we
turn that SEL into an OR, it uses the GRF result of the CMP rather than
the flag result so the CMP can't be merged with the MUL. Those shaders
appear to schedule better according to the cycle estimates so I guess
it's a win? Also it helps spilling in one Car Chase compute shader.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
This is a field of FLOAT_32_UNSIGNED_INT_24_8_REV texture pixel.
The OpenGL spec, "8.4.4.2 Special Interpretations", says:
"the second word contains a packed 24-bit unused field,
followed by an 8-bit index"
The spec doesn't require us to clear this unused field;
however, it makes sense to do it to avoid some
undefined behavior in some apps.
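A small sketch of what that clearing amounts to for a
FLOAT_32_UNSIGNED_INT_24_8_REV texel (struct layout assumed here purely
for illustration):

```c
#include <stdint.h>

/* Assumed layout: first word is the float depth, second word carries the
 * 8-bit stencil index in its low bits with the 24-bit unused field above
 * it. */
struct z32f_s8_texel {
   float    depth;
   uint32_t stencil_word;
};

static void clear_unused_bits(struct z32f_s8_texel *t)
{
   t->stencil_word &= 0xFFu;   /* zero the undefined 24-bit field */
}
```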
Suggested-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110305
Signed-off-by: Andrii Simiklit <andrii.simiklit@globallogic.com>
If a def is used as a condition before its definition, we should also
consider this a case to repair. When repairing, make sure we rewrite
any if conditions too.
Found while inspecting a SPIR-V conversion from a 'continue block'
that contains a conditional branch. We pull the continue block up to
the beginning of the loop, and the condition in the branch ends up
defined afterwards.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Fixes: 364212f1ed "nir: Add a pass to repair SSA form"
Add memory barrier sync for multiple launch cases, and unbind completed
resources after launch.
Signed-off-by: James Zhu <James.Zhu@amd.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Initializing the buffer multiple times within one open instance causes a
blank-output issue. Updating the viewport per frame fixes this.
Signed-off-by: James Zhu <James.Zhu@amd.com>
Tested-by: Bruno Milreu <bmilreu@gmail.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The current algorithm only supports packing 32-bit types.
If a shader uses both 16-bit and 32-bit varyings, we shouldn't
compact them together.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Follow the spec when selecting the magnification filter (OpenGL 4.5,
section 8.14):
If λ(x, y) is less than or equal to the constant c (see section 8.15)
the texture is said to be magnified;
While we're here also silence a potential warning about implicit float
to double conversion.
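The rule reduces to a simple comparison; a minimal sketch (the value of c
depends on the filter combination, per section 8.15):

```c
/* lambda <= c  =>  the texture is magnified, so use the mag filter;
 * otherwise keep the minification filter. */
static unsigned pick_filter(float lambda, float c,
                            unsigned min_filter, unsigned mag_filter)
{
   return (lambda <= c) ? mag_filter : min_filter;
}
```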
v2: Update commit message to contain a reference to the spec as pointed
out by Eric.
Fixes a number of dEQP GLES2 and GLES3 test out of:
dEQP-GLES2.functional.texture.filtering.*
dEQP-GLES2.functional.texture.vertex.2d.filtering.*
dEQP-GLES3.functional.texture.vertex.*.filtering.*
dEQP-GLES3.functional.texture.filtering.*
dEQP-GLES3.functional.texture.shadow.2d.*
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Disable aux when the resource is seen for the first time and
EXPLICIT_FLUSH is not set. This fixes issues seen when launching Xorg
with CCS_E getting utilized.
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
We'll need to do a render-based blit for scissors, since the TFU (as seen
in this conditional) can only update a whole surface.
Fixes: 976ea90bdc ("v3d: Add support for using the TFU to do some blits.")
Fixes piglit fbo-scissor-blit.
4.1 and 4.2 both have the same 16k limit, but I'm seeing GPU hangs in
the CTS at 8k and 16k. 4k at least lets us get one 4k display working.
Cc: mesa-stable@lists.freedesktop.org
I have v3d allocating enough initial memory that we've been
passing tests without it, but to match kernel behavior more closely it
would be good to actually exercise the OOM path.
Section 7.4.1 (Shader Interface Matching) of the OpenGL 4.30 spec says:
"Variables or block members declared as structures are considered
to match in type if and only if structure members match in name,
type, qualification, and declaration order."
Fixes:
* layout-location-struct.shader_test
v2: rebased against master and small fixes
Signed-off-by: Vadym Shovkoplias <vadym.shovkoplias@globallogic.com>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108250
While playing with compute shaders, I was getting a random crash and
noticed that bind_state was using the old shader info for comparison.
However, gallium allows the shader to be deleted while bound, so this
could lead to a use-after-free.
This can't happen when using the cso cache, as it tracks all of this.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The GL 4.5 spec says:
"If any enabled array’s buffer binding is zero when DrawArrays or
one of the other drawing commands defined in section 10.4 is called,
the result is undefined."
The result is undefined but it should not crash.
Fixes: gl-3.1-vao-broken-attrib
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
MI_PREDICATE_DATA is an intermediate storage for the MI_PREDICATE
command's calculations - it holds the result of the subtraction when
the compare operation is SRCS_EQUAL or DELTAS_EQUAL. But the actual
result of the predication is MI_PREDICATE_RESULT, which is what we
want to copy from the render context to the compute context.
We consider it acceptable, but let's still document it in case people
notice it and are not sure why it's there.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Figure it out once in the build system, then just use that all over the place.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Without that `GALLIUM_DDEBUG=always kmscube -A` would segfault like
#0 0x0000000000000000 in ()
#1 0x0000ffffa72a3c54 in dri2_get_fence_fd (_screen=0xaaaaed4f2090, _fence=0xaaaaed9ef880) at ../src/gallium/state_trackers/dri/dri_helpers.c:140
#2 0x0000ffffa8744824 in dri2_dup_native_fence_fd (drv=0xaaaaed5010c0, disp=0xaaaaed5029a0, sync=0xaaaaed9ef7c0) at ../src/egl/drivers/dri2/egl_dri2.c:3050
#3 0x0000ffffa87339b8 in eglDupNativeFenceFDANDROID (dpy=0xaaaaed5029a0, sync=0xaaaaed9ef7c0) at ../src/egl/main/eglapi.c:2107
#4 0x0000aaaabd29ca90 in ()
#5 0x0000aaaabd401000 in ()
Signed-off-by: Guido Günther <agx@sigxcpu.org>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
Documentation for glDrawPixels with GL_COLOR_INDEX says:
"If the GL is in color index mode, and if GL_MAP_COLOR is true,
the index is replaced with the value that it references in
lookup table GL_PIXEL_MAP_I_TO_I"
We are always in RGBA mode and there is nothing in documentation
about GL_MAP_COLOR in RGBA mode for GL_COLOR_INDEX.
Scale and bias are also only applicable for RGBA format and not
mentioned for GL_COLOR_INDEX.
Thus the behaviour will be on par with i965.
Fixes: gl-1.0-drawpixels-color-index
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
iris_upload_border_color is passed a pointer which points to a
variable that is introduced in a different scope.
CID: 1444296
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This should lower transient memory usage and improve performance
slightly (due to less memory to malloc/free, better cache locality,
etc).
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This fixes a regression in partial tiled texture uploads introduced
sometime during the cubemap series.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This patch implements system values via specially-crafted uniforms.
While we previously had an ad hoc system for passing the viewport into
the vertex shader, this commit generalizes the system to allow for
arbitrary system values to be added to both shader stages. While we're
at it, we clean up uniform handling code (which was considerably muddied
to handle the ad hoc viewport uniform).
This commit serves as both a cleanup of the existing codebase and the
precursor to new functionality, like implementing textureSize().
Concurrent with these changes is respecting the depth transform, which
was not possible with the old fixed uniform system and here serves as a
proof-of-correctness test (as well as justifying the NIR changes).
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
While a partial set of viewport system values exist, these are scalar
values, which is a poor fit for viewport transformations on vector ISAs
like Midgard (where the vec3 values for scale and offset each need to be
coherent in a vec4 uniform slot to take advantage of vectorized
transform math). This patch adds vec3 scale/offset fields corresponding
to the 3D Gallium viewport / glViewport+depth.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Eric Anholt <eric@anholt.net>
For texture write-transfers, we either free them on the transfer-queue
or right away. But for read-transfers, we currently only destroy them in
case they used a temp-resource. This leads to occasional resource-leaks.
Let's add a call to virgl_resource_destroy_transfer in the missing case.
Do the same thing for buffers as well, but the logic is a bit easier to
follow there.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Fixes: f0e71b1088 ("virgl: use transfer queue")
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
Previously, there was minimal support for interoperating with legacy
kernels (reusing kernel modules originally designed for proprietary
legacy userspaces, rather than for upstream-friendly free software
stacks). Now that the Panfrost kernel is stabilising, this commit drops
the legacy code path.
Panfrost users need to use a modern, mainline kernel supporting the
Panfrost kernel driver from this commit forward.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Trying to construct a scanout-capable buffer will only ever work
when we are on top of a KMS winsys, as the render node isn't capable
of allocating contiguous buffers.
Tested-by: Marius Vlad <marius.vlad@collabora.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
When setting up a transfer to a resource, all contexts where the resource
is pending must be flushed. Otherwise a write transfer might be started
in the current context before all contexts that access the resource in
shared (read) mode have been executed.
Fixes: 64813541d5 (etnaviv: fix resource usage tracking across
different pipe_context's)
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Tested-By: Guido Günther <agx@sigxcpu.org>
The context is self synchronizing at the GPU side, as commands are
executed in order. We must not flush our own context when updating the
resource use, as that leads to excessive flushing on effectively every
draw call, causing huge CPU overhead.
Fixes: 64813541d5 (etnaviv: fix resource usage tracking across
different pipe_context's)
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
If we increase vector sizing later it would be nice to avoid
tripping over this again.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
If we increase the vector size in the future it would be good
to not have to fix these up, this should change nothing at present.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This fd was created in virgl_drm_screen_create and should be closed
in virgl_drm_screen_destroy.
Signed-off-by: Lepton Wu <lepton@chromium.org>
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
Since we are now properly storing the clear color with SCS bits, we can
now enable fast clears on gen8 too.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
We want to skip some types of aux usages (for instance,
ISL_AUX_USAGE_HIZ when the hardware doesn't support it, or when we have
multisampling) when sampling from the surface.
Instead of checking for those cases while filling the surface state and
leaving it blank, let's have a version of aux.possible_usages for
sampling. This way we can also avoid allocating surface state for the
cases we don't use.
Fixes: a8b5ea8ef0 "iris: Add function to update clear color in surface state."
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Since we are not using it for the clear color, there's no need to
allocate it.
Fixes: a8b5ea8ef0 "iris: Add function to update clear color in surface state."
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
At the fast clear time, the only swizzle we have available is actually
the identity swizzle (which we use for most rendering). So the call to
swizzle_color_value() becomes simply a no-op, and doesn't properly zero
out the unused channels.
We have to manually override those channels.
Fixes: a8b5ea8ef0 "iris: Add function to update clear color in surface state."
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The swizzle for rendering surfaces is always identity. So when we are
doing the fast clear, we don't have enough information to store the
clear color OR'ed with the Shader Channel Select bits for the dword in
the SURFACE_STATE.
Instead of trying to patch up the SURFACE_STATE correctly later, by
reading the color from the clear color state buffer and then doing all
the operations to store it, let's just re-emit the whole SURFACE_STATE.
That should make things way simpler on gen8, and we can still use the
clear color state buffer for gen9+.
Fixes: a8b5ea8ef0 "iris: Add function to update clear color in surface state."
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Newer gens can read it directly.
Also properly skip updating the ISL_AUX_USAGE_NONE surface.
Fixes: a8b5ea8ef0 "iris: Add function to update clear color in surface state."
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
There shouldn't be a difference for users, but this way we do manage
all of our containers from freedesktop.org.
note: compared to the previous Dockerfile, we need to manually
add gcc, g++ and python*-wheel
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@gmail.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Since with GLES hosts we lie about the GLSL feature level, it is better
to set the number of streams based on the actual host capabilities.
v2: Make use of feature check level to avoid regressions.
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
This enables the following piglits with PASS:
nv_shader_atomic_float/execution/
shared-atomicadd-float
shared-atomicexchange-float
ssbo-atomicadd-float
ssbo-atomicexchange-float
v2: Minimize the patch by using type punning (Eric Anholt)
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Using RGTC, ETC1, ETC2 or S3TC for 3D-textures isn't allowed by any of
OpenGL 4.6, OpenGL ES 3.2, ARB_texture_compression_rgtc,
EXT_texture_compression_rgtc, OES_compressed_ETC1_RGB8_texture,
S3_s3tc or EXT_texture_compression_s3tc specifications.
So let's not allow any of those compressed 3d-textures at all. It's not
going to work once it hits the OpenGL driver in virglrenderer.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
We were caching only the value set with glXSwapIntervalSGI(), missing out
on the default setting of the swap interval by the loader. This fixes
glxgears's warning about being vblank synchronized by default.
Fixes: 9777c4234b ("loader: drop the [gs]et_swap_interval callbacks")
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Not all hardware is made equal and some does not have the full
complement of 48b of address space. Ask what the actual size of virtual
address space allocated for contexts, and bail if that is not enough to
satisfy our static partitioning needs.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
src/gallium/drivers/radeonsi/si_state_viewport.c:196: si_emit_guardband:
Assertion `vp_as_scissor.maxx <= max_viewport_size[vp_as_scissor.quant_mode]
&& vp_as_scissor.maxy <= max_viewport_size[vp_as_scissor.quant_mode]' failed.
The comparison was unsigned, so negative maxx or maxy would fail.
Fixes: 3c540e0a74 "radeonsi: Fix guardband computation for large render targets"
This update allows an MI_LRI to trigger an OA report write in the
global OA buffer. This isn't really useful for us; we just keep close
to the internal public configs.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
These are new metrics for Gen8/9 to measure the effect of the PMA
stall workaround fix.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
This unifies some of the programming between pre-production stepping
and production ones.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
- Implements RGB565/RGBA5551 formats
- Don't advertise support for flipped RGBA5551 and ETC
Fixes remaining tests in dEQP-GLES2.functional.texture.format.* which is
now at 36/36.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
transfer_unmap now tiles for any tiled resource, not just TEXTURE_2D,
which should fix more than just cubemaps!
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Add support for load_barycentric_pixel, load_interpolated_input, and
friends. For now, this retains support for old-style inputs, which can
probably be dropped with some ttn work.
Prep work for sample-shading support.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Originally we kept track of a table of inputs. But with new-style frag
inputs this becomes awkward. Re-work it so that initially we assign
unpacked varying locations, and then after the shader is compiled we scan
to find the actually used inputs and re-pack.
Signed-off-by: Rob Clark <robdclark@gmail.com>
CXXLD libxatracker.la
/usr/bin/ld: ../../../../src/gallium/auxiliary/.libs/libgallium.a(tgsi_to_nir.o): in function `ttn_finalize_nir':
src/gallium/auxiliary/nir/tgsi_to_nir.c:2111: undefined reference to `gl_nir_lower_samplers_as_deref'
/usr/bin/ld: src/gallium/auxiliary/nir/tgsi_to_nir.c:2113: undefined reference to `gl_nir_lower_samplers'
Fixes: 9a834447d6 ("tgsi_to_nir: Produce optimized NIR for a given pipe_screen.")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109929
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
This prevents getting mixed-up results if a multi-threaded app has two
validation errors in different threads.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
__builtin_types_compatible_p() is GCC-specific and breaks the
MSVC build.
This intrinsic has been in u_vector_foreach() for a long time, but
that macro has only recently been used in code
(nir/nir_opt_comparison_pre.c) that's built with MSVC.
Fixes: 2cf59861a ("nir: Add partial redundancy elimination for compares")
Reviewed-by: José Fonseca <jfonseca@vmware.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Removed a few unused variables and iris_getparam_boolean().
Kept 'name' around since there's a commented-out debug that makes use of it.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
From GLVND author:
> From a functional standpoint, exporting additional symbols doesn't
> really matter, since libglvnd will load the vendor libraries with
> RTLD_LOCAL.
Suggested-by: Kyle Brenneman <kbrenneman@nvidia.com>
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Kyle Brenneman <kbrenneman@nvidia.com>
This reverts commit 29132af234.
It seems the new intrinsic causes a hang on radeonsi (VEGA) when running the
piglit test:
tests/spec/arb_shader_storage_buffer_object/execution/ssbo-atomicCompSwap-int.shader_test
This fixes the following piglit with RadeonSI
tests/spec/arb_gpu_shader_fp64/execution/built-in-functions/fs-frexp-dvec4.shader_test
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
When we add new feature checks on the host side that is used to
enable a cap conditionally that was enabled unconditionally before
we might end up with a feature regression when a new mesa version
is used with an old virglrenderer version that doesn't check for
that cap.
To work around this problem add a version id to the caps that corresponds
to the features that are actually checked on the host and check that
version too when enabling the cap.
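Schematically (field and flag names here are hypothetical, purely to
illustrate the guest-side check):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical caps layout for illustration only. */
struct host_caps {
   uint32_t capability_bits;
   uint32_t host_feature_check_version;
};
#define CAP_FBO_MIXED_COLOR_FORMATS (1u << 0)

static bool mixed_color_fbo_supported(const struct host_caps *caps)
{
   /* Only trust a conditionally-reported cap when the host is new enough
    * to actually perform the corresponding feature check. */
   return caps->host_feature_check_version >= 1 &&
          (caps->capability_bits & CAP_FBO_MIXED_COLOR_FORMATS);
}
```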
Fixes: 2ee197d6e8 ("virgl: Enable mixed color FBO attachemnets only when the host supports it")
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Pohsien Wang <pwang@chromium.org>
Especially when performing a transition from UNDEFINED->GENERAL,
the driver shouldn't initialize HTILE metadata in compressed
state because it doesn't decompress when the src layout is
GENERAL.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110259
Fixes: 3a2e93147f ("radv: always initialize HTILE when the src layout is UNDEFINED")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This pass attempts to detect code sequences like
if (x < y) {
z = y - x;
...
}
and replace them with sequences like
t = x - y;
if (t < 0) {
z = -t;
...
}
On architectures where the subtract can generate the flags used by the
if-statement, this saves an instruction. It's also possible that moving
an instruction out of the if-statement will allow
nir_opt_peephole_select to convert the whole thing to a bcsel.
Currently only floating point compares and adds are supported. Adding
support for integers will be a challenge due to integer overflow. There
are a couple possible solutions, but they may not apply to all
architectures.
v2: Fix a typo in the commit message and a couple typos in comments.
Fix possible NULL pointer deref from result of push_block(). Add
missing (-A + B) case. Suggested by Caio.
v3: Fix is_not_const_zero to work correctly with types other than
nir_type_float32. Suggested by Ken.
v4: Add some comments explaining how this works. Suggested by Ken.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
v2: Move bug fix in get_neg_instr from the next patch to this patch
(where it was intended to be in the first place). Noticed by Caio.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
No shader-db changes on any Intel platform.
v2: Use a loop to generate patterns. Suggested by Jason.
v3: Fix a copy-and-paste bug in the extract_[ui] of ishl loop that would
replace an extract_i8 with an extract_u8. This broke ~180 tests. This
bug was introduced in v2.
Reviewed-by: Matt Turner <mattst88@gmail.com> [v1]
Reviewed-by: Dylan Baker <dylan@pnwbakers.com> [v2]
Acked-by: Jason Ekstrand <jason@jlekstrand.net> [v2]
No shader-db changes on any Intel platform.
v2: Use a loop to generate patterns. Suggested by Jason.
v3: Fix a copy-and-paste bug in the extract_[ui] of ishl loop that would
replace an extract_i8 with an extract_u8. This broke ~180 tests. This
bug was introduced in v2.
Reviewed-by: Matt Turner <mattst88@gmail.com> [v1]
Reviewed-by: Dylan Baker <dylan@pnwbakers.com> [v2]
Acked-by: Jason Ekstrand <jason@jlekstrand.net> [v2]
llvm/spir-v spits out some struct a { struct b {} }, but instead of
a deref it casts (struct a) to (struct b); reconstruct
struct derefs instead of casts for these.
v2: use ssa_def_rewrite uses, rework the type restrictions (Jason)
v3: squish more stuff into one function, drop unused temp (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This is no longer true since PIPE_CAP_PACKED_UNIFORMS was enabled.
Fixes: 3c8779af32 freedreno/ir3: Enable PIPE_CAP_PACKED_UNIFORMS
Signed-off-by: Rob Clark <robdclark@gmail.com>
Not sure why new-style frag inputs start triggering this. But we
probably shouldn't consider srcs from other blocks.
Signed-off-by: Rob Clark <robdclark@gmail.com>
For depth and stencil blits, we always want the main mask to be Z, and
the secondary pass mask to be S. If asked to blit Z+S to S, we should
handle the blit in the second pass which properly gets the stencil
resources.
Before, we were trying to handle S as the main mask, and accidentally
blitting a Z source to a S destination, which doesn't work out well.
Fixes Piglit's "framebuffer-blit-levels {draw,read} stencil" tests.
Fixes assertion failures in Piglit's "framebuffer-blit-levels
{draw,read} stencil" tests on iris. Also fixes assert failures in
frameretrace, which tries to ReadPixels the stencil values (only)
from a Z24S8 depth/stencil attachment.
Reviewed-by: Kristian H. Kristensen <hoegsberg@chromium.org>
This instruction needs a workaround when used from vertex shaders.
Fixes:
dEQP-GLES3.functional.shaders.texture_functions.texturegradoffset.sampler2dshadow_vertex
dEQP-GLES3.functional.shaders.texture_functions.texturegradoffset.sampler3d_fixed_vertex
dEQP-GLES3.functional.shaders.texture_functions.texturegradoffset.sampler3d_float_vertex
dEQP-GLES3.functional.shaders.texture_functions.textureprojgradoffset.sampler2dshadow_vertex
dEQP-GLES3.functional.shaders.texture_functions.textureprojgradoffset.sampler3d_fixed_vertex
dEQP-GLES3.functional.shaders.texture_functions.textureprojgradoffset.sampler3d_float_vertex
dEQP-GLES3.functional.shaders.texture_functions.textureprojgrad.sampler2dshadow_vertex
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Reviewed-by: Rob Clark <robdclark@gmail.com>
emit_cat5() needs to check if the last optional reg is there before it
accesses it.
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Reviewed-by: Rob Clark <robdclark@gmail.com>
There are multiple `goto path_fail` with an open fd, but none that go to
`fail:` without going through `path_fail:` first, so let's just move the
`close(fd)` there.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
I don't think we should update metadata when conditional rendering
is enabled. For some reason, some CTS tests break only on SI.
This fixes the following CTS on SI:
dEQP-VK.conditional_rendering.draw_clear.clear.depth.*
Cc: 19.0 <mesa-stable@lists.freedesktop.org>
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
i965 does this, and st's tgsi path does this. st/nir did not.
Cuts 138MB of memory from a DiRT Rally trace, which is about 44%
of the total GLSL IR memory.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
I neglected to fill out this driver function, causing us to advertise
0 modifiers. Now we advertise the various tilings and let the driver
pick them. I've verified that X tiling works with Weston (by hacking
the list to skip Y tiling).
Y+CCS doesn't work yet because it's multiplane and the Gallium dri
state tracker isn't really prepared for that. Leave it off for now.
v2: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
- fix missing type
- fix *_FQM_*/*_QM_* commands
- shorten some media structs using groups
- factor out memory attributes
- switch MI_FLUSH_DW fields to bool
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
v2: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
- fix missing type
- fix *_FQM_*/*_QM_* commands
- shorten some media structs using groups
- factor out memory attributes
- switch MI_FLUSH_DW fields to bool
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
v2: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
- fix missing type
- fix *_FQM_*/*_QM_* commands
- shorten some media structs using groups
- factor out memory attributes
- switch MI_FLUSH_DW fields to bool
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
headers
v2: Fixed the check for engine
v3: Changed engine into an argument given to the scripts
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
The code to handle indirect image unit indexing was missing.
Fixes piglit tests/spec/arb_arrays_of_arrays/execution/image_store/basic-imageStore-mixed-const-non-const-uniform-index.shader_test
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
This fixes the vertex id fetch in the non-llvm drawing paths.
The vertex id in elt mode comes from the elts, not just a linear
value.
Note we don't add basevertex in the elts case as it's already included
in the elts by the looks of it (at least tests fail if I add it).
Fixes piglit end-primitive tests and some others.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
We have a rather big constant file and it seems that the best way to
use it is to upload all UBOs and lower UBO access to load_uniform.
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Reviewed-by: Rob Clark <robdclark@gmail.com>
This commit turns on the gallium cap and adds a pass to lower the
load_ubo intrinsics for block 0 back to load_uniform intrinsics and
adjusts the backend where the cap switches units from vec4s to dwords.
As we stop using ir3_glsl_type_size() for uniform layout, this also
corrects an issue where we would allocate a vec4 slot for samplers in
uniforms, fixing:
dEQP-GLES3.functional.shaders.struct.uniform.sampler_array_fragment
dEQP-GLES3.functional.shaders.struct.uniform.sampler_array_vertex
dEQP-GLES3.functional.shaders.struct.uniform.sampler_nested_fragment
dEQP-GLES2.functional.shaders.struct.uniform.sampler_nested_vertex
dEQP-GLES2.functional.shaders.struct.uniform.sampler_nested_fragment
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Reviewed-by: Rob Clark <robdclark@gmail.com>
We don't need to determine the number of uniform slots here, it's
already available as prog->Parameters->NumParameterValues. The way we
previously determined the number of slots was also broken for
PackedDriverUniformStorage, where we would add loc (in dwords) and
type_size() (in vec4s).
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Rob Clark <robdclark@gmail.com>
kms_swrast can work with primary nodes out of the box, but also
with rendernodes if the build environment specifies the
EGL_FORCE_RENDERNODE flag.
Suggested-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Load the kms_swrast driver when specified.
Doesn't work with drm_gralloc.
v2: remove unneeded line (@eric)
v3: Remove swrast_loader_extensions (@evelikov)
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
This way, we can use primary nodes with kms_swrast too.
Also fix up some whitespace issues.
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
This reverts commit 4e1bbb000c. It turns
out that some DXVK apps, due to some implementation detail of DXVK or
otherwise, create and destroy instances in an interleaved way. Freeing the
glsl_type memory without being a bit more careful causes use-after-free
issues. Looks like we need to try again.
When the host is running on softpipe/llvmpipe the maximum number of
samples for multisampling is 1. GL 3.0 requires at least 4 samples, and
softpipe/llvmpipe get around this by enabling PIPE_CAP_FAKE_SW_MSAA.
This patch mimics softpipe/llvmpipe behavior in virgl by enabling the
same PIPE_CAP_FAKE_SW_MSAA workaround when the max sample count reported
by the host is 1. This change allows virgl on a softpipe/llvmpipe host
to advertise support for GL 3.0 and beyond.
Signed-off-by: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Reviewed-By: Gert Wollny <gert.wollny@collabora.com>
Up to twice, for a total of 3 attempts maximum.
This will hopefully avoid spurious CI pipeline failures due to
intermittent GitLab/docker infrastructure issues.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
The containers-build stage job doesn't use the cache, so this might save
some wasted time for it.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
This will allow us to make use of the selection control support in
spirv and the GL support provided by EXT_control_flow_attributes.
Note this only supports if-statements as we don't support switches
in NIR.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108841
This patch refactors a substantial amount of code in preparation for
mipmaps. In particular, we now have a correct slice abstraction based
on offsets; cpu/gpu are no longer arbitrary pointers. We additionally
shuffle around other code to accompany these changes and cleanup how
tiled textures are handled, while drawing some attention to the blit
code.
Mipmaps are still disabled at this point, as autogeneration is not yet
implemented; enabling as-is would cause regressions.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
In fact, the native "fpow" instruction only does half of it; more work
is needed for the actual instruction. For now, just lower.
Fixes: 1ea42894c ("panfrost/midgard: Implement fpow")
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Although this is not functional (and the command stream side is not
aiming for ES3 right now), this is enough to run dEQP-GLES3 shader
tests with the version override directive; this is useful, as some ES3
shader features can occur in ES2 class shaders due to lowering.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
On Midgard, float ops support standard source modifiers (abs/neg) and
destination modifiers (sat/pos/round). Integer ops do not support these,
however. To cope, we use native NIR source modifiers for floats, but
lower them away to iabs/ineg for integers, implementing those ops
simultaneously to avoid regressions.
Fixes the integer tests in
dEQP-GLES2.functional.shaders.operator.unary_operator.minus.*
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Some of these are not yet fully functional due to related bugs, but this
is the correct op mapping. The native ball/bany opcodes act on vec4s
unconditionally. That said, both ball and bany have the nice property
that duplicating an argument does not affect their output, so the
default "hanging swizzles" allow us to implement 2/3-component opcodes
correctly, implicitly lowering.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Whereas a normal fcsel acts on a boolean input in r31.w, the fcsel_i
variant acts on an integer input in r31.w, which can be preloaded with
an instruction like imov (with the appropriate negate flag on the
source).
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This preliminary implementation should handle some basic cases. Future
work should scissor the FRAGMENT job as well for efficiency.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Our viewport code hardcoded a number of wrong assumptions, which sort of
sometimes worked but was definitely wrong (and broke most of dEQP). This
corrects the logic, accounting for flipped-Y framebuffers, which
fixes... most of dEQP.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This gets the basevertex from the draw depending on whether
it's an indexed or non-indexed draw.
We still fail a transform feedback test for vertex id, as
the vertex id is actually an index id and isn't getting translated
properly to a vertex id; suggestions on how/where to fix that are welcome.
Reviewed-by: Brian Paul <brianp@vmware.com>
This field was added in a recent addrlib update, and while there
currently seems to be no issue with skipping it, we will have to
set it correctly in the future.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Most cat5 instructions are constructed using ir3_SAM, which uses
regs[1] for the (sampler, tex) src. Not DSX/DSY though, so we look up
src1 and src2 differently for those two.
Fixes: 1dffb089 ("freedreno/ir3: fix sam.s2en encoding")
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Reviewed-by: Rob Clark <robdclark@gmail.com>
In 1088b788 ("freedreno/ir3: find # of samplers from uniform vars") we
started counting number of samplers based on the uniform vars instead
of number of cat5 instructions. We used the number of samplers to
determine whether to enable derivatives, but when we only use
derivatives and no samplers, that now breaks. Track whether we need
derivatives explicitly and use that to enable the state.
Fixes: 1088b788 ("freedreno/ir3: find # of samplers from uniform vars")
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Reviewed-by: Rob Clark <robdclark@gmail.com>
We will need them for a new ACCESS_NON_UNIFORM flag that's about to be
added in the next commit.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
On Intel, we have both bindless and bindful and we'd like to use them at
the same time if we can so we need to be able to distinguish at the NIR
level between the two. This also fixes nir_lower_tex to properly handle
bindless in its tex_texture_size and get_texture_lod helpers.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fix the order of src0_alpha and sample mask in fb payload.
From SKL PRM Volume 7, "Data Payload Register Order
for Render Target Write Messages":
Type     S0A  oM  sZ  oS  M2      M3      M4
SIMD8     1    1   0   0  s0A     oM      R
SIMD16    1    1   0   0  1/0s0A  3/2s0A  oM
It also fixes alpha to coverage with sample mask
on GEN6, since they are now in the correct order.
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
From "Alpha Coverage" section of SKL PRM Volume 7:
"If Pixel Shader outputs oMask, AlphaToCoverage is disabled in
hardware, regardless of the state setting for this feature."
From OpenGL spec 4.6, "15.2 Shader Execution":
"The built-in integer array gl_SampleMask can be used to change
the sample coverage for a fragment from within the shader."
From OpenGL spec 4.6, "17.3.1 Alpha To Coverage":
"If SAMPLE_ALPHA_TO_COVERAGE is enabled, a temporary coverage value
is generated where each bit is determined by the alpha value at the
corresponding sample location. The temporary coverage value is then
ANDed with the fragment coverage value to generate a new fragment
coverage value."
Similar wording could be found in Vulkan spec 1.1.100
"25.6. Multisample Coverage"
Thus we need to compute the alpha to coverage dithering manually in the
shader and replace the sample mask store with the bitwise AND of the
sample mask and the alpha to coverage dither mask.
The following formula is used to compute the final sample mask:
   m = int(16.0 * clamp(src0_alpha, 0.0, 1.0))
   dither_mask = 0x1111 * ((0xfea80 >> (m & ~3)) & 0xf) |
                 0x0808 * (m & 2) | 0x0100 * (m & 1)
   sample_mask = sample_mask & dither_mask
Credits to Francisco Jerez <currojerez@riseup.net> for creating it.
It gives a number of ones proportional to the alpha for 2, 4, 8 or 16
least significant bits of the result.
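As a standalone sanity check of the formula (not driver code; it only uses
the GCC __builtin_popcount builtin), the following small program prints the
dither mask for every m and shows that the number of set bits matches m:
   #include <stdio.h>

   static unsigned dither_mask(unsigned m)
   {
      return (0x1111 * ((0xfea80 >> (m & ~3u)) & 0xf)) |
             (0x0808 * (m & 2)) | (0x0100 * (m & 1));
   }

   int main(void)
   {
      for (unsigned m = 0; m <= 16; m++) {
         unsigned mask = dither_mask(m) & 0xffff;
         printf("m=%2u mask=0x%04x bits=%d\n", m, mask, __builtin_popcount(mask));
      }
      return 0;
   }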
GEN6 hardware does not have an issue with simultaneous usage of sample mask
and alpha to coverage; however, due to the wrong sending order of oMask
and src0_alpha, it is still affected by it.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109743
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
If the geom shader emits a point size, we failed to find it here;
use the correct API to look it up.
Fixes:
tests/spec/glsl-1.50/execution/geometry/point-size-out.shader_test
Reviewed-by: Brian Paul <brianp@vmware.com>
With indirect rendering it's fine to set the instance count
parameter to 0, and expect the rendering to be ignored.
Fixes assert in KHR-GLES31.core.compute_shader.pipeline-gen-draw-commands
on softpipe
v2: return earlier before changing fpstate
Reviewed-by: Brian Paul <brianp@vmware.com>
The wait here is unnecessary since we have a pool of back buffers,
and the wait for the swap buffer will happen before the present pixmap.
At the same time, the previous back buffer will be put back into the pool
for reuse after the check for the PresentIdleNotify event.
Signed-off-by: Leo Liu <leo.liu@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
v2 (Topi):
- Make bit-size handling order be 16-bit, 32-bit, 64-bit
- Clamp lower exponent range at -28 instead of -30.
Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
When we destroy a context, we need to temporarily make that context
the current one for the thread.
That's because during context tear-down we make many calls to
_mesa_reference_texobj(&texObj, NULL). Note there's no context
parameter. If the texture's refcount goes to zero and we need to
delete it, we use the thread's current context. But if that context
isn't the context we're tearing down, we get into trouble when
deallocating sampler views. See patch 593e36f956 ("st/mesa:
implement "zombie" sampler views (v2)") for background information.
Also, we need to release any sampler views attached to the fallback
textures.
Fixes a crash on exit with a glretrace of the Nobel Clinician
application.
v2: at end of st_destroy_context(), check if save_ctx == ctx and
unbind the context if so.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Reviewed-by: Neha Bhende <bhenden@vmware.com>
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
In Android O, MESA needs to statically link libexpat so that
it's in the same VNDK namespace.
v2: apply change also to anv driver (Tapani)
v3: use += in anv change (Eric Engestrom)
Change-Id: I82b0be5c817c21e734dfdf5bfb6a9aa1d414ab33
Signed-off-by: Kishore Kadiyala <kishore.kadiyala@intel.com>
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
If VK_QUERY_RESULT_WITH_AVAILABILITY_BIT is set and
VK_QUERY_RESULT_WAIT_BIT and VK_QUERY_RESULT_PARTIAL_BIT are both not
set, we need to return VK_NOT_READY and only set the availability
status field for each query.
From Vulkan spec:
"If VK_QUERY_RESULT_WAIT_BIT and VK_QUERY_RESULT_PARTIAL_BIT are both
not set then no result values are written to pData for queries that are
in the unavailable state at the time of the call, and
vkGetQueryPoolResults returns VK_NOT_READY. However, availability state
is still written to pData for those queries if
VK_QUERY_RESULT_WITH_AVAILABILITY_BIT is set."
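A minimal sketch of the per-query rule being described (not the driver's
code; the helper and its packing of the destination buffer, with the
availability word right after the result, are simplified assumptions):
   #include <stdbool.h>
   #include <stdint.h>
   #include <vulkan/vulkan.h>

   static VkResult
   write_one_query(bool available, uint64_t value, VkQueryResultFlags flags,
                   uint64_t *dst, VkResult result)
   {
      bool can_write = available ||
                       (flags & (VK_QUERY_RESULT_WAIT_BIT |
                                 VK_QUERY_RESULT_PARTIAL_BIT));

      if (can_write)
         dst[0] = value;
      else
         result = VK_NOT_READY;   /* leave the result value untouched */

      if (flags & VK_QUERY_RESULT_WITH_AVAILABILITY_BIT)
         dst[1] = available;      /* availability is still written */

      return result;
   }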
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
If the query is not available and VK_QUERY_RESULT_WAIT_BIT and
VK_QUERY_RESULT_PARTIAL_BIT are both not set, the spec doesn't
allow modifying its result.
From Vulkan spec:
"If VK_QUERY_RESULT_WAIT_BIT and VK_QUERY_RESULT_PARTIAL_BIT are
both not set then no result values are written to pData for queries
that are in the unavailable state at the time of the call, and
vkGetQueryPoolResults returns VK_NOT_READY. However, availability state
is still written to pData for those queries
if VK_QUERY_RESULT_WITH_AVAILABILITY_BIT is set."
v2:
- Move VK_NOT_READY change to next patch (Samuel Pitoiset)
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
With vkpipelinedb Samuel discovered a regression since we stopped
stripping types at the spir-v level.
This adds a check to the var splitting for the case where it
asserts the type hasn't changed, when it has just created a bare
type, and it's different than the original type which has an explicit
stride.
This also removes a pointless assert that also triggers.
Fixes: 3b3653c4cf (nir/spirv: don't use bare types, remove assert in split vars for testing)
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Also handle GLSL_TYPE_INTERFACE the same way we do GLSL_TYPE_STRUCT in
various places. Motivated by ARB_gl_spirv work, that will take
advantage of the interface types when handling NIR coming from SPIR-V.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Also updates gl_spirv to pick the right one. At the moment nothing
uses it, but upcoming functionality part of ARB_gl_spirv will use it,
and we also later can be more assertful when handling certain features
for each of the execution environments.
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
Acked-by: Karol Herbst <kherbst@redhat.com>
The CTS requires a 565-no-depth-no-stencil (meaning d/s not-required, not
not-present) config for ES 3.0, but at depth 24 on X11 we wouldn't expose one.
We can satisfy that bad requirement using a pbuffer-only visual with
whatever other buffers the driver happens to have given us.
I've tried to raise this as an absurd requirement with Khronos and made no
progress.
v2: Make sure it's single sample, no depth, no stencil. Comment typo fix
Reviewed-by: Adam Jackson <ajax@redhat.com>
Report 320 for a6xx, which isn't *quite* true (no geom/tess, in
particular), but other caps keep the reported GL and GLSL versions
correct (3.1 / 3.10 es). But reporting 320 will switch on
EXT_gpu_shader5, which is the goal.
Signed-off-by: Rob Clark <robdclark@gmail.com>
For GLES2+ contexts, enable EXT_gpu_shader5 if the driver exposes a
sufficiently high ESSL feature level, even if the GLSL feature level
isn't high enough.
This allows drivers to support EXT_gpu_shader5 in GLES contexts before
they support all the additional features of ARB_gpu_shader5 in GL
contexts.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
Adds a new cap to allow drivers to expose higher shading language
versions in GLES contexts, to avoid having to report an artificially
low version for the benefit of GL contexts.
The motivation is to expose EXT_gpu_shader5 even though a driver may
not support all the features needed for the corresponding GL extension
(ARB_gpu_shader5).
Signed-off-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
Fix build error after llvm-9.0svn r352827 ("[opaque pointer types] Add a
FunctionCallee wrapper type, and use it.").
In file included from ./rasterizer/jitter/builder.h:158:0,
                 from swr_shader.cpp:35:
./rasterizer/jitter/gen_builder_meta.hpp: In member function ‘llvm::Value* SwrJit::Builder::VGATHERPD(llvm::Value*, llvm::Value*, llvm::Value*, llvm::Value*, llvm::Value*, const llvm::Twine&)’:
./rasterizer/jitter/gen_builder_meta.hpp:51:117: error: no matching function for call to ‘cast(llvm::FunctionCallee)’
Function* pFunc = cast<Function>(JM()->mpCurrentModule->getOrInsertFunction("meta.intrinsic.VGATHERPD", pFuncTy));
^
Suggested-by: Philip Meulengracht <the_meulengracht@hotmail.com>
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Reviewed-by: Alok Hota <alok.hota@intel.com>
The previous patch tried to address a bug when DESTDIR is '', however,
it introduces a bug when DESTDIR is not '', and fakeroot is used. This
patch does fix that, and has been tested with the arch pkg-build to
ensure it isn't regressed.
Fixes: 093a1ade4e24b7dd701a093d30a71efd669fe9c8 ("bin/install_megadrivers.py: Correctly handle DESTDIR=''")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110221
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
This lowering isn't needed for RADV because AMDGCN has two
instructions. It will be disabled for RADV in an upcoming series.
While we are at it, factorize a little bit.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Fix this build error with GCC 4.4.7.
CC nir/nir_opt_copy_prop_vars.lo
nir/nir_opt_copy_prop_vars.c: In function ‘load_element_from_ssa_entry_value’:
nir/nir_opt_copy_prop_vars.c:454: error: unknown field ‘ssa’ specified in initializer
nir/nir_opt_copy_prop_vars.c:455: error: unknown field ‘def’ specified in initializer
nir/nir_opt_copy_prop_vars.c:456: error: unknown field ‘component’ specified in initializer
nir/nir_opt_copy_prop_vars.c:456: error: extra brace group at end of initializer
nir/nir_opt_copy_prop_vars.c:456: error: (near initialization for ‘(anonymous).<anonymous>’)
nir/nir_opt_copy_prop_vars.c:456: warning: excess elements in union initializer
nir/nir_opt_copy_prop_vars.c:456: warning: (near initialization for ‘(anonymous).<anonymous>’)
Fixes: 96c32d7776 ("nir/copy_prop_vars: handle load/store of vector elements")
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109810
Reviewed-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Invoking VALGRIND_CHECK_MEM_IS_DEFINED pulls in enough code to convince
gcc to not inline __gen_uint and results in a lot of packing code ending
up out-of-line with lots of stack copying. To ameliorate this, only
insert the check inside the packer if DEBUG is defined and instead
perform the validation checking before submitting the batch to the
kernel. This should give accurate results if --track-origins=yes is
used, and failing that we can recompile in full debug mode to check on
insertion.
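The pattern described boils down to something like the following sketch
(function names are illustrative, not the actual genxml packing code):
   #include <stddef.h>
   #include <stdint.h>
   #include <valgrind/memcheck.h>

   static inline void
   pack_dword(uint32_t *dst, uint32_t value)
   {
      *dst = value;
   #ifdef DEBUG
      /* Per-field check only in debug builds, so the packers stay inlinable. */
      VALGRIND_CHECK_MEM_IS_DEFINED(dst, sizeof(*dst));
   #endif
   }

   static void
   validate_batch_before_submit(const void *batch, size_t size)
   {
      /* In release builds, one whole-batch check right before the execbuf;
       * with --track-origins=yes valgrind still points at the bad write. */
      VALGRIND_CHECK_MEM_IS_DEFINED(batch, size);
   }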
Improve drawoverhead baseline by 25% with a default build with
valgrind-dev installed (with effectively no loss of vg coverage).
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Fixes:
dEQP-GLES31.functional.image_load_store.early_fragment_tests.no_early_fragment_tests_depth
dEQP-GLES31.functional.image_load_store.early_fragment_tests.no_early_fragment_tests_stencil
dEQP-GLES31.functional.image_load_store.early_fragment_tests.no_early_fragment_tests_depth_fbo
dEQP-GLES31.functional.image_load_store.early_fragment_tests.no_early_fragment_tests_stencil_fbo
Signed-off-by: Rob Clark <robdclark@gmail.com>
There are other cases where we need to disable early-z, like image
writes. So rename to something more generic.
Signed-off-by: Rob Clark <robdclark@gmail.com>
This is required by the validation layers if we want to validate the
commands inserted by the overlay layer.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
The 'invariant' qualifier is propagated to variables which are used
to calculate other invariant variables; however, when we are matching
variable declarations we should take into account only explicitly
declared invariance, because invariance propagation is an
implementation-specific detail.
Thus a new flag is added to ir_variable_data which indicates that the
'invariant' qualifier was explicitly set in the shader.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100316
Fixes: 89b60492 ('glsl: Add a pass to propagate the "invariant" and "precise" qualifiers')
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
The swizzling was putting in a float 1.0, not an integer 1.
This fixes a lot of arb_texture_view-rendering-formats cases.
Reviewed-by: Brian Paul <brianp@vmware.com>
I don't think this really buys us anything and TG4 with cubemap arrays
falls over because sampler == 2, but otherwise works fine.
Fixes:
./bin/textureGather fs shadow r CubeArray repeat
on softpipe with ARB_gpu_shader5 enabled.
Reviewed-by: Brian Paul <brianp@vmware.com>
These didn't deal with the width == 32 case that TGSI is defined with.
Fixes piglit tests if ARB_gpu_shader5 is enabled.
Reviewed-by: Brian Paul <brianp@vmware.com>
Previously, rather than skipping code that looked like this:
   loop {
      ...
      if (cond) {
         do_work_1();
         continue;
      } else {
         break;
      }
      do_work_2();
   }
we would turn it into:
   loop {
      ...
      if (cond) {
         do_work_1();
         continue;
      } else {
         do_work_2();
         break;
      }
   }
This was clearly wrong. This change checks for this case and makes
sure we now leave it for nir_opt_dead_cf() to clean up.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
The idea was that we could skip uploading the constant-indexed uniform
data and just upload the uniforms that are variably-indexed. However,
since the VS bin and render shaders may have a different set of uniforms
used, this meant that we had to upload the UBO for each of them. The
first case is generally a fairly small impact (usually the uniform array
is the most space, other than a couple of FSes in shader-db), while the
second is a larger impact: 3DMMES2 was uploading 38k/frame of uniforms
instead of 18k.
Given that the optimization is of dubious value, has a big downside, and
is quite a bit of code, just drop it. No change in shader-db. No change
on 3DMMES2 (n=15).
We'd end up with the constant offset in the uniform stream anyway, since
they're bigger than small immediates. Avoids the extra uniforms and adds
in the shader in favor of just adding once on the CPU.
shader-db:
total instructions in shared programs: 6496865 -> 6494851 (-0.03%)
total uniforms in shared programs: 2119511 -> 2117243 (-0.11%)
I want to reuse this for encoding small constant UBO/SSBO offsets into the
uniform stream to reduce the extra uniform loads and adds for the small
constant offsets.
When building the Chrome OS Android container, we need to build copies
of mesa that don't conflict with the Android system-supplied libraries.
This adds options to create suffixed versions of EGL and GLES libraries:
libEGL.so -> libEGL${egl-lib-suffix}.so
libGLESv1_CM.so -> libGLESv1_CM${gles-lib-suffix}.so
libGLESv2.so -> libGLESv2${gles-lib-suffix}.so
This is similar to what happens when --enable-libglvnd is specified, but
without the side effects of linking against libglvnd. To avoid
unexpected clashes with the suffix appended by libglvnd, make it an
error to specify both --enable-libglvnd and --with-egl-lib-suffix.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
In some cases, we can end up with varying structs that aren't split to
their member variables. nir_compact_varyings attempted to record these
as unmovable, so it would leave them be. Unfortunately, it didn't do
it right for non-vector/scalar types. It set the mask to:
((1 << (elements * dmul)) - 1) << var->data.location_frac
where elements is the number of vector elements. For structures and
other non-vector/scalars, elements is 0...so the whole mask became 0.
This caused nir_compact_varyings to assign other varyings on top of
the structure varying's location (as it appeared to take up no space).
To combat this, we just set elements to 4 for non-vector/scalar types,
so that the entire slot gets marked as unmovable.
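A standalone illustration of the mask computation and the fix described
above (not the pass itself; the helper is made up):
   #include <stdbool.h>

   static unsigned
   unmovable_mask(unsigned elements, unsigned dmul, unsigned location_frac,
                  bool is_vector_or_scalar)
   {
      if (!is_vector_or_scalar)
         elements = 4;   /* struct etc.: mark the whole slot as unmovable */
      /* With elements == 0 this would be 0, i.e. "nothing to protect". */
      return ((1u << (elements * dmul)) - 1) << location_frac;
   }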
Fixes KHR-GL45.tessellation_shader.tessellation_control_to_tessellation_evaluation.gl_in on iris.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Fixes dEQP-GLES31.functional.shaders.opaque_type_indexing.ubo.uniform_fragment
and similar things with multiple UBOs
Signed-off-by: Rob Clark <robdclark@gmail.com>
Seems like it can only work 16b at a time. Fixes
dEQP-GLES31.functional.shaders.builtin_functions.integer.bitcount.*
TODO need to check if this limitation applies to a3xx as well.
Signed-off-by: Rob Clark <robdclark@gmail.com>
For some things that show up when we expose higher glsl versions.
TODO check blob traces to see if we have instructions for some of this?
I guess we don't but worth a check..
Signed-off-by: Rob Clark <robdclark@gmail.com>
For now it uses indirect for everything. The next step is for the
ir3_cp pass to detect the case that tex and samp idx are immediate
and convert the sam instruction back to the non .s2en variant. But
we'll do that in a following patch so we can shake out the bugs with
.s2en more easily.
Signed-off-by: Rob Clark <robdclark@gmail.com>
When we have indirect samplers, we cannot tell the max sampler
referenced. Instead just refer to the number of sampler uniforms.
Signed-off-by: Rob Clark <robdclark@gmail.com>
On a6xx+ with half-regs conflicting with full-regs, the legalize pass
needs to set appropriate sync bits, such as (sy), on writes to full regs
that conflict with half regs, and vice versa.
Signed-off-by: Rob Clark <robdclark@gmail.com>
On a6xx, half-regs conflict with full-regs. But we were only setting up
conflicts for the first class (ie. scalar, but not hvec2/hvec3/hvec4),
resulting in higher half-reg classes getting assigned to regs that
overwrite full-regs.
Noticed while trying to enable indirect-sampler (sam.s2en) which uses an
hvec2 argument to pass the sampler/tex index.
Signed-off-by: Rob Clark <robdclark@gmail.com>
GLC/SLC are boolean.
This fixes the following LLVM error when checkir is set:
Intrinsic has incorrect argument type!
void (i32, <4 x i32>, i32, i32, i32, i32, i32, i32, i32, i32)* @llvm.amdgcn.tbuffer.store.i32
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This fixes the following LLVM error when checkir is set:
Type too small for ZExt
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Users of this function expect alu to be a supported comparison
if the induction variable is not NULL. Since we attempt to
override the return values if the first limit is not a const, we
must make sure we are dealing with a valid comparison before
overriding the alu instruction.
Fixes an unreachable in inverse_comparison() with the game
Assassin's Creed Odyssey.
Fixes: 3235a942c1 ("nir: find induction/limit vars in iand instructions")
Acked-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110216
This cuts down the job runtime from ~9.5 to ~7 minutes with my personal
runner on an 8-core Ryzen 7 1700.
While this might result in slightly higher load on shared runners, it
should be OK, since libtool doesn't use the CPU cores as effectively as
e.g. ninja does; a significant part of the CPU load tends to be in bash
processes at any time, which should be relatively light on memory.
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
This increases the chance of them running earlier, which can have an
impact on the total duration of the pipeline.
v2:
* Minor style fix-up to moved comment (Eric Anholt)
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Eric Anholt <eric@anholt.net>
Fixes leaks for each glsl_type generated:
==32470== 384 bytes in 3 blocks are possibly lost in loss record 18 of 18
==32470== at 0x483880B: malloc (vg_replace_malloc.c:309)
==32470== by 0x4C43F4A: ralloc_size (ralloc.c:119)
==32470== by 0x4C44014: rzalloc_size (ralloc.c:151)
==32470== by 0x4C44258: rzalloc_array_size (ralloc.c:215)
==32470== by 0x4D38957: glsl_type::glsl_type(glsl_struct_field const*, unsigned int, char const*) (glsl_types.cpp:114)
==32470== by 0x4D3BEED: glsl_type::get_struct_instance(glsl_struct_field const*, unsigned int, char const*) (glsl_types.cpp:1146)
==32470== by 0x4D42ECC: glsl_struct_type (nir_types.cpp:501)
==32470== by 0x4CDB5A1: vtn_handle_type (spirv_to_nir.c:1269)
==32470== by 0x4CE53DD: vtn_handle_variable_or_type_instruction (spirv_to_nir.c:4018)
==32470== by 0x4CD8CFF: vtn_foreach_instruction (spirv_to_nir.c:365)
==32470== by 0x4CE5E6B: spirv_to_nir (spirv_to_nir.c:4490)
==32470== by 0x497AF10: anv_shader_compile_to_nir (anv_pipeline.c:173)
v2: move release call to vkDestroyInstance
v3: apply fix also to radv driver
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Values inside the offsets parameter of textureGatherOffsets are required to be
constants in the range of [GL_MIN_PROGRAM_TEXTURE_GATHER_OFFSET,
GL_MAX_PROGRAM_TEXTURE_GATHER_OFFSET].
As this range is never outside [-32, 31] for all existing drivers inside mesa,
we can simply store the offsets as an int8_t[4][2] array inside nir_tex_instr.
Right now only Nvidia hardware supports this in hardware, so we can turn this
on inside Nouveau for the NIR path as it is already enabled with the TGSI one.
v2: use memcpy instead of for loops
add missing bits to nir_instr_set
don't show offsets if they are all 0
v3: default offsets aren't all 0
v4: rename offsets -> tg4_offsets
rename nir_tex_instr_has_explicit_offsets -> nir_tex_instr_has_explicit_tg4_offsets
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Not sure how ptr_stride should be taken into account if at all here
v2: reorder check to avoid src walking (Jason)
v3: remove is_cast_cast checks, keep going afterwards (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
For OpenCL we never want to strip the info from the types, and it makes
type comparisons easier in later stages. We might later need a nir pass to
strip this for GLSL, but so far the only regression is the assert and Jason
said removing that is fine.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
v2: Update tracked clear color when we update the surface state.
v3: Update all aux surface states when updating the clear color.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Only if the clear color/depth is changing. In those cases, it's hard to
keep track of the current clear color, and aux state of some layers,
when predication is enabled. So simplify everything by stalling on the
few cases where we would have a fast clear color change with
predication.
v2:
- fix comment (Ken)
- explicitly check for predicate state after resolving it (Ken)
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This function can be used to stall on the CPU and resolve the predicate
for the conditional render. It will convert ice->state.predicate from
IRIS_PREDICATE_STATE_USE_BIT to either IRIS_PREDICATE_STATE_RENDER or
IRIS_PREDICATE_STATE_DONT_RENDER, depending on the result of the query.
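In rough pseudo-C (the enum values are the ones named above, everything
else here is a stand-in, not the driver's actual code), the behaviour is:
   #include <stdbool.h>

   enum iris_predicate_state {
      IRIS_PREDICATE_STATE_RENDER,
      IRIS_PREDICATE_STATE_DONT_RENDER,
      IRIS_PREDICATE_STATE_USE_BIT,
   };

   struct fake_ctx { enum iris_predicate_state predicate; };

   static void
   resolve_conditional_render(struct fake_ctx *ice, bool (*query_passed)(void))
   {
      if (ice->predicate != IRIS_PREDICATE_STATE_USE_BIT)
         return;
      /* Stall on the CPU for the query result, then store the resolved state
       * so later draws no longer need the GPU-side predicate. */
      ice->predicate = query_passed() ? IRIS_PREDICATE_STATE_RENDER
                                      : IRIS_PREDICATE_STATE_DONT_RENDER;
   }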
v2:
- return void (Ken)
- update the stored condition (Ken)
- simplify the code leading to resolve the predicate (Ken)
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
If all the restrictions are satisfied, do a fast clear instead of
regular clear.
v2:
- add perf_debug() when we can't fast clear (Ken)
- improve comment: s/miptree/resource/ (Ken)
- use swizzle_color_value from blorp (Ken)
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
It needs to be converted to a value that can be used by ISL (and our
hardware SURFACE_STATE structure).
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Check and do a fast clear instead of a regular clear on depth buffers.
v3:
- remove switch with some cases that we shouldn't worry about (Ken)
- more parens into the has_hiz check (Ken)
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Take the clear depth into account when IRIS_DIRTY_DEPTH_BUFFER is marked
as dirty.
Also update the blorp surface clear color.
v2: Use a single if (zres && zres->aux.bo) (Ken).
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Also store clear color in the iris_resource.
Always allocate clear color state buffer.
v2:
- Make clear_color_offset be 64 bits (Ken).
- Simplify the logic to decide when to memset the aux buffer (Ken).
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Does what it says on the tin.
The per stage time is only an approximation due to linking and
the Vega merged stages.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Currently if destdir is set to '' then the resulting libdir will have
its first character replaced by / instead of / being prepended to the
string. This was the result of ensuring that DESTDIR wouldn't be
ignored if libdir was absolute. Since the only case in which meson allows
the libdir to be absolute is if the prefix is /, this won't be a
problem.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110211
Fixes: ae3f45c11e ("bin/install_megadrivers: fix DESTDIR and -D*-path")
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Fixes dEQP-VK.binding_model.buffer_device_address.* and
dEQP-VK.ssbo.phys.layout* Vulkan CTS tests.
v2: set val->type->stride in the section below (Jason)
v3: restore val->type->type to original place (Jason)
Fixes: d0ba326f23 ("nir/spirv: support physical pointers")
CC: Karol Herbst <kherbst@redhat.com>
CC: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
I noticed we crashed piglit arb_texture_view-rendering-formats
when run on softpipe.
This fixes the clear tiles to use the surface format not the
underlying storage format.
This fixes a bunch of srgb piglits as well.
Fixes: 396ac41fc2 (softpipe: add integer support)
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
I added new barrier bits in 220c1dce1e
and made most drivers skip them. I thought nvc0 was already skipping
those but missed the else case here, which does something. So make it
explicitly skip like I did everywhere else.
Thanks to Ilia for catching this.
Fixes: 220c1dce1e gallium: Add PIPE_BARRIER_UPDATE_BUFFER and UPDATE_TEXTURE bits.
An extension reporting cache hit in the user supplied pipeline cache
as well as timing information for creating the pipelines & stages.
v2: Don't consider no cache for cache hits (Jason)
Rework duration accumulation (Jason)
v3: Fold feedback creation writing into pipeline compile functions (Jason/Lionel)
v4: Get cache hit information from anv_device_search_for_kernel() (Jason)
Only set cache hit from the whole pipeline if all stages also have that bit (Lionel)
v5: Always user_cache_hit in anv_device_search_for_kernel() (Jason)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
One line left out of the conversion to ir3 ssbo intrinsics on a6xx.
Fixes: 2e4525883f ir3/compiler: Enable lower_io_offsets pass and handle new SSBO intrinsics
Signed-off-by: Rob Clark <robdclark@gmail.com>
We initially set this lower because we didn't have SIMD32 support yet
but we've supported SIMD32 for quite some time now. We should bump it
up to the real limit.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
The mask should be accumulated if two calls are used for
binding two buffers at different indexes. Otherwise, the
driver only accounts for the last one.
Noticed while glancing at this code.
Cc: 18.3 19.0 <mesa-stable@lists.freedesktop.org>
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The glMemoryBarrier() function makes shader memory stores ordered with
respect to things specified by the given bits. Until now, st/mesa has
ignored GL_TEXTURE_UPDATE_BARRIER_BIT and GL_BUFFER_UPDATE_BARRIER_BIT,
saying that drivers should implicitly perform the needed flushing.
This seems like a pretty big assumption to make. Instead, this commit
opts to translate them to new PIPE_BARRIER bits, and adjusts existing
drivers to continue ignoring them (preserving the current behavior).
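A sketch of the kind of translation this adds (assuming the GL headers and
the PIPE_BARRIER_UPDATE_BUFFER/UPDATE_TEXTURE bits added in the gallium
change referenced earlier; only these two bits are shown and the helper
name is made up):
   static unsigned
   translate_update_barrier_bits(GLbitfield barriers)
   {
      unsigned flags = 0;
      if (barriers & GL_TEXTURE_UPDATE_BARRIER_BIT)
         flags |= PIPE_BARRIER_UPDATE_TEXTURE;
      if (barriers & GL_BUFFER_UPDATE_BARRIER_BIT)
         flags |= PIPE_BARRIER_UPDATE_BUFFER;
      /* ...all other GL barrier bits translated as before... */
      return flags;
   }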
The i965 driver performs actions on these memory barriers. Shader
memory stores go through a "data cache" which is separate from the
render cache and other read caches (like the texture cache). All
memory barriers need to flush the data cache (to ensure shader memory
stores are visible), and possibly invalidate read caches (to ensure
stale data is no longer visible). The driver implicitly flushes for
most caches, but not for data cache, since ARB_shader_image_load_store
introduced MemoryBarrier() precisely to order these explicitly.
I would like to follow i965's approach in iris, flushing the data cache
on any MemoryBarrier() call, so I need st/mesa to actually call the
pipe->memory_barrier() callback.
Fixes KHR-GL45.shader_image_load_store.advanced-sync-textureUpdate
and Piglit's spec/arb_shader_image_load_store/host-mem-barrier on
the iris driver.
Roland said this looks reasonable to him.
Reviewed-by: Eric Anholt <eric@anholt.net>
Handle buffers whose width is not aligned to 16px by padding the stride
and storing it accordingly.
This does not reject imports for images whose stride is not sufficiently
aligned.
v2: make sure bo->stride is set on imported buffers, and add missing
variable definition. (Tomeu)
Tested-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
With autotools this close to being not supported anymore, let's not
waste half of the CI cycles on it. The default build will catch most
issues, and the rest can be tested by the old Travis.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This allows DRI3 to pick between UIF and raster according to whether we're
pageflipping or not and whether the pageflipping display can do UIF,
avoiding copies for the windowed/composited case that previously was
forced to linear.
Improves windowed glmark2 -b build:use-vbo=false performance by 30.7783%
+/- 13.1719% (n=3)
We ask the other side to make a buffer with the right number of pages, and
then just store the UIF in it. This avoids an extra silent copy of the
buffer from linear to UIF if it gets used for texturing (X11 copy-based
swapbuffers, GL compositors).
This reverts commit 1aa5738e66.
This patch incorrectly assumed that for SSOs no inner interface
matching check was needed.
From the ARB_separate_shader_objects spec v.25:
" With separable program objects, interfaces between shader stages
may involve the outputs from one program object and the inputs
from a second program object. For such interfaces, it is not
possible to detect mismatches at link time, because the programs
are linked separately. When each such program is linked, all
inputs or outputs interfacing with another program stage are
treated as active. The linker will generate an executable that
assumes the presence of a compatible program on the other side of
the interface. If a mismatch between programs occurs, no GL error
will be generated, but some or all of the inputs on the interface
will be undefined."
This completes the fix from commit:
3be05dd267 ("glsl/linker: don't fail non static used inputs without matching outputs")
Fixes: 1aa5738e66 ("glsl: relax input->output validation for SSO programs")
Cc: Tapani Pälli <tapani.palli@intel.com>
Cc: Timothy Arceri <tarceri@itsqueeze.com>
Cc: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Cc: Ian Romanick <ian.d.romanick@intel.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
The current implementation uses a complicated calculation which relies on
an implicit conversion to check the integral part of 2 division
results.
However, the calculation actually checks that the xfb_offset is
smaller than, or a multiple of, the xfb_stride. For example, while this is
expected to fail, it actually succeeds:
"
...
layout(xfb_buffer = 2, xfb_stride = 12) out block3 {
layout(xfb_offset = 0) vec3 c;
layout(xfb_offset = 12) vec3 d; // ERROR, requires stride of 24
};
...
"
Fixes: 2fab85aaea ("glsl: add xfb_stride link time validation")
Cc: Timothy Arceri <tarceri@itsqueeze.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
If there is no Static Use of an input variable, the linker shouldn't
fail whenever there is no defined matching output variable in the
previous stage.
From page 47 (page 51 of the PDF) of the GLSL 4.60 v.5 spec:
" Only the input variables that are statically read need to be
written by the previous stage; it is allowed to have superfluous
declarations of input variables."
Now, we complete this exception whenever the input variable has an
explicit location. Previously, 18004c338f ("glsl: fail when a
shader's input var has not an equivalent out var in previous") took
care of the cases in which the input variable didn't have an explicit
location.
v2: do the location based interface matching check regardless on
whether it is a separable program or not (Ilia).
Fixes: 1aa5738e66 ("glsl: relax input->output validation for SSO programs")
Cc: Timothy Arceri <tarceri@itsqueeze.com>
Cc: Iago Toral Quiroga <itoral@igalia.com>
Cc: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Cc: Tapani Pälli <tapani.palli@intel.com>
Cc: Ian Romanick <ian.d.romanick@intel.com>
Cc: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Outputs are always validated when they have explicit locations, and we
were trusting that validation to catch similar problems with the inputs
since, in case of having undefined outputs for existing inputs, we
would already be reporting a linker error.
However, consider this case:
" Shader stage n:
---------------
...
layout(location = 0) out float a;
...
Shader stage n+1:
-----------------
...
layout(location = 0) in float b;
layout(location = 0) in float c;
...
"
Currently, this won't report a linker error even though location
aliasing is happening for the inputs.
Therefore, we also need to validate the inputs independently from the
outcome of the outputs validation.
Cc: Timothy Arceri <tarceri@itsqueeze.com>
Cc: Iago Toral Quiroga <itoral@igalia.com>
Cc: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
From page 62 (page 68 of the PDF) of the GLSL 4.50 v.7 spec:
" A dvec3 or dvec4 can only be declared without specifying a
component."
Therefore, using the "component" qualifier with a dvec3 or dvec4
should result in a compiling error.
v2: enhance the error message (Timothy).
Fixes: 94438578d2 ("glsl: validate and store component layout qualifier in GLSL IR")
Cc: Timothy Arceri <tarceri@itsqueeze.com>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
This reverts commit db57db5317. When
building IR, nothing is really immutable and, since C has no concept of
constness propagating beyond the first pointer, we have to be very
careful with how we use it. To just throw const into a function like
this is a lie.
Instead, we should just drop the unneeded const in spirv_to_nir which
this commit does along with the revert.
`clang` has a different set of warnings and errors than `gcc`, so it's
useful to do at least a generic pass over Mesa with it.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Fix this check so that we can get a HiZ aux buffer for multisampled
surfaces as well. Also make sure we don't try to emit a sampler view
surface state for multisampled depth surfaces with HiZ enabled, as
the sampler can't use HiZ for multisampled buffers and isl would assert.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The idea here is to generate an entry point stub function wrapping around the
actual kernel function and turn all parameters into shader inputs with byte
addressing instead of vec4.
This gives us several advantages:
1. calling kernel functions doesn't differ from calling any other function
2. CL inputs match uniforms in most ways and we can just take advantage of most
of nir_lower_io
v2: move code into a separate function
v3: verify the entry point got a name
fix minor typo
v4: make vtn_emit_kernel_entry_point_wrapper take the old entry point as an arg
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Local memory is too small to require 64 bit pointers, so cast the array index
to a 32 bit value to save on 64 bit operations.
Signed-off-by: Karol Herbst <kherbst@redhat.com>
We need this for OpenCL kernels because we have to apply C rules for alignment
and padding inside structs and for this we also have to know if a struct is
packed or not.
v2: fix for kernel params
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
There are two stages to varying assembly in the command stream: creating
the varying buffers in the command stream, and creating the varying meta
descriptors (also in the command stream) linked to the aforementioned
buffers. The previous code for this was ad hoc and brittle, making some
invalid assumptions causing unmaintainable workarounds to pile up across
the driver (both compiler and command stream side).
This patch completely rewrites the varying assembly code. There's a
trivial performance penalty (we now memcpy the varying meta to the
command stream on draw, rather than on compile). That said, the
improvement in flexibility and clarity is well-worth it.
The motivator for these changes was support for gl_PointCoord (and
eventually point sprites for legacy GL), which was impossible to
implement with the old varying assembly code. With the new refactor,
it's super easy; support for gl_PointCoord is included with this patch.
All in all, I'm quite happy with how this turned out.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
In addition to fixing actual primconvert bugs, this prevents an infinite
loop when trying to draw POINTS.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
The if is actually returning true on success, enabling fast clears, so we
need to have the test succeed when the iview dimensions are right.
Fixes: d5400a5ec2 "radv: provide a helper for comparing an image extents."
Reviewed-by: Dave Airlie <airlied@redhat.com>
The Vulkan spec doesn't explicitly forbid zero size transform
feedback buffers.
Having a zero size xfb caused a SurfaceSize overflow and
triggered an assert in debug builds.
The only way to have a zero size SO_BUFFER is to disable the
SO_BUFFER, as stated in the hardware spec.
From SKL PRM, Vol 2a, "3DSTATE_SO_BUFFER":
"If set, stream output to SO Buffer is enabled,
if 3DSTATE_STREAMOUT::SO Function ENABLE is also enabled.
If clear, the SO Buffer is considered "not bound" and effectively
treated as a zero- length buffer for the purposes of SO output and
overflow detection. If an enabled stream's Stream to Buffer Selects
includes this buffer it is by definition an overflow condition.
That stream will cause no writes to occur,
and only SO_PRIM_STORAGE_NEEDED[<stream>] will increment."
Fixes: 36ee2fd61c "anv: Implement the basic form of VK_EXT_transform_feedback"
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
VkFullScreenExclusiveEXT comes from the win32 header. Mostly took
the logic from the entrypoint scripts:
1) If there is an ext that has it in the requires and has a platform,
take the guard for that platform.
2) Otherwise assume it is from the core headers.
Acked-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
In commit 530927d3f6 ("vulkan/util: generate instance/device
dispatch tables") we started generating instance dispatch tables, some
of which (like wayland) require external headers.
This commit moves the dependencies up one level so that they apply to the
whole vulkan directory. We use them for both the util & overlay layer.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 530927d3f6 ("vulkan/util: generate instance/device dispatch tables")
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Some of the header file locations are changed between Android
versions (when VNDK is used); the patch makes sure we get all the
required headers.
v2: cleanups, put SDK version checks in all places (Tapani)
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Signed-off-by: Chen Lin Z <lin.z.chen@intel.com>
Tested-by: Clayton Craft <clayton.a.craft@intel.com>
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
For VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL and
VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL we do not care about
the queue mask because
1) using these is only allowed on the gfx queue
2) transitions for these are only allowed on the gfx queue.
This enables some fast clears for Doom, which uses
VK_SHARING_MODE_CONCURRENT.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
This was used to avoid freeing a sampler view which was created by a
context that was already deleted. But the state tracker does not
allow that.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Reviewed-by: Neha Bhende <bhenden@vmware.com>
Reviewed-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
Reviewed-By: Jose Fonseca <jfonseca@vmware.com>
This function was used in the past to avoid deleting a sampler view
for a context that no longer exists. But the Mesa state tracker
ensures that cannot happen. Use the standard refcounting function
instead.
Also, remove the code which checked for context mis-matches in
svga_sampler_view_destroy(). It's no longer needed since implementing
the zombie sampler view code in the state tracker.
Testing Done: google chrome, variety of GL demos/games
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Reviewed-by: Neha Bhende <bhenden@vmware.com>
Reviewed-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
Reviewed-By: Jose Fonseca <jfonseca@vmware.com>
In all instances here we can replace pipe_sampler_view_release(pipe,
view) with pipe_sampler_view_reference(view, NULL) because the views
in question are private to the state tracker context. So there's no
danger of freeing a sampler view with the wrong context.
Testing done: google chrome, misc GL demos, games
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Reviewed-by: Neha Bhende <bhenden@vmware.com>
Reviewed-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
Reviewed-By: Jose Fonseca <jfonseca@vmware.com>
As with the preceding patch for sampler views, this patch does
basically the same thing but for shaders. However, reference counting
isn't needed here (instead of calling cso_delete_XXX_shader() we call
st_save_zombie_shader()).
The Redway3D Watch is one app/demo that needs this change. Otherwise,
the vmwgfx driver generates an error about trying to destroy a shader
ID that doesn't exist in the context.
Note that if PIPE_CAP_SHAREABLE_SHADERS = TRUE, then we can use/delete
any shader with any context and this mechanism is not used.
Tested with: google-chrome, google earth, Redway3D Watch/Turbine demos
and a few Linux games.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Reviewed-by: Neha Bhende <bhenden@vmware.com>
Reviewed-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
Reviewed-By: Jose Fonseca <jfonseca@vmware.com>
When st_texture_release_all_sampler_views() is called the texture may
have sampler views belonging to several contexts. If we unreference a
sampler view and its refcount hits zero, we need to be sure to destroy
the sampler view with the same context which created it.
This was not the case with the previous code which used
pipe_sampler_view_release(). That function could end up freeing a
sampler view with a context different than the one which created it.
In the case of the VMware svga driver, we detected this but leaked the
sampler view. This led to a crash with google-chrome when the kernel
module had too many sampler views. VMware bug 2274734.
Alternately, if we try to delete a sampler view with the correct
context, we may be "reaching into" a context which is active on
another thread. That's not safe.
To fix these issues this patch adds a per-context list of "zombie"
sampler views. These are views which are to be freed at some point
when the context is active. Other contexts may safely add sampler
views to the zombie list at any time (it's mutex protected). This
avoids the context/view ownership mix-ups we had before.
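For illustration, a rough sketch of the idea (hypothetical names, not the
actual st/mesa code):

#include <stdlib.h>
#include <pthread.h>
#include "pipe/p_context.h"
#include "pipe/p_state.h"

/* Hypothetical sketch; the real code uses different names and the
 * driver's own list/mutex utilities. */
struct zombie_sampler_view_node {
   struct pipe_sampler_view *view;
   struct zombie_sampler_view_node *next;
};

struct zombie_sampler_view_list {
   pthread_mutex_t mutex;                      /* any context may append */
   struct zombie_sampler_view_node *head;
};

/* Callable from any thread/context: defer destruction to the owning context. */
static void
enqueue_zombie_view(struct zombie_sampler_view_list *list,
                    struct pipe_sampler_view *view)
{
   struct zombie_sampler_view_node *node = calloc(1, sizeof(*node));
   node->view = view;
   pthread_mutex_lock(&list->mutex);
   node->next = list->head;
   list->head = node;
   pthread_mutex_unlock(&list->mutex);
}

/* Called only when the owning context is current: now it is safe to destroy
 * each view with the context that created it. */
static void
free_zombie_views(struct pipe_context *pipe,
                  struct zombie_sampler_view_list *list)
{
   pthread_mutex_lock(&list->mutex);
   struct zombie_sampler_view_node *node = list->head;
   list->head = NULL;
   pthread_mutex_unlock(&list->mutex);

   while (node) {
      struct zombie_sampler_view_node *next = node->next;
      pipe->sampler_view_destroy(pipe, node->view);
      free(node);
      node = next;
   }
}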
Tested with: google-chrome, google earth, Redway3D Watch/Turbine demos
and a few Linux games. If anyone can recommend some other multi-threaded,
multi-context GL apps to test, please let me know.
v2: avoid potential race issue by always adding sampler views to the
zombie list if the view's context doesn't match the current context,
ignoring the refcount.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Reviewed-by: Neha Bhende <bhenden@vmware.com>
Reviewed-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
Reviewed-By: Jose Fonseca <jfonseca@vmware.com>
Split up the "Environment Variables" section into "Compiler Options"
and "Compiler Specification". I think this makes the information
easier to find and understand.
Add the necessary build rules for android, to avoid building errors.
Fixes: f014ae3 ("nouveau: add support for nir")
Signed-off-by: Mauro Rossi <issor.oruam@gmail.com>
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Without this the build breaks with:
In file included from ../src/vulkan/util/vk_util.h:32,
from ../src/vulkan/util/vk_util.c:28:
../include/vulkan/vulkan.h:51:10: fatal error: wayland-client.h: No such file or
directory
#include <wayland-client.h>
^~~~~~~~~~~~~~~~~~
compilation terminated.
The above misses the include directory for wayland:
-I/usr/include/wayland
Signed-off-by: Tobias Klausmann <tobias.johannes.klausmann@mni.thm.de>
Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
v4: use smarter getIndirect helper
use new getSlotAddress helper
v5: use loadFrom helper
v8: don't require C++11 features
Signed-off-by: Karol Herbst <kherbst@redhat.com>
v3: fix compiler warnings
v4: use loadFrom helper
v5: fix signed min/max
v6: set tex mask
add support for indirect image access
set cache mode
v7: make compatible with 884d27bcf6
rework the whole deref thing to prepare for bindless
v8: port to deref instructions
don't require C++11 features
v9: implement MS images
rebase on master (image modifiers)
fix regressions due to variable src compnents
replace '(*it).' with 'it->'
convert to C++ style comments
Signed-off-by: Karol Herbst <kherbst@redhat.com>
v4: use smarter getIndirect helper
use new getSlotAddress helper
use loadFrom helper
v8: don't require C++11 features
Signed-off-by: Karol Herbst <kherbst@redhat.com>
We store those arrays in local memory and reserve some space for each of the
arrays. With NIR we could store those arrays packed, but we don't do that yet
as it causes MemoryOpt to generate unaligned memory accesses.
v3: use fixed size vec4 arrays until we fix MemoryOpt
v4: fix for 64 bit types
v5: use loadFrom helper
v8: don't require C++11 features
v9: convert to C++ style comments
Signed-off-by: Karol Herbst <kherbst@redhat.com>
v2: add vote_eq support
use the new subop intrinsic helper
add ballot
v3: add read_(first_)invocation
v8: handle vectorized intrinsics
don't require C++11 features
v9: lower_subgroups to 32 bit (produces less instructions)
use getSSA and getScratch instead of new_LValue
Signed-off-by: Karol Herbst <kherbst@redhat.com>
a lot of those fields are not valid for a lot of tex ops. Not quite sure if
it's worth the effort to check for those or just keep it like that. It seems
to kind of work.
v2: reworked offset handling
add tex support with indirect R/S arguments
handle GLSL_SAMPLER_DIM_EXTERNAL
drop reference in convert(glsl_sampler_dim&, bool, bool)
fix tg4 component selection
v5: fill up coords args with scratch values if coords provided is less than TexTarget.getArgCount()
v7: prepare for bindless_texture support
v8: don't require C++11 features
v9: convert to C++ style comments
fix txf with a uniform constant 0 lod
Signed-off-by: Karol Herbst <kherbst@redhat.com>
v2: support more sys values
fixed a bug where for multi component reads all values ended up in x
v3: add load_patch_vertices_in
v4: add subgroup stuff
v5: add helper invocation
v6: fix loading 64 bit system values
v8: don't require C++11 features
v9: convert to C++ style comments
Signed-off-by: Karol Herbst <kherbst@redhat.com>
v3: and load_output
v4: use smarter getIndirect helper
use new getSlotAddress helper
v5: don't use const_offset directly
fix for indirects
v6: add support for interpolateAt
v7: fix compiler warnings
add load_barycentric_sample
handle load_output for fragment shaders
v8: set info->prop.fp.readsSampleLocations for at_sample interpolation
don't require C++11 features
v9: convert to C++ style comments
Signed-off-by: Karol Herbst <kherbst@redhat.com>
v3: add workaround for RA issues
indirects have to be multiplied by 0x10
fix indirect access
v4: use smarter getIndirect helper
use storeTo helper
v5: don't use const_offset directly
v8: don't require C++11 features
v9: convert to C++ style comments
handle clip planes correctly
Signed-off-by: Karol Herbst <kherbst@redhat.com>
v2: use new getIndirect helper
fixes symbols for 64 bit types
v4: use smarter getIndirect helper
simplify address calculation
use loadFrom helper
v8: don't require C++11 features
Signed-off-by: Karol Herbst <kherbst@redhat.com>
v2: user bitfield_insert instead of bfi
rework switch helper macros
remove some lowering code (LoweringHelper is now used for this)
v3: add pack_half_2x16_split
add unpack_half_2x16_split_x/y
v5: replace first argument with nullptr in loadImm calls
prefer getSSA over getScratch
v8: fix setting precise modifier for first instruction inside a block
add guard in case no instruction gets inserted into an empty block
don't require C++11 features
v9: use CC_NE for integer compares
convert to C++ style comments
fix b2f for doubles
remove macros around nir ops to make it easier to grep them
add handling for fpow
Signed-off-by: Karol Herbst <kherbst@redhat.com>
v2: parse a few more fields
v3: add special handling for GL_ISOLINES
v8: set info->prop.fp.readsSampleLocations
don't require C++11 features
v9: replace '(*it).' with 'it->'
convert to C++ style comments
Signed-off-by: Karol Herbst <kherbst@redhat.com>
v2: add support for geometry shaders
set idx
add some missing mappings
fix for 64bit inputs/outputs
fix up some FP color output index messup
parse centroid flag
v3: fix arrays in outputs as well
fix input/output size calculation for tessellation shaders
v4: add getSlotAddress helper
fix for 64 bit typed inputs
v5: change getSlotAddress interface for easier use
fix sample inputs
fix slot counting for mat
v7: fix driver_location of images
v8: don't require C++11 features
v9: convert to C++ style comments
support VERT_ATTRIB_POINT_SIZE
add more error checking to slots
Signed-off-by: Karol Herbst <kherbst@redhat.com>
v4: treat imul as unsigned
v5: remove pointless !!
v7: inot is unsigned as well
v8: don't require C++11 features
v9: convert to C++ style comments
improve formatting
print error in all cases where codegen doesn't support a given type
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Acked-by: Pierre Moreau <pierre.morrow@free.fr>
v2: add helper function for indirects
v4: add new getIndirect overload for easier use
v5: use getSSA for ssa values
we can just create the values for unassigned registers in getSrc
v6: always create at least 32 bit values
v8: don't require C++11 features
v9: include unordered_map on supported stdlibs
replace '(*it).' with 'it->'
Signed-off-by: Karol Herbst <kherbst@redhat.com>
v2: add constant_folding
v6: print non final NIR only for verbose debugging
v8: add passes we will need for OpenCL compute shaders
v9: move type_size into anonymous namespace
convert to C++ style comments
lower bools to int32
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Acked-by: Pierre Moreau <pierre.morrow@free.fr>
v9: rename variable to driver_flags
use constants for shader cache flags
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Pierre Moreau <pierre.morrow@free.fr>
not all those nir options are actually required; it just made the work a
little easier.
v2: fix asserts
parse compute shaders
don't lower bitfield_insert
v3: fix memory leak
v4: don't lower fmod32
v5: set lower_all_io_to_temps to false
fix memory leak because we take over ownership of the nir shader
merge: use the lowering helper
v6: include TGSI debug header for proper assert call
add nv50 support
v7: fix Automake build
v8: free shader only for the set shader type
v9: check for IR type inside get_compiler_options
squash "nouveau: add env var to make nir default"
fix memory leak when creating compute shaders
use debug_get_bool_option as it is available in non debug builds
return failure if unsupported IR is encountered
don't lower fpow in nir
lower int 64 divmod inside nir to prevent crashes
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Pierre Moreau <pierre.morrow@free.fr>
if we start supporting multiple input IRs we might want to move lowering code
into a common place and keep the initial translation simpler.
This will also allow us to react to ISA changes more easily.
v5: also handle SAT
v6: rename type variables
fixed lowering of NEG
add lowering of NOT
v8: don't require C++11 features
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Pierre Moreau <pierre.morrow@free.fr>
No autotools build to care about.
The half-baked turnip param is kind of ugly, but it felt like a waste
to define more variables for it now.
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Avoids
src/freedreno/vulkan/meson.build:42:0: ERROR: Tried to create target "vk_format_table.c", but a target of that name already exists.
when building both radv and turnip.
Fixes: 26380b3a9f "turnip: Add driver skeleton (v2)"
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Apparently GCC does not consider static const variables to be
integer constants, and hence the array size and the static assert
result in compile failures.
Fixes: 4b9f967cd1 "turnip: add a more complete format table"
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Kristian H. Kristensen <hoegsberg@chromium.org>
This fixes a serious performance issue with DXVK:
https://github.com/doitsujin/dxvk/issues/937
This was caused by a recent change intended to improve performance on RADV
which back-fired on ANV and killed performance for some apps:
e5a06d3f4a
Throwing in this bit of lowering lets us come along and CSE those UBO
loads (or copy-prop for SSBO load) and get one load where we previously
would have gotten several.
VkPipeline-db results on Kaby Lake:
total instructions in shared programs: 5115361 -> 5073185 (-0.82%)
instructions in affected programs: 1754333 -> 1712157 (-2.40%)
helped: 5331
HURT: 63
total cycles in shared programs: 2544501169 -> 2481144545 (-2.49%)
cycles in affected programs: 2531058653 -> 2467702029 (-2.50%)
helped: 9202
HURT: 4323
total loops in shared programs: 3340 -> 3331 (-0.27%)
loops in affected programs: 9 -> 0
helped: 9
HURT: 0
total spills in shared programs: 3246 -> 3053 (-5.95%)
spills in affected programs: 384 -> 191 (-50.26%)
helped: 10
HURT: 5
total fills in shared programs: 4626 -> 4452 (-3.76%)
fills in affected programs: 439 -> 265 (-39.64%)
helped: 10
HURT: 5
All of the shaders with hurt spilling were in Rise of the Tomb Raider
which also had shaders solidly helped in the spilling department. Not
shown in those results (because I've not had success dumping the
shaders) is Witcher 3 where this reduces spilling and improves over-all
perf by around 20-25%. There were no shader-db changes. Apparently,
this just isn't a pattern that happens in OpenGL.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Cc: "19.0" mesa-stable@lists.freedesktop.org
This pass was originally written for lowering TCS output reads and
writes but it is also applicable just about anything including UBOs,
SSBOs, and shared variables.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This one's a tiny bit better than what we had in spirv_to_nir because it
emits a binary tree rather than a linear walk. It also doesn't leave
around unneeded bcsel instructions for a constant index and returns an
undef for constant OOB access.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Something that we didn't hit earlier because of the extra shr.b
Signed-off-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Increase shader_params size to pass sampler data to
compute shader during weave de-interlace.
Signed-off-by: James Zhu <James.Zhu@amd.com>
Acked-by: Leo Liu <leo.liu@amd.com>
Tested-by: Bruno Milreu <bmilreu@gmail.com>
The OpenMAX state tracker will use this.
RadeonSI is adapted to use pipe_grid_info::last_block instead of its
internal state.
Acked-by: Leo Liu <leo.liu@amd.com>
When varyings were added we moved to using dynamically allocated
pointers, instead of allocating just one block for everything. That
breaks some assumptions of some vulkan drivers (like anv) that make
serialization and copying easier. And at the same time, varyings are
not needed for vulkan.
So this commit moves them out. Although it seems like a little overkill,
fixing the anv side would require similar, or more, changes, so in
the end it is about deciding where we want to put our effort.
v2: (from Jason review)
* Don't use a temp variable on the _create methods, just return
result of rzalloc_size
* Wrap some lines too long.
Fixes: cf0b2ad486 ("nir/xfb: adding varyings on nir_xfb_info and gather_info")
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The maximum value primitive restart index is different for each index data
type. Use the appropriate fixed restart index value.
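For reference, the standard fixed restart index per index size looks like
this (the helper name is hypothetical):

#include <stdint.h>

/* Fixed primitive restart index for each index type
 * (GL_UNSIGNED_BYTE/SHORT/INT); helper name is hypothetical. */
static inline uint32_t
fixed_restart_index(unsigned index_size_in_bytes)
{
   switch (index_size_in_bytes) {
   case 1:  return 0xff;        /* GL_UNSIGNED_BYTE  */
   case 2:  return 0xffff;      /* GL_UNSIGNED_SHORT */
   default: return 0xffffffff;  /* GL_UNSIGNED_INT   */
   }
}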
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
The standard requires that the primitive restart comparison happens before
the basevertex value is added. Do this now, and drop in a reference to the
standard explaining why this happens at this place.
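A sketch of the required ordering (hypothetical helpers, not the actual
Mesa path):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helpers standing in for the real array-element emit path. */
extern uint32_t read_index(const void *indices, unsigned i);
extern void restart_primitive(void);
extern void emit_vertex(uint32_t index);

static void
draw_elements(const void *indices, unsigned count, bool restart_enabled,
              uint32_t restart_index, int basevertex)
{
   for (unsigned i = 0; i < count; i++) {
      uint32_t index = read_index(indices, i);      /* raw index buffer value */
      if (restart_enabled && index == restart_index) {
         restart_primitive();                       /* compare BEFORE adding basevertex */
         continue;
      }
      emit_vertex((uint32_t)(index + basevertex));  /* basevertex applied afterwards */
   }
}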
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
Since mapping and unmapping the buffer objects in a VAO is handled
directly from the VAO, this part of the _NEW_ARRAY state is no longer
used. So remove this part of array element state.
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
Due to the use of bitmaps, the _mesa_vao_{,un}map_arrays functions
should provide comparable runtime efficiency to the currently used
_ae_{,un}map_vbos functions. So use these functions and enable
further cleanup.
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
Make use of the newly factored out _mesa_array_element function
in display list compilation. For now that duplicates out the
primitive restart logic. But that turns out to need a fix in
display list handling anyhow.
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
The factored out function handles emitting the vertex attributes
at the given index. The now public accessible function gets used
in the following patches.
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
Provide a set of functions that maps or unmaps all VBOs held
in a VAO. The functions will be used in the following patches.
v2: Update comments.
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
Instead, we do UBO and SSBO deref lowering in NIR after we've given it a
chance to optimize SSBO access:
Shader-db results on Kaby Lake:
total instructions in shared programs: 15235775 -> 15235484 (<.01%)
instructions in affected programs: 14992 -> 14701 (-1.94%)
helped: 19
HURT: 20
total cycles in shared programs: 339220331 -> 339027307 (-0.06%)
cycles in affected programs: 79831981 -> 79638957 (-0.24%)
helped: 540
HURT: 602
total loops in shared programs: 4402 -> 4348 (-1.23%)
loops in affected programs: 186 -> 132 (-29.03%)
helped: 27
HURT: 0
total spills in shared programs: 23261 -> 23234 (-0.12%)
spills in affected programs: 38 -> 11 (-71.05%)
helped: 1
HURT: 0
total fills in shared programs: 31442 -> 31371 (-0.23%)
fills in affected programs: 98 -> 27 (-72.45%)
helped: 1
HURT: 0
LOST: 12
GAINED: 12
Most of the help and hurt in instruction counts was just churn caused by
re-ordering of optimizations and the fact that the NIR deref lowering
code is emitting slightly different instructions. Nothing was hurt by
more than three instructions and most things weren't helped by more than
four. The primary exception to this is one Car Chase shader:
shaders/non-free/gfxbench4/carchase/341.shader_test CS SIMD32: 1144 -> 821 (-28.23%)
There is also one compute shader in Manhattan 3.1 and a fragment shader
in the UE4 Shooter Game demo that now get a loop partially unrolled.
Those showed up in the results as hurt instructions but were manually
removed to get the results above.
The lost/gained was a dozen Car Chase shaders that went from SIMD8 to
SIMD16 thanks to improved register pressure:
shaders/non-free/gfxbench4/carchase/366.shader_test CS
shaders/non-free/gfxbench4/carchase/368.shader_test CS
shaders/non-free/gfxbench4/carchase/370.shader_test CS
shaders/non-free/gfxbench4/carchase/372.shader_test CS
shaders/non-free/gfxbench4/carchase/376.shader_test CS
shaders/non-free/gfxbench4/carchase/378.shader_test CS
shaders/non-free/gfxbench4/carchase/380.shader_test CS
shaders/non-free/gfxbench4/carchase/382.shader_test CS
shaders/non-free/gfxbench4/carchase/384.shader_test CS
shaders/non-free/gfxbench4/carchase/388.shader_test CS
shaders/non-free/gfxbench4/carchase/4.shader_test CS
shaders/non-free/gfxbench4/carchase/6.shader_test CS
Given how much it appeared to be improved, I ran Car Chase on my laptop.
Unfortunately, I wasn't able to see any measurable improvement. It
might be helped by 1-2% but it's in the noise. It does render correctly
as far as I can tell so the improvement is legitimate.
All of the loops that got deleted were in dolphin uber shaders. I've had
no opportunity to test them for correctness or performance.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
We didn't have any of these before because all NIR consumers always
called lower_ubo_references. Soon, we want to pass the derefs straight
through to NIR so we need to handle these intrinsics directly.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
We want to be able to use variables and derefs for UBO/SSBO access in
NIR. In order to do this, the rest of NIR needs to know the type layout
information.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
All of these are backed by some sort of memory so if you have multiple
threads writing to different components of the same vector at the same
time, the load-vec-store pattern that GLSL IR emits won't work. This
shouldn't affect any drivers today as they all call GLSL IR lowering
which lowers access to these variables to index+offset intrinsics before
we get to this point. However, NIR will start handling the derefs
itself and won't want the lowering.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
It's just a 32-bit index and offset. We're going to want to use it in
GL as well so stop talking about Vulkan.
Reviewed-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
If we get to two deref_var paths with different variables, we usually
know they don't alias. However, if both of the paths are marked
coherent, we don't have to worry about it.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
We also need to modify the current size/align helpers to not blow up
when they encounter an explicitly laid out type. Previously we
considered using the size/align helpers mutually exclusive with standard
layouts but now we just assert that they match.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
With UBOs and SSBOs we have boolean types but they're actually 32-bit
values. Make the validator a little less strict so that we can do a
32-bit load/store on boolean types. We're about to add a lowering pass
called gl_nir_lower_buffers which will lower boolean load/store
operations to 32-bit and insert i2b and b2i instructions to convert
to/from 1-bit booleans. We want that to be legal.
Reviewed-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
If we want to be able to use copy_deref instructions on explicitly laid
out types, we have to be a little more flexible about what types we
allow. Instead of requiring the types to exactly match, only require
the bare types to match.
Reviewed-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Shader-db results on Kaby Lake:
total instructions in shared programs: 15225213 -> 15222365 (-0.02%)
instructions in affected programs: 43524 -> 40676 (-6.54%)
helped: 203
HURT: 0
Lots of shaders in Shadow Warrior had this pattern along with Deus Ex,
Civ, Shadow of Mordor, and several others.
Reviewed-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Starting glxgears and closing it, I was seeing a lot of leaked TGSI for
the fixed function VPs.
v2: drop unused delete_ir() arg.
Fixes: 3b4929ec6e ("st/mesa: Copy VP TGSI tokens if they exist, even for NIR shaders.")
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
GLSL NIR gets freed on relink by _mesa_delete_program(), but for ARB
programs we need to free the old NIR when PSN is used to set up new NIR in
the same gl_program. Additionally, set the base .nir field so that it
will get freed by _mesa_delete_program().
Fixes: 3d7611e9a6 ("st/nir: use NIR for asm programs")
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
We have a native op for this, which was just found in a disassembly --
so instead of lowering, use it!
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Previously, we were caching this incorrectly; there's no real reason to,
given how variable it is (sensitive to changes in viewport, framebuffer
dimensions, and scissors) and how cheap it is to recompute. So, just do
it on the fly each draw.
Fixes glmark-es2 -bshadow and -brefract.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
For inexplicable reasons, the depth buffer is faster if kept as linear,
whereas the colour buffers are faster if AFBC. Given both code paths are
available, we'll choose the faster one of each (which also helps with
testing coverage).
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
It's not clear why the hardware "spills" a little bit, but if we don't
do this, we get MMU faults with linear depth buffers.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
While a depth buffer may be supplied, it only needs to be written to if
the depth writemask is set for any draw AND if the depth buffer is not
immediately invalidated (as is the case for scanout). This refactors
panfrost_job to provide a depth write requirement, which is now
implemented for MFBD depth buffers.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This removes a clunky hack where the depth buffer was enabled during the
*clear*, instead of during depth buffer linking. That said, this does
not yet support writeback like AFBC depth buffers.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
The fragment framebuffer descriptor should not be a context entry;
rather, it should be constructed only at fragment time to keep analysis
tractable.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This substantially cleans up the corresponding logic at the expense of a
bit of code duplication; nevertheless, it's a net win since otherwise
incompatible hardware code is mixed confusingly.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This function is replicated across vc4/v3d/freedreno and is needed in
Panfrost; let's make this shared code.
v2: Supply generic util_array_contains_u64 version (Eric Engestrom). Add
missing stdbool.h include (Eric Anholt). Mark inline (Christian
Gmeiner).
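Roughly, such a shared helper could look like this (sketch only; the
name/signature that lands in util/ may differ):

#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

/* Returns true if 'value' occurs in the 'count'-element array 'haystack'. */
static inline bool
util_array_contains_u64(uint64_t value, const uint64_t *haystack, size_t count)
{
   for (size_t i = 0; i < count; i++) {
      if (haystack[i] == value)
         return true;
   }
   return false;
}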
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
_mesa_log_msg must provide the length of the string passed into the
KHR_debug api. When the string formatted by _mesa_gl_vdebugf exceeds
MAX_DEBUG_MESSAGE_LENGTH, the length is incorrectly set to the number
of characters that would have been written if enough space had been
available.
Fixes: 3025680578
("mesa: Add support for GL_ARB_debug_output with dynamic ID allocation.")
Reviewed-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
We don't set it on HSW and earlier in i965 and disabling it appears to
make derivatives somewhat more reliable.
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
Not mutating the boxes is arguably cleaner.
Split from a patch by Chris Wilson but reworked to use a pointer to the
original box rather than making a copy at all.
The patch removes the scaling factors introduced in 2a2e69f975 but leaves
the option to use scaling in place, as it could be useful with other upcoming
YUV formats.
We did this scaling because ffmpeg was shifting channel bits down. However,
it seems this is not the right place, as the compositor wants to flip the same
buffers directly to the display as well, and therefore bitshifting needs to be
done by the client when receiving the frame from ffmpeg.
Now P0x formats are treated the same, e.g. P010 is same as P016 but with
lower 6 bits set to zeros.
Fixes: 2a2e69f975 "i965: add P0x formats and propagate required scaling factors"
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
We've been fairly inconsistent about this so we should really choose
whether we're going to use VK_TRUE/FALSE or the C boolean values. The
Vulkan #defines are set to 1 and 0 respectively so it's the same value
as C gives you when you cast a boolean expression to an integer. Since
there are several places where we set a VkBool32 to a C logical
expression, let's just embrace C booleans and stop using the VK defines.
Reviewed-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Now that we are properly resolving buffers before giving them to the
window system, let's enable aux support again.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
In i965, we disable the use of RGBX formats, so the higher layers of
Mesa choose the equivalent RGBA format, and swizzle the alpha channel to
1.0.
However, Gallium won't do that. We need to explicitly convert it to
RGBA.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The flush_resource hook is supposedly called when the resource content
needs to be made visible externally (okay, that's pretty vague). For
instance, it gets called before a surface gets handed to the window
system. So we need to resolve it if it's not resolved yet.
v2 (Ken):
- Check mod_info in iris_flush_resource instead of ISL_AUX_USAGE_NONE
- Drop my old broken resolve code from iris_resource_get_handle() now
that Rafael's got it hooked up in the right place.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
While we lack value range tracking, this patch tries to 'manually' propagate
the division by 4 used to calculate the SSBO element offset into a possible
previous shift operation (shift left or right), checking that it is safe to do so.
This should help in cases like ie. when accessing a field in an array of
structs, where the offset is likely defined as base plus a multiplication
by a struct or array element size.
See dEQP test 'dEQP-GLES31.functional.ssbo.atomic.xor.highp_uint'
for an example of a shader that benefits from this.
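For illustration, the kind of fold this enables (not the actual ir3 code):

/* Illustrative example of the fold.  SSBO instructions need a dword
 * offset, i.e. the byte offset divided by 4. */
static unsigned dword_offset_before(unsigned i)
{
   unsigned byte_off = i << 4;   /* e.g. array of 16-byte structs     */
   return byte_off >> 2;         /* extra SHR emitted per SSBO access */
}

/* After propagating the division into the existing shift (only done when
 * it is provably safe): */
static unsigned dword_offset_after(unsigned i)
{
   return i << 2;                /* SHR gone; the offset can often end up as
                                    an immediate in the backend instruction */
}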
Reviewed-by: Rob Clark <robdclark@gmail.com>
These intrinsics have the offset in dwords already computed in the last
source, so the change here is basically using that instead of emitting
the ir3_SHR to divide the byte-offset by 4.
The improvement in shader stats is significant, of up to ~15% in
instruction count in some cases. Tested only on a5xx.
shader-db is unfortunately not very useful here because shaders that use
SSBO require GLSL versions that are not supported by freedreno yet.
For examples, most Khronos CTS tests under 'dEQP-GLES31.functional.ssbo.*'
are helped.
A random case:
dEQP-GLES31.functional.ssbo.layout.2_level_array.packed.row_major_mat3x2
with current master:
; CL prog 14/1: 1252 instructions, 0 half, 48 full
; 8 const, 8 constlen
; 61 (ss), 43 (sy)
with the SSBO dword-offset moved to NIR:
; CL prog 14/1: 1053 instructions, 0 half, 45 full
; 7 const, 7 constlen
; 34 (ss), 73 (sy)
The SHR previously emitted for every single SSBO instruction disappears
in most cases, and the dword-offset ends up embedded in the STGB
instruction as immediate in many cases as well.
There are also a few of those tests that are currently failing on register
allocation, that start to pass as a result of reducing the pressure. At least
these, probably more:
dEQP-GLES31.functional.ssbo.layout.random.unsized_arrays.24
dEQP-GLES31.functional.ssbo.layout.random.arrays_of_arrays.6
dEQP-GLES31.functional.ssbo.layout.random.arrays_of_arrays.17
dEQP-GLES31.functional.ssbo.layout.random.nested_structs_arrays.14
dEQP-GLES31.functional.ssbo.layout.random.nested_structs_arrays_instance_arrays.5
dEQP-GLES31.functional.ssbo.layout.random.nested_structs_arrays_instance_arrays.7
No regressions observed with relevant CTS and piglit tests.
Reviewed-by: Rob Clark <robdclark@gmail.com>
This NIR->NIR pass implements offset computations that are currently
done on the IR3 backend compiler, to give NIR a better chance of
optimizing them.
For now, it supports lowering the dword-offset computation for SSBO
instructions. It will take an SSBO intrinsic and replace it with the
new ir3-specific version that adds an extra source. That source will
hold the SSA value resulting from inserting a division by 4 (an SHR op)
of the original byte-offset source already provided by NIR in one of
the intrinsic sources.
Note that on a6xx the original byte-offset is not needed, so we could
potentially replace that source instead of adding a new one. But to
keep things simple and consistent we always add the new source and
a6xx will just ignore the original one.
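Roughly, the dword-offset computation such a pass adds could look like this
with nir_builder (sketch only; the pass boilerplate and the intrinsic rewrite
are omitted, and the real helper names differ):

#include "nir_builder.h"   /* include path may differ per build setup */

/* Compute the dword offset from the byte offset NIR provides, to be added
 * as the extra source on the ir3-specific SSBO intrinsic. */
static nir_ssa_def *
compute_dword_offset(nir_builder *b, nir_ssa_def *byte_offset)
{
   /* dword offset = byte offset >> 2 (i.e. divided by 4) */
   return nir_ushr(b, byte_offset, nir_imm_int(b, 2));
}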
Reviewed-by: Rob Clark <robdclark@gmail.com>
These are ir3 specific versions of SSBO intrinsics that add an
extra source to hold the element offset (dword), which is what the
backend instructions need.
The original byte-offset source provided by NIR is not replaced
because on a4xx and a5xx the backend still needs it.
Reviewed-by: Rob Clark <robdclark@gmail.com>
indexConfigAttrib iterates over every index in the dri driver, possibly
exceeding __DRI_ATTRIB_MAX. In other words, if the dri driver has newer
attributes libEGL will end up reading from uninitialized memory through
dri2_to_egl_attribute_map[].
Signed-off-by: Kevin Strasser <kevin.strasser@intel.com>
Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Always use the streaming load (since we know we have Broadwell+, all of
our target CPUs support sse41) for reading back from the tiled surface
for mapping the resource. This means we hit the fast WC handling paths
on Atoms (without LLC), and for big Core (with LLC) using the streaming
load is no less efficient as we do not require the tiled buffer to be
pulled into the CPU cache.
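For illustration, a minimal streaming-load copy with SSE4.1 intrinsics
(sketch only; the real tiled-memcpy path is more involved and handles
unaligned/odd sizes):

#include <smmintrin.h>  /* SSE4.1: _mm_stream_load_si128 */
#include <stddef.h>

/* Copy 'len' bytes from write-combining (tiled) memory using streaming
 * loads, avoiding pulling the source into the CPU cache.  Assumes 16-byte
 * aligned pointers and a multiple-of-16 length. */
static void
streaming_memcpy(void *dst, void *src, size_t len)
{
   __m128i *d = (__m128i *)dst;
   __m128i *s = (__m128i *)src;
   for (size_t i = 0; i < len / 16; i++)
      _mm_store_si128(&d[i], _mm_stream_load_si128(&s[i]));
}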
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
On !llc machines (Atoms), reading from a linear buffers is slow and so
copying from one resource into the linear staging buffer is still slow.
However, we can tell the GPU to snoop the CPU cache when reading from and
writing to the staging buffer eliminating the slow uncached reads.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Due to lack of write mask in SPIR-V store, generators may produce
multiple stores to the same vector but using different array derefs.
Use the combining store pass to clean this up. For example,
layout(binding = 3) buffer block {
   vec4 v;
};
void main() {
   v.x = 11;
   v.y = 22;
}
after going to SPIR-V and NIR, ends up with two store_derefs to
v[0] and v[1]
vec2 32 ssa_4 = deref_struct &ssa_3->field0 (ssbo vec4) /* &((block *)ssa_2)->field0 */
vec2 32 ssa_6 = deref_array &(*ssa_4)[0] (ssbo float) /* &((block *)ssa_2)->field0[0] */
intrinsic store_deref (ssa_6, ssa_7) (1, 0) /* wrmask=x */ /* access=0 */
vec1 32 ssa_13 = load_const (0x00000001 /* 0.000000 */)
vec2 32 ssa_14 = deref_array &(*ssa_4)[1] (ssbo float) /* &((block *)ssa_2)->field0[1] */
intrinsic store_deref (ssa_14, ssa_15) (1, 0) /* wrmask=x */ /* access=0 */
producing two different sends instructions in skl. The combining pass
transforms the snippet above into
vec2 32 ssa_4 = deref_struct &ssa_3->field0 (ssbo vec4) /* &((block *)ssa_2)->field0 */
vec4 32 ssa_18 = vec4 ssa_7, ssa_15, ssa_16, ssa_17
intrinsic store_deref (ssa_4, ssa_18) (3, 0) /* wrmask=xy */ /* access=0 */
producing a single sends instruction.
v2: Move this from spirv_to_nir into the general optimization pass for
intel compiler. (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
v2: (all from Jason)
Reuse existing function for the end of the block combinations.
Check the SSA values are coming from the right place in tests.
Document the case when the store to array_deref is reused.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
We were not copying the saturate bit from the original instruction
to the new replacement instruction. This caused major misrendering
in DiRT Rally on iris, where comparisons leading to discards failed
due to the missing saturate, causing lots of extra garbage pixels to
be drawn in text rendering, trees, and so on.
This did not show up on i965 because st/nir performs a more aggressive
version of nir_opt_peephole_select, yielding more b32csel operations.
Fixes: 52c7df1643 i965/fs: Merge CMP and SEL into CSEL on Gen8+
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Tessellation control shader outputs act as if they have memory backing
them and you can have multiple writes to different components of the
same vector in-flight at the same time. When this happens, the load vec
store pattern that gets used by ir_triop_vector_insert doesn't yield the
correct results. Instead, just emit a sequence of conditional
assignments.
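For illustration, the difference between the two forms (written as plain C;
the compiler emits the equivalent GLSL IR):

#include <string.h>

/* 'out_vec' stands for a TCS output vector that several invocations may be
 * writing (different components) at the same time. */

/* Broken load-vec-store: a concurrent write to another component between
 * the load and the whole-vector store gets lost. */
static void write_component_racy(float out_vec[4], int comp, float value)
{
   float tmp[4];
   memcpy(tmp, out_vec, sizeof(tmp));   /* load whole vector    */
   tmp[comp] = value;                   /* insert one component */
   memcpy(out_vec, tmp, sizeof(tmp));   /* store whole vector   */
}

/* Safe form: only the selected component is ever stored. */
static void write_component_safe(float out_vec[4], int comp, float value)
{
   for (int c = 0; c < 4; c++) {
      if (c == comp)
         out_vec[c] = value;            /* per-component conditional assignment */
   }
}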
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Cc: mesa-stable@lists.freedesktop.org
The previous code was completely broken when it came to constructing the
undef values. I'm not sure how it ever worked. For the case of a copy
that reads an undefined value, we can just delete the copy because the
destination is a valid undefined value. This saves us the effort of
trying to construct a value for an arbitrary copy_deref intrinsic.
Fixes: e8a8937a04 "nir: add partial loop unrolling support"
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
v2: Only reject no-error contexts for too-old GL if we're actually
trying to create a no-error context (Adam Jackson)
v3: Fix share contexts (Adam Jackson)
Reviewed-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
pool->next and pool->free_list were reset before their usage in
anv_descriptor_pool_free_set
Fixes: 775aabdd "anv: destroy descriptor sets when pool gets reset"
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This reduces the runtime of dEQP-GLES3.functional.shaders.precision.* from
11.5s to 3.3s. This brings CTS runs down to 4 hours on one of my target
devices.
The IO scalarization pass that we run to help with linking ends up
turning some shader I/O such as that for tessellation and geometry
shaders into many scalar URB operations rather than one vector one. To
alleviate this, we now vectorize the I/O once again. This fixes a 10%
performance regression in the GfxBench tessellation test that was caused
by scalarizing.
Shader-db results on Kaby Lake:
total instructions in shared programs: 15224023 -> 15220871 (-0.02%)
instructions in affected programs: 342009 -> 338857 (-0.92%)
helped: 1236
HURT: 443
total spills in shared programs: 23471 -> 23465 (-0.03%)
spills in affected programs: 6 -> 0
helped: 1
HURT: 0
total fills in shared programs: 31770 -> 31766 (-0.01%)
fills in affected programs: 4 -> 0
helped: 1
HURT: 0
Cycles was just a lot of churn due to moves being in different places. Most
of the pure churn in instructions was +/- one or two instructions in
fragment shaders.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107510
Fixes: 4434591bf5 "intel/nir: Call nir_lower_io_to_scalar_early"
Fixes: 8d8222461f "intel/nir: Enable nir_opt_find_array_copies"
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
This ensures the Mesa3D build doesn't fail in this case, as encountered when
bisecting Scons source code while regression testing
https://bugs.freedesktop.org/show_bug.cgi?id=109443
and when testing 3.0.5.a.2
Technical details:
Scons version string has consistently been in this format:
MajorVersion.MinorVersion.Patch[.alpha/beta.yyyymmdd]
so these formulas should strip alpha/beta flags and return Scons version:
- as string - `'.'.join(SCons.__version__.split('.')[:3])`
- as tuple of integers - `tuple(map(int, SCons.__version__.split('.')[:3]))`
- v2: Fixed Scons version retrieval formulas as string and tuple of integers.
- v3: Fixed Scons version string format description.
Cc: "19.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
This reverts commit 47fc359822.
Reason is that the patch did not take into account the situation where we might
have both OpenGL and Vulkan using glsl_types at the same time.
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This reduces compilation time for my shader-db collection from around 40
seconds to 30, vs. 19 seconds for TGSI. There are still some shaders
that TGSI caches but NIR doesn't, partly because of more aggressive
cross-stage optimizations with NIR.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Oftentimes various nir shaders after lowering will be the same, or
almost the same. For example, this can happen when the same shader is
linked with different shaders to form different pipelines and
cross-stage optimizations don't kick in to change it. We want to avoid
running the backend twice on these shaders. We were already doing this
with radeonsi, but we were storing a few extra pieces of information
that made this much less effective compared to TGSI. The worst offender
by far was the program name, which caused most of the cache misses. This
pass strips out these pieces of information, controlled by the NIR_STRIP
debug env variable.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The values should match the ones that are emitted.
This fixes new CTS dEQP-VK.rasterization.primitive_size.points.*.
Fixes: f4e499ec79 ("radv: add initial non-conformant radv vulkan driver")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This fixes a segfault when we try to access the array using an
index of -1 when the array wasn't allocated in the first place.
Before 7536af670b we would just access a pre-allocated array
that was also load/stored to/from the shader cache. But now the
cache will no longer allocate these arrays if they are empty.
The change resulted in tests such as the following segfaulting
when run with a warm shader cache.
tests/spec/arb_arrays_of_arrays/execution/sampler/fs-struct-const-index.shader_test
The fragment_extra structure contains additional fields extending the
MRT framebuffer descriptor, snuck in between the main framebuffer
descriptor and the render targets. Its fields include those related to
transaction elimination and depth/stencil buffers. This patch identifies
the flags field (previously just "unk" with some magic values) as well
as identifying some (but not all) flags set by the driver.
The process of identifying flags brought a bug to light where
transaction elimination (checksumming) could not be enabled unless AFBC
was in-use. This issue is now resolved.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
This bit, if set, causes the depth buffer to be copied from GPU tile
memory to the provided depth buffer in main memory. If not set, the GPU
will not access the main memory (saving considerable memory bandwidth if
depth results are not actually used).
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This combination has not yet been seen "in the wild" in traces, but to
support linear depth FBOs, ~bruteforce reveals this bit pattern is
necessary. It's not yet clear why the meanings of 0x1 and 0x2 are
essentially flipped (tiled vs linear for colour, linear vs some sort of
tiled for depth).
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Previously, linear BOs shared memory with each other to minimize kernel
round-trips / latency, as well as to work around a bug in the free_slab
function. These concerns are invalid now, but continuing to use the slab
allocator for BOs resulted in memory allocation errors. This issue was
aggravated, though not introduced (so not a real regression), by the
previous commit.
v2 (unreviewed): Fix bug in v1 preventing munmaps from working
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Again, these formats are only properly known at the time of fragment job
emit. Rather than hardcoding the format, at least for MFBD we begin to
construct the format bits on-demand. This cleans up the code,
futureproofs for ES3 framebuffer formats, and should fix bugs regarding
FBO colour swizzles.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Tomeu Vizoso <tomeu.visozo@collabora.com>
In an effort to clean up framebuffer management code, we delay
colour buffer setup until the FRAGMENT job is actually emitted, allowing
the AFBC and linear codepaths to be unified.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Tomeu Vizoso <tomeu.visozo@collabora.com>
AFBC, tiled, and linear BO layouts are mutually exclusive; they should
be coupled via a single enum rather than ad hoc checks of booleans.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Tomeu Vizoso <tomeu.visozo@collabora.com>
I'm not sure why we were checking for these additional criteria (likely
inherited from some other driver); remove the needless checks to clean up
the code and perhaps fix some bugs down the line.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Tomeu Vizoso <tomeu.visozo@collabora.com>
This implements virtually all documented PIPE_CONTROL restrictions
in a centralized helper. You now simply ask for the operations you
want, and the pipe control "brain" will figure out exactly what pipe
controls to emit to make that happen without tanking your system.
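For illustration, the rough shape of such an interface (the names and the
workaround shown are illustrative, not iris's actual code):

#include <stdint.h>

/* Illustrative flag bits and entry points; the real names/values differ. */
#define PC_RENDER_TARGET_FLUSH   (1u << 0)
#define PC_DEPTH_CACHE_FLUSH     (1u << 1)
#define PC_TEXTURE_INVALIDATE    (1u << 2)
#define PC_VF_CACHE_INVALIDATE   (1u << 3)
#define PC_CS_STALL              (1u << 4)
#define PC_WRITE_IMMEDIATE       (1u << 5)

struct iris_batch;  /* opaque stand-in for the driver's batch type */
extern void emit_raw_pipe_control(struct iris_batch *batch, uint32_t flags,
                                  void *bo, uint32_t offset, uint64_t imm);

/* Callers only say what they need; the helper inserts whatever extra or
 * preparatory PIPE_CONTROLs the documented restrictions require. */
static void
emit_pipe_control(struct iris_batch *batch, uint32_t flags,
                  void *bo, uint32_t offset, uint64_t imm)
{
   /* Example restriction (hypothetical shape): a VF cache invalidate on
    * some gens must be preceded by a stalling write-immediate PIPE_CONTROL.
    * Note the preparatory pipe control does not get the caller's
    * bo/offset/imm. */
   if (flags & PC_VF_CACHE_INVALIDATE)
      emit_raw_pipe_control(batch, PC_CS_STALL | PC_WRITE_IMMEDIATE,
                            NULL, 0, 0);

   emit_raw_pipe_control(batch, flags, bo, offset, imm);
}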
The hope is that this will fix some intermittent flushing issues as
well as GPU hangs. However, it also has a high risk of causing GPU
hangs and other regressions, as this is a particularly sensitive
area and poking the bear isn't always advisable.
Mark Janes noted that this patch helps with some GPU hangs on Icelake.
This does re-enable the VF Invalidate => Write Immediate workaround
on Gen8, which had been disabled (bug 103787) due to GPU hangs. The
old code did this workaround after another which would have added CS
stall bits, so it missed a workaround. The new code orders them
properly and appears to work.
v4: Don't pass "bo, offset, imm" to a recursive CS stall (caught by
Topi Pohjolainen), drop Gen10 workarounds that are unnecessary for
production hardware.
Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com>
While this does add a bunch of boilerplate, it also protects us against
the hardware moving bits, or changing their meaning. For something as
finnicky as PIPE_CONTROL, the extra safety seems worth it.
We turn PIPE_CONTROL_* into a bitfield of arbitrary flags, and then
pack them appropriately.
Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com>
This will let us make multiple genX_*.c files, without copy and pasting
all this boilerplate.
Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com>
Add new Introduction and Advanced Usage sections.
Spell out a few more details, like "ninja install".
Improve the layout around example commands.
Fix grammatical errors and tighten up the text.
Explain the --prefix option.
v2: Remove language about 'ninja clean' and move link to Meson
information about separate build directories earlier in the page.
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Rename st_texture_free_sampler_views() to
st_delete_texture_sampler_views() to align with
st_DeleteTextureObject(), its only caller.
Move the call to st_texture_release_all_sampler_views() from
st_DeleteTextureObject() to st_delete_texture_sampler_views()
so all the sampler view clean-up code is in one place.
Reviewed-by: Neha Bhende <bhenden@vmware.com>
st_init_driver_functions() is only called in st_context.c so there's
no need for the prototype in st_context.h
To avoid a forward declaration of st_init_driver_functions() in
st_context.c, we need to move around several other functions.
No functional change.
Reviewed-by: Neha Bhende <bhenden@vmware.com>
To de-clutter st_context.h.
Clean up remaining function prototypes in st_context.h.
The st_vp_uses_current_values() helper is only used in st_context.c
so move it there.
The st_get_active_states() function is only used in st_context.c so
remove its prototype in st_context.h
Reviewed-by: Neha Bhende <bhenden@vmware.com>
As stated in Vulkan spec:
"Resetting a descriptor pool recycles all of the resources from all
of the descriptor sets allocated from the descriptor pool back to
the descriptor pool, and the descriptor sets are implicitly freed."
This fixes dEQP-VK.api.descriptor_pool.*
Fixes: 14f6275c92 "anv/descriptor_set: add reference counting for..."
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Tested-by: Clayton Craft <clayton.a.craft@intel.com>
Rather than getting this from the alu instruction this allows us
some flexibility. In the following pass we instead pass the
inverse op.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
This helps make find_trip_count() a little easier to follow but
will also be used by a following patch.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
This will be used to help find the trip count of loops that look
like the following:
while (a < x && i < 8) {
...
i++;
}
Where the NIR will end up looking something like this:
vec1 32 ssa_1 = load_const (0x00000004 /* 0.000000 */)
loop {
...
vec1 1 ssa_12 = ilt ssa_225, ssa_11
vec1 1 ssa_17 = ilt ssa_226, ssa_1
vec1 1 ssa_18 = iand ssa_12, ssa_17
vec1 1 ssa_19 = inot ssa_18
if ssa_19 {
...
break
} else {
...
}
}
So in order to find the trip count we need to find the inverse of
ilt.
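For illustration, the kind of inverse mapping involved (sketch only; the
real helper in loop analysis covers more opcodes and the NaN caveats for
float comparisons):

#include <assert.h>
#include "nir.h"   /* for nir_op; include path may differ per build setup */

/* Returns the comparison whose result is the logical negation of 'op',
 * e.g. inot(ilt(a, b)) is equivalent to ige(a, b). */
static nir_op
inverse_comparison(nir_op op)
{
   switch (op) {
   case nir_op_ilt: return nir_op_ige;
   case nir_op_ige: return nir_op_ilt;
   case nir_op_ult: return nir_op_uge;
   case nir_op_uge: return nir_op_ult;
   case nir_op_flt: return nir_op_fge;
   case nir_op_fge: return nir_op_flt;
   case nir_op_ieq: return nir_op_ine;
   case nir_op_ine: return nir_op_ieq;
   default:
      assert(!"unsupported comparison in this sketch");
      return op;
   }
}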
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Here we create a helper is_supported_terminator_condition()
and use that rather than embedding all the trip count code
inside a switch.
The new helper will also be used in a following patch.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
This adds support to loop analysis for loops where the induction
variable is compared to the result of min(variable, constant).
For example:
for (int i = 0; i < imin(x, 4); i++)
...
We add a new bool to the loop terminator struct in order to
differentiate terminators with this exit condition.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
This adds partial loop unrolling support and makes use of a
guessed trip count based on array access.
The code is written so that we could use partial unrolling
more generally, but for now it's only used when we have guessed
the trip count.
We use partial unrolling for this guessed trip count because it's
possible any out of bounds array access doesn't otherwise affect
the shader, e.g. the stores/loads to/from the array are unused. So
we insert a copy of the loop in the innermost continue branch of
the unrolled loop. Later on it's possible for nir_opt_dead_cf()
to then remove the loop in some cases.
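For illustration, roughly what the transform does (written as C; the
function names are hypothetical and the trip count is guessed as 2 here):

#include <stdbool.h>

extern bool exit_condition(void);  /* hypothetical loop terminator */
extern void body(void);            /* hypothetical loop body       */

/* Original loop: the terminator can't be analyzed, but a trip count of 2
 * has been guessed (e.g. from array bounds). */
static void original(void)
{
   while (true) {
      if (exit_condition())
         break;
      body();
   }
}

/* After partial unrolling: two peeled iterations, with a copy of the loop
 * left in the innermost continue branch.  If the extra iterations turn out
 * to be dead (e.g. unused out-of-bounds array accesses), nir_opt_dead_cf()
 * may later remove the remaining inner loop entirely. */
static void partially_unrolled(void)
{
   if (!exit_condition()) {
      body();
      if (!exit_condition()) {
         body();
         while (true) {
            if (exit_condition())
               break;
            body();
         }
      }
   }
}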
A Renderdoc capture from the Rise of the Tomb Raider benchmark,
reports the following change in an affected compute shader:
GPU duration: 350 -> 325 microseconds
shader-db results radeonsi VEGA (NIR backend):
SGPRS: 1008 -> 816 (-19.05 %)
VGPRS: 684 -> 432 (-36.84 %)
Spilled SGPRs: 539 -> 0 (-100.00 %)
Spilled VGPRs: 0 -> 0 (0.00 %)
Private memory VGPRs: 0 -> 0 (0.00 %)
Scratch size: 0 -> 0 (0.00 %) dwords per thread
Code Size: 39708 -> 45812 (15.37 %) bytes
LDS: 0 -> 0 (0.00 %) blocks
Max Waves: 105 -> 144 (37.14 %)
Wait states: 0 -> 0 (0.00 %)
shader-db results i965 SKL:
total instructions in shared programs: 13098265 -> 13103359 (0.04%)
instructions in affected programs: 5126 -> 10220 (99.38%)
helped: 0
HURT: 21
total cycles in shared programs: 332039949 -> 331985622 (-0.02%)
cycles in affected programs: 289252 -> 234925 (-18.78%)
helped: 12
HURT: 9
vkpipeline-db results VEGA:
Totals from affected shaders:
SGPRS: 184 -> 184 (0.00 %)
VGPRS: 448 -> 448 (0.00 %)
Spilled SGPRs: 0 -> 0 (0.00 %)
Spilled VGPRs: 0 -> 0 (0.00 %)
Private memory VGPRs: 0 -> 0 (0.00 %)
Scratch size: 0 -> 0 (0.00 %) dwords per thread
Code Size: 26076 -> 24428 (-6.32 %) bytes
LDS: 6 -> 6 (0.00 %) blocks
Max Waves: 5 -> 5 (0.00 %)
Wait states: 0 -> 0 (0.00 %)
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
In order to stop continuously partially unrolling the same loop
we add the bool partially_unrolled to nir_loop. We add it here
rather than in nir_loop_info because nir_loop_info is only set
via loop analysis and is intended to be cleared before each
analysis. Also nir_loop_info is never cloned.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
This detects an induction variable used as an array index to guess
the trip count of the loop. This enables us to do a partial
unroll of the loop, which can eventually result in the loop being
eliminated.
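For illustration, the kind of pattern this targets (written as C; names are
hypothetical):

#include <stdbool.h>

extern bool some_unknown_condition(int i);  /* hypothetical, not analyzable */

static float guessed_trip_count_example(const float data[4])
{
   float sum = 0.0f;
   /* The terminator can't be analyzed, but 'i' indexes a 4-element array,
    * so the pass guesses a trip count of 4 and partially unrolls by that. */
   for (int i = 0; some_unknown_condition(i); i++)
      sum += data[i];
   return sum;
}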
v2: check if the induction var is used to index more than a single
array and if so get the size of the smallest array.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
'dxc' hlsl-to-spirv compiler appears to emit 2 (Unknown) in the depth field,
when the image is not sampled and the value is not needed.
Previously, shaders failed with:
SPIR-V parsing FAILED:
In file ../src/compiler/spirv/spirv_to_nir.c:1412
!is_shadow
632 bytes into the SPIR-V binary
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
We may bind new Z/S buffers (which come via the framebuffer CSO,
triggering IRIS_DIRTY_DEPTH_BUFFER), but with writes disabled.
The next draw may enable Z or S writes (which come via the ZSA CSO,
triggering IRIS_DIRTY_WM_DEPTH_STENCIL), which requires us to update
our pin to have the write flag.
So, update pinning if either dirty flag changes. To clarify, pass
cso_zsa to the pinning function rather than pulling the random values
out of ice->state, which unfortunately have to exist for the resolve
code since iris_depth_stencil_alpha_state only exists in iris_state.c.
This avoids the code duplication that caused me to put things in the
wrong place in the previous commit. One used to have extra flushes,
but we moved those out so now these are identical and can be easily
shared.
Commit d6dd57d43c (iris: Add missing depth cache flushes) added the
depth/stencil flushes to the wrong place. I meant to add them to the
iris_upload_dirty_render_state code that emits the packets, but I
accidentally added them to the nearly identical looking code in
iris_restore_render_saved_bos. This meant we missed the actual flushing
at draw time, but instead did pointless flushing on the first draw in a
batch where things are already flushed anyway.
This commit moves them to iris_resolve.c, next to the depth prepares,
similar to what we do for color buffers. i965 does them elsewhere, but
I'm not sure why - this seems like the most consistent place.
1. If we switch the TCS for one with a different number of output
vertices, then the TES's gl_PatchVerticesIn value will change.
We need to re-upload in this case. For now, re-emit constants
whenever the TCS/TES are swapped out.
2. If there is no TCS, then we can't grab gl_PatchVerticesIn from
the TCS info. Since it's a passthrough, we can just use the
primitive's patch count (like the TCS gl_PatchVerticesIn does).
Fixes KHR-GL45.tessellation_shader.single.max_patch_vertices and
KHR-GL45.tessellation_shader.tessellation_control_to_tessellation_evaluation.gl_PatchVerticesIn.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Now that we've added a system value uploading mechanism, we may as well
reuse the same system for default tessellation levels. This simplifies
the state upload code a bit.
Also fixes:
KHR-GL45.tessellation_shader.tessellation_control_to_tessellation_evaluation.gl_tessLevel
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This patch adds PIPE_CAP_TGSI_FS_FACE_IS_INTEGER_SYSVAL which
despite its name is not a TGSI-specific capability, just lets
the state tracker know that it should generate a system value
for FACE.
This is needed if we want to run tgsi_to_nir on iris.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
That is, drop KHR from all tokens that were promoted to Vulkan 1.1.
The consistency makes ctags more useful (it now jumps directly to the
real definitions in vulkan_core.h instead of the typedefs); and it makes
the code slightly less verbose.
Save SPIR-V in tu_shader_module. Translation to NIR happens in
tu_shader_create, and compilation to binary code happens in
tu_shader_compile. Both will be called during pipeline creation.
Let tu_cs_begin_sub_stream imply tu_cs_reserve_space, and
tu_cs_end_sub_stream imply tu_cs_sanity_check. Callers are no
longer required to call them (but can still do so if they choose to).
We will start a draw IB at the beginning of a subpass and consume it
at the end of the subpass. With tu_cs_discard_entries, we can reuse
the same tu_cs for all subpasses.
Asserting (cur < end) in tu_cs_emit catches far fewer programming
errors compared to asserting (cur < reserved_end). We should never
write more commands than what we have reserved.
Assert IB is non-empty and sane in tu_cs_emit_ib.
This should be quite complete feature-wise. External fences are
still missing. We probably also want to add a simpler path to
tu_WaitForFences for when fenceCount == 1.
Add tu_pack_clear_value to correctly pack VkClearValue according to
VkFormat. It ignores the component order defined by VkFormat, and
always packs to WZYX order.
A format table is an array of tu_native_format. Table lookup is
done through array indexing.
This commit defines a single format table for core VkFormat. It is
derived from the table in the gallium driver. There might be errors
introduced in the process of the conversion.
When an extension that defines new VkFormat is supported, we need to
add a new table for the extension.
- create tile_load_ib and tile_store_ib at the beginning of each
subpass
- execute the IBs at the end of each subpass
- no DONT_CARE support
- no subpass dependency analysis and subpass merging
- no zs support
- no true VkImageView support
- assume VK_FORMAT_B8G8R8A8_UNORM
- no tiling
- no MSAA
This also removes cur_cs from tu_cmd_buffer.
When in TU_CS_MODE_SUB_STREAM, tu_cs_begin_sub_stream (or
tu_cs_end_sub_stream) should be called instead of tu_cs_begin (or
tu_cs_end). It gives the caller a TU_CS_MODE_EXTERNAL cs to emit
commands to.
Add tu_cs_mode and TU_CS_MODE_EXTERNAL. When in
TU_CS_MODE_EXTERNAL, tu_cs wraps an external buffer and cannot
grow.
This also moves tu_cs* up in tu_private.h, such that other structs
can embed tu_cs_entry.
Error checking tu_cs_begin/tu_cs_end is too tedious for the callers.
Move tu_cs_add_bo and tu_cs_reserve_entry to tu_cs_reserve_space
such that tu_cs_begin/tu_cs_end never fail.
We need the current color/depth/stencil attachments and the current
render area to compute the tiling config.
We compute the tiling config at the beginning of each subpass for
the moment. We should change that when the driver can reorder/merge
subpasses.
It is very common that the render area is the entire framebuffer.
We might want to optimize for that case and compute the tiling config
in the tu_framebuffer ctor.
As this is the first commit that emits meaningful command packets, there
are many things included in it:
- tu6_emit_xxx are low-level helpers that emit command packets
without boundary checks
- tu6_xxx are high-level helpers that emit command packets with
boundary checks
- cmdbuf->cs is a pointer to the current CS, so that we can use the
helpers above to emit to other CS
- use cmd as the variable name of tu_cmd_buffer
- there is a per-cmdbuf scratch bo for CP_EVENT_WRITE writeback
- there is a per-cmdbuf debug marker, using scratch reg 7 or 6
depending on whether the cmdbuf is primary or secondary
(olv, after rebase) REG_A6XX_SP_UNKNOWN_AB20 is renamed
They are used like
tu_cs_reserve_space(...);
tu_cs_emit(...);
...;
tu_cs_reserve_space_assert();
to make sure we reserved enough space at the beginning.
Build drm_msm_gem_submit_bo array directly in tu_bo_list. We might
change this again, but this is good enough for now.
There are other issues as well, such as not using
VkAllocationCallbacks and sloppy error checking. We should revisit
this in the near future. The same goes for tu_cs.
This adds radv-style check_space functions + emit functions.
Also puts them in a header as a bunch of inlines, so
(1) we can use them from meta code.
(2) they are inline for performance as these are common and small.
Did not put them in tu_private.h, as a bunch of inlines only
clutters up that huge header file.
Precise error propagation for memory allocation failures is still
todo.
This creates a new fd on each queue submit. I do not go with
DRM_IOCTL_MSM_WAIT_FENCE solely because the path is marked legacy.
Otherwise, we can use the fence id rather than requesting a fence
fd until external fences are supported and enabled.
./deqp-vk -n dEQP-VK.info.*
Writing test log into TestResults.qpa
dEQP Core unknown (0xcafebabe) starting..
target implementation = 'Surfaceless'
WARNING: tu is not a conformant vulkan implementation, testing use only.
WARNING: tu is not a conformant vulkan implementation, testing use only.
Test case 'dEQP-VK.info.build'..
Pass (Not validated)
Test case 'dEQP-VK.info.device'..
Pass (Not validated)
Test case 'dEQP-VK.info.platform'..
Pass (Not validated)
Test case 'dEQP-VK.info.memory_limits'..
Pass (Pass)
DONE!
Test run totals:
Passed: 4/4 (100.0%)
Failed: 0/4 (0.0%)
Not supported: 0/4 (0.0%)
Warnings: 0/4 (0.0%)
The Makefile.am doesn't work. I tried fixing it but gave up because
I don't understand Autotools. I strongly suspect the Android.mk also
doesn't work.
Rather than maintain the broken build files, let's delete them and
re-add working build files if-and-when we need them. (Maybe we'll be
lucky and turnip will never need to support Autotools!).
meson files have been updated, autotools and android still need
updating.
Only build tested.
v2 (chadv):
- Rebase onto master.
- Fix build breakage in Python scripts.
- Drop the WSI code. The internal WSI apis have changed recently, and
will likely change again before the driver goes upstream. To avoid
unnecessary rebase work, let's drop the WSI code and re-add it when
we're ready to really use WSI.
(olv, after rebase) do not enable freedreno by default on ARM
The nir_state_slot struct had some padding that was never initialized.
Serializing the individual parts of the struct is more robust and avoids
the overhead of zeroing it at creation, so just do that.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Fixes leaks for each glsl_type generated:
==32470== 384 bytes in 3 blocks are possibly lost in loss record 18 of 18
==32470== at 0x483880B: malloc (vg_replace_malloc.c:309)
==32470== by 0x4C43F4A: ralloc_size (ralloc.c:119)
==32470== by 0x4C44014: rzalloc_size (ralloc.c:151)
==32470== by 0x4C44258: rzalloc_array_size (ralloc.c:215)
==32470== by 0x4D38957: glsl_type::glsl_type(glsl_struct_field const*, unsigned int, char const*) (glsl_types.cpp:114)
==32470== by 0x4D3BEED: glsl_type::get_struct_instance(glsl_struct_field const*, unsigned int, char const*) (glsl_types.cpp:1146)
==32470== by 0x4D42ECC: glsl_struct_type (nir_types.cpp:501)
==32470== by 0x4CDB5A1: vtn_handle_type (spirv_to_nir.c:1269)
==32470== by 0x4CE53DD: vtn_handle_variable_or_type_instruction (spirv_to_nir.c:4018)
==32470== by 0x4CD8CFF: vtn_foreach_instruction (spirv_to_nir.c:365)
==32470== by 0x4CE5E6B: spirv_to_nir (spirv_to_nir.c:4490)
==32470== by 0x497AF10: anv_shader_compile_to_nir (anv_pipeline.c:173)
v2: move release call to vkDestroyInstance
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes following leak:
==21853== 32 bytes in 1 blocks are definitely lost in loss record 2 of 20
==21853== at 0x483AB1A: calloc (vg_replace_malloc.c:762)
==21853== by 0x4C4DD7F: util_vma_heap_free (vma.c:221)
==21853== by 0x4C4D647: util_vma_heap_init (vma.c:46)
==21853== by 0x4957B9F: anv_CreateDescriptorPool (anv_descriptor_set.c:578)
Fixes: c520f4dec9 ("anv: Add a concept of a descriptor buffer")
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The patch maintains a list of sets in the pool and destroys possible
remaining sets when the pool is destroyed.
As stated in Vulkan spec:
"When a pool is destroyed, all descriptor sets allocated from
the pool are implicitly freed and become invalid."
This fixes memory leaks spotted with valgrind:
==19622== 96 bytes in 1 blocks are definitely lost in loss record 2 of 3
==19622== at 0x483880B: malloc (vg_replace_malloc.c:309)
==19622== by 0x495B67E: default_alloc_func (anv_device.c:547)
==19622== by 0x4955E05: vk_alloc (vk_alloc.h:36)
==19622== by 0x4956A8F: anv_multialloc_alloc (anv_private.h:538)
==19622== by 0x4956A8F: anv_CreateDescriptorSetLayout (anv_descriptor_set.c:217)
Fixes: 14f6275c92 ("anv/descriptor_set: add reference counting for descriptor set layouts")
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This backend interacts with the new DRM driver for Midgard GPUs which is
currently in development.
When using this backend, Panfrost has roughly the same functionality as
when using the non-DRM driver from Arm.
Alyssa Rosenzweig: To do so, we implement additional routines for
runtime GPU version detection and fencing. We cleanup some duplicate
code interfering with the new driver. We fix a long-standing memory leak
which is aggravated on the new driver. Finally, we implement BO
import/export in a way compatible with the new driver. These changes are
squashed to preserve bisectability given the hard-to-track ABI shifts in
the nondrm module.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Non-zero offset wasn't working, which breaks a bunch of
dEQP-GLES31.functional.texture.border_clamp.formats.* when doing sharded
deqp runs (because the order of tests changes, resulting in different
texture state being bound... deqp doesn't really clean up its GL state
between tests very well).
Previously, if additional textures were bound, due to using too small of
a bcolor_entry size, the last 32 bytes of the bcolor_entry would be
overwritten.
Signed-off-by: Rob Clark <robdclark@gmail.com>
In standalone.h, the struct gl_context type is not declared by #including
mtypes.h:
In file included from src/gallium/drivers/panfrost/midgard/cmdline.c:24:
src/compiler/glsl/standalone.h:46:14: warning: ‘struct gl_context’ declared inside parameter list will not be visible outside of this definition or declaration
struct gl_context *ctx);
^~~~~~~~~~
This causes the following compilation failure:
src/gallium/drivers/panfrost/midgard/cmdline.c: In function ‘compile_shader’:
src/gallium/drivers/panfrost/midgard/cmdline.c:58:61: error: passing argument 4 of ‘standalone_compile_shader’ from incompatible pointer type [-Werror=incompatible-pointer-types]
prog = standalone_compile_shader(&options, 2, argv, &local_ctx);
^~~~~~~~~~
In file included from src/gallium/drivers/panfrost/midgard/cmdline.c:24:
src/compiler/glsl/standalone.h:43:28: note: expected ‘struct gl_context *’ but argument is of type ‘struct gl_context *’
struct gl_shader_program * standalone_compile_shader(
^~~~~~~~~~~~~~~~~~~~~~~~~
Fixes: e67e072637 "panfrost: Implement Midgard shader toolchain"
Cc: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Most hw on the native platform advertises these
caps this way.
D3DCAPS_READ_SCANLINE: We don't really have hardware
support for that, but many games don't even check the
flag, and expect GetRasterStatus to work, which is
why we emulated it with a timer (like wine). So we
may as well advertise the cap.
D3DCURSORCAPS_LOWRES: I don't know what the status
of this is on X11, but I don't know of any dx9 game
running at height < 400 either.
D3DPTEXTURECAPS_TEXREPEATNOTSCALEDBYSIZE: The cap should
correspond to what the current generation of hw is doing.
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
Reviewed-by: Patrick Rudolph <siro@das-labor.org>
The former is supported on Matrox cards but no other hw.
The latter isn't supported anywhere.
It is fine to not advertise them as supported,
and it could prevent apps from triggering weird rendering paths.
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
If D3D_ALWAYS_SOFTWARE is set for debugging purposes,
run on any DRI-enabled platform.
Instead of probing for a compatible gallium driver (which might
fail if there's none), always use the KMS DRI software renderer.
This allows running nine on i915 when D3D_ALWAYS_SOFTWARE=1.
Signed-off-by: Patrick Rudolph <siro@das-labor.org>
Reviewed-by: Axel Davy <davyaxel0@gmail.com>
This broke piles of image load store tests (179 failures on CI,
mesa_master build #15546, previous build right before this landed
was green). I'd rather not leave the tree on fire over the weekend,
so let's revert for now, and we can figure out what happened next week.
No shader-db changes on any Intel platform.
v2: Use a loop to generate patterns. Suggested by Jason.
Reviewed-by: Matt Turner <mattst88@gmail.com> [v1]
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
No shader-db changes on any Intel platform.
v2: Use a loop to generate patterns. Suggested by Jason.
Reviewed-by: Matt Turner <mattst88@gmail.com> [v1]
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Now that nir_opt_copy_prop_vars can properly handle array derefs on
vectors, it's safe to move UBO and SSBO lowering to late in the
pipeline. This should allow NIR to actually start optimizing SSBO
access.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
It doesn't really matter where this pass goes as long as it's after we
call nir_lower_explicit_io and before we go into the back-end. Putting
it in brw_postprocess_nir lets us move nir_lower_explicit_io significantly
later in the pipeline.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Which also requires uadd_carry lowering
Until recently this was lowered in GLSL IR, so it went unnoticed that we
weren't lowering it.
Fixes: 1d8994a63b glsl: [u/i]mulExtended optimization for GLSL
Signed-off-by: Rob Clark <robdclark@gmail.com>
Fixes: 45271702ec freedreno: fix ir3_cmdline build
Fixes: 7530d4abfc glsl/freedreno/panfrost: pass gl_context to the standalone compiler
Signed-off-by: Rob Clark <robdclark@gmail.com>
With createImage(), the caller was expected to set a SHARED flag if they
needed the ability to get a GEM handle. DRI3, wayland, and gbm all set
it, EGL_MESA_drm_image passes it through, and surfaceless doesn't need it
because there's no way to request a handle.
With the new createImageWithModifiers() DRI method to replace it, the
expectation is that you'll always be able to share the buffer, so the flag
is unnecessary in its arguments. However, we do need to tell gallium
about this expectation.
Without this, kmscube's modifiers path using
gbm_bo_create_with_modifiers(&modifier, 1) instead of
gbm_bo_create(SCANOUT | SHARED) will call the driver's resource_create()
function with PIPE_BIND_SHARED unset, so the driver (particularly
renderonly drivers) may allocate in such a way that it can't return an
answer from gbm_bo_get_handle(). I used to have a hack in v3d using
count==1 && modifier==LINEAR to indicate that you wanted SHARED anyway,
but that was dropped recently.
Fixes: 59527a36e9 ("v3d: Restructure RO allocations using
resource_from_handle.")
Reviewed-by: Kristian H. Kristensen <hoegsberg@chromium.org>
This is similar to intel_miptree_map_blit and intel_buffer_object.c's
temporary blits in i965.
Improves performance of DiRT Rally by 20-25% by eliminating stalls.
Breaks piglit's spec/arb_shader_image_load_store/host-mem-barrier,
by using the GPU to do uploads, exposing an st/mesa issue where it
doesn't give us memory_barrier() calls. This is a pre-existing issue
and will be fixed by a later patch (currently out for review).
For fragment programs we already treat fog as a single-component value,
but for vp we didn't.
Fixes fog-related piglit tests with my out-of-tree Nouveau nir patches.
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Everything else uses `#include "GL/internal/dri_interface.h"` instead,
and this full path was even already used in other parts of GLX.
While at it, nothing uses `inc_gl_internal` anymore so let's remove it
as well.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
Tested-by: Clayton Craft <clayton.a.craft@intel.com>
Since the compiler may not zero out padding in the object.
Add a couple of comments about this to prevent misunderstandings in
the future.
Fixes: 67d96816ff ("st/mesa: move, clean-up shader variant key decls/inits")
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
More or less any instance of this issue pointed out by the compiler is
a coding error. Make sure we flag it and bail loudly.
v2: - apply the change to autotools and scons as well (Emil)
- C++ doesn't need this, it's already an error and the flag
doesn't exist (Gert)
v3: - drop scons, flags are not checked so until someone adds that
functionality we can't have this.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com> # v1
Reviewed-by: Emil Velikov <emil.velikov@collabora.com> # v1
[Emil: apply the same change to autotools and scons]
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Fixes: 45d58cd91567b39f51af "gitlab-ci: only build the default (=latest) and oldest llvm versions"
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
GLVND already provides these, so distro packagers have been deleting
them all along. Let's save ourselves the trouble and not build them in
the first place.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
GLVND already provides these, so distro packagers have been deleting
them all along. Let's save ourselves the trouble and not build them in
the first place.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
This fixes a rendering issue where UBO updates aren't always picked
up by drawing calls. This issue affected the Webots robotics
simulator. VMware bug 2175527.
Testing Done: Webots replay, piglit, misc Linux games
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
The draw_vgpu10() function was huge. Move the code for preparing the
vertex buffers and the index buffer into separate functions.
Reviewed-by: Neha Bhende <bhenden@vmware.com>
And add a comment that we're implicitly converting PIPE_TRANSFER_
flags to PB_USAGE_ flags in one place. And statically assert that
the enum values match.
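A minimal sketch of such a compile-time check, assuming Mesa's STATIC_ASSERT macro and that these flag pairs are among the ones asserted (not copied from the patch):
  /* Sketch only: representative pairs; the patch may assert other values too. */
  STATIC_ASSERT((unsigned) PB_USAGE_CPU_READ == (unsigned) PIPE_TRANSFER_READ);
  STATIC_ASSERT((unsigned) PB_USAGE_CPU_WRITE == (unsigned) PIPE_TRANSFER_WRITE);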
Reviewed-by: Neha Bhende <bhenden@vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Use a new enum type instead of 'unsigned' to make things a bit more
understandable.
Reviewed-by: Neha Bhende <bhenden@vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
With this patch, the svga shader type will be saved in the shader variant,
and there is no need to pass in the shader type to the define/destroy
variant functions.
Reviewed-by: Brian Paul <brianp@vmware.com>
From the ARB_enhanced_layouts specification:
"For the property TRANSFORM_FEEDBACK_BUFFER_INDEX, a single integer
identifying the index of the active transform feedback buffer
associated with an active variable is written to <params>. For
variables corresponding to the special names "gl_NextBuffer",
"gl_SkipComponents1", "gl_SkipComponents2", "gl_SkipComponents3",
and "gl_SkipComponents4", -1 is written to <params>."
We were storing the xfb_buffer value, instead of the value
corresponding to GL_TRANSFORM_FEEDBACK_BUFFER_INDEX.
Note that the implementation assumes that varyings are sorted by
offset and buffer.
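For context, an application-side query of this property looks roughly like this (illustrative; 'prog' and 'varying_index' are placeholders, not from the patch):
  GLenum prop = GL_TRANSFORM_FEEDBACK_BUFFER_INDEX;
  GLint buffer_index = 0;
  glGetProgramResourceiv(prog, GL_TRANSFORM_FEEDBACK_VARYING, varying_index,
                         1, &prop, 1, NULL, &buffer_index);
  /* -1 is expected for gl_NextBuffer / gl_SkipComponents*. */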
Signed-off-by: Antia Puentes <apuentes@igalia.com>
Signed-off-by: Alejandro Piñeiro <apinheiro@igalia.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Instead of a custom ARB_gl_spirv xfb gather info pass.
In fact, this is not only about reusing code: the current custom
code was not properly handling how many varyings are enumerated for
some complex types. So this change is also about fixing some corner
cases.
v2: Use util_bitcount, simplify current stage check (Kenneth)
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
On OpenGL, an array of a simple type adds just one varying. So the
gl_transform_feedback_varying_info struct defined in mtypes.h includes
the parameters Type (base_type) and Size (number of elements).
This commit checks this when the recursive add_var_xfb_outputs call
handles arrays, to ensure that just one is added.
We also need to take into account AoA here.
v2: use glsl_type_is_leaf from nir_types (Timothy Arceri)
v3: simplified AoA check, without the need of using glsl_type_is_leaf,
using glsl_type_is_struct (Timothy Arceri)
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Right now we are only re-sorting outputs. But it is better to sort
varyings too, as the linker expects them to be sorted (as was done on
GLSL). For varyings, and to make it easier to compute buffer_index, we
also sort by buffer. We could do the same for outputs, but we lack a
reason for that, so we leave it as it is (just offset).
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
In order to be used for OpenGL (right now for ARB_gl_spirv).
This commit adds two new structures (sketched below):
* nir_xfb_varying_info: identifies each individual varying. For
each one, we need to know the type, buffer and xfb_offset.
* nir_xfb_buffer_info: for each buffer, in addition to the
stride, we need to know how many varyings are assigned to it.
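A rough sketch of what the two structures carry (field names and types are assumptions, not copied from nir_xfb_info.h):
  typedef struct {
     const struct glsl_type *type;  /* varying type */
     unsigned buffer;               /* xfb buffer it is assigned to */
     unsigned offset;               /* xfb_offset within that buffer */
  } nir_xfb_varying_info;
  typedef struct {
     unsigned stride;               /* buffer stride */
     unsigned varying_count;        /* varyings assigned to this buffer */
  } nir_xfb_buffer_info;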
For this patch, the only case where num_outputs != num_varyings is
doubles, where dvec3/4 can require more than one
output. There are more cases though (like AoA) that will be handled
in following patches.
v2: updated after new nir general XFB support introduced for "anv: Add
support for VK_EXT_transform_feedback"
v3: compute num_varyings beforehand for allocating, instead of relying
on num_outputs as approximate value (Timothy Arceri)
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Where component_offset here is the offset when accessing components of
a packed variable. Or, in other words, location_frac in
nir.h. Different places in Mesa use different names for it.
Technically the nir_xfb_info consumer could get the same from the
component_mask, but it seems somewhat forced to make it compute that
instead of providing it.
v2: rename local location_frac to comp_offset, more similar to the
intended use (Timothy Arceri)
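For illustration, this is what a consumer would otherwise have to recompute from the mask (assuming the usual layout where the mask starts at the first written component):
  /* Illustrative only. */
  unsigned component_offset = ffs(component_mask) - 1;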
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
This change is actually wrong because we have to sync
before doing image layout transitions.
This fixes rendering issues in Batman, Path of Exile and
probably more titles.
This reverts commit 76c17cfd8d.
Fixes: 76c17cfd8d ("radv: execute external subpass barriers after ending subpasses")
Cc: 19.0 <mesa-stable@lists.freedesktop.org>
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Found out that some base64 data matched the '---' identifier. We can
avoid this by adding the surrounding spaces.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
The error state contains several kinds of BOs, including the context
image which we will want to write in a later commit. Because it can
come later in the error state than the user buffers and because we
need to write it first in the aub file, we have to first build a list
of BOs and then write them in the appropriate order.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
Add the missing PIPE_CAP_CONTEXT_PRIORITY_MASK and parsing of the context
construction flags.
Testcase: piglit/egl-context-priority
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
I'll want to use this for transfer maps, which already do their own
flushing. This lets us avoid a double flush, and also gives us more
control over the batch which is selected.
We were using batch->contains_draw as a proxy for "are we even using
this engine?" That isn't quite right, because it only counts regular
draws. BLORP operations may have also rendered to a resource, which
needs to trigger flushing. To check for this, we also see if the
render and sometimes depth caches are non-empty.
We can also drop the "but there might already be stale data in the
cache even if we haven't emitted any commands yet" concern in the
comments. The kernel flushes caches between batches.
This may not be great but it's at least better than what was there.
By mistake, this was previously set for all shaders.
It is a vertex shader property, so it only makes sense to
set it for vertex shaders.
Signed-Off-By: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-By: Timothy Arceri <tarceri@itsqueeze.com>
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Unlike most of the cases in which we do this by hand, the new helper
properly handles non-32-bit pointers.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
There's no guarantee when build_deref_follower is called that the two
derefs have the same bit size destination. Insert a cast on the array
index in case we have differing bit sizes. While we're here, insert
some asserts in build_deref_array and build_deref_ptr_as_array. The
validator will catch violations here but they're easier to debug if we
catch them while building.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Because we already know the immediate right-hand parameter, we can
potentially save the optimizer a bit of work.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Fixes nearly all of the remaining
dEQP-GLES31.functional.texture.border_clamp.formats.* fails
Signed-off-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@chromium.org>
We need a version of fd6_tex_swiz() that just returns the composed
swizzle without building part of the TEX_CONST_0 state. So just
refactor the existing function to build more of the TEX_CONST_0 state,
and leave fd6_tex_swiz() simply composing swizzles.
The small IBO state change (to use LINEAR for smaller sizes/levels) is
to match the state in fd6_tex_const_0(). It seems like maybe tiled
actually works at the smaller sizes but not if minification is in play,
so best just to make images match what we do for textures.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@chromium.org>
This cap is mainly for working around an r600 texture swizzle issue,
but it also controls whether ARB_texture_buffer_object (with legacy
formats) is enabled. I suspect the missing I/L/A/LA faking is why
I had it set in the first place.
Thanks to Ilia for pointing out that I shouldn't be setting this.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
For texturing, we map alpha formats to the corresponding red format,
as many alpha formats are outright missing, and red is more efficient
when sampling anyway.
When rendering to A8_UNORM, we use that format directly, so the image
gets the shader output's .a/.w channel, rather than the .r/.x channel.
All other A* formats are non-renderable, so we can't do much and just
mark them as unsupported for rendering. Fortunately, GL only requires
rendering to A8_UNORM, so that works out.
According to Andre Heider and Timur Kristóf, this fixes font rendering
in Witcher 1 (via nine). Andre also reported that it fixes Unigine
Heaven (presumably via nine).
v2: Use the same swizzle for both sampler views and "render targets".
BLORP expects the read swizzle, and will take the inverse when
setting up the destination swizzle (and actually applying it in
the shaders). We ignore the format swizzle when setting up normal
rendering SURFACE_STATEs, which is necessary because it would be
an illegal shader channel select combination. Thanks to Jason
Ekstrand for pointing out that BLORP took an inverse swizzle.
Tested-by: Timur Kristóf <timur.kristof@gmail.com>
Tested-by: Andre Heider <a.heider@gmail.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Gallium might call us multiple times to bind subsets of the samplers,
at which point we'd recreate the table a bunch of times. It doesn't
really buy us anything to do it here - even if we defer to draw time,
the dirty tracking ensures we'll only do it on the first draw after a
bind_sampler_states() call.
We now use the number of samplers specified by the shader instead of
the binding count. If this number changes, we flag sampler state as
dirty so we re-upload a table with the right number of entries.
This also fixes a bug where ice->state.need_border_colors was never
unset, so once something needed border colors, the pool would always
be pinned in all future batches.
v2: Explicitly flag sampler states as dirty, rather than assuming that
bind_sampler_states() will be called if the program texture count
changes. While this may be true for st/mesa, it isn't the case for
Gallium HUD.
Tested-by: Timur Kristóf <timur.kristof@gmail.com>
Tested-by: Andre Heider <a.heider@gmail.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This is necessary for legacy texture buffer object formats, where we'll
need to use a swizzle to fake e.g. luminance.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This variable is now unused, so let's remove it.
Fixes: 9c4930946a (virgl: add encoder functions for new protocol)
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
This variable is now unused, so let's remove it.
Fixes: db77573d7b (virgl: modify how we handle GL_MAP_FLUSH_EXPLICIT_BIT)
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
This variable is now unused, so let's remove it.
Fixes: c19aedcf1a (virgl: don't mark unclean after a flush)
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
These variables are now unused, let's remove them to get rif of a few
warnings.
Fixes: f0e71b1088 (virgl: use transfer queue)
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
We allocate GGTT entries and physical addresses as we create engines
rather than having a fixed layout.
Context images now receive a parameter argument which is used to set up
pml4 & ring buffer addresses.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
We'll make them more parameterized in a later commit.
As this is just a transitional commit, we allow ourselves to leak the
context images allocated in get_context_init(). We'll fix this in the
next commit.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
Prepare aub write to deal with multiple engine instances. We don't
pass the instance number yet; this could be done in the future by
having a 2-dimensional array of struct engine.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Acked-by: Rafael Antognolli <rafael.antognolli@intel.com>
In the future we'll want error2aub to reuse the context image saved by
i915 instead of the default one we write in intel_dump_gpu.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
IGT has a test to hang the GPU that works by having a batch buffer
jump back into itself, triggering an infinite loop on the command stream.
As our implementation of the decoding is "perfectly" mimicking the
hardware, our decoder also "hangs". This change limits the number of
batch buffers we'll decode to 100 before we bail.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
An MI_BATCH_BUFFER_START in the ring buffer acts as a second-level
batchbuffer (i.e. jump back to the ring buffer when running into an
MI_BATCH_BUFFER_END).
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
The Gallium Nine state tracker now works on iris.
Also tested with GALLIUM_HUD and Star Wars: Knights of the Old
Republic on WINE (GL_ATI_fragment_shader).
Signed-off-by: Andre Heider <a.heider@gmail.com>
Reviewed-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Fixes a leak:
==7576== 320 (48 direct, 272 indirect) bytes in 1 blocks are definitely lost in loss record 26 of 26
==7576== at 0x4C2EE3B: malloc (vg_replace_malloc.c:309)
==7576== by 0x53EF0E4: ralloc_size (ralloc.c:119)
==7576== by 0x53EF0C2: ralloc_context (ralloc.c:113)
==7576== by 0x5471F64: nir_split_per_member_structs (nir_split_per_member_structs.c:176)
==7576== by 0x51288CF: anv_shader_compile_to_nir (anv_pipeline.c:216)
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Fixes leaks from anv_device_upload_nir:
==7345== 8,192 bytes in 2 blocks are definitely lost in loss record 24 of 24
==7345== at 0x4C2ED78: malloc (vg_replace_malloc.c:308)
==7345== by 0x4C31393: realloc (vg_replace_malloc.c:836)
==7345== by 0x54E0848: grow_to_fit (blob.c:67)
==7345== by 0x54E0BE5: blob_reserve_bytes (blob.c:166)
==7345== by 0x54E0C7C: blob_reserve_intptr (blob.c:186)
==7345== by 0x54704A7: nir_serialize (nir_serialize.c:1091)
==7345== by 0x512F97D: anv_device_upload_nir (anv_pipeline_cache.c:756)
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Use anv_gem_munmap for unmap when softpin in use, this corresponds to
anv_gem_mmap used in anv_block_pool_expand_range. This fixes valgrind
errors seen for each pool when softpin is in use:
==25581== 262,144 bytes in 1 blocks are definitely lost in loss record 31 of 31
==25581== at 0x50E77E8: anv_gem_mmap (anv_gem.c:96)
==25581== by 0x50EEE2B: anv_block_pool_expand_range (anv_allocator.c:543)
==25581== by 0x50EEB51: anv_block_pool_init (anv_allocator.c:477)
==25581== by 0x50EF7EF: anv_state_pool_init (anv_allocator.c:920)
==25581== by 0x510B8EB: anv_CreateDevice (anv_device.c:2031)
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
The NIR and TGSI paths are currently intertwined, which makes them
not only hard to follow but also hard to take advantage
of the differences in IR.
Here we take the first step to splitting that path apart. With
this we take the opportunity to no longer call the GLSL IR
optimisation passes after the final lowering calls for NIR. We
can instead just use the NIR passes which can produce better code
and should also result in faster compile times.
The speed-up can be measured in some dolphin uber shaders due to
no longer calling lower_if_to_cond_assign() for example
dolphin/ubershaders/120.shader_test goes from ~1.63 -> ~1.53
seconds on my machine.
There are some code changes as a result of not calling
lower_if_to_cond_assign(); this is because it flattens ifs that
contain UBOs whereas NIR's peephole select doesn't. This is
where most of the regressions in Max Waves happen with shader-db.
shader-db results (VEGA):
Totals from affected shaders:
SGPRS: 2349056 -> 2349640 (0.02 %)
VGPRS: 1322160 -> 1323300 (0.09 %)
Spilled SGPRs: 21190 -> 21527 (1.59 %)
Spilled VGPRs: 99 -> 99 (0.00 %)
Private memory VGPRs: 0 -> 0 (0.00 %)
Scratch size: 72 -> 72 (0.00 %) dwords per thread
Code Size: 57260904 -> 57270932 (0.02 %) bytes
Compile Time: 1107186 -> 1022942 (-7.61 %) milliseconds
LDS: 786 -> 786 (0.00 %) blocks
Max Waves: 391932 -> 391619 (-0.08 %)
Wait states: 0 -> 0 (0.00 %)
Reviewed-by: Eric Anholt <eric@anholt.net>
glsl_to_nir() is still missing support for converting certain
functions to NIR, so for those we use the GLSL IR optimisations
to remove the functions.
Reviewed-by: Eric Anholt <eric@anholt.net>
If the Vertex Shader uses EdgeFlag, the hardware requires that it is set up
as the last VERTEX_ELEMENT_STATE. If SGVS are added at draw time, we
also need to reconfigure the last 3DSTATE_VF_INSTANCING so its
VertexElementIndex points to the new Vertex Element that contains
the EdgeFlag.
So if draw parameters or edge flag are not used, the CSO generated at
iris_create_vertex_element is sent directly in the batches. But if the
edge flag is used, we adjust the last VERTEX_ELEMENT_STATE and the
last 3DSTATE_VF_INSTANCING using their alternative edge flag versions
we generate at iris_create_vertex_element and store in the CSO.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
You usually want to go find the highest pressure and figure out why you
couldn't spill or what pattern led to a bunch of pressure leading to that
point.
Now that we have a loop unrolling cost function and loop unrolling isn't
going to kill us the moment we have a 64-bit op in a loop, we can go
ahead and move 64-bit lowering later. This gives us the opportunity to
do more optimizations and actually let the full optimizer run even on
64-bit ops rather than hoping one round of opt_algebraic will fix
everything. This substantially reduces both fp64 shader compile times
and the resulting code size.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Now that we have a loop unrolling cost function and loop unrolling isn't
going to kill us the moment we have a 64-bit op in a loop, we can go
ahead and move 64-bit lowering later. This gives us the opportunity to
do more optimizations and actually let the full optimizer run even on
64-bit ops rather than hoping one round of opt_algebraic will fix
everything. This substantially reduces both fp64 shader compile times
and the resulting code size. On the vs-isnan-dvec test from piglit:
Before this commit:
1684.63s user 17.29s system 99% cpu 28:28.24 total
101479 instructions. 0 loops. 802452 cycles. 79:369 spills:fills.
Peak memory usage (according to massif): 1.435 GB
After this commit:
179.64s user 7.75s system 99% cpu 3:07.92 total
57316 instructions. 0 loops. 459287 cycles. 0:0 spills:fills.
Peak memory usage (according to massif): 531.0 MB
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Instead of trusting the caller to already have created a softfp64
function shader and added all its functions to our shader, we simply
take the softfp64 shader as an argument and do the function inlining
ourselves. This means that there are no more nasty functions lying around
that the caller needs to worry about cleaning up.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This pulls the guts of function inlining into a builder helper so that
it can be used elsewhere. The rest of the infrastructure is still
needed for most inlining cases to ensure that everything gets inlined
and only ever once. However, there are use-cases where you just want to
inline one little thing. This new helper also has a neat trick where it
can seamlessly inline a function from one nir_shader into another.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This doesn't really change anything as the functions will all get
inlined anyway. However it does let us do a bit of the work earlier and
in a common place.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Even though this is technically a step in the function inlining process
as laid out in nir_inline_functions.c, it's not really needed. We
already have constant initializers lowered here and no new ones are
added by appending the softfp64 functions.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Instead of looking at the devinfo directly, look at the lowering options we
provided to NIR. This is more accurate as it's now checking for "do we
need full software lowering" rather than a hardware bit.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The lowering we do for 64-bit instructions can cause a single NIR ALU
instruction to blow up into hundreds or thousands of instructions
potentially with control flow. If loop unrolling isn't aware of this,
it can unroll a loop 20 times which contains a nir_op_fsqrt which we
then lower to a full software implementation based on integer math.
Those 20 invocations suddenly get a lot more expensive than NIR loop
unrolling currently expects. By giving it an approximate estimate
function, we can prevent loop unrolling from going to town when it
shouldn't.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
We already have one internally for int64 but we don't have a similar one
for doubles so we'll have to make one.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
In the old code, we would generate the exact same instruction for
extract_u8(some_u64, 0) and extract_u8(some_u64, 1). The mask-a-word
trick only works for even-numbered bytes.
This fixes the (new) piglit test
tests/spec/arb_gpu_shader_int64/execution/fs-ushr-and-mask.shader_test.
v2: Use a SHR instead of an AND. This saves an instruction compared to
using two moves. Suggested by Jason.
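In plain C terms (illustrative only, not the actual i965 IR), the difference between the two bytes is:
  uint32_t word = (uint32_t) some_u64;   /* low dword of the 64-bit value */
  uint32_t b0 = word & 0xff;             /* byte 0: a mask alone works */
  uint32_t b1 = (word >> 8) & 0xff;      /* byte 1: a mask alone can't put it in the low bits */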
Fixes: 6ac2d16901 ("i965/fs: Fix extract_i8/u8 to a 64-bit destination")
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The parameter is never used, and it's not part of a common interface
idiom. Remove it.
src/intel/compiler/brw_interpolation_map.c: In function ‘brw_setup_vue_interpolation’:
src/intel/compiler/brw_interpolation_map.c:62:59: warning: unused parameter ‘devinfo’ [-Wunused-parameter]
const struct gen_device_info *devinfo)
^~~~~~~
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
In 61e009d2c4 we changed the number of components in the
vulkan_resource_index intrinsic and forgot to update RADV's code for
it.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 61e009d2c4 ("spirv: Use the same types for resource indices as pointers")
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Not complete, mostly just adding things as I encounter them in CTS. But
not getting far enough yet to hit most of the OpenCL.std instructions.
Anyway, this is better than nothing and covers the most common builtins.
v2: add hadd proof from Jason
move some of the lowering into opt_algebraic and create new nir opcodes
simplify nextafter lowering
fix normalize lowering for inf
rework upsample to use nir_pack_bits
add missing files to build systems
v3: split lines of iadd/sub_sat expressions
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
v2: add assert in else clause
make local group intrinsics 32 bit wide
v3: always use 32 bit constant for local_size
v4: add comment by Jason
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
We define it inside 'include/c99_math.h' so it is safe to use.
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The idea is that for repeated use of the same uniform, we could avoid
loading it on each consumer. The results look pretty good.
total instructions in shared programs: 6413571 -> 6521464 (1.68%)
total threads in shared programs: 154214 -> 154000 (-0.14%)
total uniforms in shared programs: 2393604 -> 2119629 (-11.45%)
total spills in shared programs: 4960 -> 4984 (0.48%)
total fills in shared programs: 6350 -> 6418 (1.07%)
Once we do scheduling at the NIR level, the register pressure (and thus
also instructions) issues we see here will drop back down.
This feels like the right tradeoff for threads vs uniforms, particularly
given that we often have very short thread segments right now:
total instructions in shared programs: 6411504 -> 6413571 (0.03%)
total threads in shared programs: 153946 -> 154214 (0.17%)
total uniforms in shared programs: 2387665 -> 2393604 (0.25%)
On each iteration of successfully spilling a reg, we'd allocate another
copy of temp_registers, and when decrementing thread count we'd allocate
another copy of the graph. These all got cleaned up on freeing the
compile.
It is already obvious whether the job is building a container or running
a mesa build, so let's drop that prefix so that we can see more
information on the screen (e.g. in the jobs list on a pipeline page).
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Previously, only the driver_location was set for all variables,
but constants need to use the location field instead. This change
is necessary because the nine state tracker can produce non-packed
constants whose location needs to be explicitly set.
Signed-Off-By: Timur Kristóf <timur.kristof@gmail.com>
Tested-by: Andre Heider <a.heider@gmail.com>
Tested-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
This patch extracts the interpolation mode translation
into a separate function called ttn_translate_interp_mode,
adds support for TGSI_INTERPOLATE_COLOR which was missing,
and also sets the proper interpolation mode to output
variables, which were not set previously.
Signed-Off-By: Timur Kristóf <timur.kristof@gmail.com>
Tested-by: Andre Heider <a.heider@gmail.com>
Tested-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
v2: fix is_shadow, is_array and txq
Some drivers (eg. iris) need the presence of sampler variables and derefs
so that they can count them to determine the number of samplers used.
This change also makes the output NIR closer to what glsl_to_nir outputs.
Signed-Off-By: Timur Kristóf <timur.kristof@gmail.com>
Tested-by: Andre Heider <a.heider@gmail.com>
Tested-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Previously, FACE was hard-coded as a sysval, but TTN emulated
it incorrectly. Also, POSITION was not supported when it was
a sysval. This patch fixes these by allowing both of them to
be sysvals or inputs, based on driver capabilities. It also
fixes the TGSI FACE emulation based on the TGSI spec.
Signed-Off-By: Timur Kristóf <timur.kristof@gmail.com>
Tested-by: Andre Heider <a.heider@gmail.com>
Tested-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
With this patch, tgsi_to_nir will output NIR that is tailored to
the given pipe, by reading its capabilities and adjusting the NIR code
to those capabilities similarly to how glsl_to_nir works.
It also adds an optimization loop that brings the output NIR in line
with what glsl_to_nir outputs. This is necessary for the same reason
why glsl_to_nir has its own optimization loop: currently not every
driver does these optimizations yet.
For uses which cannot pass a pipe_screen we also keep a variant
called tgsi_to_nir_noscreen which keeps the old behavior.
Signed-Off-By: Timur Kristóf <timur.kristof@gmail.com>
Tested-by: Andre Heider <a.heider@gmail.com>
Tested-by: Rob Clark <robdclark@gmail.com>
Acked-By: Eric Anholt <eric@anholt.net>
This patch makes it possible for freedreno to pass a pipe_screen
to tgsi_to_nir. This will be needed when tgsi_to_nir supports reading
pipe capabilities.
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Tested-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Note that locations can be set in different units, and the multiplier
argument caters to supporting these different units. For example,
st_glsl_to_nir uses dwords (4 bytes) so the multiplier should be 4,
while tgsi_to_nir uses bytes, so the multiplier should be 16.
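Assuming the multiplier argument described here is the one on nir_lower_uniforms_to_ubo (mentioned in the next entry), usage would look roughly like this sketch (exact signature assumed):
  nir_lower_uniforms_to_ubo(nir, 4);    /* st_glsl_to_nir: locations in dwords */
  nir_lower_uniforms_to_ubo(nir, 16);   /* tgsi_to_nir: locations in bytes */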
Signed-Off-By: Timur Kristóf <timur.kristof@gmail.com>
Tested-by: Andre Heider <a.heider@gmail.com>
Tested-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
The nir_lower_uniforms_to_ubo function is useful outside of
mesa/state_tracker, and in fact is needed to produce NIR for
drivers that have the PIPE_CAP_PACKED_UNIFORMS capability.
Signed-Off-By: Timur Kristóf <timur.kristof@gmail.com>
Tested-by: Andre Heider <a.heider@gmail.com>
Tested-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Previously, tgsi_to_nir was a single big function, and this patch
intends to make the code easier to understand by splitting it up
to multiple smaller pieces.
Signed-Off-By: Timur Kristóf <timur.kristof@gmail.com>
Tested-by: Andre Heider <a.heider@gmail.com>
Tested-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Acked-by: Rob Clark <robdclark@gmail.com>
The TGSI spec says LIT needs a "greater than" comparison. NIR doesn't have that,
so let's use "less than" and swap the arguments. Previously "greater than or equal"
was used by tgsi_to_nir, which is incorrect.
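A hedged nir_builder sketch of the swap ('b', 'src0' and 'src1' are placeholders, not copied from the patch):
  /* "src0 > src1" has no direct opcode, so build "src1 < src0" instead. */
  nir_ssa_def *gt = nir_flt(b, src1, src0);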
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Tested-by: Andre Heider <a.heider@gmail.com>
Tested-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
According to the TGSI spec, ARR needs to do a rounding and then
a float-to-integer conversion which was missing. This patch also
makes the rounding a bit more efficient by using nir_fround_even
instead of the previous nir_ffloor+nir_fadd trick.
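A hedged nir_builder sketch of the sequence described ('b' and 'src' are placeholders, not copied from the patch):
  /* Round to nearest even, then do the float-to-integer conversion. */
  nir_ssa_def *addr = nir_f2i32(b, nir_fround_even(b, src));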
Signed-Off-By: Timur Kristóf <timur.kristof@gmail.com>
Tested-by: Andre Heider <a.heider@gmail.com>
Tested-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
This patch adds a shader_info field that tells the driver to use window
space coordinates for a given vertex shader. It also enables this feature
in radeonsi (the only NIR-capable driver that supported it in TGSI),
and makes tgsi_to_nir aware of it.
Signed-Off-By: Timur Kristóf <timur.kristof@gmail.com>
Tested-by: Andre Heider <a.heider@gmail.com>
Tested-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This lets us emit the VPM_WRITEs directly from
nir_intrinsic_store_output() (useful once NIR scheduling is in place so
that we can reduce register pressure), and lets future NIR scheduling
schedule the math to generate them. Even in the meantime, it looks like
this lets NIR DCE some more code and make better decisions.
total instructions in shared programs: 6429246 -> 6412976 (-0.25%)
total threads in shared programs: 153924 -> 153934 (<.01%)
total loops in shared programs: 486 -> 483 (-0.62%)
total uniforms in shared programs: 2385436 -> 2388195 (0.12%)
Acked-by: Ian Romanick <ian.d.romanick@intel.com> (nir)
We were printing only when the channel was exactly the start channel, so
scalarized loads/stores would be missing the name on the rest.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
We need more space than just a 32-bit scalar and we have to burn all
that space anyway so we may as well expose it to the driver. This also
fixes a subtle bug when UBOs and SSBOs have different pointer types.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
With the new deref changes, the old pointer_offset version may not be
the right one to call. Just call the generic one and let it sort it
out.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
We can't pull it from the variable type because it might be an array of
blocks and not just the one block. While we're here, throw in some
error checking.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: mesa-stable@lists.freedesktop.org
This buffer goes along side the CPU data structure and may contain
pointers, bindless handles, or any other descriptor information.
Currently, all descriptors are size zero and nothing goes in the buffer
but this commit sets up the framework we will need later.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Pull the common code out of the two entrypoints into the helper which
fetches the push descriptor set for us. Now that it does more than just
get a thing, call it anv_cmd_buffer_push_descriptor_set.
Cc: "19.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
The descriptor set layout code in our driver has undergone many changes
over the years. Some of the fields which were once essential are now
useless or nearly so. The has_dynamic_offsets field was completely
unused accept for the code to set and hash it. The per-stage indices
were only being used to determine if a particular binding had images,
samplers, etc. The fact that it's per-stage also doesn't matter because
that binding should never be accessed by a shader of the wrong stage.
This commit deletes a pile of cruft and replaces it all with a
descriptive bitfield which states what a particular descriptor contains.
This merely describes the data available and doesn't necessarily dictate
how it will be lowered in anv_nir_apply_pipeline_layout.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This is what we're actually storing in the descriptor set and consuming
when we bind surface states. This commit renames image_count to
image_param_count a few places and moves the decision to not count image
params on gen9+ into anv_descriptor_set.c when we build the layout.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Make them all take a device followed by a set. This is consistent
with how the actual Vulkan entrypoint parameters are laid out.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This commit just puts the free list code together as part of the pool
instead of having it inlined into the descriptor set create code.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
In our backend, the successor edges from the blocks only point to where
QPU control flow goes, not where the notional control flow goes from a
"break" or "continue" modifying the execution mask to resume writing to
some channels later. As a result, this attempt at restricting live ranges
ended up missing the live range of a value where a conditional
break/continue was present in a loop before the later def of a variable.
The previous commit ended up fixing the problem that the flag tried to
solve.
Fixes glsl-vs-loop-continue.shader_test and/or
glsl-vs-loop-redundant-condition.shader_test based on register allocation
results.
In the backend, we often have condition codes on writes to variables, such
that there's no screening def anywhere and the previous live ranges
algorithm would conclude that the start of the range extends to the start
of the program. However, we do know that the live range can only extend
as early as you can reach from all blocks writing to the variable.
The motivation was that, while we have a couple of hacks to try to promote
conditional writes up to being a def within the block, the exec_mask one
was broken and needed a replacement.
Based on c3c1aa5aeb ("intel/fs: Restrict live intervals to the subset
possibly reachable from any definition.").
Ubuntu Bionic is shipping ninja 1.8.2. Therefore, we do not need to
download v1.6.0 manually any more.
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
This fixes some CTS crashes with:
dEQP-VK.renderpass2.suballocation.attachment_write_mask.attachment_count_8.start_index_*
Ideally, we should check cmd_buffer->cs->max_dw because there is
likely enough space (the internal clear draws allocate space), but
keep it that way for consistency.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This function was never used, and isn't properly guarded by HAVE_LIBDRM,
breaking the build on systems that don't have libdrm.
Let's just remove it.
Fixes: 7552fcb7b9 "egl: add base EGL_EXT_device_base implementation"
Reported-by: Timo Aaltonen <tjaalton@debian.org>
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Emil Velikov <emil.velikov@collabora.com>
We already propagate coord_components correctly and did not have
layer restrictions for ycbcr formats.
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This does not seem to fix anything ATM but is the right thing to do.
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Fixes: f3e91e78a3 ("anv: add nir lowering pass for ycbcr textures")
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Now i965 supports mesa_glthread=true like Gallium drivers do.
According to Markus (degasus), the Citra emulator now runs ~30% faster.
Emmanuel (linkmauve) also reported that the Dolphin emulator improved
by 2.8x on one game. (Both of those still need to be added to drirc.)
An Intel Mesa CI run with mesa_glthread=true appears to be happy.
Bioshock Infinite's benchmark mode seems to be around 15-20% faster
on my Skylake GT4 at 1920x1080.
Tested-by: Markus Wick <markus@selfnet.de>
Tested-by: Emmanuel Gil Peyrot <linkmauve@linkmauve.fr>
Tested-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
We zero out the prog data anyway and, now that bias is always zero, this
function is accomplishing nothing.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This commit moves our handling of gl_NumWorkgroups over to work like our
handling of other special bindings in the Vulkan driver. We give it a
magic descriptor set number and teach emit_binding_tables to handle it.
This is better than the bias mechanism we were using because it allows
us to do proper accounting through the bind map mechanism.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
When we have a larger sampler index, we get into the "high sampler"
scenario and need an instruction header. Even in SIMD8, this pushes the
instruction over the sampler message size maximum of 11 registers.
Instead, we have to lower TXD to TXL.
Fixes: cb98e0755f "intel/fs: Support min_lod parameters on texture..."
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
No idea how this fell through the cracks besides the fact that the
sampler bound at 0 almost always works and the CTS isn't amazing. In
any case, this appears to have been broken for almost forever.
Reviewed-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: mesa-stable@lists.freedesktop.org
Use new nir opcode nir_[i/u]mul_2x32_64 and extract lower and higher 32
bits as needed instead of emitting mul and mul_high.
v2: Surround the switch case with curly braces (Jason Ekstrand)
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Optimize mulExtended to use 32x32->64 multiplication.
Drivers which are not based on NIR, they can set the
MUL64_TO_MUL_AND_MUL_HIGH lowering flag in order to have same old
behavior.
v2: Add missing condition check (Jason Ekstrand)
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Suggested-by: Matt Turner <mattst88@gmail.com>
Suggested-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
On Gen 8 and 9, the "mul" instruction supports a 64-bit destination type. We
can reduce our 64x64 int multiplication from 4 instructions to 3.
Also, instead of emitting two mul instructions, we can emit a single mul
instruction and extract the low/high 32 bits from the 64-bit result for
[i/u]mulExtended.
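For illustration only (this sketches the semantics, not the backend code),
[i/u]mulExtended amounts to one full-width multiply followed by splitting
the result:

#include <stdint.h>

/* Sketch: a single 32x32->64 multiply, then split the result,
 * instead of separate mul (low half) and mul_high (high half). */
static void umul_extended(uint32_t a, uint32_t b, uint32_t *hi, uint32_t *lo)
{
   uint64_t full = (uint64_t)a * (uint64_t)b;
   *lo = (uint32_t)full;          /* low 32 bits  */
   *hi = (uint32_t)(full >> 32);  /* high 32 bits */
}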
v2: 1) Allow lower_mul_high64 to use new opcode (Jason Ekstrand)
2) Add lower_mul_2x32_64 flag (Matt Turner)
3) Remove associative property as bit size is different (Connor
Abbott)
v3: Fix indentation and variable naming convention (Jason Ekstrand)
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Fix anv_entrypoints.{c,h} and anv_extensions.{c,h} missing dependencies
Rename the variable labels according to targets and python scripts
Align the build rules as per Automake for simplification
Fixes building errors during rebuilds due to missing dependencies
(v2) Fixed a missing $(VULKAN_API_XML) reference
Fixes: 9a508b7 ("android: anv/extensions: fix generated sources build")
Fixes: dd088d4bec ("anv/extensions: Generate a header file with extension tables")
Signed-off-by: Mauro Rossi <issor.oruam@gmail.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Cc: "19.0" <mesa-stable@lists.freedesktop.org>
MinGW release build complains about a possible out-of-bounds
array access. Test i < 4 to silence it.
Reviewed-by: Neha Bhende <bhenden@vmware.com>
Reviewed-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
MinGW release builds warn about use of a possibly uninitialized
variable here.
Reviewed-by: Neha Bhende <bhenden@vmware.com>
Reviewed-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
virtio-gpu falls back to software rendering when 3D features
are unavailable (since 6c5ab), and kms_swrast is more
feature-complete than swrast.
v2: Add comment (Emil)
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Now that the buffer object usage history tracks if it is
being used as vertex buffer object, we can restrict setting
the ST_NEW_VERTEX_ARRAYS bit to dirty on glBufferData calls to
buffers that are potentially used as vertex buffer object.
Also put a note that the same could be done for index arrays
used in indexed draws.
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
We already track the usage history for buffer objects
in a lot of aspects. Add GL_ARRAY_BUFFER and
GL_ELEMENT_ARRAY_BUFFER to gl_buffer_object::UsageHistory.
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
If a selected unit causes a data hazard, the whole block gets cut short.
So, we preview for data hazards _while_ selecting units.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Tested-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
smul comes first in the pipeline, before vmul. Until we have a full
instruction scheduler, it's better to have vmul prioritized to maximize
bundle size.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Tested-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
This special-case was needlessly added and breaks purely offscreen
rendering (when there is no scanout involved)
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Previously, we forced a #0 inline constant tacked on for the lut
instructions to mirror the blob's behaviour, which caused some
suboptimal codegen due to our constant inlining implementation. Instead,
just don't force a constant at all.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Tested-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
The operations of gallium->clear() and the hardware callbacks are
fundamentally independent. This routine decouples them by routing shared
information via panfrost_job, allowing the hardware half to be deferred
to the fragment job generation.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
At the moment, Panfrost state is ad hoc, which creates issues for FBOs.
This commit imports the skeleton of the v3d_job structure as
panfrost_job, in preparation for refactors to organize this state.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
This is purely for conformance, since it's not actually possible to do
XFB on TCS output varyings. However we do have to make sure we record
the names correctly, and this removes an extra level of array-ness from
the names in question.
Fixes KHR-GL45.tessellation_shader.single.xfb_captures_data_from_correct_stage
v2: Add comment to the new program_resource_visitor::process function.
(Ilia Mirkin)
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108457
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: 19.0 <mesa-stable@lists.freedesktop.org>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Avoids regression on:
KHR-GLES*.core.tessellation_shader.single.xfb_captures_data_from_correct_stage
that is uncovered by the following patch.
"glsl: fix recording of variables for XFB in TCS shaders"
v2: Rebased over glsl: fix recording of variables for XFB in TCS shaders
v3: Move this patch before "glsl: fix recording of variables for XFB in TCS
shaders" to avoid temporal regressions. (Illia Mirkin)
Cc: 19.0 <mesa-stable@lists.freedesktop.org>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
EXT_texture_query_lod provides the same functionality for GLES as
the ARB extension of the same name does for GL.
v2: Set ES 3.0 as minimum GLES version as required by the extension
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Not a perfect solution, and the "pressure" target is hard-coded. But it
doesn't really seem to hurt much in the common case, and avoids exploding
register usage in dEQP ssbo tests.
So this should serve as a stop-gap solution until I have time to rewrite
the scheduler.
Hurts slightly in instruction count, but gains (reduces) slightly the
register usage in shader-db. Fixes ~150 dEQP-GLES31.functional.ssbo.*
that were failing due to RA fail.
Signed-off-by: Rob Clark <robdclark@gmail.com>
This might be enough for iris and possibly r600 (when it gets NIR).
This appears to work for iris.
v2:
* change cap return so DOUBLES == 2 means sw emu
v3:
* Refactor using int64/doubles lowering options which were added
into nir options
* Remove DOUBLES == 2 added in v2
[jordan: Remove "2" value on PIPE_CAP_DOUBLES]
[jordan: Use lowering options added to nir options]
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Instead of calculating the int64 and doubles lowering options each
time a shader is preprocessed, save and use the values in
nir_shader_compiler_options.
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This will allow the options to be visible under nir_shader->options,
which will allow the gallium state_tracker to use the driver preferred
settings during glsl_to_nir.
Suggested-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This ran afoul of Iris's use of nir_lower_clamp_color_outputs which
applies fsat() before writes to vertex shader color outputs.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Fixes: 7725d60938 ("intel/fs: Emit better code for b2f(inot(a)) and b2i(inot(a))")
As requested by Ken ;)
v2: Also decode simple batches (Caio)
Fix u_vector usage issues (Lionel)
v3: Make binding/instruction/state/surface available (Lionel)
v4: Going through device pools for simple batches (Lionel)
Centralize search BO callbacks into anv_device.c (Lionel)
v5: Clear decoded batch buffer var after use (Caio)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
v3d may be built as part of a set of drivers in a system not requiring
NEON, but we know V3D devices will be paired with CPUs with NEON so we
should be able to use this asm.
Fixes: 0c05198d6b ("v3d: Always enable the NEON utile load/store code.")
I noticed this while looking at a shader that was affected by Tim's
"more loop unrolling" series.
In review, Tim Arceri asked:
> Why the hurt on Gen6+ is this something that should be in the late
> optimisations pass?
As far as I can tell, it's just because our scheduler is terrible. In
all the fragment shaders that I looked at (some hurt shaders were from
other stages), only one of the SIMD8 or SIMD16 version would be hurt.
In many of those cases, the other SIMD width is improved (e.g.,
shaders/closed/steam/brutal-legend/3990.shader_test).
Often it looks like the scheduler decides to differently schedule a SEND
that occurs somewhere early in the shader. Once that happens, everything
is different.
I looked at one vertex shader that was hurt (from Goat Simulator). In
that case, both the floor and fract are used. The optimization
eliminates the add, and it should allow better scheduling. In the area
of the FRC and RNDD instructions, the scheduler does the right thing.
However, later in the shader a MAD and an ADD get scheduled
differently, and that makes it slightly worse.
In light of this, I tried adding some "is_used_once" mark-up, and that
did not fix all the cycles regressions. It also did a lot more harm
than good on SKL (helped 82 vs. hurt 241).
All Gen6+ platforms had similar results. (Skylake shown)
total instructions in shared programs: 15437001 -> 15435259 (-0.01%)
instructions in affected programs: 213651 -> 211909 (-0.82%)
helped: 988
HURT: 0
helped stats (abs) min: 1 max: 27 x̄: 1.76 x̃: 1
helped stats (rel) min: 0.15% max: 11.54% x̄: 1.14% x̃: 0.59%
95% mean confidence interval for instructions value: -1.89 -1.63
95% mean confidence interval for instructions %-change: -1.23% -1.05%
Instructions are helped.
total cycles in shared programs: 383007378 -> 382997063 (<.01%)
cycles in affected programs: 1650825 -> 1640510 (-0.62%)
helped: 679
HURT: 302
helped stats (abs) min: 1 max: 348 x̄: 23.39 x̃: 14
helped stats (rel) min: 0.04% max: 28.77% x̄: 1.61% x̃: 0.98%
HURT stats (abs) min: 1 max: 250 x̄: 18.43 x̃: 7
HURT stats (rel) min: 0.04% max: 25.86% x̄: 1.41% x̃: 0.53%
95% mean confidence interval for cycles value: -13.05 -7.98
95% mean confidence interval for cycles %-change: -0.86% -0.50%
Cycles are helped.
Iron Lake and GM45 had similar results. (GM45 shown)
total instructions in shared programs: 5043616 -> 5043010 (-0.01%)
instructions in affected programs: 119691 -> 119085 (-0.51%)
helped: 432
HURT: 0
helped stats (abs) min: 1 max: 27 x̄: 1.40 x̃: 1
helped stats (rel) min: 0.10% max: 8.11% x̄: 0.66% x̃: 0.39%
95% mean confidence interval for instructions value: -1.58 -1.23
95% mean confidence interval for instructions %-change: -0.72% -0.59%
Instructions are helped.
total cycles in shared programs: 128139812 -> 128135762 (<.01%)
cycles in affected programs: 3829724 -> 3825674 (-0.11%)
helped: 602
HURT: 0
helped stats (abs) min: 2 max: 486 x̄: 6.73 x̃: 6
helped stats (rel) min: 0.02% max: 4.85% x̄: 0.19% x̃: 0.10%
95% mean confidence interval for cycles value: -8.40 -5.05
95% mean confidence interval for cycles %-change: -0.22% -0.16%
Cycles are helped.
Reviewed-by: Elie Tournier <tournier.elie@gmail.com>
I have not investigated the result of doing this during code
generation. That should be possible, but it would be a bit more
effort.
All Gen6+ platforms had nearly identical results. (Skylake shown)
total cycles in shared programs: 370961508 -> 370961367 (<.01%)
cycles in affected programs: 5174 -> 5033 (-2.73%)
helped: 2
HURT: 0
Iron Lake and GM45 had similar results. (Iron Lake shown)
total instructions in shared programs: 8206587 -> 8206589 (<.01%)
instructions in affected programs: 1325 -> 1327 (0.15%)
helped: 0
HURT: 2
total cycles in shared programs: 187657422 -> 187657428 (<.01%)
cycles in affected programs: 11566 -> 11572 (0.05%)
helped: 0
HURT: 2
This change has almost no effect right now. However, removing this
patch (but leaving the patch "intel/fs: Generate if instructions with
inverted conditions") after adding a patch that removes !(a < b) -> (a
>= b) optimizations (like
https://patchwork.freedesktop.org/patch/264787/) has the following
results on Skylake:
Skylake
total instructions in shared programs: 15071804 -> 15071806 (<.01%)
instructions in affected programs: 640 -> 642 (0.31%)
helped: 0
HURT: 2
total cycles in shared programs: 369914348 -> 369916569 (<.01%)
cycles in affected programs: 27900 -> 30121 (7.96%)
helped: 4
HURT: 15
helped stats (abs) min: 2 max: 112 x̄: 30.00 x̃: 3
helped stats (rel) min: 0.28% max: 12.28% x̄: 3.34% x̃: 0.40%
HURT stats (abs) min: 2 max: 758 x̄: 156.07 x̃: 81
HURT stats (rel) min: 0.20% max: 74.30% x̄: 16.29% x̃: 16.91%
95% mean confidence interval for cycles value: 12.68 221.11
95% mean confidence interval for cycles %-change: 3.09% 21.23%
Cycles are HURT.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Other places will need to do this soon to properly handle source
swizzles. The patch looks a little odd, but the change is pretty
straightforward. All of the swizzle and mask handling is moved out,
but the code for handling move instructions and vecN instructions
remains in nir_emit_alu.
I'm not terribly pleased with the "need_dest" parameter, but
get_nir_dest is (somewhat surprisingly) destructive. I am open to
suggestions of alternatives.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
To allow cmod propagation from a MOV in a sequence like:
and(16) g31<1>UD g20<8,8,1>UD g22<8,8,1>UD
mov.nz.f0(16) null<1>F g31<8,8,1>D
A similar change to the vec4 backend had no effect.
Somewhere between c1ec582059 and 40fc4b5acd (1,094 commits) the
effectiveness of this patch diminished, and as of commit d7e0d47b9d
(nir: Add a bunch of b2[if] optimizations) this optimization no longer
has any effect on any platform.
A later patch "intel/fs: Use De Morgan's laws to avoid logical-not of a
logic result on Gen8+," generates some instruction sequences that
require this change in order for cmod propagation to make progress.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This reverts the following commits:
71a76a47cc "swr/codegen: fix autotools build"
7763e664ce "meson/swr: replace hard-coded path with current_build_dir()"
773b3ceaca "swr/rast: Fix autotools and scons codegen"
16e10b8c30 "swr/rast: Add general SWTag statistics"
b45a15a39f "swr/rast: Add string handling to AR event framework"
8608a747aa "swr/rast: Add initial SWTag proto definitions"
93cd9905c8 "swr/rast: Cleanup and generalize gen_archrast"
The last one in this list broke all the build systems that can build
this (meson, autotools & scons).
See MR !304 for more details:
https://gitlab.freedesktop.org/mesa/mesa/merge_requests/304
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
UBWC requires space for a metadata or flag buffer
that contains compression data. Each 16x4 tile of image
data corresponds to a byte of compression data.
This buffer needs to be stored before (at a lower address)
the image buffer in order to match up with what the
display driver expects. This allows the display driver to directly
scan out the UBWC buffer.
Universal bandwidth compression (UBWC) reduces memory bandwidth
by compressing buffers. This compression takes the form of
a full-sized image buffer as well as a smaller metadata buffer.
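A rough sketch of the size relationship described above (the helper name is
hypothetical, and any extra pitch/height alignment the hardware requires is
ignored here):

#include <stdint.h>

/* Each 16x4 tile of image data maps to one byte of UBWC metadata. */
static uint32_t ubwc_meta_size(uint32_t width, uint32_t height)
{
   uint32_t tiles_x = (width  + 15) / 16;
   uint32_t tiles_y = (height +  3) / 4;
   return tiles_x * tiles_y;  /* one byte per tile */
}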
I'm guessing a previous version of this script used an index-based map
of entrypoints, but that's not the case anymore.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Unlike the direct case, indirect array derefs of vectors
are handled like regular derefs, with the exception that we ignore any
vector entry that has SSA values when performing a load. Such SSA
values don't help loading of the indirect unless we emit an if-ladder.
Copy_derefs are supported for indirects.
Also enable two tests that now pass.
v2: Remove unnecessary temporaries. Be clearer when identifying the
case where copy_entry doesn't help when we are dealing with an
indirect array_deref (of a vector). (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
When looking up an entry to use, always prefer an equal match, as it
is more likely to contain reusable SSA defs or derefs to propagate.
This will be necessary when adding entries with array derefs of
vectors, because we don't want the vector if the equal entry (an array
deref of that vector) is present.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Both on an actual array and on a vector, and an extra test on a vector
mixing direct and indirect access. The vector tests are disabled and
will be enabled by a later commit.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
When direct array deref is used on a vector type (for loads and
stores), copy_prop_vars is now smart enough to propagate values it knows
about.
Given a 'vec4 v', storing to v[3] will update the copy entry for v and
it is equivalent to a write to v.w. Loading from v[1] will try first
to see if there's a known value for v.y -- and drop the load in that
case.
The copy entries still always refer to the entire vectors, so the
operations happen on the parent deref (the 'vector') and the values
are fixed accordingly.
It might be the case now that certain entries have not only different
SSA defs in each element but also those come from different components
than they are set to, because stores to individual elements always
come from an SSA definition with a single component.
Tests related to these cases are now enabled.
v2: Instead of asserting on invalid indices, "load" an undef and
remove the store. (Jason)
v3: Merge code path for the cases of is_array_deref_of_vector into the
regular code path. Add a base_index parameter to
value_set_from_value. (code changes by Jason)
v4: Removed the get_entry_for_deref helper, now being used only once.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Also replace uses of 0xf with the appropriate full mask created from
the number of components.
Note that an increase of MAX might make us change how the data is
stored later on, but for now at least we make sure the pass is not
hardcoded.
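As a sketch of the replacement (num_components stands in for whatever count
the pass has at hand; not the literal source change):

/* the full write mask for an n-component value; 0xf is just the n == 4 case */
unsigned full_mask = (1u << num_components) - 1;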
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The name reflected this function's role back when the pass also did dead
write elimination. So rename it to what it does now, which is setting
a value using another value; and narrow the argument list.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
A pipe_resource can be shared by all the pipe_context's hanging off the
same pipe_screen.
Changes from v2 -> v3:
- add locking with mtx_*() to resource and screen (Marek)
Changes from v3 -> v4:
- drop rsc->lock, just use screen->lock for the entire serialization (Marek)
- simplify etna_resource_used() flush condition, which also prevents
potentially flushing resources twice (Marek)
- don't remove resources from screen->used_resources in
etna_cmd_stream_reset_notify(), they may still be used in other
contexts and may need flushing there later on (Marek)
Changes from v4 -> v5:
- Fix coding style issues reported by Guido
Changes from v5 -> v6:
- Add missing locking in etna_transfer_map(..) (Boris)
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Signed-off-by: Marek Vasut <marex@denx.de>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Tested-by: Marek Vasut <marex@denx.de>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Tested-by: Boris Brezillon <boris.brezillon@collabora.com>
Changes v1 -> v2:
- Avoid the GPU sampling from the resource that gets mutated by the
transfer map by setting DRM_ETNA_PREP_WRITE.
Changes v2 -> v3:
- make use of likely(..)
- drop minor optimization regarding rsc->layout == ETNA_LAYOUT_LINEAR
- better documentation why DRM_ETNA_PREP_WRITE is needed
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
Saves us from calling etna_bo_map(..) and saves us from doing the
same offset calcs for map() and unmap() operations.
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
ETC2 is supported with HALTI0, however that implementation is buggy
in hardware. The blob driver does per-block patching to work around
this. We need to swap colors for t-mode etc2 blocks.
Changes v2 -> v3:
- Drop redundant format check
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Acked-by: Lucas Stach <l.stach@pengutronix.de>
The scalar back-end uses SHADER_OPCODE_SEND for all surface messages so
we no longer need the non-logical opcodes there. Prefix them VEC4 so
it's clear that they're only used by the vec4 back-end.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
The unused typed surface read/write support in the vec4 back-end has
been dropped and the fs back-end now uses SHADER_OPCODE_SEND for all
image and buffer ops. There's no reason to keep these opcodes around
anymore.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Since switching to SHADER_OPCODE_SEND for image operations, we no longer
need the non-logical opcode.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
All of the actual abstraction (except possibly setting size_written)
happens as part of the logical opcodes. The only thing that the surface
builder is providing at this point is extra levels of functions to call
through. I'm going to be adding bindless image support soon and all the
extra abstraction here is just getting in the way.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
It makes more sense to start at the surface then move on to the address
and then the data. Also, this is a really good test of whether or not
we got all the places that use the sources by explicit integer number.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
This change applies the workaround suggested by Bill Deegan on the
affected SCons versions.
It also adds a comment with the URL explaining why we were
customizing the decider and max_drift in the first place, as I had
forgotten all about it.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109443
Tested-by: liviuprodea@yahoo.com
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
This DTD can be used to validate the drirc xml:
$ xmllint --noout --valid 00-mesa-defaults.conf
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
VGEM and kms_swrast were introduced to work with one another.
All we do is CPU rendering to dumb buffers. There is no reason to carve
out GPU memory, increasing the memory pressure on a device that could
make a better use of it.
Note:
- The original code did not work out of the box, since the dumb buffer
ioctls are not exposed to render nodes.
- This requires libdrm commit 3df8a7f0 ("xf86drm: fallback to MODALIAS
for OF less platform devices")
- The non-kms, swrast is unaffected by this change.
v2:
- elaborate what and how is/isn't working (Eric)
- simplify driver_name handling (Eric)
v3:
- move node_type outside of the loop (Eric)
- kill no longer needed DRM_RENDER_DEV_NAME define
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
Make the code a bit easier to read.
As a bonus point this makes it obvious that we forgot to call
_eglAddDevice() for the device - do so.
v2:
- s/dpy/disp/ (Eric)
- free(driver_name) on dri2_load_driver_swrast() failure (Eric)
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Mathias Fröhlich <Mathias.Froehlich@web.de> (v1)
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
When emitting a branch in a block, it does not make sense to continue
processing further instructions, as they will not be reachable.
This fixes a nasty case with a loop containing a branch whose then-part
and else-part both exit the loop:
%1 = OpLabel
OpLoopMerge %2 %3 None
OpBranchConditional %false %2 %2
%3 = OpLabel
OpBranch %1
%2 = OpLabel
[...]
We know that block %1 will always branch to block %2, which is the merge
block for the loop, and thus a break is emitted. If we keep processing
further instructions, we will process the conditional branch and emit the
corresponding NIR conditional, which leads to instructions after the break.
This fixes dEQP-VK.graphicsfuzz.continue-and-merge.
CC: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Usually the uniforms will be assigned locations and have their slots
counted automatically, but for builtin shaders the location assignment
is manual. So count them too, otherwise we get num_uniforms == 0.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Call XShmDetach to allow the X server to free the shared memory.
Fixes: bcd80be49a "drisw/glx: use XShm if possible"
Signed-off-by: Ray Zhang <zhanglei002@gmail.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
This helps improve compile times. For example the shader-db dolphin
shader shaders/dolphin/ubershaders/120.shader_test goes from
~1.69 -> ~1.57 seconds on my machine with this change.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Some types of params such as some builtins are always padded. We
need to keep track of this so we can restore the list correctly.
Here we also remove a couple of cache entries that are not actually
required as they get rebuilt by the _mesa_add_parameter() calls.
This patch fixes a bunch of arb_texture_multisample and
arb_sample_shading piglit tests for the radeonsi NIR backend.
Fixes: edded12376 ("mesa: rework ParameterList to allow packing")
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The optimization in 4cd1a0be76 introduced a replacement of :
cmp(8).z.f0.0 vgrf11.y:D, vgrf10.xxxx:D, vgrf2.xyyy:D
...
cmp(8).nz.f0.0 null.x:D, vgrf11.yyyy:D, 0D
By :
cmp(8).z.f0.0 vgrf15.x:D, vgrf10.xxxx:D, vgrf2.yyyy:D
...
mov(8) vgrf11.y:D, vgrf15.yyyy:D
The first cmp instruction is storing in x while the second mov is
sourcing from y. We need to take into account where the replacement on
the scan_inst destination is going to store things so that the
replacement mov can source things from the correct location.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 4cd1a0be76 ("i965/vec4: Propagate conditional modifiers from more compares to other compares")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109759
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Now that freedreno has create_with_modifiers(), this "hack" is needed to
make some cases work. Copied from vc4.
Fixes: 41ddf1d1
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
In freedreno_gmem.c, a gmem_align of 0x8000 is used. The alignment used here
should be the same.
Fixes: 912a9c8d
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
When the output directory was changed, the BUILT_SOURCES and build-rule
target-path was no longer correct, leading to races to generate the
sources and compiling them.
Fix this by updating both sets of paths, so automake sees what's going on
here.
Fixes: 773b3ceaca ("swr/rast: Fix autotools and scons codegen")
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Alok Hota <alok.hota@intel.com>
This is useful for r600, since there the abs source modifier is not supported
for ops with three sources.
v2: Use correct logic to enable lowering to abs source mod (Eric Anholt)
Signed-off-by: Gert Wollny <gw.fossdev@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This is a partial revert of 9d81cd ("virgl: Pass resource size and
transfer offsets").
The adjustments made in the client code mean there are various
mismatches when transferring data.
Let's fall back to protocol version 0 and deprecate protocol
version 1. We can still use the protocol version 1 slots for
a shared memory transfer mechanism later.
Fixes:
dEQP-GLES31.functional.copy_image.mixed.viewclass_128_bits_mixed.*_renderbuffer
Reviewed-By: Gert Wollny <gert.wollny@collabora.com>
Header xmmintrin.h conditionally includes emmintrin.h, which defines
_MM_DENORMALS_ZERO_MASK; add an ifndef to fix this warning.
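The guard is of this general shape (a sketch only; 0x0040 is the usual MXCSR
denormals-are-zero bit, shown here purely for illustration):

#ifndef _MM_DENORMALS_ZERO_MASK
#define _MM_DENORMALS_ZERO_MASK 0x0040  /* only defined if the header didn't already */
#endif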
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Otherwise with VNDK enabled we fail linking:
src/gallium/targets/dri/Android.mk: error: gallium_dri (native:vendor)
should not link to libbacktrace.vendor (native:vndk_private)
This option makes it possible to use libbacktrace only when VNDK is not
enabled.
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Previously, these were guarded by an #ifdef, which is generally bad form.
This patch instead guards them behind an environment variable.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Additional VERTEX_ELEMENT_STATE entries are used to store basevertex,
baseinstance and drawid, updating the DWordLength of the
3DSTATE_VERTEX_ELEMENTS command.
This passes all piglit spec.*draw_parameters.* tests
and VK-GL-CTS KHR-GL45.shader_draw_parameters_tests.* tests.
Now we only mark a dirty_update when parameters are changed or
when we have an indirect draw.
We enable PIPE_CAP_DRAW_PARAMETERS on Iris.
There is no edge flag support in the Vertex Elements setup.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Currently clover will advertise any device that advertises
PIPE_CAP_COMPUTE, even if it does not support PIPE_SHADER_IR_NATIVE,
which is the IR used internally by clover.
This avoids clover advertising devices as available even though they
actually are not supported.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
Program linking options are only valid when creating an executable, and
only if the libraries being linked were created with the
`-enable-link-options` option, which itself is only valid when creating a
library.
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
If creating a library, do not allow non-compiled objects in it, as
executables are not allowed, and libraries would make it really hard to
enforce the "-enable-link-options" flag.
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
Reviewed-by: Aaron Watry <awatry@gmail.com>
From the OpenCL 1.2 Specification, Section 5.6.2 (about clBuildProgram):
> If program is created with clCreateProgramWithBinary, then the
> program binary must be an executable binary (not a compiled binary or
> library).
Reviewed-by: Aaron Watry <awatry@gmail.com>
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
* Avoid warnings from references to deprecated CL 1.0, 1.2, 2.0 and 2.1 APIs.
* Avoid warnings from not defining CL_TARGET_OPENCL_VERSION.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
Seems like dxvk used integer builtins without setting the flat
interpolation decoration.
I believe in the current spec the app is required to set these,
but in the meantime to avoid breaking things in stable releases
(and so close to release for 19.0), only expand the interpolation
to float16 and struct (which cannot be builtins as our spirv parser
lowers the builtin block).
Fixes: f324784104 "radv: Allow interpolation on non-float types."
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
The array index should come before the sample-id. Also exclude all isam variants
(which take integer texel coords) from the offset addition.
Fixes dEQP-GLES31.functional.texture.multisample.samples_1.use_texture_*_2d_array
Signed-off-by: Rob Clark <robdclark@gmail.com>
We also need to put in the output mov. Possibly we could just fix up the
output register to read it directly from the dummy, but that is more
work and I guess dEQP is probably the only time you encounter this.
Fixes dEQP-GLES31.functional.shaders.opaque_type_indexing.atomic_counter.const_literal_fragment
Signed-off-by: Rob Clark <robdclark@gmail.com>
The indirect offset does not affect the index buffer size. Fixes all of
dEQP-GLES31.functional.draw_indirect.compute_interop.large.drawelements_combined_grid_100x100_drawcount_*
with drawcount > 1.
Signed-off-by: Rob Clark <robdclark@gmail.com>
We weren't propagating the array info for cases where result of atomic
is array/reg. This can happen, for example, if result is part of a phi
web lowered to regs.
Fixes dEQP-GLES31.functional.ssbo.atomic.compswap.*
Signed-off-by: Rob Clark <robdclark@gmail.com>
Fixes a bunch of deqp ssbo tests that use multiple ssbo blocks packed
into a single buffer.
Note that the a5xx value seems suspicious, but this is what the blob seems to
advertise.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Use the (nopN) encoding for slightly denser shaders.. this lets us fold
nop instructions into the previous alu instruction in certain cases.
Shouldn't change the # of cycles a shader takes to execute, but reduces
the size. (ex: glmark2 refract goes from 168 to 116 instructions)
Currently only enabled for a6xx, but I think we could enable this for
a5xx and possibly a4xx.
Signed-off-by: Rob Clark <robdclark@gmail.com>
We were overflowing instrlen (which is # of groups of 16 instructions)
in a couple of dEQP tests, causing GPU hangs:
dEQP-GLES31.functional.ubo.random.all_per_block_buffers.13
dEQP-GLES31.functional.ubo.random.all_per_block_buffers.20
Signed-off-by: Rob Clark <robdclark@gmail.com>
This fixes a failed assertion in glDeleteLists() for the following
case:
list = glGenLists(1);
glDeleteLists(list, 1);
when those are the first display list commands issued by the
application.
When we generate display lists, we plug in empty lists created with
the make_list() helper. This function uses the OPCODE_END_OF_LIST
opcode but does not call dlist_alloc() which would set the
InstSize[OPCODE_END_OF_LIST] element to non-zero.
When the empty list was deleted, we failed the InstSize[opcode] > 0
assertion.
Typically, display lists are created with glNewList/glEndList so we
set InstSize[OPCODE_END_OF_LIST] = 1 in dlist_alloc(). That's why
this bug wasn't found before.
To fix this failure, simply initialize the InstSize[OPCODE_END_OF_LIST]
element in make_list().
The game oolite was hitting this.
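The fix itself is essentially a one-line initialization (a sketch, assuming
InstSize is the per-opcode size table referred to above):

/* in make_list(): give the terminating opcode a non-zero size so that
 * deleting an empty list passes the InstSize[opcode] > 0 assertion */
InstSize[OPCODE_END_OF_LIST] = 1;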
Fixes: https://github.com/OoliteProject/oolite/issues/325
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Calculating the scissor rectangle fields with the y flipped (0 on top)
can generate negative values that will cause an assertion failure later on,
as the scissor fields are all unsigned. We must clamp the bbox values
again to make sure they don't exceed the fb_height. Also fixed a
calculation error.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108999
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109594
v2:
- I initially clamped the values inside the if (Y is flipped) case
and I made a mistake in the calculation: the clamp of the bbox[2] should
be a check if (bbox[2] >= fbheight) bbox[2] = fbheight - 1 instead and I
shouldn't have changed the ScissorRectangleYMax calculation. As the
fixed code is equivalent with using CLAMP instead of MAX2 at the top of
the function when bbox[2] and bbox[3] are calculated, and the 2nd is more
clear, I replaced it. (Nanley Chery)
v3:
- Reversed the CLAMP change in bbox[3] as the API guarantees that the
viewport height is positive. (Nanley Chery)
v4:
- Added nomination for the mesa-stable branch and the link to the second
bugzilla bug (Nanley Chery)
CC: <mesa-stable@lists.freedesktop.org>
Tested-by: Paul Chelombitko <qamonstergl@gmail.com>
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
This DTD can be used to validate the output and make sure any parsers
out there can handle it:
$ xmllint --noout --valid driinfo.xml
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
The Loader/Validation-Layers repository allows the user to choose where
header files are installed. On my system I chose /usr/include,
thinking it was the obvious "base" location, but it turns out the
headers end up being installed right there rather than in a vulkan
subdirectory. On Debian/Ubuntu the selected installation path is
/usr/include/vulkan, so just go with that.
Hopefully other distros don't choose another path.
Note that the validation layer doesn't provide a .pc file so we have
no way of querying where the headers are installed.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109739
Acked-by: Eric Engestrom <eric.engestrom@intel.com>
If, the first time a fork was created, the job creating the containers was
manually cancelled, this would have left the fork unable to use the CI
(until the next automatic regeneration of the container).
Avoid this by always running the container-generation job, even though
99% of the time it will spin up, see that the container exists and shut
down.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
It's the current maximum supported by the kernel. Stay consistent with
the rest of Mesa and use the same number.
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Some platforms lack O_CLOEXEC. loader_open_device() handles those
appropriately, so use the helper.
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
An earlier commit introduced support for Haiku yet did not properly
annotate the loader/xmlconfig dependencies.
Thus we ended up adding inc_loader for each !haiku platform - see
659910eda09a96bf0ecdc731508b98ec6cb01e21.
One piece remained though - the wayland platform. Hence the following
would fail:
meson -Dgallium-drivers=etnaviv -Ddri-drivers=''\
-Dtools=etnaviv -Dplatforms=wayland -Dglx=disabled \
build/
Cc: Alexander von Gluck IV <kallisti5@unixzen.com>
Reported-by: Boris Brezillon <boris.brezillon@collabora.com>
Fixes: 834d221512 ("meson: Add Haiku platform support v4")
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Tested-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
The difference between the three functions is the list of mandatory
driver extensions. Pass that as an argument to the common helper.
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Frank Binns <frank.binns@imgtec.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Fixes following valgrind warning:
==27561== Conditional jump or move depends on uninitialised value(s)
==27561== at 0x667856B: value_set_ssa_components (nir_opt_copy_prop_vars.c:78)
==27561== by 0x667A1C4: copy_prop_vars_block (nir_opt_copy_prop_vars.c:797)
Fixes: 62332d139c "nir: Add a local variable-based copy propagation pass"
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
If we have a MOV of a uniform value available to spill, that's one of our
best choices. We can just not spill the value, and emit a new load of the
uniform as the fill. This saves bothering the TMU and the thrsw, and is
the same cost in uniforms (since the spill offset is a uniform anyway).
This doesn't have a huge impact on shader-db, since there aren't a whole
lot of spills and we usually copy-prop the uniforms at the VIR level such
that the only uniform MOVs are from vir_lower_uniforms:
total instructions in shared programs: 6430292 -> 6430279 (<.01%)
total uniforms in shared programs: 2386023 -> 2385787 (<.01%)
total spills in shared programs: 4961 -> 4960 (-0.02%)
total fills in shared programs: 6352 -> 6350 (-0.03%)
However, I'm interested in dropping the uniforms copy-prop in the backend,
since it would be cheaper to not load repeated uniforms if we have the
registers to spare. This also saves many spills on
dEQP-GLES31.functional.ubo.random.all_per_block_buffers.20, which is what
motivated a bunch of my recent backend work in the first place:
before: 46 spills, 106 fills, 3062 instructions
after: 0 spills, 0 fills, 2611 instructions
Since we are using bitmasks, we can easily check if we have any
current value that is potentially uploaded on array setup.
So check for any potential vertex program input that is not
already a VAO-enabled array, and only flag an array update if there is
a potential overlap.
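Conceptually the test is a bitmask intersection; a rough sketch with
illustrative names (not the actual Mesa fields or helpers):

/* inputs the vertex program may read that are not enabled VAO arrays
 * can only be satisfied by current values, so only then flag the update */
uint64_t maybe_current = vp_input_mask & ~vao_enabled_mask;
if (maybe_current)
   flag_vertex_array_update();  /* hypothetical stand-in for the dirty flagging */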
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
Fix the logic for buffer full check on alloc.
This patch just takes the fix Nicolai attached to the bug report
and updates it to work on master.
Fixes: e0f0d3675d ("radeonsi: factor si_query_buffer logic out of si_query_hw")
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109561
The nir_builder swizzling improvement to not emit extra MOVs resulted in
nir_lower_tex() trying to rewrite an SSA def to itself, triggering the
assert on all texturing in v3d. There's no work to be done in this case,
so just stop asserting.
Fixes: 743700be1f ("nir/builder: Don't emit no-op swizzles")
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
If no framebuffer is bound, get the number of samples and the
image format from the render pass.
This fixes new CTS dEQP-VK.geometry.layered.*.secondary_cmd_buffer.
Cc: 18.3 19.0 <mesa-stable@lists.freedesktop.org>
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This is a common pattern from HLSL->SPIRV translation
and supported in HW by all current NIR backends.
vkpipeline-db results anv (SKL):
total instructions in shared programs: 6403130 -> 6402380 (-0.01%)
instructions in affected programs: 204084 -> 203334 (-0.37%)
helped: 208
HURT: 0
total cycles in shared programs: 1915629582 -> 1918198408 (0.13%)
cycles in affected programs: 1158892682 -> 1161461508 (0.22%)
helped: 107
HURT: 86
shader-db results on i965 (KBL):
total instructions in shared programs: 15284592 -> 15284568 (<.01%)
instructions in affected programs: 81683 -> 81659 (-0.03%)
helped: 24
HURT: 0
total cycles in shared programs: 375013622 -> 375013932 (<.01%)
cycles in affected programs: 40169618 -> 40169928 (<.01%)
helped: 13
HURT: 9
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
SPIR-V shifts are undefined for values >= bitsize, but SM5 shifts
are defined to only use the least significant bits.
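A minimal sketch of the masking this implies for a 32-bit shift (illustrative,
not the translator code itself):

#include <stdint.h>

/* SM5 shifts only consume the low bits of the count, so mask it before
 * emitting the SPIR-V shift, which is undefined for count >= bitsize. */
static uint32_t sm5_shl(uint32_t value, uint32_t count)
{
   return value << (count & 31);  /* 31 == bit size - 1 for 32-bit values */
}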
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
For split indirect sends we have to put the EOT parameter in the
extended descriptor as well as the instruction itself so just calling
brw_inst_set_eot is insufficient. Moving the EOT handling into
the send_indirect_[split]_message helper lets us handle it properly.
The extension NV_depth_clamp is written against OpenGL 1.2.1, and
since GLES 2.0 is based on GL 2.0 there is no reason not to enable
this extension also for GLES >= 2.0.
v2: Use EXT_depth_clamp that has been proposed to Khronos
v3: - Fix check for extension availability (Erik Faye-Lund)
- Also fix the test in is_enabled
v4: - Test both, ARB and EXT extension (Erik)
v5: - Fix white space errors (Erik)
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
I was converting them at pipe_surface creation time, but not when
answering queries about whether formats support rendering. This caused
a lot of FBO incomplete errors for formats that ought to be supported.
Fixes "Child of Light", which uses PIPE_FORMAT_R8G8B8X8_UNORM_SRGB.
Also fixes Witcher 1 using wined3d (GL) according to Timur Kristóf.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109738
For texture attachments, 'f' is texImg->_BaseFormat, but for
renderbuffer attachments, 'f' is att->Renderbuffer->InternalFormat.
InternalFormat may be something like GL_RGB8, which causes our
(f == GL_RGB) check to fail. Switch to using a proper _BaseFormat,
which drops the size.
Fixes dEQP-GLES31.functional.draw_buffers_indexed.random.
max_required_draw_buffers.15 on iris when combined with a driver fix.
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Timur Kristóf <timur.kristof@gmail.com>
apply_implicit_conversion only converts and checks base types, but we
need actual type equality for function returns, otherwise you can
return a vec2 from a function declared as returning a float.
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
On MRT-capable systems, the framebuffer format is encoded as a 64-bit
word in the render target descriptor. Previously, the two 32-bit
words were exposed as opaque hex values. This commit identifies a 12-bit
Mali swizzle and a 2-bit channel counter, removing some of the magic. It
also adds decoding support for the AFBC and MSAA enable bits, which were
already known but otherwise ignored in pandecode.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
These ops were discovered by invoking the correspondingly named GLSL
functions. The rounding ops here behave exactly as expected and are mapped
to their corresponding NIR ops where applicable. The ffma behaves as a
LUT instruction and requires some special argument packing (since
Midgard normally only allows for 2 arguments); this quirk will be
addressed in the future, but for now FMA is still lowered.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This flag corresponds to what was MEM_COHERENT_LOCAL in the vendor
driver, which seems to influence the cache policy, necessary for the
varying temporary storage but nothing else.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Potentially, the kernel could optimize these allocations, or perhaps we
can save on mapping costs.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
For reasons that are still unclear (speculation included in the comment
added in this patch), the tiler? metadata has a fast path that we were
not enabling; there looks to be a possible time/memory tradeoff, but the
details remain unclear.
Regardless, this patch improves performance dramatically. Particular
wins are for geometry-heavy scenes. For instance, glmark2-es2's
Phong-shaded bunny, rendering at fullscreen (2400x1600) via GBM, jumped
from ~20fps to hitting vsync cap at 60fps. Gains are even more obvious
when vsync is disabled, as in glmark2-es2-wayland.
With this patch, on GLES 2.0 samples not involving FBOs, it appears
performance is converging with (and sometimes surpassing) the blob.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
The nir_swizzle helper is used some on its own, but it's also called by
nir_channel and nir_channels which are used everywhere. It's pretty
quick to check while we're walking the swizzle anyway whether or not
it's an identity swizzle. If it is, we now don't bother emitting the
instruction. Sure, copy-prop will clean it up for us but there's no
sense making more work for the optimizer than we have to.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
This header has been unused since f8f2520e88 ("st/mesa: Remove
unnecessary headers"). And in the more than 8 years since, this
hasn't been useful. So let's just get rid of it.
Signed-off-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
From the bash manual:
string1 == string2
string1 = string2
True if the strings are equal. = should be used with the test
command for POSIX conformance.
Test using array deref on vectors in loads and stores. These are
marked DISABLED_ as this optimization is currently not done.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Replace find_next_intrinsic(intrinsic, after) with
get_intrinsic(intrinsic, index). This makes it slightly more convenient
to check the resulting loads/stores/copies, since in most tests we
know which one we care about. The cost is to perform more traversals,
but for such tests this is not a problem.
Added the ASSERT_EQ() on count to some tests missing it, so the
indices queried are always expected to find something.
Also, drop two nir_print_shader leftover calls in a test.
v2: Remove redundant assertions. nir_src_comp_as_uint already
asserts what we need. (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
When a copy_entry is SSA, store not only the nir_ssa_def* for each
component, but also the source component they come from. At the
moment this is always a match (i.e. 'component[i] == i'), because all
the operations for a copy_entry happen using definitions with the same
size. This prepares the code for array_derefs of vectors, in which
'component[i] != i'.
Also, extract setting all SSA components into a function of its own.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Disabled by default, to be used during development. Adding those
so I don't rewrite some ad-hoc version of them every time I'm working
with this pass.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
For now these derefs are not handled, so don't let these get into the
copies list -- which would cause wrong propagations. For load_derefs,
do nothing. For store_derefs, invalidate whatever the store is
writing to. For copy_derefs, invalidate whatever the copy is writing
to.
These cases will happen once derefs to SSBOs/UBOs are kept around long
enough to get optimized by copy_prop_vars.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Rather than only lowering if all srcs are scalarizable, we instead
check that at least one src is scalarizable.
We change the undef type to return false, otherwise it will cause
regressions when it is the only scalarizable src.
total instructions in shared programs: 13219105 -> 13024547 (-1.47%)
instructions in affected programs: 1153797 -> 959239 (-16.86%)
helped: 581
HURT: 74
total cycles in shared programs: 333968972 -> 324807922 (-2.74%)
cycles in affected programs: 129809402 -> 120648352 (-7.06%)
helped: 571
HURT: 131
total spills in shared programs: 57947 -> 29130 (-49.73%)
spills in affected programs: 53364 -> 24547 (-54.00%)
helped: 351
HURT: 0
total fills in shared programs: 51310 -> 25468 (-50.36%)
fills in affected programs: 44882 -> 19040 (-57.58%)
helped: 351
HURT: 0
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This fixes a bug where SWR will fail to render in cases with large
buffer allocations, e.g. very large meshes whose vertex buffers exceed
2GB.
CC: <mesa-stable@lists.freedesktop.org>
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
Note that emit_intrinsic_load_image() already swaps a .3d flag with an
.a flag. I tried doing things the other way around (going back to .3d)
but that didn't work. And treating cube images as 2d array is also what
blob does, so let's just go with that.
Fixes dEQP-GLES31.functional.image_load_store.cube.load_store.*
Signed-off-by: Rob Clark <robdclark@gmail.com>
Fixes nearly all of dEQP-GLES31.functional.texture.border_clamp.* when
run after a test that binds textures used in vertex shader.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Fixes dEQP-GLES31.functional.shaders.opaque_type_indexing.sampler.const_literal.vertex.samplercubeshadow
and a few other similar tests that do multiple texture fetches into
individual components of a packed output. Mostly works around the
issue mentioned in ra_block_find_definers().
Signed-off-by: Rob Clark <robdclark@gmail.com>
vulkan_core.h defines non-dispatchable handles as (struct object *)
on 64-bit systems, but uint64_t on 32-bit systems. The former can be
implicitly cast to void *, but the latter requires an explicit cast.
While here, %lu is the wrong format specifier for uint64_t on 32-bit
systems, so use PRIu64, fixing a warning.
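A small standalone illustration of the portable formatting (the handle here
is just an arbitrary uint64_t, not ANV's actual code):

  #include <inttypes.h>
  #include <stdio.h>

  static void
  print_handle(uint64_t handle)
  {
     /* %lu is only right where unsigned long is 64-bit; PRIu64 expands to
      * the correct conversion on both 32-bit and 64-bit systems. */
     printf("handle = %" PRIu64 "\n", handle);
  }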
Reported-by: Mike Lothian <mike@fireburn.co.uk>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
On one side, when emitting 3DSTATE_SF, VertexSubPixelPrecisionSelect is
used to select between 8 bit subpixel precision (value 0) or 4 bit
subpixel precision (value 1). As this value is not set, it takes the
value 0, so 8 bits are used.
On the other side, the Vulkan CTS tests use a reference rasterizer with
8 bit precision to compute the expected values; if that rasterizer is
changed to use 4 bits, as ANV was advertising so far, some of the tests
fail.
So it seems ANV is actually using 8 bits.
v2: explicitly set 3DSTATE_SF::VertexSubPixelPrecisionSelect (Jason)
v3: use _8Bit definition as value (Jason)
v4: (by Jason)
anv: Explicitly set 3DSTATE_CLIP::VertexSubPixelPrecisionSelect
This field was added on gen8 even though there's an identically defined
one in 3DSTATE_SF.
CC: Jason Ekstrand <jason@jlekstrand.net>
CC: Kenneth Graunke <kenneth@whitecape.org>
CC: 18.3 19.0 <mesa-stable@lists.freedesktop.org>
Signed-off-by: Juan A. Suarez Romero <jasuarez@igalia.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
float16 types can have non-flat interpolation so set up the HW
correctly for that.
Fixes: 62024fa775 "radv: enable VK_KHR_16bit_storage extension / 16bit storage features"
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
It causes more trouble than it's worth. Now vl tries to create compute
shaders without all the proper checking. Since there's really no
(current) way to use compute on nv50, just mark it disabled.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109742
Fixes: f6ac0b5d71 ("gallium/auxiliary/vl: Add compute shader to support video compositor render")
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Fixes the following building error:
including ./external/mesa/Android.mk ...
build/core/base_rules.mk:183: *** external/mesa/src/intel:
MODULE.TARGET.STATIC_LIBRARIES.libmesa_isl_tiled_memcpy already defined by external/mesa/src/intel.
make: *** [build/core/ninja.mk:164: out/build-android_x86_64.ninja] Error 1
ISL_TILED_MEMCPY_FILES is isl/isl_tiled_memcpy_normal.c,
and that source file includes the isl_tiled_memcpy.c source.
Fixes: 96bb328 ("iris: add Android build")
Signed-off-by: Mauro Rossi <issor.oruam@gmail.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
This is needed for gen_clflush.h intrinsics to work on 32-bit builds.
i965 and anv both set these, and iris needs to as well.
Tested-by: Mark Janes <mark.a.janes@intel.com>
Even though the hardware spec claims that any "integer DWord multiply"
operation is affected by the regioning restrictions of CHV/BXT/GLK,
this is inconsistent with the behavior of the simulator and with
empirical evidence, so return false from has_dst_aligned_region_restriction()
for such instructions as a micro-optimization.
Tested-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Strides up to 32B can be implemented for the source regions of most
instructions by leveraging either the vertical or the horizontal
stride of the hardware Align1 region. The main motivation for this is
that currently the lower_integer_multiplication() pass will happily
double the stride of one of the 32-bit sources, which can blow up if
the stride of the original source was already the maximum value
allowed by the hardware.
An alternative would be to use the regioning legalization pass in
order to lower such strides into the composition of multiple legal
strides, but that would be somewhat less efficient.
This showed up as a regression from my commit cbea91eb57
in Vulkan 1.1 CTS tests on CHV/BXT platforms; however, it was really a
pre-existing problem that had affected conformance on other platforms
without native support for integer multiplication. CHV/BXT were
getting around it because the code I removed in that commit had the
"fortunate" side effect of emitting narrower regions that didn't hit
the hardware stride limit after lowering. Beyond fixing the
regression this fixes ~90 additional Vulkan 1.1 subgroup CTS tests on
ICL (that's why this patch is marked for inclusion in mesa-stable even
though the original regressing patch was not).
According to Jason, a nearly equivalent change had been committed
previously as e8c9e65185 and then (mistakenly?) reverted as
a31d038208.
Cc: mesa-stable@lists.freedesktop.org
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109328
Reported-by: Mark Janes <mark.a.janes@intel.com>
Tested-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This is required in combination with the following commit, because
otherwise if a source region with an extended 8+ stride is present in
the instruction (which we're about to declare legal) we'll end up
emitting code that attempts to write to such a region, even though
strides greater than four are still illegal for the destination.
Tested-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Because the "low" temporary needs to be accessed with word type and
twice the original stride, attempting to preserve the alignment of the
original destination can potentially lead to instructions with illegal
destination stride greater than four. Because the CHV/BXT alignment
restrictions are now being enforced by the regioning lowering pass run
after lower_integer_multiplication(), there is no real need to
preserve the original strides anymore.
Note that this bug can be reproduced on stable branches, but
back-porting would be non-trivial, because the fix relies on the
regioning lowering pass recently introduced.
Tested-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Currently the execution type calculation will return a bogus value in
cases like:
mov_indirect(8) vgrf0:w, vgrf1:w, vgrf2:ud, 32u
which will be considered to have a 32-bit integer execution type even
though the actual indirect move operation will be carried out with
16-bit precision.
Similarly there's no need to apply the CHV/BXT double-precision region
alignment restrictions to such control sources, since they aren't
directly involved in the double-precision arithmetic operations
emitted by these virtual instructions. Applying the CHV/BXT
restrictions to control sources was expected to be harmless if mildly
inefficient, but unfortunately it exposed problems at codegen level
for virtual instructions (namely the SHUFFLE instruction used for the
Vulkan 1.1 subgroup feature) that weren't prepared to accept control
sources with an arbitrary strided region.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109328
Reported-by: Mark Janes <mark.a.janes@intel.com>
Fixes: efa4e4bc5f "intel/fs: Introduce regioning lowering pass."
Tested-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This reduces the time spent in nir_opt_cse() by almost a half.
The massif tool from valgrind reported no change in peak
memory use with the large Dolphin uber shaders I used for
testing.
Reviewed-by: Thomas Helland <thomashelland90@gmail.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This currently regresses KHR-GL4x.compute_shader.resource-texture,
but that's a pre-existing bug (https://bugs.freedesktop.org/109113)
which should be fixed up once we have fast clear support.
If we change the aux state for a given resource, we need to re-emit the
binding table pointers for any stage that has such a resource bound. Since
we don't track that, flag IRIS_ALL_DIRTY_BINDINGS and emit all of them.
If iris_resource_get_handle() gets called without a context, we can't
resolve the resource. Hopefully it shouldn't be compressed anyway, so
let's just add an assert to ensure it's correct.
CCS_E can fall back to CCS_D with incompatible format views
CCS_D is pretty useless without fast clears and we may as well use NONE,
but we're surely going to hook those up at some point, so may as well
just go ahead and do it now...
We can safely assume that the given resource is depth, depth/stencil,
or stencil already. The stencil-only case is easily detectable with
a single format check, and all other cases are handled identically.
This saves some CPU overhead.
This just moves the code for dealing with pipe_shader_state /
pipe_compute_state / iris_uncompiled_shader to the end of the file.
Now that those do precompiles, they want to call the actual compile
functions. Putting them at the end eliminates the need for a bunch
of prototypes.
Caio noted that this is not necessary on Gen8+:
"Before Gen8, there was a historical configuration control field to
swizzle address bit[6] for in X/Y tiling modes. This was set in
three different places: TILECTL[1:0], ARB_MODE[5:4], and
DISP_ARB_CTL[14:13]. For Gen8 and subsequent generations, the
swizzle fields are all reserved, and the CPU's memory controller
performs all address swizzling modifications."
Since we don't support earlier hardware, we can skip it entirely.
The Vulkan driver only sets this if color writes are disabled, which
is more conservative - but would require us to inspect blend state.
(If color writes are enabled, we don't need to force anything, because
the internal signal is already correct. But it shouldn't hurt to do so.)
I was misreading i965 - the 3DSTATE_WM::PixelShaderKillsPixel bit from
Gen < 8 needed all of this, but the 3DSTATE_PS_EXTRA bit only needs
prog_data->uses_kill.
Suggested by Chris Wilson, if only to make it obvious to the human
readers that these are volatile reads. It may also be necessary for
the compiler in a few cases.
When switching from bo_wait to sync-points, I missed that we turned an
if (not landed) bo_wait into a while (not landed) check_syncpt(), which
has a timeout of 0. This meant, rather than sleeping until the batch
is complete, we'd busy-loop, continually asking the kernel "is the batch
done yet???". This is not what we want at all - if we wanted a busy
loop, we'd just loop on !snapshots_landed. We want to sleep.
Add an effectively infinite timeout so that we sleep.
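Roughly, the before/after looks like this (reusing check_syncpt and
snapshots_landed from the description above; the exact arguments are
illustrative):

  /* before: a timeout of 0 never sleeps, so this spins until the batch lands */
  while (!q->map->snapshots_landed)
     check_syncpt(ice, q->syncpt, 0);

  /* after: an effectively infinite timeout lets the kernel put us to sleep */
  if (!q->map->snapshots_landed)
     check_syncpt(ice, q->syncpt, INT64_MAX);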
Instead of allocating 4K BO per query object, we can create a large blob
of memory and split it into pieces as required.
Having one BO for multiple query objects, we don't want to wait on all
of them; instead, when we write the last snapshot, we create a sync point,
and check sync points while waiting on a particular object.
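A rough sketch of the suballocation idea (sizes, struct, and helper names are
purely illustrative):

  #define QUERY_BLOB_SIZE (64 * 1024)  /* one large BO instead of 4K per query */
  #define QUERY_SLOT_SIZE 256

  struct query_allocator {             /* hypothetical */
     struct iris_bo *bo;
     uint32_t next;
  };

  static uint32_t
  alloc_query_slot(struct query_allocator *a)
  {
     if (!a->bo || a->next + QUERY_SLOT_SIZE > QUERY_BLOB_SIZE) {
        a->bo = alloc_query_blob(QUERY_BLOB_SIZE);   /* hypothetical */
        a->next = 0;
     }
     uint32_t offset = a->next;
     a->next += QUERY_SLOT_SIZE;
     return offset;                    /* the query lives at a->bo + offset */
  }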
Signed-off-by: Sagar Ghuge <sagar.ghuge@intel.com>
I inherited this from i965. It would be nice to track the state size
so INTEL_DEBUG=color,bat decoding can print the right number of e.g.
binding table entries or blend states, but...without a single point
of entry for state, it's a little tricky to get right. Punt for now,
and drop the dead code in the meantime.
i965 re-emits 3DSTATE_CONSTANT_* on every batch, so there's no point in
restoring the constants from the context. Iris actually re-pins the
constant buffers properly across the batch, and avoids re-emitting the
constant packets unless it's necessary. So, we don't want ISP_DIS.
Previously I had a hack in st/mesa to make it stop remapping
VARYING_SLOT_* into the naively compacted slots, which aren't
what we want. But that wasn't very feasible, as we'd have to
update all drivers, or add capability bits, and it gets messy fast.
It turns out that I can map back to VARYING_SLOT_* in about 5 LOC,
so let's just do that. It removes the need for hacks, and is easy.
This also fixes KHR-GL46.enhanced_layouts.xfb_capture_struct, which
apparently with my hack was still getting the wrong slot info.
1. Set a render condition. We emit it immediately on the render
engine, and stash q->bo as ice->state.compute_predicate in case
the compute engine needs it.
2. Clear the render condition. We were incorrectly leaving a stale
compute_predicate kicking around...
3. Dispatch compute. We would then read the stale compute predicate,
and try to load it into MI_PREDICATE_DATA. But q->bo may have been
freed altogether, causing us to try and use garbage memory as a BO,
adding it to the validation list, failing asserts, and tripping
EINVALs in execbuf.
Huge thanks to Mark Janes for narrowing this sporadic GL CTS failure
down to a list of 48 tests I could easily run to reproduce it. Huge
thanks to the Valgrind authors for the memcheck tool that immediately
pinpointed the problem.
In st_nir_lower_uniforms_to_ubo(), all UBO accesses in the shader have
their index incremented to make room for uniforms in constbuf0. So if
we use UBOs, we always need to include the extra binding entry in the
table.
To avoid doing these checks both when compiling the shader and when
assigning binding tables, store the num_cbufs in iris_compiled_shader.
Fixes a bunch of tests from Piglit and CTS that use UBOs but don't use
uniforms or system values. Note that some tests fitting these criteria
were passing because the UBOs were moved to be push
constants (avoiding the problem).
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
iris_bufmgr allocates addresses across the entire screen, since buffers
may be shared between multiple contexts. There used to be a single
special address, IRIS_BINDER_ADDRESS, that was per-context - and all
contexts used the same address. When I moved to the multi-binder
system, I made a separate memory zone for them. I wanted there to be
2-3 binders per context, so we could cycle them to avoid the stalls
inherent in pinning two buffers to the same address in back-to-back
batches. But I figured I'd allow 100 binders just to be wildly
excessive/cautious.
What I didn't realize was that we need 2-3 binders per *context*,
and what I did was allocate 100 binders per *screen*. Web browsers,
for example, might have 1-2 contexts per tab, leading to hundreds of
contexts, and thus binders.
To fix this, we stop allocating VMA for binders in bufmgr, and let
the binder handle it itself. Binders are per-context, and they can
assign context-local addresses for the buffers by simply doing a
ringbuffer style approach. We only hold on to one binder BO at a
time, so we won't ever have a conflicting address.
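A rough sketch of the ringbuffer-style assignment (structure, constants, and
helpers are illustrative):

  static uint64_t
  binder_assign_address(struct binder *binder)     /* hypothetical */
  {
     /* Each new binder BO gets the next address in the context-local binder
      * zone, wrapping around when the zone is exhausted.  Since only one
      * binder BO is held at a time, a wrapped address can't still be in use. */
     if (binder->next_offset + BINDER_BO_SIZE > BINDER_ZONE_SIZE)
        binder->next_offset = 0;
     uint64_t addr = BINDER_ZONE_START + binder->next_offset;
     binder->next_offset += BINDER_BO_SIZE;
     return addr;
  }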
This fixes dEQP-EGL.functional.multicontext.non_shared_clear.
Huge thanks to Tapani Pälli for debugging this whole mess and
figuring out what was going wrong.
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
We use > for IRIS_MEMZONE_DYNAMIC because IRIS_BORDER_COLOR_POOL_ADDRESS
lives at the very start of that zone. However, IRIS_MEMZONE_SURFACE and
IRIS_MEMZONE_BINDER are normal zones. They used to be a single zone
(surface) with a single binder BO at the beginning, similar to the
border color pool. But when I moved us to multiple binders, I made them
have a real zone (if a small one). So both zones should use >=.
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
INTEL_DEBUG=reemit was breaking streamout tests, by re-emitting
3DSTATE_SO_BUFFER commands that tell the HW to zero the SO write
offsets. We would need to alter them to use 0xFFFFFFFF for the offset.
Also, have each upload function only flag bits relevant to its own
pipeline.
I think this was an attempt to work around various sample mask bugs I
had early on. It's not correct. A sample mask of 0 is legal and means
to disable all samples.
Fixes dEQP-GLES31.functional.texture.multisample.*.*sample_mask*
I had a hack in place earlier to pass the query type as q->index
for the regular statistics query, but we ended up adjusting the
interface and adding a new query type. Use that instead, fixing
pipeline statistics queries since the rebase.
We were dividing by 4 in calculate_result_on_gpu(), and also in
iris_get_query_result(). We should stop doing the latter, and instead
divide by 4 in calculate_result_on_cpu() as well.
Otherwise, if snapshots were available, and you hit the
calculate_result_on_cpu() path, but requested it be written to a QBO,
you'd fail to get a divide.
This is an awkward corner case. We create batches in order, each of
which creates and pins a BO. The other batches may not be set up yet,
so it may not be safe to ask whether they reference a BO.
Just avoid this for now. We could avoid it for other context-local BOs
too, but we currently don't have a flag for that (and I'm not certain
whether it's worth it).
Various places in the transfer code need to know whether they must
read the existing resource's values. Rather than checking both flags
everywhere, just make PIPE_TRANSFER_DISCARD_WHOLE_RESOURCE also flag
PIPE_TRANSFER_DISCARD_RANGE - if we can discard everything, we can
discard a subrange, too.
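In code, the normalization amounts to something like this fragment in the
transfer-map path (sketch only):

  /* if the whole resource may be discarded, the mapped range may be too */
  if (usage & PIPE_TRANSFER_DISCARD_WHOLE_RESOURCE)
     usage |= PIPE_TRANSFER_DISCARD_RANGE;

  /* later checks only need to look at one flag */
  bool must_read_existing = !(usage & PIPE_TRANSFER_DISCARD_RANGE);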
Obviously, we can do better for PIPE_TRANSFER_DISCARD_WHOLE_RESOURCE,
but eventually u_threaded_context should handle swapping out buffers
for new idle buffers, anyway. In the meantime, this is at least better.
BLORP uses the render engine to write to buffers, and we need to flush
that data out to the actual surface (finishing the write). Then, the
rest of this function invalidates any caches that might have stale data
which needs to be refetched.
I don't know if this is required - surprisingly, I haven't seen it
matter - but I'd like to use it for multi-slice transfer maps. We may
as well do the right thing.
There are a variety of ways to fix this, many of which are simple, but
I could use some advice on which ones other people prefer, and so we'll
punt until after the holidays.
We were relying on CSE/GVN/etc to coalesce all intrinsics that load the
same value, but that's a bad idea. We might have a couple intrinsics
that reload the same value. If so, we only want to set up the uniform
on the first one we see.
System values are built-in uniforms. We set them up as UBO values, and
might pull or push them. UBO push analysis will take care of that. We
only want to enable push constants if there's an actual range being
pushed. Otherwise, we might get into a scenario where 3DSTATE_PS
enables push constants but 3DSTATE_CONSTANT_PS isn't pushing anything.
This fixes GPU hangs in Broadwell image load store tests which have
unused image param system values but no other uniforms. (We shouldn't
be making those anyway, but that's a separate fix...)
cso_fb->layers is only valid for no-attachment framebuffers. Use the
helper function to get the real value, then stash it so we don't have
to call the helper function on the old value for comparison, or at draw
time for Force Zero RTA Index setting.
This fixes Force Zero RTA Index being set even when attempting layered
rendering.
Both 'z' and 'depth' are counted in slices, according to the Gallium
docs (context.rst). In our temporary memory, we allocate `box.depth`
slices, so we need to rebase the starting slice (box.z) down to 0,
and back again when writing on unmap.
There's nothing strange about cubes here.
Gen9-10 have fewer than 4 subslices per slice, so they need this to be
rounded up. Gen11 isn't documented as needing this hack, and it can
also have more than 4 subslices, so the hack actually can break things.
Fixes tests/spec/arb_enhanced_layouts/execution/component-layout/
sso-vs-gs-fs-array-interleave
I was using the Gallium API wrong. set_* functions with start_slot
and count parameters are supposed to update a subrange of the items.
I had been trashing all bound vertex buffers and starting over.
This should hopefully also make it easier to slot in additional
VERTEX_BUFFER_STATEs at draw time, say, for shader draw parameters.
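A sketch of the subrange semantics being relied on (the state structure is
illustrative):

  /* set_* calls with start_slot/count touch only [start_slot, start_slot + count) */
  for (unsigned i = 0; i < count; i++) {
     if (buffers)
        state->vb[start_slot + i] = buffers[i];
     else
        memset(&state->vb[start_slot + i], 0, sizeof(state->vb[0]));
  }
  /* slots outside that range keep whatever was bound before */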
Note that at least following additional libs/components require changes
since they refer to BOARD_GPU_DRIVERS variable which is used to select
the driver:
- mixins
- minigbm
- libdrm
- drm_gralloc
v2: (feedback by Gustaw Smolarczyk) Fix trailing \ in a few cases
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
(cleaned up by Ken - make sure a bunch of things were more obviously
not using res->surf, do allow checking res->surf.tiling == LINEAR,
drop format cpp checks that aren't needed, drop memzone handling for
images, assume buffers / non-buffers in a few places...)
Dave fixed it to consider whether the sampler view is a cube.
With that, there's no point (possibly harm) in looking if the original
resource was a cube...if it's an array view, we don't want to treat it
as a cube anymore...
This is supposed to exclude single address zones. We were getting
too many VMA allocators but failing to set them up, which worked out
because we also forgot to destroy them...
We were being very picky about things being Y tiled. But, not
everything can be - for example, > 16382 surfaces on SKL GT1-3
have to fall back to linear.
Instead, give ISL options and let it pick.
dead since my program cache API rework. we could still use it for one
function, but it's so trivial to pass the size, that it's probably not
worth the extra code
This exposes iris_upload_shader() without having to bind it, which will
be useful for precompiles. It also lets us examine the old programs and
flag dirty bits at a higher level, rather than cramming all that
knowledge into the cache layer.
When we blit, transfer, or copy_resource to a buffer, we need to flush
to ensure any stale data for that buffer is invalidated in the caches.
bind_history will inform us which caches need to be flushed.
Also, for any push constant buffers, we need to flag those dirty so
that we re-emit 3DSTATE_CONSTANT_*, causing the data to be re-pushed.
Size can be too large for a surf; blorp_buffer_copy chops things up
into segments we can actually handle.
Fixes map_buffer_range_test and copy_buffer_coherency
When flushing a batch due to a data dependency, we need to not only
kick off the other batch's work, but stall our execution until it
completes. Just wait on last_syncpt after flushing it.
(adjusted by Ken to make the signalling sync object immediately on
batch reset, rather than batch finish time. this will work better
with deferred flushes...)
x should be in bytes, not cpp units
This generally worked out because PIPE_BUFFER is supposedly required
to be R8_UINT or R8_UNORM. I hear some state trackers pass
PIPE_FORMAT_NONE instead, however, which would make this break.
Just do the right thing directly, to be defensive and clear.
it doesn't do anything, we have no params. I guess I thought there
would be some, but they all get dead code eliminated even if we try
to make them exist in the first place.
The backend compiler used to do this for us, but after a rebase, it's
now the driver's responsibility. This lets us alter it for say, clip
vertex lowering, at the global level rather than the per-variant level.
we borrow the approach from anv rather than i965, as it works better
with pre-baked state that needs to contain scratch BO addresses
fixes a bunch of varying packing tests
The intention is that render and compute use their own contexts,
and each is PIPELINE_SELECT'd to the right pipeline. But we hadn't
actually made them, so we got the fd-default context.
Thanks to Chris Wilson for catching this!
This makes e.g. the render batch aware of the compute batch, so it can
ask questions like "is this BO referenced by some other batch?" and do
something about that.
3DSTATE_CONSTANT_* looks at prog_data->ubo_ranges. We were getting
saved by iris_set_constant_buffers() usually happening when changing
programs (as they usually change uniforms too), but with the clear
shader that doesn't use uniforms, we weren't getting one and were
leaving push constants enabled, screwing things up.
Also clean up a bit of a mess left by the hacks - we were missing
bindings in the VS/FS/CS case, among other issues...
It may be in the dynamic state buffer but the fact that we have a
resource takes care of that. We don't need to add in the address of
the dynamic state buffer again.
If the state tracker never gave us the framebuffer dimensions via
a set_framebuffer_state() call, just fall back to the unbound texture
null surface, which is 1x1x1. Otherwise we'd use a NULL resource
(no pun intended).
we don't want to emit piles of pipe controls to a compute batch if
it isn't necessary...
prevents double-batch-wraps in cs-op-selection-bool-bvec4-bvec4
(but it's still kinda a big ol' hack...)
now we only upload a new grid when it's actually changed, which saves us
from having to emit a new binding table every time it changes.
this also moves a bunch of non-gen-specific stuff out of iris_state.c
We can only push constants for compute shaders from one range.
Gallium glsl-to-nir (src/mesa/state_tracker/st_glsl_to_nir.cpp) lowers
all uniform accesses to a ubo.
Unfortunately we also load the subgroup-id as a uniform in the
compiler. Since we use the 1 push range for this subgroup-id, we then
lose the ability to actually push the ubo with all the normal user
uniform values.
In other words, there is lots of room for performance improvement, but
at least retrieving the uniforms as pull-constants is functional for
now.
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
This can happen when faking Z32_S8X24 and setting StencilSampling = true
I guess we'll just turn it into S8_UINT...
Fixes KHR-GL45.texture_swizzle.functional
We were accidentally using the ISL_FORMAT_R32_FLOAT_X8X24_TYPELESS
format, which is NOT what we use. We just store R32_FLOAT depth.
fixes Piglit's texwrap GL_ARB_depth_buffer_float
chaining to a new batch reuses create_batch(), but we don't need to do
the work of pinning BOs we inherit from a previous batch...when that is
actually part of the same execbuf invocation.
instead, just flag it when setting primary_batch_size = 0, in
iris_batch_reset
this is more obviously correct. I think the two end up being the same
in practice, since this is in the alloc_from_cache case, and presumably
bo from the bucket has bo->size == bucket->size, and bo_size also is
bucket->size...
still. better to do the obvious thing.
brw_bufmgr already does it this way.
otherwise, get results may check q->map->snapshots_landed...before our
commands to initialize it to false have actually executed...so it'd get
some random garbage from the BO...
the passthrough shader doesn't need a real program string ID - that's
basically used for ARB programs indicating total program source code
changes, or other pre-baked uniform changes, etc...none of which a
passthrough shader has...so we don't need a unique identifier to
distinguish them. We want to use a consistent value so we find
existing passthrough shaders in the cache.
If no TCS is provided, create a "passthrough" TCS that will take the
default values set in the API as constants and pass to the TES, along
with any other inputs it expects. The code to create the NIR shader
is the same as in i965.
Tested with
./piglit run -t 'tess' quick_shader r
and fixed a dozen crashes from that list.
not sure why this is labeled const, I'm pretty sure we are taking the
reference and owning this, so there's no particular reason we can't
change it. it certainly seems to be working for non-compute. and,
freedreno's ir3_shader.c seems to do this as well. still...gross :/
The backend compiler expects the gl_TessLevel* variables to be mapped
as inputs instead of system values. Use the new PIPE_CAP to get this
behavior from GLSL compiler.
Tested with:
tests/spec/arb_tessellation_shader/execution/vs-tcs-tes-tessinner-tessouter-inputs-quads.shader_test
some of them had typos, didn't say 'authors or copyright holders',
or other mistakes. This is now https://opensource.org/licenses/MIT
text, formatted consistently.
This fixes some cases in fbo-none* and framebuffer_no_attachments.
I'm not sure this is correct otherwise, the tests don't all pass yet
No idea if this is in any way the correct answer
0xffffffff does not mean 1, it means enable as many as there actually
are. we don't get set_sample_mask() calls until some masking is
actually applied...i.e. it doesn't get updated based on # of samples
in the FBO changing.
looking at the freedreno code, this is totally unnecessary! we can just
store the NIR and be happy, and not have any vestiges of TGSI.
plus we can reuse this structure for compute shaders, without needing a
pipe_compute_state base.
create_surface happens before st_validate_attachment, which actually
does the "hey, this is a render target now, is that OK?" check
Fixes asserts in ./bin/arb_texture_view-rendering-formats, allowing the
rest of the tests to run.
st/nir offsets SSBO indexes by MaxABOs. This is not what we want,
as it bloats the binding tables. We'll need to adjust it to use
info->num_abos as the offset and buffer base instead. For now,
just use the inefficient format to get us rolling. We can add a
PIPE_CAP later.
not sure how useful this really is...
./bin/ext_transform_feedback-tessellation triangles flat_first
is hitting a case where we rebind the same VS program, but with
different streamout info...which isn't in the key...but is in the
cache...so we don't rebuild it...
It just does blits between layers, which is all we'd do anyway,
and it already should use BLORP because of iris_blit(). Plus it
handles 3D, which our code in i965 doesn't.
We used to not reset the batch, and just keep appending to it, so you'd
get the same invalid contents over and over.
I'd also really like to know about this, so aborting seems wise for now,
if not for the long term
seeing
set_viewport_state 0 1
set_viewport_state 1 15
which gives us a total of 16 viewports, updated incrementally
so keep old values around and update them...
we need proper batch chaining. without relocations, we can't grow,
since we've only allocated so much VMA for the batch, and the mechanism
only works if we can pin it at the old address
there's some bug here as Jason's patches for only emitting 3DS_DR once
got reverted by Mark later on, apparently they regressed MSAA tests.
need to sort that out.
It's useless to allocate SAMPLER_STATEs in GPU memory on creation like
we do for SURFACE_STATES, because they need to be organized into a
contiguous block of memory. But we can do that at bind time, rather
than draw time.
tomorrow, fix the build system to avoid symbol clashes somehow...
we're getting gen9 functions because they happen to be listed before 10
in the link list.
This commit introduces a new Gallium driver for Intel Gen8+ GPUs,
named 'iris_dri.so' after the hardware.
Developed by:
- Kenneth Graunke (overall driver)
- Dave Airlie (shaders, conditional render, overflow query, Gen8 port)
- Chris Wilson (fencing, pinned memory, ...)
- Jordan Justen (compute shaders)
- Jason Ekstrand (image load store)
- Caio Marcelo de Oliveira Filho (tessellation control passthrough)
- Rafael Antognolli (auxiliary buffer fixes)
- The rest of the i965 contributors and the Mesa community
Turns out we can write to tiled images as well as read. This avoids
having to linearize or do the tiling in the shader.
Signed-off-by: Rob Clark <robdclark@gmail.com>
In GLSL that info is set as a layout qualifier when redeclaring
gl_FragCoord, so it is nominally tied to a specific variable. But in
practice, it behaves as a global of the shader. In ARB programs it is
set using a global OPTION (defined by ARB_fragment_coord_conventions),
and in SPIR-V using ExecutionModes, which are also not tied specifically
to the builtin.
This patch moves that info from the nir variable and ir variable to the
nir shader and gl_program shader_info respectively, so the mapping is
more similar to SPIR-V and ARB programs, instead of more similar to GLSL.
FWIW, shader_info.fs already had pixel_center_integer, so this change
also removes some redundancy. Also, as struct gl_program already includes
a shader_info, we removed gl_program::OriginUpperLeft and
PixelCenterInteger, as they would be superfluous.
This change was needed because recently spirv_to_nir changed the order
in which execution modes and variables are handled, so the variables
didn't get the correct values. Now the info is set on the shader
itself, and we don't need to go back to the builtin variable to set
it.
Fixes: e68871f6a ("spirv: Handle constants and types before execution
modes")
v2: (Jason)
* glsl_to_nir: get the info before glsl_to_nir, while all the rest
of the info gathering is happening
* prog_to_nir: gather the info on a general info-gathering pass,
not on variable setup.
v3: (Jason)
* Squash with the patch that removes that info from ir variable
* anv: assert that OriginUpperLeft is true. It should be already
set by spirv_to_nir.
* blorp: set origin_upper_left on its core "compile fragment
shader", not just on some specific places (for this we added an
helper on a previous patch).
* prog_to_nir: no need to gather specifically this fragcoord modes
as the full gl_program shader_info is copied.
* spirv_to_nir: assert that we are a fragment shader when handling
this execution modes.
v4: (reported by failing gitlab pipeline #18750)
* state_tracker: update too due to changes in ir.h/gl_program
v5:
* blorp: minor change after a change in the previous patch
* radeonsi: update due to this change.
v6: (Timothy Arceri)
* prog_to_nir: remove extra whitespace
* shader_info: don't use :1 on origin_upper_left
* glsl: program.fs.origin_upper_left/pixel_center_integer can be
move out of the shader list loop
This initializes the nir shader that will be used by blorp. Right now
it doesn't do too much beyond calling nir_builder_init_simple_shader
and setting a name. More stuff will be added in following patches.
v2: there is a case where a VERTEX_SHADER is used (Alejandro)
The condition code in extended branches is repeated 8 times for unclear
reasons; accordingly, the code would be disassembled as "unknown5555",
"unknownAAAA", etc. This patch correctly masks off the lower two bits to
find the true code to print, verifying that the code is repeated as
believed to be necessary (providing some assurance for compiler quality
and an assert trip in case we encounter a shader in the wild that breaks
the convention).
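A rough sketch of the masking and sanity check (field width and names are
illustrative):

  #include <assert.h>

  static unsigned
  extended_branch_cond(unsigned cond_field)  /* 16-bit field: 2-bit code x 8 */
  {
     unsigned cond = cond_field & 0x3;       /* the true condition code */

     /* Verify the 2-bit code really is repeated across the whole field, so
      * we notice if a shader in the wild breaks the convention. */
     for (unsigned i = 1; i < 8; i++)
        assert(((cond_field >> (2 * i)) & 0x3) == cond);

     return cond;
  }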
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
discard and discard_if are both implemented with the branching pipeline
on Midgard; essentially, we branch to the end of the fragment shader in
a special "discard" mode, setting the condition as necessary.
Previously, we hardcoded the form of this instruction, which worked for
very simple shaders but was incorrect for anything remotely interesting.
This patch instead emits logical branches in the IR, which are flattened
to real discard ops the same way other branches are, allowing targets to
be computed correctly.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Previously, we only emitted compact branches; however, the offset range
of these branches is too small for many real world shaders. This patch
implements support for emitting extended branches and switches to always
using them for control flow. This incurs a code size and possibly
performance penalty, but expands the range of working shaders and
provides opportunity for further optimization.
Support for emitting compact branches is retained but this code path is
presently unused. In the future, we'll want to heuristically determine
which type of branch should be emitted for optimal codegen.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Midgard features "compact branches" and "extended branches",
corresponding to short jumps and far jumps respectively. The form of the
extended branch was previously incorrect in the ISA headers; this patch
corrects it and updates the disassembler (simultaneously, to preserve
bisectability).
Additionally, we fix a corner case in the disassembly of extended
branches, and we now prefix extended branches with "brx", to visually
differentiate from compact branches prefixed with "br".
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
An if-else statement is compiled to a conditional branch (from the start
to the second block) and an unconditional branch (from the end of the
first block to the end of the else). We previously incorrectly computed
the block index of the unconditional branch to be exactly one after that
of the conditional branch, valid for a single if-else statement but
nothing fancier. This patch correctly computes the unconditional branch
target, fixing more complex if-else chains.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Each Midgard instruction is scheduled to a particular instruction type
("tag"). Presumably the hardware prefetches memory based on tag, so it
is required to report out the first tag to the command stream and the
next tag of a branch target. This procedure was implemented in two
separate parts of the compiler (one time with a slight bug relating to
empty blocks); this patch refactors to unite the two routines and solve
the bug when branching to empty blocks.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Historically, Panfrost debugging entailed the use of the LD_PRELOADable
`panwrap` tool. This setup is a tad fragile; Panfrost can be traced
directly without the intermediate layer. pantrace implements the
equivalent functionality of panwrap in Panfrost proper, allowing dumps
to work regardless of the kernel layer in use.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
The `panwrap` utility can be LD_PRELOAD'd into a GLES app, intercepting
communication between the driver and the kernel. Modern panwrap versions
do no processing of their own; instead, they create a trace directory.
This directory contains the following files:
- control.log: a line-by-line plain text file, denoting important
syscalls (mmaps and job submits) along with their arguments
- memory_*.bin, shader_*.bin: binary dumps of mapped memory
Together, these files contain enough information to reconstruct the
command stream and shaders of (at minimum) a single frame.
The `pandecode` utility takes this directory structure as input,
reconstructing the mapped memory and using the job submit command as an
entrypoint. It then walks the descriptors as the hardware would, parsing
and pretty-printing. Its final output is the pretty-printed command
stream interleaved with the disassembled shaders, suitable for driver
debugging. For instance, the behaviour of two driver versions (one
working, one broken) can be compared by diff'ing their decoded logs.
pandecode/decode.c was originally a part of `panwrap`; it is the oldest
living code in the project. Its history is generally not worth
preserving.
panwrap itself will continue to live downstream for the foreseeable
future, as it is specifically written for the vendor kernel. It is
possible, however, to produce equivalent traces directly from Panfrost,
bypassing the intermediate wrapping layer for well-behaved drivers.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This is not yet functional, but it resolves a crash in various apps and
provides a framework for further work.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
v2: use tc.stream_uploader in si buffer_transfer_map if not called from
the driver thread
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com> (v1)
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
This makes us properly handle gl_ClipDistance and gl_CullDistance.
Fixes: 19064b8c "nir: Add a pass for gathering transform feedback info"
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
We needed to better handle cases where a chunk of a variable starts at
some non-zero location_frac and rolls over into the next slot but may
not be more than 4 dwords. For example, if gl_CullDistance is an array
of 3 things and has location_frac = 2, it will span across two vec4s but
is not, itself, bigger than a vec4. If you ignore the clip/cull special
case, it's not allowed to happen for anything else because the only
things that can span more than one slot is dvec3 and dvec4 and they're
both bigger than a vec4. The current code uses this attrib_slot thing
where we count attribute slots and iterate over them. However, that
doesn't work in the case above because gl_CullDistance will have an
attrib_slot count of 1 even though it does span two slots. We could fix
this by adjusting attrib_slot but we already have comp_mask and it's
easier to just handle it that way.
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
Instead of going to all the work of to combine them into one array, just
make two arrays and use location_frac to colocate them within CLIP0.
Then the back-end can sort things out and stack them on top of each
other. Thanks to ef99f4c8, we also don't need to set compact anymore.
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Use the 'UNK31' bit (which should probably be called 'BUFFER') for the
samplerBuffer case, which increases the size of supported buffer
textures beyond 2^15 elements.
Also need to fix the 2nd coord injected to handle the tex instructions
that take integer coords.
Fixes dEQP-GLES31.functional.texture.texture_buffer.render.as_fragment_texture.buffer_size_131071
and similar
Signed-off-by: Rob Clark <robdclark@gmail.com>
... instead of isam. It seems like when using isam, plus atomics, we
can have the problem of old data being in the texture cache. Plus this
way we don't have to load a component at a time.
Note that blob still seems to use isam in some cases. I suppose it might
be preferable in the case of loading a single component, when atomics
are not in the picture (or that the ssbo does not need to otherwise be
coherent).
Signed-off-by: Rob Clark <robdclark@gmail.com>
Resync disasm and instr header from envytools, and add ldib encoding.
This replaces an opcode from a3xx which was never seen in practice,
since that seemed easier than dealing with the same opc # meaning a
different thing on a6xx. (Not really sure if 'sti' was actually a
real thing, I think it was only seen in fuzzing.)
Signed-off-by: Rob Clark <robdclark@gmail.com>
Fixes
dEQP-GLES3.functional.shaders.indexing.varying_array.vec3_dynamic_write_dynamic_loop_read
regression.
Fixes: c1a27ba9ba freedreno/ir3: HIGH reg w/a for a6xx
Signed-off-by: Rob Clark <robdclark@gmail.com>
The variant will be NULL if RA failed. Which isn't ideal, but at least
let's not segfault and bring down the rest of the dEQP run with us.
Signed-off-by: Rob Clark <robdclark@gmail.com>
The wrmask is handled in regmask_get()/regmask_set(), but it wasn't
being propagated from SSA src to dst. So, for example, an SSBO read
value that was passed in as the src2.y component to an atomic op wasn't
getting the (sy) flag set, causing lots of failures.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Even in a very basic shader this reduces the time spent in
nir_copy_prop() by ~17%.
No shader-db changes for radeonsi NIR or i965.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
While I haven't tested them all, given that they're all using the same
allocation paths and modifiers in the kernel they should be fine to use in
the same way.
v2: Rebase on other kmsro changes.
v3: Skip repeated '[with_gallium_kmsro,' in the meson build.
Acked-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Commit 08bfd710a2 (nir/dead_cf: Stop relying on liveness analysis)
introduced a new check that iterated through an SSA def's uses to see if
it's used. But it only checked normal uses, and not uses which are part
of an 'if' condition. This led to it thinking more nodes were dead than
actually were.
Fixes Piglit's variable-indexing/tcs-output-array-float-index-wr test
(and related tests) with the out-of-tree Iris driver.
Fixes: 08bfd710a2 nir/dead_cf: Stop relying on liveness analysis
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This gets stencil and depth resolves working properly.
Fixes:
dEQP-GLES3.functional.fbo.msaa.2_samples.depth32f_stencil8
dEQP-GLES3.functional.fbo.msaa.2_samples.depth24_stencil8
dEQP-GLES3.functional.fbo.msaa.4_samples.depth32f_stencil8
dEQP-GLES3.functional.fbo.msaa.4_samples.depth24_stencil8
dEQP-GLES3.functional.fbo.invalidate.whole.unbind_blit_msaa_color
dEQP-GLES3.functional.fbo.invalidate.sub.unbind_blit_msaa_color
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Blitter does support it after all. Previous attempt to use R8_UINT
failed because we overwrote the a6xx format in emit_blit_texture(),
but some of the later setup still looked at the gallium format.
If we overwrite it in the pipe_blit_info before we even call into
emit_blit_texture() it works properly.
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Fullscreening and unfullscreening a totem window while playing a video
sometimes results in the video subsurface not changing size along with it.
This is also reproducible with epiphany.
If a surface gets resized while we have an active back buffer for it, the
resized dimensions will neither get immediately applied on the resize
callback, nor correctly synchronized on update_buffers(), as the
(now stale) surface size and the currently attached buffer size still match.
There are actually 2 things to synchronize here: first, the surface query
size might not be updated yet to the wl_egl_window's (i.e. the resize_callback
happened while there is a back buffer), and second, the wayland buffers
would need dropping if the new surface size differs from the currently
attached buffer. These are done in separate steps now.
https://bugzilla.redhat.com/show_bug.cgi?id=1650929
https://bugs.freedesktop.org/show_bug.cgi?id=109594
Fixes: a9fb331ea7 ("wayland/egl: update surface size on window resize")
Signed-off-by: Carlos Garnacho <carlosg@gnome.org>
Reviewed-by: Juan A. Suarez <jasuarez@igalia.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Tested-by: Bastien Nocera <hadess@hadess.net>
Tested-by: Denys Kostin <denys.kostin@globallogic.com>
The cacheline size was a requirement for using the BLT engine, which
we don't use anymore except for a few things on old HW, so we drop it.
Fixes CTS's CL#3500 test:
dEQP-VK.api.image_clearing.core.clear_color_image.2d.linear.single_layer.r8g8b8_unorm
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This uses prog_to_nir to translate ARB assembly programs to NIR.
Co-authored by Tim Arceri, Dave Airlie, and Ken Graunke:
- [Tim Arceri]: original patch
- [Dave Airlie]: fix crashes with parameter names
- [Ken Graunke]:
- Rebase on SCALAR_ISA cap, lower wpos_ytransform too.
- Rebase on streamout fixes.
- Lower system values for fragcoord support.
- Don't try to use prog_to_nir for ATI_fragment_shader programs.
- Create TGSI for fixed-function or ARB vertex shaders even if the
driver prefers NIR, so we can create draw module shaders for
feedback/select emulation, which rely on TGSI.
Tested on:
- iris (Intel Skylake/Kabylake): Piglit & GL CTS - Ken Graunke
- radeonsi (AMD Vega 64): Piglit - Ken Graunke
- vc4/v3d - Piglit - Eric Anholt
- freedreno - dEQP - Kristian Høgsberg
Fixes lit_degenerate_case on vc4 and v3d, and vp-address-01,
vp-arl-constant-array-huge-offset-neg, and vp-arl-neg-array on v3d.
No Piglit regressions on radeonsi; no dEQP regressions on freedreno.
Acked-by: Eric Anholt <eric@anholt.net>
Tested-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Even if the driver wants to use NIR shaders, we may need to have TGSI
tokens for creating draw module vertex shaders for the feedback/select
render modes.
So...if the st_vertex_program has any TGSI...copy it to the variant.
Acked-by: Eric Anholt <eric@anholt.net>
Tested-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
ARB_vertex_program and ARB_fragment_program define 0^0 = 1 (while GLSL
leaves it undefined). Performing fpow lowering in NIR would break this
behavior, preventing us from using prog_to_nir.
According to llvm/lib/Target/AMDGPU/SIInstructions.td, POW_common
expands to <V_LOG_F32_e32, V_EXP_F32_e32, V_MUL_LEGACY_F32_e32>,
which presumably does a zero-wins multiply.
Lowering in NIR results in a non-legacy multiply, where:
pow(0, 0) = 2^(log2(0) * 0)
= 2^(-INF * 0)
= 2^(-NaN)
= -NaN
which isn't the desired result.
This reverts:
- commit d6b7539206
(ac/nir: remove emission of nir_op_fpow)
- commit 22430224fe
(radeonsi/nir: enable lowering of fpow)
and prevents a regression in gl-1.0-spot-light with AMD_DEBUG=nir
after enabling prog_to_nir in st/mesa later in this series.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
The nine state tracker can produce NIR uniform variables
whose location is explicitly set. radeonsi did not take that
into account when calculating const_file_max, resulting in
rendering glitches. This patch fixes that.
Signed-Off-By: Timur Kristóf <timur.kristof@gmail.com>
Tested-by: Andre Heider <a.heider@gmail.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
In the new Intel Iris driver, I am using Tim's new packed uniform
storage system. It works great, with one caveat: our scalar compiler
backend assumes that uniform offsets will be aligned to the underlying
data type. For example, doubles must be 64-bit aligned, floats 32-bit,
half-floats 16-bit, and so on. It does not need any other padding.
Currently, _mesa_add_parameter aligns everything to 32-bit offsets,
creating doubles that have an unaligned offset. This patch alters
that code to align doubles to 64-bit offsets.
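The alignment rule amounts to something like this, in terms of dword offsets
(names are illustrative, not the actual _mesa_add_parameter code):

  static unsigned
  align_uniform_offset(unsigned offset_dw, unsigned type_size_bytes)
  {
     /* Doubles (8 bytes) must start on a 64-bit, i.e. 2-dword, boundary;
      * everything else keeps the existing 32-bit alignment. */
     unsigned align_dw = (type_size_bytes >= 8) ? 2 : 1;
     return (offset_dw + align_dw - 1) & ~(align_dw - 1);
  }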
This may be slightly less optimal for drivers which can support full
packing, and allow reads from unaligned offsets at no penalty. We could
make this extra alignment optional. However, it only comes into play
when intermixing double and single precision uniforms. Doubles are
already not too common, and intermixed values (floats then doubles)
is probably even less common. At most, we burn a single 32-bit slot
to the alignment, which is not that expensive. So, it doesn't seem
worthwhile to add the extra complexity.
Eventually, we'll likely want to update this code to allow half-float
values to be packed tighter than 32-bit offsets. At that point, we'll
probably want to revisit what drivers ultimately want, and add options.
Acked-by: Timothy Arceri <tarceri@itsqueeze.com>
I'd like to use this in the prog_parameter.c code, so I need to move it
into C, make it non-static, and so on. This probably isn't the ideal
place for it, but I couldn't think of a better one.
Acked-by: Timothy Arceri <tarceri@itsqueeze.com>
Last commit limited the CI to master and MRs, but to avoid having to
manually trigger CI runs, let's add a 3rd, automatic way: by pushing to
a branch named `ci/*` (or `ci-*` or just `ci`) (which you can delete
afterwards, the pipeline results will remain).
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Runs on random other branches (stables RCs, personal forks) can still be
triggered manually via the web interface, or an app using the API.
This should massively help with the current voracious state of our CI.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
So that the signature is correct and consistent, the inputs to an export
intrinsic should always be 32-bit floats.
This and the previous commit fix a large number of crashes from
dEQP-VK.spirv_assembly.instruction.graphics.16bit_storage.input_output_int_*
tests.
Fixes: b722b29f10 ('radv: add support for 16bit input/output')
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
16-bit outputs are stored as 16-bit floats in the outputs array, so they
have to be bitcast.
Fixes: b722b29f10 ('radv: add support for 16bit input/output')
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
For V3D 3.x, we emitted the ldvpms all at the top so that we didn't need
to do VPM setup when the load_inputs are out of order. For V3D 4.x, we
can reduce register pressure by delaying our loads until they're actually
needed. This also avoids a bunch of silly MOVs in the pre-opt VIR dump.
total instructions in shared programs: 6421415 -> 6419933 (-0.02%)
total uniforms in shared programs: 2393139 -> 2393140 (<.01%)
total threads in shared programs: 153864 -> 153906 (0.03%)
The execute.file check used to be good enough, until I stopped setting up
the execute mask for uniform ifs.
No known tests fixed, noticed while doing a refactor.
Fixes: 0805060573 ("v3d: Handle dynamically uniform IF statements with uniform control flow.")
Now that we don't have the vir_PF() magic, it's obvious that we were doing
the wrong thing for f2b32 by allowing -0.0 to produce true instead of
false.
You were allowed to pass in any old temp so that you could hopefully fold
the PF up into the def of the temp. If we couldn't find one, it
implicitly generated a MOV(nop, reg). However, that PF could have
different behavior depending on whether the def being folded into was a
float or int opcode, which the caller doesn't necessarily control.
Due to the fragility of the function, just switch all callers over to
vir_set_pf(). This also encourages the callers to use a _dest call for
the inst they're putting the PF on, eliminating a bunch of temps in the
pre-optimization VIR.
shader-db says the change is in the noise:
total instructions in shared programs: 6226247 -> 6227184 (0.02%)
instructions in affected programs: 851068 -> 852005 (0.11%)
Both were doing the same thing to try to get a condition to predicate on.
Noticed when I wanted to do this for discard_if as well.
No change in shader-db.
The NIR lowering works fine, though it causes some slight noise due to
what looks like choices about propagating constants up multiply chains
changing.
total instructions in shared programs: 6229671 -> 6229820 (<.01%)
total uniforms in shared programs: 2312171 -> 2312324 (<.01%)
Fixes some stalls in 3DMMES's main vertex shader.
total instructions in shared programs: 6280751 -> 6211270 (-1.11%)
instructions in affected programs: 2935050 -> 2865569 (-2.37%)
Apparently we need disable-EZ flagged, not just "does Z writes".
Fixes
dEQP-GLES31.functional.image_load_store.early_fragment_tests.no_early_fragment_tests_depth_fbo
on 7278, even though it passed in simulation.
Signed-off-by: Eric Anholt <eric@anholt.net>
Fixes: 051a41d3d5 ("v3d: Add support for the early_fragment_tests flag.")
Fixes intermittent fails in
dEQP-GLES31.functional.draw_indirect.compute_interop.separate.drawelements_compute_cmd_and_data_and_indices
and others (particularly when run as part of a CTS run)
Otherwise, we might have pages accessible that shouldn't be and miss out
on errors. This is unlikely for most tests since v3d_hw_get_mem() is big
enough that it'll be a freshly zeroed mmap, but if screens are destroyed
and recreated then we'd be reusing the old v3d_hw_get_mem() contents.
From the table in isl_format.c, it appears that all generations
support blending on 32-bit float surfaces.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
If EXT_float_blend is not supported, error out on blending of FP32
attachments in an ES2 context.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The new encoding returns a value via the 2nd src. The legalize pass
needs to be aware of this to set the correct needs_sy flag; otherwise,
in cases where the atomic dst is not used, we can overwrite the register
that the hardware will asynchronously load the result into without first
waiting on the (sy) flag, so it gets clobbered by the atomic result.
This fixes a whole lot of rando ssbo+atomic fails, like
dEQP-GLES31.functional.ssbo.layout.single_basic_type.packed.highp_vec4.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Since gl_HelperInvocation is lowered to:
!((1 << sample_id) & sample_mask_in)
Not setting these enable bits was causing it to be broken. (And probably a
bunch of other stuff too.)
Fixes dEQP-GLES31.functional.shaders.helper_invocation.*
Signed-off-by: Rob Clark <robdclark@gmail.com>
This fixes an invalid access to the Attachment array which would occur if
the caller exceeded MaxColorAttachments. In practice this should never
happen, because DiscardFramebufferEXT specifies only GL_COLOR_ATTACHMENT0
to be valid and InvalidateFramebuffer will error out before that, but this
should make Coverity happy.
v2: const, remove _EXT (Ian)
CID: 1442559
Fixes: 0c42b5f3cb "mesa: wire up InvalidateFramebuffer"
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
This fixes issues where polygons that should be culled (due to negative
w, for instance) may not be.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
The idea here is to reassociate a * (b * c) into (a * c) * b, when
b is a non-constant value, but a and c are constants, allowing them
to be combined.
But nothing was enforcing that 'b' must be non-constant, which meant
that running opt_algebraic in a loop would never terminate if the IR
contained non-folded constant expressions like 256 * 0.5 * 2. Normally,
we call constant folding in such a loop too, but IMO it's better for
nir_opt_algebraic to be robust and not rely on that.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109581
Fixes: 32e266a9a5 i965: Compile fp64 funcs only if we do not have 64-bit hardware support
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Object handles are local to the device fd, so double check we are not
mixing together objects from multiple screens on execbuf submission.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Soon we'll need this logic to deal w/ image/SSBO case, so split out a
helper rather than duplicate the logic.
Signed-off-by: Rob Clark <robdclark@gmail.com>
It seems like some instructions (noticed this w/ cat3) cannot read HIGH
regs. cat1 (mov/cov) can, and possibly some/all of cat2.
The blob seems to stick w/ an extra mov into low regs, so let's do the
same.
This fixes WGID on a6xx, which unsurprisingly is related to a lot of
deqp compute fails.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Images and SSBOs don't map directly to the hw. They end up being part
texture and part something else. Starting with a6xx, the hack used for
a5xx to smash the image tex state into hw texture state starting from
MAX counting down won't work, because we start using tex state also for
SSBO read.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Note that image/ssbo support is currently only implemented for a5xx.
But the instruction encoding is the same for a4xx.
Signed-off-by: Rob Clark <robdclark@gmail.com>
We probably need to rethink how we detect which instruction first
defines higher register classes. But for now, this at least fixes
the symptom.
Signed-off-by: Rob Clark <robdclark@gmail.com>
The elements added into a vector should have the same type as the
first one, otherwise this hits an assertion in LLVM.
Fixes: 4b3549c084 ("radv: reduce the number of loaded channels for vertex input fetches")
Reported-by: Philip Rebohle <philip.rebohle@tu-dortmund.de>
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
After the previous changes to emulate the ETC/EAC formats using the
secondary shadow miptree, the etc_format field of the intel_mipmap_tree
struct became redundant and the remaining check that used it has been
replaced. (Nanley Chery)
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
The OES_copy_image extension was disabled on Gen7 due to the lack of
support for ETC2 images. Enable it again. (Kenneth Graunke)
v2:
- Removed the blank lines in the comments above OES_copy_image and
OES_texture_view extensions in intel_extensions.c (Nanley Chery)
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
For CopyImageSubData to copy the data during the 1st draw call, we need
to update the shadow tree right before the rendering.
v2:
- Added assertion that the miptree doesn't need update at the time we
update the texture surface. (Nanley Chery)
v3:
- As we now update the tree before the rendering we don't need to copy
the data during the unmap anymore. Removed the unnecessary update from
the intel_miptree_unmap in intel_mipmap_tree.c (Nanley Chery)
v4:
- Fixed unrelated empty line removal (Nanley Chery)
- As now the intel_update_etc_shadow of intel_mipmap_tree.c is only
called inside its following function, we don't need to declare it at
the top of the file anymore. (Nanley Chery)
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
GPUs Gen < 8 cannot sample ETC2 formats. So far, they converted the
compressed EAC/ETC2 images to non-compressed RGBA images. When
GetCompressed* functions were called, the pixels were returned in this
RGBA format and not the compressed format that was expected.
Trying to fix this problem, we use a secondary shadow miptree to store the
decompressed data for the rendering and the main miptree to store the
compressed for the Get functions to work. Each time that the main miptree
is written with compressed data, we decompress them to RGB and update the
shadow. Then we use the shadow for rendering.
v2:
- Fixes in the commit message (Nanley Chery)
- Reversed the changes in brw_get_texture_swizzle and swapped the b, g
values at the time that we decompress the data in the function:
intel_miptree_update_etc_shadow of intel_mipmap_tree.c (Nanley Chery)
- Simplified the format checks in the miptree_create function of the
intel_mipmap_tree.c and reserved the call of the
intel_lower_compressed_format for the case that we are faking the ETC
support (Nanley Chery)
- Removed the check for the auxiliary usage for the shadow miptree at
creation (miptree_create of intel_mipmap_tree.c) as we won't use
auxiliary buffers with these types of trees (Nanley Chery)
- Set the etc_format of the non-ETC miptrees to MESA_FORMAT_NONE and
removed the unnecessary checks (Nanley Chery)
- Fixed an unrelated indentation change (Nanley Chery)
- Modified the function intel_miptree_finish_write to set the
mt->shadow_needs_update to true to catch all the cases when we need to
update the miptree (Nanley Chery)
- In order to update the shadow miptree during the unmap of the
main and always map the main (Nanley Chery), the following change was
necessary: split the previous update function that was updating all
the mipmap levels into two functions: one that updates one
level and one that updates all of them. Use the first during unmap
and the second before the rendering.
- Removed the BRW_MAP_ETC_BIT flag and the mechanism to decide which
miptree should be mapped each time and reversed all the changes in the
higher level texture functions that upload data to textures as they
aren't needed anymore.
- Replaced the boolean needs_fake_etc with an inline function that
checks when we need to fake the ETC compression (Nanley Chery)
- Removed the initialization of the strides in the update function as
the values will be overwritten by the intel_miptree_map call (Nanley
Chery)
- Used minify instead of division in the new update function
intel_miptree_update_etc_shadow_levels in intel_mipmap_tree.c (Nanley
Chery)
- Removed the depth from the calculation of the number of slices in
the new update function (intel_miptree_update_etc_shadow_levels of
intel_mipmap_tree.c) as we don't need to support 3D ETC images.
(Nanley Chery)
v3:
- Renamed the rgba_fmt in function miptree_create
(intel_mipmap_tree.c) to decomp_format as the format is not always in
rgba order. (Nanley Chery)
- Documented the new usage for the shadow miptree in the comment above
the field in the intel_miptree struct in intel_mipmap_tree.h (Nanley
Chery)
- Removed the redundant flags from the mapping of the miptrees in
intel_miptree_update_etc_shadow of intel_mipmap_tree.c (Nanley Chery)
- Fixed the switch from surface's logical level to physical level in
the intel_miptree_update_etc_shadow_levels of intel_mipmap_tree.c
(Nanley Chery)
- Excluded the Baytrail GPUs from the check for the ETC emulation as
they support the ETC formats natively. (Nanley Chery)
- Simplified the check if the format is BGRA in
intel_miptree_update_etc_shadow of intel_mipmap_tree.c (Nanley Chery)
v4:
- Removed the functions intel_miptree_(map|unmap)_etc and the check if
we need to call them as with the new changes, they became unreachable.
(Nanley Chery)
- We'd rather calculate the level width and height using the shadow
miptree instead of the main in intel_miptree_update_etc_shadow_levels of
intel_mipmap_tree.c (Nanley Chery)
- Fixed the format in the mt_surface_usage, set at the miptree creation,
in miptree_create of intel_mipmap_tree.c (Nanley Chery)
v5:
- Fixed the levels calculations in intel_mipmap_tree.c (Nanley Chery)
- Update the flag shadow_needs_update outside the function
intel_miptree_update_etc_shadow (Nanley Chery)
- Fixed indentation error (Nanley Chery)
v6:
- Fixed typo in commit message (Nanley Chery)
- Simplified the assignment of the mt_fmt in the miptree_create of the
intel_mipmap_tree.c (Nanley Chery)
- Combined declarations and assignments where it was possible in the
intel_miptree_update_etc_shadow and
intel_miptree_update_etc_shadow_levels of the intel_mipmap_tree.c
(Nanley Chery)
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=81843
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=104272
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
This was probably useful when it was first written, however it
looks to be no longer necessary.
As far as I can tell these days dce is smart enough to remove useless
instructions from if branches. Once this is done
nir_opt_peephole_select() will end up removing the empty if.
Removing this support reduces the dolphin uber shader compilation
time spent in nir_opt_dead_cf() by a little over 7x.
No shader-db changes on i965 or radeonsi.
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
- Ensure all threads have optimal floating-point control state
- Disable auto-generation of fused FP ops for VERTEX shader stage
- Disable "fast" FP ops for VERTEX shader stage
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
The intrinsic returns the number of leading zeros, not the bit number of
the first nonzero, so just flip it based on the mask size
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
normalized and scaled formats also return floats.
Fixes: 4b3549c084 ("radv: reduce the number of loaded channels for vertex input fetches")
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
v2: Fix silly bug in logic. s/||/&&/
All but one of the affected shaders is in an Unreal4 demo. The other is
in Tomb Raider. All of the cases that Ian investigated appear to be
sequences like the following
if (int(uint(some_float)) < 0) /* other relations too */
...
At least in Tomb Raider, it's not obvious that this sequence came from
the original shader.
In some of the Unreal demos, the shader contains code like
if (int(uint(textureLod(...))) > 0)
...
which explicitly generates the offending sequence.
All Gen6+ platforms had similar results (Skylake shown):
total instructions in shared programs: 15437170 -> 15437187 (<.01%)
instructions in affected programs: 4492 -> 4509 (0.38%)
helped: 0
HURT: 17
HURT stats (abs) min: 1 max: 1 x̄: 1.00 x̃: 1
HURT stats (rel) min: 0.05% max: 0.73% x̄: 0.66% x̃: 0.73%
95% mean confidence interval for instructions value: 1.00 1.00
95% mean confidence interval for instructions %-change: 0.57% 0.75%
Instructions are HURT.
total cycles in shared programs: 383007996 -> 383007992 (<.01%)
cycles in affected programs: 20542 -> 20538 (-0.02%)
helped: 6
HURT: 7
helped stats (abs) min: 2 max: 6 x̄: 5.33 x̃: 6
helped stats (rel) min: 0.11% max: 0.36% x̄: 0.32% x̃: 0.36%
HURT stats (abs) min: 4 max: 4 x̄: 4.00 x̃: 4
HURT stats (rel) min: 0.27% max: 0.27% x̄: 0.27% x̃: 0.27%
95% mean confidence interval for cycles value: -3.30 2.69
95% mean confidence interval for cycles %-change: -0.19% 0.19%
Inconclusive result (value mean confidence interval includes 0).
No changes on Iron Lake or GM45.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109404
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Tested-by: nagrigoriadis@gmail.com
Tested-by: Danylo Piliaiev <danylo.piliaiev@gmail.com>
We emit an FBL instruction which only exists since Gen7. This prevents
the test from segfaulting when run with TEST_DEBUG=1.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Add compute shader initialization, assignment and cleanup to the
vl_compositor API.
Set the video compositor compute shader render path as the default when
the pipe supports it.
Signed-off-by: James Zhu <James.Zhu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Rename csc_matrix to shader_params, and increase the shader_params size
to store more constants for the compute shader.
Signed-off-by: James Zhu <James.Zhu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Split vl_compositor graphic shaders from vl_compositor API in order to share
vl_compositor API with vl_compositor compute shader later.
Signed-off-by: James Zhu <James.Zhu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
In the opt_peel_initial_if optimization, when moving the continue list to
the end of the continue block, before the jump, it could happen that the
continue list itself also ends with a jump.
This would mean that we would have two jump instructions in a row: the
first one from the continue list and the second one from the continue
block.
As inserting an instruction after a jump is not allowed (and it does not
make sense, as it will not be executed), remove the jump from the
continue block and keep the one from continue list, as it will be
executed first.
CC: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
opt_split_alu_of_phi moves ALU instruction to the end of continue block.
But if the continue block ends with a jump instruction (an explicit
"continue" instruction) then the ALU must be inserted before the jump,
as it is illegal to add instructions after the jump.
CC: Ian Romanick <ian.d.romanick@intel.com>
Fixes: 0881e90c09 ("nir: Split ALU instructions in loops that read phis")
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Instead of generating a GL_INVALID_ENUM error when the type or format
is incorrect while using glClear{Named}Buffer{Sub}Data, generate
GL_INVALID_VALUE.
From page 72 (page 94 of the PDF) of the OpenGL 4.6 spec:
" An INVALID_VALUE error is generated if type is not one of the
types in table 8.2.
An INVALID_VALUE error is generated if format is not one of the
formats in table 8.3."
Fixes the following test:
KHR-GL45.direct_state_access.buffers_errors
v2: correct the doxygen documentation.
Cc: Pi Tabred <servuswiegehtz@yahoo.de>
Cc: Brian Paul <brianp@vmware.com>
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
We've noticed the Team Fortress 2 engine seems to do many small
calls to glBufferSubData(..). Let's pick our heuristic based on the
resource base width, not the size of a particular upload.
This will cause transfers to be batched together in the transfer
queue.
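Illustratively (the helper name and threshold are hypothetical; the point
is which size feeds the decision):

   /* Key the path choice off the whole resource, not one upload, so many
    * small glBufferSubData calls against a large buffer all land in the
    * transfer queue and get batched. */
   static bool virgl_should_queue_transfer(const struct pipe_resource *res)
   {
      return res->width0 >= VIRGL_QUEUE_SIZE_THRESHOLD;
   }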
Relevant glbench microbenchmark --
Before: buffer_upload_dynamic_element_array_131072 = 131.17 mbytes_sec
After: buffer_upload_dynamic_element_array_131072 = 6828.24 mbytes_sec
Reviewed-by: Gert Wollny <gert.wollny@collabora.com>
This improves Unigine Valley benchmark by 3 to 10 fps (depending
on the scene).
It also improves the Team Fortress 2 benchmark from 6 fps to 13
fps (host: 20 fps).
Reviewed-by: Gert Wollny <gert.wollny@collabora.com>
Transfers will be placed here at unmap time instead of incurring
a VM exit. There's an attempt to deduplicate intersecting 1D transfers,
which are surprisingly common.
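A simplified sketch of the deduplication, assuming queued transfers are
plain 1D byte ranges (types and names hypothetical; MIN2/MAX2 are Mesa's
usual macros):

   struct queued_1d_xfer {
      unsigned start, end;   /* byte range within the resource */
   };

   /* If the new range overlaps an already-queued one, grow the queued
    * entry instead of appending a second transfer for the same bytes. */
   static bool try_merge_1d(struct queued_1d_xfer *q,
                            unsigned start, unsigned end)
   {
      if (end < q->start || start > q->end)
         return false;              /* disjoint: keep both transfers */
      q->start = MIN2(q->start, start);
      q->end   = MAX2(q->end, end);
      return true;
   }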
This can also help with mipmapped texture upload and smaller
textures, where the majority of the time is spent in the guest
kernel / QEMU -- not virglrenderer. This is shown by the GLbench
texture upload benchmark:
Before:
texture_upload_rgba_teximage2d_32 = 64.23 mtexel_sec
After:
texture_upload_rgba_teximage2d_32 = 367.44 mtexel_sec
v2: Split up list iteration functions (@gerddie)
v3: Support for optimizing glBufferSubData
Reviewed-by: Gert Wollny <gert.wollny@collabora.com>
The idea is to have two command buffers:
1) One for transfers
2) One for commands, which can include transfers
At flush time, (2) will be filled. Otherwise, (1) will be
used to submit transfers if there are enough of them.
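Roughly, with hypothetical names, the submit-side logic becomes:

   /* cbuf: draw commands (transfers are folded into it at flush time)
    * tbuf: transfers queued at unmap time */
   if (flushing) {
      virgl_transfer_queue_emit_all(ctx, ctx->cbuf);   /* hypothetical */
      virgl_submit(ctx, ctx->cbuf);
   } else if (ctx->tbuf->cdw >= TRANSFER_SUBMIT_THRESHOLD) {
      /* enough standalone transfers piled up: flush just those */
      virgl_submit(ctx, ctx->tbuf);
   }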
v2: Pass size directly to cmd_buf_create (@gerddie)
Reviewed-by: Gert Wollny <gert.wollny@collabora.com>
Much of our logic is based around the idea that the upper 16 bits
of a command dword encode the length of the command.
Now that the command buffer can hold >= 2^16 - 1 dwords, we should
check for this.
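A minimal sketch of such a check; VIRGL_ENCODE_MAX_DWORDS is the limit
mentioned in the v2 note below, the surrounding code is illustrative:

   /* The command header packs the payload length into its upper 16 bits,
    * so a single command can never describe more than 2^16 - 1 dwords. */
   #define VIRGL_ENCODE_MAX_DWORDS ((1 << 16) - 1)

   static inline bool virgl_cmd_length_ok(uint32_t length_dwords)
   {
      /* refuse to encode rather than emit a corrupt header */
      return length_dwords <= VIRGL_ENCODE_MAX_DWORDS;
   }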
v2: alignment, and only check VIRGL_ENCODE_MAX_DWORDS
Reviewed-by: Gert Wollny <gert.wollny@collabora.com>
Let's define a helper function and use it.
This commit also allows resources to be emitted into different command
buffers.
Like the ioctls, send 0 for layer_stride and stride. If we actually
send the real values, there are various assumptions in virglrenderer
for non-1D buffers that may need to be modified.
Reviewed-by: Gert Wollny <gert.wollny@collabora.com>
Mostly similar to VIRGL_CCMD_RESOURCE_INLINE_WRITE. However, this
uses the resource's already attached iovecs rather than the command
buffer to transfer the data.
v2: Used (1 << 16) not (1 << 15) [@gerddie]
Reviewed-by: Gert Wollny <gert.wollny@collabora.com>
Since we're just uploading to guest memory, let's just align to dword
size.
Fixes: e0f932 ("u_upload_mgr: pass alignment to u_upload_data manually")
Reviewed-by: Gert Wollny <gert.wollny@collabora.com>
We have cases where we would not like to expose these.
v2: call the option allow_rgb565_configs for consistency
with existing allow_rgb10_configs (Eric, Jason)
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
There are a few differences between Mali T860 (Panfrost's primary
reference target) and the older Midgard generations (T600/T700):
- Miscellaneous different magic numbers. It's not clear what these
numbers mean on either the old or new configurations yet.
- Errata fixes. T800 is the final Midgard generation and presumably the
least buggy. Older Midgard has some extra hardware errata we have to
workaround.
- SFBD vs MFBD split. Essentially, older Midgard use a Single
FrameBuffer Descriptor (SFBD), which corresponds to single
render-target rendering. Newer Midgard (T760+) use a Multiple
FrameBuffer Descriptor (MFBD), allowing multiple RTs. On ES 2.0, these
descriptors serve the same function, but we implement both, depending on
the version of the hardware.
- CPU bitness. 32-bit systems generally use 32-bit GPU descriptors, and
vice versa for 64-bit. Our target T760 systems are 32-bit whereas our
target T860 systems are 64-bit. More work is needed in this area.
This patch adds support in these areas for older Midgard hardware. It is
tested on Mali T760 and Mali T860.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
The liveness analysis pass is fairly expensive because it has to build
large bit-sets and run a fix-point algorithm on them. Instead of
requiring liveness for detecting if values escape a CF node, just take
advantage of the structured nature of NIR and use block indices instead.
This only requires the block index metadata, which is the fastest
metadata we have to generate.
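A rough sketch of the escape check, assuming the usual nir_foreach_use
iteration (if-condition uses omitted for brevity):

   static bool def_escapes_node(nir_ssa_def *def, unsigned first_idx,
                                unsigned last_idx)
   {
      nir_foreach_use(use, def) {
         unsigned idx = use->parent_instr->block->index;
         /* Structured NIR gives the node's blocks a contiguous index
          * range, so any use outside [first_idx, last_idx] escapes. */
         if (idx < first_idx || idx > last_idx)
            return true;
      }
      return false;
   }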
No shader-db changes on Kaby Lake
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
We want to handle live SSA values differently and it's going to involve
walking the instructions. We can make it a single instruction walk if
we combine it with cf_node_has_side_effects.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
This fixes a bug in RuneScape where we were optimizing x >> 16 to an
extract and then negating and converting to float. The NIR to fs pass
was dropping the negate on the floor, breaking a geometry shader and
causing it to render nothing.
Fixes: 1f862e923c "i965/fs: Optimize float conversions of byte/word..."
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109601
Tested-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Unfortunately swr was missed in the original commit. The number of
varyings should generally match up to what's reported as the shader
caps for fragment inputs.
Fixes: 6010d7b8e8 (gallium: add PIPE_CAP_MAX_VARYINGS)
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Alok Hota <alok.hota@intel.com>
Cc: 19.0 <mesa-stable@lists.freedesktop.org>
Just a little higher up in the function we assert that the aspect masks
are actually equal so there's no reason for the weaker check. Also, the
temporary variables were causing compiler warnings in release builds.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
[28/716] Compiling C object 'src/compiler/nir/068b2c8@@nir@sta/nir_gather_xfb_info.c.o'.
../src/compiler/nir/nir_gather_xfb_info.c: In function ‘nir_gather_xfb_info’:
../src/compiler/nir/nir_gather_xfb_info.c:171:13: warning: variable ‘max_offset’ set but not used [-Wunused-but-set-variable]
unsigned max_offset[NIR_MAX_XFB_BUFFERS] = {0};
^~~~~~~~~~
[36/716] Compiling C object 'src/compiler/nir/068b2c8@@nir@sta/nir_instr_set.c.o'.
../src/compiler/nir/nir_instr_set.c:502:1: warning: ‘instr_each_src_and_dest_is_ssa’ defined but not used [-Wunused-function]
instr_each_src_and_dest_is_ssa(nir_instr *instr)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
spirv_to_nir can generate input/output variables which are illegal
for the current shader stage, which would cause nir_validate_shader
to balk. After my recent commit to start decorating arrays as compact,
dEQP-VK.spirv_assembly.instruction.graphics.module.same_module started
hitting validation errors due to outputs in a TCS (not intended for the
TCS at all) not being per-vertex arrays.
Thanks to Jason Ekstrand for suggesting this approach.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109573
Fixes: ef99f4c8d1 compiler: Mark clip/cull distance arrays as compact before lowering.
Reviewed-by: Juan A. Suarez <jasuarez@igalia.com>
My patch to switch from struct-based MOCS to numeric MOCS accidentally
divided all MOCS entries by 2 in the Vulkan driver.
MOCS on Gen9+ is just an array index into a table. But in the hardware
packets, the index starts at bit 1. So we need to shift it.
Fixes: 0b44644ca6 (genxml: Consistently use a numeric "MOCS" field)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
assert()-based tests make no sense without asserts, so make sure asserts
are compiled in, even if the rest of the code has asserts turned off.
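One way to do that, whether through the build system or at the top of each
test source, is the standard idiom (sketch):

   /* force assert() to stay live in this file even if the rest of the
    * build was compiled with -DNDEBUG */
   #undef NDEBUG
   #include <assert.h>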
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
assert()-based tests make no sense without asserts, so make sure asserts
are compiled in, even if the rest of the code has asserts turned off.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
There was an issue recently caused by the system header being included
by mistake, so let's just get rid of this include path and always
explicitly #include "drm-uapi/FOO.h"
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@chromium.org>
These headers are used by a lot more than just the intel drivers nowadays.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@chromium.org>
We should check that num_channels is 4, otherwise that breaks
the world. Sorry for the short breakage.
Fixes: 4b3549c084 ("radv: reduce the number of loaded channels for vertex input fetches")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
I think this will save an instruction and hopefully not increase any other
costs (possibly the immediate -1 and 1?), but I haven't actually tested.
Reviewed-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Drops one instruction from fs-sign-int.shader_test. No change in
shader-db due to it having 0 instances of sign(genIType). This may hurt
isign64 if algebraic runs before int64 lowering, but I wasn't sure how to
mark the algebraic opt as "every bit size but 64".
v2: Update commit message about shader-db.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com> (v1)
Everything should be in ssa form when we call this. This is a
hotpath so replace the check with an assert.
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Everything should be in ssa form when this is called. Checking
for it here is expensive so turn this into an assert instead.
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
There is no need to hash the instruction twice, especially as we
end up adding it in the majority of cases.
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Currently the Intel "anvil" driver races with the generation of genxml
files, while i965 has an explicit dependency. This patch adds the same
dependency to anvil.
Fixes: d1992255bb
("meson: Add build Intel "anv" vulkan driver")
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Allows drivers using `u_pipe_screen_get_param_defaults` to use a
fallback value for the new pipe cap. Default value of 8 based on GL 2.1
MAX_VARYING_FLOATS
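i.e. a sketch of what the defaults helper would return (comment wording is
mine):

   case PIPE_CAP_MAX_VARYINGS:
      /* GL 2.1 requires MAX_VARYING_FLOATS >= 32, i.e. 8 vec4 varyings */
      return 8;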
Reviewed-by: Eric Anholt <eric@anholt.net>
Use ir3_next_varying() for iterating through varyings and unset the
global point coord invert bit.
Fixes:
dEQP-GLES3.functional.shaders.builtin_variable.pointcoord
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
We need to set UNK3 in GRAS_CNTL and RB_RENDER_CONTROL0 for the value
to be reliably delivered.
Fixes:
dEQP-GLES3.functional.shaders.builtin_variable.frontfacing
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
This pulls in changes for compute shaders and a6xx ssbo/image support.
FACENESS bit moved from position 1 to 2 and there's a global invert
bit for point coord.
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Because none of them have been picked up for 19.0 due to this bug
being reintroduced.
v2: - Fix fixes tags
Fixes: e6b3a3b201
("bin/get-pick-list.sh: handle "typod" usecase.")
Fixes: fac10169bb
("bin/get-pick-list.sh: prefix output with "[stable] "")
Reviewed-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
I tried bumping the limit on make and scons instead, but that just
thrashed the runners, so let's not do that (sorry @daniels :]).
Instead, remove the automatic thread management from ninja and limit it
to 4 instead, in line with make and scons.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
if we have something like this:
loop {
...
if x {
break;
} else {
continue;
}
}
opt_if_loop_last_continue returns true, marking progress, although nothing
changes.
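One way to guard against that, sketched with the usual nir helpers (not
necessarily the exact fix):

   /* If the 'if' is already the last thing in the loop body, there is
    * nothing left to move into the branches; bail without claiming
    * progress. */
   nir_block *after_if =
      nir_cf_node_as_block(nir_cf_node_next(&nif->cf_node));
   if (exec_list_is_empty(&after_if->instr_list) &&
       after_if == nir_loop_last_block(loop))
      return false;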
Fixes: 5921a19d4b "nir: add if opt opt_if_loop_last_continue()"
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Stop using 12.12 quantization for viewports that are not contained in
the lower 4k corner of the render target as the hardware needs to keep
both absolute and relative coordinates representable.
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
Cc: 18.3 19.0 <mesa-stable@lists.freedesktop.org>
This extension simply drops a draw time restriction:
"Furthermore, an INVALID_OPERATION error is generated by
DrawArrays and the other drawing commands defined in section
2.8.3 (10.5 in ES 3.1) if blending is enabled (see below) and
any draw buffer has 32-bit floating-point format components."
We never correctly enforced this restriction anyway, so we were
basically already implementing it. We just need to advertise it
for our behavior to be correct.
The extension requires EXT_color_buffer_float, but we already enable
that via dummy_true. So we can dummy_true this one as well.
Found while debugging WebGL conformance tests. Does not fix any.
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Without using this function, we trip the -Wswitch warning when compiling
in Meson's default debugoptimized mode.
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
This can happen when we record a VkCmdDraw in a secondary buffer that
was created inheriting from the primary buffer, but with the framebuffer
set to NULL in the VkCommandBufferInheritanceInfo.
Vulkan 1.1.81 spec says that "the application must ensure (using scissor
if neccesary) that all rendering is contained in the render area [...]
[which] must be contained within the framebuffer dimesions".
While this should be done by the application, commit 465e5a86 added the
clamp to the framebuffer size, in case of application does not do it.
But this requires to know the framebuffer dimensions.
If we do not have a framebuffer at that moment, the best compromise we
can do is to just apply the scissor as it is, and let the application to
ensure the rendering is contained in the render area.
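In pseudo-C the selection of the clamping bound looks roughly like this
(field and variable names are illustrative, not the actual anv code):

   uint32_t max_x, max_y;
   if (cmd_buffer->state.framebuffer) {
      /* normal case: clamp the scissor to the framebuffer dimensions */
      max_x = fb->width;
      max_y = fb->height;
   } else if (cmd_buffer->level == VK_COMMAND_BUFFER_LEVEL_PRIMARY) {
      /* no framebuffer yet, but a primary buffer knows its render area */
      max_x = render_area.offset.x + render_area.extent.width;
      max_y = render_area.offset.y + render_area.extent.height;
   } else {
      /* secondary buffer with a NULL inherited framebuffer: apply the
       * scissor as given and trust the application */
      max_x = scissor.offset.x + scissor.extent.width;
      max_y = scissor.offset.y + scissor.extent.height;
   }
   /* then clamp the scissor rectangle to [0, max_x) x [0, max_y) */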
v2: do not clamp to framebuffer if there isn't a framebuffer
v3 (Jason):
- clamp earlier in the conditional
- clamp to render area if command buffer is primary
v4: clamp also x and y to render area (Jason)
v5: rename used variables (Jason)
Fixes: 465e5a86 ("anv: Clamp scissors to the framebuffer boundary")
CC: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This removes some scalar loads from shaders, but it increases
the number of SET_SH_REG packets. This is currently basic but
it could be improved if needed. Inlining dynamic offsets might
also help.
Original idea from Dave Airlie.
29077 shaders in 15096 tests
Totals:
SGPRS: 1321325 -> 1357101 (2.71 %)
VGPRS: 936000 -> 932576 (-0.37 %)
Spilled SGPRs: 24804 -> 24791 (-0.05 %)
Code Size: 49827960 -> 49642232 (-0.37 %) bytes
Max Waves: 242007 -> 242700 (0.29 %)
Totals from affected shaders:
SGPRS: 290989 -> 326765 (12.29 %)
VGPRS: 244680 -> 241256 (-1.40 %)
Spilled SGPRs: 1442 -> 1429 (-0.90 %)
Code Size: 8126688 -> 7940960 (-2.29 %) bytes
Max Waves: 80952 -> 81645 (0.86 %)
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This is needed in order to inline some push constants when possible.
This also adds a new helper for initializing the pass.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
"The C standard says that compound literals which occur inside of
the body of a function have automatic storage duration associated
with the enclosing block. Older GCC releases were putting such
compound literals into the scope of the whole function, so their
lifetime actually ended at the end of containing function. This
has been fixed in GCC 9. Code that relied on this extended lifetime
needs to be fixed, move the compound literals to whatever scope
they need to accessible in."
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109543
Cc: <mesa-stable@lists.freedesktop.org>
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Gustaw Smolarczyk <wielkiegie@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Patch adds nir_lower_tex_options as parameter to sample_plane so that
we don't need to extend nir_tex_instr for this.
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
prog->SamplersUsed is set by the linker when validating resource limits,
while info->textures_used is gathered after NIR optimizations, which may
have eliminated some unused surfaces.
This may let us skip some work.
Reviewed-by: Eric Anholt <eric@anholt.net>
textures_used_by_txf is a subset of textures_used which is a subset
of prog->SamplerUnits. This should do nothing.
Reviewed-by: Eric Anholt <eric@anholt.net>
Eric and I would like a bitmask of which samplers are used, similar to
prog->SamplersUsed, but available in NIR. The linker uses SamplersUsed
for resource limit checking, but later optimizations may eliminate more
samplers. So instead of propagating it through, we gather a new one.
While there, we also gather the existing textures_used_by_txf bitmask.
Gathering these bitfields in nir_shader_gather_info is awkward at best.
The main reason is that it introduces an ordering dependency between the
two passes. If gathering runs before lower_samplers_as_deref, it can't
look at var->data.binding. If the driver doesn't use the full lowering
to texture_index/texture_array_size (like radeonsi), then the gathering
can't use those fields. Gathering might be run early /and/ late, first
to get varying info, and later to update it after variant lowering. At
this point, should gathering work on pre-lowered or post-lowered code?
Pre-lowered is also harder due to the presence of structure types.
Just doing the gathering when we do the lowering alleviates these
ordering problems. This fixes ordering issues in i965 and makes the
txf info gathering work for radeonsi (though they don't use it).
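The gathering itself becomes a one-liner per tex instruction once it lives
in the lowering pass; roughly (exact expression illustrative, BITFIELD_RANGE
is Mesa's usual helper):

   /* var->data.binding gives the first texture unit; an array of samplers
    * occupies a contiguous range of units. */
   shader->info.textures_used |= BITFIELD_RANGE(binding, array_size);
   if (instr->op == nir_texop_txf || instr->op == nir_texop_txf_ms)
      shader->info.textures_used_by_txf |= BITFIELD_RANGE(binding, array_size);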
Reviewed-by: Eric Anholt <eric@anholt.net>
Until now, prog_to_nir has been setting texture_index and sampler_index
directly. This is different than GLSL shaders, which create variable
dereferences and rely on lowering passes to reach this final form.
radeonsi uses variable dereferences for samplers rather than
texture_index and sampler_index, so it doesn't even make sense to set
them there. By moving to derefs, we ensure that both GLSL and ARB
programs produce the same final form that the driver desires.
Reviewed-by: Eric Anholt <eric@anholt.net>
An upcoming patch will start building derefs in prog_to_nir, at which
point we'll need to lower them to indexes.
This gets both GLSL and non-GLSL shaders using the same paths.
Reviewed-by: Eric Anholt <eric@anholt.net>
Passes like nir_lower_drawpixels add additional sampler variables,
and set an explicit binding which never changes. These extra samplers
don't have proper uniform storage associated with them, and there is no
way to update bindings via the API. So, for any 'hidden' variables,
just trust that there's an explicit binding set.
Reviewed-by: Eric Anholt <eric@anholt.net>
I would like to be able to run gl_nir_lower_samplers() to turn texture
and sampler variable dereferences into indexes and offsets, even for
ARB programs, and built-in shaders. This would make sampler handling
more consistent across the various types of shaders.
For GLSL programs, the gl_nir_lower_samplers_as_deref() pass looks up
the variable bindings in the shader program's uniform storage. But
ARB programs and built-in shaders don't have a gl_shader_program, and
uniform storage doesn't exist. In this case, we simply skip that
lookup, and trust var->data.binding to be set correctly by whoever
created the shader.
Reviewed-by: Eric Anholt <eric@anholt.net>
Piglit's vp-max-array test creates a vertex program containing a uniform
array sized to the value of GL_MAX_NATIVE_PROGRAM_PARAMETERS_ARB. Mesa
will then add additional state-var parameters for things like the MVP
matrix.
radeonsi currently exposes a value of 4096, derived from constant buffer
upload size. This means the array will have 4096 elements, and the
extra MVP state-vars would get a prog_src_register::Index of over 4096.
Unfortunately, prog_src_register::Index is a signed 13-bit integer, so
values beyond 4096 end up turning into negative numbers. Negative
source indexes are only valid for relative addressing, so this ends up
generating illegal IR.
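A standalone illustration of the wraparound (the real field lives in Mesa's
prog_instruction.h; the 13-bit width is as described above):

   #include <stdio.h>

   struct src { signed int Index:13; };   /* signed 13-bit: -4096..4095 */

   int main(void)
   {
      struct src s;
      /* a state-var parameter appended after a 4096-entry uniform array;
       * out-of-range stores into a signed bit-field are implementation-
       * defined, and GCC wraps them */
      s.Index = 4096 + 3;
      printf("%d\n", s.Index);   /* prints -4093: looks like relative addressing */
      return 0;
   }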
In prog_to_nir, this would cause an out of bounds array access.
st_mesa_to_tgsi checks for a negative value, assumes it's bogus,
and remaps it to parameter 0 in order to get something in-range.
This isn't right - instead of reading the MVP matrix, it would read
the first element of the vertex program's large array. But the test
only checks that the program compiles, so we never noticed that it
was broken.
This patch caps the exposed program parameter limit, with the understanding
that we may need to generate additional state-vars internally. i965 has
exposed 1024 for this limit for years, so I don't expect lowering it to
2048 will cause any practical problems for radeonsi or other drivers.
Fixes vp-max-array with prog_to_nir.c.
Cc: "19.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
This fixes a rather astonishing problem that came up while debugging
an issue in the Vulkan CTS. Apparently the Vulkan CTS framework has
the tendency to create multiple VkDevices, each one with a separate
DRM device FD and therefore a disjoint GEM buffer object handle space.
Because the intel_dump_gpu tool wasn't making any distinction between
buffers from the different handle spaces, it was confusing the
instruction state pools from both devices, which happened to have the
exact same GEM handle and PPGTT virtual address, but completely
different shader contents. This was causing the simulator to believe
that the vertex pipeline was executing a fragment shader, which didn't
end up well.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
The blitter doesn't seem to have a write mask, so for depth only and
stencil only blits to Z24S8 we cast the Z24S8 buffer to an RGBA UNORM8
buffer and fall back to pipeline blits with corresponding write mask.
Fixes
dEQP-GLES3.functional.fbo.blit.depth_stencil.depth24_stencil8_stencil_only
dEQP-GLES3.functional.fbo.invalidate.sub.unbind_blit_depth
dEQP-GLES3.functional.fbo.invalidate.sub.unbind_blit_msaa_depth
dEQP-GLES3.functional.fbo.invalidate.whole.unbind_blit_depth
dEQP-GLES3.functional.fbo.invalidate.whole.unbind_blit_msaa_depth
dEQP-GLES3.functional.fbo.msaa.2_samples.stencil_index8
dEQP-GLES3.functional.fbo.msaa.4_samples.stencil_index8
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
We need to allow overriding the format with that of the image or
sampler view, so we can't take it from the resource in fd6_tex_swiz().
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
The src coordinates are s24.8. For an inverted blit that ends at y=0
we need to program -1 for sy2, so we need to handle negative values
correctly.
Fixes
dEQP-GLES3.functional.fbo.blit.rect.nearest_consistency_mag_reverse_dst_y
dEQP-GLES3.functional.fbo.blit.rect.nearest_consistency_min_reverse_dst_y
dEQP-GLES3.functional.fbo.blit.rect.nearest_consistency_min_reverse_src_y
dEQP-GLES3.functional.fbo.invalidate.sub.unbind_blit_color
dEQP-GLES3.functional.fbo.invalidate.whole.unbind_blit_color
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
We can rewrite almost all depth stencil blits to various red-only
blits. The exception is depth-only or stencil-only blits into z24s8
combined depth stencil buffer. We can fall back for depth-only, but
stencil-only remains broken.
Fixes
dEQP-GLES3.functional.fbo.blit.depth_stencil.depth24_stencil8_basic
dEQP-GLES3.functional.fbo.blit.depth_stencil.depth24_stencil8_scale
dEQP-GLES3.functional.fbo.blit.depth_stencil.depth32f_stencil8_basic
dEQP-GLES3.functional.fbo.blit.depth_stencil.depth32f_stencil8_scale
dEQP-GLES3.functional.fbo.blit.depth_stencil.depth32f_stencil8_stencil_only
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
The explanation for the compressed format check is broken across two
comments:
/* We can blit if both or neither formats are compressed formats... */
/* ... but only if they're the same compression format. */
but the ok_format() checks were inserted between, breaking up the flow
of the sentence.
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Call ctx->blit() and let it reject blits it can't do instead of giving
up on stencil blits and blits u_blitter can't do.
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
We already check earlier in the call chain in fd_blit().
glBlitFramebuffer always sets render_condition_enable and thus we
would never try the blitter path for that.
Now that we get all of dEQP-GLES3.functional.fbo.blit.conversion.*
down this path, it turns out that the
fail_if(info->mask != util_format_get_mask(info->src.format));
fail_if(info->mask != util_format_get_mask(info->dst.format));
conditions weren't accurate. util_format_get_mask() returns
PIPE_MASK_RGBA for any format with any color channels, while
info->mask is the exact set of channels to blit. So we reject things
we could blit - for example, PIPE_FORMAT_R16G16_FLOAT where info->mask
is RG while util_format_get_mask() returns RGBA - and accept things we
can't. It turns out that the blitter is happy to blit different
number of channels, but fails to blit formats with different numerical
formats and srgb formats.
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Update for a6xx.xml.h to incorporate a few new bits and changes to
blit src rect coordinate types.
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Based on the VA spec, vaDeriveImage() returns VA_STATUS_ERROR_OPERATION_FAILED
if the driver does not support the internal surface format. Currently
vaDeriveImage() fails for non-contiguous planes, and the operation-failed
error is needed so that clients can fall back to the indirect path,
i.e. vaCreateImage()+vaPutImage(), in case vaDeriveImage() fails with
VA_STATUS_ERROR_OPERATION_FAILED.
This patch notifies the client that the operation failed with the proper
error string, so that the client will fall back to vaCreateImage()+vaPutImage().
v2: updated commit message based on VA spec.
Signed-off-by: suresh guttula <suresh.guttula@amd.com>
Reviewed-by: Leo Liu <leo.liu@amd.com>
When nir_rematerialize_derefs_in_use_blocks_impl was first written, I
attempted to optimize things a bit by not bothering to re-materialize
the sources of deref instructions figuring that the final caller would
take care of that. However, in the case of more complex deref chains
where the first link or two lives in block A and then another link and
the load/store_deref intrinsic live in block B it doesn't work. The
code in rematerialize_deref_in_block looks at the tail of the chain,
sees that it's already in block B and skips it, not realizing that part
of the chain also lives in block A.
The easy solution here is to just rematerialize deref sources of deref
instructions as well. This may potentially lead to a few more deref
instructions being created by the conditions required for that to
actually happen are fairly unlikely and, thanks to the caching, it's all
linear time regardless.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109603
Fixes: 7d1d1208c2 "nir: Add a small pass to rematerialize derefs per-block"
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
It's more clear and means we don't have to update the array every time
we add an optional texture instruction argument
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Instead of generating it from scratch in each forked repo. This should
save time, energy and storage. (The xserver & xf86-video-amdgpu CI
scripts do basically the same)
v2:
* Hardcode "mesa" instead of using $CI_PROJECT_NAME, to avoid breakage
if the project name is changed after forking (Eric Engestrom)
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
For some reason we don't use view volume clipping by default, and use
scissors instead. These scissors were set to an 8k max fb size, while
the driver advertises 16k-sized framebuffers.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: <mesa-stable@lists.freedesktop.org>
Midgard has native support for QUADS and POLYGONS; Bifrost seemingly
does not. Thus, Midgard generally skips prim_convert whereas Bifrost
needs the pass; this patch allows the setting of allowed primitives to
occur on a per-context basis (for runtime hardware selection).
v2: Use (POLYGONS + 1) instead of LINES_ADJACENCY.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Robert Foss <robert.foss@collabora.com>
Various methods relating to resource management were previously marked
as kernel-specific, forcing them to stay downstream in the vendor
overlay and eventually be duplicated for DRM code. This patch adds back
this code in kernel-neutral space, allowing for code sharing and
minimising the diff to downstream.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Most Midgard instructions take two arguments logically; there are always
two arguments at the assembly level. For the few instructions that take
only a single argument, generally the second argument slot is unused,
with a zero inline constant occupying the space. fmov/imov are the
exception, where the first argument is filled with r24 and the logical
argument is in the second slot.
Previously, these constraints were handled by a delicate, buggy series
of hacks. This commit removes these hacks. Instead, we look at the
logical number of arguments (from NIR), switching between two argument
and one-argument-one-zero style. We then introduce a quirk for the
flipped style, which applies to fmov/imov.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Nouveau apparently uses the u_screen helper but prints a warning in the
default case, so running any GL program would start grumbling.
Fixes: 8fa54bc549 gallium: Add a PIPE_CAP_NIR_COMPACT_ARRAYS capability bit.
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Acked-by: Ilia Mirkin <imirkin@alum.mit.edu>
The sampler will be ignored since the underlying 'ld_mcs' operation
won't use it, so just fill the field with 0 instead of the texture to
make it clearer that's the case.
This will also prevent is_high_sampler() from kicking in unnecessarily, in
case we are using the operation for a texture with index >= 16.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
anv and radv both happened to already return 2^14 for these, but
querying the ICD is safer and will help if vdreno (or whatever it's
called) doesn't have the same max.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
v2: Remove the original ALU instruction after all of its readers are
modified to read the new ALU instruction.
v3: Fix an issue where a bcsel that may not be executed on a loop
iteration due to a break statement is converted to a phi (and therefore
incorrectly "executed"). Noticed by Tim.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109216
Fixes: 8fb8ebfbb0 ("intel/compiler: More peephole select")
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
A single shader in Unigine Superposition is affected by this change.
A single iadd is moved to the end of a loop. This iadd is involved in
a complex set of logic to terminate the loop, and an extra mov
instruction is inserted. This shader really needs the optimization
suggested by bugzilla #94747, and I expect that to make this tiny
regression go away.
All Gen7+ platforms had similar results. (Skylake shown)
total instructions in shared programs: 15047543 -> 15047545 (<.01%)
instructions in affected programs: 565 -> 567 (0.35%)
helped: 0
HURT: 2
total cycles in shared programs: 369977253 -> 369978253 (<.01%)
cycles in affected programs: 127910 -> 128910 (0.78%)
helped: 0
HURT: 2
v2: Skip nir_op_vec{2,3,4} and nir_op_[fi]mov instructions to avoid
infinite optimization loops. Remove the original ALU instruction after
all of its readers are modified to read the new ALU instruction.
v3: Extend to the more general case. If the prev-block value from
the phi is not undef, this means the ALU instruction has to be
duplicated in both the prev-block and the continue-block.
Fixes: 8fb8ebfbb0 ("intel/compiler: More peephole select")
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
This will be used in a couple more places soon.
The function name is... horribly long. Neither Matt nor I could think
of anything that was shorter and still more descriptive than
"is_phi_foo". I'm willing to entertain suggestions.
Fixes: 8fb8ebfbb0 ("intel/compiler: More peephole select")
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
For some reason, this warning only occurs for me in release builds.
In file included from src/intel/compiler/brw_nir_lower_mem_access_bit_sizes.c:25:0:
src/intel/compiler/brw_nir_lower_mem_access_bit_sizes.c: In function ‘brw_nir_lower_mem_access_bit_sizes’:
src/compiler/nir/nir_builder.h:501:26: warning: ‘src_swiz[2]’ may be used uninitialized in this function [-Wmaybe-uninitialized]
alu_src.swizzle[i] = swiz[i];
~~~~~~~~~~~~~~~~~~~^~~~~~~~~
src/intel/compiler/brw_nir_lower_mem_access_bit_sizes.c:225:16: note: ‘src_swiz[2]’ was declared here
unsigned src_swiz[4];
^~~~~~~~
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
There are a number of reasons for the rewrite.
1. Adding support for packing tess patch varyings in a sane way.
2. Making use of qsort allowing the code to be much easier to
follow.
3. Fixes a bug where different interp types caused component
packing to be skipped for all varyings in some scenarios.
4. Allows us to add a crude live range analysis for deciding
which components should be packed together. This support can
optionally be added in a future patch.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This will be used in the following patches to determine if we
support packing the components of a varying.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This adds support needed for marking the varyings as used but we
don't actually support packing patches in this patch.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
If the driver does not support rendering to these formats but does
support texturing, we can end up in incompatibilities between textures
and renderbuffers that are then copied to.
Fixes KHR-GL45.copy_image.functional on nvc0
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Cc: 19.0 <mesa-stable@lists.freedesktop.org>
Some NVIDIA hardware can accept 128 fragment shader input components,
but only have up to 124 varying-interpolated input components. We add a
new cap to express this cleanly. For most drivers, this will have the
same value as PIPE_SHADER_CAP_MAX_INPUTS for the fragment shader.
Fixes KHR-GL45.limits.max_fragment_input_components
Signed-off-by: Karol Herbst <karolherbst@gmail.com>
[imirkin: rebased, improved docs/commit message]
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Acked-by: Rob Clark <robdclark@gmail.com>
Acked-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Cc: 19.0 <mesa-stable@lists.freedesktop.org>
Regardless of whether the build uses kmsro, kmsro is the default driver
descriptor when the static loader is used. Thus, in an edge case where
the static loader is used, no static targets are loaded, and kmsro is
not compiled, a spurious warning is printed. There's no harm in
executing the stub function in this case, but it's not "an error" to not
have kmsro in the build; the missing-driver warning should not be printed
for kmsro.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Karol Herbst <kherbst@redhat.com>
This partially reverts a change from b7a93cbded ("radv: Handle
VK_ATTACHMENT_UNUSED in CmdClearAttachment") which fixed actual issues
but also started to accept invalid values for the colorAttachment
field.
This change asserts that the field is valid for the current pass.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: b7a93cbded ("radv: Handle VK_ATTACHMENT_UNUSED in CmdClearAttachment")
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This reverts commit d76e777988.
Let's make this obvious that there is an application issue if it tries
to access an attachment that doesn't exist in the current pass.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: d76e777988 ("anv: Handle VK_ATTACHMENT_UNUSED in colorAttachment")
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This variable was removed in commit 087af992a2 "travis: remove
unused linux code path" because it looked like it was only used by the
Linux build. Turns out I was wrong, so let's restore it.
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
In addition to the DRM interface in active development, for legacy
kernels Panfrost has a small, optional, out-of-tree glue repository. For
various reasons, this legacy code should not be included in Mesa proper,
but this commit allows it to coexist peacefully with upstream Panfrost.
If the nondrm repo is cloned/symlinked to the directory
`src/gallium/drivers/panfrost/nondrm`, legacy functionality will be
built. Otherwise, the driver will build normally, though a runtime error
message will be printed if a legacy kernel is detected.
This workaround is icky, but it allows a nearly-upstream Panfrost to
work on real hardware, today. Ideally, this patch will be reverted when
the Panfrost kernel module is mature and we drop legacy support.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
This patch includes the command stream portion of the driver,
complementing the earlier compiler. It provides a base for future work,
though it does not integrate with any particular winsys.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Switching to the defaults function cleans up pan_screen.h markedly and
futureproofs for when new PIPE_CAPs are added.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Suggested-by: Eric Anholt <eric@anholt.net>
As kmsro allows an essentially mix-and-match hodgepodge of display
drivers and renderonly GPUs, it doesn't make sense to couple the display
driver entrypoint definition with the driver. Instead, we move *all*
kmsro entrypoints to a shared kmsro block at the end (avoiding clutter
and distraction since this list may snowball in the future).
v2: Alphabetize driver list.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Eric Anholt <eric@anholt.net>
The strategy is to keep a CPU-side counter of the direct invocations,
and a GPU-side counter of the indirect invocations, and then add them
together for queries.
The specific technique is a macro which multiplies a list of integers
together and accumulates the product into SCRATCH registers held inside
of the context. Another macro will read those values out and add them to
the passed-in cpu-side counter to be stored in a query buffer the same
way that all the other statistics are stored.
Original implementation by Rhys Perry, redone by Ilia Mirkin to use the
SCRATCH temporaries.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Not quite perfect, but at least we don't end up with random values in
the query buffer.
Fixes KHR-GL45.pipeline_statistics_query_tests_ARB.functional_default_qo_values
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: 19.0 <mesa-stable@lists.freedesktop.org>
For the NO_WAIT variants, we would jump into the ALWAYS case for both
nested and inverted occlusion queries. However if the query had
previously completed, the application could reasonably expect that the
render condition would follow that result.
To resolve this, we remove the nesting distinction which unnecessarily
created an imbalance between the regular and inverted cases (since
there's no "zero" condition mode). We also use the proper comparison if
we know that the query has completed (which could happen as a result of
an earlier get_query_result call).
Fixes KHR-GL45.conditional_render_inverted.functional
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: 19.0 <mesa-stable@lists.freedesktop.org>
Looks like SUBFM.3D and SUEAU are perfectly capable of dealing with 3d
tiling, they just need the correct inputs. Supply them.
We also have to deal with the case where a 2d "layer" of a 3d image is
bound. In this case, we supply the z coordinate separately to the
shader, which has to optionally treat every 2d case as if it could be a
slice of a 3d texture.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: 19.0 <mesa-stable@lists.freedesktop.org>
We used to pre-set a bunch of extra arguments to a texture instruction
in order to force the RA to allocate a register at the boundary of 4.
However with the levelZero optimization, which removes a LOD argument
when it's uniformly equal to zero, we undid that logic by removing an
extra argument. As a result, we could end up with insufficient alignment
on the second wide texture argument.
Instead we switch to a different method of achieving the same result.
The logic runs during the constraint analysis of the RA, and adds unset
sources as necessary right before being merged into a wide argument.
Fixes MISALIGNED_REG errors in Hitman when run with bindless textures
enabled on a GK208.
Fixes: 9145873b15 ("nvc0/ir: use levelZero flag when the lod is set to 0")
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: 19.0 <mesa-stable@lists.freedesktop.org>
Atomic operations don't update the local cache, which means that we
would have to issue CCTL operations in order to get the updated values.
When we know that a buffer is primarily used for atomic operations, it's
easier to just avoid the caching at that level entirely.
The same issue persists for non-atomic buffers, which will have to be
fixed separately.
Fixes the failing dEQP-GLES31.functional.atomic_counter.* tests.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Cc: 19.0 <mesa-stable@lists.freedesktop.org>
The hardware does not natively support FIXED and DOUBLE formats. If
those are used in an indirect draw, they have to be converted. Our
conversion tries to be clever about only converting the data that's
needed. However for indirect, that won't work.
Given that DOUBLE or FIXED are highly unlikely to ever be used with
indirect draws, read the indirect buffer on the CPU and issue draws
directly.
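For illustration, a minimal sketch of that fallback, assuming the layout of
GL's DrawArraysIndirectCommand; draw_direct() below is a hypothetical
stand-in for the driver's direct draw path:

```c
#include <stdint.h>

/* Layout defined by OpenGL for glDrawArraysIndirect commands. */
struct draw_arrays_indirect_command {
   uint32_t count;
   uint32_t instance_count;
   uint32_t first;
   uint32_t base_instance;
};

/* Hypothetical stand-in for the driver's direct draw path. */
void draw_direct(uint32_t first, uint32_t count,
                 uint32_t instance_count, uint32_t base_instance);

static void
draw_indirect_on_cpu(const void *mapped_indirect_buf, unsigned offset)
{
   const struct draw_arrays_indirect_command *cmd =
      (const void *)((const char *) mapped_indirect_buf + offset);

   /* Read the parameters on the CPU and issue the draw directly, so the
    * FIXED/DOUBLE vertex data can still be converted as for direct draws. */
   draw_direct(cmd->first, cmd->count, cmd->instance_count, cmd->base_instance);
}
```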
Fixes the failing dEQP-GLES31.functional.draw_indirect.random.* tests.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: 19.0 <mesa-stable@lists.freedesktop.org>
We used to restrict this to just PIPE_BIND_SAMPLER_VIEW resources, but
most resources benefit from being tiled.
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Reviewed-by: Rob Clark <robdclark@gmail.com>
We're writing to the bo and the kernel needs to know for
fd_bo_cpu_prep() to work.
Fixes: f93e431272 ("freedreno/a6xx: Enable blitter")
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Needed for VK_EXT_buffer_device_address.
The pointers are implemented as i8*, since I could not figure
out how to emulate setting struct offsets in LLVM based on the
SPIR-V offsets (and more weird stuff like row major matrices).
Acked-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
We use a straight glsl->llvm type conversion so types should already be right.
Also, even though the writemasks were changed, we were not actually doing 32-bit
things, so this fails miserably.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
For example with VK_EXT_buffer_device_address or
VK_KHR_variable_pointers.
Fixes: a2b5cc3c39 "radv: enable variable pointers"
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
For the implicit casts inherent in nir.
This should probably have been done for shared memory for
VK_KHR_variable_pointers.
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
`EGLDisplay` variables (the opaque Khronos type) have mostly been
consistently called `dpy`, as this is the name used in the Khronos
specs.
However, `_EGLDisplay` variables (our internal struct) have been
randomly called `dpy` when there was no local variable clash with
`EGLDisplay`s, and `disp` otherwise.
Let's be consistent and use `dpy` for the Khronos type, and `disp`
for our struct.
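As a small illustration of the naming convention (the entrypoint below is
made up, and how one value is looked up from the other is elided):

```c
#include <EGL/egl.h>

struct _EGLDisplay;   /* Mesa's internal display struct */

/* Naming only: "dpy" is the opaque Khronos handle, "disp" is our struct. */
static void
example_entrypoint(EGLDisplay dpy, struct _EGLDisplay *disp)
{
   (void) dpy;
   (void) disp;
}
```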
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Acked-by: Emil Velikov <emil.velikov@collabora.com>
Acked-by: Eric Anholt <eric@anholt.net>
Until the kernel side matures and the full driver is upstreamed, to
avoid end-user surprises, Panfrost should only be built for the
adventurous.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Eric Anholt <eric@anholt.net>
I had a single function for "does this do float input unpacking" with two
major flaws: It was missing the most common thing to try to copy propagate
an f32 input unpack to (the VFPACK to an FP16 render target) along with
several other ALU ops, and also would try to propagate an f32 unpack into
a VFMUL which only does f16 unpacks.
instructions in affected programs: 659232 -> 655895 (-0.51%)
uniforms in affected programs: 132613 -> 135336 (2.05%)
and a couple of programs increase their thread counts.
The uniforms hit appears to be a pattern in generated code of doing (-a >=
a) comparisons, which when a is abs(b) can result in the abs instruction
being copy propagated once but not fully DCEed.
If you only bound rt 1+, we'd still emit a write to the rt0 that isn't
present (noticed while debugging an
ext_framebuffer_multisample-alpha-to-coverage-no-draw-buffer-zero
regression in another change).
Iris would like to use compact arrays for tesslevels and clip/cull
distances. radeonsi will likely want to switch to these at some point,
since it'll be necessary for GL_ARB_gl_spirv support, but it's not ready
for them just yet.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Today, st always sets LowerCombinedClipCullDistance, causing the GLSL IR
lowering to run, giving us vec4[2] arrays. I would like to disable this
and instead run the NIR lowering so that we get compact float[] arrays
instead.
Calling the new pass is a noop if the GLSL IR pass has already run, so
it's safe to call the pass unconditionally.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Compact arrays are used for special variables like clip and cull
distances, or tessellation levels. Drivers using compact arrays
assume that these values will always be actual arrays. We don't
want to turn a float[1] gl_CullDistance into a single float; that
would confuse drivers.
Today, i965 uses compact arrays, and Gallium drivers use
nir_lower_io_arrays_to_elements, so we haven't had any overlap
that would demonstrate the issue. Iris will use both.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
A couple of places in st/nir assume that cull distances have been lowered
away, so it will need to call this lowering pass for drivers which opt
out of the GLSL IR lowering. The Intel backend also calls this pass,
for i965 and anv. We need to only do it once.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
We have a GLSL IR pass to convert clip/cull distance float[] arrays
into vec4[2] arrays. In ff281e6204, we attempted to skip this pass
if the GLSL IR lowering had already run. But, that code was not quite
right, as we forgot to strip away the per-vertex IO array layer for
geometry and tessellation shader varyings.
If the GLSL IR pass has run, the variables will not be marked as
"compact". So we can simply check that and bail.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
nir_lower_clip_cull_distance_arrays() marks the combined clip/cull
distance array as compact. However, when translating in from GLSL
or SPIR-V, we were not marking the original float[] arrays as compact.
We should do so. That way, we can detect these corner cases properly.
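A sketch of that marking, assuming the usual varying slot enums (the helper
itself is hypothetical):

```c
#include "nir.h"

/* Mark the original float[] gl_ClipDistance/gl_CullDistance arrays as
 * compact when translating from GLSL or SPIR-V, so later passes can
 * distinguish them from the lowered vec4[2] form. */
static void
mark_clip_cull_compact(nir_variable *var)
{
   if (var->data.location == VARYING_SLOT_CLIP_DIST0 ||
       var->data.location == VARYING_SLOT_CULL_DIST0)
      var->data.compact = true;
}
```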
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
radeonsi uses a system value for gl_FragCoord rather than an input var.
These get translated into load_frag_coord NIR intrinsics, which lose the
pixel_center_integer and origin_upper_left decorations. To cope with
this, Tim added a shader_info field for pixel_center_integer, and made
glsl_to_nir set it accordingly.
prog_to_nir also needs to handle these fragcoord conventions. Instead
of duplicating the logic to set the info field, just move it to
nir_lower_system_values so it'll happen regardless of who makes the NIR.
(For what it's worth, we don't need an info flag for origin_upper_left,
because radeonsi lowers origin conventions in nir_lower_wpos_ytransform
before nir_lower_system_values destroys the variable and qualifiers.)
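A sketch of what that move amounts to, assuming the nir_variable and
shader_info field names (the helper is hypothetical):

```c
#include "nir.h"

/* Before nir_lower_system_values replaces gl_FragCoord with a
 * load_frag_coord intrinsic, record the coordinate convention in
 * shader_info so sysval-based drivers still see it. */
static void
record_frag_coord_convention(nir_shader *shader, const nir_variable *var)
{
   if (shader->info.stage == MESA_SHADER_FRAGMENT &&
       var->data.location == VARYING_SLOT_POS)
      shader->info.fs.pixel_center_integer = var->data.pixel_center_integer;
}
```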
Reviewed-by: Eric Anholt <eric@anholt.net>
Some drivers, such as radeonsi, use a system value for gl_FragCoord
rather than an input variable. In this case, our Mesa IR will have
a PROGRAM_SYSTEM_VALUE register, which we need to translate.
This makes prog_to_nir work for Gallium drivers which expose the
PIPE_CAP_TGSI_FS_POSITION_IS_SYSVAL capability bit.
Reviewed-by: Eric Anholt <eric@anholt.net>
We implement the basic VS and FS, as well as the VS that does layered
clears by writing gl_Layer from the vertex shader. Drivers which need
a geometry shader for writing layer continue falling back to TGSI, as
I didn't need this and so didn't bother implementing it. (We certainly
could, however, if people want to add it in the future.)
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Tested-by: Rob Clark <robdclark@gmail.com>
Tested-by: Eric Anholt <eric@anholt.net>
This provides a native NIR version of the DrawPixels/Bitmap passthrough
vertex shader.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Tested-by: Rob Clark <robdclark@gmail.com>
Tested-by: Eric Anholt <eric@anholt.net>
The state tracker generates several built-in shaders in order to
perform scissored clears, upload/download PBOs, and so on. These
are currently constructed using TGSI, using ureg and u_simple_shader.
I want to have NIR versions of these shaders, for my Gallium driver
that has a NIR backend but no TGSI support. To that end, we'll want
a few helpers to help construct simple shaders.
This patch adds two new helpers:
- st_nir_finish_builtin_shader() takes a manually constructed NIR
shader, applies lowering passes (like st_link_nir would do for GLSL),
and constructs the pipe_shader_state.
- st_nir_make_passthrough_shader() makes a simple passthrough shader,
which copies inputs to outputs. This is similar to u_simple_shaders.
v2: Set info->fs.untyped_color_outputs for vc4/v3d (thanks Eric!).
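A rough usage sketch of the first helper; treat the signature below as an
assumption, the real one lives in st_nir.h in the actual patch:

```c
#include "compiler/nir/nir_builder.h"

struct st_context;

/* Assumed signature; see st_nir.h in the actual patch for the real one. */
void *st_nir_finish_builtin_shader(struct st_context *st, nir_shader *nir,
                                   const char *name);

static void *
make_trivial_vs(struct st_context *st,
                const nir_shader_compiler_options *options)
{
   nir_builder b;
   nir_builder_init_simple_shader(&b, NULL, MESA_SHADER_VERTEX, options);
   /* ... emit the shader body with nir_* builder calls ... */
   return st_nir_finish_builtin_shader(st, b.shader, "trivial VS");
}
```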
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Tested-by: Rob Clark <robdclark@gmail.com>
Tested-by: Eric Anholt <eric@anholt.net>
Ken's rework of mesa/st builtins to NIR means that we'll have more NIR
shaders with color output types that are mismatched with the render target
types. Since this is behavior that GLSL doesn't require, add it as a
shader_info option so the driver can know that it needs to ignore the FS
output's base type in favor of the actual render target's. This prevents
needing additional variants in several mesa/st paths (clear, pbo upload,
pbo download), given that the driver already has to handle the variants
for any TGSI being passed to it (from u_blitter, for example).
Reviewed-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
gives me a performance boost of 0.2% in pixmark_piano on my gk106, gm204 and
gp107.
reduces the amount of generated convert instructions by roughly 30% in
shader-db.
v2: only for 32 bit operations
move some common code out of the switch
handle OP_SAT with modifiers
v3: only for registers and const memory
rework if clauses
merge isCvt into this patch
v4: merge isCvt into its use
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
When Mesa is compiled for gallium-xlib using e.g.
./configure --enable-glx=gallium-xlib --disable-dri --disable-gbm
--disable-egl
and is used by an X server (usually remotely via SSH X11 forwarding)
that does not support MIT-SHM such as XMing or MobaXterm, OpenGL
clients report error messages such as
Xlib: extension "MIT-SHM" missing on display "localhost:11.0".
ad infinitum.
The reason is that the code in src/gallium/winsys/sw/xlib uses
MIT-SHM without checking for its existence, unlike the code
in src/glx/drisw_glx.c and src/mesa/drivers/x11/xm_api.c.
I copied the same check using XQueryExtension, and tested with
glxgears on MobaXterm.
This issue was reported before here:
https://lists.freedesktop.org/archives/mesa-users/2016-July/001183.html
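The check is along these lines (the helper name is made up; XQueryExtension
is the standard Xlib call):

```c
#include <X11/Xlib.h>

/* Only use MIT-SHM if the X server actually advertises it, mirroring
 * the checks in drisw_glx.c and xm_api.c. */
static Bool
display_has_mit_shm(Display *dpy)
{
   int major_op, first_event, first_error;
   return XQueryExtension(dpy, "MIT-SHM",
                          &major_op, &first_event, &first_error);
}
```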
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Cc: <mesa-stable@lists.freedesktop.org>
sizeof counts the terminating null character as well, so that also
contributed to the ID computed for the X11 atom. But the convention is
for only the non-null characters to contribute to the atom ID.
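Roughly, the difference looks like this (sketched with xcb_intern_atom; the
property name is the one from the Fixes reference below):

```c
#include <string.h>
#include <xcb/xcb.h>

static xcb_intern_atom_cookie_t
intern_variable_refresh(xcb_connection_t *conn)
{
   static const char name[] = "_VARIABLE_REFRESH";

   /* sizeof(name) is 18 and includes the '\0'; the atom name should be
    * interned with strlen(name) == 17, the non-null characters only. */
   return xcb_intern_atom(conn, 0, (uint16_t) strlen(name), name);
}
```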
Fixes: 2e12fe425f "loader/dri3: Enable adaptive_sync via
_VARIABLE_REFRESH property"
Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
When a texture is still bound as an image and the context it was bound in
is destroyed but not the texture, then the texture will still hold the
resource and will not be freed when it is finally destroyed. Hence, release
these references when the context is destroyed.
This leak was triggered by virglrenderer:
https://gitlab.freedesktop.org/virgl/virglrenderer/issues/86
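A sketch of the cleanup, with a hypothetical container struct standing in
for the context's image-unit state:

```c
#include "util/u_inlines.h"

struct image_unit_list {                    /* hypothetical container */
   struct pipe_resource *resources[32];
};

/* On context destruction, drop the resource references still held by
 * bound shader images so the texture's storage can actually be freed. */
static void
release_image_references(struct image_unit_list *images)
{
   for (unsigned i = 0; i < 32; i++)
      pipe_resource_reference(&images->resources[i], NULL);
}
```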
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
ureg_get_tokens clears the reference to the tokens, and create_compute_state makes
a copy, hence the tokens must be explicitly released.
Fixes: Direct leak of 256 byte(s) in 1 object(s) allocated from:
#0 0x7ff729cf3c60 in realloc (/usr/lib64/gcc/x86_64-pc-linux-gnu/7.3.0/libasan.so+0xdbc60)
#1 0x7ff721b1240c in tokens_expand ../../samba/mesa/src/gallium/auxiliary/tgsi/tgsi_ureg.c:234
#2 0x7ff721b1c9c0 in get_tokens ../../samba/mesa/src/gallium/auxiliary/tgsi/tgsi_ureg.c:257
#3 0x7ff721b1c9c0 in copy_instructions ../../samba/mesa/src/gallium/auxiliary/tgsi/tgsi_ureg.c:2040
#4 0x7ff721b1c9c0 in ureg_finalize ../../samba/mesa/src/gallium/auxiliary/tgsi/tgsi_ureg.c:2090
#5 0x7ff721b1e919 in ureg_get_tokens ../../samba/mesa/src/gallium/auxiliary/tgsi/tgsi_ureg.c:2167
#6 0x7ff721f8b35a in si_create_dma_compute_shader ../../samba/mesa/src/gallium/drivers/radeonsi/si_shaderlib_tgsi.c:219
#7 0x7ff722043ed9 in si_compute_do_clear_or_copy ../../samba/mesa/src/gallium/drivers/radeonsi/si_compute_blit.c:156
#8 0x7ff7220448d3 in si_clear_buffer ../../samba/mesa/src/gallium/drivers/radeonsi/si_compute_blit.c:247
#9 0x7ff7220350e8 in vi_dcc_clear_level ../../samba/mesa/src/gallium/drivers/radeonsi/si_clear.c:274
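The ownership pattern that avoids the leak reported above, roughly sketched
(the real code lives in si_shaderlib_tgsi.c):

```c
#include "pipe/p_context.h"
#include "pipe/p_state.h"
#include "tgsi/tgsi_ureg.h"

static void *
create_cs_and_free_tokens(struct pipe_context *ctx, struct ureg_program *ureg)
{
   struct pipe_compute_state state = {0};

   state.ir_type = PIPE_SHADER_IR_TGSI;
   state.prog = ureg_get_tokens(ureg, NULL);   /* caller now owns the tokens */

   void *cs = ctx->create_compute_state(ctx, &state);   /* makes its own copy */

   ureg_free_tokens(state.prog);               /* fixes the leak */
   return cs;
}
```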
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
It is always false on Gen8+. Also, move the variable definition near
its use.
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
All things being equal, it is better to keep the original order. Since
the new block is empty, push the phis in order to the tail.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Daniel Schürmann <daniel.schuermann@campus.tu-berlin.de>
This patch implements the free Midgard shader toolchain: the assembler,
the disassembler, and the NIR-based compiler. The assembler is a
standalone inaccessible Python script for reference purposes. The
disassembler and the compiler are implemented in C, accessible via the
standalone `midgard_compiler` binary. Later patches will use these
interfaces from the driver for online compilation.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Acked-by: Rob Clark <robdclark@gmail.com>
Acked-by: Eric Anholt <eric@anholt.net>
Acked-by: Emil Velikov <emil.velikov@collabora.com>
This patch adds an initial stub for the Gallium driver, containing
simple screen functions and the majority of the driver headers but no
actual functionality. It further adds the winsys glue for linking in
this stub driver via kmsro on Rockchip/Amlogic boards.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Acked-by: Rob Clark <robdclark@gmail.com>
Acked-by: Eric Anholt <eric@anholt.net>
Acked-by: Emil Velikov <emil.velikov@collabora.com>
Commit 8b626a22b2 introduced a new
pipe_image_view::shader_access field, indicating the access mode
specified in the shader. st/mesa's built-in PBO download shader
creates a write-only image buffer, so we should flag it as such.
Nobody uses this field yet (Iris will), so we don't need to backport
this fix to stable branches.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Transform feedback did not set the correct SO_DECL.ComponentMask for
varyings packed in VARYING_SLOT_PSIZ:
gl_Layer - VARYING_SLOT_LAYER in VARYING_SLOT_PSIZ.y
gl_ViewportIndex - VARYING_SLOT_VIEWPORT in VARYING_SLOT_PSIZ.z
gl_PointSize - VARYING_SLOT_PSIZ in VARYING_SLOT_PSIZ.w
Fixes: 36ee2fd61c "anv: Implement the basic form of VK_EXT_transform_feedback"
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
From the Vulkan 1.0.98 spec for vkCmdClearAttachments:
"If any attachment to be cleared in the current subpass is VK_ATTACHMENT_UNUSED,
then the clear has no effect on that attachment."
"If the aspectMask member of any element of pAttachments contains
VK_IMAGE_ASPECT_COLOR_BIT, then the colorAttachment member of that
element must either refer to a color attachment which is VK_ATTACHMENT_UNUSED,
or must be a valid color attachment."
"If the aspectMask member of any element of pAttachments contains
VK_IMAGE_ASPECT_DEPTH_BIT, then the current subpass' depth/stencil attachment
must either be VK_ATTACHMENT_UNUSED, or must have a depth component"
"If the aspectMask member of any element of pAttachments contains
VK_IMAGE_ASPECT_STENCIL_BIT, then the current subpass' depth/stencil attachment
must either be VK_ATTACHMENT_UNUSED, or must have a stencil component"
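Put together, the handling amounts to something like this sketch (the
subpass lookups are hypothetical helpers):

```c
#include <vulkan/vulkan.h>

/* Hypothetical lookups into the current subpass. */
uint32_t current_color_att(uint32_t color_attachment_index);
uint32_t current_ds_att(void);

static void
clear_attachments(uint32_t count, const VkClearAttachment *atts)
{
   for (uint32_t i = 0; i < count; i++) {
      if (atts[i].aspectMask & VK_IMAGE_ASPECT_COLOR_BIT) {
         if (current_color_att(atts[i].colorAttachment) == VK_ATTACHMENT_UNUSED)
            continue;   /* "the clear has no effect on that attachment" */
      } else {
         if (current_ds_att() == VK_ATTACHMENT_UNUSED)
            continue;
      }
      /* ... emit the actual clear ... */
   }
}
```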
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
From the Vulkan 1.0.98 spec for vkCmdClearAttachments:
"If the aspectMask member of any element of pAttachments contains
VK_IMAGE_ASPECT_COLOR_BIT, then the colorAttachment member of that
element must either refer to a color attachment which is VK_ATTACHMENT_UNUSED,
or must be a valid color attachment."
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
The Vulkan spec says:
"If pResolveAttachments is not NULL, for each resolve attachment
that does not have the value VK_ATTACHMENT_UNUSED, the
corresponding color attachment must not have the value
VK_ATTACHMENT_UNUSED."
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Outgoing dependencies (i.e. external) should happen after the subpass.
This doesn't change anything for subpass resolves as we already
make sure that attachments are shader readable.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The different masks should be accumulated, for example when two
subpasses declare an outgoing dependency (i.e. dst ==
VK_SUBPASS_EXTERNAL).
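In other words, something along these lines (the destination struct is a
hypothetical placeholder for the accumulated external dependency):

```c
#include <vulkan/vulkan.h>

struct external_dep {                 /* hypothetical accumulator */
   VkPipelineStageFlags src_stage_mask;
   VkAccessFlags        src_access_mask;
   VkAccessFlags        dst_access_mask;
};

/* Each dependency with dstSubpass == VK_SUBPASS_EXTERNAL ORs its masks
 * into the accumulated state instead of overwriting it. */
static void
accumulate_external_dep(struct external_dep *out, const VkSubpassDependency *dep)
{
   out->src_stage_mask  |= dep->srcStageMask;
   out->src_access_mask |= dep->srcAccessMask;
   out->dst_access_mask |= dep->dstAccessMask;
}
```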
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
radv_render_pass_compile() is common to vkCreateRenderPass()
and vkCreateRenderPass2().
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
That shouldn't change anything as we check if the last
subpass id is the final subpass.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
This reworks how the depth stencil attachment is used for
simplicity. This also introduces radv_render_pass_compile()
helper that will be used for further optimizations.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
To unify some code in BeginRenderPass() and NextSubpass().
Based on Intel ANV driver.
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Previously, we only applied the fix to shaders with a dispatch mode of
SIMD8 but the code it relies on for SIMD16 mode only applies to SIMD16
instructions. If you have a SIMD8 instruction in a SIMD16 shader,
neither would trigger and the restriction could still be hit.
Fixes: 232ed89802 "i965/fs: Register allocator shoudn't use grf127..."
Reviewed-by: Jose Maria Casanova Crespo <jmcasanova@igalia.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
By just assigning dst.type to src[i].type, we ensure that the offset at
the end of the loop actually offsets it by the right number of
registers. Otherwise, we'll get into a case where we copy with a Q type
and then offset with a D type and things get out of sync.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Previously, we tried to combine all cases where the instruction being
CSE'd writes to more than one MOV worth of registers into one case with
a bit of special casing for LOAD_PAYLOAD. This commit splits things so
that LOAD_PAYLOAD is entirely its own case. This makes tweaking the
LOAD_PAYLOAD case simpler in the next commit.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Missing check for shader stage in the fs_visitor would corrupt the
cs_prog_data.push information and trigger crashes / corruption later
when uploading the CS state.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
We already defer handling the actual execution modes until after we've
created the shader. This just moves it a tiny bit further so we
actually have constants and types and can handle OpExecutionModeId.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Instead of handling it as part of the handling of constant instructions,
just stash the vtn_value when we see the decoration and handle it
explicitly later. This will let us re-order handling of constant
instructions without breaking the Vulkan SPIR-V requirement that
decorating a specialization constant as the WorkgroupSize built-in
overrides the workgroup size set as an execution mode.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
The uint version is less typing, supports different bit sizes, and is
probably a bit more safe because we're actually verifying that the
SPIR-V value is an integer scalar constant.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Required for the following test to pass when emulating GL on GLES:
bin/compressedteximage GL_COMPRESSED_SRGB_ALPHA_S3TC_DXT1_EXT
Reviewed-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Without this we do not end up with a deterministic NIR because
temporary register variables are added in random order. NIR must
be deterministic because we use it to produce a SHA for the
radeonsi backend's disk cache.
This fixes the shader cache for a bunch of shaders.
Another positive is that this results in a large reduction in the
size of the NIR that the state tracker stores to the disk cache.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
For meson, all C++ code is already compiled as C++11, so this flag is
unnecessary. It's also the wrong way to do this; if we really needed
it, the correct way would be to set:
```meson
executable(
...
override_options : ['cpp_std=c++11'],
)
```
Which ensures not only that the correct syntax for the current
compiler is used, but also that meson doesn't create arguments like
`-std=c++14 ... -std=c++11`
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
The `:` in options should always have one space before and after, as in
`foo : bar`, and lists do not get spaces inside the brackets: `[foo]`,
not `[ foo ]`.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Which is and has always been the default. This is largely an artifact
of how the building of these tools was controlled when the meson build
was originally created.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
We need to initialize all fields in rs->prim explicitly while
creating a new rastpos stage.
Fixes: bac8534267 ("st/mesa: allow glDrawElements to work with GL_SELECT
feedback")
v2: Initializing all fields in rs->prim as per Ilia.
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
Android.mk and autotools disagree about where generated files should
go, which wasn't a problem until we wanted to build a dist
tarball. This corrects the problem by changing the output and include
paths to be the same on android and autotools (meson already has the
correct include path).
Fixes: 7d7b30835c
("automake: Fix path to generated source")
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
This was a copy-and-paste fail that oddly showed up in the CTS's
reinterprets of r32f, rgba8, and srgba8 to rgba8i, but not r32ui and r32i
to rgba8i or reinterprets to other signed int formats.
Fixes: 6281f26f06 ("v3d: Add support for shader_image_load_store.")
One of the CTS cases tries to invalidate just stencil of packed
depth/stencil, and we incorrectly lost the depth contents.
Fixes dEQP-GLES3.functional.fbo.invalidate.whole.unbind_read_stencil
Fixes: 0c42b5f3cb ("mesa: wire up InvalidateFramebuffer")
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
As of Nov/30/2018 the extension is also valid for OpenGL >= 1.2, so
enable it accordingly and also add the required view class entry.
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This fixes serious stuttering in Shadow Of The Tomb Raider.
Fixes: 50fd253bd6 ("radv/winsys: Add priority handling during submit.")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Reported by Coverity: in the case of an unsupported modifier request, the
code does not jump to the “fail” label to destroy the acquired resource.
CID: 1435704
Signed-off-by: Ernestas Kulik <ernestas.kulik@gmail.com>
Fixes: 45bb8f2957 ("broadcom: Add V3D 3.3 gallium driver called "vc5", for BCM7268.")
Reported by Coverity: in the case where there exist hardware and
non-hardware queries, the code does not jump to err_free_query and leaks
the query.
CID: 1430194
Signed-off-by: Ernestas Kulik <ernestas.kulik@gmail.com>
Fixes: 9ea90ffb98 ("broadcom/vc4: Add support for HW perfmon")
I can't imagine the new HW block being paired with a v6 CPU, so don't
bother with the CPU detection that vc4 had to do.
Improves 1024x1024 TexImage on my 7278 by 47.3229% +/- 0.679632%
An earlier commit addressed 7 of the 8 instances available.
v2: Rebase patch back to master (by anholt)
Cc: Carsten Haitzler (Rasterman) <raster@rasterman.com>
Cc: Eric Anholt <eric@anholt.net>
Fixes: 300d3ae8b1 ("vc4: Declare the cpu pointers as being modified in NEON asm.")
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>