Compare commits

..

193 Commits

Author SHA1 Message Date
Eric Engestrom
3e361635b8 VERSION: bump for 24.0.1 2024-02-14 20:55:00 +00:00
Eric Engestrom
68a46fd846 docs: add release notes for 24.0.1 2024-02-14 20:54:39 +00:00
Friedrich Vock
9b4abb2ed0 radv,driconf: Enable active AS leaf workaround for Jedi Survivor
Another game that can't get AS updates right.

Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27590>
(cherry picked from commit afab80bdb6)
2024-02-14 17:26:50 +00:00
Lionel Landwerlin
e4c2cbeb33 anv: fix buffer marker cache flush issues on MTL
For some as-yet-unknown reason, the CS L3 coherency setting is different
on MTL than on DG2.

Fixes issues in tests from the following group:

  dEQP-VK.api.buffer_marker.*

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: c8e122a738 ("anv: Implement rudimentary VK_AMD_buffer_marker support")
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27613>
(cherry picked from commit e54638ddf5)
2024-02-14 17:05:00 +00:00
Mike Blumenkrantz
8d18be6357 nir/lower_io: fix handling for compact arrays with indirect derefs
This logic relies on constant indexing for compact arrays, but this is
frequently not the case for compact array builtins (e.g., gl_TessLevelOuter).
The usual strategy of lowering to temps isn't viable in TCS, which means
IO lowering has to be able to handle indirect access to these builtins
without crashing.

cc: mesa-stable

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27534>
(cherry picked from commit 9e2c7314f2)
2024-02-14 17:04:59 +00:00
José Roberto de Souza
c1afe86299 intel: Fix intel_get_mesh_urb_config()
The round-up in 'next_address_8kb = DIV_ROUND_UP(push_constant_kb, 8)'
was not decreasing the amount of URB available for Mesh and Task, which
could cause an over-allocation of URB.

There was also no minimum-entries enforcement for Mesh and Task, which
could cause r.mesh_entries to be set to 0 in cases where tue_size_dw is
much larger (90%+) than mue_size_dw. The same goes for r.task_entries
when Task is enabled.

Also add a few more asserts to help debugging.

This fixes at least dEQP-VK.mesh_shader.ext.properties.mesh_payload_size
on LNL, but it has the potential to fix other Mesh tests as well.

Cc: mesa-stable
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27555>
(cherry picked from commit d0fba810b3)
2024-02-14 17:04:53 +00:00
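
To make the rounding issue concrete, here is a minimal sketch of the arithmetic; the URB and push-constant sizes below are made-up example values, not actual hardware limits:

    #include <stdio.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
        unsigned total_urb_kb = 64;       /* hypothetical example value */
        unsigned push_constant_kb = 10;   /* hypothetical example value */

        /* Push constants are reserved in 8kB chunks, hence the round-up. */
        unsigned next_address_8kb = DIV_ROUND_UP(push_constant_kb, 8);   /* = 2 */

        /* Buggy accounting: subtract the un-rounded size. */
        unsigned left_buggy = total_urb_kb - push_constant_kb;           /* 54kB */
        /* Correct accounting: subtract the rounded-up reservation. */
        unsigned left_fixed = total_urb_kb - next_address_8kb * 8;       /* 48kB */

        printf("buggy: %ukB left, fixed: %ukB left\n", left_buggy, left_fixed);
        return 0;
    }
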
Georg Lehmann
003ba21b5f aco/gfx11+: limit hard clauses to 32 instructions
https://github.com/llvm/llvm-project/pull/81287

Foz-DB Navi31:
Totals from 406 (0.52% of 78112) affected shaders:
Instrs: 585342 -> 585750 (+0.07%)
CodeSize: 3077856 -> 3079456 (+0.05%); split: -0.00%, +0.05%
Latency: 3263165 -> 3263326 (+0.00%); split: -0.00%, +0.01%
InvThroughput: 664092 -> 664114 (+0.00%); split: -0.00%, +0.00%
VClause: 11143 -> 11537 (+3.54%)
SClause: 11878 -> 11884 (+0.05%)
Copies: 39807 -> 39815 (+0.02%)

Cc: mesa-stable
Reviewed-by: Rhys Perry <pendingchaos02@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27569>
(cherry picked from commit 6121497228)
2024-02-14 17:04:51 +00:00
Karol Herbst
73c637fcfe rusticl/mem: support GL_TEXTURE_BUFFER
Fixes: 2645003bdc ("rusticl: Create CL mem objects from GL")
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27385>
(cherry picked from commit 3f7b344930)
2024-02-14 17:04:48 +00:00
Karol Herbst
277c905b41 rusticl/mem: properly handle buffers
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10505
Fixes: 2645003bdc ("rusticl: Create CL mem objects from GL")
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27385>
(cherry picked from commit 117291332c)
2024-02-14 17:04:47 +00:00
Karol Herbst
c206849335 nir/lower_cl_images: record image_buffers and msaa_images
Cc: mesa-stable
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Jesse Natalie <jenatali@microsoft.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27385>
(cherry picked from commit 727cddd338)
2024-02-14 17:04:46 +00:00
Lionel Landwerlin
a2a141dffa anv: fix incorrect flushing on shader query copy
When doing query result copies in 3D mode, we're flushing the render
target cache, but the shader writes go through the dataport.

Fixes flakes/fails in piglit with shader query copies forced with Zink:

  $ query_copy_with_shader_threshold=0 ./bin/arb_query_buffer_object-coherency -auto -fbo

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: b3b12c2c27 ("anv: enable CmdCopyQueryPoolResults to use shader for copies")
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26797>
(cherry picked from commit c53a4711cb)
2024-02-14 17:04:41 +00:00
Lionel Landwerlin
48608401a3 intel/fs: rerun divergence prior to lowering non-uniform interpolate at sample
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 74a40cc4b6 ("intel/fs: move lower of non-uniform at_sample barycentric to NIR")
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26797>
(cherry picked from commit 2437556d83)
2024-02-14 17:04:39 +00:00
Lepton Wu
9b2d95ab13 llvmpipe: Set "+64bit" for X86_64
Without this, on some "buggy" qemu CPU setups, LLVM could crash
if it detects the wrong CPU type.

Fixes: f92cadccc6 ("llvmpipe: Always using util_get_cpu_caps to get cpu caps for llvm on x86")

Signed-off-by: Lepton Wu <lepton@chromium.org>
Reviewed-by: Yonggang Luo <luoyonggang@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27539>
(cherry picked from commit 04d26ceb0a)
2024-02-14 17:04:37 +00:00
David Rosca
73b955965b radeonsi/vcn: Don't reinitialize encode session on bitrate/fps change
When bitrate or fps change is detected, only update rate control
parameters instead of completely reinitializing encode session.

This fixes an issue where, if the application changed the bitrate or fps
often, the output bitrate would significantly overshoot the target bitrate
in some cases, and be extremely low in others.

Cc: mesa-stable

Reviewed-by: Ruijing Dong <ruijing.dong@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27548>
(cherry picked from commit 8d44a11508)
2024-02-14 17:04:18 +00:00
Lionel Landwerlin
23442c825b anv: don't unmap AUX ranges at BO delete
It is possible to free the memory backing images before the images are
destroyed:

   VkFreeMemory:

   "Memory can be freed whilst still bound to resources, but those
    resources must not be used afterwards."

The spec leaves us the option to keep a reference on the associated
memory and free it only when all the bound resources have been
destroyed. Here we choose to free memory immediately.

One particular test in the CTS
(dEQP-VK.synchronization.internally_synchronized_objects.pipeline_cache_graphics)
does the following:

   imgA = vkCreateImage()
   imgB = vkCreateImage()
   memA = vkAllocateMemory()
   vkBindImageMemory(imgA, memA) # Aux mapping with ref count = 1
   vkFreeMemory(memA)            # Aux mapping removed, ref count = 0
   memB = vkAllocateMemory()     # Same address as memA
   vkBindImageMemory(imgB, memB)
   vkDestroyImage(imgA)	         # Removes the mapping of imgB-memB

   vkQueueSubmit()               # hang with pagefault in AUX-TT

The solution implemented in this change is to not do anything AUX-TT
related in vkFreeMemory(). This solution has some consequences: a
virtual memory address range that has been freed and reallocated cannot
be rebound in the AUX-TT until all the associated resources have released
their AUX-TT mapping (bringing the AUX-TT refcount of the range back
to 0). This should still be better than keeping the memory allocated
through refcounting of the anv_bo.

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 7b87e1afbc ("anv: track & unbind image aux-tt binding")
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10528
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27566>
(cherry picked from commit e0b4dfbbda)
2024-02-14 17:04:18 +00:00
Timothy Arceri
d1b79ef57b Revert "ci: Enable GALLIUM_DUMP_CPU=true only in the clang job"
Rob worded it well in 9e8450b65c.

"We don't want util_cpu to vomit cpu caps all over the test output."

This reverts commit c6979d97e4.

Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27564>
(cherry picked from commit 62fa5c8d0f)
2024-02-14 17:04:18 +00:00
M Henning
e7c7b4e2f1 nvk: Don't clobber vb0 after repeated blits
If a program does two blits in a row, we internally do a sequence of
operations that involves binding vb0.

Previously, the vb0 state after each operation would look something like:

| operation                    | cmd->state.gfx.vb0 | hardware  | save->vb0 |
| ---------------------------- | ------------------ | --------- | --------- |
|                              | user               | user      |           |
| nvk_meta_begin()             | user               | user      | user      |
| BindVertexBuffers(internal0) | internal0          | internal0 | user      |
| nvk_meta_end()               | internal0          | user      |           |
| nvk_meta_begin()             | internal0          | user      | internal0 |
| BindVertexBuffers(internal1) | internal1          | internal1 | internal0 |
| nvk_meta_end()               | internal1          | internal0 |           |

That is, CmdBindVertexBuffers() would update cmd->state.gfx.vb0, but
nvk_meta_end() would not. This meant that the last operation would bind a
driver-internal buffer instead of the original value that the user set.

This change fixes the issue by tracking cmd->state.gfx.vb0 in
nvk_cmd_bind_vertex_buffer(), which both CmdBindVertexBuffers() and
nvk_meta_end() call into.

After this commit, the state looks like:

| operation                    | cmd->state.gfx.vb0 | hardware  | save->vb0 |
| ---------------------------- | ------------------ | --------- | --------- |
|                              | user               | user      |           |
| nvk_meta_begin()             | user               | user      | user      |
| BindVertexBuffers(internal0) | internal0          | internal0 | user      |
| nvk_meta_end()               | user               | user      |           |
| nvk_meta_begin()             | user               | user      | user      |
| BindVertexBuffers(internal1) | internal1          | internal1 | user      |
| nvk_meta_end()               | user               | user      |           |

To test this commit, build gtk4 commit 87b66de1 and run:

    GSK_RENDERER=vulkan gtk4-demo --run=image_scaling

then select trilinear filtering in the dropdown and check for rendering
artifacts.

Fixes: e1c66501 ("nvk: Use vk_meta for CmdClearAttachments")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27559>
(cherry picked from commit d98ff2cc4a)
2024-02-14 17:01:50 +00:00
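
A hedged sketch of the pattern the fix describes: both the user-facing bind path and the meta-restore path funnel through one helper that updates the CPU-side shadow of vb0 together with the hardware binding. The struct layout below is a simplified stand-in, not NVK's real definitions:

    #include <stdint.h>

    struct vb_binding { uint64_t addr, size; };
    struct cmd_buffer { struct { struct { struct vb_binding vb0; } gfx; } state; };

    static void cmd_bind_vertex_buffer0(struct cmd_buffer *cmd, struct vb_binding vb)
    {
        cmd->state.gfx.vb0 = vb;  /* keep the shadow state in sync...            */
        /* ...and emit the hardware method that actually binds vb0 here.         */
    }

Since CmdBindVertexBuffers() and the meta-restore path both call into the same helper, neither can leave the tracked state stale.
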
Georg Lehmann
e8c13d5b9d aco: don't remove branches that skip v_writelane_b32
Cc: mesa-stable

Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27537>
(cherry picked from commit cd6d9c5918)
2024-02-14 17:01:49 +00:00
Georg Lehmann
ddb4aff2c2 aco/gfx11+: disable v_pk_fmac_f16_dpp
Public docs are apparently wrong: https://github.com/llvm/llvm-project/pull/79598#issuecomment-1933988048

Cc: mesa-stable

Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27533>
(cherry picked from commit e927c5004f)
2024-02-14 17:01:47 +00:00
Mark Janes
963ad46563 hasvk: add missing linker arguments
vulkan_icd_link_args was added for other vulkan drivers but not hasvk.
Without it, statically linked json-c symbols are wrongly exported.

Ref: 2b1e9b0fd6 ("anv: add linker script to fix android symbols")
Fixes: 78578a6ddb ("vk: move radv's linker symbols scripts for use in all drivers")
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27540>
(cherry picked from commit fb7240bef9)
2024-02-14 16:59:53 +00:00
Mike Blumenkrantz
8667ddc209 mesa: plumb errors through to texture allocation
The spec allows this, and tests like spec@arb_texture_multisample@arb_texture_multisample-dsa-texelfetch
expect it.

cc: mesa-stable

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/25931>
(cherry picked from commit fab5c706fe)
2024-02-14 16:59:44 +00:00
Danylo Piliaiev
46c290d94b tu: Do not print anything on systems without Adreno GPU
Output debug info only when explicitly requested with
TU_DEBUG=startup; otherwise we should be silent.

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10213
Fixes: a669147689 ("tu: Always print startup failure messages")

Signed-off-by: Danylo Piliaiev <dpiliaiev@igalia.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27527>
(cherry picked from commit c8cc7c5c18)
2024-02-14 16:59:43 +00:00
Boris Brezillon
45c761f2c2 pan/va: Add missing valhall_enums dep to valhall_disasm
valhall_disasm compilation fails if valhall_enums.h has
not been generated.

Fixes: 619566dea1 ("pan/va: Generate header containing enums")
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10553
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27524>
(cherry picked from commit 49069a1243)
2024-02-14 16:59:43 +00:00
Boris Brezillon
41b9381046 panfrost: Pad compute jobs with zeros on v4
Apparently, Midgard GPUs don't like it when the last 2 words of
compute/vertex jobs contain garbage. Extend the compute job definition
to include a padding section, thus aligning the job on a 64-byte boundary,
and add the corresponding pan_section_pack() calls wherever we have a
compute job filled.

Fixes: b76420be1f ("panfrost: Split command stream descriptor definitions per-gen")
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10558
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Acked-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Tested-by: Anton Bambura <jenneron@postmarketos.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27515>
(cherry picked from commit 5b1b76e9cd)
2024-02-14 16:59:42 +00:00
Sviatoslav Peleshko
d329b14698 driconf: Apply dual color blending workaround to Dying Light
The game uses a shader with `location=0` and `location=1` outputs where
it wants dual-source blending; it should've used `location=0, index=0`
and `location=0, index=1`.

Cc: mesa-stable
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10413
Signed-off-by: Sviatoslav Peleshko <sviatoslav.peleshko@globallogic.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27509>
(cherry picked from commit 0a1c8779e8)
2024-02-14 16:59:41 +00:00
Kenneth Graunke
25b67841a1 driconf: Advertise GL_EXT_shader_image_load_store on iris for SVP13
SPECviewperf creo-03 needs GL_EXT_shader_image_load_store in order for
its shaders to compile but we don't support a few corner cases that
didn't make it into the ARB variant.  It seems to run fine with an
override, so just do that for now.

Cc: mesa-stable
Acked-by: Tapani Pälli <tapani.palli@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27429>
(cherry picked from commit 24d3c83212)
2024-02-14 16:59:40 +00:00
Rob Clark
7d8c7ccc0d freedreno: Fix MSAA z/s layout in GMEM
A bit surprised that this didn't show up in any piglit or deqp.

Fixes: cf0c7258ee ("freedreno/a5xx: MSAA")
Signed-off-by: Rob Clark <robdclark@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27508>
(cherry picked from commit c3062e3402)
2024-02-14 16:59:37 +00:00
Pavel Ondračka
feacc7e5a3 r300: fix vs output register indexing
Vertex shaders were writing TEXCOORDs before GENERICS; however,
fragment shaders were reading them the opposite way, which caused
problems for shaders that used both TEXCOORD and GENERIC varyings.

Fixes: d4b8e8a481
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10489
Signed-off-by: Pavel Ondračka <pavel.ondracka@gmail.com>
Reviewed-by: Filip Gawin <filip.gawin@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27469>
(cherry picked from commit 0ac6801970)
2024-02-14 16:09:59 +00:00
José Roberto de Souza
7d78a9b36b iris: Fix return of iris_wait_syncobj()
iris_wait_syncobj() succeeds if the ioctl returns 0; otherwise it fails.

Cc: mesa-stable
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27500>
(cherry picked from commit 138303fb9d)
2024-02-14 16:09:59 +00:00
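
A minimal sketch of the return-value convention being fixed; the function shape is a simplified stand-in for iris_wait_syncobj(), using libdrm's drmIoctl() wrapper:

    #include <stdbool.h>
    #include <xf86drm.h>

    static bool wait_syncobj(int drm_fd, struct drm_syncobj_wait *args)
    {
        int ret = drmIoctl(drm_fd, DRM_IOCTL_SYNCOBJ_WAIT, args);
        return ret == 0;   /* success only when the ioctl returned 0 */
    }
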
Connor Abbott
6e96c0df97 ir3/ra: Fix bug with collect source handling
It can be the case that a collect and one of its sources are assigned
to non-overlapping parts of the same merge set, for example:

	ssa_1 = ...
	ssa_2 = ...
	ssa_3 = ...

	ssa_4 = collect ssa_1, ssa_2 (kill), ssa_3
	... = ssa_4 (kill)

	ssa_5 = collect ssa_1, ssa_3

	... = ssa_1 (kill)
	... = ssa_3 (kill)
	... = ssa_5 (kill)

If we merge the first collect first, we get a merge set:

	ssa_1 (offset 0)
	ssa_2 (offset 2)
	ssa_3 (offset 4)
	ssa_4 (offset 0)

Now, we decide to merge ssa_1 and ssa_5:

	ssa_1 (offset 0)
	ssa_2 (offset 2)
	ssa_3 (offset 4)
	ssa_4 (offset 0)
	ssa_5 (offset 0)

ssa_3 cannot become a child of ssa_5 in the interval tree, just like a
source not in the same merge set, so we should not remove it and then
reinsert it assuming that RA will make it a child of ssa_5.

This fixes an RA validation error in Farming Simulator.

Fixes: 0ffcb19 ("ir3: Rewrite register allocation")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27497>
(cherry picked from commit aeed5fd98d)
2024-02-14 16:09:59 +00:00
Corentin Noël
eac978e36d zink: Only call reapply_color_write if EXT_color_write_enable is available
Allows zink to be used with drivers that do not expose this extension.

Backport-to: 23.3 24.0
Reviewed-By: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Signed-off-by: Corentin Noël <corentin.noel@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27516>
(cherry picked from commit 72886cbefa)
2024-02-14 16:09:59 +00:00
Eric Engestrom
8bb8f2ef2c .pick_status.json: Update to 90eae30bcb 2024-02-14 15:55:45 +00:00
David Rosca
c3ba03903f frontends/va: Fix updating AV1 rate control parameters
Follow the same logic as H264.

Fixes: 5edbecb856 ("frontends/va: adding va av1 encoding functions")

Reviewed-by: Ruijing Dong <ruijing.dong@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27481>
(cherry picked from commit fa8e0ba3f7)
2024-02-06 22:09:28 +00:00
David Heidelberg
cb6f5b00e9 meson: upgrade zlib wrap to 1.3.1
`$ meson wrap update zlib`

Cc: mesa-stable
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
Acked-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Signed-off-by: David Heidelberg <david.heidelberg@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27311>
(cherry picked from commit 56f31d1847)
2024-02-06 22:09:27 +00:00
Rhys Perry
223c8cd0d3 aco: fix >8 byte linear vgpr copies
No fossil-db changes.

Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27436>
(cherry picked from commit 174e37afb9)
2024-02-06 22:09:26 +00:00
Sviatoslav Peleshko
71c1740914 anv,driconf: Add sampler coordinate precision workaround for AoE 4
AoE4 samples textures on the edge between texels, which can cause an
unexpected texel to be returned and cause misrendering. This workaround
enables coordinate rounding even in NEAREST mode, which fixes the problem.

Cc: mesa-stable
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/9864
Signed-off-by: Sviatoslav Peleshko <sviatoslav.peleshko@globallogic.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27337>
(cherry picked from commit 0a44f6319e)
2024-02-06 22:09:25 +00:00
Tapani Pälli
7bca150d5a anv: flush tile cache independent of format with HIZ-CCS flush
Cc: mesa-stable
Fixes: ba87656079 ("anv: implement undocumented tile cache flush requirements")
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10420
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10530
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Tested-by: Mark Janes <markjanes@swizzler.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27440>
(cherry picked from commit 5178ad761c)
2024-02-06 22:09:18 +00:00
Christian Duerr
7a27b2afba panfrost: Fix dual-source blending
If dual-source blending is enabled, only 1 output is supported. Multiple
outputs confuse the write-combining pass in this case, leading to
incorrect output and/or an assert failure in emit_fragment_store.

The fix is straightforward: just skip the speculative emission of
multiple outputs when dual-source blending is enabled.

This also adds an extra sanity check in `pan_nir_lower_zs_store` to
ensure only one blend store is present.

Fixes: c65a9be421 ("panfrost: Preprocess shaders at CSO create time")
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/9487
Co-Authored-By: Eric R. Smith <eric.smith@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26474>
(cherry picked from commit 49c1b404e5)
2024-02-06 22:09:17 +00:00
Dave Airlie
acdfcc4243 radv: don't submit 0 length on UVD either.
The kernel checks for UVD messages and gets upset if there aren't any,
so don't submit zero-length submissions on UVD rings either, to avoid that.

Cc: mesa-stable
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27186>
(cherry picked from commit 47c725b53e)
2024-02-06 22:09:16 +00:00
Dave Airlie
c47dbea2a0 radv/uvd: uvd kernel checks for full dpb allocation.
The CTS image allocation sometimes doesn't try to allocate a complete
DPB, but the amdgpu kernel module checks for this, so always make
the DPB max-sized on UVD instances.

Fixes part of video decode on Fiji/Polaris.

Cc: mesa-stable
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27186>
(cherry picked from commit df9bc11589)
2024-02-06 22:09:16 +00:00
Dave Airlie
4c25e80e6c radv: init decoder ip block earlier.
This makes the queue decisions made later on correct.

Cc: mesa-stable
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27186>
(cherry picked from commit bba36df84d)
2024-02-06 22:09:15 +00:00
Dave Airlie
3ee587ed56 radv: fix correct padding on uvd
Fixes: 8a29291dbe ("radv/video: add h264 support for uvd")

Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27186>
(cherry picked from commit 6065671a7f)
2024-02-06 22:09:14 +00:00
Samuel Pitoiset
d24a93dbec radv/sqtt: fix describing queue submits for RGP
The submit_sub_index field is used by RGP to determine the number of
submits. Previously, it was incorrectly reporting the same number of
submits as command buffers.

Fixes: 88cbe32048 ("radv: add support for RGP queue events")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27439>
(cherry picked from commit c6286e39ec)
2024-02-06 22:09:13 +00:00
Samuel Pitoiset
dbf730b14d radv: add a workaround for mipmaps and minLOD on GFX6-8
This is spurious, and it looks like we should be able to use a non-zero
base level every time on GFX6-8, but it doesn't always work.

This fixes the remaining CTS failures on GFX6-8.

Cc: mesa-stable
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26290>
(cherry picked from commit 9698d5f0fd)
2024-02-06 22:09:12 +00:00
Eric Engestrom
b9554aead1 panfrost: fix UB caused by shifting signed int too far
Fixes: 13d7ca1300 ("pan/va: Optimize add with imm to ADD_IMM")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27437>
(cherry picked from commit 6250885640)
2024-02-06 22:09:11 +00:00
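
A small sketch of the class of undefined behaviour being fixed (the actual Valhall code differs): shifting a signed int into or past the sign bit is UB, so masks/immediates should be built on an unsigned type:

    #include <stdint.h>

    uint32_t make_mask_bad(unsigned bits)  { return 1 << bits; }           /* UB once bits >= 31 */
    uint32_t make_mask_good(unsigned bits) { return UINT32_C(1) << bits; } /* defined for 0..31  */
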
Konstantin Seurer
3f5b57c696 radv/sqtt: Handle ray tracing pipelines with no traversal shader
Fixes: 0f87d40 ("radv/rt: Skip compiling a traversal shader")
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27383>
(cherry picked from commit bb14ee53a5)
2024-02-06 22:07:48 +00:00
Blisto
8c5fafef1d driconf: set vk_x11_strict_image_count for Atlas Fallen Vulkan
Prevents crash with vsync turned off on xwayland.

Cc: mesa-stable
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27122>
(cherry picked from commit 3bc6f95e3d)
2024-02-06 22:07:47 +00:00
Timothy Arceri
f45c9a38db glsl: don't tree graft globals
As per this optimisations description:

"Takes assignments to variables that are dereferenced only
once and pastes the RHS expression into where the variables
dereferenced."

However the optimisation is run at compile time before multiple
shaders from the same stage could have been pasted together.
So this optimisation can incorrectly assume a global is only
referenced once since it cannot see the other pieces of the
shader stage until link time.

Here we skip the optimisation if the variable is a global. We
could change it to only run at link time; however, this
optimisation is only run at link time if we are being forced
to use GLSL IR to inline a function that glsl-to-nir cannot
handle, and this will also be removed in a future patchset.

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10482
Fixes: d75a36a9ee ("glsl: remove do_copy_propagation_elements() optimisation pass")

Acked-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27351>
(cherry picked from commit bc0178af57)
2024-02-06 22:06:13 +00:00
Pierre-Eric Pelloux-Prayer
4ce8d2d8b3 egl/drm: flush before calling get_back_bo
Similar to what was done for Wayland in 58f90fd03f:
the glthread unmarshal thread needs to be idle to avoid
concurrent calls to get_back_bo.

Also, the existing code flushed after setting dri2_surf->back
to NULL, so a new back buffer was always allocated by the
glthread flush:

|---------------> dri2_drm_swap_buffers
| get_back_bo (back=0x55eb93c6c488) >       # First get_back_bo call
| get_back_bo (back=0x55eb93c6c488 age: 0)<
|                                           # dri2_surf->back = NULL
|-----> FLUSH
| get_back_bo (back=nil) >                  # Another get_back_bo call
| get_back_bo (back=0x55eb93c6c4c8 age: 3)<
|-----< FLUSH
|---------------< dri2_drm_swap_buffers

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10437
Cc: mesa-stable
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27143>
(cherry picked from commit 6f47e87a60)
2024-02-06 22:06:12 +00:00
Friedrich Vock
9f0048a73f radv/rt: Write inactive node data in ALWAYS_ACTIVE workaround
Fixes: a9831caa ("radv/rt: Add workaround to make leaves always active")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27340>
(cherry picked from commit f66055a6a6)
2024-02-06 22:06:10 +00:00
Dave Airlie
60d67f1820 zink: use sparse residency for buffers.
GL ARB_sparse_buffer allows unbound regions in buffers.
VK sparseBinding insists all regions must be bound before first use.

This means we need to use sparseResidencyBuffer to back GL
sparse buffers to get the same semantics.

Fixes GL and piglit sparse buffer tests on zink/nvk.

Fixes: c90246b682 ("zink: implement sparse buffer creation/mapping")
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27404>
(cherry picked from commit ff50e80574)
2024-02-06 22:06:08 +00:00
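
As an illustration of the semantic difference described above, a sketch of the buffer creation flags involved (sizes and usage are arbitrary examples, not zink's actual code): sparse residency, not just sparse binding, is what allows regions to remain unbound at use time, matching GL_ARB_sparse_buffer.

    #include <vulkan/vulkan.h>

    VkBufferCreateInfo sparse_buffer_info = {
        .sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO,
        .flags = VK_BUFFER_CREATE_SPARSE_BINDING_BIT |
                 VK_BUFFER_CREATE_SPARSE_RESIDENCY_BIT,  /* unbound regions allowed */
        .size  = 64 * 1024 * 1024,
        .usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT,
        .sharingMode = VK_SHARING_MODE_EXCLUSIVE,
    };
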
Eric Engestrom
22bc2a897c vk/util: fix 'beta' check for physical device properties
`--beta` is a string, not a bool (although really it should be, but that's a bigger change).

Fixes: 083793a39d ("vulkan: Allow beta extensions for physical device properties")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27394>
(cherry picked from commit c35247ab20)
2024-02-06 22:05:54 +00:00
Eric Engestrom
8c6de60c54 vk/util: fix 'beta' check for physical device features
`--beta` is a string, not a bool (although really it should be, but that's a bigger change).

Fixes: a7141a6f8a ("vulkan: Allow beta extensions for physical device features")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27394>
(cherry picked from commit 794ec7f0a1)
2024-02-06 22:05:51 +00:00
Eric Engestrom
e79ec6621b .pick_status.json: Update to fa8e0ba3f7 2024-02-06 22:05:41 +00:00
Eric Engestrom
ee25160ed5 docs: add sha256sum for 24.0.0 2024-02-01 00:18:28 +00:00
Eric Engestrom
150a5d8298 docs: add release notes for 24.0.0 2024-02-01 00:18:28 +00:00
Eric Engestrom
03ecd8b0a5 VERSION: bump for 24.0.0 2024-01-31 23:29:42 +00:00
Daniel Schürmann
74a0eb9cfa aco/insert_exec_mask: Fix unconditional demote at top-level control flow.
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27362>
(cherry picked from commit c309d20172)
2024-01-31 22:21:24 +00:00
Antoine Coutant
ec84f5a1e2 clc: retrieve libclang path at runtime.
LLVM_LIB_DIR is a variable used for runtime compilations.
When cross compiling, LLVM_LIB_DIR must be set to the
libclang path on the target. So, this path should not
be retrieved during compilation but at runtime.

dladdr uses an address to search for a loaded library.
If a library is found, it returns information about it.
The path to the libclang library can therefore be
retrieved using one of its functions. This is useful
because we don't know the name of the libclang library
(libclang.so.X or libclang-cpp.so.X)

v2 (Karol): use clang::CompilerInvocation::CreateFromArgs for dladdr
v3 (Karol): follow symlinks to fix errors on debian

Fixes: e22491c832 ("clc: fetch clang resource dir at runtime")
Signed-off-by: Antoine Coutant <antoine.coutant@smile.fr>
Reviewed-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by (v1): Jesse Natalie <jenatali@microsoft.com>

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/25568>
(cherry picked from commit 445aacb421)
2024-01-31 22:21:24 +00:00
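
A minimal sketch of the dladdr() technique described above: given the address of any symbol exported by the library of interest, dladdr() reports which loaded object it came from, so the on-disk path can be recovered without knowing the file name in advance. Using printf here is only an illustration; the real code queries a clang entry point.

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>

    static void print_library_of(const void *sym)
    {
        Dl_info info;
        if (dladdr(sym, &info) && info.dli_fname)
            printf("loaded from: %s\n", info.dli_fname);
    }

    int main(void)
    {
        print_library_of((const void *)&printf);  /* e.g. resolves to libc */
        return 0;
    }
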
Karol Herbst
63dc250b69 clc: force fPIC for every user when using shared LLVM
As we want to start using `dladdr`, this is needed to prevent `dladdr`
from returning information about the wrong file.

The Fixes tag is there because this change is required by the actual fix.

Signed-off-by: Karol Herbst <kherbst@redhat.com>
Fixes: e22491c832 ("clc: fetch clang resource dir at runtime")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/25568>
(cherry picked from commit 8efd11fce9)
2024-01-31 22:21:24 +00:00
Iago Toral Quiroga
a3a927a1cd broadcom/compiler: be more careful with unifa in non-uniform control flow
If the lane from which the hardware writes the unifa address
is disabled, then we may end up with a bogus address and invalid
memory accesses from follow-up ldunifa.

Instead of always disabling unifa loads in non-uniform control
flow, we can try to see if the address is produced from a NIR
register (which is the only case where we do conditional writes
under non-uniform control flow in ntq_store_def), and only
disable it in that case.

When enabling subgroups for graphics pipelines, this fixes a
GMP violation in the simulator with the following test
(which has non-uniform control flow writing unifa with lane 0
disabled, which is the lane from which the unifa takes the
address):
dEQP-VK.subgroups.ballot_broadcast.graphics.subgroupbroadcastfirst_int

Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27211>
(cherry picked from commit 5b269814fc)
2024-01-31 22:21:24 +00:00
Iago Toral Quiroga
f7c73de1c2 broadcom/compiler: fix incorrect flags update for subgroup elect
c->execute is 0 (not the block index) for lanes currently active
under non-uniform control flow.

This also simplifies a bit the instructions we emit for flag
generation, both for uniform and non-uniform control flow.

Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27211>
(cherry picked from commit 7bdc8898b1)
2024-01-31 22:21:24 +00:00
Iago Toral Quiroga
a085877c56 broadcom/compiler: fix incorrect flags setup in non-uniform if path
If the ELSE block is cheap, then we don't emit the branch instruction,
but we still want to generate the flags, since these set the flags
for the THEN block too.

Fixes: e401add741 ("broadcom/compiler: skip jumps in non-uniform if/then when block cost is small")
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27211>
(cherry picked from commit 29d4924e5e)
2024-01-31 22:21:24 +00:00
Hyunjun Ko
438a064a9c anv/video: fix out-of-bounds read
STD_VIDEO_H265_CHROMA_QP_OFFSET_TILE_COLS_LIST_SIZE is 19, so indexing
beyond that reads out of bounds.

Fixes: 8d519eb5 ("anv: add initial video decode support for h265")
Closes: mesa/mesa#10529

Signed-off-by: Hyunjun Ko <zzoon@igalia.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27373>
(cherry picked from commit d0d2cf549b)
2024-01-31 22:21:23 +00:00
Mike Blumenkrantz
627a6d792a zink: fix descriptor buffer unmaps on screen destroy
Descriptor buffer uses mapped buffers. Mapping/unmapping buffers
takes a ctx in the function params, but at this time there is no ctx.
Since the ctx is not actually used for unmapping descriptor buffers,
this can instead use a special buffer unmap function to avoid invalid access.

Fixes: b06f6e00fb ("zink: fix heap-use-after-free on batch_state with sub-allocated pipe_resources")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27344>
(cherry picked from commit 0a97d1ebfa)
2024-01-31 22:21:23 +00:00
Mike Blumenkrantz
098fb7465d zink: always map descriptor buffers as COHERENT
This is already implied since the buffers must be BAR-allocated,
but it ensures the context isn't accessed during unmap.

Fixes: b06f6e00fb ("zink: fix heap-use-after-free on batch_state with sub-allocated pipe_resources")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27344>
(cherry picked from commit c900cca96c)
2024-01-31 22:21:23 +00:00
Gert Wollny
b728809a02 nir/builder: Fix compilation with gcc-13 when tsan is enabled
../src/compiler/nir/nir_builder.h: In function ‘nir_build_deref_follower’:
../src/compiler/nir/nir_builder.h:1607:1: error: control reaches end of non-void function [-Werror=return-type]
 1607 | }

Fixes: 4a4e175738 ("nir: Support deref instructions in lower_var_copies")

Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27345>
(cherry picked from commit 0ab3b3c641)
2024-01-31 22:21:23 +00:00
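
For context, a sketch of the usual shape of this gcc 13 diagnostic and one common way to satisfy it (the actual nir_builder change may differ): with -Werror=return-type the compiler cannot always prove that a fully covered switch returns on every path.

    #include <stdlib.h>

    enum kind { KIND_A, KIND_B };

    static int handle(enum kind k)
    {
        switch (k) {
        case KIND_A: return 1;
        case KIND_B: return 2;
        }
        abort();  /* unreachable, but proves to the compiler that all paths return */
    }
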
Gert Wollny
cb7fe98f3f nir/lower_int64: Fix compilation with gcc-13 and tsan enabled
../src/compiler/nir/nir_lower_int64.c: In function ‘lower_int64_intrinsic’:
../src/compiler/nir/nir_lower_int64.c:1347:1: error: control reaches end of non-void function [-Werror=return-type]
1347 | }

Fixes: bf7a114246 ("nir/lower_int64: Add lowering for some 64-bit subgroup ops")

Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27345>
(cherry picked from commit 80a1b91601)
2024-01-31 22:21:23 +00:00
Gert Wollny
22a21925e4 radv: Fix compilation with gcc-13 and tsan enabled
../src/amd/vulkan/radv_sampler.c: In function ‘radv_tex_wrap’:
../src/amd/vulkan/radv_sampler.c:50:1: error: control reaches end of non-void function [-Werror=return-type]
   50 | }
      | ^
../src/amd/vulkan/radv_sampler.c: In function ‘radv_tex_compare’:
../src/amd/vulkan/radv_sampler.c:76:1: error: control reaches end of non-void function [-Werror=return-type]
   76 | }
      | ^

Fixes: 4de305cb8a ("radv: move sampler related code to radv_sampler.c")

Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27345>
(cherry picked from commit ca47138fb1)
2024-01-31 22:21:23 +00:00
Leo Liu
f135adb82a radeonsi: fix video processing path without VPE enabled
Fixes: 6b441ef6ab (amd, radeonsi: supports post processing entrypoint)
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10495
Cc: mesa-stable

Signed-off-by: Leo Liu <leo.liu@amd.com>
Reviewed-by: Ruijing Dong <ruijing.dong@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27339>
(cherry picked from commit 46f5a226d6)
2024-01-31 22:21:23 +00:00
Haihao Xiang
d2064c52fb anv: Fix typo in transition_color_buffer
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27330>
(cherry picked from commit 29d18f3ca9)
2024-01-31 22:21:23 +00:00
Friedrich Vock
e5ef4678dd util/disk_cache: Use secure_getenv to determine cache directories
Reviewed-by: Eric Engestrom <eric@igalia.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27381>
(cherry picked from commit 1c01fd0286)
2024-01-31 22:21:23 +00:00
Friedrich Vock
9ac7a658c4 radv: Use secure_getenv for RADV_THREAD_TRACE_TRIGGER
Reviewed-by: Eric Engestrom <eric@igalia.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27381>
(cherry picked from commit e8b0e5cac9)
2024-01-31 22:21:23 +00:00
Friedrich Vock
566c2835dc radv: Use secure_getenv in radv_builtin_cache_path
Reviewed-by: Eric Engestrom <eric@igalia.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27381>
(cherry picked from commit c01a07f2e4)
2024-01-31 22:21:23 +00:00
Friedrich Vock
6bcf386f5c mesa/main: Use secure_getenv for shader dumping
Reviewed-by: Eric Engestrom <eric@igalia.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27381>
(cherry picked from commit 72f95a8364)
2024-01-31 22:21:23 +00:00
Friedrich Vock
cad3474793 vtn: Use secure_getenv for shader dumping
Reviewed-by: Eric Engestrom <eric@igalia.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27381>
(cherry picked from commit 321e2cee53)
2024-01-31 22:21:23 +00:00
Friedrich Vock
bbe9e29fd4 aux/trace: Guard triggers behind __normal_user
Reviewed-by: Eric Engestrom <eric@igalia.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27381>
(cherry picked from commit f3b892b74a)
2024-01-31 22:21:23 +00:00
Friedrich Vock
1aab9bc3f0 vulkan: Use secure_getenv for trigger files
Reviewed-by: Eric Engestrom <eric@igalia.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27381>
(cherry picked from commit 7ea96ff75b)
2024-01-31 22:21:23 +00:00
Friedrich Vock
ec4d013e82 util: Provide a secure_getenv fallback for platforms without it
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27381>
(cherry picked from commit 8b209a6200)
2024-01-31 22:21:23 +00:00
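
A sketch of a typical secure_getenv() fallback for platforms that lack it; the exact Mesa implementation may differ, but the idea is to refuse environment lookups when the process looks privileged (real and effective IDs differ):

    #include <stdlib.h>
    #include <unistd.h>

    static char *fallback_secure_getenv(const char *name)
    {
        if (getuid() != geteuid() || getgid() != getegid())
            return NULL;          /* setuid/setgid-like process: play it safe */
        return getenv(name);
    }
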
Eric Engestrom
6924679fff util: check for setgid() as well in __normal_user()
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27378>
(cherry picked from commit 501f78fdba)
2024-01-31 21:22:09 +00:00
Eric Engestrom
780b69ebfc util: simplify logic in __normal_user()
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27378>
(cherry picked from commit afd4e633ee)
2024-01-31 21:22:08 +00:00
Eric Engestrom
26db70410e tree-wide: use __normal_user() everywhere instead of writing the check manually
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27346>
(cherry picked from commit 92c24191d4)
2024-01-31 21:22:04 +00:00
Eric Engestrom
d1b2c4152e util: rename __check_suid() to __normal_user()
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27346>
(cherry picked from commit 3e00558ef0)
2024-01-31 21:22:03 +00:00
Eric Engestrom
ce8c959664 .pick_status.json: Update to 4cd5b2b542 2024-01-31 21:21:49 +00:00
Lionel Landwerlin
e6990f0316 anv: fix transfer barriers flushes with compute queue
Transfer operation are implemented differently on the compute engine
and require a different kind of cache flush.

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: mesa-stable
Reviewed-by: Sagar Ghuge <sagar.ghuge@intel.com>
(cherry picked from commit 3b9466dd51)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27354>
2024-01-31 19:32:15 +00:00
Tapani Pälli
6cced86088 anv: move *bits_for_access_flags to genX_cmd_buffer
This makes it possible to use GFX_VER macros in these functions.

Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
(cherry picked from commit d0a3bac163)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27354>
2024-01-31 19:32:15 +00:00
Gert Wollny
ea8681f985 virgl: Use better reporting for mirror_clamp features
Fixes: 9efe50c83b ("virgl: report MIRROR_CLAMP features better")

Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27106>
(cherry picked from commit b9fea5ea6b)
2024-01-31 19:31:13 +00:00
Samuel Pitoiset
570faac1c1 radv: fix segfault when getting device vm fault info
pFaultInfo can be NULL.

Fixes: 8097becc7f ("radv: add initial VK_EXT_device_fault support")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27292>
(cherry picked from commit c68f96878c)
2024-01-31 19:31:13 +00:00
Gert Wollny
a315353199 r600: lower dround_even also on hardware that supports fp64
Fixes: aed6a39c10 ("glsl: Retire dround lowering.")

Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27329>
(cherry picked from commit 820859a6ab)
2024-01-31 19:31:13 +00:00
Sebastian Wick
4c62d39214 radeonsi: Destroy queues before the aux contexts
Otherwise the queue might access the already-destroyed aux contexts.

Signed-off-by: Sebastian Wick <sebastian.wick@redhat.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27300>
(cherry picked from commit c467a87e06)
2024-01-31 19:31:13 +00:00
Lionel Landwerlin
5f7921620e anv: retain ccs image binding address
Memory can be freed before the images it is bound to. When unmapping the
CCS range in the AUX-TT, we cannot rely on the anv_bo::offset field
because the anv_bo might have been freed.

Just save the mapping address/size and use those values at unmapping
time.

Fixes an assert on CI with:

  dEQP-VK.synchronization.internally_synchronized_objects.pipeline_cache_graphics

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: e519e06f4b ("anv: add missing alignment for AUX-TT mapping")
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27304>
(cherry picked from commit 9d31680e79)
2024-01-29 22:22:31 +00:00
Lionel Landwerlin
8be6eab836 anv: rename aux_tt image field
We'll add more to the sub-struct

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jianxun Zhang <jianxun.zhang@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27304>
(cherry picked from commit eead86ad8e)
2024-01-29 22:22:25 +00:00
Lionel Landwerlin
66d3b00eaa anv: factor out aux-tt binding logic for future reuse
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jianxun Zhang <jianxun.zhang@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27304>
(cherry picked from commit fdc2f0a52e)
2024-01-29 22:22:15 +00:00
Pierre-Eric Pelloux-Prayer
68e58263eb radeonsi: adjust flags for si_compute_shorten_ubyte_buffer
- no need to flush anything before as we're working on a clean
  buffer (SI_OP_SKIP_CACHE_INV_BEFORE)
- L2 must be flushed after the job to avoid rendering artifacts.
  Instead of setting it manually, use SI_OP_SYNC_AFTER +
  SI_COHERENCY_NONE.

Fixes: 1a99f50c7f ("radeonsi: use a compute shader to convert unsupported indices format")
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27095>
(cherry picked from commit cce5920025)
2024-01-29 22:07:53 +00:00
Pierre-Eric Pelloux-Prayer
88880bfc78 radeonsi: emit cache flushes before draw registers
This fixes #9807 but I don't understand why.

Emitting cache flushes before VGT_PRIMITIVE_TYPE is what makes
the problem go away but changing the order in si_draw() is clearer.

The only case where sctx->flags is modified in si_emit_draw_registers
is handled using si_emit_cache_flush_direct, so we can move the cache
flushing up without any additional conditionals.

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/9807
Fixes: 1e4b539042 ("radeonsi: handle deferred cache flushes as a state (si_atom)")
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27095>
(cherry picked from commit 0e16da89fe)
2024-01-29 22:07:53 +00:00
Lionel Landwerlin
439aff7ff6 anv: add missing alignment for AUX-TT mapping
Buffers that are not dedicated can also be used for CCS mapped images,
so they need to be aligned to the AUX-TT requirements.

GTK+ is running into such a case where it creates an image with a CCS
modifier. When requesting the alignment through
vkGetImageMemoryRequirements(), the 64KB/1MB alignment is returned, but
the binding fails with an assert because the VkDeviceMemory has not
been aligned to the AUX-TT requirement, and we cannot disable CCS since
the modifier requires it.

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 4cdd3178fb ("anv: Meet CCS alignment reqs with dedicated allocs")
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10433
Reviewed-by: Jianxun Zhang <jianxun.zhang@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27258>
(cherry picked from commit e519e06f4b)
2024-01-29 22:07:52 +00:00
Louis-Francis Ratté-Boulianne
e7715e39a5 panfrost: Legalize before updating part of a AFBC-packed texture
When updating an AFBC-packed resource, we need to make sure it is
legalized before blitting the staging resource to it. We can't rely
on the blit to properly convert the resource as it will result in
blit recursion and a crash.

If the whole texture is updated, however, there is no need to unpack,
as the content can be discarded. Just create a new BO with the right
format.

Signed-off-by: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
Reviewed-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Fixes: 33b48a5585 ("panfrost: Add debug flag to force packing of AFBC textures on upload")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27208>
(cherry picked from commit 1aa832e5f5)
2024-01-29 22:07:32 +00:00
Louis-Francis Ratté-Boulianne
97c0e12da3 panfrost: add can_discard flag to pan_legalize_afbc_format
There might be a more efficient path when legalizing a resource if
we don't need to worry about its content. For example, it doesn't
make sense to copy the resource content when converting the modifier
if the resource content is discarded anyway.

Signed-off-by: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
Reviewed-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Fixes: 33b48a5585 ("panfrost: Add debug flag to force packing of AFBC textures on upload")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27208>
(cherry picked from commit ee77168d57)
2024-01-29 22:07:30 +00:00
Louis-Francis Ratté-Boulianne
dae3eb155a panfrost: add copy_resource flag to pan_resource_modifier_convert
When converting the modifier for a resource, it's not always
necessary to blit the content as well. Creating a new resource with
the right format/modifier might be enough.

Signed-off-by: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
Reviewed-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Fixes: 33b48a5585 ("panfrost: Add debug flag to force packing of AFBC textures on upload")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27208>
(cherry picked from commit 62ed14b386)
2024-01-29 22:07:29 +00:00
Louis-Francis Ratté-Boulianne
dc7b4111fd panfrost: factor out method to check whether we can discard resource
The logic is going to be reused to determine whether we need to
unpack an AFBC-packed texture before updating it (when unmapping).

Signed-off-by: Louis-Francis Ratté-Boulianne <lfrb@collabora.com>
Reviewed-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Fixes: 33b48a5585 ("panfrost: Add debug flag to force packing of AFBC textures on upload")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27208>
(cherry picked from commit 22a7637b08)
2024-01-29 22:06:19 +00:00
Faith Ekstrand
6ffceb7138 nak: Fix TCS output reads
The hardware uses the lane index for per-vertex TCS output reads rather
than the vertex index.  Fortunately, it's a pretty easy calculation to
go from one to the other.

Fixes: abe9c1fea2 ("nak: Add NIR lowering for attribute I/O")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27284>
(cherry picked from commit 99ef70d8aa)
2024-01-29 21:16:35 +00:00
Lionel Landwerlin
8f9db1db2e anv: don't prevent L1 untyped cache flush in 3D mode
Required on MTL.

Fixes tests like:

 dEQP-VK.synchronization2.op.single_queue.timeline_semaphore.write_copy_buffer_read_copy_buffer.buffer_16384

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: mesa-stable
Co-Authored-by: Jordan Justen <jordan.l.justen@intel.com>
Tested-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27172>
(cherry picked from commit 7c2ff46a4f)
2024-01-29 21:16:33 +00:00
Dmitry Baryshkov
5b7553a101 freedreno/drm: don't crash for unsupported devices
For unsupported devices fd_pipe_new() will return NULL, causing a crash
when fd_device_new() tries to check device generation.

Handle fd_pipe_new() returning NULL by destroying the device and
returning NULL.

Fixes: 4861067689 ("freedreno/drm: Add sub-allocator")
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26961>
(cherry picked from commit 7a81855a67)
2024-01-29 21:16:20 +00:00
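
A hedged sketch of the NULL-handling pattern described above, written against the libdrm-style freedreno interface (the header name and exact in-tree signatures are assumptions, not the actual patch):

    #include <freedreno_drmif.h>

    static struct fd_device *probe_device(struct fd_device *dev)
    {
        struct fd_pipe *pipe = fd_pipe_new(dev, FD_PIPE_3D);
        if (!pipe) {
            /* Unsupported device: clean up instead of dereferencing NULL later. */
            fd_device_del(dev);
            return NULL;
        }
        /* ...the device generation can now be queried through the pipe... */
        fd_pipe_del(pipe);
        return dev;
    }
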
Rohan Garg
77e4a2a06e anv: untyped data port flush required when a pipeline sets the VK_ACCESS_2_SHADER_STORAGE_READ_BIT
VK_ACCESS_2_SHADER_STORAGE_READ_BIT specifies read access to a
storage buffer, physical storage buffer, storage texel buffer, or
storage image in any shader pipeline stage.

Any storage buffers or images written to must be invalidated and
flushed before the shader can access them.

This fixes the following tests on LNL:
  - dEQP-VK.synchronization2.op.single_queue.barrier.write\*_specialized_access_flag

Signed-off-by: Rohan Garg <rohan.garg@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
cc: mesa-stable

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27212>
(cherry picked from commit 3e93ccbc1b)
2024-01-29 21:10:02 +00:00
Dave Airlie
2588d3f4b9 gallivm: passing fp16_split_fp64 to fp16 lowering.
This causes lavapipe to use the split code and fixes accuracy
for CTS.

Fixes dEQP-VK.glsl.builtin.precision_fconvert.f64_to_f16*

Cc: mesa-stable
Reviewed-by: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27228>
(cherry picked from commit 38e92556a0)
2024-01-29 21:07:02 +00:00
Faith Ekstrand
09bace40bf nvk: Don't ignore ExternalImageFormatInfo
Fixes: 702326d013 ("nvk: Add external memory queries")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27242>
(cherry picked from commit 58e916b3b7)
2024-01-29 21:07:01 +00:00
Rhys Perry
aba20a934d aco: fix labelling of s_not with constant
Fixes RADV compilation of a Cyberpunk 2077 RT pipeline with
PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT.

Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Fixes: dfaa3c0af6 ("aco: Flip s_cbranch / s_cselect to optimize out an s_not if possible.")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27194>
(cherry picked from commit 6dc182b6b2)
2024-01-29 21:07:00 +00:00
Mike Blumenkrantz
7e4e5fbdfb zink: set more dynamic states when using shader objects
fixes #10457

cc: mesa-stable

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27146>
(cherry picked from commit df45cbddb5)
2024-01-29 21:06:57 +00:00
Eric Engestrom
77c2a5d4d8 .pick_status.json: Update to b75ee1a067 2024-01-29 18:09:22 +00:00
Thong Thai
ba5fd74ae3 radeonsi/vcn: remove EFC support for renoir
Renoir hardware has limited EFC support, so remove support for it from Mesa.
Thanks to @nyanmisaka for raising the issue.

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/9436

Signed-off-by: Thong Thai <thong.thai@amd.com>
Reviewed-by: Ruijing Dong <ruijing.dong@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27076>
(cherry picked from commit df5203d631)
2024-01-29 17:31:50 +00:00
Eric Engestrom
808f056688 VERSION: bump for 24.0.0-rc3 2024-01-24 20:01:30 +00:00
Karol Herbst
d25222c73f rusticl/kernel: check that local size on dispatch doesn't exceed limits
Cc: mesa-stable
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27232>
(cherry picked from commit eca4f0f632)
2024-01-24 14:22:24 +00:00
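
A sketch of the limit check being added (rusticl itself is Rust; this just shows the OpenCL rule involved): each local-size dimension must not exceed CL_DEVICE_MAX_WORK_ITEM_SIZES, and the product must not exceed CL_DEVICE_MAX_WORK_GROUP_SIZE.

    #include <CL/cl.h>

    static cl_int check_local_size(cl_uint work_dim, const size_t *local_size,
                                   const size_t *max_item_sizes, size_t max_group_size)
    {
        size_t total = 1;
        for (cl_uint i = 0; i < work_dim; i++) {
            if (local_size[i] > max_item_sizes[i])
                return CL_INVALID_WORK_ITEM_SIZE;
            total *= local_size[i];
        }
        return total <= max_group_size ? CL_SUCCESS : CL_INVALID_WORK_GROUP_SIZE;
    }
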
Friedrich Vock
3f1d5726cc nir: Handle casts in nir_opt_copy_prop_vars
Cc: mesa-stable

Reviewed-by: Faith Ekstrand <faith.ekstrand@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27197>
(cherry picked from commit 9f22b95956)
2024-01-24 14:22:23 +00:00
Friedrich Vock
466ae8c313 nir: Make is_trivial_deref_cast public
Cc: mesa-stable

Reviewed-by: Faith Ekstrand <faith.ekstrand@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27197>
(cherry picked from commit 6c845ed548)
2024-01-24 14:22:22 +00:00
Boris Brezillon
d2094c1e1b panfrost: Clamp the render area to the damage region
The render area clamping was lost during the transition to the FB
helpers. Restore the original logic so we can benefit from
EGL_KHR_partial_update on v4, and on v5 when only one damage
rectangle is passed.

Fixes: ff3eada7eb ("panfrost: Use the generic preload and FB helpers in the gallium driver")
Reported-by: Sjoerd Simons <sjoerd.simons@collabora.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Tested-by: Sjoerd Simons <sjoerd.simons@collabora.com>
Reviewed-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27215>
(cherry picked from commit f6f7715c58)
2024-01-24 14:22:21 +00:00
Rhys Perry
73dcdc7a4e nir/lower_shader_calls: remove CF before nir_opt_if
Otherwise, opt_if_simplification() can attempt to insert an inot after a
jump.

Fixes RADV compilation of a Cyberpunk 2077 pipeline with
PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT.

Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27193>
(cherry picked from commit e465ac2561)
2024-01-24 14:22:20 +00:00
Rhys Perry
1059613931 nir/lower_non_uniform: set non_uniform=false when lowering is not needed
Fixes RADV compilation of a Doom Eternal pipeline with
PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT, because
nir_opt_non_uniform_access was skipped and later passes don't expect
non-uniform access.

Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Faith Ekstrand <faith.ekstrand@collabora.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: b1619109ca ("nir/lower_non_uniform: remove non_uniform flags after lowering")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27192>
(cherry picked from commit 015b0d678f)
2024-01-24 14:22:19 +00:00
Eric Engestrom
4410947ebe .pick_status.json: Update to eca4f0f632 2024-01-24 14:22:16 +00:00
Rhys Perry
b85673b086 radv: do nir_shader_gather_info after radv_nir_lower_rt_abi
Fixes compilation of a Doom Eternal shader with
PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT.

ac_nir_lower_resinfo() was not happening because it is predicated on
uses_resource_info_query and no later optimization updated it.

Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Friedrich Vock <friedrich.vock@gmx.de>
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27195>
(cherry picked from commit 90939e93f6)
2024-01-23 20:34:31 +00:00
Karol Herbst
725af5b50c nak/opt_out: fix comparison in try_combine_outs
clippy complained that it was comparing the same thing.

Fixes: 5b355ff25a ("nak: Fix opt_out")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27216>
(cherry picked from commit 0a414ecdf5)
2024-01-23 20:34:31 +00:00
Eric Engestrom
e2178ddc07 .pick_status.json: Update to 90939e93f6 2024-01-23 20:34:30 +00:00
Karol Herbst
ee57c9df39 nir: rework and fix rotate lowering
No driver supports urol/uror on all bit sizes. Intel gen11+ only for 16
and 32 bit, Nvidia GV100+ only for 32 bit. Etnaviv can support it on 8,
16 and 32 bit.

Also turn the `lower` into a `has` option as only two drivers actually
support `uror` and `urol` at this moment.

Fixes crashes with CL integer_rotate on iris and nouveau since we emit
urol for `rotate`.

v2: always lower 64 bit

Fixes: fe0965afa6 ("spirv: Don't use libclc for rotate")
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by (Intel and nir): Ian Romanick <ian.d.romanick@intel.com>

Reviewed-by: David Heidelberg <david.heidelberg@collabora.com>
Acked-by: Yonggang Luo <luoyonggang@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27090>
(cherry picked from commit f2b7c4ce29)
2024-01-23 20:34:30 +00:00
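For drivers that lack native urol/uror on a given bit size, the standard lowering turns the rotate into two shifts and an OR. A minimal 32-bit sketch in plain C (not the actual NIR lowering code):

    #include <stdint.h>

    /* Rotate x left by s bits. Masking both shift amounts keeps the
     * expression well defined in C even for s == 0 or s == 32. */
    static uint32_t
    urol32(uint32_t x, uint32_t s)
    {
       return (x << (s & 31)) | (x >> ((32 - s) & 31));
    }

uror is the same with the shift directions swapped, and per the v2 note above the 64-bit variants are always lowered this way.
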
Caio Oliveira
2a4f8de54f intel/compiler: Fix rebuilding the CFG in fs_combine_constants
When building the CFG, the instructions are taken off the list in
fs_visitor and added to the lists inside each block.  The single
"exec_node" in the instruction is used for those memberships.

In the case where the pass rebuilt the CFG, that list had no instructions, so
calculate_cfg() had nothing to work with.  For now fix the bug by
pulling all the instructions back to the original list.

We can do better here, but we're punting until upcoming work on the
CFG itself.

Issue found in an unpublished CTS test.  Small reproduction in our
unit tests now enabled.

Fixes: 65237f8bbc ("intel/fs: Don't add MOV instructions to DO blocks in combine constants")
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27131>
(cherry picked from commit 4dbf9181cd)
2024-01-23 20:21:13 +00:00
Tapani Pälli
7af4d666a7 iris: replace constant cache invalidate with hdc flush
This implements Wa_14010840176.

Cc: mesa-stable
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/21364>
(cherry picked from commit 231ede4f0c)
2024-01-23 19:49:09 +00:00
Lionel Landwerlin
72d36448f8 anv: implement undocumented tile cache flush requirements
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: mesa-stable
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Ivan Briano <ivan.briano@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27169>
(cherry picked from commit ba87656079)
2024-01-23 19:49:07 +00:00
Lionel Landwerlin
e99d28b4e2 anv: fix pipeline executable properties with graphics libraries
We're missing the ISA code in renderdoc. You can reproduce with the
Sascha Willems graphics pipeline demo.

The change is large here because we have to fix a confusion between
anv_shader_bin & anv_pipeline_executable. anv_pipeline_executable is
there as a representation for the user and multiple
anv_pipeline_executable can point to a single anv_shader_bin.

In this change we split the anv_shader_bin related logic that was
added in anv_pipeline_add_executable*() and move it to a new
anv_pipeline_account_shader() function.

When importing RT libraries, we add all the anv_pipeline_executable
from the libraries.

When importing Gfx libraries, we add the anv_pipeline_executable only
if not doing link time optimization.

anv_shader_bin related properties are added whenever we're importing a
shader from a library, compiling or finding in the cache.

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 3d49cdb71e ("anv: implement VK_EXT_graphics_pipeline_library")
Reviewed-by: Ivan Briano <ivan.briano@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26594>
(cherry picked from commit 58c9f817cb)
2024-01-23 19:49:06 +00:00
Yiwei Zhang
bfa31de5fd venus: fix to respect the final pipeline layout
This fixes VUID-vkCmdDraw-None-08600 violation when running gpl cts:
dEQP-VK...graphics_library.misc.bind_null_descriptor_set.*, where the
final pipeline layout is falsely dropped, leading to incompatibility with
the pipeline layout of the bound descriptor set.

Fixes: a65ac274ac ("venus: Do pipeline fixes for VK_EXT_graphics_pipeline_library")
Signed-off-by: Yiwei Zhang <zzyiwei@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27054>
(cherry picked from commit 80a5df16fe)
2024-01-23 13:21:29 +00:00
Yiwei Zhang
252a87e77c venus: fix pipeline derivatives
This was unexpectedly dropped in the initial GPL impl.

Fixes: a65ac274ac ("venus: Do pipeline fixes for VK_EXT_graphics_pipeline_library")
Signed-off-by: Yiwei Zhang <zzyiwei@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27054>
(cherry picked from commit f713b17a16)
2024-01-23 13:21:28 +00:00
Yiwei Zhang
95167b212e venus: fix pipeline layout lifetime
We should check the count instead of a random pointer address.

Fixes: 19f2b9d0bb ("venus: extend VkPipelineLayout lifetime for ...")
Signed-off-by: Yiwei Zhang <zzyiwei@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27054>
(cherry picked from commit b551b6e48a)
2024-01-23 13:21:27 +00:00
Sil Vilerino
8039f6a525 d3d12: Implement cap for PIPE_VIDEO_CAP_ENC_INTRA_REFRESH
Fixes: c81967fa89 ("d3d12: Implement Intra Refresh for H264, HEVC, AV1")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27201>
(cherry picked from commit a3c91624f4)
2024-01-23 13:21:25 +00:00
Eric R. Smith
2e4623bd19 panfrost: fix panfrost drm-shim
The panfrost driver now makes an ioctl to retrieve some new memory
parameters, and DRM_PANFROST_PARAM_MEM_FEATURES is required (it has no
default in the caller). This caused drm-shim to stop working. This
patch adds some defaults to get drm-shim working again.

Signed-off-by: Eric R. Smith <eric.smith@collabora.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Fixes: 91fe8a0d28 ("panfrost: Back panfrost_device with pan_kmod_dev object")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27162>
(cherry picked from commit a50b2f8f25)
2024-01-23 13:21:14 +00:00
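A drm-shim backend has to answer GET_PARAM queries on its own, so every parameter the driver now requires needs a default. A hypothetical sketch (the handler name and the default value are assumptions; only DRM_PANFROST_PARAM_MEM_FEATURES is taken from the message above):

    #include <stdint.h>
    #include "drm-uapi/panfrost_drm.h"

    /* Hypothetical shim-side handler: report defaults for parameters that
     * the userspace driver refuses to run without. */
    static int
    shim_panfrost_get_param(uint32_t param, uint64_t *value)
    {
       switch (param) {
       case DRM_PANFROST_PARAM_MEM_FEATURES:
          *value = 0;   /* assumed default: no special memory features */
          return 0;
       default:
          return -1;    /* parameter not emulated */
       }
    }
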
Samuel Pitoiset
812bcc29af radv: fix indirect draws with NULL index buffer on GFX10
GFX10 has a hw bug and it can't handle 0-sized index buffers. The
non-indirect draw path was fine but not the indirect path where RADV
emits the index buffer.

This fixes flakes with dEQP-VK.*maintenance6* on NAVI14, and possibly
GPU hangs if there is an indirect draw with a valid index buffer right
before because it would re-use the same index buffer.

Fixes: db9816fd66 ("radv: add support for NULL index buffer")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27142>
(cherry picked from commit 783e3c096f)
2024-01-23 13:21:10 +00:00
Samuel Pitoiset
21d22653da radv: fix indirect dispatches on the compute queue on GFX7
GFX7 CP requires the indirect dispatch VA to be aligned to 32-bytes.

This fixes dEQP-VK.api.command_buffers.many_indirect_disps_on_secondary,
but it's unexpected that it uncovered this bug.

Cc: mesa-stable
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27148>
(cherry picked from commit 5c03cdbd02)
2024-01-23 13:21:09 +00:00
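One way to satisfy the constraint is to round the indirect dispatch VA up to a 32-byte boundary, the usual power-of-two round-up; a small sketch, not the actual RADV helper:

    #include <stdint.h>

    /* Round va up to the next multiple of `alignment` (a power of two). */
    static uint64_t
    align_u64(uint64_t va, uint64_t alignment)
    {
       return (va + alignment - 1) & ~(alignment - 1);
    }

    /* e.g. align_u64(va, 32) before the GFX7 CP reads the dispatch args */
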
Georg Lehmann
ebd56d7a76 aco: stop scheduling at p_logical_end
No Foz-DB changes, but this fixes some issues when the spiller inserts
scratch loads after p_logical_end for p_return.

Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>

Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27119>
(cherry picked from commit 74fc2e287f)
2024-01-23 13:21:08 +00:00
Daniel Schürmann
e1d20b69c4 aco: give spiller more room to assign spilled SGPRs to VGPRs
On chordal graphs, a greedy coloring can be done in a way that never uses
more colors than are required for the largest clique. However, since we
have vector values and force phi resources into the same spill slots, the
interference graphs are not chordal, and thus, this assumption doesn't hold.

Use twice as many spill slots as upper bound.

Totals from 10 (0.01% of 79242) affected shaders: (GFX11)
MaxWaves: 52 -> 54 (+3.85%)
Instrs: 271386 -> 271779 (+0.14%)
CodeSize: 1362544 -> 1365432 (+0.21%)
VGPRs: 2536 -> 2532 (-0.16%)
SpillVGPRs: 778 -> 818 (+5.14%)
Scratch: 73472 -> 76800 (+4.53%)
Latency: 3331718 -> 3328798 (-0.09%); split: -0.14%, +0.05%
InvThroughput: 1665860 -> 1643350 (-1.35%); split: -1.40%, +0.05%
VClause: 3292 -> 3329 (+1.12%); split: -0.06%, +1.18%
Copies: 46082 -> 46257 (+0.38%)

Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27011>
(cherry picked from commit e3098bb232)
2024-01-23 13:21:08 +00:00
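As a worked illustration of why the chordal-graph bound no longer holds: a 5-cycle has no clique larger than 2, yet greedy coloring needs 3 colors. A self-contained sketch (generic C, not ACO code):

    #include <stdbool.h>
    #include <stdio.h>

    int main(void)
    {
       /* adjacency of a 5-cycle 0-1-2-3-4-0; the largest clique is 2 */
       const bool adj[5][5] = {
          {0,1,0,0,1},
          {1,0,1,0,0},
          {0,1,0,1,0},
          {0,0,1,0,1},
          {1,0,0,1,0},
       };
       int color[5];
       int max_color = -1;
       for (int v = 0; v < 5; v++) {
          bool used[5] = {false};
          for (int u = 0; u < v; u++)
             if (adj[v][u])
                used[color[u]] = true;
          int c = 0;
          while (used[c])
             c++;
          color[v] = c;
          if (c > max_color)
             max_color = c;
       }
       printf("greedy used %d colors\n", max_color + 1);   /* prints 3 */
       return 0;
    }

Spill-slot assignment hits the same effect once the interference graph stops being chordal, which is why the upper bound is doubled.
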
Friedrich Vock
0392e4bf5c radv: Fix shader replay allocation condition
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26891>
(cherry picked from commit 43bdfebbff)
2024-01-23 13:21:07 +00:00
Konstantin Seurer
364835c513 lavapipe: Report the correct preprocess buffer size
There can be multiple sequences.

Fixes: 976dd26 ("lavapipe: NV_device_generated_commands")
Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27019>
(cherry picked from commit 024f144165)
2024-01-23 13:20:01 +00:00
Konstantin Seurer
83b5a3a3f9 lavapipe: Mark vertex elements dirty if the stride changed
Fixes: 7672545 ("gallium: move vertex stride to CSO")
Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27019>
(cherry picked from commit cc94ff081c)
2024-01-23 13:20:00 +00:00
Konstantin Seurer
6d1dae874d lavapipe: Fix DGC vertex buffer handling
Fixes: 976dd26 ("lavapipe: NV_device_generated_commands")
Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27019>
(cherry picked from commit 6d88c1bb6c)
2024-01-23 13:19:59 +00:00
Konstantin Seurer
484a051aaa ac/llvm: Enable helper invocations for quad OPs
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/9239
cc: mesa-stable

Reviewed-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27110>
(cherry picked from commit 220c912080)
2024-01-23 13:19:58 +00:00
Tapani Pälli
cfa818f191 anv: expand pre-hiz data cache flush to gfx >= 125
Cc: mesa-stable
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27132>
(cherry picked from commit 02d7f5e4ff)
2024-01-23 13:19:07 +00:00
Tapani Pälli
69cac7ae19 iris: expand pre-hiz data cache flush to gfx >= 125
Cc: mesa-stable
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Sagar Ghuge <sagar.ghuge@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27132>
(cherry picked from commit 93706d5c2f)
2024-01-23 13:19:06 +00:00
Ian Romanick
b428530441 intel/compiler: Track mue_compaction and mue_header_packing flags in brw_get_compiler_config_value
v2: Use u_foreach_bit64. Suggested by Lionel.

Fixes: 48885c7fe3 ("intel/compiler: load debug mesh compaction options once")
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26993>
(cherry picked from commit 7481d61a5d)
2024-01-23 13:19:05 +00:00
Ian Romanick
057493cefb intel/compiler: Track lower_dpas flag in brw_get_compiler_config_value
This user-settable flag affects compiler output, so it should be tracked
in the cache hash.

Fixes: 3756f60558 ("intel/fs: DPAS lowering")
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Suggested-by: Lionel Landwerlin
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26993>
(cherry picked from commit 6f237a23c7)
2024-01-23 13:19:04 +00:00
Ian Romanick
219cd6dc40 intel/compiler: Disable DPAS instructions on MTL
Reviewed-by: Mark Janes <markjanes@swizzler.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 3756f60558 ("intel/fs: DPAS lowering")
Closes: #10376
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26993>
(cherry picked from commit 951e08fc18)
2024-01-23 13:19:03 +00:00
Hans-Kristian Arntzen
ba54cfa014 wsi/x11: Add workaround for Detroit Become Human.
Game needs strict image count to not crash in non-vsync mode.

Signed-off-by: Hans-Kristian Arntzen <post@arntzen-software.no>
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27038>
(cherry picked from commit efc0131d5b)
2024-01-23 13:19:02 +00:00
Eric Engestrom
bc9a92aa03 .pick_status.json: Update to d0a3bac163 2024-01-23 13:18:34 +00:00
Dave Airlie
085612fce5 radv: don't submit empty command buffers on encoder ring.
the VCN enc/unified rings don't do NOP packets, and hang with 0-sized
cmd buffers. This just stops submitting 0-sized cmd buffers to the hw.

Fixes hangs with dEQP-VK.video.decode.h264_i on navi3x

Cc: mesa-stable
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/25932>
(cherry picked from commit f33683e4da)
2024-01-18 13:07:57 +00:00
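A hedged sketch of the submit-side filtering this describes; the struct is illustrative (real radeon command buffers track their size as a dword count), and the loop is not the actual RADV submit code:

    /* Keep only non-empty command streams, since the VCN enc/unified rings
     * cannot pad a 0-sized submission with NOPs. */
    struct cs { unsigned cdw; /* dwords recorded; 0 means empty */ };

    static unsigned
    filter_empty_cs(struct cs **in, unsigned count, struct cs **out)
    {
       unsigned kept = 0;
       for (unsigned i = 0; i < count; i++)
          if (in[i]->cdw != 0)
             out[kept++] = in[i];
       return kept;
    }
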
Dave Airlie
74ee323c17 radv/video: refactor sq start/end code to avoid decode hangs.
The extra cmd buffer layer was done wrong; we need to emit the
SQ start and end packets around every reset/decode packet.

Fixes dEQP-VK.video.decode.h264_i on navi3x

Fixes: d8f3060bd9 ("radv/video: start adding gfx11 vcn decoder")
Acked-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/25932>
(cherry picked from commit d32f2ee7b6)
2024-01-18 13:07:56 +00:00
Faith Ekstrand
2ff9219359 nvk: Unref shaders on pipeline free
Fixes: d6a1e29ccd ("nvk: pipeline shader cache")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27130>
(cherry picked from commit be0f04f5bd)
2024-01-18 13:07:54 +00:00
Ryan Neph
ed75400a50 venus: fix shmem leak on vn_ring_destroy
Missed shmem unref when moving ring internals out of vn_instance.c.

Fixes: d1e29b7557 ("venus: move ring shmem into vn_ring")
Signed-off-by: Ryan Neph <ryanneph@google.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27125>
(cherry picked from commit 6e4bb8253e)
2024-01-18 13:07:51 +00:00
Eric Engestrom
22c416e1c8 .pick_status.json: Update to d2b08f9437 2024-01-18 13:07:49 +00:00
David Heidelberg
a062b0432a ci/deqp: uprev deqp-runner for Linux too to 0.18.0
The previous commit upreved deqp-runner only for Android.

Fixes: 1ff4687e86 ("ci: uprev deqp-runner from 0.16.1 to 0.18.0")

Signed-off-by: David Heidelberg <david.heidelberg@collabora.com>

[Eric]
- rename the deqp-runner version to DEQP_RUNNER_VERSION instead of DEQP_VERSION
- update image tags
- fix expectations lists

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27062>
(cherry picked from commit 4ff77f08e4)
2024-01-17 23:29:18 +00:00
Eric Engestrom
01b374ecaf ci/deqp: ensure that in default builds, wayland + x11 + xcb are all built
If someone were to remove the libraries that are needed for these,
`default` would simply not enable these tests, and the only thing we
could notice is that test jobs would suddenly take less time to run.

Instead, let's have a check to make sure dEQP's cmake has detected
everything and enabled these platforms.

Reviewed-by: David Heidelberg <david.heidelberg@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27041>
(cherry picked from commit 27a1b4e4f3)
2024-01-17 23:28:12 +00:00
Eric Engestrom
e716b08f86 VERSION: bump for 24.0.0-rc2 2024-01-17 22:28:20 +00:00
Friedrich Vock
9d1a064663 radv/rt: Add workaround to make leaves always active
DOOM Eternal builds acceleration structures with inactive primitives and
tries to make them active in later AS updates. This is disallowed by the
spec and triggers a GPU hang. Fix the hang by working around the bug.

Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27034>
(cherry picked from commit a9831caa14)
2024-01-17 21:42:02 +00:00
Boris Brezillon
f7f823c787 panvk: Fix access to uninitialized panvk_pipeline_layout::num_sets field
Commit 73eecffabd ("panvk: Use the vk_pipeline_layout base struct")
reworked the panvk logic to use vk_pipeline_layout, which contains the
number of descriptor set layouts referenced by a pipeline layout, thus
deprecating panvk_pipeline_layout::num_sets.

Make panvk_fill_non_vs_attribs() use vk_pipeline_layout::set_count
instead of panvk_pipeline_layout::num_sets and kill the latter so we
can't introduce new users.

Fixes: 73eecffabd ("panvk: Use the vk_pipeline_layout base struct")
Cc: mesa-stable
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Constantine Shablya <constantine.shablya@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27107>
(cherry picked from commit b18bfed2c5)
2024-01-17 21:41:44 +00:00
Boris Brezillon
b65d7520f6 panvk: Fix tracing
pandecode_next_frame() take a decode context. Passing NULL leads to a
NULL deref.

Fixes: 56be9a55be ("pan/decode: handle more than one panfrost_device")
Cc: mesa-stable
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Constantine Shablya <constantine.shablya@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27107>
(cherry picked from commit 35a02560c8)
2024-01-17 21:39:08 +00:00
Sviatoslav Peleshko
1246e54f1c nir: Use alu source components count in nir_alu_srcs_negative_equal
When we use source from ALU instruction directly, the default swizzle array
should be populated with the same amount of components as the src has.

Otherwise, if we use nir_ssa_alu_instr_src_components, it can return
the destination components count that is lower than component index
actually used in that source. This can lead to false equality
between 0 (uninitialized) and 0 (.x) in swizzle comparison below.

Fixes: c6ee46a7 ("nir: Add nir_alu_srcs_negative_equal")
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/8704
Signed-off-by: Sviatoslav Peleshko <sviatoslav.peleshko@globallogic.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/22655>
(cherry picked from commit 6b0bfdfa9e)
2024-01-17 21:39:06 +00:00
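The gist of the fix, as a generic sketch (not the actual NIR helper): size the identity swizzle by the source's component count rather than the destination's, so no component that may be read is left uninitialized.

    #include <stdint.h>

    #define MAX_COMPONENTS 16

    /* Fill an identity swizzle for every component the *source* provides;
     * sizing this by the destination can leave used entries at 0 and make
     * them compare equal to an explicit .x by accident. */
    static void
    build_identity_swizzle(uint8_t swizzle[MAX_COMPONENTS], unsigned src_components)
    {
       for (unsigned i = 0; i < src_components; i++)
          swizzle[i] = (uint8_t)i;
    }
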
Erico Nunes
4175b4d547 Revert "ci: lima farm is down"
This reverts commit 601b826a5e.

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26905>
(cherry picked from commit 8bd4cae768)
2024-01-17 21:39:02 +00:00
Yonggang Luo
9732d1bdcd compiler/spirv: The spirv shader is binary, should write in binary mode
Fixes: 53265c8798 ("spirv: Add a mechanism for dumping failing shaders")

Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
Reviewed-by: Jesse Natalie <jenatali@microsoft.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26775>
(cherry picked from commit fd11818828)
2024-01-17 21:39:00 +00:00
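The fix boils down to opening the dump file in binary mode so that no platform performs newline translation on the SPIR-V words; a minimal sketch with illustrative names:

    #include <stdio.h>

    static void
    dump_spirv(const char *path, const void *words, size_t size_bytes)
    {
       FILE *f = fopen(path, "wb");   /* "wb", not "w": SPIR-V is binary */
       if (!f)
          return;
       fwrite(words, 1, size_bytes, f);
       fclose(f);
    }
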
Yiwei Zhang
8974222433 vulkan/wsi/wayland: fix returns and avoid leaks for failed swapchain
Cc: mesa-stable
Signed-off-by: Yiwei Zhang <zzyiwei@chromium.org>
Tested-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Ryan Neph <ryanneph@google.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27080>
(cherry picked from commit dc5725ee29)
2024-01-17 21:38:56 +00:00
Eric Engestrom
ce34ec41cd ci: fix job dependency error in MRs for bin/ci/* scripts
'debian/x86_64_build' job needs 'debian/x86_64_build-base' job, but 'debian/x86_64_build-base' is not in any previous stage

Fixes: f298a0e709 ("ci: make sure we evaluate the python-test rules first")
Fixes: 2c9fdaa830 ("ci: fix python-test dependency error on merge requests")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27042>
(cherry picked from commit 2ce0b5ab0a)
2024-01-17 21:38:53 +00:00
Eric Engestrom
3dabc03b58 .pick_status.json: Update to 10e2dbb63b 2024-01-17 21:36:44 +00:00
David Rosca
25ae9134dd radeonsi/vcn: Fix H264 slice header when encoding I frames
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27029>
(cherry picked from commit 865abfde63)
2024-01-16 18:41:37 +00:00
Patrick Lerda
43a00ad0fa glsl/nir: fix gl_nir_cross_validate_outputs_to_inputs() memory leak
For instance, this issue is triggered with
vs-to-fs-overlap.shader_test -auto -fbo:
Direct leak of 24 byte(s) in 1 object(s) allocated from:
    #0 0x7fe64f58e9a7 in calloc (/usr/lib64/libasan.so.6+0xb19a7)
    #1 0x7fe642ca2839 in _mesa_symbol_table_ctor ../src/mesa/program/symbol_table.c:286
    #2 0x7fe642ff003d in gl_nir_cross_validate_outputs_to_inputs ../src/compiler/glsl/gl_nir_link_varyings.c:728
    #3 0x7fe642d7c7d8 in gl_nir_link_glsl ../src/compiler/glsl/gl_nir_linker.c:1357
    #4 0x7fe642be6931 in st_link_glsl_to_nir ../src/mesa/state_tracker/st_glsl_to_nir.cpp:562
    #5 0x7fe642be6931 in st_link_shader ../src/mesa/state_tracker/st_glsl_to_nir.cpp:944
    #6 0x7fe642acab55 in link_program ../src/mesa/main/shaderapi.c:1336
    #7 0x7fe642acab55 in link_program_error ../src/mesa/main/shaderapi.c:1447
    #8 0x7fe6424aa389 in _mesa_unmarshal_LinkProgram src/mapi/glapi/gen/marshal_generated2.c:1911
    #9 0x7fe641fd912b in glthread_unmarshal_batch ../src/mesa/main/glthread.c:139
    #10 0x7fe641f48d48 in util_queue_thread_func ../src/util/u_queue.c:309
    #11 0x7fe641fa442a in impl_thrd_routine ../src/c11/impl/threads_posix.c:67

Fixes: 7d1948e9b5 ("glsl: implement cross_validate_outputs_to_inputs() in nir linker")
Signed-off-by: Patrick Lerda <patrick9876@free.fr>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27071>
(cherry picked from commit bacace8634)
2024-01-16 18:41:36 +00:00
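The leak is the usual pattern of a constructor without its matching destructor on every exit path. An illustrative pairing using the symbol-table API named in the trace above (header path and surrounding code are assumptions):

    #include "program/symbol_table.h"

    static void
    cross_validate_sketch(void)
    {
       struct _mesa_symbol_table *table = _mesa_symbol_table_ctor();

       /* ... cross-validate outputs against inputs via `table` ... */

       /* the fix: make sure the table is destroyed on every return path */
       _mesa_symbol_table_dtor(table);
    }
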
Karol Herbst
78fd14d938 rusticl/kernel: run opt/lower_memcpy later to fix a crash
nir_opt_memcpy requires explicit types to function properly. So run them
after lowering vars to explicit types.

Cc: mesa-stable
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27068>
(cherry picked from commit f896659894)
2024-01-16 18:41:35 +00:00
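rusticl drives these NIR passes from Rust, so the C-style sketch below only shows the intended ordering; the variable-mode and type-size arguments are assumptions, not the exact ones rusticl uses:

    /* Assign explicit types first, otherwise nir_opt_memcpy cannot reason
     * about the copies it is asked to optimize. */
    NIR_PASS(_, nir, nir_lower_vars_to_explicit_types,
             nir_var_function_temp, glsl_get_cl_type_size_align);
    NIR_PASS(_, nir, nir_opt_memcpy);
    NIR_PASS(_, nir, nir_lower_memcpy);
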
Tatsuyuki Ishi
c5b8590e6d radv: never set DISABLE_WR_CONFIRM for CP DMA clears and copies
This mirrors the changes in 69ff9c16bb ("radeonsi: never set
DISABLE_WR_CONFIRM for CP DMA clears and copies").

Cc: mesa-stable
Suggested-by: Vitaliy Triang3l Kuzmin <triang3l@yandex.ru>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27053>
(cherry picked from commit 43fb43ba2c)
2024-01-16 18:41:34 +00:00
Lucas Stach
9888a95130 etnaviv: disable 64bpp render/sampler formats
Vivante hardware handles 64bpp render targets and samplers in an odd way
by splitting the buffer and using a pair of texture samplers or a pair
of MRT outputs to access those resources. This isn't implemented in the
driver right now, so we should not advertise support for those formats.

CC: mesa-stable
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <cgmeiner@igalia.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26982>
(cherry picked from commit e481c1269c)
2024-01-16 18:41:33 +00:00
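Until the split-buffer scheme is implemented, the practical fix is to stop reporting such formats as supported. A hedged sketch of the kind of check involved (util_format_get_blocksizebits() is an existing Mesa helper; the surrounding function is illustrative):

    #include <stdbool.h>
    #include "util/format/u_format.h"

    static bool
    format_supported_sketch(enum pipe_format fmt)
    {
       /* 64bpp render targets and samplers would need the pair-of-surfaces
        * scheme described above, which the driver does not implement yet */
       if (util_format_get_blocksizebits(fmt) == 64)
          return false;
       /* ... remaining per-format checks ... */
       return true;
    }
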
Eric Engestrom
05ff891088 .pick_status.json: Update to ff84aef116 2024-01-16 18:41:30 +00:00
Tapani Pälli
fc4180339c anv: check for wa 16013994831 in emit_so_memcpy_end
We are toggling preemption on/off during streamout; this also happens
on gfx12 platforms, not just dg2.

Cc: mesa-stable
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27002>
(cherry picked from commit 36f428f1de)
2024-01-15 21:57:32 +00:00
Vinson Lee
b39ee4d766 intel/disasm: Remove duplicate variable reg_file
Fix defects reported by Coverity Scan.

Evaluation order violation (EVALUATION_ORDER)
write_write_typo: In reg_file = reg_file = brw_inst_dpas_3src_dst_reg_file(devinfo, inst),
reg_file is written twice with the same value.

Fixes: 1c92dad5cb ("intel/disasm: Disassembly support for DPAS")
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27056>
(cherry picked from commit 73835874a8)
2024-01-15 21:57:31 +00:00
Lionel Landwerlin
5b8984f32f anv: hide vendor ID for The Finals
XeSS workaround.

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10436
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27057>
(cherry picked from commit a34a113059)
2024-01-15 21:57:30 +00:00
Lionel Landwerlin
eb3d73073f intel/aux_map: fix fallback unmapping range on failure
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 7c6faa1efe ("intel/aux_map: introduce ref count of L1 entries")
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27057>
(cherry picked from commit ff6041afdf)
2024-01-15 21:57:29 +00:00
Jesse Natalie
f19b7d8dfc mesa: Consider mesa format in addition to internal format for mip/cube completeness
Prior to 06b526de, the mesa format was used for these completeness checks.
That was to address the case where a *different* internal format selected
the *same* mesa format, and the texture shouldn't be considered compatible.
But this didn't address the case where the *same* internal format selected
a *different* mesa format, e.g. because the type passed to the TexImage
API was different.

An old WGL demo app called TexFilter.exe tries to redefine a mipped RGBA16
texture as RGBA8. This incorrect logic caused Mesa to try to copy the RGBA16
data from the smaller mips into the newly created RGBA8 data, because it
thought that the texture was still mip-complete, despite the format changing.

Cc: mesa-stable
Reviewed-By: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27023>
(cherry picked from commit 4cb9c77e8e)
2024-01-15 21:57:28 +00:00
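In other words, the completeness check has to compare both the GL internal format and the mesa format chosen for each image. A hedged sketch (field names follow gl_texture_image in main/mtypes.h, but treat the function as illustrative):

    #include "main/mtypes.h"

    /* An image only counts toward mip/cube completeness if both the
     * user-visible internal format and the chosen hardware format match
     * the base level's. */
    static bool
    image_format_matches(const struct gl_texture_image *base,
                         const struct gl_texture_image *img)
    {
       return img->InternalFormat == base->InternalFormat &&
              img->TexFormat == base->TexFormat;
    }
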
José Roberto de Souza
04ffe4771e anv: Fix PAT entry for userptr in integrated GPUs
Fixes: 060439bdf0 ("anv: Add ANV_BO_ALLOC_IMPORTED")
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27040>
(cherry picked from commit 49fe060b5f)
2024-01-15 21:57:27 +00:00
Yiwei Zhang
0ebdd39d85 venus: populate oom from ring submit alloc failures
ring_seqno_valid indicates a successful ring cmd submission, and can be
used to avoid invalid reply decoding due to failed submit alloc.
Otherwise, the garbled VkResult would be misreported as an initialization
failure instead of oom.

Below cts failure is fixed:
dEQP-VK.api.device_init.create_instance_device_intentional_alloc_fail.basic

Fixes: ec131c6e55 ("venus: use instance allocator for ring allocs")
Signed-off-by: Yiwei Zhang <zzyiwei@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27026>
(cherry picked from commit ecd50e70d4)
2024-01-15 21:57:24 +00:00
Matt Turner
fcd78c5281 util/tests: Disable half-float NaN test on hppa/old-mips
Bug: https://bugs.gentoo.org/908079
Fixes: 067023dce2 ("util: Add some unit tests of the half-float conversions.")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26991>
(cherry picked from commit 5b7c733902)
2024-01-15 21:56:38 +00:00
Matt Turner
97ebcff41c util: Add DETECT_ARCH_HPPA macro
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26991>
(cherry picked from commit 0540c9de44)
2024-01-15 21:56:37 +00:00
Pierre-Eric Pelloux-Prayer
6febac5c96 Revert "ci/radeonsi: disable VA-API testing on raven"
This reverts commit 9017852de4.

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: David Heidelberg <david.heidelberg@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26947>
(cherry picked from commit e2f39e8aca)
2024-01-15 21:56:36 +00:00
Pierre-Eric Pelloux-Prayer
ab960ee0bf radeonsi: compute epitch when modifying surf_pitch
In the linear case with no mipmaps, addrlib sets epitch to surf_pitch - 1,
so let's do the same thing here.

The change in si_descriptors.c looks like it's papering over a bug but I
couldn't find any other changes that wouldn't break at least one use case.

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10375
Fixes: 115b61e51f ("ac/surface: don't oversize surf_size")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26947>
(cherry picked from commit 4e76c4ecb4)
2024-01-15 21:56:10 +00:00
Tatsuyuki Ishi
fc11cbb37e radv: Recompute max_waves after postprocessing RT config
The max waves for RT prolog need to be recalculated after merging the
resource usage of all shaders invoked from it.

Note that there is no need to panic, as the info was only used to
calculate maximum scratch size and with the RT prolog being low
footprint, this likely only caused overestimation rather than
underestimation.

Fixes: 533ec9843e ("radv: Precompute shader max_waves.")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26998>
(cherry picked from commit 63827751e1)
2024-01-15 21:56:09 +00:00
Mike Blumenkrantz
1f5604ed45 zink: fix separate shader patch variable location adjustment
in spirv, these start at location 0, not location 32

fixes #10414

Fixes: d9942442f2 ("zink: handle patch variable locations for separate shaders better")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26981>
(cherry picked from commit 565ee4fafc)
2024-01-15 21:56:07 +00:00
Lionel Landwerlin
cc677d7c30 anv: fix disabled Wa_14017076903/18022508906
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: d0669f3ede ("intel/dev: switch defect identifiers to use lineage numbers")
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27008>
(cherry picked from commit 695b4a2992)
2024-01-15 21:56:05 +00:00
Eric Engestrom
f575e2b9f1 ci: make sure we evaluate the python-test rules first
Fixes: 2c9fdaa830 ("ci: fix python-test dependency error on merge requests")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26984>
(cherry picked from commit f298a0e709)
2024-01-15 21:56:02 +00:00
Timur Kristóf
3753919715 radv: Correctly select SDMA support for PRIME blit.
Cc: mesa-stable
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10317
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27015>
(cherry picked from commit 436b89e838)
2024-01-15 21:56:00 +00:00
Pavel Ondračka
757192b046 r300: fix reusing of color varying slots for generic ones
This was broken when I added texcoord support: the problem is that we
failed to properly count the number of used fs inputs and thus failed
to make the proper decision on when to reuse the color varying slot.
Also fix the error messages, which were incorrect after the rewrite as
well. This fixes a bunch of piglits.

Fixes: d4b8e8a481

Signed-off-by: Pavel Ondračka <pavel.ondracka@gmail.com>
Reviewed-by: Filip Gawin <filip.gawin@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/27003>
(cherry picked from commit 53c17d85ab)
2024-01-15 21:55:56 +00:00
Mike Blumenkrantz
02b5a2348d lavapipe: fix devenv icd filename
fixes #10408

cc: mesa-stable

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26985>
(cherry picked from commit 465e26dd98)
2024-01-15 21:55:52 +00:00
Mike Blumenkrantz
3c36933195 lavapipe: use pushconstants2 for dgc
Fixes: ec656e1984 ("lavapipe: maint6")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26977>
(cherry picked from commit bf729063c3)
2024-01-15 21:55:51 +00:00
Mike Blumenkrantz
ae5c0e6600 vk/cmdbuf: add back deleted maint6 workgraph bits
this otherwise breaks workgraph support in lavapipe

Fixes: ec656e1984 ("lavapipe: maint6")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26977>
(cherry picked from commit b6bfa73dc7)
2024-01-15 21:22:38 +00:00
Eric Engestrom
f1064107e9 .pick_status.json: Mark 0557f0d59c as denominated 2024-01-15 09:44:39 +00:00
Eric Engestrom
6b4f639474 .pick_status.json: Update to 4fe5f06d40 2024-01-15 09:43:41 +00:00
Eric Engestrom
26a96af808 VERSION: bump for 24.0.0-rc1 2024-01-11 14:19:21 +00:00
8757 changed files with 776459 additions and 1409501 deletions

View File

@@ -2,8 +2,6 @@
# enforcement in the CI.
src/gallium/drivers/i915
src/gallium/drivers/r300/compiler/*
src/gallium/targets/teflon/**/*
src/amd/vulkan/**/*
src/amd/compiler/**/*
src/egl/**/*

View File

@@ -31,7 +31,7 @@ indent_size = 3
[*.patch]
trim_trailing_whitespace = false
[{meson.build,meson.options}]
[{meson.build,meson_options.txt}]
indent_style = space
indent_size = 2

View File

@@ -65,15 +65,3 @@ c7bf3b69ebc8f2252dbf724a4de638e6bb2ac402
# ir3: Reformat source with clang-format
177138d8cb0b4f6a42ef0a1f8593e14d79f17c54
# ir3: reformat after refactoring in previous commit
8ae5b27ee0331a739d14b42e67586784d6840388
# ir3: don't use deprecated NIR_PASS_V anymore
2fedc82c0cc9d3fb2e54707b57941b79553b640c
# ir3: reformat after previous commit
7210054db8cfb445a8ccdeacfdcfecccf44fa266
# freedreno/a6xx: The great register renaming
7fd99c88b9cd5c0c8c1cb3e92383acac5cb8220b

View File

@@ -42,7 +42,7 @@ jobs:
[binaries]
llvm-config = '/usr/local/opt/llvm/bin/llvm-config'
EOL
$MESON_EXEC . build --native-file=native_config -Dmoltenvk-dir=$(brew --prefix molten-vk) -Dbuild-tests=true -Dgallium-drivers=swrast,zink -Dglx=${{ matrix.glx_option }}
$MESON_EXEC . build --native-file=native_config -Dmoltenvk-dir=$(brew --prefix molten-vk) -Dbuild-tests=true -Dosmesa=true -Dgallium-drivers=swrast,zink -Dglx=${{ matrix.glx_option }}
- name: Build
run: $MESON_EXEC compile -C build
- name: Test

.gitignore vendored
View File

@@ -1,7 +1,5 @@
.cache
.vscode*
*.pyc
*.pyo
*.out
/build
.venv/

View File

@@ -30,91 +30,56 @@ workflow:
# do not duplicate pipelines on merge pipelines
- if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS && $CI_PIPELINE_SOURCE == "push"
when: never
# Tag pipelines are disabled as it's too late to run all the tests by
# then, the release has been made based on the staging pipelines results
- if: $CI_COMMIT_TAG
when: never
# Merge pipeline
# merge pipeline
- if: &is-merge-attempt $GITLAB_USER_LOGIN == "marge-bot" && $CI_PIPELINE_SOURCE == "merge_request_event"
variables:
KERNEL_IMAGE_BASE: https://${S3_HOST}/mesa-lava/${KERNEL_REPO}/${KERNEL_TAG}
MESA_CI_PERFORMANCE_ENABLED: 1
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64: priority:high
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64_KVM: priority:high-kvm
FDO_RUNNER_JOB_PRIORITY_TAG_AARCH64: priority:high-aarch64
CI_TRON_JOB_PRIORITY_TAG: "" # Empty tags are ignored by gitlab
JOB_PRIORITY: 75
# fast-fail in merge pipelines: stop early if we get this many unexpected fails/crashes
DEQP_RUNNER_MAX_FAILS: 40
# Post-merge pipeline
VALVE_INFRA_VANGOGH_JOB_PRIORITY: "" # Empty tags are ignored by gitlab
# post-merge pipeline
- if: &is-post-merge $GITLAB_USER_LOGIN == "marge-bot" && $CI_PIPELINE_SOURCE == "push"
variables:
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64: priority:high
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64_KVM: priority:high-kvm
FDO_RUNNER_JOB_PRIORITY_TAG_AARCH64: priority:high-aarch64
# Pre-merge pipeline (because merge pipelines are already caught above)
- if: &is-merge-request $CI_PIPELINE_SOURCE == "merge_request_event"
# Push to a branch on a fork
- if: &is-push-to-fork $CI_PROJECT_NAMESPACE != "mesa" && $CI_PIPELINE_SOURCE == "push"
# a pipeline running within the upstream project
- if: &is-upstream-pipeline $CI_PROJECT_PATH == $FDO_UPSTREAM_REPO
# an MR pipeline running within the upstream project, usually true for
# those with the Developer role or above
- if: &is-upstream-mr-pipeline $CI_PROJECT_PATH == $FDO_UPSTREAM_REPO && $CI_PIPELINE_SOURCE == "merge_request_event"
# Nightly pipeline
# nightly pipeline
- if: &is-scheduled-pipeline $CI_PIPELINE_SOURCE == "schedule"
variables:
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64: priority:low
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64_KVM: priority:low-kvm
FDO_RUNNER_JOB_PRIORITY_TAG_AARCH64: priority:low-aarch64
JOB_PRIORITY: 45
# (some) nightly builds perform LTO, so they take much longer than the
# short timeout allowed in other pipelines.
# Note: 0 = infinity = gitlab's job `timeout:` applies, which is 1h
BUILD_JOB_TIMEOUT_OVERRIDE: 0
# Pipeline for direct pushes to the default branch that bypassed the CI
- if: &is-push-to-upstream-default-branch $CI_PROJECT_NAMESPACE == "mesa" && $CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
KERNEL_IMAGE_BASE: https://${S3_HOST}/mesa-lava/${KERNEL_REPO}/${KERNEL_TAG}
JOB_PRIORITY: 50
VALVE_INFRA_VANGOGH_JOB_PRIORITY: priority:low
# pipeline for direct pushes that bypassed the CI
- if: &is-direct-push $CI_PROJECT_NAMESPACE == "mesa" && $CI_PIPELINE_SOURCE == "push" && $GITLAB_USER_LOGIN != "marge-bot"
variables:
JOB_PRIORITY: 70
# Pipeline for direct pushes from release maintainer
- if: &is-push-to-upstream-staging-branch $CI_PROJECT_NAMESPACE == "mesa" && $CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_REF_NAME =~ /^staging\//
KERNEL_IMAGE_BASE: https://${S3_HOST}/mesa-lava/${KERNEL_REPO}/${KERNEL_TAG}
JOB_PRIORITY: 40
VALVE_INFRA_VANGOGH_JOB_PRIORITY: priority:low
# pre-merge or fork pipeline
- if: $FORCE_KERNEL_TAG != null
variables:
JOB_PRIORITY: 70
KERNEL_IMAGE_BASE: https://${S3_HOST}/mesa-lava/${KERNEL_REPO}/${FORCE_KERNEL_TAG}
JOB_PRIORITY: 50
VALVE_INFRA_VANGOGH_JOB_PRIORITY: priority:low
- if: $FORCE_KERNEL_TAG == null
variables:
KERNEL_IMAGE_BASE: https://${S3_HOST}/mesa-lava/${KERNEL_REPO}/${KERNEL_TAG}
JOB_PRIORITY: 50
VALVE_INFRA_VANGOGH_JOB_PRIORITY: priority:low
variables:
FDO_UPSTREAM_REPO: mesa/mesa
MESA_TEMPLATES_COMMIT: &ci-templates-commit c6aeb16f86e32525fa630fb99c66c4f3e62fc3cb
MESA_TEMPLATES_COMMIT: &ci-templates-commit d5aa3941aa03c2f716595116354fb81eb8012acb
CI_PRE_CLONE_SCRIPT: |-
set -o xtrace
curl --silent --location --fail --retry-connrefused --retry 3 --retry-delay 10 \
${CI_PROJECT_URL}/-/raw/${CI_COMMIT_SHA}/.gitlab-ci/download-git-cache.sh | bash
wget -q -O download-git-cache.sh ${CI_PROJECT_URL}/-/raw/${CI_COMMIT_SHA}/.gitlab-ci/download-git-cache.sh
bash download-git-cache.sh
rm download-git-cache.sh
set +o xtrace
S3_JWT_FILE: /s3_jwt
S3_JWT_FILE_SCRIPT: |-
echo -n '${S3_JWT}' > '${S3_JWT_FILE}' &&
S3_JWT_FILE_SCRIPT= &&
unset CI_JOB_JWT S3_JWT # Unsetting vulnerable env variables
CI_JOB_JWT_FILE: /minio_jwt
S3_HOST: s3.freedesktop.org
# This bucket is used to fetch ANDROID prebuilts and images
S3_ANDROID_BUCKET: mesa-rootfs
# This bucket is used to fetch the kernel image
S3_KERNEL_BUCKET: mesa-rootfs
# Bucket for git cache
S3_GITCACHE_BUCKET: git-cache
# Bucket for the pipeline artifacts pushed to S3
S3_ARTIFACTS_BUCKET: artifacts
# Buckets for traces
S3_TRACIE_RESULTS_BUCKET: mesa-tracie-results
S3_TRACIE_PUBLIC_BUCKET: mesa-tracie-public
S3_TRACIE_PRIVATE_BUCKET: mesa-tracie-private
# Base path used for various artifacts
S3_BASE_PATH: "${S3_HOST}/${S3_KERNEL_BUCKET}"
# per-pipeline artifact storage on MinIO
PIPELINE_ARTIFACTS_BASE: ${S3_HOST}/${S3_ARTIFACTS_BUCKET}/${CI_PROJECT_PATH}/${CI_PIPELINE_ID}
PIPELINE_ARTIFACTS_BASE: ${S3_HOST}/artifacts/${CI_PROJECT_PATH}/${CI_PIPELINE_ID}
# per-job artifact storage on MinIO
JOB_ARTIFACTS_BASE: ${PIPELINE_ARTIFACTS_BASE}/${CI_JOB_ID}
# reference images stored for traces
PIGLIT_REPLAY_REFERENCE_IMAGES_BASE: "${S3_HOST}/${S3_TRACIE_RESULTS_BUCKET}/$FDO_UPSTREAM_REPO"
PIGLIT_REPLAY_REFERENCE_IMAGES_BASE: "${S3_HOST}/mesa-tracie-results/$FDO_UPSTREAM_REPO"
# For individual CI farm status see .ci-farms folder
# Disable farm with `git mv .ci-farms{,-disabled}/$farm_name`
# Re-enable farm with `git mv .ci-farms{-disabled,}/$farm_name`
@@ -122,42 +87,27 @@ variables:
ARTIFACTS_BASE_URL: https://${CI_PROJECT_ROOT_NAMESPACE}.${CI_PAGES_DOMAIN}/-/${CI_PROJECT_NAME}/-/jobs/${CI_JOB_ID}/artifacts
# Python scripts for structured logger
PYTHONPATH: "$PYTHONPATH:$CI_PROJECT_DIR/install"
# No point in continuing once the device is lost
MESA_VK_ABORT_ON_DEVICE_LOSS: 1
# Avoid the wall of "Unsupported SPIR-V capability" warnings in CI job log, hiding away useful output
MESA_SPIRV_LOG_LEVEL: error
# Default priority for non-merge pipelines
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64: "" # Empty tags are ignored by gitlab
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64_KVM: kvm
FDO_RUNNER_JOB_PRIORITY_TAG_AARCH64: aarch64
CI_TRON_JOB_PRIORITY_TAG: ci-tron:priority:low
JOB_PRIORITY: 50
DATA_STORAGE_PATH: data_storage
KERNEL_IMAGE_BASE: "https://$S3_HOST/$S3_KERNEL_BUCKET/$KERNEL_REPO/$KERNEL_TAG"
# Mesa-specific variables that shouldn't be forwarded to DUTs and crosvm
CI_EXCLUDE_ENV_VAR_REGEX: 'SCRIPTS_DIR|RESULTS_DIR'
CI_TRON_JOB_TEMPLATE_PROJECT: &ci-tron-template-project gfx-ci/ci-tron
CI_TRON_JOB_TEMPLATE_COMMIT: &ci-tron-template-commit ddadab0006e43f1365cd30779f565b444a6538ee
CI_TRON_JOB_TEMPLATE_PROJECT_URL: "https://gitlab.freedesktop.org/$CI_TRON_JOB_TEMPLATE_PROJECT"
default:
timeout: 1m # catch any jobs which don't specify a timeout
id_tokens:
S3_JWT:
aud: https://s3.freedesktop.org
before_script:
- >
export SCRIPTS_DIR=$(mktemp -d) &&
curl -L -s --retry 4 -f --retry-all-errors --retry-delay 60 -O --output-dir "${SCRIPTS_DIR}" "${CI_PROJECT_URL}/-/raw/${CI_COMMIT_SHA}/.gitlab-ci/setup-test-env.sh" &&
. ${SCRIPTS_DIR}/setup-test-env.sh
- eval "$S3_JWT_FILE_SCRIPT"
. ${SCRIPTS_DIR}/setup-test-env.sh &&
echo -n "${CI_JOB_JWT}" > "${CI_JOB_JWT_FILE}" &&
unset CI_JOB_JWT # Unsetting vulnerable env variables
after_script:
# Work around https://gitlab.com/gitlab-org/gitlab/-/issues/20338
- find -name '*.log' -exec mv {} {}.txt \;
- >
set +x
test -e "${CI_JOB_JWT_FILE}" &&
export CI_JOB_JWT="$(<${CI_JOB_JWT_FILE})" &&
rm "${CI_JOB_JWT_FILE}"
# Retry when job fails. Failed jobs can be found in the Mesa CI Daily Reports:
# https://gitlab.freedesktop.org/mesa/mesa/-/issues/?sort=created_date&state=opened&label_name%5B%5D=CI%20daily
retry:
@@ -177,59 +127,66 @@ stages:
- sanity
- container
- git-archive
- build-for-tests
- build-only
- build-x86_64
- build-misc
- code-validation
- amd
- amd-nightly
- intel
- intel-nightly
- nouveau
- nouveau-nightly
- arm
- arm-nightly
- broadcom
- broadcom-nightly
- freedreno
- freedreno-nightly
- etnaviv
- etnaviv-nightly
- software-renderer
- software-renderer-nightly
- layered-backends
- layered-backends-nightly
- performance
- deploy
include:
- project: 'freedesktop/ci-templates'
ref: 16bc29078de5e0a067ff84a1a199a3760d3b3811
file:
- '/templates/ci-fairy.yml'
- project: 'freedesktop/ci-templates'
ref: *ci-templates-commit
file:
- '/templates/alpine.yml'
- '/templates/debian.yml'
- '/templates/fedora.yml'
- '/templates/ci-fairy.yml'
- project: *ci-tron-template-project
ref: *ci-tron-template-commit
file: '/.gitlab-ci/dut.yml'
- local: '.gitlab-ci/image-tags.yml'
- local: '.gitlab-ci/bare-metal/gitlab-ci.yml'
- local: '.gitlab-ci/ci-tron/gitlab-ci.yml'
- local: '.gitlab-ci/lava/gitlab-ci.yml'
- local: '.gitlab-ci/lava/lava-gitlab-ci.yml'
- local: '.gitlab-ci/container/gitlab-ci.yml'
- local: '.gitlab-ci/build/gitlab-ci.yml'
- local: '.gitlab-ci/test/gitlab-ci.yml'
- local: '.gitlab-ci/farm-rules.yml'
- local: '.gitlab-ci/test-source-dep.yml'
- local: 'docs/gitlab-ci.yml'
- local: 'src/**/ci/gitlab-ci.yml'
- local: 'src/amd/ci/gitlab-ci.yml'
- local: 'src/broadcom/ci/gitlab-ci.yml'
- local: 'src/etnaviv/ci/gitlab-ci.yml'
- local: 'src/freedreno/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/crocus/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/d3d12/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/i915/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/r300/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/lima/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/llvmpipe/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/nouveau/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/softpipe/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/virgl/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/zink/ci/gitlab-ci.yml'
- local: 'src/gallium/frontends/lavapipe/ci/gitlab-ci.yml'
- local: 'src/intel/ci/gitlab-ci.yml'
- local: 'src/microsoft/ci/gitlab-ci.yml'
- local: 'src/panfrost/ci/gitlab-ci.yml'
- local: 'src/virtio/ci/gitlab-ci.yml'
# Rules applied to every job in the pipeline
.common-rules:
rules:
- if: *is-push-to-fork
when: manual
# YAML anchors for rule conditions
# --------------------------------
.rules-anchors:
# Pre-merge pipeline
- &is-pre-merge '$CI_PIPELINE_SOURCE == "merge_request_event"'
.never-post-merge-rules:
rules:
@@ -237,61 +194,8 @@ include:
when: never
# Note: make sure the branches in this list are the same as in
# `.build-only-delayed-rules` below.
.container-rules:
.container+build-rules:
rules:
- !reference [.common-rules, rules]
# Run when re-enabling a disabled farm, but not when disabling it
- !reference [.disable-farm-mr-rules, rules]
# Never run immediately after merging, as we just ran everything
- !reference [.never-post-merge-rules, rules]
# Only rebuild containers in merge pipelines if any tags have been
# changed, else we'll just use the already-built containers
- if: *is-merge-attempt
changes: &image_tags_path
- .gitlab-ci/image-tags.yml
when: on_success
# Skip everything for pre-merge and merge pipelines which don't change
# anything in the build; we only do this for marge-bot and not user
# pipelines in a MR, because we might still need to run it to copy the
# container into the user's namespace.
- if: *is-merge-attempt
when: never
# Any MR pipeline which changes image-tags.yml needs to be able to
# rebuild the containers
- if: *is-merge-request
changes: *image_tags_path
when: manual
# ... if the MR pipeline runs as mesa/mesa and does not need a container
# rebuild, we can skip it
- if: *is-upstream-mr-pipeline
when: never
# ... however for MRs running inside the user namespace, we may need to
# run these jobs to copy the container images from upstream
- if: *is-merge-request
when: manual
# Build everything after someone bypassed the CI
- if: *is-push-to-upstream-default-branch
when: on_success
# Build everything when pushing to staging branches
- if: *is-push-to-upstream-staging-branch
when: on_success
# Scheduled pipelines reuse already-built containers
- if: *is-scheduled-pipeline
when: never
# Any other pipeline in the upstream should reuse already-built containers
- if: *is-upstream-pipeline
when: never
# Allow building everything in fork pipelines, but build nothing unless
# manually triggered
- when: manual
# Note: make sure the branches in this list are the same as in
# `.build-only-delayed-rules` below.
.build-rules:
rules:
- !reference [.common-rules, rules]
# Run when re-enabling a disabled farm, but not when disabling it
- !reference [.disable-farm-mr-rules, rules]
# Never run immediately after merging, as we just ran everything
@@ -304,7 +208,6 @@ include:
- bin/git_sha1_gen.py
- bin/install_megadrivers.py
- bin/symbols-check.py
- bin/ci/**/*
# GitLab CI
- .gitlab-ci.yml
- .gitlab-ci/**/*
@@ -322,20 +225,18 @@ include:
- src/**/*
when: on_success
# Same as above, but for pre-merge pipelines
- if: *is-merge-request
changes: *all_paths
- if: *is-pre-merge
changes:
*all_paths
when: manual
# Skip everything for pre-merge and merge pipelines which don't change
# anything in the build
- if: *is-merge-attempt
when: never
- if: *is-merge-request
- if: *is-pre-merge
when: never
# Build everything after someone bypassed the CI
- if: *is-push-to-upstream-default-branch
when: on_success
# Build everything when pushing to staging branches
- if: *is-push-to-upstream-staging-branch
- if: *is-direct-push
when: on_success
# Build everything in scheduled pipelines
- if: *is-scheduled-pipeline
@@ -344,58 +245,48 @@ include:
# manually triggered
- when: manual
# Repeat of the above but with `when: on_success` replaced with
# `when: delayed` + `start_in:`, for build-only jobs.
# Note: make sure the branches in this list are the same as in
# `.container+build-rules` above.
.build-only-delayed-rules:
.ci-deqp-artifacts:
artifacts:
name: "mesa_${CI_JOB_NAME}"
when: always
untracked: false
paths:
# Watch out! Artifacts are relative to the build dir.
# https://gitlab.com/gitlab-org/gitlab-ce/commit/8788fb925706cad594adf6917a6c5f6587dd1521
- artifacts
- _build/meson-logs/*.txt
- _build/meson-logs/strace
# Git archive
make git archive:
extends:
- .fdo.ci-fairy
stage: git-archive
rules:
- !reference [.common-rules, rules]
# Run when re-enabling a disabled farm, but not when disabling it
- !reference [.disable-farm-mr-rules, rules]
# Never run immediately after merging, as we just ran everything
- !reference [.never-post-merge-rules, rules]
# Build everything in merge pipelines, if any files affecting the pipeline
# were changed
- if: *is-merge-attempt
changes: *all_paths
when: delayed
start_in: &build-delay 5 minutes
# Same as above, but for pre-merge pipelines
- if: *is-merge-request
changes: *all_paths
when: manual
# Skip everything for pre-merge and merge pipelines which don't change
# anything in the build
- if: *is-merge-attempt
when: never
- if: *is-merge-request
when: never
# Build everything after someone bypassed the CI
- if: *is-push-to-upstream-default-branch
when: delayed
start_in: *build-delay
# Build everything when pushing to staging branches
- if: *is-push-to-upstream-staging-branch
when: delayed
start_in: *build-delay
# Build everything in scheduled pipelines
- if: *is-scheduled-pipeline
when: delayed
start_in: *build-delay
# Allow building everything in fork pipelines, but build nothing unless
# manually triggered
- when: manual
- !reference [.scheduled_pipeline-rules, rules]
# ensure we are running on packet
tags:
- packet.net
script:
# Compactify the .git directory
- git gc --aggressive
# Download & cache the perfetto subproject as well.
- rm -rf subprojects/perfetto ; mkdir -p subprojects/perfetto && curl https://android.googlesource.com/platform/external/perfetto/+archive/$(grep 'revision =' subprojects/perfetto.wrap | cut -d ' ' -f3).tar.gz | tar zxf - -C subprojects/perfetto
# compress the current folder
- tar -cvzf ../$CI_PROJECT_NAME.tar.gz .
- ci-fairy s3cp --token-file "${CI_JOB_JWT_FILE}" ../$CI_PROJECT_NAME.tar.gz https://$S3_HOST/git-cache/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/$CI_PROJECT_NAME.tar.gz
# Sanity checks of MR settings and commit logs
sanity:
extends:
- .fdo.ci-fairy
stage: sanity
tags:
- placeholder-job
rules:
- if: *is-merge-request
- if: *is-pre-merge
when: on_success
- when: never
variables:
@@ -403,35 +294,18 @@ sanity:
script:
# ci-fairy check-commits --junit-xml=check-commits.xml
- ci-fairy check-merge-request --require-allow-collaboration --junit-xml=check-merge-request.xml
- |
set -eu
image_tags=(
ALPINE_X86_64_BUILD_TAG
ALPINE_X86_64_LAVA_SSH_TAG
ALPINE_X86_64_LAVA_TRIGGER_TAG
DEBIAN_BASE_TAG
DEBIAN_BUILD_TAG
DEBIAN_TEST_ANDROID_TAG
DEBIAN_TEST_GL_TAG
DEBIAN_TEST_VK_TAG
FEDORA_X86_64_BUILD_TAG
FIRMWARE_TAG
KERNEL_TAG
PKG_REPO_REV
WINDOWS_X64_BUILD_TAG
WINDOWS_X64_MSVC_TAG
WINDOWS_X64_TEST_TAG
)
for var in "${image_tags[@]}"
do
if [ "$(echo -n "${!var}" | wc -c)" -gt 20 ]
then
echo "$var is too long; please make sure it is at most 20 chars."
exit 1
fi
done
artifacts:
when: on_failure
reports:
junit: check-*.xml
tags:
- placeholder-job
# Jobs that need to pass before spending hardware resources on further testing
.required-for-hardware-jobs:
needs:
- job: clang-format
optional: true
- job: rustfmt
optional: true

View File

@@ -1,33 +0,0 @@
[flake8]
exclude = .venv*,
# PEP 8 Style Guide limits line length to 79 characters
max-line-length = 159
ignore =
# continuation line under-indented for hanging indent
E121
# continuation line over-indented for hanging indent
E126,
# continuation line under-indented for visual indent
E128,
# whitespace before ':'
E203,
# missing whitespace around arithmetic operator
E226,
# missing whitespace after ','
E231,
# expected 2 blank lines, found 1
E302,
# too many blank lines
E303,
# imported but unused
F401,
# f-string is missing placeholders
F541,
# local variable assigned to but never used
F841,
# line break before binary operator
W503,
# line break after binary operator
W504,

View File

@@ -9,9 +9,6 @@
# submission, so skip it in the regular CI.
dEQP-VK.api.driver_properties.conformance_version
# Exclude this test which might fail when a new extension is implemented.
dEQP-VK.info.device_extensions
# These are tremendously slow (pushing toward a minute), and aren't
# reliable to be run in parallel with other tests due to CPU-side timing.
dEQP-GLES[0-9]*.functional.flush_finish.*
@@ -80,36 +77,3 @@ wayland-dEQP-EGL.functional.render.multi_context.gles2_gles3.other
wayland-dEQP-EGL.functional.render.multi_thread.gles2.other
wayland-dEQP-EGL.functional.render.multi_thread.gles3.other
wayland-dEQP-EGL.functional.render.multi_thread.gles2_gles3.other
# These test the loader more than the implementation and are broken because the
# Vulkan loader in Debian is too old
dEQP-VK.api.get_device_proc_addr.non_enabled
dEQP-VK.api.version_check.unavailable_entry_points
# These tests are flaking too much recently on almost all drivers, so better skip them until the cause is identified
spec@arb_program_interface_query@arb_program_interface_query-getprogramresourceindex
spec@arb_program_interface_query@arb_program_interface_query-getprogramresourceindex@'vs_input2[1][0]' on GL_PROGRAM_INPUT
# These tests attempt to read from the front buffer after a swap. They are skipped
# on both X11 and gbm, but for different reasons:
#
# On X11: Given that we run piglit tests in parallel in Mesa CI, and don't have a
# compositor running, the frontbuffer reads may end up with undefined results from
# windows overlapping us.
# Piglit does mark these tests as not to be run in parallel, but deqp-runner
# doesn't respect that. We need to extend deqp-runner to allow some tests to be
# marked as single-threaded and run after the rayon loop if we want to support
# them.
# Other front-buffer access tests like fbo-sys-blit, fbo-sys-sub-blit, or
# fcc-front-buffer-distraction don't appear here, because the DRI3 fake-front
# handling should be holding the pixels drawn by the test even if we happen to fail
# GL's window system pixel occlusion test.
# Note that glx skips don't appear here, they're in all-skips.txt (in case someone
# sets PIGLIT_PLATFORM=gbm to mostly use gbm, but still has an X server running).
#
# On gbm: gbm does not support reading the front buffer after a swapbuffers, and
# that's intentional. Don't bother running these tests when PIGLIT_PLATFORM=gbm.
# Note that this doesn't include tests like fbo-sys-blit, which draw/read front
# but don't swap.
spec@!opengl 1.0@gl-1.0-swapbuffers-behavior
spec@!opengl 1.1@read-front

View File

@@ -1,74 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC1091 # paths only become valid at runtime
. "${SCRIPTS_DIR}/setup-test-env.sh"
ci_tag_test_time_check "ANDROID_CTS_TAG"
export PATH=/android-tools/build-tools:/android-cts/jdk/bin/:$PATH
export JAVA_HOME=/android-cts/jdk
# Wait for the appops service to show up
while [ "$($ADB shell dumpsys -l | grep appops)" = "" ] ; do sleep 1; done
SKIP_FILE="$INSTALL/${GPU_VERSION}-android-cts-skips.txt"
EXCLUDE_FILTERS=""
if [ -e "$SKIP_FILE" ]; then
EXCLUDE_FILTERS="$(grep -v -E "(^#|^[[:space:]]*$)" "$SKIP_FILE" | sed -e 's/\s*$//g' -e 's/.*/--exclude-filter "\0" /g')"
fi
INCLUDE_FILE="$INSTALL/${GPU_VERSION}-android-cts-include.txt"
if [ ! -e "$INCLUDE_FILE" ]; then
set +x
echo "ERROR: No include file (${GPU_VERSION}-android-cts-include.txt) found."
echo "This means that we are running the all available CTS modules."
echo "But the time to run it might be too long, please provide an include file instead."
exit 1
fi
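# Likewise, turn every entry in the include file into an --include-filter argument.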
INCLUDE_FILTERS="$(grep -v -E "(^#|^[[:space:]]*$)" "$INCLUDE_FILE" | sed -e 's/\s*$//g' -e 's/.*/--include-filter "\0" /g')"
if [ -n "${ANDROID_CTS_PREPARE_COMMAND:-}" ]; then
eval "$ANDROID_CTS_PREPARE_COMMAND"
fi
uncollapsed_section_switch android_cts_test "Android CTS: testing"
set +e
eval "/android-cts/tools/cts-tradefed" run commandAndExit cts-dev \
$INCLUDE_FILTERS \
$EXCLUDE_FILTERS
SUMMARY_FILE=/android-cts/results/latest/invocation_summary.txt
# Parse a line like `x/y modules completed` to check that all modules completed
COMPLETED_MODULES=$(sed -n -e '/modules completed/s/^\([0-9]\+\)\/\([0-9]\+\) .*$/\1/p' "$SUMMARY_FILE")
AVAILABLE_MODULES=$(sed -n -e '/modules completed/s/^\([0-9]\+\)\/\([0-9]\+\) .*$/\2/p' "$SUMMARY_FILE")
[ "$COMPLETED_MODULES" = "$AVAILABLE_MODULES" ]
# shellcheck disable=SC2319 # False-positive see https://github.com/koalaman/shellcheck/issues/2937#issuecomment-2660891195
MODULES_FAILED=$?
# Parse a line like `FAILED : x` to check that no tests failed
[ "$(grep "^FAILED" "$SUMMARY_FILE" | tr -d ' ' | cut -d ':' -f 2)" = "0" ]
# shellcheck disable=SC2319 # False-positive see https://github.com/koalaman/shellcheck/issues/2937#issuecomment-2660891195
TESTS_FAILED=$?
[ "$MODULES_FAILED" = "0" ] && [ "$TESTS_FAILED" = "0" ]
# shellcheck disable=SC2034 # EXIT_CODE is used by the script that sources this one
EXIT_CODE=$?
set -e
mkdir "${RESULTS_DIR}/android-cts"
cp -r "/android-cts/results/latest/" "${RESULTS_DIR}/android-cts/results"
cp -r "/android-cts/logs/latest/" "${RESULTS_DIR}/android-cts/logs"
if [ -n "${ARTIFACTS_BASE_URL:-}" ]; then
echo "============================================"
echo "Review the Android CTS test results at: ${ARTIFACTS_BASE_URL}/results/android-cts/results/test_result.html"
fi
section_end android_cts_test

View File

@@ -1,110 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC1091 # paths only become valid at runtime
. "${SCRIPTS_DIR}/setup-test-env.sh"
# deqp
$ADB shell mkdir -p /data/deqp
$ADB push /deqp-gles/modules/egl/deqp-egl-android /data/deqp
$ADB push /deqp-gles/mustpass/egl-main.txt.zst /data/deqp
$ADB push /deqp-gles/modules/gles2/deqp-gles2 /data/deqp
$ADB push /deqp-gles/mustpass/gles2-main.txt.zst /data/deqp
$ADB push /deqp-vk/external/vulkancts/modules/vulkan/* /data/deqp
$ADB push /deqp-vk/mustpass/vk-main.txt.zst /data/deqp
$ADB push /deqp-tools/* /data/deqp
$ADB push /deqp-runner/deqp-runner /data/deqp
$ADB push "$INSTALL/all-skips.txt" /data/deqp
$ADB push "$INSTALL/android-skips.txt" /data/deqp
$ADB push "$INSTALL/angle-skips.txt" /data/deqp
if [ -e "$INSTALL/$GPU_VERSION-flakes.txt" ]; then
$ADB push "$INSTALL/$GPU_VERSION-flakes.txt" /data/deqp
fi
if [ -e "$INSTALL/$GPU_VERSION-fails.txt" ]; then
$ADB push "$INSTALL/$GPU_VERSION-fails.txt" /data/deqp
fi
if [ -e "$INSTALL/$GPU_VERSION-skips.txt" ]; then
$ADB push "$INSTALL/$GPU_VERSION-skips.txt" /data/deqp
fi
$ADB push "$INSTALL/deqp-$DEQP_SUITE.toml" /data/deqp
BASELINE=""
if [ -e "$INSTALL/$GPU_VERSION-fails.txt" ]; then
BASELINE="--baseline /data/deqp/$GPU_VERSION-fails.txt"
fi
# Default to an empty known flakes file if it doesn't exist.
$ADB shell "touch /data/deqp/$GPU_VERSION-flakes.txt"
DEQP_SKIPS=""
if [ -e "$INSTALL/$GPU_VERSION-skips.txt" ]; then
DEQP_SKIPS="$DEQP_SKIPS /data/deqp/$GPU_VERSION-skips.txt"
fi
if [ -n "${ANGLE_TAG:-}" ]; then
DEQP_SKIPS="$DEQP_SKIPS /data/deqp/angle-skips.txt"
fi
AOSP_RESULTS=/data/deqp/results
uncollapsed_section_switch cuttlefish_test "cuttlefish: testing"
# Print the detailed version with the list of backports and local patches
{ set +x; } 2>/dev/null
for api in vk-main vk gl gles; do
deqp_version_log=/deqp-$api/deqp-$api-version
if [ -r "$deqp_version_log" ]; then
cat "$deqp_version_log"
fi
done
set -x
set +e
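# Run deqp-runner's suite mode on the device; the deqp-$DEQP_SUITE.toml file selects which dEQP binaries and caselists to use.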
$ADB shell "mkdir ${AOSP_RESULTS}; cd ${AOSP_RESULTS}/..; \
XDG_CACHE_HOME=/data/local/tmp \
./deqp-runner \
suite \
--suite /data/deqp/deqp-$DEQP_SUITE.toml \
--output $AOSP_RESULTS \
--skips /data/deqp/all-skips.txt $DEQP_SKIPS \
--flakes /data/deqp/$GPU_VERSION-flakes.txt \
--testlog-to-xml /data/deqp/testlog-to-xml \
--shader-cache-dir /data/local/tmp \
--fraction-start ${CI_NODE_INDEX:-1} \
--fraction $(( CI_NODE_TOTAL * ${DEQP_FRACTION:-1})) \
--jobs ${FDO_CI_CONCURRENT:-4} \
$BASELINE \
${DEQP_RUNNER_MAX_FAILS:+--max-fails \"$DEQP_RUNNER_MAX_FAILS\"} \
"
# shellcheck disable=SC2034 # EXIT_CODE is used by the script that sources this one
EXIT_CODE=$?
set -e
section_switch cuttlefish_results "cuttlefish: gathering the results"
$ADB pull "$AOSP_RESULTS/." "$RESULTS_DIR"
# Remove all but the first 50 individual XML files uploaded as artifacts, to
# save fd.o space when you break everything.
find $RESULTS_DIR -name \*.xml | \
sort -n |
sed -n '1,+49!p' | \
xargs rm -f
# If any QPA XMLs are there, then include the XSL/CSS in our artifacts.
find $RESULTS_DIR -name \*.xml \
-exec cp /deqp-tools/testlog.css /deqp-tools/testlog.xsl "$RESULTS_DIR/" ";" \
-quit
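# Summarize the failures as JUnit XML so GitLab can show them in the job report.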
$ADB shell "cd ${AOSP_RESULTS}/..; \
./deqp-runner junit \
--testsuite dEQP \
--results $AOSP_RESULTS/failures.csv \
--output $AOSP_RESULTS/junit.xml \
--limit 50 \
--template \"See $ARTIFACTS_BASE_URL/results/{{testcase}}.xml\""
$ADB pull "$AOSP_RESULTS/junit.xml" "$RESULTS_DIR"
section_end cuttlefish_results

View File

@@ -1,150 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC1091 # paths only become valid at runtime
set -uex
# Set default ADB command if not set already
: "${ADB:=adb}"
$ADB wait-for-device root
sleep 1
# overlay
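# Mount a tmpfs-backed overlayfs on top of the normally read-only partitions so the freshly built libraries can be copied in below.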
REMOUNT_PATHS="/vendor"
if [ "$ANDROID_VERSION" -ge 15 ]; then
REMOUNT_PATHS="$REMOUNT_PATHS /system"
fi
OV_TMPFS="/data/overlay-remount"
$ADB shell mkdir -p "$OV_TMPFS"
$ADB shell mount -t tmpfs none "$OV_TMPFS"
for path in $REMOUNT_PATHS; do
$ADB shell mkdir -p "${OV_TMPFS}${path}-upper"
$ADB shell mkdir -p "${OV_TMPFS}${path}-work"
opts="lowerdir=${path},upperdir=${OV_TMPFS}${path}-upper,workdir=${OV_TMPFS}${path}-work"
$ADB shell mount -t overlay -o "$opts" none ${path}
done
$ADB shell setenforce 0
$ADB push /android-tools/eglinfo /data
$ADB push /android-tools/vulkaninfo /data
get_gles_runtime_renderer() {
while [ "$($ADB shell XDG_CACHE_HOME=/data/local/tmp /data/eglinfo | grep 'OpenGL ES profile renderer':)" = "" ] ; do sleep 1; done
$ADB shell XDG_CACHE_HOME=/data/local/tmp /data/eglinfo | grep 'OpenGL ES profile renderer' | head -1
}
get_gles_runtime_version() {
while [ "$($ADB shell XDG_CACHE_HOME=/data/local/tmp /data/eglinfo | grep 'OpenGL ES profile version:')" = "" ] ; do sleep 1; done
$ADB shell XDG_CACHE_HOME=/data/local/tmp /data/eglinfo | grep 'OpenGL ES profile version:' | head -1
}
get_vk_runtime_device_name() {
$ADB shell XDG_CACHE_HOME=/data/local/tmp /data/vulkaninfo | grep deviceName | head -1
}
get_vk_runtime_version() {
$ADB shell XDG_CACHE_HOME=/data/local/tmp /data/vulkaninfo | grep driverInfo | head -1
}
# Check what GLES & VK implementation is used before uploading the new libraries
get_gles_runtime_renderer
get_gles_runtime_version
get_vk_runtime_device_name
get_vk_runtime_version
# replace libraries
$ADB shell rm -f /vendor/lib64/libgallium_dri.so*
$ADB shell rm -f /vendor/lib64/egl/libEGL_mesa.so*
$ADB shell rm -f /vendor/lib64/egl/libGLESv1_CM_mesa.so*
$ADB shell rm -f /vendor/lib64/egl/libGLESv2_mesa.so*
$ADB push "$INSTALL/lib/libgallium_dri.so" /vendor/lib64/libgallium_dri.so
$ADB push "$INSTALL/lib/libEGL.so" /vendor/lib64/egl/libEGL_mesa.so
$ADB push "$INSTALL/lib/libGLESv1_CM.so" /vendor/lib64/egl/libGLESv1_CM_mesa.so
$ADB push "$INSTALL/lib/libGLESv2.so" /vendor/lib64/egl/libGLESv2_mesa.so
$ADB shell rm -f /vendor/lib64/hw/vulkan.lvp.so*
$ADB shell rm -f /vendor/lib64/hw/vulkan.virtio.so*
$ADB shell rm -f /vendor/lib64/hw/vulkan.intel.so*
$ADB push "$INSTALL/lib/libvulkan_lvp.so" /vendor/lib64/hw/vulkan.lvp.so
$ADB push "$INSTALL/lib/libvulkan_virtio.so" /vendor/lib64/hw/vulkan.virtio.so
$ADB push "$INSTALL/lib/libvulkan_intel.so" /vendor/lib64/hw/vulkan.intel.so
$ADB shell rm -f /vendor/lib64/egl/libEGL_emulation.so*
$ADB shell rm -f /vendor/lib64/egl/libGLESv1_CM_emulation.so*
$ADB shell rm -f /vendor/lib64/egl/libGLESv2_emulation.so*
if [ -n "${ANGLE_TAG:-}" ]; then
ANGLE_DEST_PATH=/vendor/lib64/egl
if [ "$ANDROID_VERSION" -ge 15 ]; then
ANGLE_DEST_PATH=/system/lib64
fi
$ADB shell rm -f "$ANGLE_DEST_PATH/libEGL_angle.so"*
$ADB shell rm -f "$ANGLE_DEST_PATH/libGLESv1_CM_angle.so"*
$ADB shell rm -f "$ANGLE_DEST_PATH/libGLESv2_angle.so"*
$ADB push /angle/libEGL_angle.so "$ANGLE_DEST_PATH/libEGL_angle.so"
$ADB push /angle/libGLESv1_CM_angle.so "$ANGLE_DEST_PATH/libGLESv1_CM_angle.so"
$ADB push /angle/libGLESv2_angle.so "$ANGLE_DEST_PATH/libGLESv2_angle.so"
fi
# Check what GLES & VK implementation is used after uploading the new libraries
MESA_BUILD_VERSION=$(cat "$INSTALL/VERSION")
get_gles_runtime_renderer
GLES_RUNTIME_VERSION="$(get_gles_runtime_version)"
get_vk_runtime_device_name
VK_RUNTIME_VERSION="$(get_vk_runtime_version)"
if [ -n "${ANGLE_TAG:-}" ]; then
# Note: we are injecting the ANGLE libs too, so we need to check if the
# new ANGLE libs are being used.
ANGLE_HASH=$(head -c 12 /angle/version)
if ! printf "%s" "$GLES_RUNTIME_VERSION" | grep --quiet "${ANGLE_HASH}"; then
echo "Fatal: Android is loading a wrong version of the ANGLE libs: ${ANGLE_HASH}" 1>&2
exit 1
fi
fi
if ! printf "%s" "$VK_RUNTIME_VERSION" | grep -Fq -- "${MESA_BUILD_VERSION}"; then
echo "Fatal: Android is loading a wrong version of the Mesa3D Vulkan libs: ${VK_RUNTIME_VERSION}" 1>&2
exit 1
fi
get_surfaceflinger_pid() {
while [ "$($ADB shell dumpsys -l | grep 'SurfaceFlinger$')" = "" ] ; do sleep 1; done
$ADB shell ps -A | grep -i surfaceflinger | tr -s ' ' | cut -d ' ' -f 2
}
OLD_SF_PID=$(get_surfaceflinger_pid)
# restart Android shell, so that services use the new libraries
$ADB shell stop
$ADB shell start
# Check that SurfaceFlinger restarted, to ensure that new libraries have been picked up
NEW_SF_PID=$(get_surfaceflinger_pid)
if [ "$OLD_SF_PID" == "$NEW_SF_PID" ]; then
echo "Fatal: check that SurfaceFlinger restarted" 1>&2
exit 1
fi
if [ -n "${ANDROID_CTS_TAG:-}" ]; then
# The script sets EXIT_CODE
. "$(dirname "$0")/android-cts-runner.sh"
else
# The script sets EXIT_CODE
. "$(dirname "$0")/android-deqp-runner.sh"
fi
exit $EXIT_CODE

View File

@@ -1,11 +0,0 @@
# Skip these tests when running fractional dEQP batches, as the AHB tests are expected
# to be handled separately in a non-fractional run within the deqp-runner suite.
dEQP-VK.api.external.memory.android_hardware_buffer.*
# Skip all WSI tests: the DEQP_ANDROID_EXE build used can't create native windows, as
# only APKs support window creation on Android.
dEQP-VK.image.swapchain_mutable.*
dEQP-VK.wsi.*
# These tests cause hangs and need to be skipped for now.
dEQP-VK.synchronization*

View File

@@ -0,0 +1,63 @@
version: 1
# Rules to match for a machine to qualify
target:
id: '{{ ci_runner_id }}'
timeouts:
first_console_activity: # This limits the time it can take to receive the first console log
minutes: {{ timeout_first_minutes }}
retries: {{ timeout_first_retries }}
console_activity: # Reset every time we receive a message from the logs
minutes: {{ timeout_minutes }}
retries: {{ timeout_retries }}
boot_cycle:
minutes: {{ timeout_boot_minutes }}
retries: {{ timeout_boot_retries }}
overall: # Maximum time the job can take, not overrideable by the "continue" deployment
minutes: {{ timeout_overall_minutes }}
retries: 0
# no retries possible here
console_patterns:
session_end:
regex: >-
{{ session_end_regex }}
{% if session_reboot_regex %}
session_reboot:
regex: >-
{{ session_reboot_regex }}
{% endif %}
job_success:
regex: >-
{{ job_success_regex }}
job_warn:
regex: >-
{{ job_warn_regex }}
# Environment to deploy
deployment:
# Initial boot
start:
kernel:
url: '{{ kernel_url }}'
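# Note: the quoted-brace escapes below keep placeholders such as machine_id and local_tty_device un-rendered by this template pass, so they can be expanded later when the job is deployed.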
cmdline: >
SALAD.machine_id={{ '{{' }} machine_id }}
console={{ '{{' }} local_tty_device }},115200 earlyprintk=vga,keep
loglevel={{ log_level }} no_hash_pointers
b2c.service="--privileged --tls-verify=false --pid=host docker://{{ '{{' }} fdo_proxy_registry }}/gfx-ci/ci-tron/telegraf:latest" b2c.hostname=dut-{{ '{{' }} machine.full_name }}
b2c.container="-ti --tls-verify=false docker://{{ '{{' }} fdo_proxy_registry }}/gfx-ci/ci-tron/machine-registration:latest check"
b2c.ntp_peer=10.42.0.1 b2c.pipefail b2c.cache_device=auto b2c.poweroff_delay={{ poweroff_delay }}
b2c.minio="gateway,{{ '{{' }} minio_url }},{{ '{{' }} job_bucket_access_key }},{{ '{{' }} job_bucket_secret_key }}"
b2c.volume="{{ '{{' }} job_bucket }}-results,mirror=gateway/{{ '{{' }} job_bucket }},pull_on=pipeline_start,push_on=changes,overwrite{% for excl in job_volume_exclusions %},exclude={{ excl }}{% endfor %},remove,expiration=pipeline_end,preserve"
{% for volume in volumes %}
b2c.volume={{ volume }}
{% endfor %}
b2c.container="-v {{ '{{' }} job_bucket }}-results:{{ working_dir }} -w {{ working_dir }} {% for mount_volume in mount_volumes %} -v {{ mount_volume }}{% endfor %} --tls-verify=false docker://{{ local_container }} {{ container_cmd }}"
{% if kernel_cmdline_extras is defined %}
{{ kernel_cmdline_extras }}
{% endif %}
initramfs:
url: '{{ initramfs_url }}'

.gitlab-ci/b2c/generate_b2c.py Executable file
View File

@@ -0,0 +1,55 @@
#!/usr/bin/env python3
# Copyright © 2022 Valve Corporation
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
from jinja2 import Environment, FileSystemLoader
from os import environ, path
# Pass all the environment variables prefixed by B2C_
values = {
key.removeprefix("B2C_").lower(): environ[key]
for key in environ if key.startswith("B2C_")
}
env = Environment(loader=FileSystemLoader(path.dirname(values['job_template'])),
trim_blocks=True, lstrip_blocks=True)
template = env.get_template(path.basename(values['job_template']))
values['ci_job_id'] = environ['CI_JOB_ID']
values['ci_runner_id'] = environ['CI_RUNNER_ID']
values['job_volume_exclusions'] = [excl for excl in values['job_volume_exclusions'].split(",") if excl]
values['working_dir'] = environ['CI_PROJECT_DIR']
# Use the gateway's pull-through registry caches to reduce load on fd.o.
values['local_container'] = environ['IMAGE_UNDER_TEST']
values['local_container'] = values['local_container'].replace(
'registry.freedesktop.org',
'{{ fdo_proxy_registry }}'
)
if 'kernel_cmdline_extras' not in values:
values['kernel_cmdline_extras'] = ''
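# Render the template and write the result next to the script, named after the template file without its extension.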
with open(path.splitext(path.basename(values['job_template']))[0], "w") as f:
f.write(template.render(values))

View File

@@ -5,8 +5,6 @@
# First stage: very basic setup to bring up network and /dev etc
/init-stage1.sh
export CURRENT_SECTION=dut_boot
# Second stage: run jobs
test $? -eq 0 && /init-stage2.sh

View File

@@ -0,0 +1,17 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must supply the PoE Interface to power down"
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must supply the PoE Switch host"
exit 1
fi
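# SNMP OID for this PoE port's power state on the switch (Cisco enterprise MIB tree); writing 4 (SNMP_OFF below) powers the port off.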
SNMP_KEY="1.3.6.1.4.1.9.9.402.1.2.1.1.1.$BM_POE_INTERFACE"
SNMP_OFF="i 4"
snmpset -v2c -r 3 -t 30 -cmesaci "$BM_POE_ADDRESS" "$SNMP_KEY" $SNMP_OFF

View File

@@ -0,0 +1,22 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must supply the PoE Interface to power up"
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must supply the PoE Switch host"
exit 1
fi
set -ex
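# Power-cycle the port: write the 'off' value (4) to the PoE control OID, wait a few seconds, then write 'on' (1).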
SNMP_KEY="1.3.6.1.4.1.9.9.402.1.2.1.1.1.$BM_POE_INTERFACE"
SNMP_ON="i 1"
SNMP_OFF="i 4"
snmpset -v2c -r 3 -t 10 -cmesaci "$BM_POE_ADDRESS" "$SNMP_KEY" $SNMP_OFF
sleep 3s
snmpset -v2c -r 3 -t 10 -cmesaci "$BM_POE_ADDRESS" "$SNMP_KEY" $SNMP_ON

View File

@@ -0,0 +1,124 @@
#!/bin/bash
# shellcheck disable=SC1091 # The relative paths in this file only become valid at runtime.
# shellcheck disable=SC2034
# shellcheck disable=SC2086 # we want word splitting
# Boot script for Chrome OS devices attached to a servo debug connector, using
# NFS and TFTP to boot.
# We're run from the root of the repo, so make helper vars for our paths
BM=$CI_PROJECT_DIR/install/bare-metal
CI_COMMON=$CI_PROJECT_DIR/install/common
CI_INSTALL=$CI_PROJECT_DIR/install
# Runner config checks
if [ -z "$BM_SERIAL" ]; then
echo "Must set BM_SERIAL in your gitlab-runner config.toml [[runners]] environment"
echo "This is the CPU serial device."
exit 1
fi
if [ -z "$BM_SERIAL_EC" ]; then
echo "Must set BM_SERIAL in your gitlab-runner config.toml [[runners]] environment"
echo "This is the EC serial device for controlling board power"
exit 1
fi
if [ ! -d /nfs ]; then
echo "NFS rootfs directory needs to be mounted at /nfs by the gitlab runner"
exit 1
fi
if [ ! -d /tftp ]; then
echo "TFTP directory for this board needs to be mounted at /tftp by the gitlab runner"
exit 1
fi
# job config checks
if [ -z "$BM_KERNEL" ]; then
echo "Must set BM_KERNEL to your board's kernel FIT image"
exit 1
fi
if [ -z "$BM_ROOTFS" ]; then
echo "Must set BM_ROOTFS to your board's rootfs directory in the job's variables"
exit 1
fi
if [ -z "$BM_CMDLINE" ]; then
echo "Must set BM_CMDLINE to your board's kernel command line arguments"
exit 1
fi
set -ex
# Clear out any previous run's artifacts.
rm -rf results/
mkdir -p results
# Create the rootfs in the NFS directory. rm to make sure it's in a pristine
# state, since it's volume-mounted on the host.
rsync -a --delete $BM_ROOTFS/ /nfs/
mkdir -p /nfs/results
. $BM/rootfs-setup.sh /nfs
# Put the kernel/dtb image and the boot command line in the tftp directory for
# the board to find. For normal Mesa development, we build the kernel and
# store it in the docker container that this script is running in.
#
# However, container builds are expensive, so when you're hacking on the
# kernel, it's nice to be able to skip the half-hour container build, plus
# moving that container to the runner. So, if BM_KERNEL is a URL, fetch it
# instead of looking in the container. Note that the kernel build should be
# the output of:
#
# make Image.lzma
#
# mkimage \
# -A arm64 \
# -f auto \
# -C lzma \
# -d arch/arm64/boot/Image.lzma \
# -b arch/arm64/boot/dts/qcom/sdm845-cheza-r3.dtb \
# cheza-image.img
rm -rf /tftp/*
if echo "$BM_KERNEL" | grep -q http; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
$BM_KERNEL -o /tftp/vmlinuz
elif [ -n "${FORCE_KERNEL_TAG}" ]; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${BM_KERNEL}" -o /tftp/vmlinuz
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/modules.tar.zst" -o modules.tar.zst
tar --keep-directory-symlink --zstd -xf modules.tar.zst -C "/nfs/"
rm modules.tar.zst &
else
cp /baremetal-files/"$BM_KERNEL" /tftp/vmlinuz
fi
echo "$BM_CMDLINE" > /tftp/cmdline
set +e
STRUCTURED_LOG_FILE=job_detail.json
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update dut_job_type "${DEVICE_TYPE}"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update farm "${FARM}"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --create-dut-job dut_name "${CI_RUNNER_DESCRIPTION}"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update-dut-time submit "${CI_JOB_STARTED_AT}"
python3 $BM/cros_servo_run.py \
--cpu $BM_SERIAL \
--ec $BM_SERIAL_EC \
--test-timeout ${TEST_PHASE_TIMEOUT:-20}
ret=$?
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close-dut-job
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close
set -e
# Bring artifacts back from the NFS dir to the build dir where gitlab-runner
# will look for them.
cp -Rp /nfs/results/. results/
if [ -f "${STRUCTURED_LOG_FILE}" ]; then
cp -p ${STRUCTURED_LOG_FILE} results/
echo "Structured log file is available at https://${CI_PROJECT_ROOT_NAMESPACE}.pages.freedesktop.org/-/${CI_PROJECT_NAME}/-/jobs/${CI_JOB_ID}/artifacts/results/${STRUCTURED_LOG_FILE}"
fi
exit $ret

View File

@@ -0,0 +1,168 @@
#!/usr/bin/env python3
#
# Copyright © 2020 Google LLC
# SPDX-License-Identifier: MIT
import argparse
import re
import sys
from custom_logger import CustomLogger
from serial_buffer import SerialBuffer
class CrosServoRun:
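# Drives a Chrome OS DUT over its CPU and EC serial consoles: reboots it via the EC, requests network boot, then watches the kernel console for a result.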
def __init__(self, cpu, ec, test_timeout, logger):
self.cpu_ser = SerialBuffer(
cpu, "results/serial.txt", "R SERIAL-CPU> ")
# Merge the EC serial into the cpu_ser's line stream so that we can
# effectively poll on both at the same time and not have to worry about threading.
self.ec_ser = SerialBuffer(
ec, "results/serial-ec.txt", "R SERIAL-EC> ", line_queue=self.cpu_ser.line_queue)
self.test_timeout = test_timeout
self.logger = logger
def close(self):
self.ec_ser.close()
self.cpu_ser.close()
def ec_write(self, s):
print("W SERIAL-EC> %s" % s)
self.ec_ser.serial.write(s.encode())
def cpu_write(self, s):
print("W SERIAL-CPU> %s" % s)
self.cpu_ser.serial.write(s.encode())
def print_error(self, message):
RED = '\033[0;31m'
NO_COLOR = '\033[0m'
print(RED + message + NO_COLOR)
self.logger.update_status_fail(message)
def run(self):
# Flush any partial commands in the EC's prompt, then ask for a reboot.
self.ec_write("\n")
self.ec_write("reboot\n")
bootloader_done = False
self.logger.create_job_phase("boot")
tftp_failures = 0
# This is emitted right when the bootloader pauses to check for input.
# Emit a ^N character to request network boot, because we don't have a
# direct-to-netboot firmware on cheza.
for line in self.cpu_ser.lines(timeout=120, phase="bootloader"):
if re.search("load_archive: loading locale_en.bin", line):
self.cpu_write("\016")
bootloader_done = True
break
# The Cheza firmware seems to occasionally get stuck looping in
# this error state during TFTP booting, possibly based on amount of
# network traffic around it, but it'll usually recover after a
# reboot. Currently mostly visible on google-freedreno-cheza-14.
if re.search("R8152: Bulk read error 0xffffffbf", line):
tftp_failures += 1
if tftp_failures >= 10:
self.print_error(
"Detected intermittent tftp failure, restarting run.")
return 1
# If the board has a netboot firmware and we made it to booting the
# kernel, proceed to processing of the test run.
if re.search("Booting Linux", line):
bootloader_done = True
break
# The Cheza boards have issues with failing to bring up power to
# the system sometimes, possibly dependent on ambient temperature
# in the farm.
if re.search("POWER_GOOD not seen in time", line):
self.print_error(
"Detected intermittent poweron failure, abandoning run.")
return 1
if not bootloader_done:
self.print_error("Failed to make it through bootloader, abandoning run.")
return 1
self.logger.create_job_phase("test")
for line in self.cpu_ser.lines(timeout=self.test_timeout, phase="test"):
if re.search("---. end Kernel panic", line):
return 1
# There are very infrequent bus errors during power management transitions
# on cheza, which we don't expect to be the case on future boards.
if re.search("Kernel panic - not syncing: Asynchronous SError Interrupt", line):
self.print_error(
"Detected cheza power management bus error, abandoning run.")
return 1
# If the network device dies, it's probably not graphics's fault, just try again.
if re.search("NETDEV WATCHDOG", line):
self.print_error(
"Detected network device failure, abandoning run.")
return 1
# These HFI response errors started appearing with the introduction
# of piglit runs. CosmicPenguin says:
#
# "message ID 106 isn't a thing, so likely what happened is that we
# got confused when parsing the HFI queue. If it happened on only
# one run, then memory corruption could be a possible clue"
#
# Given that it seems to trigger randomly near a GPU fault and then
# break many tests after that, just restart the whole run.
if re.search("a6xx_hfi_send_msg.*Unexpected message id .* on the response queue", line):
self.print_error(
"Detected cheza power management bus error, abandoning run.")
return 1
if re.search("coreboot.*bootblock starting", line):
self.print_error(
"Detected spontaneous reboot, abandoning run.")
return 1
if re.search("arm-smmu 5040000.iommu: TLB sync timed out -- SMMU may be deadlocked", line):
self.print_error("Detected cheza MMU fail, abandoning run.")
return 1
result = re.search("hwci: mesa: (\S*)", line)
if result:
if result.group(1) == "pass":
self.logger.update_dut_job("status", "pass")
return 0
else:
self.logger.update_status_fail("test fail")
return 1
self.print_error(
"Reached the end of the CPU serial log without finding a result")
return 1
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--cpu', type=str,
help='CPU Serial device', required=True)
parser.add_argument(
'--ec', type=str, help='EC Serial device', required=True)
parser.add_argument(
'--test-timeout', type=int, help='Test phase timeout (minutes)', required=True)
args = parser.parse_args()
logger = CustomLogger("job_detail.json")
logger.update_dut_time("start", None)
servo = CrosServoRun(args.cpu, args.ec, args.test_timeout * 60, logger)
retval = servo.run()
# power down the CPU on the device
servo.ec_write("power off\n")
logger.update_dut_time("end", None)
servo.close()
sys.exit(retval)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,10 @@
#!/bin/bash
relay=$1
if [ -z "$relay" ]; then
echo "Must supply a relay arg"
exit 1
fi
"$CI_PROJECT_DIR"/install/bare-metal/eth008-power-relay.py "$ETH_HOST" "$ETH_PORT" off "$relay"

View File

@@ -0,0 +1,28 @@
#!/usr/bin/python3
import sys
import socket
host = sys.argv[1]
port = sys.argv[2]
mode = sys.argv[3]
relay = sys.argv[4]
msg = None
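# Devantech ETH008-style command: one byte selects the action (0x20 = relay on, 0x21 = relay off), then the relay number, then a trailing byte (0x00 here); the board replies with a single status byte.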
if mode == "on":
msg = b'\x20'
else:
msg = b'\x21'
msg += int(relay).to_bytes(1, 'big')
msg += b'\x00'
c = socket.create_connection((host, int(port)))
c.sendall(msg)
data = c.recv(1)
c.close()
if data[0] == 0x01:
print('Command failed')
sys.exit(1)

View File

@@ -0,0 +1,12 @@
#!/bin/bash
relay=$1
if [ -z "$relay" ]; then
echo "Must supply a relay arg"
exit 1
fi
"$CI_PROJECT_DIR"/install/bare-metal/eth008-power-relay.py "$ETH_HOST" "$ETH_PORT" off "$relay"
sleep 5
"$CI_PROJECT_DIR"/install/bare-metal/eth008-power-relay.py "$ETH_HOST" "$ETH_PORT" on "$relay"

View File

@@ -0,0 +1,31 @@
#!/bin/bash
set -e
STRINGS=$(mktemp)
ERRORS=$(mktemp)
trap 'rm $STRINGS; rm $ERRORS;' EXIT
FILE=$1
shift 1
while getopts "f:e:" opt; do
case $opt in
f) echo "$OPTARG" >> "$STRINGS";;
e) echo "$OPTARG" >> "$STRINGS" ; echo "$OPTARG" >> "$ERRORS";;
*) exit
esac
done
shift $((OPTIND -1))
echo "Waiting for $FILE to say one of following strings"
cat "$STRINGS"
while ! grep -E -wf "$STRINGS" "$FILE"; do
sleep 2
done
if grep -E -wf "$ERRORS" "$FILE"; then
exit 1
fi

.gitlab-ci/bare-metal/fastboot.sh Executable file
View File

@@ -0,0 +1,167 @@
#!/bin/bash
# shellcheck disable=SC1091 # The relative paths in this file only become valid at runtime.
# shellcheck disable=SC2034
# shellcheck disable=SC2086 # we want word splitting
. "$SCRIPTS_DIR"/setup-test-env.sh
BM=$CI_PROJECT_DIR/install/bare-metal
CI_COMMON=$CI_PROJECT_DIR/install/common
if [ -z "$BM_SERIAL" ] && [ -z "$BM_SERIAL_SCRIPT" ]; then
echo "Must set BM_SERIAL OR BM_SERIAL_SCRIPT in your gitlab-runner config.toml [[runners]] environment"
echo "BM_SERIAL:"
echo " This is the serial device to talk to for waiting for fastboot to be ready and logging from the kernel."
echo "BM_SERIAL_SCRIPT:"
echo " This is a shell script to talk to for waiting for fastboot to be ready and logging from the kernel."
exit 1
fi
if [ -z "$BM_POWERUP" ]; then
echo "Must set BM_POWERUP in your gitlab-runner config.toml [[runners]] environment"
echo "This is a shell script that should reset the device and begin its boot sequence"
echo "such that it pauses at fastboot."
exit 1
fi
if [ -z "$BM_POWERDOWN" ]; then
echo "Must set BM_POWERDOWN in your gitlab-runner config.toml [[runners]] environment"
echo "This is a shell script that should power off the device."
exit 1
fi
if [ -z "$BM_FASTBOOT_SERIAL" ]; then
echo "Must set BM_FASTBOOT_SERIAL in your gitlab-runner config.toml [[runners]] environment"
echo "This must be the a stable-across-resets fastboot serial number."
exit 1
fi
if [ -z "$BM_KERNEL" ]; then
echo "Must set BM_KERNEL to your board's kernel vmlinuz or Image.gz in the job's variables:"
exit 1
fi
if [ -z "$BM_DTB" ]; then
echo "Must set BM_DTB to your board's DTB file in the job's variables:"
exit 1
fi
if [ -z "$BM_ROOTFS" ]; then
echo "Must set BM_ROOTFS to your board's rootfs directory in the job's variables:"
exit 1
fi
if echo $BM_CMDLINE | grep -q "root=/dev/nfs"; then
BM_FASTBOOT_NFSROOT=1
fi
set -ex
# Clear out any previous run's artifacts.
rm -rf results/
mkdir -p results/
if [ -n "$BM_FASTBOOT_NFSROOT" ]; then
# Create the rootfs in the NFS directory. rm to make sure it's in a pristine
# state, since it's volume-mounted on the host.
rsync -a --delete $BM_ROOTFS/ /nfs/
mkdir -p /nfs/results
. $BM/rootfs-setup.sh /nfs
# Root on NFS, no need for an initramfs.
rm -f rootfs.cpio.gz
touch rootfs.cpio
gzip rootfs.cpio
else
# Create the rootfs in a temp dir
rsync -a --delete $BM_ROOTFS/ rootfs/
. $BM/rootfs-setup.sh rootfs
# Finally, pack it up into a cpio rootfs. Skip the vulkan CTS since none of
# these devices use it and it would take up space in the initrd.
if [ -n "$PIGLIT_PROFILES" ]; then
EXCLUDE_FILTER="deqp|arb_gpu_shader5|arb_gpu_shader_fp64|arb_gpu_shader_int64|glsl-4.[0123456]0|arb_tessellation_shader"
else
EXCLUDE_FILTER="piglit|python"
fi
pushd rootfs
find -H . | \
grep -E -v "external/(openglcts|vulkancts|amber|glslang|spirv-tools)" |
grep -E -v "traces-db|apitrace|renderdoc" | \
grep -E -v $EXCLUDE_FILTER | \
cpio -H newc -o | \
xz --check=crc32 -T4 - > $CI_PROJECT_DIR/rootfs.cpio.gz
popd
fi
if echo "$BM_KERNEL $BM_DTB" | grep -q http; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"$BM_KERNEL" -o kernel
# FIXME: modules should be supplied too
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"$BM_DTB" -o dtb
cat kernel dtb > Image.gz-dtb
elif [ -n "${FORCE_KERNEL_TAG}" ]; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${BM_KERNEL}" -o kernel
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/modules.tar.zst" -o modules.tar.zst
if [ -n "$BM_DTB" ]; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${BM_DTB}.dtb" -o dtb
fi
cat kernel dtb > Image.gz-dtb || echo "No DTB available, using pure kernel."
rm kernel
tar --keep-directory-symlink --zstd -xf modules.tar.zst -C "$BM_ROOTFS/"
rm modules.tar.zst &
else
cat /baremetal-files/"$BM_KERNEL" /baremetal-files/"$BM_DTB".dtb > Image.gz-dtb
cp /baremetal-files/"$BM_DTB".dtb dtb
fi
export PATH=$BM:$PATH
mkdir -p artifacts
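# Package the kernel+DTB and the initramfs into an Android boot image that the board can boot via 'fastboot boot'.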
mkbootimg.py \
--kernel Image.gz-dtb \
--ramdisk rootfs.cpio.gz \
--dtb dtb \
--cmdline "$BM_CMDLINE" \
$BM_MKBOOT_PARAMS \
--header_version 2 \
-o artifacts/fastboot.img
rm Image.gz-dtb dtb
# Start background command for talking to serial if we have one.
if [ -n "$BM_SERIAL_SCRIPT" ]; then
$BM_SERIAL_SCRIPT > results/serial-output.txt &
while [ ! -e results/serial-output.txt ]; do
sleep 1
done
fi
set +e
$BM/fastboot_run.py \
--dev="$BM_SERIAL" \
--test-timeout ${TEST_PHASE_TIMEOUT:-20} \
--fbserial="$BM_FASTBOOT_SERIAL" \
--powerup="$BM_POWERUP" \
--powerdown="$BM_POWERDOWN"
ret=$?
set -e
if [ -n "$BM_FASTBOOT_NFSROOT" ]; then
# Bring artifacts back from the NFS dir to the build dir where gitlab-runner
# will look for them.
cp -Rp /nfs/results/. results/
fi
exit $ret

View File

@@ -0,0 +1,159 @@
#!/usr/bin/env python3
#
# Copyright © 2020 Google LLC
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import argparse
import subprocess
import re
from serial_buffer import SerialBuffer
import sys
import threading
class FastbootRun:
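# Powers up the DUT, boots the test image over fastboot, and scans the serial log for known failure patterns or the final 'hwci: mesa:' pass/fail marker.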
def __init__(self, args, test_timeout):
self.powerup = args.powerup
self.ser = SerialBuffer(
args.dev, "results/serial-output.txt", "R SERIAL> ")
self.fastboot = "fastboot boot -s {ser} artifacts/fastboot.img".format(
ser=args.fbserial)
self.test_timeout = test_timeout
def close(self):
self.ser.close()
def print_error(self, message):
RED = '\033[0;31m'
NO_COLOR = '\033[0m'
print(RED + message + NO_COLOR)
def logged_system(self, cmd, timeout=60):
print("Running '{}'".format(cmd))
try:
return subprocess.call(cmd, shell=True, timeout=timeout)
except subprocess.TimeoutExpired:
self.print_error("timeout, abandoning run.")
return 1
def run(self):
if ret := self.logged_system(self.powerup):
return ret
fastboot_ready = False
for line in self.ser.lines(timeout=2 * 60, phase="bootloader"):
if re.search("[Ff]astboot: [Pp]rocessing commands", line) or \
re.search("Listening for fastboot command on", line):
fastboot_ready = True
break
if re.search("data abort", line):
self.print_error(
"Detected crash during boot, abandoning run.")
return 1
if not fastboot_ready:
self.print_error(
"Failed to get to fastboot prompt, abandoning run.")
return 1
if ret := self.logged_system(self.fastboot):
return ret
print_more_lines = -1
for line in self.ser.lines(timeout=self.test_timeout, phase="test"):
if print_more_lines == 0:
return 1
if print_more_lines > 0:
print_more_lines -= 1
if re.search("---. end Kernel panic", line):
return 1
# The db820c boards intermittently reboot. Just restart the run
# if we see a reboot after we got past fastboot.
if re.search("PON REASON", line):
self.print_error(
"Detected spontaneous reboot, abandoning run.")
return 1
# db820c sometimes wedges around iommu fault recovery
if re.search("watchdog: BUG: soft lockup - CPU.* stuck", line):
self.print_error(
"Detected kernel soft lockup, abandoning run.")
return 1
# If the network device dies, it's probably not graphics's fault, just try again.
if re.search("NETDEV WATCHDOG", line):
self.print_error(
"Detected network device failure, abandoning run.")
return 1
# A3xx recovery doesn't quite work. Sometimes the GPU will get
# wedged and recovery will fail (because power can't be reset?)
# This assumes that the jobs are sufficiently well-tested that GPU
# hangs aren't always triggered, so just try again. But print some
# more lines first so that we get better information on the cause
# of the hang. Once a hang happens, it's pretty chatty.
if "[drm:adreno_recover] *ERROR* gpu hw init failed: -22" in line:
self.print_error(
"Detected GPU hang, abandoning run.")
if print_more_lines == -1:
print_more_lines = 30
result = re.search("hwci: mesa: (\S*)", line)
if result:
if result.group(1) == "pass":
return 0
else:
return 1
self.print_error(
"Reached the end of the CPU serial log without finding a result, abandoning run.")
return 1
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
'--dev', type=str, help='Serial device (otherwise reading from serial-output.txt)')
parser.add_argument('--powerup', type=str,
help='shell command for rebooting', required=True)
parser.add_argument('--powerdown', type=str,
help='shell command for powering off', required=True)
parser.add_argument('--fbserial', type=str,
help='fastboot serial number of the board', required=True)
parser.add_argument('--test-timeout', type=int,
help='Test phase timeout (minutes)', required=True)
args = parser.parse_args()
fastboot = FastbootRun(args, args.test_timeout * 60)
retval = fastboot.run()
fastboot.close()
fastboot.logged_system(args.powerdown)
sys.exit(retval)
if __name__ == '__main__':
main()

View File

@@ -1,129 +0,0 @@
.baremetal-test:
extends:
- .test
# Cancel job if a newer commit is pushed to the same branch
interruptible: true
before_script:
- !reference [.download_s3, before_script]
variables:
BM_ROOTFS: /rootfs-${DEBIAN_ARCH}
artifacts:
when: always
name: "${CI_PROJECT_NAME}_${CI_JOB_NAME_SLUG}"
paths:
- results/
- serial*.txt
exclude:
- results/*.shader_cache
reports:
junit: results/junit.xml
# ARM testing of bare-metal boards attached to an x86 gitlab-runner system
.baremetal-test-arm32-gl:
extends:
- .baremetal-test
- .use-debian/baremetal_arm32_test-gl
variables:
DEBIAN_ARCH: armhf
S3_ARTIFACT_NAME: mesa-arm32-default-debugoptimized
needs:
- job: debian/baremetal_arm32_test-gl
optional: true
- job: debian-arm32
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
# ARM64 testing of bare-metal boards attached to an x86 gitlab-runner system
.baremetal-test-arm64-gl:
extends:
- .baremetal-test
- .use-debian/baremetal_arm64_test-gl
variables:
DEBIAN_ARCH: arm64
S3_ARTIFACT_NAME: mesa-arm64-default-debugoptimized
needs:
- job: debian/baremetal_arm64_test-gl
optional: true
- job: debian-arm64
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
# ARM64 testing of bare-metal boards attached to an x86 gitlab-runner system
.baremetal-test-arm64-vk:
extends:
- .baremetal-test
- .use-debian/baremetal_arm64_test-vk
variables:
DEBIAN_ARCH: arm64
S3_ARTIFACT_NAME: mesa-arm64-default-debugoptimized
needs:
- job: debian/baremetal_arm64_test-vk
optional: true
- job: debian-arm64
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
# ARM32/64 testing of bare-metal boards attached to an x86 gitlab-runner system, using an asan mesa build
.baremetal-arm32-asan-test-gl:
variables:
S3_ARTIFACT_NAME: mesa-arm32-asan-debugoptimized
DEQP_FORCE_ASAN: 1
needs:
- job: debian/baremetal_arm32_test-gl
optional: true
- job: debian-arm32-asan
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
.baremetal-arm64-asan-test-gl:
variables:
S3_ARTIFACT_NAME: mesa-arm64-asan-debugoptimized
DEQP_FORCE_ASAN: 1
needs:
- job: debian/baremetal_arm64_test-gl
optional: true
- job: debian-arm64-asan
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
.baremetal-arm64-asan-test-vk:
variables:
S3_ARTIFACT_NAME: mesa-arm64-asan-debugoptimized
DEQP_FORCE_ASAN: 1
needs:
- job: debian/baremetal_arm64_test-vk
optional: true
- job: debian-arm64-asan
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
.baremetal-arm64-ubsan-test-gl:
extends:
- .baremetal-test
- .use-debian/baremetal_arm64_test-gl
variables:
S3_ARTIFACT_NAME: mesa-arm64-ubsan-debugoptimized
needs:
- job: debian/baremetal_arm64_test-gl
optional: true
- job: debian-arm64-ubsan
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
.baremetal-arm64-ubsan-test-vk:
extends:
- .baremetal-test
- .use-debian/baremetal_arm64_test-vk
variables:
S3_ARTIFACT_NAME: mesa-arm64-ubsan-debugoptimized
needs:
- job: debian/baremetal_arm64_test-vk
optional: true
- job: debian-arm64-ubsan
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
.baremetal-deqp-test:
variables:
HWCI_TEST_SCRIPT: "/install/deqp-runner.sh"
FDO_CI_CONCURRENT: 0 # Default to number of CPUs

View File

@@ -0,0 +1,10 @@
#!/bin/bash
relay=$1
if [ -z "$relay" ]; then
echo "Must supply a relay arg"
exit 1
fi
"$CI_PROJECT_DIR"/install/bare-metal/google-power-relay.py off "$relay"

View File

@@ -0,0 +1,19 @@
#!/usr/bin/python3
import sys
import serial
mode = sys.argv[1]
relay = sys.argv[2]
# our relays are "off" means "board is powered".
mode_swap = {
"on": "off",
"off": "on",
}
mode = mode_swap[mode]
ser = serial.Serial('/dev/ttyACM0', 115200, timeout=2)
command = "relay {} {}\n\r".format(mode, relay)
ser.write(command.encode())
ser.close()

View File

@@ -0,0 +1,12 @@
#!/bin/bash
relay=$1
if [ -z "$relay" ]; then
echo "Must supply a relay arg"
exit 1
fi
"$CI_PROJECT_DIR"/install/bare-metal/google-power-relay.py off "$relay"
sleep 5
"$CI_PROJECT_DIR"/install/bare-metal/google-power-relay.py on "$relay"

View File

@@ -0,0 +1,569 @@
#!/usr/bin/env python3
#
# Copyright 2015, The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Creates the boot image."""
from argparse import (ArgumentParser, ArgumentTypeError,
FileType, RawDescriptionHelpFormatter)
from hashlib import sha1
from os import fstat
from struct import pack
import array
import collections
import os
import re
import subprocess
import tempfile
# Constant and structure definition is in
# system/tools/mkbootimg/include/bootimg/bootimg.h
BOOT_MAGIC = 'ANDROID!'
BOOT_MAGIC_SIZE = 8
BOOT_NAME_SIZE = 16
BOOT_ARGS_SIZE = 512
BOOT_EXTRA_ARGS_SIZE = 1024
BOOT_IMAGE_HEADER_V1_SIZE = 1648
BOOT_IMAGE_HEADER_V2_SIZE = 1660
BOOT_IMAGE_HEADER_V3_SIZE = 1580
BOOT_IMAGE_HEADER_V3_PAGESIZE = 4096
BOOT_IMAGE_HEADER_V4_SIZE = 1584
BOOT_IMAGE_V4_SIGNATURE_SIZE = 4096
VENDOR_BOOT_MAGIC = 'VNDRBOOT'
VENDOR_BOOT_MAGIC_SIZE = 8
VENDOR_BOOT_NAME_SIZE = BOOT_NAME_SIZE
VENDOR_BOOT_ARGS_SIZE = 2048
VENDOR_BOOT_IMAGE_HEADER_V3_SIZE = 2112
VENDOR_BOOT_IMAGE_HEADER_V4_SIZE = 2128
VENDOR_RAMDISK_TYPE_NONE = 0
VENDOR_RAMDISK_TYPE_PLATFORM = 1
VENDOR_RAMDISK_TYPE_RECOVERY = 2
VENDOR_RAMDISK_TYPE_DLKM = 3
VENDOR_RAMDISK_NAME_SIZE = 32
VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE = 16
VENDOR_RAMDISK_TABLE_ENTRY_V4_SIZE = 108
# Names with special meaning, mustn't be specified in --ramdisk_name.
VENDOR_RAMDISK_NAME_BLOCKLIST = {b'default'}
PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT = '--vendor_ramdisk_fragment'
def filesize(f):
if f is None:
return 0
try:
return fstat(f.fileno()).st_size
except OSError:
return 0
def update_sha(sha, f):
if f:
sha.update(f.read())
f.seek(0)
sha.update(pack('I', filesize(f)))
else:
sha.update(pack('I', 0))
def pad_file(f, padding):
pad = (padding - (f.tell() & (padding - 1))) & (padding - 1)
f.write(pack(str(pad) + 'x'))
def get_number_of_pages(image_size, page_size):
"""calculates the number of pages required for the image"""
return (image_size + page_size - 1) // page_size
def get_recovery_dtbo_offset(args):
"""calculates the offset of recovery_dtbo image in the boot image"""
num_header_pages = 1 # header occupies a page
num_kernel_pages = get_number_of_pages(filesize(args.kernel), args.pagesize)
num_ramdisk_pages = get_number_of_pages(filesize(args.ramdisk),
args.pagesize)
num_second_pages = get_number_of_pages(filesize(args.second), args.pagesize)
dtbo_offset = args.pagesize * (num_header_pages + num_kernel_pages +
num_ramdisk_pages + num_second_pages)
return dtbo_offset
def write_header_v3_and_above(args):
if args.header_version > 3:
boot_header_size = BOOT_IMAGE_HEADER_V4_SIZE
else:
boot_header_size = BOOT_IMAGE_HEADER_V3_SIZE
args.output.write(pack(f'{BOOT_MAGIC_SIZE}s', BOOT_MAGIC.encode()))
# kernel size in bytes
args.output.write(pack('I', filesize(args.kernel)))
# ramdisk size in bytes
args.output.write(pack('I', filesize(args.ramdisk)))
# os version and patch level
args.output.write(pack('I', (args.os_version << 11) | args.os_patch_level))
args.output.write(pack('I', boot_header_size))
# reserved
args.output.write(pack('4I', 0, 0, 0, 0))
# version of boot image header
args.output.write(pack('I', args.header_version))
args.output.write(pack(f'{BOOT_ARGS_SIZE + BOOT_EXTRA_ARGS_SIZE}s',
args.cmdline))
if args.header_version >= 4:
# The signature used to verify boot image v4.
args.output.write(pack('I', BOOT_IMAGE_V4_SIGNATURE_SIZE))
pad_file(args.output, BOOT_IMAGE_HEADER_V3_PAGESIZE)
def write_vendor_boot_header(args):
if filesize(args.dtb) == 0:
raise ValueError('DTB image must not be empty.')
if args.header_version > 3:
vendor_ramdisk_size = args.vendor_ramdisk_total_size
vendor_boot_header_size = VENDOR_BOOT_IMAGE_HEADER_V4_SIZE
else:
vendor_ramdisk_size = filesize(args.vendor_ramdisk)
vendor_boot_header_size = VENDOR_BOOT_IMAGE_HEADER_V3_SIZE
args.vendor_boot.write(pack(f'{VENDOR_BOOT_MAGIC_SIZE}s',
VENDOR_BOOT_MAGIC.encode()))
# version of boot image header
args.vendor_boot.write(pack('I', args.header_version))
# flash page size
args.vendor_boot.write(pack('I', args.pagesize))
# kernel physical load address
args.vendor_boot.write(pack('I', args.base + args.kernel_offset))
# ramdisk physical load address
args.vendor_boot.write(pack('I', args.base + args.ramdisk_offset))
# ramdisk size in bytes
args.vendor_boot.write(pack('I', vendor_ramdisk_size))
args.vendor_boot.write(pack(f'{VENDOR_BOOT_ARGS_SIZE}s',
args.vendor_cmdline))
# kernel tags physical load address
args.vendor_boot.write(pack('I', args.base + args.tags_offset))
# asciiz product name
args.vendor_boot.write(pack(f'{VENDOR_BOOT_NAME_SIZE}s', args.board))
# header size in bytes
args.vendor_boot.write(pack('I', vendor_boot_header_size))
# dtb size in bytes
args.vendor_boot.write(pack('I', filesize(args.dtb)))
# dtb physical load address
args.vendor_boot.write(pack('Q', args.base + args.dtb_offset))
if args.header_version > 3:
vendor_ramdisk_table_size = (args.vendor_ramdisk_table_entry_num *
VENDOR_RAMDISK_TABLE_ENTRY_V4_SIZE)
# vendor ramdisk table size in bytes
args.vendor_boot.write(pack('I', vendor_ramdisk_table_size))
# number of vendor ramdisk table entries
args.vendor_boot.write(pack('I', args.vendor_ramdisk_table_entry_num))
# vendor ramdisk table entry size in bytes
args.vendor_boot.write(pack('I', VENDOR_RAMDISK_TABLE_ENTRY_V4_SIZE))
# bootconfig section size in bytes
args.vendor_boot.write(pack('I', filesize(args.vendor_bootconfig)))
pad_file(args.vendor_boot, args.pagesize)
def write_header(args):
if args.header_version > 4:
raise ValueError(
f'Boot header version {args.header_version} not supported')
if args.header_version in {3, 4}:
return write_header_v3_and_above(args)
ramdisk_load_address = ((args.base + args.ramdisk_offset)
if filesize(args.ramdisk) > 0 else 0)
second_load_address = ((args.base + args.second_offset)
if filesize(args.second) > 0 else 0)
args.output.write(pack(f'{BOOT_MAGIC_SIZE}s', BOOT_MAGIC.encode()))
# kernel size in bytes
args.output.write(pack('I', filesize(args.kernel)))
# kernel physical load address
args.output.write(pack('I', args.base + args.kernel_offset))
# ramdisk size in bytes
args.output.write(pack('I', filesize(args.ramdisk)))
# ramdisk physical load address
args.output.write(pack('I', ramdisk_load_address))
# second bootloader size in bytes
args.output.write(pack('I', filesize(args.second)))
# second bootloader physical load address
args.output.write(pack('I', second_load_address))
# kernel tags physical load address
args.output.write(pack('I', args.base + args.tags_offset))
# flash page size
args.output.write(pack('I', args.pagesize))
# version of boot image header
args.output.write(pack('I', args.header_version))
# os version and patch level
args.output.write(pack('I', (args.os_version << 11) | args.os_patch_level))
# asciiz product name
args.output.write(pack(f'{BOOT_NAME_SIZE}s', args.board))
args.output.write(pack(f'{BOOT_ARGS_SIZE}s', args.cmdline))
sha = sha1()
update_sha(sha, args.kernel)
update_sha(sha, args.ramdisk)
update_sha(sha, args.second)
if args.header_version > 0:
update_sha(sha, args.recovery_dtbo)
if args.header_version > 1:
update_sha(sha, args.dtb)
img_id = pack('32s', sha.digest())
args.output.write(img_id)
args.output.write(pack(f'{BOOT_EXTRA_ARGS_SIZE}s', args.extra_cmdline))
if args.header_version > 0:
if args.recovery_dtbo:
# recovery dtbo size in bytes
args.output.write(pack('I', filesize(args.recovery_dtbo)))
# recovery dtbo offset in the boot image
args.output.write(pack('Q', get_recovery_dtbo_offset(args)))
else:
# Set to zero if no recovery dtbo
args.output.write(pack('I', 0))
args.output.write(pack('Q', 0))
# Populate boot image header size for header versions 1 and 2.
if args.header_version == 1:
args.output.write(pack('I', BOOT_IMAGE_HEADER_V1_SIZE))
elif args.header_version == 2:
args.output.write(pack('I', BOOT_IMAGE_HEADER_V2_SIZE))
if args.header_version > 1:
if filesize(args.dtb) == 0:
raise ValueError('DTB image must not be empty.')
# dtb size in bytes
args.output.write(pack('I', filesize(args.dtb)))
# dtb physical load address
args.output.write(pack('Q', args.base + args.dtb_offset))
pad_file(args.output, args.pagesize)
return img_id
class AsciizBytes:
"""Parses a string and encodes it as an asciiz bytes object.
>>> AsciizBytes(bufsize=4)('foo')
b'foo\\x00'
>>> AsciizBytes(bufsize=4)('foob')
Traceback (most recent call last):
...
argparse.ArgumentTypeError: Encoded asciiz length exceeded: max 4, got 5
"""
def __init__(self, bufsize):
self.bufsize = bufsize
def __call__(self, arg):
arg_bytes = arg.encode() + b'\x00'
if len(arg_bytes) > self.bufsize:
raise ArgumentTypeError(
'Encoded asciiz length exceeded: '
f'max {self.bufsize}, got {len(arg_bytes)}')
return arg_bytes
class VendorRamdiskTableBuilder:
"""Vendor ramdisk table builder.
Attributes:
entries: A list of VendorRamdiskTableEntry namedtuple.
ramdisk_total_size: Total size in bytes of all ramdisks in the table.
"""
VendorRamdiskTableEntry = collections.namedtuple( # pylint: disable=invalid-name
'VendorRamdiskTableEntry',
['ramdisk_path', 'ramdisk_size', 'ramdisk_offset', 'ramdisk_type',
'ramdisk_name', 'board_id'])
def __init__(self):
self.entries = []
self.ramdisk_total_size = 0
self.ramdisk_names = set()
def add_entry(self, ramdisk_path, ramdisk_type, ramdisk_name, board_id):
# Strip any trailing null for simple comparison.
stripped_ramdisk_name = ramdisk_name.rstrip(b'\x00')
if stripped_ramdisk_name in VENDOR_RAMDISK_NAME_BLOCKLIST:
raise ValueError(
f'Banned vendor ramdisk name: {stripped_ramdisk_name}')
if stripped_ramdisk_name in self.ramdisk_names:
raise ValueError(
f'Duplicated vendor ramdisk name: {stripped_ramdisk_name}')
self.ramdisk_names.add(stripped_ramdisk_name)
if board_id is None:
board_id = array.array(
'I', [0] * VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE)
else:
board_id = array.array('I', board_id)
if len(board_id) != VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE:
raise ValueError('board_id size must be '
f'{VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE}')
with open(ramdisk_path, 'rb') as f:
ramdisk_size = filesize(f)
self.entries.append(self.VendorRamdiskTableEntry(
ramdisk_path, ramdisk_size, self.ramdisk_total_size, ramdisk_type,
ramdisk_name, board_id))
self.ramdisk_total_size += ramdisk_size
def write_ramdisks_padded(self, fout, alignment):
for entry in self.entries:
with open(entry.ramdisk_path, 'rb') as f:
fout.write(f.read())
pad_file(fout, alignment)
def write_entries_padded(self, fout, alignment):
for entry in self.entries:
fout.write(pack('I', entry.ramdisk_size))
fout.write(pack('I', entry.ramdisk_offset))
fout.write(pack('I', entry.ramdisk_type))
fout.write(pack(f'{VENDOR_RAMDISK_NAME_SIZE}s',
entry.ramdisk_name))
fout.write(entry.board_id)
pad_file(fout, alignment)
def write_padded_file(f_out, f_in, padding):
if f_in is None:
return
f_out.write(f_in.read())
pad_file(f_out, padding)
def parse_int(x):
return int(x, 0)
def parse_os_version(x):
match = re.search(r'^(\d{1,3})(?:\.(\d{1,3})(?:\.(\d{1,3}))?)?', x)
if match:
a = int(match.group(1))
b = c = 0
if match.lastindex >= 2:
b = int(match.group(2))
if match.lastindex == 3:
c = int(match.group(3))
# 7 bits allocated for each field
assert a < 128
assert b < 128
assert c < 128
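# e.g. '12.0.0' encodes to (12 << 14) | (0 << 7) | 0 = 196608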
return (a << 14) | (b << 7) | c
return 0
def parse_os_patch_level(x):
match = re.search(r'^(\d{4})-(\d{2})(?:-(\d{2}))?', x)
if match:
y = int(match.group(1)) - 2000
m = int(match.group(2))
# 7 bits allocated for the year, 4 bits for the month
assert 0 <= y < 128
assert 0 < m <= 12
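# e.g. '2023-06' encodes to ((2023 - 2000) << 4) | 6 = 374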
return (y << 4) | m
return 0
def parse_vendor_ramdisk_type(x):
type_dict = {
'none': VENDOR_RAMDISK_TYPE_NONE,
'platform': VENDOR_RAMDISK_TYPE_PLATFORM,
'recovery': VENDOR_RAMDISK_TYPE_RECOVERY,
'dlkm': VENDOR_RAMDISK_TYPE_DLKM,
}
if x.lower() in type_dict:
return type_dict[x.lower()]
return parse_int(x)
def get_vendor_boot_v4_usage():
return """vendor boot version 4 arguments:
--ramdisk_type {none,platform,recovery,dlkm}
specify the type of the ramdisk
--ramdisk_name NAME
specify the name of the ramdisk
--board_id{0..15} NUMBER
specify the value of the board_id vector, defaults to 0
--vendor_ramdisk_fragment VENDOR_RAMDISK_FILE
path to the vendor ramdisk file
These options can be specified multiple times, where each vendor ramdisk
option group ends with a --vendor_ramdisk_fragment option.
Each option group appends an additional ramdisk to the vendor boot image.
"""
def parse_vendor_ramdisk_args(args, args_list):
"""Parses vendor ramdisk specific arguments.
Args:
args: An argparse.Namespace object. Parsed results are stored into this
object.
args_list: A list of argument strings to be parsed.
Returns:
A list argument strings that are not parsed by this method.
"""
parser = ArgumentParser(add_help=False)
parser.add_argument('--ramdisk_type', type=parse_vendor_ramdisk_type,
default=VENDOR_RAMDISK_TYPE_NONE)
parser.add_argument('--ramdisk_name',
type=AsciizBytes(bufsize=VENDOR_RAMDISK_NAME_SIZE),
required=True)
for i in range(VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE):
parser.add_argument(f'--board_id{i}', type=parse_int, default=0)
parser.add_argument(PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT, required=True)
unknown_args = []
vendor_ramdisk_table_builder = VendorRamdiskTableBuilder()
if args.vendor_ramdisk is not None:
vendor_ramdisk_table_builder.add_entry(
args.vendor_ramdisk.name, VENDOR_RAMDISK_TYPE_PLATFORM, b'', None)
while PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT in args_list:
idx = args_list.index(PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT) + 2
vendor_ramdisk_args = args_list[:idx]
args_list = args_list[idx:]
ramdisk_args, extra_args = parser.parse_known_args(vendor_ramdisk_args)
ramdisk_args_dict = vars(ramdisk_args)
unknown_args.extend(extra_args)
ramdisk_path = ramdisk_args.vendor_ramdisk_fragment
ramdisk_type = ramdisk_args.ramdisk_type
ramdisk_name = ramdisk_args.ramdisk_name
board_id = [ramdisk_args_dict[f'board_id{i}']
for i in range(VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE)]
vendor_ramdisk_table_builder.add_entry(ramdisk_path, ramdisk_type,
ramdisk_name, board_id)
if len(args_list) > 0:
unknown_args.extend(args_list)
args.vendor_ramdisk_total_size = (vendor_ramdisk_table_builder
.ramdisk_total_size)
args.vendor_ramdisk_table_entry_num = len(vendor_ramdisk_table_builder
.entries)
args.vendor_ramdisk_table_builder = vendor_ramdisk_table_builder
return unknown_args
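# Editor's sketch (not part of the upstream tool): how the loop above chops a
# flat argument list into per-ramdisk option groups. Each group runs up to and
# including the '--vendor_ramdisk_fragment FILE' pair, hence the 'index + 2'.
def _split_ramdisk_groups(args_list,
                          marker=PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT):
    """Yield one option group per vendor ramdisk fragment."""
    while marker in args_list:
        idx = args_list.index(marker) + 2
        yield args_list[:idx]
        args_list = args_list[idx:]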
def parse_cmdline():
version_parser = ArgumentParser(add_help=False)
version_parser.add_argument('--header_version', type=parse_int, default=0)
if version_parser.parse_known_args()[0].header_version < 3:
# For boot header v0 to v2, the kernel commandline field is split into
# two fields, cmdline and extra_cmdline. Both fields are asciiz strings,
# so we subtract one here to ensure the encoded string plus the
# null-terminator can fit in the buffer size.
cmdline_size = BOOT_ARGS_SIZE + BOOT_EXTRA_ARGS_SIZE - 1
else:
cmdline_size = BOOT_ARGS_SIZE + BOOT_EXTRA_ARGS_SIZE
parser = ArgumentParser(formatter_class=RawDescriptionHelpFormatter,
epilog=get_vendor_boot_v4_usage())
parser.add_argument('--kernel', type=FileType('rb'),
help='path to the kernel')
parser.add_argument('--ramdisk', type=FileType('rb'),
help='path to the ramdisk')
parser.add_argument('--second', type=FileType('rb'),
help='path to the second bootloader')
parser.add_argument('--dtb', type=FileType('rb'), help='path to the dtb')
dtbo_group = parser.add_mutually_exclusive_group()
dtbo_group.add_argument('--recovery_dtbo', type=FileType('rb'),
help='path to the recovery DTBO')
dtbo_group.add_argument('--recovery_acpio', type=FileType('rb'),
metavar='RECOVERY_ACPIO', dest='recovery_dtbo',
help='path to the recovery ACPIO')
parser.add_argument('--cmdline', type=AsciizBytes(bufsize=cmdline_size),
default='', help='kernel command line arguments')
parser.add_argument('--vendor_cmdline',
type=AsciizBytes(bufsize=VENDOR_BOOT_ARGS_SIZE),
default='',
help='vendor boot kernel command line arguments')
parser.add_argument('--base', type=parse_int, default=0x10000000,
help='base address')
parser.add_argument('--kernel_offset', type=parse_int, default=0x00008000,
help='kernel offset')
parser.add_argument('--ramdisk_offset', type=parse_int, default=0x01000000,
help='ramdisk offset')
parser.add_argument('--second_offset', type=parse_int, default=0x00f00000,
help='second bootloader offset')
parser.add_argument('--dtb_offset', type=parse_int, default=0x01f00000,
help='dtb offset')
parser.add_argument('--os_version', type=parse_os_version, default=0,
help='operating system version')
parser.add_argument('--os_patch_level', type=parse_os_patch_level,
default=0, help='operating system patch level')
parser.add_argument('--tags_offset', type=parse_int, default=0x00000100,
help='tags offset')
parser.add_argument('--board', type=AsciizBytes(bufsize=BOOT_NAME_SIZE),
default='', help='board name')
parser.add_argument('--pagesize', type=parse_int,
choices=[2**i for i in range(11, 15)], default=2048,
help='page size')
parser.add_argument('--id', action='store_true',
help='print the image ID on standard output')
parser.add_argument('--header_version', type=parse_int, default=0,
help='boot image header version')
parser.add_argument('-o', '--output', type=FileType('wb'),
help='output file name')
parser.add_argument('--gki_signing_algorithm',
help='GKI signing algorithm to use')
parser.add_argument('--gki_signing_key',
help='path to RSA private key file')
parser.add_argument('--gki_signing_signature_args',
help='other hash arguments passed to avbtool')
parser.add_argument('--gki_signing_avbtool_path',
help='path to avbtool for boot signature generation')
parser.add_argument('--vendor_boot', type=FileType('wb'),
help='vendor boot output file name')
parser.add_argument('--vendor_ramdisk', type=FileType('rb'),
help='path to the vendor ramdisk')
parser.add_argument('--vendor_bootconfig', type=FileType('rb'),
help='path to the vendor bootconfig file')
args, extra_args = parser.parse_known_args()
if args.vendor_boot is not None and args.header_version > 3:
extra_args = parse_vendor_ramdisk_args(args, extra_args)
if len(extra_args) > 0:
raise ValueError(f'Unrecognized arguments: {extra_args}')
if args.header_version < 3:
args.extra_cmdline = args.cmdline[BOOT_ARGS_SIZE-1:]
args.cmdline = args.cmdline[:BOOT_ARGS_SIZE-1] + b'\x00'
assert len(args.cmdline) <= BOOT_ARGS_SIZE
assert len(args.extra_cmdline) <= BOOT_EXTRA_ARGS_SIZE
return args
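# Editor's illustration (unused by the tool): for header versions below 3 the
# encoded command line is carved into a NUL-terminated 'cmdline' field plus an
# 'extra_cmdline' overflow field, exactly as done at the end of parse_cmdline().
def _split_cmdline(cmdline, args_size=BOOT_ARGS_SIZE):
    """Return (cmdline, extra_cmdline) for a v0-v2 boot image header."""
    return cmdline[:args_size - 1] + b'\x00', cmdline[args_size - 1:]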
def add_boot_image_signature(args, pagesize):
"""Adds the boot image signature.
Note that the signature will only be verified in VTS to ensure a
generic boot.img is used. It will not be used by the device
bootloader at boot time. The bootloader should only verify
the boot vbmeta at the end of the boot partition (or in the top-level
vbmeta partition) via the Android Verified Boot process, when the
device boots.
"""
args.output.flush() # Flush the buffer for signature calculation.
# Appends zeros if the signing key is not specified.
if not args.gki_signing_key or not args.gki_signing_algorithm:
zeros = b'\x00' * BOOT_IMAGE_V4_SIGNATURE_SIZE
args.output.write(zeros)
pad_file(args.output, pagesize)
return
avbtool = 'avbtool' # Used from otatools.zip or Android build env.
# We need to specify the path of avbtool in build/core/Makefile,
# because avbtool is not guaranteed to be in $PATH there.
if args.gki_signing_avbtool_path:
avbtool = args.gki_signing_avbtool_path
# Need to specify a value of --partition_size for avbtool to work.
# We use 64 MB below, but avbtool will not resize the boot image to
# this size because --do_not_append_vbmeta_image is also specified.
avbtool_cmd = [
avbtool, 'add_hash_footer',
'--partition_name', 'boot',
'--partition_size', str(64 * 1024 * 1024),
'--image', args.output.name,
'--algorithm', args.gki_signing_algorithm,
'--key', args.gki_signing_key,
'--salt', 'd00df00d'] # TODO: use a hash of kernel/ramdisk as the salt.
# Additional arguments passed to avbtool.
if args.gki_signing_signature_args:
avbtool_cmd += args.gki_signing_signature_args.split()
# Outputs the signed vbmeta to a separate file, then appends it to boot.img
# as the boot signature.
with tempfile.TemporaryDirectory() as temp_out_dir:
boot_signature_output = os.path.join(temp_out_dir, 'boot_signature')
avbtool_cmd += ['--do_not_append_vbmeta_image',
'--output_vbmeta_image', boot_signature_output]
subprocess.check_call(avbtool_cmd)
with open(boot_signature_output, 'rb') as boot_signature:
if filesize(boot_signature) > BOOT_IMAGE_V4_SIGNATURE_SIZE:
raise ValueError(
f'boot signature size is > {BOOT_IMAGE_V4_SIGNATURE_SIZE}')
write_padded_file(args.output, boot_signature, pagesize)
def write_data(args, pagesize):
write_padded_file(args.output, args.kernel, pagesize)
write_padded_file(args.output, args.ramdisk, pagesize)
write_padded_file(args.output, args.second, pagesize)
if args.header_version > 0 and args.header_version < 3:
write_padded_file(args.output, args.recovery_dtbo, pagesize)
if args.header_version == 2:
write_padded_file(args.output, args.dtb, pagesize)
if args.header_version >= 4:
add_boot_image_signature(args, pagesize)
def write_vendor_boot_data(args):
if args.header_version > 3:
builder = args.vendor_ramdisk_table_builder
builder.write_ramdisks_padded(args.vendor_boot, args.pagesize)
write_padded_file(args.vendor_boot, args.dtb, args.pagesize)
builder.write_entries_padded(args.vendor_boot, args.pagesize)
write_padded_file(args.vendor_boot, args.vendor_bootconfig,
args.pagesize)
else:
write_padded_file(args.vendor_boot, args.vendor_ramdisk, args.pagesize)
write_padded_file(args.vendor_boot, args.dtb, args.pagesize)
def main():
args = parse_cmdline()
if args.vendor_boot is not None:
if args.header_version not in {3, 4}:
raise ValueError(
'--vendor_boot not compatible with given header version')
if args.header_version == 3 and args.vendor_ramdisk is None:
raise ValueError('--vendor_ramdisk missing or invalid')
write_vendor_boot_header(args)
write_vendor_boot_data(args)
if args.output is not None:
if args.second is not None and args.header_version > 2:
raise ValueError(
'--second not compatible with given header version')
img_id = write_header(args)
if args.header_version > 2:
write_data(args, BOOT_IMAGE_HEADER_V3_PAGESIZE)
else:
write_data(args, args.pagesize)
if args.id and img_id is not None:
print('0x' + ''.join(f'{octet:02x}' for octet in img_id))
if __name__ == '__main__':
main()
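
A minimal sketch of how the options above fit together, as an editorial example with hypothetical file names (assuming the script is invoked as mkbootimg; the subprocess wrapper is only for illustration):

    import subprocess

    # Build a v4 boot.img plus a vendor_boot.img carrying two ramdisk fragments,
    # mirroring parse_cmdline() and get_vendor_boot_v4_usage() above.
    subprocess.check_call([
        'mkbootimg',
        '--header_version', '4',
        '--kernel', 'kernel',
        '--ramdisk', 'ramdisk.img',
        '--cmdline', 'console=ttyS0',
        '--output', 'boot.img',
        '--vendor_boot', 'vendor_boot.img',
        '--dtb', 'dtb.img',
        '--ramdisk_type', 'platform',
        '--ramdisk_name', 'default',
        '--vendor_ramdisk_fragment', 'vendor_ramdisk.img',
        '--ramdisk_type', 'dlkm',
        '--ramdisk_name', 'dlkm',
        '--vendor_ramdisk_fragment', 'vendor_ramdisk_dlkm.img',
    ])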

View File

@@ -10,7 +10,7 @@ if [ -z "$BM_POE_ADDRESS" ]; then
exit 1
fi
SNMP_KEY="SNMPv2-SMI::mib-2.105.1.1.1.3.1.$((${BM_POE_BASE:-0} + BM_POE_INTERFACE))"
SNMP_KEY="SNMPv2-SMI::mib-2.105.1.1.1.3.1.$((48 + BM_POE_INTERFACE))"
SNMP_OFF="i 2"
flock /var/run/poe.lock -c "snmpset -v2c -r 3 -t 30 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_OFF"

View File

@@ -10,7 +10,7 @@ if [ -z "$BM_POE_ADDRESS" ]; then
exit 1
fi
SNMP_KEY="SNMPv2-SMI::mib-2.105.1.1.1.3.1.$((${BM_POE_BASE:-0} + BM_POE_INTERFACE))"
SNMP_KEY="SNMPv2-SMI::mib-2.105.1.1.1.3.1.$((48 + BM_POE_INTERFACE))"
SNMP_ON="i 1"
SNMP_OFF="i 2"

View File

@@ -71,8 +71,6 @@ if [ -z "$BM_CMDLINE" ]; then
exit 1
fi
section_start prepare_rootfs "Preparing rootfs components"
set -ex
date +'%F %T'
@@ -103,11 +101,29 @@ if [ -f "${BM_BOOTFS}" ]; then
BM_BOOTFS=/tmp/bootfs
fi
# If BM_KERNEL and BM_DTS is present
if [ -n "${FORCE_KERNEL_TAG}" ]; then
if [ -z "${BM_KERNEL}" ] || [ -z "${BM_DTB}" ]; then
echo "This machine cannot be tested with an external kernel since BM_KERNEL or BM_DTB is missing!"
exit 1
fi
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${BM_KERNEL}" -o "${BM_KERNEL}"
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${BM_DTB}.dtb" -o "${BM_DTB}.dtb"
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/modules.tar.zst" -o modules.tar.zst
fi
date +'%F %T'
# Install kernel modules (it could be either in /lib/modules or
# /usr/lib/modules, but we want to install in the latter)
if [ -n "${BM_BOOTFS}" ]; then
if [ -n "${FORCE_KERNEL_TAG}" ]; then
tar --keep-directory-symlink --zstd -xf modules.tar.zst -C /nfs/
rm modules.tar.zst &
elif [ -n "${BM_BOOTFS}" ]; then
[ -d $BM_BOOTFS/usr/lib/modules ] && rsync -a $BM_BOOTFS/usr/lib/modules/ /nfs/usr/lib/modules/
[ -d $BM_BOOTFS/lib/modules ] && rsync -a $BM_BOOTFS/lib/modules/ /nfs/lib/modules/
else
@@ -118,7 +134,7 @@ fi
date +'%F %T'
# Install kernel image + bootloader files
if [ -z "$BM_BOOTFS" ]; then
if [ -n "${FORCE_KERNEL_TAG}" ] || [ -z "$BM_BOOTFS" ]; then
mv "${BM_KERNEL}" "${BM_DTB}.dtb" /tftp/
else # BM_BOOTFS
rsync -aL --delete $BM_BOOTFS/boot/ /tftp/
@@ -126,6 +142,33 @@ fi
date +'%F %T'
# Set up the pxelinux config for Jetson Nano
mkdir -p /tftp/pxelinux.cfg
cat <<EOF >/tftp/pxelinux.cfg/default-arm-tegra210-p3450-0000
PROMPT 0
TIMEOUT 30
DEFAULT primary
MENU TITLE jetson nano boot options
LABEL primary
MENU LABEL CI kernel on TFTP
LINUX Image
FDT tegra210-p3450-0000.dtb
APPEND \${cbootargs} $BM_CMDLINE
EOF
# Set up the pxelinux config for Jetson TK1
cat <<EOF >/tftp/pxelinux.cfg/default-arm-tegra124-jetson-tk1
PROMPT 0
TIMEOUT 30
DEFAULT primary
MENU TITLE jetson TK1 boot options
LABEL primary
MENU LABEL CI kernel on TFTP
LINUX zImage
FDT tegra124-jetson-tk1.dtb
APPEND \${cbootargs} $BM_CMDLINE
EOF
# Create the rootfs in the NFS directory
. $BM/rootfs-setup.sh /nfs
@@ -138,16 +181,13 @@ if [ -n "$BM_BOOTCONFIG" ]; then
printf "$BM_BOOTCONFIG" >> /tftp/config.txt
fi
section_end prepare_rootfs
set +e
STRUCTURED_LOG_FILE=results/job_detail.json
STRUCTURED_LOG_FILE=job_detail.json
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update dut_job_type "${DEVICE_TYPE}"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update farm "${FARM}"
ATTEMPTS=3
first_attempt=True
while [ $((ATTEMPTS--)) -gt 0 ]; do
section_start dut_boot "Booting hardware device ..."
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --create-dut-job dut_name "${CI_RUNNER_DESCRIPTION}"
# Update submit time to CI_JOB_STARTED_AT only for the first run
if [ "$first_attempt" = "True" ]; then
@@ -159,22 +199,17 @@ while [ $((ATTEMPTS--)) -gt 0 ]; do
--dev="$BM_SERIAL" \
--powerup="$BM_POWERUP" \
--powerdown="$BM_POWERDOWN" \
--boot-timeout-seconds ${BOOT_PHASE_TIMEOUT_SECONDS:-300} \
--test-timeout-minutes ${TEST_PHASE_TIMEOUT_MINUTES:-$((CI_JOB_TIMEOUT/60 - ${TEST_SETUP_AND_UPLOAD_MARGIN_MINUTES:-5}))}
--test-timeout ${TEST_PHASE_TIMEOUT:-20}
ret=$?
if [ $ret -eq 2 ]; then
echo "Did not detect boot sequence, retrying..."
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close-dut-job
first_attempt=False
error "Device failed to boot; will retry"
else
# We're no longer in dut_boot by this point
unset CURRENT_SECTION
ATTEMPTS=0
fi
done
section_start dut_cleanup "Cleaning up after job"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close-dut-job
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close
set -e
@@ -184,8 +219,11 @@ date +'%F %T'
# Bring artifacts back from the NFS dir to the build dir where gitlab-runner
# will look for them.
cp -Rp /nfs/results/. results/
if [ -f "${STRUCTURED_LOG_FILE}" ]; then
cp -p ${STRUCTURED_LOG_FILE} results/
echo "Structured log file is available at ${ARTIFACTS_BASE_URL}/results/${STRUCTURED_LOG_FILE}"
fi
date +'%F %T'
section_end dut_cleanup
exit $ret

View File

@@ -31,12 +31,11 @@ from custom_logger import CustomLogger
from serial_buffer import SerialBuffer
class PoERun:
def __init__(self, args, boot_timeout, test_timeout, logger):
def __init__(self, args, test_timeout, logger):
self.powerup = args.powerup
self.powerdown = args.powerdown
self.ser = SerialBuffer(
args.dev, "results/serial-output.txt", ": ")
self.boot_timeout = boot_timeout
args.dev, "results/serial-output.txt", "")
self.test_timeout = test_timeout
self.logger = logger
@@ -57,7 +56,7 @@ class PoERun:
boot_detected = False
self.logger.create_job_phase("boot")
for line in self.ser.lines(timeout=self.boot_timeout, phase="bootloader"):
for line in self.ser.lines(timeout=5 * 60, phase="bootloader"):
if re.search("Booting Linux", line):
boot_detected = True
break
@@ -65,7 +64,7 @@ class PoERun:
if not boot_detected:
self.print_error(
"Something went wrong; couldn't detect the boot startup sequence")
return 2
return 1
self.logger.create_job_phase("test")
for line in self.ser.lines(timeout=self.test_timeout, phase="test"):
@@ -87,17 +86,14 @@ class PoERun:
self.print_error("nouveau jetson tk1 network fail, abandoning run.")
return 1
result = re.search(r"hwci: mesa: exit_code: (\d+)", line)
result = re.search("hwci: mesa: (\S*)", line)
if result:
exit_code = int(result.group(1))
if exit_code == 0:
if result.group(1) == "pass":
self.logger.update_dut_job("status", "pass")
return 0
else:
self.logger.update_status_fail("test fail")
self.logger.update_dut_job("exit_code", exit_code)
return exit_code
return 1
self.print_error(
"Reached the end of the CPU serial log without finding a result")
@@ -113,14 +109,12 @@ def main():
parser.add_argument('--powerdown', type=str,
help='shell command for powering off', required=True)
parser.add_argument(
'--boot-timeout-seconds', type=int, help='Boot phase timeout (seconds)', required=True)
parser.add_argument(
'--test-timeout-minutes', type=int, help='Test phase timeout (minutes)', required=True)
'--test-timeout', type=int, help='Test phase timeout (minutes)', required=True)
args = parser.parse_args()
logger = CustomLogger("results/job_detail.json")
logger = CustomLogger("job_detail.json")
logger.update_dut_time("start", None)
poe = PoERun(args, args.boot_timeout_seconds, args.test_timeout_minutes * 60, logger)
poe = PoERun(args, args.test_timeout * 60, logger)
retval = poe.run()
poe.logged_system(args.powerdown)
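
The hunk above switches the serial-log sentinel from a pass/fail word to a numeric exit code. A minimal sketch of the new matching logic, assuming log lines of the form "hwci: mesa: exit_code: N":

    import re

    def parse_hwci_exit_code(line):
        # Return the integer exit code, or None if the line is not the sentinel.
        result = re.search(r"hwci: mesa: exit_code: (\d+)", line)
        return int(result.group(1)) if result else None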

View File

@@ -13,17 +13,20 @@ date +'%F %T'
# Make JWT token available as file in the bare-metal storage to enable access
# to MinIO
cp "${S3_JWT_FILE}" "${rootfs_dst}${S3_JWT_FILE}"
cp "${CI_JOB_JWT_FILE}" "${rootfs_dst}${CI_JOB_JWT_FILE}"
date +'%F %T'
cp $CI_COMMON/capture-devcoredump.sh $rootfs_dst/
cp $CI_COMMON/intel-gpu-freq.sh $rootfs_dst/
cp $CI_COMMON/kdl.sh $rootfs_dst/
cp "$SCRIPTS_DIR/setup-test-env.sh" "$rootfs_dst/"
set +x
# Pass through relevant env vars from the gitlab job to the baremetal init script
echo "Variables passed through:"
filter_env_vars | tee $rootfs_dst/set-job-env-vars.sh
"$CI_COMMON"/generate-env.sh | tee $rootfs_dst/set-job-env-vars.sh
set -x

View File

@@ -22,7 +22,7 @@
# IN THE SOFTWARE.
import argparse
from datetime import datetime, UTC
from datetime import datetime, timezone
import queue
import serial
import threading
@@ -130,10 +130,9 @@ class SerialBuffer:
if b == b'\n'[0]:
line = line.decode(errors="replace")
ts = datetime.now(tz=UTC)
ts_str = f"{ts.hour:02}:{ts.minute:02}:{ts.second:02}.{int(ts.microsecond / 1000):03}"
print("{endc}{time}{prefix}{line}".format(
time=ts_str, prefix=self.prefix, line=line, endc='\033[0m'), flush=True, end='')
time = datetime.now().strftime('%y-%m-%d %H:%M:%S')
print("{endc}{time} {prefix}{line}".format(
time=time, prefix=self.prefix, line=line, endc='\033[0m'), flush=True, end='')
self.line_queue.put(line)
line = bytearray()
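
For reference, a minimal sketch of the UTC timestamp formatting introduced above (standard library only, Python 3.11+ for datetime.UTC; the surrounding buffering logic is unchanged):

    from datetime import datetime, UTC

    ts = datetime.now(tz=UTC)
    # e.g. "13:37:42.021": hour:minute:second.millisecond, printed before each serial line
    ts_str = f"{ts.hour:02}:{ts.minute:02}:{ts.second:02}.{int(ts.microsecond / 1000):03}"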

View File

@@ -0,0 +1,41 @@
#!/usr/bin/python3
# Copyright © 2020 Christian Gmeiner
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
# Tiny script to read bytes from telnet, and write the output to stdout, with a
# buffer in between so we don't lose serial output from its buffer.
#
import sys
import telnetlib
host = sys.argv[1]
port = sys.argv[2]
tn = telnetlib.Telnet(host, port, 1000000)
while True:
bytes = tn.read_some()
sys.stdout.buffer.write(bytes)
sys.stdout.flush()
tn.close()

View File

@@ -0,0 +1,7 @@
#!/bin/sh
# shellcheck disable=SC1091
set -e
_COMPILER=clang++-15
. compiler-wrapper.sh

View File

@@ -0,0 +1,7 @@
#!/bin/sh
# shellcheck disable=SC1091
set -e
_COMPILER=clang++
. compiler-wrapper.sh

View File

@@ -0,0 +1,7 @@
#!/bin/sh
# shellcheck disable=SC1091
set -e
_COMPILER=clang-15
. compiler-wrapper.sh

View File

@@ -0,0 +1,7 @@
#!/bin/sh
# shellcheck disable=SC1091
set -e
_COMPILER=clang
. compiler-wrapper.sh

View File

@@ -0,0 +1,7 @@
#!/bin/sh
# shellcheck disable=SC1091
set -e
_COMPILER=g++
. compiler-wrapper.sh

View File

@@ -0,0 +1,7 @@
#!/bin/sh
# shellcheck disable=SC1091
set -e
_COMPILER=gcc
. compiler-wrapper.sh

View File

@@ -0,0 +1,21 @@
# shellcheck disable=SC1091
# shellcheck disable=SC2086 # we want word splitting
if command -V ccache >/dev/null 2>/dev/null; then
CCACHE=ccache
else
CCACHE=
fi
if echo "$@" | grep -E 'meson-private/tmp[^ /]*/testfile.c' >/dev/null; then
# Invoked for meson feature check
exec $CCACHE $_COMPILER "$@"
fi
if [ "$(eval printf "'%s'" "\"\${$(($#-1))}\"")" = "-c" ]; then
# Not invoked for linking
exec $CCACHE $_COMPILER "$@"
fi
# Compiler invoked by ninja for linking. Add -Werror to turn compiler warnings into errors
# with LTO. (meson's werror should arguably do this, but meanwhile we need to)
exec $CCACHE $_COMPILER "$@" -Werror

View File

@@ -1,95 +0,0 @@
.meson-build-for-tests:
extends:
- .build-linux
stage: build-for-tests
script:
- &meson-build timeout --verbose ${BUILD_JOB_TIMEOUT_OVERRIDE:-$BUILD_JOB_TIMEOUT} bash --login .gitlab-ci/meson/build.sh
- .gitlab-ci/prepare-artifacts.sh
.meson-build-only:
extends:
- .meson-build-for-tests
- .build-only-delayed-rules
stage: build-only
script:
- *meson-build
# Shared between windows and Linux
.build-common:
extends: .build-rules
# Cancel job if a newer commit is pushed to the same branch
interruptible: true
variables:
# Build jobs are typically taking between 5-12 minutes, depending on how
# much they build and how many new Rust compilers we have to build twice.
# Allow 25 minutes as a reasonable margin: beyond this point, something
# has gone badly wrong, and we should try again to see if we can get
# something from it.
#
# Some jobs not in the critical path use a higher timeout, particularly
# when building with ASan or UBSan.
BUILD_JOB_TIMEOUT: 12m
RUN_MESON_TESTS: "true"
timeout: 16m
# We don't want to download any previous job's artifacts
dependencies: []
artifacts:
name: "${CI_PROJECT_NAME}_${CI_JOB_NAME_SLUG}"
when: always
paths:
- _build/meson-logs/*.txt
- _build/meson-logs/strace
- _build/.ninja_log
- artifacts
.build-run-long:
variables:
BUILD_JOB_TIMEOUT: 18m
timeout: 25m
# Just Linux
.build-linux:
extends: .build-common
variables:
C_ARGS: >
-Wno-error=deprecated-declarations
CCACHE_COMPILERCHECK: "content"
CCACHE_COMPRESS: "true"
CCACHE_DIR: /cache/mesa/ccache
# Use ccache transparently, and print stats before/after
before_script:
- !reference [default, before_script]
- |
export PATH="/usr/lib/ccache:$PATH"
export CCACHE_BASEDIR="$PWD"
if test -x /usr/bin/ccache; then
section_start ccache_before "ccache stats before build"
ccache --show-stats
section_end ccache_before
fi
after_script:
- if test -x /usr/bin/ccache; then ccache --show-stats | grep "Hits:"; fi
- !reference [default, after_script]
.build-windows:
extends:
- .build-common
- .windows-docker-tags
cache:
key: ${CI_JOB_NAME}
paths:
- subprojects/packagecache
.ci-deqp-artifacts:
artifacts:
name: "${CI_PROJECT_NAME}_${CI_JOB_NAME_SLUG}"
when: always
untracked: false
paths:
# Watch out! Artifacts are relative to the build dir.
# https://gitlab.com/gitlab-org/gitlab-ce/commit/8788fb925706cad594adf6917a6c5f6587dd1521
- artifacts
- _build/meson-logs/*.txt
- _build/meson-logs/strace
- _build/.ninja_log

File diff suppressed because it is too large

View File

@@ -1,268 +0,0 @@
# For CI-tron based testing farm jobs.
.ci-tron-test:
extends:
- .ci-tron-b2c-job-v1
variables:
GIT_STRATEGY: none
B2C_VERSION: v0.9.15.1 # Linux 6.13.7
SCRIPTS_DIR: install
CI_TRON_PATTERN__JOB_SUCCESS__REGEX: 'hwci: mesa: exit_code: 0\r$'
CI_TRON_PATTERN__SESSION_END__REGEX: '^.*It''s now safe to turn off your computer\r$'
CI_TRON_TIMEOUT__FIRST_CONSOLE_ACTIVITY__MINUTES: 2
CI_TRON_TIMEOUT__FIRST_CONSOLE_ACTIVITY__RETRIES: 3
CI_TRON_TIMEOUT__CONSOLE_ACTIVITY__MINUTES: 5
CI_TRON__B2C_ARTIFACT_EXCLUSION: "*.shader_cache,install/*,*/install/*,*/vkd3d-proton.cache*,vkd3d-proton.cache*,*.qpa"
CI_TRON_HTTP_ARTIFACT__INSTALL__PATH: "/install.tar.zst"
CI_TRON_HTTP_ARTIFACT__INSTALL__URL: "https://$PIPELINE_ARTIFACTS_BASE/$S3_ARTIFACT_NAME.tar.zst"
CI_TRON__B2C_MACHINE_REGISTRATION_CMD: "setup --tags $CI_TRON_DUT_SETUP_TAGS"
CI_TRON__B2C_IMAGE_UNDER_TEST: $MESA_IMAGE
CI_TRON__B2C_EXEC_CMD: "curl --silent --fail-with-body {{ job.http.url }}$CI_TRON_HTTP_ARTIFACT__INSTALL__PATH | tar --zstd --extract && $SCRIPTS_DIR/common/init-stage2.sh"
# Assume by default this is running deqp, as that's almost always true
HWCI_TEST_SCRIPT: install/deqp-runner.sh
# Keep the job script in the artifacts
CI_TRON_JOB_SCRIPT_PATH: results/job_script.sh
needs:
- !reference [.required-for-hardware-jobs, needs]
tags:
- farm:$RUNNER_FARM_LOCATION
- $CI_TRON_DUT_SETUP_TAGS
# Override the default before_script, as it is not compatible with the CI-tron environment. We just keep the clearing
# of the JWT token for security reasons
before_script:
- |
set -eu
eval "$S3_JWT_FILE_SCRIPT"
for var in CI_TRON_DUT_SETUP_TAGS; do
if [[ -z "$(eval echo \${$var:-})" ]]; then
echo "The required variable '$var' is missing"
exit 1
fi
done
# Open a section that will be closed by b2c
echo -e "\n\e[0Ksection_start:`date +%s`:b2c_kernel_boot[collapsed=true]\r\e[0K\e[0;36m[$(cut -d ' ' -f1 /proc/uptime)]: Submitting the CI-tron job and booting the DUT\e[0m\n"
# Anything our job places in results/ will be collected by the
# Gitlab coordinator for status presentation. results/junit.xml
# will be parsed by the UI for more detailed explanations of
# test execution.
artifacts:
when: always
name: "${CI_PROJECT_NAME}_${CI_JOB_NAME_SLUG}"
paths:
- results
reports:
junit: results/**/junit.xml
.ci-tron-x86_64-test:
extends:
- .ci-tron-test
variables:
CI_TRON_INITRAMFS__B2C__URL: 'https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/initramfs.linux_amd64.cpio.xz'
CI_TRON_KERNEL__URL: 'https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/linux-x86_64'
# Set the following variables if you need AMD, Intel, or NVIDIA support
# CI_TRON_INITRAMFS__DEPMOD__URL: "https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/linux-x86_64.depmod.cpio.xz"
# CI_TRON_INITRAMFS__GPU__URL: "https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/linux-x86_64.gpu.cpio"
# CI_TRON_INITRAMFS__GPU__FORMAT__0__ARCHIVE__KEEP__0__PATH: "(lib/(modules|firmware/amdgpu)/.*)"
S3_ARTIFACT_NAME: "mesa-x86_64-default-debugoptimized"
.ci-tron-x86_64-test-vk:
extends:
- .use-debian/x86_64_test-vk
- .ci-tron-x86_64-test
needs:
- job: debian/x86_64_test-vk
artifacts: false
optional: true
- job: debian-x86_64
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-x86_64-test-vk-manual:
extends:
- .use-debian/x86_64_test-vk
- .ci-tron-x86_64-test
variables:
S3_ARTIFACT_NAME: "debian-build-x86_64"
needs:
- job: debian/x86_64_test-vk
artifacts: false
optional: true
- job: debian-build-x86_64
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-x86_64-test-gl:
extends:
- .use-debian/x86_64_test-gl
- .ci-tron-x86_64-test
needs:
- job: debian/x86_64_test-gl
artifacts: false
optional: true
- job: debian-x86_64
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-x86_64-test-gl-manual:
extends:
- .use-debian/x86_64_test-gl
- .ci-tron-x86_64-test
variables:
S3_ARTIFACT_NAME: "debian-build-x86_64"
needs:
- job: debian/x86_64_test-gl
artifacts: false
optional: true
- job: debian-build-x86_64
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm64-test:
extends:
- .ci-tron-test
variables:
CI_TRON_INITRAMFS__B2C__URL: 'https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/initramfs.linux_arm64.cpio.xz'
CI_TRON_KERNEL__URL: 'https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/linux-arm64'
S3_ARTIFACT_NAME: "mesa-arm64-default-debugoptimized"
.ci-tron-arm64-test-vk:
extends:
- .use-debian/arm64_test-vk
- .ci-tron-arm64-test
needs:
- job: debian/arm64_test-vk
artifacts: false
optional: true
- job: debian-arm64
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm64-test-asan-vk:
extends:
- .use-debian/arm64_test-vk
- .ci-tron-arm64-test
variables:
S3_ARTIFACT_NAME: "mesa-arm64-asan-debugoptimized"
DEQP_FORCE_ASAN: 1
needs:
- job: debian/arm64_test-vk
artifacts: false
optional: true
- job: debian-arm64-asan
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm64-test-ubsan-vk:
extends:
- .use-debian/arm64_test-vk
- .ci-tron-arm64-test
variables:
S3_ARTIFACT_NAME: "mesa-arm64-ubsan-debugoptimized"
needs:
- job: debian/arm64_test-vk
artifacts: false
optional: true
- job: debian-arm64-ubsan
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm64-test-gl:
extends:
- .use-debian/arm64_test-gl
- .ci-tron-arm64-test
needs:
- job: debian/arm64_test-gl
artifacts: false
optional: true
- job: debian-arm64
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm64-test-asan-gl:
extends:
- .use-debian/arm64_test-gl
- .ci-tron-arm64-test
variables:
S3_ARTIFACT_NAME: "mesa-arm64-asan-debugoptimized"
DEQP_FORCE_ASAN: 1
needs:
- job: debian/arm64_test-gl
artifacts: false
optional: true
- job: debian-arm64-asan
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm64-test-ubsan-gl:
extends:
- .use-debian/arm64_test-gl
- .ci-tron-arm64-test
variables:
S3_ARTIFACT_NAME: "mesa-arm64-ubsan-debugoptimized"
needs:
- job: debian/arm64_test-gl
artifacts: false
optional: true
- job: debian-arm64-ubsan
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm32-test:
extends:
- .ci-tron-test
variables:
CI_TRON_INITRAMFS__B2C__URL: 'https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/initramfs.linux_arm.cpio.xz'
CI_TRON_KERNEL__URL: 'https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/linux-arm'
S3_ARTIFACT_NAME: "mesa-arm32-default-debugoptimized"
.ci-tron-arm32-test-vk:
extends:
- .use-debian/arm32_test-vk
- .ci-tron-arm32-test
needs:
- job: debian/arm32_test-vk
artifacts: false
optional: true
- job: debian-arm32
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm32-test-gl:
extends:
- .use-debian/arm32_test-gl
- .ci-tron-arm32-test
needs:
- job: debian/arm32_test-gl
artifacts: false
optional: true
- job: debian-arm32
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm32-test-asan-gl:
extends:
- .use-debian/arm32_test-gl
- .ci-tron-arm32-test
variables:
S3_ARTIFACT_NAME: "mesa-arm32-asan-debugoptimized"
DEQP_FORCE_ASAN: 1
needs:
- job: debian/arm32_test-gl
artifacts: false
optional: true
- job: debian-arm32-asan
artifacts: false
- !reference [.ci-tron-test, needs]

View File

@@ -7,7 +7,7 @@ while true; do
devcds=$(find /sys/devices/virtual/devcoredump/ -name data 2>/dev/null)
for i in $devcds; do
echo "Found a devcoredump at $i."
if cp $i $RESULTS_DIR/first.devcore; then
if cp $i /results/first.devcore; then
echo 1 > $i
echo "Saved to the job artifacts at /first.devcore"
exit 0
@@ -23,7 +23,7 @@ while true; do
rm "$tmpfile"
else
echo "Found an i915 error state at $i size=$filesize."
if cp "$tmpfile" $RESULTS_DIR/first.i915_error_state; then
if cp "$tmpfile" /results/first.i915_error_state; then
rm "$tmpfile"
echo 1 > "$i"
echo "Saved to the job artifacts at /first.i915_error_state"

.gitlab-ci/common/generate-env.sh (131 lines, executable file)
View File

@@ -0,0 +1,131 @@
#!/bin/bash
for var in \
ACO_DEBUG \
ARTIFACTS_BASE_URL \
ASAN_OPTIONS \
BASE_SYSTEM_FORK_HOST_PREFIX \
BASE_SYSTEM_MAINLINE_HOST_PREFIX \
CI_COMMIT_BRANCH \
CI_COMMIT_REF_NAME \
CI_COMMIT_TITLE \
CI_JOB_ID \
CI_JOB_JWT_FILE \
CI_JOB_STARTED_AT \
CI_JOB_NAME \
CI_JOB_URL \
CI_MERGE_REQUEST_SOURCE_BRANCH_NAME \
CI_MERGE_REQUEST_TITLE \
CI_NODE_INDEX \
CI_NODE_TOTAL \
CI_PAGES_DOMAIN \
CI_PIPELINE_ID \
CI_PIPELINE_URL \
CI_PROJECT_DIR \
CI_PROJECT_NAME \
CI_PROJECT_PATH \
CI_PROJECT_ROOT_NAMESPACE \
CI_RUNNER_DESCRIPTION \
CI_SERVER_URL \
CROSVM_GALLIUM_DRIVER \
CROSVM_GPU_ARGS \
CURRENT_SECTION \
DEQP_BIN_DIR \
DEQP_CONFIG \
DEQP_EXPECTED_RENDERER \
DEQP_FRACTION \
DEQP_HEIGHT \
DEQP_RESULTS_DIR \
DEQP_RUNNER_OPTIONS \
DEQP_SUITE \
DEQP_TEMP_DIR \
DEQP_VARIANT \
DEQP_VER \
DEQP_WIDTH \
DEVICE_NAME \
DRIVER_NAME \
EGL_PLATFORM \
ETNA_MESA_DEBUG \
FDO_CI_CONCURRENT \
FDO_UPSTREAM_REPO \
FD_MESA_DEBUG \
FLAKES_CHANNEL \
FREEDRENO_HANGCHECK_MS \
GALLIUM_DRIVER \
GALLIVM_PERF \
GPU_VERSION \
GTEST \
GTEST_FAILS \
GTEST_FRACTION \
GTEST_RESULTS_DIR \
GTEST_RUNNER_OPTIONS \
GTEST_SKIPS \
HWCI_FREQ_MAX \
HWCI_KERNEL_MODULES \
HWCI_KVM \
HWCI_START_WESTON \
HWCI_START_XORG \
HWCI_TEST_SCRIPT \
IR3_SHADER_DEBUG \
JOB_ARTIFACTS_BASE \
JOB_RESULTS_PATH \
JOB_ROOTFS_OVERLAY_PATH \
KERNEL_IMAGE_BASE \
KERNEL_IMAGE_NAME \
LD_LIBRARY_PATH \
LP_NUM_THREADS \
MESA_BASE_TAG \
MESA_BUILD_PATH \
MESA_DEBUG \
MESA_GLES_VERSION_OVERRIDE \
MESA_GLSL_VERSION_OVERRIDE \
MESA_GL_VERSION_OVERRIDE \
MESA_IMAGE \
MESA_IMAGE_PATH \
MESA_IMAGE_TAG \
MESA_LOADER_DRIVER_OVERRIDE \
MESA_TEMPLATES_COMMIT \
MESA_VK_IGNORE_CONFORMANCE_WARNING \
S3_HOST \
S3_RESULTS_UPLOAD \
NIR_DEBUG \
PAN_I_WANT_A_BROKEN_VULKAN_DRIVER \
PAN_MESA_DEBUG \
PANVK_DEBUG \
PIGLIT_FRACTION \
PIGLIT_NO_WINDOW \
PIGLIT_OPTIONS \
PIGLIT_PLATFORM \
PIGLIT_PROFILES \
PIGLIT_REPLAY_ARTIFACTS_BASE_URL \
PIGLIT_REPLAY_DEVICE_NAME \
PIGLIT_REPLAY_EXTRA_ARGS \
PIGLIT_REPLAY_LOOP_TIMES \
PIGLIT_REPLAY_REFERENCE_IMAGES_BASE \
PIGLIT_REPLAY_SUBCOMMAND \
PIGLIT_RESULTS \
PIGLIT_TESTS \
PIGLIT_TRACES_FILE \
PIPELINE_ARTIFACTS_BASE \
RADEON_DEBUG \
RADV_DEBUG \
RADV_PERFTEST \
SKQP_ASSETS_DIR \
SKQP_BACKENDS \
TU_DEBUG \
USE_ANGLE \
VIRGL_HOST_API \
WAFFLE_PLATFORM \
VK_CPU \
VK_DRIVER \
VK_ICD_FILENAMES \
VKD3D_PROTON_RESULTS \
VKD3D_CONFIG \
ZINK_DESCRIPTORS \
ZINK_DEBUG \
LVP_POISON_MEMORY \
; do
if [ -n "${!var+x}" ]; then
echo "export $var=${!var@Q}"
fi
done

View File

@@ -3,10 +3,6 @@
# Very early init, used to make sure devices and network are set up and
# reachable.
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# ALPINE_X86_64_LAVA_TRIGGER_TAG
set -ex
cd /
@@ -27,8 +23,3 @@ echo "nameserver 8.8.8.8" > /etc/resolv.conf
# Set the time so we can validate certificates before we fetch anything;
# however as not all DUTs have network, make this non-fatal.
for _ in 1 2 3; do sntp -sS pool.ntp.org && break || sleep 2; done || true
# Create a symlink from /dev/fd to /proc/self/fd if /dev/fd is missing.
if [ ! -e /dev/fd ]; then
ln -s /proc/self/fd /dev/fd
fi

View File

@@ -7,8 +7,6 @@
# Second-stage init, used to set up devices and our job environment before
# running tests.
shopt -s extglob
# Make sure to kill this script and all of its child processes on exit, since
# any console output may interfere with LAVA signal handling, which is based
# on the log console.
@@ -47,13 +45,6 @@ for path in '/dut-env-vars.sh' '/set-job-env-vars.sh' './set-job-env-vars.sh'; d
done
. "$SCRIPTS_DIR"/setup-test-env.sh
# Flush out anything which might be stuck in a serial buffer
echo
echo
echo
section_switch init_stage2 "Pre-testing hardware setup"
set -ex
# Set up any devices required by the jobs
@@ -76,7 +67,9 @@ fi
# - vmx for Intel VT
# - svm for AMD-V
#
if [ -n "$HWCI_ENABLE_X86_KVM" ]; then
# Additionally, download the kernel image to boot the VM via HWCI_TEST_SCRIPT.
#
if [ "$HWCI_KVM" = "true" ]; then
unset KVM_KERNEL_MODULE
{
grep -qs '\bvmx\b' /proc/cpuinfo && KVM_KERNEL_MODULE=kvm_intel
@@ -89,6 +82,11 @@ if [ -n "$HWCI_ENABLE_X86_KVM" ]; then
echo "WARNING: Failed to detect CPU virtualization extensions"
} || \
modprobe ${KVM_KERNEL_MODULE}
mkdir -p /lava-files
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-o "/lava-files/${KERNEL_IMAGE_NAME}" \
"${KERNEL_IMAGE_BASE}/amd64/${KERNEL_IMAGE_NAME}"
fi
# Fix prefix confusion: the build installs to $CI_PROJECT_DIR, but we expect
@@ -102,22 +100,12 @@ export LIBGL_DRIVERS_PATH=/install/lib/dri
# telling it to look in /usr/local/lib.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
# The Broadcom devices need /usr/local/bin unconditionally added to the path
export PATH=/usr/local/bin:$PATH
# Store Mesa's disk cache under /tmp, rather than sending it out over NFS.
export XDG_CACHE_HOME=/tmp
# Make sure Python can find all our imports
export PYTHONPATH=$(python3 -c "import sys;print(\":\".join(sys.path))")
# If we need to specify a driver, it means several drivers could pick up this gpu;
# ensure that the other driver can't accidentally be used
if [ -n "$MESA_LOADER_DRIVER_OVERRIDE" ]; then
rm /install/lib/dri/!($MESA_LOADER_DRIVER_OVERRIDE)_dri.so
fi
ls -1 /install/lib/dri/*_dri.so || true
if [ "$HWCI_FREQ_MAX" = "true" ]; then
# Ensure initialization of the DRM device (needed by MSM)
head -0 /dev/dri/renderD128
@@ -136,14 +124,13 @@ if [ "$HWCI_FREQ_MAX" = "true" ]; then
# and enable throttling detection & reporting.
# Additionally, set the upper limit for CPU scaling frequency to 65% of the
# maximum permitted, as an additional measure to mitigate thermal throttling.
/install/common/intel-gpu-freq.sh -s 70% --cpu-set-max 65% -g all -d
/intel-gpu-freq.sh -s 70% --cpu-set-max 65% -g all -d
fi
# Start a little daemon to capture sysfs records and produce a JSON file
KDL_PATH=/install/common/kdl.sh
if [ -x "$KDL_PATH" ]; then
if [ -x /kdl.sh ]; then
echo "launch kdl.sh!"
$KDL_PATH &
/kdl.sh &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
else
echo "kdl.sh not found!"
@@ -157,15 +144,11 @@ fi
# Start a little daemon to capture the first devcoredump we encounter. (They
# expire after 5 minutes, so we poll for them).
CAPTURE_DEVCOREDUMP=/install/common/capture-devcoredump.sh
if [ -x "$CAPTURE_DEVCOREDUMP" ]; then
$CAPTURE_DEVCOREDUMP &
if [ -x /capture-devcoredump.sh ]; then
/capture-devcoredump.sh &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
fi
ARCH=$(uname -m)
export VK_DRIVER_FILES="/install/share/vulkan/icd.d/${VK_DRIVER}_icd.$ARCH.json"
# If we want Xorg to be running for the test, then we start it up before the
# HWCI_TEST_SCRIPT because we need to use xinit to start X (otherwise
# without using -displayfd you can race with Xorg's startup), but xinit will eat
@@ -173,7 +156,8 @@ export VK_DRIVER_FILES="/install/share/vulkan/icd.d/${VK_DRIVER}_icd.$ARCH.json"
if [ -n "$HWCI_START_XORG" ]; then
echo "touch /xorg-started; sleep 100000" > /xorg-script
env \
xinit /bin/sh /xorg-script -- /usr/bin/Xorg -noreset -s 0 -dpms -logfile "$RESULTS_DIR/Xorg.0.log" &
VK_ICD_FILENAMES="/install/share/vulkan/icd.d/${VK_DRIVER}_icd.$(uname -m).json" \
xinit /bin/sh /xorg-script -- /usr/bin/Xorg -noreset -s 0 -dpms -logfile /Xorg.0.log &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
# Wait for xorg to be ready for connections.
@@ -198,43 +182,45 @@ if [ -n "$HWCI_START_WESTON" ]; then
export DISPLAY=:0
mkdir -p /tmp/.X11-unix
env weston --config="/install/common/weston.ini" -Swayland-0 --use-gl &
env \
VK_ICD_FILENAMES="/install/share/vulkan/icd.d/${VK_DRIVER}_icd.$(uname -m).json" \
weston -Bheadless-backend.so --use-gl -Swayland-0 --xwayland --idle-time=0 &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
while [ ! -S "$WESTON_X11_SOCK" ]; do sleep 1; done
fi
set +x
section_end init_stage2
echo "Running ${HWCI_TEST_SCRIPT} ${HWCI_TEST_ARGS} ..."
set +e
$HWCI_TEST_SCRIPT ${HWCI_TEST_ARGS:-}; EXIT_CODE=$?
bash -c ". $SCRIPTS_DIR/setup-test-env.sh && $HWCI_TEST_SCRIPT"
EXIT_CODE=$?
set -e
section_start post_test_cleanup "Cleaning up after testing, uploading results"
set -x
# Let's make sure the results are always stored in current working directory
mv -f ${CI_PROJECT_DIR}/results ./ 2>/dev/null || true
[ ${EXIT_CODE} -ne 0 ] || rm -rf results/trace/"$PIGLIT_REPLAY_DEVICE_NAME"
# Make sure that capture-devcoredump is done before we start trying to tar up
# artifacts -- if it's writing while tar is reading, tar will throw an error and
# kill the job.
cleanup
# upload artifacts (lava jobs)
# upload artifacts
if [ -n "$S3_RESULTS_UPLOAD" ]; then
tar --zstd -cf results.tar.zst results/;
ci-fairy s3cp --token-file "${S3_JWT_FILE}" results.tar.zst https://"$S3_RESULTS_UPLOAD"/results.tar.zst
ci-fairy s3cp --token-file "${CI_JOB_JWT_FILE}" results.tar.zst https://"$S3_RESULTS_UPLOAD"/results.tar.zst;
fi
# We still need to echo the hwci: mesa message, as some scripts rely on it, such
# as the python ones inside the bare-metal folder
[ ${EXIT_CODE} -eq 0 ] && RESULT=pass || RESULT=fail
set +x
section_end post_test_cleanup
# Print the final result; both bare-metal and LAVA look for this string to get
# the result of our run, so try really hard to get it out rather than losing
# the run. The device gets shut down right at this point, and a630 seems to
# enjoy corrupting the last line of serial output before shutdown.
for _ in $(seq 0 3); do echo "hwci: mesa: exit_code: $EXIT_CODE"; sleep 1; echo; done
for _ in $(seq 0 3); do echo "hwci: mesa: $RESULT"; sleep 1; echo; done
exit $EXIT_CODE

View File

@@ -35,27 +35,6 @@
# - gt_act_freq_mhz (the actual GPU freq)
# - gt_cur_freq_mhz (the last requested freq)
#
# Intel later switched to per-tile sysfs interfaces, which is what the Xe DRM
# driver exclusively uses, and the capabilities are now located under the
# following directory for the first tile:
#
# /sys/class/drm/card<n>/device/tile0/gt0/freq0/<freq_info>
#
# Where <n> is the DRM card index and <freq_info> one of the following:
#
# - max_freq (enforced maximum freq)
# - min_freq (enforced minimum freq)
#
# The hardware capabilities can be accessed via:
#
# - rp0_freq (supported maximum freq)
# - rpn_freq (supported minimum freq)
# - rpe_freq (most efficient freq)
#
# The current frequency can be read from:
# - act_freq (the actual GPU freq)
# - cur_freq (the last requested freq)
#
# Also note that in addition to GPU management, the script offers the
# possibility to adjust CPU operating frequencies. However, this is currently
# limited to just setting the maximum scaling frequency as percentage of the
@@ -71,25 +50,10 @@
# Constants
#
# Check if any /sys/class/drm/cardX/device/tile0 directory exists to detect Xe
USE_XE=0
for i in $(seq 0 15); do
if [ -d "/sys/class/drm/card$i/device/tile0" ]; then
USE_XE=1
break
fi
done
# GPU
if [ "$USE_XE" -eq 1 ]; then
DRM_FREQ_SYSFS_PATTERN="/sys/class/drm/card%d/device/tile0/gt0/freq0/%s_freq"
ENF_FREQ_INFO="max min"
CAP_FREQ_INFO="rp0 rpn rpe"
else
DRM_FREQ_SYSFS_PATTERN="/sys/class/drm/card%d/gt_%s_freq_mhz"
ENF_FREQ_INFO="max min boost"
CAP_FREQ_INFO="RP0 RPn RP1"
fi
DRM_FREQ_SYSFS_PATTERN="/sys/class/drm/card%d/gt_%s_freq_mhz"
ENF_FREQ_INFO="max min boost"
CAP_FREQ_INFO="RP0 RPn RP1"
ACT_FREQ_INFO="act cur"
THROTT_DETECT_SLEEP_SEC=2
THROTT_DETECT_PID_FILE_PATH=/tmp/thrott-detect.pid
@@ -148,11 +112,7 @@ identify_intel_gpu() {
}
path=$(print_freq_sysfs_path "" ${i})
if [ "$USE_XE" -eq 1 ]; then
path=${path%/*/*/*/*/*}/device/vendor
else
path=${path%/*}/device/vendor
fi
path=${path%/*}/device/vendor
[ -r "${path}" ] && read vendor < "${path}" && \
[ "${vendor}" = "0x8086" ] && INTEL_DRM_CARD_INDEX=$i && return 0
@@ -237,13 +197,13 @@ compute_freq_set() {
case "$1" in
+)
val=$(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f1)}") # FREQ_rp0 or FREQ_RP0
val=${FREQ_RP0}
;;
-)
val=$(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}") # FREQ_rpn or FREQ_RPn
val=${FREQ_RPn}
;;
*%)
val=$((${1%?} * $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f1)}") / 100))
val=$((${1%?} * FREQ_RP0 / 100))
# Adjust freq to comply with 50 MHz increments
val=$((val / 50 * 50))
;;
@@ -272,17 +232,15 @@ set_freq_max() {
read_freq_info n min || return $?
# FREQ_rp0 or FREQ_RP0
[ ${SET_MAX_FREQ} -gt $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f1)}") ] && {
[ ${SET_MAX_FREQ} -gt ${FREQ_RP0} ] && {
log ERROR "Cannot set GPU max freq (%s) to be greater than hw max freq (%s)" \
"${SET_MAX_FREQ}" "$(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f1)}")"
"${SET_MAX_FREQ}" "${FREQ_RP0}"
return 1
}
# FREQ_rpn or FREQ_RPn
[ ${SET_MAX_FREQ} -lt $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}") ] && {
[ ${SET_MAX_FREQ} -lt ${FREQ_RPn} ] && {
log ERROR "Cannot set GPU max freq (%s) to be less than hw min freq (%s)" \
"${SET_MIN_FREQ}" "$(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}")"
"${SET_MIN_FREQ}" "${FREQ_RPn}"
return 1
}
@@ -294,21 +252,12 @@ set_freq_max() {
[ -z "${DRY_RUN}" ] || return 0
# Write to max freq path
if ! printf "%s" ${SET_MAX_FREQ} | tee $(print_freq_sysfs_path max) > /dev/null;
if ! printf "%s" ${SET_MAX_FREQ} | tee $(print_freq_sysfs_path max) \
$(print_freq_sysfs_path boost) > /dev/null;
then
log ERROR "Failed to set GPU max frequency"
return 1
fi
# Only write to boost if the sysfs file exists, as it's removed in Xe
if [ -e "$(print_freq_sysfs_path boost)" ]; then
if ! printf "%s" ${SET_MAX_FREQ} | tee $(print_freq_sysfs_path boost) > /dev/null;
then
log ERROR "Failed to set GPU boost frequency"
return 1
fi
fi
}
#
@@ -325,9 +274,9 @@ set_freq_min() {
return 1
}
[ ${SET_MIN_FREQ} -lt $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}") ] && {
[ ${SET_MIN_FREQ} -lt ${FREQ_RPn} ] && {
log ERROR "Cannot set GPU min freq (%s) to be less than hw min freq (%s)" \
"${SET_MIN_FREQ}" "$(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}")"
"${SET_MIN_FREQ}" "${FREQ_RPn}"
return 1
}
@@ -345,7 +294,7 @@ set_freq_min() {
#
set_freq() {
# Get hw max & min frequencies
read_freq_info n $(echo $CAP_FREQ_INFO | cut -d' ' -f1,2) || return $? # RP0 RPn
read_freq_info n RP0 RPn || return $?
[ -z "${SET_MAX_FREQ}" ] || {
SET_MAX_FREQ=$(compute_freq_set "${SET_MAX_FREQ}")
@@ -448,7 +397,7 @@ detect_throttling() {
}
(
read_freq_info n $(echo $CAP_FREQ_INFO | cut -d' ' -f2) || return $? # RPn
read_freq_info n RPn || exit $?
while true; do
sleep ${THROTT_DETECT_SLEEP_SEC}
@@ -457,13 +406,13 @@ detect_throttling() {
#
# The throttling seems to occur when act freq goes below min.
# However, it's necessary to exclude the idle states, where
# act freq normally reaches rpn and cur goes below min.
# act freq normally reaches RPn and cur goes below min.
#
[ ${FREQ_act} -lt ${FREQ_min} ] && \
[ ${FREQ_act} -gt $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}") ] && \
[ ${FREQ_act} -gt ${FREQ_RPn} ] && \
[ ${FREQ_cur} -ge ${FREQ_min} ] && \
printf "GPU throttling detected: act=%s min=%s cur=%s rpn=%s\n" \
${FREQ_act} ${FREQ_min} ${FREQ_cur} $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}")
printf "GPU throttling detected: act=%s min=%s cur=%s RPn=%s\n" \
${FREQ_act} ${FREQ_min} ${FREQ_cur} ${FREQ_RPn}
done
) &
@@ -611,8 +560,7 @@ set_cpu_freq_max() {
read_cpu_freq_info ${cpu_index} n ${CAP_CPU_FREQ_INFO} || { res=$?; continue; }
target_freq=$(compute_cpu_freq_set "${CPU_SET_MAX_FREQ}")
tf_res=$?
[ -z "${target_freq}" ] && { res=$tf_res; continue; }
[ -z "${target_freq}" ] && { res=$?; continue; }
log INFO "Setting CPU%s max scaling freq to %s Hz" ${cpu_index} "${target_freq}"
[ -n "${DRY_RUN}" ] && continue

View File

@@ -1,18 +1,24 @@
#!/usr/bin/env bash
# shellcheck disable=SC1091 # the path is created in build-kdl and
# here we check whether it exists
# shellcheck disable=SC2086 # we want the arguments to be expanded
if ! [ -f /ci-kdl/bin/activate ]; then
echo -e "ci-kdl not installed; not monitoring temperature"
exit 0
terminate() {
echo "ci-kdl.sh caught SIGTERM signal! propagating to child processes"
for job in $(jobs -p)
do
kill -15 "$job"
done
}
trap terminate SIGTERM
if [ -f /ci-kdl.venv/bin/activate ]; then
source /ci-kdl.venv/bin/activate
/ci-kdl.venv/bin/python /ci-kdl.venv/bin/ci-kdl | tee -a /results/kdl.log &
child=$!
wait $child
mv kdl_*.json /results/kdl.json
else
echo -e "Could not activate the ci-kdl virtual environment"
fi
KDL_ARGS="
--output-file=${RESULTS_DIR}/kdl.json
--log-level=WARNING
--num-samples=-1
"
source /ci-kdl/bin/activate
exec /ci-kdl/bin/ci-kdl ${KDL_ARGS}

.gitlab-ci/common/start-x.sh (21 lines, executable file)
View File

@@ -0,0 +1,21 @@
#!/bin/sh
set -ex
_XORG_SCRIPT="/xorg-script"
_FLAG_FILE="/xorg-started"
echo "touch ${_FLAG_FILE}; sleep 100000" > "${_XORG_SCRIPT}"
if [ "x$1" != "x" ]; then
export LD_LIBRARY_PATH="${1}/lib"
export LIBGL_DRIVERS_PATH="${1}/lib/dri"
fi
xinit /bin/sh "${_XORG_SCRIPT}" -- /usr/bin/Xorg vt45 -noreset -s 0 -dpms -logfile /Xorg.0.log &
# Wait for xorg to be ready for connections.
for _ in 1 2 3 4 5; do
if [ -e "${_FLAG_FILE}" ]; then
break
fi
sleep 5
done

View File

@@ -1,7 +0,0 @@
[core]
backend=headless-backend.so
xwayland=true
idle-time=0
[xwayland]
path=/usr/local/bin/Xwayland

View File

@@ -1,7 +0,0 @@
variables:
CONDITIONAL_BUILD_ANDROID_CTS_TAG: b018634d732f438027ec58c0383615e7
CONDITIONAL_BUILD_ANGLE_TAG: f62910e55be46e37cc867d037e4a8121
CONDITIONAL_BUILD_CROSVM_TAG: 0f59350b1052bdbb28b65a832b494377
CONDITIONAL_BUILD_FLUSTER_TAG: 3bc3afd7468e106afcbfd569a85f34f9
CONDITIONAL_BUILD_PIGLIT_TAG: 827b708ab7309721395ea28cec512968
CONDITIONAL_BUILD_VKD3D_PROTON_TAG: 82cadf35246e64a8228bf759c9c19e5b

View File

@@ -1,70 +0,0 @@
# Build the CI Alpine docker images.
#
# MESA_IMAGE_TAG is the tag of the docker image used by later stage jobs. If the
# image doesn't exist yet, the container stage job generates it.
#
# In order to generate a new image, one should generally change the tag.
# While removing the image from the registry would also work, that's not
# recommended except for ephemeral images during development: Replacing
# an image after a significant amount of time might pull in newer
# versions of gcc/clang or other packages, which might break the build
# with older commits using the same tag.
#
# After merging a change resulting in generating a new image to the
# main repository, it's recommended to remove the image from the source
# repository's container registry, so that the image from the main
# repository's registry will be used there as well.
# Alpine based x86_64 build image
.alpine/x86_64_build-base:
extends:
- .fdo.container-build@alpine
- .container
variables:
FDO_DISTRIBUTION_VERSION: "3.21"
FDO_BASE_IMAGE: alpine:$FDO_DISTRIBUTION_VERSION # since cbuild ignores it
# Alpine based x86_64 build image
alpine/x86_64_build:
extends:
- .alpine/x86_64_build-base
variables:
MESA_IMAGE_TAG: &alpine-x86_64_build ${ALPINE_X86_64_BUILD_TAG}
LLVM_VERSION: &alpine-llvm_version 19
rules:
- !reference [.container, rules]
# Note: the next three lines must remain in that order, so that the rules
# in `linkcheck-docs` catch nightly pipelines before the rules in `deploy-docs`
# exclude them.
- !reference [linkcheck-docs, rules]
- !reference [deploy-docs, rules]
- !reference [test-docs, rules]
.use-alpine/x86_64_build:
tags:
- $FDO_RUNNER_JOB_PRIORITY_TAG_X86_64
extends:
- .set-image
variables:
MESA_IMAGE_PATH: "alpine/x86_64_build"
MESA_IMAGE_TAG: *alpine-x86_64_build
LLVM_VERSION: *alpine-llvm_version
needs:
- job: sanity
optional: true
- job: alpine/x86_64_build
optional: true
# Alpine based x86_64 image for LAVA SSH dockerized client
alpine/x86_64_lava_ssh_client:
extends:
- .alpine/x86_64_build-base
variables:
MESA_IMAGE_TAG: &alpine-x86_64_lava_ssh_client ${ALPINE_X86_64_LAVA_SSH_TAG}
# Alpine based x86_64 image to run LAVA jobs
alpine/x86_64_lava-trigger:
extends:
- .alpine/x86_64_build-base
variables:
MESA_IMAGE_TAG: &alpine-x86_64_lava_trigger ${ALPINE_X86_64_LAVA_TRIGGER_TAG}

View File

@@ -6,9 +6,6 @@
# ALPINE_X86_64_BUILD_TAG
set -e
. .gitlab-ci/setup-test-env.sh
set -o xtrace
EPHEMERAL=(
@@ -19,65 +16,43 @@ DEPS=(
bash
bison
ccache
"clang${LLVM_VERSION}-dev"
clang-dev
cmake
clang-dev
coreutils
curl
elfutils-dev
expat-dev
flex
g++
gcc
gettext
g++
git
gettext
glslang
graphviz
libclc-dev
libdrm-dev
libpciaccess-dev
libva-dev
linux-headers
"llvm${LLVM_VERSION}-dev"
"llvm${LLVM_VERSION}-static"
mold
musl-dev
py3-clang
py3-cparser
py3-mako
py3-packaging
py3-pip
py3-ply
py3-yaml
llvm16-dev
meson
expat-dev
elfutils-dev
libdrm-dev
libselinux-dev
libva-dev
libpciaccess-dev
zlib-dev
python3-dev
samurai
spirv-llvm-translator-dev
py3-mako
py3-ply
vulkan-headers
spirv-tools-dev
util-macros
vulkan-headers
zlib-dev
wayland-dev
wayland-protocols
)
apk --no-cache add "${DEPS[@]}" "${EPHEMERAL[@]}"
pip3 install --break-system-packages sphinx===8.2.3 hawkmoth===0.19.0
. .gitlab-ci/container/container_pre_build.sh
. .gitlab-ci/container/install-meson.sh
EXTRA_MESON_ARGS='--prefix=/usr' \
. .gitlab-ci/container/build-wayland.sh
############### Uninstall the build software
# too many vendor binaries, just keep the ones we need
find /usr/share/clc \
\( -type f -o -type l \) \
! -name 'spirv-mesa3d-.spv' \
! -name 'spirv64-mesa3d-.spv' \
-delete
apk del "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh

View File

@@ -1,50 +0,0 @@
#!/usr/bin/env bash
# This is a ci-templates build script to generate a container for triggering LAVA jobs.
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# ALPINE_X86_64_LAVA_TRIGGER_TAG
# shellcheck disable=SC1091
set -e
. .gitlab-ci/setup-test-env.sh
set -o xtrace
uncollapsed_section_start alpine_setup "Base Alpine system setup"
# Ephemeral packages (installed for this script and removed again at the end)
EPHEMERAL=(
git
py3-pip
)
# We only need these very basic packages to run the LAVA jobs
DEPS=(
curl
python3
tar
zstd
)
apk --no-cache add "${DEPS[@]}" "${EPHEMERAL[@]}"
pip3 install --break-system-packages -r bin/ci/requirements-lava.txt
cp -Rp .gitlab-ci/lava /
cp -Rp .gitlab-ci/bin/*_logger.py /lava
cp -Rp .gitlab-ci/common/init-stage1.sh /lava
. .gitlab-ci/container/container_pre_build.sh
############### Uninstall the build software
uncollapsed_section_switch alpine_cleanup "Cleaning up base Alpine system"
apk del "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh
section_end alpine_cleanup

View File

@@ -4,9 +4,6 @@
# shellcheck disable=SC1091
set -e
. .gitlab-ci/setup-test-env.sh
set -o xtrace
EPHEMERAL=(

View File

@@ -1,5 +1,4 @@
#!/usr/bin/env bash
# shellcheck disable=SC2154 # arch is assigned in previous scripts
set -e
set -o xtrace
@@ -7,12 +6,11 @@ set -o xtrace
# Fetch the arm-built rootfs image and unpack it in our x86_64 container (saves
# network transfer, disk usage, and runtime on test jobs)
S3_PATH="https://${S3_HOST}/${S3_KERNEL_BUCKET}"
if curl -L --retry 3 -f --retry-delay 10 -s --head "${S3_PATH}/${FDO_UPSTREAM_REPO}/${LAVA_DISTRIBUTION_TAG}/lava-rootfs.tar.zst"; then
ARTIFACTS_URL="${S3_PATH}/${FDO_UPSTREAM_REPO}/${LAVA_DISTRIBUTION_TAG}"
# shellcheck disable=SC2154 # arch is assigned in previous scripts
if curl -X HEAD -s "${ARTIFACTS_PREFIX}/${FDO_UPSTREAM_REPO}/${ARTIFACTS_SUFFIX}/${arch}/done"; then
ARTIFACTS_URL="${ARTIFACTS_PREFIX}/${FDO_UPSTREAM_REPO}/${ARTIFACTS_SUFFIX}/${arch}"
else
ARTIFACTS_URL="${S3_PATH}/${CI_PROJECT_PATH}/${LAVA_DISTRIBUTION_TAG}"
ARTIFACTS_URL="${ARTIFACTS_PREFIX}/${CI_PROJECT_PATH}/${ARTIFACTS_SUFFIX}/${arch}"
fi
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
@@ -26,8 +24,39 @@ if [[ $arch == "arm64" ]]; then
pushd /baremetal-files
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}"/arm64/Image
-O "${KERNEL_IMAGE_BASE}"/arm64/Image
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}"/arm64/Image.gz
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}"/arm64/cheza-kernel
DEVICE_TREES=""
DEVICE_TREES="$DEVICE_TREES apq8016-sbc.dtb"
DEVICE_TREES="$DEVICE_TREES apq8096-db820c.dtb"
DEVICE_TREES="$DEVICE_TREES tegra210-p3450-0000.dtb"
DEVICE_TREES="$DEVICE_TREES imx8mq-nitrogen.dtb"
for DTB in $DEVICE_TREES; do
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}/arm64/$DTB"
done
popd
elif [[ $arch == "armhf" ]]; then
mkdir -p /baremetal-files
pushd /baremetal-files
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}"/armhf/zImage
DEVICE_TREES=""
DEVICE_TREES="$DEVICE_TREES imx6q-cubox-i.dtb"
DEVICE_TREES="$DEVICE_TREES tegra124-jetson-tk1.dtb"
for DTB in $DEVICE_TREES; do
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}/armhf/$DTB"
done
popd
fi

View File

@@ -1,67 +0,0 @@
#!/usr/bin/env bash
#
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_ANDROID_TAG
# This script runs in a container to:
# 1. Download the Android CTS (Compatibility Test Suite)
# 2. Filter out unneeded test modules
# 3. Compress and upload the stripped version to S3
# Note: The 'build-' prefix in the filename is only to make it compatible
# with the bin/ci/update_tag.py script.
set -euo pipefail
section_start android-cts "Downloading Android CTS"
# xtrace is getting lost with the section switching
set -x
# Do a very early check to make sure the tag is correct without the need of
# setting up the environment variables locally
ci_tag_build_time_check "ANDROID_CTS_TAG"
# List of all CTS modules we might want to run in CI
# This should be the union of all modules required by our CI jobs
# Specific modules to run are selected via the ${GPU_VERSION}-android-cts-include.txt files
ANDROID_CTS_MODULES=(
"CtsDeqpTestCases"
"CtsGraphicsTestCases"
"CtsNativeHardwareTestCases"
"CtsSkQPTestCases"
)
ANDROID_CTS_VERSION="${ANDROID_VERSION}_r1"
ANDROID_CTS_DEVICE_ARCH="x86"
# Download the stripped CTS from S3, because the CTS download from Google can take 20 minutes
CTS_FILENAME="android-cts-${ANDROID_CTS_VERSION}-linux_x86-${ANDROID_CTS_DEVICE_ARCH}"
ARTIFACT_PATH="${DATA_STORAGE_PATH}/android-cts/${ANDROID_CTS_TAG}.tar.zst"
if FOUND_ARTIFACT_URL="$(find_s3_project_artifact "${ARTIFACT_PATH}")"; then
echo "Found Android CTS at: ${FOUND_ARTIFACT_URL}"
curl-with-retry "${FOUND_ARTIFACT_URL}" | tar --zstd -x -C /
else
echo "No cached CTS found, downloading from Google and uploading to S3..."
curl-with-retry --remote-name "https://dl.google.com/dl/android/cts/${CTS_FILENAME}.zip"
# Disable zipbomb detection, because the CTS zip file is too big
# At least locally, it is detected as a zipbomb
UNZIP_DISABLE_ZIPBOMB_DETECTION=true \
unzip -q -d / "${CTS_FILENAME}.zip"
rm "${CTS_FILENAME}.zip"
# Keep only the interesting tests to save space
# shellcheck disable=SC2086 # we want word splitting
ANDROID_CTS_MODULES_KEEP_EXPRESSION=$(printf "%s|" "${ANDROID_CTS_MODULES[@]}" | sed -e 's/|$//g')
find /android-cts/testcases/ -mindepth 1 -type d | grep -v -E "$ANDROID_CTS_MODULES_KEEP_EXPRESSION" | xargs rm -rf
# Use a zstd-compressed tarball instead of zip: the compression ratio is almost the same, but
# the extraction is faster, and LAVA overlays don't support zip compression.
tar --zstd -cf "${CTS_FILENAME}.tar.zst" /android-cts
ci-fairy s3cp --token-file "${S3_JWT_FILE}" "${CTS_FILENAME}.tar.zst" \
"https://${S3_BASE_PATH}/${CI_PROJECT_PATH}/${ARTIFACT_PATH}"
fi
section_end android-cts


@@ -1,121 +0,0 @@
#!/usr/bin/env bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml and .gitlab-ci/container/gitlab-ci.yml tags:
# DEBIAN_BUILD_TAG
# ANDROID_LLVM_ARTIFACT_NAME
set -exu
# If CI vars are not set, assign an empty value; this prevents `set -u` from failing
: "${CI:=}"
: "${CI_PROJECT_PATH:=}"
# Early check for required env variables, relies on `set -u`
: "$ANDROID_NDK_VERSION"
: "$ANDROID_SDK_VERSION"
: "$ANDROID_LLVM_VERSION"
: "$ANDROID_LLVM_ARTIFACT_NAME"
: "$S3_JWT_FILE"
: "$S3_HOST"
: "$S3_ANDROID_BUCKET"
# In CI, check that the auth file used later on is non-empty
if [ -n "$CI" ] && [ ! -s "${S3_JWT_FILE}" ]; then
echo "Error: ${S3_JWT_FILE} is empty." 1>&2
exit 1
fi
if curl -s -o /dev/null -I -L -f --retry 4 --retry-delay 15 "https://${S3_HOST}/${S3_ANDROID_BUCKET}/${CI_PROJECT_PATH}/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"; then
echo "Artifact ${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst already exists, skip re-building."
# Download prebuilt LLVM libraries for Android when they have not changed,
# to save some time
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-o "/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst" "https://${S3_HOST}/${S3_ANDROID_BUCKET}/${CI_PROJECT_PATH}/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"
tar -C / --zstd -xf "/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"
rm "/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"
exit
fi
# Ephemeral packages (installed for this script and removed again at the end)
EPHEMERAL=(
unzip
)
apt-get update
apt-get install -y --no-install-recommends --no-remove "${EPHEMERAL[@]}"
ANDROID_NDK="android-ndk-${ANDROID_NDK_VERSION}"
ANDROID_NDK_ROOT="/${ANDROID_NDK}"
if [ ! -d "$ANDROID_NDK_ROOT" ];
then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-o "${ANDROID_NDK}.zip" \
"https://dl.google.com/android/repository/${ANDROID_NDK}-linux.zip"
unzip -d / "${ANDROID_NDK}.zip" "$ANDROID_NDK/source.properties" "$ANDROID_NDK/build/cmake/*" "$ANDROID_NDK/toolchains/llvm/*"
rm "${ANDROID_NDK}.zip"
fi
if [ ! -d "/llvm-project" ];
then
mkdir "/llvm-project"
pushd "/llvm-project"
git init
git remote add origin https://github.com/llvm/llvm-project.git
git fetch --depth 1 origin "$ANDROID_LLVM_VERSION"
git checkout FETCH_HEAD
popd
fi
pushd "/llvm-project"
# Checkout again the intended version, just in case of a pre-existing full clone
git checkout "$ANDROID_LLVM_VERSION" || true
LLVM_INSTALL_PREFIX="/${ANDROID_LLVM_ARTIFACT_NAME}"
rm -rf build/
cmake -GNinja -S llvm -B build/ \
-DCMAKE_TOOLCHAIN_FILE="${ANDROID_NDK_ROOT}/build/cmake/android.toolchain.cmake" \
-DANDROID_ABI=x86_64 \
-DANDROID_PLATFORM="android-${ANDROID_SDK_VERSION}" \
-DANDROID_NDK="${ANDROID_NDK_ROOT}" \
-DCMAKE_ANDROID_ARCH_ABI=x86_64 \
-DCMAKE_ANDROID_NDK="${ANDROID_NDK_ROOT}" \
-DCMAKE_BUILD_TYPE=MinSizeRel \
-DCMAKE_SYSTEM_NAME=Android \
-DCMAKE_SYSTEM_VERSION="${ANDROID_SDK_VERSION}" \
-DCMAKE_INSTALL_PREFIX="${LLVM_INSTALL_PREFIX}" \
-DCMAKE_CXX_FLAGS="-march=x86-64 --target=x86_64-linux-android${ANDROID_SDK_VERSION} -fno-rtti" \
-DLLVM_HOST_TRIPLE="x86_64-linux-android${ANDROID_SDK_VERSION}" \
-DLLVM_TARGETS_TO_BUILD=X86 \
-DLLVM_BUILD_LLVM_DYLIB=OFF \
-DLLVM_BUILD_TESTS=OFF \
-DLLVM_BUILD_EXAMPLES=OFF \
-DLLVM_BUILD_DOCS=OFF \
-DLLVM_BUILD_TOOLS=OFF \
-DLLVM_ENABLE_RTTI=OFF \
-DLLVM_BUILD_INSTRUMENTED_COVERAGE=OFF \
-DLLVM_NATIVE_TOOL_DIR="${ANDROID_NDK_ROOT}/toolchains/llvm/prebuilt/linux-x86_64/bin" \
-DLLVM_ENABLE_PIC=False \
-DLLVM_OPTIMIZED_TABLEGEN=ON
ninja "-j${FDO_CI_CONCURRENT:-4}" -C build/ install
popd
rm -rf /llvm-project
tar --zstd -cf "${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst" "$LLVM_INSTALL_PREFIX"
# If run in CI upload the tar.zst archive to S3 to avoid rebuilding it if the
# version does not change, and delete it.
# The file is not deleted for non-CI because it can be useful in local runs.
if [ -n "$CI" ]; then
ci-fairy s3cp --token-file "${S3_JWT_FILE}" "${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst" "https://${S3_HOST}/${S3_ANDROID_BUCKET}/${CI_PROJECT_PATH}/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"
rm "${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"
fi
apt-get purge -y "${EPHEMERAL[@]}"

.gitlab-ci/container/build-angle.sh (173 changed lines; Executable file → Normal file)

@@ -1,171 +1,58 @@
#!/usr/bin/env bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_ANDROID_TAG
# DEBIAN_TEST_GL_TAG
set -ex
set -uex
section_start angle "Building ANGLE"
# Do a very early check to make sure the tag is correct without the need of
# setting up the environment variables locally
ci_tag_build_time_check "ANGLE_TAG"
ANGLE_REV="c39f4a5c553cbee39af8f866aa82a9ffa4f02f5b"
DEPOT_REV="5982a1aeb33dc36382ed8c62eddf52a6135e7dd3"
# Set ANGLE_ARCH based on DEBIAN_ARCH if it hasn't been explicitly defined
if [[ -z "${ANGLE_ARCH:-}" ]]; then
case "$DEBIAN_ARCH" in
amd64) ANGLE_ARCH=x64;;
arm64) ANGLE_ARCH=arm64;;
esac
fi
ANGLE_REV="0518a3ff4d4e7e5b2ce8203358f719613a31c118"
# DEPOT tools
mkdir /depot-tools
pushd /depot-tools
git init
git remote add origin https://chromium.googlesource.com/chromium/tools/depot_tools.git
git fetch --depth 1 origin "$DEPOT_REV"
git checkout FETCH_HEAD
export PATH=/depot-tools:$PATH
git clone --depth 1 https://chromium.googlesource.com/chromium/tools/depot_tools.git
PWD=$(pwd)
export PATH=$PWD/depot_tools:$PATH
export DEPOT_TOOLS_UPDATE=0
popd
mkdir /angle-build
mkdir /angle
pushd /angle-build
git init
git remote add origin https://chromium.googlesource.com/angle/angle.git
git fetch --depth 1 origin "$ANGLE_REV"
git checkout FETCH_HEAD
echo "$ANGLE_REV" > /angle/version
GCLIENT_CUSTOM_VARS=()
GCLIENT_CUSTOM_VARS+=('--custom-var=angle_enable_cl=False')
GCLIENT_CUSTOM_VARS+=('--custom-var=angle_enable_cl_testing=False')
GCLIENT_CUSTOM_VARS+=('--custom-var=angle_enable_vulkan_validation_layers=False')
GCLIENT_CUSTOM_VARS+=('--custom-var=angle_enable_wgpu=False')
GCLIENT_CUSTOM_VARS+=('--custom-var=build_angle_deqp_tests=False')
GCLIENT_CUSTOM_VARS+=('--custom-var=build_angle_perftests=False')
if [[ "$ANGLE_TARGET" == "android" ]]; then
GCLIENT_CUSTOM_VARS+=('--custom-var=checkout_android=True')
fi
# source preparation
gclient config --name REPLACE-WITH-A-DOT --unmanaged \
"${GCLIENT_CUSTOM_VARS[@]}" \
https://chromium.googlesource.com/angle/angle.git
sed -e 's/REPLACE-WITH-A-DOT/./;' -i .gclient
sed -e 's|"custom_deps" : {|"custom_deps" : {\
"third_party/clspv/src": None,\
"third_party/dawn": None,\
"third_party/glmark2/src": None,\
"third_party/libjpeg_turbo": None,\
"third_party/llvm/src": None,\
"third_party/OpenCL-CTS/src": None,\
"third_party/SwiftShader": None,\
"third_party/VK-GL-CTS/src": None,\
"third_party/vulkan-validation-layers/src": None,|' -i .gclient
gclient sync --no-history -j"${FDO_CI_CONCURRENT:-4}"
python3 scripts/bootstrap.py
mkdir -p build/config
gclient sync
sed -i "/catapult/d" testing/BUILD.gn
mkdir -p out/Release
cat > out/Release/args.gn <<EOF
angle_assert_always_on=false
angle_build_all=false
angle_build_tests=false
angle_enable_cl=false
angle_enable_cl_testing=false
angle_enable_gl=false
angle_enable_gl_desktop_backend=false
angle_enable_null=false
angle_enable_swiftshader=false
angle_enable_trace=false
angle_enable_wgpu=false
angle_enable_vulkan=true
angle_enable_vulkan_api_dump_layer=false
angle_enable_vulkan_validation_layers=false
angle_has_frame_capture=false
angle_has_histograms=false
angle_has_rapidjson=false
angle_use_custom_libvulkan=false
build_angle_deqp_tests=false
echo '
is_debug = false
angle_enable_swiftshader = false
angle_enable_null = false
angle_enable_gl = false
angle_enable_vulkan = true
angle_has_histograms = false
build_angle_trace_perf_tests = false
build_angle_deqp_tests = false
angle_use_custom_libvulkan = false
dcheck_always_on=true
enable_expensive_dchecks=false
is_component_build=false
is_debug=false
target_cpu="${ANGLE_ARCH}"
target_os="${ANGLE_TARGET}"
treat_warnings_as_errors=false
EOF
case "$ANGLE_TARGET" in
linux) cat >> out/Release/args.gn <<EOF
angle_egl_extension="so.1"
angle_glesv2_extension="so.2"
use_custom_libcxx=false
custom_toolchain="//build/toolchain/linux/unbundle:default"
host_toolchain="//build/toolchain/linux/unbundle:default"
EOF
;;
android) cat >> out/Release/args.gn <<EOF
android_ndk_version="${ANDROID_NDK_VERSION}"
android64_ndk_api_level=${ANDROID_SDK_VERSION}
android32_ndk_api_level=${ANDROID_SDK_VERSION}
use_custom_libcxx=true
EOF
;;
*) echo "Unexpected ANGLE_TARGET value: $ANGLE_TARGET"; exit 1;;
esac
' > out/Release/args.gn
if [[ "$DEBIAN_ARCH" = "arm64" ]]; then
# We need to get an AArch64 sysroot - because ANGLE isn't great friends with
# system dependencies - but use the default system toolchain, because the
# 'arm64' toolchain you get from Google infrastructure is a cross-compiler
# from x86-64
build/linux/sysroot_scripts/install-sysroot.py --arch=arm64
fi
(
# The 'unbundled' toolchain configuration requires clang, and it also needs to
# be configured via environment variables.
export CC="clang-${LLVM_VERSION}"
export HOST_CC="$CC"
export CFLAGS="-Wno-unknown-warning-option"
export HOST_CFLAGS="$CFLAGS"
export CXX="clang++-${LLVM_VERSION}"
export HOST_CXX="$CXX"
export CXXFLAGS="-Wno-unknown-warning-option"
export HOST_CXXFLAGS="$CXXFLAGS"
export AR="ar"
export HOST_AR="$AR"
export NM="nm"
export HOST_NM="$NM"
export LDFLAGS="-fuse-ld=lld-${LLVM_VERSION} -lpthread -ldl"
export HOST_LDFLAGS="$LDFLAGS"
gn gen out/Release
# depot_tools overrides ninja with a version that doesn't work. We want
# ninja with FDO_CI_CONCURRENT anyway.
/usr/local/bin/ninja -C out/Release/
gn gen out/Release
# depot_tools overrides ninja with a version that doesn't work. We want
# ninja with FDO_CI_CONCURRENT anyway.
/usr/local/bin/ninja -C out/Release/ libEGL libGLESv1_CM libGLESv2
)
rm -f out/Release/libvulkan.so* out/Release/*.so*.TOC
cp out/Release/lib*.so* /angle/
if [[ "$ANGLE_TARGET" == "linux" ]]; then
ln -s libEGL.so.1 /angle/libEGL.so
ln -s libGLESv2.so.2 /angle/libGLESv2.so
fi
mkdir /angle
cp out/Release/lib*GL*.so /angle/
ln -s libEGL.so /angle/libEGL.so.1
ln -s libGLESv2.so /angle/libGLESv2.so.2
rm -rf out
popd
rm -rf /depot-tools
rm -rf /angle-build
section_end angle
rm -rf ./depot_tools


@@ -3,19 +3,19 @@
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BASE_TAG
# DEBIAN_X86_64_TEST_GL_TAG
# DEBIAN_X86_64_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
set -uex
set -ex
uncollapsed_section_start apitrace "Building apitrace"
APITRACE_VERSION="b6102d10960c9f43b1b473903fc67937dd19fb98"
APITRACE_VERSION="0a6506433e1f9f7b69757b4e5730326970c4321a"
git clone https://github.com/apitrace/apitrace.git --single-branch --no-checkout /apitrace
pushd /apitrace
git checkout "$APITRACE_VERSION"
git submodule update --init --depth 1 --recursive
cmake -S . -B _build -G Ninja -DCMAKE_BUILD_TYPE=Release -DENABLE_GUI=False -DENABLE_WAFFLE=on ${EXTRA_CMAKE_ARGS:-}
cmake -S . -B _build -G Ninja -DCMAKE_BUILD_TYPE=Release -DENABLE_GUI=False -DENABLE_WAFFLE=on $EXTRA_CMAKE_ARGS
cmake --build _build --parallel --target apitrace eglretrace
mkdir build
cp _build/apitrace build
@@ -23,5 +23,3 @@ cp _build/eglretrace build
${STRIP_CMD:-strip} build/*
find . -not -path './build' -not -path './build/*' -delete
popd
section_end apitrace


@@ -1,28 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
# FEDORA_X86_64_BUILD_TAG
uncollapsed_section_start bindgen "Building bindgen"
BINDGEN_VER=0.71.1
CBINDGEN_VER=0.26.0
# bindgen
RUSTFLAGS='-L native=/usr/local/lib' cargo install \
bindgen-cli --version ${BINDGEN_VER} \
--locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local
# cbindgen
RUSTFLAGS='-L native=/usr/local/lib' cargo install \
cbindgen --version ${CBINDGEN_VER} \
--locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local
section_end bindgen


@@ -1,56 +1,44 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BASE_TAG
# Do a very early check to make sure the tag is correct without the need of
# setting up the environment variables locally
ci_tag_build_time_check "CROSVM_TAG"
set -uex
section_start crosvm "Building crosvm"
set -ex
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
CROSVM_VERSION=4a6b4316155742fbfa1be7087c2ee578cfee884d
CROSVM_VERSION=e3815e62d675ef436956a992e0ed58b7309c759d
git clone --single-branch -b main --no-checkout https://chromium.googlesource.com/crosvm/crosvm /platform/crosvm
pushd /platform/crosvm
git checkout "$CROSVM_VERSION"
git submodule update --init
VIRGLRENDERER_VERSION=06d43ce974b664f9dc521b706a0ad7f91dbf2866
VIRGLRENDERER_VERSION=747c6ae5b194ca551a79958a9a86c42bddcc4553
rm -rf third_party/virglrenderer
git clone --single-branch -b main --no-checkout https://gitlab.freedesktop.org/virgl/virglrenderer.git third_party/virglrenderer
pushd third_party/virglrenderer
git checkout "$VIRGLRENDERER_VERSION"
meson setup build/ -D libdir=lib -D render-server-worker=process -D venus=true ${EXTRA_MESON_ARGS:-}
meson setup build/ -D libdir=lib -D render-server-worker=process -D venus=true $EXTRA_MESON_ARGS
meson install -C build
popd
rm rust-toolchain
cargo update -p pkg-config@0.3.26 --precise 0.3.27
RUSTFLAGS='-L native=/usr/local/lib' cargo install \
bindgen-cli \
--locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local \
--version 0.71.1 \
${EXTRA_CARGO_ARGS:-}
--version 0.65.1 \
$EXTRA_CARGO_ARGS
CROSVM_USE_SYSTEM_MINIGBM=1 CROSVM_USE_SYSTEM_VIRGLRENDERER=1 RUSTFLAGS='-L native=/usr/local/lib' cargo install \
CROSVM_USE_SYSTEM_VIRGLRENDERER=1 RUSTFLAGS='-L native=/usr/local/lib' cargo install \
-j ${FDO_CI_CONCURRENT:-4} \
--locked \
--features 'default-no-sandbox gpu x virgl_renderer' \
--features 'default-no-sandbox gpu x virgl_renderer virgl_renderer_next' \
--path . \
--root /usr/local \
${EXTRA_CARGO_ARGS:-}
$EXTRA_CARGO_ARGS
popd
rm -rf /platform/crosvm
section_end crosvm


@@ -3,97 +3,62 @@
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_ANDROID_TAG
# DEBIAN_BASE_TAG
# DEBIAN_X86_64_TEST_ANDROID_TAG
# KERNEL_ROOTFS_TAG
set -uex
set -ex
section_start deqp-runner "Building deqp-runner"
DEQP_RUNNER_VERSION=0.18.0
DEQP_RUNNER_VERSION=0.20.3
if [ -n "${DEQP_RUNNER_GIT_TAG}${DEQP_RUNNER_GIT_REV}" ]; then
# Build and install from source
DEQP_RUNNER_CARGO_ARGS="--git ${DEQP_RUNNER_GIT_URL:-https://gitlab.freedesktop.org/anholt/deqp-runner.git}"
commits_to_backport=(
)
patch_files=(
)
DEQP_RUNNER_GIT_URL="${DEQP_RUNNER_GIT_URL:-https://gitlab.freedesktop.org/mesa/deqp-runner.git}"
if [ -n "${DEQP_RUNNER_GIT_TAG:-}" ]; then
DEQP_RUNNER_GIT_CHECKOUT="$DEQP_RUNNER_GIT_TAG"
elif [ -n "${DEQP_RUNNER_GIT_REV:-}" ]; then
DEQP_RUNNER_GIT_CHECKOUT="$DEQP_RUNNER_GIT_REV"
else
DEQP_RUNNER_GIT_CHECKOUT="v$DEQP_RUNNER_VERSION"
fi
BASE_PWD=$PWD
mkdir -p /deqp-runner
pushd /deqp-runner
mkdir deqp-runner-git
pushd deqp-runner-git
git init
git remote add origin "$DEQP_RUNNER_GIT_URL"
git fetch --depth 1 origin "$DEQP_RUNNER_GIT_CHECKOUT"
git checkout FETCH_HEAD
for commit in "${commits_to_backport[@]}"
do
PATCH_URL="https://gitlab.freedesktop.org/mesa/deqp-runner/-/commit/$commit.patch"
echo "Backport deqp-runner commit $commit from $PATCH_URL"
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 $PATCH_URL | git am
done
for patch in "${patch_files[@]}"
do
echo "Apply patch to deqp-runner from $patch"
git am "$BASE_PWD/.gitlab-ci/container/patches/$patch"
done
if [ -z "${RUST_TARGET:-}" ]; then
RUST_TARGET=""
fi
if [[ "$RUST_TARGET" != *-android ]]; then
# When the CC (/usr/lib/ccache/gcc) variable is set, the rust compiler uses
# it when cross-compiling for arm32 and the build fails for zsys-sys.
# So unset the CC variable when cross-compiling for arm32.
SAVEDCC=${CC:-}
if [ "$RUST_TARGET" = "armv7-unknown-linux-gnueabihf" ]; then
unset CC
if [ -n "${DEQP_RUNNER_GIT_TAG}" ]; then
DEQP_RUNNER_CARGO_ARGS="--tag ${DEQP_RUNNER_GIT_TAG} ${DEQP_RUNNER_CARGO_ARGS}"
else
DEQP_RUNNER_CARGO_ARGS="--rev ${DEQP_RUNNER_GIT_REV} ${DEQP_RUNNER_CARGO_ARGS}"
fi
DEQP_RUNNER_CARGO_ARGS="${DEQP_RUNNER_CARGO_ARGS} ${EXTRA_CARGO_ARGS}"
else
# Install from package registry
DEQP_RUNNER_CARGO_ARGS="--version ${DEQP_RUNNER_VERSION} ${EXTRA_CARGO_ARGS} -- deqp-runner"
fi
if [ -z "$ANDROID_NDK_HOME" ]; then
cargo install --locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local \
${EXTRA_CARGO_ARGS:-} \
--path .
CC=$SAVEDCC
${DEQP_RUNNER_CARGO_ARGS}
else
mkdir -p /deqp-runner
pushd /deqp-runner
git clone --branch v${DEQP_RUNNER_VERSION} --depth 1 https://gitlab.freedesktop.org/anholt/deqp-runner.git deqp-runner-git
pushd deqp-runner-git
cargo install --locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local --version 2.10.0 \
cargo-ndk
rustup target add $RUST_TARGET
RUSTFLAGS='-C target-feature=+crt-static' cargo ndk --target $RUST_TARGET build --release
rustup target add x86_64-linux-android
RUSTFLAGS='-C target-feature=+crt-static' cargo ndk --target x86_64-linux-android build
mv target/$RUST_TARGET/release/deqp-runner /deqp-runner
mv target/x86_64-linux-android/debug/deqp-runner /deqp-runner
cargo uninstall --locked \
--root /usr/local \
cargo-ndk
fi
popd
rm -rf deqp-runner-git
popd
popd
rm -rf deqp-runner-git
popd
fi
# remove unused test runners to shrink images for the Mesa CI build (not kernel,
# which chooses its own deqp branch)
if [ -z "${DEQP_RUNNER_GIT_TAG:-}${DEQP_RUNNER_GIT_REV:-}" ]; then
if [ -z "${DEQP_RUNNER_GIT_TAG}${DEQP_RUNNER_GIT_REV}" ]; then
rm -f /usr/local/bin/igt-runner
fi
section_end deqp-runner
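Both sides of this hunk make the same basic choice: install deqp-runner from git when DEQP_RUNNER_GIT_TAG or DEQP_RUNNER_GIT_REV is set, otherwise install the release pinned by DEQP_RUNNER_VERSION from the package registry. Below is a minimal sketch of that choice, assuming cargo is on PATH; it is illustrative only and shows the tag case (the rev case would use --rev instead), while the exact flags differ between the two sides of the diff (one clones and installs with --path ., the other passes git arguments straight to cargo install).

if [ -n "${DEQP_RUNNER_GIT_TAG:-}" ]; then
    # from git, pinned to a specific tag of the deqp-runner repository
    cargo install --locked --root /usr/local \
        --git "${DEQP_RUNNER_GIT_URL:-https://gitlab.freedesktop.org/mesa/deqp-runner.git}" \
        --tag "${DEQP_RUNNER_GIT_TAG}" deqp-runner
else
    # from the package registry, pinned to the version declared above
    cargo install --locked --root /usr/local \
        --version "${DEQP_RUNNER_VERSION}" deqp-runner
fi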

.gitlab-ci/container/build-deqp.sh (339 changed lines; Executable file → Normal file)

@@ -3,30 +3,27 @@
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_ANDROID_TAG
# DEBIAN_TEST_GL_TAG
# DEBIAN_TEST_VK_TAG
# DEBIAN_X86_64_TEST_ANDROID_TAG
# DEBIAN_X86_64_TEST_GL_TAG
# DEBIAN_X86_64_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
set -ue -o pipefail
set -ex -o pipefail
# shellcheck disable=SC2153
deqp_api=${DEQP_API,,}
DEQP_VERSION=vulkan-cts-1.3.7.0
section_start deqp-$deqp_api "Building dEQP $DEQP_API"
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
git clone \
https://github.com/KhronosGroup/VK-GL-CTS.git \
-b $DEQP_VERSION \
--depth 1 \
/VK-GL-CTS
pushd /VK-GL-CTS
set -x
mkdir -p /deqp
# See `deqp_build_targets` below for which release is used to produce which
# binary. Unless this comment has bitrotten:
# - the commit from the main branch produces the deqp tools and `deqp-vk`,
# - the VK release produces `deqp-vk`,
# - the GL release produces `glcts`, and
# - the GLES release produces `deqp-gles*` and `deqp-egl`
DEQP_MAIN_COMMIT=9cc8e038994c32534b3d2c4ba88c1dc49ef53228
DEQP_VK_VERSION=1.4.1.1
DEQP_GL_VERSION=4.6.6.0
DEQP_GLES_VERSION=3.2.12.0
echo "dEQP base version $DEQP_VERSION" > /deqp/version-log
# Patches to VulkanCTS may come from commits in their repo (listed in
# cts_commits_to_backport) or patch files stored in our repo (in the patch
@@ -34,206 +31,76 @@ DEQP_GLES_VERSION=3.2.12.0
# Both list variables would have comments explaining the reasons behind the
# patches.
# shellcheck disable=SC2034
main_cts_commits_to_backport=(
# If you find yourself wanting to add something in here, consider whether
# bumping DEQP_MAIN_COMMIT is not a better solution :)
cts_commits_to_backport=(
# Take multiview into account for task shader inv. stats
22aa3f4c59f6e1d4daebd5a8c9c05bce6cd3b63b
# Remove illegal mesh shader query tests
2a87f7b25dc27188be0f0a003b2d7aef69d9002e
# Relax fragment shader invocations result verifications
0d8bf6a2715f95907e9cf86a86876ff1f26c66fe
# Fix several issues in dynamic rendering basic tests
c5453824b498c981c6ba42017d119f5de02a3e34
)
# shellcheck disable=SC2034
main_cts_patch_files=(
)
# shellcheck disable=SC2034
vk_cts_commits_to_backport=(
# Stop querying device address from unbound buffers
046343f46f7d39d53b47842d7fd8ed3279528046
)
# shellcheck disable=SC2034
vk_cts_patch_files=(
)
# shellcheck disable=SC2034
gl_cts_commits_to_backport=(
# Add testing for GL_PRIMITIVES_SUBMITTED_ARB query.
e075ce73ddc5973aa46a5236c715bb281c9501fa
)
# shellcheck disable=SC2034
gl_cts_patch_files=(
build-deqp-gl_Build-Don-t-build-Vulkan-utilities-for-GL-builds.patch
build-deqp-gl_Revert-Add-missing-context-deletion.patch
build-deqp-gl_Revert-Fix-issues-with-GLX-reset-notification-strate.patch
build-deqp-gl_Revert-Fix-spurious-failures-when-using-a-config-wit.patch
)
# shellcheck disable=SC2034
# GLES builds also EGL
gles_cts_commits_to_backport=(
)
# shellcheck disable=SC2034
gles_cts_patch_files=(
build-deqp-gl_Build-Don-t-build-Vulkan-utilities-for-GL-builds.patch
build-deqp-gl_Revert-Add-missing-context-deletion.patch
build-deqp-gl_Revert-Fix-issues-with-GLX-reset-notification-strate.patch
build-deqp-gl_Revert-Fix-spurious-failures-when-using-a-config-wit.patch
)
### Careful editing anything below this line
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
# shellcheck disable=SC2153
case "${DEQP_API}" in
tools) DEQP_VERSION="$DEQP_MAIN_COMMIT";;
*-main) DEQP_VERSION="$DEQP_MAIN_COMMIT";;
VK) DEQP_VERSION="vulkan-cts-$DEQP_VK_VERSION";;
GL) DEQP_VERSION="opengl-cts-$DEQP_GL_VERSION";;
GLES) DEQP_VERSION="opengl-es-cts-$DEQP_GLES_VERSION";;
*) echo "Unexpected DEQP_API value: $DEQP_API"; exit 1;;
esac
mkdir -p /VK-GL-CTS
pushd /VK-GL-CTS
[ -e .git ] || {
git init
git remote add origin https://github.com/KhronosGroup/VK-GL-CTS.git
}
git fetch --depth 1 origin "$DEQP_VERSION"
git checkout FETCH_HEAD
DEQP_COMMIT=$(git rev-parse FETCH_HEAD)
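# The merge base of main and the pinned commit equals the pinned commit only when
# that commit is an ancestor of main, so any other result from the compare API
# means the commit is not on the main branch.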
if [ "$DEQP_VERSION" = "$DEQP_MAIN_COMMIT" ]; then
merge_base="$(curl-with-retry -s https://api.github.com/repos/KhronosGroup/VK-GL-CTS/compare/main...$DEQP_MAIN_COMMIT | jq -r .merge_base_commit.sha)"
if [[ "$merge_base" != "$DEQP_MAIN_COMMIT" ]]; then
echo "VK-GL-CTS commit $DEQP_MAIN_COMMIT is not a commit from the main branch."
exit 1
fi
fi
mkdir -p /deqp-$deqp_api
if [ "$DEQP_VERSION" = "$DEQP_MAIN_COMMIT" ]; then
prefix="main"
else
prefix="$deqp_api"
fi
cts_commits_to_backport="${prefix}_cts_commits_to_backport[@]"
for commit in "${!cts_commits_to_backport}"
for commit in "${cts_commits_to_backport[@]}"
do
PATCH_URL="https://github.com/KhronosGroup/VK-GL-CTS/commit/$commit.patch"
echo "Apply patch to ${DEQP_API} CTS from $PATCH_URL"
curl-with-retry $PATCH_URL | GIT_COMMITTER_DATE=$(LC_TIME=C date -d@0) git am -
echo "Apply patch to VK-GL-CTS from $PATCH_URL"
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 $PATCH_URL | \
git am -
done
cts_patch_files="${prefix}_cts_patch_files[@]"
for patch in "${!cts_patch_files}"
cts_patch_files=(
# Android specific patches.
build-deqp_Allow-running-on-Android-from-the-command-line.patch
build-deqp_Android-prints-to-stdout-instead-of-logcat.patch
)
for patch in "${cts_patch_files[@]}"
do
echo "Apply patch to ${DEQP_API} CTS from $patch"
GIT_COMMITTER_DATE=$(LC_TIME=C date -d@0) git am < $OLDPWD/.gitlab-ci/container/patches/$patch
echo "Apply patch to VK-GL-CTS from $patch"
git am < $OLDPWD/.gitlab-ci/container/patches/$patch
done
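# The ${prefix}_cts_commits_to_backport / ${!name} pairing above is bash indirect
# expansion: the array name is built from the prefix, then that name's elements are
# expanded. A minimal sketch with hypothetical values (illustrative only):
#   vk_cts_commits_to_backport=(1111111 2222222)   # hypothetical commit ids
#   prefix=vk
#   ref="${prefix}_cts_commits_to_backport[@]"
#   for commit in "${!ref}"; do
#     echo "would backport $commit"
#   done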
{
if [ "$DEQP_VERSION" = "$DEQP_MAIN_COMMIT" ]; then
commit_desc=$(git show --no-patch --format='commit %h on %ci' --abbrev=10 "$DEQP_COMMIT")
echo "dEQP $DEQP_API at $commit_desc"
else
echo "dEQP $DEQP_API version $DEQP_VERSION"
fi
if [ "$(git rev-parse HEAD)" != "$DEQP_COMMIT" ]; then
echo "The following local patches are applied on top:"
git log --reverse --oneline "$DEQP_COMMIT".. --format='- %s'
fi
} > /deqp-$deqp_api/deqp-$deqp_api-version
echo "The following local patches are applied on top:" >> /deqp/version-log
git log --reverse --oneline $DEQP_VERSION.. --format=%s | sed 's/^/- /' >> /deqp/version-log
# --insecure is due to SSL cert failures hitting sourceforge for zlib and
# libpng (sigh). The archives get their checksums checked anyway, and git
# always goes through ssh or https.
python3 external/fetch_sources.py --insecure
case "${DEQP_API}" in
VK-main)
# Video tests rely on external files
python3 external/fetch_video_decode_samples.py
python3 external/fetch_video_encode_samples.py
;;
esac
if [[ "$DEQP_API" = tools ]]; then
# Save the testlog stylesheets:
cp doc/testlog-stylesheet/testlog.{css,xsl} /deqp-$deqp_api
fi
# Save the testlog stylesheets:
cp doc/testlog-stylesheet/testlog.{css,xsl} /deqp
popd
deqp_build_targets=()
case "${DEQP_API}" in
VK|VK-main)
deqp_build_targets+=(deqp-vk)
;;
GL)
deqp_build_targets+=(glcts)
;;
GLES)
deqp_build_targets+=(deqp-gles{2,3,31})
deqp_build_targets+=(glcts) # needed for gles*-khr tests
# deqp-egl also comes from this build, but it is handled separately below.
;;
tools)
deqp_build_targets+=(testlog-to-xml)
deqp_build_targets+=(testlog-to-csv)
deqp_build_targets+=(testlog-to-junit)
;;
esac
pushd /deqp
OLD_IFS="$IFS"
IFS=";"
CMAKE_SBT="${deqp_build_targets[*]}"
IFS="$OLD_IFS"
pushd /deqp-$deqp_api
if [ "${DEQP_API}" = 'GLES' ]; then
if [ "${DEQP_TARGET}" = 'android' ]; then
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=android \
-DCMAKE_BUILD_TYPE=Release \
-DSELECTED_BUILD_TARGETS="deqp-egl" \
${EXTRA_CMAKE_ARGS:-}
ninja modules/egl/deqp-egl
mv modules/egl/deqp-egl{,-android}
else
if [ "${DEQP_TARGET}" != 'android' ]; then
# When including EGL/X11 testing, do that build first and save off its
# deqp-egl binary.
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=x11_egl_glx \
-DCMAKE_BUILD_TYPE=Release \
-DSELECTED_BUILD_TARGETS="deqp-egl" \
${EXTRA_CMAKE_ARGS:-}
$EXTRA_CMAKE_ARGS
ninja modules/egl/deqp-egl
mv modules/egl/deqp-egl{,-x11}
mv /deqp/modules/egl/deqp-egl /deqp/modules/egl/deqp-egl-x11
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=wayland \
-DCMAKE_BUILD_TYPE=Release \
-DSELECTED_BUILD_TARGETS="deqp-egl" \
${EXTRA_CMAKE_ARGS:-}
$EXTRA_CMAKE_ARGS
ninja modules/egl/deqp-egl
mv modules/egl/deqp-egl{,-wayland}
fi
mv /deqp/modules/egl/deqp-egl /deqp/modules/egl/deqp-egl-wayland
fi
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=${DEQP_TARGET} \
-DDEQP_TARGET=${DEQP_TARGET:-default} \
-DCMAKE_BUILD_TYPE=Release \
-DSELECTED_BUILD_TARGETS="${CMAKE_SBT}" \
${EXTRA_CMAKE_ARGS:-}
$EXTRA_CMAKE_ARGS
# Make sure `default` doesn't silently stop detecting one of the platforms we care about
if [ "${DEQP_TARGET}" = 'default' ]; then
@@ -242,73 +109,57 @@ if [ "${DEQP_TARGET}" = 'default' ]; then
grep -q DEQP_SUPPORT_XCB=1 build.ninja
fi
ninja "${deqp_build_targets[@]}"
mold --run ninja
if [ "$DEQP_API" != tools ]; then
# Copy out the mustpass lists we want.
mkdir -p mustpass
if [ "${DEQP_API}" = 'VK' ] || [ "${DEQP_API}" = 'VK-main' ]; then
for mustpass in $(< /VK-GL-CTS/external/vulkancts/mustpass/main/vk-default.txt) ; do
cat /VK-GL-CTS/external/vulkancts/mustpass/main/$mustpass \
>> mustpass/vk-main.txt
done
fi
if [ "${DEQP_API}" = 'GL' ]; then
cp \
/VK-GL-CTS/external/openglcts/data/gl_cts/data/mustpass/gl/khronos_mustpass/main/*-main.txt \
mustpass/
cp \
/VK-GL-CTS/external/openglcts/data/gl_cts/data/mustpass/gl/khronos_mustpass_single/main/*-single.txt \
mustpass/
fi
if [ "${DEQP_API}" = 'GLES' ]; then
cp \
/VK-GL-CTS/external/openglcts/data/gl_cts/data/mustpass/gles/aosp_mustpass/main/*.txt \
mustpass/
cp \
/VK-GL-CTS/external/openglcts/data/gl_cts/data/mustpass/egl/aosp_mustpass/main/egl-main.txt \
mustpass/
cp \
/VK-GL-CTS/external/openglcts/data/gl_cts/data/mustpass/gles/khronos_mustpass/main/*-main.txt \
mustpass/
fi
# Compress the caselists, since Vulkan's in particular are gigantic; higher
# compression levels provide no real measurable benefit.
zstd -f -1 --rm mustpass/*.txt
if [ "${DEQP_TARGET}" = 'android' ]; then
mv /deqp/modules/egl/deqp-egl /deqp/modules/egl/deqp-egl-android
fi
if [ "$DEQP_API" = tools ]; then
# Copy out the mustpass lists we want.
mkdir /deqp/mustpass
for mustpass in $(< /VK-GL-CTS/external/vulkancts/mustpass/main/vk-default.txt) ; do
cat /VK-GL-CTS/external/vulkancts/mustpass/main/$mustpass \
>> /deqp/mustpass/vk-master.txt
done
if [ "${DEQP_TARGET}" != 'android' ]; then
cp \
/deqp/external/openglcts/modules/gl_cts/data/mustpass/gles/aosp_mustpass/3.2.6.x/*.txt \
/deqp/mustpass/.
cp \
/deqp/external/openglcts/modules/gl_cts/data/mustpass/egl/aosp_mustpass/3.2.6.x/egl-master.txt \
/deqp/mustpass/.
cp \
/deqp/external/openglcts/modules/gl_cts/data/mustpass/gles/khronos_mustpass/3.2.6.x/*-master.txt \
/deqp/mustpass/.
cp \
/deqp/external/openglcts/modules/gl_cts/data/mustpass/gl/khronos_mustpass/4.6.1.x/*-master.txt \
/deqp/mustpass/.
cp \
/deqp/external/openglcts/modules/gl_cts/data/mustpass/gl/khronos_mustpass_single/4.6.1.x/*-single.txt \
/deqp/mustpass/.
# Save *some* executor utils, but otherwise strip things down
# to reduce the deqp build size:
mv executor/testlog-to-* .
rm -rf executor
mkdir /deqp/executor.save
cp /deqp/executor/testlog-to-* /deqp/executor.save
rm -rf /deqp/executor
mv /deqp/executor.save /deqp/executor
fi
# Remove other mustpass files, since we saved off the ones we wanted to convenient locations above.
rm -rf assets/**/mustpass/
rm -rf external/**/mustpass/
rm -rf external/vulkancts/modules/vulkan/vk-main*
rm -rf external/vulkancts/modules/vulkan/vk-default
rm -rf /deqp/external/openglcts/modules/gl_cts/data/mustpass
rm -rf /deqp/external/vulkancts/modules/vulkan/vk-master*
rm -rf /deqp/external/vulkancts/modules/vulkan/vk-default
rm -rf external/openglcts/modules/cts-runner
rm -rf modules/internal
rm -rf execserver
rm -rf framework
rm -rf /deqp/external/openglcts/modules/cts-runner
rm -rf /deqp/modules/internal
rm -rf /deqp/execserver
rm -rf /deqp/framework
find . -depth \( -iname '*cmake*' -o -name '*ninja*' -o -name '*.o' -o -name '*.a' \) -exec rm -rf {} \;
if [ "${DEQP_API}" = 'VK' ] || [ "${DEQP_API}" = 'VK-main' ]; then
${STRIP_CMD:-strip} external/vulkancts/modules/vulkan/deqp-vk
fi
if [ "${DEQP_API}" = 'GL' ] || [ "${DEQP_API}" = 'GLES' ]; then
${STRIP_CMD:-strip} external/openglcts/modules/glcts
fi
if [ "${DEQP_API}" = 'GLES' ]; then
${STRIP_CMD:-strip} modules/*/deqp-*
fi
${STRIP_CMD:-strip} external/vulkancts/modules/vulkan/deqp-vk
${STRIP_CMD:-strip} external/openglcts/modules/glcts
${STRIP_CMD:-strip} modules/*/deqp-*
du -sh ./*
rm -rf /VK-GL-CTS
popd
section_end deqp-$deqp_api


@@ -5,15 +5,11 @@
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
set -uex
set -ex
uncollapsed_section_start directx-headers "Building directx-headers"
git clone https://github.com/microsoft/DirectX-Headers -b v1.614.1 --depth 1
git clone https://github.com/microsoft/DirectX-Headers -b v1.611.0 --depth 1
pushd DirectX-Headers
meson setup build --backend=ninja --buildtype=release -Dbuild-test=false ${EXTRA_MESON_ARGS:-}
meson setup build --backend=ninja --buildtype=release -Dbuild-test=false $EXTRA_MESON_ARGS
meson install -C build
popd
rm -rf DirectX-Headers
section_end directx-headers
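A recurring change across these hunks is pairing `set -u` (via `set -uex`) with `${VAR:-}` defaults for optional variables such as EXTRA_MESON_ARGS, EXTRA_CMAKE_ARGS and EXTRA_CARGO_ARGS: under `set -u`, expanding an unset variable aborts the script, while the `:-` form falls back to an empty string. A minimal sketch of the difference, illustrative only:

set -u
# echo "args: $EXTRA_MESON_ARGS"    # would abort with "EXTRA_MESON_ARGS: unbound variable" if unset
echo "args: ${EXTRA_MESON_ARGS:-}"  # expands to an empty string when the variable is unset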


@@ -1,52 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC1091 # The relative paths in this file only become valid at runtime.
# shellcheck disable=SC2034 # Variables are used in scripts called from here
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_VIDEO_TAG
# Install fluster in /fluster.
set -uex
section_start fluster "Installing Fluster"
# Do a very early check to make sure the tag is correct without the need of
# setting up the environment variables locally
ci_tag_build_time_check "FLUSTER_TAG"
FLUSTER_REVISION="e997402978f62428fffc8e5a4a709690d9ca9bc5"
git clone https://github.com/fluendo/fluster.git --single-branch --no-checkout
pushd fluster || exit
git checkout "${FLUSTER_REVISION}"
popd || exit
ARTIFACT_PATH="${DATA_STORAGE_PATH}/fluster/${FLUSTER_TAG}/vectors.tar.zst"
if FOUND_ARTIFACT_URL="$(find_s3_project_artifact "${ARTIFACT_PATH}")"; then
echo "Found fluster vectors at: ${FOUND_ARTIFACT_URL}"
mv fluster/ /
curl-with-retry "${FOUND_ARTIFACT_URL}" | tar --zstd -x -C /
else
echo "No cached vectors found, rebuilding..."
# Download the necessary vectors: H264, H265 and VP9
# When updating FLUSTER_REVISION, make sure to update the vectors if necessary or
# fluster-runner will report Missing results.
fluster/fluster.py download -j ${FDO_CI_CONCURRENT:-4} \
JVT-AVC_V1 JVT-FR-EXT JVT-MVC JVT-SVC_V1 \
JCT-VC-3D-HEVC JCT-VC-HEVC_V1 JCT-VC-MV-HEVC JCT-VC-RExt JCT-VC-SCC JCT-VC-SHVC \
VP9-TEST-VECTORS-HIGH VP9-TEST-VECTORS
# Build fluster vectors archive and upload it
tar --zstd -cf "vectors.tar.zst" fluster/resources/
ci-fairy s3cp --token-file "${S3_JWT_FILE}" "vectors.tar.zst" \
"https://${S3_BASE_PATH}/${CI_PROJECT_PATH}/${ARTIFACT_PATH}"
mv fluster/ /
fi
section_end fluster
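The android-cts, fluster and vkd3d-proton scripts in this compare share the same caching shape: probe S3 for a previously uploaded artifact, download it on a hit, otherwise build the content and upload it for later runs. A condensed sketch of that pattern, assuming the find_s3_project_artifact, curl-with-retry and ci-fairy helpers behave as they are used above, and using a hypothetical artifact path and EXAMPLE_TAG variable:

ARTIFACT_PATH="${DATA_STORAGE_PATH}/example/${EXAMPLE_TAG}/example.tar.zst"  # hypothetical
if FOUND_ARTIFACT_URL="$(find_s3_project_artifact "${ARTIFACT_PATH}")"; then
    # cache hit: unpack the previously uploaded artifact
    curl-with-retry "${FOUND_ARTIFACT_URL}" | tar --zstd -x -C /
else
    # cache miss: build the content, then upload it so later runs can reuse it
    tar --zstd -cf example.tar.zst /example
    ci-fairy s3cp --token-file "${S3_JWT_FILE}" example.tar.zst \
        "https://${S3_BASE_PATH}/${CI_PROJECT_PATH}/${ARTIFACT_PATH}"
fi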


@@ -2,12 +2,11 @@
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_VK_TAG
# DEBIAN_X86_64_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
set -ex
uncollapsed_section_start fossilize "Building fossilize"
git clone https://github.com/ValveSoftware/Fossilize.git
cd Fossilize
git checkout b43ee42bbd5631ea21fe9a2dee4190d5d875c327
@@ -18,5 +17,3 @@ cmake -S .. -B . -G Ninja -DCMAKE_BUILD_TYPE=Release
ninja -C . install
cd ../..
rm -rf Fossilize
section_end fossilize


@@ -2,8 +2,6 @@
set -ex
uncollapsed_section_start gfxreconstruct "Building gfxreconstruct"
GFXRECONSTRUCT_VERSION=761837794a1e57f918a85af7000b12e531b178ae
git clone https://github.com/LunarG/gfxreconstruct.git \
@@ -19,5 +17,3 @@ cmake -S . -B _build -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX:
cmake --build _build --parallel --target tools/{replay,info}/install/strip
find . -not -path './build' -not -path './build/*' -delete
popd
section_end gfxreconstruct


@@ -0,0 +1,16 @@
#!/bin/bash
set -ex
PARALLEL_DEQP_RUNNER_VERSION=fe557794b5dadd8dbf0eae403296625e03bda18a
git clone https://gitlab.freedesktop.org/mesa/parallel-deqp-runner --single-branch -b master --no-checkout /parallel-deqp-runner
pushd /parallel-deqp-runner
git checkout "$PARALLEL_DEQP_RUNNER_VERSION"
meson . _build
ninja -C _build hang-detection
mkdir -p build/bin
install _build/hang-detection build/bin
strip build/bin/*
find . -not -path './build' -not -path './build/*' -delete
popd


@@ -3,30 +3,21 @@
set -ex
uncollapsed_section_start kdl "Building kdl"
KDL_REVISION="5056f71b100a68b72b285c6fc845a66a2ed25985"
KDL_REVISION="cbbe5fd54505fd03ee34f35bfd16794f0c30074f"
KDL_CHECKOUT_DIR="/tmp/ci-kdl.git"
mkdir -p ${KDL_CHECKOUT_DIR}
pushd ${KDL_CHECKOUT_DIR}
mkdir ci-kdl.git
pushd ci-kdl.git
git init
git remote add origin https://gitlab.freedesktop.org/gfx-ci/ci-kdl.git
git fetch --depth 1 origin ${KDL_REVISION}
git checkout FETCH_HEAD
popd
# Run venv in a subshell, so we don't accidentally leak the venv state into
# calling scripts
(
python3 -m venv /ci-kdl
source /ci-kdl/bin/activate &&
pushd ${KDL_CHECKOUT_DIR} &&
pip install -r requirements.txt &&
pip install . &&
popd
)
python3 -m venv ci-kdl.venv
source ci-kdl.venv/bin/activate
pushd ci-kdl.git
pip install -r requirements.txt
pip install .
popd
rm -rf ${KDL_CHECKOUT_DIR}
section_end kdl
rm -rf ci-kdl.git


@@ -0,0 +1,31 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC2153
set -ex
mkdir -p kernel
pushd kernel
if [[ ${DEBIAN_ARCH} = "arm64" ]]; then
KERNEL_IMAGE_NAME+=" cheza-kernel"
fi
for image in ${KERNEL_IMAGE_NAME}; do
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-o "/lava-files/${image}" "${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${image}"
done
for dtb in ${DEVICE_TREES}; do
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-o "/lava-files/${dtb}" "${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${dtb}"
done
mkdir -p "/lava-files/rootfs-${DEBIAN_ARCH}"
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/modules.tar.zst"
tar --keep-directory-symlink --zstd -xf modules.tar.zst -C "/lava-files/rootfs-${DEBIAN_ARCH}/"
popd
rm -rf kernel


@@ -1,8 +1,6 @@
#!/usr/bin/env bash
set -uex
uncollapsed_section_start libclc "Building libclc"
set -ex
export LLVM_CONFIG="llvm-config-${LLVM_VERSION:?"llvm unset!"}"
LLVM_TAG="llvmorg-15.0.7"
@@ -31,5 +29,3 @@ ln -s /usr/share/clc/spirv-mesa3d-.spv /usr/lib/clc/
du -sh ./*
rm -rf /libclc /llvm-project
section_end libclc


@@ -1,21 +1,16 @@
#!/usr/bin/env bash
# Script used for Android and Fedora builds (Debian builds get their libdrm version
# from https://gitlab.freedesktop.org/gfx-ci/ci-deb-repo - see PKG_REPO_REV)
# Script used for Android and Fedora builds
# shellcheck disable=SC2086 # we want word splitting
set -uex
set -ex
uncollapsed_section_start libdrm "Building libdrm"
export LIBDRM_VERSION=libdrm-2.4.122
export LIBDRM_VERSION=libdrm-2.4.119
curl -L -O --retry 4 -f --retry-all-errors --retry-delay 60 \
https://dri.freedesktop.org/libdrm/"$LIBDRM_VERSION".tar.xz
tar -xvf "$LIBDRM_VERSION".tar.xz && rm "$LIBDRM_VERSION".tar.xz
cd "$LIBDRM_VERSION"
meson setup build -D vc4=disabled -D freedreno=disabled -D etnaviv=disabled ${EXTRA_MESON_ARGS:-}
meson setup build -D vc4=disabled -D freedreno=disabled -D etnaviv=disabled $EXTRA_MESON_ARGS
meson install -C build
cd ..
rm -rf "$LIBDRM_VERSION"
section_end libdrm


@@ -2,13 +2,7 @@
set -ex
uncollapsed_section_start llvm-spirv "Building LLVM-SPIRV-Translator"
if [ "${LLVM_VERSION:?llvm version not set}" -ge 18 ]; then
VER="${LLVM_VERSION}.1.0"
else
VER="${LLVM_VERSION}.0.0"
fi
VER="${LLVM_VERSION:?llvm not set}.0.0"
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "https://github.com/KhronosGroup/SPIRV-LLVM-Translator/archive/refs/tags/v${VER}.tar.gz"
@@ -26,5 +20,3 @@ popd
du -sh "SPIRV-LLVM-Translator-${VER}"
rm -rf "SPIRV-LLVM-Translator-${VER}"
section_end llvm-spirv


@@ -2,30 +2,14 @@
set -ex
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# ALPINE_X86_64_BUILD_TAG
# DEBIAN_BASE_TAG
# DEBIAN_BUILD_TAG
# FEDORA_X86_64_BUILD_TAG
uncollapsed_section_start mold "Building mold"
MOLD_VERSION="2.32.0"
MOLD_VERSION="1.11.0"
git clone -b v"$MOLD_VERSION" --single-branch --depth 1 https://github.com/rui314/mold.git
pushd mold
cmake -DCMAKE_BUILD_TYPE=Release -D BUILD_TESTING=OFF -D MOLD_LTO=ON
cmake --build . --parallel "${FDO_CI_CONCURRENT:-4}"
cmake --install . --strip
# Always use mold from now on
find /usr/bin \( -name '*-ld' -o -name 'ld' \) \
-exec ln -sf /usr/local/bin/ld.mold {} \; \
-exec ls -l {} +
cmake --build . --parallel
cmake --install .
popd
rm -rf mold
section_end mold


@@ -1,30 +1,24 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -uex
section_start piglit "Building piglit"
set -ex
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_GL_TAG
# DEBIAN_TEST_VK_TAG
# DEBIAN_X86_64_TEST_GL_TAG
# DEBIAN_X86_64_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
# Do a very early check to make sure the tag is correct without the need of
# setting up the environment variables locally
ci_tag_build_time_check "PIGLIT_TAG"
REV="a0a27e528f643dfeb785350a1213bfff09681950"
REV="f7db20b03de6896d013826c0a731bc4417c1a5a0"
git clone https://gitlab.freedesktop.org/mesa/piglit.git --single-branch --no-checkout /piglit
pushd /piglit
git checkout "$REV"
patch -p1 <$OLDPWD/.gitlab-ci/piglit/disable-vs_in.diff
cmake -S . -B . -G Ninja -DCMAKE_BUILD_TYPE=Release $PIGLIT_OPTS ${EXTRA_CMAKE_ARGS:-}
ninja ${PIGLIT_BUILD_TARGETS:-}
find . -depth \( -name .git -o -name '*ninja*' -o -iname '*cmake*' -o -name '*.[chao]' \) \
! -name 'include_test.h' -exec rm -rf {} \;
cmake -S . -B . -G Ninja -DCMAKE_BUILD_TYPE=Release $PIGLIT_OPTS $EXTRA_CMAKE_ARGS
ninja $PIGLIT_BUILD_TARGETS
find . -depth \( -name .git -o -name '*ninja*' -o -iname '*cmake*' -o -name '*.[chao]' \) -exec rm -rf {} \;
rm -rf target_api
if [ "${PIGLIT_BUILD_TARGETS:-}" = "piglit_replayer" ]; then
if [ "$PIGLIT_BUILD_TARGETS" = "piglit_replayer" ]; then
find . -depth \
! -regex "^\.$" \
! -regex "^\.\/piglit.*" \
@@ -37,5 +31,3 @@ if [ "${PIGLIT_BUILD_TARGETS:-}" = "piglit_replayer" ]; then
-exec rm -rf {} \; 2>/dev/null
fi
popd
section_end piglit


@@ -5,10 +5,17 @@
set -ex
section_start rust "Building Rust toolchain"
# cargo (and rustup) wants to store stuff in $HOME/.cargo, and binaries in
# $HOME/.cargo/bin. Make bin a link to a public bin directory so the commands
# are just available to all build jobs.
mkdir -p "$HOME"/.cargo
ln -s /usr/local/bin "$HOME"/.cargo/bin
# Pick a specific snapshot from rustup so the compiler doesn't drift on us.
RUST_VERSION=1.81.0-2024-09-05
# Rusticl requires at least Rust 1.66.0 and NAK requires 1.73.0
#
# Also, pick a specific snapshot from rustup so the compiler doesn't drift on
# us.
RUST_VERSION=1.73.0-2023-10-05
# For rust in Mesa, we use rustup to install. This lets us pick an arbitrary
# version of the compiler, rather than whatever the container's Debian comes
@@ -19,20 +26,14 @@ curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
--profile minimal \
-y
# Make rustup tools available in the PATH environment variable
# shellcheck disable=SC1091
. "$HOME/.cargo/env"
rustup component add clippy rustfmt
# Set up a config script for cross compiling -- cargo needs your system cc for
# linking in cross builds, but doesn't know what you want to use for system cc.
cat > "$HOME/.cargo/config" <<EOF
cat > /root/.cargo/config <<EOF
[target.armv7-unknown-linux-gnueabihf]
linker = "arm-linux-gnueabihf-gcc"
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
EOF
section_end rust


@@ -6,13 +6,9 @@
set -ex
uncollapsed_section_start shader-db "Building shader-db"
pushd /usr/local
git clone https://gitlab.freedesktop.org/mesa/shader-db.git --depth 1
rm -rf shader-db/.git
cd shader-db
make
popd
section_end shader-db


@@ -6,35 +6,15 @@
#
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_GL_TAG
set -uex
uncollapsed_section_start skqp "Building SkQP"
# KERNEL_ROOTFS_TAG
SKQP_BRANCH=android-cts-12.1_r5
SCRIPT_DIR="$(pwd)/.gitlab-ci/container"
SKQP_PATCH_DIR="${SCRIPT_DIR}/patches"
BASE_ARGS_GN_FILE="${SCRIPT_DIR}/build-skqp_base.gn"
case "$DEBIAN_ARCH" in
amd64)
SKQP_ARCH=x64
;;
armhf)
SKQP_ARCH=arm
;;
arm64)
SKQP_ARCH=arm64
;;
esac
SKIA_DIR=${SKIA_DIR:-$(mktemp -d)}
SKQP_OUT_DIR=${SKIA_DIR}/out/${SKQP_ARCH}
SKQP_INSTALL_DIR=${SKQP_INSTALL_DIR:-/skqp}
SKQP_ASSETS_DIR="${SKQP_INSTALL_DIR}/assets"
SKQP_BINARIES=(skqp list_gpu_unit_tests list_gms)
# hack for skqp: see the clang symlinks below
pushd /usr/bin/
ln -s ../lib/llvm-15/bin/clang clang
ln -s ../lib/llvm-15/bin/clang++ clang++
popd
create_gn_args() {
# gn can be configured to cross-compile skia and its tools
@@ -58,6 +38,19 @@ download_skia_source() {
git clone --branch "${SKQP_BRANCH}" --depth 1 "${SKQP_REPO}" "${SKIA_DIR}"
}
set -ex
SCRIPT_DIR=$(realpath "$(dirname "$0")")
SKQP_PATCH_DIR="${SCRIPT_DIR}/patches"
BASE_ARGS_GN_FILE="${SCRIPT_DIR}/build-skqp_base.gn"
SKQP_ARCH=${SKQP_ARCH:-x64}
SKIA_DIR=${SKIA_DIR:-$(mktemp -d)}
SKQP_OUT_DIR=${SKIA_DIR}/out/${SKQP_ARCH}
SKQP_INSTALL_DIR=${SKQP_INSTALL_DIR:-/skqp}
SKQP_ASSETS_DIR="${SKQP_INSTALL_DIR}/assets"
SKQP_BINARIES=(skqp list_gpu_unit_tests list_gms)
download_skia_source
pushd "${SKIA_DIR}"
@@ -66,16 +59,10 @@ pushd "${SKIA_DIR}"
cat "${SKQP_PATCH_DIR}"/build-skqp_*.patch |
patch -p1
# hack for skqp: see the clang symlinks below
pushd /usr/bin/
ln -s "../lib/llvm-${LLVM_VERSION}/bin/clang" clang
ln -s "../lib/llvm-${LLVM_VERSION}/bin/clang++" clang++
popd
# Fetch some build tools needed to build skia/skqp.
# Basically, it clones repositories at the commit SHAs listed in the
# ${SKIA_DIR}/DEPS directory.
python3 tools/git-sync-deps
python tools/git-sync-deps
mkdir -p "${SKQP_OUT_DIR}"
mkdir -p "${SKQP_INSTALL_DIR}"
@@ -100,5 +87,3 @@ popd
rm -Rf "${SKIA_DIR}"
set +ex
section_end skqp


@@ -34,11 +34,6 @@ extra_cflags_cc = [
"-Wno-unused-but-set-variable",
"-Wno-sizeof-array-div",
"-Wno-string-concatenation",
"-Wno-unsafe-buffer-usage",
"-Wno-switch-default",
"-Wno-cast-function-type-strict",
"-Wno-format",
"-Wno-enum-constexpr-conversion",
]
cc_wrapper = "ccache"


@@ -1,13 +1,10 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_VIDEO_TAG
# KERNEL_ROOTFS_TAG
set -uex
section_start va-tools "Building va-tools"
set -ex
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
@@ -20,11 +17,9 @@ git clone \
pushd /va-utils
# libva in Debian 11 is too old. TODO: once this PR is merged, refer to the patch.
curl --fail -L https://github.com/intel/libva-utils/pull/329.patch | git am
curl -L https://github.com/intel/libva-utils/pull/329.patch | git am
meson setup build -D tests=true -Dprefix=/va ${EXTRA_MESON_ARGS:-}
meson setup build -D tests=true -Dprefix=/va $EXTRA_MESON_ARGS
meson install -C build
popd
rm -rf /va-utils
section_end va-tools


@@ -2,66 +2,42 @@
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_VK_TAG
# DEBIAN_X86_64_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
set -ex
section_start vkd3d-proton "Building vkd3d-proton"
# Do a very early check to make sure the tag is correct without the need of
# setting up the environment variables locally
ci_tag_build_time_check "VKD3D_PROTON_TAG"
VKD3D_PROTON_COMMIT="6be781076617cb2cb3038710618acc3b57a674db"
VKD3D_PROTON_COMMIT="a0ccc383937903f4ca0997ce53e41ccce7f2f2ec"
VKD3D_PROTON_DST_DIR="/vkd3d-proton-tests"
VKD3D_PROTON_SRC_DIR="/vkd3d-proton-src"
VKD3D_PROTON_BUILD_DIR="/vkd3d-proton-build"
VKD3D_PROTON_WINE_DIR="/vkd3d-proton-wine64"
VKD3D_PROTON_S3_ARTIFACT="vkd3d-proton.tar.zst"
VKD3D_PROTON_BUILD_DIR="/vkd3d-proton-$VKD3D_PROTON_VERSION"
if [ ! -d "$VKD3D_PROTON_WINE_DIR" ]; then
echo "Fatal: Directory '$VKD3D_PROTON_WINE_DIR' does not exist. Aborting."
exit 1
fi
function build_arch {
local arch="$1"
shift
meson "$@" \
-Denable_tests=true \
--buildtype release \
--prefix "$VKD3D_PROTON_DST_DIR" \
--strip \
--bindir "x${arch}" \
--libdir "x${arch}" \
"$VKD3D_PROTON_BUILD_DIR/build.${arch}"
ninja -C "$VKD3D_PROTON_BUILD_DIR/build.${arch}" install
install -D -m755 -t "${VKD3D_PROTON_DST_DIR}/x${arch}/bin" "$VKD3D_PROTON_BUILD_DIR/build.${arch}/tests/d3d12"
}
git clone https://github.com/HansKristian-Work/vkd3d-proton.git --single-branch -b master --no-checkout "$VKD3D_PROTON_SRC_DIR"
pushd "$VKD3D_PROTON_SRC_DIR"
git checkout "$VKD3D_PROTON_COMMIT"
git submodule update --init --recursive
git submodule update --recursive
meson setup \
-D enable_tests=true \
--buildtype release \
--prefix "$VKD3D_PROTON_DST_DIR" \
--strip \
--libdir "lib" \
"$VKD3D_PROTON_BUILD_DIR/build"
ninja -C "$VKD3D_PROTON_BUILD_DIR/build" install
install -m755 -t "${VKD3D_PROTON_DST_DIR}/" "$VKD3D_PROTON_BUILD_DIR/build/tests/d3d12"
mkdir "$VKD3D_PROTON_DST_DIR/tests"
cp \
"tests/test-runner.sh" \
"tests/d3d12_tests.h" \
"$VKD3D_PROTON_DST_DIR/tests/"
build_arch 64
build_arch 86
popd
# Archive and upload vkd3d-proton for use as a LAVA overlay, if the archive doesn't exist yet
ARTIFACT_PATH="${DATA_STORAGE_PATH}/vkd3d-proton/${VKD3D_PROTON_TAG}/${CI_JOB_NAME}/${VKD3D_PROTON_S3_ARTIFACT}"
if FOUND_ARTIFACT_URL="$(find_s3_project_artifact "${ARTIFACT_PATH}")"; then
echo "Found vkd3d-proton at: ${FOUND_ARTIFACT_URL}, skipping upload"
else
echo "Uploaded vkd3d-proton not found, reuploading..."
tar --zstd -cf "$VKD3D_PROTON_S3_ARTIFACT" -C / "${VKD3D_PROTON_DST_DIR#/}" "${VKD3D_PROTON_WINE_DIR#/}"
ci-fairy s3cp --token-file "${S3_JWT_FILE}" "$VKD3D_PROTON_S3_ARTIFACT" \
"https://${S3_BASE_PATH}/${CI_PROJECT_PATH}/${ARTIFACT_PATH}"
rm "$VKD3D_PROTON_S3_ARTIFACT"
fi
rm -rf "$VKD3D_PROTON_BUILD_DIR"
rm -rf "$VKD3D_PROTON_SRC_DIR"
section_end vkd3d-proton


@@ -2,23 +2,17 @@
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_GL_TAG
# DEBIAN_X86_64_TEST_GL_TAG
# KERNEL_ROOTFS_TAG:
set -uex
set -ex
uncollapsed_section_start vulkan-validation "Building Vulkan validation layers"
VALIDATION_TAG="snapshot-2025wk15"
VALIDATION_TAG="v1.3.269"
git clone -b "$VALIDATION_TAG" --single-branch --depth 1 https://github.com/KhronosGroup/Vulkan-ValidationLayers.git
pushd Vulkan-ValidationLayers
# we don't need to build SPIRV-Tools tools
sed -i scripts/known_good.json -e 's/SPIRV_SKIP_EXECUTABLES=OFF/SPIRV_SKIP_EXECUTABLES=ON/'
python3 scripts/update_deps.py --dir external --config release --generator Ninja --optional tests
python3 scripts/update_deps.py --dir external --config debug
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr -DBUILD_TESTS=OFF -DBUILD_WERROR=OFF -C external/helper.cmake -S . -B build
ninja -C build -j"${FDO_CI_CONCURRENT:-4}"
cmake --install build --strip
ninja -C build install
popd
rm -rf Vulkan-ValidationLayers
section_end vulkan-validation


@@ -1,34 +1,15 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
set -uex
set -ex
uncollapsed_section_start wayland "Building Wayland"
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# ALPINE_X86_64_BUILD_TAG
# DEBIAN_BASE_TAG
# DEBIAN_BUILD_TAG
# DEBIAN_TEST_ANDROID_TAG
# DEBIAN_TEST_GL_TAG
# DEBIAN_TEST_VK_TAG
# FEDORA_X86_64_BUILD_TAG
export LIBWAYLAND_VERSION="1.24.0"
export WAYLAND_PROTOCOLS_VERSION="1.41"
export LIBWAYLAND_VERSION="1.21.0"
export WAYLAND_PROTOCOLS_VERSION="1.31"
git clone https://gitlab.freedesktop.org/wayland/wayland
cd wayland
git checkout "$LIBWAYLAND_VERSION"
# Build the scanner first in case we're a cross build
# Note the lack of EXTRA_MESON_ARGS here. This is always a native build
meson setup -Dtests=false -Ddocumentation=false -Ddtd_validation=false -Dlibraries=false -Dscanner=true _scanner
meson install -C _scanner
# Now build libwayland using the given scanner
meson setup -Ddocumentation=false -Ddtd_validation=false -Dlibraries=true -Dscanner=false _build ${EXTRA_MESON_ARGS:-}
meson setup -Ddocumentation=false -Ddtd_validation=false -Dlibraries=true _build $EXTRA_MESON_ARGS
meson install -C _build
cd ..
rm -rf wayland
@@ -36,9 +17,7 @@ rm -rf wayland
git clone https://gitlab.freedesktop.org/wayland/wayland-protocols
cd wayland-protocols
git checkout "$WAYLAND_PROTOCOLS_VERSION"
meson setup -Dtests=false _build ${EXTRA_MESON_ARGS:-}
meson setup _build $EXTRA_MESON_ARGS
meson install -C _build
cd ..
rm -rf wayland-protocols
section_end wayland


@@ -1,52 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
set -uex
section_start weston "Building Weston"
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BASE_TAG
export WESTON_VERSION="14.0.1"
git clone https://gitlab.freedesktop.org/wayland/weston
cd weston
git checkout "$WESTON_VERSION"
meson setup \
-Dbackend-drm=false \
-Dbackend-drm-screencast-vaapi=false \
-Dbackend-headless=true \
-Dbackend-pipewire=false \
-Dbackend-rdp=false \
-Dscreenshare=false \
-Dbackend-vnc=false \
-Dbackend-wayland=false \
-Dbackend-x11=false \
-Dbackend-default=headless \
-Drenderer-gl=true \
-Dxwayland=true \
-Dsystemd=false \
-Dremoting=false \
-Dpipewire=false \
-Dshell-desktop=true \
-Dshell-fullscreen=false \
-Dshell-ivi=false \
-Dshell-kiosk=false \
-Dcolor-management-lcms=false \
-Dimage-jpeg=false \
-Dimage-webp=false \
-Dtools= \
-Ddemo-clients=false \
-Dsimple-clients= \
-Dresize-pool=false \
-Dwcap-decode=false \
-Dtests=false \
-Ddoc=false \
_build ${EXTRA_MESON_ARGS:-}
meson install -C _build
cd ..
rm -rf weston
section_end weston


@@ -1,31 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
set -uex
uncollapsed_section_start xwayland "Building XWayland"
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BASE_TAG
#
export XORGPROTO_VERSION="xorgproto-2024.1"
export XWAYLAND_VERSION="xwayland-24.1.8"
git clone https://gitlab.freedesktop.org/xorg/proto/xorgproto
cd xorgproto
git checkout "$XORGPROTO_VERSION"
meson setup _build ${EXTRA_MESON_ARGS:-}
meson install -C _build
cd ..
rm -rf xorgproto
git clone https://gitlab.freedesktop.org/xorg/xserver
cd xserver
git checkout "$XWAYLAND_VERSION"
meson setup _build ${EXTRA_MESON_ARGS:-}
meson install -C _build
cd ..
rm -rf xserver
section_end xwayland


@@ -1,24 +0,0 @@
#!/usr/bin/env bash
# When changing this file, all the linux tags in
# .gitlab-ci/image-tags.yml need updating.
set -eu
# Early check for required env variables, relies on `set -u`
: "$S3_JWT_FILE_SCRIPT"
if [ -z "$1" ]; then
echo "usage: $(basename "$0") <CONTAINER_CI_JOB_NAME>" 1>&2
exit 1
fi
CONTAINER_CI_JOB_NAME="$1"
# Tasks to perform before executing the script of a container job
eval "$S3_JWT_FILE_SCRIPT"
unset S3_JWT_FILE_SCRIPT
trap 'rm -f ${S3_JWT_FILE}' EXIT INT TERM
bash ".gitlab-ci/container/${CONTAINER_CI_JOB_NAME}.sh"


@@ -6,6 +6,8 @@ fi
# Clean up any build cache
rm -rf /root/.cache
rm -rf /root/.cargo
rm -rf /.cargo
if test -x /usr/bin/ccache; then
ccache --show-stats


@@ -1,7 +1,4 @@
#!/bin/sh
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
if test -x /usr/bin/ccache; then
if test -f /etc/debian_version; then
@@ -16,8 +13,8 @@ if test -x /usr/bin/ccache; then
export CCACHE_COMPILERCHECK=content
export CCACHE_COMPRESS=true
export CCACHE_DIR="/cache/$CI_PROJECT_NAME/ccache"
export PATH="$CCACHE_PATH:$PATH"
export CCACHE_DIR=/cache/$CI_PROJECT_NAME/ccache
export PATH=$CCACHE_PATH:$PATH
# CMake ignores $PATH, so we have to force CC/GCC to the ccache versions.
export CC="${CCACHE_PATH}/gcc"
@@ -26,6 +23,14 @@ if test -x /usr/bin/ccache; then
ccache --show-stats
fi
# When not using the mold linker (e.g. unsupported architecture), force
# linkers to gold, since it's so much faster for building. We can't use
# lld because we're on old Debian and it's buggy. With lld, mingw meson builds
# fail with "meson.build:21:0: ERROR: Unable to determine dynamic linker"
find /usr/bin -name \*-ld -o -name ld | \
grep -v mingw | \
xargs -n 1 -I '{}' ln -sf '{}.gold' '{}'
# Make a wrapper script for ninja to always include the -j flags
{
echo '#!/bin/sh -x'
@@ -38,29 +43,10 @@ chmod +x /usr/local/bin/ninja
# flags (doesn't apply to non-container builds, but we don't run make there)
export MAKEFLAGS="-j${FDO_CI_CONCURRENT:-4}"
# Ensure that rust tools are in PATH if they exist
CARGO_ENV_FILE="$HOME/.cargo/env"
if [ -f "$CARGO_ENV_FILE" ]; then
# shellcheck disable=SC1090
source "$CARGO_ENV_FILE"
fi
ci_tag_early_checks() {
# Runs the first part of the build script to perform the tag check only
uncollapsed_section_switch "ci_tag_early_checks" "Ensuring component versions match declared tags in CI builds"
echo "[Structured Tagging] Checking components: ${CI_BUILD_COMPONENTS}"
# shellcheck disable=SC2086
for component in ${CI_BUILD_COMPONENTS}; do
bin/ci/update_tag.py --check ${component} || exit 1
done
echo "[Structured Tagging] Components check done"
section_end "ci_tag_early_checks"
}
# Check if each declared tag component is up to date before building
if [ -n "${CI_BUILD_COMPONENTS:-}" ]; then
# Remove any duplicates by splitting on whitespace, sorting, then joining back
CI_BUILD_COMPONENTS="$(echo "${CI_BUILD_COMPONENTS}" | xargs -n1 | sort -u | xargs)"
ci_tag_early_checks
fi
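A small worked example of the dedup step above, with hypothetical component names:
# Hypothetical input with a duplicate entry:
CI_BUILD_COMPONENTS="amd-llvm amd-llvm android-cts"
CI_BUILD_COMPONENTS="$(echo "${CI_BUILD_COMPONENTS}" | xargs -n1 | sort -u | xargs)"
echo "${CI_BUILD_COMPONENTS}"   # prints: amd-llvm android-cts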
# Make wget retry more than once when a download fails or times out
echo -e "retry_connrefused = on\n" \
"read_timeout = 300\n" \
"tries = 4\n" \
"retry_on_host_error = on\n" \
"retry_on_http_error = 429,500,502,503,504\n" \
"wait_retry = 32" >> /etc/wgetrc

View File

@@ -18,11 +18,11 @@ cat > "$cross_file" <<EOF
[binaries]
ar = '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-ar'
c = ['ccache', '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch2}${sdk_version}-clang', '-fno-exceptions', '-fno-unwind-tables', '-fno-asynchronous-unwind-tables']
cpp = ['ccache', '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch2}${sdk_version}-clang++', '-fno-exceptions', '-fno-unwind-tables', '-fno-asynchronous-unwind-tables', '--start-no-unused-arguments', '-static-libstdc++', '--end-no-unused-arguments']
cpp = ['ccache', '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch2}${sdk_version}-clang++', '-fno-exceptions', '-fno-unwind-tables', '-fno-asynchronous-unwind-tables', '-static-libstdc++']
c_ld = 'lld'
cpp_ld = 'lld'
strip = '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-strip'
pkg-config = ['/usr/bin/pkgconf']
pkgconfig = ['/usr/bin/pkgconf']
[host_machine]
system = 'android'
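Once the heredoc above is terminated (its EOF line falls outside this hunk), the generated file is consumed like any Meson cross file; a hedged usage sketch with placeholder build-directory names:
# Illustrative only: "$cross_file" is the file written above.
meson setup _build-android --cross-file "$cross_file"
meson compile -C _build-android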

View File

@@ -2,13 +2,10 @@
# shellcheck disable=SC2086 # we want word splitting
set -e
. .gitlab-ci/setup-test-env.sh
set -o xtrace
export DEBIAN_FRONTEND=noninteractive
: "${LLVM_VERSION:?llvm version not set!}"
export LLVM_VERSION="${LLVM_VERSION:=15}"
# Ephemeral packages (installed for this script and removed again at the end)
EPHEMERAL=(
@@ -39,6 +36,7 @@ DEPS=(
"libxrandr-dev:$arch"
"libxshmfence-dev:$arch"
"libxxf86vm-dev:$arch"
"libwayland-dev:$arch"
)
dpkg --add-architecture $arch
@@ -54,8 +52,7 @@ if [[ $arch != "armhf" ]]; then
# We don't need clang-format for the crossbuilds, but the installed amd64
# package will conflict with libclang. Uninstall clang-format (and its
# problematic dependency) to fix.
apt-get remove -y "clang-format-${LLVM_VERSION}" "libclang-cpp${LLVM_VERSION}" \
"llvm-${LLVM_VERSION}-runtime" "llvm-${LLVM_VERSION}-linker-tools"
apt-get remove -y "clang-format-${LLVM_VERSION}" "libclang-cpp${LLVM_VERSION}"
# llvm-*-tools:$arch conflicts with python3:amd64. Install dependencies only
# with apt-get, then force-install llvm-*-{dev,tools}:$arch with dpkg to get
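The commands implementing that note are cut off here; purely as a generic, hedged sketch of the download-with-apt-then-force-install-with-dpkg pattern it describes (package name is a placeholder):
# Generic sketch, not the elided upstream commands.
apt-get install -y --download-only "llvm-${LLVM_VERSION}-dev:$arch"
dpkg --force-depends -i /var/cache/apt/archives/llvm-"${LLVM_VERSION}"-dev_*.deb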

.gitlab-ci/container/debian/android_build.sh Executable file → Normal file
View File

@@ -5,11 +5,7 @@
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
set -e
. .gitlab-ci/setup-test-env.sh
set -x
set -ex
EPHEMERAL=(
autoconf
@@ -19,13 +15,11 @@ EPHEMERAL=(
apt-get install -y --no-remove "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_pre_build.sh
# Fetch the NDK and extract just the toolchain we want.
ndk="android-ndk-${ANDROID_NDK_VERSION}"
ndk=$ANDROID_NDK
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-o $ndk.zip https://dl.google.com/android/repository/$ndk-linux.zip
unzip -d / $ndk.zip "$ndk/source.properties" "$ndk/build/cmake/*" "$ndk/toolchains/llvm/*"
unzip -d / $ndk.zip "$ndk/toolchains/llvm/*"
rm $ndk.zip
# Since it was packed as a zip file, symlinks/hardlinks got turned into
# duplicate files. Turn them into hardlinks to save on container space.
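The dedup command itself falls past the end of this hunk; one generic way to do this kind of deduplication (assuming the rdfind tool is available, and not necessarily what the script uses) is:
# Generic illustration: replace duplicate files under the extracted NDK with
# hardlinks to reclaim container space.
rdfind -makehardlinks true -makeresultsfile false "/$ndk"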
@@ -40,12 +34,6 @@ sh .gitlab-ci/container/create-android-cross-file.sh /$ndk i686-linux-android x8
sh .gitlab-ci/container/create-android-cross-file.sh /$ndk aarch64-linux-android aarch64 armv8 $ANDROID_SDK_VERSION
sh .gitlab-ci/container/create-android-cross-file.sh /$ndk arm-linux-androideabi arm armv7hl $ANDROID_SDK_VERSION armv7a-linux-androideabi
# Build libdrm for the host (Debian) environment, so it's available for
# binaries we'll run as part of the build process
. .gitlab-ci/container/build-libdrm.sh
# Build libdrm for the NDK environment, so it's available when building for
# the Android target
for arch in \
x86_64-linux-android \
i686-linux-android \
@@ -97,22 +85,9 @@ for arch in \
--libdir=/usr/local/lib/${arch}
make install
make distclean
unset CC
unset CXX
unset LD
unset RANLIB
done
cd ..
rm -rf $LIBELF_VERSION
# Build LLVM libraries for Android only if necessary, uploading a copy to S3
# to avoid rebuilding it in a future run if the version does not change.
bash .gitlab-ci/container/build-android-x86_64-llvm.sh
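The referenced script is not part of this diff; as a generic, hedged sketch of the try-the-cache-first pattern the comment describes (URL, archive name, and helper functions are placeholders):
# Every name below is a placeholder, not the actual script's contents.
ARCHIVE="llvm-android-x86_64-${LLVM_VERSION}.tar.zst"
if curl -fL --retry 4 -o "/tmp/${ARCHIVE}" "https://s3.example.invalid/cache/${ARCHIVE}"; then
  tar --zstd -xf "/tmp/${ARCHIVE}" -C /          # reuse the cached build
else
  build_llvm_for_android                         # placeholder for the real build
  tar --zstd -cf "/tmp/${ARCHIVE}" -C / usr/local/llvm-android
  upload_to_s3 "/tmp/${ARCHIVE}"                 # placeholder upload helper
fi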
apt-get purge -y "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh

View File

@@ -1,4 +0,0 @@
#!/usr/bin/env bash
DEBIAN_ARCH=armhf \
. .gitlab-ci/container/debian/test-base.sh

View File

@@ -1,4 +0,0 @@
#!/usr/bin/env bash
DEBIAN_ARCH="armhf" \
. .gitlab-ci/container/debian/test-gl.sh

Some files were not shown because too many files have changed in this diff.