Compare commits


185 Commits

Author SHA1 Message Date
Eric Engestrom
f9d8468d29 VERSION: bump for 24.2.0-rc4 2024-08-07 13:42:04 +02:00
Mike Blumenkrantz
06020b2ad8 dril: always take the egl init path
using EGL_DEFAULT_DISPLAY will cover the swrast case, which
fixes generating all the correct configs

Fixes: ec7afd2c24 ("dril: rework config creation")

Reviewed-by: Eric Engestrom <eric@igalia.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30426>
(cherry picked from commit ef88af8467)
2024-08-07 12:04:51 +02:00
Eric Engestrom
6cd9d0a0d6 .pick_status.json: Update to ef88af8467 2024-08-07 12:04:48 +02:00
Georg Lehmann
2a332a10c9 aco/gfx11+: don't use VOP3 v_swap_b16
v_swap_b16 is not officially supported as VOP3, so it can't be used with v128-255.
Tests show that VOP3 appears to work correctly, but according to AMD that should
not be relied on.

https://github.com/llvm/llvm-project/pull/100442#discussion_r1703929676

Foz-DB Navi31:
Totals from 6 (0.01% of 79395) affected shaders:
Instrs: 64799 -> 65932 (+1.75%)
CodeSize: 360180 -> 368440 (+2.29%)
Latency: 1364648 -> 1365922 (+0.09%)
InvThroughput: 635843 -> 636475 (+0.10%)
Copies: 14766 -> 15698 (+6.31%)
VALU: 38743 -> 39675 (+2.41%)

Fixes: 80b8bbf0c5 ("aco/gfx11: use v_swap_b16")

Reviewed-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Timur Kristóf <timur.kristof@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30515>
(cherry picked from commit e0818cb87b)
2024-08-07 10:28:54 +02:00
Lionel Landwerlin
07f1560c07 anv: add missing MEDIA_STATE_FLUSH for internal shaders
Replicating what we do in genX_cmd_compute.c

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 7ca5c84804 ("anv: add support for simple internal compute shaders")
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30539>
(cherry picked from commit 4f093b2e2b)
2024-08-07 10:28:54 +02:00
Rhys Perry
56f49723c6 docs: update ACO_DEBUG documentation for perfwarn
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Georg Lehmann <dadschoorse@gmail.com>
Fixes: cc404d45ff ("aco: remove perfwarn")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30519>
(cherry picked from commit 373851e7ee)
2024-08-07 10:28:54 +02:00
Rhys Perry
54727a8d57 docs: update ACO_DEBUG documentation for scheduler options
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Georg Lehmann <dadschoorse@gmail.com>
Fixes: 48461c0d9e ("aco: enable VOPD scheduler")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30519>
(cherry picked from commit e45035c83a)
2024-08-07 10:28:54 +02:00
David Rosca
bceb2328c4 radeonsi/vcn: Add decode DPB buffers as CS dependency
This is needed to ensure correct synchronization in the kernel, e.g. when it
moves the buffers between VRAM and GTT.

Backport-to: 24.2
Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/3437
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/11624
Reviewed-by: Boyuan Zhang <Boyuan.Zhang@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30510>
(cherry picked from commit 0c024bbe64)
2024-08-07 10:28:54 +02:00
Eric Engestrom
da72fa9262 .pick_status.json: Update to d9849ac466 2024-08-07 10:28:54 +02:00
José Roberto de Souza
d8b5ee8d65 intel/dev: Support new topology type with SIMD16 EUs
Xe KMD will now report different topology mask types based on the
EU type of the running platform.
With this we don't need to divide the EU count by 2 in intel_perf.

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
(cherry picked from commit 132c5cdeb9)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30522>
2024-08-07 10:28:50 +02:00
José Roberto de Souza
ae8dd9d6bc intel: Sync xe_drm.h
Sync xe_drm.h with f2881dfdaaa9 ("drm/xe/oa/uapi: Make bit masks unsigned").

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
(cherry picked from commit 3da911b092)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30522>
2024-08-07 10:28:28 +02:00
Timothy Arceri
c72320ef7d glsl: fix glsl to nir support for lower precision builtins
When we switch to the full nir-based glsl linker in an upcoming merge
request, this is required for existing tests from 8fcf8e7fd4 to
continue to pass: they never exercised glsl to nir, so
the missing support went unnoticed.

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/11456
Fixes: 8fcf8e7fd4 ("glsl: lower builtins to mediump that ignore precision of certain parameters ")
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30395>
(cherry picked from commit 140ca7e5d7)
2024-08-06 15:18:53 +02:00
Iván Briano
4a1b71e4a1 intel/rt: fix terminateOnFirstHit handling
If TraceRay() is called with the TerminateOnFirstHit flag, we need to
terminate the ray on the first confirmed intersection. This is handled
by the lowering of accept_ray_intersection and it's working fine for the
case of multiple instances of the intersection shader being called.

But if the shader calls reportIntersection() more than once, we were
handling them all and accepting the closest one regardless of the flag.

Check for the flag on every confirmed intersection and, if set, accept
it right there. The subsequent lowering will take care of
handling the ray termination if necessary.

Fixes new test dEQP-VK.ray_tracing_pipeline.amber.flags-accept-first

Cc: mesa-stable

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30418>
(cherry picked from commit f8553f56ac)
2024-08-06 15:18:43 +02:00
Lionel Landwerlin
719f3a1393 anv: reuse object string for RMV token
The current code is not handling the potential NULL pointer in
VkDebugUtilsObjectNameInfoEXT::pObjectName

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: e1b9a6e4f3 ("anv: initial RMV support")
Reviewed-by: Faith Ekstrand <faith.ekstrand@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30516>
(cherry picked from commit c6bf1f02c4)
2024-08-06 15:18:39 +02:00
Lionel Landwerlin
f80caae143 vulkan/runtime: allow null/empty debug names
VkDebugUtilsObjectNameInfoEXT::pObjectName can be NULL [1] :

   "Applications may change the name associated with an object simply
    by calling vkSetDebugUtilsObjectNameEXT again with a new string. If
    pObjectName is either NULL or an empty string, then any previously
    set name is removed."

The current code will segfault.

[1] : https://registry.khronos.org/vulkan/specs/1.3-extensions/html/chap50.html#VkDebugUtilsObjectNameInfoEXT
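
As a rough illustration of the fix (a minimal sketch with a made-up struct, not the actual vulkan/runtime code), a NULL or empty name simply clears any previously set name:

```c
/* Hypothetical sketch (not the actual vulkan/runtime code) of handling a
 * NULL or empty pObjectName as the spec quote above requires. */
#include <stdlib.h>
#include <string.h>

struct vk_object {
   char *name; /* previously set debug name, or NULL */
};

static void
set_debug_name(struct vk_object *obj, const char *pObjectName)
{
   /* Any previously set name is removed... */
   free(obj->name);
   obj->name = NULL;

   /* ...and only a non-NULL, non-empty string installs a new one. */
   if (pObjectName != NULL && pObjectName[0] != '\0')
      obj->name = strdup(pObjectName);
}
```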

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 3b361b234a ("vulkan: Implement VK_EXT_debug_utils")
Reviewed-by: Faith Ekstrand <faith.ekstrand@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30516>
(cherry picked from commit ae9a249dfe)
2024-08-06 15:18:16 +02:00
Mike Blumenkrantz
b00b73c046 pipe-loader: fix driconf memory management
this had a number of issues:
* pipe_loader_get_driinfo_xml() frees driver_driconf immediately,
  except the driOptionCache struct string pointers are all just copied
  in merge_driconf instead of having the strings copied, which means any
  subsequent access of driver_driconf strings is invalid access
* pipe_loader_drm_get_driconf_by_name() is a disaster that only happened
  to work because the dlopen here is the same lib that gets opened elsewhere
  by mesa and not closed. if the lib here is actually closed, then all
  the statically allocated strings become invalid, which means they need to
  be manually copied
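
As a rough illustration of the second point (made-up struct and function names, not the actual pipe-loader code), the option strings have to be deep-copied so they remain valid after the providing library is closed:

```c
/* Hypothetical sketch: duplicate the option strings when merging driconf
 * data so they stay valid after the providing library is dlclose()d. */
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

struct driconf_option {
   char *name;
   char *value;
};

static bool
driconf_copy_option(struct driconf_option *dst,
                    const struct driconf_option *src)
{
   /* strdup() instead of aliasing pointers owned by the dlopen()ed lib */
   dst->name = strdup(src->name);
   dst->value = strdup(src->value);
   return dst->name != NULL && dst->value != NULL;
}
```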

cc: mesa-stable

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30494>
(cherry picked from commit 0c220741e6)
2024-08-06 15:17:08 +02:00
Marek Olšák
ec1fda18b2 radeonsi/gfx12: fix VS output corruption with streamout
We increased VS_EXPORT_COUNT to 8 for streamout in gfx10_shader_ngg,
but we forgot to increase the attribute ring stride, causing all waves
except the first one to get corrupted VS outputs.

Fixes: f703dfd1bb - radeonsi: add gfx12

Reviewed-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30503>
(cherry picked from commit 0e27df4521)
2024-08-06 15:17:07 +02:00
Marek Olšák
7d5b5da211 radeonsi/gfx12: fix register programming to fix GPU hangs
Fixes: f703dfd1bb - radeonsi: add gfx12

Reviewed-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30503>
(cherry picked from commit 07a0b5e2f2)
2024-08-06 15:17:06 +02:00
Marek Olšák
7b7c32b4dc radeonsi: fix buffer coherency issues on gfx6-8,12 due to missing PFP->ME sync
This fixes random GPU hangs on gfx12 due to incoherent indirect buffer data,
causing random indirect vertex and instance counts, which causes timeouts if
the random numbers are large.

Fixes: a8abbbb172 - radeonsi: remove r600_pipe_common.h

Reviewed-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30503>
(cherry picked from commit 83b88c54ba)
2024-08-06 15:17:06 +02:00
Marek Olšák
d05e39f304 radeonsi: ensure TC_L2_dirty is set if we don't sync after internal SSBO blits
There was a case where if we don't sync, we wouldn't set TC_L2_dirty either,
which could cause problems later.

Fixes: f703dfd1bb - radeonsi: add gfx12

Reviewed-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30503>
(cherry picked from commit ebc5116e70)
2024-08-06 15:17:05 +02:00
Marek Olšák
4b7fda129e radeonsi/gfx12: fix a GPU hang due to an invalid packet with window rectangles
I guess incorrect packet interrupts have been enabled, so this started hanging.

radeon_set_context_reg_seq shouldn't be used with gfx12_set_context_reg.

Fixes: f703dfd1bb - radeonsi: add gfx12

Reviewed-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30503>
(cherry picked from commit e4b3848fde)
2024-08-06 15:17:04 +02:00
Faith Ekstrand
d8282badb0 nak: Sample locations are byte-aligned
Fixes: cc33cafcac ("nak/nir: Use an indirect load for sample locations")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/29194>
(cherry picked from commit 761874ea85)
2024-08-06 15:17:03 +02:00
Paulo Zanoni
7ef9b0358c anv: disable CCS for Source2 games on Xe2
Dota 2 and Counter-Strike 2 really want to be able to allocate memory
for both VkImages and VkBuffers from the same memory type. Xe2's
special compression-only memory type does not support buffers, which
makes these games crash. Disable CCS on these games as a workaround.

This is a temporary workaround as we're still working towards a
long-term solution (either by fixing the engine or finding a better
way to expose our memory types).

Backport-to: 24.2
Fixes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/11520
Fixes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/11521
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Reviewed-by: Jianxun Zhang <jianxun.zhang@intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30481>
(cherry picked from commit 644dcc0337)
2024-08-06 15:16:54 +02:00
Paulo Zanoni
1b3ab37a93 anv: don't expose the compressed memory types when DEBUG_NO_CCS
These memory types are useless when CCS is disabled, don't leave them
there so they don't confuse applications.

Backport-to: 24.2
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Reviewed-by: Jianxun Zhang <jianxun.zhang@intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30481>
(cherry picked from commit b4f5a04223)
2024-08-06 15:12:56 +02:00
Rhys Perry
f71ba676b7 aco: fix validation of v_s_ opcodes
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Georg Lehmann <dadschoorse@gmail.com>
Fixes: 284b9965e8 ("aco/gfx11.5+: allow sgpr dst for trans ops and use pseudo scalar ops on gfx12")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30477>
(cherry picked from commit 911fdce0b6)
2024-08-06 15:12:55 +02:00
Karol Herbst
5df25b68ea rusticl/kernel: properly respect device thread limits per dimension
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30504>
(cherry picked from commit b3e925a21b)
2024-08-06 15:12:54 +02:00
Karol Herbst
652387c9fb zink: lower 8/16 bit alu ops vk spirv doesn't allow
Cc: mesa-stable
Acked-by: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Reviewed-by: Georg Lehmann <dadschoorse@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30504>
(cherry picked from commit b2225b9437)
2024-08-06 15:12:52 +02:00
Karol Herbst
f94623e230 zink: lower 64 bit find_lsb, ufind_msb and bit_count
Cc: mesa-stable
Acked-by: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Reviewed-by: Georg Lehmann <dadschoorse@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30504>
(cherry picked from commit 39ec184db6)
2024-08-06 15:12:51 +02:00
Rob Clark
660356472a freedreno/a6xx: Initial a7xx support
Passing all of deqp-gles*

LRZ is still causing some artifacts in games so it is disabled for now.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30304>
(cherry picked from commit ad90bf0500)
2024-08-06 15:09:42 +02:00
Rob Clark
11e5adee04 freedreno/a6xx: Rework CCU_CNTL emit for a7xx
Regs are different, and a750+ gets new configuration for VPC cache in
GMEM.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30304>
(cherry picked from commit e6be78c703)
2024-08-06 15:09:31 +02:00
Rob Clark
d815a8be11 freedreno/a6xx: Refactor CP_EVENT_WRITE emit
Consolidate the various uses of CP_EVENT_WRITE into helpers, and use
fd_gpu_event to manage the differences between a6xx and a7xx.  This is a
bit churny as it spreads a fair bit of the CHIP template param around.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30304>
(cherry picked from commit 1f41d59059)
2024-08-06 15:09:27 +02:00
Rob Clark
aa90813ac9 freedreno/a6xx: Allocate lrcfc when needed for direction tracking
On later GPUs this buffer is also used for direction tracking, etc.,
meaning that it is not optional.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30304>
(cherry picked from commit b1937f76ff)
2024-08-06 15:09:24 +02:00
Rob Clark
5c1ae7fee1 freedreno: Extract out shared LRZFC layout helpers
Signed-off-by: Rob Clark <robdclark@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30304>
(cherry picked from commit 679e9093e1)
2024-08-06 15:09:20 +02:00
Rob Clark
89bc16c6ff freedreno: Extract out common UBWC helper
And re-use in gallium driver.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30304>
(cherry picked from commit beb5577e12)
2024-08-06 15:09:16 +02:00
Rob Clark
69e30a778d freedreno: Move GENX/CALLX magic to common
And re-use them in the gallium driver

Signed-off-by: Rob Clark <robdclark@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30304>
(cherry picked from commit 5c34a5e59a)
2024-08-06 15:09:12 +02:00
Rob Clark
1800532d6d freedreno/drm: Handle a7xx case
Signed-off-by: Rob Clark <robdclark@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30304>
(cherry picked from commit a6c9f152cc)
2024-08-06 15:09:08 +02:00
Rob Clark
602757bec9 tu/drm/virtio: Add missing a7xx case
Signed-off-by: Rob Clark <robdclark@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30304>
(cherry picked from commit 31302ca107)
2024-08-06 15:09:04 +02:00
Rob Clark
bf48ae259e freedreno/cffdec: Fix a7xx CP_EVENT_WRITE decoding
Signed-off-by: Rob Clark <robdclark@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30304>
(cherry picked from commit 8ff33a756d)
2024-08-06 15:09:00 +02:00
Rob Clark
45ba7c2b1b freedreno/a7xx: Fix GRAS_UNKNOWN_80F4 writes
If this is a 64b reg, we should write both halves.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30304>
(cherry picked from commit 10eaf06e47)
2024-08-06 15:08:56 +02:00
Rob Clark
f36d6bc97f freedreno/a6xx: Implement reg stomper support
Useful to track down issues related to uninitialized regs.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30304>
(cherry picked from commit 1a3f041cd8)
2024-08-06 15:08:51 +02:00
Eric Engestrom
b5f656370d .pick_status.json: Update to d58f7a24d1 2024-08-06 15:08:45 +02:00
Echo J
1023efa1bf util: Fix the integer addition in os_time_get_absolute_timeout()
This should fix glClientWaitSync() timing out too early with an INT64_MAX
timeout on radeonsi
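
A minimal sketch of the kind of overflow-safe addition involved (assumed shape, not the actual util function): adding a relative timeout to the current time must saturate at INT64_MAX instead of wrapping around.

```c
/* Hypothetical sketch of an overflow-safe absolute-timeout computation. */
#include <stdint.h>

static int64_t
absolute_timeout_ns(int64_t now_ns, uint64_t rel_timeout_ns)
{
   /* Assumes now_ns >= 0 (a monotonic clock value). */
   if (rel_timeout_ns >= (uint64_t)(INT64_MAX - now_ns))
      return INT64_MAX; /* effectively "wait forever" */
   return now_ns + (int64_t)rel_timeout_ns;
}
```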

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/11615
Fixes: 7316cc92f3 ("gallium/os: add conversion and wait functions for absolute timeouts")
Reviewed-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30476>
(cherry picked from commit e14d1f5bc0)
2024-08-04 23:43:07 +02:00
Echo J
bfc23382ed d3d10umd: Use pipe_resource_usage enum in translate_resource_usage()
This should fix a build error with MSVC

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/11461
Fixes: 40785d9a52 ("gallium: properly type pipe_resource.usage with the enum")
Reviewed-by: Jesse Natalie <jenatali@microsoft.com>
Reviewed-by: Max Ramanouski <max8rr8@gmail.com>
Reviewed-by: Jose Fonseca <jose.fonseca@broadcom.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30172>
(cherry picked from commit b2f919eaaf)
2024-08-04 23:42:54 +02:00
Jordan Justen
0a93ba01be intel/brw/validate: Convert access mask to be grf based
Our validation code doesn't need to know which bytes are accessed. It
only needs to know which grfs were accessed by an element. This also
helps to easily handle the Xe2 register size change.

Backport-to: 24.2
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/28479>
(cherry picked from commit 58469620d3)
2024-08-04 23:42:53 +02:00
Jordan Justen
8c2f63a47f intel/brw/validate: Update dst grf crossing check for Xe2
Rework:
 * Update grf_size_shift calculation (s-b Ken)

Backport-to: 24.2
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/28479>
(cherry picked from commit e62606b2ec)
2024-08-04 23:42:52 +02:00
Jordan Justen
a34a3e3cde intel/brw/validate: Simplify grf span validation check by not using a mask
Previously this check would create a mask of the bytes used in the
grf, and then shift the mask. This worked well when there were 32 bytes
in the register because a 64-bit uint64_t could easily detect that
bytes were used in the next register. (The next register was the high
32 bits of the `access_mask` variable.)

With Xe2, the register size becomes 64 bytes, meaning this strategy
doesn't work. Instead of a mask, we can just check whether more than
one grf is used during each loop iteration. (Suggested by Ken.) This
will make it easier to extend for Xe2 in a follow-on commit.

Verified this with
dEQP-VK.subgroups.arithmetic.compute.subgroupexclusivemul_u64vec4_requiredsubgroupsize
on Xe2, which otherwise would cause the program to fail to validate
because it assumed a grf was 32 bytes.
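
Roughly, the new check boils down to comparing the first and last GRF touched by an element instead of maintaining a byte mask (hypothetical sketch, not the actual brw validator code):

```c
/* Hypothetical sketch: detect whether an element spans more than one GRF
 * by comparing the GRF index of its first and last byte, independent of
 * the GRF size (32 bytes pre-Xe2, 64 bytes on Xe2). */
#include <stdbool.h>
#include <stdint.h>

static bool
element_crosses_grf(uint32_t byte_offset, uint32_t access_size,
                    uint32_t grf_size)
{
   uint32_t first_grf = byte_offset / grf_size;
   uint32_t last_grf = (byte_offset + access_size - 1) / grf_size;
   return first_grf != last_grf;
}
```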

Backport-to: 24.2
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/28479>
(cherry picked from commit f2800deacb)
2024-08-04 23:42:52 +02:00
Mike Blumenkrantz
44574f93f6 kopper: check swapchain size after possible loader image resize
previously the size was checked at the top of the function, but this
ignored cases where the loader might end up resizing the drawable,
resulting in an attempted 0x0 swapchain creation based on stale
geometry

cc: mesa-stable

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30487>
(cherry picked from commit a6d97b0afe)
2024-08-04 23:42:51 +02:00
Karmjit Mahil
99af0cf920 tu: Set TU_ACCESS_CCHE_READ for transfer ops with read access
Transfer ops also use CCHE since they use the same path as
texture access.

This addresses the flakiness seen in
KHR-GL46.shader_storage_buffer_object.advanced-usage-sync-cs:
CCHE wasn't being invalidated between the compute op and the transfer
op, which would sometimes lead to old/invalid data being copied
in the transfer op.

Fixes: fb1c3f7f5d ("tu: Implement CCHE invalidation")
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/11458
Signed-off-by: Karmjit Mahil <karmjit.mahil@igalia.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30490>
(cherry picked from commit cf9588bae6)
2024-08-04 23:42:20 +02:00
Karol Herbst
f58730746b nvk: use nv_device_uuid
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30455>
(cherry picked from commit 8340f490bf)
2024-08-04 22:43:21 +02:00
Karol Herbst
3d75fecde7 nouveau: implement driver_uuid and device_uuid
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/11592
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30455>
(cherry picked from commit 43365502c4)
2024-08-04 22:43:20 +02:00
Karol Herbst
c9b31c3971 nouveau: add nv_device_uuid
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30455>
(cherry picked from commit 826d00617c)
2024-08-04 22:43:19 +02:00
Karol Herbst
1aad77d7ea nouveau: use nv_device_info and fill in PCI and type information
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30455>
(cherry picked from commit 9c15875d4d)
2024-08-04 22:43:18 +02:00
Karol Herbst
aba755d758 nouveau/winsys: fix handling of NV_DEVICE_TYPE_IGP
It's a PCI device as well, just no discrete VRAM.

Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30455>
(cherry picked from commit fb1763e93c)
2024-08-04 22:43:17 +02:00
Karol Herbst
dcd0e0db89 mesa: check for enabled extensions for *UID enums
Applications might use them without checking if the extension is supported
and would run into a NULL pointer deref when calling the callbacks.

Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30455>
(cherry picked from commit 740cae64a1)
2024-08-04 22:43:16 +02:00
Matt Turner
41644b0711 docs: Drop references to LIBGL_DRIVERS_PATH
Fixes: 93511c1c5c ("gbm: link directly with libgallium")
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/11556
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30456>
(cherry picked from commit 66243e6999)
2024-08-04 22:43:12 +02:00
Daniel Stone
c61862e7af gbm/dri: Remove erroneous assert
The user is allowed to pass a list of modifiers including
DRM_FORMAT_MOD_INVALID, meaning that the user is OK with implicit
modifiers. Since merging the DRI interfaces, this assert that we are
never returning an implicit modifier is unnecessary and also wrong. It
was originally added to be super-safe, but we now know that our drivers
work very well with modifiers, so don't need it.

Signed-off-by: Daniel Stone <daniels@collabora.com>
Fixes: 0b16d7ebb9 ("dri: Allow INVALID for modifier-less drivers")
Closes: mesa/mesa#11591
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30464>
(cherry picked from commit cd961a7e3f)
2024-08-04 22:43:06 +02:00
Eric Engestrom
fbfcf52c4d .pick_status.json: Mark f427c9fe23 as denominated 2024-08-04 22:43:03 +02:00
Lucas Fryzek
85b66f3e10 lp: only map dt buffer on import from dmabuf
Adjusts `resource_from_handle` to follow original execution path
in cases where we are not importing a dmabuf.

Fixes: db38a4913e

Reviewed-By: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30441>
(cherry picked from commit c44d65a467)
2024-08-04 22:42:56 +02:00
Eric Engestrom
63052daa2c .pick_status.json: Mark 93f9afa1e0 as denominated 2024-08-04 22:42:50 +02:00
Mike Blumenkrantz
49a18092b0 dri: link with libloader
this has always called loader_bind_extensions, so it should have been
linking with the loader

cc: mesa-stable

Reviewed-by: Eric Engestrom <eric@igalia.com>
Reported-by: Yurii Kolesnykov <root@yurikoles.com>
Tested-by: Yurii Kolesnykov <root@yurikoles.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30449>
(cherry picked from commit 827812912d)
2024-08-04 22:42:45 +02:00
Mike Blumenkrantz
5d544f9ba9 glx: include src/gallium for apple
Fixes: 91e1ea52c9 ("mesa_interface: Move out of GL/internal/")

Reviewed-by: Eric Engestrom <eric@igalia.com>
Reported-by: Yurii Kolesnykov <root@yurikoles.com>
Tested-by: Yurii Kolesnykov <root@yurikoles.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30449>
(cherry picked from commit c5c0c1215b)
2024-08-04 22:42:44 +02:00
Eric Engestrom
cf19e52723 meson,ci: remove dead kmsro option in gallium-drivers
Fixes: 70813c1c13 ("meson: Remove kmsro from gallium-drivers")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30463>
(cherry picked from commit 89863a050b)
2024-08-04 22:42:40 +02:00
Hans-Kristian Arntzen
52ab6f8d02 wsi/common: Do not update present mode with MESA_VK_WSI_PRESENT_MODE.
Signed-off-by: Hans-Kristian Arntzen <post@arntzen-software.no>
Reviewed-by: Sebastian Wick <sebastian.wick@redhat.com>
Fixes: ad71d584cf ("wsi/common: Add function to modify present mode.")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30434>
(cherry picked from commit 369e3cc20a)
2024-08-04 21:44:05 +02:00
Lionel Landwerlin
21463c6967 anv: fix check on pipeline mode to track buffer writes
We want to check the current mode of the pipeline, not the queue type
(since graphics can toggle between 3D & gpgpu modes).

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 455a13fb7f ("anv: limit ANV_PIPE_RENDER_TARGET_BUFFER_WRITES to blorp operations using 3D")
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/11607
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30469>
(cherry picked from commit fafa0d5abb)
2024-08-04 21:44:03 +02:00
Timothy Arceri
8c87dc44f9 nir: set disallow_undef_to_nan for legacy ARB asm programs
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/11389
Fixes: 861d274453 ("nir: replace undef only used by ALU opcodes with 0 or NaN")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30419>
(cherry picked from commit 298633e365)
2024-08-04 21:44:02 +02:00
Rob Clark
c8c0d44252 freedreno/drm/virtio: Fix issues with 16k (or larger) page sizes
Signed-off-by: Rob Clark <robdclark@chromium.org>
Fixes: e6b2785811 ("freedreno/drm/virtio: Use userspace IOVA allocation")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30431>
(cherry picked from commit 87c889cd8a)
2024-08-04 21:44:01 +02:00
Rob Clark
9e11264df4 tu: Fix issues with 16k (or larger) page sizes
The iova allocations need to be CPU page aligned.  (The GPU itself
always supports 4k mappings regardless of the smallest CPU page size,
but GEM buffer allocations must be an integer number of CPU pages.)
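
A minimal sketch of that alignment requirement (hypothetical helper, not the actual turnip code):

```c
/* Hypothetical sketch: round a BO size up to the CPU page size so GEM
 * allocations stay an integer number of CPU pages (4k, 16k, ...). */
#include <stdint.h>
#include <unistd.h>

static uint64_t
align_to_cpu_page(uint64_t size)
{
   uint64_t page_size = (uint64_t)sysconf(_SC_PAGESIZE);
   /* Page sizes are powers of two, so masking works here. */
   return (size + page_size - 1) & ~(page_size - 1);
}
```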

Signed-off-by: Rob Clark <robdclark@chromium.org>
Fixes: 63904240f2 ("tu: Re-enable bufferDeviceAddressCaptureReplay")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30431>
(cherry picked from commit 7fe3529715)
2024-08-04 21:44:00 +02:00
Karol Herbst
004d53da01 nouveau: handle realloc failure inside cli_kref_set
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/11477
Fixes: 821f4c8d99 ("nouveau: import libdrm_nouveau")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30374>
(cherry picked from commit 801078cbf8)
2024-08-04 21:43:59 +02:00
Konstantin Seurer
1e271414b3 aco: print s_delay_alu INSTSKIP>3 correctly
INSTSKIP has 3 bits.

Fixes: 94958e6 ("aco: improve printing of s_delay_alu")
Reviewed-by: Rhys Perry <pendingchaos02@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30401>
(cherry picked from commit f8bf9f07b6)
2024-08-04 21:43:58 +02:00
Dave Airlie
fce310bef4 nvc0: fix null ptr deref on fermi due to debug changes.
Not everyone has a copy class, so don't dereference it if it's not
set.

Pointed out on irc by Armada

Fixes: 65092ab1a5 ("nouveau/nvc0: add support for using common pushbuf dumper")

Reviewed-by: Karol Herbst <kherbst@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30421>
(cherry picked from commit 56ea4e4fa6)
2024-08-04 21:43:57 +02:00
Eric Engestrom
6b43d2790c .pick_status.json: Update to 366e7e2ddc 2024-08-04 21:43:47 +02:00
Eric Engestrom
49b84034cf VERSION: bump for 24.2.0-rc3 2024-07-31 17:49:34 +02:00
Georg Lehmann
1b913135cd aco/optimizer: update temp_rc when converting to uniform bool alu
Cc: mesa-stable

Reviewed-by: Konstantin Seurer <konstantin.seurer@gmail.com>
Reviewed-by: Rhys Perry <pendingchaos02@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30399>
(cherry picked from commit 6da7bd842c)
2024-07-30 13:59:00 +02:00
Mike Blumenkrantz
05581dd481 Revert "vl/dri3: use loader's dri3 init code and delete everything else"
This reverts commit 586d0c4a9b.

Fixes: 586d0c4a9b ("vl/dri3: use loader's dri3 init code and delete everything else")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30415>
(cherry picked from commit 87ce0ce0b1)
2024-07-30 13:59:00 +02:00
Karol Herbst
804dbcec17 rusticl/spirv: protect against 0 length in slice::from_raw_parts
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/11584
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30410>
(cherry picked from commit dc2755a4f8)
2024-07-30 13:58:57 +02:00
Karol Herbst
a9b46077f5 rusticl/api: protect against 0 length in slice::from_raw_parts
Fixes: 84d16045d0 ("rusticl/api: add param to query which contains application provided values")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30410>
(cherry picked from commit 81f75e2a2d)
2024-07-30 13:58:55 +02:00
Karol Herbst
7b35976bbc rusticl/program: protect against 0 length in slice::from_raw_parts
Fixes: e028baa177 ("rusticl/program: implement clCreateProgramWithBinary")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30410>
(cherry picked from commit ad6fb3406b)
2024-07-30 13:58:54 +02:00
Karol Herbst
81b0c68fb0 rusticl: fix clippy lint having bounds defined in multiple places
Fixes: 734352ddfb ("rusticl/program: some boilerplate code for SPIR-V support")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30410>
(cherry picked from commit 7a8b1dc6e5)
2024-07-30 13:58:54 +02:00
Jianxun Zhang
5d0a3cf84f anv: Disable legacy CCS setup in binding (xe2)
The combination of the flat-ccs condition and the vram_only checker
causes different aux usage at the binding stage. The current design
reuses CCS_E on Xe2, so we want both Xe2 integrated and discrete GPUs
to behave the same way.

Xe2 shouldn't need any special setup of CCS in the loop.

Backport-to: 24.2
Signed-off-by: Jianxun Zhang <jianxun.zhang@intel.com>
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30111>
(cherry picked from commit c5ee7e9bdc)
2024-07-30 13:58:53 +02:00
Jianxun Zhang
28e2b5423e anv: Disable compression on legacy modifiers (xe2)
On pre-Xe2 platforms, compression is enabled even on these modifiers
that don't support compression; the compressed data is resolved when
needed. On Xe2+ we don't support explicit resolve yet, so all the
paths to resolves are prohibited now. But the code is still doing it,
causing an assertion failure:

Fixes: vkcube
src/intel/vulkan/anv_private.h:5467:
anv_image_get_fast_clear_type_addr: Assertion
`device->info->ver < 20' failed.

Backport-to: 24.2
Signed-off-by: Jianxun Zhang <jianxun.zhang@intel.com>
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30111>
(cherry picked from commit e054068787)
2024-07-30 13:58:52 +02:00
Jianxun Zhang
da6e9fcdfd iris: Fix an assertion failure with compressed format
Fixes: ext_texture_array-compressed teximage pbo -fbo -auto

src/gallium/drivers/iris/iris_state.c:3142: iris_create_surface:
Assertion `res->aux.usage == ISL_AUX_USAGE_NONE' failed

Suggested by Nanley Chery <nanley.g.chery@intel.com>

Backport-to: 24.2
Signed-off-by: Jianxun Zhang <jianxun.zhang@intel.com>
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30111>
(cherry picked from commit 6b4def143c)
2024-07-30 13:58:52 +02:00
Jianxun Zhang
3c586ea1b8 anv: Fix assertion failures on BMG (xe2)
Fixes: beb0ea2469 ("anv: Disable tracking fast clear and aux state (xe2)")

crucible run func.first

dEQP-VK.api.copy_and_blit.core.image_to_image.
all_formats.color.2d_to_2d.a1r5g5b5_unorm_pack16.
r16_uint.optimal_optimal

dEQP-VK.pipeline.monolithic.multisample.misc.clear_attachments.
r8g8b8a8_unorm_r16g16b16a16_sfloat_r16g16b16a16_sint_d32_sfloat_
s8_uint.16x.ds_resolve_sample_zero.whole_framebuffer

src/intel/vulkan/anv_private.h:5491:
anv_image_get_compression_state_addr: Assertion
`device->info->ver < 20' failed.

Backport-to: 24.2
Signed-off-by: Jianxun Zhang <jianxun.zhang@intel.com>
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30111>
(cherry picked from commit 49c91a4ea0)
2024-07-30 13:58:51 +02:00
Eric Engestrom
5aabc6012f .pick_status.json: Update to aa9745427b 2024-07-30 13:58:44 +02:00
Jordan Justen
2fc396ae75 intel/dev: Disable LNL PCI IDs on Mesa 24.2 (require INTEL_FORCE_PROBE)
This reverts commit e9f63df2f2 for Mesa
24.2.

According to Lucas, the kernel will be knowingly breaking Mesa's LNL
support in Linux 6.11. The kernel will not commit to not break LNL for
user-mode drivers until force_probe is removed, which might mean
waiting until Linux 6.12.

"There's no support really in kernel 6.10, 6.11 etc to LNL."

 * https://lists.freedesktop.org/archives/intel-xe/2024-July/043706.html

Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30398>
2024-07-29 00:39:43 -07:00
Georg Lehmann
d7c994372e spirv: ignore more function param decorations
These caused log spam during vk-cts.

Fixes: 9b55dcca54 ("spirv: initial parsing of function parameter decorations")

Reviewed-by: Karol Herbst <kherbst@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30391>
(cherry picked from commit a7c8eab63d)
2024-07-28 22:01:45 +02:00
Eric Engestrom
989328728e ci: remove llvmpipe in the job that disables llvm
Instead of removing it from all the arm build jobs and only adding it
back on arm64.

Fixes: 35cb0c350e ("ci: replace gallium-drivers=swrast with gallium-drivers=llvmpipe,softpipe")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30366>
(cherry picked from commit c3b25dd357)
2024-07-28 22:01:45 +02:00
Eric Engestrom
5eb6f6cf92 meson: improve wording of "incompatible llvm options" error
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30366>
(cherry picked from commit 5d84e6cf26)
2024-07-28 22:01:45 +02:00
Eric Engestrom
52709709eb meson: don't select the deprecated swrast option ourselves
Users get the deprecation warning but didn't do anything, they left
things to `auto` and we pick the deprecated `swrast`? Hardly seems fair!

(I forgot to do this when I added the deprecation warning to ajax's commit)

Fixes: 010b2f9497 ("gallium/meson: Deconflate swrast/softpipe/llvmpipe")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30366>
(cherry picked from commit 77b69cdbc3)
2024-07-28 22:01:12 +02:00
X512
3cdd2eb92a egl/haiku: fix synchronization problems, add missing header
A `st_context_invalidate_state` call is required when changing buffer attachments.

Including the header with the BBitmap class definition is required to properly
call the C++ destructor.

Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30372>
(cherry picked from commit 828c3cf002)
2024-07-28 22:00:50 +02:00
Daniel Stone
bf713bb3d0 dri: Allow INVALID for modifier-less drivers
If the user passes in DRM_FORMAT_MOD_INVALID as an acceptable modifier,
we can progress with implicit modifiers. Add this to a more
comprehensive special case along with linear to make sure that we can
still allocate when users pass in a modifier list to a driver which
doesn't support modifiers.
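
A hedged sketch of the special case described above (hypothetical helper, not the actual dri code):

```c
/* Hypothetical sketch: decide whether an allocation may fall back to
 * implicit modifiers given the caller's modifier list. */
#include <stdbool.h>
#include <stdint.h>
#include <drm_fourcc.h>

static bool
can_use_implicit_modifier(const uint64_t *modifiers, unsigned count)
{
   if (count == 0)
      return true; /* no list supplied: implicit is the only option */

   for (unsigned i = 0; i < count; i++) {
      /* INVALID means "implicit is acceptable"; LINEAR keeps allocation
       * working on drivers without modifier support. */
      if (modifiers[i] == DRM_FORMAT_MOD_INVALID ||
          modifiers[i] == DRM_FORMAT_MOD_LINEAR)
         return true;
   }
   return false;
}
```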

Signed-off-by: Daniel Stone <daniels@collabora.com>

Fixes: 361f362258 ("dri: Unify createImage and createImageWithModifiers")

Reviewed-By: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30383>
(cherry picked from commit 0b16d7ebb9)
2024-07-28 22:00:47 +02:00
Jianxun Zhang
3a2dac7c2d intel/common: Remove blank lines in intel_set_ps_dispatch_state() (xe2)
Backport-to: 24.2
Signed-off-by: Jianxun Zhang <jianxun.zhang@intel.com>
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/29907>
(cherry picked from commit 349e7a2919)
2024-07-28 22:00:46 +02:00
Jianxun Zhang
33700e5b2b intel/common: Ensure SIMD16 for fast-clear kernel (xe2)
Add a restriction on SIMD mode for fast-clear pixel
shader according to the Bspec.

Backport-to: 24.2
Signed-off-by: Jianxun Zhang <jianxun.zhang@intel.com>
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/29907>
(cherry picked from commit cb7f816fc4)
2024-07-28 22:00:45 +02:00
José Roberto de Souza
1112f171d7 anv: Propagate protected information to blorp_batch_isl_copy_usage()
This fixes protected tests that use vkCmdCopyBuffer().

Cc: mesa-stable
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30369>
(cherry picked from commit 5fdacb56ed)
2024-07-28 22:00:43 +02:00
José Roberto de Souza
21ce5e817c isl: Fix Xe2 protected mask
BSpec 71045 and 57023 still point out that the protected/encrypted bit is
bit 0; bit 1 should not be set or an undesired MOCS index could be selected.

Fixes: 7be8bc2c97 ("isl: Add mocs for xe2")
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30369>
(cherry picked from commit 79f95a3711)
2024-07-28 22:00:41 +02:00
Mike Blumenkrantz
8f78762c98 dri: fix kmsro define
Fixes: 50fc7cc290 ("glx: directly link to gallium")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30376>
(cherry picked from commit 40004219b1)
2024-07-28 22:00:32 +02:00
Lionel Landwerlin
27fd222083 anv: propagate protected information for blorp operations
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: mesa-stable
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/29982>
(cherry picked from commit d5b0526507)
2024-07-28 22:00:31 +02:00
Lionel Landwerlin
927b900f44 anv: properly flag image/imageviews for ISL protection
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: mesa-stable
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/29982>
(cherry picked from commit 8d9cc6aa23)
2024-07-28 21:56:37 +02:00
Lionel Landwerlin
6bbeac5b90 isl: account for protection in base usage checks
Only Cc stable because it's needed for the next patches.

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: mesa-stable
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/29982>
(cherry picked from commit 4eab285d4a)
2024-07-28 21:53:18 +02:00
Eric Engestrom
d6f9819095 ci/baremetal: fix logic for retrying boot when it failed
Contrary to what the original commit said, this is actually still used
(see .gitlab-ci/bare-metal/poe-powered.sh:205), and the boot retry logic
has been broken ever since, exacerbating the rpi farm boot problems.

Fixes: 97b2afa16a ("ci/bare-metal: Drop the 2 vs 1 exit code from poe_run.")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30340>
(cherry picked from commit 2bc82b7147)
2024-07-28 21:53:17 +02:00
Mary Guillemard
c31af2145c panvk: Pass attrib_buf_idx_offset to desc_copy_info
This was missing from the original fix and was causing MMU faults on
"dEQP-VK.memory.pipeline_barrier.host_write_uniform_texel_buffer.*".

Fixes: cec45cac84 ("panvk: Fix image support in vertex jobs")
Signed-off-by: Mary Guillemard <mary.guillemard@collabora.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30378>
(cherry picked from commit e863acb318)
2024-07-28 21:53:05 +02:00
GKraats
e1c783720f i915g: fix max_lod at mipmap-sampling
In update_map() in i915_state_sampler.c, max_lod is no longer set to 1
for npots. This almost totally disabled mipmapping.
max_lod should still be set to 1, but only if it is still 0,
because no mipmap levels are present.
According to the existing comment in update_map(), this is needed to
avoid problems at sampling if MIN_FILTER and MAX_FILTER differ.

Cc: mesa-stable

Signed-off-by: GKraats <vd.kraats@hccnet.nl>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/28638>
(cherry picked from commit ad02bfe41d)
2024-07-28 21:53:01 +02:00
GKraats
f1268a6c1e i915g: fix mipmap-layout for npots
Remove the call of util_next_power_of_two() in i945_texture_layout_2d(),
which oversized the npot blocks for every level to get a power of 2
for width and height. Hardware does not expect these oversized
npot blocks, causing mangled mipmapping.
This is also done for i915_texture_layout_2d(), which is
used by older gen3 GPUs.

Cc: mesa-stable

Signed-off-by: GKraats <vd.kraats@hccnet.nl>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/28638>
(cherry picked from commit bb95d744ca)
2024-07-28 21:53:00 +02:00
GKraats
de9242569a i915g: fix generation of large mipmaps
Generation of mipmaps was failing for large heights.
If height > 1365, LEVEL 1 could not be generated because of
the max texture size limit (2048). This is solved by using an
offset into the texture buffer in overflow situations.
The height of the offset must be a multiple of 8.
This solves the problem mentioned at MR !27561 (closed).

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10410

Cc: mesa-stable

Signed-off-by: GKraats <vd.kraats@hccnet.nl>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/28638>
(cherry picked from commit a1a301488b)
2024-07-28 21:52:59 +02:00
Mike Blumenkrantz
a1a47b8d07 llvmpipe: only use vma allocations on linux
this was broken on other platforms

Fixes: a062544d3d ("llvmpipe: Use an anonymous file for memory allocations")

Reviewed-by: Konstantin Seurer <konstantin.seurer@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30229>
(cherry picked from commit bb5145bcb8)
2024-07-28 21:52:58 +02:00
Mike Blumenkrantz
d10fa7e4d3 llvmpipe: handle vma allocation failure
Fixes: a062544d3d ("llvmpipe: Use an anonymous file for memory allocations")

Reviewed-by: Konstantin Seurer <konstantin.seurer@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30229>
(cherry picked from commit a8ff1bdc83)
2024-07-28 21:52:56 +02:00
Dave Airlie
dd3f21e8a2 gallivm/sample: fix sampling indirect from vertex shaders
When doing indirect sampling, we just fetch one value per lane,
but type.length == 1 caused num_quads to be 0 which caused things
to crash.

Fixes dEQP-GLES31.functional.shaders.opaque_type_indexing.sampler.uniform.vertex.sampler2d

Cc: mesa-stable
Reviewed-by: Konstantin Seurer <konstantin.seurer@gmail.com>
Reviewed-by: Roland Scheidegger <roland.scheidegger@broadcom.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30358>
(cherry picked from commit 3e01422a16)
2024-07-28 21:52:55 +02:00
Yiwei Zhang
522c21becc Revert "meson: disallow Venus debug + LTO build via GCC"
This reverts commit 423ba5d1c7.

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30355>
(cherry picked from commit 3e6b73a75a)
2024-07-28 21:52:53 +02:00
Yiwei Zhang
5ab1aebd51 venus: fix a race condition between gem close and gem handle tracking
After switching to a sparse array to manage virtgpu bos, we set gem_handle to 0
to indicate that the bo is invalid. However, the gem handle gets closed
before that and can be reused by another newly created bo, leading to
the tracked gem handle being unexpectedly zero'ed out.
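
A hedged sketch of the corrected ordering (hypothetical names, not the actual venus code): the tracked handle must be cleared before the GEM handle is closed, otherwise a recycled handle value could be zeroed out later.

```c
/* Hypothetical sketch: clear the tracked handle before closing the GEM
 * handle, so a handle value recycled by another thread can't be zeroed
 * out afterwards. */
#include <stdint.h>

struct virtgpu_bo {
   uint32_t gem_handle; /* 0 means "invalid" */
};

static void
virtgpu_bo_destroy(struct virtgpu_bo *bo, void (*gem_close)(uint32_t))
{
   uint32_t handle = bo->gem_handle;

   /* Invalidate the tracking entry first... */
   bo->gem_handle = 0;
   /* ...then return the handle to the kernel, where it may immediately
    * be reused by a newly created bo. */
   gem_close(handle);
}
```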

Fixes: 88f481dd74 ("venus: make sure gem_handle and vn_renderer_bo are 1:1")
Signed-off-by: Yiwei Zhang <zzyiwei@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30362>
(cherry picked from commit f788c87d02)
2024-07-28 21:52:50 +02:00
Matt Turner
5985125453 intel/elk: Use REG_CLASS_COUNT
Fixes: d44462c08d ("intel/elk: Fork Gfx8- compiler by copying existing code")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30314>
(cherry picked from commit a3714b55f4)
2024-07-28 21:52:49 +02:00
Matt Turner
40f063e29d intel/brw: Use REG_CLASS_COUNT
Fixes: 5d87f41a54 ("intel/fs/ra: Define REG_CLASS_COUNT constant specifying the number of register classes.")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30314>
(cherry picked from commit 5e24c21625)
2024-07-28 21:52:48 +02:00
X512
b163c2bbbd egl/haiku: fix double free of BBitmap
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30364>
(cherry picked from commit 2e70757dc0)
2024-07-28 21:52:47 +02:00
Karol Herbst
c2a474e7c3 clc: force linking of spirvs with mismatching pointer types in signatures
With LLVM-17 and opaque pointers, the compiled spirvs sometimes lose all
information regarding what specific pointer type a function
parameter has.

To work around this, we can tell the spirv linker to insert casts to handle
those cases.

See https://github.com/KhronosGroup/SPIRV-Tools/pull/5534

Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30029>
(cherry picked from commit f283c38f9c)
2024-07-28 21:52:45 +02:00
Eric Engestrom
a52efd07a3 .pick_status.json: Update to ad90bf0500 2024-07-28 21:52:39 +02:00
Eric Engestrom
71bd9e3c19 VERSION: bump for 24.2.0-rc2 2024-07-25 15:11:02 +02:00
Dave Airlie
5b63f5b88f llvmpipe/cs/orcjit: add stub function name for coro
This fixes some debug output:
JIT session error: Unexpected definitions in module : [ cs_co_variant ]
Failed to materialize symbols: { (cs0_variant0_3, { cs_variant }) }

Fixes: bb0efdd4d8 ("llvmpipe: add shader cache support for ORCJIT implementation")
Reviewed-by: Icenowy Zheng <uwu@icenowy.me>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30335>
(cherry picked from commit 76ae27efb3)
2024-07-25 11:51:02 +02:00
Dave Airlie
e119dce9fd draw/orcjit: supply stub function for tcs coro
This fixes a crash with shader cache enabled:
JIT session error: Unexpected definitions in module : [ draw_llvm_tcs_coro_variant ]
Failed to materialize symbols: { (draw_llvm_tcs_variant0_7, { draw_llvm_tcs_variant }) }

Fixes: bb0efdd4d8 ("llvmpipe: add shader cache support for ORCJIT implementation")
Reviewed-by: Icenowy Zheng <uwu@icenowy.me>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30335>
(cherry picked from commit fcf9e33ec0)
2024-07-25 11:51:00 +02:00
Mike Blumenkrantz
20b3400701 dril: rework config creation
the original implementation of config selection had a number of flaws:
* using eglChooseConfigs with lots of loops, which was okay for filtering but
  also added considerable complexity and made it difficult to correctly
  get all the configs
* not adding enough configs; there were a lot more color and zs formats
  which weren't in the base config list
* double buffer configs were never created
* srgb configs were also never created

there will now be fewer configs than there were pre-DRIL, but this is only
because accum buffers are now gone and not because anything of value is
missing

Fixes: 3de62b2f9a ("gallium/dril: Compatibility stub for the legacy DRI loader interface")

Acked-by: Daniel Stone <daniels@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30311>
(cherry picked from commit ec7afd2c24)
2024-07-25 11:50:58 +02:00
Paulo Zanoni
da3c916839 anv/xe: try harder when the vm_bind ioctl fails
From all the many possible errors returned by the vm_bind ioctl, some
can actually happen in the wild when the system is under memory
pressure. Thomas Hellström pointed to us that, due to its asynchronous
nature, the vm_bind ioctl itself has to pin some memory, so if the
number of bind operations passed is too big, there is a probability
that it may run out of memory.

Previously the Kernel would return ENOMEM when this condition
happened.  Since commit e8babb280b5e ("drm/xe: Convert multiple bind
ops into single job") the Kernel has started returning ENOBUFS when it
doesn't have enough memory to do what it wants but thinks we'd succeed
if we tried to do one bind operation at a time (instead of doing
multiple operations in the same ioctl), and ENOMEM in some other
situations. Still-uncommitted commit "drm/xe: Return -ENOBUFS if a
kmalloc fails which is tied to an array of binds" proposes converting
a few more ENOMEM cases to ENOBUFS.

Still, even ENOMEM situations could in theory be possible to recover
from, because if we wait some amount of time, resources that may have
been consuming memory could end up being freed by other threads or
processes, allowing the operations to succeed. So our main idea in
this patch is that we treat both ENOMEM and ENOBUFS in the same way,
so our implementation can work with any xe.ko driver regardless of
having or not having the commits mentioned above.

So in this patch, when we detect the system is under memory pressure
(i.e., the vm_bind() function returns VK_ERROR_OUT_OF_HOST_MEMORY), we
throw away our performance expectations and try to go slowly and
steady. First we wait for everything we're supposed to wait for (hoping that
this alone could also help to alleviate the memory pressure), and then
we synchronously bind one piece at a time (as this will ensure ENOBUFS
can't be returned), hoping that this won't cause the Kernel to try to
reserve too much memory. All this while also hoping that whatever
thing that may be eating all the memory goes away in the meantime. If
even this fails, we give up and hope the upper layer will be able to
figure out what to do.
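
A hedged sketch of that fallback strategy (hypothetical helper names, not the real anv code):

```c
/* Hypothetical sketch of the fallback described above; the helper names
 * are made up for illustration and are not real anv functions. */
#include <vulkan/vulkan_core.h>

struct device;
struct bind_op;

VkResult vm_bind_all(struct device *dev, struct bind_op *ops, unsigned count);
VkResult vm_bind_one_sync(struct device *dev, struct bind_op *op);
void wait_for_all_pending_work(struct device *dev);

static VkResult
bind_with_fallback(struct device *dev, struct bind_op *ops, unsigned count)
{
   /* Fast path: submit every bind in one asynchronous ioctl. */
   VkResult result = vm_bind_all(dev, ops, count);
   if (result != VK_ERROR_OUT_OF_HOST_MEMORY)
      return result;

   /* Memory pressure (ENOMEM/ENOBUFS): go slowly and steadily. */
   wait_for_all_pending_work(dev);

   for (unsigned i = 0; i < count; i++) {
      result = vm_bind_one_sync(dev, &ops[i]);
      if (result != VK_SUCCESS)
         return result; /* give up; let the upper layer decide */
   }
   return VK_SUCCESS;
}
```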

This fixes a bunch of LNL failures and flaky tests (as LNL is our
first officially supported xe.ko platform). This can be seen in dEQP
but only if multiple tests are being run parallel. Happens in multiple
tests, some of which may include:

  - dEQP-VK.sparse_resources.image_sparse_binding.2d_array.rgba8_snorm.1024_128_8
  - dEQP-VK.sparse_resources.image_sparse_binding.3d.rgba16_snorm.1024_128_8
  - dEQP-VK.sparse_resources.image_sparse_binding.3d.rgba16ui.512_256_6

I don't ever see these errors when running Alchemist/DG2 with xe.ko.

Fixes: e9f63df2f2 ("intel/dev: Enable LNL PCI IDs without INTEL_FORCE_PROBE")
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30276>
(cherry picked from commit dd5362c78a)
2024-07-25 11:50:57 +02:00
Matt Turner
1ff8e0e7f8 intel/clc: Free disk_cache
Fixes: c15bf88f01 ("intel: Add a little OpenCL C compiler binary")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30313>
(cherry picked from commit aae82061af)
2024-07-25 11:50:56 +02:00
Matt Turner
9e6ebed213 intel/clc: Free parsed_spirv_data
This declaration shadowed a variable of the same type and name in an
outer scope. That variable is passed to clc_free_parsed_spirv().

Fixes: 4fd7495c69 ("intel/clc: add ability to output NIR")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30313>
(cherry picked from commit 1574372de4)
2024-07-25 11:50:55 +02:00
Alessandro Astone
e86e472a6a egl/gbm: Walk device list to initialize DRM platform
We cannot always use /dev/dri/card0.
As a matter of fact, on systems with SimpleDRM enabled /dev/dri/card0
will be created by it and removed once a GPU driver has loaded.

In any case we shouldn't hard-code the device number and instead walk
the device list to find the first suitable device.

This issue is trivially reproducible with `eglinfo -B -p gbm` on
Ubuntu 24.04 or Fedora 40
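
A hedged sketch of such device-list enumeration with libdrm (not the actual egl/gbm code):

```c
/* Hypothetical sketch: walk the libdrm device list and open the first
 * usable primary node instead of hard-coding /dev/dri/card0. */
#include <fcntl.h>
#include <xf86drm.h>

static int
open_first_primary_node(void)
{
   drmDevicePtr devices[64];
   int count = drmGetDevices2(0, devices, 64);
   int fd = -1;

   for (int i = 0; i < count; i++) {
      if (!(devices[i]->available_nodes & (1 << DRM_NODE_PRIMARY)))
         continue;
      fd = open(devices[i]->nodes[DRM_NODE_PRIMARY], O_RDWR | O_CLOEXEC);
      if (fd >= 0)
         break; /* first device with a usable primary node wins */
   }

   if (count > 0)
      drmFreeDevices(devices, count);
   return fd;
}
```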

Fixes: 32f4cf3808 ("egl/gbm: Fix EGL_DEFAULT_DISPLAY")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30325>
(cherry picked from commit 7949471716)
2024-07-25 11:50:53 +02:00
Dylan Baker
40a54c84d1 crocus: check for depth+stencil before creating resource
This avoids leaking memory if we return early.

Fixes: 5f7df5df0d ("crocus: disable depth and d+s formats with memory objects")
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30305>
(cherry picked from commit 4ef0cbaf05)
2024-07-25 11:50:52 +02:00
Dylan Baker
d2967559ee crocus: properly free resources on BO allocation failure
Iris already has the same fix applied.

Fixes: f3630548f1 ("crocus: initial gallium driver for Intel gfx 4-7")
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30305>
(cherry picked from commit 34145725ce)
2024-07-25 11:50:51 +02:00
Dylan Baker
271b6d1cf6 tgsi_to_nir: free disk cache value if the size is wrong
Fixes: 4db880d805 ("ttn: Implement disk cache")
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Rob Clark <robclark@freedesktop.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30308>
(cherry picked from commit 11bc95934f)
2024-07-25 11:50:50 +02:00
Eric Engestrom
9c3d3f0e55 .pick_status.json: Update to c33d2db06a 2024-07-25 11:49:54 +02:00
Mike Blumenkrantz
48e35be44d ci: prune dri from LD_LIBRARY_PATH
partial revert of 50fc7cc290

Fixes: 50fc7cc290 ("glx: directly link to gallium")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30346>
(cherry picked from commit 6cd4372460)
2024-07-24 18:38:11 +02:00
Mike Blumenkrantz
b09f24b7af gallium: install gallium-$version.so to libdir
Installing this private library into the default library
search path avoids needing to rely on -Wl,-rpath,
which is inconsistently implemented as either DT_RUNPATH
or DT_RPATH on different distributions; in particular,
on distributions that implement it as DT_RPATH,
it interferes with use of LD_LIBRARY_PATH and has semantics
that are difficult to reason about, and is incompatible with
Steam's container runtime (which has the known limitation that
it only implements DT_RUNPATH and not DT_RPATH).

To avoid third-party developers being tempted to link to the
unstable libgallium, give it a name that varies with each Mesa release,
so that there is no obvious way for third-party software to link to it.
This is similar to the way the proprietary Nvidia driver sets up its similar
implementation-detail libraries such as libnvidia-glcore.so.535.183.01.

Fixes: 50fc7cc2 ("glx: directly link to gallium")

Acked-by: Daniel Stone <daniels@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30328>
(cherry picked from commit 9b7bb6cc9f)
2024-07-24 18:38:10 +02:00
Mary Guillemard
7844f879ab panvk: Fix image support in vertex jobs
There were various bugs causing image accesses to fault.

This fixes
"dEQP-VK.memory.pipeline_barrier.host_write_storage_buffer.*" and
possibly other tests.

Fixes: 7bea6f8612 ("panvk: Overhaul the Bifrost descriptor set implementation")
Signed-off-by: Mary Guillemard <mary.guillemard@collabora.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30293>
(cherry picked from commit cec45cac84)
2024-07-24 18:38:09 +02:00
Eric Engestrom
bbdb0f5b80 docs: add stub header for u_format_gen.h
Warning, treated as error:
docs/isl/aux-surf-comp.rst:51:docs/../src/util/format/u_formats.h:33: 'util/format/u_format_gen.h' file not found

Fixes: e05415a82e ("format: Generate endian-independent format aliases")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30339>
(cherry picked from commit e634acaf88)
2024-07-24 18:38:07 +02:00
Eric Engestrom
c3034b82e6 .pick_status.json: Update to 6cd4372460 2024-07-24 18:38:05 +02:00
Faith Ekstrand
d256a04d5b meson/megadriver: Don't invoke the megadriver script with no drivers
Otherwise, the install will fail due to missing arguments to
install_megadrivers.py.

Reviewed-by: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Reviewed-by: Dylan Baker <dylan.c.baker@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30277>
(cherry picked from commit 74b4c91e7b)
2024-07-24 10:45:13 +02:00
Faith Ekstrand
6fef8f1800 nak/spill_values: Don't assume no trivial phis
Thanks to LCSSA, we can absolutely have phis with only one source and we
need to handle those in spilling.  Fortunately, there's nothing really
special about that case.  I was just prematurely optimizing.

Fixes: bcad2add47 ("nak: Add a spilling pass")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/28084>
(cherry picked from commit 8bf3213a54)
2024-07-24 10:19:29 +02:00
Christian Gmeiner
d1ed0c04f1 dri: fix driver names
All of these drivers report themselves with a hyphen here, so they all
stopped loading completely after 50fc7cc290. Each name was checked
against the latest kernel sources.

Fixes: 50fc7cc290 ("glx: directly link to gallium")

Signed-off-by: Christian Gmeiner <cgmeiner@igalia.com>
Acked-By: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30330>
(cherry picked from commit 305bf503e7)
2024-07-24 10:19:27 +02:00
Erico Nunes
db7d05a398 dri: fix sun4i-drm driver name
The driver reports itself as sun4i-drm with a hyphen here, so
it stopped loading completely after 50fc7cc290.

Fixes: 50fc7cc290 ("glx: directly link to gallium")
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Acked-By: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30326>
(cherry picked from commit 0bdc2f180f)
2024-07-24 10:19:24 +02:00
Eric Engestrom
39ea288456 .pick_status.json: Update to c30e5d44b1 2024-07-24 10:19:22 +02:00
Karol Herbst
45d530fa4a nak: allow clippy::not_unsafe_ptr_arg_deref lints
Clippy errors on this, so just allow it here.

Fixes: b9c0e3c1ab ("nak: Add helpers for filling QMDs")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30323>
(cherry picked from commit 526a572233)
2024-07-23 22:02:55 +02:00
Marek Olšák
3437afadca nir/opt_algebraic: use fmulz for fpow lowering to fix incorrect rendering
The original implementation in all radeon drivers had this behavior.

Fixes: 9bc1fb4c07 ("ac/llvm,radeonsi: lower nir_fpow for aco and llvm")
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/11464

Acked-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Reviewed-by: Faith Ekstrand <faith.ekstrand@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30069>
(cherry picked from commit ecfefe823e)
2024-07-23 22:02:54 +02:00
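A minimal nir_builder sketch of the lowering described above, assuming the autogenerated nir_fmulz() helper for the fmulz opcode: fpow is expanded to exp2(y * log2(x)), and using fmulz means a zero exponent wins over the infinity coming out of log2(0), matching the legacy radeon pow() behaviour. Illustrative only, not the actual opt_algebraic rule.

#include "nir_builder.h"

static nir_def *
lower_fpow_with_fmulz(nir_builder *b, nir_def *x, nir_def *y)
{
   /* pow(x, y) = exp2(y * log2(x)); fmulz treats 0 * inf as 0. */
   return nir_fexp2(b, nir_fmulz(b, y, nir_flog2(b, x)));
}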
Ganesh Belgur Ramachandra
a160ffc8d6 amd/common: skip lane size determination for chips without image opcodes (e.g. gfx940)
This fixes VAAPI decode performance issues.

Fixes: 5b3e1a0532 ("radeonsi: change the compute blit to clear/blit multiple pixels per lane")

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30273>
(cherry picked from commit ec4e5ef0f7)
2024-07-23 22:02:53 +02:00
Ganesh Belgur Ramachandra
52fa10f453 radeonsi: fix epitch on chips without image opcodes (e.g. gfx940)
This fixes VAAPI decode corruption issues.

Fixes: 26cd3a1718 ("ac,radv,radeonsi: add a helper to set mutable tex desc fields")

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30273>
(cherry picked from commit 0cb3ace969)
2024-07-23 22:02:53 +02:00
Rhys Perry
dbb7731a90 aco/gfx11.5: workaround export priority issue
https://github.com/llvm/llvm-project/pull/99273

fossil-db (gfx1150):
Totals from 73996 (93.20% of 79395) affected shaders:
Instrs: 36015357 -> 36807177 (+2.20%)
CodeSize: 189072544 -> 192238748 (+1.67%)
Latency: 245845181 -> 246790550 (+0.38%); split: -0.00%, +0.38%
InvThroughput: 45068018 -> 45116177 (+0.11%); split: -0.00%, +0.11%

Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
Backport-to: 24.2
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30241>
(cherry picked from commit 0919ce1ac4)
2024-07-23 22:02:52 +02:00
Dylan Baker
f6ba6a5205 util/glsl2spirv: fixup the generated depfile when copying sources
This makes the depfile reference the original source rather than the
copied one. It is necessary so that ninja does not trigger spurious
rebuilds when the copy has been removed, and so that changes to the
input files are tracked correctly.

Fixes: 46644ba371

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30132>
(cherry picked from commit 36160c967c)
2024-07-23 22:02:51 +02:00
Vlad Schiller
3b61e8f004 pvr: Handle VK_STRUCTURE_TYPE_IMAGE_FORMAT_LIST_CREATE_INFO
This commit silences a debug message, which can get quite spammy.

Fixes: a2e0701 ("pvr: Enable KHR_image_format_list")
Signed-off-by: Vlad Schiller <vlad-radu.schiller@imgtec.com>
Reviewed-by: Frank Binns <frank.binns@imgtec.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30222>
(cherry picked from commit 848c7c9560)
2024-07-23 22:02:50 +02:00
Vlad Schiller
9973e9ab9c pvr: Handle VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO
This commit silences a debug message, which can get quite spammy.

Fixes: 8991e64 ("pvr: Add a Vulkan driver for Imagination Technologies PowerVR Rogue GPUs")
Signed-off-by: Vlad Schiller <vlad-radu.schiller@imgtec.com>
Reviewed-by: Frank Binns <frank.binns@imgtec.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30222>
(cherry picked from commit eda77bf79d)
2024-07-23 22:02:50 +02:00
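A sketch, in plain Vulkan types rather than pvr's internal helpers, of what "handling" these structs amounts to: walk the pNext chain and explicitly accept sTypes that need no extra work, so they stop hitting the noisy "unhandled struct" debug path. The export-memory case is the same pattern applied to the VkMemoryAllocateInfo chain.

#include <stddef.h>
#include <vulkan/vulkan.h>

static void consume_image_create_pnext(const VkImageCreateInfo *info)
{
   for (const VkBaseInStructure *s = (const VkBaseInStructure *)info->pNext;
        s != NULL; s = s->pNext) {
      switch (s->sType) {
      case VK_STRUCTURE_TYPE_IMAGE_FORMAT_LIST_CREATE_INFO:
         /* Nothing to do: the driver doesn't key any behaviour off the list,
          * but accepting it here keeps it out of the debug spam. */
         break;
      default:
         /* unhandled extension structs would be debug-logged here */
         break;
      }
   }
}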
Eric Engestrom
a3ac00f6f7 meson: xcb & xcb-randr are needed by the loader whenever x11 is built
Specifically, `src/loader/loader_dri_helper.c` needs them.

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/11536
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30292>
(cherry picked from commit aed5a974e9)
2024-07-23 22:02:23 +02:00
Neha Bhende
83b908aad7 dri: fix macro name check to detect svga driver
The svga driver is detected via HAVE_SVGA.

Since commit 50fc7cc290, the svga driver was not loading at all

Fixes: 50fc7cc290 ("glx: directly link to gallium")

Acked-by: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Reviewed-by: Zack Rusin <zack.rusin@broadcom.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30307>
(cherry picked from commit 8f9a157daa)
2024-07-23 22:02:21 +02:00
Yiwei Zhang
e013c79aad venus: clarify wsi image ownership
Fix to call vn_image_bind_wsi_memory as long as the image is a wsi
image. This is needed so that the wsi memory is tracked in the wsi
image, which lets creation from swapchain info work normally on
x11/wayland platforms. This change also makes it clear that the ANB
image owns the wsi memory.

Fixes: c4b30b604f ("venus: support VK_ANDROID_NATIVE_BUFFER_SPEC_VERSION 8")
Signed-off-by: Yiwei Zhang <zzyiwei@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30278>
(cherry picked from commit a27e3c5078)
2024-07-23 22:02:20 +02:00
Faith Ekstrand
5ce44462e2 nvk: Reject sparse images on Maxwell A and earlier
Even though we don't advertise the sparseResidency feature, a bunch of
CTS tests just call GetPhysicalDeviceImageFormatProperties2() with
SPARSE_RESIDENCY_BIT and see if that fails.

Fixes: d2177f4764 ("nvk: Don't advertise sparse residency on Maxwell A")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30303>
(cherry picked from commit 68d6cdfbc5)
2024-07-23 22:02:19 +02:00
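A sketch of the check described above, with a hypothetical helper and a plain boolean for the hardware generation: image-format queries that ask for any sparse create flag are failed with VK_ERROR_FORMAT_NOT_SUPPORTED when the device cannot do sparse, instead of quietly succeeding.

#include <stdbool.h>
#include <vulkan/vulkan.h>

static VkResult check_sparse_image_support(
   const VkPhysicalDeviceImageFormatInfo2 *info, bool hw_supports_sparse)
{
   const VkImageCreateFlags sparse_flags =
      VK_IMAGE_CREATE_SPARSE_BINDING_BIT |
      VK_IMAGE_CREATE_SPARSE_RESIDENCY_BIT |
      VK_IMAGE_CREATE_SPARSE_ALIASED_BIT;

   if ((info->flags & sparse_flags) && !hw_supports_sparse)
      return VK_ERROR_FORMAT_NOT_SUPPORTED;

   return VK_SUCCESS;
}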
Francisco Jerez
24568198cb iris: Pin pixel hashing table BO from iris_batch submission instead of from iris_state.
This fixes sporadic rendering corruption reported on MTL with ChromeOS
in cases where multiple processes including Chrome were utilizing the
GPU concurrently, and one of the processes happened to submit a
BLORP-only batch buffer right after a switch from a different context.

In such a scenario we would fail to add the BO that holds the pixel
hashing tables to the execbuf IOCTL for the BLORP batch, because it
was being pinned from iris_restore_render_saved_bos() which isn't
called for BLORP operations, potentially causing it to use garbage as
pixel pipe hashing tables, which led to corruption of the BLORP
rendering.

Technically this could have affected DG2 as well, but it has only been
reported on MTL so far.

Cc: mesa-stable
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Tested-by: Sushma Venkatesh Reddy <sushma.venkatesh.reddy@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30274>
(cherry picked from commit 49b433d5e7)
2024-07-23 22:02:18 +02:00
Dylan Baker
8eadeb3ce1 mesa: fix memory leak when using shader cache
Fixes: 656ccf4ef8 ("mesa: shader dump/read support for ARB programs")
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30302>
(cherry picked from commit 7513a0bf3a)
2024-07-23 22:02:17 +02:00
Dylan Baker
ccfbb03ccd compilers/clc: Add missing break statements.
Fixes: c0cf7f578a

Reviewed-by: Karol Herbst <kherbst@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30301>
(cherry picked from commit e5b53d9408)
2024-07-23 22:02:16 +02:00
Karol Herbst
a12e4243f6 spirv: handle function parameters passed by value
Cc: mesa-stable
Reviewed-by: Jesse Natalie <jenatali@microsoft.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/29896>
(cherry picked from commit bad67ee77c)
2024-07-23 21:58:58 +02:00
Karol Herbst
a22f21a73e spirv: initial parsing of function parameter decorations
It doesn't do anything substantial yet, but it ignores enough that
internal shaders won't generate warnings.

I've also added ByVal parsing, because I need this one to actually fix a
correctness issue in a later patch.

Cc: mesa-stable
Reviewed-by: Jesse Natalie <jenatali@microsoft.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/29896>
(cherry picked from commit 9b55dcca54)
2024-07-23 21:58:57 +02:00
Karol Herbst
1b4ac55a5a spirv: generate info for FunctionParameterAttribute
Cc: mesa-stable
Reviewed-by: Jesse Natalie <jenatali@microsoft.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/29896>
(cherry picked from commit 90db6c729d)
2024-07-23 21:58:57 +02:00
Jesse Natalie
28d55e0baa microsoft/clc: Split struct copies before vars_to_ssa in pre-inline optimizations
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/29896>
(cherry picked from commit f05b7225a3)
2024-07-23 21:58:56 +02:00
Paulo Zanoni
5e13e71a2a anv/trtt: fix the process of picking device->trtt.queue
We want to use actual sparse-capable queues as the default
trtt->queue, not copy queues that may have a companion_rcs_batch.
Before this patch, if we expose more than one queue *and* the
application creates a copy queue first, we'll end up setting
trtt->queue as the copy queue, which will GPU hang when we submit the
TR-TT batches as they don't support the pipe_control commands we
issue.

The trtt->queue queue is used for binding/unbinding buffers in code
paths where there's no specific queue coming from user space, such as
when we're creating or destroying a sparse resource.

This is not a problem yet on i915.ko since we are exposing
only a single queue, and it is not a problem for xe.ko since TR-TT is
not the default there. This is also not a problem in applications
that create the render or compute queue first. We plan to expose more
queues when using TR-TT, so this would become a problem without this
patch.

None of VK-GL-CTS seems to exercise that, and none of the Steam games
I tested exercise that as well. I was able to reproduce this issue
using our internal tracing tool.

v2: New implementation that doesn't break when we only have a compute
    queue (Lionel).

Fixes: 04bfe828db ("anv/sparse: allow sparse resouces to use TR-TT as its backend")
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30252>
(cherry picked from commit 3ab8ff99fa)
2024-07-23 21:58:55 +02:00
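A hypothetical sketch of the selection policy described in the commit above: pick the first queue whose family can actually do sparse binding, and skip copy-only queues, rather than defaulting to whichever queue the application created first. Field names are illustrative, not anv's.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct queue_desc {
   bool supports_sparse;   /* render/compute queues in this scenario */
   bool is_copy_only;      /* transfer queues with a companion batch */
};

static struct queue_desc *
pick_trtt_queue(struct queue_desc *queues, uint32_t count)
{
   for (uint32_t i = 0; i < count; i++) {
      if (queues[i].supports_sparse && !queues[i].is_copy_only)
         return &queues[i];
   }
   return NULL; /* the application created no sparse-capable queue */
}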
Valentine Burley
67c16b59b2 tu/kgsl: Remove unused variable
The offset variable declaration at the beginning of the function was left over
after the variable was moved inside the if statement.

Fixes: 17c12a9924 ("turnip/kgsl: Support external memory via ION/DMABUF buffers")

Signed-off-by: Valentine Burley <valentine.burley@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30300>
(cherry picked from commit 0a6cbb3a97)
2024-07-23 21:58:54 +02:00
Pierre-Eric Pelloux-Prayer
b71c17e7f4 egl,gbm,glx: fix log message spam
Based on the other similar logs, we only want to log when extensions
is NULL.
Use this opportunity to indicate the source of the log and to remove
the extra ')' at the end of each line.

Fixes: 50fc7cc290 ("glx: directly link to gallium")
Reviewed-by: Eric Engestrom <eric@igalia.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30257>
(cherry picked from commit 159a3edd80)
2024-07-23 21:58:47 +02:00
Pierre-Eric Pelloux-Prayer
59c48fa36a amd: use a valid size for ac_pm4_state allocation
If max_dw is smaller than the pm4 array, the allocation size would be
smaller than sizeof(ac_pm4_state).

Fixes: 428601095c ("ac,radeonsi import PM4 state from RadeonSI")
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30257>
(cherry picked from commit 0c868aa94a)
2024-07-23 21:58:46 +02:00
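A sketch of the sizing rule the fix enforces, with an assumed layout of a header followed by an inline array of command dwords: the allocation is clamped so it is never smaller than the struct itself, even for a tiny max_dw. Illustrative layout, not the real ac_pm4_state.

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

#define MAX2(a, b) ((a) > (b) ? (a) : (b))

struct pm4_state_example {
   uint32_t ndw;
   uint32_t max_dw;
   uint32_t pm4[512];   /* inline dword storage in this assumed layout */
};

static struct pm4_state_example *pm4_create(uint32_t max_dw)
{
   /* With a small max_dw, offsetof(pm4) + max_dw * 4 would undershoot
    * sizeof(struct pm4_state_example); clamp so sizeof()-based uses of
    * the state stay in bounds. */
   size_t size = MAX2(sizeof(struct pm4_state_example),
                      offsetof(struct pm4_state_example, pm4) +
                         max_dw * sizeof(uint32_t));
   struct pm4_state_example *s = calloc(1, size);

   if (s)
      s->max_dw = max_dw;
   return s;
}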
Eric Engestrom
0d06efe0a1 v3d/ci: mark spec@amd_performance_monitor@vc4 tests as flaky
Turns out it was not fixed; it just happened to pass a bunch of times
in a row but actually fails randomly, so mark it as such.

Fixes: 4696e9c49b ("v3d/ci: mark spec@amd_performance_monitor@vc4 tests as fixed")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30290>
(cherry picked from commit 547de1e928)
2024-07-23 21:58:43 +02:00
Eric Engestrom
29a2848abe venus: initialize bitset in CreateDescriptorPool()
Fixes: de5879447b ("Track bitset when create descriptor pool")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30286>
(cherry picked from commit 5c5df9376f)
2024-07-23 21:58:42 +02:00
Eric Engestrom
84ac19e896 nak: fix meson typo
Fixes: 95bff5ca5b ("nak: Add minimum bindgen requirement")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30288>
(cherry picked from commit 324ccd7430)
2024-07-23 21:58:40 +02:00
Eric Engestrom
4b6f10f7e7 .pick_status.json: Update to 3b6867f53a 2024-07-23 21:58:37 +02:00
Faith Ekstrand
8da6f4abec nvk: Don't advertise sparse residency on Maxwell A
Fixes: 48803ac53d ("nvk: enable sparse residency features")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30281>
(cherry picked from commit d2177f4764)
2024-07-21 15:05:13 +02:00
Faith Ekstrand
657bc4365b nvk: Fix indirect cbuf binds pre-Turing
nvk_cmd_buffer_push_indirect() takes bytes, not dwords.

Fixes: ee29a8d1cd ("nvk: Upload cbufs based on the cbuf_map")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30281>
(cherry picked from commit a888e83c3a)
2024-07-21 15:05:11 +02:00
Sushma Venkatesh Reddy
5f8a46c62c intel/clflush: Utilize clflushopt in intel_invalidate_range
On MTL ChromeOS boards, during AI-based video conferencing, we were
observing a lot of overhead from invalidations. Upon debugging, it was
found that we were using clflush in this function, which isn't
efficient.

With this change, while executing compute workloads like zoo models, we
get a ~25% performance improvement in the best case.

Rework:
 * Jordan: Call intel_clflushopt_range() rather than
   __builtin_ia32_clflushopt() because intel_mem.c is not compiled
   with -mclflushopt.

Backport-to: 24.1 24.2
Signed-off-by: Sushma Venkatesh Reddy <sushma.venkatesh.reddy@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30238>
(cherry picked from commit 2f6919e6c2)
2024-07-21 15:05:08 +02:00
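A sketch of the mechanism the commit above switches to: flushing the range cache line by cache line with CLFLUSHOPT, which is weakly ordered and therefore followed by a fence, built with a target attribute in the spirit of the -mclflushopt note in the rework. This is an illustration, not Mesa's intel_invalidate_range().

#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

#define CACHELINE 64

__attribute__((target("clflushopt")))
static void invalidate_range_clflushopt(void *start, size_t size)
{
   uintptr_t p = (uintptr_t)start & ~(uintptr_t)(CACHELINE - 1);
   const uintptr_t end = (uintptr_t)start + size;

   for (; p < end; p += CACHELINE)
      _mm_clflushopt((void *)p);

   /* CLFLUSHOPT is weakly ordered; fence before reading the range. */
   _mm_mfence();
}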
Daniel Stone
a381332757 build: Check for PyYAML in Meson build
Closes: #11540
Fixes: ccc6442d6f ("u_format: Rewrite format table to use YAML")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30272>
(cherry picked from commit bed6e0d691)
2024-07-21 15:05:07 +02:00
Jessica Clarke
6a824a57a5 meson: egl: Build egl_dri2 driver even for plain DRI
Despite its name, egl_dri2 works under plain DRI without DRI2, and the
old autotools build system built it when $enable_dri = yes, with no
check for DRI2. This fixes the build for GNU/Hurd, which supports DRI,
but doesn't have DRM and thus no DRI2 support.

Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/587>
(cherry picked from commit 149e8bff52)
2024-07-21 15:05:05 +02:00
Jessica Clarke
2f745476f0 Revert "meson: fix with_dri2 definition for GNU Hurd"
This reverts commit ad862c36e5.

This change does not work, because libdrm is required if with_dri2 is
true. Moreover, we don't want all of DRI2 on Hurd, we just want the
egl_dri2 driver, as done by autotools. So first revert this to stop
trying to build all of DRI2.

Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/587>
(cherry picked from commit ec55a6c329)
2024-07-21 15:05:03 +02:00
Jessica Clarke
82f38946ef Revert "meson: Do not require libdrm for DRI2 on hurd"
This reverts commit 2fd85105c6.

Despite its name, egl_dri2 works under plain DRI without DRI2, and the
old autotools build system built it when $enable_dri = yes, with no
check for DRI2. A future commit will adapt meson.build to follow that
approach rather than this hackier one.

Note that the case removed in the second hunk is already dead code,
since system_has_kms_drm is false on GNU/Hurd, and could have been
dropped as part of 66d2ae0386 ("meson: forcefully disable libdrm when
host doesn't have it").

Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/587>
(cherry picked from commit 8461776a09)
2024-07-21 15:05:02 +02:00
Francisco Jerez
7e047cdb4a iris/gfx12.5: Pass non-empty push constant data to PS stage for TBIMR workaround.
Note that this bug leading to GPU hangs hasn't been reproduced on GL
so far; the workaround is mainly included for completeness.

Fixes: 57decad976 ("intel/xehp: Enable TBIMR by default.")
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30031>
(cherry picked from commit 49144ebcf9)
2024-07-21 15:01:44 +02:00
Francisco Jerez
64d580894a anv/gfx12.5: Pass non-empty push constant data to PS stage for TBIMR workaround.
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10728
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/11399
Fixes: 57decad976 ("intel/xehp: Enable TBIMR by default.")
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30031>
(cherry picked from commit ff3c3792b4)
2024-07-21 15:01:43 +02:00
Francisco Jerez
43e4dffc2a intel/dev: Add devinfo flag for TBIMR push constant workaround.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30031>
(cherry picked from commit bb2513918a)
2024-07-21 15:01:39 +02:00
Francisco Jerez
a69bba6b6d intel/brw: Implement null push constant workaround.
This implements an undocumented workaround for a hardware bug that
affects draw calls with a pixel shader that has 0 push constant cycles
when TBIMR is enabled, which has been seen to lead to a hang with
Fallout 3 and Metal Gear Rising Revengeance.  This hardware bug has
been reported as HSDES#22020184996, which is still pending a resolution
by the hardware team.  However, since this empirically found workaround
has been confirmed to fix the issue reliably and is relatively
harmless, it seems worth checking in already, even though no final W/A
number is available and the W/A json file hasn't been updated.

To avoid the issue we simply pad the push constant payload to be at
least 1 register.  This is enabled via a brw_wm_prog_key since the
driver needs to be in agreement with the compiler on whether the dummy
push constant cycle is present, and it can be avoided in cases where
the driver knows that TBIMR will be disabled (e.g. for BLORP).

Related: https://gitlab.freedesktop.org/mesa/mesa/-/issues/10728
Related: https://gitlab.freedesktop.org/mesa/mesa/-/issues/11399
Fixes: 57decad976 ("intel/xehp: Enable TBIMR by default.")
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30031>
(cherry picked from commit b98eebbcb2)
2024-07-21 14:59:07 +02:00
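A hypothetical sketch of the workaround's shape: when the program key asks for it, pad the pixel shader's push-constant length to at least one register so the hardware never sees an empty push. The field name is illustrative, not brw's actual prog_data layout.

#include <stdbool.h>
#include <stdint.h>

#define MAX2(a, b) ((a) > (b) ? (a) : (b))

struct ps_push_example {
   uint32_t push_reg_count;   /* push constant length, in registers */
};

static void apply_null_push_constant_wa(struct ps_push_example *prog_data,
                                        bool key_requests_wa)
{
   if (key_requests_wa)
      prog_data->push_reg_count = MAX2(prog_data->push_reg_count, 1);
}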
Deborah Brouwer
dad8f2d4e2 ci/lava: Detect a6xx gpu recovery failures
Sporadically the a6xx GPU will fail to recover, causing the lava job
a660_vk_full to loop on error messages for three hours before timing
out.

A few sporadic error messages may still be recoverable, but when multiple
errors occur over a short period, successful recovery is unlikely. Parse
the logs to look for repeated error messages within a short time period.
If found, cancel the lava job and rerun it.

Also add unit tests for this behaviour.

cc: mesa-stable

Reported-by: Valentine Burley <valentine.burley@gmail.com>
Acked-by: Daniel Stone <daniel.stone@collabora.com>
Reviewed-by: Guilherme Gallo <guilherme.gallo@collabora.com>
Signed-off-by: Deborah Brouwer <deborah.brouwer@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30032>
(cherry picked from commit 72c182f873)
2024-07-21 14:59:06 +02:00
Eric Engestrom
d956bc9ec2 loader: gc loader_get_extensions_name() and __DRI_DRIVER_{GET_,}EXTENSIONS defines
Leaving the defines in include/GL/internal/dri_interface.h because I'm
not sure if something still needs them.

Fixes: fa541a887c ("loader: delete loader_open_driver()")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30266>
(cherry picked from commit dfd70bab4a)
2024-07-21 14:59:05 +02:00
Mark Burton
a69fee8131 gallivm: Fix compilation errors when using LLVM 13.
Adds missing header file and fixes local variable type.

Fixes: 47cd0eee26 ("gallivm: create a pass manager wrapper.")

Signed-off-by: Mark Burton <markb@smartavionics.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30058>
(cherry picked from commit 7dfb9ba023)
2024-07-21 14:58:57 +02:00
Eric Engestrom
220582661f venus/ci: skip timing out test
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30264>
(cherry picked from commit e2c90da560)
2024-07-21 14:42:58 +02:00
Eric Engestrom
f056c8fd3d anv+zink/ci: mark a couple of tests as flaky
Seen while trying to merge this series.

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30264>
(cherry picked from commit e64adab9a0)
2024-07-21 14:42:58 +02:00
Eric Engestrom
68f5902f35 anv+zink/ci: document two tests, one failing and one crashing
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30264>
(cherry picked from commit ebef31e4cf)
2024-07-21 14:42:58 +02:00
Eric Engestrom
fcf45e63fa anv+zink/ci: mark some tests as fixed
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30264>
(cherry picked from commit 2ed5d362a6)
2024-07-21 14:42:58 +02:00
Eric Engestrom
e834077024 freedreno/ci: document extra variants of failing tests on a618 and a630
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30264>
(cherry picked from commit 8fe147de57)
2024-07-21 14:42:58 +02:00
Eric Engestrom
1908b19be8 freedreno/ci: double job timeout for a306
Based on the predicted remaining time when it gets killed, it needs just over 30 minutes.

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30264>
(cherry picked from commit 734823fe7d)
2024-07-21 14:42:00 +02:00
Eric Engestrom
59b00a7676 radeonsi/ci: skip timing out test
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30264>
(cherry picked from commit e1edf20a4d)
2024-07-21 14:41:55 +02:00
Eric Engestrom
0d2131493c .pick_status.json: Update to 0cc23b6524 2024-07-21 14:28:58 +02:00
Eric Engestrom
bb87ab6715 VERSION: bump for 24.2.0-rc1 2024-07-19 19:15:46 +02:00
6934 changed files with 615190 additions and 943682 deletions

View File

@@ -2,7 +2,6 @@
# enforcement in the CI.
src/gallium/drivers/i915
src/gallium/drivers/r300/compiler/*
src/gallium/targets/teflon/**/*
src/amd/vulkan/**/*
src/amd/compiler/**/*

View File

@@ -31,7 +31,7 @@ indent_size = 3
[*.patch]
trim_trailing_whitespace = false
[{meson.build,meson.options}]
[{meson.build,meson_options.txt}]
indent_style = space
indent_size = 2

View File

@@ -65,15 +65,3 @@ c7bf3b69ebc8f2252dbf724a4de638e6bb2ac402
# ir3: Reformat source with clang-format
177138d8cb0b4f6a42ef0a1f8593e14d79f17c54
# ir3: reformat after refactoring in previous commit
8ae5b27ee0331a739d14b42e67586784d6840388
# ir3: don't use deprecated NIR_PASS_V anymore
2fedc82c0cc9d3fb2e54707b57941b79553b640c
# ir3: reformat after previous commit
7210054db8cfb445a8ccdeacfdcfecccf44fa266
# freedreno/a6xx: The great register renaming
7fd99c88b9cd5c0c8c1cb3e92383acac5cb8220b

View File

@@ -42,7 +42,7 @@ jobs:
[binaries]
llvm-config = '/usr/local/opt/llvm/bin/llvm-config'
EOL
$MESON_EXEC . build --native-file=native_config -Dmoltenvk-dir=$(brew --prefix molten-vk) -Dbuild-tests=true -Dgallium-drivers=swrast,zink -Dglx=${{ matrix.glx_option }}
$MESON_EXEC . build --native-file=native_config -Dmoltenvk-dir=$(brew --prefix molten-vk) -Dbuild-tests=true -Dosmesa=true -Dgallium-drivers=swrast,zink -Dglx=${{ matrix.glx_option }}
- name: Build
run: $MESON_EXEC compile -C build
- name: Test

View File

@@ -30,73 +30,50 @@ workflow:
# do not duplicate pipelines on merge pipelines
- if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS && $CI_PIPELINE_SOURCE == "push"
when: never
# Tag pipelines are disabled as it's too late to run all the tests by
# then, the release has been made based on the staging pipelines results
- if: $CI_COMMIT_TAG
when: never
# Merge pipeline
# merge pipeline
- if: &is-merge-attempt $GITLAB_USER_LOGIN == "marge-bot" && $CI_PIPELINE_SOURCE == "merge_request_event"
variables:
KERNEL_IMAGE_BASE: https://${S3_HOST}/${S3_KERNEL_BUCKET}/${KERNEL_REPO}/${KERNEL_TAG}
MESA_CI_PERFORMANCE_ENABLED: 1
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64: priority:high
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64_KVM: priority:high-kvm
FDO_RUNNER_JOB_PRIORITY_TAG_AARCH64: priority:high-aarch64
CI_TRON_JOB_PRIORITY_TAG: "" # Empty tags are ignored by gitlab
JOB_PRIORITY: 75
# fast-fail in merge pipelines: stop early if we get this many unexpected fails/crashes
DEQP_RUNNER_MAX_FAILS: 40
# Post-merge pipeline
VALVE_INFRA_VANGOGH_JOB_PRIORITY: "" # Empty tags are ignored by gitlab
# post-merge pipeline
- if: &is-post-merge $GITLAB_USER_LOGIN == "marge-bot" && $CI_PIPELINE_SOURCE == "push"
variables:
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64: priority:high
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64_KVM: priority:high-kvm
FDO_RUNNER_JOB_PRIORITY_TAG_AARCH64: priority:high-aarch64
# Pre-merge pipeline (because merge pipelines are already caught above)
- if: &is-merge-request $CI_PIPELINE_SOURCE == "merge_request_event"
# Push to a branch on a fork
- if: &is-push-to-fork $CI_PROJECT_NAMESPACE != "mesa" && $CI_PIPELINE_SOURCE == "push"
# a pipeline running within the upstream project
- if: &is-upstream-pipeline $CI_PROJECT_PATH == $FDO_UPSTREAM_REPO
# an MR pipeline running within the upstream project, usually true for
# those with the Developer role or above
- if: &is-upstream-mr-pipeline $CI_PROJECT_PATH == $FDO_UPSTREAM_REPO && $CI_PIPELINE_SOURCE == "merge_request_event"
# Nightly pipeline
# nightly pipeline
- if: &is-scheduled-pipeline $CI_PIPELINE_SOURCE == "schedule"
variables:
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64: priority:low
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64_KVM: priority:low-kvm
FDO_RUNNER_JOB_PRIORITY_TAG_AARCH64: priority:low-aarch64
JOB_PRIORITY: 45
# (some) nightly builds perform LTO, so they take much longer than the
# short timeout allowed in other pipelines.
# Note: 0 = infinity = gitlab's job `timeout:` applies, which is 1h
BUILD_JOB_TIMEOUT_OVERRIDE: 0
# Pipeline for direct pushes to the default branch that bypassed the CI
- if: &is-push-to-upstream-default-branch $CI_PROJECT_NAMESPACE == "mesa" && $CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
KERNEL_IMAGE_BASE: https://${S3_HOST}/${S3_KERNEL_BUCKET}/${KERNEL_REPO}/${KERNEL_TAG}
JOB_PRIORITY: 50
VALVE_INFRA_VANGOGH_JOB_PRIORITY: priority:low
# pipeline for direct pushes that bypassed the CI
- if: &is-direct-push $CI_PROJECT_NAMESPACE == "mesa" && $CI_PIPELINE_SOURCE == "push" && $GITLAB_USER_LOGIN != "marge-bot"
variables:
JOB_PRIORITY: 70
# Pipeline for direct pushes from release maintainer
- if: &is-push-to-upstream-staging-branch $CI_PROJECT_NAMESPACE == "mesa" && $CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_REF_NAME =~ /^staging\//
KERNEL_IMAGE_BASE: https://${S3_HOST}/${S3_KERNEL_BUCKET}/${KERNEL_REPO}/${KERNEL_TAG}
JOB_PRIORITY: 40
VALVE_INFRA_VANGOGH_JOB_PRIORITY: priority:low
# pre-merge or fork pipeline
- if: $FORCE_KERNEL_TAG != null
variables:
JOB_PRIORITY: 70
KERNEL_IMAGE_BASE: https://${S3_HOST}/${S3_KERNEL_BUCKET}/${KERNEL_REPO}/${FORCE_KERNEL_TAG}
JOB_PRIORITY: 50
VALVE_INFRA_VANGOGH_JOB_PRIORITY: priority:low
- if: $FORCE_KERNEL_TAG == null
variables:
KERNEL_IMAGE_BASE: https://${S3_HOST}/${S3_KERNEL_BUCKET}/${KERNEL_REPO}/${KERNEL_TAG}
JOB_PRIORITY: 50
VALVE_INFRA_VANGOGH_JOB_PRIORITY: priority:low
variables:
FDO_UPSTREAM_REPO: mesa/mesa
MESA_TEMPLATES_COMMIT: &ci-templates-commit c6aeb16f86e32525fa630fb99c66c4f3e62fc3cb
MESA_TEMPLATES_COMMIT: &ci-templates-commit d5aa3941aa03c2f716595116354fb81eb8012acb
CI_PRE_CLONE_SCRIPT: |-
set -o xtrace
curl --silent --location --fail --retry-connrefused --retry 3 --retry-delay 10 \
${CI_PROJECT_URL}/-/raw/${CI_COMMIT_SHA}/.gitlab-ci/download-git-cache.sh | bash
wget -q -O download-git-cache.sh ${CI_PROJECT_URL}/-/raw/${CI_COMMIT_SHA}/.gitlab-ci/download-git-cache.sh
bash download-git-cache.sh
rm download-git-cache.sh
set +o xtrace
S3_JWT_FILE: /s3_jwt
S3_JWT_FILE_SCRIPT: |-
echo -n '${S3_JWT}' > '${S3_JWT_FILE}' &&
S3_JWT_FILE_SCRIPT= &&
unset CI_JOB_JWT S3_JWT # Unsetting vulnerable env variables
S3_HOST: s3.freedesktop.org
# This bucket is used to fetch ANDROID prebuilts and images
S3_ANDROID_BUCKET: mesa-rootfs
# This bucket is used to fetch the kernel image
S3_KERNEL_BUCKET: mesa-rootfs
# Bucket for git cache
@@ -107,8 +84,6 @@ variables:
S3_TRACIE_RESULTS_BUCKET: mesa-tracie-results
S3_TRACIE_PUBLIC_BUCKET: mesa-tracie-public
S3_TRACIE_PRIVATE_BUCKET: mesa-tracie-private
# Base path used for various artifacts
S3_BASE_PATH: "${S3_HOST}/${S3_KERNEL_BUCKET}"
# per-pipeline artifact storage on MinIO
PIPELINE_ARTIFACTS_BASE: ${S3_HOST}/${S3_ARTIFACTS_BUCKET}/${CI_PROJECT_PATH}/${CI_PIPELINE_ID}
# per-job artifact storage on MinIO
@@ -122,28 +97,12 @@ variables:
ARTIFACTS_BASE_URL: https://${CI_PROJECT_ROOT_NAMESPACE}.${CI_PAGES_DOMAIN}/-/${CI_PROJECT_NAME}/-/jobs/${CI_JOB_ID}/artifacts
# Python scripts for structured logger
PYTHONPATH: "$PYTHONPATH:$CI_PROJECT_DIR/install"
# No point in continuing once the device is lost
# Drop once deqp-runner is upgraded to > 0.18.0
MESA_VK_ABORT_ON_DEVICE_LOSS: 1
# Avoid the wall of "Unsupported SPIR-V capability" warnings in CI job log, hiding away useful output
MESA_SPIRV_LOG_LEVEL: error
# Default priority for non-merge pipelines
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64: "" # Empty tags are ignored by gitlab
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64_KVM: kvm
FDO_RUNNER_JOB_PRIORITY_TAG_AARCH64: aarch64
CI_TRON_JOB_PRIORITY_TAG: ci-tron:priority:low
JOB_PRIORITY: 50
DATA_STORAGE_PATH: data_storage
KERNEL_IMAGE_BASE: "https://$S3_HOST/$S3_KERNEL_BUCKET/$KERNEL_REPO/$KERNEL_TAG"
# Mesa-specific variables that shouldn't be forwarded to DUTs and crosvm
CI_EXCLUDE_ENV_VAR_REGEX: 'SCRIPTS_DIR|RESULTS_DIR'
CI_TRON_JOB_TEMPLATE_PROJECT: &ci-tron-template-project gfx-ci/ci-tron
CI_TRON_JOB_TEMPLATE_COMMIT: &ci-tron-template-commit ddadab0006e43f1365cd30779f565b444a6538ee
CI_TRON_JOB_TEMPLATE_PROJECT_URL: "https://gitlab.freedesktop.org/$CI_TRON_JOB_TEMPLATE_PROJECT"
default:
timeout: 1m # catch any jobs which don't specify a timeout
id_tokens:
S3_JWT:
aud: https://s3.freedesktop.org
@@ -151,13 +110,21 @@ default:
- >
export SCRIPTS_DIR=$(mktemp -d) &&
curl -L -s --retry 4 -f --retry-all-errors --retry-delay 60 -O --output-dir "${SCRIPTS_DIR}" "${CI_PROJECT_URL}/-/raw/${CI_COMMIT_SHA}/.gitlab-ci/setup-test-env.sh" &&
. ${SCRIPTS_DIR}/setup-test-env.sh
- eval "$S3_JWT_FILE_SCRIPT"
. ${SCRIPTS_DIR}/setup-test-env.sh &&
echo -n "${S3_JWT}" > "${S3_JWT_FILE}" &&
unset CI_JOB_JWT S3_JWT # Unsetting vulnerable env variables
after_script:
# Work around https://gitlab.com/gitlab-org/gitlab/-/issues/20338
- find -name '*.log' -exec mv {} {}.txt \;
- >
set +x
test -e "${S3_JWT_FILE}" &&
export S3_JWT="$(<${S3_JWT_FILE})" &&
rm "${S3_JWT_FILE}"
# Retry when job fails. Failed jobs can be found in the Mesa CI Daily Reports:
# https://gitlab.freedesktop.org/mesa/mesa/-/issues/?sort=created_date&state=opened&label_name%5B%5D=CI%20daily
retry:
@@ -177,45 +144,33 @@ stages:
- sanity
- container
- git-archive
- build-for-tests
- build-only
- build-x86_64
- build-misc
- code-validation
- amd
- amd-nightly
- intel
- intel-nightly
- nouveau
- nouveau-nightly
- arm
- arm-nightly
- broadcom
- broadcom-nightly
- freedreno
- freedreno-nightly
- etnaviv
- etnaviv-nightly
- software-renderer
- software-renderer-nightly
- layered-backends
- layered-backends-nightly
- performance
- deploy
include:
- project: 'freedesktop/ci-templates'
ref: 16bc29078de5e0a067ff84a1a199a3760d3b3811
file:
- '/templates/ci-fairy.yml'
- project: 'freedesktop/ci-templates'
ref: *ci-templates-commit
file:
- '/templates/alpine.yml'
- '/templates/debian.yml'
- '/templates/fedora.yml'
- '/templates/ci-fairy.yml'
- project: *ci-tron-template-project
ref: *ci-tron-template-commit
file: '/.gitlab-ci/dut.yml'
- local: '.gitlab-ci/image-tags.yml'
- local: '.gitlab-ci/bare-metal/gitlab-ci.yml'
- local: '.gitlab-ci/ci-tron/gitlab-ci.yml'
- local: '.gitlab-ci/lava/gitlab-ci.yml'
- local: '.gitlab-ci/lava/lava-gitlab-ci.yml'
- local: '.gitlab-ci/container/gitlab-ci.yml'
- local: '.gitlab-ci/build/gitlab-ci.yml'
- local: '.gitlab-ci/test/gitlab-ci.yml'
@@ -225,11 +180,12 @@ include:
- local: 'src/**/ci/gitlab-ci.yml'
# Rules applied to every job in the pipeline
.common-rules:
rules:
- if: *is-push-to-fork
when: manual
# YAML anchors for rule conditions
# --------------------------------
.rules-anchors:
# Pre-merge pipeline
- &is-pre-merge '$CI_PIPELINE_SOURCE == "merge_request_event"'
.never-post-merge-rules:
rules:
@@ -237,61 +193,8 @@ include:
when: never
# Note: make sure the branches in this list are the same as in
# `.build-only-delayed-rules` below.
.container-rules:
.container+build-rules:
rules:
- !reference [.common-rules, rules]
# Run when re-enabling a disabled farm, but not when disabling it
- !reference [.disable-farm-mr-rules, rules]
# Never run immediately after merging, as we just ran everything
- !reference [.never-post-merge-rules, rules]
# Only rebuild containers in merge pipelines if any tags have been
# changed, else we'll just use the already-built containers
- if: *is-merge-attempt
changes: &image_tags_path
- .gitlab-ci/image-tags.yml
when: on_success
# Skip everything for pre-merge and merge pipelines which don't change
# anything in the build; we only do this for marge-bot and not user
# pipelines in a MR, because we might still need to run it to copy the
# container into the user's namespace.
- if: *is-merge-attempt
when: never
# Any MR pipeline which changes image-tags.yml needs to be able to
# rebuild the containers
- if: *is-merge-request
changes: *image_tags_path
when: manual
# ... if the MR pipeline runs as mesa/mesa and does not need a container
# rebuild, we can skip it
- if: *is-upstream-mr-pipeline
when: never
# ... however for MRs running inside the user namespace, we may need to
# run these jobs to copy the container images from upstream
- if: *is-merge-request
when: manual
# Build everything after someone bypassed the CI
- if: *is-push-to-upstream-default-branch
when: on_success
# Build everything when pushing to staging branches
- if: *is-push-to-upstream-staging-branch
when: on_success
# Scheduled pipelines reuse already-built containers
- if: *is-scheduled-pipeline
when: never
# Any other pipeline in the upstream should reuse already-built containers
- if: *is-upstream-pipeline
when: never
# Allow building everything in fork pipelines, but build nothing unless
# manually triggered
- when: manual
# Note: make sure the branches in this list are the same as in
# `.build-only-delayed-rules` below.
.build-rules:
rules:
- !reference [.common-rules, rules]
# Run when re-enabling a disabled farm, but not when disabling it
- !reference [.disable-farm-mr-rules, rules]
# Never run immediately after merging, as we just ran everything
@@ -304,7 +207,6 @@ include:
- bin/git_sha1_gen.py
- bin/install_megadrivers.py
- bin/symbols-check.py
- bin/ci/**/*
# GitLab CI
- .gitlab-ci.yml
- .gitlab-ci/**/*
@@ -322,20 +224,18 @@ include:
- src/**/*
when: on_success
# Same as above, but for pre-merge pipelines
- if: *is-merge-request
changes: *all_paths
- if: *is-pre-merge
changes:
*all_paths
when: manual
# Skip everything for pre-merge and merge pipelines which don't change
# anything in the build
- if: *is-merge-attempt
when: never
- if: *is-merge-request
- if: *is-pre-merge
when: never
# Build everything after someone bypassed the CI
- if: *is-push-to-upstream-default-branch
when: on_success
# Build everything when pushing to staging branches
- if: *is-push-to-upstream-staging-branch
- if: *is-direct-push
when: on_success
# Build everything in scheduled pipelines
- if: *is-scheduled-pipeline
@@ -344,58 +244,47 @@ include:
# manually triggered
- when: manual
# Repeat of the above but with `when: on_success` replaced with
# `when: delayed` + `start_in:`, for build-only jobs.
# Note: make sure the branches in this list are the same as in
# `.container+build-rules` above.
.build-only-delayed-rules:
.ci-deqp-artifacts:
artifacts:
name: "mesa_${CI_JOB_NAME}"
when: always
untracked: false
paths:
# Watch out! Artifacts are relative to the build dir.
# https://gitlab.com/gitlab-org/gitlab-ce/commit/8788fb925706cad594adf6917a6c5f6587dd1521
- artifacts
- _build/meson-logs/*.txt
- _build/meson-logs/strace
# Git archive
make git archive:
extends:
- .fdo.ci-fairy
stage: git-archive
rules:
- !reference [.common-rules, rules]
# Run when re-enabling a disabled farm, but not when disabling it
- !reference [.disable-farm-mr-rules, rules]
# Never run immediately after merging, as we just ran everything
- !reference [.never-post-merge-rules, rules]
# Build everything in merge pipelines, if any files affecting the pipeline
# were changed
- if: *is-merge-attempt
changes: *all_paths
when: delayed
start_in: &build-delay 5 minutes
# Same as above, but for pre-merge pipelines
- if: *is-merge-request
changes: *all_paths
when: manual
# Skip everything for pre-merge and merge pipelines which don't change
# anything in the build
- if: *is-merge-attempt
when: never
- if: *is-merge-request
when: never
# Build everything after someone bypassed the CI
- if: *is-push-to-upstream-default-branch
when: delayed
start_in: *build-delay
# Build everything when pushing to staging branches
- if: *is-push-to-upstream-staging-branch
when: delayed
start_in: *build-delay
# Build everything in scheduled pipelines
- if: *is-scheduled-pipeline
when: delayed
start_in: *build-delay
# Allow building everything in fork pipelines, but build nothing unless
# manually triggered
- when: manual
- !reference [.scheduled_pipeline-rules, rules]
# ensure we are running on packet
tags:
- packet.net
script:
# Compactify the .git directory
- git gc --aggressive
# Download & cache the perfetto subproject as well.
- rm -rf subprojects/perfetto ; mkdir -p subprojects/perfetto && curl https://android.googlesource.com/platform/external/perfetto/+archive/$(grep 'revision =' subprojects/perfetto.wrap | cut -d ' ' -f3).tar.gz | tar zxf - -C subprojects/perfetto
# compress the current folder
- tar -cvzf ../$CI_PROJECT_NAME.tar.gz .
- ci-fairy s3cp --token-file "${S3_JWT_FILE}" ../$CI_PROJECT_NAME.tar.gz https://$S3_HOST/git-cache/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/$CI_PROJECT_NAME.tar.gz
# Sanity checks of MR settings and commit logs
sanity:
extends:
- .fdo.ci-fairy
stage: sanity
tags:
- placeholder-job
rules:
- if: *is-merge-request
- if: *is-pre-merge
when: on_success
- when: never
variables:
@@ -406,20 +295,19 @@ sanity:
- |
set -eu
image_tags=(
ALPINE_X86_64_BUILD_TAG
ALPINE_X86_64_LAVA_SSH_TAG
ALPINE_X86_64_LAVA_TRIGGER_TAG
DEBIAN_BASE_TAG
DEBIAN_BUILD_TAG
DEBIAN_TEST_ANDROID_TAG
DEBIAN_TEST_GL_TAG
DEBIAN_TEST_VK_TAG
ALPINE_X86_64_BUILD_TAG
ALPINE_X86_64_LAVA_SSH_TAG
FEDORA_X86_64_BUILD_TAG
FIRMWARE_TAG
KERNEL_ROOTFS_TAG
KERNEL_TAG
PKG_REPO_REV
WINDOWS_X64_BUILD_TAG
WINDOWS_X64_MSVC_TAG
WINDOWS_X64_BUILD_TAG
WINDOWS_X64_TEST_TAG
)
for var in "${image_tags[@]}"
@@ -434,4 +322,30 @@ sanity:
when: on_failure
reports:
junit: check-*.xml
tags:
- placeholder-job
mr-label-maker-test:
extends:
- .fdo.ci-fairy
stage: sanity
rules:
- !reference [.mr-label-maker-rules, rules]
variables:
GIT_STRATEGY: fetch
timeout: 10m
script:
- set -eu
- python3 -m venv .venv
- source .venv/bin/activate
- pip install git+https://gitlab.freedesktop.org/freedesktop/mr-label-maker
- mr-label-maker --dry-run --mr $CI_MERGE_REQUEST_IID
# Jobs that need to pass before spending hardware resources on further testing
.required-for-hardware-jobs:
needs:
- job: clang-format
optional: true
- job: rustfmt
optional: true

View File

@@ -1,33 +0,0 @@
[flake8]
exclude = .venv*,
# PEP 8 Style Guide limits line length to 79 characters
max-line-length = 159
ignore =
# continuation line under-indented for hanging indent
E121
# continuation line over-indented for hanging indent
E126,
# continuation line under-indented for visual indent
E128,
# whitespace before ':'
E203,
# missing whitespace around arithmetic operator
E226,
# missing whitespace after ','
E231,
# expected 2 blank lines, found 1
E302,
# too many blank lines
E303,
# imported but unused
F401,
# f-string is missing placeholders
F541,
# local variable assigned to but never used
F841,
# line break before binary operator
W503,
# line break after binary operator
W504,

View File

@@ -80,36 +80,3 @@ wayland-dEQP-EGL.functional.render.multi_context.gles2_gles3.other
wayland-dEQP-EGL.functional.render.multi_thread.gles2.other
wayland-dEQP-EGL.functional.render.multi_thread.gles3.other
wayland-dEQP-EGL.functional.render.multi_thread.gles2_gles3.other
# These test the loader more than the implementation and are broken because the
# Vulkan loader in Debian is too old
dEQP-VK.api.get_device_proc_addr.non_enabled
dEQP-VK.api.version_check.unavailable_entry_points
# These tests are flaking too much recently on almost all drivers, so better skip them until the cause is identified
spec@arb_program_interface_query@arb_program_interface_query-getprogramresourceindex
spec@arb_program_interface_query@arb_program_interface_query-getprogramresourceindex@'vs_input2[1][0]' on GL_PROGRAM_INPUT
# These tests attempt to read from the front buffer after a swap. They are skipped
# on both X11 and gbm, but for different reasons:
#
# On X11: Given that we run piglit tests in parallel in Mesa CI, and don't have a
# compositor running, the frontbuffer reads may end up with undefined results from
# windows overlapping us.
# Piglit does mark these tests as not to be run in parallel, but deqp-runner
# doesn't respect that. We need to extend deqp-runner to allow some tests to be
# marked as single-threaded and run after the rayon loop if we want to support
# them.
# Other front-buffer access tests like fbo-sys-blit, fbo-sys-sub-blit, or
# fcc-front-buffer-distraction don't appear here, because the DRI3 fake-front
# handling should be holding the pixels drawn by the test even if we happen to fail
# GL's window system pixel occlusion test.
# Note that glx skips don't appear here, they're in all-skips.txt (in case someone
# sets PIGLIT_PLATFORM=gbm to mostly use gbm, but still has an X server running).
#
# On gbm: gbm does not support reading the front buffer after a swapbuffers, and
# that's intentional. Don't bother running these tests when PIGLIT_PLATFORM=gbm.
# Note that this doesn't include tests like fbo-sys-blit, which draw/read front
# but don't swap.
spec@!opengl 1.0@gl-1.0-swapbuffers-behavior
spec@!opengl 1.1@read-front

View File

@@ -1,74 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC1091 # paths only become valid at runtime
. "${SCRIPTS_DIR}/setup-test-env.sh"
ci_tag_test_time_check "ANDROID_CTS_TAG"
export PATH=/android-tools/build-tools:/android-cts/jdk/bin/:$PATH
export JAVA_HOME=/android-cts/jdk
# Wait for the appops service to show up
while [ "$($ADB shell dumpsys -l | grep appops)" = "" ] ; do sleep 1; done
SKIP_FILE="$INSTALL/${GPU_VERSION}-android-cts-skips.txt"
EXCLUDE_FILTERS=""
if [ -e "$SKIP_FILE" ]; then
EXCLUDE_FILTERS="$(grep -v -E "(^#|^[[:space:]]*$)" "$SKIP_FILE" | sed -e 's/\s*$//g' -e 's/.*/--exclude-filter "\0" /g')"
fi
INCLUDE_FILE="$INSTALL/${GPU_VERSION}-android-cts-include.txt"
if [ ! -e "$INCLUDE_FILE" ]; then
set +x
echo "ERROR: No include file (${GPU_VERSION}-android-cts-include.txt) found."
echo "This means that we are running the all available CTS modules."
echo "But the time to run it might be too long, please provide an include file instead."
exit 1
fi
INCLUDE_FILTERS="$(grep -v -E "(^#|^[[:space:]]*$)" "$INCLUDE_FILE" | sed -e 's/\s*$//g' -e 's/.*/--include-filter "\0" /g')"
if [ -n "${ANDROID_CTS_PREPARE_COMMAND:-}" ]; then
eval "$ANDROID_CTS_PREPARE_COMMAND"
fi
uncollapsed_section_switch android_cts_test "Android CTS: testing"
set +e
eval "/android-cts/tools/cts-tradefed" run commandAndExit cts-dev \
$INCLUDE_FILTERS \
$EXCLUDE_FILTERS
SUMMARY_FILE=/android-cts/results/latest/invocation_summary.txt
# Parse a line like `x/y modules completed` to check that all modules completed
COMPLETED_MODULES=$(sed -n -e '/modules completed/s/^\([0-9]\+\)\/\([0-9]\+\) .*$/\1/p' "$SUMMARY_FILE")
AVAILABLE_MODULES=$(sed -n -e '/modules completed/s/^\([0-9]\+\)\/\([0-9]\+\) .*$/\2/p' "$SUMMARY_FILE")
[ "$COMPLETED_MODULES" = "$AVAILABLE_MODULES" ]
# shellcheck disable=SC2319 # False-positive see https://github.com/koalaman/shellcheck/issues/2937#issuecomment-2660891195
MODULES_FAILED=$?
# Parse a line like `FAILED : x` to check that no tests failed
[ "$(grep "^FAILED" "$SUMMARY_FILE" | tr -d ' ' | cut -d ':' -f 2)" = "0" ]
# shellcheck disable=SC2319 # False-positive see https://github.com/koalaman/shellcheck/issues/2937#issuecomment-2660891195
TESTS_FAILED=$?
[ "$MODULES_FAILED" = "0" ] && [ "$TESTS_FAILED" = "0" ]
# shellcheck disable=SC2034 # EXIT_CODE is used by the script that sources this one
EXIT_CODE=$?
set -e
mkdir "${RESULTS_DIR}/android-cts"
cp -r "/android-cts/results/latest/" "${RESULTS_DIR}/android-cts/results"
cp -r "/android-cts/logs/latest/" "${RESULTS_DIR}/android-cts/logs"
if [ -n "${ARTIFACTS_BASE_URL:-}" ]; then
echo "============================================"
echo "Review the Android CTS test results at: ${ARTIFACTS_BASE_URL}/results/android-cts/results/test_result.html"
fi
section_end android_cts_test

View File

@@ -1,110 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC1091 # paths only become valid at runtime
. "${SCRIPTS_DIR}/setup-test-env.sh"
# deqp
$ADB shell mkdir -p /data/deqp
$ADB push /deqp-gles/modules/egl/deqp-egl-android /data/deqp
$ADB push /deqp-gles/mustpass/egl-main.txt.zst /data/deqp
$ADB push /deqp-gles/modules/gles2/deqp-gles2 /data/deqp
$ADB push /deqp-gles/mustpass/gles2-main.txt.zst /data/deqp
$ADB push /deqp-vk/external/vulkancts/modules/vulkan/* /data/deqp
$ADB push /deqp-vk/mustpass/vk-main.txt.zst /data/deqp
$ADB push /deqp-tools/* /data/deqp
$ADB push /deqp-runner/deqp-runner /data/deqp
$ADB push "$INSTALL/all-skips.txt" /data/deqp
$ADB push "$INSTALL/android-skips.txt" /data/deqp
$ADB push "$INSTALL/angle-skips.txt" /data/deqp
if [ -e "$INSTALL/$GPU_VERSION-flakes.txt" ]; then
$ADB push "$INSTALL/$GPU_VERSION-flakes.txt" /data/deqp
fi
if [ -e "$INSTALL/$GPU_VERSION-fails.txt" ]; then
$ADB push "$INSTALL/$GPU_VERSION-fails.txt" /data/deqp
fi
if [ -e "$INSTALL/$GPU_VERSION-skips.txt" ]; then
$ADB push "$INSTALL/$GPU_VERSION-skips.txt" /data/deqp
fi
$ADB push "$INSTALL/deqp-$DEQP_SUITE.toml" /data/deqp
BASELINE=""
if [ -e "$INSTALL/$GPU_VERSION-fails.txt" ]; then
BASELINE="--baseline /data/deqp/$GPU_VERSION-fails.txt"
fi
# Default to an empty known flakes file if it doesn't exist.
$ADB shell "touch /data/deqp/$GPU_VERSION-flakes.txt"
DEQP_SKIPS=""
if [ -e "$INSTALL/$GPU_VERSION-skips.txt" ]; then
DEQP_SKIPS="$DEQP_SKIPS /data/deqp/$GPU_VERSION-skips.txt"
fi
if [ -n "${ANGLE_TAG:-}" ]; then
DEQP_SKIPS="$DEQP_SKIPS /data/deqp/angle-skips.txt"
fi
AOSP_RESULTS=/data/deqp/results
uncollapsed_section_switch cuttlefish_test "cuttlefish: testing"
# Print the detailed version with the list of backports and local patches
{ set +x; } 2>/dev/null
for api in vk-main vk gl gles; do
deqp_version_log=/deqp-$api/deqp-$api-version
if [ -r "$deqp_version_log" ]; then
cat "$deqp_version_log"
fi
done
set -x
set +e
$ADB shell "mkdir ${AOSP_RESULTS}; cd ${AOSP_RESULTS}/..; \
XDG_CACHE_HOME=/data/local/tmp \
./deqp-runner \
suite \
--suite /data/deqp/deqp-$DEQP_SUITE.toml \
--output $AOSP_RESULTS \
--skips /data/deqp/all-skips.txt $DEQP_SKIPS \
--flakes /data/deqp/$GPU_VERSION-flakes.txt \
--testlog-to-xml /data/deqp/testlog-to-xml \
--shader-cache-dir /data/local/tmp \
--fraction-start ${CI_NODE_INDEX:-1} \
--fraction $(( CI_NODE_TOTAL * ${DEQP_FRACTION:-1})) \
--jobs ${FDO_CI_CONCURRENT:-4} \
$BASELINE \
${DEQP_RUNNER_MAX_FAILS:+--max-fails \"$DEQP_RUNNER_MAX_FAILS\"} \
"
# shellcheck disable=SC2034 # EXIT_CODE is used by the script that sources this one
EXIT_CODE=$?
set -e
section_switch cuttlefish_results "cuttlefish: gathering the results"
$ADB pull "$AOSP_RESULTS/." "$RESULTS_DIR"
# Remove all but the first 50 individual XML files uploaded as artifacts, to
# save fd.o space when you break everything.
find $RESULTS_DIR -name \*.xml | \
sort -n |
sed -n '1,+49!p' | \
xargs rm -f
# If any QPA XMLs are there, then include the XSL/CSS in our artifacts.
find $RESULTS_DIR -name \*.xml \
-exec cp /deqp-tools/testlog.css /deqp-tools/testlog.xsl "$RESULTS_DIR/" ";" \
-quit
$ADB shell "cd ${AOSP_RESULTS}/..; \
./deqp-runner junit \
--testsuite dEQP \
--results $AOSP_RESULTS/failures.csv \
--output $AOSP_RESULTS/junit.xml \
--limit 50 \
--template \"See $ARTIFACTS_BASE_URL/results/{{testcase}}.xml\""
$ADB pull "$AOSP_RESULTS/junit.xml" "$RESULTS_DIR"
section_end cuttlefish_results

View File

@@ -1,150 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC1091 # paths only become valid at runtime
set -uex
# Set default ADB command if not set already
: "${ADB:=adb}"
$ADB wait-for-device root
sleep 1
# overlay
REMOUNT_PATHS="/vendor"
if [ "$ANDROID_VERSION" -ge 15 ]; then
REMOUNT_PATHS="$REMOUNT_PATHS /system"
fi
OV_TMPFS="/data/overlay-remount"
$ADB shell mkdir -p "$OV_TMPFS"
$ADB shell mount -t tmpfs none "$OV_TMPFS"
for path in $REMOUNT_PATHS; do
$ADB shell mkdir -p "${OV_TMPFS}${path}-upper"
$ADB shell mkdir -p "${OV_TMPFS}${path}-work"
opts="lowerdir=${path},upperdir=${OV_TMPFS}${path}-upper,workdir=${OV_TMPFS}${path}-work"
$ADB shell mount -t overlay -o "$opts" none ${path}
done
$ADB shell setenforce 0
$ADB push /android-tools/eglinfo /data
$ADB push /android-tools/vulkaninfo /data
get_gles_runtime_renderer() {
while [ "$($ADB shell XDG_CACHE_HOME=/data/local/tmp /data/eglinfo | grep 'OpenGL ES profile renderer':)" = "" ] ; do sleep 1; done
$ADB shell XDG_CACHE_HOME=/data/local/tmp /data/eglinfo | grep 'OpenGL ES profile renderer' | head -1
}
get_gles_runtime_version() {
while [ "$($ADB shell XDG_CACHE_HOME=/data/local/tmp /data/eglinfo | grep 'OpenGL ES profile version:')" = "" ] ; do sleep 1; done
$ADB shell XDG_CACHE_HOME=/data/local/tmp /data/eglinfo | grep 'OpenGL ES profile version:' | head -1
}
get_vk_runtime_device_name() {
$ADB shell XDG_CACHE_HOME=/data/local/tmp /data/vulkaninfo | grep deviceName | head -1
}
get_vk_runtime_version() {
$ADB shell XDG_CACHE_HOME=/data/local/tmp /data/vulkaninfo | grep driverInfo | head -1
}
# Check what GLES & VK implementation is used before uploading the new libraries
get_gles_runtime_renderer
get_gles_runtime_version
get_vk_runtime_device_name
get_vk_runtime_version
# replace libraries
$ADB shell rm -f /vendor/lib64/libgallium_dri.so*
$ADB shell rm -f /vendor/lib64/egl/libEGL_mesa.so*
$ADB shell rm -f /vendor/lib64/egl/libGLESv1_CM_mesa.so*
$ADB shell rm -f /vendor/lib64/egl/libGLESv2_mesa.so*
$ADB push "$INSTALL/lib/libgallium_dri.so" /vendor/lib64/libgallium_dri.so
$ADB push "$INSTALL/lib/libEGL.so" /vendor/lib64/egl/libEGL_mesa.so
$ADB push "$INSTALL/lib/libGLESv1_CM.so" /vendor/lib64/egl/libGLESv1_CM_mesa.so
$ADB push "$INSTALL/lib/libGLESv2.so" /vendor/lib64/egl/libGLESv2_mesa.so
$ADB shell rm -f /vendor/lib64/hw/vulkan.lvp.so*
$ADB shell rm -f /vendor/lib64/hw/vulkan.virtio.so*
$ADB shell rm -f /vendor/lib64/hw/vulkan.intel.so*
$ADB push "$INSTALL/lib/libvulkan_lvp.so" /vendor/lib64/hw/vulkan.lvp.so
$ADB push "$INSTALL/lib/libvulkan_virtio.so" /vendor/lib64/hw/vulkan.virtio.so
$ADB push "$INSTALL/lib/libvulkan_intel.so" /vendor/lib64/hw/vulkan.intel.so
$ADB shell rm -f /vendor/lib64/egl/libEGL_emulation.so*
$ADB shell rm -f /vendor/lib64/egl/libGLESv1_CM_emulation.so*
$ADB shell rm -f /vendor/lib64/egl/libGLESv2_emulation.so*
if [ -n "${ANGLE_TAG:-}" ]; then
ANGLE_DEST_PATH=/vendor/lib64/egl
if [ "$ANDROID_VERSION" -ge 15 ]; then
ANGLE_DEST_PATH=/system/lib64
fi
$ADB shell rm -f "$ANGLE_DEST_PATH/libEGL_angle.so"*
$ADB shell rm -f "$ANGLE_DEST_PATH/libGLESv1_CM_angle.so"*
$ADB shell rm -f "$ANGLE_DEST_PATH/libGLESv2_angle.so"*
$ADB push /angle/libEGL_angle.so "$ANGLE_DEST_PATH/libEGL_angle.so"
$ADB push /angle/libGLESv1_CM_angle.so "$ANGLE_DEST_PATH/libGLESv1_CM_angle.so"
$ADB push /angle/libGLESv2_angle.so "$ANGLE_DEST_PATH/libGLESv2_angle.so"
fi
# Check what GLES & VK implementation is used after uploading the new libraries
MESA_BUILD_VERSION=$(cat "$INSTALL/VERSION")
get_gles_runtime_renderer
GLES_RUNTIME_VERSION="$(get_gles_runtime_version)"
get_vk_runtime_device_name
VK_RUNTIME_VERSION="$(get_vk_runtime_version)"
if [ -n "${ANGLE_TAG:-}" ]; then
# Note: we are injecting the ANGLE libs too, so we need to check if the
# new ANGLE libs are being used.
ANGLE_HASH=$(head -c 12 /angle/version)
if ! printf "%s" "$GLES_RUNTIME_VERSION" | grep --quiet "${ANGLE_HASH}"; then
echo "Fatal: Android is loading a wrong version of the ANGLE libs: ${ANGLE_HASH}" 1>&2
exit 1
fi
fi
if ! printf "%s" "$VK_RUNTIME_VERSION" | grep -Fq -- "${MESA_BUILD_VERSION}"; then
echo "Fatal: Android is loading a wrong version of the Mesa3D Vulkan libs: ${VK_RUNTIME_VERSION}" 1>&2
exit 1
fi
get_surfaceflinger_pid() {
while [ "$($ADB shell dumpsys -l | grep 'SurfaceFlinger$')" = "" ] ; do sleep 1; done
$ADB shell ps -A | grep -i surfaceflinger | tr -s ' ' | cut -d ' ' -f 2
}
OLD_SF_PID=$(get_surfaceflinger_pid)
# restart Android shell, so that services use the new libraries
$ADB shell stop
$ADB shell start
# Check that SurfaceFlinger restarted, to ensure that new libraries have been picked up
NEW_SF_PID=$(get_surfaceflinger_pid)
if [ "$OLD_SF_PID" == "$NEW_SF_PID" ]; then
echo "Fatal: check that SurfaceFlinger restarted" 1>&2
exit 1
fi
if [ -n "${ANDROID_CTS_TAG:-}" ]; then
# The script sets EXIT_CODE
. "$(dirname "$0")/android-cts-runner.sh"
else
# The script sets EXIT_CODE
. "$(dirname "$0")/android-deqp-runner.sh"
fi
exit $EXIT_CODE

View File

@@ -1,11 +0,0 @@
# Skip these tests when running fractional dEQP batches, as the AHB tests are expected
# to be handled separately in a non-fractional run within the deqp-runner suite.
dEQP-VK.api.external.memory.android_hardware_buffer.*
# Skip all WSI tests: the DEQP_ANDROID_EXE build used can't create native windows, as
# only APKs support window creation on Android.
dEQP-VK.image.swapchain_mutable.*
dEQP-VK.wsi.*
# These tests cause hangs and need to be skipped for now.
dEQP-VK.synchronization*

View File

@@ -0,0 +1,84 @@
version: 1
# Rules to match for a machine to qualify
target:
id: '{{ ci_runner_id }}'
timeouts:
first_console_activity: # This limits the time it can take to receive the first console log
minutes: {{ timeout_first_console_activity_minutes | default(0, true) }}
seconds: {{ timeout_first_console_activity_seconds | default(0, true) }}
retries: {{ timeout_first_console_activity_retries }}
console_activity: # Reset every time we receive a message from the logs
minutes: {{ timeout_console_activity_minutes | default(0, true) }}
seconds: {{ timeout_console_activity_seconds | default(0, true) }}
retries: {{ timeout_console_activity_retries }}
boot_cycle:
minutes: {{ timeout_boot_minutes | default(0, true) }}
seconds: {{ timeout_boot_seconds | default(0, true) }}
retries: {{ timeout_boot_retries }}
overall: # Maximum time the job can take, not overrideable by the "continue" deployment
minutes: {{ timeout_overall_minutes | default(0, true) }}
seconds: {{ timeout_overall_seconds | default(0, true) }}
retries: 0
# no retries possible here
console_patterns:
session_end:
regex: >-
{{ session_end_regex }}
{% if session_reboot_regex %}
session_reboot:
regex: >-
{{ session_reboot_regex }}
{% endif %}
job_success:
regex: >-
{{ job_success_regex }}
{% if job_warn_regex %}
job_warn:
regex: >-
{{ job_warn_regex }}
{% endif %}
# Environment to deploy
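# Note: markers written as {{ '{{' }} name }} survive this Jinja pass as literal
# {{ name }} placeholders, so they are expanded later by the CI-tron/B2C tooling
# rather than by generate_b2c.py (my reading of the double templating here).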
deployment:
# Initial boot
start:
storage:
http:
- path: "/b2c-extra-args"
data: >
b2c.pipefail b2c.poweroff_delay={{ poweroff_delay }}
b2c.minio="gateway,{{ '{{' }} minio_url }},{{ '{{' }} job_bucket_access_key }},{{ '{{' }} job_bucket_secret_key }}"
b2c.volume="{{ '{{' }} job_bucket }}-results,mirror=gateway/{{ '{{' }} job_bucket }},pull_on=pipeline_start,push_on=changes,overwrite{% for excl in job_volume_exclusions %},exclude={{ excl }}{% endfor %},remove,expiration=pipeline_end,preserve"
{% for volume in volumes %}
b2c.volume={{ volume }}
{% endfor %}
b2c.service="--privileged --tls-verify=false --pid=host docker://{{ '{{' }} fdo_proxy_registry }}/gfx-ci/ci-tron/telegraf:latest" b2c.hostname=dut-{{ '{{' }} machine.full_name }}
b2c.container="-v {{ '{{' }} job_bucket }}-results:{{ working_dir }} -w {{ working_dir }} {% for mount_volume in mount_volumes %} -v {{ mount_volume }}{% endfor %} --tls-verify=false docker://{{ local_container }} {{ container_cmd }}"
kernel:
url: '{{ kernel_url }}'
# NOTE: b2c.cache_device should not be here, but this works around
# a limitation of b2c which will be removed in the next release
cmdline: >
SALAD.machine_id={{ '{{' }} machine_id }}
console={{ '{{' }} local_tty_device }},115200
b2c.cache_device=auto b2c.ntp_peer=10.42.0.1
b2c.extra_args_url={{ '{{' }} job.http.url }}/b2c-extra-args
{% if kernel_cmdline_extras is defined %}
{{ kernel_cmdline_extras }}
{% endif %}
initramfs:
url: '{{ initramfs_url }}'
{% if dtb_url is defined %}
dtb:
url: '{{ dtb_url }}'
{% endif %}

55
.gitlab-ci/b2c/generate_b2c.py Executable file
View File

@@ -0,0 +1,55 @@
#!/usr/bin/env python3
# Copyright © 2022 Valve Corporation
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
from jinja2 import Environment, FileSystemLoader
from os import environ, path
# Pass all the environment variables prefixed by B2C_
values = {
key.removeprefix("B2C_").lower(): environ[key]
for key in environ if key.startswith("B2C_")
}
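# For example, an exported B2C_JOB_TEMPLATE=/path/to/b2c.yml.jinja2 (hypothetical
# value) becomes values['job_template'].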
env = Environment(loader=FileSystemLoader(path.dirname(values['job_template'])),
trim_blocks=True, lstrip_blocks=True)
template = env.get_template(path.basename(values['job_template']))
values['ci_job_id'] = environ['CI_JOB_ID']
values['ci_runner_id'] = environ['CI_RUNNER_ID']
values['job_volume_exclusions'] = [excl for excl in values['job_volume_exclusions'].split(",") if excl]
values['working_dir'] = environ['CI_PROJECT_DIR']
# Use the gateway's pull-through registry caches to reduce load on fd.o.
values['local_container'] = environ['IMAGE_UNDER_TEST']
values['local_container'] = values['local_container'].replace(
'registry.freedesktop.org',
'{{ fdo_proxy_registry }}'
)
if 'kernel_cmdline_extras' not in values:
values['kernel_cmdline_extras'] = ''
with open(path.splitext(path.basename(values['job_template']))[0], "w") as f:
f.write(template.render(values))

View File

@@ -5,8 +5,6 @@
# First stage: very basic setup to bring up network and /dev etc
/init-stage1.sh
export CURRENT_SECTION=dut_boot
# Second stage: run jobs
test $? -eq 0 && /init-stage2.sh

View File

@@ -0,0 +1,17 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must supply the PoE Interface to power down"
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must supply the PoE Switch host"
exit 1
fi
SNMP_KEY="1.3.6.1.4.1.9.9.402.1.2.1.1.1.$BM_POE_INTERFACE"
SNMP_OFF="i 4"
snmpset -v2c -r 3 -t 30 -cmesaci "$BM_POE_ADDRESS" "$SNMP_KEY" $SNMP_OFF

View File

@@ -0,0 +1,22 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must supply the PoE Interface to power up"
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must supply the PoE Switch host"
exit 1
fi
set -ex
SNMP_KEY="1.3.6.1.4.1.9.9.402.1.2.1.1.1.$BM_POE_INTERFACE"
SNMP_ON="i 1"
SNMP_OFF="i 4"
snmpset -v2c -r 3 -t 10 -cmesaci "$BM_POE_ADDRESS" "$SNMP_KEY" $SNMP_OFF
sleep 3s
snmpset -v2c -r 3 -t 10 -cmesaci "$BM_POE_ADDRESS" "$SNMP_KEY" $SNMP_ON

View File

@@ -0,0 +1,124 @@
#!/bin/bash
# shellcheck disable=SC1091 # The relative paths in this file only become valid at runtime.
# shellcheck disable=SC2034
# shellcheck disable=SC2086 # we want word splitting
# Boot script for Chrome OS devices attached to a servo debug connector, using
# NFS and TFTP to boot.
# We're run from the root of the repo; make helper vars for our paths
BM=$CI_PROJECT_DIR/install/bare-metal
CI_COMMON=$CI_PROJECT_DIR/install/common
CI_INSTALL=$CI_PROJECT_DIR/install
# Runner config checks
if [ -z "$BM_SERIAL" ]; then
echo "Must set BM_SERIAL in your gitlab-runner config.toml [[runners]] environment"
echo "This is the CPU serial device."
exit 1
fi
if [ -z "$BM_SERIAL_EC" ]; then
echo "Must set BM_SERIAL in your gitlab-runner config.toml [[runners]] environment"
echo "This is the EC serial device for controlling board power"
exit 1
fi
if [ ! -d /nfs ]; then
echo "NFS rootfs directory needs to be mounted at /nfs by the gitlab runner"
exit 1
fi
if [ ! -d /tftp ]; then
echo "TFTP directory for this board needs to be mounted at /tftp by the gitlab runner"
exit 1
fi
# job config checks
if [ -z "$BM_KERNEL" ]; then
echo "Must set BM_KERNEL to your board's kernel FIT image"
exit 1
fi
if [ -z "$BM_ROOTFS" ]; then
echo "Must set BM_ROOTFS to your board's rootfs directory in the job's variables"
exit 1
fi
if [ -z "$BM_CMDLINE" ]; then
echo "Must set BM_CMDLINE to your board's kernel command line arguments"
exit 1
fi
set -ex
# Clear out any previous run's artifacts.
rm -rf results/
mkdir -p results
# Create the rootfs in the NFS directory. rm to make sure it's in a pristine
# state, since it's volume-mounted on the host.
rsync -a --delete $BM_ROOTFS/ /nfs/
mkdir -p /nfs/results
. $BM/rootfs-setup.sh /nfs
# Put the kernel/dtb image and the boot command line in the tftp directory for
# the board to find. For normal Mesa development, we build the kernel and
# store it in the docker container that this script is running in.
#
# However, container builds are expensive, so when you're hacking on the
# kernel, it's nice to be able to skip the half-hour container build, plus
# moving that container to the runner. So, if BM_KERNEL is a URL, fetch it
# instead of looking in the container. Note that the kernel build should be
# the output of:
#
# make Image.lzma
#
# mkimage \
# -A arm64 \
# -f auto \
# -C lzma \
# -d arch/arm64/boot/Image.lzma \
# -b arch/arm64/boot/dts/qcom/sdm845-cheza-r3.dtb \
# cheza-image.img
rm -rf /tftp/*
if echo "$BM_KERNEL" | grep -q http; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
$BM_KERNEL -o /tftp/vmlinuz
elif [ -n "${FORCE_KERNEL_TAG}" ]; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${BM_KERNEL}" -o /tftp/vmlinuz
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/modules.tar.zst" -o modules.tar.zst
tar --keep-directory-symlink --zstd -xf modules.tar.zst -C "/nfs/"
rm modules.tar.zst &
else
cp /baremetal-files/"$BM_KERNEL" /tftp/vmlinuz
fi
echo "$BM_CMDLINE" > /tftp/cmdline
set +e
STRUCTURED_LOG_FILE=job_detail.json
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update dut_job_type "${DEVICE_TYPE}"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update farm "${FARM}"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --create-dut-job dut_name "${CI_RUNNER_DESCRIPTION}"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update-dut-time submit "${CI_JOB_STARTED_AT}"
python3 $BM/cros_servo_run.py \
--cpu $BM_SERIAL \
--ec $BM_SERIAL_EC \
--test-timeout ${TEST_PHASE_TIMEOUT:-20}
ret=$?
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close-dut-job
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close
set -e
# Bring artifacts back from the NFS dir to the build dir where gitlab-runner
# will look for them.
cp -Rp /nfs/results/. results/
if [ -f "${STRUCTURED_LOG_FILE}" ]; then
cp -p ${STRUCTURED_LOG_FILE} results/
echo "Structured log file is available at https://${CI_PROJECT_ROOT_NAMESPACE}.pages.freedesktop.org/-/${CI_PROJECT_NAME}/-/jobs/${CI_JOB_ID}/artifacts/results/${STRUCTURED_LOG_FILE}"
fi
exit $ret

View File

@@ -0,0 +1,168 @@
#!/usr/bin/env python3
#
# Copyright © 2020 Google LLC
# SPDX-License-Identifier: MIT
import argparse
import re
import sys
from custom_logger import CustomLogger
from serial_buffer import SerialBuffer
class CrosServoRun:
def __init__(self, cpu, ec, test_timeout, logger):
self.cpu_ser = SerialBuffer(
cpu, "results/serial.txt", "R SERIAL-CPU> ")
# Merge the EC serial into the cpu_ser's line stream so that we can
# effectively poll on both at the same time and not have to worry about
# handling the two serial streams separately.
self.ec_ser = SerialBuffer(
ec, "results/serial-ec.txt", "R SERIAL-EC> ", line_queue=self.cpu_ser.line_queue)
self.test_timeout = test_timeout
self.logger = logger
def close(self):
self.ec_ser.close()
self.cpu_ser.close()
def ec_write(self, s):
print("W SERIAL-EC> %s" % s)
self.ec_ser.serial.write(s.encode())
def cpu_write(self, s):
print("W SERIAL-CPU> %s" % s)
self.cpu_ser.serial.write(s.encode())
def print_error(self, message):
RED = '\033[0;31m'
NO_COLOR = '\033[0m'
print(RED + message + NO_COLOR)
self.logger.update_status_fail(message)
def run(self):
# Flush any partial commands in the EC's prompt, then ask for a reboot.
self.ec_write("\n")
self.ec_write("reboot\n")
bootloader_done = False
self.logger.create_job_phase("boot")
tftp_failures = 0
# This is emitted right when the bootloader pauses to check for input.
# Emit a ^N character to request network boot, because we don't have a
# direct-to-netboot firmware on cheza.
for line in self.cpu_ser.lines(timeout=120, phase="bootloader"):
if re.search("load_archive: loading locale_en.bin", line):
self.cpu_write("\016")
bootloader_done = True
break
# The Cheza firmware seems to occasionally get stuck looping in
# this error state during TFTP booting, possibly based on amount of
# network traffic around it, but it'll usually recover after a
# reboot. Currently mostly visible on google-freedreno-cheza-14.
if re.search("R8152: Bulk read error 0xffffffbf", line):
tftp_failures += 1
if tftp_failures >= 10:
self.print_error(
"Detected intermittent tftp failure, restarting run.")
return 1
# If the board has netboot firmware and we made it to booting the
# kernel, proceed to the test run.
if re.search("Booting Linux", line):
bootloader_done = True
break
# The Cheza boards have issues with failing to bring up power to
# the system sometimes, possibly dependent on ambient temperature
# in the farm.
if re.search("POWER_GOOD not seen in time", line):
self.print_error(
"Detected intermittent poweron failure, abandoning run.")
return 1
if not bootloader_done:
self.print_error("Failed to make it through bootloader, abandoning run.")
return 1
self.logger.create_job_phase("test")
for line in self.cpu_ser.lines(timeout=self.test_timeout, phase="test"):
if re.search("---. end Kernel panic", line):
return 1
# There are very infrequent bus errors during power management transitions
# on cheza, which we don't expect to be the case on future boards.
if re.search("Kernel panic - not syncing: Asynchronous SError Interrupt", line):
self.print_error(
"Detected cheza power management bus error, abandoning run.")
return 1
# If the network device dies, it's probably not graphics's fault, just try again.
if re.search("NETDEV WATCHDOG", line):
self.print_error(
"Detected network device failure, abandoning run.")
return 1
# These HFI response errors started appearing with the introduction
# of piglit runs. CosmicPenguin says:
#
# "message ID 106 isn't a thing, so likely what happened is that we
# got confused when parsing the HFI queue. If it happened on only
# one run, then memory corruption could be a possible clue"
#
# Given that it seems to trigger randomly near a GPU fault and then
# break many tests after that, just restart the whole run.
if re.search("a6xx_hfi_send_msg.*Unexpected message id .* on the response queue", line):
self.print_error(
"Detected cheza power management bus error, abandoning run.")
return 1
if re.search("coreboot.*bootblock starting", line):
self.print_error(
"Detected spontaneous reboot, abandoning run.")
return 1
if re.search("arm-smmu 5040000.iommu: TLB sync timed out -- SMMU may be deadlocked", line):
self.print_error("Detected cheza MMU fail, abandoning run.")
return 1
result = re.search("hwci: mesa: (\S*)", line)
if result:
if result.group(1) == "pass":
self.logger.update_dut_job("status", "pass")
return 0
else:
self.logger.update_status_fail("test fail")
return 1
self.print_error(
"Reached the end of the CPU serial log without finding a result")
return 1
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--cpu', type=str,
help='CPU Serial device', required=True)
parser.add_argument(
'--ec', type=str, help='EC Serial device', required=True)
parser.add_argument(
'--test-timeout', type=int, help='Test phase timeout (minutes)', required=True)
args = parser.parse_args()
logger = CustomLogger("job_detail.json")
logger.update_dut_time("start", None)
servo = CrosServoRun(args.cpu, args.ec, args.test_timeout * 60, logger)
retval = servo.run()
# power down the CPU on the device
servo.ec_write("power off\n")
logger.update_dut_time("end", None)
servo.close()
sys.exit(retval)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,10 @@
#!/bin/bash
relay=$1
if [ -z "$relay" ]; then
echo "Must supply a relay arg"
exit 1
fi
"$CI_PROJECT_DIR"/install/bare-metal/eth008-power-relay.py "$ETH_HOST" "$ETH_PORT" off "$relay"

View File

@@ -0,0 +1,28 @@
#!/usr/bin/python3
import sys
import socket
host = sys.argv[1]
port = sys.argv[2]
mode = sys.argv[3]
relay = sys.argv[4]
msg = None
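# ETH008-style relay boards take a 3-byte TCP command: an on/off opcode, the
# relay number, and a pulse time (0 = switch permanently). The opcodes below
# follow that scheme as far as I know; check the board's manual before reuse.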
if mode == "on":
msg = b'\x20'
else:
msg = b'\x21'
msg += int(relay).to_bytes(1, 'big')
msg += b'\x00'
c = socket.create_connection((host, int(port)))
c.sendall(msg)
data = c.recv(1)
c.close()
if data == b'\x01':
print('Command failed')
sys.exit(1)

View File

@@ -0,0 +1,12 @@
#!/bin/bash
relay=$1
if [ -z "$relay" ]; then
echo "Must supply a relay arg"
exit 1
fi
"$CI_PROJECT_DIR"/install/bare-metal/eth008-power-relay.py "$ETH_HOST" "$ETH_PORT" off "$relay"
sleep 5
"$CI_PROJECT_DIR"/install/bare-metal/eth008-power-relay.py "$ETH_HOST" "$ETH_PORT" on "$relay"

View File

@@ -0,0 +1,31 @@
#!/bin/bash
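# Block until $FILE contains one of the strings given with -f/-e; exit non-zero
# if the match was one of the -e error strings. Illustrative invocation (the
# script and file names are made up): expect-output.sh results/serial.txt -f 'login:' -e 'Kernel panic'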
set -e
STRINGS=$(mktemp)
ERRORS=$(mktemp)
trap 'rm $STRINGS; rm $ERRORS;' EXIT
FILE=$1
shift 1
while getopts "f:e:" opt; do
case $opt in
f) echo "$OPTARG" >> "$STRINGS";;
e) echo "$OPTARG" >> "$STRINGS" ; echo "$OPTARG" >> "$ERRORS";;
*) exit
esac
done
shift $((OPTIND -1))
echo "Waiting for $FILE to say one of following strings"
cat "$STRINGS"
while ! grep -E -wf "$STRINGS" "$FILE"; do
sleep 2
done
if grep -E -wf "$ERRORS" "$FILE"; then
exit 1
fi

167
.gitlab-ci/bare-metal/fastboot.sh Executable file
View File

@@ -0,0 +1,167 @@
#!/bin/bash
# shellcheck disable=SC1091 # The relative paths in this file only become valid at runtime.
# shellcheck disable=SC2034
# shellcheck disable=SC2086 # we want word splitting
. "$SCRIPTS_DIR"/setup-test-env.sh
BM=$CI_PROJECT_DIR/install/bare-metal
CI_COMMON=$CI_PROJECT_DIR/install/common
if [ -z "$BM_SERIAL" ] && [ -z "$BM_SERIAL_SCRIPT" ]; then
echo "Must set BM_SERIAL OR BM_SERIAL_SCRIPT in your gitlab-runner config.toml [[runners]] environment"
echo "BM_SERIAL:"
echo " This is the serial device to talk to for waiting for fastboot to be ready and logging from the kernel."
echo "BM_SERIAL_SCRIPT:"
echo " This is a shell script to talk to for waiting for fastboot to be ready and logging from the kernel."
exit 1
fi
if [ -z "$BM_POWERUP" ]; then
echo "Must set BM_POWERUP in your gitlab-runner config.toml [[runners]] environment"
echo "This is a shell script that should reset the device and begin its boot sequence"
echo "such that it pauses at fastboot."
exit 1
fi
if [ -z "$BM_POWERDOWN" ]; then
echo "Must set BM_POWERDOWN in your gitlab-runner config.toml [[runners]] environment"
echo "This is a shell script that should power off the device."
exit 1
fi
if [ -z "$BM_FASTBOOT_SERIAL" ]; then
echo "Must set BM_FASTBOOT_SERIAL in your gitlab-runner config.toml [[runners]] environment"
echo "This must be the a stable-across-resets fastboot serial number."
exit 1
fi
if [ -z "$BM_KERNEL" ]; then
echo "Must set BM_KERNEL to your board's kernel vmlinuz or Image.gz in the job's variables:"
exit 1
fi
if [ -z "$BM_DTB" ]; then
echo "Must set BM_DTB to your board's DTB file in the job's variables:"
exit 1
fi
if [ -z "$BM_ROOTFS" ]; then
echo "Must set BM_ROOTFS to your board's rootfs directory in the job's variables:"
exit 1
fi
if echo $BM_CMDLINE | grep -q "root=/dev/nfs"; then
BM_FASTBOOT_NFSROOT=1
fi
set -ex
# Clear out any previous run's artifacts.
rm -rf results/
mkdir -p results/
if [ -n "$BM_FASTBOOT_NFSROOT" ]; then
# Create the rootfs in the NFS directory. rm to make sure it's in a pristine
# state, since it's volume-mounted on the host.
rsync -a --delete $BM_ROOTFS/ /nfs/
mkdir -p /nfs/results
. $BM/rootfs-setup.sh /nfs
# Root on NFS, no need for an initramfs.
rm -f rootfs.cpio.gz
touch rootfs.cpio
gzip rootfs.cpio
else
# Create the rootfs in a temp dir
rsync -a --delete $BM_ROOTFS/ rootfs/
. $BM/rootfs-setup.sh rootfs
# Finally, pack it up into a cpio rootfs. Skip the vulkan CTS since none of
# these devices use it and it would take up space in the initrd.
if [ -n "$PIGLIT_PROFILES" ]; then
EXCLUDE_FILTER="deqp|arb_gpu_shader5|arb_gpu_shader_fp64|arb_gpu_shader_int64|glsl-4.[0123456]0|arb_tessellation_shader"
else
EXCLUDE_FILTER="piglit|python"
fi
pushd rootfs
find -H . | \
grep -E -v "external/(openglcts|vulkancts|amber|glslang|spirv-tools)" |
grep -E -v "traces-db|apitrace|renderdoc" | \
grep -E -v $EXCLUDE_FILTER | \
cpio -H newc -o | \
xz --check=crc32 -T4 - > $CI_PROJECT_DIR/rootfs.cpio.gz
popd
fi
if echo "$BM_KERNEL $BM_DTB" | grep -q http; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"$BM_KERNEL" -o kernel
# FIXME: modules should be supplied too
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"$BM_DTB" -o dtb
cat kernel dtb > Image.gz-dtb
elif [ -n "${FORCE_KERNEL_TAG}" ]; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${BM_KERNEL}" -o kernel
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/modules.tar.zst" -o modules.tar.zst
if [ -n "$BM_DTB" ]; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${BM_DTB}.dtb" -o dtb
fi
cat kernel dtb > Image.gz-dtb || echo "No DTB available, using pure kernel."
rm kernel
tar --keep-directory-symlink --zstd -xf modules.tar.zst -C "$BM_ROOTFS/"
rm modules.tar.zst &
else
cat /baremetal-files/"$BM_KERNEL" /baremetal-files/"$BM_DTB".dtb > Image.gz-dtb
cp /baremetal-files/"$BM_DTB".dtb dtb
fi
export PATH=$BM:$PATH
mkdir -p artifacts
mkbootimg.py \
--kernel Image.gz-dtb \
--ramdisk rootfs.cpio.gz \
--dtb dtb \
--cmdline "$BM_CMDLINE" \
$BM_MKBOOT_PARAMS \
--header_version 2 \
-o artifacts/fastboot.img
rm Image.gz-dtb dtb
# Start background command for talking to serial if we have one.
if [ -n "$BM_SERIAL_SCRIPT" ]; then
$BM_SERIAL_SCRIPT > results/serial-output.txt &
while [ ! -e results/serial-output.txt ]; do
sleep 1
done
fi
set +e
$BM/fastboot_run.py \
--dev="$BM_SERIAL" \
--test-timeout ${TEST_PHASE_TIMEOUT:-20} \
--fbserial="$BM_FASTBOOT_SERIAL" \
--powerup="$BM_POWERUP" \
--powerdown="$BM_POWERDOWN"
ret=$?
set -e
if [ -n "$BM_FASTBOOT_NFSROOT" ]; then
# Bring artifacts back from the NFS dir to the build dir where gitlab-runner
# will look for them.
cp -Rp /nfs/results/. results/
fi
exit $ret

View File

@@ -0,0 +1,159 @@
#!/usr/bin/env python3
#
# Copyright © 2020 Google LLC
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import argparse
import subprocess
import re
from serial_buffer import SerialBuffer
import sys
import threading
class FastbootRun:
def __init__(self, args, test_timeout):
self.powerup = args.powerup
self.ser = SerialBuffer(
args.dev, "results/serial-output.txt", "R SERIAL> ")
self.fastboot = "fastboot boot -s {ser} artifacts/fastboot.img".format(
ser=args.fbserial)
self.test_timeout = test_timeout
def close(self):
self.ser.close()
def print_error(self, message):
RED = '\033[0;31m'
NO_COLOR = '\033[0m'
print(RED + message + NO_COLOR)
def logged_system(self, cmd, timeout=60):
print("Running '{}'".format(cmd))
try:
return subprocess.call(cmd, shell=True, timeout=timeout)
except subprocess.TimeoutExpired:
self.print_error("timeout, abandoning run.")
return 1
def run(self):
if ret := self.logged_system(self.powerup):
return ret
fastboot_ready = False
for line in self.ser.lines(timeout=2 * 60, phase="bootloader"):
if re.search("[Ff]astboot: [Pp]rocessing commands", line) or \
re.search("Listening for fastboot command on", line):
fastboot_ready = True
break
if re.search("data abort", line):
self.print_error(
"Detected crash during boot, abandoning run.")
return 1
if not fastboot_ready:
self.print_error(
"Failed to get to fastboot prompt, abandoning run.")
return 1
if ret := self.logged_system(self.fastboot):
return ret
print_more_lines = -1
for line in self.ser.lines(timeout=self.test_timeout, phase="test"):
if print_more_lines == 0:
return 1
if print_more_lines > 0:
print_more_lines -= 1
if re.search("---. end Kernel panic", line):
return 1
# The db820c boards intermittently reboot. Just restart the run
# if we see a reboot after we got past fastboot.
if re.search("PON REASON", line):
self.print_error(
"Detected spontaneous reboot, abandoning run.")
return 1
# db820c sometimes wedges around iommu fault recovery
if re.search("watchdog: BUG: soft lockup - CPU.* stuck", line):
self.print_error(
"Detected kernel soft lockup, abandoning run.")
return 1
# If the network device dies, it's probably not graphics's fault, just try again.
if re.search("NETDEV WATCHDOG", line):
self.print_error(
"Detected network device failure, abandoning run.")
return 1
# A3xx recovery doesn't quite work. Sometimes the GPU will get
# wedged and recovery will fail (because power can't be reset?)
# This assumes that the jobs are sufficiently well-tested that GPU
# hangs aren't always triggered, so just try again. But print some
# more lines first so that we get better information on the cause
# of the hang. Once a hang happens, it's pretty chatty.
if "[drm:adreno_recover] *ERROR* gpu hw init failed: -22" in line:
self.print_error(
"Detected GPU hang, abandoning run.")
if print_more_lines == -1:
print_more_lines = 30
result = re.search("hwci: mesa: (\S*)", line)
if result:
if result.group(1) == "pass":
return 0
else:
return 1
self.print_error(
"Reached the end of the CPU serial log without finding a result, abandoning run.")
return 1
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
'--dev', type=str, help='Serial device (otherwise reading from serial-output.txt)')
parser.add_argument('--powerup', type=str,
help='shell command for rebooting', required=True)
parser.add_argument('--powerdown', type=str,
help='shell command for powering off', required=True)
parser.add_argument('--fbserial', type=str,
help='fastboot serial number of the board', required=True)
parser.add_argument('--test-timeout', type=int,
help='Test phase timeout (minutes)', required=True)
args = parser.parse_args()
fastboot = FastbootRun(args, args.test_timeout * 60)
retval = fastboot.run()
fastboot.close()
fastboot.logged_system(args.powerdown)
sys.exit(retval)
if __name__ == '__main__':
main()

View File

@@ -1,129 +0,0 @@
.baremetal-test:
extends:
- .test
# Cancel job if a newer commit is pushed to the same branch
interruptible: true
before_script:
- !reference [.download_s3, before_script]
variables:
BM_ROOTFS: /rootfs-${DEBIAN_ARCH}
artifacts:
when: always
name: "${CI_PROJECT_NAME}_${CI_JOB_NAME_SLUG}"
paths:
- results/
- serial*.txt
exclude:
- results/*.shader_cache
reports:
junit: results/junit.xml
# ARM testing of bare-metal boards attached to an x86 gitlab-runner system
.baremetal-test-arm32-gl:
extends:
- .baremetal-test
- .use-debian/baremetal_arm32_test-gl
variables:
DEBIAN_ARCH: armhf
S3_ARTIFACT_NAME: mesa-arm32-default-debugoptimized
needs:
- job: debian/baremetal_arm32_test-gl
optional: true
- job: debian-arm32
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
# ARM64 testing of bare-metal boards attached to an x86 gitlab-runner system
.baremetal-test-arm64-gl:
extends:
- .baremetal-test
- .use-debian/baremetal_arm64_test-gl
variables:
DEBIAN_ARCH: arm64
S3_ARTIFACT_NAME: mesa-arm64-default-debugoptimized
needs:
- job: debian/baremetal_arm64_test-gl
optional: true
- job: debian-arm64
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
# ARM64 testing of bare-metal boards attached to an x86 gitlab-runner system
.baremetal-test-arm64-vk:
extends:
- .baremetal-test
- .use-debian/baremetal_arm64_test-vk
variables:
DEBIAN_ARCH: arm64
S3_ARTIFACT_NAME: mesa-arm64-default-debugoptimized
needs:
- job: debian/baremetal_arm64_test-vk
optional: true
- job: debian-arm64
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
# ARM32/64 testing of bare-metal boards attached to an x86 gitlab-runner system, using an asan mesa build
.baremetal-arm32-asan-test-gl:
variables:
S3_ARTIFACT_NAME: mesa-arm32-asan-debugoptimized
DEQP_FORCE_ASAN: 1
needs:
- job: debian/baremetal_arm32_test-gl
optional: true
- job: debian-arm32-asan
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
.baremetal-arm64-asan-test-gl:
variables:
S3_ARTIFACT_NAME: mesa-arm64-asan-debugoptimized
DEQP_FORCE_ASAN: 1
needs:
- job: debian/baremetal_arm64_test-gl
optional: true
- job: debian-arm64-asan
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
.baremetal-arm64-asan-test-vk:
variables:
S3_ARTIFACT_NAME: mesa-arm64-asan-debugoptimized
DEQP_FORCE_ASAN: 1
needs:
- job: debian/baremetal_arm64_test-vk
optional: true
- job: debian-arm64-asan
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
.baremetal-arm64-ubsan-test-gl:
extends:
- .baremetal-test
- .use-debian/baremetal_arm64_test-gl
variables:
S3_ARTIFACT_NAME: mesa-arm64-ubsan-debugoptimized
needs:
- job: debian/baremetal_arm64_test-gl
optional: true
- job: debian-arm64-ubsan
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
.baremetal-arm64-ubsan-test-vk:
extends:
- .baremetal-test
- .use-debian/baremetal_arm64_test-vk
variables:
S3_ARTIFACT_NAME: mesa-arm64-ubsan-debugoptimized
needs:
- job: debian/baremetal_arm64_test-vk
optional: true
- job: debian-arm64-ubsan
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
.baremetal-deqp-test:
variables:
HWCI_TEST_SCRIPT: "/install/deqp-runner.sh"
FDO_CI_CONCURRENT: 0 # Default to number of CPUs

View File

@@ -0,0 +1,10 @@
#!/bin/bash
relay=$1
if [ -z "$relay" ]; then
echo "Must supply a relay arg"
exit 1
fi
"$CI_PROJECT_DIR"/install/bare-metal/google-power-relay.py off "$relay"

View File

@@ -0,0 +1,19 @@
#!/usr/bin/python3
import sys
import serial
mode = sys.argv[1]
relay = sys.argv[2]
# our relays are "off" means "board is powered".
mode_swap = {
"on": "off",
"off": "on",
}
mode = mode_swap[mode]
ser = serial.Serial('/dev/ttyACM0', 115200, timeout=2)
command = "relay {} {}\n\r".format(mode, relay)
ser.write(command.encode())
ser.close()

View File

@@ -0,0 +1,12 @@
#!/bin/bash
relay=$1
if [ -z "$relay" ]; then
echo "Must supply a relay arg"
exit 1
fi
"$CI_PROJECT_DIR"/install/bare-metal/google-power-relay.py off "$relay"
sleep 5
"$CI_PROJECT_DIR"/install/bare-metal/google-power-relay.py on "$relay"

View File

@@ -0,0 +1,569 @@
#!/usr/bin/env python3
#
# Copyright 2015, The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Creates the boot image."""
from argparse import (ArgumentParser, ArgumentTypeError,
FileType, RawDescriptionHelpFormatter)
from hashlib import sha1
from os import fstat
from struct import pack
import array
import collections
import os
import re
import subprocess
import tempfile
# Constant and structure definition is in
# system/tools/mkbootimg/include/bootimg/bootimg.h
BOOT_MAGIC = 'ANDROID!'
BOOT_MAGIC_SIZE = 8
BOOT_NAME_SIZE = 16
BOOT_ARGS_SIZE = 512
BOOT_EXTRA_ARGS_SIZE = 1024
BOOT_IMAGE_HEADER_V1_SIZE = 1648
BOOT_IMAGE_HEADER_V2_SIZE = 1660
BOOT_IMAGE_HEADER_V3_SIZE = 1580
BOOT_IMAGE_HEADER_V3_PAGESIZE = 4096
BOOT_IMAGE_HEADER_V4_SIZE = 1584
BOOT_IMAGE_V4_SIGNATURE_SIZE = 4096
VENDOR_BOOT_MAGIC = 'VNDRBOOT'
VENDOR_BOOT_MAGIC_SIZE = 8
VENDOR_BOOT_NAME_SIZE = BOOT_NAME_SIZE
VENDOR_BOOT_ARGS_SIZE = 2048
VENDOR_BOOT_IMAGE_HEADER_V3_SIZE = 2112
VENDOR_BOOT_IMAGE_HEADER_V4_SIZE = 2128
VENDOR_RAMDISK_TYPE_NONE = 0
VENDOR_RAMDISK_TYPE_PLATFORM = 1
VENDOR_RAMDISK_TYPE_RECOVERY = 2
VENDOR_RAMDISK_TYPE_DLKM = 3
VENDOR_RAMDISK_NAME_SIZE = 32
VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE = 16
VENDOR_RAMDISK_TABLE_ENTRY_V4_SIZE = 108
# Names with special meaning that mustn't be specified in --ramdisk_name.
VENDOR_RAMDISK_NAME_BLOCKLIST = {b'default'}
PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT = '--vendor_ramdisk_fragment'
def filesize(f):
if f is None:
return 0
try:
return fstat(f.fileno()).st_size
except OSError:
return 0
def update_sha(sha, f):
if f:
sha.update(f.read())
f.seek(0)
sha.update(pack('I', filesize(f)))
else:
sha.update(pack('I', 0))
def pad_file(f, padding):
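# Pad the file out to the next multiple of `padding` with zero bytes; the bit
# trick below assumes padding is a power of two (e.g. tell() == 5000 with
# padding == 4096 writes 3192 bytes of padding).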
pad = (padding - (f.tell() & (padding - 1))) & (padding - 1)
f.write(pack(str(pad) + 'x'))
def get_number_of_pages(image_size, page_size):
"""calculates the number of pages required for the image"""
return (image_size + page_size - 1) // page_size
def get_recovery_dtbo_offset(args):
"""calculates the offset of recovery_dtbo image in the boot image"""
num_header_pages = 1 # header occupies a page
num_kernel_pages = get_number_of_pages(filesize(args.kernel), args.pagesize)
num_ramdisk_pages = get_number_of_pages(filesize(args.ramdisk),
args.pagesize)
num_second_pages = get_number_of_pages(filesize(args.second), args.pagesize)
dtbo_offset = args.pagesize * (num_header_pages + num_kernel_pages +
num_ramdisk_pages + num_second_pages)
return dtbo_offset
def write_header_v3_and_above(args):
if args.header_version > 3:
boot_header_size = BOOT_IMAGE_HEADER_V4_SIZE
else:
boot_header_size = BOOT_IMAGE_HEADER_V3_SIZE
args.output.write(pack(f'{BOOT_MAGIC_SIZE}s', BOOT_MAGIC.encode()))
# kernel size in bytes
args.output.write(pack('I', filesize(args.kernel)))
# ramdisk size in bytes
args.output.write(pack('I', filesize(args.ramdisk)))
# os version and patch level
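# (bits 31..11 carry the packed A.B.C os_version, 7 bits per field; bits 10..0
# carry the packed YYYY-MM patch level, as laid out in bootimg.h)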
args.output.write(pack('I', (args.os_version << 11) | args.os_patch_level))
args.output.write(pack('I', boot_header_size))
# reserved
args.output.write(pack('4I', 0, 0, 0, 0))
# version of boot image header
args.output.write(pack('I', args.header_version))
args.output.write(pack(f'{BOOT_ARGS_SIZE + BOOT_EXTRA_ARGS_SIZE}s',
args.cmdline))
if args.header_version >= 4:
# The signature used to verify boot image v4.
args.output.write(pack('I', BOOT_IMAGE_V4_SIGNATURE_SIZE))
pad_file(args.output, BOOT_IMAGE_HEADER_V3_PAGESIZE)
def write_vendor_boot_header(args):
if filesize(args.dtb) == 0:
raise ValueError('DTB image must not be empty.')
if args.header_version > 3:
vendor_ramdisk_size = args.vendor_ramdisk_total_size
vendor_boot_header_size = VENDOR_BOOT_IMAGE_HEADER_V4_SIZE
else:
vendor_ramdisk_size = filesize(args.vendor_ramdisk)
vendor_boot_header_size = VENDOR_BOOT_IMAGE_HEADER_V3_SIZE
args.vendor_boot.write(pack(f'{VENDOR_BOOT_MAGIC_SIZE}s',
VENDOR_BOOT_MAGIC.encode()))
# version of boot image header
args.vendor_boot.write(pack('I', args.header_version))
# flash page size
args.vendor_boot.write(pack('I', args.pagesize))
# kernel physical load address
args.vendor_boot.write(pack('I', args.base + args.kernel_offset))
# ramdisk physical load address
args.vendor_boot.write(pack('I', args.base + args.ramdisk_offset))
# ramdisk size in bytes
args.vendor_boot.write(pack('I', vendor_ramdisk_size))
args.vendor_boot.write(pack(f'{VENDOR_BOOT_ARGS_SIZE}s',
args.vendor_cmdline))
# kernel tags physical load address
args.vendor_boot.write(pack('I', args.base + args.tags_offset))
# asciiz product name
args.vendor_boot.write(pack(f'{VENDOR_BOOT_NAME_SIZE}s', args.board))
# header size in bytes
args.vendor_boot.write(pack('I', vendor_boot_header_size))
# dtb size in bytes
args.vendor_boot.write(pack('I', filesize(args.dtb)))
# dtb physical load address
args.vendor_boot.write(pack('Q', args.base + args.dtb_offset))
if args.header_version > 3:
vendor_ramdisk_table_size = (args.vendor_ramdisk_table_entry_num *
VENDOR_RAMDISK_TABLE_ENTRY_V4_SIZE)
# vendor ramdisk table size in bytes
args.vendor_boot.write(pack('I', vendor_ramdisk_table_size))
# number of vendor ramdisk table entries
args.vendor_boot.write(pack('I', args.vendor_ramdisk_table_entry_num))
# vendor ramdisk table entry size in bytes
args.vendor_boot.write(pack('I', VENDOR_RAMDISK_TABLE_ENTRY_V4_SIZE))
# bootconfig section size in bytes
args.vendor_boot.write(pack('I', filesize(args.vendor_bootconfig)))
pad_file(args.vendor_boot, args.pagesize)
def write_header(args):
if args.header_version > 4:
raise ValueError(
f'Boot header version {args.header_version} not supported')
if args.header_version in {3, 4}:
return write_header_v3_and_above(args)
ramdisk_load_address = ((args.base + args.ramdisk_offset)
if filesize(args.ramdisk) > 0 else 0)
second_load_address = ((args.base + args.second_offset)
if filesize(args.second) > 0 else 0)
args.output.write(pack(f'{BOOT_MAGIC_SIZE}s', BOOT_MAGIC.encode()))
# kernel size in bytes
args.output.write(pack('I', filesize(args.kernel)))
# kernel physical load address
args.output.write(pack('I', args.base + args.kernel_offset))
# ramdisk size in bytes
args.output.write(pack('I', filesize(args.ramdisk)))
# ramdisk physical load address
args.output.write(pack('I', ramdisk_load_address))
# second bootloader size in bytes
args.output.write(pack('I', filesize(args.second)))
# second bootloader physical load address
args.output.write(pack('I', second_load_address))
# kernel tags physical load address
args.output.write(pack('I', args.base + args.tags_offset))
# flash page size
args.output.write(pack('I', args.pagesize))
# version of boot image header
args.output.write(pack('I', args.header_version))
# os version and patch level
args.output.write(pack('I', (args.os_version << 11) | args.os_patch_level))
# asciiz product name
args.output.write(pack(f'{BOOT_NAME_SIZE}s', args.board))
args.output.write(pack(f'{BOOT_ARGS_SIZE}s', args.cmdline))
sha = sha1()
update_sha(sha, args.kernel)
update_sha(sha, args.ramdisk)
update_sha(sha, args.second)
if args.header_version > 0:
update_sha(sha, args.recovery_dtbo)
if args.header_version > 1:
update_sha(sha, args.dtb)
img_id = pack('32s', sha.digest())
args.output.write(img_id)
args.output.write(pack(f'{BOOT_EXTRA_ARGS_SIZE}s', args.extra_cmdline))
if args.header_version > 0:
if args.recovery_dtbo:
# recovery dtbo size in bytes
args.output.write(pack('I', filesize(args.recovery_dtbo)))
# recovery dtbo offset in the boot image
args.output.write(pack('Q', get_recovery_dtbo_offset(args)))
else:
# Set to zero if no recovery dtbo
args.output.write(pack('I', 0))
args.output.write(pack('Q', 0))
# Populate boot image header size for header versions 1 and 2.
if args.header_version == 1:
args.output.write(pack('I', BOOT_IMAGE_HEADER_V1_SIZE))
elif args.header_version == 2:
args.output.write(pack('I', BOOT_IMAGE_HEADER_V2_SIZE))
if args.header_version > 1:
if filesize(args.dtb) == 0:
raise ValueError('DTB image must not be empty.')
# dtb size in bytes
args.output.write(pack('I', filesize(args.dtb)))
# dtb physical load address
args.output.write(pack('Q', args.base + args.dtb_offset))
pad_file(args.output, args.pagesize)
return img_id
class AsciizBytes:
"""Parses a string and encodes it as an asciiz bytes object.
>>> AsciizBytes(bufsize=4)('foo')
b'foo\\x00'
>>> AsciizBytes(bufsize=4)('foob')
Traceback (most recent call last):
...
argparse.ArgumentTypeError: Encoded asciiz length exceeded: max 4, got 5
"""
def __init__(self, bufsize):
self.bufsize = bufsize
def __call__(self, arg):
arg_bytes = arg.encode() + b'\x00'
if len(arg_bytes) > self.bufsize:
raise ArgumentTypeError(
'Encoded asciiz length exceeded: '
f'max {self.bufsize}, got {len(arg_bytes)}')
return arg_bytes
class VendorRamdiskTableBuilder:
"""Vendor ramdisk table builder.
Attributes:
entries: A list of VendorRamdiskTableEntry namedtuple.
ramdisk_total_size: Total size in bytes of all ramdisks in the table.
"""
VendorRamdiskTableEntry = collections.namedtuple( # pylint: disable=invalid-name
'VendorRamdiskTableEntry',
['ramdisk_path', 'ramdisk_size', 'ramdisk_offset', 'ramdisk_type',
'ramdisk_name', 'board_id'])
def __init__(self):
self.entries = []
self.ramdisk_total_size = 0
self.ramdisk_names = set()
def add_entry(self, ramdisk_path, ramdisk_type, ramdisk_name, board_id):
# Strip any trailing null for simple comparison.
stripped_ramdisk_name = ramdisk_name.rstrip(b'\x00')
if stripped_ramdisk_name in VENDOR_RAMDISK_NAME_BLOCKLIST:
raise ValueError(
f'Banned vendor ramdisk name: {stripped_ramdisk_name}')
if stripped_ramdisk_name in self.ramdisk_names:
raise ValueError(
f'Duplicated vendor ramdisk name: {stripped_ramdisk_name}')
self.ramdisk_names.add(stripped_ramdisk_name)
if board_id is None:
board_id = array.array(
'I', [0] * VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE)
else:
board_id = array.array('I', board_id)
if len(board_id) != VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE:
raise ValueError('board_id size must be '
f'{VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE}')
with open(ramdisk_path, 'rb') as f:
ramdisk_size = filesize(f)
self.entries.append(self.VendorRamdiskTableEntry(
ramdisk_path, ramdisk_size, self.ramdisk_total_size, ramdisk_type,
ramdisk_name, board_id))
self.ramdisk_total_size += ramdisk_size
def write_ramdisks_padded(self, fout, alignment):
for entry in self.entries:
with open(entry.ramdisk_path, 'rb') as f:
fout.write(f.read())
pad_file(fout, alignment)
def write_entries_padded(self, fout, alignment):
for entry in self.entries:
fout.write(pack('I', entry.ramdisk_size))
fout.write(pack('I', entry.ramdisk_offset))
fout.write(pack('I', entry.ramdisk_type))
fout.write(pack(f'{VENDOR_RAMDISK_NAME_SIZE}s',
entry.ramdisk_name))
fout.write(entry.board_id)
pad_file(fout, alignment)
def write_padded_file(f_out, f_in, padding):
if f_in is None:
return
f_out.write(f_in.read())
pad_file(f_out, padding)
def parse_int(x):
return int(x, 0)
def parse_os_version(x):
match = re.search(r'^(\d{1,3})(?:\.(\d{1,3})(?:\.(\d{1,3}))?)?', x)
if match:
a = int(match.group(1))
b = c = 0
if match.lastindex >= 2:
b = int(match.group(2))
if match.lastindex == 3:
c = int(match.group(3))
# 7 bits allocated for each field
assert a < 128
assert b < 128
assert c < 128
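# e.g. "12.1.0" packs to (12 << 14) | (1 << 7) | 0 == 196736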
return (a << 14) | (b << 7) | c
return 0
def parse_os_patch_level(x):
match = re.search(r'^(\d{4})-(\d{2})(?:-(\d{2}))?', x)
if match:
y = int(match.group(1)) - 2000
m = int(match.group(2))
# 7 bits allocated for the year, 4 bits for the month
assert 0 <= y < 128
assert 0 < m <= 12
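# e.g. "2023-06" packs to (23 << 4) | 6 == 374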
return (y << 4) | m
return 0
def parse_vendor_ramdisk_type(x):
type_dict = {
'none': VENDOR_RAMDISK_TYPE_NONE,
'platform': VENDOR_RAMDISK_TYPE_PLATFORM,
'recovery': VENDOR_RAMDISK_TYPE_RECOVERY,
'dlkm': VENDOR_RAMDISK_TYPE_DLKM,
}
if x.lower() in type_dict:
return type_dict[x.lower()]
return parse_int(x)
def get_vendor_boot_v4_usage():
return """vendor boot version 4 arguments:
--ramdisk_type {none,platform,recovery,dlkm}
specify the type of the ramdisk
--ramdisk_name NAME
specify the name of the ramdisk
--board_id{0..15} NUMBER
specify the value of the board_id vector, defaults to 0
--vendor_ramdisk_fragment VENDOR_RAMDISK_FILE
path to the vendor ramdisk file
These options can be specified multiple times, where each vendor ramdisk
option group ends with a --vendor_ramdisk_fragment option.
Each option group appends an additional ramdisk to the vendor boot image.
"""
def parse_vendor_ramdisk_args(args, args_list):
"""Parses vendor ramdisk specific arguments.
Args:
args: An argparse.Namespace object. Parsed results are stored into this
object.
args_list: A list of argument strings to be parsed.
Returns:
A list argument strings that are not parsed by this method.
"""
parser = ArgumentParser(add_help=False)
parser.add_argument('--ramdisk_type', type=parse_vendor_ramdisk_type,
default=VENDOR_RAMDISK_TYPE_NONE)
parser.add_argument('--ramdisk_name',
type=AsciizBytes(bufsize=VENDOR_RAMDISK_NAME_SIZE),
required=True)
for i in range(VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE):
parser.add_argument(f'--board_id{i}', type=parse_int, default=0)
parser.add_argument(PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT, required=True)
unknown_args = []
vendor_ramdisk_table_builder = VendorRamdiskTableBuilder()
if args.vendor_ramdisk is not None:
vendor_ramdisk_table_builder.add_entry(
args.vendor_ramdisk.name, VENDOR_RAMDISK_TYPE_PLATFORM, b'', None)
while PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT in args_list:
idx = args_list.index(PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT) + 2
vendor_ramdisk_args = args_list[:idx]
args_list = args_list[idx:]
ramdisk_args, extra_args = parser.parse_known_args(vendor_ramdisk_args)
ramdisk_args_dict = vars(ramdisk_args)
unknown_args.extend(extra_args)
ramdisk_path = ramdisk_args.vendor_ramdisk_fragment
ramdisk_type = ramdisk_args.ramdisk_type
ramdisk_name = ramdisk_args.ramdisk_name
board_id = [ramdisk_args_dict[f'board_id{i}']
for i in range(VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE)]
vendor_ramdisk_table_builder.add_entry(ramdisk_path, ramdisk_type,
ramdisk_name, board_id)
if len(args_list) > 0:
unknown_args.extend(args_list)
args.vendor_ramdisk_total_size = (vendor_ramdisk_table_builder
.ramdisk_total_size)
args.vendor_ramdisk_table_entry_num = len(vendor_ramdisk_table_builder
.entries)
args.vendor_ramdisk_table_builder = vendor_ramdisk_table_builder
return unknown_args
def parse_cmdline():
version_parser = ArgumentParser(add_help=False)
version_parser.add_argument('--header_version', type=parse_int, default=0)
if version_parser.parse_known_args()[0].header_version < 3:
# For boot header v0 to v2, the kernel commandline field is split into
# two fields, cmdline and extra_cmdline. Both fields are asciiz strings,
# so we subtract one here to ensure the encoded string plus the
# null-terminator can fit in the buffer size.
cmdline_size = BOOT_ARGS_SIZE + BOOT_EXTRA_ARGS_SIZE - 1
else:
cmdline_size = BOOT_ARGS_SIZE + BOOT_EXTRA_ARGS_SIZE
parser = ArgumentParser(formatter_class=RawDescriptionHelpFormatter,
epilog=get_vendor_boot_v4_usage())
parser.add_argument('--kernel', type=FileType('rb'),
help='path to the kernel')
parser.add_argument('--ramdisk', type=FileType('rb'),
help='path to the ramdisk')
parser.add_argument('--second', type=FileType('rb'),
help='path to the second bootloader')
parser.add_argument('--dtb', type=FileType('rb'), help='path to the dtb')
dtbo_group = parser.add_mutually_exclusive_group()
dtbo_group.add_argument('--recovery_dtbo', type=FileType('rb'),
help='path to the recovery DTBO')
dtbo_group.add_argument('--recovery_acpio', type=FileType('rb'),
metavar='RECOVERY_ACPIO', dest='recovery_dtbo',
help='path to the recovery ACPIO')
parser.add_argument('--cmdline', type=AsciizBytes(bufsize=cmdline_size),
default='', help='kernel command line arguments')
parser.add_argument('--vendor_cmdline',
type=AsciizBytes(bufsize=VENDOR_BOOT_ARGS_SIZE),
default='',
help='vendor boot kernel command line arguments')
parser.add_argument('--base', type=parse_int, default=0x10000000,
help='base address')
parser.add_argument('--kernel_offset', type=parse_int, default=0x00008000,
help='kernel offset')
parser.add_argument('--ramdisk_offset', type=parse_int, default=0x01000000,
help='ramdisk offset')
parser.add_argument('--second_offset', type=parse_int, default=0x00f00000,
help='second bootloader offset')
parser.add_argument('--dtb_offset', type=parse_int, default=0x01f00000,
help='dtb offset')
parser.add_argument('--os_version', type=parse_os_version, default=0,
help='operating system version')
parser.add_argument('--os_patch_level', type=parse_os_patch_level,
default=0, help='operating system patch level')
parser.add_argument('--tags_offset', type=parse_int, default=0x00000100,
help='tags offset')
parser.add_argument('--board', type=AsciizBytes(bufsize=BOOT_NAME_SIZE),
default='', help='board name')
parser.add_argument('--pagesize', type=parse_int,
choices=[2**i for i in range(11, 15)], default=2048,
help='page size')
parser.add_argument('--id', action='store_true',
help='print the image ID on standard output')
parser.add_argument('--header_version', type=parse_int, default=0,
help='boot image header version')
parser.add_argument('-o', '--output', type=FileType('wb'),
help='output file name')
parser.add_argument('--gki_signing_algorithm',
help='GKI signing algorithm to use')
parser.add_argument('--gki_signing_key',
help='path to RSA private key file')
parser.add_argument('--gki_signing_signature_args',
help='other hash arguments passed to avbtool')
parser.add_argument('--gki_signing_avbtool_path',
help='path to avbtool for boot signature generation')
parser.add_argument('--vendor_boot', type=FileType('wb'),
help='vendor boot output file name')
parser.add_argument('--vendor_ramdisk', type=FileType('rb'),
help='path to the vendor ramdisk')
parser.add_argument('--vendor_bootconfig', type=FileType('rb'),
help='path to the vendor bootconfig file')
args, extra_args = parser.parse_known_args()
if args.vendor_boot is not None and args.header_version > 3:
extra_args = parse_vendor_ramdisk_args(args, extra_args)
if len(extra_args) > 0:
raise ValueError(f'Unrecognized arguments: {extra_args}')
if args.header_version < 3:
args.extra_cmdline = args.cmdline[BOOT_ARGS_SIZE-1:]
args.cmdline = args.cmdline[:BOOT_ARGS_SIZE-1] + b'\x00'
assert len(args.cmdline) <= BOOT_ARGS_SIZE
assert len(args.extra_cmdline) <= BOOT_EXTRA_ARGS_SIZE
return args
def add_boot_image_signature(args, pagesize):
"""Adds the boot image signature.
Note that the signature will only be verified in VTS to ensure a
generic boot.img is used. It will not be used by the device
bootloader at boot time. The bootloader should only verify
the boot vbmeta at the end of the boot partition (or in the top-level
vbmeta partition) via the Android Verified Boot process, when the
device boots.
"""
args.output.flush() # Flush the buffer for signature calculation.
# Appends zeros if the signing key is not specified.
if not args.gki_signing_key or not args.gki_signing_algorithm:
zeros = b'\x00' * BOOT_IMAGE_V4_SIGNATURE_SIZE
args.output.write(zeros)
pad_file(args.output, pagesize)
return
avbtool = 'avbtool' # Used from otatools.zip or Android build env.
# We need to specify the path of avbtool in build/core/Makefile,
# because avbtool is not guaranteed to be in $PATH there.
if args.gki_signing_avbtool_path:
avbtool = args.gki_signing_avbtool_path
# Need to specify a value of --partition_size for avbtool to work.
# We use 64 MB below, but avbtool will not resize the boot image to
# this size because --do_not_append_vbmeta_image is also specified.
avbtool_cmd = [
avbtool, 'add_hash_footer',
'--partition_name', 'boot',
'--partition_size', str(64 * 1024 * 1024),
'--image', args.output.name,
'--algorithm', args.gki_signing_algorithm,
'--key', args.gki_signing_key,
'--salt', 'd00df00d'] # TODO: use a hash of kernel/ramdisk as the salt.
# Additional arguments passed to avbtool.
if args.gki_signing_signature_args:
avbtool_cmd += args.gki_signing_signature_args.split()
# Outputs the signed vbmeta to a separate file, then append to boot.img
# as the boot signature.
with tempfile.TemporaryDirectory() as temp_out_dir:
boot_signature_output = os.path.join(temp_out_dir, 'boot_signature')
avbtool_cmd += ['--do_not_append_vbmeta_image',
'--output_vbmeta_image', boot_signature_output]
subprocess.check_call(avbtool_cmd)
with open(boot_signature_output, 'rb') as boot_signature:
if filesize(boot_signature) > BOOT_IMAGE_V4_SIGNATURE_SIZE:
raise ValueError(
f'boot signature size is > {BOOT_IMAGE_V4_SIGNATURE_SIZE}')
write_padded_file(args.output, boot_signature, pagesize)
def write_data(args, pagesize):
write_padded_file(args.output, args.kernel, pagesize)
write_padded_file(args.output, args.ramdisk, pagesize)
write_padded_file(args.output, args.second, pagesize)
if args.header_version > 0 and args.header_version < 3:
write_padded_file(args.output, args.recovery_dtbo, pagesize)
if args.header_version == 2:
write_padded_file(args.output, args.dtb, pagesize)
if args.header_version >= 4:
add_boot_image_signature(args, pagesize)
def write_vendor_boot_data(args):
if args.header_version > 3:
builder = args.vendor_ramdisk_table_builder
builder.write_ramdisks_padded(args.vendor_boot, args.pagesize)
write_padded_file(args.vendor_boot, args.dtb, args.pagesize)
builder.write_entries_padded(args.vendor_boot, args.pagesize)
write_padded_file(args.vendor_boot, args.vendor_bootconfig,
args.pagesize)
else:
write_padded_file(args.vendor_boot, args.vendor_ramdisk, args.pagesize)
write_padded_file(args.vendor_boot, args.dtb, args.pagesize)
def main():
args = parse_cmdline()
if args.vendor_boot is not None:
if args.header_version not in {3, 4}:
raise ValueError(
'--vendor_boot not compatible with given header version')
if args.header_version == 3 and args.vendor_ramdisk is None:
raise ValueError('--vendor_ramdisk missing or invalid')
write_vendor_boot_header(args)
write_vendor_boot_data(args)
if args.output is not None:
if args.second is not None and args.header_version > 2:
raise ValueError(
'--second not compatible with given header version')
img_id = write_header(args)
if args.header_version > 2:
write_data(args, BOOT_IMAGE_HEADER_V3_PAGESIZE)
else:
write_data(args, args.pagesize)
if args.id and img_id is not None:
print('0x' + ''.join(f'{octet:02x}' for octet in img_id))
if __name__ == '__main__':
main()

View File

@@ -71,8 +71,6 @@ if [ -z "$BM_CMDLINE" ]; then
exit 1
fi
section_start prepare_rootfs "Preparing rootfs components"
set -ex
date +'%F %T'
@@ -103,11 +101,29 @@ if [ -f "${BM_BOOTFS}" ]; then
BM_BOOTFS=/tmp/bootfs
fi
# If BM_KERNEL and BM_DTB are present
if [ -n "${FORCE_KERNEL_TAG}" ]; then
if [ -z "${BM_KERNEL}" ] || [ -z "${BM_DTB}" ]; then
echo "This machine cannot be tested with external kernel since BM_KERNEL or BM_DTB missing!"
exit 1
fi
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${BM_KERNEL}" -o "${BM_KERNEL}"
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${BM_DTB}.dtb" -o "${BM_DTB}.dtb"
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/modules.tar.zst" -o modules.tar.zst
fi
date +'%F %T'
# Install kernel modules (they could be in either /lib/modules or
# /usr/lib/modules, but we want to install into the latter)
if [ -n "${BM_BOOTFS}" ]; then
if [ -n "${FORCE_KERNEL_TAG}" ]; then
tar --keep-directory-symlink --zstd -xf modules.tar.zst -C /nfs/
rm modules.tar.zst &
elif [ -n "${BM_BOOTFS}" ]; then
[ -d $BM_BOOTFS/usr/lib/modules ] && rsync -a $BM_BOOTFS/usr/lib/modules/ /nfs/usr/lib/modules/
[ -d $BM_BOOTFS/lib/modules ] && rsync -a $BM_BOOTFS/lib/modules/ /nfs/lib/modules/
else
@@ -118,7 +134,7 @@ fi
date +'%F %T'
# Install kernel image + bootloader files
if [ -z "$BM_BOOTFS" ]; then
if [ -n "${FORCE_KERNEL_TAG}" ] || [ -z "$BM_BOOTFS" ]; then
mv "${BM_KERNEL}" "${BM_DTB}.dtb" /tftp/
else # BM_BOOTFS
rsync -aL --delete $BM_BOOTFS/boot/ /tftp/
@@ -126,6 +142,33 @@ fi
date +'%F %T'
# Set up the pxelinux config for Jetson Nano
mkdir -p /tftp/pxelinux.cfg
cat <<EOF >/tftp/pxelinux.cfg/default-arm-tegra210-p3450-0000
PROMPT 0
TIMEOUT 30
DEFAULT primary
MENU TITLE jetson nano boot options
LABEL primary
MENU LABEL CI kernel on TFTP
LINUX Image
FDT tegra210-p3450-0000.dtb
APPEND \${cbootargs} $BM_CMDLINE
EOF
# Set up the pxelinux config for Jetson TK1
cat <<EOF >/tftp/pxelinux.cfg/default-arm-tegra124-jetson-tk1
PROMPT 0
TIMEOUT 30
DEFAULT primary
MENU TITLE jetson TK1 boot options
LABEL primary
MENU LABEL CI kernel on TFTP
LINUX zImage
FDT tegra124-jetson-tk1.dtb
APPEND \${cbootargs} $BM_CMDLINE
EOF
# Create the rootfs in the NFS directory
. $BM/rootfs-setup.sh /nfs
@@ -138,16 +181,13 @@ if [ -n "$BM_BOOTCONFIG" ]; then
printf "$BM_BOOTCONFIG" >> /tftp/config.txt
fi
section_end prepare_rootfs
set +e
STRUCTURED_LOG_FILE=results/job_detail.json
STRUCTURED_LOG_FILE=job_detail.json
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update dut_job_type "${DEVICE_TYPE}"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update farm "${FARM}"
ATTEMPTS=3
first_attempt=True
while [ $((ATTEMPTS--)) -gt 0 ]; do
section_start dut_boot "Booting hardware device ..."
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --create-dut-job dut_name "${CI_RUNNER_DESCRIPTION}"
# Update submit time to CI_JOB_STARTED_AT only for the first run
if [ "$first_attempt" = "True" ]; then
@@ -159,22 +199,17 @@ while [ $((ATTEMPTS--)) -gt 0 ]; do
--dev="$BM_SERIAL" \
--powerup="$BM_POWERUP" \
--powerdown="$BM_POWERDOWN" \
--boot-timeout-seconds ${BOOT_PHASE_TIMEOUT_SECONDS:-300} \
--test-timeout-minutes ${TEST_PHASE_TIMEOUT_MINUTES:-$((CI_JOB_TIMEOUT/60 - ${TEST_SETUP_AND_UPLOAD_MARGIN_MINUTES:-5}))}
--test-timeout ${TEST_PHASE_TIMEOUT:-20}
ret=$?
if [ $ret -eq 2 ]; then
echo "Did not detect boot sequence, retrying..."
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close-dut-job
first_attempt=False
error "Device failed to boot; will retry"
else
# We're no longer in dut_boot by this point
unset CURRENT_SECTION
ATTEMPTS=0
fi
done
section_start dut_cleanup "Cleaning up after job"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close-dut-job
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close
set -e
@@ -184,8 +219,11 @@ date +'%F %T'
# Bring artifacts back from the NFS dir to the build dir where gitlab-runner
# will look for them.
cp -Rp /nfs/results/. results/
if [ -f "${STRUCTURED_LOG_FILE}" ]; then
cp -p ${STRUCTURED_LOG_FILE} results/
echo "Structured log file is available at ${ARTIFACTS_BASE_URL}/results/${STRUCTURED_LOG_FILE}"
fi
date +'%F %T'
section_end dut_cleanup
exit $ret

View File

@@ -31,12 +31,11 @@ from custom_logger import CustomLogger
from serial_buffer import SerialBuffer
class PoERun:
def __init__(self, args, boot_timeout, test_timeout, logger):
def __init__(self, args, test_timeout, logger):
self.powerup = args.powerup
self.powerdown = args.powerdown
self.ser = SerialBuffer(
args.dev, "results/serial-output.txt", ": ")
self.boot_timeout = boot_timeout
args.dev, "results/serial-output.txt", "")
self.test_timeout = test_timeout
self.logger = logger
@@ -57,7 +56,7 @@ class PoERun:
boot_detected = False
self.logger.create_job_phase("boot")
for line in self.ser.lines(timeout=self.boot_timeout, phase="bootloader"):
for line in self.ser.lines(timeout=5 * 60, phase="bootloader"):
if re.search("Booting Linux", line):
boot_detected = True
break
@@ -87,17 +86,14 @@ class PoERun:
self.print_error("nouveau jetson tk1 network fail, abandoning run.")
return 1
result = re.search(r"hwci: mesa: exit_code: (\d+)", line)
result = re.search("hwci: mesa: (\S*)", line)
if result:
exit_code = int(result.group(1))
if exit_code == 0:
if result.group(1) == "pass":
self.logger.update_dut_job("status", "pass")
return 0
else:
self.logger.update_status_fail("test fail")
self.logger.update_dut_job("exit_code", exit_code)
return exit_code
return 1
self.print_error(
"Reached the end of the CPU serial log without finding a result")
@@ -113,14 +109,12 @@ def main():
parser.add_argument('--powerdown', type=str,
help='shell command for powering off', required=True)
parser.add_argument(
'--boot-timeout-seconds', type=int, help='Boot phase timeout (seconds)', required=True)
parser.add_argument(
'--test-timeout-minutes', type=int, help='Test phase timeout (minutes)', required=True)
'--test-timeout', type=int, help='Test phase timeout (minutes)', required=True)
args = parser.parse_args()
logger = CustomLogger("results/job_detail.json")
logger = CustomLogger("job_detail.json")
logger.update_dut_time("start", None)
poe = PoERun(args, args.boot_timeout_seconds, args.test_timeout_minutes * 60, logger)
poe = PoERun(args, args.test_timeout * 60, logger)
retval = poe.run()
poe.logged_system(args.powerdown)

View File

@@ -17,13 +17,16 @@ cp "${S3_JWT_FILE}" "${rootfs_dst}${S3_JWT_FILE}"
date +'%F %T'
cp $CI_COMMON/capture-devcoredump.sh $rootfs_dst/
cp $CI_COMMON/intel-gpu-freq.sh $rootfs_dst/
cp $CI_COMMON/kdl.sh $rootfs_dst/
cp "$SCRIPTS_DIR/setup-test-env.sh" "$rootfs_dst/"
set +x
# Pass through relevant env vars from the gitlab job to the baremetal init script
echo "Variables passed through:"
filter_env_vars | tee $rootfs_dst/set-job-env-vars.sh
"$CI_COMMON"/generate-env.sh | tee $rootfs_dst/set-job-env-vars.sh
set -x

View File

@@ -22,7 +22,7 @@
# IN THE SOFTWARE.
import argparse
from datetime import datetime, UTC
from datetime import datetime, timezone
import queue
import serial
import threading
@@ -130,10 +130,9 @@ class SerialBuffer:
if b == b'\n'[0]:
line = line.decode(errors="replace")
ts = datetime.now(tz=UTC)
ts_str = f"{ts.hour:02}:{ts.minute:02}:{ts.second:02}.{int(ts.microsecond / 1000):03}"
print("{endc}{time}{prefix}{line}".format(
time=ts_str, prefix=self.prefix, line=line, endc='\033[0m'), flush=True, end='')
time = datetime.now().strftime('%y-%m-%d %H:%M:%S')
print("{endc}{time} {prefix}{line}".format(
time=time, prefix=self.prefix, line=line, endc='\033[0m'), flush=True, end='')
self.line_queue.put(line)
line = bytearray()

View File

@@ -0,0 +1,41 @@
#!/usr/bin/python3
# Copyright © 2020 Christian Gmeiner
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
# Tiny script to read bytes from telnet, and write the output to stdout, with a
# buffer in between so we don't lose serial output from its buffer.
#
import sys
import telnetlib
host = sys.argv[1]
port = sys.argv[2]
tn = telnetlib.Telnet(host, port, 1000000)
while True:
bytes = tn.read_some()
sys.stdout.buffer.write(bytes)
sys.stdout.flush()
tn.close()

View File

@@ -0,0 +1,7 @@
#!/bin/sh
# shellcheck disable=SC1091
set -e
_COMPILER=clang++-15
. compiler-wrapper.sh

View File

@@ -0,0 +1,7 @@
#!/bin/sh
# shellcheck disable=SC1091
set -e
_COMPILER=clang++
. compiler-wrapper.sh

View File

@@ -0,0 +1,7 @@
#!/bin/sh
# shellcheck disable=SC1091
set -e
_COMPILER=clang-15
. compiler-wrapper.sh

View File

@@ -0,0 +1,7 @@
#!/bin/sh
# shellcheck disable=SC1091
set -e
_COMPILER=clang
. compiler-wrapper.sh

View File

@@ -0,0 +1,7 @@
#!/bin/sh
# shellcheck disable=SC1091
set -e
_COMPILER=g++
. compiler-wrapper.sh

View File

@@ -0,0 +1,7 @@
#!/bin/sh
# shellcheck disable=SC1091
set -e
_COMPILER=gcc
. compiler-wrapper.sh

View File

@@ -0,0 +1,21 @@
# shellcheck disable=SC1091
# shellcheck disable=SC2086 # we want word splitting
if command -V ccache >/dev/null 2>/dev/null; then
CCACHE=ccache
else
CCACHE=
fi
if echo "$@" | grep -E 'meson-private/tmp[^ /]*/testfile.c' >/dev/null; then
# Invoked for meson feature check
exec $CCACHE $_COMPILER "$@"
fi
if [ "$(eval printf "'%s'" "\"\${$(($#-1))}\"")" = "-c" ]; then
# Compile-only invocation (second-to-last argument is -c), not linking
exec $CCACHE $_COMPILER "$@"
fi
# Compiler invoked by ninja for linking. Add -Werror to turn compiler warnings into errors
# with LTO. (meson's werror should arguably do this, but meanwhile we need to)
exec $CCACHE $_COMPILER "$@" -Werror
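The wrapper above is what the thin gcc/g++/clang shims shown earlier source. As a hedged illustration of how such shims could be picked up by a build — the /usr/local/container-wrappers path and the meson invocation are assumptions for this sketch, not taken from this diff:

# Sketch only: prepend an assumed wrapper directory so `gcc`/`g++` resolve to
# the shims above, then configure and build. Meson feature checks and
# compile-only calls pass straight through; final link steps get the extra -Werror.
export PATH="/usr/local/container-wrappers:$PATH"   # assumed install location
export CC=gcc CXX=g++

meson setup _build -Db_lto=true
ninja -C _build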

View File

@@ -1,95 +0,0 @@
.meson-build-for-tests:
extends:
- .build-linux
stage: build-for-tests
script:
- &meson-build timeout --verbose ${BUILD_JOB_TIMEOUT_OVERRIDE:-$BUILD_JOB_TIMEOUT} bash --login .gitlab-ci/meson/build.sh
- .gitlab-ci/prepare-artifacts.sh
.meson-build-only:
extends:
- .meson-build-for-tests
- .build-only-delayed-rules
stage: build-only
script:
- *meson-build
# Shared between windows and Linux
.build-common:
extends: .build-rules
# Cancel job if a newer commit is pushed to the same branch
interruptible: true
variables:
# Build jobs typically take between 5 and 12 minutes, depending on how
# much they build and how many new Rust compilers we have to build twice.
# Allow 25 minutes as a reasonable margin: beyond this point, something
# has gone badly wrong, and we should try again to see if we can get
# something from it.
#
# Some jobs not in the critical path use a higher timeout, particularly
# when building with ASan or UBSan.
BUILD_JOB_TIMEOUT: 12m
RUN_MESON_TESTS: "true"
timeout: 16m
# We don't want to download any previous job's artifacts
dependencies: []
artifacts:
name: "${CI_PROJECT_NAME}_${CI_JOB_NAME_SLUG}"
when: always
paths:
- _build/meson-logs/*.txt
- _build/meson-logs/strace
- _build/.ninja_log
- artifacts
.build-run-long:
variables:
BUILD_JOB_TIMEOUT: 18m
timeout: 25m
# Just Linux
.build-linux:
extends: .build-common
variables:
C_ARGS: >
-Wno-error=deprecated-declarations
CCACHE_COMPILERCHECK: "content"
CCACHE_COMPRESS: "true"
CCACHE_DIR: /cache/mesa/ccache
# Use ccache transparently, and print stats before/after
before_script:
- !reference [default, before_script]
- |
export PATH="/usr/lib/ccache:$PATH"
export CCACHE_BASEDIR="$PWD"
if test -x /usr/bin/ccache; then
section_start ccache_before "ccache stats before build"
ccache --show-stats
section_end ccache_before
fi
after_script:
- if test -x /usr/bin/ccache; then ccache --show-stats | grep "Hits:"; fi
- !reference [default, after_script]
.build-windows:
extends:
- .build-common
- .windows-docker-tags
cache:
key: ${CI_JOB_NAME}
paths:
- subprojects/packagecache
.ci-deqp-artifacts:
artifacts:
name: "${CI_PROJECT_NAME}_${CI_JOB_NAME_SLUG}"
when: always
untracked: false
paths:
# Watch out! Artifacts are relative to the build dir.
# https://gitlab.com/gitlab-org/gitlab-ce/commit/8788fb925706cad594adf6917a6c5f6587dd1521
- artifacts
- _build/meson-logs/*.txt
- _build/meson-logs/strace
- _build/.ninja_log

File diff suppressed because it is too large.

View File

@@ -1,268 +0,0 @@
# For CI-tron based testing farm jobs.
.ci-tron-test:
extends:
- .ci-tron-b2c-job-v1
variables:
GIT_STRATEGY: none
B2C_VERSION: v0.9.15.1 # Linux 6.13.7
SCRIPTS_DIR: install
CI_TRON_PATTERN__JOB_SUCCESS__REGEX: 'hwci: mesa: exit_code: 0\r$'
CI_TRON_PATTERN__SESSION_END__REGEX: '^.*It''s now safe to turn off your computer\r$'
CI_TRON_TIMEOUT__FIRST_CONSOLE_ACTIVITY__MINUTES: 2
CI_TRON_TIMEOUT__FIRST_CONSOLE_ACTIVITY__RETRIES: 3
CI_TRON_TIMEOUT__CONSOLE_ACTIVITY__MINUTES: 5
CI_TRON__B2C_ARTIFACT_EXCLUSION: "*.shader_cache,install/*,*/install/*,*/vkd3d-proton.cache*,vkd3d-proton.cache*,*.qpa"
CI_TRON_HTTP_ARTIFACT__INSTALL__PATH: "/install.tar.zst"
CI_TRON_HTTP_ARTIFACT__INSTALL__URL: "https://$PIPELINE_ARTIFACTS_BASE/$S3_ARTIFACT_NAME.tar.zst"
CI_TRON__B2C_MACHINE_REGISTRATION_CMD: "setup --tags $CI_TRON_DUT_SETUP_TAGS"
CI_TRON__B2C_IMAGE_UNDER_TEST: $MESA_IMAGE
CI_TRON__B2C_EXEC_CMD: "curl --silent --fail-with-body {{ job.http.url }}$CI_TRON_HTTP_ARTIFACT__INSTALL__PATH | tar --zstd --extract && $SCRIPTS_DIR/common/init-stage2.sh"
# Assume by default this is running deqp, as that's almost always true
HWCI_TEST_SCRIPT: install/deqp-runner.sh
# Keep the job script in the artifacts
CI_TRON_JOB_SCRIPT_PATH: results/job_script.sh
needs:
- !reference [.required-for-hardware-jobs, needs]
tags:
- farm:$RUNNER_FARM_LOCATION
- $CI_TRON_DUT_SETUP_TAGS
# Override the default before_script, as it is not compatible with the CI-tron environment. We just keep the clearing
# of the JWT token for security reasons
before_script:
- |
set -eu
eval "$S3_JWT_FILE_SCRIPT"
for var in CI_TRON_DUT_SETUP_TAGS; do
if [[ -z "$(eval echo \${$var:-})" ]]; then
echo "The required variable '$var' is missing"
exit 1
fi
done
# Open a section that will be closed by b2c
echo -e "\n\e[0Ksection_start:`date +%s`:b2c_kernel_boot[collapsed=true]\r\e[0K\e[0;36m[$(cut -d ' ' -f1 /proc/uptime)]: Submitting the CI-tron job and booting the DUT\e[0m\n"
# Anything our job places in results/ will be collected by the
# Gitlab coordinator for status presentation. results/junit.xml
# will be parsed by the UI for more detailed explanations of
# test execution.
artifacts:
when: always
name: "${CI_PROJECT_NAME}_${CI_JOB_NAME_SLUG}"
paths:
- results
reports:
junit: results/**/junit.xml
.ci-tron-x86_64-test:
extends:
- .ci-tron-test
variables:
CI_TRON_INITRAMFS__B2C__URL: 'https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/initramfs.linux_amd64.cpio.xz'
CI_TRON_KERNEL__URL: 'https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/linux-x86_64'
# Set the following variables if you need AMD, Intel, or NVIDIA support
# CI_TRON_INITRAMFS__DEPMOD__URL: "https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/linux-x86_64.depmod.cpio.xz"
# CI_TRON_INITRAMFS__GPU__URL: "https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/linux-x86_64.gpu.cpio"
# CI_TRON_INITRAMFS__GPU__FORMAT__0__ARCHIVE__KEEP__0__PATH: "(lib/(modules|firmware/amdgpu)/.*)"
S3_ARTIFACT_NAME: "mesa-x86_64-default-debugoptimized"
.ci-tron-x86_64-test-vk:
extends:
- .use-debian/x86_64_test-vk
- .ci-tron-x86_64-test
needs:
- job: debian/x86_64_test-vk
artifacts: false
optional: true
- job: debian-x86_64
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-x86_64-test-vk-manual:
extends:
- .use-debian/x86_64_test-vk
- .ci-tron-x86_64-test
variables:
S3_ARTIFACT_NAME: "debian-build-x86_64"
needs:
- job: debian/x86_64_test-vk
artifacts: false
optional: true
- job: debian-build-x86_64
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-x86_64-test-gl:
extends:
- .use-debian/x86_64_test-gl
- .ci-tron-x86_64-test
needs:
- job: debian/x86_64_test-gl
artifacts: false
optional: true
- job: debian-x86_64
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-x86_64-test-gl-manual:
extends:
- .use-debian/x86_64_test-gl
- .ci-tron-x86_64-test
variables:
S3_ARTIFACT_NAME: "debian-build-x86_64"
needs:
- job: debian/x86_64_test-gl
artifacts: false
optional: true
- job: debian-build-x86_64
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm64-test:
extends:
- .ci-tron-test
variables:
CI_TRON_INITRAMFS__B2C__URL: 'https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/initramfs.linux_arm64.cpio.xz'
CI_TRON_KERNEL__URL: 'https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/linux-arm64'
S3_ARTIFACT_NAME: "mesa-arm64-default-debugoptimized"
.ci-tron-arm64-test-vk:
extends:
- .use-debian/arm64_test-vk
- .ci-tron-arm64-test
needs:
- job: debian/arm64_test-vk
artifacts: false
optional: true
- job: debian-arm64
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm64-test-asan-vk:
extends:
- .use-debian/arm64_test-vk
- .ci-tron-arm64-test
variables:
S3_ARTIFACT_NAME: "mesa-arm64-asan-debugoptimized"
DEQP_FORCE_ASAN: 1
needs:
- job: debian/arm64_test-vk
artifacts: false
optional: true
- job: debian-arm64-asan
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm64-test-ubsan-vk:
extends:
- .use-debian/arm64_test-vk
- .ci-tron-arm64-test
variables:
S3_ARTIFACT_NAME: "mesa-arm64-ubsan-debugoptimized"
needs:
- job: debian/arm64_test-vk
artifacts: false
optional: true
- job: debian-arm64-ubsan
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm64-test-gl:
extends:
- .use-debian/arm64_test-gl
- .ci-tron-arm64-test
needs:
- job: debian/arm64_test-gl
artifacts: false
optional: true
- job: debian-arm64
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm64-test-asan-gl:
extends:
- .use-debian/arm64_test-gl
- .ci-tron-arm64-test
variables:
S3_ARTIFACT_NAME: "mesa-arm64-asan-debugoptimized"
DEQP_FORCE_ASAN: 1
needs:
- job: debian/arm64_test-gl
artifacts: false
optional: true
- job: debian-arm64-asan
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm64-test-ubsan-gl:
extends:
- .use-debian/arm64_test-gl
- .ci-tron-arm64-test
variables:
S3_ARTIFACT_NAME: "mesa-arm64-ubsan-debugoptimized"
needs:
- job: debian/arm64_test-gl
artifacts: false
optional: true
- job: debian-arm64-ubsan
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm32-test:
extends:
- .ci-tron-test
variables:
CI_TRON_INITRAMFS__B2C__URL: 'https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/initramfs.linux_arm.cpio.xz'
CI_TRON_KERNEL__URL: 'https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/linux-arm'
S3_ARTIFACT_NAME: "mesa-arm32-default-debugoptimized"
.ci-tron-arm32-test-vk:
extends:
- .use-debian/arm32_test-vk
- .ci-tron-arm32-test
needs:
- job: debian/arm32_test-vk
artifacts: false
optional: true
- job: debian-arm32
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm32-test-gl:
extends:
- .use-debian/arm32_test-gl
- .ci-tron-arm32-test
needs:
- job: debian/arm32_test-gl
artifacts: false
optional: true
- job: debian-arm32
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm32-test-asan-gl:
extends:
- .use-debian/arm32_test-gl
- .ci-tron-arm32-test
variables:
S3_ARTIFACT_NAME: "mesa-arm32-asan-debugoptimized"
DEQP_FORCE_ASAN: 1
needs:
- job: debian/arm32_test-gl
artifacts: false
optional: true
- job: debian-arm32-asan
artifacts: false
- !reference [.ci-tron-test, needs]

View File

@@ -7,7 +7,7 @@ while true; do
devcds=$(find /sys/devices/virtual/devcoredump/ -name data 2>/dev/null)
for i in $devcds; do
echo "Found a devcoredump at $i."
if cp $i $RESULTS_DIR/first.devcore; then
if cp $i /results/first.devcore; then
echo 1 > $i
echo "Saved to the job artifacts at /first.devcore"
exit 0
@@ -23,7 +23,7 @@ while true; do
rm "$tmpfile"
else
echo "Found an i915 error state at $i size=$filesize."
if cp "$tmpfile" $RESULTS_DIR/first.i915_error_state; then
if cp "$tmpfile" /results/first.i915_error_state; then
rm "$tmpfile"
echo 1 > "$i"
echo "Saved to the job artifacts at /first.i915_error_state"

.gitlab-ci/common/generate-env.sh Executable file
View File

@@ -0,0 +1,139 @@
#!/bin/bash
VARS=(
ACO_DEBUG
ARTIFACTS_BASE_URL
ASAN_OPTIONS
BASE_SYSTEM_FORK_HOST_PREFIX
BASE_SYSTEM_MAINLINE_HOST_PREFIX
CI_COMMIT_BRANCH
CI_COMMIT_REF_NAME
CI_COMMIT_TITLE
CI_JOB_ID
S3_JWT_FILE
CI_JOB_STARTED_AT
CI_JOB_NAME
CI_JOB_URL
CI_MERGE_REQUEST_SOURCE_BRANCH_NAME
CI_MERGE_REQUEST_TITLE
CI_NODE_INDEX
CI_NODE_TOTAL
CI_PAGES_DOMAIN
CI_PIPELINE_ID
CI_PIPELINE_URL
CI_PROJECT_DIR
CI_PROJECT_NAME
CI_PROJECT_PATH
CI_PROJECT_ROOT_NAMESPACE
CI_RUNNER_DESCRIPTION
CI_SERVER_URL
CROSVM_GALLIUM_DRIVER
CROSVM_GPU_ARGS
CURRENT_SECTION
DEQP_BIN_DIR
DEQP_CONFIG
DEQP_EXPECTED_RENDERER
DEQP_FRACTION
DEQP_HEIGHT
DEQP_RESULTS_DIR
DEQP_RUNNER_OPTIONS
DEQP_SUITE
DEQP_TEMP_DIR
DEQP_VER
DEQP_WIDTH
DEVICE_NAME
DRIVER_NAME
EGL_PLATFORM
ETNA_MESA_DEBUG
FDO_CI_CONCURRENT
FDO_UPSTREAM_REPO
FD_MESA_DEBUG
FLAKES_CHANNEL
FREEDRENO_HANGCHECK_MS
GALLIUM_DRIVER
GALLIVM_PERF
GPU_VERSION
GTEST
GTEST_FAILS
GTEST_FRACTION
GTEST_RESULTS_DIR
GTEST_RUNNER_OPTIONS
GTEST_SKIPS
HWCI_FREQ_MAX
HWCI_KERNEL_MODULES
HWCI_KVM
HWCI_START_WESTON
HWCI_START_XORG
HWCI_TEST_SCRIPT
IR3_SHADER_DEBUG
JOB_ARTIFACTS_BASE
JOB_RESULTS_PATH
JOB_ROOTFS_OVERLAY_PATH
KERNEL_IMAGE_BASE
KERNEL_IMAGE_NAME
LD_LIBRARY_PATH
LIBGL_ALWAYS_SOFTWARE
LP_NUM_THREADS
MESA_BASE_TAG
MESA_BUILD_PATH
MESA_DEBUG
MESA_GLES_VERSION_OVERRIDE
MESA_GLSL_VERSION_OVERRIDE
MESA_GL_VERSION_OVERRIDE
MESA_IMAGE
MESA_IMAGE_PATH
MESA_IMAGE_TAG
MESA_LOADER_DRIVER_OVERRIDE
MESA_TEMPLATES_COMMIT
MESA_VK_ABORT_ON_DEVICE_LOSS
MESA_VK_IGNORE_CONFORMANCE_WARNING
S3_HOST
S3_RESULTS_UPLOAD
NIR_DEBUG
PAN_I_WANT_A_BROKEN_VULKAN_DRIVER
PAN_MESA_DEBUG
PANVK_DEBUG
PIGLIT_FRACTION
PIGLIT_NO_WINDOW
PIGLIT_OPTIONS
PIGLIT_PLATFORM
PIGLIT_PROFILES
PIGLIT_REPLAY_ANGLE_TAG
PIGLIT_REPLAY_ARTIFACTS_BASE_URL
PIGLIT_REPLAY_DEVICE_NAME
PIGLIT_REPLAY_EXTRA_ARGS
PIGLIT_REPLAY_LOOP_TIMES
PIGLIT_REPLAY_REFERENCE_IMAGES_BASE
PIGLIT_REPLAY_SUBCOMMAND
PIGLIT_RESULTS
PIGLIT_TESTS
PIGLIT_TRACES_FILE
PIPELINE_ARTIFACTS_BASE
RADEON_DEBUG
RADV_DEBUG
RADV_PERFTEST
SKQP_ASSETS_DIR
SKQP_BACKENDS
TU_DEBUG
USE_ANGLE
VIRGL_HOST_API
VIRGL_RENDER_SERVER
WAFFLE_PLATFORM
VK_DRIVER
VKD3D_PROTON_RESULTS
VKD3D_CONFIG
VKD3D_TEST_EXCLUDE
ZINK_DESCRIPTORS
ZINK_DEBUG
LVP_POISON_MEMORY
# Dead code within Mesa CI, but required by virglrender CI
# (because they include our files in their CI)
VK_DRIVER_FILES
)
for var in "${VARS[@]}"; do
if [ -n "${!var+x}" ]; then
echo "export $var=${!var@Q}"
fi
done
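A minimal bash sketch of the two parameter expansions the loop above relies on, using a made-up variable FOO purely for illustration: ${!var+x} is non-empty only when the variable named by $var is set, and ${!var@Q} expands to that variable's value quoted so the generated export line can be re-sourced safely.

#!/bin/bash
# Illustration only: mirror one iteration of the loop above with a fake variable.
FOO='hello world; $(do not run me)'
var=FOO
if [ -n "${!var+x}" ]; then
  # Prints: export FOO='hello world; $(do not run me)'
  echo "export $var=${!var@Q}"
fi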

View File

@@ -3,10 +3,6 @@
# Very early init, used to make sure devices and network are set up and
# reachable.
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# ALPINE_X86_64_LAVA_TRIGGER_TAG
set -ex
cd /
@@ -27,8 +23,3 @@ echo "nameserver 8.8.8.8" > /etc/resolv.conf
# Set the time so we can validate certificates before we fetch anything;
# however as not all DUTs have network, make this non-fatal.
for _ in 1 2 3; do sntp -sS pool.ntp.org && break || sleep 2; done || true
# Create a symlink from /dev/fd to /proc/self/fd if /dev/fd is missing.
if [ ! -e /dev/fd ]; then
ln -s /proc/self/fd /dev/fd
fi

View File

@@ -47,13 +47,6 @@ for path in '/dut-env-vars.sh' '/set-job-env-vars.sh' './set-job-env-vars.sh'; d
done
. "$SCRIPTS_DIR"/setup-test-env.sh
# Flush out anything which might be stuck in a serial buffer
echo
echo
echo
section_switch init_stage2 "Pre-testing hardware setup"
set -ex
# Set up any devices required by the jobs
@@ -76,7 +69,9 @@ fi
# - vmx for Intel VT
# - svm for AMD-V
#
if [ -n "$HWCI_ENABLE_X86_KVM" ]; then
# Additionally, download the kernel image to boot the VM via HWCI_TEST_SCRIPT.
#
if [ "$HWCI_KVM" = "true" ]; then
unset KVM_KERNEL_MODULE
{
grep -qs '\bvmx\b' /proc/cpuinfo && KVM_KERNEL_MODULE=kvm_intel
@@ -89,6 +84,11 @@ if [ -n "$HWCI_ENABLE_X86_KVM" ]; then
echo "WARNING: Failed to detect CPU virtualization extensions"
} || \
modprobe ${KVM_KERNEL_MODULE}
mkdir -p /lava-files
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-o "/lava-files/${KERNEL_IMAGE_NAME}" \
"${KERNEL_IMAGE_BASE}/amd64/${KERNEL_IMAGE_NAME}"
fi
# Fix prefix confusion: the build installs to $CI_PROJECT_DIR, but we expect
@@ -102,9 +102,6 @@ export LIBGL_DRIVERS_PATH=/install/lib/dri
# telling it to look in /usr/local/lib.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
# The Broadcom devices need /usr/local/bin unconditionally added to the path
export PATH=/usr/local/bin:$PATH
# Store Mesa's disk cache under /tmp, rather than sending it out over NFS.
export XDG_CACHE_HOME=/tmp
@@ -136,14 +133,13 @@ if [ "$HWCI_FREQ_MAX" = "true" ]; then
# and enable throttling detection & reporting.
# Additionally, set the upper limit for CPU scaling frequency to 65% of the
# maximum permitted, as an additional measure to mitigate thermal throttling.
/install/common/intel-gpu-freq.sh -s 70% --cpu-set-max 65% -g all -d
/intel-gpu-freq.sh -s 70% --cpu-set-max 65% -g all -d
fi
# Start a little daemon to capture sysfs records and produce a JSON file
KDL_PATH=/install/common/kdl.sh
if [ -x "$KDL_PATH" ]; then
if [ -x /kdl.sh ]; then
echo "launch kdl.sh!"
$KDL_PATH &
/kdl.sh &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
else
echo "kdl.sh not found!"
@@ -157,9 +153,8 @@ fi
# Start a little daemon to capture the first devcoredump we encounter. (They
# expire after 5 minutes, so we poll for them).
CAPTURE_DEVCOREDUMP=/install/common/capture-devcoredump.sh
if [ -x "$CAPTURE_DEVCOREDUMP" ]; then
$CAPTURE_DEVCOREDUMP &
if [ -x /capture-devcoredump.sh ]; then
/capture-devcoredump.sh &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
fi
@@ -173,7 +168,7 @@ export VK_DRIVER_FILES="/install/share/vulkan/icd.d/${VK_DRIVER}_icd.$ARCH.json"
if [ -n "$HWCI_START_XORG" ]; then
echo "touch /xorg-started; sleep 100000" > /xorg-script
env \
xinit /bin/sh /xorg-script -- /usr/bin/Xorg -noreset -s 0 -dpms -logfile "$RESULTS_DIR/Xorg.0.log" &
xinit /bin/sh /xorg-script -- /usr/bin/Xorg -noreset -s 0 -dpms -logfile /Xorg.0.log &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
# Wait for xorg to be ready for connections.
@@ -198,43 +193,44 @@ if [ -n "$HWCI_START_WESTON" ]; then
export DISPLAY=:0
mkdir -p /tmp/.X11-unix
env weston --config="/install/common/weston.ini" -Swayland-0 --use-gl &
env \
weston -Bheadless-backend.so --use-gl -Swayland-0 --xwayland --idle-time=0 &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
while [ ! -S "$WESTON_X11_SOCK" ]; do sleep 1; done
fi
set +x
section_end init_stage2
echo "Running ${HWCI_TEST_SCRIPT} ${HWCI_TEST_ARGS} ..."
set +e
$HWCI_TEST_SCRIPT ${HWCI_TEST_ARGS:-}; EXIT_CODE=$?
bash -c ". $SCRIPTS_DIR/setup-test-env.sh && $HWCI_TEST_SCRIPT"
EXIT_CODE=$?
set -e
section_start post_test_cleanup "Cleaning up after testing, uploading results"
set -x
# Let's make sure the results are always stored in the current working directory
mv -f ${CI_PROJECT_DIR}/results ./ 2>/dev/null || true
[ ${EXIT_CODE} -ne 0 ] || rm -rf results/trace/"$PIGLIT_REPLAY_DEVICE_NAME"
# Make sure that capture-devcoredump is done before we start trying to tar up
# artifacts -- if it's writing while tar is reading, tar will throw an error and
# kill the job.
cleanup
# upload artifacts (lava jobs)
# upload artifacts
if [ -n "$S3_RESULTS_UPLOAD" ]; then
tar --zstd -cf results.tar.zst results/;
ci-fairy s3cp --token-file "${S3_JWT_FILE}" results.tar.zst https://"$S3_RESULTS_UPLOAD"/results.tar.zst
ci-fairy s3cp --token-file "${S3_JWT_FILE}" results.tar.zst https://"$S3_RESULTS_UPLOAD"/results.tar.zst;
fi
# We still need to echo the hwci: mesa message, as some scripts rely on it, such
# as the python ones inside the bare-metal folder
[ ${EXIT_CODE} -eq 0 ] && RESULT=pass || RESULT=fail
set +x
section_end post_test_cleanup
# Print the final result; both bare-metal and LAVA look for this string to get
# the result of our run, so try really hard to get it out rather than losing
# the run. The device gets shut down right at this point, and a630 seems to
# enjoy corrupting the last line of serial output before shutdown.
for _ in $(seq 0 3); do echo "hwci: mesa: exit_code: $EXIT_CODE"; sleep 1; echo; done
for _ in $(seq 0 3); do echo "hwci: mesa: $RESULT"; sleep 1; echo; done
exit $EXIT_CODE
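The "hwci: mesa: exit_code:" marker echoed above is the hand-off that both the bare-metal poe_run script and the CI-tron job-success regex look for in the serial log. A hedged consumer-side sketch, assuming the log was captured to results/serial-output.txt as in the poe_run diff:

# Sketch only: pull the last exit-code marker out of the captured serial log,
# falling back to failure if the marker never appeared.
log=results/serial-output.txt
code=$(grep -aoE 'hwci: mesa: exit_code: [0-9]+' "$log" | tail -n 1 | grep -oE '[0-9]+$')
exit "${code:-1}"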

View File

@@ -35,27 +35,6 @@
# - gt_act_freq_mhz (the actual GPU freq)
# - gt_cur_freq_mhz (the last requested freq)
#
# Intel later switched to per-tile sysfs interfaces, which is what the Xe DRM
# driver exclusively uses, and the capabilities are now located under the
# following directory for the first tile:
#
# /sys/class/drm/card<n>/device/tile0/gt0/freq0/<freq_info>
#
# Where <n> is the DRM card index and <freq_info> one of the following:
#
# - max_freq (enforced maximum freq)
# - min_freq (enforced minimum freq)
#
# The hardware capabilities can be accessed via:
#
# - rp0_freq (supported maximum freq)
# - rpn_freq (supported minimum freq)
# - rpe_freq (most efficient freq)
#
# The current frequency can be read from:
# - act_freq (the actual GPU freq)
# - cur_freq (the last requested freq)
#
# Also note that in addition to GPU management, the script offers the
# possibility to adjust CPU operating frequencies. However, this is currently
# limited to just setting the maximum scaling frequency as percentage of the
@@ -71,25 +50,10 @@
# Constants
#
# Check if any /sys/class/drm/cardX/device/tile0 directory exists to detect Xe
USE_XE=0
for i in $(seq 0 15); do
if [ -d "/sys/class/drm/card$i/device/tile0" ]; then
USE_XE=1
break
fi
done
# GPU
if [ "$USE_XE" -eq 1 ]; then
DRM_FREQ_SYSFS_PATTERN="/sys/class/drm/card%d/device/tile0/gt0/freq0/%s_freq"
ENF_FREQ_INFO="max min"
CAP_FREQ_INFO="rp0 rpn rpe"
else
DRM_FREQ_SYSFS_PATTERN="/sys/class/drm/card%d/gt_%s_freq_mhz"
ENF_FREQ_INFO="max min boost"
CAP_FREQ_INFO="RP0 RPn RP1"
fi
DRM_FREQ_SYSFS_PATTERN="/sys/class/drm/card%d/gt_%s_freq_mhz"
ENF_FREQ_INFO="max min boost"
CAP_FREQ_INFO="RP0 RPn RP1"
ACT_FREQ_INFO="act cur"
THROTT_DETECT_SLEEP_SEC=2
THROTT_DETECT_PID_FILE_PATH=/tmp/thrott-detect.pid
@@ -148,11 +112,7 @@ identify_intel_gpu() {
}
path=$(print_freq_sysfs_path "" ${i})
if [ "$USE_XE" -eq 1 ]; then
path=${path%/*/*/*/*/*}/device/vendor
else
path=${path%/*}/device/vendor
fi
path=${path%/*}/device/vendor
[ -r "${path}" ] && read vendor < "${path}" && \
[ "${vendor}" = "0x8086" ] && INTEL_DRM_CARD_INDEX=$i && return 0
@@ -237,13 +197,13 @@ compute_freq_set() {
case "$1" in
+)
val=$(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f1)}") # FREQ_rp0 or FREQ_RP0
val=${FREQ_RP0}
;;
-)
val=$(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}") # FREQ_rpn or FREQ_RPn
val=${FREQ_RPn}
;;
*%)
val=$((${1%?} * $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f1)}") / 100))
val=$((${1%?} * FREQ_RP0 / 100))
# Adjust freq to comply with 50 MHz increments
val=$((val / 50 * 50))
;;
@@ -272,17 +232,15 @@ set_freq_max() {
read_freq_info n min || return $?
# FREQ_rp0 or FREQ_RP0
[ ${SET_MAX_FREQ} -gt $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f1)}") ] && {
[ ${SET_MAX_FREQ} -gt ${FREQ_RP0} ] && {
log ERROR "Cannot set GPU max freq (%s) to be greater than hw max freq (%s)" \
"${SET_MAX_FREQ}" "$(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f1)}")"
"${SET_MAX_FREQ}" "${FREQ_RP0}"
return 1
}
# FREQ_rpn or FREQ_RPn
[ ${SET_MAX_FREQ} -lt $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}") ] && {
[ ${SET_MAX_FREQ} -lt ${FREQ_RPn} ] && {
log ERROR "Cannot set GPU max freq (%s) to be less than hw min freq (%s)" \
"${SET_MIN_FREQ}" "$(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}")"
"${SET_MIN_FREQ}" "${FREQ_RPn}"
return 1
}
@@ -294,21 +252,12 @@ set_freq_max() {
[ -z "${DRY_RUN}" ] || return 0
# Write to max freq path
if ! printf "%s" ${SET_MAX_FREQ} | tee $(print_freq_sysfs_path max) > /dev/null;
if ! printf "%s" ${SET_MAX_FREQ} | tee $(print_freq_sysfs_path max) \
$(print_freq_sysfs_path boost) > /dev/null;
then
log ERROR "Failed to set GPU max frequency"
return 1
fi
# Only write to boost if the sysfs file exists, as it's removed in Xe
if [ -e "$(print_freq_sysfs_path boost)" ]; then
if ! printf "%s" ${SET_MAX_FREQ} | tee $(print_freq_sysfs_path boost) > /dev/null;
then
log ERROR "Failed to set GPU boost frequency"
return 1
fi
fi
}
#
@@ -325,9 +274,9 @@ set_freq_min() {
return 1
}
[ ${SET_MIN_FREQ} -lt $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}") ] && {
[ ${SET_MIN_FREQ} -lt ${FREQ_RPn} ] && {
log ERROR "Cannot set GPU min freq (%s) to be less than hw min freq (%s)" \
"${SET_MIN_FREQ}" "$(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}")"
"${SET_MIN_FREQ}" "${FREQ_RPn}"
return 1
}
@@ -345,7 +294,7 @@ set_freq_min() {
#
set_freq() {
# Get hw max & min frequencies
read_freq_info n $(echo $CAP_FREQ_INFO | cut -d' ' -f1,2) || return $? # RP0 RPn
read_freq_info n RP0 RPn || return $?
[ -z "${SET_MAX_FREQ}" ] || {
SET_MAX_FREQ=$(compute_freq_set "${SET_MAX_FREQ}")
@@ -448,7 +397,7 @@ detect_throttling() {
}
(
read_freq_info n $(echo $CAP_FREQ_INFO | cut -d' ' -f2) || return $? # RPn
read_freq_info n RPn || exit $?
while true; do
sleep ${THROTT_DETECT_SLEEP_SEC}
@@ -457,13 +406,13 @@ detect_throttling() {
#
# The throttling seems to occur when act freq goes below min.
# However, it's necessary to exclude the idle states, where
# act freq normally reaches rpn and cur goes below min.
# act freq normally reaches RPn and cur goes below min.
#
[ ${FREQ_act} -lt ${FREQ_min} ] && \
[ ${FREQ_act} -gt $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}") ] && \
[ ${FREQ_act} -gt ${FREQ_RPn} ] && \
[ ${FREQ_cur} -ge ${FREQ_min} ] && \
printf "GPU throttling detected: act=%s min=%s cur=%s rpn=%s\n" \
${FREQ_act} ${FREQ_min} ${FREQ_cur} $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}")
printf "GPU throttling detected: act=%s min=%s cur=%s RPn=%s\n" \
${FREQ_act} ${FREQ_min} ${FREQ_cur} ${FREQ_RPn}
done
) &
@@ -611,8 +560,7 @@ set_cpu_freq_max() {
read_cpu_freq_info ${cpu_index} n ${CAP_CPU_FREQ_INFO} || { res=$?; continue; }
target_freq=$(compute_cpu_freq_set "${CPU_SET_MAX_FREQ}")
tf_res=$?
[ -z "${target_freq}" ] && { res=$tf_res; continue; }
[ -z "${target_freq}" ] && { res=$?; continue; }
log INFO "Setting CPU%s max scaling freq to %s Hz" ${cpu_index} "${target_freq}"
[ -n "${DRY_RUN}" ] && continue

View File

@@ -1,18 +1,24 @@
#!/usr/bin/env bash
# shellcheck disable=SC1091 # the path is created in build-kdl and
# here we only check whether it exists
# shellcheck disable=SC2086 # we want the arguments to be expanded
if ! [ -f /ci-kdl/bin/activate ]; then
echo -e "ci-kdl not installed; not monitoring temperature"
exit 0
terminate() {
echo "ci-kdl.sh caught SIGTERM signal! propagating to child processes"
for job in $(jobs -p)
do
kill -15 "$job"
done
}
trap terminate SIGTERM
if [ -f /ci-kdl.venv/bin/activate ]; then
source /ci-kdl.venv/bin/activate
/ci-kdl.venv/bin/python /ci-kdl.venv/bin/ci-kdl | tee -a /results/kdl.log &
child=$!
wait $child
mv kdl_*.json /results/kdl.json
else
echo -e "Not possible to activate ci-kdl virtual environment"
fi
KDL_ARGS="
--output-file=${RESULTS_DIR}/kdl.json
--log-level=WARNING
--num-samples=-1
"
source /ci-kdl/bin/activate
exec /ci-kdl/bin/ci-kdl ${KDL_ARGS}

.gitlab-ci/common/start-x.sh Executable file
View File

@@ -0,0 +1,21 @@
#!/bin/sh
set -ex
_XORG_SCRIPT="/xorg-script"
_FLAG_FILE="/xorg-started"
echo "touch ${_FLAG_FILE}; sleep 100000" > "${_XORG_SCRIPT}"
if [ "x$1" != "x" ]; then
export LD_LIBRARY_PATH="${1}/lib"
export LIBGL_DRIVERS_PATH="${1}/lib/dri"
fi
xinit /bin/sh "${_XORG_SCRIPT}" -- /usr/bin/Xorg vt45 -noreset -s 0 -dpms -logfile /Xorg.0.log &
# Wait for xorg to be ready for connections.
for _ in 1 2 3 4 5; do
if [ -e "${_FLAG_FILE}" ]; then
break
fi
sleep 5
done

View File

@@ -1,7 +0,0 @@
[core]
backend=headless-backend.so
xwayland=true
idle-time=0
[xwayland]
path=/usr/local/bin/Xwayland

View File

@@ -1,7 +0,0 @@
variables:
CONDITIONAL_BUILD_ANDROID_CTS_TAG: b018634d732f438027ec58c0383615e7
CONDITIONAL_BUILD_ANGLE_TAG: f62910e55be46e37cc867d037e4a8121
CONDITIONAL_BUILD_CROSVM_TAG: 0f59350b1052bdbb28b65a832b494377
CONDITIONAL_BUILD_FLUSTER_TAG: 3bc3afd7468e106afcbfd569a85f34f9
CONDITIONAL_BUILD_PIGLIT_TAG: 827b708ab7309721395ea28cec512968
CONDITIONAL_BUILD_VKD3D_PROTON_TAG: 82cadf35246e64a8228bf759c9c19e5b

View File

@@ -1,70 +0,0 @@
# Build the CI Alpine docker images.
#
# MESA_IMAGE_TAG is the tag of the docker image used by later stage jobs. If the
# image doesn't exist yet, the container stage job generates it.
#
# In order to generate a new image, one should generally change the tag.
# While removing the image from the registry would also work, that's not
# recommended except for ephemeral images during development: Replacing
# an image after a significant amount of time might pull in newer
# versions of gcc/clang or other packages, which might break the build
# with older commits using the same tag.
#
# After merging a change resulting in generating a new image to the
# main repository, it's recommended to remove the image from the source
# repository's container registry, so that the image from the main
# repository's registry will be used there as well.
# Alpine based x86_64 build image
.alpine/x86_64_build-base:
extends:
- .fdo.container-build@alpine
- .container
variables:
FDO_DISTRIBUTION_VERSION: "3.21"
FDO_BASE_IMAGE: alpine:$FDO_DISTRIBUTION_VERSION # since cbuild ignores it
# Alpine based x86_64 build image
alpine/x86_64_build:
extends:
- .alpine/x86_64_build-base
variables:
MESA_IMAGE_TAG: &alpine-x86_64_build ${ALPINE_X86_64_BUILD_TAG}
LLVM_VERSION: &alpine-llvm_version 19
rules:
- !reference [.container, rules]
# Note: the next three lines must remain in that order, so that the rules
# in `linkcheck-docs` catch nightly pipelines before the rules in `deploy-docs`
# exclude them.
- !reference [linkcheck-docs, rules]
- !reference [deploy-docs, rules]
- !reference [test-docs, rules]
.use-alpine/x86_64_build:
tags:
- $FDO_RUNNER_JOB_PRIORITY_TAG_X86_64
extends:
- .set-image
variables:
MESA_IMAGE_PATH: "alpine/x86_64_build"
MESA_IMAGE_TAG: *alpine-x86_64_build
LLVM_VERSION: *alpine-llvm_version
needs:
- job: sanity
optional: true
- job: alpine/x86_64_build
optional: true
# Alpine based x86_64 image for LAVA SSH dockerized client
alpine/x86_64_lava_ssh_client:
extends:
- .alpine/x86_64_build-base
variables:
MESA_IMAGE_TAG: &alpine-x86_64_lava_ssh_client ${ALPINE_X86_64_LAVA_SSH_TAG}
# Alpine based x86_64 image to run LAVA jobs
alpine/x86_64_lava-trigger:
extends:
- .alpine/x86_64_build-base
variables:
MESA_IMAGE_TAG: &alpine-x86_64_lava_trigger ${ALPINE_X86_64_LAVA_TRIGGER_TAG}

View File

@@ -6,11 +6,10 @@
# ALPINE_X86_64_BUILD_TAG
set -e
. .gitlab-ci/setup-test-env.sh
set -o xtrace
export LLVM_VERSION="${LLVM_VERSION:=16}"
EPHEMERAL=(
)
@@ -19,29 +18,32 @@ DEPS=(
bash
bison
ccache
"clang${LLVM_VERSION}-dev"
clang-dev
clang16-dev
cmake
clang-dev
coreutils
curl
elfutils-dev
expat-dev
flex
g++
gcc
gettext
g++
git
gettext
glslang
graphviz
libclc-dev
libdrm-dev
libpciaccess-dev
libva-dev
linux-headers
"llvm${LLVM_VERSION}-dev"
"llvm${LLVM_VERSION}-static"
llvm16-static
llvm16-dev
meson
mold
musl-dev
expat-dev
elfutils-dev
libdrm-dev
libselinux-dev
libva-dev
libpciaccess-dev
zlib-dev
python3-dev
py3-clang
py3-cparser
py3-mako
@@ -49,35 +51,26 @@ DEPS=(
py3-pip
py3-ply
py3-yaml
python3-dev
samurai
spirv-llvm-translator-dev
vulkan-headers
spirv-tools-dev
util-macros
vulkan-headers
zlib-dev
wayland-dev
wayland-protocols
)
apk --no-cache add "${DEPS[@]}" "${EPHEMERAL[@]}"
pip3 install --break-system-packages sphinx===8.2.3 hawkmoth===0.19.0
pip3 install --break-system-packages sphinx===5.1.1 hawkmoth===0.16.0
. .gitlab-ci/container/build-llvm-spirv.sh
. .gitlab-ci/container/build-libclc.sh
. .gitlab-ci/container/container_pre_build.sh
. .gitlab-ci/container/install-meson.sh
EXTRA_MESON_ARGS='--prefix=/usr' \
. .gitlab-ci/container/build-wayland.sh
############### Uninstall the build software
# too many vendor binaries, just keep the ones we need
find /usr/share/clc \
\( -type f -o -type l \) \
! -name 'spirv-mesa3d-.spv' \
! -name 'spirv64-mesa3d-.spv' \
-delete
apk del "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh

View File

@@ -1,50 +0,0 @@
#!/usr/bin/env bash
# This is a ci-templates build script to generate a container for triggering LAVA jobs.
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# ALPINE_X86_64_LAVA_TRIGGER_TAG
# shellcheck disable=SC1091
set -e
. .gitlab-ci/setup-test-env.sh
set -o xtrace
uncollapsed_section_start alpine_setup "Base Alpine system setup"
# Ephemeral packages (installed for this script and removed again at the end)
EPHEMERAL=(
git
py3-pip
)
# We only need these very basic packages to run the LAVA jobs
DEPS=(
curl
python3
tar
zstd
)
apk --no-cache add "${DEPS[@]}" "${EPHEMERAL[@]}"
pip3 install --break-system-packages -r bin/ci/requirements-lava.txt
cp -Rp .gitlab-ci/lava /
cp -Rp .gitlab-ci/bin/*_logger.py /lava
cp -Rp .gitlab-ci/common/init-stage1.sh /lava
. .gitlab-ci/container/container_pre_build.sh
############### Uninstall the build software
uncollapsed_section_switch alpine_cleanup "Cleaning up base Alpine system"
apk del "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh
section_end alpine_cleanup

View File

@@ -4,9 +4,6 @@
# shellcheck disable=SC1091
set -e
. .gitlab-ci/setup-test-env.sh
set -o xtrace
EPHEMERAL=(

View File

@@ -1,5 +1,4 @@
#!/usr/bin/env bash
# shellcheck disable=SC2154 # arch is assigned in previous scripts
set -e
set -o xtrace
@@ -7,12 +6,11 @@ set -o xtrace
# Fetch the arm-built rootfs image and unpack it in our x86_64 container (saves
# network transfer, disk usage, and runtime on test jobs)
S3_PATH="https://${S3_HOST}/${S3_KERNEL_BUCKET}"
if curl -L --retry 3 -f --retry-delay 10 -s --head "${S3_PATH}/${FDO_UPSTREAM_REPO}/${LAVA_DISTRIBUTION_TAG}/lava-rootfs.tar.zst"; then
ARTIFACTS_URL="${S3_PATH}/${FDO_UPSTREAM_REPO}/${LAVA_DISTRIBUTION_TAG}"
# shellcheck disable=SC2154 # arch is assigned in previous scripts
if curl -X HEAD -s "${ARTIFACTS_PREFIX}/${FDO_UPSTREAM_REPO}/${ARTIFACTS_SUFFIX}/${arch}/done"; then
ARTIFACTS_URL="${ARTIFACTS_PREFIX}/${FDO_UPSTREAM_REPO}/${ARTIFACTS_SUFFIX}/${arch}"
else
ARTIFACTS_URL="${S3_PATH}/${CI_PROJECT_PATH}/${LAVA_DISTRIBUTION_TAG}"
ARTIFACTS_URL="${ARTIFACTS_PREFIX}/${CI_PROJECT_PATH}/${ARTIFACTS_SUFFIX}/${arch}"
fi
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
@@ -26,8 +24,39 @@ if [[ $arch == "arm64" ]]; then
pushd /baremetal-files
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}"/arm64/Image
-O "${KERNEL_IMAGE_BASE}"/arm64/Image
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}"/arm64/Image.gz
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}"/arm64/cheza-kernel
DEVICE_TREES=""
DEVICE_TREES="$DEVICE_TREES apq8016-sbc.dtb"
DEVICE_TREES="$DEVICE_TREES apq8096-db820c.dtb"
DEVICE_TREES="$DEVICE_TREES tegra210-p3450-0000.dtb"
DEVICE_TREES="$DEVICE_TREES imx8mq-nitrogen.dtb"
for DTB in $DEVICE_TREES; do
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}/arm64/$DTB"
done
popd
elif [[ $arch == "armhf" ]]; then
mkdir -p /baremetal-files
pushd /baremetal-files
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}"/armhf/zImage
DEVICE_TREES=""
DEVICE_TREES="$DEVICE_TREES imx6q-cubox-i.dtb"
DEVICE_TREES="$DEVICE_TREES tegra124-jetson-tk1.dtb"
for DTB in $DEVICE_TREES; do
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}/armhf/$DTB"
done
popd
fi

View File

@@ -1,67 +0,0 @@
#!/usr/bin/env bash
#
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_ANDROID_TAG
# This script runs in a container to:
# 1. Download the Android CTS (Compatibility Test Suite)
# 2. Filter out unneeded test modules
# 3. Compress and upload the stripped version to S3
# Note: The 'build-' prefix in the filename is only to make it compatible
# with the bin/ci/update_tag.py script.
set -euo pipefail
section_start android-cts "Downloading Android CTS"
# xtrace is getting lost with the section switching
set -x
# Do a very early check to make sure the tag is correct without the need of
# setting up the environment variables locally
ci_tag_build_time_check "ANDROID_CTS_TAG"
# List of all CTS modules we might want to run in CI
# This should be the union of all modules required by our CI jobs
# Specific modules to run are selected via the ${GPU_VERSION}-android-cts-include.txt files
ANDROID_CTS_MODULES=(
"CtsDeqpTestCases"
"CtsGraphicsTestCases"
"CtsNativeHardwareTestCases"
"CtsSkQPTestCases"
)
ANDROID_CTS_VERSION="${ANDROID_VERSION}_r1"
ANDROID_CTS_DEVICE_ARCH="x86"
# Download the stripped CTS from S3, because the CTS download from Google can take 20 minutes
CTS_FILENAME="android-cts-${ANDROID_CTS_VERSION}-linux_x86-${ANDROID_CTS_DEVICE_ARCH}"
ARTIFACT_PATH="${DATA_STORAGE_PATH}/android-cts/${ANDROID_CTS_TAG}.tar.zst"
if FOUND_ARTIFACT_URL="$(find_s3_project_artifact "${ARTIFACT_PATH}")"; then
echo "Found Android CTS at: ${FOUND_ARTIFACT_URL}"
curl-with-retry "${FOUND_ARTIFACT_URL}" | tar --zstd -x -C /
else
echo "No cached CTS found, downloading from Google and uploading to S3..."
curl-with-retry --remote-name "https://dl.google.com/dl/android/cts/${CTS_FILENAME}.zip"
# Disable zipbomb detection, because the CTS zip file is too big
# At least locally, it is detected as a zipbomb
UNZIP_DISABLE_ZIPBOMB_DETECTION=true \
unzip -q -d / "${CTS_FILENAME}.zip"
rm "${CTS_FILENAME}.zip"
# Keep only the interesting tests to save space
# shellcheck disable=SC2086 # we want word splitting
ANDROID_CTS_MODULES_KEEP_EXPRESSION=$(printf "%s|" "${ANDROID_CTS_MODULES[@]}" | sed -e 's/|$//g')
find /android-cts/testcases/ -mindepth 1 -type d | grep -v -E "$ANDROID_CTS_MODULES_KEEP_EXPRESSION" | xargs rm -rf
# Using zstd compressed tarball instead of zip, the compression ratio is almost the same, but
# the extraction is faster, also LAVA overlays don't support zip compression.
tar --zstd -cf "${CTS_FILENAME}.tar.zst" /android-cts
ci-fairy s3cp --token-file "${S3_JWT_FILE}" "${CTS_FILENAME}.tar.zst" \
"https://${S3_BASE_PATH}/${CI_PROJECT_PATH}/${ARTIFACT_PATH}"
fi
section_end android-cts

View File

@@ -1,121 +0,0 @@
#!/usr/bin/env bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml and .gitlab-ci/container/gitlab-ci.yml tags:
# DEBIAN_BUILD_TAG
# ANDROID_LLVM_ARTIFACT_NAME
set -exu
# If CI vars are not set, assign an empty value, this prevents -u to fail
: "${CI:=}"
: "${CI_PROJECT_PATH:=}"
# Early check for required env variables, relies on `set -u`
: "$ANDROID_NDK_VERSION"
: "$ANDROID_SDK_VERSION"
: "$ANDROID_LLVM_VERSION"
: "$ANDROID_LLVM_ARTIFACT_NAME"
: "$S3_JWT_FILE"
: "$S3_HOST"
: "$S3_ANDROID_BUCKET"
# Check for CI if the auth file used later on is non-empty
if [ -n "$CI" ] && [ ! -s "${S3_JWT_FILE}" ]; then
echo "Error: ${S3_JWT_FILE} is empty." 1>&2
exit 1
fi
if curl -s -o /dev/null -I -L -f --retry 4 --retry-delay 15 "https://${S3_HOST}/${S3_ANDROID_BUCKET}/${CI_PROJECT_PATH}/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"; then
echo "Artifact ${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst already exists, skip re-building."
# Download prebuilt LLVM libraries for Android when they have not changed,
# to save some time
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-o "/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst" "https://${S3_HOST}/${S3_ANDROID_BUCKET}/${CI_PROJECT_PATH}/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"
tar -C / --zstd -xf "/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"
rm "/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"
exit
fi
# Ephemeral packages (installed for this script and removed again at the end)
EPHEMERAL=(
unzip
)
apt-get update
apt-get install -y --no-install-recommends --no-remove "${EPHEMERAL[@]}"
ANDROID_NDK="android-ndk-${ANDROID_NDK_VERSION}"
ANDROID_NDK_ROOT="/${ANDROID_NDK}"
if [ ! -d "$ANDROID_NDK_ROOT" ];
then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-o "${ANDROID_NDK}.zip" \
"https://dl.google.com/android/repository/${ANDROID_NDK}-linux.zip"
unzip -d / "${ANDROID_NDK}.zip" "$ANDROID_NDK/source.properties" "$ANDROID_NDK/build/cmake/*" "$ANDROID_NDK/toolchains/llvm/*"
rm "${ANDROID_NDK}.zip"
fi
if [ ! -d "/llvm-project" ];
then
mkdir "/llvm-project"
pushd "/llvm-project"
git init
git remote add origin https://github.com/llvm/llvm-project.git
git fetch --depth 1 origin "$ANDROID_LLVM_VERSION"
git checkout FETCH_HEAD
popd
fi
pushd "/llvm-project"
# Checkout again the intended version, just in case of a pre-existing full clone
git checkout "$ANDROID_LLVM_VERSION" || true
LLVM_INSTALL_PREFIX="/${ANDROID_LLVM_ARTIFACT_NAME}"
rm -rf build/
cmake -GNinja -S llvm -B build/ \
-DCMAKE_TOOLCHAIN_FILE="${ANDROID_NDK_ROOT}/build/cmake/android.toolchain.cmake" \
-DANDROID_ABI=x86_64 \
-DANDROID_PLATFORM="android-${ANDROID_SDK_VERSION}" \
-DANDROID_NDK="${ANDROID_NDK_ROOT}" \
-DCMAKE_ANDROID_ARCH_ABI=x86_64 \
-DCMAKE_ANDROID_NDK="${ANDROID_NDK_ROOT}" \
-DCMAKE_BUILD_TYPE=MinSizeRel \
-DCMAKE_SYSTEM_NAME=Android \
-DCMAKE_SYSTEM_VERSION="${ANDROID_SDK_VERSION}" \
-DCMAKE_INSTALL_PREFIX="${LLVM_INSTALL_PREFIX}" \
-DCMAKE_CXX_FLAGS="-march=x86-64 --target=x86_64-linux-android${ANDROID_SDK_VERSION} -fno-rtti" \
-DLLVM_HOST_TRIPLE="x86_64-linux-android${ANDROID_SDK_VERSION}" \
-DLLVM_TARGETS_TO_BUILD=X86 \
-DLLVM_BUILD_LLVM_DYLIB=OFF \
-DLLVM_BUILD_TESTS=OFF \
-DLLVM_BUILD_EXAMPLES=OFF \
-DLLVM_BUILD_DOCS=OFF \
-DLLVM_BUILD_TOOLS=OFF \
-DLLVM_ENABLE_RTTI=OFF \
-DLLVM_BUILD_INSTRUMENTED_COVERAGE=OFF \
-DLLVM_NATIVE_TOOL_DIR="${ANDROID_NDK_ROOT}/toolchains/llvm/prebuilt/linux-x86_64/bin" \
-DLLVM_ENABLE_PIC=False \
-DLLVM_OPTIMIZED_TABLEGEN=ON
ninja "-j${FDO_CI_CONCURRENT:-4}" -C build/ install
popd
rm -rf /llvm-project
tar --zstd -cf "${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst" "$LLVM_INSTALL_PREFIX"
# If run in CI upload the tar.zst archive to S3 to avoid rebuilding it if the
# version does not change, and delete it.
# The file is not deleted for non-CI because it can be useful in local runs.
if [ -n "$CI" ]; then
ci-fairy s3cp --token-file "${S3_JWT_FILE}" "${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst" "https://${S3_HOST}/${S3_ANDROID_BUCKET}/${CI_PROJECT_PATH}/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"
rm "${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"
fi
apt-get purge -y "${EPHEMERAL[@]}"

.gitlab-ci/container/build-angle.sh Executable file → Normal file
View File

@@ -2,170 +2,61 @@
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_ANDROID_TAG
# DEBIAN_TEST_GL_TAG
# KERNEL_ROOTFS_TAG
set -uex
set -ex
section_start angle "Building ANGLE"
# Do a very early check to make sure the tag is correct without the need of
# setting up the environment variables locally
ci_tag_build_time_check "ANGLE_TAG"
ANGLE_REV="c39f4a5c553cbee39af8f866aa82a9ffa4f02f5b"
DEPOT_REV="5982a1aeb33dc36382ed8c62eddf52a6135e7dd3"
# Set ANGLE_ARCH based on DEBIAN_ARCH if it hasn't been explicitly defined
if [[ -z "${ANGLE_ARCH:-}" ]]; then
case "$DEBIAN_ARCH" in
amd64) ANGLE_ARCH=x64;;
arm64) ANGLE_ARCH=arm64;;
esac
fi
ANGLE_REV="1409a05a81e3ccb279142433a2b987bc330f555b"
# DEPOT tools
mkdir /depot-tools
pushd /depot-tools
git init
git remote add origin https://chromium.googlesource.com/chromium/tools/depot_tools.git
git fetch --depth 1 origin "$DEPOT_REV"
git checkout FETCH_HEAD
export PATH=/depot-tools:$PATH
git clone --depth 1 https://chromium.googlesource.com/chromium/tools/depot_tools.git
PWD=$(pwd)
export PATH=$PWD/depot_tools:$PATH
export DEPOT_TOOLS_UPDATE=0
popd
mkdir /angle-build
mkdir /angle
pushd /angle-build
git init
git remote add origin https://chromium.googlesource.com/angle/angle.git
git fetch --depth 1 origin "$ANGLE_REV"
git checkout FETCH_HEAD
echo "$ANGLE_REV" > /angle/version
GCLIENT_CUSTOM_VARS=()
GCLIENT_CUSTOM_VARS+=('--custom-var=angle_enable_cl=False')
GCLIENT_CUSTOM_VARS+=('--custom-var=angle_enable_cl_testing=False')
GCLIENT_CUSTOM_VARS+=('--custom-var=angle_enable_vulkan_validation_layers=False')
GCLIENT_CUSTOM_VARS+=('--custom-var=angle_enable_wgpu=False')
GCLIENT_CUSTOM_VARS+=('--custom-var=build_angle_deqp_tests=False')
GCLIENT_CUSTOM_VARS+=('--custom-var=build_angle_perftests=False')
if [[ "$ANGLE_TARGET" == "android" ]]; then
GCLIENT_CUSTOM_VARS+=('--custom-var=checkout_android=True')
fi
# source preparation
gclient config --name REPLACE-WITH-A-DOT --unmanaged \
"${GCLIENT_CUSTOM_VARS[@]}" \
https://chromium.googlesource.com/angle/angle.git
sed -e 's/REPLACE-WITH-A-DOT/./;' -i .gclient
sed -e 's|"custom_deps" : {|"custom_deps" : {\
"third_party/clspv/src": None,\
"third_party/dawn": None,\
"third_party/glmark2/src": None,\
"third_party/libjpeg_turbo": None,\
"third_party/llvm/src": None,\
"third_party/OpenCL-CTS/src": None,\
"third_party/SwiftShader": None,\
"third_party/VK-GL-CTS/src": None,\
"third_party/vulkan-validation-layers/src": None,|' -i .gclient
gclient sync --no-history -j"${FDO_CI_CONCURRENT:-4}"
python3 scripts/bootstrap.py
mkdir -p build/config
gclient sync
sed -i "/catapult/d" testing/BUILD.gn
mkdir -p out/Release
cat > out/Release/args.gn <<EOF
angle_assert_always_on=false
angle_build_all=false
angle_build_tests=false
angle_enable_cl=false
angle_enable_cl_testing=false
angle_enable_gl=false
angle_enable_gl_desktop_backend=false
angle_enable_null=false
angle_enable_swiftshader=false
angle_enable_trace=false
angle_enable_wgpu=false
angle_enable_vulkan=true
angle_enable_vulkan_api_dump_layer=false
angle_enable_vulkan_validation_layers=false
angle_has_frame_capture=false
angle_has_histograms=false
angle_has_rapidjson=false
angle_use_custom_libvulkan=false
build_angle_deqp_tests=false
echo '
is_debug = false
angle_enable_swiftshader = false
angle_enable_null = false
angle_enable_gl = false
angle_enable_vulkan = true
angle_has_histograms = false
build_angle_trace_perf_tests = false
build_angle_deqp_tests = false
angle_use_custom_libvulkan = false
dcheck_always_on=true
enable_expensive_dchecks=false
is_component_build=false
is_debug=false
target_cpu="${ANGLE_ARCH}"
target_os="${ANGLE_TARGET}"
treat_warnings_as_errors=false
EOF
case "$ANGLE_TARGET" in
linux) cat >> out/Release/args.gn <<EOF
angle_egl_extension="so.1"
angle_glesv2_extension="so.2"
use_custom_libcxx=false
custom_toolchain="//build/toolchain/linux/unbundle:default"
host_toolchain="//build/toolchain/linux/unbundle:default"
EOF
;;
android) cat >> out/Release/args.gn <<EOF
android_ndk_version="${ANDROID_NDK_VERSION}"
android64_ndk_api_level=${ANDROID_SDK_VERSION}
android32_ndk_api_level=${ANDROID_SDK_VERSION}
use_custom_libcxx=true
EOF
;;
*) echo "Unexpected ANGLE_TARGET value: $ANGLE_TARGET"; exit 1;;
esac
' > out/Release/args.gn
if [[ "$DEBIAN_ARCH" = "arm64" ]]; then
# We need to get an AArch64 sysroot - because ANGLE isn't great friends with
# system dependencies - but use the default system toolchain, because the
# 'arm64' toolchain you get from Google infrastructure is a cross-compiler
# from x86-64
build/linux/sysroot_scripts/install-sysroot.py --arch=arm64
fi
(
# The 'unbundled' toolchain configuration requires clang, and it also needs to
# be configured via environment variables.
export CC="clang-${LLVM_VERSION}"
export HOST_CC="$CC"
export CFLAGS="-Wno-unknown-warning-option"
export HOST_CFLAGS="$CFLAGS"
export CXX="clang++-${LLVM_VERSION}"
export HOST_CXX="$CXX"
export CXXFLAGS="-Wno-unknown-warning-option"
export HOST_CXXFLAGS="$CXXFLAGS"
export AR="ar"
export HOST_AR="$AR"
export NM="nm"
export HOST_NM="$NM"
export LDFLAGS="-fuse-ld=lld-${LLVM_VERSION} -lpthread -ldl"
export HOST_LDFLAGS="$LDFLAGS"
gn gen out/Release
# depot_tools overrides ninja with a version that doesn't work. We want
# ninja with FDO_CI_CONCURRENT anyway.
/usr/local/bin/ninja -C out/Release/
gn gen out/Release
# depot_tools overrides ninja with a version that doesn't work. We want
# ninja with FDO_CI_CONCURRENT anyway.
/usr/local/bin/ninja -C out/Release/ libEGL libGLESv1_CM libGLESv2
)
rm -f out/Release/libvulkan.so* out/Release/*.so*.TOC
cp out/Release/lib*.so* /angle/
if [[ "$ANGLE_TARGET" == "linux" ]]; then
ln -s libEGL.so.1 /angle/libEGL.so
ln -s libGLESv2.so.2 /angle/libGLESv2.so
fi
mkdir /angle
cp out/Release/lib*GL*.so /angle/
ln -s libEGL.so /angle/libEGL.so.1
ln -s libGLESv2.so /angle/libGLESv2.so.2
rm -rf out
popd
rm -rf /depot-tools
rm -rf /angle-build
section_end angle
rm -rf ./depot_tools

View File

@@ -3,19 +3,19 @@
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BASE_TAG
# DEBIAN_TEST_GL_TAG
# DEBIAN_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
set -uex
set -ex
uncollapsed_section_start apitrace "Building apitrace"
APITRACE_VERSION="b6102d10960c9f43b1b473903fc67937dd19fb98"
APITRACE_VERSION="0a6506433e1f9f7b69757b4e5730326970c4321a"
git clone https://github.com/apitrace/apitrace.git --single-branch --no-checkout /apitrace
pushd /apitrace
git checkout "$APITRACE_VERSION"
git submodule update --init --depth 1 --recursive
cmake -S . -B _build -G Ninja -DCMAKE_BUILD_TYPE=Release -DENABLE_GUI=False -DENABLE_WAFFLE=on ${EXTRA_CMAKE_ARGS:-}
cmake -S . -B _build -G Ninja -DCMAKE_BUILD_TYPE=Release -DENABLE_GUI=False -DENABLE_WAFFLE=on $EXTRA_CMAKE_ARGS
cmake --build _build --parallel --target apitrace eglretrace
mkdir build
cp _build/apitrace build
@@ -23,5 +23,3 @@ cp _build/eglretrace build
${STRIP_CMD:-strip} build/*
find . -not -path './build' -not -path './build/*' -delete
popd
section_end apitrace


@@ -1,14 +1,7 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
# FEDORA_X86_64_BUILD_TAG
uncollapsed_section_start bindgen "Building bindgen"
BINDGEN_VER=0.71.1
BINDGEN_VER=0.65.1
CBINDGEN_VER=0.26.0
# bindgen
@@ -25,4 +18,3 @@ RUSTFLAGS='-L native=/usr/local/lib' cargo install \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local
section_end bindgen


@@ -1,45 +1,35 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BASE_TAG
# Do a very early check to make sure the tag is correct without the need of
# setting up the environment variables locally
ci_tag_build_time_check "CROSVM_TAG"
set -uex
section_start crosvm "Building crosvm"
set -ex
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
CROSVM_VERSION=4a6b4316155742fbfa1be7087c2ee578cfee884d
CROSVM_VERSION=1641c55bcc922588e24de73e9cca7b5e4005bd6d
git clone --single-branch -b main --no-checkout https://chromium.googlesource.com/crosvm/crosvm /platform/crosvm
pushd /platform/crosvm
git checkout "$CROSVM_VERSION"
git submodule update --init
VIRGLRENDERER_VERSION=06d43ce974b664f9dc521b706a0ad7f91dbf2866
VIRGLRENDERER_VERSION=d9c002fac153b834a2c17731f2b85c36e333e102
rm -rf third_party/virglrenderer
git clone --single-branch -b main --no-checkout https://gitlab.freedesktop.org/virgl/virglrenderer.git third_party/virglrenderer
pushd third_party/virglrenderer
git checkout "$VIRGLRENDERER_VERSION"
meson setup build/ -D libdir=lib -D render-server-worker=process -D venus=true ${EXTRA_MESON_ARGS:-}
meson setup build/ -D libdir=lib -D render-server-worker=process -D venus=true $EXTRA_MESON_ARGS
meson install -C build
popd
rm rust-toolchain
cargo update -p pkg-config@0.3.26 --precise 0.3.27
RUSTFLAGS='-L native=/usr/local/lib' cargo install \
bindgen-cli \
--locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local \
--version 0.71.1 \
${EXTRA_CARGO_ARGS:-}
--version 0.65.1 \
$EXTRA_CARGO_ARGS
CROSVM_USE_SYSTEM_MINIGBM=1 CROSVM_USE_SYSTEM_VIRGLRENDERER=1 RUSTFLAGS='-L native=/usr/local/lib' cargo install \
-j ${FDO_CI_CONCURRENT:-4} \
@@ -47,10 +37,8 @@ CROSVM_USE_SYSTEM_MINIGBM=1 CROSVM_USE_SYSTEM_VIRGLRENDERER=1 RUSTFLAGS='-L nati
--features 'default-no-sandbox gpu x virgl_renderer' \
--path . \
--root /usr/local \
${EXTRA_CARGO_ARGS:-}
$EXTRA_CARGO_ARGS
popd
rm -rf /platform/crosvm
section_end crosvm


@@ -4,73 +4,46 @@
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_ANDROID_TAG
# DEBIAN_BASE_TAG
# DEBIAN_TEST_GL_TAG
# DEBIAN_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
set -uex
set -ex
section_start deqp-runner "Building deqp-runner"
DEQP_RUNNER_VERSION=0.20.3
commits_to_backport=(
)
patch_files=(
)
DEQP_RUNNER_VERSION=0.18.0
DEQP_RUNNER_GIT_URL="${DEQP_RUNNER_GIT_URL:-https://gitlab.freedesktop.org/mesa/deqp-runner.git}"
if [ -n "${DEQP_RUNNER_GIT_TAG:-}" ]; then
DEQP_RUNNER_GIT_CHECKOUT="$DEQP_RUNNER_GIT_TAG"
elif [ -n "${DEQP_RUNNER_GIT_REV:-}" ]; then
DEQP_RUNNER_GIT_CHECKOUT="$DEQP_RUNNER_GIT_REV"
if [ -n "${DEQP_RUNNER_GIT_TAG}${DEQP_RUNNER_GIT_REV}" ]; then
# Build and install from source
DEQP_RUNNER_CARGO_ARGS="--git $DEQP_RUNNER_GIT_URL"
if [ -n "${DEQP_RUNNER_GIT_TAG}" ]; then
DEQP_RUNNER_CARGO_ARGS="--tag ${DEQP_RUNNER_GIT_TAG} ${DEQP_RUNNER_CARGO_ARGS}"
DEQP_RUNNER_GIT_CHECKOUT="$DEQP_RUNNER_GIT_TAG"
else
DEQP_RUNNER_CARGO_ARGS="--rev ${DEQP_RUNNER_GIT_REV} ${DEQP_RUNNER_CARGO_ARGS}"
DEQP_RUNNER_GIT_CHECKOUT="$DEQP_RUNNER_GIT_REV"
fi
DEQP_RUNNER_CARGO_ARGS="${DEQP_RUNNER_CARGO_ARGS} ${EXTRA_CARGO_ARGS}"
else
# Install from package registry
DEQP_RUNNER_CARGO_ARGS="--version ${DEQP_RUNNER_VERSION} ${EXTRA_CARGO_ARGS} -- deqp-runner"
DEQP_RUNNER_GIT_CHECKOUT="v$DEQP_RUNNER_VERSION"
fi
BASE_PWD=$PWD
mkdir -p /deqp-runner
pushd /deqp-runner
mkdir deqp-runner-git
pushd deqp-runner-git
git init
git remote add origin "$DEQP_RUNNER_GIT_URL"
git fetch --depth 1 origin "$DEQP_RUNNER_GIT_CHECKOUT"
git checkout FETCH_HEAD
for commit in "${commits_to_backport[@]}"
do
PATCH_URL="https://gitlab.freedesktop.org/mesa/deqp-runner/-/commit/$commit.patch"
echo "Backport deqp-runner commit $commit from $PATCH_URL"
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 $PATCH_URL | git am
done
for patch in "${patch_files[@]}"
do
echo "Apply patch to deqp-runner from $patch"
git am "$BASE_PWD/.gitlab-ci/container/patches/$patch"
done
if [ -z "${RUST_TARGET:-}" ]; then
RUST_TARGET=""
fi
if [[ "$RUST_TARGET" != *-android ]]; then
# When the CC variable (/usr/lib/ccache/gcc) is set, the rust compiler uses
# it when cross-compiling for arm32, and the build then fails for zsys-sys.
# So unset CC when cross-compiling for arm32.
SAVEDCC=${CC:-}
if [ "$RUST_TARGET" = "armv7-unknown-linux-gnueabihf" ]; then
unset CC
fi
cargo install --locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local \
${EXTRA_CARGO_ARGS:-} \
--path .
CC=$SAVEDCC
${DEQP_RUNNER_CARGO_ARGS}
else
mkdir -p /deqp-runner
pushd /deqp-runner
git clone --branch "$DEQP_RUNNER_GIT_CHECKOUT" --depth 1 "$DEQP_RUNNER_GIT_URL" deqp-runner-git
pushd deqp-runner-git
cargo install --locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local --version 2.10.0 \
@@ -84,16 +57,14 @@ else
cargo uninstall --locked \
--root /usr/local \
cargo-ndk
fi
popd
rm -rf deqp-runner-git
popd
popd
rm -rf deqp-runner-git
popd
fi
# remove unused test runners to shrink images for the Mesa CI build (not kernel,
# which chooses its own deqp branch)
if [ -z "${DEQP_RUNNER_GIT_TAG:-}${DEQP_RUNNER_GIT_REV:-}" ]; then
if [ -z "${DEQP_RUNNER_GIT_TAG}${DEQP_RUNNER_GIT_REV}" ]; then
rm -f /usr/local/bin/igt-runner
fi
section_end deqp-runner
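
To illustrate the source-selection logic above, a sketch of what DEQP_RUNNER_CARGO_ARGS resolves to in each case (the URL is the default from this script, the tag/rev values are hypothetical, and EXTRA_CARGO_ARGS is omitted):

    # DEQP_RUNNER_GIT_TAG=v0.20.3 set:
    #   --tag v0.20.3 --git https://gitlab.freedesktop.org/mesa/deqp-runner.git
    # DEQP_RUNNER_GIT_REV=1234abcd set:
    #   --rev 1234abcd --git https://gitlab.freedesktop.org/mesa/deqp-runner.git
    # neither set (install from the package registry):
    #   --version ${DEQP_RUNNER_VERSION} -- deqp-runner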

.gitlab-ci/container/build-deqp.sh Executable file → Normal file

@@ -6,27 +6,19 @@
# DEBIAN_TEST_ANDROID_TAG
# DEBIAN_TEST_GL_TAG
# DEBIAN_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
set -ue -o pipefail
# shellcheck disable=SC2153
deqp_api=${DEQP_API,,}
section_start deqp-$deqp_api "Building dEQP $DEQP_API"
set -x
set -ex -o pipefail
# See `deqp_build_targets` below for which release is used to produce which
# binary. Unless this comment has bitrotten:
# - the commit from the main branch produces the deqp tools and `deqp-vk`,
# - the VK release produces `deqp-vk`,
# - the GL release produces `glcts`, and
# - the GLES release produces `deqp-gles*` and `deqp-egl`
DEQP_MAIN_COMMIT=9cc8e038994c32534b3d2c4ba88c1dc49ef53228
DEQP_VK_VERSION=1.4.1.1
DEQP_GL_VERSION=4.6.6.0
DEQP_GLES_VERSION=3.2.12.0
DEQP_VK_VERSION=1.3.8.2
DEQP_GL_VERSION=4.6.4.1
DEQP_GLES_VERSION=3.2.10.1
# Patches to VulkanCTS may come from commits in their repo (listed in
# cts_commits_to_backport) or patch files stored in our repo (in the patch
@@ -34,40 +26,44 @@ DEQP_GLES_VERSION=3.2.12.0
# Both list variables would have comments explaining the reasons behind the
# patches.
# shellcheck disable=SC2034
main_cts_commits_to_backport=(
# If you find yourself wanting to add something in here, consider whether
# bumping DEQP_MAIN_COMMIT is not a better solution :)
)
# shellcheck disable=SC2034
main_cts_patch_files=(
)
# shellcheck disable=SC2034
vk_cts_commits_to_backport=(
# Stop querying device address from unbound buffers
046343f46f7d39d53b47842d7fd8ed3279528046
# Fix more ASAN errors due to missing virtual destructors
dd40bcfef1b4035ea55480b6fd4d884447120768
# Remove "unused shader stages" tests
7dac86c6bbd15dec91d7d9a98cd6dd57c11092a7
# Emit point size from "many indirect draws" test
771e56d1c4d03e073ddb7f1200ad6d57e0a0c979
)
# shellcheck disable=SC2034
vk_cts_patch_files=(
)
if [ "${DEQP_TARGET}" = 'android' ]; then
vk_cts_patch_files+=(
build-deqp-vk_Allow-running-on-Android-from-the-command-line.patch
build-deqp-vk_Android-prints-to-stdout-instead-of-logcat.patch
)
fi
# shellcheck disable=SC2034
gl_cts_commits_to_backport=(
# Add testing for GL_PRIMITIVES_SUBMITTED_ARB query.
e075ce73ddc5973aa46a5236c715bb281c9501fa
)
# shellcheck disable=SC2034
gl_cts_patch_files=(
build-deqp-gl_Build-Don-t-build-Vulkan-utilities-for-GL-builds.patch
build-deqp-gl_Revert-Add-missing-context-deletion.patch
build-deqp-gl_Revert-Fix-issues-with-GLX-reset-notification-strate.patch
build-deqp-gl_Revert-Fix-spurious-failures-when-using-a-config-wit.patch
)
if [ "${DEQP_TARGET}" = 'android' ]; then
gl_cts_patch_files+=(
build-deqp-gl_Allow-running-on-Android-from-the-command-line.patch
build-deqp-gl_Android-prints-to-stdout-instead-of-logcat.patch
)
fi
# shellcheck disable=SC2034
# GLES builds also EGL
gles_cts_commits_to_backport=(
@@ -75,12 +71,15 @@ gles_cts_commits_to_backport=(
# shellcheck disable=SC2034
gles_cts_patch_files=(
build-deqp-gl_Build-Don-t-build-Vulkan-utilities-for-GL-builds.patch
build-deqp-gl_Revert-Add-missing-context-deletion.patch
build-deqp-gl_Revert-Fix-issues-with-GLX-reset-notification-strate.patch
build-deqp-gl_Revert-Fix-spurious-failures-when-using-a-config-wit.patch
)
if [ "${DEQP_TARGET}" = 'android' ]; then
gles_cts_patch_files+=(
build-deqp-gles_Allow-running-on-Android-from-the-command-line.patch
build-deqp-gles_Android-prints-to-stdout-instead-of-logcat.patch
)
fi
### Careful editing anything below this line
@@ -90,150 +89,87 @@ git config --global user.name "Mesa CI"
# shellcheck disable=SC2153
case "${DEQP_API}" in
tools) DEQP_VERSION="$DEQP_MAIN_COMMIT";;
*-main) DEQP_VERSION="$DEQP_MAIN_COMMIT";;
VK) DEQP_VERSION="vulkan-cts-$DEQP_VK_VERSION";;
GL) DEQP_VERSION="opengl-cts-$DEQP_GL_VERSION";;
GLES) DEQP_VERSION="opengl-es-cts-$DEQP_GLES_VERSION";;
*) echo "Unexpected DEQP_API value: $DEQP_API"; exit 1;;
esac
mkdir -p /VK-GL-CTS
git clone \
https://github.com/KhronosGroup/VK-GL-CTS.git \
-b $DEQP_VERSION \
--depth 1 \
/VK-GL-CTS
pushd /VK-GL-CTS
[ -e .git ] || {
git init
git remote add origin https://github.com/KhronosGroup/VK-GL-CTS.git
}
git fetch --depth 1 origin "$DEQP_VERSION"
git checkout FETCH_HEAD
DEQP_COMMIT=$(git rev-parse FETCH_HEAD)
if [ "$DEQP_VERSION" = "$DEQP_MAIN_COMMIT" ]; then
merge_base="$(curl-with-retry -s https://api.github.com/repos/KhronosGroup/VK-GL-CTS/compare/main...$DEQP_MAIN_COMMIT | jq -r .merge_base_commit.sha)"
if [[ "$merge_base" != "$DEQP_MAIN_COMMIT" ]]; then
echo "VK-GL-CTS commit $DEQP_MAIN_COMMIT is not a commit from the main branch."
exit 1
fi
fi
mkdir -p /deqp
mkdir -p /deqp-$deqp_api
# shellcheck disable=SC2153
deqp_api=${DEQP_API,,}
if [ "$DEQP_VERSION" = "$DEQP_MAIN_COMMIT" ]; then
prefix="main"
else
prefix="$deqp_api"
fi
cts_commits_to_backport="${prefix}_cts_commits_to_backport[@]"
cts_commits_to_backport="${deqp_api}_cts_commits_to_backport[@]"
for commit in "${!cts_commits_to_backport}"
do
PATCH_URL="https://github.com/KhronosGroup/VK-GL-CTS/commit/$commit.patch"
echo "Apply patch to ${DEQP_API} CTS from $PATCH_URL"
curl-with-retry $PATCH_URL | GIT_COMMITTER_DATE=$(LC_TIME=C date -d@0) git am -
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 $PATCH_URL | \
git am -
done
cts_patch_files="${prefix}_cts_patch_files[@]"
cts_patch_files="${deqp_api}_cts_patch_files[@]"
for patch in "${!cts_patch_files}"
do
echo "Apply patch to ${DEQP_API} CTS from $patch"
GIT_COMMITTER_DATE=$(LC_TIME=C date -d@0) git am < $OLDPWD/.gitlab-ci/container/patches/$patch
git am < $OLDPWD/.gitlab-ci/container/patches/$patch
done
{
if [ "$DEQP_VERSION" = "$DEQP_MAIN_COMMIT" ]; then
commit_desc=$(git show --no-patch --format='commit %h on %ci' --abbrev=10 "$DEQP_COMMIT")
echo "dEQP $DEQP_API at $commit_desc"
else
echo "dEQP $DEQP_API version $DEQP_VERSION"
fi
if [ "$(git rev-parse HEAD)" != "$DEQP_COMMIT" ]; then
echo "The following local patches are applied on top:"
git log --reverse --oneline "$DEQP_COMMIT".. --format='- %s'
fi
} > /deqp-$deqp_api/deqp-$deqp_api-version
echo "dEQP base version $DEQP_VERSION"
echo "The following local patches are applied on top:"
git log --reverse --oneline $DEQP_VERSION.. --format=%s | sed 's/^/- /'
} > /deqp/version-$deqp_api
# --insecure is due to SSL cert failures hitting sourceforge for zlib and
# libpng (sigh). The archives get their checksums checked anyway, and git
# always goes through ssh or https.
python3 external/fetch_sources.py --insecure
case "${DEQP_API}" in
VK-main)
# Video tests rely on external files
python3 external/fetch_video_decode_samples.py
python3 external/fetch_video_encode_samples.py
;;
esac
if [[ "$DEQP_API" = tools ]]; then
# Save the testlog stylesheets:
cp doc/testlog-stylesheet/testlog.{css,xsl} /deqp-$deqp_api
fi
# Save the testlog stylesheets:
cp doc/testlog-stylesheet/testlog.{css,xsl} /deqp
popd
deqp_build_targets=()
case "${DEQP_API}" in
VK|VK-main)
deqp_build_targets+=(deqp-vk)
;;
GL)
deqp_build_targets+=(glcts)
;;
GLES)
deqp_build_targets+=(deqp-gles{2,3,31})
deqp_build_targets+=(glcts) # needed for gles*-khr tests
# deqp-egl also comes from this build, but it is handled separately below.
;;
tools)
deqp_build_targets+=(testlog-to-xml)
deqp_build_targets+=(testlog-to-csv)
deqp_build_targets+=(testlog-to-junit)
;;
esac
OLD_IFS="$IFS"
IFS=";"
CMAKE_SBT="${deqp_build_targets[*]}"
IFS="$OLD_IFS"
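# Illustrative note (not part of the original script): with, for example,
#   deqp_build_targets=(deqp-gles2 deqp-gles3 deqp-gles31 glcts)
# the IFS=";" join above yields
#   CMAKE_SBT="deqp-gles2;deqp-gles3;deqp-gles31;glcts"
# i.e. the semicolon-separated list expected by -DSELECTED_BUILD_TARGETS below.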
pushd /deqp-$deqp_api
pushd /deqp
if [ "${DEQP_API}" = 'GLES' ]; then
if [ "${DEQP_TARGET}" = 'android' ]; then
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=android \
-DCMAKE_BUILD_TYPE=Release \
-DSELECTED_BUILD_TARGETS="deqp-egl" \
${EXTRA_CMAKE_ARGS:-}
ninja modules/egl/deqp-egl
mv modules/egl/deqp-egl{,-android}
$EXTRA_CMAKE_ARGS
mold --run ninja modules/egl/deqp-egl
mv /deqp/modules/egl/deqp-egl /deqp/modules/egl/deqp-egl-android
else
# When including EGL/X11 testing, do that build first and save off its
# deqp-egl binary.
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=x11_egl_glx \
-DCMAKE_BUILD_TYPE=Release \
-DSELECTED_BUILD_TARGETS="deqp-egl" \
${EXTRA_CMAKE_ARGS:-}
ninja modules/egl/deqp-egl
mv modules/egl/deqp-egl{,-x11}
$EXTRA_CMAKE_ARGS
mold --run ninja modules/egl/deqp-egl
mv /deqp/modules/egl/deqp-egl /deqp/modules/egl/deqp-egl-x11
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=wayland \
-DCMAKE_BUILD_TYPE=Release \
-DSELECTED_BUILD_TARGETS="deqp-egl" \
${EXTRA_CMAKE_ARGS:-}
ninja modules/egl/deqp-egl
mv modules/egl/deqp-egl{,-wayland}
$EXTRA_CMAKE_ARGS
mold --run ninja modules/egl/deqp-egl
mv /deqp/modules/egl/deqp-egl /deqp/modules/egl/deqp-egl-wayland
fi
fi
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=${DEQP_TARGET} \
-DCMAKE_BUILD_TYPE=Release \
-DSELECTED_BUILD_TARGETS="${CMAKE_SBT}" \
${EXTRA_CMAKE_ARGS:-}
$EXTRA_CMAKE_ARGS
# Make sure `default` doesn't silently stop detecting one of the platforms we care about
if [ "${DEQP_TARGET}" = 'default' ]; then
@@ -242,73 +178,86 @@ if [ "${DEQP_TARGET}" = 'default' ]; then
grep -q DEQP_SUPPORT_XCB=1 build.ninja
fi
ninja "${deqp_build_targets[@]}"
deqp_build_targets=()
case "${DEQP_API}" in
VK)
deqp_build_targets+=(deqp-vk)
;;
GL)
deqp_build_targets+=(glcts)
;;
GLES)
deqp_build_targets+=(deqp-gles{2,3,31})
# deqp-egl also comes from this build, but it is handled separately above.
;;
esac
if [ "${DEQP_TARGET}" != 'android' ]; then
deqp_build_targets+=(testlog-to-xml)
deqp_build_targets+=(testlog-to-csv)
deqp_build_targets+=(testlog-to-junit)
fi
if [ "$DEQP_API" != tools ]; then
mold --run ninja "${deqp_build_targets[@]}"
if [ "${DEQP_TARGET}" != 'android' ]; then
# Copy out the mustpass lists we want.
mkdir -p mustpass
mkdir -p /deqp/mustpass
if [ "${DEQP_API}" = 'VK' ] || [ "${DEQP_API}" = 'VK-main' ]; then
if [ "${DEQP_API}" = 'VK' ]; then
for mustpass in $(< /VK-GL-CTS/external/vulkancts/mustpass/main/vk-default.txt) ; do
cat /VK-GL-CTS/external/vulkancts/mustpass/main/$mustpass \
>> mustpass/vk-main.txt
>> /deqp/mustpass/vk-main.txt
done
fi
if [ "${DEQP_API}" = 'GL' ]; then
cp \
/VK-GL-CTS/external/openglcts/data/gl_cts/data/mustpass/gl/khronos_mustpass/main/*-main.txt \
mustpass/
/VK-GL-CTS/external/openglcts/data/mustpass/gl/khronos_mustpass/4.6.1.x/*-main.txt \
/deqp/mustpass/
cp \
/VK-GL-CTS/external/openglcts/data/gl_cts/data/mustpass/gl/khronos_mustpass_single/main/*-single.txt \
mustpass/
/VK-GL-CTS/external/openglcts/data/mustpass/gl/khronos_mustpass_single/4.6.1.x/*-single.txt \
/deqp/mustpass/
fi
if [ "${DEQP_API}" = 'GLES' ]; then
cp \
/VK-GL-CTS/external/openglcts/data/gl_cts/data/mustpass/gles/aosp_mustpass/main/*.txt \
mustpass/
/VK-GL-CTS/external/openglcts/data/mustpass/gles/aosp_mustpass/3.2.6.x/*.txt \
/deqp/mustpass/
cp \
/VK-GL-CTS/external/openglcts/data/gl_cts/data/mustpass/egl/aosp_mustpass/main/egl-main.txt \
mustpass/
/VK-GL-CTS/external/openglcts/data/mustpass/egl/aosp_mustpass/3.2.6.x/egl-main.txt \
/deqp/mustpass/
cp \
/VK-GL-CTS/external/openglcts/data/gl_cts/data/mustpass/gles/khronos_mustpass/main/*-main.txt \
mustpass/
/VK-GL-CTS/external/openglcts/data/mustpass/gles/khronos_mustpass/3.2.6.x/*-main.txt \
/deqp/mustpass/
fi
# Compress the caselists, since Vulkan's in particular are gigantic; higher
# compression levels provide no real measurable benefit.
zstd -f -1 --rm mustpass/*.txt
fi
if [ "$DEQP_API" = tools ]; then
# Save *some* executor utils, but otherwise strip things down
# to reduce deqp build size:
mv executor/testlog-to-* .
rm -rf executor
mkdir /deqp/executor.save
cp /deqp/executor/testlog-to-* /deqp/executor.save
rm -rf /deqp/executor
mv /deqp/executor.save /deqp/executor
fi
# Remove other mustpass files, since we saved off the ones we wanted to convenient locations above.
rm -rf assets/**/mustpass/
rm -rf external/**/mustpass/
rm -rf external/vulkancts/modules/vulkan/vk-main*
rm -rf external/vulkancts/modules/vulkan/vk-default
rm -rf /deqp/external/**/mustpass/
rm -rf /deqp/external/vulkancts/modules/vulkan/vk-main*
rm -rf /deqp/external/vulkancts/modules/vulkan/vk-default
rm -rf external/openglcts/modules/cts-runner
rm -rf modules/internal
rm -rf execserver
rm -rf framework
rm -rf /deqp/external/openglcts/modules/cts-runner
rm -rf /deqp/modules/internal
rm -rf /deqp/execserver
rm -rf /deqp/framework
find . -depth \( -iname '*cmake*' -o -name '*ninja*' -o -name '*.o' -o -name '*.a' \) -exec rm -rf {} \;
if [ "${DEQP_API}" = 'VK' ] || [ "${DEQP_API}" = 'VK-main' ]; then
if [ "${DEQP_API}" = 'VK' ]; then
${STRIP_CMD:-strip} external/vulkancts/modules/vulkan/deqp-vk
fi
if [ "${DEQP_API}" = 'GL' ] || [ "${DEQP_API}" = 'GLES' ]; then
if [ "${DEQP_API}" = 'GL' ]; then
${STRIP_CMD:-strip} external/openglcts/modules/glcts
fi
if [ "${DEQP_API}" = 'GLES' ]; then
${STRIP_CMD:-strip} modules/*/deqp-*
fi
du -sh ./*
rm -rf /VK-GL-CTS
popd
section_end deqp-$deqp_api
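
For context, a minimal sketch of how this script is driven; the variable names are the ones consumed above, the values are only examples, and everything else (EXTRA_CMAKE_ARGS, STRIP_CMD, ...) falls back to its default:

    DEQP_API=GL DEQP_TARGET=default bash .gitlab-ci/container/build-deqp.sh   # produces glcts
    DEQP_API=VK DEQP_TARGET=default bash .gitlab-ci/container/build-deqp.sh   # produces deqp-vk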


@@ -5,15 +5,11 @@
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
set -uex
set -ex
uncollapsed_section_start directx-headers "Building directx-headers"
git clone https://github.com/microsoft/DirectX-Headers -b v1.614.1 --depth 1
git clone https://github.com/microsoft/DirectX-Headers -b v1.613.1 --depth 1
pushd DirectX-Headers
meson setup build --backend=ninja --buildtype=release -Dbuild-test=false ${EXTRA_MESON_ARGS:-}
meson setup build --backend=ninja --buildtype=release -Dbuild-test=false $EXTRA_MESON_ARGS
meson install -C build
popd
rm -rf DirectX-Headers
section_end directx-headers


@@ -1,52 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC1091 # The relative paths in this file only become valid at runtime.
# shellcheck disable=SC2034 # Variables are used in scripts called from here
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_VIDEO_TAG
# Install fluster in /fluster.
set -uex
section_start fluster "Installing Fluster"
# Do a very early check to make sure the tag is correct without the need of
# setting up the environment variables locally
ci_tag_build_time_check "FLUSTER_TAG"
FLUSTER_REVISION="e997402978f62428fffc8e5a4a709690d9ca9bc5"
git clone https://github.com/fluendo/fluster.git --single-branch --no-checkout
pushd fluster || exit
git checkout "${FLUSTER_REVISION}"
popd || exit
ARTIFACT_PATH="${DATA_STORAGE_PATH}/fluster/${FLUSTER_TAG}/vectors.tar.zst"
if FOUND_ARTIFACT_URL="$(find_s3_project_artifact "${ARTIFACT_PATH}")"; then
echo "Found fluster vectors at: ${FOUND_ARTIFACT_URL}"
mv fluster/ /
curl-with-retry "${FOUND_ARTIFACT_URL}" | tar --zstd -x -C /
else
echo "No cached vectors found, rebuilding..."
# Download the necessary vectors: H264, H265 and VP9
# When updating FLUSTER_REVISION, make sure to update the vectors if necessary or
# fluster-runner will report Missing results.
fluster/fluster.py download -j ${FDO_CI_CONCURRENT:-4} \
JVT-AVC_V1 JVT-FR-EXT JVT-MVC JVT-SVC_V1 \
JCT-VC-3D-HEVC JCT-VC-HEVC_V1 JCT-VC-MV-HEVC JCT-VC-RExt JCT-VC-SCC JCT-VC-SHVC \
VP9-TEST-VECTORS-HIGH VP9-TEST-VECTORS
# Build fluster vectors archive and upload it
tar --zstd -cf "vectors.tar.zst" fluster/resources/
ci-fairy s3cp --token-file "${S3_JWT_FILE}" "vectors.tar.zst" \
"https://${S3_BASE_PATH}/${CI_PROJECT_PATH}/${ARTIFACT_PATH}"
mv fluster/ /
fi
section_end fluster


@@ -3,11 +3,10 @@
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
set -ex
uncollapsed_section_start fossilize "Building fossilize"
git clone https://github.com/ValveSoftware/Fossilize.git
cd Fossilize
git checkout b43ee42bbd5631ea21fe9a2dee4190d5d875c327
@@ -18,5 +17,3 @@ cmake -S .. -B . -G Ninja -DCMAKE_BUILD_TYPE=Release
ninja -C . install
cd ../..
rm -rf Fossilize
section_end fossilize


@@ -2,8 +2,6 @@
set -ex
uncollapsed_section_start gfxreconstruct "Building gfxreconstruct"
GFXRECONSTRUCT_VERSION=761837794a1e57f918a85af7000b12e531b178ae
git clone https://github.com/LunarG/gfxreconstruct.git \
@@ -19,5 +17,3 @@ cmake -S . -B _build -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX:
cmake --build _build --parallel --target tools/{replay,info}/install/strip
find . -not -path './build' -not -path './build/*' -delete
popd
section_end gfxreconstruct


@@ -0,0 +1,16 @@
#!/bin/bash
set -ex
PARALLEL_DEQP_RUNNER_VERSION=fe557794b5dadd8dbf0eae403296625e03bda18a
git clone https://gitlab.freedesktop.org/mesa/parallel-deqp-runner --single-branch -b master --no-checkout /parallel-deqp-runner
pushd /parallel-deqp-runner
git checkout "$PARALLEL_DEQP_RUNNER_VERSION"
meson . _build
ninja -C _build hang-detection
mkdir -p build/bin
install _build/hang-detection build/bin
strip build/bin/*
find . -not -path './build' -not -path './build/*' -delete
popd


@@ -3,30 +3,21 @@
set -ex
uncollapsed_section_start kdl "Building kdl"
KDL_REVISION="5056f71b100a68b72b285c6fc845a66a2ed25985"
KDL_REVISION="cbbe5fd54505fd03ee34f35bfd16794f0c30074f"
KDL_CHECKOUT_DIR="/tmp/ci-kdl.git"
mkdir -p ${KDL_CHECKOUT_DIR}
pushd ${KDL_CHECKOUT_DIR}
mkdir ci-kdl.git
pushd ci-kdl.git
git init
git remote add origin https://gitlab.freedesktop.org/gfx-ci/ci-kdl.git
git fetch --depth 1 origin ${KDL_REVISION}
git checkout FETCH_HEAD
popd
# Run venv in a subshell, so we don't accidentally leak the venv state into
# calling scripts
(
python3 -m venv /ci-kdl
source /ci-kdl/bin/activate &&
pushd ${KDL_CHECKOUT_DIR} &&
pip install -r requirements.txt &&
pip install . &&
popd
)
python3 -m venv ci-kdl.venv
source ci-kdl.venv/bin/activate
pushd ci-kdl.git
pip install -r requirements.txt
pip install .
popd
rm -rf ${KDL_CHECKOUT_DIR}
section_end kdl
rm -rf ci-kdl.git


@@ -1,8 +1,6 @@
#!/usr/bin/env bash
set -uex
uncollapsed_section_start libclc "Building libclc"
set -ex
export LLVM_CONFIG="llvm-config-${LLVM_VERSION:?"llvm unset!"}"
LLVM_TAG="llvmorg-15.0.7"
@@ -31,5 +29,3 @@ ln -s /usr/share/clc/spirv-mesa3d-.spv /usr/lib/clc/
du -sh ./*
rm -rf /libclc /llvm-project
section_end libclc


@@ -3,9 +3,7 @@
# from https://gitlab.freedesktop.org/gfx-ci/ci-deb-repo - see PKG_REPO_REV)
# shellcheck disable=SC2086 # we want word splitting
set -uex
uncollapsed_section_start libdrm "Building libdrm"
set -ex
export LIBDRM_VERSION=libdrm-2.4.122
@@ -13,9 +11,7 @@ curl -L -O --retry 4 -f --retry-all-errors --retry-delay 60 \
https://dri.freedesktop.org/libdrm/"$LIBDRM_VERSION".tar.xz
tar -xvf "$LIBDRM_VERSION".tar.xz && rm "$LIBDRM_VERSION".tar.xz
cd "$LIBDRM_VERSION"
meson setup build -D vc4=disabled -D freedreno=disabled -D etnaviv=disabled ${EXTRA_MESON_ARGS:-}
meson setup build -D vc4=disabled -D freedreno=disabled -D etnaviv=disabled $EXTRA_MESON_ARGS
meson install -C build
cd ..
rm -rf "$LIBDRM_VERSION"
section_end libdrm


@@ -2,13 +2,7 @@
set -ex
uncollapsed_section_start llvm-spirv "Building LLVM-SPIRV-Translator"
if [ "${LLVM_VERSION:?llvm version not set}" -ge 18 ]; then
VER="${LLVM_VERSION}.1.0"
else
VER="${LLVM_VERSION}.0.0"
fi
VER="${LLVM_VERSION:?llvm not set}.0.0"
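# Illustrative (not in the original script): with LLVM_VERSION=18 the
# conditional variant above picks VER=18.1.0, while LLVM_VERSION=15 yields
# VER=15.0.0 in both variants.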
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "https://github.com/KhronosGroup/SPIRV-LLVM-Translator/archive/refs/tags/v${VER}.tar.gz"
@@ -26,5 +20,3 @@ popd
du -sh "SPIRV-LLVM-Translator-${VER}"
rm -rf "SPIRV-LLVM-Translator-${VER}"
section_end llvm-spirv


@@ -8,8 +8,7 @@ set -ex
# DEBIAN_BASE_TAG
# DEBIAN_BUILD_TAG
# FEDORA_X86_64_BUILD_TAG
uncollapsed_section_start mold "Building mold"
# KERNEL_ROOTFS_TAG
MOLD_VERSION="2.32.0"
@@ -17,15 +16,8 @@ git clone -b v"$MOLD_VERSION" --single-branch --depth 1 https://github.com/rui31
pushd mold
cmake -DCMAKE_BUILD_TYPE=Release -D BUILD_TESTING=OFF -D MOLD_LTO=ON
cmake --build . --parallel "${FDO_CI_CONCURRENT:-4}"
cmake --install . --strip
# Always use mold from now on
find /usr/bin \( -name '*-ld' -o -name 'ld' \) \
-exec ln -sf /usr/local/bin/ld.mold {} \; \
-exec ls -l {} +
cmake --build . --parallel
cmake --install .
popd
rm -rf mold
section_end mold


@@ -0,0 +1,25 @@
#!/usr/bin/env bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_GL_TAG
set -ex -o pipefail
### Careful editing anything below this line
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
git clone https://github.com/axeldavy/Xnine.git /Xnine
mkdir /Xnine/build
pushd /Xnine/build
git checkout c64753d224c08006bcdcfa7880ada826f27164b1
cmake .. -DBUILD_TESTS=1 -DWITH_DRI3=1 -DD3DADAPTER9_LOCATION=/install/lib/d3d/d3dadapter9.so
make
mkdir -p /NineTests/
mv NineTests/NineTests /NineTests/
popd
rm -rf /Xnine


@@ -1,30 +1,24 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -uex
section_start piglit "Building piglit"
set -ex
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_GL_TAG
# DEBIAN_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
# Do a very early check to make sure the tag is correct without the need of
# setting up the environment variables locally
ci_tag_build_time_check "PIGLIT_TAG"
REV="a0a27e528f643dfeb785350a1213bfff09681950"
REV="582f5490a124c27c26d3a452fee03a8c85fa9a5c"
git clone https://gitlab.freedesktop.org/mesa/piglit.git --single-branch --no-checkout /piglit
pushd /piglit
git checkout "$REV"
patch -p1 <$OLDPWD/.gitlab-ci/piglit/disable-vs_in.diff
cmake -S . -B . -G Ninja -DCMAKE_BUILD_TYPE=Release $PIGLIT_OPTS ${EXTRA_CMAKE_ARGS:-}
ninja ${PIGLIT_BUILD_TARGETS:-}
find . -depth \( -name .git -o -name '*ninja*' -o -iname '*cmake*' -o -name '*.[chao]' \) \
! -name 'include_test.h' -exec rm -rf {} \;
cmake -S . -B . -G Ninja -DCMAKE_BUILD_TYPE=Release $PIGLIT_OPTS $EXTRA_CMAKE_ARGS
ninja $PIGLIT_BUILD_TARGETS
find . -depth \( -name .git -o -name '*ninja*' -o -iname '*cmake*' -o -name '*.[chao]' \) -exec rm -rf {} \;
rm -rf target_api
if [ "${PIGLIT_BUILD_TARGETS:-}" = "piglit_replayer" ]; then
if [ "$PIGLIT_BUILD_TARGETS" = "piglit_replayer" ]; then
find . -depth \
! -regex "^\.$" \
! -regex "^\.\/piglit.*" \
@@ -37,5 +31,3 @@ if [ "${PIGLIT_BUILD_TARGETS:-}" = "piglit_replayer" ]; then
-exec rm -rf {} \; 2>/dev/null
fi
popd
section_end piglit
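
A usage sketch for the replayer-only build handled above; the script path follows the naming convention of the neighbouring files and is assumed, and PIGLIT_OPTS is deliberately left empty:

    PIGLIT_OPTS="" PIGLIT_BUILD_TARGETS="piglit_replayer" \
        bash .gitlab-ci/container/build-piglit.sh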


@@ -5,10 +5,17 @@
set -ex
section_start rust "Building Rust toolchain"
# cargo (and rustup) wants to store stuff in $HOME/.cargo, and binaries in
# $HOME/.cargo/bin. Make bin a link to a public bin directory so the commands
# are just available to all build jobs.
mkdir -p "$HOME"/.cargo
ln -s /usr/local/bin "$HOME"/.cargo/bin
# Pick a specific snapshot from rustup so the compiler doesn't drift on us.
RUST_VERSION=1.81.0-2024-09-05
# Rusticl requires at least Rust 1.66.0 and NAK requires 1.73.0
#
# Also, pick a specific snapshot from rustup so the compiler doesn't drift on
# us.
RUST_VERSION=1.73.0-2023-10-05
# For rust in Mesa, we use rustup to install. This lets us pick an arbitrary
# version of the compiler, rather than whatever the container's Debian comes
@@ -19,20 +26,14 @@ curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
--profile minimal \
-y
# Make rustup tools available in the PATH environment variable
# shellcheck disable=SC1091
. "$HOME/.cargo/env"
rustup component add clippy rustfmt
# Set up a config script for cross compiling -- cargo needs your system cc for
# linking in cross builds, but doesn't know what you want to use for system cc.
cat > "$HOME/.cargo/config" <<EOF
cat > /root/.cargo/config <<EOF
[target.armv7-unknown-linux-gnueabihf]
linker = "arm-linux-gnueabihf-gcc"
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
EOF
section_end rust
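
With the config written above, cross builds pick the matching linker automatically; a hypothetical example (the target has to be added through rustup first):

    rustup target add aarch64-unknown-linux-gnu
    cargo build --release --target aarch64-unknown-linux-gnu   # links via aarch64-linux-gnu-gcc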


@@ -6,13 +6,9 @@
set -ex
uncollapsed_section_start shader-db "Building shader-db"
pushd /usr/local
git clone https://gitlab.freedesktop.org/mesa/shader-db.git --depth 1
rm -rf shader-db/.git
cd shader-db
make
popd
section_end shader-db


@@ -6,35 +6,15 @@
#
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_GL_TAG
set -uex
uncollapsed_section_start skqp "Building SkQP"
# KERNEL_ROOTFS_TAG
SKQP_BRANCH=android-cts-12.1_r5
SCRIPT_DIR="$(pwd)/.gitlab-ci/container"
SKQP_PATCH_DIR="${SCRIPT_DIR}/patches"
BASE_ARGS_GN_FILE="${SCRIPT_DIR}/build-skqp_base.gn"
case "$DEBIAN_ARCH" in
amd64)
SKQP_ARCH=x64
;;
armhf)
SKQP_ARCH=arm
;;
arm64)
SKQP_ARCH=arm64
;;
esac
SKIA_DIR=${SKIA_DIR:-$(mktemp -d)}
SKQP_OUT_DIR=${SKIA_DIR}/out/${SKQP_ARCH}
SKQP_INSTALL_DIR=${SKQP_INSTALL_DIR:-/skqp}
SKQP_ASSETS_DIR="${SKQP_INSTALL_DIR}/assets"
SKQP_BINARIES=(skqp list_gpu_unit_tests list_gms)
# hack for skqp: see the clang symlinks below
pushd /usr/bin/
ln -s ../lib/llvm-15/bin/clang clang
ln -s ../lib/llvm-15/bin/clang++ clang++
popd
create_gn_args() {
# gn can be configured to cross-compile skia and its tools
@@ -58,6 +38,19 @@ download_skia_source() {
git clone --branch "${SKQP_BRANCH}" --depth 1 "${SKQP_REPO}" "${SKIA_DIR}"
}
set -ex
SCRIPT_DIR=$(realpath "$(dirname "$0")")
SKQP_PATCH_DIR="${SCRIPT_DIR}/patches"
BASE_ARGS_GN_FILE="${SCRIPT_DIR}/build-skqp_base.gn"
SKQP_ARCH=${SKQP_ARCH:-x64}
SKIA_DIR=${SKIA_DIR:-$(mktemp -d)}
SKQP_OUT_DIR=${SKIA_DIR}/out/${SKQP_ARCH}
SKQP_INSTALL_DIR=${SKQP_INSTALL_DIR:-/skqp}
SKQP_ASSETS_DIR="${SKQP_INSTALL_DIR}/assets"
SKQP_BINARIES=(skqp list_gpu_unit_tests list_gms)
download_skia_source
pushd "${SKIA_DIR}"
@@ -66,16 +59,10 @@ pushd "${SKIA_DIR}"
cat "${SKQP_PATCH_DIR}"/build-skqp_*.patch |
patch -p1
# hack for skqp: see the clang symlinks below
pushd /usr/bin/
ln -s "../lib/llvm-${LLVM_VERSION}/bin/clang" clang
ln -s "../lib/llvm-${LLVM_VERSION}/bin/clang++" clang++
popd
# Fetch some build tools needed to build skia/skqp.
# Basically, it clones repositories at the commit SHAs listed in
# ${SKIA_DIR}/DEPS.
python3 tools/git-sync-deps
python tools/git-sync-deps
mkdir -p "${SKQP_OUT_DIR}"
mkdir -p "${SKQP_INSTALL_DIR}"
@@ -100,5 +87,3 @@ popd
rm -Rf "${SKIA_DIR}"
set +ex
section_end skqp


@@ -34,11 +34,6 @@ extra_cflags_cc = [
"-Wno-unused-but-set-variable",
"-Wno-sizeof-array-div",
"-Wno-string-concatenation",
"-Wno-unsafe-buffer-usage",
"-Wno-switch-default",
"-Wno-cast-function-type-strict",
"-Wno-format",
"-Wno-enum-constexpr-conversion",
]
cc_wrapper = "ccache"


@@ -1,13 +1,10 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_VIDEO_TAG
# KERNEL_ROOTFS_TAG
set -uex
section_start va-tools "Building va-tools"
set -ex
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
@@ -20,11 +17,9 @@ git clone \
pushd /va-utils
# Too old libva in Debian 11. TODO: when this PR gets in, refer to the patch.
curl --fail -L https://github.com/intel/libva-utils/pull/329.patch | git am
curl -L https://github.com/intel/libva-utils/pull/329.patch | git am
meson setup build -D tests=true -Dprefix=/va ${EXTRA_MESON_ARGS:-}
meson setup build -D tests=true -Dprefix=/va $EXTRA_MESON_ARGS
meson install -C build
popd
rm -rf /va-utils
section_end va-tools


@@ -3,65 +3,41 @@
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
set -ex
section_start vkd3d-proton "Building vkd3d-proton"
# Do a very early check to make sure the tag is correct without the need of
# setting up the environment variables locally
ci_tag_build_time_check "VKD3D_PROTON_TAG"
VKD3D_PROTON_COMMIT="6be781076617cb2cb3038710618acc3b57a674db"
VKD3D_PROTON_COMMIT="3d46c082906c77544385d10801e4c0184f0385d9"
VKD3D_PROTON_DST_DIR="/vkd3d-proton-tests"
VKD3D_PROTON_SRC_DIR="/vkd3d-proton-src"
VKD3D_PROTON_BUILD_DIR="/vkd3d-proton-build"
VKD3D_PROTON_WINE_DIR="/vkd3d-proton-wine64"
VKD3D_PROTON_S3_ARTIFACT="vkd3d-proton.tar.zst"
VKD3D_PROTON_BUILD_DIR="/vkd3d-proton-$VKD3D_PROTON_VERSION"
if [ ! -d "$VKD3D_PROTON_WINE_DIR" ]; then
echo "Fatal: Directory '$VKD3D_PROTON_WINE_DIR' does not exist. Aborting."
exit 1
fi
function build_arch {
local arch="$1"
shift
meson "$@" \
-Denable_tests=true \
--buildtype release \
--prefix "$VKD3D_PROTON_DST_DIR" \
--strip \
--bindir "x${arch}" \
--libdir "x${arch}" \
"$VKD3D_PROTON_BUILD_DIR/build.${arch}"
ninja -C "$VKD3D_PROTON_BUILD_DIR/build.${arch}" install
install -D -m755 -t "${VKD3D_PROTON_DST_DIR}/x${arch}/bin" "$VKD3D_PROTON_BUILD_DIR/build.${arch}/tests/d3d12"
}
git clone https://github.com/HansKristian-Work/vkd3d-proton.git --single-branch -b master --no-checkout "$VKD3D_PROTON_SRC_DIR"
pushd "$VKD3D_PROTON_SRC_DIR"
git checkout "$VKD3D_PROTON_COMMIT"
git submodule update --init --recursive
git submodule update --recursive
meson setup \
-D enable_tests=true \
--buildtype release \
--prefix "$VKD3D_PROTON_DST_DIR" \
--strip \
--libdir "lib" \
"$VKD3D_PROTON_BUILD_DIR/build"
ninja -C "$VKD3D_PROTON_BUILD_DIR/build" install
install -m755 -t "${VKD3D_PROTON_DST_DIR}/" "$VKD3D_PROTON_BUILD_DIR/build/tests/d3d12"
mkdir "$VKD3D_PROTON_DST_DIR/tests"
cp \
"tests/test-runner.sh" \
"tests/d3d12_tests.h" \
"$VKD3D_PROTON_DST_DIR/tests/"
build_arch 64
build_arch 86
popd
# Archive and upload vkd3d-proton for use as a LAVA overlay, if the archive doesn't exist yet
ARTIFACT_PATH="${DATA_STORAGE_PATH}/vkd3d-proton/${VKD3D_PROTON_TAG}/${CI_JOB_NAME}/${VKD3D_PROTON_S3_ARTIFACT}"
if FOUND_ARTIFACT_URL="$(find_s3_project_artifact "${ARTIFACT_PATH}")"; then
echo "Found vkd3d-proton at: ${FOUND_ARTIFACT_URL}, skipping upload"
else
echo "Uploaded vkd3d-proton not found, reuploading..."
tar --zstd -cf "$VKD3D_PROTON_S3_ARTIFACT" -C / "${VKD3D_PROTON_DST_DIR#/}" "${VKD3D_PROTON_WINE_DIR#/}"
ci-fairy s3cp --token-file "${S3_JWT_FILE}" "$VKD3D_PROTON_S3_ARTIFACT" \
"https://${S3_BASE_PATH}/${CI_PROJECT_PATH}/${ARTIFACT_PATH}"
rm "$VKD3D_PROTON_S3_ARTIFACT"
fi
rm -rf "$VKD3D_PROTON_BUILD_DIR"
rm -rf "$VKD3D_PROTON_SRC_DIR"
section_end vkd3d-proton


@@ -3,22 +3,16 @@
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_GL_TAG
# KERNEL_ROOTFS_TAG
set -uex
set -ex
uncollapsed_section_start vulkan-validation "Building Vulkan validation layers"
VALIDATION_TAG="snapshot-2025wk15"
VALIDATION_TAG="v1.3.289"
git clone -b "$VALIDATION_TAG" --single-branch --depth 1 https://github.com/KhronosGroup/Vulkan-ValidationLayers.git
pushd Vulkan-ValidationLayers
# we don't need to build SPIRV-Tools tools
sed -i scripts/known_good.json -e 's/SPIRV_SKIP_EXECUTABLES=OFF/SPIRV_SKIP_EXECUTABLES=ON/'
python3 scripts/update_deps.py --dir external --config release --generator Ninja --optional tests
python3 scripts/update_deps.py --dir external --config debug
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr -DBUILD_TESTS=OFF -DBUILD_WERROR=OFF -C external/helper.cmake -S . -B build
ninja -C build -j"${FDO_CI_CONCURRENT:-4}"
cmake --install build --strip
ninja -C build install
popd
rm -rf Vulkan-ValidationLayers
section_end vulkan-validation


@@ -1,34 +1,24 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
set -uex
uncollapsed_section_start wayland "Building Wayland"
set -ex
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# ALPINE_X86_64_BUILD_TAG
# DEBIAN_BASE_TAG
# DEBIAN_BUILD_TAG
# DEBIAN_TEST_ANDROID_TAG
# DEBIAN_TEST_GL_TAG
# DEBIAN_TEST_VK_TAG
# FEDORA_X86_64_BUILD_TAG
# KERNEL_ROOTFS_TAG
export LIBWAYLAND_VERSION="1.24.0"
export WAYLAND_PROTOCOLS_VERSION="1.41"
export LIBWAYLAND_VERSION="1.21.0"
export WAYLAND_PROTOCOLS_VERSION="1.34"
git clone https://gitlab.freedesktop.org/wayland/wayland
cd wayland
git checkout "$LIBWAYLAND_VERSION"
# Build the scanner first in case we're a cross build
# Note the lack of EXTRA_MESON_ARGS here. This is always a native build
meson setup -Dtests=false -Ddocumentation=false -Ddtd_validation=false -Dlibraries=false -Dscanner=true _scanner
meson install -C _scanner
# Now build libwayland using the given scanner
meson setup -Ddocumentation=false -Ddtd_validation=false -Dlibraries=true -Dscanner=false _build ${EXTRA_MESON_ARGS:-}
meson setup -Ddocumentation=false -Ddtd_validation=false -Dlibraries=true _build $EXTRA_MESON_ARGS
meson install -C _build
cd ..
rm -rf wayland
@@ -36,9 +26,7 @@ rm -rf wayland
git clone https://gitlab.freedesktop.org/wayland/wayland-protocols
cd wayland-protocols
git checkout "$WAYLAND_PROTOCOLS_VERSION"
meson setup -Dtests=false _build ${EXTRA_MESON_ARGS:-}
meson setup _build $EXTRA_MESON_ARGS
meson install -C _build
cd ..
rm -rf wayland-protocols
section_end wayland


@@ -1,52 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
set -uex
section_start weston "Building Weston"
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BASE_TAG
export WESTON_VERSION="14.0.1"
git clone https://gitlab.freedesktop.org/wayland/weston
cd weston
git checkout "$WESTON_VERSION"
meson setup \
-Dbackend-drm=false \
-Dbackend-drm-screencast-vaapi=false \
-Dbackend-headless=true \
-Dbackend-pipewire=false \
-Dbackend-rdp=false \
-Dscreenshare=false \
-Dbackend-vnc=false \
-Dbackend-wayland=false \
-Dbackend-x11=false \
-Dbackend-default=headless \
-Drenderer-gl=true \
-Dxwayland=true \
-Dsystemd=false \
-Dremoting=false \
-Dpipewire=false \
-Dshell-desktop=true \
-Dshell-fullscreen=false \
-Dshell-ivi=false \
-Dshell-kiosk=false \
-Dcolor-management-lcms=false \
-Dimage-jpeg=false \
-Dimage-webp=false \
-Dtools= \
-Ddemo-clients=false \
-Dsimple-clients= \
-Dresize-pool=false \
-Dwcap-decode=false \
-Dtests=false \
-Ddoc=false \
_build ${EXTRA_MESON_ARGS:-}
meson install -C _build
cd ..
rm -rf weston
section_end weston


@@ -1,31 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
set -uex
uncollapsed_section_start xwayland "Building XWayland"
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BASE_TAG
#
export XORGPROTO_VERSION="xorgproto-2024.1"
export XWAYLAND_VERSION="xwayland-24.1.8"
git clone https://gitlab.freedesktop.org/xorg/proto/xorgproto
cd xorgproto
git checkout "$XORGPROTO_VERSION"
meson setup _build ${EXTRA_MESON_ARGS:-}
meson install -C _build
cd ..
rm -rf xorgproto
git clone https://gitlab.freedesktop.org/xorg/xserver
cd xserver
git checkout "$XWAYLAND_VERSION"
meson setup _build ${EXTRA_MESON_ARGS:-}
meson install -C _build
cd ..
rm -rf xserver
section_end xwayland


@@ -1,24 +0,0 @@
#!/usr/bin/env bash
# When changing this file, all the linux tags in
# .gitlab-ci/image-tags.yml need updating.
set -eu
# Early check for required env variables, relies on `set -u`
: "$S3_JWT_FILE_SCRIPT"
if [ -z "$1" ]; then
echo "usage: $(basename "$0") <CONTAINER_CI_JOB_NAME>" 1>&2
exit 1
fi
CONTAINER_CI_JOB_NAME="$1"
# Tasks to perform before executing the script of a container job
eval "$S3_JWT_FILE_SCRIPT"
unset S3_JWT_FILE_SCRIPT
trap 'rm -f ${S3_JWT_FILE}' EXIT INT TERM
bash ".gitlab-ci/container/${CONTAINER_CI_JOB_NAME}.sh"


@@ -6,6 +6,8 @@ fi
# Clean up any build cache
rm -rf /root/.cache
rm -rf /root/.cargo
rm -rf /.cargo
if test -x /usr/bin/ccache; then
ccache --show-stats


@@ -1,7 +1,4 @@
#!/bin/sh
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
if test -x /usr/bin/ccache; then
if test -f /etc/debian_version; then
@@ -16,8 +13,8 @@ if test -x /usr/bin/ccache; then
export CCACHE_COMPILERCHECK=content
export CCACHE_COMPRESS=true
export CCACHE_DIR="/cache/$CI_PROJECT_NAME/ccache"
export PATH="$CCACHE_PATH:$PATH"
export CCACHE_DIR=/cache/$CI_PROJECT_NAME/ccache
export PATH=$CCACHE_PATH:$PATH
# CMake ignores $PATH, so we have to force CC/GCC to the ccache versions.
export CC="${CCACHE_PATH}/gcc"
@@ -26,6 +23,14 @@ if test -x /usr/bin/ccache; then
ccache --show-stats
fi
# When not using the mold linker (e.g. unsupported architecture), force
# linkers to gold, since it's so much faster for building. We can't use
# lld because we're on old debian and it's buggy. mingw fails meson builds
# with it with "meson.build:21:0: ERROR: Unable to determine dynamic linker"
find /usr/bin -name \*-ld -o -name ld | \
grep -v mingw | \
xargs -n 1 -I '{}' ln -sf '{}.gold' '{}'
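# (Comment added for illustration, not in the original script.) For example,
# /usr/bin/aarch64-linux-gnu-ld ends up as a symlink to
# /usr/bin/aarch64-linux-gnu-ld.gold.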
# Make a wrapper script for ninja to always include the -j flags
{
echo '#!/bin/sh -x'
@@ -38,29 +43,10 @@ chmod +x /usr/local/bin/ninja
# flags (doesn't apply to non-container builds, but we don't run make there)
export MAKEFLAGS="-j${FDO_CI_CONCURRENT:-4}"
# Ensure that rust tools are in PATH if they exist
CARGO_ENV_FILE="$HOME/.cargo/env"
if [ -f "$CARGO_ENV_FILE" ]; then
# shellcheck disable=SC1090
source "$CARGO_ENV_FILE"
fi
ci_tag_early_checks() {
# Runs the first part of the build script to perform the tag check only
uncollapsed_section_switch "ci_tag_early_checks" "Ensuring component versions match declared tags in CI builds"
echo "[Structured Tagging] Checking components: ${CI_BUILD_COMPONENTS}"
# shellcheck disable=SC2086
for component in ${CI_BUILD_COMPONENTS}; do
bin/ci/update_tag.py --check ${component} || exit 1
done
echo "[Structured Tagging] Components check done"
section_end "ci_tag_early_checks"
}
# Check if each declared tag component is up to date before building
if [ -n "${CI_BUILD_COMPONENTS:-}" ]; then
# Remove any duplicates by splitting on whitespace, sorting, then joining back
CI_BUILD_COMPONENTS="$(echo "${CI_BUILD_COMPONENTS}" | xargs -n1 | sort -u | xargs)"
ci_tag_early_checks
fi
# make wget try more than once when a download fails or times out
echo -e "retry_connrefused = on\n" \
"read_timeout = 300\n" \
"tries = 4\n" \
"retry_on_host_error = on\n" \
"retry_on_http_error = 429,500,502,503,504\n" \
"wait_retry = 32" >> /etc/wgetrc


@@ -18,7 +18,7 @@ cat > "$cross_file" <<EOF
[binaries]
ar = '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-ar'
c = ['ccache', '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch2}${sdk_version}-clang', '-fno-exceptions', '-fno-unwind-tables', '-fno-asynchronous-unwind-tables']
cpp = ['ccache', '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch2}${sdk_version}-clang++', '-fno-exceptions', '-fno-unwind-tables', '-fno-asynchronous-unwind-tables', '--start-no-unused-arguments', '-static-libstdc++', '--end-no-unused-arguments']
cpp = ['ccache', '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch2}${sdk_version}-clang++', '-fno-exceptions', '-fno-unwind-tables', '-fno-asynchronous-unwind-tables', '-static-libstdc++']
c_ld = 'lld'
cpp_ld = 'lld'
strip = '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-strip'


@@ -2,13 +2,10 @@
# shellcheck disable=SC2086 # we want word splitting
set -e
. .gitlab-ci/setup-test-env.sh
set -o xtrace
export DEBIAN_FRONTEND=noninteractive
: "${LLVM_VERSION:?llvm version not set!}"
export LLVM_VERSION="${LLVM_VERSION:=15}"
# Ephemeral packages (installed for this script and removed again at the end)
EPHEMERAL=(
@@ -39,6 +36,7 @@ DEPS=(
"libxrandr-dev:$arch"
"libxshmfence-dev:$arch"
"libxxf86vm-dev:$arch"
"libwayland-dev:$arch"
)
dpkg --add-architecture $arch

.gitlab-ci/container/debian/android_build.sh Executable file → Normal file

@@ -5,11 +5,7 @@
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
set -e
. .gitlab-ci/setup-test-env.sh
set -x
set -ex
EPHEMERAL=(
autoconf
@@ -19,13 +15,11 @@ EPHEMERAL=(
apt-get install -y --no-remove "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_pre_build.sh
# Fetch the NDK and extract just the toolchain we want.
ndk="android-ndk-${ANDROID_NDK_VERSION}"
ndk=$ANDROID_NDK
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-o $ndk.zip https://dl.google.com/android/repository/$ndk-linux.zip
unzip -d / $ndk.zip "$ndk/source.properties" "$ndk/build/cmake/*" "$ndk/toolchains/llvm/*"
unzip -d / $ndk.zip "$ndk/toolchains/llvm/*"
rm $ndk.zip
# Since it was packed as a zip file, symlinks/hardlinks got turned into
# duplicate files. Turn them into hardlinks to save on container space.
@@ -40,12 +34,6 @@ sh .gitlab-ci/container/create-android-cross-file.sh /$ndk i686-linux-android x8
sh .gitlab-ci/container/create-android-cross-file.sh /$ndk aarch64-linux-android aarch64 armv8 $ANDROID_SDK_VERSION
sh .gitlab-ci/container/create-android-cross-file.sh /$ndk arm-linux-androideabi arm armv7hl $ANDROID_SDK_VERSION armv7a-linux-androideabi
# Build libdrm for the host (Debian) environment, so it's available for
# binaries we'll run as part of the build process
. .gitlab-ci/container/build-libdrm.sh
# Build libdrm for the NDK environment, so it's available when building for
# the Android target
for arch in \
x86_64-linux-android \
i686-linux-android \
@@ -97,22 +85,9 @@ for arch in \
--libdir=/usr/local/lib/${arch}
make install
make distclean
unset CC
unset CC
unset CXX
unset LD
unset RANLIB
done
cd ..
rm -rf $LIBELF_VERSION
# Build LLVM libraries for Android only if necessary, uploading a copy to S3
# to avoid rebuilding it in a future run if the version does not change.
bash .gitlab-ci/container/build-android-x86_64-llvm.sh
apt-get purge -y "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh


@@ -1,4 +0,0 @@
#!/usr/bin/env bash
DEBIAN_ARCH=armhf \
. .gitlab-ci/container/debian/test-base.sh


@@ -1,4 +0,0 @@
#!/usr/bin/env bash
DEBIAN_ARCH="armhf" \
. .gitlab-ci/container/debian/test-gl.sh


@@ -1,4 +0,0 @@
#!/usr/bin/env bash
DEBIAN_ARCH="armhf" \
. .gitlab-ci/container/debian/test-vk.sh


@@ -1,23 +1,15 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
set -e
. .gitlab-ci/setup-test-env.sh
set -o xtrace
: "${LLVM_VERSION:?llvm version not set}"
export LLVM_VERSION="${LLVM_VERSION:=15}"
apt-get -y install ca-certificates curl gnupg2
apt-get -y install ca-certificates
sed -i -e 's/http:\/\/deb/https:\/\/deb/g' /etc/apt/sources.list.d/*
echo "deb [trusted=yes] https://gitlab.freedesktop.org/gfx-ci/ci-deb-repo/-/raw/${PKG_REPO_REV}/ ${FDO_DISTRIBUTION_VERSION%-*} main" | tee /etc/apt/sources.list.d/gfx-ci_.list
. .gitlab-ci/container/debian/maybe-add-llvm-repo.sh
apt-get update
# Ephemeral packages (installed for this script and removed again at the end)
EPHEMERAL=(
@@ -71,6 +63,7 @@ DEPS=(
libwayland-egl-backend-dev
"llvm-${LLVM_VERSION}-dev"
ninja-build
meson
openssh-server
pkgconf
python3-mako
@@ -79,28 +72,17 @@ DEPS=(
python3-pycparser
python3-requests
python3-setuptools
python3-venv
shellcheck
u-boot-tools
xz-utils
yamllint
zlib1g-dev
zstd
)
apt-get update
apt-get -y install "${DEPS[@]}" "${EPHEMERAL[@]}"
# Needed for ci-fairy s3cp
pip3 install --break-system-packages "ci-fairy[s3] @ git+https://gitlab.freedesktop.org/freedesktop/ci-templates@$MESA_TEMPLATES_COMMIT"
pip3 install --break-system-packages -r bin/ci/test/requirements.txt
. .gitlab-ci/container/install-meson.sh
pip3 install --break-system-packages git+http://gitlab.freedesktop.org/freedesktop/ci-templates@ffe4d1b10aab7534489f0c4bbc4c5899df17d3f2
arch=armhf
. .gitlab-ci/container/cross_build.sh
. .gitlab-ci/container/container_pre_build.sh
@@ -113,6 +95,8 @@ arch=armhf
. .gitlab-ci/container/build-libclc.sh
. .gitlab-ci/container/install-meson.sh
. .gitlab-ci/container/build-rust.sh
. .gitlab-ci/container/build-bindgen.sh

Some files were not shown because too many files have changed in this diff.