Compare commits

..

112 Commits

Author SHA1 Message Date
Dylan Baker
9a80d2f73b VERSION: bump to 22.2.0-rc3 2022-08-18 10:49:06 -07:00
Dylan Baker
af2892677b .pick_status.json: Mark 11ab608779 as denominated 2022-08-16 09:41:18 -07:00
Chia-I Wu
cc504c9887 turnip: fix a use-after-free in autotune
When removing old histories, check against gpu fence.  Otherwise,
pending_results could have dangling pointers to the removed histories.

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/7055
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/18040>
(cherry picked from commit b8a916fd0c)
2022-08-16 09:39:24 -07:00
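Below is a minimal C sketch of the pattern this fix describes: only remove a history entry once the GPU fence guarding it has signaled, so pending_results cannot be left with dangling pointers. All names are illustrative assumptions, not turnip's actual API.

#include <stdint.h>
#include <stdlib.h>

struct history {
   struct history *next;
   uint32_t fence_value;   /* GPU fence value the entry is tied to */
};

/* Free only the entries whose fence has already signaled on the GPU;
 * newer entries may still be referenced by pending results. */
static void
prune_histories(struct history **list, uint32_t completed_fence)
{
   struct history **p = list;
   while (*p) {
      struct history *h = *p;
      if (h->fence_value <= completed_fence) {
         *p = h->next;
         free(h);
      } else {
         p = &h->next;
      }
   }
}
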
Qiang Yu
6561217214 nir/lower_gs_intrinsics: fix primitive count for points
When the primitive type is points, EndPrimitive can't be used to count
primitives; the vertex count must be used instead. The per-primitive
vertex counting and the overwriting of incomplete primitives are also
not needed for points.

Fixes: 2be99012e9 ("nir: Add ability to count emitted GS primitives.")
Reviewed-by: Timur Kristóf <timur.kristof@gmail.com>
Signed-off-by: Qiang Yu <yuq825@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17805>
(cherry picked from commit 84956286a8)
2022-08-16 09:39:21 -07:00
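A hedged sketch of the counting rule above: for points, every emitted vertex is a complete primitive, so the primitive count is the vertex count rather than the number of EndPrimitive() calls. This is illustrative pseudologic, not the actual NIR lowering pass.

#include <stdbool.h>

/* Illustrative only -- not the real nir_lower_gs_intrinsics code. */
static unsigned
gs_primitive_count(unsigned vertices_emitted, unsigned end_primitive_calls,
                   bool output_is_points)
{
   if (output_is_points)
      return vertices_emitted;   /* each point is one complete primitive */
   return end_primitive_calls;   /* strips etc.: count EndPrimitive() */
}
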
Eric Engestrom
7b1412130a vk/device-select-layer: fix .sType of VkPhysicalDeviceGroupProperties
The validation layers complained:
> Validation Error: [ VUID-VkPhysicalDeviceGroupProperties-sType-sType ] Object 0: VK_NULL_HANDLE, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0xc9edee8b | vkEnumeratePhysicalDeviceGroups: parameter pPhysicalDeviceGroupProperties[0].sType must be VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES The Vulkan spec states: sType must be VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES (https://www.khronos.org/registry/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkPhysicalDeviceGroupProperties-sType-sType)

Signed-off-by: Eric Engestrom <eric@igalia.com>
Reviewed-by: Georg Lehmann <dadschoorse@gmail.com>
Fixes: c196ffaca6 ("vk-device-select: add device group support")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/18037>
(cherry picked from commit 4588453815)
2022-08-16 09:39:20 -07:00
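For context, the validation message is about caller-provided output structs: each VkPhysicalDeviceGroupProperties element passed to vkEnumeratePhysicalDeviceGroups() must have its sType initialized. A minimal sketch of correct usage against the plain Vulkan API (independent of the layer code) might look like this:

#include <stdlib.h>
#include <vulkan/vulkan.h>

static VkResult
enumerate_groups(VkInstance instance)
{
   uint32_t count = 0;
   VkResult result = vkEnumeratePhysicalDeviceGroups(instance, &count, NULL);
   if (result != VK_SUCCESS || count == 0)
      return result;

   VkPhysicalDeviceGroupProperties *groups = calloc(count, sizeof(*groups));
   if (!groups)
      return VK_ERROR_OUT_OF_HOST_MEMORY;

   /* The spec requires sType to be set on every element before the call. */
   for (uint32_t i = 0; i < count; i++) {
      groups[i].sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
      groups[i].pNext = NULL;
   }

   result = vkEnumeratePhysicalDeviceGroups(instance, &count, groups);
   free(groups);
   return result;
}
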
Pavel Ondračka
40da2cee3d r300: fix variables detection for paired ALU and TEX instructions in different branches
TEX instructions can't write xyz and w to separate registers, so we
need to create variables from them first; otherwise we can create
two variables from an ALU instruction writing the same register's xyz
and w in another branch (this usually works when no TEX is present,
as the xyz and w can then read/write different registers).

This fixes regalloc because the variables are later used as
graph nodes.

The variable order should not matter, but it slightly does (leading
to an approx. 0.3% shader-db temps increase compared to the previous
state), so just sort the variables list afterwards to stay as close
to the previous behavior as possible and prevent the regression.

CC: mesa-stable
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/6936
Signed-off-by: Pavel Ondračka <pavel.ondracka@gmail.com>
Reviewed-by: Filip Gawin <filip@gawin.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17987>
(cherry picked from commit 88fd397c74)
2022-08-16 09:39:18 -07:00
Axel Davy
30ef443d23 frontend/nine: Fix ff position_t fallback when w = 0
For post-transformed vertices, w = 0 is similar to
w = 1. Replace the value to fix rcp(w).

It is common for apps to pass w = 0 for
position_t.

cc: mesa-stable

Signed-off-by: Axel Davy <davyaxel0@gmail.com>
Acked-by: David Heidelberg <david.heidelberg@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/18021>
(cherry picked from commit b5df20568a)
2022-08-16 09:39:16 -07:00
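A tiny sketch of the fallback described above, with a hypothetical helper name (not the actual nine code): treat w == 0 as w == 1 so the later rcp(w) stays finite for post-transformed (POSITIONT) vertices.

/* Illustrative fixup applied to the w component of a POSITIONT vertex. */
static float
fixup_positiont_w(float w)
{
   return (w == 0.0f) ? 1.0f : w;   /* avoid a division by zero in rcp(w) */
}
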
Axel Davy
499a65e88d frontend/nine: Fix shader multi-use crash
Due to the driver live shader cache, it's possible
two different d3d9 shaders get the same cso.

As it's disallowed to destroy a shader cso while it is
bound, nine checks for this scenario. However, it
was not taking into account that the cso might be from
a different shader.

cc: mesa-stable

Signed-off-by: Axel Davy <davyaxel0@gmail.com>
Acked-by: David Heidelberg <david.heidelberg@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/18021>
(cherry picked from commit 93da6e9f34)
2022-08-16 09:39:15 -07:00
Axel Davy
35025cbb77 frontend/nine: Fix cso restore bug
Invalidating all state groups is not sufficient, as
some states check for actual changes.
The correct way is to invalidate the
commit mask.

Found with a wine test.

cc: mesa-stable

Signed-off-by: Axel Davy <davyaxel0@gmail.com>
Acked-by: David Heidelberg <david.heidelberg@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/18021>
(cherry picked from commit 4c65ccab6d)
2022-08-16 09:39:14 -07:00
Axel Davy
5c4028ac36 frontend/nine: Fix ATOC handling
The previous code was incorrectly checking the previous
value of alphatestenable.
In addition, remove an optimization that cannot hit (as we
filter out redundant state settings).

cc: mesa-stable

Fixes: 1272640d5 ("st/nine: Fix alpha to coverage states")
Signed-off-by: Axel Davy <davyaxel0@gmail.com>
Acked-by: David Heidelberg <david.heidelberg@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/18021>
(cherry picked from commit 4f953ad512)
2022-08-16 09:39:14 -07:00
Axel Davy
53cd211cb9 frontend/nine: Fix buffer tracking out of bounds
Fixes a crash in an FFXI trace, which draws out of bounds. This was
previously resulting in an attempt to fill a buffer resource that is
not big enough.

cc: mesa-stable
Fixes: 380c2bf ("st/nine: Optimize dynamic systemmem buffers")

Signed-off-by: Axel Davy <davyaxel0@gmail.com>
Acked-by: David Heidelberg <david.heidelberg@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/18021>
(cherry picked from commit e5124e83ba)
2022-08-16 09:39:13 -07:00
Axel Davy
bdcffd60db frontend/nine: Skip invalid swvp calls
Without this it may crash when running the wine tests.
According to the tests themselves, the correct
behaviour is a bit more complicated, but
this is a first step.

cc: mesa-stable

Signed-off-by: Axel Davy <davyaxel0@gmail.com>
Acked-by: David Heidelberg <david.heidelberg@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/18021>
(cherry picked from commit b74febffe6)
2022-08-16 09:39:13 -07:00
Yonggang Luo
b712253b53 util: Fix the invalid assumption that util_format_fetch_rgba_func() always returns non-NULL
Fixes: e342081c ("util/format: Assert that formats are valid")
Closes #7020

Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
Reviewed-by: Konstantin Seurer <konstantin.seurer@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/18024>
(cherry picked from commit 075b72ea06)
2022-08-16 09:39:11 -07:00
pal1000
ecc41f91ad meson: Microsoft / maybe Intel CLC need the all-targets workaround
just like clover

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/5666
Fixes: 1506ea2ecb ("Move a bunch of the CLC stuff from src/microsoft to common code")

Reviewed-by: Jesse Natalie <jenatali@microsoft.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17682>
(cherry picked from commit b5b855149c)
2022-08-16 09:38:48 -07:00
Dylan Baker
f1a407de47 .pick_status.json: Update to 74fc367127 2022-08-16 09:38:46 -07:00
Konstantin Seurer
9998f8e1db radv: Fix stack size calculation with stage ids
In create_rt_shader, we were setting group_idx to the stage index before.

Fixes the following tests:

dEQP-VK.ray_query.builtin.instancecustomindex.miss.aabbs
dEQP-VK.ray_query.builtin.objectrayorigin.miss.triangles

Fixes: c39ccce ("radv/rt: use stage ID as handle for general and closestHit shaders")
Signed-off-by: Konstantin Seurer <konstantin.seurer@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17936>
(cherry picked from commit 2d39227a90)
2022-08-12 10:42:23 -07:00
Samuel Pitoiset
26c1926a4a radv: fix cleaning the meta query state if an error occurred 2022-08-12 10:42:23 -07:00
It's already correctly cleaned in radv_device_init_meta().

This fixes a recent regression with
dEQP-VK.api.device_init.create_instance_device_intentional_alloc_fail.

Fixes: 1a95d43e55 ("radv: Simplify the meta init fail path")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Konstantin Seurer <konstantin.seurer@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17952>
(cherry picked from commit 37dfa4e3f3)

Conflicts:
	src/amd/ci/radv-hawaii-aco-fails.txt
	src/amd/ci/radv-oland-aco-fails.txt

Stable:
    - remove CI files that don't exist in 22.2
2022-08-12 10:42:23 -07:00
Mike Blumenkrantz
9a43a1f1d1 mesa: require render target bind for A/L/I in format selection
these are required framebuffer formats in certain versions of GL,
so don't create a texture that can't later be bound to a framebuffer

see also spec@!opengl 3.0@required-texture-attachment-formats

cc: mesa-stable

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17687>
(cherry picked from commit 28d033b34f)
2022-08-12 10:42:23 -07:00
Mike Blumenkrantz
b3fc8cb419 mesa: fix blending when using luminance/intensity emulation
neither of these have a real alpha channel, so reuse the xrgb blend
clamping here to ensure the "right" alpha value is used

cc: mesa-stable

fixes:
spec@arb_texture_float@fbo-blending-formats

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17687>
(cherry picked from commit 4f28e2827c)
2022-08-12 10:42:23 -07:00
sjfricke
9f305dd4e6 isl: fix bug where sb.MOCS is not being set
Currently the sb.MOCS is being reset to zero after struct init.

Signed-off-by: sjfricke <spencerfricke@gmail.com>
Fixes: c27fcb1d3b ("isl: Fill in MOCS for NULL depth, stencil, and HiZ buffers.")
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
Reviewed-by: Jason Ekstrand <jason.ekstrand@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17985>
(cherry picked from commit 861167f41d)
2022-08-12 09:41:33 -07:00
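The underlying C pitfall, shown as a generic sketch (not the actual isl code): assigning a designated-initializer compound literal to a struct zeroes every field that is not listed, so a field set before that assignment is silently lost.

struct surf_state { unsigned mocs; unsigned width; };

static unsigned
broken_order(void)
{
   struct surf_state sb;
   sb.mocs = 2;                              /* set first...              */
   sb = (struct surf_state){ .width = 64 };  /* ...then reset to 0 here   */
   return sb.mocs;                           /* 0, not 2                  */
}

static unsigned
fixed_order(void)
{
   struct surf_state sb = { .width = 64 };   /* full initialization first */
   sb.mocs = 2;                              /* then set the extra field  */
   return sb.mocs;                           /* 2, as intended            */
}
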
Marek Olšák
dbc956920f glthread: call _mesa_glthread_DeleteBuffers unconditionally
Deleted buffers were not unbound in glthread.

Fixes: 4fa24747b9 - glthread: call _mesa_glthread_BindBuffer unconditionally

Reviewed-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17976>
(cherry picked from commit 28e351673e)
2022-08-12 09:41:33 -07:00
Marek Olšák
fa4c949150 glthread: unbind framebuffers in glDeleteFramebuffers
Tests:
    dEQP-GLES2.functional.lifetime.delete_bound.framebuffer
    dEQP-GLES2.functional.state_query.integers.framebuffer_binding_getinteger

Fixes: e48f676835 - glthread: don't sync for more glGetIntegerv enums for glretrace

Reviewed-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17976>
(cherry picked from commit eb4036ea5b)
2022-08-12 09:41:33 -07:00
Charmaine Lee
3490712ad7 mesa/st: fix reference to nir->info after nir_to_tgsi
The nir shader memory is freed in nir_to_tgsi(), but the already
freed shader info is referenced later when create compute state.
To avoid referencing the freed memory, copy the shader info first before
calling nir_to_tgsi.

Fixes vmx crash running aztec on SVGA driver.
Fixes: 580f1ac473 ("nir: Extract shader_info->cs.shared_size out of union")

Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17999>
(cherry picked from commit 4393be8291)
2022-08-12 09:41:33 -07:00
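A hedged sketch of the use-after-free avoidance described above, with simplified stand-in types (not the exact mesa/st code): copy the info out of the NIR shader before handing the shader to a call that frees it.

#include <stdlib.h>

struct shader_info_ex { unsigned shared_size; };
struct nir_shader_ex  { struct shader_info_ex info; };

/* Stands in for nir_to_tgsi(): converts the shader and frees it. */
static const void *
convert_and_free(struct nir_shader_ex *s)
{
   free(s);
   return NULL;   /* would be the TGSI tokens */
}

static const void *
create_compute_state(struct nir_shader_ex *nir, struct shader_info_ex *out_info)
{
   *out_info = nir->info;   /* copy before the shader (and its info) is freed */
   const void *tokens = convert_and_free(nir);
   /* From here on use *out_info, never nir->info, which now dangles. */
   return tokens;
}
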
Dylan Baker
260b7902fe .pick_status.json: Update to 24b9ad7cd5 2022-08-12 09:41:29 -07:00
Yonggang Luo
12f1cabeba microsoft/clc: Fixes compiling errors with clang/mingw64 in clc/clc_compiler_test.cpp
clc_compiler_test.cpp:1322:67: error: non-constant-expression cannot be narrowed from type 'double' to 'float' in initializer list
      log(0.0f) / log(2), log(1.0f) / log(2), log(2.0f) / log(2), log(3.0f) / log(2)
clc_compiler_test.cpp:2306:25: error: non-constant-expression cannot be narrowed from type 'std::vector<unsigned int>::size_type' (aka 'unsigned long long') to 'unsigned int' in initializer list
   CompileArgs args = { inout.size(), 1, 1 };

Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
Reviewed-by: Jesse Natalie <jenatali@microsoft.com>
(cherry picked from commit ecfda9a0fa)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/18030>
2022-08-12 12:40:53 +03:00
Lionel Landwerlin
4e0637a182 anv: don't return incorrect error code for vkCreateDescriptorPool
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/7013
Cc: mesa-stable
Reviewed-by: Jason Ekstrand <jason.ekstrand@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17945>
(cherry picked from commit 56bb29cb93)
2022-08-11 10:33:26 -07:00
Jesse Natalie
8b0343601c egl/wgl: Fix some awkward sizeof formatting
Fixes: 3415bf02 ("egl: Add a basic Windows driver")
Suggested-by: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Acked-by: Daniel Stone <daniels@collabora.com>
Acked-by: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Acked-by: Sidney Just <justsid@x-plane.com>
Acked-by: Jason Ekstrand <jason.ekstrand@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/12964>
(cherry picked from commit 17eda68df3)
2022-08-11 10:33:25 -07:00
Jesse Natalie
bac7da0264 egl/wgl: Delete unused variables/code
Fixes: 3415bf02 ("egl: Add a basic Windows driver")
Suggested-by: Yonggang Luo <luoyonggang@gmail.com>
Acked-by: Daniel Stone <daniels@collabora.com>
Acked-by: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Acked-by: Sidney Just <justsid@x-plane.com>
Acked-by: Jason Ekstrand <jason.ekstrand@collabora.com>
Tested-by: Yonggang Luo <luoyonggang@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/12964>
(cherry picked from commit efd2ae6c0c)
2022-08-11 10:33:25 -07:00
Mike Blumenkrantz
b01498700c nir/validate: clamp unsized tex dests to 32bit
this is the "default" size that's expected

cc: mesa-stable

Reviewed-by: Jason Ekstrand <jason.ekstrand@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17874>
(cherry picked from commit b7eda568a4)
2022-08-11 10:33:24 -07:00
Mike Blumenkrantz
1c6c94424b radv: fix return type for meta resolve shaders
this should match the image type

cc: mesa-stable

Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17874>
(cherry picked from commit 632e1b66f5)
2022-08-11 10:33:23 -07:00
Dylan Baker
87e006ca01 .pick_status.json: Update to a3bf0da1cb 2022-08-11 10:33:20 -07:00
Chia-I Wu
f88ce98ee6 turnip: use SPDX-License-Identifier
(cherry picked from commit f0558c6f1c)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:16 -07:00
Chia-I Wu
ffc5316a7c turnip: remove headers from libtu_files
meson can work out the dependencies.

(cherry picked from commit 8977913a23)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:16 -07:00
Chia-I Wu
5433fb705b turnip: remove tu_private.h
(cherry picked from commit 381f234ab8)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:16 -07:00
Chia-I Wu
773964fb8b turnip: move away from tu_private.h
(cherry picked from commit 5f7538f241)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:16 -07:00
Chia-I Wu
75af03a653 turnip: update tu_util.h
(cherry picked from commit 46baf86414)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:16 -07:00
Chia-I Wu
61790c60dd turnip: add tu_android.h
(cherry picked from commit e99703b515)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:16 -07:00
Chia-I Wu
6094318c4d turnip: add tu_cmd_buffer.h
(cherry picked from commit 8e61bee30c)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:16 -07:00
Chia-I Wu
6225807c85 turnip: add tu_device.h
Also drop unused

 - tu_instance_extension_supported
 - tu_physical_device_api_version
 - tu_physical_device_extension_supported
 - tu_device_submit_deferred_locked
 - tu_get_perftest_option_name

(cherry picked from commit 6666ec3945)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:16 -07:00
Chia-I Wu
df69376e68 turnip: update tu_autotune.h
(cherry picked from commit 9d9bf78565)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:16 -07:00
Chia-I Wu
7c6e24f329 turnip: add tu_wsi.h
Also drop unused x11 and wayland type definitions.

(cherry picked from commit 4fc31e4af3)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:16 -07:00
Chia-I Wu
c69f749bd8 turnip: add tu_pass.h
(cherry picked from commit 543fac108d)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:16 -07:00
Chia-I Wu
42cd6b0fa0 turnip: add tu_lrz.h
(cherry picked from commit 3c607309c9)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:16 -07:00
Chia-I Wu
0d27e5fd63 turnip: add tu_dynamic_rendering.h
(cherry picked from commit 79dd12478f)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:16 -07:00
Chia-I Wu
a4a7aa5d1a turnip: add tu_clear_blit.h
Also drop unused tu_emit_load_gmem_attachment.

(cherry picked from commit 4f759fddba)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:16 -07:00
Chia-I Wu
a77322c414 turnip: add tu_pipeline.h
Also drop unused tu_pipeline_key.

(cherry picked from commit 6430efcab7)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:16 -07:00
Chia-I Wu
3b82f4eae2 turnip: add tu_shader.h
(cherry picked from commit ec5bc3d8ff)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:16 -07:00
Chia-I Wu
d214aa3889 turnip: update tu_descriptor_set.h
Also drop unused tu_descriptor_range.

(cherry picked from commit a7fe90434c)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:16 -07:00
Chia-I Wu
9b266113fe turnip: add tu_formats.h
(cherry picked from commit 216f19e62f)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:15 -07:00
Chia-I Wu
96df57ad5d turnip: add tu_image.h
(cherry picked from commit 095dfcae45)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:15 -07:00
Chia-I Wu
9d3c4ea4ec turnip: add tu_query.h
(cherry picked from commit 65a5fbcb15)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:15 -07:00
Chia-I Wu
853962d850 turnip: update tu_cs.h
(cherry picked from commit 51d416a7e4)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:15 -07:00
Chia-I Wu
c10a10b3ac turnip: add tu_suballoc.h
(cherry picked from commit 2e337f05ab)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:15 -07:00
Chia-I Wu
fe4bc64b9f turnip: add tu_drm.h
Also define tu_syncobj_from_handle only when TU_USE_KGSL.

(cherry picked from commit 4d9ac3d0df)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:15 -07:00
Chia-I Wu
1a1ded7d78 turnip: remove includes that are already in tu_common.h
(cherry picked from commit 120469efea)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:15 -07:00
Chia-I Wu
e2ff62782d turnip: add tu_common.h as the common header
Move most includes and defines in tu_private.h to the new tu_common.h.

tu_common.h is a header that all other files include, mostly indirectly
through tu_private.h.  The only exceptions are tu_perfetto.h and
tu_tracepoints.h, because ir3 headers are not compatible with C++.

(cherry picked from commit 0312157101)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17877>
2022-08-10 13:30:15 -07:00
Dylan Baker
df035d2894 VERSION: bump to 22.2.0-rc2 2022-08-10 12:17:09 -07:00
Dylan Baker
23daa993df Revert "VERSION: update to 22.2.0"
This reverts commit bc9e9c39ef.

Buggy script created a buggy patch. Sorry for the commotion
2022-08-10 12:13:04 -07:00
Dylan Baker
bc9e9c39ef VERSION: update to 22.2.0 2022-08-10 11:00:49 -07:00
Yonggang Luo
dacab91f27 d3d12: Fix a compile error with mingw/gcc-x64 when statically linking the runtime library
Closes #6968

Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
Suggested-by: Jesse Natalie <jenatali@microsoft.com>
Reviewed-by: Jesse Natalie <jenatali@microsoft.com>
Tested-by: Prodea Alexandru-Liviu <liviuprodea@yahoo.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17889>
(cherry picked from commit b6fb2da6f2)
2022-08-10 09:47:50 -07:00
Pierre-Eric Pelloux-Prayer
32ac1133d0 nir: add a nir_opt_if_options enum
And don't enable nir_opt_if_optimize_phi_true_false on radeonsi with
LLVM 14 because it crashes Blender.

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/6976
Cc: mesa-stable
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Acked-by: Emma Anholt <emma@anholt.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17949>
(cherry picked from commit 70891edd97)
2022-08-10 09:47:25 -07:00
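A hedged sketch of how a driver can opt out of a single sub-optimization through such an options enum. The flag names here are assumptions for illustration, not necessarily the exact Mesa identifiers.

#include <stdbool.h>

enum opt_if_options {
   OPT_IF_AGGRESSIVE_LAST_CONTINUE = 1u << 0,
   OPT_IF_OPTIMIZE_PHI_TRUE_FALSE  = 1u << 1,
};

static unsigned
choose_opt_if_options(bool llvm14_workaround_needed)
{
   unsigned opts = OPT_IF_AGGRESSIVE_LAST_CONTINUE |
                   OPT_IF_OPTIMIZE_PHI_TRUE_FALSE;

   /* radeonsi + LLVM 14: skip the phi optimization that crashes Blender. */
   if (llvm14_workaround_needed)
      opts &= ~OPT_IF_OPTIMIZE_PHI_TRUE_FALSE;

   return opts;
}
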
Rhys Perry
e99965a073 aco: fix hash statistic
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Fixes: 897561b7b9 ("aco: add aco_postprocess_shader() helper")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17954>
(cherry picked from commit bd40e1b012)
2022-08-10 09:47:24 -07:00
Erik Faye-Lund
f330229d98 zink: do not use VK_FORMAT_D32_SFLOAT_S8_UINT without checking
Without this, we might end up trying to use VK_FORMAT_D32_SFLOAT_S8_UINT
even when it's not supported...

Cc: mesa-stable
Reviewed-by: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17953>
(cherry picked from commit 3340dea194)
2022-08-10 09:47:24 -07:00
Erik Faye-Lund
266fc5f6cc zink: add have_D32_SFLOAT_S8_UINT boolean
This will be reused in the following commit.

Cc: mesa-stable
Reviewed-by: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17953>
(cherry picked from commit 71c1ca3c67)
2022-08-10 09:47:23 -07:00
Charmaine Lee
ec9691dbf1 svga: fix mksstats build
Trivial.

Fixes: ed77ac1eef ("svga: add a helper function for common shader creation")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17971>
(cherry picked from commit aa5d4062e8)
2022-08-10 09:47:20 -07:00
Iván Briano
b6973234ad anv: emit scissors when the pipeline changes
With the switch to common dynamic state tracking, something got lost,
so the scissors are no longer always emitted when they are not dynamic
and the pipeline is marked dirty.

Since both viewport and scissors make use of each other to calculate
their values, just stick the scissor emit in the same if block as
viewport for now.
I'd rather have them decoupled, and at least the Vulkan CTS didn't
complain when I tried it, but I don't know what other effects that
may have, especially when it comes to the guardband.

Fixes a bunch of tests under
dEQP-VK.pipeline.*.multisample.misc.*

Fixes: 7d25c04236 ("anv: Switch to using common dynamic state tracking")

Reviewed-by: Jason Ekstrand <jason.ekstrand@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17964>
(cherry picked from commit fbd4133735)
2022-08-10 09:47:20 -07:00
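A rough sketch of the resulting ordering, in generic form (not the actual anv code): emit the scissors in the same block as the viewports, so a dirty pipeline or a viewport change re-emits both whenever the scissor is not a dynamic state.

#include <stdbool.h>

enum { DIRTY_PIPELINE = 1u << 0, DIRTY_VIEWPORT = 1u << 1 };

static void emit_viewports(void) { /* e.g. viewport packets   */ }
static void emit_scissors(void)  { /* e.g. scissor rectangles */ }

static void
flush_dynamic_state(unsigned dirty, bool scissor_is_dynamic)
{
   if (dirty & (DIRTY_PIPELINE | DIRTY_VIEWPORT)) {
      emit_viewports();
      /* Scissor values are derived together with the viewport, so emit
       * them in the same block when they are not a dynamic state. */
      if (!scissor_is_dynamic)
         emit_scissors();
   }
}
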
Emma Anholt
b70516a37a zink: Make sure that we keep the existing ici pNext chain on inserts.
For external image imports, we'd lose the mutable image format list,
causing turnip to get angry that we try to do UBWC despite not having a
UBWC-compatible format list.

Cc: mesa-stable
Fixes: 28ee911ad6 ("zink: handle mutable swapchain images with dmabuf")
Reviewed-by: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17900>
(cherry picked from commit 8dda0a01bb)
2022-08-10 09:47:19 -07:00
Dylan Baker
3f18f014e4 .pick_status.json: Update to 70891edd97 2022-08-10 09:47:17 -07:00
pal1000
17faf33ab7 Microsoft clc: strip lib prefix
Otherwise OpenCLon12 ICD can't load it

Ref: https://github.com/microsoft/OpenCLOn12/search?q=clon12compiler

Reviewed-by: Jesse Natalie <jenatali@microsoft.com>
Reviewed-by: Yonggang Luo <luoyonggang@gmail.com>
(cherry picked from commit 25e2c4d784)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17886>
2022-08-10 15:15:15 +00:00
Samuel Pitoiset
e35dd22c6d radv: fix gathering XFB info if there are dead outputs
The driver should still gather XFB info even if all XFB outputs are
dead, otherwise the pipeline can't find the streamout shader.

RADV should use vk_spirv_to_nir() at some point to reduce code
duplication during SPIRV->NIR compilation.

This fixes new dEQP-VK.transform_feedback.simple.*.

Cc: mesa-stable
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17939>
(cherry picked from commit e95531e101)
2022-08-09 09:51:56 -07:00
Pierre-Eric Pelloux-Prayer
797a781ffe amdgpu/bo: update uses_secure_bos when importing buffers
Fixes: 90b98c0649 ("amd/tmz: move uses_secure_bos to radeon_winsys")
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/11449>
(cherry picked from commit a693fbf64b)
2022-08-09 09:51:55 -07:00
Dylan Baker
517d22b3f7 .pick_status.json: Update to c67e60ae8f 2022-08-09 09:51:53 -07:00
Erik Faye-Lund
f02522adce docs: fixup link to virgl docs
Fixes: 6897266ce0 ("docs: import virgl docs")
Acked-by: Chia-I Wu <olvaffe@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17881>
(cherry picked from commit 1a3b086b06)
2022-08-08 14:55:10 -07:00
Connor Abbott
679049bf4c tu: Fix sysmem depth attachment clear flushing
We can't invalidate CCU if there is any dirty data that hasn't been
flushed yet. In the case where we clear depth, we know that the depth
attachment itself isn't dirty but there may be dirty data from other
renderpasses. Therefore we need to flush before invalidating depth.

Fixes: 487aa80 ("tu: Rewrite flushing to use barriers")
Closes: #6987
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17940>
(cherry picked from commit a7e64ab63c)
2022-08-08 14:55:10 -07:00
Rhys Perry
c9d2f45bf2 aco: fix LdsBranchVmemWARHazard with 2+ branch chains
For example, "DS -> branch -> VMEM -> branch -> DS".

fossil-db (navi10):
Totals from 639 (0.40% of 161220) affected shaders:
Instrs: 629090 -> 628254 (-0.13%); split: -0.19%, +0.06%
CodeSize: 3410164 -> 3406748 (-0.10%); split: -0.14%, +0.04%
Latency: 7834755 -> 7821011 (-0.18%); split: -0.70%, +0.52%
InvThroughput: 1369698 -> 1374495 (+0.35%); split: -0.12%, +0.47%

A lot of the fossil-db changes are noise.
threekingdoms.8db138826c386a62.1.foz/0b222ed175eebad0 is an example of a
shader that actually has this issue.

Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Timur Kristóf <timur.kristof@gmail.com>
Fixes: c037ba1bb7 ("aco/gfx10: Mitigate LdsBranchVmemWARHazard.")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17697>
(cherry picked from commit b17e59a03b)
2022-08-08 14:55:10 -07:00
Samuel Pitoiset
2c7c5cc016 radv: ignore out-of-order rasterization if stencil write mask is dynamic
This might break out-of-order rasterization on GFX8-GFX9 because it
relies on the stencil write mask which can be dynamic.

Found by inspection.

Cc: mesa-stable
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17673>
(cherry picked from commit 2012246075)
2022-08-08 14:55:10 -07:00
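A minimal sketch of the check, in illustrative form (not the real radv code): out-of-order rasterization must not be enabled when the decision depends on a stencil write mask that is a dynamic state, since its pipeline-time value is meaningless.

#include <stdbool.h>

static bool
can_enable_out_of_order_rast(bool depends_on_stencil_write_mask,
                             bool stencil_write_mask_is_dynamic)
{
   if (depends_on_stencil_write_mask && stencil_write_mask_is_dynamic)
      return false;   /* the mask may change at draw time, so play it safe */
   return true;
}
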
Timothy Arceri
fe2f7c06ae Revert "nir: Preserve offsets in lower_io_to_scalar_early"
This reverts commit 96fa23bca5.

The correct fix to the problem was a1bc152340, making this
change obsolete as the pass skips any vars marked with
always_active_io. There was no real advantage to allowing these
vars to be split because they can't be removed anyway. Also there
is no way to split varying arrays gracefully here due to the xfb
layout rules, and this change didn't handle arrays at all.

Removing this obsolete code also fixes an assert in the new CTS
test KHR-Single-GL45.enhanced_layouts.xfb_all_stages. The test
legally adds xfb offsets in all vertex stages, but since we only
mark the varyings of the final vertex stage with the
always_active_io flag, the other stages were still correctly
lowering to scalars; when an array with an offset hit this code,
it asserted since it couldn't handle it.

Acked-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>

Fixes: a1bc152340 ("spirv: mark variables decorated with XfbBuffer as always active")
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/6928
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17878>
(cherry picked from commit 8bffd601ed)
2022-08-08 14:55:10 -07:00
Alyssa Rosenzweig
515faea62b agx: Fix packing of samplers in texture instrs
Typo in the handwritten packing code, oof!

Fixes incorrectly repeated shadows in Neverball (among many other bugs,
I assume). Huge thanks to Lina for the idea that this was the
bug -- fixing it was a breeze from there :-)

Fixes: 9f55538834 ("agx: Pack texture ops")
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Suggested-by: Asahi Lina <lina@asahilina.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17198>
(cherry picked from commit 47a3f1226c)
2022-08-08 14:55:10 -07:00
Tatsuyuki Ishi
25f9046ccd radv: Implement radv_flush_before_query_copy to workaround UE Vulkan bugs.
Cc: mesa-stable
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/5740

Reviewed-by: Timur Kristóf <timur.kristof@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/14208>
(cherry picked from commit abc4eda846)
2022-08-08 14:55:10 -07:00
Dmitry Osipenko
661d8de303 virgl: Fix unmapping of blob resources
OpenGL API calls like glClearBufferData() result in mapping/unmapping
of a given buffer by Mesa and unmapping of a host blob fails in
virglrenderer because VirGL driver uses command that is intended for
unmapping of a guest buffer. In particular this causes problem for the
"Total War: Warhammer" game that gets GL_OUT_OF_MEMORY error due to the
failed unmapping command. Fix this by setting the mapping usage flag in
accordance to the resource flags, allowing virgl_buffer_transfer_unmap()
to differentiate host buffer from guest.

Fixes: 3b54e5837a ("virgl: support PIPE_CAP_BUFFER_MAP_PERSISTENT_COHERENT")
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17914>
(cherry picked from commit 46396e97be)
2022-08-08 14:55:10 -07:00
Rob Clark
de6ee5b782 freedreno/gmem: Fix col0 calc
Fix typo in calculation of position of start of a row of tiles.  This
could otherwise cause an out-of-bounds access in the next patch.

Fixes: 81d85be9a5 freedreno/gmem: Reverse order of alternative tile rows
Signed-off-by: Rob Clark <robdclark@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17888>
(cherry picked from commit 2497741a1b)
2022-08-08 14:55:10 -07:00
Rob Clark
9b943044ac freedreno/drm: Fix potential bo cache vs export crash
Keep the list head valid (empty) after allocation from bo cache.  Avoids
a potential later crash in lookup_bo in the following sequence:

1. alloc, bo cache hit
2. export
3. re-import

Cc: mesa-stable
Fixes: f3cc0d2747 ("freedreno: import libdrm_freedreno + redesign submit")
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/6988
Signed-off-by: Rob Clark <robdclark@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17888>
(cherry picked from commit 8b3f2a9e5d)
2022-08-08 14:55:10 -07:00
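A minimal sketch of the idea behind the fix, using a generic linked list (not the actual freedreno implementation): when a BO is handed out from the cache, re-initialize its list node so a later export/re-import path that touches the node sees a valid empty head instead of stale cache links.

struct list_node { struct list_node *prev, *next; };

static void
list_init(struct list_node *n)
{
   n->prev = n;   /* an empty, self-referencing head */
   n->next = n;
}

struct bo { struct list_node cache_link; };

static struct bo *
bo_from_cache(struct bo *cached)
{
   /* The node may still point into the cache bucket it came from; reset
    * it so later list walks or unlinks cannot crash. */
   list_init(&cached->cache_link);
   return cached;
}
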
Dylan Baker
b1dbdecb27 .pick_status.json: Update to 1a3b086b06 2022-08-08 14:55:10 -07:00
Samuel Pitoiset
f8bdbbdd90 radv: implement VK_EXT_attachment_feedback_loop_layout
This extension introduces a new layout which allows applications
to both render and sample from the same image inside the same draw
(aka. feedback loops).

Previously, the GENERAL layout was used and this introduced some
rendering artifacts because the hw can't read&write DCC/HTILE for
the same image, and we try to keep it compressed on GFX10+.

This helps fix corruption with D3D9 and RPCS3 games, which
are candidates for feedback loops.

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/4411
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>

Reviewed-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17883>
2022-08-08 09:16:39 -04:00
Samuel Pitoiset
38d6ae933d vulkan: add support for VK_IMAGE_LAYOUT_ATTACHMENT_FEEDBACK_LOOP_OPTIMAL_EXT
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>

Reviewed-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17883>
2022-08-08 09:16:13 -04:00
Mike Blumenkrantz
2ce1c12477 vulkan: Update the XML and headers to 1.3.224
Reviewed-by: Timur Kristóf <timur.kristof@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Acked-by: Jason Ekstrand <jason.ekstrand@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17883>
2022-08-08 09:16:13 -04:00
Marek Olšák
2f18e16512 radeonsi: don't assume that TC_ACTION_ENA invalidates L1 cache on gfx9
Just got into a midnight discussion with a hw guy.
TC_ACTION_ENA apparently doesn't invalidate L1, so don't clear
the INV_VCACHE flag.

Fixes: 4056e953fe - radeonsi: move emit_cache_flush functions into si_gfx_cs.c

Reviewed-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17902>
(cherry picked from commit 279315fd73)
2022-08-05 10:10:01 -07:00
Lionel Landwerlin
eadc134dd8 anv: fixup PIPE_CONTROL restriction on gfx8
We're missing a condition that is currently papered over by having
ANV_PIPE_HDC_PIPELINE_FLUSH_BIT in the invalidate bits.

v2: rework with simplification (Caio)

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: mesa-stable
Reviewed-by: Caio Oliveira <caio.oliveira@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/16905>
(cherry picked from commit 5e21f47428)
2022-08-05 10:10:01 -07:00
Juan A. Suarez Romero
e16a613de0 vc4: properly restore vc4 debug option
Otherwise VC4_DEBUG does not work.

Fixes: c3f5d27631 ("vc4/v3d: restore calling debug_get_option_vc4/v3d_debug")
Signed-off-by: Juan A. Suarez Romero <jasuarez@igalia.com>
Reviewed-by: Eric Engestrom <eric@igalia.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17882>
(cherry picked from commit 644daa9743)
2022-08-05 10:10:00 -07:00
Dave Airlie
8cd9d2fcc0 draw: don't touch info values that aren't valid.
These shouldn't be accessed, and they show up as uninitialized
accesses in valgrind with piglit rasterpos.

Reviewed-by: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/10641>
(cherry picked from commit 5449e6d14c)
2022-08-05 10:09:59 -07:00
Mike Blumenkrantz
167af40dae zink: don't fixup sparse texops
this is broken, and these will never need to be fixed

Fixes: 3a47576687 ("zink: add a compiler pass to match up tex op dest types")

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17775>
(cherry picked from commit 32446f51a8)
2022-08-05 10:09:59 -07:00
Mike Blumenkrantz
b525edfce6 zink: add all format modifiers when adding for dmabuf export
adding LINEAR before was a good starter step, but LINEAR
might not actually be supported (e.g., nvidia)

cc: mesa-stable

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17775>
(cherry picked from commit 247b8f2924)
2022-08-05 10:09:58 -07:00
Mike Blumenkrantz
aa90b5cd12 zink: don't add modifiers if EXT_image_drm_format_modifier isn't present
cc: mesa-stable

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17775>
(cherry picked from commit 5e8ec87b68)
2022-08-05 10:09:58 -07:00
Mike Blumenkrantz
9234bdebed zink: use modifier_aspect to check for modifier plane in zink_resource_get_param
cc: mesa-stable

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17775>
(cherry picked from commit c824a53f35)
2022-08-05 10:09:57 -07:00
Mike Blumenkrantz
fda5f3f630 zink: demote dmabuf tiling to linear if modifiers aren't supported
this is effectively the same as LINEAR, and it still allows dmabuf creation

cc: mesa-stable

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17775>
(cherry picked from commit b59eb9c8b7)
2022-08-05 10:09:57 -07:00
Mike Blumenkrantz
349576d92f nine: check return on resource_get_handle
this has a return code, and if it returns false, this is probably an
exit condition

cc: mesa-stable

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17775>
(cherry picked from commit 188721d6d3)
2022-08-05 10:09:56 -07:00
Mike Blumenkrantz
0d7d35c84a zink: fix return for PIPE_CAP_DEPTH_CLIP_DISABLE
this uses the extension now

Fixes: 21ea19d504 ("zink: Always enable depth clamping, make depth clipping independent.")

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17775>
(cherry picked from commit 721f33cd0f)
2022-08-05 10:09:56 -07:00
Mike Blumenkrantz
1889d87783 zink: handle !half_pixel_center
the shader is already getting a -0.5,-0.5 bias, but the viewport also
needs to be shifted by 0.5 to match

cc: mesa-stable

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17775>
(cherry picked from commit 55a4a6b8dc)

Conflicts:
	src/gallium/drivers/zink/zink_state.c
2022-08-05 10:09:55 -07:00
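A small sketch of the coordinate adjustment, with a hypothetical helper (not the actual zink code): when the frontend requests !half_pixel_center, the shader already applies a (-0.5, -0.5) bias to gl_Position, so the viewport origin needs a matching +0.5 shift to keep pixel centers aligned.

#include <stdbool.h>

struct viewport { float x0, y0, width, height; };

static void
apply_half_pixel_center_fixup(struct viewport *vp, bool half_pixel_center)
{
   if (!half_pixel_center) {
      /* Compensate for the -0.5,-0.5 bias added to positions in the shader. */
      vp->x0 += 0.5f;
      vp->y0 += 0.5f;
   }
}
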
Mike Blumenkrantz
50e133465c zink: handle unscaled depth bias from nine
nine uses this to pass unscaled units for depth bias, which means
the units must be scaled based on the format of the depth buffer

cc: mesa-stable

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17775>
(cherry picked from commit 8a8edb310d)

Conflicts:
	src/gallium/drivers/zink/zink_screen.h
2022-08-05 10:06:14 -07:00
Mike Blumenkrantz
fdbabb07cf zink: drop mode_changed check from linewidth/depthbias draw updates
this doesn't need to be updated on primtype change since it's always
set

cc: mesa-stable

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17775>
(cherry picked from commit a912952c3e)
2022-08-05 10:04:54 -07:00
Mike Blumenkrantz
71b113251d zink: force a new framebuffer for clear_depth_stencil if the clear region is big
can't clear outside the framebuffer, so set a new one if necessary

Fixes: f1f08e3529 ("zink: massively simplify zink_clear_depth_stencil")

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17775>
(cherry picked from commit ff1fb9101f)
2022-08-05 10:04:54 -07:00
Mike Blumenkrantz
0ee8821b83 zink: force flush clears on fb change if fb geometry changes
Fixes: 66ceea7ed9 ("zink: lift clearing on fb state change up a level")

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17775>
(cherry picked from commit 80364c4d19)
2022-08-05 10:04:53 -07:00
Dylan Baker
3eda2a96a8 .pick_status.json: Update to 0a0205f045 2022-08-05 10:04:52 -07:00
pal1000
5c8aaa70e8 d3d12/dzn/spirv2dxil: Require version library
Fixes: b8328c9 ("microsoft/compiler: Blacklist DXIL validator 1.6 from 20348 SDK")

Closes: #6952

Closes: #6959

v2: Always lookup version library on Windows

Reviewed-by: Jesse Natalie <jenatali@microsoft.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17837>
(cherry picked from commit ec46a85c4f)
2022-08-04 11:33:21 -07:00
Mike Blumenkrantz
081fd3a4f4 zink: init cache_put program fence on program creation
re-initializing here might overwrite an existing cache_put job

cc: mesa-stable

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17225>
(cherry picked from commit 3d58642984)
2022-08-04 11:33:21 -07:00
Dave Airlie
08adb7bb9d gallivm: fix printf hook for cached shaders.
I've noticed this before but never tracked it down; it's annoying.

The printf hooks would crash with debug shaders when they were loaded
from the cache. This was because nothing was initing the printf hook
in the cached path so the global was never set.

No problems just always creating this afaics.

Fixes: 333ee94285 ("gallivm: rework debug printf hook to use global mapping.")
Reviewed-by: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17867>
(cherry picked from commit 4c0a7a169d)
2022-08-04 11:33:21 -07:00
Eric Engestrom
c702465d56 bin/gen_release_notes.py: bump advertised vulkan version to 1.3
Fixes: df8ac77af8 ("anv: Advertise Vulkan 1.3")
Fixes: 08c6f437cf ("radv: advertise Vulkan 1.3")
Signed-off-by: Eric Engestrom <eric@igalia.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17759>
(cherry picked from commit 446d2039cb)
2022-08-04 11:33:21 -07:00
Mike Blumenkrantz
5e00b2d8a7 zink: use modifier feature flags during surface creation when necessary
cc: mesa-stable

Acked-by: Emma Anholt <emma@anholt.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17773>
(cherry picked from commit 22eff86eaf)
2022-08-04 11:33:21 -07:00
Mike Blumenkrantz
46fc1b37b5 zink: store VkFormatFeatureFlags on creation
Acked-by: Emma Anholt <emma@anholt.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17773>
(cherry picked from commit fffd57ef61)
2022-08-04 11:33:21 -07:00
Mike Blumenkrantz
5814485a10 zink: handle mutable swapchain images with dmabuf
if a non-kopper swapchain image supports srgb, add a VkImageFormatListCreateInfo
to permit srgb mutability and avoid violating spec

cc: mesa-stable

Acked-by: Emma Anholt <emma@anholt.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17773>
(cherry picked from commit 28ee911ad6)
2022-08-04 10:38:11 -07:00
Dylan Baker
16d299e40b .pick_status.json: Update to 8e6bdb2ed3 2022-08-04 10:38:10 -07:00
Dylan Baker
f8367fc41e VERSION: bump for 22.2.0-rc1 2022-08-03 11:11:03 -07:00
2813 changed files with 187152 additions and 299681 deletions


@@ -35,10 +35,7 @@ trim_trailing_whitespace = false
indent_style = space
indent_size = 2
[*.ps1]
indent_style = space
indent_size = 2
[*.rs]
indent_style = space
indent_size = 4


@@ -1,17 +1,12 @@
name: macOS-CI
name: CI
on: push
permissions:
contents: read
jobs:
macOS-CI:
strategy:
matrix:
glx_option: ['dri', 'xlib']
CI:
runs-on: macos-latest
env:
GALLIUM_DUMP_CPU: true
steps:
- name: Checkout
uses: actions/checkout@v3
@@ -35,24 +30,10 @@ jobs:
- name: Install Mako
run: pip3 install --user mako
- name: Configure
run: |
cat > native_config <<EOL
[binaries]
llvm-config = '/usr/local/opt/llvm/bin/llvm-config'
EOL
meson . build --native-file=native_config -Dbuild-tests=true -Dosmesa=true -Dgallium-drivers=swrast -Dglx=${{ matrix.glx_option }}
run: meson . build -Dbuild-tests=true -Dosmesa=true
- name: Build
run: meson compile -C build
- name: Test
run: meson test -C build --print-errorlogs
- name: Install
run: meson install -C build --destdir $PWD/install
- name: 'Upload Artifact'
if: always()
uses: actions/upload-artifact@v3
with:
name: macos-${{ matrix.glx_option }}-result
path: |
build/meson-logs/
install/
retention-days: 5
run: meson install -C build


@@ -1,6 +1,6 @@
variables:
FDO_UPSTREAM_REPO: mesa/mesa
MESA_TEMPLATES_COMMIT: &ci-templates-commit d5aa3941aa03c2f716595116354fb81eb8012acb
MESA_TEMPLATES_COMMIT: &ci-templates-commit 290b79e0e78eab67a83766f4e9691be554fc4afd
CI_PRE_CLONE_SCRIPT: |-
set -o xtrace
wget -q -O download-git-cache.sh ${CI_PROJECT_URL}/-/raw/${CI_COMMIT_SHA}/.gitlab-ci/download-git-cache.sh
@@ -22,7 +22,6 @@ variables:
MICROSOFT_FARM: "online"
LIMA_FARM: "online"
IGALIA_FARM: "online"
ANHOLT_FARM: "online"
default:
before_script:
@@ -71,6 +70,7 @@ include:
- local: 'src/gallium/drivers/lima/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/llvmpipe/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/nouveau/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/radeonsi/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/softpipe/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/virgl/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/zink/ci/gitlab-ci.yml'
@@ -78,7 +78,6 @@ include:
- local: 'src/intel/ci/gitlab-ci.yml'
- local: 'src/microsoft/ci/gitlab-ci.yml'
- local: 'src/panfrost/ci/gitlab-ci.yml'
- local: 'src/virtio/ci/gitlab-ci.yml'
stages:
- sanity
@@ -86,7 +85,6 @@ stages:
- git-archive
- build-x86_64
- build-misc
- lint
- amd
- intel
- nouveau
@@ -132,7 +130,7 @@ stages:
- .build-rules
script:
- apk --no-cache add graphviz doxygen
- pip3 install sphinx===5.1.1 breathe===4.34.0 mako===1.2.3 sphinx_rtd_theme===1.0.0
- pip3 install sphinx breathe mako sphinx_rtd_theme
- docs/doxygen-wrapper.py --out-dir=docs/doxygen_xml
- sphinx-build -W -b html docs public


@@ -3,8 +3,9 @@ version: 1
# Rules to match for a machine to qualify
target:
{% if tags %}
{% set b2ctags = tags.split(',') %}
tags:
{% for tag in tags %}
{% for tag in b2ctags %}
- '{{ tag | trim }}'
{% endfor %}
{% endif %}


@@ -24,7 +24,6 @@
from jinja2 import Environment, FileSystemLoader
from argparse import ArgumentParser
from os import environ, path
import json
parser = ArgumentParser()
@@ -70,10 +69,7 @@ values['log_level'] = args.log_level
values['poweroff_delay'] = args.poweroff_delay
values['session_end_regex'] = args.session_end_regex
values['session_reboot_regex'] = args.session_reboot_regex
try:
values['tags'] = json.loads(args.tags)
except json.decoder.JSONDecodeError:
values['tags'] = args.tags.split(",")
values['tags'] = args.tags
values['template'] = args.template
values['timeout_boot_minutes'] = args.timeout_boot_minutes
values['timeout_boot_retries'] = args.timeout_boot_retries


@@ -164,16 +164,19 @@ def main():
'--test-timeout', type=int, help='Test phase timeout (minutes)', required=True)
args = parser.parse_args()
servo = CrosServoRun(args.cpu, args.ec, args.test_timeout * 60)
while True:
servo = CrosServoRun(args.cpu, args.ec, args.test_timeout * 60)
retval = servo.run()
# power down the CPU on the device
servo.ec_write("power off\n")
servo.close()
if retval != 2:
sys.exit(retval)
break
# power down the CPU on the device
servo.ec_write("power off\n")
servo.close()
sys.exit(retval)
if __name__ == '__main__':


@@ -106,25 +106,20 @@ if echo "$BM_KERNEL $BM_DTB" | grep -q http; then
wget $BM_DTB -O dtb
cat kernel dtb > Image.gz-dtb
rm kernel
rm kernel dtb
else
cat $BM_KERNEL $BM_DTB > Image.gz-dtb
cp $BM_DTB dtb
fi
export PATH=$BM:$PATH
mkdir -p artifacts
mkbootimg.py \
--kernel Image.gz-dtb \
--ramdisk rootfs.cpio.gz \
--dtb dtb \
--cmdline "$BM_CMDLINE" \
$BM_MKBOOT_PARAMS \
--header_version 2 \
-o artifacts/fastboot.img
abootimg \
--create artifacts/fastboot.img \
-k Image.gz-dtb \
-r rootfs.cpio.gz \
-c cmdline="$BM_CMDLINE"
rm Image.gz-dtb
rm Image.gz-dtb dtb
export PATH=$BM:$PATH
# Start background command for talking to serial if we have one.
if [ -n "$BM_SERIAL_SCRIPT" ]; then


@@ -1,569 +0,0 @@
#!/usr/bin/env python3
#
# Copyright 2015, The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Creates the boot image."""
from argparse import (ArgumentParser, ArgumentTypeError,
FileType, RawDescriptionHelpFormatter)
from hashlib import sha1
from os import fstat
from struct import pack
import array
import collections
import os
import re
import subprocess
import tempfile
# Constant and structure definition is in
# system/tools/mkbootimg/include/bootimg/bootimg.h
BOOT_MAGIC = 'ANDROID!'
BOOT_MAGIC_SIZE = 8
BOOT_NAME_SIZE = 16
BOOT_ARGS_SIZE = 512
BOOT_EXTRA_ARGS_SIZE = 1024
BOOT_IMAGE_HEADER_V1_SIZE = 1648
BOOT_IMAGE_HEADER_V2_SIZE = 1660
BOOT_IMAGE_HEADER_V3_SIZE = 1580
BOOT_IMAGE_HEADER_V3_PAGESIZE = 4096
BOOT_IMAGE_HEADER_V4_SIZE = 1584
BOOT_IMAGE_V4_SIGNATURE_SIZE = 4096
VENDOR_BOOT_MAGIC = 'VNDRBOOT'
VENDOR_BOOT_MAGIC_SIZE = 8
VENDOR_BOOT_NAME_SIZE = BOOT_NAME_SIZE
VENDOR_BOOT_ARGS_SIZE = 2048
VENDOR_BOOT_IMAGE_HEADER_V3_SIZE = 2112
VENDOR_BOOT_IMAGE_HEADER_V4_SIZE = 2128
VENDOR_RAMDISK_TYPE_NONE = 0
VENDOR_RAMDISK_TYPE_PLATFORM = 1
VENDOR_RAMDISK_TYPE_RECOVERY = 2
VENDOR_RAMDISK_TYPE_DLKM = 3
VENDOR_RAMDISK_NAME_SIZE = 32
VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE = 16
VENDOR_RAMDISK_TABLE_ENTRY_V4_SIZE = 108
# Names with special meaning, mustn't be specified in --ramdisk_name.
VENDOR_RAMDISK_NAME_BLOCKLIST = {b'default'}
PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT = '--vendor_ramdisk_fragment'
def filesize(f):
if f is None:
return 0
try:
return fstat(f.fileno()).st_size
except OSError:
return 0
def update_sha(sha, f):
if f:
sha.update(f.read())
f.seek(0)
sha.update(pack('I', filesize(f)))
else:
sha.update(pack('I', 0))
def pad_file(f, padding):
pad = (padding - (f.tell() & (padding - 1))) & (padding - 1)
f.write(pack(str(pad) + 'x'))
def get_number_of_pages(image_size, page_size):
"""calculates the number of pages required for the image"""
return (image_size + page_size - 1) // page_size
def get_recovery_dtbo_offset(args):
"""calculates the offset of recovery_dtbo image in the boot image"""
num_header_pages = 1 # header occupies a page
num_kernel_pages = get_number_of_pages(filesize(args.kernel), args.pagesize)
num_ramdisk_pages = get_number_of_pages(filesize(args.ramdisk),
args.pagesize)
num_second_pages = get_number_of_pages(filesize(args.second), args.pagesize)
dtbo_offset = args.pagesize * (num_header_pages + num_kernel_pages +
num_ramdisk_pages + num_second_pages)
return dtbo_offset
def write_header_v3_and_above(args):
if args.header_version > 3:
boot_header_size = BOOT_IMAGE_HEADER_V4_SIZE
else:
boot_header_size = BOOT_IMAGE_HEADER_V3_SIZE
args.output.write(pack(f'{BOOT_MAGIC_SIZE}s', BOOT_MAGIC.encode()))
# kernel size in bytes
args.output.write(pack('I', filesize(args.kernel)))
# ramdisk size in bytes
args.output.write(pack('I', filesize(args.ramdisk)))
# os version and patch level
args.output.write(pack('I', (args.os_version << 11) | args.os_patch_level))
args.output.write(pack('I', boot_header_size))
# reserved
args.output.write(pack('4I', 0, 0, 0, 0))
# version of boot image header
args.output.write(pack('I', args.header_version))
args.output.write(pack(f'{BOOT_ARGS_SIZE + BOOT_EXTRA_ARGS_SIZE}s',
args.cmdline))
if args.header_version >= 4:
# The signature used to verify boot image v4.
args.output.write(pack('I', BOOT_IMAGE_V4_SIGNATURE_SIZE))
pad_file(args.output, BOOT_IMAGE_HEADER_V3_PAGESIZE)
def write_vendor_boot_header(args):
if filesize(args.dtb) == 0:
raise ValueError('DTB image must not be empty.')
if args.header_version > 3:
vendor_ramdisk_size = args.vendor_ramdisk_total_size
vendor_boot_header_size = VENDOR_BOOT_IMAGE_HEADER_V4_SIZE
else:
vendor_ramdisk_size = filesize(args.vendor_ramdisk)
vendor_boot_header_size = VENDOR_BOOT_IMAGE_HEADER_V3_SIZE
args.vendor_boot.write(pack(f'{VENDOR_BOOT_MAGIC_SIZE}s',
VENDOR_BOOT_MAGIC.encode()))
# version of boot image header
args.vendor_boot.write(pack('I', args.header_version))
# flash page size
args.vendor_boot.write(pack('I', args.pagesize))
# kernel physical load address
args.vendor_boot.write(pack('I', args.base + args.kernel_offset))
# ramdisk physical load address
args.vendor_boot.write(pack('I', args.base + args.ramdisk_offset))
# ramdisk size in bytes
args.vendor_boot.write(pack('I', vendor_ramdisk_size))
args.vendor_boot.write(pack(f'{VENDOR_BOOT_ARGS_SIZE}s',
args.vendor_cmdline))
# kernel tags physical load address
args.vendor_boot.write(pack('I', args.base + args.tags_offset))
# asciiz product name
args.vendor_boot.write(pack(f'{VENDOR_BOOT_NAME_SIZE}s', args.board))
# header size in bytes
args.vendor_boot.write(pack('I', vendor_boot_header_size))
# dtb size in bytes
args.vendor_boot.write(pack('I', filesize(args.dtb)))
# dtb physical load address
args.vendor_boot.write(pack('Q', args.base + args.dtb_offset))
if args.header_version > 3:
vendor_ramdisk_table_size = (args.vendor_ramdisk_table_entry_num *
VENDOR_RAMDISK_TABLE_ENTRY_V4_SIZE)
# vendor ramdisk table size in bytes
args.vendor_boot.write(pack('I', vendor_ramdisk_table_size))
# number of vendor ramdisk table entries
args.vendor_boot.write(pack('I', args.vendor_ramdisk_table_entry_num))
# vendor ramdisk table entry size in bytes
args.vendor_boot.write(pack('I', VENDOR_RAMDISK_TABLE_ENTRY_V4_SIZE))
# bootconfig section size in bytes
args.vendor_boot.write(pack('I', filesize(args.vendor_bootconfig)))
pad_file(args.vendor_boot, args.pagesize)
def write_header(args):
if args.header_version > 4:
raise ValueError(
f'Boot header version {args.header_version} not supported')
if args.header_version in {3, 4}:
return write_header_v3_and_above(args)
ramdisk_load_address = ((args.base + args.ramdisk_offset)
if filesize(args.ramdisk) > 0 else 0)
second_load_address = ((args.base + args.second_offset)
if filesize(args.second) > 0 else 0)
args.output.write(pack(f'{BOOT_MAGIC_SIZE}s', BOOT_MAGIC.encode()))
# kernel size in bytes
args.output.write(pack('I', filesize(args.kernel)))
# kernel physical load address
args.output.write(pack('I', args.base + args.kernel_offset))
# ramdisk size in bytes
args.output.write(pack('I', filesize(args.ramdisk)))
# ramdisk physical load address
args.output.write(pack('I', ramdisk_load_address))
# second bootloader size in bytes
args.output.write(pack('I', filesize(args.second)))
# second bootloader physical load address
args.output.write(pack('I', second_load_address))
# kernel tags physical load address
args.output.write(pack('I', args.base + args.tags_offset))
# flash page size
args.output.write(pack('I', args.pagesize))
# version of boot image header
args.output.write(pack('I', args.header_version))
# os version and patch level
args.output.write(pack('I', (args.os_version << 11) | args.os_patch_level))
# asciiz product name
args.output.write(pack(f'{BOOT_NAME_SIZE}s', args.board))
args.output.write(pack(f'{BOOT_ARGS_SIZE}s', args.cmdline))
sha = sha1()
update_sha(sha, args.kernel)
update_sha(sha, args.ramdisk)
update_sha(sha, args.second)
if args.header_version > 0:
update_sha(sha, args.recovery_dtbo)
if args.header_version > 1:
update_sha(sha, args.dtb)
img_id = pack('32s', sha.digest())
args.output.write(img_id)
args.output.write(pack(f'{BOOT_EXTRA_ARGS_SIZE}s', args.extra_cmdline))
if args.header_version > 0:
if args.recovery_dtbo:
# recovery dtbo size in bytes
args.output.write(pack('I', filesize(args.recovery_dtbo)))
# recovery dtbo offset in the boot image
args.output.write(pack('Q', get_recovery_dtbo_offset(args)))
else:
# Set to zero if no recovery dtbo
args.output.write(pack('I', 0))
args.output.write(pack('Q', 0))
# Populate boot image header size for header versions 1 and 2.
if args.header_version == 1:
args.output.write(pack('I', BOOT_IMAGE_HEADER_V1_SIZE))
elif args.header_version == 2:
args.output.write(pack('I', BOOT_IMAGE_HEADER_V2_SIZE))
if args.header_version > 1:
if filesize(args.dtb) == 0:
raise ValueError('DTB image must not be empty.')
# dtb size in bytes
args.output.write(pack('I', filesize(args.dtb)))
# dtb physical load address
args.output.write(pack('Q', args.base + args.dtb_offset))
pad_file(args.output, args.pagesize)
return img_id
class AsciizBytes:
"""Parses a string and encodes it as an asciiz bytes object.
>>> AsciizBytes(bufsize=4)('foo')
b'foo\\x00'
>>> AsciizBytes(bufsize=4)('foob')
Traceback (most recent call last):
...
argparse.ArgumentTypeError: Encoded asciiz length exceeded: max 4, got 5
"""
def __init__(self, bufsize):
self.bufsize = bufsize
def __call__(self, arg):
arg_bytes = arg.encode() + b'\x00'
if len(arg_bytes) > self.bufsize:
raise ArgumentTypeError(
'Encoded asciiz length exceeded: '
f'max {self.bufsize}, got {len(arg_bytes)}')
return arg_bytes
class VendorRamdiskTableBuilder:
"""Vendor ramdisk table builder.
Attributes:
entries: A list of VendorRamdiskTableEntry namedtuple.
ramdisk_total_size: Total size in bytes of all ramdisks in the table.
"""
VendorRamdiskTableEntry = collections.namedtuple( # pylint: disable=invalid-name
'VendorRamdiskTableEntry',
['ramdisk_path', 'ramdisk_size', 'ramdisk_offset', 'ramdisk_type',
'ramdisk_name', 'board_id'])
def __init__(self):
self.entries = []
self.ramdisk_total_size = 0
self.ramdisk_names = set()
def add_entry(self, ramdisk_path, ramdisk_type, ramdisk_name, board_id):
# Strip any trailing null for simple comparison.
stripped_ramdisk_name = ramdisk_name.rstrip(b'\x00')
if stripped_ramdisk_name in VENDOR_RAMDISK_NAME_BLOCKLIST:
raise ValueError(
f'Banned vendor ramdisk name: {stripped_ramdisk_name}')
if stripped_ramdisk_name in self.ramdisk_names:
raise ValueError(
f'Duplicated vendor ramdisk name: {stripped_ramdisk_name}')
self.ramdisk_names.add(stripped_ramdisk_name)
if board_id is None:
board_id = array.array(
'I', [0] * VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE)
else:
board_id = array.array('I', board_id)
if len(board_id) != VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE:
raise ValueError('board_id size must be '
f'{VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE}')
with open(ramdisk_path, 'rb') as f:
ramdisk_size = filesize(f)
self.entries.append(self.VendorRamdiskTableEntry(
ramdisk_path, ramdisk_size, self.ramdisk_total_size, ramdisk_type,
ramdisk_name, board_id))
self.ramdisk_total_size += ramdisk_size
def write_ramdisks_padded(self, fout, alignment):
for entry in self.entries:
with open(entry.ramdisk_path, 'rb') as f:
fout.write(f.read())
pad_file(fout, alignment)
def write_entries_padded(self, fout, alignment):
for entry in self.entries:
fout.write(pack('I', entry.ramdisk_size))
fout.write(pack('I', entry.ramdisk_offset))
fout.write(pack('I', entry.ramdisk_type))
fout.write(pack(f'{VENDOR_RAMDISK_NAME_SIZE}s',
entry.ramdisk_name))
fout.write(entry.board_id)
pad_file(fout, alignment)
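# Layout note (a sketch assuming the usual AOSP constants defined earlier in
# this file, e.g. VENDOR_RAMDISK_NAME_SIZE = 32 and a 16-word board_id): each
# packed entry is three 4-byte integers plus the name buffer plus the board_id
# vector, i.e. 108 bytes, matching the VENDOR_RAMDISK_TABLE_ENTRY_V4_SIZE value
# that the vendor boot header writer records as the per-entry size.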
def write_padded_file(f_out, f_in, padding):
if f_in is None:
return
f_out.write(f_in.read())
pad_file(f_out, padding)
def parse_int(x):
return int(x, 0)
def parse_os_version(x):
match = re.search(r'^(\d{1,3})(?:\.(\d{1,3})(?:\.(\d{1,3}))?)?', x)
if match:
a = int(match.group(1))
b = c = 0
if match.lastindex >= 2:
b = int(match.group(2))
if match.lastindex == 3:
c = int(match.group(3))
# 7 bits allocated for each field
assert a < 128
assert b < 128
assert c < 128
return (a << 14) | (b << 7) | c
return 0
def parse_os_patch_level(x):
match = re.search(r'^(\d{4})-(\d{2})(?:-(\d{2}))?', x)
if match:
y = int(match.group(1)) - 2000
m = int(match.group(2))
# 7 bits allocated for the year, 4 bits for the month
assert 0 <= y < 128
assert 0 < m <= 12
return (y << 4) | m
return 0
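# Worked example (hypothetical inputs): parse_os_version('12.1.2') returns
# (12 << 14) | (1 << 7) | 2 == 196738, and parse_os_patch_level('2022-08')
# returns ((2022 - 2000) << 4) | 8 == 360; write_header() later combines the
# two as (os_version << 11) | os_patch_level in a single 32-bit field.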
def parse_vendor_ramdisk_type(x):
type_dict = {
'none': VENDOR_RAMDISK_TYPE_NONE,
'platform': VENDOR_RAMDISK_TYPE_PLATFORM,
'recovery': VENDOR_RAMDISK_TYPE_RECOVERY,
'dlkm': VENDOR_RAMDISK_TYPE_DLKM,
}
if x.lower() in type_dict:
return type_dict[x.lower()]
return parse_int(x)
def get_vendor_boot_v4_usage():
return """vendor boot version 4 arguments:
--ramdisk_type {none,platform,recovery,dlkm}
specify the type of the ramdisk
--ramdisk_name NAME
specify the name of the ramdisk
--board_id{0..15} NUMBER
specify the value of the board_id vector, defaults to 0
--vendor_ramdisk_fragment VENDOR_RAMDISK_FILE
path to the vendor ramdisk file
These options can be specified multiple times, where each vendor ramdisk
option group ends with a --vendor_ramdisk_fragment option.
Each option group appends an additional ramdisk to the vendor boot image.
"""
def parse_vendor_ramdisk_args(args, args_list):
"""Parses vendor ramdisk specific arguments.
Args:
args: An argparse.Namespace object. Parsed results are stored into this
object.
args_list: A list of argument strings to be parsed.
Returns:
A list of argument strings that are not parsed by this method.
"""
parser = ArgumentParser(add_help=False)
parser.add_argument('--ramdisk_type', type=parse_vendor_ramdisk_type,
default=VENDOR_RAMDISK_TYPE_NONE)
parser.add_argument('--ramdisk_name',
type=AsciizBytes(bufsize=VENDOR_RAMDISK_NAME_SIZE),
required=True)
for i in range(VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE):
parser.add_argument(f'--board_id{i}', type=parse_int, default=0)
parser.add_argument(PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT, required=True)
unknown_args = []
vendor_ramdisk_table_builder = VendorRamdiskTableBuilder()
if args.vendor_ramdisk is not None:
vendor_ramdisk_table_builder.add_entry(
args.vendor_ramdisk.name, VENDOR_RAMDISK_TYPE_PLATFORM, b'', None)
while PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT in args_list:
idx = args_list.index(PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT) + 2
vendor_ramdisk_args = args_list[:idx]
args_list = args_list[idx:]
ramdisk_args, extra_args = parser.parse_known_args(vendor_ramdisk_args)
ramdisk_args_dict = vars(ramdisk_args)
unknown_args.extend(extra_args)
ramdisk_path = ramdisk_args.vendor_ramdisk_fragment
ramdisk_type = ramdisk_args.ramdisk_type
ramdisk_name = ramdisk_args.ramdisk_name
board_id = [ramdisk_args_dict[f'board_id{i}']
for i in range(VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE)]
vendor_ramdisk_table_builder.add_entry(ramdisk_path, ramdisk_type,
ramdisk_name, board_id)
if len(args_list) > 0:
unknown_args.extend(args_list)
args.vendor_ramdisk_total_size = (vendor_ramdisk_table_builder
.ramdisk_total_size)
args.vendor_ramdisk_table_entry_num = len(vendor_ramdisk_table_builder
.entries)
args.vendor_ramdisk_table_builder = vendor_ramdisk_table_builder
return unknown_args
def parse_cmdline():
version_parser = ArgumentParser(add_help=False)
version_parser.add_argument('--header_version', type=parse_int, default=0)
if version_parser.parse_known_args()[0].header_version < 3:
# For boot header v0 to v2, the kernel commandline field is split into
# two fields, cmdline and extra_cmdline. Both fields are asciiz strings,
# so we subtract one here to ensure the encoded string plus the
# null terminator fits within the combined buffer size.
cmdline_size = BOOT_ARGS_SIZE + BOOT_EXTRA_ARGS_SIZE - 1
else:
cmdline_size = BOOT_ARGS_SIZE + BOOT_EXTRA_ARGS_SIZE
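# For example, assuming the usual values BOOT_ARGS_SIZE = 512 and
# BOOT_EXTRA_ARGS_SIZE = 1024 defined earlier in this file, --cmdline accepts
# up to 1534 characters for header v0-v2 (1535 bytes once the NUL terminator
# is appended) and up to 1535 characters for v3 and above.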
parser = ArgumentParser(formatter_class=RawDescriptionHelpFormatter,
epilog=get_vendor_boot_v4_usage())
parser.add_argument('--kernel', type=FileType('rb'),
help='path to the kernel')
parser.add_argument('--ramdisk', type=FileType('rb'),
help='path to the ramdisk')
parser.add_argument('--second', type=FileType('rb'),
help='path to the second bootloader')
parser.add_argument('--dtb', type=FileType('rb'), help='path to the dtb')
dtbo_group = parser.add_mutually_exclusive_group()
dtbo_group.add_argument('--recovery_dtbo', type=FileType('rb'),
help='path to the recovery DTBO')
dtbo_group.add_argument('--recovery_acpio', type=FileType('rb'),
metavar='RECOVERY_ACPIO', dest='recovery_dtbo',
help='path to the recovery ACPIO')
parser.add_argument('--cmdline', type=AsciizBytes(bufsize=cmdline_size),
default='', help='kernel command line arguments')
parser.add_argument('--vendor_cmdline',
type=AsciizBytes(bufsize=VENDOR_BOOT_ARGS_SIZE),
default='',
help='vendor boot kernel command line arguments')
parser.add_argument('--base', type=parse_int, default=0x10000000,
help='base address')
parser.add_argument('--kernel_offset', type=parse_int, default=0x00008000,
help='kernel offset')
parser.add_argument('--ramdisk_offset', type=parse_int, default=0x01000000,
help='ramdisk offset')
parser.add_argument('--second_offset', type=parse_int, default=0x00f00000,
help='second bootloader offset')
parser.add_argument('--dtb_offset', type=parse_int, default=0x01f00000,
help='dtb offset')
parser.add_argument('--os_version', type=parse_os_version, default=0,
help='operating system version')
parser.add_argument('--os_patch_level', type=parse_os_patch_level,
default=0, help='operating system patch level')
parser.add_argument('--tags_offset', type=parse_int, default=0x00000100,
help='tags offset')
parser.add_argument('--board', type=AsciizBytes(bufsize=BOOT_NAME_SIZE),
default='', help='board name')
parser.add_argument('--pagesize', type=parse_int,
choices=[2**i for i in range(11, 15)], default=2048,
help='page size')
parser.add_argument('--id', action='store_true',
help='print the image ID on standard output')
parser.add_argument('--header_version', type=parse_int, default=0,
help='boot image header version')
parser.add_argument('-o', '--output', type=FileType('wb'),
help='output file name')
parser.add_argument('--gki_signing_algorithm',
help='GKI signing algorithm to use')
parser.add_argument('--gki_signing_key',
help='path to RSA private key file')
parser.add_argument('--gki_signing_signature_args',
help='other hash arguments passed to avbtool')
parser.add_argument('--gki_signing_avbtool_path',
help='path to avbtool for boot signature generation')
parser.add_argument('--vendor_boot', type=FileType('wb'),
help='vendor boot output file name')
parser.add_argument('--vendor_ramdisk', type=FileType('rb'),
help='path to the vendor ramdisk')
parser.add_argument('--vendor_bootconfig', type=FileType('rb'),
help='path to the vendor bootconfig file')
args, extra_args = parser.parse_known_args()
if args.vendor_boot is not None and args.header_version > 3:
extra_args = parse_vendor_ramdisk_args(args, extra_args)
if len(extra_args) > 0:
raise ValueError(f'Unrecognized arguments: {extra_args}')
if args.header_version < 3:
args.extra_cmdline = args.cmdline[BOOT_ARGS_SIZE-1:]
args.cmdline = args.cmdline[:BOOT_ARGS_SIZE-1] + b'\x00'
assert len(args.cmdline) <= BOOT_ARGS_SIZE
assert len(args.extra_cmdline) <= BOOT_EXTRA_ARGS_SIZE
return args
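# A minimal legacy invocation might look like this (script name and paths are
# hypothetical):
#   mkbootimg.py --header_version 2 --kernel Image --ramdisk ramdisk.img \
#       --dtb board.dtb --cmdline 'console=ttyS0' --os_version 12.0.0 \
#       --os_patch_level 2022-08 -o boot.img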
def add_boot_image_signature(args, pagesize):
"""Adds the boot image signature.
Note that the signature will only be verified in VTS to ensure a
generic boot.img is used. It will not be used by the device
bootloader at boot time. The bootloader should only verify
the boot vbmeta at the end of the boot partition (or in the top-level
vbmeta partition) via the Android Verified Boot process, when the
device boots.
"""
args.output.flush() # Flush the buffer for signature calculation.
# Appends zeros if the signing key is not specified.
if not args.gki_signing_key or not args.gki_signing_algorithm:
zeros = b'\x00' * BOOT_IMAGE_V4_SIGNATURE_SIZE
args.output.write(zeros)
pad_file(args.output, pagesize)
return
avbtool = 'avbtool' # Used from otatools.zip or Android build env.
# We need to specify the path of avbtool in build/core/Makefile,
# because avbtool is not guaranteed to be in $PATH there.
if args.gki_signing_avbtool_path:
avbtool = args.gki_signing_avbtool_path
# Need to specify a value of --partition_size for avbtool to work.
# We use 64 MB below, but avbtool will not resize the boot image to
# this size because --do_not_append_vbmeta_image is also specified.
avbtool_cmd = [
avbtool, 'add_hash_footer',
'--partition_name', 'boot',
'--partition_size', str(64 * 1024 * 1024),
'--image', args.output.name,
'--algorithm', args.gki_signing_algorithm,
'--key', args.gki_signing_key,
'--salt', 'd00df00d'] # TODO: use a hash of kernel/ramdisk as the salt.
# Additional arguments passed to avbtool.
if args.gki_signing_signature_args:
avbtool_cmd += args.gki_signing_signature_args.split()
# Output the signed vbmeta to a separate file, then append it to boot.img
# as the boot signature.
with tempfile.TemporaryDirectory() as temp_out_dir:
boot_signature_output = os.path.join(temp_out_dir, 'boot_signature')
avbtool_cmd += ['--do_not_append_vbmeta_image',
'--output_vbmeta_image', boot_signature_output]
subprocess.check_call(avbtool_cmd)
with open(boot_signature_output, 'rb') as boot_signature:
if filesize(boot_signature) > BOOT_IMAGE_V4_SIGNATURE_SIZE:
raise ValueError(
f'boot signature size is > {BOOT_IMAGE_V4_SIGNATURE_SIZE}')
write_padded_file(args.output, boot_signature, pagesize)
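# For reference, with a hypothetical SHA256_RSA2048 key the assembled command
# looks roughly like:
#   avbtool add_hash_footer --partition_name boot --partition_size 67108864 \
#       --image boot.img --algorithm SHA256_RSA2048 --key boot_key.pem \
#       --salt d00df00d --do_not_append_vbmeta_image \
#       --output_vbmeta_image <tmpdir>/boot_signature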
def write_data(args, pagesize):
write_padded_file(args.output, args.kernel, pagesize)
write_padded_file(args.output, args.ramdisk, pagesize)
write_padded_file(args.output, args.second, pagesize)
if args.header_version > 0 and args.header_version < 3:
write_padded_file(args.output, args.recovery_dtbo, pagesize)
if args.header_version == 2:
write_padded_file(args.output, args.dtb, pagesize)
if args.header_version >= 4:
add_boot_image_signature(args, pagesize)
def write_vendor_boot_data(args):
if args.header_version > 3:
builder = args.vendor_ramdisk_table_builder
builder.write_ramdisks_padded(args.vendor_boot, args.pagesize)
write_padded_file(args.vendor_boot, args.dtb, args.pagesize)
builder.write_entries_padded(args.vendor_boot, args.pagesize)
write_padded_file(args.vendor_boot, args.vendor_bootconfig,
args.pagesize)
else:
write_padded_file(args.vendor_boot, args.vendor_ramdisk, args.pagesize)
write_padded_file(args.vendor_boot, args.dtb, args.pagesize)
def main():
args = parse_cmdline()
if args.vendor_boot is not None:
if args.header_version not in {3, 4}:
raise ValueError(
'--vendor_boot not compatible with given header version')
if args.header_version == 3 and args.vendor_ramdisk is None:
raise ValueError('--vendor_ramdisk missing or invalid')
write_vendor_boot_header(args)
write_vendor_boot_data(args)
if args.output is not None:
if args.second is not None and args.header_version > 2:
raise ValueError(
'--second not compatible with given header version')
img_id = write_header(args)
if args.header_version > 2:
write_data(args, BOOT_IMAGE_HEADER_V3_PAGESIZE)
else:
write_data(args, args.pagesize)
if args.id and img_id is not None:
print('0x' + ''.join(f'{octet:02x}' for octet in img_id))
if __name__ == '__main__':
main()


@@ -115,19 +115,6 @@ LABEL primary
APPEND \${cbootargs} $BM_CMDLINE
EOF
# Set up the pxelinux config for Jetson TK1
cat <<EOF >/tftp/pxelinux.cfg/default-arm-tegra124-jetson-tk1
PROMPT 0
TIMEOUT 30
DEFAULT primary
MENU TITLE jetson TK1 boot options
LABEL primary
MENU LABEL CI kernel on TFTP
LINUX zImage
FDT tegra124-jetson-tk1.dtb
APPEND \${cbootargs} $BM_CMDLINE
EOF
# Create the rootfs in the NFS directory
mkdir -p /nfs/results
. $BM/rootfs-setup.sh /nfs


@@ -74,11 +74,6 @@ class PoERun:
self.print_error("nouveau jetson boot bug, retrying.")
return 2
# network fail on tk1
if re.search("NETDEV WATCHDOG:.* transmit queue 0 timed out", line):
self.print_error("nouveau jetson tk1 network fail, retrying.")
return 2
result = re.search("hwci: mesa: (\S*)", line)
if result:
if result.group(1) == "pass":


@@ -1,2 +0,0 @@
schema.graphql
gitlab_gql.py.cache.db


@@ -4,6 +4,7 @@
# Tomeu Vizoso <tomeu.vizoso@collabora.com>
# David Heidelberg <david.heidelberg@collabora.com>
#
# TODO GraphQL for dependencies
# SPDX-License-Identifier: MIT
"""
@@ -11,19 +12,18 @@ Helper script to restrict running only required CI jobs
and show the job(s) logs.
"""
import argparse
import re
import sys
import time
from concurrent.futures import ThreadPoolExecutor
from functools import partial
from itertools import chain
from typing import Optional
from functools import partial
from concurrent.futures import ThreadPoolExecutor
import os
import re
import time
import argparse
import sys
import gitlab
from colorama import Fore, Style
from gitlab_common import get_gitlab_project, read_token, wait_for_pipeline
from gitlab_gql import GitlabGQL, create_job_needs_dag, filter_dag, print_dag
REFRESH_WAIT_LOG = 10
REFRESH_WAIT_JOBS = 6
@@ -42,9 +42,44 @@ STATUS_COLORS = {
"skipped": "",
}
# TODO: This hardcoded list should be replaced by querying the pipeline's
# dependency graph to see which jobs the target jobs need
DEPENDENCIES = [
"debian/x86_build-base",
"debian/x86_build",
"debian/x86_test-base",
"debian/x86_test-gl",
"debian/arm_build",
"debian/arm_test",
"kernel+rootfs_amd64",
"kernel+rootfs_arm64",
"kernel+rootfs_armhf",
"debian-testing",
"debian-arm64",
]
COMPLETED_STATUSES = ["success", "failed"]
def get_gitlab_project(glab, name: str):
"""Finds a specified gitlab project for given user"""
glab.auth()
username = glab.user.username
return glab.projects.get(f"{username}/mesa")
def wait_for_pipeline(project, sha: str):
"""await until pipeline appears in Gitlab"""
print("⏲ for the pipeline to appear..", end="")
while True:
pipelines = project.pipelines.list(sha=sha)
if pipelines:
print("", flush=True)
return pipelines[0]
print("", end=".", flush=True)
time.sleep(1)
def print_job_status(job) -> None:
"""It prints a nice, colored job status with a link to the job."""
if job.status == "canceled":
@@ -85,18 +120,15 @@ def pretty_wait(sec: int) -> None:
def monitor_pipeline(
project,
pipeline,
target_job: Optional[str],
dependencies,
force_manual: bool,
stress: bool,
project, pipeline, target_job: Optional[str], dependencies, force_manual: bool
) -> tuple[Optional[int], Optional[int]]:
"""Monitors pipeline and delegate canceling jobs"""
statuses = {}
target_statuses = {}
stress_succ = 0
stress_fail = 0
if not dependencies:
dependencies = []
dependencies.extend(DEPENDENCIES)
if target_job:
target_jobs_regex = re.compile(target_job.strip())
@@ -109,13 +141,6 @@ def monitor_pipeline(
if force_manual and job.status == "manual":
enable_job(project, job, True)
if stress and job.status in ["success", "failed"]:
if job.status == "success":
stress_succ += 1
if job.status == "failed":
stress_fail += 1
retry_job(project, job)
if (job.id not in target_statuses) or (
job.status not in target_statuses[job.id]
):
@@ -147,14 +172,6 @@ def monitor_pipeline(
if target_job:
cancel_jobs(project, to_cancel)
if stress:
print(
"∑ succ: " + str(stress_succ) + "; fail: " + str(stress_fail),
flush=False,
)
pretty_wait(REFRESH_WAIT_JOBS)
continue
print("---------------------------------", flush=False)
if len(target_statuses) == 1 and {"running"}.intersection(
@@ -182,14 +199,6 @@ def enable_job(project, job, target: bool) -> None:
print(Fore.MAGENTA + f"{jtype} job {job.name} manually enabled" + Style.RESET_ALL)
def retry_job(project, job) -> None:
"""retry job"""
pjob = project.jobs.get(job.id, lazy=True)
pjob.retry()
jtype = ""
print(Fore.MAGENTA + f"{jtype} job {job.name} manually enabled" + Style.RESET_ALL)
def cancel_job(project, job) -> None:
"""Cancel GitLab job"""
pjob = project.jobs.get(job.id, lazy=True)
@@ -234,6 +243,7 @@ def parse_args() -> None:
+ '--target ".*traces" ',
)
parser.add_argument("--target", metavar="target-job", help="Target job")
parser.add_argument("--deps", nargs="+", help="Job dependencies")
parser.add_argument(
"--rev", metavar="revision", help="repository git revision", required=True
)
@@ -245,24 +255,19 @@ def parse_args() -> None:
parser.add_argument(
"--force-manual", action="store_true", help="Force jobs marked as manual"
)
parser.add_argument("--stress", action="store_true", help="Stresstest job(s)")
return parser.parse_args()
def find_dependencies(target_job: str, project_path: str, sha: str) -> set[str]:
gql_instance = GitlabGQL()
dag, _ = create_job_needs_dag(
gql_instance, {"projectPath": project_path.path_with_namespace, "sha": sha}
def read_token(token_arg: Optional[str]) -> str:
"""pick token from args or file"""
if token_arg:
return token_arg
return (
open(os.path.expanduser("~/.config/gitlab-token"), encoding="utf-8")
.readline()
.rstrip()
)
target_dep_dag = filter_dag(dag, target_job)
print(Fore.YELLOW)
print("Detected job dependencies:")
print()
print_dag(target_dep_dag)
print(Fore.RESET)
return set(chain.from_iterable(target_dep_dag.values()))
if __name__ == "__main__":
try:
@@ -279,14 +284,11 @@ if __name__ == "__main__":
print(f"Revision: {args.rev}")
pipe = wait_for_pipeline(cur_project, args.rev)
print(f"Pipeline: {pipe.web_url}")
deps = set()
if args.target:
print("🞋 job: " + Fore.BLUE + args.target + Style.RESET_ALL)
deps = find_dependencies(
target_job=args.target, sha=args.rev, project_path=cur_project
)
print(f"Extra dependencies: {args.deps}")
target_job_id, ret = monitor_pipeline(
cur_project, pipe, args.target, deps, args.force_manual, args.stress
cur_project, pipe, args.target, args.deps, args.force_manual
)
if target_job_id:


@@ -1,11 +0,0 @@
#!/bin/sh
# Helper script to download the GraphQL schema from GitLab so that IDEs can
# assist the developer when editing gql files
SOURCE_DIR=$(dirname "$(realpath "$0")")
(
cd $SOURCE_DIR || exit 1
gql-cli https://gitlab.freedesktop.org/api/graphql --print-schema > schema.graphql
)


@@ -1,42 +0,0 @@
#!/usr/bin/env python3
# Copyright © 2020 - 2022 Collabora Ltd.
# Authors:
# Tomeu Vizoso <tomeu.vizoso@collabora.com>
# David Heidelberg <david.heidelberg@collabora.com>
#
# SPDX-License-Identifier: MIT
'''Shared functions between the scripts.'''
import os
import time
from typing import Optional
def get_gitlab_project(glab, name: str):
"""Finds a specified gitlab project for given user"""
glab.auth()
username = glab.user.username
return glab.projects.get(f"{username}/mesa")
def read_token(token_arg: Optional[str]) -> str:
"""pick token from args or file"""
if token_arg:
return token_arg
return (
open(os.path.expanduser("~/.config/gitlab-token"), encoding="utf-8")
.readline()
.rstrip()
)
def wait_for_pipeline(project, sha: str):
"""await until pipeline appears in Gitlab"""
print("⏲ for the pipeline to appear..", end="")
while True:
pipelines = project.pipelines.list(sha=sha)
if pipelines:
print("", flush=True)
return pipelines[0]
print("", end=".", flush=True)
time.sleep(1)


@@ -1,303 +0,0 @@
#!/usr/bin/env python3
import re
from argparse import ArgumentDefaultsHelpFormatter, ArgumentParser, Namespace
from dataclasses import dataclass, field
from os import getenv
from pathlib import Path
from typing import Any, Iterable, Optional, Pattern, Union
import yaml
from filecache import DAY, filecache
from gql import Client, gql
from gql.transport.aiohttp import AIOHTTPTransport
from graphql import DocumentNode
Dag = dict[str, list[str]]
TOKEN_DIR = Path(getenv("XDG_CONFIG_HOME") or Path.home() / ".config")
def get_token_from_default_dir() -> str:
try:
token_file = TOKEN_DIR / "gitlab-token"
return token_file.resolve()
except FileNotFoundError as ex:
print(
f"Could not find {token_file}, please provide a token file as an argument"
)
raise ex
def get_project_root_dir():
root_path = Path(__file__).parent.parent.parent.resolve()
gitlab_file = root_path / ".gitlab-ci.yml"
assert gitlab_file.exists()
return root_path
@dataclass
class GitlabGQL:
_transport: Any = field(init=False)
client: Client = field(init=False)
url: str = "https://gitlab.freedesktop.org/api/graphql"
token: Optional[str] = None
def __post_init__(self):
self._setup_gitlab_gql_client()
def _setup_gitlab_gql_client(self) -> Client:
# Select your transport with a defined url endpoint
headers = {}
if self.token:
headers["Authorization"] = f"Bearer {self.token}"
self._transport = AIOHTTPTransport(url=self.url, headers=headers)
# Create a GraphQL client using the defined transport
self.client = Client(
transport=self._transport, fetch_schema_from_transport=True
)
@filecache(DAY)
def query(
self, gql_file: Union[Path, str], params: dict[str, Any]
) -> dict[str, Any]:
# Provide a GraphQL query
source_path = Path(__file__).parent
pipeline_query_file = source_path / gql_file
query: DocumentNode
with open(pipeline_query_file, "r") as f:
pipeline_query = f.read()
query = gql(pipeline_query)
# Execute the query on the transport
return self.client.execute(query, variable_values=params)
def invalidate_query_cache(self):
self.query._db.clear()
def create_job_needs_dag(
gl_gql: GitlabGQL, params
) -> tuple[Dag, dict[str, dict[str, Any]]]:
result = gl_gql.query("pipeline_details.gql", params)
dag = {}
jobs = {}
pipeline = result["project"]["pipeline"]
if not pipeline:
raise RuntimeError(f"Could not find any pipelines for {params}")
for stage in pipeline["stages"]["nodes"]:
for stage_job in stage["groups"]["nodes"]:
for job in stage_job["jobs"]["nodes"]:
needs = job.pop("needs")["nodes"]
jobs[job["name"]] = job
dag[job["name"]] = {node["name"] for node in needs}
for job, needs in dag.items():
needs: set
partial = True
while partial:
next_depth = {n for dn in needs for n in dag[dn]}
partial = not needs.issuperset(next_depth)
needs = needs.union(next_depth)
dag[job] = needs
return dag, jobs
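# Illustration (hypothetical job names): given the direct needs
#   {"test-job": {"build-job"}, "build-job": {"container"}, "container": set()}
# the loop above repeatedly unions each job's needs with the needs of those
# needs, so "test-job" ends up mapped to {"build-job", "container"} -- the
# full transitive dependency set used when filtering the DAG below.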
def filter_dag(dag: Dag, regex: Pattern) -> Dag:
return {job: needs for job, needs in dag.items() if re.match(regex, job)}
def print_dag(dag: Dag) -> None:
for job, needs in dag.items():
print(f"{job}:")
print(f"\t{' '.join(needs)}")
print()
def fetch_merged_yaml(gl_gql: GitlabGQL, params) -> dict[Any]:
gitlab_yml_file = get_project_root_dir() / ".gitlab-ci.yml"
content = Path(gitlab_yml_file).read_text().strip()
params["content"] = content
raw_response = gl_gql.query("job_details.gql", params)
if merged_yaml := raw_response["ciConfig"]["mergedYaml"]:
return yaml.safe_load(merged_yaml)
gl_gql.invalidate_query_cache()
raise ValueError(
"""
Could not fetch any content for merged YAML,
please verify if the git SHA exists in remote.
Maybe you forgot to `git push`? """
)
def recursive_fill(job, relationship_field, target_data, acc_data: dict, merged_yaml):
if relatives := job.get(relationship_field):
if isinstance(relatives, str):
relatives = [relatives]
for relative in relatives:
parent_job = merged_yaml[relative]
acc_data = recursive_fill(parent_job, acc_data, merged_yaml)
acc_data |= job.get(target_data, {})
return acc_data
def get_variables(job, merged_yaml, project_path, sha) -> dict[str, str]:
p = get_project_root_dir() / ".gitlab-ci" / "image-tags.yml"
image_tags = yaml.safe_load(p.read_text())
variables = image_tags["variables"]
variables |= merged_yaml["variables"]
variables |= job["variables"]
variables["CI_PROJECT_PATH"] = project_path
variables["CI_PROJECT_NAME"] = project_path.split("/")[1]
variables["CI_REGISTRY_IMAGE"] = "registry.freedesktop.org/${CI_PROJECT_PATH}"
variables["CI_COMMIT_SHA"] = sha
while recurse_among_variables_space(variables):
pass
return variables
# Based on: https://stackoverflow.com/a/2158532/1079223
def flatten(xs):
for x in xs:
if isinstance(x, Iterable) and not isinstance(x, (str, bytes)):
yield from flatten(x)
else:
yield x
def get_full_script(job) -> list[str]:
script = []
for script_part in ("before_script", "script", "after_script"):
script.append(f"# {script_part}")
lines = flatten(job.get(script_part, []))
script.extend(lines)
script.append("")
return script
def recurse_among_variables_space(var_graph) -> bool:
updated = False
for var, value in var_graph.items():
value = str(value)
dep_vars = []
if match := re.findall(r"(\$[{]?[\w\d_]*[}]?)", value):
all_dep_vars = [v.lstrip("${").rstrip("}") for v in match]
# print(value, match, all_dep_vars)
dep_vars = [v for v in all_dep_vars if v in var_graph]
for dep_var in dep_vars:
dep_value = str(var_graph[dep_var])
new_value = var_graph[var]
new_value = new_value.replace(f"${{{dep_var}}}", dep_value)
new_value = new_value.replace(f"${dep_var}", dep_value)
var_graph[var] = new_value
updated |= dep_value != new_value
return updated
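# Example (hypothetical variables): {"IMAGE": "${REGISTRY}/mesa",
# "REGISTRY": "registry.freedesktop.org"} is rewritten in a single pass to
# {"IMAGE": "registry.freedesktop.org/mesa", ...}; get_variables() keeps
# calling this until no substitution changes anything and it returns False.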
def get_job_final_definiton(job_name, merged_yaml, project_path, sha):
job = merged_yaml[job_name]
variables = get_variables(job, merged_yaml, project_path, sha)
print("# --------- variables ---------------")
for var, value in sorted(variables.items()):
print(f"export {var}={value!r}")
# TODO: Recurse into needs to get full script
# TODO: maybe create an extra yaml file to avoid too much rework
script = get_full_script(job)
print()
print()
print("# --------- full script ---------------")
print("\n".join(script))
if image := variables.get("MESA_IMAGE"):
print()
print()
print("# --------- container image ---------------")
print(image)
def parse_args() -> Namespace:
parser = ArgumentParser(
formatter_class=ArgumentDefaultsHelpFormatter,
description="CLI and library with utility functions to debug jobs via Gitlab GraphQL",
epilog=f"""Example:
{Path(__file__).name} --rev $(git rev-parse HEAD) --print-job-dag""",
)
parser.add_argument("-pp", "--project-path", type=str, default="mesa/mesa")
parser.add_argument("--sha", "--rev", type=str, required=True)
parser.add_argument(
"--regex",
type=str,
required=False,
help="Regex pattern for the job name to be considered",
)
parser.add_argument("--print-dag", action="store_true", help="Print job needs DAG")
parser.add_argument(
"--print-merged-yaml",
action="store_true",
help="Print the resulting YAML for the specific SHA",
)
parser.add_argument(
"--print-job-manifest", type=str, help="Print the resulting job data"
)
parser.add_argument(
"--gitlab-token-file",
type=str,
default=get_token_from_default_dir(),
help="force GitLab token, otherwise it's read from $XDG_CONFIG_HOME/gitlab-token",
)
args = parser.parse_args()
args.gitlab_token = Path(args.gitlab_token_file).read_text()
return args
def main():
args = parse_args()
gl_gql = GitlabGQL(token=args.gitlab_token)
if args.print_dag:
dag, jobs = create_job_needs_dag(
gl_gql, {"projectPath": args.project_path, "sha": args.sha}
)
if args.regex:
dag = filter_dag(dag, re.compile(args.regex))
print_dag(dag)
if args.print_merged_yaml:
print(
fetch_merged_yaml(
gl_gql, {"projectPath": args.project_path, "sha": args.sha}
)
)
if args.print_job_manifest:
merged_yaml = fetch_merged_yaml(
gl_gql, {"projectPath": args.project_path, "sha": args.sha}
)
get_job_final_definiton(
args.print_job_manifest, merged_yaml, args.project_path, args.sha
)
if __name__ == "__main__":
main()


@@ -1,7 +0,0 @@
query getCiConfigData($projectPath: ID!, $sha: String, $content: String!) {
ciConfig(projectPath: $projectPath, sha: $sha, content: $content) {
errors
mergedYaml
__typename
}
}


@@ -1,86 +0,0 @@
fragment LinkedPipelineData on Pipeline {
id
iid
path
cancelable
retryable
userPermissions {
updatePipeline
}
status: detailedStatus {
id
group
label
icon
}
sourceJob {
id
name
}
project {
id
name
fullPath
}
}
query getPipelineDetails($projectPath: ID!, $sha: String!) {
project(fullPath: $projectPath) {
id
pipeline(sha: $sha) {
id
iid
complete
downstream {
nodes {
...LinkedPipelineData
}
}
upstream {
...LinkedPipelineData
}
stages {
nodes {
id
name
status: detailedStatus {
id
action {
id
icon
path
title
}
}
groups {
nodes {
id
status: detailedStatus {
id
label
group
icon
}
name
size
jobs {
nodes {
id
name
kind
scheduledAt
needs {
nodes {
id
name
}
}
}
}
}
}
}
}
}
}
}


@@ -1,8 +1,2 @@
aiohttp==3.8.1
colorama==0.4.5
filecache==0.81
gql==3.4.0
python-gitlab==3.5.0
PyYAML==6.0
ruamel.yaml.clib==0.2.6
ruamel.yaml==0.17.21


@@ -1,143 +0,0 @@
#!/usr/bin/env python3
# Copyright © 2022 Collabora Ltd.
# Authors:
# David Heidelberg <david.heidelberg@collabora.com>
#
# SPDX-License-Identifier: MIT
"""
Helper script to update traces checksums
"""
import argparse
import bz2
import glob
import re
import json
import sys
from ruamel.yaml import YAML
import gitlab
from gitlab_common import get_gitlab_project, read_token, wait_for_pipeline
DESCRIPTION_FILE = "export PIGLIT_REPLAY_DESCRIPTION_FILE='.*/install/(.*)'$"
DEVICE_NAME = "export PIGLIT_REPLAY_DEVICE_NAME='(.*)'$"
def gather_results(
project,
pipeline,
) -> None:
"""Gather results"""
target_jobs_regex = re.compile(".*-traces([:].*)?$")
for job in pipeline.jobs.list(all=True, sort="desc"):
if target_jobs_regex.match(job.name) and job.status == "failed":
cur_job = project.jobs.get(job.id)
# get variables
print(f"👁 Looking through logs for the device variable and traces.yml file in {job.name}...")
log = cur_job.trace().decode("unicode_escape").splitlines()
filename: str = ''
dev_name: str = ''
for logline in log:
desc_file = re.search(DESCRIPTION_FILE, logline)
device_name = re.search(DEVICE_NAME, logline)
if desc_file:
filename = desc_file.group(1)
if device_name:
dev_name = device_name.group(1)
if not filename or not dev_name:
print("! Couldn't find device name or YML file in the logs!")
return
print(f"👁 Found {dev_name} and file {filename}")
# find filename in Mesa source
traces_file = glob.glob('./**/' + filename, recursive=True)
# write into it
with open(traces_file[0], 'r', encoding='utf-8') as target_file:
yaml = YAML()
yaml.compact(seq_seq=False, seq_map=False)
yaml.version = 1,2
yaml.width = 2048 # do not break the text fields
yaml.default_flow_style = None
target = yaml.load(target_file)
# parse artifact
results_json_bz2 = cur_job.artifact(path="results/results.json.bz2", streamed=False)
results_json = bz2.decompress(results_json_bz2).decode("utf-8")
results = json.loads(results_json)
for _, value in results["tests"].items():
if (
not value['images'] or
not value['images'][0] or
"image_desc" not in value['images'][0]
):
continue
trace: str = value['images'][0]['image_desc']
checksum: str = value['images'][0]['checksum_render']
if not checksum:
print(f"Trace {trace} checksum is missing! Abort.")
continue
if checksum == "error":
print(f"Trace {trace} crashed")
continue
if (
checksum in target['traces'][trace][dev_name] and
target['traces'][trace][dev_name]['checksum'] == checksum
):
continue
if "label" in target['traces'][trace][dev_name]:
print(f'{trace}: {dev_name}: has label: {target["traces"][trace][dev_name]["label"]}, is it still right?')
target['traces'][trace][dev_name]['checksum'] = checksum
with open(traces_file[0], 'w', encoding='utf-8') as target_file:
yaml.dump(target, target_file)
def parse_args() -> None:
"""Parse args"""
parser = argparse.ArgumentParser(
description="Tool to generate patch from checksums ",
epilog="Example: update_traces_checksum.py --rev $(git rev-parse HEAD) "
)
parser.add_argument(
"--rev", metavar="revision", help="repository git revision", required=True
)
parser.add_argument(
"--token",
metavar="token",
help="force GitLab token, otherwise it's read from ~/.config/gitlab-token",
)
return parser.parse_args()
if __name__ == "__main__":
try:
args = parse_args()
token = read_token(args.token)
gl = gitlab.Gitlab(url="https://gitlab.freedesktop.org", private_token=token)
cur_project = get_gitlab_project(gl, "mesa")
print(f"Revision: {args.rev}")
pipe = wait_for_pipeline(cur_project, args.rev)
print(f"Pipeline: {pipe.web_url}")
gather_results(cur_project, pipe)
sys.exit()
except KeyboardInterrupt:
sys.exit(1)


@@ -78,7 +78,7 @@ debian-testing:
-D dri3=enabled
-D gallium-va=enabled
GALLIUM_DRIVERS: "swrast,virgl,radeonsi,zink,crocus,iris,i915"
VULKAN_DRIVERS: "swrast,amd,intel,virtio-experimental"
VULKAN_DRIVERS: "swrast,amd,intel"
BUILDTYPE: "debugoptimized"
EXTRA_OPTION: >
-D spirv-to-dxil=true
@@ -86,6 +86,7 @@ debian-testing:
MINIO_ARTIFACT_NAME: mesa-amd64
LLVM_VERSION: "13"
script:
- .gitlab-ci/lava/lava-pytest.sh
- .gitlab-ci/meson/build.sh
- .gitlab-ci/prepare-artifacts.sh
artifacts:
@@ -123,17 +124,19 @@ debian-testing-msan:
GALLIUM_DRIVERS: "iris,nouveau,kmsro,r300,r600,swrast,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,radeonsi,tegra,d3d12,crocus"
VULKAN_DRIVERS: intel,amd,broadcom,virtio-experimental
.debian-cl-testing:
debian-clover-testing:
extends:
- .meson-build
- .ci-deqp-artifacts
variables:
LLVM_VERSION: "13"
UNWIND: "enabled"
DRI_LOADERS: >
-D glx=disabled
-D egl=disabled
-D gbm=disabled
GALLIUM_ST: >
-D gallium-opencl=icd
-D opencl-spirv=true
GALLIUM_DRIVERS: "swrast"
BUILDTYPE: "debugoptimized"
EXTRA_OPTION: >
@@ -142,23 +145,7 @@ debian-testing-msan:
- .gitlab-ci/meson/build.sh
- .gitlab-ci/prepare-artifacts.sh
debian-clover-testing:
extends:
- .debian-cl-testing
variables:
GALLIUM_ST: >
-D gallium-opencl=icd
-D opencl-spirv=true
debian-rusticl-testing:
extends:
- .debian-cl-testing
variables:
GALLIUM_ST: >
-D gallium-rusticl=true
-D opencl-spirv=true
debian-build-testing:
debian-gallium:
extends: .meson-build
variables:
UNWIND: "enabled"
@@ -171,22 +158,19 @@ debian-build-testing:
-D dri3=enabled
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-xvmc=enabled
-D gallium-omx=bellagio
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=true
-D gallium-opencl=disabled
-D gallium-rusticl=false
GALLIUM_DRIVERS: "iris,nouveau,kmsro,r300,r600,freedreno,swrast,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,d3d12,asahi,crocus"
VULKAN_DRIVERS: swrast
EXTRA_OPTION: >
-D spirv-to-dxil=true
-D osmesa=true
-D tools=drm-shim,etnaviv,freedreno,glsl,intel,intel-ui,nir,nouveau,lima,panfrost,asahi
-D tools=drm-shim,etnaviv,freedreno,glsl,intel,intel-ui,nir,nouveau,xvmc,lima,panfrost,asahi
script:
- .gitlab-ci/lava/lava-pytest.sh
- .gitlab-ci/run-shellcheck.sh
- .gitlab-ci/run-yamllint.sh
- .gitlab-ci/meson/build.sh
- .gitlab-ci/run-shader-db.sh
@@ -194,7 +178,6 @@ debian-build-testing:
debian-release:
extends: .meson-build
variables:
LLVM_VERSION: "13"
UNWIND: "enabled"
DRI_LOADERS: >
-D glx=dri
@@ -205,12 +188,12 @@ debian-release:
-D dri3=enabled
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-xvmc=disabled
-D gallium-omx=disabled
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=false
-D gallium-opencl=disabled
-D gallium-rusticl=false
-D llvm=enabled
GALLIUM_DRIVERS: "i915,iris,nouveau,kmsro,freedreno,r300,svga,swrast,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,d3d12,crocus"
VULKAN_DRIVERS: "amd,imagination-experimental,microsoft-experimental"
@@ -242,30 +225,29 @@ fedora-release:
-D egl=enabled
-D glvnd=true
-D platforms=x11,wayland
# intel-clc disabled, we need llvm-spirv-translator 13.0+, Fedora 34 only packages 12.0.
EXTRA_OPTION: >
-D osmesa=true
-D selinux=true
-D tools=drm-shim,etnaviv,freedreno,glsl,intel,nir,nouveau,lima,panfrost,imagination
-D vulkan-layers=device-select,overlay
-D intel-clc=disabled
-D intel-clc=enabled
-D imagination-srv=true
GALLIUM_DRIVERS: "crocus,etnaviv,freedreno,iris,kmsro,lima,nouveau,panfrost,r300,r600,radeonsi,svga,swrast,tegra,v3d,vc4,virgl,zink"
GALLIUM_ST: >
-D dri3=enabled
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-xvmc=disabled
-D gallium-omx=disabled
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=false
-D gallium-opencl=icd
-D gallium-rusticl=false
-D gles1=disabled
-D gles2=enabled
-D llvm=enabled
-D microsoft-clc=disabled
-D shared-llvm=enabled
-D vulkan-device-select-layer=true
LLVM_VERSION: ""
UNWIND: "disabled"
VULKAN_DRIVERS: "amd,broadcom,freedreno,intel,imagination-experimental"
@@ -301,12 +283,12 @@ debian-android:
GALLIUM_ST: >
-D dri3=disabled
-D gallium-vdpau=disabled
-D gallium-xvmc=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
-D gallium-opencl=disabled
-D gallium-rusticl=false
LLVM_VERSION: ""
PKG_CONFIG_LIBDIR: "/disable/non/android/system/pc/files"
script:
@@ -333,6 +315,7 @@ debian-android:
GALLIUM_ST: >
-D dri3=enabled
-D gallium-vdpau=disabled
-D gallium-xvmc=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
@@ -384,6 +367,8 @@ debian-arm64-asan:
extends:
- debian-arm64
variables:
C_ARGS: >
-Wno-error=stringop-truncation
EXTRA_OPTION: >
-D llvm=disabled
-D b_sanitize=address
@@ -407,65 +392,33 @@ debian-arm64-build-test:
debian-clang:
extends: .meson-build
variables:
LLVM_VERSION: "13"
UNWIND: "enabled"
GALLIUM_DUMP_CPU: "true"
C_ARGS: >
-Wno-error=constant-conversion
-Wno-error=enum-conversion
-Wno-error=implicit-const-int-float-conversion
-Wno-error=initializer-overrides
-Wno-error=sometimes-uninitialized
-Wno-error=unused-function
CPP_ARGS: >
-Wno-error=c99-designator
-Wno-error=deprecated-declarations
-Wno-error=implicit-const-int-float-conversion
-Wno-error=missing-braces
-Wno-error=overloaded-virtual
-Wno-error=tautological-constant-out-of-range-compare
-Wno-error=unused-const-variable
-Wno-error=unused-private-field
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D glvnd=true
-D platforms=x11,wayland
GALLIUM_ST: >
-D dri3=enabled
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-omx=bellagio
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=true
-D gallium-opencl=icd
-D gles1=enabled
-D gles2=enabled
-D llvm=enabled
-D microsoft-clc=enabled
-D shared-llvm=enabled
-D opencl-spirv=true
GALLIUM_DRIVERS: "iris,nouveau,kmsro,r300,r600,freedreno,swrast,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,radeonsi,tegra,d3d12,crocus,i915,asahi"
VULKAN_DRIVERS: intel,amd,freedreno,broadcom,virtio-experimental,swrast,panfrost,imagination-experimental,microsoft-experimental
EXTRA_OPTION:
EXTRA_OPTIONS:
-D spirv-to-dxil=true
-D osmesa=true
-D imagination-srv=true
-D tools=drm-shim,etnaviv,freedreno,glsl,intel,intel-ui,nir,nouveau,lima,panfrost,asahi,imagination
-D vulkan-layers=device-select,overlay
-D build-aco-tests=true
-D intel-clc=enabled
-D imagination-srv=true
CC: clang
CXX: clang++
debian-clang-release:
extends: debian-clang
variables:
BUILDTYPE: "release"
DRI_LOADERS: >
-D glx=xlib
-D platforms=x11,wayland
windows-vs2019:
extends:
- .build-windows
@@ -479,50 +432,33 @@ windows-vs2019:
- _build/meson-logs/*.txt
- _install/
.debian-cl:
debian-clover:
extends: .meson-build
variables:
LLVM_VERSION: "13"
UNWIND: "enabled"
DRI_LOADERS: >
-D glx=disabled
-D egl=disabled
-D gbm=disabled
EXTRA_OPTION: >
-D valgrind=false
debian-clover:
extends: .debian-cl
variables:
GALLIUM_DRIVERS: "r600,radeonsi,swrast"
GALLIUM_DRIVERS: "r600,radeonsi"
GALLIUM_ST: >
-D dri3=disabled
-D gallium-vdpau=disabled
-D gallium-xvmc=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
-D gallium-opencl=icd
-D gallium-rusticl=false
debian-rusticl:
extends: .debian-cl
variables:
GALLIUM_DRIVERS: "iris,swrast"
GALLIUM_ST: >
-D dri3=disabled
-D gallium-vdpau=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
-D gallium-opencl=disabled
-D gallium-rusticl=true
EXTRA_OPTION: >
-D valgrind=false
script:
- LLVM_VERSION=9 GALLIUM_DRIVERS=r600,swrast .gitlab-ci/meson/build.sh
- .gitlab-ci/meson/build.sh
debian-vulkan:
extends: .meson-build
variables:
LLVM_VERSION: "13"
UNWIND: "disabled"
DRI_LOADERS: >
-D glx=disabled
@@ -533,12 +469,12 @@ debian-vulkan:
GALLIUM_ST: >
-D dri3=enabled
-D gallium-vdpau=disabled
-D gallium-xvmc=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
-D gallium-opencl=disabled
-D gallium-rusticl=false
-D b_sanitize=undefined
-D c_args=-fno-sanitize-recover=all
-D cpp_args=-fno-sanitize-recover=all
@@ -547,7 +483,7 @@ debian-vulkan:
EXTRA_OPTION: >
-D vulkan-layers=device-select,overlay
-D build-aco-tests=true
-D intel-clc=disabled
-D intel-clc=enabled
-D imagination-srv=true
debian-i386:
@@ -558,7 +494,6 @@ debian-i386:
CROSS: i386
VULKAN_DRIVERS: intel,amd,swrast,virtio-experimental
GALLIUM_DRIVERS: "iris,nouveau,r300,r600,radeonsi,swrast,virgl,zink,crocus"
LLVM_VERSION: 13
EXTRA_OPTION: >
-D vulkan-layers=device-select,overlay
@@ -572,7 +507,8 @@ debian-s390x:
variables:
CROSS: s390x
GALLIUM_DRIVERS: "swrast,zink"
LLVM_VERSION: 13
# The lp_test_blend test times out with LLVM 11
LLVM_VERSION: 9
VULKAN_DRIVERS: "swrast"
debian-ppc64el:
@@ -609,15 +545,11 @@ debian-mingw32-x86_64:
VULKAN_DRIVERS: "swrast,amd,microsoft-experimental"
GALLIUM_ST: >
-D gallium-opencl=icd
-D gallium-rusticl=false
-D opencl-spirv=true
-D microsoft-clc=enabled
-D static-libclc=all
-D llvm=enabled
-D gallium-va=true
-D video-codecs=h264dec,h264enc,h265dec,h265enc,vc1dec
EXTRA_OPTION: >
-D min-windows-version=7
-D spirv-to-dxil=true
-D gles1=enabled
-D gles2=enabled


@@ -111,7 +111,6 @@ for var in \
SKQP_BACKENDS \
TU_DEBUG \
VIRGL_HOST_API \
WAFFLE_PLATFORM \
VK_CPU \
VK_DRIVER \
VK_ICD_FILENAMES \


@@ -149,9 +149,9 @@ cleanup
# upload artifacts
if [ -n "$MINIO_RESULTS_UPLOAD" ]; then
tar --zstd -cf results.tar.zst results/;
tar -czf results.tar.gz results/;
ci-fairy minio login --token-file "${CI_JOB_JWT_FILE}";
ci-fairy minio cp results.tar.zst minio://"$MINIO_RESULTS_UPLOAD"/results.tar.zst;
ci-fairy minio cp results.tar.gz minio://"$MINIO_RESULTS_UPLOAD"/results.tar.gz;
fi
# We still need to echo the hwci: mesa message, as some scripts rely on it, such


@@ -55,9 +55,3 @@ CONFIG_USB_NET_DRIVERS=y
CONFIG_USB_RTL8152=y
CONFIG_USB_NET_AX8817X=y
CONFIG_USB_NET_SMSC95XX=y
# TK1
CONFIG_ARM_TEGRA_DEVFREQ=y
# 32-bit build failure
CONFIG_DRM_MSM=n


@@ -16,7 +16,6 @@ CONFIG_DRM_LIMA=y
CONFIG_DRM_PANEL_SIMPLE=y
CONFIG_DRM_PANEL_EDP=y
CONFIG_DRM_MSM=y
CONFIG_DRM_ETNAVIV=y
CONFIG_DRM_I2C_ADV7511=y
CONFIG_PWM_CROS_EC=y
CONFIG_BACKLIGHT_PWM=y


@@ -6,34 +6,32 @@ set -o xtrace
# Fetch the arm-built rootfs image and unpack it in our x86 container (saves
# network transfer, disk usage, and runtime on test jobs)
# shellcheck disable=SC2154 # arch is assigned in previous scripts
if wget -q --method=HEAD "${ARTIFACTS_PREFIX}/${FDO_UPSTREAM_REPO}/${ARTIFACTS_SUFFIX}/${arch}/done"; then
ARTIFACTS_URL="${ARTIFACTS_PREFIX}/${FDO_UPSTREAM_REPO}/${ARTIFACTS_SUFFIX}/${arch}"
else
ARTIFACTS_URL="${ARTIFACTS_PREFIX}/${CI_PROJECT_PATH}/${ARTIFACTS_SUFFIX}/${arch}"
fi
wget "${ARTIFACTS_URL}"/lava-rootfs.tar.zst -O rootfs.tar.zst
mkdir -p /rootfs-"$arch"
tar -C /rootfs-"$arch" '--exclude=./dev/*' --zstd -xf rootfs.tar.zst
rm rootfs.tar.zst
wget ${ARTIFACTS_URL}/lava-rootfs.tgz -O rootfs.tgz
mkdir -p /rootfs-$arch
tar -C /rootfs-$arch '--exclude=./dev/*' -zxf rootfs.tgz
rm rootfs.tgz
if [[ $arch == "arm64" ]]; then
mkdir -p /baremetal-files
pushd /baremetal-files
wget "${ARTIFACTS_URL}"/Image
wget "${ARTIFACTS_URL}"/Image.gz
wget "${ARTIFACTS_URL}"/cheza-kernel
wget ${ARTIFACTS_URL}/Image
wget ${ARTIFACTS_URL}/Image.gz
wget ${ARTIFACTS_URL}/cheza-kernel
DEVICE_TREES=""
DEVICE_TREES="$DEVICE_TREES apq8016-sbc.dtb"
DEVICE_TREES="$DEVICE_TREES apq8096-db820c.dtb"
DEVICE_TREES="$DEVICE_TREES tegra210-p3450-0000.dtb"
DEVICE_TREES="$DEVICE_TREES imx8mq-nitrogen.dtb"
for DTB in $DEVICE_TREES; do
wget "${ARTIFACTS_URL}/$DTB"
wget ${ARTIFACTS_URL}/$DTB
done
popd
@@ -41,14 +39,12 @@ elif [[ $arch == "armhf" ]]; then
mkdir -p /baremetal-files
pushd /baremetal-files
wget "${ARTIFACTS_URL}"/zImage
wget ${ARTIFACTS_URL}/zImage
DEVICE_TREES=""
DEVICE_TREES="$DEVICE_TREES imx6q-cubox-i.dtb"
DEVICE_TREES="$DEVICE_TREES tegra124-jetson-tk1.dtb"
DEVICE_TREES="imx6q-cubox-i.dtb"
for DTB in $DEVICE_TREES; do
wget "${ARTIFACTS_URL}/$DTB"
wget ${ARTIFACTS_URL}/$DTB
done
popd


@@ -1,5 +1,4 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -ex


@@ -1,23 +1,24 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -ex
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
SCRIPT_DIR="$(pwd)"
CROSVM_VERSION=acd262cb42111c53b580a67355e795775545cced
git clone --single-branch -b main --no-checkout https://chromium.googlesource.com/crosvm/crosvm /platform/crosvm
CROSVM_VERSION=c7cd0e0114c8363b884ba56d8e12adee718dcc93
git clone --single-branch -b main --no-checkout https://chromium.googlesource.com/chromiumos/platform/crosvm /platform/crosvm
pushd /platform/crosvm
git checkout "$CROSVM_VERSION"
git submodule update --init
# Apply all crosvm patches for Mesa CI
cat "$SCRIPT_DIR"/.gitlab-ci/container/build-crosvm_*.patch |
patch -p1
VIRGLRENDERER_VERSION=3c5a9bbb7464e0e91e446991055300f4f989f6a9
VIRGLRENDERER_VERSION=dd301caf7e05ec9c09634fb7872067542aad89b7
rm -rf third_party/virglrenderer
git clone --single-branch -b master --no-checkout https://gitlab.freedesktop.org/virgl/virglrenderer.git third_party/virglrenderer
pushd third_party/virglrenderer
git checkout "$VIRGLRENDERER_VERSION"
meson build/ -Drender-server=true -Drender-server-worker=process -Dvenus-experimental=true $EXTRA_MESON_ARGS
meson build/ $EXTRA_MESON_ARGS
ninja -C build install
popd
@@ -25,7 +26,6 @@ RUSTFLAGS='-L native=/usr/local/lib' cargo install \
bindgen \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local \
--version 0.60.1 \
$EXTRA_CARGO_ARGS
RUSTFLAGS='-L native=/usr/local/lib' cargo install \


@@ -0,0 +1,43 @@
From 3c57ec558bccc67fd53363c23deea20646be5c47 Mon Sep 17 00:00:00 2001
From: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Date: Wed, 17 Nov 2021 10:18:04 +0100
Subject: [PATCH] Hack syslog out
It's causing stability problems when running several Crosvm instances in
parallel.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
---
base/src/unix/linux/syslog.rs | 2 +-
common/sys_util/src/linux/syslog.rs | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/base/src/unix/linux/syslog.rs b/base/src/unix/linux/syslog.rs
index 05972a3a..f0db3781 100644
--- a/base/src/unix/linux/syslog.rs
+++ b/base/src/unix/linux/syslog.rs
@@ -35,7 +35,7 @@ pub struct PlatformSyslog {
impl Syslog for PlatformSyslog {
fn new() -> Result<Self, Error> {
Ok(Self {
- socket: Some(openlog_and_get_socket()?),
+ socket: None,
})
}
diff --git a/common/sys_util/src/linux/syslog.rs b/common/sys_util/src/linux/syslog.rs
index 05972a3a..f0db3781 100644
--- a/common/sys_util/src/linux/syslog.rs
+++ b/common/sys_util/src/linux/syslog.rs
@@ -35,7 +35,7 @@ pub struct PlatformSyslog {
impl Syslog for PlatformSyslog {
fn new() -> Result<Self, Error> {
Ok(Self {
- socket: Some(openlog_and_get_socket()?),
+ socket: None,
})
}
--
2.25.1


@@ -1,5 +1,4 @@
#!/bin/sh
# shellcheck disable=SC2086 # we want word splitting
set -ex
@@ -16,16 +15,10 @@ if [ -n "${DEQP_RUNNER_GIT_TAG}${DEQP_RUNNER_GIT_REV}" ]; then
DEQP_RUNNER_CARGO_ARGS="${DEQP_RUNNER_CARGO_ARGS} ${EXTRA_CARGO_ARGS}"
else
# Install from package registry
DEQP_RUNNER_CARGO_ARGS="--version 0.15.0 ${EXTRA_CARGO_ARGS} -- deqp-runner"
DEQP_RUNNER_CARGO_ARGS="--version 0.13.1 ${EXTRA_CARGO_ARGS} -- deqp-runner"
fi
cargo install --locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local \
${DEQP_RUNNER_CARGO_ARGS}
# remove unused test runners to shrink images for the Mesa CI build (not kernel,
# which chooses its own deqp branch)
if [ -z "${DEQP_RUNNER_GIT_TAG}${DEQP_RUNNER_GIT_REV}" ]; then
rm -f /usr/local/bin/igt-runner
fi


@@ -1,5 +1,4 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -ex
@@ -12,13 +11,6 @@ git clone \
/VK-GL-CTS
pushd /VK-GL-CTS
# Apply a patch to update zlib link to an available version.
# vulkan-cts-1.3.3.0 uses zlib 1.2.12 which was removed from zlib server due to
# a CVE. See https://zlib.net/
# FIXME: Remove this patch when uprev to 1.3.4.0+
wget -O- https://github.com/KhronosGroup/VK-GL-CTS/commit/6bb2e7d64261bedb503947b1b251b1eeeb49be73.patch |
git am -
# --insecure is due to SSL cert failures hitting sourceforge for zlib and
# libpng (sigh). The archives get their checksums checked anyway, and git
# always goes through ssh or https.
@@ -68,9 +60,6 @@ cp \
cp \
/deqp/external/openglcts/modules/gl_cts/data/mustpass/gl/khronos_mustpass/4.6.1.x/*-master.txt \
/deqp/mustpass/.
cp \
/deqp/external/openglcts/modules/gl_cts/data/mustpass/gl/khronos_mustpass_single/4.6.1.x/*-single.txt \
/deqp/mustpass/.
# Save *some* executor utils, but otherwise strip things down
# to reduce the deqp build size:
@@ -88,11 +77,10 @@ rm -rf /deqp/external/openglcts/modules/cts-runner
rm -rf /deqp/modules/internal
rm -rf /deqp/execserver
rm -rf /deqp/framework
# shellcheck disable=SC2038,SC2185 # TODO: rewrite find
find -iname '*cmake*' -o -name '*ninja*' -o -name '*.o' -o -name '*.a' | xargs rm -rf
${STRIP_CMD:-strip} external/vulkancts/modules/vulkan/deqp-vk
${STRIP_CMD:-strip} external/openglcts/modules/glcts
${STRIP_CMD:-strip} modules/*/deqp-*
du -sh ./*
du -sh *
rm -rf /VK-GL-CTS
popd


@@ -1,5 +1,4 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -ex
@@ -12,15 +11,12 @@ pushd kernel
# debian (they'll get blown away by the rm of the kernel dir at the end).
mkdir -p ld-links
for i in /usr/bin/*-ld /usr/bin/ld; do
i=$(basename $i)
i=`basename $i`
ln -sf /usr/bin/$i.bfd ld-links/$i
done
export PATH=`pwd`/ld-links:$PATH
NEWPATH=$(pwd)/ld-links
export PATH=$NEWPATH:$PATH
KERNEL_FILENAME=$(basename $KERNEL_URL)
export LOCALVERSION="$KERNEL_FILENAME"
export LOCALVERSION="`basename $KERNEL_URL`"
./scripts/kconfig/merge_config.sh ${DEFCONFIG} ../.gitlab-ci/container/${KERNEL_ARCH}.config
make ${KERNEL_IMAGE_NAME}
for image in ${KERNEL_IMAGE_NAME}; do
@@ -32,8 +28,10 @@ if [[ -n ${DEVICE_TREES} ]]; then
cp ${DEVICE_TREES} /lava-files/.
fi
make modules
INSTALL_MOD_PATH=/lava-files/rootfs-${DEBIAN_ARCH}/ make modules_install
if [[ ${DEBIAN_ARCH} = "amd64" || ${DEBIAN_ARCH} = "arm64" ]]; then
make modules
INSTALL_MOD_PATH=/lava-files/rootfs-${DEBIAN_ARCH}/ make modules_install
fi
if [[ ${DEBIAN_ARCH} = "arm64" ]]; then
make Image.lzma


@@ -26,5 +26,5 @@ mkdir -p /usr/lib/clc
ln -s /usr/share/clc/spirv64-mesa3d-.spv /usr/lib/clc/
ln -s /usr/share/clc/spirv-mesa3d-.spv /usr/lib/clc/
du -sh ./*
du -sh *
rm -rf /libclc /llvm-project


@@ -1,14 +1,14 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -ex
export LIBDRM_VERSION=libdrm-2.4.110
wget https://dri.freedesktop.org/libdrm/"$LIBDRM_VERSION".tar.xz
tar -xvf "$LIBDRM_VERSION".tar.xz && rm "$LIBDRM_VERSION".tar.xz
cd "$LIBDRM_VERSION"
wget https://dri.freedesktop.org/libdrm/$LIBDRM_VERSION.tar.xz
tar -xvf $LIBDRM_VERSION.tar.xz && rm $LIBDRM_VERSION.tar.xz
cd $LIBDRM_VERSION
meson build -D vc4=false -D freedreno=false -D etnaviv=false $EXTRA_MESON_ARGS
ninja -C build install
cd ..
rm -rf "$LIBDRM_VERSION"
rm -rf $LIBDRM_VERSION


@@ -1,19 +0,0 @@
#!/bin/bash
set -ex
wget https://github.com/KhronosGroup/SPIRV-LLVM-Translator/archive/refs/tags/v13.0.0.tar.gz
tar -xvf v13.0.0.tar.gz && rm v13.0.0.tar.gz
mkdir SPIRV-LLVM-Translator-13.0.0/build
pushd SPIRV-LLVM-Translator-13.0.0/build
cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr
ninja
ninja install
# For some reason llvm-spirv is not installed by default
ninja llvm-spirv
cp tools/llvm-spirv/llvm-spirv /usr/bin/
popd
du -sh SPIRV-LLVM-Translator-13.0.0
rm -rf SPIRV-LLVM-Translator-13.0.0


@@ -1,12 +0,0 @@
#!/bin/bash
set -ex
MOLD_VERSION="1.6.0"
git clone -b v"$MOLD_VERSION" --single-branch --depth 1 https://github.com/rui314/mold.git
cd mold
make
make install
cd ..
rm -rf mold


@@ -1,19 +1,16 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -ex
git clone https://gitlab.freedesktop.org/mesa/piglit.git --single-branch --no-checkout /piglit
pushd /piglit
git checkout 591c91865012de4224bea551eac5d2274acf06ad
git checkout b2c9d8f56b45d79f804f4cb5ac62520f0edd8988
patch -p1 <$OLDPWD/.gitlab-ci/piglit/disable-vs_in.diff
cmake -S . -B . -G Ninja -DCMAKE_BUILD_TYPE=Release $PIGLIT_OPTS $EXTRA_CMAKE_ARGS
ninja $PIGLIT_BUILD_TARGETS
# shellcheck disable=SC2038,SC2185 # TODO: rewrite find
find -name .git -o -name '*ninja*' -o -iname '*cmake*' -o -name '*.[chao]' | xargs rm -rf
rm -rf target_api
if [ "$PIGLIT_BUILD_TARGETS" = "piglit_replayer" ]; then
# shellcheck disable=SC2038,SC2185 # TODO: rewrite find
if [ "x$PIGLIT_BUILD_TARGETS" = "xpiglit_replayer" ]; then
find ! -regex "^\.$" \
! -regex "^\.\/piglit.*" \
! -regex "^\.\/framework.*" \


@@ -8,24 +8,17 @@ set -ex
# cargo (and rustup) wants to store stuff in $HOME/.cargo, and binaries in
# $HOME/.cargo/bin. Make bin a link to a public bin directory so the commands
# are just available to all build jobs.
mkdir -p "$HOME"/.cargo
ln -s /usr/local/bin "$HOME"/.cargo/bin
# Rusticl requires at least Rust 1.59.0
#
# Also, pick a specific snapshot from rustup so the compiler doesn't drift on
# us.
RUST_VERSION=1.59.0-2022-02-24
mkdir -p $HOME/.cargo
ln -s /usr/local/bin $HOME/.cargo/bin
# For rust in Mesa, we use rustup to install. This lets us pick an arbitrary
# version of the compiler, rather than whatever the container's Debian comes
# with.
wget https://sh.rustup.rs -O - | sh -s -- \
--default-toolchain $RUST_VERSION \
--profile minimal \
-y
rustup component add rustfmt
#
# Pick the rust compiler (1.48) available in Debian stable, and pick a specific
# snapshot from rustup so the compiler doesn't drift on us.
wget https://sh.rustup.rs -O - | \
sh -s -- -y --default-toolchain 1.49.0-2020-12-31
# Set up a config script for cross compiling -- cargo needs your system cc for
# linking in cross builds, but doesn't know what you want to use for system cc.
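For context (illustrative sketch, not part of the diff): such a cargo config typically maps a target triple to the cross cc. The config path, target triple and linker binary below are assumptions for illustration only.
cat > "$HOME"/.cargo/config <<EOF
# Tell cargo which system cc to link with for this (illustrative) cross target.
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
EOF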

View File

@@ -55,9 +55,9 @@ BASE_ARGS_GN_FILE="${SCRIPT_DIR}/build-skqp_base.gn"
SKQP_ARCH=${SKQP_ARCH:-x64}
SKIA_DIR=${SKIA_DIR:-$(mktemp -d)}
SKQP_OUT_DIR=${SKIA_DIR}/out/${SKQP_ARCH}
SKQP_INSTALL_DIR=${SKQP_INSTALL_DIR:-/skqp}
SKQP_INSTALL_DIR=/skqp
SKQP_ASSETS_DIR="${SKQP_INSTALL_DIR}/assets"
SKQP_BINARIES=(skqp list_gpu_unit_tests list_gms)
SKQP_BINARIES=(skqp)
download_skia_source

View File

@@ -1,18 +0,0 @@
Nima-Cpp is no longer available on googlesource; revert to the github one
Simulates `git revert 49233d2521054037ded7d760427c4a0dc1e11356`
diff --git a/DEPS b/DEPS
index 7e0b941..c88b064 100644
--- a/DEPS
+++ b/DEPS
@@ -33,8 +33,8 @@ deps = {
#"third_party/externals/v8" : "https://chromium.googlesource.com/v8/v8.git@5f1ae66d5634e43563b2d25ea652dfb94c31a3b4",
"third_party/externals/wuffs" : "https://skia.googlesource.com/external/github.com/google/wuffs.git@fda3c4c9863d9f9fcec58ae66508c4621fc71ea5",
"third_party/externals/zlib" : "https://chromium.googlesource.com/chromium/src/third_party/zlib@47af7c547f8551bd25424e56354a2ae1e9062859",
- "third_party/externals/Nima-Cpp" : "https://skia.googlesource.com/external/github.com/2d-inc/Nima-Cpp.git@4bd02269d7d1d2e650950411325eafa15defb084",
- "third_party/externals/Nima-Math-Cpp" : "https://skia.googlesource.com/external/github.com/2d-inc/Nima-Math-Cpp.git@e0c12772093fa8860f55358274515b86885f0108",
+ "third_party/externals/Nima-Cpp" : "https://github.com/2d-inc/Nima-Cpp.git@4bd02269d7d1d2e650950411325eafa15defb084",
+ "third_party/externals/Nima-Math-Cpp" : "https://github.com/2d-inc/Nima-Math-Cpp.git@e0c12772093fa8860f55358274515b86885f0108",
"../src": {
"url": "https://chromium.googlesource.com/chromium/src.git@ccf3465732e5d5363f0e44a8fac54550f62dd1d0",

View File

@@ -1,5 +1,4 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -ex

View File

@@ -17,8 +17,7 @@ export PATH=$CCACHE_PATH:$PATH
export CC="${CCACHE_PATH}/gcc"
export CXX="${CCACHE_PATH}/g++"
# When not using the mold linker (e.g. unsupported architecture), force
# linkers to gold, since it's so much faster for building. We can't use
# Force linkers to gold, since it's so much faster for building. We can't use
# lld because we're on old debian and it's buggy. mingw fails meson builds
# with it with "meson.build:21:0: ERROR: Unable to determine dynamic linker"
find /usr/bin -name \*-ld -o -name ld | \
@@ -28,11 +27,8 @@ find /usr/bin -name \*-ld -o -name ld | \
ccache --show-stats
# Make a wrapper script for ninja to always include the -j flags
{
echo '#!/bin/sh -x'
# shellcheck disable=SC2016
echo '/usr/bin/ninja -j${FDO_CI_CONCURRENT:-4} "$@"'
} > /usr/local/bin/ninja
echo '#!/bin/sh -x' > /usr/local/bin/ninja
echo '/usr/bin/ninja -j${FDO_CI_CONCURRENT:-4} "$@"' >> /usr/local/bin/ninja
chmod +x /usr/local/bin/ninja
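For illustration only (not part of the diff): with FDO_CI_CONCURRENT=8, a plain "ninja install" issued through this wrapper ends up running:
/usr/bin/ninja -j8 install   # -j comes from FDO_CI_CONCURRENT, default 4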
# Set MAKEFLAGS so that all make invocations in container builds include the

View File

@@ -13,7 +13,7 @@ arch2=${5:-$2}
# and allowing it in code generation means we get unwind symbols that break
# the libEGL and driver symbol tests.
cat > "$cross_file" <<EOF
cat >$cross_file <<EOF
[binaries]
ar = '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/$arch-ar'
c = ['ccache', '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch2}29-clang', '-fno-exceptions', '-fno-unwind-tables', '-fno-asynchronous-unwind-tables']

View File

@@ -1,5 +1,4 @@
#!/bin/sh
# shellcheck disable=SC2086 # we want word splitting
# Makes a .pc file in the Android NDK for meson to find its libraries.

View File

@@ -2,7 +2,7 @@
arch=$1
cross_file="/cross_file-$arch.txt"
/usr/share/meson/debcrossgen --arch "$arch" -o "$cross_file"
/usr/share/meson/debcrossgen --arch $arch -o "$cross_file"
# Explicitly set ccache path for cross compilers
sed -i "s|/usr/bin/\([^-]*\)-linux-gnu\([^-]*\)-g|/usr/lib/ccache/\\1-linux-gnu\\2-g|g" "$cross_file"
if [ "$arch" = "i386" ]; then
@@ -10,11 +10,10 @@ if [ "$arch" = "i386" ]; then
sed -i "s|cpu_family = 'i686'|cpu_family = 'x86'|g" "$cross_file"
fi
# Rely on qemu-user being configured in binfmt_misc on the host
# shellcheck disable=SC1003 # somehow this sed doesn't seem to work for me locally
sed -i -e '/\[properties\]/a\' -e "needs_exe_wrapper = False" "$cross_file"
# Add a line for rustc, which debcrossgen is missing.
cc=$(sed -n 's|c = .\(.*\).|\1|p' < "$cross_file")
cc=`sed -n 's|c = .\(.*\).|\1|p' < $cross_file`
if [[ "$arch" = "arm64" ]]; then
rust_target=aarch64-unknown-linux-gnu
elif [[ "$arch" = "armhf" ]]; then
@@ -28,7 +27,6 @@ elif [[ "$arch" = "s390x" ]]; then
else
echo "Needs rustc target mapping"
fi
# shellcheck disable=SC1003 # somehow this sed doesn't seem to work for me locally
sed -i -e '/\[binaries\]/a\' -e "rust = ['rustc', '--target=$rust_target', '-C', 'linker=$cc']" "$cross_file"
# Set up cmake cross compile toolchain file for dEQP builds
@@ -36,18 +34,18 @@ toolchain_file="/toolchain-$arch.cmake"
if [[ "$arch" = "arm64" ]]; then
GCC_ARCH="aarch64-linux-gnu"
DE_CPU="DE_CPU_ARM_64"
CMAKE_ARCH=arm
elif [[ "$arch" = "armhf" ]]; then
GCC_ARCH="arm-linux-gnueabihf"
DE_CPU="DE_CPU_ARM"
CMAKE_ARCH=arm
fi
if [[ -n "$GCC_ARCH" ]]; then
{
echo "set(CMAKE_SYSTEM_NAME Linux)";
echo "set(CMAKE_SYSTEM_PROCESSOR arm)";
echo "set(CMAKE_C_COMPILER /usr/lib/ccache/$GCC_ARCH-gcc)";
echo "set(CMAKE_CXX_COMPILER /usr/lib/ccache/$GCC_ARCH-g++)";
echo "set(ENV{PKG_CONFIG} \"/usr/bin/$GCC_ARCH-pkg-config\")";
echo "set(DE_CPU $DE_CPU)";
} > "$toolchain_file"
echo "set(CMAKE_SYSTEM_NAME Linux)" > "$toolchain_file"
echo "set(CMAKE_SYSTEM_PROCESSOR arm)" >> "$toolchain_file"
echo "set(CMAKE_C_COMPILER /usr/lib/ccache/$GCC_ARCH-gcc)" >> "$toolchain_file"
echo "set(CMAKE_CXX_COMPILER /usr/lib/ccache/$GCC_ARCH-g++)" >> "$toolchain_file"
echo "set(ENV{PKG_CONFIG} \"/usr/bin/$GCC_ARCH-pkg-config\")" >> "$toolchain_file"
echo "set(DE_CPU $DE_CPU)" >> "$toolchain_file"
fi
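For context (illustrative sketch, not part of the diff): a dEQP build would consume the generated toolchain file roughly like the following; the source/build paths and the DEQP_TARGET value are assumptions for illustration.
cmake -S /VK-GL-CTS -B /deqp-arm64-build -G Ninja \
      -DCMAKE_TOOLCHAIN_FILE=/toolchain-arm64.cmake \
      -DDEQP_TARGET=surfaceless   # cross compilers and DE_CPU come from the toolchain file
ninja -C /deqp-arm64-build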

View File

@@ -1,7 +1,4 @@
#!/bin/bash
# shellcheck disable=SC2140 # ugly array, remove later
# shellcheck disable=SC2288 # ugly array, remove later
# shellcheck disable=SC2086 # we want word splitting
set -ex
@@ -18,10 +15,6 @@ elif [ $DEBIAN_ARCH = amd64 ]; then
apt-get -y install --no-install-recommends wget gnupg2 software-properties-common
apt-key add /llvm-snapshot.gpg.key
add-apt-repository "deb https://apt.llvm.org/bullseye/ llvm-toolchain-bullseye-13 main"
# Debian bullseye has older wine 5.0, we want >= 7.0 for traces.
apt-key add /winehq.gpg.key
apt-add-repository https://dl.winehq.org/wine-builds/debian/
ARCH_PACKAGES="firmware-amd-graphics
inetutils-syslogd
@@ -41,10 +34,6 @@ elif [ $DEBIAN_ARCH = amd64 ]; then
spirv-tools
sysvinit-core
"
elif [ $DEBIAN_ARCH = armhf ]; then
ARCH_PACKAGES="firmware-misc-nonfree
"
fi
INSTALL_CI_FAIRY_PACKAGES="git
@@ -63,7 +52,6 @@ apt-get -y install --no-install-recommends \
ca-certificates \
firmware-realtek \
initramfs-tools \
jq \
libasan6 \
libexpat1 \
libpng16-16 \
@@ -104,30 +92,12 @@ apt-get -y install --no-install-recommends \
waffle-utils \
wget \
xinit \
xserver-xorg-core \
zstd
if [ "$DEBIAN_ARCH" = "amd64" ]; then
# workaround wine needing 32-bit
# https://bugs.winehq.org/show_bug.cgi?id=53393
apt-get install -y --no-remove wine-stable-amd64 # a requirement for wine-stable
WINE_PKG="wine-stable"
WINE_PKG_DROP="wine-stable-i386"
apt download "${WINE_PKG}"
dpkg --ignore-depends="${WINE_PKG_DROP}" -i "${WINE_PKG}"*.deb
rm "${WINE_PKG}"*.deb
sed -i "/${WINE_PKG_DROP}/d" /var/lib/dpkg/status
apt-get install -y --no-remove winehq-stable # symlinks-only, depends on wine-stable
fi
xserver-xorg-core
# Needed for ci-fairy, this revision is able to upload files to
# MinIO and doesn't depend on git
pip3 install git+http://gitlab.freedesktop.org/freedesktop/ci-templates@34f4ade99434043f88e164933f570301fd18b125
# Needed for manipulation with traces yaml files.
pip3 install yq
apt-get purge -y \
$INSTALL_CI_FAIRY_PACKAGES
@@ -255,7 +225,7 @@ rm -rf etc/dpkg
# Drop directories not part of ostree
# Note that /var needs to exist as ostree bind mounts the deployment /var over
# it
rm -rf var/* srv share
rm -rf var/* opt srv share
# ca-certificates are in /etc drop the source
rm -rf usr/share/ca-certificates

View File

@@ -1,5 +1,4 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -e
set -o xtrace
@@ -38,9 +37,8 @@ apt-get install -y --no-remove \
wget
if [[ $arch != "armhf" ]]; then
# See the list of available architectures in https://apt.llvm.org/bullseye/dists/llvm-toolchain-bullseye-13/main/
if [[ $arch == "s390x" ]] || [[ $arch == "i386" ]] || [[ $arch == "arm64" ]]; then
LLVM=13
if [[ $arch == "s390x" ]]; then
LLVM=9
else
LLVM=11
fi
@@ -48,7 +46,7 @@ if [[ $arch != "armhf" ]]; then
# llvm-*-tools:$arch conflicts with python3:amd64. Install dependencies only
# with apt-get, then force-install llvm-*-{dev,tools}:$arch with dpkg to get
# around this.
apt-get install -y --no-remove --no-install-recommends \
apt-get install -y --no-remove \
libclang-cpp${LLVM}:$arch \
libffi-dev:$arch \
libgcc-s1:$arch \

View File

@@ -1,5 +1,4 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -ex
@@ -20,7 +19,7 @@ rm $ndk.zip
# duplicate files. Turn them into hardlinks to save on container space.
rdfind -makehardlinks true -makeresultsfile false /android-ndk-r21d/
# Drop some large tools we won't use in this build.
find /android-ndk-r21d/ -type f | grep -E -i "clang-check|clang-tidy|lldb" | xargs rm -f
find /android-ndk-r21d/ -type f | egrep -i "clang-check|clang-tidy|lldb" | xargs rm -f
sh .gitlab-ci/container/create-android-ndk-pc.sh /$ndk zlib.pc "" "-lz" "1.2.3"

View File

@@ -1,5 +1,4 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -e
set -o xtrace
@@ -9,15 +8,9 @@ sed -i -e 's/http:\/\/deb/https:\/\/deb/g' /etc/apt/sources.list
echo 'deb https://deb.debian.org/debian buster main' >/etc/apt/sources.list.d/buster.list
apt-get update
# Ephemeral packages (installed for this script and removed again at
# the end)
STABLE_EPHEMERAL=" \
libssl-dev \
"
apt-get -y install \
${EXTRA_LOCAL_PACKAGES} \
${STABLE_EPHEMERAL} \
abootimg \
autoconf \
automake \
bc \
@@ -61,8 +54,7 @@ apt-get -y install \
u-boot-tools \
wget \
xz-utils \
zlib1g-dev \
zstd
zlib1g-dev
# Not available anymore in bullseye
apt-get install -y --no-remove -t buster \
@@ -75,12 +67,8 @@ arch=armhf
. .gitlab-ci/container/container_pre_build.sh
. .gitlab-ci/container/build-mold.sh
# dependencies where we want a specific version
EXTRA_MESON_ARGS=
. .gitlab-ci/container/build-libdrm.sh
apt-get purge -y $STABLE_EPHEMERAL
. .gitlab-ci/container/container_post_build.sh

View File

@@ -9,6 +9,7 @@ sed -i -e 's/http:\/\/deb/https:\/\/deb/g' /etc/apt/sources.list
apt-get update
apt-get install -y --no-remove \
abootimg \
cpio \
fastboot \
netcat \
@@ -18,8 +19,7 @@ apt-get install -y --no-remove \
python3-serial \
rsync \
snmp \
wget \
zstd
wget
# setup SNMPv2 SMI MIB
wget https://raw.githubusercontent.com/net-snmp/net-snmp/master/mibs/SNMPv2-SMI.txt \
@@ -37,9 +37,3 @@ ln -s \
/baremetal-files/Image \
/baremetal-files/tegra210-p3450-0000.dtb \
/baremetal-files/jetson-nano/boot/
mkdir -p /baremetal-files/jetson-tk1/boot/
ln -s \
/baremetal-files/zImage \
/baremetal-files/tegra124-jetson-tk1.dtb \
/baremetal-files/jetson-tk1/boot/

View File

@@ -1,16 +1,5 @@
#!/bin/bash
set -e
arch=s390x
# Ephemeral packages (installed for this script and removed again at the end)
STABLE_EPHEMERAL="libssl-dev"
apt-get -y install "$STABLE_EPHEMERAL"
. .gitlab-ci/container/build-mold.sh
apt-get purge -y "$STABLE_EPHEMERAL"
. .gitlab-ci/container/cross_build.sh

View File

@@ -5,9 +5,12 @@ set -o xtrace
# Installing wine, needed for testing mingw or nine
# We need multiarch for Wine
dpkg --add-architecture i386
apt-get update
apt-get install -y --no-remove \
wine \
wine32 \
wine64 \
xvfb

View File

@@ -1,5 +1,4 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -e
set -o xtrace
@@ -28,7 +27,6 @@ apt-get install -y --no-remove \
bison \
ccache \
dpkg-cross \
findutils \
flex \
g++ \
cmake \
@@ -38,12 +36,15 @@ apt-get install -y --no-remove \
kmod \
libclang-13-dev \
libclang-11-dev \
libclang-9-dev \
libclc-dev \
libelf-dev \
libepoxy-dev \
libexpat1-dev \
libgtk-3-dev \
libllvm13 \
libllvm11 \
libllvm9 \
libomxil-bellagio-dev \
libpciaccess-dev \
libunwind-dev \
@@ -57,13 +58,13 @@ apt-get install -y --no-remove \
libxrandr-dev \
libxrender-dev \
libxshmfence-dev \
libxvmc-dev \
libxxf86vm-dev \
make \
meson \
pkg-config \
python3-mako \
python3-pil \
python3-ply \
python3-requests \
qemu-user \
valgrind \
@@ -72,17 +73,11 @@ apt-get install -y --no-remove \
x11proto-gl-dev \
x11proto-randr-dev \
xz-utils \
zlib1g-dev \
zstd
zlib1g-dev
# Needed for ci-fairy, this revision is able to upload files to MinIO
pip3 install git+http://gitlab.freedesktop.org/freedesktop/ci-templates@34f4ade99434043f88e164933f570301fd18b125
# We need at least 0.61.4 for proper Rust
pip3 install meson==0.61.5
. .gitlab-ci/container/build-rust.sh
. .gitlab-ci/container/debian/x86_build-base-wine.sh
############### Uninstall ephemeral packages

View File

@@ -1,7 +1,4 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -e
# Pull packages from msys2 repository that can be directly used.
# We can use https://packages.msys2.org/ to retrieve the newest package

View File

@@ -1,18 +1,11 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -e
# Building libdrm (libva dependency)
. .gitlab-ci/container/build-libdrm.sh
wd=$PWD
CMAKE_TOOLCHAIN_MINGW_PATH=$wd/.gitlab-ci/container/debian/x86_mingw-toolchain.cmake
mkdir -p ~/tmp
pushd ~/tmp
# Building DirectX-Headers
git clone https://github.com/microsoft/DirectX-Headers -b v1.606.4 --depth 1
git clone https://github.com/microsoft/DirectX-Headers -b v1.606.3 --depth 1
mkdir -p DirectX-Headers/build
pushd DirectX-Headers/build
meson .. \
@@ -24,25 +17,6 @@ meson .. \
ninja install
popd
# Building libva
git clone https://github.com/intel/libva
pushd libva/
# Checking out commit hash with libva-win32 support
# This feature will be released with libva version 2.17
git checkout 2579eb0f77897dc01a02c1e43defc63c40fd2988
popd
# libva already has a build dir in their repo, use builddir instead
mkdir -p libva/builddir
pushd libva/builddir
meson .. \
--backend=ninja \
--buildtype=release \
-Dprefix=/usr/x86_64-w64-mingw32/ \
--cross-file=$wd/.gitlab-ci/x86_64-w64-mingw32
ninja install
popd
export VULKAN_SDK_VERSION=1.3.211.0
# Building SPIRV Tools

View File

@@ -1,5 +1,4 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -e
set -o xtrace
@@ -13,7 +12,6 @@ STABLE_EPHEMERAL=" \
autotools-dev \
bzip2 \
libtool \
libssl-dev \
python3-pip \
"
@@ -29,6 +27,7 @@ apt-get install -y --no-remove \
libclang-cpp11-dev \
libgbm-dev \
libglvnd-dev \
libllvmspirvlib-dev \
liblua5.3-dev \
libxcb-dri2-0-dev \
libxcb-dri3-dev \
@@ -42,16 +41,14 @@ apt-get install -y --no-remove \
libxml2-dev \
llvm-13-dev \
llvm-11-dev \
llvm-9-dev \
ocl-icd-opencl-dev \
python3-freezegun \
python3-pytest \
procps \
spirv-tools \
shellcheck \
strace \
time \
yamllint \
zstd
time
. .gitlab-ci/container/container_pre_build.sh
@@ -61,17 +58,11 @@ export XORG_RELEASES=https://xorg.freedesktop.org/releases/individu
export XORGMACROS_VERSION=util-macros-1.19.0
. .gitlab-ci/container/build-mold.sh
wget $XORG_RELEASES/util/$XORGMACROS_VERSION.tar.bz2
tar -xvf $XORGMACROS_VERSION.tar.bz2 && rm $XORGMACROS_VERSION.tar.bz2
cd $XORGMACROS_VERSION; ./configure; make install; cd ..
rm -rf $XORGMACROS_VERSION
. .gitlab-ci/container/build-llvm-spirv.sh
. .gitlab-ci/container/build-libclc.sh
. .gitlab-ci/container/build-libdrm.sh
. .gitlab-ci/container/build-wayland.sh
@@ -83,7 +74,7 @@ cd shader-db
make
popd
git clone https://github.com/microsoft/DirectX-Headers -b v1.606.4 --depth 1
git clone https://github.com/microsoft/DirectX-Headers -b v1.606.3 --depth 1
mkdir -p DirectX-Headers/build
pushd DirectX-Headers/build
meson .. --backend=ninja --buildtype=release -Dbuild-test=false
@@ -94,12 +85,6 @@ rm -rf DirectX-Headers
pip3 install git+https://git.lavasoftware.org/lava/lavacli@3db3ddc45e5358908bc6a17448059ea2340492b7
# install bindgen
RUSTFLAGS='-L native=/usr/local/lib' cargo install \
bindgen --version 0.59.2 \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local
############### Uninstall the build software
apt-get purge -y \

View File

@@ -1,5 +1,4 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -e
set -o xtrace
@@ -13,43 +12,11 @@ sed -i -e 's/http:\/\/deb/https:\/\/deb/g' /etc/apt/sources.list
# Ephemeral packages (installed for this script and removed again at
# the end)
STABLE_EPHEMERAL=" \
autoconf \
automake \
bc \
bison \
bzip2 \
ccache \
cmake \
clang-11 \
flex \
glslang-tools \
g++ \
libasound2-dev \
libcap-dev \
libclang-cpp11-dev \
libegl-dev \
libelf-dev \
libepoxy-dev \
libgbm-dev \
libpciaccess-dev \
libvulkan-dev \
libwayland-dev \
libx11-xcb-dev \
libxext-dev \
llvm-13-dev \
llvm-11-dev \
make \
meson \
patch \
pkg-config \
protobuf-compiler \
cargo \
python3-dev \
python3-pip \
python3-setuptools \
python3-wheel \
spirv-tools \
wayland-protocols \
xz-utils \
"
# Add llvm 13 to the build image
@@ -59,19 +26,14 @@ add-apt-repository "deb https://apt.llvm.org/bullseye/ llvm-toolchain-bullseye-1
apt-get update
apt-get dist-upgrade -y
apt-get install -y \
sysvinit-core
apt-get install -y --no-remove \
git \
git-lfs \
inetutils-syslogd \
iptables \
jq \
libasan6 \
libexpat1 \
libllvm13 \
libllvm11 \
libllvm9 \
liblz4-1 \
libpng16-16 \
libpython3.9 \
@@ -91,69 +53,22 @@ apt-get install -y --no-remove \
python3-requests \
python3-six \
python3-yaml \
socat \
vulkan-tools \
waffle-utils \
wget \
xauth \
xvfb \
zlib1g \
zstd
zlib1g
apt-get install -y --no-install-recommends \
$STABLE_EPHEMERAL
. .gitlab-ci/container/container_pre_build.sh
############### Build kernel
export DEFCONFIG="arch/x86/configs/x86_64_defconfig"
export KERNEL_IMAGE_NAME=bzImage
export KERNEL_ARCH=x86_64
export DEBIAN_ARCH=amd64
mkdir -p /lava-files/
. .gitlab-ci/container/build-kernel.sh
# Needed for ci-fairy, this revision is able to upload files to MinIO
# and doesn't depend on git
pip3 install git+http://gitlab.freedesktop.org/freedesktop/ci-templates@34f4ade99434043f88e164933f570301fd18b125
# Needed for manipulation with traces yaml files.
pip3 install yq
# Needed for crosvm compilation.
update-alternatives --install /usr/bin/clang clang /usr/bin/clang-11 100
############### Build LLVM-SPIRV translator
. .gitlab-ci/container/build-llvm-spirv.sh
############### Build libclc
. .gitlab-ci/container/build-libclc.sh
############### Build libdrm
. .gitlab-ci/container/build-libdrm.sh
############### Build Wayland
. .gitlab-ci/container/build-wayland.sh
############### Build Crosvm
. .gitlab-ci/container/build-rust.sh
. .gitlab-ci/container/build-crosvm.sh
############### Build dEQP runner
. .gitlab-ci/container/build-deqp-runner.sh
rm -rf /root/.cargo
rm -rf /root/.rustup
ccache --show-stats
rm -rf ~/.cargo
apt-get purge -y $STABLE_EPHEMERAL

View File

@@ -1,5 +1,4 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -e
set -o xtrace
@@ -8,18 +7,28 @@ export DEBIAN_FRONTEND=noninteractive
# Ephemeral packages (installed for this script and removed again at the end)
STABLE_EPHEMERAL=" \
autoconf \
automake \
bc \
bison \
bzip2 \
ccache \
clang-13 \
clang-11 \
cmake \
flex \
g++ \
glslang-tools \
libasound2-dev \
libcap-dev \
libclang-cpp13-dev \
libclang-cpp11-dev \
libelf-dev \
libexpat1-dev \
libfdt-dev \
libgbm-dev \
libgles2-mesa-dev \
libllvmspirvlib-dev \
libpciaccess-dev \
libpng-dev \
libudev-dev \
@@ -27,10 +36,12 @@ STABLE_EPHEMERAL=" \
libwaffle-dev \
libx11-xcb-dev \
libxcb-dri2-0-dev \
libxext-dev \
libxkbcommon-dev \
libxrender-dev \
llvm-13-dev \
llvm-11-dev \
llvm-spirv \
make \
meson \
ocl-icd-opencl-dev \
@@ -52,18 +63,51 @@ apt-get install -y --no-remove \
libclang-cpp11 \
libcap2 \
libegl1 \
libepoxy0 \
libepoxy-dev \
libfdt1 \
libllvmspirvlib11 \
libxcb-shm0 \
ocl-icd-libopencl1 \
python3-lxml \
python3-renderdoc \
python3-simplejson \
spirv-tools
socat \
spirv-tools \
sysvinit-core \
wget
. .gitlab-ci/container/container_pre_build.sh
############### Build libdrm
. .gitlab-ci/container/build-libdrm.sh
############### Build Wayland
. .gitlab-ci/container/build-wayland.sh
############### Build Crosvm
. .gitlab-ci/container/build-rust.sh
. .gitlab-ci/container/build-crosvm.sh
rm -rf /root/.cargo
rm -rf /root/.rustup
############### Build kernel
export DEFCONFIG="arch/x86/configs/x86_64_defconfig"
export KERNEL_IMAGE_NAME=bzImage
export KERNEL_ARCH=x86_64
export DEBIAN_ARCH=amd64
mkdir -p /lava-files/
. .gitlab-ci/container/build-kernel.sh
############### Build libclc
. .gitlab-ci/container/build-libclc.sh
############### Build piglit
PIGLIT_OPTS="-DPIGLIT_BUILD_CL_TESTS=ON -DPIGLIT_BUILD_DMA_BUF_TESTS=ON" . .gitlab-ci/container/build-piglit.sh

View File

@@ -1,7 +1,6 @@
#!/bin/bash
# The relative paths in this file only become valid at runtime.
# shellcheck disable=SC1091
# shellcheck disable=SC2086 # we want word splitting
set -e
set -o xtrace
@@ -50,9 +49,8 @@ STABLE_EPHEMERAL=" \
xz-utils \
"
apt-get install -y --no-remove --no-install-recommends \
apt-get install -y --no-remove \
$STABLE_EPHEMERAL \
libepoxy0 \
libxcb-shm0 \
pciutils \
python3-lxml \
@@ -61,36 +59,87 @@ apt-get install -y --no-remove --no-install-recommends \
xserver-xorg-video-amdgpu \
xserver-xorg-video-ati
# We need multiarch for Wine
dpkg --add-architecture i386
# Install a more recent version of Wine than exists in Debian.
apt-key add .gitlab-ci/container/debian/winehq.gpg.key
apt-add-repository https://dl.winehq.org/wine-builds/debian/
apt-get update -q
apt update -qyy
# Needed for Valve's tracing jobs to collect information about the graphics
# hardware on the test devices.
pip3 install gfxinfo-mupuf==0.0.9
# workaround wine needing 32-bit
# https://bugs.winehq.org/show_bug.cgi?id=53393
apt-get install -y --no-remove wine-stable-amd64 # a requirement for wine-stable
WINE_PKG="wine-stable"
WINE_PKG_DROP="wine-stable-i386"
apt-get download "${WINE_PKG}"
dpkg --ignore-depends="${WINE_PKG_DROP}" -i "${WINE_PKG}"*.deb
rm "${WINE_PKG}"*.deb
sed -i "/${WINE_PKG_DROP}/d" /var/lib/dpkg/status
apt-get install -y --no-remove winehq-stable # symlinks-only, depends on wine-stable
apt install -y --no-remove --install-recommends winehq-stable
function setup_wine() {
export WINEDEBUG="-all"
export WINEPREFIX="$1"
# We don't want crash dialogs
cat >crashdialog.reg <<EOF
Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\Software\Wine\WineDbg]
"ShowCrashDialog"=dword:00000000
EOF
# Set the wine prefix and disable the crash dialog
wine regedit crashdialog.reg
rm crashdialog.reg
# An immediate wine command may fail with: "${WINEPREFIX}: Not a
# valid wine prefix." and that is just spit out because the check
# for the existence of the system.reg file fails. Just giving it a
# bit more time for it to be created solves the problem
# ...
while ! test -f "${WINEPREFIX}/system.reg"; do sleep 1; done
}
############### Install DXVK
. .gitlab-ci/container/setup-wine.sh "/dxvk-wine64"
. .gitlab-ci/container/install-wine-dxvk.sh
dxvk_install_release() {
local DXVK_VERSION=${1:-"1.10.1"}
wget "https://github.com/doitsujin/dxvk/releases/download/v${DXVK_VERSION}/dxvk-${DXVK_VERSION}.tar.gz"
tar xzpf dxvk-"${DXVK_VERSION}".tar.gz
"dxvk-${DXVK_VERSION}"/setup_dxvk.sh install
rm -rf "dxvk-${DXVK_VERSION}"
rm dxvk-"${DXVK_VERSION}".tar.gz
}
# Install from a Github PR number
dxvk_install_pr() {
local __prnum=$1
# NOTE: Clone the entire history of the repo so as not to think
# harder about cloning just enough for 'git describe' to work. 'git
# describe' is used by the dxvk build system to generate a
# dxvk_version Meson variable, which is nice-to-have.
git clone https://github.com/doitsujin/dxvk
pushd dxvk
git fetch origin pull/"$__prnum"/head:pr
git checkout pr
./package-release.sh pr ../dxvk-build --no-package
popd
pushd ./dxvk-build/dxvk-pr
./setup_dxvk.sh install
popd
rm -rf ./dxvk-build ./dxvk
}
# Sets up the WINEPREFIX for the DXVK installation commands below.
setup_wine "/dxvk-wine64"
dxvk_install_release "1.10.1"
#dxvk_install_pr 2359
############### Install apitrace binaries for wine
. .gitlab-ci/container/install-wine-apitrace.sh
# Add the apitrace path to the registry
wine64 \
wine \
reg add "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Environment" \
/v Path \
/t REG_EXPAND_SZ \
@@ -101,6 +150,14 @@ wine64 \
. .gitlab-ci/container/container_pre_build.sh
############### Build libdrm
. .gitlab-ci/container/build-libdrm.sh
############### Build Wayland
. .gitlab-ci/container/build-wayland.sh
############### Build parallel-deqp-runner's hang-detection tool
. .gitlab-ci/container/build-hang-detection.sh
@@ -127,7 +184,7 @@ PIGLIT_BUILD_TARGETS="piglit_replayer" . .gitlab-ci/container/build-piglit.sh
############### Build VKD3D-Proton
. .gitlab-ci/container/setup-wine.sh "/vkd3d-proton-wine64"
setup_wine "/vkd3d-proton-wine64"
. .gitlab-ci/container/build-vkd3d-proton.sh

View File

@@ -1,5 +1,4 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -e
set -o xtrace
@@ -9,12 +8,10 @@ EPHEMERAL="
autoconf
automake
bzip2
cmake
git
libtool
pkgconfig(epoxy)
pkgconfig(gbm)
pkgconfig(openssl)
unzip
wget
xz
@@ -67,7 +64,6 @@ dnf install -y --setopt=install_weak_deps=False \
python3-mako \
python3-devel \
python3-mako \
python3-ply \
vulkan-headers \
spirv-tools-devel \
spirv-llvm-translator-devel \
@@ -87,8 +83,6 @@ tar -xvf $XORGMACROS_VERSION.tar.bz2 && rm $XORGMACROS_VERSION.tar.bz2
cd $XORGMACROS_VERSION; ./configure; make install; cd ..
rm -rf $XORGMACROS_VERSION
. .gitlab-ci/container/build-mold.sh
. .gitlab-ci/container/build-libdrm.sh
. .gitlab-ci/container/build-wayland.sh

View File

@@ -53,7 +53,7 @@
variables:
FDO_DISTRIBUTION_VERSION: bullseye-slim
FDO_REPO_SUFFIX: $CI_JOB_NAME
FDO_DISTRIBUTION_EXEC: 'bash .gitlab-ci/container/${CI_JOB_NAME}.sh'
FDO_DISTRIBUTION_EXEC: 'env "WINEPATH=${WINEPATH}" FDO_CI_CONCURRENT=${FDO_CI_CONCURRENT} bash .gitlab-ci/container/${CI_JOB_NAME}.sh'
# no need to pull the whole repo to build the container image
GIT_STRATEGY: none
@@ -189,7 +189,6 @@ debian/android_build:
debian/x86_test-base:
extends: debian/x86_build-base
variables:
KERNEL_URL: &kernel-rootfs-url "https://gitlab.freedesktop.org/gfx-ci/linux/-/archive/v5.19-for-mesa-ci-d4efddaec194/linux-v5.17-for-mesa-ci-b78f7870d97b.tar.bz2"
MESA_IMAGE_TAG: &debian-x86_test-base ${DEBIAN_BASE_TAG}
.use-debian/x86_test-base:
@@ -206,6 +205,8 @@ debian/x86_test-base:
debian/x86_test-gl:
extends: .use-debian/x86_test-base
variables:
FDO_DISTRIBUTION_EXEC: 'env KERNEL_URL=${KERNEL_URL} FDO_CI_CONCURRENT=${FDO_CI_CONCURRENT} bash .gitlab-ci/container/${CI_JOB_NAME}.sh'
KERNEL_URL: &kernel-rootfs-url "https://gitlab.freedesktop.org/gfx-ci/linux/-/archive/v5.17-for-mesa-ci-b78f7870d97b/linux-v5.17-for-mesa-ci-b78f7870d97b.tar.bz2"
MESA_IMAGE_TAG: &debian-x86_test-gl ${DEBIAN_X86_TEST_GL_TAG}
.use-debian/x86_test-gl:
@@ -332,9 +333,8 @@ debian/arm_test:
- kernel+rootfs_arm64
- kernel+rootfs_armhf
variables:
FDO_DISTRIBUTION_EXEC: 'env ARTIFACTS_PREFIX=https://${MINIO_HOST}/mesa-lava ARTIFACTS_SUFFIX=${MESA_ROOTFS_TAG}--${MESA_ARM_BUILD_TAG}--${MESA_TEMPLATES_COMMIT} CI_PROJECT_PATH=${CI_PROJECT_PATH} FDO_CI_CONCURRENT=${FDO_CI_CONCURRENT} FDO_UPSTREAM_REPO=${FDO_UPSTREAM_REPO} bash .gitlab-ci/container/${CI_JOB_NAME}.sh'
FDO_DISTRIBUTION_TAG: "${MESA_IMAGE_TAG}--${MESA_ROOTFS_TAG}--${MESA_ARM_BUILD_TAG}--${MESA_TEMPLATES_COMMIT}"
ARTIFACTS_PREFIX: "https://${MINIO_HOST}/mesa-lava"
ARTIFACTS_SUFFIX: "${MESA_ROOTFS_TAG}--${MESA_ARM_BUILD_TAG}--${MESA_TEMPLATES_COMMIT}"
MESA_ARM_BUILD_TAG: *debian-arm_build
MESA_IMAGE_TAG: &debian-arm_test ${DEBIAN_BASE_TAG}
MESA_ROOTFS_TAG: *kernel-rootfs
@@ -409,7 +409,7 @@ windows_build_vs2019:
- !reference [.build-rules, rules]
variables:
MESA_IMAGE_PATH: &windows_build_image_path ${WINDOWS_X64_BUILD_PATH}
MESA_IMAGE_TAG: &windows_build_image_tag ${MESA_BASE_IMAGE_TAG}--${WINDOWS_X64_BUILD_TAG}
MESA_IMAGE_TAG: &windows_build_image_tag ${WINDOWS_X64_BUILD_TAG}
DOCKERFILE: Dockerfile_build
MESA_BASE_IMAGE_PATH: *windows_vs_image_path
MESA_BASE_IMAGE_TAG: *windows_vs_image_tag
@@ -429,7 +429,7 @@ windows_test_vs2019:
- !reference [.build-rules, rules]
variables:
MESA_IMAGE_PATH: &windows_test_image_path ${WINDOWS_X64_TEST_PATH}
MESA_IMAGE_TAG: &windows_test_image_tag ${MESA_BASE_IMAGE_TAG}--${WINDOWS_X64_TEST_TAG}
MESA_IMAGE_TAG: &windows_test_image_tag ${WINDOWS_X64_BUILD_TAG}--${WINDOWS_X64_TEST_TAG}
DOCKERFILE: Dockerfile_test
MESA_BASE_IMAGE_PATH: *windows_vs_image_path
MESA_BASE_IMAGE_TAG: *windows_vs_image_tag
@@ -445,7 +445,6 @@ windows_test_vs2019:
variables:
MESA_IMAGE_PATH: *windows_build_image_path
MESA_IMAGE_TAG: *windows_build_image_tag
MESA_BASE_IMAGE_TAG: *windows_vs_image_tag
needs:
- windows_build_vs2019
@@ -457,4 +456,3 @@ windows_test_vs2019:
variables:
MESA_IMAGE_PATH: *windows_test_image_path
MESA_IMAGE_TAG: *windows_test_image_tag
MESA_BASE_IMAGE_TAG: *windows_vs_image_tag

View File

@@ -1,39 +0,0 @@
#!/bin/bash
set -e
dxvk_install_release() {
local DXVK_VERSION=${1:-"1.10.3"}
wget "https://github.com/doitsujin/dxvk/releases/download/v${DXVK_VERSION}/dxvk-${DXVK_VERSION}.tar.gz"
tar xzpf dxvk-"${DXVK_VERSION}".tar.gz
# https://github.com/doitsujin/dxvk/issues/2921
sed -i 's/wine="wine"/wine="wine32"/' "dxvk-${DXVK_VERSION}"/setup_dxvk.sh
"dxvk-${DXVK_VERSION}"/setup_dxvk.sh install
rm -rf "dxvk-${DXVK_VERSION}"
rm dxvk-"${DXVK_VERSION}".tar.gz
}
# Install from a Github PR number
dxvk_install_pr() {
local __prnum=$1
# NOTE: Clone the entire history of the repo so as not to think
# harder about cloning just enough for 'git describe' to work. 'git
# describe' is used by the dxvk build system to generate a
# dxvk_version Meson variable, which is nice-to-have.
git clone https://github.com/doitsujin/dxvk
pushd dxvk
git fetch origin pull/"$__prnum"/head:pr
git checkout pr
./package-release.sh pr ../dxvk-build --no-package
popd
pushd ./dxvk-build/dxvk-pr
./setup_dxvk.sh install
popd
rm -rf ./dxvk-build ./dxvk
}
dxvk_install_release "1.10.1"
#dxvk_install_pr 2359

View File

@@ -1,7 +1,4 @@
#!/bin/bash
# shellcheck disable=SC1091 # The relative paths in this file only become valid at runtime.
# shellcheck disable=SC2034 # Variables are used in scripts called from here
# shellcheck disable=SC2086 # we want word splitting
set -e
set -o xtrace
@@ -40,7 +37,6 @@ if [[ "$DEBIAN_ARCH" = "arm64" ]]; then
DEVICE_TREES+=" arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-juniper-sku16.dtb"
DEVICE_TREES+=" arch/arm64/boot/dts/nvidia/tegra210-p3450-0000.dtb"
DEVICE_TREES+=" arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-limozeen-nots-r5.dtb"
DEVICE_TREES+=" arch/arm64/boot/dts/freescale/imx8mq-nitrogen.dtb"
KERNEL_IMAGE_NAME="Image"
elif [[ "$DEBIAN_ARCH" = "armhf" ]]; then
@@ -51,7 +47,6 @@ elif [[ "$DEBIAN_ARCH" = "armhf" ]]; then
DEVICE_TREES="arch/arm/boot/dts/rk3288-veyron-jaq.dtb"
DEVICE_TREES+=" arch/arm/boot/dts/sun8i-h3-libretech-all-h3-cc.dtb"
DEVICE_TREES+=" arch/arm/boot/dts/imx6q-cubox-i.dtb"
DEVICE_TREES+=" arch/arm/boot/dts/tegra124-jetson-tk1.dtb"
KERNEL_IMAGE_NAME="zImage"
. .gitlab-ci/container/create-cross-file.sh armhf
else
@@ -61,7 +56,7 @@ else
DEFCONFIG="arch/x86/configs/x86_64_defconfig"
DEVICE_TREES=""
KERNEL_IMAGE_NAME="bzImage"
ARCH_PACKAGES="libasound2-dev libcap-dev libfdt-dev libva-dev wayland-protocols p7zip"
ARCH_PACKAGES="libasound2-dev libcap-dev libfdt-dev libva-dev wayland-protocols"
fi
# Determine if we're in a cross build.
@@ -111,15 +106,13 @@ apt-get install -y --no-remove \
libxkbcommon-dev \
ninja-build \
patch \
protobuf-compiler \
python-is-python3 \
python3-distutils \
python3-mako \
python3-numpy \
python3-serial \
unzip \
wget \
zstd
wget
if [[ "$DEBIAN_ARCH" = "armhf" ]]; then
@@ -137,20 +130,6 @@ if [[ "$DEBIAN_ARCH" = "armhf" ]]; then
libxkbcommon-dev:armhf
fi
mkdir -p "/lava-files/rootfs-${DEBIAN_ARCH}"
############### Setting up
if [ "$DEBIAN_ARCH" = "amd64" ]; then
. .gitlab-ci/container/setup-wine.sh "/dxvk-wine64"
. .gitlab-ci/container/install-wine-dxvk.sh
mv /dxvk-wine64 "/lava-files/rootfs-${DEBIAN_ARCH}/"
fi
############### Installing
. .gitlab-ci/container/install-wine-apitrace.sh
mkdir -p "/lava-files/rootfs-${DEBIAN_ARCH}/apitrace-msvc-win64"
mv /apitrace-msvc-win64/bin "/lava-files/rootfs-${DEBIAN_ARCH}/apitrace-msvc-win64"
rm -rf /apitrace-msvc-win64
############### Building
STRIP_CMD="${GCC_ARCH}-strip"
@@ -236,9 +215,8 @@ set -e
cp .gitlab-ci/container/create-rootfs.sh /lava-files/rootfs-${DEBIAN_ARCH}/.
cp .gitlab-ci/container/debian/llvm-snapshot.gpg.key /lava-files/rootfs-${DEBIAN_ARCH}/.
cp .gitlab-ci/container/debian/winehq.gpg.key /lava-files/rootfs-${DEBIAN_ARCH}/.
chroot /lava-files/rootfs-${DEBIAN_ARCH} sh /create-rootfs.sh
rm /lava-files/rootfs-${DEBIAN_ARCH}/{llvm-snapshot,winehq}.gpg.key
rm /lava-files/rootfs-${DEBIAN_ARCH}/llvm-snapshot.gpg.key
rm /lava-files/rootfs-${DEBIAN_ARCH}/create-rootfs.sh
@@ -246,8 +224,7 @@ rm /lava-files/rootfs-${DEBIAN_ARCH}/create-rootfs.sh
# Dependencies pulled during the creation of the rootfs may overwrite
# the built libdrm. Hence, we add it after the rootfs has been already
# created.
find /libdrm/ -name lib\*\.so\* \
-exec cp -t /lava-files/rootfs-${DEBIAN_ARCH}/usr/lib/$GCC_ARCH/. {} \;
find /libdrm/ -name lib\*\.so\* | xargs cp -t /lava-files/rootfs-${DEBIAN_ARCH}/usr/lib/$GCC_ARCH/.
mkdir -p /lava-files/rootfs-${DEBIAN_ARCH}/libdrm/
cp -Rp /libdrm/share /lava-files/rootfs-${DEBIAN_ARCH}/libdrm/share
rm -rf /libdrm
@@ -261,14 +238,14 @@ fi
du -ah /lava-files/rootfs-${DEBIAN_ARCH} | sort -h | tail -100
pushd /lava-files/rootfs-${DEBIAN_ARCH}
tar --zstd -cf /lava-files/lava-rootfs.tar.zst .
tar czf /lava-files/lava-rootfs.tgz .
popd
. .gitlab-ci/container/container_post_build.sh
############### Upload the files!
ci-fairy minio login --token-file "${CI_JOB_JWT_FILE}"
FILES_TO_UPLOAD="lava-rootfs.tar.zst \
FILES_TO_UPLOAD="lava-rootfs.tgz \
$KERNEL_IMAGE_NAME"
if [[ -n $DEVICE_TREES ]]; then

View File

@@ -1,24 +0,0 @@
#!/bin/bash
export WINEPREFIX="$1"
export WINEDEBUG="-all"
# We don't want crash dialogs
cat >crashdialog.reg <<EOF
Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\Software\Wine\WineDbg]
"ShowCrashDialog"=dword:00000000
EOF
# Set the wine prefix and disable the crash dialog
wine64 regedit crashdialog.reg
rm crashdialog.reg
# An immediate wine command may fail with: "${WINEPREFIX}: Not a
# valid wine prefix." and that is just spit out because the check
# for the existence of the system.reg file fails. Just giving it a
# bit more time for it to be created solves the problem
# ...
while ! test -f "${WINEPREFIX}/system.reg"; do sleep 1; done

View File

@@ -54,10 +54,9 @@ VM_SOCKET=crosvm-${THREAD}.sock
# was terminated due to timeouts. This "vm stop" may fail if the crosvm died
# without cleaning itself up.
if [ -e $VM_SOCKET ]; then
crosvm stop $VM_SOCKET || true
crosvm stop $VM_SOCKET || rm -rf $VM_SOCKET
# Wait for socats from that invocation to drain
sleep 5
rm -rf $VM_SOCKET || true
fi
set_vsock_context || { echo "Could not generate crosvm vsock CID" >&2; exit 1; }
@@ -94,11 +93,10 @@ set +e -x
NIR_DEBUG="novalidate" \
LIBGL_ALWAYS_SOFTWARE=${CROSVM_LIBGL_ALWAYS_SOFTWARE} \
GALLIUM_DRIVER=${CROSVM_GALLIUM_DRIVER} \
VK_ICD_FILENAMES=$CI_PROJECT_DIR/install/share/vulkan/icd.d/${CROSVM_VK_DRIVER}_icd.x86_64.json \
crosvm --no-syslog run \
--gpu "${CROSVM_GPU_ARGS}" -m "${CROSVM_MEMORY:-4096}" -c 2 --disable-sandbox \
crosvm run \
--gpu "${CROSVM_GPU_ARGS}" -m 4096 -c 2 --disable-sandbox \
--shared-dir /:my_root:type=fs:writeback=true:timeout=60:cache=always \
--host-ip "192.168.30.1" --netmask "255.255.255.0" --mac "AA:BB:CC:00:00:12" \
--host_ip "192.168.30.1" --netmask "255.255.255.0" --mac "AA:BB:CC:00:00:12" \
-s $VM_SOCKET \
--cid ${VSOCK_CID} -p "${CROSVM_KERN_ARGS}" \
/lava-files/${KERNEL_IMAGE_NAME:-bzImage} > ${VM_TEMP_DIR}/crosvm 2>&1

View File

@@ -1,27 +1,27 @@
variables:
DEBIAN_X86_BUILD_BASE_IMAGE: "debian/x86_build-base"
DEBIAN_BASE_TAG: "2022-10-19-remove-xvmc-dev"
DEBIAN_BASE_TAG: "2022-07-01-bb-llvm13"
DEBIAN_X86_BUILD_IMAGE_PATH: "debian/x86_build"
DEBIAN_BUILD_TAG: "2022-10-22-mold-1_6"
DEBIAN_BUILD_TAG: "2022-07-14-directx-headers"
DEBIAN_X86_BUILD_MINGW_IMAGE_PATH: "debian/x86_build-mingw"
DEBIAN_BUILD_MINGW_TAG: "2022-10-18-dx-headers-va"
DEBIAN_BUILD_MINGW_TAG: "2022-07-14-directx-headers"
DEBIAN_X86_TEST_BASE_IMAGE: "debian/x86_test-base"
DEBIAN_X86_TEST_IMAGE_PATH: "debian/x86_test-gl"
DEBIAN_X86_TEST_GL_TAG: "2022-10-20-bindgen-zlib-cve"
DEBIAN_X86_TEST_VK_TAG: "2022-10-20-bindgen-zlib-cve"
DEBIAN_X86_TEST_GL_TAG: "2022-07-06-virgl-update"
DEBIAN_X86_TEST_VK_TAG: "2022-07-18-apitrace-11-1"
FEDORA_X86_BUILD_TAG: "2022-09-22-python3-ply-2"
KERNEL_ROOTFS_TAG: "2022-10-20-bindgen-zlib-cve"
FEDORA_X86_BUILD_TAG: "2022-04-24-spirv-tools-5"
KERNEL_ROOTFS_TAG: "2022-07-06-virgl-update"
WINDOWS_X64_VS_PATH: "windows/x64_vs"
WINDOWS_X64_VS_TAG: "2022-10-20-upgrade-zlib"
WINDOWS_X64_VS_TAG: "2022-06-15-vs-winsdk"
WINDOWS_X64_BUILD_PATH: "windows/x64_build"
WINDOWS_X64_BUILD_TAG: "2022-10-18-wrap-nodownload-va"
WINDOWS_X64_BUILD_TAG: "2022-06-15-vs-winsdk"
WINDOWS_X64_TEST_PATH: "windows/x64_test"
WINDOWS_X64_TEST_TAG: "2022-08-17-bump"
WINDOWS_X64_TEST_TAG: "2022-06-15-vs-winsdk"

View File

@@ -12,9 +12,9 @@
BASE_SYSTEM_MAINLINE_HOST_PATH: "${BASE_SYSTEM_HOST_PREFIX}/${FDO_UPSTREAM_REPO}/${DISTRIBUTION_TAG}/${ARCH}"
BASE_SYSTEM_FORK_HOST_PATH: "${BASE_SYSTEM_HOST_PREFIX}/${CI_PROJECT_PATH}/${DISTRIBUTION_TAG}/${ARCH}"
# per-job build artifacts
BUILD_PATH: "${PIPELINE_ARTIFACTS_BASE}/${CI_PROJECT_NAME}-${ARCH}.tar.zst"
BUILD_PATH: "${PIPELINE_ARTIFACTS_BASE}/${CI_PROJECT_NAME}-${ARCH}.tar.gz"
JOB_ROOTFS_OVERLAY_PATH: "${JOB_ARTIFACTS_BASE}/job-rootfs-overlay.tar.gz"
JOB_RESULTS_PATH: "${JOB_ARTIFACTS_BASE}/results.tar.zst"
JOB_RESULTS_PATH: "${JOB_ARTIFACTS_BASE}/results.tar.gz"
MINIO_RESULTS_UPLOAD: "${JOB_ARTIFACTS_BASE}"
PIGLIT_NO_WINDOW: 1
VISIBILITY_GROUP: "Collabora+fdo"
@@ -27,12 +27,10 @@
- results/
exclude:
- results/*.shader_cache
reports:
junit: results/junit.xml
tags:
- $RUNNER_TAG
after_script:
- wget -q "https://${JOB_RESULTS_PATH}" -O- | tar --zstd -x
- wget -q "https://${JOB_RESULTS_PATH}" -O- | tar -xz
.lava-test:armhf:
variables:

View File

@@ -21,9 +21,6 @@ cp artifacts/ci-common/intel-gpu-freq.sh results/job-rootfs-overlay/
# Prepare env vars for upload.
KERNEL_IMAGE_BASE_URL="https://${BASE_SYSTEM_HOST_PATH}" \
artifacts/ci-common/generate-env.sh > results/job-rootfs-overlay/set-job-env-vars.sh
echo -e "\e[0Ksection_start:$(date +%s):variables[collapsed=true]\r\e[0KVariables passed through:"
cat results/job-rootfs-overlay/set-job-env-vars.sh
echo -e "\e[0Ksection_end:$(date +%s):variables\r\e[0K"
tar zcf job-rootfs-overlay.tar.gz -C results/job-rootfs-overlay/ .
ci-fairy minio login --token-file "${CI_JOB_JWT_FILE}"

View File

@@ -32,9 +32,8 @@ from lava.exceptions import (
MesaCIRetryError,
MesaCITimeoutError,
)
from lava.utils import CONSOLE_LOG
from lava.utils import DEFAULT_GITLAB_SECTION_TIMEOUTS as GL_SECTION_TIMEOUTS
from lava.utils import (
CONSOLE_LOG,
GitlabSection,
LogFollower,
LogSectionType,
@@ -96,8 +95,8 @@ def generate_lava_yaml(args):
'url': '{}/{}'.format(args.kernel_url_prefix, args.kernel_image_name),
},
'nfsrootfs': {
'url': '{}/lava-rootfs.tar.zst'.format(args.rootfs_url_prefix),
'compression': 'zstd',
'url': '{}/lava-rootfs.tgz'.format(args.rootfs_url_prefix),
'compression': 'gz',
}
}
if args.kernel_image_type:
@@ -166,7 +165,7 @@ def generate_lava_yaml(args):
run_steps += [
'mkdir -p {}'.format(args.ci_project_dir),
'wget -S --progress=dot:giga -O- {} | tar --zstd -x -C {}'.format(args.build_url, args.ci_project_dir),
'wget -S --progress=dot:giga -O- {} | tar -xz -C {}'.format(args.build_url, args.ci_project_dir),
'wget -S --progress=dot:giga -O- {} | tar -xz -C /'.format(args.job_rootfs_overlay_url),
# Sleep a bit to give time for bash to dump shell xtrace messages into
@@ -498,13 +497,6 @@ def treat_mesa_job_name(args):
def main(args):
proxy = setup_lava_proxy()
# Overwrite the timeout for the testcases with the value offered by the
# user. The testcase running time should be at least 4 times greater than
# the other sections (boot and setup), so we can safely ignore them.
# If LAVA fails to stop the job at this stage, it will fall back to the
# script section timeout with a reasonable delay.
GL_SECTION_TIMEOUTS[LogSectionType.TEST_CASE] = timedelta(minutes=args.job_timeout)
job_definition = generate_lava_yaml(args)
if args.dump_yaml:

View File

@@ -8,9 +8,4 @@ from .log_follower import (
hide_sensitive_data,
print_log,
)
from .log_section import (
DEFAULT_GITLAB_SECTION_TIMEOUTS,
FALLBACK_GITLAB_SECTION_TIMEOUT,
LogSection,
LogSectionType,
)
from .log_section import LogSection, LogSectionType

View File

@@ -2,7 +2,6 @@ import re
from dataclasses import dataclass
from datetime import timedelta
from enum import Enum, auto
from os import getenv
from typing import Optional, Pattern, Union
from lava.utils.gitlab_section import GitlabSection
@@ -16,34 +15,24 @@ class LogSectionType(Enum):
LAVA_POST_PROCESSING = auto()
# Empirically, a successful device boot in LAVA takes less than 3
# minutes.
# LAVA itself is configured to attempt thrice to boot the device,
# summing up to 9 minutes.
# It is better to retry the boot than cancel the job and re-submit to avoid
# the enqueue delay.
LAVA_BOOT_TIMEOUT = int(getenv("LAVA_BOOT_TIMEOUT", 9))
# Test suite phase is where the initialization happens.
LAVA_TEST_SUITE_TIMEOUT = int(getenv("LAVA_TEST_SUITE_TIMEOUT", 5))
# Test cases may take a long time, this script has no right to interrupt
# them. But if the test case takes almost 1h, it will never succeed due to
# Gitlab job timeout.
LAVA_TEST_CASE_TIMEOUT = int(getenv("JOB_TIMEOUT", 60))
# LAVA post processing may refer to a test suite teardown, or the
# adjustments to start the next test_case
LAVA_POST_PROCESSING_TIMEOUT = int(getenv("LAVA_POST_PROCESSING_TIMEOUT", 5))
FALLBACK_GITLAB_SECTION_TIMEOUT = timedelta(minutes=10)
DEFAULT_GITLAB_SECTION_TIMEOUTS = {
LogSectionType.LAVA_BOOT: timedelta(minutes=LAVA_BOOT_TIMEOUT),
LogSectionType.TEST_SUITE: timedelta(minutes=LAVA_TEST_SUITE_TIMEOUT),
LogSectionType.TEST_CASE: timedelta(minutes=LAVA_TEST_CASE_TIMEOUT),
LogSectionType.LAVA_POST_PROCESSING: timedelta(
minutes=LAVA_POST_PROCESSING_TIMEOUT
),
# Empirically, a successful device boot in LAVA takes less than 3
# minutes.
# LAVA itself is configured to attempt thrice to boot the device,
# summing up to 9 minutes.
# It is better to retry the boot than cancel the job and re-submit to avoid
# the enqueue delay.
LogSectionType.LAVA_BOOT: timedelta(minutes=9),
# Test suite phase is where the initialization happens.
LogSectionType.TEST_SUITE: timedelta(minutes=5),
# Test cases may take a long time, this script has no right to interrupt
# them. But if the test case takes almost 1h, it will never succeed due to
# Gitlab job timeout.
LogSectionType.TEST_CASE: timedelta(minutes=60),
# LAVA post processing may refer to a test suite teardown, or the
# adjustments to start the next test_case
LogSectionType.LAVA_POST_PROCESSING: timedelta(minutes=5),
}
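For illustration only (not part of the diff): the getenv-based variant above reads its limits from the environment, so a job could override them before the submitter runs; the values below are illustrative.
export LAVA_BOOT_TIMEOUT=12        # minutes allowed for the LAVA boot phase
export LAVA_TEST_SUITE_TIMEOUT=8   # minutes allowed for test-suite setup
export JOB_TIMEOUT=80              # minutes allowed for the test case itself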
@@ -65,10 +54,9 @@ class LogSection:
if match := re.search(self.regex, lava_log_line["msg"]):
section_id = self.section_id.format(*match.groups())
section_header = self.section_header.format(*match.groups())
timeout = DEFAULT_GITLAB_SECTION_TIMEOUTS[self.section_type]
return GitlabSection(
id=section_id,
header=f"{section_header} - Timeout: {timeout}",
header=section_header,
type=self.section_type,
start_collapsed=self.collapsed,
)

View File

@@ -65,10 +65,9 @@ meson _build --native-file=native.file \
-D prefix=`pwd`/install \
-D libdir=lib \
-D buildtype=${BUILDTYPE:-debug} \
-D build-tests=true \
-D build-tests=false \
-D c_args="$(echo -n $C_ARGS)" \
-D cpp_args="$(echo -n $CPP_ARGS)" \
-D enable-glcpp-tests=false \
-D libunwind=${UNWIND} \
${DRI_LOADERS} \
${GALLIUM_ST} \
@@ -79,15 +78,7 @@ meson _build --native-file=native.file \
${EXTRA_OPTION}
cd _build
meson configure
if command -V mold &> /dev/null ; then
mold --run ninja
else
ninja
fi
ninja
LC_ALL=C.UTF-8 meson test --num-processes ${FDO_CI_CONCURRENT:-4} --print-errorlogs ${MESON_TEST_ARGS}
if command -V mold &> /dev/null ; then
mold --run ninja install
else
ninja install
fi
ninja install
cd ..

View File

@@ -1,8 +1,6 @@
#!/bin/sh
if [ "x$STRACEDIR" = "x" ]; then
STRACEDIR=meson-logs/strace/$(for i in $@; do basename -z -- $i; echo -n _; done)
fi
STRACEDIR=meson-logs/strace/$(for i in $@; do basename -z -- $i; echo -n _; done)
mkdir -p $STRACEDIR
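For illustration only (hypothetical arguments, not from the diff): wrapping a test invocation such as "./cache_test quick" computes the directory as one basename per argument, each followed by an underscore:
meson-logs/strace/cache_test_quick_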

View File

@@ -8,36 +8,6 @@ MINIO_ARGS="--credentials=/tmp/.minio_credentials"
RESULTS=$(realpath -s "$PWD"/results)
mkdir -p "$RESULTS"
if [ "$PIGLIT_REPLAY_SUBCOMMAND" = "profile" ]; then
# workaround for older Debian Bullseye libyaml 0.2.2
sed -i "/^%YAML 1\.2$/d" "$PIGLIT_REPLAY_DESCRIPTION_FILE"
yq -i -Y '. | del(.traces[][] | select(.label[0,1,2,3,4,5,6,7,8,9] == "no-perf"))' \
"$PIGLIT_REPLAY_DESCRIPTION_FILE" # label positions are a bit hack
fi
# WINE
case "$PIGLIT_REPLAY_DEVICE_NAME" in
vk-*)
export WINEPREFIX="/dxvk-wine64"
;;
*)
export WINEPREFIX="/generic-wine64"
;;
esac
PATH="/opt/wine-stable/bin/:$PATH" # WineHQ path
# Avoid asking about Gecko or Mono installation
export WINEDLLOVERRIDES="mscoree=d;mshtml=d"
# Set environment for DXVK.
export DXVK_LOG_LEVEL="info"
export DXVK_LOG="$RESULTS/dxvk"
[ -d "$DXVK_LOG" ] || mkdir -pv "$DXVK_LOG"
export DXVK_STATE_CACHE=0
# Set up the driver environment.
# Modifiying here directly LD_LIBRARY_PATH may cause problems when
# using a command wrapper. Hence, we will just set it when running the
@@ -67,10 +37,6 @@ quiet() {
# Set environment for apitrace executable.
export PATH="/apitrace/build:$PATH"
export PIGLIT_REPLAY_WINE_BINARY=wine64
export PIGLIT_REPLAY_WINE_APITRACE_BINARY="/apitrace-msvc-win64/bin/apitrace.exe"
export PIGLIT_REPLAY_WINE_D3DRETRACE_BINARY="/apitrace-msvc-win64/bin/d3dretrace.exe"
# Our rootfs may not have "less", which apitrace uses during
# apitrace dump
export PAGER=cat
@@ -205,7 +171,7 @@ __PREFIX="trace/$PIGLIT_REPLAY_DEVICE_NAME"
__MINIO_PATH="$PIGLIT_REPLAY_ARTIFACTS_BASE_URL"
__MINIO_TRACES_PREFIX="traces"
if [ "$PIGLIT_REPLAY_SUBCOMMAND" != "profile" ]; then
if [ "x$PIGLIT_REPLAY_SUBCOMMAND" != "xprofile" ]; then
quiet replay_minio_upload_images
fi

View File

@@ -52,8 +52,8 @@ cp -Rp .gitlab-ci/b2c artifacts/
if [ -n "$MINIO_ARTIFACT_NAME" ]; then
# Pass needed files to the test stage
MINIO_ARTIFACT_NAME="$MINIO_ARTIFACT_NAME.tar.zst"
zstd artifacts/install.tar -o ${MINIO_ARTIFACT_NAME}
MINIO_ARTIFACT_NAME="$MINIO_ARTIFACT_NAME.tar.gz"
gzip -c artifacts/install.tar > ${MINIO_ARTIFACT_NAME}
ci-fairy minio login --token-file "${CI_JOB_JWT_FILE}"
ci-fairy minio cp ${MINIO_ARTIFACT_NAME} minio://${PIPELINE_ARTIFACTS_BASE}/${MINIO_ARTIFACT_NAME}
fi

View File

@@ -1,23 +0,0 @@
#!/usr/bin/env bash
CHECKPATH=".gitlab-ci/container" # TODO: expand to cover whole .gitlab-ci/
is_bash() {
[[ $1 == *.sh ]] && return 0
[[ $1 == */bash-completion/* ]] && return 0
[[ $(file -b --mime-type "$1") == text/x-shellscript ]] && return 0
return 1
}
while IFS= read -r -d $'' file; do
if is_bash "$file" ; then
shellcheck -x -W0 -s bash "$file"
rc=$?
if [ "${rc}" -eq 0 ]
then
continue
else
exit 1
fi
fi
done < <(find $CHECKPATH -type f \! -path "./.git/*" -print0)

View File

@@ -1,5 +0,0 @@
#!/usr/bin/env bash
set -e
# Run yamllint against all traces files.
find . -name '*traces*yml' -print0 | xargs -0 yamllint -d "{rules: {line-length: {max: 150}}}"

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/bin/sh
#
# Copyright (C) 2022 Collabora Limited
# Author: Guilherme Gallo <guilherme.gallo@collabora.com>
@@ -22,165 +22,6 @@
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# Args:
# $1: section id
# $2: section header
gitlab_section_start() {
echo -e "\e[0Ksection_start:$(date +%s):$1[collapsed=${GL_COLLAPSED:-false}]\r\e[0K\e[32;1m$2\e[0m"
}
# Args:
# $1: section id
gitlab_section_end() {
echo -e "\e[0Ksection_end:$(date +%s):$1\r\e[0K"
}
# sponge allows piping to files that are being used as input.
# E.g.: sort file.txt | sponge file.txt
# In order to avoid installing moreutils just to have the sponge binary, we can
# use a bash function for it
# Source https://unix.stackexchange.com/a/561346/310927
sponge () (
set +x
append=false
while getopts 'a' opt; do
case $opt in
a) append=true ;;
*) echo error; exit 1
esac
done
shift "$(( OPTIND - 1 ))"
outfile=$1
tmpfile=$(mktemp "$(dirname "$outfile")/tmp-sponge.XXXXXXXX") &&
cat >"$tmpfile" &&
if "$append"; then
cat "$tmpfile" >>"$outfile"
else
if [ -f "$outfile" ]; then
chmod --reference="$outfile" "$tmpfile"
fi
if [ -f "$outfile" ]; then
mv "$tmpfile" "$outfile"
elif [ -n "$outfile" ] && [ ! -e "$outfile" ]; then
cat "$tmpfile" >"$outfile"
else
cat "$tmpfile"
fi
fi &&
rm -f "$tmpfile"
)
remove_comments_from_files() (
INPUT_FILES="$*"
for INPUT_FILE in ${INPUT_FILES}
do
[ -f "${INPUT_FILE}" ] || continue
sed -i '/#/d' "${INPUT_FILE}"
sed -i '/^\s*$/d' "${INPUT_FILE}"
done
)
subtract_test_lists() (
MINUEND=$1
sort "${MINUEND}" | sponge "${MINUEND}"
shift
for SUBTRAHEND in "$@"
do
sort "${SUBTRAHEND}" | sponge "${SUBTRAHEND}"
join -v 1 "${MINUEND}" "${SUBTRAHEND}" |
sponge "${MINUEND}"
done
)
merge_rendertests_files() {
BASE_FILE=$1
shift
FILES="$*"
# shellcheck disable=SC2086
cat $FILES "$BASE_FILE" |
sort --unique --stable --field-separator=, --key=1,1 |
sponge "$BASE_FILE"
}
assure_files() (
for CASELIST_FILE in $*
do
>&2 echo "Looking for ${CASELIST_FILE}..."
[ -f ${CASELIST_FILE} ] || (
>&2 echo "Not found. Creating empty."
touch ${CASELIST_FILE}
)
done
)
# Generate rendertests from scratch, customizing with fails/flakes/crashes files
generate_rendertests() (
set -e
GENERATED_FILE=$(mktemp)
TESTS_FILE_PREFIX="${SKQP_FILE_PREFIX}-${SKQP_BACKEND}_rendertests"
FLAKES_FILE="${TESTS_FILE_PREFIX}-flakes.txt"
FAILS_FILE="${TESTS_FILE_PREFIX}-fails.txt"
CRASHES_FILE="${TESTS_FILE_PREFIX}-crashes.txt"
RENDER_TESTS_FILE="${TESTS_FILE_PREFIX}.txt"
# Default to an empty known flakes file if it doesn't exist.
assure_files ${FLAKES_FILE} ${FAILS_FILE} ${CRASHES_FILE}
# skqp does not support comments in rendertests.txt file
remove_comments_from_files "${FLAKES_FILE}" "${FAILS_FILE}" "${CRASHES_FILE}"
# create an exhaustive rendertest list
"${SKQP_BIN_DIR}"/list_gms | sort > "$GENERATED_FILE"
# Remove undesirable tests from the list
subtract_test_lists "${GENERATED_FILE}" "${CRASHES_FILE}" "${FLAKES_FILE}"
# Add ",0" to each test to set the expected diff sum to zero
sed -i 's/$/,0/g' "$GENERATED_FILE"
merge_rendertests_files "$GENERATED_FILE" "${FAILS_FILE}"
mv "${GENERATED_FILE}" "${RENDER_TESTS_FILE}"
echo "${RENDER_TESTS_FILE}"
)
generate_unittests() (
set -e
GENERATED_FILE=$(mktemp)
TESTS_FILE_PREFIX="${SKQP_FILE_PREFIX}_unittests"
FLAKES_FILE="${TESTS_FILE_PREFIX}-flakes.txt"
FAILS_FILE="${TESTS_FILE_PREFIX}-fails.txt"
CRASHES_FILE="${TESTS_FILE_PREFIX}-crashes.txt"
UNIT_TESTS_FILE="${TESTS_FILE_PREFIX}.txt"
# Default to an empty known flakes file if it doesn't exist.
assure_files ${FLAKES_FILE} ${FAILS_FILE} ${CRASHES_FILE}
# Remove unitTest_ prefix
for UT_FILE in "${FAILS_FILE}" "${CRASHES_FILE}" "${FLAKES_FILE}"; do
sed -i 's/^unitTest_//g' "${UT_FILE}"
done
# create an exhaustive unittests list
"${SKQP_BIN_DIR}"/list_gpu_unit_tests > "${GENERATED_FILE}"
# Remove undesirable tests from the list
subtract_test_lists "${GENERATED_FILE}" "${CRASHES_FILE}" "${FLAKES_FILE}" "${FAILS_FILE}"
remove_comments_from_files "${GENERATED_FILE}"
mv "${GENERATED_FILE}" "${UNIT_TESTS_FILE}"
echo "${UNIT_TESTS_FILE}"
)
run_all_tests() {
rm -f "${SKQP_ASSETS_DIR}"/skqp/*.txt
}
copy_tests_files() (
# Copy either unit test or render test files from a specific driver given by
@@ -192,11 +33,9 @@ copy_tests_files() (
if echo "${SKQP_BACKEND}" | grep -qE 'vk|gl(es)?'
then
echo "Generating rendertests.txt file"
GENERATED_RENDERTESTS=$(generate_rendertests)
cp "${GENERATED_RENDERTESTS}" "${SKQP_ASSETS_DIR}"/skqp/rendertests.txt
mkdir -p "${SKQP_RESULTS_DIR}/${SKQP_BACKEND}"
cp "${GENERATED_RENDERTESTS}" "${SKQP_RESULTS_DIR}/${SKQP_BACKEND}/generated_rendertests.txt"
SKQP_RENDER_TESTS_FILE="${SKQP_FILE_PREFIX}-${SKQP_BACKEND}_rendertests.txt"
[ -f "${SKQP_RENDER_TESTS_FILE}" ] || return 1
cp "${SKQP_RENDER_TESTS_FILE}" "${SKQP_ASSETS_DIR}"/skqp/rendertests.txt
return 0
fi
@@ -204,37 +43,20 @@ copy_tests_files() (
# that is why it needs to be a special case.
if echo "${SKQP_BACKEND}" | grep -qE "unitTest"
then
echo "Generating unittests.txt file"
GENERATED_UNITTESTS=$(generate_unittests)
cp "${GENERATED_UNITTESTS}" "${SKQP_ASSETS_DIR}"/skqp/unittests.txt
mkdir -p "${SKQP_RESULTS_DIR}/${SKQP_BACKEND}"
cp "${GENERATED_UNITTESTS}" "${SKQP_RESULTS_DIR}/${SKQP_BACKEND}/generated_unittests.txt"
SKQP_UNIT_TESTS_FILE="${SKQP_FILE_PREFIX}_unittests.txt"
[ -f "${SKQP_UNIT_TESTS_FILE}" ] || return 1
cp "${SKQP_UNIT_TESTS_FILE}" "${SKQP_ASSETS_DIR}"/skqp/unittests.txt
fi
)
resolve_tests_files() {
if [ -n "${RUN_ALL_TESTS}" ]
then
run_all_tests
return
fi
SKQP_BACKEND=${1}
if ! copy_tests_files "${SKQP_BACKEND}"
then
echo "No override test file found for ${SKQP_BACKEND}. Using the default one."
fi
}
test_vk_backend() {
if echo "${SKQP_BACKENDS:?}" | grep -qE 'vk'
if echo "${SKQP_BACKENDS}" | grep -qE 'vk'
then
if [ -n "$VK_DRIVER" ]; then
return 0
fi
echo "VK_DRIVER environment variable is missing."
# shellcheck disable=SC2012
VK_DRIVERS=$(ls "$INSTALL"/share/vulkan/icd.d/ | cut -f 1 -d '_')
if [ -n "${VK_DRIVERS}" ]
then
@@ -256,74 +78,11 @@ setup_backends() {
fi
}
show_reports() (
set +xe
# Unit tests produce empty HTML reports, guide the user to check the TXT file.
if echo "${SKQP_BACKENDS}" | grep -qE "unitTest"
then
# Remove the empty HTML report to avoid confusion
rm -f "${SKQP_RESULTS_DIR}"/unitTest/report.html
echo "See skqp unit test results at:"
echo "https://$CI_PROJECT_ROOT_NAMESPACE.pages.freedesktop.org/-/$CI_PROJECT_NAME/-/jobs/$CI_JOB_ID/artifacts${SKQP_RESULTS_DIR}/unitTest/unit_tests.txt"
fi
REPORT_FILES=$(mktemp)
find "${SKQP_RESULTS_DIR}"/**/report.html -type f > "${REPORT_FILES}"
while read -r REPORT
do
# shellcheck disable=SC2001
BACKEND_NAME=$(echo "${REPORT}" | sed 's@.*/\([^/]*\)/report.html@\1@')
echo "See skqp ${BACKEND_NAME} render tests report at:"
echo "https://$CI_PROJECT_ROOT_NAMESPACE.pages.freedesktop.org/-/$CI_PROJECT_NAME/-/jobs/$CI_JOB_ID/artifacts${REPORT}"
done < "${REPORT_FILES}"
# If there is no report available, tell the user that something is wrong.
if [ ! -s "${REPORT_FILES}" ]
then
echo "No skqp report available. Probably some fatal error has occured during the skqp execution."
fi
)
usage() {
cat <<EOF
Usage: $(basename "$0") [-a]
Arguments:
-a: Run all unit tests and render tests, useful when introducing a new driver to skqp.
EOF
}
parse_args() {
while getopts ':ah' opt; do
case "$opt" in
a)
echo "Running all skqp tests"
export RUN_ALL_TESTS=1
shift
;;
h)
usage
exit 0
;;
?)
echo "Invalid command option."
usage
exit 1
;;
esac
done
}
set -e
parse_args "${@}"
set -ex
# Needed so configuration files can contain paths to files in /install
INSTALL="$CI_PROJECT_DIR"/install
ln -sf "$CI_PROJECT_DIR"/install /install
INSTALL=${PWD}/install
if [ -z "$GPU_VERSION" ]; then
echo 'GPU_VERSION must be set to something like "llvmpipe" or
@@ -335,37 +94,60 @@ fi
LD_LIBRARY_PATH=$INSTALL:$LD_LIBRARY_PATH
setup_backends
SKQP_BIN_DIR=${SKQP_BIN_DIR:-/skqp}
SKQP_ASSETS_DIR="${SKQP_BIN_DIR}"/assets
SKQP_RESULTS_DIR="${SKQP_RESULTS_DIR:-${PWD}/results}"
SKQP_ASSETS_DIR=/skqp/assets
SKQP_RESULTS_DIR="${SKQP_RESULTS_DIR:-$PWD/results}"
mkdir -p "${SKQP_ASSETS_DIR}"/skqp
# Show the reports on exit, even when a test crashes
trap show_reports INT TERM EXIT
SKQP_EXITCODE=0
for SKQP_BACKEND in ${SKQP_BACKENDS}
do
resolve_tests_files "${SKQP_BACKEND}"
set -e
if ! copy_tests_files "${SKQP_BACKEND}"
then
echo "No override test file found for ${SKQP_BACKEND}. Using the default one."
fi
set +e
SKQP_BACKEND_RESULTS_DIR="${SKQP_RESULTS_DIR}"/"${SKQP_BACKEND}"
mkdir -p "${SKQP_BACKEND_RESULTS_DIR}"
BACKEND_EXITCODE=0
GL_COLLAPSED=true gitlab_section_start "skqp_${SKQP_BACKEND}" "skqp logs for ${SKQP_BACKEND}"
"${SKQP_BIN_DIR}"/skqp "${SKQP_ASSETS_DIR}" "${SKQP_BACKEND_RESULTS_DIR}" "${SKQP_BACKEND}_" ||
BACKEND_EXITCODE=$?
gitlab_section_end "skqp_${SKQP_BACKEND}"
/skqp/skqp "${SKQP_ASSETS_DIR}" "${SKQP_BACKEND_RESULTS_DIR}" "${SKQP_BACKEND}_"
BACKEND_EXITCODE=$?
if [ ! $BACKEND_EXITCODE -eq 0 ]
then
echo "skqp failed on ${SKQP_BACKEND} tests with exit code: ${BACKEND_EXITCODE}."
else
echo "skqp succeeded on ${SKQP_BACKEND}."
echo "skqp failed on ${SKQP_BACKEND} tests with ${BACKEND_EXITCODE} exit code."
fi
# Propagate error codes to leverage the final job result
SKQP_EXITCODE=$(( SKQP_EXITCODE | BACKEND_EXITCODE ))
done
set +x
# Unit tests produce empty HTML reports, guide the user to check the TXT file.
if echo "${SKQP_BACKENDS}" | grep -qE "unitTest"
then
# Remove the empty HTML report to avoid confusion
rm -f "${SKQP_RESULTS_DIR}"/unitTest/report.html
echo "See skqp unit test results at:"
echo "https://$CI_PROJECT_ROOT_NAMESPACE.pages.freedesktop.org/-/$CI_PROJECT_NAME/-/jobs/$CI_JOB_ID/artifacts/${SKQP_RESULTS_DIR}/unitTest/unit_tests.txt"
fi
REPORT_FILES=$(mktemp)
find "${SKQP_RESULTS_DIR}"/**/report.html -type f > "${REPORT_FILES}"
while read -r REPORT
do
BACKEND_NAME=$(echo "${REPORT}" | sed 's@.*/\([^/]*\)/report.html@\1@')
echo "See skqp ${BACKEND_NAME} render tests report at:"
echo "https://$CI_PROJECT_ROOT_NAMESPACE.pages.freedesktop.org/-/$CI_PROJECT_NAME/-/jobs/$CI_JOB_ID/artifacts/${REPORT}"
done < "${REPORT_FILES}"
# If there is no report available, tell the user that something is wrong.
if [ ! -s "${REPORT_FILES}" ]
then
echo "No skqp report available. Probably some fatal error has occured during the skqp execution."
fi
exit $SKQP_EXITCODE


@@ -114,7 +114,7 @@
stage: software-renderer
rules:
- !reference [.no_scheduled_pipelines-rules, rules]
- changes: &llvmpipe_cl_files
- changes:
- .gitlab-ci.yml
- .gitlab-ci/**/*
- meson.build
@@ -130,21 +130,10 @@
- changes:
*llvmpipe_file_list
when: on_success
.llvmpipe-clover-rules:
rules:
- !reference [.llvmpipe-cl-rules, rules]
- changes:
- changes: &clover_file_list
- src/gallium/frontends/clover/**/*
when: on_success
.llvmpipe-rusticl-rules:
rules:
- !reference [.llvmpipe-cl-rules, rules]
- changes:
- src/gallium/frontends/rusticl/**/*
when: on_success
.collabora-farm-rules:
rules:
- if: '$COLLABORA_FARM == "offline" && $RUNNER_TAG =~ /^mesa-ci-x86-64-lava-/'
@@ -155,11 +144,6 @@
- if: '$IGALIA_FARM == "offline"'
when: never
.anholt-farm-rules:
rules:
- if: '$ANHOLT_FARM == "offline"'
when: never
# Skips freedreno jobs if either of the farms we use are offline.
.freedreno-farm-rules:
rules:
@@ -184,8 +168,8 @@
.freedreno-rules:
stage: freedreno
rules:
- !reference [.freedreno-common-rules, rules]
- !reference [.gl-rules, rules]
- !reference [.freedreno-common-rules, rules]
- changes: &freedreno_gl_file_list
- src/freedreno/ir2/**/*
- src/gallium/drivers/freedreno/**/*
@@ -195,8 +179,8 @@
.turnip-rules:
stage: freedreno
rules:
- !reference [.freedreno-common-rules, rules]
- !reference [.vulkan-rules, rules]
- !reference [.freedreno-common-rules, rules]
- changes:
- src/freedreno/vulkan/**/*
when: on_success
@@ -211,7 +195,7 @@
stage: freedreno
rules:
# If the triggerer has access to the restricted traces and if it is pre-merge
- if: '($GITLAB_USER_LOGIN !~ "/^(robclark|anholt|flto|cwabbott0|Danil|tomeu|okias|gallo)$/") &&
- if: '($GITLAB_USER_LOGIN !~ "/^(robclark|anholt|flto|cwabbott0|Danil|tomeu|okias)$/") &&
($GITLAB_USER_LOGIN != "marge-bot" || $CI_COMMIT_BRANCH)'
when: never
- !reference [.freedreno-rules, rules]
@@ -250,7 +234,6 @@
.nouveau-rules:
stage: nouveau
rules:
- !reference [.anholt-farm-rules, rules]
- !reference [.gl-rules, rules]
- changes:
- src/nouveau/**/*
@@ -380,15 +363,6 @@
*virgl_file_list
when: manual
.venus-rules:
stage: layered-backends
rules:
- !reference [.lavapipe-rules, rules]
- changes: &venus_file_list
- src/virtio/**/*
when: on_success
- when: never
.radeonsi-rules:
stage: amd
rules:
@@ -400,24 +374,11 @@
- src/gallium/winsys/amdgpu/**/*
- src/amd/*
- src/amd/addrlib/**/*
- src/amd/ci/*
- src/amd/common/**/*
- src/amd/llvm/**/*
- src/amd/registers/**/*
when: on_success
.radeonsi+radv-rules:
stage: amd
rules:
- !reference [.collabora-farm-rules, rules]
- !reference [.gl-rules, rules]
- changes:
*radeonsi_file_list
when: on_success
- changes:
*radv_file_list
when: on_success
.radeonsi-vaapi-rules:
stage: amd
rules:
@@ -507,20 +468,20 @@
.zink-lvp-rules:
stage: layered-backends
rules:
- !reference [.lavapipe-rules, rules]
- !reference [.zink-common-rules, rules]
- !reference [.lavapipe-rules, rules]
.zink-anv-rules:
stage: layered-backends
rules:
- !reference [.anv-rules, rules]
- !reference [.zink-common-rules, rules]
- !reference [.anv-rules, rules]
.zink-turnip-rules:
stage: layered-backends
rules:
- !reference [.turnip-rules, rules]
- !reference [.zink-common-rules, rules]
- !reference [.turnip-rules, rules]
# Unfortunately YAML doesn't let us concatenate arrays, so we have to do the
# rules duplication manually
@@ -639,12 +600,3 @@
- changes:
*lavapipe_file_list
when: on_success
# Rules for linters
.lint-rustfmt-rules:
rules:
- !reference [.no_scheduled_pipelines-rules, rules]
- !reference [.core-rules, rules]
- changes:
- src/**/*.rs
when: on_success


@@ -17,18 +17,6 @@
paths:
- results/
rustfmt:
# Cancel job if a newer commit is pushed to the same branch
interruptible: true
stage: lint
extends:
- .use-debian/x86_build
- .lint-rustfmt-rules
variables:
GIT_STRATEGY: fetch
script:
- git ls-files */{lib,app}.rs | xargs rustfmt --check
.test-gl:
extends:
- .test
@@ -51,6 +39,7 @@ rustfmt:
- .use-debian/x86_test-gl
needs:
- debian/x86_test-gl
- debian-clover-testing
.vkd3d-proton-test:
artifacts:
@@ -77,21 +66,21 @@ rustfmt:
.piglit-traces-test:
extends:
- .piglit-test
cache:
key: ${CI_JOB_NAME}
paths:
- replayer-db/
artifacts:
when: on_failure
name: "mesa_${CI_JOB_NAME}"
reports:
junit: results/junit.xml
paths:
- results/
exclude:
- results/*.shader_cache
- results/summary/
- results/*.txt
variables:
PIGLIT_REPLAY_EXTRA_ARGS: --keep-image --db-path ${CI_PROJECT_DIR}/replayer-db/ --minio_host=minio-packet.freedesktop.org --minio_bucket=mesa-tracie-public --role-session-name=${CI_PROJECT_PATH}:${CI_JOB_ID} --jwt-file=${CI_JOB_JWT_FILE}
script:
- echo -e "\e[0Ksection_start:$(date +%s):variables[collapsed=true]\r\e[0KVariables passed through:"
- install/common/generate-env.sh
- echo -e "\e[0Ksection_end:$(date +%s):variables\r\e[0K"
- install/piglit/piglit-traces.sh
.deqp-test:
@@ -135,7 +124,7 @@ rustfmt:
# improve it even more (see https://docs.mesa3d.org/ci/bare-metal.html for
# setup).
- echo -e "\e[0Ksection_start:$(date +%s):artifacts_download[collapsed=true]\r\e[0KDownloading artifacts from minio"
- wget ${FDO_HTTP_CACHE_URI:-}https://${PIPELINE_ARTIFACTS_BASE}/${MINIO_ARTIFACT_NAME}.tar.zst -S --progress=dot:giga -O- | tar --zstd -x
- wget ${FDO_HTTP_CACHE_URI:-}https://${PIPELINE_ARTIFACTS_BASE}/${MINIO_ARTIFACT_NAME}.tar.gz -S --progress=dot:giga -O- | tar -xz
- echo -e "\e[0Ksection_end:$(date +%s):artifacts_download\r\e[0K"
artifacts:
when: always
@@ -192,6 +181,11 @@ rustfmt:
HWCI_TEST_SCRIPT: "/install/deqp-runner.sh"
FDO_CI_CONCURRENT: 0 # Default to number of CPUs
.baremetal-skqp-test:
variables:
HWCI_START_XORG: 1
HWCI_TEST_SCRIPT: "/install/skqp-runner.sh"
# For Valve's bare-metal testing farm jobs.
.b2c-test:
# It would be nice to use ci-templates within Mesa CI for this job's
@@ -211,7 +205,7 @@ rustfmt:
GIT_STRATEGY: none
# boot2container initrd configuration parameters.
B2C_KERNEL_URL: 'https://gitlab.freedesktop.org/mupuf/valve-infra/-/package_files/144/download' # 5.17.1
B2C_INITRAMFS_URL: 'https://gitlab.freedesktop.org/mupuf/boot2container/-/releases/v0.9.8/downloads/initramfs.linux_amd64.cpio.xz'
B2C_INITRAMFS_URL: 'https://gitlab.freedesktop.org/mupuf/boot2container/-/releases/v0.9.6/downloads/initramfs.linux_amd64.cpio.xz'
B2C_JOB_SUCCESS_REGEX: '\[.*\]: Execution is over, pipeline status: 0\r$'
B2C_JOB_WARN_REGEX: '\*ERROR\* ring .* timeout, but soft recovered'
B2C_LOG_LEVEL: 6


@@ -48,7 +48,7 @@ sleep 1
# when asked to load PE executables.
# TODO: Have boot2container mount this filesystem for all jobs?
mount -t binfmt_misc none /proc/sys/fs/binfmt_misc
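# binfmt_misc entries use the format :name:type:offset:magic:mask:interpreter:flags;
# the line below matches the PE "MZ" magic and hands such binaries to Wine.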
echo ':DOSWin:M::MZ::/usr/bin/wine64:' > /proc/sys/fs/binfmt_misc/register
echo ':DOSWin:M::MZ::/usr/bin/wine:' > /proc/sys/fs/binfmt_misc/register
# Set environment for DXVK.
export DXVK_LOG_LEVEL="info"
@@ -68,7 +68,7 @@ if [ ${TEST_START_XORG:-0} -eq 1 ]; then
export DISPLAY=:0
fi
wine64 --version
wine --version
SANITY_MESA_VERSION_CMD="$SANITY_MESA_VERSION_CMD | tee /tmp/version.txt | grep \"Mesa $MESA_VERSION\(\s\|$\)\""


@@ -38,8 +38,8 @@ Push-Location $builddir
meson `
--default-library=shared `
-Dzlib:default_library=static `
--buildtype=release `
--wrap-mode=nodownload `
-Db_ndebug=false `
-Db_vscrt=mt `
--cmake-prefix-path="$depsInstallPath" `
@@ -49,22 +49,18 @@ meson `
-Dshared-llvm=disabled `
-Dvulkan-drivers="swrast,amd,microsoft-experimental" `
-Dgallium-drivers="swrast,d3d12,zink" `
-Dgallium-va=true `
-Dvideo-codecs="h264dec,h264enc,h265dec,h265enc,vc1dec" `
-Dshared-glapi=enabled `
-Dgles1=enabled `
-Dgles2=enabled `
-Dgallium-opencl=icd `
-Dgallium-rusticl=false `
-Dopencl-spirv=true `
-Dmicrosoft-clc=enabled `
-Dstatic-libclc=all `
-Dspirv-to-dxil=true `
-Dbuild-tests=true `
-Dwerror=true `
-Dwarning_level=2 `
-Dzlib:warning_level=1 `
-Dlibelf:warning_level=1 `
$sourcedir && `
meson install && `
meson install --skip-subprojects && `
meson test --num-processes 32 --print-errorlogs
$buildstatus = $?


@@ -8,82 +8,6 @@ $MyPath = $MyInvocation.MyCommand.Path | Split-Path -Parent
Remove-Item -Recurse -Force -ErrorAction SilentlyContinue "deps" | Out-Null
$depsInstallPath="C:\mesa-deps"
Get-Date
Write-Host "Cloning DirectX-Headers"
git clone -b v1.606.4 --depth=1 https://github.com/microsoft/DirectX-Headers deps/DirectX-Headers
if (!$?) {
Write-Host "Failed to clone DirectX-Headers repository"
Exit 1
}
Write-Host "Building DirectX-Headers"
$dxheaders_build = New-Item -ItemType Directory -Path ".\deps\DirectX-Headers" -Name "build"
Push-Location -Path $dxheaders_build.FullName
meson .. --backend=ninja -Dprefix="$depsInstallPath" --buildtype=release -Db_vscrt=mt && `
ninja -j32 install
$buildstatus = $?
Pop-Location
Remove-Item -Recurse -Force -ErrorAction SilentlyContinue -Path $dxheaders_build
if (!$buildstatus) {
Write-Host "Failed to compile DirectX-Headers"
Exit 1
}
Get-Date
Write-Host "Cloning zlib"
git clone -b v1.2.13 --depth=1 https://github.com/madler/zlib deps/zlib
if (!$?) {
Write-Host "Failed to clone zlib repository"
Exit 1
}
Write-Host "Downloading zlib meson build files"
Invoke-WebRequest -Uri "https://wrapdb.mesonbuild.com/v2/zlib_1.2.13-1/get_patch" -OutFile deps/zlib.zip
Expand-Archive -Path deps/zlib.zip -Destination deps/zlib
# Wrap archive puts build files in a version subdir
Move-Item deps/zlib/zlib-1.2.13/* deps/zlib
$zlib_build = New-Item -ItemType Directory -Path ".\deps\zlib" -Name "build"
Push-Location -Path $zlib_build.FullName
meson .. --backend=ninja -Dprefix="$depsInstallPath" --default-library=static --buildtype=release -Db_vscrt=mt && `
ninja -j32 install
$buildstatus = $?
Pop-Location
Remove-Item -Recurse -Force -ErrorAction SilentlyContinue -Path $zlib_build
if (!$buildstatus) {
Write-Host "Failed to compile zlib"
Exit 1
}
Get-Date
Write-Host "Cloning libva"
git clone https://github.com/intel/libva.git deps/libva
if (!$?) {
Write-Host "Failed to clone libva repository"
Exit 1
}
Push-Location -Path ".\deps\libva"
Write-Host "Checking out libva commit 2579eb0f77897dc01a02c1e43defc63c40fd2988"
# Checking out commit hash with libva-win32 support
# This feature will be released with libva version 2.17
git checkout 2579eb0f77897dc01a02c1e43defc63c40fd2988
Pop-Location
Write-Host "Building libva"
# libva already has a build dir in their repo, use builddir instead
$libva_build = New-Item -ItemType Directory -Path ".\deps\libva" -Name "builddir"
Push-Location -Path $libva_build.FullName
meson .. -Dprefix="$depsInstallPath"
ninja -j32 install
$buildstatus = $?
Pop-Location
Remove-Item -Recurse -Force -ErrorAction SilentlyContinue -Path $libva_build
if (!$buildstatus) {
Write-Host "Failed to compile libva"
Exit 1
}
Get-Date
Write-Host "Cloning LLVM release/12.x"
git clone -b release/12.x --depth=1 https://github.com/llvm/llvm-project deps/llvm-project
@@ -106,6 +30,8 @@ Push-Location deps/llvm-project/llvm/projects/SPIRV-LLVM-Translator
git checkout 5b641633b3bcc3251a52260eee11db13a79d7258
Pop-Location
$depsInstallPath="C:\mesa-deps"
Get-Date
# slightly convoluted syntax but avoids the CWD being under the PS filesystem meta-path
$llvm_build = New-Item -ItemType Directory -ErrorAction SilentlyContinue -Force -Path ".\deps\llvm-project" -Name "build"


@@ -71,25 +71,3 @@ if (!$?) {
Exit 1
}
Remove-Item C:\vulkan-runtime.exe -Force
Get-Date
Write-Host "Installing graphics tools (DirectX debug layer)"
Set-Service -Name wuauserv -StartupType Manual
if (!$?) {
Write-Host "Failed to enable Windows Update"
Exit 1
}
For ($i = 0; $i -lt 5; $i++) {
Dism /online /quiet /add-capability /capabilityname:Tools.Graphics.DirectX~~~~0.0.1.0
$graphics_tools_installed = $?
if ($graphics_tools_installed) {
Break
}
}
if (!$graphics_tools_installed) {
Write-Host "Failed to install graphics tools"
Get-Content C:\Windows\Logs\DISM\dism.log
Exit 1
}


@@ -79,7 +79,7 @@ Pop-Location
Get-Date
Write-Host "Cloning Vulkan and GL Conformance Tests"
$deqp_source = "C:\src\VK-GL-CTS\"
git clone --no-progress --single-branch https://github.com/KhronosGroup/VK-GL-CTS.git -b vulkan-cts-1.3.4 $deqp_source
git clone --no-progress --single-branch https://github.com/lfrb/VK-GL-CTS.git -b windows-flush $deqp_source
if (!$?) {
Write-Host "Failed to clone deqp repository"
Exit 1
@@ -115,10 +115,10 @@ Copy-Item -Path "$($deqp_source)\doc\testlog-stylesheet\testlog.xsl" -Destinatio
# Copy Vulkan must-pass list
$deqp_mustpass = New-Item -ItemType Directory -Path $deqp_build -Name "mustpass"
$root_mustpass = Join-Path -Path $deqp_source -ChildPath "external\vulkancts\mustpass\main"
$root_mustpass = Join-Path -Path $deqp_source -ChildPath "external\vulkancts\mustpass\master"
$files = Get-Content "$($root_mustpass)\vk-default.txt"
foreach($file in $files) {
Get-Content "$($root_mustpass)\$($file)" | Add-Content -Path "$($deqp_mustpass)\vk-main.txt"
Get-Content "$($root_mustpass)\$($file)" | Add-Content -Path "$($deqp_mustpass)\vk-master.txt"
}
Remove-Item -Force -Recurse $deqp_source


@@ -27,7 +27,6 @@ Start-Process -NoNewWindow -Wait -FilePath C:\vs_buildtools.exe `
"--add", "Microsoft.VisualStudio.Component.VC.ATL", `
"--add", "Microsoft.VisualStudio.Component.VC.ATLMFC", `
"--add", "Microsoft.VisualStudio.Component.VC.Tools.x86.x64", `
"--add", "Microsoft.VisualStudio.Component.VC.Llvm.Clang", `
"--add", "Microsoft.VisualStudio.Component.Graphics.Tools", `
"--add", "Microsoft.VisualStudio.Component.Windows10SDK.20348"


@@ -1,2 +0,0 @@
schema: 'schema.graphql'
documents: 'src/**/*.{graphql,js,ts,jsx,tsx}'


@@ -161,7 +161,7 @@ Colin McDonald <cjmmail10-bz@yahoo.co.uk> <cjmcdonald@qinetiq.com>
Connor Abbott <cwabbott0@gmail.com> <connor.w.abbott@intel.com>
Connor Abbott <cwabbott0@gmail.com> <connor.abbott@intel.com>
Konstantin Kharlamov <Hi-Angel@yandex.ru>
Constantine Kharlamov <Hi-Angel@yandex.ru>
Corbin Simpson <MostAwesomeDude@gmail.com> <mostawesomed...@gmail.com>
Corbin Simpson <MostAwesomeDude@gmail.com> <mostawesomedude@gmail.com>

.pick_status.json (5330 lines)

File diff suppressed because it is too large


@@ -123,8 +123,8 @@ meson.build @dbaker @eric
/src/gallium/drivers/freedreno/ @robclark
# Imagination
/include/drm-uapi/pvr_drm.h @CreativeCylon @frankbinns
/src/imagination/ @CreativeCylon @frankbinns
/include/drm-uapi/pvr_drm.h @CreativeCylon @frankbinns @rajnesh-kanwal
/src/imagination/ @CreativeCylon @frankbinns @rajnesh-kanwal
/src/imagination/rogue/ @simon-perretta-img
# Intel


@@ -1 +1 @@
22.3.0-devel
22.2.0-rc3


@@ -100,12 +100,6 @@ endif
__MY_SHARED_LIBRARIES := $(LOCAL_SHARED_LIBRARIES)
ifeq ($(shell test $(PLATFORM_SDK_VERSION) -ge 30; echo $$?), 0)
MESA_LIBGBM_NAME := libgbm_mesa
else
MESA_LIBGBM_NAME := libgbm
endif
ifeq ($(TARGET_IS_64_BIT),true)
LOCAL_MULTILIB := 64
else
@@ -176,7 +170,7 @@ $(foreach driver,$(BOARD_MESA3D_VULKAN_DRIVERS), \
ifneq ($(filter true, $(BOARD_MESA3D_BUILD_LIBGBM)),)
# Modules 'libgbm', produces '/vendor/lib{64}/libgbm.so'
$(eval $(call mesa3d-lib,$(MESA_LIBGBM_NAME),.so.1,,MESA3D_LIBGBM_BIN,$(MESA3D_TOP)/src/gbm/main))
$(eval $(call mesa3d-lib,libgbm,.so.1,,MESA3D_LIBGBM_BIN,$(MESA3D_TOP)/src/gbm/main))
endif
#-------------------------------------------------------------------------------


@@ -69,7 +69,7 @@ $(M_TARGET_PREFIX)MESA3D_LIBEGL_BIN := $(MESON_OUT_DIR)/install/usr/local/l
$(M_TARGET_PREFIX)MESA3D_LIBGLESV1_BIN := $(MESON_OUT_DIR)/install/usr/local/lib/libGLESv1_CM.so.1.1.0
$(M_TARGET_PREFIX)MESA3D_LIBGLESV2_BIN := $(MESON_OUT_DIR)/install/usr/local/lib/libGLESv2.so.2.0.0
$(M_TARGET_PREFIX)MESA3D_LIBGLAPI_BIN := $(MESON_OUT_DIR)/install/usr/local/lib/libglapi.so.0.0.0
$(M_TARGET_PREFIX)MESA3D_LIBGBM_BIN := $(MESON_OUT_DIR)/install/usr/local/lib/$(MESA_LIBGBM_NAME).so.1.0.0
$(M_TARGET_PREFIX)MESA3D_LIBGBM_BIN := $(MESON_OUT_DIR)/install/usr/local/lib/libgbm.so.1.0.0
MESA3D_GLES_BINS := \
@@ -85,12 +85,12 @@ MESON_GEN_NINJA := \
-Ddri-search-path=/vendor/$(MESA3D_LIB_DIR)/dri \
-Dplatforms=android \
-Dplatform-sdk-version=$(PLATFORM_SDK_VERSION) \
-Ddri-drivers= \
-Dgallium-drivers=$(subst $(space),$(comma),$(BOARD_MESA3D_GALLIUM_DRIVERS)) \
-Dvulkan-drivers=$(subst $(space),$(comma),$(subst radeon,amd,$(BOARD_MESA3D_VULKAN_DRIVERS))) \
-Dgbm=enabled \
-Degl=enabled \
-Dcpp_rtti=false \
-Dlmsensors=disabled \
MESON_BUILD := PATH=/usr/bin:/bin:/sbin:$$PATH ninja -C $(MESON_OUT_DIR)/build
@@ -148,7 +148,6 @@ $(MESON_GEN_FILES_TARGET): PRIVATE_TARGET_CRTEND_SO_O := $(my_target_crtend_so_o
##
define m-lld-flags
-Wl,-e,main \
-nostdlib -Wl,--gc-sections \
$(PRIVATE_TARGET_CRTBEGIN_SO_O) \
$(PRIVATE_ALL_OBJECTS) \
@@ -169,14 +168,13 @@ define m-lld-flags
endef
define m-lld-flags-cleaned
$(patsubst -Wl$(comma)--build-id=%,, \
$(subst prebuilts/,$(AOSP_ABSOLUTE_PATH)/prebuilts/, \
$(subst $(OUT_DIR)/,$(call relative-to-absolute,$(OUT_DIR))/, \
$(subst -Wl$(comma)--fatal-warnings,, \
$(subst -Wl$(comma)--no-undefined-version,, \
$(subst -Wl$(comma)--gc-sections,, \
$(patsubst %dummy.o,, \
$(m-lld-flags))))))))
$(m-lld-flags)))))))
endef
define m-cpp-flags


@@ -59,7 +59,8 @@ SOURCES = [
Source('include/EGL/egl.h', 'https://github.com/KhronosGroup/EGL-Registry/raw/main/api/EGL/egl.h'),
Source('include/EGL/eglplatform.h', 'https://github.com/KhronosGroup/EGL-Registry/raw/main/api/EGL/eglplatform.h'),
Source('include/EGL/eglext.h', 'https://github.com/KhronosGroup/EGL-Registry/raw/main/api/EGL/eglext.h'),
Source('include/EGL/eglext_angle.h', 'https://chromium.googlesource.com/angle/angle/+/refs/heads/main/include/EGL/eglext_angle.h?format=TEXT'),
Source('include/EGL/eglextchromium.h', 'https://chromium.googlesource.com/chromium/src/+/refs/heads/master/ui/gl/EGL/eglextchromium.h?format=TEXT'),
Source('include/EGL/eglext_angle.h', 'https://chromium.googlesource.com/angle/angle/+/refs/heads/master/include/EGL/eglext_angle.h?format=TEXT'),
Source('include/EGL/eglmesaext.h', None),
],
},


@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
#
# Copyright 2012 VMware Inc
# Copyright 2008-2009 Jose Fonseca
@@ -165,7 +165,7 @@ class PerfParser(LineParser):
sys.stdout.write('%6u' % (sample))
total_samples += sample
sys.stdout.write('%6u: %s\n' % (address, instr))
print('total:', total_samples)
print 'total:', total_samples
assert len(samples) == 0
sys.exit(0)
@@ -221,7 +221,7 @@ class PerfParser(LineParser):
start_address = lookupMap(module, function_name)
address -= start_address
#print(function_name, module, address)
#print function_name, module, address
samples[address] = samples.get(address, 0) + 1


@@ -1,46 +0,0 @@
Amber Branch
============
After Mesa 21.3, all non-Gallium DRI drivers were removed from the Mesa
source-tree. These drivers are still being maintained to some degree,
but only on the 21.3.x branch, and only for critical fixes.
These drivers include:
- Radeon
- r200
- i915
- i965
- Nouveau (the DRI driver for NV04-NV20)
At the same time, the OpenSWR Gallium driver was removed from the Mesa
source-tree, because it was already practically speaking unmaintained and
the actively maintained LLVMpipe offers much of the same functionality.
Users with Intel GPUs that were using i965 should migrate to either Iris
or Crocus, depending on their GPU. These drivers generally speaking both
perform better and have more features than i965 had, and due to sharing
more code with the rest of the Mesa infrastructure, get more bug fixes
and features.
Similarly, users of i915 should migrate to i915g (the Gallium driver for
the same hardware), as it's still being maintained.
Users who depend on the removed drivers will have to use them built from
the Amber branch in order to get updates.
Building
--------
The Amber branch has some extra logic to be able to coexist with recent
Mesa releases without them stepping on each other's toes. In order to
enable that logic, you need to pass the ``-Damber=true`` flag to Meson.
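For illustration, a minimal configure-and-build invocation with that flag
might look like this (the build directory name here is arbitrary)::

    meson setup build-amber -Damber=true
    ninja -C build-amber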
Documentation
-------------
On `docs.mesa3d.org <https://docs.mesa3d.org/>`, we currently only
publish the documentation from our main branch. But you can view the
documentation for the Amber branch `here
<https://gitlab.freedesktop.org/mesa/mesa/-/tree/21.3/docs>`_.


@@ -2,20 +2,20 @@ Android
=======
Mesa hardware drivers can be built for Android one of two ways: built
into the Android OS using the ndk-build build system on older versions
into the Android OS using the Android.mk build system on older versions
of Android, or out-of-tree using the Meson build system and the
Android NDK.
The ndk-build build system has proven to be hard to maintain, as one
The Android.mk build system has proven to be hard to maintain, as one
needs a built Android tree to build against, and it has never been
tested in CI. The Meson build system flow is frequently used by
tested in CI. The meson build system flow is frequently used by
Chrome OS developers for building and testing Android drivers.
Building using the Android NDK
------------------------------
Download and install the NDK using whatever method you normally would.
Then, create your Meson cross file to use it, something like this
Then, create your meson cross file to use it, something like this
``~/.local/share/meson/cross/android-aarch64`` file::
[binaries]
@@ -25,7 +25,7 @@ Then, create your Meson cross file to use it, something like this
c_ld = 'lld'
cpp_ld = 'lld'
strip = 'NDKDIR/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android-strip'
# Android doesn't come with a pkg-config, but we need one for Meson to be happy not
# Android doesn't come with a pkg-config, but we need one for meson to be happy not
# finding all the optional deps it looks for. Use system pkg-config pointing at a
# directory we get to populate with any .pc files we want to add for Android
pkgconfig = ['env', 'PKG_CONFIG_LIBDIR=NDKDIR/pkgconfig', '/usr/bin/pkg-config']
@@ -144,7 +144,7 @@ ARC++, but it should also be possible to build using the NDK as
described above. There are currently rough edges with this, for
example the build will require that you have your arc-libdrm build
available to the NDK, assuming you're building anything but the
Freedreno Vulkan driver for KGSL. You can mostly put things in place
freedreno Vulkan driver for KGSL. You can mostly put things in place
with:
.. code-block:: console


@@ -39,7 +39,7 @@ Instantiate your boards by creating them in the UI or at the command
line attached to that device type, then populate their dictionary
(using an "extends" line probably referencing the board's template in
``/etc/lava-dispatcher/device-types``). Now, go find a relevant
health check job for your board as a test job definition, or cobble
healthcheck job for your board as a test job definition, or cobble
something together from a board that boots using the same boot_method
and some public images, and figure out how to get your boards booting.
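For example, a minimal device dictionary (the board and template names here
are hypothetical) can be as small as::

    {% extends 'dragonboard-845c.jinja2' %}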
@@ -51,7 +51,7 @@ to restrict the jobs it takes or it will grab random jobs from tasks
across ``gitlab.freedesktop.org``, and your runner isn't ready for
that.
The Docker image will need access to the LAVA instance. If it's on a
The Docker image will need access to the lava instance. If it's on a
public network it should be fine. If you're running the LAVA instance
on localhost, you'll need to set ``network_mode="host"`` in
``/etc/gitlab-runner/config.toml`` so it can access localhost. Create a
@@ -74,7 +74,7 @@ access it. You probably have a ``volumes = ["/cache"]`` already, so now it woul
Note that this token is visible to anybody that can submit MRs to
Mesa! It is not an actual secret. We could just bake it into the
GitLab CI YAML, but this way the current method of connecting to the
GitLab CI yml, but this way the current method of connecting to the
LAVA instance is separated from the Mesa branches (particularly
relevant as we have many stable branches all using CI).
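A rough sketch of the runner settings touched on above (``network_mode`` and
the ``/cache`` volume), assuming the usual Docker-executor layout of
``/etc/gitlab-runner/config.toml``::

    [[runners]]
      executor = "docker"
      [runners.docker]
        network_mode = "host"
        volumes = ["/cache"]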


@@ -28,19 +28,19 @@ The boards need to be able to have a kernel/initramfs supplied by the
gitlab-runner system, since Mesa often needs to update the kernel either for new
DRM functionality, or to fix kernel bugs.
The boards must have networking, so that we can extract the dEQP XML results to
The boards must have networking, so that we can extract the dEQP .xml results to
artifacts on GitLab, and so that we can download traces (too large for an
initramfs) for trace replay testing. Given that we need networking already, and
our dEQP/Piglit/etc. payload is large, we use NFS from the x86 runner system
our deqp/piglit/etc. payload is large, we use nfs from the x86 runner system
rather than initramfs.
See `src/freedreno/ci/gitlab-ci.yml` for an example of fastboot on DB410c and
DB820c (freedreno-a306 and freedreno-a530).
DB820c (freedreno-a306 and freereno-a530).
Requirements (Servo)
Requirements (servo)
--------------------
For Servo-connected boards, we can use the EC connection for power
For servo-connected boards, we can use the EC connection for power
control to reboot the board. However, loading a kernel is not as easy
as fastboot, so we assume your bootloader can do TFTP, and that your
gitlab-runner mounts the runner's tftp directory specific to the board
@@ -48,7 +48,7 @@ at /tftp in the container.
Since we're going the TFTP route, we also use NFS root. This avoids
packing the rootfs and sending it to the board as a ramdisk, which
means we can support larger rootfses (for Piglit testing), at the cost
means we can support larger rootfses (for piglit testing), at the cost
of needing more storage on the runner.
Telling the board about where its TFTP and NFS should come from is
@@ -74,8 +74,8 @@ call "servo"::
dhcp-option=tag:cheza1,option:root-path,/srv/nfs/cheza1
dhcp-option=tag:cheza2,option:root-path,/srv/nfs/cheza2
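Each board can be tied to such a tag by its MAC address; a hypothetical
companion entry (placeholder MAC) might look like::

    dhcp-host=aa:bb:cc:dd:ee:ff,set:cheza1,cheza1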
See `src/freedreno/ci/gitlab-ci.yml` for an example of Servo on cheza. Note
that other Servo boards in CI are managed using LAVA.
See `src/freedreno/ci/gitlab-ci.yml` for an example of servo on cheza. Note
that other servo boards in CI are managed using LAVA.
Requirements (POE)
------------------
@@ -98,7 +98,7 @@ You'll talk to the Cisco for configuration using its USB port, which provides a
serial terminal at 9600 baud. You need to enable SNMP control, which we'll do
using a "mesaci" community name that the gitlab runner can access as its
authentication (no password) to configure. To talk to the SNMP on the router,
you need to put an IP address on the default vlan (vlan 1).
you need to put an ip address on the default vlan (vlan 1).
Setting that up looks something like:
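A hypothetical IOS-style sketch of those two steps (the address and netmask
below are placeholders; only the "mesaci" community and VLAN 1 come from the
text above)::

    configure terminal
    snmp-server community mesaci RW
    interface vlan 1
     ip address 10.42.0.2 255.255.255.0
    end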
@@ -127,7 +127,7 @@ google, that was easier than figuring it out from finding the switch's MIB
database. You can query the POE status from the switch serial using the `show
power inline` command.
Other than that, find the dnsmasq/tftp/NFS setup for your boards "servo" above.
Other than that, find the dnsmasq/tftp/nfs setup for your boards "servo" above.
See `src/broadcom/ci/gitlab-ci.yml` and `src/nouveau/ci/gitlab-ci.yml` for
examples of POE for Raspberry Pi 3/4, and Jetson Nano.
@@ -152,7 +152,7 @@ something like this to register a fastboot board:
--docker-privileged \
--non-interactive
For a Servo board, you'll need to also volume mount the board's NFS
For a servo board, you'll need to also volume mount the board's NFS
root dir at /nfs and TFTP kernel directory at /tftp.
The registration token has to come from a freedesktop.org GitLab admin
@@ -166,7 +166,7 @@ into that pool.
We need privileged mode and the /dev bind mount in order to get at the
serial console and fastboot USB devices (--device arguments don't
apply to devices that show up after container start, which is the case
with fastboot, and the Servo serial devices are actually links to
with fastboot, and the servo serial devices are actually links to
/dev/pts). We use host network mode so that we can spin up a nginx
server to collect XML results for fastboot.


@@ -1,7 +1,7 @@
Docker CI
=========
For LLVMpipe and Softpipe CI, we run tests in a container containing
For llvmpipe and swrast CI, we run tests in a container containing
VK-GL-CTS, on the shared GitLab runners provided by `freedesktop
<http://freedesktop.org>`_
@@ -67,7 +67,7 @@ anyone on the internet run code on your device. Docker containers may
provide some limited protection, but how much you trust that and what
you do to mitigate hostile access is up to you.
* DUTs must expose the DRI device nodes to the containers.
* DUTs must expose the dri device nodes to the containers.
Obviously, to get access to the HW, we need to pass the render node
through. This is done by adding ``devices = ["/dev/dri"]`` to the


@@ -15,7 +15,7 @@ The CI runs a number of tests, from trivial build-testing to complex GPU renderi
- Build testing for a number of build systems, configurations and platforms
- Sanity checks (``meson test``)
- Some drivers (Softpipe, LLVMpipe, Freedreno and Panfrost) are also tested
- Some drivers (softpipe, llvmpipe, freedreno and panfrost) are also tested
using `VK-GL-CTS <https://github.com/KhronosGroup/VK-GL-CTS>`__
- Replay of application traces
@@ -64,7 +64,7 @@ has been granted access to these traces.
A traces YAML file also includes a ``download-url`` pointing to a MinIO
instance where to download the traces from. While the first job should always work with
publicly accessible traces, the second job could point to an URL with restricted access.
publicly accessible traces, the second job could point to an url with restricted access.
Restricted traces are those that have been made available to Mesa developers without a
license to redistribute at will, and thus should not be exposed to the public. Failing to
@@ -84,14 +84,9 @@ added to the OPA policy for the MinIO repository as per
https://gitlab.freedesktop.org/freedesktop/helm-gitlab-config/-/commit/a3cd632743019f68ac8a829267deb262d9670958 .
So the jobs are created in personal repositories, the name of the user's account needs
to be added to the rules attribute of the GitLab CI job that accesses the restricted
to be added to the rules attribute of the Gitlab CI job that accesses the restricted
accounts.
.. toctree::
:maxdepth: 1
local-traces
Intel CI
--------
@@ -142,9 +137,9 @@ able to handle a whole pipeline's worth of jobs in less than 15 minutes
If a test farm is short the HW to provide these guarantees, consider dropping
tests to reduce runtime. dEQP job logs print the slowest tests at the end of
the run, and Piglit logs the runtime of tests in the results.json.bz2 in the
the run, and piglit logs the runtime of tests in the results.json.bz2 in the
artifacts. Or, you can add the following to your job to only run some fraction
(in this case, 1/10th) of the dEQP tests.
(in this case, 1/10th) of the deqp tests.
.. code-block:: yaml
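   # Sketch only: assumes the fraction is selected via the DEQP_FRACTION
   # variable read by Mesa CI's deqp-runner wrapper.
   variables:
     DEQP_FRACTION: 10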
@@ -252,7 +247,7 @@ directory. You can hack on mesa and iterate testing the build with:
Conformance Tests
-----------------
Some conformance tests require a special treatment to be maintained on GitLab CI.
Some conformance tests require a special treatment to be maintained on Gitlab CI.
This section lists their documentation pages.
.. toctree::
@@ -261,11 +256,11 @@ This section lists their documentation pages.
skqp
Updating GitLab CI Linux Kernel
Updating Gitlab CI Linux Kernel
-------------------------------
GitLab CI usually runs a bleeding-edge kernel. The following documentation has
instructions on how to uprev Linux Kernel in the GitLab CI ecosystem.
Gitlab CI usually runs a bleeding-edge kernel. The following documentation has
instructions on how to uprev Linux Kernel in the Gitlab Ci ecosystem.
.. toctree::
:maxdepth: 1

Some files were not shown because too many files have changed in this diff.