Compare commits

..

79 Commits

Author SHA1 Message Date
Eric Engestrom
bb9bdc4b73 VERSION: bump for 21.3.0-rc3
Signed-off-by: Eric Engestrom <eric@engestrom.ch>
2021-10-27 19:58:10 +01:00
Thomas Wagner
f199962c4d util: use anonymous file for memory fd creation
The original implementation in os_memory_fd.c always uses memfds.
Replace this by using the already existing os_create_anonymous_file in
order to support older systems or systems without memfd.

Fixes: 1166ee9caf ("gallium: add utility and interface for memory fd allocations")

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13331>
(cherry picked from commit 4856586ac6)
2021-10-27 19:58:10 +01:00
Bas Nieuwenhuizen
bf2e533688 radv: Add bufferDeviceAddressMultiDevice support.
We don't support multiple devices, so this is a nop. However, Baldur's Gate 3 enables
this, and with the new, more complete checks this causes device creation to fail.

Fixes: 2e5718c957 ("vulkan: provide common functions to check device features")
Gitlab: https://gitlab.freedesktop.org/mesa/mesa/-/issues/5509
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13482>
(cherry picked from commit 1fe375e7cf)
2021-10-27 19:58:10 +01:00
Mike Blumenkrantz
627659b6af nir/lower_samplers_as_deref: rewrite more image intrinsics
"I think we want to lower them."

-Jason "And I do know how the pass works" Ekstrand

fixes #5540

cc: mesa-stable

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13489>
(cherry picked from commit b0c40bc905)
2021-10-27 19:58:10 +01:00
Mike Blumenkrantz
b4506d1cc2 zink: more accurately update samplemask for fs shader keys
the fs samplemask needs to be updated on framebuffer rebind and on
fs bind to ensure that the key gets updated in time for the pipeline
change

fixes #5559

cc: mesa-stable

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13531>
(cherry picked from commit c9ce151ff9)
2021-10-27 19:58:10 +01:00
Mike Blumenkrantz
852c6bb9d2 zink: fix gl_SampleMaskIn spirv generation
the uint[1] -> uint dance is only relevant on the first load, so move
the variable type shuffling inside the create block to avoid breaking successive
loads

fixes #5543

cc: mesa-stable

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13488>
(cherry picked from commit 8899f6a198)
2021-10-27 19:58:10 +01:00
Mike Blumenkrantz
59b59c586b zink: don't add dynamic vertex pipeline states if no attribs are used
adding the states requires that vertex attribs be bound, but it's illegal
to bind 0 attribs

cc: mesa-stable

fixes #5558

Reviewed-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13519>
(cherry picked from commit 90228a80ea)
2021-10-27 19:58:10 +01:00
Mike Blumenkrantz
8cb060c374 zink: stop exporting PIPE_SHADER_CAP_FP16_DERIVATIVES
spirv doesn't support this

fixes #5561

cc: mesa-stable

Reviewed-by: Emma Anholt <emma@anholt.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13530>
(cherry picked from commit c13da98929)
2021-10-27 19:58:10 +01:00
Michael Tang
25b007d7a9 microsoft/spirv_to_dxil: turn sysvals into input varyings
Fixes: b47090c5b3 ("spirv: Always declare FragCoord as a sysval")
Reviewed-by: Jesse Natalie <jenatali@microsoft.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13276>
(cherry picked from commit 3094524621)
2021-10-27 19:58:10 +01:00
Lionel Landwerlin
a82babccd1 anv: fix push constant lowering with bindless shaders
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 9fa1cdfe7f ("intel/rt: Implement push constants as global memory reads")
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13529>
(cherry picked from commit a6031cd9bd)
2021-10-27 19:58:10 +01:00
Mike Blumenkrantz
e4dc69796e zink: don't check rebind count outside of buffer/image rebind function
zink_resource_has_binds() only checks descriptor binds, and this doesn't
include streamout or fb bindings, so call these functions from the specific
rebind points to ensure those cases are also checked

fixes #5541

cc: mesa-stable

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13490>
(cherry picked from commit 0a6f5ec942)
2021-10-27 19:58:10 +01:00
Mike Blumenkrantz
737c9a7dcf zink: only reset zink_resource::so_valid on buffer rebind
otherwise this is going to randomly modify some image properties

cc: mesa-stable

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13490>
(cherry picked from commit 1a68f2eb8f)
2021-10-27 19:58:10 +01:00
Mike Blumenkrantz
7264cc5cd4 zink: don't break early when applying fb clears
a resource can be bound to multiple fb attachments, each with
its own clear, so ensure that all of these are applied

fixes #5542

cc: mesa-stable

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13491>
(cherry picked from commit dabe477b4f)
2021-10-27 19:58:09 +01:00
Mike Blumenkrantz
4d88c19510 zink: detect prim type more accurately for tess/gs lines
u_reduced_prim() can't determine the output primitive when vs isn't the
last vertex stage, so store this from the appropriate shader info and use
it when it's available

fixes #5547

cc: mesa-stable

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13487>
(cherry picked from commit 2a91e83b7f)
2021-10-27 19:58:09 +01:00
Lionel Landwerlin
b6f0a4c11d vulkan/wsi/wayland: don't expose surface formats not fully supported
Depending on whether an application creates a swapchain with
VK_COMPOSITE_ALPHA_PRE_MULTIPLIED_BIT_KHR or not, we might use 2
different formats with the compositor.

This change makes sure that we support all the underlying formats
before exposing the corresponding VkFormat to the application.

v2: Don't forget get_formats2() (Ivan)

v3: Replace formats with availability boolean (Simon)

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 151b65b211 ("vulkan/wsi/wayland: generalize modifier handling")
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/5522
Reviewed-by: Ivan Briano <ivan.briano@intel.com> (v2)
Reviewed-by: Simon Ser <contact@emersion.fr>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13453>
(cherry picked from commit d944136f36)
2021-10-27 19:58:09 +01:00
Boris Brezillon
a63104a7d8 vulkan: Fix entrypoint generation when compiling for x86 with MSVC
When compiling for x86 with MSVC, Vulkan API entry points follow the
__stdcall convention (VKAPI_CALL maps to __stdcall), which uses the
following name mangling:

   _<function_name>@<arguments_size>

Fix the vk_entrypoint_stub()/alternatename definitions accordingly.

Fixes: 6d44b21d4f ("vulkan: Fix weak symbol emulation when compiling with MSVC")
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Jesse Natalie <jenatali@microsoft.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13516>
(cherry picked from commit 1813bb5917)
2021-10-27 19:58:08 +01:00
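
To make the decoration concrete, here is a minimal, compile-only sketch of the x86/__stdcall case described above (hypothetical example; the entry point and pragma shown are illustrative, not the actual generated dispatch code):

    /* MSVC, x86 target assumed: __stdcall decorates a symbol as
     * _<name>@<bytes-of-arguments>, so the alternatename mapping must use
     * the decorated names on both sides. */
    void __stdcall vk_entrypoint_stub(void)
    {
    }

    /* vkCmdDraw(VkCommandBuffer, uint32_t, uint32_t, uint32_t, uint32_t)
     * pushes 20 bytes of arguments on x86, hence _vkCmdDraw@20; the stub
     * takes no arguments, hence _vk_entrypoint_stub@0. */
    #pragma comment(linker, "/alternatename:_vkCmdDraw@20=_vk_entrypoint_stub@0")
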
Samuel Pitoiset
734011012f aco: only load streamout buffers if streamout is enabled
The streamout_config SGPR is used to determine if streamout is enabled.

This fixes a GPU hang with various transform feedback tests:
 - dEQP-GLES3.functional.transform_feedback.*
 - KHR-GL46.transform_feedback.api_errors_test
 - KHR-GL46.draw_indirect.basic-draw*-xfbPaused
 - KHR-GL46.geometry_shader.api.draw_calls_while_tf_is_paused

Cc: 21.3 mesa-stable
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Rhys Perry <pendingchaos02@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13514>
(cherry picked from commit dc74285d32)
2021-10-27 19:58:08 +01:00
Samuel Pitoiset
2ac3a3b5e9 radv: fix build errors with Android
Fixes: 49c3a88fad ("radv: implement VK_KHR_format_feature_flags2")
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/5518
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13450>
(cherry picked from commit 4765edb4e0)
2021-10-27 19:58:08 +01:00
Alyssa Rosenzweig
40eb47924e panfrost: Enable AFBC on v7
The bugs blocking this have been resolved, so flip on AFBC again and get
moar fps.

Signed-off-by: Alyssa Rosenzweig <alyssa@collabora.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13205>
(cherry picked from commit 2526f6f229)
2021-10-27 19:58:08 +01:00
Alyssa Rosenzweig
e97caaf452 panfrost: Decompress for incompatible AFBC formats
AFBC is keyed to the format. Depending on the hardware, we'll get an
Invalid Data Fault or a GPU timeout if we attempt to sample from an
AFBC-compressed RGBA8 texture as R32F (for example).

Fixes Piglit ./bin/arb_texture_view-rendering-formats_gles3 with AFBC.

Signed-off-by: Alyssa Rosenzweig <alyssa@collabora.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13205>
(cherry picked from commit 789601a189)
2021-10-27 19:58:08 +01:00
Alyssa Rosenzweig
1b87d41d73 panfrost: Add internal afbc_formats
We need to know the internal (physical) formats used for AFBC of a given
logical format, in order to check format compatibility and determine if
we need to decompress AFBC for conformance.

Signed-off-by: Alyssa Rosenzweig <alyssa@collabora.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13205>
(cherry picked from commit 93c9123c31)
2021-10-27 19:58:07 +01:00
Alyssa Rosenzweig
14f88aaca3 panfrost: Workaround ISSUE_TSIX_2033
According to mali_kbase, all Bifrost and Valhall GPUs are affected by
issue TSIX_2033. This hardware bug breaks the INTERSECT frame shader
mode when forcing clean_tile_writes. What does that mean?

The hardware considers a tile "clean" if it has been cleared but not
drawn to. Setting clean_tile_write forces the hardware to write back
such "clean" tiles to main memory.

Bifrost hardware supports frame shaders, which insert a rectangle into
every tile according to a configured rule. Frame shaders are used in
Panfrost to implement tile reloads (i.e. LOAD_OP_LOAD). Two modes are
relevant to the current discussion: ALWAYS, which always inserts a frame
shader, and INTERSECT, which tries to only insert where there is
geometry. Normally, we use INTERSECT for tile reloads as it is more
efficient than ALWAYS-- it allows us to skip reloads of tiles that are
discarded and never written back to memory.

From a software perspective, Panfrost's current logic is correct: if we
clear, we set clean_tile_writes, else we use an INTERSECT frame shader.
There is no software interaction between the two.

Unfortunately, there is a hardware interaction. The hardware forces
clean_tile_writes in certain circumstances when AFBC is used.
Ordinarily, this is a hardware implementation detail and invisible to
software. Unfortunately, this implicit clean tile write is enough to
trigger the hardware bug when using INTERSECT. As such, we need to
detect this case and use ALWAYS instead of INTERSECT for correct
results.

Signed-off-by: Alyssa Rosenzweig <alyssa@collabora.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13205>
(cherry picked from commit 342ed4909f)
2021-10-27 19:58:06 +01:00
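
The resulting rule can be sketched compactly as below (the enum and helper names are illustrative, not the actual panfrost pan_pack() definitions):

    /* Illustrative sketch of the workaround selection described above. */
    enum frame_shader_mode {
       FRAME_SHADER_ALWAYS,      /* insert the reload rectangle in every tile  */
       FRAME_SHADER_INTERSECT,   /* only insert where geometry actually landed */
    };

    static enum frame_shader_mode
    choose_reload_mode(bool afbc_forces_clean_tile_writes)
    {
       /* TSIX_2033: INTERSECT misrenders whenever clean tile writes are
        * forced, and the hardware forces them implicitly for some AFBC
        * targets, so fall back to the slower ALWAYS mode in that case. */
       return afbc_forces_clean_tile_writes ? FRAME_SHADER_ALWAYS
                                            : FRAME_SHADER_INTERSECT;
    }
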
Alyssa Rosenzweig
fd3f846f4f panfrost: Fix gl_FragColor lowering
The gl_FragColor lowering in the fragment shader depends on the number
of render targets, which can change every set_framebuffer_state.
set_framebuffer_state thus needs to force a rebind of the fragment
shader.

Fixes a regression in Piglit fbo-drawbuffers-none gl_FragColor -auto
-fbo from enabling AFBC on Mali G52.

Fixes: 28ac4d1e00 ("panfrost: Call nir_lower_fragcolor based on key")
Signed-off-by: Alyssa Rosenzweig <alyssa@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13498>
(cherry picked from commit e0335ad888)
2021-10-27 19:58:06 +01:00
Alyssa Rosenzweig
ddcf4a13f2 panfrost,panvk: Use dev->has_afbc instead of quirks
This uses the new property for AFBC we've added. The AFBC quirk is
applied only to v4, and we only set dev->has_afbc on v5+ so this is not
a regression. It now respects the hardware-specific AFBC disable.

Signed-off-by: Alyssa Rosenzweig <alyssa@collabora.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13497>
(cherry picked from commit 68a7fafe2a)
2021-10-27 19:58:04 +01:00
Alyssa Rosenzweig
f97f9253b6 panfrost: Detect whether implementations support AFBC
AFBC is an optional feature on Bifrost. If it is missing, a bit will be
set in the poorly named AFBC_FEATURES register. Check this so we can act
appropriately.

Signed-off-by: Alyssa Rosenzweig <alyssa@collabora.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13497>
(cherry picked from commit 3e168b97cc)
2021-10-27 19:58:03 +01:00
Marek Olšák
2569f415f2 st/mesa: don't crash when draw indirect buffer has no storage
Fixes: 22f6624ed3 ("gallium: separate indirect stuff from pipe_draw_info")

Acked-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13471>
(cherry picked from commit 520300ad22)
2021-10-27 19:58:03 +01:00
Tapani Pälli
03b36e3efb iris: clear bos_written when resetting a batch
This fixes dEQP-EGL.functional.sharing.gles2.multithread.* tests that
are hitting: "iris: Failed to submit batchbuffer: Invalid argument"
error.

v2: clear on reset rather than clear 'on-the-fly' (Kenneth Graunke)

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/5537
Fixes: e4c3d3efc7 ("iris: Defer construction of the validation (exec_object2) list")
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13464>
(cherry picked from commit 1465ec8cf3)
2021-10-27 19:58:03 +01:00
Samuel Pitoiset
61244fad74 radv: re-emit prolog inputs when the nontrivial divisors state changed
If the application first uses nontrivial divisors, the driver emits
the vertex shader VA to the upload BO rather than directly via the
user SGPR locations. But if the vertex input dynamic state changes,
the driver might select a different VS prolog that no longer needs
nontrivial divisors.

In this case, the driver needs to re-emit the prolog inputs because
otherwise the VS prolog will jump to the PC that is emitted via the
user SGPR locations, and the previous one was somewhere in the
upload BO...

This fixes a GPU hang with Bioshock and Zink.

Fixes: d9c7a17542 ("radv: enable VK_EXT_vertex_input_dynamic_state")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Rhys Perry <pendingchaos02@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13377>
(cherry picked from commit b6a69dbb40)
2021-10-27 19:58:03 +01:00
Iago Toral Quiroga
664cc248b3 broadcom/compiler: fix assert that current instruction must be in current block
This was not considering the possibility that the driver has called
nir_before_block() or nir_after_block() to update the cursor, in which
case the cursor link points to the instruction list header and not
to an actual instruction.

Fixes incorrect debug-assert crash in:
dEQP-VK.graphicsfuzz.cov-increment-vector-component-with-matrix-copy

Fixes: 265515fa62 ("broadcom/compiler: check instruction belongs to current block")
Reviewed-by: Juan A. Suarez <jasuarez@igalia.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13467>
(cherry picked from commit 1561d0126a)
2021-10-27 19:57:59 +01:00
Samuel Pitoiset
fa24bfb914 aco: fix loading 64-bit inputs with fragment shaders
Fixes a bunch of 64-bit IO tests with piglit and Zink.

Cc: 21.3 mesa-stable
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Rhys Perry <pendingchaos02@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13454>
(cherry picked from commit 996e81fb70)
2021-10-27 19:57:59 +01:00
Vinson Lee
d9cdad377d radv: Fix memory leak on error path.
Fix defect reported by Coverity Scan.

Resource leak (RESOURCE_LEAK)
leaked_storage: Variable prolog going out of scope leaks the storage it points to

Fixes: 80841196b2 ("radv: implement dynamic vertex input state using vertex shader prologs")
Suggested-by: Rhys Perry <pendingchaos02@gmail.com>
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13402>
(cherry picked from commit 670fd8123b)
2021-10-27 19:57:55 +01:00
Mike Blumenkrantz
ebf218158c zink: move last of lazy descriptor state updating back to lazy-only code
hybrid mode is controlled by the caching manager, so state tracking is irrelevant

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13350>
(cherry picked from commit dfd0f5dbfd)
2021-10-27 19:57:55 +01:00
Mike Blumenkrantz
d558fe4c75 zink: add an early return for zink_descriptors_update_lazy_masked()
no point in generating pools/sets that won't be used here

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13350>
(cherry picked from commit 140d3ea8c6)
2021-10-27 19:57:55 +01:00
Mike Blumenkrantz
25a09a9879 zink: move push descriptor updating into lazy-only codepath
this was a bit confusing to read, and I originally left it in the hybrid
path to enable fallbacks to push descriptors in hybrid mode. the problem with
that idea is that it's impossible: the constant buffer set is the one set
that will never, ever trigger a fallback, so leaving it there just leaves
room for error and confusion

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13350>
(cherry picked from commit 7c840f5103)
2021-10-27 19:57:54 +01:00
Mike Blumenkrantz
af0c678a3c zink: don't update lazy descriptor states in hybrid mode
I'm not 100% sure how, but this breaks Tomb Raider

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13350>
(cherry picked from commit b140d58b1f)
2021-10-27 19:57:54 +01:00
Mike Blumenkrantz
b68088b110 zink: assert compute descriptor key is valid before hashing it
Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13350>
(cherry picked from commit 75e51138b1)
2021-10-27 19:57:54 +01:00
Mike Blumenkrantz
b829dc1a3a zink: clear descriptor refs on buffer replacement
the bo here can only ever be destroyed before it gets reused, so prune
it from the descriptor cache immediately

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13350>
(cherry picked from commit 497ce3c38a)
2021-10-27 19:57:54 +01:00
Eric Engestrom
bb763eee16 .pick_status.json: Update to 4856586ac6
2021-10-27 19:57:31 +01:00
Eric Engestrom
7976828ae3 VERSION: bump for 21.3.0-rc2
Signed-off-by: Eric Engestrom <eric@engestrom.ch>
2021-10-20 20:48:17 +01:00
Mike Blumenkrantz
f8b5444a09 zink: rescue surfaces/bufferviews for cache hits during deletion
this is a wild race condition, but it's possible for these to get their
final unref, enter their destructor, and then get a cache hit while waiting
on the lock to remove themselves from the cache

in such a scenario, a second, normal check of the refcount will suffice,
as the increment is atomic, and the value will otherwise be zero

fixes crashes in basemark

cc: mesa-stable

Reviewed-by: Hoe Hao Cheng <haochengho12907@gmail.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13442>
(cherry picked from commit 86b3d8c66c)
2021-10-20 20:40:59 +01:00
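
A minimal sketch of the rescue pattern described above (hypothetical types and locking, not the actual zink code): the destructor re-checks the reference count after taking the cache lock, since a cache hit may have revived the object while the destructor was waiting.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>

    /* Illustrative types; not the actual zink surface/cache structures. */
    struct cache   { pthread_mutex_t lock; /* hash table of live surfaces */ };
    struct surface { atomic_uint refcount; };

    /* Returns true if the surface was evicted and may really be destroyed. */
    static bool
    surface_try_destroy(struct cache *cache, struct surface *surf)
    {
       pthread_mutex_lock(&cache->lock);
       /* A cache hit may have re-referenced the surface while the destructor
        * waited for the lock; the increment is atomic, so any non-zero count
        * here means the object was rescued and destruction is abandoned. */
       if (atomic_load(&surf->refcount) > 0) {
          pthread_mutex_unlock(&cache->lock);
          return false;
       }
       /* remove the entry from the cache's hash table here */
       pthread_mutex_unlock(&cache->lock);
       return true;
    }
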
Mykhailo Skorokhodov
f1b779361c Revert "iris: add tile cache flush to iris_copy_region"
This reverts commit 27534a49cf

Signed-off-by: Mykhailo Skorokhodov <mykhailo.skorokhodov@globallogic.com>
Reviewed-by: Felix DeGrood <felix.j.degrood@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/12979>
(cherry picked from commit 5afce85f2b)
2021-10-20 20:40:59 +01:00
Mykhailo Skorokhodov
d797e98b6f iris: Add missed tile flush flag
Without adding `PIPE_CONTROL_TILE_CACHE_FLUSH` in `iris_emit_pipe_control`,
gen12+ (UHD 750 in my case) has issues with textures.

Related-to: https://gitlab.freedesktop.org/mesa/mesa/-/issues/5029
Fixes: c85ea824 ("iris: reduce redundant tile cache flushes")

Signed-off-by: Mykhailo Skorokhodov <mykhailo.skorokhodov@globallogic.com>
Reviewed-by: Felix DeGrood <felix.j.degrood@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/12979>
(cherry picked from commit 0523607ebb)
2021-10-20 20:40:59 +01:00
Marek Olšák
9aaf29b938 mesa: fix crashes in the no_error path of glUniform
Fixes: bd2662bfa1 ("mesa: add KHR_no_error support to glUniform*() functions")

Reviewed-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13417>
(cherry picked from commit 03186773a6)
2021-10-20 20:40:59 +01:00
Samuel Pitoiset
97a30b2d21 aco: fix emitting stream outputs when the first component isn't zero
Fixes a bunch of XFB piglit tests with Zink.

Cc: 21.3 mesa-stable
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Rhys Perry <pendingchaos02@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13437>
(cherry picked from commit 572a902566)
2021-10-20 20:40:59 +01:00
Samuel Pitoiset
1771a7da08 aco: fix invalid IR generated for b2f64 when the dest is a VGPR
Fixes few 64-bit piglit tests with Zink.

Cc: 21.3 mesa-stable
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Rhys Perry <pendingchaos02@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13435>
(cherry picked from commit e3cbb0eb6a)
2021-10-20 20:40:59 +01:00
Samuel Pitoiset
5532c4267f radv: do not remove PSIZ for streamout shaders
It might still be read later from the streamout buffer.

Fixes a regression with
ext_transform_feedback-builtin-varyings gl_PointSize and Zink.

Fixes: 92e1981a80 ("radv: Remove PSIZ output when it isn't needed.")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Timur Kristóf <timur.kristof@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13413>
(cherry picked from commit 19c91a120d)
2021-10-20 20:40:59 +01:00
Jan Beich
80305d7c2e meson: disable -Werror=thread-safety on FreeBSD
Annotated <pthread.h> exposes too many errors in Mesa that are
non-trivial to fix and keep working without FreeBSD CI.

Fixes: 0d5fe24c9b ("macros: Add thread-safety annotation macros")
Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/9168>
(cherry picked from commit 60b7c3a0f4)
2021-10-20 20:40:59 +01:00
Mike Blumenkrantz
356cac1c29 zink: fully zero surface creation struct
gotta get those holes for caching

cc: mesa-stable

Reviewed-by: Hoe Hao Cheng <haochengho12907@gmail.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13410>
(cherry picked from commit e66558985a)
2021-10-20 20:40:59 +01:00
Mike Blumenkrantz
b28da95fa4 zink: add a read barrier for indirect dispatch
using the draw stage here doesn't make much sense, but that's what the
spec says, so let's git er done

fixes dEQP-GL45.functional.compute.indirect_dispatch* on radv

Cc: mesa-stable

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13381>
(cherry picked from commit a2789fde0c)
2021-10-20 20:40:58 +01:00
Mike Blumenkrantz
28261505d6 zink: use static array for detecting VK_TIME_DOMAIN_DEVICE_EXT
there are only a few possible values for this, so just use a static array
to avoid leaking

Fixes: 039078fe97 ("zink: slight refactor of load_device_extensions()")

Reviewed-by: Dave Airlie <airlied@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13360>
(cherry picked from commit 11dd9e4ee4)
2021-10-20 20:40:58 +01:00
Witold Baryluk
b67308a449 zink: Fully initialize VkBufferViewCreateInfo for hashing
Makes hashing achieve a higher hit rate, and makes valgrind happier.

Reviewed-by: Hoe Hao Cheng <haochengho12907@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13371>
(cherry picked from commit ae525da0e4)
2021-10-20 20:40:58 +01:00
Marcin Ślusarz
2d6c11843d intel: fix INTEL_DEBUG environment variable on 32-bit systems
INTEL_DEBUG is defined (since 4015e1876a) as:

 #define INTEL_DEBUG __builtin_expect(intel_debug, 0)

which unfortunately chops off the upper 32 bits of intel_debug
on platforms where sizeof(long) != sizeof(uint64_t), because
__builtin_expect is defined only for the long type.

Fix this by changing the definition of INTEL_DEBUG to be a function-like
macro with a "flags" argument. The new definition returns 1 when any of
the flags match and 0 otherwise.

Most of the changes in this commit were generated using:
for c in `git grep INTEL_DEBUG | grep "&" | grep -v i915 | awk -F: '{print $1}' | sort | uniq`; do
    perl -pi -e "s/INTEL_DEBUG & ([A-Z0-9a-z_]+)/INTEL_DBG(\1)/" $c
    perl -pi -e "s/INTEL_DEBUG & (\([A-Z0-9_ |]+\))/INTEL_DBG\1/" $c
done
but it didn't handle all cases and required minor cleanups (like removal
of round brackets which were not needed anymore).

Signed-off-by: Marcin Ślusarz <marcin.slusarz@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13334>
(cherry picked from commit d05f7b4a2c)
2021-10-20 20:40:58 +01:00
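
A small self-contained sketch of the truncation described above (the INTEL_DBG definition below is illustrative, not necessarily the exact macro added by the commit): on an ILP32 target the old macro silently drops any flag above bit 31, while the function-like form first reduces the 64-bit test to 0 or 1.

    #include <stdint.h>
    #include <stdio.h>

    uint64_t intel_debug = UINT64_C(1) << 40;   /* a debug flag above bit 31 */

    /* Old style: __builtin_expect() takes and returns 'long', so on a 32-bit
     * target the 64-bit value is truncated and high flags are lost. */
    #define INTEL_DEBUG_OLD __builtin_expect(intel_debug, 0)

    /* Function-like replacement: reduce the 64-bit test to 0/1 first, then
     * let __builtin_expect() see only that small value. */
    #define INTEL_DBG(flags) __builtin_expect((intel_debug & (flags)) != 0, 0)

    int main(void)
    {
       /* Prints "old: 0 new: 1" on a 32-bit build, "old: 1 new: 1" on 64-bit. */
       printf("old: %d new: %d\n",
              (int)((INTEL_DEBUG_OLD & (UINT64_C(1) << 40)) != 0),
              (int)INTEL_DBG(UINT64_C(1) << 40));
       return 0;
    }
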
Neha Bhende
ca1c300ecd st: Fix 64-bit vertex attrib index for TGSI path
Patch 77c2b022a0 removed the lowering of 64-bit vertex attribs to 32 bits.
This threw the TGSI translation off guard for 64-bit attribs and led to
failures/crashes in 1000+ piglit tests.

This patch fixes the 64-bit attrib index for the TGSI path by adding a
placeholder for the second part of a double attribute. It fixes all
regressed piglit tests.

A big help from Charmaine to fix this regression.
Reviewed-by: Charmaine Lee <charmainel@vmware.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>

Fixes: 77c2b022a0 ("st/mesa: remove lowering of 64-bit vertex attribs to 32 bits")
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13363>
(cherry picked from commit be6d584de4)
2021-10-20 20:40:58 +01:00
Samuel Pitoiset
ed1db8e8ca radv: fix OpImageQuerySamples with non-zero descriptor set
The descriptor set was always 0 because it wasn't gathered by the
shader info pass.

This fixes CPU crashes with
arb_shader_texture_image_samples-builtin-image and Zink.

Cc: 21.3 mesa-stable
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13411>
(cherry picked from commit 5b797bd485)
2021-10-20 20:40:58 +01:00
Pierre-Eric Pelloux-Prayer
4504abe511 radeonsi: use viewport offset in quant_mode determination
Instead of only using the viewport extent.

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/5344
Cc: mesa-stable
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13382>
(cherry picked from commit 234c69f600)
2021-10-20 20:40:58 +01:00
Vinson Lee
d19e28c139 anv: Fix assertion.
Fix defect reported by Coverity Scan.

Assign instead of compare (PW.ASSIGN_WHERE_COMPARE_MEANT)
assign_where_compare_meant: use of "=" where "==" may have been intended

Fixes: 35315c68a5 ("anv: Use the common wrapper for GetPhysicalDeviceFormatProperties")
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13395>
(cherry picked from commit 9eb010ee1e)
2021-10-20 20:40:58 +01:00
Samuel Pitoiset
983cccf757 radv: fix removing PSIZ when it's not emitted by the last VGT stage
This dereferences a NULL pointer and crashes many tests with Zink.

Fixes: 92e1981a80 ("radv: Remove PSIZ output when it isn't needed.")
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Timur Kristóf <timur.kristof@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13378>
(cherry picked from commit 61be0bd34b)
2021-10-20 20:40:58 +01:00
Karol Herbst
7456331987 spirv: Don't add 0.5 to array indices for OpImageSampleExplicitLod
This fixes CLs 1.2 1Darray and 2Darray images.

Fixes: 589d918a4f
       ("spirv: Add 0.5 to integer coordinates for OpImageSampleExplicitLod")

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13401>
(cherry picked from commit f6ecd284e5)
2021-10-20 20:40:58 +01:00
Dave Airlie
bac2dd958a llvmpipe: fix userptr for texture resources.
This is needed for CL image hostptr support, but it's possible
it could hit these paths from GL/Vulkan

Fixes: 9a57dceeb7 ("llvmpipe: add support for user memory pointers")
Reviewed-By: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13375>
(cherry picked from commit 17a565e0cf)
2021-10-20 20:40:58 +01:00
Alyssa Rosenzweig
121f0528a7 panfrost: Don't allow rendering/texturing 48-bit
Matches freedreno. Fixes crashes in Piglit arb_texture_view.

Signed-off-by: Alyssa Rosenzweig <alyssa@collabora.com>
Cc: mesa-stable
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13394>
(cherry picked from commit d31ca63527)
2021-10-20 20:40:58 +01:00
Derek Foreman
3c0c2465f3 egl/wayland: Properly clear stale buffers on resize
The following chain of events results in an incorrectly sized buffer
persisting beyond its useful lifetime, and causing visual artifacts.

buffer is attached at size A
window is resized to size B
rendering takes place for size B
window is resized back to size A
swapbuffers with damage is called

In this scenario, update_buffers fails to recognize that the surface it's
about to commit is a different size than it has rendered. The
attached_width and attached_height are set incorrectly, and periodic
flickering is observed.

Instead, we set a boolean flag at time of resize and use this at the time
we latch the window dimensions as surface dimensions to decide whether to
discard stale buffers.

Signed-off-by: Derek Foreman <derek.foreman@collabora.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13270>
(cherry picked from commit 28d12716e8)
2021-10-20 20:40:58 +01:00
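
A minimal sketch of the flag-based approach described above (field and function names are illustrative, not the actual egl/wayland code): comparing sizes alone cannot see an A -> B -> A resize, but a sticky "resized" flag can.

    #include <stdbool.h>

    /* Illustrative fields; not the actual wl_egl_window / dri2 structures. */
    struct window {
       int width, height;                    /* size requested by the client    */
       int attached_width, attached_height;  /* size of the last attached buffer */
       bool resized;                         /* set whenever a resize happens   */
    };

    static void
    on_resize(struct window *win, int w, int h)
    {
       win->width = w;
       win->height = h;
       win->resized = true;
    }

    static void
    latch_dimensions(struct window *win)
    {
       /* Discard stale buffers whenever any resize happened since the last
        * attach, even if the window bounced back to the previously attached
        * size (the case a plain width/height comparison misses). */
       if (win->resized) {
          /* release all free/attached color buffers here */
          win->resized = false;
       }
       win->attached_width  = win->width;
       win->attached_height = win->height;
    }
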
Mike Blumenkrantz
118131c071 aux/pb: more correctly check number of reclaims
the increment needs to happen before the comparison here

Fixes: 3d6c8829f5 ("aux/pb: add a tolerance for reclaim failure")

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13388>
(cherry picked from commit fe2674dd52)
2021-10-20 20:40:57 +01:00
Mike Blumenkrantz
b7942e3134 aux/pb: add a tolerance for reclaim failure
originally, a slab attempts to reclaim a single bo. there are two outcomes
to this which can occur:
* the bo is reclaimed
* the bo is not reclaimed

if the bo is reclaimed, great.

if the bo is not reclaimed, it remains at the head of the list until it can
be reclaimed. this means that any bo with a "long" work queue which makes it
into a slab will effectively kill the entire slab. in a benchmarking scenario,
this can occur in rapid succession, and every slab will get 1-2 suballocations
before it reaches a bo that blocks long enough for a new slab to be needed.

the inevitable result of this scenario is that all memory is depleted almost instantly,
all because pb assumes that if the first bo in the reclaim list isn't ready, none of them
can be ready

for drivers like radeonsi, this happens to be a fine assumption

for drivers like zink, this is entirely not workable and explodes the gpu

Cc: mesa-stable

Reviewed-by: Witold Baryluk <witold.baryluk@gmail.com>
Reviewed-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Tested-by: Witold Baryluk <witold.baryluk@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13345>
(cherry picked from commit 3d6c8829f5)
2021-10-20 20:40:57 +01:00
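
A minimal, self-contained sketch of the tolerance idea described above (types, helper names and the tolerance value are illustrative, not the actual pb_slab code): instead of giving up when the head of the reclaim list is still busy, scan a bounded number of entries and reclaim whichever have become idle.

    #include <stdbool.h>
    #include <stddef.h>

    enum { RECLAIM_TOLERANCE = 10 };   /* illustrative bound */

    struct reclaim_entry {
       struct reclaim_entry *next;
       bool idle;   /* stand-in for "the BO's fence has signalled" */
    };

    static void
    reclaim_with_tolerance(struct reclaim_entry **head,
                           void (*reuse)(struct reclaim_entry *))
    {
       struct reclaim_entry **link = head;
       int scanned = 0;

       while (*link && scanned++ < RECLAIM_TOLERANCE) {
          struct reclaim_entry *e = *link;
          if (e->idle) {
             *link = e->next;   /* unlink and hand the slab entry back */
             reuse(e);
          } else {
             link = &e->next;   /* busy: skip it instead of aborting the scan */
          }
       }
    }
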
Samuel Pitoiset
7cddbaab2d aco: do not return an empty string when disassembly is not supported
Fixes dEQP-VK.pipeline.executable_properties.* on GFX6-7 when
clrxdisasm isn't found. Other generations are also affected if RADV
is built without LLVM.

Cc: 21.3 mesa-stable
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Tony Wasserka <tony.wasserka@gmx.de>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13333>
(cherry picked from commit aac4e1f822)
2021-10-20 20:40:57 +01:00
Marcin Ślusarz
2dcee84ce3 iris: fix scratch address patching for TESS_EVAL stage
Scratch patching code in iris_upload_dirty_render_state (see MERGE_SCRATCH_ADDR
calls) assumes that in all shader stages the derived_data field stores the
3DSTATE_XS packet first.

This is not true for TESS_EVAL (DS), so we end up patching 3DSTATE_TE
instead of 3DSTATE_DS leading to DWordLength becoming 11 instead of 9
(9 == 3DSTATE_DS.DWordLength, 2 == 3DSTATE_TE.DWordLength, and 9|2 == 11),
and hardware hanging on the next instruction.

Fix this by reversing the order of packets for TESS_EVAL stage.

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/5499

Fixes: 4256f7ed58 ("iris: Fill out scratch base address dynamically")
Signed-off-by: Marcin Ślusarz <marcin.slusarz@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13358>
(cherry picked from commit 5387522bd0)
2021-10-20 20:40:57 +01:00
Maniraj D
9b68854c2e egl: set TSD as NULL after deinit
When eglReleaseThread() is called from an application's
destructor (a function with __attribute__((destructor))),
it crashes due to an invalid memory access.

In this case, _egl_TLS is freed in the flow of
_eglAtExit() as below but _egl_TLS is not set to NULL.

    _eglDestroyThreadInfo
        _eglFiniTSD
            _eglAtExit
                _run_exit_handlers
                    exit

Later, when eglReleaseThread is called from the
application's destructor, it ends up accessing
the freed _egl_TLS pointer.

    eglReleaseThread -> in libEGL_mesa
        eglReleaseThread -> in libEGL(glvnd)
            destructor() -> App's destructor

To resolve the invalid access, set the _egl_TLS
pointer to NULL after freeing it.

Reviewed-by: Eric Engestrom <eric@engestrom.ch>
Reviewed-by: Jesse Natalie <jenatali@microsoft.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Cc: mesa-stable

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/5466
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13302>
(cherry picked from commit 796c9ab3fd)
2021-10-20 20:40:57 +01:00
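
A minimal sketch of the fix described above (hypothetical surrounding code; only the _egl_TLS name is taken from the commit): clearing the pointer after the exit handler frees it turns a late eglReleaseThread() into a harmless no-op.

    #include <stdlib.h>

    /* Illustrative thread-local state; not the actual EGL sources. */
    static _Thread_local void *_egl_TLS;

    static void
    fini_tsd(void)   /* runs from the process exit handlers */
    {
       free(_egl_TLS);
       _egl_TLS = NULL;   /* the missing step: avoid a dangling pointer */
    }

    void
    release_thread(void)   /* e.g. a late eglReleaseThread() from a destructor */
    {
       if (!_egl_TLS)
          return;          /* now a safe no-op after fini_tsd() has run */
       /* ... normal per-thread teardown ... */
    }
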
Jason Ekstrand
071ce0bbc7 i965: Emit a NULL surface for buffer textures with no buffer
This is a preexisting bug but it was uncovered by 231653ea35
("intel/isl: Add a max_buffer_size limit to isl_device") which added an
assert(num_elements > 0) for typed buffers.

Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13351>
(cherry picked from commit 393fda2d34)
2021-10-20 20:40:57 +01:00
Witold Baryluk
8876a87565 zink: Do not access just freed zink_batch_state
Cc: mesa-stable
Reviewed-by: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13370>
(cherry picked from commit 4d777631b5)
2021-10-20 20:40:57 +01:00
Yiwei Zhang
9c7e483a1c dri_interface: remove gl header
Only gl typedefs are used. So just remove the header and update the
types to the underlying types.

Signed-off-by: Yiwei Zhang <zzyiwei@chromium.org>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Emma Anholt <emma@anholt.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13387>
(cherry picked from commit 2d58e31f10)
2021-10-20 20:40:57 +01:00
Yiwei Zhang
0969e1247c dri_interface: remove obsolete interfaces
Below are removed:
__DRI_FRAME_TRACKING
__DRI_TEX_OFFSET
__DRI_GET_DRAWABLE_INFO

Signed-off-by: Yiwei Zhang <zzyiwei@chromium.org>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Emma Anholt <emma@anholt.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13387>
(cherry picked from commit e19d9046db)
2021-10-20 20:40:57 +01:00
Clayton Craft
3f61f84fe3 anv: don't advertise vk conformance on GPUs that aren't conformant
This sets the conformance version to 0.0.0.0 for GPUs that have
incomplete support for vulkan, so that it's easier to check if vulkan is
fully supported by a GPU at runtime for applications/libraries.

    $ vulkaninfo|grep conf
    MESA-INTEL: warning: Ivy Bridge Vulkan support is incomplete
        conformanceVersion = 0.0.0.0

Signed-off-by: Clayton Craft <clayton@craftyguy.net>
Reviewed-by: Ivan Briano <ivan.briano@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13275>
(cherry picked from commit b2ef7e6d6b)
2021-10-20 20:40:57 +01:00
Jason Ekstrand
5c3159e088 vulkan/log: Tweak our handling of a couple error enums
VK_ERROR_INITIALIZATION_FAILED can happen as part of device creation and
isn't really an instance error in that case.
VK_ERROR_EXTENSION_NOT_PRESENT, on the other hand, is always an instance
thing and we should handle it as such.

Fixes: 0cad3beb2a ("vulkan/log: Add common vk_error and vk_errorf helpers")
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13341>
(cherry picked from commit 071437d29d)
2021-10-20 20:40:57 +01:00
Boris Brezillon
45a9fd6acb vulkan: Set unused entrypoints to vk_entrypoint_stub when compiling with MSVC
If we don't do that we hit the assert(entry[i] != NULL) added by commit
6d44b21d4f ("vulkan: Fix weak symbol emulation when compiling with MSVC").

Fixes: 6d44b21d4f ("vulkan: Fix weak symbol emulation when compiling with MSVC")
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Jesse Natalie <jenatali@microsoft.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13355>
(cherry picked from commit fd46749234)
2021-10-20 20:40:57 +01:00
Bas Nieuwenhuizen
81fe3260e0 radv: Fix modifier property query.
radv_get_modifier_flags reads the format properties; it doesn't write any.
Setting the central format properties based on the drm format properties
doesn't make any sense.

Fixes: 5dee0d9da9 ("radv: switch to VK_FORMAT_FEATURE_2_XXX/VkFormatProperties3KHR")
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/5498
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Tested-By: Mike Blumenkrantz <michael.blumenkrantz@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13357>
(cherry picked from commit b4aa5a3fdd)
2021-10-20 20:40:57 +01:00
Boris Brezillon
70cd17bbf1 vulkan: Fix weak symbol emulation when compiling with MSVC
Mapping unimplemented entrypoints to a global function pointer variable
initialized to NULL is a bit cumbersome, and actually led to a bug
in the vk_xxx_dispatch_table_from_entrypoints() template: the !override
case didn't have the right check on the source table entries. Instead of
fixing that case, let's simplify the logic by creating a stub function
and making the alternatename pragma point to this stub. This way we get
rid of all those unneeded xxx_Null symbols/variables and simplify the
tests in vk_xxxx_dispatch_table_from_entrypoints().

Cc: mesa-stable
Fixes: 98c622a96e ("vulkan: Update dispatch table gen for Windows")
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Jesse Natalie <jenatali@microsoft.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13348>
(cherry picked from commit 6d44b21d4f)
2021-10-20 20:40:57 +01:00
Ian Romanick
2b77d8afa7 nir/loop_unroll: Always unroll loops that iterate at most once
Two carchase compute shaders (shader-db) and two Fallout 4 fragment
shaders (fossil-db) were helped.  Based on the NIR of the shaders, all
four had structures like

    for (i = 0; i < 1; i++) {
        ...

	for (...) {
            ...
	}
    }

All HSW+ platforms had similar results. (Ice Lake shown)
total loops in shared programs: 6033 -> 6031 (-0.03%)
loops in affected programs: 4 -> 2 (-50.00%)
helped: 2
HURT: 0

All Intel platforms had similar results. (Ice Lake shown)
Instructions in all programs: 143692018 -> 143692006 (-0.0%)
SENDs in all programs: 6947154 -> 6947154 (+0.0%)
Loops in all programs: 38285 -> 38283 (-0.0%)
Cycles in all programs: 8434822225 -> 8434476815 (-0.0%)
Spills in all programs: 191665 -> 191665 (+0.0%)
Fills in all programs: 298822 -> 298822 (+0.0%)

In the presence of loop unrolling like this, the change in cycles is not
accurate.

v2: Rearrange the logic in the if-condition to read a little better.
Suggested by Tim.

Closes: #5089
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
(cherry picked from commit ae99ea6f4d)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13366>
2021-10-20 20:40:57 +01:00
Eric Engestrom
f774768d17 .pick_status.json: Mark 7a2e40df5e as denominated
2021-10-20 20:40:57 +01:00
Eric Engestrom
1cf264d89d .pick_status.json: Update to 86b3d8c66c
2021-10-20 20:40:41 +01:00
Eric Engestrom
2dc6aa567f VERSION: bump for 21.3.0-rc1
Signed-off-by: Eric Engestrom <eric@engestrom.ch>
2021-10-13 20:59:03 +01:00
10303 changed files with 1250882 additions and 2816438 deletions


@@ -1,2 +0,0 @@
# Vendored code
src/amd/vulkan/radix_sort/*


@@ -1,9 +0,0 @@
# The following files are opted into `ninja clang-format` and
# enforcement in the CI.
src/gallium/drivers/i915
src/gallium/targets/teflon/**/*
src/amd/vulkan/**/*
src/amd/compiler/**/*
src/egl/**/*
src/etnaviv/isa/**/*


@@ -8,7 +8,7 @@ charset = utf-8
 insert_final_newline = true
 tab_width = 8
-[*.{c,h,cpp,hpp,cc,hh,y,yy}]
+[*.{c,h,cpp,hpp,cc,hh}]
 indent_style = space
 indent_size = 3
 max_line_length = 78
@@ -16,14 +16,26 @@ max_line_length = 78
 [{Makefile*,*.mk}]
 indent_style = tab
-[*.py]
+[{*.py,SCons*}]
 indent_style = space
 indent_size = 4
+[*.pl]
+indent_style = space
+indent_size = 4
+[*.m4]
+indent_style = space
+indent_size = 2
 [*.yml]
 indent_style = space
 indent_size = 2
+[*.html]
+indent_style = space
+indent_size = 2
 [*.rst]
 indent_style = space
 indent_size = 3
@@ -34,11 +46,3 @@ trim_trailing_whitespace = false
 [{meson.build,meson_options.txt}]
 indent_style = space
 indent_size = 2
-[*.ps1]
-indent_style = space
-indent_size = 2
-[*.rs]
-indent_style = space
-indent_size = 4


@@ -1,67 +0,0 @@
# List of commits to ignore when using `git blame`.
# Enable with:
# git config blame.ignoreRevsFile .git-blame-ignore-revs
#
# Per git-blame(1):
# Ignore revisions listed in the file, one unabbreviated object name
# per line, in git-blame. Whitespace and comments beginning with # are
# ignored.
#
# Please keep these in chronological order :)
#
# You can add a new commit with the following command:
# git log -1 --pretty=format:'%n# %s%n%H%n' >> .git-blame-ignore-revs $COMMIT
# pvr: Fix clang-format error.
0ad5b0a74ef73f5fcbe1406ad9d57fe5dc00a5b1
# panfrost: Fix up some formatting for clang-format
a4705afe63412498d13ded73cba969c66be67907
# asahi: clang-format the world again
26c51bb8d8a33098b1990425a391f56ffba5728c
# perfetto: Add a .clang-format for the directory.
da78d5d729b1800136dd713b68492cb339993f4a
# panfrost/winsys: Clang-format
c90f036516a5376002be6550a917e8bad6a8a3b8
# panfrost: Re-run clang-format
4ccf174009af6732cbffa5d8ebb4687da7517505
# panvk: Clang-format
c7bf3b69ebc8f2252dbf724a4de638e6bb2ac402
# pan/mdg: Fix icky formatting
133af0d6c945d3aaca8989edd15283a2b7dcc6c7
# mapi: clang-format _glapi_add_dispatch()
30332529663268a6406e910848e906e725e6fda7
# radv: reformat according to its .clang-format
8b319c6db8bd93603b18bd783eb75225fcfd51b7
# aco: reformat according to its .clang-format
6b21653ab4d3a67e711fe10e3d403128b6d26eb2
# egl: re-format using clang-format
2f670d89db038d5a29f6b72732fd7ad63dfaf4c6
# panfrost: clang-format the tree
0afd691f29683f6e9dde60f79eca094373521806
# aco: Format.
1e2639026fec7069806449f9ba2a124ce4eb5569
# radv: Format.
59c501ca353f8ec9d2717c98af2bfa1a1dbf4d75
# pvr: clang-format fixes
953c04ebd39c52d457301bdd8ac803949001da2d
# freedreno: Re-indent
2d439343ea1aee146d4ce32800992cd389bd505d
# ir3: Reformat source with clang-format
177138d8cb0b4f6a42ef0a1f8593e14d79f17c54

.gitattributes

@@ -1,7 +0,0 @@
*.csv eol=crlf
* text=auto
*.jpg binary
*.png binary
*.gif binary
*.ico binary
*.cl gitlab-language=c


@@ -1,60 +0,0 @@
name: macOS-CI
on: push
permissions:
contents: read
jobs:
macOS-CI:
strategy:
matrix:
glx_option: ['dri', 'xlib']
runs-on: macos-11
env:
GALLIUM_DUMP_CPU: true
MESON_EXEC: /Users/runner/Library/Python/3.11/bin/meson
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Install Dependencies
run: |
cat > Brewfile <<EOL
brew "bison"
brew "expat"
brew "gettext"
brew "libx11"
brew "libxcb"
brew "libxdamage"
brew "libxext"
brew "molten-vk"
brew "ninja"
brew "pkg-config"
brew "python@3.10"
EOL
brew update
brew bundle --verbose
- name: Install Mako and meson
run: pip3 install --user mako meson
- name: Configure
run: |
cat > native_config <<EOL
[binaries]
llvm-config = '/usr/local/opt/llvm/bin/llvm-config'
EOL
$MESON_EXEC . build --native-file=native_config -Dmoltenvk-dir=$(brew --prefix molten-vk) -Dbuild-tests=true -Dosmesa=true -Dgallium-drivers=swrast,zink -Dglx=${{ matrix.glx_option }}
- name: Build
run: $MESON_EXEC compile -C build
- name: Test
run: $MESON_EXEC test -C build --print-errorlogs
- name: Install
run: $MESON_EXEC install -C build --destdir $PWD/install
- name: 'Upload Artifact'
if: always()
uses: actions/upload-artifact@v3
with:
name: macos-${{ matrix.glx_option }}-result
path: |
build/meson-logs/
install/
retention-days: 5

.gitignore

@@ -1,6 +1,4 @@
.vscode*
*.pyc *.pyc
*.pyo *.pyo
*.out *.out
/build build
.venv/



@@ -1,82 +0,0 @@
# Note: skips lists for CI are just a list of lines that, when
# non-zero-length and not starting with '#', will regex match to
# delete lines from the test list. Be careful.
# This test checks the driver's reported conformance version against the
# version of the CTS we're running. This check fails every few months
# and everyone has to go and bump the number in every driver.
# Running this check only makes sense while preparing a conformance
# submission, so skip it in the regular CI.
dEQP-VK.api.driver_properties.conformance_version
# Exclude this test which might fail when a new extension is implemented.
dEQP-VK.info.device_extensions
# These are tremendously slow (pushing toward a minute), and aren't
# reliable to be run in parallel with other tests due to CPU-side timing.
dEQP-GLES[0-9]*.functional.flush_finish.*
# piglit: WGL is Windows-only
wgl@.*
# These are sensitive to CPU timing, and would need to be run in isolation
# on the system rather than in parallel with other tests.
glx@glx_arb_sync_control@timing.*
# This test is not built with waffle, while we do build tests with waffle
spec@!opengl 1.1@windowoverlap
# These tests all read from the front buffer after a swap. Given that we
# run piglit tests in parallel in Mesa CI, and don't have a compositor
# running, the frontbuffer reads may end up with undefined results from
# windows overlapping us.
#
# Piglit does mark these tests as not to be run in parallel, but deqp-runner
# doesn't respect that. We need to extend deqp-runner to allow some tests to be
# marked as single-threaded and run after the rayon loop if we want to support
# them.
#
# Note that "glx-" tests don't appear in x11-skips.txt because they can be
# run even if PIGLIT_PLATFORM=gbm (for example)
glx@glx-copy-sub-buffer.*
# A majority of the tests introduced in CTS 1.3.7.0 are experiencing failures and flakes.
# Disable these tests until someone with a more deeper understanding of EGL examines them.
#
# Note: on sc8280xp/a690 I get identical results (same passes and fails)
# between freedreno, zink, and llvmpipe, so I believe this is either a
# deqp bug or egl/wayland bug, rather than driver issue.
#
# With llvmpipe, the failing tests have the error message:
#
# "Illegal sampler view creation without bind flag"
#
# which might be a hint. (But some passing tests also have the same
# error message.)
#
# more context from David Heidelberg on IRC: the deqp commit where these
# started failing is: https://github.com/KhronosGroup/VK-GL-CTS/commit/79b25659bcbced0cfc2c3fe318951c585f682abe
# prior to that they were skipping.
wayland-dEQP-EGL.functional.color_clears.single_context.gles1.other
wayland-dEQP-EGL.functional.color_clears.single_context.gles2.other
wayland-dEQP-EGL.functional.color_clears.single_context.gles3.other
wayland-dEQP-EGL.functional.color_clears.multi_context.gles1.other
wayland-dEQP-EGL.functional.color_clears.multi_context.gles2.other
wayland-dEQP-EGL.functional.color_clears.multi_context.gles3.other
wayland-dEQP-EGL.functional.color_clears.multi_context.gles1_gles2.other
wayland-dEQP-EGL.functional.color_clears.multi_context.gles1_gles2_gles3.other
wayland-dEQP-EGL.functional.color_clears.multi_thread.gles1.other
wayland-dEQP-EGL.functional.color_clears.multi_thread.gles2.other
wayland-dEQP-EGL.functional.color_clears.multi_thread.gles3.other
wayland-dEQP-EGL.functional.color_clears.multi_thread.gles1_gles2.other
wayland-dEQP-EGL.functional.color_clears.multi_thread.gles1_gles2_gles3.other
# Seems to be the same is as wayland-dEQP-EGL.functional.color_clears.*
wayland-dEQP-EGL.functional.render.single_context.gles2.other
wayland-dEQP-EGL.functional.render.single_context.gles3.other
wayland-dEQP-EGL.functional.render.multi_context.gles2.other
wayland-dEQP-EGL.functional.render.multi_context.gles3.other
wayland-dEQP-EGL.functional.render.multi_context.gles2_gles3.other
wayland-dEQP-EGL.functional.render.multi_thread.gles2.other
wayland-dEQP-EGL.functional.render.multi_thread.gles3.other
wayland-dEQP-EGL.functional.render.multi_thread.gles2_gles3.other


@@ -1,67 +0,0 @@
version: 1
# Rules to match for a machine to qualify
target:
id: '{{ ci_runner_id }}'
timeouts:
first_console_activity: # This limits the time it can take to receive the first console log
minutes: {{ timeout_first_minutes }}
retries: {{ timeout_first_retries }}
console_activity: # Reset every time we receive a message from the logs
minutes: {{ timeout_minutes }}
retries: {{ timeout_retries }}
boot_cycle:
minutes: {{ timeout_boot_minutes }}
retries: {{ timeout_boot_retries }}
overall: # Maximum time the job can take, not overrideable by the "continue" deployment
minutes: {{ timeout_overall_minutes }}
retries: 0
# no retries possible here
console_patterns:
session_end:
regex: >-
{{ session_end_regex }}
{% if session_reboot_regex %}
session_reboot:
regex: >-
{{ session_reboot_regex }}
{% endif %}
job_success:
regex: >-
{{ job_success_regex }}
job_warn:
regex: >-
{{ job_warn_regex }}
# Environment to deploy
deployment:
# Initial boot
start:
kernel:
url: '{{ kernel_url }}'
cmdline: >
SALAD.machine_id={{ '{{' }} machine_id }}
console={{ '{{' }} local_tty_device }},115200 earlyprintk=vga,keep
loglevel={{ log_level }} no_hash_pointers
b2c.service="--privileged --tls-verify=false --pid=host docker://{{ '{{' }} fdo_proxy_registry }}/gfx-ci/ci-tron/telegraf:latest" b2c.hostname=dut-{{ '{{' }} machine.full_name }}
b2c.container="-ti --tls-verify=false docker://{{ '{{' }} fdo_proxy_registry }}/gfx-ci/ci-tron/machine-registration:latest check"
b2c.ntp_peer=10.42.0.1 b2c.pipefail b2c.cache_device=auto b2c.poweroff_delay={{ poweroff_delay }}
b2c.minio="gateway,{{ '{{' }} minio_url }},{{ '{{' }} job_bucket_access_key }},{{ '{{' }} job_bucket_secret_key }}"
b2c.volume="{{ '{{' }} job_bucket }}-results,mirror=gateway/{{ '{{' }} job_bucket }},pull_on=pipeline_start,push_on=changes,overwrite{% for excl in job_volume_exclusions %},exclude={{ excl }}{% endfor %},remove,expiration=pipeline_end,preserve"
{% for volume in volumes %}
b2c.volume={{ volume }}
{% endfor %}
b2c.container="-v {{ '{{' }} job_bucket }}-results:{{ working_dir }} -w {{ working_dir }} {% for mount_volume in mount_volumes %} -v {{ mount_volume }}{% endfor %} --tls-verify=false docker://{{ local_container }} {{ container_cmd }}"
{% if kernel_cmdline_extras is defined %}
{{ kernel_cmdline_extras }}
{% endif %}
initramfs:
url: '{{ initramfs_url }}'
{% if dtb_url is defined %}
dtb:
url: '{{ dtb_url }}'
{% endif %}


@@ -1,55 +0,0 @@
#!/usr/bin/env python3
# Copyright © 2022 Valve Corporation
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
from jinja2 import Environment, FileSystemLoader
from os import environ, path
# Pass all the environment variables prefixed by B2C_
values = {
    key.removeprefix("B2C_").lower(): environ[key]
    for key in environ if key.startswith("B2C_")
}
env = Environment(loader=FileSystemLoader(path.dirname(values['job_template'])),
                  trim_blocks=True, lstrip_blocks=True)
template = env.get_template(path.basename(values['job_template']))
values['ci_job_id'] = environ['CI_JOB_ID']
values['ci_runner_id'] = environ['CI_RUNNER_ID']
values['job_volume_exclusions'] = [excl for excl in values['job_volume_exclusions'].split(",") if excl]
values['working_dir'] = environ['CI_PROJECT_DIR']
# Use the gateway's pull-through registry caches to reduce load on fd.o.
values['local_container'] = environ['IMAGE_UNDER_TEST']
values['local_container'] = values['local_container'].replace(
    'registry.freedesktop.org',
    '{{ fdo_proxy_registry }}'
)
if 'kernel_cmdline_extras' not in values:
    values['kernel_cmdline_extras'] = ''
with open(path.splitext(path.basename(values['job_template']))[0], "w") as f:
    f.write(template.render(values))


@@ -0,0 +1,26 @@
#!/bin/sh
# This test script groups together a bunch of fast dEQP variant runs
# to amortize the cost of rebooting the board.
set -ex
EXIT=0
# Run reset tests without parallelism:
if ! env \
DEQP_RESULTS_DIR=results/reset \
DEQP_PARALLEL=1 \
DEQP_CASELIST_FILTER='.*reset.*' \
/install/deqp-runner.sh; then
EXIT=1
fi
# Then run everything else with parallelism:
if ! env \
DEQP_RESULTS_DIR=results/nonrobustness \
DEQP_CASELIST_INV_FILTER='.*reset.*' \
/install/deqp-runner.sh; then
EXIT=1
fi


@@ -1,17 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must supply the PoE Interface to power down"
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must supply the PoE Switch host"
exit 1
fi
SNMP_KEY="1.3.6.1.4.1.9.9.402.1.2.1.1.1.$BM_POE_INTERFACE"
SNMP_OFF="i 4"
snmpset -v2c -r 3 -t 30 -cmesaci "$BM_POE_ADDRESS" "$SNMP_KEY" $SNMP_OFF


@@ -1,22 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must supply the PoE Interface to power up"
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must supply the PoE Switch host"
exit 1
fi
set -ex
SNMP_KEY="1.3.6.1.4.1.9.9.402.1.2.1.1.1.$BM_POE_INTERFACE"
SNMP_ON="i 1"
SNMP_OFF="i 4"
snmpset -v2c -r 3 -t 10 -cmesaci "$BM_POE_ADDRESS" "$SNMP_KEY" $SNMP_OFF
sleep 3s
snmpset -v2c -r 3 -t 10 -cmesaci "$BM_POE_ADDRESS" "$SNMP_KEY" $SNMP_ON


@@ -1,7 +1,4 @@
#!/bin/bash #!/bin/bash
# shellcheck disable=SC1091 # The relative paths in this file only become valid at runtime.
# shellcheck disable=SC2034
# shellcheck disable=SC2086 # we want word splitting
# Boot script for Chrome OS devices attached to a servo debug connector, using # Boot script for Chrome OS devices attached to a servo debug connector, using
# NFS and TFTP to boot. # NFS and TFTP to boot.
@@ -9,7 +6,6 @@
# We're run from the root of the repo, make a helper var for our paths # We're run from the root of the repo, make a helper var for our paths
BM=$CI_PROJECT_DIR/install/bare-metal BM=$CI_PROJECT_DIR/install/bare-metal
CI_COMMON=$CI_PROJECT_DIR/install/common CI_COMMON=$CI_PROJECT_DIR/install/common
CI_INSTALL=$CI_PROJECT_DIR/install
# Runner config checks # Runner config checks
if [ -z "$BM_SERIAL" ]; then if [ -z "$BM_SERIAL" ]; then
@@ -84,41 +80,22 @@ mkdir -p /nfs/results
rm -rf /tftp/* rm -rf /tftp/*
if echo "$BM_KERNEL" | grep -q http; then if echo "$BM_KERNEL" | grep -q http; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \ apt install -y wget
$BM_KERNEL -o /tftp/vmlinuz wget $BM_KERNEL -O /tftp/vmlinuz
elif [ -n "${FORCE_KERNEL_TAG}" ]; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${BM_KERNEL}" -o /tftp/vmlinuz
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/modules.tar.zst" -o modules.tar.zst
tar --keep-directory-symlink --zstd -xf modules.tar.zst -C "/nfs/"
rm modules.tar.zst &
else else
cp /baremetal-files/"$BM_KERNEL" /tftp/vmlinuz cp $BM_KERNEL /tftp/vmlinuz
fi fi
echo "$BM_CMDLINE" > /tftp/cmdline echo "$BM_CMDLINE" > /tftp/cmdline
set +e set +e
STRUCTURED_LOG_FILE=job_detail.json
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update dut_job_type "${DEVICE_TYPE}"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update farm "${FARM}"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --create-dut-job dut_name "${CI_RUNNER_DESCRIPTION}"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update-dut-time submit "${CI_JOB_STARTED_AT}"
python3 $BM/cros_servo_run.py \ python3 $BM/cros_servo_run.py \
--cpu $BM_SERIAL \ --cpu $BM_SERIAL \
--ec $BM_SERIAL_EC \ --ec $BM_SERIAL_EC
--test-timeout ${TEST_PHASE_TIMEOUT:-20}
ret=$? ret=$?
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close-dut-job
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close
set -e set -e
# Bring artifacts back from the NFS dir to the build dir where gitlab-runner # Bring artifacts back from the NFS dir to the build dir where gitlab-runner
# will look for them. # will look for them.
cp -Rp /nfs/results/. results/ cp -Rp /nfs/results/. results/
if [ -f "${STRUCTURED_LOG_FILE}" ]; then
cp -p ${STRUCTURED_LOG_FILE} results/
echo "Structured log file is available at https://${CI_PROJECT_ROOT_NAMESPACE}.pages.freedesktop.org/-/${CI_PROJECT_NAME}/-/jobs/${CI_JOB_ID}/artifacts/results/${STRUCTURED_LOG_FILE}"
fi
exit $ret exit $ret


@@ -1,30 +1,76 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
# #
# Copyright © 2020 Google LLC # Copyright © 2020 Google LLC
# SPDX-License-Identifier: MIT #
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import argparse import argparse
import queue
import re import re
import sys
from custom_logger import CustomLogger
from serial_buffer import SerialBuffer from serial_buffer import SerialBuffer
import sys
import threading
class CrosServoRun: class CrosServoRun:
def __init__(self, cpu, ec, test_timeout, logger): def __init__(self, cpu, ec):
# Merged FIFO for the two serial buffers, fed by threads.
self.serial_queue = queue.Queue()
self.sentinel = object()
self.threads_done = 0
self.ec_ser = SerialBuffer(
ec, "results/serial-ec.txt", "R SERIAL-EC> ")
self.cpu_ser = SerialBuffer( self.cpu_ser = SerialBuffer(
cpu, "results/serial.txt", "R SERIAL-CPU> ") cpu, "results/serial.txt", "R SERIAL-CPU> ")
# Merge the EC serial into the cpu_ser's line stream so that we can
# effectively poll on both at the same time and not have to worry about
self.ec_ser = SerialBuffer(
ec, "results/serial-ec.txt", "R SERIAL-EC> ", line_queue=self.cpu_ser.line_queue)
self.test_timeout = test_timeout
self.logger = logger
def close(self): self.iter_feed_ec = threading.Thread(
self.ec_ser.close() target=self.iter_feed_queue, daemon=True, args=(self.ec_ser.lines(),))
self.cpu_ser.close() self.iter_feed_ec.start()
self.iter_feed_cpu = threading.Thread(
target=self.iter_feed_queue, daemon=True, args=(self.cpu_ser.lines(),))
self.iter_feed_cpu.start()
# Feed lines from our serial queues into the merged queue, marking when our
# input is done.
def iter_feed_queue(self, it):
for i in it:
self.serial_queue.put(i)
self.serial_queue.put(self.sentinel)
# Return the next line from the queue, counting how many threads have
# terminated and joining when done
def get_serial_queue_line(self):
line = self.serial_queue.get()
if line == self.sentinel:
self.threads_done = self.threads_done + 1
if self.threads_done == 2:
self.iter_feed_cpu.join()
self.iter_feed_ec.join()
return line
# Returns an iterator for getting the next line.
def serial_queue_lines(self):
return iter(self.get_serial_queue_line, self.sentinel)
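# Editor's note (not part of the original file): iter(callable, sentinel) keeps
# calling get_serial_queue_line() until it returns self.sentinel, so each feeder
# thread pushes the sentinel exactly once when its serial stream ends and the
# merged iterator stops on the first sentinel it pops from the queue.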
def ec_write(self, s): def ec_write(self, s):
print("W SERIAL-EC> %s" % s) print("W SERIAL-EC> %s" % s)
@@ -38,71 +84,53 @@ class CrosServoRun:
RED = '\033[0;31m' RED = '\033[0;31m'
NO_COLOR = '\033[0m' NO_COLOR = '\033[0m'
print(RED + message + NO_COLOR) print(RED + message + NO_COLOR)
self.logger.update_status_fail(message)
def run(self): def run(self):
# Flush any partial commands in the EC's prompt, then ask for a reboot. # Flush any partial commands in the EC's prompt, then ask for a reboot.
self.ec_write("\n") self.ec_write("\n")
self.ec_write("reboot\n") self.ec_write("reboot\n")
bootloader_done = False
self.logger.create_job_phase("boot")
tftp_failures = 0
# This is emitted right when the bootloader pauses to check for input. # This is emitted right when the bootloader pauses to check for input.
# Emit a ^N character to request network boot, because we don't have a # Emit a ^N character to request network boot, because we don't have a
# direct-to-netboot firmware on cheza. # direct-to-netboot firmware on cheza.
for line in self.cpu_ser.lines(timeout=120, phase="bootloader"): for line in self.serial_queue_lines():
if re.search("load_archive: loading locale_en.bin", line): if re.search("load_archive: loading locale_en.bin", line):
self.cpu_write("\016") self.cpu_write("\016")
bootloader_done = True
break
# The Cheza firmware seems to occasionally get stuck looping in
# this error state during TFTP booting, possibly based on amount of
# network traffic around it, but it'll usually recover after a
# reboot. Currently mostly visible on google-freedreno-cheza-14.
if re.search("R8152: Bulk read error 0xffffffbf", line):
tftp_failures += 1
if tftp_failures >= 10:
self.print_error(
"Detected intermittent tftp failure, restarting run.")
return 1
# If the board has a netboot firmware and we made it to booting the
# kernel, proceed to processing of the test run.
if re.search("Booting Linux", line):
bootloader_done = True
break break
# The Cheza boards have issues with failing to bring up power to # The Cheza boards have issues with failing to bring up power to
# the system sometimes, possibly dependent on ambient temperature # the system sometimes, possibly dependent on ambient temperature
# in the farm. # in the farm.
if re.search("POWER_GOOD not seen in time", line): if re.search("POWER_GOOD not seen in time", line):
self.print_error( self.print_error("Detected intermittent poweron failure, restarting run...")
"Detected intermittent poweron failure, abandoning run.") return 2
return 1
if not bootloader_done: tftp_failures = 0
self.print_error("Failed to make it through bootloader, abandoning run.") for line in self.serial_queue_lines():
return 1
self.logger.create_job_phase("test")
for line in self.cpu_ser.lines(timeout=self.test_timeout, phase="test"):
if re.search("---. end Kernel panic", line): if re.search("---. end Kernel panic", line):
return 1 return 1
# The Cheza firmware seems to occasionally get stuck looping in
# this error state during TFTP booting, possibly based on amount of
# network traffic around it, but it'll usually recover after a
# reboot.
if re.search("R8152: Bulk read error 0xffffffbf", line):
tftp_failures += 1
if tftp_failures >= 100:
self.print_error("Detected intermittent tftp failure, restarting run...")
return 2
# There are very infrequent bus errors during power management transitions # There are very infrequent bus errors during power management transitions
# on cheza, which we don't expect to be the case on future boards. # on cheza, which we don't expect to be the case on future boards.
if re.search("Kernel panic - not syncing: Asynchronous SError Interrupt", line): if re.search("Kernel panic - not syncing: Asynchronous SError Interrupt", line):
self.print_error( self.print_error("Detected cheza power management bus error, restarting run...")
"Detected cheza power management bus error, abandoning run.") return 2
return 1
# If the network device dies, it's probably not graphics's fault, just try again. # If the network device dies, it's probably not graphics's fault, just try again.
if re.search("NETDEV WATCHDOG", line): if re.search("NETDEV WATCHDOG", line):
self.print_error( self.print_error(
"Detected network device failure, abandoning run.") "Detected network device failure, restarting run...")
return 1 return 2
# These HFI response errors started appearing with the introduction # These HFI response errors started appearing with the introduction
# of piglit runs. CosmicPenguin says: # of piglit runs. CosmicPenguin says:
@@ -114,30 +142,22 @@ class CrosServoRun:
# Given that it seems to trigger randomly near a GPU fault and then # Given that it seems to trigger randomly near a GPU fault and then
# break many tests after that, just restart the whole run. # break many tests after that, just restart the whole run.
if re.search("a6xx_hfi_send_msg.*Unexpected message id .* on the response queue", line): if re.search("a6xx_hfi_send_msg.*Unexpected message id .* on the response queue", line):
self.print_error( self.print_error("Detected cheza power management bus error, restarting run...")
"Detected cheza power management bus error, abandoning run.") return 2
return 1
if re.search("coreboot.*bootblock starting", line): if re.search("coreboot.*bootblock starting", line):
self.print_error( self.print_error(
"Detected spontaneous reboot, abandoning run.") "Detected spontaneous reboot, restarting run...")
return 1 return 2
if re.search("arm-smmu 5040000.iommu: TLB sync timed out -- SMMU may be deadlocked", line):
self.print_error("Detected cheza MMU fail, abandoning run.")
return 1
result = re.search("hwci: mesa: (\S*)", line) result = re.search("hwci: mesa: (\S*)", line)
if result: if result:
if result.group(1) == "pass": if result.group(1) == "pass":
self.logger.update_dut_job("status", "pass")
return 0 return 0
else: else:
self.logger.update_status_fail("test fail")
return 1 return 1
self.print_error( self.print_error("Reached the end of the CPU serial log without finding a result")
"Reached the end of the CPU serial log without finding a result")
return 1 return 1
@@ -147,19 +167,17 @@ def main():
help='CPU Serial device', required=True) help='CPU Serial device', required=True)
parser.add_argument( parser.add_argument(
'--ec', type=str, help='EC Serial device', required=True) '--ec', type=str, help='EC Serial device', required=True)
parser.add_argument(
'--test-timeout', type=int, help='Test phase timeout (minutes)', required=True)
args = parser.parse_args() args = parser.parse_args()
logger = CustomLogger("job_detail.json") servo = CrosServoRun(args.cpu, args.ec)
logger.update_dut_time("start", None)
servo = CrosServoRun(args.cpu, args.ec, args.test_timeout * 60, logger) while True:
retval = servo.run() retval = servo.run()
if retval != 2:
break
# power down the CPU on the device # power down the CPU on the device
servo.ec_write("power off\n") servo.ec_write("power off\n")
logger.update_dut_time("end", None)
servo.close()
sys.exit(retval) sys.exit(retval)


@@ -7,4 +7,4 @@ if [ -z "$relay" ]; then
exit 1 exit 1
fi fi
"$CI_PROJECT_DIR"/install/bare-metal/eth008-power-relay.py "$ETH_HOST" "$ETH_PORT" off "$relay" $CI_PROJECT_DIR/install/bare-metal/eth008-power-relay.py $ETH_HOST $ETH_PORT off $relay


@@ -7,6 +7,6 @@ if [ -z "$relay" ]; then
exit 1 exit 1
fi fi
"$CI_PROJECT_DIR"/install/bare-metal/eth008-power-relay.py "$ETH_HOST" "$ETH_PORT" off "$relay" $CI_PROJECT_DIR/install/bare-metal/eth008-power-relay.py $ETH_HOST $ETH_PORT off $relay
sleep 5 sleep 5
"$CI_PROJECT_DIR"/install/bare-metal/eth008-power-relay.py "$ETH_HOST" "$ETH_PORT" on "$relay" $CI_PROJECT_DIR/install/bare-metal/eth008-power-relay.py $ETH_HOST $ETH_PORT on $relay


@@ -5,27 +5,26 @@ set -e
STRINGS=$(mktemp) STRINGS=$(mktemp)
ERRORS=$(mktemp) ERRORS=$(mktemp)
trap 'rm $STRINGS; rm $ERRORS;' EXIT trap "rm $STRINGS; rm $ERRORS;" EXIT
FILE=$1 FILE=$1
shift 1 shift 1
while getopts "f:e:" opt; do while getopts "f:e:" opt; do
case $opt in case $opt in
f) echo "$OPTARG" >> "$STRINGS";; f) echo "$OPTARG" >> $STRINGS;;
e) echo "$OPTARG" >> "$STRINGS" ; echo "$OPTARG" >> "$ERRORS";; e) echo "$OPTARG" >> $STRINGS ; echo "$OPTARG" >> $ERRORS;;
*) exit
esac esac
done done
shift $((OPTIND -1)) shift $((OPTIND -1))
echo "Waiting for $FILE to say one of following strings" echo "Waiting for $FILE to say one of following strings"
cat "$STRINGS" cat $STRINGS
while ! grep -E -wf "$STRINGS" "$FILE"; do while ! egrep -wf $STRINGS $FILE; do
sleep 2 sleep 2
done done
if grep -E -wf "$ERRORS" "$FILE"; then if egrep -wf $ERRORS $FILE; then
exit 1 exit 1
fi fi


@@ -1,14 +1,9 @@
#!/bin/bash #!/bin/bash
# shellcheck disable=SC1091 # The relative paths in this file only become valid at runtime.
# shellcheck disable=SC2034
# shellcheck disable=SC2086 # we want word splitting
. "$SCRIPTS_DIR"/setup-test-env.sh
BM=$CI_PROJECT_DIR/install/bare-metal BM=$CI_PROJECT_DIR/install/bare-metal
CI_COMMON=$CI_PROJECT_DIR/install/common CI_COMMON=$CI_PROJECT_DIR/install/common
if [ -z "$BM_SERIAL" ] && [ -z "$BM_SERIAL_SCRIPT" ]; then if [ -z "$BM_SERIAL" -a -z "$BM_SERIAL_SCRIPT" ]; then
echo "Must set BM_SERIAL OR BM_SERIAL_SCRIPT in your gitlab-runner config.toml [[runners]] environment" echo "Must set BM_SERIAL OR BM_SERIAL_SCRIPT in your gitlab-runner config.toml [[runners]] environment"
echo "BM_SERIAL:" echo "BM_SERIAL:"
echo " This is the serial device to talk to for waiting for fastboot to be ready and logging from the kernel." echo " This is the serial device to talk to for waiting for fastboot to be ready and logging from the kernel."
@@ -87,57 +82,44 @@ else
fi fi
pushd rootfs pushd rootfs
find -H . | \ find -H | \
grep -E -v "external/(openglcts|vulkancts|amber|glslang|spirv-tools)" | egrep -v "external/(openglcts|vulkancts|amber|glslang|spirv-tools)" |
grep -E -v "traces-db|apitrace|renderdoc" | \ egrep -v "traces-db|apitrace|renderdoc" | \
grep -E -v $EXCLUDE_FILTER | \ egrep -v $EXCLUDE_FILTER | \
cpio -H newc -o | \ cpio -H newc -o | \
xz --check=crc32 -T4 - > $CI_PROJECT_DIR/rootfs.cpio.gz xz --check=crc32 -T4 - > $CI_PROJECT_DIR/rootfs.cpio.gz
popd popd
fi fi
# Make the combined kernel image and dtb for passing to fastboot. For normal
# Mesa development, we build the kernel and store it in the docker container
# that this script is running in.
#
# However, container builds are expensive, so when you're hacking on the
# kernel, it's nice to be able to skip the half hour container build and plus
# moving that container to the runner. So, if BM_KERNEL+BM_DTB are URLs,
# fetch them instead of looking in the container.
if echo "$BM_KERNEL $BM_DTB" | grep -q http; then if echo "$BM_KERNEL $BM_DTB" | grep -q http; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \ apt install -y wget
"$BM_KERNEL" -o kernel
# FIXME: modules should be supplied too wget $BM_KERNEL -O kernel
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \ wget $BM_DTB -O dtb
"$BM_DTB" -o dtb
cat kernel dtb > Image.gz-dtb cat kernel dtb > Image.gz-dtb
rm kernel dtb
elif [ -n "${FORCE_KERNEL_TAG}" ]; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${BM_KERNEL}" -o kernel
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/modules.tar.zst" -o modules.tar.zst
if [ -n "$BM_DTB" ]; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${BM_DTB}.dtb" -o dtb
fi
cat kernel dtb > Image.gz-dtb || echo "No DTB available, using pure kernel."
rm kernel
tar --keep-directory-symlink --zstd -xf modules.tar.zst -C "$BM_ROOTFS/"
rm modules.tar.zst &
else else
cat /baremetal-files/"$BM_KERNEL" /baremetal-files/"$BM_DTB".dtb > Image.gz-dtb cat $BM_KERNEL $BM_DTB > Image.gz-dtb
cp /baremetal-files/"$BM_DTB".dtb dtb
fi fi
export PATH=$BM:$PATH
mkdir -p artifacts mkdir -p artifacts
mkbootimg.py \ abootimg \
--kernel Image.gz-dtb \ --create artifacts/fastboot.img \
--ramdisk rootfs.cpio.gz \ -k Image.gz-dtb \
--dtb dtb \ -r rootfs.cpio.gz \
--cmdline "$BM_CMDLINE" \ -c cmdline="$BM_CMDLINE"
$BM_MKBOOT_PARAMS \ rm Image.gz-dtb
--header_version 2 \
-o artifacts/fastboot.img
rm Image.gz-dtb dtb export PATH=$BM:$PATH
# Start background command for talking to serial if we have one. # Start background command for talking to serial if we have one.
if [ -n "$BM_SERIAL_SCRIPT" ]; then if [ -n "$BM_SERIAL_SCRIPT" ]; then
@@ -151,7 +133,6 @@ fi
set +e set +e
$BM/fastboot_run.py \ $BM/fastboot_run.py \
--dev="$BM_SERIAL" \ --dev="$BM_SERIAL" \
--test-timeout ${TEST_PHASE_TIMEOUT:-20} \
--fbserial="$BM_FASTBOOT_SERIAL" \ --fbserial="$BM_FASTBOOT_SERIAL" \
--powerup="$BM_POWERUP" \ --powerup="$BM_POWERUP" \
--powerdown="$BM_POWERDOWN" --powerdown="$BM_POWERDOWN"


@@ -22,102 +22,72 @@
# IN THE SOFTWARE. # IN THE SOFTWARE.
import argparse import argparse
import subprocess import os
import re import re
from serial_buffer import SerialBuffer from serial_buffer import SerialBuffer
import sys import sys
import threading import threading
class FastbootRun: class FastbootRun:
def __init__(self, args, test_timeout): def __init__(self, args):
self.powerup = args.powerup self.powerup = args.powerup
self.ser = SerialBuffer( # We would like something like a 1 minute timeout, but the piglit traces
args.dev, "results/serial-output.txt", "R SERIAL> ") # jobs stall out for long periods of time.
self.fastboot = "fastboot boot -s {ser} artifacts/fastboot.img".format( self.ser = SerialBuffer(args.dev, "results/serial-output.txt", "R SERIAL> ", timeout=600)
ser=args.fbserial) self.fastboot="fastboot boot -s {ser} artifacts/fastboot.img".format(ser=args.fbserial)
self.test_timeout = test_timeout
def close(self):
self.ser.close()
def print_error(self, message): def print_error(self, message):
RED = '\033[0;31m' RED = '\033[0;31m'
NO_COLOR = '\033[0m' NO_COLOR = '\033[0m'
print(RED + message + NO_COLOR) print(RED + message + NO_COLOR)
def logged_system(self, cmd, timeout=60): def logged_system(self, cmd):
print("Running '{}'".format(cmd)) print("Running '{}'".format(cmd))
try: return os.system(cmd)
return subprocess.call(cmd, shell=True, timeout=timeout)
except subprocess.TimeoutExpired:
self.print_error("timeout, abandoning run.")
return 1
def run(self): def run(self):
if ret := self.logged_system(self.powerup): if self.logged_system(self.powerup) != 0:
return ret return 1
fastboot_ready = False fastboot_ready = False
for line in self.ser.lines(timeout=2 * 60, phase="bootloader"): for line in self.ser.lines():
if re.search("[Ff]astboot: [Pp]rocessing commands", line) or \ if re.search("fastboot: processing commands", line) or \
re.search("Listening for fastboot command on", line): re.search("Listening for fastboot command on", line):
fastboot_ready = True fastboot_ready = True
break break
if re.search("data abort", line): if re.search("data abort", line):
self.print_error( self.print_error("Detected crash during boot, restarting run...")
"Detected crash during boot, abandoning run.") return 2
return 1
if not fastboot_ready: if not fastboot_ready:
self.print_error( self.print_error("Failed to get to fastboot prompt, restarting run...")
"Failed to get to fastboot prompt, abandoning run.") return 2
if self.logged_system(self.fastboot) != 0:
return 1 return 1
if ret := self.logged_system(self.fastboot): for line in self.ser.lines():
return ret
print_more_lines = -1
for line in self.ser.lines(timeout=self.test_timeout, phase="test"):
if print_more_lines == 0:
return 1
if print_more_lines > 0:
print_more_lines -= 1
if re.search("---. end Kernel panic", line): if re.search("---. end Kernel panic", line):
return 1 return 1
# The db820c boards intermittently reboot. Just restart the run # The db820c boards intermittently reboot. Just restart the run
# when we see a reboot after we got past fastboot. # when we see a reboot after we got past fastboot.
if re.search("PON REASON", line): if re.search("PON REASON", line):
self.print_error( self.print_error("Detected spontaneous reboot, restarting run...")
"Detected spontaneous reboot, abandoning run.") return 2
return 1
# db820c sometimes wedges around iommu fault recovery # db820c sometimes wedges around iommu fault recovery
if re.search("watchdog: BUG: soft lockup - CPU.* stuck", line): if re.search("watchdog: BUG: soft lockup - CPU.* stuck", line):
self.print_error( self.print_error(
"Detected kernel soft lockup, abandoning run.") "Detected kernel soft lockup, restarting run...")
return 1 return 2
# If the network device dies, it's probably not graphics's fault, just try again. # If the network device dies, it's probably not graphics's fault, just try again.
if re.search("NETDEV WATCHDOG", line): if re.search("NETDEV WATCHDOG", line):
self.print_error( self.print_error(
"Detected network device failure, abandoning run.") "Detected network device failure, restarting run...")
return 1 return 2
# A3xx recovery doesn't quite work. Sometimes the GPU will get
# wedged and recovery will fail (because power can't be reset?)
# This assumes that the jobs are sufficiently well-tested that GPU
# hangs aren't always triggered, so just try again. But print some
# more lines first so that we get better information on the cause
# of the hang. Once a hang happens, it's pretty chatty.
if "[drm:adreno_recover] *ERROR* gpu hw init failed: -22" in line:
self.print_error(
"Detected GPU hang, abandoning run.")
if print_more_lines == -1:
print_more_lines = 30
result = re.search("hwci: mesa: (\S*)", line) result = re.search("hwci: mesa: (\S*)", line)
if result: if result:
@@ -126,34 +96,29 @@ class FastbootRun:
else: else:
return 1 return 1
self.print_error( self.print_error("Reached the end of the CPU serial log without finding a result, restarting run...")
"Reached the end of the CPU serial log without finding a result, abandoning run.") return 2
return 1
def main(): def main():
parser = argparse.ArgumentParser() parser = argparse.ArgumentParser()
parser.add_argument( parser.add_argument('--dev', type=str, help='Serial device (otherwise reading from serial-output.txt)')
'--dev', type=str, help='Serial device (otherwise reading from serial-output.txt)') parser.add_argument('--powerup', type=str, help='shell command for rebooting', required=True)
parser.add_argument('--powerup', type=str, parser.add_argument('--powerdown', type=str, help='shell command for powering off', required=True)
help='shell command for rebooting', required=True) parser.add_argument('--fbserial', type=str, help='fastboot serial number of the board', required=True)
parser.add_argument('--powerdown', type=str,
help='shell command for powering off', required=True)
parser.add_argument('--fbserial', type=str,
help='fastboot serial number of the board', required=True)
parser.add_argument('--test-timeout', type=int,
help='Test phase timeout (minutes)', required=True)
args = parser.parse_args() args = parser.parse_args()
fastboot = FastbootRun(args, args.test_timeout * 60) fastboot = FastbootRun(args)
retval = fastboot.run() while True:
fastboot.close() retval = fastboot.run()
if retval != 2:
break
fastboot = FastbootRun(args)
fastboot.logged_system(args.powerdown) fastboot.logged_system(args.powerdown)
sys.exit(retval) sys.exit(retval)
if __name__ == '__main__': if __name__ == '__main__':
main() main()


@@ -7,4 +7,4 @@ if [ -z "$relay" ]; then
exit 1 exit 1
fi fi
"$CI_PROJECT_DIR"/install/bare-metal/google-power-relay.py off "$relay" $CI_PROJECT_DIR/install/bare-metal/google-power-relay.py off $relay


@@ -8,8 +8,8 @@ relay = sys.argv[2]
# our relays are "off" means "board is powered". # our relays are "off" means "board is powered".
mode_swap = { mode_swap = {
"on": "off", "on" : "off",
"off": "on", "off" : "on",
} }
mode = mode_swap[mode] mode = mode_swap[mode]


@@ -7,6 +7,6 @@ if [ -z "$relay" ]; then
exit 1 exit 1
fi fi
"$CI_PROJECT_DIR"/install/bare-metal/google-power-relay.py off "$relay" $CI_PROJECT_DIR/install/bare-metal/google-power-relay.py off $relay
sleep 5 sleep 5
"$CI_PROJECT_DIR"/install/bare-metal/google-power-relay.py on "$relay" $CI_PROJECT_DIR/install/bare-metal/google-power-relay.py on $relay


@@ -1,569 +0,0 @@
#!/usr/bin/env python3
#
# Copyright 2015, The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Creates the boot image."""
from argparse import (ArgumentParser, ArgumentTypeError,
FileType, RawDescriptionHelpFormatter)
from hashlib import sha1
from os import fstat
from struct import pack
import array
import collections
import os
import re
import subprocess
import tempfile
# Constant and structure definition is in
# system/tools/mkbootimg/include/bootimg/bootimg.h
BOOT_MAGIC = 'ANDROID!'
BOOT_MAGIC_SIZE = 8
BOOT_NAME_SIZE = 16
BOOT_ARGS_SIZE = 512
BOOT_EXTRA_ARGS_SIZE = 1024
BOOT_IMAGE_HEADER_V1_SIZE = 1648
BOOT_IMAGE_HEADER_V2_SIZE = 1660
BOOT_IMAGE_HEADER_V3_SIZE = 1580
BOOT_IMAGE_HEADER_V3_PAGESIZE = 4096
BOOT_IMAGE_HEADER_V4_SIZE = 1584
BOOT_IMAGE_V4_SIGNATURE_SIZE = 4096
VENDOR_BOOT_MAGIC = 'VNDRBOOT'
VENDOR_BOOT_MAGIC_SIZE = 8
VENDOR_BOOT_NAME_SIZE = BOOT_NAME_SIZE
VENDOR_BOOT_ARGS_SIZE = 2048
VENDOR_BOOT_IMAGE_HEADER_V3_SIZE = 2112
VENDOR_BOOT_IMAGE_HEADER_V4_SIZE = 2128
VENDOR_RAMDISK_TYPE_NONE = 0
VENDOR_RAMDISK_TYPE_PLATFORM = 1
VENDOR_RAMDISK_TYPE_RECOVERY = 2
VENDOR_RAMDISK_TYPE_DLKM = 3
VENDOR_RAMDISK_NAME_SIZE = 32
VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE = 16
VENDOR_RAMDISK_TABLE_ENTRY_V4_SIZE = 108
# Names with special meaning, mustn't be specified in --ramdisk_name.
VENDOR_RAMDISK_NAME_BLOCKLIST = {b'default'}
PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT = '--vendor_ramdisk_fragment'
def filesize(f):
if f is None:
return 0
try:
return fstat(f.fileno()).st_size
except OSError:
return 0
def update_sha(sha, f):
if f:
sha.update(f.read())
f.seek(0)
sha.update(pack('I', filesize(f)))
else:
sha.update(pack('I', 0))
def pad_file(f, padding):
pad = (padding - (f.tell() & (padding - 1))) & (padding - 1)
f.write(pack(str(pad) + 'x'))
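# Editor's note (illustrative, not part of the original file): pad_file assumes
# padding is a power of two, so (padding - 1) works as a bit mask. For example,
# at file offset 5000 with 4096-byte pages:
#     (4096 - (5000 & 4095)) & 4095 == 3192, advancing the file to offset 8192.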
def get_number_of_pages(image_size, page_size):
"""calculates the number of pages required for the image"""
return (image_size + page_size - 1) // page_size
def get_recovery_dtbo_offset(args):
"""calculates the offset of recovery_dtbo image in the boot image"""
num_header_pages = 1 # header occupies a page
num_kernel_pages = get_number_of_pages(filesize(args.kernel), args.pagesize)
num_ramdisk_pages = get_number_of_pages(filesize(args.ramdisk),
args.pagesize)
num_second_pages = get_number_of_pages(filesize(args.second), args.pagesize)
dtbo_offset = args.pagesize * (num_header_pages + num_kernel_pages +
num_ramdisk_pages + num_second_pages)
return dtbo_offset
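# Editor's note (illustrative, not part of the original file): with the default
# 2048-byte page size, a 10000-byte kernel (5 pages), a 3000-byte ramdisk
# (2 pages) and no second bootloader give
#     dtbo_offset = 2048 * (1 + 5 + 2 + 0) = 16384.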
def write_header_v3_and_above(args):
if args.header_version > 3:
boot_header_size = BOOT_IMAGE_HEADER_V4_SIZE
else:
boot_header_size = BOOT_IMAGE_HEADER_V3_SIZE
args.output.write(pack(f'{BOOT_MAGIC_SIZE}s', BOOT_MAGIC.encode()))
# kernel size in bytes
args.output.write(pack('I', filesize(args.kernel)))
# ramdisk size in bytes
args.output.write(pack('I', filesize(args.ramdisk)))
# os version and patch level
args.output.write(pack('I', (args.os_version << 11) | args.os_patch_level))
args.output.write(pack('I', boot_header_size))
# reserved
args.output.write(pack('4I', 0, 0, 0, 0))
# version of boot image header
args.output.write(pack('I', args.header_version))
args.output.write(pack(f'{BOOT_ARGS_SIZE + BOOT_EXTRA_ARGS_SIZE}s',
args.cmdline))
if args.header_version >= 4:
# The signature used to verify boot image v4.
args.output.write(pack('I', BOOT_IMAGE_V4_SIGNATURE_SIZE))
pad_file(args.output, BOOT_IMAGE_HEADER_V3_PAGESIZE)
def write_vendor_boot_header(args):
if filesize(args.dtb) == 0:
raise ValueError('DTB image must not be empty.')
if args.header_version > 3:
vendor_ramdisk_size = args.vendor_ramdisk_total_size
vendor_boot_header_size = VENDOR_BOOT_IMAGE_HEADER_V4_SIZE
else:
vendor_ramdisk_size = filesize(args.vendor_ramdisk)
vendor_boot_header_size = VENDOR_BOOT_IMAGE_HEADER_V3_SIZE
args.vendor_boot.write(pack(f'{VENDOR_BOOT_MAGIC_SIZE}s',
VENDOR_BOOT_MAGIC.encode()))
# version of boot image header
args.vendor_boot.write(pack('I', args.header_version))
# flash page size
args.vendor_boot.write(pack('I', args.pagesize))
# kernel physical load address
args.vendor_boot.write(pack('I', args.base + args.kernel_offset))
# ramdisk physical load address
args.vendor_boot.write(pack('I', args.base + args.ramdisk_offset))
# ramdisk size in bytes
args.vendor_boot.write(pack('I', vendor_ramdisk_size))
args.vendor_boot.write(pack(f'{VENDOR_BOOT_ARGS_SIZE}s',
args.vendor_cmdline))
# kernel tags physical load address
args.vendor_boot.write(pack('I', args.base + args.tags_offset))
# asciiz product name
args.vendor_boot.write(pack(f'{VENDOR_BOOT_NAME_SIZE}s', args.board))
# header size in bytes
args.vendor_boot.write(pack('I', vendor_boot_header_size))
# dtb size in bytes
args.vendor_boot.write(pack('I', filesize(args.dtb)))
# dtb physical load address
args.vendor_boot.write(pack('Q', args.base + args.dtb_offset))
if args.header_version > 3:
vendor_ramdisk_table_size = (args.vendor_ramdisk_table_entry_num *
VENDOR_RAMDISK_TABLE_ENTRY_V4_SIZE)
# vendor ramdisk table size in bytes
args.vendor_boot.write(pack('I', vendor_ramdisk_table_size))
# number of vendor ramdisk table entries
args.vendor_boot.write(pack('I', args.vendor_ramdisk_table_entry_num))
# vendor ramdisk table entry size in bytes
args.vendor_boot.write(pack('I', VENDOR_RAMDISK_TABLE_ENTRY_V4_SIZE))
# bootconfig section size in bytes
args.vendor_boot.write(pack('I', filesize(args.vendor_bootconfig)))
pad_file(args.vendor_boot, args.pagesize)
def write_header(args):
if args.header_version > 4:
raise ValueError(
f'Boot header version {args.header_version} not supported')
if args.header_version in {3, 4}:
return write_header_v3_and_above(args)
ramdisk_load_address = ((args.base + args.ramdisk_offset)
if filesize(args.ramdisk) > 0 else 0)
second_load_address = ((args.base + args.second_offset)
if filesize(args.second) > 0 else 0)
args.output.write(pack(f'{BOOT_MAGIC_SIZE}s', BOOT_MAGIC.encode()))
# kernel size in bytes
args.output.write(pack('I', filesize(args.kernel)))
# kernel physical load address
args.output.write(pack('I', args.base + args.kernel_offset))
# ramdisk size in bytes
args.output.write(pack('I', filesize(args.ramdisk)))
# ramdisk physical load address
args.output.write(pack('I', ramdisk_load_address))
# second bootloader size in bytes
args.output.write(pack('I', filesize(args.second)))
# second bootloader physical load address
args.output.write(pack('I', second_load_address))
# kernel tags physical load address
args.output.write(pack('I', args.base + args.tags_offset))
# flash page size
args.output.write(pack('I', args.pagesize))
# version of boot image header
args.output.write(pack('I', args.header_version))
# os version and patch level
args.output.write(pack('I', (args.os_version << 11) | args.os_patch_level))
# asciiz product name
args.output.write(pack(f'{BOOT_NAME_SIZE}s', args.board))
args.output.write(pack(f'{BOOT_ARGS_SIZE}s', args.cmdline))
sha = sha1()
update_sha(sha, args.kernel)
update_sha(sha, args.ramdisk)
update_sha(sha, args.second)
if args.header_version > 0:
update_sha(sha, args.recovery_dtbo)
if args.header_version > 1:
update_sha(sha, args.dtb)
img_id = pack('32s', sha.digest())
args.output.write(img_id)
args.output.write(pack(f'{BOOT_EXTRA_ARGS_SIZE}s', args.extra_cmdline))
if args.header_version > 0:
if args.recovery_dtbo:
# recovery dtbo size in bytes
args.output.write(pack('I', filesize(args.recovery_dtbo)))
# recovery dtbo offset in the boot image
args.output.write(pack('Q', get_recovery_dtbo_offset(args)))
else:
# Set to zero if no recovery dtbo
args.output.write(pack('I', 0))
args.output.write(pack('Q', 0))
# Populate boot image header size for header versions 1 and 2.
if args.header_version == 1:
args.output.write(pack('I', BOOT_IMAGE_HEADER_V1_SIZE))
elif args.header_version == 2:
args.output.write(pack('I', BOOT_IMAGE_HEADER_V2_SIZE))
if args.header_version > 1:
if filesize(args.dtb) == 0:
raise ValueError('DTB image must not be empty.')
# dtb size in bytes
args.output.write(pack('I', filesize(args.dtb)))
# dtb physical load address
args.output.write(pack('Q', args.base + args.dtb_offset))
pad_file(args.output, args.pagesize)
return img_id
class AsciizBytes:
"""Parses a string and encodes it as an asciiz bytes object.
>>> AsciizBytes(bufsize=4)('foo')
b'foo\\x00'
>>> AsciizBytes(bufsize=4)('foob')
Traceback (most recent call last):
...
argparse.ArgumentTypeError: Encoded asciiz length exceeded: max 4, got 5
"""
def __init__(self, bufsize):
self.bufsize = bufsize
def __call__(self, arg):
arg_bytes = arg.encode() + b'\x00'
if len(arg_bytes) > self.bufsize:
raise ArgumentTypeError(
'Encoded asciiz length exceeded: '
f'max {self.bufsize}, got {len(arg_bytes)}')
return arg_bytes
class VendorRamdiskTableBuilder:
"""Vendor ramdisk table builder.
Attributes:
entries: A list of VendorRamdiskTableEntry namedtuple.
ramdisk_total_size: Total size in bytes of all ramdisks in the table.
"""
VendorRamdiskTableEntry = collections.namedtuple( # pylint: disable=invalid-name
'VendorRamdiskTableEntry',
['ramdisk_path', 'ramdisk_size', 'ramdisk_offset', 'ramdisk_type',
'ramdisk_name', 'board_id'])
def __init__(self):
self.entries = []
self.ramdisk_total_size = 0
self.ramdisk_names = set()
def add_entry(self, ramdisk_path, ramdisk_type, ramdisk_name, board_id):
# Strip any trailing null for simple comparison.
stripped_ramdisk_name = ramdisk_name.rstrip(b'\x00')
if stripped_ramdisk_name in VENDOR_RAMDISK_NAME_BLOCKLIST:
raise ValueError(
f'Banned vendor ramdisk name: {stripped_ramdisk_name}')
if stripped_ramdisk_name in self.ramdisk_names:
raise ValueError(
f'Duplicated vendor ramdisk name: {stripped_ramdisk_name}')
self.ramdisk_names.add(stripped_ramdisk_name)
if board_id is None:
board_id = array.array(
'I', [0] * VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE)
else:
board_id = array.array('I', board_id)
if len(board_id) != VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE:
raise ValueError('board_id size must be '
f'{VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE}')
with open(ramdisk_path, 'rb') as f:
ramdisk_size = filesize(f)
self.entries.append(self.VendorRamdiskTableEntry(
ramdisk_path, ramdisk_size, self.ramdisk_total_size, ramdisk_type,
ramdisk_name, board_id))
self.ramdisk_total_size += ramdisk_size
def write_ramdisks_padded(self, fout, alignment):
for entry in self.entries:
with open(entry.ramdisk_path, 'rb') as f:
fout.write(f.read())
pad_file(fout, alignment)
def write_entries_padded(self, fout, alignment):
for entry in self.entries:
fout.write(pack('I', entry.ramdisk_size))
fout.write(pack('I', entry.ramdisk_offset))
fout.write(pack('I', entry.ramdisk_type))
fout.write(pack(f'{VENDOR_RAMDISK_NAME_SIZE}s',
entry.ramdisk_name))
fout.write(entry.board_id)
pad_file(fout, alignment)
def write_padded_file(f_out, f_in, padding):
if f_in is None:
return
f_out.write(f_in.read())
pad_file(f_out, padding)
def parse_int(x):
return int(x, 0)
def parse_os_version(x):
match = re.search(r'^(\d{1,3})(?:\.(\d{1,3})(?:\.(\d{1,3}))?)?', x)
if match:
a = int(match.group(1))
b = c = 0
if match.lastindex >= 2:
b = int(match.group(2))
if match.lastindex == 3:
c = int(match.group(3))
# 7 bits allocated for each field
assert a < 128
assert b < 128
assert c < 128
return (a << 14) | (b << 7) | c
return 0
def parse_os_patch_level(x):
match = re.search(r'^(\d{4})-(\d{2})(?:-(\d{2}))?', x)
if match:
y = int(match.group(1)) - 2000
m = int(match.group(2))
# 7 bits allocated for the year, 4 bits for the month
assert 0 <= y < 128
assert 0 < m <= 12
return (y << 4) | m
return 0
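# Editor's note (illustrative, not part of the original file): the two parsers
# above bit-pack their fields, e.g.
#     parse_os_version('12.0.0')      == (12 << 14) | (0 << 7) | 0 == 196608
#     parse_os_patch_level('2021-10') == ((2021 - 2000) << 4) | 10 == 346
# write_header() later stores them together as (os_version << 11) | os_patch_level.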
def parse_vendor_ramdisk_type(x):
type_dict = {
'none': VENDOR_RAMDISK_TYPE_NONE,
'platform': VENDOR_RAMDISK_TYPE_PLATFORM,
'recovery': VENDOR_RAMDISK_TYPE_RECOVERY,
'dlkm': VENDOR_RAMDISK_TYPE_DLKM,
}
if x.lower() in type_dict:
return type_dict[x.lower()]
return parse_int(x)
def get_vendor_boot_v4_usage():
return """vendor boot version 4 arguments:
--ramdisk_type {none,platform,recovery,dlkm}
specify the type of the ramdisk
--ramdisk_name NAME
specify the name of the ramdisk
--board_id{0..15} NUMBER
specify the value of the board_id vector, defaults to 0
--vendor_ramdisk_fragment VENDOR_RAMDISK_FILE
path to the vendor ramdisk file
These options can be specified multiple times, where each vendor ramdisk
option group ends with a --vendor_ramdisk_fragment option.
Each option group appends an additional ramdisk to the vendor boot image.
"""
def parse_vendor_ramdisk_args(args, args_list):
"""Parses vendor ramdisk specific arguments.
Args:
args: An argparse.Namespace object. Parsed results are stored into this
object.
args_list: A list of argument strings to be parsed.
Returns:
A list argument strings that are not parsed by this method.
"""
parser = ArgumentParser(add_help=False)
parser.add_argument('--ramdisk_type', type=parse_vendor_ramdisk_type,
default=VENDOR_RAMDISK_TYPE_NONE)
parser.add_argument('--ramdisk_name',
type=AsciizBytes(bufsize=VENDOR_RAMDISK_NAME_SIZE),
required=True)
for i in range(VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE):
parser.add_argument(f'--board_id{i}', type=parse_int, default=0)
parser.add_argument(PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT, required=True)
unknown_args = []
vendor_ramdisk_table_builder = VendorRamdiskTableBuilder()
if args.vendor_ramdisk is not None:
vendor_ramdisk_table_builder.add_entry(
args.vendor_ramdisk.name, VENDOR_RAMDISK_TYPE_PLATFORM, b'', None)
while PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT in args_list:
idx = args_list.index(PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT) + 2
vendor_ramdisk_args = args_list[:idx]
args_list = args_list[idx:]
ramdisk_args, extra_args = parser.parse_known_args(vendor_ramdisk_args)
ramdisk_args_dict = vars(ramdisk_args)
unknown_args.extend(extra_args)
ramdisk_path = ramdisk_args.vendor_ramdisk_fragment
ramdisk_type = ramdisk_args.ramdisk_type
ramdisk_name = ramdisk_args.ramdisk_name
board_id = [ramdisk_args_dict[f'board_id{i}']
for i in range(VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE)]
vendor_ramdisk_table_builder.add_entry(ramdisk_path, ramdisk_type,
ramdisk_name, board_id)
if len(args_list) > 0:
unknown_args.extend(args_list)
args.vendor_ramdisk_total_size = (vendor_ramdisk_table_builder
.ramdisk_total_size)
args.vendor_ramdisk_table_entry_num = len(vendor_ramdisk_table_builder
.entries)
args.vendor_ramdisk_table_builder = vendor_ramdisk_table_builder
return unknown_args
def parse_cmdline():
version_parser = ArgumentParser(add_help=False)
version_parser.add_argument('--header_version', type=parse_int, default=0)
if version_parser.parse_known_args()[0].header_version < 3:
# For boot header v0 to v2, the kernel commandline field is split into
# two fields, cmdline and extra_cmdline. Both fields are asciiz strings,
# so we subtract one here to ensure the encoded string plus the
# null-terminator can fit in the buffer size.
cmdline_size = BOOT_ARGS_SIZE + BOOT_EXTRA_ARGS_SIZE - 1
else:
cmdline_size = BOOT_ARGS_SIZE + BOOT_EXTRA_ARGS_SIZE
parser = ArgumentParser(formatter_class=RawDescriptionHelpFormatter,
epilog=get_vendor_boot_v4_usage())
parser.add_argument('--kernel', type=FileType('rb'),
help='path to the kernel')
parser.add_argument('--ramdisk', type=FileType('rb'),
help='path to the ramdisk')
parser.add_argument('--second', type=FileType('rb'),
help='path to the second bootloader')
parser.add_argument('--dtb', type=FileType('rb'), help='path to the dtb')
dtbo_group = parser.add_mutually_exclusive_group()
dtbo_group.add_argument('--recovery_dtbo', type=FileType('rb'),
help='path to the recovery DTBO')
dtbo_group.add_argument('--recovery_acpio', type=FileType('rb'),
metavar='RECOVERY_ACPIO', dest='recovery_dtbo',
help='path to the recovery ACPIO')
parser.add_argument('--cmdline', type=AsciizBytes(bufsize=cmdline_size),
default='', help='kernel command line arguments')
parser.add_argument('--vendor_cmdline',
type=AsciizBytes(bufsize=VENDOR_BOOT_ARGS_SIZE),
default='',
help='vendor boot kernel command line arguments')
parser.add_argument('--base', type=parse_int, default=0x10000000,
help='base address')
parser.add_argument('--kernel_offset', type=parse_int, default=0x00008000,
help='kernel offset')
parser.add_argument('--ramdisk_offset', type=parse_int, default=0x01000000,
help='ramdisk offset')
parser.add_argument('--second_offset', type=parse_int, default=0x00f00000,
help='second bootloader offset')
parser.add_argument('--dtb_offset', type=parse_int, default=0x01f00000,
help='dtb offset')
parser.add_argument('--os_version', type=parse_os_version, default=0,
help='operating system version')
parser.add_argument('--os_patch_level', type=parse_os_patch_level,
default=0, help='operating system patch level')
parser.add_argument('--tags_offset', type=parse_int, default=0x00000100,
help='tags offset')
parser.add_argument('--board', type=AsciizBytes(bufsize=BOOT_NAME_SIZE),
default='', help='board name')
parser.add_argument('--pagesize', type=parse_int,
choices=[2**i for i in range(11, 15)], default=2048,
help='page size')
parser.add_argument('--id', action='store_true',
help='print the image ID on standard output')
parser.add_argument('--header_version', type=parse_int, default=0,
help='boot image header version')
parser.add_argument('-o', '--output', type=FileType('wb'),
help='output file name')
parser.add_argument('--gki_signing_algorithm',
help='GKI signing algorithm to use')
parser.add_argument('--gki_signing_key',
help='path to RSA private key file')
parser.add_argument('--gki_signing_signature_args',
help='other hash arguments passed to avbtool')
parser.add_argument('--gki_signing_avbtool_path',
help='path to avbtool for boot signature generation')
parser.add_argument('--vendor_boot', type=FileType('wb'),
help='vendor boot output file name')
parser.add_argument('--vendor_ramdisk', type=FileType('rb'),
help='path to the vendor ramdisk')
parser.add_argument('--vendor_bootconfig', type=FileType('rb'),
help='path to the vendor bootconfig file')
args, extra_args = parser.parse_known_args()
if args.vendor_boot is not None and args.header_version > 3:
extra_args = parse_vendor_ramdisk_args(args, extra_args)
if len(extra_args) > 0:
raise ValueError(f'Unrecognized arguments: {extra_args}')
if args.header_version < 3:
args.extra_cmdline = args.cmdline[BOOT_ARGS_SIZE-1:]
args.cmdline = args.cmdline[:BOOT_ARGS_SIZE-1] + b'\x00'
assert len(args.cmdline) <= BOOT_ARGS_SIZE
assert len(args.extra_cmdline) <= BOOT_EXTRA_ARGS_SIZE
return args
def add_boot_image_signature(args, pagesize):
"""Adds the boot image signature.
Note that the signature will only be verified in VTS to ensure a
generic boot.img is used. It will not be used by the device
bootloader at boot time. The bootloader should only verify
the boot vbmeta at the end of the boot partition (or in the top-level
vbmeta partition) via the Android Verified Boot process, when the
device boots.
"""
args.output.flush() # Flush the buffer for signature calculation.
# Appends zeros if the signing key is not specified.
if not args.gki_signing_key or not args.gki_signing_algorithm:
zeros = b'\x00' * BOOT_IMAGE_V4_SIGNATURE_SIZE
args.output.write(zeros)
pad_file(args.output, pagesize)
return
avbtool = 'avbtool' # Used from otatools.zip or Android build env.
# We need to specify the path of avbtool in build/core/Makefile.
# Because avbtool is not guaranteed to be in $PATH there.
if args.gki_signing_avbtool_path:
avbtool = args.gki_signing_avbtool_path
# Need to specify a value of --partition_size for avbtool to work.
# We use 64 MB below, but avbtool will not resize the boot image to
# this size because --do_not_append_vbmeta_image is also specified.
avbtool_cmd = [
avbtool, 'add_hash_footer',
'--partition_name', 'boot',
'--partition_size', str(64 * 1024 * 1024),
'--image', args.output.name,
'--algorithm', args.gki_signing_algorithm,
'--key', args.gki_signing_key,
'--salt', 'd00df00d'] # TODO: use a hash of kernel/ramdisk as the salt.
# Additional arguments passed to avbtool.
if args.gki_signing_signature_args:
avbtool_cmd += args.gki_signing_signature_args.split()
# Outputs the signed vbmeta to a separate file, then append to boot.img
# as the boot signature.
with tempfile.TemporaryDirectory() as temp_out_dir:
boot_signature_output = os.path.join(temp_out_dir, 'boot_signature')
avbtool_cmd += ['--do_not_append_vbmeta_image',
'--output_vbmeta_image', boot_signature_output]
subprocess.check_call(avbtool_cmd)
with open(boot_signature_output, 'rb') as boot_signature:
if filesize(boot_signature) > BOOT_IMAGE_V4_SIGNATURE_SIZE:
raise ValueError(
f'boot signature size is > {BOOT_IMAGE_V4_SIGNATURE_SIZE}')
write_padded_file(args.output, boot_signature, pagesize)
def write_data(args, pagesize):
write_padded_file(args.output, args.kernel, pagesize)
write_padded_file(args.output, args.ramdisk, pagesize)
write_padded_file(args.output, args.second, pagesize)
if args.header_version > 0 and args.header_version < 3:
write_padded_file(args.output, args.recovery_dtbo, pagesize)
if args.header_version == 2:
write_padded_file(args.output, args.dtb, pagesize)
if args.header_version >= 4:
add_boot_image_signature(args, pagesize)
def write_vendor_boot_data(args):
if args.header_version > 3:
builder = args.vendor_ramdisk_table_builder
builder.write_ramdisks_padded(args.vendor_boot, args.pagesize)
write_padded_file(args.vendor_boot, args.dtb, args.pagesize)
builder.write_entries_padded(args.vendor_boot, args.pagesize)
write_padded_file(args.vendor_boot, args.vendor_bootconfig,
args.pagesize)
else:
write_padded_file(args.vendor_boot, args.vendor_ramdisk, args.pagesize)
write_padded_file(args.vendor_boot, args.dtb, args.pagesize)
def main():
args = parse_cmdline()
if args.vendor_boot is not None:
if args.header_version not in {3, 4}:
raise ValueError(
'--vendor_boot not compatible with given header version')
if args.header_version == 3 and args.vendor_ramdisk is None:
raise ValueError('--vendor_ramdisk missing or invalid')
write_vendor_boot_header(args)
write_vendor_boot_data(args)
if args.output is not None:
if args.second is not None and args.header_version > 2:
raise ValueError(
'--second not compatible with given header version')
img_id = write_header(args)
if args.header_version > 2:
write_data(args, BOOT_IMAGE_HEADER_V3_PAGESIZE)
else:
write_data(args, args.pagesize)
if args.id and img_id is not None:
print('0x' + ''.join(f'{octet:02x}' for octet in img_id))
if __name__ == '__main__':
main()


@@ -10,7 +10,8 @@ if [ -z "$BM_POE_ADDRESS" ]; then
exit 1 exit 1
fi fi
SNMP_KEY="SNMPv2-SMI::mib-2.105.1.1.1.3.1.$((${BM_POE_BASE:-0} + BM_POE_INTERFACE))" SNMP_KEY="SNMPv2-SMI::mib-2.105.1.1.1.3.1.`expr 48 + $BM_POE_INTERFACE`"
SNMP_ON="i 1"
SNMP_OFF="i 2" SNMP_OFF="i 2"
flock /var/run/poe.lock -c "snmpset -v2c -r 3 -t 30 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_OFF" flock /var/run/poe.lock -c "snmpset -v2c -r 3 -t 30 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_OFF"


@@ -10,7 +10,7 @@ if [ -z "$BM_POE_ADDRESS" ]; then
exit 1 exit 1
fi fi
SNMP_KEY="SNMPv2-SMI::mib-2.105.1.1.1.3.1.$((${BM_POE_BASE:-0} + BM_POE_INTERFACE))" SNMP_KEY="SNMPv2-SMI::mib-2.105.1.1.1.3.1.`expr 48 + $BM_POE_INTERFACE`"
SNMP_ON="i 1" SNMP_ON="i 1"
SNMP_OFF="i 2" SNMP_OFF="i 2"


@@ -1,10 +1,4 @@
#!/bin/bash #!/bin/bash
# shellcheck disable=SC1091
# shellcheck disable=SC2034
# shellcheck disable=SC2059
# shellcheck disable=SC2086 # we want word splitting
. "$SCRIPTS_DIR"/setup-test-env.sh
# Boot script for devices attached to a PoE switch, using NFS for the root # Boot script for devices attached to a PoE switch, using NFS for the root
# filesystem. # filesystem.
@@ -12,7 +6,6 @@
# We're run from the root of the repo, make a helper var for our paths # We're run from the root of the repo, make a helper var for our paths
BM=$CI_PROJECT_DIR/install/bare-metal BM=$CI_PROJECT_DIR/install/bare-metal
CI_COMMON=$CI_PROJECT_DIR/install/common CI_COMMON=$CI_PROJECT_DIR/install/common
CI_INSTALL=$CI_PROJECT_DIR/install
# Runner config checks # Runner config checks
if [ -z "$BM_SERIAL" ]; then if [ -z "$BM_SERIAL" ]; then
@@ -27,6 +20,18 @@ if [ -z "$BM_POE_ADDRESS" ]; then
exit 1 exit 1
fi fi
if [ -z "$BM_POE_USERNAME" ]; then
echo "Must set BM_POE_USERNAME in your gitlab-runner config.toml [[runners]] environment"
echo "This is the PoE switch username."
exit 1
fi
if [ -z "$BM_POE_PASSWORD" ]; then
echo "Must set BM_POE_PASSWORD in your gitlab-runner config.toml [[runners]] environment"
echo "This is the PoE switch password."
exit 1
fi
if [ -z "$BM_POE_INTERFACE" ]; then if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must set BM_POE_INTERFACE in your gitlab-runner config.toml [[runners]] environment" echo "Must set BM_POE_INTERFACE in your gitlab-runner config.toml [[runners]] environment"
echo "This is the PoE switch interface where the device is connected." echo "This is the PoE switch interface where the device is connected."
@@ -61,8 +66,8 @@ if [ -z "$BM_ROOTFS" ]; then
exit 1 exit 1
fi fi
if [ -z "$BM_BOOTFS" ] && { [ -z "$BM_KERNEL" ] || [ -z "$BM_DTB" ]; } ; then if [ -z "$BM_BOOTFS" ]; then
echo "Must set /boot files for the TFTP boot in the job's variables or set kernel and dtb" echo "Must set /boot files for the TFTP boot in the job's variables"
exit 1 exit 1
fi fi
@@ -71,9 +76,12 @@ if [ -z "$BM_CMDLINE" ]; then
exit 1 exit 1
fi fi
set -ex if [ -z "$BM_BOOTCONFIG" ]; then
echo "Must set BM_BOOTCONFIG to your board's required boot configuration arguments"
exit 1
fi
date +'%F %T' set -ex
# Clear out any previous run's artifacts. # Clear out any previous run's artifacts.
rm -rf results/ rm -rf results/
@@ -83,147 +91,57 @@ mkdir -p results
# state, since it's volume-mounted on the host. # state, since it's volume-mounted on the host.
rsync -a --delete $BM_ROOTFS/ /nfs/ rsync -a --delete $BM_ROOTFS/ /nfs/
date +'%F %T'
# If BM_BOOTFS is an URL, download it # If BM_BOOTFS is an URL, download it
if echo $BM_BOOTFS | grep -q http; then if echo $BM_BOOTFS | grep -q http; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \ apt install -y wget
"${FDO_HTTP_CACHE_URI:-}$BM_BOOTFS" -o /tmp/bootfs.tar wget ${FDO_HTTP_CACHE_URI:-}$BM_BOOTFS -O /tmp/bootfs.tar
BM_BOOTFS=/tmp/bootfs.tar BM_BOOTFS=/tmp/bootfs.tar
fi fi
date +'%F %T'
# If BM_BOOTFS is a file, assume it is a tarball and uncompress it # If BM_BOOTFS is a file, assume it is a tarball and uncompress it
if [ -f "${BM_BOOTFS}" ]; then if [ -f $BM_BOOTFS ]; then
mkdir -p /tmp/bootfs mkdir -p /tmp/bootfs
tar xf $BM_BOOTFS -C /tmp/bootfs tar xf $BM_BOOTFS -C /tmp/bootfs
BM_BOOTFS=/tmp/bootfs BM_BOOTFS=/tmp/bootfs
fi fi
# If BM_KERNEL and BM_DTS is present
if [ -n "${FORCE_KERNEL_TAG}" ]; then
if [ -z "${BM_KERNEL}" ] || [ -z "${BM_DTB}" ]; then
echo "This machine cannot be tested with external kernel since BM_KERNEL or BM_DTB missing!"
exit 1
fi
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${BM_KERNEL}" -o "${BM_KERNEL}"
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${BM_DTB}.dtb" -o "${BM_DTB}.dtb"
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/modules.tar.zst" -o modules.tar.zst
fi
date +'%F %T'
# Install kernel modules (it could be either in /lib/modules or # Install kernel modules (it could be either in /lib/modules or
# /usr/lib/modules, but we want to install in the latter) # /usr/lib/modules, but we want to install in the latter)
if [ -n "${FORCE_KERNEL_TAG}" ]; then [ -d $BM_BOOTFS/usr/lib/modules ] && rsync -a --delete $BM_BOOTFS/usr/lib/modules/ /nfs/usr/lib/modules/
tar --keep-directory-symlink --zstd -xf modules.tar.zst -C /nfs/ [ -d $BM_BOOTFS/lib/modules ] && rsync -a --delete $BM_BOOTFS/lib/modules/ /nfs/usr/lib/modules/
rm modules.tar.zst &
elif [ -n "${BM_BOOTFS}" ]; then
[ -d $BM_BOOTFS/usr/lib/modules ] && rsync -a $BM_BOOTFS/usr/lib/modules/ /nfs/usr/lib/modules/
[ -d $BM_BOOTFS/lib/modules ] && rsync -a $BM_BOOTFS/lib/modules/ /nfs/lib/modules/
else
echo "No modules!"
fi
date +'%F %T'
# Install kernel image + bootloader files # Install kernel image + bootloader files
if [ -n "${FORCE_KERNEL_TAG}" ] || [ -z "$BM_BOOTFS" ]; then rsync -a --delete $BM_BOOTFS/boot/ /tftp/
mv "${BM_KERNEL}" "${BM_DTB}.dtb" /tftp/
else # BM_BOOTFS
rsync -aL --delete $BM_BOOTFS/boot/ /tftp/
fi
date +'%F %T'
# Set up the pxelinux config for Jetson Nano
mkdir -p /tftp/pxelinux.cfg
cat <<EOF >/tftp/pxelinux.cfg/default-arm-tegra210-p3450-0000
PROMPT 0
TIMEOUT 30
DEFAULT primary
MENU TITLE jetson nano boot options
LABEL primary
MENU LABEL CI kernel on TFTP
LINUX Image
FDT tegra210-p3450-0000.dtb
APPEND \${cbootargs} $BM_CMDLINE
EOF
# Set up the pxelinux config for Jetson TK1
cat <<EOF >/tftp/pxelinux.cfg/default-arm-tegra124-jetson-tk1
PROMPT 0
TIMEOUT 30
DEFAULT primary
MENU TITLE jetson TK1 boot options
LABEL primary
MENU LABEL CI kernel on TFTP
LINUX zImage
FDT tegra124-jetson-tk1.dtb
APPEND \${cbootargs} $BM_CMDLINE
EOF
# Create the rootfs in the NFS directory # Create the rootfs in the NFS directory
mkdir -p /nfs/results
. $BM/rootfs-setup.sh /nfs . $BM/rootfs-setup.sh /nfs
date +'%F %T'
echo "$BM_CMDLINE" > /tftp/cmdline.txt echo "$BM_CMDLINE" > /tftp/cmdline.txt
# Add some options in config.txt, if defined # Add some required options in config.txt
if [ -n "$BM_BOOTCONFIG" ]; then printf "$BM_BOOTCONFIG" >> /tftp/config.txt
printf "$BM_BOOTCONFIG" >> /tftp/config.txt
fi
set +e set +e
STRUCTURED_LOG_FILE=job_detail.json ATTEMPTS=2
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update dut_job_type "${DEVICE_TYPE}"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update farm "${FARM}"
ATTEMPTS=3
first_attempt=True
while [ $((ATTEMPTS--)) -gt 0 ]; do while [ $((ATTEMPTS--)) -gt 0 ]; do
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --create-dut-job dut_name "${CI_RUNNER_DESCRIPTION}"
# Update subtime time to CI_JOB_STARTED_AT only for the first run
if [ "$first_attempt" = "True" ]; then
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update-dut-time submit "${CI_JOB_STARTED_AT}"
else
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update-dut-time submit
fi
python3 $BM/poe_run.py \ python3 $BM/poe_run.py \
--dev="$BM_SERIAL" \ --dev="$BM_SERIAL" \
--powerup="$BM_POWERUP" \ --powerup="$BM_POWERUP" \
--powerdown="$BM_POWERDOWN" \ --powerdown="$BM_POWERDOWN" \
--test-timeout ${TEST_PHASE_TIMEOUT:-20} --timeout="${BM_POE_TIMEOUT:-60}"
ret=$? ret=$?
if [ $ret -eq 2 ]; then if [ $ret -eq 2 ]; then
echo "Did not detect boot sequence, retrying..." echo "Did not detect boot sequence, retrying..."
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close-dut-job
first_attempt=False
else else
ATTEMPTS=0 ATTEMPTS=0
fi fi
done done
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close-dut-job
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close
set -e set -e
date +'%F %T'
# Bring artifacts back from the NFS dir to the build dir where gitlab-runner # Bring artifacts back from the NFS dir to the build dir where gitlab-runner
# will look for them. # will look for them.
cp -Rp /nfs/results/. results/ cp -Rp /nfs/results/. results/
if [ -f "${STRUCTURED_LOG_FILE}" ]; then
cp -p ${STRUCTURED_LOG_FILE} results/
echo "Structured log file is available at ${ARTIFACTS_BASE_URL}/results/${STRUCTURED_LOG_FILE}"
fi
date +'%F %T'
exit $ret exit $ret
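Both versions of the retry loop above encode the same contract with poe_run.py: an exit status of 2 means the boot sequence was never detected on the serial console, which is the only case worth retrying; any other status ends the attempts and becomes the job result. A minimal sketch of that contract (the arguments shown are illustrative):

ATTEMPTS=2
while [ $((ATTEMPTS--)) -gt 0 ]; do
    python3 poe_run.py --dev="$BM_SERIAL" --powerup="$BM_POWERUP" --powerdown="$BM_POWERDOWN"
    ret=$?
    [ $ret -eq 2 ] || break   # 2 = boot never detected, so retry; anything else is final
done
exit $ret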


@@ -24,26 +24,20 @@
import argparse import argparse
import os import os
import re import re
from serial_buffer import SerialBuffer
import sys import sys
import threading import threading
from custom_logger import CustomLogger
from serial_buffer import SerialBuffer
class PoERun: class PoERun:
def __init__(self, args, test_timeout, logger): def __init__(self, args):
self.powerup = args.powerup self.powerup = args.powerup
self.powerdown = args.powerdown self.powerdown = args.powerdown
self.ser = SerialBuffer( self.ser = SerialBuffer(args.dev, "results/serial-output.txt", "", args.timeout)
args.dev, "results/serial-output.txt", "")
self.test_timeout = test_timeout
self.logger = logger
def print_error(self, message): def print_error(self, message):
RED = '\033[0;31m' RED = '\033[0;31m'
NO_COLOR = '\033[0m' NO_COLOR = '\033[0m'
print(RED + message + NO_COLOR) print(RED + message + NO_COLOR)
self.logger.update_status_fail(message)
def logged_system(self, cmd): def logged_system(self, cmd):
print("Running '{}'".format(cmd)) print("Running '{}'".format(cmd))
@@ -51,25 +45,20 @@ class PoERun:
def run(self): def run(self):
if self.logged_system(self.powerup) != 0: if self.logged_system(self.powerup) != 0:
self.logger.update_status_fail("powerup failed")
return 1 return 1
boot_detected = False boot_detected = False
self.logger.create_job_phase("boot") for line in self.ser.lines():
for line in self.ser.lines(timeout=5 * 60, phase="bootloader"):
if re.search("Booting Linux", line): if re.search("Booting Linux", line):
boot_detected = True boot_detected = True
break break
if not boot_detected: if not boot_detected:
self.print_error( self.print_error("Something wrong; couldn't detect the boot start up sequence")
"Something wrong; couldn't detect the boot start up sequence")
return 2 return 2
self.logger.create_job_phase("test") for line in self.ser.lines():
for line in self.ser.lines(timeout=self.test_timeout, phase="test"):
if re.search("---. end Kernel panic", line): if re.search("---. end Kernel panic", line):
self.logger.update_status_fail("kernel panic")
return 1 return 1
# Binning memory problems # Binning memory problems
@@ -77,51 +66,31 @@ class PoERun:
self.print_error("Memory overflow in the binner; GPU hang") self.print_error("Memory overflow in the binner; GPU hang")
return 1 return 1
if re.search("nouveau 57000000.gpu: bus: MMIO read of 00000000 FAULT at 137000", line):
self.print_error("nouveau jetson boot bug, abandoning run.")
return 1
# network fail on tk1
if re.search("NETDEV WATCHDOG:.* transmit queue 0 timed out", line):
self.print_error("nouveau jetson tk1 network fail, abandoning run.")
return 1
result = re.search("hwci: mesa: (\S*)", line) result = re.search("hwci: mesa: (\S*)", line)
if result: if result:
if result.group(1) == "pass": if result.group(1) == "pass":
self.logger.update_dut_job("status", "pass")
return 0 return 0
else: else:
self.logger.update_status_fail("test fail")
return 1 return 1
self.print_error( self.print_error("Reached the end of the CPU serial log without finding a result")
"Reached the end of the CPU serial log without finding a result") return 2
return 1
def main(): def main():
parser = argparse.ArgumentParser() parser = argparse.ArgumentParser()
parser.add_argument('--dev', type=str, parser.add_argument('--dev', type=str, help='Serial device to monitor', required=True)
help='Serial device to monitor', required=True) parser.add_argument('--powerup', type=str, help='shell command for rebooting', required=True)
parser.add_argument('--powerup', type=str, parser.add_argument('--powerdown', type=str, help='shell command for powering off', required=True)
help='shell command for rebooting', required=True) parser.add_argument('--timeout', type=int, default=60,
parser.add_argument('--powerdown', type=str, help='time in seconds to wait for activity', required=False)
help='shell command for powering off', required=True)
parser.add_argument(
'--test-timeout', type=int, help='Test phase timeout (minutes)', required=True)
args = parser.parse_args() args = parser.parse_args()
logger = CustomLogger("job_detail.json") poe = PoERun(args)
logger.update_dut_time("start", None)
poe = PoERun(args, args.test_timeout * 60, logger)
retval = poe.run() retval = poe.run()
poe.logged_system(args.powerdown) poe.logged_system(args.powerdown)
logger.update_dut_time("end", None)
sys.exit(retval) sys.exit(retval)
if __name__ == '__main__': if __name__ == '__main__':
main() main()
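Both versions of run() ultimately key off a single marker line on the serial console: the device-side harness prints "hwci: mesa: pass" or "hwci: mesa: fail", and the hwci: mesa: (\S*) match above converts that into the script's exit status. A hedged sketch of the emitting side, mirroring what the bare-metal init scripts elsewhere in this compare do (the harness variables are illustrative):

if sh "$HWCI_TEST_SCRIPT"; then RESULT=pass; else RESULT=fail; fi
# Repeat the marker a few times: serial output can lose its last line when the
# board powers off right after the test finishes.
for _ in 1 2 3; do echo "hwci: mesa: $RESULT"; sleep 1; done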


@@ -1,5 +1,4 @@
#!/usr/bin/env bash #!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
rootfs_dst=$1 rootfs_dst=$1
@@ -9,29 +8,17 @@ mkdir -p $rootfs_dst/results
cp $BM/bm-init.sh $rootfs_dst/init cp $BM/bm-init.sh $rootfs_dst/init
cp $CI_COMMON/init*.sh $rootfs_dst/ cp $CI_COMMON/init*.sh $rootfs_dst/
date +'%F %T'
# Make JWT token available as file in the bare-metal storage to enable access
# to MinIO
cp "${S3_JWT_FILE}" "${rootfs_dst}${S3_JWT_FILE}"
date +'%F %T'
cp $CI_COMMON/capture-devcoredump.sh $rootfs_dst/ cp $CI_COMMON/capture-devcoredump.sh $rootfs_dst/
cp $CI_COMMON/intel-gpu-freq.sh $rootfs_dst/
cp $CI_COMMON/kdl.sh $rootfs_dst/
cp "$SCRIPTS_DIR/setup-test-env.sh" "$rootfs_dst/"
set +x set +x
# Pass through relevant env vars from the gitlab job to the baremetal init script # Pass through relevant env vars from the gitlab job to the baremetal init script
"$CI_COMMON"/generate-env.sh > $rootfs_dst/set-job-env-vars.sh
chmod +x $rootfs_dst/set-job-env-vars.sh
echo "Variables passed through:" echo "Variables passed through:"
"$CI_COMMON"/generate-env.sh | tee $rootfs_dst/set-job-env-vars.sh cat $rootfs_dst/set-job-env-vars.sh
echo "export CI_JOB_JWT=${CI_JOB_JWT@Q}" >> $rootfs_dst/set-job-env-vars.sh
set -x set -x
# Add the Mesa drivers we built, and make a consistent symlink to them. # Add the Mesa drivers we built, and make a consistent symlink to them.
mkdir -p $rootfs_dst/$CI_PROJECT_DIR mkdir -p $rootfs_dst/$CI_PROJECT_DIR
rsync -aH --delete $CI_PROJECT_DIR/install/ $rootfs_dst/$CI_PROJECT_DIR/install/ rsync -aH --delete $CI_PROJECT_DIR/install/ $rootfs_dst/$CI_PROJECT_DIR/install/
date +'%F %T'


@@ -30,29 +30,21 @@ import time
class SerialBuffer: class SerialBuffer:
def __init__(self, dev, filename, prefix, timeout=None, line_queue=None): def __init__(self, dev, filename, prefix, timeout = None):
self.filename = filename self.filename = filename
self.dev = dev self.dev = dev
if dev: if dev:
self.f = open(filename, "wb+") self.f = open(filename, "wb+")
self.serial = serial.Serial(dev, 115200, timeout=timeout) self.serial = serial.Serial(dev, 115200, timeout=timeout if timeout else 10)
else: else:
self.f = open(filename, "rb") self.f = open(filename, "rb")
self.serial = None
self.byte_queue = queue.Queue() self.byte_queue = queue.Queue()
# allow multiple SerialBuffers to share a line queue so you can merge self.line_queue = queue.Queue()
# servo's CPU and EC streams into one thing to watch the boot/test
# progress on.
if line_queue:
self.line_queue = line_queue
else:
self.line_queue = queue.Queue()
self.prefix = prefix self.prefix = prefix
self.timeout = timeout self.timeout = timeout
self.sentinel = object() self.sentinel = object()
self.closing = False
if self.dev: if self.dev:
self.read_thread = threading.Thread( self.read_thread = threading.Thread(
@@ -66,31 +58,24 @@ class SerialBuffer:
target=self.serial_lines_thread_loop, daemon=True) target=self.serial_lines_thread_loop, daemon=True)
self.lines_thread.start() self.lines_thread.start()
def close(self):
self.closing = True
if self.serial:
self.serial.cancel_read()
self.read_thread.join()
self.lines_thread.join()
if self.serial:
self.serial.close()
# Thread that just reads the bytes from the serial device to try to keep from # Thread that just reads the bytes from the serial device to try to keep from
# buffer overflowing it. If nothing is received in 1 minute, it finalizes. # buffer overflowing it. If nothing is received in 1 minute, it finalizes.
def serial_read_thread_loop(self): def serial_read_thread_loop(self):
greet = "Serial thread reading from %s\n" % self.dev greet = "Serial thread reading from %s\n" % self.dev
self.byte_queue.put(greet.encode()) self.byte_queue.put(greet.encode())
while not self.closing: while True:
try: try:
b = self.serial.read() b = self.serial.read()
if len(b) == 0: if len(b) > 0:
self.byte_queue.put(b)
elif self.timeout:
self.byte_queue.put(self.sentinel)
break break
self.byte_queue.put(b)
except Exception as err: except Exception as err:
print(self.prefix + str(err)) print(self.prefix + str(err))
self.byte_queue.put(self.sentinel)
break break
self.byte_queue.put(self.sentinel)
# Thread that just reads the bytes from the file of serial output that some # Thread that just reads the bytes from the file of serial output that some
# other process is appending to. # other process is appending to.
@@ -98,13 +83,12 @@ class SerialBuffer:
greet = "Serial thread reading from %s\n" % self.filename greet = "Serial thread reading from %s\n" % self.filename
self.byte_queue.put(greet.encode()) self.byte_queue.put(greet.encode())
while not self.closing: while True:
line = self.f.readline() line = self.f.readline()
if line: if line:
self.byte_queue.put(line) self.byte_queue.put(line)
else: else:
time.sleep(0.1) time.sleep(0.1)
self.byte_queue.put(self.sentinel)
# Thread that processes the stream of bytes to 1) log to stdout, 2) log to # Thread that processes the stream of bytes to 1) log to stdout, 2) log to
# file, 3) add to the queue of lines to be read by program logic # file, 3) add to the queue of lines to be read by program logic
@@ -137,30 +121,14 @@ class SerialBuffer:
self.line_queue.put(line) self.line_queue.put(line)
line = bytearray() line = bytearray()
def lines(self, timeout=None, phase=None): def get_line(self):
start_time = time.monotonic() line = self.line_queue.get()
while True: if line == self.sentinel:
read_timeout = None self.lines_thread.join()
if timeout: return line
read_timeout = timeout - (time.monotonic() - start_time)
if read_timeout <= 0:
print("read timeout waiting for serial during {}".format(phase))
self.close()
break
try: def lines(self):
line = self.line_queue.get(timeout=read_timeout) return iter(self.get_line, self.sentinel)
except queue.Empty:
print("read timeout waiting for serial during {}".format(phase))
self.close()
break
if line == self.sentinel:
print("End of serial output")
self.lines_thread.join()
break
yield line
def main(): def main():


@@ -28,8 +28,8 @@
import sys import sys
import telnetlib import telnetlib
host = sys.argv[1] host=sys.argv[1]
port = sys.argv[2] port=sys.argv[2]
tn = telnetlib.Telnet(host, port, 1000000) tn = telnetlib.Telnet(host, port, 1000000)


@@ -1 +0,0 @@
../bin/ci


@@ -1,7 +0,0 @@
#!/bin/sh
# shellcheck disable=SC1091
set -e
_COMPILER=clang++-15
. compiler-wrapper.sh


@@ -1,7 +0,0 @@
#!/bin/sh
# shellcheck disable=SC1091
set -e
_COMPILER=clang++
. compiler-wrapper.sh


@@ -1,7 +0,0 @@
#!/bin/sh
# shellcheck disable=SC1091
set -e
_COMPILER=clang-15
. compiler-wrapper.sh


@@ -1,7 +0,0 @@
#!/bin/sh
# shellcheck disable=SC1091
set -e
_COMPILER=clang
. compiler-wrapper.sh


@@ -1,7 +0,0 @@
#!/bin/sh
# shellcheck disable=SC1091
set -e
_COMPILER=g++
. compiler-wrapper.sh


@@ -1,7 +0,0 @@
#!/bin/sh
# shellcheck disable=SC1091
set -e
_COMPILER=gcc
. compiler-wrapper.sh


@@ -1,21 +0,0 @@
# shellcheck disable=SC1091
# shellcheck disable=SC2086 # we want word splitting
if command -V ccache >/dev/null 2>/dev/null; then
CCACHE=ccache
else
CCACHE=
fi
if echo "$@" | grep -E 'meson-private/tmp[^ /]*/testfile.c' >/dev/null; then
# Invoked for meson feature check
exec $CCACHE $_COMPILER "$@"
fi
if [ "$(eval printf "'%s'" "\"\${$(($#-1))}\"")" = "-c" ]; then
# Not invoked for linking
exec $CCACHE $_COMPILER "$@"
fi
# Compiler invoked by ninja for linking. Add -Werror to turn compiler warnings into errors
# with LTO. (meson's werror should arguably do this, but meanwhile we need to)
exec $CCACHE $_COMPILER "$@" -Werror
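The eval printf expression above is only an indirection trick: it prints the second-to-last positional parameter, which for a typical ninja compile step is -c (the source file comes last), while link invocations carry no trailing -c. An equivalent, more explicit sketch of the same decision (not the wrapper's actual code):

args=("$@")
if [ "${args[$((${#args[@]} - 2))]}" = "-c" ]; then
    exec $CCACHE $_COMPILER "$@"             # compile step: pass through unchanged
else
    exec $CCACHE $_COMPILER "$@" -Werror     # link step: promote warnings to errors
fi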


@@ -1,713 +0,0 @@
# Shared between windows and Linux
.build-common:
extends: .container+build-rules
# Cancel job if a newer commit is pushed to the same branch
interruptible: true
# Build jobs don't take more than 1-3 minutes. 5-8 min max on a fresh runner
# without a populated ccache.
# These jobs are never slow, either they finish within reasonable time or
# something has gone wrong and the job will never terminate, so we should
# instead timeout so that the retry mechanism can kick in.
# A few exception are made, see `timeout:` overrides in the rest of this
# file.
timeout: 30m
artifacts:
name: "mesa_${CI_JOB_NAME}"
when: always
paths:
- _build/meson-logs/*.txt
- _build/meson-logs/strace
- artifacts
# Just Linux
.build-linux:
extends: .build-common
variables:
CCACHE_COMPILERCHECK: "content"
CCACHE_COMPRESS: "true"
CCACHE_DIR: /cache/mesa/ccache
# Use ccache transparently, and print stats before/after
before_script:
- !reference [default, before_script]
- |
export PATH="/usr/lib/ccache:$PATH"
export CCACHE_BASEDIR="$PWD"
if test -x /usr/bin/ccache; then
section_start ccache_before "ccache stats before build"
ccache --show-stats
section_end ccache_before
fi
after_script:
- if test -x /usr/bin/ccache; then ccache --show-stats | grep "Hits:"; fi
- !reference [default, after_script]
.build-windows:
extends:
- .build-common
- .windows-docker-tags
cache:
key: ${CI_JOB_NAME}
paths:
- subprojects/packagecache
.meson-build:
extends:
- .build-linux
- .use-debian/x86_64_build
stage: build-x86_64
variables:
LLVM_VERSION: 15
script:
- .gitlab-ci/meson/build.sh
debian-testing:
extends:
- .meson-build
- .ci-deqp-artifacts
variables:
UNWIND: "enabled"
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D glvnd=disabled
-D platforms=x11,wayland
GALLIUM_ST: >
-D dri3=enabled
-D gallium-nine=true
-D gallium-va=enabled
-D gallium-rusticl=true
GALLIUM_DRIVERS: "swrast,virgl,radeonsi,zink,crocus,iris,i915,r300,svga"
VULKAN_DRIVERS: "swrast,amd,intel,intel_hasvk,virtio,nouveau"
BUILDTYPE: "debugoptimized"
EXTRA_OPTION: >
-D spirv-to-dxil=true
-D valgrind=disabled
-D perfetto=true
-D tools=drm-shim
S3_ARTIFACT_NAME: mesa-x86_64-default-${BUILDTYPE}
LLVM_VERSION: 15
script:
- .gitlab-ci/meson/build.sh
- .gitlab-ci/prepare-artifacts.sh
artifacts:
reports:
junit: artifacts/ci_scripts_report.xml
debian-testing-asan:
extends:
- debian-testing
variables:
C_ARGS: >
-Wno-error=stringop-truncation
EXTRA_OPTION: >
-D b_sanitize=address
-D valgrind=disabled
-D tools=dlclose-skip
-D intel-clc=system
S3_ARTIFACT_NAME: ""
ARTIFACTS_DEBUG_SYMBOLS: 1
# Do a host build for intel-clc (asan complains not being loaded
# as the first library)
HOST_BUILD_OPTIONS: >
-D build-tests=false
-D enable-glcpp-tests=false
-D gallium-opencl=disabled
-D gallium-drivers=
-D vulkan-drivers=
-D video-codecs=
-D glx=disabled
-D platforms=
-D intel-clc=enabled
-D install-intel-clc=true
debian-testing-msan:
# https://github.com/google/sanitizers/wiki/MemorySanitizerLibcxxHowTo
# msan cannot fully work until it's used together with msan libc
extends:
- debian-clang
variables:
# l_undef is incompatible with msan
EXTRA_OPTION:
-D b_sanitize=memory
-D b_lundef=false
-D intel-clc=system
S3_ARTIFACT_NAME: ""
ARTIFACTS_DEBUG_SYMBOLS: 1
# Don't run all the tests yet:
# GLSL has some issues in sexpression reading.
# gtest has issues in its test initialization.
MESON_TEST_ARGS: "--suite glcpp --suite format"
GALLIUM_DRIVERS: "freedreno,iris,nouveau,kmsro,r300,r600,swrast,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,radeonsi,tegra,d3d12,crocus"
VULKAN_DRIVERS: intel,amd,broadcom,virtio
# Do a host build for intel-clc (msan complains about
# uninitialized values in the LLVM libs)
HOST_BUILD_OPTIONS: >
-D build-tests=false
-D enable-glcpp-tests=false
-D gallium-opencl=disabled
-D gallium-drivers=
-D vulkan-drivers=
-D video-codecs=
-D glx=disabled
-D platforms=
-D intel-clc=enabled
-D install-intel-clc=true
debian-build-testing:
extends: .meson-build
variables:
BUILDTYPE: debug
UNWIND: "enabled"
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D glvnd=disabled
-D platforms=x11,wayland
GALLIUM_ST: >
-D dri3=enabled
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-omx=bellagio
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=true
-D gallium-rusticl=false
GALLIUM_DRIVERS: "iris,nouveau,kmsro,r300,r600,freedreno,swrast,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,d3d12,asahi,crocus"
VULKAN_DRIVERS: swrast
EXTRA_OPTION: >
-D spirv-to-dxil=true
-D osmesa=true
-D tools=drm-shim,etnaviv,freedreno,glsl,intel,intel-ui,nir,nouveau,lima,panfrost,asahi
-D b_lto=true
LLVM_VERSION: 15
S3_ARTIFACT_NAME: debian-build-testing
script: |
section_start lava-pytest "lava-pytest"
.gitlab-ci/lava/lava-pytest.sh
section_switch shellcheck "shellcheck"
.gitlab-ci/run-shellcheck.sh
section_switch yamllint "yamllint"
.gitlab-ci/run-yamllint.sh
section_end yamllint
.gitlab-ci/meson/build.sh
.gitlab-ci/prepare-artifacts.sh
timeout: 15m
shader-db:
stage: code-validation
extends:
- .use-debian/x86_64_build
- .container+build-rules
needs:
- debian-build-testing
variables:
S3_ARTIFACT_NAME: debian-build-testing
before_script:
- !reference [.download_s3, before_script]
script: |
.gitlab-ci/run-shader-db.sh
artifacts:
paths:
- shader-db
timeout: 15m
# Test a release build with -Werror so new warnings don't sneak in.
debian-release:
extends: .meson-build
variables:
LLVM_VERSION: 15
UNWIND: "enabled"
C_ARGS: >
-Wno-error=stringop-overread
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D glvnd=disabled
-D platforms=x11,wayland
GALLIUM_ST: >
-D dri3=enabled
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-omx=disabled
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=false
-D gallium-rusticl=false
-D llvm=enabled
GALLIUM_DRIVERS: "i915,iris,nouveau,kmsro,freedreno,r300,svga,swrast,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,d3d12,crocus"
VULKAN_DRIVERS: "amd,imagination-experimental,microsoft-experimental"
EXTRA_OPTION: >
-D spirv-to-dxil=true
-D osmesa=true
-D tools=all
-D intel-clc=enabled
-D intel-rt=enabled
-D imagination-srv=true
BUILDTYPE: "release"
S3_ARTIFACT_NAME: "mesa-x86_64-default-${BUILDTYPE}"
script:
- .gitlab-ci/meson/build.sh
- 'if [ -n "$MESA_CI_PERFORMANCE_ENABLED" ]; then .gitlab-ci/prepare-artifacts.sh; fi'
alpine-build-testing:
extends:
- .meson-build
- .use-alpine/x86_64_build
stage: build-x86_64
variables:
BUILDTYPE: "release"
C_ARGS: >
-Wno-error=cpp
-Wno-error=array-bounds
-Wno-error=stringop-overread
DRI_LOADERS: >
-D glx=disabled
-D gbm=enabled
-D egl=enabled
-D glvnd=disabled
-D platforms=wayland
LLVM_VERSION: "16"
GALLIUM_DRIVERS: "crocus,etnaviv,freedreno,iris,kmsro,lima,nouveau,panfrost,r300,r600,radeonsi,svga,swrast,tegra,v3d,vc4,virgl,zink"
GALLIUM_ST: >
-D dri3=enabled
-D gallium-extra-hud=true
-D gallium-vdpau=disabled
-D gallium-omx=disabled
-D gallium-va=enabled
-D gallium-xa=disabled
-D gallium-nine=true
-D gallium-rusticl=false
-D gles1=disabled
-D gles2=enabled
-D llvm=enabled
-D microsoft-clc=disabled
-D shared-llvm=enabled
UNWIND: "disabled"
VULKAN_DRIVERS: "amd,broadcom,freedreno,intel,imagination-experimental"
fedora-release:
extends:
- .meson-build
- .use-fedora/x86_64_build
variables:
BUILDTYPE: "release"
C_LINK_ARGS: >
-Wno-error=stringop-overflow
-Wno-error=stringop-overread
CPP_ARGS: >
-Wno-error=dangling-reference
-Wno-error=overloaded-virtual
CPP_LINK_ARGS: >
-Wno-error=stringop-overflow
-Wno-error=stringop-overread
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D glvnd=enabled
-D platforms=x11,wayland
EXTRA_OPTION: >
-D b_lto=true
-D osmesa=true
-D selinux=true
-D tools=drm-shim,etnaviv,freedreno,glsl,intel,nir,nouveau,lima,panfrost,imagination
-D vulkan-layers=device-select,overlay
-D intel-rt=enabled
-D imagination-srv=true
-D teflon=true
GALLIUM_DRIVERS: "crocus,etnaviv,freedreno,i915,iris,kmsro,lima,nouveau,panfrost,r300,r600,radeonsi,svga,swrast,tegra,v3d,vc4,virgl,zink"
GALLIUM_ST: >
-D dri3=enabled
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-omx=disabled
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=false
-D gallium-rusticl=true
-D gles1=disabled
-D gles2=enabled
-D llvm=enabled
-D microsoft-clc=disabled
-D shared-llvm=enabled
LLVM_VERSION: ""
UNWIND: "disabled"
VULKAN_DRIVERS: "amd,broadcom,freedreno,imagination-experimental,intel,intel_hasvk"
debian-android:
extends:
- .meson-cross
- .use-debian/android_build
- .ci-deqp-artifacts
variables:
BUILDTYPE: debug
UNWIND: "disabled"
C_ARGS: >
-Wno-error=asm-operand-widths
-Wno-error=constant-conversion
-Wno-error=enum-conversion
-Wno-error=initializer-overrides
-Wno-error=sometimes-uninitialized
CPP_ARGS: >
-Wno-error=c99-designator
-Wno-error=unused-variable
-Wno-error=unused-but-set-variable
-Wno-error=self-assign
DRI_LOADERS: >
-D glx=disabled
-D gbm=disabled
-D egl=enabled
-D glvnd=disabled
-D platforms=android
EXTRA_OPTION: >
-D android-stub=true
-D llvm=disabled
-D platform-sdk-version=33
-D valgrind=disabled
-D android-libbacktrace=disabled
-D intel-clc=system
GALLIUM_ST: >
-D dri3=disabled
-D gallium-vdpau=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
-D gallium-rusticl=false
LLVM_VERSION: "15"
PKG_CONFIG_LIBDIR: "/disable/non/android/system/pc/files"
HOST_BUILD_OPTIONS: >
-D build-tests=false
-D enable-glcpp-tests=false
-D gallium-opencl=disabled
-D gallium-drivers=
-D vulkan-drivers=
-D video-codecs=
-D glx=disabled
-D platforms=
-D intel-clc=enabled
-D install-intel-clc=true
ARTIFACTS_DEBUG_SYMBOLS: 1
S3_ARTIFACT_NAME: mesa-x86_64-android-${BUILDTYPE}
script:
- CROSS=aarch64-linux-android GALLIUM_DRIVERS=etnaviv,freedreno,lima,panfrost,vc4,v3d VULKAN_DRIVERS=freedreno,broadcom,virtio .gitlab-ci/meson/build.sh
# x86_64 build:
# Can't do Intel because gen_decoder.c currently requires libexpat, which
# is not a dependency that AOSP wants to accept. Can't do Radeon Gallium
# drivers because they requires LLVM, which we don't have an Android build
# of.
- CROSS=x86_64-linux-android GALLIUM_DRIVERS=iris,virgl VULKAN_DRIVERS=amd,intel .gitlab-ci/meson/build.sh
- .gitlab-ci/prepare-artifacts.sh
.meson-cross:
extends:
- .meson-build
stage: build-misc
variables:
UNWIND: "disabled"
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D platforms=x11,wayland
-D osmesa=false
GALLIUM_ST: >
-D dri3=enabled
-D gallium-vdpau=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
.meson-arm:
extends:
- .meson-cross
- .use-debian/arm64_build
needs:
- debian/arm64_build
variables:
VULKAN_DRIVERS: freedreno,broadcom
GALLIUM_DRIVERS: "etnaviv,freedreno,kmsro,lima,nouveau,panfrost,swrast,tegra,v3d,vc4,zink"
BUILDTYPE: "debugoptimized"
tags:
- aarch64
debian-arm32:
extends:
- .meson-arm
- .ci-deqp-artifacts
variables:
CROSS: armhf
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: >
-D llvm=disabled
-D valgrind=disabled
S3_ARTIFACT_NAME: mesa-arm32-default-${BUILDTYPE}
# The strip command segfaults, failing to strip the binary and leaving
# tempfiles in our artifacts.
ARTIFACTS_DEBUG_SYMBOLS: 1
script:
- .gitlab-ci/meson/build.sh
- .gitlab-ci/prepare-artifacts.sh
debian-arm32-asan:
extends:
- debian-arm32
variables:
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: >
-D llvm=disabled
-D b_sanitize=address
-D valgrind=disabled
-D tools=dlclose-skip
ARTIFACTS_DEBUG_SYMBOLS: 1
S3_ARTIFACT_NAME: mesa-arm32-asan-${BUILDTYPE}
MESON_TEST_ARGS: "--no-suite mesa:compiler --no-suite mesa:util"
debian-arm64:
extends:
- .meson-arm
- .ci-deqp-artifacts
variables:
C_ARGS: >
-Wno-error=array-bounds
-Wno-error=stringop-truncation
VULKAN_DRIVERS: "freedreno,broadcom,panfrost,imagination-experimental"
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: >
-D llvm=disabled
-D valgrind=disabled
-D imagination-srv=true
-D perfetto=true
-D freedreno-kmds=msm,virtio
-D teflon=true
S3_ARTIFACT_NAME: mesa-arm64-default-${BUILDTYPE}
script:
- .gitlab-ci/meson/build.sh
- .gitlab-ci/prepare-artifacts.sh
debian-arm64-asan:
extends:
- debian-arm64
variables:
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: >
-D llvm=disabled
-D b_sanitize=address
-D valgrind=disabled
-D tools=dlclose-skip
ARTIFACTS_DEBUG_SYMBOLS: 1
S3_ARTIFACT_NAME: mesa-arm64-asan-${BUILDTYPE}
MESON_TEST_ARGS: "--no-suite mesa:compiler"
debian-arm64-build-test:
extends:
- .meson-arm
- .ci-deqp-artifacts
variables:
VULKAN_DRIVERS: "amd"
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: >
-Dtools=panfrost,imagination
debian-arm64-release:
extends:
- debian-arm64
variables:
BUILDTYPE: release
S3_ARTIFACT_NAME: mesa-arm64-default-${BUILDTYPE}
C_ARGS: >
-Wno-error=array-bounds
-Wno-error=stringop-truncation
-Wno-error=stringop-overread
script:
- .gitlab-ci/meson/build.sh
- 'if [ -n "$MESA_CI_PERFORMANCE_ENABLED" ]; then .gitlab-ci/prepare-artifacts.sh; fi'
debian-clang:
extends: .meson-build
variables:
BUILDTYPE: debug
LLVM_VERSION: 15
UNWIND: "enabled"
C_ARGS: >
-Wno-error=constant-conversion
-Wno-error=enum-conversion
-Wno-error=initializer-overrides
-Wno-error=sometimes-uninitialized
-Werror=misleading-indentation
CPP_ARGS: >
-Wno-error=c99-designator
-Wno-error=overloaded-virtual
-Wno-error=tautological-constant-out-of-range-compare
-Wno-error=unused-private-field
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D glvnd=enabled
-D platforms=x11,wayland
GALLIUM_ST: >
-D dri3=enabled
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-omx=bellagio
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=true
-D gles1=enabled
-D gles2=enabled
-D llvm=enabled
-D microsoft-clc=disabled
-D shared-llvm=enabled
-D opencl-spirv=true
-D shared-glapi=enabled
GALLIUM_DRIVERS: "iris,nouveau,kmsro,r300,r600,freedreno,swrast,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,radeonsi,tegra,d3d12,crocus,i915,asahi"
VULKAN_DRIVERS: intel,amd,freedreno,broadcom,virtio,swrast,panfrost,imagination-experimental,microsoft-experimental,nouveau
EXTRA_OPTION:
-D spirv-to-dxil=true
-D osmesa=true
-D imagination-srv=true
-D tools=drm-shim,etnaviv,freedreno,glsl,intel,intel-ui,nir,nouveau,lima,panfrost,asahi,imagination
-D vulkan-layers=device-select,overlay
-D build-aco-tests=true
-D intel-clc=enabled
-D intel-rt=enabled
-D imagination-srv=true
-D teflon=true
CC: clang-${LLVM_VERSION}
CXX: clang++-${LLVM_VERSION}
debian-clang-release:
extends: debian-clang
variables:
BUILDTYPE: "release"
DRI_LOADERS: >
-D glx=xlib
-D platforms=x11,wayland
GALLIUM_ST: >
-D dri3=enabled
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-omx=bellagio
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=true
-D gles1=disabled
-D gles2=disabled
-D llvm=enabled
-D microsoft-clc=disabled
-D shared-llvm=enabled
-D opencl-spirv=true
-D shared-glapi=disabled
windows-msvc:
extends:
- .build-windows
- .use-windows_build_msvc
- .windows-build-rules
stage: build-misc
script:
- pwsh -ExecutionPolicy RemoteSigned .\.gitlab-ci\windows\mesa_build.ps1
artifacts:
paths:
- _build/meson-logs/*.txt
- _install/
debian-vulkan:
extends: .meson-build
variables:
BUILDTYPE: debug
LLVM_VERSION: 15
UNWIND: "disabled"
DRI_LOADERS: >
-D glx=disabled
-D gbm=disabled
-D egl=disabled
-D opengl=false
-D gles1=disabled
-D gles2=disabled
-D glvnd=disabled
-D platforms=x11,wayland
-D osmesa=false
GALLIUM_ST: >
-D dri3=enabled
-D gallium-vdpau=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
-D gallium-rusticl=false
-D b_sanitize=undefined
-D c_args=-fno-sanitize-recover=all
-D cpp_args=-fno-sanitize-recover=all
UBSAN_OPTIONS: "print_stacktrace=1"
VULKAN_DRIVERS: amd,broadcom,freedreno,intel,intel_hasvk,panfrost,virtio,imagination-experimental,microsoft-experimental,nouveau
EXTRA_OPTION: >
-D vulkan-layers=device-select,overlay
-D build-aco-tests=true
-D intel-rt=disabled
-D imagination-srv=true
debian-x86_32:
extends:
- .meson-cross
- .use-debian/x86_32_build
variables:
BUILDTYPE: debug
CROSS: i386
VULKAN_DRIVERS: intel,amd,swrast,virtio
GALLIUM_DRIVERS: "iris,nouveau,r300,r600,radeonsi,swrast,virgl,zink,crocus,d3d12"
LLVM_VERSION: 15
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: >
-D vulkan-layers=device-select,overlay
-D intel-clc=system
HOST_BUILD_OPTIONS: >
-D build-tests=false
-D enable-glcpp-tests=false
-D gallium-opencl=disabled
-D gallium-drivers=
-D vulkan-drivers=
-D video-codecs=
-D glx=disabled
-D platforms=
-D intel-clc=enabled
-D install-intel-clc=true
debian-s390x:
extends:
- debian-ppc64el
- .use-debian/s390x_build
- .s390x-rules
tags:
- kvm
variables:
CROSS: s390x
GALLIUM_DRIVERS: "swrast,zink"
LLVM_VERSION: 15
VULKAN_DRIVERS: "swrast"
DRI_LOADERS:
-D glvnd=disabled
debian-ppc64el:
extends:
- .meson-cross
- .use-debian/ppc64el_build
- .ppc64el-rules
variables:
BUILDTYPE: debug
CROSS: ppc64el
GALLIUM_DRIVERS: "nouveau,radeonsi,swrast,virgl,zink"
VULKAN_DRIVERS: "amd,swrast"
DRI_LOADERS:
-D glvnd=disabled


@@ -1,10 +1,7 @@
#!/usr/bin/env bash #!/bin/sh
# shellcheck disable=SC2035
# shellcheck disable=SC2061
# shellcheck disable=SC2086 # we want word splitting
while true; do while true; do
devcds=$(find /sys/devices/virtual/devcoredump/ -name data 2>/dev/null) devcds=`find /sys/devices/virtual/devcoredump/ -name data 2>/dev/null`
for i in $devcds; do for i in $devcds; do
echo "Found a devcoredump at $i." echo "Found a devcoredump at $i."
if cp $i /results/first.devcore; then if cp $i /results/first.devcore; then
@@ -13,23 +10,5 @@ while true; do
exit 0 exit 0
fi fi
done done
i915_error_states=$(find /sys/devices/ -path */drm/card*/error)
for i in $i915_error_states; do
tmpfile=$(mktemp)
cp "$i" "$tmpfile"
filesize=$(stat --printf="%s" "$tmpfile")
# Does the file contain "No error state collected" ?
if [ "$filesize" = 25 ]; then
rm "$tmpfile"
else
echo "Found an i915 error state at $i size=$filesize."
if cp "$tmpfile" /results/first.i915_error_state; then
rm "$tmpfile"
echo 1 > "$i"
echo "Saved to the job artifacts at /first.i915_error_state"
exit 0
fi
fi
done
sleep 10 sleep 10
done done


@@ -1,137 +1,85 @@
#!/bin/bash #!/bin/bash
VARS=( for var in \
ACO_DEBUG ASAN_OPTIONS \
ARTIFACTS_BASE_URL BASE_SYSTEM_FORK_HOST_PREFIX \
ASAN_OPTIONS BASE_SYSTEM_MAINLINE_HOST_PREFIX \
BASE_SYSTEM_FORK_HOST_PREFIX CI_COMMIT_BRANCH \
BASE_SYSTEM_MAINLINE_HOST_PREFIX CI_COMMIT_TITLE \
CI_COMMIT_BRANCH CI_JOB_ID \
CI_COMMIT_REF_NAME CI_JOB_URL \
CI_COMMIT_TITLE CI_MERGE_REQUEST_SOURCE_BRANCH_NAME \
CI_JOB_ID CI_MERGE_REQUEST_TITLE \
S3_JWT_FILE CI_NODE_INDEX \
CI_JOB_STARTED_AT CI_NODE_TOTAL \
CI_JOB_NAME CI_PAGES_DOMAIN \
CI_JOB_URL CI_PIPELINE_ID \
CI_MERGE_REQUEST_SOURCE_BRANCH_NAME CI_PROJECT_DIR \
CI_MERGE_REQUEST_TITLE CI_PROJECT_NAME \
CI_NODE_INDEX CI_PROJECT_PATH \
CI_NODE_TOTAL CI_PROJECT_ROOT_NAMESPACE \
CI_PAGES_DOMAIN CI_RUNNER_DESCRIPTION \
CI_PIPELINE_ID CI_SERVER_URL \
CI_PIPELINE_URL DEQP_CASELIST_FILTER \
CI_PROJECT_DIR DEQP_CASELIST_INV_FILTER \
CI_PROJECT_NAME DEQP_CONFIG \
CI_PROJECT_PATH DEQP_EXPECTED_RENDERER \
CI_PROJECT_ROOT_NAMESPACE DEQP_FRACTION \
CI_RUNNER_DESCRIPTION DEQP_HEIGHT \
CI_SERVER_URL DEQP_PARALLEL \
CROSVM_GALLIUM_DRIVER DEQP_RESULTS_DIR \
CROSVM_GPU_ARGS DEQP_RUNNER_OPTIONS \
CURRENT_SECTION DEQP_SUITE \
DEQP_BIN_DIR DEQP_VARIANT \
DEQP_CONFIG DEQP_VER \
DEQP_EXPECTED_RENDERER DEQP_WIDTH \
DEQP_FRACTION DEVICE_NAME \
DEQP_HEIGHT DRIVER_NAME \
DEQP_RESULTS_DIR EGL_PLATFORM \
DEQP_RUNNER_OPTIONS ETNA_MESA_DEBUG \
DEQP_SUITE FDO_CI_CONCURRENT \
DEQP_TEMP_DIR FDO_UPSTREAM_REPO \
DEQP_VER FD_MESA_DEBUG \
DEQP_WIDTH FLAKES_CHANNEL \
DEVICE_NAME GPU_VERSION \
DRIVER_NAME HWCI_FREQ_MAX \
EGL_PLATFORM HWCI_KERNEL_MODULES \
ETNA_MESA_DEBUG HWCI_START_XORG \
FDO_CI_CONCURRENT HWCI_TEST_SCRIPT \
FDO_UPSTREAM_REPO IR3_SHADER_DEBUG \
FD_MESA_DEBUG JOB_ARTIFACTS_BASE \
FLAKES_CHANNEL JOB_RESULTS_PATH \
FREEDRENO_HANGCHECK_MS JOB_ROOTFS_OVERLAY_PATH \
GALLIUM_DRIVER MESA_BUILD_PATH \
GALLIVM_PERF MESA_GL_VERSION_OVERRIDE \
GPU_VERSION MESA_GLSL_VERSION_OVERRIDE \
GTEST MESA_GLES_VERSION_OVERRIDE \
GTEST_FAILS MESA_VK_IGNORE_CONFORMANCE_WARNING \
GTEST_FRACTION MINIO_HOST \
GTEST_RESULTS_DIR NIR_VALIDATE \
GTEST_RUNNER_OPTIONS PAN_I_WANT_A_BROKEN_VULKAN_DRIVER \
GTEST_SKIPS PAN_MESA_DEBUG \
HWCI_FREQ_MAX PIGLIT_FRACTION \
HWCI_KERNEL_MODULES PIGLIT_JUNIT_RESULTS \
HWCI_KVM PIGLIT_NO_WINDOW \
HWCI_START_WESTON PIGLIT_OPTIONS \
HWCI_START_XORG PIGLIT_PLATFORM \
HWCI_TEST_SCRIPT PIGLIT_PROFILES \
IR3_SHADER_DEBUG PIGLIT_REPLAY_ARTIFACTS_BASE_URL \
JOB_ARTIFACTS_BASE PIGLIT_REPLAY_SUBCOMMAND \
JOB_RESULTS_PATH PIGLIT_REPLAY_DESCRIPTION_FILE \
JOB_ROOTFS_OVERLAY_PATH PIGLIT_REPLAY_DEVICE_NAME \
KERNEL_IMAGE_BASE PIGLIT_REPLAY_EXTRA_ARGS \
KERNEL_IMAGE_NAME PIGLIT_REPLAY_REFERENCE_IMAGES_BASE \
LD_LIBRARY_PATH PIGLIT_REPLAY_UPLOAD_TO_MINIO \
LIBGL_ALWAYS_SOFTWARE PIGLIT_RESULTS \
LP_NUM_THREADS PIGLIT_TESTS \
MESA_BASE_TAG PIPELINE_ARTIFACTS_BASE \
MESA_BUILD_PATH TEST_LD_PRELOAD \
MESA_DEBUG TU_DEBUG \
MESA_GLES_VERSION_OVERRIDE VK_CPU \
MESA_GLSL_VERSION_OVERRIDE VK_DRIVER \
MESA_GL_VERSION_OVERRIDE ; do
MESA_IMAGE
MESA_IMAGE_PATH
MESA_IMAGE_TAG
MESA_LOADER_DRIVER_OVERRIDE
MESA_SPIRV_LOG_LEVEL
MESA_TEMPLATES_COMMIT
MESA_VK_ABORT_ON_DEVICE_LOSS
MESA_VK_IGNORE_CONFORMANCE_WARNING
S3_HOST
S3_RESULTS_UPLOAD
NIR_DEBUG
PAN_I_WANT_A_BROKEN_VULKAN_DRIVER
PAN_MESA_DEBUG
PANVK_DEBUG
PIGLIT_FRACTION
PIGLIT_NO_WINDOW
PIGLIT_OPTIONS
PIGLIT_PLATFORM
PIGLIT_PROFILES
PIGLIT_REPLAY_ANGLE_TAG
PIGLIT_REPLAY_ARTIFACTS_BASE_URL
PIGLIT_REPLAY_DEVICE_NAME
PIGLIT_REPLAY_EXTRA_ARGS
PIGLIT_REPLAY_LOOP_TIMES
PIGLIT_REPLAY_REFERENCE_IMAGES_BASE
PIGLIT_REPLAY_SUBCOMMAND
PIGLIT_RESULTS
PIGLIT_TESTS
PIGLIT_TRACES_FILE
PIPELINE_ARTIFACTS_BASE
RADEON_DEBUG
RADV_DEBUG
RADV_PERFTEST
SKQP_ASSETS_DIR
SKQP_BACKENDS
TU_DEBUG
USE_ANGLE
VIRGL_HOST_API
WAFFLE_PLATFORM
VK_CPU
VK_DRIVER
# required by virglrender CI
VK_DRIVER_FILES
VKD3D_PROTON_RESULTS
VKD3D_CONFIG
VKD3D_TEST_EXCLUDE
ZINK_DESCRIPTORS
ZINK_DEBUG
LVP_POISON_MEMORY
)
for var in "${VARS[@]}"; do
if [ -n "${!var+x}" ]; then if [ -n "${!var+x}" ]; then
echo "export $var=${!var@Q}" echo "export $var=${!var@Q}"
fi fi
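Both versions rely on the same bash idiom for the pass-through: ${!var+x} tests, via indirect expansion, whether the variable is set at all, and ${!var@Q} expands to the value quoted so the generated file can be sourced safely on the device. A small illustration with a hypothetical value:

PIGLIT_OPTIONS='-x glx -x egl'
var=PIGLIT_OPTIONS
[ -n "${!var+x}" ] && echo "export $var=${!var@Q}"
# prints: export PIGLIT_OPTIONS='-x glx -x egl'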


@@ -7,14 +7,11 @@ set -ex
cd / cd /
findmnt --mountpoint /proc || mount -t proc none /proc mount -t proc none /proc
findmnt --mountpoint /sys || mount -t sysfs none /sys mount -t sysfs none /sys
mount -t debugfs none /sys/kernel/debug mount -t devtmpfs none /dev || echo possibly already mounted
findmnt --mountpoint /dev || mount -t devtmpfs none /dev
mkdir -p /dev/pts mkdir -p /dev/pts
mount -t devpts devpts /dev/pts mount -t devpts devpts /dev/pts
mkdir /dev/shm
mount -t tmpfs -o noexec,nodev,nosuid tmpfs /dev/shm
mount -t tmpfs tmpfs /tmp mount -t tmpfs tmpfs /tmp
echo "nameserver 8.8.8.8" > /etc/resolv.conf echo "nameserver 8.8.8.8" > /etc/resolv.conf
@@ -22,4 +19,4 @@ echo "nameserver 8.8.8.8" > /etc/resolv.conf
# Set the time so we can validate certificates before we fetch anything; # Set the time so we can validate certificates before we fetch anything;
# however as not all DUTs have network, make this non-fatal. # however as not all DUTs have network, make this non-fatal.
for _ in 1 2 3; do sntp -sS pool.ntp.org && break || sleep 2; done || true for i in 1 2 3; do sntp -sS pool.ntp.org && break || sleep 2; done || true


@@ -1,95 +1,14 @@
#!/bin/bash #!/bin/sh
# shellcheck disable=SC1090
# shellcheck disable=SC1091
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC2155
# Second-stage init, used to set up devices and our job environment before # Second-stage init, used to set up devices and our job environment before
# running tests. # running tests.
shopt -s extglob . /set-job-env-vars.sh
# Make sure to kill itself and all the children process from this script on
# exiting, since any console output may interfere with LAVA signals handling,
# which based on the log console.
cleanup() {
if [ "$BACKGROUND_PIDS" = "" ]; then
return 0
fi
set +x
echo "Killing all child processes"
for pid in $BACKGROUND_PIDS
do
kill "$pid" 2>/dev/null || true
done
# Sleep just a little to give enough time for subprocesses to be gracefully
# killed. Then apply a SIGKILL if necessary.
sleep 5
for pid in $BACKGROUND_PIDS
do
kill -9 "$pid" 2>/dev/null || true
done
BACKGROUND_PIDS=
set -x
}
trap cleanup INT TERM EXIT
# Space separated values with the PIDS of the processes started in the
# background by this script
BACKGROUND_PIDS=
for path in '/dut-env-vars.sh' '/set-job-env-vars.sh' './set-job-env-vars.sh'; do
[ -f "$path" ] && source "$path"
done
. "$SCRIPTS_DIR"/setup-test-env.sh
set -ex set -ex
# Set up any devices required by the jobs # Set up any devices required by the jobs
[ -z "$HWCI_KERNEL_MODULES" ] || { [ -z "$HWCI_KERNEL_MODULES" ] || (echo -n $HWCI_KERNEL_MODULES | xargs -d, -n1 /usr/sbin/modprobe)
echo -n $HWCI_KERNEL_MODULES | xargs -d, -n1 /usr/sbin/modprobe
}
# Set up ZRAM
HWCI_ZRAM_SIZE=2G
if /sbin/zramctl --find --size $HWCI_ZRAM_SIZE -a zstd; then
mkswap /dev/zram0
swapon /dev/zram0
echo "zram: $HWCI_ZRAM_SIZE activated"
else
echo "zram: skipping, not supported"
fi
#
# Load the KVM module specific to the detected CPU virtualization extensions:
# - vmx for Intel VT
# - svm for AMD-V
#
# Additionally, download the kernel image to boot the VM via HWCI_TEST_SCRIPT.
#
if [ "$HWCI_KVM" = "true" ]; then
unset KVM_KERNEL_MODULE
{
grep -qs '\bvmx\b' /proc/cpuinfo && KVM_KERNEL_MODULE=kvm_intel
} || {
grep -qs '\bsvm\b' /proc/cpuinfo && KVM_KERNEL_MODULE=kvm_amd
}
{
[ -z "${KVM_KERNEL_MODULE}" ] && \
echo "WARNING: Failed to detect CPU virtualization extensions"
} || \
modprobe ${KVM_KERNEL_MODULE}
mkdir -p /lava-files
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-o "/lava-files/${KERNEL_IMAGE_NAME}" \
"${KERNEL_IMAGE_BASE}/amd64/${KERNEL_IMAGE_NAME}"
fi
# Fix prefix confusion: the build installs to $CI_PROJECT_DIR, but we expect # Fix prefix confusion: the build installs to $CI_PROJECT_DIR, but we expect
# it in /install # it in /install
@@ -97,66 +16,31 @@ ln -sf $CI_PROJECT_DIR/install /install
export LD_LIBRARY_PATH=/install/lib export LD_LIBRARY_PATH=/install/lib
export LIBGL_DRIVERS_PATH=/install/lib/dri export LIBGL_DRIVERS_PATH=/install/lib/dri
# https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/22495#note_1876691
# The navi21 boards seem to have trouble with ld.so.cache, so try explicitly
# telling it to look in /usr/local/lib.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
# Store Mesa's disk cache under /tmp, rather than sending it out over NFS. # Store Mesa's disk cache under /tmp, rather than sending it out over NFS.
export XDG_CACHE_HOME=/tmp export XDG_CACHE_HOME=/tmp
# Make sure Python can find all our imports # Make sure Python can find all our imports
export PYTHONPATH=$(python3 -c "import sys;print(\":\".join(sys.path))") export PYTHONPATH=$(python3 -c "import sys;print(\":\".join(sys.path))")
# If we need to specify a driver, it means several drivers could pick up this gpu;
# ensure that the other driver can't accidentally be used
if [ -n "$MESA_LOADER_DRIVER_OVERRIDE" ]; then
rm /install/lib/dri/!($MESA_LOADER_DRIVER_OVERRIDE)_dri.so
fi
ls -1 /install/lib/dri/*_dri.so || true
if [ "$HWCI_FREQ_MAX" = "true" ]; then if [ "$HWCI_FREQ_MAX" = "true" ]; then
# Ensure initialization of the DRM device (needed by MSM) # Ensure initialization of the DRM device (needed by MSM)
head -0 /dev/dri/renderD128 head -0 /dev/dri/renderD128
# Disable GPU frequency scaling # Disable GPU frequency scaling
DEVFREQ_GOVERNOR=$(find /sys/devices -name governor | grep gpu || true) DEVFREQ_GOVERNOR=`find /sys/devices -name governor | grep gpu || true`
test -z "$DEVFREQ_GOVERNOR" || echo performance > $DEVFREQ_GOVERNOR || true test -z "$DEVFREQ_GOVERNOR" || echo performance > $DEVFREQ_GOVERNOR || true
# Disable CPU frequency scaling # Disable CPU frequency scaling
echo performance | tee -a /sys/devices/system/cpu/cpufreq/policy*/scaling_governor || true echo performance | tee -a /sys/devices/system/cpu/cpufreq/policy*/scaling_governor || true
# Disable GPU runtime power management # Disable GPU runtime power management
GPU_AUTOSUSPEND=$(find /sys/devices -name autosuspend_delay_ms | grep gpu | head -1) GPU_AUTOSUSPEND=`find /sys/devices -name autosuspend_delay_ms | grep gpu | head -1`
test -z "$GPU_AUTOSUSPEND" || echo -1 > $GPU_AUTOSUSPEND || true test -z "$GPU_AUTOSUSPEND" || echo -1 > $GPU_AUTOSUSPEND || true
# Lock Intel GPU frequency to 70% of the maximum allowed by hardware
# and enable throttling detection & reporting.
# Additionally, set the upper limit for CPU scaling frequency to 65% of the
# maximum permitted, as an additional measure to mitigate thermal throttling.
/intel-gpu-freq.sh -s 70% --cpu-set-max 65% -g all -d
fi
# Start a little daemon to capture sysfs records and produce a JSON file
if [ -x /kdl.sh ]; then
echo "launch kdl.sh!"
/kdl.sh &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
else
echo "kdl.sh not found!"
fi
# Increase freedreno hangcheck timer because it's right at the edge of the
# spilling tests timing out (and some traces, too)
if [ -n "$FREEDRENO_HANGCHECK_MS" ]; then
echo $FREEDRENO_HANGCHECK_MS | tee -a /sys/kernel/debug/dri/128/hangcheck_period_ms
fi fi
# Start a little daemon to capture the first devcoredump we encounter. (They # Start a little daemon to capture the first devcoredump we encounter. (They
# expire after 5 minutes, so we poll for them). # expire after 5 minutes, so we poll for them).
if [ -x /capture-devcoredump.sh ]; then ./capture-devcoredump.sh &
/capture-devcoredump.sh &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
fi
# If we want Xorg to be running for the test, then we start it up before the # If we want Xorg to be running for the test, then we start it up before the
# HWCI_TEST_SCRIPT because we need to use xinit to start X (otherwise # HWCI_TEST_SCRIPT because we need to use xinit to start X (otherwise
@@ -165,12 +49,10 @@ fi
if [ -n "$HWCI_START_XORG" ]; then if [ -n "$HWCI_START_XORG" ]; then
echo "touch /xorg-started; sleep 100000" > /xorg-script echo "touch /xorg-started; sleep 100000" > /xorg-script
env \ env \
VK_DRIVER_FILES="/install/share/vulkan/icd.d/${VK_DRIVER}_icd.$(uname -m).json" \
xinit /bin/sh /xorg-script -- /usr/bin/Xorg -noreset -s 0 -dpms -logfile /Xorg.0.log & xinit /bin/sh /xorg-script -- /usr/bin/Xorg -noreset -s 0 -dpms -logfile /Xorg.0.log &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
# Wait for xorg to be ready for connections. # Wait for xorg to be ready for connections.
for _ in 1 2 3 4 5; do for i in 1 2 3 4 5; do
if [ -e /xorg-started ]; then if [ -e /xorg-started ]; then
break break
fi fi
@@ -179,57 +61,18 @@ if [ -n "$HWCI_START_XORG" ]; then
export DISPLAY=:0 export DISPLAY=:0
fi fi
if [ -n "$HWCI_START_WESTON" ]; then RESULT=fail
WESTON_X11_SOCK="/tmp/.X11-unix/X0" if sh $HWCI_TEST_SCRIPT; then
if [ -n "$HWCI_START_XORG" ]; then RESULT=pass
echo "Please consider dropping HWCI_START_XORG and instead using Weston XWayland for testing." rm -rf results/trace/$PIGLIT_REPLAY_DEVICE_NAME
WESTON_X11_SOCK="/tmp/.X11-unix/X1"
fi
export WAYLAND_DISPLAY=wayland-0
# Display server is Weston Xwayland when HWCI_START_XORG is not set or Xorg when it's
export DISPLAY=:0
mkdir -p /tmp/.X11-unix
env \
VK_DRIVER_FILES="/install/share/vulkan/icd.d/${VK_DRIVER}_icd.$(uname -m).json" \
weston -Bheadless-backend.so --use-gl -Swayland-0 --xwayland --idle-time=0 &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
while [ ! -S "$WESTON_X11_SOCK" ]; do sleep 1; done
fi fi
set +e
bash -c ". $SCRIPTS_DIR/setup-test-env.sh && $HWCI_TEST_SCRIPT"
EXIT_CODE=$?
set -e
# Let's make sure the results are always stored in current working directory
mv -f ${CI_PROJECT_DIR}/results ./ 2>/dev/null || true
[ ${EXIT_CODE} -ne 0 ] || rm -rf results/trace/"$PIGLIT_REPLAY_DEVICE_NAME"
# Make sure that capture-devcoredump is done before we start trying to tar up
# artifacts -- if it's writing while tar is reading, tar will throw an error and
# kill the job.
cleanup
# upload artifacts # upload artifacts
if [ -n "$S3_RESULTS_UPLOAD" ]; then MINIO=$(cat /proc/cmdline | tr ' ' '\n' | grep minio_results | cut -d '=' -f 2 || true)
tar --zstd -cf results.tar.zst results/; if [ -n "$MINIO" ]; then
ci-fairy s3cp --token-file "${S3_JWT_FILE}" results.tar.zst https://"$S3_RESULTS_UPLOAD"/results.tar.zst; tar -czf results.tar.gz results/;
ci-fairy minio login "$CI_JOB_JWT";
ci-fairy minio cp results.tar.gz minio://"$MINIO"/results.tar.gz;
fi fi
# We still need to echo the hwci: mesa message, as some scripts rely on it, such echo "hwci: mesa: $RESULT"
# as the python ones inside the bare-metal folder
[ ${EXIT_CODE} -eq 0 ] && RESULT=pass || RESULT=fail
set +x
# Print the final result; both bare-metal and LAVA look for this string to get
# the result of our run, so try really hard to get it out rather than losing
# the run. The device gets shut down right at this point, and a630 seems to
# enjoy corrupting the last line of serial output before shutdown.
for _ in $(seq 0 3); do echo "hwci: mesa: $RESULT"; sleep 1; echo; done
exit $EXIT_CODE


@@ -1,768 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2013
# shellcheck disable=SC2015
# shellcheck disable=SC2034
# shellcheck disable=SC2046
# shellcheck disable=SC2059
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC2154
# shellcheck disable=SC2155
# shellcheck disable=SC2162
# shellcheck disable=SC2229
#
# This is an utility script to manage Intel GPU frequencies.
# It can be used for debugging performance problems or trying to obtain a stable
# frequency while benchmarking.
#
# Note the Intel i915 GPU driver allows to change the minimum, maximum and boost
# frequencies in steps of 50 MHz via:
#
# /sys/class/drm/card<n>/<freq_info>
#
# Where <n> is the DRM card index and <freq_info> one of the following:
#
# - gt_max_freq_mhz (enforced maximum freq)
# - gt_min_freq_mhz (enforced minimum freq)
# - gt_boost_freq_mhz (enforced boost freq)
#
# The hardware capabilities can be accessed via:
#
# - gt_RP0_freq_mhz (supported maximum freq)
# - gt_RPn_freq_mhz (supported minimum freq)
# - gt_RP1_freq_mhz (most efficient freq)
#
# The current frequency can be read from:
# - gt_act_freq_mhz (the actual GPU freq)
# - gt_cur_freq_mhz (the last requested freq)
#
# Also note that in addition to GPU management, the script offers the
# possibility to adjust CPU operating frequencies. However, this is currently
# limited to just setting the maximum scaling frequency as percentage of the
# maximum frequency allowed by the hardware.
#
# Copyright (C) 2022 Collabora Ltd.
# Author: Cristian Ciocaltea <cristian.ciocaltea@collabora.com>
#
# SPDX-License-Identifier: MIT
#
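#
# Illustration added for this compare view (not part of the script): the
# interface described above can be inspected directly from a shell, assuming
# the Intel GPU is exposed as card0.
#
print_gpu_freqs_example() {
    local info
    for info in RP0 RPn RP1 min max boost act cur; do
        printf "%6s: %s MHz\n" "${info}" "$(cat /sys/class/drm/card0/gt_${info}_freq_mhz)"
    done
}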
#
# Constants
#
# GPU
DRM_FREQ_SYSFS_PATTERN="/sys/class/drm/card%d/gt_%s_freq_mhz"
ENF_FREQ_INFO="max min boost"
CAP_FREQ_INFO="RP0 RPn RP1"
ACT_FREQ_INFO="act cur"
THROTT_DETECT_SLEEP_SEC=2
THROTT_DETECT_PID_FILE_PATH=/tmp/thrott-detect.pid
# CPU
CPU_SYSFS_PREFIX=/sys/devices/system/cpu
CPU_PSTATE_SYSFS_PATTERN="${CPU_SYSFS_PREFIX}/intel_pstate/%s"
CPU_FREQ_SYSFS_PATTERN="${CPU_SYSFS_PREFIX}/cpu%s/cpufreq/%s_freq"
CAP_CPU_FREQ_INFO="cpuinfo_max cpuinfo_min"
ENF_CPU_FREQ_INFO="scaling_max scaling_min"
ACT_CPU_FREQ_INFO="scaling_cur"
#
# Global variables.
#
unset INTEL_DRM_CARD_INDEX
unset GET_ACT_FREQ GET_ENF_FREQ GET_CAP_FREQ
unset SET_MIN_FREQ SET_MAX_FREQ
unset MONITOR_FREQ
unset CPU_SET_MAX_FREQ
unset DETECT_THROTT
unset DRY_RUN
#
# Simple printf based stderr logger.
#
log() {
local msg_type=$1
shift
printf "%s: %s: " "${msg_type}" "${0##*/}" >&2
printf "$@" >&2
printf "\n" >&2
}
#
# Helper to print sysfs path for the given card index and freq info.
#
# arg1: Frequency info sysfs name, one of *_FREQ_INFO constants above
# arg2: Video card index, defaults to INTEL_DRM_CARD_INDEX
#
print_freq_sysfs_path() {
printf ${DRM_FREQ_SYSFS_PATTERN} "${2:-${INTEL_DRM_CARD_INDEX}}" "$1"
}
#
# Helper to set INTEL_DRM_CARD_INDEX for the first identified Intel video card.
#
identify_intel_gpu() {
local i=0 vendor path
while [ ${i} -lt 16 ]; do
[ -c "/dev/dri/card$i" ] || {
i=$((i + 1))
continue
}
path=$(print_freq_sysfs_path "" ${i})
path=${path%/*}/device/vendor
[ -r "${path}" ] && read vendor < "${path}" && \
[ "${vendor}" = "0x8086" ] && INTEL_DRM_CARD_INDEX=$i && return 0
i=$((i + 1))
done
return 1
}
#
# Read the specified freq info from sysfs.
#
# arg1: Flag (y/n) to also enable printing the freq info.
# arg2...: Frequency info sysfs name(s), see *_FREQ_INFO constants above
# return: Global variable(s) FREQ_${arg} containing the requested information
#
read_freq_info() {
local var val info path print=0 ret=0
[ "$1" = "y" ] && print=1
shift
while [ $# -gt 0 ]; do
info=$1
shift
var=FREQ_${info}
path=$(print_freq_sysfs_path "${info}")
[ -r ${path} ] && read ${var} < ${path} || {
log ERROR "Failed to read freq info from: %s" "${path}"
ret=1
continue
}
[ -n "${var}" ] || {
log ERROR "Got empty freq info from: %s" "${path}"
ret=1
continue
}
[ ${print} -eq 1 ] && {
eval val=\$${var}
printf "%6s: %4s MHz\n" "${info}" "${val}"
}
done
return ${ret}
}
#
# Display requested info.
#
print_freq_info() {
local req_freq
[ -n "${GET_CAP_FREQ}" ] && {
printf "* Hardware capabilities\n"
read_freq_info y ${CAP_FREQ_INFO}
printf "\n"
}
[ -n "${GET_ENF_FREQ}" ] && {
printf "* Enforcements\n"
read_freq_info y ${ENF_FREQ_INFO}
printf "\n"
}
[ -n "${GET_ACT_FREQ}" ] && {
printf "* Actual\n"
read_freq_info y ${ACT_FREQ_INFO}
printf "\n"
}
}
#
# Helper to print frequency value as requested by user via '-s, --set' option.
# arg1: user requested freq value
#
compute_freq_set() {
local val
case "$1" in
+)
val=${FREQ_RP0}
;;
-)
val=${FREQ_RPn}
;;
*%)
val=$((${1%?} * FREQ_RP0 / 100))
# Adjust freq to comply with 50 MHz increments
val=$((val / 50 * 50))
;;
*[!0-9]*)
log ERROR "Cannot set freq to invalid value: %s" "$1"
return 1
;;
"")
log ERROR "Cannot set freq to unspecified value"
return 1
;;
*)
# Adjust freq to comply with 50 MHz increments
val=$(($1 / 50 * 50))
;;
esac
printf "%s" "${val}"
}
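#
# Worked example for this compare view (not part of the script): with a
# hardware maximum (RP0) of 1100 MHz, a request of "70%" yields
# 1100 * 70 / 100 = 770 MHz, which the 50 MHz rounding above truncates to
# 750 MHz; i.e. with FREQ_RP0=1100, compute_freq_set "70%" prints "750".
#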
#
# Helper for set_freq().
#
set_freq_max() {
log INFO "Setting GPU max freq to %s MHz" "${SET_MAX_FREQ}"
read_freq_info n min || return $?
[ ${SET_MAX_FREQ} -gt ${FREQ_RP0} ] && {
log ERROR "Cannot set GPU max freq (%s) to be greater than hw max freq (%s)" \
"${SET_MAX_FREQ}" "${FREQ_RP0}"
return 1
}
[ ${SET_MAX_FREQ} -lt ${FREQ_RPn} ] && {
log ERROR "Cannot set GPU max freq (%s) to be less than hw min freq (%s)" \
"${SET_MIN_FREQ}" "${FREQ_RPn}"
return 1
}
[ ${SET_MAX_FREQ} -lt ${FREQ_min} ] && {
log ERROR "Cannot set GPU max freq (%s) to be less than min freq (%s)" \
"${SET_MAX_FREQ}" "${FREQ_min}"
return 1
}
[ -z "${DRY_RUN}" ] || return 0
if ! printf "%s" ${SET_MAX_FREQ} | tee $(print_freq_sysfs_path max) \
$(print_freq_sysfs_path boost) > /dev/null;
then
log ERROR "Failed to set GPU max frequency"
return 1
fi
}
#
# Helper for set_freq().
#
set_freq_min() {
log INFO "Setting GPU min freq to %s MHz" "${SET_MIN_FREQ}"
read_freq_info n max || return $?
[ ${SET_MIN_FREQ} -gt ${FREQ_max} ] && {
log ERROR "Cannot set GPU min freq (%s) to be greater than max freq (%s)" \
"${SET_MIN_FREQ}" "${FREQ_max}"
return 1
}
[ ${SET_MIN_FREQ} -lt ${FREQ_RPn} ] && {
log ERROR "Cannot set GPU min freq (%s) to be less than hw min freq (%s)" \
"${SET_MIN_FREQ}" "${FREQ_RPn}"
return 1
}
[ -z "${DRY_RUN}" ] || return 0
if ! printf "%s" ${SET_MIN_FREQ} > $(print_freq_sysfs_path min);
then
log ERROR "Failed to set GPU min frequency"
return 1
fi
}
#
# Set min or max or both GPU frequencies to the user indicated values.
#
set_freq() {
# Get hw max & min frequencies
read_freq_info n RP0 RPn || return $?
[ -z "${SET_MAX_FREQ}" ] || {
SET_MAX_FREQ=$(compute_freq_set "${SET_MAX_FREQ}")
[ -z "${SET_MAX_FREQ}" ] && return 1
}
[ -z "${SET_MIN_FREQ}" ] || {
SET_MIN_FREQ=$(compute_freq_set "${SET_MIN_FREQ}")
[ -z "${SET_MIN_FREQ}" ] && return 1
}
#
# Ensure correct operation order, to avoid setting min freq
# to a value which is larger than max freq.
#
# E.g.:
# crt_min=crt_max=600; new_min=new_max=700
# > operation order: max=700; min=700
#
# crt_min=crt_max=600; new_min=new_max=500
# > operation order: min=500; max=500
#
if [ -n "${SET_MAX_FREQ}" ] && [ -n "${SET_MIN_FREQ}" ]; then
[ ${SET_MAX_FREQ} -lt ${SET_MIN_FREQ} ] && {
log ERROR "Cannot set GPU max freq to be less than min freq"
return 1
}
read_freq_info n min || return $?
if [ ${SET_MAX_FREQ} -lt ${FREQ_min} ]; then
set_freq_min || return $?
set_freq_max
else
set_freq_max || return $?
set_freq_min
fi
elif [ -n "${SET_MAX_FREQ}" ]; then
set_freq_max
elif [ -n "${SET_MIN_FREQ}" ]; then
set_freq_min
else
log "Unexpected call to set_freq()"
return 1
fi
}
#
# Helper for detect_throttling().
#
get_thrott_detect_pid() {
[ -e ${THROTT_DETECT_PID_FILE_PATH} ] || return 0
local pid
read pid < ${THROTT_DETECT_PID_FILE_PATH} || {
log ERROR "Failed to read pid from: %s" "${THROTT_DETECT_PID_FILE_PATH}"
return 1
}
local proc_path=/proc/${pid:-invalid}/cmdline
[ -r ${proc_path} ] && grep -qs "${0##*/}" ${proc_path} && {
printf "%s" "${pid}"
return 0
}
# Remove orphaned PID file
rm -rf ${THROTT_DETECT_PID_FILE_PATH}
return 1
}
#
# Control detection and reporting of GPU throttling events.
# arg1: start - run throttle detector in background
# stop - stop throttle detector process, if any
# status - verify if throttle detector is running
#
detect_throttling() {
local pid
pid=$(get_thrott_detect_pid)
case "$1" in
status)
printf "Throttling detector is "
[ -z "${pid}" ] && printf "not running\n" && return 0
printf "running (pid=%s)\n" ${pid}
;;
stop)
[ -z "${pid}" ] && return 0
log INFO "Stopping throttling detector (pid=%s)" "${pid}"
kill ${pid}; sleep 1; kill -0 ${pid} 2>/dev/null && kill -9 ${pid}
rm -rf ${THROTT_DETECT_PID_FILE_PATH}
;;
start)
[ -n "${pid}" ] && {
log WARN "Throttling detector is already running (pid=%s)" ${pid}
return 0
}
(
read_freq_info n RPn || exit $?
while true; do
sleep ${THROTT_DETECT_SLEEP_SEC}
read_freq_info n act min cur || exit $?
#
# The throttling seems to occur when act freq goes below min.
# However, it's necessary to exclude the idle states, where
# act freq normally reaches RPn and cur goes below min.
#
[ ${FREQ_act} -lt ${FREQ_min} ] && \
[ ${FREQ_act} -gt ${FREQ_RPn} ] && \
[ ${FREQ_cur} -ge ${FREQ_min} ] && \
printf "GPU throttling detected: act=%s min=%s cur=%s RPn=%s\n" \
${FREQ_act} ${FREQ_min} ${FREQ_cur} ${FREQ_RPn}
done
) &
pid=$!
log INFO "Started GPU throttling detector (pid=%s)" ${pid}
printf "%s\n" ${pid} > ${THROTT_DETECT_PID_FILE_PATH} || \
log WARN "Failed to write throttle detector PID file"
;;
esac
}
#
# Retrieve the list of online CPUs.
#
get_online_cpus() {
local path cpu_index
printf "0"
for path in $(grep 1 ${CPU_SYSFS_PREFIX}/cpu*/online); do
cpu_index=${path##*/cpu}
printf " %s" ${cpu_index%%/*}
done
}
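#
# E.g. (illustrative): if /sys/devices/system/cpu/cpu1/online and
# .../cpu2/online both contain "1", get_online_cpus prints "0 1 2"
# (CPU0 is listed unconditionally, as it typically has no 'online' node).
#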
#
# Helper to print sysfs path for the given CPU index and freq info.
#
# arg1: Frequency info sysfs name, one of *_CPU_FREQ_INFO constants above
# arg2: CPU index
#
print_cpu_freq_sysfs_path() {
printf ${CPU_FREQ_SYSFS_PATTERN} "$2" "$1"
}
#
# Read the specified CPU freq info from sysfs.
#
# arg1: CPU index
# arg2: Flag (y/n) to also enable printing the freq info.
# arg3...: Frequency info sysfs name(s), see *_CPU_FREQ_INFO constants above
# return: Global variable(s) CPU_FREQ_${arg} containing the requested information
#
read_cpu_freq_info() {
local var val info path cpu_index print=0 ret=0
cpu_index=$1
[ "$2" = "y" ] && print=1
shift 2
while [ $# -gt 0 ]; do
info=$1
shift
var=CPU_FREQ_${info}
path=$(print_cpu_freq_sysfs_path "${info}" ${cpu_index})
[ -r ${path} ] && read ${var} < ${path} || {
log ERROR "Failed to read CPU freq info from: %s" "${path}"
ret=1
continue
}
[ -n "${var}" ] || {
log ERROR "Got empty CPU freq info from: %s" "${path}"
ret=1
continue
}
[ ${print} -eq 1 ] && {
eval val=\$${var}
printf "%6s: %4s Hz\n" "${info}" "${val}"
}
done
return ${ret}
}
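#
# E.g. (illustrative, assuming the CPU_FREQ_SYSFS_PATTERN defined earlier
# expands to .../cpu<index>/cpufreq/<info>_freq):
#   read_cpu_freq_info 2 y scaling_max
# fills CPU_FREQ_scaling_max from cpu2's scaling_max_freq and prints it.
#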
#
# Helper to print freq. value as requested by user via '--cpu-set-max' option.
# arg1: user requested freq value
#
compute_cpu_freq_set() {
local val
case "$1" in
+)
val=${CPU_FREQ_cpuinfo_max}
;;
-)
val=${CPU_FREQ_cpuinfo_min}
;;
*%)
val=$((${1%?} * CPU_FREQ_cpuinfo_max / 100))
;;
*[!0-9]*)
log ERROR "Cannot set CPU freq to invalid value: %s" "$1"
return 1
;;
"")
log ERROR "Cannot set CPU freq to unspecified value"
return 1
;;
*)
log ERROR "Cannot set CPU freq to custom value; use +, -, or % instead"
return 1
;;
esac
printf "%s" "${val}"
}
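#
# E.g. (illustrative): with CPU_FREQ_cpuinfo_max=3600000, "50%" prints
# 1800000, "+" prints 3600000 and "-" prints CPU_FREQ_cpuinfo_min; a plain
# number such as "2000000" is rejected as a custom value.
#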
#
# Adjust CPU max scaling frequency.
#
set_cpu_freq_max() {
local target_freq res=0
case "${CPU_SET_MAX_FREQ}" in
+)
target_freq=100
;;
-)
target_freq=1
;;
*%)
target_freq=${CPU_SET_MAX_FREQ%?}
;;
*)
log ERROR "Invalid CPU freq"
return 1
;;
esac
local pstate_info=$(printf "${CPU_PSTATE_SYSFS_PATTERN}" max_perf_pct)
[ -e "${pstate_info}" ] && {
log INFO "Setting intel_pstate max perf to %s" "${target_freq}%"
if ! printf "%s" "${target_freq}" > "${pstate_info}";
then
log ERROR "Failed to set intel_pstate max perf"
res=1
fi
}
local cpu_index
for cpu_index in $(get_online_cpus); do
read_cpu_freq_info ${cpu_index} n ${CAP_CPU_FREQ_INFO} || { res=$?; continue; }
target_freq=$(compute_cpu_freq_set "${CPU_SET_MAX_FREQ}")
[ -z "${target_freq}" ] && { res=$?; continue; }
log INFO "Setting CPU%s max scaling freq to %s Hz" ${cpu_index} "${target_freq}"
[ -n "${DRY_RUN}" ] && continue
if ! printf "%s" ${target_freq} > $(print_cpu_freq_sysfs_path scaling_max ${cpu_index});
then
res=1
log ERROR "Failed to set CPU%s max scaling frequency" ${cpu_index}
fi
done
return ${res}
}
#
# Show help message.
#
print_usage() {
cat <<EOF
Usage: ${0##*/} [OPTION]...
A script to manage Intel GPU frequencies. Can be used for debugging performance
problems or trying to obtain a stable frequency while benchmarking.
Note Intel GPUs only accept specific frequencies, usually multiples of 50 MHz.
Options:
-g, --get [act|enf|cap|all]
Get frequency information: active (default), enforced,
hardware capabilities or all of them.
-s, --set [{min|max}=]{FREQUENCY[%]|+|-}
Set min or max frequency to the given value (MHz).
Append '%' to interpret FREQUENCY as % of hw max.
Use '+' or '-' to set frequency to hardware max or min.
Omit min/max prefix to set both frequencies.
-r, --reset Reset frequencies to hardware defaults.
-m, --monitor [act|enf|cap|all]
Monitor the indicated frequencies via 'watch' utility.
See '-g, --get' option for more details.
-d, --detect-thrott [start|stop|status]
Start (default operation) the throttling detector
as a background process. Use 'stop' or 'status' to
terminate the detector process or verify its status.
--cpu-set-max {FREQUENCY%|+|-}
Set CPU max scaling frequency as % of hw max.
Use '+' or '-' to set frequency to hardware max or min.
--dry-run See what the script will do without applying any
frequency changes.
-h, --help Display this help text and exit.
EOF
}
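#
# Example invocations (illustrative; the script name below is assumed):
#   intel-gpu-freq.sh -g all               # active, enforced and hw cap freqs
#   intel-gpu-freq.sh -s 700               # pin GPU min and max freq to 700 MHz
#   intel-gpu-freq.sh -s min=50% -s max=+  # min to 50% of hw max, max to hw max
#   intel-gpu-freq.sh --dry-run -r         # preview a reset to hardware defaults
#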
#
# Parse user input for '-g, --get' option.
# Returns 0 if a value has been provided, otherwise 1.
#
parse_option_get() {
local ret=0
case "$1" in
act) GET_ACT_FREQ=1;;
enf) GET_ENF_FREQ=1;;
cap) GET_CAP_FREQ=1;;
all) GET_ACT_FREQ=1; GET_ENF_FREQ=1; GET_CAP_FREQ=1;;
-*|"")
# No value provided, using default.
GET_ACT_FREQ=1
ret=1
;;
*)
print_usage
exit 1
;;
esac
return ${ret}
}
#
# Validate user input for '-s, --set' option.
# arg1: input value to be validated
# arg2: optional flag indicating input is restricted to %
#
validate_option_set() {
case "$1" in
+|-|[0-9]%|[0-9][0-9]%)
return 0
;;
*[!0-9]*|"")
print_usage
exit 1
;;
esac
[ -z "$2" ] || { print_usage; exit 1; }
}
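#
# E.g. (illustrative): "validate_option_set 75%" and "validate_option_set +"
# are accepted; "validate_option_set 700" passes for the GPU '-s' option,
# while "validate_option_set 700 restricted" exits with usage, since plain
# MHz values are not allowed for '--cpu-set-max'.
#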
#
# Parse script arguments.
#
[ $# -eq 0 ] && { print_usage; exit 1; }
while [ $# -gt 0 ]; do
case "$1" in
-g|--get)
parse_option_get "$2" && shift
;;
-s|--set)
shift
case "$1" in
min=*)
SET_MIN_FREQ=${1#min=}
validate_option_set "${SET_MIN_FREQ}"
;;
max=*)
SET_MAX_FREQ=${1#max=}
validate_option_set "${SET_MAX_FREQ}"
;;
*)
SET_MIN_FREQ=$1
validate_option_set "${SET_MIN_FREQ}"
SET_MAX_FREQ=${SET_MIN_FREQ}
;;
esac
;;
-r|--reset)
RESET_FREQ=1
SET_MIN_FREQ="-"
SET_MAX_FREQ="+"
;;
-m|--monitor)
MONITOR_FREQ=act
parse_option_get "$2" && MONITOR_FREQ=$2 && shift
;;
-d|--detect-thrott)
DETECT_THROTT=start
case "$2" in
start|stop|status)
DETECT_THROTT=$2
shift
;;
esac
;;
--cpu-set-max)
shift
CPU_SET_MAX_FREQ=$1
validate_option_set "${CPU_SET_MAX_FREQ}" restricted
;;
--dry-run)
DRY_RUN=1
;;
-h|--help)
print_usage
exit 0
;;
*)
print_usage
exit 1
;;
esac
shift
done
#
# Main
#
RET=0
identify_intel_gpu || {
log INFO "No Intel GPU detected"
exit 0
}
[ -n "${SET_MIN_FREQ}${SET_MAX_FREQ}" ] && { set_freq || RET=$?; }
print_freq_info
[ -n "${DETECT_THROTT}" ] && detect_throttling ${DETECT_THROTT}
[ -n "${CPU_SET_MAX_FREQ}" ] && { set_cpu_freq_max || RET=$?; }
[ -n "${MONITOR_FREQ}" ] && {
log INFO "Entering frequency monitoring mode"
sleep 2
exec watch -d -n 1 "$0" -g "${MONITOR_FREQ}"
}
exit ${RET}


@@ -1,24 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC1091 # the path is created in build-kdl and
# its existence is checked here
terminate() {
echo "ci-kdl.sh caught SIGTERM signal! propagating to child processes"
for job in $(jobs -p)
do
kill -15 "$job"
done
}
trap terminate SIGTERM
if [ -f /ci-kdl.venv/bin/activate ]; then
source /ci-kdl.venv/bin/activate
/ci-kdl.venv/bin/python /ci-kdl.venv/bin/ci-kdl | tee -a /results/kdl.log &
child=$!
wait $child
mv kdl_*.json /results/kdl.json
else
echo -e "Not possible to activate ci-kdl virtual environment"
fi


@@ -13,7 +13,7 @@ fi
xinit /bin/sh "${_XORG_SCRIPT}" -- /usr/bin/Xorg vt45 -noreset -s 0 -dpms -logfile /Xorg.0.log & xinit /bin/sh "${_XORG_SCRIPT}" -- /usr/bin/Xorg vt45 -noreset -s 0 -dpms -logfile /Xorg.0.log &
# Wait for xorg to be ready for connections. # Wait for xorg to be ready for connections.
for _ in 1 2 3 4 5; do for i in 1 2 3 4 5; do
if [ -e "${_FLAG_FILE}" ]; then if [ -e "${_FLAG_FILE}" ]; then
break break
fi fi


@@ -1,67 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC1091
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# ALPINE_X86_64_BUILD_TAG
set -e
set -o xtrace
export LLVM_VERSION="${LLVM_VERSION:=16}"
EPHEMERAL=(
)
DEPS=(
bash
bison
ccache
clang16-dev
cmake
clang-dev
coreutils
curl
flex
gcc
g++
git
gettext
glslang
linux-headers
llvm16-static
llvm16-dev
meson
expat-dev
elfutils-dev
libdrm-dev
libselinux-dev
libva-dev
libpciaccess-dev
zlib-dev
python3-dev
py3-cparser
py3-mako
py3-ply
vulkan-headers
spirv-tools-dev
util-macros
wayland-dev
wayland-protocols
)
apk --no-cache add "${DEPS[@]}" "${EPHEMERAL[@]}"
. .gitlab-ci/container/build-llvm-spirv.sh
. .gitlab-ci/container/build-libclc.sh
. .gitlab-ci/container/container_pre_build.sh
############### Uninstall the build software
apk del "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh


@@ -1,29 +0,0 @@
#!/usr/bin/env bash
# This is a ci-templates build script to generate a container for LAVA SSH client.
# shellcheck disable=SC1091
set -e
set -o xtrace
EPHEMERAL=(
)
# We only need these very basic packages to run the tests.
DEPS=(
openssh-client # for ssh
iputils # for ping
bash
curl
)
apk --no-cache add "${DEPS[@]}" "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_pre_build.sh
############### Uninstall the build software
apk del "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh


@@ -0,0 +1,57 @@
CONFIG_LOCALVERSION_AUTO=y
CONFIG_DEBUG_KERNEL=y
# abootimg with a 'dummy' rootfs fails with root=/dev/nfs
CONFIG_BLK_DEV_INITRD=n
CONFIG_DEVFREQ_GOV_PERFORMANCE=y
CONFIG_DEVFREQ_GOV_POWERSAVE=y
CONFIG_DEVFREQ_GOV_USERSPACE=y
CONFIG_DEVFREQ_GOV_PASSIVE=y
CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND=y
CONFIG_DRM=y
CONFIG_DRM_ETNAVIV=y
CONFIG_DRM_ROCKCHIP=y
CONFIG_DRM_PANFROST=y
CONFIG_DRM_LIMA=y
CONFIG_DRM_PANEL_SIMPLE=y
CONFIG_PWM_CROS_EC=y
CONFIG_BACKLIGHT_PWM=y
CONFIG_ROCKCHIP_CDN_DP=n
CONFIG_SPI_ROCKCHIP=y
CONFIG_PWM_ROCKCHIP=y
CONFIG_PHY_ROCKCHIP_DP=y
CONFIG_DWMAC_ROCKCHIP=y
CONFIG_MFD_RK808=y
CONFIG_REGULATOR_RK808=y
CONFIG_RTC_DRV_RK808=y
CONFIG_COMMON_CLK_RK808=y
CONFIG_REGULATOR_FAN53555=y
CONFIG_REGULATOR=y
CONFIG_REGULATOR_VCTRL=y
CONFIG_KASAN=n
CONFIG_KASAN_INLINE=n
CONFIG_STACKTRACE=n
CONFIG_TMPFS=y
CONFIG_PROVE_LOCKING=n
CONFIG_DEBUG_LOCKDEP=n
CONFIG_SOFTLOCKUP_DETECTOR=n
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=n
CONFIG_FW_LOADER_COMPRESS=y
CONFIG_USB_USBNET=y
CONFIG_NETDEVICES=y
CONFIG_USB_NET_DRIVERS=y
CONFIG_USB_RTL8152=y
CONFIG_USB_NET_AX8817X=y
CONFIG_USB_NET_SMSC95XX=y


@@ -0,0 +1,149 @@
CONFIG_LOCALVERSION_AUTO=y
CONFIG_DEBUG_KERNEL=y
# abootimg with a 'dummy' rootfs fails with root=/dev/nfs
CONFIG_BLK_DEV_INITRD=n
CONFIG_DEVFREQ_GOV_PERFORMANCE=y
CONFIG_DEVFREQ_GOV_POWERSAVE=y
CONFIG_DEVFREQ_GOV_USERSPACE=y
CONFIG_DEVFREQ_GOV_PASSIVE=y
CONFIG_DRM=y
CONFIG_DRM_ROCKCHIP=y
CONFIG_DRM_PANFROST=y
CONFIG_DRM_LIMA=y
CONFIG_DRM_PANEL_SIMPLE=y
CONFIG_DRM_MSM=y
CONFIG_DRM_I2C_ADV7511=y
CONFIG_DRM_I2C_ADV7533=y
CONFIG_PWM_CROS_EC=y
CONFIG_BACKLIGHT_PWM=y
CONFIG_ROCKCHIP_CDN_DP=n
CONFIG_SPI_ROCKCHIP=y
CONFIG_PWM_ROCKCHIP=y
CONFIG_PHY_ROCKCHIP_DP=y
CONFIG_DWMAC_ROCKCHIP=y
CONFIG_STMMAC_ETH=y
CONFIG_TYPEC_FUSB302=y
CONFIG_TYPEC=y
CONFIG_TYPEC_TCPM=y
# MSM platform bits
CONFIG_QCOM_RPMHPD=y
CONFIG_QCOM_RPMPD=y
CONFIG_SDM_GPUCC_845=y
CONFIG_SDM_VIDEOCC_845=y
CONFIG_SDM_DISPCC_845=y
CONFIG_SDM_LPASSCC_845=y
CONFIG_SDM_CAMCC_845=y
CONFIG_RESET_QCOM_PDC=y
CONFIG_DRM_TI_SN65DSI86=y
CONFIG_I2C_QCOM_GENI=y
CONFIG_SPI_QCOM_GENI=y
CONFIG_PHY_QCOM_QUSB2=y
CONFIG_PHY_QCOM_QMP=y
CONFIG_QCOM_LLCC=y
CONFIG_QCOM_SPMI_TEMP_ALARM=y
CONFIG_QCOM_CLK_APCC_MSM8996=y
CONFIG_POWER_RESET_QCOM_PON=y
CONFIG_RTC_DRV_PM8XXX=y
CONFIG_INTERCONNECT=y
CONFIG_INTERCONNECT_QCOM=y
CONFIG_INTERCONNECT_QCOM_SDM845=y
CONFIG_INTERCONNECT_QCOM_MSM8916=y
CONFIG_INTERCONNECT_QCOM_OSM_L3=y
CONFIG_INTERCONNECT_QCOM_SC7180=y
CONFIG_QCOM_WDT=y
CONFIG_CRYPTO_DEV_QCOM_RNG=y
# db410c ethernet
CONFIG_USB_RTL8152=y
# db820c ethernet
CONFIG_ATL1C=y
CONFIG_ARCH_ALPINE=n
CONFIG_ARCH_BCM2835=n
CONFIG_ARCH_BCM_IPROC=n
CONFIG_ARCH_BERLIN=n
CONFIG_ARCH_BRCMSTB=n
CONFIG_ARCH_EXYNOS=n
CONFIG_ARCH_K3=n
CONFIG_ARCH_LAYERSCAPE=n
CONFIG_ARCH_LG1K=n
CONFIG_ARCH_HISI=n
CONFIG_ARCH_MVEBU=n
CONFIG_ARCH_SEATTLE=n
CONFIG_ARCH_SYNQUACER=n
CONFIG_ARCH_RENESAS=n
CONFIG_ARCH_R8A774A1=n
CONFIG_ARCH_R8A774C0=n
CONFIG_ARCH_R8A7795=n
CONFIG_ARCH_R8A7796=n
CONFIG_ARCH_R8A77965=n
CONFIG_ARCH_R8A77970=n
CONFIG_ARCH_R8A77980=n
CONFIG_ARCH_R8A77990=n
CONFIG_ARCH_R8A77995=n
CONFIG_ARCH_STRATIX10=n
CONFIG_ARCH_TEGRA=n
CONFIG_ARCH_SPRD=n
CONFIG_ARCH_THUNDER=n
CONFIG_ARCH_THUNDER2=n
CONFIG_ARCH_UNIPHIER=n
CONFIG_ARCH_VEXPRESS=n
CONFIG_ARCH_XGENE=n
CONFIG_ARCH_ZX=n
CONFIG_ARCH_ZYNQMP=n
# Strip out some stuff we don't need for graphics testing, to reduce
# the build.
CONFIG_CAN=n
CONFIG_WIRELESS=n
CONFIG_RFKILL=n
CONFIG_WLAN=n
CONFIG_REGULATOR_FAN53555=y
CONFIG_REGULATOR=y
CONFIG_REGULATOR_VCTRL=y
CONFIG_KASAN=n
CONFIG_KASAN_INLINE=n
CONFIG_STACKTRACE=n
CONFIG_TMPFS=y
CONFIG_PROVE_LOCKING=n
CONFIG_DEBUG_LOCKDEP=n
CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
CONFIG_DETECT_HUNG_TASK=y
CONFIG_FW_LOADER_COMPRESS=y
CONFIG_FW_LOADER_USER_HELPER=n
CONFIG_USB_USBNET=y
CONFIG_NETDEVICES=y
CONFIG_USB_NET_DRIVERS=y
CONFIG_USB_RTL8152=y
CONFIG_USB_NET_AX8817X=y
CONFIG_USB_NET_SMSC95XX=y
# For amlogic
CONFIG_MESON_GXL_PHY=y
CONFIG_MDIO_BUS_MUX_MESON_G12A=y
CONFIG_DRM_MESON=y
# For Mediatek
CONFIG_DRM_MEDIATEK=y
CONFIG_PWM_MEDIATEK=y
CONFIG_DRM_MEDIATEK_HDMI=y
CONFIG_GNSS_MTK_SERIAL=y
CONFIG_HW_RANDOM_MTK=y
CONFIG_MTK_DEVAPC=y
CONFIG_PWM_MTK_DISP=y
CONFIG_MTK_CMDQ=y


@@ -1,44 +1,34 @@
#!/usr/bin/env bash #!/bin/bash
set -e set -e
set -o xtrace set -o xtrace
# Fetch the arm-built rootfs image and unpack it in our x86_64 container (saves # Fetch the arm-built rootfs image and unpack it in our x86 container (saves
# network transfer, disk usage, and runtime on test jobs) # network transfer, disk usage, and runtime on test jobs)
# shellcheck disable=SC2154 # arch is assigned in previous scripts if wget -q --method=HEAD "${ARTIFACTS_PREFIX}/${FDO_UPSTREAM_REPO}/${ARTIFACTS_SUFFIX}/${arch}/done"; then
if curl -X HEAD -s "${ARTIFACTS_PREFIX}/${FDO_UPSTREAM_REPO}/${ARTIFACTS_SUFFIX}/${arch}/done"; then
ARTIFACTS_URL="${ARTIFACTS_PREFIX}/${FDO_UPSTREAM_REPO}/${ARTIFACTS_SUFFIX}/${arch}" ARTIFACTS_URL="${ARTIFACTS_PREFIX}/${FDO_UPSTREAM_REPO}/${ARTIFACTS_SUFFIX}/${arch}"
else else
ARTIFACTS_URL="${ARTIFACTS_PREFIX}/${CI_PROJECT_PATH}/${ARTIFACTS_SUFFIX}/${arch}" ARTIFACTS_URL="${ARTIFACTS_PREFIX}/${CI_PROJECT_PATH}/${ARTIFACTS_SUFFIX}/${arch}"
fi fi
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \ wget ${ARTIFACTS_URL}/lava-rootfs.tgz -O rootfs.tgz
"${ARTIFACTS_URL}"/lava-rootfs.tar.zst -o rootfs.tar.zst mkdir -p /rootfs-$arch
mkdir -p /rootfs-"$arch" tar -C /rootfs-$arch '--exclude=./dev/*' -zxf rootfs.tgz
tar -C /rootfs-"$arch" '--exclude=./dev/*' --zstd -xf rootfs.tar.zst rm rootfs.tgz
rm rootfs.tar.zst
if [[ $arch == "arm64" ]]; then if [[ $arch == "arm64" ]]; then
mkdir -p /baremetal-files mkdir -p /baremetal-files
pushd /baremetal-files pushd /baremetal-files
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \ wget ${ARTIFACTS_URL}/Image
-O "${KERNEL_IMAGE_BASE}"/arm64/Image wget ${ARTIFACTS_URL}/Image.gz
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \ wget ${ARTIFACTS_URL}/cheza-kernel
-O "${KERNEL_IMAGE_BASE}"/arm64/Image.gz
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}"/arm64/cheza-kernel
DEVICE_TREES="" DEVICE_TREES="apq8016-sbc.dtb apq8096-db820c.dtb"
DEVICE_TREES="$DEVICE_TREES apq8016-sbc.dtb"
DEVICE_TREES="$DEVICE_TREES apq8096-db820c.dtb"
DEVICE_TREES="$DEVICE_TREES tegra210-p3450-0000.dtb"
DEVICE_TREES="$DEVICE_TREES imx8mq-nitrogen.dtb"
for DTB in $DEVICE_TREES; do for DTB in $DEVICE_TREES; do
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \ wget ${ARTIFACTS_URL}/$DTB
-O "${KERNEL_IMAGE_BASE}/arm64/$DTB"
done done
popd popd
@@ -46,16 +36,12 @@ elif [[ $arch == "armhf" ]]; then
mkdir -p /baremetal-files mkdir -p /baremetal-files
pushd /baremetal-files pushd /baremetal-files
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \ wget ${ARTIFACTS_URL}/zImage
-O "${KERNEL_IMAGE_BASE}"/armhf/zImage
DEVICE_TREES="" DEVICE_TREES="imx6q-cubox-i.dtb"
DEVICE_TREES="$DEVICE_TREES imx6q-cubox-i.dtb"
DEVICE_TREES="$DEVICE_TREES tegra124-jetson-tk1.dtb"
for DTB in $DEVICE_TREES; do for DTB in $DEVICE_TREES; do
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \ wget ${ARTIFACTS_URL}/$DTB
-O "${KERNEL_IMAGE_BASE}/armhf/$DTB"
done done
popd popd


@@ -1,58 +0,0 @@
#!/usr/bin/env bash
set -ex
ANGLE_REV="0518a3ff4d4e7e5b2ce8203358f719613a31c118"
# DEPOT tools
git clone --depth 1 https://chromium.googlesource.com/chromium/tools/depot_tools.git
PWD=$(pwd)
export PATH=$PWD/depot_tools:$PATH
export DEPOT_TOOLS_UPDATE=0
mkdir /angle-build
pushd /angle-build
git init
git remote add origin https://chromium.googlesource.com/angle/angle.git
git fetch --depth 1 origin "$ANGLE_REV"
git checkout FETCH_HEAD
# source preparation
python3 scripts/bootstrap.py
mkdir -p build/config
gclient sync
sed -i "/catapult/d" testing/BUILD.gn
mkdir -p out/Release
echo '
is_debug = false
angle_enable_swiftshader = false
angle_enable_null = false
angle_enable_gl = false
angle_enable_vulkan = true
angle_has_histograms = false
build_angle_trace_perf_tests = false
build_angle_deqp_tests = false
angle_use_custom_libvulkan = false
dcheck_always_on=true
' > out/Release/args.gn
if [[ "$DEBIAN_ARCH" = "arm64" ]]; then
build/linux/sysroot_scripts/install-sysroot.py --arch=arm64
fi
gn gen out/Release
# depot_tools overrides ninja with a version that doesn't work. We want
# ninja with FDO_CI_CONCURRENT anyway.
/usr/local/bin/ninja -C out/Release/
mkdir /angle
cp out/Release/lib*GL*.so /angle/
ln -s libEGL.so /angle/libEGL.so.1
ln -s libGLESv2.so /angle/libGLESv2.so.2
rm -rf out
popd
rm -rf ./depot_tools


@@ -1,22 +1,15 @@
#!/usr/bin/env bash #!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_X86_64_TEST_GL_TAG
# DEBIAN_X86_64_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
set -ex set -ex
APITRACE_VERSION="0a6506433e1f9f7b69757b4e5730326970c4321a" APITRACE_VERSION="170424754bb46002ba706e16ee5404b61988d74a"
git clone https://github.com/apitrace/apitrace.git --single-branch --no-checkout /apitrace git clone https://github.com/apitrace/apitrace.git --single-branch --no-checkout /apitrace
pushd /apitrace pushd /apitrace
git checkout "$APITRACE_VERSION" git checkout "$APITRACE_VERSION"
git submodule update --init --depth 1 --recursive git submodule update --init --depth 1 --recursive
cmake -S . -B _build -G Ninja -DCMAKE_BUILD_TYPE=Release -DENABLE_GUI=False -DENABLE_WAFFLE=on $EXTRA_CMAKE_ARGS cmake -S . -B _build -G Ninja -DCMAKE_BUILD_TYPE=Release -DENABLE_GUI=False -DENABLE_WAFFLE=on $EXTRA_CMAKE_ARGS
cmake --build _build --parallel --target apitrace eglretrace ninja -C _build
mkdir build mkdir build
cp _build/apitrace build cp _build/apitrace build
cp _build/eglretrace build cp _build/eglretrace build


@@ -1,44 +1,63 @@
#!/usr/bin/env bash #!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -ex set -ex
git config --global user.email "mesa@example.com" # Pull down repositories that crosvm depends on to cros checkout-like locations.
git config --global user.name "Mesa CI" CROS_ROOT=/
THIRD_PARTY_ROOT=$CROS_ROOT/third_party
mkdir -p $THIRD_PARTY_ROOT
AOSP_EXTERNAL_ROOT=$CROS_ROOT/aosp/external
mkdir -p $AOSP_EXTERNAL_ROOT
PLATFORM2_ROOT=/platform2
CROSVM_VERSION=1641c55bcc922588e24de73e9cca7b5e4005bd6d PLATFORM2_COMMIT=72e56e66ccf3d2ea48f5686bd1f772379c43628b
git clone --single-branch -b main --no-checkout https://chromium.googlesource.com/crosvm/crosvm /platform/crosvm git clone --single-branch --no-checkout https://chromium.googlesource.com/chromiumos/platform2 $PLATFORM2_ROOT
pushd /platform/crosvm pushd $PLATFORM2_ROOT
git checkout "$CROSVM_VERSION" git checkout $PLATFORM2_COMMIT
git submodule update --init
VIRGLRENDERER_VERSION=d9c002fac153b834a2c17731f2b85c36e333e102
rm -rf third_party/virglrenderer
git clone --single-branch -b main --no-checkout https://gitlab.freedesktop.org/virgl/virglrenderer.git third_party/virglrenderer
pushd third_party/virglrenderer
git checkout "$VIRGLRENDERER_VERSION"
meson setup build/ -D libdir=lib -D render-server-worker=process -D venus=true $EXTRA_MESON_ARGS
meson install -C build
popd popd
cargo update -p pkg-config@0.3.26 --precise 0.3.27 # minijail does not exist in upstream linux distros.
MINIJAIL_COMMIT=debdf5de5a0ae3b667bee2f8fb1f755b0b3f5a6c
git clone --single-branch --no-checkout https://android.googlesource.com/platform/external/minijail $AOSP_EXTERNAL_ROOT/minijail
pushd $AOSP_EXTERNAL_ROOT/minijail
git checkout $MINIJAIL_COMMIT
make
cp libminijail.so /usr/lib/x86_64-linux-gnu/
popd
# Pull the cras library for audio access.
ADHD_COMMIT=a1e0869b95c845c4fe6234a7b92fdfa6acc1e809
git clone --single-branch --no-checkout https://chromium.googlesource.com/chromiumos/third_party/adhd $THIRD_PARTY_ROOT/adhd
pushd $THIRD_PARTY_ROOT/adhd
git checkout $ADHD_COMMIT
popd
# Pull vHost (dataplane for virtio backend drivers)
VHOST_COMMIT=3091854e27242d09453004b011f701fa29c0b8e8
git clone --single-branch --no-checkout https://chromium.googlesource.com/chromiumos/third_party/rust-vmm/vhost $THIRD_PARTY_ROOT/rust-vmm/vhost
pushd $THIRD_PARTY_ROOT/rust-vmm/vhost
git checkout $VHOST_COMMIT
popd
CROSVM_VERSION=e42a43d880b0364b55559dbeade3af174f929001
git clone --single-branch --no-checkout https://chromium.googlesource.com/chromiumos/platform/crosvm /platform/crosvm
pushd /platform/crosvm
git checkout "$CROSVM_VERSION"
RUSTFLAGS='-L native=/usr/local/lib' cargo install \ RUSTFLAGS='-L native=/usr/local/lib' cargo install \
bindgen-cli \ bindgen \
--locked \
-j ${FDO_CI_CONCURRENT:-4} \ -j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local \ --root /usr/local \
--version 0.65.1 \
$EXTRA_CARGO_ARGS $EXTRA_CARGO_ARGS
CROSVM_USE_SYSTEM_MINIGBM=1 CROSVM_USE_SYSTEM_VIRGLRENDERER=1 RUSTFLAGS='-L native=/usr/local/lib' cargo install \ RUSTFLAGS='-L native=/usr/local/lib' cargo install \
-j ${FDO_CI_CONCURRENT:-4} \ -j ${FDO_CI_CONCURRENT:-4} \
--locked \ --locked \
--features 'default-no-sandbox gpu x virgl_renderer' \ --features 'default-no-sandbox gpu x virgl_renderer virgl_renderer_next' \
--path . \ --path . \
--root /usr/local \ --root /usr/local \
$EXTRA_CARGO_ARGS $EXTRA_CARGO_ARGS
popd popd
rm -rf /platform/crosvm rm -rf $PLATFORM2_ROOT $AOSP_EXTERNAL_ROOT/minijail $THIRD_PARTY_ROOT/adhd $THIRD_PARTY_ROOT/rust-vmm /platform/crosvm


@@ -1,70 +1,9 @@
#!/usr/bin/env bash #!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_X86_64_TEST_ANDROID_TAG
# DEBIAN_X86_64_TEST_GL_TAG
# DEBIAN_X86_64_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
set -ex set -ex
DEQP_RUNNER_VERSION=0.18.0 cargo install --locked deqp-runner \
-j ${FDO_CI_CONCURRENT:-4} \
DEQP_RUNNER_GIT_URL="${DEQP_RUNNER_GIT_URL:-https://gitlab.freedesktop.org/mesa/deqp-runner.git}" --version 0.9.0 \
--root /usr/local \
if [ -n "${DEQP_RUNNER_GIT_TAG}${DEQP_RUNNER_GIT_REV}" ]; then $EXTRA_CARGO_ARGS
# Build and install from source
DEQP_RUNNER_CARGO_ARGS="--git $DEQP_RUNNER_GIT_URL"
if [ -n "${DEQP_RUNNER_GIT_TAG}" ]; then
DEQP_RUNNER_CARGO_ARGS="--tag ${DEQP_RUNNER_GIT_TAG} ${DEQP_RUNNER_CARGO_ARGS}"
DEQP_RUNNER_GIT_CHECKOUT="$DEQP_RUNNER_GIT_TAG"
else
DEQP_RUNNER_CARGO_ARGS="--rev ${DEQP_RUNNER_GIT_REV} ${DEQP_RUNNER_CARGO_ARGS}"
DEQP_RUNNER_GIT_CHECKOUT="$DEQP_RUNNER_GIT_REV"
fi
DEQP_RUNNER_CARGO_ARGS="${DEQP_RUNNER_CARGO_ARGS} ${EXTRA_CARGO_ARGS}"
else
# Install from package registry
DEQP_RUNNER_CARGO_ARGS="--version ${DEQP_RUNNER_VERSION} ${EXTRA_CARGO_ARGS} -- deqp-runner"
DEQP_RUNNER_GIT_CHECKOUT="v$DEQP_RUNNER_VERSION"
fi
if [[ "$RUST_TARGET" != *-android ]]; then
cargo install --locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local \
${DEQP_RUNNER_CARGO_ARGS}
else
mkdir -p /deqp-runner
pushd /deqp-runner
git clone --branch "$DEQP_RUNNER_GIT_CHECKOUT" --depth 1 "$DEQP_RUNNER_GIT_URL" deqp-runner-git
pushd deqp-runner-git
cargo install --locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local --version 2.10.0 \
cargo-ndk
rustup target add $RUST_TARGET
RUSTFLAGS='-C target-feature=+crt-static' cargo ndk --target $RUST_TARGET build --release
mv target/$RUST_TARGET/release/deqp-runner /deqp-runner
cargo uninstall --locked \
--root /usr/local \
cargo-ndk
popd
rm -rf deqp-runner-git
popd
fi
# remove unused test runners to shrink images for the Mesa CI build (not kernel,
# which chooses its own deqp branch)
if [ -z "${DEQP_RUNNER_GIT_TAG}${DEQP_RUNNER_GIT_REV}" ]; then
rm -f /usr/local/bin/igt-runner
fi


@@ -1,264 +1,82 @@
#!/usr/bin/env bash #!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_X86_64_TEST_ANDROID_TAG
# DEBIAN_X86_64_TEST_GL_TAG
# DEBIAN_X86_64_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
set -ex -o pipefail
# See `deqp_build_targets` below for which release is used to produce which
# binary. Unless this comment has bitrotten:
# - the VK release produces `deqp-vk`,
# - the GL release produces `glcts`, and
# - the GLES release produces `deqp-gles*` and `deqp-egl`
DEQP_VK_VERSION=1.3.8.2
DEQP_GL_VERSION=4.6.4.0
DEQP_GLES_VERSION=3.2.10.0
# Patches to VulkanCTS may come from commits in their repo (listed in
# cts_commits_to_backport) or patch files stored in our repo (in the patch
# directory `$OLDPWD/.gitlab-ci/container/patches/` listed in cts_patch_files).
# Both list variables would have comments explaining the reasons behind the
# patches.
# shellcheck disable=SC2034
vk_cts_commits_to_backport=(
# Fix more ASAN errors due to missing virtual destructors
dd40bcfef1b4035ea55480b6fd4d884447120768
# Remove "unused shader stages" tests
7dac86c6bbd15dec91d7d9a98cd6dd57c11092a7
)
# shellcheck disable=SC2034
vk_cts_patch_files=(
)
if [ "${DEQP_TARGET}" = 'android' ]; then
vk_cts_patch_files+=(
build-deqp-vk_Allow-running-on-Android-from-the-command-line.patch
build-deqp-vk_Android-prints-to-stdout-instead-of-logcat.patch
)
fi
# shellcheck disable=SC2034
gl_cts_commits_to_backport=(
)
# shellcheck disable=SC2034
gl_cts_patch_files=(
)
if [ "${DEQP_TARGET}" = 'android' ]; then
gl_cts_patch_files+=(
build-deqp-gl_Allow-running-on-Android-from-the-command-line.patch
build-deqp-gl_Android-prints-to-stdout-instead-of-logcat.patch
)
fi
# shellcheck disable=SC2034
# GLES builds also EGL
gles_cts_commits_to_backport=(
# Implement support for the EGL_EXT_config_select_group extension
88ba9ac270db5be600b1ecacbc6d9db0c55d5be4
)
# shellcheck disable=SC2034
gles_cts_patch_files=(
# Correct detection mechanism for EGL_EXT_config_select_group extension
build-deqp-egl_Correct-EGL_EXT_config_select_group-extension-query.patch
)
if [ "${DEQP_TARGET}" = 'android' ]; then
gles_cts_patch_files+=(
build-deqp-gles_Allow-running-on-Android-from-the-command-line.patch
build-deqp-gles_Android-prints-to-stdout-instead-of-logcat.patch
)
fi
### Careful editing anything below this line
set -ex
git config --global user.email "mesa@example.com" git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI" git config --global user.name "Mesa CI"
# shellcheck disable=SC2153
case "${DEQP_API}" in
VK) DEQP_VERSION="vulkan-cts-$DEQP_VK_VERSION";;
GL) DEQP_VERSION="opengl-cts-$DEQP_GL_VERSION";;
GLES) DEQP_VERSION="opengl-es-cts-$DEQP_GLES_VERSION";;
esac
git clone \ git clone \
https://github.com/KhronosGroup/VK-GL-CTS.git \ https://github.com/KhronosGroup/VK-GL-CTS.git \
-b $DEQP_VERSION \ -b vulkan-cts-1.2.7.1 \
--depth 1 \ --depth 1 \
/VK-GL-CTS /VK-GL-CTS
pushd /VK-GL-CTS pushd /VK-GL-CTS
mkdir -p /deqp
# shellcheck disable=SC2153
deqp_api=${DEQP_API,,}
cts_commits_to_backport="${deqp_api}_cts_commits_to_backport[@]"
for commit in "${!cts_commits_to_backport}"
do
PATCH_URL="https://github.com/KhronosGroup/VK-GL-CTS/commit/$commit.patch"
echo "Apply patch to ${DEQP_API} CTS from $PATCH_URL"
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 $PATCH_URL | \
git am -
done
cts_patch_files="${deqp_api}_cts_patch_files[@]"
for patch in "${!cts_patch_files}"
do
echo "Apply patch to ${DEQP_API} CTS from $patch"
git am < $OLDPWD/.gitlab-ci/container/patches/$patch
done
{
echo "dEQP base version $DEQP_VERSION"
echo "The following local patches are applied on top:"
git log --reverse --oneline $DEQP_VERSION.. --format=%s | sed 's/^/- /'
} > /deqp/version-$deqp_api
# --insecure is due to SSL cert failures hitting sourceforge for zlib and # --insecure is due to SSL cert failures hitting sourceforge for zlib and
# libpng (sigh). The archives get their checksums checked anyway, and git # libpng (sigh). The archives get their checksums checked anyway, and git
# always goes through ssh or https. # always goes through ssh or https.
python3 external/fetch_sources.py --insecure python3 external/fetch_sources.py --insecure
mkdir -p /deqp
# Save the testlog stylesheets: # Save the testlog stylesheets:
cp doc/testlog-stylesheet/testlog.{css,xsl} /deqp cp doc/testlog-stylesheet/testlog.{css,xsl} /deqp
popd popd
pushd /deqp pushd /deqp
# When including EGL/X11 testing, do that build first and save off its
if [ "${DEQP_API}" = 'GLES' ]; then # deqp-egl binary.
if [ "${DEQP_TARGET}" = 'android' ]; then
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=android \
-DCMAKE_BUILD_TYPE=Release \
$EXTRA_CMAKE_ARGS
mold --run ninja modules/egl/deqp-egl
mv /deqp/modules/egl/deqp-egl /deqp/modules/egl/deqp-egl-android
else
# When including EGL/X11 testing, do that build first and save off its
# deqp-egl binary.
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=x11_egl_glx \
-DCMAKE_BUILD_TYPE=Release \
$EXTRA_CMAKE_ARGS
mold --run ninja modules/egl/deqp-egl
mv /deqp/modules/egl/deqp-egl /deqp/modules/egl/deqp-egl-x11
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=wayland \
-DCMAKE_BUILD_TYPE=Release \
$EXTRA_CMAKE_ARGS
mold --run ninja modules/egl/deqp-egl
mv /deqp/modules/egl/deqp-egl /deqp/modules/egl/deqp-egl-wayland
fi
fi
cmake -S /VK-GL-CTS -B . -G Ninja \ cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=${DEQP_TARGET} \ -DDEQP_TARGET=x11_egl_glx \
-DCMAKE_BUILD_TYPE=Release \ -DCMAKE_BUILD_TYPE=Release \
$EXTRA_CMAKE_ARGS $EXTRA_CMAKE_ARGS
ninja modules/egl/deqp-egl
cp /deqp/modules/egl/deqp-egl /deqp/modules/egl/deqp-egl-x11
# Make sure `default` doesn't silently stop detecting one of the platforms we care about
if [ "${DEQP_TARGET}" = 'default' ]; then
grep -q DEQP_SUPPORT_WAYLAND=1 build.ninja
grep -q DEQP_SUPPORT_X11=1 build.ninja
grep -q DEQP_SUPPORT_XCB=1 build.ninja
fi
deqp_build_targets=() cmake -S /VK-GL-CTS -B . -G Ninja \
case "${DEQP_API}" in -DDEQP_TARGET=${DEQP_TARGET:-x11_glx} \
VK) -DCMAKE_BUILD_TYPE=Release \
deqp_build_targets+=(deqp-vk) $EXTRA_CMAKE_ARGS
;; ninja
GL)
deqp_build_targets+=(glcts)
;;
GLES)
deqp_build_targets+=(deqp-gles{2,3,31})
# deqp-egl also comes from this build, but it is handled separately above.
;;
esac
if [ "${DEQP_TARGET}" != 'android' ]; then
deqp_build_targets+=(testlog-to-xml)
deqp_build_targets+=(testlog-to-csv)
deqp_build_targets+=(testlog-to-junit)
fi
mold --run ninja "${deqp_build_targets[@]}" mv /deqp/modules/egl/deqp-egl-x11 /deqp/modules/egl/deqp-egl
if [ "${DEQP_TARGET}" != 'android' ]; then # Copy out the mustpass lists we want.
# Copy out the mustpass lists we want. mkdir /deqp/mustpass
mkdir -p /deqp/mustpass for mustpass in $(< /VK-GL-CTS/external/vulkancts/mustpass/master/vk-default.txt) ; do
cat /VK-GL-CTS/external/vulkancts/mustpass/master/$mustpass \
>> /deqp/mustpass/vk-master.txt
done
if [ "${DEQP_API}" = 'VK' ]; then cp \
for mustpass in $(< /VK-GL-CTS/external/vulkancts/mustpass/main/vk-default.txt) ; do /deqp/external/openglcts/modules/gl_cts/data/mustpass/gles/aosp_mustpass/3.2.6.x/*.txt \
cat /VK-GL-CTS/external/vulkancts/mustpass/main/$mustpass \ /deqp/mustpass/.
>> /deqp/mustpass/vk-main.txt cp \
done /deqp/external/openglcts/modules/gl_cts/data/mustpass/egl/aosp_mustpass/3.2.6.x/egl-master.txt \
fi /deqp/mustpass/.
cp \
/deqp/external/openglcts/modules/gl_cts/data/mustpass/gles/khronos_mustpass/3.2.6.x/*-master.txt \
/deqp/mustpass/.
cp \
/deqp/external/openglcts/modules/gl_cts/data/mustpass/gl/khronos_mustpass/4.6.1.x/*-master.txt \
/deqp/mustpass/.
if [ "${DEQP_API}" = 'GL' ]; then # Save *some* executor utils, but otherwise strip things down
cp \ # to reduct deqp build size:
/VK-GL-CTS/external/openglcts/data/mustpass/gl/khronos_mustpass/4.6.1.x/*-main.txt \ mkdir /deqp/executor.save
/deqp/mustpass/ cp /deqp/executor/testlog-to-* /deqp/executor.save
cp \ rm -rf /deqp/executor
/VK-GL-CTS/external/openglcts/data/mustpass/gl/khronos_mustpass_single/4.6.1.x/*-single.txt \ mv /deqp/executor.save /deqp/executor
/deqp/mustpass/
fi
if [ "${DEQP_API}" = 'GLES' ]; then
cp \
/VK-GL-CTS/external/openglcts/data/mustpass/gles/aosp_mustpass/3.2.6.x/*.txt \
/deqp/mustpass/
cp \
/VK-GL-CTS/external/openglcts/data/mustpass/egl/aosp_mustpass/3.2.6.x/egl-main.txt \
/deqp/mustpass/
cp \
/VK-GL-CTS/external/openglcts/data/mustpass/gles/khronos_mustpass/3.2.6.x/*-main.txt \
/deqp/mustpass/
fi
# Save *some* executor utils, but otherwise strip things down
# to reduct deqp build size:
mkdir /deqp/executor.save
cp /deqp/executor/testlog-to-* /deqp/executor.save
rm -rf /deqp/executor
mv /deqp/executor.save /deqp/executor
fi
# Remove other mustpass files, since we saved off the ones we wanted to conventient locations above.
rm -rf /deqp/external/**/mustpass/
rm -rf /deqp/external/vulkancts/modules/vulkan/vk-main*
rm -rf /deqp/external/vulkancts/modules/vulkan/vk-default
rm -rf /deqp/external/openglcts/modules/gl_cts/data/mustpass
rm -rf /deqp/external/openglcts/modules/cts-runner rm -rf /deqp/external/openglcts/modules/cts-runner
rm -rf /deqp/modules/internal rm -rf /deqp/modules/internal
rm -rf /deqp/execserver rm -rf /deqp/execserver
rm -rf /deqp/framework rm -rf /deqp/framework
find . -depth \( -iname '*cmake*' -o -name '*ninja*' -o -name '*.o' -o -name '*.a' \) -exec rm -rf {} \; find -iname '*cmake*' -o -name '*ninja*' -o -name '*.o' -o -name '*.a' | xargs rm -rf
if [ "${DEQP_API}" = 'VK' ]; then ${STRIP_CMD:-strip} external/vulkancts/modules/vulkan/deqp-vk
${STRIP_CMD:-strip} external/vulkancts/modules/vulkan/deqp-vk ${STRIP_CMD:-strip} external/openglcts/modules/glcts
fi ${STRIP_CMD:-strip} modules/*/deqp-*
if [ "${DEQP_API}" = 'GL' ]; then du -sh *
${STRIP_CMD:-strip} external/openglcts/modules/glcts
fi
if [ "${DEQP_API}" = 'GLES' ]; then
${STRIP_CMD:-strip} modules/*/deqp-*
fi
du -sh ./*
rm -rf /VK-GL-CTS rm -rf /VK-GL-CTS
popd popd


@@ -1,15 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
set -ex
git clone https://github.com/microsoft/DirectX-Headers -b v1.613.1 --depth 1
pushd DirectX-Headers
meson setup build --backend=ninja --buildtype=release -Dbuild-test=false $EXTRA_MESON_ARGS
meson install -C build
popd
rm -rf DirectX-Headers


@@ -1,15 +1,10 @@
#!/bin/bash #!/bin/bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_X86_64_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
set -ex set -ex
git clone https://github.com/ValveSoftware/Fossilize.git git clone https://github.com/ValveSoftware/Fossilize.git
cd Fossilize cd Fossilize
git checkout b43ee42bbd5631ea21fe9a2dee4190d5d875c327 git checkout 72088685d90bc814d14aad5505354ffa8a642789
git submodule update --init git submodule update --init
mkdir build mkdir build
cd build cd build


@@ -1,19 +1,19 @@
#!/usr/bin/env bash #!/bin/bash
set -ex set -ex
GFXRECONSTRUCT_VERSION=761837794a1e57f918a85af7000b12e531b178ae GFXRECONSTRUCT_VERSION=3738decc2f4f9ff183818e5ab213a75a79fb7ab1
git clone https://github.com/LunarG/gfxreconstruct.git \ git clone https://github.com/LunarG/gfxreconstruct.git --single-branch -b master --no-checkout /gfxreconstruct
--single-branch \
-b master \
--no-checkout \
/gfxreconstruct
pushd /gfxreconstruct pushd /gfxreconstruct
git checkout "$GFXRECONSTRUCT_VERSION" git checkout "$GFXRECONSTRUCT_VERSION"
git submodule update --init git submodule update --init
git submodule update git submodule update
cmake -S . -B _build -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX:PATH=/gfxreconstruct/build -DBUILD_WERROR=OFF cmake -S . -B _build -G Ninja -DCMAKE_BUILD_TYPE=Release
cmake --build _build --parallel --target tools/{replay,info}/install/strip ninja -C _build gfxrecon-replay gfxrecon-info
mkdir -p build/bin
install _build/tools/replay/gfxrecon-replay build/bin
install _build/tools/info/gfxrecon-info build/bin
strip build/bin/*
find . -not -path './build' -not -path './build/*' -delete find . -not -path './build' -not -path './build/*' -delete
popd popd


@@ -2,7 +2,7 @@
set -ex set -ex
PARALLEL_DEQP_RUNNER_VERSION=fe557794b5dadd8dbf0eae403296625e03bda18a PARALLEL_DEQP_RUNNER_VERSION=6596b71cf37a7efb4d54acd48c770ed2d4ad6b7e
git clone https://gitlab.freedesktop.org/mesa/parallel-deqp-runner --single-branch -b master --no-checkout /parallel-deqp-runner git clone https://gitlab.freedesktop.org/mesa/parallel-deqp-runner --single-branch -b master --no-checkout /parallel-deqp-runner
pushd /parallel-deqp-runner pushd /parallel-deqp-runner


@@ -1,23 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC1091 # the path is created by the script
set -ex
KDL_REVISION="5056f71b100a68b72b285c6fc845a66a2ed25985"
mkdir ci-kdl.git
pushd ci-kdl.git
git init
git remote add origin https://gitlab.freedesktop.org/gfx-ci/ci-kdl.git
git fetch --depth 1 origin ${KDL_REVISION}
git checkout FETCH_HEAD
popd
python3 -m venv ci-kdl.venv
source ci-kdl.venv/bin/activate
pushd ci-kdl.git
pip install -r requirements.txt
pip install .
popd
rm -rf ci-kdl.git


@@ -1,30 +1,50 @@
#!/usr/bin/env bash #!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC2153
set -ex set -ex
mkdir -p kernel mkdir -p kernel
wget -qO- ${KERNEL_URL} | tar -xj --strip-components=1 -C kernel
pushd kernel pushd kernel
if [[ ${DEBIAN_ARCH} = "arm64" ]]; then # The kernel doesn't like the gold linker (or the old lld in our debians).
KERNEL_IMAGE_NAME+=" cheza-kernel" # Sneak in some override symlinks during kernel build until we can update
fi # debian (they'll get blown away by the rm of the kernel dir at the end).
mkdir -p ld-links
for i in /usr/bin/*-ld /usr/bin/ld; do
i=`basename $i`
ln -sf /usr/bin/$i.bfd ld-links/$i
done
export PATH=`pwd`/ld-links:$PATH
export LOCALVERSION="`basename $KERNEL_URL`"
./scripts/kconfig/merge_config.sh ${DEFCONFIG} ../.gitlab-ci/container/${KERNEL_ARCH}.config
make ${KERNEL_IMAGE_NAME}
for image in ${KERNEL_IMAGE_NAME}; do for image in ${KERNEL_IMAGE_NAME}; do
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \ cp arch/${KERNEL_ARCH}/boot/${image} /lava-files/.
-o "/lava-files/${image}" "${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${image}"
done done
for dtb in ${DEVICE_TREES}; do if [[ -n ${DEVICE_TREES} ]]; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \ make dtbs
-o "/lava-files/${dtb}" "${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${dtb}" cp ${DEVICE_TREES} /lava-files/.
done fi
mkdir -p "/lava-files/rootfs-${DEBIAN_ARCH}" if [[ ${DEBIAN_ARCH} = "amd64" ]]; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \ make modules
-O "${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/modules.tar.zst" INSTALL_MOD_PATH=/lava-files/rootfs-${DEBIAN_ARCH}/ make modules_install
tar --keep-directory-symlink --zstd -xf modules.tar.zst -C "/lava-files/rootfs-${DEBIAN_ARCH}/" fi
if [[ ${DEBIAN_ARCH} = "arm64" ]]; then
make Image.lzma
mkimage \
-f auto \
-A arm \
-O linux \
-d arch/arm64/boot/Image.lzma \
-C lzma\
-b arch/arm64/boot/dts/qcom/sdm845-cheza-r3.dtb \
/lava-files/cheza-kernel
KERNEL_IMAGE_NAME+=" cheza-kernel"
fi
popd popd
rm -rf kernel rm -rf kernel


@@ -1,9 +1,8 @@
#!/usr/bin/env bash #!/bin/bash
set -ex set -ex
export LLVM_CONFIG="llvm-config-${LLVM_VERSION:?"llvm unset!"}" export LLVM_CONFIG="llvm-config-11"
LLVM_TAG="llvmorg-15.0.7"
$LLVM_CONFIG --version $LLVM_CONFIG --version
@@ -12,12 +11,12 @@ git config --global user.name "Mesa CI"
git clone \ git clone \
https://github.com/llvm/llvm-project \ https://github.com/llvm/llvm-project \
--depth 1 \ --depth 1 \
-b "${LLVM_TAG}" \ -b llvmorg-12.0.0-rc3 \
/llvm-project /llvm-project
mkdir /libclc mkdir /libclc
pushd /libclc pushd /libclc
cmake -S /llvm-project/libclc -B . -G Ninja -DLLVM_CONFIG="$LLVM_CONFIG" -DLIBCLC_TARGETS_TO_BUILD="spirv-mesa3d-;spirv64-mesa3d-" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr -DLLVM_SPIRV=/usr/bin/llvm-spirv cmake -S /llvm-project/libclc -B . -G Ninja -DLLVM_CONFIG=$LLVM_CONFIG -DLIBCLC_TARGETS_TO_BUILD="spirv-mesa3d-;spirv64-mesa3d-" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr -DLLVM_SPIRV=/usr/bin/llvm-spirv
ninja ninja
ninja install ninja install
popd popd
@@ -27,5 +26,5 @@ mkdir -p /usr/lib/clc
ln -s /usr/share/clc/spirv64-mesa3d-.spv /usr/lib/clc/ ln -s /usr/share/clc/spirv64-mesa3d-.spv /usr/lib/clc/
ln -s /usr/share/clc/spirv-mesa3d-.spv /usr/lib/clc/ ln -s /usr/share/clc/spirv-mesa3d-.spv /usr/lib/clc/
du -sh ./* du -sh *
rm -rf /libclc /llvm-project rm -rf /libclc /llvm-project


@@ -1,16 +1,14 @@
#!/usr/bin/env bash #!/bin/bash
# Script used for Android and Fedora builds
# shellcheck disable=SC2086 # we want word splitting
set -ex set -ex
export LIBDRM_VERSION=libdrm-2.4.119 export LIBDRM_VERSION=libdrm-2.4.107
curl -L -O --retry 4 -f --retry-all-errors --retry-delay 60 \ wget https://dri.freedesktop.org/libdrm/$LIBDRM_VERSION.tar.xz
https://dri.freedesktop.org/libdrm/"$LIBDRM_VERSION".tar.xz tar -xvf $LIBDRM_VERSION.tar.xz && rm $LIBDRM_VERSION.tar.xz
tar -xvf "$LIBDRM_VERSION".tar.xz && rm "$LIBDRM_VERSION".tar.xz cd $LIBDRM_VERSION
cd "$LIBDRM_VERSION" meson build -D vc4=false -D freedreno=false -D etnaviv=false $EXTRA_MESON_ARGS
meson setup build -D vc4=disabled -D freedreno=disabled -D etnaviv=disabled $EXTRA_MESON_ARGS ninja -C build install
meson install -C build
cd .. cd ..
rm -rf "$LIBDRM_VERSION" rm -rf $LIBDRM_VERSION


@@ -1,22 +0,0 @@
#!/usr/bin/env bash
set -ex
VER="${LLVM_VERSION:?llvm not set}.0.0"
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "https://github.com/KhronosGroup/SPIRV-LLVM-Translator/archive/refs/tags/v${VER}.tar.gz"
tar -xvf "v${VER}.tar.gz" && rm "v${VER}.tar.gz"
mkdir "SPIRV-LLVM-Translator-${VER}/build"
pushd "SPIRV-LLVM-Translator-${VER}/build"
cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr
ninja
ninja install
# For some reason llvm-spirv is not installed by default
ninja llvm-spirv
cp tools/llvm-spirv/llvm-spirv /usr/bin/
popd
du -sh "SPIRV-LLVM-Translator-${VER}"
rm -rf "SPIRV-LLVM-Translator-${VER}"


@@ -1,22 +0,0 @@
#!/usr/bin/env bash
set -ex
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BASE_TAG
# DEBIAN_BUILD_TAG
# FEDORA_X86_64_BUILD_TAG
# KERNEL_ROOTFS_TAG
MOLD_VERSION="2.4.1"
git clone -b v"$MOLD_VERSION" --single-branch --depth 1 https://github.com/rui314/mold.git
pushd mold
cmake -DCMAKE_BUILD_TYPE=Release -D BUILD_TESTING=OFF -D MOLD_LTO=ON
cmake --build . --parallel
cmake --install .
popd
rm -rf mold


@@ -1,25 +0,0 @@
#!/usr/bin/env bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_X86_64_TEST_GL_TAG
set -ex -o pipefail
### Careful editing anything below this line
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
git clone https://github.com/axeldavy/Xnine.git /Xnine
mkdir /Xnine/build
pushd /Xnine/build
git checkout c64753d224c08006bcdcfa7880ada826f27164b1
cmake .. -DBUILD_TESTS=1 -DWITH_DRI3=1 -DD3DADAPTER9_LOCATION=/install/lib/d3d/d3dadapter9.so
make
mkdir -p /NineTests/
mv NineTests/NineTests /NineTests/
popd
rm -rf /Xnine


@@ -1,33 +1,23 @@
#!/bin/bash #!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -ex set -ex
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_X86_64_TEST_GL_TAG
# DEBIAN_X86_64_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
REV="f7ece74a107a2f99b2f494d978c84f8d51faa703"
git clone https://gitlab.freedesktop.org/mesa/piglit.git --single-branch --no-checkout /piglit git clone https://gitlab.freedesktop.org/mesa/piglit.git --single-branch --no-checkout /piglit
pushd /piglit pushd /piglit
git checkout "$REV" git checkout 7d7dd2688c214e1b3c00f37226500cbec4a58efb
patch -p1 <$OLDPWD/.gitlab-ci/piglit/disable-vs_in.diff patch -p1 <$OLDPWD/.gitlab-ci/piglit/disable-vs_in.diff
cmake -S . -B . -G Ninja -DCMAKE_BUILD_TYPE=Release $PIGLIT_OPTS $EXTRA_CMAKE_ARGS cmake -S . -B . -G Ninja -DCMAKE_BUILD_TYPE=Release $PIGLIT_OPTS $EXTRA_CMAKE_ARGS
ninja $PIGLIT_BUILD_TARGETS ninja $PIGLIT_BUILD_TARGETS
find . -depth \( -name .git -o -name '*ninja*' -o -iname '*cmake*' -o -name '*.[chao]' \) -exec rm -rf {} \; find -name .git -o -name '*ninja*' -o -iname '*cmake*' -o -name '*.[chao]' | xargs rm -rf
rm -rf target_api rm -rf target_api
if [ "$PIGLIT_BUILD_TARGETS" = "piglit_replayer" ]; then if [ "x$PIGLIT_BUILD_TARGETS" = "xpiglit_replayer" ]; then
find . -depth \ find ! -regex "^\.$" \
! -regex "^\.$" \
! -regex "^\.\/piglit.*" \ ! -regex "^\.\/piglit.*" \
! -regex "^\.\/framework.*" \ ! -regex "^\.\/framework.*" \
! -regex "^\.\/bin$" \ ! -regex "^\.\/bin$" \
! -regex "^\.\/bin\/replayer\.py" \ ! -regex "^\.\/bin\/replayer\.py" \
! -regex "^\.\/templates.*" \ ! -regex "^\.\/templates.*" \
! -regex "^\.\/tests$" \ ! -regex "^\.\/tests$" \
! -regex "^\.\/tests\/replay\.py" \ ! -regex "^\.\/tests\/replay\.py" 2>/dev/null | xargs rm -rf
-exec rm -rf {} \; 2>/dev/null
fi fi
popd popd


@@ -8,25 +8,17 @@ set -ex
# cargo (and rustup) wants to store stuff in $HOME/.cargo, and binaries in # cargo (and rustup) wants to store stuff in $HOME/.cargo, and binaries in
# $HOME/.cargo/bin. Make bin a link to a public bin directory so the commands # $HOME/.cargo/bin. Make bin a link to a public bin directory so the commands
# are just available to all build jobs. # are just available to all build jobs.
mkdir -p "$HOME"/.cargo mkdir -p $HOME/.cargo
ln -s /usr/local/bin "$HOME"/.cargo/bin ln -s /usr/local/bin $HOME/.cargo/bin
# Rusticl requires at least Rust 1.66.0 and NAK requires 1.73.0
#
# Also, pick a specific snapshot from rustup so the compiler doesn't drift on
# us.
RUST_VERSION=1.73.0-2023-10-05
# For rust in Mesa, we use rustup to install. This lets us pick an arbitrary # For rust in Mesa, we use rustup to install. This lets us pick an arbitrary
# version of the compiler, rather than whatever the container's Debian comes # version of the compiler, rather than whatever the container's Debian comes
# with. # with.
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \ #
--proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- \ # Pick the rust compiler (1.48) available in Debian stable, and pick a specific
--default-toolchain $RUST_VERSION \ # snapshot from rustup so the compiler doesn't drift on us.
--profile minimal \ wget https://sh.rustup.rs -O - | \
-y sh -s -- -y --default-toolchain 1.49.0-2020-12-31
rustup component add clippy rustfmt
# Set up a config script for cross compiling -- cargo needs your system cc for # Set up a config script for cross compiling -- cargo needs your system cc for
# linking in cross builds, but doesn't know what you want to use for system cc. # linking in cross builds, but doesn't know what you want to use for system cc.


@@ -1,14 +0,0 @@
#!/usr/bin/env bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
set -ex
pushd /usr/local
git clone https://gitlab.freedesktop.org/mesa/shader-db.git --depth 1
rm -rf shader-db/.git
cd shader-db
make
popd


@@ -1,89 +0,0 @@
#!/usr/bin/env bash
# SPDX-License-Identifier: MIT
#
# Copyright © 2022 Collabora Limited
# Author: Guilherme Gallo <guilherme.gallo@collabora.com>
#
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# KERNEL_ROOTFS_TAG
SKQP_BRANCH=android-cts-12.1_r5
# hack for skqp: make clang/clang++ resolve to the llvm-15 binaries
pushd /usr/bin/
ln -s ../lib/llvm-15/bin/clang clang
ln -s ../lib/llvm-15/bin/clang++ clang++
popd
create_gn_args() {
# gn can be configured to cross-compile skia and its tools
# It is important to set the target_cpu to guarantee the intended target
# machine
cp "${BASE_ARGS_GN_FILE}" "${SKQP_OUT_DIR}"/args.gn
echo "target_cpu = \"${SKQP_ARCH}\"" >> "${SKQP_OUT_DIR}"/args.gn
}
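# E.g. (illustrative): for SKQP_ARCH=arm64 the generated args.gn is the
# content of build-skqp_base.gn with 'target_cpu = "arm64"' appended.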
download_skia_source() {
if [ -z ${SKIA_DIR+x} ]
then
return 1
fi
# Skia cloned from https://android.googlesource.com/platform/external/skqp
# has all needed assets tracked on git-fs
SKQP_REPO=https://android.googlesource.com/platform/external/skqp
git clone --branch "${SKQP_BRANCH}" --depth 1 "${SKQP_REPO}" "${SKIA_DIR}"
}
set -ex
SCRIPT_DIR=$(realpath "$(dirname "$0")")
SKQP_PATCH_DIR="${SCRIPT_DIR}/patches"
BASE_ARGS_GN_FILE="${SCRIPT_DIR}/build-skqp_base.gn"
SKQP_ARCH=${SKQP_ARCH:-x64}
SKIA_DIR=${SKIA_DIR:-$(mktemp -d)}
SKQP_OUT_DIR=${SKIA_DIR}/out/${SKQP_ARCH}
SKQP_INSTALL_DIR=${SKQP_INSTALL_DIR:-/skqp}
SKQP_ASSETS_DIR="${SKQP_INSTALL_DIR}/assets"
SKQP_BINARIES=(skqp list_gpu_unit_tests list_gms)
download_skia_source
pushd "${SKIA_DIR}"
# Apply all skqp patches for Mesa CI
cat "${SKQP_PATCH_DIR}"/build-skqp_*.patch |
patch -p1
# Fetch some needed build tools needed to build skia/skqp.
# Basically, it clones repositories with commits SHAs from ${SKIA_DIR}/DEPS
# directory.
python tools/git-sync-deps
mkdir -p "${SKQP_OUT_DIR}"
mkdir -p "${SKQP_INSTALL_DIR}"
create_gn_args
# Build and install skqp binaries
bin/gn gen "${SKQP_OUT_DIR}"
for BINARY in "${SKQP_BINARIES[@]}"
do
/usr/bin/ninja -C "${SKQP_OUT_DIR}" "${BINARY}"
# Strip binary, since gn is not stripping it even when `is_debug == false`
${STRIP_CMD:-strip} "${SKQP_OUT_DIR}/${BINARY}"
install -m 0755 "${SKQP_OUT_DIR}/${BINARY}" "${SKQP_INSTALL_DIR}"
done
# Move assets to the target directory, which will reside in rootfs.
mv platform_tools/android/apps/skqp/src/main/assets/ "${SKQP_ASSETS_DIR}"
popd
rm -Rf "${SKIA_DIR}"
set +ex


@@ -1,59 +0,0 @@
cc = "clang"
cxx = "clang++"
extra_cflags = [
"-Wno-error",
"-DSK_ENABLE_DUMP_GPU",
"-DSK_BUILD_FOR_SKQP"
]
extra_cflags_cc = [
"-Wno-error",
# the skqp build produces a lot of compilation warnings; silence most of
# them to remove clutter and keep the CI job log from exceeding its
# maximum size
# GCC flags
"-Wno-redundant-move",
"-Wno-suggest-override",
"-Wno-class-memaccess",
"-Wno-deprecated-copy",
"-Wno-uninitialized",
# Clang flags
"-Wno-macro-redefined",
"-Wno-anon-enum-enum-conversion",
"-Wno-suggest-destructor-override",
"-Wno-return-std-move-in-c++11",
"-Wno-extra-semi-stmt",
"-Wno-reserved-identifier",
"-Wno-bitwise-instead-of-logical",
"-Wno-reserved-identifier",
"-Wno-psabi",
"-Wno-unused-but-set-variable",
"-Wno-sizeof-array-div",
"-Wno-string-concatenation",
]
cc_wrapper = "ccache"
is_debug = false
skia_enable_fontmgr_android = false
skia_enable_fontmgr_empty = true
skia_enable_pdf = false
skia_enable_skottie = false
skia_skqp_global_error_tolerance = 8
skia_tools_require_resources = true
skia_use_dng_sdk = false
skia_use_expat = true
skia_use_icu = false
skia_use_libheif = false
skia_use_lua = false
skia_use_piex = false
skia_use_vulkan = true
target_os = "linux"


@@ -1,25 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# KERNEL_ROOTFS_TAG
set -ex
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
git clone \
https://github.com/intel/libva-utils.git \
-b 2.18.1 \
--depth 1 \
/va-utils
pushd /va-utils
# Debian 11's libva is too old. TODO: once this PR is merged, refer to the upstream patch instead.
curl -L https://github.com/intel/libva-utils/pull/329.patch | git am
meson setup build -D tests=true -Dprefix=/va $EXTRA_MESON_ARGS
meson install -C build
popd
rm -rf /va-utils


@@ -0,0 +1,20 @@
#!/bin/bash
set -ex
mkdir -p /epoxy
pushd /epoxy
wget -qO- https://github.com/anholt/libepoxy/releases/download/1.5.8/libepoxy-1.5.8.tar.xz | tar -xJ --strip-components=1
meson build/ $EXTRA_MESON_ARGS
ninja -C build install
popd
rm -rf /epoxy
VIRGLRENDERER_VERSION=f2ab66c6c00065b2944f4cd9d965ee455c535271
git clone https://gitlab.freedesktop.org/virgl/virglrenderer.git --single-branch --no-checkout /virglrenderer
pushd /virglrenderer
git checkout "$VIRGLRENDERER_VERSION"
meson build/ $EXTRA_MESON_ARGS
ninja -C build install
popd
rm -rf /virglrenderer


@@ -1,12 +1,9 @@
#!/bin/bash #!/bin/bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_X86_64_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
set -ex set -ex
VKD3D_PROTON_COMMIT="c3b385606a93baed42482d822805e0d9c2f3f603" VKD3D_PROTON_VERSION="2.3.1"
VKD3D_PROTON_COMMIT="3ed3526332f53d7d35cf1b685fa8096b01f26ff0"
VKD3D_PROTON_DST_DIR="/vkd3d-proton-tests" VKD3D_PROTON_DST_DIR="/vkd3d-proton-tests"
VKD3D_PROTON_SRC_DIR="/vkd3d-proton-src" VKD3D_PROTON_SRC_DIR="/vkd3d-proton-src"
@@ -19,7 +16,7 @@ function build_arch {
meson "$@" \ meson "$@" \
-Denable_tests=true \ -Denable_tests=true \
--buildtype release \ --buildtype release \
--prefix "$VKD3D_PROTON_DST_DIR" \ --prefix "$VKD3D_PROTON_BUILD_DIR" \
--strip \ --strip \
--bindir "x${arch}" \ --bindir "x${arch}" \
--libdir "x${arch}" \ --libdir "x${arch}" \
@@ -27,17 +24,20 @@ function build_arch {
ninja -C "$VKD3D_PROTON_BUILD_DIR/build.${arch}" install ninja -C "$VKD3D_PROTON_BUILD_DIR/build.${arch}" install
install -D -m755 -t "${VKD3D_PROTON_DST_DIR}/x${arch}/bin" "$VKD3D_PROTON_BUILD_DIR/build.${arch}/tests/d3d12" install -D -m755 -t "${VKD3D_PROTON_DST_DIR}/x${arch}/bin" "$VKD3D_PROTON_BUILD_DIR/build.${arch}/tests/"*.exe
} }
git clone https://github.com/HansKristian-Work/vkd3d-proton.git --single-branch -b master --no-checkout "$VKD3D_PROTON_SRC_DIR" git clone https://github.com/HansKristian-Work/vkd3d-proton.git --single-branch -b "v$VKD3D_PROTON_VERSION" --no-checkout "$VKD3D_PROTON_SRC_DIR"
pushd "$VKD3D_PROTON_SRC_DIR" pushd "$VKD3D_PROTON_SRC_DIR"
git checkout "$VKD3D_PROTON_COMMIT" git checkout "$VKD3D_PROTON_COMMIT"
git submodule update --init --recursive git submodule update --init --recursive
git submodule update --recursive git submodule update --recursive
build_arch 64 build_arch 64 --cross-file build-win64.txt
build_arch 86 build_arch 86 --cross-file build-win32.txt
cp "setup_vkd3d_proton.sh" "$VKD3D_PROTON_BUILD_DIR/setup_vkd3d_proton.sh"
chmod +x "$VKD3D_PROTON_BUILD_DIR/setup_vkd3d_proton.sh"
popd popd
"$VKD3D_PROTON_BUILD_DIR"/setup_vkd3d_proton.sh install
rm -rf "$VKD3D_PROTON_BUILD_DIR" rm -rf "$VKD3D_PROTON_BUILD_DIR"
rm -rf "$VKD3D_PROTON_SRC_DIR" rm -rf "$VKD3D_PROTON_SRC_DIR"


@@ -1,18 +0,0 @@
#!/usr/bin/env bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_X86_64_TEST_GL_TAG
# KERNEL_ROOTFS_TAG:
set -ex
VALIDATION_TAG="v1.3.281"
git clone -b "$VALIDATION_TAG" --single-branch --depth 1 https://github.com/KhronosGroup/Vulkan-ValidationLayers.git
pushd Vulkan-ValidationLayers
python3 scripts/update_deps.py --dir external --config debug
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr -DBUILD_TESTS=OFF -DBUILD_WERROR=OFF -C external/helper.cmake -S . -B build
ninja -C build install
popd
rm -rf Vulkan-ValidationLayers


@@ -1,32 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
set -ex
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
# DEBIAN_X86_64_TEST_ANDROID_TAG
# DEBIAN_X86_64_TEST_GL_TAG
# DEBIAN_X86_64_TEST_VK_TAG
# FEDORA_X86_64_BUILD_TAG
# KERNEL_ROOTFS_TAG
export LIBWAYLAND_VERSION="1.21.0"
export WAYLAND_PROTOCOLS_VERSION="1.34"
git clone https://gitlab.freedesktop.org/wayland/wayland
cd wayland
git checkout "$LIBWAYLAND_VERSION"
meson setup -Ddocumentation=false -Ddtd_validation=false -Dlibraries=true _build $EXTRA_MESON_ARGS
meson install -C _build
cd ..
rm -rf wayland
git clone https://gitlab.freedesktop.org/wayland/wayland-protocols
cd wayland-protocols
git checkout "$WAYLAND_PROTOCOLS_VERSION"
meson setup _build $EXTRA_MESON_ARGS
meson install -C _build
cd ..
rm -rf wayland-protocols


@@ -1,14 +1,10 @@
- #!/usr/bin/env bash
+ #!/bin/sh
if test -f /etc/debian_version; then
apt-get autoremove -y --purge
fi
- # Clean up any build cache
+ # Clean up any build cache for rust.
rm -rf /root/.cache
rm -rf /root/.cargo
rm -rf /.cargo
- if test -x /usr/bin/ccache; then
+ ccache --show-stats
ccache --show-stats
fi


@@ -1,52 +1,36 @@
#!/bin/sh
- if test -x /usr/bin/ccache; then
+ if test -f /etc/debian_version; then
- if test -f /etc/debian_version; then
+ CCACHE_PATH=/usr/lib/ccache
- CCACHE_PATH=/usr/lib/ccache
+ else
- elif test -f /etc/alpine-release; then
+ CCACHE_PATH=/usr/lib64/ccache
CCACHE_PATH=/usr/lib/ccache/bin
else
CCACHE_PATH=/usr/lib64/ccache
fi
# Common setup among container builds before we get to building code.
export CCACHE_COMPILERCHECK=content
export CCACHE_COMPRESS=true
export CCACHE_DIR=/cache/$CI_PROJECT_NAME/ccache
export PATH=$CCACHE_PATH:$PATH
# CMake ignores $PATH, so we have to force CC/GCC to the ccache versions.
export CC="${CCACHE_PATH}/gcc"
export CXX="${CCACHE_PATH}/g++"
ccache --show-stats
fi
- # When not using the mold linker (e.g. unsupported architecture), force
+ # Common setup among container builds before we get to building code.
# linkers to gold, since it's so much faster for building. We can't use
- # lld because we're on old debian and it's buggy. mingw fails meson builds
+ export CCACHE_COMPILERCHECK=content
export CCACHE_COMPRESS=true
export CCACHE_DIR=/cache/mesa/ccache
export PATH=$CCACHE_PATH:$PATH
# CMake ignores $PATH, so we have to force CC/GCC to the ccache versions.
export CC="${CCACHE_PATH}/gcc"
export CXX="${CCACHE_PATH}/g++"
# Force linkers to gold, since it's so much faster for building. We can't use
# lld because we're on old debian and it's buggy. ming fails meson builds
# with it with "meson.build:21:0: ERROR: Unable to determine dynamic linker"
find /usr/bin -name \*-ld -o -name ld | \
grep -v mingw | \
xargs -n 1 -I '{}' ln -sf '{}.gold' '{}'
ccache --show-stats
# Make a wrapper script for ninja to always include the -j flags
- {
+ echo '#!/bin/sh -x' > /usr/local/bin/ninja
- echo '#!/bin/sh -x'
+ echo '/usr/bin/ninja -j${FDO_CI_CONCURRENT:-4} "$@"' >> /usr/local/bin/ninja
# shellcheck disable=SC2016
echo '/usr/bin/ninja -j${FDO_CI_CONCURRENT:-4} "$@"'
} > /usr/local/bin/ninja
chmod +x /usr/local/bin/ninja
# Set MAKEFLAGS so that all make invocations in container builds include the
# flags (doesn't apply to non-container builds, but we don't run make there)
export MAKEFLAGS="-j${FDO_CI_CONCURRENT:-4}"
# make wget to try more than once, when download fails or timeout
echo -e "retry_connrefused = on\n" \
"read_timeout = 300\n" \
"tries = 4\n" \
"retry_on_host_error = on\n" \
"retry_on_http_error = 429,500,502,503,504\n" \
"wait_retry = 32" >> /etc/wgetrc


@@ -5,33 +5,31 @@ arch=$2
cpu_family=$3
cpu=$4
cross_file="/cross_file-$arch.txt"
sdk_version=$5
# armv7 has the toolchain split between two names.
- arch2=${6:-$2}
+ arch2=${5:-$2}
# Note that we disable C++ exceptions, because Mesa doesn't use exceptions,
# and allowing it in code generation means we get unwind symbols that break
# the libEGL and driver symbol tests.
- cat > "$cross_file" <<EOF
+ cat >$cross_file <<EOF
[binaries]
- ar = '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-ar'
+ ar = '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/$arch-ar'
- c = ['ccache', '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch2}${sdk_version}-clang', '-fno-exceptions', '-fno-unwind-tables', '-fno-asynchronous-unwind-tables']
+ c = ['ccache', '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch2}29-clang', '-fno-exceptions', '-fno-unwind-tables', '-fno-asynchronous-unwind-tables']
- cpp = ['ccache', '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch2}${sdk_version}-clang++', '-fno-exceptions', '-fno-unwind-tables', '-fno-asynchronous-unwind-tables', '-static-libstdc++']
+ cpp = ['ccache', '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch2}29-clang++', '-fno-exceptions', '-fno-unwind-tables', '-fno-asynchronous-unwind-tables', '-static-libstdc++']
c_ld = 'lld'
cpp_ld = 'lld'
- strip = '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-strip'
+ strip = '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/$arch-strip'
- pkgconfig = ['/usr/bin/pkgconf']
+ pkgconfig = ['/usr/bin/pkg-config']
[host_machine]
- system = 'android'
+ system = 'linux'
cpu_family = '$cpu_family'
cpu = '$cpu'
endian = 'little'
[properties]
needs_exe_wrapper = true
pkg_config_libdir = '/usr/local/lib/${arch2}/pkgconfig/:/${ndk}/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/lib/${arch2}/pkgconfig/'
EOF


@@ -1,5 +1,4 @@
#!/bin/sh
# shellcheck disable=SC2086 # we want word splitting
# Makes a .pc file in the Android NDK for meson to find its libraries.
@@ -10,7 +9,6 @@ pc="$2"
cflags="$3"
libs="$4"
version="$5"
sdk_version="$6"
sysroot=$ndk/toolchains/llvm/prebuilt/linux-x86_64/sysroot
@@ -25,7 +23,7 @@ for arch in \
cat >$pcdir/$pc <<EOF
prefix=$sysroot
exec_prefix=$sysroot
- libdir=$sysroot/usr/lib/$arch/$sdk_version
+ libdir=$sysroot/usr/lib/$arch/29
sharedlibdir=$sysroot/usr/lib/$arch
includedir=$sysroot/usr/include
@@ -34,7 +32,7 @@ Description: zlib compression library
Version: $version
Requires:
- Libs: -L$sysroot/usr/lib/$arch/$sdk_version $libs
+ Libs: -L$sysroot/usr/lib/$arch/29 $libs
Cflags: -I$sysroot/usr/include $cflags
EOF
done


@@ -2,18 +2,18 @@
arch=$1
cross_file="/cross_file-$arch.txt"
- meson env2mfile --cross --debarch "$arch" -o "$cross_file"
+ /usr/share/meson/debcrossgen --arch $arch -o "$cross_file"
# Explicitly set ccache path for cross compilers
sed -i "s|/usr/bin/\([^-]*\)-linux-gnu\([^-]*\)-g|/usr/lib/ccache/\\1-linux-gnu\\2-g|g" "$cross_file"
if [ "$arch" = "i386" ]; then
# Work around a bug in debcrossgen that should be fixed in the next release
sed -i "s|cpu_family = 'i686'|cpu_family = 'x86'|g" "$cross_file"
fi
# Rely on qemu-user being configured in binfmt_misc on the host
# shellcheck disable=SC1003 # how this sed doesn't seems to work for me locally
sed -i -e '/\[properties\]/a\' -e "needs_exe_wrapper = False" "$cross_file"
- # Add a line for rustc, which meson env2mfile is missing.
+ # Add a line for rustc, which debcrossgen is missing.
- cc=$(sed -n "s|^c\s*=\s*\[?'\(.*\)'\]?|\1|p" < "$cross_file")
+ cc=`sed -n 's|c = .\(.*\).|\1|p' < $cross_file`
if [[ "$arch" = "arm64" ]]; then
rust_target=aarch64-unknown-linux-gnu
elif [[ "$arch" = "armhf" ]]; then
@@ -27,8 +27,6 @@ elif [[ "$arch" = "s390x" ]]; then
else
echo "Needs rustc target mapping"
fi
# shellcheck disable=SC1003 # how this sed doesn't seems to work for me locally
sed -i -e '/\[binaries\]/a\' -e "rust = ['rustc', '--target=$rust_target', '-C', 'linker=$cc']" "$cross_file"
# Set up cmake cross compile toolchain file for dEQP builds
@@ -36,19 +34,18 @@ toolchain_file="/toolchain-$arch.cmake"
if [[ "$arch" = "arm64" ]]; then
GCC_ARCH="aarch64-linux-gnu"
DE_CPU="DE_CPU_ARM_64"
CMAKE_ARCH=arm
elif [[ "$arch" = "armhf" ]]; then
GCC_ARCH="arm-linux-gnueabihf"
DE_CPU="DE_CPU_ARM"
CMAKE_ARCH=arm
fi
if [[ -n "$GCC_ARCH" ]]; then
- {
+ echo "set(CMAKE_SYSTEM_NAME Linux)" > "$toolchain_file"
- echo "set(CMAKE_SYSTEM_NAME Linux)";
+ echo "set(CMAKE_SYSTEM_PROCESSOR arm)" >> "$toolchain_file"
- echo "set(CMAKE_SYSTEM_PROCESSOR arm)";
+ echo "set(CMAKE_C_COMPILER /usr/lib/ccache/$GCC_ARCH-gcc)" >> "$toolchain_file"
- echo "set(CMAKE_C_COMPILER /usr/lib/ccache/$GCC_ARCH-gcc)";
+ echo "set(CMAKE_CXX_COMPILER /usr/lib/ccache/$GCC_ARCH-g++)" >> "$toolchain_file"
- echo "set(CMAKE_CXX_COMPILER /usr/lib/ccache/$GCC_ARCH-g++)";
+ echo "set(ENV{PKG_CONFIG} \"/usr/bin/$GCC_ARCH-pkg-config\")" >> "$toolchain_file"
- echo "set(CMAKE_CXX_FLAGS_INIT \"-Wno-psabi\")"; # makes ABI warnings quiet for ARMv7
+ echo "set(DE_CPU $DE_CPU)" >> "$toolchain_file"
echo "set(ENV{PKG_CONFIG} \"/usr/bin/$GCC_ARCH-pkgconf\")";
echo "set(DE_CPU $DE_CPU)";
} > "$toolchain_file"
fi


@@ -0,0 +1,268 @@
#!/bin/bash
set -ex
if [ $DEBIAN_ARCH = arm64 ]; then
ARCH_PACKAGES="firmware-qcom-media"
elif [ $DEBIAN_ARCH = amd64 ]; then
ARCH_PACKAGES="firmware-amd-graphics
libelf1
libllvm11
"
fi
INSTALL_CI_FAIRY_PACKAGES="git
python3-dev
python3-pip
python3-setuptools
python3-wheel
"
apt-get -y install --no-install-recommends \
$ARCH_PACKAGES \
$INSTALL_CI_FAIRY_PACKAGES \
ca-certificates \
firmware-realtek \
initramfs-tools \
libasan6 \
libexpat1 \
libpng16-16 \
libpython3.9 \
libsensors5 \
libvulkan1 \
libwaffle-1-0 \
libx11-6 \
libx11-xcb1 \
libxcb-dri2-0 \
libxcb-dri3-0 \
libxcb-glx0 \
libxcb-present0 \
libxcb-randr0 \
libxcb-shm0 \
libxcb-sync1 \
libxcb-xfixes0 \
libxdamage1 \
libxext6 \
libxfixes3 \
libxkbcommon0 \
libxrender1 \
libxshmfence1 \
libxxf86vm1 \
netcat-openbsd \
python3 \
python3-lxml \
python3-mako \
python3-numpy \
python3-packaging \
python3-pil \
python3-renderdoc \
python3-requests \
python3-simplejson \
python3-yaml \
sntp \
strace \
waffle-utils \
wget \
xinit \
xserver-xorg-core \
xz-utils
# Needed for ci-fairy, this revision is able to upload files to
# MinIO and doesn't depend on git
pip3 install git+http://gitlab.freedesktop.org/freedesktop/ci-templates@0f1abc24c043e63894085a6bd12f14263e8b29eb
apt-get purge -y \
$INSTALL_CI_FAIRY_PACKAGES
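# (passwd -d removes root's password so the console can log in without one,
#  and chsh switches root's shell to plain /bin/sh, since bash is purged when
#  the image is stripped further down)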
passwd root -d
chsh -s /bin/sh
cat > /init <<EOF
#!/bin/sh
export PS1=lava-shell:
exec sh
EOF
chmod +x /init
#######################################################################
# Strip the image to a small minimal system without removing the debian
# toolchain.
# xz compress firmware so it doesn't waste RAM at runtime on ramdisk systems
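# (xargs -P4 -n4 runs four xz processes in parallel on four files each; -T1
#  keeps each xz single-threaded and -C crc32 selects CRC32 integrity checks
#  instead of the xz default CRC64)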
find /lib/firmware -type f -print0 | \
xargs -0r -P4 -n4 xz -T1 -C crc32
# Copy timezone file and remove tzdata package
rm -rf /etc/localtime
cp /usr/share/zoneinfo/Etc/UTC /etc/localtime
UNNEEDED_PACKAGES="
libfdisk1
"
export DEBIAN_FRONTEND=noninteractive
# Removing unused packages
for PACKAGE in ${UNNEEDED_PACKAGES}
do
echo ${PACKAGE}
if ! apt-get remove --purge --yes "${PACKAGE}"
then
echo "WARNING: ${PACKAGE} isn't installed"
fi
done
apt-get autoremove --yes || true
# Dropping logs
rm -rf /var/log/*
# Dropping documentation, localization, i18n files, etc
rm -rf /usr/share/doc/*
rm -rf /usr/share/locale/*
rm -rf /usr/share/X11/locale/*
rm -rf /usr/share/man
rm -rf /usr/share/i18n/*
rm -rf /usr/share/info/*
rm -rf /usr/share/lintian/*
rm -rf /usr/share/common-licenses/*
rm -rf /usr/share/mime/*
# Dropping reportbug scripts
rm -rf /usr/share/bug
# Drop udev hwdb not required on a stripped system
rm -rf /lib/udev/hwdb.bin /lib/udev/hwdb.d/*
# Drop all gconv conversions && binaries
rm -rf usr/bin/iconv
rm -rf usr/sbin/iconvconfig
rm -rf usr/lib/*/gconv/
# Remove libusb database
rm -rf usr/sbin/update-usbids
rm -rf var/lib/usbutils/usb.ids
rm -rf usr/share/misc/usb.ids
#######################################################################
# Crush into a minimal production image to be deployed via some type of image
# updating system.
# IMPORTANT: The Debian system is not longer functional at this point,
# for example, apt and dpkg will stop working
UNNEEDED_PACKAGES="apt libapt-pkg6.0 "\
"ncurses-bin ncurses-base libncursesw6 libncurses6 "\
"perl-base "\
"debconf libdebconfclient0 "\
"e2fsprogs e2fslibs libfdisk1 "\
"insserv "\
"udev "\
"init-system-helpers "\
"bash "\
"cpio "\
"xz-utils "\
"passwd "\
"libsemanage1 libsemanage-common "\
"libsepol1 "\
"gpgv "\
"hostname "\
"adduser "\
"debian-archive-keyring "\
"libegl1-mesa-dev "\
"libegl-mesa0 "\
"libgl1-mesa-dev "\
"libgl1-mesa-dri "\
"libglapi-mesa "\
"libgles2-mesa-dev "\
"libglx-mesa0 "\
"mesa-common-dev "\
# Removing unneeded packages
for PACKAGE in ${UNNEEDED_PACKAGES}
do
echo "Forcing removal of ${PACKAGE}"
if ! dpkg --purge --force-remove-essential --force-depends "${PACKAGE}"
then
echo "WARNING: ${PACKAGE} isn't installed"
fi
done
# Show what's left package-wise before dropping dpkg itself
COLUMNS=300 dpkg-query -W --showformat='${Installed-Size;10}\t${Package}\n' | sort -k1,1n
# Drop dpkg
dpkg --purge --force-remove-essential --force-depends dpkg
# No apt or dpkg, no need for its configuration archives
rm -rf etc/apt
rm -rf etc/dpkg
# Drop directories not part of ostree
# Note that /var needs to exist as ostree bind mounts the deployment /var over
# it
rm -rf var/* opt srv share
# ca-certificates are in /etc drop the source
rm -rf usr/share/ca-certificates
# No bash, no need for completions
rm -rf usr/share/bash-completion
# No zsh, no need for comletions
rm -rf usr/share/zsh/vendor-completions
# drop gcc python helpers
rm -rf usr/share/gcc
# Drop sysvinit leftovers
rm -rf etc/init.d
rm -rf etc/rc[0-6S].d
# Drop upstart helpers
rm -rf etc/init
# Various xtables helpers
rm -rf usr/lib/xtables
# Drop all locales
# TODO: only remaining locale is actually "C". Should we really remove it?
rm -rf usr/lib/locale/*
# partition helpers
rm -rf usr/sbin/*fdisk
# local compiler
rm -rf usr/bin/localedef
# Systemd dns resolver
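# (in the find invocations below, -prune keeps find from descending into each
#  matched directory while -exec rm -r deletes it)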
find usr etc -name '*systemd-resolve*' -prune -exec rm -r {} \;
# Systemd network configuration
find usr etc -name '*networkd*' -prune -exec rm -r {} \;
# systemd ntp client
find usr etc -name '*timesyncd*' -prune -exec rm -r {} \;
# systemd hw database manager
find usr etc -name '*systemd-hwdb*' -prune -exec rm -r {} \;
# No need for fuse
find usr etc -name '*fuse*' -prune -exec rm -r {} \;
# lsb init function leftovers
rm -rf usr/lib/lsb
# Only needed when adding libraries
rm -rf usr/sbin/ldconfig*
# Games, unused
rmdir usr/games
# Remove pam module to authenticate against a DB
# plus libdb-5.3.so that is only used by this pam module
rm -rf usr/lib/*/security/pam_userdb.so
rm -rf usr/lib/*/libdb-5.3.so
# remove NSS support for nis, nisplus and hesiod
rm -rf usr/lib/*/libnss_hesiod*
rm -rf usr/lib/*/libnss_nis*


@@ -1,69 +1,58 @@
- #!/usr/bin/env bash
+ #!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -e
set -o xtrace
export DEBIAN_FRONTEND=noninteractive
export LLVM_VERSION="${LLVM_VERSION:=15}"
# Ephemeral packages (installed for this script and removed again at the end)
- EPHEMERAL=(
+ STABLE_EPHEMERAL=" \
- )
+ "
DEPS=(
"crossbuild-essential-$arch"
"pkgconf:$arch"
"libasan8:$arch"
"libdrm-dev:$arch"
"libelf-dev:$arch"
"libexpat1-dev:$arch"
"libffi-dev:$arch"
"libpciaccess-dev:$arch"
"libstdc++6:$arch"
"libvulkan-dev:$arch"
"libx11-dev:$arch"
"libx11-xcb-dev:$arch"
"libxcb-dri2-0-dev:$arch"
"libxcb-dri3-dev:$arch"
"libxcb-glx0-dev:$arch"
"libxcb-present-dev:$arch"
"libxcb-randr0-dev:$arch"
"libxcb-shm0-dev:$arch"
"libxcb-xfixes0-dev:$arch"
"libxdamage-dev:$arch"
"libxext-dev:$arch"
"libxrandr-dev:$arch"
"libxshmfence-dev:$arch"
"libxxf86vm-dev:$arch"
"libwayland-dev:$arch"
)
dpkg --add-architecture $arch
echo "deb [trusted=yes] https://gitlab.freedesktop.org/gfx-ci/ci-deb-repo/-/raw/${PKG_REPO_REV}/ ${FDO_DISTRIBUTION_VERSION%-*} main" | tee /etc/apt/sources.list.d/gfx-ci_.list
apt-get update
- apt-get install -y --no-remove "${DEPS[@]}" "${EPHEMERAL[@]}" \
+ apt-get install -y --no-remove \
- $EXTRA_LOCAL_PACKAGES
+ $STABLE_EPHEMERAL \
crossbuild-essential-$arch \
libelf-dev:$arch \
libexpat1-dev:$arch \
libpciaccess-dev:$arch \
libstdc++6:$arch \
libvulkan-dev:$arch \
libx11-dev:$arch \
libx11-xcb-dev:$arch \
libxcb-dri2-0-dev:$arch \
libxcb-dri3-dev:$arch \
libxcb-glx0-dev:$arch \
libxcb-present-dev:$arch \
libxcb-randr0-dev:$arch \
libxcb-shm0-dev:$arch \
libxcb-xfixes0-dev:$arch \
libxdamage-dev:$arch \
libxext-dev:$arch \
libxrandr-dev:$arch \
libxshmfence-dev:$arch \
libxxf86vm-dev:$arch \
wget
if [[ $arch != "armhf" ]]; then if [[ $arch != "armhf" ]]; then
# We don't need clang-format for the crossbuilds, but the installed amd64 if [[ $arch == "s390x" ]]; then
# package will conflict with libclang. Uninstall clang-format (and its LLVM=9
# problematic dependency) to fix. else
apt-get remove -y "clang-format-${LLVM_VERSION}" "libclang-cpp${LLVM_VERSION}" \ LLVM=11
"llvm-${LLVM_VERSION}-runtime" "llvm-${LLVM_VERSION}-linker-tools" fi
# llvm-*-tools:$arch conflicts with python3:amd64. Install dependencies only # llvm-*-tools:$arch conflicts with python3:amd64. Install dependencies only
# with apt-get, then force-install llvm-*-{dev,tools}:$arch with dpkg to get # with apt-get, then force-install llvm-*-{dev,tools}:$arch with dpkg to get
# around this. # around this.
apt-get install -y --no-remove --no-install-recommends \ apt-get install -y --no-remove \
"libclang-cpp${LLVM_VERSION}:$arch" \ libclang-cpp${LLVM}:$arch \
"libgcc-s1:$arch" \ libffi-dev:$arch \
"libtinfo-dev:$arch" \ libgcc-s1:$arch \
"libz3-dev:$arch" \ libtinfo-dev:$arch \
"llvm-${LLVM_VERSION}:$arch" \ libz3-dev:$arch \
llvm-${LLVM}:$arch \
zlib1g zlib1g
fi fi
@@ -74,19 +63,17 @@ fi
# dependencies where we want a specific version
- MULTIARCH_PATH=$(dpkg-architecture -A $arch -qDEB_TARGET_MULTIARCH)
+ EXTRA_MESON_ARGS="--cross-file=/cross_file-${arch}.txt -D libdir=lib/$(dpkg-architecture -A $arch -qDEB_TARGET_MULTIARCH)"
- export EXTRA_MESON_ARGS="--cross-file=/cross_file-${arch}.txt -D libdir=lib/${MULTIARCH_PATH}"
+ . .gitlab-ci/container/build-libdrm.sh
. .gitlab-ci/container/build-wayland.sh
- . .gitlab-ci/container/build-directx-headers.sh
+ apt-get purge -y \
$STABLE_EPHEMERAL
apt-get purge -y "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh
# This needs to be done after container_post_build.sh, or apt-get breaks in there
if [[ $arch != "armhf" ]]; then
- apt-get download llvm-"${LLVM_VERSION}"-{dev,tools}:"$arch"
+ apt-get download llvm-${LLVM}-{dev,tools}:$arch
- dpkg -i --force-depends llvm-"${LLVM_VERSION}"-*_"${arch}".deb
+ dpkg -i --force-depends llvm-${LLVM}-*_${arch}.deb
- rm llvm-"${LLVM_VERSION}"-*_"${arch}".deb
+ rm llvm-${LLVM}-*_${arch}.deb
fi


@@ -1,93 +1,60 @@
- #!/usr/bin/env bash
+ #!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
set -ex
- EPHEMERAL=(
+ EPHEMERAL="\
- autoconf
+ rdfind \
- rdfind
+ unzip \
- unzip
+ "
)
- apt-get install -y --no-remove "${EPHEMERAL[@]}"
+ apt-get install -y --no-remove $EPHEMERAL
# Fetch the NDK and extract just the toolchain we want.
- ndk=$ANDROID_NDK
+ ndk=android-ndk-r21d
- curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
+ wget -O $ndk.zip https://dl.google.com/android/repository/$ndk-linux-x86_64.zip
-o $ndk.zip https://dl.google.com/android/repository/$ndk-linux.zip
unzip -d / $ndk.zip "$ndk/toolchains/llvm/*"
rm $ndk.zip
# Since it was packed as a zip file, symlinks/hardlinks got turned into
# duplicate files. Turn them into hardlinks to save on container space.
- rdfind -makehardlinks true -makeresultsfile false /${ndk}/
+ rdfind -makehardlinks true -makeresultsfile false /android-ndk-r21d/
# Drop some large tools we won't use in this build.
- find /${ndk}/ -type f \( -iname '*clang-check*' -o -iname '*clang-tidy*' -o -iname '*lldb*' \) -exec rm -f {} \;
+ find /android-ndk-r21d/ -type f | egrep -i "clang-check|clang-tidy|lldb" | xargs rm -f
- sh .gitlab-ci/container/create-android-ndk-pc.sh /$ndk zlib.pc "" "-lz" "1.2.3" $ANDROID_SDK_VERSION
+ sh .gitlab-ci/container/create-android-ndk-pc.sh /$ndk zlib.pc "" "-lz" "1.2.3"
- sh .gitlab-ci/container/create-android-cross-file.sh /$ndk x86_64-linux-android x86_64 x86_64 $ANDROID_SDK_VERSION
+ sh .gitlab-ci/container/create-android-cross-file.sh /$ndk x86_64-linux-android x86_64 x86_64
- sh .gitlab-ci/container/create-android-cross-file.sh /$ndk i686-linux-android x86 x86 $ANDROID_SDK_VERSION
+ sh .gitlab-ci/container/create-android-cross-file.sh /$ndk i686-linux-android x86 x86
- sh .gitlab-ci/container/create-android-cross-file.sh /$ndk aarch64-linux-android aarch64 armv8 $ANDROID_SDK_VERSION
+ sh .gitlab-ci/container/create-android-cross-file.sh /$ndk aarch64-linux-android arm armv8
- sh .gitlab-ci/container/create-android-cross-file.sh /$ndk arm-linux-androideabi arm armv7hl $ANDROID_SDK_VERSION armv7a-linux-androideabi
+ sh .gitlab-ci/container/create-android-cross-file.sh /$ndk arm-linux-androideabi arm armv7hl armv7a-linux-androideabi
# Not using build-libdrm.sh because we don't want its cleanup after building
# each arch. Fetch and extract now.
export LIBDRM_VERSION=libdrm-2.4.102
wget https://dri.freedesktop.org/libdrm/$LIBDRM_VERSION.tar.xz
tar -xf $LIBDRM_VERSION.tar.xz && rm $LIBDRM_VERSION.tar.xz
for arch in \
x86_64-linux-android \
i686-linux-android \
aarch64-linux-android \
arm-linux-androideabi ; do
EXTRA_MESON_ARGS="--cross-file=/cross_file-$arch.txt --libdir=lib/$arch -Dnouveau=disabled -Dintel=disabled" \
- . .gitlab-ci/container/build-libdrm.sh
+ cd $LIBDRM_VERSION
rm -rf build-$arch
meson build-$arch \
--cross-file=/cross_file-$arch.txt \
--libdir=lib/$arch \
-Dlibkms=false \
-Dnouveau=false \
-Dvc4=false \
-Detnaviv=false \
-Dfreedreno=false \
-Dintel=false \
-Dcairo-tests=false
ninja -C build-$arch install
cd ..
done
rm -rf $LIBDRM_VERSION
- export LIBELF_VERSION=libelf-0.8.13
+ apt-get purge -y $EPHEMERAL
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O https://fossies.org/linux/misc/old/$LIBELF_VERSION.tar.gz
# Not 100% sure who runs the mirror above so be extra careful
if ! echo "4136d7b4c04df68b686570afa26988ac ${LIBELF_VERSION}.tar.gz" | md5sum -c -; then
echo "Checksum failed"
exit 1
fi
tar -xf ${LIBELF_VERSION}.tar.gz
cd $LIBELF_VERSION
# Work around a bug in the original configure not enabling __LIBELF64.
autoreconf
for arch in \
x86_64-linux-android \
i686-linux-android \
aarch64-linux-android \
arm-linux-androideabi ; do
ccarch=${arch}
if [ "${arch}" == 'arm-linux-androideabi' ]
then
ccarch=armv7a-linux-androideabi
fi
export CC=/${ndk}/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-ar
export CC=/${ndk}/toolchains/llvm/prebuilt/linux-x86_64/bin/${ccarch}${ANDROID_SDK_VERSION}-clang
export CXX=/${ndk}/toolchains/llvm/prebuilt/linux-x86_64/bin/${ccarch}${ANDROID_SDK_VERSION}-clang++
export LD=/${ndk}/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch}-ld
export RANLIB=/${ndk}/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-ranlib
# The configure script doesn't know about android, but doesn't really use the host anyway it
# seems
./configure --host=x86_64-linux-gnu --disable-nls --disable-shared \
--libdir=/usr/local/lib/${arch}
make install
make distclean
done
cd ..
rm -rf $LIBELF_VERSION
apt-get purge -y "${EPHEMERAL[@]}"


@@ -1,5 +0,0 @@
#!/usr/bin/env bash
set -e
arch=armhf . .gitlab-ci/container/debian/arm_test.sh


@@ -1,91 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
set -e
set -o xtrace
export LLVM_VERSION="${LLVM_VERSION:=15}"
apt-get -y install ca-certificates
sed -i -e 's/http:\/\/deb/https:\/\/deb/g' /etc/apt/sources.list.d/*
echo "deb [trusted=yes] https://gitlab.freedesktop.org/gfx-ci/ci-deb-repo/-/raw/${PKG_REPO_REV}/ ${FDO_DISTRIBUTION_VERSION%-*} main" | tee /etc/apt/sources.list.d/gfx-ci_.list
apt-get update
# Ephemeral packages (installed for this script and removed again at the end)
EPHEMERAL=(
libssl-dev
)
DEPS=(
apt-utils
android-libext4-utils
autoconf
automake
bc
bison
ccache
cmake
curl
fastboot
flatbuffers-compiler
flex
g++
git
glslang-tools
kmod
libasan8
libdrm-dev
libelf-dev
libexpat1-dev
libflatbuffers-dev
libvulkan-dev
libx11-dev
libx11-xcb-dev
libxcb-dri2-0-dev
libxcb-dri3-dev
libxcb-glx0-dev
libxcb-present-dev
libxcb-randr0-dev
libxcb-shm0-dev
libxcb-xfixes0-dev
libxdamage-dev
libxext-dev
libxrandr-dev
libxshmfence-dev
libxtensor-dev
libxxf86vm-dev
libwayland-dev
libwayland-egl-backend-dev
"llvm-${LLVM_VERSION}-dev"
ninja-build
meson
openssh-server
pkgconf
python3-mako
python3-pil
python3-pip
python3-pycparser
python3-requests
python3-setuptools
u-boot-tools
xz-utils
zlib1g-dev
zstd
)
apt-get -y install "${DEPS[@]}" "${EPHEMERAL[@]}"
pip3 install --break-system-packages git+http://gitlab.freedesktop.org/freedesktop/ci-templates@ffe4d1b10aab7534489f0c4bbc4c5899df17d3f2
arch=armhf
. .gitlab-ci/container/cross_build.sh
. .gitlab-ci/container/container_pre_build.sh
. .gitlab-ci/container/build-mold.sh
. .gitlab-ci/container/build-wayland.sh
apt-get purge -y "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh


@@ -1,5 +0,0 @@
#!/usr/bin/env bash
set -e
arch=arm64 . .gitlab-ci/container/debian/arm_test.sh


@@ -0,0 +1,71 @@
#!/bin/bash
set -e
set -o xtrace
apt-get -y install ca-certificates
sed -i -e 's/http:\/\/deb/https:\/\/deb/g' /etc/apt/sources.list
echo 'deb https://deb.debian.org/debian buster main' >/etc/apt/sources.list.d/buster.list
apt-get update
apt-get -y install \
abootimg \
autoconf \
automake \
bc \
bison \
ccache \
cmake \
debootstrap \
fastboot \
flex \
g++ \
git \
kmod \
libasan6 \
libdrm-dev \
libelf-dev \
libexpat1-dev \
libx11-dev \
libx11-xcb-dev \
libxcb-dri2-0-dev \
libxcb-dri3-dev \
libxcb-glx0-dev \
libxcb-present-dev \
libxcb-randr0-dev \
libxcb-shm0-dev \
libxcb-xfixes0-dev \
libxdamage-dev \
libxext-dev \
libxrandr-dev \
libxshmfence-dev \
libxxf86vm-dev \
llvm-11-dev \
meson \
pkg-config \
python3-mako \
python3-pil \
python3-pip \
python3-requests \
python3-setuptools \
u-boot-tools \
wget \
xz-utils \
zlib1g-dev
# Not available anymore in bullseye
apt-get install -y --no-remove -t buster \
android-sdk-ext4-utils
pip3 install git+http://gitlab.freedesktop.org/freedesktop/ci-templates@6f5af7e5574509726c79109e3c147cee95e81366
arch=armhf
. .gitlab-ci/container/cross_build.sh
. .gitlab-ci/container/container_pre_build.sh
# dependencies where we want a specific version
EXTRA_MESON_ARGS=
. .gitlab-ci/container/build-libdrm.sh
. .gitlab-ci/container/container_post_build.sh


@@ -1,53 +1,33 @@
- #!/usr/bin/env bash
+ #!/bin/bash
# shellcheck disable=SC2154 # arch is assigned in previous scripts
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BASE_TAG
# KERNEL_ROOTFS_TAG
set -e
set -o xtrace
############### Install packages for baremetal testing
DEPS=(
cpio
curl
fastboot
netcat-openbsd
openssh-server
procps
python3-distutils
python3-filelock
python3-fire
python3-minimal
python3-serial
rsync
snmp
zstd
)
apt-get install -y ca-certificates
- sed -i -e 's/http:\/\/deb/https:\/\/deb/g' /etc/apt/sources.list.d/*
+ sed -i -e 's/http:\/\/deb/https:\/\/deb/g' /etc/apt/sources.list
echo "deb [trusted=yes] https://gitlab.freedesktop.org/gfx-ci/ci-deb-repo/-/raw/${PKG_REPO_REV}/ ${FDO_DISTRIBUTION_VERSION%-*} main" | tee /etc/apt/sources.list.d/gfx-ci_.list
apt-get update
- apt-get install -y --no-remove "${DEPS[@]}"
+ apt-get install -y --no-remove \
abootimg \
cpio \
fastboot \
netcat \
procps \
python3-distutils \
python3-minimal \
python3-serial \
rsync \
snmp \
wget
# setup SNMPv2 SMI MIB
- curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
+ wget https://raw.githubusercontent.com/net-snmp/net-snmp/master/mibs/SNMPv2-SMI.txt \
- https://raw.githubusercontent.com/net-snmp/net-snmp/master/mibs/SNMPv2-SMI.txt \
+ -O /usr/share/snmp/mibs/SNMPv2-SMI.txt
-o /usr/share/snmp/mibs/SNMPv2-SMI.txt
- . .gitlab-ci/container/baremetal_build.sh
+ arch=arm64 . .gitlab-ci/container/baremetal_build.sh
arch=armhf . .gitlab-ci/container/baremetal_build.sh
- mkdir -p /baremetal-files/jetson-nano/boot/
+ # This firmware file from Debian bullseye causes hangs
- ln -s \
+ wget https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/plain/qcom/a530_pfp.fw?id=d5f9eea5a251d43412b07f5295d03e97b89ac4a5 \
- /baremetal-files/Image \
+ -O /rootfs-arm64/lib/firmware/qcom/a530_pfp.fw
/baremetal-files/tegra210-p3450-0000.dtb \
/baremetal-files/jetson-nano/boot/
mkdir -p /baremetal-files/jetson-tk1/boot/
ln -s \
/baremetal-files/zImage \
/baremetal-files/tegra124-jetson-tk1.dtb \
/baremetal-files/jetson-tk1/boot/


@@ -0,0 +1,5 @@
#!/bin/bash
arch=i386
. .gitlab-ci/container/cross_build.sh


@@ -1,52 +0,0 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.12 (GNU/Linux)
mQINBFE9lCwBEADi0WUAApM/mgHJRU8lVkkw0CHsZNpqaQDNaHefD6Rw3S4LxNmM
EZaOTkhP200XZM8lVdbfUW9xSjA3oPldc1HG26NjbqqCmWpdo2fb+r7VmU2dq3NM
R18ZlKixiLDE6OUfaXWKamZsXb6ITTYmgTO6orQWYrnW6ckYHSeaAkW0wkDAryl2
B5v8aoFnQ1rFiVEMo4NGzw4UX+MelF7rxaaregmKVTPiqCOSPJ1McC1dHFN533FY
Wh/RVLKWo6npu+owtwYFQW+zyQhKzSIMvNujFRzhIxzxR9Gn87MoLAyfgKEzrbbT
DhqqNXTxS4UMUKCQaO93TzetX/EBrRpJj+vP640yio80h4Dr5pAd7+LnKwgpTDk1
G88bBXJAcPZnTSKu9I2c6KY4iRNbvRz4i+ZdwwZtdW4nSdl2792L7Sl7Nc44uLL/
ZqkKDXEBF6lsX5XpABwyK89S/SbHOytXv9o4puv+65Ac5/UShspQTMSKGZgvDauU
cs8kE1U9dPOqVNCYq9Nfwinkf6RxV1k1+gwtclxQuY7UpKXP0hNAXjAiA5KS5Crq
7aaJg9q2F4bub0mNU6n7UI6vXguF2n4SEtzPRk6RP+4TiT3bZUsmr+1ktogyOJCc
Ha8G5VdL+NBIYQthOcieYCBnTeIH7D3Sp6FYQTYtVbKFzmMK+36ERreL/wARAQAB
tD1TeWx2ZXN0cmUgTGVkcnUgLSBEZWJpYW4gTExWTSBwYWNrYWdlcyA8c3lsdmVz
dHJlQGRlYmlhbi5vcmc+iQI4BBMBAgAiBQJRPZQsAhsDBgsJCAcDAgYVCAIJCgsE
FgIDAQIeAQIXgAAKCRAVz00Yr090Ibx+EADArS/hvkDF8juWMXxh17CgR0WZlHCC
9CTBWkg5a0bNN/3bb97cPQt/vIKWjQtkQpav6/5JTVCSx2riL4FHYhH0iuo4iAPR
udC7Cvg8g7bSPrKO6tenQZNvQm+tUmBHgFiMBJi92AjZ/Qn1Shg7p9ITivFxpLyX
wpmnF1OKyI2Kof2rm4BFwfSWuf8Fvh7kDMRLHv+MlnK/7j/BNpKdozXxLcwoFBmn
l0WjpAH3OFF7Pvm1LJdf1DjWKH0Dc3sc6zxtmBR/KHHg6kK4BGQNnFKujcP7TVdv
gMYv84kun14pnwjZcqOtN3UJtcx22880DOQzinoMs3Q4w4o05oIF+sSgHViFpc3W
R0v+RllnH05vKZo+LDzc83DQVrdwliV12eHxrMQ8UYg88zCbF/cHHnlzZWAJgftg
hB08v1BKPgYRUzwJ6VdVqXYcZWEaUJmQAPuAALyZESw94hSo28FAn0/gzEc5uOYx
K+xG/lFwgAGYNb3uGM5m0P6LVTfdg6vDwwOeTNIExVk3KVFXeSQef2ZMkhwA7wya
KJptkb62wBHFE+o9TUdtMCY6qONxMMdwioRE5BYNwAsS1PnRD2+jtlI0DzvKHt7B
MWd8hnoUKhMeZ9TNmo+8CpsAtXZcBho0zPGz/R8NlJhAWpdAZ1CmcPo83EW86Yq7
BxQUKnNHcwj2ebkCDQRRPZQsARAA4jxYmbTHwmMjqSizlMJYNuGOpIidEdx9zQ5g
zOr431/VfWq4S+VhMDhs15j9lyml0y4ok215VRFwrAREDg6UPMr7ajLmBQGau0Fc
bvZJ90l4NjXp5p0NEE/qOb9UEHT7EGkEhaZ1ekkWFTWCgsy7rRXfZLxB6sk7pzLC
DshyW3zjIakWAnpQ5j5obiDy708pReAuGB94NSyb1HoW/xGsGgvvCw4r0w3xPStw
F1PhmScE6NTBIfLliea3pl8vhKPlCh54Hk7I8QGjo1ETlRP4Qll1ZxHJ8u25f/ta
RES2Aw8Hi7j0EVcZ6MT9JWTI83yUcnUlZPZS2HyeWcUj+8nUC8W4N8An+aNps9l/
21inIl2TbGo3Yn1JQLnA1YCoGwC34g8QZTJhElEQBN0X29ayWW6OdFx8MDvllbBV
ymmKq2lK1U55mQTfDli7S3vfGz9Gp/oQwZ8bQpOeUkc5hbZszYwP4RX+68xDPfn+
M9udl+qW9wu+LyePbW6HX90LmkhNkkY2ZzUPRPDHZANU5btaPXc2H7edX4y4maQa
xenqD0lGh9LGz/mps4HEZtCI5CY8o0uCMF3lT0XfXhuLksr7Pxv57yue8LLTItOJ
d9Hmzp9G97SRYYeqU+8lyNXtU2PdrLLq7QHkzrsloG78lCpQcalHGACJzrlUWVP/
fN3Ht3kAEQEAAYkCHwQYAQIACQUCUT2ULAIbDAAKCRAVz00Yr090IbhWEADbr50X
OEXMIMGRLe+YMjeMX9NG4jxs0jZaWHc/WrGR+CCSUb9r6aPXeLo+45949uEfdSsB
pbaEdNWxF5Vr1CSjuO5siIlgDjmT655voXo67xVpEN4HhMrxugDJfCa6z97P0+ML
PdDxim57uNqkam9XIq9hKQaurxMAECDPmlEXI4QT3eu5qw5/knMzDMZj4Vi6hovL
wvvAeLHO/jsyfIdNmhBGU2RWCEZ9uo/MeerPHtRPfg74g+9PPfP6nyHD2Wes6yGd
oVQwtPNAQD6Cj7EaA2xdZYLJ7/jW6yiPu98FFWP74FN2dlyEA2uVziLsfBrgpS4l
tVOlrO2YzkkqUGrybzbLpj6eeHx+Cd7wcjI8CalsqtL6cG8cUEjtWQUHyTbQWAgG
5VPEgIAVhJ6RTZ26i/G+4J8neKyRs4vz+57UGwY6zI4AB1ZcWGEE3Bf+CDEDgmnP
LSwbnHefK9IljT9XU98PelSryUO/5UPw7leE0akXKB4DtekToO226px1VnGp3Bov
1GBGvpHvL2WizEwdk+nfk8LtrLzej+9FtIcq3uIrYnsac47Pf7p0otcFeTJTjSq3
krCaoG4Hx0zGQG2ZFpHrSrZTVy6lxvIdfi0beMgY6h78p6M9eYZHQHc02DjFkQXN
bXb5c6gCHESH5PXwPU4jQEE7Ib9J6sbk7ZT2Mw==
=j+4q
-----END PGP PUBLIC KEY BLOCK-----


@@ -1,4 +1,4 @@
- #!/usr/bin/env bash
+ #!/bin/bash
arch=ppc64el


@@ -1,18 +1,5 @@
- #!/usr/bin/env bash
+ #!/bin/bash
set -e
arch=s390x
# Ephemeral packages (installed for this script and removed again at the end)
EPHEMERAL=(
libssl-dev
)
apt-get -y install "${EPHEMERAL[@]}"
. .gitlab-ci/container/build-mold.sh
apt-get purge -y "${EPHEMERAL[@]}"
. .gitlab-ci/container/cross_build.sh


@@ -1,53 +0,0 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQGNBFwOmrgBDAC9FZW3dFpew1hwDaqRfdQQ1ABcmOYu1NKZHwYjd+bGvcR2LRGe
R5dfRqG1Uc/5r6CPCMvnWxFprymkqKEADn8eFn+aCnPx03HrhA+lNEbciPfTHylt
NTTuRua7YpJIgEOjhXUbxXxnvF8fhUf5NJpJg6H6fPQARUW+5M//BlVgwn2jhzlW
U+uwgeJthhiuTXkls9Yo3EoJzmkUih+ABZgvaiBpr7GZRw9GO1aucITct0YDNTVX
KA6el78/udi5GZSCKT94yY9ArN4W6NiOFCLV7MU5d6qMjwGFhfg46NBv9nqpGinK
3NDjqCevKouhtKl2J+nr3Ju3Spzuv6Iex7tsOqt+XdZCoY+8+dy3G5zbJwBYsMiS
rTNF55PHtBH1S0QK5OoN2UR1ie/aURAyAFEMhTzvFB2B2v7C0IKIOmYMEG+DPMs9
FQs/vZ1UnAQgWk02ZiPryoHfjFO80+XYMrdWN+RSo5q9ODClloaKXjqI/aWLGirm
KXw2R8tz31go3NMAEQEAAbQnV2luZUhRIHBhY2thZ2VzIDx3aW5lLWRldmVsQHdp
bmVocS5vcmc+iQHOBBMBCgA4AhsDBQsJCAcCBhUKCQgLAgQWAgMBAh4BAheAFiEE
1D9kAUU2nFHXht3qdvGiD/mHZy8FAlwOmyUACgkQdvGiD/mHZy/zkwv7B+nKFlDY
Bzz/7j0gqIODbs5FRZRtuf/IuPP3vZdWlNfAW/VyaLtVLJCM/mmaf/O6/gJ+D+E9
BBoSmHdHzBBOQHIj5IbRedynNcHT5qXsdBeU2ZPR50sdE+jmukvw3Wa5JijoDgUu
LGLGtU48Z3JsBXQ54OlnTZXQ2SMFhRUa10JANXSJQ+QY2Wo2Pi2+MEAHcrd71A2S
0mT2DQSSBQ92c6WPfUpOSBawd8P0ipT7rVFNLJh8HVQGyEWxPl8ecDEHoVfG2rdV
D0ADbNLx9031UUwpUicO6vW/2Ec7c3VNG1cpOtyNTw/lEgvsXOh3GQs/DvFvMy/h
QzaeF3Qq6cAPlKuxieJe4lLYFBTmCAT4iB1J8oeFs4G7ScfZH4+4NBe3VGoeCD/M
Wl+qxntAroblxiFuqtPJg+NKZYWBzkptJNhnrBxcBnRinGZLw2k/GR/qPMgsR2L4
cP+OUuka+R2gp9oDVTZTyMowz+ROIxnEijF50pkj2VBFRB02rfiMp7q6iQIzBBAB
CgAdFiEE2iNXmnTUrZr50/lFzvrI6q8XUZ0FAlwOm3AACgkQzvrI6q8XUZ3KKg/+
MD8CgvLiHEX90fXQ23RZQRm2J21w3gxdIen/N8yJVIbK7NIgYhgWfGWsGQedtM7D
hMwUlDSRb4rWy9vrXBaiZoF3+nK9AcLvPChkZz28U59Jft6/l0gVrykey/ERU7EV
w1Ie1eRu0tRSXsKvMZyQH8897iHZ7uqoJgyk8U8CvSW+V80yqLB2M8Tk8ECZq34f
HqUIGs4Wo0UZh0vV4+dEQHBh1BYpmmWl+UPf7nzNwFWXu/EpjVhkExRqTnkEJ+Ai
OxbtrRn6ETKzpV4DjyifqQF639bMIem7DRRf+mkcrAXetvWkUkE76e3E9KLvETCZ
l4SBfgqSZs2vNngmpX6Qnoh883aFo5ZgVN3v6uTS+LgTwMt/XlnDQ7+Zw+ehCZ2R
CO21Y9Kbw6ZEWls/8srZdCQ2LxnyeyQeIzsLnqT/waGjQj35i4exzYeWpojVDb3r
tvvOALYGVlSYqZXIALTx2/tHXKLHyrn1C0VgHRnl+hwv7U49f7RvfQXpx47YQN/C
PWrpbG69wlKuJptr+olbyoKAWfl+UzoO8vLMo5njWQNAoAwh1H8aFUVNyhtbkRuq
l0kpy1Cmcq8uo6taK9lvYp8jak7eV8lHSSiGUKTAovNTwfZG2JboGV4/qLDUKvpa
lPp2xVpF9MzA8VlXTOzLpSyIVxZnPTpL+xR5P9WQjMS5AY0EXA6auAEMAMReKL89
0z0SL+/i/geB/agfG/k6AXiG2a9kVWeIjAqFwHKl9W/DTNvOqCDgAt51oiHGRRjt
1Xm3XZD4p+GM1uZWn9qIFL49Gt5x94TqdrsKTVCJr0Kazn2mKQc7aja0zac+WtZG
OFn7KbniuAcwtC780cyikfmmExLI1/Vjg+NiMlMtZfpK6FIW+ulPiDQPdzIhVppx
w9/KlR2Fvh4TbzDsUqkFQSSAFdQ65BWgvzLpZHdKO/ILpDkThLbipjtvbBv/pHKM
O/NFTNoYkJ3cNW/kfcynwV+4AcKwdRz2A3Mez+g5TKFYPZROIbayOo01yTMLfz2p
jcqki/t4PACtwFOhkAs+MYPPyZDUkTFcEJQCPDstkAgmJWI3K2qELtDOLQyps3WY
Mfp+mntOdc8bKjFTMcCEk1zcm14K4Oms+w6dw2UnYsX1FAYYhPm8HUYwE4kP8M+D
9HGLMjLqqF/kanlCFZs5Avx3mDSAx6zS8vtNdGh+64oDNk4x4A2j8GTUuQARAQAB
iQG8BBgBCgAmFiEE1D9kAUU2nFHXht3qdvGiD/mHZy8FAlwOmrgCGwwFCQPCZwAA
CgkQdvGiD/mHZy9FnAwAgfUkxsO53Pm2iaHhtF4+BUc8MNJj64Jvm1tghr6PBRtM
hpbvvN8SSOFwYIsS+2BMsJ2ldox4zMYhuvBcgNUlix0G0Z7h1MjftDdsLFi1DNv2
J9dJ9LdpWdiZbyg4Sy7WakIZ/VvH1Znd89Imo7kCScRdXTjIw2yCkotE5lK7A6Ns
NbVuoYEN+dbGioF4csYehnjTdojwF/19mHFxrXkdDZ/V6ZYFIFxEsxL8FEuyI4+o
LC3DFSA4+QAFdkjGFXqFPlaEJxWt5d7wk0y+tt68v+ulkJ900BvR+OOMqQURwrAi
iP3I28aRrMjZYwyqHl8i/qyIv+WRakoDKV+wWteR5DmRAPHmX2vnlPlCmY8ysR6J
2jUAfuDFVu4/qzJe6vw5tmPJMdfvy0W5oogX6sEdin5M5w2b3WrN8nXZcjbWymqP
6jCdl6eoCCkKNOIbr/MMSkd2KqAqDVM5cnnlQ7q+AXzwNpj3RGJVoBxbS0nn9JWY
QNQrWh9rAcMIGT+b1le0
=4lsa
-----END PGP PUBLIC KEY BLOCK-----


@@ -1,5 +0,0 @@
#!/usr/bin/env bash
arch=i386
. .gitlab-ci/container/cross_build.sh


@@ -1,107 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
set -e
set -o xtrace
export DEBIAN_FRONTEND=noninteractive
export LLVM_VERSION="${LLVM_VERSION:=15}"
apt-get install -y ca-certificates
sed -i -e 's/http:\/\/deb/https:\/\/deb/g' /etc/apt/sources.list.d/*
echo "deb [trusted=yes] https://gitlab.freedesktop.org/gfx-ci/ci-deb-repo/-/raw/${PKG_REPO_REV}/ ${FDO_DISTRIBUTION_VERSION%-*} main" | tee /etc/apt/sources.list.d/gfx-ci_.list
# Ephemeral packages (installed for this script and removed again at
# the end)
EPHEMERAL=(
)
DEPS=(
apt-utils
bison
ccache
curl
"clang-${LLVM_VERSION}"
"clang-format-${LLVM_VERSION}"
dpkg-cross
findutils
flex
flatbuffers-compiler
g++
cmake
gcc
git
glslang-tools
kmod
"libclang-${LLVM_VERSION}-dev"
"libclang-cpp${LLVM_VERSION}-dev"
"libclang-common-${LLVM_VERSION}-dev"
libelf-dev
libepoxy-dev
libexpat1-dev
libflatbuffers-dev
libgtk-3-dev
"libllvm${LLVM_VERSION}"
libomxil-bellagio-dev
libpciaccess-dev
libunwind-dev
libva-dev
libvdpau-dev
libvulkan-dev
libx11-dev
libx11-xcb-dev
libxext-dev
libxml2-utils
libxrandr-dev
libxrender-dev
libxshmfence-dev
libxtensor-dev
libxxf86vm-dev
libwayland-egl-backend-dev
make
ninja-build
openssh-server
pkgconf
python3-mako
python3-pil
python3-pip
python3-ply
python3-pycparser
python3-requests
python3-setuptools
qemu-user
valgrind
x11proto-dri2-dev
x11proto-gl-dev
x11proto-randr-dev
xz-utils
zlib1g-dev
zstd
)
apt-get update
apt-get install -y --no-remove "${DEPS[@]}" "${EPHEMERAL[@]}" \
$EXTRA_LOCAL_PACKAGES
. .gitlab-ci/container/build-llvm-spirv.sh
. .gitlab-ci/container/build-libclc.sh
# Needed for ci-fairy, this revision is able to upload files to S3
pip3 install --break-system-packages git+http://gitlab.freedesktop.org/freedesktop/ci-templates@ffe4d1b10aab7534489f0c4bbc4c5899df17d3f2
# We need at least 1.3.1 for rusticl
pip3 install --break-system-packages 'meson==1.3.1'
. .gitlab-ci/container/build-rust.sh
############### Uninstall ephemeral packages
apt-get purge -y "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh


@@ -1,104 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
set -e
set -o xtrace
export DEBIAN_FRONTEND=noninteractive
export LLVM_VERSION="${LLVM_VERSION:=15}"
# Ephemeral packages (installed for this script and removed again at the end)
EPHEMERAL=(
autoconf
automake
autotools-dev
bzip2
libtool
libssl-dev
)
DEPS=(
check
"clang-${LLVM_VERSION}"
libasan8
libarchive-dev
libdrm-dev
"libclang-cpp${LLVM_VERSION}-dev"
libgbm-dev
libglvnd-dev
liblua5.3-dev
libxcb-dri2-0-dev
libxcb-dri3-dev
libxcb-glx0-dev
libxcb-present-dev
libxcb-randr0-dev
libxcb-shm0-dev
libxcb-sync-dev
libxcb-xfixes0-dev
libxcb1-dev
libxml2-dev
"llvm-${LLVM_VERSION}-dev"
ocl-icd-opencl-dev
python3-pip
python3-venv
procps
spirv-tools
shellcheck
strace
time
yamllint
zstd
)
apt-get update
apt-get install -y --no-remove \
"${DEPS[@]}" "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_pre_build.sh
# dependencies where we want a specific version
export XORG_RELEASES=https://xorg.freedesktop.org/releases/individual
export XORGMACROS_VERSION=util-macros-1.19.0
. .gitlab-ci/container/build-mold.sh
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 -O \
$XORG_RELEASES/util/$XORGMACROS_VERSION.tar.bz2
tar -xvf $XORGMACROS_VERSION.tar.bz2 && rm $XORGMACROS_VERSION.tar.bz2
cd $XORGMACROS_VERSION; ./configure; make install; cd ..
rm -rf $XORGMACROS_VERSION
. .gitlab-ci/container/build-wayland.sh
. .gitlab-ci/container/build-shader-db.sh
. .gitlab-ci/container/build-directx-headers.sh
python3 -m pip install --break-system-packages -r .gitlab-ci/lava/requirements.txt
# install bindgen
RUSTFLAGS='-L native=/usr/local/lib' cargo install \
bindgen-cli --version 0.65.1 \
--locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local
# install cbindgen
RUSTFLAGS='-L native=/usr/local/lib' cargo install \
cbindgen --version 0.26.0 \
--locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local
############### Uninstall the build software
apt-get purge -y "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh

Some files were not shown because too many files have changed in this diff.