Compare commits

93 Commits

Author SHA1 Message Date
Ian Romanick
4a86465f47 mesa: Bump version to 10.1 (final)
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
2014-03-05 08:59:46 +02:00
Julien Cristau
03d0c9fd30 glx/dri2: fix build failure on HURD
Patch from Debian package.

Cc: "10.0 10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 6f0e2731e8)
2014-03-05 08:38:38 +02:00
Chris Forbes
4c0702b05c i965: Validate (and resolve) all the bound textures.
BRW_MAX_TEX_UNIT is the static limit on the number of textures we
support per-stage, not in total.

Core's `Unit` array is sized by MAX_COMBINED_TEXTURE_IMAGE_UNITS, which
is significantly larger, and across the various shader stages, up to
ctx->Const.MaxCombinedTextureImageUnits elements of it may be actually
used.

Fixes invisible bad behavior in piglit's max-samplers test (although
this escalated to an assertion failure on HSW with texture_view, since
non-immutable textures only have _Format set by validation.)

Signed-off-by: Chris Forbes <chrisf@ijw.co.nz>
Cc: "9.2 10.0 10.1" <mesa-stable@lists.freedesktop.org>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit befbda56a2)
2014-03-03 09:36:41 +02:00
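
A minimal sketch of the loop-bound change this commit describes, using simplified
stand-in types and a hypothetical validate_and_resolve() helper rather than the
actual i965 code:

    /* Illustrative stand-ins; the real limits live in the GL context. */
    #define BRW_MAX_TEX_UNIT 16                    /* per-stage limit */
    #define MAX_COMBINED_TEXTURE_IMAGE_UNITS 96    /* total across all stages */

    struct texture_unit { void *current_texture; };

    struct context {
       struct texture_unit Unit[MAX_COMBINED_TEXTURE_IMAGE_UNITS];
       unsigned MaxCombinedTextureImageUnits;
    };

    static void validate_and_resolve(struct texture_unit *u) { (void)u; }

    static void validate_textures(struct context *ctx)
    {
       /* Buggy bound: BRW_MAX_TEX_UNIT only covers one stage, so units
        * bound above it by other stages were never validated/resolved. */
       /* for (unsigned i = 0; i < BRW_MAX_TEX_UNIT; i++) ... */

       /* Fixed bound: walk every unit any stage may reference. */
       for (unsigned i = 0; i < ctx->MaxCombinedTextureImageUnits; i++)
          validate_and_resolve(&ctx->Unit[i]);
    }
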
Chris Forbes
5fbd649451 i965: Widen sampler key bitfields for 32 samplers
Previously the `high` 16 samplers on Haswell+ would not get sampler
workarounds applied.

Don't bother widening YUV fields, since they're ignored and going away
soon anyway.

Signed-off-by: Chris Forbes <chrisf@ijw.co.nz>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 590920f93e)
2014-03-03 09:36:01 +02:00
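
A sketch of the widening this commit performs; the struct and field names are
illustrative stand-ins for the i965 sampler program key, not the driver's exact
definitions:

    #include <stdint.h>

    /* Before: one bit per sampler only covers 16 samplers, so the "high"
     * samplers on Haswell+ silently lost their workaround flags. */
    struct sampler_key_old {
       uint16_t gl_clamp_mask[3];
    };

    /* After: widened so all 32 samplers get a bit. */
    struct sampler_key_new {
       uint32_t gl_clamp_mask[3];
    };
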
Ian Romanick
05b9e6a963 mesa: Bump version to 10.1-rc3
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
2014-03-01 08:53:32 -08:00
Emil Velikov
92e8c52340 dri/i9*5: correctly calculate the amount of system memory
The variable name says megabytes, while we calculate the amount in
kilobytes. Correct this by dividing by the right factor.

Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Cc: "10.0 10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit fc25956bad)
2014-03-01 08:53:32 -08:00
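
A small standalone illustration of the units bug the commit fixes, with made-up
numbers; only the divisor is the point:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
       uint64_t system_memory_bytes = 8ull * 1024 * 1024 * 1024;  /* 8 GiB */

       /* Bug: the variable is named "megabytes", but dividing by 1024
        * only gets us to kilobytes. */
       uint64_t wrong_mb = system_memory_bytes / 1024;

       /* Fix: divide by 1024 * 1024 to actually obtain megabytes. */
       uint64_t right_mb = system_memory_bytes / (1024 * 1024);

       printf("wrong: %llu, right: %llu MB\n",
              (unsigned long long)wrong_mb, (unsigned long long)right_mb);
       return 0;
    }
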
Brian Paul
3f0011edfd mesa: add unpacking code for MESA_FORMAT_Z32_FLOAT_S8X24_UINT
Fixes glGetTexImage() when converting from MESA_FORMAT_Z32_FLOAT_S8X24_UINT
to GL_UNSIGNED_INT_24_8.  Hit by the piglit
ext_packed_depth_stencil-getteximage test.

Cc: "10.0" "10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit a12d4d0398)
2014-03-01 08:29:42 -08:00
Ian Romanick
6e3ce7997a i915: Allocate the sys_buffer using _mesa_align_malloc
Though it won't matter on Linux, use _mesa_align_free to release it.
Since i965 doesn't have sys_buffer, I overlooked this in the
GL_ARB_map_buffer_alignment work a few months ago.  Fixes i915 (and
presumably i830) regressions in ARB_map_buffer_range tests and the
failure in arb_map_buffer_alignment-sanity_test.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=74960
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
(cherry picked from commit ff2cbf9e0c)
2014-02-28 15:23:59 -08:00
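
The rule behind this fix, sketched with C11 aligned_alloc()/free() standing in
for Mesa's _mesa_align_malloc()/_mesa_align_free(): a buffer that must satisfy
GL_ARB_map_buffer_alignment has to come from an aligned allocator, and it must
be released with that allocator's matching free:

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
       size_t alignment = 64;   /* e.g. the context's minimum map alignment */
       size_t size = 4096;      /* must be a multiple of the alignment */

       void *sys_buffer = aligned_alloc(alignment, size);
       if (!sys_buffer)
          return 1;

       memset(sys_buffer, 0, size);

       free(sys_buffer);        /* the matching release for aligned_alloc() */
       return 0;
    }
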
Ian Romanick
1b6aad2234 i915: Only allow 8 vertex texture units
There's no reason to have more vertex texture units than fragment
texture units on this hardware.  Since increasing the default maximum
number of texture units from 16 to 32, this has triggered a segfault
in the i915 driver.  There's probably some array or bitfield that isn't
properly sized now.  This really papers over the bug, but I don't think
I'll lose any sleep over that.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=74071
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
(cherry picked from commit 8ba157006f)
2014-02-28 15:23:57 -08:00
Petri Latvala
b34f05f6a7 i965: Allocate vec4_visitor's uniform_size and uniform_vector_size arrays dynamically.
v2: Don't add function parameters, pass the required size in
prog_data->nr_params.

v3:
- Use the name uniform_array_size instead of uniform_param_count.
- Round up when dividing param_count by 4.
- Use MAX2() instead of taking the maximum by hand.
- Don't crash if prog_data passed to vec4_visitor constructor is NULL

v4: Rebase for current master

v5 (idr): Trivial whitespace change.

Signed-off-by: Petri Latvala <petri.latvala@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=71254
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 7189fce237)
2014-02-28 15:22:50 -08:00
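
A sketch of the sizing rule spelled out in the v2/v3 notes above (round up when
dividing the parameter count by 4, use MAX2, and tolerate a zero count); the
helper name is hypothetical:

    #define MAX2(a, b) ((a) > (b) ? (a) : (b))

    static unsigned uniform_array_size(unsigned param_count)
    {
       /* Each array element holds a vec4, so round the scalar parameter
        * count up to whole vec4s, and never return zero so the arrays
        * always have at least one slot. */
       return MAX2((param_count + 3) / 4, 1);
    }
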
Tom Stellard
677fde5ca0 r600g/compute: PIPE_CAP_COMPUTE should be false for pre-evergreen GPUs
This prevents clover from using unsupported devices.

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

CC: "10.0 10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit f61e382f0a)
2014-02-28 14:32:42 -08:00
Matt Turner
3305b9c96b glsl: Don't vectorize horizontal expressions.
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=75224
(cherry picked from commit 4bd7f1d044)
2014-02-28 14:32:39 -08:00
Matt Turner
a43b8bfa78 glsl: Add is_horizontal() method to ir_expression.
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 5eff8576ba)
2014-02-28 14:32:37 -08:00
Brian Paul
862572b205 mesa: do depth/stencil format conversion in glGetTexImage
glGetTexImage(GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8) was just
using memcpy() instead of _mesa_unpack_uint_24_8_depth_stencil_row()
to convert texels from the hardware format to the GL format.

Fixes issue reported by David Meng at Intel.  The new piglit
ext_packed_depth_stencil-getteximage test checks for this bug.

Also, add some format/type assertions.  We don't yet handle the
GL_FLOAT_32_UNSIGNED_INT_24_8_REV type.  That should be fixed in
a follow-on patch.

Reviewed-by: Eric Anholt <eric@anholt.net>
Cc: "10.0" "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 43dee0295e)
2014-02-28 14:32:34 -08:00
Thomas Hellstrom
037f357564 winsys/svga: Avoid calling drm getparam for max surface size on older kernels
This avoids the kernel driver spewing out errors about the param not being
supported.

Also correct the max surface size used when the kernel does not support the
query.

Reported-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Charmaine Lee <charmainel@vmware.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit f5e681f3fa)
2014-02-28 14:32:31 -08:00
Anuj Phogat
bef5554092 i965: Fix the region's pitch condition to use blitter
intelEmitCopyBlit uses a signed 16-bit integer to represent
buffer pitch, so it can only handle buffer pitches < 32k.

Cc: mesa-stable@lists.freedesktop.org
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit b3094d9927)
2014-02-26 18:07:14 -08:00
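
A sketch of the guard this commit tightens; the function and argument names are
illustrative, but the limit comes straight from the message above
(intelEmitCopyBlit stores pitches in signed 16-bit fields):

    #include <stdbool.h>
    #include <stdint.h>

    static bool region_fits_blitter(uint32_t src_pitch, uint32_t dst_pitch)
    {
       /* Pitches of 32 KiB or more cannot be encoded, so fall back to
        * another copy path instead of using the blitter. */
       return src_pitch < 32768 && dst_pitch < 32768;
    }
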
Kenneth Graunke
09b03dcee6 i965: Don't try to dump shader source for fixed-function FS programs.
sh->Source is NULL and this will segfault.

Fixes MESA_GLSL=dump with "The Swapper".

Cc: mesa-stable@lists.freedesktop.org
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit f896e82301)
2014-02-26 18:07:11 -08:00
Kenneth Graunke
45cb6063e7 glsl: Delete LRP_TO_ARITH lowering pass flag.
It's kind of a trap: calling do_common_optimization() after
lower_instructions() may cause opt_algebraic() to reintroduce
ir_triop_lrp expressions that were lowered, effectively defeating the
point.  Because of this, nobody uses it.

v2: Delete more code (caught by Ian Romanick).

Cc: "10.1" <mesa-stable@lists.freedesktop.org>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Acked-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit ac0a8b9540)
2014-02-26 18:07:08 -08:00
Kenneth Graunke
9cc1bbcaf4 i965: Stop lowering ir_triop_lrp.
Both the vector and scalar backends now support it natively, so there's
no point in lowering it.

Cc: "10.1" <mesa-stable@lists.freedesktop.org>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Acked-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 2fdea48e21)
2014-02-26 18:07:04 -08:00
Kenneth Graunke
24abd48ac0 i965/vec4: Handle ir_triop_lrp on Gen4-5 as well.
When the vec4 backend encountered an ir_triop_lrp, it always emitted an
actual LRP instruction, which only exists on Gen6+.  Gen4-5 used
lower_instructions() to decompose ir_triop_lrp at the IR level.

Since commit 8d37e9915a ("glsl: Optimize open-coded lrp into lrp."),
we've had a bug where lower_instructions translates ir_triop_lrp into
arithmetic, but opt_algebraic reassembles it back into a lrp.

To avoid this ordering concern, just handle ir_triop_lrp in the backend.
The FS backend already does this, so we may as well do likewise.

v2: Add a comment reminding us that we could emit better assembly if we
    implemented the infrastructure necessary to support using MAC.
    (Assembly code provided by Eric Anholt).

Cc: "10.1" <mesa-stable@lists.freedesktop.org>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=75253
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Acked-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 56879a7ac4)
2014-02-26 18:07:00 -08:00
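
The identity at the heart of the ordering problem described above: the lowering
pass open-codes lrp as arithmetic, and opt_algebraic recognizes exactly that
pattern and folds it back into an lrp.  A scalar illustration:

    /* lrp(x, y, a) == x * (1 - a) + y * a
     * lower_instructions() emits the right-hand side; opt_algebraic()
     * rewrites it back into the left-hand side. */
    static float lrp(float x, float y, float a)
    {
       return x * (1.0f - a) + y * a;
    }

Handling ir_triop_lrp directly in the backend sidesteps the question of which
pass runs last.
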
Kenneth Graunke
2475db34a0 i965/vec4: Add a brw->gen >= 6 assertion in three-source emitters.
Three source instructions didn't exist until Gen6.  vec4_generator has
assertions to catch this, but catching it in the visitor provides a
nicer backtrace.

Cc: "10.1" <mesa-stable@lists.freedesktop.org>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Acked-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit ffde483f3c)
2014-02-26 18:06:15 -08:00
Francisco Jerez
3efb934dee i965/vec4: Add non-mutating helper functions to modify src_reg::swizzle and ::negate.
Reviewed-by: Paul Berry <stereotype441@gmail.com>
(cherry picked from commit 98306e727b)
2014-02-26 17:58:20 -08:00
Brian Paul
29876a4d28 gallium/pipebuffer: change pb_cache_manager_create() size_factor to float
Requested by Marek.

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit e4a5a9fd2f)
2014-02-26 13:35:59 -07:00
Thomas Hellstrom
00769d0322 svga/winsys: Propagate surface shared information to the winsys
The linux winsys needs to know whether a surface is shared.
For guest-backed surfaces we need this information to avoid allocating a
mob out of the mob cache for shared surfaces, but instead allocate a shared
mob, that is never put in the mob cache, from the kernel.

Also previously, all surfaces were given the "shareable" attribute when
allocated from the kernel. This is too permissive for client-local surfaces.
Now that we have the needed info, only set the "shareable" attribute if the
client indicates that it needs to share the surface.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 141e39a893)
2014-02-26 13:35:59 -07:00
Brian Paul
e9a3a8997d svga/winsys: implement GBS support
This is a squash commit of many commits by Thomas Hellstrom.

Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit fe6a854477)
2014-02-26 13:35:59 -07:00
Thomas Hellstrom
a809de8bd9 gallium/util: Add flush/map debug utility code
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 59e7c59621)
2014-02-26 13:35:59 -07:00
Thomas Hellstrom
19740e3085 gallium/pipebuffer: Add a cache buffer manager bypass mask
In some situations, it may be desirable to bypass the cache at buffer
creation but to insert the buffer in the cache at buffer destruction.
One such situation is where we already have a kernel representation of a
buffer that we want to use, but we also want to insert it in the cache when
it's freed up.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: José Fonseca <jfonseca@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 8af358d8bc)
2014-02-26 13:35:59 -07:00
Thomas Hellstrom
a44639d826 pipebuffer, winsys: Add a size match parameter to the cached buffer manager
In some situations it's important to restrict the sizes of buffers that the
cached buffer manager is allowed to return

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit c9e9b1862b)
2014-02-26 13:35:59 -07:00
Brian Paul
03035d6074 svga: update texture code for GBS
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 3d1fd6df53)
2014-02-26 13:35:59 -07:00
Brian Paul
ab7074e024 svga: update buffer code for GBS
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 72b0e959fc)
2014-02-26 13:35:59 -07:00
Brian Paul
a0423a5be2 svga: add new helper functions for GBS buffers
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit e0a6fb09bd)
2014-02-26 13:35:59 -07:00
Brian Paul
5abf1526d7 svga: remove a couple unneeded assertions
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 6476bcbc50)
2014-02-26 13:35:59 -07:00
Brian Paul
6fc2e0a942 svga: adjust adjustment for point coordinates
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit f8bbd8261d)
2014-02-26 13:35:59 -07:00
Brian Paul
6d5e27d19e svga: track which textures are rendered to
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit d0c22a6d53)
2014-02-26 13:35:58 -07:00
Brian Paul
3a32f9773a svga: add helpers for tracking rendering to textures
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit c1e60a61e8)
2014-02-26 13:35:58 -07:00
Brian Paul
18a7c83765 svga: update shader code for GBS
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit f84c830b14)
2014-02-26 13:35:58 -07:00
Brian Paul
98c4fe0f5a svga: update constant buffer code for GBS
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 2f1fc8db10)
2014-02-26 13:35:58 -07:00
Brian Paul
2e2719246a svga: add svga_have_gb_objects/dma() functions
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 31dfefc47f)
2014-02-26 13:35:58 -07:00
Brian Paul
5f69eb6caa svga: add new GBS commands
And update some existing commands.

Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 823fbfdca7)
2014-02-26 13:35:58 -07:00
Brian Paul
0ced104930 svga: update svga_winsys interface for GBS
This adds new interface functions for guest-backed surfaces and
adds a mobid parameter to the surface_relocation() function.

Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit d993ada50c)
2014-02-26 13:35:58 -07:00
Brian Paul
8ae30c1fc4 svga: update dumping code with new GBS commands, etc
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 024711385e)
2014-02-26 13:35:57 -07:00
Brian Paul
ec9aef9ac2 svga: split / update svga3d header files
The old svga3d_reg.h file is split into separate header files and we
add new items for guest-backed surfaces.

Plus some minor code fixes because of renamed symbols.

Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Cc: "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 2e0c90847f)
2014-02-26 13:35:57 -07:00
Brian Paul
5b338d4b35 svga: replace out-of-temps assertion with debug warning
Signed-off-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 23d4ff53d4)
2014-02-26 13:35:57 -07:00
Brian Paul
07d1e7f12f svga: check shader size against max command buffer size
If the shader is too large, plug in a dummy shader.  This patch also
reworks the existing dummy shader code.

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
(cherry picked from commit 97fdace6d7)
2014-02-26 13:35:57 -07:00
Brian Paul
16c03a004d svga: refactor some shader code
Put common code in a new svga_shader.c file.  Consolidate separate vertex/
fragment shader ID generation.

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
(cherry picked from commit 4686f610b1)
2014-02-26 13:35:57 -07:00
Fredrik Höglund
3d2979f83b glx: Fix the GLXFBConfig attrib sort priorities
The sort priorities for GLX_SAMPLES and GLX_SAMPLE_BUFFERS are
not defined in GL_ARB_multisample, but they are defined in
the GLX 1.4 specification.

Cc: "9.2 10.0 10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 3616e862f2)
2014-02-26 09:15:58 -08:00
Fredrik Höglund
3224f0c978 glx: Fix the default values for GLXFBConfig attributes
The default values for GLX_DRAWABLE_TYPE and GLX_RENDER_TYPE are
GLX_WINDOW_BIT and GLX_RGBA_BIT respectively, as specified in
the GLX 1.4 specification.

This fixes the glx-choosefbconfig-defaults piglit test.

Cc: "9.2 10.0 10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit f41c2f6c33)
2014-02-26 09:15:54 -08:00
Emil Velikov
fc2834f5ad nv50: correctly calculate the number of vertical blocks during transfer map
Cc: "10.0 10.1" <mesa-stable@lists.freedesktop.org>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
(cherry picked from commit 882070cc81)
2014-02-25 07:59:25 -08:00
Ilia Mirkin
e32e2836a3 nv50: make sure to clear _all_ layers of all attachments
Unfortunately there's only one RT_ARRAY_MODE setting for all
attachments, so clears were previously truncated to the minimum number
of layers any attachment had. Instead set the RT_ARRAY_MODE to 512 (the
max number of layers) before doing the clear. This fixes
gl-3.2-layered-rendering-clear-color-mismatched-layer-count.

Also fix clears of individual layered rt/zeta, in case it ever happens.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
Reviewed-by: Christoph Bumiller <e0425955@student.tuwien.ac.at>
Cc: 10.1 <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 6152ba0894)
2014-02-24 11:09:52 -08:00
Christoph Bumiller
d8012560d5 nv50/ir/ra: fix SpillCodeInserter::offsetSlot usage
We were turning non-memory spill slots into NULL.

Cc: 10.1 <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 1f4bfb8797)
2014-02-24 11:09:47 -08:00
Christian König
8e4fec994c st/vdpau: add flush on unmap
Flush the context when we unmap a buffer, otherwise VDPAU might
start rendering the next frame while we still reference that buffer.

Signed-off-by: Christian König <christian.koenig@amd.com>
Tested-by: StrangeNoises (rachel@strangenoises.org)
(cherry picked from commit db54fca9b8)
2014-02-24 10:37:01 -08:00
Marek Olšák
5437d38fac vdpau: flush the context before exporting the surface v2
Bugzilla (bug needs XBMC changes as well):
https://bugs.freedesktop.org/show_bug.cgi?id=73191

When VL uploads vertex buffers, it uses PIPE_TRANSFER_DONTBLOCK, which always
flushes the context in the winsys if the buffer being mapped is busy. Since
I added handling of DISCARD_RANGE, DONTBLOCK has had no effect when combined
with DISCARD_RANGE and I think the context isn't flushed anywhere else,
so no commands are submitted to the GPU until the IB is full, which takes
a lot of frames.

Using DISCARD_RANGE is not the only way to trigger this bug. The other way
is to reallocate the vertex buffer before every upload.

BTW, I'm not sure if this is the right place for flushing, but it does fix
the bug.

v2 (chk): move the flush to the right place.

Signed-off-by: Christian König <christian.koenig@amd.com>
Tested-by: StrangeNoises (rachel@strangenoises.org)
(cherry picked from commit 3f98053fc9)
2014-02-24 10:36:42 -08:00
Ian Romanick
fcb4eabb5f mesa: Bump version to 10.1-rc2
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
2014-02-21 14:29:44 -08:00
Kenneth Graunke
7cb4dea765 i965: Create a hardware context before initializing state module.
brw_init_state() calls brw_upload_initial_gpu_state().  If hardware
contexts are enabled (brw->hw_ctx != NULL), this will upload some
initial invariant state for the GPU.  Without hardware contexts, we
rely on this state being uploaded via atoms that subscribe to the
BRW_NEW_CONTEXT bit.

Commit 46d3c2bf4d accidentally moved
the call to brw_init_state() before creating a hardware context.
This meant brw_upload_initial_gpu_state would always early return.
Except on Gen6+, we stopped uploading the initial GPU state via
state atoms, so it never happened.

Fixes a regression since 46d3c2bf4d.

Cc: "10.0 10.1" <mesa-stable@lists.freedesktop.org>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 3663bbe773)
2014-02-21 14:28:37 -08:00
Ian Romanick
b498fb9586 glsl: Only warn for macro names containing __
From page 14 (page 20 of the PDF) of the GLSL 1.10 spec:

    "In addition, all identifiers containing two consecutive underscores
     (__) are reserved as possible future keywords."

The intention is that names containing __ are reserved for internal use
by the implementation, and names prefixed with GL_ are reserved for use
by Khronos.  Names simply containing __ are dangerous to use, but should
be allowed.

Per the Khronos bug mentioned below, a future version of the GLSL
specification will clarify this.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Cc: "9.2 10.0 10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Tested-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
Tested-by: Darius Spitznagel <d.spitznagel@goodbytez.de>
Cc: Tapani Pälli <lemody@gmail.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=71870
Bugzilla: Khronos #11702
(cherry picked from commit 2c85fd5a96)
2014-02-20 13:41:14 -08:00
Ian Romanick
3731a4fae4 glcpp: Only warn for macro names containing __
Section 3.3 (Preprocessor) of the GLSL 1.30 spec (and later) and the
GLSL ES spec (all versions) say:

    "All macro names containing two consecutive underscores ( __ ) are
    reserved for future use as predefined macro names. All macro names
    prefixed with "GL_" ("GL" followed by a single underscore) are also
    reserved."

The intention is that names containing __ are reserved for internal use
by the implementation, and names prefixed with GL_ are reserved for use
by Khronos.  Since every extension adds a name prefixed with GL_ (i.e.,
the name of the extension), that should be an error.  Names simply
containing __ are dangerous to use, but should be allowed.  In similar
cases, the C++ preprocessor specification says, "no diagnostic is
required."

Per the Khronos bug mentioned below, a future version of the GLSL
specification will clarify this.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Cc: "9.2 10.0 10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Tested-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
Tested-by: Darius Spitznagel <d.spitznagel@goodbytez.de>
Cc: Tapani Pälli <lemody@gmail.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=71870
Bugzilla: Khronos #11702
(cherry picked from commit 0bd7892630)
2014-02-20 13:41:09 -08:00
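
A sketch of the policy the two patches above implement (not the actual glcpp
code): names prefixed with "GL_" remain an error, while names that merely
contain "__" now only draw a warning:

    #include <string.h>

    enum macro_diag { MACRO_OK, MACRO_WARN, MACRO_ERROR };

    static enum macro_diag check_macro_name(const char *name)
    {
       if (strncmp(name, "GL_", 3) == 0)
          return MACRO_ERROR;   /* reserved for use by Khronos */
       if (strstr(name, "__") != NULL)
          return MACRO_WARN;    /* reserved, but dangerous rather than illegal */
       return MACRO_OK;
    }
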
Anuj Phogat
d623eeb37a glsl: Fix condition to generate shader link error
GL_ARB_ES2_compatibility doesn't say anything about shader linking
when one of the shaders (vertex or fragment shader) is absent. So,
the extension shouldn't change the behavior specified in GLSL
specification.

Tested the behavior on proprietary linux drivers of NVIDIA and AMD.
Both of them allow linking a version 100 shader program in OpenGL
context, when one of the shaders is absent.

Makes the following Khronos CTS tests pass:
successfulcompilevert_linkprogram.test
successfulcompilefrag_linkprogram.test

Cc: mesa-stable@lists.freedesktop.org
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 03597cf802)
2014-02-18 16:47:33 -08:00
Anuj Phogat
6534d80cca mesa: Add GL_TEXTURE_CUBE_MAP_ARRAY to legal_get_tex_level_parameter_target()
Fixes failing Khronos CTS test packed_depth_stencil_init.test

Cc: <mesa-stable@lists.freedesktop.org>
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 6bd2472a8b)
2014-02-18 16:47:33 -08:00
Michel Dänzer
20eb466999 r600g,radeonsi: Consolidate logic for short-circuiting flushes
Fixes radeonsi emitting command streams to the kernel even when there
have been no draw calls before a flush, potentially powering up the GPU
needlessly.

Incidentally, this also cuts the runtime of piglit gpu.py in about half
on my Kaveri system, probably because an X11 client going away no longer
always results in a command stream being submitted to the kernel via
glamor.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=65761
Cc: "10.1" mesa-stable@lists.freedesktop.org
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
(cherry picked from commit cf0172d46a)
2014-02-18 16:47:33 -08:00
Kusanagi Kouichi
706eef0cfe targets/vdpau: Always use c++ to link
If built without llvm, the following error occurs with mplayer:

Failed to open VDPAU backend .../libvdpau_r600.so: undefined symbol: _ZTVN10__cxxabiv117__class_type_infoE
[vo/vdpau] Error when calling vdp_device_create_x11: 1

Cc: <mesa-stable@lists.freedesktop.org>
Signed-off-by: Kusanagi Kouichi <slash@ac.auone-net.jp>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
(cherry picked from commit 61f6cddef7)
2014-02-18 16:47:33 -08:00
Carl Worth
a889b8e9aa main: Avoid double-free of shader Label
As documented, the _mesa_free_shader_program_data function:

	"Frees all the data that hangs off a shader program object, but not
	the object itself."

This means that this function may be called multiple times on the same object,
(and has been observed to). Meanwhile, the shProg->Label field was not being
set to NULL after its free(). This led to a second call to free() of the same
address on the second call to this function.

Fix this by setting this field to NULL after free(), (just as with all other
calls to free() in this function).

Reviewed-by: Brian Paul <brianp@vmware.com>

CC: mesa-stable@lists.freedesktop.org
(cherry picked from commit a92581acf2)
2014-02-18 16:47:33 -08:00
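
The pattern the fix applies, in a simplified form (the real structure has many
more fields): because _mesa_free_shader_program_data() may run more than once
on the same object, a freed pointer must be reset to NULL so the next pass has
nothing left to free:

    #include <stdlib.h>

    struct shader_program { char *Label; };

    static void free_shader_program_data(struct shader_program *shProg)
    {
       free(shProg->Label);
       shProg->Label = NULL;   /* the missing reset that caused the double free */
    }

Since free(NULL) is a no-op, calling this any number of times is now safe.
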
Alex Deucher
02d96b7e9f radeon: reverse DBG_NO_HYPERZ logic
Change the flag to DBG_HYPERZ and reverse the logic
so setting the flag enabled the feature.  This disables
hyperz on r600g and radeonsi by default.  It can be
enabled by setting the env var.  There are just too
many issues with certain apps so leave it disabled for
now until we sort out the issues with the problematic
apps.

Bugs:
https://bugs.freedesktop.org/show_bug.cgi?id=58660
https://bugs.freedesktop.org/show_bug.cgi?id=64471
https://bugs.freedesktop.org/show_bug.cgi?id=66352
https://bugs.freedesktop.org/show_bug.cgi?id=68799
https://bugs.freedesktop.org/show_bug.cgi?id=72685
https://bugs.freedesktop.org/show_bug.cgi?id=73088
https://bugs.freedesktop.org/show_bug.cgi?id=74428
https://bugs.freedesktop.org/show_bug.cgi?id=74803
https://bugs.freedesktop.org/show_bug.cgi?id=74863
https://bugs.freedesktop.org/show_bug.cgi?id=74892
https://bugzilla.kernel.org/show_bug.cgi?id=70411

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: "10.1" "10.0" <mesa-stable@lists.freedesktop.org>
Acked-by: Marek Olšák <marek.olsak@amd.com>
(cherry picked from commit 01e6371149)
2014-02-18 16:45:05 -08:00
Ilia Mirkin
a1b6aa9fe2 nouveau: fix chipset checks for nv1a by using the oclass instead
Commit f4ebcd133b ("dri/nouveau: NV17_3D class is not available for
NV1a chipset") fixed this partially by using the correct 3d class.
However there were a lot of checks left over comparing against the
chipset.

Reported-and-tested-by: John F. Godfrey <jfgodfrey@gmail.com>
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: 9.2 10.0 10.1 <mesa-stable@lists.freedesktop.org>
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
(cherry picked from commit 0c8b165366)
2014-02-18 16:45:01 -08:00
Fredrik Höglund
150b1f0aac mesa: Preserve the NewArrays state when copying a VAO
Cc: "10.1" "10.0" <mesa-stable@lists.freedesktop.org>

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=72895
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 9afbd04d89)
2014-02-18 16:44:58 -08:00
Matt Turner
50066dc544 glsl: Do not vectorize vector array dereferences.
Array dereferences must have scalar indices, so we cannot vectorize
them.

Cc: "10.1" <mesa-stable@lists.freedesktop.org>
Reported-by: Andrew Guertin <lists@dolphinling.net>
Tested-by: Andrew Guertin <lists@dolphinling.net>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 025d99ce3c)
2014-02-18 16:44:53 -08:00
Emil Velikov
088d642b8f dri/nouveau: Pass the API into _mesa_initialize_context
Currently we create a OPENGL_COMPAT context regardless of
what was requested by the program. Correct that by retaining
the program's request and passing it into _mesa_initialize_context.

Based on a similar commit for radeon/r200 by Ian Romanick.

Cc: "9.1 9.2 10.0" <mesa-stable@lists.freedesktop.org>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
(cherry picked from commit 76d9f6d972)
2014-02-18 16:44:50 -08:00
Daniel Kurtz
69bd4ed017 glsl: Add locking to builtin_builder singleton
Consider a multithreaded program with two contexts A and B, and the
following scenario:

1. Context A calls initialize(), which allocates mem_ctx and starts
   building built-ins.
2. Context B calls initialize(), which sees mem_ctx != NULL and assumes
   everything is already set up.  It returns.
3. Context B calls find(), which fails to find the built-in since it
   hasn't been created yet.
4. Context A finally finishes initializing the built-ins.

This will break at step 3.  Adding a lock ensures that subsequent
callers of initialize() will wait until initialization is actually
complete.

Similarly, if any thread calls release while another thread is still
initializing, or calling find(), the mem_ctx/shader would get freed out
from under it, leading to corruption or use-after-free crashes.

Fixes sporadic failures in Piglit's glx-multithread-shader-compile.

Bugzilla: https://bugs.freedesktop.org/69200
Signed-off-by: Daniel Kurtz <djkurtz@chromium.org>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Cc: "10.1 10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit b47d231526)
2014-02-18 16:44:46 -08:00
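
A sketch of the locking scheme the commit describes, using plain pthreads and a
malloc'd placeholder where the real code uses Mesa's mutex wrappers and a
ralloc context; the point is that the existence check and the build both happen
under the same lock:

    #include <pthread.h>
    #include <stdlib.h>

    static pthread_mutex_t builtins_lock = PTHREAD_MUTEX_INITIALIZER;
    static void *mem_ctx;      /* non-NULL once the built-ins are built */

    static void build_builtins(void)
    {
       mem_ctx = malloc(1);    /* placeholder for the real, slow build */
    }

    void builtins_initialize(void)
    {
       pthread_mutex_lock(&builtins_lock);
       if (mem_ctx == NULL)
          build_builtins();    /* later callers block here until it's done */
       pthread_mutex_unlock(&builtins_lock);
    }

    void builtins_release(void)
    {
       pthread_mutex_lock(&builtins_lock);
       free(mem_ctx);
       mem_ctx = NULL;
       pthread_mutex_unlock(&builtins_lock);
    }
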
Kenneth Graunke
62a358892f mesa: Fix MESA_FORMAT_Z24_UNORM_S8_UINT vs. X8_UINT mix-up.
In commit eeed49f5f2, Mark accidentally
renamed MESA_FORMAT_S8_Z24 to MESA_FORMAT_Z24_UNORM_X8_UINT and
MESA_FORMAT_X8_Z24 to MESA_FORMAT_Z24_UNORM_S8_UINT, reversing their
sense.  The commit message was correct, but the sed commands that were
actually run didn't match it.

This patch swaps the two enum names, reversing them.  This should undo
the damage, but might break things if people have manually fixed a few
instances in the meantime...

Mark's commit also failed to mention renames:
s/MESA_FORMAT_ARGB2101010_UINT\b/MESA_FORMAT_B10G10R10A2_UINT/g
s/MESA_FORMAT_ABGR2101010\b/MESA_FORMAT_R10G10B10A2_UNORM/g
but those seem okay.

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
(cherry picked from commit a487ef87fe)
2014-02-10 08:45:16 -08:00
Ilia Mirkin
290648b076 nouveau/video: make sure that firmware is present when checking caps
Apparently some players are ill-prepared for us claiming that a decoder
exists only to have creating it fail, and express this poor preparation
with crashes (e.g. flash). Check that firmware is there to increase the
chances of there being a high correlation between reported capabilities
and ability to create a decoder.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: 10.0 10.1 <mesa-stable@lists.freedesktop.org>
Tested-by: Emil Velikov <emil.l.velikov@gmail.com>
(cherry picked from commit 40dd777b33)
2014-02-10 08:44:56 -08:00
Grigori Goronzy
7f97f1fce4 gallium: add geometry shader output limits
v2: adjust limits for radeonsi and llvmpipe
v3: add documentation

Cc: "10.1" <mesa-stable@lists.freedesktop.org>
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
(cherry picked from commit d34d5fddf8)
2014-02-10 08:44:52 -08:00
Ilia Mirkin
33169597f7 nv30: report 8 maximum inputs
nvfx_fragprog_assign_generic only allows for up to 10/8 texcoords for
nv40/nv30. This fixes compilation of the varying-packing tests.
Furthermore it appears that the last 2 inputs on nv4x don't seem to
work in those tests, so just report 8 everywhere for now.

Tested on NV42, NV44. NV4B appears to have additional problems.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: 9.1 9.2 10.0 10.1 <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 356aff3a5c)
2014-02-10 08:44:48 -08:00
Christoph Bumiller
966f2d3db8 nv50/ir/ra: some register spilling fixes
Cc: 10.1 <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 2e9ee44797)
2014-02-10 08:44:41 -08:00
Brian Paul
3cefbe5cf5 mesa: update assertion in detach_shader() for geom shaders
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=74723
Cc: "10.0" "10.1" <mesa-stable@lists.freedesktop.org>
Tested-by: Andreas Boll <andreas.boll.dev@gmail.com>
(cherry picked from commit c325ec8965)
2014-02-10 08:44:37 -08:00
Ian Romanick
1e6bba58d8 mesa: Bump version to 10.1-rc1
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
2014-02-07 18:34:08 -08:00
Christoph Bumiller
137a0fe5c8 nvc0: handle TGSI_SEMANTIC_LAYER
Cc: 10.1 <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 882e98e5e6)
2014-02-07 17:10:11 -08:00
Kristian Høgsberg
70e8ec38b5 glx: Pass NULL DRI drawables into the DRI driver for None GLX drawables
GLX_ARB_create_context allows making a GLX context current with None
drawable and readables, but this was never implemented correctly in GLX.
We would create a __DRIdrawable for the None GLX drawable and pass that
to the DRI driver and that would somehow work.  Now it's somehow broken.

The way this should have worked is that we pass a NULL DRI drawable
to the DRI driver when the GLX user calls glXMakeContextCurrent()
with None for drawable and readables.

https://bugs.freedesktop.org/show_bug.cgi?id=74143
Signed-off-by: Kristian Høgsberg <krh@bitplanet.net>
(cherry picked from commit f658150639)
2014-02-07 17:10:11 -08:00
Kristian Høgsberg
c79a7ef9a3 i965: Move intel_prepare_render() above first buffer access
The driver is supposed to ensure buffers before any drawing operation, but in
do_blit_drawpixels() and do_blit_copypixels() we inspect the buffer format
before calling intel_prepare_render().  That was covered up by the
unconditional call to intel_prepare_render() in intelMakeCurrent(), but we
now only do this on the initial intelMakeCurrent call for a context
(to get the size for the initial viewport values).

https://bugs.freedesktop.org/show_bug.cgi?id=74083

Signed-off-by: Kristian Høgsberg <krh@bitplanet.net>
Tested-by: Alexander Monakov <amonakov@gmail.com>
(cherry picked from commit 44338cd826)
2014-02-07 17:10:11 -08:00
Christoph Bumiller
17aeb3fdc9 nvc0/ir/emit: hardcode vertex output stream to 0 for now
(cherry picked from commit b7233acf78)
2014-02-07 17:10:11 -08:00
Kenneth Graunke
ecaf9259e9 glsl: Don't lose precision qualifiers when encountering "centroid".
Mesa fails to retain the precision qualifier when parsing:

   #version 300 es
   centroid in mediump vec2 v;

Consider how the parser's type_qualifier production is applied.
First, the precision_qualifier rule creates a new ast_type_qualifier:

    <precision: mediump>

Then the storage_qualifier rule creates a second one:

    <flags: in>

and calls merge_qualifier() to fold in any previous qualifications,
returning:

    <flags: in, precision: mediump>

Finally, the auxiliary_storage_qualifier creates one for "centroid":

    <flags: centroid>

it then does $$ = $1 and $$.flags |= $2.flags, resulting in:

    <flags: centroid, in>

Since precision isn't stored in the flags bitfield, it is lost.  We need
to instead call merge_qualifier to combine all the fields.

Cc: mesa-stable@lists.freedesktop.org
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reported-by: Kevin Rogovin <kevin.rogovin@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 2062f40d81)
2014-02-07 17:10:10 -08:00
Brian Paul
0fb761b404 st/mesa: avoid sw fallback for getting/decompressing textures
If st_GetTexImage() is to decompress the texture, avoid the fallback
path even if prefer_blit_based_texture_transfer = false.  For drivers
that returned PIPE_CAP_PREFER_BLIT_BASED_TEXTURE_TRANSFER = 0, we
were always taking the fallback path for texture decompression rather
than rendering a quad.  The latter is a lot faster.

Cc: "10.0" "10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
(cherry picked from commit f47e596288)
2014-02-07 17:10:10 -08:00
Ilia Mirkin
31911f8d37 nv50: only over-allocate by a page for code
The pre-fetching doesn't go too far. Tested with over-allocating by only
a page, and didn't see any errors in dmesg. Saves ~512KB of VRAM.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: 10.1 <mesa-stable@lists.freedesktop.org>
Reviewed-by: Christoph Bumiller <e0425955@student.tuwien.ac.at>
(cherry picked from commit f76c7ad5b1)
2014-02-07 17:10:10 -08:00
Ilia Mirkin
142f6cc0b4 nv50: fix layerid to be the fp input number rather than vp output number
In the tests they were the same so it didn't matter, but indications are
that this is the correct behaviour. Also take this opportunity to
(trivially) support using gl_Layer in fp.

Cc: 10.1 <mesa-stable@lists.freedesktop.org>
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Christoph Bumiller <e0425955@student.tuwien.ac.at>
(cherry picked from commit 364bdd2419)
2014-02-07 17:10:10 -08:00
Ilia Mirkin
156ac628a8 nv50: rework primid logic
Functionally identical but much simpler. Should also better integrate
with future layer/viewport changes/fixes.

Cc: 10.1 <mesa-stable@lists.freedesktop.org>
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Christoph Bumiller <e0425955@student.tuwien.ac.at>
(cherry picked from commit c7373b7dc7)
2014-02-07 17:10:10 -08:00
Matt Turner
7aa84761b6 glsl: Initialize ubo_binding_mask flags to zero.
Missed in commit e63bb298. Caused sporadic test failures, like
incorrect-in-layout-qualifier-repeated-prim.geom.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
(cherry picked from commit e2ef93cf94)
2014-02-07 17:10:10 -08:00
Marek Olšák
61219adb3d st/mesa: fix crash when a shader uses a TBO and it's not bound
This binds a NULL sampler view in that case.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=74251

Cc: "10.1" "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit c6dbcf10df)
2014-02-07 17:10:10 -08:00
Paul Berry
ee632e68bd glsl: Fix continue statements in do-while loops.
From the GLSL 4.40 spec, section 6.4 (Jumps):

    The continue jump is used only in loops. It skips the remainder of
    the body of the inner most loop of which it is inside. For while
    and do-while loops, this jump is to the next evaluation of the
    loop condition-expression from which the loop continues as
    previously defined.

Previously, we incorrectly treated a "continue" statement as jumping
to the top of a do-while loop.

This patch fixes the problem by replicating the loop condition when
converting the "continue" statement to IR.  (We already do a similar
thing in "for" loops, to ensure that "continue" causes the loop
expression to be executed).

Fixes piglit tests:
- glsl-fs-continue-inside-do-while.shader_test
- glsl-vs-continue-inside-do-while.shader_test
- glsl-fs-continue-in-switch-in-do-while.shader_test
- glsl-vs-continue-in-switch-in-do-while.shader_test

Cc: mesa-stable@lists.freedesktop.org

Acked-by: Carl Worth <cworth@cworth.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
(cherry picked from commit 7f5740899f)
2014-02-07 17:10:10 -08:00
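
Since GLSL follows C's rules here, a plain C program shows the difference
between the two interpretations of "continue" in a do-while:

    #include <stdio.h>

    int main(void)
    {
       int i = 0;
       do {
          i++;
          if (i == 3)
             continue;       /* must jump to the "while (i < 3)" test */
       } while (i < 3);

       /* Correct semantics (C, and GLSL after the fix) leave i == 3 here.
        * The old IR jumped back to the top of the body instead, skipping
        * one evaluation of the condition and running one extra iteration. */
       printf("i = %d\n", i);
       return 0;
    }
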
Paul Berry
b5c99be4af glsl: Make condition_to_hir() callable from outside ast_iteration_statement.
In addition to making it public, we also need to change its first
argument from an ir_loop * to an exec_list *, so that it can be used
to insert the condition anywhere in the IR (rather than just in the
body of the loop).

This will be necessary in order to make continue statements work
properly in do-while loops.

Cc: mesa-stable@lists.freedesktop.org

Acked-by: Carl Worth <cworth@cworth.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
(cherry picked from commit 56790856b3)
2014-02-07 17:10:10 -08:00
Topi Pohjolainen
165868d45e i965/blorp: do not use unnecessary hw-blending support
This is really not needed as blorp blit programs already sample
XRGB normally and get the alpha channel set to 1.0 automatically by
the sampler engine. This is simply copied directly to the payload
of the render target write message and hence there is no need for
any additional blending support from the pixel processing pipeline.

The blending formula is anyway broken for color components, it
multiplies the color component with itself (blend factor is the
component itself).
Alpha blending, in turn, would not force the alpha to one independent
of the source, but would simply use the source alpha as is
(1.0 * src_alpha + 0.0 * dst_alpha).

Quoting Eric:

 "If we want to actually make the no-alpha-bits-present thing work,
  we need to override the bits in the surface state or in the
  generated code.  In the normal draw path, it's done for sampling
  by the swizzling code in brw_wm_surface_state.c, and the blending
  overrides is just to fix up the alpha blending stage which
  doesn't pay attention to that for the destination surface."

If one modifies piglit test gl-3.2-layered-rendering-blit to use
color component values other than zero or one, this change will
kick in on IVB. No regressions on IVB.

This is effectively a revert of c0554141a9:

    i965/blorp: Support overriding destination alpha to 1.0.

    Currently, Blorp requires the source and destination formats to be
    equal.  However, we'd really like to be able to blit between XRGB and
    ARGB formats; our BLT engine paths have supported this for a long time.

    For ARGB -> XRGB, nothing needs to occur: the missing alpha is already
    interpreted as 1.0.  For XRGB -> ARGB, we need to smash the alpha
    channel to 1.0 when writing the destination colors.  This is fairly
    straightforward with blending.

    For now, this code is never used, as the source and destination formats
    still must be equal.  The next patch will relax that restriction.

    NOTE: This is a candidate for the 9.1 branch.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Topi Pohjolainen <topi.pohjolainen@intel.com>
(cherry picked from commit 933be19cdf)
2014-02-07 17:10:10 -08:00
Christian König
bbcd975881 radeon/uvd: fix feedback buffer handling v2
Without the correct feedback buffer size UVD runs
into an error on each frame, reducing the maximum FPS.

v2: address Michel's comments

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Cc: "10.1" "10.0" "9.2" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit c3c24c3acc)
2014-02-07 17:10:10 -08:00
Brian Paul
6cfcc4fccf draw: fix incorrect color of flat-shaded clipped lines
When we clipped a line we weren't copying the provoking vertex
color to the second vertex.  We also weren't checking for
first vs. last provoking vertex.

Fixes failures found with the new piglit line-flat-clip-color test.

Cc: "10.0, 10.1" <mesa-stable@lists.freedesktop.org>

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
(cherry picked from commit fc3fcd1e01)
2014-02-07 17:10:09 -08:00
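
A simplified sketch of the rule being enforced: after clipping, both endpoints
of a flat-shaded line must carry the provoking vertex's color, and which
endpoint provokes depends on the first/last provoking-vertex convention.  Types
and names here are stand-ins, not draw's own:

    struct vertex { float color[4]; };

    static void flat_shade_clipped_line(struct vertex *v0, struct vertex *v1,
                                        int provoking_vertex_last)
    {
       const struct vertex *prov = provoking_vertex_last ? v1 : v0;
       struct vertex *other = provoking_vertex_last ? v0 : v1;

       for (int i = 0; i < 4; i++)
          other->color[i] = prov->color[i];   /* the copy that was missing */
    }
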
Brian Paul
39a3b0313b gallium/auxiliary/indices: replace free() with FREE()
To match the CALLOC_STRUCT() call.

Cc: "10.0, 10.1" <mesa-stable@lists.freedesktop.org>

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
(cherry picked from commit 307fd76053)
2014-02-07 17:10:09 -08:00
Dave Airlie
9e59e41266 docs: update 10.1 relnotes to note GL 3.3 on r600 and radeonsi.
Signed-off-by: Dave Airlie <airlied@redhat.com>
2014-02-06 01:03:09 +00:00
Dave Airlie
1289080c4d r600g: Add GL 3.3 support for 10.1 release
All patches on master below, except max samplers
which was removed on master.

Signed-off-by: Dave Airlie <airlied@redhat.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>

commit 57c6bb18822ebf88a98b98714c846608ff3ba42b
Author: Dave Airlie <airlied@redhat.com>
Date:   Thu Feb 6 00:48:57 2014 +0000

    bump max samplers

commit 2e4bd244493bebd41edf725a2c3c4e793282a5bb
Author: Dave Airlie <airlied@redhat.com>
Date:   Thu Jan 30 04:19:57 2014 +0000

    r600g: add support for geom shaders to r600/r700 chipsets (v2)

    This is my first attempt at enabling r600/r700 geometry shaders;
    the basic tests pass on both my rv770 and my rv635.

    It requires this kernel patch:
    http://www.spinics.net/lists/dri-devel/msg52745.html

    v2: address Alex comments.

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit 0ed4f769d77c4db2259befba5fc1707f1cb5cb98
Author: Dave Airlie <airlied@redhat.com>
Date:   Wed Jan 29 21:48:09 2014 +0000

    r600g: enable GLSL 3.30 on evergreen GPUs

    This throws the switch to enable GL 3.3 and GLSL 330.

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit aeca8f21dd42b9ecd3932ef028fa8846036c1307
Author: Dave Airlie <airlied@redhat.com>
Date:   Tue Feb 4 10:48:42 2014 +1000

    r600g: properly propagate clip dist write value

    This moves the value from the GS shader to the copy shader so the registers
    are setup correctly.

    fixes tests/spec/glsl-1.50/execution/geometry/clip-distance-out-values.shader_test

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit e1bc410fe670bb17078a55876f1700a504127fef
Author: Dave Airlie <airlied@redhat.com>
Date:   Mon Feb 3 15:31:26 2014 +1000

    r600g: calculate a better value for array_size (v2)

    attempt to calculate a better value for array size to avoid breaking apps.

    v2: use 0xfff like streamout, suggested by Grigori

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit 6f2f117dec51eb51c1b09e86e829e176a98e3bfc
Author: Dave Airlie <airlied@redhat.com>
Date:   Fri Jan 31 03:35:51 2014 +0000

    r600g: fix CAYMAN geometry shader support

    cayman has a different end of program bit, so do that properly.

    fixes hangs with geom shader tests on cayman.

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit 305ea22fd517f83406aba3e3930d710fd42a3049
Author: Dave Airlie <airlied@redhat.com>
Date:   Wed Jan 29 00:17:15 2014 +0000

    r600g: fix up shader out misc stuff for copy shader

    set the correct values so the misc out register is set up correctly
    for the copy shader.

    This also updates the state for the gs copy shader so the hw
    gets programmed correctly.

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit 53630e14c8791a84798a03d74653bf46bd013fc7
Author: Dave Airlie <airlied@redhat.com>
Date:   Tue Jan 28 23:15:29 2014 +0000

    r600g: port the layered surface rendering patch from radeonsi

    This just makes r600 and evergreen do what the radeonsi codepaths do
    for layered rendering. This makes the 2d amd_vertex_shader_layer test
    pass on evergreen.

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit aa4cd3b9bed1ea23468fba4aa5c428153e8cddc1
Author: Dave Airlie <airlied@redhat.com>
Date:   Tue Jan 28 13:04:00 2014 +1000

    r600g: initial VS output layer support

    This just adds support for emitting the proper value in the VS out misc.

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit 75a93f2e1e0f4d6015cdf63570ec4d3d12478b8d
Author: Dave Airlie <airlied@redhat.com>
Date:   Tue Jan 28 12:06:49 2014 +1000

    r600g: setup const texture buffers for geom shaders

    This just enables the workarounds we have for vertex/pixel shaders
    for geom shaders as well.

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit 88697a860635aae54e56dce2d6a839a06dea0c5a
Author: Dave Airlie <airlied@redhat.com>
Date:   Fri Jan 24 17:14:26 2014 +1000

    r600g: calculate correct cut value

    This selects the cut value depending on the shader selected.

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit dfb88bef3e13112a838773e700c35052774f8a63
Author: Dave Airlie <airlied@redhat.com>
Date:   Fri Jan 24 14:46:37 2014 +1000

    r600g: fix dynamic_input_array_index.shader_test

    This follows what fglrx does: it unpacks the input we are
    going to indirect into a bunch of registers and indirects
    inside them.

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit a3c6373f8cf3aab750399654a4b77150ec30bce9
Author: Dave Airlie <airlied@redhat.com>
Date:   Fri Jan 24 13:39:36 2014 +1000

    r600g: add support for indirect geom ring writes

    We need to be able to write to the ring using a base register
    for when we emit vertices in a loop.  In theory the SB compiler
    could collapse these indirect writes to direct writes if the
    register value is constant and known, but that is outside my
    pay grade.

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit dbc6a13adf935b118eaa6b396593f50d7b7e16e6
Author: Dave Airlie <airlied@redhat.com>
Date:   Tue Dec 24 05:59:19 2013 +0000

    r600g: write proper output prim type

    Vadim's code derived it from the info.mode, but it needs
    to be taken from the geometry shader output primitive.

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit f7f51b0b775f652967e2b972cf7c183482a771be
Author: Dave Airlie <airlied@redhat.com>
Date:   Tue Dec 24 05:30:37 2013 +0000

    r600g: enable instance cnt register with new enough kernel

    The instance cnt register was missing for a few kernels;
    with a new enough kernel we can output it.

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit 9e6ce37f66372018ec5398f74c3b43ff5f5bf309
Author: Dave Airlie <airlied@redhat.com>
Date:   Mon Dec 23 01:30:03 2013 +0000

    r600g: add primitive input support for gs

    only enable prim id if gs uses it

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit fa932dfc7df3cf9ff63d08fb0e1db2119fc2ac93
Author: Dave Airlie <airlied@redhat.com>
Date:   Thu Dec 19 05:17:00 2013 +0000

    r600g: emit streamout from dma copy shader

    This enables streamout with GS in the mix, from the
    VS dma shader.

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit 205defb542ac185b7f46508fd51a4077a4702107
Author: Dave Airlie <airlied@redhat.com>
Date:   Wed Dec 18 15:55:07 2013 +1000

    r600g/gs: fix cases where number of gs inputs != number of gs outputs

    this fixes a bunch of the geom shader built-in tests

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit d9e7ab40bc45644194c86f842599c76d0675243c
Author: Dave Airlie <airlied@redhat.com>
Date:   Tue Jan 28 10:21:03 2014 +1000

    r600g: increase array base for exported parameters

    Trivial fix to Vadim's code.

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit 82d67fbd3b96b6b2cc0124a19b6f31b7912ec152
Author: Dave Airlie <airlied@redhat.com>
Date:   Fri Jan 24 16:41:32 2014 +1000

    r600g: initialise the geom shader loop registers.

    As we do for vertex and pixel shaders.

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit 78be55d98d290d708bd1b3df3ef6cd5fa89865c7
Author: Dave Airlie <airlied@redhat.com>
Date:   Sat Nov 30 06:26:13 2013 +0000

    r600g: emit NOPs at end of shaders in more cases

    If the shader has no CF clauses at all, emit a NOP.
    If the last instruction is an ENDLOOP, add a NOP for the LOOP to go to.
    If the last instruction is CALL_FS, add a NOP.

    These fix a bunch of hangs in the geometry shader tests.

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit 634b2498dc73efa3cca5a6fc3ed35c5bea6bb2e9
Author: Dave Airlie <airlied@redhat.com>
Date:   Thu Nov 28 23:38:35 2013 +0000

    r600g: don't enable SB for geom shaders

    SB needs fixes for three GS instructions; it seems to raise
    them outside loops etc. despite my best efforts.

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit 5b61dd0e917e54625ac227b8b1c2c82955f51ab1
Author: Dave Airlie <airlied@redhat.com>
Date:   Tue Dec 24 04:56:25 2013 +0000

    r600g/sb: add MEM_RING support

    Although we don't use SB on geom shaders, the VS copy shader will use it
    so we might as well implement MEM_RING support in sb.

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit 0247375aec4681c154ae4d14b8cd637e7a9e0e3e
Author: Dave Airlie <airlied@redhat.com>
Date:   Wed Jan 29 04:08:43 2014 +0000

    r600g: don't fail if we can't map VS->GS ring entries

    This can happen in normal operation, so don't report an error on it,
    just continue.

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit 2c986600fac6cb5692e9e377cb04f9f50389172c
Author: Vadim Girlin <vadimgirlin@gmail.com>
Date:   Fri Aug 2 06:38:23 2013 +0400

    r600g: initial support for geometry shaders on evergreen (v2)

    This is Vadim's initial work with a few regression fixes squashed in.

    v2: (airlied)
    - fix a regression in glsl-max-varyings (need to use the vs and ps dirty flags)
    - fix a regression in shader exports caused by rebasing
    - whitespace fixes
    v2.1: squash in an assert fix

    Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit ce23c43e2b611f30964afe4d1c02c4d0361ba430
Author: Vadim Girlin <vadimgirlin@gmail.com>
Date:   Fri Aug 2 06:32:32 2013 +0400

    r600g: add hw register definitions for GS block setup

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit b0ec79c28d6373930ca0dc19168dd504204456b5
Author: Vadim Girlin <vadimgirlin@gmail.com>
Date:   Wed Jul 31 23:09:39 2013 +0400

    r600g: defer shader variant selection and depending state updates

    [airlied: fix dropped streamout line - fix for master]

    Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit e41cbfb4d15d519f9301699f39d7dd0153f2edf4
Author: Dave Airlie <airlied@redhat.com>
Date:   Mon Jan 13 10:19:00 2014 +1000

    r600g/bc: add support for indexed memory writes.

    It looks like we need these for geom shaders in the future.

    Signed-off-by: Dave Airlie <airlied@redhat.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

commit 46efb1648e883b2cb231cca38c1540e7e9ec1ecc
Author: Vadim Girlin <vadimgirlin@gmail.com>
Date:   Wed Jul 31 20:02:22 2013 +0400

    r600g: move barrier and end_of_program bits from output to cf struct (v2)

    v2: fix regression on r600 NOP instructions.
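
    Structurally the change just relocates two per-CF flags from the
    output record onto the CF node itself; an illustrative declaration
    sketch (field names approximate, unrelated fields elided):

        struct r600_bytecode_cf {
                unsigned op;
                /* ... other CF fields ... */
                unsigned barrier;          /* was r600_bytecode_output.barrier */
                unsigned end_of_program;   /* was r600_bytecode_output.end_of_program */
        };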

    Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
    Signed-off-by: Dave Airlie <airlied@redhat.com>

commit 42802d5d8d145f07cf3fca1bb6e8ab0cd1fd5c85
Author: Dave Airlie <airlied@redhat.com>
Date:   Wed Jan 29 01:33:14 2014 +0000

    r600g: split streamout emit code into a separate function

    For geometry shaders we need to call this code from a second place.

    Just move it out for now to keep future patches cleaner.

    Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
    Signed-off-by: Dave Airlie <airlied@redhat.com>
2014-02-06 00:49:58 +00:00
13306 changed files with 923328 additions and 5371752 deletions

View File

@@ -1,2 +0,0 @@
# Vendored code
src/amd/vulkan/radix_sort/*

View File

@@ -1,10 +0,0 @@
# The following files are opted into `ninja clang-format` and
# enforcement in the CI.
src/gallium/drivers/i915
src/gallium/drivers/r300/compiler/*
src/gallium/targets/teflon/**/*
src/amd/vulkan/**/*
src/amd/compiler/**/*
src/egl/**/*
src/etnaviv/isa/**/*

View File

@@ -1,18 +1,12 @@
((nil . ((show-trailing-whitespace . t)))
(prog-mode
((nil
(indent-tabs-mode . nil)
(tab-width . 8)
(c-basic-offset . 3)
(c-file-style . "stroustrup")
(fill-column . 78)
(eval . (progn
(c-set-offset 'case-label '0)
(c-set-offset 'innamespace '0)
(c-set-offset 'inline-open '0)))
(whitespace-style face indentation)
(whitespace-line-column . 79)
(eval ignore-errors
(require 'whitespace)
(whitespace-mode 1)))
)
(makefile-mode (indent-tabs-mode . t))
)

View File

@@ -1,44 +0,0 @@
# To use this config on you editor, follow the instructions at:
# http://editorconfig.org
root = true
[*]
charset = utf-8
insert_final_newline = true
tab_width = 8
[*.{c,h,cpp,hpp,cc,hh,y,yy}]
indent_style = space
indent_size = 3
max_line_length = 78
[{Makefile*,*.mk}]
indent_style = tab
[*.py]
indent_style = space
indent_size = 4
[*.yml]
indent_style = space
indent_size = 2
[*.rst]
indent_style = space
indent_size = 3
[*.patch]
trim_trailing_whitespace = false
[{meson.build,meson.options}]
indent_style = space
indent_size = 2
[*.ps1]
indent_style = space
indent_size = 2
[*.rs]
indent_style = space
indent_size = 4

View File

@@ -1,79 +0,0 @@
# List of commits to ignore when using `git blame`.
# Enable with:
# git config blame.ignoreRevsFile .git-blame-ignore-revs
#
# Per git-blame(1):
# Ignore revisions listed in the file, one unabbreviated object name
# per line, in git-blame. Whitespace and comments beginning with # are
# ignored.
#
# Please keep these in chronological order :)
#
# You can add a new commit with the following command:
# git log -1 --pretty=format:'%n# %s%n%H%n' >> .git-blame-ignore-revs $COMMIT
# pvr: Fix clang-format error.
0ad5b0a74ef73f5fcbe1406ad9d57fe5dc00a5b1
# panfrost: Fix up some formatting for clang-format
a4705afe63412498d13ded73cba969c66be67907
# asahi: clang-format the world again
26c51bb8d8a33098b1990425a391f56ffba5728c
# perfetto: Add a .clang-format for the directory.
da78d5d729b1800136dd713b68492cb339993f4a
# panfrost/winsys: Clang-format
c90f036516a5376002be6550a917e8bad6a8a3b8
# panfrost: Re-run clang-format
4ccf174009af6732cbffa5d8ebb4687da7517505
# panvk: Clang-format
c7bf3b69ebc8f2252dbf724a4de638e6bb2ac402
# pan/mdg: Fix icky formatting
133af0d6c945d3aaca8989edd15283a2b7dcc6c7
# mapi: clang-format _glapi_add_dispatch()
30332529663268a6406e910848e906e725e6fda7
# radv: reformat according to its .clang-format
8b319c6db8bd93603b18bd783eb75225fcfd51b7
# aco: reformat according to its .clang-format
6b21653ab4d3a67e711fe10e3d403128b6d26eb2
# egl: re-format using clang-format
2f670d89db038d5a29f6b72732fd7ad63dfaf4c6
# panfrost: clang-format the tree
0afd691f29683f6e9dde60f79eca094373521806
# aco: Format.
1e2639026fec7069806449f9ba2a124ce4eb5569
# radv: Format.
59c501ca353f8ec9d2717c98af2bfa1a1dbf4d75
# pvr: clang-format fixes
953c04ebd39c52d457301bdd8ac803949001da2d
# freedreno: Re-indent
2d439343ea1aee146d4ce32800992cd389bd505d
# ir3: Reformat source with clang-format
177138d8cb0b4f6a42ef0a1f8593e14d79f17c54
# ir3: reformat after refactoring in previous commit
8ae5b27ee0331a739d14b42e67586784d6840388
# ir3: don't use deprecated NIR_PASS_V anymore
2fedc82c0cc9d3fb2e54707b57941b79553b640c
# ir3: reformat after previous commit
7210054db8cfb445a8ccdeacfdcfecccf44fa266
# freedreno/a6xx: The great register renaming
7fd99c88b9cd5c0c8c1cb3e92383acac5cb8220b

.gitattributes vendored
View File

@@ -1,7 +1,4 @@
*.csv eol=crlf
* text=auto
*.jpg binary
*.png binary
*.gif binary
*.ico binary
*.cl gitlab-language=c
*.dsp -crlf
*.dsw -crlf
*.sln -crlf
*.vcproj -crlf

View File

@@ -1,60 +0,0 @@
name: macOS-CI
on: push
permissions:
contents: read
jobs:
macOS-CI:
strategy:
matrix:
glx_option: ['dri', 'xlib']
runs-on: macos-11
env:
GALLIUM_DUMP_CPU: true
MESON_EXEC: /Users/runner/Library/Python/3.11/bin/meson
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Install Dependencies
run: |
cat > Brewfile <<EOL
brew "bison"
brew "expat"
brew "gettext"
brew "libx11"
brew "libxcb"
brew "libxdamage"
brew "libxext"
brew "molten-vk"
brew "ninja"
brew "pkg-config"
brew "python@3.10"
EOL
brew update
brew bundle --verbose
- name: Install Mako and meson
run: pip3 install --user mako meson
- name: Configure
run: |
cat > native_config <<EOL
[binaries]
llvm-config = '/usr/local/opt/llvm/bin/llvm-config'
EOL
$MESON_EXEC . build --native-file=native_config -Dmoltenvk-dir=$(brew --prefix molten-vk) -Dbuild-tests=true -Dgallium-drivers=swrast,zink -Dglx=${{ matrix.glx_option }}
- name: Build
run: $MESON_EXEC compile -C build
- name: Test
run: $MESON_EXEC test -C build --print-errorlogs
- name: Install
run: $MESON_EXEC install -C build --destdir $PWD/install
- name: 'Upload Artifact'
if: always()
uses: actions/upload-artifact@v3
with:
name: macos-${{ matrix.glx_option }}-result
path: |
build/meson-logs/
install/
retention-days: 5

.gitignore vendored
View File

@@ -1,7 +1,46 @@
.cache
.vscode*
*.a
*.dll
*.exe
*.ilk
*.la
*.lo
*.log
*.o
*.obj
*.os
*.pc
*.pdb
*.pyc
*.pyo
*.out
/build
.venv/
*.so
*.so.*
*.sw[a-z]
*.tar
*.tar.bz2
*.tar.gz
*.trs
*.zip
*~
depend
depend.bak
bin/ltmain.sh
lib
lib64
configure
configure.lineno
autom4te.cache
aclocal.m4
config.log
config.status
cscope*
.scon*
config.py
build
libtool
manifest.txt
.dir-locals.el
.deps/
.dirstamp
.libs/
Makefile
Makefile.in

View File

@@ -1,437 +0,0 @@
# Types of CI pipelines:
# | pipeline name | context | description |
# |----------------------|-----------|-------------------------------------------------------------|
# | merge pipeline | mesa/mesa | pipeline running for an MR; if it passes the MR gets merged |
# | pre-merge pipeline | mesa/mesa | same as above, except its status doesn't affect the MR |
# | post-merge pipeline | mesa/mesa | pipeline immediately after merging |
# | fork pipeline | fork | pipeline running in a user fork |
# | scheduled pipeline | mesa/mesa | nightly pipelines, running every morning at 4am UTC |
# | direct-push pipeline | mesa/mesa | when commits are pushed directly to mesa/mesa, bypassing Marge and its gating pipeline |
#
# Note that the release branches maintained by the release manager fall under
# the "direct push" category.
#
# "context" indicates the permissions that the jobs get; notably, any
# container created in mesa/mesa gets pushed immediately for everyone to use
# as soon as the image tag change is merged.
#
# Merge pipelines contain all jobs that must pass before the MR can be merged.
# Pre-merge pipelines contain the exact same jobs as merge pipelines.
# Post-merge pipelines contain *only* the `pages` job that deploys the new
# version of the website.
# Fork pipelines contain everything.
# Scheduled pipelines only contain the container+build jobs, and some extra
# test jobs (typically "full" variants of pre-merge jobs that only run 1/X
# test cases), but not a repeat of the merge pipeline jobs.
# Direct-push pipelines contain the same jobs as merge pipelines.
workflow:
rules:
# do not duplicate pipelines on merge pipelines
- if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS && $CI_PIPELINE_SOURCE == "push"
when: never
# Tag pipelines are disabled as it's too late to run all the tests by
# then; the release has already been made based on the staging pipelines' results
- if: $CI_COMMIT_TAG
when: never
# Merge pipeline
- if: &is-merge-attempt $GITLAB_USER_LOGIN == "marge-bot" && $CI_PIPELINE_SOURCE == "merge_request_event"
variables:
MESA_CI_PERFORMANCE_ENABLED: 1
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64: priority:high
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64_KVM: priority:high-kvm
FDO_RUNNER_JOB_PRIORITY_TAG_AARCH64: priority:high-aarch64
CI_TRON_JOB_PRIORITY_TAG: "" # Empty tags are ignored by gitlab
JOB_PRIORITY: 75
# fast-fail in merge pipelines: stop early if we get this many unexpected fails/crashes
DEQP_RUNNER_MAX_FAILS: 40
# Post-merge pipeline
- if: &is-post-merge $GITLAB_USER_LOGIN == "marge-bot" && $CI_PIPELINE_SOURCE == "push"
variables:
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64: priority:high
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64_KVM: priority:high-kvm
FDO_RUNNER_JOB_PRIORITY_TAG_AARCH64: priority:high-aarch64
# Pre-merge pipeline (because merge pipelines are already caught above)
- if: &is-merge-request $CI_PIPELINE_SOURCE == "merge_request_event"
# Push to a branch on a fork
- if: &is-push-to-fork $CI_PROJECT_NAMESPACE != "mesa" && $CI_PIPELINE_SOURCE == "push"
# a pipeline running within the upstream project
- if: &is-upstream-pipeline $CI_PROJECT_PATH == $FDO_UPSTREAM_REPO
# an MR pipeline running within the upstream project, usually true for
# those with the Developer role or above
- if: &is-upstream-mr-pipeline $CI_PROJECT_PATH == $FDO_UPSTREAM_REPO && $CI_PIPELINE_SOURCE == "merge_request_event"
# Nightly pipeline
- if: &is-scheduled-pipeline $CI_PIPELINE_SOURCE == "schedule"
variables:
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64: priority:low
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64_KVM: priority:low-kvm
FDO_RUNNER_JOB_PRIORITY_TAG_AARCH64: priority:low-aarch64
JOB_PRIORITY: 45
# (some) nightly builds perform LTO, so they take much longer than the
# short timeout allowed in other pipelines.
# Note: 0 = infinity = gitlab's job `timeout:` applies, which is 1h
BUILD_JOB_TIMEOUT_OVERRIDE: 0
# Pipeline for direct pushes to the default branch that bypassed the CI
- if: &is-push-to-upstream-default-branch $CI_PROJECT_NAMESPACE == "mesa" && $CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
variables:
JOB_PRIORITY: 70
# Pipeline for direct pushes from release maintainer
- if: &is-push-to-upstream-staging-branch $CI_PROJECT_NAMESPACE == "mesa" && $CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_REF_NAME =~ /^staging\//
variables:
JOB_PRIORITY: 70
variables:
FDO_UPSTREAM_REPO: mesa/mesa
MESA_TEMPLATES_COMMIT: &ci-templates-commit c6aeb16f86e32525fa630fb99c66c4f3e62fc3cb
CI_PRE_CLONE_SCRIPT: |-
set -o xtrace
curl --silent --location --fail --retry-connrefused --retry 3 --retry-delay 10 \
${CI_PROJECT_URL}/-/raw/${CI_COMMIT_SHA}/.gitlab-ci/download-git-cache.sh | bash
set +o xtrace
S3_JWT_FILE: /s3_jwt
S3_JWT_FILE_SCRIPT: |-
echo -n '${S3_JWT}' > '${S3_JWT_FILE}' &&
S3_JWT_FILE_SCRIPT= &&
unset CI_JOB_JWT S3_JWT # Unsetting vulnerable env variables
S3_HOST: s3.freedesktop.org
# This bucket is used to fetch ANDROID prebuilts and images
S3_ANDROID_BUCKET: mesa-rootfs
# This bucket is used to fetch the kernel image
S3_KERNEL_BUCKET: mesa-rootfs
# Bucket for git cache
S3_GITCACHE_BUCKET: git-cache
# Bucket for the pipeline artifacts pushed to S3
S3_ARTIFACTS_BUCKET: artifacts
# Buckets for traces
S3_TRACIE_RESULTS_BUCKET: mesa-tracie-results
S3_TRACIE_PUBLIC_BUCKET: mesa-tracie-public
S3_TRACIE_PRIVATE_BUCKET: mesa-tracie-private
# Base path used for various artifacts
S3_BASE_PATH: "${S3_HOST}/${S3_KERNEL_BUCKET}"
# per-pipeline artifact storage on MinIO
PIPELINE_ARTIFACTS_BASE: ${S3_HOST}/${S3_ARTIFACTS_BUCKET}/${CI_PROJECT_PATH}/${CI_PIPELINE_ID}
# per-job artifact storage on MinIO
JOB_ARTIFACTS_BASE: ${PIPELINE_ARTIFACTS_BASE}/${CI_JOB_ID}
# reference images stored for traces
PIGLIT_REPLAY_REFERENCE_IMAGES_BASE: "${S3_HOST}/${S3_TRACIE_RESULTS_BUCKET}/$FDO_UPSTREAM_REPO"
# For individual CI farm status see .ci-farms folder
# Disable farm with `git mv .ci-farms{,-disabled}/$farm_name`
# Re-enable farm with `git mv .ci-farms{-disabled,}/$farm_name`
# NEVER MIX FARM MAINTENANCE WITH ANY OTHER CHANGE IN THE SAME MERGE REQUEST!
ARTIFACTS_BASE_URL: https://${CI_PROJECT_ROOT_NAMESPACE}.${CI_PAGES_DOMAIN}/-/${CI_PROJECT_NAME}/-/jobs/${CI_JOB_ID}/artifacts
# Python scripts for structured logger
PYTHONPATH: "$PYTHONPATH:$CI_PROJECT_DIR/install"
# No point in continuing once the device is lost
MESA_VK_ABORT_ON_DEVICE_LOSS: 1
# Avoid the wall of "Unsupported SPIR-V capability" warnings in CI job log, hiding away useful output
MESA_SPIRV_LOG_LEVEL: error
# Default priority for non-merge pipelines
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64: "" # Empty tags are ignored by gitlab
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64_KVM: kvm
FDO_RUNNER_JOB_PRIORITY_TAG_AARCH64: aarch64
CI_TRON_JOB_PRIORITY_TAG: ci-tron:priority:low
JOB_PRIORITY: 50
DATA_STORAGE_PATH: data_storage
KERNEL_IMAGE_BASE: "https://$S3_HOST/$S3_KERNEL_BUCKET/$KERNEL_REPO/$KERNEL_TAG"
# Mesa-specific variables that shouldn't be forwarded to DUTs and crosvm
CI_EXCLUDE_ENV_VAR_REGEX: 'SCRIPTS_DIR|RESULTS_DIR'
CI_TRON_JOB_TEMPLATE_PROJECT: &ci-tron-template-project gfx-ci/ci-tron
CI_TRON_JOB_TEMPLATE_COMMIT: &ci-tron-template-commit ddadab0006e43f1365cd30779f565b444a6538ee
CI_TRON_JOB_TEMPLATE_PROJECT_URL: "https://gitlab.freedesktop.org/$CI_TRON_JOB_TEMPLATE_PROJECT"
default:
timeout: 1m # catch any jobs which don't specify a timeout
id_tokens:
S3_JWT:
aud: https://s3.freedesktop.org
before_script:
- >
export SCRIPTS_DIR=$(mktemp -d) &&
curl -L -s --retry 4 -f --retry-all-errors --retry-delay 60 -O --output-dir "${SCRIPTS_DIR}" "${CI_PROJECT_URL}/-/raw/${CI_COMMIT_SHA}/.gitlab-ci/setup-test-env.sh" &&
. ${SCRIPTS_DIR}/setup-test-env.sh
- eval "$S3_JWT_FILE_SCRIPT"
after_script:
# Work around https://gitlab.com/gitlab-org/gitlab/-/issues/20338
- find -name '*.log' -exec mv {} {}.txt \;
# Retry when job fails. Failed jobs can be found in the Mesa CI Daily Reports:
# https://gitlab.freedesktop.org/mesa/mesa/-/issues/?sort=created_date&state=opened&label_name%5B%5D=CI%20daily
retry:
max: 1
# Ignore runner_unsupported, stale_schedule, archived_failure, or
# unmet_prerequisites
when:
- api_failure
- runner_system_failure
- script_failure
- job_execution_timeout
- scheduler_failure
- data_integrity_failure
- unknown_failure
stages:
- sanity
- container
- git-archive
- build-for-tests
- build-only
- code-validation
- amd
- amd-nightly
- intel
- intel-nightly
- nouveau
- nouveau-nightly
- arm
- arm-nightly
- broadcom
- broadcom-nightly
- freedreno
- freedreno-nightly
- etnaviv
- etnaviv-nightly
- software-renderer
- software-renderer-nightly
- layered-backends
- layered-backends-nightly
- performance
- deploy
include:
- project: 'freedesktop/ci-templates'
ref: *ci-templates-commit
file:
- '/templates/alpine.yml'
- '/templates/debian.yml'
- '/templates/fedora.yml'
- '/templates/ci-fairy.yml'
- project: *ci-tron-template-project
ref: *ci-tron-template-commit
file: '/.gitlab-ci/dut.yml'
- local: '.gitlab-ci/image-tags.yml'
- local: '.gitlab-ci/bare-metal/gitlab-ci.yml'
- local: '.gitlab-ci/ci-tron/gitlab-ci.yml'
- local: '.gitlab-ci/lava/gitlab-ci.yml'
- local: '.gitlab-ci/container/gitlab-ci.yml'
- local: '.gitlab-ci/build/gitlab-ci.yml'
- local: '.gitlab-ci/test/gitlab-ci.yml'
- local: '.gitlab-ci/farm-rules.yml'
- local: '.gitlab-ci/test-source-dep.yml'
- local: 'docs/gitlab-ci.yml'
- local: 'src/**/ci/gitlab-ci.yml'
# Rules applied to every job in the pipeline
.common-rules:
rules:
- if: *is-push-to-fork
when: manual
.never-post-merge-rules:
rules:
- if: *is-post-merge
when: never
# Note: make sure the branches in this list are the same as in
# `.build-only-delayed-rules` below.
.container-rules:
rules:
- !reference [.common-rules, rules]
# Run when re-enabling a disabled farm, but not when disabling it
- !reference [.disable-farm-mr-rules, rules]
# Never run immediately after merging, as we just ran everything
- !reference [.never-post-merge-rules, rules]
# Only rebuild containers in merge pipelines if any tags have been
# changed, else we'll just use the already-built containers
- if: *is-merge-attempt
changes: &image_tags_path
- .gitlab-ci/image-tags.yml
when: on_success
# Skip everything for pre-merge and merge pipelines which don't change
# anything in the build; we only do this for marge-bot and not user
# pipelines in a MR, because we might still need to run it to copy the
# container into the user's namespace.
- if: *is-merge-attempt
when: never
# Any MR pipeline which changes image-tags.yml needs to be able to
# rebuild the containers
- if: *is-merge-request
changes: *image_tags_path
when: manual
# ... if the MR pipeline runs as mesa/mesa and does not need a container
# rebuild, we can skip it
- if: *is-upstream-mr-pipeline
when: never
# ... however for MRs running inside the user namespace, we may need to
# run these jobs to copy the container images from upstream
- if: *is-merge-request
when: manual
# Build everything after someone bypassed the CI
- if: *is-push-to-upstream-default-branch
when: on_success
# Build everything when pushing to staging branches
- if: *is-push-to-upstream-staging-branch
when: on_success
# Scheduled pipelines reuse already-built containers
- if: *is-scheduled-pipeline
when: never
# Any other pipeline in the upstream should reuse already-built containers
- if: *is-upstream-pipeline
when: never
# Allow building everything in fork pipelines, but build nothing unless
# manually triggered
- when: manual
# Note: make sure the branches in this list are the same as in
# `.build-only-delayed-rules` below.
.build-rules:
rules:
- !reference [.common-rules, rules]
# Run when re-enabling a disabled farm, but not when disabling it
- !reference [.disable-farm-mr-rules, rules]
# Never run immediately after merging, as we just ran everything
- !reference [.never-post-merge-rules, rules]
# Build everything in merge pipelines, if any files affecting the pipeline
# were changed
- if: *is-merge-attempt
changes: &all_paths
- VERSION
- bin/git_sha1_gen.py
- bin/install_megadrivers.py
- bin/symbols-check.py
- bin/ci/**/*
# GitLab CI
- .gitlab-ci.yml
- .gitlab-ci/**/*
- .ci-farms/*
# Meson
- meson*
- build-support/**/*
- subprojects/**/*
# clang format
- .clang-format
- .clang-format-include
- .clang-format-ignore
# Source code
- include/**/*
- src/**/*
when: on_success
# Same as above, but for pre-merge pipelines
- if: *is-merge-request
changes: *all_paths
when: manual
# Skip everything for pre-merge and merge pipelines which don't change
# anything in the build
- if: *is-merge-attempt
when: never
- if: *is-merge-request
when: never
# Build everything after someone bypassed the CI
- if: *is-push-to-upstream-default-branch
when: on_success
# Build everything when pushing to staging branches
- if: *is-push-to-upstream-staging-branch
when: on_success
# Build everything in scheduled pipelines
- if: *is-scheduled-pipeline
when: on_success
# Allow building everything in fork pipelines, but build nothing unless
# manually triggered
- when: manual
# Repeat of the above but with `when: on_success` replaced with
# `when: delayed` + `start_in:`, for build-only jobs.
# Note: make sure the branches in this list are the same as in
# `.container+build-rules` above.
.build-only-delayed-rules:
rules:
- !reference [.common-rules, rules]
# Run when re-enabling a disabled farm, but not when disabling it
- !reference [.disable-farm-mr-rules, rules]
# Never run immediately after merging, as we just ran everything
- !reference [.never-post-merge-rules, rules]
# Build everything in merge pipelines, if any files affecting the pipeline
# were changed
- if: *is-merge-attempt
changes: *all_paths
when: delayed
start_in: &build-delay 5 minutes
# Same as above, but for pre-merge pipelines
- if: *is-merge-request
changes: *all_paths
when: manual
# Skip everything for pre-merge and merge pipelines which don't change
# anything in the build
- if: *is-merge-attempt
when: never
- if: *is-merge-request
when: never
# Build everything after someone bypassed the CI
- if: *is-push-to-upstream-default-branch
when: delayed
start_in: *build-delay
# Build everything when pushing to staging branches
- if: *is-push-to-upstream-staging-branch
when: delayed
start_in: *build-delay
# Build everything in scheduled pipelines
- if: *is-scheduled-pipeline
when: delayed
start_in: *build-delay
# Allow building everything in fork pipelines, but build nothing unless
# manually triggered
- when: manual
# Sanity checks of MR settings and commit logs
sanity:
extends:
- .fdo.ci-fairy
stage: sanity
tags:
- placeholder-job
rules:
- if: *is-merge-request
when: on_success
- when: never
variables:
GIT_STRATEGY: none
script:
# ci-fairy check-commits --junit-xml=check-commits.xml
- ci-fairy check-merge-request --require-allow-collaboration --junit-xml=check-merge-request.xml
- |
set -eu
image_tags=(
ALPINE_X86_64_BUILD_TAG
ALPINE_X86_64_LAVA_SSH_TAG
ALPINE_X86_64_LAVA_TRIGGER_TAG
DEBIAN_BASE_TAG
DEBIAN_BUILD_TAG
DEBIAN_TEST_ANDROID_TAG
DEBIAN_TEST_GL_TAG
DEBIAN_TEST_VK_TAG
FEDORA_X86_64_BUILD_TAG
FIRMWARE_TAG
KERNEL_TAG
PKG_REPO_REV
WINDOWS_X64_BUILD_TAG
WINDOWS_X64_MSVC_TAG
WINDOWS_X64_TEST_TAG
)
for var in "${image_tags[@]}"
do
if [ "$(echo -n "${!var}" | wc -c)" -gt 20 ]
then
echo "$var is too long; please make sure it is at most 20 chars."
exit 1
fi
done
artifacts:
when: on_failure
reports:
junit: check-*.xml

View File

@@ -1,33 +0,0 @@
[flake8]
exclude = .venv*,
# PEP 8 Style Guide limits line length to 79 characters
max-line-length = 159
ignore =
# continuation line under-indented for hanging indent
E121
# continuation line over-indented for hanging indent
E126,
# continuation line under-indented for visual indent
E128,
# whitespace before ':'
E203,
# missing whitespace around arithmetic operator
E226,
# missing whitespace after ','
E231,
# expected 2 blank lines, found 1
E302,
# too many blank lines
E303,
# imported but unused
F401,
# f-string is missing placeholders
F541,
# local variable assigned to but never used
F841,
# line break before binary operator
W503,
# line break after binary operator
W504,

View File

@@ -1,115 +0,0 @@
# Note: skips lists for CI are just a list of lines that, when
# non-zero-length and not starting with '#', will regex match to
# delete lines from the test list. Be careful.
# This test checks the driver's reported conformance version against the
# version of the CTS we're running. This check fails every few months
# and everyone has to go and bump the number in every driver.
# Running this check only makes sense while preparing a conformance
# submission, so skip it in the regular CI.
dEQP-VK.api.driver_properties.conformance_version
# Exclude this test which might fail when a new extension is implemented.
dEQP-VK.info.device_extensions
# These are tremendously slow (pushing toward a minute), and aren't
# reliable when run in parallel with other tests due to CPU-side timing.
dEQP-GLES[0-9]*.functional.flush_finish.*
# piglit: WGL is Windows-only
wgl@.*
# These are sensitive to CPU timing, and would need to be run in isolation
# on the system rather than in parallel with other tests.
glx@glx_arb_sync_control@timing.*
# This test is not built with waffle, while we do build tests with waffle
spec@!opengl 1.1@windowoverlap
# These tests all read from the front buffer after a swap. Given that we
# run piglit tests in parallel in Mesa CI, and don't have a compositor
# running, the frontbuffer reads may end up with undefined results from
# windows overlapping us.
#
# Piglit does mark these tests as not to be run in parallel, but deqp-runner
# doesn't respect that. We need to extend deqp-runner to allow some tests to be
# marked as single-threaded and run after the rayon loop if we want to support
# them.
#
# Note that "glx-" tests don't appear in x11-skips.txt because they can be
# run even if PIGLIT_PLATFORM=gbm (for example)
glx@glx-copy-sub-buffer.*
# A majority of the tests introduced in CTS 1.3.7.0 are experiencing failures and flakes.
# Disable these tests until someone with a deeper understanding of EGL examines them.
#
# Note: on sc8280xp/a690 I get identical results (same passes and fails)
# between freedreno, zink, and llvmpipe, so I believe this is either a
# deqp bug or an egl/wayland bug, rather than a driver issue.
#
# With llvmpipe, the failing tests have the error message:
#
# "Illegal sampler view creation without bind flag"
#
# which might be a hint. (But some passing tests also have the same
# error message.)
#
# more context from David Heidelberg on IRC: the deqp commit where these
# started failing is: https://github.com/KhronosGroup/VK-GL-CTS/commit/79b25659bcbced0cfc2c3fe318951c585f682abe
# prior to that they were skipping.
wayland-dEQP-EGL.functional.color_clears.single_context.gles1.other
wayland-dEQP-EGL.functional.color_clears.single_context.gles2.other
wayland-dEQP-EGL.functional.color_clears.single_context.gles3.other
wayland-dEQP-EGL.functional.color_clears.multi_context.gles1.other
wayland-dEQP-EGL.functional.color_clears.multi_context.gles2.other
wayland-dEQP-EGL.functional.color_clears.multi_context.gles3.other
wayland-dEQP-EGL.functional.color_clears.multi_context.gles1_gles2.other
wayland-dEQP-EGL.functional.color_clears.multi_context.gles1_gles2_gles3.other
wayland-dEQP-EGL.functional.color_clears.multi_thread.gles1.other
wayland-dEQP-EGL.functional.color_clears.multi_thread.gles2.other
wayland-dEQP-EGL.functional.color_clears.multi_thread.gles3.other
wayland-dEQP-EGL.functional.color_clears.multi_thread.gles1_gles2.other
wayland-dEQP-EGL.functional.color_clears.multi_thread.gles1_gles2_gles3.other
# Seems to be the same issue as wayland-dEQP-EGL.functional.color_clears.*
wayland-dEQP-EGL.functional.render.single_context.gles2.other
wayland-dEQP-EGL.functional.render.single_context.gles3.other
wayland-dEQP-EGL.functional.render.multi_context.gles2.other
wayland-dEQP-EGL.functional.render.multi_context.gles3.other
wayland-dEQP-EGL.functional.render.multi_context.gles2_gles3.other
wayland-dEQP-EGL.functional.render.multi_thread.gles2.other
wayland-dEQP-EGL.functional.render.multi_thread.gles3.other
wayland-dEQP-EGL.functional.render.multi_thread.gles2_gles3.other
# These test the loader more than the implementation and are broken because the
# Vulkan loader in Debian is too old
dEQP-VK.api.get_device_proc_addr.non_enabled
dEQP-VK.api.version_check.unavailable_entry_points
# These tests are flaking too much recently on almost all drivers, so better skip them until the cause is identified
spec@arb_program_interface_query@arb_program_interface_query-getprogramresourceindex
spec@arb_program_interface_query@arb_program_interface_query-getprogramresourceindex@'vs_input2[1][0]' on GL_PROGRAM_INPUT
# These tests attempt to read from the front buffer after a swap. They are skipped
# on both X11 and gbm, but for different reasons:
#
# On X11: Given that we run piglit tests in parallel in Mesa CI, and don't have a
# compositor running, the frontbuffer reads may end up with undefined results from
# windows overlapping us.
# Piglit does mark these tests as not to be run in parallel, but deqp-runner
# doesn't respect that. We need to extend deqp-runner to allow some tests to be
# marked as single-threaded and run after the rayon loop if we want to support
# them.
# Other front-buffer access tests like fbo-sys-blit, fbo-sys-sub-blit, or
# fcc-front-buffer-distraction don't appear here, because the DRI3 fake-front
# handling should be holding the pixels drawn by the test even if we happen to fail
# GL's window system pixel occlusion test.
# Note that glx skips don't appear here, they're in all-skips.txt (in case someone
# sets PIGLIT_PLATFORM=gbm to mostly use gbm, but still has an X server running).
#
# On gbm: gbm does not support reading the front buffer after a swapbuffers, and
# that's intentional. Don't bother running these tests when PIGLIT_PLATFORM=gbm.
# Note that this doesn't include tests like fbo-sys-blit, which draw/read front
# but don't swap.
spec@!opengl 1.0@gl-1.0-swapbuffers-behavior
spec@!opengl 1.1@read-front

View File

@@ -1,74 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC1091 # paths only become valid at runtime
. "${SCRIPTS_DIR}/setup-test-env.sh"
ci_tag_test_time_check "ANDROID_CTS_TAG"
export PATH=/android-tools/build-tools:/android-cts/jdk/bin/:$PATH
export JAVA_HOME=/android-cts/jdk
# Wait for the appops service to show up
while [ "$($ADB shell dumpsys -l | grep appops)" = "" ] ; do sleep 1; done
SKIP_FILE="$INSTALL/${GPU_VERSION}-android-cts-skips.txt"
EXCLUDE_FILTERS=""
if [ -e "$SKIP_FILE" ]; then
EXCLUDE_FILTERS="$(grep -v -E "(^#|^[[:space:]]*$)" "$SKIP_FILE" | sed -e 's/\s*$//g' -e 's/.*/--exclude-filter "\0" /g')"
fi
INCLUDE_FILE="$INSTALL/${GPU_VERSION}-android-cts-include.txt"
if [ ! -e "$INCLUDE_FILE" ]; then
set +x
echo "ERROR: No include file (${GPU_VERSION}-android-cts-include.txt) found."
echo "This means that we are running the all available CTS modules."
echo "But the time to run it might be too long, please provide an include file instead."
exit 1
fi
INCLUDE_FILTERS="$(grep -v -E "(^#|^[[:space:]]*$)" "$INCLUDE_FILE" | sed -e 's/\s*$//g' -e 's/.*/--include-filter "\0" /g')"
if [ -n "${ANDROID_CTS_PREPARE_COMMAND:-}" ]; then
eval "$ANDROID_CTS_PREPARE_COMMAND"
fi
uncollapsed_section_switch android_cts_test "Android CTS: testing"
set +e
eval "/android-cts/tools/cts-tradefed" run commandAndExit cts-dev \
$INCLUDE_FILTERS \
$EXCLUDE_FILTERS
SUMMARY_FILE=/android-cts/results/latest/invocation_summary.txt
# Parse a line like `x/y modules completed` to check that all modules completed
COMPLETED_MODULES=$(sed -n -e '/modules completed/s/^\([0-9]\+\)\/\([0-9]\+\) .*$/\1/p' "$SUMMARY_FILE")
AVAILABLE_MODULES=$(sed -n -e '/modules completed/s/^\([0-9]\+\)\/\([0-9]\+\) .*$/\2/p' "$SUMMARY_FILE")
[ "$COMPLETED_MODULES" = "$AVAILABLE_MODULES" ]
# shellcheck disable=SC2319 # False-positive see https://github.com/koalaman/shellcheck/issues/2937#issuecomment-2660891195
MODULES_FAILED=$?
# Parse a line like `FAILED : x` to check that no tests failed
[ "$(grep "^FAILED" "$SUMMARY_FILE" | tr -d ' ' | cut -d ':' -f 2)" = "0" ]
# shellcheck disable=SC2319 # False-positive see https://github.com/koalaman/shellcheck/issues/2937#issuecomment-2660891195
TESTS_FAILED=$?
[ "$MODULES_FAILED" = "0" ] && [ "$TESTS_FAILED" = "0" ]
# shellcheck disable=SC2034 # EXIT_CODE is used by the script that sources this one
EXIT_CODE=$?
set -e
mkdir "${RESULTS_DIR}/android-cts"
cp -r "/android-cts/results/latest/" "${RESULTS_DIR}/android-cts/results"
cp -r "/android-cts/logs/latest/" "${RESULTS_DIR}/android-cts/logs"
if [ -n "${ARTIFACTS_BASE_URL:-}" ]; then
echo "============================================"
echo "Review the Android CTS test results at: ${ARTIFACTS_BASE_URL}/results/android-cts/results/test_result.html"
fi
section_end android_cts_test

View File

@@ -1,110 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC1091 # paths only become valid at runtime
. "${SCRIPTS_DIR}/setup-test-env.sh"
# deqp
$ADB shell mkdir -p /data/deqp
$ADB push /deqp-gles/modules/egl/deqp-egl-android /data/deqp
$ADB push /deqp-gles/mustpass/egl-main.txt.zst /data/deqp
$ADB push /deqp-gles/modules/gles2/deqp-gles2 /data/deqp
$ADB push /deqp-gles/mustpass/gles2-main.txt.zst /data/deqp
$ADB push /deqp-vk/external/vulkancts/modules/vulkan/* /data/deqp
$ADB push /deqp-vk/mustpass/vk-main.txt.zst /data/deqp
$ADB push /deqp-tools/* /data/deqp
$ADB push /deqp-runner/deqp-runner /data/deqp
$ADB push "$INSTALL/all-skips.txt" /data/deqp
$ADB push "$INSTALL/android-skips.txt" /data/deqp
$ADB push "$INSTALL/angle-skips.txt" /data/deqp
if [ -e "$INSTALL/$GPU_VERSION-flakes.txt" ]; then
$ADB push "$INSTALL/$GPU_VERSION-flakes.txt" /data/deqp
fi
if [ -e "$INSTALL/$GPU_VERSION-fails.txt" ]; then
$ADB push "$INSTALL/$GPU_VERSION-fails.txt" /data/deqp
fi
if [ -e "$INSTALL/$GPU_VERSION-skips.txt" ]; then
$ADB push "$INSTALL/$GPU_VERSION-skips.txt" /data/deqp
fi
$ADB push "$INSTALL/deqp-$DEQP_SUITE.toml" /data/deqp
BASELINE=""
if [ -e "$INSTALL/$GPU_VERSION-fails.txt" ]; then
BASELINE="--baseline /data/deqp/$GPU_VERSION-fails.txt"
fi
# Default to an empty known flakes file if it doesn't exist.
$ADB shell "touch /data/deqp/$GPU_VERSION-flakes.txt"
DEQP_SKIPS=""
if [ -e "$INSTALL/$GPU_VERSION-skips.txt" ]; then
DEQP_SKIPS="$DEQP_SKIPS /data/deqp/$GPU_VERSION-skips.txt"
fi
if [ -n "${ANGLE_TAG:-}" ]; then
DEQP_SKIPS="$DEQP_SKIPS /data/deqp/angle-skips.txt"
fi
AOSP_RESULTS=/data/deqp/results
uncollapsed_section_switch cuttlefish_test "cuttlefish: testing"
# Print the detailed version with the list of backports and local patches
{ set +x; } 2>/dev/null
for api in vk-main vk gl gles; do
deqp_version_log=/deqp-$api/deqp-$api-version
if [ -r "$deqp_version_log" ]; then
cat "$deqp_version_log"
fi
done
set -x
set +e
$ADB shell "mkdir ${AOSP_RESULTS}; cd ${AOSP_RESULTS}/..; \
XDG_CACHE_HOME=/data/local/tmp \
./deqp-runner \
suite \
--suite /data/deqp/deqp-$DEQP_SUITE.toml \
--output $AOSP_RESULTS \
--skips /data/deqp/all-skips.txt $DEQP_SKIPS \
--flakes /data/deqp/$GPU_VERSION-flakes.txt \
--testlog-to-xml /data/deqp/testlog-to-xml \
--shader-cache-dir /data/local/tmp \
--fraction-start ${CI_NODE_INDEX:-1} \
--fraction $(( CI_NODE_TOTAL * ${DEQP_FRACTION:-1})) \
--jobs ${FDO_CI_CONCURRENT:-4} \
$BASELINE \
${DEQP_RUNNER_MAX_FAILS:+--max-fails \"$DEQP_RUNNER_MAX_FAILS\"} \
"
# shellcheck disable=SC2034 # EXIT_CODE is used by the script that sources this one
EXIT_CODE=$?
set -e
section_switch cuttlefish_results "cuttlefish: gathering the results"
$ADB pull "$AOSP_RESULTS/." "$RESULTS_DIR"
# Remove all but the first 50 individual XML files uploaded as artifacts, to
# save fd.o space when you break everything.
find $RESULTS_DIR -name \*.xml | \
sort -n |
sed -n '1,+49!p' | \
xargs rm -f
# If any QPA XMLs are there, then include the XSL/CSS in our artifacts.
find $RESULTS_DIR -name \*.xml \
-exec cp /deqp-tools/testlog.css /deqp-tools/testlog.xsl "$RESULTS_DIR/" ";" \
-quit
$ADB shell "cd ${AOSP_RESULTS}/..; \
./deqp-runner junit \
--testsuite dEQP \
--results $AOSP_RESULTS/failures.csv \
--output $AOSP_RESULTS/junit.xml \
--limit 50 \
--template \"See $ARTIFACTS_BASE_URL/results/{{testcase}}.xml\""
$ADB pull "$AOSP_RESULTS/junit.xml" "$RESULTS_DIR"
section_end cuttlefish_results

View File

@@ -1,150 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC1091 # paths only become valid at runtime
set -uex
# Set default ADB command if not set already
: "${ADB:=adb}"
$ADB wait-for-device root
sleep 1
# overlay
REMOUNT_PATHS="/vendor"
if [ "$ANDROID_VERSION" -ge 15 ]; then
REMOUNT_PATHS="$REMOUNT_PATHS /system"
fi
OV_TMPFS="/data/overlay-remount"
$ADB shell mkdir -p "$OV_TMPFS"
$ADB shell mount -t tmpfs none "$OV_TMPFS"
for path in $REMOUNT_PATHS; do
$ADB shell mkdir -p "${OV_TMPFS}${path}-upper"
$ADB shell mkdir -p "${OV_TMPFS}${path}-work"
opts="lowerdir=${path},upperdir=${OV_TMPFS}${path}-upper,workdir=${OV_TMPFS}${path}-work"
$ADB shell mount -t overlay -o "$opts" none ${path}
done
$ADB shell setenforce 0
$ADB push /android-tools/eglinfo /data
$ADB push /android-tools/vulkaninfo /data
get_gles_runtime_renderer() {
while [ "$($ADB shell XDG_CACHE_HOME=/data/local/tmp /data/eglinfo | grep 'OpenGL ES profile renderer':)" = "" ] ; do sleep 1; done
$ADB shell XDG_CACHE_HOME=/data/local/tmp /data/eglinfo | grep 'OpenGL ES profile renderer' | head -1
}
get_gles_runtime_version() {
while [ "$($ADB shell XDG_CACHE_HOME=/data/local/tmp /data/eglinfo | grep 'OpenGL ES profile version:')" = "" ] ; do sleep 1; done
$ADB shell XDG_CACHE_HOME=/data/local/tmp /data/eglinfo | grep 'OpenGL ES profile version:' | head -1
}
get_vk_runtime_device_name() {
$ADB shell XDG_CACHE_HOME=/data/local/tmp /data/vulkaninfo | grep deviceName | head -1
}
get_vk_runtime_version() {
$ADB shell XDG_CACHE_HOME=/data/local/tmp /data/vulkaninfo | grep driverInfo | head -1
}
# Check what GLES & VK implementation is used before uploading the new libraries
get_gles_runtime_renderer
get_gles_runtime_version
get_vk_runtime_device_name
get_vk_runtime_version
# replace libraries
$ADB shell rm -f /vendor/lib64/libgallium_dri.so*
$ADB shell rm -f /vendor/lib64/egl/libEGL_mesa.so*
$ADB shell rm -f /vendor/lib64/egl/libGLESv1_CM_mesa.so*
$ADB shell rm -f /vendor/lib64/egl/libGLESv2_mesa.so*
$ADB push "$INSTALL/lib/libgallium_dri.so" /vendor/lib64/libgallium_dri.so
$ADB push "$INSTALL/lib/libEGL.so" /vendor/lib64/egl/libEGL_mesa.so
$ADB push "$INSTALL/lib/libGLESv1_CM.so" /vendor/lib64/egl/libGLESv1_CM_mesa.so
$ADB push "$INSTALL/lib/libGLESv2.so" /vendor/lib64/egl/libGLESv2_mesa.so
$ADB shell rm -f /vendor/lib64/hw/vulkan.lvp.so*
$ADB shell rm -f /vendor/lib64/hw/vulkan.virtio.so*
$ADB shell rm -f /vendor/lib64/hw/vulkan.intel.so*
$ADB push "$INSTALL/lib/libvulkan_lvp.so" /vendor/lib64/hw/vulkan.lvp.so
$ADB push "$INSTALL/lib/libvulkan_virtio.so" /vendor/lib64/hw/vulkan.virtio.so
$ADB push "$INSTALL/lib/libvulkan_intel.so" /vendor/lib64/hw/vulkan.intel.so
$ADB shell rm -f /vendor/lib64/egl/libEGL_emulation.so*
$ADB shell rm -f /vendor/lib64/egl/libGLESv1_CM_emulation.so*
$ADB shell rm -f /vendor/lib64/egl/libGLESv2_emulation.so*
if [ -n "${ANGLE_TAG:-}" ]; then
ANGLE_DEST_PATH=/vendor/lib64/egl
if [ "$ANDROID_VERSION" -ge 15 ]; then
ANGLE_DEST_PATH=/system/lib64
fi
$ADB shell rm -f "$ANGLE_DEST_PATH/libEGL_angle.so"*
$ADB shell rm -f "$ANGLE_DEST_PATH/libGLESv1_CM_angle.so"*
$ADB shell rm -f "$ANGLE_DEST_PATH/libGLESv2_angle.so"*
$ADB push /angle/libEGL_angle.so "$ANGLE_DEST_PATH/libEGL_angle.so"
$ADB push /angle/libGLESv1_CM_angle.so "$ANGLE_DEST_PATH/libGLESv1_CM_angle.so"
$ADB push /angle/libGLESv2_angle.so "$ANGLE_DEST_PATH/libGLESv2_angle.so"
fi
# Check what GLES & VK implementation is used after uploading the new libraries
MESA_BUILD_VERSION=$(cat "$INSTALL/VERSION")
get_gles_runtime_renderer
GLES_RUNTIME_VERSION="$(get_gles_runtime_version)"
get_vk_runtime_device_name
VK_RUNTIME_VERSION="$(get_vk_runtime_version)"
if [ -n "${ANGLE_TAG:-}" ]; then
# Note: we are injecting the ANGLE libs too, so we need to check if the
# new ANGLE libs are being used.
ANGLE_HASH=$(head -c 12 /angle/version)
if ! printf "%s" "$GLES_RUNTIME_VERSION" | grep --quiet "${ANGLE_HASH}"; then
echo "Fatal: Android is loading a wrong version of the ANGLE libs: ${ANGLE_HASH}" 1>&2
exit 1
fi
fi
if ! printf "%s" "$VK_RUNTIME_VERSION" | grep -Fq -- "${MESA_BUILD_VERSION}"; then
echo "Fatal: Android is loading a wrong version of the Mesa3D Vulkan libs: ${VK_RUNTIME_VERSION}" 1>&2
exit 1
fi
get_surfaceflinger_pid() {
while [ "$($ADB shell dumpsys -l | grep 'SurfaceFlinger$')" = "" ] ; do sleep 1; done
$ADB shell ps -A | grep -i surfaceflinger | tr -s ' ' | cut -d ' ' -f 2
}
OLD_SF_PID=$(get_surfaceflinger_pid)
# restart Android shell, so that services use the new libraries
$ADB shell stop
$ADB shell start
# Check that SurfaceFlinger restarted, to ensure that new libraries have been picked up
NEW_SF_PID=$(get_surfaceflinger_pid)
if [ "$OLD_SF_PID" == "$NEW_SF_PID" ]; then
echo "Fatal: check that SurfaceFlinger restarted" 1>&2
exit 1
fi
if [ -n "${ANDROID_CTS_TAG:-}" ]; then
# The script sets EXIT_CODE
. "$(dirname "$0")/android-cts-runner.sh"
else
# The script sets EXIT_CODE
. "$(dirname "$0")/android-deqp-runner.sh"
fi
exit $EXIT_CODE

View File

@@ -1,11 +0,0 @@
# Skip these tests when running fractional dEQP batches, as the AHB tests are expected
# to be handled separately in a non-fractional run within the deqp-runner suite.
dEQP-VK.api.external.memory.android_hardware_buffer.*
# Skip all WSI tests: the DEQP_ANDROID_EXE build used can't create native windows, as
# only APKs support window creation on Android.
dEQP-VK.image.swapchain_mutable.*
dEQP-VK.wsi.*
# These tests cause hangs and need to be skipped for now.
dEQP-VK.synchronization*

View File

@@ -1,7 +0,0 @@
# Unlike zink which does support it, ANGLE relies on a waiver to not implement
# capturing individual array elements (see waivers.xml and gles3-waivers.txt in the CTS)
dEQP-GLES3.functional.transform_feedback.array_element.*
dEQP-GLES3.functional.transform_feedback.random.*
dEQP-GLES31.functional.program_interface_query.transform_feedback_varying.*_array_element
dEQP-GLES31.functional.program_interface_query.transform_feedback_varying.type.*.array.*
KHR-GLES31.core.program_interface_query.transform-feedback-types

View File

@@ -1,2 +0,0 @@
[*.sh]
indent_size = 2

View File

@@ -1,15 +0,0 @@
#!/bin/sh
# Init entrypoint for bare-metal devices; calls common init code.
# First stage: very basic setup to bring up network and /dev etc
/init-stage1.sh
export CURRENT_SECTION=dut_boot
# Second stage: run jobs
test $? -eq 0 && /init-stage2.sh
# Wait until the job would have timed out anyway, so we don't spew a "init
# exited" panic.
sleep 6000

View File

@@ -1,129 +0,0 @@
.baremetal-test:
extends:
- .test
# Cancel job if a newer commit is pushed to the same branch
interruptible: true
before_script:
- !reference [.download_s3, before_script]
variables:
BM_ROOTFS: /rootfs-${DEBIAN_ARCH}
artifacts:
when: always
name: "${CI_PROJECT_NAME}_${CI_JOB_NAME_SLUG}"
paths:
- results/
- serial*.txt
exclude:
- results/*.shader_cache
reports:
junit: results/junit.xml
# ARM testing of bare-metal boards attached to an x86 gitlab-runner system
.baremetal-test-arm32-gl:
extends:
- .baremetal-test
- .use-debian/baremetal_arm32_test-gl
variables:
DEBIAN_ARCH: armhf
S3_ARTIFACT_NAME: mesa-arm32-default-debugoptimized
needs:
- job: debian/baremetal_arm32_test-gl
optional: true
- job: debian-arm32
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
# ARM64 testing of bare-metal boards attached to an x86 gitlab-runner system
.baremetal-test-arm64-gl:
extends:
- .baremetal-test
- .use-debian/baremetal_arm64_test-gl
variables:
DEBIAN_ARCH: arm64
S3_ARTIFACT_NAME: mesa-arm64-default-debugoptimized
needs:
- job: debian/baremetal_arm64_test-gl
optional: true
- job: debian-arm64
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
# ARM64 testing of bare-metal boards attached to an x86 gitlab-runner system
.baremetal-test-arm64-vk:
extends:
- .baremetal-test
- .use-debian/baremetal_arm64_test-vk
variables:
DEBIAN_ARCH: arm64
S3_ARTIFACT_NAME: mesa-arm64-default-debugoptimized
needs:
- job: debian/baremetal_arm64_test-vk
optional: true
- job: debian-arm64
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
# ARM32/64 testing of bare-metal boards attached to an x86 gitlab-runner system, using an asan mesa build
.baremetal-arm32-asan-test-gl:
variables:
S3_ARTIFACT_NAME: mesa-arm32-asan-debugoptimized
DEQP_FORCE_ASAN: 1
needs:
- job: debian/baremetal_arm32_test-gl
optional: true
- job: debian-arm32-asan
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
.baremetal-arm64-asan-test-gl:
variables:
S3_ARTIFACT_NAME: mesa-arm64-asan-debugoptimized
DEQP_FORCE_ASAN: 1
needs:
- job: debian/baremetal_arm64_test-gl
optional: true
- job: debian-arm64-asan
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
.baremetal-arm64-asan-test-vk:
variables:
S3_ARTIFACT_NAME: mesa-arm64-asan-debugoptimized
DEQP_FORCE_ASAN: 1
needs:
- job: debian/baremetal_arm64_test-vk
optional: true
- job: debian-arm64-asan
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
.baremetal-arm64-ubsan-test-gl:
extends:
- .baremetal-test
- .use-debian/baremetal_arm64_test-gl
variables:
S3_ARTIFACT_NAME: mesa-arm64-ubsan-debugoptimized
needs:
- job: debian/baremetal_arm64_test-gl
optional: true
- job: debian-arm64-ubsan
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
.baremetal-arm64-ubsan-test-vk:
extends:
- .baremetal-test
- .use-debian/baremetal_arm64_test-vk
variables:
S3_ARTIFACT_NAME: mesa-arm64-ubsan-debugoptimized
needs:
- job: debian/baremetal_arm64_test-vk
optional: true
- job: debian-arm64-ubsan
artifacts: false
- !reference [.required-for-hardware-jobs, needs]
.baremetal-deqp-test:
variables:
HWCI_TEST_SCRIPT: "/install/deqp-runner.sh"
FDO_CI_CONCURRENT: 0 # Default to number of CPUs

View File

@@ -1,16 +0,0 @@
#!/bin/bash
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must supply the PoE Interface to power up"
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must supply the PoE Switch host"
exit 1
fi
SNMP_KEY="SNMPv2-SMI::mib-2.105.1.1.1.3.1.$((${BM_POE_BASE:-0} + BM_POE_INTERFACE))"
SNMP_OFF="i 2"
flock /var/run/poe.lock -c "snmpset -v2c -r 3 -t 30 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_OFF"

View File

@@ -1,19 +0,0 @@
#!/bin/bash
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must supply the PoE Interface to power up"
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must supply the PoE Switch host"
exit 1
fi
SNMP_KEY="SNMPv2-SMI::mib-2.105.1.1.1.3.1.$((${BM_POE_BASE:-0} + BM_POE_INTERFACE))"
SNMP_ON="i 1"
SNMP_OFF="i 2"
flock /var/run/poe.lock -c "snmpset -v2c -r 3 -t 30 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_OFF"
sleep 3s
flock /var/run/poe.lock -c "snmpset -v2c -r 3 -t 30 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_ON"

View File

@@ -1,191 +0,0 @@
#!/bin/bash
# shellcheck disable=SC1091
# shellcheck disable=SC2034
# shellcheck disable=SC2059
# shellcheck disable=SC2086 # we want word splitting
. "$SCRIPTS_DIR"/setup-test-env.sh
# Boot script for devices attached to a PoE switch, using NFS for the root
# filesystem.
# We're run from the root of the repo, make a helper var for our paths
BM=$CI_PROJECT_DIR/install/bare-metal
CI_COMMON=$CI_PROJECT_DIR/install/common
CI_INSTALL=$CI_PROJECT_DIR/install
# Runner config checks
if [ -z "$BM_SERIAL" ]; then
echo "Must set BM_SERIAL in your gitlab-runner config.toml [[runners]] environment"
echo "This is the serial port to listen the device."
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must set BM_POE_ADDRESS in your gitlab-runner config.toml [[runners]] environment"
echo "This is the PoE switch address to connect for powering up/down devices."
exit 1
fi
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must set BM_POE_INTERFACE in your gitlab-runner config.toml [[runners]] environment"
echo "This is the PoE switch interface where the device is connected."
exit 1
fi
if [ -z "$BM_POWERUP" ]; then
echo "Must set BM_POWERUP in your gitlab-runner config.toml [[runners]] environment"
echo "This is a shell script that should power up the device and begin its boot sequence."
exit 1
fi
if [ -z "$BM_POWERDOWN" ]; then
echo "Must set BM_POWERDOWN in your gitlab-runner config.toml [[runners]] environment"
echo "This is a shell script that should power off the device."
exit 1
fi
if [ ! -d /nfs ]; then
echo "NFS rootfs directory needs to be mounted at /nfs by the gitlab runner"
exit 1
fi
if [ ! -d /tftp ]; then
echo "TFTP directory for this board needs to be mounted at /tftp by the gitlab runner"
exit 1
fi
# job config checks
if [ -z "$BM_ROOTFS" ]; then
echo "Must set BM_ROOTFS to your board's rootfs directory in the job's variables"
exit 1
fi
if [ -z "$BM_BOOTFS" ] && { [ -z "$BM_KERNEL" ] || [ -z "$BM_DTB" ]; } ; then
echo "Must set /boot files for the TFTP boot in the job's variables or set kernel and dtb"
exit 1
fi
if [ -z "$BM_CMDLINE" ]; then
echo "Must set BM_CMDLINE to your board's kernel command line arguments"
exit 1
fi
section_start prepare_rootfs "Preparing rootfs components"
set -ex
date +'%F %T'
# Clear out any previous run's artifacts.
rm -rf results/
mkdir -p results
# Create the rootfs in the NFS directory. rm to make sure it's in a pristine
# state, since it's volume-mounted on the host.
rsync -a --delete $BM_ROOTFS/ /nfs/
date +'%F %T'
# If BM_BOOTFS is a URL, download it
if echo $BM_BOOTFS | grep -q http; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}$BM_BOOTFS" -o /tmp/bootfs.tar
BM_BOOTFS=/tmp/bootfs.tar
fi
date +'%F %T'
# If BM_BOOTFS is a file, assume it is a tarball and uncompress it
if [ -f "${BM_BOOTFS}" ]; then
mkdir -p /tmp/bootfs
tar xf $BM_BOOTFS -C /tmp/bootfs
BM_BOOTFS=/tmp/bootfs
fi
date +'%F %T'
# Install kernel modules (it could be either in /lib/modules or
# /usr/lib/modules, but we want to install in the latter)
if [ -n "${BM_BOOTFS}" ]; then
[ -d $BM_BOOTFS/usr/lib/modules ] && rsync -a $BM_BOOTFS/usr/lib/modules/ /nfs/usr/lib/modules/
[ -d $BM_BOOTFS/lib/modules ] && rsync -a $BM_BOOTFS/lib/modules/ /nfs/lib/modules/
else
echo "No modules!"
fi
date +'%F %T'
# Install kernel image + bootloader files
if [ -z "$BM_BOOTFS" ]; then
mv "${BM_KERNEL}" "${BM_DTB}.dtb" /tftp/
else # BM_BOOTFS
rsync -aL --delete $BM_BOOTFS/boot/ /tftp/
fi
date +'%F %T'
# Create the rootfs in the NFS directory
. $BM/rootfs-setup.sh /nfs
date +'%F %T'
echo "$BM_CMDLINE" > /tftp/cmdline.txt
# Add some options in config.txt, if defined
if [ -n "$BM_BOOTCONFIG" ]; then
printf "$BM_BOOTCONFIG" >> /tftp/config.txt
fi
section_end prepare_rootfs
set +e
STRUCTURED_LOG_FILE=results/job_detail.json
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update dut_job_type "${DEVICE_TYPE}"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update farm "${FARM}"
ATTEMPTS=3
first_attempt=True
while [ $((ATTEMPTS--)) -gt 0 ]; do
section_start dut_boot "Booting hardware device ..."
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --create-dut-job dut_name "${CI_RUNNER_DESCRIPTION}"
# Update submit time to CI_JOB_STARTED_AT only for the first run
if [ "$first_attempt" = "True" ]; then
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update-dut-time submit "${CI_JOB_STARTED_AT}"
else
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update-dut-time submit
fi
python3 $BM/poe_run.py \
--dev="$BM_SERIAL" \
--powerup="$BM_POWERUP" \
--powerdown="$BM_POWERDOWN" \
--boot-timeout-seconds ${BOOT_PHASE_TIMEOUT_SECONDS:-300} \
--test-timeout-minutes ${TEST_PHASE_TIMEOUT_MINUTES:-$((CI_JOB_TIMEOUT/60 - ${TEST_SETUP_AND_UPLOAD_MARGIN_MINUTES:-5}))}
ret=$?
if [ $ret -eq 2 ]; then
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close-dut-job
first_attempt=False
error "Device failed to boot; will retry"
else
# We're no longer in dut_boot by this point
unset CURRENT_SECTION
ATTEMPTS=0
fi
done
section_start dut_cleanup "Cleaning up after job"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close-dut-job
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close
set -e
date +'%F %T'
# Bring artifacts back from the NFS dir to the build dir where gitlab-runner
# will look for them.
cp -Rp /nfs/results/. results/
date +'%F %T'
section_end dut_cleanup
exit $ret

View File

@@ -1,133 +0,0 @@
#!/usr/bin/env python3
#
# Copyright © 2020 Igalia, S.L.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import argparse
import os
import re
import sys
import threading
from custom_logger import CustomLogger
from serial_buffer import SerialBuffer
class PoERun:
def __init__(self, args, boot_timeout, test_timeout, logger):
self.powerup = args.powerup
self.powerdown = args.powerdown
self.ser = SerialBuffer(
args.dev, "results/serial-output.txt", ": ")
self.boot_timeout = boot_timeout
self.test_timeout = test_timeout
self.logger = logger
def print_error(self, message):
RED = '\033[0;31m'
NO_COLOR = '\033[0m'
print(RED + message + NO_COLOR)
self.logger.update_status_fail(message)
def logged_system(self, cmd):
print("Running '{}'".format(cmd))
return os.system(cmd)
def run(self):
if self.logged_system(self.powerup) != 0:
self.logger.update_status_fail("powerup failed")
return 1
boot_detected = False
self.logger.create_job_phase("boot")
for line in self.ser.lines(timeout=self.boot_timeout, phase="bootloader"):
if re.search("Booting Linux", line):
boot_detected = True
break
if not boot_detected:
self.print_error(
"Something wrong; couldn't detect the boot start up sequence")
return 2
self.logger.create_job_phase("test")
for line in self.ser.lines(timeout=self.test_timeout, phase="test"):
if re.search("---. end Kernel panic", line):
self.logger.update_status_fail("kernel panic")
return 1
# Binning memory problems
if re.search("binner overflow mem", line):
self.print_error("Memory overflow in the binner; GPU hang")
return 1
if re.search("nouveau 57000000.gpu: bus: MMIO read of 00000000 FAULT at 137000", line):
self.print_error("nouveau jetson boot bug, abandoning run.")
return 1
# network fail on tk1
if re.search("NETDEV WATCHDOG:.* transmit queue 0 timed out", line):
self.print_error("nouveau jetson tk1 network fail, abandoning run.")
return 1
result = re.search(r"hwci: mesa: exit_code: (\d+)", line)
if result:
exit_code = int(result.group(1))
if exit_code == 0:
self.logger.update_dut_job("status", "pass")
else:
self.logger.update_status_fail("test fail")
self.logger.update_dut_job("exit_code", exit_code)
return exit_code
self.print_error(
"Reached the end of the CPU serial log without finding a result")
return 1
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--dev', type=str,
help='Serial device to monitor', required=True)
parser.add_argument('--powerup', type=str,
help='shell command for rebooting', required=True)
parser.add_argument('--powerdown', type=str,
help='shell command for powering off', required=True)
parser.add_argument(
'--boot-timeout-seconds', type=int, help='Boot phase timeout (seconds)', required=True)
parser.add_argument(
'--test-timeout-minutes', type=int, help='Test phase timeout (minutes)', required=True)
args = parser.parse_args()
logger = CustomLogger("results/job_detail.json")
logger.update_dut_time("start", None)
poe = PoERun(args, args.boot_timeout_seconds, args.test_timeout_minutes * 60, logger)
retval = poe.run()
poe.logged_system(args.powerdown)
logger.update_dut_time("end", None)
sys.exit(retval)
if __name__ == '__main__':
main()

View File

@@ -1,34 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
rootfs_dst=$1
mkdir -p $rootfs_dst/results
# Set up the init script that brings up the system.
cp $BM/bm-init.sh $rootfs_dst/init
cp $CI_COMMON/init*.sh $rootfs_dst/
date +'%F %T'
# Make the JWT token available as a file in the bare-metal storage to enable
# access to MinIO
cp "${S3_JWT_FILE}" "${rootfs_dst}${S3_JWT_FILE}"
date +'%F %T'
cp "$SCRIPTS_DIR/setup-test-env.sh" "$rootfs_dst/"
set +x
# Pass through relevant env vars from the gitlab job to the baremetal init script
echo "Variables passed through:"
filter_env_vars | tee $rootfs_dst/set-job-env-vars.sh
set -x
# Add the Mesa drivers we built, and make a consistent symlink to them.
mkdir -p $rootfs_dst/$CI_PROJECT_DIR
rsync -aH --delete $CI_PROJECT_DIR/install/ $rootfs_dst/$CI_PROJECT_DIR/install/
date +'%F %T'


@@ -1,186 +0,0 @@
#!/usr/bin/env python3
#
# Copyright © 2020 Google LLC
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import argparse
from datetime import datetime, UTC
import queue
import serial
import threading
import time
class SerialBuffer:
def __init__(self, dev, filename, prefix, timeout=None, line_queue=None):
self.filename = filename
self.dev = dev
if dev:
self.f = open(filename, "wb+")
self.serial = serial.Serial(dev, 115200, timeout=timeout)
else:
self.f = open(filename, "rb")
self.serial = None
self.byte_queue = queue.Queue()
# Allow multiple SerialBuffers to share a line queue, so you can merge
# servo's CPU and EC streams into a single stream for watching boot/test
# progress (see the usage sketch after this file).
if line_queue:
self.line_queue = line_queue
else:
self.line_queue = queue.Queue()
self.prefix = prefix
self.timeout = timeout
self.sentinel = object()
self.closing = False
if self.dev:
self.read_thread = threading.Thread(
target=self.serial_read_thread_loop, daemon=True)
else:
self.read_thread = threading.Thread(
target=self.serial_file_read_thread_loop, daemon=True)
self.read_thread.start()
self.lines_thread = threading.Thread(
target=self.serial_lines_thread_loop, daemon=True)
self.lines_thread.start()
def close(self):
self.closing = True
if self.serial:
self.serial.cancel_read()
self.read_thread.join()
self.lines_thread.join()
if self.serial:
self.serial.close()
# Thread that just reads the bytes from the serial device, to keep its buffer
# from overflowing. If nothing is received in 1 minute, it finalizes.
def serial_read_thread_loop(self):
greet = "Serial thread reading from %s\n" % self.dev
self.byte_queue.put(greet.encode())
while not self.closing:
try:
b = self.serial.read()
if len(b) == 0:
break
self.byte_queue.put(b)
except Exception as err:
print(self.prefix + str(err))
break
self.byte_queue.put(self.sentinel)
# Thread that just reads the bytes from the file of serial output that some
# other process is appending to.
def serial_file_read_thread_loop(self):
greet = "Serial thread reading from %s\n" % self.filename
self.byte_queue.put(greet.encode())
while not self.closing:
line = self.f.readline()
if line:
self.byte_queue.put(line)
else:
time.sleep(0.1)
self.byte_queue.put(self.sentinel)
# Thread that processes the stream of bytes to 1) log to stdout, 2) log to
# file, 3) add to the queue of lines to be read by program logic
def serial_lines_thread_loop(self):
line = bytearray()
while True:
bytes = self.byte_queue.get(block=True)
if bytes == self.sentinel:
self.read_thread.join()
self.line_queue.put(self.sentinel)
break
# Write our data to the output file if we're the ones reading from
# the serial device
if self.dev:
self.f.write(bytes)
self.f.flush()
for b in bytes:
line.append(b)
if b == b'\n'[0]:
line = line.decode(errors="replace")
ts = datetime.now(tz=UTC)
ts_str = f"{ts.hour:02}:{ts.minute:02}:{ts.second:02}.{int(ts.microsecond / 1000):03}"
print("{endc}{time}{prefix}{line}".format(
time=ts_str, prefix=self.prefix, line=line, endc='\033[0m'), flush=True, end='')
self.line_queue.put(line)
line = bytearray()
def lines(self, timeout=None, phase=None):
start_time = time.monotonic()
while True:
read_timeout = None
if timeout:
read_timeout = timeout - (time.monotonic() - start_time)
if read_timeout <= 0:
print("read timeout waiting for serial during {}".format(phase))
self.close()
break
try:
line = self.line_queue.get(timeout=read_timeout)
except queue.Empty:
print("read timeout waiting for serial during {}".format(phase))
self.close()
break
if line == self.sentinel:
print("End of serial output")
self.lines_thread.join()
break
yield line
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--dev', type=str, help='Serial device')
parser.add_argument('--file', type=str,
help='Filename for serial output', required=True)
parser.add_argument('--prefix', type=str,
help='Prefix for logging serial to stdout', nargs='?')
args = parser.parse_args()
ser = SerialBuffer(args.dev, args.file, args.prefix or "")
for line in ser.lines():
# We're just using this as a logger, so eat the produced lines and drop
# them
pass
if __name__ == '__main__':
main()
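A minimal sketch (not part of the original file) of the shared line_queue mentioned in __init__ above: two SerialBuffer instances push their decoded lines into the same queue, so one loop can follow the servo CPU and EC consoles together. Device paths, filenames, and the match string are placeholders:

import queue

shared_lines = queue.Queue()
# Both buffers log to their own file but feed the same line queue, so
# cpu.lines() below yields output from either console as it arrives.
cpu = SerialBuffer("/dev/ttyUSB0", "results/cpu-serial.txt", "cpu: ", line_queue=shared_lines)
ec = SerialBuffer("/dev/ttyUSB1", "results/ec-serial.txt", "ec: ", line_queue=shared_lines)
for line in cpu.lines(timeout=600, phase="boot"):
    if "Booting Linux" in line:  # placeholder condition for "boot started"
        break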


@@ -1 +0,0 @@
../bin/ci


@@ -1,95 +0,0 @@
.meson-build-for-tests:
extends:
- .build-linux
stage: build-for-tests
script:
- &meson-build timeout --verbose ${BUILD_JOB_TIMEOUT_OVERRIDE:-$BUILD_JOB_TIMEOUT} bash --login .gitlab-ci/meson/build.sh
- .gitlab-ci/prepare-artifacts.sh
.meson-build-only:
extends:
- .meson-build-for-tests
- .build-only-delayed-rules
stage: build-only
script:
- *meson-build
# Shared between windows and Linux
.build-common:
extends: .build-rules
# Cancel job if a newer commit is pushed to the same branch
interruptible: true
variables:
# Build jobs typically take between 5 and 12 minutes, depending on how
# much they build and how many new Rust compilers we have to build twice.
# Allow 25 minutes as a reasonable margin: beyond this point, something
# has gone badly wrong, and we should try again to see if we can get
# something from it.
#
# Some jobs not in the critical path use a higher timeout, particularly
# when building with ASan or UBSan.
BUILD_JOB_TIMEOUT: 12m
RUN_MESON_TESTS: "true"
timeout: 16m
# We don't want to download any previous job's artifacts
dependencies: []
artifacts:
name: "${CI_PROJECT_NAME}_${CI_JOB_NAME_SLUG}"
when: always
paths:
- _build/meson-logs/*.txt
- _build/meson-logs/strace
- _build/.ninja_log
- artifacts
.build-run-long:
variables:
BUILD_JOB_TIMEOUT: 18m
timeout: 25m
# Just Linux
.build-linux:
extends: .build-common
variables:
C_ARGS: >
-Wno-error=deprecated-declarations
CCACHE_COMPILERCHECK: "content"
CCACHE_COMPRESS: "true"
CCACHE_DIR: /cache/mesa/ccache
# Use ccache transparently, and print stats before/after
before_script:
- !reference [default, before_script]
- |
export PATH="/usr/lib/ccache:$PATH"
export CCACHE_BASEDIR="$PWD"
if test -x /usr/bin/ccache; then
section_start ccache_before "ccache stats before build"
ccache --show-stats
section_end ccache_before
fi
after_script:
- if test -x /usr/bin/ccache; then ccache --show-stats | grep "Hits:"; fi
- !reference [default, after_script]
.build-windows:
extends:
- .build-common
- .windows-docker-tags
cache:
key: ${CI_JOB_NAME}
paths:
- subprojects/packagecache
.ci-deqp-artifacts:
artifacts:
name: "${CI_PROJECT_NAME}_${CI_JOB_NAME_SLUG}"
when: always
untracked: false
paths:
# Watch out! Artifacts are relative to the build dir.
# https://gitlab.com/gitlab-org/gitlab-ce/commit/8788fb925706cad594adf6917a6c5f6587dd1521
- artifacts
- _build/meson-logs/*.txt
- _build/meson-logs/strace
- _build/.ninja_log


@@ -1,792 +0,0 @@
include:
- local: '.gitlab-ci/build/gitlab-ci-inc.yml'
# Git archive
make-git-archive:
extends:
- .fdo.ci-fairy
stage: git-archive
rules:
- !reference [.scheduled_pipeline-rules, rules]
script:
# Compactify the .git directory
- git gc --aggressive
# Download & cache the perfetto subproject as well.
- rm -rf subprojects/perfetto ; mkdir -p subprojects/perfetto && curl --fail https://android.googlesource.com/platform/external/perfetto/+archive/$(grep 'revision =' subprojects/perfetto.wrap | cut -d ' ' -f3).tar.gz | tar zxf - -C subprojects/perfetto
# compress the current folder
- tar -cvzf ../$CI_PROJECT_NAME.tar.gz .
- ci-fairy s3cp --token-file "${S3_JWT_FILE}" ../$CI_PROJECT_NAME.tar.gz https://$S3_HOST/git-cache/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/$CI_PROJECT_NAME.tar.gz
debian-x86_64:
extends:
- .meson-build-for-tests
- .use-debian/x86_64_build
- .build-run-long # but it really shouldn't! tracked in mesa#12544
- .ci-deqp-artifacts
variables:
UNWIND: "enabled"
DRI_LOADERS: >
-D egl=enabled
-D gbm=enabled
-D glvnd=disabled
-D glx=dri
-D platforms=x11,wayland
GALLIUM_ST: >
-D gallium-rusticl=true
-D gallium-va=enabled
GALLIUM_DRIVERS: "llvmpipe,softpipe,virgl,radeonsi,zink,iris,svga"
VULKAN_DRIVERS: "swrast,amd,intel,virtio"
BUILDTYPE: "debugoptimized"
EXTRA_OPTION: >
-D intel-elk=false
-D spirv-to-dxil=true
-D tools=drm-shim
-D valgrind=disabled
S3_ARTIFACT_NAME: mesa-x86_64-default-${BUILDTYPE}
RUN_MESON_TESTS: "false" # debian-build-x86_64 already runs these
artifacts:
reports:
junit: artifacts/ci_scripts_report.xml
debian-x86_64-asan:
extends:
- debian-x86_64
- .meson-build-for-tests
- .build-run-long
variables:
VULKAN_DRIVERS: "swrast"
GALLIUM_DRIVERS: "llvmpipe,softpipe"
C_ARGS: >
-Wno-error=stringop-truncation
-Wno-error=deprecated-declarations
EXTRA_OPTION: >
-D b_sanitize=address
-D gallium-va=false
-D gallium-rusticl=false
-D mesa-clc=system
-D tools=dlclose-skip
-D valgrind=disabled
S3_ARTIFACT_NAME: mesa-x86_64-asan-${BUILDTYPE}
ARTIFACTS_DEBUG_SYMBOLS: 1
RUN_MESON_TESTS: "false" # just too slow
# Do a host build for mesa-clc (asan complains when it is not loaded as
# the first library)
HOST_BUILD_OPTIONS: >
-D build-tests=false
-D enable-glcpp-tests=false
-D gallium-rusticl=false
-D gallium-drivers=
-D glx=disabled
-D install-mesa-clc=true
-D mesa-clc=enabled
-D platforms=
-D video-codecs=
-D vulkan-drivers=
debian-x86_64-msan:
# https://github.com/google/sanitizers/wiki/MemorySanitizerLibcxxHowTo
# msan cannot fully work until it's used together with msan libc
extends:
- debian-clang
- .meson-build-only
- .build-run-long
variables:
# l_undef is incompatible with msan
EXTRA_OPTION:
-D b_sanitize=memory
-D b_lundef=false
-D mesa-clc=system
-D precomp-compiler=system
S3_ARTIFACT_NAME: ""
ARTIFACTS_DEBUG_SYMBOLS: 1
# Don't run all the tests yet:
# GLSL has some issues in sexpression reading.
# gtest has issues in its test initialization.
MESON_TEST_ARGS: "--suite glcpp --suite format"
GALLIUM_DRIVERS: "freedreno,iris,nouveau,r300,r600,llvmpipe,softpipe,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,radeonsi,tegra,d3d12,crocus"
VULKAN_DRIVERS: intel,amd,broadcom,virtio
C_ARGS: >
-Wno-error=constant-conversion
-Wno-error=enum-conversion
-Wno-error=initializer-overrides
-Wno-error=sometimes-uninitialized
-Werror=misleading-indentation
-Wno-error=deprecated-declarations
CPP_ARGS: >
-Wno-error=c99-designator
-Wno-error=overloaded-virtual
-Wno-error=tautological-constant-out-of-range-compare
-Wno-error=unused-private-field
-Wno-error=vla-cxx-extension
-Wno-error=deprecated-declarations
RUN_MESON_TESTS: "false" # just too slow
# Do a host build for mesa-clc and precomp-compiler (msan complains about uninitialized
# values in the LLVM libs)
HOST_BUILD_OPTIONS: >
-D build-tests=false
-D enable-glcpp-tests=false
-D gallium-drivers=
-D vulkan-drivers=
-D video-codecs=
-D glx=disabled
-D platforms=
-D mesa-clc=enabled
-D install-mesa-clc=true
-D precomp-compiler=enabled
-D install-precomp-compiler=true
-D tools=panfrost
debian-x86_64-ubsan:
extends:
- debian-x86_64
- .meson-build-for-tests
- .build-run-long
variables:
C_ARGS: >
-Wno-error=stringop-overflow
-Wno-error=stringop-truncation
-Wno-error=deprecated-declarations
CPP_ARGS: >
-Wno-error=array-bounds
GALLIUM_DRIVERS: "llvmpipe,softpipe"
VULKAN_DRIVERS: "swrast"
EXTRA_OPTION: >
-D b_sanitize=undefined
-D mesa-clc=system
-D gallium-rusticl=false
-D gallium-va=false
S3_ARTIFACT_NAME: ""
ARTIFACTS_DEBUG_SYMBOLS: 1
RUN_MESON_TESTS: "false" # just too slow
HOST_BUILD_OPTIONS: >
-D build-tests=false
-D enable-glcpp-tests=false
-D gallium-rusticl=false
-D gallium-drivers=
-D vulkan-drivers=
-D video-codecs=
-D glx=disabled
-D platforms=
-D mesa-clc=enabled
-D install-mesa-clc=true
debian-build-x86_64:
extends:
- .meson-build-for-tests
- .use-debian/x86_64_build
variables:
UNWIND: "enabled"
C_ARGS: >
-Wno-error=deprecated-declarations
CPP_ARGS: >
-Wno-error=deprecated-declarations
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D glvnd=disabled
-D platforms=x11,wayland
GALLIUM_ST: >
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-va=enabled
-D gallium-rusticl=false
-D legacy-wayland=bind-wayland-display
GALLIUM_DRIVERS: "i915,iris,nouveau,r300,r600,freedreno,llvmpipe,softpipe,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,d3d12,asahi,crocus"
VULKAN_DRIVERS: "intel_hasvk,imagination-experimental,microsoft-experimental,nouveau,swrast"
BUILDTYPE: "debugoptimized"
EXTRA_OPTION: >
-D spirv-to-dxil=true
-D tools=drm-shim,etnaviv,freedreno,glsl,intel,intel-ui,nir,nouveau,lima,panfrost,asahi
-D perfetto=true
S3_ARTIFACT_NAME: debian-build-x86_64
# Test a release build with -Werror so new warnings don't sneak in.
debian-release:
extends:
- .meson-build-only
- .use-debian/x86_64_build
variables:
UNWIND: "enabled"
C_ARGS: >
-Wno-error=stringop-overread
-Wno-error=deprecated-declarations
CPP_ARGS: >
-Wno-error=deprecated-declarations
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D glvnd=disabled
-D platforms=x11,wayland
GALLIUM_ST: >
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-va=enabled
-D gallium-rusticl=false
-D llvm=enabled
GALLIUM_DRIVERS: "i915,iris,nouveau,r300,freedreno,llvmpipe,softpipe,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,d3d12,asahi,crocus"
VULKAN_DRIVERS: "swrast,intel_hasvk,imagination-experimental,microsoft-experimental"
EXTRA_OPTION: >
-D spirv-to-dxil=true
-D tools=all
-D mesa-clc=enabled
-D precomp-compiler=enabled
-D intel-rt=enabled
-D imagination-srv=true
BUILDTYPE: "release"
S3_ARTIFACT_NAME: "mesa-x86_64-default-${BUILDTYPE}"
script:
- !reference [.meson-build-only, script]
- 'if [ -n "$MESA_CI_PERFORMANCE_ENABLED" ]; then .gitlab-ci/prepare-artifacts.sh; fi'
alpine-build-testing:
extends:
- .meson-build-only
- .use-alpine/x86_64_build
variables:
BUILDTYPE: "release"
C_ARGS: >
-Wno-error=cpp
-Wno-error=array-bounds
-Wno-error=stringop-overflow
-Wno-error=stringop-overread
-Wno-error=deprecated-declarations
DRI_LOADERS: >
-D glx=disabled
-D gbm=enabled
-D egl=enabled
-D glvnd=disabled
-D platforms=wayland
GALLIUM_DRIVERS: "crocus,etnaviv,freedreno,iris,lima,nouveau,panfrost,r300,r600,radeonsi,svga,llvmpipe,softpipe,tegra,v3d,vc4,virgl,zink"
GALLIUM_ST: >
-D gallium-extra-hud=true
-D gallium-vdpau=disabled
-D gallium-va=enabled
-D gallium-rusticl=false
-D gles1=disabled
-D gles2=enabled
-D llvm=enabled
-D llvm-orcjit=true
-D microsoft-clc=disabled
-D shared-llvm=enabled
UNWIND: "disabled"
VULKAN_DRIVERS: "amd,broadcom,freedreno,intel,imagination-experimental"
fedora-release:
extends:
- .meson-build-only
- .use-fedora/x86_64_build
- .build-run-long
# LTO builds can be really very slow, and we have no way to specify different
# timeouts for pre-merge and nightly jobs
timeout: 1h
variables:
BUILDTYPE: "release"
# array-bounds is a spurious warning from non-LTO gcc
# maybe-uninitialized is misfiring in nir_lower_gs_intrinsics.c, and
# a "maybe" warning should never be an error anyway.
C_ARGS: >
-Wno-error=stringop-overflow
-Wno-error=stringop-overread
-Wno-error=array-bounds
-Wno-error=maybe-uninitialized
-Wno-error=deprecated-declarations
CPP_ARGS: >
-Wno-error=dangling-reference
-Wno-error=overloaded-virtual
-Wno-error=deprecated-declarations
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D glvnd=enabled
-D platforms=x11,wayland
EXTRA_OPTION: >
-D tools=drm-shim,etnaviv,freedreno,glsl,intel,nir,nouveau,lima,panfrost,imagination
-D vulkan-layers=device-select,overlay
-D intel-rt=enabled
-D imagination-srv=true
-D teflon=true
GALLIUM_DRIVERS: "crocus,etnaviv,freedreno,i915,iris,lima,nouveau,panfrost,r300,r600,radeonsi,svga,llvmpipe,softpipe,tegra,v3d,vc4,virgl,zink"
GALLIUM_ST: >
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-va=enabled
-D gallium-rusticl=true
-D gles1=disabled
-D gles2=enabled
-D llvm=enabled
-D microsoft-clc=disabled
-D shared-llvm=enabled
UNWIND: "disabled"
VULKAN_DRIVERS: "amd,asahi,broadcom,freedreno,imagination-experimental,intel,intel_hasvk"
debian-android:
extends:
- .android-variables
- .meson-cross
- .use-debian/android_build
- .ci-deqp-artifacts
- .meson-build-for-tests
variables:
BUILDTYPE: debug
UNWIND: "disabled"
C_ARGS: >
-Wno-error=asm-operand-widths
-Wno-error=constant-conversion
-Wno-error=enum-conversion
-Wno-error=initializer-overrides
-Wno-error=sometimes-uninitialized
-Wno-error=deprecated-declarations
CPP_ARGS: >
-Wno-error=c99-designator
-Wno-error=unused-variable
-Wno-error=unused-but-set-variable
-Wno-error=self-assign
-Wno-error=deprecated-declarations
DRI_LOADERS: >
-D glx=disabled
-D gbm=disabled
-D egl=enabled
-D glvnd=disabled
-D platforms=android
FORCE_FALLBACK_FOR: llvm
EXTRA_OPTION: >
-D amd-use-llvm=false
-D android-stub=true
-D platform-sdk-version=${ANDROID_SDK_VERSION}
-D cpp_rtti=false
-D valgrind=disabled
-D android-libbacktrace=disabled
-D mesa-clc=system
-D precomp-compiler=system
GALLIUM_ST: >
-D gallium-vdpau=disabled
-D gallium-va=disabled
-D gallium-rusticl=false
PKG_CONFIG_LIBDIR: "/disable/non/android/system/pc/files"
HOST_BUILD_OPTIONS: >
-D build-tests=false
-D enable-glcpp-tests=false
-D gallium-drivers=
-D vulkan-drivers=
-D video-codecs=
-D glx=disabled
-D platforms=
-D mesa-clc=enabled
-D install-mesa-clc=true
-D precomp-compiler=enabled
-D install-precomp-compiler=true
-D tools=panfrost
S3_ARTIFACT_NAME: mesa-x86_64-android-${BUILDTYPE}
script:
# x86_64 build:
- export CROSS=x86_64-linux-android
- export GALLIUM_DRIVERS=iris,radeonsi,softpipe,virgl,zink
- export VULKAN_DRIVERS=amd,intel,swrast,virtio
- .gitlab-ci/create-llvm-meson-wrap-file.sh
- !reference [.meson-build-for-tests, script]
# remove all the files created by the previous build before the next build
- git clean -dxf .
# aarch64 build:
# build-only, to catch compilation regressions
# without calling .gitlab-ci/prepare-artifacts.sh so that the
# artifacts are not shipped in mesa-x86_64-android-${BUILDTYPE}
- export CROSS=aarch64-linux-android
- export GALLIUM_DRIVERS=etnaviv,freedreno,lima,panfrost,vc4,v3d
- export VULKAN_DRIVERS=freedreno,broadcom,virtio
- !reference [.meson-build-only, script]
.meson-cross:
extends:
- .meson-build-only
- .use-debian/x86_64_build
variables:
UNWIND: "disabled"
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D platforms=x11,wayland
GALLIUM_ST: >
-D gallium-vdpau=disabled
-D gallium-va=disabled
.meson-arm:
extends:
- .meson-cross
- .use-debian/arm64_build
variables:
VULKAN_DRIVERS: "asahi,broadcom,freedreno"
GALLIUM_DRIVERS: "etnaviv,freedreno,lima,nouveau,panfrost,llvmpipe,softpipe,tegra,v3d,vc4,zink"
BUILDTYPE: "debugoptimized"
debian-arm32:
extends:
- .meson-arm
- .ci-deqp-artifacts
- .meson-build-for-tests
variables:
CROSS: armhf
DRI_LOADERS:
-D glvnd=disabled
# remove asahi & llvmpipe from the .meson-arm list because here we have llvm=disabled
VULKAN_DRIVERS: "broadcom,freedreno"
GALLIUM_DRIVERS: "etnaviv,freedreno,lima,nouveau,panfrost,softpipe,tegra,v3d,vc4,zink"
EXTRA_OPTION: >
-D llvm=disabled
-D valgrind=disabled
-D gallium-rusticl=false
-D mesa-clc=system
-D precomp-compiler=system
HOST_BUILD_OPTIONS: >
-D build-tests=false
-D enable-glcpp-tests=false
-D gallium-drivers=
-D vulkan-drivers=
-D video-codecs=
-D glx=disabled
-D platforms=
-D mesa-clc=enabled
-D install-mesa-clc=true
-D precomp-compiler=enabled
-D install-precomp-compiler=true
-D tools=panfrost
S3_ARTIFACT_NAME: mesa-arm32-default-${BUILDTYPE}
# The strip command segfaults, failing to strip the binary and leaving
# tempfiles in our artifacts.
ARTIFACTS_DEBUG_SYMBOLS: 1
debian-arm32-asan:
extends:
- debian-arm32
- .meson-build-for-tests
- .build-run-long
variables:
GALLIUM_DRIVERS: "etnaviv"
VULKAN_DRIVERS: ""
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: >
-D llvm=disabled
-D b_sanitize=address
-D valgrind=disabled
-D tools=dlclose-skip
-D gallium-rusticl=false
ARTIFACTS_DEBUG_SYMBOLS: 1
RUN_MESON_TESTS: "false" # just too slow
S3_ARTIFACT_NAME: mesa-arm32-asan-${BUILDTYPE}
debian-arm64:
extends:
- .meson-arm
- .ci-deqp-artifacts
- .meson-build-for-tests
variables:
C_ARGS: >
-Wno-error=array-bounds
-Wno-error=stringop-truncation
-Wno-error=deprecated-declarations
GALLIUM_DRIVERS: "etnaviv,freedreno,lima,panfrost,v3d,vc4,zink"
VULKAN_DRIVERS: "broadcom,freedreno,panfrost"
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: >
-D valgrind=disabled
-D imagination-srv=true
-D freedreno-kmds=msm,virtio
-D teflon=true
GALLIUM_ST:
-D gallium-rusticl=true
RUN_MESON_TESTS: "false" # run by debian-arm64-build-testing
S3_ARTIFACT_NAME: mesa-arm64-default-${BUILDTYPE}
debian-arm64-asan:
extends:
- debian-arm64
- .meson-build-for-tests
- .build-run-long
variables:
VULKAN_DRIVERS: "broadcom,freedreno"
GALLIUM_DRIVERS: "freedreno,vc4,v3d"
C_ARGS: >
-Wno-error=deprecated-declarations
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: >
-D b_sanitize=address
-D valgrind=disabled
-D tools=dlclose-skip
-D gallium-rusticl=false
ARTIFACTS_DEBUG_SYMBOLS: 1
RUN_MESON_TESTS: "false" # just too slow
S3_ARTIFACT_NAME: mesa-arm64-asan-${BUILDTYPE}
debian-arm64-ubsan:
extends:
- debian-arm64
- .meson-build-for-tests
- .build-run-long
variables:
VULKAN_DRIVERS: "broadcom"
GALLIUM_DRIVERS: "v3d,vc4"
C_ARGS: >
-Wno-error=array-bounds
-Wno-error=stringop-overflow
-Wno-error=stringop-truncation
-Wno-error=deprecated-declarations
CPP_ARGS: >
-Wno-error=array-bounds
-fno-var-tracking-assignments
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: >
-D b_sanitize=undefined
-D gallium-rusticl=false
ARTIFACTS_DEBUG_SYMBOLS: 1
RUN_MESON_TESTS: "false" # just too slow
S3_ARTIFACT_NAME: mesa-arm64-ubsan-${BUILDTYPE}
debian-arm64-build-test:
extends:
- .meson-arm
- .ci-deqp-artifacts
- .meson-build-only
variables:
VULKAN_DRIVERS: "amd,asahi,imagination-experimental,nouveau"
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: >
-D tools=panfrost,imagination
-D perfetto=true
debian-arm64-release:
extends:
- debian-arm64
- .meson-build-only
variables:
BUILDTYPE: release
S3_ARTIFACT_NAME: mesa-arm64-default-${BUILDTYPE}
C_ARGS: >
-Wno-error=array-bounds
-Wno-error=stringop-overread
-Wno-error=deprecated-declarations
script:
- !reference [.meson-build-only, script]
- 'if [ -n "$MESA_CI_PERFORMANCE_ENABLED" ]; then .gitlab-ci/prepare-artifacts.sh; fi'
debian-no-libdrm:
extends:
- .meson-arm
- .meson-build-only
variables:
VULKAN_DRIVERS: freedreno
GALLIUM_DRIVERS: "zink,llvmpipe"
BUILDTYPE: release
C_ARGS: >
-Wno-error=stringop-overread
-Wno-error=deprecated-declarations
EXTRA_OPTION: >
-D freedreno-kmds=kgsl
-D glx=disabled
-D gbm=disabled
-D egl=disabled
-D perfetto=true
debian-clang:
extends:
- .meson-build-only
- .use-debian/x86_64_build
variables:
BUILDTYPE: debug
UNWIND: "enabled"
C_ARGS: >
-Wno-error=constant-conversion
-Wno-error=enum-conversion
-Wno-error=initializer-overrides
-Wno-error=sometimes-uninitialized
-Werror=misleading-indentation
-Wno-error=deprecated-declarations
CPP_ARGS: >
-Wno-error=c99-designator
-Wno-error=overloaded-virtual
-Wno-error=tautological-constant-out-of-range-compare
-Wno-error=unused-private-field
-Wno-error=vla-cxx-extension
-Wno-error=deprecated-declarations
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D glvnd=enabled
-D platforms=x11,wayland
GALLIUM_ST: >
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-va=enabled
-D gles1=enabled
-D gles2=enabled
-D llvm=enabled
-D microsoft-clc=disabled
-D shared-llvm=enabled
GALLIUM_DRIVERS: "iris,nouveau,r300,r600,freedreno,llvmpipe,softpipe,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,radeonsi,tegra,d3d12,crocus,i915,asahi"
VULKAN_DRIVERS: intel,amd,freedreno,broadcom,virtio,swrast,panfrost,imagination-experimental,microsoft-experimental,nouveau
EXTRA_OPTION:
-D spirv-to-dxil=true
-D imagination-srv=true
-D tools=drm-shim,etnaviv,freedreno,glsl,intel,intel-ui,nir,nouveau,lima,panfrost,asahi,imagination
-D vulkan-layers=device-select,overlay
-D build-radv-tests=true
-D build-aco-tests=true
-D mesa-clc=enabled
-D precomp-compiler=enabled
-D intel-rt=enabled
-D imagination-srv=true
-D teflon=true
CC: clang-${LLVM_VERSION}
CXX: clang++-${LLVM_VERSION}
debian-clang-release:
extends:
- debian-clang
- .meson-build-only
- .build-run-long
variables:
BUILDTYPE: "release"
C_ARGS: >
-Wno-error=constant-conversion
-Wno-error=enum-conversion
-Wno-error=initializer-overrides
-Wno-error=sometimes-uninitialized
-Wno-error=deprecated-declarations
CPP_ARGS: >
-Wno-error=c99-designator
-Wno-error=overloaded-virtual
-Wno-error=tautological-constant-out-of-range-compare
-Wno-error=unused-private-field
-Wno-error=vla-cxx-extension
-Wno-error=deprecated-declarations
DRI_LOADERS: >
-D glx=xlib
-D platforms=x11,wayland
GALLIUM_ST: >
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-va=enabled
-D gles1=disabled
-D gles2=disabled
-D llvm=enabled
-D microsoft-clc=disabled
-D shared-llvm=enabled
windows-msvc:
extends:
- .build-windows
- .use-windows_build_msvc
- .windows-build-rules
stage: build-for-tests
script:
- pwsh -ExecutionPolicy RemoteSigned .\.gitlab-ci\windows\mesa_build.ps1
artifacts:
paths:
- _build/meson-logs/*.txt
- _install/
debian-vulkan:
extends:
- .meson-build-only
- .use-debian/x86_64_build
variables:
BUILDTYPE: debug
UNWIND: "disabled"
DRI_LOADERS: >
-D glx=disabled
-D gbm=disabled
-D egl=disabled
-D opengl=false
-D gles1=disabled
-D gles2=disabled
-D glvnd=disabled
-D platforms=x11,wayland
GALLIUM_ST: >
-D gallium-vdpau=disabled
-D gallium-va=disabled
-D gallium-rusticl=false
-D b_sanitize=undefined
-D c_args=-fno-sanitize-recover=all
-D cpp_args=-fno-sanitize-recover=all
UBSAN_OPTIONS: "print_stacktrace=1"
VULKAN_DRIVERS: amd,asahi,broadcom,freedreno,intel,intel_hasvk,panfrost,virtio,imagination-experimental,microsoft-experimental,nouveau
EXTRA_OPTION: >
-D vulkan-layers=device-select,overlay
-D build-radv-tests=true
-D build-aco-tests=true
-D intel-rt=disabled
-D imagination-srv=true
debian-x86_32:
extends:
- .meson-cross
- .use-debian/x86_32_build
- .meson-build-only
- .build-run-long # it's not clear why this runs long, but it also doesn't matter much
variables:
BUILDTYPE: debug
CROSS: i386
VULKAN_DRIVERS: intel,amd,swrast,virtio,panfrost
GALLIUM_DRIVERS: "iris,nouveau,r300,r600,radeonsi,llvmpipe,softpipe,virgl,zink,crocus,d3d12,panfrost"
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: >
-D vulkan-layers=device-select,overlay
-D mesa-clc=system
CPP_ARGS: >
-Wno-error=deprecated-declarations
C_LINK_ARGS: >
-Wl,--no-warn-rwx-segments
CPP_LINK_ARGS: >
-Wl,--no-warn-rwx-segments
HOST_BUILD_OPTIONS: >
-D build-tests=false
-D enable-glcpp-tests=false
-D gallium-drivers=
-D vulkan-drivers=
-D video-codecs=
-D glx=disabled
-D platforms=
-D mesa-clc=enabled
-D install-mesa-clc=true
# While s390 is dead, s390x is very much alive, and one of the last major
# big-endian platforms, so it provides useful coverage.
# In case of issues with this job, contact @ajax
debian-s390x:
extends:
- .meson-cross
- .use-debian/s390x_build
- .meson-build-only
tags:
- $FDO_RUNNER_JOB_PRIORITY_TAG_X86_64_KVM
variables:
BUILDTYPE: debug
CROSS: s390x
GALLIUM_DRIVERS: "llvmpipe,virgl,zink"
VULKAN_DRIVERS: "swrast,virtio"
DRI_LOADERS:
-D glvnd=disabled
debian-ppc64el:
extends:
- .meson-cross
- .use-debian/ppc64el_build
- .meson-build-only
variables:
BUILDTYPE: debug
CROSS: ppc64el
GALLIUM_DRIVERS: "nouveau,llvmpipe,softpipe,virgl,zink"
VULKAN_DRIVERS: "swrast"
DRI_LOADERS:
-D glvnd=disabled


@@ -1,268 +0,0 @@
# For CI-tron based testing farm jobs.
.ci-tron-test:
extends:
- .ci-tron-b2c-job-v1
variables:
GIT_STRATEGY: none
B2C_VERSION: v0.9.15.1 # Linux 6.13.7
SCRIPTS_DIR: install
CI_TRON_PATTERN__JOB_SUCCESS__REGEX: 'hwci: mesa: exit_code: 0\r$'
CI_TRON_PATTERN__SESSION_END__REGEX: '^.*It''s now safe to turn off your computer\r$'
CI_TRON_TIMEOUT__FIRST_CONSOLE_ACTIVITY__MINUTES: 2
CI_TRON_TIMEOUT__FIRST_CONSOLE_ACTIVITY__RETRIES: 3
CI_TRON_TIMEOUT__CONSOLE_ACTIVITY__MINUTES: 5
CI_TRON__B2C_ARTIFACT_EXCLUSION: "*.shader_cache,install/*,*/install/*,*/vkd3d-proton.cache*,vkd3d-proton.cache*,*.qpa"
CI_TRON_HTTP_ARTIFACT__INSTALL__PATH: "/install.tar.zst"
CI_TRON_HTTP_ARTIFACT__INSTALL__URL: "https://$PIPELINE_ARTIFACTS_BASE/$S3_ARTIFACT_NAME.tar.zst"
CI_TRON__B2C_MACHINE_REGISTRATION_CMD: "setup --tags $CI_TRON_DUT_SETUP_TAGS"
CI_TRON__B2C_IMAGE_UNDER_TEST: $MESA_IMAGE
CI_TRON__B2C_EXEC_CMD: "curl --silent --fail-with-body {{ job.http.url }}$CI_TRON_HTTP_ARTIFACT__INSTALL__PATH | tar --zstd --extract && $SCRIPTS_DIR/common/init-stage2.sh"
# Assume by default this is running deqp, as that's almost always true
HWCI_TEST_SCRIPT: install/deqp-runner.sh
# Keep the job script in the artifacts
CI_TRON_JOB_SCRIPT_PATH: results/job_script.sh
needs:
- !reference [.required-for-hardware-jobs, needs]
tags:
- farm:$RUNNER_FARM_LOCATION
- $CI_TRON_DUT_SETUP_TAGS
# Override the default before_script, as it is not compatible with the CI-tron environment. We just keep the clearing
# of the JWT token for security reasons
before_script:
- |
set -eu
eval "$S3_JWT_FILE_SCRIPT"
for var in CI_TRON_DUT_SETUP_TAGS; do
if [[ -z "$(eval echo \${$var:-})" ]]; then
echo "The required variable '$var' is missing"
exit 1
fi
done
# Open a section that will be closed by b2c
echo -e "\n\e[0Ksection_start:`date +%s`:b2c_kernel_boot[collapsed=true]\r\e[0K\e[0;36m[$(cut -d ' ' -f1 /proc/uptime)]: Submitting the CI-tron job and booting the DUT\e[0m\n"
# Anything our job places in results/ will be collected by the
# Gitlab coordinator for status presentation. results/junit.xml
# will be parsed by the UI for more detailed explanations of
# test execution.
artifacts:
when: always
name: "${CI_PROJECT_NAME}_${CI_JOB_NAME_SLUG}"
paths:
- results
reports:
junit: results/**/junit.xml
.ci-tron-x86_64-test:
extends:
- .ci-tron-test
variables:
CI_TRON_INITRAMFS__B2C__URL: 'https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/initramfs.linux_amd64.cpio.xz'
CI_TRON_KERNEL__URL: 'https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/linux-x86_64'
# Set the following variables if you need AMD, Intel, or NVIDIA support
# CI_TRON_INITRAMFS__DEPMOD__URL: "https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/linux-x86_64.depmod.cpio.xz"
# CI_TRON_INITRAMFS__GPU__URL: "https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/linux-x86_64.gpu.cpio"
# CI_TRON_INITRAMFS__GPU__FORMAT__0__ARCHIVE__KEEP__0__PATH: "(lib/(modules|firmware/amdgpu)/.*)"
S3_ARTIFACT_NAME: "mesa-x86_64-default-debugoptimized"
.ci-tron-x86_64-test-vk:
extends:
- .use-debian/x86_64_test-vk
- .ci-tron-x86_64-test
needs:
- job: debian/x86_64_test-vk
artifacts: false
optional: true
- job: debian-x86_64
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-x86_64-test-vk-manual:
extends:
- .use-debian/x86_64_test-vk
- .ci-tron-x86_64-test
variables:
S3_ARTIFACT_NAME: "debian-build-x86_64"
needs:
- job: debian/x86_64_test-vk
artifacts: false
optional: true
- job: debian-build-x86_64
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-x86_64-test-gl:
extends:
- .use-debian/x86_64_test-gl
- .ci-tron-x86_64-test
needs:
- job: debian/x86_64_test-gl
artifacts: false
optional: true
- job: debian-x86_64
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-x86_64-test-gl-manual:
extends:
- .use-debian/x86_64_test-gl
- .ci-tron-x86_64-test
variables:
S3_ARTIFACT_NAME: "debian-build-x86_64"
needs:
- job: debian/x86_64_test-gl
artifacts: false
optional: true
- job: debian-build-x86_64
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm64-test:
extends:
- .ci-tron-test
variables:
CI_TRON_INITRAMFS__B2C__URL: 'https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/initramfs.linux_arm64.cpio.xz'
CI_TRON_KERNEL__URL: 'https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/linux-arm64'
S3_ARTIFACT_NAME: "mesa-arm64-default-debugoptimized"
.ci-tron-arm64-test-vk:
extends:
- .use-debian/arm64_test-vk
- .ci-tron-arm64-test
needs:
- job: debian/arm64_test-vk
artifacts: false
optional: true
- job: debian-arm64
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm64-test-asan-vk:
extends:
- .use-debian/arm64_test-vk
- .ci-tron-arm64-test
variables:
S3_ARTIFACT_NAME: "mesa-arm64-asan-debugoptimized"
DEQP_FORCE_ASAN: 1
needs:
- job: debian/arm64_test-vk
artifacts: false
optional: true
- job: debian-arm64-asan
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm64-test-ubsan-vk:
extends:
- .use-debian/arm64_test-vk
- .ci-tron-arm64-test
variables:
S3_ARTIFACT_NAME: "mesa-arm64-ubsan-debugoptimized"
needs:
- job: debian/arm64_test-vk
artifacts: false
optional: true
- job: debian-arm64-ubsan
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm64-test-gl:
extends:
- .use-debian/arm64_test-gl
- .ci-tron-arm64-test
needs:
- job: debian/arm64_test-gl
artifacts: false
optional: true
- job: debian-arm64
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm64-test-asan-gl:
extends:
- .use-debian/arm64_test-gl
- .ci-tron-arm64-test
variables:
S3_ARTIFACT_NAME: "mesa-arm64-asan-debugoptimized"
DEQP_FORCE_ASAN: 1
needs:
- job: debian/arm64_test-gl
artifacts: false
optional: true
- job: debian-arm64-asan
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm64-test-ubsan-gl:
extends:
- .use-debian/arm64_test-gl
- .ci-tron-arm64-test
variables:
S3_ARTIFACT_NAME: "mesa-arm64-ubsan-debugoptimized"
needs:
- job: debian/arm64_test-gl
artifacts: false
optional: true
- job: debian-arm64-ubsan
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm32-test:
extends:
- .ci-tron-test
variables:
CI_TRON_INITRAMFS__B2C__URL: 'https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/initramfs.linux_arm.cpio.xz'
CI_TRON_KERNEL__URL: 'https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/$B2C_VERSION/downloads/linux-arm'
S3_ARTIFACT_NAME: "mesa-arm32-default-debugoptimized"
.ci-tron-arm32-test-vk:
extends:
- .use-debian/arm32_test-vk
- .ci-tron-arm32-test
needs:
- job: debian/arm32_test-vk
artifacts: false
optional: true
- job: debian-arm32
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm32-test-gl:
extends:
- .use-debian/arm32_test-gl
- .ci-tron-arm32-test
needs:
- job: debian/arm32_test-gl
artifacts: false
optional: true
- job: debian-arm32
artifacts: false
- !reference [.ci-tron-test, needs]
.ci-tron-arm32-test-asan-gl:
extends:
- .use-debian/arm32_test-gl
- .ci-tron-arm32-test
variables:
S3_ARTIFACT_NAME: "mesa-arm32-asan-debugoptimized"
DEQP_FORCE_ASAN: 1
needs:
- job: debian/arm32_test-gl
artifacts: false
optional: true
- job: debian-arm32-asan
artifacts: false
- !reference [.ci-tron-test, needs]


@@ -1,35 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2035
# shellcheck disable=SC2061
# shellcheck disable=SC2086 # we want word splitting
while true; do
devcds=$(find /sys/devices/virtual/devcoredump/ -name data 2>/dev/null)
for i in $devcds; do
echo "Found a devcoredump at $i."
if cp $i $RESULTS_DIR/first.devcore; then
echo 1 > $i
echo "Saved to the job artifacts at /first.devcore"
exit 0
fi
done
i915_error_states=$(find /sys/devices/ -path */drm/card*/error)
for i in $i915_error_states; do
tmpfile=$(mktemp)
cp "$i" "$tmpfile"
filesize=$(stat --printf="%s" "$tmpfile")
# Does the file contain "No error state collected" ?
if [ "$filesize" = 25 ]; then
rm "$tmpfile"
else
echo "Found an i915 error state at $i size=$filesize."
if cp "$tmpfile" $RESULTS_DIR/first.i915_error_state; then
rm "$tmpfile"
echo 1 > "$i"
echo "Saved to the job artifacts at /first.i915_error_state"
exit 0
fi
fi
done
sleep 10
done


@@ -1,34 +0,0 @@
#!/bin/sh
# Very early init, used to make sure devices and network are set up and
# reachable.
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# ALPINE_X86_64_LAVA_TRIGGER_TAG
set -ex
cd /
findmnt --mountpoint /proc || mount -t proc none /proc
findmnt --mountpoint /sys || mount -t sysfs none /sys
mount -t debugfs none /sys/kernel/debug
findmnt --mountpoint /dev || mount -t devtmpfs none /dev
mkdir -p /dev/pts
mount -t devpts devpts /dev/pts
mkdir /dev/shm
mount -t tmpfs -o noexec,nodev,nosuid tmpfs /dev/shm
mount -t tmpfs tmpfs /tmp
echo "nameserver 8.8.8.8" > /etc/resolv.conf
[ -z "$NFS_SERVER_IP" ] || echo "$NFS_SERVER_IP caching-proxy" >> /etc/hosts
# Set the time so we can validate certificates before we fetch anything;
# however as not all DUTs have network, make this non-fatal.
for _ in 1 2 3; do sntp -sS pool.ntp.org && break || sleep 2; done || true
# Create a symlink from /dev/fd to /proc/self/fd if /dev/fd is missing.
if [ ! -e /dev/fd ]; then
ln -s /proc/self/fd /dev/fd
fi


@@ -1,240 +0,0 @@
#!/bin/bash
# shellcheck disable=SC1090
# shellcheck disable=SC1091
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC2155
# Second-stage init, used to set up devices and our job environment before
# running tests.
shopt -s extglob
# Make sure to kill this script and all of its child processes on exit, since
# any console output may interfere with LAVA signal handling, which is based
# on the log console.
cleanup() {
if [ "$BACKGROUND_PIDS" = "" ]; then
return 0
fi
set +x
echo "Killing all child processes"
for pid in $BACKGROUND_PIDS
do
kill "$pid" 2>/dev/null || true
done
# Sleep just a little to give enough time for subprocesses to be gracefully
# killed. Then apply a SIGKILL if necessary.
sleep 5
for pid in $BACKGROUND_PIDS
do
kill -9 "$pid" 2>/dev/null || true
done
BACKGROUND_PIDS=
set -x
}
trap cleanup INT TERM EXIT
# Space-separated list of the PIDs of the processes started in the
# background by this script
BACKGROUND_PIDS=
for path in '/dut-env-vars.sh' '/set-job-env-vars.sh' './set-job-env-vars.sh'; do
[ -f "$path" ] && source "$path"
done
. "$SCRIPTS_DIR"/setup-test-env.sh
# Flush out anything which might be stuck in a serial buffer
echo
echo
echo
section_switch init_stage2 "Pre-testing hardware setup"
set -ex
# Set up any devices required by the jobs
[ -z "$HWCI_KERNEL_MODULES" ] || {
echo -n $HWCI_KERNEL_MODULES | xargs -d, -n1 /usr/sbin/modprobe
}
# Set up ZRAM
HWCI_ZRAM_SIZE=2G
if /sbin/zramctl --find --size $HWCI_ZRAM_SIZE -a zstd; then
mkswap /dev/zram0
swapon /dev/zram0
echo "zram: $HWCI_ZRAM_SIZE activated"
else
echo "zram: skipping, not supported"
fi
#
# Load the KVM module specific to the detected CPU virtualization extensions:
# - vmx for Intel VT
# - svm for AMD-V
#
if [ -n "$HWCI_ENABLE_X86_KVM" ]; then
unset KVM_KERNEL_MODULE
{
grep -qs '\bvmx\b' /proc/cpuinfo && KVM_KERNEL_MODULE=kvm_intel
} || {
grep -qs '\bsvm\b' /proc/cpuinfo && KVM_KERNEL_MODULE=kvm_amd
}
{
[ -z "${KVM_KERNEL_MODULE}" ] && \
echo "WARNING: Failed to detect CPU virtualization extensions"
} || \
modprobe ${KVM_KERNEL_MODULE}
fi
# Fix prefix confusion: the build installs to $CI_PROJECT_DIR, but we expect
# it in /install
ln -sf $CI_PROJECT_DIR/install /install
export LD_LIBRARY_PATH=/install/lib
export LIBGL_DRIVERS_PATH=/install/lib/dri
# https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/22495#note_1876691
# The navi21 boards seem to have trouble with ld.so.cache, so try explicitly
# telling it to look in /usr/local/lib.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
# The Broadcom devices need /usr/local/bin unconditionally added to the path
export PATH=/usr/local/bin:$PATH
# Store Mesa's disk cache under /tmp, rather than sending it out over NFS.
export XDG_CACHE_HOME=/tmp
# Make sure Python can find all our imports
export PYTHONPATH=$(python3 -c "import sys;print(\":\".join(sys.path))")
# If we need to specify a driver, it means several drivers could pick up this gpu;
# ensure that the other driver can't accidentally be used
if [ -n "$MESA_LOADER_DRIVER_OVERRIDE" ]; then
rm /install/lib/dri/!($MESA_LOADER_DRIVER_OVERRIDE)_dri.so
fi
ls -1 /install/lib/dri/*_dri.so || true
if [ "$HWCI_FREQ_MAX" = "true" ]; then
# Ensure initialization of the DRM device (needed by MSM)
head -0 /dev/dri/renderD128
# Disable GPU frequency scaling
DEVFREQ_GOVERNOR=$(find /sys/devices -name governor | grep gpu || true)
test -z "$DEVFREQ_GOVERNOR" || echo performance > $DEVFREQ_GOVERNOR || true
# Disable CPU frequency scaling
echo performance | tee -a /sys/devices/system/cpu/cpufreq/policy*/scaling_governor || true
# Disable GPU runtime power management
GPU_AUTOSUSPEND=$(find /sys/devices -name autosuspend_delay_ms | grep gpu | head -1)
test -z "$GPU_AUTOSUSPEND" || echo -1 > $GPU_AUTOSUSPEND || true
# Lock Intel GPU frequency to 70% of the maximum allowed by hardware
# and enable throttling detection & reporting.
# Additionally, set the upper limit for CPU scaling frequency to 65% of the
# maximum permitted, as an additional measure to mitigate thermal throttling.
/install/common/intel-gpu-freq.sh -s 70% --cpu-set-max 65% -g all -d
fi
# Start a little daemon to capture sysfs records and produce a JSON file
KDL_PATH=/install/common/kdl.sh
if [ -x "$KDL_PATH" ]; then
echo "launch kdl.sh!"
$KDL_PATH &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
else
echo "kdl.sh not found!"
fi
# Increase freedreno hangcheck timer because it's right at the edge of the
# spilling tests timing out (and some traces, too)
if [ -n "$FREEDRENO_HANGCHECK_MS" ]; then
echo $FREEDRENO_HANGCHECK_MS | tee -a /sys/kernel/debug/dri/128/hangcheck_period_ms
fi
# Start a little daemon to capture the first devcoredump we encounter. (They
# expire after 5 minutes, so we poll for them).
CAPTURE_DEVCOREDUMP=/install/common/capture-devcoredump.sh
if [ -x "$CAPTURE_DEVCOREDUMP" ]; then
$CAPTURE_DEVCOREDUMP &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
fi
ARCH=$(uname -m)
export VK_DRIVER_FILES="/install/share/vulkan/icd.d/${VK_DRIVER}_icd.$ARCH.json"
# If we want Xorg to be running for the test, then we start it up before the
# HWCI_TEST_SCRIPT because we need to use xinit to start X (otherwise
# without using -displayfd you can race with Xorg's startup), but xinit will eat
# your client's return code
if [ -n "$HWCI_START_XORG" ]; then
echo "touch /xorg-started; sleep 100000" > /xorg-script
env \
xinit /bin/sh /xorg-script -- /usr/bin/Xorg -noreset -s 0 -dpms -logfile "$RESULTS_DIR/Xorg.0.log" &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
# Wait for xorg to be ready for connections.
for _ in 1 2 3 4 5; do
if [ -e /xorg-started ]; then
break
fi
sleep 5
done
export DISPLAY=:0
fi
if [ -n "$HWCI_START_WESTON" ]; then
WESTON_X11_SOCK="/tmp/.X11-unix/X0"
if [ -n "$HWCI_START_XORG" ]; then
echo "Please consider dropping HWCI_START_XORG and instead using Weston XWayland for testing."
WESTON_X11_SOCK="/tmp/.X11-unix/X1"
fi
export WAYLAND_DISPLAY=wayland-0
# The display server is Weston's Xwayland when HWCI_START_XORG is not set, or Xorg when it is set.
export DISPLAY=:0
mkdir -p /tmp/.X11-unix
env weston --config="/install/common/weston.ini" -Swayland-0 --use-gl &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
while [ ! -S "$WESTON_X11_SOCK" ]; do sleep 1; done
fi
set +x
section_end init_stage2
echo "Running ${HWCI_TEST_SCRIPT} ${HWCI_TEST_ARGS} ..."
set +e
$HWCI_TEST_SCRIPT ${HWCI_TEST_ARGS:-}; EXIT_CODE=$?
set -e
section_start post_test_cleanup "Cleaning up after testing, uploading results"
set -x
# Make sure that capture-devcoredump is done before we start trying to tar up
# artifacts -- if it's writing while tar is reading, tar will throw an error and
# kill the job.
cleanup
# upload artifacts (lava jobs)
if [ -n "$S3_RESULTS_UPLOAD" ]; then
tar --zstd -cf results.tar.zst results/;
ci-fairy s3cp --token-file "${S3_JWT_FILE}" results.tar.zst https://"$S3_RESULTS_UPLOAD"/results.tar.zst
fi
set +x
section_end post_test_cleanup
# Print the final result; both bare-metal and LAVA look for this string to get
# the result of our run, so try really hard to get it out rather than losing
# the run. The device gets shut down right at this point, and a630 seems to
# enjoy corrupting the last line of serial output before shutdown.
for _ in $(seq 0 3); do echo "hwci: mesa: exit_code: $EXIT_CODE"; sleep 1; echo; done
exit $EXIT_CODE
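The exit_code line above is the contract that the monitoring side keys on: the PoE runner earlier in this diff and CI_TRON_PATTERN__JOB_SUCCESS__REGEX both match it against the serial stream. A minimal consumer-side sketch (illustrative only; the caller supplies the iterable of serial lines):

import re

# Same pattern the PoE runner uses; the script above prints the line several
# times because the last line of serial output can get corrupted at shutdown.
EXIT_RE = re.compile(r"hwci: mesa: exit_code: (\d+)")

def parse_exit_code(serial_lines):
    """Return the DUT exit code, or None if the run ended without one."""
    for line in serial_lines:
        match = EXIT_RE.search(line)
        if match:
            return int(match.group(1))
    return None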


@@ -1,820 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2013
# shellcheck disable=SC2015
# shellcheck disable=SC2034
# shellcheck disable=SC2046
# shellcheck disable=SC2059
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC2154
# shellcheck disable=SC2155
# shellcheck disable=SC2162
# shellcheck disable=SC2229
#
# This is a utility script to manage Intel GPU frequencies.
# It can be used for debugging performance problems or trying to obtain a stable
# frequency while benchmarking.
#
# Note the Intel i915 GPU driver allows changing the minimum, maximum and boost
# frequencies in steps of 50 MHz via:
#
# /sys/class/drm/card<n>/<freq_info>
#
# Where <n> is the DRM card index and <freq_info> one of the following:
#
# - gt_max_freq_mhz (enforced maximum freq)
# - gt_min_freq_mhz (enforced minimum freq)
# - gt_boost_freq_mhz (enforced boost freq)
#
# The hardware capabilities can be accessed via:
#
# - gt_RP0_freq_mhz (supported maximum freq)
# - gt_RPn_freq_mhz (supported minimum freq)
# - gt_RP1_freq_mhz (most efficient freq)
#
# The current frequency can be read from:
# - gt_act_freq_mhz (the actual GPU freq)
# - gt_cur_freq_mhz (the last requested freq)
#
# Intel later switched to per-tile sysfs interfaces, which is what the Xe DRM
# driver exclusively uses, and the capabilities are now located under the
# following directory for the first tile:
#
# /sys/class/drm/card<n>/device/tile0/gt0/freq0/<freq_info>
#
# Where <n> is the DRM card index and <freq_info> one of the following:
#
# - max_freq (enforced maximum freq)
# - min_freq (enforced minimum freq)
#
# The hardware capabilities can be accessed via:
#
# - rp0_freq (supported maximum freq)
# - rpn_freq (supported minimum freq)
# - rpe_freq (most efficient freq)
#
# The current frequency can be read from:
# - act_freq (the actual GPU freq)
# - cur_freq (the last requested freq)
#
# Also note that in addition to GPU management, the script offers the
# possibility to adjust CPU operating frequencies. However, this is currently
# limited to just setting the maximum scaling frequency as a percentage of the
# maximum frequency allowed by the hardware.
#
# Copyright (C) 2022 Collabora Ltd.
# Author: Cristian Ciocaltea <cristian.ciocaltea@collabora.com>
#
# SPDX-License-Identifier: MIT
#
#
# Constants
#
# Check if any /sys/class/drm/cardX/device/tile0 directory exists to detect Xe
USE_XE=0
for i in $(seq 0 15); do
if [ -d "/sys/class/drm/card$i/device/tile0" ]; then
USE_XE=1
break
fi
done
# GPU
if [ "$USE_XE" -eq 1 ]; then
DRM_FREQ_SYSFS_PATTERN="/sys/class/drm/card%d/device/tile0/gt0/freq0/%s_freq"
ENF_FREQ_INFO="max min"
CAP_FREQ_INFO="rp0 rpn rpe"
else
DRM_FREQ_SYSFS_PATTERN="/sys/class/drm/card%d/gt_%s_freq_mhz"
ENF_FREQ_INFO="max min boost"
CAP_FREQ_INFO="RP0 RPn RP1"
fi
ACT_FREQ_INFO="act cur"
THROTT_DETECT_SLEEP_SEC=2
THROTT_DETECT_PID_FILE_PATH=/tmp/thrott-detect.pid
# CPU
CPU_SYSFS_PREFIX=/sys/devices/system/cpu
CPU_PSTATE_SYSFS_PATTERN="${CPU_SYSFS_PREFIX}/intel_pstate/%s"
CPU_FREQ_SYSFS_PATTERN="${CPU_SYSFS_PREFIX}/cpu%s/cpufreq/%s_freq"
CAP_CPU_FREQ_INFO="cpuinfo_max cpuinfo_min"
ENF_CPU_FREQ_INFO="scaling_max scaling_min"
ACT_CPU_FREQ_INFO="scaling_cur"
#
# Global variables.
#
unset INTEL_DRM_CARD_INDEX
unset GET_ACT_FREQ GET_ENF_FREQ GET_CAP_FREQ
unset SET_MIN_FREQ SET_MAX_FREQ
unset MONITOR_FREQ
unset CPU_SET_MAX_FREQ
unset DETECT_THROTT
unset DRY_RUN
#
# Simple printf based stderr logger.
#
log() {
local msg_type=$1
shift
printf "%s: %s: " "${msg_type}" "${0##*/}" >&2
printf "$@" >&2
printf "\n" >&2
}
#
# Helper to print sysfs path for the given card index and freq info.
#
# arg1: Frequency info sysfs name, one of *_FREQ_INFO constants above
# arg2: Video card index, defaults to INTEL_DRM_CARD_INDEX
#
print_freq_sysfs_path() {
printf ${DRM_FREQ_SYSFS_PATTERN} "${2:-${INTEL_DRM_CARD_INDEX}}" "$1"
}
#
# Helper to set INTEL_DRM_CARD_INDEX for the first identified Intel video card.
#
identify_intel_gpu() {
local i=0 vendor path
while [ ${i} -lt 16 ]; do
[ -c "/dev/dri/card$i" ] || {
i=$((i + 1))
continue
}
path=$(print_freq_sysfs_path "" ${i})
if [ "$USE_XE" -eq 1 ]; then
path=${path%/*/*/*/*/*}/device/vendor
else
path=${path%/*}/device/vendor
fi
[ -r "${path}" ] && read vendor < "${path}" && \
[ "${vendor}" = "0x8086" ] && INTEL_DRM_CARD_INDEX=$i && return 0
i=$((i + 1))
done
return 1
}
#
# Read the specified freq info from sysfs.
#
# arg1: Flag (y/n) to also enable printing the freq info.
# arg2...: Frequency info sysfs name(s), see *_FREQ_INFO constants above
# return: Global variable(s) FREQ_${arg} containing the requested information
#
read_freq_info() {
local var val info path print=0 ret=0
[ "$1" = "y" ] && print=1
shift
while [ $# -gt 0 ]; do
info=$1
shift
var=FREQ_${info}
path=$(print_freq_sysfs_path "${info}")
[ -r ${path} ] && read ${var} < ${path} || {
log ERROR "Failed to read freq info from: %s" "${path}"
ret=1
continue
}
[ -n "${var}" ] || {
log ERROR "Got empty freq info from: %s" "${path}"
ret=1
continue
}
[ ${print} -eq 1 ] && {
eval val=\$${var}
printf "%6s: %4s MHz\n" "${info}" "${val}"
}
done
return ${ret}
}
#
# Display requested info.
#
print_freq_info() {
local req_freq
[ -n "${GET_CAP_FREQ}" ] && {
printf "* Hardware capabilities\n"
read_freq_info y ${CAP_FREQ_INFO}
printf "\n"
}
[ -n "${GET_ENF_FREQ}" ] && {
printf "* Enforcements\n"
read_freq_info y ${ENF_FREQ_INFO}
printf "\n"
}
[ -n "${GET_ACT_FREQ}" ] && {
printf "* Actual\n"
read_freq_info y ${ACT_FREQ_INFO}
printf "\n"
}
}
#
# Helper to print frequency value as requested by user via '-s, --set' option.
# arg1: user requested freq value
#
compute_freq_set() {
local val
case "$1" in
+)
val=$(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f1)}") # FREQ_rp0 or FREQ_RP0
;;
-)
val=$(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}") # FREQ_rpn or FREQ_RPn
;;
*%)
val=$((${1%?} * $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f1)}") / 100))
# Adjust freq to comply with 50 MHz increments
val=$((val / 50 * 50))
;;
*[!0-9]*)
log ERROR "Cannot set freq to invalid value: %s" "$1"
return 1
;;
"")
log ERROR "Cannot set freq to unspecified value"
return 1
;;
*)
# Adjust freq to comply with 50 MHz increments
val=$(($1 / 50 * 50))
;;
esac
printf "%s" "${val}"
}
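# Worked example (not from the original script): with a hardware max (RP0/rp0)
# of 1300 MHz, '-s 70%' computes 70 * 1300 / 100 = 910, which the 50 MHz
# rounding above (val / 50 * 50) turns into 900 MHz.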
#
# Helper for set_freq().
#
set_freq_max() {
log INFO "Setting GPU max freq to %s MHz" "${SET_MAX_FREQ}"
read_freq_info n min || return $?
# FREQ_rp0 or FREQ_RP0
[ ${SET_MAX_FREQ} -gt $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f1)}") ] && {
log ERROR "Cannot set GPU max freq (%s) to be greater than hw max freq (%s)" \
"${SET_MAX_FREQ}" "$(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f1)}")"
return 1
}
# FREQ_rpn or FREQ_RPn
[ ${SET_MAX_FREQ} -lt $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}") ] && {
log ERROR "Cannot set GPU max freq (%s) to be less than hw min freq (%s)" \
"${SET_MIN_FREQ}" "$(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}")"
return 1
}
[ ${SET_MAX_FREQ} -lt ${FREQ_min} ] && {
log ERROR "Cannot set GPU max freq (%s) to be less than min freq (%s)" \
"${SET_MAX_FREQ}" "${FREQ_min}"
return 1
}
[ -z "${DRY_RUN}" ] || return 0
# Write to max freq path
if ! printf "%s" ${SET_MAX_FREQ} | tee $(print_freq_sysfs_path max) > /dev/null;
then
log ERROR "Failed to set GPU max frequency"
return 1
fi
# Only write to boost if the sysfs file exists, as it's removed in Xe
if [ -e "$(print_freq_sysfs_path boost)" ]; then
if ! printf "%s" ${SET_MAX_FREQ} | tee $(print_freq_sysfs_path boost) > /dev/null;
then
log ERROR "Failed to set GPU boost frequency"
return 1
fi
fi
}
#
# Helper for set_freq().
#
set_freq_min() {
log INFO "Setting GPU min freq to %s MHz" "${SET_MIN_FREQ}"
read_freq_info n max || return $?
[ ${SET_MIN_FREQ} -gt ${FREQ_max} ] && {
log ERROR "Cannot set GPU min freq (%s) to be greater than max freq (%s)" \
"${SET_MIN_FREQ}" "${FREQ_max}"
return 1
}
[ ${SET_MIN_FREQ} -lt $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}") ] && {
log ERROR "Cannot set GPU min freq (%s) to be less than hw min freq (%s)" \
"${SET_MIN_FREQ}" "$(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}")"
return 1
}
[ -z "${DRY_RUN}" ] || return 0
if ! printf "%s" ${SET_MIN_FREQ} > $(print_freq_sysfs_path min);
then
log ERROR "Failed to set GPU min frequency"
return 1
fi
}
#
# Set min or max or both GPU frequencies to the user indicated values.
#
set_freq() {
# Get hw max & min frequencies
read_freq_info n $(echo $CAP_FREQ_INFO | cut -d' ' -f1,2) || return $? # RP0 RPn
[ -z "${SET_MAX_FREQ}" ] || {
SET_MAX_FREQ=$(compute_freq_set "${SET_MAX_FREQ}")
[ -z "${SET_MAX_FREQ}" ] && return 1
}
[ -z "${SET_MIN_FREQ}" ] || {
SET_MIN_FREQ=$(compute_freq_set "${SET_MIN_FREQ}")
[ -z "${SET_MIN_FREQ}" ] && return 1
}
#
# Ensure correct operation order, to avoid setting min freq
# to a value which is larger than max freq.
#
# E.g.:
# crt_min=crt_max=600; new_min=new_max=700
# > operation order: max=700; min=700
#
# crt_min=crt_max=600; new_min=new_max=500
# > operation order: min=500; max=500
#
if [ -n "${SET_MAX_FREQ}" ] && [ -n "${SET_MIN_FREQ}" ]; then
[ ${SET_MAX_FREQ} -lt ${SET_MIN_FREQ} ] && {
log ERROR "Cannot set GPU max freq to be less than min freq"
return 1
}
read_freq_info n min || return $?
if [ ${SET_MAX_FREQ} -lt ${FREQ_min} ]; then
set_freq_min || return $?
set_freq_max
else
set_freq_max || return $?
set_freq_min
fi
elif [ -n "${SET_MAX_FREQ}" ]; then
set_freq_max
elif [ -n "${SET_MIN_FREQ}" ]; then
set_freq_min
else
log "Unexpected call to set_freq()"
return 1
fi
}
#
# Helper for detect_throttling().
#
get_thrott_detect_pid() {
[ -e ${THROTT_DETECT_PID_FILE_PATH} ] || return 0
local pid
read pid < ${THROTT_DETECT_PID_FILE_PATH} || {
log ERROR "Failed to read pid from: %s" "${THROTT_DETECT_PID_FILE_PATH}"
return 1
}
local proc_path=/proc/${pid:-invalid}/cmdline
[ -r ${proc_path} ] && grep -qs "${0##*/}" ${proc_path} && {
printf "%s" "${pid}"
return 0
}
# Remove orphaned PID file
rm -rf ${THROTT_DETECT_PID_FILE_PATH}
return 1
}
#
# Control detection and reporting of GPU throttling events.
# arg1: start - run throttle detector in background
# stop - stop throttle detector process, if any
# status - verify if throttle detector is running
#
detect_throttling() {
local pid
pid=$(get_thrott_detect_pid)
case "$1" in
status)
printf "Throttling detector is "
[ -z "${pid}" ] && printf "not running\n" && return 0
printf "running (pid=%s)\n" ${pid}
;;
stop)
[ -z "${pid}" ] && return 0
log INFO "Stopping throttling detector (pid=%s)" "${pid}"
kill ${pid}; sleep 1; kill -0 ${pid} 2>/dev/null && kill -9 ${pid}
rm -rf ${THROTT_DETECT_PID_FILE_PATH}
;;
start)
[ -n "${pid}" ] && {
log WARN "Throttling detector is already running (pid=%s)" ${pid}
return 0
}
(
read_freq_info n $(echo $CAP_FREQ_INFO | cut -d' ' -f2) || return $? # RPn
while true; do
sleep ${THROTT_DETECT_SLEEP_SEC}
read_freq_info n act min cur || exit $?
#
# The throttling seems to occur when act freq goes below min.
# However, it's necessary to exclude the idle states, where
# act freq normally reaches rpn and cur goes below min.
#
[ ${FREQ_act} -lt ${FREQ_min} ] && \
[ ${FREQ_act} -gt $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}") ] && \
[ ${FREQ_cur} -ge ${FREQ_min} ] && \
printf "GPU throttling detected: act=%s min=%s cur=%s rpn=%s\n" \
${FREQ_act} ${FREQ_min} ${FREQ_cur} $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}")
done
) &
pid=$!
log INFO "Started GPU throttling detector (pid=%s)" ${pid}
printf "%s\n" ${pid} > ${THROTT_DETECT_PID_FILE_PATH} || \
log WARN "Failed to write throttle detector PID file"
;;
esac
}
#
# Retrieve the list of online CPUs.
#
get_online_cpus() {
local path cpu_index
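# cpu0 is listed unconditionally: it is normally always online and its sysfs
# 'online' attribute is usually absent, so the grep below would miss it.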
printf "0"
for path in $(grep 1 ${CPU_SYSFS_PREFIX}/cpu*/online); do
cpu_index=${path##*/cpu}
printf " %s" ${cpu_index%%/*}
done
}
#
# Helper to print sysfs path for the given CPU index and freq info.
#
# arg1: Frequency info sysfs name, one of *_CPU_FREQ_INFO constants above
# arg2: CPU index
#
print_cpu_freq_sysfs_path() {
printf ${CPU_FREQ_SYSFS_PATTERN} "$2" "$1"
}
#
# Read the specified CPU freq info from sysfs.
#
# arg1: CPU index
# arg2: Flag (y/n) to also enable printing the freq info.
# arg3...: Frequency info sysfs name(s), see *_CPU_FREQ_INFO constants above
# return: Global variable(s) CPU_FREQ_${arg} containing the requested information
#
read_cpu_freq_info() {
local var val info path cpu_index print=0 ret=0
cpu_index=$1
[ "$2" = "y" ] && print=1
shift 2
while [ $# -gt 0 ]; do
info=$1
shift
var=CPU_FREQ_${info}
path=$(print_cpu_freq_sysfs_path "${info}" ${cpu_index})
[ -r ${path} ] && read ${var} < ${path} || {
log ERROR "Failed to read CPU freq info from: %s" "${path}"
ret=1
continue
}
[ -n "${var}" ] || {
log ERROR "Got empty CPU freq info from: %s" "${path}"
ret=1
continue
}
[ ${print} -eq 1 ] && {
eval val=\$${var}
printf "%6s: %4s Hz\n" "${info}" "${val}"
}
done
return ${ret}
}
#
# Helper to print freq. value as requested by user via '--cpu-set-max' option.
# arg1: user requested freq value
#
compute_cpu_freq_set() {
local val
case "$1" in
+)
val=${CPU_FREQ_cpuinfo_max}
;;
-)
val=${CPU_FREQ_cpuinfo_min}
;;
*%)
val=$((${1%?} * CPU_FREQ_cpuinfo_max / 100))
;;
*[!0-9]*)
log ERROR "Cannot set CPU freq to invalid value: %s" "$1"
return 1
;;
"")
log ERROR "Cannot set CPU freq to unspecified value"
return 1
;;
*)
log ERROR "Cannot set CPU freq to custom value; use +, -, or % instead"
return 1
;;
esac
printf "%s" "${val}"
}
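#
# Illustrative examples (assuming a hypothetical CPU_FREQ_cpuinfo_max of
# 3600000 kHz; not part of the original script):
#   compute_cpu_freq_set "50%"  -> prints 1800000
#   compute_cpu_freq_set "+"    -> prints 3600000
#   compute_cpu_freq_set "-"    -> prints the cpuinfo_min value
#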
#
# Adjust CPU max scaling frequency.
#
set_cpu_freq_max() {
local target_freq res=0
case "${CPU_SET_MAX_FREQ}" in
+)
target_freq=100
;;
-)
target_freq=1
;;
*%)
target_freq=${CPU_SET_MAX_FREQ%?}
;;
*)
log ERROR "Invalid CPU freq"
return 1
;;
esac
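# If the intel_pstate driver exposes max_perf_pct, cap its maximum performance
# percentage as well, in addition to the per-CPU scaling_max limits set below.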
local pstate_info=$(printf "${CPU_PSTATE_SYSFS_PATTERN}" max_perf_pct)
[ -e "${pstate_info}" ] && {
log INFO "Setting intel_pstate max perf to %s" "${target_freq}%"
if ! printf "%s" "${target_freq}" > "${pstate_info}";
then
log ERROR "Failed to set intel_pstate max perf"
res=1
fi
}
local cpu_index
for cpu_index in $(get_online_cpus); do
read_cpu_freq_info ${cpu_index} n ${CAP_CPU_FREQ_INFO} || { res=$?; continue; }
target_freq=$(compute_cpu_freq_set "${CPU_SET_MAX_FREQ}")
tf_res=$?
[ -z "${target_freq}" ] && { res=$tf_res; continue; }
log INFO "Setting CPU%s max scaling freq to %s Hz" ${cpu_index} "${target_freq}"
[ -n "${DRY_RUN}" ] && continue
if ! printf "%s" ${target_freq} > $(print_cpu_freq_sysfs_path scaling_max ${cpu_index});
then
res=1
log ERROR "Failed to set CPU%s max scaling frequency" ${cpu_index}
fi
done
return ${res}
}
#
# Show help message.
#
print_usage() {
cat <<EOF
Usage: ${0##*/} [OPTION]...
A script to manage Intel GPU frequencies. Can be used for debugging performance
problems or trying to obtain a stable frequency while benchmarking.
Note Intel GPUs only accept specific frequencies, usually multiples of 50 MHz.
Options:
-g, --get [act|enf|cap|all]
Get frequency information: active (default), enforced,
hardware capabilities or all of them.
-s, --set [{min|max}=]{FREQUENCY[%]|+|-}
Set min or max frequency to the given value (MHz).
Append '%' to interpret FREQUENCY as % of hw max.
Use '+' or '-' to set frequency to hardware max or min.
Omit min/max prefix to set both frequencies.
-r, --reset Reset frequencies to hardware defaults.
-m, --monitor [act|enf|cap|all]
Monitor the indicated frequencies via 'watch' utility.
See '-g, --get' option for more details.
-d|--detect-thrott [start|stop|status]
Start (default operation) the throttling detector
as a background process. Use 'stop' or 'status' to
terminate the detector process or verify its status.
--cpu-set-max {FREQUENCY%|+|-}
Set CPU max scaling frequency as % of hw max.
Use '+' or '-' to set frequency to hardware max or min.
--dry-run See what the script will do without applying any
frequency changes.
-h, --help Display this help text and exit.
EOF
}
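#
# Illustrative invocations (hypothetical; assuming the script is installed as
# intel-gpu-freq.sh and frequencies are given in MHz):
#   intel-gpu-freq.sh -g all        # show active, enforced and capability freqs
#   intel-gpu-freq.sh -s max=50%    # cap GPU max freq at 50% of hw max
#   intel-gpu-freq.sh -s min=300    # raise GPU min freq to 300 MHz
#   intel-gpu-freq.sh -r --dry-run  # preview a reset to hardware defaults
#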
#
# Parse user input for '-g, --get' option.
# Returns 0 if a value has been provided, otherwise 1.
#
parse_option_get() {
local ret=0
case "$1" in
act) GET_ACT_FREQ=1;;
enf) GET_ENF_FREQ=1;;
cap) GET_CAP_FREQ=1;;
all) GET_ACT_FREQ=1; GET_ENF_FREQ=1; GET_CAP_FREQ=1;;
-*|"")
# No value provided, using default.
GET_ACT_FREQ=1
ret=1
;;
*)
print_usage
exit 1
;;
esac
return ${ret}
}
#
# Validate user input for '-s, --set' option.
# arg1: input value to be validated
# arg2: optional flag indicating input is restricted to %
#
validate_option_set() {
case "$1" in
+|-|[0-9]%|[0-9][0-9]%)
return 0
;;
*[!0-9]*|"")
print_usage
exit 1
;;
esac
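# Plain numeric (MHz) values fall through to here; they are rejected when the
# optional arg2 flag restricts the input to '+', '-' or a percentage.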
[ -z "$2" ] || { print_usage; exit 1; }
}
#
# Parse script arguments.
#
[ $# -eq 0 ] && { print_usage; exit 1; }
while [ $# -gt 0 ]; do
case "$1" in
-g|--get)
parse_option_get "$2" && shift
;;
-s|--set)
shift
case "$1" in
min=*)
SET_MIN_FREQ=${1#min=}
validate_option_set "${SET_MIN_FREQ}"
;;
max=*)
SET_MAX_FREQ=${1#max=}
validate_option_set "${SET_MAX_FREQ}"
;;
*)
SET_MIN_FREQ=$1
validate_option_set "${SET_MIN_FREQ}"
SET_MAX_FREQ=${SET_MIN_FREQ}
;;
esac
;;
-r|--reset)
RESET_FREQ=1
SET_MIN_FREQ="-"
SET_MAX_FREQ="+"
;;
-m|--monitor)
MONITOR_FREQ=act
parse_option_get "$2" && MONITOR_FREQ=$2 && shift
;;
-d|--detect-thrott)
DETECT_THROTT=start
case "$2" in
start|stop|status)
DETECT_THROTT=$2
shift
;;
esac
;;
--cpu-set-max)
shift
CPU_SET_MAX_FREQ=$1
validate_option_set "${CPU_SET_MAX_FREQ}" restricted
;;
--dry-run)
DRY_RUN=1
;;
-h|--help)
print_usage
exit 0
;;
*)
print_usage
exit 1
;;
esac
shift
done
#
# Main
#
RET=0
identify_intel_gpu || {
log INFO "No Intel GPU detected"
exit 0
}
[ -n "${SET_MIN_FREQ}${SET_MAX_FREQ}" ] && { set_freq || RET=$?; }
print_freq_info
[ -n "${DETECT_THROTT}" ] && detect_throttling ${DETECT_THROTT}
[ -n "${CPU_SET_MAX_FREQ}" ] && { set_cpu_freq_max || RET=$?; }
[ -n "${MONITOR_FREQ}" ] && {
log INFO "Entering frequency monitoring mode"
sleep 2
exec watch -d -n 1 "$0" -g "${MONITOR_FREQ}"
}
exit ${RET}

View File

@@ -1,18 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC1091 # the path is created in build-kdl and
# here we only check whether it exists
# shellcheck disable=SC2086 # we want the arguments to be expanded
if ! [ -f /ci-kdl/bin/activate ]; then
echo -e "ci-kdl not installed; not monitoring temperature"
exit 0
fi
KDL_ARGS="
--output-file=${RESULTS_DIR}/kdl.json
--log-level=WARNING
--num-samples=-1
"
source /ci-kdl/bin/activate
exec /ci-kdl/bin/ci-kdl ${KDL_ARGS}

View File

@@ -1,7 +0,0 @@
[core]
backend=headless-backend.so
xwayland=true
idle-time=0
[xwayland]
path=/usr/local/bin/Xwayland

View File

@@ -1,7 +0,0 @@
variables:
CONDITIONAL_BUILD_ANDROID_CTS_TAG: b018634d732f438027ec58c0383615e7
CONDITIONAL_BUILD_ANGLE_TAG: f62910e55be46e37cc867d037e4a8121
CONDITIONAL_BUILD_CROSVM_TAG: 0f59350b1052bdbb28b65a832b494377
CONDITIONAL_BUILD_FLUSTER_TAG: 3bc3afd7468e106afcbfd569a85f34f9
CONDITIONAL_BUILD_PIGLIT_TAG: 827b708ab7309721395ea28cec512968
CONDITIONAL_BUILD_VKD3D_PROTON_TAG: 82cadf35246e64a8228bf759c9c19e5b

View File

@@ -1,70 +0,0 @@
# Build the CI Alpine docker images.
#
# MESA_IMAGE_TAG is the tag of the docker image used by later stage jobs. If the
# image doesn't exist yet, the container stage job generates it.
#
# In order to generate a new image, one should generally change the tag.
# While removing the image from the registry would also work, that's not
# recommended except for ephemeral images during development: Replacing
# an image after a significant amount of time might pull in newer
# versions of gcc/clang or other packages, which might break the build
# with older commits using the same tag.
#
# After merging a change resulting in generating a new image to the
# main repository, it's recommended to remove the image from the source
# repository's container registry, so that the image from the main
# repository's registry will be used there as well.
# Alpine based x86_64 build image
.alpine/x86_64_build-base:
extends:
- .fdo.container-build@alpine
- .container
variables:
FDO_DISTRIBUTION_VERSION: "3.21"
FDO_BASE_IMAGE: alpine:$FDO_DISTRIBUTION_VERSION # since cbuild ignores it
# Alpine based x86_64 build image
alpine/x86_64_build:
extends:
- .alpine/x86_64_build-base
variables:
MESA_IMAGE_TAG: &alpine-x86_64_build ${ALPINE_X86_64_BUILD_TAG}
LLVM_VERSION: &alpine-llvm_version 19
rules:
- !reference [.container, rules]
# Note: the next three lines must remain in that order, so that the rules
# in `linkcheck-docs` catch nightly pipelines before the rules in `deploy-docs`
# exclude them.
- !reference [linkcheck-docs, rules]
- !reference [deploy-docs, rules]
- !reference [test-docs, rules]
.use-alpine/x86_64_build:
tags:
- $FDO_RUNNER_JOB_PRIORITY_TAG_X86_64
extends:
- .set-image
variables:
MESA_IMAGE_PATH: "alpine/x86_64_build"
MESA_IMAGE_TAG: *alpine-x86_64_build
LLVM_VERSION: *alpine-llvm_version
needs:
- job: sanity
optional: true
- job: alpine/x86_64_build
optional: true
# Alpine based x86_64 image for LAVA SSH dockerized client
alpine/x86_64_lava_ssh_client:
extends:
- .alpine/x86_64_build-base
variables:
MESA_IMAGE_TAG: &alpine-x86_64_lava_ssh_client ${ALPINE_X86_64_LAVA_SSH_TAG}
# Alpine based x86_64 image to run LAVA jobs
alpine/x86_64_lava-trigger:
extends:
- .alpine/x86_64_build-base
variables:
MESA_IMAGE_TAG: &alpine-x86_64_lava_trigger ${ALPINE_X86_64_LAVA_TRIGGER_TAG}

View File

@@ -1,83 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC1091
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# ALPINE_X86_64_BUILD_TAG
set -e
. .gitlab-ci/setup-test-env.sh
set -o xtrace
EPHEMERAL=(
)
DEPS=(
bash
bison
ccache
"clang${LLVM_VERSION}-dev"
clang-dev
cmake
coreutils
curl
elfutils-dev
expat-dev
flex
g++
gcc
gettext
git
glslang
graphviz
libclc-dev
libdrm-dev
libpciaccess-dev
libva-dev
linux-headers
"llvm${LLVM_VERSION}-dev"
"llvm${LLVM_VERSION}-static"
mold
musl-dev
py3-clang
py3-cparser
py3-mako
py3-packaging
py3-pip
py3-ply
py3-yaml
python3-dev
samurai
spirv-llvm-translator-dev
spirv-tools-dev
util-macros
vulkan-headers
zlib-dev
)
apk --no-cache add "${DEPS[@]}" "${EPHEMERAL[@]}"
pip3 install --break-system-packages sphinx===8.2.3 hawkmoth===0.19.0
. .gitlab-ci/container/container_pre_build.sh
. .gitlab-ci/container/install-meson.sh
EXTRA_MESON_ARGS='--prefix=/usr' \
. .gitlab-ci/container/build-wayland.sh
############### Uninstall the build software
# too many vendor binaries, just keep the ones we need
find /usr/share/clc \
\( -type f -o -type l \) \
! -name 'spirv-mesa3d-.spv' \
! -name 'spirv64-mesa3d-.spv' \
-delete
apk del "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh

View File

@@ -1,50 +0,0 @@
#!/usr/bin/env bash
# This is a ci-templates build script to generate a container for triggering LAVA jobs.
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# ALPINE_X86_64_LAVA_TRIGGER_TAG
# shellcheck disable=SC1091
set -e
. .gitlab-ci/setup-test-env.sh
set -o xtrace
uncollapsed_section_start alpine_setup "Base Alpine system setup"
# Ephemeral packages (installed for this script and removed again at the end)
EPHEMERAL=(
git
py3-pip
)
# We only need these very basic packages to run the LAVA jobs
DEPS=(
curl
python3
tar
zstd
)
apk --no-cache add "${DEPS[@]}" "${EPHEMERAL[@]}"
pip3 install --break-system-packages -r bin/ci/requirements-lava.txt
cp -Rp .gitlab-ci/lava /
cp -Rp .gitlab-ci/bin/*_logger.py /lava
cp -Rp .gitlab-ci/common/init-stage1.sh /lava
. .gitlab-ci/container/container_pre_build.sh
############### Uninstall the build software
uncollapsed_section_switch alpine_cleanup "Cleaning up base Alpine system"
apk del "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh
section_end alpine_cleanup

View File

@@ -1,32 +0,0 @@
#!/usr/bin/env bash
# This is a ci-templates build script to generate a container for LAVA SSH client.
# shellcheck disable=SC1091
set -e
. .gitlab-ci/setup-test-env.sh
set -o xtrace
EPHEMERAL=(
)
# We only need these very basic packages to run the tests.
DEPS=(
openssh-client # for ssh
iputils # for ping
bash
curl
)
apk --no-cache add "${DEPS[@]}" "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_pre_build.sh
############### Uninstall the build software
apk del "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh

View File

@@ -1,33 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2154 # arch is assigned in previous scripts
set -e
set -o xtrace
# Fetch the arm-built rootfs image and unpack it in our x86_64 container (saves
# network transfer, disk usage, and runtime on test jobs)
S3_PATH="https://${S3_HOST}/${S3_KERNEL_BUCKET}"
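# Prefer the rootfs published under the upstream repo; fall back to the one
# built in this project's own namespace when the upstream artifact is missing.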
if curl -L --retry 3 -f --retry-delay 10 -s --head "${S3_PATH}/${FDO_UPSTREAM_REPO}/${LAVA_DISTRIBUTION_TAG}/lava-rootfs.tar.zst"; then
ARTIFACTS_URL="${S3_PATH}/${FDO_UPSTREAM_REPO}/${LAVA_DISTRIBUTION_TAG}"
else
ARTIFACTS_URL="${S3_PATH}/${CI_PROJECT_PATH}/${LAVA_DISTRIBUTION_TAG}"
fi
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${ARTIFACTS_URL}"/lava-rootfs.tar.zst -o rootfs.tar.zst
mkdir -p /rootfs-"$arch"
tar -C /rootfs-"$arch" '--exclude=./dev/*' --zstd -xf rootfs.tar.zst
rm rootfs.tar.zst
if [[ $arch == "arm64" ]]; then
mkdir -p /baremetal-files
pushd /baremetal-files
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}"/arm64/Image
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}"/arm64/Image.gz
popd
fi

View File

@@ -1,67 +0,0 @@
#!/usr/bin/env bash
#
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_ANDROID_TAG
# This script runs in a container to:
# 1. Download the Android CTS (Compatibility Test Suite)
# 2. Filter out unneeded test modules
# 3. Compress and upload the stripped version to S3
# Note: The 'build-' prefix in the filename is only to make it compatible
# with the bin/ci/update_tag.py script.
set -euo pipefail
section_start android-cts "Downloading Android CTS"
# xtrace is getting lost with the section switching
set -x
# Do a very early check to make sure the tag is correct without the need of
# setting up the environment variables locally
ci_tag_build_time_check "ANDROID_CTS_TAG"
# List of all CTS modules we might want to run in CI
# This should be the union of all modules required by our CI jobs
# Specific modules to run are selected via the ${GPU_VERSION}-android-cts-include.txt files
ANDROID_CTS_MODULES=(
"CtsDeqpTestCases"
"CtsGraphicsTestCases"
"CtsNativeHardwareTestCases"
"CtsSkQPTestCases"
)
ANDROID_CTS_VERSION="${ANDROID_VERSION}_r1"
ANDROID_CTS_DEVICE_ARCH="x86"
# Download the stripped CTS from S3, because the CTS download from Google can take 20 minutes
CTS_FILENAME="android-cts-${ANDROID_CTS_VERSION}-linux_x86-${ANDROID_CTS_DEVICE_ARCH}"
ARTIFACT_PATH="${DATA_STORAGE_PATH}/android-cts/${ANDROID_CTS_TAG}.tar.zst"
if FOUND_ARTIFACT_URL="$(find_s3_project_artifact "${ARTIFACT_PATH}")"; then
echo "Found Android CTS at: ${FOUND_ARTIFACT_URL}"
curl-with-retry "${FOUND_ARTIFACT_URL}" | tar --zstd -x -C /
else
echo "No cached CTS found, downloading from Google and uploading to S3..."
curl-with-retry --remote-name "https://dl.google.com/dl/android/cts/${CTS_FILENAME}.zip"
# Disable zipbomb detection, because the CTS zip file is too big
# At least locally, it is detected as a zipbomb
UNZIP_DISABLE_ZIPBOMB_DETECTION=true \
unzip -q -d / "${CTS_FILENAME}.zip"
rm "${CTS_FILENAME}.zip"
# Keep only the interesting tests to save space
# shellcheck disable=SC2086 # we want word splitting
ANDROID_CTS_MODULES_KEEP_EXPRESSION=$(printf "%s|" "${ANDROID_CTS_MODULES[@]}" | sed -e 's/|$//g')
find /android-cts/testcases/ -mindepth 1 -type d | grep -v -E "$ANDROID_CTS_MODULES_KEEP_EXPRESSION" | xargs rm -rf
# Use a zstd-compressed tarball instead of zip: the compression ratio is almost
# the same, but extraction is faster, and LAVA overlays don't support zip compression.
tar --zstd -cf "${CTS_FILENAME}.tar.zst" /android-cts
ci-fairy s3cp --token-file "${S3_JWT_FILE}" "${CTS_FILENAME}.tar.zst" \
"https://${S3_BASE_PATH}/${CI_PROJECT_PATH}/${ARTIFACT_PATH}"
fi
section_end android-cts

View File

@@ -1,121 +0,0 @@
#!/usr/bin/env bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml and .gitlab-ci/container/gitlab-ci.yml tags:
# DEBIAN_BUILD_TAG
# ANDROID_LLVM_ARTIFACT_NAME
set -exu
# If CI vars are not set, assign an empty value; this prevents 'set -u' from failing
: "${CI:=}"
: "${CI_PROJECT_PATH:=}"
# Early check for required env variables, relies on `set -u`
: "$ANDROID_NDK_VERSION"
: "$ANDROID_SDK_VERSION"
: "$ANDROID_LLVM_VERSION"
: "$ANDROID_LLVM_ARTIFACT_NAME"
: "$S3_JWT_FILE"
: "$S3_HOST"
: "$S3_ANDROID_BUCKET"
# In CI, check that the auth file used later on is non-empty
if [ -n "$CI" ] && [ ! -s "${S3_JWT_FILE}" ]; then
echo "Error: ${S3_JWT_FILE} is empty." 1>&2
exit 1
fi
if curl -s -o /dev/null -I -L -f --retry 4 --retry-delay 15 "https://${S3_HOST}/${S3_ANDROID_BUCKET}/${CI_PROJECT_PATH}/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"; then
echo "Artifact ${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst already exists, skip re-building."
# Download prebuilt LLVM libraries for Android when they have not changed,
# to save some time
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-o "/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst" "https://${S3_HOST}/${S3_ANDROID_BUCKET}/${CI_PROJECT_PATH}/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"
tar -C / --zstd -xf "/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"
rm "/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"
exit
fi
# Ephemeral packages (installed for this script and removed again at the end)
EPHEMERAL=(
unzip
)
apt-get update
apt-get install -y --no-install-recommends --no-remove "${EPHEMERAL[@]}"
ANDROID_NDK="android-ndk-${ANDROID_NDK_VERSION}"
ANDROID_NDK_ROOT="/${ANDROID_NDK}"
if [ ! -d "$ANDROID_NDK_ROOT" ];
then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-o "${ANDROID_NDK}.zip" \
"https://dl.google.com/android/repository/${ANDROID_NDK}-linux.zip"
unzip -d / "${ANDROID_NDK}.zip" "$ANDROID_NDK/source.properties" "$ANDROID_NDK/build/cmake/*" "$ANDROID_NDK/toolchains/llvm/*"
rm "${ANDROID_NDK}.zip"
fi
if [ ! -d "/llvm-project" ];
then
mkdir "/llvm-project"
pushd "/llvm-project"
git init
git remote add origin https://github.com/llvm/llvm-project.git
git fetch --depth 1 origin "$ANDROID_LLVM_VERSION"
git checkout FETCH_HEAD
popd
fi
pushd "/llvm-project"
# Check out the intended version again, in case of a pre-existing full clone
git checkout "$ANDROID_LLVM_VERSION" || true
LLVM_INSTALL_PREFIX="/${ANDROID_LLVM_ARTIFACT_NAME}"
rm -rf build/
cmake -GNinja -S llvm -B build/ \
-DCMAKE_TOOLCHAIN_FILE="${ANDROID_NDK_ROOT}/build/cmake/android.toolchain.cmake" \
-DANDROID_ABI=x86_64 \
-DANDROID_PLATFORM="android-${ANDROID_SDK_VERSION}" \
-DANDROID_NDK="${ANDROID_NDK_ROOT}" \
-DCMAKE_ANDROID_ARCH_ABI=x86_64 \
-DCMAKE_ANDROID_NDK="${ANDROID_NDK_ROOT}" \
-DCMAKE_BUILD_TYPE=MinSizeRel \
-DCMAKE_SYSTEM_NAME=Android \
-DCMAKE_SYSTEM_VERSION="${ANDROID_SDK_VERSION}" \
-DCMAKE_INSTALL_PREFIX="${LLVM_INSTALL_PREFIX}" \
-DCMAKE_CXX_FLAGS="-march=x86-64 --target=x86_64-linux-android${ANDROID_SDK_VERSION} -fno-rtti" \
-DLLVM_HOST_TRIPLE="x86_64-linux-android${ANDROID_SDK_VERSION}" \
-DLLVM_TARGETS_TO_BUILD=X86 \
-DLLVM_BUILD_LLVM_DYLIB=OFF \
-DLLVM_BUILD_TESTS=OFF \
-DLLVM_BUILD_EXAMPLES=OFF \
-DLLVM_BUILD_DOCS=OFF \
-DLLVM_BUILD_TOOLS=OFF \
-DLLVM_ENABLE_RTTI=OFF \
-DLLVM_BUILD_INSTRUMENTED_COVERAGE=OFF \
-DLLVM_NATIVE_TOOL_DIR="${ANDROID_NDK_ROOT}/toolchains/llvm/prebuilt/linux-x86_64/bin" \
-DLLVM_ENABLE_PIC=False \
-DLLVM_OPTIMIZED_TABLEGEN=ON
ninja "-j${FDO_CI_CONCURRENT:-4}" -C build/ install
popd
rm -rf /llvm-project
tar --zstd -cf "${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst" "$LLVM_INSTALL_PREFIX"
# If run in CI, upload the tar.zst archive to S3 (to avoid rebuilding it when
# the version does not change) and then delete it.
# The file is not deleted for non-CI because it can be useful in local runs.
if [ -n "$CI" ]; then
ci-fairy s3cp --token-file "${S3_JWT_FILE}" "${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst" "https://${S3_HOST}/${S3_ANDROID_BUCKET}/${CI_PROJECT_PATH}/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"
rm "${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"
fi
apt-get purge -y "${EPHEMERAL[@]}"

View File

@@ -1,171 +0,0 @@
#!/usr/bin/env bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_ANDROID_TAG
# DEBIAN_TEST_GL_TAG
set -uex
section_start angle "Building ANGLE"
# Do a very early check to make sure the tag is correct without the need of
# setting up the environment variables locally
ci_tag_build_time_check "ANGLE_TAG"
ANGLE_REV="c39f4a5c553cbee39af8f866aa82a9ffa4f02f5b"
DEPOT_REV="5982a1aeb33dc36382ed8c62eddf52a6135e7dd3"
# Set ANGLE_ARCH based on DEBIAN_ARCH if it hasn't been explicitly defined
if [[ -z "${ANGLE_ARCH:-}" ]]; then
case "$DEBIAN_ARCH" in
amd64) ANGLE_ARCH=x64;;
arm64) ANGLE_ARCH=arm64;;
esac
fi
# DEPOT tools
mkdir /depot-tools
pushd /depot-tools
git init
git remote add origin https://chromium.googlesource.com/chromium/tools/depot_tools.git
git fetch --depth 1 origin "$DEPOT_REV"
git checkout FETCH_HEAD
export PATH=/depot-tools:$PATH
export DEPOT_TOOLS_UPDATE=0
popd
mkdir /angle-build
mkdir /angle
pushd /angle-build
git init
git remote add origin https://chromium.googlesource.com/angle/angle.git
git fetch --depth 1 origin "$ANGLE_REV"
git checkout FETCH_HEAD
echo "$ANGLE_REV" > /angle/version
GCLIENT_CUSTOM_VARS=()
GCLIENT_CUSTOM_VARS+=('--custom-var=angle_enable_cl=False')
GCLIENT_CUSTOM_VARS+=('--custom-var=angle_enable_cl_testing=False')
GCLIENT_CUSTOM_VARS+=('--custom-var=angle_enable_vulkan_validation_layers=False')
GCLIENT_CUSTOM_VARS+=('--custom-var=angle_enable_wgpu=False')
GCLIENT_CUSTOM_VARS+=('--custom-var=build_angle_deqp_tests=False')
GCLIENT_CUSTOM_VARS+=('--custom-var=build_angle_perftests=False')
if [[ "$ANGLE_TARGET" == "android" ]]; then
GCLIENT_CUSTOM_VARS+=('--custom-var=checkout_android=True')
fi
# source preparation
gclient config --name REPLACE-WITH-A-DOT --unmanaged \
"${GCLIENT_CUSTOM_VARS[@]}" \
https://chromium.googlesource.com/angle/angle.git
sed -e 's/REPLACE-WITH-A-DOT/./;' -i .gclient
sed -e 's|"custom_deps" : {|"custom_deps" : {\
"third_party/clspv/src": None,\
"third_party/dawn": None,\
"third_party/glmark2/src": None,\
"third_party/libjpeg_turbo": None,\
"third_party/llvm/src": None,\
"third_party/OpenCL-CTS/src": None,\
"third_party/SwiftShader": None,\
"third_party/VK-GL-CTS/src": None,\
"third_party/vulkan-validation-layers/src": None,|' -i .gclient
gclient sync --no-history -j"${FDO_CI_CONCURRENT:-4}"
mkdir -p out/Release
cat > out/Release/args.gn <<EOF
angle_assert_always_on=false
angle_build_all=false
angle_build_tests=false
angle_enable_cl=false
angle_enable_cl_testing=false
angle_enable_gl=false
angle_enable_gl_desktop_backend=false
angle_enable_null=false
angle_enable_swiftshader=false
angle_enable_trace=false
angle_enable_wgpu=false
angle_enable_vulkan=true
angle_enable_vulkan_api_dump_layer=false
angle_enable_vulkan_validation_layers=false
angle_has_frame_capture=false
angle_has_histograms=false
angle_has_rapidjson=false
angle_use_custom_libvulkan=false
build_angle_deqp_tests=false
dcheck_always_on=true
enable_expensive_dchecks=false
is_component_build=false
is_debug=false
target_cpu="${ANGLE_ARCH}"
target_os="${ANGLE_TARGET}"
treat_warnings_as_errors=false
EOF
case "$ANGLE_TARGET" in
linux) cat >> out/Release/args.gn <<EOF
angle_egl_extension="so.1"
angle_glesv2_extension="so.2"
use_custom_libcxx=false
custom_toolchain="//build/toolchain/linux/unbundle:default"
host_toolchain="//build/toolchain/linux/unbundle:default"
EOF
;;
android) cat >> out/Release/args.gn <<EOF
android_ndk_version="${ANDROID_NDK_VERSION}"
android64_ndk_api_level=${ANDROID_SDK_VERSION}
android32_ndk_api_level=${ANDROID_SDK_VERSION}
use_custom_libcxx=true
EOF
;;
*) echo "Unexpected ANGLE_TARGET value: $ANGLE_TARGET"; exit 1;;
esac
if [[ "$DEBIAN_ARCH" = "arm64" ]]; then
# We need to get an AArch64 sysroot - because ANGLE isn't great friends with
# system dependencies - but use the default system toolchain, because the
# 'arm64' toolchain you get from Google infrastructure is a cross-compiler
# from x86-64
build/linux/sysroot_scripts/install-sysroot.py --arch=arm64
fi
(
# The 'unbundled' toolchain configuration requires clang, and it also needs to
# be configured via environment variables.
export CC="clang-${LLVM_VERSION}"
export HOST_CC="$CC"
export CFLAGS="-Wno-unknown-warning-option"
export HOST_CFLAGS="$CFLAGS"
export CXX="clang++-${LLVM_VERSION}"
export HOST_CXX="$CXX"
export CXXFLAGS="-Wno-unknown-warning-option"
export HOST_CXXFLAGS="$CXXFLAGS"
export AR="ar"
export HOST_AR="$AR"
export NM="nm"
export HOST_NM="$NM"
export LDFLAGS="-fuse-ld=lld-${LLVM_VERSION} -lpthread -ldl"
export HOST_LDFLAGS="$LDFLAGS"
gn gen out/Release
# depot_tools overrides ninja with a version that doesn't work. We want
# ninja with FDO_CI_CONCURRENT anyway.
/usr/local/bin/ninja -C out/Release/ libEGL libGLESv1_CM libGLESv2
)
rm -f out/Release/libvulkan.so* out/Release/*.so*.TOC
cp out/Release/lib*.so* /angle/
if [[ "$ANGLE_TARGET" == "linux" ]]; then
ln -s libEGL.so.1 /angle/libEGL.so
ln -s libGLESv2.so.2 /angle/libGLESv2.so
fi
rm -rf out
popd
rm -rf /depot-tools
rm -rf /angle-build
section_end angle

View File

@@ -1,27 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BASE_TAG
set -uex
uncollapsed_section_start apitrace "Building apitrace"
APITRACE_VERSION="b6102d10960c9f43b1b473903fc67937dd19fb98"
git clone https://github.com/apitrace/apitrace.git --single-branch --no-checkout /apitrace
pushd /apitrace
git checkout "$APITRACE_VERSION"
git submodule update --init --depth 1 --recursive
cmake -S . -B _build -G Ninja -DCMAKE_BUILD_TYPE=Release -DENABLE_GUI=False -DENABLE_WAFFLE=on ${EXTRA_CMAKE_ARGS:-}
cmake --build _build --parallel --target apitrace eglretrace
mkdir build
cp _build/apitrace build
cp _build/eglretrace build
${STRIP_CMD:-strip} build/*
find . -not -path './build' -not -path './build/*' -delete
popd
section_end apitrace

View File

@@ -1,28 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
# FEDORA_X86_64_BUILD_TAG
uncollapsed_section_start bindgen "Building bindgen"
BINDGEN_VER=0.71.1
CBINDGEN_VER=0.26.0
# bindgen
RUSTFLAGS='-L native=/usr/local/lib' cargo install \
bindgen-cli --version ${BINDGEN_VER} \
--locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local
# cbindgen
RUSTFLAGS='-L native=/usr/local/lib' cargo install \
cbindgen --version ${CBINDGEN_VER} \
--locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local
section_end bindgen

View File

@@ -1,56 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BASE_TAG
# Do a very early check to make sure the tag is correct without the need of
# setting up the environment variables locally
ci_tag_build_time_check "CROSVM_TAG"
set -uex
section_start crosvm "Building crosvm"
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
CROSVM_VERSION=4a6b4316155742fbfa1be7087c2ee578cfee884d
git clone --single-branch -b main --no-checkout https://chromium.googlesource.com/crosvm/crosvm /platform/crosvm
pushd /platform/crosvm
git checkout "$CROSVM_VERSION"
git submodule update --init
VIRGLRENDERER_VERSION=06d43ce974b664f9dc521b706a0ad7f91dbf2866
rm -rf third_party/virglrenderer
git clone --single-branch -b main --no-checkout https://gitlab.freedesktop.org/virgl/virglrenderer.git third_party/virglrenderer
pushd third_party/virglrenderer
git checkout "$VIRGLRENDERER_VERSION"
meson setup build/ -D libdir=lib -D render-server-worker=process -D venus=true ${EXTRA_MESON_ARGS:-}
meson install -C build
popd
rm rust-toolchain
RUSTFLAGS='-L native=/usr/local/lib' cargo install \
bindgen-cli \
--locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local \
--version 0.71.1 \
${EXTRA_CARGO_ARGS:-}
CROSVM_USE_SYSTEM_MINIGBM=1 CROSVM_USE_SYSTEM_VIRGLRENDERER=1 RUSTFLAGS='-L native=/usr/local/lib' cargo install \
-j ${FDO_CI_CONCURRENT:-4} \
--locked \
--features 'default-no-sandbox gpu x virgl_renderer' \
--path . \
--root /usr/local \
${EXTRA_CARGO_ARGS:-}
popd
rm -rf /platform/crosvm
section_end crosvm

View File

@@ -1,99 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_ANDROID_TAG
# DEBIAN_BASE_TAG
set -uex
section_start deqp-runner "Building deqp-runner"
DEQP_RUNNER_VERSION=0.20.3
commits_to_backport=(
)
patch_files=(
)
DEQP_RUNNER_GIT_URL="${DEQP_RUNNER_GIT_URL:-https://gitlab.freedesktop.org/mesa/deqp-runner.git}"
if [ -n "${DEQP_RUNNER_GIT_TAG:-}" ]; then
DEQP_RUNNER_GIT_CHECKOUT="$DEQP_RUNNER_GIT_TAG"
elif [ -n "${DEQP_RUNNER_GIT_REV:-}" ]; then
DEQP_RUNNER_GIT_CHECKOUT="$DEQP_RUNNER_GIT_REV"
else
DEQP_RUNNER_GIT_CHECKOUT="v$DEQP_RUNNER_VERSION"
fi
BASE_PWD=$PWD
mkdir -p /deqp-runner
pushd /deqp-runner
mkdir deqp-runner-git
pushd deqp-runner-git
git init
git remote add origin "$DEQP_RUNNER_GIT_URL"
git fetch --depth 1 origin "$DEQP_RUNNER_GIT_CHECKOUT"
git checkout FETCH_HEAD
for commit in "${commits_to_backport[@]}"
do
PATCH_URL="https://gitlab.freedesktop.org/mesa/deqp-runner/-/commit/$commit.patch"
echo "Backport deqp-runner commit $commit from $PATCH_URL"
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 $PATCH_URL | git am
done
for patch in "${patch_files[@]}"
do
echo "Apply patch to deqp-runner from $patch"
git am "$BASE_PWD/.gitlab-ci/container/patches/$patch"
done
if [ -z "${RUST_TARGET:-}" ]; then
RUST_TARGET=""
fi
if [[ "$RUST_TARGET" != *-android ]]; then
# When the CC variable (/usr/lib/ccache/gcc) is set, the rust compiler uses
# it when cross-compiling for arm32, and the build fails for zsys-sys.
# So unset the CC variable when cross-compiling for arm32.
SAVEDCC=${CC:-}
if [ "$RUST_TARGET" = "armv7-unknown-linux-gnueabihf" ]; then
unset CC
fi
cargo install --locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local \
${EXTRA_CARGO_ARGS:-} \
--path .
CC=$SAVEDCC
else
cargo install --locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local --version 2.10.0 \
cargo-ndk
rustup target add $RUST_TARGET
RUSTFLAGS='-C target-feature=+crt-static' cargo ndk --target $RUST_TARGET build --release
mv target/$RUST_TARGET/release/deqp-runner /deqp-runner
cargo uninstall --locked \
--root /usr/local \
cargo-ndk
fi
popd
rm -rf deqp-runner-git
popd
# remove unused test runners to shrink images for the Mesa CI build (not kernel,
# which chooses its own deqp branch)
if [ -z "${DEQP_RUNNER_GIT_TAG:-}${DEQP_RUNNER_GIT_REV:-}" ]; then
rm -f /usr/local/bin/igt-runner
fi
section_end deqp-runner

View File

@@ -1,314 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_ANDROID_TAG
# DEBIAN_TEST_GL_TAG
# DEBIAN_TEST_VK_TAG
set -ue -o pipefail
# shellcheck disable=SC2153
deqp_api=${DEQP_API,,}
section_start deqp-$deqp_api "Building dEQP $DEQP_API"
set -x
# See `deqp_build_targets` below for which release is used to produce which
# binary. Unless this comment has bitrotten:
# - the commit from the main branch produces the deqp tools and `deqp-vk`,
# - the VK release produces `deqp-vk`,
# - the GL release produces `glcts`, and
# - the GLES release produces `deqp-gles*` and `deqp-egl`
DEQP_MAIN_COMMIT=9cc8e038994c32534b3d2c4ba88c1dc49ef53228
DEQP_VK_VERSION=1.4.1.1
DEQP_GL_VERSION=4.6.6.0
DEQP_GLES_VERSION=3.2.12.0
# Patches to VulkanCTS may come from commits in their repo (listed in
# cts_commits_to_backport) or patch files stored in our repo (in the patch
# directory `$OLDPWD/.gitlab-ci/container/patches/` listed in cts_patch_files).
# Both list variables would have comments explaining the reasons behind the
# patches.
# shellcheck disable=SC2034
main_cts_commits_to_backport=(
# If you find yourself wanting to add something in here, consider whether
# bumping DEQP_MAIN_COMMIT is not a better solution :)
)
# shellcheck disable=SC2034
main_cts_patch_files=(
)
# shellcheck disable=SC2034
vk_cts_commits_to_backport=(
# Stop querying device address from unbound buffers
046343f46f7d39d53b47842d7fd8ed3279528046
)
# shellcheck disable=SC2034
vk_cts_patch_files=(
)
# shellcheck disable=SC2034
gl_cts_commits_to_backport=(
# Add testing for GL_PRIMITIVES_SUBMITTED_ARB query.
e075ce73ddc5973aa46a5236c715bb281c9501fa
)
# shellcheck disable=SC2034
gl_cts_patch_files=(
build-deqp-gl_Build-Don-t-build-Vulkan-utilities-for-GL-builds.patch
build-deqp-gl_Revert-Add-missing-context-deletion.patch
build-deqp-gl_Revert-Fix-issues-with-GLX-reset-notification-strate.patch
build-deqp-gl_Revert-Fix-spurious-failures-when-using-a-config-wit.patch
)
# shellcheck disable=SC2034
# GLES builds also EGL
gles_cts_commits_to_backport=(
)
# shellcheck disable=SC2034
gles_cts_patch_files=(
build-deqp-gl_Build-Don-t-build-Vulkan-utilities-for-GL-builds.patch
build-deqp-gl_Revert-Add-missing-context-deletion.patch
build-deqp-gl_Revert-Fix-issues-with-GLX-reset-notification-strate.patch
build-deqp-gl_Revert-Fix-spurious-failures-when-using-a-config-wit.patch
)
### Careful editing anything below this line
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
# shellcheck disable=SC2153
case "${DEQP_API}" in
tools) DEQP_VERSION="$DEQP_MAIN_COMMIT";;
*-main) DEQP_VERSION="$DEQP_MAIN_COMMIT";;
VK) DEQP_VERSION="vulkan-cts-$DEQP_VK_VERSION";;
GL) DEQP_VERSION="opengl-cts-$DEQP_GL_VERSION";;
GLES) DEQP_VERSION="opengl-es-cts-$DEQP_GLES_VERSION";;
*) echo "Unexpected DEQP_API value: $DEQP_API"; exit 1;;
esac
mkdir -p /VK-GL-CTS
pushd /VK-GL-CTS
[ -e .git ] || {
git init
git remote add origin https://github.com/KhronosGroup/VK-GL-CTS.git
}
git fetch --depth 1 origin "$DEQP_VERSION"
git checkout FETCH_HEAD
DEQP_COMMIT=$(git rev-parse FETCH_HEAD)
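# For a pinned main-branch commit, ask the GitHub compare API for the merge
# base with main; if it differs from the pinned commit, the commit is not
# actually reachable from the main branch.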
if [ "$DEQP_VERSION" = "$DEQP_MAIN_COMMIT" ]; then
merge_base="$(curl-with-retry -s https://api.github.com/repos/KhronosGroup/VK-GL-CTS/compare/main...$DEQP_MAIN_COMMIT | jq -r .merge_base_commit.sha)"
if [[ "$merge_base" != "$DEQP_MAIN_COMMIT" ]]; then
echo "VK-GL-CTS commit $DEQP_MAIN_COMMIT is not a commit from the main branch."
exit 1
fi
fi
mkdir -p /deqp-$deqp_api
if [ "$DEQP_VERSION" = "$DEQP_MAIN_COMMIT" ]; then
prefix="main"
else
prefix="$deqp_api"
fi
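# Select the per-API backport and patch lists defined above via bash indirect
# expansion, e.g. vk_cts_commits_to_backport[@] when prefix=vk.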
cts_commits_to_backport="${prefix}_cts_commits_to_backport[@]"
for commit in "${!cts_commits_to_backport}"
do
PATCH_URL="https://github.com/KhronosGroup/VK-GL-CTS/commit/$commit.patch"
echo "Apply patch to ${DEQP_API} CTS from $PATCH_URL"
curl-with-retry $PATCH_URL | GIT_COMMITTER_DATE=$(LC_TIME=C date -d@0) git am -
done
cts_patch_files="${prefix}_cts_patch_files[@]"
for patch in "${!cts_patch_files}"
do
echo "Apply patch to ${DEQP_API} CTS from $patch"
GIT_COMMITTER_DATE=$(LC_TIME=C date -d@0) git am < $OLDPWD/.gitlab-ci/container/patches/$patch
done
{
if [ "$DEQP_VERSION" = "$DEQP_MAIN_COMMIT" ]; then
commit_desc=$(git show --no-patch --format='commit %h on %ci' --abbrev=10 "$DEQP_COMMIT")
echo "dEQP $DEQP_API at $commit_desc"
else
echo "dEQP $DEQP_API version $DEQP_VERSION"
fi
if [ "$(git rev-parse HEAD)" != "$DEQP_COMMIT" ]; then
echo "The following local patches are applied on top:"
git log --reverse --oneline "$DEQP_COMMIT".. --format='- %s'
fi
} > /deqp-$deqp_api/deqp-$deqp_api-version
# --insecure is due to SSL cert failures hitting sourceforge for zlib and
# libpng (sigh). The archives get their checksums checked anyway, and git
# always goes through ssh or https.
python3 external/fetch_sources.py --insecure
case "${DEQP_API}" in
VK-main)
# Video tests rely on external files
python3 external/fetch_video_decode_samples.py
python3 external/fetch_video_encode_samples.py
;;
esac
if [[ "$DEQP_API" = tools ]]; then
# Save the testlog stylesheets:
cp doc/testlog-stylesheet/testlog.{css,xsl} /deqp-$deqp_api
fi
popd
deqp_build_targets=()
case "${DEQP_API}" in
VK|VK-main)
deqp_build_targets+=(deqp-vk)
;;
GL)
deqp_build_targets+=(glcts)
;;
GLES)
deqp_build_targets+=(deqp-gles{2,3,31})
deqp_build_targets+=(glcts) # needed for gles*-khr tests
# deqp-egl also comes from this build, but it is handled separately below.
;;
tools)
deqp_build_targets+=(testlog-to-xml)
deqp_build_targets+=(testlog-to-csv)
deqp_build_targets+=(testlog-to-junit)
;;
esac
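# Join the selected build targets with ';' so CMake receives them as a single
# semicolon-separated SELECTED_BUILD_TARGETS list.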
OLD_IFS="$IFS"
IFS=";"
CMAKE_SBT="${deqp_build_targets[*]}"
IFS="$OLD_IFS"
pushd /deqp-$deqp_api
if [ "${DEQP_API}" = 'GLES' ]; then
if [ "${DEQP_TARGET}" = 'android' ]; then
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=android \
-DCMAKE_BUILD_TYPE=Release \
-DSELECTED_BUILD_TARGETS="deqp-egl" \
${EXTRA_CMAKE_ARGS:-}
ninja modules/egl/deqp-egl
mv modules/egl/deqp-egl{,-android}
else
# When including EGL/X11 testing, do that build first and save off its
# deqp-egl binary.
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=x11_egl_glx \
-DCMAKE_BUILD_TYPE=Release \
-DSELECTED_BUILD_TARGETS="deqp-egl" \
${EXTRA_CMAKE_ARGS:-}
ninja modules/egl/deqp-egl
mv modules/egl/deqp-egl{,-x11}
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=wayland \
-DCMAKE_BUILD_TYPE=Release \
-DSELECTED_BUILD_TARGETS="deqp-egl" \
${EXTRA_CMAKE_ARGS:-}
ninja modules/egl/deqp-egl
mv modules/egl/deqp-egl{,-wayland}
fi
fi
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=${DEQP_TARGET} \
-DCMAKE_BUILD_TYPE=Release \
-DSELECTED_BUILD_TARGETS="${CMAKE_SBT}" \
${EXTRA_CMAKE_ARGS:-}
# Make sure `default` doesn't silently stop detecting one of the platforms we care about
if [ "${DEQP_TARGET}" = 'default' ]; then
grep -q DEQP_SUPPORT_WAYLAND=1 build.ninja
grep -q DEQP_SUPPORT_X11=1 build.ninja
grep -q DEQP_SUPPORT_XCB=1 build.ninja
fi
ninja "${deqp_build_targets[@]}"
if [ "$DEQP_API" != tools ]; then
# Copy out the mustpass lists we want.
mkdir -p mustpass
if [ "${DEQP_API}" = 'VK' ] || [ "${DEQP_API}" = 'VK-main' ]; then
for mustpass in $(< /VK-GL-CTS/external/vulkancts/mustpass/main/vk-default.txt) ; do
cat /VK-GL-CTS/external/vulkancts/mustpass/main/$mustpass \
>> mustpass/vk-main.txt
done
fi
if [ "${DEQP_API}" = 'GL' ]; then
cp \
/VK-GL-CTS/external/openglcts/data/gl_cts/data/mustpass/gl/khronos_mustpass/main/*-main.txt \
mustpass/
cp \
/VK-GL-CTS/external/openglcts/data/gl_cts/data/mustpass/gl/khronos_mustpass_single/main/*-single.txt \
mustpass/
fi
if [ "${DEQP_API}" = 'GLES' ]; then
cp \
/VK-GL-CTS/external/openglcts/data/gl_cts/data/mustpass/gles/aosp_mustpass/main/*.txt \
mustpass/
cp \
/VK-GL-CTS/external/openglcts/data/gl_cts/data/mustpass/egl/aosp_mustpass/main/egl-main.txt \
mustpass/
cp \
/VK-GL-CTS/external/openglcts/data/gl_cts/data/mustpass/gles/khronos_mustpass/main/*-main.txt \
mustpass/
fi
# Compress the caselists, since Vulkan's in particular are gigantic; higher
# compression levels provide no real measurable benefit.
zstd -f -1 --rm mustpass/*.txt
fi
if [ "$DEQP_API" = tools ]; then
# Save *some* executor utils, but otherwise strip things down
# to reduce the deqp build size:
mv executor/testlog-to-* .
rm -rf executor
fi
# Remove other mustpass files, since we saved off the ones we wanted to convenient locations above.
rm -rf assets/**/mustpass/
rm -rf external/**/mustpass/
rm -rf external/vulkancts/modules/vulkan/vk-main*
rm -rf external/vulkancts/modules/vulkan/vk-default
rm -rf external/openglcts/modules/cts-runner
rm -rf modules/internal
rm -rf execserver
rm -rf framework
find . -depth \( -iname '*cmake*' -o -name '*ninja*' -o -name '*.o' -o -name '*.a' \) -exec rm -rf {} \;
if [ "${DEQP_API}" = 'VK' ] || [ "${DEQP_API}" = 'VK-main' ]; then
${STRIP_CMD:-strip} external/vulkancts/modules/vulkan/deqp-vk
fi
if [ "${DEQP_API}" = 'GL' ] || [ "${DEQP_API}" = 'GLES' ]; then
${STRIP_CMD:-strip} external/openglcts/modules/glcts
fi
if [ "${DEQP_API}" = 'GLES' ]; then
${STRIP_CMD:-strip} modules/*/deqp-*
fi
du -sh ./*
popd
section_end deqp-$deqp_api

View File

@@ -1,19 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
set -uex
uncollapsed_section_start directx-headers "Building directx-headers"
git clone https://github.com/microsoft/DirectX-Headers -b v1.614.1 --depth 1
pushd DirectX-Headers
meson setup build --backend=ninja --buildtype=release -Dbuild-test=false ${EXTRA_MESON_ARGS:-}
meson install -C build
popd
rm -rf DirectX-Headers
section_end directx-headers

View File

@@ -1,52 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC1091 # The relative paths in this file only become valid at runtime.
# shellcheck disable=SC2034 # Variables are used in scripts called from here
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_VIDEO_TAG
# Install fluster in /fluster.
set -uex
section_start fluster "Installing Fluster"
# Do a very early check to make sure the tag is correct without the need of
# setting up the environment variables locally
ci_tag_build_time_check "FLUSTER_TAG"
FLUSTER_REVISION="e997402978f62428fffc8e5a4a709690d9ca9bc5"
git clone https://github.com/fluendo/fluster.git --single-branch --no-checkout
pushd fluster || exit
git checkout "${FLUSTER_REVISION}"
popd || exit
ARTIFACT_PATH="${DATA_STORAGE_PATH}/fluster/${FLUSTER_TAG}/vectors.tar.zst"
if FOUND_ARTIFACT_URL="$(find_s3_project_artifact "${ARTIFACT_PATH}")"; then
echo "Found fluster vectors at: ${FOUND_ARTIFACT_URL}"
mv fluster/ /
curl-with-retry "${FOUND_ARTIFACT_URL}" | tar --zstd -x -C /
else
echo "No cached vectors found, rebuilding..."
# Download the necessary vectors: H264, H265 and VP9
# When updating FLUSTER_REVISION, make sure to update the vectors if necessary or
# fluster-runner will report Missing results.
fluster/fluster.py download -j ${FDO_CI_CONCURRENT:-4} \
JVT-AVC_V1 JVT-FR-EXT JVT-MVC JVT-SVC_V1 \
JCT-VC-3D-HEVC JCT-VC-HEVC_V1 JCT-VC-MV-HEVC JCT-VC-RExt JCT-VC-SCC JCT-VC-SHVC \
VP9-TEST-VECTORS-HIGH VP9-TEST-VECTORS
# Build fluster vectors archive and upload it
tar --zstd -cf "vectors.tar.zst" fluster/resources/
ci-fairy s3cp --token-file "${S3_JWT_FILE}" "vectors.tar.zst" \
"https://${S3_BASE_PATH}/${CI_PROJECT_PATH}/${ARTIFACT_PATH}"
mv fluster/ /
fi
section_end fluster

View File

@@ -1,22 +0,0 @@
#!/bin/bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_VK_TAG
set -ex
uncollapsed_section_start fossilize "Building fossilize"
git clone https://github.com/ValveSoftware/Fossilize.git
cd Fossilize
git checkout b43ee42bbd5631ea21fe9a2dee4190d5d875c327
git submodule update --init
mkdir build
cd build
cmake -S .. -B . -G Ninja -DCMAKE_BUILD_TYPE=Release
ninja -C . install
cd ../..
rm -rf Fossilize
section_end fossilize

View File

@@ -1,23 +0,0 @@
#!/usr/bin/env bash
set -ex
uncollapsed_section_start gfxreconstruct "Building gfxreconstruct"
GFXRECONSTRUCT_VERSION=761837794a1e57f918a85af7000b12e531b178ae
git clone https://github.com/LunarG/gfxreconstruct.git \
--single-branch \
-b master \
--no-checkout \
/gfxreconstruct
pushd /gfxreconstruct
git checkout "$GFXRECONSTRUCT_VERSION"
git submodule update --init
git submodule update
cmake -S . -B _build -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX:PATH=/gfxreconstruct/build -DBUILD_WERROR=OFF
cmake --build _build --parallel --target tools/{replay,info}/install/strip
find . -not -path './build' -not -path './build/*' -delete
popd
section_end gfxreconstruct

View File

@@ -1,32 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC1091 # the path is created by the script
set -ex
uncollapsed_section_start kdl "Building kdl"
KDL_REVISION="cbbe5fd54505fd03ee34f35bfd16794f0c30074f"
KDL_CHECKOUT_DIR="/tmp/ci-kdl.git"
mkdir -p ${KDL_CHECKOUT_DIR}
pushd ${KDL_CHECKOUT_DIR}
git init
git remote add origin https://gitlab.freedesktop.org/gfx-ci/ci-kdl.git
git fetch --depth 1 origin ${KDL_REVISION}
git checkout FETCH_HEAD
popd
# Run venv in a subshell, so we don't accidentally leak the venv state into
# calling scripts
(
python3 -m venv /ci-kdl
source /ci-kdl/bin/activate &&
pushd ${KDL_CHECKOUT_DIR} &&
pip install -r requirements.txt &&
pip install . &&
popd
)
rm -rf ${KDL_CHECKOUT_DIR}
section_end kdl

View File

@@ -1,35 +0,0 @@
#!/usr/bin/env bash
set -uex
uncollapsed_section_start libclc "Building libclc"
export LLVM_CONFIG="llvm-config-${LLVM_VERSION:?"llvm unset!"}"
LLVM_TAG="llvmorg-15.0.7"
$LLVM_CONFIG --version
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
git clone \
https://github.com/llvm/llvm-project \
--depth 1 \
-b "${LLVM_TAG}" \
/llvm-project
mkdir /libclc
pushd /libclc
cmake -S /llvm-project/libclc -B . -G Ninja -DLLVM_CONFIG="$LLVM_CONFIG" -DLIBCLC_TARGETS_TO_BUILD="spirv-mesa3d-;spirv64-mesa3d-" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr -DLLVM_SPIRV=/usr/bin/llvm-spirv
ninja
ninja install
popd
# workaround for cmake vs. Debian packaging.
mkdir -p /usr/lib/clc
ln -s /usr/share/clc/spirv64-mesa3d-.spv /usr/lib/clc/
ln -s /usr/share/clc/spirv-mesa3d-.spv /usr/lib/clc/
du -sh ./*
rm -rf /libclc /llvm-project
section_end libclc

View File

@@ -1,21 +0,0 @@
#!/usr/bin/env bash
# Script used for Android and Fedora builds (Debian builds get their libdrm version
# from https://gitlab.freedesktop.org/gfx-ci/ci-deb-repo - see PKG_REPO_REV)
# shellcheck disable=SC2086 # we want word splitting
set -uex
uncollapsed_section_start libdrm "Building libdrm"
export LIBDRM_VERSION=libdrm-2.4.122
curl -L -O --retry 4 -f --retry-all-errors --retry-delay 60 \
https://dri.freedesktop.org/libdrm/"$LIBDRM_VERSION".tar.xz
tar -xvf "$LIBDRM_VERSION".tar.xz && rm "$LIBDRM_VERSION".tar.xz
cd "$LIBDRM_VERSION"
meson setup build -D vc4=disabled -D freedreno=disabled -D etnaviv=disabled ${EXTRA_MESON_ARGS:-}
meson install -C build
cd ..
rm -rf "$LIBDRM_VERSION"
section_end libdrm

View File

@@ -1,30 +0,0 @@
#!/usr/bin/env bash
set -ex
uncollapsed_section_start llvm-spirv "Building LLVM-SPIRV-Translator"
if [ "${LLVM_VERSION:?llvm version not set}" -ge 18 ]; then
VER="${LLVM_VERSION}.1.0"
else
VER="${LLVM_VERSION}.0.0"
fi
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "https://github.com/KhronosGroup/SPIRV-LLVM-Translator/archive/refs/tags/v${VER}.tar.gz"
tar -xvf "v${VER}.tar.gz" && rm "v${VER}.tar.gz"
mkdir "SPIRV-LLVM-Translator-${VER}/build"
pushd "SPIRV-LLVM-Translator-${VER}/build"
cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr
ninja
ninja install
# For some reason llvm-spirv is not installed by default
ninja llvm-spirv
cp tools/llvm-spirv/llvm-spirv /usr/bin/
popd
du -sh "SPIRV-LLVM-Translator-${VER}"
rm -rf "SPIRV-LLVM-Translator-${VER}"
section_end llvm-spirv

View File

@@ -1,31 +0,0 @@
#!/usr/bin/env bash
set -ex
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# ALPINE_X86_64_BUILD_TAG
# DEBIAN_BASE_TAG
# DEBIAN_BUILD_TAG
# FEDORA_X86_64_BUILD_TAG
uncollapsed_section_start mold "Building mold"
MOLD_VERSION="2.32.0"
git clone -b v"$MOLD_VERSION" --single-branch --depth 1 https://github.com/rui314/mold.git
pushd mold
cmake -DCMAKE_BUILD_TYPE=Release -D BUILD_TESTING=OFF -D MOLD_LTO=ON
cmake --build . --parallel "${FDO_CI_CONCURRENT:-4}"
cmake --install . --strip
# Always use mold from now on
find /usr/bin \( -name '*-ld' -o -name 'ld' \) \
-exec ln -sf /usr/local/bin/ld.mold {} \; \
-exec ls -l {} +
popd
rm -rf mold
section_end mold

View File

@@ -1,41 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -uex
section_start piglit "Building piglit"
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_GL_TAG
# DEBIAN_TEST_VK_TAG
# Do a very early check to make sure the tag is correct without the need of
# setting up the environment variables locally
ci_tag_build_time_check "PIGLIT_TAG"
REV="a0a27e528f643dfeb785350a1213bfff09681950"
git clone https://gitlab.freedesktop.org/mesa/piglit.git --single-branch --no-checkout /piglit
pushd /piglit
git checkout "$REV"
patch -p1 <$OLDPWD/.gitlab-ci/piglit/disable-vs_in.diff
cmake -S . -B . -G Ninja -DCMAKE_BUILD_TYPE=Release $PIGLIT_OPTS ${EXTRA_CMAKE_ARGS:-}
ninja ${PIGLIT_BUILD_TARGETS:-}
find . -depth \( -name .git -o -name '*ninja*' -o -iname '*cmake*' -o -name '*.[chao]' \) \
! -name 'include_test.h' -exec rm -rf {} \;
rm -rf target_api
if [ "${PIGLIT_BUILD_TARGETS:-}" = "piglit_replayer" ]; then
find . -depth \
! -regex "^\.$" \
! -regex "^\.\/piglit.*" \
! -regex "^\.\/framework.*" \
! -regex "^\.\/bin$" \
! -regex "^\.\/bin\/replayer\.py" \
! -regex "^\.\/templates.*" \
! -regex "^\.\/tests$" \
! -regex "^\.\/tests\/replay\.py" \
-exec rm -rf {} \; 2>/dev/null
fi
popd
section_end piglit

View File

@@ -1,38 +0,0 @@
#!/bin/bash
# Note that this script is not actually "building" rust, but build- is the
# convention for the shared helpers for putting stuff in our containers.
set -ex
section_start rust "Building Rust toolchain"
# Pick a specific snapshot from rustup so the compiler doesn't drift on us.
RUST_VERSION=1.81.0-2024-09-05
# For rust in Mesa, we use rustup to install. This lets us pick an arbitrary
# version of the compiler, rather than whatever the container's Debian comes
# with.
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
--proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- \
--default-toolchain $RUST_VERSION \
--profile minimal \
-y
# Make rustup tools available in the PATH environment variable
# shellcheck disable=SC1091
. "$HOME/.cargo/env"
rustup component add clippy rustfmt
# Set up a config script for cross compiling -- cargo needs your system cc for
# linking in cross builds, but doesn't know what you want to use for system cc.
cat > "$HOME/.cargo/config" <<EOF
[target.armv7-unknown-linux-gnueabihf]
linker = "arm-linux-gnueabihf-gcc"
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
EOF
section_end rust

View File

@@ -1,18 +0,0 @@
#!/usr/bin/env bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
set -ex
uncollapsed_section_start shader-db "Building shader-db"
pushd /usr/local
git clone https://gitlab.freedesktop.org/mesa/shader-db.git --depth 1
rm -rf shader-db/.git
cd shader-db
make
popd
section_end shader-db

View File

@@ -1,104 +0,0 @@
#!/usr/bin/env bash
# SPDX-License-Identifier: MIT
#
# Copyright © 2022 Collabora Limited
# Author: Guilherme Gallo <guilherme.gallo@collabora.com>
#
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_GL_TAG
set -uex
uncollapsed_section_start skqp "Building SkQP"
SKQP_BRANCH=android-cts-12.1_r5
SCRIPT_DIR="$(pwd)/.gitlab-ci/container"
SKQP_PATCH_DIR="${SCRIPT_DIR}/patches"
BASE_ARGS_GN_FILE="${SCRIPT_DIR}/build-skqp_base.gn"
case "$DEBIAN_ARCH" in
amd64)
SKQP_ARCH=x64
;;
armhf)
SKQP_ARCH=arm
;;
arm64)
SKQP_ARCH=arm64
;;
esac
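# SKIA_DIR can be overridden by the caller; otherwise use a throwaway temp dir.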
SKIA_DIR=${SKIA_DIR:-$(mktemp -d)}
SKQP_OUT_DIR=${SKIA_DIR}/out/${SKQP_ARCH}
SKQP_INSTALL_DIR=${SKQP_INSTALL_DIR:-/skqp}
SKQP_ASSETS_DIR="${SKQP_INSTALL_DIR}/assets"
SKQP_BINARIES=(skqp list_gpu_unit_tests list_gms)
create_gn_args() {
# gn can be configured to cross-compile skia and its tools
# It is important to set the target_cpu to guarantee the intended target
# machine
cp "${BASE_ARGS_GN_FILE}" "${SKQP_OUT_DIR}"/args.gn
echo "target_cpu = \"${SKQP_ARCH}\"" >> "${SKQP_OUT_DIR}"/args.gn
}
download_skia_source() {
if [ -z ${SKIA_DIR+x} ]
then
return 1
fi
# Skia cloned from https://android.googlesource.com/platform/external/skqp
# has all needed assets tracked on git-fs
SKQP_REPO=https://android.googlesource.com/platform/external/skqp
git clone --branch "${SKQP_BRANCH}" --depth 1 "${SKQP_REPO}" "${SKIA_DIR}"
}
download_skia_source
pushd "${SKIA_DIR}"
# Apply all skqp patches for Mesa CI
cat "${SKQP_PATCH_DIR}"/build-skqp_*.patch |
patch -p1
# Hack for skqp: expose the versioned LLVM clang/clang++ binaries as plain 'clang'/'clang++'
pushd /usr/bin/
ln -s "../lib/llvm-${LLVM_VERSION}/bin/clang" clang
ln -s "../lib/llvm-${LLVM_VERSION}/bin/clang++" clang++
popd
# Fetch the build tools needed to build skia/skqp.
# Basically, it clones repositories at the commit SHAs listed in
# ${SKIA_DIR}/DEPS.
python3 tools/git-sync-deps
mkdir -p "${SKQP_OUT_DIR}"
mkdir -p "${SKQP_INSTALL_DIR}"
create_gn_args
# Build and install skqp binaries
bin/gn gen "${SKQP_OUT_DIR}"
for BINARY in "${SKQP_BINARIES[@]}"
do
/usr/bin/ninja -C "${SKQP_OUT_DIR}" "${BINARY}"
# Strip binary, since gn is not stripping it even when `is_debug == false`
${STRIP_CMD:-strip} "${SKQP_OUT_DIR}/${BINARY}"
install -m 0755 "${SKQP_OUT_DIR}/${BINARY}" "${SKQP_INSTALL_DIR}"
done
# Move assets to the target directory, which will reside in rootfs.
mv platform_tools/android/apps/skqp/src/main/assets/ "${SKQP_ASSETS_DIR}"
popd
rm -Rf "${SKIA_DIR}"
set +ex
section_end skqp

View File

@@ -1,64 +0,0 @@
cc = "clang"
cxx = "clang++"
extra_cflags = [
"-Wno-error",
"-DSK_ENABLE_DUMP_GPU",
"-DSK_BUILD_FOR_SKQP"
]
extra_cflags_cc = [
"-Wno-error",
# The skqp build process produces a lot of compilation warnings; silence
# most of them to remove clutter and keep the CI job log from exceeding its
# maximum size
# GCC flags
"-Wno-redundant-move",
"-Wno-suggest-override",
"-Wno-class-memaccess",
"-Wno-deprecated-copy",
"-Wno-uninitialized",
# Clang flags
"-Wno-macro-redefined",
"-Wno-anon-enum-enum-conversion",
"-Wno-suggest-destructor-override",
"-Wno-return-std-move-in-c++11",
"-Wno-extra-semi-stmt",
"-Wno-reserved-identifier",
"-Wno-bitwise-instead-of-logical",
"-Wno-reserved-identifier",
"-Wno-psabi",
"-Wno-unused-but-set-variable",
"-Wno-sizeof-array-div",
"-Wno-string-concatenation",
"-Wno-unsafe-buffer-usage",
"-Wno-switch-default",
"-Wno-cast-function-type-strict",
"-Wno-format",
"-Wno-enum-constexpr-conversion",
]
cc_wrapper = "ccache"
is_debug = false
skia_enable_fontmgr_android = false
skia_enable_fontmgr_empty = true
skia_enable_pdf = false
skia_enable_skottie = false
skia_skqp_global_error_tolerance = 8
skia_tools_require_resources = true
skia_use_dng_sdk = false
skia_use_expat = true
skia_use_icu = false
skia_use_libheif = false
skia_use_lua = false
skia_use_piex = false
skia_use_vulkan = true
target_os = "linux"

View File

@@ -1,30 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_VIDEO_TAG
set -uex
section_start va-tools "Building va-tools"
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
git clone \
https://github.com/intel/libva-utils.git \
-b 2.18.1 \
--depth 1 \
/va-utils
pushd /va-utils
# libva in Debian 11 is too old. TODO: when this PR gets in, refer to the patch.
curl --fail -L https://github.com/intel/libva-utils/pull/329.patch | git am
meson setup build -D tests=true -Dprefix=/va ${EXTRA_MESON_ARGS:-}
meson install -C build
popd
rm -rf /va-utils
section_end va-tools

View File

@@ -1,67 +0,0 @@
#!/bin/bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_VK_TAG
set -ex
section_start vkd3d-proton "Building vkd3d-proton"
# Do a very early check to make sure the tag is correct without the need of
# setting up the environment variables locally
ci_tag_build_time_check "VKD3D_PROTON_TAG"
VKD3D_PROTON_COMMIT="6be781076617cb2cb3038710618acc3b57a674db"
VKD3D_PROTON_DST_DIR="/vkd3d-proton-tests"
VKD3D_PROTON_SRC_DIR="/vkd3d-proton-src"
VKD3D_PROTON_BUILD_DIR="/vkd3d-proton-build"
VKD3D_PROTON_WINE_DIR="/vkd3d-proton-wine64"
VKD3D_PROTON_S3_ARTIFACT="vkd3d-proton.tar.zst"
if [ ! -d "$VKD3D_PROTON_WINE_DIR" ]; then
echo "Fatal: Directory '$VKD3D_PROTON_WINE_DIR' does not exist. Aborting."
exit 1
fi
git clone https://github.com/HansKristian-Work/vkd3d-proton.git --single-branch -b master --no-checkout "$VKD3D_PROTON_SRC_DIR"
pushd "$VKD3D_PROTON_SRC_DIR"
git checkout "$VKD3D_PROTON_COMMIT"
git submodule update --init --recursive
git submodule update --recursive
meson setup \
-D enable_tests=true \
--buildtype release \
--prefix "$VKD3D_PROTON_DST_DIR" \
--strip \
--libdir "lib" \
"$VKD3D_PROTON_BUILD_DIR/build"
ninja -C "$VKD3D_PROTON_BUILD_DIR/build" install
install -m755 -t "${VKD3D_PROTON_DST_DIR}/" "$VKD3D_PROTON_BUILD_DIR/build/tests/d3d12"
mkdir "$VKD3D_PROTON_DST_DIR/tests"
cp \
"tests/test-runner.sh" \
"tests/d3d12_tests.h" \
"$VKD3D_PROTON_DST_DIR/tests/"
popd
# Archive and upload vkd3d-proton for use as a LAVA overlay, if the archive doesn't exist yet
ARTIFACT_PATH="${DATA_STORAGE_PATH}/vkd3d-proton/${VKD3D_PROTON_TAG}/${CI_JOB_NAME}/${VKD3D_PROTON_S3_ARTIFACT}"
if FOUND_ARTIFACT_URL="$(find_s3_project_artifact "${ARTIFACT_PATH}")"; then
echo "Found vkd3d-proton at: ${FOUND_ARTIFACT_URL}, skipping upload"
else
echo "Uploaded vkd3d-proton not found, reuploading..."
tar --zstd -cf "$VKD3D_PROTON_S3_ARTIFACT" -C / "${VKD3D_PROTON_DST_DIR#/}" "${VKD3D_PROTON_WINE_DIR#/}"
ci-fairy s3cp --token-file "${S3_JWT_FILE}" "$VKD3D_PROTON_S3_ARTIFACT" \
"https://${S3_BASE_PATH}/${CI_PROJECT_PATH}/${ARTIFACT_PATH}"
rm "$VKD3D_PROTON_S3_ARTIFACT"
fi
rm -rf "$VKD3D_PROTON_BUILD_DIR"
rm -rf "$VKD3D_PROTON_SRC_DIR"
section_end vkd3d-proton


@@ -1,24 +0,0 @@
#!/usr/bin/env bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_GL_TAG
set -uex
uncollapsed_section_start vulkan-validation "Building Vulkan validation layers"
VALIDATION_TAG="snapshot-2025wk15"
git clone -b "$VALIDATION_TAG" --single-branch --depth 1 https://github.com/KhronosGroup/Vulkan-ValidationLayers.git
pushd Vulkan-ValidationLayers
# we don't need to build SPIRV-Tools tools
sed -i scripts/known_good.json -e 's/SPIRV_SKIP_EXECUTABLES=OFF/SPIRV_SKIP_EXECUTABLES=ON/'
python3 scripts/update_deps.py --dir external --config release --generator Ninja --optional tests
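# update_deps.py is expected to generate external/helper.cmake, which the
# cmake invocation below preloads (-C) so the build picks up the freshly
# fetched dependencies.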
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr -DBUILD_TESTS=OFF -DBUILD_WERROR=OFF -C external/helper.cmake -S . -B build
ninja -C build -j"${FDO_CI_CONCURRENT:-4}"
cmake --install build --strip
popd
rm -rf Vulkan-ValidationLayers
section_end vulkan-validation


@@ -1,44 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
set -uex
uncollapsed_section_start wayland "Building Wayland"
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# ALPINE_X86_64_BUILD_TAG
# DEBIAN_BASE_TAG
# DEBIAN_BUILD_TAG
# DEBIAN_TEST_ANDROID_TAG
# DEBIAN_TEST_GL_TAG
# DEBIAN_TEST_VK_TAG
# FEDORA_X86_64_BUILD_TAG
export LIBWAYLAND_VERSION="1.24.0"
export WAYLAND_PROTOCOLS_VERSION="1.41"
git clone https://gitlab.freedesktop.org/wayland/wayland
cd wayland
git checkout "$LIBWAYLAND_VERSION"
# Build the scanner first in case we're a cross build
# Note the lack of EXTRA_MESON_ARGS here. This is always a native build
meson setup -Dtests=false -Ddocumentation=false -Ddtd_validation=false -Dlibraries=false -Dscanner=true _scanner
meson install -C _scanner
# Now build libwayland using the given scanner
meson setup -Ddocumentation=false -Ddtd_validation=false -Dlibraries=true -Dscanner=false _build ${EXTRA_MESON_ARGS:-}
meson install -C _build
cd ..
rm -rf wayland
git clone https://gitlab.freedesktop.org/wayland/wayland-protocols
cd wayland-protocols
git checkout "$WAYLAND_PROTOCOLS_VERSION"
meson setup -Dtests=false _build ${EXTRA_MESON_ARGS:-}
meson install -C _build
cd ..
rm -rf wayland-protocols
section_end wayland


@@ -1,52 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
set -uex
section_start weston "Building Weston"
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BASE_TAG
export WESTON_VERSION="14.0.1"
git clone https://gitlab.freedesktop.org/wayland/weston
cd weston
git checkout "$WESTON_VERSION"
meson setup \
-Dbackend-drm=false \
-Dbackend-drm-screencast-vaapi=false \
-Dbackend-headless=true \
-Dbackend-pipewire=false \
-Dbackend-rdp=false \
-Dscreenshare=false \
-Dbackend-vnc=false \
-Dbackend-wayland=false \
-Dbackend-x11=false \
-Dbackend-default=headless \
-Drenderer-gl=true \
-Dxwayland=true \
-Dsystemd=false \
-Dremoting=false \
-Dpipewire=false \
-Dshell-desktop=true \
-Dshell-fullscreen=false \
-Dshell-ivi=false \
-Dshell-kiosk=false \
-Dcolor-management-lcms=false \
-Dimage-jpeg=false \
-Dimage-webp=false \
-Dtools= \
-Ddemo-clients=false \
-Dsimple-clients= \
-Dresize-pool=false \
-Dwcap-decode=false \
-Dtests=false \
-Ddoc=false \
_build ${EXTRA_MESON_ARGS:-}
meson install -C _build
cd ..
rm -rf weston
section_end weston


@@ -1,31 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
set -uex
uncollapsed_section_start xwayland "Building XWayland"
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BASE_TAG
#
export XORGPROTO_VERSION="xorgproto-2024.1"
export XWAYLAND_VERSION="xwayland-24.1.8"
git clone https://gitlab.freedesktop.org/xorg/proto/xorgproto
cd xorgproto
git checkout "$XORGPROTO_VERSION"
meson setup _build ${EXTRA_MESON_ARGS:-}
meson install -C _build
cd ..
rm -rf xorgproto
git clone https://gitlab.freedesktop.org/xorg/xserver
cd xserver
git checkout "$XWAYLAND_VERSION"
meson setup _build ${EXTRA_MESON_ARGS:-}
meson install -C _build
cd ..
rm -rf xserver
section_end xwayland


@@ -1,24 +0,0 @@
#!/usr/bin/env bash
# When changing this file, all the linux tags in
# .gitlab-ci/image-tags.yml need updating.
set -eu
# Early check for required env variables, relies on `set -u`
: "$S3_JWT_FILE_SCRIPT"
if [ -z "$1" ]; then
echo "usage: $(basename "$0") <CONTAINER_CI_JOB_NAME>" 1>&2
exit 1
fi
CONTAINER_CI_JOB_NAME="$1"
# Tasks to perform before executing a container job's script
eval "$S3_JWT_FILE_SCRIPT"
unset S3_JWT_FILE_SCRIPT
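# S3_JWT_FILE is expected to be created by the eval above; make sure the
# token file is cleaned up however the job script exits.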
trap 'rm -f ${S3_JWT_FILE}' EXIT INT TERM
bash ".gitlab-ci/container/${CONTAINER_CI_JOB_NAME}.sh"


@@ -1,12 +0,0 @@
#!/usr/bin/env bash
if test -f /etc/debian_version; then
apt-get autoremove -y --purge
fi
# Clean up any build cache
rm -rf /root/.cache
if test -x /usr/bin/ccache; then
ccache --show-stats
fi


@@ -1,66 +0,0 @@
#!/bin/sh
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
if test -x /usr/bin/ccache; then
if test -f /etc/debian_version; then
CCACHE_PATH=/usr/lib/ccache
elif test -f /etc/alpine-release; then
CCACHE_PATH=/usr/lib/ccache/bin
else
CCACHE_PATH=/usr/lib64/ccache
fi
# Common setup among container builds before we get to building code.
export CCACHE_COMPILERCHECK=content
export CCACHE_COMPRESS=true
export CCACHE_DIR="/cache/$CI_PROJECT_NAME/ccache"
export PATH="$CCACHE_PATH:$PATH"
# CMake ignores $PATH, so we have to force CC/CXX to the ccache versions.
export CC="${CCACHE_PATH}/gcc"
export CXX="${CCACHE_PATH}/g++"
ccache --show-stats
fi
# Make a wrapper script for ninja to always include the -j flags
{
echo '#!/bin/sh -x'
# shellcheck disable=SC2016
echo '/usr/bin/ninja -j${FDO_CI_CONCURRENT:-4} "$@"'
} > /usr/local/bin/ninja
chmod +x /usr/local/bin/ninja
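# The wrapper shadows /usr/bin/ninja (since /usr/local/bin normally precedes
# it on PATH) and forwards all arguments, so every ninja invocation is capped
# at FDO_CI_CONCURRENT jobs.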
# Set MAKEFLAGS so that all make invocations in container builds include the
# flags (doesn't apply to non-container builds, but we don't run make there)
export MAKEFLAGS="-j${FDO_CI_CONCURRENT:-4}"
# Ensure that rust tools are in PATH if they exist
CARGO_ENV_FILE="$HOME/.cargo/env"
if [ -f "$CARGO_ENV_FILE" ]; then
# shellcheck disable=SC1090
source "$CARGO_ENV_FILE"
fi
ci_tag_early_checks() {
# Runs the first part of the build script to perform the tag check only
uncollapsed_section_switch "ci_tag_early_checks" "Ensuring component versions match declared tags in CI builds"
echo "[Structured Tagging] Checking components: ${CI_BUILD_COMPONENTS}"
# shellcheck disable=SC2086
for component in ${CI_BUILD_COMPONENTS}; do
bin/ci/update_tag.py --check ${component} || exit 1
done
echo "[Structured Tagging] Components check done"
section_end "ci_tag_early_checks"
}
# Check if each declared tag component is up to date before building
if [ -n "${CI_BUILD_COMPONENTS:-}" ]; then
# Remove any duplicates by splitting on whitespace, sorting, then joining back
CI_BUILD_COMPONENTS="$(echo "${CI_BUILD_COMPONENTS}" | xargs -n1 | sort -u | xargs)"
ci_tag_early_checks
fi


@@ -1,37 +0,0 @@
#!/bin/bash
ndk=$1
arch=$2
cpu_family=$3
cpu=$4
cross_file="/cross_file-$arch.txt"
sdk_version=$5
# armv7 has the toolchain split between two names.
arch2=${6:-$2}
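# The file written below is later passed to meson as
#   --cross-file=/cross_file-$arch.txt
# (see the libdrm cross builds in the Android container build script).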
# Note that we disable C++ exceptions, because Mesa doesn't use exceptions,
# and allowing it in code generation means we get unwind symbols that break
# the libEGL and driver symbol tests.
cat > "$cross_file" <<EOF
[binaries]
ar = '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-ar'
c = ['ccache', '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch2}${sdk_version}-clang', '-fno-exceptions', '-fno-unwind-tables', '-fno-asynchronous-unwind-tables']
cpp = ['ccache', '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch2}${sdk_version}-clang++', '-fno-exceptions', '-fno-unwind-tables', '-fno-asynchronous-unwind-tables', '--start-no-unused-arguments', '-static-libstdc++', '--end-no-unused-arguments']
c_ld = 'lld'
cpp_ld = 'lld'
strip = '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-strip'
pkg-config = ['/usr/bin/pkgconf']
[host_machine]
system = 'android'
cpu_family = '$cpu_family'
cpu = '$cpu'
endian = 'little'
[properties]
needs_exe_wrapper = true
pkg_config_libdir = '/usr/local/lib/${arch2}/pkgconfig/:/${ndk}/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/lib/${arch2}/pkgconfig/'
EOF


@@ -1,40 +0,0 @@
#!/bin/sh
# shellcheck disable=SC2086 # we want word splitting
# Makes a .pc file in the Android NDK for meson to find its libraries.
set -ex
ndk="$1"
pc="$2"
cflags="$3"
libs="$4"
version="$5"
sdk_version="$6"
sysroot=$ndk/toolchains/llvm/prebuilt/linux-x86_64/sysroot
for arch in \
x86_64-linux-android \
i686-linux-android \
aarch64-linux-android \
arm-linux-androideabi; do
pcdir=$sysroot/usr/lib/$arch/pkgconfig
mkdir -p $pcdir
cat >$pcdir/$pc <<EOF
prefix=$sysroot
exec_prefix=$sysroot
libdir=$sysroot/usr/lib/$arch/$sdk_version
sharedlibdir=$sysroot/usr/lib/$arch
includedir=$sysroot/usr/include
Name: zlib
Description: zlib compression library
Version: $version
Requires:
Libs: -L$sysroot/usr/lib/$arch/$sdk_version $libs
Cflags: -I$sysroot/usr/include $cflags
EOF
done


@@ -1,54 +0,0 @@
#!/bin/bash
arch=$1
cross_file="/cross_file-$arch.txt"
meson env2mfile --cross --debarch "$arch" -o "$cross_file"
# Explicitly set ccache path for cross compilers
sed -i "s|/usr/bin/\([^-]*\)-linux-gnu\([^-]*\)-g|/usr/lib/ccache/\\1-linux-gnu\\2-g|g" "$cross_file"
# Rely on qemu-user being configured in binfmt_misc on the host
# shellcheck disable=SC1003 # somehow this sed doesn't seem to work for me locally
sed -i -e '/\[properties\]/a\' -e "needs_exe_wrapper = False" "$cross_file"
# Add a line for rustc, which meson env2mfile is missing.
cc=$(sed -n "s|^c\s*=\s*\[?'\(.*\)'\]?|\1|p" < "$cross_file")
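# $cc (the target C compiler from the cross file) is reused below as rustc's linker.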
if [[ "$arch" = "arm64" ]]; then
rust_target=aarch64-unknown-linux-gnu
elif [[ "$arch" = "armhf" ]]; then
rust_target=armv7-unknown-linux-gnueabihf
elif [[ "$arch" = "i386" ]]; then
rust_target=i686-unknown-linux-gnu
elif [[ "$arch" = "ppc64el" ]]; then
rust_target=powerpc64le-unknown-linux-gnu
elif [[ "$arch" = "s390x" ]]; then
rust_target=s390x-unknown-linux-gnu
else
echo "Needs rustc target mapping"
fi
# shellcheck disable=SC1003 # somehow this sed doesn't seem to work for me locally
sed -i -e '/\[binaries\]/a\' -e "rust = ['rustc', '--target=$rust_target', '-C', 'linker=$cc']" "$cross_file"
# Set up cmake cross compile toolchain file for dEQP builds
toolchain_file="/toolchain-$arch.cmake"
if [[ "$arch" = "arm64" ]]; then
GCC_ARCH="aarch64-linux-gnu"
DE_CPU="DE_CPU_ARM_64"
elif [[ "$arch" = "armhf" ]]; then
GCC_ARCH="arm-linux-gnueabihf"
DE_CPU="DE_CPU_ARM"
fi
if [[ -n "$GCC_ARCH" ]]; then
{
echo "set(CMAKE_SYSTEM_NAME Linux)";
echo "set(CMAKE_SYSTEM_PROCESSOR arm)";
echo "set(CMAKE_C_COMPILER /usr/lib/ccache/$GCC_ARCH-gcc)";
echo "set(CMAKE_CXX_COMPILER /usr/lib/ccache/$GCC_ARCH-g++)";
echo "set(CMAKE_CXX_FLAGS_INIT \"-Wno-psabi\")"; # makes ABI warnings quiet for ARMv7
echo "set(ENV{PKG_CONFIG} \"/usr/bin/$GCC_ARCH-pkgconf\")";
echo "set(DE_CPU $DE_CPU)";
} > "$toolchain_file"
fi


@@ -1,94 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
set -e
. .gitlab-ci/setup-test-env.sh
set -o xtrace
export DEBIAN_FRONTEND=noninteractive
: "${LLVM_VERSION:?llvm version not set!}"
# Ephemeral packages (installed for this script and removed again at the end)
EPHEMERAL=(
)
DEPS=(
"crossbuild-essential-$arch"
"pkgconf:$arch"
"libasan8:$arch"
"libdrm-dev:$arch"
"libelf-dev:$arch"
"libexpat1-dev:$arch"
"libffi-dev:$arch"
"libpciaccess-dev:$arch"
"libstdc++6:$arch"
"libvulkan-dev:$arch"
"libx11-dev:$arch"
"libx11-xcb-dev:$arch"
"libxcb-dri2-0-dev:$arch"
"libxcb-dri3-dev:$arch"
"libxcb-glx0-dev:$arch"
"libxcb-present-dev:$arch"
"libxcb-randr0-dev:$arch"
"libxcb-shm0-dev:$arch"
"libxcb-xfixes0-dev:$arch"
"libxdamage-dev:$arch"
"libxext-dev:$arch"
"libxrandr-dev:$arch"
"libxshmfence-dev:$arch"
"libxxf86vm-dev:$arch"
)
dpkg --add-architecture $arch
echo "deb [trusted=yes] https://gitlab.freedesktop.org/gfx-ci/ci-deb-repo/-/raw/${PKG_REPO_REV}/ ${FDO_DISTRIBUTION_VERSION%-*} main" | tee /etc/apt/sources.list.d/gfx-ci_.list
apt-get update
apt-get install -y --no-remove "${DEPS[@]}" "${EPHEMERAL[@]}" \
$EXTRA_LOCAL_PACKAGES
if [[ $arch != "armhf" ]]; then
# We don't need clang-format for the crossbuilds, but the installed amd64
# package will conflict with libclang. Uninstall clang-format (and its
# problematic dependency) to fix.
apt-get remove -y "clang-format-${LLVM_VERSION}" "libclang-cpp${LLVM_VERSION}" \
"llvm-${LLVM_VERSION}-runtime" "llvm-${LLVM_VERSION}-linker-tools"
# llvm-*-tools:$arch conflicts with python3:amd64. Install dependencies only
# with apt-get, then force-install llvm-*-{dev,tools}:$arch with dpkg to get
# around this.
apt-get install -y --no-remove --no-install-recommends \
"libclang-cpp${LLVM_VERSION}:$arch" \
"libgcc-s1:$arch" \
"libtinfo-dev:$arch" \
"libz3-dev:$arch" \
"llvm-${LLVM_VERSION}:$arch" \
zlib1g
fi
. .gitlab-ci/container/create-cross-file.sh $arch
. .gitlab-ci/container/container_pre_build.sh
# dependencies where we want a specific version
MULTIARCH_PATH=$(dpkg-architecture -A $arch -qDEB_TARGET_MULTIARCH)
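# dpkg-architecture yields the Debian multiarch tuple (e.g. aarch64-linux-gnu
# for arm64), so libraries built below install into the same lib/<triplet>
# layout that the Debian cross packages use.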
export EXTRA_MESON_ARGS="--cross-file=/cross_file-${arch}.txt -D libdir=lib/${MULTIARCH_PATH}"
. .gitlab-ci/container/build-wayland.sh
. .gitlab-ci/container/build-directx-headers.sh
apt-get purge -y "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh
# This needs to be done after container_post_build.sh, or apt-get breaks in there
if [[ $arch != "armhf" ]]; then
apt-get download llvm-"${LLVM_VERSION}"-{dev,tools}:"$arch"
dpkg -i --force-depends llvm-"${LLVM_VERSION}"-*_"${arch}".deb
rm llvm-"${LLVM_VERSION}"-*_"${arch}".deb
fi


@@ -1,118 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
set -e
. .gitlab-ci/setup-test-env.sh
set -x
EPHEMERAL=(
autoconf
rdfind
unzip
)
apt-get install -y --no-remove "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_pre_build.sh
# Fetch the NDK and extract just the toolchain we want.
ndk="android-ndk-${ANDROID_NDK_VERSION}"
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-o $ndk.zip https://dl.google.com/android/repository/$ndk-linux.zip
unzip -d / $ndk.zip "$ndk/source.properties" "$ndk/build/cmake/*" "$ndk/toolchains/llvm/*"
rm $ndk.zip
# Since it was packed as a zip file, symlinks/hardlinks got turned into
# duplicate files. Turn them into hardlinks to save on container space.
rdfind -makehardlinks true -makeresultsfile false /${ndk}/
# Drop some large tools we won't use in this build.
find /${ndk}/ -type f \( -iname '*clang-check*' -o -iname '*clang-tidy*' -o -iname '*lldb*' \) -exec rm -f {} \;
sh .gitlab-ci/container/create-android-ndk-pc.sh /$ndk zlib.pc "" "-lz" "1.2.3" $ANDROID_SDK_VERSION
sh .gitlab-ci/container/create-android-cross-file.sh /$ndk x86_64-linux-android x86_64 x86_64 $ANDROID_SDK_VERSION
sh .gitlab-ci/container/create-android-cross-file.sh /$ndk i686-linux-android x86 x86 $ANDROID_SDK_VERSION
sh .gitlab-ci/container/create-android-cross-file.sh /$ndk aarch64-linux-android aarch64 armv8 $ANDROID_SDK_VERSION
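# arm needs the extra trailing argument because its compiler prefix
# (armv7a-linux-androideabi) differs from the toolchain arch name; see
# create-android-cross-file.sh.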
sh .gitlab-ci/container/create-android-cross-file.sh /$ndk arm-linux-androideabi arm armv7hl $ANDROID_SDK_VERSION armv7a-linux-androideabi
# Build libdrm for the host (Debian) environment, so it's available for
# binaries we'll run as part of the build process
. .gitlab-ci/container/build-libdrm.sh
# Build libdrm for the NDK environment, so it's available when building for
# the Android target
for arch in \
x86_64-linux-android \
i686-linux-android \
aarch64-linux-android \
arm-linux-androideabi ; do
EXTRA_MESON_ARGS="--cross-file=/cross_file-$arch.txt --libdir=lib/$arch -Dnouveau=disabled -Dintel=disabled" \
. .gitlab-ci/container/build-libdrm.sh
done
rm -rf $LIBDRM_VERSION
export LIBELF_VERSION=libelf-0.8.13
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O https://fossies.org/linux/misc/old/$LIBELF_VERSION.tar.gz
# Not 100% sure who runs the mirror above so be extra careful
if ! echo "4136d7b4c04df68b686570afa26988ac ${LIBELF_VERSION}.tar.gz" | md5sum -c -; then
echo "Checksum failed"
exit 1
fi
tar -xf ${LIBELF_VERSION}.tar.gz
cd $LIBELF_VERSION
# Work around a bug in the original configure not enabling __LIBELF64.
autoreconf
for arch in \
x86_64-linux-android \
i686-linux-android \
aarch64-linux-android \
arm-linux-androideabi ; do
ccarch=${arch}
if [ "${arch}" == 'arm-linux-androideabi' ]
then
ccarch=armv7a-linux-androideabi
fi
export AR=/${ndk}/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-ar
export CC=/${ndk}/toolchains/llvm/prebuilt/linux-x86_64/bin/${ccarch}${ANDROID_SDK_VERSION}-clang
export CXX=/${ndk}/toolchains/llvm/prebuilt/linux-x86_64/bin/${ccarch}${ANDROID_SDK_VERSION}-clang++
export LD=/${ndk}/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch}-ld
export RANLIB=/${ndk}/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-ranlib
# The configure script doesn't know about Android, but it doesn't seem to
# make real use of the host triple anyway
./configure --host=x86_64-linux-gnu --disable-nls --disable-shared \
--libdir=/usr/local/lib/${arch}
make install
make distclean
unset AR
unset CC
unset CXX
unset LD
unset RANLIB
done
cd ..
rm -rf $LIBELF_VERSION
# Build LLVM libraries for Android only if necessary, uploading a copy to S3
# to avoid rebuilding it in a future run if the version does not change.
bash .gitlab-ci/container/build-android-x86_64-llvm.sh
apt-get purge -y "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh


@@ -1,4 +0,0 @@
#!/usr/bin/env bash
DEBIAN_ARCH=armhf \
. .gitlab-ci/container/debian/test-base.sh


@@ -1,4 +0,0 @@
#!/usr/bin/env bash
DEBIAN_ARCH="armhf" \
. .gitlab-ci/container/debian/test-gl.sh


@@ -1,4 +0,0 @@
#!/usr/bin/env bash
DEBIAN_ARCH="armhf" \
. .gitlab-ci/container/debian/test-vk.sh


@@ -1,122 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
set -e
. .gitlab-ci/setup-test-env.sh
set -o xtrace
: "${LLVM_VERSION:?llvm version not set}"
apt-get -y install ca-certificates curl gnupg2
sed -i -e 's/http:\/\/deb/https:\/\/deb/g' /etc/apt/sources.list.d/*
echo "deb [trusted=yes] https://gitlab.freedesktop.org/gfx-ci/ci-deb-repo/-/raw/${PKG_REPO_REV}/ ${FDO_DISTRIBUTION_VERSION%-*} main" | tee /etc/apt/sources.list.d/gfx-ci_.list
. .gitlab-ci/container/debian/maybe-add-llvm-repo.sh
# Ephemeral packages (installed for this script and removed again at the end)
EPHEMERAL=(
libssl-dev
)
DEPS=(
apt-utils
android-libext4-utils
autoconf
automake
bc
bison
ccache
cmake
curl
"clang-${LLVM_VERSION}"
fastboot
flatbuffers-compiler
flex
g++
git
glslang-tools
kmod
"libclang-${LLVM_VERSION}-dev"
"libclang-cpp${LLVM_VERSION}-dev"
"libclang-common-${LLVM_VERSION}-dev"
libasan8
libdrm-dev
libelf-dev
libexpat1-dev
libflatbuffers-dev
"libllvm${LLVM_VERSION}"
libvulkan-dev
libx11-dev
libx11-xcb-dev
libxcb-dri2-0-dev
libxcb-dri3-dev
libxcb-glx0-dev
libxcb-present-dev
libxcb-randr0-dev
libxcb-shm0-dev
libxcb-xfixes0-dev
libxdamage-dev
libxext-dev
libxrandr-dev
libxshmfence-dev
libxtensor-dev
libxxf86vm-dev
libwayland-dev
libwayland-egl-backend-dev
"llvm-${LLVM_VERSION}-dev"
ninja-build
openssh-server
pkgconf
python3-mako
python3-pil
python3-pip
python3-pycparser
python3-requests
python3-setuptools
python3-venv
shellcheck
u-boot-tools
xz-utils
yamllint
zlib1g-dev
zstd
)
apt-get update
apt-get -y install "${DEPS[@]}" "${EPHEMERAL[@]}"
# Needed for ci-fairy s3cp
pip3 install --break-system-packages "ci-fairy[s3] @ git+https://gitlab.freedesktop.org/freedesktop/ci-templates@$MESA_TEMPLATES_COMMIT"
pip3 install --break-system-packages -r bin/ci/test/requirements.txt
. .gitlab-ci/container/install-meson.sh
arch=armhf
. .gitlab-ci/container/cross_build.sh
. .gitlab-ci/container/container_pre_build.sh
. .gitlab-ci/container/build-mold.sh
. .gitlab-ci/container/build-wayland.sh
. .gitlab-ci/container/build-llvm-spirv.sh
. .gitlab-ci/container/build-libclc.sh
. .gitlab-ci/container/build-rust.sh
. .gitlab-ci/container/build-bindgen.sh
apt-get purge -y "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh


@@ -1,4 +0,0 @@
#!/usr/bin/env bash
DEBIAN_ARCH=arm64 \
. .gitlab-ci/container/debian/test-base.sh


@@ -1,4 +0,0 @@
#!/usr/bin/env bash
DEBIAN_ARCH="arm64" \
. .gitlab-ci/container/debian/test-gl.sh


@@ -1,4 +0,0 @@
#!/usr/bin/env bash
DEBIAN_ARCH="arm64" \
. .gitlab-ci/container/debian/test-vk.sh


@@ -1,5 +0,0 @@
#!/usr/bin/env bash
set -e
arch=armhf . .gitlab-ci/container/debian/baremetal_arm_test.sh


@@ -1,5 +0,0 @@
#!/usr/bin/env bash
set -e
arch=arm64 . .gitlab-ci/container/debian/baremetal_arm_test.sh

Some files were not shown because too many files have changed in this diff.