Compare commits


275 Commits

Author SHA1 Message Date
Carl Worth
0da7d59ac2 docs: Add MD5 sums for the 10.0.5 release.
These can be generated only after the release has been tarred up and tagged.
2014-04-18 17:02:17 -07:00
Carl Worth
c941373838 docs: Add release notes for 10.0.5 2014-04-18 16:51:02 -07:00
Carl Worth
c78a676998 Update version to 10.0.5
In preparation for the 10.0.5 release, of course.
2014-04-18 16:48:06 -07:00
Eric Anholt
5e718c11c6 i965: Fix buffer overruns in MSAA MCS buffer clearing.
This manifested as rendering failures or sometimes GPU hangs in
compositors when they accidentally got MSAA visuals due to a bug in the X
Server.  Today we decided that the problem in compositors was equivalent
to a corruption bug we'd noticed recently in resizing MSAA-visual
glxgears, and debugging got a lot easier.

When we allocate our MCS MT, libdrm takes the size we request, aligns it
to Y tile size (blowing it up from 300x300=90000 bytes to 384*320=122880
bytes, 30 pages), then puts it into a power-of-two-sized BO (131072 bytes,
32 pages).  Because it's Y tiled, we attach a 384-byte-stride fence to it.
When we memset by the BO size in Mesa, between bytes 122880 and 131072 the
data gets stored to the first 20 or so scanlines of each of the 3 tiled
pages in that row, even though only 2 of those pages were allocated by
libdrm.  In the glxgears case, the missing 3rd page happened to
consistently be the static VBO that got mapped right after the first MCS
allocation, so corruption only appeared once window resize made us throw
out the old MCS and then allocate the same BO to back the new MCS.

Instead, just memset the amount of data we actually asked libdrm to
allocate for, which will be smaller (more efficient) and not overrun.
Thanks go to Kenneth for doing most of the hard debugging to eliminate a
lot of the search space for the bug.
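
The overrun amount follows directly from the sizes quoted above; a quick
stand-alone calculation (illustrative only, not driver code):

    #include <stdio.h>

    int main(void)
    {
        /* Sizes from the commit message. */
        unsigned requested = 300 * 300;  /*  90000 bytes asked of libdrm    */
        unsigned tiled     = 384 * 320;  /* 122880 bytes after Y-tile align */
        unsigned bo_size   = 131072;     /* power-of-two BO libdrm creates  */

        printf("memset by bo_size overruns the tiled allocation by %u bytes (%u pages)\n",
               bo_size - tiled, (bo_size - tiled) / 4096);
        return 0;
    }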

Cc: "10.0 10.1" <mesa-stable@lists.freedesktop.org>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=77207
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 7ae870211d)
2014-04-16 10:21:09 -07:00
Paul Berry
b2f14a6284 i965/gen7: Prefer vertical alignment of 4 when possible.
Gen6+ allows for color buffers to use a vertical alignment of either 4
or 2.  Previously we defaulted to 2.  This may have caused problems on
Gen7 because Y-tiled render targets are not allowed to use a vertical
alignment of 2.

This patch changes the vertical alignment to 4 on Gen7, except for the
few formats where a vertical alignment of 2 is required.

Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 6b40dd17cf)
2014-04-16 10:10:19 -07:00
Emil Velikov
498853b9fd glx: drop obsolete _XUnlock_Mutex in __glXInitialize error path
With commit 1f1928db001 (glx: Drop _Xglobal_lock while we create and
initialize glx display) we've split the big _Xglobal_lock handling in
a more fine-grained manner.

Unfortunately we forgot to drop the unlock_mutex on the error paths,
leading to undefined behaviour as the mutex is already unlocked.

Cc: Kristian Høgsberg <krh@bitplanet.net>
Cc: "9.2 10.0 10.1"  <mesa-stable@lists.freedesktop.org>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit f9832f960f)
2014-04-14 15:05:36 -07:00
Brian Paul
45fd1d336a svga: move LIST_INITHEAD(dirty_buffers) earlier in svga_context_create()
Fixes a crash in svga_context_flush_buffers() if we use the 'draw' module
for AA lines (when the device doesn't support that feature).  We need to
initialize this list before we setup the swtnl pieces.

Found/fixed by Charmaine Lee.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
(cherry picked from commit e853ade544)

Conflicts:
	src/gallium/drivers/svga/svga_context.c
2014-04-14 15:05:35 -07:00
Courtney Goeltzenleuchter
cbaaf8fe42 mesa: add bounds checking to eliminate buffer overrun
Decompressing ETC2 textures was causing an intermittent segfault
by copying the resulting 4x4 texel block to the destination texture
regardless of the size of the destination texture. Issue found
via application crash in GLBenchmark 3.0's Manhattan test.
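
A minimal sketch of the kind of clamp involved (illustrative only, not
Mesa's actual unpack code; the names here are made up):

    #include <stdint.h>

    /* Copy a decoded 4x4 block, but never write past the destination's
     * real width/height. */
    static void put_block(uint32_t *dst, unsigned tex_w, unsigned tex_h,
                          unsigned bx0, unsigned by0, const uint32_t decoded[16])
    {
        for (unsigned y = 0; y < 4 && by0 + y < tex_h; y++)
            for (unsigned x = 0; x < 4 && bx0 + x < tex_w; x++)
                dst[(by0 + y) * tex_w + (bx0 + x)] = decoded[y * 4 + x];
    }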

v2: add a more detailed comment. Compute limit outside inner loops.
v3: add bugzilla reference
v4: Correct cc syntax in commit log
v5: really grab the right patch

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=74988
Cc: "9.2 10.0 10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com> [v1, suggested v2-3]
(cherry picked from commit cb4ad13685)
2014-04-14 15:05:35 -07:00
Brian Paul
7b580a567f svga: replace sampler assertion with conditional
For TEX instructions, the set of samplers and sampler views should
be consistent.  The XA state tracker sometimes passes an inconsistent
set of samplers and sampler views.  Rather than assert and die, issue
a warning.

v2: add debugging code to detect inconsistent state.
v3: also check for null sampler in svga_state_tss.c

Cc: "10.0" "10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
(cherry picked from commit 9bb2ec6fd1)

Conflicts:
	src/gallium/drivers/svga/svga_state_fs.c
2014-04-14 15:05:35 -07:00
Ilia Mirkin
69a777dd21 nouveau: fix firmware check on nvd7/nvd9
The kernel driver expects the class to be based on chipset generation
rather than VP generation. Make sure to pass 90b1 for NVDX chipsets
instead of 95b1.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=77102
Fixes: 40dd777b33
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: "10.1 10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@ubunutu.com>
(cherry picked from commit 89c5b56be6)
2014-04-14 15:05:35 -07:00
Johannes Nixdorf
76f33938dd configure.ac: fix the detection of expat with pkg-config
The pkg-config module was called "EXPAT" instead of "expat" in
PKG_CHECK_EXISTS. This appears to have happened because the wrong
argument (the "EXPAT" variable prefix rather than the module name)
was copied from the PKG_CHECK_MODULES call.

Cc: "10.0" "10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
(cherry picked from commit 476db98e03)
2014-04-14 15:05:35 -07:00
Brian Paul
68fef3983e cso: fix sampler view count in cso_set_sampler_views()
We want to call pipe->set_sampler_views() with count being the
maximum of the old number of sampler views and the new number.
This makes sure we null-out any old sampler views.

We already do the same thing for sampler states in single_sampler_done().
Fixes some assertions seen in the VMware driver with XA tracker.
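
The idea, as a simplified sketch (not the actual cso_context code; the
names are illustrative):

    #include <stddef.h>

    /* Bind with count = max(old, new) so trailing slots are explicitly
     * set to NULL rather than left holding stale views. */
    static void set_views(void **slots, void **new_views,
                          unsigned old_count, unsigned new_count)
    {
        unsigned bind_count = new_count > old_count ? new_count : old_count;
        for (unsigned i = 0; i < bind_count; i++)
            slots[i] = (i < new_count) ? new_views[i] : NULL;
    }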

Cc: "10.0" "10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Tested-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
(cherry picked from commit 2355a64414)
2014-04-14 15:05:35 -07:00
Brian Paul
5ab0f978b7 mesa: fix glMultiDrawArrays inside a display list
The underlying glDrawArrays() calls weren't getting compiled into
the display list.  We simply need to use the current dispatch table
so the CALL_DrawArrays() is routed to the display list save function.

This patch also fixes glMultiModeDrawArraysIBM and
glMultiModeDrawElementsIBM.

Fixes the new piglit gl-1.4-dlist-multidrawarrays test.
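
A hedged usage sketch of the case being fixed (assumes a current GL
context and bound vertex arrays; error handling omitted):

    /* With the fix, the glDrawArrays calls issued internally by
     * glMultiDrawArrays are compiled into the display list. */
    GLint   firsts[] = { 0, 4 };
    GLsizei counts[] = { 4, 4 };
    GLuint  list     = glGenLists(1);

    glNewList(list, GL_COMPILE);
    glMultiDrawArrays(GL_TRIANGLE_STRIP, firsts, counts, 2);
    glEndList();

    glCallList(list);   /* both strips should now be drawn */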

Cc: "10.0" "10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: José Fonseca <jfonseca@vmware.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit e341856294)
2014-04-14 15:05:35 -07:00
Brian Paul
9cd2daa0ef st/mesa: add null pointer checking in query object functions
Don't pass null query object pointers into gallium functions.
This avoids segfaulting in the VMware driver (and others?) if the
pipe_context::create_query() call fails and returns NULL.

Cc: "10.0" "10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
(cherry picked from commit 488d4c4826)
2014-04-14 15:05:35 -07:00
Brian Paul
15b2587334 mesa: fix unpack_Z32_FLOAT_X24S8() / unpack_Z32_FLOAT() mix-up
And use the z32f_x24s8 helper struct in unpack_Z32_FLOAT_X24S8().
Cc: "10.0" "10.1" <mesa-stable@lists.freedesktop.org>

Reviewed-by: Roland Scheidegger <sroland@vmware.com>
(cherry picked from commit 1f4ebfaa88)
2014-04-14 15:05:35 -07:00
Christian König
6cc6c921b1 st/mesa: fix sampler view handling with shared textures v4
Release the references to the sampler views before
destroying the pipe context.

v2: remove TODO and unrelated change
v3: move to st_texture.[ch], rename callback, add comment
v4: fix rebase mess up and add further cleanups

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Cc: "10.0 10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit d117ddbe31)
2014-04-14 15:05:35 -07:00
José Fonseca
132df6a9a5 draw: Duplicate TGSI tokens in draw_pipe_pstipple module.
As done in draw_pipe_aaline and draw_pipe_aapoint modules.

Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
Cc: "10.0 10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit ee89432a47)
2014-04-14 15:05:35 -07:00
Christian König
0c37a0b94d st/mesa: recreate sampler view on context change v3
With shared glx contexts it is possible that a texture is created and used
in one context and then used in another one, resulting in incorrect
sampler view usage.

v2: avoid template copy
v3: add XXX comment

Signed-off-by: Christian König <christian.koenig@amd.com>
Cc: "10.0 10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 92e543c45d)
2014-04-14 15:05:35 -07:00
Ilia Mirkin
1184293f40 nouveau: there may not have been a texture if the fbo was incomplete
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
Cc: "10.0 10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit e58071355e)
2014-04-14 15:05:34 -07:00
Ilia Mirkin
2bd3830197 nouveau: add forgotten GL_COMPRESSED_INTENSITY to texture format list
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
Cc: "10.0 10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit b676df9abf)

Conflicts:
	src/mesa/drivers/dri/nouveau/nouveau_texture.c
2014-04-14 15:05:34 -07:00
Ilia Mirkin
f15356c70a mesa/main: condition GL_DEPTH_STENCIL on ARB_depth_texture
EXT_packed_depth_stencil is supported by all drivers, but
ARB_depth_texture isn't (notably nouveau_vieux). This should avoid
passing unexpected values down to ChooseTextureFormat.

The EXT_packed_depth_stencil spec does not make any explicit references
to requiring ARB_depth_texture in order to allow textures with that
format, however if there is no dependency, ARB_depth_texture would be
practically implied by the extension.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Cc: "10.0 10.1" <mesa-stable@lists.freedesktop.org>

Note for 10.0 backport: This will produce a conflict, the solution is to
move the surrounding if as well.

(cherry picked from commit 18690995a6)

Conflicts:
	src/mesa/main/teximage.c
2014-04-14 15:05:34 -07:00
Emil Velikov
40a05673a7 mesa: return v.value_int64 when the requested type is TYPE_INT64
Fixes "Operands don't affect result" defect reported by Coverity.

Cc: "9.2 10.0 10.1"  <mesa-stable@lists.freedesktop.org>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit a9cf3aa208)
2014-04-14 15:05:34 -07:00
Jonathan Gray
437f291d64 gallium: add endian detection for OpenBSD
Cc: "10.0" "10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 40214267ab)
2014-04-14 15:05:34 -07:00
Ilia Mirkin
5c4f80dca6 nv50: adjust blit_3d handling of ms output textures
This fixes some unwanted scaling when the output is multisampled.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Christoph Bumiller <e0425955@student.tuwien.ac.at>
Cc: "10.0 10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 253314d487)

Also squashed in the following:

Revert nvc0 part of "nv50: adjust blit_3d handling of ms output textures"

The nvc0 bits don't appear to work, and I thought I had removed them
from the commit. Oops.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: "10.0 10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 897f40f25d)
2014-04-14 15:05:34 -07:00
Ilia Mirkin
860ee22480 nouveau: fix fence waiting logic in screen destroy
nouveau_fence_wait has the expectation that an external entity is
holding onto the fence being waited on, not that it is merely held onto
by the current pointer. Fixes a use-after-free in nouveau_fence_wait
when used on the screen's current fence.
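
A rough sketch of the pattern the fix applies (simplified; the exact
nouveau function signatures shown here are assumptions, not verified
API):

    /* Take our own reference on the screen's current fence before
     * waiting, so the wait cannot free it out from under us. */
    struct nouveau_fence *current = NULL;
    nouveau_fence_ref(screen->fence.current, &current);
    nouveau_fence_wait(current);
    nouveau_fence_ref(NULL, &current);   /* drop our reference */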

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=75279
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Christoph Bumiller <e0425955@student.tuwien.ac.at>
Cc: "9.2 10.0 10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 507f0230d4)

Conflicts:
	src/gallium/drivers/nouveau/nv30/nv30_screen.c
2014-04-14 15:05:34 -07:00
Matt Turner
5ad6062ee6 mesa: Wrap SSE4.1 code in #ifdef __SSE4_1__.
Because people insist on doing things like explicitly disabling SSE 4.1.
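
A minimal sketch of the guard (illustrative; the surrounding Mesa code
is not shown):

    #if defined(__SSE4_1__)
    #include <smmintrin.h>
    /* SSE4.1 fast path, e.g. using _mm_stream_load_si128() */
    #else
    /* plain C fallback when the compiler is run without SSE4.1 enabled */
    #endif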

Cc: "10.0 10.1" <mesa-stable@lists.freedesktop.org>
Tested-by: David Heidelberger <david.heidelberger@ixit.cz>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=71547
(cherry picked from commit 8d3f739383)
2014-04-14 15:05:34 -07:00
Brian Paul
063f9c6aef mesa: fix copy & paste bugs in pack_ubyte_SRGB8()
Cc: "10.0" "10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
(cherry picked from commit 1e25aa4cdb)
2014-04-14 15:05:34 -07:00
Brian Paul
6490bdf358 mesa: fix copy & paste bugs in pack_ubyte_SARGB8()
Cc: "10.0" "10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
(cherry picked from commit 9493fc729e)
2014-04-14 15:05:34 -07:00
Carl Worth
1062066fad Ignore patches which don't apply.
These patches all fix bugs in code that is not present in the 10.0 branch.
2014-04-14 15:05:34 -07:00
Brian Paul
fd5f0644af mesa: add unpacking code for MESA_FORMAT_Z32_FLOAT_S8X24_UINT
Fixes glGetTexImage() when converting from MESA_FORMAT_Z32_FLOAT_S8X24_UINT
to GL_UNSIGNED_INT_24_8.  Hit by the piglit
ext_packed_depth_stencil-getteximage test.

Cc: "10.0" "10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit a12d4d0398)
2014-04-14 15:05:34 -07:00
Alex Deucher
ef793cbf6d radeon: reverse DBG_NO_HYPERZ logic
Change the flag to DBG_HYPERZ and reverse the logic
so setting the flag enables the feature.  This disables
hyperz on r600g and radeonsi by default.  It can be
enabled by setting the env var.  There are just too
many issues with certain apps so leave it disabled for
now until we sort out the issues with the problematic
apps.

Bugs:
https://bugs.freedesktop.org/show_bug.cgi?id=58660
https://bugs.freedesktop.org/show_bug.cgi?id=64471
https://bugs.freedesktop.org/show_bug.cgi?id=66352
https://bugs.freedesktop.org/show_bug.cgi?id=68799
https://bugs.freedesktop.org/show_bug.cgi?id=72685
https://bugs.freedesktop.org/show_bug.cgi?id=73088
https://bugs.freedesktop.org/show_bug.cgi?id=74428
https://bugs.freedesktop.org/show_bug.cgi?id=74803
https://bugs.freedesktop.org/show_bug.cgi?id=74863
https://bugs.freedesktop.org/show_bug.cgi?id=74892
https://bugzilla.kernel.org/show_bug.cgi?id=70411

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: "10.1" "10.0" <mesa-stable@lists.freedesktop.org>
Acked-by: Marek Olšák <marek.olsak@amd.com>
(cherry picked from commit 01e6371149)
2014-04-14 14:05:02 -07:00
Carl Worth
f6ce0eba76 docs: Add md5sums for the 10.0.4 release.
After tar files were generated.
2014-03-12 09:16:31 -07:00
Carl Worth
2cfd35186e docs: Add release notes for 10.0.4
Just prior to release.
2014-03-12 08:55:46 -07:00
Carl Worth
7494e2e50c Update version to 10.0.4
In preparation for a stable-branch release.
2014-03-12 08:52:06 -07:00
Carl Worth
c29c9947a3 get-pick-list: Update to only find patches nominated for the 10.0 branch
In early February, the 10.1 branch was created. From then on, patches that
don't specifically say "10.0" are intended for 10.1, not 10.0.
2014-03-11 11:49:52 -07:00
Hans
518526700e mesa: don't define c99 math functions for MSVC >= 1800
Signed-off-by: Brian Paul <brianp@vmware.com>
Cc: "10.0" "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 837da9bdae)
2014-03-11 11:49:52 -07:00
Hans
2f9e7f6394 util: don't define isfinite(), isnan() for MSVC >= 1800
Signed-off-by: Brian Paul <brianp@vmware.com>
Cc: "10.0" "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit bf25660325)
2014-03-11 11:49:52 -07:00
Brian Paul
9cdb86a1da softpipe: use 64-bit arithmetic in softpipe_resource_layout()
To avoid 32-bit integer overflow for large textures.  Note: we're
already doing this in llvmpipe.
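
A stand-alone illustration of the overflow being avoided (not the
softpipe code itself):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* A large texture: 16384 x 16384 texels at 16 bytes per texel. */
        unsigned w = 16384, h = 16384, bpp = 16;

        unsigned bad  = w * h * bpp;             /* 32-bit product wraps to 0 */
        uint64_t good = (uint64_t)w * h * bpp;   /* 64-bit math stays correct */

        printf("32-bit: %u bytes, 64-bit: %llu bytes\n",
               bad, (unsigned long long)good);
        return 0;
    }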

Cc: "10.0" "10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
(cherry picked from commit 465b2c42bc)
2014-03-11 11:49:52 -07:00
Julien Cristau
4588a32dee glx/dri2: fix build failure on HURD
Patch from Debian package.

Cc: "10.0 10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 6f0e2731e8)
2014-03-11 11:49:52 -07:00
Chris Forbes
aba40445c2 i965: Validate (and resolve) all the bound textures.
BRW_MAX_TEX_UNIT is the static limit on the number of textures we
support per-stage, not in total.

Core's `Unit` array is sized by MAX_COMBINED_TEXTURE_IMAGE_UNITS, which
is significantly larger, and across the various shader stages, up to
ctx->Const.MaxCombinedTextureImageUnits elements of it may be actually
used.

Fixes invisible bad behavior in piglit's max-samplers test (although
this escalated to an assertion failure on HSW with texture_view, since
non-immutable textures only have _Format set by validation.)

Signed-off-by: Chris Forbes <chrisf@ijw.co.nz>
Cc: "9.2 10.0 10.1" <mesa-stable@lists.freedesktop.org>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit befbda56a2)
2014-03-11 11:49:51 -07:00
Emil Velikov
6a81f2bc9b dri/i9*5: correctly calculate the amount of system memory
The variable name states megabytes, while we calculate the amount in
kilobytes. Correct this by dividing by the correct amount.

Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Cc: "10.0 10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit fc25956bad)
2014-03-11 11:49:51 -07:00
Tom Stellard
a02c50ef4e r600g/compute: PIPE_CAP_COMPUTE should be false for pre-evergreen GPUs
This prevents clover from using unsupported devices.

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

CC: "10.0 10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit f61e382f0a)
2014-03-10 15:34:15 -07:00
Brian Paul
af1831d003 mesa: do depth/stencil format conversion in glGetTexImage
glGetTexImage(GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8) was just
using memcpy() instead of _mesa_unpack_uint_24_8_depth_stencil_row()
to convert texels from the hardware format to the GL format.

Fixes issue reported by David Meng at Intel.  The new piglit
ext_packed_depth_stencil-getteximage test checks for this bug.

Also, add some format/type assertions.  We don't yet handle the
GL_FLOAT_32_UNSIGNED_INT_24_8_REV type.  That should be fixed in
a follow-on patch.

Reviewed-by: Eric Anholt <eric@anholt.net>
Cc: "10.0" "10.1" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 43dee0295e)
2014-03-10 15:34:14 -07:00
Anuj Phogat
59ab5bf0a0 i965: Fix the region's pitch condition to use blitter
intelEmitCopyBlit uses a signed 16-bit integer to represent
buffer pitch, so it can only handle buffer pitches < 32k.
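
A hedged sketch of the kind of guard involved (names approximate, not
the exact driver code):

    #include <stdbool.h>
    #include <stdint.h>

    /* The BLT command stores pitches in signed 16-bit fields, so fall
     * back to another path when a pitch cannot be represented. */
    static bool pitches_fit_blitter(uint32_t src_pitch, uint32_t dst_pitch)
    {
        return src_pitch < INT16_MAX && dst_pitch < INT16_MAX;
    }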

Cc: mesa-stable@lists.freedesktop.org
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit b3094d9927)
2014-03-10 15:34:14 -07:00
Fredrik Höglund
85e04ad280 glx: Fix the GLXFBConfig attrib sort priorities
The sort priorites for GLX_SAMPLES and GLX_SAMPLE_BUFFERS are
not defined in GL_ARB_multisample, but they are defined in
the GLX 1.4 specification.

Cc: "9.2 10.0 10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 3616e862f2)
2014-03-10 15:34:14 -07:00
Fredrik Höglund
6b2cf05192 glx: Fix the default values for GLXFBConfig attributes
The default values for GLX_DRAWABLE_TYPE and GLX_RENDER_TYPE are
GLX_WINDOW_BIT and GLX_RGBA_BIT respectively, as specified in
the GLX 1.4 specification.

This fixes the glx-choosefbconfig-defaults piglit test.

Cc: "9.2 10.0 10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit f41c2f6c33)
2014-03-10 15:34:14 -07:00
Emil Velikov
3fc389efeb nv50: correctly calculate the number of vertical blocks during transfer map
Cc: "10.0 10.1" <mesa-stable@lists.freedesktop.org>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
(cherry picked from commit 882070cc81)
2014-03-10 15:34:14 -07:00
Kenneth Graunke
bab122c320 i965: Create a hardware context before initializing state module.
brw_init_state() calls brw_upload_initial_gpu_state().  If hardware
contexts are enabled (brw->hw_ctx != NULL), this will upload some
initial invariant state for the GPU.  Without hardware contexts, we
rely on this state being uploaded via atoms that subscribe to the
BRW_NEW_CONTEXT bit.

Commit 46d3c2bf4d accidentally moved
the call to brw_init_state() before creating a hardware context.
This meant brw_upload_initial_gpu_state would always return early.
That would be fine, except that on Gen6+ we stopped uploading the
initial GPU state via state atoms, so it never happened.

Fixes a regression since 46d3c2bf4d.

Cc: "10.0 10.1" <mesa-stable@lists.freedesktop.org>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 3663bbe773)
2014-03-04 13:24:48 -08:00
Ian Romanick
cf7daac483 glsl: Only warn for macro names containing __
From page 14 (page 20 of the PDF) of the GLSL 1.10 spec:

    "In addition, all identifiers containing two consecutive underscores
     (__) are reserved as possible future keywords."

The intention is that names containing __ are reserved for internal use
by the implementation, and names prefixed with GL_ are reserved for use
by Khronos.  Names simply containing __ are dangerous to use, but should
be allowed.

Per the Khronos bug mentioned below, a future version of the GLSL
specification will clarify this.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Cc: "9.2 10.0 10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Tested-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
Tested-by: Darius Spitznagel <d.spitznagel@goodbytez.de>
Cc: Tapani Pälli <lemody@gmail.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=71870
Bugzilla: Khronos #11702
(cherry picked from commit 2c85fd5a96)
2014-03-04 13:24:04 -08:00
Ian Romanick
de6068a218 glcpp: Only warn for macro names containing __
Section 3.3 (Preprocessor) of the GLSL 1.30 spec (and later) and the
GLSL ES spec (all versions) say:

    "All macro names containing two consecutive underscores ( __ ) are
    reserved for future use as predefined macro names. All macro names
    prefixed with "GL_" ("GL" followed by a single underscore) are also
    reserved."

The intention is that names containing __ are reserved for internal use
by the implementation, and names prefixed with GL_ are reserved for use
by Khronos.  Since every extension adds a name prefixed with GL_ (i.e.,
the name of the extension), that should be an error.  Names simply
containing __ are dangerous to use, but should be allowed.  In similar
cases, the C++ preprocessor specification says, "no diagnostic is
required."

Per the Khronos bug mentioned below, a future version of the GLSL
specification will clarify this.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Cc: "9.2 10.0 10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Tested-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
Tested-by: Darius Spitznagel <d.spitznagel@goodbytez.de>
Cc: Tapani Pälli <lemody@gmail.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=71870
Bugzilla: Khronos #11702
(cherry picked from commit 0bd7892630)
2014-03-04 13:23:46 -08:00
Anuj Phogat
c074e34745 glsl: Fix condition to generate shader link error
GL_ARB_ES2_compatibility doesn't say anything about shader linking
when one of the shaders (vertex or fragment shader) is absent. So,
the extension shouldn't change the behavior specified in GLSL
specification.

Tested the behavior on proprietary linux drivers of NVIDIA and AMD.
Both of them allow linking a version 100 shader program in OpenGL
context, when one of the shaders is absent.

Makes following Khronos CTS tests to pass:
successfulcompilevert_linkprogram.test
successfulcompilefrag_linkprogram.test

Cc: mesa-stable@lists.freedesktop.org
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 03597cf802)
2014-03-04 13:22:43 -08:00
Anuj Phogat
00d1daf2a8 mesa: Add GL_TEXTURE_CUBE_MAP_ARRAY to legal_get_tex_level_parameter_target()
Fixes failing Khronos CTS test packed_depth_stencil_init.test

Cc: <mesa-stable@lists.freedesktop.org>
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 6bd2472a8b)
2014-03-04 13:22:07 -08:00
Kusanagi Kouichi
09a346a1c1 targets/vdpau: Always use c++ to link
If built without llvm, the following error occurs with mplayer:

Failed to open VDPAU backend .../libvdpau_r600.so: undefined symbol: _ZTVN10__cxxabiv117__class_type_infoE
[vo/vdpau] Error when calling vdp_device_create_x11: 1

Cc: <mesa-stable@lists.freedesktop.org>
Signed-off-by: Kusanagi Kouichi <slash@ac.auone-net.jp>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
(cherry picked from commit 61f6cddef7)
2014-03-04 13:20:34 -08:00
Carl Worth
5202312160 main: Avoid double-free of shader Label
As documented, the _mesa_free_shader_program_data function:

	"Frees all the data that hangs off a shader program object, but not
	the object itself."

This means that this function may be called multiple times on the same object,
(and has been observed to). Meanwhile, the shProg->Label field was not being
set to NULL after its free(). This led to a second call to free() of the same
address on the second call to this function.

Fix this by setting this field to NULL after free(), (just as with all other
calls to free() in this function).
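
The pattern of the fix, as a simplified stand-alone sketch (not the
actual Mesa structure):

    #include <stdlib.h>

    struct shader_program { char *Label; /* ... other fields ... */ };

    /* Clearing the pointer after free() makes the cleanup idempotent,
     * so calling it a second time no longer double-frees Label. */
    static void free_label(struct shader_program *shProg)
    {
        free(shProg->Label);
        shProg->Label = NULL;
    }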

Reviewed-by: Brian Paul <brianp@vmware.com>

CC: mesa-stable@lists.freedesktop.org
(cherry picked from commit a92581acf2)
2014-03-04 13:20:01 -08:00
Ilia Mirkin
d6e83e9a7a nouveau: fix chipset checks for nv1a by using the oclass instead
Commit f4ebcd133b ("dri/nouveau: NV17_3D class is not available for
NV1a chipset") fixed this partially by using the correct 3d class.
However there were a lot of checks left over comparing against the
chipset.

Reported-and-tested-by: John F. Godfrey <jfgodfrey@gmail.com>
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: 9.2 10.0 10.1 <mesa-stable@lists.freedesktop.org>
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
(cherry picked from commit 0c8b165366)
2014-03-04 13:15:11 -08:00
Fredrik Höglund
ad54c842fa mesa: Preserve the NewArrays state when copying a VAO
Cc: "10.1" "10.0" <mesa-stable@lists.freedesktop.org>

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=72895
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 9afbd04d89)
2014-03-04 13:14:51 -08:00
Emil Velikov
a4719eff1a dri/nouveau: Pass the API into _mesa_initialize_context
Currently we create a OPENGL_COMPAT context regardless of
what was requested by the program. Correct that by retaining
the program's request and passing it into _mesa_initialize_context.

Based on a similar commit for radeon/r200 by Ian Romanick.

Cc: "9.1 9.2 10.0" <mesa-stable@lists.freedesktop.org>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
(cherry picked from commit 76d9f6d972)
2014-03-04 13:13:26 -08:00
Daniel Kurtz
5f35078700 glsl: Add locking to builtin_builder singleton
Consider a multithreaded program with two contexts A and B, and the
following scenario:

1. Context A calls initialize(), which allocates mem_ctx and starts
   building built-ins.
2. Context B calls initialize(), which sees mem_ctx != NULL and assumes
   everything is already set up.  It returns.
3. Context B calls find(), which fails to find the built-in since it
   hasn't been created yet.
4. Context A finally finishes initializing the built-ins.

This will break at step 3.  Adding a lock ensures that subsequent
callers of initialize() will wait until initialization is actually
complete.

Similarly, if any thread calls release while another thread is still
initializing, or calling find(), the mem_ctx/shader would get freed out
from under it, leading to corruption or use-after-free crashes.

Fixes sporadic failures in Piglit's glx-multithread-shader-compile.
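
A simplified sketch of the locking pattern (stand-in names; the real
code guards the builtin_builder singleton in C++):

    #include <pthread.h>
    #include <stdlib.h>

    static pthread_mutex_t builtins_lock = PTHREAD_MUTEX_INITIALIZER;
    static void *mem_ctx;   /* stands in for the built-ins' allocation context */

    static void builtins_initialize(void)
    {
        pthread_mutex_lock(&builtins_lock);
        if (!mem_ctx)
            mem_ctx = malloc(1);   /* placeholder for building all built-ins */
        pthread_mutex_unlock(&builtins_lock);
    }

    static void builtins_release(void)
    {
        pthread_mutex_lock(&builtins_lock);
        free(mem_ctx);
        mem_ctx = NULL;
        pthread_mutex_unlock(&builtins_lock);
    }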

Bugzilla: https://bugs.freedesktop.org/69200
Signed-off-by: Daniel Kurtz <djkurtz@chromium.org>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Cc: "10.1 10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit b47d231526)
2014-03-04 13:12:27 -08:00
Ilia Mirkin
aa1f7b4237 nouveau/video: make sure that firmware is present when checking caps
Apparently some players are ill-prepared for us claiming that a decoder
exists only to have creating it fail, and express this poor preparation
with crashes (e.g. flash). Check that firmware is there to increase the
chances of there being a high correlation between reported capabilities
and ability to create a decoder.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: 10.0 10.1 <mesa-stable@lists.freedesktop.org>
Tested-by: Emil Velikov <emil.l.velikov@gmail.com>
(cherry picked from commit 40dd777b33)
2014-03-04 13:12:02 -08:00
Ilia Mirkin
d141825eff nv30: report 8 maximum inputs
nvfx_fragprog_assign_generic only allows for up to 10/8 texcoords for
nv40/nv30. This fixes compilation of the varying-packing tests.
Furthermore it appears that the last 2 inputs on nv4x don't seem to
work in those tests, so just report 8 everywhere for now.

Tested on NV42, NV44. NV4B appears to have additional problems.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: 9.1 9.2 10.0 10.1 <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 356aff3a5c)
2014-03-04 13:10:35 -08:00
Brian Paul
d13adcae22 mesa: update assertion in detach_shader() for geom shaders
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=74723
Cc: "10.0" "10.1" <mesa-stable@lists.freedesktop.org>
Tested-by: Andreas Boll <andreas.boll.dev@gmail.com>
(cherry picked from commit c325ec8965)
2014-03-04 13:09:17 -08:00
Kenneth Graunke
e91dd3661c glsl: Don't lose precision qualifiers when encountering "centroid".
Mesa fails to retain the precision qualifier when parsing:

   #version 300 es
   centroid in mediump vec2 v;

Consider how the parser's type_qualifier production is applied.
First, the precision_qualifier rule creates a new ast_type_qualifier:

    <precision: mediump>

Then the storage_qualifier rule creates a second one:

    <flags: in>

and calls merge_qualifier() to fold in any previous qualifications,
returning:

    <flags: in, precision: mediump>

Finally, the auxiliary_storage_qualifier creates one for "centroid":

    <flags: centroid>

it then does $$ = $1 and $$.flags |= $2.flags, resulting in:

    <flags: centroid, in>

Since precision isn't stored in the flags bitfield, it is lost.  We need
to instead call merge_qualifier to combine all the fields.

Cc: mesa-stable@lists.freedesktop.org
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reported-by: Kevin Rogovin <kevin.rogovin@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 2062f40d81)
2014-03-04 13:07:32 -08:00
Brian Paul
490b810d0e st/mesa: avoid sw fallback for getting/decompressing textures
If st_GetTexImage() is to decompress the texture, avoid the fallback
path even if prefer_blit_based_texture_transfer = false.  For drivers
that returned PIPE_CAP_PREFER_BLIT_BASED_TEXTURE_TRANSFER = 0, we
were always taking the fallback path for texture decompression rather
than rendering a quad.  The latter is a lot faster.

Cc: "10.0" "10.1" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
(cherry picked from commit f47e596288)
2014-03-04 13:07:13 -08:00
Matt Turner
d37086c6fc glsl: Initialize ubo_binding_mask flags to zero.
Missed in commit e63bb298. Caused sporadic test failures, like
incorrect-in-layout-qualifier-repeated-prim.geom.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
(cherry picked from commit e2ef93cf94)
2014-03-04 13:05:05 -08:00
Marek Olšák
cfd8aed240 st/mesa: fix crash when a shader uses a TBO and it's not bound
This binds a NULL sampler view in that case.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=74251

Cc: "10.1" "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit c6dbcf10df)
2014-03-04 13:04:37 -08:00
Paul Berry
1b6aec4b5a glsl: Fix continue statements in do-while loops.
From the GLSL 4.40 spec, section 6.4 (Jumps):

    The continue jump is used only in loops. It skips the remainder of
    the body of the inner most loop of which it is inside. For while
    and do-while loops, this jump is to the next evaluation of the
    loop condition-expression from which the loop continues as
    previously defined.

Previously, we incorrectly treated a "continue" statement as jumping
to the top of a do-while loop.

This patch fixes the problem by replicating the loop condition when
converting the "continue" statement to IR.  (We already do a similar
thing in "for" loops, to ensure that "continue" causes the loop
expression to be executed).
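
The required behaviour matches C's, which can be checked with a small
stand-alone program (C is used here purely to illustrate the same
semantics):

    #include <stdio.h>

    int main(void)
    {
        /* "continue" in a do-while jumps to the condition test,
         * not back to the top of the loop body. */
        int i = 0;
        do {
            i++;
            if (i == 2)
                continue;           /* still evaluates (i < 4) next */
            printf("i = %d\n", i);  /* prints 1, 3, 4 */
        } while (i < 4);
        return 0;
    }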

Fixes piglit tests:
- glsl-fs-continue-inside-do-while.shader_test
- glsl-vs-continue-inside-do-while.shader_test
- glsl-fs-continue-in-switch-in-do-while.shader_test
- glsl-vs-continue-in-switch-in-do-while.shader_test

Cc: mesa-stable@lists.freedesktop.org

Acked-by: Carl Worth <cworth@cworth.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
(cherry picked from commit 7f5740899f)
2014-03-04 13:04:15 -08:00
Paul Berry
6d6bdd88e7 glsl: Make condition_to_hir() callable from outside ast_iteration_statement.
In addition to making it public, we also need to change its first
argument from an ir_loop * to an exec_list *, so that it can be used
to insert the condition anywhere in the IR (rather than just in the
body of the loop).

This will be necessary in order to make continue statements work
properly in do-while loops.

Cc: mesa-stable@lists.freedesktop.org

Acked-by: Carl Worth <cworth@cworth.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
(cherry picked from commit 56790856b3)
2014-03-04 13:01:20 -08:00
Topi Pohjolainen
2a19186953 i965/blorp: do not use unnecessary hw-blending support
This is really not needed as blorp blit programs already sample
XRGB normally and get alpha channel set to 1.0 automatically by
the sampler engine. This is simply copied directly to the payload
of the render target write message and hence there is no need for
any additional blending support from the pixel processing pipeline.

The blending formula is broken for the color components anyway: it
multiplies each color component with itself (the blend factor is the
component itself).
Alpha blending, in turn, would not force the alpha to one independent
of the source but simply used the source alpha as-is
(1.0 * src_alpha + 0.0 * dst_alpha).

Quoting Eric:

 "If we want to actually make the no-alpha-bits-present thing work,
  we need to override the bits in the surface state or in the
  generated code.  In the normal draw path, it's done for sampling
  by the swizzling code in brw_wm_surface_state.c, and the blending
  overrides is just to fix up the alpha blending stage which
  doesn't pay attention to that for the destination surface."

If one modifies piglit test gl-3.2-layered-rendering-blit to use
color component values other than zero or one, this change will
kick in on IVB. No regressions on IVB.

This is effectively revert of c0554141a9:

    i965/blorp: Support overriding destination alpha to 1.0.

    Currently, Blorp requires the source and destination formats to be
    equal.  However, we'd really like to be able to blit between XRGB and
    ARGB formats; our BLT engine paths have supported this for a long time.

    For ARGB -> XRGB, nothing needs to occur: the missing alpha is already
    interpreted as 1.0.  For XRGB -> ARGB, we need to smash the alpha
    channel to 1.0 when writing the destination colors.  This is fairly
    straightforward with blending.

    For now, this code is never used, as the source and destination formats
    still must be equal.  The next patch will relax that restriction.

    NOTE: This is a candidate for the 9.1 branch.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Topi Pohjolainen <topi.pohjolainen@intel.com>
(cherry picked from commit 933be19cdf)
2014-03-04 13:00:47 -08:00
Christian König
dc0053b33f radeon/uvd: fix feedback buffer handling v2
Without the correct feedback buffer size UVD runs
into an error on each frame, reducing the maximum FPS.

v2: fixing Michel's comments

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Cc: "10.1" "10.0" "9.2" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit c3c24c3acc)
2014-03-04 13:00:16 -08:00
Brian Paul
e70e368af5 draw: fix incorrect color of flat-shaded clipped lines
When we clipped a line we weren't copying the provoking vertex
color to the second vertex.  We also weren't checking for
first vs. last provoking vertex.

Fixes failures found with the new piglit line-flat-clip-color test.

Cc: "10.0, 10.1" <mesa-stable@lists.freedesktop.org>

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
(cherry picked from commit fc3fcd1e01)
2014-03-04 12:59:27 -08:00
Brian Paul
e10b0e0f50 gallium/auxiliary/indices: replace free() with FREE()
To match the CALLOC_STRUCT() call.

Cc: "10.0, 10.1" <mesa-stable@lists.freedesktop.org>

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
(cherry picked from commit 307fd76053)
2014-03-04 12:58:30 -08:00
Ian Romanick
b74da80b71 meta: Consistently use non-Apple VAO functions
For these objects, meta was already using the non-Apple function to
delete the objects.  Everywhere else in the file uses
_mesa_GenVertexArrays and _mesa_BindVertexArrays.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Cc: "9.1 9.2 10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit abfa65ca81)
2014-03-04 12:58:10 -08:00
Ian Romanick
89c6473ff0 meta: Fallback to software for GetTexImage of compressed GL_TEXTURE_CUBE_MAP_ARRAY
The hardware decompression path isn't even close to being able to handle
this.  This converts the crash (assertion failure) in
"EXT_texture_compression_s3tc/getteximage-targets S3TC CUBE_ARRAY" to a
plain old failure.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Cc: "9.1 9.2 10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 070f55d893)
2014-03-04 12:57:43 -08:00
Ian Romanick
a4a8af4cbb meta: Release resources used by _mesa_meta_DrawPixels
_mesa_meta_DrawPixels creates a VAO and (potentially) two fragment
programs, but none of them are ever released.  Leaking piles of memory
is generally frowned upon.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Cc: "9.1 9.2 10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit fcb498302b)
2014-03-04 12:57:01 -08:00
Ian Romanick
c1bcdcde1c meta: Release resources used by decompress_texture_image
decompress_texture_image creates an FBO, an RBO, a VBO, a VAO, and a
sampler object, but none of them are ever released.  Later patches will
add program objects, exacerbating the problem.  Leaking piles of memory
is generally frowned upon.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Cc: "9.1 9.2 10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 2d3f92e881)
2014-03-04 12:56:23 -08:00
Brian Paul
d18b182134 radeon: move driContextSetFlags(ctx) call after ctx var is initialized
CC: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit f51ca46f0c)
2014-03-04 12:55:24 -08:00
Brian Paul
5297fdc0c8 r200: move driContextSetFlags(ctx) call after ctx var is initialized
Otherwise, ctx was a garbage value.

CC: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit 2d6d69bab6)
2014-03-04 12:55:07 -08:00
Anuj Phogat
edf066f385 mesa: Generate correct error code in glDrawBuffers()
OpenGL 3.3 spec expects GL_INVALID_OPERATION:
 "For both the default framebuffer and framebuffer objects, the
  constants FRONT, BACK, LEFT, RIGHT, and FRONT AND BACK are not
  valid in the bufs array passed to DrawBuffers, and will result
  in the error INVALID OPERATION."

But OpenGL 4.0 spec changed the error code to GL_INVALID_ENUM:
 "For both the default framebuffer and framebuffer objects, the
  constants FRONT, BACK, LEFT, RIGHT, and FRONT_AND_BACK are not
  valid in the bufs array passed to DrawBuffers, and will result
  in the error INVALID_ENUM."

This patch changes the behaviour to match OpenGL 4.0 spec
Fixes Khronos OpenGL CTS draw_buffers_api.test.

V2: Update the comment in code.

Cc: mesa-stable@lists.freedesktop.org
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 3303475558)
2014-03-04 12:54:41 -08:00
Carl Worth
593484a1c4 docs: Add md5sums for 10.0.3 release
Which we couldn't do until after tagging the release, of course.
2014-02-03 12:19:49 -08:00
Carl Worth
d8225ac67a docs: Add release notes for 10.0.3 release.
Just before making the actual release.
2014-02-03 11:21:23 -08:00
Carl Worth
3eac4b550d Update version to 10.0.3
In preparation for the upcoming 10.0.3 release.
2014-02-03 11:17:06 -08:00
Paul Seidler
cb7caac053 build: move ARCH_LIBS definition outside of ASM definition
_mesa_streaming_load_memcpy is also needed even if assembly is disabled

Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
(cherry picked from commit 1cdeeef6c4)
2014-02-03 09:59:52 -08:00
Lauri Kasanen
0461451dcd mesa: Fix build to properly check for supported compiler flags
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=72708
Reviewed-by: Matt Turner <mattst88@gmail.com>
Signed-off-by: Lauri Kasanen <cand@gmx.com>
(cherry picked from commit fcefdc9a59)
2014-02-03 09:59:52 -08:00
Matt Turner
0657a6a6ae glx: Update glxext.h to revision 24777.
It readds the GLXContextID typedef, but under #ifndef GLX_VERSION_1_3.

Bugzilla: https://cvs.khronos.org/bugzilla/show_bug.cgi?id=11454

(Backported from commit 3f3aafbfee)
2014-02-03 09:59:37 -08:00
Anuj Phogat
559d9b894e i965: Ignore 'centroid' interpolation qualifier in case of persample shading
This patch handles the use of 'centroid' qualifier with 'in' variables
in a fragment shader when persample shading is enabled. Per sample
shading for the whole fragment shader can be enabled by:
glEnable(GL_SAMPLE_SHADING) or using {gl_SamplePosition, gl_SampleID}
builtin variables in fragment shader. Explaining it below in more
detail.

/* Enable sample shading using OpenGL API */
glEnable(GL_SAMPLE_SHADING);
glMinSampleShading(1.0);

Example fragment shader:
in vec4 a;
centroid in vec4 b;
main()
{
  ...
}

Variable 'a' will be interpolated at sample location. But, what
interpolation should we use for variable 'b' ?

ARB_sample_shading recommends interpolation at sample position for
all the variables. The GLSL 4.00 (and earlier) spec says that:

"When an interpolation qualifier is used, it overrides settings
established through the OpenGL API."
But, this text got deleted in later versions of GLSL.

NVIDIA's and AMD's proprietary Linux drivers (at OpenGL 4.3)
interpolate at the sample position. This convinces me to use
a similar approach on Intel hardware.

Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Chris Forbes <chrisf@ijw.co.nz>
(cherry picked from commit f5cfb4ae21)

and

i965: Ignore 'centroid' interpolation qualifier in case of persample shading

I missed this change in commit f5cfb4a. It fixes the incorrect
rendering caused in Dolphin Emulator.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=73915

Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Tested-by: Markus Wick <wickmarkus@web.de>
Reviewed-by: Matt Turner <mattst88@gmail.com>
(cherry picked from commit dc2f94bc78)
2014-01-31 13:01:44 -08:00
Anuj Phogat
765e3d373b i965: Use sample barycentric coordinates with per sample shading
Current implementation of arb_sample_shading doesn't set 'Barycentric
Interpolation Mode' correctly. We use pixel barycentric coordinates
for per sample shading. Instead we should select perspective sample
or non-perspective sample barycentric coordinates.

It also enables using sample barycentric coordinates in case of a
fragment shader variable declared with 'sample' qualifier.
e.g. sample in vec4 pos;

A piglit test to verify the implementation has been posted on piglit
mailing list for review.

V2: Do not interpolate all the 'in' variables at sample position
    if fragment shader uses 'sample' qualifier with one of them.
    For example we have a fragment shader:
    #version 330
    #extension ARB_gpu_shader5: require
    sample in vec4 a;
    in vec4 b;
    main()
    {
      ...
    }

    Only 'a' should be sampled at sample location, not 'b'.

Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Chris Forbes <chrisf@ijw.co.nz>
(cherry picked from commit a92e5f7cf6)
2014-01-31 13:01:34 -08:00
Carl Worth
ae286af09d cherry-ignore: Ignore 4 patches at the request of the author (Anuj).
For 3 of the 4, I was already ignoring them since they were not picking
cleanly. Now, Anuj has explicitly requested they be ignored since they all
depend on a series that is not yet on the 10.0 branch.
2014-01-31 12:38:10 -08:00
José Fonseca
ed437df208 mesa: Use IROUND instead of roundf.
roundf is not available on MSVC.

(cherry picked from commit bba8f10598)
2014-01-31 12:37:11 -08:00
Chad Versace
f7848574b3 i965/gen6/blorp: Emit more flushes to workaround hangs
This is a squash of three related cherry-picks from master.

[PATCH 1/3]

  i965/gen6/blorp: Set need_workaround_flush immediately after primitive

  This patch makes the workaround code in gen6 blorp follow the pattern
  established in the regular draw path. It shouldn't result in any
  behavioral change.

  On gen6, there are two places where we emit 3D_CMD_PRIM: brw_emit_prim()
  and gen6_blorp_emit_primitive().  brw_emit_prim() sets
  need_workaround_flush immediately after emitting the primitive, but
  blorp does not. Blorp sets need_workaround_flush at the bottom of
  brw_blorp_exec().

  This patch moves the need_workaround_flush from brw_blorp_exec() to
  gen6_blorp_emit_primitive().  There is no need to set
  need_workaround_flush in gen7_blorp_emit_primitive() because the
  workaround applies only to gen6.

  Reviewed-by: Paul Berry <stereotype441@gmail.com>
  Signed-off-by: Chad Versace <chad.versace@linux.intel.com>
  (cherry picked from commit 5e0cd58de4)

[PATCH 2/3]

  i965/gen6/blorp: Set need_workaround_flush at top of blorp

  Unconditionally set brw->need_workaround_flush at the top of gen6 blorp
  state emission.

  The art of emitting workaround flushes on Sandybridge is mysterious and
  not fully understood. Ken and I believe that
  intel_emit_post_sync_nonzero_flush() may be required when switching from
  regular drawing to blorp.  This is an extra safety measure to prevent
  undiscovered difficult-to-diagnose gpu hangs.

  I verified that on ChromeOS, pre-patch, need_workaround_flush was not
  set at the top of blorp, as Paul expected. To verify, I inserted the
  following debug code at the top of gen6_blorp_exec(), restarted the ui,
  and inspected the logs in /var/log/ui. The abort gets triggered so early
  that the browser never appears on the display.

      static void
      gen6_blorp_exec(...)
      {
          if (!brw->need_workaround_flush) {
              fprintf(stderr, "chadv: %s:%d\n", __FILE__, __LINE__);
              abort();
          }
          ...
      }

  CC: Kenneth Graunke <kenneth@whitecape.org>
  CC: Stéphane Marchesin <marcheu@chromium.org>
  Reviewed-by: Paul Berry <stereotype441@gmail.com>
  Signed-off-by: Chad Versace <chad.versace@linux.intel.com>
  (cherry picked from commit 6a5c86f486)

[PATCH 3/3]

  i965/gen6/blorp: Remove redundant HiZ workaround

  Commit 1a92881 added extra flushes to fix a HiZ hang in
  WebGL Google Maps. With the extra flushes emitted by the previous two
  patches, the flushes added by 1a92881 are redundant.

  Tested with the same criteria as in 1a92881: by zooming in and out
  continuously for 2 hours on Sandybridge Chrome OS (codename
  Stumpy) without a hang.

  CC: Kenneth Graunke <kenneth@whitecape.org>
  CC: Stéphane Marchesin <marcheu@chromium.org>
  Reviewed-by: Paul Berry <stereotype441@gmail.com>
  Signed-off-by: Chad Versace <chad.versace@linux.intel.com>
  (cherry picked from commit 90368875e7)

  Conflicts:
  	src/mesa/drivers/dri/i965/gen6_blorp.cpp
2014-01-31 12:21:25 -08:00
Ian Romanick
319d6d6067 radeon / r200: Pass the API into _mesa_initialize_context
Otherwise an application that requested an OpenGL ES 1.x context would
actually get a desktop OpenGL context.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Cc: "9.1 9.2 10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 33214679bb)
2014-01-28 13:21:55 -08:00
Tom Stellard
6f27353c20 r600g/compute: Emit DEALLOC_STATE on cayman after dispatching a compute shader.
This is necessary to prevent the next SURFACE_SYNC packet from
hanging the GPU.

https://bugs.freedesktop.org/show_bug.cgi?id=73418

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

CC: "9.2" "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit d51dbe048a)
2014-01-28 13:21:25 -08:00
Emil Velikov
99f695f716 gallium/rtasm: handle mmap failures appropriately
For a variety of reasons (SELinux and PaX to name a few)
mmap can fail, and with the current code this will
result in a crash in the driver, if not worse.

This has been the case since the inception of the
gallium copy of rtasm.

Cc: 9.1 9.2 10.0 <mesa-stable@lists.freedesktop.org>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=73473
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
(cherry picked from commit 4dd445f1cf)
2014-01-28 13:20:53 -08:00
Carl Worth
ef75bf0777 Drop another couple of patches.
These depend on code which does not exist on the stable branch.
2014-01-28 13:18:40 -08:00
Matt Turner
0cd3d50f07 glcpp: Define GL_EXT_shader_integer_mix in both GL and ES.
Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 66ef8feb4d)

Conflicts:
	src/glsl/glcpp/glcpp-parse.y
2014-01-28 13:03:08 -08:00
Carl Worth
31b2e73a2d cherry-ignore: Ignore several patches not yet ready for the stable branch
The comments describe the reasons for each being excluded.
2014-01-28 12:51:53 -08:00
Brian Paul
df62691a02 draw: fix incorrect vertex size computation in LLVM drawing code
We were calling draw_total_vs_outputs() too early.  The call to
draw_pt_emit_prepare() could result in the vertex size changing.
So call draw_total_vs_outputs() after draw_pt_emit_prepare().

This fix would seem to be needed for the non-LLVM code as well,
but it's not obvious.  Instead, I added an assertion there to
try to catch this problem if it were to occur there.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=72926
Cc: 10.0 <mesa-stable@lists.freedesktop.org>
Reviewed-by: José Fonseca <jfonseca@vmware.com>
(cherry picked from commit ad814d04ca)

Conflicts:
	src/gallium/auxiliary/draw/draw_pt_fetch_shade_pipeline.c
2014-01-27 16:15:10 -08:00
Kenneth Graunke
fe2678accd glsl: Fix chained assignments of vector channels.
Simple shaders such as:

    void splat(vec2 v, float f) {
        v[0] = v[1] = f;
    }

failed to compile with the following error:
error: value of type vec2 cannot be assigned to variable of type float

First, we would process v[1] = f, and transform:
LHS: (expression float vector_extract (var_ref v) (constant int (1)))
RHS: (var_ref f)
into:
LHS: (var_ref v)
RHS: (expression vec2 vector_insert (var_ref v) (constant int (1))
                 (var_ref f))

Note that the LHS type is now vec2, not a float.  This is surprising,
but not the real problem.

After emitting assignments, this ultimately becomes:
(declare (temporary) vec2 assignment_tmp)
(assign (xy)
  (var_ref assignment_tmp)
  (expression vec2 vector_insert (var_ref v) (constant int (1))
              (var_ref f)))
  (assign (xy) (var_ref v) (var_ref assignment_tmp))

We would then return (var_ref assignment_tmp) as the rvalue, which has
the wrong type---it should be float, but is instead a vec2.

To fix this, we simply return (vector_extract (var_ref assignment_temp)
<the appropriate channel>) to pull out the desired float value.

Fixes Piglit's chained-assignment-with-vector-constant-index.vert and
chained-assignment-with-vector-dynamic-index.vert tests.

Cc: mesa-stable@lists.freedesktop.org
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=74026
Reported-by: Dan Ginsburg <dang@valvesoftware.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 44a86e2b4f)
2014-01-25 16:55:24 -08:00
Kenneth Graunke
83e9eb81be glsl: Rename "expr" to "lhs_expr" in vector_extract munging code.
When processing assignments, we have both an LHS and RHS.  At a glance,
"lhs_expr" clearly refers to the LHS, while a generic name like "expr"
is ambiguous.

Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 6c158e110c)
2014-01-25 16:55:15 -08:00
Anuj Phogat
8c467b825f glsl: Disable ARB_texture_rectangle in shader version 100.
OpenGL with ARB_ES2_compatibility allows shaders that specify #version
100.

This fixes the Khronos OpenGL test(Texture_Rectangle_Samplers_frag.test)
failure.

Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
(cherry picked from commit c907595ba7)
2014-01-25 16:53:05 -08:00
Brian Paul
79ef990ef8 st/mesa: fix glReadBuffer(GL_NONE) segfault
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=73956
Cc: 10.0 <mesa-stable@lists.freedesktop.org>
Tested-by: Ahmed Allam <ahmabdabd@hotmail.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
(cherry picked from commit f7c118ffbf)
2014-01-25 16:52:34 -08:00
Marek Olšák
b1694c9f87 gallium/util: util_format_srgb should not return FORMAT_NONE for sRGB formats
This fixes a serious regression introduced
in 4e549ddb50.

Cc: 9.2 10.0 <mesa-stable@lists.freedesktop.org>

Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit d40532f260)
2014-01-25 16:52:17 -08:00
Ilia Mirkin
e2b6834c87 st/vdpau: don't return a device if the screen doesn't support NPOT
NV3x cards don't support NPOT textures. Technically this restriction
could be worked around, but since it also doesn't expose any video
decoding hw, just turn it off entirely.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: 10.0 <mesa-stable@lists.freedesktop.org>
Reviewed-by: Christian König <christian.koenig@amd.com>
(cherry picked from commit 00e4314f6d)
2014-01-25 16:46:11 -08:00
Emil Velikov
04e5f2e94f nv50: access only the available amount of constbuf
The constbuf array is sized as NV50_MAX_PIPE_CONSTBUFS entries per
shader stage. Currently the nv50 driver handles only 3 shader stages,
so accessing the array out of bounds wreaks havoc.

Cc: 9.1 9.2 10.0 <mesa-stable@lists.freedesktop.org>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
(cherry picked from commit 12e744abbb)
2014-01-25 16:45:49 -08:00
Emil Velikov
a3f259e404 nv50: access only the available amount of textures
The textures array is sized as PIPE_MAX_SAMPLERS entries per shader stage.
Currently the nv50 driver handles only 3 shader stages, so accessing the
array out of bounds wreaks havoc.

Fixes a segfault in piglit/bin/arb_texture_buffer_object-data-sync -fbo -auto

Cc: 9.1 9.2 10.0 <mesa-stable@lists.freedesktop.org>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
(cherry picked from commit d606ca37eb)
2014-01-25 16:45:36 -08:00
Ilia Mirkin
705da42130 mesa: fix GL_COLOR_SUM enum for drivers without ARB_vertex_program
Commit c13970808 (mesa: GL_EXT_secondary_color is not optional) changed

CHECK_EXTENSION2(EXT_secondary_color, ARB_vertex_program, cap)

to

CHECK_EXTENSION(ARB_vertex_program, cap)

However CHECK_EXTENSION2 checks that either extension is available, not
both. Remove the extension check entirely since the intent was for it to
always be enabled.

v2: Fix glGet*(GL_COLOR_SUM) too.  Suggested by Ian.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Cc: 9.2 10.0 <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 739dc95e67)
2014-01-25 16:45:16 -08:00
Aaron Watry
b646441307 st/dri: prevent leak of dri option default values
v2: Change comment style

CC: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
(cherry picked from commit ce3528896b)
2014-01-25 16:44:23 -08:00
Aaron Watry
0ec1ae90ef radeon: Move gfx/dma cs cleanup to r600_common_context_cleanup
The radeonsi code was not cleaning up either of these items, leading to
leaked memory.

v2: Move cleanup to r600_common_context_cleanup instead of duplicating
    the logic for SI

CC: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
(cherry picked from commit 5ac3229f76)

Conflicts:
	src/gallium/drivers/radeon/r600_pipe_common.c
2014-01-25 16:43:47 -08:00
Ian Romanick
0fd4cf4bf8 mesa: Add COMPRESSED_RGBA_S3TC_DXT1_EXT to COMPRESSED_TEXTURE_FORMATS for GLES
The ES and desktop GL specs diverge here.  Yay!

In desktop OpenGL, the driver can perform online compression of
uncompressed texture data.  GL_NUM_COMPRESSED_TEXTURE_FORMATS and
GL_COMPRESSED_TEXTURE_FORMATS give the application a list of formats
that it could ask the driver to compress with some expectation of
quality.  The GL_ARB_texture_compression spec calls this "suitable for
general-purpose usage."  As noted above, this means
GL_COMPRESSED_RGBA_S3TC_DXT1_EXT is not included in the list.

In OpenGL ES, the driver never performs compression.
GL_NUM_COMPRESSED_TEXTURE_FORMATS and GL_COMPRESSED_TEXTURE_FORMATS give
the application a list of formats that the driver can receive from the
application.  It is the *complete* list of formats.  The
GL_EXT_texture_compression_s3tc spec says:

    "New State for OpenGL ES 2.0.25 and 3.0.2 Specifications

        The queries for NUM_COMPRESSED_TEXTURE_FORMATS and
        COMPRESSED_TEXTURE_FORMATS include COMPRESSED_RGB_S3TC_DXT1_EXT,
        COMPRESSED_RGBA_S3TC_DXT1_EXT, COMPRESSED_RGBA_S3TC_DXT3_EXT,
        and COMPRESSED_RGBA_S3TC_DXT5_EXT."

Note that the addition is only to the OpenGL ES specification!

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
See-also: http://lists.freedesktop.org/archives/mesa-dev/2013-October/047439.html
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 0a75909b3f)
2014-01-25 16:39:13 -08:00
Emil Velikov
45f0736aa5 st/mesa: use signed temporary variable to store _ColorDrawBufferIndexes
The temporary variable used to store _ColorDrawBufferIndexes must be
signed (GLint), otherwise the following conditional is evaluated
incorrectly, leading to crashes in the driver/Mesa or reads and writes
of arbitrary memory locations. The bug dates back to 2009.

Cc: 10.0 9.2 9.1 <mesa-stable@lists.freedesktop.org>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
(cherry picked from commit bfcf78c110)
2014-01-25 16:38:37 -08:00
Emil Velikov
b513c66a4e mesa: use signed temporary variable to store _ColorDrawBufferIndexes
_ColorDrawBufferIndexes is defined as GLint*, so using a GLuint*
results in the first part of the conditional always evaluating to
true.

Unintentionally introduced by the following commit, this will result
in a driver segfault if one is using an old version of the piglit test

    bin/clearbuffer-mixed-format -auto -fbo

commit 03d848ea10
Author: Marek Olšák <marek.olsak@amd.com>
Date:   Wed Dec 4 00:27:20 2013 +0100

    mesa: fix interpretation of glClearBuffer(drawbuffer)

    The corresponding piglit test supported this incorrect behavior instead of
    pointing it out.

Cc: Marek Olšák <marek.olsak@amd.com>
Cc: 10.0 9.2 9.1 <mesa-stable@lists.freedesktop.org>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
(cherry picked from commit 10368e1446)
2014-01-25 16:38:12 -08:00
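A minimal, self-contained C sketch of the signedness problem described in the
two commits above; the names are illustrative, not the actual Mesa variables.

    #include <stdio.h>

    typedef int GLint;
    typedef unsigned int GLuint;

    int main(void)
    {
        GLint indexes[1] = { -1 };      /* -1 marks "no buffer attached" */

        GLint  s = indexes[0];          /* correct: signed temporary */
        GLuint u = (GLuint)indexes[0];  /* wrong: unsigned temporary */

        /* With an unsigned temporary the "is a buffer attached?" test is
         * always true, so the caller goes on to index an array with a huge
         * value instead of skipping the buffer. */
        printf("signed   check: %d\n", s >= 0);  /* prints 0 */
        printf("unsigned check: %d\n", u >= 0);  /* always prints 1 */
        return 0;
    }
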
Michał Górny
dbc0ae1079 Use AC_PATH_TOOL instead of AC_PATH_PROG for llvm-config.
This should help with cross-compiling and multilib when a $CHOST-specific
llvm-config is expected rather than the build host's default one.

It will help us a bit in Gentoo where we've started using
i686-pc-linux-gnu-llvm-config for 32-bit multilib LLVM.

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Michał Górny <mgorny@gentoo.org>
Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=73100

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 5ea2376334)
2014-01-25 16:37:41 -08:00
Paul Berry
9ca4c8f6a2 i965: Ensure that all necessary state is re-emitted if we run out of aperture.
Prior to this patch, if we ran out of aperture space during
brw_try_draw_prims(), we would rewind the batch buffer pointer
(potentially throwing away some state that may have been emitted by
brw_upload_state()), flush the batch, and then try again.  However, we
wouldn't reset the dirty bits to the state they had before the call to
brw_upload_state().  As a result, when we tried again, there was a
danger that we wouldn't re-emit all the necessary state.  (Note: prior
to the introduction of hardware contexts, this wasn't a problem
because flushing the batch forced all state to be re-emitted).

This patch fixes the problem by leaving the dirty bits set at the end
of brw_upload_state(); we only clear them after we have determined
that we don't need to rewind the batch buffer.

Cc: 10.0 9.2 <mesa-stable@lists.freedesktop.org>

Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit fb6d9798a0)
2014-01-25 16:37:19 -08:00
Marek Olšák
502d89b260 st/mesa: use sRGB formats for MSAA resolving if destination is sRGB
Copied from the i965 driver, including the big comment.

Cc: 9.2 10.0 <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 4e549ddb50)
2014-01-25 16:35:57 -08:00
Eric Anholt
3a6271890c i965: Don't do the temporary-and-blit-copy for INVALIDATE_RANGE maps.
We definitely want to fall through to the unsynchronized map case, instead
of wasting bandwidth on a copy.  Prevents a -43.2407% +/- 1.06113% (n=49)
performance regression on aa10perf when teaching glamor to provide the
GL_INVALIDATE_RANGE_BIT information.

This is a performance fix, which I usually wouldn't cherry-pick to stable.
But this really was just a bug in the code; its presence would
discourage developers from giving us the best information they can, and I
think we've got fairly high confidence in the unsynchronized map path
already.

Cc: 10.0 9.2 <mesa-stable@lists.freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit f46563fe1c)
2014-01-09 12:24:44 -08:00
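A hedged client-side sketch of the hint discussed above: passing
GL_MAP_INVALIDATE_RANGE_BIT tells the driver the old contents of the mapped
range are not needed, so (after this fix) it can take the unsynchronized path
instead of making a temporary copy. Assumes a GL 3.0+ context with the entry
points resolved; the helper name is made up.

    #define GL_GLEXT_PROTOTYPES 1
    #include <string.h>
    #include <GL/gl.h>
    #include <GL/glext.h>

    static void upload_region(GLuint buf, GLintptr offset,
                              GLsizeiptr size, const void *data)
    {
        glBindBuffer(GL_ARRAY_BUFFER, buf);
        void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, offset, size,
                                     GL_MAP_WRITE_BIT |
                                     GL_MAP_INVALIDATE_RANGE_BIT);
        if (ptr) {
            memcpy(ptr, data, (size_t)size);
            glUnmapBuffer(GL_ARRAY_BUFFER);
        }
    }
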
Eric Anholt
9b3ed4c8c2 i965: Fix handling of MESA_pack_invert in blit (PBO) readpixels.
Fixes piglit GL_MESA_pack_invert/readpixels and GPU hangs with glamor and
cairo-gl.

Cc: 10.0 9.2 <mesa-stable@lists.freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
(cherry picked from commit e186b927b8)
2014-01-09 12:24:03 -08:00
Thomas Sondergaard
38235d2923 mesa: Namespace qualify fma to override ambiguity with fma from math.h
MSVC 2013 version of math.h includes an fma() function.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit e8ff08edd8)
2014-01-09 12:23:32 -08:00
Thomas Sondergaard
0df489f0e0 mesa: Work around internal compiler error
This small rearrangement avoids an MSVC 2013 ICE. It should also give
a better memory access order.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 8fcddd325c)
2014-01-09 12:23:14 -08:00
Thomas Sondergaard
31e2824d99 mesa: Fix compile error with MSVC 2013
This fixes the following compile error:
src\glsl\ir_constant_expression.cpp(1405) : error C2666: 'copysign' : 3
overloads have similar conversions

Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 067ad6e53e)
2014-01-09 12:22:40 -08:00
Thomas Sondergaard
700b916da1 mesa: Preliminary support for MSVC_VERSION=12.0
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 20e65c92c7)
2014-01-09 12:22:19 -08:00
Chris Forbes
c24489b0ef i965: fold offset into coord for textureOffset(gsampler2DRect)
The hardware is broken with nonzero texel offsets and unnormalized
coordinates; instead of doing correct offsetting, we get garbage.

This just extends the existing workaround for ir_txf and
ir_tg4+gsampler2DRect to also consider ir_tex+gsampler2DRect.

Fixes broken rendering in 'tesseract' when 'mesa_texrectoffset_bug' is
not enabled; also fixes the new piglit test
'tests/spec/glsl-1.30/execution/fs-textureOffset-Rect'.

This has been broken ~forever; I suggest including it in 10.0 only, because
the lowering pass doesn't exist in 9.2 or earlier, so those branches would
require quite a different patch.

Signed-off-by: Chris Forbes <chrisf@ijw.co.nz>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Cc: Lee Salzman <lsalzman@gmail.com>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 9e99735f30)
2014-01-09 12:20:23 -08:00
Andreas Fänger
2b205f2864 swrast: fix delayed texel buffer allocation regression for OpenMP
Commit 9119269ca1 moved the texel
buffer allocation to _swrast_texture_span(). However, when compiled
with OpenMP support this code already runs multi-threaded, so a
critical section is required to prevent multiple allocations and
rendering errors.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 2a0fb946e1)
2014-01-09 12:19:01 -08:00
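A minimal sketch (not the actual swrast code) of the kind of guard described
above: lazy allocation reached from an OpenMP-parallel region has to be
serialized, otherwise several threads may allocate the buffer at once.

    #include <stdlib.h>

    static float *texel_buffer;   /* shared; allocated on first use */

    static float *get_texel_buffer(size_t texels)
    {
    #ifdef _OPENMP
    #pragma omp critical(texel_buffer_alloc)
    #endif
        {
            if (!texel_buffer)
                texel_buffer = malloc(texels * sizeof *texel_buffer);
        }
        return texel_buffer;
    }
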
Brian Paul
b1ff3f6270 mesa: implement missing glGet(GL_RGBA_SIGNED_COMPONENTS_EXT) query
This is part of the GL_EXT_packed_float extension.

  Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
  Reviewed-by: Chris Forbes <chrisf@ijw.co.nz>
  (cherry picked from commit 3486f6f31b)

Also squashed in a subsequent bug fix:

  mesa: check for MESA_FORMAT_RGB9_E5_FLOAT in _mesa_is_format_signed()

  This packed floating point format only stores positive values.

  Reviewed-by: Marek Olšák <marek.olsak@amd.com>
  Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
  Reviewed-by: Roland Scheidegger <sroland@vmware.com>
  (cherry picked from commit 0fc8d7c66e)

Also squashed in a second, subsequent bug fix:

  mesa: check bits per channel for GL_RGBA_SIGNED_COMPONENTS_EXT query

  If a channel has zero bits it's not signed.

  v2: also check for luminance and intensity format bits.  Bruce
  Merry's proposed piglit test hits the luminance case.

  Reviewed-by: Matt Turner <mattst88@gmail.com>
  (cherry picked from commit d046fd731a)

Bugzilla: http://bugs.freedesktop.org/show_bug.cgi?id=73096
Cc: 10.0 <mesa-stable@lists.freedesktop.org>

Conflicts:
	src/mesa/main/get.c
2014-01-09 12:15:17 -08:00
Carl Worth
5310a8cc20 Add md5sums for 10.0.2 release.
Which can be added only after the tag, of course.
2014-01-09 11:59:08 -08:00
Carl Worth
108e50c3bc docs: Add release notes for 10.0.2 release.
Which will happen today.
2014-01-09 11:49:28 -08:00
Carl Worth
44dfcf6e88 Update version to 10.0.2
In preparation for the upcoming 10.0.2 release.
2014-01-09 11:45:18 -08:00
Alexander von Gluck IV
e833368e04 Haiku: Add in public GL kit headers
* These make up the base of what C++ GL Haiku applications
  use for 3D rendering.
* Not placed in includes/GL to prevent Haiku headers from
  getting installed on non-Haiku systems.

Acked-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 56d920a5c1)
2014-01-02 17:11:17 -08:00
Ilia Mirkin
3efc2bbf07 nv50: fix a small leak on context destroy
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
(cherry picked from commit f50a45452a)
2014-01-02 17:11:17 -08:00
Paul Berry
d46a58703a glsl: Fix inconsistent assumptions about ir_loop::counter.
The compiler back-ends (i965's fs_visitor and brw_visitor,
ir_to_mesa_visitor, and glsl_to_tgsi_visitor) assume that when
ir_loop::counter is non-null, it points to a fresh ir_variable that
should be used as the loop counter (as opposed to an ir_variable that
exists elsewhere in the instruction stream).

However, previous to this patch:

(1) loop_control_visitor did not create a new variable for
    ir_loop::counter; instead it re-used the existing ir_variable.
    This caused the loop counter to be double-incremented (once
    explicitly by the body of the loop, and once implicitly by
    ir_loop::increment).

(2) ir_clone did not clone ir_loop::counter properly, resulting in the
    cloned ir_loop pointing to the source ir_loop's counter.

(3) ir_hierarchical_visitor did not visit ir_loop::counter, resulting
    in the ir_variable being missed by reparenting.

Additionally, most optimization passes (e.g. loop unrolling) assume
that the variable mentioned by ir_loop::counter is not accessed in the
body of the loop (an assumption which (1) violates).

The combination of these factors caused a perfect storm in which the
code worked properly nearly all of the time: for loops that got
unrolled, (1) would introduce a double-increment, but loop unrolling
would fail to notice it (since it assumes that ir_loop::counter is not
accessed in the body of the loop), so it would unroll the loop the
correct number of times.  For loops that didn't get unrolled, (1)
would introduce a double-increment, but then later when the IR was
cloned for linking, (2) would prevent the loop counter from being
cloned properly, so it would look to further analysis stages like an
independent variable (and hence the double-increment would stop
occurring).  At the end of linking, (3) would prevent the loop counter
from being reparented, so it would still belong to the shader object
rather than the linked program object.  Provided that the client
program didn't delete the shader object, the memory would never get
reclaimed, and so the shader would function properly.

However, for loops that didn't get unrolled, if the client program did
delete the shader object, and the memory belonging to the loop counter
got re-used, this could cause a use-after-free bug, leading to a
crash.

This patch fixes loop_control_visitor, ir_clone, and
ir_hierarchical_visitor to treat ir_loop::counter the same way the
back-ends treat it: as a freshly allocated ir_variable that needs to
be visited and cloned independently of other ir_variables.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=72026

Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit d6eb4321d0)
2014-01-02 17:10:39 -08:00
Paul Berry
8eee788bd6 glsl: Teach ir_variable_refcount about ir_loop::counter variables.
If an ir_loop has a non-null "counter" field, the variable referred to
by this field is implicitly read and written by the loop.  We need to
account for this in ir_variable_refcount, otherwise there is a danger
we will try to dead-code-eliminate the loop counter variable.

Note: at the moment the dead code elimination bug doesn't occur due to
a bug in ir_hierarchical_visitor: it doesn't visit the "counter"
field, so dead code elimination doesn't treat it as a candidate for
elimination.  But the patch to follow will fix that bug, so we need to
fix ir_variable_refcount first in order to avoid breaking dead code
elimination.

Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 9d2951ea0a)
2014-01-02 17:10:21 -08:00
Chad Versace
9ccb6cc7b7 i965/gen6: Fix HiZ hang in WebGL Google Maps
Emitting flushes before depth and hiz resolves at the top of blorp's
state emission fixes the hang. Marchesin and I found the fix
experimentally, as opposed to adhering to a documented hardware
workaround.  A more minimal fix likely exists, but this gets the job
done.

Fixes HiZ hangs in the new WebGL Google maps on Sandybridge Chrome OS.
Tested by zooming in and out continuously for 2 hours.

This patch is based on
8bc07bb701

CC: mesa-stable@lists.freedesktop.org
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=70740
Signed-off-by: Stéphane Marchesin <marcheu@chromium.org>
Signed-off-by: Chad Versace <chad.versace@linux.intel.com>
Reviewed-by: Paul Berry <stereotype441@gmail.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 1a928816a1)
2014-01-02 15:59:44 -08:00
Marek Olšák
4d7961e95e st/mesa: fix glClear with multiple colorbuffers and different formats
Cc: 10.0 9.2 9.1 <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 0612005aa6)
2014-01-02 15:57:41 -08:00
Erik Faye-Lund
b8be00e5f2 glcpp: error on multiple #else/#elif directives
The preprocessor currently accepts multiple else/elif-groups
per if-section. The GLSL preprocessor is defined by the C++
specification, which defines the following parse rule:

if-section:
	if-group elif-groups(opt) else-group(opt) endif-line

This clearly only allows a single else-group, that has to come
after any elif-groups.

So let's modify the code to follow the specification. Add a test
to prevent regressions.

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Carl Worth <cworth@cworth.org>

Cc: 10.0 <mesa-stable@lists.freedesktop.org>
(cherry picked from commit eb212c5a30)
2014-01-02 15:57:41 -08:00
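An illustrative shader (hypothetical, stored as a C string) whose preprocessor
section is rejected under the rule above: an #if section may contain at most
one #else group, and it must come after any #elif groups.

    static const char *rejected_src =
        "#version 120\n"
        "#if defined(FAST_PATH)\n"
        "vec4 shade() { return vec4(1.0); }\n"
        "#else\n"
        "vec4 shade() { return vec4(0.5); }\n"
        "#else\n"                       /* second #else group: now an error */
        "vec4 shade() { return vec4(0.0); }\n"
        "#endif\n"
        "void main() { gl_FragColor = shade(); }\n";
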
Kenneth Graunke
347f149332 Revert "mesa: Remove GLXContextID typedef from glx.h."
This reverts commit 136a12ac98.

According to belak51 on IRC, this commit broke Allegro, which would no
longer compile.  Applications apparently expect the GLXContextID typedef
to exist in glx.h; removing it breaks them.  A bit of searching around
the internet revealed other complaints since upgrading to Mesa 10.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit f425d56ba4)
2014-01-02 15:57:41 -08:00
Alex Deucher
49c865180a r600g: fix SUMO2 pci id
0x9649 is sumo2, not sumo.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
CC: "9.2" "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit e2d53fac1c)
2014-01-02 15:57:41 -08:00
Aaron Watry
765ceb6a36 r600/pipe: Stop leaking context->start_compute_cs_cmd.buf on EG/CM
Found while tracking down memory leaks in VDPAU playback

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 3ddabe0d52)
2014-01-02 15:57:41 -08:00
Aaron Watry
7a7166f832 st/vdpau: Destroy context when initialization fails
Prevents a potential memory leak found when tracking down something else.

Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 20446d0e53)
2014-01-02 15:57:41 -08:00
Aaron Watry
a4a2f239d7 radeon/llvm: Free target data at end of optimization
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 767b0f82c3)
2014-01-02 15:57:41 -08:00
Aaron Watry
23d290d102 r600/compute: Use the correct FREE macro when deleting compute state
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 0bd858d7ff)
2014-01-02 15:57:41 -08:00
Aaron Watry
2a20bf3ed2 r600/compute: Free compiled kernels when deleting compute state
v2: Remove unnecessary null pointer check

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit e19717d075)
2014-01-02 15:57:41 -08:00
Aaron Watry
87cdd13324 radeon/compute: Stop leaking LLVMContexts in radeon_llvm_parse_bitcode
Previously we were creating a new LLVMContext every time that we called
radeon_llvm_parse_bitcode, which caused us to leak the context every time
that we compiled a CL program.

Sadly, we can't dispose of the LLVMContext at the point that it was being
created because evergreen_launch_grid (and possibly the SI equivalent) was
assuming that the context used to compile the kernels was still available.

Now, we'll create a new LLVMContext when creating EG/SI compute state, store
it there, and pass it to all of the places that need it.

The LLVM Context gets destroyed when we delete the EG/SI compute state.

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 8c9a9205d9)
2014-01-02 15:57:41 -08:00
Aaron Watry
b2ea582679 pipe_loader/sw: close dev->lib when initialization fails
Prevents a memory leak.

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit a7653c19a3)
2014-01-02 15:57:41 -08:00
Aaron Watry
0057a2b0e7 clover: Remove unused variable
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 862f55c29c)
2014-01-02 15:57:40 -08:00
Jonathan Liu
8518b6360d llvmpipe: use pipe_sampler_view_release() to avoid segfault
This fixes another case of faulting when freeing a pipe_sampler_view
that belongs to a previously destroyed context.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Signed-off-by: Jonathan Liu <net147@gmail.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 7990ab58fa)
2014-01-02 15:57:40 -08:00
Jonathan Liu
ffd89b27a7 st/mesa: use pipe_sampler_view_release()
This fixes a crash where old_view->context was already freed in the
pipe_sampler_view_reference function contained in
src/gallium/auxiliary/util/u_inlines.h. As a result, the
sampler_view_destroy function pointer contained 0xfeeefeee indicating
freed heap memory.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Signed-off-by: Jonathan Liu <net147@gmail.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 670be71bd8)
2014-01-02 15:57:40 -08:00
Henri Verbeet
b0ee1b1748 i915: Add support for gl_FragData[0] reads.
Similar to 556a47a262; without this, reading from
gl_FragData[0] would cause a software fallback.

Bugzilla: https://bugs.winehq.org/show_bug.cgi?id=33964
Signed-off-by: Henri Verbeet <hverbeet@gmail.com>
Cc: 10.0 9.2 9.1 <mesa-stable@lists.freedesktop.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit b094b3b9f4)
2014-01-02 15:57:40 -08:00
Kenneth Graunke
8dd89b8ad8 i965: Fix 3DSTATE_PUSH_CONSTANT_ALLOC_PS packet creation.
When adding geometry shader support, we accidentally reversed the size
and offset parameters.

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Paul Berry <stereotype441@gmail.com>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 51c9cfc296)
2014-01-02 15:57:40 -08:00
Kevin Rogovin
ec80a279a5 Use line number information from entire function expression
This patch changes the error reporting behavior for an incorrect function
invocation (triggered when match_function_by_name() is unable to find a
matching function) from using the line number information
associated with the function name term to using the line number
information of the entire function expression. Fixes bug #72264.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=72264
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 23d294bb60)
2014-01-02 15:57:40 -08:00
Anuj Phogat
f6ea5b7bd7 mesa: Fix error code generation in glBeginConditionalRender()
This patch changes the error condition to satisfy below statement
from OpenGL 4.3 core specification:
"An INVALID_OPERATION error is generated if id is the name of a query
object with a target other than SAMPLES_PASSED, ANY_SAMPLES_PASSED, or
ANY_SAMPLES_PASSED_CONSERVATIVE, or if id is the name of a query
currently in progress."

Cc: mesa-stable@lists.freedesktop.org
Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 7a73c6acb0)
2014-01-02 15:57:40 -08:00
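A hedged usage sketch of the rule quoted above: beginning conditional
rendering with a query whose target is not one of the *SAMPLES_PASSED*
targets must now raise GL_INVALID_OPERATION. Assumes a GL 3.x+ context with
the entry points resolved; the function name is made up.

    #define GL_GLEXT_PROTOTYPES 1
    #include <GL/gl.h>
    #include <GL/glext.h>

    static void demo_invalid_conditional_render(void)
    {
        GLuint q;
        glGenQueries(1, &q);

        glBeginQuery(GL_TIME_ELAPSED, q);   /* not a *SAMPLES_PASSED* target */
        glEndQuery(GL_TIME_ELAPSED);

        glBeginConditionalRender(q, GL_QUERY_WAIT);
        /* With this change, glGetError() here reports GL_INVALID_OPERATION. */
        glEndConditionalRender();

        glDeleteQueries(1, &q);
    }
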
Kristian Høgsberg
db0dc5c008 dri_util: Don't assume __DRIcontext->driverPrivate is a gl_context
The driverPrivate pointer is opaque to the driver and we can't assume
it's a struct gl_context in dri_util.c.  Instead provide a helper function
to set the struct gl_context flags from the incoming DRI context flags.

v2 (idr): Modify the other classic drivers to also use
driContextSetFlags.  I ran all the piglit GLX_ARB_create_context tests
with i965 and classic swrast without regressions.

Signed-off-by: Kristian Høgsberg <krh@bitplanet.net>
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com> [v1]
Reviewed-by: Eric Anholt <eric@anholt.net>
Tested-by: Ilia Mirkin <imirkin@alum.mit.edu> [v1 on Gallium nouveau]
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 38366c0c6e)
2014-01-02 15:57:40 -08:00
Marek Olšák
c2940d11d0 mesa: fix interpretation of glClearBuffer(drawbuffer)
The corresponding piglit test supported this incorrect behavior instead of
pointing it out.

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Cc: 10.0 9.2 9.1 <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 03d848ea10)
2014-01-02 15:57:40 -08:00
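A brief usage sketch of the interpretation fixed above: for GL_COLOR the
drawbuffer argument is an index into the current draw-buffer list (0 for
GL_DRAW_BUFFER0, 1 for GL_DRAW_BUFFER1, and so on), not a
GL_COLOR_ATTACHMENTn enum. Hypothetical helper, assuming a GL 3.0+ context.

    #define GL_GLEXT_PROTOTYPES 1
    #include <GL/gl.h>
    #include <GL/glext.h>

    static void clear_second_draw_buffer(void)
    {
        static const GLfloat red[4] = { 1.0f, 0.0f, 0.0f, 1.0f };

        /* Clears whatever attachment GL_DRAW_BUFFER1 currently points at. */
        glClearBufferfv(GL_COLOR, 1, red);
    }
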
Vadim Girlin
27623f2645 r600g/sb: fix stack size computation on evergreen
On evergreen we have to reserve 1 stack element in some additional cases
besides the ones mentioned in the docs, but stack size computation was
recently reimplemented exactly as described in the docs by the patch that
added workarounds for stack issues on EG/CM, resulting in regressions
with some apps (Serious Sam 3).

This patch fixes it by restoring previous behavior.

Fixes https://bugs.freedesktop.org/show_bug.cgi?id=72369

Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Tested-by: Andre Heider <a.heider@gmail.com>
(cherry picked from commit 00faf82832)
2014-01-02 14:40:47 -08:00
Carl Worth
6f7da0188a docs: Add md5sums for the 10.0.1 release. 2013-12-12 22:16:28 -08:00
Carl Worth
12484d2582 Update version for the 10.0.1 release.
It's so nice that this is updated in just a single place now. Thanks, Emil!
2013-12-12 21:34:55 -08:00
Carl Worth
d573899b93 Makefile: Add bin/test-driver to EXTRA_FILES
I'm not sure why this change is necessary. When I've built previous tar files
(such as 9.2.4) with the "make tarballs" target, they include the
bin/test-driver file. But at my first attempt to build the tar files for the
10.0.1 release this file was not being included and the build failed.
2013-12-12 21:33:02 -08:00
Carl Worth
142144e7fd docs: Add release notes for 10.0.1 2013-12-12 21:16:37 -08:00
Ilia Mirkin
a717ae1b2d nv50: report 15 max inputs for fragment programs
First off, nv50_program only has 16 in/out varyings. However reporting
16 makes 'm' become 68 in nv50_fp_linkage_validate with the
varying-packing-simple piglit test. (Subverting the assert makes it
compile but fail.) With this patch, varying-packing-simple passes.

See: https://bugs.freedesktop.org/show_bug.cgi?id=69155

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: "9.2 10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit bad8871e52)
2013-12-12 15:35:57 -08:00
Maarten Lankhorst
a876ea4b76 nouveau: Fix compiler warning regression
cfg is now unused, remove it.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 5576ad11ed)
2013-12-12 15:35:34 -08:00
Dave Airlie
d7a71b7181 swrast: fix readback regression since inversion fix
Readback from the frontbuffer with swrast was already broken; that bug
just made it more obviously broken. This fixes it by inverting the
sub-image gets. Also fixes a few other piglits.

Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=72327
Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=72325

(for 9.2 the patches this depends on were asked to be backported separately
 in an email).
Cc: "9.2" "10.0" mesa-stable@lists.fedoraproject.org
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>

(cherry picked from commit 0b16042377)
2013-12-12 15:35:04 -08:00
Axel Davy
2776a496d4 Enable throttling in SwapBuffers
flush_with_flags, when available, allows the driver to throttle.
Using it suppresses input lag issues that can be observed in heavy
rendering situations on non-Intel cards.

Signed-off-by: Axel Davy <axel.davy@ens.fr>
Cc: "10.0" mesa-stable@lists.freedesktop.org
(cherry picked from commit afcce46fd5)
2013-12-12 15:34:27 -08:00
Kristian Høgsberg
1919ec6ba4 egl/wayland: Send commit after flushing the driver context
This typically won't make a difference, since we only send the requests at
wl_display_flush() time.  There might be a small race
with another thread calling wl_display_flush() after our commit request,
but before we flush the DRI driver.  Moving the commit below the DRI
driver flush call looks more natural and eliminates the small race.

Cc: "10.0" mesa-stable@lists.freedesktop.org
(cherry picked from commit 33eb5eabee)
2013-12-12 15:33:59 -08:00
Axel Davy
188c60143b egl/wayland: Flush the wl_display at the end of SwapBuffers
We would like the compositor to receive the committed buffer
as soon as possible, so it has time to process it and
release old ones. We shouldn't rely on the client
to flush the queue for us.

Signed-off-by: Axel Davy <axel.davy@ens.fr>
Cc: "10.0" mesa-stable@lists.freedesktop.org
(cherry picked from commit 402bf6e8d0)
2013-12-12 15:33:33 -08:00
Kristian Høgsberg
d0f606ffbd egl/wayland: Damage INT32_MAX x INT32_MAX region for eglSwapBuffers
If we're not using EGL_EXT_swap_buffers_with_damage, we have to
damage the full extent.  EGL operates on buffer coordinates, but
wl_surface.damage takes surface coordinates.  EGL doesn't know the
buffer transformation (rotated or scaled) and can't post accurate
damage in surface coordinates.  The damage event however is clipped to
the surface extents so we can just damage the maximum rectangle.

In case of EGL_EXT_swap_buffers_with_damage, the application knows
the buffer transform and is expected to pass in rectangles in
surface space.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=70250
Cc: "10.0" mesa-stable@lists.freedesktop.org
Tested-by: U. Artie Eoff <ullysses.a.eoff@intel.com>
(cherry picked from commit bce64c6c83)
2013-12-09 17:41:23 -08:00
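A minimal sketch of the fallback described above (hypothetical helper, not
the actual egl/wayland code): damage the largest possible rectangle and let
the compositor clip it to the surface extents.

    #include <stdint.h>
    #include <wayland-client.h>

    static void damage_whole_surface(struct wl_surface *surface)
    {
        wl_surface_damage(surface, 0, 0, INT32_MAX, INT32_MAX);
        wl_surface_commit(surface);
    }
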
Jordan Justen
fdede18275 dri megadriver_stub: add compatibility for older DRI loaders
To help the transition period when DRI loaders are being updated
to support the newer __driDriverExtensions_foo mechanism,
we populate __driDriverExtensions with the extensions returned
by __driDriverExtensions_foo during a library constructor
function.

We find the driver foo's name by using the dladdr function,
which gives the path of the dynamic library that is
being loaded.

Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Keith Packard <keithp@keithp.com>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 4859d492b2)
2013-12-09 17:28:20 -08:00
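A simplified sketch of the dladdr trick described above (not the actual stub
code): from the address of any symbol inside the loaded driver library,
dladdr() recovers the path it was loaded from, from which the driver name can
be parsed.

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>

    static void print_driver_library_path(void)
    {
        Dl_info info;

        /* Any address inside the library works; use this function itself. */
        if (dladdr((void *)print_driver_library_path, &info) && info.dli_fname)
            printf("driver loaded from: %s\n", info.dli_fname);
    }
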
Tom Stellard
4cbd424631 r300/compiler/tests: Fix line length check in test parser
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

CC: "9.2" "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 9a5ce0c4c9)
2013-12-09 17:28:15 -08:00
Tom Stellard
331a8a3586 r300/compiler/tests: Fix segfault
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

CC: "9.2" "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 1896431f79)
2013-12-09 17:28:09 -08:00
Ilia Mirkin
f528981f1a nouveau/video: update a few more h264 picparm field names
Based on comments by Benjamin Morris <bmorris@nvidia.com> in
http://lists.freedesktop.org/archives/nouveau/2013-December/015328.html

This adds setting of is_long_term, and updates a few field names we were
unclear about.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 2cd2b9705e)
2013-12-09 17:28:07 -08:00
Ilia Mirkin
d5f1a270ef nouveau/video: update h264 picparm field names based on usage
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 78525dae8a)
2013-12-09 17:28:04 -08:00
Ilia Mirkin
f4f1159716 nv50: enable h264 and mpeg4 for nv98+ (vp3, vp4.0)
Create the ref_bo without any storage type flags set for now. The issue
probably arises from our use of the additional buffer space at the end
of the ref_bo. It should probably be split up in the future.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Tested-by: Martin Peres <martin.peres@labri.fr>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit e01ba9d6b0)
2013-12-09 17:28:01 -08:00
Ian Romanick
b531dcaec4 glsl: Don't emit empty declaration warning for a struct specifier
The intention is that things like

   int;

will generate a warning.  However, we were also accidentally emitting
the same warning for things like

  struct Foo { int x; };

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=68838
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Cc: Aras Pranckevicius <aras@unity3d.com>
Cc: "9.2 10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 758658850b)
2013-12-09 17:27:40 -08:00
Ilia Mirkin
b160fea306 nv50: wait on the buf's fence before sticking it into pushbuf
This resolves some rendering issues in source games.
See https://bugs.freedesktop.org/show_bug.cgi?id=64323

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: "9.2 10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 0e5bf85651)
2013-12-06 10:51:49 -08:00
Ilia Mirkin
05d2a796a0 nouveau: avoid leaking fences while waiting
This fixes a memory leak in some situations. Also avoids emitting an
extra fence if the kick handler does the call to nouveau_fence_next
itself.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: "9.2 10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit ce6dd69697)
2013-12-06 10:51:45 -08:00
Ilia Mirkin
de517d2bb3 nv50: Fix GPU_READING/WRITING bit removal
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
CC: "9.1, 9.2, 10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit c45cf6199f)
2013-12-06 10:51:18 -08:00
Ian Romanick
8991193f70 Remove a057b83 from the pick list
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
2013-12-06 09:42:58 -08:00
Ilia Mirkin
6c00504a8a mesa: don't leak performance monitors on context destroy
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 267679be84)
2013-12-06 08:09:03 -08:00
Emil Velikov
e6710f4217 automake: include only one copy VERSION in tarball
The VERSION file is tracked by git (git ls-files), thus
adding it to EXTRA_FILES will result in a duplicate copy
within the final tarball.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=72230
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reported-by: Patrick Steinhardt <ps@pks.im>
Tested-by: Patrick Steinhardt <ps@pks.im>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
(cherry picked from commit 507c2356e3)
2013-12-06 08:08:09 -08:00
Chad Versace
31751bd40b i965: Add extra-alignment for non-msrt fast color clear for all hw (v2)
The BSpec states that the alignment for the non-msrt clear rectangle must
be doubled; the BSpec does not restrict the workaround to specific
hardware.

Commit 9a1a67b applied the workaround to Haswell GT3.  Commit 8b659ce
expanded the workaround to all Haswell variants. This commit expands it
to all hardware.

No Piglit regressions on Ivybridge 0x0166. No fixes either.

I know of no Ivybridge or Baytrail bug related to this workaround.
However, the BSpec says the extra alignment is required, so let's do it.

v2: Apply to all hardware, not just gen7.

CC: "9.2, 10.0" <mesa-stable@lists.freedesktop.org>
CC: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Paul Berry <stereotype441@gmail.com>
Signed-off-by: Chad Versace <chad.versace@linux.intel.com>
(cherry picked from commit 998018d7be)
2013-12-06 08:08:09 -08:00
Chad Versace
edca52e6e7 i965/hsw: Apply non-msrt fast color clear w/a to all HSW GTs
Pre-patch, the workaround was applied to only HSW GT3. However, the
workaround also fixes render corruption on the HSW GT1 Chromebook,
codenamed Falco.

Also, update the BSpec quote that discusses the workaround to reflect
the latest BSpec.

The BSpec states that the workaround is required for Ivybridge and
Baytrail as well as Haswell. But, we apply the workaround to only
Haswell because (a) we suspect that is the only hardware where it is
actually required and (b) we haven't yet validated the workaround for
the other hardware.

CC: "9.2, 10.0" <mesa-stable@lists.freedesktop.org>
CC: Anuj Phogat <anuj.phogat@gmail.com>
OTC-Tracker: CHRMOS-812
Reviewed-by: Paul Berry <stereotype441@gmail.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Chad Versace <chad.versace@linux.intel.com>
(cherry picked from commit 8b659cef3a)
2013-12-06 08:08:09 -08:00
Paul Berry
2457b5bfa4 i965/gen6: Fix multisample resolve blits for luminance/intensity 32F formats.
On gen6, multisample resolve blits use the SAMPLE message to blend
together the 4 samples for each texel.  For some reason, SAMPLE
doesn't blend together the proper samples when the source format is
L32_FLOAT or I32_FLOAT, resulting in blocky artifacts.

To work around this problem, sample from the source surface using
R32_FLOAT.  This shouldn't affect rendering correctness, because when
doing these resolve blits, the destination format is R32_FLOAT, so the
channel replication done by L32_FLOAT and I32_FLOAT is unnecessary.

Fixes piglit tests on Sandy Bridge:
- spec/ARB_texture_float/multisample-formats 2 GL_ARB_texture_float
- spec/ARB_texture_float/multisample-formats 4 GL_ARB_texture_float

No piglit regressions on Sandy Bridge.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=70601

Cc: Kenneth Graunke <kenneth@whitecape.org>
Cc: mesa-stable@lists.freedesktop.org

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit c4cf487315)
2013-12-06 08:08:09 -08:00
Thomas Hellstrom
edb4956932 st/xa: Bump major version number to 2
For some reason this was left out when the version was changed...

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
2013-12-06 06:14:37 -08:00
Ian Romanick
643f986942 docs: Add 10.0 release md5sums
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
2013-11-30 23:29:21 -08:00
Ian Romanick
724c07ff12 mesa: Bump version to 10.0 (final)
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
2013-11-30 23:25:47 -08:00
Ian Romanick
56d1ba17f1 docs: Update release notes for 10.0
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
2013-11-30 23:25:28 -08:00
Kenneth Graunke
44e38a878a i965: Always reserve binding table space for at least one render target.
In brw_update_renderbuffer_surfaces(), if there are no color draw
buffers, we always set up a null render target at surface index 0 so we
have something to use with the FB write marking the end of thread.

However, when we recently began computing surface indexes dynamically,
we failed to reserve space for it.  This meant that the first texture
would be assigned surface index 0, and our closing FB write would
clobber the texture.

Fixes Piglit's EXT_packed_depth_stencil/fbo-blit-d24s8 test on Gen4-5,
which regressed as of commit 4e5306453d
("i965/fs: Dynamically set up the WM binding table offsets.")

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=70605
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Chris Forbes <chrisf@ijw.co.nz>
Tested-by: lu hua <huax.lu@intel.com>
Cc: "10.0" mesa-stable@lists.freedesktop.org
(cherry picked from commit c4815f6cd6)
2013-11-28 08:37:40 -08:00
Ian Romanick
93dfd0522f dri: Allow __DRI_CTX_FLAG_ROBUST_BUFFER_ACCESS in driCreateContextAttribs
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reported-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 73e9aa9e3f)
2013-11-28 08:37:39 -08:00
Ian Romanick
a5f78c4025 i965: Only enable __DRI2_ROBUSTNESS if kernel support is available
This is a squash of the following two cherry-picked patches:

    i965: Only enable __DRI2_ROBUSTNESS if kernel support is available

    Rather than always advertising the extension but failing to create a
    context with reset notification, just don't advertise it.  I don't know
    why it didn't occur to me to do it this way in the first place.

    NOTE: Kristian requested that I provide a follow-up for master that
    dynamically generates the list of DRI extensions instead of selected
    between two hardcoded lists.

    Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
    Suggested-by: Kristian Høgsberg <krh@bitplanet.net>
    Reviewed-by: Matt Turner <mattst88@gmail.com>
    Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
    Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>
    Cc: "10.0" <mesa-stable@lists.freedesktop.org>
    (cherry picked from commit 9b1c68638d)

and

    i965: Properly reject __DRI_CTX_FLAG_ROBUST_BUFFER_ACCESS when __DRI2_ROBUSTNESS is not enabled

    Only allow __DRI_CTX_FLAG_ROBUST_BUFFER_ACCESS in brwCreateContext if
    intelInitScreen2 also enabled __DRI2_ROBUSTNESS (thereby enabling
    GLX_ARB_create_context).

    This fixes a regression in the piglit test
    "glx/GLX_ARB_create_context/invalid flag"

    v2: Remove commented debug code.  Noticed by Jordan.

    Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
    Reported-by: Paul Berry <stereotype441@gmail.com>
    Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
    Reviewed-by: Matt Turner <mattst88@gmail.com>
    Cc: "10.0" <mesa-stable@lists.freedesktop.org>
    (cherry picked from commit 53a65e547c)
2013-11-28 08:36:51 -08:00
Ian Romanick
9ec00c187c i965: Bump libdrm requirement
drm_intel_get_reset_stats is only available in libdrm-2.4.48, and
libdrm-2.4.49 contains an important bug fix in that function.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit cb728bb028)
2013-11-28 08:35:28 -08:00
Francisco Jerez
5ec641bbc9 glsl: Initialize _mesa_glsl_parse_state::atomic_counter_offsets before using it.
Cc: Ian Romanick <ian.d.romanick@intel.com>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 6b2b4cc885)
2013-11-26 21:04:52 -08:00
Paul Berry
444a621e55 glsl: Fix lowering of direct assignment in lower_clip_distance.
In commit 065da16 (glsl: Convert lower_clip_distance_visitor to be an
ir_rvalue_visitor), we failed to notice that since
lower_clip_distance_visitor overrides visit_leave(ir_assignment *),
ir_rvalue_visitor::visit_leave(ir_assignment *) wasn't getting called.
As a result, clip distance dereferences appearing directly on the
right hand side of an assignment (not in a subexpression) weren't
getting properly lowered.  This caused an ir_dereference_variable node
to be left in the IR that referred to the old gl_ClipDistance
variable.  However, since the lowering pass replaces gl_ClipDistance
with gl_ClipDistanceMESA, this turned into a dangling pointer when the
IR got reparented.

Prior to the introduction of geometry shaders, this bug was unlikely
to arise, because (a) reading from gl_ClipDistance[i] in the fragment
shader was rare, and (b) when it happened, it was likely that it would
either appear in a subexpression, or be hoisted into a subexpression
by tree grafting.

However, in a geometry shader, we're likely to see a statement like
this, which would trigger the bug:

    gl_ClipDistance[i] = gl_in[j].gl_ClipDistance[i];

This patch causes
lower_clip_distance_visitor::visit_leave(ir_assignment *) to call the
base class visitor, so that the right hand side of the assignment is
properly lowered.

Fixes piglit test:
- spec/glsl-1.50/execution/geometry/clip-distance-itemized-copy

Cc: Ian Romanick <idr@freedesktop.org>
Cc: "9.2" <mesa-stable@lists.freedesktop.org>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>

Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 9dfcb05fa6)
2013-11-26 21:04:52 -08:00
Paul Berry
756b4f9a8c i965/gs: Set GS prog_data to NULL if there is no GS program.
The previous commit fixes a bug wherein we would incorrectly refer to
stale geometry shader prog_data when no geometry shader was active.

This patch reduces the likelihood of that sort of bug occurring in the
future by setting prog_data to NULL whenever there is no GS program.

Cc: mesa-stable@lists.freedesktop.org

Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 37bdde1087)
2013-11-26 21:04:52 -08:00
Paul Berry
d963daa380 i965/gs: Properly skip GS binding table upload when no GS active.
Previously, in brw_gs_upload_binding_table(), we checked whether
brw->gs.prog_data was NULL in order to determine whether a geometry
shader was active.  This didn't work: brw->gs.prog_data starts off as
NULL, but it is set to non-NULL when a geometry shader program is
built, and then never set to NULL again.  As a result, if we called
brw_gs_upload_binding_table() while there was no geometry shader
active, but a geometry shader had previously been active, it would
refer to a stale (and possibly freed) prog_data structure.

This patch fixes the problem by modifying
brw_gs_upload_binding_table() to use the proper technique to determine
whether a geometry shader is active: by checking whether
brw->geometry_program is NULL.

This fixes the crash reported in comment 2 of bug 71870 (the incorrect
rendering remains, however).

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=71870

Cc: mesa-stable@lists.freedesktop.org

Reviewed-by: Chris Forbes <chrisf@ijw.co.nz>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 2714ca81b9)
2013-11-26 21:04:51 -08:00
Tom Stellard
bab6f40b29 radeon/compute: Unconditionally inline all functions v2
We need to do this until function calls are supported.

v2:
  - Fix loop conditional

https://bugs.freedesktop.org/show_bug.cgi?id=64225

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit ddc77c5092)
2013-11-26 13:10:29 -08:00
Kenneth Graunke
c0c3fa564b i965: Use __attribute__((flatten)) on fast tiled teximage code.
The fast tiled texture upload code does not compile with GCC 4.8's -Og
optimization flag.

memcpy() has the always_inline attribute set.  This poses a problem,
since {x,y}tile_copy_faster calls it indirectly via {x,y}tile_copy,
and {x,y}tile_copy normally aren't inlined at -Og.

Using __attribute__((flatten)) tells GCC to inline every function call
inside the function, which I believe was the author's intent.

Fix suggested by Alexander Monakov.

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Chad Versace <chad.versace@linux.intel.com>
Cc: mesa-stable@lists.freedesktop.org
(cherry picked from commit ad542a10c5)
2013-11-26 13:09:41 -08:00
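An illustrative (not Mesa's actual) use of the attribute discussed above:
flatten asks GCC to inline every call inside the annotated function, so a
helper that itself calls an always_inline memcpy() still gets inlined even
at -Og.

    #include <stddef.h>
    #include <string.h>

    static inline void copy_row(char *dst, const char *src, size_t n)
    {
        memcpy(dst, src, n);   /* memcpy is always_inline in the headers */
    }

    __attribute__((flatten))
    void copy_tiled(char *dst, const char *src, size_t pitch, size_t rows)
    {
        for (size_t i = 0; i < rows; i++)
            copy_row(dst + i * pitch, src + i * pitch, pitch);
    }
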
Maarten Lankhorst
ec013f809b gbm/dri: hide extension loader symbols
They should not be exposed.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 5455c818b5)
2013-11-26 13:09:29 -08:00
Ian Romanick
866ce39ca0 mesa: Bump version to 10.0.0-rc2
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
2013-11-23 17:23:00 -08:00
Ian Romanick
48e4daf977 Remove 068a073 from the pick list
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
2013-11-23 17:20:36 -08:00
Eric Anholt
1efe2ef620 i965: Fix streamed state dumping/annotation after the blorp-flush change.
I think I was thinking of the batch command packet cache when I pasted
this in, but this counter is only used for dumping out streamed state for
INTEL_DEBUG=batch and for putting annotations in our aub files.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 5891f98145)
2013-11-23 12:55:04 -08:00
Paul Berry
47ff55fa86 mesa: Implement GL_FRAMEBUFFER_ATTACHMENT_LAYERED query.
From section 6.1.18 (Renderbuffer Object Queries) of the GL 3.2 spec,
under the heading "If the value of FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE
is TEXTURE, then":

    If pname is FRAMEBUFFER_ATTACHMENT_LAYERED, then params will
    contain TRUE if an entire level of a three-dimesional texture,
    cube map texture, or one-or two-dimensional array texture is
    attached. Otherwise, params will contain FALSE.

Fixes piglit tests:
- spec/!OpenGL 3.2/layered-rendering/framebuffer-layered-attachments
- spec/!OpenGL 3.2/layered-rendering/framebuffertexture-defaults

Cc: "10.0" <mesa-stable@lists.freedesktop.org>

Reviewed-by: Chris Forbes <chrisf@ijw.co.nz>

v2: Don't include "EXT" in the error message, since this query only
makes sense in context versions that have adopted
glGetFramebufferAttachmentParameteriv().

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit ec79c05cbf)
2013-11-23 12:55:04 -08:00
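A brief usage sketch of the query implemented above, assuming a GL 3.2
context with a framebuffer bound and a texture attached to
GL_COLOR_ATTACHMENT0; the helper name is made up.

    #define GL_GLEXT_PROTOTYPES 1
    #include <GL/gl.h>
    #include <GL/glext.h>

    static GLboolean attachment_is_layered(void)
    {
        GLint layered = GL_FALSE;

        glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER,
                                              GL_COLOR_ATTACHMENT0,
                                              GL_FRAMEBUFFER_ATTACHMENT_LAYERED,
                                              &layered);
        /* GL_TRUE when an entire level of a 3D, cube map, or array texture
         * was attached with glFramebufferTexture(). */
        return layered ? GL_TRUE : GL_FALSE;
    }
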
Paul Berry
8f4d95d41c mesa: Fix texture target validation for glFramebufferTexture()
Previously we were using the code path for validating
glFramebufferTextureLayer().  But glFramebufferTexture() allows
additional texture types.

Fixes piglit tests:
- spec/!OpenGL 3.2/layered-rendering/gl-layer-cube-map
- spec/!OpenGL 3.2/layered-rendering/framebuffertexture

Cc: "10.0" <mesa-stable@lists.freedesktop.org>

Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Chris Forbes <chrisf@ijw.co.nz>

v2: Clarify comment above framebuffer_texture().

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit af1471dc04)
2013-11-23 12:55:04 -08:00
Paul Berry
79d727e063 i965: Fix fast clear of depth buffers.
From section 4.4.7 (Layered Framebuffers) of the GL 3.2 spec:

    When the Clear or ClearBuffer* commands are used to clear a
    layered framebuffer attachment, all layers of the attachment are
    cleared.

This patch fixes the fast depth clear path.

Fixes piglit test "spec/!OpenGL 3.2/layered-rendering/clear-depth".

Cc: "10.0" <mesa-stable@lists.freedesktop.org>

Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
(cherry picked from commit 0831523350)
2013-11-23 12:55:04 -08:00
Paul Berry
7f99ae72c4 i965: Fix blorp clear of layered framebuffers.
From section 4.4.7 (Layered Framebuffers) of the GL 3.2 spec:

    When the Clear or ClearBuffer* commands are used to clear a
    layered framebuffer attachment, all layers of the attachment are
    cleared.

This patch fixes the blorp clear path for color buffers.

Fixes piglit test "spec/!OpenGL 3.2/layered-rendering/clear-color".

Cc: "10.0" <mesa-stable@lists.freedesktop.org>

Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
(cherry picked from commit c1019670ea)
2013-11-23 12:55:04 -08:00
Paul Berry
e934782b2a i965: refactor blorp clear code in preparation for layered clears.
Cc: "10.0" <mesa-stable@lists.freedesktop.org>

Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
(cherry picked from commit 1ec5365429)
2013-11-23 12:55:04 -08:00
Paul Berry
ffa073ec72 mesa: Track number of layers in layered framebuffers.
In order to properly clear layered framebuffers, we need to know how
many layers they have.  The easiest way to do this is to record it in
the gl_framebuffer struct when we check framebuffer completeness.

This patch replaces the gl_framebuffer::Layered boolean with a
gl_framebuffer::NumLayers integer, which is 0 if the framebuffer is
not layered, and equal to the number of layers otherwise.

v2: Remove gl_framebuffer::Layered and make gl_framebuffer::NumLayers
always have a defined value.  Fix factor of 6 error in the number of
layers in a cube map array.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>

Reviewed-by: Chris Forbes <chrisf@ijw.co.nz>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 95140740ad)
2013-11-23 12:47:05 -08:00
Tom Stellard
620d11aed4 radeonsi/compute: Fix LDS size calculation
We need to include the number of LDS bytes allocated by the state tracker.

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 1bdb99330a)
2013-11-23 12:46:59 -08:00
Tom Stellard
c8cf5dc401 r600g/compute: Add a work-around for flushing issues on Cayman
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>

https://bugs.freedesktop.org/show_bug.cgi?id=69321

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 7a30cd7085)
2013-11-23 12:46:22 -08:00
Paul Berry
a645df0134 glsl: Fix interstage uniform interface block link error detection.
Previously, we checked for interstage uniform interface block link
errors in validate_interstage_interface_blocks(), which is only called
on pairs of adjacent shader stages.  Therefore, we failed to detect
uniform interface block mismatches between non-adjacent shader stages.

Before the introduction of geometry shaders, this wasn't a problem,
because the only supported shader stages were vertex and fragment
shaders, therefore they were always adjacent.  However, now that we
allow a program to contain vertex, geometry, and fragment shaders,
that is no longer the case.

Fixes piglit test "skip-stage-uniform-block-array-size-mismatch".

Cc: "10.0" <mesa-stable@lists.freedesktop.org>

v2: Rename validate_interstage_interface_blocks() to
validate_interstage_inout_blocks() to reflect the fact that it no
longer validates uniform blocks.

Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>

v3: Make validate_interstage_inout_blocks() skip uniform blocks.

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 544e3129c5)
2013-11-23 12:45:16 -08:00
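
The bug above comes down to validating only adjacent stage pairs; with VS, GS, and FS all present, a VS/FS uniform-block mismatch is never compared. A hypothetical C sketch of checking every pair of present stages instead (the stage names and match function are invented stand-ins):

    #include <stdbool.h>
    #include <stdio.h>

    enum stage { VS, GS, FS, NUM_STAGES };

    /* Stand-in for the real uniform interface block comparison. */
    static bool uniform_blocks_match(enum stage a, enum stage b)
    {
       (void)a; (void)b;
       return true;
    }

    static bool validate_uniform_blocks(const bool present[NUM_STAGES])
    {
       /* Compare every pair of present stages, not just (i, i + 1). */
       for (int i = 0; i < NUM_STAGES; i++)
          for (int j = i + 1; j < NUM_STAGES; j++)
             if (present[i] && present[j] && !uniform_blocks_match(i, j))
                return false;
       return true;
    }

    int main(void)
    {
       const bool present[NUM_STAGES] = { true, true, true };
       printf("link ok: %d\n", validate_uniform_blocks(present));
       return 0;
    }
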
Paul Berry
3470916d6a glsl: Fix cross-version linking between VS and GS.
Previously, when attempting to link a vertex shader and a geometry
shader that use different GLSL versions, we would sometimes generate a
link error due to the implicit declaration of gl_PerVertex being
different between the two GLSL versions.

This patch fixes that problem by only requiring interface block
definitions to match when they are explicitly declared.

Fixes piglit test "shaders/version-mixing vs-gs".

Cc: "10.0" <mesa-stable@lists.freedesktop.org>

v2: In the interface_block_definition constructor, move the assignment
to explicitly_declared after the existing if block.

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 0f4cacbb53)
2013-11-23 12:44:18 -08:00
Paul Berry
320f2fa45d glsl: Prohibit illegal mixing of redeclarations inside/outside gl_PerVertex.
From section 7.1 (Built-In Language Variables) of the GLSL 4.10
spec:

    Also, if a built-in interface block is redeclared, no member of
    the built-in declaration can be redeclared outside the block
    redeclaration.

We have been regarding this text as a clarification to the behaviour
established for gl_PerVertex by GLSL 1.50, so we apply it regardless
of GLSL version.

This patch enforces the rule by adding an enum to ir_variable to track
how the variable was declared: implicitly, normally, or in an
interface block.

Fixes piglit tests:
- gs-redeclares-pervertex-out-after-global-redeclaration.geom
- vs-redeclares-pervertex-out-after-global-redeclaration.vert
- gs-redeclares-pervertex-out-after-other-global-redeclaration.geom
- vs-redeclares-pervertex-out-after-other-global-redeclaration.vert
- gs-redeclares-pervertex-out-before-global-redeclaration
- vs-redeclares-pervertex-out-before-global-redeclaration

Cc: "10.0" <mesa-stable@lists.freedesktop.org>

v2: Don't set "how_declared" redundantly in builtin_variables.cpp.
Properly clone "how_declared".

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 2bbcf19aca)
2013-11-23 12:42:47 -08:00
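
A minimal sketch of the tracking mechanism the commit above adds; the enum values and function are illustrative, not Mesa's actual identifiers:

    #include <stdbool.h>

    enum how_declared {
       DECLARED_IMPLICITLY,   /* built-in, never redeclared by the shader  */
       DECLARED_NORMALLY,     /* redeclared as an ordinary global          */
       DECLARED_IN_BLOCK      /* redeclared inside a gl_PerVertex block    */
    };

    /* Once a member was redeclared via the block, a later global
     * redeclaration must be rejected, and vice versa. */
    static bool redeclaration_allowed(enum how_declared prev, enum how_declared next)
    {
       if (prev == DECLARED_IN_BLOCK && next == DECLARED_NORMALLY)
          return false;
       if (prev == DECLARED_NORMALLY && next == DECLARED_IN_BLOCK)
          return false;
       return true;
    }
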
Tapani Pälli
2747e72036 mesa: enable GL_TEXTURE_LOD_BIAS set/get
Earlier comments suggest this was removed from the GL core spec, but it is
still there. Enabling it makes the 'texture_lod_bias_getter' Khronos
conformance tests pass, and also removes some errors from the Metro: Last
Light game, which uses this API.

v2: leave NOTE comment (Ian)

Cc: "9.0 9.1 9.2 10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
(cherry picked from commit 7e61b44dcd)
2013-11-23 12:41:46 -08:00
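
For reference, the API being re-enabled above is the per-texture LOD bias; a minimal usage sketch, assuming a current GL context, a bound GL_TEXTURE_2D texture, and headers that expose the GL 1.4 enum (error handling omitted):

    #include <GL/gl.h>

    /* Call with a context current and a 2D texture bound. */
    static void apply_lod_bias(void)
    {
       GLfloat bias = 0.0f;
       glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, -0.5f);
       glGetTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, &bias);
       (void)bias;  /* should now read back -0.5f */
    }
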
Dave Airlie
d4b7ff7fe0 glx: don't fail out when no configs if we have visuals
GLX 1.2 servers with no SGIX_fbconfigs exist (some Citrix thing),
and glxinfo fails completely in those cases.

CC: <mesa-stable@lists.freedesktop.org>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
(cherry picked from commit b01a3a9b72)
2013-11-23 12:41:41 -08:00
Dave Airlie
63b02533f0 mesa/swrast: fix inverted front buffer rendering with old-school swrast
I've no idea when this broke, but we have some people who wanted it fixed,
so here's my attempt.

Reproducer: run readpix with swrast and hit 'f', or run the trivial
'tri -sb' test; things are upside down before this patch, and right-side
up after it.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=62142
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=66213

Cc: <mesa-stable@lists.freedesktop.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
(cherry picked from commit a43b49dfb1)
2013-11-23 12:41:35 -08:00
Matt Turner
19f05b26ba i965: Link -ldl after libmesa.la
DLOPEN_LIBS is part of DRI_LIB_DEPS.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>"
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=71512
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 1f9092958d)
2013-11-23 12:40:34 -08:00
Brian Paul
11da04e1bb st/mesa: fix GL_FEEDBACK mode inverted Y coordinate bug
We need to check the drawbuffer's orientation before inverting Y
coordinates.  Fixes piglit feedback tests when running with the
-fbo option.

Cc: "9.2" "10.0" <mesa-stable@lists.freedesktop.org>

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
(cherry picked from commit 15d8e05e1e)
2013-11-23 12:40:29 -08:00
Paul Berry
989d650090 i965/vec4: Fix broken IR annotation in debug output.
Commit 70953b5 (i965: Initialize all member variables of
vec4_instruction on construction) inadvertently added a line to the
vec4_instruction constructor setting this->ir to NULL, wiping out the
previously set value.  As a result, ever since then, the output of
INTEL_DEBUG=vs and INTEL_DEBUG=gs has been missing IR annotations.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>

Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 60b1a118e1)
2013-11-23 12:40:20 -08:00
Tom Stellard
9495fb4fff r600g/compute: Fix handling of global buffers in r600_resource_copy_region()
Global buffers do not have an associated cs_buf handle, so
we can't copy them using r600_copy_buffer().

https://bugs.freedesktop.org/show_bug.cgi?id=64226

Reviewed-by: Marek Olšák <marek.olsak@amd.com>

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 1b9511d7ce)
2013-11-23 12:39:45 -08:00
Tom Stellard
521c59f132 gallium: Pass version scripts to linker using --version-script=
This fixes build failures with the gold linker.

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 17930a66aa)
2013-11-23 12:36:07 -08:00
Tom Stellard
eafb9f6756 clover: Optionally return context's devices from clGetProgramInfo()
The spec allows clGetProgramInfo() to return information about either
the devices associated with the program or the devices associated
with the context.  If there are no devices associated with the program,
then we return devices associated with the context.

https://bugs.freedesktop.org/show_bug.cgi?id=52171

Reviewed-by: Francisco Jerez <currojerez@riseup.net>

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit a84dd2398f)
2013-11-23 12:34:16 -08:00
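
From the application side, the behaviour above means CL_PROGRAM_DEVICES stays useful even before a program is built for specific devices. A hedged C sketch of the query (error checks omitted; 'program' is assumed to be a valid cl_program):

    #include <stdio.h>
    #include <CL/cl.h>

    static void print_program_device_count(cl_program program)
    {
       size_t bytes = 0;
       /* First call just asks how large the device list is. */
       clGetProgramInfo(program, CL_PROGRAM_DEVICES, 0, NULL, &bytes);
       printf("%zu device id(s)\n", bytes / sizeof(cl_device_id));
    }
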
Paul Berry
5af1fb5324 i965/gen7: Emit workaround flush when changing GS enable state.
v2: Don't go to extra work to avoid extraneous flushes.  (Previous
experiments in the kernel have suggested that flushing the pipeline
when it is already empty is extremely cheap).

Cc: "10.0" <mesa-stable@lists.freedesktop.org>

Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 7dfb4b2d00)
2013-11-23 12:33:17 -08:00
Emil Velikov
0040edcf9d docs: indicate GLX_MESA_query_renderer's completion
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Acked-by: Chris Forbes <chrisf@ijw.co.nz>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
(cherry picked from commit d33d260b90)
2013-11-23 12:32:38 -08:00
Emil Velikov
defff44e1c docs: add a note about removed state tracker/targets
The X.Org state tracker is gone, as well as the xvmc/vdpau
r300 and softpipe targets.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Acked-by: Chris Forbes <chrisf@ijw.co.nz>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
(cherry picked from commit ca9794658e)
2013-11-23 12:32:34 -08:00
Vadim Girlin
8f78b06dca r600g/sb: work around hw issues with stack on eg/cm
v2: make it actually work, improve condition

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=68503
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
(cherry picked from commit 4cb04aa0df)
2013-11-23 12:32:28 -08:00
Vinson Lee
367241ec64 i965: Add missing break in SHADER_OPCODE_GEN7_SCRATCH_READ case.
Fixes "Missing break in switch" defect reported by Coverity.

Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit b570c4229f)
2013-11-23 12:23:08 -08:00
Ian Romanick
15118b45a0 mesa: Bump version to 10.0.0-rc1 2013-11-18 12:23:56 -08:00
Aaron Watry
3fd32619d7 radeon/llvm: Free elf_buffer after use
Prevents a memory leak.

v2: Remove null check

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 2be85e2492)
2013-11-15 13:39:41 -08:00
Aaron Watry
7a87dc278e r600/llvm: Free binary.code/binary.config in r600_llvm_compile
radeon_llvm_compile allocates memory for binary.code, binary.config,
or neither depending on what's being done.

We need to make sure to free that memory after it's no longer needed.

v2: Don't bother checking for null before FREE()

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 01f3622c74)
2013-11-15 13:39:41 -08:00
Aaron Watry
f843604b6a r600/llvm: initialize radeon_llvm_binary
use memset to initialize to 0's... otherwise code_size and config_size
could be uninitialized when read later in this method.

It's also hard to do NULL checks on uninitialized pointers.

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>

v2: Fix indentation

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit dd73b99420)
2013-11-15 13:39:41 -08:00
Brian Paul
e9f8b78278 svga: mark dest image as defined in svga_surface_copy()
After we blit/copy to a dest texture image we need to mark it as
being defined.  This fixes broken mipmap generation for quite a
few texture formats.  Mipgen involves making texture views and
svga_texture_view_surface() skips texture images that are undefined.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>

Reviewed-by: José Fonseca <jfonseca@vmware.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
(cherry picked from commit 3969330b47)
2013-11-15 13:39:41 -08:00
Brian Paul
dfff838429 svga: do primitive trimming in translate_indices()
The index translation code expects the number of indexes to be
consistent with the primitive type (ex: a multiple of 3 for
PIPE_PRIM_TRIANGLES).  If it's not, we can write out of bounds
in the destination buffer.

Fixes failed assertions in the pipebuffer debug code found with
Piglit primitive-restart-draw-mode test.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>

Reviewed-by: José Fonseca <jfonseca@vmware.com>
(cherry picked from commit 79984b9928)
2013-11-15 13:39:41 -08:00
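
A tiny sketch of the trimming rule described above: the index count handed to translation must be a whole number of primitives, so any trailing partial primitive is dropped (names are illustrative; only the triangles case is shown):

    #include <stdio.h>

    /* Trim an index count down to a multiple of the per-primitive vertex count. */
    static unsigned trim_index_count(unsigned count, unsigned verts_per_prim)
    {
       return count - (count % verts_per_prim);
    }

    int main(void)
    {
       /* 8 indices cannot form whole triangles; only 6 should be translated. */
       printf("%u\n", trim_index_count(8, 3));
       return 0;
    }
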
Aaron Watry
11982ca08d gallium/pipe_loader: un-reference udev resources when we're done with them.
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 598f61ba28)
2013-11-15 13:39:41 -08:00
Aaron Watry
713966c82f radeonsi/compute: Dispose of LLVM module after compiling kernels
v2: Fix indentation

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 4c6ac9e614)
2013-11-15 13:39:41 -08:00
Aaron Watry
3a98fc6abe radeonsi/compute: Free program and program.kernels on shutdown
v2: Fix indentation

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 35dad4a1e2)
2013-11-15 13:39:41 -08:00
Aaron Watry
531637feee radeon/llvm: Free created llvm memory buffer
v2: Fix indentation

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit d41b10f811)
2013-11-15 13:39:41 -08:00
Aaron Watry
02807c06b8 radeon/llvm: Free libelf resources
v2: Fix indentation

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit a2b93da84b)
2013-11-15 13:39:40 -08:00
Aaron Watry
9ed0452740 radeon/llvm: fix spelling error
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit df482fe02f)
2013-11-15 13:39:40 -08:00
Tom Stellard
ef8fcfc9cf clover: Support multiple devices in clCreateContextFromType() v2
v2:
  - Use clGetDeviceIDs to query devices.

Reviewed-by: Francisco Jerez <currojerez@riseup.net>

CC: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 17af4dd52b)
2013-11-15 13:39:40 -08:00
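
On the application side, the change above lets a context created from a device type span all matching devices. A hedged usage sketch (single platform assumed, error checks omitted):

    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
       cl_platform_id platform;
       cl_uint num_devices = 0;

       clGetPlatformIDs(1, &platform, NULL);
       clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 0, NULL, &num_devices);

       cl_context_properties props[] = {
          CL_CONTEXT_PLATFORM, (cl_context_properties)platform, 0
       };
       cl_context ctx = clCreateContextFromType(props, CL_DEVICE_TYPE_GPU,
                                                NULL, NULL, NULL);
       printf("platform exposes %u GPU device(s)\n", num_devices);
       if (ctx)
          clReleaseContext(ctx);
       return 0;
    }
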
Paul Berry
1b45f255b5 glsl: Rework interface block linking.
Previously, when doing intrastage and interstage interface block
linking, we only checked the interface type; this prevented us from
catching some link errors.

We now check the following additional constraints:

- For intrastage linking, the presence/absence of interface names must
  match.

- For shader ins/outs, the interface names themselves must match when
  doing intrastage linking (note: it's not clear from the spec whether
  this is necessary, but Mesa's implementation currently relies on
  it).

- Array vs. nonarray must be consistent, taking into account the
  special rules for vertex-geometry linkage.

- Array sizes must be consistent (exception: during intrastage
  linking, an unsized array matches a sized array).

Note: validate_interstage_interface_blocks currently handles both
uniforms and in/out variables.  As a result, if all three shader types
are present (VS, GS, and FS), and a uniform interface block is
mentioned in the VS and FS but not the GS, it won't be validated.  I
plan to address this in later patches.

Fixes the following piglit tests in spec/glsl-1.50/linker:
- interface-blocks-vs-fs-array-size-mismatch
- interface-vs-array-to-fs-unnamed
- interface-vs-unnamed-to-fs-array
- intrastage-interface-unnamed-array

v2: Simplify logic in intrastage_match() for handling array sizes.
Make extra_array_level const.  Use an unnamed temporary
interface_block_definition in validate_interstage_interface_blocks()'s
first call to definitions->store().

Cc: "10.0" <mesa-stable@lists.freedesktop.org>

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
(cherry picked from commit f38ac41ed4)
2013-11-15 13:39:40 -08:00
Paul Berry
1a163c0b34 i965: Fix vertical alignment for multisampled buffers.
From the Sandy Bridge PRM, Vol 1 Part 1 7.18.3.4 (Alignment Unit
Size):

    j [vertical alignment] = 4 for any render target surface that is
    multisampled (4x)

From the Ivy Bridge PRM, Vol 4 Part 1 2.12.2.1 (SURFACE_STATE for most
messages), under the "Surface Vertical Alignment" heading:

    This field is intended to be set to VALIGN_4 if the surface was
    rendered as a depth buffer, for a multisampled (4x) render target,
    or for a multisampled (8x) render target, since these surfaces
    support only alignment of 4.

Back in 2012 when we added multisampling support to the i965 driver,
we forgot to update the logic for computing the vertical alignment, so
we were often using a vertical alignment of 2 for multisampled
buffers, leading to subtle rendering errors.

Note that the specs also require a vertical alignment of 4 for all
Y-tiled render target surfaces; I plan to address that in a separate
patch.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=53077
Cc: mesa-stable@lists.freedesktop.org

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit b4c3b833ec)
2013-11-15 13:39:40 -08:00
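
The rule being enforced above reduces to a small decision; a hypothetical sketch (the real driver derives this from the surface setup, and these names are invented):

    /* Per the quoted PRM text: multisampled (and Y-tiled) render targets
     * must use a vertical alignment of 4; 2 was the old default. */
    static unsigned choose_vertical_alignment(unsigned num_samples, int y_tiled)
    {
       return (num_samples > 1 || y_tiled) ? 4 : 2;
    }
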
Paul Berry
53e681f2fe main: Fix MaxUniformComponents for geometry shaders.
For both vertex and fragment shaders we default MaxUniformComponents
to 4 * MAX_UNIFORMS.  It makes sense to do this for geometry shaders
too; if back-ends have different limits they can override them as
necessary.

Fixes piglit test:
spec/glsl-1.50/built-in constants/gl_MaxGeometryUniformComponents

Cc: "10.0" <mesa-stable@lists.freedesktop.org>

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Chad Versace <chad.versace@linux.intel.com>
(cherry picked from commit 46e9f78efc)
2013-11-15 13:39:40 -08:00
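
The default described above is a one-liner; a sketch with hypothetical constant values:

    #define MAX_UNIFORMS 1024   /* illustrative value, not Mesa's actual limit */

    /* Each uniform is at most a vec4, hence 4 components per uniform. */
    enum { MAX_GEOMETRY_UNIFORM_COMPONENTS = 4 * MAX_UNIFORMS };
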
Fredrik Höglund
10c25e58ca mesa: Fix derived vertex state not being updated in glCallList()
AEcontext::NewState is not always set when the vertex array state
is changed.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=71492
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: José Fonseca <jfonseca@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit ff353c218a)
2013-11-15 13:39:40 -08:00
Ian Romanick
0558e10160 dri: Change value param to unsigned
This silences some compiler warnings in i915 and i965.  See also
75982a5.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit a15a19f0d1)
2013-11-15 13:39:40 -08:00
Ian Romanick
1e51d3a668 i965: Use drm_intel_get_aperture_sizes instead of hard-coded 2GiB
Systems with little physical memory installed will report less than
2GiB, and some systems may (hypothetically?) have a larger address space
for the GPU.  My IVB still reports 1534.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit cb6182bdfa)
2013-11-15 13:39:39 -08:00
Ian Romanick
e5839c2397 i915: Use drm_intel_get_aperture_sizes instead of drmAgpSize
Send the zombie back to the grave before it infects the townsfolk.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 9fe108db09)
2013-11-15 13:39:39 -08:00
Kristian Høgsberg
7d2187176a dri: Remove redundant createNewContext function from __DRIimageDriverExtension
createContextAttribs is a superset of what createNewContext provides.
Also remove the function typedef, since createNewContext is deprecated
and no longer used in multiple interfaces.

Signed-off-by: Kristian Høgsberg <krh@bitplanet.net>
Reviewed-by: Eric Anholt <eric@anholt.net>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit e048953145)
2013-11-15 13:39:39 -08:00
Kristian Høgsberg
329a75511f wayland: Use __DRIimage based getBuffers implementation when available
This lets us allocate color buffers as __DRIimages and pass them into
the driver instead of having to create a __DRIbuffer with the flink
that requires.

Signed-off-by: Kristian Høgsberg <krh@bitplanet.net>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Chad Versace <chad.versace@linux.intel.com>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 68bb26bead)
2013-11-15 13:39:39 -08:00
Kristian Høgsberg
76434775e0 gbm: Add support for __DRIimage based getBuffers when available
This lets us allocate color buffers as __DRIimages and pass them into
the driver instead of having to create a __DRIbuffer with the flink
that requires.

With this patch, we can now run gbm on render-nodes.  A render-node is a
drm device that doesn't support modesetting and all the legacy DRI ioctls.
flink is also not supported, but now that gbm doesn't need flink, we can
run piglit on head-less gbm or head-less GPGPU.

Signed-off-by: Kristian Høgsberg <krh@bitplanet.net>
Reviewed-by: Chad Versace <chad.versace@linux.intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Tested-by: Jordan Justen <jordan.l.justen@intel.com>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 04e3ef00db)
2013-11-15 13:39:39 -08:00
Ander Conselvan de Oliveira
2365244302 dri/i915, dri/i965: Fix support for planar images
Planar images have format __DRI_IMAGE_FORMAT_NONE, but the patch that
moved the conversion from dri_format to the mesa format made it
impossible to allocate an image with that format.

Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>
Reviewed-by: Eric Anholt <eric@anholt.net>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 5ba6be2617)
2013-11-15 13:39:39 -08:00
Eric Anholt
3e6f200250 i965/fs: Try a different pre-scheduling heuristic if the first spills.
Since LIFO fails on some shaders in one particular way, and non-LIFO
systematically fails in another way on different kinds of shaders, try
them both, and pick whichever one successfully register allocates first.
Slightly prefer non-LIFO in case we produce extra dependencies in register
allocation, since it should start out with fewer stalls than LIFO.

This is madness, but I haven't come up with another way to get Unigine
Tropics to not spill while keeping other programs from spilling and
retaining the non-Unigine performance wins from texture-grf.

total instructions in shared programs: 1626728 -> 1626288 (-0.03%)
instructions in affected programs:     1015 -> 575 (-43.35%)
GAINED:                                50
LOST:                                  0

Improves Unigine Tropics performance by 14.5257% +/- 0.241838% (n=38)

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=70445
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
(cherry picked from commit e9daead784)
2013-11-15 13:39:39 -08:00
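
The strategy above is essentially "try the preferred heuristic, and fall back if it spills"; a hedged control-flow sketch with invented names:

    #include <stdbool.h>
    #include <stdio.h>

    enum sched_mode { SCHED_NON_LIFO, SCHED_LIFO };

    /* Stand-in: pretend non-LIFO scheduling spills for this shader. */
    static bool allocates_without_spills(enum sched_mode mode)
    {
       return mode == SCHED_LIFO;
    }

    int main(void)
    {
       /* Prefer non-LIFO; only fall back to LIFO if the first attempt spills. */
       enum sched_mode mode = SCHED_NON_LIFO;
       if (!allocates_without_spills(mode))
          mode = SCHED_LIFO;
       printf("chose %s\n", mode == SCHED_LIFO ? "LIFO" : "non-LIFO");
       return 0;
    }
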
Eric Anholt
99c62ff2ea i965/fs: Do instruction pre-scheduling just before register allocation.
Long ago, the HW_REG usage in assign_curb/urb_setup() were scheduling
barriers, so we had to run scheduler before them in order for it to be
able to do basically anything.  Now that that's fixed, we can delay the
scheduling until we go to allocate (which will make the next change less
scary).

Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
(cherry picked from commit fbd8303a94)
2013-11-15 13:39:39 -08:00
Eric Anholt
a5a6ef9702 i965/fs: Ignore actual latency pre-reg-alloc.
We care about depth-until-program-end, as a proxy for "make sure I
schedule those early instructions that open up the other things that can
make progress while keeping register pressure low", not actual latency
(since we're relying on the post-register-alloc scheduling to actually
schedule for the hardware).

total instructions in shared programs: 1609931 -> 1609931 (0.00%)
instructions in affected programs:     0 -> 0
GAINED:                                55
LOST:                                  43

Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
(cherry picked from commit f72a0d99fe)
2013-11-15 13:39:39 -08:00
Eric Anholt
6640147463 i965/fs: Fix message setup for SIMD8 spills.
In the SIMD16 spilling changes, I replaced a "1" in the spill path with
"mlen", but obviously it wasn't mlen before because spills have the g0
header along with the payload. The interface I was trying to use was
asking for how many physical regs we're writing, so we're looking for "1"
or "2".

I'm guessing this actually passed piglit because the high 8 bits of the
execution mask in SIMD8 mode are all 0s.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Paul Berry <stereotype441@gmail.com>
(cherry picked from commit 7c90947a0b)
2013-11-15 13:39:38 -08:00
Eric Anholt
229ee20460 i965/fs: Prefer things we know reduce reg pressure when pre-scheduling.
Previously, the best thing we had was to schedule the things unblocked by
the last chosen instruction, on the hope that it would be consuming two
values at the end of their live intervals while only producing one new
value.  But that's just a guess, and we can do counting of usage of
registers to know when an instruction would (almost surely) reduce
register pressure.

The only failure mode I know of in this new dominant heuristic is that
inside of a loop when scheduling the iterator (for example), choosing the
last use of the iterator doesn't actually reduce the live interval of the
iterator.  But it doesn't seem to matter in shader-db:

total instructions in shared programs: 1618700 -> 1618700 (0.00%)
instructions in affected programs:     0 -> 0
GAINED:                                13
LOST:                                  0

Note: The new functions are made virtual because I expect we'll soon lift
the pre-regalloc scheduling heuristic over to the vec4 backend.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
(cherry picked from commit bc0e3bb4d0)
2013-11-15 13:39:38 -08:00
Eric Anholt
dbddd86cc2 i965: Fix undefined value usage in ABO setup.
Fixes a compiler warning.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
(cherry picked from commit 9b3e1592c2)
2013-11-15 13:39:38 -08:00
Francisco Jerez
c702f5eead clover: Fix the const variant of adaptor_range::end to deal with mismatching range sizes.
Fixes infinite loop in find_grid_optimal_factor() in cases where the
user specifies a grid size with fewer dimensions than the device
supports.

Reported-by: Tom Stellard <thomas.stellard@amd.com>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 99d447cc5d)
2013-11-15 13:39:38 -08:00
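
The infinite loop above is what happens when two ranges of different lengths are walked in lockstep; clover's code is C++, but the defensive idea fits in a small C sketch (illustrative only):

    #include <stdio.h>

    static void zip_print(const int *a, unsigned len_a,
                          const int *b, unsigned len_b)
    {
       /* Iterate only as far as the shorter range so the walk terminates. */
       unsigned n = len_a < len_b ? len_a : len_b;
       for (unsigned i = 0; i < n; i++)
          printf("%d %d\n", a[i], b[i]);
    }

    int main(void)
    {
       const int grid[]    = { 64, 64 };           /* user gave 2 dimensions  */
       const int dev_max[] = { 256, 256, 256 };    /* device supports 3       */
       zip_print(grid, 2, dev_max, 3);
       return 0;
    }
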
Cyril Brulebois
c4cc166abc gallium: fix build on GNU/Hurd due to missing PIPE_OS_HURD detection
Thanks to Pino Toscano.  Patch from Debian package.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>

Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 2d77e4f922)
2013-11-15 13:39:38 -08:00
Petr Sebor
2a3dcece72 meta: enable vertex attributes in the context of the newly created array object
Otherwise, the function would enable generic vertex attributes 0
and 1 of the array object it does not own. This was causing crashes
in Euro Truck Simulator 2, since the incorrectly enabled generic
attribute 0 in the foreign context later took precedence over the vertex
position attribute, leading to a NULL pointer dereference.

Cc: "9.2" <mesa-stable@lists.freedesktop.org>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>

Signed-off-by: Petr Sebor <petr@scssoft.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit f2b844f59d)
2013-11-15 13:39:38 -08:00
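
A minimal GL usage sketch of the rule the fix above restores: enable generic attributes only while the newly created vertex array object is bound, so foreign objects are left untouched (assumes a current GL 3.0+ context with the entry points resolved; error handling omitted):

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    static GLuint make_private_vao(void)
    {
       GLuint vao = 0;
       glGenVertexArrays(1, &vao);
       glBindVertexArray(vao);           /* enable attributes on *this* object */
       glEnableVertexAttribArray(0);
       glEnableVertexAttribArray(1);
       glBindVertexArray(0);
       return vao;
    }
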
Brian Paul
accc276df2 mesa: call update_array_format() after error checking
We try to do all error checking before changing any GL state.

Cc: "10.0" <mesa-stable@lists.freedesktop.org>

Jordan Justen <jordan.l.justen@intel.com>
(cherry picked from commit ce193d4f01)
2013-11-15 13:39:38 -08:00
Ilia Mirkin
afbdcdcaaf nouveau/video: mark bitstream-level acceleration as unsupported
Adding a vl_mpeg-based helper didn't seem to work, as it produced data
that the card couldn't handle. (And I didn't investigate further.) This
makes the decoding functionality only accessible via XvMC and avoids
crashes when attempting to use VDPAU.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 08122e151a)
2013-11-15 13:39:38 -08:00
Ilia Mirkin
6f2877c40d nouveau/video: don't try on nv3x
It doesn't work, and I don't know why, but there's no point in hanging
people's displays until it gets figured out.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit e8d5d3409c)
2013-11-15 13:39:38 -08:00
Tom Stellard
02d9e1be87 egl-static: Only export necessary symbols v3
This fixes a crash in glamor when mesa links against static LLVM.

v2:
  - Inline LINKER_SCRIPT variable

v3: Kai Wasserbäch
  - Fix out-of-tree builds

Tested-by: Kai Wasserbäch <kai@dev.carbon-project.or>
(cherry picked from commit 594fa4a208)
2013-11-15 13:39:37 -08:00
Tom Stellard
095d583e52 configure.ac: Don't require shared LLVM when building OpenCL
This works now that pipe_*.so is no longer exporting LLVM symbols.

Tested-by: Kai Wasserbäch <kai@dev.carbon-project.or>
(cherry picked from commit cb080a10b6)
2013-11-15 13:39:37 -08:00
Tom Stellard
8af132fca9 pipe-loader: Only export necessary symbols v3
This makes it possible to use clover with statically linked LLVM.

v2:
  - Inline LINKER_SCRIPT variable

v3: Kai Wasserbäch
  - Fix out-of-tree builds

Tested-by: Kai Wasserbäch <kai@dev.carbon-project.or>
(cherry picked from commit 6d6c749215)
2013-11-15 13:39:37 -08:00
Tom Stellard
ade312cd8a radeonsi/compute: Add Sea Islands support
(cherry picked from commit a859131003)
2013-11-15 13:39:37 -08:00
Rico Schüller
f9a74a0b4c tests: Fix make check for out of tree builds.
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Signed-off-by: Rico Schüller <kgbricola@web.de>
(cherry picked from commit 23afe71f44)
2013-11-15 13:39:37 -08:00
Brian Paul
0e3f5999b9 osmesa: fix broken triangle/line drawing when using float color buffer
Doesn't seem to help with bug 71363 but it fixed a failure I found in
my testing.

Cc: "9.2" <mesa-stable@lists.freedesktop.org>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
(cherry picked from commit a66a008b17)
2013-11-15 13:39:37 -08:00
Chris Forbes
ebc460bc5f i965: convert brw_lower_offset_array_visitor to ir_rvalue_visitor
Previously, we would bogusly replace the entire statement containing the
ir_texture node with an ir_dereference_variable.

Correct this to just replace the ir_texture node itself as intended.

Signed-off-by: Chris Forbes <chrisf@ijw.co.nz>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 5442c0eae3)
2013-11-15 13:39:37 -08:00
Chris Forbes
0010bdd54a glsl: fix missing breaks in equals(ir_texture,..)
Signed-off-by: Chris Forbes <chrisf@ijw.co.nz>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit d257350949)
2013-11-15 12:28:34 -08:00
Matt Turner
b8a631295a i965/fs: Don't perform CSE on inst HW_REG dests (unless it's null)
Commit b16b3c87 began performing CSE on CMP instructions with null
destinations. I relaxed the restrictions a bit too much, thereby
allowing CSE to be performed on instructions with, for instance, an
explicit accumulator destination.

This broke the arb_gpu_shader5/fs-imulExtended shader tests because
they emit MUL instructions with the accumulator as the destination. CSE
would instead cause the MUL to write to a GRF, which is lower precision
than the accumulator.

Reviewed-by: Eric Anholt <eric@anholt.net>
Cc: 10.0 <mesa-stable@lists.freedesktop.org>
(cherry picked from commit 68349e5219)
2013-11-15 12:28:34 -08:00
Ian Romanick
c94ed272eb Add .cherry-ignore file
Since we've disabled DRI3 completely in 10.0, commit f0f202e is no
longer necessary.
2013-11-15 12:28:33 -08:00
Eric Anholt
47139b0233 glx: Back DRI3 enablement out of the stable branch.
After more testing (everyone else trying to build the stack is having as
much trouble as I had, even after the problems I had were fixed), it
really feels like dri3 is not something we're ready to support in this
stable branch.  The .c/.h code will remain here to enable easier
cherry-picking from master, and everything stays on master so we can ship
a solid DRI3 in 3 months.

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
2013-11-15 12:28:33 -08:00
Brian Paul
94251281b4 glx: change query_renderer_integer() value param to unsigned
When this function was added, the returned value was signed in some
places, unsigned in others.

v2: also add unsigned in the unit test, per Ian.

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 75982a5df4)
2013-11-15 11:58:06 -08:00
José Fonseca
03a29306b5 glx: Fix scons build.
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 6c6f4aa6fd)
2013-11-15 11:58:06 -08:00
Brian Paul
d37ea6dfec swrast: add missing notify_reset parameter to dri_create_context()
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
(cherry picked from commit f41c01c688)
2013-11-15 11:57:10 -08:00
José Fonseca
84ee00c1b2 scons: Add dri2_query_renderer.c to sources.
(cherry picked from commit cb3c57df3a)
2013-11-15 11:57:10 -08:00
José Fonseca
3ffcc96abc st/dri: Fix dri_create_context declaration prototype.
(cherry picked from commit caf1d96862)
2013-11-15 11:57:10 -08:00
Alexander von Gluck IV
bc94bf08c4 haiku/swrast: Inherit gl_config, fix flush
* Inherit gl_context so we always have access to it
* Thanks to curro for the idea.
* Last Haiku candidate for 10.0.0

Reviewed-by: Francisco Jerez <currojerez@riseup.net>
2013-11-14 12:38:27 -06:00
Alexander von Gluck IV
ce904c4caf haiku: add swrast driver
* This is pretty small and upkeep should be minimal.
* Currently fully working.
* Candidate for 10.0.0 branch

Acked-by: Brian Paul <brianp@vmware.com>
2013-11-13 13:35:58 -06:00
13271 changed files with 905628 additions and 5456098 deletions

View File

@@ -1,2 +0,0 @@
# Vendored code
src/amd/vulkan/radix_sort/*

View File

@@ -1,10 +0,0 @@
# The following files are opted into `ninja clang-format` and
# enforcement in the CI.
src/gallium/drivers/i915
src/gallium/drivers/r300/compiler/*
src/gallium/targets/teflon/**/*
src/amd/vulkan/**/*
src/amd/compiler/**/*
src/egl/**/*
src/etnaviv/isa/**/*

View File

@@ -1,18 +1,11 @@
((nil . ((show-trailing-whitespace . t)))
(prog-mode
((nil
(indent-tabs-mode . nil)
(tab-width . 8)
(c-basic-offset . 3)
(c-file-style . "stroustrup")
(fill-column . 78)
(eval . (progn
(c-set-offset 'case-label '0)
(c-set-offset 'innamespace '0)
(c-set-offset 'inline-open '0)))
(whitespace-style face indentation)
(whitespace-line-column . 79)
(eval ignore-errors
(require 'whitespace)
(whitespace-mode 1)))
(makefile-mode (indent-tabs-mode . t))
)
)

View File

@@ -1,44 +0,0 @@
# To use this config on you editor, follow the instructions at:
# http://editorconfig.org
root = true
[*]
charset = utf-8
insert_final_newline = true
tab_width = 8
[*.{c,h,cpp,hpp,cc,hh,y,yy}]
indent_style = space
indent_size = 3
max_line_length = 78
[{Makefile*,*.mk}]
indent_style = tab
[*.py]
indent_style = space
indent_size = 4
[*.yml]
indent_style = space
indent_size = 2
[*.rst]
indent_style = space
indent_size = 3
[*.patch]
trim_trailing_whitespace = false
[{meson.build,meson.options}]
indent_style = space
indent_size = 2
[*.ps1]
indent_style = space
indent_size = 2
[*.rs]
indent_style = space
indent_size = 4

View File

@@ -1,76 +0,0 @@
# List of commits to ignore when using `git blame`.
# Enable with:
# git config blame.ignoreRevsFile .git-blame-ignore-revs
#
# Per git-blame(1):
# Ignore revisions listed in the file, one unabbreviated object name
# per line, in git-blame. Whitespace and comments beginning with # are
# ignored.
#
# Please keep these in chronological order :)
#
# You can add a new commit with the following command:
# git log -1 --pretty=format:'%n# %s%n%H%n' >> .git-blame-ignore-revs $COMMIT
# pvr: Fix clang-format error.
0ad5b0a74ef73f5fcbe1406ad9d57fe5dc00a5b1
# panfrost: Fix up some formatting for clang-format
a4705afe63412498d13ded73cba969c66be67907
# asahi: clang-format the world again
26c51bb8d8a33098b1990425a391f56ffba5728c
# perfetto: Add a .clang-format for the directory.
da78d5d729b1800136dd713b68492cb339993f4a
# panfrost/winsys: Clang-format
c90f036516a5376002be6550a917e8bad6a8a3b8
# panfrost: Re-run clang-format
4ccf174009af6732cbffa5d8ebb4687da7517505
# panvk: Clang-format
c7bf3b69ebc8f2252dbf724a4de638e6bb2ac402
# pan/mdg: Fix icky formatting
133af0d6c945d3aaca8989edd15283a2b7dcc6c7
# mapi: clang-format _glapi_add_dispatch()
30332529663268a6406e910848e906e725e6fda7
# radv: reformat according to its .clang-format
8b319c6db8bd93603b18bd783eb75225fcfd51b7
# aco: reformat according to its .clang-format
6b21653ab4d3a67e711fe10e3d403128b6d26eb2
# egl: re-format using clang-format
2f670d89db038d5a29f6b72732fd7ad63dfaf4c6
# panfrost: clang-format the tree
0afd691f29683f6e9dde60f79eca094373521806
# aco: Format.
1e2639026fec7069806449f9ba2a124ce4eb5569
# radv: Format.
59c501ca353f8ec9d2717c98af2bfa1a1dbf4d75
# pvr: clang-format fixes
953c04ebd39c52d457301bdd8ac803949001da2d
# freedreno: Re-indent
2d439343ea1aee146d4ce32800992cd389bd505d
# ir3: Reformat source with clang-format
177138d8cb0b4f6a42ef0a1f8593e14d79f17c54
# ir3: reformat after refactoring in previous commit
8ae5b27ee0331a739d14b42e67586784d6840388
# ir3: don't use deprecated NIR_PASS_V anymore
2fedc82c0cc9d3fb2e54707b57941b79553b640c
# ir3: reformat after previous commit
7210054db8cfb445a8ccdeacfdcfecccf44fa266

11
.gitattributes vendored
View File

@@ -1,7 +1,4 @@
*.csv eol=crlf
* text=auto
*.jpg binary
*.png binary
*.gif binary
*.ico binary
*.cl gitlab-language=c
*.dsp -crlf
*.dsw -crlf
*.sln -crlf
*.vcproj -crlf

View File

@@ -1,60 +0,0 @@
name: macOS-CI
on: push
permissions:
contents: read
jobs:
macOS-CI:
strategy:
matrix:
glx_option: ['dri', 'xlib']
runs-on: macos-11
env:
GALLIUM_DUMP_CPU: true
MESON_EXEC: /Users/runner/Library/Python/3.11/bin/meson
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Install Dependencies
run: |
cat > Brewfile <<EOL
brew "bison"
brew "expat"
brew "gettext"
brew "libx11"
brew "libxcb"
brew "libxdamage"
brew "libxext"
brew "molten-vk"
brew "ninja"
brew "pkg-config"
brew "python@3.10"
EOL
brew update
brew bundle --verbose
- name: Install Mako and meson
run: pip3 install --user mako meson
- name: Configure
run: |
cat > native_config <<EOL
[binaries]
llvm-config = '/usr/local/opt/llvm/bin/llvm-config'
EOL
$MESON_EXEC . build --native-file=native_config -Dmoltenvk-dir=$(brew --prefix molten-vk) -Dbuild-tests=true -Dosmesa=true -Dgallium-drivers=swrast,zink -Dglx=${{ matrix.glx_option }}
- name: Build
run: $MESON_EXEC compile -C build
- name: Test
run: $MESON_EXEC test -C build --print-errorlogs
- name: Install
run: $MESON_EXEC install -C build --destdir $PWD/install
- name: 'Upload Artifact'
if: always()
uses: actions/upload-artifact@v3
with:
name: macos-${{ matrix.glx_option }}-result
path: |
build/meson-logs/
install/
retention-days: 5

49
.gitignore vendored
View File

@@ -1,7 +1,46 @@
.cache
.vscode*
*.a
*.dll
*.exe
*.ilk
*.la
*.lo
*.log
*.o
*.obj
*.os
*.pc
*.pdb
*.pyc
*.pyo
*.out
/build
.venv/
*.so
*.so.*
*.sw[a-z]
*.tar
*.tar.bz2
*.tar.gz
*.trs
*.zip
*~
depend
depend.bak
bin/ltmain.sh
lib
lib64
configure
configure.lineno
autom4te.cache
aclocal.m4
config.log
config.status
cscope*
.scon*
config.py
build
libtool
manifest.txt
.dir-locals.el
.deps/
.dirstamp
.libs/
Makefile
Makefile.in

View File

@@ -1,431 +0,0 @@
# Types of CI pipelines:
# | pipeline name | context | description |
# |----------------------|-----------|-------------------------------------------------------------|
# | merge pipeline | mesa/mesa | pipeline running for an MR; if it passes the MR gets merged |
# | pre-merge pipeline | mesa/mesa | same as above, except its status doesn't affect the MR |
# | post-merge pipeline | mesa/mesa | pipeline immediately after merging |
# | fork pipeline | fork | pipeline running in a user fork |
# | scheduled pipeline | mesa/mesa | nightly pipelines, running every morning at 4am UTC |
# | direct-push pipeline | mesa/mesa | when commits are pushed directly to mesa/mesa, bypassing Marge and its gating pipeline |
#
# Note that the release branches maintained by the release manager fall under
# the "direct push" category.
#
# "context" indicates the permissions that the jobs get; notably, any
# container created in mesa/mesa gets pushed immediately for everyone to use
# as soon as the image tag change is merged.
#
# Merge pipelines contain all jobs that must pass before the MR can be merged.
# Pre-merge pipelines contain the exact same jobs as merge pipelines.
# Post-merge pipelines contain *only* the `pages` job that deploys the new
# version of the website.
# Fork pipelines contain everything.
# Scheduled pipelines only contain the container+build jobs, and some extra
# test jobs (typically "full" variants of pre-merge jobs that only run 1/X
# test cases), but not a repeat of the merge pipeline jobs.
# Direct-push pipelines contain the same jobs as merge pipelines.
workflow:
rules:
# do not duplicate pipelines on merge pipelines
- if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS && $CI_PIPELINE_SOURCE == "push"
when: never
# tag pipelines are disabled as it's too late to run all the tests by
# then, the release has been made based on the staging pipelines results
- if: $CI_COMMIT_TAG
when: never
# merge pipeline
- if: &is-merge-attempt $GITLAB_USER_LOGIN == "marge-bot" && $CI_PIPELINE_SOURCE == "merge_request_event"
variables:
MESA_CI_PERFORMANCE_ENABLED: 1
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64: priority:high
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64_KVM: priority:high-kvm
FDO_RUNNER_JOB_PRIORITY_TAG_AARCH64: priority:high-aarch64
CI_TRON_JOB_PRIORITY_TAG: "" # Empty tags are ignored by gitlab
JOB_PRIORITY: 75
# fast-fail in merge pipelines: stop early if we get this many unexpected fails/crashes
DEQP_RUNNER_MAX_FAILS: 40
# post-merge pipeline
- if: &is-post-merge $GITLAB_USER_LOGIN == "marge-bot" && $CI_PIPELINE_SOURCE == "push"
# Pre-merge pipeline
- if: &is-pre-merge $CI_PIPELINE_SOURCE == "merge_request_event"
# Push to a branch on a fork
- if: &is-fork-push $CI_PROJECT_NAMESPACE != "mesa" && $CI_PIPELINE_SOURCE == "push"
# nightly pipeline
- if: &is-scheduled-pipeline $CI_PIPELINE_SOURCE == "schedule"
variables:
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64: priority:low
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64_KVM: priority:low-kvm
FDO_RUNNER_JOB_PRIORITY_TAG_AARCH64: priority:low-aarch64
JOB_PRIORITY: 45
# (some) nightly builds perform LTO, so they take much longer than the
# short timeout allowed in other pipelines.
# Note: 0 = infinity = gitlab's job `timeout:` applies, which is 1h
BUILD_JOB_TIMEOUT_OVERRIDE: 0
# pipeline for direct pushes that bypassed the CI
- if: &is-direct-push $CI_PROJECT_NAMESPACE == "mesa" && $CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
variables:
JOB_PRIORITY: 70
# pipeline for direct pushes from release maintainer
- if: &is-staging-push $CI_PROJECT_NAMESPACE == "mesa" && $CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_REF_NAME =~ /^staging\//
variables:
JOB_PRIORITY: 70
variables:
FDO_UPSTREAM_REPO: mesa/mesa
MESA_TEMPLATES_COMMIT: &ci-templates-commit 48e4b6c9a2015f969fbe648999d16d5fb3eef6c4
CI_PRE_CLONE_SCRIPT: |-
set -o xtrace
wget -q -O download-git-cache.sh ${CI_PROJECT_URL}/-/raw/${CI_COMMIT_SHA}/.gitlab-ci/download-git-cache.sh
bash download-git-cache.sh
rm download-git-cache.sh
set +o xtrace
S3_JWT_FILE: /s3_jwt
S3_JWT_FILE_SCRIPT: |-
echo -n '${S3_JWT}' > '${S3_JWT_FILE}' &&
unset CI_JOB_JWT S3_JWT # Unsetting vulnerable env variables
S3_HOST: s3.freedesktop.org
# This bucket is used to fetch ANDROID prebuilts and images
S3_ANDROID_BUCKET: mesa-rootfs
# This bucket is used to fetch the kernel image
S3_KERNEL_BUCKET: mesa-rootfs
# Bucket for git cache
S3_GITCACHE_BUCKET: git-cache
# Bucket for the pipeline artifacts pushed to S3
S3_ARTIFACTS_BUCKET: artifacts
# Buckets for traces
S3_TRACIE_RESULTS_BUCKET: mesa-tracie-results
S3_TRACIE_PUBLIC_BUCKET: mesa-tracie-public
S3_TRACIE_PRIVATE_BUCKET: mesa-tracie-private
# per-pipeline artifact storage on MinIO
PIPELINE_ARTIFACTS_BASE: ${S3_HOST}/${S3_ARTIFACTS_BUCKET}/${CI_PROJECT_PATH}/${CI_PIPELINE_ID}
# per-job artifact storage on MinIO
JOB_ARTIFACTS_BASE: ${PIPELINE_ARTIFACTS_BASE}/${CI_JOB_ID}
# reference images stored for traces
PIGLIT_REPLAY_REFERENCE_IMAGES_BASE: "${S3_HOST}/${S3_TRACIE_RESULTS_BUCKET}/$FDO_UPSTREAM_REPO"
# For individual CI farm status see .ci-farms folder
# Disable farm with `git mv .ci-farms{,-disabled}/$farm_name`
# Re-enable farm with `git mv .ci-farms{-disabled,}/$farm_name`
# NEVER MIX FARM MAINTENANCE WITH ANY OTHER CHANGE IN THE SAME MERGE REQUEST!
ARTIFACTS_BASE_URL: https://${CI_PROJECT_ROOT_NAMESPACE}.${CI_PAGES_DOMAIN}/-/${CI_PROJECT_NAME}/-/jobs/${CI_JOB_ID}/artifacts
# Python scripts for structured logger
PYTHONPATH: "$PYTHONPATH:$CI_PROJECT_DIR/install"
# No point in continuing once the device is lost
MESA_VK_ABORT_ON_DEVICE_LOSS: 1
# Avoid the wall of "Unsupported SPIR-V capability" warnings in CI job log, hiding away useful output
MESA_SPIRV_LOG_LEVEL: error
# Default priority for non-merge pipelines
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64: "" # Empty tags are ignored by gitlab
FDO_RUNNER_JOB_PRIORITY_TAG_X86_64_KVM: kvm
FDO_RUNNER_JOB_PRIORITY_TAG_AARCH64: aarch64
CI_TRON_JOB_PRIORITY_TAG: ci-tron:priority:low
JOB_PRIORITY: 50
DATA_STORAGE_PATH: data_storage
default:
timeout: 1m # catch any jobs which don't specify a timeout
id_tokens:
S3_JWT:
aud: https://s3.freedesktop.org
before_script:
- |
if [ -z "${KERNEL_IMAGE_BASE:-}" ]; then
export KERNEL_IMAGE_BASE="https://${S3_HOST}/${S3_KERNEL_BUCKET}/${KERNEL_REPO}/${EXTERNAL_KERNEL_TAG:-$KERNEL_TAG}"
fi
- >
export SCRIPTS_DIR=$(mktemp -d) &&
curl -L -s --retry 4 -f --retry-all-errors --retry-delay 60 -O --output-dir "${SCRIPTS_DIR}" "${CI_PROJECT_URL}/-/raw/${CI_COMMIT_SHA}/.gitlab-ci/setup-test-env.sh" &&
. ${SCRIPTS_DIR}/setup-test-env.sh
- eval "$S3_JWT_FILE_SCRIPT"
after_script:
# Work around https://gitlab.com/gitlab-org/gitlab/-/issues/20338
- find -name '*.log' -exec mv {} {}.txt \;
# Retry when job fails. Failed jobs can be found in the Mesa CI Daily Reports:
# https://gitlab.freedesktop.org/mesa/mesa/-/issues/?sort=created_date&state=opened&label_name%5B%5D=CI%20daily
retry:
max: 1
# Ignore runner_unsupported, stale_schedule, archived_failure, or
# unmet_prerequisites
when:
- api_failure
- runner_system_failure
- script_failure
- job_execution_timeout
- scheduler_failure
- data_integrity_failure
- unknown_failure
stages:
- sanity
- container
- git-archive
- build-for-tests
- build-only
- code-validation
- amd
- amd-postmerge
- intel
- intel-postmerge
- nouveau
- nouveau-postmerge
- arm
- arm-postmerge
- broadcom
- broadcom-postmerge
- freedreno
- freedreno-postmerge
- etnaviv
- etnaviv-postmerge
- software-renderer
- software-renderer-postmerge
- layered-backends
- layered-backends-postmerge
- performance
- deploy
include:
- project: 'freedesktop/ci-templates'
ref: 16bc29078de5e0a067ff84a1a199a3760d3b3811
file:
- '/templates/ci-fairy.yml'
- project: 'freedesktop/ci-templates'
ref: *ci-templates-commit
file:
- '/templates/alpine.yml'
- '/templates/debian.yml'
- '/templates/fedora.yml'
- local: '.gitlab-ci/image-tags.yml'
- local: '.gitlab-ci/lava/lava-gitlab-ci.yml'
- local: '.gitlab-ci/container/gitlab-ci.yml'
- local: '.gitlab-ci/build/gitlab-ci.yml'
- local: '.gitlab-ci/test/gitlab-ci.yml'
- local: '.gitlab-ci/farm-rules.yml'
- local: '.gitlab-ci/test-source-dep.yml'
- local: 'docs/gitlab-ci.yml'
- local: 'src/**/ci/gitlab-ci.yml'
# Rules applied to every job in the pipeline
.common-rules:
rules:
- if: *is-fork-push
when: manual
.never-post-merge-rules:
rules:
- if: *is-post-merge
when: never
# Note: make sure the branches in this list are the same as in
# `.build-only-delayed-rules` below.
.container+build-rules:
rules:
- !reference [.common-rules, rules]
# Run when re-enabling a disabled farm, but not when disabling it
- !reference [.disable-farm-mr-rules, rules]
# Never run immediately after merging, as we just ran everything
- !reference [.never-post-merge-rules, rules]
# Build everything in merge pipelines, if any files affecting the pipeline
# were changed
- if: *is-merge-attempt
changes: &all_paths
- VERSION
- bin/git_sha1_gen.py
- bin/install_megadrivers.py
- bin/symbols-check.py
- bin/ci/**/*
# GitLab CI
- .gitlab-ci.yml
- .gitlab-ci/**/*
- .ci-farms/*
# Meson
- meson*
- build-support/**/*
- subprojects/**/*
# clang format
- .clang-format
- .clang-format-include
- .clang-format-ignore
# Source code
- include/**/*
- src/**/*
when: on_success
# Same as above, but for pre-merge pipelines
- if: *is-pre-merge
changes:
*all_paths
when: manual
# Skip everything for pre-merge and merge pipelines which don't change
# anything in the build
- if: *is-merge-attempt
when: never
- if: *is-pre-merge
when: never
# Build everything after someone bypassed the CI
- if: *is-direct-push
when: on_success
# Build everything when pushing to staging branches
- if: *is-staging-push
when: on_success
# Build everything in scheduled pipelines
- if: *is-scheduled-pipeline
when: on_success
# Allow building everything in fork pipelines, but build nothing unless
# manually triggered
- when: manual
# Repeat of the above but with `when: on_success` replaced with
# `when: delayed` + `start_in:`, for build-only jobs.
# Note: make sure the branches in this list are the same as in
# `.container+build-rules` above.
.build-only-delayed-rules:
rules:
- !reference [.common-rules, rules]
# Run when re-enabling a disabled farm, but not when disabling it
- !reference [.disable-farm-mr-rules, rules]
# Never run immediately after merging, as we just ran everything
- !reference [.never-post-merge-rules, rules]
# Build everything in merge pipelines, if any files affecting the pipeline
# were changed
- if: *is-merge-attempt
changes: *all_paths
when: delayed
start_in: &build-delay 5 minutes
# Same as above, but for pre-merge pipelines
- if: *is-pre-merge
changes: *all_paths
when: manual
# Skip everything for pre-merge and merge pipelines which don't change
# anything in the build
- if: *is-merge-attempt
when: never
- if: *is-pre-merge
when: never
# Build everything after someone bypassed the CI
- if: *is-direct-push
when: delayed
start_in: *build-delay
# Build everything when pushing to staging branches
- if: *is-staging-push
when: delayed
start_in: *build-delay
# Build everything in scheduled pipelines
- if: *is-scheduled-pipeline
when: delayed
start_in: *build-delay
# Allow building everything in fork pipelines, but build nothing unless
# manually triggered
- when: manual
.ci-deqp-artifacts:
artifacts:
name: "${CI_PROJECT_NAME}_${CI_JOB_NAME}"
when: always
untracked: false
paths:
# Watch out! Artifacts are relative to the build dir.
# https://gitlab.com/gitlab-org/gitlab-ce/commit/8788fb925706cad594adf6917a6c5f6587dd1521
- artifacts
- _build/meson-logs/*.txt
- _build/meson-logs/strace
- _build/.ninja_log
# Git archive
make git archive:
extends:
- .fdo.ci-fairy
stage: git-archive
rules:
- !reference [.scheduled_pipeline-rules, rules]
script:
# Compactify the .git directory
- git gc --aggressive
# Download & cache the perfetto subproject as well.
- rm -rf subprojects/perfetto ; mkdir -p subprojects/perfetto && curl --fail https://android.googlesource.com/platform/external/perfetto/+archive/$(grep 'revision =' subprojects/perfetto.wrap | cut -d ' ' -f3).tar.gz | tar zxf - -C subprojects/perfetto
# compress the current folder
- tar -cvzf ../$CI_PROJECT_NAME.tar.gz .
- s3_upload ../$CI_PROJECT_NAME.tar.gz "https://$S3_HOST/git-cache/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/"
# Sanity checks of MR settings and commit logs
sanity:
extends:
- .fdo.ci-fairy
stage: sanity
tags:
- $FDO_RUNNER_JOB_PRIORITY_TAG_X86_64
rules:
- if: *is-pre-merge
when: on_success
- when: never
variables:
GIT_STRATEGY: none
script:
# ci-fairy check-commits --junit-xml=check-commits.xml
- ci-fairy check-merge-request --require-allow-collaboration --junit-xml=check-merge-request.xml
- |
set -eu
image_tags=(
ALPINE_X86_64_BUILD_TAG
ALPINE_X86_64_LAVA_SSH_TAG
DEBIAN_BASE_TAG
DEBIAN_BUILD_TAG
DEBIAN_PYUTILS_TAG
DEBIAN_TEST_ANDROID_TAG
DEBIAN_TEST_GL_TAG
DEBIAN_TEST_VK_TAG
FEDORA_X86_64_BUILD_TAG
KERNEL_ROOTFS_TAG
KERNEL_TAG
PKG_REPO_REV
WINDOWS_X64_BUILD_TAG
WINDOWS_X64_MSVC_TAG
WINDOWS_X64_TEST_TAG
)
for var in "${image_tags[@]}"
do
if [ "$(echo -n "${!var}" | wc -c)" -gt 20 ]
then
echo "$var is too long; please make sure it is at most 20 chars."
exit 1
fi
done
artifacts:
when: on_failure
reports:
junit: check-*.xml
mr-label-maker-test:
extends:
- .fdo.ci-fairy
stage: sanity
tags:
- $FDO_RUNNER_JOB_PRIORITY_TAG_X86_64
rules:
- !reference [.mr-label-maker-rules, rules]
variables:
GIT_STRATEGY: fetch
timeout: 10m
script:
- set -eu
- python3 -m venv .venv
- source .venv/bin/activate
- pip install git+https://gitlab.freedesktop.org/freedesktop/mr-label-maker
- mr-label-maker --dry-run --mr $CI_MERGE_REQUEST_IID
# Jobs that need to pass before spending hardware resources on further testing
.required-for-hardware-jobs:
needs:
- job: rustfmt
optional: true
artifacts: false
- job: yaml-toml-shell-py-test
optional: true
artifacts: false

View File

@@ -1,33 +0,0 @@
[flake8]
exclude = .venv*,
# PEP 8 Style Guide limits line length to 79 characters
max-line-length = 159
ignore =
# continuation line under-indented for hanging indent
E121
# continuation line over-indented for hanging indent
E126,
# continuation line under-indented for visual indent
E128,
# whitespace before ':'
E203,
# missing whitespace around arithmetic operator
E226,
# missing whitespace after ','
E231,
# expected 2 blank lines, found 1
E302,
# too many blank lines
E303,
# imported but unused
F401,
# f-string is missing placeholders
F541,
# local variable assigned to but never used
F841,
# line break before binary operator
W503,
# line break after binary operator
W504,

View File

@@ -1,115 +0,0 @@
# Note: skips lists for CI are just a list of lines that, when
# non-zero-length and not starting with '#', will regex match to
# delete lines from the test list. Be careful.
# This test checks the driver's reported conformance version against the
# version of the CTS we're running. This check fails every few months
# and everyone has to go and bump the number in every driver.
# Running this check only makes sense while preparing a conformance
# submission, so skip it in the regular CI.
dEQP-VK.api.driver_properties.conformance_version
# Exclude this test which might fail when a new extension is implemented.
dEQP-VK.info.device_extensions
# These are tremendously slow (pushing toward a minute), and aren't
# reliable to be run in parallel with other tests due to CPU-side timing.
dEQP-GLES[0-9]*.functional.flush_finish.*
# piglit: WGL is Windows-only
wgl@.*
# These are sensitive to CPU timing, and would need to be run in isolation
# on the system rather than in parallel with other tests.
glx@glx_arb_sync_control@timing.*
# This test is not built with waffle, while we do build tests with waffle
spec@!opengl 1.1@windowoverlap
# These tests all read from the front buffer after a swap. Given that we
# run piglit tests in parallel in Mesa CI, and don't have a compositor
# running, the frontbuffer reads may end up with undefined results from
# windows overlapping us.
#
# Piglit does mark these tests as not to be run in parallel, but deqp-runner
# doesn't respect that. We need to extend deqp-runner to allow some tests to be
# marked as single-threaded and run after the rayon loop if we want to support
# them.
#
# Note that "glx-" tests don't appear in x11-skips.txt because they can be
# run even if PIGLIT_PLATFORM=gbm (for example)
glx@glx-copy-sub-buffer.*
# A majority of the tests introduced in CTS 1.3.7.0 are experiencing failures and flakes.
# Disable these tests until someone with a more deeper understanding of EGL examines them.
#
# Note: on sc8280xp/a690 I get identical results (same passes and fails)
# between freedreno, zink, and llvmpipe, so I believe this is either a
# deqp bug or egl/wayland bug, rather than driver issue.
#
# With llvmpipe, the failing tests have the error message:
#
# "Illegal sampler view creation without bind flag"
#
# which might be a hint. (But some passing tests also have the same
# error message.)
#
# more context from David Heidelberg on IRC: the deqp commit where these
# started failing is: https://github.com/KhronosGroup/VK-GL-CTS/commit/79b25659bcbced0cfc2c3fe318951c585f682abe
# prior to that they were skipping.
wayland-dEQP-EGL.functional.color_clears.single_context.gles1.other
wayland-dEQP-EGL.functional.color_clears.single_context.gles2.other
wayland-dEQP-EGL.functional.color_clears.single_context.gles3.other
wayland-dEQP-EGL.functional.color_clears.multi_context.gles1.other
wayland-dEQP-EGL.functional.color_clears.multi_context.gles2.other
wayland-dEQP-EGL.functional.color_clears.multi_context.gles3.other
wayland-dEQP-EGL.functional.color_clears.multi_context.gles1_gles2.other
wayland-dEQP-EGL.functional.color_clears.multi_context.gles1_gles2_gles3.other
wayland-dEQP-EGL.functional.color_clears.multi_thread.gles1.other
wayland-dEQP-EGL.functional.color_clears.multi_thread.gles2.other
wayland-dEQP-EGL.functional.color_clears.multi_thread.gles3.other
wayland-dEQP-EGL.functional.color_clears.multi_thread.gles1_gles2.other
wayland-dEQP-EGL.functional.color_clears.multi_thread.gles1_gles2_gles3.other
# Seems to be the same issue as wayland-dEQP-EGL.functional.color_clears.*
wayland-dEQP-EGL.functional.render.single_context.gles2.other
wayland-dEQP-EGL.functional.render.single_context.gles3.other
wayland-dEQP-EGL.functional.render.multi_context.gles2.other
wayland-dEQP-EGL.functional.render.multi_context.gles3.other
wayland-dEQP-EGL.functional.render.multi_context.gles2_gles3.other
wayland-dEQP-EGL.functional.render.multi_thread.gles2.other
wayland-dEQP-EGL.functional.render.multi_thread.gles3.other
wayland-dEQP-EGL.functional.render.multi_thread.gles2_gles3.other
# These test the loader more than the implementation and are broken because the
# Vulkan loader in Debian is too old
dEQP-VK.api.get_device_proc_addr.non_enabled
dEQP-VK.api.version_check.unavailable_entry_points
# These tests are flaking too much recently on almost all drivers, so better skip them until the cause is identified
spec@arb_program_interface_query@arb_program_interface_query-getprogramresourceindex
spec@arb_program_interface_query@arb_program_interface_query-getprogramresourceindex@'vs_input2[1][0]' on GL_PROGRAM_INPUT
# These tests attempt to read from the front buffer after a swap. They are skipped
# on both X11 and gbm, but for different reasons:
#
# On X11: Given that we run piglit tests in parallel in Mesa CI, and don't have a
# compositor running, the frontbuffer reads may end up with undefined results from
# windows overlapping us.
# Piglit does mark these tests as not to be run in parallel, but deqp-runner
# doesn't respect that. We need to extend deqp-runner to allow some tests to be
# marked as single-threaded and run after the rayon loop if we want to support
# them.
# Other front-buffer access tests like fbo-sys-blit, fbo-sys-sub-blit, or
# fcc-front-buffer-distraction don't appear here, because the DRI3 fake-front
# handling should be holding the pixels drawn by the test even if we happen to fail
# GL's window system pixel occlusion test.
# Note that glx skips don't appear here, they're in all-skips.txt (in case someone
# sets PIGLIT_PLATFORM=gbm to mostly use gbm, but still has an X server running).
#
# On gbm: gbm does not support reading the front buffer after a swapbuffers, and
# that's intentional. Don't bother running these tests when PIGLIT_PLATFORM=gbm.
# Note that this doesn't include tests like fbo-sys-blit, which draw/read front
# but don't swap.
spec@!opengl 1.0@gl-1.0-swapbuffers-behavior
spec@!opengl 1.1@read-front
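
As an illustration only (this is not part of the CI tree), a minimal Python sketch of how a skip list like the one above can be applied, assuming each non-empty, non-comment line is used as a regex that removes matching tests:

import re

def apply_skips(test_names, skip_lines):
    # Keep only non-empty lines that aren't comments; treat each as a regex.
    patterns = [re.compile(line) for line in skip_lines
                if line.strip() and not line.lstrip().startswith("#")]
    # Drop any test whose name matches one of the skip regexes.
    return [t for t in test_names
            if not any(p.search(t) for p in patterns)]

tests = ["glx@glx-copy-sub-buffer", "spec@!opengl 1.1@read-front",
         "dEQP-GLES2.functional.clipping.point"]
skips = ["# front-buffer reads are undefined here", "spec@!opengl 1.1@read-front"]
print(apply_skips(tests, skips))  # the read-front test is filtered out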


@@ -1,44 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC1091 # paths only become valid at runtime
. "${SCRIPTS_DIR}/setup-test-env.sh"
export PATH=/android-tools/android-cts/jdk/bin/:/android-tools/build-tools:$PATH
export JAVA_HOME=/android-tools/android-cts/jdk
# Wait for the appops service to show up
while [ "$($ADB shell dumpsys -l | grep appops)" = "" ] ; do sleep 1; done
SKIP_FILE="$INSTALL/${GPU_VERSION}-android-cts-skips.txt"
EXCLUDE_FILTERS=""
if [ -e "$SKIP_FILE" ]; then
EXCLUDE_FILTERS="$(grep -v -E "(^#|^[[:space:]]*$)" "$SKIP_FILE" | sed -s 's/.*/--exclude-filter "\0" /g')"
fi
INCLUDE_FILE="$INSTALL/${GPU_VERSION}-android-cts-include.txt"
if [ -e "$INCLUDE_FILE" ]; then
INCLUDE_FILTERS="$(grep -v -E "(^#|^[[:space:]]*$)" "$INCLUDE_FILE" | sed -s 's/.*/--include-filter "\0" /g')"
else
INCLUDE_FILTERS=$(printf -- "--include-filter %s " $ANDROID_CTS_MODULES | sed -e 's/ $//g')
fi
set +e
eval "/android-tools/android-cts/tools/cts-tradefed" run commandAndExit cts-dev \
$EXCLUDE_FILTERS \
$INCLUDE_FILTERS
[ "$(grep "^FAILED" /android-tools/android-cts/results/latest/invocation_summary.txt | tr -d ' ' | cut -d ':' -f 2)" = "0" ]
# shellcheck disable=SC2034 # EXIT_CODE is used by the script that sources this one
EXIT_CODE=$?
set -e
section_switch cuttlefish_results "cuttlefish: gathering the results"
cp -r "/android-tools/android-cts/results/latest"/* $RESULTS_DIR
cp -r "/android-tools/android-cts/logs/latest"/* $RESULTS_DIR
section_end cuttlefish_results
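
For illustration, a Python equivalent of the grep/sed pipeline above that turns a skip file into cts-tradefed --exclude-filter arguments (the file name is hypothetical; each surviving line is passed through verbatim):

def exclude_filters(skip_file_path):
    args = []
    with open(skip_file_path) as f:
        for line in f:
            line = line.strip()
            # Skip comments and blank lines, mirroring grep -v -E "(^#|^[[:space:]]*$)".
            if not line or line.startswith("#"):
                continue
            args += ["--exclude-filter", line]
    return args

# exclude_filters("a630-android-cts-skips.txt") -> ["--exclude-filter", "<line from the file>", ...]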


@@ -1,96 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC1091 # paths only become valid at runtime
. "${SCRIPTS_DIR}/setup-test-env.sh"
# deqp
$ADB shell mkdir -p /data/deqp
$ADB push /deqp-gles/modules/egl/deqp-egl-android /data/deqp
$ADB push /deqp-gles/mustpass/egl-main.txt.zst /data/deqp
$ADB push /deqp-vk/external/vulkancts/modules/vulkan/* /data/deqp
$ADB push /deqp-vk/mustpass/vk-main.txt.zst /data/deqp
$ADB push /deqp-tools/* /data/deqp
$ADB push /deqp-runner/deqp-runner /data/deqp
$ADB push "$INSTALL/all-skips.txt" /data/deqp
$ADB push "$INSTALL/angle-skips.txt" /data/deqp
if [ -e "$INSTALL/$GPU_VERSION-flakes.txt" ]; then
$ADB push "$INSTALL/$GPU_VERSION-flakes.txt" /data/deqp
fi
if [ -e "$INSTALL/$GPU_VERSION-fails.txt" ]; then
$ADB push "$INSTALL/$GPU_VERSION-fails.txt" /data/deqp
fi
if [ -e "$INSTALL/$GPU_VERSION-skips.txt" ]; then
$ADB push "$INSTALL/$GPU_VERSION-skips.txt" /data/deqp
fi
$ADB push "$INSTALL/deqp-$DEQP_SUITE.toml" /data/deqp
BASELINE=""
if [ -e "$INSTALL/$GPU_VERSION-fails.txt" ]; then
BASELINE="--baseline /data/deqp/$GPU_VERSION-fails.txt"
fi
# Default to an empty known flakes file if it doesn't exist.
$ADB shell "touch /data/deqp/$GPU_VERSION-flakes.txt"
if [ -e "$INSTALL/$GPU_VERSION-skips.txt" ]; then
DEQP_SKIPS="$DEQP_SKIPS /data/deqp/$GPU_VERSION-skips.txt"
fi
if [ -n "$ANGLE_TAG" ]; then
DEQP_SKIPS="$DEQP_SKIPS /data/deqp/angle-skips.txt"
fi
AOSP_RESULTS=/data/deqp/results
uncollapsed_section_switch cuttlefish_test "cuttlefish: testing"
set +e
$ADB shell "mkdir ${AOSP_RESULTS}; cd ${AOSP_RESULTS}/..; \
XDG_CACHE_HOME=/data/local/tmp \
./deqp-runner \
suite \
--suite /data/deqp/deqp-$DEQP_SUITE.toml \
--output $AOSP_RESULTS \
--skips /data/deqp/all-skips.txt $DEQP_SKIPS \
--flakes /data/deqp/$GPU_VERSION-flakes.txt \
--testlog-to-xml /data/deqp/testlog-to-xml \
--shader-cache-dir /data/local/tmp \
--fraction-start ${CI_NODE_INDEX:-1} \
--fraction $(( CI_NODE_TOTAL * ${DEQP_FRACTION:-1})) \
--jobs ${FDO_CI_CONCURRENT:-4} \
$BASELINE \
${DEQP_RUNNER_MAX_FAILS:+--max-fails \"$DEQP_RUNNER_MAX_FAILS\"} \
"
# shellcheck disable=SC2034 # EXIT_CODE is used by the script that sources this one
EXIT_CODE=$?
set -e
section_switch cuttlefish_results "cuttlefish: gathering the results"
$ADB pull "$AOSP_RESULTS/." "$RESULTS_DIR"
# Remove all but the first 50 individual XML files uploaded as artifacts, to
# save fd.o space when you break everything.
find $RESULTS_DIR -name \*.xml | \
sort -n | \
sed -n '1,+49!p' | \
xargs rm -f
# If any QPA XMLs are there, then include the XSL/CSS in our artifacts.
find $RESULTS_DIR -name \*.xml \
-exec cp /deqp-tools/testlog.css /deqp-tools/testlog.xsl "$RESULTS_DIR/" ";" \
-quit
$ADB shell "cd ${AOSP_RESULTS}/..; \
./deqp-runner junit \
--testsuite dEQP \
--results $AOSP_RESULTS/failures.csv \
--output $AOSP_RESULTS/junit.xml \
--limit 50 \
--template \"See $ARTIFACTS_BASE_URL/results/{{testcase}}.xml\""
$ADB pull "$AOSP_RESULTS/junit.xml" "$RESULTS_DIR"
section_end cuttlefish_results
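
A simplified Python mirror of the artifact-trimming pipeline above, which keeps only the first 50 per-test XML files so a broken run doesn't upload thousands of logs (the sort here is plain lexicographic, and the directory name is illustrative):

from pathlib import Path

def trim_xml_artifacts(results_dir, keep=50):
    # Sort the XML result files and delete everything past the first `keep`,
    # a simplified mirror of the find | sort | sed '1,+49!p' | xargs rm pipeline.
    xml_files = sorted(Path(results_dir).rglob("*.xml"))
    for extra in xml_files[keep:]:
        extra.unlink()

# trim_xml_artifacts("results")  # hypothetical results directory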


@@ -1,118 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC1091 # paths only become valid at runtime
# Set default ADB command if not set already
: "${ADB:=adb}"
$ADB wait-for-device root
sleep 1
# overlay
REMOUNT_PATHS="/vendor"
if [ "$ANDROID_VERSION" -ge 15 ]; then
REMOUNT_PATHS="$REMOUNT_PATHS /system"
fi
OV_TMPFS="/data/overlay-remount"
$ADB shell mkdir -p "$OV_TMPFS"
$ADB shell mount -t tmpfs none "$OV_TMPFS"
for path in $REMOUNT_PATHS; do
$ADB shell mkdir -p "${OV_TMPFS}${path}-upper"
$ADB shell mkdir -p "${OV_TMPFS}${path}-work"
opts="lowerdir=${path},upperdir=${OV_TMPFS}${path}-upper,workdir=${OV_TMPFS}${path}-work"
$ADB shell mount -t overlay -o "$opts" none ${path}
done
$ADB shell setenforce 0
# download Android Mesa from S3
MESA_ANDROID_ARTIFACT_URL=https://${PIPELINE_ARTIFACTS_BASE}/${S3_ANDROID_ARTIFACT_NAME}.tar.zst
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 -o ${S3_ANDROID_ARTIFACT_NAME}.tar.zst ${MESA_ANDROID_ARTIFACT_URL}
mkdir /mesa-android
tar -C /mesa-android -xvf ${S3_ANDROID_ARTIFACT_NAME}.tar.zst
rm "${S3_ANDROID_ARTIFACT_NAME}.tar.zst" &
INSTALL="/mesa-android/install"
# replace libraries
$ADB shell rm -f /vendor/lib64/libgallium_dri.so*
$ADB shell rm -f /vendor/lib64/egl/libEGL_mesa.so*
$ADB shell rm -f /vendor/lib64/egl/libGLESv1_CM_mesa.so*
$ADB shell rm -f /vendor/lib64/egl/libGLESv2_mesa.so*
$ADB push "$INSTALL/lib/libgallium_dri.so" /vendor/lib64/libgallium_dri.so
$ADB push "$INSTALL/lib/libEGL.so" /vendor/lib64/egl/libEGL_mesa.so
$ADB push "$INSTALL/lib/libGLESv1_CM.so" /vendor/lib64/egl/libGLESv1_CM_mesa.so
$ADB push "$INSTALL/lib/libGLESv2.so" /vendor/lib64/egl/libGLESv2_mesa.so
$ADB shell rm -f /vendor/lib64/hw/vulkan.lvp.so*
$ADB shell rm -f /vendor/lib64/hw/vulkan.virtio.so*
$ADB shell rm -f /vendor/lib64/hw/vulkan.intel.so*
$ADB push "$INSTALL/lib/libvulkan_lvp.so" /vendor/lib64/hw/vulkan.lvp.so
$ADB push "$INSTALL/lib/libvulkan_virtio.so" /vendor/lib64/hw/vulkan.virtio.so
$ADB push "$INSTALL/lib/libvulkan_intel.so" /vendor/lib64/hw/vulkan.intel.so
$ADB shell rm -f /vendor/lib64/egl/libEGL_emulation.so*
$ADB shell rm -f /vendor/lib64/egl/libGLESv1_CM_emulation.so*
$ADB shell rm -f /vendor/lib64/egl/libGLESv2_emulation.so*
ANGLE_DEST_PATH=/vendor/lib64/egl
if [ "$ANDROID_VERSION" -ge 15 ]; then
ANGLE_DEST_PATH=/system/lib64
fi
$ADB shell rm -f "$ANGLE_DEST_PATH/libEGL_angle.so"*
$ADB shell rm -f "$ANGLE_DEST_PATH/libGLESv1_CM_angle.so"*
$ADB shell rm -f "$ANGLE_DEST_PATH/libGLESv2_angle.so"*
$ADB push /angle/libEGL_angle.so "$ANGLE_DEST_PATH/libEGL_angle.so"
$ADB push /angle/libGLESv1_CM_angle.so "$ANGLE_DEST_PATH/libGLESv1_CM_angle.so"
$ADB push /angle/libGLESv2_angle.so "$ANGLE_DEST_PATH/libGLESv2_angle.so"
get_gles_runtime_version() {
while [ "$($ADB shell dumpsys SurfaceFlinger | grep GLES:)" = "" ] ; do sleep 1; done
$ADB shell dumpsys SurfaceFlinger | grep GLES
}
# Check what GLES implementation is used before loading the new libraries
get_gles_runtime_version
# restart Android shell, so that services use the new libraries
$ADB shell stop
$ADB shell start
# Check what GLES implementation is used after loading the new libraries
GLES_RUNTIME_VERSION="$(get_gles_runtime_version)"
if [ -n "$ANGLE_TAG" ]; then
# Note: we are injecting the ANGLE libs too, so we need to check if the
# ANGLE libs are being used after the shell restart.
ANGLE_HASH=$(head -c 12 /angle/version)
if ! printf "%s" "$GLES_RUNTIME_VERSION" | grep --quiet "${ANGLE_HASH}"; then
echo "Fatal: Android is loading a wrong version of the ANGLE libs: ${ANGLE_HASH}" 1>&2
exit 1
fi
else
MESA_BUILD_VERSION=$(cat "$INSTALL/VERSION")
if ! printf "%s" "$GLES_RUNTIME_VERSION" | grep --quiet "${MESA_BUILD_VERSION}$"; then
echo "Fatal: Android is loading a wrong version of the Mesa3D GLES libs: ${GLES_RUNTIME_VERSION}" 1>&2
exit 1
fi
fi
if [ -n "$USE_ANDROID_CTS" ]; then
# The script sets EXIT_CODE
. "$(dirname "$0")/android-cts-runner.sh"
else
# The script sets EXIT_CODE
. "$(dirname "$0")/android-deqp-runner.sh"
fi
exit $EXIT_CODE
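
A simplified sketch of the runtime check performed above after the shell restart: with ANGLE injected, the GLES line from dumpsys must contain the ANGLE build hash; otherwise it must end with the Mesa build version. The sample string below is invented for illustration.

def check_gles_runtime(gles_line, angle_hash=None, mesa_version=None):
    # gles_line is the "GLES: ..." line from `adb shell dumpsys SurfaceFlinger`.
    if angle_hash is not None:
        # ANGLE libs were injected; the runtime string must mention their hash.
        return angle_hash in gles_line
    # Otherwise the Mesa build version must appear at the end of the string.
    return gles_line.rstrip().endswith(mesa_version)

# Hypothetical example line:
# check_gles_runtime("GLES: freedreno, OpenGL ES 3.2 Mesa 25.0.0-devel",
#                    mesa_version="25.0.0-devel") -> True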


@@ -1,7 +0,0 @@
# Unlike zink which does support it, ANGLE relies on a waiver to not implement
# capturing individual array elements (see waivers.xml and gles3-waivers.txt in the CTS)
dEQP-GLES3.functional.transform_feedback.array_element.*
dEQP-GLES3.functional.transform_feedback.random.*
dEQP-GLES31.functional.program_interface_query.transform_feedback_varying.*_array_element
dEQP-GLES31.functional.program_interface_query.transform_feedback_varying.type.*.array.*
KHR-GLES31.core.program_interface_query.transform-feedback-types


@@ -1,155 +0,0 @@
version: 1
# Rules to match for a machine to qualify
target:
id: '{{ CI_RUNNER_DESCRIPTION }}'
timeouts:
first_console_activity: # This limits the time it can take to receive the first console log
minutes: {{ B2C_TIMEOUT_FIRST_CONSOLE_ACTIVITY_MINUTES | default(0, true) }}
seconds: {{ B2C_TIMEOUT_FIRST_CONSOLE_ACTIVITY_SECONDS | default(0, true) }}
retries: {{ B2C_TIMEOUT_FIRST_CONSOLE_ACTIVITY_RETRIES }}
console_activity: # Reset every time we receive a message from the logs
minutes: {{ B2C_TIMEOUT_CONSOLE_ACTIVITY_MINUTES | default(0, true) }}
seconds: {{ B2C_TIMEOUT_CONSOLE_ACTIVITY_SECONDS | default(0, true) }}
retries: {{ B2C_TIMEOUT_CONSOLE_ACTIVITY_RETRIES }}
boot_cycle:
minutes: {{ B2C_TIMEOUT_BOOT_MINUTES | default(0, true) }}
seconds: {{ B2C_TIMEOUT_BOOT_SECONDS | default(0, true) }}
retries: {{ B2C_TIMEOUT_BOOT_RETRIES }}
overall: # Maximum time the job can take, not overrideable by the "continue" deployment
minutes: {{ B2C_TIMEOUT_OVERALL_MINUTES | default(0, true) }}
seconds: {{ B2C_TIMEOUT_OVERALL_SECONDS | default(0, true) }}
retries: 0
# no retries possible here
watchdogs:
boot:
minutes: {{ B2C_TIMEOUT_BOOT_WD_MINUTES | default(0, true) }}
seconds: {{ B2C_TIMEOUT_BOOT_WD_SECONDS | default(0, true) }}
retries: {{ B2C_TIMEOUT_BOOT_WD_RETRIES | default(0, true) }}
console_patterns:
session_end:
regex: >-
{{ B2C_SESSION_END_REGEX }}
{% if B2C_SESSION_REBOOT_REGEX %}
session_reboot:
regex: >-
{{ B2C_SESSION_REBOOT_REGEX }}
{% endif %}
job_success:
regex: >-
{{ B2C_JOB_SUCCESS_REGEX }}
{% if B2C_JOB_WARN_REGEX %}
job_warn:
regex: >-
{{ B2C_JOB_WARN_REGEX }}
{% endif %}
{% if B2C_BOOT_WD_START_REGEX and B2C_BOOT_WD_STOP_REGEX %}
watchdogs:
boot:
start:
regex: >-
{{ B2C_BOOT_WD_START_REGEX }}
reset:
regex: >-
{{ B2C_BOOT_WD_RESET_REGEX | default(B2C_BOOT_WD_START_REGEX, true) }}
stop:
regex: >-
{{ B2C_BOOT_WD_STOP_REGEX }}
{% endif %}
# Environment to deploy
deployment:
# Initial boot
start:
storage:
{% if B2C_IMAGESTORE_PLATFORM %}
imagestore:
public:
# List of images that should be pulled into the image store ahead of execution
images:
mars:
name: "{{ B2C_MACHINE_REGISTRATION_IMAGE }}"
platform: "{{ B2C_IMAGESTORE_PLATFORM }}"
tls_verify: false
{% set machine_registration_image="{% raw %}{{ job.imagestore.public.mars.image_id }}{% endraw %}" %}
telegraf:
name: "{{ B2C_TELEGRAF_IMAGE }}"
platform: "{{ B2C_IMAGESTORE_PLATFORM }}"
tls_verify: false
{% set telegraf_image="{% raw %}{{ job.imagestore.public.telegraf.image_id }}{% endraw %}" %}
image_under_test:
name: "{{ B2C_IMAGE_UNDER_TEST }}"
platform: "{{ B2C_IMAGESTORE_PLATFORM }}"
tls_verify: false
{% set image_under_test="{% raw %}{{ job.imagestore.public.image_under_test.image_id }}{% endraw %}" %}
nbd:
storage:
max_connections: 5
size: 10G
{% endif %}
http:
- path: "/install.tar.zst"
url: "{{ B2C_INSTALL_TARBALL_URL }}"
- path: "/b2c-extra-args"
data: >
b2c.pipefail b2c.poweroff_delay={{ B2C_POWEROFF_DELAY }}
b2c.minio="gateway,{{ '{{' }} minio_url }},{{ '{{' }} job_bucket_access_key }},{{ '{{' }} job_bucket_secret_key }}"
b2c.volume="{{ '{{' }} job_bucket }}-results,mirror=gateway/{{ '{{' }} job_bucket }},pull_on=pipeline_start,push_on=changes,overwrite{% for excl in B2C_JOB_VOLUME_EXCLUSIONS.split(',') %},exclude={{ excl }}{% endfor %},remove,expiration=pipeline_end,preserve"
{% for volume in B2C_VOLUMES %}
b2c.volume={{ volume }}
{% endfor %}
b2c.run_service="--privileged --tls-verify=false --pid=host {{ B2C_TELEGRAF_IMAGE }}" b2c.hostname=dut-{{ '{{' }} machine.full_name }}
b2c.run="-ti --tls-verify=false {{ B2C_MACHINE_REGISTRATION_IMAGE }} {% if B2C_MARS_SETUP_TAGS %}setup --tags {{ B2C_MARS_SETUP_TAGS }}{% else %}check{% endif %}"
b2c.run="-v {{ '{{' }} job_bucket }}-results:{{ CI_PROJECT_DIR }} -w {{ CI_PROJECT_DIR }} {% for mount_volume in B2C_MOUNT_VOLUMES %} -v {{ mount_volume }}{% endfor %} --tls-verify=false --entrypoint bash {{ B2C_IMAGE_UNDER_TEST }} -euc 'curl --fail -q {{ '{{' }} job.http.url }}/install.tar.zst | tar --zstd -x; {{ B2C_CONTAINER_CMD }}'"
kernel:
{% if B2C_KERNEL_URL %}
url: '{{ B2C_KERNEL_URL }}'
{% endif %}
# NOTE: b2c.cache_device should not be here, but this works around
# a limitation of b2c which will be removed in the next release
cmdline: >
SALAD.machine_id={{ '{{' }} machine_id }}
console={{ '{{' }} local_tty_device }},115200
b2c.ntp_peer=10.42.0.1
b2c.extra_args_url={{ '{{' }} job.http.url }}/b2c-extra-args
{% if B2C_IMAGESTORE_PLATFORM is defined %}
{{ '{{' }} imagestore.mount("public").nfs.to_b2c_filesystem("publicimgstore") }}
b2c.storage="additionalimagestores=publicimgstore"
b2c.nbd=/dev/nbd0,host=ci-gateway,port={% raw %}{{ '{{' }} job.nbd.storage.tcp_port }}{% endraw %},connections=5
b2c.cache_device=/dev/nbd0
{% else %}
b2c.cache_device=auto
{% endif %}
{% if B2C_KERNEL_CMDLINE_EXTRAS is defined %}
{{ B2C_KERNEL_CMDLINE_EXTRAS }}
{% endif %}
{% if B2C_INITRAMFS_URL or B2C_FIRMWARE_URL %}
initramfs:
{% if B2C_FIRMWARE_URL %}
- url: '{{ B2C_FIRMWARE_URL }}'
{% endif %}
{% if B2C_INITRAMFS_URL %}
- url: '{{ B2C_INITRAMFS_URL }}'
{% endif %}
{% endif %}
{% if B2C_DTB_URL %}
dtb:
url: '{{ B2C_DTB_URL }}'
{% if B2C_DTB_MATCH %}
format:
archive:
match: "{{ B2C_DTB_MATCH }}"
{% endif %}
{% endif %}


@@ -1,40 +0,0 @@
#!/usr/bin/env python3
# Copyright © 2022 Valve Corporation
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
from jinja2 import Environment, FileSystemLoader
from os import environ, path
# Pass through all the CI and B2C environment variables
values = {
key: environ[key]
for key in environ if key.startswith("B2C_") or key.startswith("CI_")
}
env = Environment(loader=FileSystemLoader(path.dirname(environ['B2C_JOB_TEMPLATE'])),
trim_blocks=True, lstrip_blocks=True)
template = env.get_template(path.basename(environ['B2C_JOB_TEMPLATE']))
with open(path.splitext(path.basename(environ['B2C_JOB_TEMPLATE']))[0], "w") as f:
f.write(template.render(values))
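
For example (paths and variable values are purely illustrative), rendering the template might look like: B2C_JOB_TEMPLATE=install/bare-metal/b2c.yml.jinja2 B2C_KERNEL_URL=https://example.invalid/Image CI_PROJECT_DIR=/builds/mesa/mesa python3 generate_b2c.py. Every environment variable prefixed with B2C_ or CI_ is passed through to the template, and the rendered YAML is written to the current directory using the template's basename with its extension stripped.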


@@ -1,2 +0,0 @@
[*.sh]
indent_size = 2


@@ -1,15 +0,0 @@
#!/bin/sh
# Init entrypoint for bare-metal devices; calls common init code.
# First stage: very basic setup to bring up network and /dev etc
/init-stage1.sh
export CURRENT_SECTION=dut_boot
# Second stage: run jobs
test $? -eq 0 && /init-stage2.sh
# Wait until the job would have timed out anyway, so we don't spew a "init
# exited" panic.
sleep 6000


@@ -1,17 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must supply the PoE Interface to power down"
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must supply the PoE Switch host"
exit 1
fi
SNMP_KEY="1.3.6.1.4.1.9.9.402.1.2.1.1.1.$BM_POE_INTERFACE"
SNMP_OFF="i 4"
snmpset -v2c -r 3 -t 30 -cmesaci "$BM_POE_ADDRESS" "$SNMP_KEY" $SNMP_OFF


@@ -1,22 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must supply the PoE Interface to power up"
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must supply the PoE Switch host"
exit 1
fi
set -ex
SNMP_KEY="1.3.6.1.4.1.9.9.402.1.2.1.1.1.$BM_POE_INTERFACE"
SNMP_ON="i 1"
SNMP_OFF="i 4"
snmpset -v2c -r 3 -t 10 -cmesaci "$BM_POE_ADDRESS" "$SNMP_KEY" $SNMP_OFF
sleep 3s
snmpset -v2c -r 3 -t 10 -cmesaci "$BM_POE_ADDRESS" "$SNMP_KEY" $SNMP_ON
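
A rough Python equivalent of the two PoE scripts above, calling the same snmpset tool via subprocess. The OID prefix and community string are taken from the scripts; the switch address and port number in the usage comment are assumptions.

import subprocess
import time

def set_poe_port(address, interface, value, community="mesaci"):
    # Same OID prefix the shell scripts use; value 1 = on, 4 = off.
    oid = f"1.3.6.1.4.1.9.9.402.1.2.1.1.1.{interface}"
    subprocess.run(["snmpset", "-v2c", "-r", "3", "-t", "10",
                    f"-c{community}", address, oid, "i", str(value)], check=True)

def power_cycle(address, interface):
    set_poe_port(address, interface, 4)   # off
    time.sleep(3)
    set_poe_port(address, interface, 1)   # on

# power_cycle("10.42.0.2", 7)  # hypothetical switch address and PoE port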


@@ -1,129 +0,0 @@
#!/bin/bash
# shellcheck disable=SC1091 # The relative paths in this file only become valid at runtime.
# shellcheck disable=SC2034
# shellcheck disable=SC2086 # we want word splitting
# Boot script for Chrome OS devices attached to a servo debug connector, using
# NFS and TFTP to boot.
# We're run from the root of the repo, make a helper var for our paths
BM=$CI_PROJECT_DIR/install/bare-metal
CI_COMMON=$CI_PROJECT_DIR/install/common
CI_INSTALL=$CI_PROJECT_DIR/install
# Runner config checks
if [ -z "$BM_SERIAL" ]; then
echo "Must set BM_SERIAL in your gitlab-runner config.toml [[runners]] environment"
echo "This is the CPU serial device."
exit 1
fi
if [ -z "$BM_SERIAL_EC" ]; then
echo "Must set BM_SERIAL in your gitlab-runner config.toml [[runners]] environment"
echo "This is the EC serial device for controlling board power"
exit 1
fi
if [ ! -d /nfs ]; then
echo "NFS rootfs directory needs to be mounted at /nfs by the gitlab runner"
exit 1
fi
if [ ! -d /tftp ]; then
echo "TFTP directory for this board needs to be mounted at /tftp by the gitlab runner"
exit 1
fi
# job config checks
if [ -z "$BM_KERNEL" ]; then
echo "Must set BM_KERNEL to your board's kernel FIT image"
exit 1
fi
if [ -z "$BM_ROOTFS" ]; then
echo "Must set BM_ROOTFS to your board's rootfs directory in the job's variables"
exit 1
fi
if [ -z "$BM_CMDLINE" ]; then
echo "Must set BM_CMDLINE to your board's kernel command line arguments"
exit 1
fi
. "${SCRIPTS_DIR}/setup-test-env.sh"
section_start prepare_rootfs "Preparing rootfs components"
set -ex
# Clear out any previous run's artifacts.
rm -rf results/
mkdir -p results
# Create the rootfs in the NFS directory. rm to make sure it's in a pristine
# state, since it's volume-mounted on the host.
rsync -a --delete $BM_ROOTFS/ /nfs/
mkdir -p /nfs/results
. $BM/rootfs-setup.sh /nfs
# Put the kernel/dtb image and the boot command line in the tftp directory for
# the board to find. For normal Mesa development, we build the kernel and
# store it in the docker container that this script is running in.
#
# However, container builds are expensive, so when you're hacking on the
# kernel, it's nice to be able to skip the half-hour container build, plus
# moving that container to the runner. So, if BM_KERNEL is a URL, fetch it
# instead of looking in the container. Note that the kernel build should be
# the output of:
#
# make Image.lzma
#
# mkimage \
# -A arm64 \
# -f auto \
# -C lzma \
# -d arch/arm64/boot/Image.lzma \
# -b arch/arm64/boot/dts/qcom/sdm845-cheza-r3.dtb \
# cheza-image.img
rm -rf /tftp/*
if echo "$BM_KERNEL" | grep -q http; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
$BM_KERNEL -o /tftp/vmlinuz
elif [ -n "${EXTERNAL_KERNEL_TAG}" ]; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${BM_KERNEL}" -o /tftp/vmlinuz
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/modules.tar.zst" -o modules.tar.zst
tar --keep-directory-symlink --zstd -xf modules.tar.zst -C "/nfs/"
rm modules.tar.zst &
else
cp /baremetal-files/"$BM_KERNEL" /tftp/vmlinuz
fi
echo "$BM_CMDLINE" > /tftp/cmdline
set +e
STRUCTURED_LOG_FILE=results/job_detail.json
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update dut_job_type "${DEVICE_TYPE}"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update farm "${FARM}"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --create-dut-job dut_name "${CI_RUNNER_DESCRIPTION}"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update-dut-time submit "${CI_JOB_STARTED_AT}"
section_end prepare_rootfs
python3 $BM/cros_servo_run.py \
--cpu $BM_SERIAL \
--ec $BM_SERIAL_EC \
--test-timeout ${TEST_PHASE_TIMEOUT_MINUTES:-20}
ret=$?
section_start dut_cleanup "Cleaning up after job"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close-dut-job
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close
set -e
# Bring artifacts back from the NFS dir to the build dir where gitlab-runner
# will look for them.
cp -Rp /nfs/results/. results/
section_end dut_cleanup
exit $ret


@@ -1,206 +0,0 @@
#!/usr/bin/env python3
#
# Copyright © 2020 Google LLC
# SPDX-License-Identifier: MIT
import argparse
import datetime
import math
import os
import re
import sys
from custom_logger import CustomLogger
from serial_buffer import SerialBuffer
ANSI_ESCAPE="\x1b[0K"
ANSI_COLOUR="\x1b[0;36m"
ANSI_RESET="\x1b[0m"
SECTION_START="start"
SECTION_END="end"
class CrosServoRun:
def __init__(self, cpu, ec, test_timeout, logger):
self.cpu_ser = SerialBuffer(
cpu, "results/serial.txt", ": ")
# Merge the EC serial into the cpu_ser's line stream so that we can
# effectively poll on both at the same time and not have to worry about
self.ec_ser = SerialBuffer(
ec, "results/serial-ec.txt", " EC: ", line_queue=self.cpu_ser.line_queue)
self.test_timeout = test_timeout
self.logger = logger
def close(self):
self.ec_ser.close()
self.cpu_ser.close()
def ec_write(self, s):
print("EC> %s" % s)
self.ec_ser.serial.write(s.encode())
def cpu_write(self, s):
print("> %s" % s)
self.cpu_ser.serial.write(s.encode())
def print_error(self, message):
RED = '\033[0;31m'
NO_COLOR = '\033[0m'
print(RED + message + NO_COLOR)
self.logger.update_status_fail(message)
def get_rel_timestamp(self):
now = datetime.datetime.now(tz=datetime.UTC)
then_env = os.getenv("CI_JOB_STARTED_AT")
if not then_env:
return ""
delta = now - datetime.datetime.fromisoformat(then_env)
return f"[{math.floor(delta.seconds / 60):02}:{(delta.seconds % 60):02}]"
def get_cur_timestamp(self):
return str(int(datetime.datetime.timestamp(datetime.datetime.now())))
def print_gitlab_section(self, action, name, description, collapse=True):
assert action in [SECTION_START, SECTION_END]
out = ANSI_ESCAPE + "section_" + action + ":"
out += self.get_cur_timestamp() + ":"
out += name
if action == "start" and collapse:
out += "[collapsed=true]"
out += "\r" + ANSI_ESCAPE + ANSI_COLOUR
out += self.get_rel_timestamp() + " " + description + ANSI_RESET
print(out)
def boot_section(self, action):
self.print_gitlab_section(action, "dut_boot", "Booting hardware device", True)
def run(self):
# Flush any partial commands in the EC's prompt, then ask for a reboot.
self.ec_write("\n")
self.ec_write("reboot\n")
bootloader_done = False
self.logger.create_job_phase("boot")
self.boot_section(SECTION_START)
tftp_failures = 0
# This is emitted right when the bootloader pauses to check for input.
# Emit a ^N character to request network boot, because we don't have a
# direct-to-netboot firmware on cheza.
for line in self.cpu_ser.lines(timeout=120, phase="bootloader"):
if re.search("load_archive: loading locale_en.bin", line):
self.cpu_write("\016")
bootloader_done = True
break
# The Cheza firmware seems to occasionally get stuck looping in
# this error state during TFTP booting, possibly based on amount of
# network traffic around it, but it'll usually recover after a
# reboot. Currently mostly visible on google-freedreno-cheza-14.
if re.search("R8152: Bulk read error 0xffffffbf", line):
tftp_failures += 1
if tftp_failures >= 10:
self.print_error(
"Detected intermittent tftp failure, restarting run.")
return 1
# If the board has a netboot firmware and we made it to booting the
# kernel, proceed to processing of the test run.
if re.search("Booting Linux", line):
bootloader_done = True
break
# The Cheza boards have issues with failing to bring up power to
# the system sometimes, possibly dependent on ambient temperature
# in the farm.
if re.search("POWER_GOOD not seen in time", line):
self.print_error(
"Detected intermittent poweron failure, abandoning run.")
return 1
if not bootloader_done:
self.print_error("Failed to make it through bootloader, abandoning run.")
return 1
self.logger.create_job_phase("test")
for line in self.cpu_ser.lines(timeout=self.test_timeout, phase="test"):
if re.search("---. end Kernel panic", line):
return 1
# There are very infrequent bus errors during power management transitions
# on cheza, which we don't expect to be the case on future boards.
if re.search("Kernel panic - not syncing: Asynchronous SError Interrupt", line):
self.print_error(
"Detected cheza power management bus error, abandoning run.")
return 1
# If the network device dies, it's probably not graphics's fault, just try again.
if re.search("NETDEV WATCHDOG", line):
self.print_error(
"Detected network device failure, abandoning run.")
return 1
# These HFI response errors started appearing with the introduction
# of piglit runs. CosmicPenguin says:
#
# "message ID 106 isn't a thing, so likely what happened is that we
# got confused when parsing the HFI queue. If it happened on only
# one run, then memory corruption could be a possible clue"
#
# Given that it seems to trigger randomly near a GPU fault and then
# break many tests after that, just restart the whole run.
if re.search("a6xx_hfi_send_msg.*Unexpected message id .* on the response queue", line):
self.print_error(
"Detected cheza power management bus error, abandoning run.")
return 1
if re.search("coreboot.*bootblock starting", line):
self.print_error(
"Detected spontaneous reboot, abandoning run.")
return 1
if re.search("arm-smmu 5040000.iommu: TLB sync timed out -- SMMU may be deadlocked", line):
self.print_error("Detected cheza MMU fail, abandoning run.")
return 1
result = re.search(r"hwci: mesa: (\S*), exit_code: (\d+)", line)
if result:
status = result.group(1)
exit_code = int(result.group(2))
if status == "pass":
self.logger.update_dut_job("status", "pass")
else:
self.logger.update_status_fail("test fail")
self.logger.update_dut_job("exit_code", exit_code)
return exit_code
self.print_error(
"Reached the end of the CPU serial log without finding a result")
return 1
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--cpu', type=str,
help='CPU Serial device', required=True)
parser.add_argument(
'--ec', type=str, help='EC Serial device', required=True)
parser.add_argument(
'--test-timeout', type=int, help='Test phase timeout (minutes)', required=True)
args = parser.parse_args()
logger = CustomLogger("results/job_detail.json")
logger.update_dut_time("start", None)
servo = CrosServoRun(args.cpu, args.ec, args.test_timeout * 60, logger)
retval = servo.run()
# power down the CPU on the device
servo.ec_write("power off\n")
logger.update_dut_time("end", None)
servo.close()
sys.exit(retval)
if __name__ == '__main__':
main()
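
For reference, a small example of the serial-log parsing that decides the job result in the script above (the sample line is made up; in a real run it is printed by the test harness at the end of the job):

import re

line = "hwci: mesa: pass, exit_code: 0"  # hypothetical line from the CPU serial log
result = re.search(r"hwci: mesa: (\S*), exit_code: (\d+)", line)
if result:
    status, exit_code = result.group(1), int(result.group(2))
    print(status, exit_code)  # -> pass 0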


@@ -1,10 +0,0 @@
#!/bin/bash
relay=$1
if [ -z "$relay" ]; then
echo "Must supply a relay arg"
exit 1
fi
"$CI_PROJECT_DIR"/install/bare-metal/eth008-power-relay.py "$ETH_HOST" "$ETH_PORT" off "$relay"


@@ -1,28 +0,0 @@
#!/usr/bin/python3
import sys
import socket
host = sys.argv[1]
port = sys.argv[2]
mode = sys.argv[3]
relay = sys.argv[4]
msg = None
if mode == "on":
msg = b'\x20'
else:
msg = b'\x21'
msg += int(relay).to_bytes(1, 'big')
msg += b'\x00'
c = socket.create_connection((host, int(port)))
c.sendall(msg)
data = c.recv(1)
c.close()
# In Python 3, data[0] is an int; a response byte of 1 means the command failed.
if data[0] == 1:
print('Command failed')
sys.exit(1)
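
A hypothetical invocation (host, port and relay number are illustrative): ./eth008-power-relay.py 10.42.0.10 17494 on 1. Byte 0x20 selects "on" and 0x21 "off", followed by the relay number and a padding byte; a response byte of 1 signals failure.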


@@ -1,12 +0,0 @@
#!/bin/bash
relay=$1
if [ -z "$relay" ]; then
echo "Must supply a relay arg"
exit 1
fi
"$CI_PROJECT_DIR"/install/bare-metal/eth008-power-relay.py "$ETH_HOST" "$ETH_PORT" off "$relay"
sleep 5
"$CI_PROJECT_DIR"/install/bare-metal/eth008-power-relay.py "$ETH_HOST" "$ETH_PORT" on "$relay"


@@ -1,31 +0,0 @@
#!/bin/bash
set -e
STRINGS=$(mktemp)
ERRORS=$(mktemp)
trap 'rm $STRINGS; rm $ERRORS;' EXIT
FILE=$1
shift 1
while getopts "f:e:" opt; do
case $opt in
f) echo "$OPTARG" >> "$STRINGS";;
e) echo "$OPTARG" >> "$STRINGS" ; echo "$OPTARG" >> "$ERRORS";;
*) exit
esac
done
shift $((OPTIND -1))
echo "Waiting for $FILE to say one of following strings"
cat "$STRINGS"
while ! grep -E -wf "$STRINGS" "$FILE"; do
sleep 2
done
if grep -E -wf "$ERRORS" "$FILE"; then
exit 1
fi
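
A hypothetical usage example (file and strings are illustrative): expect-output.sh results/serial-output.txt -f "fastboot: processing commands" -e "data abort" blocks until one of the given strings appears in the log, then exits non-zero if the string that appeared was one of the -e error strings.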


@@ -1,167 +0,0 @@
#!/bin/bash
# shellcheck disable=SC1091 # The relative paths in this file only become valid at runtime.
# shellcheck disable=SC2034
# shellcheck disable=SC2086 # we want word splitting
. "$SCRIPTS_DIR"/setup-test-env.sh
BM=$CI_PROJECT_DIR/install/bare-metal
CI_COMMON=$CI_PROJECT_DIR/install/common
if [ -z "$BM_SERIAL" ] && [ -z "$BM_SERIAL_SCRIPT" ]; then
echo "Must set BM_SERIAL OR BM_SERIAL_SCRIPT in your gitlab-runner config.toml [[runners]] environment"
echo "BM_SERIAL:"
echo " This is the serial device to talk to for waiting for fastboot to be ready and logging from the kernel."
echo "BM_SERIAL_SCRIPT:"
echo " This is a shell script to talk to for waiting for fastboot to be ready and logging from the kernel."
exit 1
fi
if [ -z "$BM_POWERUP" ]; then
echo "Must set BM_POWERUP in your gitlab-runner config.toml [[runners]] environment"
echo "This is a shell script that should reset the device and begin its boot sequence"
echo "such that it pauses at fastboot."
exit 1
fi
if [ -z "$BM_POWERDOWN" ]; then
echo "Must set BM_POWERDOWN in your gitlab-runner config.toml [[runners]] environment"
echo "This is a shell script that should power off the device."
exit 1
fi
if [ -z "$BM_FASTBOOT_SERIAL" ]; then
echo "Must set BM_FASTBOOT_SERIAL in your gitlab-runner config.toml [[runners]] environment"
echo "This must be the a stable-across-resets fastboot serial number."
exit 1
fi
if [ -z "$BM_KERNEL" ]; then
echo "Must set BM_KERNEL to your board's kernel vmlinuz or Image.gz in the job's variables:"
exit 1
fi
if [ -z "$BM_DTB" ]; then
echo "Must set BM_DTB to your board's DTB file in the job's variables:"
exit 1
fi
if [ -z "$BM_ROOTFS" ]; then
echo "Must set BM_ROOTFS to your board's rootfs directory in the job's variables:"
exit 1
fi
if echo $BM_CMDLINE | grep -q "root=/dev/nfs"; then
BM_FASTBOOT_NFSROOT=1
fi
section_start prepare_rootfs "Preparing rootfs components"
set -ex
# Clear out any previous run's artifacts.
rm -rf results/
mkdir -p results/
if [ -n "$BM_FASTBOOT_NFSROOT" ]; then
# Create the rootfs in the NFS directory. rm to make sure it's in a pristine
# state, since it's volume-mounted on the host.
rsync -a --delete $BM_ROOTFS/ /nfs/
mkdir -p /nfs/results
. $BM/rootfs-setup.sh /nfs
# Root is on NFS, no need for an initramfs.
rm -f rootfs.cpio.gz
touch rootfs.cpio
gzip rootfs.cpio
else
# Create the rootfs in a temp dir
rsync -a --delete $BM_ROOTFS/ rootfs/
. $BM/rootfs-setup.sh rootfs
# Finally, pack it up into a cpio rootfs. Skip the vulkan CTS since none of
# these devices use it and it would take up space in the initrd.
EXCLUDE_FILTER="deqp|arb_gpu_shader5|arb_gpu_shader_fp64|arb_gpu_shader_int64|glsl-4.[0123456]0|arb_tessellation_shader"
pushd rootfs
find -H . | \
grep -E -v "external/(openglcts|vulkancts|amber|glslang|spirv-tools)" |
grep -E -v "traces-db|apitrace|renderdoc" | \
grep -E -v $EXCLUDE_FILTER | \
cpio -H newc -o | \
xz --check=crc32 -T4 - > $CI_PROJECT_DIR/rootfs.cpio.gz
popd
fi
if echo "$BM_KERNEL $BM_DTB" | grep -q http; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"$BM_KERNEL" -o kernel
# FIXME: modules should be supplied too
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"$BM_DTB" -o dtb
cat kernel dtb > Image.gz-dtb
elif [ -n "${EXTERNAL_KERNEL_TAG}" ]; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${BM_KERNEL}" -o kernel
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/modules.tar.zst" -o modules.tar.zst
if [ -n "$BM_DTB" ]; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${BM_DTB}.dtb" -o dtb
fi
cat kernel dtb > Image.gz-dtb || echo "No DTB available, using pure kernel."
rm kernel
tar --keep-directory-symlink --zstd -xf modules.tar.zst -C "$BM_ROOTFS/"
rm modules.tar.zst &
else
cat /baremetal-files/"$BM_KERNEL" /baremetal-files/"$BM_DTB".dtb > Image.gz-dtb
cp /baremetal-files/"$BM_DTB".dtb dtb
fi
export PATH=$BM:$PATH
mkdir -p artifacts
mkbootimg.py \
--kernel Image.gz-dtb \
--ramdisk rootfs.cpio.gz \
--dtb dtb \
--cmdline "$BM_CMDLINE" \
$BM_MKBOOT_PARAMS \
--header_version 2 \
-o artifacts/fastboot.img
rm Image.gz-dtb dtb
# Start background command for talking to serial if we have one.
if [ -n "$BM_SERIAL_SCRIPT" ]; then
$BM_SERIAL_SCRIPT > results/serial-output.txt &
while [ ! -e results/serial-output.txt ]; do
sleep 1
done
fi
section_end prepare_rootfs
set +e
$BM/fastboot_run.py \
--dev="$BM_SERIAL" \
--test-timeout ${TEST_PHASE_TIMEOUT_MINUTES:-20} \
--fbserial="$BM_FASTBOOT_SERIAL" \
--powerup="$BM_POWERUP" \
--powerdown="$BM_POWERDOWN"
ret=$?
set -e
if [ -n "$BM_FASTBOOT_NFSROOT" ]; then
# Bring artifacts back from the NFS dir to the build dir where gitlab-runner
# will look for them.
cp -Rp /nfs/results/. results/
fi
exit $ret
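
Illustrative only: the same exclusion filtering expressed in Python, to make the intent of the grep/cpio pipeline above explicit. The three grep -v filters are combined into one regex, kept character-for-character as in the shell (so the '.' in "glsl-4.[0123456]0" still matches any character); the example paths are invented.

import re

EXCLUDE = re.compile(
    r"external/(openglcts|vulkancts|amber|glslang|spirv-tools)"
    r"|traces-db|apitrace|renderdoc"
    r"|deqp|arb_gpu_shader5|arb_gpu_shader_fp64|arb_gpu_shader_int64"
    r"|glsl-4.[0123456]0|arb_tessellation_shader")

def keep_in_initrd(path):
    # True for files that should end up in the cpio rootfs.
    return not EXCLUDE.search(path)

print(keep_in_initrd("./usr/lib/piglit/bin/arb_gpu_shader5-minmax"))  # False
print(keep_in_initrd("./usr/bin/glxinfo"))                            # True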


@@ -1,159 +0,0 @@
#!/usr/bin/env python3
#
# Copyright © 2020 Google LLC
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import argparse
import subprocess
import re
from serial_buffer import SerialBuffer
import sys
import threading
class FastbootRun:
def __init__(self, args, test_timeout):
self.powerup = args.powerup
self.ser = SerialBuffer(
args.dev, "results/serial-output.txt", "R SERIAL> ")
self.fastboot = "fastboot boot -s {ser} artifacts/fastboot.img".format(
ser=args.fbserial)
self.test_timeout = test_timeout
def close(self):
self.ser.close()
def print_error(self, message):
RED = '\033[0;31m'
NO_COLOR = '\033[0m'
print(RED + message + NO_COLOR)
def logged_system(self, cmd, timeout=60):
print("Running '{}'".format(cmd))
try:
return subprocess.call(cmd, shell=True, timeout=timeout)
except subprocess.TimeoutExpired:
self.print_error("timeout, abandoning run.")
return 1
def run(self):
if ret := self.logged_system(self.powerup):
return ret
fastboot_ready = False
for line in self.ser.lines(timeout=2 * 60, phase="bootloader"):
if re.search("[Ff]astboot: [Pp]rocessing commands", line) or \
re.search("Listening for fastboot command on", line):
fastboot_ready = True
break
if re.search("data abort", line):
self.print_error(
"Detected crash during boot, abandoning run.")
return 1
if not fastboot_ready:
self.print_error(
"Failed to get to fastboot prompt, abandoning run.")
return 1
if ret := self.logged_system(self.fastboot):
return ret
print_more_lines = -1
for line in self.ser.lines(timeout=self.test_timeout, phase="test"):
if print_more_lines == 0:
return 1
if print_more_lines > 0:
print_more_lines -= 1
if re.search("---. end Kernel panic", line):
return 1
# The db820c boards intermittently reboot. Just restart the run
# if we see a reboot after we got past fastboot.
if re.search("PON REASON", line):
self.print_error(
"Detected spontaneous reboot, abandoning run.")
return 1
# db820c sometimes wedges around iommu fault recovery
if re.search("watchdog: BUG: soft lockup - CPU.* stuck", line):
self.print_error(
"Detected kernel soft lockup, abandoning run.")
return 1
# If the network device dies, it's probably not graphics's fault, just try again.
if re.search("NETDEV WATCHDOG", line):
self.print_error(
"Detected network device failure, abandoning run.")
return 1
# A3xx recovery doesn't quite work. Sometimes the GPU will get
# wedged and recovery will fail (because power can't be reset?)
# This assumes that the jobs are sufficiently well-tested that GPU
# hangs aren't always triggered, so just try again. But print some
# more lines first so that we get better information on the cause
# of the hang. Once a hang happens, it's pretty chatty.
if "[drm:adreno_recover] *ERROR* gpu hw init failed: -22" in line:
self.print_error(
"Detected GPU hang, abandoning run.")
if print_more_lines == -1:
print_more_lines = 30
result = re.search(r"hwci: mesa: (\S*), exit_code: (\d+)", line)
if result:
status = result.group(1)
exit_code = int(result.group(2))
return exit_code
self.print_error(
"Reached the end of the CPU serial log without finding a result, abandoning run.")
return 1
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
'--dev', type=str, help='Serial device (otherwise reading from serial-output.txt)')
parser.add_argument('--powerup', type=str,
help='shell command for rebooting', required=True)
parser.add_argument('--powerdown', type=str,
help='shell command for powering off', required=True)
parser.add_argument('--fbserial', type=str,
help='fastboot serial number of the board', required=True)
parser.add_argument('--test-timeout', type=int,
help='Test phase timeout (minutes)', required=True)
args = parser.parse_args()
fastboot = FastbootRun(args, args.test_timeout * 60)
retval = fastboot.run()
fastboot.close()
fastboot.logged_system(args.powerdown)
sys.exit(retval)
if __name__ == '__main__':
main()


@@ -1,10 +0,0 @@
#!/bin/bash
relay=$1
if [ -z "$relay" ]; then
echo "Must supply a relay arg"
exit 1
fi
"$CI_PROJECT_DIR"/install/bare-metal/google-power-relay.py off "$relay"


@@ -1,19 +0,0 @@
#!/usr/bin/python3
import sys
import serial
mode = sys.argv[1]
relay = sys.argv[2]
# For our relays, "off" means "board is powered".
mode_swap = {
"on": "off",
"off": "on",
}
mode = mode_swap[mode]
ser = serial.Serial('/dev/ttyACM0', 115200, timeout=2)
command = "relay {} {}\n\r".format(mode, relay)
ser.write(command.encode())
ser.close()
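
A hypothetical example (the relay number is illustrative): google-power-relay.py off 2 writes "relay on 2" to /dev/ttyACM0, because the sense is inverted: energizing the relay cuts power to the board, so asking the script for "off" turns the relay on.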


@@ -1,12 +0,0 @@
#!/bin/bash
relay=$1
if [ -z "$relay" ]; then
echo "Must supply a relay arg"
exit 1
fi
"$CI_PROJECT_DIR"/install/bare-metal/google-power-relay.py off "$relay"
sleep 5
"$CI_PROJECT_DIR"/install/bare-metal/google-power-relay.py on "$relay"


@@ -1,569 +0,0 @@
#!/usr/bin/env python3
#
# Copyright 2015, The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Creates the boot image."""
from argparse import (ArgumentParser, ArgumentTypeError,
FileType, RawDescriptionHelpFormatter)
from hashlib import sha1
from os import fstat
from struct import pack
import array
import collections
import os
import re
import subprocess
import tempfile
# Constant and structure definition is in
# system/tools/mkbootimg/include/bootimg/bootimg.h
BOOT_MAGIC = 'ANDROID!'
BOOT_MAGIC_SIZE = 8
BOOT_NAME_SIZE = 16
BOOT_ARGS_SIZE = 512
BOOT_EXTRA_ARGS_SIZE = 1024
BOOT_IMAGE_HEADER_V1_SIZE = 1648
BOOT_IMAGE_HEADER_V2_SIZE = 1660
BOOT_IMAGE_HEADER_V3_SIZE = 1580
BOOT_IMAGE_HEADER_V3_PAGESIZE = 4096
BOOT_IMAGE_HEADER_V4_SIZE = 1584
BOOT_IMAGE_V4_SIGNATURE_SIZE = 4096
VENDOR_BOOT_MAGIC = 'VNDRBOOT'
VENDOR_BOOT_MAGIC_SIZE = 8
VENDOR_BOOT_NAME_SIZE = BOOT_NAME_SIZE
VENDOR_BOOT_ARGS_SIZE = 2048
VENDOR_BOOT_IMAGE_HEADER_V3_SIZE = 2112
VENDOR_BOOT_IMAGE_HEADER_V4_SIZE = 2128
VENDOR_RAMDISK_TYPE_NONE = 0
VENDOR_RAMDISK_TYPE_PLATFORM = 1
VENDOR_RAMDISK_TYPE_RECOVERY = 2
VENDOR_RAMDISK_TYPE_DLKM = 3
VENDOR_RAMDISK_NAME_SIZE = 32
VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE = 16
VENDOR_RAMDISK_TABLE_ENTRY_V4_SIZE = 108
# Names with special meaning, mustn't be specified in --ramdisk_name.
VENDOR_RAMDISK_NAME_BLOCKLIST = {b'default'}
PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT = '--vendor_ramdisk_fragment'
def filesize(f):
if f is None:
return 0
try:
return fstat(f.fileno()).st_size
except OSError:
return 0
def update_sha(sha, f):
if f:
sha.update(f.read())
f.seek(0)
sha.update(pack('I', filesize(f)))
else:
sha.update(pack('I', 0))
def pad_file(f, padding):
pad = (padding - (f.tell() & (padding - 1))) & (padding - 1)
f.write(pack(str(pad) + 'x'))
def get_number_of_pages(image_size, page_size):
"""calculates the number of pages required for the image"""
return (image_size + page_size - 1) // page_size
def get_recovery_dtbo_offset(args):
"""calculates the offset of recovery_dtbo image in the boot image"""
num_header_pages = 1 # header occupies a page
num_kernel_pages = get_number_of_pages(filesize(args.kernel), args.pagesize)
num_ramdisk_pages = get_number_of_pages(filesize(args.ramdisk),
args.pagesize)
num_second_pages = get_number_of_pages(filesize(args.second), args.pagesize)
dtbo_offset = args.pagesize * (num_header_pages + num_kernel_pages +
num_ramdisk_pages + num_second_pages)
return dtbo_offset
def write_header_v3_and_above(args):
if args.header_version > 3:
boot_header_size = BOOT_IMAGE_HEADER_V4_SIZE
else:
boot_header_size = BOOT_IMAGE_HEADER_V3_SIZE
args.output.write(pack(f'{BOOT_MAGIC_SIZE}s', BOOT_MAGIC.encode()))
# kernel size in bytes
args.output.write(pack('I', filesize(args.kernel)))
# ramdisk size in bytes
args.output.write(pack('I', filesize(args.ramdisk)))
# os version and patch level
args.output.write(pack('I', (args.os_version << 11) | args.os_patch_level))
args.output.write(pack('I', boot_header_size))
# reserved
args.output.write(pack('4I', 0, 0, 0, 0))
# version of boot image header
args.output.write(pack('I', args.header_version))
args.output.write(pack(f'{BOOT_ARGS_SIZE + BOOT_EXTRA_ARGS_SIZE}s',
args.cmdline))
if args.header_version >= 4:
# The signature used to verify boot image v4.
args.output.write(pack('I', BOOT_IMAGE_V4_SIGNATURE_SIZE))
pad_file(args.output, BOOT_IMAGE_HEADER_V3_PAGESIZE)
def write_vendor_boot_header(args):
if filesize(args.dtb) == 0:
raise ValueError('DTB image must not be empty.')
if args.header_version > 3:
vendor_ramdisk_size = args.vendor_ramdisk_total_size
vendor_boot_header_size = VENDOR_BOOT_IMAGE_HEADER_V4_SIZE
else:
vendor_ramdisk_size = filesize(args.vendor_ramdisk)
vendor_boot_header_size = VENDOR_BOOT_IMAGE_HEADER_V3_SIZE
args.vendor_boot.write(pack(f'{VENDOR_BOOT_MAGIC_SIZE}s',
VENDOR_BOOT_MAGIC.encode()))
# version of boot image header
args.vendor_boot.write(pack('I', args.header_version))
# flash page size
args.vendor_boot.write(pack('I', args.pagesize))
# kernel physical load address
args.vendor_boot.write(pack('I', args.base + args.kernel_offset))
# ramdisk physical load address
args.vendor_boot.write(pack('I', args.base + args.ramdisk_offset))
# ramdisk size in bytes
args.vendor_boot.write(pack('I', vendor_ramdisk_size))
args.vendor_boot.write(pack(f'{VENDOR_BOOT_ARGS_SIZE}s',
args.vendor_cmdline))
# kernel tags physical load address
args.vendor_boot.write(pack('I', args.base + args.tags_offset))
# asciiz product name
args.vendor_boot.write(pack(f'{VENDOR_BOOT_NAME_SIZE}s', args.board))
# header size in bytes
args.vendor_boot.write(pack('I', vendor_boot_header_size))
# dtb size in bytes
args.vendor_boot.write(pack('I', filesize(args.dtb)))
# dtb physical load address
args.vendor_boot.write(pack('Q', args.base + args.dtb_offset))
if args.header_version > 3:
vendor_ramdisk_table_size = (args.vendor_ramdisk_table_entry_num *
VENDOR_RAMDISK_TABLE_ENTRY_V4_SIZE)
# vendor ramdisk table size in bytes
args.vendor_boot.write(pack('I', vendor_ramdisk_table_size))
# number of vendor ramdisk table entries
args.vendor_boot.write(pack('I', args.vendor_ramdisk_table_entry_num))
# vendor ramdisk table entry size in bytes
args.vendor_boot.write(pack('I', VENDOR_RAMDISK_TABLE_ENTRY_V4_SIZE))
# bootconfig section size in bytes
args.vendor_boot.write(pack('I', filesize(args.vendor_bootconfig)))
pad_file(args.vendor_boot, args.pagesize)
def write_header(args):
if args.header_version > 4:
raise ValueError(
f'Boot header version {args.header_version} not supported')
if args.header_version in {3, 4}:
return write_header_v3_and_above(args)
ramdisk_load_address = ((args.base + args.ramdisk_offset)
if filesize(args.ramdisk) > 0 else 0)
second_load_address = ((args.base + args.second_offset)
if filesize(args.second) > 0 else 0)
args.output.write(pack(f'{BOOT_MAGIC_SIZE}s', BOOT_MAGIC.encode()))
# kernel size in bytes
args.output.write(pack('I', filesize(args.kernel)))
# kernel physical load address
args.output.write(pack('I', args.base + args.kernel_offset))
# ramdisk size in bytes
args.output.write(pack('I', filesize(args.ramdisk)))
# ramdisk physical load address
args.output.write(pack('I', ramdisk_load_address))
# second bootloader size in bytes
args.output.write(pack('I', filesize(args.second)))
# second bootloader physical load address
args.output.write(pack('I', second_load_address))
# kernel tags physical load address
args.output.write(pack('I', args.base + args.tags_offset))
# flash page size
args.output.write(pack('I', args.pagesize))
# version of boot image header
args.output.write(pack('I', args.header_version))
# os version and patch level
args.output.write(pack('I', (args.os_version << 11) | args.os_patch_level))
# asciiz product name
args.output.write(pack(f'{BOOT_NAME_SIZE}s', args.board))
args.output.write(pack(f'{BOOT_ARGS_SIZE}s', args.cmdline))
sha = sha1()
update_sha(sha, args.kernel)
update_sha(sha, args.ramdisk)
update_sha(sha, args.second)
if args.header_version > 0:
update_sha(sha, args.recovery_dtbo)
if args.header_version > 1:
update_sha(sha, args.dtb)
img_id = pack('32s', sha.digest())
args.output.write(img_id)
args.output.write(pack(f'{BOOT_EXTRA_ARGS_SIZE}s', args.extra_cmdline))
if args.header_version > 0:
if args.recovery_dtbo:
# recovery dtbo size in bytes
args.output.write(pack('I', filesize(args.recovery_dtbo)))
# recovery dtbo offset in the boot image
args.output.write(pack('Q', get_recovery_dtbo_offset(args)))
else:
# Set to zero if no recovery dtbo
args.output.write(pack('I', 0))
args.output.write(pack('Q', 0))
# Populate boot image header size for header versions 1 and 2.
if args.header_version == 1:
args.output.write(pack('I', BOOT_IMAGE_HEADER_V1_SIZE))
elif args.header_version == 2:
args.output.write(pack('I', BOOT_IMAGE_HEADER_V2_SIZE))
if args.header_version > 1:
if filesize(args.dtb) == 0:
raise ValueError('DTB image must not be empty.')
# dtb size in bytes
args.output.write(pack('I', filesize(args.dtb)))
# dtb physical load address
args.output.write(pack('Q', args.base + args.dtb_offset))
pad_file(args.output, args.pagesize)
return img_id
class AsciizBytes:
"""Parses a string and encodes it as an asciiz bytes object.
>>> AsciizBytes(bufsize=4)('foo')
b'foo\\x00'
>>> AsciizBytes(bufsize=4)('foob')
Traceback (most recent call last):
...
argparse.ArgumentTypeError: Encoded asciiz length exceeded: max 4, got 5
"""
def __init__(self, bufsize):
self.bufsize = bufsize
def __call__(self, arg):
arg_bytes = arg.encode() + b'\x00'
if len(arg_bytes) > self.bufsize:
raise ArgumentTypeError(
'Encoded asciiz length exceeded: '
f'max {self.bufsize}, got {len(arg_bytes)}')
return arg_bytes
class VendorRamdiskTableBuilder:
"""Vendor ramdisk table builder.
Attributes:
entries: A list of VendorRamdiskTableEntry namedtuple.
ramdisk_total_size: Total size in bytes of all ramdisks in the table.
"""
VendorRamdiskTableEntry = collections.namedtuple( # pylint: disable=invalid-name
'VendorRamdiskTableEntry',
['ramdisk_path', 'ramdisk_size', 'ramdisk_offset', 'ramdisk_type',
'ramdisk_name', 'board_id'])
def __init__(self):
self.entries = []
self.ramdisk_total_size = 0
self.ramdisk_names = set()
def add_entry(self, ramdisk_path, ramdisk_type, ramdisk_name, board_id):
# Strip any trailing null for simple comparison.
stripped_ramdisk_name = ramdisk_name.rstrip(b'\x00')
if stripped_ramdisk_name in VENDOR_RAMDISK_NAME_BLOCKLIST:
raise ValueError(
f'Banned vendor ramdisk name: {stripped_ramdisk_name}')
if stripped_ramdisk_name in self.ramdisk_names:
raise ValueError(
f'Duplicated vendor ramdisk name: {stripped_ramdisk_name}')
self.ramdisk_names.add(stripped_ramdisk_name)
if board_id is None:
board_id = array.array(
'I', [0] * VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE)
else:
board_id = array.array('I', board_id)
if len(board_id) != VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE:
raise ValueError('board_id size must be '
f'{VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE}')
with open(ramdisk_path, 'rb') as f:
ramdisk_size = filesize(f)
self.entries.append(self.VendorRamdiskTableEntry(
ramdisk_path, ramdisk_size, self.ramdisk_total_size, ramdisk_type,
ramdisk_name, board_id))
self.ramdisk_total_size += ramdisk_size
def write_ramdisks_padded(self, fout, alignment):
for entry in self.entries:
with open(entry.ramdisk_path, 'rb') as f:
fout.write(f.read())
pad_file(fout, alignment)
def write_entries_padded(self, fout, alignment):
for entry in self.entries:
fout.write(pack('I', entry.ramdisk_size))
fout.write(pack('I', entry.ramdisk_offset))
fout.write(pack('I', entry.ramdisk_type))
fout.write(pack(f'{VENDOR_RAMDISK_NAME_SIZE}s',
entry.ramdisk_name))
fout.write(entry.board_id)
pad_file(fout, alignment)
def write_padded_file(f_out, f_in, padding):
if f_in is None:
return
f_out.write(f_in.read())
pad_file(f_out, padding)
def parse_int(x):
return int(x, 0)
def parse_os_version(x):
match = re.search(r'^(\d{1,3})(?:\.(\d{1,3})(?:\.(\d{1,3}))?)?', x)
if match:
a = int(match.group(1))
b = c = 0
if match.lastindex >= 2:
b = int(match.group(2))
if match.lastindex == 3:
c = int(match.group(3))
# 7 bits allocated for each field
assert a < 128
assert b < 128
assert c < 128
return (a << 14) | (b << 7) | c
return 0
def parse_os_patch_level(x):
match = re.search(r'^(\d{4})-(\d{2})(?:-(\d{2}))?', x)
if match:
y = int(match.group(1)) - 2000
m = int(match.group(2))
# 7 bits allocated for the year, 4 bits for the month
assert 0 <= y < 128
assert 0 < m <= 12
return (y << 4) | m
return 0
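# Worked example (illustrative, not part of the original tool): an
# --os_version of "12.0.0" packs as (12 << 14) | (0 << 7) | 0 = 196608, and an
# --os_patch_level of "2023-05" packs as ((2023 - 2000) << 4) | 5 = 373.
# write_header() then stores both in a single 32-bit word as
# (os_version << 11) | os_patch_level.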
def parse_vendor_ramdisk_type(x):
type_dict = {
'none': VENDOR_RAMDISK_TYPE_NONE,
'platform': VENDOR_RAMDISK_TYPE_PLATFORM,
'recovery': VENDOR_RAMDISK_TYPE_RECOVERY,
'dlkm': VENDOR_RAMDISK_TYPE_DLKM,
}
if x.lower() in type_dict:
return type_dict[x.lower()]
return parse_int(x)
def get_vendor_boot_v4_usage():
return """vendor boot version 4 arguments:
--ramdisk_type {none,platform,recovery,dlkm}
specify the type of the ramdisk
--ramdisk_name NAME
specify the name of the ramdisk
--board_id{0..15} NUMBER
specify the value of the board_id vector, defaults to 0
--vendor_ramdisk_fragment VENDOR_RAMDISK_FILE
path to the vendor ramdisk file
These options can be specified multiple times, where each vendor ramdisk
option group ends with a --vendor_ramdisk_fragment option.
Each option group appends an additional ramdisk to the vendor boot image.
"""
def parse_vendor_ramdisk_args(args, args_list):
"""Parses vendor ramdisk specific arguments.
Args:
args: An argparse.Namespace object. Parsed results are stored into this
object.
args_list: A list of argument strings to be parsed.
Returns:
A list of argument strings that are not parsed by this method.
"""
parser = ArgumentParser(add_help=False)
parser.add_argument('--ramdisk_type', type=parse_vendor_ramdisk_type,
default=VENDOR_RAMDISK_TYPE_NONE)
parser.add_argument('--ramdisk_name',
type=AsciizBytes(bufsize=VENDOR_RAMDISK_NAME_SIZE),
required=True)
for i in range(VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE):
parser.add_argument(f'--board_id{i}', type=parse_int, default=0)
parser.add_argument(PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT, required=True)
unknown_args = []
vendor_ramdisk_table_builder = VendorRamdiskTableBuilder()
if args.vendor_ramdisk is not None:
vendor_ramdisk_table_builder.add_entry(
args.vendor_ramdisk.name, VENDOR_RAMDISK_TYPE_PLATFORM, b'', None)
while PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT in args_list:
idx = args_list.index(PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT) + 2
vendor_ramdisk_args = args_list[:idx]
args_list = args_list[idx:]
ramdisk_args, extra_args = parser.parse_known_args(vendor_ramdisk_args)
ramdisk_args_dict = vars(ramdisk_args)
unknown_args.extend(extra_args)
ramdisk_path = ramdisk_args.vendor_ramdisk_fragment
ramdisk_type = ramdisk_args.ramdisk_type
ramdisk_name = ramdisk_args.ramdisk_name
board_id = [ramdisk_args_dict[f'board_id{i}']
for i in range(VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE)]
vendor_ramdisk_table_builder.add_entry(ramdisk_path, ramdisk_type,
ramdisk_name, board_id)
if len(args_list) > 0:
unknown_args.extend(args_list)
args.vendor_ramdisk_total_size = (vendor_ramdisk_table_builder
.ramdisk_total_size)
args.vendor_ramdisk_table_entry_num = len(vendor_ramdisk_table_builder
.entries)
args.vendor_ramdisk_table_builder = vendor_ramdisk_table_builder
return unknown_args
def parse_cmdline():
version_parser = ArgumentParser(add_help=False)
version_parser.add_argument('--header_version', type=parse_int, default=0)
if version_parser.parse_known_args()[0].header_version < 3:
# For boot header v0 to v2, the kernel commandline field is split into
# two fields, cmdline and extra_cmdline. Both fields are asciiz strings,
# so we subtract one here to ensure the encoded string plus the
# null-terminator can fit in the buffer size.
cmdline_size = BOOT_ARGS_SIZE + BOOT_EXTRA_ARGS_SIZE - 1
else:
cmdline_size = BOOT_ARGS_SIZE + BOOT_EXTRA_ARGS_SIZE
parser = ArgumentParser(formatter_class=RawDescriptionHelpFormatter,
epilog=get_vendor_boot_v4_usage())
parser.add_argument('--kernel', type=FileType('rb'),
help='path to the kernel')
parser.add_argument('--ramdisk', type=FileType('rb'),
help='path to the ramdisk')
parser.add_argument('--second', type=FileType('rb'),
help='path to the second bootloader')
parser.add_argument('--dtb', type=FileType('rb'), help='path to the dtb')
dtbo_group = parser.add_mutually_exclusive_group()
dtbo_group.add_argument('--recovery_dtbo', type=FileType('rb'),
help='path to the recovery DTBO')
dtbo_group.add_argument('--recovery_acpio', type=FileType('rb'),
metavar='RECOVERY_ACPIO', dest='recovery_dtbo',
help='path to the recovery ACPIO')
parser.add_argument('--cmdline', type=AsciizBytes(bufsize=cmdline_size),
default='', help='kernel command line arguments')
parser.add_argument('--vendor_cmdline',
type=AsciizBytes(bufsize=VENDOR_BOOT_ARGS_SIZE),
default='',
help='vendor boot kernel command line arguments')
parser.add_argument('--base', type=parse_int, default=0x10000000,
help='base address')
parser.add_argument('--kernel_offset', type=parse_int, default=0x00008000,
help='kernel offset')
parser.add_argument('--ramdisk_offset', type=parse_int, default=0x01000000,
help='ramdisk offset')
parser.add_argument('--second_offset', type=parse_int, default=0x00f00000,
help='second bootloader offset')
parser.add_argument('--dtb_offset', type=parse_int, default=0x01f00000,
help='dtb offset')
parser.add_argument('--os_version', type=parse_os_version, default=0,
help='operating system version')
parser.add_argument('--os_patch_level', type=parse_os_patch_level,
default=0, help='operating system patch level')
parser.add_argument('--tags_offset', type=parse_int, default=0x00000100,
help='tags offset')
parser.add_argument('--board', type=AsciizBytes(bufsize=BOOT_NAME_SIZE),
default='', help='board name')
parser.add_argument('--pagesize', type=parse_int,
choices=[2**i for i in range(11, 15)], default=2048,
help='page size')
parser.add_argument('--id', action='store_true',
help='print the image ID on standard output')
parser.add_argument('--header_version', type=parse_int, default=0,
help='boot image header version')
parser.add_argument('-o', '--output', type=FileType('wb'),
help='output file name')
parser.add_argument('--gki_signing_algorithm',
help='GKI signing algorithm to use')
parser.add_argument('--gki_signing_key',
help='path to RSA private key file')
parser.add_argument('--gki_signing_signature_args',
help='other hash arguments passed to avbtool')
parser.add_argument('--gki_signing_avbtool_path',
help='path to avbtool for boot signature generation')
parser.add_argument('--vendor_boot', type=FileType('wb'),
help='vendor boot output file name')
parser.add_argument('--vendor_ramdisk', type=FileType('rb'),
help='path to the vendor ramdisk')
parser.add_argument('--vendor_bootconfig', type=FileType('rb'),
help='path to the vendor bootconfig file')
args, extra_args = parser.parse_known_args()
if args.vendor_boot is not None and args.header_version > 3:
extra_args = parse_vendor_ramdisk_args(args, extra_args)
if len(extra_args) > 0:
raise ValueError(f'Unrecognized arguments: {extra_args}')
if args.header_version < 3:
args.extra_cmdline = args.cmdline[BOOT_ARGS_SIZE-1:]
args.cmdline = args.cmdline[:BOOT_ARGS_SIZE-1] + b'\x00'
assert len(args.cmdline) <= BOOT_ARGS_SIZE
assert len(args.extra_cmdline) <= BOOT_EXTRA_ARGS_SIZE
return args
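# Example of the v0-v2 cmdline split above (a sketch with a deliberately tiny
# buffer): if BOOT_ARGS_SIZE were 8 and the encoded cmdline were
# b'consoleXtty', then args.extra_cmdline would become the bytes from index 7
# onward (b'Xtty') and args.cmdline the first 7 bytes plus a NUL terminator
# (b'console\x00'), so both fields fit their fixed-size header buffers.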
def add_boot_image_signature(args, pagesize):
"""Adds the boot image signature.
Note that the signature will only be verified in VTS to ensure a
generic boot.img is used. It will not be used by the device
bootloader at boot time. The bootloader should only verify
the boot vbmeta at the end of the boot partition (or in the top-level
vbmeta partition) via the Android Verified Boot process, when the
device boots.
"""
args.output.flush() # Flush the buffer for signature calculation.
# Appends zeros if the signing key is not specified.
if not args.gki_signing_key or not args.gki_signing_algorithm:
zeros = b'\x00' * BOOT_IMAGE_V4_SIGNATURE_SIZE
args.output.write(zeros)
pad_file(args.output, pagesize)
return
avbtool = 'avbtool' # Used from otatools.zip or Android build env.
# We need to specify the path to avbtool in build/core/Makefile
# because avbtool is not guaranteed to be in $PATH there.
if args.gki_signing_avbtool_path:
avbtool = args.gki_signing_avbtool_path
# Need to specify a value of --partition_size for avbtool to work.
# We use 64 MB below, but avbtool will not resize the boot image to
# this size because --do_not_append_vbmeta_image is also specified.
avbtool_cmd = [
avbtool, 'add_hash_footer',
'--partition_name', 'boot',
'--partition_size', str(64 * 1024 * 1024),
'--image', args.output.name,
'--algorithm', args.gki_signing_algorithm,
'--key', args.gki_signing_key,
'--salt', 'd00df00d'] # TODO: use a hash of kernel/ramdisk as the salt.
# Additional arguments passed to avbtool.
if args.gki_signing_signature_args:
avbtool_cmd += args.gki_signing_signature_args.split()
# Outputs the signed vbmeta to a separate file, then appends it to boot.img
# as the boot signature.
with tempfile.TemporaryDirectory() as temp_out_dir:
boot_signature_output = os.path.join(temp_out_dir, 'boot_signature')
avbtool_cmd += ['--do_not_append_vbmeta_image',
'--output_vbmeta_image', boot_signature_output]
subprocess.check_call(avbtool_cmd)
with open(boot_signature_output, 'rb') as boot_signature:
if filesize(boot_signature) > BOOT_IMAGE_V4_SIGNATURE_SIZE:
raise ValueError(
f'boot signature size is > {BOOT_IMAGE_V4_SIGNATURE_SIZE}')
write_padded_file(args.output, boot_signature, pagesize)
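# For reference, the assembled avbtool command resembles the following
# (algorithm, key and image names are illustrative; the flags come from the
# code above):
#   avbtool add_hash_footer --partition_name boot --partition_size 67108864 \
#       --image boot.img --algorithm SHA256_RSA2048 --key rsa2048.pem \
#       --salt d00df00d --do_not_append_vbmeta_image \
#       --output_vbmeta_image <tmpdir>/boot_signature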
def write_data(args, pagesize):
write_padded_file(args.output, args.kernel, pagesize)
write_padded_file(args.output, args.ramdisk, pagesize)
write_padded_file(args.output, args.second, pagesize)
if args.header_version > 0 and args.header_version < 3:
write_padded_file(args.output, args.recovery_dtbo, pagesize)
if args.header_version == 2:
write_padded_file(args.output, args.dtb, pagesize)
if args.header_version >= 4:
add_boot_image_signature(args, pagesize)
def write_vendor_boot_data(args):
if args.header_version > 3:
builder = args.vendor_ramdisk_table_builder
builder.write_ramdisks_padded(args.vendor_boot, args.pagesize)
write_padded_file(args.vendor_boot, args.dtb, args.pagesize)
builder.write_entries_padded(args.vendor_boot, args.pagesize)
write_padded_file(args.vendor_boot, args.vendor_bootconfig,
args.pagesize)
else:
write_padded_file(args.vendor_boot, args.vendor_ramdisk, args.pagesize)
write_padded_file(args.vendor_boot, args.dtb, args.pagesize)
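# For header v4 the vendor_boot payload written above is, in order: the
# concatenated ramdisk fragments, the dtb, the ramdisk table entries and the
# bootconfig, each padded to the vendor page size. For v3 it is simply the
# single vendor ramdisk followed by the dtb.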
def main():
args = parse_cmdline()
if args.vendor_boot is not None:
if args.header_version not in {3, 4}:
raise ValueError(
'--vendor_boot not compatible with given header version')
if args.header_version == 3 and args.vendor_ramdisk is None:
raise ValueError('--vendor_ramdisk missing or invalid')
write_vendor_boot_header(args)
write_vendor_boot_data(args)
if args.output is not None:
if args.second is not None and args.header_version > 2:
raise ValueError(
'--second not compatible with given header version')
img_id = write_header(args)
if args.header_version > 2:
write_data(args, BOOT_IMAGE_HEADER_V3_PAGESIZE)
else:
write_data(args, args.pagesize)
if args.id and img_id is not None:
print('0x' + ''.join(f'{octet:02x}' for octet in img_id))
if __name__ == '__main__':
main()


@@ -1,16 +0,0 @@
#!/bin/bash
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must supply the PoE Interface to power up"
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must supply the PoE Switch host"
exit 1
fi
SNMP_KEY="SNMPv2-SMI::mib-2.105.1.1.1.3.1.$((${BM_POE_BASE:-0} + BM_POE_INTERFACE))"
SNMP_OFF="i 2"
flock /var/run/poe.lock -c "snmpset -v2c -r 3 -t 30 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_OFF"
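# Example (illustrative values): with BM_POE_ADDRESS=10.0.0.2, BM_POE_BASE
# unset and BM_POE_INTERFACE=7, the command above runs roughly:
#   snmpset -v2c -r 3 -t 30 -cmesaci 10.0.0.2 \
#       SNMPv2-SMI::mib-2.105.1.1.1.3.1.7 i 2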


@@ -1,19 +0,0 @@
#!/bin/bash
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must supply the PoE Interface to power up"
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must supply the PoE Switch host"
exit 1
fi
SNMP_KEY="SNMPv2-SMI::mib-2.105.1.1.1.3.1.$((${BM_POE_BASE:-0} + BM_POE_INTERFACE))"
SNMP_ON="i 1"
SNMP_OFF="i 2"
flock /var/run/poe.lock -c "snmpset -v2c -r 3 -t 30 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_OFF"
sleep 3s
flock /var/run/poe.lock -c "snmpset -v2c -r 3 -t 30 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_ON"


@@ -1,236 +0,0 @@
#!/bin/bash
# shellcheck disable=SC1091
# shellcheck disable=SC2034
# shellcheck disable=SC2059
# shellcheck disable=SC2086 # we want word splitting
. "$SCRIPTS_DIR"/setup-test-env.sh
# Boot script for devices attached to a PoE switch, using NFS for the root
# filesystem.
# We're run from the root of the repo, make a helper var for our paths
BM=$CI_PROJECT_DIR/install/bare-metal
CI_COMMON=$CI_PROJECT_DIR/install/common
CI_INSTALL=$CI_PROJECT_DIR/install
# Runner config checks
if [ -z "$BM_SERIAL" ]; then
echo "Must set BM_SERIAL in your gitlab-runner config.toml [[runners]] environment"
echo "This is the serial port to listen the device."
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must set BM_POE_ADDRESS in your gitlab-runner config.toml [[runners]] environment"
echo "This is the PoE switch address to connect for powering up/down devices."
exit 1
fi
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must set BM_POE_INTERFACE in your gitlab-runner config.toml [[runners]] environment"
echo "This is the PoE switch interface where the device is connected."
exit 1
fi
if [ -z "$BM_POWERUP" ]; then
echo "Must set BM_POWERUP in your gitlab-runner config.toml [[runners]] environment"
echo "This is a shell script that should power up the device and begin its boot sequence."
exit 1
fi
if [ -z "$BM_POWERDOWN" ]; then
echo "Must set BM_POWERDOWN in your gitlab-runner config.toml [[runners]] environment"
echo "This is a shell script that should power off the device."
exit 1
fi
if [ ! -d /nfs ]; then
echo "NFS rootfs directory needs to be mounted at /nfs by the gitlab runner"
exit 1
fi
if [ ! -d /tftp ]; then
echo "TFTP directory for this board needs to be mounted at /tftp by the gitlab runner"
exit 1
fi
# job config checks
if [ -z "$BM_ROOTFS" ]; then
echo "Must set BM_ROOTFS to your board's rootfs directory in the job's variables"
exit 1
fi
if [ -z "$BM_BOOTFS" ] && { [ -z "$BM_KERNEL" ] || [ -z "$BM_DTB" ]; } ; then
echo "Must set /boot files for the TFTP boot in the job's variables or set kernel and dtb"
exit 1
fi
if [ -z "$BM_CMDLINE" ]; then
echo "Must set BM_CMDLINE to your board's kernel command line arguments"
exit 1
fi
section_start prepare_rootfs "Preparing rootfs components"
set -ex
date +'%F %T'
# Clear out any previous run's artifacts.
rm -rf results/
mkdir -p results
# Create the rootfs in the NFS directory. rm to make sure it's in a pristine
# state, since it's volume-mounted on the host.
rsync -a --delete $BM_ROOTFS/ /nfs/
date +'%F %T'
# If BM_BOOTFS is a URL, download it
if echo $BM_BOOTFS | grep -q http; then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}$BM_BOOTFS" -o /tmp/bootfs.tar
BM_BOOTFS=/tmp/bootfs.tar
fi
date +'%F %T'
# If BM_BOOTFS is a file, assume it is a tarball and uncompress it
if [ -f "${BM_BOOTFS}" ]; then
mkdir -p /tmp/bootfs
tar xf $BM_BOOTFS -C /tmp/bootfs
BM_BOOTFS=/tmp/bootfs
fi
# If BM_KERNEL and BM_DTB are present
if [ -n "${EXTERNAL_KERNEL_TAG}" ]; then
if [ -z "${BM_KERNEL}" ] || [ -z "${BM_DTB}" ]; then
echo "This machine cannot be tested with external kernel since BM_KERNEL or BM_DTB missing!"
exit 1
fi
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${BM_KERNEL}" -o "${BM_KERNEL}"
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/${BM_DTB}.dtb" -o "${BM_DTB}.dtb"
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${FDO_HTTP_CACHE_URI:-}${KERNEL_IMAGE_BASE}/${DEBIAN_ARCH}/modules.tar.zst" -o modules.tar.zst
fi
date +'%F %T'
# Install kernel modules (they could be either in /lib/modules or
# /usr/lib/modules, but we want to install them in the latter)
if [ -n "${EXTERNAL_KERNEL_TAG}" ]; then
tar --keep-directory-symlink --zstd -xf modules.tar.zst -C /nfs/
rm modules.tar.zst &
elif [ -n "${BM_BOOTFS}" ]; then
[ -d $BM_BOOTFS/usr/lib/modules ] && rsync -a $BM_BOOTFS/usr/lib/modules/ /nfs/usr/lib/modules/
[ -d $BM_BOOTFS/lib/modules ] && rsync -a $BM_BOOTFS/lib/modules/ /nfs/lib/modules/
else
echo "No modules!"
fi
date +'%F %T'
# Install kernel image + bootloader files
if [ -n "${EXTERNAL_KERNEL_TAG}" ] || [ -z "$BM_BOOTFS" ]; then
mv "${BM_KERNEL}" "${BM_DTB}.dtb" /tftp/
else # BM_BOOTFS
rsync -aL --delete $BM_BOOTFS/boot/ /tftp/
fi
date +'%F %T'
# Set up the pxelinux config for Jetson Nano
mkdir -p /tftp/pxelinux.cfg
cat <<EOF >/tftp/pxelinux.cfg/default-arm-tegra210-p3450-0000
PROMPT 0
TIMEOUT 30
DEFAULT primary
MENU TITLE jetson nano boot options
LABEL primary
MENU LABEL CI kernel on TFTP
LINUX Image
FDT tegra210-p3450-0000.dtb
APPEND \${cbootargs} $BM_CMDLINE
EOF
# Set up the pxelinux config for Jetson TK1
cat <<EOF >/tftp/pxelinux.cfg/default-arm-tegra124-jetson-tk1
PROMPT 0
TIMEOUT 30
DEFAULT primary
MENU TITLE jetson TK1 boot options
LABEL primary
MENU LABEL CI kernel on TFTP
LINUX zImage
FDT tegra124-jetson-tk1.dtb
APPEND \${cbootargs} $BM_CMDLINE
EOF
# Create the rootfs in the NFS directory
. $BM/rootfs-setup.sh /nfs
date +'%F %T'
echo "$BM_CMDLINE" > /tftp/cmdline.txt
# Add some options to config.txt, if defined
if [ -n "$BM_BOOTCONFIG" ]; then
printf "$BM_BOOTCONFIG" >> /tftp/config.txt
fi
section_end prepare_rootfs
set +e
STRUCTURED_LOG_FILE=results/job_detail.json
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update dut_job_type "${DEVICE_TYPE}"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update farm "${FARM}"
ATTEMPTS=3
first_attempt=True
while [ $((ATTEMPTS--)) -gt 0 ]; do
section_start dut_boot "Booting hardware device ..."
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --create-dut-job dut_name "${CI_RUNNER_DESCRIPTION}"
# Update submit time to CI_JOB_STARTED_AT only for the first run
if [ "$first_attempt" = "True" ]; then
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update-dut-time submit "${CI_JOB_STARTED_AT}"
else
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --update-dut-time submit
fi
python3 $BM/poe_run.py \
--dev="$BM_SERIAL" \
--powerup="$BM_POWERUP" \
--powerdown="$BM_POWERDOWN" \
--boot-timeout-seconds ${BOOT_PHASE_TIMEOUT_SECONDS:-300} \
--test-timeout-minutes ${TEST_PHASE_TIMEOUT_MINUTES:-$((CI_JOB_TIMEOUT/60 - ${TEST_SETUP_AND_UPLOAD_MARGIN_MINUTES:-5}))}
ret=$?
if [ $ret -eq 2 ]; then
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close-dut-job
first_attempt=False
error "Device failed to boot; will retry"
else
# We're no longer in dut_boot by this point
unset CURRENT_SECTION
ATTEMPTS=0
fi
done
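# Only an exit code of 2 from poe_run.py (boot was never detected) triggers a
# retry in the loop above; any other exit code stops the attempts, and $ret is
# used as the job's exit status after cleanup below.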
section_start dut_cleanup "Cleaning up after job"
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close-dut-job
python3 $CI_INSTALL/custom_logger.py ${STRUCTURED_LOG_FILE} --close
set -e
date +'%F %T'
# Bring artifacts back from the NFS dir to the build dir where gitlab-runner
# will look for them.
cp -Rp /nfs/results/. results/
date +'%F %T'
section_end dut_cleanup
exit $ret


@@ -1,134 +0,0 @@
#!/usr/bin/env python3
#
# Copyright © 2020 Igalia, S.L.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import argparse
import os
import re
import sys
import threading
from custom_logger import CustomLogger
from serial_buffer import SerialBuffer
class PoERun:
def __init__(self, args, boot_timeout, test_timeout, logger):
self.powerup = args.powerup
self.powerdown = args.powerdown
self.ser = SerialBuffer(
args.dev, "results/serial-output.txt", ": ")
self.boot_timeout = boot_timeout
self.test_timeout = test_timeout
self.logger = logger
def print_error(self, message):
RED = '\033[0;31m'
NO_COLOR = '\033[0m'
print(RED + message + NO_COLOR)
self.logger.update_status_fail(message)
def logged_system(self, cmd):
print("Running '{}'".format(cmd))
return os.system(cmd)
def run(self):
if self.logged_system(self.powerup) != 0:
self.logger.update_status_fail("powerup failed")
return 1
boot_detected = False
self.logger.create_job_phase("boot")
for line in self.ser.lines(timeout=self.boot_timeout, phase="bootloader"):
if re.search("Booting Linux", line):
boot_detected = True
break
if not boot_detected:
self.print_error(
"Something wrong; couldn't detect the boot start up sequence")
return 2
self.logger.create_job_phase("test")
for line in self.ser.lines(timeout=self.test_timeout, phase="test"):
if re.search("---. end Kernel panic", line):
self.logger.update_status_fail("kernel panic")
return 1
# Binning memory problems
if re.search("binner overflow mem", line):
self.print_error("Memory overflow in the binner; GPU hang")
return 1
if re.search("nouveau 57000000.gpu: bus: MMIO read of 00000000 FAULT at 137000", line):
self.print_error("nouveau jetson boot bug, abandoning run.")
return 1
# network fail on tk1
if re.search("NETDEV WATCHDOG:.* transmit queue 0 timed out", line):
self.print_error("nouveau jetson tk1 network fail, abandoning run.")
return 1
result = re.search(r"hwci: mesa: (\S*), exit_code: (\d+)", line)
if result:
status = result.group(1)
exit_code = int(result.group(2))
if status == "pass":
self.logger.update_dut_job("status", "pass")
else:
self.logger.update_status_fail("test fail")
self.logger.update_dut_job("exit_code", exit_code)
return exit_code
self.print_error(
"Reached the end of the CPU serial log without finding a result")
return 1
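# The test phase above looks for a final status line on the serial console of
# the form (values illustrative):
#   hwci: mesa: pass, exit_code: 0
# which the regex turns into the reported job status and the process exit code.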
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--dev', type=str,
help='Serial device to monitor', required=True)
parser.add_argument('--powerup', type=str,
help='shell command for rebooting', required=True)
parser.add_argument('--powerdown', type=str,
help='shell command for powering off', required=True)
parser.add_argument(
'--boot-timeout-seconds', type=int, help='Boot phase timeout (seconds)', required=True)
parser.add_argument(
'--test-timeout-minutes', type=int, help='Test phase timeout (minutes)', required=True)
args = parser.parse_args()
logger = CustomLogger("results/job_detail.json")
logger.update_dut_time("start", None)
poe = PoERun(args, args.boot_timeout_seconds, args.test_timeout_minutes * 60, logger)
retval = poe.run()
poe.logged_system(args.powerdown)
logger.update_dut_time("end", None)
sys.exit(retval)
if __name__ == '__main__':
main()


@@ -1,34 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
rootfs_dst=$1
mkdir -p $rootfs_dst/results
# Set up the init script that brings up the system.
cp $BM/bm-init.sh $rootfs_dst/init
cp $CI_COMMON/init*.sh $rootfs_dst/
date +'%F %T'
# Make the JWT token available as a file in the bare-metal storage to enable
# access to MinIO
cp "${S3_JWT_FILE}" "${rootfs_dst}${S3_JWT_FILE}"
date +'%F %T'
cp "$SCRIPTS_DIR/setup-test-env.sh" "$rootfs_dst/"
set +x
# Pass through relevant env vars from the gitlab job to the baremetal init script
echo "Variables passed through:"
"$CI_COMMON"/export-gitlab-job-env-for-dut.sh | tee $rootfs_dst/set-job-env-vars.sh
set -x
# Add the Mesa drivers we built, and make a consistent symlink to them.
mkdir -p $rootfs_dst/$CI_PROJECT_DIR
rsync -aH --delete $CI_PROJECT_DIR/install/ $rootfs_dst/$CI_PROJECT_DIR/install/
date +'%F %T'


@@ -1,186 +0,0 @@
#!/usr/bin/env python3
#
# Copyright © 2020 Google LLC
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import argparse
from datetime import datetime, UTC
import queue
import serial
import threading
import time
class SerialBuffer:
def __init__(self, dev, filename, prefix, timeout=None, line_queue=None):
self.filename = filename
self.dev = dev
if dev:
self.f = open(filename, "wb+")
self.serial = serial.Serial(dev, 115200, timeout=timeout)
else:
self.f = open(filename, "rb")
self.serial = None
self.byte_queue = queue.Queue()
# allow multiple SerialBuffers to share a line queue so you can merge
# servo's CPU and EC streams into one thing to watch the boot/test
# progress on.
if line_queue:
self.line_queue = line_queue
else:
self.line_queue = queue.Queue()
self.prefix = prefix
self.timeout = timeout
self.sentinel = object()
self.closing = False
if self.dev:
self.read_thread = threading.Thread(
target=self.serial_read_thread_loop, daemon=True)
else:
self.read_thread = threading.Thread(
target=self.serial_file_read_thread_loop, daemon=True)
self.read_thread.start()
self.lines_thread = threading.Thread(
target=self.serial_lines_thread_loop, daemon=True)
self.lines_thread.start()
def close(self):
self.closing = True
if self.serial:
self.serial.cancel_read()
self.read_thread.join()
self.lines_thread.join()
if self.serial:
self.serial.close()
# Thread that just reads the bytes from the serial device to try to keep its
# buffer from overflowing. If nothing is received in 1 minute, it finalizes.
def serial_read_thread_loop(self):
greet = "Serial thread reading from %s\n" % self.dev
self.byte_queue.put(greet.encode())
while not self.closing:
try:
b = self.serial.read()
if len(b) == 0:
break
self.byte_queue.put(b)
except Exception as err:
print(self.prefix + str(err))
break
self.byte_queue.put(self.sentinel)
# Thread that just reads the bytes from the file of serial output that some
# other process is appending to.
def serial_file_read_thread_loop(self):
greet = "Serial thread reading from %s\n" % self.filename
self.byte_queue.put(greet.encode())
while not self.closing:
line = self.f.readline()
if line:
self.byte_queue.put(line)
else:
time.sleep(0.1)
self.byte_queue.put(self.sentinel)
# Thread that processes the stream of bytes to 1) log to stdout, 2) log to
# file, 3) add to the queue of lines to be read by program logic
def serial_lines_thread_loop(self):
line = bytearray()
while True:
bytes = self.byte_queue.get(block=True)
if bytes == self.sentinel:
self.read_thread.join()
self.line_queue.put(self.sentinel)
break
# Write our data to the output file if we're the ones reading from
# the serial device
if self.dev:
self.f.write(bytes)
self.f.flush()
for b in bytes:
line.append(b)
if b == b'\n'[0]:
line = line.decode(errors="replace")
ts = datetime.now(tz=UTC)
ts_str = f"{ts.hour:02}:{ts.minute:02}:{ts.second:02}.{int(ts.microsecond / 1000):03}"
print("{endc}{time}{prefix}{line}".format(
time=ts_str, prefix=self.prefix, line=line, endc='\033[0m'), flush=True, end='')
self.line_queue.put(line)
line = bytearray()
def lines(self, timeout=None, phase=None):
start_time = time.monotonic()
while True:
read_timeout = None
if timeout:
read_timeout = timeout - (time.monotonic() - start_time)
if read_timeout <= 0:
print("read timeout waiting for serial during {}".format(phase))
self.close()
break
try:
line = self.line_queue.get(timeout=read_timeout)
except queue.Empty:
print("read timeout waiting for serial during {}".format(phase))
self.close()
break
if line == self.sentinel:
print("End of serial output")
self.lines_thread.join()
break
yield line
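# Typical usage (a sketch mirroring poe_run.py; the device path is
# hypothetical): construct a SerialBuffer for a device and iterate its line
# stream with a timeout, e.g.
#   ser = SerialBuffer('/dev/ttyUSB0', 'results/serial-output.txt', ': ')
#   for line in ser.lines(timeout=300, phase='bootloader'):
#       ...
# The generator ends on timeout or when the sentinel marks end of output.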
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--dev', type=str, help='Serial device')
parser.add_argument('--file', type=str,
help='Filename for serial output', required=True)
parser.add_argument('--prefix', type=str,
help='Prefix for logging serial to stdout', nargs='?')
args = parser.parse_args()
ser = SerialBuffer(args.dev, args.file, args.prefix or "")
for line in ser.lines():
# We're just using this as a logger, so eat the produced lines and drop
# them
pass
if __name__ == '__main__':
main()


@@ -1,41 +0,0 @@
#!/usr/bin/python3
# Copyright © 2020 Christian Gmeiner
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
# Tiny script to read bytes from telnet, and write the output to stdout, with a
# buffer in between so we don't lose serial output from its buffer.
#
import sys
import telnetlib
host = sys.argv[1]
port = sys.argv[2]
tn = telnetlib.Telnet(host, port, 1000000)
while True:
bytes = tn.read_some()
sys.stdout.buffer.write(bytes)
sys.stdout.flush()
tn.close()


@@ -1 +0,0 @@
../bin/ci


@@ -1,851 +0,0 @@
# Shared between windows and Linux
.build-common:
extends: .container+build-rules
# Cancel job if a newer commit is pushed to the same branch
interruptible: true
variables:
# Build jobs typically take between 5 and 12 minutes, depending on how
# much they build and how many new Rust compilers we have to build twice.
# Allow 25 minutes as a reasonable margin: beyond this point, something
# has gone badly wrong, and we should try again to see if we can get
# something from it.
#
# Some jobs not in the critical path use a higher timeout, particularly
# when building with ASan or UBSan.
BUILD_JOB_TIMEOUT: 12m
RUN_MESON_TESTS: "true"
timeout: 16m
# We don't want to download any previous job's artifacts
dependencies: []
artifacts:
name: "${CI_PROJECT_NAME}_${CI_JOB_NAME}"
when: always
paths:
- _build/meson-logs/*.txt
- _build/meson-logs/strace
- _build/.ninja_log
- artifacts
.build-run-long:
variables:
BUILD_JOB_TIMEOUT: 18m
timeout: 25m
# Just Linux
.build-linux:
extends: .build-common
variables:
CCACHE_COMPILERCHECK: "content"
CCACHE_COMPRESS: "true"
CCACHE_DIR: /cache/mesa/ccache
# Use ccache transparently, and print stats before/after
before_script:
- !reference [default, before_script]
- |
export PATH="/usr/lib/ccache:$PATH"
export CCACHE_BASEDIR="$PWD"
if test -x /usr/bin/ccache; then
section_start ccache_before "ccache stats before build"
ccache --show-stats
section_end ccache_before
fi
after_script:
- if test -x /usr/bin/ccache; then ccache --show-stats | grep "Hits:"; fi
- !reference [default, after_script]
.build-windows:
extends:
- .build-common
- .windows-docker-tags
cache:
key: ${CI_JOB_NAME}
paths:
- subprojects/packagecache
.meson-build-for-tests:
extends:
- .build-linux
stage: build-for-tests
script:
- &meson-build timeout --verbose ${BUILD_JOB_TIMEOUT_OVERRIDE:-$BUILD_JOB_TIMEOUT} bash --login .gitlab-ci/meson/build.sh
- .gitlab-ci/prepare-artifacts.sh
.meson-build-only:
extends:
- .meson-build-for-tests
- .build-only-delayed-rules
stage: build-only
script:
- *meson-build
debian-testing:
extends:
- .meson-build-for-tests
- .use-debian/x86_64_build
- .build-run-long # but it really shouldn't! tracked in mesa#12544
- .ci-deqp-artifacts
variables:
UNWIND: "enabled"
DRI_LOADERS: >
-D egl=enabled
-D gbm=enabled
-D glvnd=disabled
-D glx=dri
-D platforms=x11,wayland
GALLIUM_ST: >
-D gallium-nine=false
-D gallium-rusticl=true
-D gallium-va=enabled
GALLIUM_DRIVERS: "llvmpipe,softpipe,virgl,radeonsi,zink,iris,svga"
VULKAN_DRIVERS: "swrast,amd,intel,virtio"
BUILDTYPE: "debugoptimized"
EXTRA_OPTION: >
-D intel-elk=false
-D spirv-to-dxil=true
-D tools=drm-shim
-D valgrind=disabled
S3_ARTIFACT_NAME: mesa-x86_64-default-${BUILDTYPE}
RUN_MESON_TESTS: "false" # debian-build-testing already runs these
artifacts:
reports:
junit: artifacts/ci_scripts_report.xml
debian-testing-asan:
extends:
- debian-testing
- .meson-build-for-tests
- .build-run-long
variables:
VULKAN_DRIVERS: "swrast"
GALLIUM_DRIVERS: "llvmpipe,softpipe"
C_ARGS: >
-Wno-error=stringop-truncation
EXTRA_OPTION: >
-D b_sanitize=address
-D gallium-va=false
-D gallium-nine=false
-D gallium-rusticl=false
-D mesa-clc=system
-D tools=dlclose-skip
-D valgrind=disabled
S3_ARTIFACT_NAME: ""
ARTIFACTS_DEBUG_SYMBOLS: 1
RUN_MESON_TESTS: "false" # just too slow
# Do a host build for mesa-clc (asan complains not being loaded as
# the first library)
HOST_BUILD_OPTIONS: >
-D build-tests=false
-D enable-glcpp-tests=false
-D gallium-opencl=disabled
-D gallium-rusticl=false
-D gallium-nine=false
-D gallium-drivers=
-D glx=disabled
-D install-mesa-clc=true
-D mesa-clc=enabled
-D platforms=
-D video-codecs=
-D vulkan-drivers=
debian-testing-msan:
# https://github.com/google/sanitizers/wiki/MemorySanitizerLibcxxHowTo
# msan cannot fully work until it's used together with msan libc
extends:
- debian-clang
- .meson-build-only
- .build-run-long
variables:
# l_undef is incompatible with msan
EXTRA_OPTION:
-D b_sanitize=memory
-D b_lundef=false
-D mesa-clc=system
-D precomp-compiler=system
S3_ARTIFACT_NAME: ""
ARTIFACTS_DEBUG_SYMBOLS: 1
# Don't run all the tests yet:
# GLSL has some issues in sexpression reading.
# gtest has issues in its test initialization.
MESON_TEST_ARGS: "--suite glcpp --suite format"
GALLIUM_DRIVERS: "freedreno,iris,nouveau,r300,r600,llvmpipe,softpipe,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,radeonsi,tegra,d3d12,crocus"
VULKAN_DRIVERS: intel,amd,broadcom,virtio
RUN_MESON_TESTS: "false" # just too slow
# Do a host build for mesa-clc and precomp-compiler (msan complains about uninitialized
# values in the LLVM libs)
HOST_BUILD_OPTIONS: >
-D build-tests=false
-D enable-glcpp-tests=false
-D gallium-opencl=disabled
-D gallium-drivers=
-D vulkan-drivers=
-D video-codecs=
-D glx=disabled
-D platforms=
-D mesa-clc=enabled
-D install-mesa-clc=true
-D precomp-compiler=enabled
-D install-precomp-compiler=true
-D tools=panfrost
debian-testing-ubsan:
extends:
- debian-testing
- .meson-build-for-tests
- .build-run-long
variables:
C_ARGS: >
-Wno-error=stringop-overflow
-Wno-error=stringop-truncation
CPP_ARGS: >
-Wno-error=array-bounds
GALLIUM_DRIVERS: "llvmpipe,softpipe"
VULKAN_DRIVERS: "swrast"
EXTRA_OPTION: >
-D b_sanitize=undefined
-D mesa-clc=system
-D gallium-rusticl=false
-D gallium-va=false
-D gallium-nine=false
S3_ARTIFACT_NAME: ""
ARTIFACTS_DEBUG_SYMBOLS: 1
RUN_MESON_TESTS: "false" # just too slow
HOST_BUILD_OPTIONS: >
-D build-tests=false
-D enable-glcpp-tests=false
-D gallium-opencl=disabled
-D gallium-rusticl=false
-D gallium-drivers=
-D vulkan-drivers=
-D video-codecs=
-D glx=disabled
-D platforms=
-D mesa-clc=enabled
-D install-mesa-clc=true
debian-build-testing:
extends:
- .meson-build-for-tests
- .use-debian/x86_64_build
variables:
UNWIND: "enabled"
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D glvnd=disabled
-D platforms=x11,wayland
-D legacy-x11=dri2
GALLIUM_ST: >
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=true
-D gallium-rusticl=false
GALLIUM_DRIVERS: "i915,iris,nouveau,r300,r600,freedreno,llvmpipe,softpipe,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,d3d12,asahi,crocus"
VULKAN_DRIVERS: "intel_hasvk,imagination-experimental,microsoft-experimental,nouveau,swrast"
BUILDTYPE: "debugoptimized"
EXTRA_OPTION: >
-D spirv-to-dxil=true
-D tools=drm-shim,etnaviv,freedreno,glsl,intel,intel-ui,nir,nouveau,lima,panfrost,asahi
-D perfetto=true
S3_ARTIFACT_NAME: debian-build-testing
# Test a release build with -Werror so new warnings don't sneak in.
debian-release:
extends:
- .meson-build-only
- .use-debian/x86_64_build
variables:
UNWIND: "enabled"
C_ARGS: >
-Wno-error=stringop-overread
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D glvnd=disabled
-D platforms=x11,wayland
GALLIUM_ST: >
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=false
-D gallium-rusticl=false
-D llvm=enabled
GALLIUM_DRIVERS: "i915,iris,nouveau,r300,freedreno,llvmpipe,softpipe,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,d3d12,asahi,crocus"
VULKAN_DRIVERS: "swrast,intel_hasvk,imagination-experimental,microsoft-experimental"
EXTRA_OPTION: >
-D spirv-to-dxil=true
-D tools=all
-D mesa-clc=enabled
-D precomp-compiler=enabled
-D intel-rt=enabled
-D imagination-srv=true
BUILDTYPE: "release"
S3_ARTIFACT_NAME: "mesa-x86_64-default-${BUILDTYPE}"
script:
- *meson-build
- 'if [ -n "$MESA_CI_PERFORMANCE_ENABLED" ]; then .gitlab-ci/prepare-artifacts.sh; fi'
alpine-build-testing:
extends:
- .meson-build-only
- .use-alpine/x86_64_build
variables:
BUILDTYPE: "release"
C_ARGS: >
-Wno-error=cpp
-Wno-error=array-bounds
-Wno-error=stringop-overflow
-Wno-error=stringop-overread
-Wno-error=misleading-indentation
DRI_LOADERS: >
-D glx=disabled
-D gbm=enabled
-D egl=enabled
-D glvnd=disabled
-D platforms=wayland
GALLIUM_DRIVERS: "crocus,etnaviv,freedreno,iris,lima,nouveau,panfrost,r300,r600,radeonsi,svga,llvmpipe,softpipe,tegra,v3d,vc4,virgl,zink"
GALLIUM_ST: >
-D gallium-extra-hud=true
-D gallium-vdpau=disabled
-D gallium-va=enabled
-D gallium-xa=disabled
-D gallium-nine=true
-D gallium-rusticl=false
-D gles1=disabled
-D gles2=enabled
-D llvm=enabled
-D llvm-orcjit=true
-D microsoft-clc=disabled
-D shared-llvm=enabled
UNWIND: "disabled"
VULKAN_DRIVERS: "amd,broadcom,freedreno,intel,imagination-experimental"
fedora-release:
extends:
- .meson-build-only
- .use-fedora/x86_64_build
- .build-run-long
# LTO builds can be very slow, and we have no way to specify different
# timeouts for pre-merge and nightly jobs
timeout: 1h
variables:
BUILDTYPE: "release"
# array-bounds is a spurious warning from non-LTO gcc
# maybe-uninitialized is misfiring in nir_lower_gs_intrinsics.c, and
# a "maybe" warning should never be an error anyway.
C_ARGS: >
-Wno-error=stringop-overflow
-Wno-error=stringop-overread
-Wno-error=array-bounds
-Wno-error=maybe-uninitialized
CPP_ARGS: >
-Wno-error=dangling-reference
-Wno-error=overloaded-virtual
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D glvnd=enabled
-D platforms=x11,wayland
EXTRA_OPTION: >
-D tools=drm-shim,etnaviv,freedreno,glsl,intel,nir,nouveau,lima,panfrost,imagination
-D vulkan-layers=device-select,overlay
-D intel-rt=enabled
-D imagination-srv=true
-D teflon=true
GALLIUM_DRIVERS: "crocus,etnaviv,freedreno,i915,iris,lima,nouveau,panfrost,r300,r600,radeonsi,svga,llvmpipe,softpipe,tegra,v3d,vc4,virgl,zink"
GALLIUM_ST: >
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=false
-D gallium-rusticl=true
-D gles1=disabled
-D gles2=enabled
-D llvm=enabled
-D microsoft-clc=disabled
-D shared-llvm=enabled
UNWIND: "disabled"
VULKAN_DRIVERS: "amd,asahi,broadcom,freedreno,imagination-experimental,intel,intel_hasvk"
debian-android:
extends:
- .android-variables
- .meson-cross
- .use-debian/android_build
- .ci-deqp-artifacts
- .meson-build-for-tests
variables:
BUILDTYPE: debug
UNWIND: "disabled"
C_ARGS: >
-Wno-error=asm-operand-widths
-Wno-error=constant-conversion
-Wno-error=enum-conversion
-Wno-error=initializer-overrides
-Wno-error=sometimes-uninitialized
CPP_ARGS: >
-Wno-error=c99-designator
-Wno-error=unused-variable
-Wno-error=unused-but-set-variable
-Wno-error=self-assign
DRI_LOADERS: >
-D glx=disabled
-D gbm=disabled
-D egl=enabled
-D glvnd=disabled
-D platforms=android
FORCE_FALLBACK_FOR: llvm
EXTRA_OPTION: >
-D android-stub=true
-D platform-sdk-version=${ANDROID_SDK_VERSION}
-D cpp_rtti=false
-D valgrind=disabled
-D android-libbacktrace=disabled
-D mesa-clc=system
-D precomp-compiler=system
GALLIUM_ST: >
-D gallium-vdpau=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
-D gallium-rusticl=false
PKG_CONFIG_LIBDIR: "/disable/non/android/system/pc/files"
HOST_BUILD_OPTIONS: >
-D build-tests=false
-D enable-glcpp-tests=false
-D gallium-opencl=disabled
-D gallium-drivers=
-D vulkan-drivers=
-D video-codecs=
-D glx=disabled
-D platforms=
-D mesa-clc=enabled
-D install-mesa-clc=true
-D precomp-compiler=enabled
-D install-precomp-compiler=true
-D tools=panfrost
S3_ARTIFACT_NAME: mesa-x86_64-android-${BUILDTYPE}
script:
# x86_64 build:
# Can't do AMD drivers because they require LLVM, which is currently
# problematic in our Android builds.
- export CROSS=x86_64-linux-android
- export GALLIUM_DRIVERS=iris,virgl,zink,softpipe
- export VULKAN_DRIVERS=intel,virtio,swrast
- .gitlab-ci/create-llvm-meson-wrap-file.sh
- *meson-build
- .gitlab-ci/prepare-artifacts.sh
# remove all the files created by the previous build before the next build
- git clean -dxf .
# aarch64 build:
# build-only, to catch compilation regressions
# without calling .gitlab-ci/prepare-artifacts.sh so that the
# artifacts are not shipped in mesa-x86_64-android-${BUILDTYPE}
- export CROSS=aarch64-linux-android
- export GALLIUM_DRIVERS=etnaviv,freedreno,lima,panfrost,vc4,v3d
- export VULKAN_DRIVERS=freedreno,broadcom,virtio
- *meson-build
.meson-cross:
extends:
- .meson-build-only
- .use-debian/x86_64_build
variables:
UNWIND: "disabled"
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D platforms=x11,wayland
GALLIUM_ST: >
-D gallium-vdpau=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
.meson-arm:
extends:
- .meson-cross
- .use-debian/arm64_build
variables:
VULKAN_DRIVERS: "asahi,broadcom,freedreno"
GALLIUM_DRIVERS: "etnaviv,freedreno,lima,nouveau,panfrost,llvmpipe,softpipe,tegra,v3d,vc4,zink"
BUILDTYPE: "debugoptimized"
debian-arm32:
extends:
- .meson-arm
- .ci-deqp-artifacts
- .meson-build-for-tests
variables:
CROSS: armhf
DRI_LOADERS:
-D glvnd=disabled
# remove asahi & llvmpipe from the .meson-arm list because here we have llvm=disabled
VULKAN_DRIVERS: "broadcom,freedreno"
GALLIUM_DRIVERS: "etnaviv,freedreno,lima,nouveau,panfrost,softpipe,tegra,v3d,vc4,zink"
EXTRA_OPTION: >
-D llvm=disabled
-D valgrind=disabled
-D gallium-rusticl=false
-D mesa-clc=system
-D precomp-compiler=system
HOST_BUILD_OPTIONS: >
-D build-tests=false
-D enable-glcpp-tests=false
-D gallium-opencl=disabled
-D gallium-drivers=
-D vulkan-drivers=
-D video-codecs=
-D glx=disabled
-D platforms=
-D mesa-clc=enabled
-D install-mesa-clc=true
-D precomp-compiler=enabled
-D install-precomp-compiler=true
-D tools=panfrost
S3_ARTIFACT_NAME: mesa-arm32-default-${BUILDTYPE}
# The strip command segfaults, failing to strip the binary and leaving
# tempfiles in our artifacts.
ARTIFACTS_DEBUG_SYMBOLS: 1
debian-arm32-asan:
extends:
- debian-arm32
- .meson-build-for-tests
- .build-run-long
variables:
GALLIUM_DRIVERS: "etnaviv"
VULKAN_DRIVERS: ""
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: >
-D llvm=disabled
-D b_sanitize=address
-D valgrind=disabled
-D tools=dlclose-skip
-D gallium-rusticl=false
ARTIFACTS_DEBUG_SYMBOLS: 1
RUN_MESON_TESTS: "false" # just too slow
S3_ARTIFACT_NAME: mesa-arm32-asan-${BUILDTYPE}
debian-arm64:
extends:
- .meson-arm
- .ci-deqp-artifacts
- .meson-build-for-tests
variables:
C_ARGS: >
-Wno-error=array-bounds
-Wno-error=stringop-truncation
GALLIUM_DRIVERS: "etnaviv,freedreno,lima,panfrost,v3d,vc4,zink"
VULKAN_DRIVERS: "broadcom,freedreno,panfrost"
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: >
-D valgrind=disabled
-D imagination-srv=true
-D freedreno-kmds=msm,virtio
-D teflon=true
GALLIUM_ST:
-D gallium-rusticl=true
RUN_MESON_TESTS: "false" # run by debian-arm64-build-testing
S3_ARTIFACT_NAME: mesa-arm64-default-${BUILDTYPE}
debian-arm64-asan:
extends:
- debian-arm64
- .meson-build-for-tests
- .build-run-long
variables:
VULKAN_DRIVERS: "broadcom,freedreno"
GALLIUM_DRIVERS: "freedreno,vc4,v3d"
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: >
-D b_sanitize=address
-D valgrind=disabled
-D tools=dlclose-skip
-D gallium-rusticl=false
ARTIFACTS_DEBUG_SYMBOLS: 1
RUN_MESON_TESTS: "false" # just too slow
S3_ARTIFACT_NAME: mesa-arm64-asan-${BUILDTYPE}
debian-arm64-ubsan:
extends:
- debian-arm64
- .meson-build-for-tests
- .build-run-long
variables:
VULKAN_DRIVERS: "broadcom"
GALLIUM_DRIVERS: "v3d,vc4"
C_ARGS: >
-Wno-error=array-bounds
-Wno-error=stringop-overflow
-Wno-error=stringop-truncation
CPP_ARGS: >
-Wno-error=array-bounds
-fno-var-tracking-assignments
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: >
-D b_sanitize=undefined
-D gallium-rusticl=false
ARTIFACTS_DEBUG_SYMBOLS: 1
RUN_MESON_TESTS: "false" # just too slow
S3_ARTIFACT_NAME: mesa-arm64-ubsan-${BUILDTYPE}
debian-arm64-build-test:
extends:
- .meson-arm
- .ci-deqp-artifacts
- .meson-build-only
variables:
VULKAN_DRIVERS: "amd,asahi,imagination-experimental,nouveau"
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: >
-D tools=panfrost,imagination
-D perfetto=true
debian-arm64-release:
extends:
- debian-arm64
- .meson-build-only
variables:
BUILDTYPE: release
S3_ARTIFACT_NAME: mesa-arm64-default-${BUILDTYPE}
C_ARGS: >
-Wno-error=array-bounds
-Wno-error=stringop-truncation
-Wno-error=stringop-overread
script:
- *meson-build
- 'if [ -n "$MESA_CI_PERFORMANCE_ENABLED" ]; then .gitlab-ci/prepare-artifacts.sh; fi'
debian-no-libdrm:
extends:
- .meson-arm
- .meson-build-only
variables:
VULKAN_DRIVERS: freedreno
GALLIUM_DRIVERS: "zink,llvmpipe"
BUILDTYPE: release
C_ARGS: >
-Wno-error=array-bounds
-Wno-error=stringop-truncation
-Wno-error=stringop-overread
EXTRA_OPTION: >
-D freedreno-kmds=kgsl
-D glx=disabled
-D gbm=disabled
-D egl=disabled
-D perfetto=true
debian-clang:
extends:
- .meson-build-only
- .use-debian/x86_64_build
variables:
BUILDTYPE: debug
UNWIND: "enabled"
C_ARGS: >
-Wno-error=constant-conversion
-Wno-error=enum-conversion
-Wno-error=initializer-overrides
-Wno-error=sometimes-uninitialized
-Werror=misleading-indentation
CPP_ARGS: >
-Wno-error=c99-designator
-Wno-error=overloaded-virtual
-Wno-error=tautological-constant-out-of-range-compare
-Wno-error=unused-private-field
-Wno-error=vla-cxx-extension
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D glvnd=enabled
-D platforms=x11,wayland
GALLIUM_ST: >
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=true
-D gles1=enabled
-D gles2=enabled
-D llvm=enabled
-D microsoft-clc=disabled
-D shared-llvm=enabled
GALLIUM_DRIVERS: "iris,nouveau,r300,r600,freedreno,llvmpipe,softpipe,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,radeonsi,tegra,d3d12,crocus,i915,asahi"
VULKAN_DRIVERS: intel,amd,freedreno,broadcom,virtio,swrast,panfrost,imagination-experimental,microsoft-experimental,nouveau
EXTRA_OPTION:
-D spirv-to-dxil=true
-D imagination-srv=true
-D tools=drm-shim,etnaviv,freedreno,glsl,intel,intel-ui,nir,nouveau,lima,panfrost,asahi,imagination
-D vulkan-layers=device-select,overlay
-D build-radv-tests=true
-D build-aco-tests=true
-D mesa-clc=enabled
-D precomp-compiler=enabled
-D intel-rt=enabled
-D imagination-srv=true
-D teflon=true
CC: clang-${LLVM_VERSION}
CXX: clang++-${LLVM_VERSION}
debian-clang-release:
extends:
- debian-clang
- .meson-build-only
- .build-run-long
variables:
BUILDTYPE: "release"
DRI_LOADERS: >
-D glx=xlib
-D platforms=x11,wayland
GALLIUM_ST: >
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=true
-D gles1=disabled
-D gles2=disabled
-D llvm=enabled
-D microsoft-clc=disabled
-D shared-llvm=enabled
windows-msvc:
extends:
- .build-windows
- .use-windows_build_msvc
- .windows-build-rules
stage: build-for-tests
script:
- pwsh -ExecutionPolicy RemoteSigned .\.gitlab-ci\windows\mesa_build.ps1
artifacts:
paths:
- _build/meson-logs/*.txt
- _install/
debian-vulkan:
extends:
- .meson-build-only
- .use-debian/x86_64_build
variables:
BUILDTYPE: debug
UNWIND: "disabled"
DRI_LOADERS: >
-D glx=disabled
-D gbm=disabled
-D egl=disabled
-D opengl=false
-D gles1=disabled
-D gles2=disabled
-D glvnd=disabled
-D platforms=x11,wayland
GALLIUM_ST: >
-D gallium-vdpau=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
-D gallium-rusticl=false
-D b_sanitize=undefined
-D c_args=-fno-sanitize-recover=all
-D cpp_args=-fno-sanitize-recover=all
UBSAN_OPTIONS: "print_stacktrace=1"
VULKAN_DRIVERS: amd,asahi,broadcom,freedreno,intel,intel_hasvk,panfrost,virtio,imagination-experimental,microsoft-experimental,nouveau
EXTRA_OPTION: >
-D vulkan-layers=device-select,overlay
-D build-radv-tests=true
-D build-aco-tests=true
-D intel-rt=disabled
-D imagination-srv=true
debian-x86_32:
extends:
- .meson-cross
- .use-debian/x86_32_build
- .meson-build-only
- .build-run-long # it's not clear why this runs long, but it also doesn't matter much
variables:
BUILDTYPE: debug
CROSS: i386
VULKAN_DRIVERS: intel,amd,swrast,virtio,panfrost
GALLIUM_DRIVERS: "iris,nouveau,r300,r600,radeonsi,llvmpipe,softpipe,virgl,zink,crocus,d3d12,panfrost"
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: >
-D vulkan-layers=device-select,overlay
-D mesa-clc=system
C_LINK_ARGS: >
-Wl,--no-warn-rwx-segments
CPP_LINK_ARGS: >
-Wl,--no-warn-rwx-segments
HOST_BUILD_OPTIONS: >
-D build-tests=false
-D enable-glcpp-tests=false
-D gallium-opencl=disabled
-D gallium-drivers=
-D vulkan-drivers=
-D video-codecs=
-D glx=disabled
-D platforms=
-D mesa-clc=enabled
-D install-mesa-clc=true
# While s390 is dead, s390x is very much alive, and one of the last major
# big-endian platforms, so it provides useful coverage.
# In case of issues with this job, contact @ajax
debian-s390x:
extends:
- .meson-cross
- .use-debian/s390x_build
- .meson-build-only
tags:
- $FDO_RUNNER_JOB_PRIORITY_TAG_X86_64_KVM
variables:
BUILDTYPE: debug
CROSS: s390x
GALLIUM_DRIVERS: "llvmpipe,virgl,zink"
VULKAN_DRIVERS: "swrast,virtio"
DRI_LOADERS:
-D glvnd=disabled
debian-ppc64el:
extends:
- .meson-cross
- .use-debian/ppc64el_build
- .meson-build-only
variables:
BUILDTYPE: debug
CROSS: ppc64el
GALLIUM_DRIVERS: "nouveau,llvmpipe,softpipe,virgl,zink"
VULKAN_DRIVERS: "swrast"
DRI_LOADERS:
-D glvnd=disabled
# This job emits our scripts into artifacts so they can be reused for
# job submission to hardware devices.
python-artifacts:
stage: build-for-tests
extends:
- .use-debian/x86_64_pyutils
- .build-common
- .meson-build-for-tests
variables:
GIT_STRATEGY: fetch
S3_ARTIFACT_NAME: mesa-python-ci-artifacts
timeout: 10m
script:
- .gitlab-ci/prepare-artifacts-python.sh


@@ -1,35 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2035
# shellcheck disable=SC2061
# shellcheck disable=SC2086 # we want word splitting
while true; do
devcds=$(find /sys/devices/virtual/devcoredump/ -name data 2>/dev/null)
for i in $devcds; do
echo "Found a devcoredump at $i."
if cp $i $RESULTS_DIR/first.devcore; then
echo 1 > $i
echo "Saved to the job artifacts at /first.devcore"
exit 0
fi
done
i915_error_states=$(find /sys/devices/ -path */drm/card*/error)
for i in $i915_error_states; do
tmpfile=$(mktemp)
cp "$i" "$tmpfile"
filesize=$(stat --printf="%s" "$tmpfile")
# Does the file contain "No error state collected" ?
if [ "$filesize" = 25 ]; then
rm "$tmpfile"
else
echo "Found an i915 error state at $i size=$filesize."
if cp "$tmpfile" $RESULTS_DIR/first.i915_error_state; then
rm "$tmpfile"
echo 1 > "$i"
echo "Saved to the job artifacts at /first.i915_error_state"
exit 0
fi
fi
done
sleep 10
done


@@ -1,142 +0,0 @@
#!/bin/bash
VARS=(
ACO_DEBUG
ANGLE_TAG
ANGLE_TRACE_FILES_TAG
ANV_DEBUG
ARTIFACTS_BASE_URL
ASAN_OPTIONS
BASE_SYSTEM_FORK_HOST_PREFIX
BASE_SYSTEM_MAINLINE_HOST_PREFIX
CI_COMMIT_BRANCH
CI_COMMIT_REF_NAME
CI_COMMIT_TITLE
CI_JOB_ID
CI_JOB_NAME
CI_JOB_STARTED_AT
CI_JOB_URL
CI_MERGE_REQUEST_SOURCE_BRANCH_NAME
CI_MERGE_REQUEST_TITLE
CI_NODE_INDEX
CI_NODE_TOTAL
CI_PAGES_DOMAIN
CI_PIPELINE_ID
CI_PIPELINE_URL
CI_PROJECT_DIR
CI_PROJECT_NAME
CI_PROJECT_PATH
CI_PROJECT_ROOT_NAMESPACE
CI_RUNNER_DESCRIPTION
CI_SERVER_URL
CROSVM_GALLIUM_DRIVER
CROSVM_GPU_ARGS
CURRENT_SECTION
DEQP_BIN_DIR
DEQP_FORCE_ASAN
DEQP_FRACTION
DEQP_RUNNER_MAX_FAILS
DEQP_SUITE
DEQP_TEMP_DIR
DEVICE_NAME
DRIVER_NAME
EGL_PLATFORM
ETNA_MESA_DEBUG
FDO_CI_CONCURRENT
FDO_HTTP_CACHE_URI
FDO_UPSTREAM_REPO
FD_MESA_DEBUG
FLAKES_CHANNEL
FLUSTER_CODECS
FLUSTER_FRACTION
FLUSTER_VECTORS_VERSION
FREEDRENO_HANGCHECK_MS
GALLIUM_DRIVER
GALLIVM_PERF
GPU_VERSION
GTEST
GTEST_FAILS
GTEST_FRACTION
GTEST_RUNNER_OPTIONS
GTEST_SKIPS
HWCI_FREQ_MAX
HWCI_KERNEL_MODULES
HWCI_KVM
HWCI_START_WESTON
HWCI_START_XORG
HWCI_TEST_ARGS
HWCI_TEST_SCRIPT
INTEL_XE_IGNORE_EXPERIMENTAL_WARNING
IR3_SHADER_DEBUG
JOB_ARTIFACTS_BASE
JOB_RESULTS_PATH
JOB_ROOTFS_OVERLAY_PATH
KERNEL_IMAGE_BASE
KERNEL_IMAGE_NAME
LD_LIBRARY_PATH
LIBGL_ALWAYS_SOFTWARE
LP_NUM_THREADS
LVP_POISON_MEMORY
MESA_BASE_TAG
MESA_BUILD_PATH
MESA_DEBUG
MESA_GLES_VERSION_OVERRIDE
MESA_GLSL_VERSION_OVERRIDE
MESA_GL_VERSION_OVERRIDE
MESA_IMAGE
MESA_IMAGE_PATH
MESA_IMAGE_TAG
MESA_LOADER_DRIVER_OVERRIDE
MESA_SPIRV_LOG_LEVEL
MESA_TEMPLATES_COMMIT
MESA_VK_ABORT_ON_DEVICE_LOSS
MESA_VK_IGNORE_CONFORMANCE_WARNING
NIR_DEBUG
PANVK_DEBUG
PAN_I_WANT_A_BROKEN_VULKAN_DRIVER
PAN_MESA_DEBUG
PIGLIT_FRACTION
PIGLIT_NO_WINDOW
PIGLIT_OPTIONS
PIGLIT_PLATFORM
PIGLIT_REPLAY_ANGLE_ARCH
PIGLIT_REPLAY_ARTIFACTS_BASE_URL
PIGLIT_REPLAY_DEVICE_NAME
PIGLIT_REPLAY_EXTRA_ARGS
PIGLIT_REPLAY_LOOP_TIMES
PIGLIT_REPLAY_REFERENCE_IMAGES_BASE
PIGLIT_REPLAY_SUBCOMMAND
PIGLIT_RESULTS
PIGLIT_RUNNER_OPTIONS
PIGLIT_TESTS
PIGLIT_TRACES_FILE
PIPELINE_ARTIFACTS_BASE
RADEON_DEBUG
RADV_DEBUG
radv_enable_float16_gfx8
RADV_PERFTEST
S3_HOST
S3_JWT_FILE
S3_RESULTS_UPLOAD
SKQP_ASSETS_DIR
SKQP_BACKENDS
STORAGE_FORK_HOST_PATH
STORAGE_MAINLINE_HOST_PATH
TU_DEBUG
VIRGL_HOST_API
VIRGL_RENDER_SERVER
VK_DRIVER
WAFFLE_PLATFORM
ZINK_DEBUG
ZINK_DESCRIPTORS
# Dead code within Mesa CI, but required by virglrender CI
# (because they include our files in their CI)
VK_DRIVER_FILES
)
for var in "${VARS[@]}"; do
if [ -n "${!var+x}" ]; then
echo "export $var=${!var@Q}"
fi
done
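# Example output line (value illustrative): for CI_JOB_ID=12345 the loop above
# emits a shell-quoted assignment (via the ${!var@Q} expansion) that the DUT
# init script can source:
#   export CI_JOB_ID='12345'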

View File

@@ -1,25 +0,0 @@
#!/bin/sh
# Very early init, used to make sure devices and network are set up and
# reachable.
set -ex
cd /
findmnt --mountpoint /proc || mount -t proc none /proc
findmnt --mountpoint /sys || mount -t sysfs none /sys
mount -t debugfs none /sys/kernel/debug
findmnt --mountpoint /dev || mount -t devtmpfs none /dev
mkdir -p /dev/pts
mount -t devpts devpts /dev/pts
mkdir /dev/shm
mount -t tmpfs -o noexec,nodev,nosuid tmpfs /dev/shm
mount -t tmpfs tmpfs /tmp
echo "nameserver 8.8.8.8" > /etc/resolv.conf
[ -z "$NFS_SERVER_IP" ] || echo "$NFS_SERVER_IP caching-proxy" >> /etc/hosts
# Set the time so we can validate certificates before we fetch anything;
# however as not all DUTs have network, make this non-fatal.
for _ in 1 2 3; do sntp -sS pool.ntp.org && break || sleep 2; done || true


@@ -1,249 +0,0 @@
#!/bin/bash
# shellcheck disable=SC1090
# shellcheck disable=SC1091
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC2155
# Second-stage init, used to set up devices and our job environment before
# running tests.
shopt -s extglob
# Make sure to kill this script and all of its child processes on exit, since
# any console output may interfere with LAVA signal handling, which is based
# on the log console.
cleanup() {
if [ "$BACKGROUND_PIDS" = "" ]; then
return 0
fi
set +x
echo "Killing all child processes"
for pid in $BACKGROUND_PIDS
do
kill "$pid" 2>/dev/null || true
done
# Sleep just a little to give enough time for subprocesses to be gracefully
# killed. Then apply a SIGKILL if necessary.
sleep 5
for pid in $BACKGROUND_PIDS
do
kill -9 "$pid" 2>/dev/null || true
done
BACKGROUND_PIDS=
set -x
}
trap cleanup INT TERM EXIT
# Space-separated list of the PIDs of the processes started in the background
# by this script
BACKGROUND_PIDS=
for path in '/dut-env-vars.sh' '/set-job-env-vars.sh' './set-job-env-vars.sh'; do
[ -f "$path" ] && source "$path"
done
. "$SCRIPTS_DIR"/setup-test-env.sh
# Flush out anything which might be stuck in a serial buffer
echo
echo
echo
section_switch init_stage2 "Pre-testing hardware setup"
set -ex
# Set up any devices required by the jobs
[ -z "$HWCI_KERNEL_MODULES" ] || {
echo -n $HWCI_KERNEL_MODULES | xargs -d, -n1 /usr/sbin/modprobe
}
# Set up ZRAM
HWCI_ZRAM_SIZE=2G
if /sbin/zramctl --find --size $HWCI_ZRAM_SIZE -a zstd; then
mkswap /dev/zram0
swapon /dev/zram0
echo "zram: $HWCI_ZRAM_SIZE activated"
else
echo "zram: skipping, not supported"
fi
#
# Load the KVM module specific to the detected CPU virtualization extensions:
# - vmx for Intel VT
# - svm for AMD-V
#
# Additionally, download the kernel image to boot the VM via HWCI_TEST_SCRIPT.
#
if [ "$HWCI_KVM" = "true" ]; then
unset KVM_KERNEL_MODULE
{
grep -qs '\bvmx\b' /proc/cpuinfo && KVM_KERNEL_MODULE=kvm_intel
} || {
grep -qs '\bsvm\b' /proc/cpuinfo && KVM_KERNEL_MODULE=kvm_amd
}
{
[ -z "${KVM_KERNEL_MODULE}" ] && \
echo "WARNING: Failed to detect CPU virtualization extensions"
} || \
modprobe ${KVM_KERNEL_MODULE}
mkdir -p /kernel
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-o "/kernel/${KERNEL_IMAGE_NAME}" \
"${KERNEL_IMAGE_BASE}/amd64/${KERNEL_IMAGE_NAME}"
fi
# Fix prefix confusion: the build installs to $CI_PROJECT_DIR, but we expect
# it in /install
ln -sf $CI_PROJECT_DIR/install /install
export LD_LIBRARY_PATH=/install/lib
export LIBGL_DRIVERS_PATH=/install/lib/dri
# https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/22495#note_1876691
# The navi21 boards seem to have trouble with ld.so.cache, so try explicitly
# telling it to look in /usr/local/lib.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
# Store Mesa's disk cache under /tmp, rather than sending it out over NFS.
export XDG_CACHE_HOME=/tmp
# Make sure Python can find all our imports
export PYTHONPATH=$(python3 -c "import sys;print(\":\".join(sys.path))")
# If we need to specify a driver, it means several drivers could pick up this gpu;
# ensure that the other drivers can't accidentally be used
if [ -n "$MESA_LOADER_DRIVER_OVERRIDE" ]; then
rm /install/lib/dri/!($MESA_LOADER_DRIVER_OVERRIDE)_dri.so
fi
ls -1 /install/lib/dri/*_dri.so || true
if [ "$HWCI_FREQ_MAX" = "true" ]; then
# Ensure initialization of the DRM device (needed by MSM)
head -0 /dev/dri/renderD128
# Disable GPU frequency scaling
DEVFREQ_GOVERNOR=$(find /sys/devices -name governor | grep gpu || true)
test -z "$DEVFREQ_GOVERNOR" || echo performance > $DEVFREQ_GOVERNOR || true
# Disable CPU frequency scaling
echo performance | tee -a /sys/devices/system/cpu/cpufreq/policy*/scaling_governor || true
# Disable GPU runtime power management
GPU_AUTOSUSPEND=$(find /sys/devices -name autosuspend_delay_ms | grep gpu | head -1)
test -z "$GPU_AUTOSUSPEND" || echo -1 > $GPU_AUTOSUSPEND || true
# Lock Intel GPU frequency to 70% of the maximum allowed by hardware
# and enable throttling detection & reporting.
# Additionally, set the upper limit for CPU scaling frequency to 65% of the
# maximum permitted, as an additional measure to mitigate thermal throttling.
/install/common/intel-gpu-freq.sh -s 70% --cpu-set-max 65% -g all -d
fi
# Start a little daemon to capture sysfs records and produce a JSON file
KDL_PATH=/install/common/kdl.sh
if [ -x "$KDL_PATH" ]; then
echo "launch kdl.sh!"
$KDL_PATH &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
else
echo "kdl.sh not found!"
fi
# Increase freedreno hangcheck timer because it's right at the edge of the
# spilling tests timing out (and some traces, too)
if [ -n "$FREEDRENO_HANGCHECK_MS" ]; then
echo $FREEDRENO_HANGCHECK_MS | tee -a /sys/kernel/debug/dri/128/hangcheck_period_ms
fi
# Start a little daemon to capture the first devcoredump we encounter. (They
# expire after 5 minutes, so we poll for them).
CAPTURE_DEVCOREDUMP=/install/common/capture-devcoredump.sh
if [ -x "$CAPTURE_DEVCOREDUMP" ]; then
$CAPTURE_DEVCOREDUMP &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
fi
ARCH=$(uname -m)
export VK_DRIVER_FILES="/install/share/vulkan/icd.d/${VK_DRIVER}_icd.$ARCH.json"
# If we want Xorg to be running for the test, we start it before running
# HWCI_TEST_SCRIPT because we need to use xinit to start X (otherwise, without
# -displayfd, you can race with Xorg's startup), but xinit will eat your
# client's return code
if [ -n "$HWCI_START_XORG" ]; then
echo "touch /xorg-started; sleep 100000" > /xorg-script
env \
xinit /bin/sh /xorg-script -- /usr/bin/Xorg -noreset -s 0 -dpms -logfile "$RESULTS_DIR/Xorg.0.log" &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
# Wait for xorg to be ready for connections.
for _ in 1 2 3 4 5; do
if [ -e /xorg-started ]; then
break
fi
sleep 5
done
export DISPLAY=:0
fi
if [ -n "$HWCI_START_WESTON" ]; then
WESTON_X11_SOCK="/tmp/.X11-unix/X0"
if [ -n "$HWCI_START_XORG" ]; then
echo "Please consider dropping HWCI_START_XORG and instead using Weston XWayland for testing."
WESTON_X11_SOCK="/tmp/.X11-unix/X1"
fi
export WAYLAND_DISPLAY=wayland-0
# The display server is Weston's Xwayland when HWCI_START_XORG is not set, or Xorg when it is set.
export DISPLAY=:0
mkdir -p /tmp/.X11-unix
env \
weston -Bheadless-backend.so --use-gl -Swayland-0 --xwayland --idle-time=0 &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
while [ ! -S "$WESTON_X11_SOCK" ]; do sleep 1; done
fi
set +x
section_end init_stage2
echo "Running ${HWCI_TEST_SCRIPT} ${HWCI_TEST_ARGS} ..."
set +e
$HWCI_TEST_SCRIPT ${HWCI_TEST_ARGS:-}; EXIT_CODE=$?
set -e
section_start post_test_cleanup "Cleaning up after testing, uploading results"
set -x
# Make sure that capture-devcoredump is done before we start trying to tar up
# artifacts -- if it's writing while tar is reading, tar will throw an error and
# kill the job.
cleanup
# upload artifacts (lava jobs)
if [ -n "$S3_RESULTS_UPLOAD" ]; then
tar --zstd -cf results.tar.zst results/;
s3_upload results.tar.zst "https://${S3_RESULTS_UPLOAD}/"
fi
# We still need to echo the hwci: mesa message, as some scripts rely on it, such
# as the python ones inside the bare-metal folder
[ ${EXIT_CODE} -eq 0 ] && RESULT=pass || RESULT=fail
set +x
section_end post_test_cleanup
# Print the final result; both bare-metal and LAVA look for this string to get
# the result of our run, so try really hard to get it out rather than losing
# the run. The device gets shut down right at this point, and a630 seems to
# enjoy corrupting the last line of serial output before shutdown.
for _ in $(seq 0 3); do echo "hwci: mesa: $RESULT, exit_code: $EXIT_CODE"; sleep 1; echo; done
exit $EXIT_CODE


@@ -1,820 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2013
# shellcheck disable=SC2015
# shellcheck disable=SC2034
# shellcheck disable=SC2046
# shellcheck disable=SC2059
# shellcheck disable=SC2086 # we want word splitting
# shellcheck disable=SC2154
# shellcheck disable=SC2155
# shellcheck disable=SC2162
# shellcheck disable=SC2229
#
# This is a utility script to manage Intel GPU frequencies.
# It can be used for debugging performance problems or trying to obtain a stable
# frequency while benchmarking.
#
# Note the Intel i915 GPU driver allows changing the minimum, maximum and boost
# frequencies in steps of 50 MHz via:
#
# /sys/class/drm/card<n>/<freq_info>
#
# Where <n> is the DRM card index and <freq_info> one of the following:
#
# - gt_max_freq_mhz (enforced maximum freq)
# - gt_min_freq_mhz (enforced minimum freq)
# - gt_boost_freq_mhz (enforced boost freq)
#
# The hardware capabilities can be accessed via:
#
# - gt_RP0_freq_mhz (supported maximum freq)
# - gt_RPn_freq_mhz (supported minimum freq)
# - gt_RP1_freq_mhz (most efficient freq)
#
# The current frequency can be read from:
# - gt_act_freq_mhz (the actual GPU freq)
# - gt_cur_freq_mhz (the last requested freq)
#
# Intel later switched to per-tile sysfs interfaces, which is what the Xe DRM
# driver exclusively uses, and the capabilities are now located under the
# following directory for the first tile:
#
# /sys/class/drm/card<n>/device/tile0/gt0/freq0/<freq_info>
#
# Where <n> is the DRM card index and <freq_info> one of the following:
#
# - max_freq (enforced maximum freq)
# - min_freq (enforced minimum freq)
#
# The hardware capabilities can be accessed via:
#
# - rp0_freq (supported maximum freq)
# - rpn_freq (supported minimum freq)
# - rpe_freq (most efficient freq)
#
# The current frequency can be read from:
# - act_freq (the actual GPU freq)
# - cur_freq (the last requested freq)
#
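# For example (illustrative only; assumes the GPU is card0), the current
# frequency can be queried with:
#
#   cat /sys/class/drm/card0/gt_cur_freq_mhz                  # i915
#   cat /sys/class/drm/card0/device/tile0/gt0/freq0/cur_freq  # Xe
#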
# Also note that in addition to GPU management, the script offers the
# possibility to adjust CPU operating frequencies. However, this is currently
# limited to just setting the maximum scaling frequency as a percentage of the
# maximum frequency allowed by the hardware.
#
# Copyright (C) 2022 Collabora Ltd.
# Author: Cristian Ciocaltea <cristian.ciocaltea@collabora.com>
#
# SPDX-License-Identifier: MIT
#
#
# Constants
#
# Check if any /sys/class/drm/cardX/device/tile0 directory exists to detect Xe
USE_XE=0
for i in $(seq 0 15); do
if [ -d "/sys/class/drm/card$i/device/tile0" ]; then
USE_XE=1
break
fi
done
# GPU
if [ "$USE_XE" -eq 1 ]; then
DRM_FREQ_SYSFS_PATTERN="/sys/class/drm/card%d/device/tile0/gt0/freq0/%s_freq"
ENF_FREQ_INFO="max min"
CAP_FREQ_INFO="rp0 rpn rpe"
else
DRM_FREQ_SYSFS_PATTERN="/sys/class/drm/card%d/gt_%s_freq_mhz"
ENF_FREQ_INFO="max min boost"
CAP_FREQ_INFO="RP0 RPn RP1"
fi
ACT_FREQ_INFO="act cur"
THROTT_DETECT_SLEEP_SEC=2
THROTT_DETECT_PID_FILE_PATH=/tmp/thrott-detect.pid
# CPU
CPU_SYSFS_PREFIX=/sys/devices/system/cpu
CPU_PSTATE_SYSFS_PATTERN="${CPU_SYSFS_PREFIX}/intel_pstate/%s"
CPU_FREQ_SYSFS_PATTERN="${CPU_SYSFS_PREFIX}/cpu%s/cpufreq/%s_freq"
CAP_CPU_FREQ_INFO="cpuinfo_max cpuinfo_min"
ENF_CPU_FREQ_INFO="scaling_max scaling_min"
ACT_CPU_FREQ_INFO="scaling_cur"
#
# Global variables.
#
unset INTEL_DRM_CARD_INDEX
unset GET_ACT_FREQ GET_ENF_FREQ GET_CAP_FREQ
unset SET_MIN_FREQ SET_MAX_FREQ
unset MONITOR_FREQ
unset CPU_SET_MAX_FREQ
unset DETECT_THROTT
unset DRY_RUN
#
# Simple printf based stderr logger.
#
log() {
local msg_type=$1
shift
printf "%s: %s: " "${msg_type}" "${0##*/}" >&2
printf "$@" >&2
printf "\n" >&2
}
#
# Helper to print sysfs path for the given card index and freq info.
#
# arg1: Frequency info sysfs name, one of *_FREQ_INFO constants above
# arg2: Video card index, defaults to INTEL_DRM_CARD_INDEX
#
print_freq_sysfs_path() {
printf ${DRM_FREQ_SYSFS_PATTERN} "${2:-${INTEL_DRM_CARD_INDEX}}" "$1"
}
#
# Helper to set INTEL_DRM_CARD_INDEX for the first identified Intel video card.
#
identify_intel_gpu() {
local i=0 vendor path
while [ ${i} -lt 16 ]; do
[ -c "/dev/dri/card$i" ] || {
i=$((i + 1))
continue
}
path=$(print_freq_sysfs_path "" ${i})
if [ "$USE_XE" -eq 1 ]; then
path=${path%/*/*/*/*/*}/device/vendor
else
path=${path%/*}/device/vendor
fi
[ -r "${path}" ] && read vendor < "${path}" && \
[ "${vendor}" = "0x8086" ] && INTEL_DRM_CARD_INDEX=$i && return 0
i=$((i + 1))
done
return 1
}
#
# Read the specified freq info from sysfs.
#
# arg1: Flag (y/n) to also enable printing the freq info.
# arg2...: Frequency info sysfs name(s), see *_FREQ_INFO constants above
# return: Global variable(s) FREQ_${arg} containing the requested information
#
read_freq_info() {
local var val info path print=0 ret=0
[ "$1" = "y" ] && print=1
shift
while [ $# -gt 0 ]; do
info=$1
shift
var=FREQ_${info}
path=$(print_freq_sysfs_path "${info}")
[ -r ${path} ] && read ${var} < ${path} || {
log ERROR "Failed to read freq info from: %s" "${path}"
ret=1
continue
}
[ -n "${var}" ] || {
log ERROR "Got empty freq info from: %s" "${path}"
ret=1
continue
}
[ ${print} -eq 1 ] && {
eval val=\$${var}
printf "%6s: %4s MHz\n" "${info}" "${val}"
}
done
return ${ret}
}
#
# Display requested info.
#
print_freq_info() {
local req_freq
[ -n "${GET_CAP_FREQ}" ] && {
printf "* Hardware capabilities\n"
read_freq_info y ${CAP_FREQ_INFO}
printf "\n"
}
[ -n "${GET_ENF_FREQ}" ] && {
printf "* Enforcements\n"
read_freq_info y ${ENF_FREQ_INFO}
printf "\n"
}
[ -n "${GET_ACT_FREQ}" ] && {
printf "* Actual\n"
read_freq_info y ${ACT_FREQ_INFO}
printf "\n"
}
}
#
# Helper to print frequency value as requested by user via '-s, --set' option.
# arg1: user requested freq value
#
compute_freq_set() {
local val
case "$1" in
+)
val=$(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f1)}") # FREQ_rp0 or FREQ_RP0
;;
-)
val=$(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}") # FREQ_rpn or FREQ_RPn
;;
*%)
val=$((${1%?} * $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f1)}") / 100))
# Adjust freq to comply with 50 MHz increments
val=$((val / 50 * 50))
;;
*[!0-9]*)
log ERROR "Cannot set freq to invalid value: %s" "$1"
return 1
;;
"")
log ERROR "Cannot set freq to unspecified value"
return 1
;;
*)
# Adjust freq to comply with 50 MHz increments
val=$(($1 / 50 * 50))
;;
esac
printf "%s" "${val}"
}
#
# Helper for set_freq().
#
set_freq_max() {
log INFO "Setting GPU max freq to %s MHz" "${SET_MAX_FREQ}"
read_freq_info n min || return $?
# FREQ_rp0 or FREQ_RP0
[ ${SET_MAX_FREQ} -gt $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f1)}") ] && {
log ERROR "Cannot set GPU max freq (%s) to be greater than hw max freq (%s)" \
"${SET_MAX_FREQ}" "$(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f1)}")"
return 1
}
# FREQ_rpn or FREQ_RPn
[ ${SET_MAX_FREQ} -lt $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}") ] && {
log ERROR "Cannot set GPU max freq (%s) to be less than hw min freq (%s)" \
"${SET_MIN_FREQ}" "$(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}")"
return 1
}
[ ${SET_MAX_FREQ} -lt ${FREQ_min} ] && {
log ERROR "Cannot set GPU max freq (%s) to be less than min freq (%s)" \
"${SET_MAX_FREQ}" "${FREQ_min}"
return 1
}
[ -z "${DRY_RUN}" ] || return 0
# Write to max freq path
if ! printf "%s" ${SET_MAX_FREQ} | tee $(print_freq_sysfs_path max) > /dev/null;
then
log ERROR "Failed to set GPU max frequency"
return 1
fi
# Only write to boost if the sysfs file exists, as it's removed in Xe
if [ -e "$(print_freq_sysfs_path boost)" ]; then
if ! printf "%s" ${SET_MAX_FREQ} | tee $(print_freq_sysfs_path boost) > /dev/null;
then
log ERROR "Failed to set GPU boost frequency"
return 1
fi
fi
}
#
# Helper for set_freq().
#
set_freq_min() {
log INFO "Setting GPU min freq to %s MHz" "${SET_MIN_FREQ}"
read_freq_info n max || return $?
[ ${SET_MIN_FREQ} -gt ${FREQ_max} ] && {
log ERROR "Cannot set GPU min freq (%s) to be greater than max freq (%s)" \
"${SET_MIN_FREQ}" "${FREQ_max}"
return 1
}
[ ${SET_MIN_FREQ} -lt $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}") ] && {
log ERROR "Cannot set GPU min freq (%s) to be less than hw min freq (%s)" \
"${SET_MIN_FREQ}" "$(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}")"
return 1
}
[ -z "${DRY_RUN}" ] || return 0
if ! printf "%s" ${SET_MIN_FREQ} > $(print_freq_sysfs_path min);
then
log ERROR "Failed to set GPU min frequency"
return 1
fi
}
#
# Set min or max or both GPU frequencies to the user indicated values.
#
set_freq() {
# Get hw max & min frequencies
read_freq_info n $(echo $CAP_FREQ_INFO | cut -d' ' -f1,2) || return $? # RP0 RPn
[ -z "${SET_MAX_FREQ}" ] || {
SET_MAX_FREQ=$(compute_freq_set "${SET_MAX_FREQ}")
[ -z "${SET_MAX_FREQ}" ] && return 1
}
[ -z "${SET_MIN_FREQ}" ] || {
SET_MIN_FREQ=$(compute_freq_set "${SET_MIN_FREQ}")
[ -z "${SET_MIN_FREQ}" ] && return 1
}
#
# Ensure correct operation order, to avoid setting min freq
# to a value which is larger than max freq.
#
# E.g.:
# crt_min=crt_max=600; new_min=new_max=700
# > operation order: max=700; min=700
#
# crt_min=crt_max=600; new_min=new_max=500
# > operation order: min=500; max=500
#
if [ -n "${SET_MAX_FREQ}" ] && [ -n "${SET_MIN_FREQ}" ]; then
[ ${SET_MAX_FREQ} -lt ${SET_MIN_FREQ} ] && {
log ERROR "Cannot set GPU max freq to be less than min freq"
return 1
}
read_freq_info n min || return $?
if [ ${SET_MAX_FREQ} -lt ${FREQ_min} ]; then
set_freq_min || return $?
set_freq_max
else
set_freq_max || return $?
set_freq_min
fi
elif [ -n "${SET_MAX_FREQ}" ]; then
set_freq_max
elif [ -n "${SET_MIN_FREQ}" ]; then
set_freq_min
else
log "Unexpected call to set_freq()"
return 1
fi
}
#
# Helper for detect_throttling().
#
get_thrott_detect_pid() {
[ -e ${THROTT_DETECT_PID_FILE_PATH} ] || return 0
local pid
read pid < ${THROTT_DETECT_PID_FILE_PATH} || {
log ERROR "Failed to read pid from: %s" "${THROTT_DETECT_PID_FILE_PATH}"
return 1
}
local proc_path=/proc/${pid:-invalid}/cmdline
[ -r ${proc_path} ] && grep -qs "${0##*/}" ${proc_path} && {
printf "%s" "${pid}"
return 0
}
# Remove orphaned PID file
rm -rf ${THROTT_DETECT_PID_FILE_PATH}
return 1
}
#
# Control detection and reporting of GPU throttling events.
# arg1: start - run throttle detector in background
# stop - stop throttle detector process, if any
# status - verify if throttle detector is running
#
detect_throttling() {
local pid
pid=$(get_thrott_detect_pid)
case "$1" in
status)
printf "Throttling detector is "
[ -z "${pid}" ] && printf "not running\n" && return 0
printf "running (pid=%s)\n" ${pid}
;;
stop)
[ -z "${pid}" ] && return 0
log INFO "Stopping throttling detector (pid=%s)" "${pid}"
kill ${pid}; sleep 1; kill -0 ${pid} 2>/dev/null && kill -9 ${pid}
rm -rf ${THROTT_DETECT_PID_FILE_PATH}
;;
start)
[ -n "${pid}" ] && {
log WARN "Throttling detector is already running (pid=%s)" ${pid}
return 0
}
(
read_freq_info n $(echo $CAP_FREQ_INFO | cut -d' ' -f2) || return $? # RPn
while true; do
sleep ${THROTT_DETECT_SLEEP_SEC}
read_freq_info n act min cur || exit $?
#
# The throttling seems to occur when act freq goes below min.
# However, it's necessary to exclude the idle states, where
# act freq normally reaches rpn and cur goes below min.
#
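# Illustrative example (values assumed): with min=400 and rpn=300, a sample of
# act=350, cur=450 passes all three checks below and is reported as throttling,
# while an idle sample of act=300, cur=350 is ignored.
#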
[ ${FREQ_act} -lt ${FREQ_min} ] && \
[ ${FREQ_act} -gt $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}") ] && \
[ ${FREQ_cur} -ge ${FREQ_min} ] && \
printf "GPU throttling detected: act=%s min=%s cur=%s rpn=%s\n" \
${FREQ_act} ${FREQ_min} ${FREQ_cur} $(eval "echo \${FREQ_$(echo $CAP_FREQ_INFO | cut -d' ' -f2)}")
done
) &
pid=$!
log INFO "Started GPU throttling detector (pid=%s)" ${pid}
printf "%s\n" ${pid} > ${THROTT_DETECT_PID_FILE_PATH} || \
log WARN "Failed to write throttle detector PID file"
;;
esac
}
#
# Retrieve the list of online CPUs.
#
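# cpu0 is listed unconditionally since it typically exposes no 'online'
# attribute; the remaining indices are taken from whichever 'online' files
# currently read 1.
#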
get_online_cpus() {
local path cpu_index
printf "0"
for path in $(grep 1 ${CPU_SYSFS_PREFIX}/cpu*/online); do
cpu_index=${path##*/cpu}
printf " %s" ${cpu_index%%/*}
done
}
#
# Helper to print sysfs path for the given CPU index and freq info.
#
# arg1: Frequency info sysfs name, one of *_CPU_FREQ_INFO constants above
# arg2: CPU index
#
print_cpu_freq_sysfs_path() {
printf ${CPU_FREQ_SYSFS_PATTERN} "$2" "$1"
}
#
# Read the specified CPU freq info from sysfs.
#
# arg1: CPU index
# arg2: Flag (y/n) to also enable printing the freq info.
# arg3...: Frequency info sysfs name(s), see *_CPU_FREQ_INFO constants above
# return: Global variable(s) CPU_FREQ_${arg} containing the requested information
#
read_cpu_freq_info() {
local var val info path cpu_index print=0 ret=0
cpu_index=$1
[ "$2" = "y" ] && print=1
shift 2
while [ $# -gt 0 ]; do
info=$1
shift
var=CPU_FREQ_${info}
path=$(print_cpu_freq_sysfs_path "${info}" ${cpu_index})
[ -r ${path} ] && read ${var} < ${path} || {
log ERROR "Failed to read CPU freq info from: %s" "${path}"
ret=1
continue
}
[ -n "${var}" ] || {
log ERROR "Got empty CPU freq info from: %s" "${path}"
ret=1
continue
}
[ ${print} -eq 1 ] && {
eval val=\$${var}
printf "%6s: %4s Hz\n" "${info}" "${val}"
}
done
return ${ret}
}
#
# Helper to print freq. value as requested by user via '--cpu-set-max' option.
# arg1: user requested freq value
#
compute_cpu_freq_set() {
local val
case "$1" in
+)
val=${CPU_FREQ_cpuinfo_max}
;;
-)
val=${CPU_FREQ_cpuinfo_min}
;;
*%)
val=$((${1%?} * CPU_FREQ_cpuinfo_max / 100))
;;
*[!0-9]*)
log ERROR "Cannot set CPU freq to invalid value: %s" "$1"
return 1
;;
"")
log ERROR "Cannot set CPU freq to unspecified value"
return 1
;;
*)
log ERROR "Cannot set CPU freq to custom value; use +, -, or % instead"
return 1
;;
esac
printf "%s" "${val}"
}
#
# Adjust CPU max scaling frequency.
#
set_cpu_freq_max() {
local target_freq res=0
case "${CPU_SET_MAX_FREQ}" in
+)
target_freq=100
;;
-)
target_freq=1
;;
*%)
target_freq=${CPU_SET_MAX_FREQ%?}
;;
*)
log ERROR "Invalid CPU freq"
return 1
;;
esac
local pstate_info=$(printf "${CPU_PSTATE_SYSFS_PATTERN}" max_perf_pct)
[ -e "${pstate_info}" ] && {
log INFO "Setting intel_pstate max perf to %s" "${target_freq}%"
if ! printf "%s" "${target_freq}" > "${pstate_info}";
then
log ERROR "Failed to set intel_pstate max perf"
res=1
fi
}
local cpu_index
for cpu_index in $(get_online_cpus); do
read_cpu_freq_info ${cpu_index} n ${CAP_CPU_FREQ_INFO} || { res=$?; continue; }
target_freq=$(compute_cpu_freq_set "${CPU_SET_MAX_FREQ}")
tf_res=$?
[ -z "${target_freq}" ] && { res=$tf_res; continue; }
log INFO "Setting CPU%s max scaling freq to %s Hz" ${cpu_index} "${target_freq}"
[ -n "${DRY_RUN}" ] && continue
if ! printf "%s" ${target_freq} > $(print_cpu_freq_sysfs_path scaling_max ${cpu_index});
then
res=1
log ERROR "Failed to set CPU%s max scaling frequency" ${cpu_index}
fi
done
return ${res}
}
#
# Show help message.
#
print_usage() {
cat <<EOF
Usage: ${0##*/} [OPTION]...
A script to manage Intel GPU frequencies. Can be used for debugging performance
problems or trying to obtain a stable frequency while benchmarking.
Note Intel GPUs only accept specific frequencies, usually multiples of 50 MHz.
Options:
-g, --get [act|enf|cap|all]
Get frequency information: active (default), enforced,
hardware capabilities or all of them.
-s, --set [{min|max}=]{FREQUENCY[%]|+|-}
Set min or max frequency to the given value (MHz).
Append '%' to interpret FREQUENCY as % of hw max.
Use '+' or '-' to set frequency to hardware max or min.
Omit min/max prefix to set both frequencies.
-r, --reset Reset frequencies to hardware defaults.
-m, --monitor [act|enf|cap|all]
Monitor the indicated frequencies via 'watch' utility.
See '-g, --get' option for more details.
-d|--detect-thrott [start|stop|status]
Start (default operation) the throttling detector
as a background process. Use 'stop' or 'status' to
terminate the detector process or verify its status.
--cpu-set-max {FREQUENCY%|+|-}
Set CPU max scaling frequency as % of hw max.
Use '+' or '-' to set frequency to hardware max or min.
--dry-run See what the script will do without applying any
frequency changes.
-h, --help Display this help text and exit.
EOF
}
#
# Parse user input for '-g, --get' option.
# Returns 0 if a value has been provided, otherwise 1.
#
parse_option_get() {
local ret=0
case "$1" in
act) GET_ACT_FREQ=1;;
enf) GET_ENF_FREQ=1;;
cap) GET_CAP_FREQ=1;;
all) GET_ACT_FREQ=1; GET_ENF_FREQ=1; GET_CAP_FREQ=1;;
-*|"")
# No value provided, using default.
GET_ACT_FREQ=1
ret=1
;;
*)
print_usage
exit 1
;;
esac
return ${ret}
}
#
# Validate user input for '-s, --set' option.
# arg1: input value to be validated
# arg2: optional flag indicating input is restricted to %
#
validate_option_set() {
case "$1" in
+|-|[0-9]%|[0-9][0-9]%)
return 0
;;
*[!0-9]*|"")
print_usage
exit 1
;;
esac
[ -z "$2" ] || { print_usage; exit 1; }
}
#
# Parse script arguments.
#
[ $# -eq 0 ] && { print_usage; exit 1; }
while [ $# -gt 0 ]; do
case "$1" in
-g|--get)
parse_option_get "$2" && shift
;;
-s|--set)
shift
case "$1" in
min=*)
SET_MIN_FREQ=${1#min=}
validate_option_set "${SET_MIN_FREQ}"
;;
max=*)
SET_MAX_FREQ=${1#max=}
validate_option_set "${SET_MAX_FREQ}"
;;
*)
SET_MIN_FREQ=$1
validate_option_set "${SET_MIN_FREQ}"
SET_MAX_FREQ=${SET_MIN_FREQ}
;;
esac
;;
-r|--reset)
RESET_FREQ=1
SET_MIN_FREQ="-"
SET_MAX_FREQ="+"
;;
-m|--monitor)
MONITOR_FREQ=act
parse_option_get "$2" && MONITOR_FREQ=$2 && shift
;;
-d|--detect-thrott)
DETECT_THROTT=start
case "$2" in
start|stop|status)
DETECT_THROTT=$2
shift
;;
esac
;;
--cpu-set-max)
shift
CPU_SET_MAX_FREQ=$1
validate_option_set "${CPU_SET_MAX_FREQ}" restricted
;;
--dry-run)
DRY_RUN=1
;;
-h|--help)
print_usage
exit 0
;;
*)
print_usage
exit 1
;;
esac
shift
done
#
# Main
#
RET=0
identify_intel_gpu || {
log INFO "No Intel GPU detected"
exit 0
}
[ -n "${SET_MIN_FREQ}${SET_MAX_FREQ}" ] && { set_freq || RET=$?; }
print_freq_info
[ -n "${DETECT_THROTT}" ] && detect_throttling ${DETECT_THROTT}
[ -n "${CPU_SET_MAX_FREQ}" ] && { set_cpu_freq_max || RET=$?; }
[ -n "${MONITOR_FREQ}" ] && {
log INFO "Entering frequency monitoring mode"
sleep 2
exec watch -d -n 1 "$0" -g "${MONITOR_FREQ}"
}
exit ${RET}


@@ -1,18 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC1091 # the path is created in build-kdl and
# here we check whether it exists
# shellcheck disable=SC2086 # we want the arguments to be expanded
if ! [ -f /ci-kdl/bin/activate ]; then
echo -e "ci-kdl not installed; not monitoring temperature"
exit 0
fi
KDL_ARGS="
--output-file=${RESULTS_DIR}/kdl.json
--log-level=WARNING
--num-samples=-1
"
source /ci-kdl/bin/activate
exec /ci-kdl/bin/ci-kdl ${KDL_ARGS}


@@ -1,2 +0,0 @@
variables:
CONDITIONAL_BUILD_ANGLE_TAG: ab19bccfd3858c539ba8cb8d9b52a003


@@ -1,81 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC1091
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# ALPINE_X86_64_BUILD_TAG
set -e
. .gitlab-ci/setup-test-env.sh
set -o xtrace
EPHEMERAL=(
)
DEPS=(
bash
bison
ccache
"clang${LLVM_VERSION}-dev"
cmake
clang-dev
coreutils
curl
flex
gcc
g++
git
gettext
glslang
graphviz
linux-headers
"llvm${LLVM_VERSION}-static"
"llvm${LLVM_VERSION}-dev"
meson
mold
musl-dev
expat-dev
elfutils-dev
libclc-dev
libdrm-dev
libva-dev
libpciaccess-dev
zlib-dev
python3-dev
py3-clang
py3-cparser
py3-mako
py3-packaging
py3-pip
py3-ply
py3-yaml
vulkan-headers
spirv-tools-dev
spirv-llvm-translator-dev
util-macros
)
apk --no-cache add "${DEPS[@]}" "${EPHEMERAL[@]}"
pip3 install --break-system-packages sphinx===8.2.3 hawkmoth===0.19.0
. .gitlab-ci/container/container_pre_build.sh
EXTRA_MESON_ARGS='--prefix=/usr' \
. .gitlab-ci/container/build-wayland.sh
############### Uninstall the build software
# too many vendor binaries, just keep the ones we need
find /usr/share/clc \
\( -type f -o -type l \) \
! -name 'spirv-mesa3d-.spv' \
! -name 'spirv64-mesa3d-.spv' \
-delete
apk del "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh


@@ -1,32 +0,0 @@
#!/usr/bin/env bash
# This is a ci-templates build script to generate a container for the LAVA SSH client.
# shellcheck disable=SC1091
set -e
. .gitlab-ci/setup-test-env.sh
set -o xtrace
EPHEMERAL=(
)
# We only need these very basic packages to run the tests.
DEPS=(
openssh-client # for ssh
iputils # for ping
bash
curl
)
apk --no-cache add "${DEPS[@]}" "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_pre_build.sh
############### Uninstall the build software
apk del "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh


@@ -1,62 +0,0 @@
#!/usr/bin/env bash
set -e
set -o xtrace
# Fetch the arm-built rootfs image and unpack it in our x86_64 container (saves
# network transfer, disk usage, and runtime on test jobs)
# shellcheck disable=SC2154 # arch is assigned in previous scripts
if curl --fail -L -s "${ARTIFACTS_PREFIX}/${FDO_UPSTREAM_REPO}/${ARTIFACTS_SUFFIX}/${arch}/done"; then
ARTIFACTS_URL="${ARTIFACTS_PREFIX}/${FDO_UPSTREAM_REPO}/${ARTIFACTS_SUFFIX}/${arch}"
else
ARTIFACTS_URL="${ARTIFACTS_PREFIX}/${CI_PROJECT_PATH}/${ARTIFACTS_SUFFIX}/${arch}"
fi
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
"${ARTIFACTS_URL}"/lava-rootfs.tar.zst -o rootfs.tar.zst
mkdir -p /rootfs-"$arch"
tar -C /rootfs-"$arch" '--exclude=./dev/*' --zstd -xf rootfs.tar.zst
rm rootfs.tar.zst
if [[ $arch == "arm64" ]]; then
mkdir -p /baremetal-files
pushd /baremetal-files
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}"/arm64/Image
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}"/arm64/Image.gz
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}"/arm64/cheza-kernel
DEVICE_TREES=""
DEVICE_TREES="$DEVICE_TREES apq8016-sbc-usb-host.dtb"
DEVICE_TREES="$DEVICE_TREES apq8096-db820c.dtb"
DEVICE_TREES="$DEVICE_TREES tegra210-p3450-0000.dtb"
DEVICE_TREES="$DEVICE_TREES imx8mq-nitrogen.dtb"
for DTB in $DEVICE_TREES; do
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}/arm64/$DTB"
done
popd
elif [[ $arch == "armhf" ]]; then
mkdir -p /baremetal-files
pushd /baremetal-files
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}"/armhf/zImage
DEVICE_TREES=""
DEVICE_TREES="$DEVICE_TREES imx6q-cubox-i.dtb"
DEVICE_TREES="$DEVICE_TREES tegra124-jetson-tk1.dtb"
for DTB in $DEVICE_TREES; do
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "${KERNEL_IMAGE_BASE}/armhf/$DTB"
done
popd
fi


@@ -1,121 +0,0 @@
#!/usr/bin/env bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml and .gitlab-ci/container/gitlab-ci.yml tags:
# DEBIAN_BUILD_TAG
# ANDROID_LLVM_ARTIFACT_NAME
set -exu
# If CI vars are not set, assign an empty value; this prevents -u from failing
: "${CI:=}"
: "${CI_PROJECT_PATH:=}"
# Early check for required env variables, relies on `set -u`
: "$ANDROID_NDK_VERSION"
: "$ANDROID_SDK_VERSION"
: "$ANDROID_LLVM_VERSION"
: "$ANDROID_LLVM_ARTIFACT_NAME"
: "$S3_JWT_FILE"
: "$S3_HOST"
: "$S3_ANDROID_BUCKET"
# In CI, check that the auth file used later on is non-empty
if [ -n "$CI" ] && [ ! -s "${S3_JWT_FILE}" ]; then
echo "Error: ${S3_JWT_FILE} is empty." 1>&2
exit 1
fi
if curl -s -o /dev/null -I -L -f --retry 4 --retry-delay 15 "https://${S3_HOST}/${S3_ANDROID_BUCKET}/${CI_PROJECT_PATH}/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"; then
echo "Artifact ${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst already exists, skip re-building."
# Download prebuilt LLVM libraries for Android when they have not changed,
# to save some time
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-o "/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst" "https://${S3_HOST}/${S3_ANDROID_BUCKET}/${CI_PROJECT_PATH}/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"
tar -C / --zstd -xf "/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"
rm "/${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"
exit
fi
# Ephemeral packages (installed for this script and removed again at the end)
EPHEMERAL=(
unzip
)
apt-get update
apt-get install -y --no-install-recommends --no-remove "${EPHEMERAL[@]}"
ANDROID_NDK="android-ndk-${ANDROID_NDK_VERSION}"
ANDROID_NDK_ROOT="/${ANDROID_NDK}"
if [ ! -d "$ANDROID_NDK_ROOT" ];
then
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-o "${ANDROID_NDK}.zip" \
"https://dl.google.com/android/repository/${ANDROID_NDK}-linux.zip"
unzip -d / "${ANDROID_NDK}.zip" "$ANDROID_NDK/source.properties" "$ANDROID_NDK/build/cmake/*" "$ANDROID_NDK/toolchains/llvm/*"
rm "${ANDROID_NDK}.zip"
fi
if [ ! -d "/llvm-project" ];
then
mkdir "/llvm-project"
pushd "/llvm-project"
git init
git remote add origin https://github.com/llvm/llvm-project.git
git fetch --depth 1 origin "$ANDROID_LLVM_VERSION"
git checkout FETCH_HEAD
popd
fi
pushd "/llvm-project"
# Check out the intended version again, just in case of a pre-existing full clone
git checkout "$ANDROID_LLVM_VERSION" || true
LLVM_INSTALL_PREFIX="/${ANDROID_LLVM_ARTIFACT_NAME}"
rm -rf build/
cmake -GNinja -S llvm -B build/ \
-DCMAKE_TOOLCHAIN_FILE="${ANDROID_NDK_ROOT}/build/cmake/android.toolchain.cmake" \
-DANDROID_ABI=x86_64 \
-DANDROID_PLATFORM="android-${ANDROID_SDK_VERSION}" \
-DANDROID_NDK="${ANDROID_NDK_ROOT}" \
-DCMAKE_ANDROID_ARCH_ABI=x86_64 \
-DCMAKE_ANDROID_NDK="${ANDROID_NDK_ROOT}" \
-DCMAKE_BUILD_TYPE=MinSizeRel \
-DCMAKE_SYSTEM_NAME=Android \
-DCMAKE_SYSTEM_VERSION="${ANDROID_SDK_VERSION}" \
-DCMAKE_INSTALL_PREFIX="${LLVM_INSTALL_PREFIX}" \
-DCMAKE_CXX_FLAGS="-march=x86-64 --target=x86_64-linux-android${ANDROID_SDK_VERSION} -fno-rtti" \
-DLLVM_HOST_TRIPLE="x86_64-linux-android${ANDROID_SDK_VERSION}" \
-DLLVM_TARGETS_TO_BUILD=X86 \
-DLLVM_BUILD_LLVM_DYLIB=OFF \
-DLLVM_BUILD_TESTS=OFF \
-DLLVM_BUILD_EXAMPLES=OFF \
-DLLVM_BUILD_DOCS=OFF \
-DLLVM_BUILD_TOOLS=OFF \
-DLLVM_ENABLE_RTTI=OFF \
-DLLVM_BUILD_INSTRUMENTED_COVERAGE=OFF \
-DLLVM_NATIVE_TOOL_DIR="${ANDROID_NDK_ROOT}/toolchains/llvm/prebuilt/linux-x86_64/bin" \
-DLLVM_ENABLE_PIC=False \
-DLLVM_OPTIMIZED_TABLEGEN=ON
ninja "-j${FDO_CI_CONCURRENT:-4}" -C build/ install
popd
rm -rf /llvm-project
tar --zstd -cf "${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst" "$LLVM_INSTALL_PREFIX"
# If run in CI upload the tar.zst archive to S3 to avoid rebuilding it if the
# version does not change, and delete it.
# The file is not deleted for non-CI because it can be useful in local runs.
if [ -n "$CI" ]; then
s3_upload "${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst" "https://${S3_HOST}/${S3_ANDROID_BUCKET}/${CI_PROJECT_PATH}/"
rm "${ANDROID_LLVM_ARTIFACT_NAME}.tar.zst"
fi
apt-get purge -y "${EPHEMERAL[@]}"


@@ -1,164 +0,0 @@
#!/usr/bin/env bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_ANDROID_TAG
# KERNEL_ROOTFS_TAG
set -uex
uncollapsed_section_start angle "Building ANGLE"
# Do a very early check to make sure the tag is correct, without needing to
# set up the environment variables locally
ci_tag_build_time_check "ANGLE_TAG"
ANGLE_REV="a3f2545f6bb3e8d27827dceb2b4e901673995ad1"
# Set ANGLE_ARCH based on DEBIAN_ARCH if it hasn't been explicitly defined
if [[ -z "${ANGLE_ARCH:-}" ]]; then
case "$DEBIAN_ARCH" in
amd64) ANGLE_ARCH=x64;;
arm64) ANGLE_ARCH=arm64;;
esac
fi
# DEPOT tools
git clone --depth 1 https://chromium.googlesource.com/chromium/tools/depot_tools.git /depot-tools
export PATH=/depot-tools:$PATH
export DEPOT_TOOLS_UPDATE=0
mkdir /angle-build
mkdir /angle
pushd /angle-build
git init
git remote add origin https://chromium.googlesource.com/angle/angle.git
git fetch --depth 1 origin "$ANGLE_REV"
git checkout FETCH_HEAD
echo "$ANGLE_REV" > /angle/version
GCLIENT_CUSTOM_VARS=()
GCLIENT_CUSTOM_VARS+=('--custom-var=angle_enable_cl=False')
GCLIENT_CUSTOM_VARS+=('--custom-var=angle_enable_cl_testing=False')
GCLIENT_CUSTOM_VARS+=('--custom-var=angle_enable_vulkan_validation_layers=False')
GCLIENT_CUSTOM_VARS+=('--custom-var=angle_enable_wgpu=False')
GCLIENT_CUSTOM_VARS+=('--custom-var=build_angle_deqp_tests=False')
GCLIENT_CUSTOM_VARS+=('--custom-var=build_angle_perftests=False')
if [[ "$ANGLE_TARGET" == "android" ]]; then
GCLIENT_CUSTOM_VARS+=('--custom-var=checkout_android=True')
fi
# source preparation
gclient config --name REPLACE-WITH-A-DOT --unmanaged \
"${GCLIENT_CUSTOM_VARS[@]}" \
https://chromium.googlesource.com/angle/angle.git
sed -e 's/REPLACE-WITH-A-DOT/./;' -i .gclient
sed -e 's|"custom_deps" : {|"custom_deps" : {\
"third_party/clspv/src": None,\
"third_party/dawn": None,\
"third_party/glmark2/src": None,\
"third_party/libjpeg_turbo": None,\
"third_party/llvm/src": None,\
"third_party/OpenCL-CTS/src": None,\
"third_party/SwiftShader": None,\
"third_party/VK-GL-CTS/src": None,\
"third_party/vulkan-validation-layers/src": None,|' -i .gclient
gclient sync --no-history -j"${FDO_CI_CONCURRENT:-4}"
mkdir -p out/Release
cat > out/Release/args.gn <<EOF
angle_assert_always_on=false
angle_build_all=false
angle_build_tests=false
angle_enable_cl=false
angle_enable_cl_testing=false
angle_enable_gl=false
angle_enable_gl_desktop_backend=false
angle_enable_null=false
angle_enable_swiftshader=false
angle_enable_trace=false
angle_enable_wgpu=false
angle_enable_vulkan=true
angle_enable_vulkan_api_dump_layer=false
angle_enable_vulkan_validation_layers=false
angle_has_frame_capture=false
angle_has_histograms=false
angle_has_rapidjson=false
angle_use_custom_libvulkan=false
build_angle_deqp_tests=false
dcheck_always_on=true
enable_expensive_dchecks=false
is_component_build=false
is_debug=false
target_cpu="${ANGLE_ARCH}"
target_os="${ANGLE_TARGET}"
treat_warnings_as_errors=false
EOF
case "$ANGLE_TARGET" in
linux) cat >> out/Release/args.gn <<EOF
angle_egl_extension="so.1"
angle_glesv2_extension="so.2"
use_custom_libcxx=false
custom_toolchain="//build/toolchain/linux/unbundle:default"
host_toolchain="//build/toolchain/linux/unbundle:default"
EOF
;;
android) cat >> out/Release/args.gn <<EOF
android_ndk_version="${ANDROID_NDK_VERSION}"
android64_ndk_api_level=${ANDROID_SDK_VERSION}
android32_ndk_api_level=${ANDROID_SDK_VERSION}
use_custom_libcxx=true
EOF
;;
*) echo "Unexpected ANGLE_TARGET value: $ANGLE_TARGET"; exit 1;;
esac
if [[ "$DEBIAN_ARCH" = "arm64" ]]; then
# We need to get an AArch64 sysroot - because ANGLE doesn't play well with
# system dependencies - but use the default system toolchain, because the
# 'arm64' toolchain you get from Google infrastructure is a cross-compiler
# from x86-64
build/linux/sysroot_scripts/install-sysroot.py --arch=arm64
fi
(
# The 'unbundled' toolchain configuration requires clang, and it also needs to
# be configured via environment variables.
export CC="clang-${LLVM_VERSION}"
export HOST_CC="$CC"
export CFLAGS="-Wno-unknown-warning-option"
export HOST_CFLAGS="$CFLAGS"
export CXX="clang++-${LLVM_VERSION}"
export HOST_CXX="$CXX"
export CXXFLAGS="-Wno-unknown-warning-option"
export HOST_CXXFLAGS="$CXXFLAGS"
export AR="ar"
export HOST_AR="$AR"
export NM="nm"
export HOST_NM="$NM"
export LDFLAGS="-fuse-ld=lld-${LLVM_VERSION} -lpthread -ldl"
export HOST_LDFLAGS="$LDFLAGS"
gn gen out/Release
# depot_tools overrides ninja with a version that doesn't work. We want
# ninja with FDO_CI_CONCURRENT anyway.
/usr/local/bin/ninja -C out/Release/ libEGL libGLESv1_CM libGLESv2
)
rm -f out/Release/libvulkan.so* out/Release/*.so*.TOC
cp out/Release/lib*.so* /angle/
if [[ "$ANGLE_TARGET" == "linux" ]]; then
ln -s libEGL.so.1 /angle/libEGL.so
ln -s libGLESv2.so.2 /angle/libGLESv2.so
fi
rm -rf out
popd
rm -rf /depot-tools
rm -rf /angle-build
section_end angle


@@ -1,29 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_GL_TAG
# DEBIAN_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
set -uex
uncollapsed_section_start apitrace "Building apitrace"
APITRACE_VERSION="952bad1469ea747012bdc48c48993bd5f13eec04"
git clone https://github.com/apitrace/apitrace.git --single-branch --no-checkout /apitrace
pushd /apitrace
git checkout "$APITRACE_VERSION"
git submodule update --init --depth 1 --recursive
cmake -S . -B _build -G Ninja -DCMAKE_BUILD_TYPE=Release -DENABLE_GUI=False -DENABLE_WAFFLE=on ${EXTRA_CMAKE_ARGS:-}
cmake --build _build --parallel --target apitrace eglretrace
mkdir build
cp _build/apitrace build
cp _build/eglretrace build
${STRIP_CMD:-strip} build/*
find . -not -path './build' -not -path './build/*' -delete
popd
section_end apitrace


@@ -1,23 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
uncollapsed_section_start bindgen "Building bindgen"
BINDGEN_VER=0.65.1
CBINDGEN_VER=0.26.0
# bindgen
RUSTFLAGS='-L native=/usr/local/lib' cargo install \
bindgen-cli --version ${BINDGEN_VER} \
--locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local
# cbindgen
RUSTFLAGS='-L native=/usr/local/lib' cargo install \
cbindgen --version ${CBINDGEN_VER} \
--locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local
section_end bindgen


@@ -1,55 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BASE_TAG
# DEBIAN_TEST_GL_TAG
# DEBIAN_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
set -uex
uncollapsed_section_start crosvm "Building crosvm"
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
CROSVM_VERSION=e27efaf8f4bdc4a47d1e99cc44d2b6908b6f36bd
git clone --single-branch -b main --no-checkout https://chromium.googlesource.com/crosvm/crosvm /platform/crosvm
pushd /platform/crosvm
git checkout "$CROSVM_VERSION"
git submodule update --init
VIRGLRENDERER_VERSION=7570167549358ce77b8d4774041b4a77c72a021c
rm -rf third_party/virglrenderer
git clone --single-branch -b main --no-checkout https://gitlab.freedesktop.org/virgl/virglrenderer.git third_party/virglrenderer
pushd third_party/virglrenderer
git checkout "$VIRGLRENDERER_VERSION"
meson setup build/ -D libdir=lib -D render-server-worker=process -D venus=true ${EXTRA_MESON_ARGS:-}
meson install -C build
popd
rm rust-toolchain
RUSTFLAGS='-L native=/usr/local/lib' cargo install \
bindgen-cli \
--locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local \
--version 0.71.1 \
${EXTRA_CARGO_ARGS:-}
CROSVM_USE_SYSTEM_MINIGBM=1 CROSVM_USE_SYSTEM_VIRGLRENDERER=1 RUSTFLAGS='-L native=/usr/local/lib' cargo install \
-j ${FDO_CI_CONCURRENT:-4} \
--locked \
--features 'default-no-sandbox gpu x virgl_renderer' \
--path . \
--root /usr/local \
${EXTRA_CARGO_ARGS:-}
popd
rm -rf /platform/crosvm
section_end crosvm


@@ -1,100 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_ANDROID_TAG
# DEBIAN_BASE_TAG
# KERNEL_ROOTFS_TAG
set -uex
uncollapsed_section_start deqp-runner "Building deqp-runner"
DEQP_RUNNER_VERSION=0.20.3
commits_to_backport=(
)
patch_files=(
)
DEQP_RUNNER_GIT_URL="${DEQP_RUNNER_GIT_URL:-https://gitlab.freedesktop.org/mesa/deqp-runner.git}"
if [ -n "${DEQP_RUNNER_GIT_TAG:-}" ]; then
DEQP_RUNNER_GIT_CHECKOUT="$DEQP_RUNNER_GIT_TAG"
elif [ -n "${DEQP_RUNNER_GIT_REV:-}" ]; then
DEQP_RUNNER_GIT_CHECKOUT="$DEQP_RUNNER_GIT_REV"
else
DEQP_RUNNER_GIT_CHECKOUT="v$DEQP_RUNNER_VERSION"
fi
BASE_PWD=$PWD
mkdir -p /deqp-runner
pushd /deqp-runner
mkdir deqp-runner-git
pushd deqp-runner-git
git init
git remote add origin "$DEQP_RUNNER_GIT_URL"
git fetch --depth 1 origin "$DEQP_RUNNER_GIT_CHECKOUT"
git checkout FETCH_HEAD
for commit in "${commits_to_backport[@]}"
do
PATCH_URL="https://gitlab.freedesktop.org/mesa/deqp-runner/-/commit/$commit.patch"
echo "Backport deqp-runner commit $commit from $PATCH_URL"
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 $PATCH_URL | git am
done
for patch in "${patch_files[@]}"
do
echo "Apply patch to deqp-runner from $patch"
git am "$BASE_PWD/.gitlab-ci/container/patches/$patch"
done
if [ -z "${RUST_TARGET:-}" ]; then
RUST_TARGET=""
fi
if [[ "$RUST_TARGET" != *-android ]]; then
# When the CC variable (/usr/lib/ccache/gcc) is set, the Rust compiler uses it
# when cross-compiling for arm32, and the build fails for zsys-sys.
# So unset CC when cross-compiling for arm32.
SAVEDCC=${CC:-}
if [ "$RUST_TARGET" = "armv7-unknown-linux-gnueabihf" ]; then
unset CC
fi
cargo install --locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local \
${EXTRA_CARGO_ARGS:-} \
--path .
CC=$SAVEDCC
else
cargo install --locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local --version 2.10.0 \
cargo-ndk
rustup target add $RUST_TARGET
RUSTFLAGS='-C target-feature=+crt-static' cargo ndk --target $RUST_TARGET build --release
mv target/$RUST_TARGET/release/deqp-runner /deqp-runner
cargo uninstall --locked \
--root /usr/local \
cargo-ndk
fi
popd
rm -rf deqp-runner-git
popd
# remove unused test runners to shrink images for the Mesa CI build (not the
# kernel build, which chooses its own deqp branch)
if [ -z "${DEQP_RUNNER_GIT_TAG:-}${DEQP_RUNNER_GIT_REV:-}" ]; then
rm -f /usr/local/bin/igt-runner
fi
section_end deqp-runner


@@ -1,325 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_ANDROID_TAG
# DEBIAN_TEST_GL_TAG
# DEBIAN_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
set -ue -o pipefail
# shellcheck disable=SC2153
deqp_api=${DEQP_API,,}
uncollapsed_section_start deqp-$deqp_api "Building dEQP $DEQP_API"
set -x
# See `deqp_build_targets` below for which release is used to produce which
# binary. Unless this comment has bitrotten:
# - the commit from the main branch produces the deqp tools and `deqp-vk`,
# - the VK release produces `deqp-vk`,
# - the GL release produces `glcts`, and
# - the GLES release produces `deqp-gles*` and `deqp-egl`
DEQP_MAIN_COMMIT=76c1572eaba42d7ddd9bb8eb5788e52dd932068e
DEQP_VK_VERSION=1.4.1.1
DEQP_GL_VERSION=4.6.5.0
DEQP_GLES_VERSION=3.2.11.0
# Patches to VulkanCTS may come from commits in their repo (listed in
# cts_commits_to_backport) or patch files stored in our repo (in the patch
# directory `$OLDPWD/.gitlab-ci/container/patches/` listed in cts_patch_files).
# Both list variables should have comments explaining the reasons behind the
# patches.
# shellcheck disable=SC2034
main_cts_commits_to_backport=(
# If you find yourself wanting to add something in here, consider whether
# bumping DEQP_MAIN_COMMIT is not a better solution :)
)
# shellcheck disable=SC2034
main_cts_patch_files=(
)
# shellcheck disable=SC2034
vk_cts_commits_to_backport=(
# Stop querying device address from unbound buffers
046343f46f7d39d53b47842d7fd8ed3279528046
)
# shellcheck disable=SC2034
vk_cts_patch_files=(
)
# shellcheck disable=SC2034
gl_cts_commits_to_backport=(
# Add #include <cmath> in deMath.h when being compiled by C++
71808fe7d0a640dfd703e845d93ba1c5ab751055
# Revert "Add #include <cmath> in deMath.h when being compiled by C++ compiler"
# This also adds an alternative fix along with the revert.
6164879a0acce258637d261592a9c395e564b361
)
# shellcheck disable=SC2034
gl_cts_patch_files=(
build-deqp-gl_Build-Don-t-build-Vulkan-utilities-for-GL-builds.patch
)
# shellcheck disable=SC2034
# GLES builds also EGL
gles_cts_commits_to_backport=(
# Add #include <cmath> in deMath.h when being compiled by C++
71808fe7d0a640dfd703e845d93ba1c5ab751055
# Revert "Add #include <cmath> in deMath.h when being compiled by C++ compiler"
# This also adds an alternative fix along with the revert.
6164879a0acce258637d261592a9c395e564b361
)
# shellcheck disable=SC2034
gles_cts_patch_files=(
build-deqp-gl_Build-Don-t-build-Vulkan-utilities-for-GL-builds.patch
)
if [ "${DEQP_TARGET}" = 'android' ]; then
gles_cts_patch_files+=(
build-deqp-gles_Allow-running-on-Android-from-the-command-line.patch
build-deqp-gles_Android-prints-to-stdout-instead-of-logcat.patch
)
fi
### Careful editing anything below this line
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
# shellcheck disable=SC2153
case "${DEQP_API}" in
tools) DEQP_VERSION="$DEQP_MAIN_COMMIT";;
*-main) DEQP_VERSION="$DEQP_MAIN_COMMIT";;
VK) DEQP_VERSION="vulkan-cts-$DEQP_VK_VERSION";;
GL) DEQP_VERSION="opengl-cts-$DEQP_GL_VERSION";;
GLES) DEQP_VERSION="opengl-es-cts-$DEQP_GLES_VERSION";;
*) echo "Unexpected DEQP_API value: $DEQP_API"; exit 1;;
esac
mkdir -p /VK-GL-CTS
pushd /VK-GL-CTS
[ -e .git ] || {
git init
git remote add origin https://github.com/KhronosGroup/VK-GL-CTS.git
}
git fetch --depth 1 origin "$DEQP_VERSION"
git checkout FETCH_HEAD
DEQP_COMMIT=$(git rev-parse FETCH_HEAD)
if [ "$DEQP_VERSION" = "$DEQP_MAIN_COMMIT" ]; then
merge_base="$(curl --fail -s https://api.github.com/repos/KhronosGroup/VK-GL-CTS/compare/main...$DEQP_MAIN_COMMIT | jq -r .merge_base_commit.sha)"
if [[ "$merge_base" != "$DEQP_MAIN_COMMIT" ]]; then
echo "VK-GL-CTS commit $DEQP_MAIN_COMMIT is not a commit from the main branch."
exit 1
fi
fi
mkdir -p /deqp-$deqp_api
if [ "$DEQP_VERSION" = "$DEQP_MAIN_COMMIT" ]; then
prefix="main"
else
prefix="$deqp_api"
fi
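# Select the per-prefix arrays (e.g. vk_cts_commits_to_backport) via bash
# indirect expansion: ${!name} below expands the array whose name is stored
# in the variable.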
cts_commits_to_backport="${prefix}_cts_commits_to_backport[@]"
for commit in "${!cts_commits_to_backport}"
do
PATCH_URL="https://github.com/KhronosGroup/VK-GL-CTS/commit/$commit.patch"
echo "Apply patch to ${DEQP_API} CTS from $PATCH_URL"
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 $PATCH_URL | \
GIT_COMMITTER_DATE=$(LC_TIME=C date -d@0) git am -
done
cts_patch_files="${prefix}_cts_patch_files[@]"
for patch in "${!cts_patch_files}"
do
echo "Apply patch to ${DEQP_API} CTS from $patch"
GIT_COMMITTER_DATE=$(LC_TIME=C date -d@0) git am < $OLDPWD/.gitlab-ci/container/patches/$patch
done
{
if [ "$DEQP_VERSION" = "$DEQP_MAIN_COMMIT" ]; then
commit_desc=$(git show --no-patch --format='commit %h on %ci' --abbrev=10 "$DEQP_COMMIT")
echo "dEQP $DEQP_API at $commit_desc"
else
echo "dEQP $DEQP_API version $DEQP_VERSION"
fi
if [ "$(git rev-parse HEAD)" != "$DEQP_COMMIT" ]; then
echo "The following local patches are applied on top:"
git log --reverse --oneline "$DEQP_COMMIT".. --format='- %s'
fi
} > /deqp-$deqp_api/deqp-$deqp_api-version
# --insecure is due to SSL cert failures hitting sourceforge for zlib and
# libpng (sigh). The archives get their checksums checked anyway, and git
# always goes through ssh or https.
python3 external/fetch_sources.py --insecure
case "${DEQP_API}" in
VK-main)
# Video tests rely on external files
python3 external/fetch_video_decode_samples.py
python3 external/fetch_video_encode_samples.py
;;
esac
if [[ "$DEQP_API" = tools ]]; then
# Save the testlog stylesheets:
cp doc/testlog-stylesheet/testlog.{css,xsl} /deqp-$deqp_api
fi
popd
deqp_build_targets=()
case "${DEQP_API}" in
VK|VK-main)
deqp_build_targets+=(deqp-vk)
;;
GL)
deqp_build_targets+=(glcts)
;;
GLES)
deqp_build_targets+=(deqp-gles{2,3,31})
deqp_build_targets+=(glcts) # needed for gles*-khr tests
# deqp-egl also comes from this build, but it is handled separately below.
;;
tools)
deqp_build_targets+=(testlog-to-xml)
deqp_build_targets+=(testlog-to-csv)
deqp_build_targets+=(testlog-to-junit)
;;
esac
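# Join the selected targets with ';' (CMake's list separator) so they can be
# passed as a single -DSELECTED_BUILD_TARGETS value below.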
OLD_IFS="$IFS"
IFS=";"
CMAKE_SBT="${deqp_build_targets[*]}"
IFS="$OLD_IFS"
pushd /deqp-$deqp_api
if [ "${DEQP_API}" = 'GLES' ]; then
if [ "${DEQP_TARGET}" = 'android' ]; then
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=android \
-DCMAKE_BUILD_TYPE=Release \
-DSELECTED_BUILD_TARGETS="deqp-egl" \
${EXTRA_CMAKE_ARGS:-}
ninja modules/egl/deqp-egl
mv modules/egl/deqp-egl{,-android}
else
# When including EGL/X11 testing, do that build first and save off its
# deqp-egl binary.
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=x11_egl_glx \
-DCMAKE_BUILD_TYPE=Release \
-DSELECTED_BUILD_TARGETS="deqp-egl" \
${EXTRA_CMAKE_ARGS:-}
ninja modules/egl/deqp-egl
mv modules/egl/deqp-egl{,-x11}
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=wayland \
-DCMAKE_BUILD_TYPE=Release \
-DSELECTED_BUILD_TARGETS="deqp-egl" \
${EXTRA_CMAKE_ARGS:-}
ninja modules/egl/deqp-egl
mv modules/egl/deqp-egl{,-wayland}
fi
fi
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=${DEQP_TARGET} \
-DCMAKE_BUILD_TYPE=Release \
-DSELECTED_BUILD_TARGETS="${CMAKE_SBT}" \
${EXTRA_CMAKE_ARGS:-}
# Make sure `default` doesn't silently stop detecting one of the platforms we care about
if [ "${DEQP_TARGET}" = 'default' ]; then
grep -q DEQP_SUPPORT_WAYLAND=1 build.ninja
grep -q DEQP_SUPPORT_X11=1 build.ninja
grep -q DEQP_SUPPORT_XCB=1 build.ninja
fi
ninja "${deqp_build_targets[@]}"
if [ "$DEQP_API" != tools ]; then
# Copy out the mustpass lists we want.
mkdir -p mustpass
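# For Vulkan, vk-default.txt is an index of per-section caselist files;
# concatenate them into a single vk-main.txt.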
if [ "${DEQP_API}" = 'VK' ] || [ "${DEQP_API}" = 'VK-main' ]; then
for mustpass in $(< /VK-GL-CTS/external/vulkancts/mustpass/main/vk-default.txt) ; do
cat /VK-GL-CTS/external/vulkancts/mustpass/main/$mustpass \
>> mustpass/vk-main.txt
done
fi
if [ "${DEQP_API}" = 'GL' ]; then
cp \
/VK-GL-CTS/external/openglcts/data/gl_cts/data/mustpass/gl/khronos_mustpass/main/*-main.txt \
mustpass/
cp \
/VK-GL-CTS/external/openglcts/data/gl_cts/data/mustpass/gl/khronos_mustpass_single/main/*-single.txt \
mustpass/
fi
if [ "${DEQP_API}" = 'GLES' ]; then
cp \
/VK-GL-CTS/external/openglcts/data/gl_cts/data/mustpass/gles/aosp_mustpass/main/*.txt \
mustpass/
cp \
/VK-GL-CTS/external/openglcts/data/gl_cts/data/mustpass/egl/aosp_mustpass/main/egl-main.txt \
mustpass/
cp \
/VK-GL-CTS/external/openglcts/data/gl_cts/data/mustpass/gles/khronos_mustpass/main/*-main.txt \
mustpass/
fi
# Compress the caselists, since Vulkan's in particular are gigantic; higher
# compression levels provide no real measurable benefit.
zstd -1 --rm mustpass/*.txt
fi
if [ "$DEQP_API" = tools ]; then
# Save *some* executor utils, but otherwise strip things down
# to reduce the deqp build size:
mv executor/testlog-to-* .
rm -rf executor
fi
# Remove other mustpass files, since we saved off the ones we wanted to convenient locations above.
rm -rf assets/**/mustpass/
rm -rf external/**/mustpass/
rm -rf external/vulkancts/modules/vulkan/vk-main*
rm -rf external/vulkancts/modules/vulkan/vk-default
rm -rf external/openglcts/modules/cts-runner
rm -rf modules/internal
rm -rf execserver
rm -rf framework
find . -depth \( -iname '*cmake*' -o -name '*ninja*' -o -name '*.o' -o -name '*.a' \) -exec rm -rf {} \;
if [ "${DEQP_API}" = 'VK' ] || [ "${DEQP_API}" = 'VK-main' ]; then
${STRIP_CMD:-strip} external/vulkancts/modules/vulkan/deqp-vk
fi
if [ "${DEQP_API}" = 'GL' ] || [ "${DEQP_API}" = 'GLES' ]; then
${STRIP_CMD:-strip} external/openglcts/modules/glcts
fi
if [ "${DEQP_API}" = 'GLES' ]; then
${STRIP_CMD:-strip} modules/*/deqp-*
fi
du -sh ./*
popd
section_end deqp-$deqp_api


@@ -1,19 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
set -uex
uncollapsed_section_start directx-headers "Building directx-headers"
git clone https://github.com/microsoft/DirectX-Headers -b v1.614.1 --depth 1
pushd DirectX-Headers
meson setup build --backend=ninja --buildtype=release -Dbuild-test=false ${EXTRA_MESON_ARGS:-}
meson install -C build
popd
rm -rf DirectX-Headers
section_end directx-headers


@@ -1,39 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC1091 # The relative paths in this file only become valid at runtime.
# shellcheck disable=SC2034 # Variables are used in scripts called from here
# shellcheck disable=SC2086 # we want word splitting
# Install fluster in /usr/local.
FLUSTER_REVISION="e997402978f62428fffc8e5a4a709690d9ca9bc5"
git clone https://github.com/fluendo/fluster.git --single-branch --no-checkout
pushd fluster || exit
git checkout ${FLUSTER_REVISION}
popd || exit
if [ "${SKIP_UPDATE_FLUSTER_VECTORS}" != 1 ]; then
# Download the necessary vectors: H264, H265 and VP9
# When updating FLUSTER_REVISION, make sure to update the vectors if necessary or
# fluster-runner will report Missing results.
fluster/fluster.py download \
JVT-AVC_V1 JVT-FR-EXT JVT-MVC JVT-SVC_V1 \
JCT-VC-3D-HEVC JCT-VC-HEVC_V1 JCT-VC-MV-HEVC JCT-VC-RExt JCT-VC-SCC JCT-VC-SHVC \
VP9-TEST-VECTORS-HIGH VP9-TEST-VECTORS
# Build fluster vectors archive and upload it
tar --zstd -cf "vectors.tar.zst" fluster/resources/
s3_upload vectors.tar.zst "https://${S3_PATH_FLUSTER}/"
touch /lava-files/done
s3_upload /lava-files/done "https://${S3_PATH_FLUSTER}/"
# Don't include the vectors in the rootfs
rm -fr fluster/resources/*
fi
mkdir -p "${ROOTFS}/usr/local/"
mv fluster "${ROOTFS}/usr/local/"


@@ -1,23 +0,0 @@
#!/bin/bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
set -ex
uncollapsed_section_start fossilize "Building fossilize"
git clone https://github.com/ValveSoftware/Fossilize.git
cd Fossilize
git checkout b43ee42bbd5631ea21fe9a2dee4190d5d875c327
git submodule update --init
mkdir build
cd build
cmake -S .. -B . -G Ninja -DCMAKE_BUILD_TYPE=Release
ninja -C . install
cd ../..
rm -rf Fossilize
section_end fossilize


@@ -1,23 +0,0 @@
#!/usr/bin/env bash
set -ex
uncollapsed_section_start gfxreconstruct "Building gfxreconstruct"
GFXRECONSTRUCT_VERSION=761837794a1e57f918a85af7000b12e531b178ae
git clone https://github.com/LunarG/gfxreconstruct.git \
--single-branch \
-b master \
--no-checkout \
/gfxreconstruct
pushd /gfxreconstruct
git checkout "$GFXRECONSTRUCT_VERSION"
git submodule update --init
git submodule update
cmake -S . -B _build -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX:PATH=/gfxreconstruct/build -DBUILD_WERROR=OFF
cmake --build _build --parallel --target tools/{replay,info}/install/strip
find . -not -path './build' -not -path './build/*' -delete
popd
section_end gfxreconstruct


@@ -1,32 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC1091 # the path is created by the script
set -ex
uncollapsed_section_start kdl "Building kdl"
KDL_REVISION="cbbe5fd54505fd03ee34f35bfd16794f0c30074f"
KDL_CHECKOUT_DIR="/tmp/ci-kdl.git"
mkdir -p ${KDL_CHECKOUT_DIR}
pushd ${KDL_CHECKOUT_DIR}
git init
git remote add origin https://gitlab.freedesktop.org/gfx-ci/ci-kdl.git
git fetch --depth 1 origin ${KDL_REVISION}
git checkout FETCH_HEAD
popd
# Run venv in a subshell, so we don't accidentally leak the venv state into
# calling scripts
(
python3 -m venv /ci-kdl
source /ci-kdl/bin/activate &&
pushd ${KDL_CHECKOUT_DIR} &&
pip install -r requirements.txt &&
pip install . &&
popd
)
rm -rf ${KDL_CHECKOUT_DIR}
section_end kdl


@@ -1,35 +0,0 @@
#!/usr/bin/env bash
set -uex
uncollapsed_section_start libclc "Building libclc"
export LLVM_CONFIG="llvm-config-${LLVM_VERSION:?"llvm unset!"}"
LLVM_TAG="llvmorg-15.0.7"
$LLVM_CONFIG --version
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
git clone \
https://github.com/llvm/llvm-project \
--depth 1 \
-b "${LLVM_TAG}" \
/llvm-project
mkdir /libclc
pushd /libclc
cmake -S /llvm-project/libclc -B . -G Ninja -DLLVM_CONFIG="$LLVM_CONFIG" -DLIBCLC_TARGETS_TO_BUILD="spirv-mesa3d-;spirv64-mesa3d-" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr -DLLVM_SPIRV=/usr/bin/llvm-spirv
ninja
ninja install
popd
# Work around cmake vs. Debian packaging.
mkdir -p /usr/lib/clc
ln -s /usr/share/clc/spirv64-mesa3d-.spv /usr/lib/clc/
ln -s /usr/share/clc/spirv-mesa3d-.spv /usr/lib/clc/
du -sh ./*
rm -rf /libclc /llvm-project
section_end libclc


@@ -1,21 +0,0 @@
#!/usr/bin/env bash
# Script used for Android and Fedora builds (Debian builds get their libdrm version
# from https://gitlab.freedesktop.org/gfx-ci/ci-deb-repo - see PKG_REPO_REV)
# shellcheck disable=SC2086 # we want word splitting
set -uex
uncollapsed_section_start libdrm "Building libdrm"
export LIBDRM_VERSION=libdrm-2.4.122
curl -L -O --retry 4 -f --retry-all-errors --retry-delay 60 \
https://dri.freedesktop.org/libdrm/"$LIBDRM_VERSION".tar.xz
tar -xvf "$LIBDRM_VERSION".tar.xz && rm "$LIBDRM_VERSION".tar.xz
cd "$LIBDRM_VERSION"
meson setup build -D vc4=disabled -D freedreno=disabled -D etnaviv=disabled ${EXTRA_MESON_ARGS:-}
meson install -C build
cd ..
rm -rf "$LIBDRM_VERSION"
section_end libdrm


@@ -1,30 +0,0 @@
#!/usr/bin/env bash
set -ex
uncollapsed_section_start llvm-spirv "Building LLVM-SPIRV-Translator"
if [ "${LLVM_VERSION:?llvm version not set}" -ge 18 ]; then
VER="${LLVM_VERSION}.1.0"
else
VER="${LLVM_VERSION}.0.0"
fi
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
-O "https://github.com/KhronosGroup/SPIRV-LLVM-Translator/archive/refs/tags/v${VER}.tar.gz"
tar -xvf "v${VER}.tar.gz" && rm "v${VER}.tar.gz"
mkdir "SPIRV-LLVM-Translator-${VER}/build"
pushd "SPIRV-LLVM-Translator-${VER}/build"
cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr
ninja
ninja install
# For some reason llvm-spirv is not installed by default
ninja llvm-spirv
cp tools/llvm-spirv/llvm-spirv /usr/bin/
popd
du -sh "SPIRV-LLVM-Translator-${VER}"
rm -rf "SPIRV-LLVM-Translator-${VER}"
section_end llvm-spirv


@@ -1,32 +0,0 @@
#!/usr/bin/env bash
set -ex
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# ALPINE_X86_64_BUILD_TAG
# DEBIAN_BASE_TAG
# DEBIAN_BUILD_TAG
# FEDORA_X86_64_BUILD_TAG
# KERNEL_ROOTFS_TAG
uncollapsed_section_start mold "Building mold"
MOLD_VERSION="2.32.0"
git clone -b v"$MOLD_VERSION" --single-branch --depth 1 https://github.com/rui314/mold.git
pushd mold
cmake -DCMAKE_BUILD_TYPE=Release -D BUILD_TESTING=OFF -D MOLD_LTO=ON
cmake --build . --parallel "${FDO_CI_CONCURRENT:-4}"
cmake --install . --strip
# Always use mold from now on
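# Symlink the system linker and every *-ld cross wrapper to mold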
find /usr/bin \( -name '*-ld' -o -name 'ld' \) \
-exec ln -sf /usr/local/bin/ld.mold {} \; \
-exec ls -l {} +
popd
rm -rf mold
section_end mold


@@ -1,29 +0,0 @@
#!/usr/bin/env bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_GL_TAG
set -ex -o pipefail
uncollapsed_section_start ninetests "Building Nine tests"
### Be careful when editing anything below this line
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
git clone https://github.com/axeldavy/Xnine.git /Xnine
mkdir /Xnine/build
pushd /Xnine/build
git checkout c64753d224c08006bcdcfa7880ada826f27164b1
cmake .. -DBUILD_TESTS=1 -DWITH_DRI3=1 -DD3DADAPTER9_LOCATION=/install/lib/d3d/d3dadapter9.so
make
mkdir -p /NineTests/
mv NineTests/NineTests /NineTests/
popd
rm -rf /Xnine
section_end ninetests


@@ -1,38 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -uex
uncollapsed_section_start piglit "Building piglit"
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_GL_TAG
# DEBIAN_TEST_VK_TAG
# KERNEL_ROOTFS_TAG
REV="0ecdebb0f5927728ddeeb851639a559b0f7d6590"
git clone https://gitlab.freedesktop.org/mesa/piglit.git --single-branch --no-checkout /piglit
pushd /piglit
git checkout "$REV"
patch -p1 <$OLDPWD/.gitlab-ci/piglit/disable-vs_in.diff
cmake -S . -B . -G Ninja -DCMAKE_BUILD_TYPE=Release $PIGLIT_OPTS ${EXTRA_CMAKE_ARGS:-}
ninja ${PIGLIT_BUILD_TARGETS:-}
find . -depth \( -name .git -o -name '*ninja*' -o -iname '*cmake*' -o -name '*.[chao]' \) \
! -name 'include_test.h' -exec rm -rf {} \;
rm -rf target_api
if [ "${PIGLIT_BUILD_TARGETS:-}" = "piglit_replayer" ]; then
find . -depth \
! -regex "^\.$" \
! -regex "^\.\/piglit.*" \
! -regex "^\.\/framework.*" \
! -regex "^\.\/bin$" \
! -regex "^\.\/bin\/replayer\.py" \
! -regex "^\.\/templates.*" \
! -regex "^\.\/tests$" \
! -regex "^\.\/tests\/replay\.py" \
-exec rm -rf {} \; 2>/dev/null
fi
popd
section_end piglit


@@ -1,38 +0,0 @@
#!/bin/bash
# Note that this script is not actually "building" rust, but build- is the
# convention for the shared helpers for putting stuff in our containers.
set -ex
uncollapsed_section_start rust "Building Rust toolchain"
# Pick a specific snapshot from rustup so the compiler doesn't drift on us.
RUST_VERSION=1.78.0-2024-05-02
# For rust in Mesa, we use rustup to install. This lets us pick an arbitrary
# version of the compiler, rather than whatever the container's Debian comes
# with.
curl -L --retry 4 -f --retry-all-errors --retry-delay 60 \
--proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- \
--default-toolchain $RUST_VERSION \
--profile minimal \
-y
# Make rustup tools available in the PATH environment variable
# shellcheck disable=SC1091
. "$HOME/.cargo/env"
rustup component add clippy rustfmt
# Set up a config script for cross compiling -- cargo needs your system cc for
# linking in cross builds, but doesn't know what you want to use for system cc.
cat > "$HOME/.cargo/config" <<EOF
[target.armv7-unknown-linux-gnueabihf]
linker = "arm-linux-gnueabihf-gcc"
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
EOF
section_end rust


@@ -1,18 +0,0 @@
#!/usr/bin/env bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
set -ex
uncollapsed_section_start shader-db "Building shader-db"
pushd /usr/local
git clone https://gitlab.freedesktop.org/mesa/shader-db.git --depth 1
rm -rf shader-db/.git
cd shader-db
make
popd
section_end shader-db


@@ -1,104 +0,0 @@
#!/usr/bin/env bash
# SPDX-License-Identifier: MIT
#
# Copyright © 2022 Collabora Limited
# Author: Guilherme Gallo <guilherme.gallo@collabora.com>
#
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# KERNEL_ROOTFS_TAG
set -uex
uncollapsed_section_start skqp "Building skqp"
SKQP_BRANCH=android-cts-12.1_r5
SCRIPT_DIR="$(pwd)/.gitlab-ci/container"
SKQP_PATCH_DIR="${SCRIPT_DIR}/patches"
BASE_ARGS_GN_FILE="${SCRIPT_DIR}/build-skqp_base.gn"
case "$DEBIAN_ARCH" in
amd64)
SKQP_ARCH=x64
;;
armhf)
SKQP_ARCH=arm
;;
arm64)
SKQP_ARCH=arm64
;;
esac
SKIA_DIR=${SKIA_DIR:-$(mktemp -d)}
SKQP_OUT_DIR=${SKIA_DIR}/out/${SKQP_ARCH}
SKQP_INSTALL_DIR=${SKQP_INSTALL_DIR:-/skqp}
SKQP_ASSETS_DIR="${SKQP_INSTALL_DIR}/assets"
SKQP_BINARIES=(skqp list_gpu_unit_tests list_gms)
create_gn_args() {
# gn can be configured to cross-compile skia and its tools.
# It is important to set target_cpu to guarantee the build targets the
# intended machine.
cp "${BASE_ARGS_GN_FILE}" "${SKQP_OUT_DIR}"/args.gn
echo "target_cpu = \"${SKQP_ARCH}\"" >> "${SKQP_OUT_DIR}"/args.gn
}
download_skia_source() {
if [ -z ${SKIA_DIR+x} ]
then
return 1
fi
# Skia cloned from https://android.googlesource.com/platform/external/skqp
# has all needed assets tracked on git-fs
SKQP_REPO=https://android.googlesource.com/platform/external/skqp
git clone --branch "${SKQP_BRANCH}" --depth 1 "${SKQP_REPO}" "${SKIA_DIR}"
}
download_skia_source
pushd "${SKIA_DIR}"
# Apply all skqp patches for Mesa CI
cat "${SKQP_PATCH_DIR}"/build-skqp_*.patch |
patch -p1
# Hack for skqp: expose clang/clang++ from the versioned LLVM directory as /usr/bin/clang and /usr/bin/clang++
pushd /usr/bin/
ln -s "../lib/llvm-${LLVM_VERSION}/bin/clang" clang
ln -s "../lib/llvm-${LLVM_VERSION}/bin/clang++" clang++
popd
# Fetch the build tools needed to build skia/skqp.
# This clones the repositories at the commit SHAs listed in the
# ${SKIA_DIR}/DEPS file.
python tools/git-sync-deps
mkdir -p "${SKQP_OUT_DIR}"
mkdir -p "${SKQP_INSTALL_DIR}"
create_gn_args
# Build and install skqp binaries
bin/gn gen "${SKQP_OUT_DIR}"
for BINARY in "${SKQP_BINARIES[@]}"
do
/usr/bin/ninja -C "${SKQP_OUT_DIR}" "${BINARY}"
# Strip binary, since gn is not stripping it even when `is_debug == false`
${STRIP_CMD:-strip} "${SKQP_OUT_DIR}/${BINARY}"
install -m 0755 "${SKQP_OUT_DIR}/${BINARY}" "${SKQP_INSTALL_DIR}"
done
# Move assets to the target directory, which will reside in rootfs.
mv platform_tools/android/apps/skqp/src/main/assets/ "${SKQP_ASSETS_DIR}"
popd
rm -Rf "${SKIA_DIR}"
set +ex
section_end skqp


@@ -1,64 +0,0 @@
cc = "clang"
cxx = "clang++"
extra_cflags = [
"-Wno-error",
"-DSK_ENABLE_DUMP_GPU",
"-DSK_BUILD_FOR_SKQP"
]
extra_cflags_cc = [
"-Wno-error",
# The skqp build process produces a lot of compilation warnings; silence
# most of them to reduce clutter and keep the CI job log from exceeding
# its maximum size
# GCC flags
"-Wno-redundant-move",
"-Wno-suggest-override",
"-Wno-class-memaccess",
"-Wno-deprecated-copy",
"-Wno-uninitialized",
# Clang flags
"-Wno-macro-redefined",
"-Wno-anon-enum-enum-conversion",
"-Wno-suggest-destructor-override",
"-Wno-return-std-move-in-c++11",
"-Wno-extra-semi-stmt",
"-Wno-reserved-identifier",
"-Wno-bitwise-instead-of-logical",
"-Wno-reserved-identifier",
"-Wno-psabi",
"-Wno-unused-but-set-variable",
"-Wno-sizeof-array-div",
"-Wno-string-concatenation",
"-Wno-unsafe-buffer-usage",
"-Wno-switch-default",
"-Wno-cast-function-type-strict",
"-Wno-format",
"-Wno-enum-constexpr-conversion",
]
cc_wrapper = "ccache"
is_debug = false
skia_enable_fontmgr_android = false
skia_enable_fontmgr_empty = true
skia_enable_pdf = false
skia_enable_skottie = false
skia_skqp_global_error_tolerance = 8
skia_tools_require_resources = true
skia_use_dng_sdk = false
skia_use_expat = true
skia_use_icu = false
skia_use_libheif = false
skia_use_lua = false
skia_use_piex = false
skia_use_vulkan = true
target_os = "linux"


@@ -1,29 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# KERNEL_ROOTFS_TAG
set -uex
uncollapsed_section_start va-tools "Building va-tools"
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
git clone \
https://github.com/intel/libva-utils.git \
-b 2.18.1 \
--depth 1 \
/va-utils
pushd /va-utils
# libva in Debian 11 is too old. TODO: once the PR below is merged, refer to that patch.
curl --fail -L https://github.com/intel/libva-utils/pull/329.patch | git am
meson setup build -D tests=true -Dprefix=/va ${EXTRA_MESON_ARGS:-}
meson install -C build
popd
rm -rf /va-utils
section_end va-tools


@@ -1,50 +0,0 @@
#!/bin/bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_VK_TAG
set -ex
uncollapsed_section_start vkd3d-proton "Building vkd3d-proton"
VKD3D_PROTON_COMMIT="078f07f588c849c52fa21c8cfdd1c201465b1932"
VKD3D_PROTON_DST_DIR="/vkd3d-proton-tests"
VKD3D_PROTON_SRC_DIR="/vkd3d-proton-src"
VKD3D_PROTON_BUILD_DIR="/vkd3d-proton-build"
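# build_arch configures, builds, and installs the vkd3d-proton tests for one architecture, keeping the binaries in per-arch x64/x86 directories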
function build_arch {
local arch="$1"
meson setup \
-Denable_tests=true \
--buildtype release \
--prefix "$VKD3D_PROTON_DST_DIR" \
--strip \
--bindir "x${arch}" \
--libdir "x${arch}" \
"$VKD3D_PROTON_BUILD_DIR/build.${arch}"
ninja -C "$VKD3D_PROTON_BUILD_DIR/build.${arch}" install
install -D -m755 -t "${VKD3D_PROTON_DST_DIR}/x${arch}/bin" "$VKD3D_PROTON_BUILD_DIR/build.${arch}/tests/d3d12"
}
git clone https://github.com/HansKristian-Work/vkd3d-proton.git --single-branch -b master --no-checkout "$VKD3D_PROTON_SRC_DIR"
pushd "$VKD3D_PROTON_SRC_DIR"
git checkout "$VKD3D_PROTON_COMMIT"
git submodule update --init --recursive
git submodule update --recursive
build_arch 64
build_arch 86
mkdir "$VKD3D_PROTON_DST_DIR/tests"
cp \
"tests/test-runner.sh" \
"tests/d3d12_tests.h" \
"$VKD3D_PROTON_DST_DIR/tests/"
popd
rm -rf "$VKD3D_PROTON_BUILD_DIR"
rm -rf "$VKD3D_PROTON_SRC_DIR"
section_end vkd3d-proton


@@ -1,25 +0,0 @@
#!/usr/bin/env bash
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_TEST_GL_TAG
# KERNEL_ROOTFS_TAG
set -uex
uncollapsed_section_start vulkan-validation "Building Vulkan validation layers"
VALIDATION_TAG="snapshot-2025wk15"
git clone -b "$VALIDATION_TAG" --single-branch --depth 1 https://github.com/KhronosGroup/Vulkan-ValidationLayers.git
pushd Vulkan-ValidationLayers
# We don't need to build the SPIRV-Tools executables
sed -i scripts/known_good.json -e 's/SPIRV_SKIP_EXECUTABLES=OFF/SPIRV_SKIP_EXECUTABLES=ON/'
python3 scripts/update_deps.py --dir external --config release --generator Ninja --optional tests
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr -DBUILD_TESTS=OFF -DBUILD_WERROR=OFF -C external/helper.cmake -S . -B build
ninja -C build -j"${FDO_CI_CONCURRENT:-4}"
cmake --install build --strip
popd
rm -rf Vulkan-ValidationLayers
section_end vulkan-validation


@@ -1,38 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
set -uex
uncollapsed_section_start wayland "Building Wayland"
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# ALPINE_X86_64_BUILD_TAG
# DEBIAN_BASE_TAG
# DEBIAN_BUILD_TAG
# DEBIAN_TEST_ANDROID_TAG
# DEBIAN_TEST_GL_TAG
# DEBIAN_TEST_VK_TAG
# FEDORA_X86_64_BUILD_TAG
# KERNEL_ROOTFS_TAG
export LIBWAYLAND_VERSION="1.21.0"
export WAYLAND_PROTOCOLS_VERSION="1.41"
git clone https://gitlab.freedesktop.org/wayland/wayland
cd wayland
git checkout "$LIBWAYLAND_VERSION"
meson setup -Ddocumentation=false -Ddtd_validation=false -Dlibraries=true _build ${EXTRA_MESON_ARGS:-}
meson install -C _build
cd ..
rm -rf wayland
git clone https://gitlab.freedesktop.org/wayland/wayland-protocols
cd wayland-protocols
git checkout "$WAYLAND_PROTOCOLS_VERSION"
meson setup -Dtests=false _build ${EXTRA_MESON_ARGS:-}
meson install -C _build
cd ..
rm -rf wayland-protocols
section_end wayland


@@ -1,24 +0,0 @@
#!/usr/bin/env bash
# When changing this file, all the linux tags in
# .gitlab-ci/image-tags.yml need updating.
set -eu
# Early check for required env variables, relies on `set -u`
: "$S3_JWT_FILE_SCRIPT"
if [ -z "$1" ]; then
echo "usage: $(basename "$0") <CONTAINER_CI_JOB_NAME>" 1>&2
exit 1
fi
CONTAINER_CI_JOB_NAME="$1"
# Tasks to perform before executing the script of a container job
eval "$S3_JWT_FILE_SCRIPT"
unset S3_JWT_FILE_SCRIPT
trap 'rm -f ${S3_JWT_FILE}' EXIT INT TERM
bash ".gitlab-ci/container/${CONTAINER_CI_JOB_NAME}.sh"


@@ -1,12 +0,0 @@
#!/usr/bin/env bash
if test -f /etc/debian_version; then
apt-get autoremove -y --purge
fi
# Clean up any build cache
rm -rf /root/.cache
if test -x /usr/bin/ccache; then
ccache --show-stats
fi


@@ -1,74 +0,0 @@
#!/bin/sh
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
if test -x /usr/bin/ccache; then
if test -f /etc/debian_version; then
CCACHE_PATH=/usr/lib/ccache
elif test -f /etc/alpine-release; then
CCACHE_PATH=/usr/lib/ccache/bin
else
CCACHE_PATH=/usr/lib64/ccache
fi
# Common setup among container builds before we get to building code.
export CCACHE_COMPILERCHECK=content
export CCACHE_COMPRESS=true
export CCACHE_DIR="/cache/$CI_PROJECT_NAME/ccache"
export PATH="$CCACHE_PATH:$PATH"
# CMake ignores $PATH, so we have to force CC/CXX to the ccache versions.
export CC="${CCACHE_PATH}/gcc"
export CXX="${CCACHE_PATH}/g++"
ccache --show-stats
fi
# Make a wrapper script for ninja that always includes the -j flag
{
echo '#!/bin/sh -x'
# shellcheck disable=SC2016
echo '/usr/bin/ninja -j${FDO_CI_CONCURRENT:-4} "$@"'
} > /usr/local/bin/ninja
chmod +x /usr/local/bin/ninja
# Set MAKEFLAGS so that all make invocations in container builds include the
# flags (doesn't apply to non-container builds, but we don't run make there)
export MAKEFLAGS="-j${FDO_CI_CONCURRENT:-4}"
# Make wget retry more than once when a download fails or times out
echo -e "retry_connrefused = on\n" \
"read_timeout = 300\n" \
"tries = 4\n" \
"retry_on_host_error = on\n" \
"retry_on_http_error = 429,500,502,503,504\n" \
"wait_retry = 32" >> /etc/wgetrc
# Ensure that rust tools are in PATH if they exist
CARGO_ENV_FILE="$HOME/.cargo/env"
if [ -f "$CARGO_ENV_FILE" ]; then
# shellcheck disable=SC1090
source "$CARGO_ENV_FILE"
fi
ci_tag_early_checks() {
# Runs the first part of the build script to perform the tag check only
uncollapsed_section_switch "ci_tag_early_checks" "Ensuring component versions match declared tags in CI builds"
echo "[Structured Tagging] Checking components: ${CI_BUILD_COMPONENTS}"
# shellcheck disable=SC2086
for component in ${CI_BUILD_COMPONENTS}; do
bin/ci/update_tag.py --check ${component} || exit 1
done
echo "[Structured Tagging] Components check done"
section_end "ci_tag_early_checks"
}
# Check if each declared tag component is up to date before building
if [ -n "${CI_BUILD_COMPONENTS:-}" ]; then
# Remove any duplicates by splitting on whitespace, sorting, then joining back
CI_BUILD_COMPONENTS="$(echo "${CI_BUILD_COMPONENTS}" | xargs -n1 | sort -u | xargs)"
ci_tag_early_checks
fi


@@ -1,37 +0,0 @@
#!/bin/bash
ndk=$1
arch=$2
cpu_family=$3
cpu=$4
cross_file="/cross_file-$arch.txt"
sdk_version=$5
# armv7 has the toolchain split between two names.
arch2=${6:-$2}
# Note that we disable C++ exceptions, because Mesa doesn't use exceptions,
# and allowing them in code generation means we get unwind symbols that break
# the libEGL and driver symbol tests.
cat > "$cross_file" <<EOF
[binaries]
ar = '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-ar'
c = ['ccache', '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch2}${sdk_version}-clang', '-fno-exceptions', '-fno-unwind-tables', '-fno-asynchronous-unwind-tables']
cpp = ['ccache', '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch2}${sdk_version}-clang++', '-fno-exceptions', '-fno-unwind-tables', '-fno-asynchronous-unwind-tables', '--start-no-unused-arguments', '-static-libstdc++', '--end-no-unused-arguments']
c_ld = 'lld'
cpp_ld = 'lld'
strip = '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-strip'
pkg-config = ['/usr/bin/pkgconf']
[host_machine]
system = 'android'
cpu_family = '$cpu_family'
cpu = '$cpu'
endian = 'little'
[properties]
needs_exe_wrapper = true
pkg_config_libdir = '/usr/local/lib/${arch2}/pkgconfig/:/${ndk}/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/lib/${arch2}/pkgconfig/'
EOF


@@ -1,40 +0,0 @@
#!/bin/sh
# shellcheck disable=SC2086 # we want word splitting
# Makes a .pc file in the Android NDK for meson to find its libraries.
set -ex
ndk="$1"
pc="$2"
cflags="$3"
libs="$4"
version="$5"
sdk_version="$6"
sysroot=$ndk/toolchains/llvm/prebuilt/linux-x86_64/sysroot
for arch in \
x86_64-linux-android \
i686-linux-android \
aarch64-linux-android \
arm-linux-androideabi; do
pcdir=$sysroot/usr/lib/$arch/pkgconfig
mkdir -p $pcdir
cat >$pcdir/$pc <<EOF
prefix=$sysroot
exec_prefix=$sysroot
libdir=$sysroot/usr/lib/$arch/$sdk_version
sharedlibdir=$sysroot/usr/lib/$arch
includedir=$sysroot/usr/include
Name: zlib
Description: zlib compression library
Version: $version
Requires:
Libs: -L$sysroot/usr/lib/$arch/$sdk_version $libs
Cflags: -I$sysroot/usr/include $cflags
EOF
done


@@ -1,54 +0,0 @@
#!/bin/bash
arch=$1
cross_file="/cross_file-$arch.txt"
meson env2mfile --cross --debarch "$arch" -o "$cross_file"
# Explicitly set ccache path for cross compilers
sed -i "s|/usr/bin/\([^-]*\)-linux-gnu\([^-]*\)-g|/usr/lib/ccache/\\1-linux-gnu\\2-g|g" "$cross_file"
# Rely on qemu-user being configured in binfmt_misc on the host
# shellcheck disable=SC1003 # somehow this sed doesn't seem to work for me locally
sed -i -e '/\[properties\]/a\' -e "needs_exe_wrapper = False" "$cross_file"
# Add a line for rustc, which meson env2mfile is missing.
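# Extract the C compiler from the generated cross file so rustc can use it as its linker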
cc=$(sed -n "s|^c\s*=\s*\[?'\(.*\)'\]?|\1|p" < "$cross_file")
if [[ "$arch" = "arm64" ]]; then
rust_target=aarch64-unknown-linux-gnu
elif [[ "$arch" = "armhf" ]]; then
rust_target=armv7-unknown-linux-gnueabihf
elif [[ "$arch" = "i386" ]]; then
rust_target=i686-unknown-linux-gnu
elif [[ "$arch" = "ppc64el" ]]; then
rust_target=powerpc64le-unknown-linux-gnu
elif [[ "$arch" = "s390x" ]]; then
rust_target=s390x-unknown-linux-gnu
else
echo "Needs rustc target mapping"
fi
# shellcheck disable=SC1003 # somehow this sed doesn't seem to work for me locally
sed -i -e '/\[binaries\]/a\' -e "rust = ['rustc', '--target=$rust_target', '-C', 'linker=$cc']" "$cross_file"
# Set up cmake cross compile toolchain file for dEQP builds
toolchain_file="/toolchain-$arch.cmake"
if [[ "$arch" = "arm64" ]]; then
GCC_ARCH="aarch64-linux-gnu"
DE_CPU="DE_CPU_ARM_64"
elif [[ "$arch" = "armhf" ]]; then
GCC_ARCH="arm-linux-gnueabihf"
DE_CPU="DE_CPU_ARM"
fi
if [[ -n "$GCC_ARCH" ]]; then
{
echo "set(CMAKE_SYSTEM_NAME Linux)";
echo "set(CMAKE_SYSTEM_PROCESSOR arm)";
echo "set(CMAKE_C_COMPILER /usr/lib/ccache/$GCC_ARCH-gcc)";
echo "set(CMAKE_CXX_COMPILER /usr/lib/ccache/$GCC_ARCH-g++)";
echo "set(CMAKE_CXX_FLAGS_INIT \"-Wno-psabi\")"; # makes ABI warnings quiet for ARMv7
echo "set(ENV{PKG_CONFIG} \"/usr/bin/$GCC_ARCH-pkgconf\")";
echo "set(DE_CPU $DE_CPU)";
} > "$toolchain_file"
fi


@@ -1,95 +0,0 @@
#!/usr/bin/env bash
# shellcheck disable=SC2086 # we want word splitting
set -e
. .gitlab-ci/setup-test-env.sh
set -o xtrace
export DEBIAN_FRONTEND=noninteractive
: "${LLVM_VERSION:?llvm version not set!}"
# Ephemeral packages (installed for this script and removed again at the end)
EPHEMERAL=(
)
DEPS=(
"crossbuild-essential-$arch"
"pkgconf:$arch"
"libasan8:$arch"
"libdrm-dev:$arch"
"libelf-dev:$arch"
"libexpat1-dev:$arch"
"libffi-dev:$arch"
"libpciaccess-dev:$arch"
"libstdc++6:$arch"
"libvulkan-dev:$arch"
"libx11-dev:$arch"
"libx11-xcb-dev:$arch"
"libxcb-dri2-0-dev:$arch"
"libxcb-dri3-dev:$arch"
"libxcb-glx0-dev:$arch"
"libxcb-present-dev:$arch"
"libxcb-randr0-dev:$arch"
"libxcb-shm0-dev:$arch"
"libxcb-xfixes0-dev:$arch"
"libxdamage-dev:$arch"
"libxext-dev:$arch"
"libxrandr-dev:$arch"
"libxshmfence-dev:$arch"
"libxxf86vm-dev:$arch"
"libwayland-dev:$arch"
)
dpkg --add-architecture $arch
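# Enable the pinned gfx-ci Debian package repository that provides the cross-arch dependencies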
echo "deb [trusted=yes] https://gitlab.freedesktop.org/gfx-ci/ci-deb-repo/-/raw/${PKG_REPO_REV}/ ${FDO_DISTRIBUTION_VERSION%-*} main" | tee /etc/apt/sources.list.d/gfx-ci_.list
apt-get update
apt-get install -y --no-remove "${DEPS[@]}" "${EPHEMERAL[@]}" \
$EXTRA_LOCAL_PACKAGES
if [[ $arch != "armhf" ]]; then
# We don't need clang-format for the crossbuilds, but the installed amd64
# package will conflict with libclang. Uninstall clang-format (and its
# problematic dependency) to fix.
apt-get remove -y "clang-format-${LLVM_VERSION}" "libclang-cpp${LLVM_VERSION}" \
"llvm-${LLVM_VERSION}-runtime" "llvm-${LLVM_VERSION}-linker-tools"
# llvm-*-tools:$arch conflicts with python3:amd64. Install dependencies only
# with apt-get, then force-install llvm-*-{dev,tools}:$arch with dpkg to get
# around this.
apt-get install -y --no-remove --no-install-recommends \
"libclang-cpp${LLVM_VERSION}:$arch" \
"libgcc-s1:$arch" \
"libtinfo-dev:$arch" \
"libz3-dev:$arch" \
"llvm-${LLVM_VERSION}:$arch" \
zlib1g
fi
. .gitlab-ci/container/create-cross-file.sh $arch
. .gitlab-ci/container/container_pre_build.sh
# dependencies where we want a specific version
MULTIARCH_PATH=$(dpkg-architecture -A $arch -qDEB_TARGET_MULTIARCH)
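# Point meson at the cross file and the Debian multiarch libdir so the cross-built dependencies below install where the target toolchain expects them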
export EXTRA_MESON_ARGS="--cross-file=/cross_file-${arch}.txt -D libdir=lib/${MULTIARCH_PATH}"
. .gitlab-ci/container/build-wayland.sh
. .gitlab-ci/container/build-directx-headers.sh
apt-get purge -y "${EPHEMERAL[@]}"
. .gitlab-ci/container/container_post_build.sh
# This needs to be done after container_post_build.sh, or apt-get breaks in there
if [[ $arch != "armhf" ]]; then
apt-get download llvm-"${LLVM_VERSION}"-{dev,tools}:"$arch"
dpkg -i --force-depends llvm-"${LLVM_VERSION}"-*_"${arch}".deb
rm llvm-"${LLVM_VERSION}"-*_"${arch}".deb
fi

Some files were not shown because too many files have changed in this diff.