Compare commits

126 Commits

Author SHA1 Message Date
Ian Romanick
e5fdeef1e0 mesa: Bump version number to 9.0 (final)
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
2012-10-08 14:58:35 -07:00
Tom Stellard
a8d0652c04 configure.ac: Don't link gallium drivers with libdricore
Reviewed-by: Matt Turner <mattst88@gmail.com>
(cherry picked from commit d68e337c60)
2012-10-08 10:27:40 -07:00
Anuj Phogat
ad4b3b93de _mesa_meta_GenerateMipmap: Support all texture targets by generating shaders at runtime
This is a squash for the following 7 commits.  The first introduces the
functionality, and the remaining six fix various bugs.

Patch 1:
    _mesa_meta_GenerateMipmap: Support all texture targets by generating shaders at runtime

    The GLSL path of the _mesa_meta_GenerateMipmap() function requires different
    fragment shaders depending on the texture target. This patch adds the code to
    generate the appropriate fragment shader programs at run time.
    Fixes https://bugs.freedesktop.org/show_bug.cgi?id=54296

    V2: Removed the code for integer textures as ARB is planning to
        disallow automatic mipmap generation for integer textures.
        Now using ralloc_asprintf in setup_glsl_generate_mipmap().

    NOTE: This is a candidate for stable branches.

    Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
    Reviewed-by: Brian Paul <brianp@vmware.com>
    (cherry picked from commit 299acac849)
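
    A minimal sketch of the run-time generation approach in Patch 1; the
    function name, the shader text, and the use of plain asprintf() in place
    of Mesa's ralloc_asprintf() are illustrative assumptions, not the actual
    meta code:

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <stdlib.h>

        /* Build a target-specific fragment shader string at run time.
         * asprintf() stands in for Mesa's ralloc_asprintf(); the shader
         * text and parameter names are illustrative only. */
        char *
        make_mipmap_fs(const char *sampler_type, const char *lookup,
                       const char *coord)
        {
           char *src = NULL;
           if (asprintf(&src,
                        "uniform %s texSampler;\n"
                        "varying vec4 texCoords;\n"
                        "void main()\n"
                        "{\n"
                        "   gl_FragColor = %s(texSampler, %s);\n"
                        "}\n",
                        sampler_type, lookup, coord) < 0)
              return NULL;
           return src;   /* e.g. ("sampler2D", "texture2D", "texCoords.xy") */
        }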

Patch 2:
    _mesa_meta_GenerateMipmap: Generate separate shaders for glsl 120 / 130

    The GLSL version of _mesa_meta_GenerateMipmap() requires separate
    shaders for GLSL 1.20 and 1.30.

    V2: Removed the code for integer textures as ARB is planning to
        disallow automatic mipmap generation for integer textures.

    NOTE: This is a candidate for stable branches.

    Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
    Reviewed-by: Brian Paul <brianp@vmware.com>
    (cherry picked from commit 15bf3103b4)

Patch 3:
    meta: Add on demand compilation of per target shader programs

    A call to glGenerateMipmap() follows the generation of a relevant
    shader program in setup_glsl_generate_mipmap().

    To support all texture targets and to avoid compiling shaders
    every time, per-target shader programs are compiled on demand
    and saved for the next call.

    Fixes float-texture(mipmap.manual):
    See Comment 6: https://bugs.freedesktop.org/show_bug.cgi?id=54296

    NOTE: This is a candidate for stable branches.

    Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
    Reviewed-by: Brian Paul <brianp@vmware.com>
    (cherry picked from commit eb1d87fb94)
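
    A minimal sketch of the on-demand, per-target caching from Patch 3;
    tex_target_index() and compile_mipmap_program() are hypothetical
    stand-ins for the real meta helpers:

        /* On-demand, per-target program cache. */
        enum { NUM_TARGETS = 7 };                  /* 1D, 2D, 3D, cube, ... */
        static unsigned mipmap_prog[NUM_TARGETS];  /* 0 = not compiled yet */

        extern unsigned tex_target_index(unsigned gl_target);
        extern unsigned compile_mipmap_program(unsigned gl_target);

        unsigned
        get_mipmap_program(unsigned gl_target)
        {
           unsigned i = tex_target_index(gl_target);
           if (mipmap_prog[i] == 0)
              mipmap_prog[i] = compile_mipmap_program(gl_target);
           return mipmap_prog[i];
        }

    The cache persists across calls, so each target's program is compiled at
    most once.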

Patch 4:
    meta: make mem_ctx non-global.

    I can't see any external users, and this is a global symbol, so make it
    non-global.

    Reviewed-by: Matt Turner <mattst88@gmail.com>
    Signed-off-by: Dave Airlie <airlied@redhat.com>
    (cherry picked from commit 36639ec6e9)

Patch 5:
    meta: Remove unsafe global mem_ctx pointer

    NOTE: This is a candidate for the 9.0 branch.

    Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
    Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
    Reviewed-by: Brian Paul <brianp@vmware.com>
    Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
    (cherry picked from commit ab097dde0c)

Patch 6:
    meta: Rearrange shader creation in setup_glsl_generate_mipmap

    The diff looks weird, but this moves the code from the first 'if
    (ctx->Const.GLSLVersion < 130)' block down into the second block.  It
    also moves some variable declarations closer to their use.

    NOTE: This is a candidate for the 9.0 branch.

    Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
    Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
    Reviewed-by: Brian Paul <brianp@vmware.com>
    Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
    (cherry picked from commit 3308c079bd)

Patch 7:
    meta: Don't use GLSL 1.30 shader on OpenGL ES 2

    Fixes GLES2 CoverageGL conformance test.

    NOTE: This is a candidate for the 9.0 branch.

    Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
    Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
    Reviewed-by: Brian Paul <brianp@vmware.com>
    Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
    (cherry picked from commit 0242381f06)
2012-10-07 20:38:14 -07:00
Marek Olšák
7851d398de r600g: fix possible issue with stencil mipmap rendering
Somehow I only hit this issue with my latest libdrm changes.
This won't be needed with DB texturing.

NOTE: This is a candidate for the 9.0 branch.
(cherry picked from commit 9dfca930d7)
2012-10-06 05:40:09 +02:00
Brian Paul
19a15cd5ba mesa: remove bogus compressed texture size checks
A compressed texture image size doesn't have to be a multiple of the
compressed block size (only sub-images do).  Fixes issues when building
compressed mipmaps because we often wind up with non-block-size images
for the higher mipmap levels.
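
A short worked example, assuming DXT1's 4x4 blocks of 8 bytes: the 2x2 and
1x1 levels are not block-size multiples, yet each still occupies exactly one
block, so they are valid image sizes:

    #include <stdio.h>

    static unsigned
    dxt1_size(unsigned w, unsigned h)
    {
       return ((w + 3) / 4) * ((h + 3) / 4) * 8;   /* round up to blocks */
    }

    int main(void)
    {
       for (unsigned w = 16; w >= 1; w /= 2)
          printf("%ux%u -> %u bytes\n", w, w, dxt1_size(w, w));
       return 0;   /* 16x16=128, 8x8=32, 4x4=8, 2x2=8, 1x1=8 */
    }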

Fixes https://bugs.freedesktop.org/show_bug.cgi?id=55445

Note: This is a candidate for the stable branches.

Reviewed-by: Eric Anholt <eric@anholt.net>
Tested-by: Sven Arvidsson <sa@whiz.se>
(cherry picked from commit df4a88ac43)
2012-10-05 15:55:47 -07:00
Anuj Phogat
c566267f5c intel/i965: Disable SampleAlphaToOne if dual source blending enabled
From SandyBridge PRM, volume 2 Part 1, section 12.2.3, BLEND_STATE:
DWord 1, Bit 30 (AlphaToOne Enable):
"If Dual Source Blending is enabled, this bit must be disabled"

Note: This is a candidate for stable branches.

Signed-off-by: Anuj Phogat <anuj.phogat@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit ea0d088727)
2012-10-05 15:55:47 -07:00
Kenneth Graunke
8491e03b2b mesa: Flag _NEW_VARYING_VP_INPUTS when TexEnv programs are active.
The idea here is to not flag _NEW_VARYING_VP_INPUTS when shaders (either
GLSL or ARB vp/fp) are in use.  If either TNL or TexEnv programs are
active, at least one stage is using fixed function.

On Pineview, fixes 20 Piglit, 60 oglconforms, and 7 ES 1.1 conformance
tests, as well as missing textures in Xonotic.  These were all
regressions since commit fb4a34e60e.

NOTE: This is a candidate for the 9.0 branch.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=49127
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=54807
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 7fa0f10cd8)
2012-10-05 15:55:47 -07:00
Paul Berry
78c9adb17e mesa: don't enable glVertexPointer() when using API_OPENGLES2.
This function is only present in GLES1 and in the OpenGL compatibility
profile.

Fixes the following "make check" failure:

    [----------] 1 test from DispatchSanity_test
    [ RUN      ] DispatchSanity_test.GLES2
    Mesa warning: couldn't open libtxc_dxtn.so, software DXTn
    compression/decompression unavailable
    dispatch_sanity.cpp:122: Failure
    Value of: table[i]
       Actual: 0x4de54e
    Expected: (_glapi_proc) _mesa_generic_nop
    Which is: 0x41af72
    i = 321
    [  FAILED  ] DispatchSanity_test.GLES2 (4 ms)
    [----------] 1 test from DispatchSanity_test (4 ms total)

NOTE: This is a candidate for stable release branches.

Reviewed-by: Oliver McFadden <oliver.mcfadden@linux.intel.com>
Tested-by: Oliver McFadden <oliver.mcfadden@linux.intel.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 8f0b81bf7d)
2012-10-05 15:55:47 -07:00
Robert Bragg
e1cb50b15d SwapBuffersRegionNOK: invert rectangles on y axis
The EGL_NOK_swap_region2 spec states that the rectangles are specified
with a bottom-left origin within a surface coordinate space also with a
bottom left origin, so this patch ensures the rectangles are flipped
before passing them on to dri2_copy_region.
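
The flip amounts to converting each rectangle's bottom-left y into a top-left
y; a minimal sketch with an illustrative rect layout, not the actual egl_dri2
code:

    struct rect { int x, y, w, h; };   /* y has a bottom-left origin */

    /* Convert each rectangle to a top-left origin before handing it on. */
    void
    flip_rects(struct rect *r, int count, int surface_height)
    {
       for (int i = 0; i < count; i++)
          r[i].y = surface_height - r[i].y - r[i].h;
    }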

Fixes piglit's egl-nok-swap-region test.

Tested-by: Matt Turner <mattst88@gmail.com>
(cherry picked from commit 0a523a8820)
2012-10-05 15:03:56 -07:00
Matt Turner
c8fda5f44a build: Set PTHREAD_LIBS for pkgconfig files if empty
(cherry picked from dd4fde8f67)
2012-10-05 17:30:11 +02:00
Tom Stellard
542f6feda9 radeon/llvm: Remove R600InstrInfo.td from TD_FILES
Fixes a build bug introduced by cebbdd4ac2.
(cherry picked from commit 2baaa5c7eb)
2012-10-05 15:45:43 +02:00
Tom Stellard
71b5503164 radeon/llvm: Cleanup makefile
Hopefully, this will fix all the parallel make problems people have
been having.
(cherry picked from commit cebbdd4ac2)
2012-10-04 10:10:37 -04:00
Eric Anholt
b2048c5e90 i965: Use visibility cflags on the driver code.
(cherry picked from commit 837f06b42f)

The only symbols that need to be public (those in intel_screen.c that the
loader looks for) are already marked public.  Saves 100k of compiled driver
size.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
2012-10-03 15:24:19 -07:00
Matt Turner
ddb9ecca3b build: Don't build libdricore if not building classic drivers
(cherry picked from commit 523c015246)
2012-10-03 15:24:18 -07:00
Matt Turner
76732c9ca5 build: Add visibility CFLAGS to OSMesa
(cherry picked from commit 24ded89876)
2012-10-03 15:24:18 -07:00
Matt Turner
c8669c7ba7 build: Link OSMesa with glapi, libdl, libstdc++
(cherry picked from commit 1762ec28db)

Bugzilla: https://bugs.gentoo.org/show_bug.cgi?id=399813
          https://bugs.freedesktop.org/show_bug.cgi?id=53179
2012-10-03 15:24:11 -07:00
Matt Turner
f9a8673a7a build: Set visibility CFLAGS in dri/swrast
(cherry picked from commit 4cfff7211c)
2012-10-03 14:00:52 -07:00
Matt Turner
f88046a1f0 build: Set visibility CFLAGS in dri/r200
(cherry picked from commit 3628402707)
2012-10-03 14:00:46 -07:00
Matt Turner
8ebcf34d87 build: Set visibility CFLAGS in dri/radeon
(cherry picked from commit 55d45efdd8)
2012-10-03 14:00:40 -07:00
Matt Turner
3b794e4a56 build: Set visibility CFLAGS in dri/nouveau
(cherry picked from commit 340637d54d)
2012-10-03 14:00:32 -07:00
Matt Turner
0470fa395f build: Set visibility CFLAGS in dri/i915
(cherry picked from commit 381d120b8a)
2012-10-03 14:00:26 -07:00
Matt Turner
6512610f9a build: Set visibility CFLAGS in dri/common
(cherry picked from commit d2872b5612)
2012-10-03 14:00:14 -07:00
Matt Turner
03cfc8d660 build: Build src/glsl with visibility CFLAGS
(cherry picked from commit 8746f641bb)
2012-10-03 13:59:52 -07:00
Matt Turner
a1f1add42d build: Turn on visibility CFLAGS for core mesa
(cherry picked from commit 710a90ccaf)
2012-10-03 13:59:43 -07:00
Matt Turner
a2f28ceea2 build: Use AX_PTHREAD's HAVE_PTHREAD preprocessor definition
(cherry picked from commit 814345f54b)

Conflicts:

	src/mapi/glapi/gen/gl_x86-64_asm.py
	src/mapi/glapi/gen/gl_x86_asm.py
2012-10-03 13:59:08 -07:00
Matt Turner
421dda800d build: Use PTHREAD_LIBS and PTHREAD_CFLAGS
(cherry picked from commit b6651ae6ad)

Conflicts:

	src/mesa/main/tests/Makefile.am
2012-10-03 13:56:24 -07:00
Matt Turner
89e76252ca dri drivers: Link dricommon before dynamic libraries
I think libtool should be handling this for us, but the build fails for
Jordan because libdricommon (a static library, which uses expat) appears
before -lexpat on the linker command line.

Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Tested-by: Jordan Justen <jordan.l.justen@intel.com>
(cherry picked from commit 31ab61cac1)

Conflicts:

	src/mesa/drivers/dri/i965/Makefile.am
2012-10-03 13:46:03 -07:00
Oliver McFadden
f2b4f588f5 Revert "i965: Implement guardband clipping on Ivybridge."
This reverts commit 610910a66d.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=55523
Signed-off-by: Oliver McFadden <oliver.mcfadden@linux.intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2012-10-03 13:32:32 +03:00
Oliver McFadden
dbe13c105f Revert "i965: Implement guardband clipping on Sandybridge."
This reverts commit 85cd30406f.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=55523
Signed-off-by: Oliver McFadden <oliver.mcfadden@linux.intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2012-10-03 13:32:21 +03:00
Matt Turner
9fc4a39dbf build: Remove autoconf check for signbit
rebase failure in 7da12426f7.
(cherry picked from commit 159ca32fec)
2012-10-02 22:52:42 -07:00
Brian Paul
604cd6b966 mesa: fix glCompressedTexSubImage assertion/segfault
If the destination texture image doesn't exist we'd hit an assertion
(or crash in a release build).  The piglit/s3tc-errors test hits this.
This has already been fixed in master by the error checking code
consolidation.

Note: This is a candidate for the 8.0 branch.
2012-10-01 08:24:36 -06:00
Brian Paul
e642d61d13 scons: add new -p (prefix) options for yacc
These were recently added to the Makefiles.
(cherry picked from commit e78ebbc5f9)
2012-09-30 11:43:58 -07:00
Marek Olšák
d9197f9037 r600g: fix EXP on Cayman
NOTE: This is a candidate for the stable branches.
(cherry picked from commit 96f50d0cf7)
2012-09-30 05:31:49 +02:00
Marek Olšák
fc62ee7e0d r600g: fix RSQ of negative value on Cayman
NOTE: This is a candidate for the stable branches.
(cherry picked from commit fd5c538464)
2012-09-30 05:31:42 +02:00
Marek Olšák
50ba62e231 r600g: fix instance divisor on Cayman
Not sure if this is the best way to fix it.

NOTE: This is a candidate for the stable branches.
(cherry picked from commit 836325bf7e)
2012-09-30 05:31:35 +02:00
Kenneth Graunke
549129838c meta: Use float for temporary images, not (un)signed normalized.
In commit 091eb15b69, Jordan changed get_temp_image_type() to use
_mesa_get_format_datatype() instead of returning GL_FLOAT.  That has
several possible return values: GL_FLOAT, GL_INT, GL_UNSIGNED_INT,
GL_SIGNED_NORMALIZED, and GL_UNSIGNED_NORMALIZED.

We do want to use GL_INT/GL_UNSIGNED_INT for integer formats.  However,
we want to continue using GL_FLOAT for the normalized fixed-point types.
There isn't any code in pack.c to handle GL_(UN)SIGNED_NORMALIZED.
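
In effect the fix keeps GL_FLOAT for the normalized formats and only lets the
integer types through; a hedged sketch of that mapping, with an illustrative
function name:

    #include <GL/gl.h>

    /* Integer formats keep an integer type; everything normalized falls
     * back to GL_FLOAT, since pack.c has no (un)signed-normalized paths. */
    GLenum
    temp_image_type(GLenum datatype)
    {
       switch (datatype) {
       case GL_INT:
       case GL_UNSIGNED_INT:
          return datatype;
       default:   /* GL_FLOAT and the (un)signed normalized types */
          return GL_FLOAT;
       }
    }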

Fixes oglconform's fboarb advanced.blit.copypix, which was regressed by
commit 091eb15b69.

NOTE: This is a candidate for the 9.0 branch.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=53573
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 3767b25bd3)
2012-09-28 15:56:11 -07:00
Eric Anholt
0586a94929 i965: Remove broken non-interleaved-to-interleaved upload code.
This failed when all the uploads to occur were uniform-type vertex data (like
glColor4f being active across a DrawArrays), because it would upload 1 element
instead of 1 element per vertex.  There was no citation for how this code
helped any particular application, and it breaks ETQW, so just remove it.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=47170
NOTE: This is a candidate for the 9.0 and 8.0 branches.
Reviewed-and-tested-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 0334e8dc25)
2012-09-28 15:55:28 -07:00
Kenneth Graunke
fdabc7d9f6 meta: Don't _mesa_set_enable() invalid targets in ES 1.
GL_TEXTURE_1D, GL_TEXTURE_3D, GL_TEXTURE_RECTANGLE, and
GL_TEXTURE_GEN_S/T/R/Q don't exist in ES 1 contexts, so any meta ops
that used _mesa_meta_begin with MESA_META_TEXTURE would trigger GL
errors.  One such operation is _mesa_meta_Clear().

On ES 1, we want to disable GL_TEXTURE_GEN_STR_OES instead.
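
A hedged sketch of the API check; the context type and set_enable() are
stand-ins for Mesa's gl_context and _mesa_set_enable(), and only a few
targets are shown:

    #include <GL/gl.h>

    #ifndef GL_TEXTURE_GEN_STR_OES
    #define GL_TEXTURE_GEN_STR_OES 0x8D60   /* from GL_OES_texture_cube_map */
    #endif

    enum api { API_OPENGL, API_OPENGLES, API_OPENGLES2 };
    struct ctx_stub { enum api API; };

    extern void set_enable(struct ctx_stub *ctx, GLenum cap, GLboolean state);

    void
    disable_meta_texture_state(struct ctx_stub *ctx)
    {
       set_enable(ctx, GL_TEXTURE_2D, GL_FALSE);
       if (ctx->API == API_OPENGLES) {
          /* 1D/3D/rectangle and GL_TEXTURE_GEN_S/T/R/Q don't exist here */
          set_enable(ctx, GL_TEXTURE_GEN_STR_OES, GL_FALSE);
       } else if (ctx->API == API_OPENGL) {
          set_enable(ctx, GL_TEXTURE_1D, GL_FALSE);
          set_enable(ctx, GL_TEXTURE_3D, GL_FALSE);
          set_enable(ctx, GL_TEXTURE_GEN_S, GL_FALSE);
       }
    }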

Fixes the ES1 conformance test miplin.c, which was regressed by commit
08be1d288f.

NOTE: This is a candidate for the 9.0 branch.

v2: Also blacklist GL_TEXTURE_3D, per Brian's comment.
v3: Disable GL_TEXTURE_GEN_STR_OES, per Ian's comment.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=54297
Reviewed-by: Brian Paul <brianp@vmware.com> [v1]
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 679c93ff89)
2012-09-28 15:54:18 -07:00
Matt Turner
cb84fe5e10 build: Link libglapi with pthreads
NOTE: This is a candidate for the 9.0 branch.

Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=839060
          https://bugs.gentoo.org/show_bug.cgi?id=435152
Reviewed-by: Adam Jackson <ajax@redhat.com>
(cherry picked from commit 9ed00075d8)
2012-09-28 15:48:30 -07:00
Matt Turner
cc18cc282f build: Use AX_PTHREAD to detect pthreads
NOTE: This is a candidate for the 9.0 branch.

Reviewed-by: Adam Jackson <ajax@redhat.com>
(cherry picked from commit 7da12426f7)
2012-09-28 15:48:25 -07:00
Brian Paul
5ef472dd83 mesa: fix incorrect error for glCompressedSubTexImage
If a subtexture region isn't aligned to the compressed block size,
return GL_INVALID_OPERATION, not GL_INVALID_VALUE.

NOTE: This is a candidate for the stable branches.

Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 1f586684d6)
2012-09-28 15:47:57 -07:00
Ian Romanick
7c60a95a0e i915: Don't free the intel_context structure when intelCreateContext fails.
intelDestroyContext will eventually be called, and it will clean things up.

NOTE: This is a candidate for the 9.0 branch.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=53618
(cherry picked from commit de958de71b)
2012-09-28 15:41:34 -07:00
Ian Romanick
8aaef12a59 i965: Don't free the intel_context structure when intelCreateContext fails.
This squashes two commits from master:

    i965: Don't free the intel_context structure when intelCreateContext fails.

    intelDestroyContext will eventually be called, and it will clean things
    up.  The call to brwInitVtbl is moved earlier so that
    intelDestroyContext can call the device-specific destructor.  This also
    makes the code look more like the i915 code.

    NOTE: This is a candidate for the 9.0 branch.

    Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
    Reviewed-by: Eric Anholt <eric@anholt.net>
    Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=54301
    (cherry picked from commit 87f26214d6)

And:

    i965: brwInitVtbl needs to know the chipset generation

    Fixes major regressions since de958de.

    Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
    (cherry picked from commit e87c63f288)

The second commit message should have read 'since 87f2621', of course.
2012-09-28 15:40:20 -07:00
Ian Romanick
a87b0110b9 intel: Don't call intelDestroyContext if there is no context to destroy
Some error paths in the device-specific context creation functions can exit
before the intel_context structure is allocated.

NOTE: This is a candidate for the 9.0 branch.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=53618
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=54301
(cherry picked from commit 22897c7497)
2012-09-28 15:07:02 -07:00
Ian Romanick
5174eed793 dri_util: Use calloc to allocate __DRIcontext
The __DRIcontext contains some pointers, and some drivers check for them to be
NULL in some failure paths.  Instead of sprinkling NULL assignments across the
various drivers, just zero out the whole thing.

NOTE: This is a candidate for the 9.0 branch.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-and-tested-by: Kenneth Graunke <kenneth@whitecape.org>
Tested-by: Lu Hua <huax.lu@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=53618
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=54301
(cherry picked from commit f93cb0bebb)
2012-09-28 15:06:58 -07:00
Kenneth Graunke
8c1c18769e i965/blorp: Add support for blits between SRGB and linear formats (fixed).
This is a squash of 2 commits from master.
The first commit is:

i965/blorp: Add support for blits between SRGB and linear formats.

Fixes colorspace issues in L4D2 when multisampling is enabled (the
scene was far too dark, but the flashlight area was way too bright).

The nVidia and AMD binary drivers both allow this kind of blit.

Reviewed-by: Paul Berry <stereotype441@gmail.com>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit e2249e8c4d)

The second commit is:

i965/blorp: Fix sRGB MSAA resolves.

Commit e2249e8c4d (i965/blorp: Add
support for blits between SRGB and linear formats) changed blorp to
always configure surface states in linear format (even if the
underlying surface is sRGB).  This allowed sRGB-to-linear and
linear-to-sRGB blits to occur without causing the image to be
inappropriately brightened or darkened.

However, it broke sRGB MSAA resolves, since they rely on the
destination buffer format being sRGB in order to ensure that samples
are averaged together in sRGB-correct fashion.

This patch fixes the problem by instead configuring the source buffer
to use the *same* format as the destination buffer.  This ensures that
the image won't be brightened or darkened, but preserves proper sRGB
averaging.

Fixes piglit tests "EXT_framebuffer_multisample/accuracy srgb".

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=55265

Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-and-tested-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 124b214f09)
2012-09-28 11:20:40 -07:00
Paul Berry
849a3d243d i965: Don't spill "smeared" registers.
Fixes an assertion failure when compiling certain shaders that need both
pull constants and register spilling:

brw_eu_emit.c:204: validate_reg: Assertion `execsize >= width' failed.

NOTE: This is a candidate for the 8.0 release branch.

Signed-off-by: Paul Berry <stereotype441@gmail.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit ab5ce2789f)
2012-09-28 11:20:40 -07:00
Paul Berry
36bc0fe4f2 i965/blorp: Increase Y alignment for multisampled stencil blits.
This patch is a band-aid fix for a bug in commit 5fd67fa (i965/blorp:
Reduce alignment restrictions for stencil blits), which causes
multisampled stencil blits to work incorrectly on Sandy Bridge.

When blitting to or from a normal stencil buffer, we have to use a
coordinate transformation that swizzles coordinates to account for the
fact that stencil buffers use W tiling, but the most similar tiling
format available for textures and render targets is Y tiling.  The
differences between W and Y tiling cause pixels to be scrambled within
a block of size 8x4 (width x height) as measured relative to a W tile,
or 16x2 as measured relative to a Y tile.  So in order to make sure
that pixels at the edges of the blit aren't lost, we need to align the
rendering rectangle (and the buffer sizes) to multiples of the 8x4
block size.  This alignment happens in the brw_blorp_blit_params
constructor, whereas the determination of how to swizzle the
coordinates happens during code generation, in the
brw_blorp_blit_program class.

When blitting to or from a multisampled stencil buffer, the coordinate
swizzling is more complex, because it has to account for the
interleaving pattern of samples, which uses 4x4 blocks for 4x MSAA and
8x4 blocks for 8x MSAA.  The end result is that if multisampling is in
use, the 16x2 block size (relative to a Y tile) needs to be expanded
to 16x4, and the corresponding size relative to a W tile expands to
8x8.
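
A short sketch of the alignment arithmetic described above, expressed
relative to a Y tile (16x2 normally, 16x4 when multisampled); the macro and
function names are illustrative, not the brw_blorp_blit_params code:

    /* Alignment needed so W<->Y tile swizzling doesn't clip edge pixels. */
    #define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))
    #define ALIGN_UP(x, a)   ALIGN_DOWN((x) + (a) - 1, (a))

    void
    align_blit_rect(unsigned *x0, unsigned *y0, unsigned *x1, unsigned *y1,
                    int multisampled)
    {
       const unsigned x_align = 16;
       const unsigned y_align = multisampled ? 4 : 2;
       *x0 = ALIGN_DOWN(*x0, x_align);
       *y0 = ALIGN_DOWN(*y0, y_align);
       *x1 = ALIGN_UP(*x1, x_align);
       *y1 = ALIGN_UP(*y1, y_align);
    }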

The problem doesn't affect Ivy Bridge severely enough to crop up in
Piglit tests because on Ivy Bridge we have to disable multisampling
when blitting *to* a multisampled stencil buffer (the blorp compiler
generates code to compensate for the fact that multisampling is
disabled).  However I suspect a bug is still present because we don't
disable multisampling when blitting *from* a multisampled stencil
buffer.

This patch fixes the problem by doubling the vertical alignment
requirement when blitting to or from a multisampled stencil buffer,
and multisampling has not been disabled.

In the long run I would like to rework the brw_blorp_blit_params
constructor--it's difficult to follow and has had several subtle bugs
like this one.  However this band-aid fix should be suitable for
cherry-picking to release branches.

Fixes Piglit tests "unaligned-blit {2,4} stencil {msaa,upsample}" on
Sandy Bridge.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit a33ce665a5)
2012-09-28 11:20:39 -07:00
Paul Berry
76c1c34c4a i965/blorp: Fix offsets and width/height for stencil blits.
Fixes piglit test "framebuffer-blit-levels draw stencil".

Acked-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 1a5d4f7cb2)
2012-09-28 11:20:39 -07:00
Paul Berry
21e9850d53 i965/blorp: Reduce alignment restrictions for stencil blits.
Previously, we aligned all stencil blit operations to multiples of the
size of a tile, since stencil buffers use W-tiling, and blorp has to
approximate this by configuring the 3D pipeline for Y-tiling and
swizzling coordinates.

However, this was unnecessarily conservative; it turns out that the
differences between W-tiling and Y-tiling are confined to 32-byte
sub-tiles within the 4k tiling pattern; the layout of these 32-byte
sub-tiles within the larger 4k tile is the same (8 sub-tiles across by
16 sub-tiles down, in column-major order).  Therefore we only need to
align stencil blit operations to multiples of the sub-tile size.

Note: although the performance improvement of this change is probably
quite small, the fact that W-tiling and Y-tiling formats only differ
within 32-byte sub-tiles will be essential in a future patch to ensure
that stencil blits work correctly between parts of the miptree other
than level/layer 0.  Making this change provides handy documentation
(and validation) of this fact.

Acked-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 5fd67fac14)
2012-09-28 11:20:39 -07:00
Paul Berry
62bc4af0e1 i965/blorp: don't reduce stencil alignment restrictions when multisampling.
When blitting to a stencil buffer, we need to align the rectangle we
send down the rendering pipeline, to account for the fact that the
stencil buffer uses a W-tiled layout, but we are configuring its
surface state as Y-tiled.

Previously, when the stencil buffer was multisampled, we assumed that
we could reduce the amount of alignment that was necessary, since each
pixel occupies a block of 2x2 or 4x2 samples in the stencil buffer.
That would have been correct if the coordinates we were adjusting were
measured in pixels.  However, the conversion from pixel coordinates to
coordinates within the interleaved buffer has already been done;
therefore the full alignment restriction applies.

Note: the reason this mistake wasn't previously uncovered by piglit
tests is because it is being masked by another mistake: the blorp
engine is using overly conservative alignment restrictions when doing
stencil blits.  The overly conservative alignment restrictions will be
removed in the patch that follows.  Doing this fix now will prevent
the subsequent patch from introducing regressions.

Acked-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 1a75063d5f)
2012-09-28 11:20:39 -07:00
Paul Berry
68da5dfc2c intel: Add map_stencil_as_y_tiled to intel_region_get_aligned_offset.
This patch modifies intel_region_get_aligned_offset() to make the
appropriate calculation when the blorp engine sets up a W-tiled
stencil buffer using a Y-tiled SURFACE_STATE.

Acked-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit b760c9913d)
2012-09-28 11:20:39 -07:00
Paul Berry
96fd94ba94 intel: Add map_stencil_as_y_tiled to intel_region_get_tile_masks.
When the blorp engine is performing a blit from one stencil buffer to
another, it sets up the surface state for these buffers as Y-tiled, so
it needs to be able to force intel_region_get_tile_masks() to return
the appropriate masks for a Y-tiled region.

Acked-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 50dec7fc2d)
2012-09-28 11:20:39 -07:00
Paul Berry
239e9bef92 i965/blorp: Account for offsets when emitting SURFACE_STATE.
Fixes piglit tests "framebuffer-blit-levels {read,draw} depth".

Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit f04f219906)
2012-09-28 11:20:39 -07:00
Paul Berry
e87174cf4b i965/blorp: Thread level and layer through brw_blorp_blit_miptrees().
Previously, when performing a blit using the blorp engine, we failed
to account for the level and layer of the source and destination.  As
a result, all blits would occur between miplevel 0 and layer 0 of the
corresponding textures, regardless of which level/layer was bound to
the framebuffer.

This patch passes the correct level and layer through
brw_blorp_blit_miptrees() into the brw_blorp_blit_params data structure.

Further patches in the series will adapt
gen{6,7}_blorp_emit_surface_state to make use of these parameters.

Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 3123f06215)
2012-09-28 11:20:39 -07:00
Paul Berry
127dc6d136 i965/blorp: store x and y offsets in brw_blorp_mip_info.
Currently, gen{6,7}_blorp_emit_surface_state assumes that the src and
dst surfaces are mapped to miplevel 0 and layer 0 (thus no surface
offset is required).  This is a bug, since the user might try to blit
to and from levels/layers other than 0.

To fix this bug, it will not be sufficient to have
gen{6,7}_blorp_emit_surface_state look up the surface offset at the
time they set up the surface state, since these offsets will need to
be tweaked when blitting stencil buffers (due to the fact that stencil
buffer blits have to swizzle between W and Y tiling formats).

So, to pave the way for the bug fix, this patch causes the x and y
offsets to be computed during blit setup and stored in
brw_blorp_mip_info.

As a result of this change, brw_blorp_mip_info doesn't need to store
the level and layer anymore.

For consistency, this patch makes a similar change to the handling of
depth buffers when doing HiZ operations.

Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit c130ce7b2b)
2012-09-28 11:20:38 -07:00
Paul Berry
602e9a0f37 i965/blorp: store surface width/height in brw_blorp_mip_info.
Previously, gen{6,7}_blorp_emit_surface_state would look up the width
and height of the surface at the time they set up the surface state,
and then tweak it if necessary (it's necessary when a W-tiled surface
is being mapped as Y-tiled).  With this patch, we look up the width
and height when setting up the blit, and store them in
brw_blorp_mip_info.  This allows us to do the necessary tweak in the
brw_blorp_blit_params constructor (where it makes more sense).  It
also reduces the need to keep track of level and layer in
brw_blorp_mip_info, so that a future patch can eliminate them
entirely.

For consistency, this patch makes a similar change to the handling of
depth buffers when doing HiZ operations.

Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 09b0fa8499)
2012-09-28 11:20:38 -07:00
Paul Berry
5c66640ac7 i965/blorp: Change gl_renderbuffer* params to intel_renderbuffer*.
This makes it more convenient for blorp functions to get access to
Intel-specific data inside the renderbuffer objects.

Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit e14b1288ef)
2012-09-28 11:20:38 -07:00
Paul Berry
cb9765ca94 i965/blorp: Clarify why width/height must be adjusted for Gen6 IMS surfaces.
Also add a clarifying comment for why the width/height doesn't need
adjustment for Gen7.

Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 32c7b2769c)
2012-09-28 11:20:38 -07:00
Paul Berry
5db1deab51 i965/gen6+: Adjust stencil buffer size after computing miptree layout.
Since Gen6+ stencil buffers use W-tiling (a tiling arrangement which
drm and the kernel are not aware of) we need to round up the width and
height of a stencil buffer to multiples of the W-tile size (64x64)
before allocating a stencil buffer.  Previously, we rounded up the
size of the base miplevel, and then computed the miptree layout based
on the rounded up size.  This was incorrect, because it meant that the
total size of the miptree would not be properly W-tile aligned, and
therefore we would not always allocate enough pages.

(Note: even though the GL API doesn't allow creation of mipmapped
stencil textures, it does allow mipmapping of a combined depth/stencil
texture, and on Gen6+, a combined depth/stencil texture is internally
implemented as a pair of separate depth and stencil buffers.)

For example, on Sandy Bridge, when allocating a mipmapped stencil
texture of size 128x128, we would first round up to the nearest
multiple of 64x64 (causing no change to the size), and then compute
the miptree layout (whose size worked out to 128x196).  Then we would
request an allocation of 128*196 bytes (6.125 pages), causing 7 pages
to be allocated to the texture.  However, the texture needs 8 pages,
since each W-tile occupies a page, and it takes 2 W-tiles to cover a
width of 128 and 4 W-tiles to cover a height of 196.
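
Spelling out the page arithmetic from the example above, assuming 4096-byte
pages, 64x64-byte W-tiles, and 1 byte per stencil pixel:

    #include <stdio.h>

    int main(void)
    {
       /* 128x128 mipmapped stencil: miptree layout is 128 wide x 196 high.
        * Allocating by byte size under-counts pages; counting W-tiles
        * does not. */
       unsigned w = 128, h = 196, page = 4096, wtile = 64;
       unsigned by_size  = (w * h + page - 1) / page;          /* 7 pages */
       unsigned by_tiles = ((w + wtile - 1) / wtile) *
                           ((h + wtile - 1) / wtile);          /* 2*4 = 8 */
       printf("by size: %u pages, by W-tiles: %u pages\n", by_size, by_tiles);
       return 0;
    }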

This patch changes the order of operations so that the miptree layout
is computed first and then the total size of the miptree is rounded up
to be W-tile aligned.

NOTE: This is a candidate for the 8.0 release branch.

Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit bde833c9d0)
2012-09-28 11:20:38 -07:00
Ian Romanick
4cdbf27fac mesa: Don't set uniform dispatch pointers for many things in ES2 or core
NOTE: This is a candidate for the 9.0 branch

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 6c01a0e770)
2012-09-28 10:56:46 -07:00
Ian Romanick
0fb12a40e4 mesa: Don't set shaderapi dispatch pointers for many things in ES2 or core
v2: Allow GL_ARB_shader_objects functions in core profile because we
still expose the extension string there.  Don't allow
glBindFragDataLocation in GLES3 because it's not part of that API.
Based (mostly) on review comments from Eric Anholt.

NOTE: This is a candidate for the 9.0 branch

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit be66cf950e)
2012-09-28 10:55:30 -07:00
Ian Romanick
f16031513f mesa: Don't set vtxfmt dispatch pointers for many things in ES2 or core
NOTE: This is a candidate for the 9.0 branch

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit aa0f588e2d)
2012-09-28 10:52:05 -07:00
Ian Romanick
0dc989ea5b mesa: Don't set loopback dispatch pointers for most things in ES2 or core
NOTE: This is a candidate for the 9.0 branch

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit a13c07f752)
2012-09-28 10:50:16 -07:00
Ian Romanick
c01f896062 mesa: Pass GL context to _mesa_create_save_table
This isn't used by this patch, but it will be necessary for several
follow-on patches.  Separating this out will make it easier to reorder
patches later.

NOTE: This is a candidate for the 9.0 branch

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 3ef9e43865)
2012-09-28 10:36:55 -07:00
Ian Romanick
b56c93dfff mesa: Don't set dispatch pointer for glTexStorage in ES2
NOTE: This is a candidate for the 9.0 branch

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit ee77061277)
2012-09-28 10:35:02 -07:00
Ian Romanick
90ec1db9f4 mesa: Don't set dispatch pointer for glGetProgramivARB in ES2
This function is not the same as glGetProgramiv.

NOTE: This is a candidate for the 9.0 branch

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 7f7268d385)
2012-09-28 10:27:36 -07:00
Ian Romanick
dd0dd9aa52 mesa: Don't set dispatch pointer for glResizeBuffersMESA in ES2
NOTE: This is a candidate for the 9.0 branch

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit a83b01371e)
2012-09-28 10:22:24 -07:00
Ian Romanick
7dc8dc0f7c mesa: Don't set dispatch pointers for glPointParameter[if][v] in ES2
NOTE: This is a candidate for the 9.0 branch

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 1c0a44aaf5)
2012-09-28 10:19:55 -07:00
Ian Romanick
961567d0fe mesa: Don't set dispatch pointers for glClearDepth or glDepthRange in ES2
NOTE: This is a candidate for the 9.0 branch

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 2a3a68e4c7)
2012-09-28 10:19:19 -07:00
Ian Romanick
de4e222794 mesa: Don't set dispatch pointer for glGetBufferSubData in ES2
NOTE: This is a candidate for the 9.0 branch

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 11927bfc4a)
2012-09-28 10:17:20 -07:00
Ian Romanick
f57bc97c7a mesa: Don't set dispatch pointer for glGetDoublev in ES2
NOTE: This is a candidate for the 9.0 branch

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 850412b8ab)
2012-09-28 10:16:38 -07:00
Ian Romanick
bb16b471d2 mesa: Don't set dispatch pointer for glPointSize in ES2
NOTE: This is a candidate for the 9.0 branch

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit aa129b0833)
2012-09-28 10:14:53 -07:00
Ian Romanick
d87c319390 gles2: Alias glReadBufferNV with desktop glReadBuffer
NOTE: This is a candidate for the 9.0 branch

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Cc: Kristian Høgsberg <krh@bitplanet.net>
(cherry picked from commit 23ff634c9c)
2012-09-28 10:11:56 -07:00
Ian Romanick
cf81335712 mesa: Allow glGetTexParameter of GL_TEXTURE_SRGB_DECODE_EXT
This was already (correctly) supported for glGetSamplerParameter paths.

NOTE: This is a candidate for stable branches.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit ae3023e967)
2012-09-28 09:56:09 -07:00
Chris Forbes
8d06574d2b mesa: fix dropped && in glGetStringi()
This fixes glGetStringi(GL_EXTENSIONS, ...) for core contexts. Previously,
all extension names returned would be NULL.

NOTE: This is a candidate for release branches.

Signed-off-by: Chris Forbes <chrisf@ijw.co.nz>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit d30a7d2eb4)
2012-09-28 09:56:04 -07:00
Brian Paul
4eecc8d007 upgrade glext.h to version 85
NOTE: This is a candidate for the stable branches.
(cherry picked from commit 68060cfb2b)
2012-09-28 09:51:20 -07:00
Matt Turner
b1ce5749b9 targets/xorg-i915: Rename driver to i915_drv.so.
modesetting_drv.so is undescriptive and collides with
xf86-video-modesetting.

Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
2012-09-25 12:05:20 -07:00
Kenneth Graunke
82a08e2f46 mesa: Ignore SRGB when determining compatible resolve formats.
MSAA resolves and other blit-like operations ignore SRGB state anyway,
so we should be able to safely allow resolves between compatible
SRGB/linear formats like SRGBA8 and RGBA8888.

This matches the behavior of the nVidia and AMD binary drivers.

Fixes completely black rendering when using multisampling in L4D2.

NOTE: This is a candidate for the 9.0 branch.

Reviewed-by: Paul Berry <stereotype441@gmail.com>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit c96828ecb4)
2012-09-25 07:21:40 -07:00
Kenneth Graunke
3dd84a58bb mesa: Don't override S3TC internalFormat if data is pre-compressed.
Commit 42723d88d intended to override an S3TC internalFormat to a
generic compressed format when the application requested online
compression of uncompressed data.  Unfortunately, it also broke
pre-compressed textures when libtxc_dxtn isn't installed but the
extensions are forced on.

Both glCompressedTexImage2D() and glTexImage2D() call teximage(), which
calls _mesa_choose_texture_format(), hitting this override code.  If we
have actual S3TC source data, we can't treat it as any other format, and
need to avoid the override.

Since glCompressedTexImage2D() passes in a format of GL_NONE (which is
illegal for glTexImage), we can use that to detect the pre-compressed
case and avoid the overrides.

Fixes a regression since 42723d88d3.

NOTE: This is a candidate for the 9.0 branch.

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-and-tested-by: Jordan Justen <jordan.l.justen@intel.com>
(cherry picked from commit 328961d955)
2012-09-25 07:17:33 -07:00
Kenneth Graunke
0231c54ebb meta: Don't _mesa_set_enable() invalid targets in ES 1.
GL_TEXTURE_1D, GL_TEXTURE_3D, GL_TEXTURE_RECTANGLE, and GL_TEXTURE_GEN_*
don't exist in ES 1 contexts, so any meta ops that used _mesa_meta_begin
with MESA_META_TEXTURE would trigger GL errors.  One such operation is
_mesa_meta_Clear().

Fixes the ES1 conformance test miplin.c, which was regressed by commit
08be1d288f.

NOTE: This is a candidate for the 9.0 branch.

v2: Also blacklist GL_TEXTURE_3D, per Brian's comments.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=54297
Cc: Ian Romanick <idr@freedesktop.org>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
2012-09-25 07:17:09 -07:00
Jonas Maebe
2fe673fec3 darwin: do not create double-buffered offscreen pixel formats
http://xquartz.macosforge.org/trac/ticket/536

Signed-off-by: Jeremy Huddleston Sequoia <jeremyhu@apple.com>
(cherry picked from commit 5fdf1f784b)
2012-09-24 16:06:37 -07:00
Vadim Girlin
f6a66a33f7 winsys/radeon: fix relocs caching
Don't cache pointers to elements of a reallocatable array.
In some circumstances this caused false cache hits, resulting in an
incorrect command stream and a GPU lockup.

Note: This is a candidate for the stable branches.

Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
Reviewed-by: Marek Olšák <maraeo@gmail.com>
(cherry picked from commit 9aa8bac98b)
2012-09-24 03:05:39 +02:00
Marek Olšák
38d1191f41 draw: fix non-indexed draw calls if there's an index buffer
Whether a draw call is indexed is determined by pipe_draw_info::indexed,
not by the presence of an index buffer.

This fixes crashes in r300g.

NOTE: This is a candidate for the stable branches.

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 2988fa940e)
2012-09-22 14:22:15 +02:00
Marek Olšák
a940c74bc3 r600g: set QUANT_MODE on Cayman too
This fixes piglit/fbo-blit-stretched.

Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit bfe489c76b)
2012-09-22 01:19:51 +02:00
Marek Olšák
c3454d95af r600g: do not require MSAA renderbuffer support if not asked for
to allow stencil-only sampler-only formats (like X24S8)

NOTE: This is a candidate for the stable branches.
(cherry picked from commit df5e2c058f)
2012-09-22 01:19:33 +02:00
Marek Olšák
27e056ff15 gallium/u_blitter: fix stencil-only blits
NOTE: This is a candidate for the stable branches.
(cherry picked from commit 61706915a3)
2012-09-22 01:19:23 +02:00
Marek Olšák
e81717e9e7 r300g: fix colormask with non-BGRA formats
NOTE: This is a candidate for the stable branches.
(cherry picked from commit 1e51d368eb)
2012-09-22 01:19:13 +02:00
Marek Olšák
86735d78e6 r600g: don't use a staging resource for large transfers
It kills performance if the resource is linear.
(cherry picked from commit e386972f5b)
2012-09-22 01:18:57 +02:00
Andreas Boll
890b405232 docs: fix some issues in relnotes
improve markup
fix link to relnotes-9.0
add missing relnotes links
(cherry picked from commit 6fb8aeb2c5)
2012-09-19 12:24:13 +02:00
Andreas Boll
1e314197dd docs/devinfo: fix typo
(cherry picked from commit abb1c847ac)
2012-09-19 12:24:13 +02:00
Matt Turner
437c40eee4 build: Don't list glproto and dri2proto in pkg-config file
No files provided by glproto or dri2proto are needed for building
something with Mesa.

Bugzilla: https://bugs.gentoo.org/show_bug.cgi?id=342393
Reviewed-by: Dan Nicholson <dbn.lists@gmail.com>
2012-09-18 14:10:11 -07:00
Dave Airlie
7cfd42cefe mesa/glsl: rename preprocess to glcpp_preprocess
With dricore this symbol escapes into the global namespace; it's too
generic, so we should prefix it with something just to be nice.

Should be applied to stable + 9.0

Acked-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
(cherry picked from commit 88b0790b1a)
2012-09-15 08:27:51 +10:00
Dave Airlie
8f7990c5f2 glcpp: fix abuse of yylex
glcpp tried to work around yylex in its own way, but failed; do it
properly.

This fixes another crash found after fixing the first crash.

this is a candidate for 9.0 and stable branches

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
(cherry picked from commit 53d46bc787)
2012-09-15 08:27:51 +10:00
Dave Airlie
a834381506 mesa: use a prefix for the program lex
This avoids us making a global yylex symbol, which would interfere with
all sorts of apps.

With libdricore, which currently can't do symbol visibility, we pollute
the namespace with this.

This is a candidate for 9.0 & stable branches.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
(cherry picked from commit cc943c8470)
2012-09-15 08:27:51 +10:00
Mike Frysinger
fafaf56479 mklib: clean up abi flags for x86 targets
The current code is duplicated in two places and relies on `uname` to
detect the flags.  This is no good for cross-compiling, and the current
logic uses -m64 for the x32 ABI which breaks things.

Unify the code in one place, avoid `uname` completely, and add support
for the new x32 ABI.

Signed-off-by: Mike Frysinger <vapier@gentoo.org>
2012-09-14 15:26:27 -07:00
Alex Deucher
f94b6d706f r600g: reduce quant mode on evergreen+
Seems to have an effect on the allowable range of
values.  Set evergreen+ to 1/256 to match 6xx/7xx.

fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=54877

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit b33d7eaa5e)
2012-09-14 09:26:59 -04:00
Kenneth Graunke
a5a86652f1 i965: Fix out-of-order sampler unit usage in ARB fragment programs.
ARB fragment programs use texture unit numbers directly, unlike GLSL
which has an extra indirection.  If a fragment program only uses one
texture assigned to GL_TEXTURE1, SamplersUsed will only contain a single
bit, which would make us only upload a single surface/sampler state
entry.  However, it needs to be the second entry.

Using _mesa_fls() instead of _mesa_bitcount() solves this.  For ARB
programs, this makes num_samplers the ID of the highest texture unit
used.  Since GLSL uses consecutive integers assigned by the linker,
_mesa_fls() should give the same result as _mesa_bitcount().
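
A worked illustration of the difference; the GCC builtins stand in for
_mesa_bitcount() and _mesa_fls():

    #include <stdio.h>

    int main(void)
    {
       /* ARB program sampling only texture unit 1 */
       unsigned SamplersUsed = 1u << 1;                         /* binary 10 */
       printf("bitcount=%d fls=%d\n",
              __builtin_popcount(SamplersUsed),                 /* 1: too few */
              32 - __builtin_clz(SamplersUsed));                /* 2: correct */
       return 0;
    }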

Fixes a regression since 85e8e9e000,
which caused GPU hangs in ETQW (and probably others), as well as
breaking piglit test fp-fragment-position.

v2: Add a comment, as suggested by Matt.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=54098
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=54179
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Tested-by: meng <mengmeng.meng@intel.com>
(cherry picked from commit 28f4be9eb9)
2012-09-14 02:03:00 -07:00
Kenneth Graunke
66e8f863d3 mesa: Add a _mesa_fls() function to find the last bit set in a word.
ffs() finds the least significant bit set; _mesa_fls() finds the /most/
significant bit.
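
A minimal, portable sketch of such a find-last-set helper (not necessarily
the exact imports.h code):

    /* Returns the 1-based position of the most significant set bit,
     * or 0 if no bits are set, mirroring ffs() for the other end. */
    static inline int
    fls_sketch(unsigned int n)
    {
       int pos = 0;
       while (n) {
          pos++;
          n >>= 1;
       }
       return pos;
    }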

v2: Make it an inline function in imports.h, per Brian's suggestion.

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
(cherry picked from commit 0fc163408e)
2012-09-14 02:02:53 -07:00
Vadim Girlin
c586fce4fb r600g: adjust QUANT_MODE for higher precision
Use 1/256 for R6xx/7xx, 1/4096 for evergreen, instead of default 1/16.

Helps to pass some piglit tests (fbo, multisample).

Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit f44bda17f5)
2012-09-12 11:57:36 +02:00
Vadim Girlin
405d47bbe7 mesa: don't wait in _mesa_ClientWaitSync if timeout is 0
From ARB_sync spec:

    If the value of <timeout> is zero, then ClientWaitSync does not
    block, but simply tests the current state of <sync>. TIMEOUT_EXPIRED
    will be returned in this case if <sync> is not signaled, even though
    no actual wait was performed.
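
A hedged sketch of the early-out; the sync-object type and field are
stand-ins, and the actual waiting path is elided:

    #include <GL/gl.h>
    #include <GL/glext.h>

    struct sync_stub { int signaled; };   /* stand-in for gl_sync_object */

    GLenum
    client_wait_sync_sketch(struct sync_stub *so, GLuint64 timeout)
    {
       if (so->signaled)
          return GL_ALREADY_SIGNALED;
       if (timeout == 0)
          return GL_TIMEOUT_EXPIRED;   /* just test, never block */
       /* ... otherwise wait for the fence, then report the outcome ... */
       return GL_CONDITION_SATISFIED;
    }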

Fixes random fails of the arb_sync-timeout-zero piglit test on r600g.

Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit b05a1fc156)
2012-09-12 11:57:36 +02:00
Brian Paul
04f902b472 mesa: fix proxy texture error handling in glTexStorage()
This is basically a follow-on to 1f5b1f9846.
Basically, generate GL errors for ordinary invalid parameters for proxy
targets the same as for non-proxy targets.  Only texture size and OOM
errors should be handled specially for proxies.

Note: This is a candidate for the stable branches.
(cherry picked from commit 35c75f6777)
2012-09-12 10:22:04 +02:00
Brian Paul
f9bb66b1ce mesa: make _mesa_get_proxy_target() non-static
Needed for the next patch.

Note: This is a candidate for the stable branches.
(cherry picked from commit d17440dcaa)
2012-09-12 10:22:04 +02:00
Brian Paul
10b9a02952 mesa: do internal format error checking for glTexStorage()
Turns out we weren't doing any format checking before.  Now check
the internal format and, in particular, make sure that unsized internal
formats aren't accepted.

Note: This is a candidate for the stable branches.
(cherry picked from commit 2e4fc54977)
2012-09-12 10:22:04 +02:00
Paul Berry
1f4d074f75 mesa/msaa: Allow X and Y flips in multisampled blits.
From the GL 4.3 spec, section 18.3.1 "Blitting Pixel Rectangles":

    If SAMPLE_BUFFERS for either the read framebuffer or draw
    framebuffer is greater than zero, no copy is performed and an
    INVALID_OPERATION error is generated if the dimensions of the
    source and destination rectangles provided to BlitFramebuffer are
    not identical, or if the formats of the read and draw framebuffers
    are not identical.

It is not clear from the spec whether "dimensions" should mean both
sign and magnitude, or just magnitude.

Previously, Mesa interpreted "dimensions" as meaning both sign and
magnitude, so any multisampled blit that attempted to flip the image
in the X and/or Y direction would fail.

However, Y flips are likely to be commonplace in OpenGL applications
that have been ported from DirectX applications, as a result of the
fact that DirectX and OpenGL differ in their orientation of the Y
axis.  Furthermore, at least one commercial driver (nVidia) permits Y
flips, and L4D2 relies on them being permitted.  So it seems prudent
for Mesa to permit them.

This patch changes Mesa to allow both X and Y flips, since there is no
language in the spec to indicate that X and Y flips should be treated
differently.
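
In effect the check compares magnitudes rather than signed deltas; a minimal
sketch, not the actual Mesa check:

    #include <stdlib.h>   /* abs() */

    /* Compare blit dimensions by magnitude only, so X/Y flips (negative
     * deltas) are still accepted. */
    int
    dimensions_match(int srcX0, int srcX1, int srcY0, int srcY1,
                     int dstX0, int dstX1, int dstY0, int dstY1)
    {
       return abs(srcX1 - srcX0) == abs(dstX1 - dstX0) &&
              abs(srcY1 - srcY0) == abs(dstY1 - dstY0);
    }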

NOTE: This is a candidate for stable release branches.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
(cherry picked from commit 5d5f0f3491)
2012-09-12 10:22:04 +02:00
Kenneth Graunke
42ef3f68c9 glsl: Generate compile errors for explicit blend indices < 0 or > 1.
According to the GLSL 4.30 specification, this is a compile time error.
Earlier specifications don't specify a behavior, but since 0 and 1 are
the only valid indices for dual source blending, it makes sense to
generate the error.

Fixes (the fixed version of) piglit's layout-12.frag.

NOTE: This is a candidate for the 9.0 branch.

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Paul Berry <stereotype441@gmail.com>
(cherry picked from commit 354f2cb5c7)
2012-09-12 10:22:03 +02:00
Eric Anholt
e2b4f9aac3 i965: Fix virtual_grf_interferes() between calculate_live_intervals() and DCE.
This fixes the blue zombies bug in l4d2.

NOTE: This is a candidate for the 9.0 branch.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 39aca5076f)
2012-09-12 10:22:03 +02:00
Brian Paul
8b8416676e glapi/glx: rename 'table' variable to 'disp_table'
This fixes an issue where the local 'table' variable was hiding the
function parameter name in glGetColorTable(..., void *table).

This should be OK as long as there's never a GL entrypoint that uses
'disp_table' as a parameter name.

Note: This is a candidate for the 9.0 branch.

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
(cherry picked from commit 043f66204b)
2012-09-12 10:22:03 +02:00
Brian Paul
6e9baa85a9 mesa: fix per-level max texture size error checking
This is a long-standing omission in Mesa's texture image size checking.
We need to take the mipmap level into consideration when checking if the
width, height and depth are too large.
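
The level-aware check amounts to shrinking the limit by the level; a minimal
sketch with illustrative names, ignoring borders and depth:

    /* Level N of a texture may be at most 1/2^N of the maximum base
     * size in each dimension. */
    int
    level_size_ok(int width, int height, int level, int max_size)
    {
       int max_at_level = max_size >> level;
       if (max_at_level == 0)
          return 0;
       return width <= max_at_level && height <= max_at_level;
    }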

Fixes the new piglit max-texture-size-level test.
Thanks to Stéphane Marchesin for finding this problem.

Note: This is a candidate for the stable branches.

Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
(cherry picked from commit 771e7b6d88)
2012-09-12 10:22:03 +02:00
Brian Paul
3f6ce3454f st/mesa: s/CALLOC/calloc/ to fix allocation bug
The CALLOC() macro only takes one argument so this was being treated
as a comma expression.  Simply use calloc() instead.

A follow-on patch will replace all CALLOC() calls with calloc().
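
A hedged illustration of how a one-argument allocation macro can silently
misallocate when invoked with calloc-style arguments wrapped in an extra pair
of parentheses; the call site shown is hypothetical, not the actual st/mesa
code:

    #include <stdlib.h>

    #define CALLOC(BYTES) calloc(1, (BYTES))   /* one-argument macro */

    struct foo { int a[16]; };

    int main(void)
    {
       int count = 100;
       /* "(count, sizeof(struct foo))" is a comma expression that
        * evaluates to sizeof(struct foo) alone, so only one element's
        * worth of memory is allocated: */
       struct foo *p = CALLOC((count, sizeof(struct foo)));
       /* The fix: call calloc() directly with both arguments. */
       struct foo *q = calloc(count, sizeof(struct foo));
       free(p);
       free(q);
       return 0;
    }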

NOTE: This is a candidate for the 8.0 and 9.0 branches.
(cherry picked from commit 43ed822a50)
2012-09-12 10:22:03 +02:00
Johannes Obermayr
7f011e2075 Set OSMESA_VERSION=8.
VERSION_NUMBER is not required anymore, so it will be removed.

Reviewed-by: Adam Jackson <ajax@redhat.com>
(cherry picked from commit 10a96f4a4d)
2012-09-07 15:29:48 -04:00
Jerome Glisse
41d14eaf19 r600g: fix num of dwords needed for alphatest_state atom
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
2012-09-06 15:23:51 -04:00
Chad Versace
13b8eb6452 mesa: Don't advertise GLES extensions in GL contexts
glGetStringi(GL_EXTENSIONS) failed to respect the context's API, and so
returned all internally enabled GLES extensions from a GL context.
Likewise, glGetIntegerv(GL_NUM_EXTENSIONS) also failed to respect the
context's API.

Note: This is a candidate for the 8.0 and 9.0 branches.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Signed-off-by: Chad Versace <chad.versace@linux.intel.com>
(cherry picked from commit f29a4b0157)
2012-09-06 11:49:11 -07:00
Tapani Pälli
c7775e842b android: do not expose single buffered eglconfigs
On Android we want to add only double-buffered configs for visuals.
The earlier implementation set the SurfaceType to 0 for single-buffered
configs, but the driver still exposed these configs even though they were
not compatible with any EGL surface type.  This caused Khronos conformance
test runs to fail on Android.  This patch fixes the issue by skipping
single-buffered configs earlier and not exposing them.

Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Signed-off-by: Chad Versace <chad.versace@linux.intel.com>
(cherry picked from commit d58ca43b80)
2012-09-04 19:24:14 -07:00
Tapani Pälli
2a69de60bf android: fix liblog API changes
Android logging macros changed their names in Jelly Bean.

Signed-off-by: Bruce E. Robertson <bruce.e.robertson@intel.com>
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Signed-off-by: Chad Versace <chad.versace@linux.intel.com>
(cherry picked from commit 29d394b9ba)
2012-09-04 19:24:14 -07:00
Tapani Pälli
8cffec495c xmlconfig: use __progname when building for Android
The __progname symbol and strrchr are available with bionic.

Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Signed-off-by: Chad Versace <chad.versace@linux.intel.com>
(cherry picked from commit 4d02b018f4)
2012-09-04 19:24:14 -07:00
Ian Romanick
284fe97515 meta: Don't save and restore fog state when there is no fog state
I wonder if the better solution is to have _mesa_meta_GenerateMipmap not
use MESA_META_ALL for the GLSL path.  Even on compatibility profiles
there is no reason to save and restore fog on this path.

NOTE: This is a candidate for the 9.0 branch.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Tested-by: Lu Hua <huax.lu@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=54295
(cherry picked from commit 51b069e7aa)
2012-09-03 10:38:48 -07:00
Kenneth Graunke
6886da783a i965/fs: Don't use brw->fragment_program in calculate_urb_setup().
Reading brw->fragment_program is nonsensical in compiler code: it
contains the currently active program (if any), not the one currently
being compiled.  Attempting to access it may either lead to crashes
(null pointer dereference if no program is active) or wrong results.

Fixes piglit regressions since 9ef710575b
on pre-Sandybridge hardware.  The actual bug was created in commit
7b1fbc6889.

NOTE: This is a candidate for the 8.0 branch.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=54183
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Paul Berry <stereotype441@gmail.com>
(cherry picked from commit 4d9abd96cc)
2012-08-31 23:00:25 -07:00
Matt Turner
b982fa8e8f build: Remove left over echo from GLU removal 2012-08-31 15:13:23 -07:00
Matt Turner
d240dcee6d Remove libGLU
It's been moved to its own repository, found at
	http://cgit.freedesktop.org/mesa/glu/

Acked-by: Kenneth Graunke <kenneth@whitecape.org>
2012-08-31 15:08:40 -07:00
Jakob Bornecrantz
0097809879 dri: Rework planar image interface
As discussed with Kristian on #wayland. This pushes the decision about
components into the DRI driver, giving it greater freedom to implement YUV
samplers in hardware and to choose which mode to use.

This interface will also allow drivers like SVGA to implement YUV surfaces
without the need to sub-allocate, instead sending three separate buffers,
one for each channel; this is currently not implemented.
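In sketch form (treat the exact prototype as an assumption here), the loader asks the driver for one __DRIimage per plane, and the driver may return NULL to keep the YUV handling to itself:

/* image_ext is the driver's __DRI_IMAGE extension; img an imported image. */
__DRIimage *plane = image_ext->fromPlanar(img, 0 /* plane index */, NULL);
if (plane == NULL) {
   /* The driver handles the YUV layout itself, e.g. via hardware YUV
    * samplers. */
}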

I have tested these changes on Gallium SVGA. Scott tested them on both Intel
and Gallium Radeon. Kristian and Pekka tested them on Intel.

v2: Fix typo in dri2_from_planar.
v3: Merge in intel changes.

(cherry picked from commit 6a7dea93fa)

Tested-by: Scott Moreau <oreaus@gmail.com>
Tested-by: Pekka Paalanen <ppaalanen@gmail.com>
Tested-by: Kristian Høgsberg <krh@bitplanet.net>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>
Signed-off-by: Jakob Bornecrantz <jakob@vmware.com>
2012-08-31 19:53:45 +02:00
Marek Olšák
ef557eacff winsys/radeon: disable virtual memory on Cayman
It hangs.
2012-08-31 18:02:35 +02:00
Vinson Lee
de92b7adca scons: Remove leftover print statement.
Remove print statement left over from commit
c57fb034b1.

Signed-off-by: Vinson Lee <vlee@freedesktop.org>
(cherry picked from commit f3bb6bd9b3)
2012-08-31 08:30:54 -07:00
Andreas Boll
3ffd5dfbce docs: update relnotes-9.0
Signed-off-by: Brian Paul <brianp@vmware.com>
2012-08-31 09:22:39 -06:00
Andreas Boll
03785fe360 mesa: also bump version in Makefile.am and configure.ac to 9.0
Signed-off-by: Brian Paul <brianp@vmware.com>
2012-08-31 09:22:36 -06:00
Vinson Lee
7b676fd738 scons: Add default libraries to Solaris build.
Fixes SCons build on Solaris.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=54293
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Signed-off-by: Brian Paul <brianp@vmware.com>
2012-08-31 08:24:59 -06:00
10627 changed files with 806192 additions and 3420282 deletions

View File

@@ -1,18 +1,11 @@
((nil . ((show-trailing-whitespace . t)))
(prog-mode
((nil
(indent-tabs-mode . nil)
(tab-width . 8)
(c-basic-offset . 3)
(c-file-style . "stroustrup")
(fill-column . 78)
(eval . (progn
(c-set-offset 'case-label '0)
(c-set-offset 'innamespace '0)
(c-set-offset 'inline-open '0)))
(whitespace-style face indentation)
(whitespace-line-column . 79)
(eval ignore-errors
(require 'whitespace)
(whitespace-mode 1)))
(makefile-mode (indent-tabs-mode . t))
)
)

View File

@@ -1,41 +0,0 @@
# To use this config on you editor, follow the instructions at:
# http://editorconfig.org
root = true
[*]
charset = utf-8
insert_final_newline = true
tab_width = 8
[*.{c,h,cpp,hpp,cc,hh}]
indent_style = space
indent_size = 3
max_line_length = 78
[{Makefile*,*.mk}]
indent_style = tab
[*.py]
indent_style = space
indent_size = 4
[*.yml]
indent_style = space
indent_size = 2
[*.rst]
indent_style = space
indent_size = 3
[*.patch]
trim_trailing_whitespace = false
[{meson.build,meson_options.txt}]
indent_style = space
indent_size = 2
[*.ps1]
indent_style = space
indent_size = 2

10
.gitattributes vendored
View File

@@ -1,6 +1,4 @@
*.csv eol=crlf
* text=auto
*.jpg binary
*.png binary
*.gif binary
*.ico binary
*.dsp -crlf
*.dsw -crlf
*.sln -crlf
*.vcproj -crlf

View File

@@ -1,59 +0,0 @@
name: macOS-CI
on: push
permissions:
contents: read
jobs:
macOS-CI:
strategy:
matrix:
glx_option: ['dri', 'xlib']
runs-on: macos-11
env:
GALLIUM_DUMP_CPU: true
MESON_EXEC: /Users/runner/Library/Python/3.11/bin/meson
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Install Dependencies
run: |
cat > Brewfile <<EOL
brew "bison"
brew "expat"
brew "gettext"
brew "libx11"
brew "libxcb"
brew "libxdamage"
brew "libxext"
brew "ninja"
brew "pkg-config"
brew "python@3.10"
EOL
brew update
brew bundle --verbose
- name: Install Mako and meson
run: pip3 install --user mako meson
- name: Configure
run: |
cat > native_config <<EOL
[binaries]
llvm-config = '/usr/local/opt/llvm/bin/llvm-config'
EOL
$MESON_EXEC . build --native-file=native_config -Dbuild-tests=true -Dosmesa=true -Dgallium-drivers=swrast -Dglx=${{ matrix.glx_option }}
- name: Build
run: $MESON_EXEC compile -C build
- name: Test
run: $MESON_EXEC test -C build --print-errorlogs
- name: Install
run: $MESON_EXEC install -C build --destdir $PWD/install
- name: 'Upload Artifact'
if: always()
uses: actions/upload-artifact@v3
with:
name: macos-${{ matrix.glx_option }}-result
path: |
build/meson-logs/
install/
retention-days: 5

43
.gitignore vendored
View File

@@ -1,4 +1,43 @@
*.a
*.dll
*.exe
*.ilk
*.la
*.lo
*.o
*.obj
*.os
*.pc
*.pdb
*.pyc
*.pyo
*.out
/build
*.so
*.so.*
*.sw[a-z]
*.tar
*.tar.bz2
*.tar.gz
*.zip
*~
depend
depend.bak
bin/ltmain.sh
lib
lib64
configure
configure.lineno
autom4te.cache
aclocal.m4
config.log
config.status
cscope*
.scon*
config.py
build
libtool
manifest.txt
Makefile.in
.dir-locals.el
.deps/
.libs/
/Makefile

View File

@@ -1,300 +0,0 @@
variables:
FDO_UPSTREAM_REPO: mesa/mesa
MESA_TEMPLATES_COMMIT: &ci-templates-commit 290b79e0e78eab67a83766f4e9691be554fc4afd
CI_PRE_CLONE_SCRIPT: |-
set -o xtrace
wget -q -O download-git-cache.sh ${CI_PROJECT_URL}/-/raw/${CI_COMMIT_SHA}/.gitlab-ci/download-git-cache.sh
bash download-git-cache.sh
rm download-git-cache.sh
set +o xtrace
CI_JOB_JWT_FILE: /minio_jwt
MINIO_HOST: s3.freedesktop.org
# per-pipeline artifact storage on MinIO
PIPELINE_ARTIFACTS_BASE: ${MINIO_HOST}/artifacts/${CI_PROJECT_PATH}/${CI_PIPELINE_ID}
# per-job artifact storage on MinIO
JOB_ARTIFACTS_BASE: ${PIPELINE_ARTIFACTS_BASE}/${CI_JOB_ID}
# reference images stored for traces
PIGLIT_REPLAY_REFERENCE_IMAGES_BASE: "${MINIO_HOST}/mesa-tracie-results/$FDO_UPSTREAM_REPO"
# Individual CI farm status, set to "offline" to disable jobs
# running on a particular CI farm (ie. for outages, etc):
FD_FARM: "online"
COLLABORA_FARM: "online"
MICROSOFT_FARM: "online"
LIMA_FARM: "online"
IGALIA_FARM: "online"
default:
before_script:
- echo -e "\e[0Ksection_start:$(date +%s):unset_env_vars_section[collapsed=true]\r\e[0KUnsetting vulnerable environment variables"
- echo -n "${CI_JOB_JWT}" > "${CI_JOB_JWT_FILE}"
- unset CI_JOB_JWT
- echo -e "\e[0Ksection_end:$(date +%s):unset_env_vars_section\r\e[0K"
after_script:
- >
set +x
test -e "${CI_JOB_JWT_FILE}" &&
export CI_JOB_JWT="$(<${CI_JOB_JWT_FILE})" &&
rm "${CI_JOB_JWT_FILE}"
# Retry build or test jobs up to twice when the gitlab-runner itself fails somehow.
retry:
max: 2
when:
- runner_system_failure
include:
- project: 'freedesktop/ci-templates'
ref: ffe4d1b10aab7534489f0c4bbc4c5899df17d3f2
file:
- '/templates/ci-fairy.yml'
- project: 'freedesktop/ci-templates'
ref: *ci-templates-commit
file:
- '/templates/debian.yml'
- '/templates/fedora.yml'
- local: '.gitlab-ci/image-tags.yml'
- local: '.gitlab-ci/lava/lava-gitlab-ci.yml'
- local: '.gitlab-ci/container/gitlab-ci.yml'
- local: '.gitlab-ci/build/gitlab-ci.yml'
- local: '.gitlab-ci/test/gitlab-ci.yml'
- local: '.gitlab-ci/test-source-dep.yml'
- local: 'src/amd/ci/gitlab-ci.yml'
- local: 'src/broadcom/ci/gitlab-ci.yml'
- local: 'src/etnaviv/ci/gitlab-ci.yml'
- local: 'src/freedreno/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/crocus/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/d3d12/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/i915/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/lima/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/llvmpipe/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/nouveau/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/radeonsi/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/softpipe/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/virgl/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/zink/ci/gitlab-ci.yml'
- local: 'src/gallium/frontends/lavapipe/ci/gitlab-ci.yml'
- local: 'src/intel/ci/gitlab-ci.yml'
- local: 'src/microsoft/ci/gitlab-ci.yml'
- local: 'src/panfrost/ci/gitlab-ci.yml'
stages:
- sanity
- container
- git-archive
- build-x86_64
- build-misc
- amd
- intel
- nouveau
- arm
- broadcom
- freedreno
- etnaviv
- software-renderer
- layered-backends
- deploy
# YAML anchors for rule conditions
# --------------------------------
.rules-anchors:
rules:
# Pipeline for forked project branch
- if: &is-forked-branch '$CI_COMMIT_BRANCH && $CI_PROJECT_NAMESPACE != "mesa"'
when: manual
# Forked project branch / pre-merge pipeline not for Marge bot
- if: &is-forked-branch-or-pre-merge-not-for-marge '$CI_PROJECT_NAMESPACE != "mesa" || ($GITLAB_USER_LOGIN != "marge-bot" && $CI_PIPELINE_SOURCE == "merge_request_event")'
when: manual
# Pipeline runs for the main branch of the upstream Mesa project
- if: &is-mesa-main '$CI_PROJECT_NAMESPACE == "mesa" && $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH && $CI_COMMIT_BRANCH'
when: always
# Post-merge pipeline
- if: &is-post-merge '$CI_PROJECT_NAMESPACE == "mesa" && $CI_COMMIT_BRANCH'
when: on_success
# Post-merge pipeline, not for Marge Bot
- if: &is-post-merge-not-for-marge '$CI_PROJECT_NAMESPACE == "mesa" && $GITLAB_USER_LOGIN != "marge-bot" && $CI_COMMIT_BRANCH'
when: on_success
# Pre-merge pipeline
- if: &is-pre-merge '$CI_PIPELINE_SOURCE == "merge_request_event"'
when: on_success
# Pre-merge pipeline for Marge Bot
- if: &is-pre-merge-for-marge '$GITLAB_USER_LOGIN == "marge-bot" && $CI_PIPELINE_SOURCE == "merge_request_event"'
when: on_success
.docs-base:
extends:
- .fdo.ci-fairy
- .build-rules
script:
- apk --no-cache add graphviz doxygen
- pip3 install sphinx===5.1.1 breathe===4.34.0 mako===1.2.3 sphinx_rtd_theme===1.0.0
- docs/doxygen-wrapper.py --out-dir=docs/doxygen_xml
- sphinx-build -W -b html docs public
pages:
extends: .docs-base
stage: deploy
artifacts:
paths:
- public
needs: []
rules:
- !reference [.no_scheduled_pipelines-rules, rules]
- if: *is-mesa-main
changes: &docs-or-ci
- docs/**/*
- .gitlab-ci.yml
when: always
# Other cases default to never
test-docs:
extends: .docs-base
# Cancel job if a newer commit is pushed to the same branch
interruptible: true
stage: deploy
needs: []
rules:
- !reference [.no_scheduled_pipelines-rules, rules]
- if: *is-forked-branch
changes: *docs-or-ci
when: manual
# Other cases default to never
test-docs-mr:
extends:
- test-docs
needs:
- sanity
artifacts:
expose_as: 'Documentation preview'
paths:
- public/
rules:
- if: *is-pre-merge
changes: *docs-or-ci
when: on_success
# Other cases default to never
# When to automatically run the CI for build jobs
.build-rules:
rules:
- !reference [.no_scheduled_pipelines-rules, rules]
# If any files affecting the pipeline are changed, build/test jobs run
# automatically once all dependency jobs have passed
- changes: &all_paths
- VERSION
- bin/git_sha1_gen.py
- bin/install_megadrivers.py
- bin/meson_get_version.py
- bin/symbols-check.py
# GitLab CI
- .gitlab-ci.yml
- .gitlab-ci/**/*
# Meson
- meson*
- build-support/**/*
- subprojects/**/*
# Source code
- include/**/*
- src/**/*
when: on_success
# Otherwise, build/test jobs won't run because no rule matched.
.ci-deqp-artifacts:
artifacts:
name: "mesa_${CI_JOB_NAME}"
when: always
untracked: false
paths:
# Watch out! Artifacts are relative to the build dir.
# https://gitlab.com/gitlab-org/gitlab-ce/commit/8788fb925706cad594adf6917a6c5f6587dd1521
- artifacts
- _build/meson-logs/*.txt
- _build/meson-logs/strace
.container-rules:
rules:
- !reference [.no_scheduled_pipelines-rules, rules]
# Run pipeline by default in the main project if any CI pipeline
# configuration files were changed, to ensure docker images are up to date
- if: *is-post-merge
changes:
- .gitlab-ci.yml
- .gitlab-ci/**/*
when: on_success
# Run pipeline by default if it was triggered by Marge Bot, is for a
# merge request, and any files affecting the pipeline were changed
- if: *is-pre-merge-for-marge
changes:
*all_paths
when: on_success
# Run pipeline by default in the main project if it was not triggered by
# Marge Bot, and any files affecting the pipeline were changed
- if: *is-post-merge-not-for-marge
changes:
*all_paths
when: on_success
# Allow triggering jobs manually in other cases if any files affecting the
# pipeline were changed
- changes:
*all_paths
when: manual
# Otherwise, container jobs won't run because no rule matched.
# Git archive
make git archive:
extends:
- .fdo.ci-fairy
stage: git-archive
rules:
- !reference [.scheduled_pipeline-rules, rules]
# ensure we are running on packet
tags:
- packet.net
script:
# Compactify the .git directory
- git gc --aggressive
# compress the current folder
- tar -cvzf ../$CI_PROJECT_NAME.tar.gz .
- ci-fairy s3cp --token-file "${CI_JOB_JWT_FILE}" ../$CI_PROJECT_NAME.tar.gz https://$MINIO_HOST/git-cache/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/$CI_PROJECT_NAME.tar.gz
# Sanity checks of MR settings and commit logs
sanity:
extends:
- .fdo.ci-fairy
stage: sanity
rules:
- if: *is-pre-merge
when: on_success
# Other cases default to never
variables:
GIT_STRATEGY: none
script:
# ci-fairy check-commits --junit-xml=check-commits.xml
- ci-fairy check-merge-request --require-allow-collaboration --junit-xml=check-merge-request.xml
artifacts:
when: on_failure
reports:
junit: check-*.xml
# Rules for tests that should not block merging, but should be available to
# optionally run with the "play" button in the UI in pre-merge non-marge
# pipelines. This should appear in "extends:" after any includes of
# test-source-dep.yml rules, so that these rules replace those.
.test-manual-mr:
rules:
- !reference [.no_scheduled_pipelines-rules, rules]
- if: *is-forked-branch-or-pre-merge-not-for-marge
changes:
*all_paths
when: manual
variables:
JOB_TIMEOUT: 80

View File

@@ -1,17 +0,0 @@
# Note: skips lists for CI are just a list of lines that, when
# non-zero-length and not starting with '#', will regex match to
# delete lines from the test list. Be careful.
# These are tremendously slow (pushing toward a minute), and aren't
# reliable to be run in parallel with other tests due to CPU-side timing.
dEQP-GLES[0-9]*.functional.flush_finish.*
# piglit: WGL is Windows-only
wgl@.*
# These are sensitive to CPU timing, and would need to be run in isolation
# on the system rather than in parallel with other tests.
glx@glx_arb_sync_control@timing.*
# This test is not built with waffle, while we do build tests with waffle
spec@!opengl 1.1@windowoverlap

View File

@@ -1,67 +0,0 @@
version: 1
# Rules to match for a machine to qualify
target:
{% if tags %}
{% set b2ctags = tags.split(',') %}
tags:
{% for tag in b2ctags %}
- '{{ tag | trim }}'
{% endfor %}
{% endif %}
timeouts:
first_console_activity: # This limits the time it can take to receive the first console log
minutes: {{ timeout_first_minutes }}
retries: {{ timeout_first_retries }}
console_activity: # Reset every time we receive a message from the logs
minutes: {{ timeout_minutes }}
retries: {{ timeout_retries }}
boot_cycle:
minutes: {{ timeout_boot_minutes }}
retries: {{ timeout_boot_retries }}
overall: # Maximum time the job can take, not overrideable by the "continue" deployment
minutes: {{ timeout_overall_minutes }}
retries: 0
# no retries possible here
console_patterns:
session_end:
regex: >-
{{ session_end_regex }}
session_reboot:
regex: >-
{{ session_reboot_regex }}
job_success:
regex: >-
{{ job_success_regex }}
job_warn:
regex: >-
{{ job_warn_regex }}
# Environment to deploy
deployment:
# Initial boot
start:
kernel:
url: '{{ kernel_url }}'
cmdline: >
SALAD.machine_id={{ '{{' }} machine_id }}
console={{ '{{' }} local_tty_device }},115200 earlyprintk=vga,keep
loglevel={{ log_level }} no_hash_pointers
b2c.service="--privileged --tls-verify=false --pid=host docker://{{ '{{' }} fdo_proxy_registry }}/mupuf/valve-infra/telegraf-container:latest" b2c.hostname=dut-{{ '{{' }} machine.full_name }}
b2c.container="-ti --tls-verify=false docker://{{ '{{' }} fdo_proxy_registry }}/mupuf/valve-infra/machine_registration:latest check"
b2c.ntp_peer=10.42.0.1 b2c.pipefail b2c.cache_device=auto b2c.poweroff_delay={{ poweroff_delay }}
b2c.minio="gateway,{{ '{{' }} minio_url }},{{ '{{' }} job_bucket_access_key }},{{ '{{' }} job_bucket_secret_key }}"
b2c.volume="{{ '{{' }} job_bucket }}-results,mirror=gateway/{{ '{{' }} job_bucket }},pull_on=pipeline_start,push_on=changes,overwrite{% for excl in job_volume_exclusions %},exclude={{ excl }}{% endfor %},expiration=pipeline_end,preserve"
{% for volume in volumes %}
b2c.volume={{ volume }}
{% endfor %}
b2c.container="-v {{ '{{' }} job_bucket }}-results:{{ working_dir }} -w {{ working_dir }} {% for mount_volume in mount_volumes %} -v {{ mount_volume }}{% endfor %} --tls-verify=false docker://{{ local_container }} {{ container_cmd }}"
{% if cmdline_extras is defined %}
{{ cmdline_extras }}
{% endif %}
initramfs:
url: '{{ initramfs_url }}'

View File

@@ -1,101 +0,0 @@
#!/usr/bin/env python3
# Copyright © 2022 Valve Corporation
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
from jinja2 import Environment, FileSystemLoader
from argparse import ArgumentParser
from os import environ, path
parser = ArgumentParser()
parser.add_argument('--ci-job-id')
parser.add_argument('--container-cmd')
parser.add_argument('--initramfs-url')
parser.add_argument('--job-success-regex')
parser.add_argument('--job-warn-regex')
parser.add_argument('--kernel-url')
parser.add_argument('--log-level', type=int)
parser.add_argument('--poweroff-delay', type=int)
parser.add_argument('--session-end-regex')
parser.add_argument('--session-reboot-regex')
parser.add_argument('--tags', nargs='?', default='')
parser.add_argument('--template', default='b2c.yml.jinja2.jinja2')
parser.add_argument('--timeout-boot-minutes', type=int)
parser.add_argument('--timeout-boot-retries', type=int)
parser.add_argument('--timeout-first-minutes', type=int)
parser.add_argument('--timeout-first-retries', type=int)
parser.add_argument('--timeout-minutes', type=int)
parser.add_argument('--timeout-overall-minutes', type=int)
parser.add_argument('--timeout-retries', type=int)
parser.add_argument('--job-volume-exclusions', nargs='?', default='')
parser.add_argument('--volume', action='append')
parser.add_argument('--mount-volume', action='append')
parser.add_argument('--local-container', default=environ.get('B2C_LOCAL_CONTAINER', 'alpine:latest'))
parser.add_argument('--working-dir')
args = parser.parse_args()
env = Environment(loader=FileSystemLoader(path.dirname(args.template)),
trim_blocks=True, lstrip_blocks=True)
template = env.get_template(path.basename(args.template))
values = {}
values['ci_job_id'] = args.ci_job_id
values['container_cmd'] = args.container_cmd
values['initramfs_url'] = args.initramfs_url
values['job_success_regex'] = args.job_success_regex
values['job_warn_regex'] = args.job_warn_regex
values['kernel_url'] = args.kernel_url
values['log_level'] = args.log_level
values['poweroff_delay'] = args.poweroff_delay
values['session_end_regex'] = args.session_end_regex
values['session_reboot_regex'] = args.session_reboot_regex
values['tags'] = args.tags
values['template'] = args.template
values['timeout_boot_minutes'] = args.timeout_boot_minutes
values['timeout_boot_retries'] = args.timeout_boot_retries
values['timeout_first_minutes'] = args.timeout_first_minutes
values['timeout_first_retries'] = args.timeout_first_retries
values['timeout_minutes'] = args.timeout_minutes
values['timeout_overall_minutes'] = args.timeout_overall_minutes
values['timeout_retries'] = args.timeout_retries
if len(args.job_volume_exclusions) > 0:
exclusions = args.job_volume_exclusions.split(",")
values['job_volume_exclusions'] = [excl for excl in exclusions if len(excl) > 0]
if args.volume is not None:
values['volumes'] = args.volume
if args.mount_volume is not None:
values['mount_volumes'] = args.mount_volume
values['working_dir'] = args.working_dir
assert(len(args.local_container) > 0)
values['local_container'] = args.local_container.replace(
# Use the gateway's pull-through registry cache to reduce load on fd.o.
'registry.freedesktop.org', '{{ fdo_proxy_registry }}'
)
if 'B2C_KERNEL_CMDLINE_EXTRAS' in environ:
values['cmdline_extras'] = environ['B2C_KERNEL_CMDLINE_EXTRAS']
f = open(path.splitext(path.basename(args.template))[0], "w")
f.write(template.render(values))
f.close()

View File

@@ -1,2 +0,0 @@
[*.sh]
indent_size = 2

View File

@@ -1,26 +0,0 @@
#!/bin/sh
# This test script groups together a bunch of fast dEQP variant runs
# to amortize the cost of rebooting the board.
set -ex
EXIT=0
# Run reset tests without parallelism:
if ! env \
DEQP_RESULTS_DIR=results/reset \
FDO_CI_CONCURRENT=1 \
DEQP_CASELIST_FILTER='.*reset.*' \
/install/deqp-runner.sh; then
EXIT=1
fi
# Then run everything else with parallelism:
if ! env \
DEQP_RESULTS_DIR=results/nonrobustness \
DEQP_CASELIST_INV_FILTER='.*reset.*' \
/install/deqp-runner.sh; then
EXIT=1
fi
exit $EXIT

View File

@@ -1,13 +0,0 @@
#!/bin/sh
# Init entrypoint for bare-metal devices; calls common init code.
# First stage: very basic setup to bring up network and /dev etc
/init-stage1.sh
# Second stage: run jobs
test $? -eq 0 && /init-stage2.sh
# Wait until the job would have timed out anyway, so we don't spew a "init
# exited" panic.
sleep 6000

View File

@@ -1,17 +0,0 @@
#!/bin/bash
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must supply the PoE Interface to power down"
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must supply the PoE Switch host"
exit 1
fi
SNMP_KEY="1.3.6.1.4.1.9.9.402.1.2.1.1.1.$BM_POE_INTERFACE"
SNMP_ON="i 1"
SNMP_OFF="i 4"
snmpset -v2c -r 3 -t 30 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_OFF

View File

@@ -1,21 +0,0 @@
#!/bin/bash
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must supply the PoE Interface to power up"
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must supply the PoE Switch host"
exit 1
fi
set -ex
SNMP_KEY="1.3.6.1.4.1.9.9.402.1.2.1.1.1.$BM_POE_INTERFACE"
SNMP_ON="i 1"
SNMP_OFF="i 4"
snmpset -v2c -r 3 -t 10 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_OFF
sleep 3s
snmpset -v2c -r 3 -t 10 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_ON

View File

@@ -1,102 +0,0 @@
#!/bin/bash
# Boot script for Chrome OS devices attached to a servo debug connector, using
# NFS and TFTP to boot.
# We're run from the root of the repo, make a helper var for our paths
BM=$CI_PROJECT_DIR/install/bare-metal
CI_COMMON=$CI_PROJECT_DIR/install/common
# Runner config checks
if [ -z "$BM_SERIAL" ]; then
echo "Must set BM_SERIAL in your gitlab-runner config.toml [[runners]] environment"
echo "This is the CPU serial device."
exit 1
fi
if [ -z "$BM_SERIAL_EC" ]; then
echo "Must set BM_SERIAL in your gitlab-runner config.toml [[runners]] environment"
echo "This is the EC serial device for controlling board power"
exit 1
fi
if [ ! -d /nfs ]; then
echo "NFS rootfs directory needs to be mounted at /nfs by the gitlab runner"
exit 1
fi
if [ ! -d /tftp ]; then
echo "TFTP directory for this board needs to be mounted at /tftp by the gitlab runner"
exit 1
fi
# job config checks
if [ -z "$BM_KERNEL" ]; then
echo "Must set BM_KERNEL to your board's kernel FIT image"
exit 1
fi
if [ -z "$BM_ROOTFS" ]; then
echo "Must set BM_ROOTFS to your board's rootfs directory in the job's variables"
exit 1
fi
if [ -z "$BM_CMDLINE" ]; then
echo "Must set BM_CMDLINE to your board's kernel command line arguments"
exit 1
fi
set -ex
# Clear out any previous run's artifacts.
rm -rf results/
mkdir -p results
# Create the rootfs in the NFS directory. rm to make sure it's in a pristine
# state, since it's volume-mounted on the host.
rsync -a --delete $BM_ROOTFS/ /nfs/
mkdir -p /nfs/results
. $BM/rootfs-setup.sh /nfs
# Put the kernel/dtb image and the boot command line in the tftp directory for
# the board to find. For normal Mesa development, we build the kernel and
# store it in the docker container that this script is running in.
#
# However, container builds are expensive, so when you're hacking on the
# kernel, it's nice to be able to skip the half hour container build and plus
# moving that container to the runner. So, if BM_KERNEL is a URL, fetch it
# instead of looking in the container. Note that the kernel build should be
# the output of:
#
# make Image.lzma
#
# mkimage \
# -A arm64 \
# -f auto \
# -C lzma \
# -d arch/arm64/boot/Image.lzma \
# -b arch/arm64/boot/dts/qcom/sdm845-cheza-r3.dtb \
# cheza-image.img
rm -rf /tftp/*
if echo "$BM_KERNEL" | grep -q http; then
apt install -y wget
wget $BM_KERNEL -O /tftp/vmlinuz
else
cp $BM_KERNEL /tftp/vmlinuz
fi
echo "$BM_CMDLINE" > /tftp/cmdline
set +e
python3 $BM/cros_servo_run.py \
--cpu $BM_SERIAL \
--ec $BM_SERIAL_EC \
--test-timeout ${TEST_PHASE_TIMEOUT:-20}
ret=$?
set -e
# Bring artifacts back from the NFS dir to the build dir where gitlab-runner
# will look for them.
cp -Rp /nfs/results/. results/
exit $ret

View File

@@ -1,183 +0,0 @@
#!/usr/bin/env python3
#
# Copyright © 2020 Google LLC
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import argparse
import queue
import re
from serial_buffer import SerialBuffer
import sys
import threading
class CrosServoRun:
def __init__(self, cpu, ec, test_timeout):
self.cpu_ser = SerialBuffer(
cpu, "results/serial.txt", "R SERIAL-CPU> ")
# Merge the EC serial into the cpu_ser's line stream so that we can
# effectively poll on both at the same time and not have to worry about
self.ec_ser = SerialBuffer(
ec, "results/serial-ec.txt", "R SERIAL-EC> ", line_queue=self.cpu_ser.line_queue)
self.test_timeout = test_timeout
def close(self):
self.ec_ser.close()
self.cpu_ser.close()
def ec_write(self, s):
print("W SERIAL-EC> %s" % s)
self.ec_ser.serial.write(s.encode())
def cpu_write(self, s):
print("W SERIAL-CPU> %s" % s)
self.cpu_ser.serial.write(s.encode())
def print_error(self, message):
RED = '\033[0;31m'
NO_COLOR = '\033[0m'
print(RED + message + NO_COLOR)
def run(self):
# Flush any partial commands in the EC's prompt, then ask for a reboot.
self.ec_write("\n")
self.ec_write("reboot\n")
bootloader_done = False
# This is emitted right when the bootloader pauses to check for input.
# Emit a ^N character to request network boot, because we don't have a
# direct-to-netboot firmware on cheza.
for line in self.cpu_ser.lines(timeout=120, phase="bootloader"):
if re.search("load_archive: loading locale_en.bin", line):
self.cpu_write("\016")
bootloader_done = True
break
# If the board has a netboot firmware and we made it to booting the
# kernel, proceed to processing of the test run.
if re.search("Booting Linux", line):
bootloader_done = True
break
# The Cheza boards have issues with failing to bring up power to
# the system sometimes, possibly dependent on ambient temperature
# in the farm.
if re.search("POWER_GOOD not seen in time", line):
self.print_error(
"Detected intermittent poweron failure, restarting run...")
return 2
if not bootloader_done:
print("Failed to make it through bootloader, restarting run...")
return 2
tftp_failures = 0
for line in self.cpu_ser.lines(timeout=self.test_timeout, phase="test"):
if re.search("---. end Kernel panic", line):
return 1
# The Cheza firmware seems to occasionally get stuck looping in
# this error state during TFTP booting, possibly based on amount of
# network traffic around it, but it'll usually recover after a
# reboot.
if re.search("R8152: Bulk read error 0xffffffbf", line):
tftp_failures += 1
if tftp_failures >= 100:
self.print_error(
"Detected intermittent tftp failure, restarting run...")
return 2
# There are very infrequent bus errors during power management transitions
# on cheza, which we don't expect to be the case on future boards.
if re.search("Kernel panic - not syncing: Asynchronous SError Interrupt", line):
self.print_error(
"Detected cheza power management bus error, restarting run...")
return 2
# If the network device dies, it's probably not graphics's fault, just try again.
if re.search("NETDEV WATCHDOG", line):
self.print_error(
"Detected network device failure, restarting run...")
return 2
# These HFI response errors started appearing with the introduction
# of piglit runs. CosmicPenguin says:
#
# "message ID 106 isn't a thing, so likely what happened is that we
# got confused when parsing the HFI queue. If it happened on only
# one run, then memory corruption could be a possible clue"
#
# Given that it seems to trigger randomly near a GPU fault and then
# break many tests after that, just restart the whole run.
if re.search("a6xx_hfi_send_msg.*Unexpected message id .* on the response queue", line):
self.print_error(
"Detected cheza power management bus error, restarting run...")
return 2
if re.search("coreboot.*bootblock starting", line):
self.print_error(
"Detected spontaneous reboot, restarting run...")
return 2
if re.search("arm-smmu 5040000.iommu: TLB sync timed out -- SMMU may be deadlocked", line):
self.print_error("Detected cheza MMU fail, restarting run...")
return 2
result = re.search(r"hwci: mesa: (\S*)", line)
if result:
if result.group(1) == "pass":
return 0
else:
return 1
self.print_error(
"Reached the end of the CPU serial log without finding a result")
return 2
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--cpu', type=str,
help='CPU Serial device', required=True)
parser.add_argument(
'--ec', type=str, help='EC Serial device', required=True)
parser.add_argument(
'--test-timeout', type=int, help='Test phase timeout (minutes)', required=True)
args = parser.parse_args()
servo = CrosServoRun(args.cpu, args.ec, args.test_timeout * 60)
while True:
retval = servo.run()
if retval != 2:
break
# power down the CPU on the device
servo.ec_write("power off\n")
servo.close()
sys.exit(retval)
if __name__ == '__main__':
main()

View File

@@ -1,10 +0,0 @@
#!/bin/bash
relay=$1
if [ -z "$relay" ]; then
echo "Must supply a relay arg"
exit 1
fi
$CI_PROJECT_DIR/install/bare-metal/eth008-power-relay.py $ETH_HOST $ETH_PORT off $relay

View File

@@ -1,28 +0,0 @@
#!/usr/bin/python3
import sys
import socket
host = sys.argv[1]
port = sys.argv[2]
mode = sys.argv[3]
relay = sys.argv[4]
msg = None
if mode == "on":
msg = b'\x20'
else:
msg = b'\x21'
msg += int(relay).to_bytes(1, 'big')
msg += b'\x00'
c = socket.create_connection((host, int(port)))
c.sendall(msg)
data = c.recv(1)
c.close()
if data[0] == 1:
print('Command failed')
sys.exit(1)

View File

@@ -1,12 +0,0 @@
#!/bin/bash
relay=$1
if [ -z "$relay" ]; then
echo "Must supply a relay arg"
exit 1
fi
$CI_PROJECT_DIR/install/bare-metal/eth008-power-relay.py $ETH_HOST $ETH_PORT off $relay
sleep 5
$CI_PROJECT_DIR/install/bare-metal/eth008-power-relay.py $ETH_HOST $ETH_PORT on $relay

View File

@@ -1,30 +0,0 @@
#!/bin/bash
set -e
STRINGS=$(mktemp)
ERRORS=$(mktemp)
trap "rm $STRINGS; rm $ERRORS;" EXIT
FILE=$1
shift 1
while getopts "f:e:" opt; do
case $opt in
f) echo "$OPTARG" >> $STRINGS;;
e) echo "$OPTARG" >> $STRINGS ; echo "$OPTARG" >> $ERRORS;;
esac
done
shift $((OPTIND -1))
echo "Waiting for $FILE to say one of following strings"
cat $STRINGS
while ! egrep -wf $STRINGS $FILE; do
sleep 2
done
if egrep -wf $ERRORS $FILE; then
exit 1
fi

View File

@@ -1,149 +0,0 @@
#!/bin/bash
BM=$CI_PROJECT_DIR/install/bare-metal
CI_COMMON=$CI_PROJECT_DIR/install/common
if [ -z "$BM_SERIAL" -a -z "$BM_SERIAL_SCRIPT" ]; then
echo "Must set BM_SERIAL OR BM_SERIAL_SCRIPT in your gitlab-runner config.toml [[runners]] environment"
echo "BM_SERIAL:"
echo " This is the serial device to talk to for waiting for fastboot to be ready and logging from the kernel."
echo "BM_SERIAL_SCRIPT:"
echo " This is a shell script to talk to for waiting for fastboot to be ready and logging from the kernel."
exit 1
fi
if [ -z "$BM_POWERUP" ]; then
echo "Must set BM_POWERUP in your gitlab-runner config.toml [[runners]] environment"
echo "This is a shell script that should reset the device and begin its boot sequence"
echo "such that it pauses at fastboot."
exit 1
fi
if [ -z "$BM_POWERDOWN" ]; then
echo "Must set BM_POWERDOWN in your gitlab-runner config.toml [[runners]] environment"
echo "This is a shell script that should power off the device."
exit 1
fi
if [ -z "$BM_FASTBOOT_SERIAL" ]; then
echo "Must set BM_FASTBOOT_SERIAL in your gitlab-runner config.toml [[runners]] environment"
echo "This must be the a stable-across-resets fastboot serial number."
exit 1
fi
if [ -z "$BM_KERNEL" ]; then
echo "Must set BM_KERNEL to your board's kernel vmlinuz or Image.gz in the job's variables:"
exit 1
fi
if [ -z "$BM_DTB" ]; then
echo "Must set BM_DTB to your board's DTB file in the job's variables:"
exit 1
fi
if [ -z "$BM_ROOTFS" ]; then
echo "Must set BM_ROOTFS to your board's rootfs directory in the job's variables:"
exit 1
fi
if echo $BM_CMDLINE | grep -q "root=/dev/nfs"; then
BM_FASTBOOT_NFSROOT=1
fi
set -ex
# Clear out any previous run's artifacts.
rm -rf results/
mkdir -p results/
if [ -n "$BM_FASTBOOT_NFSROOT" ]; then
# Create the rootfs in the NFS directory. rm to make sure it's in a pristine
# state, since it's volume-mounted on the host.
rsync -a --delete $BM_ROOTFS/ /nfs/
mkdir -p /nfs/results
. $BM/rootfs-setup.sh /nfs
# Root on NFS, no need for an initramfs.
rm -f rootfs.cpio.gz
touch rootfs.cpio
gzip rootfs.cpio
else
# Create the rootfs in a temp dir
rsync -a --delete $BM_ROOTFS/ rootfs/
. $BM/rootfs-setup.sh rootfs
# Finally, pack it up into a cpio rootfs. Skip the vulkan CTS since none of
# these devices use it and it would take up space in the initrd.
if [ -n "$PIGLIT_PROFILES" ]; then
EXCLUDE_FILTER="deqp|arb_gpu_shader5|arb_gpu_shader_fp64|arb_gpu_shader_int64|glsl-4.[0123456]0|arb_tessellation_shader"
else
EXCLUDE_FILTER="piglit|python"
fi
pushd rootfs
find -H | \
egrep -v "external/(openglcts|vulkancts|amber|glslang|spirv-tools)" |
egrep -v "traces-db|apitrace|renderdoc" | \
egrep -v $EXCLUDE_FILTER | \
cpio -H newc -o | \
xz --check=crc32 -T4 - > $CI_PROJECT_DIR/rootfs.cpio.gz
popd
fi
# Make the combined kernel image and dtb for passing to fastboot. For normal
# Mesa development, we build the kernel and store it in the docker container
# that this script is running in.
#
# However, container builds are expensive, so when you're hacking on the
# kernel, it's nice to be able to skip the half hour container build and plus
# moving that container to the runner. So, if BM_KERNEL+BM_DTB are URLs,
# fetch them instead of looking in the container.
if echo "$BM_KERNEL $BM_DTB" | grep -q http; then
apt install -y wget
wget $BM_KERNEL -O kernel
wget $BM_DTB -O dtb
cat kernel dtb > Image.gz-dtb
rm kernel dtb
else
cat $BM_KERNEL $BM_DTB > Image.gz-dtb
fi
mkdir -p artifacts
abootimg \
--create artifacts/fastboot.img \
-k Image.gz-dtb \
-r rootfs.cpio.gz \
-c cmdline="$BM_CMDLINE"
rm Image.gz-dtb
export PATH=$BM:$PATH
# Start background command for talking to serial if we have one.
if [ -n "$BM_SERIAL_SCRIPT" ]; then
$BM_SERIAL_SCRIPT > results/serial-output.txt &
while [ ! -e results/serial-output.txt ]; do
sleep 1
done
fi
set +e
$BM/fastboot_run.py \
--dev="$BM_SERIAL" \
--test-timeout ${TEST_PHASE_TIMEOUT:-20} \
--fbserial="$BM_FASTBOOT_SERIAL" \
--powerup="$BM_POWERUP" \
--powerdown="$BM_POWERDOWN"
ret=$?
set -e
if [ -n "$BM_FASTBOOT_NFSROOT" ]; then
# Bring artifacts back from the NFS dir to the build dir where gitlab-runner
# will look for them.
cp -Rp /nfs/results/. results/
fi
exit $ret

View File

@@ -1,164 +0,0 @@
#!/usr/bin/env python3
#
# Copyright © 2020 Google LLC
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import argparse
import subprocess
import re
from serial_buffer import SerialBuffer
import sys
import threading
class FastbootRun:
def __init__(self, args, test_timeout):
self.powerup = args.powerup
self.ser = SerialBuffer(
args.dev, "results/serial-output.txt", "R SERIAL> ")
self.fastboot = "fastboot boot -s {ser} artifacts/fastboot.img".format(
ser=args.fbserial)
self.test_timeout = test_timeout
def close(self):
self.ser.close()
def print_error(self, message):
RED = '\033[0;31m'
NO_COLOR = '\033[0m'
print(RED + message + NO_COLOR)
def logged_system(self, cmd, timeout=60):
print("Running '{}'".format(cmd))
try:
return subprocess.call(cmd, shell=True, timeout=timeout)
except subprocess.TimeoutExpired:
self.print_error("timeout, restarting run...")
return 2
def run(self):
if ret := self.logged_system(self.powerup):
return ret
fastboot_ready = False
for line in self.ser.lines(timeout=2 * 60, phase="bootloader"):
if re.search("fastboot: processing commands", line) or \
re.search("Listening for fastboot command on", line):
fastboot_ready = True
break
if re.search("data abort", line):
self.print_error(
"Detected crash during boot, restarting run...")
return 2
if not fastboot_ready:
self.print_error(
"Failed to get to fastboot prompt, restarting run...")
return 2
if ret := self.logged_system(self.fastboot):
return ret
print_more_lines = -1
for line in self.ser.lines(timeout=self.test_timeout, phase="test"):
if print_more_lines == 0:
return 2
if print_more_lines > 0:
print_more_lines -= 1
if re.search("---. end Kernel panic", line):
return 1
# The db820c boards intermittently reboot. Just restart the run
# if we see a reboot after we got past fastboot.
if re.search("PON REASON", line):
self.print_error(
"Detected spontaneous reboot, restarting run...")
return 2
# db820c sometimes wedges around iommu fault recovery
if re.search("watchdog: BUG: soft lockup - CPU.* stuck", line):
self.print_error(
"Detected kernel soft lockup, restarting run...")
return 2
# If the network device dies, it's probably not graphics's fault, just try again.
if re.search("NETDEV WATCHDOG", line):
self.print_error(
"Detected network device failure, restarting run...")
return 2
# A3xx recovery doesn't quite work. Sometimes the GPU will get
# wedged and recovery will fail (because power can't be reset?)
# This assumes that the jobs are sufficiently well-tested that GPU
# hangs aren't always triggered, so just try again. But print some
# more lines first so that we get better information on the cause
# of the hang. Once a hang happens, it's pretty chatty.
if "[drm:adreno_recover] *ERROR* gpu hw init failed: -22" in line:
self.print_error(
"Detected GPU hang, restarting run...")
if print_more_lines == -1:
print_more_lines = 30
result = re.search(r"hwci: mesa: (\S*)", line)
if result:
if result.group(1) == "pass":
return 0
else:
return 1
self.print_error(
"Reached the end of the CPU serial log without finding a result, restarting run...")
return 2
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
'--dev', type=str, help='Serial device (otherwise reading from serial-output.txt)')
parser.add_argument('--powerup', type=str,
help='shell command for rebooting', required=True)
parser.add_argument('--powerdown', type=str,
help='shell command for powering off', required=True)
parser.add_argument('--fbserial', type=str,
help='fastboot serial number of the board', required=True)
parser.add_argument('--test-timeout', type=int,
help='Test phase timeout (minutes)', required=True)
args = parser.parse_args()
fastboot = FastbootRun(args, args.test_timeout * 60)
while True:
retval = fastboot.run()
fastboot.close()
if retval != 2:
break
fastboot = FastbootRun(args, args.test_timeout * 60)
fastboot.logged_system(args.powerdown)
sys.exit(retval)
if __name__ == '__main__':
main()

View File

@@ -1,10 +0,0 @@
#!/bin/bash
relay=$1
if [ -z "$relay" ]; then
echo "Must supply a relay arg"
exit 1
fi
$CI_PROJECT_DIR/install/bare-metal/google-power-relay.py off $relay

View File

@@ -1,19 +0,0 @@
#!/usr/bin/python3
import sys
import serial
mode = sys.argv[1]
relay = sys.argv[2]
# For our relays, "off" means "board is powered".
mode_swap = {
"on": "off",
"off": "on",
}
mode = mode_swap[mode]
ser = serial.Serial('/dev/ttyACM0', 115200, timeout=2)
command = "relay {} {}\n\r".format(mode, relay)
ser.write(command.encode())
ser.close()

View File

@@ -1,12 +0,0 @@
#!/bin/bash
relay=$1
if [ -z "$relay" ]; then
echo "Must supply a relay arg"
exit 1
fi
$CI_PROJECT_DIR/install/bare-metal/google-power-relay.py off $relay
sleep 5
$CI_PROJECT_DIR/install/bare-metal/google-power-relay.py on $relay

View File

@@ -1,17 +0,0 @@
#!/bin/bash
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must supply the PoE Interface to power up"
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must supply the PoE Switch host"
exit 1
fi
SNMP_KEY="SNMPv2-SMI::mib-2.105.1.1.1.3.1.`expr 48 + $BM_POE_INTERFACE`"
SNMP_ON="i 1"
SNMP_OFF="i 2"
flock /var/run/poe.lock -c "snmpset -v2c -r 3 -t 30 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_OFF"

View File

@@ -1,19 +0,0 @@
#!/bin/bash
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must supply the PoE Interface to power up"
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must supply the PoE Switch host"
exit 1
fi
SNMP_KEY="SNMPv2-SMI::mib-2.105.1.1.1.3.1.`expr 48 + $BM_POE_INTERFACE`"
SNMP_ON="i 1"
SNMP_OFF="i 2"
flock /var/run/poe.lock -c "snmpset -v2c -r 3 -t 30 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_OFF"
sleep 3s
flock /var/run/poe.lock -c "snmpset -v2c -r 3 -t 30 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_ON"

View File

@@ -1,149 +0,0 @@
#!/bin/bash
# Boot script for devices attached to a PoE switch, using NFS for the root
# filesystem.
# We're run from the root of the repo, make a helper var for our paths
BM=$CI_PROJECT_DIR/install/bare-metal
CI_COMMON=$CI_PROJECT_DIR/install/common
# Runner config checks
if [ -z "$BM_SERIAL" ]; then
echo "Must set BM_SERIAL in your gitlab-runner config.toml [[runners]] environment"
echo "This is the serial port to listen the device."
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must set BM_POE_ADDRESS in your gitlab-runner config.toml [[runners]] environment"
echo "This is the PoE switch address to connect for powering up/down devices."
exit 1
fi
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must set BM_POE_INTERFACE in your gitlab-runner config.toml [[runners]] environment"
echo "This is the PoE switch interface where the device is connected."
exit 1
fi
if [ -z "$BM_POWERUP" ]; then
echo "Must set BM_POWERUP in your gitlab-runner config.toml [[runners]] environment"
echo "This is a shell script that should power up the device and begin its boot sequence."
exit 1
fi
if [ -z "$BM_POWERDOWN" ]; then
echo "Must set BM_POWERDOWN in your gitlab-runner config.toml [[runners]] environment"
echo "This is a shell script that should power off the device."
exit 1
fi
if [ ! -d /nfs ]; then
echo "NFS rootfs directory needs to be mounted at /nfs by the gitlab runner"
exit 1
fi
if [ ! -d /tftp ]; then
echo "TFTP directory for this board needs to be mounted at /tftp by the gitlab runner"
exit 1
fi
# job config checks
if [ -z "$BM_ROOTFS" ]; then
echo "Must set BM_ROOTFS to your board's rootfs directory in the job's variables"
exit 1
fi
if [ -z "$BM_BOOTFS" ]; then
echo "Must set /boot files for the TFTP boot in the job's variables"
exit 1
fi
if [ -z "$BM_CMDLINE" ]; then
echo "Must set BM_CMDLINE to your board's kernel command line arguments"
exit 1
fi
if [ -z "$BM_BOOTCONFIG" ]; then
echo "Must set BM_BOOTCONFIG to your board's required boot configuration arguments"
exit 1
fi
set -ex
# Clear out any previous run's artifacts.
rm -rf results/
mkdir -p results
# Create the rootfs in the NFS directory. rm to make sure it's in a pristine
# state, since it's volume-mounted on the host.
rsync -a --delete $BM_ROOTFS/ /nfs/
# If BM_BOOTFS is an URL, download it
if echo $BM_BOOTFS | grep -q http; then
apt install -y wget
wget ${FDO_HTTP_CACHE_URI:-}$BM_BOOTFS -O /tmp/bootfs.tar
BM_BOOTFS=/tmp/bootfs.tar
fi
# If BM_BOOTFS is a file, assume it is a tarball and uncompress it
if [ -f $BM_BOOTFS ]; then
mkdir -p /tmp/bootfs
tar xf $BM_BOOTFS -C /tmp/bootfs
BM_BOOTFS=/tmp/bootfs
fi
# Install kernel modules (it could be either in /lib/modules or
# /usr/lib/modules, but we want to install in the latter)
[ -d $BM_BOOTFS/usr/lib/modules ] && rsync -a $BM_BOOTFS/usr/lib/modules/ /nfs/usr/lib/modules/
[ -d $BM_BOOTFS/lib/modules ] && rsync -a $BM_BOOTFS/lib/modules/ /nfs/lib/modules/
# Install kernel image + bootloader files
rsync -aL --delete $BM_BOOTFS/boot/ /tftp/
# Set up the pxelinux config for Jetson Nano
mkdir -p /tftp/pxelinux.cfg
cat <<EOF >/tftp/pxelinux.cfg/default-arm-tegra210-p3450-0000
PROMPT 0
TIMEOUT 30
DEFAULT primary
MENU TITLE jetson nano boot options
LABEL primary
MENU LABEL CI kernel on TFTP
LINUX Image
FDT tegra210-p3450-0000.dtb
APPEND \${cbootargs} $BM_CMDLINE
EOF
# Create the rootfs in the NFS directory
mkdir -p /nfs/results
. $BM/rootfs-setup.sh /nfs
echo "$BM_CMDLINE" > /tftp/cmdline.txt
# Add some required options in config.txt
printf "$BM_BOOTCONFIG" >> /tftp/config.txt
set +e
ATTEMPTS=10
while [ $((ATTEMPTS--)) -gt 0 ]; do
python3 $BM/poe_run.py \
--dev="$BM_SERIAL" \
--powerup="$BM_POWERUP" \
--powerdown="$BM_POWERDOWN" \
--test-timeout ${TEST_PHASE_TIMEOUT:-20}
ret=$?
if [ $ret -eq 2 ]; then
echo "Did not detect boot sequence, retrying..."
else
ATTEMPTS=0
fi
done
set -e
# Bring artifacts back from the NFS dir to the build dir where gitlab-runner
# will look for them.
cp -Rp /nfs/results/. results/
exit $ret

View File

@@ -1,110 +0,0 @@
#!/usr/bin/env python3
#
# Copyright © 2020 Igalia, S.L.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import argparse
import os
import re
from serial_buffer import SerialBuffer
import sys
import threading
class PoERun:
def __init__(self, args, test_timeout):
self.powerup = args.powerup
self.powerdown = args.powerdown
self.ser = SerialBuffer(
args.dev, "results/serial-output.txt", "")
self.test_timeout = test_timeout
def print_error(self, message):
RED = '\033[0;31m'
NO_COLOR = '\033[0m'
print(RED + message + NO_COLOR)
def logged_system(self, cmd):
print("Running '{}'".format(cmd))
return os.system(cmd)
def run(self):
if self.logged_system(self.powerup) != 0:
return 1
boot_detected = False
for line in self.ser.lines(timeout=5 * 60, phase="bootloader"):
if re.search("Booting Linux", line):
boot_detected = True
break
if not boot_detected:
self.print_error(
"Something wrong; couldn't detect the boot start up sequence")
return 2
for line in self.ser.lines(timeout=self.test_timeout, phase="test"):
if re.search("---. end Kernel panic", line):
return 1
# Binning memory problems
if re.search("binner overflow mem", line):
self.print_error("Memory overflow in the binner; GPU hang")
return 1
if re.search("nouveau 57000000.gpu: bus: MMIO read of 00000000 FAULT at 137000", line):
self.print_error("nouveau jetson boot bug, retrying.")
return 2
result = re.search(r"hwci: mesa: (\S*)", line)
if result:
if result.group(1) == "pass":
return 0
else:
return 1
self.print_error(
"Reached the end of the CPU serial log without finding a result")
return 2
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--dev', type=str,
help='Serial device to monitor', required=True)
parser.add_argument('--powerup', type=str,
help='shell command for rebooting', required=True)
parser.add_argument('--powerdown', type=str,
help='shell command for powering off', required=True)
parser.add_argument(
'--test-timeout', type=int, help='Test phase timeout (minutes)', required=True)
args = parser.parse_args()
poe = PoERun(args, args.test_timeout * 60)
retval = poe.run()
poe.logged_system(args.powerdown)
sys.exit(retval)
if __name__ == '__main__':
main()

View File

@@ -1,30 +0,0 @@
#!/bin/bash
rootfs_dst=$1
mkdir -p $rootfs_dst/results
# Set up the init script that brings up the system.
cp $BM/bm-init.sh $rootfs_dst/init
cp $CI_COMMON/init*.sh $rootfs_dst/
# Make JWT token available as file in the bare-metal storage to enable access
# to MinIO
cp "${CI_JOB_JWT_FILE}" "${rootfs_dst}${CI_JOB_JWT_FILE}"
cp $CI_COMMON/capture-devcoredump.sh $rootfs_dst/
cp $CI_COMMON/intel-gpu-freq.sh $rootfs_dst/
set +x
# Pass through relevant env vars from the gitlab job to the baremetal init script
"$CI_COMMON"/generate-env.sh > $rootfs_dst/set-job-env-vars.sh
chmod +x $rootfs_dst/set-job-env-vars.sh
echo "Variables passed through:"
cat $rootfs_dst/set-job-env-vars.sh
set -x
# Add the Mesa drivers we built, and make a consistent symlink to them.
mkdir -p $rootfs_dst/$CI_PROJECT_DIR
rsync -aH --delete $CI_PROJECT_DIR/install/ $rootfs_dst/$CI_PROJECT_DIR/install/

View File

@@ -1,185 +0,0 @@
#!/usr/bin/env python3
#
# Copyright © 2020 Google LLC
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import argparse
from datetime import datetime, timezone
import queue
import serial
import threading
import time
class SerialBuffer:
def __init__(self, dev, filename, prefix, timeout=None, line_queue=None):
self.filename = filename
self.dev = dev
if dev:
self.f = open(filename, "wb+")
self.serial = serial.Serial(dev, 115200, timeout=timeout)
else:
self.f = open(filename, "rb")
self.serial = None
self.byte_queue = queue.Queue()
# allow multiple SerialBuffers to share a line queue so you can merge
# servo's CPU and EC streams into one thing to watch the boot/test
# progress on.
if line_queue:
self.line_queue = line_queue
else:
self.line_queue = queue.Queue()
self.prefix = prefix
self.timeout = timeout
self.sentinel = object()
self.closing = False
if self.dev:
self.read_thread = threading.Thread(
target=self.serial_read_thread_loop, daemon=True)
else:
self.read_thread = threading.Thread(
target=self.serial_file_read_thread_loop, daemon=True)
self.read_thread.start()
self.lines_thread = threading.Thread(
target=self.serial_lines_thread_loop, daemon=True)
self.lines_thread.start()
def close(self):
self.closing = True
if self.serial:
self.serial.cancel_read()
self.read_thread.join()
self.lines_thread.join()
if self.serial:
self.serial.close()
# Thread that just reads the bytes from the serial device to try to keep from
# buffer overflowing it. If nothing is received in 1 minute, it finalizes.
def serial_read_thread_loop(self):
greet = "Serial thread reading from %s\n" % self.dev
self.byte_queue.put(greet.encode())
while not self.closing:
try:
b = self.serial.read()
if len(b) == 0:
break
self.byte_queue.put(b)
except Exception as err:
print(self.prefix + str(err))
break
self.byte_queue.put(self.sentinel)
# Thread that just reads the bytes from the file of serial output that some
# other process is appending to.
def serial_file_read_thread_loop(self):
greet = "Serial thread reading from %s\n" % self.filename
self.byte_queue.put(greet.encode())
while not self.closing:
line = self.f.readline()
if line:
self.byte_queue.put(line)
else:
time.sleep(0.1)
self.byte_queue.put(self.sentinel)
# Thread that processes the stream of bytes to 1) log to stdout, 2) log to
# file, 3) add to the queue of lines to be read by program logic
def serial_lines_thread_loop(self):
line = bytearray()
while True:
bytes = self.byte_queue.get(block=True)
if bytes == self.sentinel:
self.read_thread.join()
self.line_queue.put(self.sentinel)
break
# Write our data to the output file if we're the ones reading from
# the serial device
if self.dev:
self.f.write(bytes)
self.f.flush()
for b in bytes:
line.append(b)
if b == b'\n'[0]:
line = line.decode(errors="replace")
time = datetime.now().strftime('%y-%m-%d %H:%M:%S')
print("{endc}{time} {prefix}{line}".format(
time=time, prefix=self.prefix, line=line, endc='\033[0m'), flush=True, end='')
self.line_queue.put(line)
line = bytearray()
def lines(self, timeout=None, phase=None):
start_time = time.monotonic()
while True:
read_timeout = None
if timeout:
read_timeout = timeout - (time.monotonic() - start_time)
if read_timeout <= 0:
print("read timeout waiting for serial during {}".format(phase))
self.close()
break
try:
line = self.line_queue.get(timeout=read_timeout)
except queue.Empty:
print("read timeout waiting for serial during {}".format(phase))
self.close()
break
if line == self.sentinel:
print("End of serial output")
self.lines_thread.join()
break
yield line
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--dev', type=str, help='Serial device')
parser.add_argument('--file', type=str,
help='Filename for serial output', required=True)
parser.add_argument('--prefix', type=str,
help='Prefix for logging serial to stdout', nargs='?')
args = parser.parse_args()
ser = SerialBuffer(args.dev, args.file, args.prefix or "")
for line in ser.lines():
# We're just using this as a logger, so eat the produced lines and drop
# them
pass
if __name__ == '__main__':
main()
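
A rough usage sketch of the SerialBuffer class above, showing the shared line_queue mentioned in the constructor comment and the lines() generator with a per-phase timeout. The device paths, filenames and timeouts are made up, and the serial_buffer module name is assumed:

import queue
from serial_buffer import SerialBuffer  # assumed module name for the file above

# Merge the CPU and EC consoles into one stream of lines.
shared_lines = queue.Queue()
cpu = SerialBuffer("/dev/ttyUSB0", "results/serial-cpu.txt", "cpu ",
                   timeout=60, line_queue=shared_lines)
ec = SerialBuffer("/dev/ttyUSB1", "results/serial-ec.txt", "ec ",
                  timeout=60, line_queue=shared_lines)

# Consume the merged output, giving up if nothing arrives for 5 minutes.
for line in cpu.lines(timeout=300, phase="boot"):
    if "login:" in line:
        break

cpu.close()
ec.close()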


@@ -1,41 +0,0 @@
#!/usr/bin/python3
# Copyright © 2020 Christian Gmeiner
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
# Tiny script to read bytes from telnet and write the output to stdout, with a
# buffer in between so we don't lose serial output from the device's buffer.
#
import sys
import telnetlib
host = sys.argv[1]
port = sys.argv[2]
tn = telnetlib.Telnet(host, port, 1000000)
while True:
bytes = tn.read_some()
sys.stdout.buffer.write(bytes)
sys.stdout.flush()
tn.close()
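
The telnetlib module used above was deprecated in Python 3.11 and removed in 3.13, so a port of this helper would need a plain socket instead. A minimal sketch of the same loop over a raw TCP connection (this skips telnet option negotiation, which is usually fine for raw serial consoles):

import socket
import sys

host, port = sys.argv[1], int(sys.argv[2])
sock = socket.create_connection((host, port))
while True:
    data = sock.recv(4096)
    if not data:
        break
    sys.stdout.buffer.write(data)
    sys.stdout.flush()
sock.close()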


@@ -1,303 +0,0 @@
#!/usr/bin/env python3
# Copyright © 2020 - 2022 Collabora Ltd.
# Authors:
# Tomeu Vizoso <tomeu.vizoso@collabora.com>
# David Heidelberg <david.heidelberg@collabora.com>
#
# TODO GraphQL for dependencies
# SPDX-License-Identifier: MIT
"""
Helper script to restrict running only required CI jobs
and show the job(s) logs.
"""
from typing import Optional
from functools import partial
from concurrent.futures import ThreadPoolExecutor
import os
import re
import time
import argparse
import sys
import gitlab
from colorama import Fore, Style
REFRESH_WAIT_LOG = 10
REFRESH_WAIT_JOBS = 6
URL_START = "\033]8;;"
URL_END = "\033]8;;\a"
STATUS_COLORS = {
"created": "",
"running": Fore.BLUE,
"success": Fore.GREEN,
"failed": Fore.RED,
"canceled": Fore.MAGENTA,
"manual": "",
"pending": "",
"skipped": "",
}
# TODO: This hardcoded list should be replaced by querying the pipeline's
# dependency graph to see which jobs the target jobs need
DEPENDENCIES = [
"debian/x86_build-base",
"debian/x86_build",
"debian/x86_test-base",
"debian/x86_test-gl",
"debian/arm_build",
"debian/arm_test",
"kernel+rootfs_amd64",
"kernel+rootfs_arm64",
"kernel+rootfs_armhf",
"debian-testing",
"debian-arm64",
]
COMPLETED_STATUSES = ["success", "failed"]
def get_gitlab_project(glab, name: str):
"""Finds a specified gitlab project for given user"""
glab.auth()
username = glab.user.username
return glab.projects.get(f"{username}/mesa")
def wait_for_pipeline(project, sha: str):
"""await until pipeline appears in Gitlab"""
print("⏲ for the pipeline to appear..", end="")
while True:
pipelines = project.pipelines.list(sha=sha)
if pipelines:
print("", flush=True)
return pipelines[0]
print("", end=".", flush=True)
time.sleep(1)
def print_job_status(job) -> None:
"""It prints a nice, colored job status with a link to the job."""
if job.status == "canceled":
return
print(
STATUS_COLORS[job.status]
+ "🞋 job "
+ URL_START
+ f"{job.web_url}\a{job.name}"
+ URL_END
+ f" :: {job.status}"
+ Style.RESET_ALL
)
def print_job_status_change(job) -> None:
"""It reports job status changes."""
if job.status == "canceled":
return
print(
STATUS_COLORS[job.status]
+ "🗘 job "
+ URL_START
+ f"{job.web_url}\a{job.name}"
+ URL_END
+ f" has new status: {job.status}"
+ Style.RESET_ALL
)
def pretty_wait(sec: int) -> None:
"""shows progressbar in dots"""
for val in range(sec, 0, -1):
print(f"{val} seconds", end="\r")
time.sleep(1)
def monitor_pipeline(
project, pipeline, target_job: Optional[str], dependencies, force_manual: bool
) -> tuple[Optional[int], Optional[int]]:
"""Monitors pipeline and delegate canceling jobs"""
statuses = {}
target_statuses = {}
if not dependencies:
dependencies = []
dependencies.extend(DEPENDENCIES)
if target_job:
target_jobs_regex = re.compile(target_job.strip())
while True:
to_cancel = []
for job in pipeline.jobs.list(all=True, sort="desc"):
# target jobs
if target_job and target_jobs_regex.match(job.name):
if force_manual and job.status == "manual":
enable_job(project, job, True)
if (job.id not in target_statuses) or (
job.status not in target_statuses[job.id]
):
print_job_status_change(job)
target_statuses[job.id] = job.status
else:
print_job_status(job)
continue
# all jobs
if (job.id not in statuses) or (job.status not in statuses[job.id]):
print_job_status_change(job)
statuses[job.id] = job.status
# dependencies and cancelling the rest
if job.name in dependencies:
if job.status == "manual":
enable_job(project, job, False)
elif target_job and job.status not in [
"canceled",
"success",
"failed",
"skipped",
]:
to_cancel.append(job)
if target_job:
cancel_jobs(project, to_cancel)
print("---------------------------------", flush=False)
if len(target_statuses) == 1 and {"running"}.intersection(
target_statuses.values()
):
return next(iter(target_statuses)), None
if {"failed", "canceled"}.intersection(target_statuses.values()):
return None, 1
if {"success", "manual"}.issuperset(target_statuses.values()):
return None, 0
pretty_wait(REFRESH_WAIT_JOBS)
def enable_job(project, job, target: bool) -> None:
"""enable manual job"""
pjob = project.jobs.get(job.id, lazy=True)
pjob.play()
if target:
jtype = "🞋 "
else:
jtype = "(dependency)"
print(Fore.MAGENTA + f"{jtype} job {job.name} manually enabled" + Style.RESET_ALL)
def cancel_job(project, job) -> None:
"""Cancel GitLab job"""
pjob = project.jobs.get(job.id, lazy=True)
pjob.cancel()
print(f"{job.name}")
def cancel_jobs(project, to_cancel) -> None:
"""Cancel unwanted GitLab jobs"""
if not to_cancel:
return
with ThreadPoolExecutor(max_workers=6) as exe:
part = partial(cancel_job, project)
exe.map(part, to_cancel)
def print_log(project, job_id) -> None:
"""Print job log into output"""
printed_lines = 0
while True:
job = project.jobs.get(job_id)
# GitLab's REST API doesn't offer pagination for logs, so we have to refetch it all
lines = job.trace().decode("unicode_escape").splitlines()
for line in lines[printed_lines:]:
print(line)
printed_lines = len(lines)
if job.status in COMPLETED_STATUSES:
print(Fore.GREEN + f"Job finished: {job.web_url}" + Style.RESET_ALL)
return
pretty_wait(REFRESH_WAIT_LOG)
def parse_args() -> argparse.Namespace:
"""Parse args"""
parser = argparse.ArgumentParser(
description="Tool to trigger a subset of container jobs "
+ "and monitor the progress of a test job",
epilog="Example: mesa-monitor.py --rev $(git rev-parse HEAD) "
+ '--target ".*traces" ',
)
parser.add_argument("--target", metavar="target-job", help="Target job")
parser.add_argument("--deps", nargs="+", help="Job dependencies")
parser.add_argument(
"--rev", metavar="revision", help="repository git revision", required=True
)
parser.add_argument(
"--token",
metavar="token",
help="force GitLab token, otherwise it's read from ~/.config/gitlab-token",
)
parser.add_argument(
"--force-manual", action="store_true", help="Force jobs marked as manual"
)
return parser.parse_args()
def read_token(token_arg: Optional[str]) -> str:
"""pick token from args or file"""
if token_arg:
return token_arg
return (
open(os.path.expanduser("~/.config/gitlab-token"), encoding="utf-8")
.readline()
.rstrip()
)
if __name__ == "__main__":
try:
t_start = time.perf_counter()
args = parse_args()
token = read_token(args.token)
gl = gitlab.Gitlab(url="https://gitlab.freedesktop.org", private_token=token)
cur_project = get_gitlab_project(gl, "mesa")
print(f"Revision: {args.rev}")
pipe = wait_for_pipeline(cur_project, args.rev)
print(f"Pipeline: {pipe.web_url}")
if args.target:
print("🞋 job: " + Fore.BLUE + args.target + Style.RESET_ALL)
print(f"Extra dependencies: {args.deps}")
target_job_id, ret = monitor_pipeline(
cur_project, pipe, args.target, args.deps, args.force_manual
)
if target_job_id:
print_log(cur_project, target_job_id)
t_end = time.perf_counter()
spend_minutes = (t_end - t_start) / 60
print(f"⏲ Duration of script execution: {spend_minutes:0.1f} minutes")
sys.exit(ret)
except KeyboardInterrupt:
sys.exit(1)
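
The URL_START/URL_END constants near the top of this script are the OSC 8 terminal hyperlink escapes that print_job_status() and print_job_status_change() use to make job names clickable. A small sketch of how the pieces compose (the BEL character '\a' terminates the URL, then the visible text follows):

URL_START = "\033]8;;"
URL_END = "\033]8;;\a"

def hyperlink(url: str, text: str) -> str:
    # ESC ] 8 ; ; URL BEL visible-text ESC ] 8 ; ; BEL
    return f"{URL_START}{url}\a{text}{URL_END}"

print(hyperlink("https://gitlab.freedesktop.org", "pipeline"))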


@@ -1,2 +0,0 @@
colorama==0.4.5
python-gitlab==3.5.0


@@ -1,561 +0,0 @@
# Shared between windows and Linux
.build-common:
extends: .build-rules
# Cancel job if a newer commit is pushed to the same branch
interruptible: true
artifacts:
name: "mesa_${CI_JOB_NAME}"
when: always
paths:
- _build/meson-logs/*.txt
- _build/meson-logs/strace
- shader-db
# Just Linux
.build-linux:
extends: .build-common
variables:
CCACHE_COMPILERCHECK: "content"
CCACHE_COMPRESS: "true"
CCACHE_DIR: /cache/mesa/ccache
# Use ccache transparently, and print stats before/after
before_script:
- !reference [default, before_script]
- export PATH="/usr/lib/ccache:$PATH"
- export CCACHE_BASEDIR="$PWD"
- echo -e "\e[0Ksection_start:$(date +%s):ccache_before[collapsed=true]\r\e[0Kccache stats before build"
- ccache --show-stats
- echo -e "\e[0Ksection_end:$(date +%s):ccache_before\r\e[0K"
after_script:
- echo -e "\e[0Ksection_start:$(date +%s):ccache_after[collapsed=true]\r\e[0Kccache stats after build"
- ccache --show-stats
- echo -e "\e[0Ksection_end:$(date +%s):ccache_after\r\e[0K"
- !reference [default, after_script]
.build-windows:
extends: .build-common
tags:
- windows
- docker
- "2022"
- mesa
cache:
key: ${CI_JOB_NAME}
paths:
- subprojects/packagecache
.meson-build:
extends:
- .build-linux
- .use-debian/x86_build
stage: build-x86_64
variables:
LLVM_VERSION: 11
script:
- .gitlab-ci/meson/build.sh
.meson-build_mingw:
extends:
- .build-linux
- .use-debian/x86_build_mingw
- .use-wine
stage: build-x86_64
script:
- .gitlab-ci/meson/build.sh
debian-testing:
extends:
- .meson-build
- .ci-deqp-artifacts
variables:
UNWIND: "enabled"
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D platforms=x11
GALLIUM_ST: >
-D dri3=enabled
-D gallium-va=enabled
GALLIUM_DRIVERS: "swrast,virgl,radeonsi,zink,crocus,iris,i915"
VULKAN_DRIVERS: "swrast,amd,intel"
BUILDTYPE: "debugoptimized"
EXTRA_OPTION: >
-D spirv-to-dxil=true
-D valgrind=false
MINIO_ARTIFACT_NAME: mesa-amd64
LLVM_VERSION: "13"
script:
- .gitlab-ci/lava/lava-pytest.sh
- .gitlab-ci/meson/build.sh
- .gitlab-ci/prepare-artifacts.sh
artifacts:
reports:
junit: artifacts/ci_scripts_report.xml
debian-testing-asan:
extends:
- debian-testing
variables:
C_ARGS: >
-Wno-error=stringop-truncation
EXTRA_OPTION: >
-D b_sanitize=address
-D valgrind=false
-D tools=dlclose-skip
MINIO_ARTIFACT_NAME: ""
ARTIFACTS_DEBUG_SYMBOLS: 1
debian-testing-msan:
extends:
- debian-clang
variables:
# l_undef is incompatible with msan
EXTRA_OPTION:
-D b_sanitize=memory
-D b_lundef=false
MINIO_ARTIFACT_NAME: ""
ARTIFACTS_DEBUG_SYMBOLS: 1
# Don't run all the tests yet:
# GLSL has some issues in sexpression reading.
# gtest has issues in its test initialization.
MESON_TEST_ARGS: "--suite glcpp --suite gallium --suite format"
# Freedreno dropped because freedreno tools fail at msan.
GALLIUM_DRIVERS: "iris,nouveau,kmsro,r300,r600,swrast,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,radeonsi,tegra,d3d12,crocus"
VULKAN_DRIVERS: intel,amd,broadcom,virtio-experimental
debian-clover-testing:
extends:
- .meson-build
- .ci-deqp-artifacts
variables:
UNWIND: "enabled"
DRI_LOADERS: >
-D glx=disabled
-D egl=disabled
-D gbm=disabled
GALLIUM_ST: >
-D gallium-opencl=icd
-D opencl-spirv=true
GALLIUM_DRIVERS: "swrast"
BUILDTYPE: "debugoptimized"
EXTRA_OPTION: >
-D valgrind=false
script:
- .gitlab-ci/meson/build.sh
- .gitlab-ci/prepare-artifacts.sh
debian-gallium:
extends: .meson-build
variables:
UNWIND: "enabled"
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D platforms=x11,wayland
GALLIUM_ST: >
-D dri3=enabled
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-xvmc=enabled
-D gallium-omx=bellagio
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=true
-D gallium-opencl=disabled
GALLIUM_DRIVERS: "iris,nouveau,kmsro,r300,r600,freedreno,swrast,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,d3d12,asahi,crocus"
VULKAN_DRIVERS: swrast
EXTRA_OPTION: >
-D spirv-to-dxil=true
-D osmesa=true
-D tools=drm-shim,etnaviv,freedreno,glsl,intel,intel-ui,nir,nouveau,xvmc,lima,panfrost,asahi
script:
- .gitlab-ci/meson/build.sh
- .gitlab-ci/run-shader-db.sh
# Test a release build with -Werror so new warnings don't sneak in.
debian-release:
extends: .meson-build
variables:
UNWIND: "enabled"
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D platforms=x11,wayland
GALLIUM_ST: >
-D dri3=enabled
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-xvmc=disabled
-D gallium-omx=disabled
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=false
-D gallium-opencl=disabled
-D llvm=enabled
GALLIUM_DRIVERS: "i915,iris,nouveau,kmsro,freedreno,r300,svga,swrast,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,d3d12,crocus"
VULKAN_DRIVERS: "amd,imagination-experimental,microsoft-experimental"
BUILDTYPE: "release"
EXTRA_OPTION: >
-D spirv-to-dxil=true
-D osmesa=true
-D tools=all
-D intel-clc=enabled
-D imagination-srv=true
script:
- .gitlab-ci/meson/build.sh
fedora-release:
extends:
- .meson-build
- .use-fedora/x86_build
variables:
BUILDTYPE: "release"
C_ARGS: >
-Wno-error=array-bounds
-Wno-error=stringop-overread
-Wno-error=uninitialized
CPP_ARGS: >
-Wno-error=array-bounds
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D glvnd=true
-D platforms=x11,wayland
EXTRA_OPTION: >
-D osmesa=true
-D selinux=true
-D tools=drm-shim,etnaviv,freedreno,glsl,intel,nir,nouveau,lima,panfrost,imagination
-D intel-clc=enabled
-D imagination-srv=true
GALLIUM_DRIVERS: "crocus,etnaviv,freedreno,iris,kmsro,lima,nouveau,panfrost,r300,r600,radeonsi,svga,swrast,tegra,v3d,vc4,virgl,zink"
GALLIUM_ST: >
-D dri3=enabled
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-xvmc=disabled
-D gallium-omx=disabled
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=false
-D gallium-opencl=icd
-D gles1=disabled
-D gles2=enabled
-D llvm=enabled
-D microsoft-clc=disabled
-D shared-llvm=enabled
-D vulkan-device-select-layer=true
LLVM_VERSION: ""
UNWIND: "disabled"
VULKAN_DRIVERS: "amd,broadcom,freedreno,intel,imagination-experimental"
script:
- .gitlab-ci/meson/build.sh
debian-android:
extends:
- .meson-cross
- .use-debian/android_build
variables:
UNWIND: "disabled"
C_ARGS: >
-Wno-error=asm-operand-widths
-Wno-error=constant-conversion
-Wno-error=enum-conversion
-Wno-error=initializer-overrides
-Wno-error=missing-braces
-Wno-error=sometimes-uninitialized
-Wno-error=unused-function
CPP_ARGS: >
-Wno-error=deprecated-declarations
DRI_LOADERS: >
-D glx=disabled
-D gbm=disabled
-D egl=enabled
-D platforms=android
EXTRA_OPTION: >
-D android-stub=true
-D llvm=disabled
-D platform-sdk-version=29
-D valgrind=false
GALLIUM_ST: >
-D dri3=disabled
-D gallium-vdpau=disabled
-D gallium-xvmc=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
-D gallium-opencl=disabled
LLVM_VERSION: ""
PKG_CONFIG_LIBDIR: "/disable/non/android/system/pc/files"
script:
- PKG_CONFIG_PATH=/usr/local/lib/aarch64-linux-android/pkgconfig/:/android-ndk-r21d/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/lib/aarch64-linux-android/pkgconfig/ CROSS=aarch64-linux-android GALLIUM_DRIVERS=etnaviv,freedreno,lima,panfrost,vc4,v3d VULKAN_DRIVERS=freedreno,broadcom,virtio-experimental .gitlab-ci/meson/build.sh
# x86_64 build:
# Can't do Intel because gen_decoder.c currently requires libexpat, which
# is not a dependency that AOSP wants to accept. Can't do Radeon Gallium
# drivers because they require LLVM, which we don't have an Android build
# of.
- PKG_CONFIG_PATH=/usr/local/lib/x86_64-linux-android/pkgconfig/:/android-ndk-r21d/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/lib/x86_64-linux-android/pkgconfig/ CROSS=x86_64-linux-android GALLIUM_DRIVERS=iris VULKAN_DRIVERS=amd,intel .gitlab-ci/meson/build.sh
.meson-cross:
extends:
- .meson-build
stage: build-misc
variables:
UNWIND: "disabled"
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D platforms=x11
-D osmesa=false
GALLIUM_ST: >
-D dri3=enabled
-D gallium-vdpau=disabled
-D gallium-xvmc=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
.meson-arm:
extends:
- .meson-cross
- .use-debian/arm_build
needs:
- debian/arm_build
variables:
VULKAN_DRIVERS: freedreno,broadcom
GALLIUM_DRIVERS: "etnaviv,freedreno,kmsro,lima,nouveau,panfrost,swrast,tegra,v3d,vc4,zink"
BUILDTYPE: "debugoptimized"
tags:
- aarch64
debian-armhf:
extends:
- .meson-arm
- .ci-deqp-artifacts
variables:
CROSS: armhf
EXTRA_OPTION: >
-D llvm=disabled
-D valgrind=false
MINIO_ARTIFACT_NAME: mesa-armhf
script:
- .gitlab-ci/meson/build.sh
- .gitlab-ci/prepare-artifacts.sh
debian-arm64:
extends:
- .meson-arm
- .ci-deqp-artifacts
variables:
VULKAN_DRIVERS: "freedreno,broadcom,panfrost,imagination-experimental"
EXTRA_OPTION: >
-D llvm=disabled
-D valgrind=false
-D imagination-srv=true
MINIO_ARTIFACT_NAME: mesa-arm64
script:
- .gitlab-ci/meson/build.sh
- .gitlab-ci/prepare-artifacts.sh
debian-arm64-asan:
extends:
- debian-arm64
variables:
C_ARGS: >
-Wno-error=stringop-truncation
EXTRA_OPTION: >
-D llvm=disabled
-D b_sanitize=address
-D valgrind=false
-D tools=dlclose-skip
ARTIFACTS_DEBUG_SYMBOLS: 1
MINIO_ARTIFACT_NAME: mesa-arm64-asan
MESON_TEST_ARGS: "--no-suite mesa:compiler"
debian-arm64-build-test:
extends:
- .meson-arm
- .ci-deqp-artifacts
variables:
VULKAN_DRIVERS: "amd"
EXTRA_OPTION: >
-Dtools=panfrost,imagination
script:
- .gitlab-ci/meson/build.sh
debian-clang:
extends: .meson-build
variables:
UNWIND: "enabled"
C_ARGS: >
-Wno-error=constant-conversion
-Wno-error=enum-conversion
-Wno-error=implicit-const-int-float-conversion
-Wno-error=initializer-overrides
-Wno-error=sometimes-uninitialized
-Wno-error=unused-function
CPP_ARGS: >
-Wno-error=c99-designator
-Wno-error=deprecated-declarations
-Wno-error=implicit-const-int-float-conversion
-Wno-error=missing-braces
-Wno-error=overloaded-virtual
-Wno-error=tautological-constant-out-of-range-compare
-Wno-error=unused-const-variable
-Wno-error=unused-private-field
DRI_LOADERS: >
-D glvnd=true
GALLIUM_DRIVERS: "iris,nouveau,kmsro,r300,r600,freedreno,swrast,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,radeonsi,tegra,d3d12,crocus,i915,asahi"
VULKAN_DRIVERS: intel,amd,freedreno,broadcom,virtio-experimental,swrast,panfrost,imagination-experimental,microsoft-experimental
EXTRA_OPTIONS:
-D spirv-to-dxil=true
-D imagination-srv=true
CC: clang
CXX: clang++
windows-vs2019:
extends:
- .build-windows
- .use-windows_build_vs2019
- .windows-build-rules
stage: build-misc
script:
- pwsh -ExecutionPolicy RemoteSigned .\.gitlab-ci\windows\mesa_build.ps1
artifacts:
paths:
- _build/meson-logs/*.txt
- _install/
debian-clover:
extends: .meson-build
variables:
UNWIND: "enabled"
DRI_LOADERS: >
-D glx=disabled
-D egl=disabled
-D gbm=disabled
GALLIUM_DRIVERS: "r600,radeonsi"
GALLIUM_ST: >
-D dri3=disabled
-D gallium-vdpau=disabled
-D gallium-xvmc=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
-D gallium-opencl=icd
EXTRA_OPTION: >
-D valgrind=false
script:
- LLVM_VERSION=9 GALLIUM_DRIVERS=r600,swrast .gitlab-ci/meson/build.sh
- .gitlab-ci/meson/build.sh
debian-vulkan:
extends: .meson-build
variables:
UNWIND: "disabled"
DRI_LOADERS: >
-D glx=disabled
-D gbm=disabled
-D egl=disabled
-D platforms=x11,wayland
-D osmesa=false
GALLIUM_ST: >
-D dri3=enabled
-D gallium-vdpau=disabled
-D gallium-xvmc=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
-D gallium-opencl=disabled
-D b_sanitize=undefined
-D c_args=-fno-sanitize-recover=all
-D cpp_args=-fno-sanitize-recover=all
UBSAN_OPTIONS: "print_stacktrace=1"
VULKAN_DRIVERS: intel,amd,freedreno,broadcom,virtio-experimental,imagination-experimental,microsoft-experimental
EXTRA_OPTION: >
-D vulkan-layers=device-select,overlay
-D build-aco-tests=true
-D intel-clc=enabled
-D imagination-srv=true
debian-i386:
extends:
- .meson-cross
- .use-debian/i386_build
variables:
CROSS: i386
VULKAN_DRIVERS: intel,amd,swrast,virtio-experimental
GALLIUM_DRIVERS: "iris,nouveau,r300,r600,radeonsi,swrast,virgl,zink,crocus"
EXTRA_OPTION: >
-D vulkan-layers=device-select,overlay
debian-s390x:
extends:
- debian-ppc64el
- .use-debian/s390x_build
- .s390x-rules
tags:
- kvm
variables:
CROSS: s390x
GALLIUM_DRIVERS: "swrast,zink"
# The lp_test_blend test times out with LLVM 11
LLVM_VERSION: 9
VULKAN_DRIVERS: "swrast"
debian-ppc64el:
extends:
- .meson-cross
- .use-debian/ppc64el_build
- .ppc64el-rules
variables:
CROSS: ppc64el
GALLIUM_DRIVERS: "nouveau,radeonsi,swrast,virgl,zink"
VULKAN_DRIVERS: "amd,swrast"
debian-mingw32-x86_64:
extends: .meson-build_mingw
stage: build-misc
variables:
UNWIND: "disabled"
C_ARGS: >
-Wno-error=format
-Wno-error=format-extra-args
-Wno-error=deprecated-declarations
-Wno-error=unused-function
-Wno-error=unused-variable
-Wno-error=unused-but-set-variable
-Wno-error=unused-value
-Wno-error=switch
-Wno-error=parentheses
-Wno-error=missing-prototypes
-Wno-error=sign-compare
-Wno-error=narrowing
-Wno-error=overflow
CPP_ARGS: $C_ARGS
GALLIUM_DRIVERS: "swrast,d3d12,zink"
VULKAN_DRIVERS: "swrast,amd,microsoft-experimental"
GALLIUM_ST: >
-D gallium-opencl=icd
-D opencl-native=false
-D opencl-spirv=true
-D microsoft-clc=enabled
-D static-libclc=all
-D llvm=enabled
EXTRA_OPTION: >
-D spirv-to-dxil=true
-D gles1=enabled
-D gles2=enabled
-D osmesa=true
-D cpp_rtti=true
-D shared-glapi=enabled
-D zlib=enabled
--cross-file=.gitlab-ci/x86_64-w64-mingw32
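
The folded YAML scalars above (DRI_LOADERS, GALLIUM_ST, EXTRA_OPTION, C_ARGS, ...) are just space-separated option strings exported as environment variables; .gitlab-ci/meson/build.sh (not shown in this diff) is expected to splice them onto the meson command line. A purely illustrative Python sketch of that expansion, with an assumed "meson setup" invocation:

import os
import shlex

meson_cmd = ["meson", "setup", "_build",
             f"--buildtype={os.environ.get('BUILDTYPE', 'debug')}"]
for var in ("DRI_LOADERS", "GALLIUM_ST", "EXTRA_OPTION"):
    meson_cmd += shlex.split(os.environ.get(var, ""))
print(" ".join(meson_cmd))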


@@ -1,14 +0,0 @@
#!/bin/sh
while true; do
devcds=`find /sys/devices/virtual/devcoredump/ -name data 2>/dev/null`
for i in $devcds; do
echo "Found a devcoredump at $i."
if cp $i /results/first.devcore; then
echo 1 > $i
echo "Saved to the job artifacts at /first.devcore"
exit 0
fi
done
sleep 10
done
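
For reference, devcoredumps show up as data nodes under /sys/devices/virtual/devcoredump, and writing to the node frees the dump, which is what the 'echo 1 > $i' above does. A rough Python equivalent of the polling loop (paths taken from the script):

import glob
import shutil
import time

while True:
    nodes = glob.glob("/sys/devices/virtual/devcoredump/*/data")
    if nodes:
        shutil.copyfile(nodes[0], "/results/first.devcore")
        with open(nodes[0], "w") as node:
            node.write("1")  # dismiss the coredump so the kernel can free it
        break
    time.sleep(10)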


@@ -1,122 +0,0 @@
#!/bin/bash
for var in \
ACO_DEBUG \
ASAN_OPTIONS \
BASE_SYSTEM_FORK_HOST_PREFIX \
BASE_SYSTEM_MAINLINE_HOST_PREFIX \
CI_COMMIT_BRANCH \
CI_COMMIT_REF_NAME \
CI_COMMIT_TITLE \
CI_JOB_ID \
CI_JOB_JWT_FILE \
CI_JOB_NAME \
CI_JOB_URL \
CI_MERGE_REQUEST_SOURCE_BRANCH_NAME \
CI_MERGE_REQUEST_TITLE \
CI_NODE_INDEX \
CI_NODE_TOTAL \
CI_PAGES_DOMAIN \
CI_PIPELINE_ID \
CI_PIPELINE_URL \
CI_PROJECT_DIR \
CI_PROJECT_NAME \
CI_PROJECT_PATH \
CI_PROJECT_ROOT_NAMESPACE \
CI_RUNNER_DESCRIPTION \
CI_SERVER_URL \
CROSVM_GALLIUM_DRIVER \
CROSVM_GPU_ARGS \
DEQP_BIN_DIR \
DEQP_CASELIST_FILTER \
DEQP_CASELIST_INV_FILTER \
DEQP_CONFIG \
DEQP_EXPECTED_RENDERER \
DEQP_FRACTION \
DEQP_HEIGHT \
DEQP_RESULTS_DIR \
DEQP_RUNNER_OPTIONS \
DEQP_SUITE \
DEQP_TEMP_DIR \
DEQP_VARIANT \
DEQP_VER \
DEQP_WIDTH \
DEVICE_NAME \
DRIVER_NAME \
EGL_PLATFORM \
ETNA_MESA_DEBUG \
FDO_CI_CONCURRENT \
FDO_UPSTREAM_REPO \
FD_MESA_DEBUG \
FLAKES_CHANNEL \
FREEDRENO_HANGCHECK_MS \
GALLIUM_DRIVER \
GALLIVM_PERF \
GPU_VERSION \
GTEST \
GTEST_FAILS \
GTEST_FRACTION \
GTEST_RESULTS_DIR \
GTEST_RUNNER_OPTIONS \
GTEST_SKIPS \
HWCI_FREQ_MAX \
HWCI_KERNEL_MODULES \
HWCI_KVM \
HWCI_START_XORG \
HWCI_TEST_SCRIPT \
IR3_SHADER_DEBUG \
JOB_ARTIFACTS_BASE \
JOB_RESULTS_PATH \
JOB_ROOTFS_OVERLAY_PATH \
KERNEL_IMAGE_BASE_URL \
KERNEL_IMAGE_NAME \
LD_LIBRARY_PATH \
LP_NUM_THREADS \
MESA_BASE_TAG \
MESA_BUILD_PATH \
MESA_DEBUG \
MESA_GLES_VERSION_OVERRIDE \
MESA_GLSL_VERSION_OVERRIDE \
MESA_GL_VERSION_OVERRIDE \
MESA_IMAGE \
MESA_IMAGE_PATH \
MESA_IMAGE_TAG \
MESA_LOADER_DRIVER_OVERRIDE \
MESA_TEMPLATES_COMMIT \
MESA_VK_IGNORE_CONFORMANCE_WARNING \
MESA_SPIRV_LOG_LEVEL \
MINIO_HOST \
MINIO_RESULTS_UPLOAD \
NIR_DEBUG \
PAN_I_WANT_A_BROKEN_VULKAN_DRIVER \
PAN_MESA_DEBUG \
PIGLIT_FRACTION \
PIGLIT_NO_WINDOW \
PIGLIT_OPTIONS \
PIGLIT_PLATFORM \
PIGLIT_PROFILES \
PIGLIT_REPLAY_ARTIFACTS_BASE_URL \
PIGLIT_REPLAY_DESCRIPTION_FILE \
PIGLIT_REPLAY_DEVICE_NAME \
PIGLIT_REPLAY_EXTRA_ARGS \
PIGLIT_REPLAY_LOOP_TIMES \
PIGLIT_REPLAY_REFERENCE_IMAGES_BASE \
PIGLIT_REPLAY_SUBCOMMAND \
PIGLIT_RESULTS \
PIGLIT_TESTS \
PIPELINE_ARTIFACTS_BASE \
RADV_DEBUG \
RADV_PERFTEST \
SKQP_ASSETS_DIR \
SKQP_BACKENDS \
TU_DEBUG \
VIRGL_HOST_API \
VK_CPU \
VK_DRIVER \
VK_ICD_FILENAMES \
VKD3D_PROTON_RESULTS \
; do
if [ -n "${!var+x}" ]; then
echo "export $var=${!var@Q}"
fi
done
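
The loop above relies on bash indirect expansion: ${!var+x} tests whether the named variable is set, and ${!var@Q} expands its value quoted for safe re-use as shell input. A rough Python equivalent of the same "snapshot selected variables as export lines" idea, with an abbreviated allow-list:

import os
import shlex

ALLOWED = ["CI_JOB_ID", "CI_PIPELINE_ID", "DEQP_SUITE", "GPU_VERSION"]  # abbreviated

for var in ALLOWED:
    if var in os.environ:
        print(f"export {var}={shlex.quote(os.environ[var])}")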


@@ -1,23 +0,0 @@
#!/bin/sh
# Very early init, used to make sure devices and network are set up and
# reachable.
set -ex
cd /
mount -t proc none /proc
mount -t sysfs none /sys
mount -t debugfs none /sys/kernel/debug
mount -t devtmpfs none /dev || echo possibly already mounted
mkdir -p /dev/pts
mount -t devpts devpts /dev/pts
mount -t tmpfs tmpfs /tmp
echo "nameserver 8.8.8.8" > /etc/resolv.conf
[ -z "$NFS_SERVER_IP" ] || echo "$NFS_SERVER_IP caching-proxy" >> /etc/hosts
# Set the time so we can validate certificates before we fetch anything;
# however as not all DUTs have network, make this non-fatal.
for i in 1 2 3; do sntp -sS pool.ntp.org && break || sleep 2; done || true


@@ -1,164 +0,0 @@
#!/bin/sh
# Make sure to kill this script and all of its child processes on exit, since
# any console output may interfere with LAVA signal handling, which is based
# on the log console.
cleanup() {
if [ "$BACKGROUND_PIDS" = "" ]; then
return 0
fi
set +x
echo "Killing all child processes"
for pid in $BACKGROUND_PIDS
do
kill "$pid" 2>/dev/null || true
done
# Sleep just a little to give enough time for subprocesses to be gracefully
# killed. Then apply a SIGKILL if necessary.
sleep 5
for pid in $BACKGROUND_PIDS
do
kill -9 "$pid" 2>/dev/null || true
done
BACKGROUND_PIDS=
set -x
}
trap cleanup INT TERM EXIT
# Space separated values with the PIDS of the processes started in the
# background by this script
BACKGROUND_PIDS=
# Second-stage init, used to set up devices and our job environment before
# running tests.
. /set-job-env-vars.sh
set -ex
# Set up any devices required by the jobs
[ -z "$HWCI_KERNEL_MODULES" ] || {
echo -n $HWCI_KERNEL_MODULES | xargs -d, -n1 /usr/sbin/modprobe
}
#
# Load the KVM module specific to the detected CPU virtualization extensions:
# - vmx for Intel VT
# - svm for AMD-V
#
# Additionally, download the kernel image to boot the VM via HWCI_TEST_SCRIPT.
#
if [ "$HWCI_KVM" = "true" ]; then
unset KVM_KERNEL_MODULE
grep -qs '\bvmx\b' /proc/cpuinfo && KVM_KERNEL_MODULE=kvm_intel || {
grep -qs '\bsvm\b' /proc/cpuinfo && KVM_KERNEL_MODULE=kvm_amd
}
[ -z "${KVM_KERNEL_MODULE}" ] && \
echo "WARNING: Failed to detect CPU virtualization extensions" || \
modprobe ${KVM_KERNEL_MODULE}
mkdir -p /lava-files
wget -S --progress=dot:giga -O /lava-files/${KERNEL_IMAGE_NAME} \
"${KERNEL_IMAGE_BASE_URL}/${KERNEL_IMAGE_NAME}"
fi
# Fix prefix confusion: the build installs to $CI_PROJECT_DIR, but we expect
# it in /install
ln -sf $CI_PROJECT_DIR/install /install
export LD_LIBRARY_PATH=/install/lib
export LIBGL_DRIVERS_PATH=/install/lib/dri
# Store Mesa's disk cache under /tmp, rather than sending it out over NFS.
export XDG_CACHE_HOME=/tmp
# Make sure Python can find all our imports
export PYTHONPATH=$(python3 -c "import sys;print(\":\".join(sys.path))")
if [ "$HWCI_FREQ_MAX" = "true" ]; then
# Ensure initialization of the DRM device (needed by MSM)
head -0 /dev/dri/renderD128
# Disable GPU frequency scaling
DEVFREQ_GOVERNOR=`find /sys/devices -name governor | grep gpu || true`
test -z "$DEVFREQ_GOVERNOR" || echo performance > $DEVFREQ_GOVERNOR || true
# Disable CPU frequency scaling
echo performance | tee -a /sys/devices/system/cpu/cpufreq/policy*/scaling_governor || true
# Disable GPU runtime power management
GPU_AUTOSUSPEND=`find /sys/devices -name autosuspend_delay_ms | grep gpu | head -1`
test -z "$GPU_AUTOSUSPEND" || echo -1 > $GPU_AUTOSUSPEND || true
# Lock Intel GPU frequency to 70% of the maximum allowed by hardware
# and enable throttling detection & reporting.
# Additionally, set the upper limit for CPU scaling frequency to 65% of the
# maximum permitted, as an additional measure to mitigate thermal throttling.
./intel-gpu-freq.sh -s 70% --cpu-set-max 65% -g all -d
fi
# Increase freedreno hangcheck timer because it's right at the edge of the
# spilling tests timing out (and some traces, too)
if [ -n "$FREEDRENO_HANGCHECK_MS" ]; then
echo $FREEDRENO_HANGCHECK_MS | tee -a /sys/kernel/debug/dri/128/hangcheck_period_ms
fi
# Start a little daemon to capture the first devcoredump we encounter. (They
# expire after 5 minutes, so we poll for them).
/capture-devcoredump.sh &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
# If we want Xorg to be running for the test, then we start it up before the
# HWCI_TEST_SCRIPT because we need to use xinit to start X (otherwise
# without using -displayfd you can race with Xorg's startup), but xinit will eat
# your client's return code
if [ -n "$HWCI_START_XORG" ]; then
echo "touch /xorg-started; sleep 100000" > /xorg-script
env \
xinit /bin/sh /xorg-script -- /usr/bin/Xorg -noreset -s 0 -dpms -logfile /Xorg.0.log &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
# Wait for xorg to be ready for connections.
for i in 1 2 3 4 5; do
if [ -e /xorg-started ]; then
break
fi
sleep 5
done
export DISPLAY=:0
fi
RESULT=fail
set +e
sh -c "$HWCI_TEST_SCRIPT"
EXIT_CODE=$?
set -e
# Let's make sure the results are always stored in current working directory
mv -f ${CI_PROJECT_DIR}/results ./ 2>/dev/null || true
[ ${EXIT_CODE} -ne 0 ] || rm -rf results/trace/"$PIGLIT_REPLAY_DEVICE_NAME"
# Make sure that capture-devcoredump is done before we start trying to tar up
# artifacts -- if it's writing while tar is reading, tar will throw an error and
# kill the job.
cleanup
# upload artifacts
if [ -n "$MINIO_RESULTS_UPLOAD" ]; then
tar -czf results.tar.gz results/;
ci-fairy s3cp --token-file "${CI_JOB_JWT_FILE}" results.tar.gz https://"$MINIO_RESULTS_UPLOAD"/results.tar.gz;
fi
# We still need to echo the hwci: mesa message, as some scripts rely on it, such
# as the python ones inside the bare-metal folder
[ ${EXIT_CODE} -eq 0 ] && RESULT=pass
set +x
echo "hwci: mesa: $RESULT"
# Sleep a bit to avoid kernel dump message interleave from LAVA ENDTC signal
sleep 1
exit $EXIT_CODE
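
A small sketch of the KVM module selection done above: pick kvm_intel when /proc/cpuinfo advertises the vmx flag (Intel VT) and kvm_amd for svm (AMD-V); actually loading the module is still left to modprobe, as in the script:

import re

def pick_kvm_module(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        cpuinfo = f.read()
    if re.search(r"\bvmx\b", cpuinfo):
        return "kvm_intel"
    if re.search(r"\bsvm\b", cpuinfo):
        return "kvm_amd"
    return None  # no virtualization extensions detected

print(pick_kvm_module())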


@@ -1,758 +0,0 @@
#!/bin/sh
#
# This is a utility script to manage Intel GPU frequencies.
# It can be used for debugging performance problems or trying to obtain a stable
# frequency while benchmarking.
#
# Note the Intel i915 GPU driver allows changing the minimum, maximum and boost
# frequencies in steps of 50 MHz via:
#
# /sys/class/drm/card<n>/<freq_info>
#
# Where <n> is the DRM card index and <freq_info> one of the following:
#
# - gt_max_freq_mhz (enforced maximum freq)
# - gt_min_freq_mhz (enforced minimum freq)
# - gt_boost_freq_mhz (enforced boost freq)
#
# The hardware capabilities can be accessed via:
#
# - gt_RP0_freq_mhz (supported maximum freq)
# - gt_RPn_freq_mhz (supported minimum freq)
# - gt_RP1_freq_mhz (most efficient freq)
#
# The current frequency can be read from:
# - gt_act_freq_mhz (the actual GPU freq)
# - gt_cur_freq_mhz (the last requested freq)
#
# Also note that in addition to GPU management, the script offers the
# possibility to adjust CPU operating frequencies. However, this is currently
# limited to just setting the maximum scaling frequency as a percentage of the
# maximum frequency allowed by the hardware.
#
# Copyright (C) 2022 Collabora Ltd.
# Author: Cristian Ciocaltea <cristian.ciocaltea@collabora.com>
#
# SPDX-License-Identifier: MIT
#
#
# Constants
#
# GPU
DRM_FREQ_SYSFS_PATTERN="/sys/class/drm/card%d/gt_%s_freq_mhz"
ENF_FREQ_INFO="max min boost"
CAP_FREQ_INFO="RP0 RPn RP1"
ACT_FREQ_INFO="act cur"
THROTT_DETECT_SLEEP_SEC=2
THROTT_DETECT_PID_FILE_PATH=/tmp/thrott-detect.pid
# CPU
CPU_SYSFS_PREFIX=/sys/devices/system/cpu
CPU_PSTATE_SYSFS_PATTERN="${CPU_SYSFS_PREFIX}/intel_pstate/%s"
CPU_FREQ_SYSFS_PATTERN="${CPU_SYSFS_PREFIX}/cpu%s/cpufreq/%s_freq"
CAP_CPU_FREQ_INFO="cpuinfo_max cpuinfo_min"
ENF_CPU_FREQ_INFO="scaling_max scaling_min"
ACT_CPU_FREQ_INFO="scaling_cur"
#
# Global variables.
#
unset INTEL_DRM_CARD_INDEX
unset GET_ACT_FREQ GET_ENF_FREQ GET_CAP_FREQ
unset SET_MIN_FREQ SET_MAX_FREQ
unset MONITOR_FREQ
unset CPU_SET_MAX_FREQ
unset DETECT_THROTT
unset DRY_RUN
#
# Simple printf based stderr logger.
#
log() {
local msg_type=$1
shift
printf "%s: %s: " "${msg_type}" "${0##*/}" >&2
printf "$@" >&2
printf "\n" >&2
}
#
# Helper to print sysfs path for the given card index and freq info.
#
# arg1: Frequency info sysfs name, one of *_FREQ_INFO constants above
# arg2: Video card index, defaults to INTEL_DRM_CARD_INDEX
#
print_freq_sysfs_path() {
printf ${DRM_FREQ_SYSFS_PATTERN} "${2:-${INTEL_DRM_CARD_INDEX}}" "$1"
}
#
# Helper to set INTEL_DRM_CARD_INDEX for the first identified Intel video card.
#
identify_intel_gpu() {
local i=0 vendor path
while [ ${i} -lt 16 ]; do
[ -c "/dev/dri/card$i" ] || {
i=$((i + 1))
continue
}
path=$(print_freq_sysfs_path "" ${i})
path=${path%/*}/device/vendor
[ -r "${path}" ] && read vendor < "${path}" && \
[ "${vendor}" = "0x8086" ] && INTEL_DRM_CARD_INDEX=$i && return 0
i=$((i + 1))
done
return 1
}
#
# Read the specified freq info from sysfs.
#
# arg1: Flag (y/n) to also enable printing the freq info.
# arg2...: Frequency info sysfs name(s), see *_FREQ_INFO constants above
# return: Global variable(s) FREQ_${arg} containing the requested information
#
read_freq_info() {
local var val info path print=0 ret=0
[ "$1" = "y" ] && print=1
shift
while [ $# -gt 0 ]; do
info=$1
shift
var=FREQ_${info}
path=$(print_freq_sysfs_path "${info}")
[ -r ${path} ] && read ${var} < ${path} || {
log ERROR "Failed to read freq info from: %s" "${path}"
ret=1
continue
}
[ -n "${var}" ] || {
log ERROR "Got empty freq info from: %s" "${path}"
ret=1
continue
}
[ ${print} -eq 1 ] && {
eval val=\$${var}
printf "%6s: %4s MHz\n" "${info}" "${val}"
}
done
return ${ret}
}
#
# Display requested info.
#
print_freq_info() {
local req_freq
[ -n "${GET_CAP_FREQ}" ] && {
printf "* Hardware capabilities\n"
read_freq_info y ${CAP_FREQ_INFO}
printf "\n"
}
[ -n "${GET_ENF_FREQ}" ] && {
printf "* Enforcements\n"
read_freq_info y ${ENF_FREQ_INFO}
printf "\n"
}
[ -n "${GET_ACT_FREQ}" ] && {
printf "* Actual\n"
read_freq_info y ${ACT_FREQ_INFO}
printf "\n"
}
}
#
# Helper to print frequency value as requested by user via '-s, --set' option.
# arg1: user requested freq value
#
compute_freq_set() {
local val
case "$1" in
+)
val=${FREQ_RP0}
;;
-)
val=${FREQ_RPn}
;;
*%)
val=$((${1%?} * ${FREQ_RP0} / 100))
# Adjust freq to comply with 50 MHz increments
val=$((val / 50 * 50))
;;
*[!0-9]*)
log ERROR "Cannot set freq to invalid value: %s" "$1"
return 1
;;
"")
log ERROR "Cannot set freq to unspecified value"
return 1
;;
*)
# Adjust freq to comply with 50 MHz increments
val=$(($1 / 50 * 50))
;;
esac
printf "%s" "${val}"
}
#
# Helper for set_freq().
#
set_freq_max() {
log INFO "Setting GPU max freq to %s MHz" "${SET_MAX_FREQ}"
read_freq_info n min || return $?
[ ${SET_MAX_FREQ} -gt ${FREQ_RP0} ] && {
log ERROR "Cannot set GPU max freq (%s) to be greater than hw max freq (%s)" \
"${SET_MAX_FREQ}" "${FREQ_RP0}"
return 1
}
[ ${SET_MAX_FREQ} -lt ${FREQ_RPn} ] && {
log ERROR "Cannot set GPU max freq (%s) to be less than hw min freq (%s)" \
"${SET_MIN_FREQ}" "${FREQ_RPn}"
return 1
}
[ ${SET_MAX_FREQ} -lt ${FREQ_min} ] && {
log ERROR "Cannot set GPU max freq (%s) to be less than min freq (%s)" \
"${SET_MAX_FREQ}" "${FREQ_min}"
return 1
}
[ -z "${DRY_RUN}" ] || return 0
printf "%s" ${SET_MAX_FREQ} | tee $(print_freq_sysfs_path max) \
$(print_freq_sysfs_path boost) > /dev/null
[ $? -eq 0 ] || {
log ERROR "Failed to set GPU max frequency"
return 1
}
}
#
# Helper for set_freq().
#
set_freq_min() {
log INFO "Setting GPU min freq to %s MHz" "${SET_MIN_FREQ}"
read_freq_info n max || return $?
[ ${SET_MIN_FREQ} -gt ${FREQ_max} ] && {
log ERROR "Cannot set GPU min freq (%s) to be greater than max freq (%s)" \
"${SET_MIN_FREQ}" "${FREQ_max}"
return 1
}
[ ${SET_MIN_FREQ} -lt ${FREQ_RPn} ] && {
log ERROR "Cannot set GPU min freq (%s) to be less than hw min freq (%s)" \
"${SET_MIN_FREQ}" "${FREQ_RPn}"
return 1
}
[ -z "${DRY_RUN}" ] || return 0
printf "%s" ${SET_MIN_FREQ} > $(print_freq_sysfs_path min)
[ $? -eq 0 ] || {
log ERROR "Failed to set GPU min frequency"
return 1
}
}
#
# Set min or max or both GPU frequencies to the user indicated values.
#
set_freq() {
# Get hw max & min frequencies
read_freq_info n RP0 RPn || return $?
[ -z "${SET_MAX_FREQ}" ] || {
SET_MAX_FREQ=$(compute_freq_set "${SET_MAX_FREQ}")
[ -z "${SET_MAX_FREQ}" ] && return 1
}
[ -z "${SET_MIN_FREQ}" ] || {
SET_MIN_FREQ=$(compute_freq_set "${SET_MIN_FREQ}")
[ -z "${SET_MIN_FREQ}" ] && return 1
}
#
# Ensure correct operation order, to avoid setting min freq
# to a value which is larger than max freq.
#
# E.g.:
# crt_min=crt_max=600; new_min=new_max=700
# > operation order: max=700; min=700
#
# crt_min=crt_max=600; new_min=new_max=500
# > operation order: min=500; max=500
#
if [ -n "${SET_MAX_FREQ}" ] && [ -n "${SET_MIN_FREQ}" ]; then
[ ${SET_MAX_FREQ} -lt ${SET_MIN_FREQ} ] && {
log ERROR "Cannot set GPU max freq to be less than min freq"
return 1
}
read_freq_info n min || return $?
if [ ${SET_MAX_FREQ} -lt ${FREQ_min} ]; then
set_freq_min || return $?
set_freq_max
else
set_freq_max || return $?
set_freq_min
fi
elif [ -n "${SET_MAX_FREQ}" ]; then
set_freq_max
elif [ -n "${SET_MIN_FREQ}" ]; then
set_freq_min
else
log "Unexpected call to set_freq()"
return 1
fi
}
#
# Helper for detect_throttling().
#
get_thrott_detect_pid() {
[ -e ${THROTT_DETECT_PID_FILE_PATH} ] || return 0
local pid
read pid < ${THROTT_DETECT_PID_FILE_PATH} || {
log ERROR "Failed to read pid from: %s" "${THROTT_DETECT_PID_FILE_PATH}"
return 1
}
local proc_path=/proc/${pid:-invalid}/cmdline
[ -r ${proc_path} ] && grep -qs "${0##*/}" ${proc_path} && {
printf "%s" "${pid}"
return 0
}
# Remove orphaned PID file
rm -rf ${THROTT_DETECT_PID_FILE_PATH}
return 1
}
#
# Control detection and reporting of GPU throttling events.
# arg1: start - run throttle detector in background
# stop - stop throttle detector process, if any
# status - verify if throttle detector is running
#
detect_throttling() {
local pid
pid=$(get_thrott_detect_pid)
case "$1" in
status)
printf "Throttling detector is "
[ -z "${pid}" ] && printf "not running\n" && return 0
printf "running (pid=%s)\n" ${pid}
;;
stop)
[ -z "${pid}" ] && return 0
log INFO "Stopping throttling detector (pid=%s)" "${pid}"
kill ${pid}; sleep 1; kill -0 ${pid} 2>/dev/null && kill -9 ${pid}
rm -rf ${THROTT_DETECT_PID_FILE_PATH}
;;
start)
[ -n "${pid}" ] && {
log WARN "Throttling detector is already running (pid=%s)" ${pid}
return 0
}
(
read_freq_info n RPn || exit $?
while true; do
sleep ${THROTT_DETECT_SLEEP_SEC}
read_freq_info n act min cur || exit $?
#
# The throttling seems to occur when act freq goes below min.
# However, it's necessary to exclude the idle states, where
# act freq normally reaches RPn and cur goes below min.
#
[ ${FREQ_act} -lt ${FREQ_min} ] && \
[ ${FREQ_act} -gt ${FREQ_RPn} ] && \
[ ${FREQ_cur} -ge ${FREQ_min} ] && \
printf "GPU throttling detected: act=%s min=%s cur=%s RPn=%s\n" \
${FREQ_act} ${FREQ_min} ${FREQ_cur} ${FREQ_RPn}
done
) &
pid=$!
log INFO "Started GPU throttling detector (pid=%s)" ${pid}
printf "%s\n" ${pid} > ${THROTT_DETECT_PID_FILE_PATH} || \
log WARN "Failed to write throttle detector PID file"
;;
esac
}
#
# Retrieve the list of online CPUs.
#
get_online_cpus() {
local path cpu_index
printf "0"
for path in $(grep 1 ${CPU_SYSFS_PREFIX}/cpu*/online); do
cpu_index=${path##*/cpu}
printf " %s" ${cpu_index%%/*}
done
}
#
# Helper to print sysfs path for the given CPU index and freq info.
#
# arg1: Frequency info sysfs name, one of *_CPU_FREQ_INFO constants above
# arg2: CPU index
#
print_cpu_freq_sysfs_path() {
printf ${CPU_FREQ_SYSFS_PATTERN} "$2" "$1"
}
#
# Read the specified CPU freq info from sysfs.
#
# arg1: CPU index
# arg2: Flag (y/n) to also enable printing the freq info.
# arg3...: Frequency info sysfs name(s), see *_CPU_FREQ_INFO constants above
# return: Global variable(s) CPU_FREQ_${arg} containing the requested information
#
read_cpu_freq_info() {
local var val info path cpu_index print=0 ret=0
cpu_index=$1
[ "$2" = "y" ] && print=1
shift 2
while [ $# -gt 0 ]; do
info=$1
shift
var=CPU_FREQ_${info}
path=$(print_cpu_freq_sysfs_path "${info}" ${cpu_index})
[ -r ${path} ] && read ${var} < ${path} || {
log ERROR "Failed to read CPU freq info from: %s" "${path}"
ret=1
continue
}
[ -n "${var}" ] || {
log ERROR "Got empty CPU freq info from: %s" "${path}"
ret=1
continue
}
[ ${print} -eq 1 ] && {
eval val=\$${var}
printf "%6s: %4s Hz\n" "${info}" "${val}"
}
done
return ${ret}
}
#
# Helper to print freq. value as requested by user via '--cpu-set-max' option.
# arg1: user requested freq value
#
compute_cpu_freq_set() {
local val
case "$1" in
+)
val=${CPU_FREQ_cpuinfo_max}
;;
-)
val=${CPU_FREQ_cpuinfo_min}
;;
*%)
val=$((${1%?} * ${CPU_FREQ_cpuinfo_max} / 100))
;;
*[!0-9]*)
log ERROR "Cannot set CPU freq to invalid value: %s" "$1"
return 1
;;
"")
log ERROR "Cannot set CPU freq to unspecified value"
return 1
;;
*)
log ERROR "Cannot set CPU freq to custom value; use +, -, or % instead"
return 1
;;
esac
printf "%s" "${val}"
}
#
# Adjust CPU max scaling frequency.
#
set_cpu_freq_max() {
local target_freq res=0
case "${CPU_SET_MAX_FREQ}" in
+)
target_freq=100
;;
-)
target_freq=1
;;
*%)
target_freq=${CPU_SET_MAX_FREQ%?}
;;
*)
log ERROR "Invalid CPU freq"
return 1
;;
esac
local pstate_info=$(printf "${CPU_PSTATE_SYSFS_PATTERN}" max_perf_pct)
[ -e "${pstate_info}" ] && {
log INFO "Setting intel_pstate max perf to %s" "${target_freq}%"
printf "%s" "${target_freq}" > "${pstate_info}"
[ $? -eq 0 ] || {
log ERROR "Failed to set intel_pstate max perf"
res=1
}
}
local cpu_index
for cpu_index in $(get_online_cpus); do
read_cpu_freq_info ${cpu_index} n ${CAP_CPU_FREQ_INFO} || { res=$?; continue; }
target_freq=$(compute_cpu_freq_set "${CPU_SET_MAX_FREQ}")
[ -z "${target_freq}" ] && { res=$?; continue; }
log INFO "Setting CPU%s max scaling freq to %s Hz" ${cpu_index} "${target_freq}"
[ -n "${DRY_RUN}" ] && continue
printf "%s" ${target_freq} > $(print_cpu_freq_sysfs_path scaling_max ${cpu_index})
[ $? -eq 0 ] || {
res=1
log ERROR "Failed to set CPU%s max scaling frequency" ${cpu_index}
}
done
return ${res}
}
#
# Show help message.
#
print_usage() {
cat <<EOF
Usage: ${0##*/} [OPTION]...
A script to manage Intel GPU frequencies. Can be used for debugging performance
problems or trying to obtain a stable frequency while benchmarking.
Note Intel GPUs only accept specific frequencies, usually multiples of 50 MHz.
Options:
-g, --get [act|enf|cap|all]
Get frequency information: active (default), enforced,
hardware capabilities or all of them.
-s, --set [{min|max}=]{FREQUENCY[%]|+|-}
Set min or max frequency to the given value (MHz).
Append '%' to interpret FREQUENCY as % of hw max.
Use '+' or '-' to set frequency to hardware max or min.
Omit min/max prefix to set both frequencies.
-r, --reset Reset frequencies to hardware defaults.
-m, --monitor [act|enf|cap|all]
Monitor the indicated frequencies via 'watch' utility.
See '-g, --get' option for more details.
-d|--detect-thrott [start|stop|status]
Start (default operation) the throttling detector
as a background process. Use 'stop' or 'status' to
terminate the detector process or verify its status.
--cpu-set-max {FREQUENCY%|+|-}
Set CPU max scaling frequency as % of hw max.
Use '+' or '-' to set frequency to hardware max or min.
-r, --reset Reset frequencies to hardware defaults.
--dry-run See what the script will do without applying any
frequency changes.
-h, --help Display this help text and exit.
EOF
}
#
# Parse user input for '-g, --get' option.
# Returns 0 if a value has been provided, otherwise 1.
#
parse_option_get() {
local ret=0
case "$1" in
act) GET_ACT_FREQ=1;;
enf) GET_ENF_FREQ=1;;
cap) GET_CAP_FREQ=1;;
all) GET_ACT_FREQ=1; GET_ENF_FREQ=1; GET_CAP_FREQ=1;;
-*|"")
# No value provided, using default.
GET_ACT_FREQ=1
ret=1
;;
*)
print_usage
exit 1
;;
esac
return ${ret}
}
#
# Validate user input for '-s, --set' option.
# arg1: input value to be validated
# arg2: optional flag indicating input is restricted to %
#
validate_option_set() {
case "$1" in
+|-|[0-9]%|[0-9][0-9]%)
return 0
;;
*[!0-9]*|"")
print_usage
exit 1
;;
esac
[ -z "$2" ] || { print_usage; exit 1; }
}
#
# Parse script arguments.
#
[ $# -eq 0 ] && { print_usage; exit 1; }
while [ $# -gt 0 ]; do
case "$1" in
-g|--get)
parse_option_get "$2" && shift
;;
-s|--set)
shift
case "$1" in
min=*)
SET_MIN_FREQ=${1#min=}
validate_option_set "${SET_MIN_FREQ}"
;;
max=*)
SET_MAX_FREQ=${1#max=}
validate_option_set "${SET_MAX_FREQ}"
;;
*)
SET_MIN_FREQ=$1
validate_option_set "${SET_MIN_FREQ}"
SET_MAX_FREQ=${SET_MIN_FREQ}
;;
esac
;;
-r|--reset)
RESET_FREQ=1
SET_MIN_FREQ="-"
SET_MAX_FREQ="+"
;;
-m|--monitor)
MONITOR_FREQ=act
parse_option_get "$2" && MONITOR_FREQ=$2 && shift
;;
-d|--detect-thrott)
DETECT_THROTT=start
case "$2" in
start|stop|status)
DETECT_THROTT=$2
shift
;;
esac
;;
--cpu-set-max)
shift
CPU_SET_MAX_FREQ=$1
validate_option_set "${CPU_SET_MAX_FREQ}" restricted
;;
--dry-run)
DRY_RUN=1
;;
-h|--help)
print_usage
exit 0
;;
*)
print_usage
exit 1
;;
esac
shift
done
#
# Main
#
RET=0
identify_intel_gpu || {
log INFO "No Intel GPU detected"
exit 0
}
[ -n "${SET_MIN_FREQ}${SET_MAX_FREQ}" ] && { set_freq || RET=$?; }
print_freq_info
[ -n "${DETECT_THROTT}" ] && detect_throttling ${DETECT_THROTT}
[ -n "${CPU_SET_MAX_FREQ}" ] && { set_cpu_freq_max || RET=$?; }
[ -n "${MONITOR_FREQ}" ] && {
log INFO "Entering frequency monitoring mode"
sleep 2
exec watch -d -n 1 "$0" -g "${MONITOR_FREQ}"
}
exit ${RET}
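
A minimal Python sketch of the i915 sysfs interface described in the header comment, together with the heuristic detect_throttling() uses: throttling is assumed when the actual frequency drops below the enforced minimum while staying above the hardware minimum (RPn) and the requested frequency is still at or above the minimum. Card index 0 is assumed here, whereas the script auto-detects the Intel card:

def read_freq(info, card=0):
    with open(f"/sys/class/drm/card{card}/gt_{info}_freq_mhz") as f:
        return int(f.read())

act, cur, mn, rpn = (read_freq(i) for i in ("act", "cur", "min", "RPn"))
if mn > act > rpn and cur >= mn:
    print(f"GPU throttling detected: act={act} min={mn} cur={cur} RPn={rpn}")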


@@ -1,21 +0,0 @@
#!/bin/sh
set -ex
_XORG_SCRIPT="/xorg-script"
_FLAG_FILE="/xorg-started"
echo "touch ${_FLAG_FILE}; sleep 100000" > "${_XORG_SCRIPT}"
if [ "x$1" != "x" ]; then
export LD_LIBRARY_PATH="${1}/lib"
export LIBGL_DRIVERS_PATH="${1}/lib/dri"
fi
xinit /bin/sh "${_XORG_SCRIPT}" -- /usr/bin/Xorg vt45 -noreset -s 0 -dpms -logfile /Xorg.0.log &
# Wait for xorg to be ready for connections.
for i in 1 2 3 4 5; do
if [ -e "${_FLAG_FILE}" ]; then
break
fi
sleep 5
done


@@ -1,57 +0,0 @@
CONFIG_LOCALVERSION_AUTO=y
CONFIG_DEBUG_KERNEL=y
# abootimg with a 'dummy' rootfs fails with root=/dev/nfs
CONFIG_BLK_DEV_INITRD=n
CONFIG_DEVFREQ_GOV_PERFORMANCE=y
CONFIG_DEVFREQ_GOV_POWERSAVE=y
CONFIG_DEVFREQ_GOV_USERSPACE=y
CONFIG_DEVFREQ_GOV_PASSIVE=y
CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND=y
CONFIG_DRM=y
CONFIG_DRM_ETNAVIV=y
CONFIG_DRM_ROCKCHIP=y
CONFIG_DRM_PANFROST=y
CONFIG_DRM_LIMA=y
CONFIG_DRM_PANEL_SIMPLE=y
CONFIG_PWM_CROS_EC=y
CONFIG_BACKLIGHT_PWM=y
CONFIG_ROCKCHIP_CDN_DP=n
CONFIG_SPI_ROCKCHIP=y
CONFIG_PWM_ROCKCHIP=y
CONFIG_PHY_ROCKCHIP_DP=y
CONFIG_DWMAC_ROCKCHIP=y
CONFIG_MFD_RK808=y
CONFIG_REGULATOR_RK808=y
CONFIG_RTC_DRV_RK808=y
CONFIG_COMMON_CLK_RK808=y
CONFIG_REGULATOR_FAN53555=y
CONFIG_REGULATOR=y
CONFIG_REGULATOR_VCTRL=y
CONFIG_KASAN=n
CONFIG_KASAN_INLINE=n
CONFIG_STACKTRACE=n
CONFIG_TMPFS=y
CONFIG_PROVE_LOCKING=n
CONFIG_DEBUG_LOCKDEP=n
CONFIG_SOFTLOCKUP_DETECTOR=n
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=n
CONFIG_FW_LOADER_COMPRESS=y
CONFIG_USB_USBNET=y
CONFIG_NETDEVICES=y
CONFIG_USB_NET_DRIVERS=y
CONFIG_USB_RTL8152=y
CONFIG_USB_NET_AX8817X=y
CONFIG_USB_NET_SMSC95XX=y


@@ -1,172 +0,0 @@
CONFIG_LOCALVERSION_AUTO=y
CONFIG_DEBUG_KERNEL=y
# abootimg with a 'dummy' rootfs fails with root=/dev/nfs
CONFIG_BLK_DEV_INITRD=n
CONFIG_DEVFREQ_GOV_PERFORMANCE=y
CONFIG_DEVFREQ_GOV_POWERSAVE=y
CONFIG_DEVFREQ_GOV_USERSPACE=y
CONFIG_DEVFREQ_GOV_PASSIVE=y
CONFIG_DRM=y
CONFIG_DRM_ROCKCHIP=y
CONFIG_DRM_PANFROST=y
CONFIG_DRM_LIMA=y
CONFIG_DRM_PANEL_SIMPLE=y
CONFIG_DRM_PANEL_EDP=y
CONFIG_DRM_MSM=y
CONFIG_DRM_I2C_ADV7511=y
CONFIG_PWM_CROS_EC=y
CONFIG_BACKLIGHT_PWM=y
CONFIG_ROCKCHIP_CDN_DP=n
CONFIG_SPI_ROCKCHIP=y
CONFIG_PWM_ROCKCHIP=y
CONFIG_PHY_ROCKCHIP_DP=y
CONFIG_DWMAC_ROCKCHIP=y
CONFIG_STMMAC_ETH=y
CONFIG_TYPEC_FUSB302=y
CONFIG_TYPEC=y
CONFIG_TYPEC_TCPM=y
# MSM platform bits
# For CONFIG_QCOM_LMH
CONFIG_OF=y
CONFIG_QCOM_COMMAND_DB=y
CONFIG_QCOM_RPMHPD=y
CONFIG_QCOM_RPMPD=y
CONFIG_SDM_GPUCC_845=y
CONFIG_SDM_VIDEOCC_845=y
CONFIG_SDM_DISPCC_845=y
CONFIG_SDM_LPASSCC_845=y
CONFIG_SDM_CAMCC_845=y
CONFIG_RESET_QCOM_PDC=y
CONFIG_DRM_TI_SN65DSI86=y
CONFIG_I2C_QCOM_GENI=y
CONFIG_SPI_QCOM_GENI=y
CONFIG_PHY_QCOM_QUSB2=y
CONFIG_PHY_QCOM_QMP=y
CONFIG_QCOM_CLK_APCC_MSM8996=y
CONFIG_QCOM_LLCC=y
CONFIG_QCOM_LMH=y
CONFIG_QCOM_SPMI_TEMP_ALARM=y
CONFIG_QCOM_WDT=y
CONFIG_POWER_RESET_QCOM_PON=y
CONFIG_RTC_DRV_PM8XXX=y
CONFIG_INTERCONNECT=y
CONFIG_INTERCONNECT_QCOM=y
CONFIG_INTERCONNECT_QCOM_SDM845=y
CONFIG_INTERCONNECT_QCOM_MSM8916=y
CONFIG_INTERCONNECT_QCOM_OSM_L3=y
CONFIG_INTERCONNECT_QCOM_SC7180=y
CONFIG_CRYPTO_DEV_QCOM_RNG=y
CONFIG_SC_DISPCC_7180=y
CONFIG_SC_GPUCC_7180=y
# db410c ethernet
CONFIG_USB_RTL8152=y
# db820c ethernet
CONFIG_ATL1C=y
CONFIG_ARCH_ALPINE=n
CONFIG_ARCH_BCM2835=n
CONFIG_ARCH_BCM_IPROC=n
CONFIG_ARCH_BERLIN=n
CONFIG_ARCH_BRCMSTB=n
CONFIG_ARCH_EXYNOS=n
CONFIG_ARCH_K3=n
CONFIG_ARCH_LAYERSCAPE=n
CONFIG_ARCH_LG1K=n
CONFIG_ARCH_HISI=n
CONFIG_ARCH_MVEBU=n
CONFIG_ARCH_SEATTLE=n
CONFIG_ARCH_SYNQUACER=n
CONFIG_ARCH_RENESAS=n
CONFIG_ARCH_R8A774A1=n
CONFIG_ARCH_R8A774C0=n
CONFIG_ARCH_R8A7795=n
CONFIG_ARCH_R8A7796=n
CONFIG_ARCH_R8A77965=n
CONFIG_ARCH_R8A77970=n
CONFIG_ARCH_R8A77980=n
CONFIG_ARCH_R8A77990=n
CONFIG_ARCH_R8A77995=n
CONFIG_ARCH_STRATIX10=n
CONFIG_ARCH_TEGRA=n
CONFIG_ARCH_SPRD=n
CONFIG_ARCH_THUNDER=n
CONFIG_ARCH_THUNDER2=n
CONFIG_ARCH_UNIPHIER=n
CONFIG_ARCH_VEXPRESS=n
CONFIG_ARCH_XGENE=n
CONFIG_ARCH_ZX=n
CONFIG_ARCH_ZYNQMP=n
# Strip out some stuff we don't need for graphics testing, to reduce
# the build.
CONFIG_CAN=n
CONFIG_WIRELESS=n
CONFIG_RFKILL=n
CONFIG_WLAN=n
CONFIG_REGULATOR_FAN53555=y
CONFIG_REGULATOR=y
CONFIG_REGULATOR_VCTRL=y
CONFIG_KASAN=n
CONFIG_KASAN_INLINE=n
CONFIG_STACKTRACE=n
CONFIG_TMPFS=y
CONFIG_PROVE_LOCKING=n
CONFIG_DEBUG_LOCKDEP=n
CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
CONFIG_DETECT_HUNG_TASK=y
CONFIG_FW_LOADER_COMPRESS=y
CONFIG_FW_LOADER_USER_HELPER=n
CONFIG_USB_USBNET=y
CONFIG_NETDEVICES=y
CONFIG_USB_NET_DRIVERS=y
CONFIG_USB_RTL8152=y
CONFIG_USB_NET_AX8817X=y
CONFIG_USB_NET_SMSC95XX=y
# For amlogic
CONFIG_MESON_GXL_PHY=y
CONFIG_MDIO_BUS_MUX_MESON_G12A=y
CONFIG_DRM_MESON=y
# For Mediatek
CONFIG_DRM_MEDIATEK=y
CONFIG_PWM_MEDIATEK=y
CONFIG_DRM_MEDIATEK_HDMI=y
CONFIG_GNSS=y
CONFIG_GNSS_MTK_SERIAL=y
CONFIG_HW_RANDOM=y
CONFIG_HW_RANDOM_MTK=y
CONFIG_MTK_DEVAPC=y
CONFIG_PWM_MTK_DISP=y
CONFIG_MTK_CMDQ=y
# For nouveau. Note that DRM must be a module so that it's loaded after NFS is up to provide the firmware.
CONFIG_ARCH_TEGRA=y
CONFIG_DRM_NOUVEAU=m
CONFIG_DRM_TEGRA=m
CONFIG_R8169=y
CONFIG_STAGING=y
CONFIG_DRM_TEGRA_STAGING=y
CONFIG_TEGRA_HOST1X=y
CONFIG_ARM_TEGRA_DEVFREQ=y
CONFIG_TEGRA_SOCTHERM=y
CONFIG_DRM_TEGRA_DEBUG=y
CONFIG_PWM_TEGRA=y


@@ -1,51 +0,0 @@
#!/bin/bash
set -e
set -o xtrace
# Fetch the arm-built rootfs image and unpack it in our x86 container (saves
# network transfer, disk usage, and runtime on test jobs)
if wget -q --method=HEAD "${ARTIFACTS_PREFIX}/${FDO_UPSTREAM_REPO}/${ARTIFACTS_SUFFIX}/${arch}/done"; then
ARTIFACTS_URL="${ARTIFACTS_PREFIX}/${FDO_UPSTREAM_REPO}/${ARTIFACTS_SUFFIX}/${arch}"
else
ARTIFACTS_URL="${ARTIFACTS_PREFIX}/${CI_PROJECT_PATH}/${ARTIFACTS_SUFFIX}/${arch}"
fi
wget ${ARTIFACTS_URL}/lava-rootfs.tgz -O rootfs.tgz
mkdir -p /rootfs-$arch
tar -C /rootfs-$arch '--exclude=./dev/*' -zxf rootfs.tgz
rm rootfs.tgz
if [[ $arch == "arm64" ]]; then
mkdir -p /baremetal-files
pushd /baremetal-files
wget ${ARTIFACTS_URL}/Image
wget ${ARTIFACTS_URL}/Image.gz
wget ${ARTIFACTS_URL}/cheza-kernel
DEVICE_TREES=""
DEVICE_TREES="$DEVICE_TREES apq8016-sbc.dtb"
DEVICE_TREES="$DEVICE_TREES apq8096-db820c.dtb"
DEVICE_TREES="$DEVICE_TREES tegra210-p3450-0000.dtb"
for DTB in $DEVICE_TREES; do
wget ${ARTIFACTS_URL}/$DTB
done
popd
elif [[ $arch == "armhf" ]]; then
mkdir -p /baremetal-files
pushd /baremetal-files
wget ${ARTIFACTS_URL}/zImage
DEVICE_TREES="imx6q-cubox-i.dtb"
for DTB in $DEVICE_TREES; do
wget ${ARTIFACTS_URL}/$DTB
done
popd
fi
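
A rough Python rendition of the artifact URL fallback at the top of this script: prefer the upstream repo's prebuilt rootfs when its "done" marker exists, otherwise fall back to the current project's artifacts. The environment variable names mirror the shell script:

import os
import urllib.request

def pick_artifacts_url():
    prefix = os.environ["ARTIFACTS_PREFIX"]
    suffix = os.environ["ARTIFACTS_SUFFIX"]
    arch = os.environ["arch"]
    upstream = f"{prefix}/{os.environ['FDO_UPSTREAM_REPO']}/{suffix}/{arch}"
    fork = f"{prefix}/{os.environ['CI_PROJECT_PATH']}/{suffix}/{arch}"
    try:
        req = urllib.request.Request(f"{upstream}/done", method="HEAD")
        urllib.request.urlopen(req, timeout=10)
        return upstream
    except Exception:
        return fork

print(pick_artifacts_url())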


@@ -1,18 +0,0 @@
#!/bin/bash
set -ex
APITRACE_VERSION="790380e05854d5c9d315555444ffcc7acb8f4037"
git clone https://github.com/apitrace/apitrace.git --single-branch --no-checkout /apitrace
pushd /apitrace
git checkout "$APITRACE_VERSION"
git submodule update --init --depth 1 --recursive
cmake -S . -B _build -G Ninja -DCMAKE_BUILD_TYPE=Release -DENABLE_GUI=False -DENABLE_WAFFLE=on $EXTRA_CMAKE_ARGS
cmake --build _build --parallel --target apitrace eglretrace
mkdir build
cp _build/apitrace build
cp _build/eglretrace build
${STRIP_CMD:-strip} build/*
find . -not -path './build' -not -path './build/*' -delete
popd


@@ -1,42 +0,0 @@
#!/bin/bash
set -ex
SCRIPT_DIR="$(pwd)"
CROSVM_VERSION=c7cd0e0114c8363b884ba56d8e12adee718dcc93
git clone --single-branch -b main --no-checkout https://chromium.googlesource.com/chromiumos/platform/crosvm /platform/crosvm
pushd /platform/crosvm
git checkout "$CROSVM_VERSION"
git submodule update --init
# Apply all crosvm patches for Mesa CI
cat "$SCRIPT_DIR"/.gitlab-ci/container/build-crosvm_*.patch |
patch -p1
VIRGLRENDERER_VERSION=dd301caf7e05ec9c09634fb7872067542aad89b7
rm -rf third_party/virglrenderer
git clone --single-branch -b master --no-checkout https://gitlab.freedesktop.org/virgl/virglrenderer.git third_party/virglrenderer
pushd third_party/virglrenderer
git checkout "$VIRGLRENDERER_VERSION"
meson build/ $EXTRA_MESON_ARGS
ninja -C build install
popd
RUSTFLAGS='-L native=/usr/local/lib' cargo install \
bindgen \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local \
--version 0.60.1 \
$EXTRA_CARGO_ARGS
RUSTFLAGS='-L native=/usr/local/lib' cargo install \
-j ${FDO_CI_CONCURRENT:-4} \
--locked \
--features 'default-no-sandbox gpu x virgl_renderer virgl_renderer_next' \
--path . \
--root /usr/local \
$EXTRA_CARGO_ARGS
popd
rm -rf /platform/crosvm


@@ -1,43 +0,0 @@
From 3c57ec558bccc67fd53363c23deea20646be5c47 Mon Sep 17 00:00:00 2001
From: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Date: Wed, 17 Nov 2021 10:18:04 +0100
Subject: [PATCH] Hack syslog out
It's causing stability problems when running several Crosvm instances in
parallel.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
---
base/src/unix/linux/syslog.rs | 2 +-
common/sys_util/src/linux/syslog.rs | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/base/src/unix/linux/syslog.rs b/base/src/unix/linux/syslog.rs
index 05972a3a..f0db3781 100644
--- a/base/src/unix/linux/syslog.rs
+++ b/base/src/unix/linux/syslog.rs
@@ -35,7 +35,7 @@ pub struct PlatformSyslog {
impl Syslog for PlatformSyslog {
fn new() -> Result<Self, Error> {
Ok(Self {
- socket: Some(openlog_and_get_socket()?),
+ socket: None,
})
}
diff --git a/common/sys_util/src/linux/syslog.rs b/common/sys_util/src/linux/syslog.rs
index 05972a3a..f0db3781 100644
--- a/common/sys_util/src/linux/syslog.rs
+++ b/common/sys_util/src/linux/syslog.rs
@@ -35,7 +35,7 @@ pub struct PlatformSyslog {
impl Syslog for PlatformSyslog {
fn new() -> Result<Self, Error> {
Ok(Self {
- socket: Some(openlog_and_get_socket()?),
+ socket: None,
})
}
--
2.25.1


@@ -1,24 +0,0 @@
#!/bin/sh
set -ex
if [ -n "${DEQP_RUNNER_GIT_TAG}${DEQP_RUNNER_GIT_REV}" ]; then
# Build and install from source
DEQP_RUNNER_CARGO_ARGS="--git ${DEQP_RUNNER_GIT_URL:-https://gitlab.freedesktop.org/anholt/deqp-runner.git}"
if [ -n "${DEQP_RUNNER_GIT_TAG}" ]; then
DEQP_RUNNER_CARGO_ARGS="--tag ${DEQP_RUNNER_GIT_TAG} ${DEQP_RUNNER_CARGO_ARGS}"
else
DEQP_RUNNER_CARGO_ARGS="--rev ${DEQP_RUNNER_GIT_REV} ${DEQP_RUNNER_CARGO_ARGS}"
fi
DEQP_RUNNER_CARGO_ARGS="${DEQP_RUNNER_CARGO_ARGS} ${EXTRA_CARGO_ARGS}"
else
# Install from package registry
DEQP_RUNNER_CARGO_ARGS="--version 0.13.1 ${EXTRA_CARGO_ARGS} -- deqp-runner"
fi
cargo install --locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local \
${DEQP_RUNNER_CARGO_ARGS}


@@ -1,93 +0,0 @@
#!/bin/bash
set -ex
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
git clone \
https://github.com/KhronosGroup/VK-GL-CTS.git \
-b vulkan-cts-1.3.3.0 \
--depth 1 \
/VK-GL-CTS
pushd /VK-GL-CTS
# Apply a patch to update the zlib link to an available version.
# vulkan-cts-1.3.3.0 uses zlib 1.2.12, which was removed from the zlib server
# due to a CVE. See https://zlib.net/
# FIXME: Remove this patch when uprevving to 1.3.4.0+
wget -O- https://github.com/KhronosGroup/VK-GL-CTS/commit/6bb2e7d64261bedb503947b1b251b1eeeb49be73.patch |
git am -
# --insecure is due to SSL cert failures hitting sourceforge for zlib and
# libpng (sigh). The archives get their checksums checked anyway, and git
# always goes through ssh or https.
python3 external/fetch_sources.py --insecure
mkdir -p /deqp
# Save the testlog stylesheets:
cp doc/testlog-stylesheet/testlog.{css,xsl} /deqp
popd
pushd /deqp
# When including EGL/X11 testing, do that build first and save off its
# deqp-egl binary.
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=x11_egl_glx \
-DCMAKE_BUILD_TYPE=Release \
$EXTRA_CMAKE_ARGS
ninja modules/egl/deqp-egl
cp /deqp/modules/egl/deqp-egl /deqp/modules/egl/deqp-egl-x11
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=${DEQP_TARGET:-x11_glx} \
-DCMAKE_BUILD_TYPE=Release \
$EXTRA_CMAKE_ARGS
ninja
mv /deqp/modules/egl/deqp-egl-x11 /deqp/modules/egl/deqp-egl
# Copy out the mustpass lists we want.
mkdir /deqp/mustpass
for mustpass in $(< /VK-GL-CTS/external/vulkancts/mustpass/main/vk-default.txt) ; do
cat /VK-GL-CTS/external/vulkancts/mustpass/main/$mustpass \
>> /deqp/mustpass/vk-master.txt
done
cp \
/deqp/external/openglcts/modules/gl_cts/data/mustpass/gles/aosp_mustpass/3.2.6.x/*.txt \
/deqp/mustpass/.
cp \
/deqp/external/openglcts/modules/gl_cts/data/mustpass/egl/aosp_mustpass/3.2.6.x/egl-master.txt \
/deqp/mustpass/.
cp \
/deqp/external/openglcts/modules/gl_cts/data/mustpass/gles/khronos_mustpass/3.2.6.x/*-master.txt \
/deqp/mustpass/.
cp \
/deqp/external/openglcts/modules/gl_cts/data/mustpass/gl/khronos_mustpass/4.6.1.x/*-master.txt \
/deqp/mustpass/.
# Save *some* executor utils, but otherwise strip things down
# to reduce the deqp build size:
mkdir /deqp/executor.save
cp /deqp/executor/testlog-to-* /deqp/executor.save
rm -rf /deqp/executor
mv /deqp/executor.save /deqp/executor
# Remove other mustpass files, since we saved off the ones we wanted to convenient locations above.
rm -rf /deqp/external/openglcts/modules/gl_cts/data/mustpass
rm -rf /deqp/external/vulkancts/modules/vulkan/vk-master*
rm -rf /deqp/external/vulkancts/modules/vulkan/vk-default
rm -rf /deqp/external/openglcts/modules/cts-runner
rm -rf /deqp/modules/internal
rm -rf /deqp/execserver
rm -rf /deqp/framework
find -iname '*cmake*' -o -name '*ninja*' -o -name '*.o' -o -name '*.a' | xargs rm -rf
${STRIP_CMD:-strip} external/vulkancts/modules/vulkan/deqp-vk
${STRIP_CMD:-strip} external/openglcts/modules/glcts
${STRIP_CMD:-strip} modules/*/deqp-*
du -sh *
rm -rf /VK-GL-CTS
popd

View File

@@ -1,14 +0,0 @@
#!/bin/bash
set -ex
git clone https://github.com/ValveSoftware/Fossilize.git
cd Fossilize
git checkout 16fba1b8b5d9310126bb02323d7bae3227338461
git submodule update --init
mkdir build
cd build
cmake -S .. -B . -G Ninja -DCMAKE_BUILD_TYPE=Release
ninja -C . install
cd ../..
rm -rf Fossilize

View File

@@ -1,19 +0,0 @@
#!/bin/bash
set -ex
GFXRECONSTRUCT_VERSION=5ed3caeecc46e976c4df31e263df8451ae176c26
git clone https://github.com/LunarG/gfxreconstruct.git \
--single-branch \
-b master \
--no-checkout \
/gfxreconstruct
pushd /gfxreconstruct
git checkout "$GFXRECONSTRUCT_VERSION"
git submodule update --init
git submodule update
cmake -S . -B _build -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX:PATH=/gfxreconstruct/build -DBUILD_WERROR=OFF
cmake --build _build --parallel --target tools/{replay,info}/install/strip
find . -not -path './build' -not -path './build/*' -delete
popd

View File

@@ -1,16 +0,0 @@
#!/bin/bash
set -ex
PARALLEL_DEQP_RUNNER_VERSION=fe557794b5dadd8dbf0eae403296625e03bda18a
git clone https://gitlab.freedesktop.org/mesa/parallel-deqp-runner --single-branch -b master --no-checkout /parallel-deqp-runner
pushd /parallel-deqp-runner
git checkout "$PARALLEL_DEQP_RUNNER_VERSION"
meson . _build
ninja -C _build hang-detection
mkdir -p build/bin
install _build/hang-detection build/bin
strip build/bin/*
find . -not -path './build' -not -path './build/*' -delete
popd

View File

@@ -1,51 +0,0 @@
#!/bin/bash
set -ex
mkdir -p kernel
wget -qO- ${KERNEL_URL} | tar -xj --strip-components=1 -C kernel
pushd kernel
# The kernel doesn't like the gold linker (or the old lld in our debians).
# Sneak in some override symlinks during kernel build until we can update
# debian (they'll get blown away by the rm of the kernel dir at the end).
mkdir -p ld-links
for i in /usr/bin/*-ld /usr/bin/ld; do
i=`basename $i`
ln -sf /usr/bin/$i.bfd ld-links/$i
done
export PATH=`pwd`/ld-links:$PATH
export LOCALVERSION="`basename $KERNEL_URL`"
./scripts/kconfig/merge_config.sh ${DEFCONFIG} ../.gitlab-ci/container/${KERNEL_ARCH}.config
make ${KERNEL_IMAGE_NAME}
for image in ${KERNEL_IMAGE_NAME}; do
cp arch/${KERNEL_ARCH}/boot/${image} /lava-files/.
done
if [[ -n ${DEVICE_TREES} ]]; then
make dtbs
cp ${DEVICE_TREES} /lava-files/.
fi
if [[ ${DEBIAN_ARCH} = "amd64" || ${DEBIAN_ARCH} = "arm64" ]]; then
make modules
INSTALL_MOD_PATH=/lava-files/rootfs-${DEBIAN_ARCH}/ make modules_install
fi
if [[ ${DEBIAN_ARCH} = "arm64" ]]; then
make Image.lzma
mkimage \
-f auto \
-A arm \
-O linux \
-d arch/arm64/boot/Image.lzma \
-C lzma\
-b arch/arm64/boot/dts/qcom/sdm845-cheza-r3.dtb \
/lava-files/cheza-kernel
KERNEL_IMAGE_NAME+=" cheza-kernel"
fi
popd
rm -rf kernel

View File

@@ -1,30 +0,0 @@
#!/bin/bash
set -ex
export LLVM_CONFIG="llvm-config-11"
$LLVM_CONFIG --version
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
git clone \
https://github.com/llvm/llvm-project \
--depth 1 \
-b llvmorg-12.0.0-rc3 \
/llvm-project
mkdir /libclc
pushd /libclc
cmake -S /llvm-project/libclc -B . -G Ninja -DLLVM_CONFIG=$LLVM_CONFIG -DLIBCLC_TARGETS_TO_BUILD="spirv-mesa3d-;spirv64-mesa3d-" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr -DLLVM_SPIRV=/usr/bin/llvm-spirv
ninja
ninja install
popd
# Work around cmake vs debian packaging.
mkdir -p /usr/lib/clc
ln -s /usr/share/clc/spirv64-mesa3d-.spv /usr/lib/clc/
ln -s /usr/share/clc/spirv-mesa3d-.spv /usr/lib/clc/
du -sh *
rm -rf /libclc /llvm-project

View File

@@ -1,14 +0,0 @@
#!/bin/bash
set -ex
export LIBDRM_VERSION=libdrm-2.4.110
wget https://dri.freedesktop.org/libdrm/$LIBDRM_VERSION.tar.xz
tar -xvf $LIBDRM_VERSION.tar.xz && rm $LIBDRM_VERSION.tar.xz
cd $LIBDRM_VERSION
meson build -D vc4=false -D freedreno=false -D etnaviv=false $EXTRA_MESON_ARGS
ninja -C build install
cd ..
rm -rf $LIBDRM_VERSION

View File

@@ -1,28 +0,0 @@
#!/bin/bash
set -ex
git clone https://gitlab.freedesktop.org/mesa/piglit.git --single-branch --no-checkout /piglit
pushd /piglit
git checkout b2c9d8f56b45d79f804f4cb5ac62520f0edd8988
# TODO: Remove the following patch when the piglit commit gets past
# 1cd716180cfb6ef0c1fc54702460ef49e5115791
git apply $OLDPWD/.gitlab-ci/piglit/build-piglit_backport-s3-migration.diff
patch -p1 <$OLDPWD/.gitlab-ci/piglit/disable-vs_in.diff
cmake -S . -B . -G Ninja -DCMAKE_BUILD_TYPE=Release $PIGLIT_OPTS $EXTRA_CMAKE_ARGS
ninja $PIGLIT_BUILD_TARGETS
find -name .git -o -name '*ninja*' -o -iname '*cmake*' -o -name '*.[chao]' | xargs rm -rf
rm -rf target_api
if [ "x$PIGLIT_BUILD_TARGETS" = "xpiglit_replayer" ]; then
find ! -regex "^\.$" \
! -regex "^\.\/piglit.*" \
! -regex "^\.\/framework.*" \
! -regex "^\.\/bin$" \
! -regex "^\.\/bin\/replayer\.py" \
! -regex "^\.\/templates.*" \
! -regex "^\.\/tests$" \
! -regex "^\.\/tests\/replay\.py" 2>/dev/null | xargs rm -rf
fi
popd

View File

@@ -1,31 +0,0 @@
#!/bin/bash
# Note that this script is not actually "building" rust, but build- is the
# convention for the shared helpers for putting stuff in our containers.
set -ex
# cargo (and rustup) wants to store stuff in $HOME/.cargo, and binaries in
# $HOME/.cargo/bin. Make bin a link to a public bin directory so the commands
# are just available to all build jobs.
mkdir -p $HOME/.cargo
ln -s /usr/local/bin $HOME/.cargo/bin
# For rust in Mesa, we use rustup to install. This lets us pick an arbitrary
# version of the compiler, rather than whatever the container's Debian comes
# with.
#
# Pick the rust compiler (1.48) available in Debian stable, and pick a specific
# snapshot from rustup so the compiler doesn't drift on us.
wget https://sh.rustup.rs -O - | \
sh -s -- -y --default-toolchain 1.49.0-2020-12-31
# Set up a config script for cross compiling -- cargo needs your system cc for
# linking in cross builds, but doesn't know what you want to use for system cc.
cat > /root/.cargo/config <<EOF
[target.armv7-unknown-linux-gnueabihf]
linker = "arm-linux-gnueabihf-gcc"
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
EOF
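A minimal usage sketch, not part of the original script (the target triple is
chosen for illustration): with the [target.*] linker entries above in
/root/.cargo/config, a cross build only needs the matching rustup target and
--target flag.
# Illustrative only: cross-compile a crate for armv7 with the linker
# configured above.
rustup target add armv7-unknown-linux-gnueabihf
cargo build --release --target=armv7-unknown-linux-gnueabihf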

View File

@@ -1,97 +0,0 @@
#!/bin/bash
#
# Copyright (C) 2022 Collabora Limited
# Author: Guilherme Gallo <guilherme.gallo@collabora.com>
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
create_gn_args() {
# gn can be configured to cross-compile skia and its tools
# It is important to set target_cpu to guarantee the build targets the
# intended machine
cp "${BASE_ARGS_GN_FILE}" "${SKQP_OUT_DIR}"/args.gn
echo "target_cpu = \"${SKQP_ARCH}\"" >> "${SKQP_OUT_DIR}"/args.gn
}
download_skia_source() {
if [ -z ${SKIA_DIR+x} ]
then
return 1
fi
# Skia cloned from https://android.googlesource.com/platform/external/skqp
# has all needed assets tracked on git-fs
SKQP_REPO=https://android.googlesource.com/platform/external/skqp
SKQP_BRANCH=android-cts-11.0_r7
git clone --branch "${SKQP_BRANCH}" --depth 1 "${SKQP_REPO}" "${SKIA_DIR}"
}
set -ex
SCRIPT_DIR=$(realpath "$(dirname "$0")")
SKQP_PATCH_DIR="${SCRIPT_DIR}"
BASE_ARGS_GN_FILE="${SCRIPT_DIR}/build-skqp_base.gn"
SKQP_ARCH=${SKQP_ARCH:-x64}
SKIA_DIR=${SKIA_DIR:-$(mktemp -d)}
SKQP_OUT_DIR=${SKIA_DIR}/out/${SKQP_ARCH}
SKQP_INSTALL_DIR=/skqp
SKQP_ASSETS_DIR="${SKQP_INSTALL_DIR}/assets"
SKQP_BINARIES=(skqp)
download_skia_source
pushd "${SKIA_DIR}"
# Apply all skqp patches for Mesa CI
cat "${SKQP_PATCH_DIR}"/build-skqp_*.patch |
patch -p1
# Fetch the build tools needed to build skia/skqp.
# Basically, it clones repositories at the commit SHAs listed in the
# ${SKIA_DIR}/DEPS file.
python tools/git-sync-deps
mkdir -p "${SKQP_OUT_DIR}"
mkdir -p "${SKQP_INSTALL_DIR}"
create_gn_args
# Build and install skqp binaries
bin/gn gen "${SKQP_OUT_DIR}"
for BINARY in "${SKQP_BINARIES[@]}"
do
/usr/bin/ninja -C "${SKQP_OUT_DIR}" "${BINARY}"
# Strip binary, since gn is not stripping it even when `is_debug == false`
${STRIP_CMD:-strip} "${SKQP_OUT_DIR}/${BINARY}"
install -m 0755 "${SKQP_OUT_DIR}/${BINARY}" "${SKQP_INSTALL_DIR}"
done
# Move assets to the target directory, which will reside in rootfs.
mv platform_tools/android/apps/skqp/src/main/assets/ "${SKQP_ASSETS_DIR}"
popd
rm -Rf "${SKIA_DIR}"
set +ex

View File

@@ -1,13 +0,0 @@
diff --git a/BUILD.gn b/BUILD.gn
index d2b1407..7b60c90 100644
--- a/BUILD.gn
+++ b/BUILD.gn
@@ -144,7 +144,7 @@ config("skia_public") {
# Skia internal APIs, used by Skia itself and a few test tools.
config("skia_private") {
- visibility = [ ":*" ]
+ visibility = [ "*" ]
include_dirs = [
"include/private",

View File

@@ -1,47 +0,0 @@
cc = "clang"
cxx = "clang++"
extra_cflags = [ "-DSK_ENABLE_DUMP_GPU", "-DSK_BUILD_FOR_SKQP" ]
extra_cflags_cc = [
"-Wno-error",
# The skqp build process produces a lot of compilation warnings; silence
# most of them to remove clutter and to keep the CI job log from exceeding
# the maximum size
# GCC flags
"-Wno-redundant-move",
"-Wno-suggest-override",
"-Wno-class-memaccess",
"-Wno-deprecated-copy",
"-Wno-uninitialized",
# Clang flags
"-Wno-macro-redefined",
"-Wno-anon-enum-enum-conversion",
"-Wno-suggest-destructor-override",
"-Wno-return-std-move-in-c++11",
"-Wno-extra-semi-stmt",
]
cc_wrapper = "ccache"
is_debug = false
skia_enable_fontmgr_android = false
skia_enable_fontmgr_empty = true
skia_enable_pdf = false
skia_enable_skottie = false
skia_skqp_global_error_tolerance = 8
skia_tools_require_resources = true
skia_use_dng_sdk = false
skia_use_expat = true
skia_use_icu = false
skia_use_libheif = false
skia_use_lua = false
skia_use_piex = false
skia_use_vulkan = true
target_os = "linux"

View File

@@ -1,68 +0,0 @@
diff --git a/bin/fetch-gn b/bin/fetch-gn
index d5e94a2..59c4591 100755
--- a/bin/fetch-gn
+++ b/bin/fetch-gn
@@ -5,39 +5,44 @@
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
-import hashlib
import os
+import platform
import shutil
import stat
import sys
-import urllib2
+import tempfile
+import zipfile
+
+if sys.version_info[0] < 3:
+ from urllib2 import urlopen
+else:
+ from urllib.request import urlopen
os.chdir(os.path.join(os.path.dirname(__file__), os.pardir))
-dst = 'bin/gn.exe' if 'win32' in sys.platform else 'bin/gn'
+gnzip = os.path.join(tempfile.mkdtemp(), 'gn.zip')
+with open(gnzip, 'wb') as f:
+ OS = {'darwin': 'mac', 'linux': 'linux', 'linux2': 'linux', 'win32': 'windows'}[sys.platform]
+ cpu = {'amd64': 'amd64', 'arm64': 'arm64', 'x86_64': 'amd64', 'aarch64': 'arm64'}[platform.machine().lower()]
-sha1 = '2f27ff0b6118e5886df976da5effa6003d19d1ce' if 'linux' in sys.platform else \
- '9be792dd9010ce303a9c3a497a67bcc5ac8c7666' if 'darwin' in sys.platform else \
- 'eb69be2d984b4df60a8c21f598135991f0ad1742' # Windows
+ rev = 'd62642c920e6a0d1756316d225a90fd6faa9e21e'
+ url = 'https://chrome-infra-packages.appspot.com/dl/gn/gn/{}-{}/+/git_revision:{}'.format(
+ OS,cpu,rev)
+ f.write(urlopen(url).read())
-def sha1_of_file(path):
- h = hashlib.sha1()
- if os.path.isfile(path):
- with open(path, 'rb') as f:
- h.update(f.read())
- return h.hexdigest()
+gn = 'gn.exe' if 'win32' in sys.platform else 'gn'
+with zipfile.ZipFile(gnzip, 'r') as f:
+ f.extract(gn, 'bin')
-if sha1_of_file(dst) != sha1:
- with open(dst, 'wb') as f:
- f.write(urllib2.urlopen('https://chromium-gn.storage-download.googleapis.com/' + sha1).read())
+gn = os.path.join('bin', gn)
- os.chmod(dst, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR |
- stat.S_IRGRP | stat.S_IXGRP |
- stat.S_IROTH | stat.S_IXOTH )
+os.chmod(gn, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR |
+ stat.S_IRGRP | stat.S_IXGRP |
+ stat.S_IROTH | stat.S_IXOTH )
# We'll also copy to a path that depot_tools' GN wrapper will expect to find the binary.
copy_path = 'buildtools/linux64/gn' if 'linux' in sys.platform else \
'buildtools/mac/gn' if 'darwin' in sys.platform else \
'buildtools/win/gn.exe'
if os.path.isdir(os.path.dirname(copy_path)):
- shutil.copy(dst, copy_path)
+ shutil.copy(gn, copy_path)

View File

@@ -1,142 +0,0 @@
Patch based on a diff against the skia repository at commit
013397884c73959dc07cb0a26ee742b1cdfbda8a.
Adds support for Python 3, but removes the constraint of only SHA-based refs in
DEPS.
diff --git a/tools/git-sync-deps b/tools/git-sync-deps
index c7379c0b5c..f63d4d9ccf 100755
--- a/tools/git-sync-deps
+++ b/tools/git-sync-deps
@@ -43,7 +43,7 @@ def git_executable():
A string suitable for passing to subprocess functions, or None.
"""
envgit = os.environ.get('GIT_EXECUTABLE')
- searchlist = ['git']
+ searchlist = ['git', 'git.bat']
if envgit:
searchlist.insert(0, envgit)
with open(os.devnull, 'w') as devnull:
@@ -94,21 +94,25 @@ def is_git_toplevel(git, directory):
try:
toplevel = subprocess.check_output(
[git, 'rev-parse', '--show-toplevel'], cwd=directory).strip()
- return os.path.realpath(directory) == os.path.realpath(toplevel)
+ return os.path.realpath(directory) == os.path.realpath(toplevel.decode())
except subprocess.CalledProcessError:
return False
-def status(directory, checkoutable):
- def truncate(s, length):
+def status(directory, commithash, change):
+ def truncate_beginning(s, length):
+ return s if len(s) <= length else '...' + s[-(length-3):]
+ def truncate_end(s, length):
return s if len(s) <= length else s[:(length - 3)] + '...'
+
dlen = 36
- directory = truncate(directory, dlen)
- checkoutable = truncate(checkoutable, 40)
- sys.stdout.write('%-*s @ %s\n' % (dlen, directory, checkoutable))
+ directory = truncate_beginning(directory, dlen)
+ commithash = truncate_end(commithash, 40)
+ symbol = '>' if change else '@'
+ sys.stdout.write('%-*s %s %s\n' % (dlen, directory, symbol, commithash))
-def git_checkout_to_directory(git, repo, checkoutable, directory, verbose):
+def git_checkout_to_directory(git, repo, commithash, directory, verbose):
"""Checkout (and clone if needed) a Git repository.
Args:
@@ -117,8 +121,7 @@ def git_checkout_to_directory(git, repo, checkoutable, directory, verbose):
repo (string) the location of the repository, suitable
for passing to `git clone`.
- checkoutable (string) a tag, branch, or commit, suitable for
- passing to `git checkout`
+ commithash (string) a commit, suitable for passing to `git checkout`
directory (string) the path into which the repository
should be checked out.
@@ -129,7 +132,12 @@ def git_checkout_to_directory(git, repo, checkoutable, directory, verbose):
"""
if not os.path.isdir(directory):
subprocess.check_call(
- [git, 'clone', '--quiet', repo, directory])
+ [git, 'clone', '--quiet', '--no-checkout', repo, directory])
+ subprocess.check_call([git, 'checkout', '--quiet', commithash],
+ cwd=directory)
+ if verbose:
+ status(directory, commithash, True)
+ return
if not is_git_toplevel(git, directory):
# if the directory exists, but isn't a git repo, you will modify
@@ -145,11 +153,11 @@ def git_checkout_to_directory(git, repo, checkoutable, directory, verbose):
with open(os.devnull, 'w') as devnull:
# If this fails, we will fetch before trying again. Don't spam user
# with error infomation.
- if 0 == subprocess.call([git, 'checkout', '--quiet', checkoutable],
+ if 0 == subprocess.call([git, 'checkout', '--quiet', commithash],
cwd=directory, stderr=devnull):
# if this succeeds, skip slow `git fetch`.
if verbose:
- status(directory, checkoutable) # Success.
+ status(directory, commithash, False) # Success.
return
# If the repo has changed, always force use of the correct repo.
@@ -159,18 +167,24 @@ def git_checkout_to_directory(git, repo, checkoutable, directory, verbose):
subprocess.check_call([git, 'fetch', '--quiet'], cwd=directory)
- subprocess.check_call([git, 'checkout', '--quiet', checkoutable], cwd=directory)
+ subprocess.check_call([git, 'checkout', '--quiet', commithash], cwd=directory)
if verbose:
- status(directory, checkoutable) # Success.
+ status(directory, commithash, True) # Success.
def parse_file_to_dict(path):
dictionary = {}
- execfile(path, dictionary)
+ with open(path) as f:
+ exec('def Var(x): return vars[x]\n' + f.read(), dictionary)
return dictionary
+def is_sha1_sum(s):
+ """SHA1 sums are 160 bits, encoded as lowercase hexadecimal."""
+ return len(s) == 40 and all(c in '0123456789abcdef' for c in s)
+
+
def git_sync_deps(deps_file_path, command_line_os_requests, verbose):
"""Grab dependencies, with optional platform support.
@@ -204,19 +218,19 @@ def git_sync_deps(deps_file_path, command_line_os_requests, verbose):
raise Exception('%r is parent of %r' % (other_dir, directory))
list_of_arg_lists = []
for directory in sorted(dependencies):
- if not isinstance(dependencies[directory], basestring):
+ if not isinstance(dependencies[directory], str):
if verbose:
- print 'Skipping "%s".' % directory
+ sys.stdout.write( 'Skipping "%s".\n' % directory)
continue
if '@' in dependencies[directory]:
- repo, checkoutable = dependencies[directory].split('@', 1)
+ repo, commithash = dependencies[directory].split('@', 1)
else:
- raise Exception("please specify commit or tag")
+ raise Exception("please specify commit")
relative_directory = os.path.join(deps_file_directory, directory)
list_of_arg_lists.append(
- (git, repo, checkoutable, relative_directory, verbose))
+ (git, repo, commithash, relative_directory, verbose))
multithread(git_checkout_to_directory, list_of_arg_lists)

View File

@@ -1,41 +0,0 @@
diff --git a/tools/skqp/src/skqp.cpp b/tools/skqp/src/skqp.cpp
index 50ed9db01d..938217000d 100644
--- a/tools/skqp/src/skqp.cpp
+++ b/tools/skqp/src/skqp.cpp
@@ -448,7 +448,7 @@ inline void write(SkWStream* wStream, const T& text) {
void SkQP::makeReport() {
SkASSERT_RELEASE(fAssetManager);
- int glesErrorCount = 0, vkErrorCount = 0, gles = 0, vk = 0;
+ int glErrorCount = 0, glesErrorCount = 0, vkErrorCount = 0, gl = 0, gles = 0, vk = 0;
if (!sk_isdir(fReportDirectory.c_str())) {
SkDebugf("Report destination does not exist: '%s'\n", fReportDirectory.c_str());
@@ -460,6 +460,7 @@ void SkQP::makeReport() {
htmOut.writeText(kDocHead);
for (const SkQP::RenderResult& run : fRenderResults) {
switch (run.fBackend) {
+ case SkQP::SkiaBackend::kGL: ++gl; break;
case SkQP::SkiaBackend::kGLES: ++gles; break;
case SkQP::SkiaBackend::kVulkan: ++vk; break;
default: break;
@@ -477,15 +478,17 @@ void SkQP::makeReport() {
}
write(&htmOut, SkStringPrintf(" f(%s);\n", str.c_str()));
switch (run.fBackend) {
+ case SkQP::SkiaBackend::kGL: ++glErrorCount; break;
case SkQP::SkiaBackend::kGLES: ++glesErrorCount; break;
case SkQP::SkiaBackend::kVulkan: ++vkErrorCount; break;
default: break;
}
}
htmOut.writeText(kDocMiddle);
- write(&htmOut, SkStringPrintf("<p>gles errors: %d (of %d)</br>\n"
+ write(&htmOut, SkStringPrintf("<p>gl errors: %d (of %d)</br>\n"
+ "gles errors: %d (of %d)</br>\n"
"vk errors: %d (of %d)</p>\n",
- glesErrorCount, gles, vkErrorCount, vk));
+ glErrorCount, gl, glesErrorCount, gles, vkErrorCount, vk));
htmOut.writeText(kDocTail);
SkFILEWStream unitOut(SkOSPath::Join(fReportDirectory.c_str(), kUnitTestReportPath).c_str());
SkASSERT_RELEASE(unitOut.isValid());

View File

@@ -1,13 +0,0 @@
diff --git a/gn/BUILDCONFIG.gn b/gn/BUILDCONFIG.gn
index 454334a..1797594 100644
--- a/gn/BUILDCONFIG.gn
+++ b/gn/BUILDCONFIG.gn
@@ -80,7 +80,7 @@ if (current_cpu == "") {
is_clang = is_android || is_ios || is_mac ||
(cc == "clang" && cxx == "clang++") || clang_win != ""
if (!is_clang && !is_win) {
- is_clang = exec_script("gn/is_clang.py",
+ is_clang = exec_script("//gn/is_clang.py",
[
cc,
cxx,

View File

@@ -1,17 +0,0 @@
#!/bin/bash
set -ex
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
git clone \
https://github.com/intel/libva-utils.git \
-b 2.13.0 \
--depth 1 \
/va-utils
pushd /va-utils
meson build -D tests=true -Dprefix=/va $EXTRA_MESON_ARGS
ninja -C build install
popd
rm -rf /va-utils

View File

@@ -1,39 +0,0 @@
#!/bin/bash
set -ex
VKD3D_PROTON_COMMIT="5b73139f182d86cd58a757e4b5f0d4cfad96d319"
VKD3D_PROTON_DST_DIR="/vkd3d-proton-tests"
VKD3D_PROTON_SRC_DIR="/vkd3d-proton-src"
VKD3D_PROTON_BUILD_DIR="/vkd3d-proton-$VKD3D_PROTON_VERSION"
function build_arch {
local arch="$1"
shift
meson "$@" \
-Denable_tests=true \
--buildtype release \
--prefix "$VKD3D_PROTON_DST_DIR" \
--strip \
--bindir "x${arch}" \
--libdir "x${arch}" \
"$VKD3D_PROTON_BUILD_DIR/build.${arch}"
ninja -C "$VKD3D_PROTON_BUILD_DIR/build.${arch}" install
install -D -m755 -t "${VKD3D_PROTON_DST_DIR}/x${arch}/bin" "$VKD3D_PROTON_BUILD_DIR/build.${arch}/tests/d3d12"
}
git clone https://github.com/HansKristian-Work/vkd3d-proton.git --single-branch -b master --no-checkout "$VKD3D_PROTON_SRC_DIR"
pushd "$VKD3D_PROTON_SRC_DIR"
git checkout "$VKD3D_PROTON_COMMIT"
git submodule update --init --recursive
git submodule update --recursive
build_arch 64
build_arch 86
popd
rm -rf "$VKD3D_PROTON_BUILD_DIR"
rm -rf "$VKD3D_PROTON_SRC_DIR"

View File

@@ -1,22 +0,0 @@
#!/bin/bash
set -ex
export LIBWAYLAND_VERSION="1.18.0"
export WAYLAND_PROTOCOLS_VERSION="1.24"
git clone https://gitlab.freedesktop.org/wayland/wayland
cd wayland
git checkout "$LIBWAYLAND_VERSION"
meson -Ddocumentation=false -Ddtd_validation=false -Dlibraries=true _build
ninja -C _build install
cd ..
rm -rf wayland
git clone https://gitlab.freedesktop.org/wayland/wayland-protocols
cd wayland-protocols
git checkout "$WAYLAND_PROTOCOLS_VERSION"
meson _build
ninja -C _build install
cd ..
rm -rf wayland-protocols

View File

@@ -1,10 +0,0 @@
#!/bin/sh
if test -f /etc/debian_version; then
apt-get autoremove -y --purge
fi
# Clean up any build cache for rust.
rm -rf /.cargo
ccache --show-stats

View File

@@ -1,42 +0,0 @@
#!/bin/sh
if test -f /etc/debian_version; then
CCACHE_PATH=/usr/lib/ccache
else
CCACHE_PATH=/usr/lib64/ccache
fi
# Common setup among container builds before we get to building code.
export CCACHE_COMPILERCHECK=content
export CCACHE_COMPRESS=true
export CCACHE_DIR=/cache/$CI_PROJECT_NAME/ccache
export PATH=$CCACHE_PATH:$PATH
# CMake ignores $PATH, so we have to force CC/CXX to the ccache versions.
export CC="${CCACHE_PATH}/gcc"
export CXX="${CCACHE_PATH}/g++"
# Force linkers to gold, since it's so much faster for building. We can't use
# lld because we're on old debian and it's buggy. mingw fails meson builds
# with it with "meson.build:21:0: ERROR: Unable to determine dynamic linker"
find /usr/bin -name \*-ld -o -name ld | \
grep -v mingw | \
xargs -n 1 -I '{}' ln -sf '{}.gold' '{}'
ccache --show-stats
# Make a wrapper script for ninja to always include the -j flags
echo '#!/bin/sh -x' > /usr/local/bin/ninja
echo '/usr/bin/ninja -j${FDO_CI_CONCURRENT:-4} "$@"' >> /usr/local/bin/ninja
chmod +x /usr/local/bin/ninja
# Set MAKEFLAGS so that all make invocations in container builds include the
# flags (doesn't apply to non-container builds, but we don't run make there)
export MAKEFLAGS="-j${FDO_CI_CONCURRENT:-4}"
# Make wget try more than once when a download fails or times out
echo -e "retry_connrefused = on\n" \
"read_timeout = 300\n" \
"tries = 4\n" \
"wait_retry = 32" >> /etc/wgetrc

View File

@@ -1,35 +0,0 @@
#!/bin/bash
ndk=$1
arch=$2
cpu_family=$3
cpu=$4
cross_file="/cross_file-$arch.txt"
# armv7 has the toolchain split between two names.
arch2=${5:-$2}
# Note that we disable C++ exceptions, because Mesa doesn't use exceptions,
# and allowing it in code generation means we get unwind symbols that break
# the libEGL and driver symbol tests.
cat >$cross_file <<EOF
[binaries]
ar = '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/$arch-ar'
c = ['ccache', '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch2}29-clang', '-fno-exceptions', '-fno-unwind-tables', '-fno-asynchronous-unwind-tables']
cpp = ['ccache', '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch2}29-clang++', '-fno-exceptions', '-fno-unwind-tables', '-fno-asynchronous-unwind-tables']
c_ld = 'lld'
cpp_ld = 'lld'
strip = '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/$arch-strip'
pkgconfig = ['/usr/bin/pkg-config']
[host_machine]
system = 'linux'
cpu_family = '$cpu_family'
cpu = '$cpu'
endian = 'little'
[properties]
needs_exe_wrapper = true
EOF
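For context, a hedged usage sketch, not part of the original script (the arch
is illustrative): the generated cross file is consumed by meson like any other
cross file, matching how the NDK scripts elsewhere in this series use it.
# Illustrative only: configure a meson build with the cross file written
# above, assuming the script was run for aarch64-linux-android.
meson build-android-aarch64 --cross-file=/cross_file-aarch64-linux-android.txt
ninja -C build-android-aarch64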

View File

@@ -1,38 +0,0 @@
#!/bin/sh
# Makes a .pc file in the Android NDK for meson to find its libraries.
set -ex
ndk="$1"
pc="$2"
cflags="$3"
libs="$4"
version="$5"
sysroot=$ndk/toolchains/llvm/prebuilt/linux-x86_64/sysroot
for arch in \
x86_64-linux-android \
i686-linux-android \
aarch64-linux-android \
arm-linux-androideabi; do
pcdir=$sysroot/usr/lib/$arch/pkgconfig
mkdir -p $pcdir
cat >$pcdir/$pc <<EOF
prefix=$sysroot
exec_prefix=$sysroot
libdir=$sysroot/usr/lib/$arch/29
sharedlibdir=$sysroot/usr/lib/$arch
includedir=$sysroot/usr/include
Name: zlib
Description: zlib compression library
Version: $version
Requires:
Libs: -L$sysroot/usr/lib/$arch/29 $libs
Cflags: -I$sysroot/usr/include $cflags
EOF
done
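A quick, illustrative sanity check, not part of the original script (it assumes
the NDK was unpacked at /android-ndk-r21d, as in the other scripts here):
pkg-config should resolve the generated zlib.pc when pointed at the per-arch
pkgconfig directory.
# Illustrative only: verify the generated zlib.pc resolves for one arch.
sysroot=/android-ndk-r21d/toolchains/llvm/prebuilt/linux-x86_64/sysroot
PKG_CONFIG_LIBDIR=$sysroot/usr/lib/aarch64-linux-android/pkgconfig \
    pkg-config --cflags --libs zlib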

View File

@@ -1,51 +0,0 @@
#!/bin/bash
arch=$1
cross_file="/cross_file-$arch.txt"
/usr/share/meson/debcrossgen --arch $arch -o "$cross_file"
# Explicitly set ccache path for cross compilers
sed -i "s|/usr/bin/\([^-]*\)-linux-gnu\([^-]*\)-g|/usr/lib/ccache/\\1-linux-gnu\\2-g|g" "$cross_file"
if [ "$arch" = "i386" ]; then
# Work around a bug in debcrossgen that should be fixed in the next release
sed -i "s|cpu_family = 'i686'|cpu_family = 'x86'|g" "$cross_file"
fi
# Rely on qemu-user being configured in binfmt_misc on the host
sed -i -e '/\[properties\]/a\' -e "needs_exe_wrapper = False" "$cross_file"
# Add a line for rustc, which debcrossgen is missing.
cc=`sed -n 's|c = .\(.*\).|\1|p' < $cross_file`
if [[ "$arch" = "arm64" ]]; then
rust_target=aarch64-unknown-linux-gnu
elif [[ "$arch" = "armhf" ]]; then
rust_target=armv7-unknown-linux-gnueabihf
elif [[ "$arch" = "i386" ]]; then
rust_target=i686-unknown-linux-gnu
elif [[ "$arch" = "ppc64el" ]]; then
rust_target=powerpc64le-unknown-linux-gnu
elif [[ "$arch" = "s390x" ]]; then
rust_target=s390x-unknown-linux-gnu
else
echo "Needs rustc target mapping"
fi
sed -i -e '/\[binaries\]/a\' -e "rust = ['rustc', '--target=$rust_target', '-C', 'linker=$cc']" "$cross_file"
# Set up cmake cross compile toolchain file for dEQP builds
toolchain_file="/toolchain-$arch.cmake"
if [[ "$arch" = "arm64" ]]; then
GCC_ARCH="aarch64-linux-gnu"
DE_CPU="DE_CPU_ARM_64"
CMAKE_ARCH=arm
elif [[ "$arch" = "armhf" ]]; then
GCC_ARCH="arm-linux-gnueabihf"
DE_CPU="DE_CPU_ARM"
CMAKE_ARCH=arm
fi
if [[ -n "$GCC_ARCH" ]]; then
echo "set(CMAKE_SYSTEM_NAME Linux)" > "$toolchain_file"
echo "set(CMAKE_SYSTEM_PROCESSOR arm)" >> "$toolchain_file"
echo "set(CMAKE_C_COMPILER /usr/lib/ccache/$GCC_ARCH-gcc)" >> "$toolchain_file"
echo "set(CMAKE_CXX_COMPILER /usr/lib/ccache/$GCC_ARCH-g++)" >> "$toolchain_file"
echo "set(ENV{PKG_CONFIG} \"/usr/bin/$GCC_ARCH-pkg-config\")" >> "$toolchain_file"
echo "set(DE_CPU $DE_CPU)" >> "$toolchain_file"
fi
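For reference, a hedged usage sketch, not part of the original script (the
arm64 arch is illustrative): downstream jobs consume the two files generated
above roughly like this.
# Illustrative only: use the meson cross file and the cmake toolchain file.
meson _build --cross-file=/cross_file-arm64.txt
cmake -S . -B _build-cmake -G Ninja -DCMAKE_TOOLCHAIN_FILE=/toolchain-arm64.cmake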

View File

@@ -1,293 +0,0 @@
#!/bin/bash
set -ex
if [ $DEBIAN_ARCH = arm64 ]; then
ARCH_PACKAGES="firmware-qcom-media
firmware-linux-nonfree
libfontconfig1
libgl1
libglu1-mesa
libvulkan-dev
"
elif [ $DEBIAN_ARCH = amd64 ]; then
# Add llvm 13 to the build image
apt-get -y install --no-install-recommends wget gnupg2 software-properties-common
apt-key add /llvm-snapshot.gpg.key
add-apt-repository "deb https://apt.llvm.org/bullseye/ llvm-toolchain-bullseye-13 main"
ARCH_PACKAGES="firmware-amd-graphics
inetutils-syslogd
iptables
libcap2
libfontconfig1
libelf1
libfdt1
libgl1
libglu1-mesa
libllvm13
libllvm11
libva2
libva-drm2
libvulkan-dev
socat
spirv-tools
sysvinit-core
"
fi
INSTALL_CI_FAIRY_PACKAGES="git
python3-dev
python3-pip
python3-setuptools
python3-wheel
"
apt-get update
apt-get -y install --no-install-recommends \
$ARCH_PACKAGES \
$INSTALL_CI_FAIRY_PACKAGES \
$EXTRA_LOCAL_PACKAGES \
bash \
ca-certificates \
firmware-realtek \
initramfs-tools \
libasan6 \
libexpat1 \
libpng16-16 \
libpython3.9 \
libsensors5 \
libvulkan1 \
libwaffle-1-0 \
libx11-6 \
libx11-xcb1 \
libxcb-dri2-0 \
libxcb-dri3-0 \
libxcb-glx0 \
libxcb-present0 \
libxcb-randr0 \
libxcb-shm0 \
libxcb-sync1 \
libxcb-xfixes0 \
libxdamage1 \
libxext6 \
libxfixes3 \
libxkbcommon0 \
libxrender1 \
libxshmfence1 \
libxxf86vm1 \
netcat-openbsd \
python3 \
python3-lxml \
python3-mako \
python3-numpy \
python3-packaging \
python3-pil \
python3-renderdoc \
python3-requests \
python3-simplejson \
python3-yaml \
sntp \
strace \
waffle-utils \
wget \
xinit \
xserver-xorg-core
# Needed for ci-fairy, this revision is able to upload files to
# MinIO and doesn't depend on git
pip3 install git+http://gitlab.freedesktop.org/freedesktop/ci-templates@ffe4d1b10aab7534489f0c4bbc4c5899df17d3f2
apt-get purge -y \
$INSTALL_CI_FAIRY_PACKAGES
passwd root -d
chsh -s /bin/sh
cat > /init <<EOF
#!/bin/sh
export PS1=lava-shell:
exec sh
EOF
chmod +x /init
#######################################################################
# Strip the image to a small minimal system without removing the debian
# toolchain.
# Copy timezone file and remove tzdata package
rm -rf /etc/localtime
cp /usr/share/zoneinfo/Etc/UTC /etc/localtime
UNNEEDED_PACKAGES="
libfdisk1
"
export DEBIAN_FRONTEND=noninteractive
# Removing unused packages
for PACKAGE in ${UNNEEDED_PACKAGES}
do
echo ${PACKAGE}
if ! apt-get remove --purge --yes "${PACKAGE}"
then
echo "WARNING: ${PACKAGE} isn't installed"
fi
done
apt-get autoremove --yes || true
# Dropping logs
rm -rf /var/log/*
# Dropping documentation, localization, i18n files, etc
rm -rf /usr/share/doc/*
rm -rf /usr/share/locale/*
rm -rf /usr/share/X11/locale/*
rm -rf /usr/share/man
rm -rf /usr/share/i18n/*
rm -rf /usr/share/info/*
rm -rf /usr/share/lintian/*
rm -rf /usr/share/common-licenses/*
rm -rf /usr/share/mime/*
# Dropping reportbug scripts
rm -rf /usr/share/bug
# Drop udev hwdb not required on a stripped system
rm -rf /lib/udev/hwdb.bin /lib/udev/hwdb.d/*
# Drop all gconv conversions && binaries
rm -rf usr/bin/iconv
rm -rf usr/sbin/iconvconfig
rm -rf usr/lib/*/gconv/
# Remove libusb database
rm -rf usr/sbin/update-usbids
rm -rf var/lib/usbutils/usb.ids
rm -rf usr/share/misc/usb.ids
rm -rf /root/.pip
#######################################################################
# Crush into a minimal production image to be deployed via some type of image
# updating system.
# IMPORTANT: The Debian system is no longer functional at this point;
# for example, apt and dpkg will stop working
UNNEEDED_PACKAGES="apt libapt-pkg6.0 "\
"ncurses-bin ncurses-base libncursesw6 libncurses6 "\
"perl-base "\
"debconf libdebconfclient0 "\
"e2fsprogs e2fslibs libfdisk1 "\
"insserv "\
"udev "\
"init-system-helpers "\
"cpio "\
"passwd "\
"libsemanage1 libsemanage-common "\
"libsepol1 "\
"gpgv "\
"hostname "\
"adduser "\
"debian-archive-keyring "\
"libegl1-mesa-dev "\
"libegl-mesa0 "\
"libgl1-mesa-dev "\
"libgl1-mesa-dri "\
"libglapi-mesa "\
"libgles2-mesa-dev "\
"libglx-mesa0 "\
"mesa-common-dev "\
"gnupg2 "\
"software-properties-common " \
# Removing unneeded packages
for PACKAGE in ${UNNEEDED_PACKAGES}
do
echo "Forcing removal of ${PACKAGE}"
if ! dpkg --purge --force-remove-essential --force-depends "${PACKAGE}"
then
echo "WARNING: ${PACKAGE} isn't installed"
fi
done
# Show what's left package-wise before dropping dpkg itself
COLUMNS=300 dpkg-query -W --showformat='${Installed-Size;10}\t${Package}\n' | sort -k1,1n
# Drop dpkg
dpkg --purge --force-remove-essential --force-depends dpkg
# No apt or dpkg, no need for its configuration archives
rm -rf etc/apt
rm -rf etc/dpkg
# Drop directories not part of ostree
# Note that /var needs to exist as ostree bind mounts the deployment /var over
# it
rm -rf var/* opt srv share
# ca-certificates are in /etc, drop the source
rm -rf usr/share/ca-certificates
# No need for completions
rm -rf usr/share/bash-completion
# No zsh, no need for completions
rm -rf usr/share/zsh/vendor-completions
# drop gcc python helpers
rm -rf usr/share/gcc
# Drop sysvinit leftovers
rm -rf etc/init.d
rm -rf etc/rc[0-6S].d
# Drop upstart helpers
rm -rf etc/init
# Various xtables helpers
rm -rf usr/lib/xtables
# Drop all locales
# TODO: only remaining locale is actually "C". Should we really remove it?
rm -rf usr/lib/locale/*
# partition helpers
rm -rf usr/sbin/*fdisk
# locale compiler
rm -rf usr/bin/localedef
# Systemd dns resolver
find usr etc -name '*systemd-resolve*' -prune -exec rm -r {} \;
# Systemd network configuration
find usr etc -name '*networkd*' -prune -exec rm -r {} \;
# systemd ntp client
find usr etc -name '*timesyncd*' -prune -exec rm -r {} \;
# systemd hw database manager
find usr etc -name '*systemd-hwdb*' -prune -exec rm -r {} \;
# No need for fuse
find usr etc -name '*fuse*' -prune -exec rm -r {} \;
# lsb init function leftovers
rm -rf usr/lib/lsb
# Only needed when adding libraries
rm -rf usr/sbin/ldconfig*
# Games, unused
rmdir usr/games
# Remove pam module to authenticate against a DB
# plus libdb-5.3.so that is only used by this pam module
rm -rf usr/lib/*/security/pam_userdb.so
rm -rf usr/lib/*/libdb-5.3.so
# remove NSS support for nis, nisplus and hesiod
rm -rf usr/lib/*/libnss_hesiod*
rm -rf usr/lib/*/libnss_nis*

View File

@@ -1,79 +0,0 @@
#!/bin/bash
set -e
set -o xtrace
export DEBIAN_FRONTEND=noninteractive
# Ephemeral packages (installed for this script and removed again at the end)
STABLE_EPHEMERAL=" \
"
dpkg --add-architecture $arch
apt-get update
apt-get install -y --no-remove \
$STABLE_EPHEMERAL \
crossbuild-essential-$arch \
libelf-dev:$arch \
libexpat1-dev:$arch \
libpciaccess-dev:$arch \
libstdc++6:$arch \
libvulkan-dev:$arch \
libx11-dev:$arch \
libx11-xcb-dev:$arch \
libxcb-dri2-0-dev:$arch \
libxcb-dri3-dev:$arch \
libxcb-glx0-dev:$arch \
libxcb-present-dev:$arch \
libxcb-randr0-dev:$arch \
libxcb-shm0-dev:$arch \
libxcb-xfixes0-dev:$arch \
libxdamage-dev:$arch \
libxext-dev:$arch \
libxrandr-dev:$arch \
libxshmfence-dev:$arch \
libxxf86vm-dev:$arch \
wget
if [[ $arch != "armhf" ]]; then
if [[ $arch == "s390x" ]]; then
LLVM=9
else
LLVM=11
fi
# llvm-*-tools:$arch conflicts with python3:amd64. Install dependencies only
# with apt-get, then force-install llvm-*-{dev,tools}:$arch with dpkg to get
# around this.
apt-get install -y --no-remove \
libclang-cpp${LLVM}:$arch \
libffi-dev:$arch \
libgcc-s1:$arch \
libtinfo-dev:$arch \
libz3-dev:$arch \
llvm-${LLVM}:$arch \
zlib1g
fi
. .gitlab-ci/container/create-cross-file.sh $arch
. .gitlab-ci/container/container_pre_build.sh
# dependencies where we want a specific version
EXTRA_MESON_ARGS="--cross-file=/cross_file-${arch}.txt -D libdir=lib/$(dpkg-architecture -A $arch -qDEB_TARGET_MULTIARCH)"
. .gitlab-ci/container/build-libdrm.sh
apt-get purge -y \
$STABLE_EPHEMERAL
. .gitlab-ci/container/container_post_build.sh
# This needs to be done after container_post_build.sh, or apt-get breaks in there
if [[ $arch != "armhf" ]]; then
apt-get download llvm-${LLVM}-{dev,tools}:$arch
dpkg -i --force-depends llvm-${LLVM}-*_${arch}.deb
rm llvm-${LLVM}-*_${arch}.deb
fi

View File

@@ -1,106 +0,0 @@
#!/bin/bash
set -ex
EPHEMERAL="\
autoconf \
rdfind \
unzip \
"
apt-get install -y --no-remove $EPHEMERAL
# Fetch the NDK and extract just the toolchain we want.
ndk=android-ndk-r21d
wget -O $ndk.zip https://dl.google.com/android/repository/$ndk-linux-x86_64.zip
unzip -d / $ndk.zip "$ndk/toolchains/llvm/*"
rm $ndk.zip
# Since it was packed as a zip file, symlinks/hardlinks got turned into
# duplicate files. Turn them into hardlinks to save on container space.
rdfind -makehardlinks true -makeresultsfile false /android-ndk-r21d/
# Drop some large tools we won't use in this build.
find /android-ndk-r21d/ -type f | egrep -i "clang-check|clang-tidy|lldb" | xargs rm -f
sh .gitlab-ci/container/create-android-ndk-pc.sh /$ndk zlib.pc "" "-lz" "1.2.3"
sh .gitlab-ci/container/create-android-cross-file.sh /$ndk x86_64-linux-android x86_64 x86_64
sh .gitlab-ci/container/create-android-cross-file.sh /$ndk i686-linux-android x86 x86
sh .gitlab-ci/container/create-android-cross-file.sh /$ndk aarch64-linux-android arm armv8
sh .gitlab-ci/container/create-android-cross-file.sh /$ndk arm-linux-androideabi arm armv7hl armv7a-linux-androideabi
# Not using build-libdrm.sh because we don't want its cleanup after building
# each arch. Fetch and extract now.
export LIBDRM_VERSION=libdrm-2.4.110
wget https://dri.freedesktop.org/libdrm/$LIBDRM_VERSION.tar.xz
tar -xf $LIBDRM_VERSION.tar.xz && rm $LIBDRM_VERSION.tar.xz
for arch in \
x86_64-linux-android \
i686-linux-android \
aarch64-linux-android \
arm-linux-androideabi ; do
cd $LIBDRM_VERSION
rm -rf build-$arch
meson build-$arch \
--cross-file=/cross_file-$arch.txt \
--libdir=lib/$arch \
-Dlibkms=false \
-Dnouveau=false \
-Dvc4=false \
-Detnaviv=false \
-Dfreedreno=false \
-Dintel=false \
-Dcairo-tests=false \
-Dvalgrind=false
ninja -C build-$arch install
cd ..
done
rm -rf $LIBDRM_VERSION
export LIBELF_VERSION=libelf-0.8.13
wget https://fossies.org/linux/misc/old/$LIBELF_VERSION.tar.gz
# Not 100% sure who runs the mirror above so be extra careful
if ! echo "4136d7b4c04df68b686570afa26988ac ${LIBELF_VERSION}.tar.gz" | md5sum -c -; then
echo "Checksum failed"
exit 1
fi
tar -xf ${LIBELF_VERSION}.tar.gz
cd $LIBELF_VERSION
# Work around a bug in the original configure not enabling __LIBELF64.
autoreconf
for arch in \
x86_64-linux-android \
i686-linux-android \
aarch64-linux-android \
arm-linux-androideabi ; do
ccarch=${arch}
if [ "${arch}" == 'arm-linux-androideabi' ]
then
ccarch=armv7a-linux-androideabi
fi
export AR=/android-ndk-r21d/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch}-ar
export CC=/android-ndk-r21d/toolchains/llvm/prebuilt/linux-x86_64/bin/${ccarch}29-clang
export CXX=/android-ndk-r21d/toolchains/llvm/prebuilt/linux-x86_64/bin/${ccarch}29-clang++
export LD=/android-ndk-r21d/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch}-ld
export RANLIB=/android-ndk-r21d/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch}-ranlib
# The configure script doesn't know about android, but it doesn't really seem
# to use the host anyway
./configure --host=x86_64-linux-gnu --disable-nls --disable-shared \
--libdir=/usr/local/lib/${arch}
make install
make distclean
done
cd ..
rm -rf $LIBELF_VERSION
apt-get purge -y $EPHEMERAL

View File

@@ -1,74 +0,0 @@
#!/bin/bash
set -e
set -o xtrace
apt-get -y install ca-certificates
sed -i -e 's/http:\/\/deb/https:\/\/deb/g' /etc/apt/sources.list
echo 'deb https://deb.debian.org/debian buster main' >/etc/apt/sources.list.d/buster.list
apt-get update
apt-get -y install \
${EXTRA_LOCAL_PACKAGES} \
abootimg \
autoconf \
automake \
bc \
bison \
ccache \
cmake \
debootstrap \
fastboot \
flex \
g++ \
git \
glslang-tools \
kmod \
libasan6 \
libdrm-dev \
libelf-dev \
libexpat1-dev \
libvulkan-dev \
libx11-dev \
libx11-xcb-dev \
libxcb-dri2-0-dev \
libxcb-dri3-dev \
libxcb-glx0-dev \
libxcb-present-dev \
libxcb-randr0-dev \
libxcb-shm0-dev \
libxcb-xfixes0-dev \
libxdamage-dev \
libxext-dev \
libxrandr-dev \
libxshmfence-dev \
libxxf86vm-dev \
llvm-11-dev \
meson \
pkg-config \
python3-mako \
python3-pil \
python3-pip \
python3-requests \
python3-setuptools \
u-boot-tools \
wget \
xz-utils \
zlib1g-dev
# Not available anymore in bullseye
apt-get install -y --no-remove -t buster \
android-sdk-ext4-utils
pip3 install git+http://gitlab.freedesktop.org/freedesktop/ci-templates@ffe4d1b10aab7534489f0c4bbc4c5899df17d3f2
arch=armhf
. .gitlab-ci/container/cross_build.sh
. .gitlab-ci/container/container_pre_build.sh
# dependencies where we want a specific version
EXTRA_MESON_ARGS=
. .gitlab-ci/container/build-libdrm.sh
. .gitlab-ci/container/container_post_build.sh

View File

@@ -1,39 +0,0 @@
#!/bin/bash
set -e
set -o xtrace
############### Install packages for baremetal testing
apt-get install -y ca-certificates
sed -i -e 's/http:\/\/deb/https:\/\/deb/g' /etc/apt/sources.list
apt-get update
apt-get install -y --no-remove \
abootimg \
cpio \
fastboot \
netcat \
procps \
python3-distutils \
python3-minimal \
python3-serial \
rsync \
snmp \
wget
# setup SNMPv2 SMI MIB
wget https://raw.githubusercontent.com/net-snmp/net-snmp/master/mibs/SNMPv2-SMI.txt \
-O /usr/share/snmp/mibs/SNMPv2-SMI.txt
arch=arm64 . .gitlab-ci/container/baremetal_build.sh
arch=armhf . .gitlab-ci/container/baremetal_build.sh
# This firmware file from Debian bullseye causes hangs
wget https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/plain/qcom/a530_pfp.fw?id=d5f9eea5a251d43412b07f5295d03e97b89ac4a5 \
-O /rootfs-arm64/lib/firmware/qcom/a530_pfp.fw
mkdir -p /baremetal-files/jetson-nano/boot/
ln -s \
/baremetal-files/Image \
/baremetal-files/tegra210-p3450-0000.dtb \
/baremetal-files/jetson-nano/boot/

View File

@@ -1,5 +0,0 @@
#!/bin/bash
arch=i386
. .gitlab-ci/container/cross_build.sh

View File

@@ -1,52 +0,0 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.12 (GNU/Linux)
mQINBFE9lCwBEADi0WUAApM/mgHJRU8lVkkw0CHsZNpqaQDNaHefD6Rw3S4LxNmM
EZaOTkhP200XZM8lVdbfUW9xSjA3oPldc1HG26NjbqqCmWpdo2fb+r7VmU2dq3NM
R18ZlKixiLDE6OUfaXWKamZsXb6ITTYmgTO6orQWYrnW6ckYHSeaAkW0wkDAryl2
B5v8aoFnQ1rFiVEMo4NGzw4UX+MelF7rxaaregmKVTPiqCOSPJ1McC1dHFN533FY
Wh/RVLKWo6npu+owtwYFQW+zyQhKzSIMvNujFRzhIxzxR9Gn87MoLAyfgKEzrbbT
DhqqNXTxS4UMUKCQaO93TzetX/EBrRpJj+vP640yio80h4Dr5pAd7+LnKwgpTDk1
G88bBXJAcPZnTSKu9I2c6KY4iRNbvRz4i+ZdwwZtdW4nSdl2792L7Sl7Nc44uLL/
ZqkKDXEBF6lsX5XpABwyK89S/SbHOytXv9o4puv+65Ac5/UShspQTMSKGZgvDauU
cs8kE1U9dPOqVNCYq9Nfwinkf6RxV1k1+gwtclxQuY7UpKXP0hNAXjAiA5KS5Crq
7aaJg9q2F4bub0mNU6n7UI6vXguF2n4SEtzPRk6RP+4TiT3bZUsmr+1ktogyOJCc
Ha8G5VdL+NBIYQthOcieYCBnTeIH7D3Sp6FYQTYtVbKFzmMK+36ERreL/wARAQAB
tD1TeWx2ZXN0cmUgTGVkcnUgLSBEZWJpYW4gTExWTSBwYWNrYWdlcyA8c3lsdmVz
dHJlQGRlYmlhbi5vcmc+iQI4BBMBAgAiBQJRPZQsAhsDBgsJCAcDAgYVCAIJCgsE
FgIDAQIeAQIXgAAKCRAVz00Yr090Ibx+EADArS/hvkDF8juWMXxh17CgR0WZlHCC
9CTBWkg5a0bNN/3bb97cPQt/vIKWjQtkQpav6/5JTVCSx2riL4FHYhH0iuo4iAPR
udC7Cvg8g7bSPrKO6tenQZNvQm+tUmBHgFiMBJi92AjZ/Qn1Shg7p9ITivFxpLyX
wpmnF1OKyI2Kof2rm4BFwfSWuf8Fvh7kDMRLHv+MlnK/7j/BNpKdozXxLcwoFBmn
l0WjpAH3OFF7Pvm1LJdf1DjWKH0Dc3sc6zxtmBR/KHHg6kK4BGQNnFKujcP7TVdv
gMYv84kun14pnwjZcqOtN3UJtcx22880DOQzinoMs3Q4w4o05oIF+sSgHViFpc3W
R0v+RllnH05vKZo+LDzc83DQVrdwliV12eHxrMQ8UYg88zCbF/cHHnlzZWAJgftg
hB08v1BKPgYRUzwJ6VdVqXYcZWEaUJmQAPuAALyZESw94hSo28FAn0/gzEc5uOYx
K+xG/lFwgAGYNb3uGM5m0P6LVTfdg6vDwwOeTNIExVk3KVFXeSQef2ZMkhwA7wya
KJptkb62wBHFE+o9TUdtMCY6qONxMMdwioRE5BYNwAsS1PnRD2+jtlI0DzvKHt7B
MWd8hnoUKhMeZ9TNmo+8CpsAtXZcBho0zPGz/R8NlJhAWpdAZ1CmcPo83EW86Yq7
BxQUKnNHcwj2ebkCDQRRPZQsARAA4jxYmbTHwmMjqSizlMJYNuGOpIidEdx9zQ5g
zOr431/VfWq4S+VhMDhs15j9lyml0y4ok215VRFwrAREDg6UPMr7ajLmBQGau0Fc
bvZJ90l4NjXp5p0NEE/qOb9UEHT7EGkEhaZ1ekkWFTWCgsy7rRXfZLxB6sk7pzLC
DshyW3zjIakWAnpQ5j5obiDy708pReAuGB94NSyb1HoW/xGsGgvvCw4r0w3xPStw
F1PhmScE6NTBIfLliea3pl8vhKPlCh54Hk7I8QGjo1ETlRP4Qll1ZxHJ8u25f/ta
RES2Aw8Hi7j0EVcZ6MT9JWTI83yUcnUlZPZS2HyeWcUj+8nUC8W4N8An+aNps9l/
21inIl2TbGo3Yn1JQLnA1YCoGwC34g8QZTJhElEQBN0X29ayWW6OdFx8MDvllbBV
ymmKq2lK1U55mQTfDli7S3vfGz9Gp/oQwZ8bQpOeUkc5hbZszYwP4RX+68xDPfn+
M9udl+qW9wu+LyePbW6HX90LmkhNkkY2ZzUPRPDHZANU5btaPXc2H7edX4y4maQa
xenqD0lGh9LGz/mps4HEZtCI5CY8o0uCMF3lT0XfXhuLksr7Pxv57yue8LLTItOJ
d9Hmzp9G97SRYYeqU+8lyNXtU2PdrLLq7QHkzrsloG78lCpQcalHGACJzrlUWVP/
fN3Ht3kAEQEAAYkCHwQYAQIACQUCUT2ULAIbDAAKCRAVz00Yr090IbhWEADbr50X
OEXMIMGRLe+YMjeMX9NG4jxs0jZaWHc/WrGR+CCSUb9r6aPXeLo+45949uEfdSsB
pbaEdNWxF5Vr1CSjuO5siIlgDjmT655voXo67xVpEN4HhMrxugDJfCa6z97P0+ML
PdDxim57uNqkam9XIq9hKQaurxMAECDPmlEXI4QT3eu5qw5/knMzDMZj4Vi6hovL
wvvAeLHO/jsyfIdNmhBGU2RWCEZ9uo/MeerPHtRPfg74g+9PPfP6nyHD2Wes6yGd
oVQwtPNAQD6Cj7EaA2xdZYLJ7/jW6yiPu98FFWP74FN2dlyEA2uVziLsfBrgpS4l
tVOlrO2YzkkqUGrybzbLpj6eeHx+Cd7wcjI8CalsqtL6cG8cUEjtWQUHyTbQWAgG
5VPEgIAVhJ6RTZ26i/G+4J8neKyRs4vz+57UGwY6zI4AB1ZcWGEE3Bf+CDEDgmnP
LSwbnHefK9IljT9XU98PelSryUO/5UPw7leE0akXKB4DtekToO226px1VnGp3Bov
1GBGvpHvL2WizEwdk+nfk8LtrLzej+9FtIcq3uIrYnsac47Pf7p0otcFeTJTjSq3
krCaoG4Hx0zGQG2ZFpHrSrZTVy6lxvIdfi0beMgY6h78p6M9eYZHQHc02DjFkQXN
bXb5c6gCHESH5PXwPU4jQEE7Ib9J6sbk7ZT2Mw==
=j+4q
-----END PGP PUBLIC KEY BLOCK-----

View File

@@ -1,5 +0,0 @@
#!/bin/bash
arch=ppc64el
. .gitlab-ci/container/cross_build.sh

View File

@@ -1,5 +0,0 @@
#!/bin/bash
arch=s390x
. .gitlab-ci/container/cross_build.sh

View File

@@ -1,53 +0,0 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQGNBFwOmrgBDAC9FZW3dFpew1hwDaqRfdQQ1ABcmOYu1NKZHwYjd+bGvcR2LRGe
R5dfRqG1Uc/5r6CPCMvnWxFprymkqKEADn8eFn+aCnPx03HrhA+lNEbciPfTHylt
NTTuRua7YpJIgEOjhXUbxXxnvF8fhUf5NJpJg6H6fPQARUW+5M//BlVgwn2jhzlW
U+uwgeJthhiuTXkls9Yo3EoJzmkUih+ABZgvaiBpr7GZRw9GO1aucITct0YDNTVX
KA6el78/udi5GZSCKT94yY9ArN4W6NiOFCLV7MU5d6qMjwGFhfg46NBv9nqpGinK
3NDjqCevKouhtKl2J+nr3Ju3Spzuv6Iex7tsOqt+XdZCoY+8+dy3G5zbJwBYsMiS
rTNF55PHtBH1S0QK5OoN2UR1ie/aURAyAFEMhTzvFB2B2v7C0IKIOmYMEG+DPMs9
FQs/vZ1UnAQgWk02ZiPryoHfjFO80+XYMrdWN+RSo5q9ODClloaKXjqI/aWLGirm
KXw2R8tz31go3NMAEQEAAbQnV2luZUhRIHBhY2thZ2VzIDx3aW5lLWRldmVsQHdp
bmVocS5vcmc+iQHOBBMBCgA4AhsDBQsJCAcCBhUKCQgLAgQWAgMBAh4BAheAFiEE
1D9kAUU2nFHXht3qdvGiD/mHZy8FAlwOmyUACgkQdvGiD/mHZy/zkwv7B+nKFlDY
Bzz/7j0gqIODbs5FRZRtuf/IuPP3vZdWlNfAW/VyaLtVLJCM/mmaf/O6/gJ+D+E9
BBoSmHdHzBBOQHIj5IbRedynNcHT5qXsdBeU2ZPR50sdE+jmukvw3Wa5JijoDgUu
LGLGtU48Z3JsBXQ54OlnTZXQ2SMFhRUa10JANXSJQ+QY2Wo2Pi2+MEAHcrd71A2S
0mT2DQSSBQ92c6WPfUpOSBawd8P0ipT7rVFNLJh8HVQGyEWxPl8ecDEHoVfG2rdV
D0ADbNLx9031UUwpUicO6vW/2Ec7c3VNG1cpOtyNTw/lEgvsXOh3GQs/DvFvMy/h
QzaeF3Qq6cAPlKuxieJe4lLYFBTmCAT4iB1J8oeFs4G7ScfZH4+4NBe3VGoeCD/M
Wl+qxntAroblxiFuqtPJg+NKZYWBzkptJNhnrBxcBnRinGZLw2k/GR/qPMgsR2L4
cP+OUuka+R2gp9oDVTZTyMowz+ROIxnEijF50pkj2VBFRB02rfiMp7q6iQIzBBAB
CgAdFiEE2iNXmnTUrZr50/lFzvrI6q8XUZ0FAlwOm3AACgkQzvrI6q8XUZ3KKg/+
MD8CgvLiHEX90fXQ23RZQRm2J21w3gxdIen/N8yJVIbK7NIgYhgWfGWsGQedtM7D
hMwUlDSRb4rWy9vrXBaiZoF3+nK9AcLvPChkZz28U59Jft6/l0gVrykey/ERU7EV
w1Ie1eRu0tRSXsKvMZyQH8897iHZ7uqoJgyk8U8CvSW+V80yqLB2M8Tk8ECZq34f
HqUIGs4Wo0UZh0vV4+dEQHBh1BYpmmWl+UPf7nzNwFWXu/EpjVhkExRqTnkEJ+Ai
OxbtrRn6ETKzpV4DjyifqQF639bMIem7DRRf+mkcrAXetvWkUkE76e3E9KLvETCZ
l4SBfgqSZs2vNngmpX6Qnoh883aFo5ZgVN3v6uTS+LgTwMt/XlnDQ7+Zw+ehCZ2R
CO21Y9Kbw6ZEWls/8srZdCQ2LxnyeyQeIzsLnqT/waGjQj35i4exzYeWpojVDb3r
tvvOALYGVlSYqZXIALTx2/tHXKLHyrn1C0VgHRnl+hwv7U49f7RvfQXpx47YQN/C
PWrpbG69wlKuJptr+olbyoKAWfl+UzoO8vLMo5njWQNAoAwh1H8aFUVNyhtbkRuq
l0kpy1Cmcq8uo6taK9lvYp8jak7eV8lHSSiGUKTAovNTwfZG2JboGV4/qLDUKvpa
lPp2xVpF9MzA8VlXTOzLpSyIVxZnPTpL+xR5P9WQjMS5AY0EXA6auAEMAMReKL89
0z0SL+/i/geB/agfG/k6AXiG2a9kVWeIjAqFwHKl9W/DTNvOqCDgAt51oiHGRRjt
1Xm3XZD4p+GM1uZWn9qIFL49Gt5x94TqdrsKTVCJr0Kazn2mKQc7aja0zac+WtZG
OFn7KbniuAcwtC780cyikfmmExLI1/Vjg+NiMlMtZfpK6FIW+ulPiDQPdzIhVppx
w9/KlR2Fvh4TbzDsUqkFQSSAFdQ65BWgvzLpZHdKO/ILpDkThLbipjtvbBv/pHKM
O/NFTNoYkJ3cNW/kfcynwV+4AcKwdRz2A3Mez+g5TKFYPZROIbayOo01yTMLfz2p
jcqki/t4PACtwFOhkAs+MYPPyZDUkTFcEJQCPDstkAgmJWI3K2qELtDOLQyps3WY
Mfp+mntOdc8bKjFTMcCEk1zcm14K4Oms+w6dw2UnYsX1FAYYhPm8HUYwE4kP8M+D
9HGLMjLqqF/kanlCFZs5Avx3mDSAx6zS8vtNdGh+64oDNk4x4A2j8GTUuQARAQAB
iQG8BBgBCgAmFiEE1D9kAUU2nFHXht3qdvGiD/mHZy8FAlwOmrgCGwwFCQPCZwAA
CgkQdvGiD/mHZy9FnAwAgfUkxsO53Pm2iaHhtF4+BUc8MNJj64Jvm1tghr6PBRtM
hpbvvN8SSOFwYIsS+2BMsJ2ldox4zMYhuvBcgNUlix0G0Z7h1MjftDdsLFi1DNv2
J9dJ9LdpWdiZbyg4Sy7WakIZ/VvH1Znd89Imo7kCScRdXTjIw2yCkotE5lK7A6Ns
NbVuoYEN+dbGioF4csYehnjTdojwF/19mHFxrXkdDZ/V6ZYFIFxEsxL8FEuyI4+o
LC3DFSA4+QAFdkjGFXqFPlaEJxWt5d7wk0y+tt68v+ulkJ900BvR+OOMqQURwrAi
iP3I28aRrMjZYwyqHl8i/qyIv+WRakoDKV+wWteR5DmRAPHmX2vnlPlCmY8ysR6J
2jUAfuDFVu4/qzJe6vw5tmPJMdfvy0W5oogX6sEdin5M5w2b3WrN8nXZcjbWymqP
6jCdl6eoCCkKNOIbr/MMSkd2KqAqDVM5cnnlQ7q+AXzwNpj3RGJVoBxbS0nn9JWY
QNQrWh9rAcMIGT+b1le0
=4lsa
-----END PGP PUBLIC KEY BLOCK-----

View File

@@ -1,19 +0,0 @@
#!/bin/bash
set -e
set -o xtrace
# Install wine; we need this for testing mingw or nine
# We need multiarch for Wine
dpkg --add-architecture i386
apt-get update
apt-get install -y --no-remove \
wine \
wine32 \
wine64 \
xvfb
# Used to initialize the Wine environment to reduce build time
wine64 whoami.exe

View File

@@ -1,87 +0,0 @@
#!/bin/bash
set -e
set -o xtrace
export DEBIAN_FRONTEND=noninteractive
apt-get install -y ca-certificates gnupg2 software-properties-common
# Add llvm 13 to the build image
apt-key add .gitlab-ci/container/debian/llvm-snapshot.gpg.key
add-apt-repository "deb https://apt.llvm.org/bullseye/ llvm-toolchain-bullseye-13 main"
sed -i -e 's/http:\/\/deb/https:\/\/deb/g' /etc/apt/sources.list
# Ephemeral packages (installed for this script and removed again at
# the end)
STABLE_EPHEMERAL=" \
python3-pip \
python3-setuptools \
"
apt-get update
apt-get install -y --no-remove \
$STABLE_EPHEMERAL \
bison \
ccache \
dpkg-cross \
flex \
g++ \
cmake \
gcc \
git \
glslang-tools \
kmod \
libclang-13-dev \
libclang-11-dev \
libclang-9-dev \
libclc-dev \
libelf-dev \
libepoxy-dev \
libexpat1-dev \
libgtk-3-dev \
libllvm13 \
libllvm11 \
libllvm9 \
libomxil-bellagio-dev \
libpciaccess-dev \
libunwind-dev \
libva-dev \
libvdpau-dev \
libvulkan-dev \
libx11-dev \
libx11-xcb-dev \
libxext-dev \
libxml2-utils \
libxrandr-dev \
libxrender-dev \
libxshmfence-dev \
libxvmc-dev \
libxxf86vm-dev \
make \
meson \
pkg-config \
python3-mako \
python3-pil \
python3-requests \
qemu-user \
valgrind \
wget \
x11proto-dri2-dev \
x11proto-gl-dev \
x11proto-randr-dev \
xz-utils \
zlib1g-dev
# Needed for ci-fairy, this revision is able to upload files to MinIO
pip3 install git+http://gitlab.freedesktop.org/freedesktop/ci-templates@ffe4d1b10aab7534489f0c4bbc4c5899df17d3f2
. .gitlab-ci/container/debian/x86_build-base-wine.sh
############### Uninstall ephemeral packages
apt-get purge -y $STABLE_EPHEMERAL
. .gitlab-ci/container/container_post_build.sh

View File

@@ -1,74 +0,0 @@
#!/bin/bash
# Pull packages from the msys2 repository that can be used directly.
# We can use https://packages.msys2.org/ to retrieve the newest packages
mkdir ~/tmp
pushd ~/tmp
MINGW_PACKET_LIST="
mingw-w64-x86_64-headers-git-10.0.0.r14.ga08c638f8-1-any.pkg.tar.zst
mingw-w64-x86_64-vulkan-loader-1.3.211-1-any.pkg.tar.zst
mingw-w64-x86_64-libelf-0.8.13-6-any.pkg.tar.zst
mingw-w64-x86_64-zlib-1.2.12-1-any.pkg.tar.zst
mingw-w64-x86_64-zstd-1.5.2-2-any.pkg.tar.zst
"
for i in $MINGW_PACKET_LIST
do
wget -q https://mirror.msys2.org/mingw/mingw64/$i
tar xf $i --strip-components=1 -C /usr/x86_64-w64-mingw32/
done
popd
rm -rf ~/tmp
mkdir -p /usr/x86_64-w64-mingw32/bin
# The output of `wine64 llvm-config --system-libs --cxxflags mcdisassembler`
# contains absolute paths like '-IZ:'.
# The sed is used to replace `-IZ:/usr/x86_64-w64-mingw32/include`
# with `-I/usr/x86_64-w64-mingw32/include`.
# Debian's pkg-config wrappers for mingw are broken, and there's no sign that
# they're going to be fixed, so we'll just have to fix it ourselves
# https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=930492
cat >/usr/x86_64-w64-mingw32/bin/pkg-config <<EOF
#!/bin/sh
PKG_CONFIG_LIBDIR=/usr/x86_64-w64-mingw32/lib/pkgconfig:/usr/x86_64-w64-mingw32/share/pkgconfig pkg-config \$@
EOF
chmod +x /usr/x86_64-w64-mingw32/bin/pkg-config
cat >/usr/x86_64-w64-mingw32/bin/llvm-config <<EOF
#!/bin/sh
wine64 llvm-config \$@ | sed -e "s,Z:/,/,gi"
EOF
chmod +x /usr/x86_64-w64-mingw32/bin/llvm-config
cat >/usr/x86_64-w64-mingw32/bin/clang <<EOF
#!/bin/sh
wine64 clang \$@
EOF
chmod +x /usr/x86_64-w64-mingw32/bin/clang
cat >/usr/x86_64-w64-mingw32/bin/llvm-as <<EOF
#!/bin/sh
wine64 llvm-as \$@
EOF
chmod +x /usr/x86_64-w64-mingw32/bin/llvm-as
cat >/usr/x86_64-w64-mingw32/bin/llvm-link <<EOF
#!/bin/sh
wine64 llvm-link \$@
EOF
chmod +x /usr/x86_64-w64-mingw32/bin/llvm-link
cat >/usr/x86_64-w64-mingw32/bin/opt <<EOF
#!/bin/sh
wine64 opt \$@
EOF
chmod +x /usr/x86_64-w64-mingw32/bin/opt
cat >/usr/x86_64-w64-mingw32/bin/llvm-spirv <<EOF
#!/bin/sh
wine64 llvm-spirv \$@
EOF
chmod +x /usr/x86_64-w64-mingw32/bin/llvm-spirv
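As a quick, illustrative check, not part of the original script (and assuming
the msys2 zlib package extracted above ships a pkgconfig file): the wrappers
behave like native tools, so the usual queries should work once the wine
environment is initialized.
# Illustrative only: exercise the mingw wrappers created above.
/usr/x86_64-w64-mingw32/bin/llvm-config --version
/usr/x86_64-w64-mingw32/bin/pkg-config --modversion zlib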

View File

@@ -1,100 +0,0 @@
#!/bin/bash
wd=$PWD
CMAKE_TOOLCHAIN_MINGW_PATH=$wd/.gitlab-ci/container/debian/x86_mingw-toolchain.cmake
mkdir -p ~/tmp
pushd ~/tmp
# Building DirectX-Headers
git clone https://github.com/microsoft/DirectX-Headers -b v1.606.3 --depth 1
mkdir -p DirectX-Headers/build
pushd DirectX-Headers/build
meson .. \
--backend=ninja \
--buildtype=release -Dbuild-test=false \
-Dprefix=/usr/x86_64-w64-mingw32/ \
--cross-file=$wd/.gitlab-ci/x86_64-w64-mingw32
ninja install
popd
export VULKAN_SDK_VERSION=1.3.211.0
# Building SPIRV Tools
git clone -b sdk-$VULKAN_SDK_VERSION --depth=1 \
https://github.com/KhronosGroup/SPIRV-Tools SPIRV-Tools
git clone -b sdk-$VULKAN_SDK_VERSION --depth=1 \
https://github.com/KhronosGroup/SPIRV-Headers SPIRV-Tools/external/SPIRV-Headers
mkdir -p SPIRV-Tools/build
pushd SPIRV-Tools/build
cmake .. \
-DCMAKE_TOOLCHAIN_FILE=$CMAKE_TOOLCHAIN_MINGW_PATH \
-DCMAKE_INSTALL_PREFIX=/usr/x86_64-w64-mingw32/ \
-GNinja -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_CROSSCOMPILING=1 \
-DCMAKE_POLICY_DEFAULT_CMP0091=NEW
ninja install
popd
# Building LLVM
git clone -b release/14.x --depth=1 \
https://github.com/llvm/llvm-project llvm-project
git clone -b v14.0.0 --depth=1 \
https://github.com/KhronosGroup/SPIRV-LLVM-Translator llvm-project/llvm/projects/SPIRV-LLVM-Translator
mkdir llvm-project/build
pushd llvm-project/build
cmake ../llvm \
-DCMAKE_TOOLCHAIN_FILE=$CMAKE_TOOLCHAIN_MINGW_PATH \
-DCMAKE_INSTALL_PREFIX=/usr/x86_64-w64-mingw32/ \
-GNinja -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_CROSSCOMPILING=1 \
-DLLVM_ENABLE_RTTI=ON \
-DCROSS_TOOLCHAIN_FLAGS_NATIVE=-DLLVM_EXTERNAL_SPIRV_HEADERS_SOURCE_DIR=$PWD/../../SPIRV-Tools/external/SPIRV-Headers \
-DLLVM_EXTERNAL_SPIRV_HEADERS_SOURCE_DIR=$PWD/../../SPIRV-Tools/external/SPIRV-Headers \
-DLLVM_ENABLE_PROJECTS="clang" \
-DLLVM_TARGETS_TO_BUILD="AMDGPU;X86" \
-DLLVM_OPTIMIZED_TABLEGEN=TRUE \
-DLLVM_ENABLE_ASSERTIONS=TRUE \
-DLLVM_INCLUDE_UTILS=OFF \
-DLLVM_INCLUDE_RUNTIMES=OFF \
-DLLVM_INCLUDE_TESTS=OFF \
-DLLVM_INCLUDE_EXAMPLES=OFF \
-DLLVM_INCLUDE_GO_TESTS=OFF \
-DLLVM_INCLUDE_BENCHMARKS=OFF \
-DLLVM_BUILD_LLVM_C_DYLIB=OFF \
-DLLVM_ENABLE_DIA_SDK=OFF \
-DCLANG_BUILD_TOOLS=ON \
-DLLVM_SPIRV_INCLUDE_TESTS=OFF
ninja install
popd
# Building libclc
mkdir llvm-project/build-libclc
pushd llvm-project/build-libclc
cmake ../libclc \
-DCMAKE_TOOLCHAIN_FILE=$CMAKE_TOOLCHAIN_MINGW_PATH \
-DCMAKE_INSTALL_PREFIX=/usr/x86_64-w64-mingw32/ \
-GNinja -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_CROSSCOMPILING=1 \
-DCMAKE_POLICY_DEFAULT_CMP0091=NEW \
-DCMAKE_CXX_FLAGS="-m64" \
-DLLVM_CONFIG="/usr/x86_64-w64-mingw32/bin/llvm-config" \
-DLLVM_CLANG="/usr/x86_64-w64-mingw32/bin/clang" \
-DLLVM_AS="/usr/x86_64-w64-mingw32/bin/llvm-as" \
-DLLVM_LINK="/usr/x86_64-w64-mingw32/bin/llvm-link" \
-DLLVM_OPT="/usr/x86_64-w64-mingw32/bin/opt" \
-DLLVM_SPIRV="/usr/x86_64-w64-mingw32/bin/llvm-spirv" \
-DLIBCLC_TARGETS_TO_BUILD="spirv-mesa3d-;spirv64-mesa3d-"
ninja install
popd
popd # ~/tmp
# Cleanup ~/tmp
rm -rf ~/tmp

@@ -1,13 +0,0 @@
#!/bin/bash
set -e
set -o xtrace
apt-get update
apt-get install -y --no-remove \
zstd \
g++-mingw-w64-i686 \
g++-mingw-w64-x86-64
. .gitlab-ci/container/debian/x86_build-mingw-patch.sh
. .gitlab-ci/container/debian/x86_build-mingw-source-deps.sh

@@ -1,93 +0,0 @@
#!/bin/bash
set -e
set -o xtrace
export DEBIAN_FRONTEND=noninteractive
# Ephemeral packages (installed for this script and removed again at the end)
STABLE_EPHEMERAL=" \
autoconf \
automake \
autotools-dev \
bzip2 \
libtool \
python3-pip \
"
apt-get update
apt-get install -y --no-remove \
$STABLE_EPHEMERAL \
check \
clang \
libasan6 \
libarchive-dev \
libclang-cpp13-dev \
libclang-cpp11-dev \
libgbm-dev \
libglvnd-dev \
libllvmspirvlib-dev \
liblua5.3-dev \
libxcb-dri2-0-dev \
libxcb-dri3-dev \
libxcb-glx0-dev \
libxcb-present-dev \
libxcb-randr0-dev \
libxcb-shm0-dev \
libxcb-sync-dev \
libxcb-xfixes0-dev \
libxcb1-dev \
libxml2-dev \
llvm-13-dev \
llvm-11-dev \
llvm-9-dev \
ocl-icd-opencl-dev \
python3-freezegun \
python3-pytest \
procps \
spirv-tools \
strace \
time
. .gitlab-ci/container/container_pre_build.sh
# dependencies where we want a specific version
export XORG_RELEASES=https://xorg.freedesktop.org/releases/individual
export XORGMACROS_VERSION=util-macros-1.19.0
wget $XORG_RELEASES/util/$XORGMACROS_VERSION.tar.bz2
tar -xvf $XORGMACROS_VERSION.tar.bz2 && rm $XORGMACROS_VERSION.tar.bz2
cd $XORGMACROS_VERSION; ./configure; make install; cd ..
rm -rf $XORGMACROS_VERSION
. .gitlab-ci/container/build-libdrm.sh
. .gitlab-ci/container/build-wayland.sh
pushd /usr/local
git clone https://gitlab.freedesktop.org/mesa/shader-db.git --depth 1
rm -rf shader-db/.git
cd shader-db
make
popd
git clone https://github.com/microsoft/DirectX-Headers -b v1.606.3 --depth 1
mkdir -p DirectX-Headers/build
pushd DirectX-Headers/build
meson .. --backend=ninja --buildtype=release -Dbuild-test=false
ninja
ninja install
popd
rm -rf DirectX-Headers
pip3 install git+https://git.lavasoftware.org/lava/lavacli@3db3ddc45e5358908bc6a17448059ea2340492b7
############### Uninstall the build software
apt-get purge -y \
$STABLE_EPHEMERAL
. .gitlab-ci/container/container_post_build.sh

@@ -1,8 +0,0 @@
set(CMAKE_SYSTEM_NAME Windows)
set(CMAKE_SYSTEM_PROCESSOR x86_64)
set(CMAKE_SYSROOT /usr/x86_64-w64-mingw32/)
set(ENV{PKG_CONFIG} /usr/x86_64-w64-mingw32/bin/pkg-config)
set(CMAKE_C_COMPILER x86_64-w64-mingw32-gcc-posix)
set(CMAKE_CXX_COMPILER x86_64-w64-mingw32-g++-posix)
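# Note added for clarity (not part of the original file): this is presumably
# the x86_mingw-toolchain.cmake referenced above as CMAKE_TOOLCHAIN_MINGW_PATH,
# passed to the SPIRV-Tools, LLVM and libclc cross builds via
# -DCMAKE_TOOLCHAIN_FILE=$CMAKE_TOOLCHAIN_MINGW_PATH.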

@@ -1,75 +0,0 @@
#!/bin/bash
set -e
set -o xtrace
export DEBIAN_FRONTEND=noninteractive
apt-get install -y ca-certificates gnupg2 software-properties-common
sed -i -e 's/http:\/\/deb/https:\/\/deb/g' /etc/apt/sources.list
# Ephemeral packages (installed for this script and removed again at
# the end)
STABLE_EPHEMERAL=" \
cargo \
python3-dev \
python3-pip \
python3-setuptools \
python3-wheel \
"
# Add llvm 13 to the build image
apt-key add .gitlab-ci/container/debian/llvm-snapshot.gpg.key
add-apt-repository "deb https://apt.llvm.org/bullseye/ llvm-toolchain-bullseye-13 main"
apt-get update
apt-get dist-upgrade -y
apt-get install -y --no-remove \
git \
git-lfs \
libasan6 \
libexpat1 \
libllvm13 \
libllvm11 \
libllvm9 \
liblz4-1 \
libpng16-16 \
libpython3.9 \
libvulkan1 \
libwayland-client0 \
libwayland-server0 \
libxcb-ewmh2 \
libxcb-randr0 \
libxcb-xfixes0 \
libxkbcommon0 \
libxrandr2 \
libxrender1 \
python3-mako \
python3-numpy \
python3-packaging \
python3-pil \
python3-requests \
python3-six \
python3-yaml \
vulkan-tools \
waffle-utils \
xauth \
xvfb \
zlib1g
apt-get install -y --no-install-recommends \
$STABLE_EPHEMERAL
# Needed for ci-fairy; this revision is able to upload files to MinIO
# and doesn't depend on git
pip3 install git+http://gitlab.freedesktop.org/freedesktop/ci-templates@ffe4d1b10aab7534489f0c4bbc4c5899df17d3f2
############### Build dEQP runner
. .gitlab-ci/container/build-deqp-runner.sh
rm -rf ~/.cargo
apt-get purge -y $STABLE_EPHEMERAL
apt-get autoremove -y --purge

@@ -1,130 +0,0 @@
#!/bin/bash
set -e
set -o xtrace
export DEBIAN_FRONTEND=noninteractive
# Ephemeral packages (installed for this script and removed again at the end)
STABLE_EPHEMERAL=" \
autoconf \
automake \
bc \
bison \
bzip2 \
ccache \
clang-13 \
clang-11 \
cmake \
flex \
g++ \
glslang-tools \
libasound2-dev \
libcap-dev \
libclang-cpp13-dev \
libclang-cpp11-dev \
libelf-dev \
libexpat1-dev \
libfdt-dev \
libgbm-dev \
libgles2-mesa-dev \
libllvmspirvlib-dev \
libpciaccess-dev \
libpng-dev \
libudev-dev \
libvulkan-dev \
libwaffle-dev \
libx11-xcb-dev \
libxcb-dri2-0-dev \
libxext-dev \
libxkbcommon-dev \
libxrender-dev \
llvm-13-dev \
llvm-11-dev \
llvm-spirv \
make \
meson \
ocl-icd-opencl-dev \
patch \
pkg-config \
python3-distutils \
xz-utils \
"
apt-get update
apt-get install -y --no-remove \
$STABLE_EPHEMERAL \
clinfo \
iptables \
libclang-common-13-dev \
libclang-common-11-dev \
libclang-cpp13 \
libclang-cpp11 \
libcap2 \
libegl1 \
libepoxy-dev \
libfdt1 \
libllvmspirvlib11 \
libxcb-shm0 \
ocl-icd-libopencl1 \
python3-lxml \
python3-renderdoc \
python3-simplejson \
socat \
spirv-tools \
sysvinit-core \
wget
. .gitlab-ci/container/container_pre_build.sh
############### Build libdrm
. .gitlab-ci/container/build-libdrm.sh
############### Build Wayland
. .gitlab-ci/container/build-wayland.sh
############### Build Crosvm
. .gitlab-ci/container/build-rust.sh
. .gitlab-ci/container/build-crosvm.sh
rm -rf /root/.cargo
rm -rf /root/.rustup
############### Build kernel
export DEFCONFIG="arch/x86/configs/x86_64_defconfig"
export KERNEL_IMAGE_NAME=bzImage
export KERNEL_ARCH=x86_64
export DEBIAN_ARCH=amd64
mkdir -p /lava-files/
. .gitlab-ci/container/build-kernel.sh
############### Build libclc
. .gitlab-ci/container/build-libclc.sh
############### Build piglit
PIGLIT_OPTS="-DPIGLIT_BUILD_CL_TESTS=ON -DPIGLIT_BUILD_DMA_BUF_TESTS=ON" . .gitlab-ci/container/build-piglit.sh
############### Build dEQP GL
DEQP_TARGET=surfaceless . .gitlab-ci/container/build-deqp.sh
############### Build apitrace
. .gitlab-ci/container/build-apitrace.sh
############### Uninstall the build software
ccache --show-stats
apt-get purge -y \
$STABLE_EPHEMERAL
apt-get autoremove -y --purge

@@ -1,198 +0,0 @@
#!/bin/bash
# The relative paths in this file only become valid at runtime.
# shellcheck disable=SC1091
set -e
set -o xtrace
export DEBIAN_FRONTEND=noninteractive
# Ephemeral packages (installed for this script and removed again at the end)
STABLE_EPHEMERAL=" \
ccache \
cmake \
g++ \
g++-mingw-w64-i686-posix \
g++-mingw-w64-x86-64-posix \
glslang-tools \
libexpat1-dev \
gnupg2 \
libgbm-dev \
libgles2-mesa-dev \
liblz4-dev \
libpciaccess-dev \
libudev-dev \
libvulkan-dev \
libwaffle-dev \
libx11-xcb-dev \
libxcb-ewmh-dev \
libxcb-keysyms1-dev \
libxkbcommon-dev \
libxrandr-dev \
libxrender-dev \
libzstd-dev \
meson \
mingw-w64-i686-dev \
mingw-w64-tools \
mingw-w64-x86-64-dev \
p7zip \
patch \
pkg-config \
python3-dev \
python3-distutils \
python3-pip \
python3-setuptools \
python3-wheel \
software-properties-common \
wget \
wine64-tools \
xz-utils \
"
apt-get install -y --no-remove \
$STABLE_EPHEMERAL \
libxcb-shm0 \
pciutils \
python3-lxml \
python3-simplejson \
xinit \
xserver-xorg-video-amdgpu \
xserver-xorg-video-ati
# We need multiarch for Wine
dpkg --add-architecture i386
# Install a more recent version of Wine than exists in Debian.
apt-key add .gitlab-ci/container/debian/winehq.gpg.key
apt-add-repository https://dl.winehq.org/wine-builds/debian/
apt update -qyy
# Needed for Valve's tracing jobs to collect information about the graphics
# hardware on the test devices.
pip3 install gfxinfo-mupuf==0.0.9
apt install -y --no-remove --install-recommends winehq-stable
function setup_wine() {
export WINEDEBUG="-all"
export WINEPREFIX="$1"
# We don't want crash dialogs
cat >crashdialog.reg <<EOF
Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\Software\Wine\WineDbg]
"ShowCrashDialog"=dword:00000000
EOF
# Set the wine prefix and disable the crash dialog
wine regedit crashdialog.reg
rm crashdialog.reg
# An immediate wine command may fail with: "${WINEPREFIX}: Not a
# valid wine prefix." That failure is spurious: it comes from checking
# for the existence of the system.reg file, which has not been created
# yet. Giving the prefix a bit more time to be created solves the
# problem.
while ! test -f "${WINEPREFIX}/system.reg"; do sleep 1; done
}
############### Install DXVK
dxvk_install_release() {
local DXVK_VERSION=${1:-"1.10.1"}
wget "https://github.com/doitsujin/dxvk/releases/download/v${DXVK_VERSION}/dxvk-${DXVK_VERSION}.tar.gz"
tar xzpf dxvk-"${DXVK_VERSION}".tar.gz
"dxvk-${DXVK_VERSION}"/setup_dxvk.sh install
rm -rf "dxvk-${DXVK_VERSION}"
rm dxvk-"${DXVK_VERSION}".tar.gz
}
# Install from a Github PR number
dxvk_install_pr() {
local __prnum=$1
# NOTE: Clone the entire history of the repo so as not to think
# harder about cloning just enough for 'git describe' to work. 'git
# describe' is used by the dxvk build system to generate a
# dxvk_version Meson variable, which is nice-to-have.
git clone https://github.com/doitsujin/dxvk
pushd dxvk
git fetch origin pull/"$__prnum"/head:pr
git checkout pr
./package-release.sh pr ../dxvk-build --no-package
popd
pushd ./dxvk-build/dxvk-pr
./setup_dxvk.sh install
popd
rm -rf ./dxvk-build ./dxvk
}
# Sets up the WINEPREFIX for the DXVK installation commands below.
setup_wine "/dxvk-wine64"
dxvk_install_release "1.10.1"
#dxvk_install_pr 2359
############### Install apitrace binaries for wine
. .gitlab-ci/container/install-wine-apitrace.sh
# Add the apitrace path to the registry
wine \
reg add "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Environment" \
/v Path \
/t REG_EXPAND_SZ \
/d "C:\windows\system32;C:\windows;C:\windows\system32\wbem;Z:\apitrace-msvc-win64\bin" \
/f
############### Building ...
. .gitlab-ci/container/container_pre_build.sh
############### Build libdrm
. .gitlab-ci/container/build-libdrm.sh
############### Build Wayland
. .gitlab-ci/container/build-wayland.sh
############### Build parallel-deqp-runner's hang-detection tool
. .gitlab-ci/container/build-hang-detection.sh
############### Build piglit
PIGLIT_BUILD_TARGETS="piglit_replayer" . .gitlab-ci/container/build-piglit.sh
############### Build Fossilize
. .gitlab-ci/container/build-fossilize.sh
############### Build dEQP VK
. .gitlab-ci/container/build-deqp.sh
############### Build apitrace
. .gitlab-ci/container/build-apitrace.sh
############### Build gfxreconstruct
. .gitlab-ci/container/build-gfxreconstruct.sh
############### Build VKD3D-Proton
setup_wine "/vkd3d-proton-wine64"
. .gitlab-ci/container/build-vkd3d-proton.sh
############### Uninstall the build software
ccache --show-stats
apt-get purge -y \
$STABLE_EPHEMERAL
apt-get autoremove -y --purge

@@ -1,102 +0,0 @@
#!/bin/bash
set -e
set -o xtrace
EPHEMERAL="
autoconf
automake
bzip2
git
libtool
pkgconfig(epoxy)
pkgconfig(gbm)
unzip
wget
xz
"
dnf install -y --setopt=install_weak_deps=False \
bison \
ccache \
clang-devel \
flex \
gcc \
gcc-c++ \
gettext \
glslang \
kernel-headers \
llvm-devel \
meson \
"pkgconfig(dri2proto)" \
"pkgconfig(expat)" \
"pkgconfig(glproto)" \
"pkgconfig(libclc)" \
"pkgconfig(libelf)" \
"pkgconfig(libglvnd)" \
"pkgconfig(libomxil-bellagio)" \
"pkgconfig(libselinux)" \
"pkgconfig(libva)" \
"pkgconfig(pciaccess)" \
"pkgconfig(vdpau)" \
"pkgconfig(vulkan)" \
"pkgconfig(x11)" \
"pkgconfig(x11-xcb)" \
"pkgconfig(xcb)" \
"pkgconfig(xcb-dri2)" \
"pkgconfig(xcb-dri3)" \
"pkgconfig(xcb-glx)" \
"pkgconfig(xcb-present)" \
"pkgconfig(xcb-randr)" \
"pkgconfig(xcb-sync)" \
"pkgconfig(xcb-xfixes)" \
"pkgconfig(xdamage)" \
"pkgconfig(xext)" \
"pkgconfig(xfixes)" \
"pkgconfig(xrandr)" \
"pkgconfig(xshmfence)" \
"pkgconfig(xxf86vm)" \
"pkgconfig(zlib)" \
python-unversioned-command \
python3-devel \
python3-mako \
vulkan-headers \
spirv-tools-devel \
spirv-llvm-translator-devel \
$EPHEMERAL
. .gitlab-ci/container/container_pre_build.sh
# dependencies where we want a specific version
export XORG_RELEASES=https://xorg.freedesktop.org/releases/individual
export XORGMACROS_VERSION=util-macros-1.19.0
wget $XORG_RELEASES/util/$XORGMACROS_VERSION.tar.bz2
tar -xvf $XORGMACROS_VERSION.tar.bz2 && rm $XORGMACROS_VERSION.tar.bz2
cd $XORGMACROS_VERSION; ./configure; make install; cd ..
rm -rf $XORGMACROS_VERSION
. .gitlab-ci/container/build-libdrm.sh
. .gitlab-ci/container/build-wayland.sh
pushd /usr/local
git clone https://gitlab.freedesktop.org/mesa/shader-db.git --depth 1
rm -rf shader-db/.git
cd shader-db
make
popd
############### Uninstall the build software
dnf remove -y $EPHEMERAL
. .gitlab-ci/container/container_post_build.sh

@@ -1,458 +0,0 @@
# Docker image tag helper templates
.incorporate-templates-commit:
variables:
FDO_DISTRIBUTION_TAG: "${MESA_IMAGE_TAG}--${MESA_TEMPLATES_COMMIT}"
.incorporate-base-tag+templates-commit:
variables:
FDO_BASE_IMAGE: "${CI_REGISTRY_IMAGE}/${MESA_BASE_IMAGE}:${MESA_BASE_TAG}--${MESA_TEMPLATES_COMMIT}"
FDO_DISTRIBUTION_TAG: "${MESA_IMAGE_TAG}--${MESA_BASE_TAG}--${MESA_TEMPLATES_COMMIT}"
.set-image:
extends:
- .incorporate-templates-commit
variables:
MESA_IMAGE: "$CI_REGISTRY_IMAGE/${MESA_IMAGE_PATH}:${FDO_DISTRIBUTION_TAG}"
image: "$MESA_IMAGE"
.set-image-base-tag:
extends:
- .set-image
- .incorporate-base-tag+templates-commit
variables:
MESA_IMAGE: "$CI_REGISTRY_IMAGE/${MESA_IMAGE_PATH}:${FDO_DISTRIBUTION_TAG}"
.use-wine:
variables:
WINEPATH: "/usr/x86_64-w64-mingw32/bin;/usr/x86_64-w64-mingw32/lib;/usr/lib/gcc/x86_64-w64-mingw32/10-posix;c:/windows;c:/windows/system32"
# Build the CI docker images.
#
# MESA_IMAGE_TAG is the tag of the docker image used by later stage jobs. If the
# image doesn't exist yet, the container stage job generates it.
#
# In order to generate a new image, one should generally change the tag.
# While removing the image from the registry would also work, that's not
# recommended except for ephemeral images during development: Replacing
# an image after a significant amount of time might pull in newer
# versions of gcc/clang or other packages, which might break the build
# with older commits using the same tag.
#
# After merging a change that generates a new image into the main
# repository, it's recommended to remove the image from the source
# repository's container registry, so that the image from the main
# repository's registry will be used there as well.
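# Illustration (hypothetical values, not taken from this config): with
# MESA_IMAGE_TAG "2022-07-01-llvm13" and MESA_TEMPLATES_COMMIT "a1b2c3d4",
# .incorporate-templates-commit expands FDO_DISTRIBUTION_TAG to
# "2022-07-01-llvm13--a1b2c3d4"; bumping either value therefore forces the
# container stage to build and push the image under a new tag.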
.container:
stage: container
extends:
- .container-rules
- .incorporate-templates-commit
- .use-wine
variables:
FDO_DISTRIBUTION_VERSION: bullseye-slim
FDO_REPO_SUFFIX: $CI_JOB_NAME
FDO_DISTRIBUTION_EXEC: 'env "WINEPATH=${WINEPATH}" FDO_CI_CONCURRENT=${FDO_CI_CONCURRENT} bash .gitlab-ci/container/${CI_JOB_NAME}.sh'
# no need to pull the whole repo to build the container image
GIT_STRATEGY: none
.use-base-image:
extends:
- .container
- .incorporate-base-tag+templates-commit
# Don't want the .container rules
- .build-rules
# Debian 11 based x86 build image base
debian/x86_build-base:
extends:
- .fdo.container-build@debian
- .container
variables:
MESA_IMAGE_TAG: &debian-x86_build-base ${DEBIAN_BASE_TAG}
.use-debian/x86_build-base:
extends:
- .fdo.container-build@debian
- .use-base-image
variables:
MESA_BASE_IMAGE: ${DEBIAN_X86_BUILD_BASE_IMAGE}
MESA_BASE_TAG: *debian-x86_build-base
MESA_ARTIFACTS_BASE_TAG: *debian-x86_build-base
needs:
- debian/x86_build-base
# Debian 11 based x86 main build image
debian/x86_build:
extends:
- .use-debian/x86_build-base
variables:
MESA_IMAGE_TAG: &debian-x86_build ${DEBIAN_BUILD_TAG}
.use-debian/x86_build:
extends:
- .set-image-base-tag
variables:
MESA_BASE_TAG: *debian-x86_build-base
MESA_IMAGE_PATH: ${DEBIAN_X86_BUILD_IMAGE_PATH}
MESA_IMAGE_TAG: *debian-x86_build
needs:
- debian/x86_build
# Debian 11 based i386 cross-build image
debian/i386_build:
extends:
- .use-debian/x86_build-base
variables:
MESA_IMAGE_TAG: &debian-i386_build ${DEBIAN_BUILD_TAG}
.use-debian/i386_build:
extends:
- .set-image-base-tag
variables:
MESA_BASE_TAG: *debian-x86_build-base
MESA_IMAGE_PATH: "debian/i386_build"
MESA_IMAGE_TAG: *debian-i386_build
needs:
- debian/i386_build
# Debian 11 based x86-mingw cross main build image
debian/x86_build-mingw:
extends:
- .use-debian/x86_build-base
variables:
MESA_IMAGE_TAG: &debian-x86_build_mingw ${DEBIAN_BUILD_MINGW_TAG}
.use-debian/x86_build_mingw:
extends:
- .set-image-base-tag
variables:
MESA_BASE_TAG: *debian-x86_build-base
MESA_IMAGE_PATH: ${DEBIAN_X86_BUILD_MINGW_IMAGE_PATH}
MESA_IMAGE_TAG: *debian-x86_build_mingw
needs:
- debian/x86_build-mingw
# Debian 11 based ppc64el cross-build image
debian/ppc64el_build:
extends:
- .use-debian/x86_build-base
variables:
MESA_IMAGE_TAG: &debian-ppc64el_build ${DEBIAN_BUILD_TAG}
.use-debian/ppc64el_build:
extends:
- .set-image-base-tag
variables:
MESA_BASE_TAG: *debian-x86_build-base
MESA_IMAGE_PATH: "debian/ppc64el_build"
MESA_IMAGE_TAG: *debian-ppc64el_build
needs:
- debian/ppc64el_build
# Debian 11 based s390x cross-build image
debian/s390x_build:
extends:
- .use-debian/x86_build-base
variables:
MESA_IMAGE_TAG: &debian-s390x_build ${DEBIAN_BUILD_TAG}
.use-debian/s390x_build:
extends:
- .set-image-base-tag
variables:
MESA_BASE_TAG: *debian-x86_build-base
MESA_IMAGE_PATH: "debian/s390x_build"
MESA_IMAGE_TAG: *debian-s390x_build
needs:
- debian/s390x_build
# Android NDK cross-build image
debian/android_build:
extends:
- .use-debian/x86_build-base
variables:
MESA_IMAGE_TAG: &debian-android_build ${DEBIAN_BUILD_TAG}
.use-debian/android_build:
extends:
- .set-image-base-tag
variables:
MESA_BASE_TAG: *debian-x86_build-base
MESA_IMAGE_PATH: "debian/android_build"
MESA_IMAGE_TAG: *debian-android_build
needs:
- debian/android_build
# Debian 11 based x86 test image base
debian/x86_test-base:
extends: debian/x86_build-base
variables:
MESA_IMAGE_TAG: &debian-x86_test-base ${DEBIAN_BASE_TAG}
.use-debian/x86_test-base:
extends:
- .fdo.container-build@debian
- .use-base-image
variables:
MESA_BASE_IMAGE: ${DEBIAN_X86_TEST_BASE_IMAGE}
MESA_BASE_TAG: *debian-x86_test-base
needs:
- debian/x86_test-base
# Debian 11 based x86 test image for GL
debian/x86_test-gl:
extends: .use-debian/x86_test-base
variables:
FDO_DISTRIBUTION_EXEC: 'env KERNEL_URL=${KERNEL_URL} FDO_CI_CONCURRENT=${FDO_CI_CONCURRENT} bash .gitlab-ci/container/${CI_JOB_NAME}.sh'
KERNEL_URL: &kernel-rootfs-url "https://gitlab.freedesktop.org/gfx-ci/linux/-/archive/v5.17-for-mesa-ci-b78f7870d97b/linux-v5.17-for-mesa-ci-b78f7870d97b.tar.bz2"
MESA_IMAGE_TAG: &debian-x86_test-gl ${DEBIAN_X86_TEST_GL_TAG}
.use-debian/x86_test-gl:
extends:
- .set-image-base-tag
variables:
MESA_BASE_TAG: *debian-x86_test-base
MESA_IMAGE_PATH: ${DEBIAN_X86_TEST_IMAGE_PATH}
MESA_IMAGE_TAG: *debian-x86_test-gl
needs:
- debian/x86_test-gl
# Debian 11 based x86 test image for VK
debian/x86_test-vk:
extends: .use-debian/x86_test-base
variables:
MESA_IMAGE_TAG: &debian-x86_test-vk ${DEBIAN_X86_TEST_VK_TAG}
.use-debian/x86_test-vk:
extends:
- .set-image-base-tag
variables:
MESA_BASE_TAG: *debian-x86_test-base
MESA_IMAGE_PATH: "debian/x86_test-vk"
MESA_IMAGE_TAG: *debian-x86_test-vk
needs:
- debian/x86_test-vk
# Debian 11 based ARM build image
debian/arm_build:
extends:
- .fdo.container-build@debian
- .container
tags:
- aarch64
variables:
MESA_IMAGE_TAG: &debian-arm_build ${DEBIAN_BASE_TAG}
.use-debian/arm_build:
extends:
- .set-image
variables:
MESA_IMAGE_PATH: "debian/arm_build"
MESA_IMAGE_TAG: *debian-arm_build
MESA_ARTIFACTS_TAG: *debian-arm_build
needs:
- debian/arm_build
# Fedora 34 based x86 build image
fedora/x86_build:
extends:
- .fdo.container-build@fedora
- .container
variables:
FDO_DISTRIBUTION_VERSION: 34
MESA_IMAGE_TAG: &fedora-x86_build ${FEDORA_X86_BUILD_TAG}
.use-fedora/x86_build:
extends:
- .set-image
variables:
MESA_IMAGE_PATH: "fedora/x86_build"
MESA_IMAGE_TAG: *fedora-x86_build
needs:
- fedora/x86_build
.kernel+rootfs:
extends:
- .build-rules
stage: container
variables:
GIT_STRATEGY: fetch
KERNEL_URL: *kernel-rootfs-url
MESA_ROOTFS_TAG: &kernel-rootfs ${KERNEL_ROOTFS_TAG}
DISTRIBUTION_TAG: &distribution-tag-arm "${MESA_ROOTFS_TAG}--${MESA_ARTIFACTS_TAG}--${MESA_TEMPLATES_COMMIT}"
script:
- .gitlab-ci/container/lava_build.sh
kernel+rootfs_amd64:
extends:
- .use-debian/x86_build-base
- .kernel+rootfs
image: "$FDO_BASE_IMAGE"
variables:
DEBIAN_ARCH: "amd64"
DISTRIBUTION_TAG: &distribution-tag-amd64 "${MESA_ROOTFS_TAG}--${MESA_ARTIFACTS_BASE_TAG}--${MESA_TEMPLATES_COMMIT}"
kernel+rootfs_arm64:
extends:
- .use-debian/arm_build
- .kernel+rootfs
tags:
- aarch64
variables:
DEBIAN_ARCH: "arm64"
kernel+rootfs_armhf:
extends:
- kernel+rootfs_arm64
variables:
DEBIAN_ARCH: "armhf"
# Cannot use anchors defined here from included files, so use extends: instead
.use-kernel+rootfs-arm:
variables:
DISTRIBUTION_TAG: *distribution-tag-arm
MESA_ROOTFS_TAG: *kernel-rootfs
.use-kernel+rootfs-amd64:
variables:
DISTRIBUTION_TAG: *distribution-tag-amd64
MESA_ROOTFS_TAG: *kernel-rootfs
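# Illustration (hypothetical job in an included file, not part of this config):
#   my-arm64-test-job:
#     extends:
#       - .use-kernel+rootfs-arm
#   # picks up DISTRIBUTION_TAG and MESA_ROOTFS_TAG via extends:, since the
#   # YAML anchors defined in this file are not visible from included files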
# x86 image with ARM64 & armhf kernel & rootfs for baremetal testing
debian/arm_test:
extends:
- .fdo.container-build@debian
- .container
# Don't want the .container rules
- .build-rules
needs:
- kernel+rootfs_arm64
- kernel+rootfs_armhf
variables:
FDO_DISTRIBUTION_EXEC: 'env ARTIFACTS_PREFIX=https://${MINIO_HOST}/mesa-lava ARTIFACTS_SUFFIX=${MESA_ROOTFS_TAG}--${MESA_ARM_BUILD_TAG}--${MESA_TEMPLATES_COMMIT} CI_PROJECT_PATH=${CI_PROJECT_PATH} FDO_CI_CONCURRENT=${FDO_CI_CONCURRENT} FDO_UPSTREAM_REPO=${FDO_UPSTREAM_REPO} bash .gitlab-ci/container/${CI_JOB_NAME}.sh'
FDO_DISTRIBUTION_TAG: "${MESA_IMAGE_TAG}--${MESA_ROOTFS_TAG}--${MESA_ARM_BUILD_TAG}--${MESA_TEMPLATES_COMMIT}"
MESA_ARM_BUILD_TAG: *debian-arm_build
MESA_IMAGE_TAG: &debian-arm_test ${DEBIAN_BASE_TAG}
MESA_ROOTFS_TAG: *kernel-rootfs
.use-debian/arm_test:
image: "$CI_REGISTRY_IMAGE/${MESA_IMAGE_PATH}:${MESA_IMAGE_TAG}--${MESA_ROOTFS_TAG}--${MESA_ARM_BUILD_TAG}--${MESA_TEMPLATES_COMMIT}"
variables:
MESA_ARM_BUILD_TAG: *debian-arm_build
MESA_IMAGE_PATH: "debian/arm_test"
MESA_IMAGE_TAG: *debian-arm_test
MESA_ROOTFS_TAG: *kernel-rootfs
needs:
- debian/arm_test
# Native Windows docker builds
#
# Unlike the Linux-based builds above (including the MinGW builds that
# cross-compile for Windows), which use the freedesktop ci-templates, we
# cannot use the same scheme here. Windows lacks support for
# Docker-in-Docker, and Podman does not run natively on Windows, so we
# have to open-code much of the same logic ourselves.
#
# This is achieved by first running in a native Windows shell instance
# (host PowerShell) in the container stage to build and push the image,
# then in the build stage by executing inside Docker.
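# Rough flow (informal summary added for clarity): the container stage runs
# .gitlab-ci/windows/mesa_container.ps1 on a Windows shell runner (see the
# script: line below) to build and push $MESA_IMAGE; later build and test
# jobs then execute with image: "$MESA_IMAGE" inside that container.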
.windows-docker-vs2019:
variables:
MESA_IMAGE: "$CI_REGISTRY_IMAGE/${MESA_IMAGE_PATH}:${MESA_IMAGE_TAG}"
MESA_UPSTREAM_IMAGE: "$CI_REGISTRY/$FDO_UPSTREAM_REPO/$MESA_IMAGE_PATH:${MESA_IMAGE_TAG}"
.windows_container_build:
inherit:
default: [retry]
extends:
- .container
- .windows-docker-vs2019
rules:
- if: '$MICROSOFT_FARM == "offline"'
when: never
- !reference [.container-rules, rules]
variables:
GIT_STRATEGY: fetch # we do actually need the full repository though
MESA_BASE_IMAGE: None
tags:
- windows
- shell
- "2022"
- mesa
script:
- .\.gitlab-ci\windows\mesa_container.ps1 $CI_REGISTRY $CI_REGISTRY_USER $CI_REGISTRY_PASSWORD $MESA_IMAGE $MESA_UPSTREAM_IMAGE ${DOCKERFILE} ${MESA_BASE_IMAGE}
windows_vs2019:
inherit:
default: [retry]
extends:
- .windows_container_build
variables:
MESA_IMAGE_PATH: &windows_vs_image_path ${WINDOWS_X64_VS_PATH}
MESA_IMAGE_TAG: &windows_vs_image_tag ${WINDOWS_X64_VS_TAG}
DOCKERFILE: Dockerfile_vs
MESA_BASE_IMAGE: "mcr.microsoft.com/windows/server:ltsc2022"
windows_build_vs2019:
inherit:
default: [retry]
extends:
- .windows_container_build
rules:
- if: '$MICROSOFT_FARM == "offline"'
when: never
- !reference [.build-rules, rules]
variables:
MESA_IMAGE_PATH: &windows_build_image_path ${WINDOWS_X64_BUILD_PATH}
MESA_IMAGE_TAG: &windows_build_image_tag ${WINDOWS_X64_BUILD_TAG}
DOCKERFILE: Dockerfile_build
MESA_BASE_IMAGE_PATH: *windows_vs_image_path
MESA_BASE_IMAGE_TAG: *windows_vs_image_tag
MESA_BASE_IMAGE: "$CI_REGISTRY_IMAGE/${MESA_BASE_IMAGE_PATH}:${MESA_BASE_IMAGE_TAG}"
timeout: 2h 30m # LLVM takes ages
needs:
- windows_vs2019
windows_test_vs2019:
inherit:
default: [retry]
extends:
- .windows_container_build
rules:
- if: '$MICROSOFT_FARM == "offline"'
when: never
- !reference [.build-rules, rules]
variables:
MESA_IMAGE_PATH: &windows_test_image_path ${WINDOWS_X64_TEST_PATH}
MESA_IMAGE_TAG: &windows_test_image_tag ${WINDOWS_X64_BUILD_TAG}--${WINDOWS_X64_TEST_TAG}
DOCKERFILE: Dockerfile_test
MESA_BASE_IMAGE_PATH: *windows_vs_image_path
MESA_BASE_IMAGE_TAG: *windows_vs_image_tag
MESA_BASE_IMAGE: "$CI_REGISTRY_IMAGE/${MESA_BASE_IMAGE_PATH}:${MESA_BASE_IMAGE_TAG}"
needs:
- windows_vs2019
.use-windows_build_vs2019:
inherit:
default: [retry]
extends: .windows-docker-vs2019
image: "$MESA_IMAGE"
variables:
MESA_IMAGE_PATH: *windows_build_image_path
MESA_IMAGE_TAG: *windows_build_image_tag
needs:
- windows_build_vs2019
.use-windows_test_vs2019:
inherit:
default: [retry]
extends: .windows-docker-vs2019
image: "$MESA_IMAGE"
variables:
MESA_IMAGE_PATH: *windows_test_image_path
MESA_IMAGE_TAG: *windows_test_image_tag

@@ -1,13 +0,0 @@
#!/bin/bash
APITRACE_VERSION="11.1"
APITRACE_VERSION_DATE=""
wget "https://github.com/apitrace/apitrace/releases/download/${APITRACE_VERSION}/apitrace-${APITRACE_VERSION}${APITRACE_VERSION_DATE}-win64.7z"
7zr x "apitrace-${APITRACE_VERSION}${APITRACE_VERSION_DATE}-win64.7z" \
"apitrace-${APITRACE_VERSION}${APITRACE_VERSION_DATE}-win64/bin/apitrace.exe" \
"apitrace-${APITRACE_VERSION}${APITRACE_VERSION_DATE}-win64/bin/d3dretrace.exe"
mv "apitrace-${APITRACE_VERSION}${APITRACE_VERSION_DATE}-win64" /apitrace-msvc-win64
rm "apitrace-${APITRACE_VERSION}${APITRACE_VERSION_DATE}-win64.7z"

@@ -1,260 +0,0 @@
#!/bin/bash
set -e
set -o xtrace
export DEBIAN_FRONTEND=noninteractive
check_minio()
{
MINIO_PATH="${MINIO_HOST}/mesa-lava/$1/${DISTRIBUTION_TAG}/${DEBIAN_ARCH}"
if wget -q --method=HEAD "https://${MINIO_PATH}/done"; then
exit
fi
}
# If remote files are up-to-date, skip rebuilding them
check_minio "${FDO_UPSTREAM_REPO}"
check_minio "${CI_PROJECT_PATH}"
. .gitlab-ci/container/container_pre_build.sh
# Install rust, which we'll be using for deqp-runner. It will be cleaned up at the end.
. .gitlab-ci/container/build-rust.sh
if [[ "$DEBIAN_ARCH" = "arm64" ]]; then
GCC_ARCH="aarch64-linux-gnu"
KERNEL_ARCH="arm64"
SKQP_ARCH="arm64"
DEFCONFIG="arch/arm64/configs/defconfig"
DEVICE_TREES="arch/arm64/boot/dts/rockchip/rk3399-gru-kevin.dtb"
DEVICE_TREES+=" arch/arm64/boot/dts/amlogic/meson-gxl-s805x-libretech-ac.dtb"
DEVICE_TREES+=" arch/arm64/boot/dts/allwinner/sun50i-h6-pine-h64.dtb"
DEVICE_TREES+=" arch/arm64/boot/dts/amlogic/meson-gxm-khadas-vim2.dtb"
DEVICE_TREES+=" arch/arm64/boot/dts/qcom/apq8016-sbc.dtb"
DEVICE_TREES+=" arch/arm64/boot/dts/qcom/apq8096-db820c.dtb"
DEVICE_TREES+=" arch/arm64/boot/dts/amlogic/meson-g12b-a311d-khadas-vim3.dtb"
DEVICE_TREES+=" arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-juniper-sku16.dtb"
DEVICE_TREES+=" arch/arm64/boot/dts/nvidia/tegra210-p3450-0000.dtb"
DEVICE_TREES+=" arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor-limozeen-nots-r5.dtb"
KERNEL_IMAGE_NAME="Image"
elif [[ "$DEBIAN_ARCH" = "armhf" ]]; then
GCC_ARCH="arm-linux-gnueabihf"
KERNEL_ARCH="arm"
SKQP_ARCH="arm"
DEFCONFIG="arch/arm/configs/multi_v7_defconfig"
DEVICE_TREES="arch/arm/boot/dts/rk3288-veyron-jaq.dtb"
DEVICE_TREES+=" arch/arm/boot/dts/sun8i-h3-libretech-all-h3-cc.dtb"
DEVICE_TREES+=" arch/arm/boot/dts/imx6q-cubox-i.dtb"
KERNEL_IMAGE_NAME="zImage"
. .gitlab-ci/container/create-cross-file.sh armhf
else
GCC_ARCH="x86_64-linux-gnu"
KERNEL_ARCH="x86_64"
SKQP_ARCH="x64"
DEFCONFIG="arch/x86/configs/x86_64_defconfig"
DEVICE_TREES=""
KERNEL_IMAGE_NAME="bzImage"
ARCH_PACKAGES="libasound2-dev libcap-dev libfdt-dev libva-dev wayland-protocols"
fi
# Determine if we're in a cross build.
if [[ -e /cross_file-$DEBIAN_ARCH.txt ]]; then
EXTRA_MESON_ARGS="--cross-file /cross_file-$DEBIAN_ARCH.txt"
EXTRA_CMAKE_ARGS="-DCMAKE_TOOLCHAIN_FILE=/toolchain-$DEBIAN_ARCH.cmake"
if [ $DEBIAN_ARCH = arm64 ]; then
RUST_TARGET="aarch64-unknown-linux-gnu"
elif [ $DEBIAN_ARCH = armhf ]; then
RUST_TARGET="armv7-unknown-linux-gnueabihf"
fi
rustup target add $RUST_TARGET
export EXTRA_CARGO_ARGS="--target $RUST_TARGET"
export ARCH=${KERNEL_ARCH}
export CROSS_COMPILE="${GCC_ARCH}-"
fi
apt-get update
apt-get install -y --no-remove \
${ARCH_PACKAGES} \
automake \
bc \
clang \
cmake \
debootstrap \
git \
glslang-tools \
libdrm-dev \
libegl1-mesa-dev \
libxext-dev \
libfontconfig-dev \
libgbm-dev \
libgl-dev \
libgles2-mesa-dev \
libglu1-mesa-dev \
libglx-dev \
libpng-dev \
libssl-dev \
libudev-dev \
libvulkan-dev \
libwaffle-dev \
libwayland-dev \
libx11-xcb-dev \
libxcb-dri2-0-dev \
libxkbcommon-dev \
ninja-build \
patch \
python-is-python3 \
python3-distutils \
python3-mako \
python3-numpy \
python3-serial \
unzip \
wget
if [[ "$DEBIAN_ARCH" = "armhf" ]]; then
apt-get install -y --no-remove \
libegl1-mesa-dev:armhf \
libelf-dev:armhf \
libgbm-dev:armhf \
libgles2-mesa-dev:armhf \
libpng-dev:armhf \
libudev-dev:armhf \
libvulkan-dev:armhf \
libwaffle-dev:armhf \
libwayland-dev:armhf \
libx11-xcb-dev:armhf \
libxkbcommon-dev:armhf
fi
############### Building
STRIP_CMD="${GCC_ARCH}-strip"
mkdir -p /lava-files/rootfs-${DEBIAN_ARCH}/usr/lib/$GCC_ARCH
############### Build apitrace
. .gitlab-ci/container/build-apitrace.sh
mkdir -p /lava-files/rootfs-${DEBIAN_ARCH}/apitrace
mv /apitrace/build /lava-files/rootfs-${DEBIAN_ARCH}/apitrace
rm -rf /apitrace
############### Build dEQP runner
. .gitlab-ci/container/build-deqp-runner.sh
mkdir -p /lava-files/rootfs-${DEBIAN_ARCH}/usr/bin
mv /usr/local/bin/*-runner /lava-files/rootfs-${DEBIAN_ARCH}/usr/bin/.
############### Build dEQP
DEQP_TARGET=surfaceless . .gitlab-ci/container/build-deqp.sh
mv /deqp /lava-files/rootfs-${DEBIAN_ARCH}/.
############### Build SKQP
if [[ "$DEBIAN_ARCH" = "arm64" ]] \
|| [[ "$DEBIAN_ARCH" = "amd64" ]]; then
. .gitlab-ci/container/build-skqp.sh
mv /skqp /lava-files/rootfs-${DEBIAN_ARCH}/.
fi
############### Build piglit
PIGLIT_OPTS="-DPIGLIT_BUILD_DMA_BUF_TESTS=ON" . .gitlab-ci/container/build-piglit.sh
mv /piglit /lava-files/rootfs-${DEBIAN_ARCH}/.
############### Build libva tests
if [[ "$DEBIAN_ARCH" = "amd64" ]]; then
. .gitlab-ci/container/build-va-tools.sh
mv /va/bin/* /lava-files/rootfs-${DEBIAN_ARCH}/usr/bin/
fi
############### Build Crosvm
if [[ ${DEBIAN_ARCH} = "amd64" ]]; then
. .gitlab-ci/container/build-crosvm.sh
mv /usr/local/bin/crosvm /lava-files/rootfs-${DEBIAN_ARCH}/usr/bin/
mv /usr/local/lib/$GCC_ARCH/libvirglrenderer.* /lava-files/rootfs-${DEBIAN_ARCH}/usr/lib/$GCC_ARCH/
fi
############### Build libdrm
EXTRA_MESON_ARGS+=" -D prefix=/libdrm"
. .gitlab-ci/container/build-libdrm.sh
############### Build local stuff for use by igt and kernel testing, which
############### will reuse most of our container build process from a specific
############### hash of the Mesa tree.
if [[ -e ".gitlab-ci/local/build-rootfs.sh" ]]; then
. .gitlab-ci/local/build-rootfs.sh
fi
############### Build kernel
. .gitlab-ci/container/build-kernel.sh
############### Delete rust, since the tests won't be compiling anything.
rm -rf /root/.cargo
rm -rf /root/.rustup
############### Create rootfs
set +e
if ! debootstrap \
--variant=minbase \
--arch=${DEBIAN_ARCH} \
--components main,contrib,non-free \
bullseye \
/lava-files/rootfs-${DEBIAN_ARCH}/ \
http://deb.debian.org/debian; then
cat /lava-files/rootfs-${DEBIAN_ARCH}/debootstrap/debootstrap.log
exit 1
fi
set -e
cp .gitlab-ci/container/create-rootfs.sh /lava-files/rootfs-${DEBIAN_ARCH}/.
cp .gitlab-ci/container/debian/llvm-snapshot.gpg.key /lava-files/rootfs-${DEBIAN_ARCH}/.
chroot /lava-files/rootfs-${DEBIAN_ARCH} sh /create-rootfs.sh
rm /lava-files/rootfs-${DEBIAN_ARCH}/llvm-snapshot.gpg.key
rm /lava-files/rootfs-${DEBIAN_ARCH}/create-rootfs.sh
############### Install the built libdrm
# Dependencies pulled during the creation of the rootfs may overwrite
# the built libdrm. Hence, we add it after the rootfs has been already
# created.
find /libdrm/ -name lib\*\.so\* | xargs cp -t /lava-files/rootfs-${DEBIAN_ARCH}/usr/lib/$GCC_ARCH/.
mkdir -p /lava-files/rootfs-${DEBIAN_ARCH}/libdrm/
cp -Rp /libdrm/share /lava-files/rootfs-${DEBIAN_ARCH}/libdrm/share
rm -rf /libdrm
if [ ${DEBIAN_ARCH} = arm64 ]; then
# Make a gzipped copy of the Image for db410c.
gzip -k /lava-files/Image
KERNEL_IMAGE_NAME+=" Image.gz"
fi
du -ah /lava-files/rootfs-${DEBIAN_ARCH} | sort -h | tail -100
pushd /lava-files/rootfs-${DEBIAN_ARCH}
tar czf /lava-files/lava-rootfs.tgz .
popd
. .gitlab-ci/container/container_post_build.sh
############### Upload the files!
FILES_TO_UPLOAD="lava-rootfs.tgz \
$KERNEL_IMAGE_NAME"
if [[ -n $DEVICE_TREES ]]; then
FILES_TO_UPLOAD="$FILES_TO_UPLOAD $(basename -a $DEVICE_TREES)"
fi
for f in $FILES_TO_UPLOAD; do
ci-fairy s3cp --token-file "${CI_JOB_JWT_FILE}" /lava-files/$f \
https://${MINIO_PATH}/$f
done
touch /lava-files/done
ci-fairy s3cp --token-file "${CI_JOB_JWT_FILE}" /lava-files/done https://${MINIO_PATH}/done

@@ -1,105 +0,0 @@
CONFIG_LOCALVERSION_AUTO=y
CONFIG_DEBUG_KERNEL=y
CONFIG_PWM=y
CONFIG_PM_DEVFREQ=y
CONFIG_OF=y
CONFIG_CROS_EC=y
# abootimg with a 'dummy' rootfs fails with root=/dev/nfs
CONFIG_BLK_DEV_INITRD=n
CONFIG_DEVFREQ_GOV_PERFORMANCE=y
CONFIG_DEVFREQ_GOV_POWERSAVE=y
CONFIG_DEVFREQ_GOV_USERSPACE=y
CONFIG_DEVFREQ_GOV_PASSIVE=y
CONFIG_DRM=y
CONFIG_DRM_PANEL_SIMPLE=y
CONFIG_PWM_CROS_EC=y
CONFIG_BACKLIGHT_PWM=y
# Strip out some stuff we don't need for graphics testing, to reduce
# the build.
CONFIG_CAN=n
CONFIG_WIRELESS=n
CONFIG_RFKILL=n
CONFIG_WLAN=n
CONFIG_REGULATOR_FAN53555=y
CONFIG_REGULATOR=y
CONFIG_REGULATOR_VCTRL=y
CONFIG_KASAN=n
CONFIG_KASAN_INLINE=n
CONFIG_STACKTRACE=n
CONFIG_TMPFS=y
CONFIG_PROVE_LOCKING=n
CONFIG_DEBUG_LOCKDEP=n
CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
CONFIG_DETECT_HUNG_TASK=y
CONFIG_USB_USBNET=y
CONFIG_NETDEVICES=y
CONFIG_USB_NET_DRIVERS=y
CONFIG_USB_RTL8152=y
CONFIG_USB_NET_AX8817X=y
CONFIG_USB_NET_SMSC95XX=y
CONFIG_USB_GADGET=y
CONFIG_USB_ETH=y
CONFIG_FW_LOADER_COMPRESS=y
# options for AMD devices
CONFIG_X86_AMD_PLATFORM_DEVICE=y
CONFIG_ACPI_VIDEO=y
CONFIG_X86_AMD_FREQ_SENSITIVITY=y
CONFIG_PINCTRL=y
CONFIG_PINCTRL_AMD=y
CONFIG_DRM_AMDGPU=m
CONFIG_DRM_AMDGPU_SI=y
CONFIG_DRM_AMDGPU_USERPTR=y
CONFIG_DRM_AMD_ACP=n
CONFIG_ACPI_WMI=y
CONFIG_MXM_WMI=y
CONFIG_PARPORT=y
CONFIG_PARPORT_PC=y
CONFIG_PARPORT_SERIAL=y
CONFIG_SERIAL_8250_DW=y
CONFIG_CHROME_PLATFORMS=y
CONFIG_KVM_AMD=m
# options for Intel devices
CONFIG_MFD_INTEL_LPSS_PCI=y
CONFIG_KVM_INTEL=m
# options for KVM guests
CONFIG_FUSE_FS=y
CONFIG_HYPERVISOR_GUEST=y
CONFIG_KVM=y
CONFIG_KVM_GUEST=y
CONFIG_VIRT_DRIVERS=y
CONFIG_VIRTIO_FS=y
CONFIG_DRM_VIRTIO_GPU=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_VIRTIO_NET=y
CONFIG_VIRTIO_CONSOLE=y
CONFIG_PARAVIRT=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTUALIZATION=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_MMIO=y
CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y
CONFIG_CRYPTO_DEV_VIRTIO=y
CONFIG_HW_RANDOM_VIRTIO=y
CONFIG_BLK_MQ_VIRTIO=y
CONFIG_TUN=y
CONFIG_VSOCKETS=y
CONFIG_VIRTIO_VSOCKETS=y
CONFIG_VHOST_VSOCK=m

@@ -1 +0,0 @@
lp_test_arit

@@ -1 +0,0 @@
lp_test_format

@@ -1,42 +0,0 @@
#!/bin/sh
set -e
VSOCK_STDOUT=$1
VSOCK_STDERR=$2
VM_TEMP_DIR=$3
mount -t proc none /proc
mount -t sysfs none /sys
mkdir -p /dev/pts
mount -t devpts devpts /dev/pts
mount -t tmpfs tmpfs /tmp
. ${VM_TEMP_DIR}/crosvm-env.sh
# The .gitlab-ci.yml script variable uses paths relative to the install
# directory, so change to that dir before running `crosvm-script`
cd "${CI_PROJECT_DIR}"
# The exception is the dEQP binary, as it needs to run from its own directory
[ -z "${DEQP_BIN_DIR}" ] || cd "${DEQP_BIN_DIR}"
# Use a FIFO to collect relevant error messages
STDERR_FIFO=/tmp/crosvm-stderr.fifo
mkfifo -m 600 ${STDERR_FIFO}
dmesg --level crit,err,warn -w > ${STDERR_FIFO} &
DMESG_PID=$!
# Transfer the errors and crosvm-script output via a pair of virtio-vsocks
socat -d -u pipe:${STDERR_FIFO} vsock-listen:${VSOCK_STDERR} &
socat -d -U vsock-listen:${VSOCK_STDOUT} \
system:"stdbuf -eL sh ${VM_TEMP_DIR}/crosvm-script.sh 2> ${STDERR_FIFO}; echo \$? > ${VM_TEMP_DIR}/exit_code",nofork
kill ${DMESG_PID}
wait
sync
poweroff -d -n -f || true
sleep 1 # Just in case init exits before the kernel shuts down the VM

Some files were not shown because too many files have changed in this diff.