Compare commits

317 Commits

Author SHA1 Message Date
Ian Romanick
c0009739bf docs: Add 7.11.1 release notes 2011-11-17 11:45:06 -08:00
Ian Romanick
bb7d993953 mesa: set version string to 7.11.1 2011-11-17 11:44:08 -08:00
Eric Anholt
172de77b12 glsl: Fix gl_NormalMatrix swizzle setup to match i965's invariants.
A driver trying to set up builtin uniforms is faced with a problem:
How do I walk the ir_variable structure (representing an array of
structs, or array of matrices, or struct, or whatever), and set up
driver structures so that a dereference of that uniform gets the
corresponding ParameterValues[] entry?  The rule in general is that
each corresponding vector-sized field of an array of structs is one
builtin uniform state slot.  i965 relied on another invariant: each
state slot has a number of unique channel swizzles corresponding to
the number of elements in the field's vector, to avoid needing to walk
the glsl_type in parallel to get at vector_elements.

All of the builtin uniforms followed this behavior, except for
gl_NormalMatrix.  That's a mat3 (so 3 vec3s), but it was swizzled as 3
vec4s.

Fixes piglit glsl-fs-normalmatrix.
Reviewed-by: Paul Berry <stereotype441@gmail.com>
(cherry picked from commit cc4ddc3a1e)
2011-11-10 10:36:52 -08:00
Kenneth Graunke
6b151886fd mesa/get: Move MAX_LIGHTS from GL/ES2 to GL/ES1.
It's required for ES 1.0 and 1.1, and isn't specified for ES 2.

While the comment says Mesa depends on it internally, removing it from
ES2 doesn't seem to regress any Piglit or ES2 conformance tests.

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 5785cd2bf5)
2011-11-10 10:21:14 -08:00
José Fonseca
436db5df9c docs: Update llvmpipe docs.
Recommend LLVM 2.9; it has been working quite well and, unlike earlier
versions, it works out of the box without patches.

Update Windows instructions.
2011-11-05 11:03:11 +00:00
Yuanhan Liu
5459781715 intel: don't call unmap pbo if pbo is not mapped
The PBO only needs to be unmapped if one of the previous calls to
_mesa_validate_pbo_* succeeded.  In this case, pixels will be
non-NULL.  Various paths through _mesa_unmap_teximage_pbo can hit
assertion failures or segfaults if the buffer is not mapped.

To work around this, move the call to _mesa_unmap_teximage_pbo inside
the last 'if (pixels)' block.

NOTE: this is just for 7.11 stable branch

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=42268
2011-11-03 11:55:39 -07:00
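
A minimal sketch of the restructure described above, assuming a simplified
intel texture-upload flow (not the exact driver code):

	/* "pixels" is the pointer returned by the earlier _mesa_validate_pbo_*
	 * call; it is non-NULL only when that call succeeded (and mapped the
	 * PBO), so the unmap belongs inside the same check. */
	if (pixels) {
	   /* ... upload the texture data ... */
	   _mesa_unmap_teximage_pbo(ctx, unpack);
	}
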
Michel Dänzer
b95767a57a r300g: Fix queries on big endian hosts.
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Corbin Simpson <MostAwesomeDude@gmail.com>
2011-11-02 22:29:14 +01:00
Michel Dänzer
f60e81ecb2 gallium/util: Add macros for converting from little endian to CPU byte order.
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
(cherry picked from commit 4a3be16fd2)
2011-11-02 22:29:07 +01:00
Marek Olšák
d15ce8dd29 r300g: don't call u_trim_pipe_prim in r300_swtcl_draw_vbo
This was dead code anyway.
(cherry picked from commit 21e3c585f7)
2011-11-02 22:27:05 +01:00
Tom Fogal
449b301eec Only use gcc visibility support with gcc4+.
I had a colleague hitting issues compiling with an old gcc3.2
system.  These patches got them through.

Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry-picked from commit cbb2b4149b)
2011-11-02 12:58:34 -06:00
Adam Jackson
4464ee1a9a glx: Don't enable INTEL_swap_event unconditionally
DRI2 supports this now - and already enables it explicitly - but drisw
does not and should not.  Otherwise toolkits like clutter will only ever
SwapBuffers once and wait forever for an event that's not coming.

Signed-off-by: Adam Jackson <ajax@redhat.com>
(cherry picked from commit 25620eb1d2)
2011-10-28 20:41:45 -04:00
Kenneth Graunke
439628318b i965: Apply post-sync non-zero workaround to homebrew workaround.
In commit 3e5d3626, Eric added a homebrew workaround to fix GPU hangs in
the Mesa "engine" demo and oglc's api-texcoord test.

Unfortunately, his PIPE_CONTROL contains a Depth Stall, which
necessitates the post-sync non-zero workaround.

Fixes GPU hangs in Civilization 4, PlaneShift, Minecraft, Neverwinter
Nights, 3DMMES, and hopefully Heroes of Newerth as well.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=40324
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=41096
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Tested-by: Joel <k00_fol@k.kth.se> (Neverwinter Nights)
Tested-by: brot <brot@minad.de> (Minecraft)
Tested-by: Eric Anholt <eric@anholt.net> (3DMMES)
Tested-by: Kenneth Graunke <kenneth@whitecape.org> (Civ 4 & PlaneShift)

(cherry-picked from commit 3cc0a7be23)
2011-10-27 10:37:40 -07:00
Marek Olšák
00c44de1a6 r600g: set correct tiling flags in depth info
The kernel currently overwrites the flags, but if we stopped doing that,
this would break badly.
(cherry picked from commit faa16dc456)

BTW, this may be an actual fix for very old kernels.
https://bugs.freedesktop.org/show_bug.cgi?id=42175

Conflicts:

	src/gallium/drivers/r600/evergreen_state.c
	src/gallium/drivers/r600/r600_state.c
2011-10-26 14:51:30 +02:00
Brian Paul
e4f88bcad3 mesa: fix format/type check in unpack_image() for bitmaps
Passing type == GL_BITMAP returns 0 while error values return -1.
This fixes glPolygonStipple being compiled into display lists.
(cherry picked from commit 2ce8c3553b)
2011-10-25 18:41:29 -07:00
Brian Paul
0092375714 mesa: fix incorrect error code in _mesa_FramebufferTexture1D/3DEXT()
The spec says GL_INVALID_OPERATION is generated when texture!=0 and
textarget is not a legal value.  We had this right for the 2D function.
(cherry picked from commit ccecc08f79)
2011-10-25 18:41:29 -07:00
Ben Widawsky
c4d37ed43a intel: GetBuffer fix
After a buffer copy on pre-GEN6, it is necessary to wait for the blit to
complete before returning data to the user.

This should fix the piglit test: copy_buffer_coherency (pre-GEN6).

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit fa351bd2e0)
2011-10-25 18:41:29 -07:00
Eric Anholt
242a18dc46 mesa: Don't error on glFeedbackBuffer(size = 0, buffer = NULL)
The existing error result doesn't appear in the GL 2.1 or 3.2
compatibility specs, and triggers an unexpected GL error in Intel's
oglconform when it tries to reset the feedback state after usage so
that the "diff the state at error time vs. context init time" code
doesn't generate spurious diffs.  The unexpected GL error then
translates into testcase failure.  Brian wants the safety check on
buffer = NULL, though, so that people can't as easily set up a broken
buffer.
(cherry picked from commit 07e5295b6f)
2011-10-25 18:41:29 -07:00
Ian Romanick
0abee468f8 mesa: Advertise GL_OES_compressed_paletted_texture in OpenGL ES1.x
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Tested-by: Jin Yang <jin.a.yang@intel.com>
(cherry picked from commit 24a113093b)
2011-10-25 18:41:29 -07:00
Ian Romanick
72ea656dad mesa: Remove redundant compressed paletted texture error checks
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Tested-by: Jin Yang <jin.a.yang@intel.com>
(cherry picked from commit 13757f7080)
2011-10-25 18:41:29 -07:00
Ian Romanick
30b199f867 mesa: Refactor compressed texture error checks to work with paletted textures
This code was really broken before.  A lot of the error checks were
done much later (too late), and some of the error checks would fail.
The underlying problem is that Mesa doesn't ever keep compressed paletted
textures in their original format.  The textures are immediately
converted to some RGB or RGBA format.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=39991
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Tested-by: Jin Yang <jin.a.yang@intel.com>
(cherry picked from commit 3ebbfc8372)
2011-10-25 18:41:29 -07:00
Ian Romanick
9b242a03d5 mesa: Add _mesa_cpal_compressed_format_type
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Tested-by: Jin Yang <jin.a.yang@intel.com>
(cherry picked from commit b433e7ba07)
2011-10-25 18:41:29 -07:00
Ian Romanick
8317a43a75 mesa: Refactor expected texture size check in cpal_get_info
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Tested-by: Jin Yang <jin.a.yang@intel.com>
(cherry picked from commit a2cab751be)
2011-10-25 18:41:29 -07:00
Ian Romanick
625fdf58c6 mesa: Add GL_OES_compressed_paletted_texture formats to _mesa_base_tex_format
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Tested-by: Jin Yang <jin.a.yang@intel.com>
(cherry picked from commit fc0fa16be3)
2011-10-25 18:41:29 -07:00
Ian Romanick
5f776f2a7d mesa: Add GL_OES_compressed_paletted_texture formats to _mesa_is_compressed_format
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Tested-by: Jin Yang <jin.a.yang@intel.com>
(cherry picked from commit a454c835fa)
2011-10-25 18:41:29 -07:00
Chia-I Wu
d2e633a8bd intel: fix GLESv1 support
Add intelInitExtensionsES1 to enable required and optional GLESv1
extensions.

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 6b9e4b6ca7)
2011-10-25 18:41:29 -07:00
Chia-I Wu
135501c201 intel: rename intel_extensions_es2.c to intel_extensions_es.c
We'd like to add intelInitExtensionsES1 to it later.

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 820789ac69)
2011-10-25 18:41:29 -07:00
Jeremy Huddleston
681f4820ab apple: Implement applegl_unbind_context
glXMakeCurrent(dpy, None, NULL) would not correctly unbind the context,
causing subsequent GLX requests to fail in peculiar ways.

http://xquartz.macosforge.org/trac/ticket/514

Signed-off-by: Jeremy Huddleston <jeremyhu@apple.com>
(cherry picked from commit 5c44c1348e)
2011-10-24 16:26:17 -07:00
Yuanhan Liu
4b7ad91990 mesa: handle PBO access error in display list mode
Simply generate a GL_INVALID_OPERATION error in display list mode. As
explained by Brian, we are going to access the PBO data at compile time,
so there is no need to defer the error to execution time.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 46d5fb576a)
2011-10-24 15:54:31 -07:00
Yuanhan Liu
cd114ba503 mesa: handle the pbo case for save_Bitmap
Wrap _mesa_unpack_bitmap to handle the case where the data is stored in a
pixel buffer object.

This makes calling Bitmap with data stored in a PBO from a display list work.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 02b801c1ed)
2011-10-24 15:54:31 -07:00
Yuanhan Liu
b8cc97ed26 mesa: fix inverted pbo test error at _mesa_GetnCompressedTexImageARB
It seems like a typo.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 403cf7c56f)
2011-10-24 15:54:31 -07:00
Yuanhan Liu
53a3e743ae mesa: generate error if pbo offset is not aligned with the size of specified type
v2: quote the spec; explicitly exclude the GL_BITMAP case to make code
    more readable. (comments from Ian)

v3: Cast the offset to GLintptr to remove the compile warning (comments
    from Brian).

    I also found that I should use _mesa_sizeof_packed_type() instead,
    as it includes packed pixel type, like GL_UNSIGNED_SHORT_5_6_5.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 9024d8af0a)
2011-10-24 15:54:31 -07:00
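
A minimal sketch of the kind of check described above (illustrative, not the
exact Mesa code); _mesa_sizeof_packed_type() is the helper named in the
message:

	/* "pixels" is the PBO byte offset here; GL_BITMAP is excluded because
	 * a bitmap has no per-component size to align to. */
	if (type != GL_BITMAP &&
	    (GLintptr) pixels % _mesa_sizeof_packed_type(type) != 0) {
	   _mesa_error(ctx, GL_INVALID_OPERATION,
	               "glTexImage(PBO offset not aligned to type size)");
	   return;
	}
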
Yuanhan Liu
a31eec0d10 i965: setup address rounding enable bits
The patch (based on reading the emulator) came about while I was trying
to fix the oglc pbo texImage.1PBODefaults failure. This case generates a
texture with width and height equal to the window's width and height
respectively, then tries to texture it over the whole window, so it's
exactly one texel per pixel.  The min filter and mag filter are both
GL_LINEAR. It runs OK with swrast, as expected, but it fails with the
i965 driver.

You can't tell the difference on screen, as the error is quite tiny. From
my digging, it seems a tiny error occurs while computing the texel
address, which breaks the one-texel-per-pixel rule in this case. Thus the
linear result is taken, with a tiny error.

This patch fixes all oglc pbo subcases that fail with the same issue on
ILK, SNB and IVB.

v2: comments from Ian, make the address_round field assignment consistent.
    (the sampler is already memset to 0 by the xxx_update_sampler_state
     caller, so there is no need to assign 0 first)

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
(cherry picked from commit 76669381c0)
2011-10-24 15:54:31 -07:00
Yuanhan Liu
6c85918159 mesa: add a function to do the image data copy stuff for save_CompressedTex(Sub)Image
Introduce a simple function called copy_data to do the image data copy
for all the save_CompressedTex*Image functions. The function checks for a
NULL data pointer to avoid a potential segfault. This also makes the code
a bit simpler and less redundant.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit e9edcf8b1d)
2011-10-24 15:54:31 -07:00
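
A minimal sketch of a helper in the spirit of the copy_data() described
above (not the exact Mesa code):

	#include <stdlib.h>
	#include <string.h>
	#include <GL/gl.h>

	/* Copy imageSize bytes of compressed image data for a display list,
	 * tolerating a NULL input instead of crashing. */
	static GLvoid *
	copy_data(const GLvoid *data, GLsizei imageSize)
	{
	   GLvoid *image;

	   if (!data || imageSize <= 0)
	      return NULL;          /* nothing to copy; not an error here */

	   image = malloc(imageSize);
	   if (image)
	      memcpy(image, data, imageSize);
	   return image;            /* caller reports GL_OUT_OF_MEMORY on NULL */
	}
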
Yuanhan Liu
eaad245418 intel: fix the wrong code to detect null texture.
There are already comments showing how to detect a null texture. Fix the
code to match the comments.

This fixes the oglc divzero(basic.texQOrWEqualsZero) and
divzero(basic.texTrivialPrim) test case failures.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 1a662e7c18)
2011-10-24 15:54:31 -07:00
Yuanhan Liu
0a77734c62 mesa: fix error handling for glMaterial*
Trigger a GL_INVALID_ENUM error if the face parameter is not a valid value.

Trigger a GL_INVALID_VALUE error if the GL_SHININESS value is outside
[0, ctx->Const.MaxShininess].

v2: fix the max shininess value.

v3: suggested by Brian, move the face check into the glMaterialfv function
    to reduce code duplication. Also, refactor the error message.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Signed-off-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit a11b4c1e7a)
2011-10-24 15:54:31 -07:00
Yuanhan Liu
8d0d4381da mesa: fix error handling for glMapBufferRange
According to the man page, GL_INVALID_VALUE should be generated if access
has any bits set other than the valid defined bits.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Signed-off-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 099af9e9df)
2011-10-24 15:54:31 -07:00
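
A minimal sketch of the validation described above (illustrative, not the
exact Mesa code), using the access bits defined by ARB_map_buffer_range:

	const GLbitfield valid_bits = GL_MAP_READ_BIT | GL_MAP_WRITE_BIT |
	                              GL_MAP_INVALIDATE_RANGE_BIT |
	                              GL_MAP_INVALIDATE_BUFFER_BIT |
	                              GL_MAP_FLUSH_EXPLICIT_BIT |
	                              GL_MAP_UNSYNCHRONIZED_BIT;

	if (access & ~valid_bits) {
	   _mesa_error(ctx, GL_INVALID_VALUE, "glMapBufferRange(access)");
	   return NULL;
	}
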
Brian Paul
a0eefc9bd0 mesa: generate GL_INVALID_OPERATION in glIsEnabledIndex() between Begin/End
(cherry picked from commit 386ec5e80e)
2011-10-24 15:54:31 -07:00
Yuanhan Liu
2af0708c85 mesa: fix error handling for glSelectBuffer
According to the man page, trigger a GL_INVALID_VALUE error if size < 0.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Signed-off-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 57b8f13aa4)
2011-10-24 15:54:31 -07:00
Yuanhan Liu
8fb8b8528d mesa: fix error handling for glPixelZoom
According to the man page, GL_INVALID_OPERATION should be generated if
glPixelZoom is executed between the execution of glBegin and the
corresponding execution of glEnd.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Signed-off-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 7a9a8bbabd)
2011-10-24 15:54:30 -07:00
Yuanhan Liu
0ebdfa31bc mesa: fix error handling for glIsEnabled
According to the man page, GL_INVALID_OPERATION should be generated if
glIsEnabled is executed between the execution of glBegin and the
corresponding execution of glEnd.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Signed-off-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 6a98802243)
2011-10-24 15:54:30 -07:00
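
The same inside-Begin/End guard runs through the glPixelZoom, glIsEnabled,
glGet* and glEvalMesh fixes in this series; a minimal sketch, with
inside_begin_end() standing in (hypothetically) for Mesa's actual check:

	if (inside_begin_end(ctx)) {
	   _mesa_error(ctx, GL_INVALID_OPERATION,
	               "glIsEnabled(called inside glBegin/glEnd)");
	   return GL_FALSE;
	}
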
Yuanhan Liu
ab68b00453 mesa: fix error handling for glTexEnv
Fix error handling while calling glTexEnv with invalid texture
environment parameters.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Signed-off-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit b020b111a8)
2011-10-24 15:54:30 -07:00
Yuanhan Liu
a895cce693 mesa: fix error handling for some glGet* functions
According to the man page, a GL_INVALID_OPERATION error should be
generated when calling some glGet* functions inside glBegin and glEnd.

This patch handles the following functions:
 glGetBooleanv
 glGetFloatv
 glGetIntegerv
 glGetInteger64v
 glGetDoublev

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Signed-off-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit f1ddde5c16)
2011-10-24 15:54:30 -07:00
Yuanhan Liu
23a3753a28 mesa: fix error handling for glEvalMesh1/2D
According to the man page, trigger an error when calling glEvalMesh1/2D
inside glBegin/glEnd.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Signed-off-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 21b2895bd0)
2011-10-24 15:54:30 -07:00
Brian Paul
b53035e10d mesa: fix error handling for dlist image unpacking
When compiling glDrawPixels, glTexImage(), etc. and we're copying
the user's image we need to be careful about GL error checking.
Previously, we were incorrectly generating GL_OUT_OF_MEMORY in
unpack_image() if width <= 0 or height <= 0 or for invalid format/type
values.  We now check those arguments in unpack_image() and return NULL
if there's a bad value.  The command will get compiled with the
arguments as-is and image=NULL.  Later, when the command is executed the
correct errors will be generated.

This issue was reported by Yuanhan Liu <yuanhan.liu@linux.intel.com>

Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
(cherry picked from commit 6fd6efa7bf)
2011-10-24 15:54:30 -07:00
Adam Jackson
b2fbf8225b drisw: Remove cargo culting that breaks GLX 1.3 ctors
Signed-off-by: Adam Jackson <ajax@redhat.com>
(cherry picked from commit d44f821213)
2011-10-24 13:56:46 -04:00
Jeremy Huddleston
bf7b347c10 apple: Use the correct (OpenGL.framework) glViewport and glScissor during init
Signed-off-by: Jeremy Huddleston <jeremyhu@apple.com>
(cherry picked from commit 9f2abbee62)
2011-10-21 00:35:34 -07:00
Jeremy Huddleston
7e90db0ddc apple: Silence some debug spew
Signed-off-by: Jeremy Huddleston <jeremyhu@apple.com>
(cherry picked from commit 098ecfad83)
2011-10-21 00:35:29 -07:00
Marek Olšák
521819f29c r300g: don't return NULL in resource_from_handle if the resource is too small
The DDX may allocate a buffer that is too small.
Instead of failing, let's pretend everything's alright.

Such bugs should be fixed in the DDX, of course.

NOTE: This is a candidate for the stable branches.
(cherry picked from commit a04f8c3612)

Conflicts:

	src/gallium/drivers/r300/r300_texture.c
	src/gallium/drivers/r300/r300_texture.h
	src/gallium/drivers/r300/r300_texture_desc.c
2011-10-21 00:31:06 +02:00
Marek Olšák
8b4315cb47 pb_bufmgr_cache: flush cache when create_buffer fails and try again
NOTE: This is a candidate for the stable branches.
(cherry picked from commit 39d7de69b1)
2011-10-20 23:47:51 +02:00
Neil Roberts
986319cd20 meta: Fix saving the active program
When saving the active program in _mesa_meta_begin, it was actually
saving the fragment program instead. This means that if the
application binds a program that only has a vertex shader then when
the meta saved state is restored it will forget the bound program.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=41969
Reviewed-by: Chad Versace <chad@chad-versace.us>
(cherry picked from commit 5625f78cd7)
2011-10-18 11:15:05 -07:00
Yuanhan Liu
c3fd76ce09 i965: fix the constant interp bitmask for flat mode
Fix the constant interpolation enable bit mask for flat light mode.
FRAG_BIT_COL0 attribute bit might be 0, in which case we need to
shift one more bit right.

This would fix the oglc specularColor test fail on both Sandybridge and
Ivybridge.

v2: move the constant interp bitmask setup code into for(; attr <
FRAG_ATTRIB_MAX; attr++) loop suggested by Eric.

Also fixes the Civilization 4 intro videos.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Signed-off-by: Xiang, Haihao <haihao.xiang@intel.com>
(cherry picked from commit cd6b8421ca)
2011-10-17 14:22:03 -07:00
Marcin Slusarz
08fa61dab6 nouveau: fix fence hang
If there is not enough space in the pushbuffer for fence emission
(nouveau_fence_emit -> nv50_screen_fence_emit -> MARK_RING),
the pushbuffer is flushed, which through flush_notify ->
nv50_default_flush_notify -> nouveau_fence_update marks the currently
emitting fence as flushed. But the actual emission is done after this
mark. So later, when there is a need to wait on this fence and the
pushbuffer was not flushed in between, the fence wait will never finish,
causing the application to hang.

To fix this, introduce a new fence state between AVAILABLE and EMITTED,
set it before emission and handle it everywhere.

Additionally, obtain fence sequence numbers after the possible flush in
MARK_RING, because we want to emit fences in the correct order.

Reviewed-by: Christoph Bumiller <e0425955@student.tuwien.ac.at>

(commit 9849f366cb in master)
2011-10-17 23:01:41 +02:00
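
A minimal sketch of the state split described above, with illustrative
enumerator names (the actual nouveau names may differ):

	enum fence_state {
	   FENCE_AVAILABLE,
	   FENCE_EMITTING,   /* new: emission started but not finished, so a
	                        flush must not mark this fence as flushed */
	   FENCE_EMITTED,
	   FENCE_FLUSHED,
	   FENCE_SIGNALLED
	};
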
Marcin Slusarz
8d1f1eae93 nouveau: fix crash during fence emission
Fence emission can flush the push buffer, which through flush_notify
unreferences the recently emitted fence. If the ref count is increased
after fence emission, the unreference deletes the fence, which causes a
SIGSEGV.

Backtrace:
nouveau_fence_del
nouveau_fence_ref
nouveau_fence_next
nouveau_pushbuf_flush
MARK_RING
nv50_screen_fence_emit
nouveau_fence_emit
nv50_flush

This bug manifested as an assertion failure in nouveau_fence.c, because
the SIGSEGV handler tried to shut down the application and used the
messed-up fence.

This issue was reported by Maxim Levitsky.

(commit e1e03ce492 in master)
2011-10-17 23:00:39 +02:00
Marek Olšák
b9cc9166cf Revert "r300g: fix rendering with a non-zero index bias in draw_elements_immediate"
This reverts commit b9c7773e0d.

It breaks more things than it fixes.
2011-10-16 03:21:02 +02:00
Kenneth Graunke
07210d5c77 intel: Depth format fixes
This is a squash of:

    intel: Recognize all depth formats in get_teximage_readbuffer.

    The existing code was missing GL_DEPTH_COMPONENT32, resulting in it
    wrongly returning the color buffer instead of the depth buffer.

    Fixes an issue in PlaneShift 0.5.7 when casting spells.  The game calls
    CopyTexSubImage2D on buffers with a GL_DEPTH_COMPONENT32 internal
    format, which (prior to this patch) resulted in an attempt to copy
    ARGB8888 to X8_Z24.

    Instead of adding the missing enumeration directly, convert the code to
    use _mesa_is_depth_format() and _mesa_is_depthstencil_format() as these
    should catch any newly added depth formats in the future.

    Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
    Reviewed-by: Eric Anholt <eric@anholt.net>
    (cherry-picked from commit 440224ab73)

And:

    i915: Fix depth texturing since 86e62b2357

    The 965 driver already had the X8_Z24 case, but 915 was missing it.

    Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
    (cherry picked from commit 6aae729d6e)
2011-10-14 17:28:45 -07:00
Eric Anholt
57a6e6092f intel: Mark MESA_FORMAT_X8_Z24 as always supported.
This prevents developer surprise at seeing a GL_DEPTH_COMPONENT
texture have stencil bits, and avoids the metaops path accidentally
copying stencil bits around in glCopyTexImage(GL_DEPTH_COMPONENT) (and
being broken because swrast's glReadPixels(GL_UNSIGNED_INT_24_8) is
broken).

Acked-by: Chad Versace <chad@chad-versace.us>

(cherry-picked from commit 86e62b2357)
2011-10-14 17:28:45 -07:00
Chris Wilson
5b09cf5c57 i915: out-of-bounds write in calc_live_regs()
From a Coverity defect report.

src/mesa/drivers/dri/i915/i915_fragprog.c
   301  /*
   302   * TODO: consider moving this into core
   303   */
   304  static bool calc_live_regs( struct i915_fragment_program *p )
   305  {
   306      const struct gl_fragment_program *program = &p->FragProg;
   307      GLuint regsUsed = 0xffff0000;
-> 308      uint8_t live_components[16] = { 0, };
   309      GLint i;
   310
   311      for (i = program->Base.NumInstructions - 1; i >= 0; i--) {
   312          struct prog_instruction *inst =
&program->Base.Instructions[i];
   313          int opArgs = _mesa_num_inst_src_regs(inst->Opcode);
   314          int a;
   315
   316          /* Register is written to: unmark as live for this and
preceeding ops */
   317          if (inst->DstReg.File == PROGRAM_TEMPORARY) {
-> 318              if (inst->DstReg.Index > 16)
   319                 return false;
   320
-> 321              live_components[inst->DstReg.Index] &= ~inst->DstReg.WriteMask;
   322              if (live_components[inst->DstReg.Index] == 0)
   323                  regsUsed &= ~(1 << inst->DstReg.Index);
   324          }
   325
   326          for (a = 0; a < opArgs; a++) {
   327              /* Register is read from: mark as live for this and preceeding ops */
   328              if (inst->SrcReg[a].File == PROGRAM_TEMPORARY) {
   329                  unsigned c;
   330
   331                  if (inst->SrcReg[a].Index > 16)
   332                     return false;
   333
   334                  regsUsed |= 1 << inst->SrcReg[a].Index;
   335
   336                  for (c = 0; c < 4; c++) {
   337                      const unsigned field = GET_SWZ(inst->SrcReg[a].Swizzle, c);
   338
   339                      if (field <= SWIZZLE_W)
   340                          live_components[inst->SrcReg[a].Index] |= (1U << field);
   341                  }
   342              }
   343          }
   344
   345          p->usedRegs[i] = regsUsed;
   346      }

Reported-by: Vinson Lee <vlee@vmware.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=40022
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
(cherry picked from commit 67582e6eef)
2011-10-14 17:28:45 -07:00
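
The defect is an off-by-one: live_components[] has 16 entries (indices
0..15), yet an Index of exactly 16 slips past the "> 16" tests above and
writes past the end of the array. A minimal sketch, assuming the fix simply
tightens both bounds checks:

	/* live_components[] has 16 entries, so index 16 must be rejected too */
	if (inst->DstReg.Index >= 16)
	   return false;
	/* ...and likewise for the source registers: */
	if (inst->SrcReg[a].Index >= 16)
	   return false;
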
Carl Worth
eb0fd67f6a glcpp: Add a test for #elif with an undefined macro.
As written, this test correctly raises an error for #elif being used
with an undefined macro (and not as an argument to "defined"). If the
preceding #if were '#if 1' then this diagnostic would correctly be
hidden. That allows code such as the following to not raise an error:

	#ifndef MAYBE_UNDEFINED
	#elif MAYBE_UNDEFINED < 5
	...
	#endif

So this test case is working as expected already. We add it here just
to improve test coverage.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Carl Worth <cworth@cworth.org>
(cherry picked from commit 201485bae0)
2011-10-14 17:28:45 -07:00
Carl Worth
c6dfde2136 glcpp: Raise error if defining any macro containing two consecutive underscores
The specification reserves any macro name containing two consecutive
underscores, (anywhere within the name). Previously, we only raised
this error for macro names that started with two underscores.

Fix the implementation to check for two underscores anywhere, and also
update the corresponding 086-reserved-macro-names test.

This also fixes the following two piglit tests:

	spec/glsl-1.30/preprocessor/reserved/double-underscore-02.frag
	spec/glsl-1.30/preprocessor/reserved/double-underscore-03.frag

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Carl Worth <cworth@cworth.org>
(cherry picked from commit c4aaf7943c)
2011-10-14 17:28:45 -07:00
Carl Worth
71bd5d424c glcpp: Implement token pasting for non-function-like macros
This is as simple as abstracting one existing block of code into a
function call and then adding a single call to that function for the
case of a non-function-like macro.

This fixes the recently-added 097-paste-with-non-function-macro test
as well as the following piglit tests:

	spec/glsl-1.30/preprocessor/concat/concat-01.frag
	spec/glsl-1.30/preprocessor/concat/concat-02.frag

Also, the concat-04.frag test now passes for the right reason. The
test is intended to fail the compilation, but before this commit it
was failing compilation (and hence passing the test) for the wrong
reason.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Carl Worth <cworth@cworth.org>
(cherry picked from commit 28842c2331)
2011-10-14 17:28:45 -07:00
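
A small illustrative glcpp-style input (hypothetical, not one of the test
files) showing the object-like-macro case now handled:

	/* An object-like (non-function-like) macro whose body uses the token
	 * paste operator: "foo ## bar" must be merged into one token. */
	#define PASTE foo ## bar
	int PASTE = 1;   /* expands to: int foobar = 1; */
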
Carl Worth
33e1019d95 glcpp: Test a non-function-like macro using the token paste operator
Apparently we never implemented this (but we've got a GLSL 1.30 test
in piglit that exercises this case).

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Carl Worth <cworth@cworth.org>
(cherry picked from commit 7bb3403e01)
2011-10-14 17:28:45 -07:00
Carl Worth
ab94d6f902 glcpp: Fix two (or more) successive applications of token pasting
There was already a loop here to look for multiple token pastes, but
it was mistakenly incrementing the iterator counter after performing
one paste.

Instead, leave the loop iterator in place to coalesce as many tokens
as necessary into one.

This fixes the recently added 096-paste-twice test as well as the
following piglit test:

	spec/glsl-1.30/preprocessor/concat/concat-03.frag

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Carl Worth <cworth@cworth.org>
(cherry picked from commit 3c01a58944)
2011-10-14 17:28:45 -07:00
Ian Romanick
5becf89a17 i915: Only emit program errors when INTEL_DEBUG=wm or INTEL_DEBUG=fallbacks
This makes piglit a lot more happy.  The errors are logged when
INTEL_DEBUG=fallbacks because the application is about to hit a big
software fallback.  We frequently ask people to run applications that
are hitting software fallbacks with INTEL_DEBUG=fallbacks so that we
can help them debug the reason for the software fallback.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 0290a018a5)
2011-10-14 17:28:45 -07:00
Ian Romanick
13a2d4a985 i915: Fail without crashing if a Mesa IR program uses too many registers
This can only happen in GLSL shaders because assembly shaders that use
too many temps are rejected by core Mesa.  It is easiest to make this
happen with shaders that contain flow-control that could not be lowered.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 3bb2f0dde1)
2011-10-14 17:28:45 -07:00
Ian Romanick
f2d166583d ir_to_mesa: Emit warnings instead of errors for IR that can't be lowered
Rely on the driver to do the right thing.  This probably means falling
back to software.  Page 88 of the OpenGL 2.1 spec specifically says:

    "A shader should not fail to compile, and a program object should
    not fail to link due to lack of instruction space or lack of
    temporary variables. Implementations should ensure that all valid
    shaders and program objects may be successfully compiled, linked
    and executed."

There is no provision for saying "No" to a valid shader that is
difficult for the hardware to handle, so stop doing that.

On i915 this causes a large number of piglit tests to change from FAIL
to WARN.  The warning is because the driver still emits messages to
stderr like "i915_program_error: Unsupported opcode: BGNLOOP".

It also fixes ES2 conformance CorrectFull_frag and CorrectParse1_frag
on i915 (and probably other hardware that can't handle loops).

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 322c3bf9dc)
2011-10-14 17:28:42 -07:00
Ian Romanick
49d2c552a5 ir_to_mesa: Use linker_error instead of fail_link
The functions were almost identical.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 8aadd89d07)
2011-10-14 17:28:03 -07:00
Ian Romanick
a864b15b83 mesa: Ensure that gl_shader_program::InfoLog is never NULL
This prevents assertion failures in ralloc_strcat.  The ralloc_free in
_mesa_free_shader_program_data can be omitted because freeing the
gl_shader_program in _mesa_delete_shader_program will take care of
this automatically.

A bunch of this code could use a refactor to use ralloc a bit more
effectively.  A bunch of the things that are allocated with malloc and
owned by the gl_shader_program should be allocated with ralloc (using
the gl_shader_program as the context).

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 89193933cb)
2011-10-14 17:27:17 -07:00
Ian Romanick
e458c3ddbb linker: Make linker_{error,warning} generally available
linker_warning is a new function.  It's identical to linker_error
except that it doesn't set LinkStatus=false and it prepends "warning: "
on messages instead of "error: ".

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 379a32f42e)
2011-10-14 17:27:17 -07:00
Ian Romanick
b8a46f910d linker: Make linker_error set LinkStatus to false
Remove the other places that set LinkStatus to false since they all
immediately follow a call to linker_error.  The function linker_error
was previously known as linker_error_printf.  The name was changed
because it may seem surprising that a printf function will set an
error flag.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 586e741ac1)
2011-10-14 17:27:17 -07:00
Kenneth Graunke
eca2a91a9b i965: Fix inconsistent indentation in brw_eu_emit.c.
Most of these functions used three spaces for the first level of
indentation, but four spaces for the next level.  One used tabs and then
three spaces.  Some used 3/4 in a then block but 3/3 in the else block.

Normally I try to avoid field days like this, but since the functions
were so inconsistent, even internally, it was making it difficult to
edit without introducing spurious whitespace changes.

So, just get it over with.  git diff -b shows 0 lines changed.

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit b861479f83)
2011-10-13 16:35:40 -07:00
Kenneth Graunke
1f083e1839 i965: Allow SIMD16 color writes on Ivybridge.
Again, the check was needlessly specific: this works fine on Gen7.

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 7db874bf4c4273d2d46218b1490d312fe2654284)
2011-10-13 14:06:11 -07:00
Kenneth Graunke
9bbf2a343f i965/fs: Allow SIMD16 with control flow on Ivybridge.
The check was designed to forbid it on old generations (Gen5/Ironlake),
not on new ones.  It just works on Gen7/Ivybridge.

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit ae5da817e2aeb9f9447fdd6d2eb4b22d6f8f6a87)
2011-10-13 14:06:11 -07:00
Kenneth Graunke
38dfedccb2 i965: Emit depth stalls and flushes before changing depth state on Gen6+.
Fixes OpenArena on Gen7.  Technically, adding only the first depth stall
fixes it, but the documentation says to do all three, and the Windows
driver seems to do it.

Not observed to fix anything on Gen6 yet.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=38863
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 02c4dc807e91640c69c8addc3c797300a3c536ad)
2011-10-13 14:06:11 -07:00
Kenneth Graunke
1e0e116d6d i965: Fix incorrect maximum PS thread count shift on Ivybridge.
At one point, the documentation said that max thread count in 3DSTATE_PS
was at bit offset 23, but it's actually 24 on Ivybridge.  Not only did
this halve our thread count, it caused us to write 1 into bit 23, which
is marked as MBZ (must be zero).  Furthermore, it made us write an even
number into this field, which is apparently not allowed.  Apparently we
were just lucky it worked.

NOTE: This is a candidate for the 7.11 branch.

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 556e7eea80de778b44a37d51cb757ce32221d1e3)
2011-10-13 14:06:11 -07:00
Eric Anholt
aaadd4c111 i965: Fix polygon stipple offset state flagging.
_NEW_WINDOW_POS wasn't a real Mesa state flag, but we were missing
_NEW_BUFFERS to update the stipple offset when FBO binding or window
size changed, and _NEW_POLYGON to update when stippling gets enabled.

Fixes oglconform's tristrip test.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
(cherry picked from commit d598851d401f7f34d623c9cfbd85d7f5faccd7c2)
2011-10-13 13:59:06 -07:00
Eric Anholt
0d31b130bb i965: Add missing _NEW_POLYGON flag to polygon stipple upload.
Because we skip the pattern upload when stippling is disabled, we need
to check again when it might have been turned on.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
(cherry picked from commit e19541aa2ad05f687c859001b62713209787c9c8)
2011-10-13 13:58:57 -07:00
Kenneth Graunke
e4b1dce9ec i965: Use proper texture alignment units for cubemaps on Gen5+.
In particular, S3TC compressed textures need align_h == 4.

Fixes skybox errors in Quake 4 and FEAR.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=34628
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit e0e688ca5441e2c8bc59ec7488bc1bc4ba196602)
2011-10-13 13:58:51 -07:00
Kenneth Graunke
0f87fe948a i965/gen5+: Fix incorrect miptree layout for non-power-of-two cubemaps.
For power-of-two sizes, h0 == mt->height0 since it's already a multiple
of two.  However, for NPOT, they're different; h1 should be computed
based on the original size.

Fixes piglit test "cubemap npot" and oglconform test "textureNPOT".

NOTE: This is a candidate for stable release branches.

Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit bebc19448f45dbe8c3b016d440403f52e1036e15)
2011-10-13 13:58:45 -07:00
Eric Anholt
f484fc7476 i965/fs: Respect ARB_color_buffer_float clamping.
This was done in the old codegen path, but not the new one.  Caught by
piglit fbo tests after the conversion to GLSL ff_fragment_shader.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit da53ca641106e47f1d74386d8dc0f7eebeec5225)
2011-10-13 13:58:37 -07:00
Marek Olšák
b9c7773e0d r300g: fix rendering with a non-zero index bias in draw_elements_immediate
NOTE: This is a candidate for the stable branches.
(cherry picked from commit 5506f6ef96)
2011-10-04 17:48:33 +02:00
Paul Berry
7d2ff4ae77 glsl: improve the accuracy of the asin() builtin function.
The previous formula for asin(x) was algebraically equivalent to:

sign(x)*(pi/2 - sqrt(1-|x|)*(A + B|x| + C|x|^2))

where A, B, and C were arbitrary constants determined by a curve fit.

This formula had a worst case absolute error of 0.00448, an unbounded
worst case relative error, and a discontinuity near x=0.

Changed the formula to:

sign(x)*(pi/2 - sqrt(1-|x|)*(pi/2 + (pi/4-1)|x| + A|x|^2 + B|x|^3))

where A and B are arbitrary constants determined by a curve fit.  This
has a worst case absolute error of 0.00039, a worst case relative
error of 0.000405, and no discontinuities.

I don't expect a significant performance degradation, since the extra
multiply-accumulate should be fast compared to the sqrt() computation.

Fixes piglit tests {vs,fs}-asin-float and {vs,fs}-atan-*
(cherry picked from commit d4c80f5f85)
2011-10-02 21:39:30 +02:00
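
A minimal sketch of the new approximation quoted above; pi/2 and (pi/4 - 1)
come straight from the formula, while A and B stand for the curve-fit
constants, whose exact values are not reproduced here:

	#include <math.h>

	/* A and B are left as parameters since the message does not give
	 * their fitted values. */
	static float asin_approx(float x, float A, float B)
	{
	   const float pi_2 = 1.5707963f;
	   const float pi_4 = 0.7853982f;
	   float a = fabsf(x);
	   float p = pi_2 - sqrtf(1.0f - a) *
	             (pi_2 + (pi_4 - 1.0f) * a + A * a * a + B * a * a * a);
	   return x >= 0.0f ? p : -p;   /* sign(x) * p, continuous at x = 0 */
	}
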
Paul Berry
1bbf124ff8 glsl hierarchical visitor: Do not overwrite base_ir for parameter lists.
This patch fixes a bug in ir_hierarchical_visitor: when traversing an
exec_list representing the formal or actual parameters of a function,
it modified base_ir to point to each parameter in turn, rather than
leaving it as a pointer to the enclosing statement.  This was a
problem, since base_ir is used by visitor classes to locate the
statement containing the node being visited (usually so that
additional statements can be inserted before or after it).  Without
this fix, visitors might attempt to insert statements into parameter
lists.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit cc81eb09b9)
2011-10-02 19:57:57 +02:00
Eric Anholt
ca7560765c glsl: When assigning from a whole array, mark it as used.
Fixes piglit link-uniform-array-size.

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 407a1001ae)
2011-10-02 19:57:57 +02:00
Eric Anholt
878d701da4 glsl: When assigning to a whole array, mark the array as accessed.
The vs-varying-array-mat2-col-row-wr test writes a mat2[3] constant to
a mat2[3] varying out array, and also statically accesses element 1 of
it on the VS and FS sides.  At link time it would get trimmed down to
just 2 elements, and then codegen of the VS would end up generating
assignments to the unallocated last entry of the array.  On the new
i965 VS backend, that happened to land on the vertex position.

Some issues remain in this test on softpipe, i965/old-vs and
i965/new-vs on visual inspection, but i965 is passing because only one
green pixel is probed, not the whole split green/red quad.
2011-10-02 19:57:56 +02:00
Paul Berry
c19b963ad6 glsl: Remove field array_lvalue from ir_variable.
The array_lvalue field was attempting to enforce the restriction that
whole arrays can't be used on the left-hand side of an assignment in
GLSL 1.10 or GLSL ES, and can't be used as out or inout parameters in
GLSL 1.10.

However, it was buggy (it didn't work properly for built-in arrays),
and it was clumsy (it unnecessarily kept track on a
variable-by-variable basis, and it didn't cover the GLSL ES case).

This patch removes the array_lvalue field completely in favor of
explicit checks in ast_parameter_declarator::hir() (this check is
added) and in do_assignment (this check was already present).

This causes a benign behavioral change: when the user attempts to pass
an array as an out or inout parameter of a function in GLSL 1.10, the
error is now flagged at the time the function definition is
encountered, rather than at the time of invocation.  Previously we
allowed such functions to be defined, and only flagged the error if
they were invoked.

Fixes Piglit tests
spec/glsl-1.10/compiler/qualifiers/fn-{out,inout}-array-prohibited*
and
spec/glsl-1.20/compiler/assignment-operators/assign-builtin-array-allowed.vert.

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 00792e3586)
2011-10-02 19:57:08 +02:00
Eric Anholt
95185c7fe2 glsl: Clarify error message about whole-array assignment in GLSL 1.10.
Previously, it would produce:

    Failed to compile FS: 0:6(7): error: non-lvalue in assignment

and now it produces:

    Failed to compile FS: 0:5(7): error: whole array assignment is not
    allowed in GLSL 1.10 or GLSL ES 1.00.

Also, add spec quotation to the two places we have code for array
lvalues in GLSL 1.10.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 525cec98a5)
2011-10-02 19:56:53 +02:00
Paul Berry
e1221a8811 glsl: Rework oversize array check for gl_TexCoord.
The check now applies both when explicitly declaring the size of
gl_TexCoord and when implicitly setting the size of gl_TexCoord by
accessing it using integral constant expressions.

This is prep work for adding similar size checks to gl_ClipDistance.

Fixes piglit tests texcoord/implicit-access-max.{frag,vert}.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 93b9758d01)
2011-10-02 19:21:49 +02:00
Paul Berry
f732b5a999 glsl: Fix type error when lowering integer divisions
This patch fixes a bug when lowering an integer division:

  x/y

to a multiplication by a reciprocal:

  int(float(x)*reciprocal(float(y)))

If x was a plain int and y was an ivecN, the lowering pass
incorrectly assigned the type of the product to be float, when in fact
it should be vecN.  This caused mesa to abort with an IR validation
error.

Fixes piglit tests {fs,vs}-op-div-int-ivec{2,3,4}.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit af501e2b29)
2011-10-02 19:19:49 +02:00
Paul Berry
0129d5297b glsl: Perform implicit type conversions on function call out parameters.
When an out parameter undergoes an implicit type conversion, we need
to store it in a temporary, and then after the call completes, convert
the resulting value.  In other words, we convert code like the
following:

void f(out int x);
float value;
f(value);

Into IR that's equivalent to this:

void f(out int x);
float value;
int out_parameter_conversion;
f(out_parameter_conversion);
value = float(out_parameter_conversion);

This transformation needs to happen during ast-to-IR conversion (as
opposed to, say, a lowering pass), because it is invalid IR for formal
and actual parameters to have types that don't match.

Fixes piglit tests
spec/glsl-1.20/compiler/qualifiers/out-conversion-int-to-float.vert and
spec/glsl-1.20/execution/qualifiers/vs-out-conversion-*.shader_test,
and bug 39651.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=39651

Reviewed-by: Chad Versace <chad@chad-versace.us>
(cherry picked from commit 67b5a3267d)
2011-10-02 19:19:08 +02:00
Paul Berry
27f00df2b7 glsl: Check array size is const before asserting that no IR was generated.
process_array_type() contains an assertion to verify that no IR
instructions are generated while processing the expression that
specifies the size of the array.  This assertion needs to happen
_after_ checking whether the expression is constant.  Otherwise we may
crash on an illegal shader rather than reporting an error.

Fixes piglit tests array-size-non-builtin-function.vert and
array-size-with-side-effect.vert.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit d4144a123b)
2011-10-02 19:17:42 +02:00
Paul Berry
8dcfe15a9a glsl: Constant-fold built-in functions before outputting IR
Rearranged the logic for converting the ast for a function call to
hir, so that we constant fold before emitting any IR.  Previously we
would emit some IR, and then only later detect whether we could
constant fold.  The unnecessary IR would usually get cleaned up by a
later optimization step, however in the case of a builtin function
being used to compute an array size, it was causing an assertion.

Fixes Piglit test array-size-constant-relational.vert.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=38625
(cherry picked from commit 789ee6516b)
2011-10-02 19:16:47 +02:00
Paul Berry
2c0e00de23 glsl: Emit function signatures at toplevel, even for built-ins.
The ast-to-hir conversion needs to emit function signatures in two
circumstances: when a function declaration (or definition) is
encountered, and when a built-in function is encountered.

To avoid emitting a function signature in an illegal place (such as
inside a function), emit_function() checked whether we were inside a
function definition, and if so, emitted the signature before the
function definition.

However, this didn't cover the case of emitting function signatures
for built-in functions when those built-in functions are called from
inside the constant integer expression that specifies the length of a
global array.  This failed because when processing an array length, we
are emitting IR into a dummy exec_list (see process_array_type() in
ast_to_hir.cpp).  process_array_type() later checks (via an assertion)
that no instructions were emitted to the dummy exec_list, based on the
reasonable assumption that we shouldn't need to emit instructions to
calculate the value of a constant.

This patch changes emit_function() so that it emits function
signatures at toplevel in all cases.

This partially fixes bug 38625
(https://bugs.freedesktop.org/show_bug.cgi?id=38625).  The remainder
of the fix is in the patch that follows.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 0d81b0e184)
2011-10-02 19:03:22 +02:00
Paul Berry
1895de7a32 Revert "glsl: Skip processing the first function's body in do_dead_functions()."
opt_dead_functions contained a shortcut to skip processing the first
function's body, based on the assumption that IR functions are
topologically sorted, with callees always coming before their callers
(therefore the first function cannot contain any calls).

This assumption turns out not to be true in general.  For example, the
following code snippet gets translated to IR that violates this
assumption:

    void f();
    void g();
    void f() { g(); }
    void g() { ... }

In practice, the shortcut didn't cause bugs because of a coincidence
of the circumstances in which opt_dead_functions is called:

(a) we do inlining right before dead function elimination, and
    inlining (when successful) eliminates all calls.

(b) for user-defined functions, inlining is always successful, because
    previous optimization passes (during compilation) have reduced
    them to a form that is eligible for inlining.

(c) the function that appears first in the IR can't possibly call a
    built-in function, because built-in functions are always emitted
    before the function that calls them.

It seems unnecessarily fragile to have opt_dead_functions depend on
these coincidences.  And the next patch in this series will break (c).
So I'm reverting the shortcut.  The consequence will be a slight
increase in link time for complex shaders.

This reverts commit c75427f4c8.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 482338842d)
2011-10-02 19:02:30 +02:00
Paul Berry
7dc636dd77 glsl: improve the accuracy of the atan(x,y) builtin function.
The previous formula for atan(x,y) returned a value of +/- pi whenever
|x|<0.0001, and used a formula based on atan(y/x) otherwise.  This
broke in cases where both x and y were small (e.g. atan(1e-5, 1e-5)).

This patch modifies the formula so that it returns a value of +/- pi
whenever |x|<1e-8*|y|, and uses the formula based on atan(y/x)
otherwise.
(cherry picked from commit b1b4ea0b36)
2011-10-02 19:00:31 +02:00
Paul Berry
e42b822fec glsl: improve the accuracy of the radians() builtin function
The constant used in the radians() function didn't have enough
precision, causing a relative error of 1.676e-5, which is far worse
than the precision of 32-bit floats.  This patch reduces the relative
error to 1.14e-9, which is the best we can do in 32 bits.

Fixes piglit tests {fs,vs}-radians-{float,vec2,vec3,vec4}.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit fe33c886a7)
2011-10-02 18:50:28 +02:00
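
An illustrative way to measure the relative error of a candidate 32-bit
constant for pi/180 (the old constant Mesa shipped is not reproduced here):

	#include <stdio.h>
	#include <math.h>

	/* Compare the best single-precision rounding of pi/180 against the
	 * double value; any candidate constant can be checked the same way. */
	int main(void)
	{
	   const double exact  = 3.14159265358979323846 / 180.0;
	   const float  best32 = (float) exact;
	   printf("relative error: %g\n",
	          fabs((double) best32 - exact) / exact);
	   return 0;
	}
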
Paul Berry
0501cee136 glsl: Lower break instructions when necessary at the end of a loop.
Normally lower_jumps.cpp doesn't need to lower a break instruction
that occurs at the end of a loop, because all back-ends can produce
proper GPU instructions for a break instruction in this "canonical"
location.  However, if other break instructions within the loop are
already being lowered, then a break instruction at the end of the loop
needs to be lowered too, since after the optimization is complete a
new conditional break will be inserted at the end of the loop.

Without this patch, lower_jumps.cpp may require multiple passes in
order to lower all jumps.  This results in sub-optimal output because
lower_jumps.cpp produces a brand new set of temporary variables each
time it is run, and the redundant temporary variables are not
guaranteed to be eliminated by later optimization passes.

Fixes unit test test_lower_breaks_6.
(cherry picked from commit 067c9d7bd7)

Conflicts:

	src/glsl/lower_jumps.cpp
2011-10-02 18:48:27 +02:00
Paul Berry
38ae26b709 glsl: In lower_jumps.cpp, lower both branches of a conditional.
Previously, lower_jumps.cpp would break out of its loop after lowering
a jump instruction in just the then- or else-branch of a conditional,
and it would fail to lower a jump instruction occurring in the other
branch.

Without this patch, lower_jumps.cpp may require multiple passes in
order to lower all jumps.  This results in sub-optimal output because
lower_jumps.cpp produces a brand new set of temporary variables each
time it is run, and the redundant temporary variables are not
guaranteed to be eliminated by later optimization passes.

Fixes unit test test_lower_returns_4.
(cherry picked from commit e71b4ab8a6)
2011-10-02 18:45:10 +02:00
Paul Berry
de798938d4 glsl: Use foreach_list in lower_jumps.cpp
The visitor class in lower_jumps.cpp never removes or replaces the
instruction being visited, but it frequently alters or removes the
instructions that follow it.  Therefore, to make sure the altered IR
is visited, it needs to iterate through exec_lists using foreach_list
rather than visit_exec_list().

Without this patch, lower_jumps.cpp may require multiple passes in
order to lower all jumps.  This results in sub-optimal output because
lower_jumps.cpp produces a brand new set of temporary variables each
time it is run, and the redundant temporary variables are not
guaranteed to be eliminated by later optimization passes.

Also, certain invariants assumed by lower_jumps.cpp may fail to hold,
causing assertion failures.

Fixes unit tests test_lower_pulled_out_jump,
test_lower_unified_returns, test_lower_guarded_conditional_break,
test_lower_return_non_void_at_end_of_loop, and test_lower_returns_3.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 382cee91a4)
2011-10-02 18:44:50 +02:00
Paul Berry
acd2a03ffb glsl: lower unconditional returns and continues in loops.
Previously, lower_jumps.cpp would only lower return and continue
statements that appeared inside conditionals.  This patch makes it
lower unconditional returns and continue statements that occur inside
a loop.

Such unconditional flow control statements would be unlikely to be
explicitly coded by a reasonable user, however they might arise as a
result of other optimizations.

Without this patch, lower_jumps.cpp might not lower certain return and
continue statements, causing some backends to fail.

Fixes unit tests test_lower_return_void_at_end_of_loop and
test_remove_continue_at_end_of_loop.
(cherry picked from commit 03145ba655)

Conflicts:

	src/glsl/lower_jumps.cpp
2011-10-02 18:44:31 +02:00
Paul Berry
d1786cea1c glsl: Refactor logic for determining whether to lower return statements.
Previously, do_lower_jumps.cpp determined whether to lower return
statements in ir_lower_jumps_visitor::should_lower_jumps().  Moved
this logic to ir_lower_jumps_visitor::visit(ir_function_signature *),
so that it can be used in determining whether to lower a return
statement at the end of a function.
(cherry picked from commit dbaa2e627e)
2011-10-02 18:43:00 +02:00
Paul Berry
934c7a0661 glsl: Lower unconditional return statements.
Previously, lower_jumps.cpp only lowered return statements that
appeared inside of an if statement.

Without this patch, lower_jumps.cpp might not lower certain return
statements, causing some back-ends to fail (as in bug #36669).

Fixes unit test test_lower_returns_1.
(cherry picked from commit afc9a50fba)
2011-10-02 18:35:00 +02:00
Brian Paul
2ba0d0a5e8 mesa: add _NEW_CURRENT_ATTRIB in _mesa_program_state_flags()
If color material mode is enabled, constant buffer entries related
to the material coefficients will depend on glColor.  So add
_NEW_CURRENT_ATTRIB to the bitset returned for material-related
constants in _mesa_program_state_flags().

This fixes a bug exercised by the new piglit draw-arrays-colormaterial
test.

Note: This is a candidate for the 7.11 branch.
(cherry picked from commit 57169c4694)
2011-10-02 18:10:40 +02:00
Marek Olšák
1cf8f9599c r600g: add index_bias to index buffer bounds
This fixes ARB_draw_elements_base_vertex with max_index != ~0.

NOTE: This is a candidate for the 7.11 branch.
(cherry picked from commit 44afac04ea)
2011-10-02 18:10:19 +02:00
Brian Paul
2781baaa64 meta: fix broken sRGB mipmap generation
If we're generating a mipmap for an sRGB texture we need to bypass
sRGB->linear conversion.  Otherwise the destination mipmap level
(drawn with a textured quad) will have the wrong colors.
If we can't turn off sRGB->linear conversion (via GL_EXT_texture_sRGB_decode)
we need to use the software fallback for mipmap generation.

Note: This is a candidate for the 7.11 branch.
(cherry picked from commit 1e939f5374)
2011-10-02 18:09:22 +02:00
Brian Paul
a74400ca30 mesa: fix PACK_COLOR_5551(), PACK_COLOR_1555() macros
The 1-bit alpha channel was incorrectly encoded.  Previously, any non-zero
ubyte alpha value would set A=1.  Instead, use the
most significant bit of the ubyte alpha to determine the A bit.  This is
consistent with the other channels and other OpenGL implementations.

Note: This is a candidate for the 7.11 branch.

Reviewed-by: Michel Dänzer <michel@daenzer.net>
(cherry picked from commit 4731a598f0)
2011-10-02 18:08:31 +02:00
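A minimal sketch of the alpha-bit change described in the commit above; the OLD_/NEW_ALPHA_BIT macro names are illustrative stand-ins, not the actual Mesa PACK_COLOR_* definitions.

    #include <stdio.h>

    /* Illustrative only -- not the real Mesa macros.  Old behavior: any
     * non-zero 8-bit alpha set the A bit.  New behavior: use the most
     * significant bit, consistent with how the 5-bit channels are packed. */
    #define OLD_ALPHA_BIT(a8)  ((a8) ? 1 : 0)
    #define NEW_ALPHA_BIT(a8)  (((a8) & 0x80) >> 7)

    int main(void)
    {
        unsigned a = 0x40;   /* 25% alpha */
        printf("old: %u  new: %u\n", OLD_ALPHA_BIT(a), NEW_ALPHA_BIT(a));
        /* prints "old: 1  new: 0" -- only alpha >= 0x80 should set the A bit */
        return 0;
    }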
Tom Stellard
a5e2074fdd r300/compiler: Fix regalloc for values with multiple writers
https://bugs.freedesktop.org/show_bug.cgi?id=40062
https://bugs.freedesktop.org/show_bug.cgi?id=36939

Note: This is a candidate for the 7.11 branch.
(applied diff manually from 2d1004d9aa)
2011-10-02 18:07:53 +02:00
Brian Paul
fad6e2ea5a meta: fix/add checks for GL_EXT_framebuffer_sRGB
This fixes spurious GL errors when the GL_EXT_framebuffer_sRGB extension
is not supported.

Note: This is a candidate for the 7.11 branch
(cherry picked from commit 6e423253e7)
2011-10-02 18:05:03 +02:00
Vadim Girlin
a73c667069 r600g: fix replace_gpr_with_pv_ps
Instructions with 3 source operands have no write mask, so we may replace their
destinations with PV/PS in the next group even if their dst.write is 0.

Note: This is a candidate for the 7.11 branch.

Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit fdb62ef3f5)
2011-10-02 18:04:38 +02:00
Vadim Girlin
e87f79c8a4 r600g: fix check_and_set_bank_swizzle
Need to do a full check when not all bank swizzles in the group are forced
(e.g. when trying to merge an interp_* group with the next instruction).

Note: This is a candidate for the 7.11 branch.

Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit 6ba68c7654)

Conflicts:

	src/gallium/drivers/r600/r600_asm.c
2011-10-02 18:04:22 +02:00
Chad Versace
fa8cfbfb64 x86-64: Fix compile error with clang
Remove the 'f' suffix from a float literal.
    - .float 0.0f+1.0
    + .float 1.0

This fixes the following compile error with clang:
    error: unexpected token in directive
    .float 0.0f+1.0
              ^

Note: This is a candidate for the stable branches.
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Chad Versace <chad@chad-versace.us>
(cherry picked from commit 9cd64ec35a)
2011-10-02 18:00:27 +02:00
Brian Paul
446a67b74e swrast: don't try to do depth testing if there's no depth buffer
Fixes piglit hiz-depth-stencil-test-fbo-d0-s8 crash.
See http://bugs.freedesktop.org/show_bug.cgi?id=37907

NOTE: This is a candidate for the 7.11 branch.
(cherry picked from commit 37a64baea8)
2011-10-02 18:00:13 +02:00
Kenneth Graunke
d5c84929a7 mesa: In validate_program(), initialize errMsg for safety.
validate_program relies on validate_shader_program to fill in errMsg;
empirically, there exist cases where that doesn't happen.

While tracking those down may be worthwhile, initializing the string so
we don't try to ralloc_strdup random garbage also seems wise.

Fixes issues caught by valgrind while running some test case.

NOTE: This is a candidate for stable release branches.

Reviewed-by: Chad Versace <chad@chad-versace.us>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit db726b048e)
2011-10-02 18:00:03 +02:00
Christopher James Halse Rogers
dd9b78e212 glx/dri2: Paper over errors in DRI2Connect when indirect
DRI2 will throw BadRequest for this when the client is not local, but
DRI2 is an implementation detail and not something callers should have
to know about.  Silently swallow errors in this case, and just propagate
the failure through DRI2Connect's return code.

Note: This is a candidate for the stable release branches.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=28125
Signed-off-by: Christopher James Halse Rogers <christopher.halse.rogers@canonical.com>
(cherry picked from commit fbc2fcf685)
2011-10-02 17:59:54 +02:00
Chia-I Wu
2cadae90c0 glsl: empty declarations should be valid
Unlike C++, empty declarations such as

  float;

should be valid.  The spec is not actually explicit about this.

Some apps that generate their shader sources may rely on this.  This was
noted when porting one of them to Linux from Windows.

Reviewed-by: Chad Versace <chad@chad-versace.us>

Note: this is a candidate for the 7.11 branch.
(cherry picked from commit 547212d963)
2011-10-02 17:59:26 +02:00
Vadim Girlin
ffb0f94136 r600g: take into account force_add_cf in pops
When we have two ENDIFs in a row, we shouldn't modify the pop_count
for the same alu clause twice.

Fixes https://bugs.freedesktop.org/show_bug.cgi?id=38163

Note: this is a candidate for the 7.11 branch.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit 2bde0cc95d)
2011-10-02 17:58:50 +02:00
Vadim Girlin
badd2900ea r600g: use backend mask for occlusion queries
Use backend_map kernel query if supported, otherwise analyze ZPASS_DONE
results to get the mask.

Fixes lockups with predicated rendering due to incorrect query buffer
initialization on some cards.

Note: this is a candidate for the 7.11 branch.

Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit 6eb94fc344)
2011-10-02 17:58:40 +02:00
Chad Versace
6c032dd837 glsl: Fix conversions in array constructors
Array constructors obey narrower conversion rules than other constructors
[1] --- they use the implicit conversion rules [2] instead of the scalar
constructor conversions [3].  But process_array_constructor() was
incorrectly applying the broader rules.

[1] GLSL 1.50 spec, Section 5.4.4 Array Constructors, page 52 (58 of pdf)
[2] GLSL 1.50 spec, Section 4.1.10 Implicit Conversions, page 25 (31 of pdf)
[3] GLSL 1.50 spec, Section 5.4.1 Conversion, page 48 (54 of pdf)

To fix this, first check (with glsl_type::can_be_implicitly_converted_to)
if an implicit conversion is legal before performing the conversion.

Fixes:
piglit:spec/glsl-1.20/compiler/structure-and-array-operations/array-ctor-implicit-conversion-bool-float.vert
piglit:spec/glsl-1.20/compiler/structure-and-array-operations/array-ctor-implicit-conversion-bvec*-vec*.vert

Note: This is a candidate for the 7.10 and 7.11 branches.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Chad Versace <chad@chad-versace.us>
(cherry picked from commit a5ab9398e3)
2011-10-02 17:58:28 +02:00
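Two hypothetical GLSL 1.20 snippets (as C strings) illustrating the distinction above: int to float is an implicit conversion and remains legal in an array constructor, while bool to float is only a scalar-constructor conversion and must now be rejected.

    /* Hypothetical vertex shaders, for illustration only. */
    static const char *legal_array_ctor_vs =
        "#version 120\n"
        "void main() {\n"
        "    float a[2] = float[](1, 2.0);\n"      /* int -> float: implicit, legal   */
        "    gl_Position = vec4(a[0]);\n"
        "}\n";

    static const char *illegal_array_ctor_vs =
        "#version 120\n"
        "void main() {\n"
        "    float b[2] = float[](true, 2.0);\n"   /* bool -> float: must be rejected */
        "    gl_Position = vec4(b[0]);\n"
        "}\n";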
Chad Versace
70c5be6c91 glsl: Remove ir_function.cpp:type_compare()
The function is no longer used and has been replaced by
glsl_type::can_implicitly_convert_to().

Note: This is a candidate for the 7.10 and 7.11 branches.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Chad Versace <chad@chad-versace.us>
(cherry picked from commit 6efe1a8495)
2011-10-02 17:00:59 +02:00
Chad Versace
3b92831fab glsl: Fix implicit conversions in non-constructor function calls
Context
-------
In ast_function_expression::hir(), parameter_lists_match() checks if the
function call's actual parameter list matches the signature's parameter
list, where the match may require implicit conversion of some arguments.
To check if an implicit conversion exists between individual arguments,
type_compare() is used.

Problems
--------
type_compare() allowed the following illegal implicit conversions:
    bool -> float
    bvecN -> vecN

    int -> uint
    ivecN -> uvecN

    uint -> int
    uvecN -> ivecN

Change
------
type_compare() is buggy, so replace it with glsl_type::can_be_implicitly_converted_to().
This comprises a rewrite of parameter_lists_match().

Fixes piglit:spec/glsl-1.20/compiler/built-in-functions/outerProduct-bvec*.vert

Note: This is a candidate for the 7.10 and 7.11 branches.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Chad Versace <chad@chad-versace.us>
(cherry picked from commit 8b3627fd7b)
2011-10-02 17:00:45 +02:00
Chad Versace
e6d07585f8 glsl: Add method glsl_type::can_implicitly_convert_to()
This method checks if a source type is identical to or can be implicitly
converted to a target type according to the GLSL 1.20 spec, Section 4.1.10
Implicit Conversions.

The following commits use the method for a bugfix:
    glsl: Fix implicit conversions in non-constructor function calls
    glsl: Fix implicit conversions in array constructors

Note: This is a candidate for the 7.10 and 7.11 branches.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Chad Versace <chad@chad-versace.us>
(cherry picked from commit 200e4972c1)
2011-10-02 16:57:32 +02:00
Brian Paul
4bd0f04531 mesa: add missing breaks for GL_TEXTURE_CUBE_MAP_SEAMLESS queries
And fix indentation.

NOTE: This is a candidate for the 7.11 branch.
(cherry picked from commit dc1f32deae)
2011-10-02 16:53:57 +02:00
Alex Deucher
45716cffbe r600g: fix up vs export handling
Certain attributes (position, psize, etc.) don't
count as params; they are handled separately by the hw.
However, the VS is required to export at least one param
and r600_shader_from_tgsi() takes care of adding a dummy
export if there is none.  Make sure the VS param export
count in the SPI properly accounts for this.

Note: This is a candidate for the 7.11 branch.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit dc1c0ca22a)
2011-10-02 16:53:44 +02:00
Marek Olšák
ae633fa0ef configure.ac: fix xlib-based softpipe build
Tested-by: Jon TURNEY <jon.turney@dronecode.org.uk>

NOTE: This is a candidate for the 7.11 branch.
(cherry picked from commit c6f59fcd00)

Conflicts:

	configure.ac
2011-10-02 16:53:21 +02:00
Kenneth Graunke
25861dc7f3 glsl: Avoid massive ralloc_strndup overhead in S-Expression parsing.
When parsing S-Expressions, we need to store nul-terminated strings for
Symbol nodes.  Prior to this patch, we called ralloc_strndup each time
we constructed a new s_symbol.  It turns out that this is obscenely
expensive.

Instead, copy the whole buffer before parsing and overwrite it to
contain \0 bytes at the appropriate locations.  Since atoms are
separated by whitespace, (), or ;, we can safely overwrite the character
after a Symbol.  While much of the buffer may be unused, copying the
whole buffer is simple and guaranteed to provide enough space.

Prior to this, running piglit-run.py -t glsl tests/quick.tests with GLSL
1.30 enabled took just over 10 minutes on my machine.  Now it takes 5.

NOTE: This is a candidate for stable release branches (because it will
      make running comparison tests so much less irritating).

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 3875526926)
2011-10-02 16:50:39 +02:00
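A minimal sketch of the copy-once, terminate-in-place idea described above, assuming the caller has already located each symbol (start offset and length); this is not the actual Mesa S-expression reader.

    #include <stdlib.h>
    #include <string.h>

    /* Copy the source once; the extra byte lets the final symbol be terminated. */
    static char *copy_source(const char *src, size_t len)
    {
        char *buf = malloc(len + 1);
        if (buf) {
            memcpy(buf, src, len);
            buf[len] = '\0';
        }
        return buf;
    }

    /* Atoms are separated by whitespace, '(', ')' or ';', so the byte after a
     * symbol can be overwritten with a terminator instead of strndup'ing it. */
    static const char *symbol_in_place(char *buf, size_t start, size_t length)
    {
        buf[start + length] = '\0';
        return buf + start;
    }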
Marcin Baczyński
3fc660e896 configure: allow C{,XX}FLAGS override
NOTE: This is a candidate for the 7.11 branch.

Signed-off-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit ff2efdf599)
2011-10-02 16:49:56 +02:00
Marcin Baczyński
0fbf3562d1 configure: fix gcc version check
NOTE: This is a candidate for the 7.11 branch.

Signed-off-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit fa013419de)
2011-10-02 16:49:49 +02:00
Vadim Girlin
13476840a6 st/mesa: flush bitmap cache on query and conditional render boundaries
Bitmap caching shouldn't affect the results of the queries and
conditional render.

NOTE: This is a candidate for the 7.11 branch.

Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
Signed-off-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 4f4855b249)
2011-10-02 16:49:34 +02:00
Henri Verbeet
5336f7f5a5 mesa: Fix a couple of TexEnv unit limits.
NOTE: This is a candidate for the 7.11 branch.

Signed-off-by: Henri Verbeet <hverbeet@gmail.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit bfe284fd26)
2011-10-02 16:46:10 +02:00
Henri Verbeet
a470104763 mesa: Use the Elements macro for the sampler index assert in validate_samplers().
This is probably nicer if the array size ever changes.

NOTE: This is a candidate for the 7.11 branch.

Signed-off-by: Henri Verbeet <hverbeet@gmail.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 4744195628)
2011-10-02 16:46:02 +02:00
Henri Verbeet
338cf7128c mesa: Allow sampling from units >= MAX_TEXTURE_UNITS in shaders.
The total number of units used by a shader is limited to MAX_TEXTURE_UNITS,
but the actual indices are only limited by MAX_COMBINED_TEXTURE_IMAGE_UNITS,
since they're shared between vertex and fragment shaders.

NOTE: This is a candidate for the 7.11 branch.

Signed-off-by: Henri Verbeet <hverbeet@gmail.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 86adc2b29e)
2011-10-02 16:45:54 +02:00
Henri Verbeet
36dab12726 mesa: Check the texture against all units in unbind_texobj_from_texunits().
NOTE: This is a candidate for the 7.11 branch.

Signed-off-by: Henri Verbeet <hverbeet@gmail.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 2e35d90fb9)
2011-10-02 16:45:46 +02:00
Brian Paul
dc253d3100 mesa: fix texstore addressing bugs for depth/stencil formats
Using GLuint pointers worked when the pixel size was four bytes
or the row stride was a multiple of four but was otherwise broken.
Fixes failures found with the piglit fbo-stencil test.

This helps to fix https://bugs.freedesktop.org/show_bug.cgi?id=38729

NOTE: This is a candidate for the 7.11 branch.
(cherry picked from commit b786db0654)
2011-10-02 16:43:08 +02:00
Brian Paul
9f362a587e softpipe: add missing stencil format case in convert_quad_stencil()
Part of the fix for https://bugs.freedesktop.org/show_bug.cgi?id=38729

NOTE: This is a candidate for the 7.11 branch
(cherry picked from commit 057a107d44)

Conflicts:

	src/gallium/drivers/softpipe/sp_quad_depth_test.c
2011-10-02 16:42:54 +02:00
Henri Verbeet
8b5257a96f r600g: Support the PIPE_FORMAT_R16_FLOAT colorformat.
NOTE: This is a candidate for the 7.11 branch.
(cherry picked from commit 39fecd3229)
2011-10-02 16:38:09 +02:00
Eric Anholt
94e12df164 glsl: Allow ir_assignment() constructor to not specify condition.
We almost never want to specify a condition, and when we do we're
already thinking about it (because we're writing a lowering pass
generating the condition), so a default argument should make the code
more pleasant to read.

NOTE: This is a candidate for the 7.11 branch (we want to be able to
cherry-pick future code).
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit e617a53a74)
2011-10-02 16:31:16 +02:00
Eric Anholt
ac6a24001b mesa: Don't skip glGetProgramLocalParam4dvARB if there was already an error.
Like the previous commit, but fixes
ARB_vertex_program/getlocal4d-with-error.

v2: Move the success case line into the conditional, use ASSIGN_4V more.
(cherry picked from commit c9aac11713)
2011-10-02 15:33:05 +02:00
Eric Anholt
b7e69912fa mesa: Throw an error when starting conditional render on an active query.
From the NV_conditional_render spec:

    BeginQuery sets the active query object name for the query type given by
    <target> to <id>.  If BeginQuery is called with an <id> of zero, if the
    active query object name for <target> is non-zero, if <id> is the active
    query object name for any query type, or if <id> is the active query
    object for conditional rendering (Section 2.X), the error INVALID OPERATION
    is generated.

Fixes piglit nv_conditional_render-begin-while-active.

Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit fd17de2123)
2011-10-02 15:33:05 +02:00
Eric Anholt
268a2c1a8a mesa: Throw an error instead of asserting for condrender with query == 0.
From the NV_conditional_render spec:

    BeginQuery sets the active query object name for the query type given by
    <target> to <id>.  If BeginQuery is called with an <id> of zero, if the
    active query object name for <target> is non-zero, if <id> is the active
    query object name for any query type, or if <id> is the active query
    object for conditional rendering (Section 2.X), the error INVALID OPERATION
    is generated.

Fixes piglit nv_conditional_render-begin-zero.

Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 7371224c06)
2011-10-02 15:33:05 +02:00
Eric Anholt
28b95b2b01 mesa: Add support for Begin/EndConditionalRender in display lists.
Fixes piglit nv_conditional_render-dlist.

Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 8899f6e93c)
2011-10-02 15:33:05 +02:00
Ian Romanick
0a6ef34741 mesa: Make _mesa_get_compressed_formats match the texture compression specs
The implementation deviated slightly from the GL_EXT_texture_sRGB spec
and from other implementations.  A giant comment block was added to
justify the somewhat odd behavior of this function.

In addition, the interface had unnecessary cruft.  The 'all' parameter
was false at all callers, so it has been removed.

Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit b189d1635d)
2011-10-02 15:33:04 +02:00
Ian Romanick
545ecf6542 mesa: Return the correct internal fmt when a generic compressed fmt was used
If an application requests a generic compressed format for a texture
and the driver does not pick a specific compressed format, return the
generic base format (e.g., GL_RGBA) for the GL_TEXTURE_INTERNAL_FORMAT
query.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=3165
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 143b65f761)
2011-10-02 15:33:04 +02:00
Ian Romanick
a88f4914a3 mesa: Add utility function to get base format from a GL compressed format
Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 09916e877f)
2011-10-02 15:33:04 +02:00
Eric Anholt
ad2b567f51 mesa: Fix glGetUniform() type conversions.
We were primarily failing to convert in the NativeIntegers case, which
this fixes.  However, we were also just truncating float uniforms when
converting to integer, which does not appear to be the correct
behavior.  Note, however, that the NVIDIA drivers also truncate
instead of rounding.

GL_DOUBLE return type is dropped because it was never used and
completely broken.  It can be added when there's test code.

Fixes piglit ARB_shader_objects/getuniform

v2: This is a rewrite of my previous glGetUniform patch, which Ken
    pointed out missed storage_type-based conversions to integer,
    which was totally broken still thanks to a typo in the testcase.
v3: Quote the spec justifying the rounding behavior.

Acked-by: Kenneth Graunke <kenneth@whitecape.org>
Acked-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 9fa41f0742)

Conflicts:

	src/mesa/main/uniforms.c
2011-10-02 15:33:04 +02:00
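A usage sketch of the conversion behavior described above, assuming a GL 2.0+ context with prototypes available, a linked program `prog`, and a float uniform at location `loc` (both hypothetical); with this fix a value of 0.75 reads back as 1 (rounded) rather than 0 (truncated).

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    static GLint read_float_uniform_as_int(GLuint prog, GLint loc)
    {
        GLint as_int = 0;
        glUseProgram(prog);
        glUniform1f(loc, 0.75f);
        glGetUniformiv(prog, loc, &as_int);   /* expected: 1 after this change */
        return as_int;
    }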
Marek Olšák
37c2c9688c u_vbuf_mgr: fix uploading with a non-zero index bias
Also don't rely on pipe_draw_info being set correctly.

NOTE: This is a candidate for the 7.11 branch.
(cherry picked from commit 60a77cf316)
2011-10-02 15:33:04 +02:00
Marek Olšák
012b2057e8 u_vbuf_mgr: rework user buffer uploads
- first determine the buffer range to upload for each buffer by walking over
  vertex elements
- take buffer_offset into account
- take src_offset into account
- take src_format into account in more places
- don't just blindly upload (stride*count) bytes

NOTE: This is a candidate for the 7.11 branch.
(cherry picked from commit cd9bbb3935)
2011-10-02 15:33:03 +02:00
Marek Olšák
eb2a04b467 u_vbuf_mgr: remove unused flag U_VBUF_UPLOAD_FLUSHED
(cherry picked from commit 315300e444)
2011-10-02 15:33:03 +02:00
Marek Olšák
060e22c212 u_vbuf_mgr: s/u_vbuf_mgr_/u_vbuf_
(cherry picked from commit 28fb798911)

Conflicts:

	src/gallium/auxiliary/util/u_vbuf_mgr.c
	src/gallium/drivers/r600/r600_state_common.c
2011-10-02 15:33:03 +02:00
Marek Olšák
2330b7267c u_vbuf_mgr: fix max_index computation for large src_offset
NOTE: This is a candidate for the 7.11 branch.
2011-10-02 15:33:02 +02:00
Marek Olšák
7a7377a090 u_vbuf_mgr: don't take per-instance attribs into acc. when computing max index
NOTE: This is a candidate for the 7.11 branch.
2011-10-02 15:33:02 +02:00
Marek Olšák
d79d69c933 u_vbuf_mgr: cleanup original vs real vertex buffer arrays
It can now override both buffer offsets and strides, in addition to resources.
Overriding buffer offsets was kinda hackish and could cause issues with
non-native vertex formats.
2011-10-02 15:33:02 +02:00
Henri Verbeet
d072260921 mesa: Also set the remaining draw buffers to GL_NONE when updating just the first buffer in _mesa_drawbuffers().
Without this we'd miss the last update in a sequence like {COLOR0, COLOR1},
{COLOR0}, {COLOR0, COLOR1}. I originally had a patch for this that called
updated_drawbuffers() when the buffer count changed, but later realized that
was wrong. The ARB_draw_buffers spec explicitly says "The draw buffer for
output colors beyond <n> is set to NONE.", and this is queryable state.
This fixes piglit arb_draw_buffers-state_change.

NOTE: This is a candidate for the 7.11 branch.

Signed-off-by: Henri Verbeet <hverbeet@gmail.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit a4d72189b2)
2011-10-02 15:33:02 +02:00
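A usage sketch of the state sequence described above, assuming a bound framebuffer object with two color attachments and headers that declare the GL 2.0 and FBO entry points.

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    static void exercise_draw_buffer_state(void)
    {
        static const GLenum two[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
        static const GLenum one[] = { GL_COLOR_ATTACHMENT0 };
        GLint buf1 = 0;

        glDrawBuffers(2, two);                  /* {COLOR0, COLOR1} */
        glDrawBuffers(1, one);                  /* {COLOR0}: DRAW_BUFFER1 becomes NONE */
        glGetIntegerv(GL_DRAW_BUFFER1, &buf1);  /* expected: GL_NONE */
        glDrawBuffers(2, two);                  /* {COLOR0, COLOR1} must take effect again */
        (void)buf1;
    }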
Michel Dänzer
942c1b8de1 glx/dri2: Don't call X server for SwapBuffers when there's no back buffer.
As already done in dri2CopySubBuffer().

Should fix:

https://bugs.freedesktop.org/show_bug.cgi?id=36371
https://bugs.freedesktop.org/show_bug.cgi?id=40533

Might fix:

https://bugs.freedesktop.org/show_bug.cgi?id=32589

Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
(cherry picked from commit d8c443ddde)
2011-10-02 03:54:02 +02:00
Kristian Høgsberg
0502da1ade glx: Don't flush twice if we fallback to dri2CopySubBuffer
The flush extensions flush call indicates end of frame and should only
be called once per frame.  However, in the dri2SwapBuffer fallback
path, we call flush and then call dri2CopySubBuffer, which also calls
flush.  Refactor the code to only call flush once.
(cherry picked from commit 4a7667b96b)
2011-10-02 03:53:55 +02:00
Michel Dänzer
bb38f931a8 st/mesa: Finalize texture on render-to-texture.
This makes sure that stObj->pt exists and is up to date.

Fixes https://bugs.freedesktop.org/show_bug.cgi?id=39193 and piglit
fbo-incomplete-texture-03.

Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Brian Paul <brianp@vmware.com>

NOTE: This is a candidate for the 7.11 branch.
(cherry picked from commit de414f4915)
2011-10-02 03:45:43 +02:00
Brian Paul
726ce042f8 st/mesa: Convert size assertions to conditionals in st_texture_image_copy.
Prevents potential assertion failures in piglit fbo-incomplete-texture-03 test.

NOTE: This is a candidate for the 7.11 branch.
(cherry picked from commit 4beb8f9e9d)
2011-10-02 03:45:36 +02:00
Eric Anholt
4ebd2c7c09 mesa: Don't skip glGetProgramEnvParam4dvARB if there was already an error.
Fixes a bug caught by oglconform, and now piglit
ARB_vertex_program/getenv4d-with-error.  The wrapping of an existing
GL function made it so that we couldn't distinguish an error in
looking up our arguments from an existing error.  Instead, make a
helper function to choose the param, and use it from multiple callers.

v2: Move the success case line into the conditional, use COPY_4V more.
(cherry picked from commit e9d563e3ff)
2011-10-02 03:25:03 +02:00
Marcin Slusarz
e7794048ca nouveau: fix nouveau_fence leak
(commit 96054375b1 in master)
2011-09-13 15:36:01 +02:00
Carl Simonson
e20346bfc8 i830: Add missing vtable entry for i830 from the hiz work.
(cherry picked from commit 09eeb0ff27)
2011-09-10 18:33:13 -07:00
David Reveman
2217a70aaf i915g: Fix off-by-one in scissors. 2011-08-26 17:43:20 -07:00
Tobias Droste
6c1a9a327d r300/compiler: simplify code in peephole_add_presub_add
Signed-off-by: Tobias Droste <tdroste@gmx.de>
Signed-off-by: Marek Olšák <maraeo@gmail.com>
(cherry picked from commit 84f8548dfc)
2011-08-07 18:36:17 +02:00
Marek Olšák
e33f306d67 r300/compiler: remove an unused-but-set variable and simplify the code
(cherry picked from commit ed5e95ada6)
2011-08-07 18:36:10 +02:00
Marek Olšák
f69357d77a r300/compiler: fix a warning that a variable may be uninitialized
(cherry picked from commit 2ce6c3ea6e)
2011-08-07 18:36:03 +02:00
Marek Olšák
d8a0c1b4bc winsys/radeon: fix space checking
We should remove the relocations which caused a validation failure
from the list, so that the kernel receives only the validated ones.

NOTE: This is a candidate for the 7.11 branch.
(cherry picked from commit 64ab39b035)

Conflicts:

	src/gallium/winsys/radeon/drm/radeon_drm_cs.c
2011-08-07 18:35:06 +02:00
Marek Olšák
f396e43b7d vbo: do not call _mesa_max_buffer_index in debug builds
That code drops performance in Unigine Heaven and Tropics
by a factor of 10. That's too crazy even for a debug build.

NOTE: This is a candidate for the 7.11 branch.

Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit c251d83d91)
2011-08-07 18:28:51 +02:00
Brian Paul
aedfd07fb2 docs: news item for 7.11 release 2011-08-02 10:38:17 -06:00
Brian Paul
9e669d9f28 docs: add 7.11 md5 sums 2011-08-02 10:38:03 -06:00
Marc Pignat
4258e9b3a5 drisw: Fix 24bpp software rendering, take 2
This patch adds support for 24bpp in the dri/swrast implementation.
See http://bugs.freedesktop.org/show_bug.cgi?id=23525

Signed-off-by: Marc Pignat <marc at pignat.org>
Signed-off-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit cfec000e75)
2011-08-02 10:03:15 -06:00
Ian Romanick
de8f22af28 mesa: Bump version to 7.11 (final) 2011-07-31 22:48:10 -07:00
Ian Romanick
6ee71bab94 docs: More bits of 7.11 release notes 2011-07-31 22:47:26 -07:00
Jeremy Huddleston
6c72801c2b darwin: Use machine/endian.h to determine endianness
Signed-off-by: Jeremy Huddleston <jeremyhu@apple.com>
(cherry picked from commit 5b3c719983)
2011-07-31 09:47:56 -07:00
Jeremy Huddleston
0d2c369535 Fix PPC detection on darwin
Fixes regression introduced by 7004582c18

Signed-off-by: Jeremy Huddleston <jeremyhu@apple.com>
(cherry picked from commit e737a99a6f)
2011-07-31 09:47:51 -07:00
Ian Romanick
fffd20cdc1 Merge remote-tracking branch 'origin/7.11' into 7.11 2011-07-28 16:18:51 -07:00
Ian Romanick
fad610fec9 mesa: Bump version to 7.11-rc4 2011-07-28 16:08:42 -07:00
Vadim Girlin
9ca791d380 r600g: fix vs export count
Fixes https://bugs.freedesktop.org/show_bug.cgi?id=39572

Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
2011-07-28 19:05:59 -04:00
Kenneth Graunke
69720cb0c4 i965: Remove the now unused intel_renderbuffer::draw_offset field.
The previous commit removed the last use of this field.

Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit f73caddd33)
2011-07-28 14:23:08 -07:00
Kenneth Graunke
8589ca000a i965: Check actual tile offsets in Gen4 miptree workaround.
The purpose of the (irb->draw_offset & 4095) != 0 check was to ensure
that we don't have X/Y offsets into a tile, since Gen4 hardware doesn't
support that.  However, it's insufficient: there are cases where
draw_offset & 4095 is 0 but we still have a Y-offset.  This leads to an
assertion failure in brw_update_renderbuffer_surface with tile_y != 0.

Instead, simply call intel_renderbuffer_tile_offsets to compute the
actual X/Y offsets and check if either are non-zero.  This makes both
the workaround and the assertion check the same things.

Fixes piglit test fbo-generatemipmap-formats, and should also fix
bugs #34009 and #39487.

NOTE: This is a candidate for stable release branches.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=34009
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=39487
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Chad Versace <chad@chad-versace.us>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 15c0bc5eef)
2011-07-28 14:22:59 -07:00
Kenneth Graunke
dcbd00e73c i965/gen4: Fix message parameter loading for 1D TXD sampling.
We were neglecting to load dvdx and dvdy.  v is not optional.

Fixes glslparsertests tex-grad-0[12345].frag on Broadwater/Crestline.
(We still need an execution test using sampler1D.)

NOTE: This is a candidate for the 7.11 branch.

Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 3e1fd13f60)
2011-07-28 14:22:48 -07:00
Fredrik Höglund
bba1600531 st/mesa: fix the texture format in st_context_teximage
Commit 1a339b6c71 made
st_ChooseTextureFormat map GL_RGBA with type GL_UNSIGNED_BYTE
to PIPE_FORMAT_A8B8G8R8_UNORM.

The image format for ARGB pixmaps, however, is PIPE_FORMAT_B8G8R8A8_UNORM.
This mismatch caused the texture to be recreated in
st_finalize_texture.

NOTE: This is a candidate for the 7.11 branch.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=39209
Signed-off-by: Fredrik Höglund <fredrik@kde.org>
Reviewed-by: Stéphane Marchesin <marcheu@chromium.org>
Signed-off-by: Brian Paul <brianp@vmware.com>
2011-07-28 11:56:45 -07:00
Ian Romanick
f4c55ea016 mesa: Ensure that r300 compiler files only appear once in the tarballs
Reviewed-by: Jeremy Huddleston <jeremyhu@apple.com>
Tested-by: Jeremy Huddleston <jeremyhu@apple.com>
Cc: Andreas Radke <a.radke@arcor.de>
2011-07-28 11:47:46 -07:00
Ian Romanick
aa05fbe14d mesa: Bump version to 7.11-rc3 2011-07-28 11:47:45 -07:00
Ian Romanick
0a7574f62c mesa: Use --dereference to avoid symlinks in tarballs 2011-07-28 11:47:45 -07:00
Eric Anholt
1098f9228d i965/fs: Fix MRT drawing since the m0->m2 move for shader debug.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 3daa2d97eb)
2011-07-28 11:47:45 -07:00
Eric Anholt
5238303790 i965: Fix many of the trivial WebGL demos that broke due to IB optimization.
The index buffer state emit only occurred if there was an IB in place
and we were in either a new batch or a new IB state.  But because we
only flagged new IB state if IB state changed from the last IB state
we calculated, we could simply never emit IB state after batchbuffer
wraps if the first draw didn't use the IB and we didn't actually
change the IB.

Fixes piglit glx-multi-context-ib-1.
(cherry picked from commit 818db3848b)
2011-07-28 11:47:45 -07:00
Eric Anholt
a03974ffb9 i965: Emit texture cache flushes on gen6 along with render cache flushes.
It turns out that internally the texture cache gets flushed in a
couple of cases, particularly around 2D operations mixed with 3D.  In
almost all cases one of those happens between rendering to an
FBO-attached texture and rendering from that texture.  However, as of
the next patch, glean tfbo (and the new fbo-flushing-2 test) would
manage to get stale texture values because one of those flushes didn't
occur.  The intention of this code was always to get the render cache
cleared and ready to be used from the sampler cache (and it does on <=
gen4), so this just catches gen5 up.

This patch was also tested to fix fbo-flushing on gen7.
(cherry picked from commit 185868c9c2)
2011-07-28 11:47:45 -07:00
Paul Berry
2d64d34cb9 i965: vs optimization fix: Check val.{negate,abs} in accumulator_contains()
When emitting a MAC instruction in a vertex shader, brw_vs_emit()
calls accumulator_contains() to determine whether the accumulator
already contains the appropriate addend; if it does, then we can avoid
emitting an unnecessary MOV instruction.

However, accumulator_contains() wasn't checking the val.negate or
val.abs flags.  As a result, if the desired value was the negation, or
the absolute value, of what was already in the accumulator, we would
generate an incorrect shader.

Fixes piglit test vs-refract-vec4-vec4-float.

Tested on Gen5 and Gen6.

Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit d92463d5dc)
2011-07-28 11:47:45 -07:00
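A hypothetical, simplified version of the check described above; the struct and field names are illustrative, not the real brw_vs_emit() types.

    #include <stdbool.h>

    struct src_operand {
        int file, nr;
        unsigned negate:1;
        unsigned abs:1;
    };

    /* Matching the same register is not enough: a negated or absolute-value
     * use of that register is a different value and still needs a MOV. */
    static bool accumulator_contains(const struct src_operand *acc,
                                     const struct src_operand *val)
    {
        return acc->file == val->file &&
               acc->nr   == val->nr   &&
               !val->negate &&
               !val->abs;
    }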
Kenneth Graunke
66b41af391 i965/gen7: Fix shadow sampling in the old brw_wm_emit backend.
On Ivybridge, the shadow comparator goes in the first slot, rather than
at the end.  It's not necessary to send u, v, and r.

Fixes tests texturing/texdepth and glean/fbo.

NOTE: This is a candidate for the 7.11 branch.

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 572f631895)
2011-07-28 11:47:45 -07:00
Kenneth Graunke
67aa20d9d5 i965/fs: Clear result before visiting shadow comparitor and LOD info.
Commit 53c89c67f3 ("i965: Avoid generating
MOVs for assignments of expressions.") added the line "this->result =
reg_undef" all over the code.  Unfortunately, since Eric developed his
patch before I landed Ivybridge support, he missed adding it to
fs_visitor::emit_texture_gen7() after rebasing.

Furthermore, since I developed TXD support before Eric's patch, I
neglected to add it to the gradient handling when I rebased.

Neglecting to set this causes the visitor to use this->result as storage
rather than generating a new temporary.  These missing statements
resulted in the same register being used to store several different
values.

Fixes the following piglit tests on Ivybridge:
- glsl-fs-shadow2dproj.shader_test
- glsl-fs-shadow2dproj-bias.shader_test

NOTE: This is a candidate for the 7.11 branch.

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 156cef0fba)
2011-07-28 11:47:45 -07:00
Ian Romanick
7d6c37b0c2 glsl: Treat ir_dereference_array of non-var as a constant for lowering
Previously the code would just look at deref->array->type to see if it
was a constant.  This isn't good enough because deref->array might be
another ir_dereference_array... of a constant.  As a result,
deref->array->type wouldn't be a constant, but
deref->variable_referenced() would return NULL.  The unchecked NULL
pointer would shortly lead to a segfault.

Instead just look at the return of deref->variable_referenced().  If
it's NULL, assume that either a constant or some other form of
anonymous temporary storage is being dereferenced.

This is a bit hinkey because most drivers treat constant arrays as
uniforms, but the lowering pass treats them as temporaries.  This
keeps the behavior of the old code, so this change isn't making things
worse.

Fixes i965 piglit:

    vs-temp-array-mat[234]-index-col-rd
    vs-temp-array-mat[234]-index-col-row-rd
    vs-uniform-array-mat[234]-index-col-rd
    vs-uniform-array-mat[234]-index-col-row-rd

Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 156f85336f)
2011-07-28 11:47:45 -07:00
Ian Romanick
32c7224edb i965: When emitting a src/dst read of an output, keep the swizzle and neg
Fixes i965 piglit vs-varying-array-mat[234]-row-rd.

Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 1d3f09f159)
2011-07-28 11:47:45 -07:00
Ian Romanick
788acd552e i965: When emitting a src/dst write of an output, keep the write mask
Fixes i965 piglit:

    vs-varying-array-mat[234]-col-row-wr
    vs-varying-array-mat[234]-index-col-row-wr
    vs-varying-array-mat[234]-index-row-wr
    vs-varying-array-mat[234]-row-wr
    vs-varying-mat[234]-col-row-wr
    vs-varying-mat[234]-row-wr

Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 337e2dfad0)
2011-07-28 11:47:45 -07:00
Ian Romanick
a2b8802ed5 prog_optimize: Set unused regs to PROGRAM_UNDEFINED after CMP->MOV conversion
Leaving the unused registers with other values caused assertion
failures and other problems in places that blindly iterate over all
sources.

brw_vs_emit.c:1381: get_src_reg: Assertion `c->regs[file][index].nr !=
0' failed.

Fixes i965 piglit:

    vs-uniform-array-mat[234]-col-row-rd
    vs-uniform-array-mat[234]-index-col-row-rd
    vs-uniform-array-mat[234]-index-row-rd
    vs-uniform-mat[234]-col-row-rd

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit fbeb68e880)
2011-07-28 11:47:45 -07:00
Ian Romanick
8b41ae0b2a ir_to_mesa: Copy reladdr in src_reg(dst_reg) constructor
Fixes i965 piglit:

    vs-temp-array-mat[234]-col-row-wr
    vs-temp-array-mat[234]-index-col-row-wr
    vs-temp-array-mat[234]-index-row-wr
    vs-temp-mat[234]-col-row-wr

Fixes swrast piglit:

    fs-temp-array-mat[234]-col-row-wr
    fs-temp-array-mat[234]-index-col-row-wr
    fs-temp-array-mat[234]-index-row-wr
    fs-temp-mat[234]-col-row-wr
    vs-temp-array-mat[234]-col-row-wr
    vs-temp-array-mat[234]-index-col-row-wr
    vs-temp-array-mat[234]-index-row-wr
    vs-temp-mat[234]-col-row-wr

Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit f7cd9a858c)
2011-07-28 11:47:44 -07:00
Ian Romanick
b00b5fe267 ir_to_mesa: Add each relative address to the previous
This fixes many cases of accessing arrays of matrices using
non-constant indices at each level.

Fixes i965 piglit:

    vs-temp-array-mat[234]-index-col-rd
    vs-temp-array-mat[234]-index-col-row-rd
    vs-temp-array-mat[234]-index-col-wr
    vs-uniform-array-mat[234]-index-col-rd

Fixes swrast piglit:

    fs-temp-array-mat[234]-index-col-rd
    fs-temp-array-mat[234]-index-col-row-rd
    fs-temp-array-mat[234]-index-col-wr
    fs-uniform-array-mat[234]-index-col-rd
    fs-uniform-array-mat[234]-index-col-row-rd
    fs-varying-array-mat[234]-index-col-rd
    fs-varying-array-mat[234]-index-col-row-rd
    vs-temp-array-mat[234]-index-col-rd
    vs-temp-array-mat[234]-index-col-row-rd
    vs-temp-array-mat[234]-index-col-wr
    vs-uniform-array-mat[234]-index-col-rd
    vs-uniform-array-mat[234]-index-col-row-rd
    vs-varying-array-mat[234]-index-col-rd
    vs-varying-array-mat[234]-index-col-row-rd
    vs-varying-array-mat[234]-index-col-wr

Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit d6e1a8f714)
2011-07-28 11:47:44 -07:00
Ian Romanick
3a5a7cfd2a glsl: When lowering non-constant vector indexing, respect existing conditions
If the non-constant index was in the LHS of an assignment, any
existing condition on that assignment would be lost.

Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 601428d2bb)
2011-07-28 11:47:44 -07:00
Ian Romanick
aad12d15bb glsl: When lowering non-constant array indexing, respect existing conditions
If the non-constant index was in the LHS of an assignment, any
existing condition on that assignment would be lost.

Fixes i965 piglit:

    fs-temp-array-mat[234]-col-row-wr
    fs-temp-array-mat[234]-index-col-row-wr
    fs-temp-array-mat[234]-index-col-wr
    fs-temp-array-mat[234]-index-row-wr
    vs-varying-array-mat[234]-index-col-wr

Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 5f83dfe5b7)
2011-07-28 11:47:44 -07:00
Ian Romanick
977db7cc65 glsl: Rework lowering of non-constant array indexing
The previous implementation could easily get tricked if the LHS of an
assignment included a non-constant index that was "inside" another
dereference.  For example:

    mat4 m[2];
    m[0][i] = vec4(0.0);

Due to the way it tracked whether the array was being assigned, it
would think that the non-constant index was in an r-value.  The new
code fixes that by tracking l-values and r-values differently.  The
index is also replaced by cloning the IR and replacing the index
variable instead of the odd way it was done before.

v2: Apply some simplifications suggested by Eric Anholt.  Making
assignment_generator::rvalue be ir_dereference instead of ir_rvalue
simplified the code a bit.

Fixes i965 piglit fs-temp-array-mat[234]-index-wr and
vs-varying-array-mat[234]-index-wr.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=34691
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 1731ac3086)

To make bisects work, this also squashes in:

glsl: Correctly return progress from lower_variable_index_to_cond_assign

lower_variable_index_to_cond_assign runs until it can't make any more
progress.  It then returns the result of the last pass which will
always be false.  This caused the lowering loop in
_mesa_ir_link_shader to end before doing one last round of
lower_if_to_cond_assign.  This caused several if-statements (resulting
from lower_variable_index_to_cond_assign) to be left in the IR.

In addition to this change, lower_variable_index_to_cond_assign should
take a flag indicating whether or not it should even generate
if-statements.  This is easily controlled by
switch_generator::linear_sequence_max_length.  This would generate
much better code on architectures without any flow control.

Fixes i915 piglit regressions glsl-texcoord-array and
glsl-fs-vec4-indexing-temp-src.

Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit c1e591eed4)
2011-07-28 11:47:44 -07:00
Ian Romanick
d31c1c33ed glsl: Split out part of variable_index_to_cond_assign_visitor::needs_lowering
Other code will soon need to know if an array needs lowering based
exclusively on the storage mode.

Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit d2296e784a)
2011-07-28 11:47:44 -07:00
Ian Romanick
3481a5a1e6 glsl: Move is_array_or_matrix outside visitor class
There's no reason for it to be there, and another class that may not
have access to the visitor will need it soon.

Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 8d5f3cef79)
2011-07-28 11:47:44 -07:00
Marek Olšák
a26b6cc001 configure.ac: add DLOPEN_LIBS to xlib build
Otherwise xlib-based llvmpipe fails to link.

NOTE: This is a candidate for the 7.11 branch.
(cherry picked from commit 0aed27ee37)
2011-07-28 00:29:07 +02:00
Benjamin Franzke
7d992b8471 wayland-drm: Add copyright notice to protocol
Fixes build since wayland 986703ac7365bc87a5501714adb9fc73157c62b7.
(cherry picked from commit 79dcfb266a)
2011-07-27 10:08:01 +02:00
Tobias Droste
0bfa310049 egl/gallium: fix build without softpipe and llvmpipe
Signed-off-by: Tobias Droste <tdroste@gmx.de>
Acked-by: Jakob Bornecrantz <wallbraker@gmail.com>
Reviewed-by: Marek Olšák <maraeo@gmail.com>
(cherry picked from commit d4d5e3a336)
2011-07-27 09:43:03 +02:00
Benjamin Franzke
7555eb7478 Fix broken merge in cherry-pick from 42cdf407
Was cherry-picked to 337102684b.
2011-07-27 09:25:54 +02:00
Bryan Cain
fd461c5888 util: enable S3TC support when the force_s3tc_enable env var is set to "true"
NOTE: This is a candidate for the 7.10 and 7.11 branches.
2011-07-26 13:23:18 -05:00
Bryan Cain
208bae4251 st/mesa: respect force_s3tc_enable environment variable
NOTE: This is a candidate for the 7.10 and 7.11 branches.
2011-07-26 13:23:15 -05:00
Marek Olšák
13d12b35e9 configure.ac: do not check for llvm-config if llvm is disabled 2011-07-25 23:54:28 +02:00
Benjamin Franzke
337102684b configure: Move gbm before egl in SRC_DIRS
egl_dri2 built into libEGL depends on libgbm.

Fixes https://bugs.freedesktop.org/show_bug.cgi?id=39515
(cherry picked from commit 42cdf4074e)
2011-07-25 09:47:39 +02:00
Marek Olšák
b305c956be configure.ac: check for libdrm_radeon only when building classic
(cherry picked from commit 50e32fefb1)
2011-07-23 16:00:27 +02:00
Ian Romanick
7c98381ed4 glsl: Reject shaders that contain static recursion
The GLSL 1.20 and later specs say:

    "Recursion is not allowed, not even statically. Static recursion is
    present if the static function call graph of the program contains
    cycles."

Recursion is detected and rejected both at compile time and at
link time.  The compile-time check happens to detect some cases that
may be removed by various optimization passes.  The spec doesn't seem
to allow this, but other vendors (e.g., NVIDIA) appear to only check
at link-time after all optimizations.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=33885
Reviewed-by: Paul Berry <stereotype441@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 02c5ae1b3f)

This also squashes in the following commit to make sure that bisects
in scons builds work:

glsl: Add ir_function_detect_recursion.cpp to SConscript.
(cherry picked from commit 76bccaff0c)
2011-07-23 01:55:25 -07:00
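A hypothetical GLSL shader (as a C string) containing the kind of static call-graph cycle that must now be rejected, even though the recursion could never actually execute.

    /* f -> g -> f forms a cycle in the static call graph. */
    static const char *static_recursion_vs =
        "float g(float x);\n"
        "float f(float x) { return x > 0.0 ? g(x - 1.0) : 0.0; }\n"
        "float g(float x) { return f(x); }\n"
        "void main() { gl_Position = vec4(f(3.0)); }\n";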
Ian Romanick
0ce6506d6e glsl: Make prototype_string publicly available
Also clarify the documentation for one of the parameters.

Reviewed-by: Paul Berry <stereotype441@gmail.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 1ad3ba4ad9)
2011-07-23 01:53:59 -07:00
Stéphane Marchesin
7fc66f0bed Revert "i915: Eliminate redundant CONSTANTS updates"
This reverts commit 87641cffd9.
(cherry picked from commit 3c0c624879)
2011-07-23 01:53:40 -07:00
Chia-I Wu
e24c3575e4 u_vbuf_mgr: restore buffer offsets
u_vbuf_upload_buffers modifies the buffer offsets.  If they are not
restored, and any of the vertex formats is not supported natively, the
next u_vbuf_mgr_draw_begin call will translate the vertex buffers with
incorrect buffer offsets.
(cherry picked from commit afc160e1c8)

Signed-off-by: Marek Olšák <maraeo@gmail.com>
2011-07-21 22:18:56 +02:00
Marek Olšák
2e50a93cf3 r600g: more valgrind fixes
(cherry picked from commit dc9d789d1b)
2011-07-21 15:03:03 +02:00
Marek Olšák
2c29c24543 r600g: zero memory of ioctl parameters
Fixes valgrind warning.
(cherry picked from commit daf6604435)
2011-07-21 15:02:53 +02:00
Marek Olšák
4cbaa2283c configure.ac: Check for the respective libdrm_* when building gallium drivers
In a rare case of building gallium only, we need to
check if the required packages are available

libdrm_[intel|nouveau] - gallium[i915 i965|nouveau]

v2: r300g and r600g do not need libdrm_radeon

Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Signed-off-by: Marek Olšák <maraeo@gmail.com>
(cherry picked from commit c2426bbf86)

Conflicts:

	configure.ac
2011-07-21 14:57:19 +02:00
Marek Olšák
e29d44eacd mesa: GLES2 should return different error enums for invalid fbo queries
ES 2.0.25 page 127 says:

  If the value of FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE is NONE, then
  querying any other pname will generate INVALID_ENUM.

See also:
b9e9df78a0

NOTE: This is a candidate for the 7.10 and 7.11 branches.

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 000896c0bb)
2011-07-21 14:54:09 +02:00
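A usage sketch of the quoted rule, assuming an ES 2.0 context with an application-created framebuffer bound and nothing attached at COLOR_ATTACHMENT0 (so OBJECT_TYPE is NONE).

    #include <GLES2/gl2.h>

    static GLenum query_unattached_attachment(void)
    {
        GLint name = 0;
        glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER,
                                              GL_COLOR_ATTACHMENT0,
                                              GL_FRAMEBUFFER_ATTACHMENT_OBJECT_NAME,
                                              &name);
        return glGetError();   /* expected: GL_INVALID_ENUM, not INVALID_OPERATION */
    }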
Andrew Randrianasulu
f29208aa9b dri/nouveau: nv10: fix vertex format for GL_UNSIGNED_BYTE
Broken accidentally in f4efc256fd,
the switch to rnn headers.

NV10TCL_VTXFMT_TYPE_BYTE_RGBA became U8_UNORM but B8G8R8A8_UNORM
was used instead.
2011-07-21 10:33:33 +02:00
David Heidelberger
7af4e18dcd nvfx: handle PIPE_CAP_SM3
Signed-off-by: David Heidelberger <d.okias@gmail.com>
2011-07-21 10:33:03 +02:00
Eric Anholt
344db29ede i965: Apply a homebrew workaround for GPU hang in OGLC api-texcoord.
The behavior of flushes in the hardware is a maze of twisty passages,
and strangely the VS constants appear to be loaded during a pipeline
flush instead of at the time of the packet emit according to the
simulator.  On moving the STATE_BASE_ADDRESS packet to where it really
needed to live (in order for data loads by other packets to be
correct), we sometimes no longer got a flush between those packets
where we apparently needed it.  This replicates the flushes implied by
a STATE_BASE_ADDRESS update, fixing the GPU hangs in OGLC and the
"engine" demo.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=36821
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=39257
Tested-by: Keith Packard <keithp@keithp.com> (bzflag and etracer fixed)
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 3e5d36267d)
2011-07-20 12:50:06 -07:00
Eric Anholt
680c468f61 i965: Enable the PIPE_CONTROL workaround workaround out of paranoia.
There's scary stuff going on in PIPE_CONTROL internals, and if the
BSpec says to do this to make PIPE_CONTROL work, I'll go ahead and do
it because we'll probably never be able to debug it after the fact.

v2: Use stall at scoreboard instead of depth stall, as noted by Ken.
(cherry picked from commit 407785d0e9)
2011-07-20 12:50:06 -07:00
Eric Anholt
7ee4cb453b i965: Avoid kernel BUG_ON if we happen to wait on the pipe_control w/a BO.
For this and occlusion queries, we're trying to avoid setting
I915_GEM_DOMAIN_RENDER for the write domain, because the data written
is definitely not going through the render cache, but we do need to
tell the kernel that the object has been written.  However, with using
I915_GEM_DOMAIN_GTT, the kernel on retiring the batchbuffer sees that
the w/a BO has a write domain of GTT, and puts it on the flushing
list.  If something tries to wait for that BO to finish rendering
(such as the AUB dumper reading the contents of BOs), we get into
wait_request (since obj->active) but with a 0 seqno (since the object
is on the flushing list, not actually on a ringbuffer), and BUG_ONs.

To avoid the kernel bug (which I'm hoping to delete soon anyway), just
use I915_GEM_DOMAIN_INSTRUCTION like occlusion queries do.  This
doesn't result in more flushing, because we invalidate INSTRUCTION on
every batchbuffer now that we're state streaming, anyway.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Tested-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit dc7422405f)
2011-07-20 12:50:06 -07:00
Eric Anholt
a152ab6df1 i915: Simplify intel_wpos_* with a helper function.
(cherry picked from commit cb5e0ba2aa)
2011-07-20 12:50:06 -07:00
Eric Anholt
a99914509e i915: Include gl_FragCoord.w data, not just xyz.
Fixes piglit fragcoord_w test.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=34323
(cherry picked from commit fceda4342c)
2011-07-20 12:50:06 -07:00
Eric Anholt
891073fea3 i915: Fix incorrect depth scaling when enabling/disabling depth buffers.
We were updating our new viewport using the old buffers' _WindowMap.m.
We can do less math and avoid using that deprecated matrix by just
folding the viewport calculation right into the driver.

Fixes piglit fbo-depthtex.
(cherry picked from commit debf751aea)
2011-07-20 12:50:06 -07:00
Eric Anholt
dcaec739ea i915: Make stencil test for no-stencil handling match depth test.
i915_update_draw_buffers() already handles the fallback bit for
missing stencil region, so here we just need to handle whether the GL
thinks we have stencil data or not (and disable the test if so).
(cherry picked from commit 79fee3a76b)
2011-07-20 12:50:06 -07:00
Eric Anholt
ea241750f7 i915: Disable the depth test whenever we don't have a depth buffer.
We were disabling it once at the moment we changed draw buffers, but
later enabling of depth test could turn it back on.  Fixes
fbo-nodepth-test.

Note that ctx->DrawBuffer has to be checked because during context
create we get called while it's still unset.  However, we know we'll
get an intel_draw_buffer() after that, so it's safe to make a silly
choice at this point.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=30080
(cherry picked from commit fc4fba52cf)
2011-07-20 12:50:06 -07:00
Eric Anholt
9efd6dc677 i915: Remove i965 paths from i915_update_drawbuffer() and i830's too.
Reviewed-by: Chad Versace <chad@chad-versace.us>
(cherry picked from commit 4c47fce92e)
2011-07-20 12:50:06 -07:00
Eric Anholt
c6ddeeed7a i965: Remove i915 paths from brw_update_draw_buffers().
Reviewed-by: Chad Versace <chad@chad-versace.us>
(cherry picked from commit 94efc350b4)
2011-07-20 12:50:05 -07:00
Eric Anholt
108e807b7b i965: Remove unused region calculations in brw_update_draw_buffer().
Reviewed-by: Chad Versace <chad@chad-versace.us>
(cherry picked from commit c68270a26b)
2011-07-20 12:50:05 -07:00
Eric Anholt
421bca32fb i965: Remove empty brw_set_draw_region.
Reviewed-by: Chad Versace <chad@chad-versace.us>
(cherry picked from commit 15af0f54b8)
2011-07-20 12:50:05 -07:00
Eric Anholt
0f0ab15a46 i965: Remove FALLBACK() from brw_update_draw_region().
The 965 driver doesn't use these for deciding on fallbacks.

Reviewed-by: Chad Versace <chad@chad-versace.us>
(cherry picked from commit dd898c3e89)
2011-07-20 12:50:05 -07:00
Eric Anholt
b3c2438d4e intel: Move intel_draw_buffers() code into each driver.
The illusion of shared code here wasn't fooling anybody.  It was
tempting to keep i830 and i915 still shared, but I think I actually
want to make them diverge shortly.

Reviewed-by: Chad Versace <chad@chad-versace.us>
(cherry picked from commit f34ec6169d)
2011-07-20 12:50:05 -07:00
Brian Paul
25287008b6 glsl: silence warning in linker.cpp
(cherry picked from commit 4470ff2ebf)
2011-07-20 09:02:22 -07:00
Jørgen Lind
62b282e749 Make it possible to use gbm with c++
NOTE: This is a candidate for 7.11
(cherry picked from commit 496bf3822a)
2011-07-20 09:02:18 -07:00
Paul Berry
9aa8c02ae2 glsl: Rewrote _mesa_glsl_process_extension to use table-driven logic.
Instead of using a chain of manually maintained if/else blocks to
handle "#extension" directives, we now consult a table that specifies,
for each extension, the circumstances under which it is available, and
what flags in _mesa_glsl_parse_state need to be set in order to
activate it.

This makes it easier to add new GLSL extensions in the future, and
fixes the following bugs:

- Previously, _mesa_glsl_process_extension would sometimes set the
  "_enable" and "_warn" flags for an extension before checking whether
  the extension was supported by the driver; as a result, specifying
  "enable" behavior for an unsupported extension would sometimes cause
  front-end support for that extension to be switched on in spite of
  the fact that back-end support was not available, leading to strange
  failures, such as those in
  https://bugs.freedesktop.org/show_bug.cgi?id=38015.

- "#extension all: warn" and "#extension all: disable" had no effect.

Notes:

- All extensions are currently marked as unavailable in geometry
  shaders.  This should not have any adverse effects since geometry
  shaders aren't supported yet.  When we return to working on geometry
  shader support, we'll need to update the table for those extensions
  that are available in geometry shaders.

- Previous to this commit, if a shader mentioned
  ARB_shader_texture_lod, extension ARB_texture_rectangle would be
  automatically turned on in order to ensure that the types
  sampler2DRect and sampler2DRectShadow would be defined.  This was
  unnecessary, because (a) ARB_shader_texture_lod works perfectly well
  without those types provided that the builtin functions that
  reference them are not called, and (b) ARB_texture_rectangle is
  enabled by default in non-ES contexts anyway.  I eliminated this
  unnecessary behavior in order to make the behavior of all extensions
  consistent.

NOTE: This is a candidate for the 7.10 and 7.11 branches.

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 3097715d41)
2011-07-20 08:15:32 -07:00
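A hedged sketch of what such a table-driven approach can look like; the structure, field names, and the single example entry are illustrative assumptions, not the actual Mesa parse-state layout.

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    struct parse_state { bool ARB_draw_buffers_enable, ARB_draw_buffers_warn; };

    struct ext_entry {
        const char *name;
        bool avail_in_gl;      /* available in desktop GLSL? */
        bool avail_in_es;      /* available in GLSL ES?      */
        size_t enable_offset;  /* offsetof() the _enable flag */
        size_t warn_offset;    /* offsetof() the _warn flag   */
    };

    static const struct ext_entry table[] = {
        { "GL_ARB_draw_buffers", true, false,
          offsetof(struct parse_state, ARB_draw_buffers_enable),
          offsetof(struct parse_state, ARB_draw_buffers_warn) },
    };

    /* Only touch the parse-state flags once availability has been checked,
     * so "enable" on an unsupported extension cannot switch it on. */
    static bool enable_extension(struct parse_state *st, const char *name, bool es)
    {
        for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
            if (strcmp(table[i].name, name) != 0)
                continue;
            if (es ? !table[i].avail_in_es : !table[i].avail_in_gl)
                return false;
            *(bool *)((char *)st + table[i].enable_offset) = true;
            return true;
        }
        return false;
    }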
Paul Berry
b0ecde7f39 glsl: Changed extension enable bits to bools.
These were previously 1-bit-wide bitfields.  Changing them to bools
has a negligible performance impact, and allows them to be accessed by
offset as well as by direct structure access.

NOTE: This is a candidate for the 7.10 and 7.11 branches.

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 9c4445de6e)
2011-07-20 08:15:21 -07:00
Marek Olšák
fe70a40e47 prog_optimize: fix a warning that a variable may be uninitialized
(cherry picked from commit dade65505b)
Signed-off-by: Brian Paul <brianp@vmware.com>
2011-07-20 08:01:47 -06:00
Ian Romanick
73b68316f4 mesa: Bump version to 7.11-rc2 2011-07-19 16:39:57 -07:00
Brian Paul
7ba7531929 mesa: remove depend files from tarballs
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
2011-07-19 16:39:57 -07:00
Kenneth Graunke
2826e3a000 glsl: Correctly handle function matching when there are multiple inexact matches
This is a squash cherry pick commit of:

    glsl: Find the "closest" signature when there are multiple matches.

    Previously, ir_function::matching_signature had a fatal bug: if a
    function had more than one non-exact match, it would simply return NULL.

    This occurred, for example, when looking for max(uvec3, uvec3):
    - max(vec3, vec3)   -> score 1 (found first)
    - max(ivec3, ivec3) -> score 1 (found second...used to return NULL here)
    - max(uvec3, uvec3) -> score 0 (exact match...the right answer)

    This did not occur for max(ivec3, ivec3) since the second match found
    was an exact match.

    The new behavior is to return a match with the lowest score.  If there
    is an exact match, that will be returned.  Otherwise, a match with the
    least number of implicit conversions is chosen.

    Fixes piglit tests max-uvec3.vert and glsl-inexact-overloads.shader_test.

    NOTE: This is a candidate for the 7.10 and 7.11 branches.

    Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
    Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
    Reviewed-by: Eric Anholt <eric@anholt.net>
    (cherry picked from commit 60eb63a855)

    glsl: Suppress warning from matching_signature change.

    gcc isn't smart enough to see that we only look at matched_score after
    we've initialized it (because match != NULL happens at the same time)
    (cherry picked from commit b043409adf)

    glsl: Reject ambiguous function calls (multiple inexact matches).

    According to the GLSL 1.20 specification, "it is a semantic error if
    there are multiple ways to apply [implicit] conversions [...] such that
    the call can be made to match multiple signatures."

    Fixes a regression caused by 60eb63a855,
    which implemented the wrong policy of finding a "closest" match.
    However, this is not a revert, since the original code failed to
    continue looking for an exact match once it found two inexact matches.

    It's OK to have multiple inexact matches if there's also an exact match.

    NOTE: This is a candidate for the 7.10 and 7.11 branches.

    Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=38971
    Reviewed-by: Eric Anholt <eric@anholt.net>
    Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
    (cherry picked from commit 7304909d65)
2011-07-19 16:39:56 -07:00
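The matching policy described above, sketched as standalone C (types and
names are illustrative; the real code lives in the C++ ir_function class):
an exact match always wins, a single inexact match is accepted, and several
inexact matches with no exact one are rejected as ambiguous:

    #include <stdbool.h>
    #include <stddef.h>

    struct signature_example {
       const char *name;
       int score;   /* 0 = exact, >0 = needs implicit conversions, <0 = no match */
    };

    static const struct signature_example *
    find_match(const struct signature_example *sigs, int count,
               bool *out_ambiguous)
    {
       const struct signature_example *inexact = NULL;
       int inexact_count = 0;

       *out_ambiguous = false;

       for (int i = 0; i < count; i++) {
          if (sigs[i].score == 0)
             return &sigs[i];        /* an exact match always wins */
          if (sigs[i].score > 0) {
             inexact = &sigs[i];
             inexact_count++;
          }
       }

       if (inexact_count > 1) {      /* semantic error per GLSL 1.20 */
          *out_ambiguous = true;
          return NULL;
       }
       return inexact;               /* one inexact match, or NULL */
    }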
Paul Berry
f80ae99cbd glsl: Ensure that sampler declarations are always uniform or "in" parameters.
This brings us into compliance with page 17 (page 22 of the PDF) of
the GLSL 1.20 spec:

    "[Sampler types] can only be declared as function parameters or
    uniform variables (see Section 4.3.5 "Uniform"). ... [Samplers]
    cannot be used as out or inout function parameters."

The spec isn't explicit about whether this rule applies to
structs/arrays containing samplers, but the intent seems to be to
ensure that it can always be determined at compile time which sampler
is being used in each texture lookup.  So to avoid creating a
loophole, the rule needs to apply to structs/arrays containing
samplers as well.

Fixes piglit tests spec/glsl-1.10/compiler/samplers/*.frag, and fixes
bug 38987.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=38987
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit f07221056e)
2011-07-19 16:39:56 -07:00
Paul Berry
ae11fb02dd glsl: Move type_contains_sampler() into glsl_type for later reuse.
The new location, as a member function of glsl_type, is more
consistent with queries like is_sampler(), is_boolean(), is_float(),
etc.  Placing the function inside glsl_type also makes it available to
any code that uses glsl_types.
(cherry picked from commit ddc1c96390)
2011-07-19 16:39:56 -07:00
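A toy version of the kind of recursive query such a helper performs,
written as self-contained C rather than the real glsl_type member function:

    #include <stdbool.h>

    enum base_type_example { TYPE_FLOAT, TYPE_SAMPLER, TYPE_ARRAY, TYPE_STRUCT };

    struct type_example {
       enum base_type_example base;
       const struct type_example *element;         /* for arrays */
       const struct type_example *const *fields;   /* for structs */
       int num_fields;
    };

    /* A struct or array "contains" a sampler if any field/element does. */
    static bool
    contains_sampler(const struct type_example *t)
    {
       switch (t->base) {
       case TYPE_SAMPLER:
          return true;
       case TYPE_ARRAY:
          return contains_sampler(t->element);
       case TYPE_STRUCT:
          for (int i = 0; i < t->num_fields; i++) {
             if (contains_sampler(t->fields[i]))
                return true;
          }
          return false;
       default:
          return false;
       }
    }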
Ian Romanick
a90b88f354 linker: Only over-ride built-ins when a prototype has been seen
The GLSL spec says:

    "If a built-in function is redeclared in a shader (i.e., a
    prototype is visible) before a call to it, then the linker will
    only attempt to resolve that call within the set of shaders that
    are linked with it."

This patch enforces this behavior.  When a function call is processed
a flag is set in the ir_call to indicate whether the previously seen
prototype is the built-in or not.  At link time a call will only bind
to an instance of a function that matches the "want built-in" setting
in the ir_call.

This has the odd side effect that the first call to abs() in the
shader below will call the built-in and the second will not:

float foo(float x) { return abs(x); }
float abs(float x) { return -x; }
float bar(float x) { return abs(x); }

This seems insane, but it matches what the spec says.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=31744
(cherry picked from commit 66f4ac988d)
2011-07-19 16:39:56 -07:00
Ian Romanick
0e699cc0e8 configure.ac: Make --{without,with}-gallium-drivers work as expected
This version is mostly Dan's post to the mesa-dev mailing list on
6/22/2011.

NOTE: This is a candidate for the 7.10 and 7.11 branches.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Dan Nicholson <dbn.lists@gmail.com>
(cherry picked from commit db311b45be)
2011-07-19 16:39:56 -07:00
Kenneth Graunke
e6e7c456de i965/gen7: Add support for gl_PointCoord.
This is exactly analogous to Eric's Gen6 change in commit
6861a70177.  His explanation:

"This is just like PointSprite overrides, but it's always on for that
 attribute."

Fixes glsl-fs-pointcoord and gtf/point_sprites.

Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>

(cherry-picked from commit 186e37c754)
2011-07-19 16:39:56 -07:00
Kenneth Graunke
31f4ab790f i965/gen7: Fix point sprite texture coordinate overrides.
This is exactly analogous to Eric's Gen6 change in commit
f304bb8a5d.  His explanation:

"We were assuming that the input attribute n to the FS was
 FRAG_ATTRIB_TEXn, which happened to be true often enough for our
 testcases."

Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>

(cherry-picked from commit 147d010295)
2011-07-19 16:39:56 -07:00
Kenneth Graunke
9f978104d8 i965/gen7: Refactor SF setup a bit to handle overrides in one place.
This is exactly analogous to Eric's Gen6 change in commit
e7280b16d6.

Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>

(cherry-picked from commit 5edb3ddf41)
2011-07-19 16:39:55 -07:00
Kenneth Graunke
9d78935100 i965/gen7: Remove gratuitous dirty flags from WM and PS state.
Commit b46dc45cee claimed that
NEW_POLYGONSTIPPLE is gratuitous, but somehow just changed comments
and whitespace instead of actually removing the flag.

While we're at it, 3DSTATE_PS doesn't appear to need NEW_LINE or
NEW_POLYGON either (those are in 3DSTATE_WM).  Also, 3DSTATE_WM
doesn't appear to need BRW_NEW_NR_WM_SURFACES or BRW_NEW_CURBE_OFFSETS
either (those are in 3DSTATE_PS).

Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>

(cherry-picked from commit 57b57f6d1c)
2011-07-19 16:39:55 -07:00
Henri Verbeet
d469ebaa0a glx: Avoid calling __glXInitialize() in driReleaseDrawables().
This fixes a regression introduced by commit
a26121f375 (fd.o bug #39219).

Since the __glXInitialize() call should be unnecessary anyway, this is
probably a nicer fix for the original problem too.

NOTE: This is a candidate for the 7.10 and 7.11 branches.

Signed-off-by: Henri Verbeet <hverbeet@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Tested-by: padfoot@exemail.com.au
(cherry picked from commit 0f20e2e18f)
2011-07-19 23:29:16 +02:00
Chad Versace
f5fa4606ea intel: Fix stencil buffer to be W tiled
Until now, the stencil buffer was allocated as a Y tiled buffer, because
in several locations the PRM states that it is. However, it is actually
W tiled. From the PRM, 2011 Sandy Bridge, Volume 1, Part 2, Section
4.5.2.1 W-Major Format:
    W-Major Tile Format is used for separate stencil.

The GTT is incapable of W fencing, so we allocate the stencil buffer with
I915_TILING_NONE and decode the tile's layout in software.

This fix touches the following portions of code:
    - In intel_allocate_renderbuffer_storage(), allocate the stencil
      buffer with I915_TILING_NONE.
    - In intel_verify_dri2_has_hiz(), verify that the stencil buffer is
      not tiled.
    - In the stencil buffer's span functions, the tile's layout must be
      decoded in software.

This commit mutually depends on the xf86-video-intel commit
    dri: Do not tile stencil buffer
    Author: Chad Versace <chad@chad-versace.us>
    Date:   Mon Jul 18 00:38:00 2011 -0700

On Gen6 with separate stencil enabled, fixes the following Piglit tests:
    bugs/fdo23670-drawpix_stencil
    general/stencil-drawpixels
    spec/EXT_framebuffer_object/fbo-stencil-GL_STENCIL_INDEX16-copypixels
    spec/EXT_framebuffer_object/fbo-stencil-GL_STENCIL_INDEX16-drawpixels
    spec/EXT_framebuffer_object/fbo-stencil-GL_STENCIL_INDEX16-readpixels
    spec/EXT_framebuffer_object/fbo-stencil-GL_STENCIL_INDEX1-copypixels
    spec/EXT_framebuffer_object/fbo-stencil-GL_STENCIL_INDEX1-drawpixels
    spec/EXT_framebuffer_object/fbo-stencil-GL_STENCIL_INDEX1-readpixels
    spec/EXT_framebuffer_object/fbo-stencil-GL_STENCIL_INDEX4-copypixels
    spec/EXT_framebuffer_object/fbo-stencil-GL_STENCIL_INDEX4-drawpixels
    spec/EXT_framebuffer_object/fbo-stencil-GL_STENCIL_INDEX4-readpixels
    spec/EXT_framebuffer_object/fbo-stencil-GL_STENCIL_INDEX8-copypixels
    spec/EXT_framebuffer_object/fbo-stencil-GL_STENCIL_INDEX8-drawpixels
    spec/EXT_framebuffer_object/fbo-stencil-GL_STENCIL_INDEX8-readpixels
    spec/EXT_packed_depth_stencil/fbo-stencil-GL_DEPTH24_STENCIL8-copypixels
    spec/EXT_packed_depth_stencil/fbo-stencil-GL_DEPTH24_STENCIL8-readpixels
    spec/EXT_packed_depth_stencil/readpixels-24_8

Note: This is a candidate for the 7.11 branch.

Signed-off-by: Chad Versace <chad@chad-versace.us>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit f7dbcba280)
2011-07-19 13:15:00 -07:00
Vadim Girlin
89ed95ad3d r600g: fix corner case checks for the queries 2011-07-18 09:00:42 -04:00
Christoph Bumiller
3b605cb0d6 nv50,nvc0: add correct storage type for Z32_FLOAT 2011-07-18 13:48:19 +02:00
Christoph Bumiller
9bfb79923f nv50,nvc0: don't advertise unaligned texture format support
Because we don't support them.
For instance, R32G32B32 is not R32G32B32X32 as was assumed.

Add support for R8G8B8X8_UNORM instead of R8G8B8_UNORM surfaces.
2011-07-18 13:45:52 +02:00
Vadim Girlin
7d8a04643f r600g: fix queries and predication
Use all zpass data for predication instead of the last block only.
Use the query buffer as a ring instead of reusing the same area
for each new BeginQuery.  All query buffer offsets are in bytes
to simplify the offset math.
2011-07-15 15:43:48 -04:00
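A simplified sketch of the ring allocation described above (names and
layout are illustrative, not the actual r600g code): each BeginQuery claims
the next chunk of the buffer, in bytes, and wraps back to the start when
the end is reached:

    struct query_ring_example {
       unsigned buffer_size;   /* total query buffer size, in bytes */
       unsigned next_offset;   /* byte offset handed to the next BeginQuery */
    };

    static unsigned
    query_ring_alloc(struct query_ring_example *ring, unsigned results_size)
    {
       /* Wrap to the start once the remaining space is too small. */
       if (ring->next_offset + results_size > ring->buffer_size)
          ring->next_offset = 0;

       unsigned offset = ring->next_offset;
       ring->next_offset += results_size;
       return offset;
    }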
Alex Deucher
3065bae508 r600c/g: add new NI pci ids
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2011-07-15 10:56:31 -04:00
Chia-I Wu
443ff6024d targets/egl-static: fix a linking error
rbug is always linked in and it needs libpthread.
(cherry picked from commit 5fe5d236c2)
2011-07-14 11:56:39 +08:00
Eric Anholt
a20a950829 i915: Add support for gl_FragData[0] for output color.
We advertised ARB_draw_buffers, but either fell back to software when
using this output or hit an assertion failure.  Fixes glsl-fs-fragdata-1, and
failures in some webgl conformance tests.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=39024
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=34906
(cherry picked from commit 556a47a262)
2011-07-13 13:14:09 -07:00
Eric Anholt
9279c1e556 i915: Fix NPOT compressed textures on 915.
We were getting the rounding wrong, misplacing the non-base levels.  Fixes:
3DFX_texture_compression_FXT1/fbo-generate-mipmaps
ARB_texture_compression/fbo-generate-mipmaps
EXT_texture_compression_s3tc/fbo-generate-mipmaps

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit f2fd0d6304)
2011-07-13 13:14:09 -07:00
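The rounding involved, sketched generically; the block width is a parameter
because S3TC uses 4-texel-wide blocks while FXT1 uses 8-texel-wide ones.
Each mip level's width has to be converted to whole compression blocks,
rounded up, before its placement is computed:

    static unsigned
    minify(unsigned size, unsigned level)
    {
       unsigned s = size >> level;
       return s ? s : 1;
    }

    /* Width of a mip level measured in whole compression blocks. */
    static unsigned
    width_in_blocks(unsigned base_width, unsigned level, unsigned block_width)
    {
       return (minify(base_width, level) + block_width - 1) / block_width;
    }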
Eric Anholt
b8f722a82e i915: Fix map/unmap mismatches from leaving INTEL_FALLBACK during TNL.
The first rendering after context create didn't know of the color
buffer yet, triggering a sw fallback.  The intel_prepare_render() from
intelSpanRenderStart then found the buffer and turned off fallbacks,
but intelSpanRenderFinish was never called and things were left
mapped.  By checking buffers before making the call on whether to do
the fallback pipeline or not, we avoid the fallback change inside of
the rendering pipeline.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=31561
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 6e6b388604)
2011-07-13 13:14:09 -07:00
Eric Anholt
9304645a07 i965: Fix fp-dst-aliasing-[12].vpfp.
There's no pretty way to avoid the overwriting of the src operands, so
just use a temporary destination and rely on the MOV optimization.

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit 46a7639174)
2011-07-13 13:14:09 -07:00
Eric Anholt
c3b3719096 i965: Fix fp-lit-src-equals-dst.
We were stomping over the source for the body of the LIT instruction
when doing the MOV of 1.0 to the uninteresting channels.

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
(cherry picked from commit e3ea5bc08e)
2011-07-13 13:14:09 -07:00
Eric Anholt
55a75856fb intel: Remove gratuitous context checks in intel_delete_renderbuffer().
Even if we don't have a current context, if we're freeing the rb we
should free its region (and BO).  The renderbuffer unreference checks
appear to be just cargo-cult from the region unreference code.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=30217
Reviewed-by: Chad Versace <chad@chad-versace.us>
(cherry picked from commit 007c2d6cd2)
2011-07-13 13:14:09 -07:00
Eric Anholt
e3e99be131 intel: Allow intel_region_reference() with *dst != NULL.
This should help us avoid leaking regions in region reference code by
making the API more predictable.

Reviewed-by: Chad Versace <chad@chad-versace.us>
(cherry picked from commit 036b74a7f8)
(cherry picked from commit d8f65c07e9)
2011-07-13 13:12:55 -07:00
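A hedged sketch of what such a reference helper typically looks like once
*dst != NULL is allowed (illustrative names, not the actual intel_region
code): the old reference is dropped first, so callers cannot leak it:

    struct region_example {
       int refcount;
    };

    static void
    region_release(struct region_example **ptr)
    {
       if (*ptr && --(*ptr)->refcount == 0) {
          /* free the region and its buffer object in the real code */
       }
       *ptr = NULL;
    }

    static void
    region_reference(struct region_example **dst, struct region_example *src)
    {
       if (*dst == src)
          return;

       region_release(dst);   /* previously callers had to guarantee *dst == NULL */

       if (src) {
          src->refcount++;
          *dst = src;
       }
    }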
Eric Anholt
5a7d1c9710 glsl: Fix make clean for dricore.
(cherry picked from commit abbbd14dd4)
2011-07-13 13:08:23 -07:00
Eric Anholt
6ac5554298 i965: Reissue PIPELINE_POINTERS and BINDING_TABLE_POINTERS on SBA change.
This was a requirement we didn't run into until we started using
STATE_BASE_ADDRESS for instruction data.
(cherry picked from commit a09c5c2e30)
2011-07-13 13:08:23 -07:00
Eric Anholt
9eace71048 i965/gen6: Fix scissors using invalid STATE_BASE_ADDRESS.
The scissor state was incorrectly in a .prepare function instead of
.emit, so the packet would end up in the batch before the
STATE_BASE_ADDRESS.  It appears that this doesn't actually hurt, as
the scissor address gets dereferenced according to the current SBA at
draw time.
(cherry picked from commit cd7bfd5d44)
2011-07-13 13:08:23 -07:00
Stéphane Marchesin
8d9202c162 i915g: don't try to check if a NULL buffer is busy. 2011-07-13 12:08:10 -07:00
Alex Deucher
ef9f16f632 r600g: emit SQ_LDS_RESOURCE_MGMT
Needs to be initialized to a reasonable value, as
compute code may change it.

Fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=39119

NOTE: This is a candidate for the 7.11 branch.

Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
2011-07-12 12:04:57 -04:00
Brian Paul
d739434af8 glx: add a few missing glXChooseFBConfig() attributes
Fixes https://bugs.freedesktop.org/show_bug.cgi?id=38842

NOTE: This is a candidate for the 7.11 branch.
(cherry picked from commit d60880db35)
2011-07-12 09:46:50 -06:00
Brian Paul
72f2bd2a41 glext.h: update to version 71
(cherry picked from commit bb0d5cae00)
2011-07-12 09:46:42 -06:00
Benjamin Franzke
b0549fab5c configure: Require libudev for drm & wayland egl platforms
NOTE: This is a candidate for the 7.11 branch.
(cherry picked from commit 7ed1826e2e)
2011-07-12 09:58:11 +02:00
Benjamin Franzke
ac88916978 configure: Fix typo in gbm check for egl drm platform
NOTE: This is a candidate for the 7.11 branch.
(cherry picked from commit 9b8cd49930)
2011-07-12 09:58:11 +02:00
Benjamin Franzke
336b2c7fbd configure: Enable st/gbm if st/egl has drm platform
NOTE: This is a candidate for the 7.11 branch.
(cherry picked from commit b18b2994ef)
2011-07-12 09:58:11 +02:00
Benjamin Franzke
a8907c6005 egl_dri2: Fix compilation if udev devel files are not installed
NOTE: This is a candidate for the 7.11 branch.
(cherry picked from commit b2d6375e6a)
2011-07-12 09:58:11 +02:00
Benjamin Franzke
89af428aea egl: Fix Terminate with shared gbm screens
NOTE: This is a candidate for the 7.11 branch.
(cherry picked from commit 992680c8b4)
2011-07-12 09:58:11 +02:00
Marek Olšák
8a77029f4c swrast: fix depth/stencil blits when there's no colorbuffer
NOTE: This is a candidate for the 7.10 and 7.11 branches.
(cherry picked from commit d1214cca08)
2011-07-11 23:03:34 +02:00
Marek Olšák
928bf189ff mesa: return early if mask is cleared to zero in BlitFramebuffer
From ARB_framebuffer_object:
    If a buffer is specified in <mask> and does not exist in both the
    read and draw framebuffers, the corresponding bit is silently
    ignored.
(cherry picked from commit 83478e5d59)
2011-07-11 23:03:25 +02:00
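In outline (illustrative, not the exact Mesa code), the behavior required
by the spec text quoted above amounts to:

    static void
    blit_framebuffer_example(unsigned mask,
                             unsigned read_fb_buffers, unsigned draw_fb_buffers)
    {
       /* Bits not present in both framebuffers are silently ignored. */
       mask &= read_fb_buffers & draw_fb_buffers;

       if (mask == 0)
          return;   /* nothing left to blit */

       /* ... blit the remaining color/depth/stencil buffers ... */
    }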
Vadim Girlin
a9e34ada26 r600g: LIT: clamp negative src.y to 0
Fixes https://bugs.freedesktop.org/show_bug.cgi?id=39083

Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
2011-07-10 13:30:20 -04:00
Eric Anholt
804995807d i965/gen4: Fix GPU hangs since the program streaming change.
This was tricky.  We were doing a use-before-initialize of
grf_reg_count, but the value usually got overwritten anyway -- when we
didn't have to do a relocation (typical), or on gen5 when we didn't
have relocations at all.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=38771
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit d03fdc4cde)
2011-07-09 07:52:35 -07:00
Ian Romanick
b033f050fd mesa: Fix the parsers build rule so that 'make tarballs' can work
You'd think that with all the commit messages about adding stuff to
tarballs or fixing 'make tarballs' that someone would have noticed
that it was completely broken for 4 months (3158cc7).
2011-07-08 18:47:21 -07:00
Ian Romanick
c66982f7dc mesa: Bump version to 7.11-rc1 2011-07-08 18:26:39 -07:00
Ian Romanick
530c68d616 glsl: Fix depth unbalancing problem in if-statement flattening
Previously, if max_depth were 1, the following code would see the
first if-statement (correctly) not get flattened, but the second
if-statement would (incorrectly) get flattened:

void main()
{
    if (a)
        gl_Position = vec4(0);

    if (b)
        gl_Position = vec4(1);
}

This is because the visit_leave(ir_if*) method would not decrement the
depth before returning on the first if-statement.

NOTE: This is a candidate for the 7.10 and 7.11 branches.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit d2c6cef18a)
2011-07-08 16:02:20 -07:00
Vadim Girlin
576f489dad r600g: introduce r600_bc_src_toggle_neg helper and fix SUB & LRP
SUB & LRP instructions should toggle NEG bit instead of setting it,
otherwise e.g. "SUB a,b,-1" is translated as "ADD a,b,-1"

Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
2011-07-08 17:23:54 -04:00
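A small sketch of the difference (the field name is illustrative, not the
actual r600g bytecode struct): toggling preserves a source operand that is
already negated, while unconditionally setting the bit would not:

    struct alu_src_example {
       unsigned neg;   /* 1 = negate this source operand */
    };

    static void
    expand_sub_example(struct alu_src_example *src1)
    {
       /* SUB a,b,c  ==>  ADD a,b,-c: negate c relative to what it already
        * is, so "SUB a,b,-1" still means a = b + 1. */
       src1->neg ^= 1;
    }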
Vadim Girlin
270de51f1d r600g: introduce r600_bc_src_set_abs helper and fix LOG
LOG instruction should use absolute values of source operand.

Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
2011-07-08 17:23:34 -04:00
Vadim Girlin
57fe695a17 r600g: RSQ: clear NEG for operand
Need to clear NEG bit because it applies after ABS, e.g. "RSQ ..., -1"
uses -|1| as operand.

Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
2011-07-08 17:23:21 -04:00
Vadim Girlin
189303fb30 r600g: LIT: swap MUL_LIT operands to fix 0^0
For the 0^0 case, the result of "LOG_CLAMPED ...,0" is -MAX_FLOAT, and then the
result of "MUL_LIT ...,0,-MAX_FLOAT,..." is -MAX_FLOAT instead of 0 because of
the special src1 checks for -MAX_FLOAT. So swap src0/1 to
"MUL_LIT ...,-MAX_FLOAT,0,..." to get the expected 0; the result of
"EXP_IEEE ...,0" is then 1, as expected for LIT.

Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
2011-07-08 17:23:07 -04:00
Brian Paul
c4da12e74f glsl: use casts to silence warning
(cherry picked from commit 7eb7d67d50)
2011-07-08 08:03:56 -06:00
Brian Paul
8428b48673 gallivm: Fix build with llvm-3.0
LLVM 3.0svn changes pretty rapidly. The change in
Target->createMCInstPrinter() signature which inspired commits
40ae214067 and
92e29dc5b0 has been reverted.

Signed-off-by: Gustaw Smolarczyk <wielkiegie@gmail.com>
Signed-off-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit fc98444bd5)

Conflicts:

	src/gallium/auxiliary/gallivm/lp_bld_debug.cpp
2011-07-08 08:03:40 -06:00
Marek Olšák
b0a4f34ea8 st/mesa: handle float formats in st_format_datatype
NOTE: This is a candidate for the 7.11 branch.

Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 7de28e80dc)
2011-07-08 13:24:50 +02:00
Marek Olšák
efd0ffd1b0 st/mesa: use the first non-VOID channel in st_format_datatype
Otherwise PIPE_FORMAT_X8B8G8R8_UNORM and friends would fail.

NOTE: This is a candidate for the 7.10 and 7.11 branches.

Reviewed-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 292148dc4b)
2011-07-08 13:24:42 +02:00
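A sketch of the lookup, assuming Gallium's util_format_description()
interface from this era; for PIPE_FORMAT_X8B8G8R8_UNORM channel 0 is VOID,
so the scan has to continue to the first channel that actually holds data:

    #include "util/u_format.h"

    static int
    first_non_void_channel(enum pipe_format format)
    {
       const struct util_format_description *desc = util_format_description(format);

       for (int i = 0; i < 4; i++) {
          if (desc->channel[i].type != UTIL_FORMAT_TYPE_VOID)
             return i;
       }
       return -1;   /* no data channels at all */
    }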
Stéphane Marchesin
dc062db95d i915g: Improve flushing using heuristics. 2011-07-08 00:28:29 -07:00
Stéphane Marchesin
b292ef8f88 i915g: Move back to the old method for target format fixup.
Conflicts:

	src/gallium/drivers/i915/i915_state_emit.c
2011-07-08 00:27:59 -07:00
Eric Anholt
d3bfa9bb4a intel: Fix use of freed buffer if glBitmap is called after a swap.
Regions looked up from the framebuffer are invalid after
intel_prepare_render().

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=30266
Tested-by: Thomas Jones <thomas.jones@utoronto.ca>
(cherry picked from commit 066bee64e1)
2011-07-07 15:05:24 -07:00
Paul Berry
98af042079 glsl: permit explicit locations on fragment shader outputs, not inputs
From the OpenGL docs for GL_ARB_explicit_attrib_location:

    This extension provides a method to pre-assign attribute locations to
    named vertex shader inputs and color numbers to named fragment shader
    outputs.

This was accidentally implemented for fragment shader inputs.  This
patch fixes it to apply to fragment shader outputs.

Fixes piglit tests
spec/ARB_explicit_attrib_location/1.{10,20}/compiler/layout-{01,03,06,07,08,09,10}.frag

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>

NOTE: This is a candidate for the 7.10 and 7.11 branches.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=38624
(cherry picked from commit b078aad8ab)
2011-07-07 14:23:08 -07:00
Ian Romanick
127bd9d5b6 linker: Assign locations for fragment shader output
Fixes an assertion failure in the piglit out-01.frag
ARB_explicit_attrib_location test.  The locations set via the layout
qualifier in fragment shader were not being applied to the shader
outputs.  As a result all of these variables still had a location of
-1 set.

This may need some more work for pre-3.0 contexts.  The problem is
dealing with generic outputs that lack a layout qualifier.  There is
no way for the application to specify a location
(glBindFragDataLocation is not supported) or query the location
assigned by the linker (glGetFragDataLocation is not supported).

NOTE: This is a candidate for the 7.10 and 7.11 branches.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=38624
Reviewed-by: Eric Anholt <eric@anholt.net>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Cc: Vinson Lee <vlee@vmware.com>
(cherry picked from commit d32d4f780f)
2011-07-06 17:06:29 -07:00
Ian Romanick
b8972db223 glsl: Don't choke when printing an anonymous function parameter
NOTE: This is a candidate for the 7.10 and 7.11 branches.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=38584
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit 174cef7fee)
2011-07-06 17:06:19 -07:00
Ian Romanick
f28cf18609 ir_to_mesa: Allocate temporary instructions on the visitor's ralloc context
And don't delete them.  Let ralloc clean them up.  Deleting the
temporary IR leaves dangling references in the prog_instruction.  That
results in a bad dereference when printing the IR with MESA_GLSL=dump.

NOTE: This is a candidate for the 7.10 and 7.11 branches.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=38584
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry picked from commit dbda466fc0)
2011-07-06 17:05:50 -07:00
Ian Romanick
42cd6192a2 glsl: Track initial mask in constant propagation live set
The set of values initially available (before any kills) must be
tracked with each constant in the set.  Otherwise the wrong component
can be selected after earlier components have been killed.

NOTE: This is a candidate for the 7.10 and 7.11 branches.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=37383
Reviewed-by: Eric Anholt <eric@anholt.net>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Cc: Matthias Bentrup <matthias.bentrup@googlemail.com>
(cherry picked from commit 0eb9797958)
2011-07-06 17:05:36 -07:00
Vadim Girlin
1ae00c5960 r600g: fix buffer overflow check in r600_query_begin 2011-07-05 16:16:54 -04:00
Vadim Girlin
65d0d69c91 r600g: fix bo map usage flags in r600_query_begin 2011-07-05 16:16:40 -04:00
Vadim Girlin
433afb7352 r600g: reduce flushes for queries 2011-07-05 16:16:23 -04:00
Vadim Girlin
f70c2f8521 r600g: fix buffer offset in r600_query_begin 2011-07-05 16:16:05 -04:00
Chia-I Wu
e4cef07b87 egl: add copyright notices
The list of copyright holders could be incomplete.  Please update
directly or notify me if your name is missing.
(cherry picked from commit f2001df508)
2011-07-02 18:25:28 +09:00
Vadim Girlin
2196feb47f r600g: fix check for empty cs 2011-06-30 16:40:45 -04:00
Chia-I Wu
b90c710c6c target/egl-static: fix a compiler warning
(cherry picked from commit 3e3df5fcd1)
2011-06-30 15:01:17 +09:00
Chia-I Wu
bd1ceb5c5b targets/egl-static: fix library search order
Use

  $(MKLIB) -ldflags '-L$(TOP)/$(LIB_DIR)'

instead of

  $(MKLIB) -L$(TOP)/$(LIB_DIR)

to make sure the local library path appears before system's.
(cherry picked from commit 24137afb31)
2011-06-30 15:01:17 +09:00
Chia-I Wu
567778e49a targets/gbm: attempt to fix unresolved symbols
Move system libraries (usually .so) out of --start-group / --end-group
pair.  Add possibly missing archives, defines, and shared libraries.
(cherry picked from commit 56ec8e17d3)
2011-06-30 15:01:17 +09:00
Chia-I Wu
0fafcc6919 targets/egl-static: do not use DRI_LIB_DEPS
It brings in libraries that are not necessarily needed.
(cherry picked from commit 1e9f0b1736)
2011-06-30 15:01:17 +09:00
Chia-I Wu
29574af377 egl: fix EGL_MATCH_NATIVE_PIXMAP
EGL_MATCH_NATIVE_PIXMAP is valid for eglChooseConfig, but invalid for
eglGetConfigAttrib.
(cherry picked from commit 8ea5330200)
2011-06-30 15:01:17 +09:00
Chia-I Wu
0eb780262c st/egl: update fbdev backend
Considering fbdev as an in-kernel window system,

 - opening a device opens a connection
 - there is only one window: the framebuffer
 - fb_var_screeninfo decides window position, size, and even color format
 - there is no pixmap

Now EGL is built on top of this window system.  So we should have

 - the fd as the handle of the native display
 - reject all but one native window: NULL
 - no pixmap support

modeset support is still around, but it should be removed soon.
(cherry picked from commit aa281dd392)
2011-06-30 15:01:16 +09:00
Chia-I Wu
e43a096f0c st/d3d1x: fix for st/egl native.h interface change
The interface was changed in 73df31eedd.
(cherry picked from commit 3a07d9594a)
2011-06-30 15:01:16 +09:00
Chia-I Wu
52aa06a2cd st/egl: fix a compile error
It is triggered when --with-driver=xlib is specified.
(cherry picked from commit ed47d65c7c)
2011-06-30 15:01:16 +09:00
Chia-I Wu
5d1561b4ab st/egl: reorganize backend initialization
Remove set_event_handler() and pass the event handler with
native_get_XXX_platform().  Add init_screen() so that the pipe screen is
created later.  This way we don't need to pass user_data to
create_display().
(cherry picked from commit 73df31eedd)
2011-06-30 15:01:16 +09:00
Kenneth Graunke
a8d7f36d65 i965/gen7: Add missing ! to brw->gs.prog_active assertion.
A typo in commit c173541d97 accidentally removed the !.
It's supposed to assert that there is _not_ an active GS program.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=38762

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
(cherry-picked from commit 5ddc518401)
2011-06-29 11:00:16 -07:00
Emil Velikov
82ebfa6387 st/mesa: Use correct internal target
Commit 1a339b6c (st/mesa: prefer native texture formats when possible)
introduced two new arguments to the st_choose_format() function.
This patch fixes the argument order and passes the correct
internal_target rather than GL_NONE.

NOTE: This is a candidate for the 7.11 branch
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Signed-off-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 9b5c538726)
2011-06-29 07:20:08 -06:00
Andre Maasikas
ee416c6ffe st/mesa: fix overwriting gl_format with pipe_format since 9d380f48
Fixes an assert later on in texcompress2/r600g.

Signed-off-by: Brian Paul <brianp@vmware.com>
(cherry picked from commit 19789e403c)
2011-06-29 06:59:46 -06:00
Marek Olšák
ebc884d3dd r300g: drop support for ARGB, ABGR, XRGB, XBGR render targets
Blending and maybe even alpha-test don't work with those formats.

Only supporting RGBA, BGRA, RGBX, BGRX.

NOTE: This is a candidate for the 7.10 and 7.11 branches.
(cherry picked from commit bc517d64da)
2011-06-25 20:05:08 +02:00
Brian Paul
9383cfb4ba Revert "Fix 24bpp software rendering"
This reverts commit c0c0bb6cb1.
2011-06-25 06:20:32 -06:00
11842 changed files with 1249437 additions and 3642656 deletions


@@ -1,18 +0,0 @@
((nil . ((show-trailing-whitespace . t)))
 (prog-mode
  (indent-tabs-mode . nil)
  (tab-width . 8)
  (c-basic-offset . 3)
  (c-file-style . "stroustrup")
  (fill-column . 78)
  (eval . (progn
            (c-set-offset 'case-label '0)
            (c-set-offset 'innamespace '0)
            (c-set-offset 'inline-open '0)))
  (whitespace-style face indentation)
  (whitespace-line-column . 79)
  (eval ignore-errors
        (require 'whitespace)
        (whitespace-mode 1)))
 (makefile-mode (indent-tabs-mode . t))
)


@@ -1,44 +0,0 @@
# To use this config on your editor, follow the instructions at:
# http://editorconfig.org
root = true
[*]
charset = utf-8
insert_final_newline = true
tab_width = 8
[*.{c,h,cpp,hpp,cc,hh}]
indent_style = space
indent_size = 3
max_line_length = 78
[{Makefile*,*.mk}]
indent_style = tab
[*.py]
indent_style = space
indent_size = 4
[*.yml]
indent_style = space
indent_size = 2
[*.rst]
indent_style = space
indent_size = 3
[*.patch]
trim_trailing_whitespace = false
[{meson.build,meson_options.txt}]
indent_style = space
indent_size = 2
[*.ps1]
indent_style = space
indent_size = 2
[*.rs]
indent_style = space
indent_size = 4

10
.emacs-dirvars Normal file

@@ -0,0 +1,10 @@
;; -*- emacs-lisp -*-
;;
;; This file is processed by the dirvars emacs package. Each variable
;; setting below is performed when this dirvars file is loaded.
;;
indent-tabs-mode: nil
tab-width: 8
c-basic-offset: 3
kde-emacs-after-parent-string: ""
evaluate: (c-set-offset 'inline-open '0)

10
.gitattributes vendored

@@ -1,6 +1,4 @@
*.csv eol=crlf
* text=auto
*.jpg binary
*.png binary
*.gif binary
*.ico binary
*.dsp -crlf
*.dsw -crlf
*.sln -crlf
*.vcproj -crlf


@@ -1,58 +0,0 @@
name: macOS-CI
on: push
permissions:
  contents: read
jobs:
  macOS-CI:
    strategy:
      matrix:
        glx_option: ['dri', 'xlib']
    runs-on: macos-latest
    env:
      GALLIUM_DUMP_CPU: true
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Install Dependencies
        run: |
          cat > Brewfile <<EOL
          brew "bison"
          brew "expat"
          brew "gettext"
          brew "libx11"
          brew "libxcb"
          brew "libxdamage"
          brew "libxext"
          brew "meson"
          brew "pkg-config"
          brew "python@3.10"
          EOL
          brew update
          brew bundle --verbose
      - name: Install Mako
        run: pip3 install --user mako
      - name: Configure
        run: |
          cat > native_config <<EOL
          [binaries]
          llvm-config = '/usr/local/opt/llvm/bin/llvm-config'
          EOL
          meson . build --native-file=native_config -Dbuild-tests=true -Dosmesa=true -Dgallium-drivers=swrast -Dglx=${{ matrix.glx_option }}
      - name: Build
        run: meson compile -C build
      - name: Test
        run: meson test -C build --print-errorlogs
      - name: Install
        run: meson install -C build --destdir $PWD/install
      - name: 'Upload Artifact'
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: macos-${{ matrix.glx_option }}-result
          path: |
            build/meson-logs/
            install/
          retention-days: 5

28
.gitignore vendored

@@ -1,4 +1,28 @@
*.a
*.dll
*.exe
*.ilk
*.o
*.obj
*.os
*.pc
*.pdb
*.pyc
*.pyo
*.out
/build
*.so
*.sw[a-z]
*~
depend
depend.bak
lib
lib64
configure
autom4te.cache
aclocal.m4
config.log
config.status
cscope*
.scon*
config.py
build
.dir-locals.el


@@ -1,304 +0,0 @@
variables:
FDO_UPSTREAM_REPO: mesa/mesa
MESA_TEMPLATES_COMMIT: &ci-templates-commit d5aa3941aa03c2f716595116354fb81eb8012acb
CI_PRE_CLONE_SCRIPT: |-
set -o xtrace
wget -q -O download-git-cache.sh ${CI_PROJECT_URL}/-/raw/${CI_COMMIT_SHA}/.gitlab-ci/download-git-cache.sh
bash download-git-cache.sh
rm download-git-cache.sh
set +o xtrace
CI_JOB_JWT_FILE: /minio_jwt
MINIO_HOST: minio-packet.freedesktop.org
# per-pipeline artifact storage on MinIO
PIPELINE_ARTIFACTS_BASE: ${MINIO_HOST}/artifacts/${CI_PROJECT_PATH}/${CI_PIPELINE_ID}
# per-job artifact storage on MinIO
JOB_ARTIFACTS_BASE: ${PIPELINE_ARTIFACTS_BASE}/${CI_JOB_ID}
# reference images stored for traces
PIGLIT_REPLAY_REFERENCE_IMAGES_BASE: "${MINIO_HOST}/mesa-tracie-results/$FDO_UPSTREAM_REPO"
# Individual CI farm status, set to "offline" to disable jobs
# running on a particular CI farm (ie. for outages, etc):
FD_FARM: "online"
COLLABORA_FARM: "online"
MICROSOFT_FARM: "online"
LIMA_FARM: "online"
IGALIA_FARM: "online"
ANHOLT_FARM: "online"
default:
before_script:
- echo -e "\e[0Ksection_start:$(date +%s):unset_env_vars_section[collapsed=true]\r\e[0KUnsetting vulnerable environment variables"
- echo -n "${CI_JOB_JWT}" > "${CI_JOB_JWT_FILE}"
- unset CI_JOB_JWT
- echo -e "\e[0Ksection_end:$(date +%s):unset_env_vars_section\r\e[0K"
after_script:
- >
set +x
test -e "${CI_JOB_JWT_FILE}" &&
export CI_JOB_JWT="$(<${CI_JOB_JWT_FILE})" &&
rm "${CI_JOB_JWT_FILE}"
# Retry build or test jobs up to twice when the gitlab-runner itself fails somehow.
retry:
max: 2
when:
- runner_system_failure
include:
- project: 'freedesktop/ci-templates'
ref: 34f4ade99434043f88e164933f570301fd18b125
file:
- '/templates/ci-fairy.yml'
- project: 'freedesktop/ci-templates'
ref: *ci-templates-commit
file:
- '/templates/debian.yml'
- '/templates/fedora.yml'
- local: '.gitlab-ci/image-tags.yml'
- local: '.gitlab-ci/lava/lava-gitlab-ci.yml'
- local: '.gitlab-ci/container/gitlab-ci.yml'
- local: '.gitlab-ci/build/gitlab-ci.yml'
- local: '.gitlab-ci/test/gitlab-ci.yml'
- local: '.gitlab-ci/test-source-dep.yml'
- local: 'src/amd/ci/gitlab-ci.yml'
- local: 'src/broadcom/ci/gitlab-ci.yml'
- local: 'src/etnaviv/ci/gitlab-ci.yml'
- local: 'src/freedreno/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/crocus/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/d3d12/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/i915/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/lima/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/llvmpipe/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/nouveau/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/softpipe/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/virgl/ci/gitlab-ci.yml'
- local: 'src/gallium/drivers/zink/ci/gitlab-ci.yml'
- local: 'src/gallium/frontends/lavapipe/ci/gitlab-ci.yml'
- local: 'src/intel/ci/gitlab-ci.yml'
- local: 'src/microsoft/ci/gitlab-ci.yml'
- local: 'src/panfrost/ci/gitlab-ci.yml'
- local: 'src/virtio/ci/gitlab-ci.yml'
stages:
- sanity
- container
- git-archive
- build-x86_64
- build-misc
- lint
- amd
- intel
- nouveau
- arm
- broadcom
- freedreno
- etnaviv
- software-renderer
- layered-backends
- deploy
# YAML anchors for rule conditions
# --------------------------------
.rules-anchors:
rules:
# Pipeline for forked project branch
- if: &is-forked-branch '$CI_COMMIT_BRANCH && $CI_PROJECT_NAMESPACE != "mesa"'
when: manual
# Forked project branch / pre-merge pipeline not for Marge bot
- if: &is-forked-branch-or-pre-merge-not-for-marge '$CI_PROJECT_NAMESPACE != "mesa" || ($GITLAB_USER_LOGIN != "marge-bot" && $CI_PIPELINE_SOURCE == "merge_request_event")'
when: manual
# Pipeline runs for the main branch of the upstream Mesa project
- if: &is-mesa-main '$CI_PROJECT_NAMESPACE == "mesa" && $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH && $CI_COMMIT_BRANCH'
when: always
# Post-merge pipeline
- if: &is-post-merge '$CI_PROJECT_NAMESPACE == "mesa" && $CI_COMMIT_BRANCH'
when: on_success
# Post-merge pipeline, not for Marge Bot
- if: &is-post-merge-not-for-marge '$CI_PROJECT_NAMESPACE == "mesa" && $GITLAB_USER_LOGIN != "marge-bot" && $CI_COMMIT_BRANCH'
when: on_success
# Pre-merge pipeline
- if: &is-pre-merge '$CI_PIPELINE_SOURCE == "merge_request_event"'
when: on_success
# Pre-merge pipeline for Marge Bot
- if: &is-pre-merge-for-marge '$GITLAB_USER_LOGIN == "marge-bot" && $CI_PIPELINE_SOURCE == "merge_request_event"'
when: on_success
.docs-base:
extends:
- .fdo.ci-fairy
- .build-rules
script:
- apk --no-cache add graphviz doxygen
- pip3 install sphinx===5.1.1 breathe===4.34.0 mako===1.2.3 sphinx_rtd_theme===1.0.0
- docs/doxygen-wrapper.py --out-dir=docs/doxygen_xml
- sphinx-build -W -b html docs public
pages:
extends: .docs-base
stage: deploy
artifacts:
paths:
- public
needs: []
rules:
- !reference [.no_scheduled_pipelines-rules, rules]
- if: *is-mesa-main
changes: &docs-or-ci
- docs/**/*
- .gitlab-ci.yml
when: always
# Other cases default to never
test-docs:
extends: .docs-base
# Cancel job if a newer commit is pushed to the same branch
interruptible: true
stage: deploy
needs: []
rules:
- !reference [.no_scheduled_pipelines-rules, rules]
- if: *is-forked-branch
changes: *docs-or-ci
when: manual
# Other cases default to never
test-docs-mr:
extends:
- test-docs
needs:
- sanity
artifacts:
expose_as: 'Documentation preview'
paths:
- public/
rules:
- if: *is-pre-merge
changes: *docs-or-ci
when: on_success
# Other cases default to never
# When to automatically run the CI for build jobs
.build-rules:
rules:
- !reference [.no_scheduled_pipelines-rules, rules]
# If any files affecting the pipeline are changed, build/test jobs run
# automatically once all dependency jobs have passed
- changes: &all_paths
- VERSION
- bin/git_sha1_gen.py
- bin/install_megadrivers.py
- bin/meson_get_version.py
- bin/symbols-check.py
# GitLab CI
- .gitlab-ci.yml
- .gitlab-ci/**/*
# Meson
- meson*
- build-support/**/*
- subprojects/**/*
# Source code
- include/**/*
- src/**/*
when: on_success
# Otherwise, build/test jobs won't run because no rule matched.
.ci-deqp-artifacts:
artifacts:
name: "mesa_${CI_JOB_NAME}"
when: always
untracked: false
paths:
# Watch out! Artifacts are relative to the build dir.
# https://gitlab.com/gitlab-org/gitlab-ce/commit/8788fb925706cad594adf6917a6c5f6587dd1521
- artifacts
- _build/meson-logs/*.txt
- _build/meson-logs/strace
.container-rules:
rules:
- !reference [.no_scheduled_pipelines-rules, rules]
# Run pipeline by default in the main project if any CI pipeline
# configuration files were changed, to ensure docker images are up to date
- if: *is-post-merge
changes:
- .gitlab-ci.yml
- .gitlab-ci/**/*
when: on_success
# Run pipeline by default if it was triggered by Marge Bot, is for a
# merge request, and any files affecting the pipeline were changed
- if: *is-pre-merge-for-marge
changes:
*all_paths
when: on_success
# Run pipeline by default in the main project if it was not triggered by
# Marge Bot, and any files affecting the pipeline were changed
- if: *is-post-merge-not-for-marge
changes:
*all_paths
when: on_success
# Allow triggering jobs manually in other cases if any files affecting the
# pipeline were changed
- changes:
*all_paths
when: manual
# Otherwise, container jobs won't run because no rule matched.
# Git archive
make git archive:
extends:
- .fdo.ci-fairy
stage: git-archive
rules:
- !reference [.scheduled_pipeline-rules, rules]
# ensure we are running on packet
tags:
- packet.net
script:
# Compactify the .git directory
- git gc --aggressive
# compress the current folder
- tar -cvzf ../$CI_PROJECT_NAME.tar.gz .
# login with the JWT token file
- ci-fairy minio login --token-file "${CI_JOB_JWT_FILE}"
- ci-fairy minio cp ../$CI_PROJECT_NAME.tar.gz minio://$MINIO_HOST/git-cache/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/$CI_PROJECT_NAME.tar.gz
# Sanity checks of MR settings and commit logs
sanity:
extends:
- .fdo.ci-fairy
stage: sanity
rules:
- if: *is-pre-merge
when: on_success
# Other cases default to never
variables:
GIT_STRATEGY: none
script:
# ci-fairy check-commits --junit-xml=check-commits.xml
- ci-fairy check-merge-request --require-allow-collaboration --junit-xml=check-merge-request.xml
artifacts:
when: on_failure
reports:
junit: check-*.xml
# Rules for tests that should not block merging, but should be available to
# optionally run with the "play" button in the UI in pre-merge non-marge
# pipelines. This should appear in "extends:" after any includes of
# test-source-dep.yml rules, so that these rules replace those.
.test-manual-mr:
rules:
- !reference [.no_scheduled_pipelines-rules, rules]
- if: *is-forked-branch-or-pre-merge-not-for-marge
changes:
*all_paths
when: manual
variables:
JOB_TIMEOUT: 80


@@ -1,17 +0,0 @@
# Note: skips lists for CI are just a list of lines that, when
# non-zero-length and not starting with '#', will regex match to
# delete lines from the test list. Be careful.
# These are tremendously slow (pushing toward a minute), and aren't
# reliable to be run in parallel with other tests due to CPU-side timing.
dEQP-GLES[0-9]*.functional.flush_finish.*
# piglit: WGL is Windows-only
wgl@.*
# These are sensitive to CPU timing, and would need to be run in isolation
# on the system rather than in parallel with other tests.
glx@glx_arb_sync_control@timing.*
# This test is not built with waffle, while we do build tests with waffle
spec@!opengl 1.1@windowoverlap


@@ -1,66 +0,0 @@
version: 1
# Rules to match for a machine to qualify
target:
{% if tags %}
tags:
{% for tag in tags %}
- '{{ tag | trim }}'
{% endfor %}
{% endif %}
timeouts:
first_console_activity: # This limits the time it can take to receive the first console log
minutes: {{ timeout_first_minutes }}
retries: {{ timeout_first_retries }}
console_activity: # Reset every time we receive a message from the logs
minutes: {{ timeout_minutes }}
retries: {{ timeout_retries }}
boot_cycle:
minutes: {{ timeout_boot_minutes }}
retries: {{ timeout_boot_retries }}
overall: # Maximum time the job can take, not overrideable by the "continue" deployment
minutes: {{ timeout_overall_minutes }}
retries: 0
# no retries possible here
console_patterns:
session_end:
regex: >-
{{ session_end_regex }}
session_reboot:
regex: >-
{{ session_reboot_regex }}
job_success:
regex: >-
{{ job_success_regex }}
job_warn:
regex: >-
{{ job_warn_regex }}
# Environment to deploy
deployment:
# Initial boot
start:
kernel:
url: '{{ kernel_url }}'
cmdline: >
SALAD.machine_id={{ '{{' }} machine_id }}
console={{ '{{' }} local_tty_device }},115200 earlyprintk=vga,keep
loglevel={{ log_level }} no_hash_pointers
b2c.service="--privileged --tls-verify=false --pid=host docker://{{ '{{' }} fdo_proxy_registry }}/mupuf/valve-infra/telegraf-container:latest" b2c.hostname=dut-{{ '{{' }} machine.full_name }}
b2c.container="-ti --tls-verify=false docker://{{ '{{' }} fdo_proxy_registry }}/mupuf/valve-infra/machine_registration:latest check"
b2c.ntp_peer=10.42.0.1 b2c.pipefail b2c.cache_device=auto b2c.poweroff_delay={{ poweroff_delay }}
b2c.minio="gateway,{{ '{{' }} minio_url }},{{ '{{' }} job_bucket_access_key }},{{ '{{' }} job_bucket_secret_key }}"
b2c.volume="{{ '{{' }} job_bucket }}-results,mirror=gateway/{{ '{{' }} job_bucket }},pull_on=pipeline_start,push_on=changes,overwrite{% for excl in job_volume_exclusions %},exclude={{ excl }}{% endfor %},expiration=pipeline_end,preserve"
{% for volume in volumes %}
b2c.volume={{ volume }}
{% endfor %}
b2c.container="-v {{ '{{' }} job_bucket }}-results:{{ working_dir }} -w {{ working_dir }} {% for mount_volume in mount_volumes %} -v {{ mount_volume }}{% endfor %} --tls-verify=false docker://{{ local_container }} {{ container_cmd }}"
{% if cmdline_extras is defined %}
{{ cmdline_extras }}
{% endif %}
initramfs:
url: '{{ initramfs_url }}'


@@ -1,105 +0,0 @@
#!/usr/bin/env python3
# Copyright © 2022 Valve Corporation
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
from jinja2 import Environment, FileSystemLoader
from argparse import ArgumentParser
from os import environ, path
import json
parser = ArgumentParser()
parser.add_argument('--ci-job-id')
parser.add_argument('--container-cmd')
parser.add_argument('--initramfs-url')
parser.add_argument('--job-success-regex')
parser.add_argument('--job-warn-regex')
parser.add_argument('--kernel-url')
parser.add_argument('--log-level', type=int)
parser.add_argument('--poweroff-delay', type=int)
parser.add_argument('--session-end-regex')
parser.add_argument('--session-reboot-regex')
parser.add_argument('--tags', nargs='?', default='')
parser.add_argument('--template', default='b2c.yml.jinja2.jinja2')
parser.add_argument('--timeout-boot-minutes', type=int)
parser.add_argument('--timeout-boot-retries', type=int)
parser.add_argument('--timeout-first-minutes', type=int)
parser.add_argument('--timeout-first-retries', type=int)
parser.add_argument('--timeout-minutes', type=int)
parser.add_argument('--timeout-overall-minutes', type=int)
parser.add_argument('--timeout-retries', type=int)
parser.add_argument('--job-volume-exclusions', nargs='?', default='')
parser.add_argument('--volume', action='append')
parser.add_argument('--mount-volume', action='append')
parser.add_argument('--local-container', default=environ.get('B2C_LOCAL_CONTAINER', 'alpine:latest'))
parser.add_argument('--working-dir')
args = parser.parse_args()
env = Environment(loader=FileSystemLoader(path.dirname(args.template)),
                  trim_blocks=True, lstrip_blocks=True)
template = env.get_template(path.basename(args.template))
values = {}
values['ci_job_id'] = args.ci_job_id
values['container_cmd'] = args.container_cmd
values['initramfs_url'] = args.initramfs_url
values['job_success_regex'] = args.job_success_regex
values['job_warn_regex'] = args.job_warn_regex
values['kernel_url'] = args.kernel_url
values['log_level'] = args.log_level
values['poweroff_delay'] = args.poweroff_delay
values['session_end_regex'] = args.session_end_regex
values['session_reboot_regex'] = args.session_reboot_regex
try:
    values['tags'] = json.loads(args.tags)
except json.decoder.JSONDecodeError:
    values['tags'] = args.tags.split(",")
values['template'] = args.template
values['timeout_boot_minutes'] = args.timeout_boot_minutes
values['timeout_boot_retries'] = args.timeout_boot_retries
values['timeout_first_minutes'] = args.timeout_first_minutes
values['timeout_first_retries'] = args.timeout_first_retries
values['timeout_minutes'] = args.timeout_minutes
values['timeout_overall_minutes'] = args.timeout_overall_minutes
values['timeout_retries'] = args.timeout_retries
if len(args.job_volume_exclusions) > 0:
    exclusions = args.job_volume_exclusions.split(",")
    values['job_volume_exclusions'] = [excl for excl in exclusions if len(excl) > 0]
if args.volume is not None:
    values['volumes'] = args.volume
if args.mount_volume is not None:
    values['mount_volumes'] = args.mount_volume
values['working_dir'] = args.working_dir
assert(len(args.local_container) > 0)
values['local_container'] = args.local_container.replace(
    # Use the gateway's pull-through registry cache to reduce load on fd.o.
    'registry.freedesktop.org', '{{ fdo_proxy_registry }}'
)
if 'B2C_KERNEL_CMDLINE_EXTRAS' in environ:
    values['cmdline_extras'] = environ['B2C_KERNEL_CMDLINE_EXTRAS']
f = open(path.splitext(path.basename(args.template))[0], "w")
f.write(template.render(values))
f.close()


@@ -1,2 +0,0 @@
[*.sh]
indent_size = 2


@@ -1,26 +0,0 @@
#!/bin/sh
# This test script groups together a bunch of fast dEQP variant runs
# to amortize the cost of rebooting the board.
set -ex
EXIT=0
# Run reset tests without parallelism:
if ! env \
DEQP_RESULTS_DIR=results/reset \
FDO_CI_CONCURRENT=1 \
DEQP_CASELIST_FILTER='.*reset.*' \
/install/deqp-runner.sh; then
EXIT=1
fi
# Then run everything else with parallelism:
if ! env \
DEQP_RESULTS_DIR=results/nonrobustness \
DEQP_CASELIST_INV_FILTER='.*reset.*' \
/install/deqp-runner.sh; then
EXIT=1
fi


@@ -1,13 +0,0 @@
#!/bin/sh
# Init entrypoint for bare-metal devices; calls common init code.
# First stage: very basic setup to bring up network and /dev etc
/init-stage1.sh
# Second stage: run jobs
test $? -eq 0 && /init-stage2.sh
# Wait until the job would have timed out anyway, so we don't spew a "init
# exited" panic.
sleep 6000


@@ -1,17 +0,0 @@
#!/bin/bash
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must supply the PoE Interface to power down"
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must supply the PoE Switch host"
exit 1
fi
SNMP_KEY="1.3.6.1.4.1.9.9.402.1.2.1.1.1.$BM_POE_INTERFACE"
SNMP_ON="i 1"
SNMP_OFF="i 4"
snmpset -v2c -r 3 -t 30 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_OFF


@@ -1,21 +0,0 @@
#!/bin/bash
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must supply the PoE Interface to power up"
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must supply the PoE Switch host"
exit 1
fi
set -ex
SNMP_KEY="1.3.6.1.4.1.9.9.402.1.2.1.1.1.$BM_POE_INTERFACE"
SNMP_ON="i 1"
SNMP_OFF="i 4"
snmpset -v2c -r 3 -t 10 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_OFF
sleep 3s
snmpset -v2c -r 3 -t 10 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_ON


@@ -1,102 +0,0 @@
#!/bin/bash
# Boot script for Chrome OS devices attached to a servo debug connector, using
# NFS and TFTP to boot.
# We're run from the root of the repo, make a helper var for our paths
BM=$CI_PROJECT_DIR/install/bare-metal
CI_COMMON=$CI_PROJECT_DIR/install/common
# Runner config checks
if [ -z "$BM_SERIAL" ]; then
echo "Must set BM_SERIAL in your gitlab-runner config.toml [[runners]] environment"
echo "This is the CPU serial device."
exit 1
fi
if [ -z "$BM_SERIAL_EC" ]; then
echo "Must set BM_SERIAL in your gitlab-runner config.toml [[runners]] environment"
echo "This is the EC serial device for controlling board power"
exit 1
fi
if [ ! -d /nfs ]; then
echo "NFS rootfs directory needs to be mounted at /nfs by the gitlab runner"
exit 1
fi
if [ ! -d /tftp ]; then
echo "TFTP directory for this board needs to be mounted at /tftp by the gitlab runner"
exit 1
fi
# job config checks
if [ -z "$BM_KERNEL" ]; then
echo "Must set BM_KERNEL to your board's kernel FIT image"
exit 1
fi
if [ -z "$BM_ROOTFS" ]; then
echo "Must set BM_ROOTFS to your board's rootfs directory in the job's variables"
exit 1
fi
if [ -z "$BM_CMDLINE" ]; then
echo "Must set BM_CMDLINE to your board's kernel command line arguments"
exit 1
fi
set -ex
# Clear out any previous run's artifacts.
rm -rf results/
mkdir -p results
# Create the rootfs in the NFS directory. rm to make sure it's in a pristine
# state, since it's volume-mounted on the host.
rsync -a --delete $BM_ROOTFS/ /nfs/
mkdir -p /nfs/results
. $BM/rootfs-setup.sh /nfs
# Put the kernel/dtb image and the boot command line in the tftp directory for
# the board to find. For normal Mesa development, we build the kernel and
# store it in the docker container that this script is running in.
#
# However, container builds are expensive, so when you're hacking on the
# kernel, it's nice to be able to skip the half hour container build and plus
# moving that container to the runner. So, if BM_KERNEL is a URL, fetch it
# instead of looking in the container. Note that the kernel build should be
# the output of:
#
# make Image.lzma
#
# mkimage \
# -A arm64 \
# -f auto \
# -C lzma \
# -d arch/arm64/boot/Image.lzma \
# -b arch/arm64/boot/dts/qcom/sdm845-cheza-r3.dtb \
# cheza-image.img
rm -rf /tftp/*
if echo "$BM_KERNEL" | grep -q http; then
apt install -y wget
wget $BM_KERNEL -O /tftp/vmlinuz
else
cp $BM_KERNEL /tftp/vmlinuz
fi
echo "$BM_CMDLINE" > /tftp/cmdline
set +e
python3 $BM/cros_servo_run.py \
--cpu $BM_SERIAL \
--ec $BM_SERIAL_EC \
--test-timeout ${TEST_PHASE_TIMEOUT:-20}
ret=$?
set -e
# Bring artifacts back from the NFS dir to the build dir where gitlab-runner
# will look for them.
cp -Rp /nfs/results/. results/
exit $ret


@@ -1,180 +0,0 @@
#!/usr/bin/env python3
#
# Copyright © 2020 Google LLC
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import argparse
import queue
import re
from serial_buffer import SerialBuffer
import sys
import threading
class CrosServoRun:
def __init__(self, cpu, ec, test_timeout):
self.cpu_ser = SerialBuffer(
cpu, "results/serial.txt", "R SERIAL-CPU> ")
# Merge the EC serial into the cpu_ser's line stream so that we can
# effectively poll on both at the same time and not have to worry about
self.ec_ser = SerialBuffer(
ec, "results/serial-ec.txt", "R SERIAL-EC> ", line_queue=self.cpu_ser.line_queue)
self.test_timeout = test_timeout
def close(self):
self.ec_ser.close()
self.cpu_ser.close()
def ec_write(self, s):
print("W SERIAL-EC> %s" % s)
self.ec_ser.serial.write(s.encode())
def cpu_write(self, s):
print("W SERIAL-CPU> %s" % s)
self.cpu_ser.serial.write(s.encode())
def print_error(self, message):
RED = '\033[0;31m'
NO_COLOR = '\033[0m'
print(RED + message + NO_COLOR)
def run(self):
# Flush any partial commands in the EC's prompt, then ask for a reboot.
self.ec_write("\n")
self.ec_write("reboot\n")
bootloader_done = False
# This is emitted right when the bootloader pauses to check for input.
# Emit a ^N character to request network boot, because we don't have a
# direct-to-netboot firmware on cheza.
for line in self.cpu_ser.lines(timeout=120, phase="bootloader"):
if re.search("load_archive: loading locale_en.bin", line):
self.cpu_write("\016")
bootloader_done = True
break
# If the board has a netboot firmware and we made it to booting the
# kernel, proceed to processing of the test run.
if re.search("Booting Linux", line):
bootloader_done = True
break
# The Cheza boards have issues with failing to bring up power to
# the system sometimes, possibly dependent on ambient temperature
# in the farm.
if re.search("POWER_GOOD not seen in time", line):
self.print_error(
"Detected intermittent poweron failure, restarting run...")
return 2
if not bootloader_done:
print("Failed to make it through bootloader, restarting run...")
return 2
tftp_failures = 0
for line in self.cpu_ser.lines(timeout=self.test_timeout, phase="test"):
if re.search("---. end Kernel panic", line):
return 1
# The Cheza firmware seems to occasionally get stuck looping in
# this error state during TFTP booting, possibly based on amount of
# network traffic around it, but it'll usually recover after a
# reboot.
if re.search("R8152: Bulk read error 0xffffffbf", line):
tftp_failures += 1
if tftp_failures >= 100:
self.print_error(
"Detected intermittent tftp failure, restarting run...")
return 2
# There are very infrequent bus errors during power management transitions
# on cheza, which we don't expect to be the case on future boards.
if re.search("Kernel panic - not syncing: Asynchronous SError Interrupt", line):
self.print_error(
"Detected cheza power management bus error, restarting run...")
return 2
# If the network device dies, it's probably not graphics's fault, just try again.
if re.search("NETDEV WATCHDOG", line):
self.print_error(
"Detected network device failure, restarting run...")
return 2
# These HFI response errors started appearing with the introduction
# of piglit runs. CosmicPenguin says:
#
# "message ID 106 isn't a thing, so likely what happened is that we
# got confused when parsing the HFI queue. If it happened on only
# one run, then memory corruption could be a possible clue"
#
# Given that it seems to trigger randomly near a GPU fault and then
# break many tests after that, just restart the whole run.
if re.search("a6xx_hfi_send_msg.*Unexpected message id .* on the response queue", line):
self.print_error(
"Detected cheza power management bus error, restarting run...")
return 2
if re.search("coreboot.*bootblock starting", line):
self.print_error(
"Detected spontaneous reboot, restarting run...")
return 2
if re.search("arm-smmu 5040000.iommu: TLB sync timed out -- SMMU may be deadlocked", line):
self.print_error("Detected cheza MMU fail, restarting run...")
return 2
result = re.search("hwci: mesa: (\S*)", line)
if result:
if result.group(1) == "pass":
return 0
else:
return 1
self.print_error(
"Reached the end of the CPU serial log without finding a result")
return 2
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--cpu', type=str,
help='CPU Serial device', required=True)
parser.add_argument(
'--ec', type=str, help='EC Serial device', required=True)
parser.add_argument(
'--test-timeout', type=int, help='Test phase timeout (minutes)', required=True)
args = parser.parse_args()
while True:
servo = CrosServoRun(args.cpu, args.ec, args.test_timeout * 60)
retval = servo.run()
# power down the CPU on the device
servo.ec_write("power off\n")
servo.close()
if retval != 2:
sys.exit(retval)
if __name__ == '__main__':
main()
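The exit-status convention above (shared by the other *_run.py helpers in this directory) hinges on the device-side init script printing a final "hwci: mesa: pass" or "hwci: mesa: fail" line on the serial console. A minimal sketch of that mapping, for illustration only:

import re

def result_from_line(line):
    # "hwci: mesa: pass" -> 0, any other reported result -> 1,
    # no result line at all -> None (the caller keeps reading).
    m = re.search(r"hwci: mesa: (\S*)", line)
    if m is None:
        return None
    return 0 if m.group(1) == "pass" else 1

assert result_from_line("hwci: mesa: pass") == 0
assert result_from_line("hwci: mesa: fail") == 1
assert result_from_line("random kernel chatter") is None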


@@ -1,10 +0,0 @@
#!/bin/bash
relay=$1
if [ -z "$relay" ]; then
echo "Must supply a relay arg"
exit 1
fi
$CI_PROJECT_DIR/install/bare-metal/eth008-power-relay.py $ETH_HOST $ETH_PORT off $relay


@@ -1,28 +0,0 @@
#!/usr/bin/python3
import sys
import socket
host = sys.argv[1]
port = sys.argv[2]
mode = sys.argv[3]
relay = sys.argv[4]
msg = None
if mode == "on":
msg = b'\x20'
else:
msg = b'\x21'
msg += int(relay).to_bytes(1, 'big')
msg += b'\x00'
c = socket.create_connection((host, int(port)))
c.sendall(msg)
data = c.recv(1)
c.close()
# The board answers with a single status byte; a value of 1 means the command failed.
if data[0] == 1:
print('Command failed')
sys.exit(1)
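For reference, a sketch of what the script above puts on the wire, assuming an ETH008-style relay board as it does: the first byte selects the command (0x20 on, 0x21 off), the second is the relay number, the third is padding, and the board answers with a single status byte.

def relay_message(mode, relay):
    # Same message layout as the script above; "on" -> 0x20, otherwise 0x21.
    opcode = b'\x20' if mode == "on" else b'\x21'
    return opcode + int(relay).to_bytes(1, 'big') + b'\x00'

assert relay_message("on", 3) == b'\x20\x03\x00'
assert relay_message("off", 3) == b'\x21\x03\x00'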


@@ -1,12 +0,0 @@
#!/bin/bash
relay=$1
if [ -z "$relay" ]; then
echo "Must supply a relay arg"
exit 1
fi
$CI_PROJECT_DIR/install/bare-metal/eth008-power-relay.py $ETH_HOST $ETH_PORT off $relay
sleep 5
$CI_PROJECT_DIR/install/bare-metal/eth008-power-relay.py $ETH_HOST $ETH_PORT on $relay


@@ -1,30 +0,0 @@
#!/bin/bash
set -e
STRINGS=$(mktemp)
ERRORS=$(mktemp)
trap "rm $STRINGS; rm $ERRORS;" EXIT
FILE=$1
shift 1
while getopts "f:e:" opt; do
case $opt in
f) echo "$OPTARG" >> $STRINGS;;
e) echo "$OPTARG" >> $STRINGS ; echo "$OPTARG" >> $ERRORS;;
esac
done
shift $((OPTIND -1))
echo "Waiting for $FILE to say one of following strings"
cat $STRINGS
while ! egrep -wf $STRINGS $FILE; do
sleep 2
done
if egrep -wf $ERRORS $FILE; then
exit 1
fi
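The same wait loop restated as a Python sketch, in case the logic is easier to follow that way (the path and patterns are placeholders, the word-boundary matching of egrep -w is simplified to plain regex search, and the real script polls every two seconds):

import re
import time

def wait_for_strings(path, patterns, error_patterns, poll=2):
    # Block until the log matches one of the patterns; return False when the
    # match was an error pattern, mirroring the script's exit 1.
    wanted = [re.compile(p) for p in patterns + error_patterns]
    bad = [re.compile(p) for p in error_patterns]
    while True:
        with open(path, errors="replace") as f:
            text = f.read()
        if any(p.search(text) for p in wanted):
            return not any(p.search(text) for p in bad)
        time.sleep(poll)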


@@ -1,154 +0,0 @@
#!/bin/bash
BM=$CI_PROJECT_DIR/install/bare-metal
CI_COMMON=$CI_PROJECT_DIR/install/common
if [ -z "$BM_SERIAL" -a -z "$BM_SERIAL_SCRIPT" ]; then
echo "Must set BM_SERIAL OR BM_SERIAL_SCRIPT in your gitlab-runner config.toml [[runners]] environment"
echo "BM_SERIAL:"
echo " This is the serial device to talk to for waiting for fastboot to be ready and logging from the kernel."
echo "BM_SERIAL_SCRIPT:"
echo " This is a shell script to talk to for waiting for fastboot to be ready and logging from the kernel."
exit 1
fi
if [ -z "$BM_POWERUP" ]; then
echo "Must set BM_POWERUP in your gitlab-runner config.toml [[runners]] environment"
echo "This is a shell script that should reset the device and begin its boot sequence"
echo "such that it pauses at fastboot."
exit 1
fi
if [ -z "$BM_POWERDOWN" ]; then
echo "Must set BM_POWERDOWN in your gitlab-runner config.toml [[runners]] environment"
echo "This is a shell script that should power off the device."
exit 1
fi
if [ -z "$BM_FASTBOOT_SERIAL" ]; then
echo "Must set BM_FASTBOOT_SERIAL in your gitlab-runner config.toml [[runners]] environment"
echo "This must be the a stable-across-resets fastboot serial number."
exit 1
fi
if [ -z "$BM_KERNEL" ]; then
echo "Must set BM_KERNEL to your board's kernel vmlinuz or Image.gz in the job's variables:"
exit 1
fi
if [ -z "$BM_DTB" ]; then
echo "Must set BM_DTB to your board's DTB file in the job's variables:"
exit 1
fi
if [ -z "$BM_ROOTFS" ]; then
echo "Must set BM_ROOTFS to your board's rootfs directory in the job's variables:"
exit 1
fi
if echo $BM_CMDLINE | grep -q "root=/dev/nfs"; then
BM_FASTBOOT_NFSROOT=1
fi
set -ex
# Clear out any previous run's artifacts.
rm -rf results/
mkdir -p results/
if [ -n "$BM_FASTBOOT_NFSROOT" ]; then
# Create the rootfs in the NFS directory. rm to make sure it's in a pristine
# state, since it's volume-mounted on the host.
rsync -a --delete $BM_ROOTFS/ /nfs/
mkdir -p /nfs/results
. $BM/rootfs-setup.sh /nfs
# Root on NFS, no need for an initramfs.
rm -f rootfs.cpio.gz
touch rootfs.cpio
gzip rootfs.cpio
else
# Create the rootfs in a temp dir
rsync -a --delete $BM_ROOTFS/ rootfs/
. $BM/rootfs-setup.sh rootfs
# Finally, pack it up into a cpio rootfs. Skip the vulkan CTS since none of
# these devices use it and it would take up space in the initrd.
if [ -n "$PIGLIT_PROFILES" ]; then
EXCLUDE_FILTER="deqp|arb_gpu_shader5|arb_gpu_shader_fp64|arb_gpu_shader_int64|glsl-4.[0123456]0|arb_tessellation_shader"
else
EXCLUDE_FILTER="piglit|python"
fi
pushd rootfs
find -H | \
egrep -v "external/(openglcts|vulkancts|amber|glslang|spirv-tools)" |
egrep -v "traces-db|apitrace|renderdoc" | \
egrep -v $EXCLUDE_FILTER | \
cpio -H newc -o | \
xz --check=crc32 -T4 - > $CI_PROJECT_DIR/rootfs.cpio.gz
popd
fi
# Make the combined kernel image and dtb for passing to fastboot. For normal
# Mesa development, we build the kernel and store it in the docker container
# that this script is running in.
#
# However, container builds are expensive, so when you're hacking on the
# kernel, it's nice to be able to skip the half hour container build and plus
# moving that container to the runner. So, if BM_KERNEL+BM_DTB are URLs,
# fetch them instead of looking in the container.
if echo "$BM_KERNEL $BM_DTB" | grep -q http; then
apt install -y wget
wget $BM_KERNEL -O kernel
wget $BM_DTB -O dtb
cat kernel dtb > Image.gz-dtb
rm kernel
else
cat $BM_KERNEL $BM_DTB > Image.gz-dtb
cp $BM_DTB dtb
fi
export PATH=$BM:$PATH
mkdir -p artifacts
mkbootimg.py \
--kernel Image.gz-dtb \
--ramdisk rootfs.cpio.gz \
--dtb dtb \
--cmdline "$BM_CMDLINE" \
$BM_MKBOOT_PARAMS \
--header_version 2 \
-o artifacts/fastboot.img
rm Image.gz-dtb dtb
# Start background command for talking to serial if we have one.
if [ -n "$BM_SERIAL_SCRIPT" ]; then
$BM_SERIAL_SCRIPT > results/serial-output.txt &
while [ ! -e results/serial-output.txt ]; do
sleep 1
done
fi
set +e
$BM/fastboot_run.py \
--dev="$BM_SERIAL" \
--test-timeout ${TEST_PHASE_TIMEOUT:-20} \
--fbserial="$BM_FASTBOOT_SERIAL" \
--powerup="$BM_POWERUP" \
--powerdown="$BM_POWERDOWN"
ret=$?
set -e
if [ -n "$BM_FASTBOOT_NFSROOT" ]; then
# Bring artifacts back from the NFS dir to the build dir where gitlab-runner
# will look for them.
cp -Rp /nfs/results/. results/
fi
exit $ret


@@ -1,164 +0,0 @@
#!/usr/bin/env python3
#
# Copyright © 2020 Google LLC
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import argparse
import subprocess
import re
from serial_buffer import SerialBuffer
import sys
import threading
class FastbootRun:
def __init__(self, args, test_timeout):
self.powerup = args.powerup
self.ser = SerialBuffer(
args.dev, "results/serial-output.txt", "R SERIAL> ")
self.fastboot = "fastboot boot -s {ser} artifacts/fastboot.img".format(
ser=args.fbserial)
self.test_timeout = test_timeout
def close(self):
self.ser.close()
def print_error(self, message):
RED = '\033[0;31m'
NO_COLOR = '\033[0m'
print(RED + message + NO_COLOR)
def logged_system(self, cmd, timeout=60):
print("Running '{}'".format(cmd))
try:
return subprocess.call(cmd, shell=True, timeout=timeout)
except subprocess.TimeoutExpired:
self.print_error("timeout, restarting run...")
return 2
def run(self):
if ret := self.logged_system(self.powerup):
return ret
fastboot_ready = False
for line in self.ser.lines(timeout=2 * 60, phase="bootloader"):
if re.search("fastboot: processing commands", line) or \
re.search("Listening for fastboot command on", line):
fastboot_ready = True
break
if re.search("data abort", line):
self.print_error(
"Detected crash during boot, restarting run...")
return 2
if not fastboot_ready:
self.print_error(
"Failed to get to fastboot prompt, restarting run...")
return 2
if ret := self.logged_system(self.fastboot):
return ret
print_more_lines = -1
for line in self.ser.lines(timeout=self.test_timeout, phase="test"):
if print_more_lines == 0:
return 2
if print_more_lines > 0:
print_more_lines -= 1
if re.search("---. end Kernel panic", line):
return 1
# The db820c boards intermittently reboot. Just restart the run
# when we see a reboot after we got past fastboot.
if re.search("PON REASON", line):
self.print_error(
"Detected spontaneous reboot, restarting run...")
return 2
# db820c sometimes wedges around iommu fault recovery
if re.search("watchdog: BUG: soft lockup - CPU.* stuck", line):
self.print_error(
"Detected kernel soft lockup, restarting run...")
return 2
# If the network device dies, it's probably not graphics's fault, just try again.
if re.search("NETDEV WATCHDOG", line):
self.print_error(
"Detected network device failure, restarting run...")
return 2
# A3xx recovery doesn't quite work. Sometimes the GPU will get
# wedged and recovery will fail (because power can't be reset?)
# This assumes that the jobs are sufficiently well-tested that GPU
# hangs aren't always triggered, so just try again. But print some
# more lines first so that we get better information on the cause
# of the hang. Once a hang happens, it's pretty chatty.
if "[drm:adreno_recover] *ERROR* gpu hw init failed: -22" in line:
self.print_error(
"Detected GPU hang, restarting run...")
if print_more_lines == -1:
print_more_lines = 30
result = re.search("hwci: mesa: (\S*)", line)
if result:
if result.group(1) == "pass":
return 0
else:
return 1
self.print_error(
"Reached the end of the CPU serial log without finding a result, restarting run...")
return 2
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
'--dev', type=str, help='Serial device (otherwise reading from serial-output.txt)')
parser.add_argument('--powerup', type=str,
help='shell command for rebooting', required=True)
parser.add_argument('--powerdown', type=str,
help='shell command for powering off', required=True)
parser.add_argument('--fbserial', type=str,
help='fastboot serial number of the board', required=True)
parser.add_argument('--test-timeout', type=int,
help='Test phase timeout (minutes)', required=True)
args = parser.parse_args()
fastboot = FastbootRun(args, args.test_timeout * 60)
while True:
retval = fastboot.run()
fastboot.close()
if retval != 2:
break
fastboot = FastbootRun(args, args.test_timeout * 60)
fastboot.logged_system(args.powerdown)
sys.exit(retval)
if __name__ == '__main__':
main()
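All of the *_run.py helpers in this directory share the same return-value contract: 0 means the tests passed, 1 means they failed (or the kernel panicked), and 2 means an infrastructure flake that should be retried after power-cycling the board. A compact restatement of the retry loop in main() above, for illustration:

RETRY = 2  # infrastructure flake; power-cycle and try again

def run_with_retries(make_runner, powerdown):
    # make_runner() builds a fresh FastbootRun-style object each attempt;
    # powerdown() is the board's power-off hook.
    while True:
        runner = make_runner()
        ret = runner.run()
        runner.close()
        if ret != RETRY:
            powerdown()
            return ret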


@@ -1,10 +0,0 @@
#!/bin/bash
relay=$1
if [ -z "$relay" ]; then
echo "Must supply a relay arg"
exit 1
fi
$CI_PROJECT_DIR/install/bare-metal/google-power-relay.py off $relay


@@ -1,19 +0,0 @@
#!/usr/bin/python3
import sys
import serial
mode = sys.argv[1]
relay = sys.argv[2]
# For our relays, "off" means "board is powered".
mode_swap = {
"on": "off",
"off": "on",
}
mode = mode_swap[mode]
ser = serial.Serial('/dev/ttyACM0', 115200, timeout=2)
command = "relay {} {}\n\r".format(mode, relay)
ser.write(command.encode())
ser.close()


@@ -1,12 +0,0 @@
#!/bin/bash
relay=$1
if [ -z "$relay" ]; then
echo "Must supply a relay arg"
exit 1
fi
$CI_PROJECT_DIR/install/bare-metal/google-power-relay.py off $relay
sleep 5
$CI_PROJECT_DIR/install/bare-metal/google-power-relay.py on $relay


@@ -1,569 +0,0 @@
#!/usr/bin/env python3
#
# Copyright 2015, The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Creates the boot image."""
from argparse import (ArgumentParser, ArgumentTypeError,
FileType, RawDescriptionHelpFormatter)
from hashlib import sha1
from os import fstat
from struct import pack
import array
import collections
import os
import re
import subprocess
import tempfile
# Constant and structure definition is in
# system/tools/mkbootimg/include/bootimg/bootimg.h
BOOT_MAGIC = 'ANDROID!'
BOOT_MAGIC_SIZE = 8
BOOT_NAME_SIZE = 16
BOOT_ARGS_SIZE = 512
BOOT_EXTRA_ARGS_SIZE = 1024
BOOT_IMAGE_HEADER_V1_SIZE = 1648
BOOT_IMAGE_HEADER_V2_SIZE = 1660
BOOT_IMAGE_HEADER_V3_SIZE = 1580
BOOT_IMAGE_HEADER_V3_PAGESIZE = 4096
BOOT_IMAGE_HEADER_V4_SIZE = 1584
BOOT_IMAGE_V4_SIGNATURE_SIZE = 4096
VENDOR_BOOT_MAGIC = 'VNDRBOOT'
VENDOR_BOOT_MAGIC_SIZE = 8
VENDOR_BOOT_NAME_SIZE = BOOT_NAME_SIZE
VENDOR_BOOT_ARGS_SIZE = 2048
VENDOR_BOOT_IMAGE_HEADER_V3_SIZE = 2112
VENDOR_BOOT_IMAGE_HEADER_V4_SIZE = 2128
VENDOR_RAMDISK_TYPE_NONE = 0
VENDOR_RAMDISK_TYPE_PLATFORM = 1
VENDOR_RAMDISK_TYPE_RECOVERY = 2
VENDOR_RAMDISK_TYPE_DLKM = 3
VENDOR_RAMDISK_NAME_SIZE = 32
VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE = 16
VENDOR_RAMDISK_TABLE_ENTRY_V4_SIZE = 108
# Names with special meaning; they mustn't be specified in --ramdisk_name.
VENDOR_RAMDISK_NAME_BLOCKLIST = {b'default'}
PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT = '--vendor_ramdisk_fragment'
def filesize(f):
if f is None:
return 0
try:
return fstat(f.fileno()).st_size
except OSError:
return 0
def update_sha(sha, f):
if f:
sha.update(f.read())
f.seek(0)
sha.update(pack('I', filesize(f)))
else:
sha.update(pack('I', 0))
def pad_file(f, padding):
pad = (padding - (f.tell() & (padding - 1))) & (padding - 1)
f.write(pack(str(pad) + 'x'))
def get_number_of_pages(image_size, page_size):
"""calculates the number of pages required for the image"""
return (image_size + page_size - 1) // page_size
def get_recovery_dtbo_offset(args):
"""calculates the offset of recovery_dtbo image in the boot image"""
num_header_pages = 1 # header occupies a page
num_kernel_pages = get_number_of_pages(filesize(args.kernel), args.pagesize)
num_ramdisk_pages = get_number_of_pages(filesize(args.ramdisk),
args.pagesize)
num_second_pages = get_number_of_pages(filesize(args.second), args.pagesize)
dtbo_offset = args.pagesize * (num_header_pages + num_kernel_pages +
num_ramdisk_pages + num_second_pages)
return dtbo_offset
def write_header_v3_and_above(args):
if args.header_version > 3:
boot_header_size = BOOT_IMAGE_HEADER_V4_SIZE
else:
boot_header_size = BOOT_IMAGE_HEADER_V3_SIZE
args.output.write(pack(f'{BOOT_MAGIC_SIZE}s', BOOT_MAGIC.encode()))
# kernel size in bytes
args.output.write(pack('I', filesize(args.kernel)))
# ramdisk size in bytes
args.output.write(pack('I', filesize(args.ramdisk)))
# os version and patch level
args.output.write(pack('I', (args.os_version << 11) | args.os_patch_level))
args.output.write(pack('I', boot_header_size))
# reserved
args.output.write(pack('4I', 0, 0, 0, 0))
# version of boot image header
args.output.write(pack('I', args.header_version))
args.output.write(pack(f'{BOOT_ARGS_SIZE + BOOT_EXTRA_ARGS_SIZE}s',
args.cmdline))
if args.header_version >= 4:
# The signature used to verify boot image v4.
args.output.write(pack('I', BOOT_IMAGE_V4_SIGNATURE_SIZE))
pad_file(args.output, BOOT_IMAGE_HEADER_V3_PAGESIZE)
def write_vendor_boot_header(args):
if filesize(args.dtb) == 0:
raise ValueError('DTB image must not be empty.')
if args.header_version > 3:
vendor_ramdisk_size = args.vendor_ramdisk_total_size
vendor_boot_header_size = VENDOR_BOOT_IMAGE_HEADER_V4_SIZE
else:
vendor_ramdisk_size = filesize(args.vendor_ramdisk)
vendor_boot_header_size = VENDOR_BOOT_IMAGE_HEADER_V3_SIZE
args.vendor_boot.write(pack(f'{VENDOR_BOOT_MAGIC_SIZE}s',
VENDOR_BOOT_MAGIC.encode()))
# version of boot image header
args.vendor_boot.write(pack('I', args.header_version))
# flash page size
args.vendor_boot.write(pack('I', args.pagesize))
# kernel physical load address
args.vendor_boot.write(pack('I', args.base + args.kernel_offset))
# ramdisk physical load address
args.vendor_boot.write(pack('I', args.base + args.ramdisk_offset))
# ramdisk size in bytes
args.vendor_boot.write(pack('I', vendor_ramdisk_size))
args.vendor_boot.write(pack(f'{VENDOR_BOOT_ARGS_SIZE}s',
args.vendor_cmdline))
# kernel tags physical load address
args.vendor_boot.write(pack('I', args.base + args.tags_offset))
# asciiz product name
args.vendor_boot.write(pack(f'{VENDOR_BOOT_NAME_SIZE}s', args.board))
# header size in bytes
args.vendor_boot.write(pack('I', vendor_boot_header_size))
# dtb size in bytes
args.vendor_boot.write(pack('I', filesize(args.dtb)))
# dtb physical load address
args.vendor_boot.write(pack('Q', args.base + args.dtb_offset))
if args.header_version > 3:
vendor_ramdisk_table_size = (args.vendor_ramdisk_table_entry_num *
VENDOR_RAMDISK_TABLE_ENTRY_V4_SIZE)
# vendor ramdisk table size in bytes
args.vendor_boot.write(pack('I', vendor_ramdisk_table_size))
# number of vendor ramdisk table entries
args.vendor_boot.write(pack('I', args.vendor_ramdisk_table_entry_num))
# vendor ramdisk table entry size in bytes
args.vendor_boot.write(pack('I', VENDOR_RAMDISK_TABLE_ENTRY_V4_SIZE))
# bootconfig section size in bytes
args.vendor_boot.write(pack('I', filesize(args.vendor_bootconfig)))
pad_file(args.vendor_boot, args.pagesize)
def write_header(args):
if args.header_version > 4:
raise ValueError(
f'Boot header version {args.header_version} not supported')
if args.header_version in {3, 4}:
return write_header_v3_and_above(args)
ramdisk_load_address = ((args.base + args.ramdisk_offset)
if filesize(args.ramdisk) > 0 else 0)
second_load_address = ((args.base + args.second_offset)
if filesize(args.second) > 0 else 0)
args.output.write(pack(f'{BOOT_MAGIC_SIZE}s', BOOT_MAGIC.encode()))
# kernel size in bytes
args.output.write(pack('I', filesize(args.kernel)))
# kernel physical load address
args.output.write(pack('I', args.base + args.kernel_offset))
# ramdisk size in bytes
args.output.write(pack('I', filesize(args.ramdisk)))
# ramdisk physical load address
args.output.write(pack('I', ramdisk_load_address))
# second bootloader size in bytes
args.output.write(pack('I', filesize(args.second)))
# second bootloader physical load address
args.output.write(pack('I', second_load_address))
# kernel tags physical load address
args.output.write(pack('I', args.base + args.tags_offset))
# flash page size
args.output.write(pack('I', args.pagesize))
# version of boot image header
args.output.write(pack('I', args.header_version))
# os version and patch level
args.output.write(pack('I', (args.os_version << 11) | args.os_patch_level))
# asciiz product name
args.output.write(pack(f'{BOOT_NAME_SIZE}s', args.board))
args.output.write(pack(f'{BOOT_ARGS_SIZE}s', args.cmdline))
sha = sha1()
update_sha(sha, args.kernel)
update_sha(sha, args.ramdisk)
update_sha(sha, args.second)
if args.header_version > 0:
update_sha(sha, args.recovery_dtbo)
if args.header_version > 1:
update_sha(sha, args.dtb)
img_id = pack('32s', sha.digest())
args.output.write(img_id)
args.output.write(pack(f'{BOOT_EXTRA_ARGS_SIZE}s', args.extra_cmdline))
if args.header_version > 0:
if args.recovery_dtbo:
# recovery dtbo size in bytes
args.output.write(pack('I', filesize(args.recovery_dtbo)))
# recovery dtbo offset in the boot image
args.output.write(pack('Q', get_recovery_dtbo_offset(args)))
else:
# Set to zero if no recovery dtbo
args.output.write(pack('I', 0))
args.output.write(pack('Q', 0))
# Populate boot image header size for header versions 1 and 2.
if args.header_version == 1:
args.output.write(pack('I', BOOT_IMAGE_HEADER_V1_SIZE))
elif args.header_version == 2:
args.output.write(pack('I', BOOT_IMAGE_HEADER_V2_SIZE))
if args.header_version > 1:
if filesize(args.dtb) == 0:
raise ValueError('DTB image must not be empty.')
# dtb size in bytes
args.output.write(pack('I', filesize(args.dtb)))
# dtb physical load address
args.output.write(pack('Q', args.base + args.dtb_offset))
pad_file(args.output, args.pagesize)
return img_id
class AsciizBytes:
"""Parses a string and encodes it as an asciiz bytes object.
>>> AsciizBytes(bufsize=4)('foo')
b'foo\\x00'
>>> AsciizBytes(bufsize=4)('foob')
Traceback (most recent call last):
...
argparse.ArgumentTypeError: Encoded asciiz length exceeded: max 4, got 5
"""
def __init__(self, bufsize):
self.bufsize = bufsize
def __call__(self, arg):
arg_bytes = arg.encode() + b'\x00'
if len(arg_bytes) > self.bufsize:
raise ArgumentTypeError(
'Encoded asciiz length exceeded: '
f'max {self.bufsize}, got {len(arg_bytes)}')
return arg_bytes
class VendorRamdiskTableBuilder:
"""Vendor ramdisk table builder.
Attributes:
entries: A list of VendorRamdiskTableEntry namedtuples.
ramdisk_total_size: Total size in bytes of all ramdisks in the table.
"""
VendorRamdiskTableEntry = collections.namedtuple( # pylint: disable=invalid-name
'VendorRamdiskTableEntry',
['ramdisk_path', 'ramdisk_size', 'ramdisk_offset', 'ramdisk_type',
'ramdisk_name', 'board_id'])
def __init__(self):
self.entries = []
self.ramdisk_total_size = 0
self.ramdisk_names = set()
def add_entry(self, ramdisk_path, ramdisk_type, ramdisk_name, board_id):
# Strip any trailing null for simple comparison.
stripped_ramdisk_name = ramdisk_name.rstrip(b'\x00')
if stripped_ramdisk_name in VENDOR_RAMDISK_NAME_BLOCKLIST:
raise ValueError(
f'Banned vendor ramdisk name: {stripped_ramdisk_name}')
if stripped_ramdisk_name in self.ramdisk_names:
raise ValueError(
f'Duplicated vendor ramdisk name: {stripped_ramdisk_name}')
self.ramdisk_names.add(stripped_ramdisk_name)
if board_id is None:
board_id = array.array(
'I', [0] * VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE)
else:
board_id = array.array('I', board_id)
if len(board_id) != VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE:
raise ValueError('board_id size must be '
f'{VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE}')
with open(ramdisk_path, 'rb') as f:
ramdisk_size = filesize(f)
self.entries.append(self.VendorRamdiskTableEntry(
ramdisk_path, ramdisk_size, self.ramdisk_total_size, ramdisk_type,
ramdisk_name, board_id))
self.ramdisk_total_size += ramdisk_size
def write_ramdisks_padded(self, fout, alignment):
for entry in self.entries:
with open(entry.ramdisk_path, 'rb') as f:
fout.write(f.read())
pad_file(fout, alignment)
def write_entries_padded(self, fout, alignment):
for entry in self.entries:
fout.write(pack('I', entry.ramdisk_size))
fout.write(pack('I', entry.ramdisk_offset))
fout.write(pack('I', entry.ramdisk_type))
fout.write(pack(f'{VENDOR_RAMDISK_NAME_SIZE}s',
entry.ramdisk_name))
fout.write(entry.board_id)
pad_file(fout, alignment)
def write_padded_file(f_out, f_in, padding):
if f_in is None:
return
f_out.write(f_in.read())
pad_file(f_out, padding)
def parse_int(x):
return int(x, 0)
def parse_os_version(x):
match = re.search(r'^(\d{1,3})(?:\.(\d{1,3})(?:\.(\d{1,3}))?)?', x)
if match:
a = int(match.group(1))
b = c = 0
if match.lastindex >= 2:
b = int(match.group(2))
if match.lastindex == 3:
c = int(match.group(3))
# 7 bits allocated for each field
assert a < 128
assert b < 128
assert c < 128
return (a << 14) | (b << 7) | c
return 0
def parse_os_patch_level(x):
match = re.search(r'^(\d{4})-(\d{2})(?:-(\d{2}))?', x)
if match:
y = int(match.group(1)) - 2000
m = int(match.group(2))
# 7 bits allocated for the year, 4 bits for the month
assert 0 <= y < 128
assert 0 < m <= 12
return (y << 4) | m
return 0
def parse_vendor_ramdisk_type(x):
type_dict = {
'none': VENDOR_RAMDISK_TYPE_NONE,
'platform': VENDOR_RAMDISK_TYPE_PLATFORM,
'recovery': VENDOR_RAMDISK_TYPE_RECOVERY,
'dlkm': VENDOR_RAMDISK_TYPE_DLKM,
}
if x.lower() in type_dict:
return type_dict[x.lower()]
return parse_int(x)
def get_vendor_boot_v4_usage():
return """vendor boot version 4 arguments:
--ramdisk_type {none,platform,recovery,dlkm}
specify the type of the ramdisk
--ramdisk_name NAME
specify the name of the ramdisk
--board_id{0..15} NUMBER
specify the value of the board_id vector, defaults to 0
--vendor_ramdisk_fragment VENDOR_RAMDISK_FILE
path to the vendor ramdisk file
These options can be specified multiple times, where each vendor ramdisk
option group ends with a --vendor_ramdisk_fragment option.
Each option group appends an additional ramdisk to the vendor boot image.
"""
def parse_vendor_ramdisk_args(args, args_list):
"""Parses vendor ramdisk specific arguments.
Args:
args: An argparse.Namespace object. Parsed results are stored into this
object.
args_list: A list of argument strings to be parsed.
Returns:
A list of argument strings that are not parsed by this method.
"""
parser = ArgumentParser(add_help=False)
parser.add_argument('--ramdisk_type', type=parse_vendor_ramdisk_type,
default=VENDOR_RAMDISK_TYPE_NONE)
parser.add_argument('--ramdisk_name',
type=AsciizBytes(bufsize=VENDOR_RAMDISK_NAME_SIZE),
required=True)
for i in range(VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE):
parser.add_argument(f'--board_id{i}', type=parse_int, default=0)
parser.add_argument(PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT, required=True)
unknown_args = []
vendor_ramdisk_table_builder = VendorRamdiskTableBuilder()
if args.vendor_ramdisk is not None:
vendor_ramdisk_table_builder.add_entry(
args.vendor_ramdisk.name, VENDOR_RAMDISK_TYPE_PLATFORM, b'', None)
while PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT in args_list:
idx = args_list.index(PARSER_ARGUMENT_VENDOR_RAMDISK_FRAGMENT) + 2
vendor_ramdisk_args = args_list[:idx]
args_list = args_list[idx:]
ramdisk_args, extra_args = parser.parse_known_args(vendor_ramdisk_args)
ramdisk_args_dict = vars(ramdisk_args)
unknown_args.extend(extra_args)
ramdisk_path = ramdisk_args.vendor_ramdisk_fragment
ramdisk_type = ramdisk_args.ramdisk_type
ramdisk_name = ramdisk_args.ramdisk_name
board_id = [ramdisk_args_dict[f'board_id{i}']
for i in range(VENDOR_RAMDISK_TABLE_ENTRY_BOARD_ID_SIZE)]
vendor_ramdisk_table_builder.add_entry(ramdisk_path, ramdisk_type,
ramdisk_name, board_id)
if len(args_list) > 0:
unknown_args.extend(args_list)
args.vendor_ramdisk_total_size = (vendor_ramdisk_table_builder
.ramdisk_total_size)
args.vendor_ramdisk_table_entry_num = len(vendor_ramdisk_table_builder
.entries)
args.vendor_ramdisk_table_builder = vendor_ramdisk_table_builder
return unknown_args
def parse_cmdline():
version_parser = ArgumentParser(add_help=False)
version_parser.add_argument('--header_version', type=parse_int, default=0)
if version_parser.parse_known_args()[0].header_version < 3:
# For boot header v0 to v2, the kernel commandline field is split into
# two fields, cmdline and extra_cmdline. Both fields are asciiz strings,
# so we subtract one here to ensure the encoded string plus the
# null-terminator can fit in the buffer size.
cmdline_size = BOOT_ARGS_SIZE + BOOT_EXTRA_ARGS_SIZE - 1
else:
cmdline_size = BOOT_ARGS_SIZE + BOOT_EXTRA_ARGS_SIZE
parser = ArgumentParser(formatter_class=RawDescriptionHelpFormatter,
epilog=get_vendor_boot_v4_usage())
parser.add_argument('--kernel', type=FileType('rb'),
help='path to the kernel')
parser.add_argument('--ramdisk', type=FileType('rb'),
help='path to the ramdisk')
parser.add_argument('--second', type=FileType('rb'),
help='path to the second bootloader')
parser.add_argument('--dtb', type=FileType('rb'), help='path to the dtb')
dtbo_group = parser.add_mutually_exclusive_group()
dtbo_group.add_argument('--recovery_dtbo', type=FileType('rb'),
help='path to the recovery DTBO')
dtbo_group.add_argument('--recovery_acpio', type=FileType('rb'),
metavar='RECOVERY_ACPIO', dest='recovery_dtbo',
help='path to the recovery ACPIO')
parser.add_argument('--cmdline', type=AsciizBytes(bufsize=cmdline_size),
default='', help='kernel command line arguments')
parser.add_argument('--vendor_cmdline',
type=AsciizBytes(bufsize=VENDOR_BOOT_ARGS_SIZE),
default='',
help='vendor boot kernel command line arguments')
parser.add_argument('--base', type=parse_int, default=0x10000000,
help='base address')
parser.add_argument('--kernel_offset', type=parse_int, default=0x00008000,
help='kernel offset')
parser.add_argument('--ramdisk_offset', type=parse_int, default=0x01000000,
help='ramdisk offset')
parser.add_argument('--second_offset', type=parse_int, default=0x00f00000,
help='second bootloader offset')
parser.add_argument('--dtb_offset', type=parse_int, default=0x01f00000,
help='dtb offset')
parser.add_argument('--os_version', type=parse_os_version, default=0,
help='operating system version')
parser.add_argument('--os_patch_level', type=parse_os_patch_level,
default=0, help='operating system patch level')
parser.add_argument('--tags_offset', type=parse_int, default=0x00000100,
help='tags offset')
parser.add_argument('--board', type=AsciizBytes(bufsize=BOOT_NAME_SIZE),
default='', help='board name')
parser.add_argument('--pagesize', type=parse_int,
choices=[2**i for i in range(11, 15)], default=2048,
help='page size')
parser.add_argument('--id', action='store_true',
help='print the image ID on standard output')
parser.add_argument('--header_version', type=parse_int, default=0,
help='boot image header version')
parser.add_argument('-o', '--output', type=FileType('wb'),
help='output file name')
parser.add_argument('--gki_signing_algorithm',
help='GKI signing algorithm to use')
parser.add_argument('--gki_signing_key',
help='path to RSA private key file')
parser.add_argument('--gki_signing_signature_args',
help='other hash arguments passed to avbtool')
parser.add_argument('--gki_signing_avbtool_path',
help='path to avbtool for boot signature generation')
parser.add_argument('--vendor_boot', type=FileType('wb'),
help='vendor boot output file name')
parser.add_argument('--vendor_ramdisk', type=FileType('rb'),
help='path to the vendor ramdisk')
parser.add_argument('--vendor_bootconfig', type=FileType('rb'),
help='path to the vendor bootconfig file')
args, extra_args = parser.parse_known_args()
if args.vendor_boot is not None and args.header_version > 3:
extra_args = parse_vendor_ramdisk_args(args, extra_args)
if len(extra_args) > 0:
raise ValueError(f'Unrecognized arguments: {extra_args}')
if args.header_version < 3:
args.extra_cmdline = args.cmdline[BOOT_ARGS_SIZE-1:]
args.cmdline = args.cmdline[:BOOT_ARGS_SIZE-1] + b'\x00'
assert len(args.cmdline) <= BOOT_ARGS_SIZE
assert len(args.extra_cmdline) <= BOOT_EXTRA_ARGS_SIZE
return args
def add_boot_image_signature(args, pagesize):
"""Adds the boot image signature.
Note that the signature will only be verified in VTS to ensure a
generic boot.img is used. It will not be used by the device
bootloader at boot time. The bootloader should only verify
the boot vbmeta at the end of the boot partition (or in the top-level
vbmeta partition) via the Android Verified Boot process, when the
device boots.
"""
args.output.flush() # Flush the buffer for signature calculation.
# Appends zeros if the signing key is not specified.
if not args.gki_signing_key or not args.gki_signing_algorithm:
zeros = b'\x00' * BOOT_IMAGE_V4_SIGNATURE_SIZE
args.output.write(zeros)
pad_file(args.output, pagesize)
return
avbtool = 'avbtool' # Used from otatools.zip or Android build env.
# We need to specify the path of avbtool in build/core/Makefile.
# Because avbtool is not guaranteed to be in $PATH there.
if args.gki_signing_avbtool_path:
avbtool = args.gki_signing_avbtool_path
# Need to specify a value of --partition_size for avbtool to work.
# We use 64 MB below, but avbtool will not resize the boot image to
# this size because --do_not_append_vbmeta_image is also specified.
avbtool_cmd = [
avbtool, 'add_hash_footer',
'--partition_name', 'boot',
'--partition_size', str(64 * 1024 * 1024),
'--image', args.output.name,
'--algorithm', args.gki_signing_algorithm,
'--key', args.gki_signing_key,
'--salt', 'd00df00d'] # TODO: use a hash of kernel/ramdisk as the salt.
# Additional arguments passed to avbtool.
if args.gki_signing_signature_args:
avbtool_cmd += args.gki_signing_signature_args.split()
# Outputs the signed vbmeta to a separate file, then append to boot.img
# as the boot signature.
with tempfile.TemporaryDirectory() as temp_out_dir:
boot_signature_output = os.path.join(temp_out_dir, 'boot_signature')
avbtool_cmd += ['--do_not_append_vbmeta_image',
'--output_vbmeta_image', boot_signature_output]
subprocess.check_call(avbtool_cmd)
with open(boot_signature_output, 'rb') as boot_signature:
if filesize(boot_signature) > BOOT_IMAGE_V4_SIGNATURE_SIZE:
raise ValueError(
f'boot signature size is > {BOOT_IMAGE_V4_SIGNATURE_SIZE}')
write_padded_file(args.output, boot_signature, pagesize)
def write_data(args, pagesize):
write_padded_file(args.output, args.kernel, pagesize)
write_padded_file(args.output, args.ramdisk, pagesize)
write_padded_file(args.output, args.second, pagesize)
if args.header_version > 0 and args.header_version < 3:
write_padded_file(args.output, args.recovery_dtbo, pagesize)
if args.header_version == 2:
write_padded_file(args.output, args.dtb, pagesize)
if args.header_version >= 4:
add_boot_image_signature(args, pagesize)
def write_vendor_boot_data(args):
if args.header_version > 3:
builder = args.vendor_ramdisk_table_builder
builder.write_ramdisks_padded(args.vendor_boot, args.pagesize)
write_padded_file(args.vendor_boot, args.dtb, args.pagesize)
builder.write_entries_padded(args.vendor_boot, args.pagesize)
write_padded_file(args.vendor_boot, args.vendor_bootconfig,
args.pagesize)
else:
write_padded_file(args.vendor_boot, args.vendor_ramdisk, args.pagesize)
write_padded_file(args.vendor_boot, args.dtb, args.pagesize)
def main():
args = parse_cmdline()
if args.vendor_boot is not None:
if args.header_version not in {3, 4}:
raise ValueError(
'--vendor_boot not compatible with given header version')
if args.header_version == 3 and args.vendor_ramdisk is None:
raise ValueError('--vendor_ramdisk missing or invalid')
write_vendor_boot_header(args)
write_vendor_boot_data(args)
if args.output is not None:
if args.second is not None and args.header_version > 2:
raise ValueError(
'--second not compatible with given header version')
img_id = write_header(args)
if args.header_version > 2:
write_data(args, BOOT_IMAGE_HEADER_V3_PAGESIZE)
else:
write_data(args, args.pagesize)
if args.id and img_id is not None:
print('0x' + ''.join(f'{octet:02x}' for octet in img_id))
if __name__ == '__main__':
main()
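As a worked example of the version packing used by write_header() above (my own restatement, not part of the tool): each OS version component gets 7 bits, the patch level packs 7 bits of year-since-2000 and 4 bits of month, and the header stores them combined as (os_version << 11) | os_patch_level.

def pack_os_version(a, b, c):
    # mirrors parse_os_version(): 7 bits per component
    return (a << 14) | (b << 7) | c

def pack_os_patch_level(year, month):
    # mirrors parse_os_patch_level(): 7 bits of year-2000, 4 bits of month
    return ((year - 2000) << 4) | month

os_version = pack_os_version(12, 0, 0)         # "12.0.0"  -> 196608
os_patch_level = pack_os_patch_level(2021, 7)  # "2021-07" -> 343
assert (os_version << 11) | os_patch_level == 0x18000157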


@@ -1,17 +0,0 @@
#!/bin/bash
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must supply the PoE Interface to power up"
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must supply the PoE Switch host"
exit 1
fi
SNMP_KEY="SNMPv2-SMI::mib-2.105.1.1.1.3.1.`expr 48 + $BM_POE_INTERFACE`"
SNMP_ON="i 1"
SNMP_OFF="i 2"
flock /var/run/poe.lock -c "snmpset -v2c -r 3 -t 30 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_OFF"


@@ -1,19 +0,0 @@
#!/bin/bash
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must supply the PoE Interface to power up"
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must supply the PoE Switch host"
exit 1
fi
SNMP_KEY="SNMPv2-SMI::mib-2.105.1.1.1.3.1.`expr 48 + $BM_POE_INTERFACE`"
SNMP_ON="i 1"
SNMP_OFF="i 2"
flock /var/run/poe.lock -c "snmpset -v2c -r 3 -t 30 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_OFF"
sleep 3s
flock /var/run/poe.lock -c "snmpset -v2c -r 3 -t 30 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_ON"
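A Python sketch of the same power-cycle sequence, for illustration: the OID suffix of 48 plus the interface number, the "mesaci" community string and the 1 = on / 2 = off values all come from the scripts above; the switch host name below is a placeholder and the flock-based serialization is omitted.

import subprocess
import time

def poe_set(address, interface, on):
    # snmpset from net-snmp, invoked the same way as the shell scripts above
    oid = "SNMPv2-SMI::mib-2.105.1.1.1.3.1.%d" % (48 + interface)
    value = "1" if on else "2"
    subprocess.check_call(["snmpset", "-v2c", "-r", "3", "-t", "30",
                           "-c", "mesaci", address, oid, "i", value])

poe_set("poe-switch.example", 3, on=False)  # power the port down
time.sleep(3)
poe_set("poe-switch.example", 3, on=True)   # and back up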


@@ -1,162 +0,0 @@
#!/bin/bash
# Boot script for devices attached to a PoE switch, using NFS for the root
# filesystem.
# We're run from the root of the repo, make a helper var for our paths
BM=$CI_PROJECT_DIR/install/bare-metal
CI_COMMON=$CI_PROJECT_DIR/install/common
# Runner config checks
if [ -z "$BM_SERIAL" ]; then
echo "Must set BM_SERIAL in your gitlab-runner config.toml [[runners]] environment"
echo "This is the serial port to listen the device."
exit 1
fi
if [ -z "$BM_POE_ADDRESS" ]; then
echo "Must set BM_POE_ADDRESS in your gitlab-runner config.toml [[runners]] environment"
echo "This is the PoE switch address to connect for powering up/down devices."
exit 1
fi
if [ -z "$BM_POE_INTERFACE" ]; then
echo "Must set BM_POE_INTERFACE in your gitlab-runner config.toml [[runners]] environment"
echo "This is the PoE switch interface where the device is connected."
exit 1
fi
if [ -z "$BM_POWERUP" ]; then
echo "Must set BM_POWERUP in your gitlab-runner config.toml [[runners]] environment"
echo "This is a shell script that should power up the device and begin its boot sequence."
exit 1
fi
if [ -z "$BM_POWERDOWN" ]; then
echo "Must set BM_POWERDOWN in your gitlab-runner config.toml [[runners]] environment"
echo "This is a shell script that should power off the device."
exit 1
fi
if [ ! -d /nfs ]; then
echo "NFS rootfs directory needs to be mounted at /nfs by the gitlab runner"
exit 1
fi
if [ ! -d /tftp ]; then
echo "TFTP directory for this board needs to be mounted at /tftp by the gitlab runner"
exit 1
fi
# job config checks
if [ -z "$BM_ROOTFS" ]; then
echo "Must set BM_ROOTFS to your board's rootfs directory in the job's variables"
exit 1
fi
if [ -z "$BM_BOOTFS" ]; then
echo "Must set /boot files for the TFTP boot in the job's variables"
exit 1
fi
if [ -z "$BM_CMDLINE" ]; then
echo "Must set BM_CMDLINE to your board's kernel command line arguments"
exit 1
fi
if [ -z "$BM_BOOTCONFIG" ]; then
echo "Must set BM_BOOTCONFIG to your board's required boot configuration arguments"
exit 1
fi
set -ex
# Clear out any previous run's artifacts.
rm -rf results/
mkdir -p results
# Create the rootfs in the NFS directory. rm to make sure it's in a pristine
# state, since it's volume-mounted on the host.
rsync -a --delete $BM_ROOTFS/ /nfs/
# If BM_BOOTFS is an URL, download it
if echo $BM_BOOTFS | grep -q http; then
apt install -y wget
wget ${FDO_HTTP_CACHE_URI:-}$BM_BOOTFS -O /tmp/bootfs.tar
BM_BOOTFS=/tmp/bootfs.tar
fi
# If BM_BOOTFS is a file, assume it is a tarball and uncompress it
if [ -f $BM_BOOTFS ]; then
mkdir -p /tmp/bootfs
tar xf $BM_BOOTFS -C /tmp/bootfs
BM_BOOTFS=/tmp/bootfs
fi
# Install kernel modules (it could be either in /lib/modules or
# /usr/lib/modules, but we want to install in the latter)
[ -d $BM_BOOTFS/usr/lib/modules ] && rsync -a $BM_BOOTFS/usr/lib/modules/ /nfs/usr/lib/modules/
[ -d $BM_BOOTFS/lib/modules ] && rsync -a $BM_BOOTFS/lib/modules/ /nfs/lib/modules/
# Install kernel image + bootloader files
rsync -aL --delete $BM_BOOTFS/boot/ /tftp/
# Set up the pxelinux config for Jetson Nano
mkdir -p /tftp/pxelinux.cfg
cat <<EOF >/tftp/pxelinux.cfg/default-arm-tegra210-p3450-0000
PROMPT 0
TIMEOUT 30
DEFAULT primary
MENU TITLE jetson nano boot options
LABEL primary
MENU LABEL CI kernel on TFTP
LINUX Image
FDT tegra210-p3450-0000.dtb
APPEND \${cbootargs} $BM_CMDLINE
EOF
# Set up the pxelinux config for Jetson TK1
cat <<EOF >/tftp/pxelinux.cfg/default-arm-tegra124-jetson-tk1
PROMPT 0
TIMEOUT 30
DEFAULT primary
MENU TITLE jetson TK1 boot options
LABEL primary
MENU LABEL CI kernel on TFTP
LINUX zImage
FDT tegra124-jetson-tk1.dtb
APPEND \${cbootargs} $BM_CMDLINE
EOF
# Create the rootfs in the NFS directory
mkdir -p /nfs/results
. $BM/rootfs-setup.sh /nfs
echo "$BM_CMDLINE" > /tftp/cmdline.txt
# Add some required options in config.txt
printf "$BM_BOOTCONFIG" >> /tftp/config.txt
set +e
ATTEMPTS=10
while [ $((ATTEMPTS--)) -gt 0 ]; do
python3 $BM/poe_run.py \
--dev="$BM_SERIAL" \
--powerup="$BM_POWERUP" \
--powerdown="$BM_POWERDOWN" \
--test-timeout ${TEST_PHASE_TIMEOUT:-20}
ret=$?
if [ $ret -eq 2 ]; then
echo "Did not detect boot sequence, retrying..."
else
ATTEMPTS=0
fi
done
set -e
# Bring artifacts back from the NFS dir to the build dir where gitlab-runner
# will look for them.
cp -Rp /nfs/results/. results/
exit $ret


@@ -1,115 +0,0 @@
#!/usr/bin/env python3
#
# Copyright © 2020 Igalia, S.L.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import argparse
import os
import re
from serial_buffer import SerialBuffer
import sys
import threading
class PoERun:
def __init__(self, args, test_timeout):
self.powerup = args.powerup
self.powerdown = args.powerdown
self.ser = SerialBuffer(
args.dev, "results/serial-output.txt", "")
self.test_timeout = test_timeout
def print_error(self, message):
RED = '\033[0;31m'
NO_COLOR = '\033[0m'
print(RED + message + NO_COLOR)
def logged_system(self, cmd):
print("Running '{}'".format(cmd))
return os.system(cmd)
def run(self):
if self.logged_system(self.powerup) != 0:
return 1
boot_detected = False
for line in self.ser.lines(timeout=5 * 60, phase="bootloader"):
if re.search("Booting Linux", line):
boot_detected = True
break
if not boot_detected:
self.print_error(
"Something wrong; couldn't detect the boot start up sequence")
return 2
for line in self.ser.lines(timeout=self.test_timeout, phase="test"):
if re.search("---. end Kernel panic", line):
return 1
# Binning memory problems
if re.search("binner overflow mem", line):
self.print_error("Memory overflow in the binner; GPU hang")
return 1
if re.search("nouveau 57000000.gpu: bus: MMIO read of 00000000 FAULT at 137000", line):
self.print_error("nouveau jetson boot bug, retrying.")
return 2
# network fail on tk1
if re.search("NETDEV WATCHDOG:.* transmit queue 0 timed out", line):
self.print_error("nouveau jetson tk1 network fail, retrying.")
return 2
result = re.search("hwci: mesa: (\S*)", line)
if result:
if result.group(1) == "pass":
return 0
else:
return 1
self.print_error(
"Reached the end of the CPU serial log without finding a result")
return 2
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--dev', type=str,
help='Serial device to monitor', required=True)
parser.add_argument('--powerup', type=str,
help='shell command for rebooting', required=True)
parser.add_argument('--powerdown', type=str,
help='shell command for powering off', required=True)
parser.add_argument(
'--test-timeout', type=int, help='Test phase timeout (minutes)', required=True)
args = parser.parse_args()
poe = PoERun(args, args.test_timeout * 60)
retval = poe.run()
poe.logged_system(args.powerdown)
sys.exit(retval)
if __name__ == '__main__':
main()


@@ -1,30 +0,0 @@
#!/bin/bash
rootfs_dst=$1
mkdir -p $rootfs_dst/results
# Set up the init script that brings up the system.
cp $BM/bm-init.sh $rootfs_dst/init
cp $CI_COMMON/init*.sh $rootfs_dst/
# Make JWT token available as file in the bare-metal storage to enable access
# to MinIO
cp "${CI_JOB_JWT_FILE}" "${rootfs_dst}${CI_JOB_JWT_FILE}"
cp $CI_COMMON/capture-devcoredump.sh $rootfs_dst/
cp $CI_COMMON/intel-gpu-freq.sh $rootfs_dst/
set +x
# Pass through relevant env vars from the gitlab job to the baremetal init script
"$CI_COMMON"/generate-env.sh > $rootfs_dst/set-job-env-vars.sh
chmod +x $rootfs_dst/set-job-env-vars.sh
echo "Variables passed through:"
cat $rootfs_dst/set-job-env-vars.sh
set -x
# Add the Mesa drivers we built, and make a consistent symlink to them.
mkdir -p $rootfs_dst/$CI_PROJECT_DIR
rsync -aH --delete $CI_PROJECT_DIR/install/ $rootfs_dst/$CI_PROJECT_DIR/install/


@@ -1,185 +0,0 @@
#!/usr/bin/env python3
#
# Copyright © 2020 Google LLC
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import argparse
from datetime import datetime, timezone
import queue
import serial
import threading
import time
class SerialBuffer:
def __init__(self, dev, filename, prefix, timeout=None, line_queue=None):
self.filename = filename
self.dev = dev
if dev:
self.f = open(filename, "wb+")
self.serial = serial.Serial(dev, 115200, timeout=timeout)
else:
self.f = open(filename, "rb")
self.serial = None
self.byte_queue = queue.Queue()
# allow multiple SerialBuffers to share a line queue so you can merge
# servo's CPU and EC streams into one thing to watch the boot/test
# progress on.
if line_queue:
self.line_queue = line_queue
else:
self.line_queue = queue.Queue()
self.prefix = prefix
self.timeout = timeout
self.sentinel = object()
self.closing = False
if self.dev:
self.read_thread = threading.Thread(
target=self.serial_read_thread_loop, daemon=True)
else:
self.read_thread = threading.Thread(
target=self.serial_file_read_thread_loop, daemon=True)
self.read_thread.start()
self.lines_thread = threading.Thread(
target=self.serial_lines_thread_loop, daemon=True)
self.lines_thread.start()
def close(self):
self.closing = True
if self.serial:
self.serial.cancel_read()
self.read_thread.join()
self.lines_thread.join()
if self.serial:
self.serial.close()
# Thread that just reads the bytes from the serial device to keep its buffer
# from overflowing. If nothing is received in 1 minute, it finalizes.
def serial_read_thread_loop(self):
greet = "Serial thread reading from %s\n" % self.dev
self.byte_queue.put(greet.encode())
while not self.closing:
try:
b = self.serial.read()
if len(b) == 0:
break
self.byte_queue.put(b)
except Exception as err:
print(self.prefix + str(err))
break
self.byte_queue.put(self.sentinel)
# Thread that just reads the bytes from the file of serial output that some
# other process is appending to.
def serial_file_read_thread_loop(self):
greet = "Serial thread reading from %s\n" % self.filename
self.byte_queue.put(greet.encode())
while not self.closing:
line = self.f.readline()
if line:
self.byte_queue.put(line)
else:
time.sleep(0.1)
self.byte_queue.put(self.sentinel)
# Thread that processes the stream of bytes to 1) log to stdout, 2) log to
# file, 3) add to the queue of lines to be read by program logic
def serial_lines_thread_loop(self):
line = bytearray()
while True:
bytes = self.byte_queue.get(block=True)
if bytes == self.sentinel:
self.read_thread.join()
self.line_queue.put(self.sentinel)
break
# Write our data to the output file if we're the ones reading from
# the serial device
if self.dev:
self.f.write(bytes)
self.f.flush()
for b in bytes:
line.append(b)
if b == b'\n'[0]:
line = line.decode(errors="replace")
time = datetime.now().strftime('%y-%m-%d %H:%M:%S')
print("{endc}{time} {prefix}{line}".format(
time=time, prefix=self.prefix, line=line, endc='\033[0m'), flush=True, end='')
self.line_queue.put(line)
line = bytearray()
def lines(self, timeout=None, phase=None):
start_time = time.monotonic()
while True:
read_timeout = None
if timeout:
read_timeout = timeout - (time.monotonic() - start_time)
if read_timeout <= 0:
print("read timeout waiting for serial during {}".format(phase))
self.close()
break
try:
line = self.line_queue.get(timeout=read_timeout)
except queue.Empty:
print("read timeout waiting for serial during {}".format(phase))
self.close()
break
if line == self.sentinel:
print("End of serial output")
self.lines_thread.join()
break
yield line
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--dev', type=str, help='Serial device')
parser.add_argument('--file', type=str,
help='Filename for serial output', required=True)
parser.add_argument('--prefix', type=str,
help='Prefix for logging serial to stdout', nargs='?')
args = parser.parse_args()
ser = SerialBuffer(args.dev, args.file, args.prefix or "")
for line in ser.lines():
# We're just using this as a logger, so eat the produced lines and drop
# them
pass
if __name__ == '__main__':
main()
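A minimal usage sketch of the class above (the device path, log file and marker string are placeholders):

from serial_buffer import SerialBuffer

ser = SerialBuffer("/dev/ttyUSB0", "results/serial.txt", "R SERIAL> ")
for line in ser.lines(timeout=5 * 60, phase="boot"):
    # consume timestamped lines until some marker shows up, as the
    # *_run.py scripts do with their pass/fail and error patterns
    if "login:" in line:
        break
ser.close()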


@@ -1,41 +0,0 @@
#!/usr/bin/python3
# Copyright © 2020 Christian Gmeiner
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
#
# Tiny script to read bytes from telnet, and write the output to stdout, with a
# buffer in between so we don't lose serial output from its buffer.
#
import sys
import telnetlib
host = sys.argv[1]
port = sys.argv[2]
tn = telnetlib.Telnet(host, port, 1000000)
while True:
bytes = tn.read_some()
sys.stdout.buffer.write(bytes)
sys.stdout.flush()
tn.close()


@@ -1,2 +0,0 @@
schema.graphql
gitlab_gql.py.cache.db


@@ -1,301 +0,0 @@
#!/usr/bin/env python3
# Copyright © 2020 - 2022 Collabora Ltd.
# Authors:
# Tomeu Vizoso <tomeu.vizoso@collabora.com>
# David Heidelberg <david.heidelberg@collabora.com>
#
# SPDX-License-Identifier: MIT
"""
Helper script to run only the required CI jobs
and show the logs of the job(s).
"""
import argparse
import re
import sys
import time
from concurrent.futures import ThreadPoolExecutor
from functools import partial
from itertools import chain
from typing import Optional
import gitlab
from colorama import Fore, Style
from gitlab_common import get_gitlab_project, read_token, wait_for_pipeline
from gitlab_gql import GitlabGQL, create_job_needs_dag, filter_dag, print_dag
REFRESH_WAIT_LOG = 10
REFRESH_WAIT_JOBS = 6
URL_START = "\033]8;;"
URL_END = "\033]8;;\a"
STATUS_COLORS = {
"created": "",
"running": Fore.BLUE,
"success": Fore.GREEN,
"failed": Fore.RED,
"canceled": Fore.MAGENTA,
"manual": "",
"pending": "",
"skipped": "",
}
COMPLETED_STATUSES = ["success", "failed"]
def print_job_status(job) -> None:
"""It prints a nice, colored job status with a link to the job."""
if job.status == "canceled":
return
print(
STATUS_COLORS[job.status]
+ "🞋 job "
+ URL_START
+ f"{job.web_url}\a{job.name}"
+ URL_END
+ f" :: {job.status}"
+ Style.RESET_ALL
)
def print_job_status_change(job) -> None:
"""It reports job status changes."""
if job.status == "canceled":
return
print(
STATUS_COLORS[job.status]
+ "🗘 job "
+ URL_START
+ f"{job.web_url}\a{job.name}"
+ URL_END
+ f" has new status: {job.status}"
+ Style.RESET_ALL
)
def pretty_wait(sec: int) -> None:
"""shows progressbar in dots"""
for val in range(sec, 0, -1):
print(f"{val} seconds", end="\r")
time.sleep(1)
def monitor_pipeline(
project,
pipeline,
target_job: Optional[str],
dependencies,
force_manual: bool,
stress: bool,
) -> tuple[Optional[int], Optional[int]]:
"""Monitors pipeline and delegate canceling jobs"""
statuses = {}
target_statuses = {}
stress_succ = 0
stress_fail = 0
if target_job:
target_jobs_regex = re.compile(target_job.strip())
while True:
to_cancel = []
for job in pipeline.jobs.list(all=True, sort="desc"):
# target jobs
if target_job and target_jobs_regex.match(job.name):
if force_manual and job.status == "manual":
enable_job(project, job, True)
if stress and job.status in ["success", "failed"]:
if job.status == "success":
stress_succ += 1
if job.status == "failed":
stress_fail += 1
retry_job(project, job)
if (job.id not in target_statuses) or (
job.status not in target_statuses[job.id]
):
print_job_status_change(job)
target_statuses[job.id] = job.status
else:
print_job_status(job)
continue
# all jobs
if (job.id not in statuses) or (job.status not in statuses[job.id]):
print_job_status_change(job)
statuses[job.id] = job.status
# dependencies and cancelling the rest
if job.name in dependencies:
if job.status == "manual":
enable_job(project, job, False)
elif target_job and job.status not in [
"canceled",
"success",
"failed",
"skipped",
]:
to_cancel.append(job)
if target_job:
cancel_jobs(project, to_cancel)
if stress:
print(
"∑ succ: " + str(stress_succ) + "; fail: " + str(stress_fail),
flush=False,
)
pretty_wait(REFRESH_WAIT_JOBS)
continue
print("---------------------------------", flush=False)
if len(target_statuses) == 1 and {"running"}.intersection(
target_statuses.values()
):
return next(iter(target_statuses)), None
if {"failed", "canceled"}.intersection(target_statuses.values()):
return None, 1
if {"success", "manual"}.issuperset(target_statuses.values()):
return None, 0
pretty_wait(REFRESH_WAIT_JOBS)
def enable_job(project, job, target: bool) -> None:
"""enable manual job"""
pjob = project.jobs.get(job.id, lazy=True)
pjob.play()
if target:
jtype = "🞋 "
else:
jtype = "(dependency)"
print(Fore.MAGENTA + f"{jtype} job {job.name} manually enabled" + Style.RESET_ALL)
def retry_job(project, job) -> None:
"""retry job"""
pjob = project.jobs.get(job.id, lazy=True)
pjob.retry()
jtype = ""
print(Fore.MAGENTA + f"{jtype} job {job.name} manually enabled" + Style.RESET_ALL)
def cancel_job(project, job) -> None:
"""Cancel GitLab job"""
pjob = project.jobs.get(job.id, lazy=True)
pjob.cancel()
print(f"{job.name}")
def cancel_jobs(project, to_cancel) -> None:
"""Cancel unwanted GitLab jobs"""
if not to_cancel:
return
with ThreadPoolExecutor(max_workers=6) as exe:
part = partial(cancel_job, project)
exe.map(part, to_cancel)
def print_log(project, job_id) -> None:
"""Print job log into output"""
printed_lines = 0
while True:
job = project.jobs.get(job_id)
# GitLab's REST API doesn't offer pagination for logs, so we have to refetch it all
lines = job.trace().decode("unicode_escape").splitlines()
for line in lines[printed_lines:]:
print(line)
printed_lines = len(lines)
if job.status in COMPLETED_STATUSES:
print(Fore.GREEN + f"Job finished: {job.web_url}" + Style.RESET_ALL)
return
pretty_wait(REFRESH_WAIT_LOG)
def parse_args() -> argparse.Namespace:
"""Parse args"""
parser = argparse.ArgumentParser(
description="Tool to trigger a subset of container jobs "
+ "and monitor the progress of a test job",
epilog="Example: mesa-monitor.py --rev $(git rev-parse HEAD) "
+ '--target ".*traces" ',
)
parser.add_argument("--target", metavar="target-job", help="Target job")
parser.add_argument(
"--rev", metavar="revision", help="repository git revision", required=True
)
parser.add_argument(
"--token",
metavar="token",
help="force GitLab token, otherwise it's read from ~/.config/gitlab-token",
)
parser.add_argument(
"--force-manual", action="store_true", help="Force jobs marked as manual"
)
parser.add_argument("--stress", action="store_true", help="Stresstest job(s)")
return parser.parse_args()
def find_dependencies(target_job: str, project_path, sha: str) -> set[str]:
gql_instance = GitlabGQL()
dag, _ = create_job_needs_dag(
gql_instance, {"projectPath": project_path.path_with_namespace, "sha": sha}
)
target_dep_dag = filter_dag(dag, target_job)
print(Fore.YELLOW)
print("Detected job dependencies:")
print()
print_dag(target_dep_dag)
print(Fore.RESET)
return set(chain.from_iterable(target_dep_dag.values()))
if __name__ == "__main__":
try:
t_start = time.perf_counter()
args = parse_args()
token = read_token(args.token)
gl = gitlab.Gitlab(url="https://gitlab.freedesktop.org", private_token=token)
cur_project = get_gitlab_project(gl, "mesa")
print(f"Revision: {args.rev}")
pipe = wait_for_pipeline(cur_project, args.rev)
print(f"Pipeline: {pipe.web_url}")
deps = set()
if args.target:
print("🞋 job: " + Fore.BLUE + args.target + Style.RESET_ALL)
deps = find_dependencies(
target_job=args.target, sha=args.rev, project_path=cur_project
)
target_job_id, ret = monitor_pipeline(
cur_project, pipe, args.target, deps, args.force_manual, args.stress
)
if target_job_id:
print_log(cur_project, target_job_id)
t_end = time.perf_counter()
spend_minutes = (t_end - t_start) / 60
print(f"⏲ Duration of script execution: {spend_minutes:0.1f} minutes")
sys.exit(ret)
except KeyboardInterrupt:
sys.exit(1)


@@ -1,11 +0,0 @@
#!/bin/sh
# Helper script to download the schema GraphQL from Gitlab to enable IDEs to
# assist the developer to edit gql files
SOURCE_DIR=$(dirname "$(realpath "$0")")
(
cd $SOURCE_DIR || exit 1
gql-cli https://gitlab.freedesktop.org/api/graphql --print-schema > schema.graphql
)


@@ -1,42 +0,0 @@
#!/usr/bin/env python3
# Copyright © 2020 - 2022 Collabora Ltd.
# Authors:
# Tomeu Vizoso <tomeu.vizoso@collabora.com>
# David Heidelberg <david.heidelberg@collabora.com>
#
# SPDX-License-Identifier: MIT
'''Shared functions between the scripts.'''
import os
import time
from typing import Optional
def get_gitlab_project(glab, name: str):
"""Finds a specified gitlab project for given user"""
glab.auth()
username = glab.user.username
return glab.projects.get(f"{username}/mesa")
def read_token(token_arg: Optional[str]) -> str:
"""pick token from args or file"""
if token_arg:
return token_arg
return (
open(os.path.expanduser("~/.config/gitlab-token"), encoding="utf-8")
.readline()
.rstrip()
)
def wait_for_pipeline(project, sha: str):
"""await until pipeline appears in Gitlab"""
print("⏲ for the pipeline to appear..", end="")
while True:
pipelines = project.pipelines.list(sha=sha)
if pipelines:
print("", flush=True)
return pipelines[0]
print("", end=".", flush=True)
time.sleep(1)


@@ -1,303 +0,0 @@
#!/usr/bin/env python3
import re
from argparse import ArgumentDefaultsHelpFormatter, ArgumentParser, Namespace
from dataclasses import dataclass, field
from os import getenv
from pathlib import Path
from typing import Any, Iterable, Optional, Pattern, Union
import yaml
from filecache import DAY, filecache
from gql import Client, gql
from gql.transport.aiohttp import AIOHTTPTransport
from graphql import DocumentNode
Dag = dict[str, list[str]]
TOKEN_DIR = Path(getenv("XDG_CONFIG_HOME") or Path.home() / ".config")
def get_token_from_default_dir() -> str:
try:
token_file = TOKEN_DIR / "gitlab-token"
return str(token_file.resolve(strict=True))
except FileNotFoundError as ex:
print(
f"Could not find {token_file}, please provide a token file as an argument"
)
raise ex
def get_project_root_dir():
root_path = Path(__file__).parent.parent.parent.resolve()
gitlab_file = root_path / ".gitlab-ci.yml"
assert gitlab_file.exists()
return root_path
@dataclass
class GitlabGQL:
_transport: Any = field(init=False)
client: Client = field(init=False)
url: str = "https://gitlab.freedesktop.org/api/graphql"
token: Optional[str] = None
def __post_init__(self):
self._setup_gitlab_gql_client()
def _setup_gitlab_gql_client(self) -> Client:
# Select your transport with a defined url endpoint
headers = {}
if self.token:
headers["Authorization"] = f"Bearer {self.token}"
self._transport = AIOHTTPTransport(url=self.url, headers=headers)
# Create a GraphQL client using the defined transport
self.client = Client(
transport=self._transport, fetch_schema_from_transport=True
)
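# query() results are cached on disk for a day via filecache;
# invalidate_query_cache() clears the cache when stale data is suspected.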
@filecache(DAY)
def query(
self, gql_file: Union[Path, str], params: dict[str, Any]
) -> dict[str, Any]:
# Provide a GraphQL query
source_path = Path(__file__).parent
pipeline_query_file = source_path / gql_file
query: DocumentNode
with open(pipeline_query_file, "r") as f:
pipeline_query = f.read()
query = gql(pipeline_query)
# Execute the query on the transport
return self.client.execute(query, variable_values=params)
def invalidate_query_cache(self):
self.query._db.clear()
def create_job_needs_dag(
gl_gql: GitlabGQL, params
) -> tuple[Dag, dict[str, dict[str, Any]]]:
result = gl_gql.query("pipeline_details.gql", params)
dag = {}
jobs = {}
pipeline = result["project"]["pipeline"]
if not pipeline:
raise RuntimeError(f"Could not find any pipelines for {params}")
for stage in pipeline["stages"]["nodes"]:
for stage_job in stage["groups"]["nodes"]:
for job in stage_job["jobs"]["nodes"]:
needs = job.pop("needs")["nodes"]
jobs[job["name"]] = job
dag[job["name"]] = {node["name"] for node in needs}
for job, needs in dag.items():
needs: set
partial = True
while partial:
next_depth = {n for dn in needs for n in dag[dn]}
partial = not needs.issuperset(next_depth)
needs = needs.union(next_depth)
dag[job] = needs
return dag, jobs
def filter_dag(dag: Dag, regex: Pattern) -> Dag:
return {job: needs for job, needs in dag.items() if re.match(regex, job)}
def print_dag(dag: Dag) -> None:
for job, needs in dag.items():
print(f"{job}:")
print(f"\t{' '.join(needs)}")
print()
def fetch_merged_yaml(gl_gql: GitlabGQL, params) -> dict[str, Any]:
gitlab_yml_file = get_project_root_dir() / ".gitlab-ci.yml"
content = Path(gitlab_yml_file).read_text().strip()
params["content"] = content
raw_response = gl_gql.query("job_details.gql", params)
if merged_yaml := raw_response["ciConfig"]["mergedYaml"]:
return yaml.safe_load(merged_yaml)
gl_gql.invalidate_query_cache()
raise ValueError(
"""
Could not fetch any content for merged YAML,
please verify if the git SHA exists in remote.
Maybe you forgot to `git push`? """
)
def recursive_fill(job, relationship_field, target_data, acc_data: dict, merged_yaml):
if relatives := job.get(relationship_field):
if isinstance(relatives, str):
relatives = [relatives]
for relative in relatives:
parent_job = merged_yaml[relative]
acc_data = recursive_fill(parent_job, relationship_field, target_data, acc_data, merged_yaml)
acc_data |= job.get(target_data, {})
return acc_data
def get_variables(job, merged_yaml, project_path, sha) -> dict[str, str]:
p = get_project_root_dir() / ".gitlab-ci" / "image-tags.yml"
image_tags = yaml.safe_load(p.read_text())
variables = image_tags["variables"]
variables |= merged_yaml["variables"]
variables |= job["variables"]
variables["CI_PROJECT_PATH"] = project_path
variables["CI_PROJECT_NAME"] = project_path.split("/")[1]
variables["CI_REGISTRY_IMAGE"] = "registry.freedesktop.org/${CI_PROJECT_PATH}"
variables["CI_COMMIT_SHA"] = sha
while recurse_among_variables_space(variables):
pass
return variables
# Based on: https://stackoverflow.com/a/2158532/1079223
def flatten(xs):
for x in xs:
if isinstance(x, Iterable) and not isinstance(x, (str, bytes)):
yield from flatten(x)
else:
yield x
def get_full_script(job) -> list[str]:
script = []
for script_part in ("before_script", "script", "after_script"):
script.append(f"# {script_part}")
lines = flatten(job.get(script_part, []))
script.extend(lines)
script.append("")
return script
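# One substitution pass over the variable map: expand $VAR / ${VAR} references
# whose targets are known, and report whether another pass may be needed.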
def recurse_among_variables_space(var_graph) -> bool:
updated = False
for var, value in var_graph.items():
value = str(value)
dep_vars = []
if match := re.findall(r"(\$[{]?[\w\d_]*[}]?)", value):
all_dep_vars = [v.lstrip("${").rstrip("}") for v in match]
# print(value, match, all_dep_vars)
dep_vars = [v for v in all_dep_vars if v in var_graph]
for dep_var in dep_vars:
dep_value = str(var_graph[dep_var])
new_value = var_graph[var]
new_value = new_value.replace(f"${{{dep_var}}}", dep_value)
new_value = new_value.replace(f"${dep_var}", dep_value)
var_graph[var] = new_value
updated |= dep_value != new_value
return updated
def get_job_final_definiton(job_name, merged_yaml, project_path, sha):
job = merged_yaml[job_name]
variables = get_variables(job, merged_yaml, project_path, sha)
print("# --------- variables ---------------")
for var, value in sorted(variables.items()):
print(f"export {var}={value!r}")
# TODO: Recurse into needs to get full script
# TODO: maybe create a extra yaml file to avoid too much rework
script = get_full_script(job)
print()
print()
print("# --------- full script ---------------")
print("\n".join(script))
if image := variables.get("MESA_IMAGE"):
print()
print()
print("# --------- container image ---------------")
print(image)
def parse_args() -> Namespace:
parser = ArgumentParser(
formatter_class=ArgumentDefaultsHelpFormatter,
description="CLI and library with utility functions to debug jobs via Gitlab GraphQL",
epilog=f"""Example:
{Path(__file__).name} --rev $(git rev-parse HEAD) --print-job-dag""",
)
parser.add_argument("-pp", "--project-path", type=str, default="mesa/mesa")
parser.add_argument("--sha", "--rev", type=str, required=True)
parser.add_argument(
"--regex",
type=str,
required=False,
help="Regex pattern for the job name to be considered",
)
parser.add_argument("--print-dag", action="store_true", help="Print job needs DAG")
parser.add_argument(
"--print-merged-yaml",
action="store_true",
help="Print the resulting YAML for the specific SHA",
)
parser.add_argument(
"--print-job-manifest", type=str, help="Print the resulting job data"
)
parser.add_argument(
"--gitlab-token-file",
type=str,
default=get_token_from_default_dir(),
help="force GitLab token, otherwise it's read from $XDG_CONFIG_HOME/gitlab-token",
)
args = parser.parse_args()
args.gitlab_token = Path(args.gitlab_token_file).read_text()
return args
def main():
args = parse_args()
gl_gql = GitlabGQL(token=args.gitlab_token)
if args.print_dag:
dag, jobs = create_job_needs_dag(
gl_gql, {"projectPath": args.project_path, "sha": args.sha}
)
if args.regex:
dag = filter_dag(dag, re.compile(args.regex))
print_dag(dag)
if args.print_merged_yaml:
print(
fetch_merged_yaml(
gl_gql, {"projectPath": args.project_path, "sha": args.sha}
)
)
if args.print_job_manifest:
merged_yaml = fetch_merged_yaml(
gl_gql, {"projectPath": args.project_path, "sha": args.sha}
)
get_job_final_definiton(
args.print_job_manifest, merged_yaml, args.project_path, args.sha
)
if __name__ == "__main__":
main()


@@ -1,7 +0,0 @@
query getCiConfigData($projectPath: ID!, $sha: String, $content: String!) {
ciConfig(projectPath: $projectPath, sha: $sha, content: $content) {
errors
mergedYaml
__typename
}
}


@@ -1,86 +0,0 @@
fragment LinkedPipelineData on Pipeline {
id
iid
path
cancelable
retryable
userPermissions {
updatePipeline
}
status: detailedStatus {
id
group
label
icon
}
sourceJob {
id
name
}
project {
id
name
fullPath
}
}
query getPipelineDetails($projectPath: ID!, $sha: String!) {
project(fullPath: $projectPath) {
id
pipeline(sha: $sha) {
id
iid
complete
downstream {
nodes {
...LinkedPipelineData
}
}
upstream {
...LinkedPipelineData
}
stages {
nodes {
id
name
status: detailedStatus {
id
action {
id
icon
path
title
}
}
groups {
nodes {
id
status: detailedStatus {
id
label
group
icon
}
name
size
jobs {
nodes {
id
name
kind
scheduledAt
needs {
nodes {
id
name
}
}
}
}
}
}
}
}
}
}
}


@@ -1,8 +0,0 @@
aiohttp==3.8.1
colorama==0.4.5
filecache==0.81
gql==3.4.0
python-gitlab==3.5.0
PyYAML==6.0
ruamel.yaml.clib==0.2.6
ruamel.yaml==0.17.21


@@ -1,143 +0,0 @@
#!/usr/bin/env python3
# Copyright © 2022 Collabora Ltd.
# Authors:
# David Heidelberg <david.heidelberg@collabora.com>
#
# SPDX-License-Identifier: MIT
"""
Helper script to update traces checksums
"""
import argparse
import bz2
import glob
import re
import json
import sys
from ruamel.yaml import YAML
import gitlab
from gitlab_common import get_gitlab_project, read_token, wait_for_pipeline
DESCRIPTION_FILE = "export PIGLIT_REPLAY_DESCRIPTION_FILE='.*/install/(.*)'$"
DEVICE_NAME = "export PIGLIT_REPLAY_DEVICE_NAME='(.*)'$"
def gather_results(
project,
pipeline,
) -> None:
"""Gather results"""
target_jobs_regex = re.compile(".*-traces([:].*)?$")
for job in pipeline.jobs.list(all=True, sort="desc"):
if target_jobs_regex.match(job.name) and job.status == "failed":
cur_job = project.jobs.get(job.id)
# get variables
print(f"👁 Looking through logs for the device variable and traces.yml file in {job.name}...")
log = cur_job.trace().decode("unicode_escape").splitlines()
filename: str = ''
dev_name: str = ''
for logline in log:
desc_file = re.search(DESCRIPTION_FILE, logline)
device_name = re.search(DEVICE_NAME, logline)
if desc_file:
filename = desc_file.group(1)
if device_name:
dev_name = device_name.group(1)
if not filename or not dev_name:
print("! Couldn't find device name or YML file in the logs!")
return
print(f"👁 Found {dev_name} and file {filename}")
# find filename in Mesa source
traces_file = glob.glob('./**/' + filename, recursive=True)
# write into it
with open(traces_file[0], 'r', encoding='utf-8') as target_file:
yaml = YAML()
yaml.compact(seq_seq=False, seq_map=False)
yaml.version = (1, 2)
yaml.width = 2048 # do not break the text fields
yaml.default_flow_style = None
target = yaml.load(target_file)
# parse artifact
results_json_bz2 = cur_job.artifact(path="results/results.json.bz2", streamed=False)
results_json = bz2.decompress(results_json_bz2).decode("utf-8")
results = json.loads(results_json)
for _, value in results["tests"].items():
if (
not value['images'] or
not value['images'][0] or
"image_desc" not in value['images'][0]
):
continue
trace: str = value['images'][0]['image_desc']
checksum: str = value['images'][0]['checksum_render']
if not checksum:
print(f"Trace {trace} checksum is missing! Abort.")
continue
if checksum == "error":
print(f"Trace {trace} crashed")
continue
if (
'checksum' in target['traces'][trace][dev_name] and
target['traces'][trace][dev_name]['checksum'] == checksum
):
continue
if "label" in target['traces'][trace][dev_name]:
print(f'{trace}: {dev_name}: has label: {target["traces"][trace][dev_name]["label"]}, is it still right?')
target['traces'][trace][dev_name]['checksum'] = checksum
with open(traces_file[0], 'w', encoding='utf-8') as target_file:
yaml.dump(target, target_file)
def parse_args() -> argparse.Namespace:
"""Parse args"""
parser = argparse.ArgumentParser(
description="Tool to generate patch from checksums ",
epilog="Example: update_traces_checksum.py --rev $(git rev-parse HEAD) "
)
parser.add_argument(
"--rev", metavar="revision", help="repository git revision", required=True
)
parser.add_argument(
"--token",
metavar="token",
help="force GitLab token, otherwise it's read from ~/.config/gitlab-token",
)
return parser.parse_args()
if __name__ == "__main__":
try:
args = parse_args()
token = read_token(args.token)
gl = gitlab.Gitlab(url="https://gitlab.freedesktop.org", private_token=token)
cur_project = get_gitlab_project(gl, "mesa")
print(f"Revision: {args.rev}")
pipe = wait_for_pipeline(cur_project, args.rev)
print(f"Pipeline: {pipe.web_url}")
gather_results(cur_project, pipe)
sys.exit()
except KeyboardInterrupt:
sys.exit(1)


@@ -1,628 +0,0 @@
# Shared between windows and Linux
.build-common:
extends: .build-rules
# Cancel job if a newer commit is pushed to the same branch
interruptible: true
artifacts:
name: "mesa_${CI_JOB_NAME}"
when: always
paths:
- _build/meson-logs/*.txt
- _build/meson-logs/strace
- shader-db
# Just Linux
.build-linux:
extends: .build-common
variables:
CCACHE_COMPILERCHECK: "content"
CCACHE_COMPRESS: "true"
CCACHE_DIR: /cache/mesa/ccache
# Use ccache transparently, and print stats before/after
before_script:
- !reference [default, before_script]
- export PATH="/usr/lib/ccache:$PATH"
- export CCACHE_BASEDIR="$PWD"
- echo -e "\e[0Ksection_start:$(date +%s):ccache_before[collapsed=true]\r\e[0Kccache stats before build"
- ccache --show-stats
- echo -e "\e[0Ksection_end:$(date +%s):ccache_before\r\e[0K"
after_script:
- echo -e "\e[0Ksection_start:$(date +%s):ccache_after[collapsed=true]\r\e[0Kccache stats after build"
- ccache --show-stats
- echo -e "\e[0Ksection_end:$(date +%s):ccache_after\r\e[0K"
- !reference [default, after_script]
.build-windows:
extends: .build-common
tags:
- windows
- docker
- "2022"
- mesa
cache:
key: ${CI_JOB_NAME}
paths:
- subprojects/packagecache
.meson-build:
extends:
- .build-linux
- .use-debian/x86_build
stage: build-x86_64
variables:
LLVM_VERSION: 11
script:
- .gitlab-ci/meson/build.sh
.meson-build_mingw:
extends:
- .build-linux
- .use-debian/x86_build_mingw
- .use-wine
stage: build-x86_64
script:
- .gitlab-ci/meson/build.sh
debian-testing:
extends:
- .meson-build
- .ci-deqp-artifacts
variables:
UNWIND: "enabled"
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D platforms=x11
GALLIUM_ST: >
-D dri3=enabled
-D gallium-va=enabled
GALLIUM_DRIVERS: "swrast,virgl,radeonsi,zink,crocus,iris,i915"
VULKAN_DRIVERS: "swrast,amd,intel,virtio-experimental"
BUILDTYPE: "debugoptimized"
EXTRA_OPTION: >
-D spirv-to-dxil=true
-D valgrind=false
MINIO_ARTIFACT_NAME: mesa-amd64
LLVM_VERSION: "13"
script:
- .gitlab-ci/meson/build.sh
- .gitlab-ci/prepare-artifacts.sh
artifacts:
reports:
junit: artifacts/ci_scripts_report.xml
debian-testing-asan:
extends:
- debian-testing
variables:
C_ARGS: >
-Wno-error=stringop-truncation
EXTRA_OPTION: >
-D b_sanitize=address
-D valgrind=false
-D tools=dlclose-skip
MINIO_ARTIFACT_NAME: ""
ARTIFACTS_DEBUG_SYMBOLS: 1
debian-testing-msan:
extends:
- debian-clang
variables:
# b_lundef is incompatible with msan
EXTRA_OPTION:
-D b_sanitize=memory
-D b_lundef=false
MINIO_ARTIFACT_NAME: ""
ARTIFACTS_DEBUG_SYMBOLS: 1
# Don't run all the tests yet:
# GLSL has some issues in sexpression reading.
# gtest has issues in its test initialization.
MESON_TEST_ARGS: "--suite glcpp --suite gallium --suite format"
# Freedreno dropped because freedreno tools fail at msan.
GALLIUM_DRIVERS: "iris,nouveau,kmsro,r300,r600,swrast,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,radeonsi,tegra,d3d12,crocus"
VULKAN_DRIVERS: intel,amd,broadcom,virtio-experimental
.debian-cl-testing:
extends:
- .meson-build
- .ci-deqp-artifacts
variables:
LLVM_VERSION: "13"
UNWIND: "enabled"
DRI_LOADERS: >
-D glx=disabled
-D egl=disabled
-D gbm=disabled
GALLIUM_DRIVERS: "swrast"
BUILDTYPE: "debugoptimized"
EXTRA_OPTION: >
-D valgrind=false
script:
- .gitlab-ci/meson/build.sh
- .gitlab-ci/prepare-artifacts.sh
debian-clover-testing:
extends:
- .debian-cl-testing
variables:
GALLIUM_ST: >
-D gallium-opencl=icd
-D opencl-spirv=true
debian-rusticl-testing:
extends:
- .debian-cl-testing
variables:
GALLIUM_ST: >
-D gallium-rusticl=true
-D opencl-spirv=true
debian-build-testing:
extends: .meson-build
variables:
UNWIND: "enabled"
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D platforms=x11,wayland
GALLIUM_ST: >
-D dri3=enabled
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-omx=bellagio
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=true
-D gallium-opencl=disabled
-D gallium-rusticl=false
GALLIUM_DRIVERS: "iris,nouveau,kmsro,r300,r600,freedreno,swrast,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,d3d12,asahi,crocus"
VULKAN_DRIVERS: swrast
EXTRA_OPTION: >
-D spirv-to-dxil=true
-D osmesa=true
-D tools=drm-shim,etnaviv,freedreno,glsl,intel,intel-ui,nir,nouveau,lima,panfrost,asahi
script:
- .gitlab-ci/lava/lava-pytest.sh
- .gitlab-ci/run-shellcheck.sh
- .gitlab-ci/run-yamllint.sh
- .gitlab-ci/meson/build.sh
- .gitlab-ci/run-shader-db.sh
# Test a release build with -Werror so new warnings don't sneak in.
debian-release:
extends: .meson-build
variables:
LLVM_VERSION: "13"
UNWIND: "enabled"
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D platforms=x11,wayland
GALLIUM_ST: >
-D dri3=enabled
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-omx=disabled
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=false
-D gallium-opencl=disabled
-D gallium-rusticl=false
-D llvm=enabled
GALLIUM_DRIVERS: "i915,iris,nouveau,kmsro,freedreno,r300,svga,swrast,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,d3d12,crocus"
VULKAN_DRIVERS: "amd,imagination-experimental,microsoft-experimental"
BUILDTYPE: "release"
EXTRA_OPTION: >
-D spirv-to-dxil=true
-D osmesa=true
-D tools=all
-D intel-clc=enabled
-D imagination-srv=true
script:
- .gitlab-ci/meson/build.sh
fedora-release:
extends:
- .meson-build
- .use-fedora/x86_build
variables:
BUILDTYPE: "release"
C_ARGS: >
-Wno-error=array-bounds
-Wno-error=stringop-overread
-Wno-error=uninitialized
CPP_ARGS: >
-Wno-error=array-bounds
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D glvnd=true
-D platforms=x11,wayland
# intel-clc disabled, we need llvm-spirv-translator 13.0+, Fedora 34 only packages 12.0.
EXTRA_OPTION: >
-D osmesa=true
-D selinux=true
-D tools=drm-shim,etnaviv,freedreno,glsl,intel,nir,nouveau,lima,panfrost,imagination
-D vulkan-layers=device-select,overlay
-D intel-clc=disabled
-D imagination-srv=true
GALLIUM_DRIVERS: "crocus,etnaviv,freedreno,iris,kmsro,lima,nouveau,panfrost,r300,r600,radeonsi,svga,swrast,tegra,v3d,vc4,virgl,zink"
GALLIUM_ST: >
-D dri3=enabled
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-omx=disabled
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=false
-D gallium-opencl=icd
-D gallium-rusticl=false
-D gles1=disabled
-D gles2=enabled
-D llvm=enabled
-D microsoft-clc=disabled
-D shared-llvm=enabled
LLVM_VERSION: ""
UNWIND: "disabled"
VULKAN_DRIVERS: "amd,broadcom,freedreno,intel,imagination-experimental"
script:
- .gitlab-ci/meson/build.sh
debian-android:
extends:
- .meson-cross
- .use-debian/android_build
variables:
UNWIND: "disabled"
C_ARGS: >
-Wno-error=asm-operand-widths
-Wno-error=constant-conversion
-Wno-error=enum-conversion
-Wno-error=initializer-overrides
-Wno-error=missing-braces
-Wno-error=sometimes-uninitialized
-Wno-error=unused-function
CPP_ARGS: >
-Wno-error=deprecated-declarations
DRI_LOADERS: >
-D glx=disabled
-D gbm=disabled
-D egl=enabled
-D platforms=android
EXTRA_OPTION: >
-D android-stub=true
-D llvm=disabled
-D platform-sdk-version=29
-D valgrind=false
GALLIUM_ST: >
-D dri3=disabled
-D gallium-vdpau=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
-D gallium-opencl=disabled
-D gallium-rusticl=false
LLVM_VERSION: ""
PKG_CONFIG_LIBDIR: "/disable/non/android/system/pc/files"
script:
- PKG_CONFIG_PATH=/usr/local/lib/aarch64-linux-android/pkgconfig/:/android-ndk-r21d/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/lib/aarch64-linux-android/pkgconfig/ CROSS=aarch64-linux-android GALLIUM_DRIVERS=etnaviv,freedreno,lima,panfrost,vc4,v3d VULKAN_DRIVERS=freedreno,broadcom,virtio-experimental .gitlab-ci/meson/build.sh
# x86_64 build:
# Can't do Intel because gen_decoder.c currently requires libexpat, which
# is not a dependency that AOSP wants to accept. Can't do Radeon Gallium
# drivers because they require LLVM, which we don't have an Android build
# of.
- PKG_CONFIG_PATH=/usr/local/lib/x86_64-linux-android/pkgconfig/:/android-ndk-r21d/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/lib/x86_64-linux-android/pkgconfig/ CROSS=x86_64-linux-android GALLIUM_DRIVERS=iris VULKAN_DRIVERS=amd,intel .gitlab-ci/meson/build.sh
.meson-cross:
extends:
- .meson-build
stage: build-misc
variables:
UNWIND: "disabled"
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D platforms=x11
-D osmesa=false
GALLIUM_ST: >
-D dri3=enabled
-D gallium-vdpau=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
.meson-arm:
extends:
- .meson-cross
- .use-debian/arm_build
needs:
- debian/arm_build
variables:
VULKAN_DRIVERS: freedreno,broadcom
GALLIUM_DRIVERS: "etnaviv,freedreno,kmsro,lima,nouveau,panfrost,swrast,tegra,v3d,vc4,zink"
BUILDTYPE: "debugoptimized"
tags:
- aarch64
debian-armhf:
extends:
- .meson-arm
- .ci-deqp-artifacts
variables:
CROSS: armhf
EXTRA_OPTION: >
-D llvm=disabled
-D valgrind=false
MINIO_ARTIFACT_NAME: mesa-armhf
script:
- .gitlab-ci/meson/build.sh
- .gitlab-ci/prepare-artifacts.sh
debian-arm64:
extends:
- .meson-arm
- .ci-deqp-artifacts
variables:
VULKAN_DRIVERS: "freedreno,broadcom,panfrost,imagination-experimental"
EXTRA_OPTION: >
-D llvm=disabled
-D valgrind=false
-D imagination-srv=true
MINIO_ARTIFACT_NAME: mesa-arm64
script:
- .gitlab-ci/meson/build.sh
- .gitlab-ci/prepare-artifacts.sh
debian-arm64-asan:
extends:
- debian-arm64
variables:
EXTRA_OPTION: >
-D llvm=disabled
-D b_sanitize=address
-D valgrind=false
-D tools=dlclose-skip
ARTIFACTS_DEBUG_SYMBOLS: 1
MINIO_ARTIFACT_NAME: mesa-arm64-asan
MESON_TEST_ARGS: "--no-suite mesa:compiler"
debian-arm64-build-test:
extends:
- .meson-arm
- .ci-deqp-artifacts
variables:
VULKAN_DRIVERS: "amd"
EXTRA_OPTION: >
-Dtools=panfrost,imagination
script:
- .gitlab-ci/meson/build.sh
debian-clang:
extends: .meson-build
variables:
LLVM_VERSION: "13"
UNWIND: "enabled"
GALLIUM_DUMP_CPU: "true"
C_ARGS: >
-Wno-error=constant-conversion
-Wno-error=enum-conversion
-Wno-error=implicit-const-int-float-conversion
-Wno-error=initializer-overrides
-Wno-error=sometimes-uninitialized
CPP_ARGS: >
-Wno-error=c99-designator
-Wno-error=implicit-const-int-float-conversion
-Wno-error=overloaded-virtual
-Wno-error=tautological-constant-out-of-range-compare
-Wno-error=unused-const-variable
-Wno-error=unused-private-field
DRI_LOADERS: >
-D glx=dri
-D gbm=enabled
-D egl=enabled
-D glvnd=true
-D platforms=x11,wayland
GALLIUM_ST: >
-D dri3=enabled
-D gallium-extra-hud=true
-D gallium-vdpau=enabled
-D gallium-omx=bellagio
-D gallium-va=enabled
-D gallium-xa=enabled
-D gallium-nine=true
-D gallium-opencl=icd
-D gles1=enabled
-D gles2=enabled
-D llvm=enabled
-D microsoft-clc=enabled
-D shared-llvm=enabled
-D opencl-spirv=true
GALLIUM_DRIVERS: "iris,nouveau,kmsro,r300,r600,freedreno,swrast,svga,v3d,vc4,virgl,etnaviv,panfrost,lima,zink,radeonsi,tegra,d3d12,crocus,i915,asahi"
VULKAN_DRIVERS: intel,amd,freedreno,broadcom,virtio-experimental,swrast,panfrost,imagination-experimental,microsoft-experimental
EXTRA_OPTION:
-D spirv-to-dxil=true
-D osmesa=true
-D imagination-srv=true
-D tools=drm-shim,etnaviv,freedreno,glsl,intel,intel-ui,nir,nouveau,lima,panfrost,asahi,imagination
-D vulkan-layers=device-select,overlay
-D build-aco-tests=true
-D intel-clc=enabled
-D imagination-srv=true
CC: clang
CXX: clang++
debian-clang-release:
extends: debian-clang
variables:
BUILDTYPE: "release"
DRI_LOADERS: >
-D glx=xlib
-D platforms=x11,wayland
windows-vs2019:
extends:
- .build-windows
- .use-windows_build_vs2019
- .windows-build-rules
stage: build-misc
script:
- pwsh -ExecutionPolicy RemoteSigned .\.gitlab-ci\windows\mesa_build.ps1
artifacts:
paths:
- _build/meson-logs/*.txt
- _install/
.debian-cl:
extends: .meson-build
variables:
LLVM_VERSION: "13"
UNWIND: "enabled"
DRI_LOADERS: >
-D glx=disabled
-D egl=disabled
-D gbm=disabled
EXTRA_OPTION: >
-D valgrind=false
debian-clover:
extends: .debian-cl
variables:
GALLIUM_DRIVERS: "r600,radeonsi,swrast"
GALLIUM_ST: >
-D dri3=disabled
-D gallium-vdpau=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
-D gallium-opencl=icd
-D gallium-rusticl=false
debian-rusticl:
extends: .debian-cl
variables:
GALLIUM_DRIVERS: "iris,swrast"
GALLIUM_ST: >
-D dri3=disabled
-D gallium-vdpau=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
-D gallium-opencl=disabled
-D gallium-rusticl=true
debian-vulkan:
extends: .meson-build
variables:
LLVM_VERSION: "13"
UNWIND: "disabled"
DRI_LOADERS: >
-D glx=disabled
-D gbm=disabled
-D egl=disabled
-D platforms=x11,wayland
-D osmesa=false
GALLIUM_ST: >
-D dri3=enabled
-D gallium-vdpau=disabled
-D gallium-omx=disabled
-D gallium-va=disabled
-D gallium-xa=disabled
-D gallium-nine=false
-D gallium-opencl=disabled
-D gallium-rusticl=false
-D b_sanitize=undefined
-D c_args=-fno-sanitize-recover=all
-D cpp_args=-fno-sanitize-recover=all
UBSAN_OPTIONS: "print_stacktrace=1"
VULKAN_DRIVERS: intel,amd,freedreno,broadcom,virtio-experimental,imagination-experimental,microsoft-experimental
EXTRA_OPTION: >
-D vulkan-layers=device-select,overlay
-D build-aco-tests=true
-D intel-clc=disabled
-D imagination-srv=true
debian-i386:
extends:
- .meson-cross
- .use-debian/i386_build
variables:
CROSS: i386
VULKAN_DRIVERS: intel,amd,swrast,virtio-experimental
GALLIUM_DRIVERS: "iris,nouveau,r300,r600,radeonsi,swrast,virgl,zink,crocus"
LLVM_VERSION: 13
EXTRA_OPTION: >
-D vulkan-layers=device-select,overlay
debian-s390x:
extends:
- debian-ppc64el
- .use-debian/s390x_build
- .s390x-rules
tags:
- kvm
variables:
CROSS: s390x
GALLIUM_DRIVERS: "swrast,zink"
LLVM_VERSION: 13
VULKAN_DRIVERS: "swrast"
debian-ppc64el:
extends:
- .meson-cross
- .use-debian/ppc64el_build
- .ppc64el-rules
variables:
CROSS: ppc64el
GALLIUM_DRIVERS: "nouveau,radeonsi,swrast,virgl,zink"
VULKAN_DRIVERS: "amd,swrast"
debian-mingw32-x86_64:
extends: .meson-build_mingw
stage: build-misc
variables:
UNWIND: "disabled"
C_ARGS: >
-Wno-error=format
-Wno-error=format-extra-args
-Wno-error=deprecated-declarations
-Wno-error=unused-function
-Wno-error=unused-variable
-Wno-error=unused-but-set-variable
-Wno-error=unused-value
-Wno-error=switch
-Wno-error=parentheses
-Wno-error=missing-prototypes
-Wno-error=sign-compare
-Wno-error=narrowing
-Wno-error=overflow
CPP_ARGS: $C_ARGS
GALLIUM_DRIVERS: "swrast,d3d12,zink"
VULKAN_DRIVERS: "swrast,amd,microsoft-experimental"
GALLIUM_ST: >
-D gallium-opencl=icd
-D gallium-rusticl=false
-D opencl-spirv=true
-D microsoft-clc=enabled
-D static-libclc=all
-D llvm=enabled
-D gallium-va=true
-D video-codecs=h264dec,h264enc,h265dec,h265enc,vc1dec
EXTRA_OPTION: >
-D min-windows-version=7
-D spirv-to-dxil=true
-D gles1=enabled
-D gles2=enabled
-D osmesa=true
-D cpp_rtti=true
-D shared-glapi=enabled
-D zlib=enabled
--cross-file=.gitlab-ci/x86_64-w64-mingw32


@@ -1,14 +0,0 @@
#!/bin/sh
while true; do
devcds=`find /sys/devices/virtual/devcoredump/ -name data 2>/dev/null`
for i in $devcds; do
echo "Found a devcoredump at $i."
if cp $i /results/first.devcore; then
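# Writing to the devcoredump's data node tells the kernel to free it.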
echo 1 > $i
echo "Saved to the job artifacts at /first.devcore"
exit 0
fi
done
sleep 10
done


@@ -1,123 +0,0 @@
#!/bin/bash
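# Re-export only the allow-listed job variables below; ${!var@Q} shell-quotes
# each value so the generated export lines can be safely sourced later.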
for var in \
ACO_DEBUG \
ASAN_OPTIONS \
BASE_SYSTEM_FORK_HOST_PREFIX \
BASE_SYSTEM_MAINLINE_HOST_PREFIX \
CI_COMMIT_BRANCH \
CI_COMMIT_REF_NAME \
CI_COMMIT_TITLE \
CI_JOB_ID \
CI_JOB_JWT_FILE \
CI_JOB_NAME \
CI_JOB_URL \
CI_MERGE_REQUEST_SOURCE_BRANCH_NAME \
CI_MERGE_REQUEST_TITLE \
CI_NODE_INDEX \
CI_NODE_TOTAL \
CI_PAGES_DOMAIN \
CI_PIPELINE_ID \
CI_PIPELINE_URL \
CI_PROJECT_DIR \
CI_PROJECT_NAME \
CI_PROJECT_PATH \
CI_PROJECT_ROOT_NAMESPACE \
CI_RUNNER_DESCRIPTION \
CI_SERVER_URL \
CROSVM_GALLIUM_DRIVER \
CROSVM_GPU_ARGS \
DEQP_BIN_DIR \
DEQP_CASELIST_FILTER \
DEQP_CASELIST_INV_FILTER \
DEQP_CONFIG \
DEQP_EXPECTED_RENDERER \
DEQP_FRACTION \
DEQP_HEIGHT \
DEQP_RESULTS_DIR \
DEQP_RUNNER_OPTIONS \
DEQP_SUITE \
DEQP_TEMP_DIR \
DEQP_VARIANT \
DEQP_VER \
DEQP_WIDTH \
DEVICE_NAME \
DRIVER_NAME \
EGL_PLATFORM \
ETNA_MESA_DEBUG \
FDO_CI_CONCURRENT \
FDO_UPSTREAM_REPO \
FD_MESA_DEBUG \
FLAKES_CHANNEL \
FREEDRENO_HANGCHECK_MS \
GALLIUM_DRIVER \
GALLIVM_PERF \
GPU_VERSION \
GTEST \
GTEST_FAILS \
GTEST_FRACTION \
GTEST_RESULTS_DIR \
GTEST_RUNNER_OPTIONS \
GTEST_SKIPS \
HWCI_FREQ_MAX \
HWCI_KERNEL_MODULES \
HWCI_KVM \
HWCI_START_XORG \
HWCI_TEST_SCRIPT \
IR3_SHADER_DEBUG \
JOB_ARTIFACTS_BASE \
JOB_RESULTS_PATH \
JOB_ROOTFS_OVERLAY_PATH \
KERNEL_IMAGE_BASE_URL \
KERNEL_IMAGE_NAME \
LD_LIBRARY_PATH \
LP_NUM_THREADS \
MESA_BASE_TAG \
MESA_BUILD_PATH \
MESA_DEBUG \
MESA_GLES_VERSION_OVERRIDE \
MESA_GLSL_VERSION_OVERRIDE \
MESA_GL_VERSION_OVERRIDE \
MESA_IMAGE \
MESA_IMAGE_PATH \
MESA_IMAGE_TAG \
MESA_LOADER_DRIVER_OVERRIDE \
MESA_TEMPLATES_COMMIT \
MESA_VK_IGNORE_CONFORMANCE_WARNING \
MESA_SPIRV_LOG_LEVEL \
MINIO_HOST \
MINIO_RESULTS_UPLOAD \
NIR_DEBUG \
PAN_I_WANT_A_BROKEN_VULKAN_DRIVER \
PAN_MESA_DEBUG \
PIGLIT_FRACTION \
PIGLIT_NO_WINDOW \
PIGLIT_OPTIONS \
PIGLIT_PLATFORM \
PIGLIT_PROFILES \
PIGLIT_REPLAY_ARTIFACTS_BASE_URL \
PIGLIT_REPLAY_DESCRIPTION_FILE \
PIGLIT_REPLAY_DEVICE_NAME \
PIGLIT_REPLAY_EXTRA_ARGS \
PIGLIT_REPLAY_LOOP_TIMES \
PIGLIT_REPLAY_REFERENCE_IMAGES_BASE \
PIGLIT_REPLAY_SUBCOMMAND \
PIGLIT_RESULTS \
PIGLIT_TESTS \
PIPELINE_ARTIFACTS_BASE \
RADV_DEBUG \
RADV_PERFTEST \
SKQP_ASSETS_DIR \
SKQP_BACKENDS \
TU_DEBUG \
VIRGL_HOST_API \
WAFFLE_PLATFORM \
VK_CPU \
VK_DRIVER \
VK_ICD_FILENAMES \
VKD3D_PROTON_RESULTS \
; do
if [ -n "${!var+x}" ]; then
echo "export $var=${!var@Q}"
fi
done


@@ -1,23 +0,0 @@
#!/bin/sh
# Very early init, used to make sure devices and network are set up and
# reachable.
set -ex
cd /
mount -t proc none /proc
mount -t sysfs none /sys
mount -t debugfs none /sys/kernel/debug
mount -t devtmpfs none /dev || echo possibly already mounted
mkdir -p /dev/pts
mount -t devpts devpts /dev/pts
mount -t tmpfs tmpfs /tmp
echo "nameserver 8.8.8.8" > /etc/resolv.conf
[ -z "$NFS_SERVER_IP" ] || echo "$NFS_SERVER_IP caching-proxy" >> /etc/hosts
# Set the time so we can validate certificates before we fetch anything;
# however as not all DUTs have network, make this non-fatal.
for i in 1 2 3; do sntp -sS pool.ntp.org && break || sleep 2; done || true


@@ -1,165 +0,0 @@
#!/bin/sh
# Make sure to kill itself and all the children process from this script on
# exiting, since any console output may interfere with LAVA signals handling,
# which is based on the log console.
cleanup() {
if [ "$BACKGROUND_PIDS" = "" ]; then
return 0
fi
set +x
echo "Killing all child processes"
for pid in $BACKGROUND_PIDS
do
kill "$pid" 2>/dev/null || true
done
# Sleep just a little to give enough time for subprocesses to be gracefully
# killed. Then apply a SIGKILL if necessary.
sleep 5
for pid in $BACKGROUND_PIDS
do
kill -9 "$pid" 2>/dev/null || true
done
BACKGROUND_PIDS=
set -x
}
trap cleanup INT TERM EXIT
# Space separated values with the PIDS of the processes started in the
# background by this script
BACKGROUND_PIDS=
# Second-stage init, used to set up devices and our job environment before
# running tests.
. /set-job-env-vars.sh
set -ex
# Set up any devices required by the jobs
[ -z "$HWCI_KERNEL_MODULES" ] || {
echo -n $HWCI_KERNEL_MODULES | xargs -d, -n1 /usr/sbin/modprobe
}
#
# Load the KVM module specific to the detected CPU virtualization extensions:
# - vmx for Intel VT
# - svm for AMD-V
#
# Additionally, download the kernel image to boot the VM via HWCI_TEST_SCRIPT.
#
if [ "$HWCI_KVM" = "true" ]; then
unset KVM_KERNEL_MODULE
grep -qs '\bvmx\b' /proc/cpuinfo && KVM_KERNEL_MODULE=kvm_intel || {
grep -qs '\bsvm\b' /proc/cpuinfo && KVM_KERNEL_MODULE=kvm_amd
}
[ -z "${KVM_KERNEL_MODULE}" ] && \
echo "WARNING: Failed to detect CPU virtualization extensions" || \
modprobe ${KVM_KERNEL_MODULE}
mkdir -p /lava-files
wget -S --progress=dot:giga -O /lava-files/${KERNEL_IMAGE_NAME} \
"${KERNEL_IMAGE_BASE_URL}/${KERNEL_IMAGE_NAME}"
fi
# Fix prefix confusion: the build installs to $CI_PROJECT_DIR, but we expect
# it in /install
ln -sf $CI_PROJECT_DIR/install /install
export LD_LIBRARY_PATH=/install/lib
export LIBGL_DRIVERS_PATH=/install/lib/dri
# Store Mesa's disk cache under /tmp, rather than sending it out over NFS.
export XDG_CACHE_HOME=/tmp
# Make sure Python can find all our imports
export PYTHONPATH=$(python3 -c "import sys;print(\":\".join(sys.path))")
if [ "$HWCI_FREQ_MAX" = "true" ]; then
# Ensure initialization of the DRM device (needed by MSM)
head -0 /dev/dri/renderD128
# Disable GPU frequency scaling
DEVFREQ_GOVERNOR=`find /sys/devices -name governor | grep gpu || true`
test -z "$DEVFREQ_GOVERNOR" || echo performance > $DEVFREQ_GOVERNOR || true
# Disable CPU frequency scaling
echo performance | tee -a /sys/devices/system/cpu/cpufreq/policy*/scaling_governor || true
# Disable GPU runtime power management
GPU_AUTOSUSPEND=`find /sys/devices -name autosuspend_delay_ms | grep gpu | head -1`
test -z "$GPU_AUTOSUSPEND" || echo -1 > $GPU_AUTOSUSPEND || true
# Lock Intel GPU frequency to 70% of the maximum allowed by hardware
# and enable throttling detection & reporting.
# Additionally, set the upper limit for CPU scaling frequency to 65% of the
# maximum permitted, as an additional measure to mitigate thermal throttling.
./intel-gpu-freq.sh -s 70% --cpu-set-max 65% -g all -d
fi
# Increase freedreno hangcheck timer because it's right at the edge of the
# spilling tests timing out (and some traces, too)
if [ -n "$FREEDRENO_HANGCHECK_MS" ]; then
echo $FREEDRENO_HANGCHECK_MS | tee -a /sys/kernel/debug/dri/128/hangcheck_period_ms
fi
# Start a little daemon to capture the first devcoredump we encounter. (They
# expire after 5 minutes, so we poll for them).
/capture-devcoredump.sh &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
# If we want Xorg to be running for the test, start it up before the
# HWCI_TEST_SCRIPT. We have to use xinit to start X (otherwise, without
# -displayfd, we could race against Xorg's startup), but xinit would eat
# the client's return code, so it is run in the background instead.
if [ -n "$HWCI_START_XORG" ]; then
echo "touch /xorg-started; sleep 100000" > /xorg-script
env \
xinit /bin/sh /xorg-script -- /usr/bin/Xorg -noreset -s 0 -dpms -logfile /Xorg.0.log &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
# Wait for xorg to be ready for connections.
for i in 1 2 3 4 5; do
if [ -e /xorg-started ]; then
break
fi
sleep 5
done
export DISPLAY=:0
fi
RESULT=fail
set +e
sh -c "$HWCI_TEST_SCRIPT"
EXIT_CODE=$?
set -e
# Let's make sure the results are always stored in current working directory
mv -f ${CI_PROJECT_DIR}/results ./ 2>/dev/null || true
[ ${EXIT_CODE} -ne 0 ] || rm -rf results/trace/"$PIGLIT_REPLAY_DEVICE_NAME"
# Make sure that capture-devcoredump is done before we start trying to tar up
# artifacts -- if it's writing while tar is reading, tar will throw an error and
# kill the job.
cleanup
# upload artifacts
if [ -n "$MINIO_RESULTS_UPLOAD" ]; then
tar --zstd -cf results.tar.zst results/;
ci-fairy minio login --token-file "${CI_JOB_JWT_FILE}";
ci-fairy minio cp results.tar.zst minio://"$MINIO_RESULTS_UPLOAD"/results.tar.zst;
fi
# We still need to echo the hwci: mesa message, as some scripts rely on it, such
# as the python ones inside the bare-metal folder
[ ${EXIT_CODE} -eq 0 ] && RESULT=pass
set +x
echo "hwci: mesa: $RESULT"
# Sleep a bit to avoid kernel dump message interleave from LAVA ENDTC signal
sleep 1
exit $EXIT_CODE


@@ -1,758 +0,0 @@
#!/bin/sh
#
# This is a utility script to manage Intel GPU frequencies.
# It can be used for debugging performance problems or trying to obtain a stable
# frequency while benchmarking.
#
# Note the Intel i915 GPU driver allows changing the minimum, maximum and boost
# frequencies in steps of 50 MHz via:
#
# /sys/class/drm/card<n>/<freq_info>
#
# Where <n> is the DRM card index and <freq_info> one of the following:
#
# - gt_max_freq_mhz (enforced maximum freq)
# - gt_min_freq_mhz (enforced minimum freq)
# - gt_boost_freq_mhz (enforced boost freq)
#
# The hardware capabilities can be accessed via:
#
# - gt_RP0_freq_mhz (supported maximum freq)
# - gt_RPn_freq_mhz (supported minimum freq)
# - gt_RP1_freq_mhz (most efficient freq)
#
# The current frequency can be read from:
# - gt_act_freq_mhz (the actual GPU freq)
# - gt_cur_freq_mhz (the last requested freq)
#
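# For example, with a hypothetical card index 0:
#
#   cat /sys/class/drm/card0/gt_act_freq_mhz         # read the actual freq
#   echo 700 > /sys/class/drm/card0/gt_max_freq_mhz  # cap the enforced max at 700 MHz
#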
# Also note that in addition to GPU management, the script offers the
# possibility to adjust CPU operating frequencies. However, this is currently
# limited to just setting the maximum scaling frequency as a percentage of the
# maximum frequency allowed by the hardware.
#
# Copyright (C) 2022 Collabora Ltd.
# Author: Cristian Ciocaltea <cristian.ciocaltea@collabora.com>
#
# SPDX-License-Identifier: MIT
#
#
# Constants
#
# GPU
DRM_FREQ_SYSFS_PATTERN="/sys/class/drm/card%d/gt_%s_freq_mhz"
ENF_FREQ_INFO="max min boost"
CAP_FREQ_INFO="RP0 RPn RP1"
ACT_FREQ_INFO="act cur"
THROTT_DETECT_SLEEP_SEC=2
THROTT_DETECT_PID_FILE_PATH=/tmp/thrott-detect.pid
# CPU
CPU_SYSFS_PREFIX=/sys/devices/system/cpu
CPU_PSTATE_SYSFS_PATTERN="${CPU_SYSFS_PREFIX}/intel_pstate/%s"
CPU_FREQ_SYSFS_PATTERN="${CPU_SYSFS_PREFIX}/cpu%s/cpufreq/%s_freq"
CAP_CPU_FREQ_INFO="cpuinfo_max cpuinfo_min"
ENF_CPU_FREQ_INFO="scaling_max scaling_min"
ACT_CPU_FREQ_INFO="scaling_cur"
#
# Global variables.
#
unset INTEL_DRM_CARD_INDEX
unset GET_ACT_FREQ GET_ENF_FREQ GET_CAP_FREQ
unset SET_MIN_FREQ SET_MAX_FREQ
unset MONITOR_FREQ
unset CPU_SET_MAX_FREQ
unset DETECT_THROTT
unset DRY_RUN
#
# Simple printf based stderr logger.
#
log() {
local msg_type=$1
shift
printf "%s: %s: " "${msg_type}" "${0##*/}" >&2
printf "$@" >&2
printf "\n" >&2
}
#
# Helper to print sysfs path for the given card index and freq info.
#
# arg1: Frequency info sysfs name, one of *_FREQ_INFO constants above
# arg2: Video card index, defaults to INTEL_DRM_CARD_INDEX
#
print_freq_sysfs_path() {
printf ${DRM_FREQ_SYSFS_PATTERN} "${2:-${INTEL_DRM_CARD_INDEX}}" "$1"
}
#
# Helper to set INTEL_DRM_CARD_INDEX for the first identified Intel video card.
#
identify_intel_gpu() {
local i=0 vendor path
while [ ${i} -lt 16 ]; do
[ -c "/dev/dri/card$i" ] || {
i=$((i + 1))
continue
}
path=$(print_freq_sysfs_path "" ${i})
path=${path%/*}/device/vendor
[ -r "${path}" ] && read vendor < "${path}" && \
[ "${vendor}" = "0x8086" ] && INTEL_DRM_CARD_INDEX=$i && return 0
i=$((i + 1))
done
return 1
}
#
# Read the specified freq info from sysfs.
#
# arg1: Flag (y/n) to also enable printing the freq info.
# arg2...: Frequency info sysfs name(s), see *_FREQ_INFO constants above
# return: Global variable(s) FREQ_${arg} containing the requested information
#
read_freq_info() {
local var val info path print=0 ret=0
[ "$1" = "y" ] && print=1
shift
while [ $# -gt 0 ]; do
info=$1
shift
var=FREQ_${info}
path=$(print_freq_sysfs_path "${info}")
[ -r ${path} ] && read ${var} < ${path} || {
log ERROR "Failed to read freq info from: %s" "${path}"
ret=1
continue
}
[ -n "${var}" ] || {
log ERROR "Got empty freq info from: %s" "${path}"
ret=1
continue
}
[ ${print} -eq 1 ] && {
eval val=\$${var}
printf "%6s: %4s MHz\n" "${info}" "${val}"
}
done
return ${ret}
}
#
# Display requested info.
#
print_freq_info() {
local req_freq
[ -n "${GET_CAP_FREQ}" ] && {
printf "* Hardware capabilities\n"
read_freq_info y ${CAP_FREQ_INFO}
printf "\n"
}
[ -n "${GET_ENF_FREQ}" ] && {
printf "* Enforcements\n"
read_freq_info y ${ENF_FREQ_INFO}
printf "\n"
}
[ -n "${GET_ACT_FREQ}" ] && {
printf "* Actual\n"
read_freq_info y ${ACT_FREQ_INFO}
printf "\n"
}
}
#
# Helper to print frequency value as requested by user via '-s, --set' option.
# arg1: user requested freq value
#
compute_freq_set() {
local val
case "$1" in
+)
val=${FREQ_RP0}
;;
-)
val=${FREQ_RPn}
;;
*%)
val=$((${1%?} * ${FREQ_RP0} / 100))
# Adjust freq to comply with 50 MHz increments
val=$((val / 50 * 50))
;;
*[!0-9]*)
log ERROR "Cannot set freq to invalid value: %s" "$1"
return 1
;;
"")
log ERROR "Cannot set freq to unspecified value"
return 1
;;
*)
# Adjust freq to comply with 50 MHz increments
val=$(($1 / 50 * 50))
;;
esac
printf "%s" "${val}"
}
#
# Helper for set_freq().
#
set_freq_max() {
log INFO "Setting GPU max freq to %s MHz" "${SET_MAX_FREQ}"
read_freq_info n min || return $?
[ ${SET_MAX_FREQ} -gt ${FREQ_RP0} ] && {
log ERROR "Cannot set GPU max freq (%s) to be greater than hw max freq (%s)" \
"${SET_MAX_FREQ}" "${FREQ_RP0}"
return 1
}
[ ${SET_MAX_FREQ} -lt ${FREQ_RPn} ] && {
log ERROR "Cannot set GPU max freq (%s) to be less than hw min freq (%s)" \
"${SET_MIN_FREQ}" "${FREQ_RPn}"
return 1
}
[ ${SET_MAX_FREQ} -lt ${FREQ_min} ] && {
log ERROR "Cannot set GPU max freq (%s) to be less than min freq (%s)" \
"${SET_MAX_FREQ}" "${FREQ_min}"
return 1
}
[ -z "${DRY_RUN}" ] || return 0
printf "%s" ${SET_MAX_FREQ} | tee $(print_freq_sysfs_path max) \
$(print_freq_sysfs_path boost) > /dev/null
[ $? -eq 0 ] || {
log ERROR "Failed to set GPU max frequency"
return 1
}
}
#
# Helper for set_freq().
#
set_freq_min() {
log INFO "Setting GPU min freq to %s MHz" "${SET_MIN_FREQ}"
read_freq_info n max || return $?
[ ${SET_MIN_FREQ} -gt ${FREQ_max} ] && {
log ERROR "Cannot set GPU min freq (%s) to be greater than max freq (%s)" \
"${SET_MIN_FREQ}" "${FREQ_max}"
return 1
}
[ ${SET_MIN_FREQ} -lt ${FREQ_RPn} ] && {
log ERROR "Cannot set GPU min freq (%s) to be less than hw min freq (%s)" \
"${SET_MIN_FREQ}" "${FREQ_RPn}"
return 1
}
[ -z "${DRY_RUN}" ] || return 0
printf "%s" ${SET_MIN_FREQ} > $(print_freq_sysfs_path min)
[ $? -eq 0 ] || {
log ERROR "Failed to set GPU min frequency"
return 1
}
}
#
# Set min or max or both GPU frequencies to the user indicated values.
#
set_freq() {
# Get hw max & min frequencies
read_freq_info n RP0 RPn || return $?
[ -z "${SET_MAX_FREQ}" ] || {
SET_MAX_FREQ=$(compute_freq_set "${SET_MAX_FREQ}")
[ -z "${SET_MAX_FREQ}" ] && return 1
}
[ -z "${SET_MIN_FREQ}" ] || {
SET_MIN_FREQ=$(compute_freq_set "${SET_MIN_FREQ}")
[ -z "${SET_MIN_FREQ}" ] && return 1
}
#
# Ensure correct operation order, to avoid setting min freq
# to a value which is larger than max freq.
#
# E.g.:
# crt_min=crt_max=600; new_min=new_max=700
# > operation order: max=700; min=700
#
# crt_min=crt_max=600; new_min=new_max=500
# > operation order: min=500; max=500
#
if [ -n "${SET_MAX_FREQ}" ] && [ -n "${SET_MIN_FREQ}" ]; then
[ ${SET_MAX_FREQ} -lt ${SET_MIN_FREQ} ] && {
log ERROR "Cannot set GPU max freq to be less than min freq"
return 1
}
read_freq_info n min || return $?
if [ ${SET_MAX_FREQ} -lt ${FREQ_min} ]; then
set_freq_min || return $?
set_freq_max
else
set_freq_max || return $?
set_freq_min
fi
elif [ -n "${SET_MAX_FREQ}" ]; then
set_freq_max
elif [ -n "${SET_MIN_FREQ}" ]; then
set_freq_min
else
log "Unexpected call to set_freq()"
return 1
fi
}
#
# Helper for detect_throttling().
#
get_thrott_detect_pid() {
[ -e ${THROTT_DETECT_PID_FILE_PATH} ] || return 0
local pid
read pid < ${THROTT_DETECT_PID_FILE_PATH} || {
log ERROR "Failed to read pid from: %s" "${THROTT_DETECT_PID_FILE_PATH}"
return 1
}
local proc_path=/proc/${pid:-invalid}/cmdline
[ -r ${proc_path} ] && grep -qs "${0##*/}" ${proc_path} && {
printf "%s" "${pid}"
return 0
}
# Remove orphaned PID file
rm -rf ${THROTT_DETECT_PID_FILE_PATH}
return 1
}
#
# Control detection and reporting of GPU throttling events.
# arg1: start - run throttle detector in background
# stop - stop throttle detector process, if any
# status - verify if throttle detector is running
#
detect_throttling() {
local pid
pid=$(get_thrott_detect_pid)
case "$1" in
status)
printf "Throttling detector is "
[ -z "${pid}" ] && printf "not running\n" && return 0
printf "running (pid=%s)\n" ${pid}
;;
stop)
[ -z "${pid}" ] && return 0
log INFO "Stopping throttling detector (pid=%s)" "${pid}"
kill ${pid}; sleep 1; kill -0 ${pid} 2>/dev/null && kill -9 ${pid}
rm -rf ${THROTT_DETECT_PID_FILE_PATH}
;;
start)
[ -n "${pid}" ] && {
log WARN "Throttling detector is already running (pid=%s)" ${pid}
return 0
}
(
read_freq_info n RPn || exit $?
while true; do
sleep ${THROTT_DETECT_SLEEP_SEC}
read_freq_info n act min cur || exit $?
#
# The throttling seems to occur when act freq goes below min.
# However, it's necessary to exclude the idle states, where
# act freq normally reaches RPn and cur goes below min.
#
[ ${FREQ_act} -lt ${FREQ_min} ] && \
[ ${FREQ_act} -gt ${FREQ_RPn} ] && \
[ ${FREQ_cur} -ge ${FREQ_min} ] && \
printf "GPU throttling detected: act=%s min=%s cur=%s RPn=%s\n" \
${FREQ_act} ${FREQ_min} ${FREQ_cur} ${FREQ_RPn}
done
) &
pid=$!
log INFO "Started GPU throttling detector (pid=%s)" ${pid}
printf "%s\n" ${pid} > ${THROTT_DETECT_PID_FILE_PATH} || \
log WARN "Failed to write throttle detector PID file"
;;
esac
}
#
# Retrieve the list of online CPUs.
#
get_online_cpus() {
local path cpu_index
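# cpu0 is listed unconditionally: it typically has no 'online' sysfs node.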
printf "0"
for path in $(grep 1 ${CPU_SYSFS_PREFIX}/cpu*/online); do
cpu_index=${path##*/cpu}
printf " %s" ${cpu_index%%/*}
done
}
#
# Helper to print sysfs path for the given CPU index and freq info.
#
# arg1: Frequency info sysfs name, one of *_CPU_FREQ_INFO constants above
# arg2: CPU index
#
print_cpu_freq_sysfs_path() {
printf ${CPU_FREQ_SYSFS_PATTERN} "$2" "$1"
}
#
# Read the specified CPU freq info from sysfs.
#
# arg1: CPU index
# arg2: Flag (y/n) to also enable printing the freq info.
# arg3...: Frequency info sysfs name(s), see *_CPU_FREQ_INFO constants above
# return: Global variable(s) CPU_FREQ_${arg} containing the requested information
#
read_cpu_freq_info() {
local var val info path cpu_index print=0 ret=0
cpu_index=$1
[ "$2" = "y" ] && print=1
shift 2
while [ $# -gt 0 ]; do
info=$1
shift
var=CPU_FREQ_${info}
path=$(print_cpu_freq_sysfs_path "${info}" ${cpu_index})
[ -r ${path} ] && read ${var} < ${path} || {
log ERROR "Failed to read CPU freq info from: %s" "${path}"
ret=1
continue
}
[ -n "${var}" ] || {
log ERROR "Got empty CPU freq info from: %s" "${path}"
ret=1
continue
}
[ ${print} -eq 1 ] && {
eval val=\$${var}
printf "%6s: %4s Hz\n" "${info}" "${val}"
}
done
return ${ret}
}
#
# Helper to print freq. value as requested by user via '--cpu-set-max' option.
# arg1: user requested freq value
#
compute_cpu_freq_set() {
local val
case "$1" in
+)
val=${CPU_FREQ_cpuinfo_max}
;;
-)
val=${CPU_FREQ_cpuinfo_min}
;;
*%)
val=$((${1%?} * ${CPU_FREQ_cpuinfo_max} / 100))
;;
*[!0-9]*)
log ERROR "Cannot set CPU freq to invalid value: %s" "$1"
return 1
;;
"")
log ERROR "Cannot set CPU freq to unspecified value"
return 1
;;
*)
log ERROR "Cannot set CPU freq to custom value; use +, -, or % instead"
return 1
;;
esac
printf "%s" "${val}"
}
#
# Adjust CPU max scaling frequency.
#
set_cpu_freq_max() {
local target_freq res=0
case "${CPU_SET_MAX_FREQ}" in
+)
target_freq=100
;;
-)
target_freq=1
;;
*%)
target_freq=${CPU_SET_MAX_FREQ%?}
;;
*)
log ERROR "Invalid CPU freq"
return 1
;;
esac
local pstate_info=$(printf "${CPU_PSTATE_SYSFS_PATTERN}" max_perf_pct)
[ -e "${pstate_info}" ] && {
log INFO "Setting intel_pstate max perf to %s" "${target_freq}%"
printf "%s" "${target_freq}" > "${pstate_info}"
[ $? -eq 0 ] || {
log ERROR "Failed to set intel_pstate max perf"
res=1
}
}
local cpu_index
for cpu_index in $(get_online_cpus); do
read_cpu_freq_info ${cpu_index} n ${CAP_CPU_FREQ_INFO} || { res=$?; continue; }
target_freq=$(compute_cpu_freq_set "${CPU_SET_MAX_FREQ}")
[ -z "${target_freq}" ] && { res=$?; continue; }
log INFO "Setting CPU%s max scaling freq to %s Hz" ${cpu_index} "${target_freq}"
[ -n "${DRY_RUN}" ] && continue
printf "%s" ${target_freq} > $(print_cpu_freq_sysfs_path scaling_max ${cpu_index})
[ $? -eq 0 ] || {
res=1
log ERROR "Failed to set CPU%s max scaling frequency" ${cpu_index}
}
done
return ${res}
}
#
# Show help message.
#
print_usage() {
cat <<EOF
Usage: ${0##*/} [OPTION]...
A script to manage Intel GPU frequencies. Can be used for debugging performance
problems or trying to obtain a stable frequency while benchmarking.
Note Intel GPUs only accept specific frequencies, usually multiples of 50 MHz.
Options:
-g, --get [act|enf|cap|all]
Get frequency information: active (default), enforced,
hardware capabilities or all of them.
-s, --set [{min|max}=]{FREQUENCY[%]|+|-}
Set min or max frequency to the given value (MHz).
Append '%' to interpret FREQUENCY as % of hw max.
Use '+' or '-' to set frequency to hardware max or min.
Omit min/max prefix to set both frequencies.
-r, --reset Reset frequencies to hardware defaults.
-m, --monitor [act|enf|cap|all]
Monitor the indicated frequencies via 'watch' utility.
See '-g, --get' option for more details.
-d|--detect-thrott [start|stop|status]
Start (default operation) the throttling detector
as a background process. Use 'stop' or 'status' to
terminate the detector process or verify its status.
--cpu-set-max {FREQUENCY%|+|-}
Set CPU max scaling frequency as % of hw max.
Use '+' or '-' to set frequency to hardware max or min.
--dry-run See what the script will do without applying any
frequency changes.
-h, --help Display this help text and exit.
EOF
}
#
# Parse user input for '-g, --get' option.
# Returns 0 if a value has been provided, otherwise 1.
#
parse_option_get() {
local ret=0
case "$1" in
act) GET_ACT_FREQ=1;;
enf) GET_ENF_FREQ=1;;
cap) GET_CAP_FREQ=1;;
all) GET_ACT_FREQ=1; GET_ENF_FREQ=1; GET_CAP_FREQ=1;;
-*|"")
# No value provided, using default.
GET_ACT_FREQ=1
ret=1
;;
*)
print_usage
exit 1
;;
esac
return ${ret}
}
#
# Validate user input for '-s, --set' option.
# arg1: input value to be validated
# arg2: optional flag indicating input is restricted to %
#
validate_option_set() {
case "$1" in
+|-|[0-9]%|[0-9][0-9]%)
return 0
;;
*[!0-9]*|"")
print_usage
exit 1
;;
esac
[ -z "$2" ] || { print_usage; exit 1; }
}
#
# Parse script arguments.
#
[ $# -eq 0 ] && { print_usage; exit 1; }
while [ $# -gt 0 ]; do
case "$1" in
-g|--get)
parse_option_get "$2" && shift
;;
-s|--set)
shift
case "$1" in
min=*)
SET_MIN_FREQ=${1#min=}
validate_option_set "${SET_MIN_FREQ}"
;;
max=*)
SET_MAX_FREQ=${1#max=}
validate_option_set "${SET_MAX_FREQ}"
;;
*)
SET_MIN_FREQ=$1
validate_option_set "${SET_MIN_FREQ}"
SET_MAX_FREQ=${SET_MIN_FREQ}
;;
esac
;;
-r|--reset)
RESET_FREQ=1
SET_MIN_FREQ="-"
SET_MAX_FREQ="+"
;;
-m|--monitor)
MONITOR_FREQ=act
parse_option_get "$2" && MONITOR_FREQ=$2 && shift
;;
-d|--detect-thrott)
DETECT_THROTT=start
case "$2" in
start|stop|status)
DETECT_THROTT=$2
shift
;;
esac
;;
--cpu-set-max)
shift
CPU_SET_MAX_FREQ=$1
validate_option_set "${CPU_SET_MAX_FREQ}" restricted
;;
--dry-run)
DRY_RUN=1
;;
-h|--help)
print_usage
exit 0
;;
*)
print_usage
exit 1
;;
esac
shift
done
#
# Main
#
RET=0
identify_intel_gpu || {
log INFO "No Intel GPU detected"
exit 0
}
[ -n "${SET_MIN_FREQ}${SET_MAX_FREQ}" ] && { set_freq || RET=$?; }
print_freq_info
[ -n "${DETECT_THROTT}" ] && detect_throttling ${DETECT_THROTT}
[ -n "${CPU_SET_MAX_FREQ}" ] && { set_cpu_freq_max || RET=$?; }
[ -n "${MONITOR_FREQ}" ] && {
log INFO "Entering frequency monitoring mode"
sleep 2
exec watch -d -n 1 "$0" -g "${MONITOR_FREQ}"
}
exit ${RET}


@@ -1,21 +0,0 @@
#!/bin/sh
set -ex
_XORG_SCRIPT="/xorg-script"
_FLAG_FILE="/xorg-started"
echo "touch ${_FLAG_FILE}; sleep 100000" > "${_XORG_SCRIPT}"
if [ "x$1" != "x" ]; then
export LD_LIBRARY_PATH="${1}/lib"
export LIBGL_DRIVERS_PATH="${1}/lib/dri"
fi
xinit /bin/sh "${_XORG_SCRIPT}" -- /usr/bin/Xorg vt45 -noreset -s 0 -dpms -logfile /Xorg.0.log &
# Wait for xorg to be ready for connections.
for i in 1 2 3 4 5; do
if [ -e "${_FLAG_FILE}" ]; then
break
fi
sleep 5
done


@@ -1,63 +0,0 @@
CONFIG_LOCALVERSION_AUTO=y
CONFIG_DEBUG_KERNEL=y
# abootimg with a 'dummy' rootfs fails with root=/dev/nfs
CONFIG_BLK_DEV_INITRD=n
CONFIG_DEVFREQ_GOV_PERFORMANCE=y
CONFIG_DEVFREQ_GOV_POWERSAVE=y
CONFIG_DEVFREQ_GOV_USERSPACE=y
CONFIG_DEVFREQ_GOV_PASSIVE=y
CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND=y
CONFIG_DRM=y
CONFIG_DRM_ETNAVIV=y
CONFIG_DRM_ROCKCHIP=y
CONFIG_DRM_PANFROST=y
CONFIG_DRM_LIMA=y
CONFIG_DRM_PANEL_SIMPLE=y
CONFIG_PWM_CROS_EC=y
CONFIG_BACKLIGHT_PWM=y
CONFIG_ROCKCHIP_CDN_DP=n
CONFIG_SPI_ROCKCHIP=y
CONFIG_PWM_ROCKCHIP=y
CONFIG_PHY_ROCKCHIP_DP=y
CONFIG_DWMAC_ROCKCHIP=y
CONFIG_MFD_RK808=y
CONFIG_REGULATOR_RK808=y
CONFIG_RTC_DRV_RK808=y
CONFIG_COMMON_CLK_RK808=y
CONFIG_REGULATOR_FAN53555=y
CONFIG_REGULATOR=y
CONFIG_REGULATOR_VCTRL=y
CONFIG_KASAN=n
CONFIG_KASAN_INLINE=n
CONFIG_STACKTRACE=n
CONFIG_TMPFS=y
CONFIG_PROVE_LOCKING=n
CONFIG_DEBUG_LOCKDEP=n
CONFIG_SOFTLOCKUP_DETECTOR=n
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=n
CONFIG_FW_LOADER_COMPRESS=y
CONFIG_USB_USBNET=y
CONFIG_NETDEVICES=y
CONFIG_USB_NET_DRIVERS=y
CONFIG_USB_RTL8152=y
CONFIG_USB_NET_AX8817X=y
CONFIG_USB_NET_SMSC95XX=y
# TK1
CONFIG_ARM_TEGRA_DEVFREQ=y
# 32-bit build failure
CONFIG_DRM_MSM=n


@@ -1,173 +0,0 @@
CONFIG_LOCALVERSION_AUTO=y
CONFIG_DEBUG_KERNEL=y
# abootimg with a 'dummy' rootfs fails with root=/dev/nfs
CONFIG_BLK_DEV_INITRD=n
CONFIG_DEVFREQ_GOV_PERFORMANCE=y
CONFIG_DEVFREQ_GOV_POWERSAVE=y
CONFIG_DEVFREQ_GOV_USERSPACE=y
CONFIG_DEVFREQ_GOV_PASSIVE=y
CONFIG_DRM=y
CONFIG_DRM_ROCKCHIP=y
CONFIG_DRM_PANFROST=y
CONFIG_DRM_LIMA=y
CONFIG_DRM_PANEL_SIMPLE=y
CONFIG_DRM_PANEL_EDP=y
CONFIG_DRM_MSM=y
CONFIG_DRM_ETNAVIV=y
CONFIG_DRM_I2C_ADV7511=y
CONFIG_PWM_CROS_EC=y
CONFIG_BACKLIGHT_PWM=y
CONFIG_ROCKCHIP_CDN_DP=n
CONFIG_SPI_ROCKCHIP=y
CONFIG_PWM_ROCKCHIP=y
CONFIG_PHY_ROCKCHIP_DP=y
CONFIG_DWMAC_ROCKCHIP=y
CONFIG_STMMAC_ETH=y
CONFIG_TYPEC_FUSB302=y
CONFIG_TYPEC=y
CONFIG_TYPEC_TCPM=y
# MSM platform bits
# For CONFIG_QCOM_LMH
CONFIG_OF=y
CONFIG_QCOM_COMMAND_DB=y
CONFIG_QCOM_RPMHPD=y
CONFIG_QCOM_RPMPD=y
CONFIG_SDM_GPUCC_845=y
CONFIG_SDM_VIDEOCC_845=y
CONFIG_SDM_DISPCC_845=y
CONFIG_SDM_LPASSCC_845=y
CONFIG_SDM_CAMCC_845=y
CONFIG_RESET_QCOM_PDC=y
CONFIG_DRM_TI_SN65DSI86=y
CONFIG_I2C_QCOM_GENI=y
CONFIG_SPI_QCOM_GENI=y
CONFIG_PHY_QCOM_QUSB2=y
CONFIG_PHY_QCOM_QMP=y
CONFIG_QCOM_CLK_APCC_MSM8996=y
CONFIG_QCOM_LLCC=y
CONFIG_QCOM_LMH=y
CONFIG_QCOM_SPMI_TEMP_ALARM=y
CONFIG_QCOM_WDT=y
CONFIG_POWER_RESET_QCOM_PON=y
CONFIG_RTC_DRV_PM8XXX=y
CONFIG_INTERCONNECT=y
CONFIG_INTERCONNECT_QCOM=y
CONFIG_INTERCONNECT_QCOM_SDM845=y
CONFIG_INTERCONNECT_QCOM_MSM8916=y
CONFIG_INTERCONNECT_QCOM_OSM_L3=y
CONFIG_INTERCONNECT_QCOM_SC7180=y
CONFIG_CRYPTO_DEV_QCOM_RNG=y
CONFIG_SC_DISPCC_7180=y
CONFIG_SC_GPUCC_7180=y
# db410c ethernet
CONFIG_USB_RTL8152=y
# db820c ethernet
CONFIG_ATL1C=y
CONFIG_ARCH_ALPINE=n
CONFIG_ARCH_BCM2835=n
CONFIG_ARCH_BCM_IPROC=n
CONFIG_ARCH_BERLIN=n
CONFIG_ARCH_BRCMSTB=n
CONFIG_ARCH_EXYNOS=n
CONFIG_ARCH_K3=n
CONFIG_ARCH_LAYERSCAPE=n
CONFIG_ARCH_LG1K=n
CONFIG_ARCH_HISI=n
CONFIG_ARCH_MVEBU=n
CONFIG_ARCH_SEATTLE=n
CONFIG_ARCH_SYNQUACER=n
CONFIG_ARCH_RENESAS=n
CONFIG_ARCH_R8A774A1=n
CONFIG_ARCH_R8A774C0=n
CONFIG_ARCH_R8A7795=n
CONFIG_ARCH_R8A7796=n
CONFIG_ARCH_R8A77965=n
CONFIG_ARCH_R8A77970=n
CONFIG_ARCH_R8A77980=n
CONFIG_ARCH_R8A77990=n
CONFIG_ARCH_R8A77995=n
CONFIG_ARCH_STRATIX10=n
CONFIG_ARCH_TEGRA=n
CONFIG_ARCH_SPRD=n
CONFIG_ARCH_THUNDER=n
CONFIG_ARCH_THUNDER2=n
CONFIG_ARCH_UNIPHIER=n
CONFIG_ARCH_VEXPRESS=n
CONFIG_ARCH_XGENE=n
CONFIG_ARCH_ZX=n
CONFIG_ARCH_ZYNQMP=n
# Strip out some stuff we don't need for graphics testing, to reduce
# the build.
CONFIG_CAN=n
CONFIG_WIRELESS=n
CONFIG_RFKILL=n
CONFIG_WLAN=n
CONFIG_REGULATOR_FAN53555=y
CONFIG_REGULATOR=y
CONFIG_REGULATOR_VCTRL=y
CONFIG_KASAN=n
CONFIG_KASAN_INLINE=n
CONFIG_STACKTRACE=n
CONFIG_TMPFS=y
CONFIG_PROVE_LOCKING=n
CONFIG_DEBUG_LOCKDEP=n
CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
CONFIG_DETECT_HUNG_TASK=y
CONFIG_FW_LOADER_COMPRESS=y
CONFIG_FW_LOADER_USER_HELPER=n
CONFIG_USB_USBNET=y
CONFIG_NETDEVICES=y
CONFIG_USB_NET_DRIVERS=y
CONFIG_USB_RTL8152=y
CONFIG_USB_NET_AX8817X=y
CONFIG_USB_NET_SMSC95XX=y
# For amlogic
CONFIG_MESON_GXL_PHY=y
CONFIG_MDIO_BUS_MUX_MESON_G12A=y
CONFIG_DRM_MESON=y
# For Mediatek
CONFIG_DRM_MEDIATEK=y
CONFIG_PWM_MEDIATEK=y
CONFIG_DRM_MEDIATEK_HDMI=y
CONFIG_GNSS=y
CONFIG_GNSS_MTK_SERIAL=y
CONFIG_HW_RANDOM=y
CONFIG_HW_RANDOM_MTK=y
CONFIG_MTK_DEVAPC=y
CONFIG_PWM_MTK_DISP=y
CONFIG_MTK_CMDQ=y
# For nouveau. Note that DRM must be a module so that it's loaded after NFS is up to provide the firmware.
CONFIG_ARCH_TEGRA=y
CONFIG_DRM_NOUVEAU=m
CONFIG_DRM_TEGRA=m
CONFIG_R8169=y
CONFIG_STAGING=y
CONFIG_DRM_TEGRA_STAGING=y
CONFIG_TEGRA_HOST1X=y
CONFIG_ARM_TEGRA_DEVFREQ=y
CONFIG_TEGRA_SOCTHERM=y
CONFIG_DRM_TEGRA_DEBUG=y
CONFIG_PWM_TEGRA=y


@@ -1,55 +0,0 @@
#!/bin/bash
set -e
set -o xtrace
# Fetch the arm-built rootfs image and unpack it in our x86 container (saves
# network transfer, disk usage, and runtime on test jobs)
# shellcheck disable=SC2154 # arch is assigned in previous scripts
if wget -q --method=HEAD "${ARTIFACTS_PREFIX}/${FDO_UPSTREAM_REPO}/${ARTIFACTS_SUFFIX}/${arch}/done"; then
ARTIFACTS_URL="${ARTIFACTS_PREFIX}/${FDO_UPSTREAM_REPO}/${ARTIFACTS_SUFFIX}/${arch}"
else
ARTIFACTS_URL="${ARTIFACTS_PREFIX}/${CI_PROJECT_PATH}/${ARTIFACTS_SUFFIX}/${arch}"
fi
wget "${ARTIFACTS_URL}"/lava-rootfs.tar.zst -O rootfs.tar.zst
mkdir -p /rootfs-"$arch"
tar -C /rootfs-"$arch" '--exclude=./dev/*' --zstd -xf rootfs.tar.zst
rm rootfs.tar.zst
if [[ $arch == "arm64" ]]; then
mkdir -p /baremetal-files
pushd /baremetal-files
wget "${ARTIFACTS_URL}"/Image
wget "${ARTIFACTS_URL}"/Image.gz
wget "${ARTIFACTS_URL}"/cheza-kernel
DEVICE_TREES=""
DEVICE_TREES="$DEVICE_TREES apq8016-sbc.dtb"
DEVICE_TREES="$DEVICE_TREES apq8096-db820c.dtb"
DEVICE_TREES="$DEVICE_TREES tegra210-p3450-0000.dtb"
DEVICE_TREES="$DEVICE_TREES imx8mq-nitrogen.dtb"
for DTB in $DEVICE_TREES; do
wget "${ARTIFACTS_URL}/$DTB"
done
popd
elif [[ $arch == "armhf" ]]; then
mkdir -p /baremetal-files
pushd /baremetal-files
wget "${ARTIFACTS_URL}"/zImage
DEVICE_TREES=""
DEVICE_TREES="$DEVICE_TREES imx6q-cubox-i.dtb"
DEVICE_TREES="$DEVICE_TREES tegra124-jetson-tk1.dtb"
for DTB in $DEVICE_TREES; do
wget "${ARTIFACTS_URL}/$DTB"
done
popd
fi


@@ -1,19 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -ex
APITRACE_VERSION="790380e05854d5c9d315555444ffcc7acb8f4037"
git clone https://github.com/apitrace/apitrace.git --single-branch --no-checkout /apitrace
pushd /apitrace
git checkout "$APITRACE_VERSION"
git submodule update --init --depth 1 --recursive
cmake -S . -B _build -G Ninja -DCMAKE_BUILD_TYPE=Release -DENABLE_GUI=False -DENABLE_WAFFLE=on $EXTRA_CMAKE_ARGS
cmake --build _build --parallel --target apitrace eglretrace
mkdir build
cp _build/apitrace build
cp _build/eglretrace build
${STRIP_CMD:-strip} build/*
find . -not -path './build' -not -path './build/*' -delete
popd


@@ -1,41 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -ex
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
CROSVM_VERSION=acd262cb42111c53b580a67355e795775545cced
git clone --single-branch -b main --no-checkout https://chromium.googlesource.com/crosvm/crosvm /platform/crosvm
pushd /platform/crosvm
git checkout "$CROSVM_VERSION"
git submodule update --init
VIRGLRENDERER_VERSION=3c5a9bbb7464e0e91e446991055300f4f989f6a9
rm -rf third_party/virglrenderer
git clone --single-branch -b master --no-checkout https://gitlab.freedesktop.org/virgl/virglrenderer.git third_party/virglrenderer
pushd third_party/virglrenderer
git checkout "$VIRGLRENDERER_VERSION"
meson build/ -Drender-server=true -Drender-server-worker=process -Dvenus-experimental=true $EXTRA_MESON_ARGS
ninja -C build install
popd
RUSTFLAGS='-L native=/usr/local/lib' cargo install \
bindgen \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local \
--version 0.60.1 \
$EXTRA_CARGO_ARGS
RUSTFLAGS='-L native=/usr/local/lib' cargo install \
-j ${FDO_CI_CONCURRENT:-4} \
--locked \
--features 'default-no-sandbox gpu x virgl_renderer virgl_renderer_next' \
--path . \
--root /usr/local \
$EXTRA_CARGO_ARGS
popd
rm -rf /platform/crosvm


@@ -1,31 +0,0 @@
#!/bin/sh
# shellcheck disable=SC2086 # we want word splitting
set -ex
if [ -n "${DEQP_RUNNER_GIT_TAG}${DEQP_RUNNER_GIT_REV}" ]; then
# Build and install from source
DEQP_RUNNER_CARGO_ARGS="--git ${DEQP_RUNNER_GIT_URL:-https://gitlab.freedesktop.org/anholt/deqp-runner.git}"
if [ -n "${DEQP_RUNNER_GIT_TAG}" ]; then
DEQP_RUNNER_CARGO_ARGS="--tag ${DEQP_RUNNER_GIT_TAG} ${DEQP_RUNNER_CARGO_ARGS}"
else
DEQP_RUNNER_CARGO_ARGS="--rev ${DEQP_RUNNER_GIT_REV} ${DEQP_RUNNER_CARGO_ARGS}"
fi
DEQP_RUNNER_CARGO_ARGS="${DEQP_RUNNER_CARGO_ARGS} ${EXTRA_CARGO_ARGS}"
else
# Install from package registry
DEQP_RUNNER_CARGO_ARGS="--version 0.15.0 ${EXTRA_CARGO_ARGS} -- deqp-runner"
fi
cargo install --locked \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local \
${DEQP_RUNNER_CARGO_ARGS}
# remove unused test runners to shrink images for the Mesa CI build (not kernel,
# which chooses its own deqp branch)
if [ -z "${DEQP_RUNNER_GIT_TAG}${DEQP_RUNNER_GIT_REV}" ]; then
rm -f /usr/local/bin/igt-runner
fi


@@ -1,98 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -ex
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
git clone \
https://github.com/KhronosGroup/VK-GL-CTS.git \
-b vulkan-cts-1.3.3.0 \
--depth 1 \
/VK-GL-CTS
pushd /VK-GL-CTS
# Apply a patch to update zlib link to an available version.
# vulkan-cts-1.3.3.0 uses zlib 1.2.12 which was removed from zlib server due to
# a CVE. See https://zlib.net/
# FIXME: Remove this patch when uprev to 1.3.4.0+
wget -O- https://github.com/KhronosGroup/VK-GL-CTS/commit/6bb2e7d64261bedb503947b1b251b1eeeb49be73.patch |
git am -
# --insecure is due to SSL cert failures hitting sourceforge for zlib and
# libpng (sigh). The archives get their checksums checked anyway, and git
# always goes through ssh or https.
python3 external/fetch_sources.py --insecure
mkdir -p /deqp
# Save the testlog stylesheets:
cp doc/testlog-stylesheet/testlog.{css,xsl} /deqp
popd
pushd /deqp
# When including EGL/X11 testing, do that build first and save off its
# deqp-egl binary.
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=x11_egl_glx \
-DCMAKE_BUILD_TYPE=Release \
$EXTRA_CMAKE_ARGS
ninja modules/egl/deqp-egl
cp /deqp/modules/egl/deqp-egl /deqp/modules/egl/deqp-egl-x11
cmake -S /VK-GL-CTS -B . -G Ninja \
-DDEQP_TARGET=${DEQP_TARGET:-x11_glx} \
-DCMAKE_BUILD_TYPE=Release \
$EXTRA_CMAKE_ARGS
ninja
mv /deqp/modules/egl/deqp-egl-x11 /deqp/modules/egl/deqp-egl
# Copy out the mustpass lists we want.
mkdir /deqp/mustpass
for mustpass in $(< /VK-GL-CTS/external/vulkancts/mustpass/main/vk-default.txt) ; do
cat /VK-GL-CTS/external/vulkancts/mustpass/main/$mustpass \
>> /deqp/mustpass/vk-master.txt
done
cp \
/deqp/external/openglcts/modules/gl_cts/data/mustpass/gles/aosp_mustpass/3.2.6.x/*.txt \
/deqp/mustpass/.
cp \
/deqp/external/openglcts/modules/gl_cts/data/mustpass/egl/aosp_mustpass/3.2.6.x/egl-master.txt \
/deqp/mustpass/.
cp \
/deqp/external/openglcts/modules/gl_cts/data/mustpass/gles/khronos_mustpass/3.2.6.x/*-master.txt \
/deqp/mustpass/.
cp \
/deqp/external/openglcts/modules/gl_cts/data/mustpass/gl/khronos_mustpass/4.6.1.x/*-master.txt \
/deqp/mustpass/.
cp \
/deqp/external/openglcts/modules/gl_cts/data/mustpass/gl/khronos_mustpass_single/4.6.1.x/*-single.txt \
/deqp/mustpass/.
# Save *some* executor utils, but otherwise strip things down
# to reduce the deqp build size:
mkdir /deqp/executor.save
cp /deqp/executor/testlog-to-* /deqp/executor.save
rm -rf /deqp/executor
mv /deqp/executor.save /deqp/executor
# Remove other mustpass files, since we saved off the ones we wanted to convenient locations above.
rm -rf /deqp/external/openglcts/modules/gl_cts/data/mustpass
rm -rf /deqp/external/vulkancts/modules/vulkan/vk-master*
rm -rf /deqp/external/vulkancts/modules/vulkan/vk-default
rm -rf /deqp/external/openglcts/modules/cts-runner
rm -rf /deqp/modules/internal
rm -rf /deqp/execserver
rm -rf /deqp/framework
# shellcheck disable=SC2038,SC2185 # TODO: rewrite find
find -iname '*cmake*' -o -name '*ninja*' -o -name '*.o' -o -name '*.a' | xargs rm -rf
${STRIP_CMD:-strip} external/vulkancts/modules/vulkan/deqp-vk
${STRIP_CMD:-strip} external/openglcts/modules/glcts
${STRIP_CMD:-strip} modules/*/deqp-*
du -sh ./*
rm -rf /VK-GL-CTS
popd


@@ -1,14 +0,0 @@
#!/bin/bash
set -ex
git clone https://github.com/ValveSoftware/Fossilize.git
cd Fossilize
git checkout 16fba1b8b5d9310126bb02323d7bae3227338461
git submodule update --init
mkdir build
cd build
cmake -S .. -B . -G Ninja -DCMAKE_BUILD_TYPE=Release
ninja -C . install
cd ../..
rm -rf Fossilize


@@ -1,19 +0,0 @@
#!/bin/bash
set -ex
GFXRECONSTRUCT_VERSION=5ed3caeecc46e976c4df31e263df8451ae176c26
git clone https://github.com/LunarG/gfxreconstruct.git \
--single-branch \
-b master \
--no-checkout \
/gfxreconstruct
pushd /gfxreconstruct
git checkout "$GFXRECONSTRUCT_VERSION"
git submodule update --init
git submodule update
cmake -S . -B _build -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX:PATH=/gfxreconstruct/build -DBUILD_WERROR=OFF
cmake --build _build --parallel --target tools/{replay,info}/install/strip
find . -not -path './build' -not -path './build/*' -delete
popd


@@ -1,16 +0,0 @@
#!/bin/bash
set -ex
PARALLEL_DEQP_RUNNER_VERSION=fe557794b5dadd8dbf0eae403296625e03bda18a
git clone https://gitlab.freedesktop.org/mesa/parallel-deqp-runner --single-branch -b master --no-checkout /parallel-deqp-runner
pushd /parallel-deqp-runner
git checkout "$PARALLEL_DEQP_RUNNER_VERSION"
meson . _build
ninja -C _build hang-detection
mkdir -p build/bin
install _build/hang-detection build/bin
strip build/bin/*
find . -not -path './build' -not -path './build/*' -delete
popd


@@ -1,53 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -ex
mkdir -p kernel
wget -qO- ${KERNEL_URL} | tar -xj --strip-components=1 -C kernel
pushd kernel
# The kernel doesn't like the gold linker (or the old lld in our debians).
# Sneak in some override symlinks during kernel build until we can update
# debian (they'll get blown away by the rm of the kernel dir at the end).
mkdir -p ld-links
for i in /usr/bin/*-ld /usr/bin/ld; do
i=$(basename $i)
ln -sf /usr/bin/$i.bfd ld-links/$i
done
NEWPATH=$(pwd)/ld-links
export PATH=$NEWPATH:$PATH
KERNEL_FILENAME=$(basename $KERNEL_URL)
export LOCALVERSION="$KERNEL_FILENAME"
./scripts/kconfig/merge_config.sh ${DEFCONFIG} ../.gitlab-ci/container/${KERNEL_ARCH}.config
make ${KERNEL_IMAGE_NAME}
for image in ${KERNEL_IMAGE_NAME}; do
cp arch/${KERNEL_ARCH}/boot/${image} /lava-files/.
done
if [[ -n ${DEVICE_TREES} ]]; then
make dtbs
cp ${DEVICE_TREES} /lava-files/.
fi
make modules
INSTALL_MOD_PATH=/lava-files/rootfs-${DEBIAN_ARCH}/ make modules_install
if [[ ${DEBIAN_ARCH} = "arm64" ]]; then
make Image.lzma
mkimage \
-f auto \
-A arm \
-O linux \
-d arch/arm64/boot/Image.lzma \
-C lzma\
-b arch/arm64/boot/dts/qcom/sdm845-cheza-r3.dtb \
/lava-files/cheza-kernel
KERNEL_IMAGE_NAME+=" cheza-kernel"
fi
popd
rm -rf kernel


@@ -1,30 +0,0 @@
#!/bin/bash
set -ex
export LLVM_CONFIG="llvm-config-11"
$LLVM_CONFIG --version
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
git clone \
https://github.com/llvm/llvm-project \
--depth 1 \
-b llvmorg-12.0.0-rc3 \
/llvm-project
mkdir /libclc
pushd /libclc
cmake -S /llvm-project/libclc -B . -G Ninja -DLLVM_CONFIG=$LLVM_CONFIG -DLIBCLC_TARGETS_TO_BUILD="spirv-mesa3d-;spirv64-mesa3d-" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr -DLLVM_SPIRV=/usr/bin/llvm-spirv
ninja
ninja install
popd
# workaround cmake vs debian packaging.
mkdir -p /usr/lib/clc
ln -s /usr/share/clc/spirv64-mesa3d-.spv /usr/lib/clc/
ln -s /usr/share/clc/spirv-mesa3d-.spv /usr/lib/clc/
du -sh ./*
rm -rf /libclc /llvm-project


@@ -1,14 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -ex
export LIBDRM_VERSION=libdrm-2.4.110
wget https://dri.freedesktop.org/libdrm/"$LIBDRM_VERSION".tar.xz
tar -xvf "$LIBDRM_VERSION".tar.xz && rm "$LIBDRM_VERSION".tar.xz
cd "$LIBDRM_VERSION"
meson build -D vc4=false -D freedreno=false -D etnaviv=false $EXTRA_MESON_ARGS
ninja -C build install
cd ..
rm -rf "$LIBDRM_VERSION"


@@ -1,19 +0,0 @@
#!/bin/bash
set -ex
wget https://github.com/KhronosGroup/SPIRV-LLVM-Translator/archive/refs/tags/v13.0.0.tar.gz
tar -xvf v13.0.0.tar.gz && rm v13.0.0.tar.gz
mkdir SPIRV-LLVM-Translator-13.0.0/build
pushd SPIRV-LLVM-Translator-13.0.0/build
cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr
ninja
ninja install
# For some reason llvm-spirv is not installed by default
ninja llvm-spirv
cp tools/llvm-spirv/llvm-spirv /usr/bin/
popd
du -sh SPIRV-LLVM-Translator-13.0.0
rm -rf SPIRV-LLVM-Translator-13.0.0


@@ -1,12 +0,0 @@
#!/bin/bash
set -ex
MOLD_VERSION="1.6.0"
git clone -b v"$MOLD_VERSION" --single-branch --depth 1 https://github.com/rui314/mold.git
cd mold
make
make install
cd ..
rm -rf mold


@@ -1,26 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -ex
git clone https://gitlab.freedesktop.org/mesa/piglit.git --single-branch --no-checkout /piglit
pushd /piglit
git checkout 591c91865012de4224bea551eac5d2274acf06ad
patch -p1 <$OLDPWD/.gitlab-ci/piglit/disable-vs_in.diff
cmake -S . -B . -G Ninja -DCMAKE_BUILD_TYPE=Release $PIGLIT_OPTS $EXTRA_CMAKE_ARGS
ninja $PIGLIT_BUILD_TARGETS
# shellcheck disable=SC2038,SC2185 # TODO: rewrite find
find -name .git -o -name '*ninja*' -o -iname '*cmake*' -o -name '*.[chao]' | xargs rm -rf
rm -rf target_api
if [ "$PIGLIT_BUILD_TARGETS" = "piglit_replayer" ]; then
# shellcheck disable=SC2038,SC2185 # TODO: rewrite find
find ! -regex "^\.$" \
! -regex "^\.\/piglit.*" \
! -regex "^\.\/framework.*" \
! -regex "^\.\/bin$" \
! -regex "^\.\/bin\/replayer\.py" \
! -regex "^\.\/templates.*" \
! -regex "^\.\/tests$" \
! -regex "^\.\/tests\/replay\.py" 2>/dev/null | xargs rm -rf
fi
popd


@@ -1,38 +0,0 @@
#!/bin/bash
# Note that this script is not actually "building" rust, but build- is the
# convention for the shared helpers for putting stuff in our containers.
set -ex
# cargo (and rustup) wants to store stuff in $HOME/.cargo, and binaries in
# $HOME/.cargo/bin. Make bin a link to a public bin directory so the commands
# are just available to all build jobs.
mkdir -p "$HOME"/.cargo
ln -s /usr/local/bin "$HOME"/.cargo/bin
# Rusticl requires at least Rust 1.59.0
#
# Also, pick a specific snapshot from rustup so the compiler doesn't drift on
# us.
RUST_VERSION=1.59.0-2022-02-24
# For rust in Mesa, we use rustup to install. This lets us pick an arbitrary
# version of the compiler, rather than whatever the container's Debian comes
# with.
wget https://sh.rustup.rs -O - | sh -s -- \
--default-toolchain $RUST_VERSION \
--profile minimal \
-y
rustup component add rustfmt
# Set up a config script for cross compiling -- cargo needs your system cc for
# linking in cross builds, but doesn't know what you want to use for system cc.
cat > /root/.cargo/config <<EOF
[target.armv7-unknown-linux-gnueabihf]
linker = "arm-linux-gnueabihf-gcc"
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
EOF


@@ -1,97 +0,0 @@
#!/bin/bash
#
# Copyright (C) 2022 Collabora Limited
# Author: Guilherme Gallo <guilherme.gallo@collabora.com>
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice (including the next
# paragraph) shall be included in all copies or substantial portions of the
# Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
create_gn_args() {
# gn can be configured to cross-compile skia and its tools
# It is important to set the target_cpu to guarantee the intended target
# machine
cp "${BASE_ARGS_GN_FILE}" "${SKQP_OUT_DIR}"/args.gn
echo "target_cpu = \"${SKQP_ARCH}\"" >> "${SKQP_OUT_DIR}"/args.gn
}
download_skia_source() {
if [ -z ${SKIA_DIR+x} ]
then
return 1
fi
# Skia cloned from https://android.googlesource.com/platform/external/skqp
# has all needed assets tracked on git-fs
SKQP_REPO=https://android.googlesource.com/platform/external/skqp
SKQP_BRANCH=android-cts-11.0_r7
git clone --branch "${SKQP_BRANCH}" --depth 1 "${SKQP_REPO}" "${SKIA_DIR}"
}
set -ex
SCRIPT_DIR=$(realpath "$(dirname "$0")")
SKQP_PATCH_DIR="${SCRIPT_DIR}"
BASE_ARGS_GN_FILE="${SCRIPT_DIR}/build-skqp_base.gn"
SKQP_ARCH=${SKQP_ARCH:-x64}
SKIA_DIR=${SKIA_DIR:-$(mktemp -d)}
SKQP_OUT_DIR=${SKIA_DIR}/out/${SKQP_ARCH}
SKQP_INSTALL_DIR=${SKQP_INSTALL_DIR:-/skqp}
SKQP_ASSETS_DIR="${SKQP_INSTALL_DIR}/assets"
SKQP_BINARIES=(skqp list_gpu_unit_tests list_gms)
download_skia_source
pushd "${SKIA_DIR}"
# Apply all skqp patches for Mesa CI
cat "${SKQP_PATCH_DIR}"/build-skqp_*.patch |
patch -p1
# Fetch some build tools needed to build skia/skqp.
# Basically, it clones repositories at the commit SHAs listed in the
# ${SKIA_DIR}/DEPS file.
python tools/git-sync-deps
mkdir -p "${SKQP_OUT_DIR}"
mkdir -p "${SKQP_INSTALL_DIR}"
create_gn_args
# Build and install skqp binaries
bin/gn gen "${SKQP_OUT_DIR}"
for BINARY in "${SKQP_BINARIES[@]}"
do
/usr/bin/ninja -C "${SKQP_OUT_DIR}" "${BINARY}"
# Strip binary, since gn is not stripping it even when `is_debug == false`
${STRIP_CMD:-strip} "${SKQP_OUT_DIR}/${BINARY}"
install -m 0755 "${SKQP_OUT_DIR}/${BINARY}" "${SKQP_INSTALL_DIR}"
done
# Move assets to the target directory, which will reside in rootfs.
mv platform_tools/android/apps/skqp/src/main/assets/ "${SKQP_ASSETS_DIR}"
popd
rm -Rf "${SKIA_DIR}"
set +ex


@@ -1,13 +0,0 @@
diff --git a/BUILD.gn b/BUILD.gn
index d2b1407..7b60c90 100644
--- a/BUILD.gn
+++ b/BUILD.gn
@@ -144,7 +144,7 @@ config("skia_public") {
# Skia internal APIs, used by Skia itself and a few test tools.
config("skia_private") {
- visibility = [ ":*" ]
+ visibility = [ "*" ]
include_dirs = [
"include/private",


@@ -1,47 +0,0 @@
cc = "clang"
cxx = "clang++"
extra_cflags = [ "-DSK_ENABLE_DUMP_GPU", "-DSK_BUILD_FOR_SKQP" ]
extra_cflags_cc = [
"-Wno-error",
# The skqp build process produces a lot of compilation warnings; silence
# most of them to remove clutter and keep the CI job log from exceeding the
# maximum size
# GCC flags
"-Wno-redundant-move",
"-Wno-suggest-override",
"-Wno-class-memaccess",
"-Wno-deprecated-copy",
"-Wno-uninitialized",
# Clang flags
"-Wno-macro-redefined",
"-Wno-anon-enum-enum-conversion",
"-Wno-suggest-destructor-override",
"-Wno-return-std-move-in-c++11",
"-Wno-extra-semi-stmt",
]
cc_wrapper = "ccache"
is_debug = false
skia_enable_fontmgr_android = false
skia_enable_fontmgr_empty = true
skia_enable_pdf = false
skia_enable_skottie = false
skia_skqp_global_error_tolerance = 8
skia_tools_require_resources = true
skia_use_dng_sdk = false
skia_use_expat = true
skia_use_icu = false
skia_use_libheif = false
skia_use_lua = false
skia_use_piex = false
skia_use_vulkan = true
target_os = "linux"


@@ -1,68 +0,0 @@
diff --git a/bin/fetch-gn b/bin/fetch-gn
index d5e94a2..59c4591 100755
--- a/bin/fetch-gn
+++ b/bin/fetch-gn
@@ -5,39 +5,44 @@
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
-import hashlib
import os
+import platform
import shutil
import stat
import sys
-import urllib2
+import tempfile
+import zipfile
+
+if sys.version_info[0] < 3:
+ from urllib2 import urlopen
+else:
+ from urllib.request import urlopen
os.chdir(os.path.join(os.path.dirname(__file__), os.pardir))
-dst = 'bin/gn.exe' if 'win32' in sys.platform else 'bin/gn'
+gnzip = os.path.join(tempfile.mkdtemp(), 'gn.zip')
+with open(gnzip, 'wb') as f:
+ OS = {'darwin': 'mac', 'linux': 'linux', 'linux2': 'linux', 'win32': 'windows'}[sys.platform]
+ cpu = {'amd64': 'amd64', 'arm64': 'arm64', 'x86_64': 'amd64', 'aarch64': 'arm64'}[platform.machine().lower()]
-sha1 = '2f27ff0b6118e5886df976da5effa6003d19d1ce' if 'linux' in sys.platform else \
- '9be792dd9010ce303a9c3a497a67bcc5ac8c7666' if 'darwin' in sys.platform else \
- 'eb69be2d984b4df60a8c21f598135991f0ad1742' # Windows
+ rev = 'd62642c920e6a0d1756316d225a90fd6faa9e21e'
+ url = 'https://chrome-infra-packages.appspot.com/dl/gn/gn/{}-{}/+/git_revision:{}'.format(
+ OS,cpu,rev)
+ f.write(urlopen(url).read())
-def sha1_of_file(path):
- h = hashlib.sha1()
- if os.path.isfile(path):
- with open(path, 'rb') as f:
- h.update(f.read())
- return h.hexdigest()
+gn = 'gn.exe' if 'win32' in sys.platform else 'gn'
+with zipfile.ZipFile(gnzip, 'r') as f:
+ f.extract(gn, 'bin')
-if sha1_of_file(dst) != sha1:
- with open(dst, 'wb') as f:
- f.write(urllib2.urlopen('https://chromium-gn.storage-download.googleapis.com/' + sha1).read())
+gn = os.path.join('bin', gn)
- os.chmod(dst, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR |
- stat.S_IRGRP | stat.S_IXGRP |
- stat.S_IROTH | stat.S_IXOTH )
+os.chmod(gn, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR |
+ stat.S_IRGRP | stat.S_IXGRP |
+ stat.S_IROTH | stat.S_IXOTH )
# We'll also copy to a path that depot_tools' GN wrapper will expect to find the binary.
copy_path = 'buildtools/linux64/gn' if 'linux' in sys.platform else \
'buildtools/mac/gn' if 'darwin' in sys.platform else \
'buildtools/win/gn.exe'
if os.path.isdir(os.path.dirname(copy_path)):
- shutil.copy(dst, copy_path)
+ shutil.copy(gn, copy_path)


@@ -1,142 +0,0 @@
Patch based on a diff against the skia repository at commit
013397884c73959dc07cb0a26ee742b1cdfbda8a.
Adds support for Python 3, but removes the constraint that only SHA-based refs
may be used in DEPS.
diff --git a/tools/git-sync-deps b/tools/git-sync-deps
index c7379c0b5c..f63d4d9ccf 100755
--- a/tools/git-sync-deps
+++ b/tools/git-sync-deps
@@ -43,7 +43,7 @@ def git_executable():
A string suitable for passing to subprocess functions, or None.
"""
envgit = os.environ.get('GIT_EXECUTABLE')
- searchlist = ['git']
+ searchlist = ['git', 'git.bat']
if envgit:
searchlist.insert(0, envgit)
with open(os.devnull, 'w') as devnull:
@@ -94,21 +94,25 @@ def is_git_toplevel(git, directory):
try:
toplevel = subprocess.check_output(
[git, 'rev-parse', '--show-toplevel'], cwd=directory).strip()
- return os.path.realpath(directory) == os.path.realpath(toplevel)
+ return os.path.realpath(directory) == os.path.realpath(toplevel.decode())
except subprocess.CalledProcessError:
return False
-def status(directory, checkoutable):
- def truncate(s, length):
+def status(directory, commithash, change):
+ def truncate_beginning(s, length):
+ return s if len(s) <= length else '...' + s[-(length-3):]
+ def truncate_end(s, length):
return s if len(s) <= length else s[:(length - 3)] + '...'
+
dlen = 36
- directory = truncate(directory, dlen)
- checkoutable = truncate(checkoutable, 40)
- sys.stdout.write('%-*s @ %s\n' % (dlen, directory, checkoutable))
+ directory = truncate_beginning(directory, dlen)
+ commithash = truncate_end(commithash, 40)
+ symbol = '>' if change else '@'
+ sys.stdout.write('%-*s %s %s\n' % (dlen, directory, symbol, commithash))
-def git_checkout_to_directory(git, repo, checkoutable, directory, verbose):
+def git_checkout_to_directory(git, repo, commithash, directory, verbose):
"""Checkout (and clone if needed) a Git repository.
Args:
@@ -117,8 +121,7 @@ def git_checkout_to_directory(git, repo, checkoutable, directory, verbose):
repo (string) the location of the repository, suitable
for passing to `git clone`.
- checkoutable (string) a tag, branch, or commit, suitable for
- passing to `git checkout`
+ commithash (string) a commit, suitable for passing to `git checkout`
directory (string) the path into which the repository
should be checked out.
@@ -129,7 +132,12 @@ def git_checkout_to_directory(git, repo, checkoutable, directory, verbose):
"""
if not os.path.isdir(directory):
subprocess.check_call(
- [git, 'clone', '--quiet', repo, directory])
+ [git, 'clone', '--quiet', '--no-checkout', repo, directory])
+ subprocess.check_call([git, 'checkout', '--quiet', commithash],
+ cwd=directory)
+ if verbose:
+ status(directory, commithash, True)
+ return
if not is_git_toplevel(git, directory):
# if the directory exists, but isn't a git repo, you will modify
@@ -145,11 +153,11 @@ def git_checkout_to_directory(git, repo, checkoutable, directory, verbose):
with open(os.devnull, 'w') as devnull:
# If this fails, we will fetch before trying again. Don't spam user
# with error infomation.
- if 0 == subprocess.call([git, 'checkout', '--quiet', checkoutable],
+ if 0 == subprocess.call([git, 'checkout', '--quiet', commithash],
cwd=directory, stderr=devnull):
# if this succeeds, skip slow `git fetch`.
if verbose:
- status(directory, checkoutable) # Success.
+ status(directory, commithash, False) # Success.
return
# If the repo has changed, always force use of the correct repo.
@@ -159,18 +167,24 @@ def git_checkout_to_directory(git, repo, checkoutable, directory, verbose):
subprocess.check_call([git, 'fetch', '--quiet'], cwd=directory)
- subprocess.check_call([git, 'checkout', '--quiet', checkoutable], cwd=directory)
+ subprocess.check_call([git, 'checkout', '--quiet', commithash], cwd=directory)
if verbose:
- status(directory, checkoutable) # Success.
+ status(directory, commithash, True) # Success.
def parse_file_to_dict(path):
dictionary = {}
- execfile(path, dictionary)
+ with open(path) as f:
+ exec('def Var(x): return vars[x]\n' + f.read(), dictionary)
return dictionary
+def is_sha1_sum(s):
+ """SHA1 sums are 160 bits, encoded as lowercase hexadecimal."""
+ return len(s) == 40 and all(c in '0123456789abcdef' for c in s)
+
+
def git_sync_deps(deps_file_path, command_line_os_requests, verbose):
"""Grab dependencies, with optional platform support.
@@ -204,19 +218,19 @@ def git_sync_deps(deps_file_path, command_line_os_requests, verbose):
raise Exception('%r is parent of %r' % (other_dir, directory))
list_of_arg_lists = []
for directory in sorted(dependencies):
- if not isinstance(dependencies[directory], basestring):
+ if not isinstance(dependencies[directory], str):
if verbose:
- print 'Skipping "%s".' % directory
+ sys.stdout.write( 'Skipping "%s".\n' % directory)
continue
if '@' in dependencies[directory]:
- repo, checkoutable = dependencies[directory].split('@', 1)
+ repo, commithash = dependencies[directory].split('@', 1)
else:
- raise Exception("please specify commit or tag")
+ raise Exception("please specify commit")
relative_directory = os.path.join(deps_file_directory, directory)
list_of_arg_lists.append(
- (git, repo, checkoutable, relative_directory, verbose))
+ (git, repo, commithash, relative_directory, verbose))
multithread(git_checkout_to_directory, list_of_arg_lists)


@@ -1,41 +0,0 @@
diff --git a/tools/skqp/src/skqp.cpp b/tools/skqp/src/skqp.cpp
index 50ed9db01d..938217000d 100644
--- a/tools/skqp/src/skqp.cpp
+++ b/tools/skqp/src/skqp.cpp
@@ -448,7 +448,7 @@ inline void write(SkWStream* wStream, const T& text) {
void SkQP::makeReport() {
SkASSERT_RELEASE(fAssetManager);
- int glesErrorCount = 0, vkErrorCount = 0, gles = 0, vk = 0;
+ int glErrorCount = 0, glesErrorCount = 0, vkErrorCount = 0, gl = 0, gles = 0, vk = 0;
if (!sk_isdir(fReportDirectory.c_str())) {
SkDebugf("Report destination does not exist: '%s'\n", fReportDirectory.c_str());
@@ -460,6 +460,7 @@ void SkQP::makeReport() {
htmOut.writeText(kDocHead);
for (const SkQP::RenderResult& run : fRenderResults) {
switch (run.fBackend) {
+ case SkQP::SkiaBackend::kGL: ++gl; break;
case SkQP::SkiaBackend::kGLES: ++gles; break;
case SkQP::SkiaBackend::kVulkan: ++vk; break;
default: break;
@@ -477,15 +478,17 @@ void SkQP::makeReport() {
}
write(&htmOut, SkStringPrintf(" f(%s);\n", str.c_str()));
switch (run.fBackend) {
+ case SkQP::SkiaBackend::kGL: ++glErrorCount; break;
case SkQP::SkiaBackend::kGLES: ++glesErrorCount; break;
case SkQP::SkiaBackend::kVulkan: ++vkErrorCount; break;
default: break;
}
}
htmOut.writeText(kDocMiddle);
- write(&htmOut, SkStringPrintf("<p>gles errors: %d (of %d)</br>\n"
+ write(&htmOut, SkStringPrintf("<p>gl errors: %d (of %d)</br>\n"
+ "gles errors: %d (of %d)</br>\n"
"vk errors: %d (of %d)</p>\n",
- glesErrorCount, gles, vkErrorCount, vk));
+ glErrorCount, gl, glesErrorCount, gles, vkErrorCount, vk));
htmOut.writeText(kDocTail);
SkFILEWStream unitOut(SkOSPath::Join(fReportDirectory.c_str(), kUnitTestReportPath).c_str());
SkASSERT_RELEASE(unitOut.isValid());


@@ -1,13 +0,0 @@
diff --git a/gn/BUILDCONFIG.gn b/gn/BUILDCONFIG.gn
index 454334a..1797594 100644
--- a/gn/BUILDCONFIG.gn
+++ b/gn/BUILDCONFIG.gn
@@ -80,7 +80,7 @@ if (current_cpu == "") {
is_clang = is_android || is_ios || is_mac ||
(cc == "clang" && cxx == "clang++") || clang_win != ""
if (!is_clang && !is_win) {
- is_clang = exec_script("gn/is_clang.py",
+ is_clang = exec_script("//gn/is_clang.py",
[
cc,
cxx,


@@ -1,18 +0,0 @@
Nima-Cpp is no longer available on googlesource, so revert to the github one.
Simulates `git revert 49233d2521054037ded7d760427c4a0dc1e11356`
diff --git a/DEPS b/DEPS
index 7e0b941..c88b064 100644
--- a/DEPS
+++ b/DEPS
@@ -33,8 +33,8 @@ deps = {
#"third_party/externals/v8" : "https://chromium.googlesource.com/v8/v8.git@5f1ae66d5634e43563b2d25ea652dfb94c31a3b4",
"third_party/externals/wuffs" : "https://skia.googlesource.com/external/github.com/google/wuffs.git@fda3c4c9863d9f9fcec58ae66508c4621fc71ea5",
"third_party/externals/zlib" : "https://chromium.googlesource.com/chromium/src/third_party/zlib@47af7c547f8551bd25424e56354a2ae1e9062859",
- "third_party/externals/Nima-Cpp" : "https://skia.googlesource.com/external/github.com/2d-inc/Nima-Cpp.git@4bd02269d7d1d2e650950411325eafa15defb084",
- "third_party/externals/Nima-Math-Cpp" : "https://skia.googlesource.com/external/github.com/2d-inc/Nima-Math-Cpp.git@e0c12772093fa8860f55358274515b86885f0108",
+ "third_party/externals/Nima-Cpp" : "https://github.com/2d-inc/Nima-Cpp.git@4bd02269d7d1d2e650950411325eafa15defb084",
+ "third_party/externals/Nima-Math-Cpp" : "https://github.com/2d-inc/Nima-Math-Cpp.git@e0c12772093fa8860f55358274515b86885f0108",
"../src": {
"url": "https://chromium.googlesource.com/chromium/src.git@ccf3465732e5d5363f0e44a8fac54550f62dd1d0",


@@ -1,18 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -ex
git config --global user.email "mesa@example.com"
git config --global user.name "Mesa CI"
git clone \
https://github.com/intel/libva-utils.git \
-b 2.13.0 \
--depth 1 \
/va-utils
pushd /va-utils
meson build -D tests=true -Dprefix=/va $EXTRA_MESON_ARGS
ninja -C build install
popd
rm -rf /va-utils


@@ -1,39 +0,0 @@
#!/bin/bash
set -ex
VKD3D_PROTON_COMMIT="5b73139f182d86cd58a757e4b5f0d4cfad96d319"
VKD3D_PROTON_DST_DIR="/vkd3d-proton-tests"
VKD3D_PROTON_SRC_DIR="/vkd3d-proton-src"
VKD3D_PROTON_BUILD_DIR="/vkd3d-proton-$VKD3D_PROTON_VERSION"
function build_arch {
local arch="$1"
shift
meson "$@" \
-Denable_tests=true \
--buildtype release \
--prefix "$VKD3D_PROTON_DST_DIR" \
--strip \
--bindir "x${arch}" \
--libdir "x${arch}" \
"$VKD3D_PROTON_BUILD_DIR/build.${arch}"
ninja -C "$VKD3D_PROTON_BUILD_DIR/build.${arch}" install
install -D -m755 -t "${VKD3D_PROTON_DST_DIR}/x${arch}/bin" "$VKD3D_PROTON_BUILD_DIR/build.${arch}/tests/d3d12"
}
git clone https://github.com/HansKristian-Work/vkd3d-proton.git --single-branch -b master --no-checkout "$VKD3D_PROTON_SRC_DIR"
pushd "$VKD3D_PROTON_SRC_DIR"
git checkout "$VKD3D_PROTON_COMMIT"
git submodule update --init --recursive
git submodule update --recursive
build_arch 64
build_arch 86
popd
rm -rf "$VKD3D_PROTON_BUILD_DIR"
rm -rf "$VKD3D_PROTON_SRC_DIR"


@@ -1,22 +0,0 @@
#!/bin/bash
set -ex
export LIBWAYLAND_VERSION="1.18.0"
export WAYLAND_PROTOCOLS_VERSION="1.24"
git clone https://gitlab.freedesktop.org/wayland/wayland
cd wayland
git checkout "$LIBWAYLAND_VERSION"
meson -Ddocumentation=false -Ddtd_validation=false -Dlibraries=true _build
ninja -C _build install
cd ..
rm -rf wayland
git clone https://gitlab.freedesktop.org/wayland/wayland-protocols
cd wayland-protocols
git checkout "$WAYLAND_PROTOCOLS_VERSION"
meson _build
ninja -C _build install
cd ..
rm -rf wayland-protocols


@@ -1,10 +0,0 @@
#!/bin/sh
if test -f /etc/debian_version; then
apt-get autoremove -y --purge
fi
# Clean up any build cache for rust.
rm -rf /.cargo
ccache --show-stats


@@ -1,46 +0,0 @@
#!/bin/sh
if test -f /etc/debian_version; then
CCACHE_PATH=/usr/lib/ccache
else
CCACHE_PATH=/usr/lib64/ccache
fi
# Common setup among container builds before we get to building code.
export CCACHE_COMPILERCHECK=content
export CCACHE_COMPRESS=true
export CCACHE_DIR=/cache/$CI_PROJECT_NAME/ccache
export PATH=$CCACHE_PATH:$PATH
# CMake ignores $PATH, so we have to force CC/CXX to the ccache versions.
export CC="${CCACHE_PATH}/gcc"
export CXX="${CCACHE_PATH}/g++"
# When not using the mold linker (e.g. unsupported architecture), force
# linkers to gold, since it's so much faster for building. We can't use
# lld because we're on old debian and it's buggy. mingw fails meson builds
# with it with "meson.build:21:0: ERROR: Unable to determine dynamic linker"
find /usr/bin -name \*-ld -o -name ld | \
grep -v mingw | \
xargs -n 1 -I '{}' ln -sf '{}.gold' '{}'
ccache --show-stats
# Make a wrapper script for ninja to always include the -j flags
{
echo '#!/bin/sh -x'
# shellcheck disable=SC2016
echo '/usr/bin/ninja -j${FDO_CI_CONCURRENT:-4} "$@"'
} > /usr/local/bin/ninja
chmod +x /usr/local/bin/ninja
# Set MAKEFLAGS so that all make invocations in container builds include the
# flags (doesn't apply to non-container builds, but we don't run make there)
export MAKEFLAGS="-j${FDO_CI_CONCURRENT:-4}"
# make wget try more than once when a download fails or times out
echo -e "retry_connrefused = on\n" \
"read_timeout = 300\n" \
"tries = 4\n" \
"wait_retry = 32" >> /etc/wgetrc


@@ -1,35 +0,0 @@
#!/bin/bash
ndk=$1
arch=$2
cpu_family=$3
cpu=$4
cross_file="/cross_file-$arch.txt"
# armv7 has the toolchain split between two names.
arch2=${5:-$2}
# Note that we disable C++ exceptions, because Mesa doesn't use exceptions,
# and allowing it in code generation means we get unwind symbols that break
# the libEGL and driver symbol tests.
cat > "$cross_file" <<EOF
[binaries]
ar = '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/$arch-ar'
c = ['ccache', '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch2}29-clang', '-fno-exceptions', '-fno-unwind-tables', '-fno-asynchronous-unwind-tables']
cpp = ['ccache', '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch2}29-clang++', '-fno-exceptions', '-fno-unwind-tables', '-fno-asynchronous-unwind-tables']
c_ld = 'lld'
cpp_ld = 'lld'
strip = '$ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/$arch-strip'
pkgconfig = ['/usr/bin/pkg-config']
[host_machine]
system = 'linux'
cpu_family = '$cpu_family'
cpu = '$cpu'
endian = 'little'
[properties]
needs_exe_wrapper = true
EOF


@@ -1,39 +0,0 @@
#!/bin/sh
# shellcheck disable=SC2086 # we want word splitting
# Makes a .pc file in the Android NDK for meson to find its libraries.
set -ex
ndk="$1"
pc="$2"
cflags="$3"
libs="$4"
version="$5"
sysroot=$ndk/toolchains/llvm/prebuilt/linux-x86_64/sysroot
for arch in \
x86_64-linux-android \
i686-linux-android \
aarch64-linux-android \
arm-linux-androideabi; do
pcdir=$sysroot/usr/lib/$arch/pkgconfig
mkdir -p $pcdir
cat >$pcdir/$pc <<EOF
prefix=$sysroot
exec_prefix=$sysroot
libdir=$sysroot/usr/lib/$arch/29
sharedlibdir=$sysroot/usr/lib/$arch
includedir=$sysroot/usr/include
Name: zlib
Description: zlib compression library
Version: $version
Requires:
Libs: -L$sysroot/usr/lib/$arch/29 $libs
Cflags: -I$sysroot/usr/include $cflags
EOF
done


@@ -1,53 +0,0 @@
#!/bin/bash
arch=$1
cross_file="/cross_file-$arch.txt"
/usr/share/meson/debcrossgen --arch "$arch" -o "$cross_file"
# Explicitly set ccache path for cross compilers
sed -i "s|/usr/bin/\([^-]*\)-linux-gnu\([^-]*\)-g|/usr/lib/ccache/\\1-linux-gnu\\2-g|g" "$cross_file"
if [ "$arch" = "i386" ]; then
# Work around a bug in debcrossgen that should be fixed in the next release
sed -i "s|cpu_family = 'i686'|cpu_family = 'x86'|g" "$cross_file"
fi
# Rely on qemu-user being configured in binfmt_misc on the host
# shellcheck disable=SC1003 # somehow this sed doesn't seem to work for me locally
sed -i -e '/\[properties\]/a\' -e "needs_exe_wrapper = False" "$cross_file"
# Add a line for rustc, which debcrossgen is missing.
cc=$(sed -n 's|c = .\(.*\).|\1|p' < "$cross_file")
if [[ "$arch" = "arm64" ]]; then
rust_target=aarch64-unknown-linux-gnu
elif [[ "$arch" = "armhf" ]]; then
rust_target=armv7-unknown-linux-gnueabihf
elif [[ "$arch" = "i386" ]]; then
rust_target=i686-unknown-linux-gnu
elif [[ "$arch" = "ppc64el" ]]; then
rust_target=powerpc64le-unknown-linux-gnu
elif [[ "$arch" = "s390x" ]]; then
rust_target=s390x-unknown-linux-gnu
else
echo "Needs rustc target mapping"
fi
# shellcheck disable=SC1003 # somehow this sed doesn't seem to work for me locally
sed -i -e '/\[binaries\]/a\' -e "rust = ['rustc', '--target=$rust_target', '-C', 'linker=$cc']" "$cross_file"
# Set up cmake cross compile toolchain file for dEQP builds
toolchain_file="/toolchain-$arch.cmake"
if [[ "$arch" = "arm64" ]]; then
GCC_ARCH="aarch64-linux-gnu"
DE_CPU="DE_CPU_ARM_64"
elif [[ "$arch" = "armhf" ]]; then
GCC_ARCH="arm-linux-gnueabihf"
DE_CPU="DE_CPU_ARM"
fi
if [[ -n "$GCC_ARCH" ]]; then
{
echo "set(CMAKE_SYSTEM_NAME Linux)";
echo "set(CMAKE_SYSTEM_PROCESSOR arm)";
echo "set(CMAKE_C_COMPILER /usr/lib/ccache/$GCC_ARCH-gcc)";
echo "set(CMAKE_CXX_COMPILER /usr/lib/ccache/$GCC_ARCH-g++)";
echo "set(ENV{PKG_CONFIG} \"/usr/bin/$GCC_ARCH-pkg-config\")";
echo "set(DE_CPU $DE_CPU)";
} > "$toolchain_file"
fi


@@ -1,323 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2140 # ugly array, remove later
# shellcheck disable=SC2288 # ugly array, remove later
# shellcheck disable=SC2086 # we want word splitting
set -ex
if [ $DEBIAN_ARCH = arm64 ]; then
ARCH_PACKAGES="firmware-qcom-media
firmware-linux-nonfree
libfontconfig1
libgl1
libglu1-mesa
libvulkan-dev
"
elif [ $DEBIAN_ARCH = amd64 ]; then
# Add llvm 13 to the build image
apt-get -y install --no-install-recommends wget gnupg2 software-properties-common
apt-key add /llvm-snapshot.gpg.key
add-apt-repository "deb https://apt.llvm.org/bullseye/ llvm-toolchain-bullseye-13 main"
# Debian bullseye has older wine 5.0, we want >= 7.0 for traces.
apt-key add /winehq.gpg.key
apt-add-repository https://dl.winehq.org/wine-builds/debian/
ARCH_PACKAGES="firmware-amd-graphics
inetutils-syslogd
iptables
libcap2
libfontconfig1
libelf1
libfdt1
libgl1
libglu1-mesa
libllvm13
libllvm11
libva2
libva-drm2
libvulkan-dev
socat
spirv-tools
sysvinit-core
"
elif [ $DEBIAN_ARCH = armhf ]; then
ARCH_PACKAGES="firmware-misc-nonfree
"
fi
INSTALL_CI_FAIRY_PACKAGES="git
python3-dev
python3-pip
python3-setuptools
python3-wheel
"
apt-get update
apt-get -y install --no-install-recommends \
$ARCH_PACKAGES \
$INSTALL_CI_FAIRY_PACKAGES \
$EXTRA_LOCAL_PACKAGES \
bash \
ca-certificates \
firmware-realtek \
initramfs-tools \
jq \
libasan6 \
libexpat1 \
libpng16-16 \
libpython3.9 \
libsensors5 \
libvulkan1 \
libwaffle-1-0 \
libx11-6 \
libx11-xcb1 \
libxcb-dri2-0 \
libxcb-dri3-0 \
libxcb-glx0 \
libxcb-present0 \
libxcb-randr0 \
libxcb-shm0 \
libxcb-sync1 \
libxcb-xfixes0 \
libxdamage1 \
libxext6 \
libxfixes3 \
libxkbcommon0 \
libxrender1 \
libxshmfence1 \
libxxf86vm1 \
netcat-openbsd \
python3 \
python3-lxml \
python3-mako \
python3-numpy \
python3-packaging \
python3-pil \
python3-renderdoc \
python3-requests \
python3-simplejson \
python3-yaml \
sntp \
strace \
waffle-utils \
wget \
xinit \
xserver-xorg-core \
zstd
if [ "$DEBIAN_ARCH" = "amd64" ]; then
# workaround wine needing 32-bit
# https://bugs.winehq.org/show_bug.cgi?id=53393
apt-get install -y --no-remove wine-stable-amd64 # a requirement for wine-stable
WINE_PKG="wine-stable"
WINE_PKG_DROP="wine-stable-i386"
apt download "${WINE_PKG}"
dpkg --ignore-depends="${WINE_PKG_DROP}" -i "${WINE_PKG}"*.deb
rm "${WINE_PKG}"*.deb
sed -i "/${WINE_PKG_DROP}/d" /var/lib/dpkg/status
apt-get install -y --no-remove winehq-stable # symlinks-only, depends on wine-stable
fi
# Needed for ci-fairy, this revision is able to upload files to
# MinIO and doesn't depend on git
pip3 install git+http://gitlab.freedesktop.org/freedesktop/ci-templates@34f4ade99434043f88e164933f570301fd18b125
# Needed for manipulation with traces yaml files.
pip3 install yq
apt-get purge -y \
$INSTALL_CI_FAIRY_PACKAGES
passwd root -d
chsh -s /bin/sh
cat > /init <<EOF
#!/bin/sh
export PS1=lava-shell:
exec sh
EOF
chmod +x /init
#######################################################################
# Strip the image to a small minimal system without removing the debian
# toolchain.
# Copy timezone file and remove tzdata package
rm -rf /etc/localtime
cp /usr/share/zoneinfo/Etc/UTC /etc/localtime
UNNEEDED_PACKAGES="
libfdisk1
"
export DEBIAN_FRONTEND=noninteractive
# Removing unused packages
for PACKAGE in ${UNNEEDED_PACKAGES}
do
echo ${PACKAGE}
if ! apt-get remove --purge --yes "${PACKAGE}"
then
echo "WARNING: ${PACKAGE} isn't installed"
fi
done
apt-get autoremove --yes || true
# Dropping logs
rm -rf /var/log/*
# Dropping documentation, localization, i18n files, etc
rm -rf /usr/share/doc/*
rm -rf /usr/share/locale/*
rm -rf /usr/share/X11/locale/*
rm -rf /usr/share/man
rm -rf /usr/share/i18n/*
rm -rf /usr/share/info/*
rm -rf /usr/share/lintian/*
rm -rf /usr/share/common-licenses/*
rm -rf /usr/share/mime/*
# Dropping reportbug scripts
rm -rf /usr/share/bug
# Drop udev hwdb not required on a stripped system
rm -rf /lib/udev/hwdb.bin /lib/udev/hwdb.d/*
# Drop all gconv conversions && binaries
rm -rf usr/bin/iconv
rm -rf usr/sbin/iconvconfig
rm -rf usr/lib/*/gconv/
# Remove libusb database
rm -rf usr/sbin/update-usbids
rm -rf var/lib/usbutils/usb.ids
rm -rf usr/share/misc/usb.ids
rm -rf /root/.pip
#######################################################################
# Crush into a minimal production image to be deployed via some type of image
# updating system.
# IMPORTANT: The Debian system is no longer functional at this point,
# for example, apt and dpkg will stop working
UNNEEDED_PACKAGES="apt libapt-pkg6.0 "\
"ncurses-bin ncurses-base libncursesw6 libncurses6 "\
"perl-base "\
"debconf libdebconfclient0 "\
"e2fsprogs e2fslibs libfdisk1 "\
"insserv "\
"udev "\
"init-system-helpers "\
"cpio "\
"passwd "\
"libsemanage1 libsemanage-common "\
"libsepol1 "\
"gpgv "\
"hostname "\
"adduser "\
"debian-archive-keyring "\
"libegl1-mesa-dev "\
"libegl-mesa0 "\
"libgl1-mesa-dev "\
"libgl1-mesa-dri "\
"libglapi-mesa "\
"libgles2-mesa-dev "\
"libglx-mesa0 "\
"mesa-common-dev "\
"gnupg2 "\
"software-properties-common " \
# Removing unneeded packages
for PACKAGE in ${UNNEEDED_PACKAGES}
do
echo "Forcing removal of ${PACKAGE}"
if ! dpkg --purge --force-remove-essential --force-depends "${PACKAGE}"
then
echo "WARNING: ${PACKAGE} isn't installed"
fi
done
# Show what's left package-wise before dropping dpkg itself
COLUMNS=300 dpkg-query -W --showformat='${Installed-Size;10}\t${Package}\n' | sort -k1,1n
# Drop dpkg
dpkg --purge --force-remove-essential --force-depends dpkg
# No apt or dpkg, no need for its configuration archives
rm -rf etc/apt
rm -rf etc/dpkg
# Drop directories not part of ostree
# Note that /var needs to exist as ostree bind mounts the deployment /var over
# it
rm -rf var/* srv share
# ca-certificates are in /etc drop the source
rm -rf usr/share/ca-certificates
# No need for completions
rm -rf usr/share/bash-completion
# No zsh, no need for completions
rm -rf usr/share/zsh/vendor-completions
# drop gcc python helpers
rm -rf usr/share/gcc
# Drop sysvinit leftovers
rm -rf etc/init.d
rm -rf etc/rc[0-6S].d
# Drop upstart helpers
rm -rf etc/init
# Various xtables helpers
rm -rf usr/lib/xtables
# Drop all locales
# TODO: only remaining locale is actually "C". Should we really remove it?
rm -rf usr/lib/locale/*
# partition helpers
rm -rf usr/sbin/*fdisk
# locale compiler
rm -rf usr/bin/localedef
# Systemd dns resolver
find usr etc -name '*systemd-resolve*' -prune -exec rm -r {} \;
# Systemd network configuration
find usr etc -name '*networkd*' -prune -exec rm -r {} \;
# systemd ntp client
find usr etc -name '*timesyncd*' -prune -exec rm -r {} \;
# systemd hw database manager
find usr etc -name '*systemd-hwdb*' -prune -exec rm -r {} \;
# No need for fuse
find usr etc -name '*fuse*' -prune -exec rm -r {} \;
# lsb init function leftovers
rm -rf usr/lib/lsb
# Only needed when adding libraries
rm -rf usr/sbin/ldconfig*
# Games, unused
rmdir usr/games
# Remove pam module to authenticate against a DB
# plus libdb-5.3.so that is only used by this pam module
rm -rf usr/lib/*/security/pam_userdb.so
rm -rf usr/lib/*/libdb-5.3.so
# remove NSS support for nis, nisplus and hesiod
rm -rf usr/lib/*/libnss_hesiod*
rm -rf usr/lib/*/libnss_nis*


@@ -1,81 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -e
set -o xtrace
export DEBIAN_FRONTEND=noninteractive
# Ephemeral packages (installed for this script and removed again at the end)
STABLE_EPHEMERAL=" \
"
dpkg --add-architecture $arch
apt-get update
apt-get install -y --no-remove \
$STABLE_EPHEMERAL \
crossbuild-essential-$arch \
libelf-dev:$arch \
libexpat1-dev:$arch \
libpciaccess-dev:$arch \
libstdc++6:$arch \
libvulkan-dev:$arch \
libx11-dev:$arch \
libx11-xcb-dev:$arch \
libxcb-dri2-0-dev:$arch \
libxcb-dri3-dev:$arch \
libxcb-glx0-dev:$arch \
libxcb-present-dev:$arch \
libxcb-randr0-dev:$arch \
libxcb-shm0-dev:$arch \
libxcb-xfixes0-dev:$arch \
libxdamage-dev:$arch \
libxext-dev:$arch \
libxrandr-dev:$arch \
libxshmfence-dev:$arch \
libxxf86vm-dev:$arch \
wget
if [[ $arch != "armhf" ]]; then
# See the list of available architectures in https://apt.llvm.org/bullseye/dists/llvm-toolchain-bullseye-13/main/
if [[ $arch == "s390x" ]] || [[ $arch == "i386" ]] || [[ $arch == "arm64" ]]; then
LLVM=13
else
LLVM=11
fi
# llvm-*-tools:$arch conflicts with python3:amd64. Install dependencies only
# with apt-get, then force-install llvm-*-{dev,tools}:$arch with dpkg to get
# around this.
apt-get install -y --no-remove --no-install-recommends \
libclang-cpp${LLVM}:$arch \
libffi-dev:$arch \
libgcc-s1:$arch \
libtinfo-dev:$arch \
libz3-dev:$arch \
llvm-${LLVM}:$arch \
zlib1g
fi
. .gitlab-ci/container/create-cross-file.sh $arch
. .gitlab-ci/container/container_pre_build.sh
# dependencies where we want a specific version
EXTRA_MESON_ARGS="--cross-file=/cross_file-${arch}.txt -D libdir=lib/$(dpkg-architecture -A $arch -qDEB_TARGET_MULTIARCH)"
. .gitlab-ci/container/build-libdrm.sh
apt-get purge -y \
$STABLE_EPHEMERAL
. .gitlab-ci/container/container_post_build.sh
# This needs to be done after container_post_build.sh, or apt-get breaks in there
if [[ $arch != "armhf" ]]; then
apt-get download llvm-${LLVM}-{dev,tools}:$arch
dpkg -i --force-depends llvm-${LLVM}-*_${arch}.deb
rm llvm-${LLVM}-*_${arch}.deb
fi


@@ -1,107 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -ex
EPHEMERAL="\
autoconf \
rdfind \
unzip \
"
apt-get install -y --no-remove $EPHEMERAL
# Fetch the NDK and extract just the toolchain we want.
ndk=android-ndk-r21d
wget -O $ndk.zip https://dl.google.com/android/repository/$ndk-linux-x86_64.zip
unzip -d / $ndk.zip "$ndk/toolchains/llvm/*"
rm $ndk.zip
# Since it was packed as a zip file, symlinks/hardlinks got turned into
# duplicate files. Turn them into hardlinks to save on container space.
rdfind -makehardlinks true -makeresultsfile false /android-ndk-r21d/
# Drop some large tools we won't use in this build.
find /android-ndk-r21d/ -type f | grep -E -i "clang-check|clang-tidy|lldb" | xargs rm -f
sh .gitlab-ci/container/create-android-ndk-pc.sh /$ndk zlib.pc "" "-lz" "1.2.3"
sh .gitlab-ci/container/create-android-cross-file.sh /$ndk x86_64-linux-android x86_64 x86_64
sh .gitlab-ci/container/create-android-cross-file.sh /$ndk i686-linux-android x86 x86
sh .gitlab-ci/container/create-android-cross-file.sh /$ndk aarch64-linux-android arm armv8
sh .gitlab-ci/container/create-android-cross-file.sh /$ndk arm-linux-androideabi arm armv7hl armv7a-linux-androideabi
# Not using build-libdrm.sh because we don't want its cleanup after building
# each arch. Fetch and extract now.
export LIBDRM_VERSION=libdrm-2.4.110
wget https://dri.freedesktop.org/libdrm/$LIBDRM_VERSION.tar.xz
tar -xf $LIBDRM_VERSION.tar.xz && rm $LIBDRM_VERSION.tar.xz
for arch in \
x86_64-linux-android \
i686-linux-android \
aarch64-linux-android \
arm-linux-androideabi ; do
cd $LIBDRM_VERSION
rm -rf build-$arch
meson build-$arch \
--cross-file=/cross_file-$arch.txt \
--libdir=lib/$arch \
-Dlibkms=false \
-Dnouveau=false \
-Dvc4=false \
-Detnaviv=false \
-Dfreedreno=false \
-Dintel=false \
-Dcairo-tests=false \
-Dvalgrind=false
ninja -C build-$arch install
cd ..
done
rm -rf $LIBDRM_VERSION
export LIBELF_VERSION=libelf-0.8.13
wget https://fossies.org/linux/misc/old/$LIBELF_VERSION.tar.gz
# Not 100% sure who runs the mirror above so be extra careful
if ! echo "4136d7b4c04df68b686570afa26988ac ${LIBELF_VERSION}.tar.gz" | md5sum -c -; then
echo "Checksum failed"
exit 1
fi
tar -xf ${LIBELF_VERSION}.tar.gz
cd $LIBELF_VERSION
# Work around a bug in the original configure not enabling __LIBELF64.
autoreconf
for arch in \
x86_64-linux-android \
i686-linux-android \
aarch64-linux-android \
arm-linux-androideabi ; do
ccarch=${arch}
if [ "${arch}" == 'arm-linux-androideabi' ]
then
ccarch=armv7a-linux-androideabi
fi
export AR=/android-ndk-r21d/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch}-ar
export CC=/android-ndk-r21d/toolchains/llvm/prebuilt/linux-x86_64/bin/${ccarch}29-clang
export CXX=/android-ndk-r21d/toolchains/llvm/prebuilt/linux-x86_64/bin/${ccarch}29-clang++
export LD=/android-ndk-r21d/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch}-ld
export RANLIB=/android-ndk-r21d/toolchains/llvm/prebuilt/linux-x86_64/bin/${arch}-ranlib
# The configure script doesn't know about android, but doesn't really use the
# host anyway, it seems
./configure --host=x86_64-linux-gnu --disable-nls --disable-shared \
--libdir=/usr/local/lib/${arch}
make install
make distclean
done
cd ..
rm -rf $LIBELF_VERSION
apt-get purge -y $EPHEMERAL


@@ -1,86 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -e
set -o xtrace
apt-get -y install ca-certificates
sed -i -e 's/http:\/\/deb/https:\/\/deb/g' /etc/apt/sources.list
echo 'deb https://deb.debian.org/debian buster main' >/etc/apt/sources.list.d/buster.list
apt-get update
# Ephemeral packages (installed for this script and removed again at
# the end)
STABLE_EPHEMERAL=" \
libssl-dev \
"
apt-get -y install \
${EXTRA_LOCAL_PACKAGES} \
${STABLE_EPHEMERAL} \
autoconf \
automake \
bc \
bison \
ccache \
cmake \
debootstrap \
fastboot \
flex \
g++ \
git \
glslang-tools \
kmod \
libasan6 \
libdrm-dev \
libelf-dev \
libexpat1-dev \
libvulkan-dev \
libx11-dev \
libx11-xcb-dev \
libxcb-dri2-0-dev \
libxcb-dri3-dev \
libxcb-glx0-dev \
libxcb-present-dev \
libxcb-randr0-dev \
libxcb-shm0-dev \
libxcb-xfixes0-dev \
libxdamage-dev \
libxext-dev \
libxrandr-dev \
libxshmfence-dev \
libxxf86vm-dev \
llvm-11-dev \
meson \
pkg-config \
python3-mako \
python3-pil \
python3-pip \
python3-requests \
python3-setuptools \
u-boot-tools \
wget \
xz-utils \
zlib1g-dev \
zstd
# Not available anymore in bullseye
apt-get install -y --no-remove -t buster \
android-sdk-ext4-utils
pip3 install git+http://gitlab.freedesktop.org/freedesktop/ci-templates@34f4ade99434043f88e164933f570301fd18b125
arch=armhf
. .gitlab-ci/container/cross_build.sh
. .gitlab-ci/container/container_pre_build.sh
. .gitlab-ci/container/build-mold.sh
# dependencies where we want a specific version
EXTRA_MESON_ARGS=
. .gitlab-ci/container/build-libdrm.sh
apt-get purge -y $STABLE_EPHEMERAL
. .gitlab-ci/container/container_post_build.sh


@@ -1,45 +0,0 @@
#!/bin/bash
set -e
set -o xtrace
############### Install packages for baremetal testing
apt-get install -y ca-certificates
sed -i -e 's/http:\/\/deb/https:\/\/deb/g' /etc/apt/sources.list
apt-get update
apt-get install -y --no-remove \
cpio \
fastboot \
netcat \
procps \
python3-distutils \
python3-minimal \
python3-serial \
rsync \
snmp \
wget \
zstd
# setup SNMPv2 SMI MIB
wget https://raw.githubusercontent.com/net-snmp/net-snmp/master/mibs/SNMPv2-SMI.txt \
-O /usr/share/snmp/mibs/SNMPv2-SMI.txt
arch=arm64 . .gitlab-ci/container/baremetal_build.sh
arch=armhf . .gitlab-ci/container/baremetal_build.sh
# This firmware file from Debian bullseye causes hangs
wget https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/plain/qcom/a530_pfp.fw?id=d5f9eea5a251d43412b07f5295d03e97b89ac4a5 \
-O /rootfs-arm64/lib/firmware/qcom/a530_pfp.fw
mkdir -p /baremetal-files/jetson-nano/boot/
ln -s \
/baremetal-files/Image \
/baremetal-files/tegra210-p3450-0000.dtb \
/baremetal-files/jetson-nano/boot/
mkdir -p /baremetal-files/jetson-tk1/boot/
ln -s \
/baremetal-files/zImage \
/baremetal-files/tegra124-jetson-tk1.dtb \
/baremetal-files/jetson-tk1/boot/

@@ -1,5 +0,0 @@
#!/bin/bash
arch=i386
. .gitlab-ci/container/cross_build.sh

@@ -1,52 +0,0 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.12 (GNU/Linux)
mQINBFE9lCwBEADi0WUAApM/mgHJRU8lVkkw0CHsZNpqaQDNaHefD6Rw3S4LxNmM
EZaOTkhP200XZM8lVdbfUW9xSjA3oPldc1HG26NjbqqCmWpdo2fb+r7VmU2dq3NM
R18ZlKixiLDE6OUfaXWKamZsXb6ITTYmgTO6orQWYrnW6ckYHSeaAkW0wkDAryl2
B5v8aoFnQ1rFiVEMo4NGzw4UX+MelF7rxaaregmKVTPiqCOSPJ1McC1dHFN533FY
Wh/RVLKWo6npu+owtwYFQW+zyQhKzSIMvNujFRzhIxzxR9Gn87MoLAyfgKEzrbbT
DhqqNXTxS4UMUKCQaO93TzetX/EBrRpJj+vP640yio80h4Dr5pAd7+LnKwgpTDk1
G88bBXJAcPZnTSKu9I2c6KY4iRNbvRz4i+ZdwwZtdW4nSdl2792L7Sl7Nc44uLL/
ZqkKDXEBF6lsX5XpABwyK89S/SbHOytXv9o4puv+65Ac5/UShspQTMSKGZgvDauU
cs8kE1U9dPOqVNCYq9Nfwinkf6RxV1k1+gwtclxQuY7UpKXP0hNAXjAiA5KS5Crq
7aaJg9q2F4bub0mNU6n7UI6vXguF2n4SEtzPRk6RP+4TiT3bZUsmr+1ktogyOJCc
Ha8G5VdL+NBIYQthOcieYCBnTeIH7D3Sp6FYQTYtVbKFzmMK+36ERreL/wARAQAB
tD1TeWx2ZXN0cmUgTGVkcnUgLSBEZWJpYW4gTExWTSBwYWNrYWdlcyA8c3lsdmVz
dHJlQGRlYmlhbi5vcmc+iQI4BBMBAgAiBQJRPZQsAhsDBgsJCAcDAgYVCAIJCgsE
FgIDAQIeAQIXgAAKCRAVz00Yr090Ibx+EADArS/hvkDF8juWMXxh17CgR0WZlHCC
9CTBWkg5a0bNN/3bb97cPQt/vIKWjQtkQpav6/5JTVCSx2riL4FHYhH0iuo4iAPR
udC7Cvg8g7bSPrKO6tenQZNvQm+tUmBHgFiMBJi92AjZ/Qn1Shg7p9ITivFxpLyX
wpmnF1OKyI2Kof2rm4BFwfSWuf8Fvh7kDMRLHv+MlnK/7j/BNpKdozXxLcwoFBmn
l0WjpAH3OFF7Pvm1LJdf1DjWKH0Dc3sc6zxtmBR/KHHg6kK4BGQNnFKujcP7TVdv
gMYv84kun14pnwjZcqOtN3UJtcx22880DOQzinoMs3Q4w4o05oIF+sSgHViFpc3W
R0v+RllnH05vKZo+LDzc83DQVrdwliV12eHxrMQ8UYg88zCbF/cHHnlzZWAJgftg
hB08v1BKPgYRUzwJ6VdVqXYcZWEaUJmQAPuAALyZESw94hSo28FAn0/gzEc5uOYx
K+xG/lFwgAGYNb3uGM5m0P6LVTfdg6vDwwOeTNIExVk3KVFXeSQef2ZMkhwA7wya
KJptkb62wBHFE+o9TUdtMCY6qONxMMdwioRE5BYNwAsS1PnRD2+jtlI0DzvKHt7B
MWd8hnoUKhMeZ9TNmo+8CpsAtXZcBho0zPGz/R8NlJhAWpdAZ1CmcPo83EW86Yq7
BxQUKnNHcwj2ebkCDQRRPZQsARAA4jxYmbTHwmMjqSizlMJYNuGOpIidEdx9zQ5g
zOr431/VfWq4S+VhMDhs15j9lyml0y4ok215VRFwrAREDg6UPMr7ajLmBQGau0Fc
bvZJ90l4NjXp5p0NEE/qOb9UEHT7EGkEhaZ1ekkWFTWCgsy7rRXfZLxB6sk7pzLC
DshyW3zjIakWAnpQ5j5obiDy708pReAuGB94NSyb1HoW/xGsGgvvCw4r0w3xPStw
F1PhmScE6NTBIfLliea3pl8vhKPlCh54Hk7I8QGjo1ETlRP4Qll1ZxHJ8u25f/ta
RES2Aw8Hi7j0EVcZ6MT9JWTI83yUcnUlZPZS2HyeWcUj+8nUC8W4N8An+aNps9l/
21inIl2TbGo3Yn1JQLnA1YCoGwC34g8QZTJhElEQBN0X29ayWW6OdFx8MDvllbBV
ymmKq2lK1U55mQTfDli7S3vfGz9Gp/oQwZ8bQpOeUkc5hbZszYwP4RX+68xDPfn+
M9udl+qW9wu+LyePbW6HX90LmkhNkkY2ZzUPRPDHZANU5btaPXc2H7edX4y4maQa
xenqD0lGh9LGz/mps4HEZtCI5CY8o0uCMF3lT0XfXhuLksr7Pxv57yue8LLTItOJ
d9Hmzp9G97SRYYeqU+8lyNXtU2PdrLLq7QHkzrsloG78lCpQcalHGACJzrlUWVP/
fN3Ht3kAEQEAAYkCHwQYAQIACQUCUT2ULAIbDAAKCRAVz00Yr090IbhWEADbr50X
OEXMIMGRLe+YMjeMX9NG4jxs0jZaWHc/WrGR+CCSUb9r6aPXeLo+45949uEfdSsB
pbaEdNWxF5Vr1CSjuO5siIlgDjmT655voXo67xVpEN4HhMrxugDJfCa6z97P0+ML
PdDxim57uNqkam9XIq9hKQaurxMAECDPmlEXI4QT3eu5qw5/knMzDMZj4Vi6hovL
wvvAeLHO/jsyfIdNmhBGU2RWCEZ9uo/MeerPHtRPfg74g+9PPfP6nyHD2Wes6yGd
oVQwtPNAQD6Cj7EaA2xdZYLJ7/jW6yiPu98FFWP74FN2dlyEA2uVziLsfBrgpS4l
tVOlrO2YzkkqUGrybzbLpj6eeHx+Cd7wcjI8CalsqtL6cG8cUEjtWQUHyTbQWAgG
5VPEgIAVhJ6RTZ26i/G+4J8neKyRs4vz+57UGwY6zI4AB1ZcWGEE3Bf+CDEDgmnP
LSwbnHefK9IljT9XU98PelSryUO/5UPw7leE0akXKB4DtekToO226px1VnGp3Bov
1GBGvpHvL2WizEwdk+nfk8LtrLzej+9FtIcq3uIrYnsac47Pf7p0otcFeTJTjSq3
krCaoG4Hx0zGQG2ZFpHrSrZTVy6lxvIdfi0beMgY6h78p6M9eYZHQHc02DjFkQXN
bXb5c6gCHESH5PXwPU4jQEE7Ib9J6sbk7ZT2Mw==
=j+4q
-----END PGP PUBLIC KEY BLOCK-----

@@ -1,5 +0,0 @@
#!/bin/bash
arch=ppc64el
. .gitlab-ci/container/cross_build.sh

@@ -1,16 +0,0 @@
#!/bin/bash
set -e
arch=s390x
# Ephemeral packages (installed for this script and removed again at the end)
STABLE_EPHEMERAL="libssl-dev"
apt-get -y install "$STABLE_EPHEMERAL"
. .gitlab-ci/container/build-mold.sh
apt-get purge -y "$STABLE_EPHEMERAL"
. .gitlab-ci/container/cross_build.sh

@@ -1,53 +0,0 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQGNBFwOmrgBDAC9FZW3dFpew1hwDaqRfdQQ1ABcmOYu1NKZHwYjd+bGvcR2LRGe
R5dfRqG1Uc/5r6CPCMvnWxFprymkqKEADn8eFn+aCnPx03HrhA+lNEbciPfTHylt
NTTuRua7YpJIgEOjhXUbxXxnvF8fhUf5NJpJg6H6fPQARUW+5M//BlVgwn2jhzlW
U+uwgeJthhiuTXkls9Yo3EoJzmkUih+ABZgvaiBpr7GZRw9GO1aucITct0YDNTVX
KA6el78/udi5GZSCKT94yY9ArN4W6NiOFCLV7MU5d6qMjwGFhfg46NBv9nqpGinK
3NDjqCevKouhtKl2J+nr3Ju3Spzuv6Iex7tsOqt+XdZCoY+8+dy3G5zbJwBYsMiS
rTNF55PHtBH1S0QK5OoN2UR1ie/aURAyAFEMhTzvFB2B2v7C0IKIOmYMEG+DPMs9
FQs/vZ1UnAQgWk02ZiPryoHfjFO80+XYMrdWN+RSo5q9ODClloaKXjqI/aWLGirm
KXw2R8tz31go3NMAEQEAAbQnV2luZUhRIHBhY2thZ2VzIDx3aW5lLWRldmVsQHdp
bmVocS5vcmc+iQHOBBMBCgA4AhsDBQsJCAcCBhUKCQgLAgQWAgMBAh4BAheAFiEE
1D9kAUU2nFHXht3qdvGiD/mHZy8FAlwOmyUACgkQdvGiD/mHZy/zkwv7B+nKFlDY
Bzz/7j0gqIODbs5FRZRtuf/IuPP3vZdWlNfAW/VyaLtVLJCM/mmaf/O6/gJ+D+E9
BBoSmHdHzBBOQHIj5IbRedynNcHT5qXsdBeU2ZPR50sdE+jmukvw3Wa5JijoDgUu
LGLGtU48Z3JsBXQ54OlnTZXQ2SMFhRUa10JANXSJQ+QY2Wo2Pi2+MEAHcrd71A2S
0mT2DQSSBQ92c6WPfUpOSBawd8P0ipT7rVFNLJh8HVQGyEWxPl8ecDEHoVfG2rdV
D0ADbNLx9031UUwpUicO6vW/2Ec7c3VNG1cpOtyNTw/lEgvsXOh3GQs/DvFvMy/h
QzaeF3Qq6cAPlKuxieJe4lLYFBTmCAT4iB1J8oeFs4G7ScfZH4+4NBe3VGoeCD/M
Wl+qxntAroblxiFuqtPJg+NKZYWBzkptJNhnrBxcBnRinGZLw2k/GR/qPMgsR2L4
cP+OUuka+R2gp9oDVTZTyMowz+ROIxnEijF50pkj2VBFRB02rfiMp7q6iQIzBBAB
CgAdFiEE2iNXmnTUrZr50/lFzvrI6q8XUZ0FAlwOm3AACgkQzvrI6q8XUZ3KKg/+
MD8CgvLiHEX90fXQ23RZQRm2J21w3gxdIen/N8yJVIbK7NIgYhgWfGWsGQedtM7D
hMwUlDSRb4rWy9vrXBaiZoF3+nK9AcLvPChkZz28U59Jft6/l0gVrykey/ERU7EV
w1Ie1eRu0tRSXsKvMZyQH8897iHZ7uqoJgyk8U8CvSW+V80yqLB2M8Tk8ECZq34f
HqUIGs4Wo0UZh0vV4+dEQHBh1BYpmmWl+UPf7nzNwFWXu/EpjVhkExRqTnkEJ+Ai
OxbtrRn6ETKzpV4DjyifqQF639bMIem7DRRf+mkcrAXetvWkUkE76e3E9KLvETCZ
l4SBfgqSZs2vNngmpX6Qnoh883aFo5ZgVN3v6uTS+LgTwMt/XlnDQ7+Zw+ehCZ2R
CO21Y9Kbw6ZEWls/8srZdCQ2LxnyeyQeIzsLnqT/waGjQj35i4exzYeWpojVDb3r
tvvOALYGVlSYqZXIALTx2/tHXKLHyrn1C0VgHRnl+hwv7U49f7RvfQXpx47YQN/C
PWrpbG69wlKuJptr+olbyoKAWfl+UzoO8vLMo5njWQNAoAwh1H8aFUVNyhtbkRuq
l0kpy1Cmcq8uo6taK9lvYp8jak7eV8lHSSiGUKTAovNTwfZG2JboGV4/qLDUKvpa
lPp2xVpF9MzA8VlXTOzLpSyIVxZnPTpL+xR5P9WQjMS5AY0EXA6auAEMAMReKL89
0z0SL+/i/geB/agfG/k6AXiG2a9kVWeIjAqFwHKl9W/DTNvOqCDgAt51oiHGRRjt
1Xm3XZD4p+GM1uZWn9qIFL49Gt5x94TqdrsKTVCJr0Kazn2mKQc7aja0zac+WtZG
OFn7KbniuAcwtC780cyikfmmExLI1/Vjg+NiMlMtZfpK6FIW+ulPiDQPdzIhVppx
w9/KlR2Fvh4TbzDsUqkFQSSAFdQ65BWgvzLpZHdKO/ILpDkThLbipjtvbBv/pHKM
O/NFTNoYkJ3cNW/kfcynwV+4AcKwdRz2A3Mez+g5TKFYPZROIbayOo01yTMLfz2p
jcqki/t4PACtwFOhkAs+MYPPyZDUkTFcEJQCPDstkAgmJWI3K2qELtDOLQyps3WY
Mfp+mntOdc8bKjFTMcCEk1zcm14K4Oms+w6dw2UnYsX1FAYYhPm8HUYwE4kP8M+D
9HGLMjLqqF/kanlCFZs5Avx3mDSAx6zS8vtNdGh+64oDNk4x4A2j8GTUuQARAQAB
iQG8BBgBCgAmFiEE1D9kAUU2nFHXht3qdvGiD/mHZy8FAlwOmrgCGwwFCQPCZwAA
CgkQdvGiD/mHZy9FnAwAgfUkxsO53Pm2iaHhtF4+BUc8MNJj64Jvm1tghr6PBRtM
hpbvvN8SSOFwYIsS+2BMsJ2ldox4zMYhuvBcgNUlix0G0Z7h1MjftDdsLFi1DNv2
J9dJ9LdpWdiZbyg4Sy7WakIZ/VvH1Znd89Imo7kCScRdXTjIw2yCkotE5lK7A6Ns
NbVuoYEN+dbGioF4csYehnjTdojwF/19mHFxrXkdDZ/V6ZYFIFxEsxL8FEuyI4+o
LC3DFSA4+QAFdkjGFXqFPlaEJxWt5d7wk0y+tt68v+ulkJ900BvR+OOMqQURwrAi
iP3I28aRrMjZYwyqHl8i/qyIv+WRakoDKV+wWteR5DmRAPHmX2vnlPlCmY8ysR6J
2jUAfuDFVu4/qzJe6vw5tmPJMdfvy0W5oogX6sEdin5M5w2b3WrN8nXZcjbWymqP
6jCdl6eoCCkKNOIbr/MMSkd2KqAqDVM5cnnlQ7q+AXzwNpj3RGJVoBxbS0nn9JWY
QNQrWh9rAcMIGT+b1le0
=4lsa
-----END PGP PUBLIC KEY BLOCK-----

@@ -1,16 +0,0 @@
#!/bin/bash
set -e
set -o xtrace
# Install Wine; it is needed for testing mingw and nine builds
apt-get update
apt-get install -y --no-remove \
wine \
wine64 \
xvfb
# Used to initialize the Wine environment to reduce build time
wine64 whoami.exe

@@ -1,92 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -e
set -o xtrace
export DEBIAN_FRONTEND=noninteractive
apt-get install -y ca-certificates gnupg2 software-properties-common
# Add llvm 13 to the build image
apt-key add .gitlab-ci/container/debian/llvm-snapshot.gpg.key
add-apt-repository "deb https://apt.llvm.org/bullseye/ llvm-toolchain-bullseye-13 main"
sed -i -e 's/http:\/\/deb/https:\/\/deb/g' /etc/apt/sources.list
# Ephemeral packages (installed for this script and removed again at
# the end)
STABLE_EPHEMERAL=" \
python3-pip \
python3-setuptools \
"
apt-get update
apt-get install -y --no-remove \
$STABLE_EPHEMERAL \
bison \
ccache \
dpkg-cross \
findutils \
flex \
g++ \
cmake \
gcc \
git \
glslang-tools \
kmod \
libclang-13-dev \
libclang-11-dev \
libelf-dev \
libepoxy-dev \
libexpat1-dev \
libgtk-3-dev \
libllvm13 \
libllvm11 \
libomxil-bellagio-dev \
libpciaccess-dev \
libunwind-dev \
libva-dev \
libvdpau-dev \
libvulkan-dev \
libx11-dev \
libx11-xcb-dev \
libxext-dev \
libxml2-utils \
libxrandr-dev \
libxrender-dev \
libxshmfence-dev \
libxxf86vm-dev \
make \
meson \
pkg-config \
python3-mako \
python3-pil \
python3-ply \
python3-requests \
qemu-user \
valgrind \
wget \
x11proto-dri2-dev \
x11proto-gl-dev \
x11proto-randr-dev \
xz-utils \
zlib1g-dev \
zstd
# Needed for ci-fairy; this revision is able to upload files to MinIO
pip3 install git+http://gitlab.freedesktop.org/freedesktop/ci-templates@34f4ade99434043f88e164933f570301fd18b125
# We need at least Meson 0.61.4 for proper Rust support
pip3 install meson==0.61.5
. .gitlab-ci/container/build-rust.sh
. .gitlab-ci/container/debian/x86_build-base-wine.sh
############### Uninstall ephemeral packages
apt-get purge -y $STABLE_EPHEMERAL
. .gitlab-ci/container/container_post_build.sh

@@ -1,77 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -e
# Pull packages from the msys2 repository that can be used directly.
# https://packages.msys2.org/ can be used to look up the newest packages.
mkdir ~/tmp
pushd ~/tmp
MINGW_PACKET_LIST="
mingw-w64-x86_64-headers-git-10.0.0.r14.ga08c638f8-1-any.pkg.tar.zst
mingw-w64-x86_64-vulkan-loader-1.3.211-1-any.pkg.tar.zst
mingw-w64-x86_64-libelf-0.8.13-6-any.pkg.tar.zst
mingw-w64-x86_64-zlib-1.2.12-1-any.pkg.tar.zst
mingw-w64-x86_64-zstd-1.5.2-2-any.pkg.tar.zst
"
for i in $MINGW_PACKET_LIST
do
wget -q https://mirror.msys2.org/mingw/mingw64/$i
tar xf $i --strip-components=1 -C /usr/x86_64-w64-mingw32/
done
popd
rm -rf ~/tmp
mkdir -p /usr/x86_64-w64-mingw32/bin
# The output of `wine64 llvm-config --system-libs --cxxflags mcdisassembler`
# contains absolute paths like '-IZ:'.
# The sed is used to replace `-IZ:/usr/x86_64-w64-mingw32/include`
# with `-I/usr/x86_64-w64-mingw32/include`.
# Debian's pkg-config wrappers for mingw are broken, and there's no sign that
# they're going to be fixed, so we'll just have to fix it ourselves
# https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=930492
cat >/usr/x86_64-w64-mingw32/bin/pkg-config <<EOF
#!/bin/sh
PKG_CONFIG_LIBDIR=/usr/x86_64-w64-mingw32/lib/pkgconfig:/usr/x86_64-w64-mingw32/share/pkgconfig pkg-config \$@
EOF
chmod +x /usr/x86_64-w64-mingw32/bin/pkg-config
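A minimal usage sketch of the wrapper above (the package name is hypothetical; it assumes the matching .pc file was unpacked from the msys2 archives earlier in this script):
# Resolves against the mingw sysroot's pkgconfig dirs instead of the host's
/usr/x86_64-w64-mingw32/bin/pkg-config --cflags --libs zlib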
cat >/usr/x86_64-w64-mingw32/bin/llvm-config <<EOF
#!/bin/sh
wine64 llvm-config \$@ | sed -e "s,Z:/,/,gi"
EOF
chmod +x /usr/x86_64-w64-mingw32/bin/llvm-config
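For illustration only (hypothetical output; it assumes a mingw LLVM has been installed into the sysroot and is visible to Wine, which the mingw source-deps script below takes care of):
# Without the sed, Wine reports paths on its virtual Z: drive:
#   wine64 llvm-config --includedir                       ->  Z:/usr/x86_64-w64-mingw32/include
# The wrapper rewrites them so native build systems can consume the result:
#   /usr/x86_64-w64-mingw32/bin/llvm-config --includedir  ->  /usr/x86_64-w64-mingw32/include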
cat >/usr/x86_64-w64-mingw32/bin/clang <<EOF
#!/bin/sh
wine64 clang \$@
EOF
chmod +x /usr/x86_64-w64-mingw32/bin/clang
cat >/usr/x86_64-w64-mingw32/bin/llvm-as <<EOF
#!/bin/sh
wine64 llvm-as \$@
EOF
chmod +x /usr/x86_64-w64-mingw32/bin/llvm-as
cat >/usr/x86_64-w64-mingw32/bin/llvm-link <<EOF
#!/bin/sh
wine64 llvm-link \$@
EOF
chmod +x /usr/x86_64-w64-mingw32/bin/llvm-link
cat >/usr/x86_64-w64-mingw32/bin/opt <<EOF
#!/bin/sh
wine64 opt \$@
EOF
chmod +x /usr/x86_64-w64-mingw32/bin/opt
cat >/usr/x86_64-w64-mingw32/bin/llvm-spirv <<EOF
#!/bin/sh
wine64 llvm-spirv \$@
EOF
chmod +x /usr/x86_64-w64-mingw32/bin/llvm-spirv

@@ -1,126 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -e
# Building libdrm (libva dependency)
. .gitlab-ci/container/build-libdrm.sh
wd=$PWD
CMAKE_TOOLCHAIN_MINGW_PATH=$wd/.gitlab-ci/container/debian/x86_mingw-toolchain.cmake
mkdir -p ~/tmp
pushd ~/tmp
# Building DirectX-Headers
git clone https://github.com/microsoft/DirectX-Headers -b v1.606.4 --depth 1
mkdir -p DirectX-Headers/build
pushd DirectX-Headers/build
meson .. \
--backend=ninja \
--buildtype=release -Dbuild-test=false \
-Dprefix=/usr/x86_64-w64-mingw32/ \
--cross-file=$wd/.gitlab-ci/x86_64-w64-mingw32
ninja install
popd
# Building libva
git clone https://github.com/intel/libva
pushd libva/
# Checking out commit hash with libva-win32 support
# This feature will be released with libva version 2.17
git checkout 2579eb0f77897dc01a02c1e43defc63c40fd2988
popd
# libva already has a 'build' dir in its repo, so use 'builddir' instead
mkdir -p libva/builddir
pushd libva/builddir
meson .. \
--backend=ninja \
--buildtype=release \
-Dprefix=/usr/x86_64-w64-mingw32/ \
--cross-file=$wd/.gitlab-ci/x86_64-w64-mingw32
ninja install
popd
export VULKAN_SDK_VERSION=1.3.211.0
# Building SPIRV Tools
git clone -b sdk-$VULKAN_SDK_VERSION --depth=1 \
https://github.com/KhronosGroup/SPIRV-Tools SPIRV-Tools
git clone -b sdk-$VULKAN_SDK_VERSION --depth=1 \
https://github.com/KhronosGroup/SPIRV-Headers SPIRV-Tools/external/SPIRV-Headers
mkdir -p SPIRV-Tools/build
pushd SPIRV-Tools/build
cmake .. \
-DCMAKE_TOOLCHAIN_FILE=$CMAKE_TOOLCHAIN_MINGW_PATH \
-DCMAKE_INSTALL_PREFIX=/usr/x86_64-w64-mingw32/ \
-GNinja -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_CROSSCOMPILING=1 \
-DCMAKE_POLICY_DEFAULT_CMP0091=NEW
ninja install
popd
# Building LLVM
git clone -b release/14.x --depth=1 \
https://github.com/llvm/llvm-project llvm-project
git clone -b v14.0.0 --depth=1 \
https://github.com/KhronosGroup/SPIRV-LLVM-Translator llvm-project/llvm/projects/SPIRV-LLVM-Translator
mkdir llvm-project/build
pushd llvm-project/build
cmake ../llvm \
-DCMAKE_TOOLCHAIN_FILE=$CMAKE_TOOLCHAIN_MINGW_PATH \
-DCMAKE_INSTALL_PREFIX=/usr/x86_64-w64-mingw32/ \
-GNinja -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_CROSSCOMPILING=1 \
-DLLVM_ENABLE_RTTI=ON \
-DCROSS_TOOLCHAIN_FLAGS_NATIVE=-DLLVM_EXTERNAL_SPIRV_HEADERS_SOURCE_DIR=$PWD/../../SPIRV-Tools/external/SPIRV-Headers \
-DLLVM_EXTERNAL_SPIRV_HEADERS_SOURCE_DIR=$PWD/../../SPIRV-Tools/external/SPIRV-Headers \
-DLLVM_ENABLE_PROJECTS="clang" \
-DLLVM_TARGETS_TO_BUILD="AMDGPU;X86" \
-DLLVM_OPTIMIZED_TABLEGEN=TRUE \
-DLLVM_ENABLE_ASSERTIONS=TRUE \
-DLLVM_INCLUDE_UTILS=OFF \
-DLLVM_INCLUDE_RUNTIMES=OFF \
-DLLVM_INCLUDE_TESTS=OFF \
-DLLVM_INCLUDE_EXAMPLES=OFF \
-DLLVM_INCLUDE_GO_TESTS=OFF \
-DLLVM_INCLUDE_BENCHMARKS=OFF \
-DLLVM_BUILD_LLVM_C_DYLIB=OFF \
-DLLVM_ENABLE_DIA_SDK=OFF \
-DCLANG_BUILD_TOOLS=ON \
-DLLVM_SPIRV_INCLUDE_TESTS=OFF
ninja install
popd
# Building libclc
mkdir llvm-project/build-libclc
pushd llvm-project/build-libclc
cmake ../libclc \
-DCMAKE_TOOLCHAIN_FILE=$CMAKE_TOOLCHAIN_MINGW_PATH \
-DCMAKE_INSTALL_PREFIX=/usr/x86_64-w64-mingw32/ \
-GNinja -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_CROSSCOMPILING=1 \
-DCMAKE_POLICY_DEFAULT_CMP0091=NEW \
-DCMAKE_CXX_FLAGS="-m64" \
-DLLVM_CONFIG="/usr/x86_64-w64-mingw32/bin/llvm-config" \
-DLLVM_CLANG="/usr/x86_64-w64-mingw32/bin/clang" \
-DLLVM_AS="/usr/x86_64-w64-mingw32/bin/llvm-as" \
-DLLVM_LINK="/usr/x86_64-w64-mingw32/bin/llvm-link" \
-DLLVM_OPT="/usr/x86_64-w64-mingw32/bin/opt" \
-DLLVM_SPIRV="/usr/x86_64-w64-mingw32/bin/llvm-spirv" \
-DLIBCLC_TARGETS_TO_BUILD="spirv-mesa3d-;spirv64-mesa3d-"
ninja install
popd
popd # ~/tmp
# Cleanup ~/tmp
rm -rf ~/tmp

@@ -1,13 +0,0 @@
#!/bin/bash
set -e
set -o xtrace
apt-get update
apt-get install -y --no-remove \
zstd \
g++-mingw-w64-i686 \
g++-mingw-w64-x86-64
. .gitlab-ci/container/debian/x86_build-mingw-patch.sh
. .gitlab-ci/container/debian/x86_build-mingw-source-deps.sh

@@ -1,108 +0,0 @@
#!/bin/bash
# shellcheck disable=SC2086 # we want word splitting
set -e
set -o xtrace
export DEBIAN_FRONTEND=noninteractive
# Ephemeral packages (installed for this script and removed again at the end)
STABLE_EPHEMERAL=" \
autoconf \
automake \
autotools-dev \
bzip2 \
libtool \
libssl-dev \
python3-pip \
"
apt-get update
apt-get install -y --no-remove \
$STABLE_EPHEMERAL \
check \
clang \
libasan6 \
libarchive-dev \
libclang-cpp13-dev \
libclang-cpp11-dev \
libgbm-dev \
libglvnd-dev \
liblua5.3-dev \
libxcb-dri2-0-dev \
libxcb-dri3-dev \
libxcb-glx0-dev \
libxcb-present-dev \
libxcb-randr0-dev \
libxcb-shm0-dev \
libxcb-sync-dev \
libxcb-xfixes0-dev \
libxcb1-dev \
libxml2-dev \
llvm-13-dev \
llvm-11-dev \
ocl-icd-opencl-dev \
python3-freezegun \
python3-pytest \
procps \
spirv-tools \
shellcheck \
strace \
time \
yamllint \
zstd
. .gitlab-ci/container/container_pre_build.sh
# dependencies where we want a specific version
export XORG_RELEASES=https://xorg.freedesktop.org/releases/individual
export XORGMACROS_VERSION=util-macros-1.19.0
. .gitlab-ci/container/build-mold.sh
wget $XORG_RELEASES/util/$XORGMACROS_VERSION.tar.bz2
tar -xvf $XORGMACROS_VERSION.tar.bz2 && rm $XORGMACROS_VERSION.tar.bz2
cd $XORGMACROS_VERSION; ./configure; make install; cd ..
rm -rf $XORGMACROS_VERSION
. .gitlab-ci/container/build-llvm-spirv.sh
. .gitlab-ci/container/build-libclc.sh
. .gitlab-ci/container/build-libdrm.sh
. .gitlab-ci/container/build-wayland.sh
pushd /usr/local
git clone https://gitlab.freedesktop.org/mesa/shader-db.git --depth 1
rm -rf shader-db/.git
cd shader-db
make
popd
git clone https://github.com/microsoft/DirectX-Headers -b v1.606.4 --depth 1
mkdir -p DirectX-Headers/build
pushd DirectX-Headers/build
meson .. --backend=ninja --buildtype=release -Dbuild-test=false
ninja
ninja install
popd
rm -rf DirectX-Headers
pip3 install git+https://git.lavasoftware.org/lava/lavacli@3db3ddc45e5358908bc6a17448059ea2340492b7
# install bindgen
RUSTFLAGS='-L native=/usr/local/lib' cargo install \
bindgen --version 0.59.2 \
-j ${FDO_CI_CONCURRENT:-4} \
--root /usr/local
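A quick sanity check could follow here (a sketch, not in the original script): cargo install with --root /usr/local drops the binary into /usr/local/bin.
# Expected to report the pinned version, bindgen 0.59.2
/usr/local/bin/bindgen --version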
############### Uninstall the build software
apt-get purge -y \
$STABLE_EPHEMERAL
. .gitlab-ci/container/container_post_build.sh

@@ -1,8 +0,0 @@
set(CMAKE_SYSTEM_NAME Windows)
set(CMAKE_SYSTEM_PROCESSOR x86_64)
set(CMAKE_SYSROOT /usr/x86_64-w64-mingw32/)
set(ENV{PKG_CONFIG} /usr/x86_64-w64-mingw32/bin/pkg-config)
set(CMAKE_C_COMPILER x86_64-w64-mingw32-gcc-posix)
set(CMAKE_CXX_COMPILER x86_64-w64-mingw32-g++-posix)
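A minimal sketch of how this toolchain file is consumed, mirroring the cmake invocations in the mingw source-deps script above (source and build directories are placeholders):
cmake -S . -B build -GNinja -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_TOOLCHAIN_FILE=.gitlab-ci/container/debian/x86_mingw-toolchain.cmake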

Some files were not shown because too many files have changed in this diff.