Compare commits

..

11 Commits

Author SHA1 Message Date
Joshua Ashton
aa5bc3e41f wsi: Implement linux-drm-syncobj-v1
This implements explicit sync with linux-drm-syncobj-v1 for the
Wayland WSI.

Signed-off-by: Joshua Ashton <joshua@froggi.es>
2024-03-27 17:49:03 +00:00
Joshua Ashton
e4e3436d45 wsi: Add common infrastructure for explicit sync
Signed-off-by: Joshua Ashton <joshua@froggi.es>
2024-03-27 17:49:03 +00:00
Joshua Ashton
becb5d5161 wsi: Get timeline semaphore exportable handle types
We need to know this for explicit sync

Signed-off-by: Joshua Ashton <joshua@froggi.es>
2024-03-27 17:44:16 +00:00
Joshua Ashton
06c2af994b wsi: Track CPU side present ordering via a serial
We will use this in our heuristics to pick the optimal buffer in AcquireNextImageKHR.

Signed-off-by: Joshua Ashton <joshua@froggi.es>
2024-03-25 21:00:54 +00:00
Joshua Ashton
d9cbc79941 wsi: Add acquired member to wsi_image
Tracks whether this wsi_image has been acquired by the app

Signed-off-by: Joshua Ashton <joshua@froggi.es>
2024-03-25 21:00:54 +00:00
Joshua Ashton
e209b02b97 wsi: Track if timeline semaphores are supported
This will be needed before we expose and use explicit sync.

Even if the host Wayland compositor supports timeline semaphores, the
underlying driver may not (e.g. in the case of Venus).

Signed-off-by: Joshua Ashton <joshua@froggi.es>
2024-03-22 00:24:26 +00:00
Joshua Ashton
8a098f591b build: Add linux-drm-syncobj-v1 wayland protocol
Signed-off-by: Joshua Ashton <joshua@froggi.es>
2024-03-22 00:24:26 +00:00
Joshua Ashton
754f52e1e1 wsi: Add explicit_sync to wsi_drm_image_params
Allow the WSI frontend to request explicit sync buffers.

Signed-off-by: Joshua Ashton <joshua@froggi.es>
2024-03-22 00:24:26 +00:00
Joshua Ashton
00dba3992c wsi: Add explicit_sync to wsi_image_info
Will be used in the future to specify explicit sync for Vulkan WSI when supported.

Additionally cleans up wsi_create_buffer_blit_context, etc.

Signed-off-by: Joshua Ashton <joshua@froggi.es>
2024-03-22 00:24:26 +00:00
Joshua Ashton
9c8f205131 wsi: Pass wsi_drm_image_params to wsi_configure_prime_image
Signed-off-by: Joshua Ashton <joshua@froggi.es>
2024-03-20 17:21:27 +00:00
Joshua Ashton
f17f43b149 wsi: Pass wsi_drm_image_params to wsi_configure_native_image
No need to split this out into function parameters; it's just less clean.

Signed-off-by: Joshua Ashton <joshua@froggi.es>
2024-03-20 17:21:26 +00:00
2090 changed files with 58411 additions and 155942 deletions

View File

@@ -33,7 +33,7 @@ workflow:
# merge pipeline # merge pipeline
- if: &is-merge-attempt $GITLAB_USER_LOGIN == "marge-bot" && $CI_PIPELINE_SOURCE == "merge_request_event" - if: &is-merge-attempt $GITLAB_USER_LOGIN == "marge-bot" && $CI_PIPELINE_SOURCE == "merge_request_event"
variables: variables:
KERNEL_IMAGE_BASE: https://${S3_HOST}/${S3_KERNEL_BUCKET}/${KERNEL_REPO}/${KERNEL_TAG} KERNEL_IMAGE_BASE: https://${S3_HOST}/mesa-lava/${KERNEL_REPO}/${KERNEL_TAG}
MESA_CI_PERFORMANCE_ENABLED: 1 MESA_CI_PERFORMANCE_ENABLED: 1
VALVE_INFRA_VANGOGH_JOB_PRIORITY: "" # Empty tags are ignored by gitlab VALVE_INFRA_VANGOGH_JOB_PRIORITY: "" # Empty tags are ignored by gitlab
# post-merge pipeline # post-merge pipeline
@@ -41,24 +41,24 @@ workflow:
# nightly pipeline # nightly pipeline
- if: &is-scheduled-pipeline $CI_PIPELINE_SOURCE == "schedule" - if: &is-scheduled-pipeline $CI_PIPELINE_SOURCE == "schedule"
variables: variables:
KERNEL_IMAGE_BASE: https://${S3_HOST}/${S3_KERNEL_BUCKET}/${KERNEL_REPO}/${KERNEL_TAG} KERNEL_IMAGE_BASE: https://${S3_HOST}/mesa-lava/${KERNEL_REPO}/${KERNEL_TAG}
JOB_PRIORITY: 50 JOB_PRIORITY: 50
VALVE_INFRA_VANGOGH_JOB_PRIORITY: priority:low VALVE_INFRA_VANGOGH_JOB_PRIORITY: priority:low
# pipeline for direct pushes that bypassed the CI # pipeline for direct pushes that bypassed the CI
- if: &is-direct-push $CI_PROJECT_NAMESPACE == "mesa" && $CI_PIPELINE_SOURCE == "push" && $GITLAB_USER_LOGIN != "marge-bot" - if: &is-direct-push $CI_PROJECT_NAMESPACE == "mesa" && $CI_PIPELINE_SOURCE == "push" && $GITLAB_USER_LOGIN != "marge-bot"
variables: variables:
KERNEL_IMAGE_BASE: https://${S3_HOST}/${S3_KERNEL_BUCKET}/${KERNEL_REPO}/${KERNEL_TAG} KERNEL_IMAGE_BASE: https://${S3_HOST}/mesa-lava/${KERNEL_REPO}/${KERNEL_TAG}
JOB_PRIORITY: 40 JOB_PRIORITY: 40
VALVE_INFRA_VANGOGH_JOB_PRIORITY: priority:low VALVE_INFRA_VANGOGH_JOB_PRIORITY: priority:low
# pre-merge or fork pipeline # pre-merge or fork pipeline
- if: $FORCE_KERNEL_TAG != null - if: $FORCE_KERNEL_TAG != null
variables: variables:
KERNEL_IMAGE_BASE: https://${S3_HOST}/${S3_KERNEL_BUCKET}/${KERNEL_REPO}/${FORCE_KERNEL_TAG} KERNEL_IMAGE_BASE: https://${S3_HOST}/mesa-lava/${KERNEL_REPO}/${FORCE_KERNEL_TAG}
JOB_PRIORITY: 50 JOB_PRIORITY: 50
VALVE_INFRA_VANGOGH_JOB_PRIORITY: priority:low VALVE_INFRA_VANGOGH_JOB_PRIORITY: priority:low
- if: $FORCE_KERNEL_TAG == null - if: $FORCE_KERNEL_TAG == null
variables: variables:
KERNEL_IMAGE_BASE: https://${S3_HOST}/${S3_KERNEL_BUCKET}/${KERNEL_REPO}/${KERNEL_TAG} KERNEL_IMAGE_BASE: https://${S3_HOST}/mesa-lava/${KERNEL_REPO}/${KERNEL_TAG}
JOB_PRIORITY: 50 JOB_PRIORITY: 50
VALVE_INFRA_VANGOGH_JOB_PRIORITY: priority:low VALVE_INFRA_VANGOGH_JOB_PRIORITY: priority:low
@@ -72,24 +72,14 @@ variables:
bash download-git-cache.sh bash download-git-cache.sh
rm download-git-cache.sh rm download-git-cache.sh
set +o xtrace set +o xtrace
S3_JWT_FILE: /s3_jwt CI_JOB_JWT_FILE: /minio_jwt
S3_HOST: s3.freedesktop.org S3_HOST: s3.freedesktop.org
# This bucket is used to fetch the kernel image
S3_KERNEL_BUCKET: mesa-rootfs
# Bucket for git cache
S3_GITCACHE_BUCKET: git-cache
# Bucket for the pipeline artifacts pushed to S3
S3_ARTIFACTS_BUCKET: artifacts
# Buckets for traces
S3_TRACIE_RESULTS_BUCKET: mesa-tracie-results
S3_TRACIE_PUBLIC_BUCKET: mesa-tracie-public
S3_TRACIE_PRIVATE_BUCKET: mesa-tracie-private
# per-pipeline artifact storage on MinIO # per-pipeline artifact storage on MinIO
PIPELINE_ARTIFACTS_BASE: ${S3_HOST}/${S3_ARTIFACTS_BUCKET}/${CI_PROJECT_PATH}/${CI_PIPELINE_ID} PIPELINE_ARTIFACTS_BASE: ${S3_HOST}/artifacts/${CI_PROJECT_PATH}/${CI_PIPELINE_ID}
# per-job artifact storage on MinIO # per-job artifact storage on MinIO
JOB_ARTIFACTS_BASE: ${PIPELINE_ARTIFACTS_BASE}/${CI_JOB_ID} JOB_ARTIFACTS_BASE: ${PIPELINE_ARTIFACTS_BASE}/${CI_JOB_ID}
# reference images stored for traces # reference images stored for traces
PIGLIT_REPLAY_REFERENCE_IMAGES_BASE: "${S3_HOST}/${S3_TRACIE_RESULTS_BUCKET}/$FDO_UPSTREAM_REPO" PIGLIT_REPLAY_REFERENCE_IMAGES_BASE: "${S3_HOST}/mesa-tracie-results/$FDO_UPSTREAM_REPO"
# For individual CI farm status see .ci-farms folder # For individual CI farm status see .ci-farms folder
# Disable farm with `git mv .ci-farms{,-disabled}/$farm_name` # Disable farm with `git mv .ci-farms{,-disabled}/$farm_name`
# Re-enable farm with `git mv .ci-farms{-disabled,}/$farm_name` # Re-enable farm with `git mv .ci-farms{-disabled,}/$farm_name`
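The hunk above replaces hardcoded bucket names (`mesa-lava`, `artifacts`, `mesa-tracie-results`) with named `S3_*` variables. A minimal sketch of how the new variables compose into a kernel image URL; the repo and tag values here are made up for illustration, only the host and bucket names come from the diff:

```shell
# New-style composition: every bucket gets its own variable instead of a
# string baked into each URL.
S3_HOST=s3.freedesktop.org
S3_KERNEL_BUCKET=mesa-rootfs
KERNEL_REPO=example/linux   # placeholder; the real repo name is not shown here
KERNEL_TAG=v6.6             # placeholder tag
KERNEL_IMAGE_BASE="https://${S3_HOST}/${S3_KERNEL_BUCKET}/${KERNEL_REPO}/${KERNEL_TAG}"
echo "${KERNEL_IMAGE_BASE}"
```

Renaming the bucket later then means touching one variable rather than every `KERNEL_IMAGE_BASE` occurrence.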
@@ -97,22 +87,15 @@ variables:
ARTIFACTS_BASE_URL: https://${CI_PROJECT_ROOT_NAMESPACE}.${CI_PAGES_DOMAIN}/-/${CI_PROJECT_NAME}/-/jobs/${CI_JOB_ID}/artifacts ARTIFACTS_BASE_URL: https://${CI_PROJECT_ROOT_NAMESPACE}.${CI_PAGES_DOMAIN}/-/${CI_PROJECT_NAME}/-/jobs/${CI_JOB_ID}/artifacts
# Python scripts for structured logger # Python scripts for structured logger
PYTHONPATH: "$PYTHONPATH:$CI_PROJECT_DIR/install" PYTHONPATH: "$PYTHONPATH:$CI_PROJECT_DIR/install"
# Drop once deqp-runner is upgraded to > 0.18.0
MESA_VK_ABORT_ON_DEVICE_LOSS: 1
# Avoid the wall of "Unsupported SPIR-V capability" warnings in CI job log, hiding away useful output
MESA_SPIRV_LOG_LEVEL: error
default: default:
id_tokens:
S3_JWT:
aud: https://s3.freedesktop.org
before_script: before_script:
- > - >
export SCRIPTS_DIR=$(mktemp -d) && export SCRIPTS_DIR=$(mktemp -d) &&
curl -L -s --retry 4 -f --retry-all-errors --retry-delay 60 -O --output-dir "${SCRIPTS_DIR}" "${CI_PROJECT_URL}/-/raw/${CI_COMMIT_SHA}/.gitlab-ci/setup-test-env.sh" && curl -L -s --retry 4 -f --retry-all-errors --retry-delay 60 -O --output-dir "${SCRIPTS_DIR}" "${CI_PROJECT_URL}/-/raw/${CI_COMMIT_SHA}/.gitlab-ci/setup-test-env.sh" &&
. ${SCRIPTS_DIR}/setup-test-env.sh && . ${SCRIPTS_DIR}/setup-test-env.sh &&
echo -n "${S3_JWT}" > "${S3_JWT_FILE}" && echo -n "${CI_JOB_JWT}" > "${CI_JOB_JWT_FILE}" &&
unset CI_JOB_JWT S3_JWT # Unsetting vulnerable env variables unset CI_JOB_JWT # Unsetting vulnerable env variables
after_script: after_script:
# Work around https://gitlab.com/gitlab-org/gitlab/-/issues/20338 # Work around https://gitlab.com/gitlab-org/gitlab/-/issues/20338
@@ -121,9 +104,9 @@ default:
- > - >
set +x set +x
test -e "${S3_JWT_FILE}" && test -e "${CI_JOB_JWT_FILE}" &&
export S3_JWT="$(<${S3_JWT_FILE})" && export CI_JOB_JWT="$(<${CI_JOB_JWT_FILE})" &&
rm "${S3_JWT_FILE}" rm "${CI_JOB_JWT_FILE}"
# Retry when job fails. Failed jobs can be found in the Mesa CI Daily Reports: # Retry when job fails. Failed jobs can be found in the Mesa CI Daily Reports:
# https://gitlab.freedesktop.org/mesa/mesa/-/issues/?sort=created_date&state=opened&label_name%5B%5D=CI%20daily # https://gitlab.freedesktop.org/mesa/mesa/-/issues/?sort=created_date&state=opened&label_name%5B%5D=CI%20daily
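The hunks above rename the token plumbing from `CI_JOB_JWT`/`CI_JOB_JWT_FILE` to `S3_JWT`/`S3_JWT_FILE`, with the token now minted via GitLab `id_tokens`. A standalone sketch of the save/unset/restore dance the `before_script` and `after_script` perform; the token value is a dummy stand-in:

```shell
# before_script: persist the token to a file, then drop it from the
# environment so child processes cannot leak it.
S3_JWT="dummy-token"          # stand-in value for illustration
S3_JWT_FILE="$(mktemp)"
echo -n "${S3_JWT}" > "${S3_JWT_FILE}"
unset S3_JWT

# ... the job's own script runs here without the secret in its env ...

# after_script: restore the token for artifact upload, then remove the file.
test -e "${S3_JWT_FILE}" &&
  export S3_JWT="$(<"${S3_JWT_FILE}")" &&
  rm "${S3_JWT_FILE}"
```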
@@ -276,7 +259,8 @@ make git archive:
# compress the current folder # compress the current folder
- tar -cvzf ../$CI_PROJECT_NAME.tar.gz . - tar -cvzf ../$CI_PROJECT_NAME.tar.gz .
- ci-fairy s3cp --token-file "${S3_JWT_FILE}" ../$CI_PROJECT_NAME.tar.gz https://$S3_HOST/git-cache/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/$CI_PROJECT_NAME.tar.gz - ci-fairy s3cp --token-file "${CI_JOB_JWT_FILE}" ../$CI_PROJECT_NAME.tar.gz https://$S3_HOST/git-cache/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/$CI_PROJECT_NAME.tar.gz
# Sanity checks of MR settings and commit logs # Sanity checks of MR settings and commit logs
sanity: sanity:
@@ -326,22 +310,6 @@ sanity:
- placeholder-job - placeholder-job
mr-label-maker-test:
extends:
- .fdo.ci-fairy
stage: sanity
rules:
- !reference [.mr-label-maker-rules, rules]
variables:
GIT_STRATEGY: fetch
timeout: 10m
script:
- set -eu
- python3 -m venv .venv
- source .venv/bin/activate
- pip install git+https://gitlab.freedesktop.org/freedesktop/mr-label-maker
- mr-label-maker --dry-run --mr $CI_MERGE_REQUEST_IID
# Jobs that need to pass before spending hardware resources on further testing # Jobs that need to pass before spending hardware resources on further testing
.required-for-hardware-jobs: .required-for-hardware-jobs:
needs: needs:

View File

@@ -61,7 +61,3 @@ deployment:
initramfs: initramfs:
url: '{{ initramfs_url }}' url: '{{ initramfs_url }}'
{% if dtb_url is defined %}
dtb:
url: '{{ dtb_url }}'
{% endif %}

View File

@@ -10,7 +10,7 @@ if [ -z "$BM_POE_ADDRESS" ]; then
exit 1 exit 1
fi fi
SNMP_KEY="SNMPv2-SMI::mib-2.105.1.1.1.3.1.$((${BM_POE_BASE:-0} + BM_POE_INTERFACE))" SNMP_KEY="SNMPv2-SMI::mib-2.105.1.1.1.3.1.$((48 + BM_POE_INTERFACE))"
SNMP_OFF="i 2" SNMP_OFF="i 2"
flock /var/run/poe.lock -c "snmpset -v2c -r 3 -t 30 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_OFF" flock /var/run/poe.lock -c "snmpset -v2c -r 3 -t 30 -cmesaci $BM_POE_ADDRESS $SNMP_KEY $SNMP_OFF"
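The change above replaces the hardcoded OID offset `48` with `${BM_POE_BASE:-0}`, so the base can vary per switch model while defaulting to 0 when unset. A quick sketch showing the old and new forms agree exactly when `BM_POE_BASE=48` (the interface number is made up):

```shell
BM_POE_INTERFACE=3
# Old form: the switch-model-specific offset 48 is baked in.
SNMP_KEY_OLD="SNMPv2-SMI::mib-2.105.1.1.1.3.1.$((48 + BM_POE_INTERFACE))"
# New form: the offset comes from BM_POE_BASE, defaulting to 0 if unset.
BM_POE_BASE=48
SNMP_KEY_NEW="SNMPv2-SMI::mib-2.105.1.1.1.3.1.$((${BM_POE_BASE:-0} + BM_POE_INTERFACE))"
```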

View File

@@ -10,7 +10,7 @@ if [ -z "$BM_POE_ADDRESS" ]; then
exit 1 exit 1
fi fi
SNMP_KEY="SNMPv2-SMI::mib-2.105.1.1.1.3.1.$((${BM_POE_BASE:-0} + BM_POE_INTERFACE))" SNMP_KEY="SNMPv2-SMI::mib-2.105.1.1.1.3.1.$((48 + BM_POE_INTERFACE))"
SNMP_ON="i 1" SNMP_ON="i 1"
SNMP_OFF="i 2" SNMP_OFF="i 2"

View File

@@ -13,7 +13,7 @@ date +'%F %T'
# Make JWT token available as file in the bare-metal storage to enable access # Make JWT token available as file in the bare-metal storage to enable access
# to MinIO # to MinIO
cp "${S3_JWT_FILE}" "${rootfs_dst}${S3_JWT_FILE}" cp "${CI_JOB_JWT_FILE}" "${rootfs_dst}${CI_JOB_JWT_FILE}"
date +'%F %T' date +'%F %T'

View File

@@ -17,6 +17,7 @@
paths: paths:
- _build/meson-logs/*.txt - _build/meson-logs/*.txt
- _build/meson-logs/strace - _build/meson-logs/strace
- shader-db
- artifacts - artifacts
# Just Linux # Just Linux
@@ -70,14 +71,13 @@ debian-testing:
-D glx=dri -D glx=dri
-D gbm=enabled -D gbm=enabled
-D egl=enabled -D egl=enabled
-D glvnd=disabled
-D platforms=x11,wayland -D platforms=x11,wayland
GALLIUM_ST: > GALLIUM_ST: >
-D dri3=enabled -D dri3=enabled
-D gallium-nine=true -D gallium-nine=true
-D gallium-va=enabled -D gallium-va=enabled
-D gallium-rusticl=true -D gallium-rusticl=true
GALLIUM_DRIVERS: "swrast,virgl,radeonsi,zink,crocus,iris,i915,r300,svga" GALLIUM_DRIVERS: "swrast,virgl,radeonsi,zink,crocus,iris,i915,r300"
VULKAN_DRIVERS: "swrast,amd,intel,intel_hasvk,virtio,nouveau" VULKAN_DRIVERS: "swrast,amd,intel,intel_hasvk,virtio,nouveau"
BUILDTYPE: "debugoptimized" BUILDTYPE: "debugoptimized"
EXTRA_OPTION: > EXTRA_OPTION: >
@@ -163,7 +163,6 @@ debian-build-testing:
-D glx=dri -D glx=dri
-D gbm=enabled -D gbm=enabled
-D egl=enabled -D egl=enabled
-D glvnd=disabled
-D platforms=x11,wayland -D platforms=x11,wayland
GALLIUM_ST: > GALLIUM_ST: >
-D dri3=enabled -D dri3=enabled
@@ -182,7 +181,6 @@ debian-build-testing:
-D tools=drm-shim,etnaviv,freedreno,glsl,intel,intel-ui,nir,nouveau,lima,panfrost,asahi -D tools=drm-shim,etnaviv,freedreno,glsl,intel,intel-ui,nir,nouveau,lima,panfrost,asahi
-D b_lto=true -D b_lto=true
LLVM_VERSION: 15 LLVM_VERSION: 15
S3_ARTIFACT_NAME: debian-build-testing
script: | script: |
section_start lava-pytest "lava-pytest" section_start lava-pytest "lava-pytest"
.gitlab-ci/lava/lava-pytest.sh .gitlab-ci/lava/lava-pytest.sh
@@ -190,28 +188,11 @@ debian-build-testing:
.gitlab-ci/run-shellcheck.sh .gitlab-ci/run-shellcheck.sh
section_switch yamllint "yamllint" section_switch yamllint "yamllint"
.gitlab-ci/run-yamllint.sh .gitlab-ci/run-yamllint.sh
section_end yamllint section_switch meson "meson"
.gitlab-ci/meson/build.sh .gitlab-ci/meson/build.sh
.gitlab-ci/prepare-artifacts.sh section_switch shader-db "shader-db"
timeout: 15m
shader-db:
stage: code-validation
extends:
- .use-debian/x86_64_build
- .container+build-rules
needs:
- debian-build-testing
variables:
S3_ARTIFACT_NAME: debian-build-testing
before_script:
- !reference [.download_s3, before_script]
script: |
.gitlab-ci/run-shader-db.sh .gitlab-ci/run-shader-db.sh
artifacts: timeout: 30m
paths:
- shader-db
timeout: 15m
# Test a release build with -Werror so new warnings don't sneak in. # Test a release build with -Werror so new warnings don't sneak in.
debian-release: debian-release:
@@ -225,7 +206,6 @@ debian-release:
-D glx=dri -D glx=dri
-D gbm=enabled -D gbm=enabled
-D egl=enabled -D egl=enabled
-D glvnd=disabled
-D platforms=x11,wayland -D platforms=x11,wayland
GALLIUM_ST: > GALLIUM_ST: >
-D dri3=enabled -D dri3=enabled
@@ -267,7 +247,7 @@ alpine-build-testing:
-D glx=disabled -D glx=disabled
-D gbm=enabled -D gbm=enabled
-D egl=enabled -D egl=enabled
-D glvnd=disabled -D glvnd=false
-D platforms=wayland -D platforms=wayland
LLVM_VERSION: "16" LLVM_VERSION: "16"
GALLIUM_DRIVERS: "crocus,etnaviv,freedreno,iris,kmsro,lima,nouveau,panfrost,r300,r600,radeonsi,svga,swrast,tegra,v3d,vc4,virgl,zink" GALLIUM_DRIVERS: "crocus,etnaviv,freedreno,iris,kmsro,lima,nouveau,panfrost,r300,r600,radeonsi,svga,swrast,tegra,v3d,vc4,virgl,zink"
@@ -307,7 +287,7 @@ fedora-release:
-D glx=dri -D glx=dri
-D gbm=enabled -D gbm=enabled
-D egl=enabled -D egl=enabled
-D glvnd=enabled -D glvnd=true
-D platforms=x11,wayland -D platforms=x11,wayland
EXTRA_OPTION: > EXTRA_OPTION: >
-D b_lto=true -D b_lto=true
@@ -360,7 +340,6 @@ debian-android:
-D glx=disabled -D glx=disabled
-D gbm=disabled -D gbm=disabled
-D egl=enabled -D egl=enabled
-D glvnd=disabled
-D platforms=android -D platforms=android
EXTRA_OPTION: > EXTRA_OPTION: >
-D android-stub=true -D android-stub=true
@@ -441,8 +420,6 @@ debian-arm32:
- .ci-deqp-artifacts - .ci-deqp-artifacts
variables: variables:
CROSS: armhf CROSS: armhf
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: > EXTRA_OPTION: >
-D llvm=disabled -D llvm=disabled
-D valgrind=disabled -D valgrind=disabled
@@ -458,8 +435,6 @@ debian-arm32-asan:
extends: extends:
- debian-arm32 - debian-arm32
variables: variables:
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: > EXTRA_OPTION: >
-D llvm=disabled -D llvm=disabled
-D b_sanitize=address -D b_sanitize=address
@@ -478,8 +453,6 @@ debian-arm64:
-Wno-error=array-bounds -Wno-error=array-bounds
-Wno-error=stringop-truncation -Wno-error=stringop-truncation
VULKAN_DRIVERS: "freedreno,broadcom,panfrost,imagination-experimental" VULKAN_DRIVERS: "freedreno,broadcom,panfrost,imagination-experimental"
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: > EXTRA_OPTION: >
-D llvm=disabled -D llvm=disabled
-D valgrind=disabled -D valgrind=disabled
@@ -496,8 +469,6 @@ debian-arm64-asan:
extends: extends:
- debian-arm64 - debian-arm64
variables: variables:
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: > EXTRA_OPTION: >
-D llvm=disabled -D llvm=disabled
-D b_sanitize=address -D b_sanitize=address
@@ -513,8 +484,6 @@ debian-arm64-build-test:
- .ci-deqp-artifacts - .ci-deqp-artifacts
variables: variables:
VULKAN_DRIVERS: "amd" VULKAN_DRIVERS: "amd"
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: > EXTRA_OPTION: >
-Dtools=panfrost,imagination -Dtools=panfrost,imagination
@@ -553,7 +522,7 @@ debian-clang:
-D glx=dri -D glx=dri
-D gbm=enabled -D gbm=enabled
-D egl=enabled -D egl=enabled
-D glvnd=enabled -D glvnd=true
-D platforms=x11,wayland -D platforms=x11,wayland
GALLIUM_ST: > GALLIUM_ST: >
-D dri3=enabled -D dri3=enabled
@@ -635,7 +604,6 @@ debian-vulkan:
-D opengl=false -D opengl=false
-D gles1=disabled -D gles1=disabled
-D gles2=disabled -D gles2=disabled
-D glvnd=disabled
-D platforms=x11,wayland -D platforms=x11,wayland
-D osmesa=false -D osmesa=false
GALLIUM_ST: > GALLIUM_ST: >
@@ -667,8 +635,6 @@ debian-x86_32:
VULKAN_DRIVERS: intel,amd,swrast,virtio VULKAN_DRIVERS: intel,amd,swrast,virtio
GALLIUM_DRIVERS: "iris,nouveau,r300,r600,radeonsi,swrast,virgl,zink,crocus,d3d12" GALLIUM_DRIVERS: "iris,nouveau,r300,r600,radeonsi,swrast,virgl,zink,crocus,d3d12"
LLVM_VERSION: 15 LLVM_VERSION: 15
DRI_LOADERS:
-D glvnd=disabled
EXTRA_OPTION: > EXTRA_OPTION: >
-D vulkan-layers=device-select,overlay -D vulkan-layers=device-select,overlay
-D intel-clc=system -D intel-clc=system
@@ -696,8 +662,6 @@ debian-s390x:
GALLIUM_DRIVERS: "swrast,zink" GALLIUM_DRIVERS: "swrast,zink"
LLVM_VERSION: 15 LLVM_VERSION: 15
VULKAN_DRIVERS: "swrast" VULKAN_DRIVERS: "swrast"
DRI_LOADERS:
-D glvnd=disabled
debian-ppc64el: debian-ppc64el:
extends: extends:
@@ -709,5 +673,3 @@ debian-ppc64el:
CROSS: ppc64el CROSS: ppc64el
GALLIUM_DRIVERS: "nouveau,radeonsi,swrast,virgl,zink" GALLIUM_DRIVERS: "nouveau,radeonsi,swrast,virgl,zink"
VULKAN_DRIVERS: "amd,swrast" VULKAN_DRIVERS: "amd,swrast"
DRI_LOADERS:
-D glvnd=disabled

View File

@@ -1,136 +1,130 @@
#!/bin/bash #!/bin/bash
VARS=( for var in \
ACO_DEBUG ACO_DEBUG \
ARTIFACTS_BASE_URL ARTIFACTS_BASE_URL \
ASAN_OPTIONS ASAN_OPTIONS \
BASE_SYSTEM_FORK_HOST_PREFIX BASE_SYSTEM_FORK_HOST_PREFIX \
BASE_SYSTEM_MAINLINE_HOST_PREFIX BASE_SYSTEM_MAINLINE_HOST_PREFIX \
CI_COMMIT_BRANCH CI_COMMIT_BRANCH \
CI_COMMIT_REF_NAME CI_COMMIT_REF_NAME \
CI_COMMIT_TITLE CI_COMMIT_TITLE \
CI_JOB_ID CI_JOB_ID \
S3_JWT_FILE CI_JOB_JWT_FILE \
CI_JOB_STARTED_AT CI_JOB_STARTED_AT \
CI_JOB_NAME CI_JOB_NAME \
CI_JOB_URL CI_JOB_URL \
CI_MERGE_REQUEST_SOURCE_BRANCH_NAME CI_MERGE_REQUEST_SOURCE_BRANCH_NAME \
CI_MERGE_REQUEST_TITLE CI_MERGE_REQUEST_TITLE \
CI_NODE_INDEX CI_NODE_INDEX \
CI_NODE_TOTAL CI_NODE_TOTAL \
CI_PAGES_DOMAIN CI_PAGES_DOMAIN \
CI_PIPELINE_ID CI_PIPELINE_ID \
CI_PIPELINE_URL CI_PIPELINE_URL \
CI_PROJECT_DIR CI_PROJECT_DIR \
CI_PROJECT_NAME CI_PROJECT_NAME \
CI_PROJECT_PATH CI_PROJECT_PATH \
CI_PROJECT_ROOT_NAMESPACE CI_PROJECT_ROOT_NAMESPACE \
CI_RUNNER_DESCRIPTION CI_RUNNER_DESCRIPTION \
CI_SERVER_URL CI_SERVER_URL \
CROSVM_GALLIUM_DRIVER CROSVM_GALLIUM_DRIVER \
CROSVM_GPU_ARGS CROSVM_GPU_ARGS \
CURRENT_SECTION CURRENT_SECTION \
DEQP_BIN_DIR DEQP_BIN_DIR \
DEQP_CONFIG DEQP_CONFIG \
DEQP_EXPECTED_RENDERER DEQP_EXPECTED_RENDERER \
DEQP_FRACTION DEQP_FRACTION \
DEQP_HEIGHT DEQP_HEIGHT \
DEQP_RESULTS_DIR DEQP_RESULTS_DIR \
DEQP_RUNNER_OPTIONS DEQP_RUNNER_OPTIONS \
DEQP_SUITE DEQP_SUITE \
DEQP_TEMP_DIR DEQP_TEMP_DIR \
DEQP_VER DEQP_VER \
DEQP_WIDTH DEQP_WIDTH \
DEVICE_NAME DEVICE_NAME \
DRIVER_NAME DRIVER_NAME \
EGL_PLATFORM EGL_PLATFORM \
ETNA_MESA_DEBUG ETNA_MESA_DEBUG \
FDO_CI_CONCURRENT FDO_CI_CONCURRENT \
FDO_UPSTREAM_REPO FDO_UPSTREAM_REPO \
FD_MESA_DEBUG FD_MESA_DEBUG \
FLAKES_CHANNEL FLAKES_CHANNEL \
FREEDRENO_HANGCHECK_MS FREEDRENO_HANGCHECK_MS \
GALLIUM_DRIVER GALLIUM_DRIVER \
GALLIVM_PERF GALLIVM_PERF \
GPU_VERSION GPU_VERSION \
GTEST GTEST \
GTEST_FAILS GTEST_FAILS \
GTEST_FRACTION GTEST_FRACTION \
GTEST_RESULTS_DIR GTEST_RESULTS_DIR \
GTEST_RUNNER_OPTIONS GTEST_RUNNER_OPTIONS \
GTEST_SKIPS GTEST_SKIPS \
HWCI_FREQ_MAX HWCI_FREQ_MAX \
HWCI_KERNEL_MODULES HWCI_KERNEL_MODULES \
HWCI_KVM HWCI_KVM \
HWCI_START_WESTON HWCI_START_WESTON \
HWCI_START_XORG HWCI_START_XORG \
HWCI_TEST_SCRIPT HWCI_TEST_SCRIPT \
IR3_SHADER_DEBUG IR3_SHADER_DEBUG \
JOB_ARTIFACTS_BASE JOB_ARTIFACTS_BASE \
JOB_RESULTS_PATH JOB_RESULTS_PATH \
JOB_ROOTFS_OVERLAY_PATH JOB_ROOTFS_OVERLAY_PATH \
KERNEL_IMAGE_BASE KERNEL_IMAGE_BASE \
KERNEL_IMAGE_NAME KERNEL_IMAGE_NAME \
LD_LIBRARY_PATH LD_LIBRARY_PATH \
LIBGL_ALWAYS_SOFTWARE LP_NUM_THREADS \
LP_NUM_THREADS MESA_BASE_TAG \
MESA_BASE_TAG MESA_BUILD_PATH \
MESA_BUILD_PATH MESA_DEBUG \
MESA_DEBUG MESA_GLES_VERSION_OVERRIDE \
MESA_GLES_VERSION_OVERRIDE MESA_GLSL_VERSION_OVERRIDE \
MESA_GLSL_VERSION_OVERRIDE MESA_GL_VERSION_OVERRIDE \
MESA_GL_VERSION_OVERRIDE MESA_IMAGE \
MESA_IMAGE MESA_IMAGE_PATH \
MESA_IMAGE_PATH MESA_IMAGE_TAG \
MESA_IMAGE_TAG MESA_LOADER_DRIVER_OVERRIDE \
MESA_LOADER_DRIVER_OVERRIDE MESA_TEMPLATES_COMMIT \
MESA_TEMPLATES_COMMIT MESA_VK_IGNORE_CONFORMANCE_WARNING \
MESA_VK_ABORT_ON_DEVICE_LOSS S3_HOST \
MESA_VK_IGNORE_CONFORMANCE_WARNING S3_RESULTS_UPLOAD \
S3_HOST NIR_DEBUG \
S3_RESULTS_UPLOAD PAN_I_WANT_A_BROKEN_VULKAN_DRIVER \
NIR_DEBUG PAN_MESA_DEBUG \
PAN_I_WANT_A_BROKEN_VULKAN_DRIVER PANVK_DEBUG \
PAN_MESA_DEBUG PIGLIT_FRACTION \
PANVK_DEBUG PIGLIT_NO_WINDOW \
PIGLIT_FRACTION PIGLIT_OPTIONS \
PIGLIT_NO_WINDOW PIGLIT_PLATFORM \
PIGLIT_OPTIONS PIGLIT_PROFILES \
PIGLIT_PLATFORM PIGLIT_REPLAY_ARTIFACTS_BASE_URL \
PIGLIT_PROFILES PIGLIT_REPLAY_DEVICE_NAME \
PIGLIT_REPLAY_ANGLE_TAG PIGLIT_REPLAY_EXTRA_ARGS \
PIGLIT_REPLAY_ARTIFACTS_BASE_URL PIGLIT_REPLAY_LOOP_TIMES \
PIGLIT_REPLAY_DEVICE_NAME PIGLIT_REPLAY_REFERENCE_IMAGES_BASE \
PIGLIT_REPLAY_EXTRA_ARGS PIGLIT_REPLAY_SUBCOMMAND \
PIGLIT_REPLAY_LOOP_TIMES PIGLIT_RESULTS \
PIGLIT_REPLAY_REFERENCE_IMAGES_BASE PIGLIT_TESTS \
PIGLIT_REPLAY_SUBCOMMAND PIGLIT_TRACES_FILE \
PIGLIT_RESULTS PIPELINE_ARTIFACTS_BASE \
PIGLIT_TESTS RADEON_DEBUG \
PIGLIT_TRACES_FILE RADV_DEBUG \
PIPELINE_ARTIFACTS_BASE RADV_PERFTEST \
RADEON_DEBUG SKQP_ASSETS_DIR \
RADV_DEBUG SKQP_BACKENDS \
RADV_PERFTEST TU_DEBUG \
SKQP_ASSETS_DIR USE_ANGLE \
SKQP_BACKENDS VIRGL_HOST_API \
TU_DEBUG WAFFLE_PLATFORM \
USE_ANGLE VK_CPU \
VIRGL_HOST_API VK_DRIVER \
WAFFLE_PLATFORM VK_ICD_FILENAMES \
VK_CPU VKD3D_PROTON_RESULTS \
VK_DRIVER VKD3D_CONFIG \
# required by virglrender CI VKD3D_TEST_EXCLUDE \
VK_DRIVER_FILES ZINK_DESCRIPTORS \
VKD3D_PROTON_RESULTS ZINK_DEBUG \
VKD3D_CONFIG LVP_POISON_MEMORY \
VKD3D_TEST_EXCLUDE ; do
ZINK_DESCRIPTORS
ZINK_DEBUG
LVP_POISON_MEMORY
)
for var in "${VARS[@]}"; do
if [ -n "${!var+x}" ]; then if [ -n "${!var+x}" ]; then
echo "export $var=${!var@Q}" echo "export $var=${!var@Q}"
fi fi
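The rewrite above turns the long backslash-continued `for var in \` list into a bash array, keeping the `${!var+x}` indirect test so only variables that are actually set (even if empty) get re-exported, quoted via `${!var@Q}`. A self-contained sketch of the pattern with made-up variable names:

```shell
# Variables to forward, as an array: no trailing-backslash fragility,
# and entries can be reordered or commented out freely.
VARS=(
  FOO_SET
  BAR_UNSET
)
FOO_SET="hello world"
OUT="$(
  for var in "${VARS[@]}"; do
    # ${!var+x} expands to "x" only if the variable named by $var is set.
    if [ -n "${!var+x}" ]; then
      # ${!var@Q} emits the value shell-quoted, safe to re-source later.
      echo "export $var=${!var@Q}"
    fi
  done
)"
echo "$OUT"
```

Only `FOO_SET` is emitted; `BAR_UNSET` is silently skipped.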

View File

@@ -113,7 +113,7 @@ export PYTHONPATH=$(python3 -c "import sys;print(\":\".join(sys.path))")
if [ -n "$MESA_LOADER_DRIVER_OVERRIDE" ]; then if [ -n "$MESA_LOADER_DRIVER_OVERRIDE" ]; then
rm /install/lib/dri/!($MESA_LOADER_DRIVER_OVERRIDE)_dri.so rm /install/lib/dri/!($MESA_LOADER_DRIVER_OVERRIDE)_dri.so
fi fi
ls -1 /install/lib/dri/*_dri.so || true ls -1 /install/lib/dri/*_dri.so
if [ "$HWCI_FREQ_MAX" = "true" ]; then if [ "$HWCI_FREQ_MAX" = "true" ]; then
# Ensure initialization of the DRM device (needed by MSM) # Ensure initialization of the DRM device (needed by MSM)
@@ -165,7 +165,7 @@ fi
if [ -n "$HWCI_START_XORG" ]; then if [ -n "$HWCI_START_XORG" ]; then
echo "touch /xorg-started; sleep 100000" > /xorg-script echo "touch /xorg-started; sleep 100000" > /xorg-script
env \ env \
VK_DRIVER_FILES="/install/share/vulkan/icd.d/${VK_DRIVER}_icd.$(uname -m).json" \ VK_ICD_FILENAMES="/install/share/vulkan/icd.d/${VK_DRIVER}_icd.$(uname -m).json" \
xinit /bin/sh /xorg-script -- /usr/bin/Xorg -noreset -s 0 -dpms -logfile /Xorg.0.log & xinit /bin/sh /xorg-script -- /usr/bin/Xorg -noreset -s 0 -dpms -logfile /Xorg.0.log &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS" BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
@@ -192,7 +192,7 @@ if [ -n "$HWCI_START_WESTON" ]; then
mkdir -p /tmp/.X11-unix mkdir -p /tmp/.X11-unix
env \ env \
VK_DRIVER_FILES="/install/share/vulkan/icd.d/${VK_DRIVER}_icd.$(uname -m).json" \ VK_ICD_FILENAMES="/install/share/vulkan/icd.d/${VK_DRIVER}_icd.$(uname -m).json" \
weston -Bheadless-backend.so --use-gl -Swayland-0 --xwayland --idle-time=0 & weston -Bheadless-backend.so --use-gl -Swayland-0 --xwayland --idle-time=0 &
BACKGROUND_PIDS="$! $BACKGROUND_PIDS" BACKGROUND_PIDS="$! $BACKGROUND_PIDS"
@@ -217,7 +217,7 @@ cleanup
# upload artifacts # upload artifacts
if [ -n "$S3_RESULTS_UPLOAD" ]; then if [ -n "$S3_RESULTS_UPLOAD" ]; then
tar --zstd -cf results.tar.zst results/; tar --zstd -cf results.tar.zst results/;
ci-fairy s3cp --token-file "${S3_JWT_FILE}" results.tar.zst https://"$S3_RESULTS_UPLOAD"/results.tar.zst; ci-fairy s3cp --token-file "${CI_JOB_JWT_FILE}" results.tar.zst https://"$S3_RESULTS_UPLOAD"/results.tar.zst;
fi fi
# We still need to echo the hwci: mesa message, as some scripts rely on it, such # We still need to echo the hwci: mesa message, as some scripts rely on it, such

View File

@@ -41,7 +41,6 @@ DEPS=(
libpciaccess-dev libpciaccess-dev
zlib-dev zlib-dev
python3-dev python3-dev
py3-cparser
py3-mako py3-mako
py3-ply py3-ply
vulkan-headers vulkan-headers

View File

@@ -16,7 +16,7 @@ set -ex -o pipefail
# - the GL release produces `glcts`, and # - the GL release produces `glcts`, and
# - the GLES release produces `deqp-gles*` and `deqp-egl` # - the GLES release produces `deqp-gles*` and `deqp-egl`
DEQP_VK_VERSION=1.3.8.2 DEQP_VK_VERSION=1.3.7.0
DEQP_GL_VERSION=4.6.4.0 DEQP_GL_VERSION=4.6.4.0
DEQP_GLES_VERSION=3.2.10.0 DEQP_GLES_VERSION=3.2.10.0
@@ -28,15 +28,28 @@ DEQP_GLES_VERSION=3.2.10.0
# shellcheck disable=SC2034 # shellcheck disable=SC2034
vk_cts_commits_to_backport=( vk_cts_commits_to_backport=(
# Fix more ASAN errors due to missing virtual destructors # Take multiview into account for task shader inv. stats
dd40bcfef1b4035ea55480b6fd4d884447120768 22aa3f4c59f6e1d4daebd5a8c9c05bce6cd3b63b
# Remove "unused shader stages" tests # Remove illegal mesh shader query tests
7dac86c6bbd15dec91d7d9a98cd6dd57c11092a7 2a87f7b25dc27188be0f0a003b2d7aef69d9002e
# Relax fragment shader invocations result verifications
0d8bf6a2715f95907e9cf86a86876ff1f26c66fe
# Fix several issues in dynamic rendering basic tests
c5453824b498c981c6ba42017d119f5de02a3e34
# Add setVisible for VulkanWindowDirectDrm
a8466bf6ea98f6cd6733849ad8081775318a3e3e
) )
# shellcheck disable=SC2034 # shellcheck disable=SC2034
vk_cts_patch_files=( vk_cts_patch_files=(
# Derivate subgroup fix
# https://github.com/KhronosGroup/VK-GL-CTS/pull/442
build-deqp-vk_Use-subgroups-helper-in-derivate-tests.patch
build-deqp-vk_Add-missing-subgroup-support-checks-for-linear-derivate-tests.patch
) )
if [ "${DEQP_TARGET}" = 'android' ]; then if [ "${DEQP_TARGET}" = 'android' ]; then
@@ -70,8 +83,6 @@ gles_cts_commits_to_backport=(
# shellcheck disable=SC2034 # shellcheck disable=SC2034
gles_cts_patch_files=( gles_cts_patch_files=(
# Correct detection mechanism for EGL_EXT_config_select_group extension
build-deqp-egl_Correct-EGL_EXT_config_select_group-extension-query.patch
) )
if [ "${DEQP_TARGET}" = 'android' ]; then if [ "${DEQP_TARGET}" = 'android' ]; then
@@ -207,7 +218,7 @@ if [ "${DEQP_TARGET}" != 'android' ]; then
if [ "${DEQP_API}" = 'VK' ]; then if [ "${DEQP_API}" = 'VK' ]; then
for mustpass in $(< /VK-GL-CTS/external/vulkancts/mustpass/main/vk-default.txt) ; do for mustpass in $(< /VK-GL-CTS/external/vulkancts/mustpass/main/vk-default.txt) ; do
cat /VK-GL-CTS/external/vulkancts/mustpass/main/$mustpass \ cat /VK-GL-CTS/external/vulkancts/mustpass/main/$mustpass \
>> /deqp/mustpass/vk-main.txt >> /deqp/mustpass/vk-master.txt
done done
fi fi
@@ -242,7 +253,7 @@ fi
# Remove other mustpass files, since we saved off the ones we wanted to conventient locations above. # Remove other mustpass files, since we saved off the ones we wanted to conventient locations above.
rm -rf /deqp/external/**/mustpass/ rm -rf /deqp/external/**/mustpass/
rm -rf /deqp/external/vulkancts/modules/vulkan/vk-main* rm -rf /deqp/external/vulkancts/modules/vulkan/vk-master*
rm -rf /deqp/external/vulkancts/modules/vulkan/vk-default rm -rf /deqp/external/vulkancts/modules/vulkan/vk-default
rm -rf /deqp/external/openglcts/modules/cts-runner rm -rf /deqp/external/openglcts/modules/cts-runner
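The hunk above also renames the concatenated mustpass list from `vk-master.txt` to `vk-main.txt`, matching the CTS branch rename. A toy reproduction of the concatenation loop, using a temporary directory and made-up test names in place of the real CTS tree:

```shell
# Build a miniature mustpass layout: vk-default.txt lists per-area files,
# and the loop concatenates them into a single vk-main.txt.
workdir="$(mktemp -d)"
cd "$workdir"
mkdir -p mustpass/main
printf 'dEQP-VK.api.smoke\n' > mustpass/main/api.txt
printf 'dEQP-VK.wsi.basic\n' > mustpass/main/wsi.txt
printf 'api.txt\nwsi.txt\n'  > mustpass/main/vk-default.txt
for mustpass in $(< mustpass/main/vk-default.txt); do
  cat "mustpass/main/$mustpass" >> vk-main.txt
done
```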

View File

@@ -7,7 +7,7 @@
set -ex set -ex
git clone https://github.com/microsoft/DirectX-Headers -b v1.613.1 --depth 1 git clone https://github.com/microsoft/DirectX-Headers -b v1.611.0 --depth 1
pushd DirectX-Headers pushd DirectX-Headers
meson setup build --backend=ninja --buildtype=release -Dbuild-test=false $EXTRA_MESON_ARGS meson setup build --backend=ninja --buildtype=release -Dbuild-test=false $EXTRA_MESON_ARGS
meson install -C build meson install -C build

View File

@@ -8,7 +8,7 @@ set -ex
# DEBIAN_X86_64_TEST_VK_TAG # DEBIAN_X86_64_TEST_VK_TAG
# KERNEL_ROOTFS_TAG # KERNEL_ROOTFS_TAG
REV="f7ece74a107a2f99b2f494d978c84f8d51faa703" REV="1e631479c0b477006dd7561c55e06269d2878d8d"
git clone https://gitlab.freedesktop.org/mesa/piglit.git --single-branch --no-checkout /piglit git clone https://gitlab.freedesktop.org/mesa/piglit.git --single-branch --no-checkout /piglit
pushd /piglit pushd /piglit

View File

@@ -6,7 +6,7 @@
# KERNEL_ROOTFS_TAG # KERNEL_ROOTFS_TAG
set -ex set -ex
VKD3D_PROTON_COMMIT="c3b385606a93baed42482d822805e0d9c2f3f603" VKD3D_PROTON_COMMIT="a0ccc383937903f4ca0997ce53e41ccce7f2f2ec"
VKD3D_PROTON_DST_DIR="/vkd3d-proton-tests" VKD3D_PROTON_DST_DIR="/vkd3d-proton-tests"
VKD3D_PROTON_SRC_DIR="/vkd3d-proton-src" VKD3D_PROTON_SRC_DIR="/vkd3d-proton-src"

View File

@@ -7,7 +7,7 @@
set -ex set -ex
VALIDATION_TAG="v1.3.281" VALIDATION_TAG="snapshot-2024wk06"
git clone -b "$VALIDATION_TAG" --single-branch --depth 1 https://github.com/KhronosGroup/Vulkan-ValidationLayers.git git clone -b "$VALIDATION_TAG" --single-branch --depth 1 https://github.com/KhronosGroup/Vulkan-ValidationLayers.git
pushd Vulkan-ValidationLayers pushd Vulkan-ValidationLayers

View File

@@ -3,17 +3,8 @@
set -ex set -ex
# When changing this file, you need to bump the following
# .gitlab-ci/image-tags.yml tags:
# DEBIAN_BUILD_TAG
# DEBIAN_X86_64_TEST_ANDROID_TAG
# DEBIAN_X86_64_TEST_GL_TAG
# DEBIAN_X86_64_TEST_VK_TAG
# FEDORA_X86_64_BUILD_TAG
# KERNEL_ROOTFS_TAG
export LIBWAYLAND_VERSION="1.21.0" export LIBWAYLAND_VERSION="1.21.0"
export WAYLAND_PROTOCOLS_VERSION="1.34" export WAYLAND_PROTOCOLS_VERSION="1.31"
git clone https://gitlab.freedesktop.org/wayland/wayland git clone https://gitlab.freedesktop.org/wayland/wayland
cd wayland cd wayland

View File

@@ -64,7 +64,6 @@ DEPS=(
python3-mako python3-mako
python3-pil python3-pil
python3-pip python3-pip
python3-pycparser
python3-requests python3-requests
python3-setuptools python3-setuptools
u-boot-tools u-boot-tools

View File

@@ -70,7 +70,6 @@ DEPS=(
python3-pil python3-pil
python3-pip python3-pip
python3-ply python3-ply
python3-pycparser
python3-requests python3-requests
python3-setuptools python3-setuptools
qemu-user qemu-user

View File

@@ -90,13 +90,6 @@ RUSTFLAGS='-L native=/usr/local/lib' cargo install \
 -j ${FDO_CI_CONCURRENT:-4} \
 --root /usr/local
-# install cbindgen
-RUSTFLAGS='-L native=/usr/local/lib' cargo install \
-cbindgen --version 0.26.0 \
---locked \
--j ${FDO_CI_CONCURRENT:-4} \
---root /usr/local
 ############### Uninstall the build software
 apt-get purge -y "${EPHEMERAL[@]}"

View File

@@ -27,7 +27,6 @@ EPHEMERAL=(
 libvulkan-dev
 libwaffle-dev
 libx11-xcb-dev
-libxcb-dri2-0-dev
 libxcb-ewmh-dev
 libxcb-keysyms1-dev
 libxkbcommon-dev

View File

@@ -27,7 +27,6 @@ EPHEMERAL=(
 DEPS=(
 bindgen
 bison
-cbindgen
 ccache
 clang-devel
 flex
@@ -77,7 +76,6 @@ DEPS=(
 python3-devel
 python3-mako
 python3-ply
-python3-pycparser
 rust-packaging
 vulkan-headers
 spirv-tools-devel

View File

@@ -226,7 +226,7 @@ debian/x86_64_test-vk:
 - debian/x86_64_test-vk
 # Debian based x86_64 test image for Android
-.debian/x86_64_test-android:
+debian/x86_64_test-android:
 extends: .use-debian/x86_64_test-base
 variables:
 MESA_IMAGE_TAG: &debian-x86_64_test-android ${DEBIAN_X86_64_TEST_ANDROID_TAG}
@@ -372,7 +372,7 @@ kernel+rootfs_arm32:
 - .container+build-rules
 variables:
 FDO_DISTRIBUTION_TAG: "${MESA_IMAGE_TAG}--${MESA_ROOTFS_TAG}--${KERNEL_TAG}--${MESA_TEMPLATES_COMMIT}"
-ARTIFACTS_PREFIX: "https://${S3_HOST}/${S3_KERNEL_BUCKET}"
+ARTIFACTS_PREFIX: "https://${S3_HOST}/mesa-lava"
 ARTIFACTS_SUFFIX: "${MESA_ROOTFS_TAG}--${KERNEL_TAG}--${MESA_ARTIFACTS_TAG}--${MESA_TEMPLATES_COMMIT}"
 MESA_ARTIFACTS_TAG: *debian-arm64_build
 MESA_ROOTFS_TAG: *kernel-rootfs

View File

@@ -14,7 +14,7 @@ export LLVM_VERSION="${LLVM_VERSION:=15}"
 check_minio()
 {
-S3_PATH="${S3_HOST}/${S3_KERNEL_BUCKET}/$1/${DISTRIBUTION_TAG}/${DEBIAN_ARCH}"
+S3_PATH="${S3_HOST}/mesa-lava/$1/${DISTRIBUTION_TAG}/${DEBIAN_ARCH}"
 if curl -L --retry 4 -f --retry-delay 60 -s -X HEAD \
 "https://${S3_PATH}/done"; then
 echo "Remote files are up-to-date, skip rebuilding them."
@@ -365,8 +365,8 @@ popd
 . .gitlab-ci/container/container_post_build.sh
-ci-fairy s3cp --token-file "${S3_JWT_FILE}" /lava-files/"${ROOTFSTAR}" \
+ci-fairy s3cp --token-file "${CI_JOB_JWT_FILE}" /lava-files/"${ROOTFSTAR}" \
 https://${S3_PATH}/"${ROOTFSTAR}"
 touch /lava-files/done
-ci-fairy s3cp --token-file "${S3_JWT_FILE}" /lava-files/done https://${S3_PATH}/done
+ci-fairy s3cp --token-file "${CI_JOB_JWT_FILE}" /lava-files/done https://${S3_PATH}/done

View File

@@ -1,45 +0,0 @@
From cab41ed387c66a5e7f3454c547fc9ea53587ec1e Mon Sep 17 00:00:00 2001
From: David Heidelberg <david.heidelberg@collabora.com>
Date: Thu, 9 May 2024 14:08:59 -0700
Subject: [PATCH] Correct EGL_EXT_config_select_group extension query
EGL_EXT_config_select_group is a display extension,
not a client extension.
Affects:
dEQP-EGL.functional.choose_config.simple.selection_and_sort.*
Ref: https://github.com/KhronosGroup/EGL-Registry/pull/199
Fixes: 88ba9ac270db ("Implement support for the EGL_EXT_config_select_group extension")
Change-Id: I38956511bdcb8e99d585ea9b99aeab53da0457e2
Signed-off-by: David Heidelberg <david.heidelberg@collabora.com>
---
framework/egl/egluConfigInfo.cpp | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/framework/egl/egluConfigInfo.cpp b/framework/egl/egluConfigInfo.cpp
index 88c30fd65..10936055a 100644
--- a/framework/egl/egluConfigInfo.cpp
+++ b/framework/egl/egluConfigInfo.cpp
@@ -129,7 +129,6 @@ void queryCoreConfigInfo (const Library& egl, EGLDisplay display, EGLConfig conf
void queryExtConfigInfo (const eglw::Library& egl, eglw::EGLDisplay display, eglw::EGLConfig config, ConfigInfo* dst)
{
const std::vector<std::string> extensions = getDisplayExtensions(egl, display);
- const std::vector<std::string> clientExtensions = getClientExtensions(egl);
if (de::contains(extensions.begin(), extensions.end(), "EGL_EXT_yuv_surface"))
{
@@ -159,7 +158,7 @@ void queryExtConfigInfo (const eglw::Library& egl, eglw::EGLDisplay display, egl
else
dst->colorComponentType = EGL_COLOR_COMPONENT_TYPE_FIXED_EXT;
- if (de::contains(clientExtensions.begin(), clientExtensions.end(), "EGL_EXT_config_select_group"))
+ if (hasExtension(egl, display, "EGL_EXT_config_select_group"))
{
egl.getConfigAttrib(display, config, EGL_CONFIG_SELECT_GROUP_EXT, (EGLint*)&dst->groupId);
--
2.43.0

View File

@@ -0,0 +1,29 @@
From 7c9aa6f846f9f2f0d70b5c4a8e7c99a3d31b3b1a Mon Sep 17 00:00:00 2001
From: Rob Clark <robdclark@chromium.org>
Date: Sat, 27 Jan 2024 10:59:00 -0800
Subject: [PATCH] Add missing subgroup support checks for linear derivate tests
Some of these tests require subgroup ops support, but didn't bother
checking whether they were supported. Add this missing checks.
---
.../vulkan/shaderrender/vktShaderRenderDerivateTests.cpp | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/external/vulkancts/modules/vulkan/shaderrender/vktShaderRenderDerivateTests.cpp b/external/vulkancts/modules/vulkan/shaderrender/vktShaderRenderDerivateTests.cpp
index 3253505958..709044f2e8 100644
--- a/external/vulkancts/modules/vulkan/shaderrender/vktShaderRenderDerivateTests.cpp
+++ b/external/vulkancts/modules/vulkan/shaderrender/vktShaderRenderDerivateTests.cpp
@@ -1145,6 +1145,13 @@ LinearDerivateCase::~LinearDerivateCase (void)
TestInstance* LinearDerivateCase::createInstance (Context& context) const
{
DE_ASSERT(m_uniformSetup != DE_NULL);
+ if (m_fragmentTmpl.find("gl_SubgroupInvocationID") != std::string::npos) {
+ if (!subgroups::areQuadOperationsSupportedForStages(context, VK_SHADER_STAGE_FRAGMENT_BIT))
+ throw tcu::NotSupportedError("test requires VK_SUBGROUP_FEATURE_QUAD_BIT");
+
+ if (subgroups::getSubgroupSize(context) < 4)
+ throw tcu::NotSupportedError("test requires subgroupSize >= 4");
+ }
return new LinearDerivateCaseInstance(context, *m_uniformSetup, m_definitions, m_values);
}

View File

@@ -0,0 +1,56 @@
From ed3794c975d284a5453ae33ae59dd1541a9eb804 Mon Sep 17 00:00:00 2001
From: Rob Clark <robdclark@chromium.org>
Date: Sat, 27 Jan 2024 10:57:28 -0800
Subject: [PATCH] Use subgroups helper in derivate tests
For the tests that need subgroup ops, use the existing subgroups helper,
rather than open-coding the same checks.
---
.../vktShaderRenderDerivateTests.cpp | 23 ++++---------------
1 file changed, 5 insertions(+), 18 deletions(-)
diff --git a/external/vulkancts/modules/vulkan/shaderrender/vktShaderRenderDerivateTests.cpp b/external/vulkancts/modules/vulkan/shaderrender/vktShaderRenderDerivateTests.cpp
index a8bb5a3ba7..3253505958 100644
--- a/external/vulkancts/modules/vulkan/shaderrender/vktShaderRenderDerivateTests.cpp
+++ b/external/vulkancts/modules/vulkan/shaderrender/vktShaderRenderDerivateTests.cpp
@@ -31,6 +31,7 @@
#include "vktShaderRenderDerivateTests.hpp"
#include "vktShaderRender.hpp"
+#include "subgroups/vktSubgroupsTestsUtils.hpp"
#include "vkImageUtil.hpp"
#include "vkQueryUtil.hpp"
@@ -707,28 +708,14 @@ tcu::TestStatus TriangleDerivateCaseInstance::iterate (void)
{
const std::string errorPrefix = m_definitions.inNonUniformControlFlow ? "Derivatives in dynamic control flow" :
"Manual derivatives with subgroup operations";
- if (!m_context.contextSupports(vk::ApiVersion(0, 1, 1, 0)))
- throw tcu::NotSupportedError(errorPrefix + " require Vulkan 1.1");
-
- vk::VkPhysicalDeviceSubgroupProperties subgroupProperties;
- deMemset(&subgroupProperties, 0, sizeof(subgroupProperties));
- subgroupProperties.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SUBGROUP_PROPERTIES;
-
- vk::VkPhysicalDeviceProperties2 properties2;
- deMemset(&properties2, 0, sizeof(properties2));
- properties2.sType = vk::VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
- properties2.pNext = &subgroupProperties;
-
- m_context.getInstanceInterface().getPhysicalDeviceProperties2(m_context.getPhysicalDevice(), &properties2);
+ if (!subgroups::areQuadOperationsSupportedForStages(m_context, VK_SHADER_STAGE_FRAGMENT_BIT))
+ throw tcu::NotSupportedError(errorPrefix + " tests require VK_SUBGROUP_FEATURE_QUAD_BIT");
- if (subgroupProperties.subgroupSize < 4)
+ if (subgroups::getSubgroupSize(m_context) < 4)
throw tcu::NotSupportedError(errorPrefix + " require subgroupSize >= 4");
- if ((subgroupProperties.supportedOperations & VK_SUBGROUP_FEATURE_BALLOT_BIT) == 0)
+ if (!subgroups::isSubgroupFeatureSupportedForDevice(m_context, VK_SUBGROUP_FEATURE_BALLOT_BIT))
throw tcu::NotSupportedError(errorPrefix + " tests require VK_SUBGROUP_FEATURE_BALLOT_BIT");
-
- if (isSubgroupFunc(m_definitions.func) && (subgroupProperties.supportedOperations & VK_SUBGROUP_FEATURE_QUAD_BIT) == 0)
- throw tcu::NotSupportedError(errorPrefix + " tests require VK_SUBGROUP_FEATURE_QUAD_BIT");
}
setup();

View File

@@ -96,7 +96,7 @@ set +e -x
 NIR_DEBUG="novalidate" \
 LIBGL_ALWAYS_SOFTWARE=${CROSVM_LIBGL_ALWAYS_SOFTWARE} \
 GALLIUM_DRIVER=${CROSVM_GALLIUM_DRIVER} \
-VK_DRIVER_FILES=$CI_PROJECT_DIR/install/share/vulkan/icd.d/${CROSVM_VK_DRIVER}_icd.x86_64.json \
+VK_ICD_FILENAMES=$CI_PROJECT_DIR/install/share/vulkan/icd.d/${CROSVM_VK_DRIVER}_icd.x86_64.json \
 crosvm --no-syslog run \
 --gpu "${CROSVM_GPU_ARGS}" --gpu-render-server "path=/usr/local/libexec/virgl_render_server" \
 -m "${CROSVM_MEMORY:-4096}" -c "${CROSVM_CPU:-2}" --disable-sandbox \

View File

@@ -18,7 +18,7 @@ INSTALL=$(realpath -s "$PWD"/install)
 # Set up the driver environment.
 export LD_LIBRARY_PATH="$INSTALL"/lib/:$LD_LIBRARY_PATH
 export EGL_PLATFORM=surfaceless
-export VK_DRIVER_FILES="$PWD"/install/share/vulkan/icd.d/"$VK_DRIVER"_icd.${VK_CPU:-$(uname -m)}.json
+export VK_ICD_FILENAMES="$PWD"/install/share/vulkan/icd.d/"$VK_DRIVER"_icd.${VK_CPU:-$(uname -m)}.json
 export OCL_ICD_VENDORS="$PWD"/install/etc/OpenCL/vendors/
 if [ -n "$USE_ANGLE" ]; then
@@ -59,7 +59,7 @@ if [ -z "$DEQP_SUITE" ]; then
 # Generate test case list file.
 if [ "$DEQP_VER" = "vk" ]; then
-MUSTPASS=/deqp/mustpass/vk-main.txt
+MUSTPASS=/deqp/mustpass/vk-master.txt
 DEQP=/deqp/external/vulkancts/modules/vulkan/deqp-vk
 elif [ "$DEQP_VER" = "gles2" ] || [ "$DEQP_VER" = "gles3" ] || [ "$DEQP_VER" = "gles31" ] || [ "$DEQP_VER" = "egl" ]; then
 MUSTPASS=/deqp/mustpass/$DEQP_VER-main.txt
@@ -169,7 +169,7 @@ fi
 uncollapsed_section_switch deqp "deqp: deqp-runner"
 # Print the detailed version with the list of backports and local patches
-for api in vk gl gles; do
+for api in vk gl; do
 deqp_version_log=/deqp/version-$api
 if [ -r "$deqp_version_log" ]; then
 cat "$deqp_version_log"

View File

@@ -18,7 +18,7 @@ TMP_DIR=$(mktemp -d)
echo "$(date +"%F %T") Downloading archived master..." echo "$(date +"%F %T") Downloading archived master..."
if ! /usr/bin/wget \ if ! /usr/bin/wget \
-O "$TMP_DIR/$CI_PROJECT_NAME.tar.gz" \ -O "$TMP_DIR/$CI_PROJECT_NAME.tar.gz" \
"https://${S3_HOST}/${S3_GITCACHE_BUCKET}/${FDO_UPSTREAM_REPO}/$CI_PROJECT_NAME.tar.gz"; "https://${S3_HOST}/git-cache/${FDO_UPSTREAM_REPO}/$CI_PROJECT_NAME.tar.gz";
then then
echo "Repository cache not available" echo "Repository cache not available"
exit exit

View File

@@ -237,25 +237,6 @@
 when: never
 - !reference [.freedreno-farm-rules, rules]
-.vmware-farm-rules:
-rules:
-- exists: [ .ci-farms-disabled/vmware ]
-when: never
-- changes: [ .ci-farms-disabled/vmware ]
-if: '$CI_PIPELINE_SOURCE != "schedule"'
-when: on_success
-- changes: [ .ci-farms-disabled/* ]
-if: '$CI_PIPELINE_SOURCE != "schedule"'
-when: never
-.vmware-farm-manual-rules:
-rules:
-- exists: [ .ci-farms-disabled/vmware ]
-when: never
-- changes: [ .ci-farms-disabled/vmware ]
-if: '$CI_PIPELINE_SOURCE != "schedule"'
-when: never
-- !reference [.vmware-farm-rules, rules]
 .ondracka-farm-rules:
 rules:
.ondracka-farm-rules: .ondracka-farm-rules:
rules: rules:
@@ -330,10 +311,6 @@
 changes: [ .ci-farms-disabled/ondracka ]
 exists: [ .ci-farms-disabled/ondracka ]
 when: never
-- if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
-changes: [ .ci-farms-disabled/vmware ]
-exists: [ .ci-farms-disabled/vmware ]
-when: never
 # Any other change to ci-farms/* means some farm is getting re-enabled.
 # Run jobs in Marge pipelines (and let it fallback to manual otherwise)
 - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && $GITLAB_USER_LOGIN == "marge-bot"'

View File

@@ -11,7 +11,7 @@ INSTALL=$PWD/install
 # Set up the driver environment.
 export LD_LIBRARY_PATH="$INSTALL/lib/"
-export VK_DRIVER_FILES="$INSTALL/share/vulkan/icd.d/${VK_DRIVER}_icd.x86_64.json"
+export VK_ICD_FILENAMES="$INSTALL/share/vulkan/icd.d/${VK_DRIVER}_icd.x86_64.json"
 # To store Fossilize logs on failure.
 RESULTS="$PWD/results"

View File

@@ -13,10 +13,10 @@
 variables:
 DEBIAN_X86_64_BUILD_BASE_IMAGE: "debian/x86_64_build-base"
-DEBIAN_BASE_TAG: "20240412-pycparser"
+DEBIAN_BASE_TAG: "20240307-virglcrosvm"
 DEBIAN_X86_64_BUILD_IMAGE_PATH: "debian/x86_64_build"
-DEBIAN_BUILD_TAG: "20240408-cbindgen"
+DEBIAN_BUILD_TAG: "20240301-mold"
 DEBIAN_X86_64_TEST_BASE_IMAGE: "debian/x86_64_test-base"
@@ -24,15 +24,15 @@ variables:
 DEBIAN_X86_64_TEST_IMAGE_VK_PATH: "debian/x86_64_test-vk"
 DEBIAN_X86_64_TEST_ANDROID_IMAGE_PATH: "debian/x86_64_test-android"
-DEBIAN_X86_64_TEST_ANDROID_TAG: "20240423-deqp"
+DEBIAN_X86_64_TEST_ANDROID_TAG: "20240311-runner"
-DEBIAN_X86_64_TEST_GL_TAG: "20240514-egltrans241"
+DEBIAN_X86_64_TEST_GL_TAG: "20240313-ninetests"
-DEBIAN_X86_64_TEST_VK_TAG: "20240423-deqp"
+DEBIAN_X86_64_TEST_VK_TAG: "20240317-direct_drm"
-KERNEL_ROOTFS_TAG: "20240507-kernel241"
+KERNEL_ROOTFS_TAG: "20240317-direct_drm"
-ALPINE_X86_64_BUILD_TAG: "20240412-pycparser"
+ALPINE_X86_64_BUILD_TAG: "20240208-libclc-5"
-ALPINE_X86_64_LAVA_SSH_TAG: "20240401-wlproto"
+ALPINE_X86_64_LAVA_SSH_TAG: "20230626-v1"
-FEDORA_X86_64_BUILD_TAG: "20240412-pycparser"
+FEDORA_X86_64_BUILD_TAG: "20240301-mold"
-KERNEL_TAG: "v6.6.21-mesa-f8ea"
+KERNEL_TAG: "v6.6.21-mesa-19fc"
 KERNEL_REPO: "gfx-ci/linux"
 PKG_REPO_REV: "3cc12a2a"
@@ -40,7 +40,7 @@ variables:
 WINDOWS_X64_MSVC_TAG: "20231222-msvc"
 WINDOWS_X64_BUILD_PATH: "windows/x86_64_build"
-WINDOWS_X64_BUILD_TAG: "20240405-vainfo-ci-1"
+WINDOWS_X64_BUILD_TAG: "20240117-vulkan-sdk"
 WINDOWS_X64_TEST_PATH: "windows/x86_64_test"
-WINDOWS_X64_TEST_TAG: "20240405-vainfo-ci-1"
+WINDOWS_X64_TEST_TAG: "20240117-vulkan-sdk"

View File

@@ -5,36 +5,24 @@ class MesaCIException(Exception):
 pass
-class MesaCIRetriableException(MesaCIException):
-pass
-class MesaCITimeoutError(MesaCIRetriableException):
+class MesaCITimeoutError(MesaCIException):
 def __init__(self, *args, timeout_duration: timedelta) -> None:
 super().__init__(*args)
 self.timeout_duration = timeout_duration
-class MesaCIRetryError(MesaCIRetriableException):
+class MesaCIRetryError(MesaCIException):
 def __init__(self, *args, retry_count: int, last_job: None) -> None:
 super().__init__(*args)
 self.retry_count = retry_count
 self.last_job = last_job
-class MesaCIFatalException(MesaCIException):
-"""Exception raised when the Mesa CI script encounters a fatal error that
-prevents the script from continuing."""
-def __init__(self, *args) -> None:
-super().__init__(*args)
-class MesaCIParseException(MesaCIRetriableException):
+class MesaCIParseException(MesaCIException):
 pass
-class MesaCIKnownIssueException(MesaCIRetriableException):
+class MesaCIKnownIssueException(MesaCIException):
 """Exception raised when the Mesa CI script finds something in the logs that
 is known to cause the LAVA job to eventually fail"""
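The left-hand side of this hunk splits errors into retriable and fatal classes. As an illustrative aside, the value of such a hierarchy is that a retry loop can decide whether re-submitting makes sense purely from the exception's type; the sketch below uses simplified stand-in names (`CIException`, `CIRetriableException`, `CIFatalException`, `run_with_retries`), not Mesa's actual classes:

```python
class CIException(Exception):
    pass

class CIRetriableException(CIException):
    """Errors worth re-submitting the job for (infra hiccups, timeouts)."""

class CIFatalException(CIException):
    """Errors that should abort the submitter immediately."""

def run_with_retries(action, max_retries=3):
    # Retry only on retriable errors; anything else (including
    # CIFatalException) propagates to the caller at once.
    last_error = None
    for _ in range(max_retries):
        try:
            return action()
        except CIRetriableException as err:
            last_error = err  # try again on the next iteration
    raise CIRetriableException(f"gave up after {max_retries} attempts") from last_error
```

Collapsing the hierarchy to a single base class, as the right-hand side does, makes every `CIException` retriable by construction.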

View File

@@ -11,7 +11,7 @@ variables:
 # proxy used to cache data locally
 FDO_HTTP_CACHE_URI: "http://caching-proxy/cache/?uri="
 # base system generated by the container build job, shared between many pipelines
-BASE_SYSTEM_HOST_PREFIX: "${S3_HOST}/${S3_KERNEL_BUCKET}"
+BASE_SYSTEM_HOST_PREFIX: "${S3_HOST}/mesa-lava"
 BASE_SYSTEM_MAINLINE_HOST_PATH: "${BASE_SYSTEM_HOST_PREFIX}/${FDO_UPSTREAM_REPO}/${DISTRIBUTION_TAG}/${DEBIAN_ARCH}"
 BASE_SYSTEM_FORK_HOST_PATH: "${BASE_SYSTEM_HOST_PREFIX}/${CI_PROJECT_PATH}/${DISTRIBUTION_TAG}/${DEBIAN_ARCH}"
# per-job build artifacts # per-job build artifacts

View File

@@ -30,7 +30,7 @@ artifacts/ci-common/generate-env.sh | tee results/job-rootfs-overlay/set-job-env
 section_end variables
 tar zcf job-rootfs-overlay.tar.gz -C results/job-rootfs-overlay/ .
-ci-fairy s3cp --token-file "${S3_JWT_FILE}" job-rootfs-overlay.tar.gz "https://${JOB_ROOTFS_OVERLAY_PATH}"
+ci-fairy s3cp --token-file "${CI_JOB_JWT_FILE}" job-rootfs-overlay.tar.gz "https://${JOB_ROOTFS_OVERLAY_PATH}"
 ARTIFACT_URL="${FDO_HTTP_CACHE_URI:-}https://${PIPELINE_ARTIFACTS_BASE}/${S3_ARTIFACT_NAME:?}.tar.zst"
@@ -50,7 +50,7 @@ PYTHONPATH=artifacts/ artifacts/lava/lava_job_submitter.py \
 --ci-project-dir "${CI_PROJECT_DIR}" \
 --device-type "${DEVICE_TYPE}" \
 --dtb-filename "${DTB}" \
---jwt-file "${S3_JWT_FILE}" \
+--jwt-file "${CI_JOB_JWT_FILE}" \
 --kernel-image-name "${KERNEL_IMAGE_NAME}" \
 --kernel-image-type "${KERNEL_IMAGE_TYPE}" \
 --boot-method "${BOOT_METHOD}" \

View File

@@ -16,7 +16,7 @@ import sys
 import time
 from collections import defaultdict
 from dataclasses import dataclass, fields
-from datetime import datetime, timedelta, timezone
+from datetime import datetime, timedelta
 from os import environ, getenv, path
 from typing import Any, Optional
@@ -25,8 +25,6 @@ from lavacli.utils import flow_yaml as lava_yaml
 from lava.exceptions import (
 MesaCIException,
-MesaCIFatalException,
-MesaCIRetriableException,
 MesaCIParseException,
 MesaCIRetryError,
 MesaCITimeoutError,
@@ -60,7 +58,7 @@ except ImportError as e:
 # Timeout in seconds to decide if the device from the dispatched LAVA job has
 # hung or not due to the lack of new log output.
-DEVICE_HANGING_TIMEOUT_SEC = int(getenv("DEVICE_HANGING_TIMEOUT_SEC", 5 * 60))
+DEVICE_HANGING_TIMEOUT_SEC = int(getenv("DEVICE_HANGING_TIMEOUT_SEC", 5*60))
 # How many seconds the script should wait before try a new polling iteration to
 # check if the dispatched LAVA job is running or waiting in the job queue.
@@ -83,29 +81,18 @@ NUMBER_OF_RETRIES_TIMEOUT_DETECTION = int(
getenv("LAVA_NUMBER_OF_RETRIES_TIMEOUT_DETECTION", 2) getenv("LAVA_NUMBER_OF_RETRIES_TIMEOUT_DETECTION", 2)
) )
CI_JOB_TIMEOUT_SEC = int(getenv("CI_JOB_TIMEOUT", 3600))
# How many seconds the script will wait to let LAVA run the job and give the final details.
EXPECTED_JOB_DURATION_SEC = int(getenv("EXPECTED_JOB_DURATION_SEC", 60 * 10))
# CI_JOB_STARTED is given by GitLab CI/CD in UTC timezone by default.
CI_JOB_STARTED_AT_RAW = getenv("CI_JOB_STARTED_AT", "")
CI_JOB_STARTED_AT: datetime = (
datetime.fromisoformat(CI_JOB_STARTED_AT_RAW)
if CI_JOB_STARTED_AT_RAW
else datetime.now(timezone.utc)
)
def raise_exception_from_metadata(metadata: dict, job_id: int) -> None: def raise_exception_from_metadata(metadata: dict, job_id: int) -> None:
""" """
Investigate infrastructure errors from the job metadata. Investigate infrastructure errors from the job metadata.
If it finds an error, raise it as MesaCIRetriableException. If it finds an error, raise it as MesaCIException.
""" """
if "result" not in metadata or metadata["result"] != "fail": if "result" not in metadata or metadata["result"] != "fail":
return return
if "error_type" in metadata: if "error_type" in metadata:
error_type = metadata["error_type"] error_type = metadata["error_type"]
if error_type == "Infrastructure": if error_type == "Infrastructure":
raise MesaCIRetriableException( raise MesaCIException(
f"LAVA job {job_id} failed with Infrastructure Error. Retry." f"LAVA job {job_id} failed with Infrastructure Error. Retry."
) )
if error_type == "Job": if error_type == "Job":
@@ -113,12 +100,12 @@ def raise_exception_from_metadata(metadata: dict, job_id: int) -> None:
 # with mal-formed job definitions. As we are always validating the
 # jobs, only the former is probable to happen. E.g.: When some LAVA
 # action timed out more times than expected in job definition.
-raise MesaCIRetriableException(
+raise MesaCIException(
 f"LAVA job {job_id} failed with JobError "
 "(possible LAVA timeout misconfiguration/bug). Retry."
 )
 if "case" in metadata and metadata["case"] == "validate":
-raise MesaCIRetriableException(
+raise MesaCIException(
 f"LAVA job {job_id} failed validation (possible download error). Retry."
 )
@@ -195,6 +182,7 @@ def is_job_hanging(job, max_idle_time):
 def parse_log_lines(job, log_follower, new_log_lines):
 if log_follower.feed(new_log_lines):
 # If we had non-empty log data, we can assure that the device is alive.
 job.heartbeat()
@@ -212,6 +200,7 @@ def parse_log_lines(job, log_follower, new_log_lines):
 def fetch_new_log_lines(job):
 # The XMLRPC binary packet may be corrupted, causing a YAML scanner error.
 # Retry the log fetching several times before exposing the error.
 for _ in range(5):
@@ -227,28 +216,14 @@ def submit_job(job):
 try:
 job.submit()
 except Exception as mesa_ci_err:
-raise MesaCIRetriableException(
+raise MesaCIException(
 f"Could not submit LAVA job. Reason: {mesa_ci_err}"
 ) from mesa_ci_err
-def wait_for_job_get_started(job, attempt_no):
+def wait_for_job_get_started(job):
 print_log(f"Waiting for job {job.job_id} to start.")
 while not job.is_started():
-current_job_duration_sec: int = int(
-(datetime.now(timezone.utc) - CI_JOB_STARTED_AT).total_seconds()
-)
-remaining_time_sec: int = max(0, CI_JOB_TIMEOUT_SEC - current_job_duration_sec)
-if remaining_time_sec < EXPECTED_JOB_DURATION_SEC:
-job.cancel()
-raise MesaCIFatalException(
-f"{CONSOLE_LOG['BOLD']}"
-f"{CONSOLE_LOG['FG_YELLOW']}"
-f"Job {job.job_id} only has {remaining_time_sec} seconds "
-"remaining to run, but it is expected to take at least "
-f"{EXPECTED_JOB_DURATION_SEC} seconds."
-f"{CONSOLE_LOG['RESET']}",
-)
 time.sleep(WAIT_FOR_DEVICE_POLLING_TIME_SEC)
 job.refresh_log()
 print_log(f"Job {job.job_id} started.")
@@ -324,7 +299,7 @@ def execute_job_with_retries(
 try:
 job_log["submitter_start_time"] = datetime.now().isoformat()
 submit_job(job)
-wait_for_job_get_started(job, attempt_no)
+wait_for_job_get_started(job)
 log_follower: LogFollower = bootstrap_log_follower()
 follow_job_execution(job, log_follower)
 return job
@@ -343,8 +318,6 @@ def execute_job_with_retries(
f"Finished executing LAVA job in the attempt #{attempt_no}" f"Finished executing LAVA job in the attempt #{attempt_no}"
f"{CONSOLE_LOG['RESET']}" f"{CONSOLE_LOG['RESET']}"
) )
if job.exception and not isinstance(job.exception, MesaCIRetriableException):
break
return last_failed_job return last_failed_job
@@ -498,9 +471,8 @@ class LAVAJobSubmitter(PathResolver):
 if not last_attempt_job:
 # No job was run, something bad happened
 STRUCTURAL_LOG["job_combined_status"] = "script_crash"
-current_exception = str(sys.exc_info()[1])
+current_exception = str(sys.exc_info()[0])
 STRUCTURAL_LOG["job_combined_fail_reason"] = current_exception
-print(f"Interrupting the script. Reason: {current_exception}")
 raise SystemExit(1)
 STRUCTURAL_LOG["job_combined_status"] = last_attempt_job.status
@@ -537,6 +509,7 @@ class StructuredLoggerWrapper:
 def logger_context(self):
 context = contextlib.nullcontext()
 try:
 global STRUCTURAL_LOG
 STRUCTURAL_LOG = StructuredLogger(
 self.__submitter.structured_log_file, truncate=True

View File

@@ -6,7 +6,6 @@ from typing import Any, Optional
 from lava.exceptions import (
 MesaCIException,
-MesaCIRetriableException,
 MesaCIKnownIssueException,
 MesaCIParseException,
 MesaCITimeoutError,
@@ -35,7 +34,7 @@ class LAVAJob:
 self._is_finished = False
 self.log: dict[str, Any] = log
 self.status = "not_submitted"
-self.__exception: Optional[Exception] = None
+self.__exception: Optional[str] = None
 def heartbeat(self) -> None:
 self.last_log_time: datetime = datetime.now()
@@ -64,13 +63,13 @@ class LAVAJob:
 return self._is_finished
 @property
-def exception(self) -> Optional[Exception]:
+def exception(self) -> str:
 return self.__exception
 @exception.setter
 def exception(self, exception: Exception) -> None:
-self.__exception = exception
+self.__exception = repr(exception)
-self.log["dut_job_fail_reason"] = repr(self.__exception)
+self.log["dut_job_fail_reason"] = self.__exception
 def validate(self) -> Optional[dict]:
 """Returns a dict with errors, if the validation fails.
@@ -177,15 +176,11 @@ class LAVAJob:
self.status = "canceled" self.status = "canceled"
elif isinstance(exception, MesaCITimeoutError): elif isinstance(exception, MesaCITimeoutError):
self.status = "hung" self.status = "hung"
elif isinstance(exception, MesaCIRetriableException): elif isinstance(exception, MesaCIException):
self.status = "failed" self.status = "failed"
elif isinstance(exception, KeyboardInterrupt): elif isinstance(exception, KeyboardInterrupt):
self.status = "interrupted" self.status = "interrupted"
print_log("LAVA job submitter was interrupted. Cancelling the job.") print_log("LAVA job submitter was interrupted. Cancelling the job.")
raise raise
elif isinstance(exception, MesaCIException):
self.status = "interrupted"
print_log("LAVA job submitter was interrupted. Cancelling the job.")
raise
else: else:
self.status = "job_submitter_error" self.status = "job_submitter_error"

View File

@@ -15,8 +15,6 @@ from lava.utils.uart_job_definition import (
 fastboot_deploy_actions,
 tftp_boot_action,
 tftp_deploy_actions,
-qemu_boot_action,
-qemu_deploy_actions,
 uart_test_actions,
 )
@@ -73,9 +71,6 @@ class LAVAJobDefinition:
         if args.boot_method == "fastboot":
             deploy_actions = fastboot_deploy_actions(self, nfsrootfs)
             boot_action = fastboot_boot_action(args)
-        elif args.boot_method == "qemu-nfs":
-            deploy_actions = qemu_deploy_actions(self, nfsrootfs)
-            boot_action = qemu_boot_action(args)
         else:  # tftp
             deploy_actions = tftp_deploy_actions(self, nfsrootfs)
             boot_action = tftp_boot_action(args)
@@ -147,10 +142,6 @@ class LAVAJobDefinition:
         if self.job_submitter.lava_tags:
             values["tags"] = self.job_submitter.lava_tags.split(",")

-        # QEMU lava jobs mandate proper arch value in the context
-        if self.job_submitter.boot_method == "qemu-nfs":
-            values["context"]["arch"] = self.job_submitter.mesa_job_name.split(":")[1]
-
         return values

     def attach_kernel_and_dtb(self, deploy_field):
@@ -193,7 +184,7 @@ class LAVAJobDefinition:
                 "set +x # HIDE_START",
                 f'echo -n "{jwt_file.read()}" > "{self.job_submitter.jwt_file}"',
                 "set -x # HIDE_END",
-                f'echo "export S3_JWT_FILE={self.job_submitter.jwt_file}" >> /set-job-env-vars.sh',
+                f'echo "export CI_JOB_JWT_FILE={self.job_submitter.jwt_file}" >> /set-job-env-vars.sh',
             ]
         else:
             download_steps += [
@@ -212,13 +203,7 @@ class LAVAJobDefinition:
         # - exec .gitlab-ci/common/init-stage2.sh
         with open(self.job_submitter.first_stage_init, "r") as init_sh:
-            # For vmware farm, patch nameserver as 8.8.8.8 is off limit.
-            # This is temporary and will be reverted once the farm is moved.
-            if self.job_submitter.mesa_job_name.startswith("vmware-"):
-                run_steps += [x.rstrip().replace("nameserver 8.8.8.8", "nameserver 10.25.198.110") for x in init_sh if not x.startswith("#") and x.rstrip()]
-            else:
-                run_steps += [x.rstrip() for x in init_sh if not x.startswith("#") and x.rstrip()]
+            run_steps += [x.rstrip() for x in init_sh if not x.startswith("#") and x.rstrip()]

         # We cannot distribute the Adreno 660 shader firmware inside rootfs,
         # since the license isn't bundled inside the repository
         if self.job_submitter.device_type == "sm8350-hdk":


@@ -82,24 +82,6 @@ def tftp_deploy_actions(job_definition: "LAVAJobDefinition", nfsrootfs) -> tuple
     return (tftp_deploy,)


-def qemu_deploy_actions(job_definition: "LAVAJobDefinition", nfsrootfs) -> tuple[dict[str, Any]]:
-    args = job_definition.job_submitter
-    qemu_deploy = {
-        "timeout": {"minutes": 5},
-        "to": "nfs",
-        "images": {
-            "kernel": {
-                "image_arg": "-kernel {kernel}",
-                "url": f"{args.kernel_url_prefix}/{args.kernel_image_name}",
-            },
-            "nfsrootfs": nfsrootfs,
-        },
-    }
-    job_definition.attach_external_modules(qemu_deploy)
-
-    return (qemu_deploy,)
-
-
 def uart_test_actions(
     args: "LAVAJobSubmitter", init_stage1_steps: list[str], artifact_download_steps: list[str]
 ) -> tuple[dict[str, Any]]:
@@ -158,16 +140,6 @@ def tftp_boot_action(args: "LAVAJobSubmitter") -> dict[str, Any]:
     return tftp_boot


-def qemu_boot_action(args: "LAVAJobSubmitter") -> dict[str, Any]:
-    qemu_boot = {
-        "failure_retry": NUMBER_OF_ATTEMPTS_LAVA_BOOT,
-        "method": args.boot_method,
-        "prompts": ["lava-shell:"],
-    }
-
-    return qemu_boot
-
-
 def fastboot_boot_action(args: "LAVAJobSubmitter") -> dict[str, Any]:
     fastboot_boot = {
         "timeout": {"minutes": 2},


@@ -104,7 +104,7 @@ rm -rf _build
 meson setup _build \
       --native-file=native.file \
       --wrap-mode=nofallback \
-      --force-fallback-for perfetto,syn,paste \
+      --force-fallback-for perfetto,syn \
       ${CROSS+--cross "$CROSS_FILE"} \
       -D prefix=$PWD/install \
       -D libdir=lib \


@@ -13,7 +13,7 @@ INSTALL="$PWD/install"
 # Set up the driver environment.
 export LD_LIBRARY_PATH="$INSTALL/lib/"
 export EGL_PLATFORM=surfaceless
-export VK_DRIVER_FILES="$INSTALL/share/vulkan/icd.d/${VK_DRIVER}_icd.${VK_CPU:-$(uname -m)}.json"
+export VK_ICD_FILENAMES="$INSTALL/share/vulkan/icd.d/${VK_DRIVER}_icd.${VK_CPU:-$(uname -m)}.json"

 RESULTS=$PWD/${PIGLIT_RESULTS_DIR:-results}
 mkdir -p $RESULTS


@@ -8,7 +8,7 @@ set -ex
 export PAGER=cat # FIXME: export everywhere

 INSTALL=$(realpath -s "$PWD"/install)
-S3_ARGS="--token-file ${S3_JWT_FILE}"
+S3_ARGS="--token-file ${CI_JOB_JWT_FILE}"

 RESULTS=$(realpath -s "$PWD"/results)
 mkdir -p "$RESULTS"
@@ -54,7 +54,7 @@ if [ -n "${VK_DRIVER}" ]; then
     export DXVK_LOG="$RESULTS/dxvk"
     [ -d "$DXVK_LOG" ] || mkdir -pv "$DXVK_LOG"
     export DXVK_STATE_CACHE=0
-    export VK_DRIVER_FILES="$INSTALL/share/vulkan/icd.d/${VK_DRIVER}_icd.${VK_CPU:-$(uname -m)}.json"
+    export VK_ICD_FILENAMES="$INSTALL/share/vulkan/icd.d/${VK_DRIVER}_icd.${VK_CPU:-$(uname -m)}.json"
 fi

 # Sanity check to ensure that our environment is sufficient to make our tests
@@ -117,7 +117,7 @@ else
     mkdir -p /tmp/.X11-unix
     env \
-      VK_DRIVER_FILES="/install/share/vulkan/icd.d/${VK_DRIVER}_icd.$(uname -m).json" \
+      VK_ICD_FILENAMES="/install/share/vulkan/icd.d/${VK_DRIVER}_icd.$(uname -m).json" \
       weston -Bheadless-backend.so --use-gl -Swayland-0 --xwayland --idle-time=0 &

     while [ ! -S "$WESTON_X11_SOCK" ]; do sleep 1; done
@@ -189,15 +189,6 @@ RUN_CMD="export LD_LIBRARY_PATH=$__LD_LIBRARY_PATH; $SANITY_MESA_VERSION_CMD &&
 # run.
 rm -rf replayer-db

-# ANGLE: download compiled ANGLE runtime and the compiled restricted traces (all-in-one package)
-if [ -n "$PIGLIT_REPLAY_ANGLE_TAG" ]; then
-  ARCH="amd64"
-  FILE="angle-bin-${ARCH}-${PIGLIT_REPLAY_ANGLE_TAG}.tar.zst"
-  ci-fairy s3cp $S3_ARGS "https://s3.freedesktop.org/mesa-tracie-private/${FILE}" "${FILE}"
-  mkdir -p replayer-db/angle
-  tar --zstd -xf ${FILE} -C replayer-db/angle/
-fi
-
 if ! eval $RUN_CMD;
 then
     printf "%s\n" "Found $(cat /tmp/version.txt), expected $MESA_VERSION"


@@ -38,6 +38,7 @@ cp -Rp .gitlab-ci/fossilize-runner.sh install/
 cp -Rp .gitlab-ci/crosvm-init.sh install/
 cp -Rp .gitlab-ci/*.txt install/
 cp -Rp .gitlab-ci/report-flakes.py install/
+cp -Rp .gitlab-ci/valve install/
 cp -Rp .gitlab-ci/vkd3d-proton install/
 cp -Rp .gitlab-ci/setup-test-env.sh install/
 cp -Rp .gitlab-ci/*-runner.sh install/
@@ -60,7 +61,7 @@ if [ -n "$S3_ARTIFACT_NAME" ]; then
   # Pass needed files to the test stage
   S3_ARTIFACT_NAME="$S3_ARTIFACT_NAME.tar.zst"
   zstd artifacts/install.tar -o ${S3_ARTIFACT_NAME}
-  ci-fairy s3cp --token-file "${S3_JWT_FILE}" ${S3_ARTIFACT_NAME} https://${PIPELINE_ARTIFACTS_BASE}/${S3_ARTIFACT_NAME}
+  ci-fairy s3cp --token-file "${CI_JOB_JWT_FILE}" ${S3_ARTIFACT_NAME} https://${PIPELINE_ARTIFACTS_BASE}/${S3_ARTIFACT_NAME}
 fi

 section_end prepare-artifacts


@@ -10,7 +10,7 @@ export LD_LIBRARY_PATH=$LIBDIR
 cd /usr/local/shader-db

-for driver in freedreno intel lima v3d vc4; do
+for driver in freedreno intel v3d vc4; do
     section_start shader-db-${driver} "Running shader-db for $driver"
     env LD_PRELOAD="$LIBDIR/lib${driver}_noop_drm_shim.so" \
         ./run -j"${FDO_CI_CONCURRENT:-4}" ./shaders \


@@ -14,14 +14,6 @@ function x_off {
 # TODO: implement x_on !

-export JOB_START_S=$(date -u +"%s" -d "${CI_JOB_STARTED_AT:?}")
-
-function get_current_minsec {
-    DATE_S=$(date -u +"%s")
-    CURR_TIME=$((DATE_S-JOB_START_S))
-    printf "%02d:%02d" $((CURR_TIME/60)) $((CURR_TIME%60))
-}
-
 function error {
     x_off 2>/dev/null
     RED="\e[0;31m"
@@ -29,7 +21,10 @@ function error {
     # we force the following to be not in a section
     section_end $CURRENT_SECTION

-    CURR_MINSEC=$(get_current_minsec)
+    DATE_S=$(date -u +"%s")
+    JOB_START_S=$(date -u +"%s" -d "${CI_JOB_STARTED_AT:?}")
+    CURR_TIME=$((DATE_S-JOB_START_S))
+    CURR_MINSEC="$(printf "%02d" $((CURR_TIME/60))):$(printf "%02d" $((CURR_TIME%60)))"
     echo -e "\n${RED}[${CURR_MINSEC}] ERROR: $*${ENDCOLOR}\n"
     [ "$state_x" -eq 0 ] || set -x
 }
@@ -47,7 +42,10 @@ function build_section_start {
     CYAN="\e[0;36m"
     ENDCOLOR="\e[0m"

-    CURR_MINSEC=$(get_current_minsec)
+    DATE_S=$(date -u +"%s")
+    JOB_START_S=$(date -u +"%s" -d "${CI_JOB_STARTED_AT:?}")
+    CURR_TIME=$((DATE_S-JOB_START_S))
+    CURR_MINSEC="$(printf "%02d" $((CURR_TIME/60))):$(printf "%02d" $((CURR_TIME%60)))"
     echo -e "\n\e[0Ksection_start:$(date +%s):$section_name$section_params\r\e[0K${CYAN}[${CURR_MINSEC}] $*${ENDCOLOR}\n"
 }
@@ -89,7 +87,6 @@ function uncollapsed_section_switch {
 }

 export -f x_off
-export -f get_current_minsec
 export -f error
 export -f trap_err
 export -f build_section_start


@@ -227,10 +227,7 @@
 .lint-rustfmt-rules:
   rules:
     - !reference [.never-post-merge-rules, rules]
-    - !reference [.no_scheduled_pipelines-rules, rules]
-    - changes:
-        - .gitlab-ci.yml
-        - .gitlab-ci/**/*
+    - !reference [.core-rules, rules]
     # in merge pipeline, formatting checks are not allowed to fail
     - if: $GITLAB_USER_LOGIN == "marge-bot" && $CI_PIPELINE_SOURCE == "merge_request_event"
       changes: &rust_file_list
@@ -241,13 +238,3 @@
     - changes: *rust_file_list
       when: on_success
       allow_failure: true
-
-# Rules for .mr-label-maker.yml
-.mr-label-maker-rules:
-  rules:
-    - !reference [.never-post-merge-rules, rules]
-    - !reference [.no_scheduled_pipelines-rules, rules]
-    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
-      changes:
-        - .mr-label-maker.yml
-      when: on_success


@@ -43,7 +43,7 @@ rustfmt:
     - rustfmt --verbose src/**/lib.rs
     - rustfmt --verbose src/**/main.rs

-python-test:
+.test-check:
   # Cancel job if a newer commit is pushed to the same branch
   interruptible: true
   stage: code-validation
@@ -52,6 +52,10 @@ python-test:
   variables:
     GIT_STRATEGY: fetch
   timeout: 10m
+
+python-test:
+  extends:
+    - .test-check
   script:
     - cd bin/ci
     - pip install --break-system-packages -r test/requirements.txt
@@ -59,18 +63,8 @@ python-test:
   rules:
     - !reference [.disable-farm-mr-rules, rules]
     - !reference [.never-post-merge-rules, rules]
-    - if: $CI_PIPELINE_SOURCE == "schedule"
-      when: on_success
-    - if: $CI_PIPELINE_SOURCE == "push" && $CI_PROJECT_NAMESPACE == "mesa" && $GITLAB_USER_LOGIN != "marge-bot"
-      when: on_success
-    - if: $GITLAB_USER_LOGIN == "marge-bot"
-      changes: &bin_ci_files
-        - .gitlab-ci.yml
-        - .gitlab-ci/**/*
+    - changes:
         - bin/ci/**/*
-      when: on_success
-    - changes: *bin_ci_files
-      when: manual

 .test-gl:
   extends:
@@ -158,7 +152,7 @@ python-test:
     exclude:
       - results/*.shader_cache
   variables:
-    PIGLIT_REPLAY_EXTRA_ARGS: --db-path ${CI_PROJECT_DIR}/replayer-db/ --minio_bucket=${S3_TRACIE_PUBLIC_BUCKET} --jwt-file=${S3_JWT_FILE}
+    PIGLIT_REPLAY_EXTRA_ARGS: --db-path ${CI_PROJECT_DIR}/replayer-db/ --minio_bucket=mesa-tracie-public --jwt-file=${CI_JOB_JWT_FILE}
     # until we overcome Infrastructure issues, give traces extra 5 min before timeout
     DEVICE_HANGING_TIMEOUT_SEC: 600
   script:
@@ -186,7 +180,11 @@ python-test:
       paths:
         - results/

-.download_s3:
+.baremetal-test:
+  extends:
+    - .test
+  # Cancel job if a newer commit is pushed to the same branch
+  interruptible: true
   before_script:
     - !reference [default, before_script]
     # Use this instead of gitlab's artifacts download because it hits packet.net
@@ -198,14 +196,6 @@ python-test:
     - rm -rf install
     - (set -x; curl -L --retry 4 -f --retry-all-errors --retry-delay 60 ${FDO_HTTP_CACHE_URI:-}https://${PIPELINE_ARTIFACTS_BASE}/${S3_ARTIFACT_NAME}.tar.zst | tar --zstd -x)
     - section_end artifacts_download
-
-.baremetal-test:
-  extends:
-    - .test
-  # Cancel job if a newer commit is pushed to the same branch
-  interruptible: true
-  before_script:
-    - !reference [.download_s3, before_script]
   variables:
     BM_ROOTFS: /rootfs-${DEBIAN_ARCH}
   artifacts:
@@ -407,7 +397,7 @@ python-test:
     reports:
       junit: results/**/junit.xml

-.b2c-x86_64-test-vk:
+.b2c-test-vk:
   extends:
     - .use-debian/x86_64_test-vk
     - .b2c-test
@@ -416,7 +406,7 @@ python-test:
     - debian-testing
     - !reference [.required-for-hardware-jobs, needs]

-.b2c-x86_64-test-gl:
+.b2c-test-gl:
   extends:
     - .use-debian/x86_64_test-gl
     - .b2c-test


@@ -15,7 +15,7 @@ from typing import Generator
 from unittest.mock import MagicMock, patch

 import pytest

-from lava.exceptions import MesaCIException, MesaCIRetryError, MesaCIFatalException
+from lava.exceptions import MesaCIException, MesaCIRetryError
 from lava.lava_job_submitter import (
     DEVICE_HANGING_TIMEOUT_SEC,
     NUMBER_OF_RETRIES_TIMEOUT_DETECTION,
@@ -24,7 +24,6 @@ from lava.lava_job_submitter import (
     bootstrap_log_follower,
     follow_job_execution,
     retriable_follow_job,
-    wait_for_job_get_started,
 )
 from lava.utils import LogSectionType
@@ -84,7 +83,7 @@ def lava_job_submitter(
 def test_submit_and_follow_respects_exceptions(mock_sleep, mock_proxy, exception):
     with pytest.raises(MesaCIException):
         proxy = mock_proxy(side_effect=exception)
-        job = LAVAJob(proxy, "")
+        job = LAVAJob(proxy, '')
         log_follower = bootstrap_log_follower()
         follow_job_execution(job, log_follower)
@@ -166,13 +165,21 @@ PROXY_SCENARIOS = {
         mock_logs(result="pass"),
         does_not_raise(),
         "pass",
-        {"testsuite_results": [generate_testsuite_result(result="pass")]},
+        {
+            "testsuite_results": [
+                generate_testsuite_result(result="pass")
+            ]
+        },
     ),
     "no retries, but testsuite fails": (
         mock_logs(result="fail"),
         does_not_raise(),
         "fail",
-        {"testsuite_results": [generate_testsuite_result(result="fail")]},
+        {
+            "testsuite_results": [
+                generate_testsuite_result(result="fail")
+            ]
+        },
     ),
     "no retries, one testsuite fails": (
         generate_n_logs(n=1, tick_fn=0, result="fail"),
@@ -181,7 +188,7 @@ PROXY_SCENARIOS = {
         {
             "testsuite_results": [
                 generate_testsuite_result(result="fail"),
-                generate_testsuite_result(result="pass"),
+                generate_testsuite_result(result="pass")
             ]
         },
     ),
@@ -258,27 +265,6 @@ def test_simulate_a_long_wait_to_start_a_job(
     assert delta_time.total_seconds() >= wait_time


-LONG_LAVA_QUEUE_SCENARIOS = {
-    "no_time_to_run": (0, pytest.raises(MesaCIFatalException)),
-    "enough_time_to_run": (9999999999, does_not_raise()),
-}
-
-
-@pytest.mark.parametrize(
-    "job_timeout, expectation",
-    LONG_LAVA_QUEUE_SCENARIOS.values(),
-    ids=LONG_LAVA_QUEUE_SCENARIOS.keys(),
-)
-def test_wait_for_job_get_started_no_time_to_run(monkeypatch, job_timeout, expectation):
-    monkeypatch.setattr("lava.lava_job_submitter.CI_JOB_TIMEOUT_SEC", job_timeout)
-    job = MagicMock()
-    # Make it escape the loop
-    job.is_started.side_effect = (False, False, True)
-    with expectation as e:
-        wait_for_job_get_started(job, 1)
-    if e:
-        job.cancel.assert_called_with()


 CORRUPTED_LOG_SCENARIOS = {
     "too much subsequent corrupted data": (
@@ -452,7 +438,9 @@ def test_job_combined_status(
         "lava.lava_job_submitter.retriable_follow_job"
     ) as mock_retriable_follow_job, patch(
         "lava.lava_job_submitter.LAVAJobSubmitter._LAVAJobSubmitter__prepare_submission"
-    ) as mock_prepare_submission, patch("sys.exit"):
+    ) as mock_prepare_submission, patch(
+        "sys.exit"
+    ):
         from lava.lava_job_submitter import STRUCTURAL_LOG

         mock_retriable_follow_job.return_value = MagicMock(status=finished_job_status)


@@ -0,0 +1,87 @@
+#!/usr/bin/env bash
+# shellcheck disable=SC2086 # we want word splitting
+
+set -ex
+
+if [[ -z "$VK_DRIVER" ]]; then
+    exit 1
+fi
+
+# Useful debug output, you rarely know what environment you'll be
+# running in within container-land, this can be a landmark.
+ls -l
+
+INSTALL=$(realpath -s "$PWD"/install)
+RESULTS=$(realpath -s "$PWD"/results)
+
+# Set up the driver environment.
+# Modifying LD_LIBRARY_PATH directly here may cause problems when
+# using a command wrapper. Hence, we will just set it when running the
+# command.
+export __LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$INSTALL/lib/"
+
+# Sanity check to ensure that our environment is sufficient to make our tests
+# run against the Mesa built by CI, rather than any installed distro version.
+MESA_VERSION=$(sed 's/\./\\./g' "$INSTALL/VERSION")
+
+# Force the stdout and stderr streams to be unbuffered in python.
+export PYTHONUNBUFFERED=1
+
+# Set the Vulkan driver to use.
+export VK_ICD_FILENAMES="$INSTALL/share/vulkan/icd.d/${VK_DRIVER}_icd.x86_64.json"
+if [ "${VK_DRIVER}" = "radeon" ]; then
+    # Disable vsync
+    export MESA_VK_WSI_PRESENT_MODE=mailbox
+    export vblank_mode=0
+fi
+
+# Set environment for Wine.
+export WINEDEBUG="-all"
+export WINEPREFIX="/dxvk-wine64"
+export WINEESYNC=1
+
+# Wait for amdgpu to be fully loaded
+sleep 1
+
+# Avoid having to perform nasty command pre-processing to insert the
+# wine executable in front of the test executables. Instead, use the
+# kernel's binfmt support to automatically use Wine as an interpreter
+# when asked to load PE executables.
+# TODO: Have boot2container mount this filesystem for all jobs?
+mount -t binfmt_misc none /proc/sys/fs/binfmt_misc
+echo ':DOSWin:M::MZ::/usr/bin/wine64:' > /proc/sys/fs/binfmt_misc/register
+
+# Set environment for DXVK.
+export DXVK_LOG_LEVEL="info"
+export DXVK_LOG="$RESULTS/dxvk"
+[ -d "$DXVK_LOG" ] || mkdir -pv "$DXVK_LOG"
+export DXVK_STATE_CACHE=0
+
+# Set environment for replaying traces.
+export PATH="/apitrace-msvc-win64/bin:/gfxreconstruct/build/bin:$PATH"
+
+SANITY_MESA_VERSION_CMD="vulkaninfo"
+
+# Set up the Window System Interface (WSI)
+# TODO: Can we get away with GBM?
+if [ "${TEST_START_XORG:-0}" -eq 1 ]; then
+    "$INSTALL"/common/start-x.sh "$INSTALL"
+    export DISPLAY=:0
+fi
+
+wine64 --version
+
+SANITY_MESA_VERSION_CMD="$SANITY_MESA_VERSION_CMD | tee /tmp/version.txt | grep \"Mesa $MESA_VERSION\(\s\|$\)\""
+
+RUN_CMD="export LD_LIBRARY_PATH=$__LD_LIBRARY_PATH; $SANITY_MESA_VERSION_CMD"
+
+set +e
+if ! eval $RUN_CMD;
+then
+    printf "%s\n" "Found $(cat /tmp/version.txt), expected $MESA_VERSION"
+fi
+set -e
+
+# Just to be sure...
+chmod +x ./valvetraces-run.sh
+./valvetraces-run.sh


@@ -23,7 +23,7 @@ export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$INSTALL/lib/:/vkd3d-proton-tests/x64/"
 MESA_VERSION=$(sed 's/\./\\./g' "$INSTALL/VERSION")

 # Set the Vulkan driver to use.
-export VK_DRIVER_FILES="$INSTALL/share/vulkan/icd.d/${VK_DRIVER}_icd.x86_64.json"
+export VK_ICD_FILENAMES="$INSTALL/share/vulkan/icd.d/${VK_DRIVER}_icd.x86_64.json"

 # Set environment for Wine.
 export WINEDEBUG="-all"


@@ -7,10 +7,6 @@ COPY mesa_deps_vulkan_sdk.ps1 C:\
 RUN C:\mesa_deps_vulkan_sdk.ps1

 COPY mesa_init_msvc.ps1 C:\

-COPY mesa_deps_libva.ps1 C:\
-RUN C:\mesa_deps_libva.ps1
-
 COPY mesa_deps_build.ps1 C:\
 RUN C:\mesa_deps_build.ps1


@@ -14,9 +14,6 @@ RUN C:\mesa_deps_rust.ps1
 COPY mesa_init_msvc.ps1 C:\

-COPY mesa_deps_libva.ps1 C:\
-RUN C:\mesa_deps_libva.ps1
-
 COPY mesa_deps_test_piglit.ps1 C:\
 RUN C:\mesa_deps_test_piglit.ps1
 COPY mesa_deps_test_deqp.ps1 c:\


@@ -1,4 +1,4 @@
-# VK_DRIVER_FILES environment variable is not used when running with
+# VK_ICD_FILENAMES environment variable is not used when running with
 # elevated privileges. Add a key to the registry instead.
 $hkey_path = "HKLM:\SOFTWARE\Khronos\Vulkan\Drivers\"
 $hkey_name = Join-Path -Path $pwd -ChildPath "_install\share\vulkan\icd.d\dzn_icd.x86_64.json"


@@ -84,6 +84,4 @@ Copy-Item ".\.gitlab-ci\windows\spirv2dxil_run.ps1" -Destination $installdir
 Copy-Item ".\.gitlab-ci\windows\deqp_runner_run.ps1" -Destination $installdir

-Copy-Item ".\.gitlab-ci\windows\vainfo_run.ps1" -Destination $installdir
-
 Get-ChildItem -Recurse -Filter "ci" | Get-ChildItem -Include "*.txt","*.toml" | Copy-Item -Destination $installdir


@@ -12,7 +12,7 @@ $depsInstallPath="C:\mesa-deps"
 Get-Date
 Write-Host "Cloning DirectX-Headers"
-git clone -b v1.613.1 --depth=1 https://github.com/microsoft/DirectX-Headers deps/DirectX-Headers
+git clone -b v1.611.0 --depth=1 https://github.com/microsoft/DirectX-Headers deps/DirectX-Headers
 if (!$?) {
   Write-Host "Failed to clone DirectX-Headers repository"
   Exit 1
@@ -32,17 +32,16 @@ if (!$buildstatus) {
 Get-Date
 Write-Host "Cloning zlib"
-git clone -b v1.3.1 --depth=1 https://github.com/madler/zlib deps/zlib
+git clone -b v1.2.13 --depth=1 https://github.com/madler/zlib deps/zlib
 if (!$?) {
   Write-Host "Failed to clone zlib repository"
   Exit 1
 }
 Write-Host "Downloading zlib meson build files"
-Invoke-WebRequest -Uri "https://wrapdb.mesonbuild.com/v2/zlib_1.3.1-1/get_patch" -OutFile deps/zlib.zip
+Invoke-WebRequest -Uri "https://wrapdb.mesonbuild.com/v2/zlib_1.2.13-1/get_patch" -OutFile deps/zlib.zip
 Expand-Archive -Path deps/zlib.zip -Destination deps/zlib
 # Wrap archive puts build files in a version subdir
-robocopy deps/zlib/zlib-1.3.1 deps/zlib /E
-Remove-Item -Recurse -Force -ErrorAction SilentlyContinue -Path deps/zlib/zlib-1.3.1
+Move-Item deps/zlib/zlib-1.2.13/* deps/zlib
 $zlib_build = New-Item -ItemType Directory -Path ".\deps\zlib" -Name "build"
 Push-Location -Path $zlib_build.FullName
 meson .. --backend=ninja -Dprefix="$depsInstallPath" --default-library=static --buildtype=release -Db_vscrt=mt && `
@@ -55,6 +54,35 @@ if (!$buildstatus) {
   Exit 1
 }

+Get-Date
+Write-Host "Cloning libva"
+git clone https://github.com/intel/libva.git deps/libva
+if (!$?) {
+  Write-Host "Failed to clone libva repository"
+  Exit 1
+}
+
+Push-Location -Path ".\deps\libva"
+Write-Host "Checking out libva df3c584bb79d1a1e521372d62fa62e8b1c52ce6c"
+# libva-win32 is released with libva version 2.17 (see https://github.com/intel/libva/releases/tag/2.17.0)
+git checkout 2.17.0
+Pop-Location
+
+Write-Host "Building libva"
+# libva already has a build dir in their repo, use builddir instead
+$libva_build = New-Item -ItemType Directory -Path ".\deps\libva" -Name "builddir"
+Push-Location -Path $libva_build.FullName
+meson .. -Dprefix="$depsInstallPath"
+ninja -j32 install
+$buildstatus = $?
+Pop-Location
+Remove-Item -Recurse -Force -ErrorAction SilentlyContinue -Path $libva_build
+if (!$buildstatus) {
+  Write-Host "Failed to compile libva"
+  Exit 1
+}
+
 Get-Date
 Write-Host "Cloning LLVM release/15.x"
 git clone -b release/15.x --depth=1 https://github.com/llvm/llvm-project deps/llvm-project


@@ -8,7 +8,7 @@ $depsInstallPath="C:\mesa-deps"
 Write-Host "Downloading DirectX 12 Agility SDK at:"
 Get-Date
-Invoke-WebRequest -Uri https://www.nuget.org/api/v2/package/Microsoft.Direct3D.D3D12/1.613.2 -OutFile 'agility.zip'
+Invoke-WebRequest -Uri https://www.nuget.org/api/v2/package/Microsoft.Direct3D.D3D12/1.610.2 -OutFile 'agility.zip'
 Expand-Archive -Path 'agility.zip' -DestinationPath 'C:\agility'
 # Copy Agility SDK into mesa-deps\bin\D3D12
 New-Item -ErrorAction SilentlyContinue -ItemType Directory -Path $depsInstallPath\bin -Name 'D3D12'
@@ -18,7 +18,7 @@ Remove-Item -Recurse 'C:\agility'
 Write-Host "Downloading Updated WARP at:"
 Get-Date
-Invoke-WebRequest -Uri https://www.nuget.org/api/v2/package/Microsoft.Direct3D.WARP/1.0.11 -OutFile 'warp.zip'
+Invoke-WebRequest -Uri https://www.nuget.org/api/v2/package/Microsoft.Direct3D.WARP/1.0.9 -OutFile 'warp.zip'
 Expand-Archive -Path 'warp.zip' -DestinationPath 'C:\warp'
 # Copy WARP into mesa-deps\bin
 Copy-Item 'C:\warp\build\native\amd64\d3d10warp.dll' -Destination $depsInstallPath\bin
@@ -27,7 +27,7 @@ Remove-Item -Recurse 'C:\warp'
 Write-Host "Downloading DirectXShaderCompiler release at:"
 Get-Date
-Invoke-WebRequest -Uri https://github.com/microsoft/DirectXShaderCompiler/releases/download/v1.8.2403/dxc_2024_03_07.zip -OutFile 'DXC.zip'
+Invoke-WebRequest -Uri https://github.com/microsoft/DirectXShaderCompiler/releases/download/v1.7.2207/dxc_2022_07_18.zip -OutFile 'DXC.zip'
 Expand-Archive -Path 'DXC.zip' -DestinationPath 'C:\DXC'
 # No more need to get dxil.dll from the VS install
 Copy-Item 'C:\DXC\bin\x64\*.dll' -Destination 'C:\Windows\System32'


@@ -1,79 +0,0 @@
-# Compiling libva/libva-utils deps
-
-$ProgressPreference = "SilentlyContinue"
-$MyPath = $MyInvocation.MyCommand.Path | Split-Path -Parent
-. "$MyPath\mesa_init_msvc.ps1"
-
-Remove-Item -Recurse -Force -ErrorAction SilentlyContinue "deps" | Out-Null
-$depsInstallPath="C:\mesa-deps"
-
-Write-Host "Cloning libva at:"
-Get-Date
-git clone https://github.com/intel/libva.git deps/libva
-if (!$?) {
-  Write-Host "Failed to clone libva repository"
-  Exit 1
-}
-Write-Host "Cloning libva finished at:"
-Get-Date
-
-Write-Host "Building libva at:"
-Get-Date
-Push-Location -Path ".\deps\libva"
-Write-Host "Checking out libva..."
-git checkout 2.21.0
-Pop-Location
-# libva already has a build dir in their repo, use builddir instead
-$libva_build = New-Item -ItemType Directory -Path ".\deps\libva" -Name "builddir"
-Push-Location -Path $libva_build.FullName
-meson .. -Dprefix="$depsInstallPath"
-ninja -j32 install
-$buildstatus = $?
-Pop-Location
-Remove-Item -Recurse -Force -ErrorAction SilentlyContinue -Path $libva_build
-if (!$buildstatus) {
-  Write-Host "Failed to compile libva"
-  Exit 1
-}
-Write-Host "Building libva finished at:"
-Get-Date
-
-Write-Host "Cloning libva-utils at:"
-Get-Date
-git clone https://github.com/intel/libva-utils.git deps/libva-utils
-if (!$?) {
-  Write-Host "Failed to clone libva-utils repository"
-  Exit 1
-}
-Write-Host "Cloning libva-utils finished at:"
-Get-Date
-
-Write-Host "Building libva-utils at:"
-Get-Date
-Push-Location -Path ".\deps\libva-utils"
-Write-Host "Checking out libva-utils..."
-git checkout 2.21.0
-Pop-Location
-Write-Host "Building libva-utils"
-# libva-utils already has a build dir in their repo, use builddir instead
-$libva_utils_build = New-Item -ItemType Directory -Path ".\deps\libva-utils" -Name "builddir"
-Push-Location -Path $libva_utils_build.FullName
-meson .. -Dprefix="$depsInstallPath" --pkg-config-path="$depsInstallPath\lib\pkgconfig;$depsInstallPath\share\pkgconfig"
-ninja -j32 install
-$buildstatus = $?
-Pop-Location
-Remove-Item -Recurse -Force -ErrorAction SilentlyContinue -Path $libva_utils_build
-if (!$buildstatus) {
-  Write-Host "Failed to compile libva-utils"
-  Exit 1
-}
-Write-Host "Building libva-utils finished at:"
-Get-Date


@@ -1,99 +0,0 @@
function Deploy-Dependencies {
param (
[string] $deploy_directory
)
Write-Host "Copying libva runtime and driver at:"
Get-Date
# Copy the VA runtime binaries from the mesa built dependencies so the versions match with the built mesa VA driver binary
$depsInstallPath="C:\mesa-deps"
Copy-Item "$depsInstallPath\bin\vainfo.exe" -Destination "$deploy_directory\vainfo.exe"
Copy-Item "$depsInstallPath\bin\va_win32.dll" -Destination "$deploy_directory\va_win32.dll"
Copy-Item "$depsInstallPath\bin\va.dll" -Destination "$deploy_directory\va.dll"
# Copy Agility SDK into D3D12 subfolder of vainfo
New-Item -ItemType Directory -Force -Path "$deploy_directory\D3D12" | Out-Null
Copy-Item "$depsInstallPath\bin\D3D12\D3D12Core.dll" -Destination "$deploy_directory\D3D12\D3D12Core.dll"
Copy-Item "$depsInstallPath\bin\D3D12\d3d12SDKLayers.dll" -Destination "$deploy_directory\D3D12\d3d12SDKLayers.dll"
# Copy WARP next to vainfo
Copy-Item "$depsInstallPath\bin\d3d10warp.dll" -Destination "$deploy_directory\d3d10warp.dll"
Write-Host "Copying libva runtime and driver finished at:"
Get-Date
}
function Check-VAInfo-Entrypoint {
param (
[string] $vainfo_app_path,
[string] $entrypoint
)
$vainfo_run_cmd = "$vainfo_app_path --display win32 --device 0 2>&1 | Select-String $entrypoint -Quiet"
Write-Host "Running: $vainfo_run_cmd"
$vainfo_ret_code= Invoke-Expression $vainfo_run_cmd
if (-not($vainfo_ret_code)) {
return 0
}
return 1
}
# Set testing environment variables
$successful_run=1
$testing_dir="$PWD\_install\bin" # vaon12_drv_video.dll is placed on this directory by the build
$vainfo_app_path = "$testing_dir\vainfo.exe"
# Deploy vainfo and dependencies
Deploy-Dependencies -deploy_directory $testing_dir
# Set VA runtime environment variables
$env:LIBVA_DRIVER_NAME="vaon12"
$env:LIBVA_DRIVERS_PATH="$testing_dir"
Write-Host "LIBVA_DRIVER_NAME: $env:LIBVA_DRIVER_NAME"
Write-Host "LIBVA_DRIVERS_PATH: $env:LIBVA_DRIVERS_PATH"
# Check video processing entrypoint is supported
# Inbox WARP/D3D12 supports this entrypoint via VA frontend shaders (i.e. no video API support is required)
$entrypoint = "VAEntrypointVideoProc"
# First run without app verifier
Write-Host "Disabling appverifier for $vainfo_app_path and checking that $entrypoint is supported..."
appverif.exe /disable * -for "$vainfo_app_path"
$result_without_appverifier = Check-VAInfo-Entrypoint -vainfo_app_path $vainfo_app_path -entrypoint $entrypoint
if ($result_without_appverifier -eq 1) {
Write-Host "Process exited successfully."
} else {
$successful_run=0
Write-Error "Process exit not successful for $vainfo_app_path. Please see vainfo verbose output below for diagnostics..."
# verbose run to print more info on error (helpful to investigate issues from the CI output)
Invoke-Expression "$vainfo_app_path -a --display win32 --device help"
Invoke-Expression "$vainfo_app_path -a --display win32 --device 0"
}
# Enable appverif and run again
Write-Host "Enabling appverifier for $vainfo_app_path and checking that $entrypoint is supported..."
appverif.exe /logtofile enable
appverif.exe /verify "$vainfo_app_path"
appverif.exe /enable "Leak" -for "$vainfo_app_path"
$verifier_log_path="$testing_dir\vainfo_appverif_log.xml"
$result_with_appverifier = Check-VAInfo-Entrypoint -vainfo_app_path $vainfo_app_path -entrypoint $entrypoint
if ($result_with_appverifier -eq 1) {
Write-Host "Process exited successfully."
appverif.exe /logtofile disable
} else {
Write-Host "Process failed. Please see Application Verifier log contents below."
# Need to wait for appverif to exit before gathering log
Start-Process -Wait -FilePath "appverif.exe" -ArgumentList "-export", "log", "-for", "$vainfo_app_path", "-with", "to=$verifier_log_path"
Get-Content $verifier_log_path
Write-Error "Process exit not successful for $vainfo_app_path."
appverif.exe /logtofile disable
$successful_run=0
}
if ($successful_run -ne 1) {
Exit 1
}


@@ -88,7 +88,7 @@ issues:
 'bisected': 'bisected'
 'coverity': 'coverity'
 'deqp': 'deqp'
-'feature request': 'feature request'
+'feature request': 'feature_request'
 'haiku' : 'haiku'
 'regression': 'regression'
@@ -113,11 +113,8 @@ merge_requests:
 paths:
 '^.gitlab/issue_templates/' : ['docs']
 '^.gitlab-ci' : ['CI']
-'^.*/gitlab-ci(-inc)?.yml' : ['CI']
-'^.*/ci/deqp-.*\.toml' : ['CI']
-'^.*/ci/.*-(fails|flakes|skips)\.txt' : ['CI']
-'^.*/ci/(restricted-)?traces-.*\.yml' : ['CI']
-'^.*/ci/.*-validation-settings\.txt' : ['CI']
+'^.*/gitlab-ci.yml' : ['CI']
+'^.*/ci/' : ['CI']
 '^.gitlab-ci/windows/' : ['Windows']
 '^bin/__init__.py$' : ['maintainer-scripts']
 '^bin/gen_release_notes' : ['maintainer-scripts']

File diff suppressed because it is too large


@@ -1 +1 @@
-24.1.3
+24.1.0-devel


@@ -289,8 +289,7 @@ def parse_args() -> None:
     parser.add_argument(
         "--target",
         metavar="target-job",
-        help="Target job regex. For multiple targets, pass multiple values, "
-        "eg. `--target foo bar`.",
+        help="Target job regex. For multiple targets, separate with pipe | character",
         required=True,
         nargs=argparse.ONE_OR_MORE,
     )
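Per the updated help text, multiple target jobs can be selected by combining several regexes into one pattern with the pipe character. A minimal sketch of how such a pattern selects job names (the job names below are hypothetical, not taken from the real pipeline):

```python
import re

# Hypothetical CI job names; the real names come from the pipeline.
jobs = ["lavapipe-vk", "radv-fossils", "zink-anv-tgl", "docs-test"]

# Two target regexes combined with "|", as the help text suggests.
pattern = re.compile("lavapipe-.*|zink-.*")

matched = [job for job in jobs if pattern.fullmatch(job)]
print(matched)
```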


@@ -1,26 +0,0 @@
# For performance reasons we don't use a lock here and reading
# a stale value is of no consequence
fun:util_queue_fence_is_signalled
# We also have to blacklist this function, because otherwise tsan will
# still report the unlocked read above
fun:util_queue_fence_signal
# lavapipe:
# Same as above: for perf reasons the fence signal value is read without
# a lock
fun:lp_fence_signalled
fun:lp_fence_signal
# gallium/tc
# Keeping track of tc->last_completed is an optimization and it is of no
# consequence to read a stale value there, so suppress the warning about the
# race condition
fun:tc_batch_execute
# This is a debug feature and ATM it is simpler to suppress the race warning
fun:tc_set_driver_thread
# vulkan/runtime
# Even with the data race the returned value is always the same
fun:get_max_abs_timeout_ns
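The suppression list above uses one `fun:<function>` pattern per line, with `#` starting a comment. As an illustration of the format only (this is not TSan's own parser), the entries can be read like so:

```python
def parse_suppressions(text):
    """Collect function names from "fun:<name>" lines, skipping
    blank lines and "#" comments. Illustrative sketch only."""
    funcs = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        kind, _, name = line.partition(":")
        if kind == "fun" and name:
            funcs.append(name)
    return funcs

sample = """\
# perf reasons, a stale read is harmless here
fun:util_queue_fence_is_signalled
fun:lp_fence_signal
"""
print(parse_suppressions(sample))
```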


@@ -1,80 +0,0 @@
Name
MESA_x11_native_visual_id
Name Strings
EGL_MESA_x11_native_visual_id
Contact
Eric Engestrom <eric@engestrom.ch>
Status
Complete, shipping.
Version
Version 2, May 10, 2024
Number
EGL Extension #TBD
Extension Type
EGL display extension
Dependencies
None. This extension is written against the
wording of the EGL 1.5 specification.
Overview
This extension allows EGL_NATIVE_VISUAL_ID to be used in
eglChooseConfig() for a display of type EGL_PLATFORM_X11_EXT.
IP Status
Open-source; freely implementable.
New Types
None
New Procedures and Functions
None
New Tokens
None
In section 3.4.1.1 "Selection of EGLConfigs" of the EGL 1.5
Specification, replace:
If EGL_MAX_PBUFFER_WIDTH, EGL_MAX_PBUFFER_HEIGHT,
EGL_MAX_PBUFFER_PIXELS, or EGL_NATIVE_VISUAL_ID are specified in
attrib list, then they are ignored [...]
with:
If EGL_MAX_PBUFFER_WIDTH, EGL_MAX_PBUFFER_HEIGHT,
or EGL_MAX_PBUFFER_PIXELS are specified in attrib list, then they
are ignored [...]. EGL_NATIVE_VISUAL_ID is ignored except on
a display of type EGL_PLATFORM_X11_EXT when EGL_ALPHA_SIZE is
greater than zero.
Issues
None.
Revision History
Version 1, March 27, 2024 (Eric Engestrom)
Initial draft
Version 2, May 10, 2024 (David Heidelberg)
add EGL_ALPHA_SIZE condition
add Extension type and set it to display extension
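The amended wording means eglChooseConfig() honours EGL_NATIVE_VISUAL_ID only on an X11 display when EGL_ALPHA_SIZE is greater than zero, and ignores it everywhere else. A standalone sketch of that selection rule (plain strings and ints stand in for the real EGL platform and attribute types):

```python
def native_visual_id_considered(platform, alpha_size):
    """Per the extension text: EGL_NATIVE_VISUAL_ID is ignored except
    on an EGL_PLATFORM_X11_EXT display when EGL_ALPHA_SIZE > 0.
    Simplified stand-in, not the real EGL API."""
    return platform == "x11" and alpha_size > 0

print(native_visual_id_considered("x11", 8))      # translucent config on X11
print(native_visual_id_considered("x11", 0))      # opaque config: attribute ignored
print(native_visual_id_considered("wayland", 8))  # non-X11 display: ignored
```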


@@ -96,7 +96,7 @@ class BootstrapHTML5TranslatorMixin:
         self.body.append(tag)

 def setup_translators(app):
-    if app.builder.format != "html":
+    if app.builder.default_translator_class is None:
         return

     if not app.registry.translators.items():
@@ -111,6 +111,10 @@ def setup_translators(app):
         app.set_translator(app.builder.name, translator, override=True)
     else:
         for name, klass in app.registry.translators.items():
+            if app.builder.format != "html":
+                # Skip translators that are not HTML
+                continue
             translator = types.new_class(
                 "BootstrapHTML5Translator",
                 (
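The surrounding code builds each translator class dynamically with `types.new_class`. A minimal standalone sketch of that pattern (the class names and method below are stand-ins, not the Mesa docs code):

```python
import types

class BootstrapMixin:
    # Stand-in for BootstrapHTML5TranslatorMixin: decorates the base output.
    def render(self):
        return "bootstrap:" + super().render()

class BaseTranslator:
    def render(self):
        return "base"

# Compose a new class at runtime, mirroring how setup_translators()
# assembles BootstrapHTML5Translator from a mixin plus the builder's
# existing translator class.
Translator = types.new_class("Translator", (BootstrapMixin, BaseTranslator))
print(Translator().render())
```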

Some files were not shown because too many files have changed in this diff