Compare commits

...

90 Commits

Author SHA1 Message Date
Chad Versace
5520221118 vk: Remove unneeded vulkan-138.h 2015-07-15 17:16:07 -07:00
Chad Versace
73a8f9543a vk: Bump vulkan.h version to 0.138 2015-07-15 17:16:07 -07:00
Chad Versace
55781f8d02 vk/0.138: Update VkResult values 2015-07-15 17:16:07 -07:00
Chad Versace
756d8064c1 vk/0.132: Do type-safety 2015-07-15 17:16:07 -07:00
Jason Ekstrand
927f54de68 vk/cmd_buffer: Move batch buffer padding to anv_batch_bo_finish() 2015-07-15 17:11:04 -07:00
Jason Ekstrand
9c0db9d349 vk/cmd_buffer: Rename bo_count to exec2_bo_count 2015-07-15 16:56:29 -07:00
Jason Ekstrand
6037b5d610 vk/cmd_buffer: Add a helper for allocating dynamic state
This matches what we do for surface state and makes the dynamic state pool
more opaque to things that need to get dynamic state.
2015-07-15 16:56:29 -07:00
Jason Ekstrand
7ccc8dd24a vk/private.h: Move cmd_buffer functions to near the cmd_buffer struct 2015-07-15 16:56:29 -07:00
Jason Ekstrand
d22d5f25fc vk: Split command buffer state into its own structure
Everything else in anv_cmd_buffer is the actual guts of the datastructure.
2015-07-15 16:56:29 -07:00
Jason Ekstrand
da4d9f6c7c vk: Move most of the anv_Cmd related stuff to its own file 2015-07-15 16:56:28 -07:00
Jason Ekstrand
d862099198 vk: Pull the guts of anv_cmd_buffer into its own file 2015-07-15 16:56:28 -07:00
Chad Versace
498ae009d3 vk/glsl: Replace raw casts
Needed for upcoming type-safety changes.
2015-07-15 15:51:37 -07:00
Chad Versace
6f140e8af1 vk/meta: Remove raw casts
Needed for upcoming type-safety changes.
2015-07-15 15:51:37 -07:00
Chad Versace
badbf0c94a vk/x11: Remove raw casts
The raw casts in the WSI functions will break the build when the
type-safety changes arrive.
2015-07-15 15:49:10 -07:00
Chad Versace
61a4bfe253 vk: Delete vkDbgSetObjectTag()
Because VkObject is going away.
2015-07-15 15:34:20 -07:00
Jason Ekstrand
e1c78ebe53 vk/device: Remove unneeded checks for NULL 2015-07-15 15:22:32 -07:00
Jason Ekstrand
f4748bff59 vk/device: Provide proper NULL handling in anv_device_free
The Vulkan spec does not specify that the free function provided to
CreateInstance must handle NULL properly so we do it in the wrapper.  If
this ever changes in the spec, we can delete the extra 2 lines.
2015-07-15 15:22:32 -07:00
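The pattern this commit describes can be sketched in a few lines. This is a minimal illustration, not the actual anv_device_free: the client-supplied free callback is not required by the spec to accept NULL, so the wrapper filters NULL itself. All names here are illustrative.

```c
#include <assert.h>
#include <stddef.h>

typedef void (*pfn_free)(void *user_data, void *mem);

struct device {
    pfn_free  client_free;   /* supplied by the client at instance creation */
    void     *user_data;
};

static void device_free(struct device *device, void *mem)
{
    if (mem == NULL)
        return;              /* the "extra 2 lines" the commit refers to */
    device->client_free(device->user_data, mem);
}
```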
Chad Versace
4c8e1e5888 vk: Stop internally calling anv_DestroyObject()
Replace each anv_DestroyObject() with anv_DestroyFoo().

Let vkDestroyObject() live for a while longer for Crucible's sake.
2015-07-15 15:11:16 -07:00
Chad Versace
f5ad06eb78 vk: Fix vkDestroyObject dispatch for VkRenderPass
It called anv_device_free() instead of anv_DestroyRenderPass().
2015-07-15 15:07:41 -07:00
Chad Versace
188f2328de vk: Fix vkCreate/DestroyRenderPass
While updating vkDestroyObject, I discovered that vkDestroyPass reliably
crashes. That hasn't been an issue yet, though, because it is never
called.

In vkCreateRenderPass:
    - Don't allocate empty attachment arrays.
    - Ensure that pointers to empty attachment arrays are NULL.
    - Store VkRenderPassCreateInfo::subpassCount as
      anv_render_pass::subpass_count.

In vkDestroyRenderPass:
    - Fix loop bounds: s/attachment_count/subpass_count/
    - Don't call anv_device_free on null pointers.
2015-07-15 15:07:41 -07:00
Chad Versace
c6270e8044 vk: Refactor create/destroy code for anv_descriptor_set
Define two new functions:
    anv_descriptor_set_create
    anv_descriptor_set_destroy
2015-07-15 14:31:22 -07:00
Chad Versace
365d80a91e vk: Replace some raw casts with safe casts
That is, replace some instances of
    (VkFoo) foo
with
    anv_foo_to_handle(foo)
2015-07-15 14:00:21 -07:00
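A sketch of the safe-cast pattern this commit adopts. The helper names follow the anv_foo_to_handle() convention from the message, but the macro and types below are illustrative, not Mesa's actual definitions: a generated conversion pair replaces raw `(VkFoo) foo` casts that the compiler cannot check.

```c
#include <assert.h>
#include <stdint.h>

typedef struct VkFoo_T *VkFoo;   /* opaque API handle */

struct anv_foo {
    uint32_t id;                 /* driver-internal state */
};

/* Generate a checked conversion pair so call sites never contain a
 * raw cast between the API handle and the driver struct. */
#define DEFINE_HANDLE_CASTS(type, Handle)                          \
    static inline struct type *type##_from_handle(Handle h)        \
    { return (struct type *)h; }                                   \
    static inline Handle type##_to_handle(struct type *p)          \
    { return (Handle)p; }

DEFINE_HANDLE_CASTS(anv_foo, VkFoo)
```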
Chad Versace
7529e7ce86 vk: Correct anv_CreateShaderModule's prototype
s/VkShader/VkShaderModule/

:sigh: I look forward to type-safety.
2015-07-15 13:59:47 -07:00
Chad Versace
8213be790e vk: Define struct anv_image_view, anv_buffer_view
Follow the pattern of anv_attachment_view. We need these structs to
implement the type-safety that arrived in the 0.132 header.
2015-07-15 12:19:29 -07:00
Chad Versace
43241a24bc vk/meta: Fix declared type of a shader module
s/VkShader/VkShaderModule/

I'm looking forward to a type-safe vulkan.h ;)
2015-07-15 11:49:37 -07:00
Chad Versace
94e473c993 vk: Remove struct anv_object
Trivial removal because vkDestroyObject() no longer uses it.
2015-07-15 11:29:43 -07:00
Jason Ekstrand
e375f722a6 vk/device: More documentation on surface state flushing 2015-07-15 11:09:02 -07:00
Connor Abbott
9aabe69028 vk/device: explain why a flush is necessary
Jason found this from experimenting, but the docs give a reasonable
explanation of why it's necessary.
2015-07-14 23:03:19 -07:00
Chad Versace
5f46c4608f vk: Fix indentation of anv_dynamic_cb_state 2015-07-14 18:19:10 -07:00
Chad Versace
0eeba6b80c vk: Add finishmes for VkDescriptorPool
VkDescriptorPool is a stub object. As a consequence, it's impossible to
free descriptor set memory.
2015-07-14 18:19:00 -07:00
Jason Ekstrand
2b5a4dc5f3 vk: Add vulkan-138 and remove vulkan-0.132
Now, 138 is the target and not 132.  Once object destruction is finished,
we can delete 138 as it will be identical to vulkan.h
2015-07-14 17:54:13 -07:00
Jason Ekstrand
1f658bed70 vk/device: Add stub support for command pools
Real support isn't really that far away.  We just need a data structure
with a linked list and a few tests.
2015-07-14 17:40:00 -07:00
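The commit stubs out command pools and notes that real support only needs "a data structure with a linked list". A hedged, self-contained sketch of such a pool follows; none of these names come from the actual driver. The pool tracks every command buffer allocated from it via an intrusive list, so a reset can free them all at once.

```c
#include <assert.h>
#include <stdlib.h>

struct cmd_buffer {
    struct cmd_buffer *next;     /* intrusive link, owned by the pool */
};

struct cmd_pool {
    struct cmd_buffer *head;     /* all buffers allocated from this pool */
};

static struct cmd_buffer *cmd_pool_alloc(struct cmd_pool *pool)
{
    struct cmd_buffer *cb = calloc(1, sizeof(*cb));
    if (cb == NULL)
        return NULL;
    cb->next = pool->head;       /* track it so the pool can free it later */
    pool->head = cb;
    return cb;
}

static void cmd_pool_reset(struct cmd_pool *pool)
{
    while (pool->head != NULL) {
        struct cmd_buffer *cb = pool->head;
        pool->head = cb->next;
        free(cb);
    }
}
```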
Jason Ekstrand
ca7243b54e vk/vulkan.h: Add the stuff for cross-queue resource sharing
We only have one queue, so this is currently a no-op on our implementation.
2015-07-14 17:20:50 -07:00
Jason Ekstrand
553b4434ca vk/vulkan.h: Add a couple of size fields for specialization constants 2015-07-14 17:12:39 -07:00
Jason Ekstrand
e5db209d54 vk/vulkan.h: Move around buffer image granularities 2015-07-14 17:10:37 -07:00
Jason Ekstrand
c7fcfebd5b vk: Add stubs for all the sparse resource stuff 2015-07-14 17:06:11 -07:00
Jason Ekstrand
2a9136feb4 vk/image: Add a stub for the new ImageFormatProperties function
This lets the client query about things like multisample.  We don't do
multisample right now, so I'll let Chad deal with that when he gets to it.
2015-07-14 17:05:30 -07:00
Jason Ekstrand
2c4dc92f40 vk/vulkan.h: Rename FormatInfo to FormatProperties 2015-07-14 17:04:46 -07:00
Jason Ekstrand
d7f44852be vk/vulkan.h: Re-order some #define's 2015-07-14 16:41:39 -07:00
Jason Ekstrand
1fd3bc818a vk/vulkan.h: Rename a function parameter 2015-07-14 16:39:01 -07:00
Jason Ekstrand
2e2f48f840 vk: Remove abreviations 2015-07-14 16:34:31 -07:00
Jason Ekstrand
02db21ae11 vk: Add the new extension/layer enumeration entrypoints 2015-07-14 16:11:21 -07:00
Jason Ekstrand
a463eacb8f vk/vulkan.h: Change maxAnisotropy to a float 2015-07-14 15:04:11 -07:00
Jason Ekstrand
98957b18d2 vk/vulkan.h: Add the VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT flag 2015-07-14 15:03:39 -07:00
Jason Ekstrand
a35811d086 vk/vulkan.h: Rename a couple of function parameters
No functional change.
2015-07-14 15:03:01 -07:00
Jason Ekstrand
55723e97f1 vk: Split the memory requirements/binding functions 2015-07-14 14:59:39 -07:00
Jason Ekstrand
ccb2e5cd62 vk: Make barriers more precise (rev. 133) 2015-07-14 14:50:35 -07:00
Jason Ekstrand
30445f8f7a vk: Split the dynamic state binding function into one per state 2015-07-14 14:26:10 -07:00
Jason Ekstrand
d2c0870ff3 vk/vulkan.h: Rename a function parameter to match 132 2015-07-14 14:11:04 -07:00
Jason Ekstrand
8478350992 vk: Implement Multipass 2015-07-14 11:37:14 -07:00
Jason Ekstrand
68768c40be vk/vulkan.h: Re-arrange some enums and definitions in preparation for 131 2015-07-14 11:32:15 -07:00
Chad Versace
66cbb7f76d vk/0.132: Add vkDestroyRenderPass() 2015-07-14 11:21:31 -07:00
Chad Versace
6d0ed38db5 vk/0.132: Add vkDestroy*View()
vkDestroyColorAttachmentView
vkDestroyDepthStencilView

These functions are not in the 0.132 header, but adding them will help
us attain the type-safety API updates more quickly.
2015-07-14 11:19:22 -07:00
Chad Versace
1ca611cbad vk/0.132: Add vkDestroyCommandBuffer() 2015-07-14 11:11:41 -07:00
Chad Versace
6eec0b186c vk/0.132: Add vkDestroyImageView()
Just declare it in vulkan.h. Jason defined the function earlier
in image.c.
2015-07-14 11:09:14 -07:00
Chad Versace
4b2c5a98f0 vk/0.132: Add vkDestroyBufferView()
Just declare it in vulkan.h. Jason already defined the function
earlier in vulkan.c.
2015-07-14 11:06:57 -07:00
Chad Versace
08f7731f67 vk/0.132: Add vkDestroyFramebuffer() 2015-07-14 10:59:30 -07:00
Chad Versace
0c8456ef1e vk/0.132: Add vkDestroyDynamicDepthStencilState() 2015-07-14 10:54:51 -07:00
Chad Versace
b29c929e8e vk/0.132: Add vkDestroyDynamicColorBlendState() 2015-07-14 10:52:45 -07:00
Chad Versace
5e1737c42f vk/0.132: Add vkDestroyDynamicRasterState() 2015-07-14 10:51:08 -07:00
Chad Versace
d80fea1af6 vk/0.132: Add vkDestroyDynamicViewportState() 2015-07-14 10:42:45 -07:00
Chad Versace
9250e1e9e5 vk/0.132: Add vkDestroyDescriptorPool() 2015-07-14 10:38:22 -07:00
Chad Versace
f925ea31e7 vk/0.132: Add vkDestroyDescriptorSetLayout() 2015-07-14 10:36:49 -07:00
Chad Versace
ec5e2f4992 vk/0.132: Add vkDestroySampler() 2015-07-14 10:34:00 -07:00
Chad Versace
a684198935 vk/0.132: Add vkDestroyPipelineLayout() 2015-07-14 10:29:47 -07:00
Chad Versace
6e5ab5cf1b vk/0.132: Add vkDestroyPipeline() 2015-07-14 10:26:17 -07:00
Chad Versace
114015321e vk/0.132: Add vkDestroyPipelineCache() 2015-07-14 10:19:27 -07:00
Chad Versace
cb57bff36c vk/0.132: Add vkDestroyShader() 2015-07-14 10:16:22 -07:00
Chad Versace
8ae8e14ba7 vk/0.132: Add vkDestroyShaderModule() 2015-07-14 10:13:09 -07:00
Chad Versace
dd67c134ad vk/0.132: Add vkDestroyImage()
We only need to add it to vulkan.h because Jason defined the function
earlier in image.c.
2015-07-14 10:13:00 -07:00
Chad Versace
e18377f435 vk/0.132: Dispatch vkDestroyObject to new destructors
Oops. My recent commits added new destructors, but forgot to teach
vkDestroyObject about them. They are:
  vkDestroyFence
  vkDestroyEvent
  vkDestroySemaphore
  vkDestroyQueryPool
  vkDestroyBuffer
2015-07-14 09:58:22 -07:00
Chad Versace
e93b6d8eb1 vk/0.132: Add vkDestroyBuffer() 2015-07-14 09:47:45 -07:00
Chad Versace
584cb7a16f vk/0.132: Add vkDestroyQueryPool() 2015-07-14 09:44:58 -07:00
Chad Versace
68c7ef502d vk/0.132: Add vkDestroyEvent() 2015-07-14 09:33:47 -07:00
Chad Versace
549070b18c vk/0.132: Add vkDestroySemaphore() 2015-07-14 09:31:34 -07:00
Chad Versace
ebb191f145 vk/0.132: Add vkDestroyFence() 2015-07-14 09:29:35 -07:00
Chad Versace
435ccf4056 vk/0.132: Rename VkDynamic*State types
sed -i -e 's/VkDynamicVpState/VkDynamicViewportState/g' \
       -e 's/VkDynamicRsState/VkDynamicRasterState/g' \
       -e 's/VkDynamicCbState/VkDynamicColorBlendState/g' \
       -e 's/VkDynamicDsState/VkDynamicDepthStencilState/g' \
       $(git ls-files include/vulkan src/vulkan)
2015-07-13 16:19:28 -07:00
Connor Abbott
ffb51fd112 nir/spirv: update to SPIR-V revision 31
This means that now the internal version of glslangValidator is
required. This includes some changes due to the sampler/texture rework,
but doesn't actually enable anything more yet. We also don't yet handle
UBO's correctly, and don't handle matrix stride and row major/column
major yet.
2015-07-13 15:01:01 -07:00
Chad Versace
45f8723f44 vk/0.132: Move VkQueryControlFlags 2015-07-13 13:09:32 -07:00
Chad Versace
180c07ee50 vk/0.132: Move VkImageAspectFlags 2015-07-13 13:08:56 -07:00
Chad Versace
4b05a8cd31 vk/0.132: Move VkCmdBufferOptimizeFlags 2015-07-13 13:08:07 -07:00
Chad Versace
f1cf55fae6 vk/0.132: Move VkWaitEvent 2015-07-13 13:06:53 -07:00
Chad Versace
3112098776 vk/0.132: Move VkCmdBufferLevel 2015-07-13 13:06:33 -07:00
Chad Versace
c633ab5822 vk/0.132: Drop VK_ATTACHMENT_STORE_OP_RESOLVE_MSAA 2015-07-13 13:05:24 -07:00
Chad Versace
8f3b2187e1 vk/0.132: Rename bool32_t -> VkBool32
sed -i 's/bool32_t/VkBool32/g' \
  $(git ls-files src/vulkan include/vulkan)
2015-07-13 13:03:36 -07:00
Chad Versace
77dcfe3c70 vk/0.132: Remove stray typedef 2015-07-13 12:58:17 -07:00
Chad Versace
601d0891a6 vk/0.132: Move VKImageUsageFlags 2015-07-13 12:48:44 -07:00
Chad Versace
829810fa27 vk/0.132: Move VkImageType and VkImageTiling 2015-07-13 11:49:56 -07:00
Chad Versace
17c8232ecf vk/0.132: Import the 0.132 header
Import it as vulkan-0.132.h.
2015-07-13 11:47:12 -07:00
Chad Versace
a158ff55f0 vk/vulkan.h: Remove headers for old API versions
Remove the temporary headers for 0.90 and 0.130.
2015-07-13 11:46:30 -07:00
21 changed files with 4467 additions and 9733 deletions


@@ -68,7 +68,7 @@ extern "C"
 #endif // !defined(VK_NO_STDINT_H)
 typedef uint64_t VkDeviceSize;
-typedef uint32_t bool32_t;
+typedef uint32_t VkBool32;
 typedef uint32_t VkSampleMask;
 typedef uint32_t VkFlags;


@@ -59,8 +59,8 @@ extern "C"
 // ------------------------------------------------------------------------------------------------
 // Objects
-VK_DEFINE_DISP_SUBCLASS_HANDLE(VkDisplayWSI, VkObject)
-VK_DEFINE_DISP_SUBCLASS_HANDLE(VkSwapChainWSI, VkObject)
+VK_DEFINE_HANDLE(VkDisplayWSI)
+VK_DEFINE_HANDLE(VkSwapChainWSI)
 // ------------------------------------------------------------------------------------------------
 // Enumeration constants
@@ -78,10 +78,6 @@ VK_DEFINE_DISP_SUBCLASS_HANDLE(VkSwapChainWSI, VkObject)
 // Extend VkImageLayout enum with extension specific constants
 #define VK_IMAGE_LAYOUT_PRESENT_SOURCE_WSI VK_WSI_LUNARG_ENUM(VkImageLayout, 0)
-// Extend VkObjectType enum for new objects
-#define VK_OBJECT_TYPE_DISPLAY_WSI VK_WSI_LUNARG_ENUM(VkObjectType, 0)
-#define VK_OBJECT_TYPE_SWAP_CHAIN_WSI VK_WSI_LUNARG_ENUM(VkObjectType, 1)
 // ------------------------------------------------------------------------------------------------
 // Enumerations
@@ -158,7 +154,7 @@ typedef struct VkSwapChainImageInfoWSI_
 typedef struct VkPhysicalDeviceQueuePresentPropertiesWSI_
 {
-    bool32_t supportsPresent; // Tells whether the queue supports presenting
+    VkBool32 supportsPresent; // Tells whether the queue supports presenting
 } VkPhysicalDeviceQueuePresentPropertiesWSI;
 typedef struct VkPresentInfoWSI_

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -334,10 +334,8 @@ struct_member_decoration_cb(struct vtn_builder *b,
return;
switch (dec->decoration) {
case SpvDecorationPrecisionLow:
case SpvDecorationPrecisionMedium:
case SpvDecorationPrecisionHigh:
break; /* FIXME: Do nothing with these for now. */
case SpvDecorationRelaxedPrecision:
break; /* FIXME: Do nothing with this for now. */
case SpvDecorationSmooth:
ctx->fields[member].interpolation = INTERP_QUALIFIER_SMOOTH;
break;
@@ -362,11 +360,32 @@ struct_member_decoration_cb(struct vtn_builder *b,
ctx->type->members[member]->is_builtin = true;
ctx->type->members[member]->builtin = dec->literals[0];
break;
case SpvDecorationOffset:
ctx->type->offsets[member] = dec->literals[0];
break;
default:
unreachable("Unhandled member decoration");
}
}
static void
array_decoration_cb(struct vtn_builder *b,
struct vtn_value *val, int member,
const struct vtn_decoration *dec, void *ctx)
{
struct vtn_type *type = val->type;
assert(member == -1);
switch (dec->decoration) {
case SpvDecorationArrayStride:
type->stride = dec->literals[0];
break;
default:
unreachable("Unhandled array type decoration");
}
}
static void
vtn_handle_type(struct vtn_builder *b, SpvOp opcode,
const uint32_t *w, unsigned count)
@@ -421,12 +440,14 @@ vtn_handle_type(struct vtn_builder *b, SpvOp opcode,
val->type->type = glsl_array_type(array_element->type, w[3]);
val->type->array_element = array_element;
val->type->stride = 0;
vtn_foreach_decoration(b, val, array_decoration_cb, NULL);
return;
}
case SpvOpTypeStruct: {
unsigned num_fields = count - 2;
val->type->members = ralloc_array(b, struct vtn_type *, num_fields);
val->type->offsets = ralloc_array(b, unsigned, num_fields);
NIR_VLA(struct glsl_struct_field, fields, count);
for (unsigned i = 0; i < num_fields; i++) {
@@ -479,7 +500,7 @@ vtn_handle_type(struct vtn_builder *b, SpvOp opcode,
val->type = vtn_value(b, w[3], vtn_value_type_type)->type;
return;
case SpvOpTypeSampler: {
case SpvOpTypeImage: {
const struct glsl_type *sampled_type =
vtn_value(b, w[2], vtn_value_type_type)->type->type;
@@ -497,19 +518,21 @@ vtn_handle_type(struct vtn_builder *b, SpvOp opcode,
unreachable("Invalid SPIR-V Sampler dimension");
}
/* TODO: Handle the various texture image/filter options */
(void)w[4];
bool is_shadow = w[4];
bool is_array = w[5];
bool is_shadow = w[6];
assert(w[7] == 0 && "FIXME: Handl multi-sampled textures");
assert(w[6] == 0 && "FIXME: Handl multi-sampled textures");
assert(w[7] == 1 && "FIXME: Add support for non-sampled images");
val->type->type = glsl_sampler_type(dim, is_shadow, is_array,
glsl_get_base_type(sampled_type));
return;
}
case SpvOpTypeSampledImage:
val->type = vtn_value(b, w[2], vtn_value_type_type)->type;
break;
case SpvOpTypeRuntimeArray:
case SpvOpTypeOpaque:
case SpvOpTypeEvent:
@@ -693,10 +716,8 @@ var_decoration_cb(struct vtn_builder *b, struct vtn_value *val, int member,
nir_variable *var = void_var;
switch (dec->decoration) {
case SpvDecorationPrecisionLow:
case SpvDecorationPrecisionMedium:
case SpvDecorationPrecisionHigh:
break; /* FIXME: Do nothing with these for now. */
case SpvDecorationRelaxedPrecision:
break; /* FIXME: Do nothing with this for now. */
case SpvDecorationSmooth:
var->data.interpolation = INTERP_QUALIFIER_SMOOTH;
break;
@@ -758,9 +779,6 @@ var_decoration_cb(struct vtn_builder *b, struct vtn_value *val, int member,
case SpvDecorationRowMajor:
case SpvDecorationColMajor:
case SpvDecorationGLSLShared:
case SpvDecorationGLSLStd140:
case SpvDecorationGLSLStd430:
case SpvDecorationGLSLPacked:
case SpvDecorationPatch:
case SpvDecorationRestrict:
case SpvDecorationAliased:
@@ -773,9 +791,7 @@ var_decoration_cb(struct vtn_builder *b, struct vtn_value *val, int member,
case SpvDecorationSaturatedConversion:
case SpvDecorationStream:
case SpvDecorationOffset:
case SpvDecorationAlignment:
case SpvDecorationXfbBuffer:
case SpvDecorationStride:
case SpvDecorationFuncParamAttr:
case SpvDecorationFPRoundingMode:
case SpvDecorationFPFastMathMode:
@@ -1118,7 +1134,6 @@ vtn_handle_variables(struct vtn_builder *b, SpvOp opcode,
case SpvStorageClassWorkgroupLocal:
case SpvStorageClassWorkgroupGlobal:
case SpvStorageClassGeneric:
case SpvStorageClassPrivate:
case SpvStorageClassAtomicCounter:
default:
unreachable("Unhandled variable storage class");
@@ -1270,10 +1285,9 @@ vtn_handle_variables(struct vtn_builder *b, SpvOp opcode,
break;
}
case SpvOpVariableArray:
case SpvOpCopyMemorySized:
case SpvOpArrayLength:
case SpvOpImagePointer:
case SpvOpImageTexelPointer:
default:
unreachable("Unhandled opcode");
}
@@ -1342,31 +1356,24 @@ vtn_handle_texture(struct vtn_builder *b, SpvOp opcode,
nir_tex_src srcs[8]; /* 8 should be enough */
nir_tex_src *p = srcs;
unsigned idx = 4;
unsigned coord_components = 0;
switch (opcode) {
case SpvOpTextureSample:
case SpvOpTextureSampleDref:
case SpvOpTextureSampleLod:
case SpvOpTextureSampleProj:
case SpvOpTextureSampleGrad:
case SpvOpTextureSampleOffset:
case SpvOpTextureSampleProjLod:
case SpvOpTextureSampleProjGrad:
case SpvOpTextureSampleLodOffset:
case SpvOpTextureSampleProjOffset:
case SpvOpTextureSampleGradOffset:
case SpvOpTextureSampleProjLodOffset:
case SpvOpTextureSampleProjGradOffset:
case SpvOpTextureFetchTexelLod:
case SpvOpTextureFetchTexelOffset:
case SpvOpTextureFetchSample:
case SpvOpTextureFetchTexel:
case SpvOpTextureGather:
case SpvOpTextureGatherOffset:
case SpvOpTextureGatherOffsets:
case SpvOpTextureQueryLod: {
case SpvOpImageSampleImplicitLod:
case SpvOpImageSampleExplicitLod:
case SpvOpImageSampleDrefImplicitLod:
case SpvOpImageSampleDrefExplicitLod:
case SpvOpImageSampleProjImplicitLod:
case SpvOpImageSampleProjExplicitLod:
case SpvOpImageSampleProjDrefImplicitLod:
case SpvOpImageSampleProjDrefExplicitLod:
case SpvOpImageFetch:
case SpvOpImageGather:
case SpvOpImageDrefGather:
case SpvOpImageQueryLod: {
/* All these types have the coordinate as their first real argument */
struct vtn_ssa_value *coord = vtn_ssa_value(b, w[4]);
struct vtn_ssa_value *coord = vtn_ssa_value(b, w[idx++]);
coord_components = glsl_get_vector_elements(coord->type);
p->src = nir_src_for_ssa(coord->def);
p->src_type = nir_tex_src_coord;
@@ -1380,43 +1387,36 @@ vtn_handle_texture(struct vtn_builder *b, SpvOp opcode,
nir_texop texop;
switch (opcode) {
case SpvOpTextureSample:
case SpvOpImageSampleImplicitLod:
texop = nir_texop_tex;
if (count == 6) {
texop = nir_texop_txb;
*p++ = vtn_tex_src(b, w[5], nir_tex_src_bias);
}
break;
case SpvOpTextureSampleDref:
case SpvOpTextureSampleLod:
case SpvOpTextureSampleProj:
case SpvOpTextureSampleGrad:
case SpvOpTextureSampleOffset:
case SpvOpTextureSampleProjLod:
case SpvOpTextureSampleProjGrad:
case SpvOpTextureSampleLodOffset:
case SpvOpTextureSampleProjOffset:
case SpvOpTextureSampleGradOffset:
case SpvOpTextureSampleProjLodOffset:
case SpvOpTextureSampleProjGradOffset:
case SpvOpTextureFetchTexelLod:
case SpvOpTextureFetchTexelOffset:
case SpvOpTextureFetchSample:
case SpvOpTextureFetchTexel:
case SpvOpTextureGather:
case SpvOpTextureGatherOffset:
case SpvOpTextureGatherOffsets:
case SpvOpTextureQuerySizeLod:
case SpvOpTextureQuerySize:
case SpvOpTextureQueryLod:
case SpvOpTextureQueryLevels:
case SpvOpTextureQuerySamples:
case SpvOpImageSampleExplicitLod:
case SpvOpImageSampleDrefImplicitLod:
case SpvOpImageSampleDrefExplicitLod:
case SpvOpImageSampleProjImplicitLod:
case SpvOpImageSampleProjExplicitLod:
case SpvOpImageSampleProjDrefImplicitLod:
case SpvOpImageSampleProjDrefExplicitLod:
case SpvOpImageFetch:
case SpvOpImageGather:
case SpvOpImageDrefGather:
case SpvOpImageQuerySizeLod:
case SpvOpImageQuerySize:
case SpvOpImageQueryLod:
case SpvOpImageQueryLevels:
case SpvOpImageQuerySamples:
default:
unreachable("Unhandled opcode");
}
/* From now on, the remaining sources are "Optional Image Operands." */
if (idx < count) {
/* XXX handle these (bias, lod, etc.) */
assert(0);
}
nir_tex_instr *instr = nir_tex_instr_create(b->shader, p - srcs);
const struct glsl_type *sampler_type = nir_deref_tail(&sampler->deref)->type;
@@ -1742,7 +1742,8 @@ vtn_handle_alu(struct vtn_builder *b, SpvOp opcode,
case SpvOpShiftRightArithmetic: op = nir_op_ishr; break;
case SpvOpShiftLeftLogical: op = nir_op_ishl; break;
case SpvOpLogicalOr: op = nir_op_ior; break;
case SpvOpLogicalXor: op = nir_op_ixor; break;
case SpvOpLogicalEqual: op = nir_op_ieq; break;
case SpvOpLogicalNotEqual: op = nir_op_ine; break;
case SpvOpLogicalAnd: op = nir_op_iand; break;
case SpvOpBitwiseOr: op = nir_op_ior; break;
case SpvOpBitwiseXor: op = nir_op_ixor; break;
@@ -2200,11 +2201,19 @@ vtn_handle_preamble_instruction(struct vtn_builder *b, SpvOp opcode,
switch (opcode) {
case SpvOpSource:
case SpvOpSourceExtension:
case SpvOpCompileFlag:
case SpvOpExtension:
/* Unhandled, but these are for debug so that's ok. */
break;
case SpvOpCapability:
/*
* TODO properly handle these and give a real error if asking for too
* much.
*/
assert(w[1] == SpvCapabilityMatrix ||
w[1] == SpvCapabilityShader);
break;
case SpvOpExtInstImport:
vtn_handle_extension(b, opcode, w, count);
break;
@@ -2221,7 +2230,10 @@ vtn_handle_preamble_instruction(struct vtn_builder *b, SpvOp opcode,
break;
case SpvOpExecutionMode:
unreachable("Execution modes not yet implemented");
/*
* TODO handle these - for Vulkan OriginUpperLeft is always set for
* fragment shaders, so we can ignore this for now
*/
break;
case SpvOpString:
@@ -2254,7 +2266,9 @@ vtn_handle_preamble_instruction(struct vtn_builder *b, SpvOp opcode,
case SpvOpTypeFloat:
case SpvOpTypeVector:
case SpvOpTypeMatrix:
case SpvOpTypeImage:
case SpvOpTypeSampler:
case SpvOpTypeSampledImage:
case SpvOpTypeArray:
case SpvOpTypeRuntimeArray:
case SpvOpTypeStruct:
@@ -2274,8 +2288,6 @@ vtn_handle_preamble_instruction(struct vtn_builder *b, SpvOp opcode,
case SpvOpConstant:
case SpvOpConstantComposite:
case SpvOpConstantSampler:
case SpvOpConstantNullPointer:
case SpvOpConstantNullObject:
case SpvOpSpecConstantTrue:
case SpvOpSpecConstantFalse:
case SpvOpSpecConstant:
@@ -2422,7 +2434,6 @@ vtn_handle_body_instruction(struct vtn_builder *b, SpvOp opcode,
break;
case SpvOpVariable:
case SpvOpVariableArray:
case SpvOpLoad:
case SpvOpStore:
case SpvOpCopyMemory:
@@ -2430,7 +2441,7 @@ vtn_handle_body_instruction(struct vtn_builder *b, SpvOp opcode,
case SpvOpAccessChain:
case SpvOpInBoundsAccessChain:
case SpvOpArrayLength:
case SpvOpImagePointer:
case SpvOpImageTexelPointer:
vtn_handle_variables(b, opcode, w, count);
break;
@@ -2438,31 +2449,22 @@ vtn_handle_body_instruction(struct vtn_builder *b, SpvOp opcode,
vtn_handle_function_call(b, opcode, w, count);
break;
case SpvOpTextureSample:
case SpvOpTextureSampleDref:
case SpvOpTextureSampleLod:
case SpvOpTextureSampleProj:
case SpvOpTextureSampleGrad:
case SpvOpTextureSampleOffset:
case SpvOpTextureSampleProjLod:
case SpvOpTextureSampleProjGrad:
case SpvOpTextureSampleLodOffset:
case SpvOpTextureSampleProjOffset:
case SpvOpTextureSampleGradOffset:
case SpvOpTextureSampleProjLodOffset:
case SpvOpTextureSampleProjGradOffset:
case SpvOpTextureFetchTexelLod:
case SpvOpTextureFetchTexelOffset:
case SpvOpTextureFetchSample:
case SpvOpTextureFetchTexel:
case SpvOpTextureGather:
case SpvOpTextureGatherOffset:
case SpvOpTextureGatherOffsets:
case SpvOpTextureQuerySizeLod:
case SpvOpTextureQuerySize:
case SpvOpTextureQueryLod:
case SpvOpTextureQueryLevels:
case SpvOpTextureQuerySamples:
case SpvOpImageSampleImplicitLod:
case SpvOpImageSampleExplicitLod:
case SpvOpImageSampleDrefImplicitLod:
case SpvOpImageSampleDrefExplicitLod:
case SpvOpImageSampleProjImplicitLod:
case SpvOpImageSampleProjExplicitLod:
case SpvOpImageSampleProjDrefImplicitLod:
case SpvOpImageSampleProjDrefExplicitLod:
case SpvOpImageFetch:
case SpvOpImageGather:
case SpvOpImageDrefGather:
case SpvOpImageQuerySizeLod:
case SpvOpImageQuerySize:
case SpvOpImageQueryLod:
case SpvOpImageQueryLevels:
case SpvOpImageQuerySamples:
vtn_handle_texture(b, opcode, w, count);
break;
@@ -2511,7 +2513,8 @@ vtn_handle_body_instruction(struct vtn_builder *b, SpvOp opcode,
case SpvOpShiftRightArithmetic:
case SpvOpShiftLeftLogical:
case SpvOpLogicalOr:
case SpvOpLogicalXor:
case SpvOpLogicalEqual:
case SpvOpLogicalNotEqual:
case SpvOpLogicalAnd:
case SpvOpBitwiseOr:
case SpvOpBitwiseXor:


@@ -57,6 +57,8 @@ libvulkan_la_SOURCES = \
 	private.h \
 	gem.c \
 	device.c \
+	anv_cmd_buffer.c \
+	anv_cmd_emit.c \
 	aub.c \
 	allocator.c \
 	util.c \

src/vulkan/anv_cmd_buffer.c (new file, 706 lines)

@@ -0,0 +1,706 @@
/*
* Copyright © 2015 Intel Corporation
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*/
#include <assert.h>
#include <stdbool.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include "private.h"
/** \file anv_cmd_buffer.c
*
* This file contains functions related to anv_cmd_buffer as a data
* structure. This involves everything required to create and destroy
* the actual batch buffers as well as link them together and handle
* relocations and surface state. It specifically does *not* contain any
* handling of actual vkCmd calls beyond vkCmdExecuteCommands.
*/
/*-----------------------------------------------------------------------*
* Functions related to anv_reloc_list
*-----------------------------------------------------------------------*/
VkResult
anv_reloc_list_init(struct anv_reloc_list *list, struct anv_device *device)
{
list->num_relocs = 0;
list->array_length = 256;
list->relocs =
anv_device_alloc(device, list->array_length * sizeof(*list->relocs), 8,
VK_SYSTEM_ALLOC_TYPE_INTERNAL);
if (list->relocs == NULL)
return vk_error(VK_ERROR_OUT_OF_HOST_MEMORY);
list->reloc_bos =
anv_device_alloc(device, list->array_length * sizeof(*list->reloc_bos), 8,
VK_SYSTEM_ALLOC_TYPE_INTERNAL);
if (list->reloc_bos == NULL) {
anv_device_free(device, list->relocs);
return vk_error(VK_ERROR_OUT_OF_HOST_MEMORY);
}
return VK_SUCCESS;
}
void
anv_reloc_list_finish(struct anv_reloc_list *list, struct anv_device *device)
{
anv_device_free(device, list->relocs);
anv_device_free(device, list->reloc_bos);
}
static VkResult
anv_reloc_list_grow(struct anv_reloc_list *list, struct anv_device *device,
size_t num_additional_relocs)
{
if (list->num_relocs + num_additional_relocs <= list->array_length)
return VK_SUCCESS;
size_t new_length = list->array_length * 2;
while (new_length < list->num_relocs + num_additional_relocs)
new_length *= 2;
struct drm_i915_gem_relocation_entry *new_relocs =
anv_device_alloc(device, new_length * sizeof(*list->relocs), 8,
VK_SYSTEM_ALLOC_TYPE_INTERNAL);
if (new_relocs == NULL)
return vk_error(VK_ERROR_OUT_OF_HOST_MEMORY);
struct anv_bo **new_reloc_bos =
anv_device_alloc(device, new_length * sizeof(*list->reloc_bos), 8,
VK_SYSTEM_ALLOC_TYPE_INTERNAL);
if (new_reloc_bos == NULL) {
anv_device_free(device, new_relocs);
return vk_error(VK_ERROR_OUT_OF_HOST_MEMORY);
}
memcpy(new_relocs, list->relocs, list->num_relocs * sizeof(*list->relocs));
memcpy(new_reloc_bos, list->reloc_bos,
list->num_relocs * sizeof(*list->reloc_bos));
anv_device_free(device, list->relocs);
anv_device_free(device, list->reloc_bos);
list->relocs = new_relocs;
list->reloc_bos = new_reloc_bos;
return VK_SUCCESS;
}
uint64_t
anv_reloc_list_add(struct anv_reloc_list *list, struct anv_device *device,
uint32_t offset, struct anv_bo *target_bo, uint32_t delta)
{
struct drm_i915_gem_relocation_entry *entry;
int index;
anv_reloc_list_grow(list, device, 1);
/* TODO: Handle failure */
/* XXX: Can we use I915_EXEC_HANDLE_LUT? */
index = list->num_relocs++;
list->reloc_bos[index] = target_bo;
entry = &list->relocs[index];
entry->target_handle = target_bo->gem_handle;
entry->delta = delta;
entry->offset = offset;
entry->presumed_offset = target_bo->offset;
entry->read_domains = 0;
entry->write_domain = 0;
return target_bo->offset + delta;
}
static void
anv_reloc_list_append(struct anv_reloc_list *list, struct anv_device *device,
struct anv_reloc_list *other, uint32_t offset)
{
anv_reloc_list_grow(list, device, other->num_relocs);
/* TODO: Handle failure */
memcpy(&list->relocs[list->num_relocs], &other->relocs[0],
other->num_relocs * sizeof(other->relocs[0]));
memcpy(&list->reloc_bos[list->num_relocs], &other->reloc_bos[0],
other->num_relocs * sizeof(other->reloc_bos[0]));
for (uint32_t i = 0; i < other->num_relocs; i++)
list->relocs[i + list->num_relocs].offset += offset;
list->num_relocs += other->num_relocs;
}
/*-----------------------------------------------------------------------*
* Functions related to anv_batch
*-----------------------------------------------------------------------*/
void *
anv_batch_emit_dwords(struct anv_batch *batch, int num_dwords)
{
if (batch->next + num_dwords * 4 > batch->end)
batch->extend_cb(batch, batch->user_data);
void *p = batch->next;
batch->next += num_dwords * 4;
assert(batch->next <= batch->end);
return p;
}
uint64_t
anv_batch_emit_reloc(struct anv_batch *batch,
void *location, struct anv_bo *bo, uint32_t delta)
{
return anv_reloc_list_add(&batch->relocs, batch->device,
location - batch->start, bo, delta);
}
void
anv_batch_emit_batch(struct anv_batch *batch, struct anv_batch *other)
{
uint32_t size, offset;
size = other->next - other->start;
assert(size % 4 == 0);
if (batch->next + size > batch->end)
batch->extend_cb(batch, batch->user_data);
assert(batch->next + size <= batch->end);
memcpy(batch->next, other->start, size);
offset = batch->next - batch->start;
anv_reloc_list_append(&batch->relocs, batch->device,
&other->relocs, offset);
batch->next += size;
}
/*-----------------------------------------------------------------------*
* Functions related to anv_batch_bo
*-----------------------------------------------------------------------*/
static VkResult
anv_batch_bo_create(struct anv_device *device, struct anv_batch_bo **bbo_out)
{
VkResult result;
struct anv_batch_bo *bbo =
anv_device_alloc(device, sizeof(*bbo), 8, VK_SYSTEM_ALLOC_TYPE_INTERNAL);
if (bbo == NULL)
return vk_error(VK_ERROR_OUT_OF_HOST_MEMORY);
bbo->num_relocs = 0;
bbo->prev_batch_bo = NULL;
result = anv_bo_pool_alloc(&device->batch_bo_pool, &bbo->bo);
if (result != VK_SUCCESS) {
anv_device_free(device, bbo);
return result;
}
*bbo_out = bbo;
return VK_SUCCESS;
}
static void
anv_batch_bo_start(struct anv_batch_bo *bbo, struct anv_batch *batch,
size_t batch_padding)
{
batch->next = batch->start = bbo->bo.map;
batch->end = bbo->bo.map + bbo->bo.size - batch_padding;
bbo->first_reloc = batch->relocs.num_relocs;
}
static void
anv_batch_bo_finish(struct anv_batch_bo *bbo, struct anv_batch *batch)
{
/* Round batch up to an even number of dwords. */
if ((batch->next - batch->start) & 4)
anv_batch_emit(batch, GEN8_MI_NOOP);
assert(batch->start == bbo->bo.map);
bbo->length = batch->next - batch->start;
VG(VALGRIND_CHECK_MEM_IS_DEFINED(batch->start, bbo->length));
bbo->num_relocs = batch->relocs.num_relocs - bbo->first_reloc;
}
static void
anv_batch_bo_destroy(struct anv_batch_bo *bbo, struct anv_device *device)
{
anv_bo_pool_free(&device->batch_bo_pool, &bbo->bo);
anv_device_free(device, bbo);
}
/*-----------------------------------------------------------------------*
* Functions related to anv_batch_bo
*-----------------------------------------------------------------------*/
static VkResult
anv_cmd_buffer_chain_batch(struct anv_batch *batch, void *_data)
{
struct anv_cmd_buffer *cmd_buffer = _data;
struct anv_batch_bo *new_bbo, *old_bbo = cmd_buffer->last_batch_bo;
VkResult result = anv_batch_bo_create(cmd_buffer->device, &new_bbo);
if (result != VK_SUCCESS)
return result;
/* We set the end of the batch a little short so we always have room
* for the chaining command. Since we're about to emit the chaining
* command, set the end back where it belongs.
*/
batch->end += GEN8_MI_BATCH_BUFFER_START_length * 4;
assert(batch->end == old_bbo->bo.map + old_bbo->bo.size);
anv_batch_emit(batch, GEN8_MI_BATCH_BUFFER_START,
GEN8_MI_BATCH_BUFFER_START_header,
._2ndLevelBatchBuffer = _1stlevelbatch,
.AddressSpaceIndicator = ASI_PPGTT,
.BatchBufferStartAddress = { &new_bbo->bo, 0 },
);
anv_batch_bo_finish(cmd_buffer->last_batch_bo, batch);
new_bbo->prev_batch_bo = old_bbo;
cmd_buffer->last_batch_bo = new_bbo;
anv_batch_bo_start(new_bbo, batch, GEN8_MI_BATCH_BUFFER_START_length * 4);
return VK_SUCCESS;
}
struct anv_state
anv_cmd_buffer_alloc_surface_state(struct anv_cmd_buffer *cmd_buffer,
uint32_t size, uint32_t alignment)
{
struct anv_state state;
state.offset = align_u32(cmd_buffer->surface_next, alignment);
if (state.offset + size > cmd_buffer->surface_batch_bo->bo.size)
return (struct anv_state) { 0 };
state.map = cmd_buffer->surface_batch_bo->bo.map + state.offset;
state.alloc_size = size;
cmd_buffer->surface_next = state.offset + size;
assert(state.offset + size <= cmd_buffer->surface_batch_bo->bo.size);
return state;
}
struct anv_state
anv_cmd_buffer_alloc_dynamic_state(struct anv_cmd_buffer *cmd_buffer,
uint32_t size, uint32_t alignment)
{
return anv_state_stream_alloc(&cmd_buffer->dynamic_state_stream,
size, alignment);
}
VkResult
anv_cmd_buffer_new_surface_state_bo(struct anv_cmd_buffer *cmd_buffer)
{
struct anv_batch_bo *new_bbo, *old_bbo = cmd_buffer->surface_batch_bo;
/* Finish off the old buffer */
old_bbo->num_relocs =
cmd_buffer->surface_relocs.num_relocs - old_bbo->first_reloc;
old_bbo->length = cmd_buffer->surface_next;
VkResult result = anv_batch_bo_create(cmd_buffer->device, &new_bbo);
if (result != VK_SUCCESS)
return result;
new_bbo->first_reloc = cmd_buffer->surface_relocs.num_relocs;
cmd_buffer->surface_next = 1;
new_bbo->prev_batch_bo = old_bbo;
cmd_buffer->surface_batch_bo = new_bbo;
/* Re-emit state base addresses so we get the new surface state base
* address before we start emitting binding tables etc.
*/
anv_cmd_buffer_emit_state_base_address(cmd_buffer);
/* After re-setting the surface state base address, we have to do some
* cache flushing so that the sampler engine will pick up the new
* SURFACE_STATE objects and binding tables. From the Broadwell PRM,
* Shared Function > 3D Sampler > State > State Caching (page 96):
*
* Coherency with system memory in the state cache, like the texture
* cache is handled partially by software. It is expected that the
* command stream or shader will issue Cache Flush operation or
* Cache_Flush sampler message to ensure that the L1 cache remains
* coherent with system memory.
*
* [...]
*
* Whenever the value of the Dynamic_State_Base_Addr,
* Surface_State_Base_Addr are altered, the L1 state cache must be
* invalidated to ensure the new surface or sampler state is fetched
* from system memory.
*
* The PIPE_CONTROL command has a "State Cache Invalidation Enable" bit
* which, according to the PIPE_CONTROL instruction documentation in the
* Broadwell PRM:
*
* Setting this bit is independent of any other bit in this packet.
* This bit controls the invalidation of the L1 and L2 state caches
* at the top of the pipe i.e. at the parsing time.
*
* Unfortunately, experimentation seems to indicate that state cache
* invalidation through a PIPE_CONTROL does nothing whatsoever with
* regard to surface state and binding tables. Instead, it seems that
* invalidating the texture cache is what is actually needed.
*
* XXX: As far as we have been able to determine through
* experimentation, flushing the texture cache appears to be
* sufficient. The theory here is that all of the sampling/rendering
* units cache the binding table in the texture cache. However, we have
* yet to be able to actually confirm this.
*/
anv_batch_emit(&cmd_buffer->batch, GEN8_PIPE_CONTROL,
.TextureCacheInvalidationEnable = true);
return VK_SUCCESS;
}
VkResult anv_CreateCommandBuffer(
VkDevice _device,
const VkCmdBufferCreateInfo* pCreateInfo,
VkCmdBuffer* pCmdBuffer)
{
ANV_FROM_HANDLE(anv_device, device, _device);
struct anv_cmd_buffer *cmd_buffer;
VkResult result;
assert(pCreateInfo->level == VK_CMD_BUFFER_LEVEL_PRIMARY);
cmd_buffer = anv_device_alloc(device, sizeof(*cmd_buffer), 8,
VK_SYSTEM_ALLOC_TYPE_API_OBJECT);
if (cmd_buffer == NULL)
return vk_error(VK_ERROR_OUT_OF_HOST_MEMORY);
cmd_buffer->device = device;
result = anv_batch_bo_create(device, &cmd_buffer->last_batch_bo);
if (result != VK_SUCCESS)
goto fail;
result = anv_reloc_list_init(&cmd_buffer->batch.relocs, device);
if (result != VK_SUCCESS)
goto fail_batch_bo;
cmd_buffer->batch.device = device;
cmd_buffer->batch.extend_cb = anv_cmd_buffer_chain_batch;
cmd_buffer->batch.user_data = cmd_buffer;
anv_batch_bo_start(cmd_buffer->last_batch_bo, &cmd_buffer->batch,
GEN8_MI_BATCH_BUFFER_START_length * 4);
result = anv_batch_bo_create(device, &cmd_buffer->surface_batch_bo);
if (result != VK_SUCCESS)
goto fail_batch_relocs;
cmd_buffer->surface_batch_bo->first_reloc = 0;
result = anv_reloc_list_init(&cmd_buffer->surface_relocs, device);
if (result != VK_SUCCESS)
goto fail_ss_batch_bo;
/* Start surface_next at 1 so surface offset 0 is invalid. */
cmd_buffer->surface_next = 1;
cmd_buffer->exec2_objects = NULL;
cmd_buffer->exec2_bos = NULL;
cmd_buffer->exec2_array_length = 0;
anv_state_stream_init(&cmd_buffer->surface_state_stream,
&device->surface_state_block_pool);
anv_state_stream_init(&cmd_buffer->dynamic_state_stream,
&device->dynamic_state_block_pool);
anv_cmd_state_init(&cmd_buffer->state);
*pCmdBuffer = anv_cmd_buffer_to_handle(cmd_buffer);
return VK_SUCCESS;
fail_ss_batch_bo:
anv_batch_bo_destroy(cmd_buffer->surface_batch_bo, device);
fail_batch_relocs:
anv_reloc_list_finish(&cmd_buffer->batch.relocs, device);
fail_batch_bo:
anv_batch_bo_destroy(cmd_buffer->last_batch_bo, device);
fail:
anv_device_free(device, cmd_buffer);
return result;
}
VkResult anv_DestroyCommandBuffer(
VkDevice _device,
VkCmdBuffer _cmd_buffer)
{
ANV_FROM_HANDLE(anv_device, device, _device);
ANV_FROM_HANDLE(anv_cmd_buffer, cmd_buffer, _cmd_buffer);
anv_cmd_state_fini(&cmd_buffer->state);
/* Destroy all of the batch buffers */
struct anv_batch_bo *bbo = cmd_buffer->last_batch_bo;
while (bbo) {
struct anv_batch_bo *prev = bbo->prev_batch_bo;
anv_batch_bo_destroy(bbo, device);
bbo = prev;
}
anv_reloc_list_finish(&cmd_buffer->batch.relocs, device);
/* Destroy all of the surface state buffers */
bbo = cmd_buffer->surface_batch_bo;
while (bbo) {
struct anv_batch_bo *prev = bbo->prev_batch_bo;
anv_batch_bo_destroy(bbo, device);
bbo = prev;
}
anv_reloc_list_finish(&cmd_buffer->surface_relocs, device);
anv_state_stream_finish(&cmd_buffer->surface_state_stream);
anv_state_stream_finish(&cmd_buffer->dynamic_state_stream);
anv_device_free(device, cmd_buffer->exec2_objects);
anv_device_free(device, cmd_buffer->exec2_bos);
anv_device_free(device, cmd_buffer);
return VK_SUCCESS;
}
static VkResult
anv_cmd_buffer_add_bo(struct anv_cmd_buffer *cmd_buffer,
struct anv_bo *bo,
struct drm_i915_gem_relocation_entry *relocs,
size_t num_relocs)
{
struct drm_i915_gem_exec_object2 *obj;
if (bo->index < cmd_buffer->exec2_bo_count &&
cmd_buffer->exec2_bos[bo->index] == bo)
return VK_SUCCESS;
if (cmd_buffer->exec2_bo_count >= cmd_buffer->exec2_array_length) {
uint32_t new_len = cmd_buffer->exec2_objects ?
cmd_buffer->exec2_array_length * 2 : 64;
struct drm_i915_gem_exec_object2 *new_objects =
anv_device_alloc(cmd_buffer->device, new_len * sizeof(*new_objects),
8, VK_SYSTEM_ALLOC_TYPE_INTERNAL);
if (new_objects == NULL)
return vk_error(VK_ERROR_OUT_OF_HOST_MEMORY);
struct anv_bo **new_bos =
anv_device_alloc(cmd_buffer->device, new_len * sizeof(*new_bos),
8, VK_SYSTEM_ALLOC_TYPE_INTERNAL);
if (new_bos == NULL) {
anv_device_free(cmd_buffer->device, new_objects);
return vk_error(VK_ERROR_OUT_OF_HOST_MEMORY);
}
if (cmd_buffer->exec2_objects) {
memcpy(new_objects, cmd_buffer->exec2_objects,
cmd_buffer->exec2_bo_count * sizeof(*new_objects));
memcpy(new_bos, cmd_buffer->exec2_bos,
cmd_buffer->exec2_bo_count * sizeof(*new_bos));
}
cmd_buffer->exec2_objects = new_objects;
cmd_buffer->exec2_bos = new_bos;
cmd_buffer->exec2_array_length = new_len;
}
assert(cmd_buffer->exec2_bo_count < cmd_buffer->exec2_array_length);
bo->index = cmd_buffer->exec2_bo_count++;
obj = &cmd_buffer->exec2_objects[bo->index];
cmd_buffer->exec2_bos[bo->index] = bo;
obj->handle = bo->gem_handle;
obj->relocation_count = 0;
obj->relocs_ptr = 0;
obj->alignment = 0;
obj->offset = bo->offset;
obj->flags = 0;
obj->rsvd1 = 0;
obj->rsvd2 = 0;
if (relocs) {
obj->relocation_count = num_relocs;
obj->relocs_ptr = (uintptr_t) relocs;
}
return VK_SUCCESS;
}
static void
anv_cmd_buffer_add_validate_bos(struct anv_cmd_buffer *cmd_buffer,
struct anv_reloc_list *list)
{
for (size_t i = 0; i < list->num_relocs; i++)
anv_cmd_buffer_add_bo(cmd_buffer, list->reloc_bos[i], NULL, 0);
}
static void
anv_cmd_buffer_process_relocs(struct anv_cmd_buffer *cmd_buffer,
struct anv_reloc_list *list)
{
struct anv_bo *bo;
/* If the kernel supports I915_EXEC_NO_RELOC, it will compare the offset
* in struct drm_i915_gem_exec_object2 against each bo's current offset
* and, if no bos have moved, skip relocation processing altogether. If
* I915_EXEC_NO_RELOC is not supported, the kernel ignores the incoming
* value of offset, so we can set it either way. For that to work, we
* need to make sure all relocs use the same presumed offset.
*/
for (size_t i = 0; i < list->num_relocs; i++) {
bo = list->reloc_bos[i];
if (bo->offset != list->relocs[i].presumed_offset)
cmd_buffer->need_reloc = true;
list->relocs[i].target_handle = bo->index;
}
}
VkResult anv_EndCommandBuffer(
VkCmdBuffer cmdBuffer)
{
ANV_FROM_HANDLE(anv_cmd_buffer, cmd_buffer, cmdBuffer);
struct anv_device *device = cmd_buffer->device;
struct anv_batch *batch = &cmd_buffer->batch;
anv_batch_emit(batch, GEN8_MI_BATCH_BUFFER_END);
anv_batch_bo_finish(cmd_buffer->last_batch_bo, &cmd_buffer->batch);
cmd_buffer->surface_batch_bo->num_relocs =
cmd_buffer->surface_relocs.num_relocs - cmd_buffer->surface_batch_bo->first_reloc;
cmd_buffer->surface_batch_bo->length = cmd_buffer->surface_next;
cmd_buffer->exec2_bo_count = 0;
cmd_buffer->need_reloc = false;
/* Lock for access to bo->index. */
pthread_mutex_lock(&device->mutex);
/* Add surface state bos first so we can add them with their relocs. */
for (struct anv_batch_bo *bbo = cmd_buffer->surface_batch_bo;
bbo != NULL; bbo = bbo->prev_batch_bo) {
anv_cmd_buffer_add_bo(cmd_buffer, &bbo->bo,
&cmd_buffer->surface_relocs.relocs[bbo->first_reloc],
bbo->num_relocs);
}
/* Add all of the BOs referenced by surface state */
anv_cmd_buffer_add_validate_bos(cmd_buffer, &cmd_buffer->surface_relocs);
/* Add all but the first batch BO */
struct anv_batch_bo *batch_bo = cmd_buffer->last_batch_bo;
while (batch_bo->prev_batch_bo) {
anv_cmd_buffer_add_bo(cmd_buffer, &batch_bo->bo,
&batch->relocs.relocs[batch_bo->first_reloc],
batch_bo->num_relocs);
batch_bo = batch_bo->prev_batch_bo;
}
/* Add everything referenced by the batches */
anv_cmd_buffer_add_validate_bos(cmd_buffer, &batch->relocs);
/* Add the first batch bo last */
assert(batch_bo->prev_batch_bo == NULL && batch_bo->first_reloc == 0);
anv_cmd_buffer_add_bo(cmd_buffer, &batch_bo->bo,
&batch->relocs.relocs[batch_bo->first_reloc],
batch_bo->num_relocs);
assert(batch_bo->bo.index == cmd_buffer->exec2_bo_count - 1);
anv_cmd_buffer_process_relocs(cmd_buffer, &cmd_buffer->surface_relocs);
anv_cmd_buffer_process_relocs(cmd_buffer, &batch->relocs);
cmd_buffer->execbuf.buffers_ptr = (uintptr_t) cmd_buffer->exec2_objects;
cmd_buffer->execbuf.buffer_count = cmd_buffer->exec2_bo_count;
cmd_buffer->execbuf.batch_start_offset = 0;
cmd_buffer->execbuf.batch_len = batch->next - batch->start;
cmd_buffer->execbuf.cliprects_ptr = 0;
cmd_buffer->execbuf.num_cliprects = 0;
cmd_buffer->execbuf.DR1 = 0;
cmd_buffer->execbuf.DR4 = 0;
cmd_buffer->execbuf.flags = I915_EXEC_HANDLE_LUT;
if (!cmd_buffer->need_reloc)
cmd_buffer->execbuf.flags |= I915_EXEC_NO_RELOC;
cmd_buffer->execbuf.flags |= I915_EXEC_RENDER;
cmd_buffer->execbuf.rsvd1 = device->context_id;
cmd_buffer->execbuf.rsvd2 = 0;
pthread_mutex_unlock(&device->mutex);
return VK_SUCCESS;
}
VkResult anv_ResetCommandBuffer(
VkCmdBuffer cmdBuffer,
VkCmdBufferResetFlags flags)
{
ANV_FROM_HANDLE(anv_cmd_buffer, cmd_buffer, cmdBuffer);
/* Delete all but the first batch bo */
while (cmd_buffer->last_batch_bo->prev_batch_bo) {
struct anv_batch_bo *prev = cmd_buffer->last_batch_bo->prev_batch_bo;
anv_batch_bo_destroy(cmd_buffer->last_batch_bo, cmd_buffer->device);
cmd_buffer->last_batch_bo = prev;
}
assert(cmd_buffer->last_batch_bo->prev_batch_bo == NULL);
cmd_buffer->batch.relocs.num_relocs = 0;
anv_batch_bo_start(cmd_buffer->last_batch_bo, &cmd_buffer->batch,
GEN8_MI_BATCH_BUFFER_START_length * 4);
/* Delete all but the first batch bo */
while (cmd_buffer->surface_batch_bo->prev_batch_bo) {
struct anv_batch_bo *prev = cmd_buffer->surface_batch_bo->prev_batch_bo;
anv_batch_bo_destroy(cmd_buffer->surface_batch_bo, cmd_buffer->device);
cmd_buffer->surface_batch_bo = prev;
}
assert(cmd_buffer->surface_batch_bo->prev_batch_bo == NULL);
cmd_buffer->surface_next = 1;
cmd_buffer->surface_relocs.num_relocs = 0;
anv_cmd_state_fini(&cmd_buffer->state);
anv_cmd_state_init(&cmd_buffer->state);
return VK_SUCCESS;
}
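The exec2 bookkeeping in anv_cmd_buffer_add_bo above deduplicates BOs by index and grows its arrays geometrically (starting at 64 entries, then doubling). A minimal, self-contained sketch of that pattern — the `toy_*` names are invented for illustration, not the driver's actual types:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-ins for the driver's bo/exec2 bookkeeping. */
struct toy_bo { uint32_t index; };

struct toy_exec_list {
   struct toy_bo **bos;
   uint32_t count;
   uint32_t array_length;
};

/* Add a bo at most once, growing the array geometrically (64, 128, ...),
 * mirroring the dedup-by-index check in anv_cmd_buffer_add_bo. */
static int
toy_exec_list_add(struct toy_exec_list *list, struct toy_bo *bo)
{
   /* Already present?  bo->index is only trusted if it points back at us. */
   if (bo->index < list->count && list->bos[bo->index] == bo)
      return 0;

   if (list->count >= list->array_length) {
      uint32_t new_len = list->bos ? list->array_length * 2 : 64;
      struct toy_bo **new_bos = malloc(new_len * sizeof(*new_bos));
      if (new_bos == NULL)
         return -1;
      if (list->bos) {
         memcpy(new_bos, list->bos, list->count * sizeof(*new_bos));
         free(list->bos);
      }
      list->bos = new_bos;
      list->array_length = new_len;
   }

   bo->index = list->count++;
   list->bos[bo->index] = bo;
   return 0;
}
```

The doubling keeps the amortized cost of an add constant, and the index-points-back check makes re-adding a BO a cheap no-op, which matters because every reloc target is re-validated at submit time.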

src/vulkan/anv_cmd_emit.c (new file, 1222 lines; diff suppressed because it is too large)

@@ -248,9 +248,9 @@ anv_cmd_buffer_dump(struct anv_cmd_buffer *cmd_buffer)
 if (writer == NULL)
    return;
-aub_bos = malloc(cmd_buffer->bo_count * sizeof(aub_bos[0]));
+aub_bos = malloc(cmd_buffer->exec2_bo_count * sizeof(aub_bos[0]));
 offset = writer->offset;
-for (uint32_t i = 0; i < cmd_buffer->bo_count; i++) {
+for (uint32_t i = 0; i < cmd_buffer->exec2_bo_count; i++) {
 bo = cmd_buffer->exec2_bos[i];
 if (bo->map)
    aub_bos[i].map = bo->map;
@@ -282,9 +282,9 @@ anv_cmd_buffer_dump(struct anv_cmd_buffer *cmd_buffer)
    bbo->num_relocs, aub_bos);
 }
-for (uint32_t i = 0; i < cmd_buffer->bo_count; i++) {
+for (uint32_t i = 0; i < cmd_buffer->exec2_bo_count; i++) {
 bo = cmd_buffer->exec2_bos[i];
-if (i == cmd_buffer->bo_count - 1) {
+if (i == cmd_buffer->exec2_bo_count - 1) {
 assert(bo == &first_bbo->bo);
 aub_write_trace_block(writer, AUB_TRACE_TYPE_BATCH,
    aub_bos[i].relocated,

File diff suppressed because it is too large

@@ -215,6 +215,19 @@ anv_format_for_vk_format(VkFormat format)
 return &anv_formats[format];
 }
+bool
+anv_is_vk_format_depth_or_stencil(VkFormat format)
+{
+   const struct anv_format *format_info =
+      anv_format_for_vk_format(format);
+
+   if (format_info->depth_format != UNSUPPORTED &&
+       format_info->depth_format != 0)
+      return true;
+
+   return format_info->has_stencil;
+}
+
 // Format capabilities
 struct surface_format_info {
@@ -232,20 +245,20 @@ struct surface_format_info {
 extern const struct surface_format_info surface_formats[];
-VkResult anv_validate_GetPhysicalDeviceFormatInfo(
+VkResult anv_validate_GetPhysicalDeviceFormatProperties(
 VkPhysicalDevice physicalDevice,
 VkFormat _format,
-VkFormatProperties* pFormatInfo)
+VkFormatProperties* pFormatProperties)
 {
 const struct anv_format *format = anv_format_for_vk_format(_format);
-fprintf(stderr, "vkGetFormatInfo(%s)\n", format->name);
-return anv_GetPhysicalDeviceFormatInfo(physicalDevice, _format, pFormatInfo);
+fprintf(stderr, "vkGetFormatProperties(%s)\n", format->name);
+return anv_GetPhysicalDeviceFormatProperties(physicalDevice, _format, pFormatProperties);
 }
-VkResult anv_GetPhysicalDeviceFormatInfo(
+VkResult anv_GetPhysicalDeviceFormatProperties(
 VkPhysicalDevice physicalDevice,
 VkFormat _format,
-VkFormatProperties* pFormatInfo)
+VkFormatProperties* pFormatProperties)
 {
 ANV_FROM_HANDLE(anv_physical_device, physical_device, physicalDevice);
 const struct surface_format_info *info;
@@ -283,14 +296,39 @@ VkResult anv_GetPhysicalDeviceFormatInfo(
 linear |= VK_FORMAT_FEATURE_VERTEX_BUFFER_BIT;
 }
-pFormatInfo->linearTilingFeatures = linear;
-pFormatInfo->optimalTilingFeatures = tiled;
+pFormatProperties->linearTilingFeatures = linear;
+pFormatProperties->optimalTilingFeatures = tiled;
 return VK_SUCCESS;
 unsupported:
-pFormatInfo->linearTilingFeatures = 0;
-pFormatInfo->optimalTilingFeatures = 0;
+pFormatProperties->linearTilingFeatures = 0;
+pFormatProperties->optimalTilingFeatures = 0;
 return VK_SUCCESS;
 }
+
+VkResult anv_GetPhysicalDeviceImageFormatProperties(
+    VkPhysicalDevice                            physicalDevice,
+    VkFormat                                    format,
+    VkImageType                                 type,
+    VkImageTiling                               tiling,
+    VkImageUsageFlags                           usage,
+    VkImageFormatProperties*                    pImageFormatProperties)
+{
+   /* TODO: We should do something here. Chad? */
+   stub_return(VK_UNSUPPORTED);
+}
+
+VkResult anv_GetPhysicalDeviceSparseImageFormatProperties(
+    VkPhysicalDevice                            physicalDevice,
+    VkFormat                                    format,
+    VkImageType                                 type,
+    uint32_t                                    samples,
+    VkImageUsageFlags                           usage,
+    VkImageTiling                               tiling,
+    uint32_t*                                   pNumProperties,
+    VkSparseImageFormatProperties*              pProperties)
+{
+   stub_return(VK_UNSUPPORTED);
+}


@@ -264,7 +264,7 @@ with open_file(outfname, 'w') as outfile:
 .codeSize = sizeof(_ANV_GLSL_SRC_VAR(__LINE__)), \\
 .pCode = _ANV_GLSL_SRC_VAR(__LINE__), \\
 }; \\
-vkCreateShaderModule((VkDevice) device, \\
+vkCreateShaderModule(anv_device_to_handle(device), \\
 &__shader_create_info, &__module); \\
 __module; \\
 })


@@ -326,22 +326,22 @@ VkResult anv_GetImageSubresourceLayout(
 }
 void
-anv_surface_view_destroy(struct anv_device *device,
-                         struct anv_surface_view *view)
+anv_surface_view_fini(struct anv_device *device,
+                      struct anv_surface_view *view)
 {
 anv_state_pool_free(&device->surface_state_pool, view->surface_state);
-anv_device_free(device, view);
 }
 void
-anv_image_view_init(struct anv_surface_view *view,
+anv_image_view_init(struct anv_image_view *iview,
 struct anv_device *device,
 const VkImageViewCreateInfo* pCreateInfo,
 struct anv_cmd_buffer *cmd_buffer)
 {
 ANV_FROM_HANDLE(anv_image, image, pCreateInfo->image);
 const VkImageSubresourceRange *range = &pCreateInfo->subresourceRange;
+struct anv_surface_view *view = &iview->view;
 struct anv_surface *surface;
 const struct anv_format *format_info =
@@ -539,7 +539,7 @@ anv_CreateImageView(VkDevice _device,
 VkImageView *pView)
 {
 ANV_FROM_HANDLE(anv_device, device, _device);
-struct anv_surface_view *view;
+struct anv_image_view *view;
 view = anv_device_alloc(device, sizeof(*view), 8,
 VK_SYSTEM_ALLOC_TYPE_API_OBJECT);
@@ -548,39 +548,41 @@ anv_CreateImageView(VkDevice _device,
 anv_image_view_init(view, device, pCreateInfo, NULL);
-*pView = (VkImageView) view;
+*pView = anv_image_view_to_handle(view);
 return VK_SUCCESS;
 }
 VkResult
-anv_DestroyImageView(VkDevice _device, VkImageView _view)
+anv_DestroyImageView(VkDevice _device, VkImageView _iview)
 {
 ANV_FROM_HANDLE(anv_device, device, _device);
+ANV_FROM_HANDLE(anv_image_view, iview, _iview);
-anv_surface_view_destroy(device, (struct anv_surface_view *)_view);
+anv_surface_view_fini(device, &iview->view);
+anv_device_free(device, iview);
 return VK_SUCCESS;
 }
 void
-anv_color_attachment_view_init(struct anv_surface_view *view,
+anv_color_attachment_view_init(struct anv_color_attachment_view *aview,
 struct anv_device *device,
-const VkColorAttachmentViewCreateInfo* pCreateInfo,
+const VkAttachmentViewCreateInfo* pCreateInfo,
 struct anv_cmd_buffer *cmd_buffer)
 {
 ANV_FROM_HANDLE(anv_image, image, pCreateInfo->image);
+struct anv_surface_view *view = &aview->view;
 struct anv_surface *surface = &image->primary_surface;
 const struct anv_format *format_info =
 anv_format_for_vk_format(pCreateInfo->format);
+aview->base.attachment_type = ANV_ATTACHMENT_VIEW_TYPE_COLOR;
 anv_assert(pCreateInfo->arraySize > 0);
 anv_assert(pCreateInfo->mipLevel < image->levels);
 anv_assert(pCreateInfo->baseArraySlice + pCreateInfo->arraySize <= image->array_size);
 if (pCreateInfo->msaaResolveImage)
    anv_finishme("msaaResolveImage");
 view->bo = image->bo;
 view->offset = image->offset + surface->offset;
 view->format = pCreateInfo->format;
@@ -659,57 +661,17 @@ anv_color_attachment_view_init(struct anv_surface_view *view,
 GEN8_RENDER_SURFACE_STATE_pack(NULL, view->surface_state.map, &surface_state);
 }
-VkResult
-anv_CreateColorAttachmentView(VkDevice _device,
-                              const VkColorAttachmentViewCreateInfo *pCreateInfo,
-                              VkColorAttachmentView *pView)
+static void
+anv_depth_stencil_view_init(struct anv_depth_stencil_view *view,
+                            const VkAttachmentViewCreateInfo *pCreateInfo)
 {
-ANV_FROM_HANDLE(anv_device, device, _device);
-struct anv_surface_view *view;
-assert(pCreateInfo->sType == VK_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO);
-view = anv_device_alloc(device, sizeof(*view), 8,
-                        VK_SYSTEM_ALLOC_TYPE_API_OBJECT);
-if (view == NULL)
-   return vk_error(VK_ERROR_OUT_OF_HOST_MEMORY);
-anv_color_attachment_view_init(view, device, pCreateInfo, NULL);
-*pView = (VkColorAttachmentView) view;
-return VK_SUCCESS;
-}
-VkResult
-anv_DestroyColorAttachmentView(VkDevice _device, VkColorAttachmentView _view)
-{
-ANV_FROM_HANDLE(anv_device, device, _device);
-anv_surface_view_destroy(device, (struct anv_surface_view *)_view);
-return VK_SUCCESS;
-}
-VkResult
-anv_CreateDepthStencilView(VkDevice _device,
-                           const VkDepthStencilViewCreateInfo *pCreateInfo,
-                           VkDepthStencilView *pView)
-{
-ANV_FROM_HANDLE(anv_device, device, _device);
 ANV_FROM_HANDLE(anv_image, image, pCreateInfo->image);
-struct anv_depth_stencil_view *view;
 struct anv_surface *depth_surface = &image->primary_surface;
 struct anv_surface *stencil_surface = &image->stencil_surface;
 const struct anv_format *format =
 anv_format_for_vk_format(image->format);
-assert(pCreateInfo->sType == VK_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO);
-view = anv_device_alloc(device, sizeof(*view), 8,
-                        VK_SYSTEM_ALLOC_TYPE_API_OBJECT);
-if (view == NULL)
-   return vk_error(VK_ERROR_OUT_OF_HOST_MEMORY);
+view->base.attachment_type = ANV_ATTACHMENT_VIEW_TYPE_DEPTH_STENCIL;
 /* XXX: We don't handle any of these */
 anv_assert(pCreateInfo->mipLevel == 0);
@@ -726,17 +688,54 @@ anv_CreateDepthStencilView(VkDevice _device,
 view->stencil_stride = stencil_surface->stride;
 view->stencil_offset = image->offset + stencil_surface->offset;
 view->stencil_qpitch = 0; /* FINISHME: QPitch */
 }
-*pView = anv_depth_stencil_view_to_handle(view);
+}
+
+VkResult
+anv_CreateAttachmentView(VkDevice _device,
+                         const VkAttachmentViewCreateInfo *pCreateInfo,
+                         VkAttachmentView *pView)
+{
+ANV_FROM_HANDLE(anv_device, device, _device);
+
+assert(pCreateInfo->sType == VK_STRUCTURE_TYPE_ATTACHMENT_VIEW_CREATE_INFO);
+
+if (anv_is_vk_format_depth_or_stencil(pCreateInfo->format)) {
+   struct anv_depth_stencil_view *view =
+      anv_device_alloc(device, sizeof(*view), 8,
+                       VK_SYSTEM_ALLOC_TYPE_API_OBJECT);
+   if (view == NULL)
+      return vk_error(VK_ERROR_OUT_OF_HOST_MEMORY);
+
+   anv_depth_stencil_view_init(view, pCreateInfo);
+
+   *pView = anv_attachment_view_to_handle(&view->base);
+} else {
+   struct anv_color_attachment_view *view =
+      anv_device_alloc(device, sizeof(*view), 8,
+                       VK_SYSTEM_ALLOC_TYPE_API_OBJECT);
+   if (view == NULL)
+      return vk_error(VK_ERROR_OUT_OF_HOST_MEMORY);
+
+   anv_color_attachment_view_init(view, device, pCreateInfo, NULL);
+
+   *pView = anv_attachment_view_to_handle(&view->base);
+}
 return VK_SUCCESS;
 }
 VkResult
-anv_DestroyDepthStencilView(VkDevice _device, VkDepthStencilView _view)
+anv_DestroyAttachmentView(VkDevice _device, VkAttachmentView _view)
 {
 ANV_FROM_HANDLE(anv_device, device, _device);
-ANV_FROM_HANDLE(anv_depth_stencil_view, view, _view);
+ANV_FROM_HANDLE(anv_attachment_view, view, _view);
+
+if (view->attachment_type == ANV_ATTACHMENT_VIEW_TYPE_COLOR) {
+   struct anv_color_attachment_view *aview =
+      (struct anv_color_attachment_view *)view;
+
+   anv_surface_view_fini(device, &aview->view);
+}
 anv_device_free(device, view);
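The hunk above is part of the series replacing raw casts like `(VkImageView) view` with checked handle conversions (`anv_image_view_to_handle`, `ANV_FROM_HANDLE`). A minimal sketch of that type-safety pattern — the `toy_*` names and the single-cast pair are invented for illustration, not the driver's actual macros:

```c
#include <assert.h>
#include <stdint.h>

/* An opaque handle type per object: the API hands out ToyImageView,
 * the driver works with struct toy_image_view.  Because the handle is a
 * distinct pointer type, passing the wrong handle kind is a compile error,
 * unlike a raw uintptr_t/VkObject cast. */
typedef struct toy_image_view_T *ToyImageView; /* opaque to API users */

struct toy_image_view {
   uint32_t id;
};

static inline ToyImageView
toy_image_view_to_handle(struct toy_image_view *view)
{
   return (ToyImageView) view;
}

static inline struct toy_image_view *
toy_image_view_from_handle(ToyImageView handle)
{
   return (struct toy_image_view *) handle;
}
```

The conversion is still a cast underneath, but it is the only place the cast appears, so every call site gets type-checked against the specific handle type.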


@@ -83,8 +83,8 @@ VkResult anv_CreateDmaBufImageINTEL(
 assert(image->extent.height > 0);
 assert(image->extent.depth == 1);
-*pMem = (VkDeviceMemory) mem;
-*pImage = (VkImage) image;
+*pMem = anv_device_memory_to_handle(mem);
+*pImage = anv_image_to_handle(image);
 return VK_SUCCESS;


@@ -36,7 +36,7 @@ anv_device_init_meta_clear_state(struct anv_device *device)
/* We don't use a vertex shader for clearing, but instead build and pass
* the VUEs directly to the rasterization backend.
*/
VkShader fsm = GLSL_VK_SHADER_MODULE(device, FRAGMENT,
VkShaderModule fsm = GLSL_VK_SHADER_MODULE(device, FRAGMENT,
out vec4 f_color;
flat in vec4 v_color;
void main()
@@ -111,23 +111,23 @@ anv_device_init_meta_clear_state(struct anv_device *device)
.pSpecializationInfo = NULL,
},
.pVertexInputState = &vi_create_info,
.pIaState = &(VkPipelineIaStateCreateInfo) {
.sType = VK_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO,
.pInputAssemblyState = &(VkPipelineInputAssemblyStateCreateInfo) {
.sType = VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO,
.topology = VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP,
.primitiveRestartEnable = false,
},
.pRsState = &(VkPipelineRsStateCreateInfo) {
.sType = VK_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO,
.pRasterState = &(VkPipelineRasterStateCreateInfo) {
.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTER_STATE_CREATE_INFO,
.depthClipEnable = true,
.rasterizerDiscardEnable = false,
.fillMode = VK_FILL_MODE_SOLID,
.cullMode = VK_CULL_MODE_NONE,
.frontFace = VK_FRONT_FACE_CCW
},
.pCbState = &(VkPipelineCbStateCreateInfo) {
.sType = VK_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO,
.pColorBlendState = &(VkPipelineColorBlendStateCreateInfo) {
.sType = VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO,
.attachmentCount = 1,
.pAttachments = (VkPipelineCbAttachmentState []) {
.pAttachments = (VkPipelineColorBlendAttachmentState []) {
{ .channelWriteMask = VK_CHANNEL_A_BIT |
VK_CHANNEL_R_BIT | VK_CHANNEL_G_BIT | VK_CHANNEL_B_BIT },
}
@@ -141,7 +141,7 @@ anv_device_init_meta_clear_state(struct anv_device *device)
},
&device->meta_state.clear.pipeline);
anv_DestroyObject(anv_device_to_handle(device), VK_OBJECT_TYPE_SHADER, fs);
anv_DestroyShader(anv_device_to_handle(device), fs);
}
#define NUM_VB_USED 2
@@ -149,16 +149,16 @@ struct anv_saved_state {
struct anv_vertex_binding old_vertex_bindings[NUM_VB_USED];
struct anv_descriptor_set *old_descriptor_set0;
struct anv_pipeline *old_pipeline;
VkDynamicCbState cb_state;
VkDynamicColorBlendState cb_state;
};
static void
anv_cmd_buffer_save(struct anv_cmd_buffer *cmd_buffer,
struct anv_saved_state *state)
{
state->old_pipeline = cmd_buffer->pipeline;
state->old_descriptor_set0 = cmd_buffer->descriptors[0].set;
memcpy(state->old_vertex_bindings, cmd_buffer->vertex_bindings,
state->old_pipeline = cmd_buffer->state.pipeline;
state->old_descriptor_set0 = cmd_buffer->state.descriptors[0].set;
memcpy(state->old_vertex_bindings, cmd_buffer->state.vertex_bindings,
sizeof(state->old_vertex_bindings));
}
@@ -166,14 +166,14 @@ static void
anv_cmd_buffer_restore(struct anv_cmd_buffer *cmd_buffer,
const struct anv_saved_state *state)
{
cmd_buffer->pipeline = state->old_pipeline;
cmd_buffer->descriptors[0].set = state->old_descriptor_set0;
memcpy(cmd_buffer->vertex_bindings, state->old_vertex_bindings,
cmd_buffer->state.pipeline = state->old_pipeline;
cmd_buffer->state.descriptors[0].set = state->old_descriptor_set0;
memcpy(cmd_buffer->state.vertex_bindings, state->old_vertex_bindings,
sizeof(state->old_vertex_bindings));
cmd_buffer->vb_dirty |= (1 << NUM_VB_USED) - 1;
cmd_buffer->dirty |= ANV_CMD_BUFFER_PIPELINE_DIRTY;
cmd_buffer->descriptors_dirty |= VK_SHADER_STAGE_VERTEX_BIT;
cmd_buffer->state.vb_dirty |= (1 << NUM_VB_USED) - 1;
cmd_buffer->state.dirty |= ANV_CMD_BUFFER_PIPELINE_DIRTY;
cmd_buffer->state.descriptors_dirty |= VK_SHADER_STAGE_VERTEX_BIT;
}
struct vue_header {
@@ -194,7 +194,7 @@ meta_emit_clear(struct anv_cmd_buffer *cmd_buffer,
struct clear_instance_data *instance_data)
{
struct anv_device *device = cmd_buffer->device;
struct anv_framebuffer *fb = cmd_buffer->framebuffer;
struct anv_framebuffer *fb = cmd_buffer->state.framebuffer;
struct anv_state state;
uint32_t size;
@@ -233,62 +233,83 @@ meta_emit_clear(struct anv_cmd_buffer *cmd_buffer,
sizeof(vertex_data)
});
-if (cmd_buffer->pipeline != anv_pipeline_from_handle(device->meta_state.clear.pipeline))
+if (cmd_buffer->state.pipeline != anv_pipeline_from_handle(device->meta_state.clear.pipeline))
anv_CmdBindPipeline(anv_cmd_buffer_to_handle(cmd_buffer),
VK_PIPELINE_BIND_POINT_GRAPHICS,
device->meta_state.clear.pipeline);
/* We don't need anything here, only set if not already set. */
-if (cmd_buffer->rs_state == NULL)
-anv_CmdBindDynamicStateObject(anv_cmd_buffer_to_handle(cmd_buffer),
-VK_STATE_BIND_POINT_RASTER,
+if (cmd_buffer->state.rs_state == NULL)
+anv_CmdBindDynamicRasterState(anv_cmd_buffer_to_handle(cmd_buffer),
device->meta_state.shared.rs_state);
-if (cmd_buffer->vp_state == NULL)
-anv_CmdBindDynamicStateObject(anv_cmd_buffer_to_handle(cmd_buffer),
-VK_STATE_BIND_POINT_VIEWPORT,
-cmd_buffer->framebuffer->vp_state);
+if (cmd_buffer->state.vp_state == NULL)
+anv_CmdBindDynamicViewportState(anv_cmd_buffer_to_handle(cmd_buffer),
+cmd_buffer->state.framebuffer->vp_state);
-if (cmd_buffer->ds_state == NULL)
-anv_CmdBindDynamicStateObject(anv_cmd_buffer_to_handle(cmd_buffer),
-VK_STATE_BIND_POINT_DEPTH_STENCIL,
-device->meta_state.shared.ds_state);
+if (cmd_buffer->state.ds_state == NULL)
+anv_CmdBindDynamicDepthStencilState(anv_cmd_buffer_to_handle(cmd_buffer),
+device->meta_state.shared.ds_state);
-if (cmd_buffer->cb_state == NULL)
-anv_CmdBindDynamicStateObject(anv_cmd_buffer_to_handle(cmd_buffer),
-VK_STATE_BIND_POINT_COLOR_BLEND,
-device->meta_state.shared.cb_state);
+if (cmd_buffer->state.cb_state == NULL)
+anv_CmdBindDynamicColorBlendState(anv_cmd_buffer_to_handle(cmd_buffer),
+device->meta_state.shared.cb_state);
anv_CmdDraw(anv_cmd_buffer_to_handle(cmd_buffer), 0, 3, 0, num_instances);
}
void
-anv_cmd_buffer_clear(struct anv_cmd_buffer *cmd_buffer,
-struct anv_render_pass *pass)
+anv_cmd_buffer_clear_attachments(struct anv_cmd_buffer *cmd_buffer,
+struct anv_render_pass *pass,
+const VkClearValue *clear_values)
{
struct anv_saved_state saved_state;
int num_clear_layers = 0;
-struct clear_instance_data instance_data[MAX_RTS];
-for (uint32_t i = 0; i < pass->num_layers; i++) {
-if (pass->layers[i].color_load_op == VK_ATTACHMENT_LOAD_OP_CLEAR) {
-instance_data[num_clear_layers++] = (struct clear_instance_data) {
-.vue_header = {
-.RTAIndex = i,
-.ViewportIndex = 0,
-.PointWidth = 0.0
-},
-.color = pass->layers[i].clear_color,
-};
+for (uint32_t i = 0; i < pass->attachment_count; i++) {
+if (pass->attachments[i].load_op == VK_ATTACHMENT_LOAD_OP_CLEAR) {
+if (anv_is_vk_format_depth_or_stencil(pass->attachments[i].format)) {
+anv_finishme("Can't clear depth-stencil yet");
+continue;
+}
+num_clear_layers++;
+}
+}
+if (num_clear_layers == 0)
+return;
+struct clear_instance_data instance_data[num_clear_layers];
+uint32_t color_attachments[num_clear_layers];
+int layer = 0;
+for (uint32_t i = 0; i < pass->attachment_count; i++) {
+if (pass->attachments[i].load_op == VK_ATTACHMENT_LOAD_OP_CLEAR &&
+!anv_is_vk_format_depth_or_stencil(pass->attachments[i].format)) {
+instance_data[layer] = (struct clear_instance_data) {
+.vue_header = {
+.RTAIndex = i,
+.ViewportIndex = 0,
+.PointWidth = 0.0
+},
+.color = clear_values[i].color,
+};
+color_attachments[layer] = i;
+layer++;
+}
+}
anv_cmd_buffer_save(cmd_buffer, &saved_state);
+struct anv_subpass subpass = {
+.input_count = 0,
+.color_count = num_clear_layers,
+.color_attachments = color_attachments,
+.depth_stencil_attachment = VK_ATTACHMENT_UNUSED,
+};
+anv_cmd_buffer_begin_subpass(cmd_buffer, &subpass);
meta_emit_clear(cmd_buffer, num_clear_layers, instance_data);
/* Restore API state */
@@ -422,23 +443,23 @@ anv_device_init_meta_blit_state(struct anv_device *device)
},
},
.pVertexInputState = &vi_create_info,
-.pIaState = &(VkPipelineIaStateCreateInfo) {
-.sType = VK_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO,
+.pInputAssemblyState = &(VkPipelineInputAssemblyStateCreateInfo) {
+.sType = VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO,
.topology = VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP,
.primitiveRestartEnable = false,
},
-.pRsState = &(VkPipelineRsStateCreateInfo) {
-.sType = VK_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO,
+.pRasterState = &(VkPipelineRasterStateCreateInfo) {
+.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTER_STATE_CREATE_INFO,
.depthClipEnable = true,
.rasterizerDiscardEnable = false,
.fillMode = VK_FILL_MODE_SOLID,
.cullMode = VK_CULL_MODE_NONE,
.frontFace = VK_FRONT_FACE_CCW
},
-.pCbState = &(VkPipelineCbStateCreateInfo) {
-.sType = VK_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO,
+.pColorBlendState = &(VkPipelineColorBlendStateCreateInfo) {
+.sType = VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO,
.attachmentCount = 1,
-.pAttachments = (VkPipelineCbAttachmentState []) {
+.pAttachments = (VkPipelineColorBlendAttachmentState []) {
{ .channelWriteMask = VK_CHANNEL_A_BIT |
VK_CHANNEL_R_BIT | VK_CHANNEL_G_BIT | VK_CHANNEL_B_BIT },
}
@@ -455,8 +476,8 @@ anv_device_init_meta_blit_state(struct anv_device *device)
},
&device->meta_state.blit.pipeline);
-anv_DestroyObject(anv_device_to_handle(device), VK_OBJECT_TYPE_SHADER, vs);
-anv_DestroyObject(anv_device_to_handle(device), VK_OBJECT_TYPE_SHADER, fs);
+anv_DestroyShader(anv_device_to_handle(device), vs);
+anv_DestroyShader(anv_device_to_handle(device), fs);
}
static void
@@ -467,25 +488,22 @@ meta_prepare_blit(struct anv_cmd_buffer *cmd_buffer,
anv_cmd_buffer_save(cmd_buffer, saved_state);
-if (cmd_buffer->pipeline != anv_pipeline_from_handle(device->meta_state.blit.pipeline))
+if (cmd_buffer->state.pipeline != anv_pipeline_from_handle(device->meta_state.blit.pipeline))
anv_CmdBindPipeline(anv_cmd_buffer_to_handle(cmd_buffer),
VK_PIPELINE_BIND_POINT_GRAPHICS,
device->meta_state.blit.pipeline);
/* We don't need anything here, only set if not already set. */
-if (cmd_buffer->rs_state == NULL)
-anv_CmdBindDynamicStateObject(anv_cmd_buffer_to_handle(cmd_buffer),
-VK_STATE_BIND_POINT_RASTER,
+if (cmd_buffer->state.rs_state == NULL)
+anv_CmdBindDynamicRasterState(anv_cmd_buffer_to_handle(cmd_buffer),
device->meta_state.shared.rs_state);
-if (cmd_buffer->ds_state == NULL)
-anv_CmdBindDynamicStateObject(anv_cmd_buffer_to_handle(cmd_buffer),
-VK_STATE_BIND_POINT_DEPTH_STENCIL,
-device->meta_state.shared.ds_state);
+if (cmd_buffer->state.ds_state == NULL)
+anv_CmdBindDynamicDepthStencilState(anv_cmd_buffer_to_handle(cmd_buffer),
+device->meta_state.shared.ds_state);
-saved_state->cb_state = anv_dynamic_cb_state_to_handle(cmd_buffer->cb_state);
-anv_CmdBindDynamicStateObject(anv_cmd_buffer_to_handle(cmd_buffer),
-VK_STATE_BIND_POINT_COLOR_BLEND,
-device->meta_state.shared.cb_state);
+saved_state->cb_state = anv_dynamic_cb_state_to_handle(cmd_buffer->state.cb_state);
+anv_CmdBindDynamicColorBlendState(anv_cmd_buffer_to_handle(cmd_buffer),
+device->meta_state.shared.cb_state);
}
struct blit_region {
@@ -497,14 +515,15 @@ struct blit_region {
static void
meta_emit_blit(struct anv_cmd_buffer *cmd_buffer,
-struct anv_surface_view *src,
+struct anv_image_view *src,
VkOffset3D src_offset,
VkExtent3D src_extent,
-struct anv_surface_view *dest,
+struct anv_color_attachment_view *dest,
VkOffset3D dest_offset,
VkExtent3D dest_extent)
{
struct anv_device *device = cmd_buffer->device;
+VkDescriptorPool dummy_desc_pool = { .handle = 1 };
struct blit_vb_data {
float pos[2];
@@ -524,8 +543,8 @@ meta_emit_blit(struct anv_cmd_buffer *cmd_buffer,
dest_offset.y + dest_extent.height,
},
.tex_coord = {
-(float)(src_offset.x + src_extent.width) / (float)src->extent.width,
-(float)(src_offset.y + src_extent.height) / (float)src->extent.height,
+(float)(src_offset.x + src_extent.width) / (float)src->view.extent.width,
+(float)(src_offset.y + src_extent.height) / (float)src->view.extent.height,
},
};
@@ -535,8 +554,8 @@ meta_emit_blit(struct anv_cmd_buffer *cmd_buffer,
dest_offset.y + dest_extent.height,
},
.tex_coord = {
-(float)src_offset.x / (float)src->extent.width,
-(float)(src_offset.y + src_extent.height) / (float)src->extent.height,
+(float)src_offset.x / (float)src->view.extent.width,
+(float)(src_offset.y + src_extent.height) / (float)src->view.extent.height,
},
};
@@ -546,8 +565,8 @@ meta_emit_blit(struct anv_cmd_buffer *cmd_buffer,
dest_offset.y,
},
.tex_coord = {
-(float)src_offset.x / (float)src->extent.width,
-(float)src_offset.y / (float)src->extent.height,
+(float)src_offset.x / (float)src->view.extent.width,
+(float)src_offset.y / (float)src->view.extent.height,
},
};
@@ -570,7 +589,7 @@ meta_emit_blit(struct anv_cmd_buffer *cmd_buffer,
uint32_t count;
VkDescriptorSet set;
-anv_AllocDescriptorSets(anv_device_to_handle(device), 0 /* pool */,
+anv_AllocDescriptorSets(anv_device_to_handle(device), dummy_desc_pool,
VK_DESCRIPTOR_SET_USAGE_ONE_SHOT,
1, &device->meta_state.blit.ds_layout, &set, &count);
anv_UpdateDescriptorSets(anv_device_to_handle(device),
@@ -585,7 +604,7 @@ meta_emit_blit(struct anv_cmd_buffer *cmd_buffer,
.descriptorType = VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE,
.pDescriptors = (VkDescriptorInfo[]) {
{
-.imageView = (VkImageView) src,
+.imageView = anv_image_view_to_handle(src),
.imageLayout = VK_IMAGE_LAYOUT_GENERAL
},
}
@@ -596,49 +615,70 @@ meta_emit_blit(struct anv_cmd_buffer *cmd_buffer,
anv_CreateFramebuffer(anv_device_to_handle(device),
&(VkFramebufferCreateInfo) {
.sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,
-.colorAttachmentCount = 1,
-.pColorAttachments = (VkColorAttachmentBindInfo[]) {
+.attachmentCount = 1,
+.pAttachments = (VkAttachmentBindInfo[]) {
{
-.view = (VkColorAttachmentView) dest,
+.view = anv_attachment_view_to_handle(&dest->base),
.layout = VK_IMAGE_LAYOUT_GENERAL
}
},
-.pDepthStencilAttachment = NULL,
-.sampleCount = 1,
-.width = dest->extent.width,
-.height = dest->extent.height,
+.width = dest->view.extent.width,
+.height = dest->view.extent.height,
.layers = 1
}, &fb);
VkRenderPass pass;
anv_CreateRenderPass(anv_device_to_handle(device),
&(VkRenderPassCreateInfo) {
.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO,
-.renderArea = { { 0, 0 }, { dest->extent.width, dest->extent.height } },
-.colorAttachmentCount = 1,
-.extent = { 0, },
-.sampleCount = 1,
-.layers = 1,
-.pColorFormats = (VkFormat[]) { dest->format },
-.pColorLayouts = (VkImageLayout[]) { VK_IMAGE_LAYOUT_GENERAL },
-.pColorLoadOps = (VkAttachmentLoadOp[]) { VK_ATTACHMENT_LOAD_OP_LOAD },
-.pColorStoreOps = (VkAttachmentStoreOp[]) { VK_ATTACHMENT_STORE_OP_STORE },
-.pColorLoadClearValues = (VkClearColorValue[]) {
-{ .f32 = { 1.0, 0.0, 0.0, 1.0 } }
+.attachmentCount = 1,
+.pAttachments = &(VkAttachmentDescription) {
+.sType = VK_STRUCTURE_TYPE_ATTACHMENT_DESCRIPTION,
+.format = dest->view.format,
+.loadOp = VK_ATTACHMENT_LOAD_OP_LOAD,
+.storeOp = VK_ATTACHMENT_STORE_OP_STORE,
+.initialLayout = VK_IMAGE_LAYOUT_GENERAL,
+.finalLayout = VK_IMAGE_LAYOUT_GENERAL,
+},
-.depthStencilFormat = VK_FORMAT_UNDEFINED,
+.subpassCount = 1,
+.pSubpasses = &(VkSubpassDescription) {
+.sType = VK_STRUCTURE_TYPE_SUBPASS_DESCRIPTION,
+.pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS,
+.inputCount = 0,
+.colorCount = 1,
+.colorAttachments = &(VkAttachmentReference) {
+.attachment = 0,
+.layout = VK_IMAGE_LAYOUT_GENERAL,
+},
+.resolveAttachments = NULL,
+.depthStencilAttachment = (VkAttachmentReference) {
+.attachment = VK_ATTACHMENT_UNUSED,
+.layout = VK_IMAGE_LAYOUT_GENERAL,
+},
+.preserveCount = 1,
+.preserveAttachments = &(VkAttachmentReference) {
+.attachment = 0,
+.layout = VK_IMAGE_LAYOUT_GENERAL,
+},
+},
+.dependencyCount = 0,
}, &pass);
anv_CmdBeginRenderPass(anv_cmd_buffer_to_handle(cmd_buffer),
-&(VkRenderPassBegin) {
+&(VkRenderPassBeginInfo) {
+.sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO,
.renderPass = pass,
.framebuffer = fb,
-});
+.renderArea = {
+.offset = { dest_offset.x, dest_offset.y },
+.extent = { dest_extent.width, dest_extent.height },
+},
+.attachmentCount = 1,
+.pAttachmentClearValues = NULL,
+}, VK_RENDER_PASS_CONTENTS_INLINE);
-anv_CmdBindDynamicStateObject(anv_cmd_buffer_to_handle(cmd_buffer),
-VK_STATE_BIND_POINT_VIEWPORT,
-anv_framebuffer_from_handle(fb)->vp_state);
+anv_CmdBindDynamicViewportState(anv_cmd_buffer_to_handle(cmd_buffer),
+anv_framebuffer_from_handle(fb)->vp_state);
anv_CmdBindDescriptorSets(anv_cmd_buffer_to_handle(cmd_buffer),
VK_PIPELINE_BIND_POINT_GRAPHICS,
@@ -652,12 +692,9 @@ meta_emit_blit(struct anv_cmd_buffer *cmd_buffer,
/* At the point where we emit the draw call, all data from the
* descriptor sets, etc. has been used. We are free to delete it.
*/
-anv_DestroyObject(anv_device_to_handle(device),
-VK_OBJECT_TYPE_DESCRIPTOR_SET, set);
-anv_DestroyObject(anv_device_to_handle(device),
-VK_OBJECT_TYPE_FRAMEBUFFER, fb);
-anv_DestroyObject(anv_device_to_handle(device),
-VK_OBJECT_TYPE_RENDER_PASS, pass);
+anv_descriptor_set_destroy(device, anv_descriptor_set_from_handle(set));
+anv_DestroyFramebuffer(anv_device_to_handle(device), fb);
+anv_DestroyRenderPass(anv_device_to_handle(device), pass);
}
static void
@@ -665,9 +702,8 @@ meta_finish_blit(struct anv_cmd_buffer *cmd_buffer,
const struct anv_saved_state *saved_state)
{
anv_cmd_buffer_restore(cmd_buffer, saved_state);
-anv_CmdBindDynamicStateObject(anv_cmd_buffer_to_handle(cmd_buffer),
-VK_STATE_BIND_POINT_COLOR_BLEND,
-saved_state->cb_state);
+anv_CmdBindDynamicColorBlendState(anv_cmd_buffer_to_handle(cmd_buffer),
+saved_state->cb_state);
}
static VkFormat
@@ -724,7 +760,7 @@ do_buffer_copy(struct anv_cmd_buffer *cmd_buffer,
anv_image_from_handle(dest_image)->bo = dest;
anv_image_from_handle(dest_image)->offset = dest_offset;
-struct anv_surface_view src_view;
+struct anv_image_view src_view;
anv_image_view_init(&src_view, cmd_buffer->device,
&(VkImageViewCreateInfo) {
.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
@@ -747,10 +783,10 @@ do_buffer_copy(struct anv_cmd_buffer *cmd_buffer,
},
cmd_buffer);
-struct anv_surface_view dest_view;
+struct anv_color_attachment_view dest_view;
anv_color_attachment_view_init(&dest_view, cmd_buffer->device,
-&(VkColorAttachmentViewCreateInfo) {
-.sType = VK_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO,
+&(VkAttachmentViewCreateInfo) {
+.sType = VK_STRUCTURE_TYPE_ATTACHMENT_VIEW_CREATE_INFO,
.image = dest_image,
.format = copy_format,
.mipLevel = 0,
@@ -767,8 +803,8 @@ do_buffer_copy(struct anv_cmd_buffer *cmd_buffer,
(VkOffset3D) { 0, 0, 0 },
(VkExtent3D) { width, height, 1 });
-anv_DestroyObject(vk_device, VK_OBJECT_TYPE_IMAGE, src_image);
-anv_DestroyObject(vk_device, VK_OBJECT_TYPE_IMAGE, dest_image);
+anv_DestroyImage(vk_device, src_image);
+anv_DestroyImage(vk_device, dest_image);
}
void anv_CmdCopyBuffer(
@@ -778,9 +814,10 @@ void anv_CmdCopyBuffer(
uint32_t regionCount,
const VkBufferCopy* pRegions)
{
-struct anv_cmd_buffer *cmd_buffer = (struct anv_cmd_buffer *)cmdBuffer;
-struct anv_buffer *src_buffer = (struct anv_buffer *)srcBuffer;
-struct anv_buffer *dest_buffer = (struct anv_buffer *)destBuffer;
+ANV_FROM_HANDLE(anv_cmd_buffer, cmd_buffer, cmdBuffer);
+ANV_FROM_HANDLE(anv_buffer, src_buffer, srcBuffer);
+ANV_FROM_HANDLE(anv_buffer, dest_buffer, destBuffer);
struct anv_saved_state saved_state;
meta_prepare_blit(cmd_buffer, &saved_state);
@@ -857,14 +894,15 @@ void anv_CmdCopyImage(
uint32_t regionCount,
const VkImageCopy* pRegions)
{
-struct anv_cmd_buffer *cmd_buffer = (struct anv_cmd_buffer *)cmdBuffer;
-struct anv_image *src_image = (struct anv_image *)srcImage;
+ANV_FROM_HANDLE(anv_cmd_buffer, cmd_buffer, cmdBuffer);
+ANV_FROM_HANDLE(anv_image, src_image, srcImage);
struct anv_saved_state saved_state;
meta_prepare_blit(cmd_buffer, &saved_state);
for (unsigned r = 0; r < regionCount; r++) {
-struct anv_surface_view src_view;
+struct anv_image_view src_view;
anv_image_view_init(&src_view, cmd_buffer->device,
&(VkImageViewCreateInfo) {
.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
@@ -887,10 +925,10 @@ void anv_CmdCopyImage(
},
cmd_buffer);
-struct anv_surface_view dest_view;
+struct anv_color_attachment_view dest_view;
anv_color_attachment_view_init(&dest_view, cmd_buffer->device,
-&(VkColorAttachmentViewCreateInfo) {
-.sType = VK_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO,
+&(VkAttachmentViewCreateInfo) {
+.sType = VK_STRUCTURE_TYPE_ATTACHMENT_VIEW_CREATE_INFO,
.image = destImage,
.format = src_image->format,
.mipLevel = pRegions[r].destSubresource.mipLevel,
@@ -922,9 +960,10 @@ void anv_CmdBlitImage(
VkTexFilter filter)
{
-struct anv_cmd_buffer *cmd_buffer = (struct anv_cmd_buffer *)cmdBuffer;
-struct anv_image *src_image = (struct anv_image *)srcImage;
-struct anv_image *dest_image = (struct anv_image *)destImage;
+ANV_FROM_HANDLE(anv_cmd_buffer, cmd_buffer, cmdBuffer);
+ANV_FROM_HANDLE(anv_image, src_image, srcImage);
+ANV_FROM_HANDLE(anv_image, dest_image, destImage);
struct anv_saved_state saved_state;
anv_finishme("respect VkTexFilter");
@@ -932,7 +971,7 @@ void anv_CmdBlitImage(
meta_prepare_blit(cmd_buffer, &saved_state);
for (unsigned r = 0; r < regionCount; r++) {
-struct anv_surface_view src_view;
+struct anv_image_view src_view;
anv_image_view_init(&src_view, cmd_buffer->device,
&(VkImageViewCreateInfo) {
.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
@@ -955,10 +994,10 @@ void anv_CmdBlitImage(
},
cmd_buffer);
-struct anv_surface_view dest_view;
+struct anv_color_attachment_view dest_view;
anv_color_attachment_view_init(&dest_view, cmd_buffer->device,
-&(VkColorAttachmentViewCreateInfo) {
-.sType = VK_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO,
+&(VkAttachmentViewCreateInfo) {
+.sType = VK_STRUCTURE_TYPE_ATTACHMENT_VIEW_CREATE_INFO,
.image = destImage,
.format = dest_image->format,
.mipLevel = pRegions[r].destSubresource.mipLevel,
@@ -1028,7 +1067,7 @@ void anv_CmdCopyBufferToImage(
src_image->bo = src_buffer->bo;
src_image->offset = src_buffer->offset + pRegions[r].bufferOffset;
-struct anv_surface_view src_view;
+struct anv_image_view src_view;
anv_image_view_init(&src_view, cmd_buffer->device,
&(VkImageViewCreateInfo) {
.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
@@ -1051,10 +1090,10 @@ void anv_CmdCopyBufferToImage(
},
cmd_buffer);
-struct anv_surface_view dest_view;
+struct anv_color_attachment_view dest_view;
anv_color_attachment_view_init(&dest_view, cmd_buffer->device,
-&(VkColorAttachmentViewCreateInfo) {
-.sType = VK_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO,
+&(VkAttachmentViewCreateInfo) {
+.sType = VK_STRUCTURE_TYPE_ATTACHMENT_VIEW_CREATE_INFO,
.image = anv_image_to_handle(dest_image),
.format = dest_image->format,
.mipLevel = pRegions[r].imageSubresource.mipLevel,
@@ -1071,7 +1110,7 @@ void anv_CmdCopyBufferToImage(
pRegions[r].imageOffset,
pRegions[r].imageExtent);
-anv_DestroyObject(vk_device, VK_OBJECT_TYPE_IMAGE, srcImage);
+anv_DestroyImage(vk_device, srcImage);
}
meta_finish_blit(cmd_buffer, &saved_state);
@@ -1099,7 +1138,7 @@ void anv_CmdCopyImageToBuffer(
if (pRegions[r].bufferImageHeight != 0)
anv_finishme("bufferImageHeight not supported in CopyBufferToImage");
-struct anv_surface_view src_view;
+struct anv_image_view src_view;
anv_image_view_init(&src_view, cmd_buffer->device,
&(VkImageViewCreateInfo) {
.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
@@ -1149,10 +1188,10 @@ void anv_CmdCopyImageToBuffer(
dest_image->bo = dest_buffer->bo;
dest_image->offset = dest_buffer->offset + pRegions[r].bufferOffset;
-struct anv_surface_view dest_view;
+struct anv_color_attachment_view dest_view;
anv_color_attachment_view_init(&dest_view, cmd_buffer->device,
-&(VkColorAttachmentViewCreateInfo) {
-.sType = VK_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO,
+&(VkAttachmentViewCreateInfo) {
+.sType = VK_STRUCTURE_TYPE_ATTACHMENT_VIEW_CREATE_INFO,
.image = destImage,
.format = src_image->format,
.mipLevel = 0,
@@ -1169,7 +1208,7 @@ void anv_CmdCopyImageToBuffer(
(VkOffset3D) { 0, 0, 0 },
pRegions[r].imageExtent);
-anv_DestroyObject(vk_device, VK_OBJECT_TYPE_IMAGE, destImage);
+anv_DestroyImage(vk_device, destImage);
}
meta_finish_blit(cmd_buffer, &saved_state);
@@ -1212,10 +1251,10 @@ void anv_CmdClearColorImage(
for (uint32_t r = 0; r < rangeCount; r++) {
for (uint32_t l = 0; l < pRanges[r].mipLevels; l++) {
for (uint32_t s = 0; s < pRanges[r].arraySize; s++) {
-struct anv_surface_view view;
+struct anv_color_attachment_view view;
anv_color_attachment_view_init(&view, cmd_buffer->device,
-&(VkColorAttachmentViewCreateInfo) {
-.sType = VK_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO,
+&(VkAttachmentViewCreateInfo) {
+.sType = VK_STRUCTURE_TYPE_ATTACHMENT_VIEW_CREATE_INFO,
.image = _image,
.format = image->format,
.mipLevel = pRanges[r].baseMipLevel + l,
@@ -1228,17 +1267,15 @@ void anv_CmdClearColorImage(
anv_CreateFramebuffer(anv_device_to_handle(cmd_buffer->device),
&(VkFramebufferCreateInfo) {
.sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,
-.colorAttachmentCount = 1,
-.pColorAttachments = (VkColorAttachmentBindInfo[]) {
+.attachmentCount = 1,
+.pAttachments = (VkAttachmentBindInfo[]) {
{
-.view = (VkColorAttachmentView) &view,
+.view = anv_attachment_view_to_handle(&view.base),
.layout = VK_IMAGE_LAYOUT_GENERAL
}
},
-.pDepthStencilAttachment = NULL,
-.sampleCount = 1,
-.width = view.extent.width,
-.height = view.extent.height,
+.width = view.view.extent.width,
+.height = view.view.extent.height,
.layers = 1
}, &fb);
@@ -1246,24 +1283,54 @@ void anv_CmdClearColorImage(
anv_CreateRenderPass(anv_device_to_handle(cmd_buffer->device),
&(VkRenderPassCreateInfo) {
.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO,
-.renderArea = { { 0, 0 }, { view.extent.width, view.extent.height } },
-.colorAttachmentCount = 1,
-.extent = { 0, },
-.sampleCount = 1,
-.layers = 1,
-.pColorFormats = (VkFormat[]) { image->format },
-.pColorLayouts = (VkImageLayout[]) { imageLayout },
-.pColorLoadOps = (VkAttachmentLoadOp[]) { VK_ATTACHMENT_LOAD_OP_DONT_CARE },
-.pColorStoreOps = (VkAttachmentStoreOp[]) { VK_ATTACHMENT_STORE_OP_STORE },
-.pColorLoadClearValues = pColor,
-.depthStencilFormat = VK_FORMAT_UNDEFINED,
+.attachmentCount = 1,
+.pAttachments = &(VkAttachmentDescription) {
+.sType = VK_STRUCTURE_TYPE_ATTACHMENT_DESCRIPTION,
+.format = view.view.format,
+.loadOp = VK_ATTACHMENT_LOAD_OP_LOAD,
+.storeOp = VK_ATTACHMENT_STORE_OP_STORE,
+.initialLayout = VK_IMAGE_LAYOUT_GENERAL,
+.finalLayout = VK_IMAGE_LAYOUT_GENERAL,
+},
+.subpassCount = 1,
+.pSubpasses = &(VkSubpassDescription) {
+.sType = VK_STRUCTURE_TYPE_SUBPASS_DESCRIPTION,
+.pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS,
+.inputCount = 0,
+.colorCount = 1,
+.colorAttachments = &(VkAttachmentReference) {
+.attachment = 0,
+.layout = VK_IMAGE_LAYOUT_GENERAL,
+},
+.resolveAttachments = NULL,
+.depthStencilAttachment = (VkAttachmentReference) {
+.attachment = VK_ATTACHMENT_UNUSED,
+.layout = VK_IMAGE_LAYOUT_GENERAL,
+},
+.preserveCount = 1,
+.preserveAttachments = &(VkAttachmentReference) {
+.attachment = 0,
+.layout = VK_IMAGE_LAYOUT_GENERAL,
+},
+},
+.dependencyCount = 0,
}, &pass);
anv_CmdBeginRenderPass(anv_cmd_buffer_to_handle(cmd_buffer),
-&(VkRenderPassBegin) {
+&(VkRenderPassBeginInfo) {
+.sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO,
+.renderArea = {
+.offset = { 0, 0, },
+.extent = {
+.width = view.view.extent.width,
+.height = view.view.extent.height,
+},
+},
.renderPass = pass,
.framebuffer = fb,
-});
+.attachmentCount = 1,
+.pAttachmentClearValues = NULL,
+}, VK_RENDER_PASS_CONTENTS_INLINE);
struct clear_instance_data instance_data = {
.vue_header = {
@@ -1339,20 +1406,20 @@ anv_device_init_meta(struct anv_device *device)
anv_device_init_meta_blit_state(device);
anv_CreateDynamicRasterState(anv_device_to_handle(device),
-&(VkDynamicRsStateCreateInfo) {
-.sType = VK_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO,
+&(VkDynamicRasterStateCreateInfo) {
+.sType = VK_STRUCTURE_TYPE_DYNAMIC_RASTER_STATE_CREATE_INFO,
},
&device->meta_state.shared.rs_state);
anv_CreateDynamicColorBlendState(anv_device_to_handle(device),
-&(VkDynamicCbStateCreateInfo) {
-.sType = VK_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO
+&(VkDynamicColorBlendStateCreateInfo) {
+.sType = VK_STRUCTURE_TYPE_DYNAMIC_COLOR_BLEND_STATE_CREATE_INFO
},
&device->meta_state.shared.cb_state);
anv_CreateDynamicDepthStencilState(anv_device_to_handle(device),
-&(VkDynamicDsStateCreateInfo) {
-.sType = VK_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO
+&(VkDynamicDepthStencilStateCreateInfo) {
+.sType = VK_STRUCTURE_TYPE_DYNAMIC_DEPTH_STENCIL_STATE_CREATE_INFO
},
&device->meta_state.shared.ds_state);
}
@@ -1361,27 +1428,22 @@ void
anv_device_finish_meta(struct anv_device *device)
{
/* Clear */
-anv_DestroyObject(anv_device_to_handle(device), VK_OBJECT_TYPE_PIPELINE,
-device->meta_state.clear.pipeline);
+anv_DestroyPipeline(anv_device_to_handle(device),
+device->meta_state.clear.pipeline);
/* Blit */
-anv_DestroyObject(anv_device_to_handle(device), VK_OBJECT_TYPE_PIPELINE,
-device->meta_state.blit.pipeline);
-anv_DestroyObject(anv_device_to_handle(device),
-VK_OBJECT_TYPE_PIPELINE_LAYOUT,
-device->meta_state.blit.pipeline_layout);
-anv_DestroyObject(anv_device_to_handle(device),
-VK_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT,
-device->meta_state.blit.ds_layout);
+anv_DestroyPipeline(anv_device_to_handle(device),
+device->meta_state.blit.pipeline);
+anv_DestroyPipelineLayout(anv_device_to_handle(device),
+device->meta_state.blit.pipeline_layout);
+anv_DestroyDescriptorSetLayout(anv_device_to_handle(device),
+device->meta_state.blit.ds_layout);
/* Shared */
-anv_DestroyObject(anv_device_to_handle(device),
-VK_OBJECT_TYPE_DYNAMIC_RS_STATE,
-device->meta_state.shared.rs_state);
-anv_DestroyObject(anv_device_to_handle(device),
-VK_OBJECT_TYPE_DYNAMIC_CB_STATE,
-device->meta_state.shared.cb_state);
-anv_DestroyObject(anv_device_to_handle(device),
-VK_OBJECT_TYPE_DYNAMIC_DS_STATE,
-device->meta_state.shared.ds_state);
+anv_DestroyDynamicRasterState(anv_device_to_handle(device),
+device->meta_state.shared.rs_state);
+anv_DestroyDynamicColorBlendState(anv_device_to_handle(device),
+device->meta_state.shared.cb_state);
+anv_DestroyDynamicDepthStencilState(anv_device_to_handle(device),
+device->meta_state.shared.ds_state);
}

@@ -34,7 +34,7 @@
VkResult anv_CreateShaderModule(
VkDevice _device,
const VkShaderModuleCreateInfo* pCreateInfo,
-VkShader* pShaderModule)
+VkShaderModule* pShaderModule)
{
ANV_FROM_HANDLE(anv_device, device, _device);
struct anv_shader_module *module;
@@ -55,6 +55,18 @@ VkResult anv_CreateShaderModule(
return VK_SUCCESS;
}
+VkResult anv_DestroyShaderModule(
+VkDevice _device,
+VkShaderModule _module)
+{
+ANV_FROM_HANDLE(anv_device, device, _device);
+ANV_FROM_HANDLE(anv_shader_module, module, _module);
+anv_device_free(device, module);
+return VK_SUCCESS;
+}
VkResult anv_CreateShader(
VkDevice _device,
const VkShaderCreateInfo* pCreateInfo,
@@ -86,16 +98,37 @@ VkResult anv_CreateShader(
return VK_SUCCESS;
}
+VkResult anv_DestroyShader(
+VkDevice _device,
+VkShader _shader)
+{
+ANV_FROM_HANDLE(anv_device, device, _device);
+ANV_FROM_HANDLE(anv_shader, shader, _shader);
+anv_device_free(device, shader);
+return VK_SUCCESS;
+}
VkResult anv_CreatePipelineCache(
VkDevice device,
const VkPipelineCacheCreateInfo* pCreateInfo,
VkPipelineCache* pPipelineCache)
{
-*pPipelineCache = 1;
+pPipelineCache->handle = 1;
stub_return(VK_SUCCESS);
}
+VkResult anv_DestroyPipelineCache(
+VkDevice _device,
+VkPipelineCache _cache)
+{
+/* VkPipelineCache is a dummy object. */
+return VK_SUCCESS;
+}
size_t anv_GetPipelineCacheSize(
VkDevice device,
VkPipelineCache pipelineCache)
@@ -192,7 +225,7 @@ emit_vertex_input(struct anv_pipeline *pipeline,
static void
emit_ia_state(struct anv_pipeline *pipeline,
-const VkPipelineIaStateCreateInfo *info,
+const VkPipelineInputAssemblyStateCreateInfo *info,
const struct anv_pipeline_create_info *extra)
{
static const uint32_t vk_to_gen_primitive_type[] = {
@@ -225,7 +258,7 @@ emit_ia_state(struct anv_pipeline *pipeline,
static void
emit_rs_state(struct anv_pipeline *pipeline,
-const VkPipelineRsStateCreateInfo *info,
+const VkPipelineRasterStateCreateInfo *info,
const struct anv_pipeline_create_info *extra)
{
static const uint32_t vk_to_gen_cullmode[] = {
@@ -256,7 +289,7 @@ emit_rs_state(struct anv_pipeline *pipeline,
.PointWidth = 1.0,
};
-/* FINISHME: bool32_t rasterizerDiscardEnable; */
+/* FINISHME: VkBool32 rasterizerDiscardEnable; */
GEN8_3DSTATE_SF_pack(NULL, pipeline->state_sf, &sf);
@@ -283,7 +316,7 @@ emit_rs_state(struct anv_pipeline *pipeline,
static void
emit_cb_state(struct anv_pipeline *pipeline,
-const VkPipelineCbStateCreateInfo *info)
+const VkPipelineColorBlendStateCreateInfo *info)
{
struct anv_device *device = pipeline->device;
@@ -348,7 +381,7 @@ emit_cb_state(struct anv_pipeline *pipeline,
GEN8_BLEND_STATE_pack(NULL, state, &blend_state);
for (uint32_t i = 0; i < info->attachmentCount; i++) {
-const VkPipelineCbAttachmentState *a = &info->pAttachments[i];
+const VkPipelineColorBlendAttachmentState *a = &info->pAttachments[i];
struct GEN8_BLEND_STATE_ENTRY entry = {
.LogicOpEnable = info->logicOpEnable,
@@ -401,7 +434,7 @@ static const uint32_t vk_to_gen_stencil_op[] = {
static void
emit_ds_state(struct anv_pipeline *pipeline,
-const VkPipelineDsStateCreateInfo *info)
+const VkPipelineDepthStencilStateCreateInfo *info)
{
if (info == NULL) {
/* We're going to OR this together with the dynamic state. We need
@@ -412,7 +445,7 @@ emit_ds_state(struct anv_pipeline *pipeline,
return;
}
-/* bool32_t depthBoundsEnable; // optional (depth_bounds_test) */
+/* VkBool32 depthBoundsEnable; // optional (depth_bounds_test) */
struct GEN8_3DSTATE_WM_DEPTH_STENCIL wm_depth_stencil = {
.DepthTestEnable = info->depthTestEnable,
@@ -434,22 +467,6 @@ emit_ds_state(struct anv_pipeline *pipeline,
GEN8_3DSTATE_WM_DEPTH_STENCIL_pack(NULL, pipeline->state_wm_depth_stencil, &wm_depth_stencil);
}
-static void
-anv_pipeline_destroy(struct anv_device *device,
-struct anv_object *object,
-VkObjectType obj_type)
-{
-struct anv_pipeline *pipeline = (struct anv_pipeline*) object;
-assert(obj_type == VK_OBJECT_TYPE_PIPELINE);
-anv_compiler_free(pipeline);
-anv_reloc_list_finish(&pipeline->batch.relocs, pipeline->device);
-anv_state_stream_finish(&pipeline->program_stream);
-anv_state_pool_free(&device->dynamic_state_pool, pipeline->blend_state);
-anv_device_free(pipeline->device, pipeline);
-}
VkResult
anv_pipeline_create(
VkDevice _device,
@@ -469,7 +486,6 @@ anv_pipeline_create(
if (pipeline == NULL)
return vk_error(VK_ERROR_OUT_OF_HOST_MEMORY);
-pipeline->base.destructor = anv_pipeline_destroy;
pipeline->device = device;
pipeline->layout = anv_pipeline_layout_from_handle(pCreateInfo->layout);
memset(pipeline->shaders, 0, sizeof(pipeline->shaders));
@@ -490,12 +506,12 @@ anv_pipeline_create(
anv_shader_from_handle(pCreateInfo->pStages[i].shader);
}
-if (pCreateInfo->pTessState)
-anv_finishme("VK_STRUCTURE_TYPE_PIPELINE_TESS_STATE_CREATE_INFO");
-if (pCreateInfo->pVpState)
-anv_finishme("VK_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO");
-if (pCreateInfo->pMsState)
-anv_finishme("VK_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO");
+if (pCreateInfo->pTessellationState)
+anv_finishme("VK_STRUCTURE_TYPE_PIPELINE_TESSELLATION_STATE_CREATE_INFO");
+if (pCreateInfo->pViewportState)
+anv_finishme("VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO");
+if (pCreateInfo->pMultisampleState)
+anv_finishme("VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO");
pipeline->use_repclear = extra && extra->use_repclear;
@@ -512,12 +528,12 @@ anv_pipeline_create(
assert(pCreateInfo->pVertexInputState);
emit_vertex_input(pipeline, pCreateInfo->pVertexInputState);
-assert(pCreateInfo->pIaState);
-emit_ia_state(pipeline, pCreateInfo->pIaState, extra);
-assert(pCreateInfo->pRsState);
-emit_rs_state(pipeline, pCreateInfo->pRsState, extra);
-emit_ds_state(pipeline, pCreateInfo->pDsState);
-emit_cb_state(pipeline, pCreateInfo->pCbState);
+assert(pCreateInfo->pInputAssemblyState);
+emit_ia_state(pipeline, pCreateInfo->pInputAssemblyState, extra);
+assert(pCreateInfo->pRasterState);
+emit_rs_state(pipeline, pCreateInfo->pRasterState, extra);
+emit_ds_state(pipeline, pCreateInfo->pDepthStencilState);
+emit_cb_state(pipeline, pCreateInfo->pColorBlendState);
anv_batch_emit(&pipeline->batch, GEN8_3DSTATE_VF_STATISTICS,
.StatisticsEnable = true);
@@ -736,6 +752,22 @@ anv_pipeline_create(
return VK_SUCCESS;
}
+VkResult anv_DestroyPipeline(
+VkDevice _device,
+VkPipeline _pipeline)
+{
+ANV_FROM_HANDLE(anv_device, device, _device);
+ANV_FROM_HANDLE(anv_pipeline, pipeline, _pipeline);
+anv_compiler_free(pipeline);
+anv_reloc_list_finish(&pipeline->batch.relocs, pipeline->device);
+anv_state_stream_finish(&pipeline->program_stream);
+anv_state_pool_free(&device->dynamic_state_pool, pipeline->blend_state);
+anv_device_free(pipeline->device, pipeline);
+return VK_SUCCESS;
+}
VkResult anv_CreateGraphicsPipelines(
VkDevice _device,
VkPipelineCache pipelineCache,
@@ -743,7 +775,6 @@ VkResult anv_CreateGraphicsPipelines(
const VkGraphicsPipelineCreateInfo* pCreateInfos,
VkPipeline* pPipelines)
{
-ANV_FROM_HANDLE(anv_device, device, _device);
VkResult result = VK_SUCCESS;
unsigned i = 0;
@@ -752,8 +783,7 @@ VkResult anv_CreateGraphicsPipelines(
NULL, &pPipelines[i]);
if (result != VK_SUCCESS) {
for (unsigned j = 0; j < i; j++) {
-anv_pipeline_destroy(device, (struct anv_object *)pPipelines[j],
-VK_OBJECT_TYPE_PIPELINE);
+anv_DestroyPipeline(_device, pPipelines[j]);
}
return result;
@@ -779,7 +809,6 @@ static VkResult anv_compute_pipeline_create(
if (pipeline == NULL)
return vk_error(VK_ERROR_OUT_OF_HOST_MEMORY);
-pipeline->base.destructor = anv_pipeline_destroy;
pipeline->device = device;
pipeline->layout = anv_pipeline_layout_from_handle(pCreateInfo->layout);
@@ -842,7 +871,6 @@ VkResult anv_CreateComputePipelines(
const VkComputePipelineCreateInfo* pCreateInfos,
VkPipeline* pPipelines)
{
-ANV_FROM_HANDLE(anv_device, device, _device);
VkResult result = VK_SUCCESS;
unsigned i = 0;
@@ -851,8 +879,7 @@ VkResult anv_CreateComputePipelines(
&pPipelines[i]);
if (result != VK_SUCCESS) {
for (unsigned j = 0; j < i; j++) {
anv_pipeline_destroy(device, (struct anv_object *)pPipelines[j],
VK_OBJECT_TYPE_PIPELINE);
anv_DestroyPipeline(_device, pPipelines[j]);
}
return result;
@@ -909,3 +936,15 @@ VkResult anv_CreatePipelineLayout(
return VK_SUCCESS;
}
VkResult anv_DestroyPipelineLayout(
VkDevice _device,
VkPipelineLayout _pipelineLayout)
{
ANV_FROM_HANDLE(anv_device, device, _device);
ANV_FROM_HANDLE(anv_pipeline_layout, pipeline_layout, _pipelineLayout);
anv_device_free(device, pipeline_layout);
return VK_SUCCESS;
}
@@ -325,17 +325,6 @@ void anv_bo_pool_finish(struct anv_bo_pool *pool);
VkResult anv_bo_pool_alloc(struct anv_bo_pool *pool, struct anv_bo *bo);
void anv_bo_pool_free(struct anv_bo_pool *pool, const struct anv_bo *bo);
struct anv_object;
struct anv_device;
typedef void (*anv_object_destructor_cb)(struct anv_device *,
struct anv_object *,
VkObjectType);
struct anv_object {
anv_object_destructor_cb destructor;
};
struct anv_physical_device {
struct anv_instance * instance;
uint32_t chipset_id;
@@ -367,9 +356,9 @@ struct anv_meta_state {
} blit;
struct {
VkDynamicRsState rs_state;
VkDynamicCbState cb_state;
VkDynamicDsState ds_state;
VkDynamicRasterState rs_state;
VkDynamicColorBlendState cb_state;
VkDynamicDepthStencilState ds_state;
} shared;
};
@@ -467,6 +456,11 @@ VkResult anv_reloc_list_init(struct anv_reloc_list *list,
void anv_reloc_list_finish(struct anv_reloc_list *list,
struct anv_device *device);
uint64_t anv_reloc_list_add(struct anv_reloc_list *list,
struct anv_device *device,
uint32_t offset, struct anv_bo *target_bo,
uint32_t delta);
struct anv_batch_bo {
struct anv_bo bo;
@@ -571,7 +565,6 @@ struct anv_device_memory {
};
struct anv_dynamic_vp_state {
struct anv_object base;
struct anv_state sf_clip_vp;
struct anv_state cc_vp;
struct anv_state scissor;
@@ -588,7 +581,7 @@ struct anv_dynamic_ds_state {
};
struct anv_dynamic_cb_state {
uint32_t state_color_calc[GEN8_COLOR_CALC_STATE_length];
};
@@ -620,6 +613,15 @@ struct anv_descriptor_set {
struct anv_descriptor descriptors[0];
};
VkResult
anv_descriptor_set_create(struct anv_device *device,
const struct anv_descriptor_set_layout *layout,
struct anv_descriptor_set **out_set);
void
anv_descriptor_set_destroy(struct anv_device *device,
struct anv_descriptor_set *set);
#define MAX_VBS 32
#define MAX_SETS 8
#define MAX_RTS 8
@@ -665,27 +667,8 @@ struct anv_descriptor_set_binding {
uint32_t dynamic_offsets[128];
};
struct anv_cmd_buffer {
struct anv_object base;
struct anv_device * device;
struct drm_i915_gem_execbuffer2 execbuf;
struct drm_i915_gem_exec_object2 * exec2_objects;
struct anv_bo ** exec2_bos;
uint32_t exec2_array_length;
bool need_reloc;
uint32_t serial;
uint32_t bo_count;
struct anv_batch batch;
struct anv_batch_bo * last_batch_bo;
struct anv_batch_bo * surface_batch_bo;
uint32_t surface_next;
struct anv_reloc_list surface_relocs;
struct anv_state_stream surface_state_stream;
struct anv_state_stream dynamic_state_stream;
/* State required while building cmd buffer */
/** State required while building cmd buffer */
struct anv_cmd_state {
uint32_t current_pipeline;
uint32_t vb_dirty;
uint32_t dirty;
@@ -695,6 +678,8 @@ struct anv_cmd_buffer {
struct anv_pipeline * pipeline;
struct anv_pipeline * compute_pipeline;
struct anv_framebuffer * framebuffer;
struct anv_render_pass * pass;
struct anv_subpass * subpass;
struct anv_dynamic_rs_state * rs_state;
struct anv_dynamic_ds_state * ds_state;
struct anv_dynamic_vp_state * vp_state;
@@ -704,11 +689,53 @@ struct anv_cmd_buffer {
struct anv_descriptor_set_binding descriptors[MAX_SETS];
};
VkResult anv_cmd_state_init(struct anv_cmd_state *state);
void anv_cmd_state_fini(struct anv_cmd_state *state);
struct anv_cmd_buffer {
struct anv_device * device;
struct drm_i915_gem_execbuffer2 execbuf;
struct drm_i915_gem_exec_object2 * exec2_objects;
uint32_t exec2_bo_count;
struct anv_bo ** exec2_bos;
uint32_t exec2_array_length;
bool need_reloc;
uint32_t serial;
struct anv_batch batch;
struct anv_batch_bo * last_batch_bo;
struct anv_batch_bo * surface_batch_bo;
uint32_t surface_next;
struct anv_reloc_list surface_relocs;
struct anv_state_stream surface_state_stream;
struct anv_state_stream dynamic_state_stream;
struct anv_cmd_state state;
};
struct anv_state
anv_cmd_buffer_alloc_surface_state(struct anv_cmd_buffer *cmd_buffer,
uint32_t size, uint32_t alignment);
struct anv_state
anv_cmd_buffer_alloc_dynamic_state(struct anv_cmd_buffer *cmd_buffer,
uint32_t size, uint32_t alignment);
VkResult anv_cmd_buffer_new_surface_state_bo(struct anv_cmd_buffer *cmd_buffer);
void anv_cmd_buffer_emit_state_base_address(struct anv_cmd_buffer *cmd_buffer);
void anv_cmd_buffer_begin_subpass(struct anv_cmd_buffer *cmd_buffer,
struct anv_subpass *subpass);
void anv_cmd_buffer_clear_attachments(struct anv_cmd_buffer *cmd_buffer,
struct anv_render_pass *pass,
const VkClearValue *clear_values);
void anv_cmd_buffer_dump(struct anv_cmd_buffer *cmd_buffer);
void anv_aub_writer_destroy(struct anv_aub_writer *writer);
struct anv_fence {
struct anv_object base;
struct anv_bo bo;
struct drm_i915_gem_execbuffer2 execbuf;
struct drm_i915_gem_exec_object2 exec2_objects[1];
@@ -726,7 +753,6 @@ struct anv_shader {
};
struct anv_pipeline {
struct anv_object base;
struct anv_device * device;
struct anv_batch batch;
uint32_t batch_data[256];
@@ -797,12 +823,13 @@ struct anv_format {
uint16_t surface_format; /**< RENDER_SURFACE_STATE.SurfaceFormat */
uint8_t cpp; /**< Bytes-per-pixel of anv_format::surface_format. */
uint8_t num_channels;
uint8_t depth_format; /**< 3DSTATE_DEPTH_BUFFER.SurfaceFormat */
uint16_t depth_format; /**< 3DSTATE_DEPTH_BUFFER.SurfaceFormat */
bool has_stencil;
};
const struct anv_format *
anv_format_for_vk_format(VkFormat format);
bool anv_is_vk_format_depth_or_stencil(VkFormat format);
/**
* A proxy for the color surfaces, depth surfaces, and stencil surfaces.
@@ -866,34 +893,33 @@ struct anv_surface_view {
VkFormat format;
};
struct anv_image_create_info {
const VkImageCreateInfo *vk_info;
bool force_tile_mode;
uint8_t tile_mode;
struct anv_buffer_view {
/* FINISHME: Trim unneeded data from this struct. */
struct anv_surface_view view;
};
VkResult anv_image_create(VkDevice _device,
const struct anv_image_create_info *info,
VkImage *pImage);
struct anv_image_view {
struct anv_surface_view view;
};
void anv_image_view_init(struct anv_surface_view *view,
struct anv_device *device,
const VkImageViewCreateInfo* pCreateInfo,
struct anv_cmd_buffer *cmd_buffer);
enum anv_attachment_view_type {
ANV_ATTACHMENT_VIEW_TYPE_COLOR,
ANV_ATTACHMENT_VIEW_TYPE_DEPTH_STENCIL,
};
void anv_color_attachment_view_init(struct anv_surface_view *view,
struct anv_device *device,
const VkColorAttachmentViewCreateInfo* pCreateInfo,
struct anv_cmd_buffer *cmd_buffer);
struct anv_attachment_view {
enum anv_attachment_view_type attachment_type;
};
void anv_surface_view_destroy(struct anv_device *device,
struct anv_surface_view *view);
struct anv_color_attachment_view {
struct anv_attachment_view base;
struct anv_sampler {
uint32_t state[4];
struct anv_surface_view view;
};
struct anv_depth_stencil_view {
struct anv_attachment_view base;
struct anv_bo *bo;
uint32_t depth_offset; /**< Offset into bo. */
@@ -906,93 +932,135 @@ struct anv_depth_stencil_view {
uint16_t stencil_qpitch; /**< 3DSTATE_STENCIL_BUFFER.SurfaceQPitch */
};
struct anv_framebuffer {
struct anv_object base;
uint32_t color_attachment_count;
const struct anv_surface_view * color_attachments[MAX_RTS];
const struct anv_depth_stencil_view * depth_stencil;
struct anv_image_create_info {
const VkImageCreateInfo *vk_info;
bool force_tile_mode;
uint8_t tile_mode;
};
uint32_t sample_count;
VkResult anv_image_create(VkDevice _device,
const struct anv_image_create_info *info,
VkImage *pImage);
void anv_image_view_init(struct anv_image_view *view,
struct anv_device *device,
const VkImageViewCreateInfo* pCreateInfo,
struct anv_cmd_buffer *cmd_buffer);
void anv_color_attachment_view_init(struct anv_color_attachment_view *view,
struct anv_device *device,
const VkAttachmentViewCreateInfo* pCreateInfo,
struct anv_cmd_buffer *cmd_buffer);
void anv_fill_buffer_surface_state(void *state, VkFormat format,
uint32_t offset, uint32_t range);
void anv_surface_view_fini(struct anv_device *device,
struct anv_surface_view *view);
struct anv_sampler {
uint32_t state[4];
};
struct anv_framebuffer {
uint32_t width;
uint32_t height;
uint32_t layers;
/* Viewport for clears */
VkDynamicVpState vp_state;
VkDynamicViewportState vp_state;
uint32_t attachment_count;
const struct anv_attachment_view * attachments[0];
};
struct anv_render_pass_layer {
VkAttachmentLoadOp color_load_op;
VkClearColorValue clear_color;
struct anv_subpass {
uint32_t input_count;
uint32_t * input_attachments;
uint32_t color_count;
uint32_t * color_attachments;
uint32_t * resolve_attachments;
uint32_t depth_stencil_attachment;
};
struct anv_render_pass_attachment {
VkFormat format;
uint32_t samples;
VkAttachmentLoadOp load_op;
VkAttachmentLoadOp stencil_load_op;
};
struct anv_render_pass {
VkRect2D render_area;
uint32_t attachment_count;
uint32_t subpass_count;
uint32_t num_clear_layers;
uint32_t num_layers;
struct anv_render_pass_layer layers[0];
struct anv_render_pass_attachment * attachments;
struct anv_subpass subpasses[0];
};
void anv_device_init_meta(struct anv_device *device);
void anv_device_finish_meta(struct anv_device *device);
void
anv_cmd_buffer_clear(struct anv_cmd_buffer *cmd_buffer,
struct anv_render_pass *pass);
void *anv_lookup_entrypoint(const char *name);
void *
anv_lookup_entrypoint(const char *name);
#define ANV_DEFINE_HANDLE_CASTS(__anv_type, __VkType) \
\
static inline struct __anv_type * \
__anv_type ## _from_handle(__VkType _handle) \
{ \
return (struct __anv_type *) _handle; \
} \
\
static inline __VkType \
__anv_type ## _to_handle(struct __anv_type *_obj) \
{ \
return (__VkType) _obj; \
}
VkResult anv_DestroyImage(VkDevice device, VkImage image);
VkResult anv_DestroyImageView(VkDevice device, VkImageView imageView);
VkResult anv_DestroyBufferView(VkDevice device, VkBufferView bufferView);
VkResult anv_DestroyColorAttachmentView(VkDevice device,
VkColorAttachmentView view);
VkResult anv_DestroyDepthStencilView(VkDevice device, VkDepthStencilView view);
VkResult anv_DestroyRenderPass(VkDevice device, VkRenderPass renderPass);
#define ANV_DEFINE_CASTS(__anv_type, __VkType) \
static inline struct __anv_type * \
__anv_type ## _from_handle(__VkType _handle) \
{ \
return (struct __anv_type *) _handle; \
} \
\
static inline __VkType \
__anv_type ## _to_handle(struct __anv_type *_obj) \
{ \
return (__VkType) _obj; \
}
ANV_DEFINE_CASTS(anv_physical_device, VkPhysicalDevice)
ANV_DEFINE_CASTS(anv_instance, VkInstance)
ANV_DEFINE_CASTS(anv_queue, VkQueue)
ANV_DEFINE_CASTS(anv_device, VkDevice)
ANV_DEFINE_CASTS(anv_device_memory, VkDeviceMemory)
ANV_DEFINE_CASTS(anv_dynamic_vp_state, VkDynamicVpState)
ANV_DEFINE_CASTS(anv_dynamic_rs_state, VkDynamicRsState)
ANV_DEFINE_CASTS(anv_dynamic_ds_state, VkDynamicDsState)
ANV_DEFINE_CASTS(anv_dynamic_cb_state, VkDynamicCbState)
ANV_DEFINE_CASTS(anv_descriptor_set_layout, VkDescriptorSetLayout)
ANV_DEFINE_CASTS(anv_descriptor_set, VkDescriptorSet)
ANV_DEFINE_CASTS(anv_pipeline_layout, VkPipelineLayout)
ANV_DEFINE_CASTS(anv_buffer, VkBuffer)
ANV_DEFINE_CASTS(anv_cmd_buffer, VkCmdBuffer)
ANV_DEFINE_CASTS(anv_fence, VkFence)
ANV_DEFINE_CASTS(anv_shader_module, VkShaderModule)
ANV_DEFINE_CASTS(anv_shader, VkShader)
ANV_DEFINE_CASTS(anv_pipeline, VkPipeline)
ANV_DEFINE_CASTS(anv_image, VkImage)
ANV_DEFINE_CASTS(anv_sampler, VkSampler)
ANV_DEFINE_CASTS(anv_depth_stencil_view, VkDepthStencilView)
ANV_DEFINE_CASTS(anv_framebuffer, VkFramebuffer)
ANV_DEFINE_CASTS(anv_render_pass, VkRenderPass)
ANV_DEFINE_CASTS(anv_query_pool, VkQueryPool)
#define ANV_DEFINE_NONDISP_HANDLE_CASTS(__anv_type, __VkType) \
\
static inline struct __anv_type * \
__anv_type ## _from_handle(__VkType _handle) \
{ \
return (struct __anv_type *) _handle.handle; \
} \
\
static inline __VkType \
__anv_type ## _to_handle(struct __anv_type *_obj) \
{ \
return (__VkType) { .handle = (uint64_t) _obj }; \
}
#define ANV_FROM_HANDLE(__anv_type, __name, __handle) \
struct __anv_type *__name = __anv_type ## _from_handle(__handle)
ANV_DEFINE_HANDLE_CASTS(anv_cmd_buffer, VkCmdBuffer)
ANV_DEFINE_HANDLE_CASTS(anv_device, VkDevice)
ANV_DEFINE_HANDLE_CASTS(anv_instance, VkInstance)
ANV_DEFINE_HANDLE_CASTS(anv_physical_device, VkPhysicalDevice)
ANV_DEFINE_HANDLE_CASTS(anv_queue, VkQueue)
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_attachment_view, VkAttachmentView)
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_buffer, VkBuffer)
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_buffer_view, VkBufferView);
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_descriptor_set, VkDescriptorSet)
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_descriptor_set_layout, VkDescriptorSetLayout)
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_device_memory, VkDeviceMemory)
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_dynamic_cb_state, VkDynamicColorBlendState)
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_dynamic_ds_state, VkDynamicDepthStencilState)
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_dynamic_rs_state, VkDynamicRasterState)
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_dynamic_vp_state, VkDynamicViewportState)
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_fence, VkFence)
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_framebuffer, VkFramebuffer)
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_image, VkImage)
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_image_view, VkImageView);
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_pipeline, VkPipeline)
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_pipeline_layout, VkPipelineLayout)
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_query_pool, VkQueryPool)
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_render_pass, VkRenderPass)
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_sampler, VkSampler)
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_shader, VkShader)
ANV_DEFINE_NONDISP_HANDLE_CASTS(anv_shader_module, VkShaderModule)
#ifdef __cplusplus
}
#endif
@@ -36,26 +36,11 @@ struct anv_query_pool_slot {
};
struct anv_query_pool {
struct anv_object base;
VkQueryType type;
uint32_t slots;
struct anv_bo bo;
};
static void
anv_query_pool_destroy(struct anv_device *device,
struct anv_object *object,
VkObjectType obj_type)
{
struct anv_query_pool *pool = (struct anv_query_pool *) object;
assert(obj_type == VK_OBJECT_TYPE_QUERY_POOL);
anv_gem_munmap(pool->bo.map, pool->bo.size);
anv_gem_close(device, pool->bo.gem_handle);
anv_device_free(device, pool);
}
VkResult anv_CreateQueryPool(
VkDevice _device,
const VkQueryPoolCreateInfo* pCreateInfo,
@@ -82,9 +67,6 @@ VkResult anv_CreateQueryPool(
if (pool == NULL)
return vk_error(VK_ERROR_OUT_OF_HOST_MEMORY);
pool->base.destructor = anv_query_pool_destroy;
pool->type = pCreateInfo->queryType;
size = pCreateInfo->slots * sizeof(struct anv_query_pool_slot);
result = anv_bo_init_new(&pool->bo, device, size);
if (result != VK_SUCCESS)
@@ -102,6 +84,20 @@ VkResult anv_CreateQueryPool(
return result;
}
VkResult anv_DestroyQueryPool(
VkDevice _device,
VkQueryPool _pool)
{
ANV_FROM_HANDLE(anv_device, device, _device);
ANV_FROM_HANDLE(anv_query_pool, pool, _pool);
anv_gem_munmap(pool->bo.map, pool->bo.size);
anv_gem_close(device, pool->bo.gem_handle);
anv_device_free(device, pool);
return VK_SUCCESS;
}
VkResult anv_GetQueryPoolResults(
VkDevice _device,
VkQueryPool queryPool,
@@ -88,7 +88,8 @@ VkResult anv_CreateSwapChainWSI(
const VkSwapChainCreateInfoWSI* pCreateInfo,
VkSwapChainWSI* pSwapChain)
{
struct anv_device *device = (struct anv_device *) _device;
ANV_FROM_HANDLE(anv_device, device, _device);
struct anv_swap_chain *chain;
xcb_void_cookie_t cookie;
VkResult result;
@@ -110,11 +111,13 @@ VkResult anv_CreateSwapChainWSI(
chain->extent = pCreateInfo->imageExtent;
for (uint32_t i = 0; i < chain->count; i++) {
VkDeviceMemory memory_h;
VkImage image_h;
struct anv_image *image;
struct anv_surface *surface;
struct anv_device_memory *memory;
anv_image_create((VkDevice) device,
anv_image_create(_device,
&(struct anv_image_create_info) {
.force_tile_mode = true,
.tile_mode = XMAJOR,
@@ -136,22 +139,23 @@ VkResult anv_CreateSwapChainWSI(
.usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
.flags = 0,
}},
(VkImage *) &image);
&image_h);
image = anv_image_from_handle(image_h);
surface = &image->primary_surface;
anv_AllocMemory((VkDevice) device,
anv_AllocMemory(_device,
&(VkMemoryAllocInfo) {
.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
.allocationSize = image->size,
.memoryTypeIndex = 0,
},
(VkDeviceMemory *) &memory);
&memory_h);
anv_BindObjectMemory(VK_NULL_HANDLE,
VK_OBJECT_TYPE_IMAGE,
(VkImage) image,
(VkDeviceMemory) memory, 0);
memory = anv_device_memory_from_handle(memory_h);
anv_BindImageMemory(VK_NULL_HANDLE, anv_image_to_handle(image),
memory_h, 0);
ret = anv_gem_set_tiling(device, memory->bo.gem_handle,
surface->stride, I915_TILING_X);
@@ -242,8 +246,8 @@ VkResult anv_GetSwapChainInfoWSI(
images = pData;
for (uint32_t i = 0; i < chain->count; i++) {
images[i].image = (VkImage) chain->images[i].image;
images[i].memory = (VkDeviceMemory) chain->images[i].memory;
images[i].image = anv_image_to_handle(chain->images[i].image);
images[i].memory = anv_device_memory_to_handle(chain->images[i].memory);
}
return VK_SUCCESS;
@@ -257,7 +261,8 @@ VkResult anv_QueuePresentWSI(
VkQueue queue_,
const VkPresentInfoWSI* pPresentInfo)
{
struct anv_image *image = (struct anv_image *) pPresentInfo->image;
ANV_FROM_HANDLE(anv_image, image, pPresentInfo->image);
struct anv_swap_chain *chain = image->swap_chain;
xcb_void_cookie_t cookie;
xcb_pixmap_t pixmap;
@@ -269,7 +274,7 @@ VkResult anv_QueuePresentWSI(
pixmap = XCB_NONE;
for (uint32_t i = 0; i < chain->count; i++) {
if ((VkImage) chain->images[i].image == pPresentInfo->image) {
if (image == chain->images[i].image) {
pixmap = chain->images[i].pixmap;
break;
}