15 Commits

Author | SHA1 | Message | Date
Robin Voetter | 34d30d0e64 | Vulkan 1.2.175 compatibility | 2021-04-13 19:52:30 +02:00
Robin Voetter | efb63a5cac | Stop fixing up bitmasks. This seems to not be required anymore. | 2021-04-08 13:06:22 +02:00
Robin Voetter | fb5ca7cf90 | Stop fixing up tags. This seems to not be needed anymore. | 2021-04-08 13:06:11 +02:00
Robin Voetter | 272c1160eb | Stop filtering out promoted extensions (Fixes #10). This seems to not be needed anymore. | 2021-04-08 13:06:00 +02:00
Robin Voetter | 2064c912aa | Allow F as floating-point suffix | 2021-04-07 21:36:28 +02:00
Robin Voetter | 954ca65ed9 | Fix parse error | 2021-04-07 21:36:17 +02:00
Robin Voetter | 9321da3426 | CI: Split out build & fetch vk.xml steps | 2021-04-07 21:36:07 +02:00
Robin Voetter | bda8c7213a | Vulkan 1.2.170 compatibility | 2021-04-07 21:35:25 +02:00
Robin Voetter | 9aae495eab | Use linkLibC instead of linkSystemLibrary to link libc | 2021-04-07 21:35:16 +02:00
Robin Voetter | 01a64c1f9c | Clarify on compatible zig versions (#8) | 2021-02-10 00:07:10 +01:00
Robin Voetter | ffb9e9ff3e | Remove some old code | 2021-02-10 00:07:04 +01:00
Robin Voetter | 8e48a8aa03 | Allow top level comments in xml parser | 2021-02-10 00:06:57 +01:00
Robin Voetter | 50177211cb | Small styling fix | 2021-02-10 00:06:50 +01:00
Robin Voetter | 9eac24ee39 | Make API-enums non-exhaustive. The Vulkan implementation is not required to filter enums on values supported by the requested API, and so may return values that the implementation doesn't know about. By making these enums non-exhaustive, the programmer is forced to deal with these kinds of cases appropriately. | 2021-02-10 00:06:41 +01:00
Robin Voetter | 2cb1fcc354 | Generate fully qualified alias enum variants | 2021-02-10 00:06:31 +01:00
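The design described in the "Make API-enums non-exhaustive" commit above can be sketched in Zig as follows. This is a hypothetical illustration (the enum values here are made up, not taken from the generated bindings):

```zig
// Sketch: a non-exhaustive enum, as the generator now emits for API enums.
const Result = enum(i32) {
    success = 0,
    not_ready = 1,
    _, // non-exhaustive: the driver may return values unknown to these bindings
};

fn check(result: Result) void {
    switch (result) {
        .success, .not_ready => {},
        // Without this `_` arm the switch does not compile, so the
        // programmer is forced to handle unknown values.
        _ => {},
    }
}
```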
29 changed files with 18745 additions and 4494 deletions

@@ -10,41 +10,31 @@ on:
jobs:
build:
runs-on: ubuntu-22.04
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v2
- name: Setup Zig
uses: mlugg/setup-zig@v1
uses: goto-bus-stop/setup-zig@v1.3.0
with:
version: master
- name: Check formatting
run: zig fmt --check .
- name: Test
run: |
zig build test
- name: Fetch latest Vulkan SDK
- name: Fetch Vulkan SDK
run: |
wget -qO - https://packages.lunarg.com/lunarg-signing-key-pub.asc | sudo apt-key add -
sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-jammy.list https://packages.lunarg.com/vulkan/lunarg-vulkan-jammy.list
sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-1.2.162-focal.list https://packages.lunarg.com/vulkan/1.2.162/lunarg-vulkan-1.2.162-focal.list
sudo apt update
sudo apt install shaderc libglfw3 libglfw3-dev
- name: Fetch latest vk.xml
run: wget https://raw.githubusercontent.com/KhronosGroup/Vulkan-Docs/main/xml/vk.xml
run: |
wget https://raw.githubusercontent.com/KhronosGroup/Vulkan-Docs/main/xml/vk.xml
- name: Test and install with latest zig & latest vk.xml
run: zig build test install -Dregistry=$(pwd)/vk.xml
- name: Build example with latest zig & vk.xml from dependency
run: zig build --build-file $(pwd)/examples/build.zig
- name: Build example with latest zig & latest vk.xml
run: zig build --build-file $(pwd)/examples/build.zig -Doverride-registry=$(pwd)/vk.xml
- name: Archive vk.zig
uses: actions/upload-artifact@v4
with:
name: vk.zig
path: zig-out/src/vk.zig
if-no-files-found: error
- name: Build with latest zig & vk.xml
run: |
zig build -Dvulkan-registry=./vk.xml

.gitignore

@@ -1,5 +1 @@
zig-cache/
zig-out/
.vscode/.zig-cache/
.zig-cache/
examples/.zig-cache
zig-cache/


@@ -1,4 +1,4 @@
Copyright © 2020-2022 Robin Voetter
Copyright © 2020 Robin Voetter
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

README.md

@@ -10,92 +10,38 @@ vulkan-zig attempts to provide a better experience to programming Vulkan applica
vulkan-zig is automatically tested daily against the latest vk.xml and zig, and supports vk.xml from version 1.x.163.
## Example
A partial implementation of https://vulkan-tutorial.com is implemented in [examples/triangle.zig](examples/triangle.zig). This example can be run by executing `zig build --build-file $(pwd)/examples/build.zig run-triangle` in vulkan-zig's root. See in particular the [build file](examples/build.zig), which contains a concrete example of how to use vulkan-zig as a dependency.
### Zig versions
vulkan-zig aims to be always compatible with the ever-changing Zig master branch (however, development may lag a few days behind). Sometimes, however, the Zig master branch breaks functionality, which may make the latest version of vulkan-zig incompatible with older releases of Zig. This repository aims to have a version compatible with both the latest Zig master and the latest Zig release. The `master` branch is compatible with the `master` branch of Zig, and versions for older versions of Zig are maintained in the `zig-<version>-compat` branch.
`master` is compatible and tested with the Zig self-hosted compiler. The `zig-stage1-compat` branch contains a version which is compatible with the Zig stage 1 compiler.
vulkan-zig aims to be always compatible with the ever-changing Zig master branch (however, development may lag a few days behind). Sometimes, however, the Zig master branch breaks functionality, which may make the latest version of vulkan-zig incompatible with older releases of Zig. Versions compatible with older versions of Zig are marked with the tag `zig-<version>`.
## Features
### CLI-interface
A CLI-interface is provided to generate vk.zig from the [Vulkan XML registry](https://github.com/KhronosGroup/Vulkan-Docs/blob/main/xml), which is built by default when invoking `zig build` in the project root. To generate vk.zig, simply invoke the program as follows:
A CLI-interface is provided to generate vk.zig from the [Vulkan XML registry](https://github.com/KhronosGroup/Vulkan-Docs/blob/master/xml), which is built by default when invoking `zig build` in the project root. To generate vk.zig, simply invoke the program as follows:
```
$ zig-out/bin/vulkan-zig-generator path/to/vk.xml output/path/to/vk.zig
$ zig-cache/bin/vulkan-zig-generator path/to/vk.xml output/path/to/vk.zig
```
This reads the xml file, parses its contents, renders the Vulkan bindings, and formats the file before writing the result to the output path. While the intended usage of vulkan-zig is through direct generation from build.zig (see below), the CLI-interface can be used for one-off generation and for vendoring the result.
`path/to/vk.xml` can be obtained from several sources:
- From the LunarG Vulkan SDK. This can either be obtained from [LunarG](https://www.lunarg.com/vulkan-sdk) or usually using the package manager. The registry can then be found at `$VULKAN_SDK/share/vulkan/registry/vk.xml`.
- Directly from the [Vulkan-Headers GitHub repository](https://github.com/KhronosGroup/Vulkan-Headers/blob/main/registry/vk.xml).
### Generation with the package manager from build.zig
There is also support for adding this project as a dependency through zig package manager in its current form. In order to do this, add this repo as a dependency in your build.zig.zon:
### Generation from build.zig
Vulkan bindings can be generated from the Vulkan XML registry at compile time with build.zig, by using the provided Vulkan generation step:
```zig
.{
// -- snip --
.dependencies = .{
// -- snip --
.vulkan_zig = .{
.url = "https://github.com/Snektron/vulkan-zig/archive/<commit SHA>.tar.gz",
.hash = "<dependency hash>",
},
},
const vkgen = @import("vulkan-zig/generator/index.zig");
pub fn build(b: *Builder) void {
...
const exe = b.addExecutable("my-executable", "src/main.zig");
// Create a step that generates vk.zig (stored in zig-cache) from the provided vulkan registry.
const gen = vkgen.VkGenerateStep.init(b, "path/to/vk.xml", "vk.zig");
exe.step.dependOn(&gen.step);
// Add the generated file as package to the final executable
exe.addPackagePath("vulkan", gen.full_out_path);
}
```
And then in your build.zig file, you'll need to add a line like this to your build function:
```zig
const vkzig_dep = b.dependency("vulkan_zig", .{
.registry = @as([]const u8, b.pathFromRoot("path/to/vk.xml")),
});
const vkzig_bindings = vkzig_dep.module("vulkan-zig");
exe.root_module.addImport("vulkan", vkzig_bindings);
```
That will allow you to `@import("vulkan")` in your executable's source.
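As a minimal sketch of what that enables (types shown are from the generated bindings; the initializers are illustrative):

```zig
const vk = @import("vulkan");

// Generated types are now directly available in your source:
var instance: vk.Instance = .null_handle;
var extent: vk.Extent2D = .{ .width = 800, .height = 600 };
```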
### Manual generation with the package manager from build.zig
Bindings can also be generated by invoking the generator directly. This may be useful in some special cases; for example, it integrates particularly well with fetching the registry via the package manager. This can be done by adding the Vulkan-Headers repository to your dependencies, and then passing the `vk.xml` inside it to vulkan-zig-generator:
```zig
.{
// -- snip --
.dependencies = .{
// -- snip --
.vulkan_headers = .{
.url = "https://github.com/KhronosGroup/Vulkan-Headers/archive/<commit SHA>.tar.gz",
.hash = "<dependency hash>",
},
},
}
```
And then pass `vk.xml` to vulkan-zig-generator as follows:
```zig
// Get the (lazy) path to vk.xml:
const registry = b.dependency("vulkan_headers", .{}).path("registry/vk.xml");
// Get generator executable reference
const vk_gen = b.dependency("vulkan_zig", .{}).artifact("vulkan-zig-generator");
// Set up a run step to generate the bindings
const vk_generate_cmd = b.addRunArtifact(vk_gen);
// Pass the registry to the generator
vk_generate_cmd.addFileArg(registry);
// Create a module from the generator's output...
const vulkan_zig = b.addModule("vulkan-zig", .{
.root_source_file = vk_generate_cmd.addOutputFileArg("vk.zig"),
});
// ... and pass it as a module to your executable's build command
exe.root_module.addImport("vulkan", vulkan_zig);
```
See [examples/build.zig](examples/build.zig) and [examples/build.zig.zon](examples/build.zig.zon) for a concrete example.
This reads vk.xml, parses its contents, and renders the Vulkan bindings to "vk.zig", which is then formatted and placed in `zig-cache`. The resulting file can then be added to an executable by using `addPackagePath`.
### Function & field renaming
Functions and fields are renamed to be more or less in line with [Zig's standard library style](https://ziglang.org/documentation/master/#Style-Guide):
* The vk prefix is removed everywhere
* Structs like `VkInstanceCreateInfo` are renamed to `InstanceCreateInfo`.
@@ -107,7 +53,6 @@ Functions and fields are renamed to be more or less in line with [Zig's standard
* Any name which is either an illegal Zig name or a reserved identifier is rendered using `@"name"` syntax. For example, `VK_IMAGE_TYPE_2D` is translated to `@"2d"`.
### Function pointers & Wrappers
vulkan-zig provides no integration for statically linking libvulkan, and these symbols are not generated at all. Instead, vulkan functions are to be loaded dynamically. For each Vulkan function, a function pointer type is generated using the exact parameters and return types as defined by the Vulkan specification:
```zig
pub const PfnCreateInstance = fn (
@@ -122,37 +67,23 @@ For each function, a wrapper is generated into one of three structs:
* InstanceWrapper. This contains wrappers for functions which are otherwise loaded by `vkGetInstanceProcAddr`.
* DeviceWrapper. This contains wrappers for functions which are loaded by `vkGetDeviceProcAddr`.
To create a wrapper type, an "api specification" should be passed to it. This is a list of `ApiInfo` structs, which allows one to specify the functions that should be made available. An `ApiInfo` structure is initialized with 3 optional fields: `base_commands`, `instance_commands`, and `device_commands`. Each of these takes a set of the Vulkan functions that should be made available for that category; for example, setting `.createInstance = true` in `base_commands` makes the `createInstance` function available (loaded from `vkCreateInstance`). An entire feature level or extension can be pulled in at once too: for example, `vk.features.version_1_0` contains all functions for Vulkan 1.0, and `vk.extensions.khr_surface` contains all functions for the `VK_KHR_surface` extension.
Each wrapper struct is to be used as a mixin on a struct containing **just** function pointers as members:
```zig
const vk = @import("vulkan");
/// To construct base, instance and device wrappers for vulkan-zig, you need to pass a list of 'apis' to it.
const apis: []const vk.ApiInfo = &.{
// You can either add individual functions by manually creating an 'api'
.{
.base_commands = .{
.createInstance = true,
},
.instance_commands = .{
.createDevice = true,
},
},
// Or you can add entire feature sets or extensions
vk.features.version_1_0,
vk.extensions.khr_surface,
vk.extensions.khr_swapchain,
const BaseDispatch = struct {
vkCreateInstance: vk.PfnCreateInstance,
usingnamespace vk.BaseWrapper(@This());
};
const BaseDispatch = vk.BaseWrapper(apis);
```
The wrapper struct then provides wrapper functions for each function pointer in the dispatch struct:
```zig
pub const BaseWrapper(comptime cmds: anytype) type {
...
const Dispatch = CreateDispatchStruct(cmds);
pub const BaseWrapper(comptime Self: type) type {
return struct {
dispatch: Dispatch,
pub const CreateInstanceError = error{
pub fn createInstance(
self: Self,
create_info: InstanceCreateInfo,
p_allocator: ?*const AllocationCallbacks,
) error{
OutOfHostMemory,
OutOfDeviceMemory,
InitializationFailed,
@@ -160,14 +91,9 @@ pub const BaseWrapper(comptime cmds: anytype) type {
ExtensionNotPresent,
IncompatibleDriver,
Unknown,
};
pub fn createInstance(
self: Self,
create_info: InstanceCreateInfo,
p_allocator: ?*const AllocationCallbacks,
) CreateInstanceError!Instance {
}!Instance {
var instance: Instance = undefined;
const result = self.dispatch.vkCreateInstance(
const result = self.vkCreateInstance(
&create_info,
p_allocator,
&instance,
@@ -199,52 +125,11 @@ Wrappers are generated according to the following rules:
* As of yet, there is no specific handling of enumeration style commands or other commands which accept slices.
Furthermore, each wrapper contains a function to load each function pointer member when passed either `PfnGetInstanceProcAddr` or `PfnGetDeviceProcAddr`, which attempts to load each member as a function pointer and casts it to the appropriate type. These functions are loaded literally, and any wrongly named member or member with a wrong function pointer type will result in problems.
* For `BaseWrapper`, this function has signature `fn load(loader: anytype) error{CommandFailure}!Self`, where the type of `loader` must resemble `PfnGetInstanceProcAddr` (with optionally having a different calling convention).
* For `InstanceWrapper`, this function has signature `fn load(instance: Instance, loader: anytype) error{CommandFailure}!Self`, where the type of `loader` must resemble `PfnGetInstanceProcAddr`.
* For `DeviceWrapper`, this function has signature `fn load(device: Device, loader: anytype) error{CommandFailure}!Self`, where the type of `loader` must resemble `PfnGetDeviceProcAddr`.
Note that these functions accept a loader with the signature of `anytype` instead of `PfnGetInstanceProcAddr`. This is because it is valid for `vkGetInstanceProcAddr` to load itself, in which case the returned function is to be called with the Vulkan calling convention. This calling convention is not required for loading vulkan-zig itself, though, and a loader with any calling convention supported by the target architecture may be passed in. This is particularly useful when interacting with C libraries that provide `vkGetInstanceProcAddr`.
```zig
// vkGetInstanceProcAddr as provided by GLFW.
// Note that vk.Instance and vk.PfnVoidFunction are ABI compatible with VkInstance,
// and that `extern` implies the C calling convention.
pub extern fn glfwGetInstanceProcAddress(instance: vk.Instance, procname: [*:0]const u8) vk.PfnVoidFunction;
// Or provide a custom implementation.
// This function is called with the unspecified Zig-internal calling convention.
fn customGetInstanceProcAddress(instance: vk.Instance, procname: [*:0]const u8) vk.PfnVoidFunction {
...
}
// Both calls are valid:
const vkb = try BaseDispatch.load(glfwGetInstanceProcAddress);
const vkb = try BaseDispatch.load(customGetInstanceProcAddress);
```
By default, wrapper `load` functions return `error.CommandLoadFailure` if a call to the loader resulted in `null`. If this behaviour is not desired, one can use `loadNoFail`. This function accepts the same parameters as `load`, but does not return an error if any function pointer fails to load; instead, it sets the value to `undefined`. It is at the programmer's discretion not to invoke invalid functions, which can be tested for by checking whether the core and extension versions the function requires are supported.
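For illustration, a hedged sketch of this tradeoff, reusing the `BaseDispatch` type and GLFW loader from the earlier example:

```zig
// load: returns error.CommandLoadFailure if any pointer resolves to null.
const vkb_checked = try BaseDispatch.load(glfwGetInstanceProcAddress);

// loadNoFail: never errors; unresolved pointers are left undefined, so it is
// up to the caller to only invoke functions whose core version or extension
// is known to be supported. Calling an unloaded pointer is undefined behaviour.
const vkb_unchecked = BaseDispatch.loadNoFail(glfwGetInstanceProcAddress);
```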
One can access the underlying unwrapped C functions by doing `wrapper.dispatch.vkFuncYouWant(..)`.
#### Proxying Wrappers
Proxying wrappers combine a handle and a pointer to the associated wrapper in a single struct, and automatically pass the handle to commands as appropriate. Besides the proxying wrappers for instances and devices, there are also proxying wrappers for queues and command buffers. Proxying wrapper types are constructed in the same way as a regular wrapper, by passing an api specification to them. To initialize a proxying wrapper, it must be passed a handle and a pointer to an appropriate wrapper. For queue and command buffer proxying wrappers, a pointer to a device wrapper must be passed.
```zig
// Create the dispatch tables
const InstanceDispatch = vk.InstanceWrapper(apis);
const Instance = vk.InstanceProxy(apis);
const instance_handle = try vkb.createInstance(...);
const vki = try InstanceDispatch.load(instance_handle, vkb.vkGetInstanceProcAddr);
const instance = Instance.load(instance_handle, &vki);
defer instance.destroyInstance(null);
```
For queue and command buffer proxying wrappers, the `queue` and `cmd` prefix is removed for functions where appropriate. Note that the device proxying wrappers also have the queue and command buffer functions made available for convenience, but there the prefix is not stripped.
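A sketch of the prefix stripping, mirroring the instance example above (the `apis` list, `vkd` device wrapper, and `cmdbuf_handle` are assumed names, not part of the generated API):

```zig
// Command buffer proxy: constructed like the instance proxy, but with a
// pointer to a device wrapper.
const CommandBuffer = vk.CommandBufferProxy(apis);
const cmdbuf = CommandBuffer.load(cmdbuf_handle, &vkd);

cmdbuf.draw(3, 1, 0, 0); // vkCmdDraw, with the cmd prefix stripped

// On a device proxy, the same function is available for convenience,
// but keeps its prefix and takes the command buffer explicitly:
// device.cmdDraw(cmdbuf_handle, 3, 1, 0, 0);
```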
* For `BaseWrapper`, this function has signature `fn load(loader: PfnGetInstanceProcAddr) !Self`.
* For `InstanceWrapper`, this function has signature `fn load(instance: Instance, loader: PfnGetInstanceProcAddr) !Self`.
* For `DeviceWrapper`, this function has signature `fn load(device: Device, loader: PfnGetDeviceProcAddr) !Self`.
### Bitflags
Packed structs of bools are used for bit flags in vulkan-zig, instead of both a `FlagBits` and `Flags` variant. Places where either of these variants are used are both replaced by this packed struct instead. This means that even in places where just one flag would normally be accepted, the packed struct is accepted. The programmer is responsible for only enabling a single bit.
Each bit is defaulted to `false`, and the first `bool` is aligned to guarantee the overall alignment.
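For example (flag field names assumed to follow the renaming rules described earlier):

```zig
// One packed struct of bools replaces both VkImageUsageFlagBits and
// VkImageUsageFlags:
const usage = vk.ImageUsageFlags{
    .color_attachment_bit = true,
    .transfer_src_bit = true,
};

// Where the API expects a single bit, set exactly one field; the
// programmer is responsible for not setting more than one.
const aspect = vk.ImageAspectFlags{ .color_bit = true };
```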
@@ -293,7 +178,6 @@ pub fn FlagsMixin(comptime FlagsType: type) type {
```
### Handles
Handles are generated to a non-exhaustive enum, backed by a `u64` for non-dispatchable handles and `usize` for dispatchable ones:
```zig
const Instance = extern enum(usize) { null_handle = 0, _ };
@@ -301,7 +185,6 @@ const Instance = extern enum(usize) { null_handle = 0, _ };
This means that handles are type-safe even when compiling for a 32-bit target.
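A short sketch of the resulting type safety:

```zig
var instance: vk.Instance = .null_handle;
var device: vk.Device = .null_handle;

// instance = device; // compile error: distinct enum types, even though
//                       both are backed by usize on this target

if (instance == .null_handle) {
    // handle has not been created yet
}
```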
### Struct defaults
Defaults are generated for certain fields of structs:
* sType is defaulted to the appropriate value.
* pNext is defaulted to `null`.
@@ -309,14 +192,13 @@ Defaults are generated for certain fields of structs:
```zig
pub const InstanceCreateInfo = extern struct {
s_type: StructureType = .instance_create_info,
p_next: ?*const anyopaque = null,
p_next: ?*const c_void = null,
flags: InstanceCreateFlags,
...
};
```
### Pointer types
Pointer types in both commands (wrapped and function pointers) and struct fields are augmented with the following information, where available in the registry:
* Pointer optional-ness.
* Pointer const-ness.
@@ -325,50 +207,44 @@ Pointer types in both commands (wrapped and function pointers) and struct fields
Note that this information is not accurate everywhere in the registry, leading to places where optional-ness is not correct. Most notably, CreateInfo-type structures which take a slice often have the item count marked as optional, but not the pointer itself. As of yet, this is not fixed in vulkan-zig. If drivers properly follow the Vulkan specification, these can be initialized to `undefined`; however, [that is not always the case](https://zeux.io/2019/07/17/serializing-pipeline-cache/).
### Platform types
Defaults with the same ABI layout are generated for most platform-defined types. These can either be bitcasted to, or overridden by defining them in the project root:
```zig
pub const xcb_connection_t = if (@hasDecl(root, "xcb_connection_t")) root.xcb_connection_t else opaque{};
pub const xcb_connection_t = if (@hasDecl(root, "xcb_connection_t")) root.xcb_connection_t else @Type(.Opaque);
```
For some types (such as those from Google Games Platform) no default is known, but an `opaque{}` will be used by default. Usage of these without providing a concrete type in the project root is likely an error.
For some types (such as those from Google Games Platform) no default is known. Usage of these without providing a concrete type in the project root generates a compile error.
### Shader compilation
vulkan-zig provides functionality to help compiling shaders using glslc. It can be used from build.zig as follows:
Shaders should be compiled by invoking a shader compiler via the build system. For example:
```zig
const vkgen = @import("vulkan-zig/generator/index.zig");
pub fn build(b: *Builder) void {
...
const vert_cmd = b.addSystemCommand(&.{
"glslc",
"--target-env=vulkan1.2",
"-o"
});
const vert_spv = vert_cmd.addOutputFileArg("vert.spv");
vert_cmd.addFileArg(b.path("shaders/triangle.vert"));
exe.root_module.addAnonymousImport("vertex_shader", .{
.root_source_file = vert_spv
});
...
const exe = b.addExecutable("my-executable", "src/main.zig");
const gen = vkgen.VkGenerateStep(b, "path/to/vk.xml", "vk.zig");
exe.step.dependOn(&gen.step);
exe.addPackagePath("vulkan", gen.full_out_path);
const shader_comp = vkgen.ShaderCompileStep.init(
builder,
&[_][]const u8{"glslc", "--target-env=vulkan1.2"}, // Path to glslc and additional parameters
);
exe.step.dependOn(&shader_comp.step);
const spv_path = shader_comp.addShader("path/to/shader.frag");
}
```
Note that SPIR-V must be 32-bit aligned when fed to Vulkan. The easiest way to do this is to dereference the shader's bytecode and manually align it as follows:
```zig
const vert_spv align(@alignOf(u32)) = @embedFile("vertex_shader").*;
```
See [examples/build.zig](examples/build.zig) for a working example.
For more advanced shader compiler usage, one may consider a library such as [shader_compiler](https://github.com/Games-by-Mason/shader_compiler).
Upon compilation, glslc is then invoked to compile each shader, and the result is placed within `zig-cache`. `addShader` returns the full path to the compiled shader code. This file can then be included in the project, as is done in [build.zig for the example](build.zig) by generating an additional file which uses `@embedFile`.
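Combining the alignment trick with shader module creation gives a hedged sketch like the following (the `device` proxy and the field names follow the wrapper and renaming sections above; treat the exact signature as an assumption):

```zig
// 32-bit-aligned copy of the embedded SPIR-V bytecode.
const vert_spv align(@alignOf(u32)) = @embedFile("vertex_shader").*;

fn createVertexModule(device: anytype) !vk.ShaderModule {
    // code_size is in bytes; p_code must point to u32-aligned data.
    return device.createShaderModule(.{
        .code_size = vert_spv.len,
        .p_code = @ptrCast(&vert_spv),
    }, null);
}
```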
## Limitations
* Currently, the self-hosted version of Zig's cache-hash API is not yet ready for usage, which means that the bindings are regenerated every time an executable is built.
* vulkan-zig has as of yet no functionality for selecting feature levels and extensions when generating bindings. This is because when an extension is promoted to Vulkan core, its fields and commands are renamed to lose the extension's author tag (for example, VkSemaphoreWaitFlagsKHR was renamed to VkSemaphoreWaitFlags when it was promoted from an extension to Vulkan 1.2 core). This leads to inconsistencies when only items up to a certain feature level are included, as these promoted items then need to re-gain a tag.
## See also
## Example
A partial implementation of https://vulkan-tutorial.org is implemented in [examples/triangle.zig](examples/triangle.zig). This example can be run by executing `zig build run-triangle` in vulkan-zig's root.
* Implementation of https://vulkan-tutorial.com using `@cImport`'ed bindings: https://github.com/andrewrk/zig-vulkan-triangle.
## See also
* Implementation of https://vulkan-tutorial.org: https://github.com/andrewrk/zig-vulkan-triangle.
* Alternative binding generator: https://github.com/SpexGuy/Zig-Vulkan-Headers
* Zig bindings for GLFW: https://github.com/hexops/mach-glfw
* With vulkan-zig integration example: https://github.com/hexops/mach-glfw-vulkan-example
* Advanced shader compilation: https://github.com/Games-by-Mason/shader_compiler

build.zig

@@ -1,56 +1,107 @@
const std = @import("std");
const vkgen = @import("generator/index.zig");
const Step = std.build.Step;
const Builder = std.build.Builder;
pub fn build(b: *std.Build) void {
const target = b.standardTargetOptions(.{});
const optimize = b.standardOptimizeOption(.{});
const maybe_registry: ?[]const u8 = b.option([]const u8, "registry", "Set the path to the Vulkan registry (vk.xml)");
const test_step = b.step("test", "Run all the tests");
pub const ResourceGenStep = struct {
step: Step,
shader_step: *vkgen.ShaderCompileStep,
builder: *Builder,
package: std.build.Pkg,
resources: std.ArrayList(u8),
// Using the package manager, this artifact can be obtained by the user
// through `b.dependency(<name in build.zig.zon>, .{}).artifact("vulkan-zig-generator")`.
// with that, the user need only `.addArg("path/to/vk.xml")`, and then obtain
// a file source to the generated code with `.addOutputArg("vk.zig")`
const generator_exe = b.addExecutable(.{
.name = "vulkan-zig-generator",
.root_source_file = b.path("src/main.zig"),
.target = target,
.optimize = optimize,
});
b.installArtifact(generator_exe);
pub fn init(builder: *Builder, out: []const u8) *ResourceGenStep {
const self = builder.allocator.create(ResourceGenStep) catch unreachable;
const full_out_path = std.fs.path.join(builder.allocator, &[_][]const u8{
builder.build_root,
builder.cache_root,
out,
}) catch unreachable;
// Or they can skip all that, and just make sure to pass `.registry = "path/to/vk.xml"` to `b.dependency`,
// and then obtain the module directly via `.module("vulkan-zig")`.
if (maybe_registry) |registry| {
const vk_generate_cmd = b.addRunArtifact(generator_exe);
self.* = .{
.step = Step.init(.Custom, "resources", builder.allocator, make),
.shader_step = vkgen.ShaderCompileStep.init(builder, &[_][]const u8{"glslc", "--target-env=vulkan1.2"}),
.builder = builder,
.package = .{
.name = "resources",
.path = full_out_path,
.dependencies = null,
},
.resources = std.ArrayList(u8).init(builder.allocator),
};
vk_generate_cmd.addArg(registry);
const vk_zig = vk_generate_cmd.addOutputFileArg("vk.zig");
const vk_zig_module = b.addModule("vulkan-zig", .{
.root_source_file = vk_zig,
});
// Also install vk.zig, if passed.
const vk_zig_install_step = b.addInstallFile(vk_zig, "src/vk.zig");
b.getInstallStep().dependOn(&vk_zig_install_step.step);
// And run tests on this vk.zig too.
// This test needs to be an object so that vulkan-zig can import types from the root.
// It does not need to run anyway.
const ref_all_decls_test = b.addObject(.{
.name = "ref-all-decls-test",
.root_source_file = b.path("test/ref_all_decls.zig"),
.target = target,
.optimize = optimize,
});
ref_all_decls_test.root_module.addImport("vulkan", vk_zig_module);
test_step.dependOn(&ref_all_decls_test.step);
self.step.dependOn(&self.shader_step.step);
return self;
}
const test_target = b.addTest(.{
.root_source_file = b.path("src/main.zig"),
});
test_step.dependOn(&b.addRunArtifact(test_target).step);
fn renderPath(self: *ResourceGenStep, path: []const u8, writer: anytype) void {
const separators = &[_]u8{ std.fs.path.sep_windows, std.fs.path.sep_posix };
var i: usize = 0;
while (std.mem.indexOfAnyPos(u8, path, i, separators)) |j| {
writer.writeAll(path[i .. j]) catch unreachable;
switch (std.fs.path.sep) {
std.fs.path.sep_windows => writer.writeAll("\\\\") catch unreachable,
std.fs.path.sep_posix => writer.writeByte(std.fs.path.sep_posix) catch unreachable,
else => unreachable
}
i = j + 1;
}
writer.writeAll(path[i..]) catch unreachable;
}
pub fn addShader(self: *ResourceGenStep, name: []const u8, source: []const u8) void {
const shader_out_path = self.shader_step.add(source);
var writer = self.resources.writer();
writer.print("pub const {s} = @embedFile(\"", .{ name }) catch unreachable;
self.renderPath(shader_out_path, writer);
writer.writeAll("\");\n") catch unreachable;
}
fn make(step: *Step) !void {
const self = @fieldParentPtr(ResourceGenStep, "step", step);
const cwd = std.fs.cwd();
const dir = std.fs.path.dirname(self.package.path).?;
try cwd.makePath(dir);
try cwd.writeFile(self.package.path, self.resources.items);
}
};
pub fn build(b: *Builder) void {
var test_step = b.step("test", "Run all the tests");
test_step.dependOn(&b.addTest("generator/index.zig").step);
const target = b.standardTargetOptions(.{});
const mode = b.standardReleaseOptions();
const generator_exe = b.addExecutable("vulkan-zig-generator", "generator/main.zig");
generator_exe.setTarget(target);
generator_exe.setBuildMode(mode);
generator_exe.install();
const triangle_exe = b.addExecutable("triangle", "examples/triangle.zig");
triangle_exe.setTarget(target);
triangle_exe.setBuildMode(mode);
triangle_exe.install();
triangle_exe.linkLibC();
triangle_exe.linkSystemLibrary("glfw");
const vk_xml_path = b.option([]const u8, "vulkan-registry", "Override the path to the Vulkan registry") orelse "examples/vk.xml";
const gen = vkgen.VkGenerateStep.init(b, vk_xml_path, "vk.zig");
triangle_exe.step.dependOn(&gen.step);
triangle_exe.addPackage(gen.package);
const res = ResourceGenStep.init(b, "resources.zig");
res.addShader("triangle_vert", "examples/shaders/triangle.vert");
res.addShader("triangle_frag", "examples/shaders/triangle.frag");
triangle_exe.step.dependOn(&res.step);
triangle_exe.addPackage(res.package);
const triangle_run_cmd = triangle_exe.run();
triangle_run_cmd.step.dependOn(b.getInstallStep());
const triangle_run_step = b.step("run-triangle", "Run the triangle example");
triangle_run_step.dependOn(&triangle_run_cmd.step);
}


@@ -1,11 +0,0 @@
.{
.name = "vulkan",
.version = "0.0.0",
.minimum_zig_version = "0.14.0-dev.1359+e9a00ba7f",
.paths = .{
"build.zig",
"LICENSE",
"README.md",
"src",
},
}


@@ -1,62 +0,0 @@
const std = @import("std");
const vkgen = @import("vulkan_zig");
pub fn build(b: *std.Build) void {
const target = b.standardTargetOptions(.{});
const optimize = b.standardOptimizeOption(.{});
const maybe_override_registry = b.option([]const u8, "override-registry", "Override the path to the Vulkan registry used for the examples");
const registry = b.dependency("vulkan_headers", .{}).path("registry/vk.xml");
const triangle_exe = b.addExecutable(.{
.name = "triangle",
.root_source_file = b.path("triangle.zig"),
.target = target,
.link_libc = true,
.optimize = optimize,
});
b.installArtifact(triangle_exe);
triangle_exe.linkSystemLibrary("glfw");
const vk_gen = b.dependency("vulkan_zig", .{}).artifact("vulkan-zig-generator");
const vk_generate_cmd = b.addRunArtifact(vk_gen);
if (maybe_override_registry) |override_registry| {
vk_generate_cmd.addFileArg(.{ .cwd_relative = override_registry });
} else {
vk_generate_cmd.addFileArg(registry);
}
triangle_exe.root_module.addAnonymousImport("vulkan", .{
.root_source_file = vk_generate_cmd.addOutputFileArg("vk.zig"),
});
const vert_cmd = b.addSystemCommand(&.{
"glslc",
"--target-env=vulkan1.2",
"-o",
});
const vert_spv = vert_cmd.addOutputFileArg("vert.spv");
vert_cmd.addFileArg(b.path("shaders/triangle.vert"));
triangle_exe.root_module.addAnonymousImport("vertex_shader", .{
.root_source_file = vert_spv,
});
const frag_cmd = b.addSystemCommand(&.{
"glslc",
"--target-env=vulkan1.2",
"-o",
});
const frag_spv = frag_cmd.addOutputFileArg("frag.spv");
frag_cmd.addFileArg(b.path("shaders/triangle.frag"));
triangle_exe.root_module.addAnonymousImport("fragment_shader", .{
.root_source_file = frag_spv,
});
const triangle_run_cmd = b.addRunArtifact(triangle_exe);
triangle_run_cmd.step.dependOn(b.getInstallStep());
const triangle_run_step = b.step("run-triangle", "Run the triangle example");
triangle_run_step.dependOn(&triangle_run_cmd.step);
}

View File

@@ -1,14 +0,0 @@
.{
.name = "vulkan-zig-examples",
.version = "0.1.0",
.dependencies = .{
.vulkan_zig = .{
.path = "..",
},
.vulkan_headers = .{
.url = "https://github.com/KhronosGroup/Vulkan-Headers/archive/v1.3.283.tar.gz",
.hash = "1220a7e73d72a0d56bc2a65f9d8999a7c019e42260a0744c408d1cded111bc205e10",
},
},
.paths = .{""},
}

View File

@@ -1,31 +1,12 @@
const c = @cImport({
pub usingnamespace @cImport({
@cDefine("GLFW_INCLUDE_NONE", {});
@cInclude("GLFW/glfw3.h");
});
const vk = @import("vulkan");
// Re-export the GLFW things that we need
pub const GLFW_TRUE = c.GLFW_TRUE;
pub const GLFW_FALSE = c.GLFW_FALSE;
pub const GLFW_CLIENT_API = c.GLFW_CLIENT_API;
pub const GLFW_NO_API = c.GLFW_NO_API;
pub const GLFWwindow = c.GLFWwindow;
pub const glfwInit = c.glfwInit;
pub const glfwTerminate = c.glfwTerminate;
pub const glfwVulkanSupported = c.glfwVulkanSupported;
pub const glfwWindowHint = c.glfwWindowHint;
pub const glfwCreateWindow = c.glfwCreateWindow;
pub const glfwDestroyWindow = c.glfwDestroyWindow;
pub const glfwWindowShouldClose = c.glfwWindowShouldClose;
pub const glfwGetRequiredInstanceExtensions = c.glfwGetRequiredInstanceExtensions;
pub const glfwGetFramebufferSize = c.glfwGetFramebufferSize;
pub const glfwPollEvents = c.glfwPollEvents;
// usually the GLFW vulkan functions are exported if Vulkan is included,
// but since that's not the case here, they are manually imported.
pub extern fn glfwGetInstanceProcAddress(instance: vk.Instance, procname: [*:0]const u8) vk.PfnVoidFunction;
pub extern fn glfwGetPhysicalDevicePresentationSupport(instance: vk.Instance, pdev: vk.PhysicalDevice, queuefamily: u32) c_int;

View File

@@ -3,54 +3,103 @@ const vk = @import("vulkan");
const c = @import("c.zig");
const Allocator = std.mem.Allocator;
const required_device_extensions = [_][*:0]const u8{vk.extensions.khr_swapchain.name};
/// To construct base, instance and device wrappers for vulkan-zig, you need to pass a list of 'apis' to it.
const apis: []const vk.ApiInfo = &.{
// You can either add individual functions by manually creating an 'api'
.{
.base_commands = .{
.createInstance = true,
},
.instance_commands = .{
.createDevice = true,
},
},
// Or you can add entire feature sets or extensions
vk.features.version_1_0,
vk.extensions.khr_surface,
vk.extensions.khr_swapchain,
const required_device_extensions = [_][]const u8{
vk.extension_info.khr_swapchain.name
};
/// Next, pass the `apis` to the wrappers to create dispatch tables.
const BaseDispatch = vk.BaseWrapper(apis);
const InstanceDispatch = vk.InstanceWrapper(apis);
const DeviceDispatch = vk.DeviceWrapper(apis);
const BaseDispatch = struct {
vkCreateInstance: vk.PfnCreateInstance,
usingnamespace vk.BaseWrapper(@This());
};
// Also create some proxying wrappers, which also have the respective handles
const Instance = vk.InstanceProxy(apis);
const Device = vk.DeviceProxy(apis);
const InstanceDispatch = struct {
vkDestroyInstance: vk.PfnDestroyInstance,
vkCreateDevice: vk.PfnCreateDevice,
vkDestroySurfaceKHR: vk.PfnDestroySurfaceKHR,
vkEnumeratePhysicalDevices: vk.PfnEnumeratePhysicalDevices,
vkGetPhysicalDeviceProperties: vk.PfnGetPhysicalDeviceProperties,
vkEnumerateDeviceExtensionProperties: vk.PfnEnumerateDeviceExtensionProperties,
vkGetPhysicalDeviceSurfaceFormatsKHR: vk.PfnGetPhysicalDeviceSurfaceFormatsKHR,
vkGetPhysicalDeviceSurfacePresentModesKHR: vk.PfnGetPhysicalDeviceSurfacePresentModesKHR,
vkGetPhysicalDeviceSurfaceCapabilitiesKHR: vk.PfnGetPhysicalDeviceSurfaceCapabilitiesKHR,
vkGetPhysicalDeviceQueueFamilyProperties: vk.PfnGetPhysicalDeviceQueueFamilyProperties,
vkGetPhysicalDeviceSurfaceSupportKHR: vk.PfnGetPhysicalDeviceSurfaceSupportKHR,
vkGetPhysicalDeviceMemoryProperties: vk.PfnGetPhysicalDeviceMemoryProperties,
vkGetDeviceProcAddr: vk.PfnGetDeviceProcAddr,
usingnamespace vk.InstanceWrapper(@This());
};
const DeviceDispatch = struct {
vkDestroyDevice: vk.PfnDestroyDevice,
vkGetDeviceQueue: vk.PfnGetDeviceQueue,
vkCreateSemaphore: vk.PfnCreateSemaphore,
vkCreateFence: vk.PfnCreateFence,
vkCreateImageView: vk.PfnCreateImageView,
vkDestroyImageView: vk.PfnDestroyImageView,
vkDestroySemaphore: vk.PfnDestroySemaphore,
vkDestroyFence: vk.PfnDestroyFence,
vkGetSwapchainImagesKHR: vk.PfnGetSwapchainImagesKHR,
vkCreateSwapchainKHR: vk.PfnCreateSwapchainKHR,
vkDestroySwapchainKHR: vk.PfnDestroySwapchainKHR,
vkAcquireNextImageKHR: vk.PfnAcquireNextImageKHR,
vkDeviceWaitIdle: vk.PfnDeviceWaitIdle,
vkWaitForFences: vk.PfnWaitForFences,
vkResetFences: vk.PfnResetFences,
vkQueueSubmit: vk.PfnQueueSubmit,
vkQueuePresentKHR: vk.PfnQueuePresentKHR,
vkCreateCommandPool: vk.PfnCreateCommandPool,
vkDestroyCommandPool: vk.PfnDestroyCommandPool,
vkAllocateCommandBuffers: vk.PfnAllocateCommandBuffers,
vkFreeCommandBuffers: vk.PfnFreeCommandBuffers,
vkQueueWaitIdle: vk.PfnQueueWaitIdle,
vkCreateShaderModule: vk.PfnCreateShaderModule,
vkDestroyShaderModule: vk.PfnDestroyShaderModule,
vkCreatePipelineLayout: vk.PfnCreatePipelineLayout,
vkDestroyPipelineLayout: vk.PfnDestroyPipelineLayout,
vkCreateRenderPass: vk.PfnCreateRenderPass,
vkDestroyRenderPass: vk.PfnDestroyRenderPass,
vkCreateGraphicsPipelines: vk.PfnCreateGraphicsPipelines,
vkDestroyPipeline: vk.PfnDestroyPipeline,
vkCreateFramebuffer: vk.PfnCreateFramebuffer,
vkDestroyFramebuffer: vk.PfnDestroyFramebuffer,
vkBeginCommandBuffer: vk.PfnBeginCommandBuffer,
vkEndCommandBuffer: vk.PfnEndCommandBuffer,
vkAllocateMemory: vk.PfnAllocateMemory,
vkFreeMemory: vk.PfnFreeMemory,
vkCreateBuffer: vk.PfnCreateBuffer,
vkDestroyBuffer: vk.PfnDestroyBuffer,
vkGetBufferMemoryRequirements: vk.PfnGetBufferMemoryRequirements,
vkMapMemory: vk.PfnMapMemory,
vkUnmapMemory: vk.PfnUnmapMemory,
vkBindBufferMemory: vk.PfnBindBufferMemory,
vkCmdBeginRenderPass: vk.PfnCmdBeginRenderPass,
vkCmdEndRenderPass: vk.PfnCmdEndRenderPass,
vkCmdBindPipeline: vk.PfnCmdBindPipeline,
vkCmdDraw: vk.PfnCmdDraw,
vkCmdSetViewport: vk.PfnCmdSetViewport,
vkCmdSetScissor: vk.PfnCmdSetScissor,
vkCmdBindVertexBuffers: vk.PfnCmdBindVertexBuffers,
vkCmdCopyBuffer: vk.PfnCmdCopyBuffer,
usingnamespace vk.DeviceWrapper(@This());
};
pub const GraphicsContext = struct {
pub const CommandBuffer = vk.CommandBufferProxy(apis);
allocator: Allocator,
vkb: BaseDispatch,
vki: InstanceDispatch,
vkd: DeviceDispatch,
instance: Instance,
instance: vk.Instance,
surface: vk.SurfaceKHR,
pdev: vk.PhysicalDevice,
props: vk.PhysicalDeviceProperties,
mem_props: vk.PhysicalDeviceMemoryProperties,
dev: Device,
dev: vk.Device,
graphics_queue: Queue,
present_queue: Queue,
pub fn init(allocator: Allocator, app_name: [*:0]const u8, window: *c.GLFWwindow) !GraphicsContext {
pub fn init(allocator: *Allocator, app_name: [*:0]const u8, window: *c.GLFWwindow) !GraphicsContext {
var self: GraphicsContext = undefined;
self.allocator = allocator;
self.vkb = try BaseDispatch.load(c.glfwGetInstanceProcAddress);
var glfw_exts_count: u32 = 0;
@@ -64,59 +113,51 @@ pub const GraphicsContext = struct {
.api_version = vk.API_VERSION_1_2,
};
const instance = try self.vkb.createInstance(&.{
self.instance = try self.vkb.createInstance(.{
.flags = .{},
.p_application_info = &app_info,
.enabled_layer_count = 0,
.pp_enabled_layer_names = undefined,
.enabled_extension_count = glfw_exts_count,
.pp_enabled_extension_names = @ptrCast(glfw_exts),
.pp_enabled_extension_names = @ptrCast([*]const [*:0]const u8, glfw_exts),
}, null);
const vki = try allocator.create(InstanceDispatch);
errdefer allocator.destroy(vki);
vki.* = try InstanceDispatch.load(instance, self.vkb.dispatch.vkGetInstanceProcAddr);
self.instance = Instance.init(instance, vki);
errdefer self.instance.destroyInstance(null);
self.vki = try InstanceDispatch.load(self.instance, c.glfwGetInstanceProcAddress);
errdefer self.vki.destroyInstance(self.instance, null);
self.surface = try createSurface(self.instance, window);
errdefer self.instance.destroySurfaceKHR(self.surface, null);
self.surface = try createSurface(self.vki, self.instance, window);
errdefer self.vki.destroySurfaceKHR(self.instance, self.surface, null);
const candidate = try pickPhysicalDevice(self.instance, allocator, self.surface);
const candidate = try pickPhysicalDevice(self.vki, self.instance, allocator, self.surface);
self.pdev = candidate.pdev;
self.props = candidate.props;
self.dev = try initializeCandidate(self.vki, candidate);
self.vkd = try DeviceDispatch.load(self.dev, self.vki.vkGetDeviceProcAddr);
errdefer self.vkd.destroyDevice(self.dev, null);
const dev = try initializeCandidate(self.instance, candidate);
self.graphics_queue = Queue.init(self.vkd, self.dev, candidate.queues.graphics_family);
self.present_queue = Queue.init(self.vkd, self.dev, candidate.queues.graphics_family);
const vkd = try allocator.create(DeviceDispatch);
errdefer allocator.destroy(vkd);
vkd.* = try DeviceDispatch.load(dev, self.instance.wrapper.dispatch.vkGetDeviceProcAddr);
self.dev = Device.init(dev, vkd);
errdefer self.dev.destroyDevice(null);
self.graphics_queue = Queue.init(self.dev, candidate.queues.graphics_family);
self.present_queue = Queue.init(self.dev, candidate.queues.present_family);
self.mem_props = self.instance.getPhysicalDeviceMemoryProperties(self.pdev);
self.mem_props = self.vki.getPhysicalDeviceMemoryProperties(self.pdev);
return self;
}
pub fn deinit(self: GraphicsContext) void {
self.dev.destroyDevice(null);
self.instance.destroySurfaceKHR(self.surface, null);
self.instance.destroyInstance(null);
// Don't forget to free the tables to prevent a memory leak.
self.allocator.destroy(self.dev.wrapper);
self.allocator.destroy(self.instance.wrapper);
self.vkd.destroyDevice(self.dev, null);
self.vki.destroySurfaceKHR(self.instance, self.surface, null);
self.vki.destroyInstance(self.instance, null);
}
pub fn deviceName(self: *const GraphicsContext) []const u8 {
return std.mem.sliceTo(&self.props.device_name, 0);
pub fn deviceName(self: GraphicsContext) []const u8 {
const len = std.mem.indexOfScalar(u8, &self.props.device_name, 0).?;
return self.props.device_name[0 .. len];
}
pub fn findMemoryTypeIndex(self: GraphicsContext, memory_type_bits: u32, flags: vk.MemoryPropertyFlags) !u32 {
for (self.mem_props.memory_types[0..self.mem_props.memory_type_count], 0..) |mem_type, i| {
if (memory_type_bits & (@as(u32, 1) << @truncate(i)) != 0 and mem_type.property_flags.contains(flags)) {
return @truncate(i);
for (self.mem_props.memory_types[0 .. self.mem_props.memory_type_count]) |mem_type, i| {
if (memory_type_bits & (@as(u32, 1) << @truncate(u5, i)) != 0 and mem_type.property_flags.contains(flags)) {
return @truncate(u32, i);
}
}
@@ -124,7 +165,7 @@ pub const GraphicsContext = struct {
}
pub fn allocate(self: GraphicsContext, requirements: vk.MemoryRequirements, flags: vk.MemoryPropertyFlags) !vk.DeviceMemory {
return try self.dev.allocateMemory(&.{
return try self.vkd.allocateMemory(self.dev, .{
.allocation_size = requirements.size,
.memory_type_index = try self.findMemoryTypeIndex(requirements.memory_type_bits, flags),
}, null);
@@ -135,48 +176,54 @@ pub const Queue = struct {
handle: vk.Queue,
family: u32,
fn init(device: Device, family: u32) Queue {
fn init(vkd: DeviceDispatch, dev: vk.Device, family: u32) Queue {
return .{
.handle = device.getDeviceQueue(family, 0),
.handle = vkd.getDeviceQueue(dev, family, 0),
.family = family,
};
}
};
fn createSurface(instance: Instance, window: *c.GLFWwindow) !vk.SurfaceKHR {
fn createSurface(vki: InstanceDispatch, instance: vk.Instance, window: *c.GLFWwindow) !vk.SurfaceKHR {
var surface: vk.SurfaceKHR = undefined;
if (c.glfwCreateWindowSurface(instance.handle, window, null, &surface) != .success) {
if (c.glfwCreateWindowSurface(instance, window, null, &surface) != .success) {
return error.SurfaceInitFailed;
}
return surface;
}
fn initializeCandidate(instance: Instance, candidate: DeviceCandidate) !vk.Device {
fn initializeCandidate(vki: InstanceDispatch, candidate: DeviceCandidate) !vk.Device {
const priority = [_]f32{1};
const qci = [_]vk.DeviceQueueCreateInfo{
.{
.flags = .{},
.queue_family_index = candidate.queues.graphics_family,
.queue_count = 1,
.p_queue_priorities = &priority,
},
.{
.flags = .{},
.queue_family_index = candidate.queues.present_family,
.queue_count = 1,
.p_queue_priorities = &priority,
},
}
};
const queue_count: u32 = if (candidate.queues.graphics_family == candidate.queues.present_family)
1
else
2;
1
else
2;
return try instance.createDevice(candidate.pdev, &.{
return try vki.createDevice(candidate.pdev, .{
.flags = .{},
.queue_create_info_count = queue_count,
.p_queue_create_infos = &qci,
.enabled_layer_count = 0,
.pp_enabled_layer_names = undefined,
.enabled_extension_count = required_device_extensions.len,
.pp_enabled_extension_names = @ptrCast(&required_device_extensions),
.pp_enabled_extension_names = @ptrCast([*]const [*:0]const u8, &required_device_extensions),
.p_enabled_features = null,
}, null);
}
@@ -192,15 +239,21 @@ const QueueAllocation = struct {
};
fn pickPhysicalDevice(
instance: Instance,
allocator: Allocator,
vki: InstanceDispatch,
instance: vk.Instance,
allocator: *Allocator,
surface: vk.SurfaceKHR,
) !DeviceCandidate {
const pdevs = try instance.enumeratePhysicalDevicesAlloc(allocator);
var device_count: u32 = undefined;
_ = try vki.enumeratePhysicalDevices(instance, &device_count, null);
const pdevs = try allocator.alloc(vk.PhysicalDevice, device_count);
defer allocator.free(pdevs);
_ = try vki.enumeratePhysicalDevices(instance, &device_count, pdevs.ptr);
for (pdevs) |pdev| {
if (try checkSuitable(instance, pdev, allocator, surface)) |candidate| {
if (try checkSuitable(vki, pdev, allocator, surface)) |candidate| {
return candidate;
}
}
@@ -209,46 +262,56 @@ fn pickPhysicalDevice(
}
fn checkSuitable(
instance: Instance,
vki: InstanceDispatch,
pdev: vk.PhysicalDevice,
allocator: Allocator,
allocator: *Allocator,
surface: vk.SurfaceKHR,
) !?DeviceCandidate {
if (!try checkExtensionSupport(instance, pdev, allocator)) {
const props = vki.getPhysicalDeviceProperties(pdev);
if (!try checkExtensionSupport(vki, pdev, allocator)) {
return null;
}
if (!try checkSurfaceSupport(instance, pdev, surface)) {
if (!try checkSurfaceSupport(vki, pdev, surface)) {
return null;
}
if (try allocateQueues(instance, pdev, allocator, surface)) |allocation| {
const props = instance.getPhysicalDeviceProperties(pdev);
if (try allocateQueues(vki, pdev, allocator, surface)) |allocation| {
return DeviceCandidate{
.pdev = pdev,
.props = props,
.queues = allocation,
.queues = allocation
};
}
return null;
}
fn allocateQueues(instance: Instance, pdev: vk.PhysicalDevice, allocator: Allocator, surface: vk.SurfaceKHR) !?QueueAllocation {
const families = try instance.getPhysicalDeviceQueueFamilyPropertiesAlloc(pdev, allocator);
fn allocateQueues(
vki: InstanceDispatch,
pdev: vk.PhysicalDevice,
allocator: *Allocator,
surface: vk.SurfaceKHR
) !?QueueAllocation {
var family_count: u32 = undefined;
vki.getPhysicalDeviceQueueFamilyProperties(pdev, &family_count, null);
const families = try allocator.alloc(vk.QueueFamilyProperties, family_count);
defer allocator.free(families);
vki.getPhysicalDeviceQueueFamilyProperties(pdev, &family_count, families.ptr);
var graphics_family: ?u32 = null;
var present_family: ?u32 = null;
for (families, 0..) |properties, i| {
const family: u32 = @intCast(i);
for (families) |properties, i| {
const family = @intCast(u32, i);
if (graphics_family == null and properties.queue_flags.graphics_bit) {
if (graphics_family == null and properties.queue_flags.contains(.{.graphics_bit = true})) {
graphics_family = family;
}
if (present_family == null and (try instance.getPhysicalDeviceSurfaceSupportKHR(pdev, family, surface)) == vk.TRUE) {
if (present_family == null and (try vki.getPhysicalDeviceSurfaceSupportKHR(pdev, family, surface)) == vk.TRUE) {
present_family = family;
}
}
@@ -256,34 +319,41 @@ fn allocateQueues(instance: Instance, pdev: vk.PhysicalDevice, allocator: Alloca
if (graphics_family != null and present_family != null) {
return QueueAllocation{
.graphics_family = graphics_family.?,
.present_family = present_family.?,
.present_family = present_family.?
};
}
return null;
}
fn checkSurfaceSupport(instance: Instance, pdev: vk.PhysicalDevice, surface: vk.SurfaceKHR) !bool {
fn checkSurfaceSupport(vki: InstanceDispatch, pdev: vk.PhysicalDevice, surface: vk.SurfaceKHR) !bool {
var format_count: u32 = undefined;
_ = try instance.getPhysicalDeviceSurfaceFormatsKHR(pdev, surface, &format_count, null);
_ = try vki.getPhysicalDeviceSurfaceFormatsKHR(pdev, surface, &format_count, null);
var present_mode_count: u32 = undefined;
_ = try instance.getPhysicalDeviceSurfacePresentModesKHR(pdev, surface, &present_mode_count, null);
_ = try vki.getPhysicalDeviceSurfacePresentModesKHR(pdev, surface, &present_mode_count, null);
return format_count > 0 and present_mode_count > 0;
}
fn checkExtensionSupport(
instance: Instance,
vki: InstanceDispatch,
pdev: vk.PhysicalDevice,
allocator: Allocator,
allocator: *Allocator,
) !bool {
const propsv = try instance.enumerateDeviceExtensionPropertiesAlloc(pdev, null, allocator);
var count: u32 = undefined;
_ = try vki.enumerateDeviceExtensionProperties(pdev, null, &count, null);
const propsv = try allocator.alloc(vk.ExtensionProperties, count);
defer allocator.free(propsv);
_ = try vki.enumerateDeviceExtensionProperties(pdev, null, &count, propsv.ptr);
for (required_device_extensions) |ext| {
for (propsv) |props| {
if (std.mem.eql(u8, std.mem.span(ext), std.mem.sliceTo(&props.extension_name, 0))) {
const len = std.mem.indexOfScalar(u8, &props.extension_name, 0).?;
const prop_ext_name = props.extension_name[0 .. len];
if (std.mem.eql(u8, ext, prop_ext_name)) {
break;
}
} else {

View File

@@ -10,7 +10,7 @@ pub const Swapchain = struct {
};
gc: *const GraphicsContext,
allocator: Allocator,
allocator: *Allocator,
surface_format: vk.SurfaceFormatKHR,
present_mode: vk.PresentModeKHR,
@@ -21,12 +21,12 @@ pub const Swapchain = struct {
image_index: u32,
next_image_acquired: vk.Semaphore,
pub fn init(gc: *const GraphicsContext, allocator: Allocator, extent: vk.Extent2D) !Swapchain {
pub fn init(gc: *const GraphicsContext, allocator: *Allocator, extent: vk.Extent2D) !Swapchain {
return try initRecycle(gc, allocator, extent, .null_handle);
}
pub fn initRecycle(gc: *const GraphicsContext, allocator: Allocator, extent: vk.Extent2D, old_handle: vk.SwapchainKHR) !Swapchain {
const caps = try gc.instance.getPhysicalDeviceSurfaceCapabilitiesKHR(gc.pdev, gc.surface);
pub fn initRecycle(gc: *const GraphicsContext, allocator: *Allocator, extent: vk.Extent2D, old_handle: vk.SwapchainKHR) !Swapchain {
const caps = try gc.vki.getPhysicalDeviceSurfaceCapabilitiesKHR(gc.pdev, gc.surface);
const actual_extent = findActualExtent(caps, extent);
if (actual_extent.width == 0 or actual_extent.height == 0) {
return error.InvalidSurfaceDimensions;
@@ -37,49 +37,44 @@ pub const Swapchain = struct {
var image_count = caps.min_image_count + 1;
if (caps.max_image_count > 0) {
image_count = @min(image_count, caps.max_image_count);
image_count = std.math.min(image_count, caps.max_image_count);
}
const qfi = [_]u32{ gc.graphics_queue.family, gc.present_queue.family };
const sharing_mode: vk.SharingMode = if (gc.graphics_queue.family != gc.present_queue.family)
.concurrent
else
.exclusive;
const concurrent = gc.graphics_queue.family != gc.present_queue.family;
const qfi = [_]u32{gc.graphics_queue.family, gc.present_queue.family};
const handle = try gc.dev.createSwapchainKHR(&.{
const handle = try gc.vkd.createSwapchainKHR(gc.dev, .{
.flags = .{},
.surface = gc.surface,
.min_image_count = image_count,
.image_format = surface_format.format,
.image_color_space = surface_format.color_space,
.image_extent = actual_extent,
.image_array_layers = 1,
.image_usage = .{ .color_attachment_bit = true, .transfer_dst_bit = true },
.image_sharing_mode = sharing_mode,
.image_usage = .{.color_attachment_bit = true, .transfer_dst_bit = true},
.image_sharing_mode = if (concurrent) .concurrent else .exclusive,
.queue_family_index_count = qfi.len,
.p_queue_family_indices = &qfi,
.pre_transform = caps.current_transform,
.composite_alpha = .{ .opaque_bit_khr = true },
.composite_alpha = .{.opaque_bit_khr = true},
.present_mode = present_mode,
.clipped = vk.TRUE,
.old_swapchain = old_handle,
}, null);
errdefer gc.dev.destroySwapchainKHR(handle, null);
errdefer gc.vkd.destroySwapchainKHR(gc.dev, handle, null);
if (old_handle != .null_handle) {
// Apparently, the old swapchain handle still needs to be destroyed after recreating.
gc.dev.destroySwapchainKHR(old_handle, null);
gc.vkd.destroySwapchainKHR(gc.dev, old_handle, null);
}
const swap_images = try initSwapchainImages(gc, handle, surface_format.format, allocator);
errdefer {
for (swap_images) |si| si.deinit(gc);
allocator.free(swap_images);
}
errdefer for (swap_images) |si| si.deinit(gc);
var next_image_acquired = try gc.dev.createSemaphore(&.{}, null);
errdefer gc.dev.destroySemaphore(next_image_acquired, null);
var next_image_acquired = try gc.vkd.createSemaphore(gc.dev, .{.flags = .{}}, null);
errdefer gc.vkd.destroySemaphore(gc.dev, next_image_acquired, null);
const result = try gc.dev.acquireNextImageKHR(handle, std.math.maxInt(u64), next_image_acquired, .null_handle);
const result = try gc.vkd.acquireNextImageKHR(gc.dev, handle, std.math.maxInt(u64), next_image_acquired, .null_handle);
if (result.result != .success) {
return error.ImageAcquireFailed;
}
@@ -100,8 +95,7 @@ pub const Swapchain = struct {
fn deinitExceptSwapchain(self: Swapchain) void {
for (self.swap_images) |si| si.deinit(self.gc);
self.allocator.free(self.swap_images);
self.gc.dev.destroySemaphore(self.next_image_acquired, null);
self.gc.vkd.destroySemaphore(self.gc.dev, self.next_image_acquired, null);
}
pub fn waitForAllFences(self: Swapchain) !void {
@@ -110,7 +104,7 @@ pub const Swapchain = struct {
pub fn deinit(self: Swapchain) void {
self.deinitExceptSwapchain();
self.gc.dev.destroySwapchainKHR(self.handle, null);
self.gc.vkd.destroySwapchainKHR(self.gc.dev, self.handle, null);
}
pub fn recreate(self: *Swapchain, new_extent: vk.Extent2D) !void {
@@ -150,31 +144,33 @@ pub const Swapchain = struct {
// Step 1: Make sure the current frame has finished rendering
const current = self.currentSwapImage();
try current.waitForFence(self.gc);
try self.gc.dev.resetFences(1, @ptrCast(&current.frame_fence));
try self.gc.vkd.resetFences(self.gc.dev, 1, @ptrCast([*]const vk.Fence, &current.frame_fence));
// Step 2: Submit the command buffer
const wait_stage = [_]vk.PipelineStageFlags{.{ .top_of_pipe_bit = true }};
try self.gc.dev.queueSubmit(self.gc.graphics_queue.handle, 1, &[_]vk.SubmitInfo{.{
const wait_stage = [_]vk.PipelineStageFlags{.{.top_of_pipe_bit = true}};
try self.gc.vkd.queueSubmit(self.gc.graphics_queue.handle, 1, &[_]vk.SubmitInfo{.{
.wait_semaphore_count = 1,
.p_wait_semaphores = @ptrCast(&current.image_acquired),
.p_wait_semaphores = @ptrCast([*]const vk.Semaphore, &current.image_acquired),
.p_wait_dst_stage_mask = &wait_stage,
.command_buffer_count = 1,
.p_command_buffers = @ptrCast(&cmdbuf),
.p_command_buffers = @ptrCast([*]const vk.CommandBuffer, &cmdbuf),
.signal_semaphore_count = 1,
.p_signal_semaphores = @ptrCast(&current.render_finished),
.p_signal_semaphores = @ptrCast([*]const vk.Semaphore, &current.render_finished),
}}, current.frame_fence);
// Step 3: Present the current frame
_ = try self.gc.dev.queuePresentKHR(self.gc.present_queue.handle, &.{
_ = try self.gc.vkd.queuePresentKHR(self.gc.present_queue.handle, .{
.wait_semaphore_count = 1,
.p_wait_semaphores = @ptrCast(&current.render_finished),
.p_wait_semaphores = @ptrCast([*]const vk.Semaphore, &current.render_finished),
.swapchain_count = 1,
.p_swapchains = @ptrCast(&self.handle),
.p_image_indices = @ptrCast(&self.image_index),
.p_swapchains = @ptrCast([*]const vk.SwapchainKHR, &self.handle),
.p_image_indices = @ptrCast([*]const u32, &self.image_index),
.p_results = null,
});
// Step 4: Acquire next frame
const result = try self.gc.dev.acquireNextImageKHR(
const result = try self.gc.vkd.acquireNextImageKHR(
self.gc.dev,
self.handle,
std.math.maxInt(u64),
self.next_image_acquired,
@@ -200,29 +196,30 @@ const SwapImage = struct {
frame_fence: vk.Fence,
fn init(gc: *const GraphicsContext, image: vk.Image, format: vk.Format) !SwapImage {
const view = try gc.dev.createImageView(&.{
const view = try gc.vkd.createImageView(gc.dev, .{
.flags = .{},
.image = image,
.view_type = .@"2d",
.format = format,
.components = .{ .r = .identity, .g = .identity, .b = .identity, .a = .identity },
.components = .{.r = .identity, .g = .identity, .b = .identity, .a = .identity},
.subresource_range = .{
.aspect_mask = .{ .color_bit = true },
.aspect_mask = .{.color_bit = true},
.base_mip_level = 0,
.level_count = 1,
.base_array_layer = 0,
.layer_count = 1,
},
}, null);
errdefer gc.dev.destroyImageView(view, null);
errdefer gc.vkd.destroyImageView(gc.dev, view, null);
const image_acquired = try gc.dev.createSemaphore(&.{}, null);
errdefer gc.dev.destroySemaphore(image_acquired, null);
const image_acquired = try gc.vkd.createSemaphore(gc.dev, .{.flags = .{}}, null);
errdefer gc.vkd.destroySemaphore(gc.dev, image_acquired, null);
const render_finished = try gc.dev.createSemaphore(&.{}, null);
errdefer gc.dev.destroySemaphore(render_finished, null);
const render_finished = try gc.vkd.createSemaphore(gc.dev, .{.flags = .{}}, null);
errdefer gc.vkd.destroySemaphore(gc.dev, render_finished, null);
const frame_fence = try gc.dev.createFence(&.{ .flags = .{ .signaled_bit = true } }, null);
errdefer gc.dev.destroyFence(frame_fence, null);
const frame_fence = try gc.vkd.createFence(gc.dev, .{.flags = .{.signaled_bit = true}}, null);
errdefer gc.vkd.destroyFence(gc.dev, frame_fence, null);
return SwapImage{
.image = image,
@@ -235,26 +232,29 @@ const SwapImage = struct {
fn deinit(self: SwapImage, gc: *const GraphicsContext) void {
self.waitForFence(gc) catch return;
gc.dev.destroyImageView(self.view, null);
gc.dev.destroySemaphore(self.image_acquired, null);
gc.dev.destroySemaphore(self.render_finished, null);
gc.dev.destroyFence(self.frame_fence, null);
gc.vkd.destroyImageView(gc.dev, self.view, null);
gc.vkd.destroySemaphore(gc.dev, self.image_acquired, null);
gc.vkd.destroySemaphore(gc.dev, self.render_finished, null);
gc.vkd.destroyFence(gc.dev, self.frame_fence, null);
}
fn waitForFence(self: SwapImage, gc: *const GraphicsContext) !void {
_ = try gc.dev.waitForFences(1, @ptrCast(&self.frame_fence), vk.TRUE, std.math.maxInt(u64));
_ = try gc.vkd.waitForFences(gc.dev, 1, @ptrCast([*]const vk.Fence, &self.frame_fence), vk.TRUE, std.math.maxInt(u64));
}
};
fn initSwapchainImages(gc: *const GraphicsContext, swapchain: vk.SwapchainKHR, format: vk.Format, allocator: Allocator) ![]SwapImage {
const images = try gc.dev.getSwapchainImagesAllocKHR(swapchain, allocator);
fn initSwapchainImages(gc: *const GraphicsContext, swapchain: vk.SwapchainKHR, format: vk.Format, allocator: *Allocator) ![]SwapImage {
var count: u32 = undefined;
_ = try gc.vkd.getSwapchainImagesKHR(gc.dev, swapchain, &count, null);
const images = try allocator.alloc(vk.Image, count);
defer allocator.free(images);
_ = try gc.vkd.getSwapchainImagesKHR(gc.dev, swapchain, &count, images.ptr);
const swap_images = try allocator.alloc(SwapImage, images.len);
errdefer allocator.free(swap_images);
const swap_images = try allocator.alloc(SwapImage, count);
errdefer allocator.free(swap_images);
var i: usize = 0;
errdefer for (swap_images[0..i]) |si| si.deinit(gc);
errdefer for (swap_images[0 .. i]) |si| si.deinit(gc);
for (images) |image| {
swap_images[i] = try SwapImage.init(gc, image, format);
@@ -264,14 +264,17 @@ fn initSwapchainImages(gc: *const GraphicsContext, swapchain: vk.SwapchainKHR, f
return swap_images;
}
fn findSurfaceFormat(gc: *const GraphicsContext, allocator: Allocator) !vk.SurfaceFormatKHR {
fn findSurfaceFormat(gc: *const GraphicsContext, allocator: *Allocator) !vk.SurfaceFormatKHR {
const preferred = vk.SurfaceFormatKHR{
.format = .b8g8r8a8_srgb,
.color_space = .srgb_nonlinear_khr,
};
const surface_formats = try gc.instance.getPhysicalDeviceSurfaceFormatsAllocKHR(gc.pdev, gc.surface, allocator);
var count: u32 = undefined;
_ = try gc.vki.getPhysicalDeviceSurfaceFormatsKHR(gc.pdev, gc.surface, &count, null);
const surface_formats = try allocator.alloc(vk.SurfaceFormatKHR, count);
defer allocator.free(surface_formats);
_ = try gc.vki.getPhysicalDeviceSurfaceFormatsKHR(gc.pdev, gc.surface, &count, surface_formats.ptr);
for (surface_formats) |sfmt| {
if (std.meta.eql(sfmt, preferred)) {
@@ -282,9 +285,12 @@ fn findSurfaceFormat(gc: *const GraphicsContext, allocator: Allocator) !vk.Surfa
return surface_formats[0]; // There must always be at least one supported surface format
}
fn findPresentMode(gc: *const GraphicsContext, allocator: Allocator) !vk.PresentModeKHR {
const present_modes = try gc.instance.getPhysicalDeviceSurfacePresentModesAllocKHR(gc.pdev, gc.surface, allocator);
fn findPresentMode(gc: *const GraphicsContext, allocator: *Allocator) !vk.PresentModeKHR {
var count: u32 = undefined;
_ = try gc.vki.getPhysicalDeviceSurfacePresentModesKHR(gc.pdev, gc.surface, &count, null);
const present_modes = try allocator.alloc(vk.PresentModeKHR, count);
defer allocator.free(present_modes);
_ = try gc.vki.getPhysicalDeviceSurfacePresentModesKHR(gc.pdev, gc.surface, &count, present_modes.ptr);
const preferred = [_]vk.PresentModeKHR{
.mailbox_khr,

View File

@@ -1,13 +1,11 @@
const std = @import("std");
const vk = @import("vulkan");
const c = @import("c.zig");
const resources = @import("resources");
const GraphicsContext = @import("graphics_context.zig").GraphicsContext;
const Swapchain = @import("swapchain.zig").Swapchain;
const Allocator = std.mem.Allocator;
const vert_spv align(@alignOf(u32)) = @embedFile("vertex_shader").*;
const frag_spv align(@alignOf(u32)) = @embedFile("fragment_shader").*;
const app_name = "vulkan-zig triangle example";
const Vertex = struct {
@@ -22,13 +20,13 @@ const Vertex = struct {
.binding = 0,
.location = 0,
.format = .r32g32_sfloat,
.offset = @offsetOf(Vertex, "pos"),
.offset = @byteOffsetOf(Vertex, "pos"),
},
.{
.binding = 0,
.location = 1,
.format = .r32g32b32_sfloat,
.offset = @offsetOf(Vertex, "color"),
.offset = @byteOffsetOf(Vertex, "color"),
},
};
@@ -37,79 +35,76 @@ const Vertex = struct {
};
const vertices = [_]Vertex{
.{ .pos = .{ 0, -0.5 }, .color = .{ 1, 0, 0 } },
.{ .pos = .{ 0.5, 0.5 }, .color = .{ 0, 1, 0 } },
.{ .pos = .{ -0.5, 0.5 }, .color = .{ 0, 0, 1 } },
.{.pos = .{0, -0.5}, .color = .{1, 0, 0}},
.{.pos = .{0.5, 0.5}, .color = .{0, 1, 0}},
.{.pos = .{-0.5, 0.5}, .color = .{0, 0, 1}},
};
pub fn main() !void {
if (c.glfwInit() != c.GLFW_TRUE) return error.GlfwInitFailed;
defer c.glfwTerminate();
if (c.glfwVulkanSupported() != c.GLFW_TRUE) {
std.log.err("GLFW could not find libvulkan", .{});
return error.NoVulkan;
}
var extent = vk.Extent2D{ .width = 800, .height = 600 };
var extent = vk.Extent2D{.width = 800, .height = 600};
c.glfwWindowHint(c.GLFW_CLIENT_API, c.GLFW_NO_API);
const window = c.glfwCreateWindow(
@intCast(extent.width),
@intCast(extent.height),
@intCast(c_int, extent.width),
@intCast(c_int, extent.height),
app_name,
null,
null,
null
) orelse return error.WindowInitFailed;
defer c.glfwDestroyWindow(window);
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
const allocator = gpa.allocator();
const allocator = std.heap.page_allocator;
const gc = try GraphicsContext.init(allocator, app_name, window);
defer gc.deinit();
std.log.debug("Using device: {s}", .{gc.deviceName()});
std.debug.print("Using device: {s}\n", .{ gc.deviceName() });
var swapchain = try Swapchain.init(&gc, allocator, extent);
defer swapchain.deinit();
const pipeline_layout = try gc.dev.createPipelineLayout(&.{
const pipeline_layout = try gc.vkd.createPipelineLayout(gc.dev, .{
.flags = .{},
.set_layout_count = 0,
.p_set_layouts = undefined,
.push_constant_range_count = 0,
.p_push_constant_ranges = undefined,
}, null);
defer gc.dev.destroyPipelineLayout(pipeline_layout, null);
defer gc.vkd.destroyPipelineLayout(gc.dev, pipeline_layout, null);
const render_pass = try createRenderPass(&gc, swapchain);
defer gc.dev.destroyRenderPass(render_pass, null);
defer gc.vkd.destroyRenderPass(gc.dev, render_pass, null);
const pipeline = try createPipeline(&gc, pipeline_layout, render_pass);
defer gc.dev.destroyPipeline(pipeline, null);
var pipeline = try createPipeline(&gc, extent, pipeline_layout, render_pass);
defer gc.vkd.destroyPipeline(gc.dev, pipeline, null);
var framebuffers = try createFramebuffers(&gc, allocator, render_pass, swapchain);
defer destroyFramebuffers(&gc, allocator, framebuffers);
const pool = try gc.dev.createCommandPool(&.{
const pool = try gc.vkd.createCommandPool(gc.dev, .{
.flags = .{},
.queue_family_index = gc.graphics_queue.family,
}, null);
defer gc.dev.destroyCommandPool(pool, null);
defer gc.vkd.destroyCommandPool(gc.dev, pool, null);
const buffer = try gc.dev.createBuffer(&.{
const buffer = try gc.vkd.createBuffer(gc.dev, .{
.flags = .{},
.size = @sizeOf(@TypeOf(vertices)),
.usage = .{ .transfer_dst_bit = true, .vertex_buffer_bit = true },
.usage = .{.transfer_dst_bit = true, .vertex_buffer_bit = true},
.sharing_mode = .exclusive,
.queue_family_index_count = 0,
.p_queue_family_indices = undefined,
}, null);
defer gc.dev.destroyBuffer(buffer, null);
const mem_reqs = gc.dev.getBufferMemoryRequirements(buffer);
const memory = try gc.allocate(mem_reqs, .{ .device_local_bit = true });
defer gc.dev.freeMemory(memory, null);
try gc.dev.bindBufferMemory(buffer, memory, 0);
defer gc.vkd.destroyBuffer(gc.dev, buffer, null);
const mem_reqs = gc.vkd.getBufferMemoryRequirements(gc.dev, buffer);
const memory = try gc.allocate(mem_reqs, .{.device_local_bit = true});
defer gc.vkd.freeMemory(gc.dev, memory, null);
try gc.vkd.bindBufferMemory(gc.dev, buffer, memory, 0);
try uploadVertices(&gc, pool, buffer);
try uploadVertices(&gc, pool, buffer, memory);
var cmdbufs = try createCommandBuffers(
&gc,
@@ -119,21 +114,11 @@ pub fn main() !void {
swapchain.extent,
render_pass,
pipeline,
framebuffers,
framebuffers
);
defer destroyCommandBuffers(&gc, pool, allocator, cmdbufs);
while (c.glfwWindowShouldClose(window) == c.GLFW_FALSE) {
var w: c_int = undefined;
var h: c_int = undefined;
c.glfwGetFramebufferSize(window, &w, &h);
// Don't present or resize swapchain while the window is minimized
if (w == 0 or h == 0) {
c.glfwPollEvents();
continue;
}
const cmdbuf = cmdbufs[swapchain.image_index];
const state = swapchain.present(cmdbuf) catch |err| switch (err) {
@@ -141,9 +126,12 @@ pub fn main() !void {
else => |narrow| return narrow,
};
if (state == .suboptimal or extent.width != @as(u32, @intCast(w)) or extent.height != @as(u32, @intCast(h))) {
extent.width = @intCast(w);
extent.height = @intCast(h);
if (state == .suboptimal) {
var w: c_int = undefined;
var h: c_int = undefined;
c.glfwGetWindowSize(window, &w, &h);
extent.width = @intCast(u32, w);
extent.height = @intCast(u32, h);
try swapchain.recreate(extent);
destroyFramebuffers(&gc, allocator, framebuffers);
@@ -158,53 +146,58 @@ pub fn main() !void {
swapchain.extent,
render_pass,
pipeline,
framebuffers,
framebuffers
);
}
c.glfwSwapBuffers(window);
c.glfwPollEvents();
}
try swapchain.waitForAllFences();
try gc.dev.deviceWaitIdle();
}
fn uploadVertices(gc: *const GraphicsContext, pool: vk.CommandPool, buffer: vk.Buffer) !void {
const staging_buffer = try gc.dev.createBuffer(&.{
fn uploadVertices(gc: *const GraphicsContext, pool: vk.CommandPool, buffer: vk.Buffer, memory: vk.DeviceMemory) !void {
const staging_buffer = try gc.vkd.createBuffer(gc.dev, .{
.flags = .{},
.size = @sizeOf(@TypeOf(vertices)),
.usage = .{ .transfer_src_bit = true },
.usage = .{.transfer_src_bit = true},
.sharing_mode = .exclusive,
.queue_family_index_count = 0,
.p_queue_family_indices = undefined,
}, null);
defer gc.dev.destroyBuffer(staging_buffer, null);
const mem_reqs = gc.dev.getBufferMemoryRequirements(staging_buffer);
const staging_memory = try gc.allocate(mem_reqs, .{ .host_visible_bit = true, .host_coherent_bit = true });
defer gc.dev.freeMemory(staging_memory, null);
try gc.dev.bindBufferMemory(staging_buffer, staging_memory, 0);
defer gc.vkd.destroyBuffer(gc.dev, staging_buffer, null);
const mem_reqs = gc.vkd.getBufferMemoryRequirements(gc.dev, staging_buffer);
const staging_memory = try gc.allocate(mem_reqs, .{.host_visible_bit = true, .host_coherent_bit = true});
defer gc.vkd.freeMemory(gc.dev, staging_memory, null);
try gc.vkd.bindBufferMemory(gc.dev, staging_buffer, staging_memory, 0);
{
const data = try gc.dev.mapMemory(staging_memory, 0, vk.WHOLE_SIZE, .{});
defer gc.dev.unmapMemory(staging_memory);
const data = try gc.vkd.mapMemory(gc.dev, staging_memory, 0, vk.WHOLE_SIZE, .{});
defer gc.vkd.unmapMemory(gc.dev, staging_memory);
const gpu_vertices: [*]Vertex = @ptrCast(@alignCast(data));
@memcpy(gpu_vertices, vertices[0..]);
const gpu_vertices = @ptrCast([*]Vertex, @alignCast(@alignOf(Vertex), data));
for (vertices) |vertex, i| {
gpu_vertices[i] = vertex;
}
}
try copyBuffer(gc, pool, buffer, staging_buffer, @sizeOf(@TypeOf(vertices)));
}
fn copyBuffer(gc: *const GraphicsContext, pool: vk.CommandPool, dst: vk.Buffer, src: vk.Buffer, size: vk.DeviceSize) !void {
var cmdbuf_handle: vk.CommandBuffer = undefined;
try gc.dev.allocateCommandBuffers(&.{
var cmdbuf: vk.CommandBuffer = undefined;
try gc.vkd.allocateCommandBuffers(gc.dev, .{
.command_pool = pool,
.level = .primary,
.command_buffer_count = 1,
}, @ptrCast(&cmdbuf_handle));
defer gc.dev.freeCommandBuffers(pool, 1, @ptrCast(&cmdbuf_handle));
}, @ptrCast([*]vk.CommandBuffer, &cmdbuf));
defer gc.vkd.freeCommandBuffers(gc.dev, pool, 1, @ptrCast([*]const vk.CommandBuffer, &cmdbuf));
const cmdbuf = GraphicsContext.CommandBuffer.init(cmdbuf_handle, gc.dev.wrapper);
try cmdbuf.beginCommandBuffer(&.{
.flags = .{ .one_time_submit_bit = true },
try gc.vkd.beginCommandBuffer(cmdbuf, .{
.flags = .{.one_time_submit_bit = true},
.p_inheritance_info = null,
});
const region = vk.BufferCopy{
@@ -212,23 +205,27 @@ fn copyBuffer(gc: *const GraphicsContext, pool: vk.CommandPool, dst: vk.Buffer,
.dst_offset = 0,
.size = size,
};
cmdbuf.copyBuffer(src, dst, 1, @ptrCast(&region));
gc.vkd.cmdCopyBuffer(cmdbuf, src, dst, 1, @ptrCast([*]const vk.BufferCopy, &region));
try cmdbuf.endCommandBuffer();
try gc.vkd.endCommandBuffer(cmdbuf);
const si = vk.SubmitInfo{
.command_buffer_count = 1,
.p_command_buffers = (&cmdbuf.handle)[0..1],
.wait_semaphore_count = 0,
.p_wait_semaphores = undefined,
.p_wait_dst_stage_mask = undefined,
.command_buffer_count = 1,
.p_command_buffers = @ptrCast([*]const vk.CommandBuffer, &cmdbuf),
.signal_semaphore_count = 0,
.p_signal_semaphores = undefined,
};
try gc.dev.queueSubmit(gc.graphics_queue.handle, 1, @ptrCast(&si), .null_handle);
try gc.dev.queueWaitIdle(gc.graphics_queue.handle);
try gc.vkd.queueSubmit(gc.graphics_queue.handle, 1, @ptrCast([*]const vk.SubmitInfo, &si), .null_handle);
try gc.vkd.queueWaitIdle(gc.graphics_queue.handle);
}
fn createCommandBuffers(
gc: *const GraphicsContext,
pool: vk.CommandPool,
allocator: Allocator,
allocator: *Allocator,
buffer: vk.Buffer,
extent: vk.Extent2D,
render_pass: vk.RenderPass,
@@ -238,80 +235,86 @@ fn createCommandBuffers(
const cmdbufs = try allocator.alloc(vk.CommandBuffer, framebuffers.len);
errdefer allocator.free(cmdbufs);
try gc.dev.allocateCommandBuffers(&.{
try gc.vkd.allocateCommandBuffers(gc.dev, .{
.command_pool = pool,
.level = .primary,
.command_buffer_count = @intCast(cmdbufs.len),
.command_buffer_count = @truncate(u32, cmdbufs.len),
}, cmdbufs.ptr);
errdefer gc.dev.freeCommandBuffers(pool, @intCast(cmdbufs.len), cmdbufs.ptr);
errdefer gc.vkd.freeCommandBuffers(gc.dev, pool, @truncate(u32, cmdbufs.len), cmdbufs.ptr);
const clear = vk.ClearValue{
.color = .{ .float_32 = .{ 0, 0, 0, 1 } },
.color = .{.float_32 = .{0, 0, 0, 1}},
};
const viewport = vk.Viewport{
.x = 0,
.y = 0,
.width = @floatFromInt(extent.width),
.height = @floatFromInt(extent.height),
.width = @intToFloat(f32, extent.width),
.height = @intToFloat(f32, extent.height),
.min_depth = 0,
.max_depth = 1,
};
const scissor = vk.Rect2D{
.offset = .{ .x = 0, .y = 0 },
.offset = .{.x = 0, .y = 0},
.extent = extent,
};
for (cmdbufs, framebuffers) |cmdbuf, framebuffer| {
try gc.dev.beginCommandBuffer(cmdbuf, &.{});
for (cmdbufs) |cmdbuf, i| {
try gc.vkd.beginCommandBuffer(cmdbuf, .{
.flags = .{},
.p_inheritance_info = null,
});
gc.dev.cmdSetViewport(cmdbuf, 0, 1, @ptrCast(&viewport));
gc.dev.cmdSetScissor(cmdbuf, 0, 1, @ptrCast(&scissor));
gc.vkd.cmdSetViewport(cmdbuf, 0, 1, @ptrCast([*]const vk.Viewport, &viewport));
gc.vkd.cmdSetScissor(cmdbuf, 0, 1, @ptrCast([*]const vk.Rect2D, &scissor));
// This needs to be a separate definition - see https://github.com/ziglang/zig/issues/7627.
const render_area = vk.Rect2D{
.offset = .{ .x = 0, .y = 0 },
.extent = extent,
};
gc.dev.cmdBeginRenderPass(cmdbuf, &.{
gc.vkd.cmdBeginRenderPass(cmdbuf, .{
.render_pass = render_pass,
.framebuffer = framebuffer,
.render_area = render_area,
.framebuffer = framebuffers[i],
.render_area = .{
.offset = .{.x = 0, .y = 0},
.extent = extent,
},
.clear_value_count = 1,
.p_clear_values = @ptrCast(&clear),
.p_clear_values = @ptrCast([*]const vk.ClearValue, &clear),
}, .@"inline");
gc.dev.cmdBindPipeline(cmdbuf, .graphics, pipeline);
gc.vkd.cmdBindPipeline(cmdbuf, .graphics, pipeline);
const offset = [_]vk.DeviceSize{0};
gc.dev.cmdBindVertexBuffers(cmdbuf, 0, 1, @ptrCast(&buffer), &offset);
gc.dev.cmdDraw(cmdbuf, vertices.len, 1, 0, 0);
gc.vkd.cmdBindVertexBuffers(cmdbuf, 0, 1, @ptrCast([*]const vk.Buffer, &buffer), &offset);
gc.vkd.cmdDraw(cmdbuf, vertices.len, 1, 0, 0);
gc.dev.cmdEndRenderPass(cmdbuf);
try gc.dev.endCommandBuffer(cmdbuf);
gc.vkd.cmdEndRenderPass(cmdbuf);
try gc.vkd.endCommandBuffer(cmdbuf);
}
return cmdbufs;
}
fn destroyCommandBuffers(gc: *const GraphicsContext, pool: vk.CommandPool, allocator: Allocator, cmdbufs: []vk.CommandBuffer) void {
gc.dev.freeCommandBuffers(pool, @truncate(cmdbufs.len), cmdbufs.ptr);
fn destroyCommandBuffers(gc: *const GraphicsContext, pool: vk.CommandPool, allocator: *Allocator, cmdbufs: []vk.CommandBuffer) void {
gc.vkd.freeCommandBuffers(gc.dev, pool, @truncate(u32, cmdbufs.len), cmdbufs.ptr);
allocator.free(cmdbufs);
}
fn createFramebuffers(gc: *const GraphicsContext, allocator: Allocator, render_pass: vk.RenderPass, swapchain: Swapchain) ![]vk.Framebuffer {
fn createFramebuffers(
gc: *const GraphicsContext,
allocator: *Allocator,
render_pass: vk.RenderPass,
swapchain: Swapchain
) ![]vk.Framebuffer {
const framebuffers = try allocator.alloc(vk.Framebuffer, swapchain.swap_images.len);
errdefer allocator.free(framebuffers);
var i: usize = 0;
errdefer for (framebuffers[0..i]) |fb| gc.dev.destroyFramebuffer(fb, null);
errdefer for (framebuffers[0 .. i]) |fb| gc.vkd.destroyFramebuffer(gc.dev, fb, null);
for (framebuffers) |*fb| {
fb.* = try gc.dev.createFramebuffer(&.{
fb.* = try gc.vkd.createFramebuffer(gc.dev, .{
.flags = .{},
.render_pass = render_pass,
.attachment_count = 1,
.p_attachments = @ptrCast(&swapchain.swap_images[i].view),
.p_attachments = @ptrCast([*]const vk.ImageView, &swapchain.swap_images[i].view),
.width = swapchain.extent.width,
.height = swapchain.extent.height,
.layers = 1,
@@ -322,20 +325,21 @@ fn createFramebuffers(gc: *const GraphicsContext, allocator: Allocator, render_p
return framebuffers;
}
fn destroyFramebuffers(gc: *const GraphicsContext, allocator: Allocator, framebuffers: []const vk.Framebuffer) void {
for (framebuffers) |fb| gc.dev.destroyFramebuffer(fb, null);
fn destroyFramebuffers(gc: *const GraphicsContext, allocator: *Allocator, framebuffers: []const vk.Framebuffer) void {
for (framebuffers) |fb| gc.vkd.destroyFramebuffer(gc.dev, fb, null);
allocator.free(framebuffers);
}
fn createRenderPass(gc: *const GraphicsContext, swapchain: Swapchain) !vk.RenderPass {
const color_attachment = vk.AttachmentDescription{
.flags = .{},
.format = swapchain.surface_format.format,
.samples = .{ .@"1_bit" = true },
.samples = .{.@"1_bit" = true},
.load_op = .clear,
.store_op = .store,
.stencil_load_op = .dont_care,
.stencil_store_op = .dont_care,
.initial_layout = .undefined,
.initial_layout = .@"undefined",
.final_layout = .present_src_khr,
};
@@ -345,62 +349,82 @@ fn createRenderPass(gc: *const GraphicsContext, swapchain: Swapchain) !vk.Render
};
const subpass = vk.SubpassDescription{
.flags = .{},
.pipeline_bind_point = .graphics,
.input_attachment_count = 0,
.p_input_attachments = undefined,
.color_attachment_count = 1,
.p_color_attachments = @ptrCast(&color_attachment_ref),
.p_color_attachments = @ptrCast([*]const vk.AttachmentReference, &color_attachment_ref),
.p_resolve_attachments = null,
.p_depth_stencil_attachment = null,
.preserve_attachment_count = 0,
.p_preserve_attachments = undefined,
};
return try gc.dev.createRenderPass(&.{
return try gc.vkd.createRenderPass(gc.dev, .{
.flags = .{},
.attachment_count = 1,
.p_attachments = @ptrCast(&color_attachment),
.p_attachments = @ptrCast([*]const vk.AttachmentDescription, &color_attachment),
.subpass_count = 1,
.p_subpasses = @ptrCast(&subpass),
.p_subpasses = @ptrCast([*]const vk.SubpassDescription, &subpass),
.dependency_count = 0,
.p_dependencies = undefined,
}, null);
}
fn createPipeline(
gc: *const GraphicsContext,
extent: vk.Extent2D,
layout: vk.PipelineLayout,
render_pass: vk.RenderPass,
) !vk.Pipeline {
const vert = try gc.dev.createShaderModule(&.{
.code_size = vert_spv.len,
.p_code = @ptrCast(&vert_spv),
const vert = try gc.vkd.createShaderModule(gc.dev, .{
.flags = .{},
.code_size = resources.triangle_vert.len,
.p_code = @ptrCast([*]const u32, resources.triangle_vert),
}, null);
defer gc.dev.destroyShaderModule(vert, null);
defer gc.vkd.destroyShaderModule(gc.dev, vert, null);
const frag = try gc.dev.createShaderModule(&.{
.code_size = frag_spv.len,
.p_code = @ptrCast(&frag_spv),
const frag = try gc.vkd.createShaderModule(gc.dev, .{
.flags = .{},
.code_size = resources.triangle_frag.len,
.p_code = @ptrCast([*]const u32, resources.triangle_frag),
}, null);
defer gc.dev.destroyShaderModule(frag, null);
defer gc.vkd.destroyShaderModule(gc.dev, frag, null);
const pssci = [_]vk.PipelineShaderStageCreateInfo{
.{
.stage = .{ .vertex_bit = true },
.flags = .{},
.stage = .{.vertex_bit = true},
.module = vert,
.p_name = "main",
.p_specialization_info = null,
},
.{
.stage = .{ .fragment_bit = true },
.flags = .{},
.stage = .{.fragment_bit = true},
.module = frag,
.p_name = "main",
.p_specialization_info = null,
},
};
const pvisci = vk.PipelineVertexInputStateCreateInfo{
.flags = .{},
.vertex_binding_description_count = 1,
.p_vertex_binding_descriptions = @ptrCast(&Vertex.binding_description),
.p_vertex_binding_descriptions = @ptrCast([*]const vk.VertexInputBindingDescription, &Vertex.binding_description),
.vertex_attribute_description_count = Vertex.attribute_description.len,
.p_vertex_attribute_descriptions = &Vertex.attribute_description,
};
const piasci = vk.PipelineInputAssemblyStateCreateInfo{
.flags = .{},
.topology = .triangle_list,
.primitive_restart_enable = vk.FALSE,
};
const pvsci = vk.PipelineViewportStateCreateInfo{
.flags = .{},
.viewport_count = 1,
.p_viewports = undefined, // set in createCommandBuffers with cmdSetViewport
.scissor_count = 1,
@@ -408,10 +432,11 @@ fn createPipeline(
};
const prsci = vk.PipelineRasterizationStateCreateInfo{
.flags = .{},
.depth_clamp_enable = vk.FALSE,
.rasterizer_discard_enable = vk.FALSE,
.polygon_mode = .fill,
.cull_mode = .{ .back_bit = true },
.cull_mode = .{.back_bit = true},
.front_face = .clockwise,
.depth_bias_enable = vk.FALSE,
.depth_bias_constant_factor = 0,
@@ -421,9 +446,11 @@ fn createPipeline(
};
const pmsci = vk.PipelineMultisampleStateCreateInfo{
.rasterization_samples = .{ .@"1_bit" = true },
.flags = .{},
.rasterization_samples = .{.@"1_bit" = true},
.sample_shading_enable = vk.FALSE,
.min_sample_shading = 1,
.p_sample_mask = null,
.alpha_to_coverage_enable = vk.FALSE,
.alpha_to_one_enable = vk.FALSE,
};
@@ -436,18 +463,19 @@ fn createPipeline(
.src_alpha_blend_factor = .one,
.dst_alpha_blend_factor = .zero,
.alpha_blend_op = .add,
.color_write_mask = .{ .r_bit = true, .g_bit = true, .b_bit = true, .a_bit = true },
.color_write_mask = .{.r_bit = true, .g_bit = true, .b_bit = true, .a_bit = true},
};
const pcbsci = vk.PipelineColorBlendStateCreateInfo{
.flags = .{},
.logic_op_enable = vk.FALSE,
.logic_op = .copy,
.attachment_count = 1,
.p_attachments = @ptrCast(&pcbas),
.blend_constants = [_]f32{ 0, 0, 0, 0 },
.p_attachments = @ptrCast([*]const vk.PipelineColorBlendAttachmentState, &pcbas),
.blend_constants = [_]f32{0, 0, 0, 0},
};
const dynstate = [_]vk.DynamicState{ .viewport, .scissor };
const dynstate = [_]vk.DynamicState{.viewport, .scissor};
const pdsci = vk.PipelineDynamicStateCreateInfo{
.flags = .{},
.dynamic_state_count = dynstate.len,
@@ -475,12 +503,12 @@ fn createPipeline(
};
var pipeline: vk.Pipeline = undefined;
_ = try gc.dev.createGraphicsPipelines(
_ = try gc.vkd.createGraphicsPipelines(
gc.dev,
.null_handle,
1,
@ptrCast(&gpci),
1, @ptrCast([*]const vk.GraphicsPipelineCreateInfo, &gpci),
null,
@ptrCast(&pipeline),
@ptrCast([*]vk.Pipeline, &pipeline),
);
return pipeline;
}

examples/vk.xml Normal file

File diff suppressed because it is too large

@@ -0,0 +1,75 @@
const std = @import("std");
const path = std.fs.path;
const Builder = std.build.Builder;
const Step = std.build.Step;
/// Utility functionality to help with compiling shaders from build.zig.
/// Invokes glslc (or another shader compiler passed to `init`) for each shader
/// added via `addShader`.
pub const ShaderCompileStep = struct {
/// Structure representing a shader to be compiled.
const Shader = struct {
/// The path to the shader, relative to the current build root.
source_path: []const u8,
/// The full output path where the compiled shader binary is placed.
full_out_path: []const u8,
};
step: Step,
builder: *Builder,
/// The command and optional arguments used to invoke the shader compiler.
glslc_cmd: []const []const u8,
/// List of shaders that are to be compiled.
shaders: std.ArrayList(Shader),
/// Create a ShaderCompileStep for `builder`. When this step is invoked by the build
/// system, `<glslc_cmd...> <shader_source> -o <out_path>` is invoked for each shader.
pub fn init(builder: *Builder, glslc_cmd: []const []const u8) *ShaderCompileStep {
const self = builder.allocator.create(ShaderCompileStep) catch unreachable;
self.* = .{
.step = Step.init(.Custom, "shader-compile", builder.allocator, make),
.builder = builder,
.glslc_cmd = glslc_cmd,
.shaders = std.ArrayList(Shader).init(builder.allocator),
};
return self;
}
/// Add a shader to be compiled. `src` is shader source path, relative to the project root.
/// Returns the full path where the compiled binary will be stored upon successful compilation.
/// This path can then be used to include the binary into an executable, for example by passing it
/// to @embedFile via an additional generated file.
pub fn add(self: *ShaderCompileStep, src: []const u8) []const u8 {
const full_out_path = path.join(self.builder.allocator, &[_][]const u8{
self.builder.build_root,
self.builder.cache_root,
"shaders",
src,
}) catch unreachable;
self.shaders.append(.{.source_path = src, .full_out_path = full_out_path}) catch unreachable;
return full_out_path;
}
/// Internal build function.
fn make(step: *Step) !void {
const self = @fieldParentPtr(ShaderCompileStep, "step", step);
const cwd = std.fs.cwd();
const cmd = try self.builder.allocator.alloc([]const u8, self.glslc_cmd.len + 3);
for (self.glslc_cmd) |part, i| {
cmd[i] = part;
}
cmd[cmd.len - 2] = "-o";
for (self.shaders.items) |shader| {
const dir = path.dirname(shader.full_out_path).?;
try cwd.makePath(dir);
cmd[cmd.len - 3] = shader.source_path;
cmd[cmd.len - 1] = shader.full_out_path;
try self.builder.spawnChild(cmd);
}
}
};
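The doc comments above describe the intended build.zig integration; a minimal sketch of one plausible usage follows (the executable name, shader path, and glslc arguments are illustrative assumptions, not taken from this repository):

```zig
// build.zig (sketch, assumed paths/flags): compile a GLSL shader with glslc
// and obtain the cache path of the resulting SPIR-V binary.
const vkgen = @import("generator/index.zig");
const Builder = @import("std").build.Builder;

pub fn build(b: *Builder) void {
    const exe = b.addExecutable("triangle", "examples/triangle.zig");

    // Each shader added below is compiled as: glslc <src> -o <out_path>
    const shaders = vkgen.ShaderCompileStep.init(b, &[_][]const u8{"glslc"});
    exe.step.dependOn(&shaders.step);

    // `add` returns the full output path of the compiled binary; it can be
    // exposed to the executable, e.g. via a generated file passed to @embedFile.
    const vert_spv_path = shaders.add("shaders/triangle.vert");
    _ = vert_spv_path;
}
```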


@@ -2,65 +2,6 @@ const std = @import("std");
const mem = std.mem;
const Allocator = mem.Allocator;
pub fn isZigPrimitiveType(name: []const u8) bool {
if (name.len > 1 and (name[0] == 'u' or name[0] == 'i')) {
for (name[1..]) |c| {
switch (c) {
'0'...'9' => {},
else => break,
}
} else return true;
}
const primitives = [_][]const u8{
"void",
"comptime_float",
"comptime_int",
"bool",
"isize",
"usize",
"f16",
"f32",
"f64",
"f128",
"noreturn",
"type",
"anyerror",
"c_short",
"c_ushort",
"c_int",
"c_uint",
"c_long",
"c_ulong",
"c_longlong",
"c_ulonglong",
"c_longdouble",
// Removed in stage 2 in https://github.com/ziglang/zig/commit/05cf44933d753f7a5a53ab289ea60fd43761de57,
// but these are still invalid identifiers in stage 1.
"undefined",
"true",
"false",
"null",
};
for (primitives) |reserved| {
if (mem.eql(u8, reserved, name)) {
return true;
}
}
return false;
}
pub fn writeIdentifier(writer: anytype, id: []const u8) !void {
// https://github.com/ziglang/zig/issues/2897
if (isZigPrimitiveType(id)) {
try writer.print("@\"{}\"", .{std.zig.fmtEscapes(id)});
} else {
try writer.print("{}", .{std.zig.fmtId(id)});
}
}
pub const CaseStyle = enum {
snake,
screaming_snake,
@@ -110,7 +51,7 @@ pub const SegmentIterator = struct {
}
const end = self.nextBoundary();
const word = self.text[self.offset..end];
const word = self.text[self.offset .. end];
self.offset = end;
return word;
}
@@ -128,7 +69,7 @@ pub const IdRenderer = struct {
tags: []const []const u8,
text_cache: std.ArrayList(u8),
pub fn init(allocator: Allocator, tags: []const []const u8) IdRenderer {
pub fn init(allocator: *Allocator, tags: []const []const u8) IdRenderer {
return .{
.tags = tags,
.text_cache = std.ArrayList(u8).init(allocator),
@@ -142,6 +83,7 @@ pub const IdRenderer = struct {
fn renderSnake(self: *IdRenderer, screaming: bool, id: []const u8, tag: ?[]const u8) !void {
var it = SegmentIterator.init(id);
var first = true;
const transform = if (screaming) std.ascii.toUpper else std.ascii.toLower;
while (it.next()) |segment| {
if (first) {
@@ -151,7 +93,7 @@ pub const IdRenderer = struct {
}
for (segment) |c| {
try self.text_cache.append(if (screaming) std.ascii.toUpper(c) else std.ascii.toLower(c));
try self.text_cache.append(transform(c));
}
}
@@ -159,7 +101,7 @@ pub const IdRenderer = struct {
try self.text_cache.append('_');
for (name) |c| {
try self.text_cache.append(if (screaming) std.ascii.toUpper(c) else std.ascii.toLower(c));
try self.text_cache.append(transform(c));
}
}
}
@@ -186,7 +128,7 @@ pub const IdRenderer = struct {
}
lower_first = false;
for (segment[i + 1 ..]) |c| {
for (segment[i + 1..]) |c| {
try self.text_cache.append(std.ascii.toLower(c));
}
}
@@ -196,10 +138,14 @@ pub const IdRenderer = struct {
}
}
pub fn render(self: IdRenderer, out: anytype, id: []const u8) !void {
try out.print("{z}", .{ id });
}
pub fn renderFmt(self: *IdRenderer, out: anytype, comptime fmt: []const u8, args: anytype) !void {
self.text_cache.items.len = 0;
try std.fmt.format(self.text_cache.writer(), fmt, args);
try writeIdentifier(out, self.text_cache.items);
try out.print("{z}", .{ self.text_cache.items });
}
pub fn renderWithCase(self: *IdRenderer, out: anytype, case_style: CaseStyle, id: []const u8) !void {
@@ -216,7 +162,7 @@ pub const IdRenderer = struct {
.camel => try self.renderCamel(false, adjusted_id, tag),
}
try writeIdentifier(out, self.text_cache.items);
try out.print("{z}", .{ self.text_cache.items });
}
pub fn getAuthorTag(self: IdRenderer, id: []const u8) ?[]const u8 {

generator/index.zig Normal file

@@ -0,0 +1,9 @@
pub const generateVk = @import("vulkan/generator.zig").generate;
pub const VkGenerateStep = @import("vulkan/build_integration.zig").GenerateStep;
pub const generateSpirv = @import("spirv/generator.zig").generate;
pub const ShaderCompileStep = @import("build_integration.zig").ShaderCompileStep;
test "main" {
_ = @import("xml.zig");
_ = @import("vulkan/c_parse.zig");
}

generator/main.zig Normal file

@@ -0,0 +1,74 @@
const std = @import("std");
const generate = @import("vulkan/generator.zig").generate;
const usage = "Usage: {s} [-h|--help] <spec xml path> <output zig source>\n";
pub fn main() !void {
const stderr = std.io.getStdErr();
const stdout = std.io.getStdOut();
var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
defer arena.deinit();
const allocator = &arena.allocator;
var args = std.process.args();
const prog_name = try args.next(allocator) orelse return error.ExecutableNameMissing;
var maybe_xml_path: ?[]const u8 = null;
var maybe_out_path: ?[]const u8 = null;
while (args.next(allocator)) |err_or_arg| {
const arg = try err_or_arg;
if (std.mem.eql(u8, arg, "--help") or std.mem.eql(u8, arg, "-h")) {
@setEvalBranchQuota(2000);
try stderr.writer().print(
\\Utility to generate a Zig binding from the Vulkan XML API registry.
\\
\\The most recent Vulkan XML API registry can be obtained from
\\https://github.com/KhronosGroup/Vulkan-Docs/blob/master/xml/vk.xml,
\\and the most recent LunarG Vulkan SDK version can be found at
\\$VULKAN_SDK/x86_64/share/vulkan/registry/vk.xml.
\\
\\
++ usage,
.{ prog_name },
);
return;
} else if (maybe_xml_path == null) {
maybe_xml_path = arg;
} else if (maybe_out_path == null) {
maybe_out_path = arg;
} else {
try stderr.writer().print("Error: Superfluous argument '{s}'\n", .{ arg });
}
}
const xml_path = maybe_xml_path orelse {
try stderr.writer().print("Error: Missing required argument <spec xml path>\n" ++ usage, .{ prog_name });
return;
};
const out_path = maybe_out_path orelse {
try stderr.writer().print("Error: Missing required argument <output zig source>\n" ++ usage, .{ prog_name });
return;
};
const cwd = std.fs.cwd();
const xml_src = cwd.readFileAlloc(allocator, xml_path, std.math.maxInt(usize)) catch |err| {
try stderr.writer().print("Error: Failed to open input file '{s}' ({s})\n", .{ xml_path, @errorName(err) });
return;
};
const out_file = cwd.createFile(out_path, .{}) catch |err| {
try stderr.writer().print("Error: Failed to create output file '{s}' ({s})\n", .{ out_path, @errorName(err) });
return;
};
defer out_file.close();
var out_buffer = std.ArrayList(u8).init(allocator);
try generate(allocator, xml_src, out_buffer.writer());
const tree = try std.zig.parse(allocator, out_buffer.items);
_ = try std.zig.render(allocator, out_file.writer(), tree);
}


@@ -0,0 +1,80 @@
const std = @import("std");
const generate = @import("generator.zig").generate;
const path = std.fs.path;
const Builder = std.build.Builder;
const Step = std.build.Step;
/// build.zig integration for Vulkan binding generation. This step can be used to generate
/// Vulkan bindings at compile time from vk.xml, by providing the path to vk.xml and the output
/// path relative to zig-cache. The final package can then be obtained by `package()`, the result
/// of which can be added to the project using `std.build.Builder.addPackage`.
pub const GenerateStep = struct {
step: Step,
builder: *Builder,
/// The path to vk.xml
spec_path: []const u8,
/// The package representing the generated bindings. The generated bindings will be placed
/// in `package.path`. When using this step, this member should be passed to
/// `std.build.Builder.addPackage`, which causes the bindings to become available under the
/// name `vulkan`.
package: std.build.Pkg,
/// Initialize a Vulkan generation step, for `builder`. `spec_path` is the path to
/// vk.xml, relative to the project root. The generated bindings will be placed at
/// `out_path`, which is relative to the zig-cache directory.
pub fn init(builder: *Builder, spec_path: []const u8, out_path: []const u8) *GenerateStep {
const self = builder.allocator.create(GenerateStep) catch unreachable;
const full_out_path = path.join(builder.allocator, &[_][]const u8{
builder.build_root,
builder.cache_root,
out_path,
}) catch unreachable;
self.* = .{
.step = Step.init(.Custom, "vulkan-generate", builder.allocator, make),
.builder = builder,
.spec_path = spec_path,
.package = .{
.name = "vulkan",
.path = full_out_path,
.dependencies = null,
}
};
return self;
}
/// Initialize a Vulkan generation step for `builder`, by extracting vk.xml from the LunarG installation
/// root. Typically, the location of the LunarG SDK root can be retrieved by querying for the VULKAN_SDK
/// environment variable, set by activating the environment setup script located in the SDK root.
/// `builder` and `out_path` are used in the same manner as `init`.
pub fn initFromSdk(builder: *Builder, sdk_path: []const u8, out_path: []const u8) *GenerateStep {
const spec_path = std.fs.path.join(
builder.allocator,
&[_][]const u8{sdk_path, "share/vulkan/registry/vk.xml"},
) catch unreachable;
return init(builder, spec_path, out_path);
}
/// Internal build function. This reads `vk.xml`, and passes it to `generate`, which then generates
/// the final bindings. The resulting generated bindings are not formatted, which is why an ArrayList
/// writer is passed instead of a file writer. This is then formatted into standard formatting
/// by parsing it and rendering with `std.zig.parse` and `std.zig.render` respectively.
fn make(step: *Step) !void {
const self = @fieldParentPtr(GenerateStep, "step", step);
const cwd = std.fs.cwd();
var out_buffer = std.ArrayList(u8).init(self.builder.allocator);
const spec = try cwd.readFileAlloc(self.builder.allocator, self.spec_path, std.math.maxInt(usize));
try generate(self.builder.allocator, spec, out_buffer.writer());
const tree = try std.zig.parse(self.builder.allocator, out_buffer.items);
const dir = path.dirname(self.package.path).?;
try cwd.makePath(dir);
const output_file = cwd.createFile(self.package.path, .{}) catch unreachable;
defer output_file.close();
_ = try std.zig.render(self.builder.allocator, output_file.writer(), tree);
}
};
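As with the shader step, the doc comments above imply a build.zig usage along these lines (file and output names are placeholders; the `vulkan` package name comes from the `package` field defined above):

```zig
// build.zig (sketch, assumed paths): generate Vulkan bindings from vk.xml
// and make them importable as @import("vulkan").
const vkgen = @import("generator/index.zig");
const Builder = @import("std").build.Builder;

pub fn build(b: *Builder) void {
    const exe = b.addExecutable("triangle", "examples/triangle.zig");

    // Reads examples/vk.xml and writes the generated bindings to zig-cache.
    const gen = vkgen.VkGenerateStep.init(b, "examples/vk.xml", "vk.zig");
    exe.step.dependOn(&gen.step);

    // Registers the generated bindings under the package name "vulkan".
    exe.addPackage(gen.package);
}
```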


@@ -53,9 +53,9 @@ pub const CTokenizer = struct {
fn consume(self: *CTokenizer) !u8 {
return if (self.offset < self.source.len)
return self.consumeNoEof()
else
return null;
return self.consumeNoEof()
else
return null;
}
fn keyword(self: *CTokenizer) Token {
@@ -70,20 +70,20 @@ pub const CTokenizer = struct {
}
}
const token_text = self.source[start..self.offset];
const token_text = self.source[start .. self.offset];
const kind = if (mem.eql(u8, token_text, "typedef"))
Token.Kind.kw_typedef
else if (mem.eql(u8, token_text, "const"))
Token.Kind.kw_const
else if (mem.eql(u8, token_text, "VKAPI_PTR"))
Token.Kind.kw_vkapi_ptr
else if (mem.eql(u8, token_text, "struct"))
Token.Kind.kw_struct
else
Token.Kind.id;
Token.Kind.kw_typedef
else if (mem.eql(u8, token_text, "const"))
Token.Kind.kw_const
else if (mem.eql(u8, token_text, "VKAPI_PTR"))
Token.Kind.kw_vkapi_ptr
else if (mem.eql(u8, token_text, "struct"))
Token.Kind.kw_struct
else
Token.Kind.id;
return .{ .kind = kind, .text = token_text };
return .{.kind = kind, .text = token_text};
}
fn int(self: *CTokenizer) Token {
@@ -100,7 +100,7 @@ pub const CTokenizer = struct {
return .{
.kind = .int,
.text = self.source[start..self.offset],
.text = self.source[start .. self.offset],
};
}
@@ -115,7 +115,7 @@ pub const CTokenizer = struct {
pub fn next(self: *CTokenizer) !?Token {
self.skipws();
if (mem.startsWith(u8, self.source[self.offset..], "//") or self.in_comment) {
if (mem.startsWith(u8, self.source[self.offset ..], "//") or self.in_comment) {
const end = mem.indexOfScalarPos(u8, self.source, self.offset, '\n') orelse {
self.offset = self.source.len;
self.in_comment = true;
@@ -143,12 +143,15 @@ pub const CTokenizer = struct {
']' => kind = .rbracket,
'(' => kind = .lparen,
')' => kind = .rparen,
else => return error.UnexpectedCharacter,
else => return error.UnexpectedCharacter
}
const start = self.offset;
_ = self.consumeNoEof();
return Token{ .kind = kind, .text = self.source[start..self.offset] };
return Token{
.kind = kind,
.text = self.source[start .. self.offset]
};
}
};
@@ -164,17 +167,17 @@ pub const XmlCTokenizer = struct {
}
fn elemToToken(elem: *xml.Element) !?Token {
if (elem.children.len != 1 or elem.children[0] != .char_data) {
if (elem.children.items.len != 1 or elem.children.items[0] != .CharData) {
return error.InvalidXml;
}
const text = elem.children[0].char_data;
const text = elem.children.items[0].CharData;
if (mem.eql(u8, elem.tag, "type")) {
return Token{ .kind = .type_name, .text = text };
return Token{.kind = .type_name, .text = text};
} else if (mem.eql(u8, elem.tag, "enum")) {
return Token{ .kind = .enum_name, .text = text };
return Token{.kind = .enum_name, .text = text};
} else if (mem.eql(u8, elem.tag, "name")) {
return Token{ .kind = .name, .text = text };
return Token{.kind = .name, .text = text};
} else if (mem.eql(u8, elem.tag, "comment")) {
return null;
} else {
@@ -203,9 +206,9 @@ pub const XmlCTokenizer = struct {
if (self.it.next()) |child| {
switch (child.*) {
.char_data => |cdata| self.ctok = CTokenizer{ .source = cdata, .in_comment = in_comment },
.comment => {}, // xml comment
.element => |elem| if (!in_comment) if (try elemToToken(elem)) |tok| return tok,
.CharData => |cdata| self.ctok = CTokenizer{.source = cdata, .in_comment = in_comment},
.Comment => {}, // xml comment
.Element => |elem| if (!in_comment) if (try elemToToken(elem)) |tok| return tok,
}
} else {
return null;
@@ -241,9 +244,9 @@ pub const XmlCTokenizer = struct {
};
// TYPEDEF = kw_typedef DECLARATION ';'
pub fn parseTypedef(allocator: Allocator, xctok: *XmlCTokenizer, ptrs_optional: bool) !registry.Declaration {
pub fn parseTypedef(allocator: *Allocator, xctok: *XmlCTokenizer) !registry.Declaration {
_ = try xctok.expect(.kw_typedef);
const decl = try parseDeclaration(allocator, xctok, ptrs_optional);
const decl = try parseDeclaration(allocator, xctok);
_ = try xctok.expect(.semicolon);
if (try xctok.peek()) |_| {
return error.InvalidSyntax;
@@ -251,19 +254,18 @@ pub fn parseTypedef(allocator: Allocator, xctok: *XmlCTokenizer, ptrs_optional:
return registry.Declaration{
.name = decl.name orelse return error.MissingTypeIdentifier,
.decl_type = .{ .typedef = decl.decl_type },
.decl_type = .{.typedef = decl.decl_type},
};
}
// MEMBER = DECLARATION (':' int)?
pub fn parseMember(allocator: Allocator, xctok: *XmlCTokenizer, ptrs_optional: bool) !registry.Container.Field {
const decl = try parseDeclaration(allocator, xctok, ptrs_optional);
var field = registry.Container.Field{
pub fn parseMember(allocator: *Allocator, xctok: *XmlCTokenizer) !registry.Container.Field {
const decl = try parseDeclaration(allocator, xctok);
var field = registry.Container.Field {
.name = decl.name orelse return error.MissingTypeIdentifier,
.field_type = decl.decl_type,
.bits = null,
.is_buffer_len = false,
.is_optional = false,
};
if (try xctok.peek()) |tok| {
@@ -285,40 +287,20 @@ pub fn parseMember(allocator: Allocator, xctok: *XmlCTokenizer, ptrs_optional: b
return field;
}
pub fn parseParamOrProto(allocator: Allocator, xctok: *XmlCTokenizer, ptrs_optional: bool) !registry.Declaration {
var decl = try parseDeclaration(allocator, xctok, ptrs_optional);
pub fn parseParamOrProto(allocator: *Allocator, xctok: *XmlCTokenizer) !registry.Declaration {
const decl = try parseDeclaration(allocator, xctok);
if (try xctok.peek()) |_| {
return error.InvalidSyntax;
}
// Decay pointers
switch (decl.decl_type) {
.array => {
const child = try allocator.create(TypeInfo);
child.* = decl.decl_type;
decl.decl_type = .{
.pointer = .{
.is_const = decl.is_const,
.is_optional = false,
.size = .one,
.child = child,
},
};
},
else => {},
}
return registry.Declaration{
.name = decl.name orelse return error.MissingTypeIdentifier,
.decl_type = .{ .typedef = decl.decl_type },
.decl_type = .{.typedef = decl.decl_type},
};
}
pub const Declaration = struct {
name: ?[]const u8, // Parameter names may be optional, especially in the case of func(void)
decl_type: TypeInfo,
is_const: bool,
};
pub const ParseError = error{
@@ -336,7 +318,7 @@ pub const ParseError = error{
// DECLARATION = kw_const? type_name DECLARATOR
// DECLARATOR = POINTERS (id | name)? ('[' ARRAY_DECLARATOR ']')*
// | POINTERS '(' FNPTRSUFFIX
fn parseDeclaration(allocator: Allocator, xctok: *XmlCTokenizer, ptrs_optional: bool) ParseError!Declaration {
fn parseDeclaration(allocator: *Allocator, xctok: *XmlCTokenizer) ParseError!Declaration {
// Parse declaration constness
var tok = try xctok.nextNoEof();
const inner_is_const = tok.kind == .kw_const;
@@ -351,19 +333,15 @@ fn parseDeclaration(allocator: Allocator, xctok: *XmlCTokenizer, ptrs_optional:
if (tok.kind != .type_name and tok.kind != .id) return error.InvalidSyntax;
const type_name = tok.text;
var type_info = TypeInfo{ .name = type_name };
var type_info = TypeInfo{.name = type_name};
// Parse pointers
type_info = try parsePointers(allocator, xctok, inner_is_const, type_info, ptrs_optional);
type_info = try parsePointers(allocator, xctok, inner_is_const, type_info);
// Parse name / fn ptr
if (try parseFnPtrSuffix(allocator, xctok, type_info, ptrs_optional)) |decl| {
return Declaration{
.name = decl.name,
.decl_type = decl.decl_type,
.is_const = inner_is_const,
};
if (try parseFnPtrSuffix(allocator, xctok, type_info)) |decl| {
return decl;
}
const name = blk: {
@@ -386,10 +364,8 @@ fn parseDeclaration(allocator: Allocator, xctok: *XmlCTokenizer, ptrs_optional:
inner_type.* = .{
.array = .{
.size = array_size,
.valid_size = .all, // Refined later
.is_optional = true,
.child = child,
},
}
};
// update the inner_type pointer so it points to the proper
@@ -400,12 +376,11 @@ fn parseDeclaration(allocator: Allocator, xctok: *XmlCTokenizer, ptrs_optional:
return Declaration{
.name = name,
.decl_type = type_info,
.is_const = inner_is_const,
};
}
// FNPTRSUFFIX = kw_vkapi_ptr '*' name' ')' '(' ('void' | (DECLARATION (',' DECLARATION)*)?) ')'
fn parseFnPtrSuffix(allocator: Allocator, xctok: *XmlCTokenizer, return_type: TypeInfo, ptrs_optional: bool) !?Declaration {
fn parseFnPtrSuffix(allocator: *Allocator, xctok: *XmlCTokenizer, return_type: TypeInfo) !?Declaration {
const lparen = try xctok.peek();
if (lparen == null or lparen.?.kind != .lparen) {
return null;
@@ -428,12 +403,11 @@ fn parseFnPtrSuffix(allocator: Allocator, xctok: *XmlCTokenizer, return_type: Ty
.return_type = return_type_heap,
.success_codes = &[_][]const u8{},
.error_codes = &[_][]const u8{},
},
},
.is_const = false,
}
}
};
const first_param = try parseDeclaration(allocator, xctok, ptrs_optional);
const first_param = try parseDeclaration(allocator, xctok);
if (first_param.name == null) {
if (first_param.decl_type != .name or !mem.eql(u8, first_param.decl_type.name, "void")) {
return error.InvalidSyntax;
@@ -451,7 +425,6 @@ fn parseFnPtrSuffix(allocator: Allocator, xctok: *XmlCTokenizer, return_type: Ty
.name = first_param.name.?,
.param_type = first_param.decl_type,
.is_buffer_len = false,
.is_optional = false,
});
while (true) {
@@ -461,22 +434,21 @@ fn parseFnPtrSuffix(allocator: Allocator, xctok: *XmlCTokenizer, return_type: Ty
else => return error.InvalidSyntax,
}
const decl = try parseDeclaration(allocator, xctok, ptrs_optional);
const decl = try parseDeclaration(allocator, xctok);
try params.append(.{
.name = decl.name orelse return error.MissingTypeIdentifier,
.param_type = decl.decl_type,
.is_buffer_len = false,
.is_optional = false,
});
}
_ = try xctok.nextNoEof();
command_ptr.decl_type.command_ptr.params = try params.toOwnedSlice();
command_ptr.decl_type.command_ptr.params = params.toOwnedSlice();
return command_ptr;
}
// POINTERS = (kw_const? '*')*
fn parsePointers(allocator: Allocator, xctok: *XmlCTokenizer, inner_const: bool, inner: TypeInfo, ptrs_optional: bool) !TypeInfo {
fn parsePointers(allocator: *Allocator, xctok: *XmlCTokenizer, inner_const: bool, inner: TypeInfo) !TypeInfo {
var type_info = inner;
var first_const = inner_const;
@@ -505,7 +477,7 @@ fn parsePointers(allocator: Allocator, xctok: *XmlCTokenizer, inner_const: bool,
type_info = .{
.pointer = .{
.is_const = is_const or first_const,
.is_optional = ptrs_optional, // set elsewhere
.is_optional = false, // set elsewhere
.size = .one, // set elsewhere
.child = child,
},
@@ -528,10 +500,10 @@ fn parseArrayDeclarator(xctok: *XmlCTokenizer) !?ArraySize {
.int = std.fmt.parseInt(usize, size_tok.text, 10) catch |err| switch (err) {
error.Overflow => return error.Overflow,
error.InvalidCharacter => unreachable,
},
}
},
.enum_name => .{ .alias = size_tok.text },
else => return error.InvalidSyntax,
.enum_name => .{.alias = size_tok.text},
else => return error.InvalidSyntax
};
_ = try xctok.expect(.rbracket);
@@ -545,7 +517,7 @@ pub fn parseVersion(xctok: *XmlCTokenizer) ![4][]const u8 {
return error.InvalidVersion;
}
_ = try xctok.expect(.name);
const name = try xctok.expect(.name);
const vk_make_version = try xctok.expect(.type_name);
if (!mem.eql(u8, vk_make_version.text, "VK_MAKE_API_VERSION")) {
return error.NotVersion;
@@ -553,7 +525,7 @@ pub fn parseVersion(xctok: *XmlCTokenizer) ![4][]const u8 {
_ = try xctok.expect(.lparen);
var version: [4][]const u8 = undefined;
for (&version, 0..) |*part, i| {
for (version) |*part, i| {
if (i != 0) {
_ = try xctok.expect(.comma);
}
@@ -568,38 +540,44 @@ pub fn parseVersion(xctok: *XmlCTokenizer) ![4][]const u8 {
return version;
}
fn testTokenizer(tokenizer: anytype, expected_tokens: []const Token) !void {
fn testTokenizer(tokenizer: anytype, expected_tokens: []const Token) void {
for (expected_tokens) |expected| {
const tok = (tokenizer.next() catch unreachable).?;
try testing.expectEqual(expected.kind, tok.kind);
try testing.expectEqualSlices(u8, expected.text, tok.text);
testing.expectEqual(expected.kind, tok.kind);
testing.expectEqualSlices(u8, expected.text, tok.text);
}
if (tokenizer.next() catch unreachable) |_| unreachable;
}
test "CTokenizer" {
var ctok = CTokenizer{ .source = "typedef ([const)]** VKAPI_PTR 123,;aaaa" };
var ctok = CTokenizer {
.source = \\typedef ([const)]** VKAPI_PTR 123,;aaaa
};
try testTokenizer(&ctok, &[_]Token{
.{ .kind = .kw_typedef, .text = "typedef" },
.{ .kind = .lparen, .text = "(" },
.{ .kind = .lbracket, .text = "[" },
.{ .kind = .kw_const, .text = "const" },
.{ .kind = .rparen, .text = ")" },
.{ .kind = .rbracket, .text = "]" },
.{ .kind = .star, .text = "*" },
.{ .kind = .star, .text = "*" },
.{ .kind = .kw_vkapi_ptr, .text = "VKAPI_PTR" },
.{ .kind = .int, .text = "123" },
.{ .kind = .comma, .text = "," },
.{ .kind = .semicolon, .text = ";" },
.{ .kind = .id, .text = "aaaa" },
});
testTokenizer(
&ctok,
&[_]Token{
.{.kind = .kw_typedef, .text = "typedef"},
.{.kind = .lparen, .text = "("},
.{.kind = .lbracket, .text = "["},
.{.kind = .kw_const, .text = "const"},
.{.kind = .rparen, .text = ")"},
.{.kind = .rbracket, .text = "]"},
.{.kind = .star, .text = "*"},
.{.kind = .star, .text = "*"},
.{.kind = .kw_vkapi_ptr, .text = "VKAPI_PTR"},
.{.kind = .int, .text = "123"},
.{.kind = .comma, .text = ","},
.{.kind = .semicolon, .text = ";"},
.{.kind = .id, .text = "aaaa"},
}
);
}
test "XmlCTokenizer" {
const document = try xml.parse(testing.allocator,
const document = try xml.parse(
testing.allocator,
\\<root>// comment <name>commented name</name> <type>commented type</type> trailing
\\ typedef void (VKAPI_PTR *<name>PFN_vkVoidFunction</name>)(void);
\\</root>
@@ -608,23 +586,27 @@ test "XmlCTokenizer" {
var xctok = XmlCTokenizer.init(document.root);
try testTokenizer(&xctok, &[_]Token{
.{ .kind = .kw_typedef, .text = "typedef" },
.{ .kind = .id, .text = "void" },
.{ .kind = .lparen, .text = "(" },
.{ .kind = .kw_vkapi_ptr, .text = "VKAPI_PTR" },
.{ .kind = .star, .text = "*" },
.{ .kind = .name, .text = "PFN_vkVoidFunction" },
.{ .kind = .rparen, .text = ")" },
.{ .kind = .lparen, .text = "(" },
.{ .kind = .id, .text = "void" },
.{ .kind = .rparen, .text = ")" },
.{ .kind = .semicolon, .text = ";" },
});
testTokenizer(
&xctok,
&[_]Token{
.{.kind = .kw_typedef, .text = "typedef"},
.{.kind = .id, .text = "void"},
.{.kind = .lparen, .text = "("},
.{.kind = .kw_vkapi_ptr, .text = "VKAPI_PTR"},
.{.kind = .star, .text = "*"},
.{.kind = .name, .text = "PFN_vkVoidFunction"},
.{.kind = .rparen, .text = ")"},
.{.kind = .lparen, .text = "("},
.{.kind = .id, .text = "void"},
.{.kind = .rparen, .text = ")"},
.{.kind = .semicolon, .text = ";"},
}
);
}
test "parseTypedef" {
const document = try xml.parse(testing.allocator,
const document = try xml.parse(
testing.allocator,
\\<root> // comment <name>commented name</name> trailing
\\ typedef const struct <type>Python</type>* pythons[4];
\\ // more comments
@@ -637,12 +619,12 @@ test "parseTypedef" {
defer arena.deinit();
var xctok = XmlCTokenizer.init(document.root);
const decl = try parseTypedef(arena.allocator(), &xctok, false);
const decl = try parseTypedef(&arena.allocator, &xctok);
try testing.expectEqualSlices(u8, "pythons", decl.name);
testing.expectEqualSlices(u8, "pythons", decl.name);
const array = decl.decl_type.typedef.array;
try testing.expectEqual(ArraySize{ .int = 4 }, array.size);
testing.expectEqual(ArraySize{.int = 4}, array.size);
const ptr = array.child.pointer;
try testing.expectEqual(true, ptr.is_const);
try testing.expectEqualSlices(u8, "Python", ptr.child.name);
testing.expectEqual(true, ptr.is_const);
testing.expectEqualSlices(u8, "Python", ptr.child.name);
}


@@ -0,0 +1,174 @@
const std = @import("std");
const reg = @import("registry.zig");
const xml = @import("../xml.zig");
const renderRegistry = @import("render.zig").render;
const parseXml = @import("parse.zig").parseXml;
const IdRenderer = @import("../id_render.zig").IdRenderer;
const mem = std.mem;
const Allocator = mem.Allocator;
const FeatureLevel = reg.FeatureLevel;
const EnumFieldMerger = struct {
const EnumExtensionMap = std.StringArrayHashMap(std.ArrayListUnmanaged(reg.Enum.Field));
const FieldSet = std.StringArrayHashMap(void);
gpa: *Allocator,
reg_arena: *Allocator,
registry: *reg.Registry,
enum_extensions: EnumExtensionMap,
field_set: FieldSet,
fn init(gpa: *Allocator, reg_arena: *Allocator, registry: *reg.Registry) EnumFieldMerger {
return .{
.gpa = gpa,
.reg_arena = reg_arena,
.registry = registry,
.enum_extensions = EnumExtensionMap.init(gpa),
.field_set = FieldSet.init(gpa),
};
}
fn deinit(self: *EnumFieldMerger) void {
for (self.enum_extensions.items()) |*entry| {
entry.value.deinit(self.gpa);
}
self.field_set.deinit();
self.enum_extensions.deinit();
}
fn putEnumExtension(self: *EnumFieldMerger, enum_name: []const u8, field: reg.Enum.Field) !void {
const res = try self.enum_extensions.getOrPut(enum_name);
if (!res.found_existing) {
res.entry.value = std.ArrayListUnmanaged(reg.Enum.Field){};
}
try res.entry.value.append(self.gpa, field);
}
fn addRequires(self: *EnumFieldMerger, reqs: []const reg.Require) !void {
for (reqs) |req| {
for (req.extends) |enum_ext| {
try self.putEnumExtension(enum_ext.extends, enum_ext.field);
}
}
}
fn mergeEnumFields(self: *EnumFieldMerger, name: []const u8, base_enum: *reg.Enum) !void {
// If there are no extensions for this enum, assume it's valid.
const extensions = self.enum_extensions.get(name) orelse return;
self.field_set.clearRetainingCapacity();
const n_fields_upper_bound = base_enum.fields.len + extensions.items.len;
const new_fields = try self.reg_arena.alloc(reg.Enum.Field, n_fields_upper_bound);
var i: usize = 0;
for (base_enum.fields) |field| {
const res = try self.field_set.getOrPut(field.name);
if (!res.found_existing) {
new_fields[i] = field;
i += 1;
}
}
// Assume that if a field name collides with an existing one, the value is the same
for (extensions.items) |field| {
const res = try self.field_set.getOrPut(field.name);
if (!res.found_existing) {
new_fields[i] = field;
i += 1;
}
}
// The existing base_enum.fields was allocated by `self.reg_arena`, so
// it gets cleaned up whenever that arena is deinitialized.
base_enum.fields = self.reg_arena.shrink(new_fields, i);
}
fn merge(self: *EnumFieldMerger) !void {
for (self.registry.features) |feature| {
try self.addRequires(feature.requires);
}
for (self.registry.extensions) |ext| {
try self.addRequires(ext.requires);
}
// Merge all the enum fields.
// Assume that all keys of enum_extensions appear in `self.registry.decls`
for (self.registry.decls) |*decl| {
if (decl.decl_type == .enumeration) {
try self.mergeEnumFields(decl.name, &decl.decl_type.enumeration);
}
}
}
};
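The first-occurrence-wins merge that `mergeEnumFields` performs can be sketched in isolation. The function below is illustrative, not part of the generator; it only demonstrates the dedup idea: base fields are visited before extension fields, so a duplicate name contributed by an extension is dropped (its value is assumed equal).

```zig
const std = @import("std");

// Sketch of the first-occurrence-wins merge used by mergeEnumFields.
// Allocator is passed as *Allocator, matching this version's API.
fn dedupFirstWins(allocator: *std.mem.Allocator, names: []const []const u8) ![]const []const u8 {
    var seen = std.StringArrayHashMap(void).init(allocator);
    defer seen.deinit();

    const out = try allocator.alloc([]const u8, names.len);
    var i: usize = 0;
    for (names) |name| {
        const res = try seen.getOrPut(name);
        if (!res.found_existing) {
            // First time we see this name: keep it.
            out[i] = name;
            i += 1;
        }
    }
    return out[0..i];
}
```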
pub const Generator = struct {
gpa: *Allocator,
reg_arena: std.heap.ArenaAllocator,
registry: reg.Registry,
id_renderer: IdRenderer,
fn init(allocator: *Allocator, spec: *xml.Element) !Generator {
const result = try parseXml(allocator, spec);
const tags = try allocator.alloc([]const u8, result.registry.tags.len);
for (tags) |*tag, i| tag.* = result.registry.tags[i].name;
return Generator{
.gpa = allocator,
.reg_arena = result.arena,
.registry = result.registry,
.id_renderer = IdRenderer.init(allocator, tags),
};
}
fn deinit(self: Generator) void {
self.gpa.free(self.id_renderer.tags);
self.reg_arena.deinit();
}
fn stripFlagBits(self: Generator, name: []const u8) []const u8 {
const tagless = self.id_renderer.stripAuthorTag(name);
return tagless[0 .. tagless.len - "FlagBits".len];
}
fn stripFlags(self: Generator, name: []const u8) []const u8 {
const tagless = self.id_renderer.stripAuthorTag(name);
return tagless[0 .. tagless.len - "Flags".len];
}
// Solve `registry.declarations` according to `registry.extensions` and `registry.features`.
fn mergeEnumFields(self: *Generator) !void {
var merger = EnumFieldMerger.init(self.gpa, &self.reg_arena.allocator, &self.registry);
defer merger.deinit();
try merger.merge();
}
fn fixupTags(self: *Generator) !void {
var fixer_upper = TagFixerUpper.init(self.gpa, &self.registry, &self.id_renderer);
defer fixer_upper.deinit();
try fixer_upper.fixup();
}
fn render(self: *Generator, writer: anytype) !void {
try renderRegistry(writer, &self.reg_arena.allocator, &self.registry, &self.id_renderer);
}
};
/// Main function for generating the Vulkan bindings. vk.xml is to be provided via `spec_xml`,
/// and the resulting binding is written to `writer`. `allocator` will be used to allocate temporary
/// internal data structures - mostly via an ArenaAllocator, but sometimes a hashmap uses this allocator
/// directly.
pub fn generate(allocator: *Allocator, spec_xml: []const u8, writer: anytype) !void {
const spec = try xml.parse(allocator, spec_xml);
defer spec.deinit();
var gen = try Generator.init(allocator, spec.root);
defer gen.deinit();
try gen.mergeEnumFields();
try gen.render(writer);
}
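From the caller's side, the whole pipeline (parse vk.xml, merge enum extensions, render) reduces to one call to `generate`. A minimal sketch, assuming the import path and that vk.xml sits in the working directory:

```zig
const std = @import("std");
// Import path is an assumption; adjust to where generator/generate.zig lives.
const generate = @import("generator/generate.zig").generate;

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = &gpa.allocator; // *Allocator, matching this version's API

    const spec_xml = try std.fs.cwd().readFileAlloc(allocator, "vk.xml", std.math.maxInt(usize));
    defer allocator.free(spec_xml);

    var out = std.ArrayList(u8).init(allocator);
    defer out.deinit();

    // Parses the spec, merges enum extensions, and renders the binding.
    try generate(allocator, spec_xml, out.writer());
    try std.io.getStdOut().writeAll(out.items);
}
```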


@@ -17,18 +17,19 @@ pub const ParseResult = struct {
}
};
pub fn parseXml(backing_allocator: Allocator, root: *xml.Element, api: registry.Api) !ParseResult {
pub fn parseXml(backing_allocator: *Allocator, root: *xml.Element) !ParseResult {
var arena = ArenaAllocator.init(backing_allocator);
errdefer arena.deinit();
const allocator = arena.allocator();
const allocator = &arena.allocator;
const reg = registry.Registry{
.decls = try parseDeclarations(allocator, root, api),
.api_constants = try parseApiConstants(allocator, root, api),
var reg = registry.Registry{
.copyright = root.getCharData("comment") orelse return error.InvalidRegistry,
.decls = try parseDeclarations(allocator, root),
.api_constants = try parseApiConstants(allocator, root),
.tags = try parseTags(allocator, root),
.features = try parseFeatures(allocator, root, api),
.extensions = try parseExtensions(allocator, root, api),
.features = try parseFeatures(allocator, root),
.extensions = try parseExtensions(allocator, root),
};
return ParseResult{
@@ -37,28 +38,25 @@ pub fn parseXml(backing_allocator: Allocator, root: *xml.Element, api: registry.
};
}
fn parseDeclarations(allocator: Allocator, root: *xml.Element, api: registry.Api) ![]registry.Declaration {
const types_elem = root.findChildByTag("types") orelse return error.InvalidRegistry;
const commands_elem = root.findChildByTag("commands") orelse return error.InvalidRegistry;
fn parseDeclarations(allocator: *Allocator, root: *xml.Element) ![]registry.Declaration {
var types_elem = root.findChildByTag("types") orelse return error.InvalidRegistry;
var commands_elem = root.findChildByTag("commands") orelse return error.InvalidRegistry;
const decl_upper_bound = types_elem.children.len + commands_elem.children.len;
const decl_upper_bound = types_elem.children.items.len + commands_elem.children.items.len;
const decls = try allocator.alloc(registry.Declaration, decl_upper_bound);
var count: usize = 0;
count += try parseTypes(allocator, decls, types_elem, api);
count += try parseEnums(allocator, decls[count..], root, api);
count += try parseCommands(allocator, decls[count..], commands_elem, api);
return decls[0..count];
count += try parseTypes(allocator, decls, types_elem);
count += try parseEnums(allocator, decls[count..], root);
count += try parseCommands(allocator, decls[count..], commands_elem);
return allocator.shrink(decls, count);
}
fn parseTypes(allocator: Allocator, out: []registry.Declaration, types_elem: *xml.Element, api: registry.Api) !usize {
fn parseTypes(allocator: *Allocator, out: []registry.Declaration, types_elem: *xml.Element) !usize {
var i: usize = 0;
var it = types_elem.findChildrenByTag("type");
while (it.next()) |ty| {
out[i] = blk: {
if (!requiredByApi(ty, api))
continue;
const category = ty.getAttribute("category") orelse {
break :blk try parseForeigntype(ty);
};
@@ -70,13 +68,13 @@ fn parseTypes(allocator: Allocator, out: []registry.Declaration, types_elem: *xm
} else if (mem.eql(u8, category, "basetype")) {
break :blk try parseBaseType(allocator, ty);
} else if (mem.eql(u8, category, "struct")) {
break :blk try parseContainer(allocator, ty, false, api);
break :blk try parseContainer(allocator, ty, false);
} else if (mem.eql(u8, category, "union")) {
break :blk try parseContainer(allocator, ty, true, api);
break :blk try parseContainer(allocator, ty, true);
} else if (mem.eql(u8, category, "funcpointer")) {
break :blk try parseFuncPointer(allocator, ty);
} else if (mem.eql(u8, category, "enum")) {
break :blk (try parseEnumAlias(ty)) orelse continue;
break :blk (try parseEnumAlias(allocator, ty)) orelse continue;
}
continue;
@@ -91,13 +89,13 @@ fn parseTypes(allocator: Allocator, out: []registry.Declaration, types_elem: *xm
fn parseForeigntype(ty: *xml.Element) !registry.Declaration {
const name = ty.getAttribute("name") orelse return error.InvalidRegistry;
const depends = ty.getAttribute("requires") orelse if (mem.eql(u8, name, "int"))
"vk_platform" // for some reason, int doesn't depend on vk_platform (but the other c types do)
else
return error.InvalidRegistry;
"vk_platform" // for some reason, int doesn't depend on vk_platform (but the other c types do)
else
return error.InvalidRegistry;
return registry.Declaration{
.name = name,
.decl_type = .{ .foreign = .{ .depends = depends } },
.decl_type = .{.foreign = .{.depends = depends}},
};
}
@@ -106,27 +104,24 @@ fn parseBitmaskType(ty: *xml.Element) !registry.Declaration {
const alias = ty.getAttribute("alias") orelse return error.InvalidRegistry;
return registry.Declaration{
.name = name,
.decl_type = .{ .alias = .{ .name = alias, .target = .other_type } },
.decl_type = .{.alias = .{.name = alias, .target = .other_type}},
};
} else {
const flags_type = ty.getCharData("type") orelse return error.InvalidRegistry;
const bitwidth: u8 = if (mem.eql(u8, flags_type, "VkFlags"))
32
else if (mem.eql(u8, flags_type, "VkFlags64"))
64
else
return error.InvalidRegistry;
32
else if (mem.eql(u8, flags_type, "VkFlags64"))
64
else
return error.InvalidRegistry;
return registry.Declaration{
.name = ty.getCharData("name") orelse return error.InvalidRegistry,
.decl_type = .{
.bitmask = .{
// Who knows why these are different fields
.bits_enum = ty.getAttribute("requires") orelse ty.getAttribute("bitvalues"),
.bitwidth = bitwidth,
},
},
.decl_type = .{.bitmask = .{
.bits_enum = ty.getAttribute("requires") orelse ty.getAttribute("bitvalues"), // Who knows why these are different fields
.bitwidth = bitwidth,
}},
};
}
}
@@ -137,9 +132,7 @@ fn parseHandleType(ty: *xml.Element) !registry.Declaration {
const alias = ty.getAttribute("alias") orelse return error.InvalidRegistry;
return registry.Declaration{
.name = name,
.decl_type = .{
.alias = .{ .name = alias, .target = .other_type },
},
.decl_type = .{.alias = .{.name = alias, .target = .other_type}},
};
} else {
const name = ty.getCharData("name") orelse return error.InvalidRegistry;
@@ -155,115 +148,80 @@ fn parseHandleType(ty: *xml.Element) !registry.Declaration {
.handle = .{
.parent = ty.getAttribute("parent"),
.is_dispatchable = dispatchable,
},
}
},
};
}
}
fn parseBaseType(allocator: Allocator, ty: *xml.Element) !registry.Declaration {
fn parseBaseType(allocator: *Allocator, ty: *xml.Element) !registry.Declaration {
const name = ty.getCharData("name") orelse return error.InvalidRegistry;
if (ty.getCharData("type")) |_| {
var tok = cparse.XmlCTokenizer.init(ty);
return try cparse.parseTypedef(allocator, &tok, false);
return try cparse.parseTypedef(allocator, &tok);
} else {
// Either ANativeWindow, AHardwareBuffer, or CAMetalLayer. The latter has a lot of
// macros, which is why this part is not built into the xml/c parser.
return registry.Declaration{
.name = name,
.decl_type = .{ .foreign = .{ .depends = &.{} } },
.decl_type = .{.external = {}},
};
}
}
fn parseContainer(allocator: Allocator, ty: *xml.Element, is_union: bool, api: registry.Api) !registry.Declaration {
fn parseContainer(allocator: *Allocator, ty: *xml.Element, is_union: bool) !registry.Declaration {
const name = ty.getAttribute("name") orelse return error.InvalidRegistry;
if (ty.getAttribute("alias")) |alias| {
return registry.Declaration{
.name = name,
.decl_type = .{
.alias = .{ .name = alias, .target = .other_type },
},
.decl_type = .{.alias = .{.name = alias, .target = .other_type}},
};
}
var members = try allocator.alloc(registry.Container.Field, ty.children.len);
var members = try allocator.alloc(registry.Container.Field, ty.children.items.len);
var i: usize = 0;
var it = ty.findChildrenByTag("member");
var maybe_stype: ?[]const u8 = null;
while (it.next()) |member| {
if (!requiredByApi(member, api))
continue;
var xctok = cparse.XmlCTokenizer.init(member);
members[i] = try cparse.parseMember(allocator, &xctok, false);
members[i] = try cparse.parseMember(allocator, &xctok);
if (mem.eql(u8, members[i].name, "sType")) {
if (member.getAttribute("values")) |stype| {
maybe_stype = stype;
}
}
if (member.getAttribute("optional")) |optionals| {
var optional_it = mem.splitScalar(u8, optionals, ',');
if (optional_it.next()) |first_optional| {
members[i].is_optional = mem.eql(u8, first_optional, "true");
} else {
// Optional is empty, probably incorrect.
return error.InvalidRegistry;
}
}
i += 1;
}
members = members[0..i];
var maybe_extends: ?[][]const u8 = null;
if (ty.getAttribute("structextends")) |extends| {
const n_structs = std.mem.count(u8, extends, ",") + 1;
maybe_extends = try allocator.alloc([]const u8, n_structs);
var struct_extends = std.mem.splitScalar(u8, extends, ',');
var j: usize = 0;
while (struct_extends.next()) |struct_extend| {
maybe_extends.?[j] = struct_extend;
j += 1;
}
}
members = allocator.shrink(members, i);
it = ty.findChildrenByTag("member");
for (members) |*member| {
const member_elem = while (it.next()) |elem| {
if (requiredByApi(elem, api)) break elem;
} else unreachable;
try parsePointerMeta(.{ .container = members }, &member.field_type, member_elem);
// pNext isn't always properly marked as optional, so just manually override it.
if (mem.eql(u8, member.name, "pNext")) {
member.field_type.pointer.is_optional = true;
}
const member_elem = it.next().?;
try parsePointerMeta(.{.container = members}, &member.field_type, member_elem);
}
return registry.Declaration{
return registry.Declaration {
.name = name,
.decl_type = .{
.container = .{
.stype = maybe_stype,
.fields = members,
.is_union = is_union,
.extends = maybe_extends,
},
},
}
}
};
}
fn parseFuncPointer(allocator: Allocator, ty: *xml.Element) !registry.Declaration {
fn parseFuncPointer(allocator: *Allocator, ty: *xml.Element) !registry.Declaration {
var xctok = cparse.XmlCTokenizer.init(ty);
return try cparse.parseTypedef(allocator, &xctok, true);
return try cparse.parseTypedef(allocator, &xctok);
}
// For some reason, the DeclarationType cannot be passed to lenToPointer, as
// For some reason, the DeclarationType cannot be passed to lenToPointerSize, as
// that causes the Zig compiler to generate invalid code for the function. Using a
// dedicated enum fixes the issue...
const Fields = union(enum) {
@@ -271,14 +229,13 @@ const Fields = union(enum) {
container: []registry.Container.Field,
};
// returns .{ size, nullable }
fn lenToPointer(fields: Fields, len: []const u8) std.meta.Tuple(&.{ registry.Pointer.PointerSize, bool }) {
fn lenToPointerSize(fields: Fields, len: []const u8) registry.Pointer.PointerSize {
switch (fields) {
.command => |params| {
for (params) |*param| {
if (mem.eql(u8, param.name, len)) {
param.is_buffer_len = true;
return .{ .{ .other_field = param.name }, param.is_optional };
return .{.other_field = param.name};
}
}
},
@@ -286,120 +243,77 @@ fn lenToPointer(fields: Fields, len: []const u8) std.meta.Tuple(&.{ registry.Poi
for (members) |*member| {
if (mem.eql(u8, member.name, len)) {
member.is_buffer_len = true;
return .{ .{ .other_field = member.name }, member.is_optional };
return .{.other_field = member.name};
}
}
},
}
if (mem.eql(u8, len, "null-terminated")) {
return .{ .zero_terminated, false };
return .zero_terminated;
} else {
return .{ .many, false };
return .many;
}
}
fn parsePointerMeta(fields: Fields, type_info: *registry.TypeInfo, elem: *xml.Element) !void {
var len_attribute_depth: usize = 0;
if (elem.getAttribute("len")) |lens| {
var it = mem.splitScalar(u8, lens, ',');
var it = mem.split(lens, ",");
var current_type_info = type_info;
while (true) switch (current_type_info.*) {
.pointer => |*ptr| {
if (it.next()) |len_str| {
ptr.size, ptr.is_optional = lenToPointer(fields, len_str);
} else {
ptr.size = .many;
}
current_type_info = ptr.child;
len_attribute_depth += 1;
},
.array => |*arr| {
if (it.next()) |len_str| {
const size, _ = lenToPointer(fields, len_str);
arr.valid_size = switch (size) {
.one => .all,
.many => .many,
.other_field => |field| .{ .other_field = field },
.zero_terminated => .zero_terminated,
};
} else {
arr.valid_size = .all;
}
current_type_info = arr.child;
len_attribute_depth += 1;
},
else => break,
};
while (current_type_info.* == .pointer) {
// TODO: Check altlen
const size = if (it.next()) |len_str| lenToPointerSize(fields, len_str) else .one;
current_type_info.pointer.size = size;
current_type_info = current_type_info.pointer.child;
}
if (it.next()) |_| {
// There are more elements in the `len` attribute than there are pointers.
// Something probably went wrong.
std.log.err("len: {s}", .{lens});
return error.InvalidRegistry;
}
}
var current_depth: usize = 0;
if (elem.getAttribute("optional")) |optionals| {
var it = mem.splitScalar(u8, optionals, ',');
var it = mem.split(optionals, ",");
var current_type_info = type_info;
while (true) switch (current_type_info.*) {
inline .pointer, .array => |*info| {
if (it.next()) |optional_str| {
while (current_type_info.* == .pointer) {
if (it.next()) |current_optional| {
current_type_info.pointer.is_optional = mem.eql(u8, current_optional, "true");
} else {
// There is no information for this pointer, probably incorrect.
return error.InvalidRegistry;
}
// The pointer may have already been marked as optional due to its `len` attribute.
const is_already_optional = current_depth < len_attribute_depth and info.is_optional;
info.is_optional = is_already_optional or mem.eql(u8, optional_str, "true");
} else {
// There is no information for this pointer, probably incorrect.
// Currently there is one definition where this is the case, VkCudaLaunchInfoNV.
// We work around these by assuming that they are optional, so that in the case
// that they are, we can assign null to them.
// See https://github.com/Snektron/vulkan-zig/issues/109
info.is_optional = true;
}
current_type_info = info.child;
current_depth += 1;
},
else => break,
};
current_type_info = current_type_info.pointer.child;
}
}
}
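The `parsePointerMeta` logic above distributes a comma-separated `len` attribute over nested pointer levels, one entry per level. A minimal Python sketch of that classification (function names and the exact fallback rules are illustrative; the real classification lives in `lenToPointer`, which is partly outside this hunk):

```python
def len_entry_to_size(entry: str):
    """Classify one `len` entry, mirroring lenToPointer (hypothetical names)."""
    if entry == "null-terminated":
        return "zero_terminated"
    if entry.isidentifier():        # a plain name: the length lives in another field
        return ("other_field", entry)
    return "many"                   # a complex expression such as "codeSize / 4"

# A parameter with len="dataSize,null-terminated": the outer pointer is sized
# by the dataSize field, the inner pointers are null-terminated strings.
sizes = [len_entry_to_size(e) for e in "dataSize,null-terminated".split(",")]
```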
fn parseEnumAlias(elem: *xml.Element) !?registry.Declaration {
fn parseEnumAlias(allocator: *Allocator, elem: *xml.Element) !?registry.Declaration {
if (elem.getAttribute("alias")) |alias| {
const name = elem.getAttribute("name") orelse return error.InvalidRegistry;
return registry.Declaration{
.name = name,
.decl_type = .{
.alias = .{ .name = alias, .target = .other_type },
},
.decl_type = .{.alias = .{.name = alias, .target = .other_type}},
};
}
return null;
}
fn parseEnums(allocator: Allocator, out: []registry.Declaration, root: *xml.Element, api: registry.Api) !usize {
fn parseEnums(allocator: *Allocator, out: []registry.Declaration, root: *xml.Element) !usize {
var i: usize = 0;
var it = root.findChildrenByTag("enums");
while (it.next()) |enums| {
const name = enums.getAttribute("name") orelse return error.InvalidRegistry;
if (mem.eql(u8, name, api_constants_name) or !requiredByApi(enums, api)) {
if (mem.eql(u8, name, api_constants_name)) {
continue;
}
out[i] = .{
.name = name,
.decl_type = .{ .enumeration = try parseEnumFields(allocator, enums, api) },
.decl_type = .{.enumeration = try parseEnumFields(allocator, enums)},
};
i += 1;
}
@@ -407,7 +321,7 @@ fn parseEnums(allocator: Allocator, out: []registry.Declaration, root: *xml.Elem
return i;
}
fn parseEnumFields(allocator: Allocator, elem: *xml.Element, api: registry.Api) !registry.Enum {
fn parseEnumFields(allocator: *Allocator, elem: *xml.Element) !registry.Enum {
// TODO: `type` was added recently, fall back to checking endswith FlagBits for older versions?
const enum_type = elem.getAttribute("type") orelse return error.InvalidRegistry;
const is_bitmask = mem.eql(u8, enum_type, "bitmask");
@@ -416,24 +330,21 @@ fn parseEnumFields(allocator: Allocator, elem: *xml.Element, api: registry.Api)
}
const bitwidth = if (elem.getAttribute("bitwidth")) |bitwidth|
try std.fmt.parseInt(u8, bitwidth, 10)
else
32;
try std.fmt.parseInt(u8, bitwidth, 10)
else
32;
const fields = try allocator.alloc(registry.Enum.Field, elem.children.len);
const fields = try allocator.alloc(registry.Enum.Field, elem.children.items.len);
var i: usize = 0;
var it = elem.findChildrenByTag("enum");
while (it.next()) |field| {
if (!requiredByApi(field, api))
continue;
fields[i] = try parseEnumField(field);
i += 1;
}
return registry.Enum{
.fields = fields[0..i],
.fields = allocator.shrink(fields, i),
.bitwidth = bitwidth,
.is_bitmask = is_bitmask,
};
@@ -458,14 +369,14 @@ fn parseEnumField(field: *xml.Element) !registry.Enum.Field {
// tag. In the latter case it's passed via the `ext_nr` parameter.
if (field.getAttribute("value")) |value| {
if (mem.startsWith(u8, value, "0x")) {
break :blk .{ .bit_vector = try std.fmt.parseInt(i32, value[2..], 16) };
break :blk .{.bit_vector = try std.fmt.parseInt(i32, value[2..], 16)};
} else {
break :blk .{ .int = try std.fmt.parseInt(i32, value, 10) };
break :blk .{.int = try std.fmt.parseInt(i32, value, 10)};
}
} else if (field.getAttribute("bitpos")) |bitpos| {
break :blk .{ .bitpos = try std.fmt.parseInt(u6, bitpos, 10) };
break :blk .{.bitpos = try std.fmt.parseInt(u6, bitpos, 10)};
} else if (field.getAttribute("alias")) |alias| {
break :blk .{ .alias = .{ .name = alias, .is_compat_alias = is_compat_alias } };
break :blk .{.alias = .{.name = alias, .is_compat_alias = is_compat_alias}};
} else {
return error.InvalidRegistry;
}
@@ -477,28 +388,25 @@ fn parseEnumField(field: *xml.Element) !registry.Enum.Field {
};
}
fn parseCommands(allocator: Allocator, out: []registry.Declaration, commands_elem: *xml.Element, api: registry.Api) !usize {
fn parseCommands(allocator: *Allocator, out: []registry.Declaration, commands_elem: *xml.Element) !usize {
var i: usize = 0;
var it = commands_elem.findChildrenByTag("command");
while (it.next()) |elem| {
if (!requiredByApi(elem, api))
continue;
out[i] = try parseCommand(allocator, elem, api);
out[i] = try parseCommand(allocator, elem);
i += 1;
}
return i;
}
fn splitCommaAlloc(allocator: Allocator, text: []const u8) ![][]const u8 {
fn splitCommaAlloc(allocator: *Allocator, text: []const u8) ![][]const u8 {
var n_codes: usize = 1;
for (text) |c| {
if (c == ',') n_codes += 1;
}
const codes = try allocator.alloc([]const u8, n_codes);
var it = mem.splitScalar(u8, text, ',');
var it = mem.split(text, ",");
for (codes) |*code| {
code.* = it.next().?;
}
@@ -506,47 +414,31 @@ fn splitCommaAlloc(allocator: Allocator, text: []const u8) ![][]const u8 {
return codes;
}
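`splitCommaAlloc` pre-counts the commas so it can allocate the result slice exactly once before filling it. The same two-pass shape in Python, for illustration only:

```python
def split_comma(text: str):
    n_codes = 1 + text.count(",")   # pass 1: size the output exactly
    out = [None] * n_codes
    it = iter(text.split(","))      # pass 2: fill it in order
    for i in range(n_codes):
        out[i] = next(it)
    return out
```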
fn parseCommand(allocator: Allocator, elem: *xml.Element, api: registry.Api) !registry.Declaration {
fn parseCommand(allocator: *Allocator, elem: *xml.Element) !registry.Declaration {
if (elem.getAttribute("alias")) |alias| {
const name = elem.getAttribute("name") orelse return error.InvalidRegistry;
return registry.Declaration{
.name = name,
.decl_type = .{
.alias = .{ .name = alias, .target = .other_command },
},
.decl_type = .{.alias = .{.name = alias, .target = .other_command}}
};
}
const proto = elem.findChildByTag("proto") orelse return error.InvalidRegistry;
var proto_xctok = cparse.XmlCTokenizer.init(proto);
const command_decl = try cparse.parseParamOrProto(allocator, &proto_xctok, false);
const command_decl = try cparse.parseParamOrProto(allocator, &proto_xctok);
var params = try allocator.alloc(registry.Command.Param, elem.children.len);
var params = try allocator.alloc(registry.Command.Param, elem.children.items.len);
var i: usize = 0;
var it = elem.findChildrenByTag("param");
while (it.next()) |param| {
if (!requiredByApi(param, api))
continue;
var xctok = cparse.XmlCTokenizer.init(param);
const decl = try cparse.parseParamOrProto(allocator, &xctok, false);
const decl = try cparse.parseParamOrProto(allocator, &xctok);
params[i] = .{
.name = decl.name,
.param_type = decl.decl_type.typedef,
.is_buffer_len = false,
.is_optional = false,
};
if (param.getAttribute("optional")) |optionals| {
var optional_it = mem.splitScalar(u8, optionals, ',');
if (optional_it.next()) |first_optional| {
params[i].is_optional = mem.eql(u8, first_optional, "true");
} else {
// Optional is empty, probably incorrect.
return error.InvalidRegistry;
}
}
i += 1;
}
@@ -554,27 +446,24 @@ fn parseCommand(allocator: Allocator, elem: *xml.Element, api: registry.Api) !re
return_type.* = command_decl.decl_type.typedef;
const success_codes = if (elem.getAttribute("successcodes")) |codes|
try splitCommaAlloc(allocator, codes)
else
&[_][]const u8{};
try splitCommaAlloc(allocator, codes)
else
&[_][]const u8{};
const error_codes = if (elem.getAttribute("errorcodes")) |codes|
try splitCommaAlloc(allocator, codes)
else
&[_][]const u8{};
try splitCommaAlloc(allocator, codes)
else
&[_][]const u8{};
params = params[0..i];
params = allocator.shrink(params, i);
it = elem.findChildrenByTag("param");
for (params) |*param| {
const param_elem = while (it.next()) |param_elem| {
if (requiredByApi(param_elem, api)) break param_elem;
} else unreachable;
try parsePointerMeta(.{ .command = params }, &param.param_type, param_elem);
const param_elem = it.next().?;
try parsePointerMeta(.{.command = params}, &param.param_type, param_elem);
}
return registry.Declaration{
return registry.Declaration {
.name = command_decl.name,
.decl_type = .{
.command = .{
@@ -582,12 +471,12 @@ fn parseCommand(allocator: Allocator, elem: *xml.Element, api: registry.Api) !re
.return_type = return_type,
.success_codes = success_codes,
.error_codes = error_codes,
},
},
}
}
};
}
fn parseApiConstants(allocator: Allocator, root: *xml.Element, api: registry.Api) ![]registry.ApiConstant {
fn parseApiConstants(allocator: *Allocator, root: *xml.Element) ![]registry.ApiConstant {
var enums = blk: {
var it = root.findChildrenByTag("enums");
while (it.next()) |child| {
@@ -614,56 +503,52 @@ fn parseApiConstants(allocator: Allocator, root: *xml.Element, api: registry.Api
break :blk n_defines;
};
const constants = try allocator.alloc(registry.ApiConstant, enums.children.len + n_defines);
const constants = try allocator.alloc(registry.ApiConstant, enums.children.items.len + n_defines);
var i: usize = 0;
var it = enums.findChildrenByTag("enum");
while (it.next()) |constant| {
if (!requiredByApi(constant, api))
continue;
const expr = if (constant.getAttribute("value")) |expr|
expr
else if (constant.getAttribute("alias")) |alias|
alias
else
return error.InvalidRegistry;
expr
else if (constant.getAttribute("alias")) |alias|
alias
else
return error.InvalidRegistry;
constants[i] = .{
.name = constant.getAttribute("name") orelse return error.InvalidRegistry,
.value = .{ .expr = expr },
.value = .{.expr = expr},
};
i += 1;
}
i += try parseDefines(types, constants[i..], api);
return constants[0..i];
i += try parseDefines(types, constants[i..]);
return allocator.shrink(constants, i);
}
fn parseDefines(types: *xml.Element, out: []registry.ApiConstant, api: registry.Api) !usize {
fn parseDefines(types: *xml.Element, out: []registry.ApiConstant) !usize {
var i: usize = 0;
var it = types.findChildrenByTag("type");
while (it.next()) |ty| {
if (!requiredByApi(ty, api))
continue;
const category = ty.getAttribute("category") orelse continue;
if (!mem.eql(u8, category, "define")) {
continue;
}
const name = ty.getCharData("name") orelse continue;
if (mem.eql(u8, name, "VK_HEADER_VERSION") or mem.eql(u8, name, "VKSC_API_VARIANT")) {
if (mem.eql(u8, name, "VK_HEADER_VERSION")) {
out[i] = .{
.name = name,
.value = .{ .expr = mem.trim(u8, ty.children[2].char_data, " ") },
.value = .{.expr = mem.trim(u8, ty.children.items[2].CharData, " ")},
};
} else {
var xctok = cparse.XmlCTokenizer.init(ty);
out[i] = .{
.name = name,
.value = .{ .version = cparse.parseVersion(&xctok) catch continue },
.value = .{
.version = cparse.parseVersion(&xctok) catch continue
},
};
}
i += 1;
@@ -672,9 +557,9 @@ fn parseDefines(types: *xml.Element, out: []registry.ApiConstant, api: registry.
return i;
}
fn parseTags(allocator: Allocator, root: *xml.Element) ![]registry.Tag {
fn parseTags(allocator: *Allocator, root: *xml.Element) ![]registry.Tag {
var tags_elem = root.findChildByTag("tags") orelse return error.InvalidRegistry;
const tags = try allocator.alloc(registry.Tag, tags_elem.children.len);
const tags = try allocator.alloc(registry.Tag, tags_elem.children.items.len);
var i: usize = 0;
var it = tags_elem.findChildrenByTag("tag");
@@ -687,10 +572,10 @@ fn parseTags(allocator: Allocator, root: *xml.Element) ![]registry.Tag {
i += 1;
}
return tags[0..i];
return allocator.shrink(tags, i);
}
fn parseFeatures(allocator: Allocator, root: *xml.Element, api: registry.Api) ![]registry.Feature {
fn parseFeatures(allocator: *Allocator, root: *xml.Element) ![]registry.Feature {
var it = root.findChildrenByTag("feature");
var count: usize = 0;
while (it.next()) |_| count += 1;
@@ -699,38 +584,32 @@ fn parseFeatures(allocator: Allocator, root: *xml.Element, api: registry.Api) ![
var i: usize = 0;
it = root.findChildrenByTag("feature");
while (it.next()) |feature| {
if (!requiredByApi(feature, api))
continue;
features[i] = try parseFeature(allocator, feature, api);
features[i] = try parseFeature(allocator, feature);
i += 1;
}
return features[0..i];
return features;
}
fn parseFeature(allocator: Allocator, feature: *xml.Element, api: registry.Api) !registry.Feature {
fn parseFeature(allocator: *Allocator, feature: *xml.Element) !registry.Feature {
const name = feature.getAttribute("name") orelse return error.InvalidRegistry;
const feature_level = blk: {
const number = feature.getAttribute("number") orelse return error.InvalidRegistry;
break :blk try splitFeatureLevel(number, ".");
};
var requires = try allocator.alloc(registry.Require, feature.children.len);
var requires = try allocator.alloc(registry.Require, feature.children.items.len);
var i: usize = 0;
var it = feature.findChildrenByTag("require");
while (it.next()) |require| {
if (!requiredByApi(require, api))
continue;
requires[i] = try parseRequire(allocator, require, null, api);
requires[i] = try parseRequire(allocator, require, null);
i += 1;
}
return registry.Feature{
.name = name,
.level = feature_level,
.requires = requires[0..i],
.requires = allocator.shrink(requires, i)
};
}
@@ -763,10 +642,7 @@ fn parseEnumExtension(elem: *xml.Element, parent_extnumber: ?u31) !?registry.Req
return registry.Require.EnumExtension{
.extends = extends,
.extnumber = actual_extnumber,
.field = .{
.name = name,
.value = .{ .int = value },
},
.field = .{.name = name, .value = .{.int = value}},
};
}
@@ -783,7 +659,7 @@ fn enumExtOffsetToValue(extnumber: u31, offset: u31) u31 {
return extension_value_base + (extnumber - 1) * extension_block + offset;
}
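`enumExtOffsetToValue` implements the Vulkan registry's scheme for extension enumerants: each extension owns a block of 1000 values starting at 1000000000, and an `offset` indexes into that block. A Python rendering of the same formula (constant names mirror the Zig ones, which are declared outside this hunk):

```python
EXTENSION_VALUE_BASE = 1_000_000_000
EXTENSION_BLOCK = 1_000

def enum_ext_offset_to_value(extnumber: int, offset: int) -> int:
    # value = base + (extension number - 1) * block size + offset
    return EXTENSION_VALUE_BASE + (extnumber - 1) * EXTENSION_BLOCK + offset

# VK_KHR_swapchain is extension number 2, so its first enumerant
# (VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR) lands at 1000001000.
```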
fn parseRequire(allocator: Allocator, require: *xml.Element, extnumber: ?u31, api: registry.Api) !registry.Require {
fn parseRequire(allocator: *Allocator, require: *xml.Element, extnumber: ?u31) !registry.Require {
var n_extends: usize = 0;
var n_types: usize = 0;
var n_commands: usize = 0;
@@ -809,9 +685,6 @@ fn parseRequire(allocator: Allocator, require: *xml.Element, extnumber: ?u31, ap
it = require.elements();
while (it.next()) |elem| {
if (!requiredByApi(elem, api))
continue;
if (mem.eql(u8, elem.tag, "enum")) {
if (try parseEnumExtension(elem, extnumber)) |ext| {
extends[i_extends] = ext;
@@ -832,27 +705,25 @@ fn parseRequire(allocator: Allocator, require: *xml.Element, extnumber: ?u31, ap
return error.InvalidRegistry;
}
break :blk try splitFeatureLevel(feature_level["VK_VERSION_".len..], "_");
break :blk try splitFeatureLevel(feature_level["VK_VERSION_".len ..], "_");
};
return registry.Require{
.extends = extends[0..i_extends],
.types = types[0..i_types],
.commands = commands[0..i_commands],
.extends = allocator.shrink(extends, i_extends),
.types = types,
.commands = commands,
.required_feature_level = required_feature_level,
.required_extension = require.getAttribute("extension"),
};
}
fn parseExtensions(allocator: Allocator, root: *xml.Element, api: registry.Api) ![]registry.Extension {
fn parseExtensions(allocator: *Allocator, root: *xml.Element) ![]registry.Extension {
const extensions_elem = root.findChildByTag("extensions") orelse return error.InvalidRegistry;
const extensions = try allocator.alloc(registry.Extension, extensions_elem.children.len);
const extensions = try allocator.alloc(registry.Extension, extensions_elem.children.items.len);
var i: usize = 0;
var it = extensions_elem.findChildrenByTag("extension");
while (it.next()) |extension| {
if (!requiredByApi(extension, api))
continue;
// Some extensions (in particular 94) are disabled, so just skip them
if (extension.getAttribute("supported")) |supported| {
if (mem.eql(u8, supported, "disabled")) {
@@ -860,11 +731,11 @@ fn parseExtensions(allocator: Allocator, root: *xml.Element, api: registry.Api)
}
}
extensions[i] = try parseExtension(allocator, extension, api);
extensions[i] = try parseExtension(allocator, extension);
i += 1;
}
return extensions[0..i];
return allocator.shrink(extensions, i);
}
fn findExtVersion(extension: *xml.Element) !u32 {
@@ -883,7 +754,7 @@ fn findExtVersion(extension: *xml.Element) !u32 {
return error.InvalidRegistry;
}
fn parseExtension(allocator: Allocator, extension: *xml.Element, api: registry.Api) !registry.Extension {
fn parseExtension(allocator: *Allocator, extension: *xml.Element) !registry.Extension {
const name = extension.getAttribute("name") orelse return error.InvalidRegistry;
const platform = extension.getAttribute("platform");
const version = try findExtVersion(extension);
@@ -892,18 +763,19 @@ fn parseExtension(allocator: Allocator, extension: *xml.Element, api: registry.A
// feature level: both separately in each <require> tag, or using
// the requiresCore attribute.
const requires_core = if (extension.getAttribute("requiresCore")) |feature_level|
try splitFeatureLevel(feature_level, ".")
else
null;
try splitFeatureLevel(feature_level, ".")
else
null;
const promoted_to: registry.Extension.Promotion = blk: {
const promotedto = extension.getAttribute("promotedto") orelse break :blk .none;
if (mem.startsWith(u8, promotedto, "VK_VERSION_")) {
const feature_level = try splitFeatureLevel(promotedto["VK_VERSION_".len..], "_");
break :blk .{ .feature = feature_level };
const feature_level = try splitFeatureLevel(promotedto["VK_VERSION_".len ..], "_");
break :blk .{.feature = feature_level};
}
break :blk .{ .extension = promotedto };
break :blk .{.extension = promotedto};
};
const number = blk: {
@@ -927,13 +799,11 @@ fn parseExtension(allocator: Allocator, extension: *xml.Element, api: registry.A
break :blk try splitCommaAlloc(allocator, requires_str);
};
var requires = try allocator.alloc(registry.Require, extension.children.len);
var requires = try allocator.alloc(registry.Require, extension.children.items.len);
var i: usize = 0;
var it = extension.findChildrenByTag("require");
while (it.next()) |require| {
if (!requiredByApi(require, api))
continue;
requires[i] = try parseRequire(allocator, require, number, api);
requires[i] = try parseRequire(allocator, require, number);
i += 1;
}
@@ -946,12 +816,12 @@ fn parseExtension(allocator: Allocator, extension: *xml.Element, api: registry.A
.promoted_to = promoted_to,
.platform = platform,
.required_feature_level = requires_core,
.requires = requires[0..i],
.requires = allocator.shrink(requires, i)
};
}
fn splitFeatureLevel(ver: []const u8, split: []const u8) !registry.FeatureLevel {
var it = mem.splitSequence(u8, ver, split);
var it = mem.split(ver, split);
const major = it.next() orelse return error.InvalidFeatureLevel;
const minor = it.next() orelse return error.InvalidFeatureLevel;
@@ -964,14 +834,3 @@ fn splitFeatureLevel(ver: []const u8, split: []const u8) !registry.FeatureLevel
.minor = try std.fmt.parseInt(u32, minor, 10),
};
}
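`splitFeatureLevel` accepts exactly a "major.minor" pair and rejects anything with extra components. An equivalent sketch in Python:

```python
def split_feature_level(ver: str, sep: str = ".") -> tuple:
    parts = ver.split(sep)
    if len(parts) != 2:             # e.g. "1.2.3" is not a valid feature level
        raise ValueError("invalid feature level")
    return int(parts[0]), int(parts[1])
```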
fn requiredByApi(elem: *xml.Element, api: registry.Api) bool {
const apis = elem.getAttribute("api") orelse return true; // If the 'api' element is not present, assume required.
var it = mem.splitScalar(u8, apis, ',');
while (it.next()) |required_by_api| {
if (std.mem.eql(u8, @tagName(api), required_by_api)) return true;
}
return false;
}
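The `requiredByApi` filter added above treats a missing `api` attribute as "required by every API", and otherwise checks whether the requested API appears in the comma-separated list. The same predicate in Python, for illustration:

```python
def required_by_api(api_attr, api: str) -> bool:
    if api_attr is None:            # no 'api' attribute: assume required
        return True
    return api in api_attr.split(",")
```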

@@ -1,9 +1,5 @@
pub const Api = enum {
vulkan,
vulkansc,
};
pub const Registry = struct {
copyright: []const u8,
decls: []Declaration,
api_constants: []ApiConstant,
tags: []Tag,
@@ -66,11 +62,9 @@ pub const Container = struct {
field_type: TypeInfo,
bits: ?usize,
is_buffer_len: bool,
is_optional: bool,
};
stype: ?[]const u8,
extends: ?[]const []const u8,
fields: []Field,
is_union: bool,
};
@@ -83,7 +77,7 @@ pub const Enum = struct {
alias: struct {
name: []const u8,
is_compat_alias: bool,
},
}
};
pub const Field = struct {
@@ -111,23 +105,20 @@ pub const Command = struct {
name: []const u8,
param_type: TypeInfo,
is_buffer_len: bool,
is_optional: bool,
};
params: []Param,
return_type: *TypeInfo,
success_codes: []const []const u8,
error_codes: []const []const u8,
success_codes: [][]const u8,
error_codes: [][]const u8,
};
pub const Pointer = struct {
pub const PointerSize = union(enum) {
one,
/// The length is given by some complex expression, possibly involving another field
many,
/// The length is given by some other field or parameter
other_field: []const u8,
zero_terminated,
many, // The length is given by some complex expression, possibly involving another field
other_field: []const u8, // The length is given by some other field or parameter
zero_terminated
};
is_const: bool,
@@ -142,26 +133,7 @@ pub const Array = struct {
alias: []const u8, // Field size is given by an api constant
};
pub const ArrayValidSize = union(enum) {
/// All elements are valid.
all,
/// The length is given by some complex expression, possibly involving another field
many,
/// The length is given by some complex expression, possibly involving another field
other_field: []const u8,
/// The valid elements are terminated by a 0, or by the bounds of the array.
zero_terminated,
};
/// This is the total size of the array
size: ArraySize,
/// The number of items that are actually filled with valid values
valid_size: ArrayValidSize,
/// Some members may indicate than an array is optional. This happens with
/// VkPhysicalDeviceHostImageCopyPropertiesEXT::optimalTilingLayoutUUID for example.
/// The spec is not entirely clear about what this means, but presumably it should
/// be filled with all zeroes.
is_optional: bool,
child: *TypeInfo,
};

generator/vulkan/render.zig (new file, 1267 lines): diff suppressed because it is too large.

generator/xml.zig (new file, 667 lines):

@@ -0,0 +1,667 @@
const std = @import("std");
const mem = std.mem;
const testing = std.testing;
const Allocator = mem.Allocator;
const ArenaAllocator = std.heap.ArenaAllocator;
const ArrayList = std.ArrayList;
pub const Attribute = struct {
name: []const u8,
value: []const u8
};
pub const Content = union(enum) {
CharData: []const u8,
Comment: []const u8,
Element: *Element
};
pub const Element = struct {
pub const AttributeList = ArrayList(*Attribute);
pub const ContentList = ArrayList(Content);
tag: []const u8,
attributes: AttributeList,
children: ContentList,
fn init(tag: []const u8, alloc: *Allocator) Element {
return .{
.tag = tag,
.attributes = AttributeList.init(alloc),
.children = ContentList.init(alloc),
};
}
pub fn getAttribute(self: *Element, attrib_name: []const u8) ?[]const u8 {
for (self.attributes.items) |child| {
if (mem.eql(u8, child.name, attrib_name)) {
return child.value;
}
}
return null;
}
pub fn getCharData(self: *Element, child_tag: []const u8) ?[]const u8 {
const child = self.findChildByTag(child_tag) orelse return null;
if (child.children.items.len != 1) {
return null;
}
return switch (child.children.items[0]) {
.CharData => |char_data| char_data,
else => null
};
}
pub fn iterator(self: *Element) ChildIterator {
return .{
.items = self.children.items,
.i = 0,
};
}
pub fn elements(self: *Element) ChildElementIterator {
return .{
.inner = self.iterator(),
};
}
pub fn findChildByTag(self: *Element, tag: []const u8) ?*Element {
return self.findChildrenByTag(tag).next();
}
pub fn findChildrenByTag(self: *Element, tag: []const u8) FindChildrenByTagIterator {
return .{
.inner = self.elements(),
.tag = tag
};
}
pub const ChildIterator = struct {
items: []Content,
i: usize,
pub fn next(self: *ChildIterator) ?*Content {
if (self.i < self.items.len) {
self.i += 1;
return &self.items[self.i - 1];
}
return null;
}
};
pub const ChildElementIterator = struct {
inner: ChildIterator,
pub fn next(self: *ChildElementIterator) ?*Element {
while (self.inner.next()) |child| {
if (child.* != .Element) {
continue;
}
return child.*.Element;
}
return null;
}
};
pub const FindChildrenByTagIterator = struct {
inner: ChildElementIterator,
tag: []const u8,
pub fn next(self: *FindChildrenByTagIterator) ?*Element {
while (self.inner.next()) |child| {
if (!mem.eql(u8, child.tag, self.tag)) {
continue;
}
return child;
}
return null;
}
};
};
pub const XmlDecl = struct {
version: []const u8,
encoding: ?[]const u8,
standalone: ?bool
};
pub const Document = struct {
arena: ArenaAllocator,
xml_decl: ?*XmlDecl,
root: *Element,
pub fn deinit(self: Document) void {
var arena = self.arena; // Copy to stack so self can be taken by value.
arena.deinit();
}
};
const ParseContext = struct {
source: []const u8,
offset: usize,
line: usize,
column: usize,
fn init(source: []const u8) ParseContext {
return .{
.source = source,
.offset = 0,
.line = 0,
.column = 0
};
}
fn peek(self: *ParseContext) ?u8 {
return if (self.offset < self.source.len) self.source[self.offset] else null;
}
fn consume(self: *ParseContext) !u8 {
if (self.offset < self.source.len) {
return self.consumeNoEof();
}
return error.UnexpectedEof;
}
fn consumeNoEof(self: *ParseContext) u8 {
std.debug.assert(self.offset < self.source.len);
const c = self.source[self.offset];
self.offset += 1;
if (c == '\n') {
self.line += 1;
self.column = 0;
} else {
self.column += 1;
}
return c;
}
fn eat(self: *ParseContext, char: u8) bool {
self.expect(char) catch return false;
return true;
}
fn expect(self: *ParseContext, expected: u8) !void {
if (self.peek()) |actual| {
if (expected != actual) {
return error.UnexpectedCharacter;
}
_ = self.consumeNoEof();
return;
}
return error.UnexpectedEof;
}
fn eatStr(self: *ParseContext, text: []const u8) bool {
self.expectStr(text) catch return false;
return true;
}
fn expectStr(self: *ParseContext, text: []const u8) !void {
if (self.source.len < self.offset + text.len) {
return error.UnexpectedEof;
} else if (std.mem.startsWith(u8, self.source[self.offset ..], text)) {
var i: usize = 0;
while (i < text.len) : (i += 1) {
_ = self.consumeNoEof();
}
return;
}
return error.UnexpectedCharacter;
}
fn eatWs(self: *ParseContext) bool {
var ws = false;
while (self.peek()) |ch| {
switch (ch) {
' ', '\t', '\n', '\r' => {
ws = true;
_ = self.consumeNoEof();
},
else => break
}
}
return ws;
}
fn expectWs(self: *ParseContext) !void {
if (!self.eatWs()) return error.UnexpectedCharacter;
}
fn currentLine(self: ParseContext) []const u8 {
var begin: usize = 0;
if (mem.lastIndexOfScalar(u8, self.source[0 .. self.offset], '\n')) |prev_nl| {
begin = prev_nl + 1;
}
var end = mem.indexOfScalarPos(u8, self.source, self.offset, '\n') orelse self.source.len;
return self.source[begin .. end];
}
};
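`ParseContext` is a plain offset cursor over the source that also tracks line and column for diagnostics; `consumeNoEof` resets the column on every newline. A minimal Python counterpart of that bookkeeping (illustrative only):

```python
class Cursor:
    def __init__(self, source: str):
        self.source, self.offset = source, 0
        self.line, self.column = 0, 0

    def peek(self):
        return self.source[self.offset] if self.offset < len(self.source) else None

    def consume(self):
        c = self.source[self.offset]
        self.offset += 1
        if c == "\n":               # newline: advance line, reset column
            self.line, self.column = self.line + 1, 0
        else:
            self.column += 1
        return c
```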
test "ParseContext" {
{
var ctx = ParseContext.init("I like pythons");
testing.expectEqual(@as(?u8, 'I'), ctx.peek());
testing.expectEqual(@as(u8, 'I'), ctx.consumeNoEof());
testing.expectEqual(@as(?u8, ' '), ctx.peek());
testing.expectEqual(@as(u8, ' '), try ctx.consume());
testing.expect(ctx.eat('l'));
testing.expectEqual(@as(?u8, 'i'), ctx.peek());
testing.expectEqual(false, ctx.eat('a'));
testing.expectEqual(@as(?u8, 'i'), ctx.peek());
try ctx.expect('i');
testing.expectEqual(@as(?u8, 'k'), ctx.peek());
testing.expectError(error.UnexpectedCharacter, ctx.expect('a'));
testing.expectEqual(@as(?u8, 'k'), ctx.peek());
testing.expect(ctx.eatStr("ke"));
testing.expectEqual(@as(?u8, ' '), ctx.peek());
testing.expect(ctx.eatWs());
testing.expectEqual(@as(?u8, 'p'), ctx.peek());
testing.expectEqual(false, ctx.eatWs());
testing.expectEqual(@as(?u8, 'p'), ctx.peek());
testing.expectEqual(false, ctx.eatStr("aaaaaaaaa"));
testing.expectEqual(@as(?u8, 'p'), ctx.peek());
testing.expectError(error.UnexpectedEof, ctx.expectStr("aaaaaaaaa"));
testing.expectEqual(@as(?u8, 'p'), ctx.peek());
testing.expectError(error.UnexpectedCharacter, ctx.expectStr("pytn"));
testing.expectEqual(@as(?u8, 'p'), ctx.peek());
try ctx.expectStr("python");
testing.expectEqual(@as(?u8, 's'), ctx.peek());
}
{
var ctx = ParseContext.init("");
testing.expectEqual(ctx.peek(), null);
testing.expectError(error.UnexpectedEof, ctx.consume());
testing.expectEqual(ctx.eat('p'), false);
testing.expectError(error.UnexpectedEof, ctx.expect('p'));
}
}
pub const ParseError = error {
IllegalCharacter,
UnexpectedEof,
UnexpectedCharacter,
UnclosedValue,
UnclosedComment,
InvalidName,
InvalidEntity,
InvalidStandaloneValue,
NonMatchingClosingTag,
InvalidDocument,
OutOfMemory
};
pub fn parse(backing_allocator: *Allocator, source: []const u8) !Document {
var ctx = ParseContext.init(source);
return try parseDocument(&ctx, backing_allocator);
}
fn parseDocument(ctx: *ParseContext, backing_allocator: *Allocator) !Document {
var doc = Document{
.arena = ArenaAllocator.init(backing_allocator),
.xml_decl = null,
.root = undefined
};
errdefer doc.deinit();
try trySkipComments(ctx, &doc.arena.allocator);
doc.xml_decl = try tryParseProlog(ctx, &doc.arena.allocator);
_ = ctx.eatWs();
try trySkipComments(ctx, &doc.arena.allocator);
doc.root = (try tryParseElement(ctx, &doc.arena.allocator)) orelse return error.InvalidDocument;
_ = ctx.eatWs();
try trySkipComments(ctx, &doc.arena.allocator);
if (ctx.peek() != null) return error.InvalidDocument;
return doc;
}
fn parseAttrValue(ctx: *ParseContext, alloc: *Allocator) ![]const u8 {
const quote = try ctx.consume();
if (quote != '"' and quote != '\'') return error.UnexpectedCharacter;
const begin = ctx.offset;
while (true) {
const c = ctx.consume() catch return error.UnclosedValue;
if (c == quote) break;
}
const end = ctx.offset - 1;
return try dupeAndUnescape(alloc, ctx.source[begin .. end]);
}
fn parseEqAttrValue(ctx: *ParseContext, alloc: *Allocator) ![]const u8 {
_ = ctx.eatWs();
try ctx.expect('=');
_ = ctx.eatWs();
return try parseAttrValue(ctx, alloc);
}
fn parseNameNoDupe(ctx: *ParseContext) ![]const u8 {
// XML's spec on names is very long, so to make this easier
// we just take any character that is not special and not whitespace
const begin = ctx.offset;
while (ctx.peek()) |ch| {
switch (ch) {
' ', '\t', '\n', '\r' => break,
'&', '"', '\'', '<', '>', '?', '=', '/' => break,
else => _ = ctx.consumeNoEof()
}
}
const end = ctx.offset;
if (begin == end) return error.InvalidName;
return ctx.source[begin .. end];
}
fn tryParseCharData(ctx: *ParseContext, alloc: *Allocator) !?[]const u8 {
const begin = ctx.offset;
while (ctx.peek()) |ch| {
switch (ch) {
'<' => break,
else => _ = ctx.consumeNoEof()
}
}
const end = ctx.offset;
if (begin == end) return null;
return try dupeAndUnescape(alloc, ctx.source[begin .. end]);
}
fn parseContent(ctx: *ParseContext, alloc: *Allocator) ParseError!Content {
if (try tryParseCharData(ctx, alloc)) |cd| {
return Content{.CharData = cd};
} else if (try tryParseComment(ctx, alloc)) |comment| {
return Content{.Comment = comment};
} else if (try tryParseElement(ctx, alloc)) |elem| {
return Content{.Element = elem};
} else {
return error.UnexpectedCharacter;
}
}
fn tryParseAttr(ctx: *ParseContext, alloc: *Allocator) !?*Attribute {
const name = parseNameNoDupe(ctx) catch return null;
_ = ctx.eatWs();
try ctx.expect('=');
_ = ctx.eatWs();
const value = try parseAttrValue(ctx, alloc);
const attr = try alloc.create(Attribute);
attr.name = try mem.dupe(alloc, u8, name);
attr.value = value;
return attr;
}
fn tryParseElement(ctx: *ParseContext, alloc: *Allocator) !?*Element {
const start = ctx.offset;
if (!ctx.eat('<')) return null;
const tag = parseNameNoDupe(ctx) catch {
ctx.offset = start;
return null;
};
const element = try alloc.create(Element);
element.* = Element.init(try std.mem.dupe(alloc, u8, tag), alloc);
while (ctx.eatWs()) {
const attr = (try tryParseAttr(ctx, alloc)) orelse break;
try element.attributes.append(attr);
}
if (ctx.eatStr("/>")) {
return element;
}
try ctx.expect('>');
while (true) {
if (ctx.peek() == null) {
return error.UnexpectedEof;
} else if (ctx.eatStr("</")) {
break;
}
const content = try parseContent(ctx, alloc);
try element.children.append(content);
}
const closing_tag = try parseNameNoDupe(ctx);
if (!std.mem.eql(u8, tag, closing_tag)) {
return error.NonMatchingClosingTag;
}
_ = ctx.eatWs();
try ctx.expect('>');
return element;
}
test "tryParseElement" {
var arena = std.heap.ArenaAllocator.init(testing.allocator);
defer arena.deinit();
var alloc = &arena.allocator;
{
var ctx = ParseContext.init("<= a='b'/>");
testing.expectEqual(@as(?*Element, null), try tryParseElement(&ctx, alloc));
testing.expectEqual(@as(?u8, '<'), ctx.peek());
}
{
var ctx = ParseContext.init("<python size='15' color = \"green\"/>");
const elem = try tryParseElement(&ctx, alloc);
testing.expectEqualSlices(u8, elem.?.tag, "python");
const size_attr = elem.?.attributes.items[0];
testing.expectEqualSlices(u8, size_attr.name, "size");
testing.expectEqualSlices(u8, size_attr.value, "15");
const color_attr = elem.?.attributes.items[1];
testing.expectEqualSlices(u8, color_attr.name, "color");
testing.expectEqualSlices(u8, color_attr.value, "green");
}
{
var ctx = ParseContext.init("<python>test</python>");
const elem = try tryParseElement(&ctx, alloc);
testing.expectEqualSlices(u8, elem.?.tag, "python");
testing.expectEqualSlices(u8, elem.?.children.items[0].CharData, "test");
}
{
var ctx = ParseContext.init("<a>b<c/>d<e/>f<!--g--></a>");
const elem = try tryParseElement(&ctx, alloc);
testing.expectEqualSlices(u8, elem.?.tag, "a");
testing.expectEqualSlices(u8, elem.?.children.items[0].CharData, "b");
testing.expectEqualSlices(u8, elem.?.children.items[1].Element.tag, "c");
testing.expectEqualSlices(u8, elem.?.children.items[2].CharData, "d");
testing.expectEqualSlices(u8, elem.?.children.items[3].Element.tag, "e");
testing.expectEqualSlices(u8, elem.?.children.items[4].CharData, "f");
testing.expectEqualSlices(u8, elem.?.children.items[5].Comment, "g");
}
}
fn tryParseProlog(ctx: *ParseContext, alloc: *Allocator) !?*XmlDecl {
const start = ctx.offset;
if (!ctx.eatStr("<?") or !mem.eql(u8, try parseNameNoDupe(ctx), "xml")) {
ctx.offset = start;
return null;
}
const decl = try alloc.create(XmlDecl);
decl.encoding = null;
decl.standalone = null;
// Version info is mandatory
try ctx.expectWs();
try ctx.expectStr("version");
decl.version = try parseEqAttrValue(ctx, alloc);
if (ctx.eatWs()) {
// Optional encoding and standalone info
var require_ws = false;
if (ctx.eatStr("encoding")) {
decl.encoding = try parseEqAttrValue(ctx, alloc);
require_ws = true;
}
// Whitespace before `standalone` is required exactly when an encoding value was parsed.
if (require_ws == ctx.eatWs() and ctx.eatStr("standalone")) {
const standalone = try parseEqAttrValue(ctx, alloc);
if (std.mem.eql(u8, standalone, "yes")) {
decl.standalone = true;
} else if (std.mem.eql(u8, standalone, "no")) {
decl.standalone = false;
} else {
return error.InvalidStandaloneValue;
}
}
_ = ctx.eatWs();
}
try ctx.expectStr("?>");
return decl;
}
test "tryParseProlog" {
var arena = std.heap.ArenaAllocator.init(testing.allocator);
defer arena.deinit();
var alloc = &arena.allocator;
{
var ctx = ParseContext.init("<?xmla version='aa'?>");
testing.expectEqual(@as(?*XmlDecl, null), try tryParseProlog(&ctx, alloc));
testing.expectEqual(@as(?u8, '<'), ctx.peek());
}
{
var ctx = ParseContext.init("<?xml version='aa'?>");
const decl = try tryParseProlog(&ctx, alloc);
testing.expectEqualSlices(u8, "aa", decl.?.version);
testing.expectEqual(@as(?[]const u8, null), decl.?.encoding);
testing.expectEqual(@as(?bool, null), decl.?.standalone);
}
{
var ctx = ParseContext.init("<?xml version=\"aa\" encoding = 'bbb' standalone \t = 'yes'?>");
const decl = try tryParseProlog(&ctx, alloc);
testing.expectEqualSlices(u8, "aa", decl.?.version);
testing.expectEqualSlices(u8, "bbb", decl.?.encoding.?);
testing.expectEqual(@as(?bool, true), decl.?.standalone.?);
}
}
fn trySkipComments(ctx: *ParseContext, alloc: *Allocator) !void {
while (try tryParseComment(ctx, alloc)) |_| {
_ = ctx.eatWs();
}
}
fn tryParseComment(ctx: *ParseContext, alloc: *Allocator) !?[]const u8 {
if (!ctx.eatStr("<!--")) return null;
const begin = ctx.offset;
while (!ctx.eatStr("-->")) {
_ = ctx.consume() catch return error.UnclosedComment;
}
const end = ctx.offset - "-->".len;
return try mem.dupe(alloc, u8, ctx.source[begin..end]);
}
fn unescapeEntity(text: []const u8) !u8 {
const EntitySubstition = struct {
text: []const u8,
replacement: u8
};
const entities = [_]EntitySubstition{
.{.text = "&lt;", .replacement = '<'},
.{.text = "&gt;", .replacement = '>'},
.{.text = "&amp;", .replacement = '&'},
.{.text = "&apos;", .replacement = '\''},
.{.text = "&quot;", .replacement = '"'}
};
for (entities) |entity| {
if (std.mem.eql(u8, text, entity.text)) return entity.replacement;
}
return error.InvalidEntity;
}
fn dupeAndUnescape(alloc: *Allocator, text: []const u8) ![]const u8 {
const str = try alloc.alloc(u8, text.len);
var j: usize = 0;
var i: usize = 0;
while (i < text.len) : (j += 1) {
if (text[i] == '&') {
const entity_end = 1 + (mem.indexOfScalarPos(u8, text, i, ';') orelse return error.InvalidEntity);
str[j] = try unescapeEntity(text[i .. entity_end]);
i = entity_end;
} else {
str[j] = text[i];
i += 1;
}
}
return alloc.shrink(str, j);
}
test "dupeAndUnescape" {
var arena = std.heap.ArenaAllocator.init(testing.allocator);
defer arena.deinit();
var alloc = &arena.allocator;
testing.expectEqualSlices(u8, "test", try dupeAndUnescape(alloc, "test"));
testing.expectEqualSlices(u8, "a<b&c>d\"e'f<", try dupeAndUnescape(alloc, "a&lt;b&amp;c&gt;d&quot;e&apos;f&lt;"));
testing.expectError(error.InvalidEntity, dupeAndUnescape(alloc, "python&"));
testing.expectError(error.InvalidEntity, dupeAndUnescape(alloc, "python&&"));
testing.expectError(error.InvalidEntity, dupeAndUnescape(alloc, "python&test;"));
testing.expectError(error.InvalidEntity, dupeAndUnescape(alloc, "python&boa"));
}
test "Top level comments" {
var arena = std.heap.ArenaAllocator.init(testing.allocator);
defer arena.deinit();
var alloc = &arena.allocator;
const doc = try parse(alloc, "<?xml version='aa'?><!--comment--><python color='green'/><!--another comment-->");
testing.expectEqualSlices(u8, "python", doc.root.tag);
}


@@ -1,162 +0,0 @@
const std = @import("std");
const generator = @import("vulkan/generator.zig");
fn invalidUsage(prog_name: []const u8, comptime fmt: []const u8, args: anytype) noreturn {
std.log.err(fmt, args);
std.log.err("see {s} --help for usage", .{prog_name});
std.process.exit(1);
}
fn reportParseErrors(tree: std.zig.Ast) !void {
const stderr = std.io.getStdErr().writer();
for (tree.errors) |err| {
const loc = tree.tokenLocation(0, err.token);
try stderr.print("(vulkan-zig error):{}:{}: error: ", .{ loc.line + 1, loc.column + 1 });
try tree.renderError(err, stderr);
try stderr.print("\n{s}\n", .{tree.source[loc.line_start..loc.line_end]});
for (0..loc.column) |_| {
try stderr.writeAll(" ");
}
try stderr.writeAll("^\n");
}
}
pub fn main() void {
var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
defer arena.deinit();
const allocator = arena.allocator();
var args = std.process.argsWithAllocator(allocator) catch |err| switch (err) {
error.OutOfMemory => @panic("OOM"),
};
const prog_name = args.next() orelse "vulkan-zig-generator";
var maybe_xml_path: ?[]const u8 = null;
var maybe_out_path: ?[]const u8 = null;
var debug: bool = false;
var api = generator.Api.vulkan;
while (args.next()) |arg| {
if (std.mem.eql(u8, arg, "--help") or std.mem.eql(u8, arg, "-h")) {
@setEvalBranchQuota(2000);
std.io.getStdOut().writer().print(
\\Utility to generate a Zig binding from the Vulkan XML API registry.
\\
\\The most recent Vulkan XML API registry can be obtained from
\\https://github.com/KhronosGroup/Vulkan-Docs/blob/master/xml/vk.xml,
\\and the most recent LunarG Vulkan SDK version can be found at
\\$VULKAN_SDK/x86_64/share/vulkan/registry/vk.xml.
\\
\\Usage: {s} [options] <spec xml path> <output zig source>
\\Options:
\\-h --help show this message and exit.
\\-a --api <api> Generate API for 'vulkan' or 'vulkansc'. Defaults to 'vulkan'.
\\--debug Write out unformatted source if it does not parse correctly.
\\
,
.{prog_name},
) catch |err| {
std.log.err("failed to write to stdout: {s}", .{@errorName(err)});
std.process.exit(1);
};
return;
} else if (std.mem.eql(u8, arg, "-a") or std.mem.eql(u8, arg, "--api")) {
const api_str = args.next() orelse {
invalidUsage(prog_name, "{s} expects argument <api>", .{arg});
};
api = std.meta.stringToEnum(generator.Api, api_str) orelse {
invalidUsage(prog_name, "invalid api '{s}'", .{api_str});
};
} else if (std.mem.eql(u8, arg, "--debug")) {
debug = true;
} else if (maybe_xml_path == null) {
maybe_xml_path = arg;
} else if (maybe_out_path == null) {
maybe_out_path = arg;
} else {
invalidUsage(prog_name, "superfluous argument '{s}'", .{arg});
}
}
const xml_path = maybe_xml_path orelse {
invalidUsage(prog_name, "missing required argument <spec xml path>", .{});
};
const out_path = maybe_out_path orelse {
invalidUsage(prog_name, "missing required argument <output zig source>", .{});
};
const cwd = std.fs.cwd();
const xml_src = cwd.readFileAlloc(allocator, xml_path, std.math.maxInt(usize)) catch |err| {
std.log.err("failed to open input file '{s}' ({s})", .{ xml_path, @errorName(err) });
std.process.exit(1);
};
var out_buffer = std.ArrayList(u8).init(allocator);
generator.generate(allocator, api, xml_src, out_buffer.writer()) catch |err| switch (err) {
error.InvalidXml => {
std.log.err("invalid vulkan registry - invalid xml", .{});
std.log.err("please check that the correct vk.xml file is passed", .{});
std.process.exit(1);
},
error.InvalidRegistry => {
std.log.err("invalid vulkan registry - registry is valid xml but contents are invalid", .{});
std.log.err("please check that the correct vk.xml file is passed", .{});
std.process.exit(1);
},
error.UnhandledBitfieldStruct => {
std.log.err("unhandled struct with bit fields detected in vk.xml", .{});
std.log.err("this is a bug in vulkan-zig", .{});
std.log.err("please make a bug report at https://github.com/Snektron/vulkan-zig/issues/", .{});
std.process.exit(1);
},
error.OutOfMemory => @panic("oom"),
};
out_buffer.append(0) catch @panic("oom");
const src = out_buffer.items[0 .. out_buffer.items.len - 1 :0];
const tree = std.zig.Ast.parse(allocator, src, .zig) catch |err| switch (err) {
error.OutOfMemory => @panic("oom"),
};
const formatted = if (tree.errors.len > 0) blk: {
std.log.err("generated invalid zig code", .{});
std.log.err("this is a bug in vulkan-zig", .{});
std.log.err("please make a bug report at https://github.com/Snektron/vulkan-zig/issues/", .{});
std.log.err("or run with --debug to write out unformatted source", .{});
reportParseErrors(tree) catch |err| {
std.log.err("failed to dump ast errors: {s}", .{@errorName(err)});
std.process.exit(1);
};
if (debug) {
break :blk src;
}
std.process.exit(1);
} else tree.render(allocator) catch |err| switch (err) {
error.OutOfMemory => @panic("oom"),
};
if (std.fs.path.dirname(out_path)) |dir| {
cwd.makePath(dir) catch |err| {
std.log.err("failed to create output directory '{s}' ({s})", .{ dir, @errorName(err) });
std.process.exit(1);
};
}
cwd.writeFile(.{
.sub_path = out_path,
.data = formatted,
}) catch |err| {
std.log.err("failed to write to output file '{s}' ({s})", .{ out_path, @errorName(err) });
std.process.exit(1);
};
}
test "main" {
_ = @import("xml.zig");
_ = @import("vulkan/c_parse.zig");
}


@@ -1,227 +0,0 @@
const std = @import("std");
const reg = @import("registry.zig");
const xml = @import("../xml.zig");
const renderRegistry = @import("render.zig").render;
const parseXml = @import("parse.zig").parseXml;
const IdRenderer = @import("../id_render.zig").IdRenderer;
const mem = std.mem;
const Allocator = mem.Allocator;
const FeatureLevel = reg.FeatureLevel;
const EnumFieldMerger = struct {
const EnumExtensionMap = std.StringArrayHashMapUnmanaged(std.ArrayListUnmanaged(reg.Enum.Field));
const FieldSet = std.StringArrayHashMapUnmanaged(void);
arena: Allocator,
registry: *reg.Registry,
enum_extensions: EnumExtensionMap,
field_set: FieldSet,
fn init(arena: Allocator, registry: *reg.Registry) EnumFieldMerger {
return .{
.arena = arena,
.registry = registry,
.enum_extensions = .{},
.field_set = .{},
};
}
fn putEnumExtension(self: *EnumFieldMerger, enum_name: []const u8, field: reg.Enum.Field) !void {
const res = try self.enum_extensions.getOrPut(self.arena, enum_name);
if (!res.found_existing) {
res.value_ptr.* = std.ArrayListUnmanaged(reg.Enum.Field){};
}
try res.value_ptr.append(self.arena, field);
}
fn addRequires(self: *EnumFieldMerger, reqs: []const reg.Require) !void {
for (reqs) |req| {
for (req.extends) |enum_ext| {
try self.putEnumExtension(enum_ext.extends, enum_ext.field);
}
}
}
fn mergeEnumFields(self: *EnumFieldMerger, name: []const u8, base_enum: *reg.Enum) !void {
// If there are no extensions for this enum, assume it's already complete.
const extensions = self.enum_extensions.get(name) orelse return;
self.field_set.clearRetainingCapacity();
const n_fields_upper_bound = base_enum.fields.len + extensions.items.len;
const new_fields = try self.arena.alloc(reg.Enum.Field, n_fields_upper_bound);
var i: usize = 0;
for (base_enum.fields) |field| {
const res = try self.field_set.getOrPut(self.arena, field.name);
if (!res.found_existing) {
new_fields[i] = field;
i += 1;
}
}
// Assume that if a field name collides, the value is the same
for (extensions.items) |field| {
const res = try self.field_set.getOrPut(self.arena, field.name);
if (!res.found_existing) {
new_fields[i] = field;
i += 1;
}
}
// Existing base_enum.fields was allocated by `self.arena`, so
// it gets cleaned up whenever that is deinited.
base_enum.fields = new_fields[0..i];
}
fn merge(self: *EnumFieldMerger) !void {
for (self.registry.features) |feature| {
try self.addRequires(feature.requires);
}
for (self.registry.extensions) |ext| {
try self.addRequires(ext.requires);
}
// Merge all the enum fields.
// Assume that all keys of enum_extensions appear in `self.registry.decls`
for (self.registry.decls) |*decl| {
if (decl.decl_type == .enumeration) {
try self.mergeEnumFields(decl.name, &decl.decl_type.enumeration);
}
}
}
};
pub const Generator = struct {
arena: std.heap.ArenaAllocator,
registry: reg.Registry,
id_renderer: IdRenderer,
fn init(allocator: Allocator, spec: *xml.Element, api: reg.Api) !Generator {
const result = try parseXml(allocator, spec, api);
const tags = try allocator.alloc([]const u8, result.registry.tags.len);
for (tags, result.registry.tags) |*tag, registry_tag| tag.* = registry_tag.name;
return Generator{
.arena = result.arena,
.registry = result.registry,
.id_renderer = IdRenderer.init(allocator, tags),
};
}
fn deinit(self: Generator) void {
self.arena.deinit();
}
fn stripFlagBits(self: Generator, name: []const u8) []const u8 {
const tagless = self.id_renderer.stripAuthorTag(name);
return tagless[0 .. tagless.len - "FlagBits".len];
}
fn stripFlags(self: Generator, name: []const u8) []const u8 {
const tagless = self.id_renderer.stripAuthorTag(name);
return tagless[0 .. tagless.len - "Flags".len];
}
// Merge the enum fields in `registry.decls` according to `registry.extensions` and `registry.features`.
fn mergeEnumFields(self: *Generator) !void {
var merger = EnumFieldMerger.init(self.arena.allocator(), &self.registry);
try merger.merge();
}
// https://github.com/KhronosGroup/Vulkan-Docs/pull/1556
fn fixupBitFlags(self: *Generator) !void {
var seen_bits = std.StringArrayHashMap(void).init(self.arena.allocator());
defer seen_bits.deinit();
for (self.registry.decls) |decl| {
const bitmask = switch (decl.decl_type) {
.bitmask => |bm| bm,
else => continue,
};
if (bitmask.bits_enum) |bits_enum| {
try seen_bits.put(bits_enum, {});
}
}
var i: usize = 0;
for (self.registry.decls) |decl| {
switch (decl.decl_type) {
.enumeration => |e| {
if (e.is_bitmask and seen_bits.get(decl.name) == null)
continue;
},
else => {},
}
self.registry.decls[i] = decl;
i += 1;
}
self.registry.decls.len = i;
}
fn render(self: *Generator, writer: anytype) !void {
try renderRegistry(writer, self.arena.allocator(), &self.registry, &self.id_renderer);
}
};
/// The vulkan registry contains the specification for multiple APIs: Vulkan and VulkanSC. This enum
/// describes applicable APIs.
pub const Api = reg.Api;
/// Main function for generating the Vulkan bindings. vk.xml is to be provided via `spec_xml`,
/// and the resulting binding is written to `writer`. `allocator` will be used to allocate temporary
/// internal data structures - mostly via an ArenaAllocator, but sometimes a hashmap uses this allocator
/// directly. `api` is the API to generate the bindings for, usually `.vulkan`.
pub fn generate(allocator: Allocator, api: Api, spec_xml: []const u8, writer: anytype) !void {
const spec = xml.parse(allocator, spec_xml) catch |err| switch (err) {
error.InvalidDocument,
error.UnexpectedEof,
error.UnexpectedCharacter,
error.IllegalCharacter,
error.InvalidEntity,
error.InvalidName,
error.InvalidStandaloneValue,
error.NonMatchingClosingTag,
error.UnclosedComment,
error.UnclosedValue,
=> return error.InvalidXml,
error.OutOfMemory => return error.OutOfMemory,
};
defer spec.deinit();
var gen = Generator.init(allocator, spec.root, api) catch |err| switch (err) {
error.InvalidXml,
error.InvalidCharacter,
error.Overflow,
error.InvalidFeatureLevel,
error.InvalidSyntax,
error.InvalidTag,
error.MissingTypeIdentifier,
error.UnexpectedCharacter,
error.UnexpectedEof,
error.UnexpectedToken,
error.InvalidRegistry,
=> return error.InvalidRegistry,
error.OutOfMemory => return error.OutOfMemory,
};
defer gen.deinit();
try gen.mergeEnumFields();
try gen.fixupBitFlags();
gen.render(writer) catch |err| switch (err) {
error.InvalidApiConstant,
error.InvalidConstantExpr,
error.InvalidRegistry,
error.UnexpectedCharacter,
error.InvalidCharacter,
error.Overflow,
=> return error.InvalidRegistry,
else => |others| return others,
};
}
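A minimal driver for the `generate` function described above might look as follows. This is a hedged sketch, not part of the repository: the file path `vk.xml` and the choice to print to stdout are placeholders, and it mirrors (in simplified form) what `main.zig` above does.

```zig
const std = @import("std");
const generator = @import("vulkan/generator.zig");

pub fn main() !void {
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit();
    const allocator = arena.allocator();

    // Read the registry XML; path is a placeholder for illustration.
    const xml_src = try std.fs.cwd().readFileAlloc(allocator, "vk.xml", std.math.maxInt(usize));

    // Generate bindings for the `vulkan` API into an in-memory buffer,
    // then write the (unformatted) result to stdout.
    var out = std.ArrayList(u8).init(allocator);
    try generator.generate(allocator, .vulkan, xml_src, out.writer());
    try std.io.getStdOut().writeAll(out.items);
}
```

Unlike the real `main.zig`, this sketch skips argument parsing and the `std.zig.Ast` round-trip that formats and validates the generated source.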

File diff suppressed because it is too large


@@ -1,638 +0,0 @@
const std = @import("std");
const mem = std.mem;
const testing = std.testing;
const Allocator = mem.Allocator;
const ArenaAllocator = std.heap.ArenaAllocator;
pub const Attribute = struct {
name: []const u8,
value: []const u8,
};
pub const Content = union(enum) {
char_data: []const u8,
comment: []const u8,
element: *Element,
};
pub const Element = struct {
tag: []const u8,
attributes: []Attribute = &.{},
children: []Content = &.{},
pub fn getAttribute(self: Element, attrib_name: []const u8) ?[]const u8 {
for (self.attributes) |child| {
if (mem.eql(u8, child.name, attrib_name)) {
return child.value;
}
}
return null;
}
pub fn getCharData(self: Element, child_tag: []const u8) ?[]const u8 {
const child = self.findChildByTag(child_tag) orelse return null;
if (child.children.len != 1) {
return null;
}
return switch (child.children[0]) {
.char_data => |char_data| char_data,
else => null,
};
}
pub fn iterator(self: Element) ChildIterator {
return .{
.items = self.children,
.i = 0,
};
}
pub fn elements(self: Element) ChildElementIterator {
return .{
.inner = self.iterator(),
};
}
pub fn findChildByTag(self: Element, tag: []const u8) ?*Element {
var it = self.findChildrenByTag(tag);
return it.next();
}
pub fn findChildrenByTag(self: Element, tag: []const u8) FindChildrenByTagIterator {
return .{
.inner = self.elements(),
.tag = tag,
};
}
pub const ChildIterator = struct {
items: []Content,
i: usize,
pub fn next(self: *ChildIterator) ?*Content {
if (self.i < self.items.len) {
self.i += 1;
return &self.items[self.i - 1];
}
return null;
}
};
pub const ChildElementIterator = struct {
inner: ChildIterator,
pub fn next(self: *ChildElementIterator) ?*Element {
while (self.inner.next()) |child| {
if (child.* != .element) {
continue;
}
return child.*.element;
}
return null;
}
};
pub const FindChildrenByTagIterator = struct {
inner: ChildElementIterator,
tag: []const u8,
pub fn next(self: *FindChildrenByTagIterator) ?*Element {
while (self.inner.next()) |child| {
if (!mem.eql(u8, child.tag, self.tag)) {
continue;
}
return child;
}
return null;
}
};
};
pub const Document = struct {
arena: ArenaAllocator,
xml_decl: ?*Element,
root: *Element,
pub fn deinit(self: Document) void {
var arena = self.arena; // Copy to stack so self can be taken by value.
arena.deinit();
}
};
const Parser = struct {
source: []const u8,
offset: usize,
line: usize,
column: usize,
fn init(source: []const u8) Parser {
return .{
.source = source,
.offset = 0,
.line = 0,
.column = 0,
};
}
fn peek(self: *Parser) ?u8 {
return if (self.offset < self.source.len) self.source[self.offset] else null;
}
fn consume(self: *Parser) !u8 {
if (self.offset < self.source.len) {
return self.consumeNoEof();
}
return error.UnexpectedEof;
}
fn consumeNoEof(self: *Parser) u8 {
std.debug.assert(self.offset < self.source.len);
const c = self.source[self.offset];
self.offset += 1;
if (c == '\n') {
self.line += 1;
self.column = 0;
} else {
self.column += 1;
}
return c;
}
fn eat(self: *Parser, char: u8) bool {
self.expect(char) catch return false;
return true;
}
fn expect(self: *Parser, expected: u8) !void {
if (self.peek()) |actual| {
if (expected != actual) {
return error.UnexpectedCharacter;
}
_ = self.consumeNoEof();
return;
}
return error.UnexpectedEof;
}
fn eatStr(self: *Parser, text: []const u8) bool {
self.expectStr(text) catch return false;
return true;
}
fn expectStr(self: *Parser, text: []const u8) !void {
if (self.source.len < self.offset + text.len) {
return error.UnexpectedEof;
} else if (mem.startsWith(u8, self.source[self.offset..], text)) {
var i: usize = 0;
while (i < text.len) : (i += 1) {
_ = self.consumeNoEof();
}
return;
}
return error.UnexpectedCharacter;
}
fn eatWs(self: *Parser) bool {
var ws = false;
while (self.peek()) |ch| {
switch (ch) {
' ', '\t', '\n', '\r' => {
ws = true;
_ = self.consumeNoEof();
},
else => break,
}
}
return ws;
}
fn expectWs(self: *Parser) !void {
if (!self.eatWs()) return error.UnexpectedCharacter;
}
fn currentLine(self: Parser) []const u8 {
var begin: usize = 0;
if (mem.lastIndexOfScalar(u8, self.source[0..self.offset], '\n')) |prev_nl| {
begin = prev_nl + 1;
}
const end = mem.indexOfScalarPos(u8, self.source, self.offset, '\n') orelse self.source.len;
return self.source[begin..end];
}
};
test "xml: Parser" {
{
var parser = Parser.init("I like pythons");
try testing.expectEqual(@as(?u8, 'I'), parser.peek());
try testing.expectEqual(@as(u8, 'I'), parser.consumeNoEof());
try testing.expectEqual(@as(?u8, ' '), parser.peek());
try testing.expectEqual(@as(u8, ' '), try parser.consume());
try testing.expect(parser.eat('l'));
try testing.expectEqual(@as(?u8, 'i'), parser.peek());
try testing.expectEqual(false, parser.eat('a'));
try testing.expectEqual(@as(?u8, 'i'), parser.peek());
try parser.expect('i');
try testing.expectEqual(@as(?u8, 'k'), parser.peek());
try testing.expectError(error.UnexpectedCharacter, parser.expect('a'));
try testing.expectEqual(@as(?u8, 'k'), parser.peek());
try testing.expect(parser.eatStr("ke"));
try testing.expectEqual(@as(?u8, ' '), parser.peek());
try testing.expect(parser.eatWs());
try testing.expectEqual(@as(?u8, 'p'), parser.peek());
try testing.expectEqual(false, parser.eatWs());
try testing.expectEqual(@as(?u8, 'p'), parser.peek());
try testing.expectEqual(false, parser.eatStr("aaaaaaaaa"));
try testing.expectEqual(@as(?u8, 'p'), parser.peek());
try testing.expectError(error.UnexpectedEof, parser.expectStr("aaaaaaaaa"));
try testing.expectEqual(@as(?u8, 'p'), parser.peek());
try testing.expectError(error.UnexpectedCharacter, parser.expectStr("pytn"));
try testing.expectEqual(@as(?u8, 'p'), parser.peek());
try parser.expectStr("python");
try testing.expectEqual(@as(?u8, 's'), parser.peek());
}
{
var parser = Parser.init("");
try testing.expectEqual(parser.peek(), null);
try testing.expectError(error.UnexpectedEof, parser.consume());
try testing.expectEqual(parser.eat('p'), false);
try testing.expectError(error.UnexpectedEof, parser.expect('p'));
}
}
pub const ParseError = error{
IllegalCharacter,
UnexpectedEof,
UnexpectedCharacter,
UnclosedValue,
UnclosedComment,
InvalidName,
InvalidEntity,
InvalidStandaloneValue,
NonMatchingClosingTag,
InvalidDocument,
OutOfMemory,
};
pub fn parse(backing_allocator: Allocator, source: []const u8) !Document {
var parser = Parser.init(source);
return try parseDocument(&parser, backing_allocator);
}
fn parseDocument(parser: *Parser, backing_allocator: Allocator) !Document {
var doc = Document{
.arena = ArenaAllocator.init(backing_allocator),
.xml_decl = null,
.root = undefined,
};
errdefer doc.deinit();
const allocator = doc.arena.allocator();
try skipComments(parser, allocator);
doc.xml_decl = try parseElement(parser, allocator, .xml_decl);
_ = parser.eatWs();
try skipComments(parser, allocator);
doc.root = (try parseElement(parser, allocator, .element)) orelse return error.InvalidDocument;
_ = parser.eatWs();
try skipComments(parser, allocator);
if (parser.peek() != null) return error.InvalidDocument;
return doc;
}
fn parseAttrValue(parser: *Parser, alloc: Allocator) ![]const u8 {
const quote = try parser.consume();
if (quote != '"' and quote != '\'') return error.UnexpectedCharacter;
const begin = parser.offset;
while (true) {
const c = parser.consume() catch return error.UnclosedValue;
if (c == quote) break;
}
const end = parser.offset - 1;
return try unescape(alloc, parser.source[begin..end]);
}
fn parseEqAttrValue(parser: *Parser, alloc: Allocator) ![]const u8 {
_ = parser.eatWs();
try parser.expect('=');
_ = parser.eatWs();
return try parseAttrValue(parser, alloc);
}
fn parseNameNoDupe(parser: *Parser) ![]const u8 {
// XML's spec on names is very long, so to make this easier
// we just take any character that is not special and not whitespace
const begin = parser.offset;
while (parser.peek()) |ch| {
switch (ch) {
' ', '\t', '\n', '\r' => break,
'&', '"', '\'', '<', '>', '?', '=', '/' => break,
else => _ = parser.consumeNoEof(),
}
}
const end = parser.offset;
if (begin == end) return error.InvalidName;
return parser.source[begin..end];
}
fn parseCharData(parser: *Parser, alloc: Allocator) !?[]const u8 {
const begin = parser.offset;
while (parser.peek()) |ch| {
switch (ch) {
'<' => break,
else => _ = parser.consumeNoEof(),
}
}
const end = parser.offset;
if (begin == end) return null;
return try unescape(alloc, parser.source[begin..end]);
}
fn parseContent(parser: *Parser, alloc: Allocator) ParseError!Content {
if (try parseCharData(parser, alloc)) |cd| {
return Content{ .char_data = cd };
} else if (try parseComment(parser, alloc)) |comment| {
return Content{ .comment = comment };
} else if (try parseElement(parser, alloc, .element)) |elem| {
return Content{ .element = elem };
} else {
return error.UnexpectedCharacter;
}
}
fn parseAttr(parser: *Parser, alloc: Allocator) !?Attribute {
const name = parseNameNoDupe(parser) catch return null;
_ = parser.eatWs();
try parser.expect('=');
_ = parser.eatWs();
const value = try parseAttrValue(parser, alloc);
const attr = Attribute{
.name = try alloc.dupe(u8, name),
.value = value,
};
return attr;
}
const ElementKind = enum {
xml_decl,
element,
};
fn parseElement(parser: *Parser, alloc: Allocator, comptime kind: ElementKind) !?*Element {
const start = parser.offset;
const tag = switch (kind) {
.xml_decl => blk: {
if (!parser.eatStr("<?") or !mem.eql(u8, try parseNameNoDupe(parser), "xml")) {
parser.offset = start;
return null;
}
break :blk "xml";
},
.element => blk: {
if (!parser.eat('<')) return null;
const tag = parseNameNoDupe(parser) catch {
parser.offset = start;
return null;
};
break :blk tag;
},
};
var attributes = std.ArrayList(Attribute).init(alloc);
defer attributes.deinit();
var children = std.ArrayList(Content).init(alloc);
defer children.deinit();
while (parser.eatWs()) {
const attr = (try parseAttr(parser, alloc)) orelse break;
try attributes.append(attr);
}
switch (kind) {
.xml_decl => try parser.expectStr("?>"),
.element => {
if (!parser.eatStr("/>")) {
try parser.expect('>');
while (true) {
if (parser.peek() == null) {
return error.UnexpectedEof;
} else if (parser.eatStr("</")) {
break;
}
const content = try parseContent(parser, alloc);
try children.append(content);
}
const closing_tag = try parseNameNoDupe(parser);
if (!mem.eql(u8, tag, closing_tag)) {
return error.NonMatchingClosingTag;
}
_ = parser.eatWs();
try parser.expect('>');
}
},
}
const element = try alloc.create(Element);
element.* = .{
.tag = try alloc.dupe(u8, tag),
.attributes = try attributes.toOwnedSlice(),
.children = try children.toOwnedSlice(),
};
return element;
}
test "xml: parseElement" {
var arena = ArenaAllocator.init(testing.allocator);
defer arena.deinit();
const alloc = arena.allocator();
{
var parser = Parser.init("<= a='b'/>");
try testing.expectEqual(@as(?*Element, null), try parseElement(&parser, alloc, .element));
try testing.expectEqual(@as(?u8, '<'), parser.peek());
}
{
var parser = Parser.init("<python size='15' color = \"green\"/>");
const elem = try parseElement(&parser, alloc, .element);
try testing.expectEqualSlices(u8, elem.?.tag, "python");
const size_attr = elem.?.attributes[0];
try testing.expectEqualSlices(u8, size_attr.name, "size");
try testing.expectEqualSlices(u8, size_attr.value, "15");
const color_attr = elem.?.attributes[1];
try testing.expectEqualSlices(u8, color_attr.name, "color");
try testing.expectEqualSlices(u8, color_attr.value, "green");
}
{
var parser = Parser.init("<python>test</python>");
const elem = try parseElement(&parser, alloc, .element);
try testing.expectEqualSlices(u8, elem.?.tag, "python");
try testing.expectEqualSlices(u8, elem.?.children[0].char_data, "test");
}
{
var parser = Parser.init("<a>b<c/>d<e/>f<!--g--></a>");
const elem = try parseElement(&parser, alloc, .element);
try testing.expectEqualSlices(u8, elem.?.tag, "a");
try testing.expectEqualSlices(u8, elem.?.children[0].char_data, "b");
try testing.expectEqualSlices(u8, elem.?.children[1].element.tag, "c");
try testing.expectEqualSlices(u8, elem.?.children[2].char_data, "d");
try testing.expectEqualSlices(u8, elem.?.children[3].element.tag, "e");
try testing.expectEqualSlices(u8, elem.?.children[4].char_data, "f");
try testing.expectEqualSlices(u8, elem.?.children[5].comment, "g");
}
}
test "xml: parse prolog" {
var arena = ArenaAllocator.init(testing.allocator);
defer arena.deinit();
const a = arena.allocator();
{
var parser = Parser.init("<?xmla version='aa'?>");
try testing.expectEqual(@as(?*Element, null), try parseElement(&parser, a, .xml_decl));
try testing.expectEqual(@as(?u8, '<'), parser.peek());
}
{
var parser = Parser.init("<?xml version='aa'?>");
const decl = try parseElement(&parser, a, .xml_decl);
try testing.expectEqualSlices(u8, "aa", decl.?.getAttribute("version").?);
try testing.expectEqual(@as(?[]const u8, null), decl.?.getAttribute("encoding"));
try testing.expectEqual(@as(?[]const u8, null), decl.?.getAttribute("standalone"));
}
{
var parser = Parser.init("<?xml version=\"ccc\" encoding = 'bbb' standalone \t = 'yes'?>");
const decl = try parseElement(&parser, a, .xml_decl);
try testing.expectEqualSlices(u8, "ccc", decl.?.getAttribute("version").?);
try testing.expectEqualSlices(u8, "bbb", decl.?.getAttribute("encoding").?);
try testing.expectEqualSlices(u8, "yes", decl.?.getAttribute("standalone").?);
}
}
fn skipComments(parser: *Parser, alloc: Allocator) !void {
while ((try parseComment(parser, alloc)) != null) {
_ = parser.eatWs();
}
}
fn parseComment(parser: *Parser, alloc: Allocator) !?[]const u8 {
if (!parser.eatStr("<!--")) return null;
const begin = parser.offset;
while (!parser.eatStr("-->")) {
_ = parser.consume() catch return error.UnclosedComment;
}
const end = parser.offset - "-->".len;
return try alloc.dupe(u8, parser.source[begin..end]);
}
fn unescapeEntity(text: []const u8) !u8 {
const EntitySubstition = struct { text: []const u8, replacement: u8 };
const entities = [_]EntitySubstition{
.{ .text = "&lt;", .replacement = '<' },
.{ .text = "&gt;", .replacement = '>' },
.{ .text = "&amp;", .replacement = '&' },
.{ .text = "&apos;", .replacement = '\'' },
.{ .text = "&quot;", .replacement = '"' },
};
for (entities) |entity| {
if (mem.eql(u8, text, entity.text)) return entity.replacement;
}
return error.InvalidEntity;
}
fn unescape(arena: Allocator, text: []const u8) ![]const u8 {
const unescaped = try arena.alloc(u8, text.len);
var j: usize = 0;
var i: usize = 0;
while (i < text.len) : (j += 1) {
if (text[i] == '&') {
const entity_end = 1 + (mem.indexOfScalarPos(u8, text, i, ';') orelse return error.InvalidEntity);
unescaped[j] = try unescapeEntity(text[i..entity_end]);
i = entity_end;
} else {
unescaped[j] = text[i];
i += 1;
}
}
return unescaped[0..j];
}
test "xml: unescape" {
var arena = ArenaAllocator.init(testing.allocator);
defer arena.deinit();
const a = arena.allocator();
try testing.expectEqualSlices(u8, "test", try unescape(a, "test"));
try testing.expectEqualSlices(u8, "a<b&c>d\"e'f<", try unescape(a, "a&lt;b&amp;c&gt;d&quot;e&apos;f&lt;"));
try testing.expectError(error.InvalidEntity, unescape(a, "python&"));
try testing.expectError(error.InvalidEntity, unescape(a, "python&&"));
try testing.expectError(error.InvalidEntity, unescape(a, "python&test;"));
try testing.expectError(error.InvalidEntity, unescape(a, "python&boa"));
}
test "xml: top level comments" {
var arena = ArenaAllocator.init(testing.allocator);
defer arena.deinit();
const a = arena.allocator();
const doc = try parse(a, "<?xml version='aa'?><!--comment--><python color='green'/><!--another comment-->");
try testing.expectEqualSlices(u8, "python", doc.root.tag);
}

View File

@@ -1,122 +0,0 @@
const std = @import("std");
const vk = @import("vulkan");
// Provide bogus defaults for unknown platform types.
// The actual type does not really matter here...
pub const GgpStreamDescriptor = u32;
pub const GgpFrameToken = u32;
pub const _screen_buffer = u32;
pub const NvSciSyncAttrList = u32;
pub const NvSciSyncObj = u32;
pub const NvSciSyncFence = u32;
pub const NvSciBufAttrList = u32;
pub const NvSciBufObj = u32;
pub const ANativeWindow = u32;
pub const AHardwareBuffer = u32;
pub const CAMetalLayer = u32;
pub const MTLDevice_id = u32;
pub const MTLCommandQueue_id = u32;
pub const MTLBuffer_id = u32;
pub const MTLTexture_id = u32;
pub const MTLSharedEvent_id = u32;
pub const IOSurfaceRef = u32;
// For some reason, these types are exported in a different header and not described in vk.xml.
pub const StdVideoH264ProfileIdc = u32;
pub const StdVideoH264LevelIdc = u32;
pub const StdVideoH264ChromaFormatIdc = u32;
pub const StdVideoH264PocType = u32;
pub const StdVideoH264SpsFlags = u32;
pub const StdVideoH264ScalingLists = u32;
pub const StdVideoH264SequenceParameterSetVui = u32;
pub const StdVideoH264AspectRatioIdc = u32;
pub const StdVideoH264HrdParameters = u32;
pub const StdVideoH264SpsVuiFlags = u32;
pub const StdVideoH264WeightedBipredIdc = u32;
pub const StdVideoH264PpsFlags = u32;
pub const StdVideoH264SliceType = u32;
pub const StdVideoH264CabacInitIdc = u32;
pub const StdVideoH264DisableDeblockingFilterIdc = u32;
pub const StdVideoH264PictureType = u32;
pub const StdVideoH264ModificationOfPicNumsIdc = u32;
pub const StdVideoH264MemMgmtControlOp = u32;
pub const StdVideoDecodeH264PictureInfo = u32;
pub const StdVideoDecodeH264ReferenceInfo = u32;
pub const StdVideoDecodeH264PictureInfoFlags = u32;
pub const StdVideoDecodeH264ReferenceInfoFlags = u32;
pub const StdVideoH264SequenceParameterSet = u32;
pub const StdVideoH264PictureParameterSet = u32;
pub const StdVideoH265ProfileIdc = u32;
pub const StdVideoH265VideoParameterSet = u32;
pub const StdVideoH265SequenceParameterSet = u32;
pub const StdVideoH265PictureParameterSet = u32;
pub const StdVideoH265DecPicBufMgr = u32;
pub const StdVideoH265HrdParameters = u32;
pub const StdVideoH265VpsFlags = u32;
pub const StdVideoH265LevelIdc = u32;
pub const StdVideoH265SpsFlags = u32;
pub const StdVideoH265ScalingLists = u32;
pub const StdVideoH265SequenceParameterSetVui = u32;
pub const StdVideoH265PredictorPaletteEntries = u32;
pub const StdVideoH265PpsFlags = u32;
pub const StdVideoH265SubLayerHrdParameters = u32;
pub const StdVideoH265HrdFlags = u32;
pub const StdVideoH265SpsVuiFlags = u32;
pub const StdVideoH265SliceType = u32;
pub const StdVideoH265PictureType = u32;
pub const StdVideoDecodeH265PictureInfo = u32;
pub const StdVideoDecodeH265ReferenceInfo = u32;
pub const StdVideoDecodeH265PictureInfoFlags = u32;
pub const StdVideoDecodeH265ReferenceInfoFlags = u32;
pub const StdVideoAV1Profile = u32;
pub const StdVideoAV1Level = u32;
pub const StdVideoAV1SequenceHeader = u32;
pub const StdVideoDecodeAV1PictureInfo = u32;
pub const StdVideoDecodeAV1ReferenceInfo = u32;
pub const StdVideoEncodeH264SliceHeader = u32;
pub const StdVideoEncodeH264PictureInfo = u32;
pub const StdVideoEncodeH264ReferenceInfo = u32;
pub const StdVideoEncodeH264SliceHeaderFlags = u32;
pub const StdVideoEncodeH264ReferenceListsInfo = u32;
pub const StdVideoEncodeH264PictureInfoFlags = u32;
pub const StdVideoEncodeH264ReferenceInfoFlags = u32;
pub const StdVideoEncodeH264RefMgmtFlags = u32;
pub const StdVideoEncodeH264RefListModEntry = u32;
pub const StdVideoEncodeH264RefPicMarkingEntry = u32;
pub const StdVideoEncodeH265PictureInfoFlags = u32;
pub const StdVideoEncodeH265PictureInfo = u32;
pub const StdVideoEncodeH265SliceSegmentHeader = u32;
pub const StdVideoEncodeH265ReferenceInfo = u32;
pub const StdVideoEncodeH265ReferenceListsInfo = u32;
pub const StdVideoEncodeH265SliceSegmentHeaderFlags = u32;
pub const StdVideoEncodeH265ReferenceInfoFlags = u32;
pub const StdVideoEncodeH265ReferenceModificationFlags = u32;
pub const StdVideoEncodeAV1OperatingPointInfo = u32;
comptime {
@setEvalBranchQuota(1000000);
reallyRefAllDecls(vk);
}
fn reallyRefAllDecls(comptime T: type) void {
switch (@typeInfo(T)) {
.Struct, .Union => {
reallyRefAllContainerDecls(T);
inline for (std.meta.fields(T)) |field| {
reallyRefAllDecls(field.type);
}
},
.Enum, .Opaque => {
reallyRefAllContainerDecls(T);
},
else => {},
}
}
fn reallyRefAllContainerDecls(comptime T: type) void {
inline for (comptime std.meta.declarations(T)) |decl| {
if (@TypeOf(@field(T, decl.name)) == type) {
reallyRefAllDecls(@field(T, decl.name));
}
}
}