Compare commits


1 Commit

Author SHA1 Message Date
lizzie
c303e8a1d1 [vk] macro-ify PixelSurface and SurfaceFormat lists 2025-12-26 06:25:07 +01:00
8 changed files with 405 additions and 783 deletions

View File

@@ -22,15 +22,6 @@ Or use Qt Creator (Create Project -> Import Project -> Git Clone).
Android has a completely different build process than other platforms. See its [dedicated page](build/Android.md).
## Cross-compile ARM64
A painless guide to cross-compiling (or testing NCE) from an x86_64 system without polluting your main system.
- Install QEMU: `sudo pkg install qemu`
- Download Debian 13: `wget https://cdimage.debian.org/debian-cd/current/arm64/iso-cd/debian-13.0.0-arm64-netinst.iso`
- Create a system disk: `qemu-img create -f qcow2 debian-13-arm64-ci.qcow2 30G`
- Run the VM: `qemu-system-aarch64 -M virt -m 2G -cpu max -bios /usr/local/share/qemu/edk2-aarch64-code.fd -drive if=none,file=debian-13.0.0-arm64-netinst.iso,format=raw,id=cdrom -device scsi-cd,drive=cdrom -drive if=none,file=debian-13-arm64-ci.qcow2,id=hd0,format=qcow2 -device virtio-blk-device,drive=hd0 -device virtio-gpu-pci -device usb-ehci -device usb-kbd -device intel-hda -device hda-output -nic user,model=virtio-net-pci`
## Initial Configuration
If the configure phase fails, see the `Troubleshooting` section below. Usually, as long as you followed the dependencies guide, the defaults *should* successfully configure and build.

View File

@@ -93,11 +93,6 @@ Eden is not currently available as a port on FreeBSD, though it is in the works.
The available OpenSSL port (3.0.17) is out-of-date, and using a bundled static library instead is recommended; to do so, add `-DYUZU_USE_BUNDLED_OPENSSL=ON` to your CMake configure command.
Gamepads/controllers may not work on 15.0; this is due to an outdated SDL not responding well to the new `usbhid(2)` driver. To work around this, simply disable `usbhid(2)` (add the following to `/boot/loader.conf`):
```sh
hw.usb.usbhid.enable="0"
```
## NetBSD
Install `pkgin` if it is not already present (`pkg_add pkgin`); see also the general [pkgsrc guide](https://www.netbsd.org/docs/pkgsrc/using.html). For NetBSD 10.1, run `echo 'PKG_PATH="https://cdn.netbsd.org/pub/pkgsrc/packages/NetBSD/x86_64/10.0_2025Q3/All/"' >/etc/pkg_install.conf`. If `pkgin` is taking too long, consider adding the following to `/etc/rc.conf`:
@@ -201,24 +196,14 @@ windeployqt6 --no-compiler-runtime --no-opengl-sw --no-system-dxc-compiler \
find ./*/ -name "*.dll" | while read -r dll; do deps "$dll"; done
```
## RedoxOS
The package install may randomly hang at times, in which case it has to be restarted. ALWAYS do a `sudo pkg update` first, or the chances of it hanging are close to 90%. If multiple installs fail at once, try installing the packages one by one.
When CMake invokes certain file syscalls, it may sometimes cause crashes or corruption in the (kernel?) address space, so reboot the system if CMake appears to hang.
## Windows
### Windows 7, Windows 8 and Windows 8.1
## Windows 8.1 and below
DirectX 12 is not available - simply copy any DLL and rename the copy to `d3d12.dll`.
Install the [Qt6 compatibility libraries](github.com/ANightly/qt6windows7), specifically Qt 6.9.5.
### Windows Vista and below
No support for Windows Vista (or below) is present at the moment. Check back later.
### Windows on ARM
If you're using Snapdragon X or 8CX, use the [Vulkan translation layer](https://apps.microsoft.com/detail/9nqpsl29bfff?hl=en-us&gl=USE) only if the stock drivers do not work. And of course, always keep your system up-to-date.

View File

@@ -10,13 +10,7 @@ Simply put, types/classes are named as `PascalCase`, same for methods and functi
Except for Qt MOC where `functionName` is preferred.
Template typenames prefer short names like `T`, `I`, `U`; if a longer name is required, either `Iterator` or `perform_action` is fine as well. Do not use names like `SS`, as systems like Solaris define it for registers; in general, do not use any of the following as short names:
- `SS`, `DS`, `GS`, `FS`: Segment registers, defined by Solaris `<ucontext.h>`
- `EAX`, `EBX`, `ECX`, `EDX`, `ESI`, `EDI`, `ESP`, `EBP`, `EIP`: Registers, defined by Solaris.
- `X`: Defined by some utility headers, avoid.
- `_`: Defined by gettext, avoid.
- `N`, `M`, `S`: Preferably don't use these for types; use them for numeric constants.
- `TR`: Used by some weird `<ucontext.h>` implementations that define the Task Register as a logical register exposed to the user... (need to confirm which OS specifically).
Template typenames prefer short names like `T`, `I`, `U`; if a longer name is required, either `Iterator` or `perform_action` is fine as well.
Macros must always be in `SCREAMING_CASE`. Do not use short-letter macros, as systems like Solaris will conflict with them; a good rule of thumb is >5 characters per macro - e.g. `THIS_MACRO_IS_GOOD`, `AND_ALSO_THIS_ONE`.
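To make the naming rules concrete, here is a short hedged illustration (the names are hypothetical, not taken from the codebase):
```c++
#include <cstddef>

// Types, methods and free functions: PascalCase.
class SurfaceCache {
public:
    void InvalidateRegion(std::size_t offset, std::size_t size);
};

// Template typenames: short names are fine; use a longer one when it helps.
template<typename T, typename Iterator>
T Accumulate(Iterator first, Iterator last);

// Macros: SCREAMING_CASE and comfortably longer than 5 characters, so they
// cannot collide with register macros from system headers (SS, EAX, TR, ...).
#define SURFACE_CACHE_PAGE_BITS 12

// Bad: single-letter or register-like names that system headers may already define.
// template<typename SS> ...   // SS is a segment register macro on Solaris
// #define X 1                 // X is taken by some utility headers
```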
@@ -24,45 +18,25 @@ Try not to use Hungarian notation, if you're able.
## Formatting
Formatting is extremely lax; the general rule of thumb is: don't add new lines just to increase line count. The fewer lines we have to look at, the better. This also means packing your code densely while not making it a clusterfuck. Strike a balance between "this is a short and comprehensible piece of code" and "my eyes are actually happy to see this!". Don't just drop the entire thing on a single line and call it "dense code"; that's just spaghetti posing as code. In general, be mindful of what other devs need to look at.
Do not put if/while/etc. opening braces on their own line:
```c++
// no, don't do this
// this is more lines of code for no good reason (why do braces need their own lines?)
// and those take up space on someone's screen, cumulatively
if (thing)
{ //<--
{
some(); // ...
} //<-- 2 lines of code for basically "opening" and "closing" a statement
}
// do this
if (thing) { //<-- [...] and with your brain you can deduce it's this piece of code
// that's being closed
if (thing) {
some(); // ...
} //<-- only one line, and it's clearer since you know it's closing something [...]
}
// or this, albeit the extra line isn't needed (at your discretion of course)
// or this
if (thing)
some(); // ...
// this is also ok, keeps things in one line and makes it extremely clear
// this is also ok
if (thing) some();
// NOT ok, don't be "clever" and use the comma operator to stash a bunch of statements
// in a single line, doing this will definitely ruin someone's day - just do the thing below
// vvv
if (thing) some(), thing(), a2(a1(), y1(), j1()), do_complex_shit(wa(), wo(), ploo());
// ... and in general don't use the comma operator for "multiple statements", EXCEPT if you think
// that it makes the code more readable (the situation may be rare however)
// Wow, so much clearer! Now I can actually see what each statement is meant to do!
if (thing) {
some();
thing();
a2(a1(), y1(), j1());
do_complex_shit(wa(), wo(), ploo());
}
```
Brace rules are lax; if you can get the point across, do it:
@@ -103,21 +77,3 @@ if (device_name.empty()) {
SDL_AudioSpec obtained;
device = SDL_OpenAudioDevice(device_name.empty() ? nullptr : device_name.c_str(), capture, &spec, &obtained, false);
```
A note about operators: use them sparingly. Yes, the language is lax about them, but some usages can be... tripping, to say the least.
```c++
a, b, c; //<-- NOT OK, multiple statements via the comma operator are definitely a recipe for disaster
return c ? a : b; //<-- OK, ternaries at the end of return statements are clear and fine
return a, b; //<-- NOT OK, return will take the value of `b` but also evaluate `a`, just use a separate statement
void f(int a[]) //<-- OK? if you intend to use the pointer as an array, otherwise just mark it as *
```
And about templates: use them sparingly. Don't just do meta-templating for the sake of it; do it when you actually need it. This isn't a competition to see who can make the most complicated and robust meta-templating system. Just use what works, and preferably stick to the standard library instead of reinventing the wheel. Additionally:
```c++
// NOT OK, this will create (T * N * C * P) versions of the same function. DO. NOT. DO. THIS.
template<typename T, size_t N, size_t C, size_t P> inline void what() noexcept;
// OK, use parameters like a normal person, don't be afraid to use them :)
template<typename T> inline void what(size_t n, size_t c, size_t p) noexcept;
```

docs/CrossCompile.md (new file, +10 lines)
View File

@@ -0,0 +1,10 @@
# Cross Compile
## ARM64
A painless guide to cross-compiling (or testing NCE) from an x86_64 system without polluting your main system.
- Install QEMU: `sudo pkg install qemu`
- Download Debian 13: `wget https://cdimage.debian.org/debian-cd/current/arm64/iso-cd/debian-13.0.0-arm64-netinst.iso`
- Create a system disk: `qemu-img create -f qcow2 debian-13-arm64-ci.qcow2 30G`
- Run the VM: `qemu-system-aarch64 -M virt -m 2G -cpu max -bios /usr/local/share/qemu/edk2-aarch64-code.fd -drive if=none,file=debian-13.0.0-arm64-netinst.iso,format=raw,id=cdrom -device scsi-cd,drive=cdrom -drive if=none,file=debian-13-arm64-ci.qcow2,id=hd0,format=qcow2 -device virtio-blk-device,drive=hd0 -device virtio-gpu-pci -device usb-ehci -device usb-kbd -device intel-hda -device hda-output -nic user,model=virtio-net-pci`

View File

@@ -294,22 +294,22 @@ sudo pkg install qt6 boost glslang libzip library/lz4 libusb-1 nlohmann-json ope
* Open the `MSYS2 MinGW 64-bit` shell (`mingw64.exe`)
* Download and install all dependencies:
```sh
BASE="git make autoconf libtool automake-wrapper jq patch"
MINGW="qt6-base qt6-tools qt6-translations qt6-svg cmake toolchain clang python-pip openssl vulkan-memory-allocator vulkan-devel glslang boost fmt lz4 nlohmann-json zlib zstd enet opus mbedtls libusb unordered_dense openssl SDL2"
# Either x86_64 or clang-aarch64 (Windows on ARM)
MINGW="qt6-base qt6-tools qt6-translations qt6-svg cmake toolchain clang python-pip openssl vulkan-memory-allocator vulkan-devel glslang boost fmt lz4 nlohmann-json zlib zstd enet opus mbedtls libusb unordered_dense"
packages="$BASE"
for pkg in $MINGW; do
packages="$packages mingw-w64-x86_64-$pkg"
#packages="$packages mingw-w64-clang-aarch64-$pkg"
done
pacman -Syuu --needed --noconfirm $packages
```
* Notes:
- Using `qt6-static` is possible but currently untested.
- Other environments are entirely untested, but should theoretically work provided you install all the necessary packages.
- GCC is proven to work better with the MinGW environment. If you choose to use Clang, you *may* be better off using the clang64 environment.
- ...except on ARM64, we recommend using clang instead of GCC for Windows ARM64.
- Add `qt-creator` to the `MINGW` variable to install Qt Creator. You can then create a Start Menu shortcut to the MinGW Qt Creator by running `powershell "\$s=(New-Object -COM WScript.Shell).CreateShortcut('C:\\ProgramData\\Microsoft\\Windows\\Start Menu\\Programs\\Qt Creator.lnk');\$s.TargetPath='C:\\msys64\\mingw64\\bin\\qtcreator.exe';\$s.Save()"` in Git Bash or MSYS2.
* Add MinGW binaries to the PATH if they aren't already:
* `echo 'PATH=/mingw64/bin:$PATH' >> ~/.bashrc`

View File

@@ -20,116 +20,122 @@ struct FormatTuple {
GLenum format = GL_NONE;
GLenum type = GL_NONE;
};
#define SURFACE_FORMAT_LIST \
SURFACE_FORMAT_ELEM(GL_RGBA8, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8_REV, A8B8G8R8_UNORM) \
SURFACE_FORMAT_ELEM(GL_RGBA8_SNORM, GL_RGBA, GL_BYTE, A8B8G8R8_SNORM) \
SURFACE_FORMAT_ELEM(GL_RGBA8I, GL_RGBA_INTEGER, GL_BYTE, A8B8G8R8_SINT) \
SURFACE_FORMAT_ELEM(GL_RGBA8UI, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, A8B8G8R8_UINT) \
SURFACE_FORMAT_ELEM(GL_RGB565, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, R5G6B5_UNORM) \
SURFACE_FORMAT_ELEM(GL_RGB565, GL_RGB, GL_UNSIGNED_SHORT_5_6_5_REV, B5G6R5_UNORM) \
SURFACE_FORMAT_ELEM(GL_RGB5_A1, GL_BGRA, GL_UNSIGNED_SHORT_1_5_5_5_REV, A1R5G5B5_UNORM) \
SURFACE_FORMAT_ELEM(GL_RGB10_A2, GL_RGBA, GL_UNSIGNED_INT_2_10_10_10_REV, A2B10G10R10_UNORM) \
SURFACE_FORMAT_ELEM(GL_RGB10_A2UI, GL_RGBA_INTEGER, GL_UNSIGNED_INT_2_10_10_10_REV, A2B10G10R10_UINT) \
SURFACE_FORMAT_ELEM(GL_RGB10_A2, GL_BGRA, GL_UNSIGNED_INT_2_10_10_10_REV, A2R10G10B10_UNORM) \
SURFACE_FORMAT_ELEM(GL_RGB5_A1, GL_RGBA, GL_UNSIGNED_SHORT_1_5_5_5_REV, A1B5G5R5_UNORM) \
SURFACE_FORMAT_ELEM(GL_RGB5_A1, GL_RGBA, GL_UNSIGNED_SHORT_5_5_5_1, A5B5G5R1_UNORM) \
SURFACE_FORMAT_ELEM(GL_R8, GL_RED, GL_UNSIGNED_BYTE, R8_UNORM) \
SURFACE_FORMAT_ELEM(GL_R8_SNORM, GL_RED, GL_BYTE, R8_SNORM) \
SURFACE_FORMAT_ELEM(GL_R8I, GL_RED_INTEGER, GL_BYTE, R8_SINT) \
SURFACE_FORMAT_ELEM(GL_R8UI, GL_RED_INTEGER, GL_UNSIGNED_BYTE, R8_UINT) \
SURFACE_FORMAT_ELEM(GL_RGBA16F, GL_RGBA, GL_HALF_FLOAT, R16G16B16A16_FLOAT) \
SURFACE_FORMAT_ELEM(GL_RGBA16, GL_RGBA, GL_UNSIGNED_SHORT, R16G16B16A16_UNORM) \
SURFACE_FORMAT_ELEM(GL_RGBA16_SNORM, GL_RGBA, GL_SHORT, R16G16B16A16_SNORM) \
SURFACE_FORMAT_ELEM(GL_RGBA16I, GL_RGBA_INTEGER, GL_SHORT, R16G16B16A16_SINT) \
SURFACE_FORMAT_ELEM(GL_RGBA16UI, GL_RGBA_INTEGER, GL_UNSIGNED_SHORT, R16G16B16A16_UINT) \
SURFACE_FORMAT_ELEM(GL_R11F_G11F_B10F, GL_RGB, GL_UNSIGNED_INT_10F_11F_11F_REV, B10G11R11_FLOAT) \
SURFACE_FORMAT_ELEM(GL_RGBA32UI, GL_RGBA_INTEGER, GL_UNSIGNED_INT, R32G32B32A32_UINT) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RGBA_S3TC_DXT1_EXT, GL_NONE, GL_NONE, BC1_RGBA_UNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RGBA_S3TC_DXT3_EXT, GL_NONE, GL_NONE, BC2_UNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RGBA_S3TC_DXT5_EXT, GL_NONE, GL_NONE, BC3_UNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RED_RGTC1, GL_NONE, GL_NONE, BC4_UNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_SIGNED_RED_RGTC1, GL_NONE, GL_NONE, BC4_SNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RG_RGTC2, GL_NONE, GL_NONE, BC5_UNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_SIGNED_RG_RGTC2, GL_NONE, GL_NONE, BC5_SNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RGBA_BPTC_UNORM, GL_NONE, GL_NONE, BC7_UNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RGB_BPTC_UNSIGNED_FLOAT, GL_NONE, GL_NONE, BC6H_UFLOAT) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RGB_BPTC_SIGNED_FLOAT, GL_NONE, GL_NONE, BC6H_SFLOAT) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RGBA_ASTC_4x4_KHR, GL_NONE, GL_NONE, ASTC_2D_4X4_UNORM) \
SURFACE_FORMAT_ELEM(GL_RGBA8, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, B8G8R8A8_UNORM) \
SURFACE_FORMAT_ELEM(GL_RGBA32F, GL_RGBA, GL_FLOAT, R32G32B32A32_FLOAT) \
SURFACE_FORMAT_ELEM(GL_RGBA32I, GL_RGBA_INTEGER, GL_INT, R32G32B32A32_SINT) \
SURFACE_FORMAT_ELEM(GL_RG32F, GL_RG, GL_FLOAT, R32G32_FLOAT) \
SURFACE_FORMAT_ELEM(GL_RG32I, GL_RG_INTEGER, GL_INT, R32G32_SINT) \
SURFACE_FORMAT_ELEM(GL_R32F, GL_RED, GL_FLOAT, R32_FLOAT) \
SURFACE_FORMAT_ELEM(GL_R16F, GL_RED, GL_HALF_FLOAT, R16_FLOAT) \
SURFACE_FORMAT_ELEM(GL_R16, GL_RED, GL_UNSIGNED_SHORT, R16_UNORM) \
SURFACE_FORMAT_ELEM(GL_R16_SNORM, GL_RED, GL_SHORT, R16_SNORM) \
SURFACE_FORMAT_ELEM(GL_R16UI, GL_RED_INTEGER, GL_UNSIGNED_SHORT, R16_UINT) \
SURFACE_FORMAT_ELEM(GL_R16I, GL_RED_INTEGER, GL_SHORT, R16_SINT) \
SURFACE_FORMAT_ELEM(GL_RG16, GL_RG, GL_UNSIGNED_SHORT, R16G16_UNORM) \
SURFACE_FORMAT_ELEM(GL_RG16F, GL_RG, GL_HALF_FLOAT, R16G16_FLOAT) \
SURFACE_FORMAT_ELEM(GL_RG16UI, GL_RG_INTEGER, GL_UNSIGNED_SHORT, R16G16_UINT) \
SURFACE_FORMAT_ELEM(GL_RG16I, GL_RG_INTEGER, GL_SHORT, R16G16_SINT) \
SURFACE_FORMAT_ELEM(GL_RG16_SNORM, GL_RG, GL_SHORT, R16G16_SNORM) \
SURFACE_FORMAT_ELEM(GL_RGB32F, GL_RGB, GL_FLOAT, R32G32B32_FLOAT) \
SURFACE_FORMAT_ELEM(GL_SRGB8_ALPHA8, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8_REV, A8B8G8R8_SRGB) \
SURFACE_FORMAT_ELEM(GL_RG8, GL_RG, GL_UNSIGNED_BYTE, R8G8_UNORM) \
SURFACE_FORMAT_ELEM(GL_RG8_SNORM, GL_RG, GL_BYTE, R8G8_SNORM) \
SURFACE_FORMAT_ELEM(GL_RG8I, GL_RG_INTEGER, GL_BYTE, R8G8_SINT) \
SURFACE_FORMAT_ELEM(GL_RG8UI, GL_RG_INTEGER, GL_UNSIGNED_BYTE, R8G8_UINT) \
SURFACE_FORMAT_ELEM(GL_RG32UI, GL_RG_INTEGER, GL_UNSIGNED_INT, R32G32_UINT) \
SURFACE_FORMAT_ELEM(GL_RGB16F, GL_RGBA, GL_HALF_FLOAT, R16G16B16X16_FLOAT) \
SURFACE_FORMAT_ELEM(GL_R32UI, GL_RED_INTEGER, GL_UNSIGNED_INT, R32_UINT) \
SURFACE_FORMAT_ELEM(GL_R32I, GL_RED_INTEGER, GL_INT, R32_SINT) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RGBA_ASTC_8x8_KHR, GL_NONE, GL_NONE, ASTC_2D_8X8_UNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RGBA_ASTC_8x5_KHR, GL_NONE, GL_NONE, ASTC_2D_8X5_UNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RGBA_ASTC_5x4_KHR, GL_NONE, GL_NONE, ASTC_2D_5X4_UNORM) \
SURFACE_FORMAT_ELEM(GL_SRGB8_ALPHA8, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, B8G8R8A8_SRGB) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_SRGB_ALPHA_S3TC_DXT1_EXT, GL_NONE, GL_NONE, BC1_RGBA_SRGB) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_SRGB_ALPHA_S3TC_DXT3_EXT, GL_NONE, GL_NONE, BC2_SRGB) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_SRGB_ALPHA_S3TC_DXT5_EXT, GL_NONE, GL_NONE, BC3_SRGB) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_SRGB_ALPHA_BPTC_UNORM, GL_NONE, GL_NONE, BC7_SRGB) \
SURFACE_FORMAT_ELEM(GL_RGBA4, GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4_REV, A4B4G4R4_UNORM) \
SURFACE_FORMAT_ELEM(GL_R8, GL_RED, GL_UNSIGNED_BYTE, G4R4_UNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_SRGB8_ALPHA8_ASTC_4x4_KHR, GL_NONE, GL_NONE, ASTC_2D_4X4_SRGB) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_SRGB8_ALPHA8_ASTC_8x8_KHR, GL_NONE, GL_NONE, ASTC_2D_8X8_SRGB) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_SRGB8_ALPHA8_ASTC_8x5_KHR, GL_NONE, GL_NONE, ASTC_2D_8X5_SRGB) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_SRGB8_ALPHA8_ASTC_5x4_KHR, GL_NONE, GL_NONE, ASTC_2D_5X4_SRGB) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RGBA_ASTC_5x5_KHR, GL_NONE, GL_NONE, ASTC_2D_5X5_UNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_SRGB8_ALPHA8_ASTC_5x5_KHR, GL_NONE, GL_NONE, ASTC_2D_5X5_SRGB) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RGBA_ASTC_10x8_KHR, GL_NONE, GL_NONE, ASTC_2D_10X8_UNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_SRGB8_ALPHA8_ASTC_10x8_KHR, GL_NONE, GL_NONE, ASTC_2D_10X8_SRGB) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RGBA_ASTC_6x6_KHR, GL_NONE, GL_NONE, ASTC_2D_6X6_UNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_SRGB8_ALPHA8_ASTC_6x6_KHR, GL_NONE, GL_NONE, ASTC_2D_6X6_SRGB) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RGBA_ASTC_10x6_KHR, GL_NONE, GL_NONE, ASTC_2D_10X6_UNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_SRGB8_ALPHA8_ASTC_10x6_KHR, GL_NONE, GL_NONE, ASTC_2D_10X6_SRGB) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RGBA_ASTC_10x5_KHR, GL_NONE, GL_NONE, ASTC_2D_10X5_UNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_SRGB8_ALPHA8_ASTC_10x5_KHR, GL_NONE, GL_NONE, ASTC_2D_10X5_SRGB) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RGBA_ASTC_10x10_KHR, GL_NONE, GL_NONE, ASTC_2D_10X10_UNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_SRGB8_ALPHA8_ASTC_10x10_KHR, GL_NONE, GL_NONE, ASTC_2D_10X10_SRGB) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RGBA_ASTC_12x10_KHR, GL_NONE, GL_NONE, ASTC_2D_12X10_UNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_SRGB8_ALPHA8_ASTC_12x10_KHR, GL_NONE, GL_NONE, ASTC_2D_12X10_SRGB) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RGBA_ASTC_12x12_KHR, GL_NONE, GL_NONE, ASTC_2D_12X12_UNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_SRGB8_ALPHA8_ASTC_12x12_KHR, GL_NONE, GL_NONE, ASTC_2D_12X12_SRGB) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RGBA_ASTC_8x6_KHR, GL_NONE, GL_NONE, ASTC_2D_8X6_UNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_SRGB8_ALPHA8_ASTC_8x6_KHR, GL_NONE, GL_NONE, ASTC_2D_8X6_SRGB) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_RGBA_ASTC_6x5_KHR, GL_NONE, GL_NONE, ASTC_2D_6X5_UNORM) \
SURFACE_FORMAT_ELEM(GL_COMPRESSED_SRGB8_ALPHA8_ASTC_6x5_KHR, GL_NONE, GL_NONE, ASTC_2D_6X5_SRGB) \
SURFACE_FORMAT_ELEM(GL_RGB9_E5, GL_RGB, GL_UNSIGNED_INT_5_9_9_9_REV, E5B9G9R9_FLOAT) \
SURFACE_FORMAT_ELEM(GL_DEPTH_COMPONENT32F, GL_DEPTH_COMPONENT, GL_FLOAT, D32_FLOAT) \
SURFACE_FORMAT_ELEM(GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, D16_UNORM) \
SURFACE_FORMAT_ELEM(GL_DEPTH_COMPONENT24, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT_24_8, X8_D24_UNORM) \
SURFACE_FORMAT_ELEM(GL_STENCIL_INDEX8, GL_STENCIL, GL_UNSIGNED_BYTE, S8_UINT) \
SURFACE_FORMAT_ELEM(GL_DEPTH24_STENCIL8, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, D24_UNORM_S8_UINT) \
SURFACE_FORMAT_ELEM(GL_DEPTH24_STENCIL8, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, S8_UINT_D24_UNORM) \
SURFACE_FORMAT_ELEM(GL_DEPTH32F_STENCIL8, GL_DEPTH_STENCIL, GL_FLOAT_32_UNSIGNED_INT_24_8_REV, D32_FLOAT_S8_UINT)
constexpr std::array<FormatTuple, VideoCore::Surface::MaxPixelFormat> FORMAT_TABLE = {{
{GL_RGBA8, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8_REV}, // A8B8G8R8_UNORM
{GL_RGBA8_SNORM, GL_RGBA, GL_BYTE}, // A8B8G8R8_SNORM
{GL_RGBA8I, GL_RGBA_INTEGER, GL_BYTE}, // A8B8G8R8_SINT
{GL_RGBA8UI, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE}, // A8B8G8R8_UINT
{GL_RGB565, GL_RGB, GL_UNSIGNED_SHORT_5_6_5}, // R5G6B5_UNORM
{GL_RGB565, GL_RGB, GL_UNSIGNED_SHORT_5_6_5_REV}, // B5G6R5_UNORM
{GL_RGB5_A1, GL_BGRA, GL_UNSIGNED_SHORT_1_5_5_5_REV}, // A1R5G5B5_UNORM
{GL_RGB10_A2, GL_RGBA, GL_UNSIGNED_INT_2_10_10_10_REV}, // A2B10G10R10_UNORM
{GL_RGB10_A2UI, GL_RGBA_INTEGER, GL_UNSIGNED_INT_2_10_10_10_REV}, // A2B10G10R10_UINT
{GL_RGB10_A2, GL_BGRA, GL_UNSIGNED_INT_2_10_10_10_REV}, // A2R10G10B10_UNORM
{GL_RGB5_A1, GL_RGBA, GL_UNSIGNED_SHORT_1_5_5_5_REV}, // A1B5G5R5_UNORM
{GL_RGB5_A1, GL_RGBA, GL_UNSIGNED_SHORT_5_5_5_1}, // A5B5G5R1_UNORM
{GL_R8, GL_RED, GL_UNSIGNED_BYTE}, // R8_UNORM
{GL_R8_SNORM, GL_RED, GL_BYTE}, // R8_SNORM
{GL_R8I, GL_RED_INTEGER, GL_BYTE}, // R8_SINT
{GL_R8UI, GL_RED_INTEGER, GL_UNSIGNED_BYTE}, // R8_UINT
{GL_RGBA16F, GL_RGBA, GL_HALF_FLOAT}, // R16G16B16A16_FLOAT
{GL_RGBA16, GL_RGBA, GL_UNSIGNED_SHORT}, // R16G16B16A16_UNORM
{GL_RGBA16_SNORM, GL_RGBA, GL_SHORT}, // R16G16B16A16_SNORM
{GL_RGBA16I, GL_RGBA_INTEGER, GL_SHORT}, // R16G16B16A16_SINT
{GL_RGBA16UI, GL_RGBA_INTEGER, GL_UNSIGNED_SHORT}, // R16G16B16A16_UINT
{GL_R11F_G11F_B10F, GL_RGB, GL_UNSIGNED_INT_10F_11F_11F_REV}, // B10G11R11_FLOAT
{GL_RGBA32UI, GL_RGBA_INTEGER, GL_UNSIGNED_INT}, // R32G32B32A32_UINT
{GL_COMPRESSED_RGBA_S3TC_DXT1_EXT}, // BC1_RGBA_UNORM
{GL_COMPRESSED_RGBA_S3TC_DXT3_EXT}, // BC2_UNORM
{GL_COMPRESSED_RGBA_S3TC_DXT5_EXT}, // BC3_UNORM
{GL_COMPRESSED_RED_RGTC1}, // BC4_UNORM
{GL_COMPRESSED_SIGNED_RED_RGTC1}, // BC4_SNORM
{GL_COMPRESSED_RG_RGTC2}, // BC5_UNORM
{GL_COMPRESSED_SIGNED_RG_RGTC2}, // BC5_SNORM
{GL_COMPRESSED_RGBA_BPTC_UNORM}, // BC7_UNORM
{GL_COMPRESSED_RGB_BPTC_UNSIGNED_FLOAT}, // BC6H_UFLOAT
{GL_COMPRESSED_RGB_BPTC_SIGNED_FLOAT}, // BC6H_SFLOAT
{GL_COMPRESSED_RGBA_ASTC_4x4_KHR}, // ASTC_2D_4X4_UNORM
{GL_RGBA8, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV}, // B8G8R8A8_UNORM
{GL_RGBA32F, GL_RGBA, GL_FLOAT}, // R32G32B32A32_FLOAT
{GL_RGBA32I, GL_RGBA_INTEGER, GL_INT}, // R32G32B32A32_SINT
{GL_RG32F, GL_RG, GL_FLOAT}, // R32G32_FLOAT
{GL_RG32I, GL_RG_INTEGER, GL_INT}, // R32G32_SINT
{GL_R32F, GL_RED, GL_FLOAT}, // R32_FLOAT
{GL_R16F, GL_RED, GL_HALF_FLOAT}, // R16_FLOAT
{GL_R16, GL_RED, GL_UNSIGNED_SHORT}, // R16_UNORM
{GL_R16_SNORM, GL_RED, GL_SHORT}, // R16_SNORM
{GL_R16UI, GL_RED_INTEGER, GL_UNSIGNED_SHORT}, // R16_UINT
{GL_R16I, GL_RED_INTEGER, GL_SHORT}, // R16_SINT
{GL_RG16, GL_RG, GL_UNSIGNED_SHORT}, // R16G16_UNORM
{GL_RG16F, GL_RG, GL_HALF_FLOAT}, // R16G16_FLOAT
{GL_RG16UI, GL_RG_INTEGER, GL_UNSIGNED_SHORT}, // R16G16_UINT
{GL_RG16I, GL_RG_INTEGER, GL_SHORT}, // R16G16_SINT
{GL_RG16_SNORM, GL_RG, GL_SHORT}, // R16G16_SNORM
{GL_RGB32F, GL_RGB, GL_FLOAT}, // R32G32B32_FLOAT
{GL_SRGB8_ALPHA8, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8_REV}, // A8B8G8R8_SRGB
{GL_RG8, GL_RG, GL_UNSIGNED_BYTE}, // R8G8_UNORM
{GL_RG8_SNORM, GL_RG, GL_BYTE}, // R8G8_SNORM
{GL_RG8I, GL_RG_INTEGER, GL_BYTE}, // R8G8_SINT
{GL_RG8UI, GL_RG_INTEGER, GL_UNSIGNED_BYTE}, // R8G8_UINT
{GL_RG32UI, GL_RG_INTEGER, GL_UNSIGNED_INT}, // R32G32_UINT
{GL_RGB16F, GL_RGBA, GL_HALF_FLOAT}, // R16G16B16X16_FLOAT
{GL_R32UI, GL_RED_INTEGER, GL_UNSIGNED_INT}, // R32_UINT
{GL_R32I, GL_RED_INTEGER, GL_INT}, // R32_SINT
{GL_COMPRESSED_RGBA_ASTC_8x8_KHR}, // ASTC_2D_8X8_UNORM
{GL_COMPRESSED_RGBA_ASTC_8x5_KHR}, // ASTC_2D_8X5_UNORM
{GL_COMPRESSED_RGBA_ASTC_5x4_KHR}, // ASTC_2D_5X4_UNORM
{GL_SRGB8_ALPHA8, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV}, // B8G8R8A8_SRGB
{GL_COMPRESSED_SRGB_ALPHA_S3TC_DXT1_EXT}, // BC1_RGBA_SRGB
{GL_COMPRESSED_SRGB_ALPHA_S3TC_DXT3_EXT}, // BC2_SRGB
{GL_COMPRESSED_SRGB_ALPHA_S3TC_DXT5_EXT}, // BC3_SRGB
{GL_COMPRESSED_SRGB_ALPHA_BPTC_UNORM}, // BC7_SRGB
{GL_RGBA4, GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4_REV}, // A4B4G4R4_UNORM
{GL_R8, GL_RED, GL_UNSIGNED_BYTE}, // G4R4_UNORM
{GL_COMPRESSED_SRGB8_ALPHA8_ASTC_4x4_KHR}, // ASTC_2D_4X4_SRGB
{GL_COMPRESSED_SRGB8_ALPHA8_ASTC_8x8_KHR}, // ASTC_2D_8X8_SRGB
{GL_COMPRESSED_SRGB8_ALPHA8_ASTC_8x5_KHR}, // ASTC_2D_8X5_SRGB
{GL_COMPRESSED_SRGB8_ALPHA8_ASTC_5x4_KHR}, // ASTC_2D_5X4_SRGB
{GL_COMPRESSED_RGBA_ASTC_5x5_KHR}, // ASTC_2D_5X5_UNORM
{GL_COMPRESSED_SRGB8_ALPHA8_ASTC_5x5_KHR}, // ASTC_2D_5X5_SRGB
{GL_COMPRESSED_RGBA_ASTC_10x8_KHR}, // ASTC_2D_10X8_UNORM
{GL_COMPRESSED_SRGB8_ALPHA8_ASTC_10x8_KHR}, // ASTC_2D_10X8_SRGB
{GL_COMPRESSED_RGBA_ASTC_6x6_KHR}, // ASTC_2D_6X6_UNORM
{GL_COMPRESSED_SRGB8_ALPHA8_ASTC_6x6_KHR}, // ASTC_2D_6X6_SRGB
{GL_COMPRESSED_RGBA_ASTC_10x6_KHR}, // ASTC_2D_10X6_UNORM
{GL_COMPRESSED_SRGB8_ALPHA8_ASTC_10x6_KHR}, // ASTC_2D_10X6_SRGB
{GL_COMPRESSED_RGBA_ASTC_10x5_KHR}, // ASTC_2D_10X5_UNORM
{GL_COMPRESSED_SRGB8_ALPHA8_ASTC_10x5_KHR}, // ASTC_2D_10X5_SRGB
{GL_COMPRESSED_RGBA_ASTC_10x10_KHR}, // ASTC_2D_10X10_UNORM
{GL_COMPRESSED_SRGB8_ALPHA8_ASTC_10x10_KHR}, // ASTC_2D_10X10_SRGB
{GL_COMPRESSED_RGBA_ASTC_12x10_KHR}, // ASTC_2D_12X10_UNORM
{GL_COMPRESSED_SRGB8_ALPHA8_ASTC_12x10_KHR}, // ASTC_2D_12X10_SRGB
{GL_COMPRESSED_RGBA_ASTC_12x12_KHR}, // ASTC_2D_12X12_UNORM
{GL_COMPRESSED_SRGB8_ALPHA8_ASTC_12x12_KHR}, // ASTC_2D_12X12_SRGB
{GL_COMPRESSED_RGBA_ASTC_8x6_KHR}, // ASTC_2D_8X6_UNORM
{GL_COMPRESSED_SRGB8_ALPHA8_ASTC_8x6_KHR}, // ASTC_2D_8X6_SRGB
{GL_COMPRESSED_RGBA_ASTC_6x5_KHR}, // ASTC_2D_6X5_UNORM
{GL_COMPRESSED_SRGB8_ALPHA8_ASTC_6x5_KHR}, // ASTC_2D_6X5_SRGB
{GL_RGB9_E5, GL_RGB, GL_UNSIGNED_INT_5_9_9_9_REV}, // E5B9G9R9_FLOAT
{GL_DEPTH_COMPONENT32F, GL_DEPTH_COMPONENT, GL_FLOAT}, // D32_FLOAT
{GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT}, // D16_UNORM
{GL_DEPTH_COMPONENT24, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT_24_8}, // X8_D24_UNORM
{GL_STENCIL_INDEX8, GL_STENCIL, GL_UNSIGNED_BYTE}, // S8_UINT
{GL_DEPTH24_STENCIL8, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8}, // D24_UNORM_S8_UINT
{GL_DEPTH24_STENCIL8, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8}, // S8_UINT_D24_UNORM
{GL_DEPTH32F_STENCIL8, GL_DEPTH_STENCIL,
GL_FLOAT_32_UNSIGNED_INT_24_8_REV}, // D32_FLOAT_S8_UINT
#define SURFACE_FORMAT_ELEM(a1, a2, a3, name) {a1, a2, a3},
SURFACE_FORMAT_LIST
#undef SURFACE_FORMAT_ELEM
}};
inline const FormatTuple& GetFormatTuple(VideoCore::Surface::PixelFormat pixel_format) {
ASSERT(static_cast<size_t>(pixel_format) < FORMAT_TABLE.size());
return FORMAT_TABLE[static_cast<size_t>(pixel_format)];
constexpr FormatTuple GetFormatTuple(VideoCore::Surface::PixelFormat pixel_format) noexcept {
switch (pixel_format) {
#define SURFACE_FORMAT_ELEM(a1, a2, a3, name) case VideoCore::Surface::PixelFormat::name: return {a1, a2, a3};
SURFACE_FORMAT_LIST
#undef SURFACE_FORMAT_ELEM
#undef SURFACE_FORMAT_LIST
default: UNREACHABLE();
}
}
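For readers unfamiliar with the pattern: this is the classic X-macro technique, where one list macro is expanded several times with different per-element definitions so the table, the switch, and any related enum can never drift apart. A minimal, self-contained sketch of the idea (hypothetical names, unrelated to the project's `SURFACE_FORMAT_LIST`):
```c++
#include <cstdio>

// One list; each entry is described exactly once.
#define COLOR_LIST \
    COLOR_ELEM(Red, 0xFF0000u) \
    COLOR_ELEM(Green, 0x00FF00u) \
    COLOR_ELEM(Blue, 0x0000FFu)

// Expansion 1: generate the enum.
enum class Color {
#define COLOR_ELEM(name, rgb) name,
    COLOR_LIST
#undef COLOR_ELEM
};

// Expansion 2: generate a lookup from the same list.
constexpr unsigned ToRgb(Color color) {
    switch (color) {
#define COLOR_ELEM(name, rgb) case Color::name: return rgb;
        COLOR_LIST
#undef COLOR_ELEM
    }
    return 0; // only reached if an enumerator is missing from the list
}

int main() {
    std::printf("%06X\n", ToRgb(Color::Green)); // prints 00FF00
}
```
`SURFACE_FORMAT_LIST`/`SURFACE_FORMAT_ELEM` above (and `PIXEL_FORMAT_LIST` in `surface.h` below) apply the same trick to a far larger list.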
inline GLenum VertexFormat(Maxwell::VertexAttribute attrib) {

View File

@@ -105,148 +105,141 @@ VkCompareOp DepthCompareFunction(Tegra::Texture::DepthCompareFunc depth_compare_
}
} // namespace Sampler
namespace {
constexpr u32 Attachable = 1 << 0;
constexpr u32 Storage = 1 << 1;
struct FormatTuple {
VkFormat format; ///< Vulkan format
int usage = 0; ///< Describes image format usage
} constexpr tex_format_tuples[] = {
{VK_FORMAT_A8B8G8R8_UNORM_PACK32, Attachable | Storage}, // A8B8G8R8_UNORM
{VK_FORMAT_A8B8G8R8_SNORM_PACK32, Attachable | Storage}, // A8B8G8R8_SNORM
{VK_FORMAT_A8B8G8R8_SINT_PACK32, Attachable | Storage}, // A8B8G8R8_SINT
{VK_FORMAT_A8B8G8R8_UINT_PACK32, Attachable | Storage}, // A8B8G8R8_UINT
{VK_FORMAT_R5G6B5_UNORM_PACK16, Attachable}, // R5G6B5_UNORM
{VK_FORMAT_B5G6R5_UNORM_PACK16}, // B5G6R5_UNORM
{VK_FORMAT_A1R5G5B5_UNORM_PACK16, Attachable}, // A1R5G5B5_UNORM
{VK_FORMAT_A2B10G10R10_UNORM_PACK32, Attachable | Storage}, // A2B10G10R10_UNORM
{VK_FORMAT_A2B10G10R10_UINT_PACK32, Attachable | Storage}, // A2B10G10R10_UINT
{VK_FORMAT_A2R10G10B10_UNORM_PACK32, Attachable}, // A2R10G10B10_UNORM
{VK_FORMAT_A1R5G5B5_UNORM_PACK16, Attachable}, // A1B5G5R5_UNORM (flipped with swizzle)
{VK_FORMAT_R5G5B5A1_UNORM_PACK16}, // A5B5G5R1_UNORM (specially swizzled)
{VK_FORMAT_R8_UNORM, Attachable | Storage}, // R8_UNORM
{VK_FORMAT_R8_SNORM, Attachable | Storage}, // R8_SNORM
{VK_FORMAT_R8_SINT, Attachable | Storage}, // R8_SINT
{VK_FORMAT_R8_UINT, Attachable | Storage}, // R8_UINT
{VK_FORMAT_R16G16B16A16_SFLOAT, Attachable | Storage}, // R16G16B16A16_FLOAT
{VK_FORMAT_R16G16B16A16_UNORM, Attachable | Storage}, // R16G16B16A16_UNORM
{VK_FORMAT_R16G16B16A16_SNORM, Attachable | Storage}, // R16G16B16A16_SNORM
{VK_FORMAT_R16G16B16A16_SINT, Attachable | Storage}, // R16G16B16A16_SINT
{VK_FORMAT_R16G16B16A16_UINT, Attachable | Storage}, // R16G16B16A16_UINT
{VK_FORMAT_B10G11R11_UFLOAT_PACK32, Attachable | Storage}, // B10G11R11_FLOAT
{VK_FORMAT_R32G32B32A32_UINT, Attachable | Storage}, // R32G32B32A32_UINT
{VK_FORMAT_BC1_RGBA_UNORM_BLOCK}, // BC1_RGBA_UNORM
{VK_FORMAT_BC2_UNORM_BLOCK}, // BC2_UNORM
{VK_FORMAT_BC3_UNORM_BLOCK}, // BC3_UNORM
{VK_FORMAT_BC4_UNORM_BLOCK}, // BC4_UNORM
{VK_FORMAT_BC4_SNORM_BLOCK}, // BC4_SNORM
{VK_FORMAT_BC5_UNORM_BLOCK}, // BC5_UNORM
{VK_FORMAT_BC5_SNORM_BLOCK}, // BC5_SNORM
{VK_FORMAT_BC7_UNORM_BLOCK}, // BC7_UNORM
{VK_FORMAT_BC6H_UFLOAT_BLOCK}, // BC6H_UFLOAT
{VK_FORMAT_BC6H_SFLOAT_BLOCK}, // BC6H_SFLOAT
{VK_FORMAT_ASTC_4x4_UNORM_BLOCK}, // ASTC_2D_4X4_UNORM
{VK_FORMAT_B8G8R8A8_UNORM, Attachable | Storage}, // B8G8R8A8_UNORM
{VK_FORMAT_R32G32B32A32_SFLOAT, Attachable | Storage}, // R32G32B32A32_FLOAT
{VK_FORMAT_R32G32B32A32_SINT, Attachable | Storage}, // R32G32B32A32_SINT
{VK_FORMAT_R32G32_SFLOAT, Attachable | Storage}, // R32G32_FLOAT
{VK_FORMAT_R32G32_SINT, Attachable | Storage}, // R32G32_SINT
{VK_FORMAT_R32_SFLOAT, Attachable | Storage}, // R32_FLOAT
{VK_FORMAT_R16_SFLOAT, Attachable | Storage}, // R16_FLOAT
{VK_FORMAT_R16_UNORM, Attachable | Storage}, // R16_UNORM
{VK_FORMAT_R16_SNORM, Attachable | Storage}, // R16_SNORM
{VK_FORMAT_R16_UINT, Attachable | Storage}, // R16_UINT
{VK_FORMAT_R16_SINT, Attachable | Storage}, // R16_SINT
{VK_FORMAT_R16G16_UNORM, Attachable | Storage}, // R16G16_UNORM
{VK_FORMAT_R16G16_SFLOAT, Attachable | Storage}, // R16G16_FLOAT
{VK_FORMAT_R16G16_UINT, Attachable | Storage}, // R16G16_UINT
{VK_FORMAT_R16G16_SINT, Attachable | Storage}, // R16G16_SINT
{VK_FORMAT_R16G16_SNORM, Attachable | Storage}, // R16G16_SNORM
{VK_FORMAT_R32G32B32_SFLOAT}, // R32G32B32_FLOAT
{VK_FORMAT_A8B8G8R8_SRGB_PACK32, Attachable}, // A8B8G8R8_SRGB
{VK_FORMAT_R8G8_UNORM, Attachable | Storage}, // R8G8_UNORM
{VK_FORMAT_R8G8_SNORM, Attachable | Storage}, // R8G8_SNORM
{VK_FORMAT_R8G8_SINT, Attachable | Storage}, // R8G8_SINT
{VK_FORMAT_R8G8_UINT, Attachable | Storage}, // R8G8_UINT
{VK_FORMAT_R32G32_UINT, Attachable | Storage}, // R32G32_UINT
{VK_FORMAT_R16G16B16A16_SFLOAT, Attachable | Storage}, // R16G16B16X16_FLOAT
{VK_FORMAT_R32_UINT, Attachable | Storage}, // R32_UINT
{VK_FORMAT_R32_SINT, Attachable | Storage}, // R32_SINT
{VK_FORMAT_ASTC_8x8_UNORM_BLOCK}, // ASTC_2D_8X8_UNORM
{VK_FORMAT_ASTC_8x5_UNORM_BLOCK}, // ASTC_2D_8X5_UNORM
{VK_FORMAT_ASTC_5x4_UNORM_BLOCK}, // ASTC_2D_5X4_UNORM
{VK_FORMAT_B8G8R8A8_SRGB, Attachable}, // B8G8R8A8_SRGB
{VK_FORMAT_BC1_RGBA_SRGB_BLOCK}, // BC1_RGBA_SRGB
{VK_FORMAT_BC2_SRGB_BLOCK}, // BC2_SRGB
{VK_FORMAT_BC3_SRGB_BLOCK}, // BC3_SRGB
{VK_FORMAT_BC7_SRGB_BLOCK}, // BC7_SRGB
{VK_FORMAT_A4B4G4R4_UNORM_PACK16_EXT}, // A4B4G4R4_UNORM
{VK_FORMAT_R4G4_UNORM_PACK8}, // G4R4_UNORM
{VK_FORMAT_ASTC_4x4_SRGB_BLOCK}, // ASTC_2D_4X4_SRGB
{VK_FORMAT_ASTC_8x8_SRGB_BLOCK}, // ASTC_2D_8X8_SRGB
{VK_FORMAT_ASTC_8x5_SRGB_BLOCK}, // ASTC_2D_8X5_SRGB
{VK_FORMAT_ASTC_5x4_SRGB_BLOCK}, // ASTC_2D_5X4_SRGB
{VK_FORMAT_ASTC_5x5_UNORM_BLOCK}, // ASTC_2D_5X5_UNORM
{VK_FORMAT_ASTC_5x5_SRGB_BLOCK}, // ASTC_2D_5X5_SRGB
{VK_FORMAT_ASTC_10x8_UNORM_BLOCK}, // ASTC_2D_10X8_UNORM
{VK_FORMAT_ASTC_10x8_SRGB_BLOCK}, // ASTC_2D_10X8_SRGB
{VK_FORMAT_ASTC_6x6_UNORM_BLOCK}, // ASTC_2D_6X6_UNORM
{VK_FORMAT_ASTC_6x6_SRGB_BLOCK}, // ASTC_2D_6X6_SRGB
{VK_FORMAT_ASTC_10x6_UNORM_BLOCK}, // ASTC_2D_10X6_UNORM
{VK_FORMAT_ASTC_10x6_SRGB_BLOCK}, // ASTC_2D_10X6_SRGB
{VK_FORMAT_ASTC_10x5_UNORM_BLOCK}, // ASTC_2D_10X5_UNORM
{VK_FORMAT_ASTC_10x5_SRGB_BLOCK}, // ASTC_2D_10X5_SRGB
{VK_FORMAT_ASTC_10x10_UNORM_BLOCK}, // ASTC_2D_10X10_UNORM
{VK_FORMAT_ASTC_10x10_SRGB_BLOCK}, // ASTC_2D_10X10_SRGB
{VK_FORMAT_ASTC_12x10_UNORM_BLOCK}, // ASTC_2D_12X10_UNORM
{VK_FORMAT_ASTC_12x10_SRGB_BLOCK}, // ASTC_2D_12X10_SRGB
{VK_FORMAT_ASTC_12x12_UNORM_BLOCK}, // ASTC_2D_12X12_UNORM
{VK_FORMAT_ASTC_12x12_SRGB_BLOCK}, // ASTC_2D_12X12_SRGB
{VK_FORMAT_ASTC_8x6_UNORM_BLOCK}, // ASTC_2D_8X6_UNORM
{VK_FORMAT_ASTC_8x6_SRGB_BLOCK}, // ASTC_2D_8X6_SRGB
{VK_FORMAT_ASTC_6x5_UNORM_BLOCK}, // ASTC_2D_6X5_UNORM
{VK_FORMAT_ASTC_6x5_SRGB_BLOCK}, // ASTC_2D_6X5_SRGB
{VK_FORMAT_E5B9G9R9_UFLOAT_PACK32}, // E5B9G9R9_FLOAT
// Depth formats
{VK_FORMAT_D32_SFLOAT, Attachable}, // D32_FLOAT
{VK_FORMAT_D16_UNORM, Attachable}, // D16_UNORM
{VK_FORMAT_X8_D24_UNORM_PACK32, Attachable}, // X8_D24_UNORM
// Stencil formats
{VK_FORMAT_S8_UINT, Attachable}, // S8_UINT
// DepthStencil formats
{VK_FORMAT_D24_UNORM_S8_UINT, Attachable}, // D24_UNORM_S8_UINT
{VK_FORMAT_D24_UNORM_S8_UINT, Attachable}, // S8_UINT_D24_UNORM (emulated)
{VK_FORMAT_D32_SFLOAT_S8_UINT, Attachable}, // D32_FLOAT_S8_UINT
VkFormat format{}; ///< Vulkan format
s32 usage = 0; ///< Describes image format usage
};
static_assert(std::size(tex_format_tuples) == VideoCore::Surface::MaxPixelFormat);
constexpr bool IsZetaFormat(PixelFormat pixel_format) {
return pixel_format >= PixelFormat::MaxColorFormat &&
pixel_format < PixelFormat::MaxDepthStencilFormat;
return pixel_format >= PixelFormat::MaxColorFormat && pixel_format < PixelFormat::MaxDepthStencilFormat;
}
} // Anonymous namespace
FormatInfo SurfaceFormat(const Device& device, FormatType format_type, bool with_srgb,
PixelFormat pixel_format) {
ASSERT(static_cast<size_t>(pixel_format) < std::size(tex_format_tuples));
FormatTuple tuple = tex_format_tuples[static_cast<size_t>(pixel_format)];
FormatInfo SurfaceFormat(const Device& device, FormatType format_type, bool with_srgb, PixelFormat pixel_format) {
u32 const usage_attachable = 1 << 0;
u32 const usage_storage = 1 << 1;
FormatTuple tuple;
switch (pixel_format) {
#define SURFACE_FORMAT_LIST \
SURFACE_FORMAT_ELEM(VK_FORMAT_A8B8G8R8_UNORM_PACK32, usage_attachable | usage_storage, A8B8G8R8_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_A8B8G8R8_SNORM_PACK32, usage_attachable | usage_storage, A8B8G8R8_SNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_A8B8G8R8_SINT_PACK32, usage_attachable | usage_storage, A8B8G8R8_SINT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_A8B8G8R8_UINT_PACK32, usage_attachable | usage_storage, A8B8G8R8_UINT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R5G6B5_UNORM_PACK16, usage_attachable, R5G6B5_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_B5G6R5_UNORM_PACK16, 0, B5G6R5_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_A1R5G5B5_UNORM_PACK16, usage_attachable, A1R5G5B5_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_A2B10G10R10_UNORM_PACK32, usage_attachable | usage_storage, A2B10G10R10_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_A2B10G10R10_UINT_PACK32, usage_attachable | usage_storage, A2B10G10R10_UINT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_A2R10G10B10_UNORM_PACK32, usage_attachable, A2R10G10B10_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_A1R5G5B5_UNORM_PACK16, usage_attachable, A1B5G5R5_UNORM) /*flipped with swizzle*/ \
SURFACE_FORMAT_ELEM(VK_FORMAT_R5G5B5A1_UNORM_PACK16, 0, A5B5G5R1_UNORM) /*specially swizzled*/ \
SURFACE_FORMAT_ELEM(VK_FORMAT_R8_UNORM, usage_attachable | usage_storage, R8_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R8_SNORM, usage_attachable | usage_storage, R8_SNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R8_SINT, usage_attachable | usage_storage, R8_SINT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R8_UINT, usage_attachable | usage_storage, R8_UINT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R16G16B16A16_SFLOAT, usage_attachable | usage_storage, R16G16B16A16_FLOAT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R16G16B16A16_UNORM, usage_attachable | usage_storage, R16G16B16A16_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R16G16B16A16_SNORM, usage_attachable | usage_storage, R16G16B16A16_SNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R16G16B16A16_SINT, usage_attachable | usage_storage, R16G16B16A16_SINT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R16G16B16A16_UINT, usage_attachable | usage_storage, R16G16B16A16_UINT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_B10G11R11_UFLOAT_PACK32, usage_attachable | usage_storage, B10G11R11_FLOAT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R32G32B32A32_UINT, usage_attachable | usage_storage, R32G32B32A32_UINT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_BC1_RGBA_UNORM_BLOCK, 0, BC1_RGBA_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_BC2_UNORM_BLOCK, 0, BC2_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_BC3_UNORM_BLOCK, 0, BC3_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_BC4_UNORM_BLOCK, 0, BC4_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_BC4_SNORM_BLOCK, 0, BC4_SNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_BC5_UNORM_BLOCK, 0, BC5_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_BC5_SNORM_BLOCK, 0, BC5_SNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_BC7_UNORM_BLOCK, 0, BC7_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_BC6H_UFLOAT_BLOCK, 0, BC6H_UFLOAT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_BC6H_SFLOAT_BLOCK, 0, BC6H_SFLOAT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_4x4_UNORM_BLOCK, 0, ASTC_2D_4X4_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_B8G8R8A8_UNORM, usage_attachable | usage_storage, B8G8R8A8_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R32G32B32A32_SFLOAT, usage_attachable | usage_storage, R32G32B32A32_FLOAT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R32G32B32A32_SINT, usage_attachable | usage_storage, R32G32B32A32_SINT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R32G32_SFLOAT, usage_attachable | usage_storage, R32G32_FLOAT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R32G32_SINT, usage_attachable | usage_storage, R32G32_SINT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R32_SFLOAT, usage_attachable | usage_storage, R32_FLOAT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R16_SFLOAT, usage_attachable | usage_storage, R16_FLOAT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R16_UNORM, usage_attachable | usage_storage, R16_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R16_SNORM, usage_attachable | usage_storage, R16_SNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R16_UINT, usage_attachable | usage_storage, R16_UINT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R16_SINT, usage_attachable | usage_storage, R16_SINT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R16G16_UNORM, usage_attachable | usage_storage, R16G16_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R16G16_SFLOAT, usage_attachable | usage_storage, R16G16_FLOAT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R16G16_UINT, usage_attachable | usage_storage, R16G16_UINT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R16G16_SINT, usage_attachable | usage_storage, R16G16_SINT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R16G16_SNORM, usage_attachable | usage_storage, R16G16_SNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R32G32B32_SFLOAT, 0, R32G32B32_FLOAT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_A8B8G8R8_SRGB_PACK32, usage_attachable, A8B8G8R8_SRGB) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R8G8_UNORM, usage_attachable | usage_storage, R8G8_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R8G8_SNORM, usage_attachable | usage_storage, R8G8_SNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R8G8_SINT, usage_attachable | usage_storage, R8G8_SINT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R8G8_UINT, usage_attachable | usage_storage, R8G8_UINT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R32G32_UINT, usage_attachable | usage_storage, R32G32_UINT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R16G16B16A16_SFLOAT, usage_attachable | usage_storage, R16G16B16X16_FLOAT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R32_UINT, usage_attachable | usage_storage, R32_UINT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R32_SINT, usage_attachable | usage_storage, R32_SINT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_8x8_UNORM_BLOCK, 0, ASTC_2D_8X8_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_8x5_UNORM_BLOCK, 0, ASTC_2D_8X5_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_5x4_UNORM_BLOCK, 0, ASTC_2D_5X4_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_B8G8R8A8_SRGB, usage_attachable, B8G8R8A8_SRGB) \
SURFACE_FORMAT_ELEM(VK_FORMAT_BC1_RGBA_SRGB_BLOCK, 0, BC1_RGBA_SRGB) \
SURFACE_FORMAT_ELEM(VK_FORMAT_BC2_SRGB_BLOCK, 0, BC2_SRGB) \
SURFACE_FORMAT_ELEM(VK_FORMAT_BC3_SRGB_BLOCK, 0, BC3_SRGB) \
SURFACE_FORMAT_ELEM(VK_FORMAT_BC7_SRGB_BLOCK, 0, BC7_SRGB) \
SURFACE_FORMAT_ELEM(VK_FORMAT_A4B4G4R4_UNORM_PACK16_EXT, 0, A4B4G4R4_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_R4G4_UNORM_PACK8, 0, G4R4_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_4x4_SRGB_BLOCK, 0, ASTC_2D_4X4_SRGB) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_8x8_SRGB_BLOCK, 0, ASTC_2D_8X8_SRGB) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_8x5_SRGB_BLOCK, 0, ASTC_2D_8X5_SRGB) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_5x4_SRGB_BLOCK, 0, ASTC_2D_5X4_SRGB) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_5x5_UNORM_BLOCK, 0, ASTC_2D_5X5_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_5x5_SRGB_BLOCK, 0, ASTC_2D_5X5_SRGB) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_10x8_UNORM_BLOCK, 0, ASTC_2D_10X8_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_10x8_SRGB_BLOCK, 0, ASTC_2D_10X8_SRGB) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_6x6_UNORM_BLOCK, 0, ASTC_2D_6X6_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_6x6_SRGB_BLOCK, 0, ASTC_2D_6X6_SRGB) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_10x6_UNORM_BLOCK, 0, ASTC_2D_10X6_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_10x6_SRGB_BLOCK, 0, ASTC_2D_10X6_SRGB) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_10x5_UNORM_BLOCK, 0, ASTC_2D_10X5_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_10x5_SRGB_BLOCK, 0, ASTC_2D_10X5_SRGB) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_10x10_UNORM_BLOCK, 0, ASTC_2D_10X10_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_10x10_SRGB_BLOCK, 0, ASTC_2D_10X10_SRGB) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_12x10_UNORM_BLOCK, 0, ASTC_2D_12X10_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_12x10_SRGB_BLOCK, 0, ASTC_2D_12X10_SRGB) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_12x12_UNORM_BLOCK, 0, ASTC_2D_12X12_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_12x12_SRGB_BLOCK, 0, ASTC_2D_12X12_SRGB) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_8x6_UNORM_BLOCK, 0, ASTC_2D_8X6_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_8x6_SRGB_BLOCK, 0, ASTC_2D_8X6_SRGB) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_6x5_UNORM_BLOCK, 0, ASTC_2D_6X5_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_ASTC_6x5_SRGB_BLOCK, 0, ASTC_2D_6X5_SRGB) \
SURFACE_FORMAT_ELEM(VK_FORMAT_E5B9G9R9_UFLOAT_PACK32, 0, E5B9G9R9_FLOAT) \
/* Depth formats */ \
SURFACE_FORMAT_ELEM(VK_FORMAT_D32_SFLOAT, usage_attachable, D32_FLOAT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_D16_UNORM, usage_attachable, D16_UNORM) \
SURFACE_FORMAT_ELEM(VK_FORMAT_X8_D24_UNORM_PACK32, usage_attachable, X8_D24_UNORM) \
/* Stencil formats */ \
SURFACE_FORMAT_ELEM(VK_FORMAT_S8_UINT, usage_attachable, S8_UINT) \
/* DepthStencil formats */ \
SURFACE_FORMAT_ELEM(VK_FORMAT_D24_UNORM_S8_UINT, usage_attachable, D24_UNORM_S8_UINT) \
SURFACE_FORMAT_ELEM(VK_FORMAT_D24_UNORM_S8_UINT, usage_attachable, S8_UINT_D24_UNORM) /* emulated */ \
SURFACE_FORMAT_ELEM(VK_FORMAT_D32_SFLOAT_S8_UINT, usage_attachable, D32_FLOAT_S8_UINT)
#define SURFACE_FORMAT_ELEM(res, usage, pixel) case PixelFormat::pixel: tuple = {res, usage}; break;
SURFACE_FORMAT_LIST
default: UNREACHABLE_MSG("unknown format {}", pixel_format);
#undef SURFACE_FORMAT_ELEM
#undef SURFACE_FORMAT_LIST
}
bool const is_srgb = with_srgb && VideoCore::Surface::IsPixelFormatSRGB(pixel_format);
// Transcode on hardware that doesn't support ASTC natively
if (!device.IsOptimalAstcSupported() && VideoCore::Surface::IsPixelFormatASTC(pixel_format)) {
const bool is_srgb = with_srgb && VideoCore::Surface::IsPixelFormatSRGB(pixel_format);
switch (Settings::values.astc_recompression.GetValue()) {
case Settings::AstcRecompression::Uncompressed:
if (is_srgb) {
tuple.format = VK_FORMAT_A8B8G8R8_SRGB_PACK32;
} else {
tuple.format = VK_FORMAT_A8B8G8R8_UNORM_PACK32;
tuple.usage |= Storage;
tuple.usage |= usage_storage;
}
break;
case Settings::AstcRecompression::Bc1:
@@ -259,7 +252,6 @@ FormatInfo SurfaceFormat(const Device& device, FormatType format_type, bool with
}
// Transcode on hardware that doesn't support BCn natively
if (!device.IsOptimalBcnSupported() && VideoCore::Surface::IsPixelFormatBCn(pixel_format)) {
const bool is_srgb = with_srgb && VideoCore::Surface::IsPixelFormatSRGB(pixel_format);
if (pixel_format == PixelFormat::BC4_SNORM) {
tuple.format = VK_FORMAT_R8_SNORM;
} else if (pixel_format == PixelFormat::BC4_UNORM) {
@@ -268,8 +260,7 @@ FormatInfo SurfaceFormat(const Device& device, FormatType format_type, bool with
tuple.format = VK_FORMAT_R8G8_SNORM;
} else if (pixel_format == PixelFormat::BC5_UNORM) {
tuple.format = VK_FORMAT_R8G8_UNORM;
} else if (pixel_format == PixelFormat::BC6H_SFLOAT ||
pixel_format == PixelFormat::BC6H_UFLOAT) {
} else if (pixel_format == PixelFormat::BC6H_SFLOAT || pixel_format == PixelFormat::BC6H_UFLOAT) {
tuple.format = VK_FORMAT_R16G16B16A16_SFLOAT;
} else if (is_srgb) {
tuple.format = VK_FORMAT_A8B8G8R8_SRGB_PACK32;
@@ -277,9 +268,8 @@ FormatInfo SurfaceFormat(const Device& device, FormatType format_type, bool with
tuple.format = VK_FORMAT_A8B8G8R8_UNORM_PACK32;
}
}
const bool attachable = (tuple.usage & Attachable) != 0;
const bool storage = (tuple.usage & Storage) != 0;
bool const attachable = (tuple.usage & usage_attachable) != 0;
bool const storage = (tuple.usage & usage_storage) != 0;
VkFormatFeatureFlags usage{};
switch (format_type) {
case FormatType::Buffer:
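One check the old array had that the switch version loses is the `static_assert` comparing the number of entries against `MaxPixelFormat`. If that guard is wanted back, the same list can be expanded once more to count its elements; a hedged sketch, which assumes `SURFACE_FORMAT_LIST` is still defined at that point (in the code above it is `#undef`'d right after the switch):
```c++
// Hypothetical: expand every entry to "+ 1" and sum the results.
#define SURFACE_FORMAT_ELEM(format, usage, name) + 1
constexpr std::size_t NumSurfaceFormats = 0 SURFACE_FORMAT_LIST;
#undef SURFACE_FORMAT_ELEM

static_assert(NumSurfaceFormats == VideoCore::Surface::MaxPixelFormat,
              "SURFACE_FORMAT_LIST is out of sync with PixelFormat");
```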

View File

@@ -1,3 +1,5 @@
// SPDX-FileCopyrightText: Copyright 2025 Eden Emulator Project
// SPDX-License-Identifier: GPL-3.0-or-later
// SPDX-FileCopyrightText: 2014 Citra Emulator Project
// SPDX-License-Identifier: GPL-2.0-or-later
@@ -13,127 +15,125 @@
namespace VideoCore::Surface {
#define PIXEL_FORMAT_LIST \
PIXEL_FORMAT_ELEM(A8B8G8R8_UNORM, 1, 1, 32) \
PIXEL_FORMAT_ELEM(A8B8G8R8_SNORM, 1, 1, 32) \
PIXEL_FORMAT_ELEM(A8B8G8R8_SINT, 1, 1, 32) \
PIXEL_FORMAT_ELEM(A8B8G8R8_UINT, 1, 1, 32) \
PIXEL_FORMAT_ELEM(R5G6B5_UNORM, 1, 1, 16) \
PIXEL_FORMAT_ELEM(B5G6R5_UNORM, 1, 1, 16) \
PIXEL_FORMAT_ELEM(A1R5G5B5_UNORM, 1, 1, 16) \
PIXEL_FORMAT_ELEM(A2B10G10R10_UNORM, 1, 1, 32) \
PIXEL_FORMAT_ELEM(A2B10G10R10_UINT, 1, 1, 32) \
PIXEL_FORMAT_ELEM(A2R10G10B10_UNORM, 1, 1, 32) \
PIXEL_FORMAT_ELEM(A1B5G5R5_UNORM, 1, 1, 16) \
PIXEL_FORMAT_ELEM(A5B5G5R1_UNORM, 1, 1, 16) \
PIXEL_FORMAT_ELEM(R8_UNORM, 1, 1, 8) \
PIXEL_FORMAT_ELEM(R8_SNORM, 1, 1, 8) \
PIXEL_FORMAT_ELEM(R8_SINT, 1, 1, 8) \
PIXEL_FORMAT_ELEM(R8_UINT, 1, 1, 8) \
PIXEL_FORMAT_ELEM(R16G16B16A16_FLOAT, 1, 1, 64) \
PIXEL_FORMAT_ELEM(R16G16B16A16_UNORM, 1, 1, 64) \
PIXEL_FORMAT_ELEM(R16G16B16A16_SNORM, 1, 1, 64) \
PIXEL_FORMAT_ELEM(R16G16B16A16_SINT, 1, 1, 64) \
PIXEL_FORMAT_ELEM(R16G16B16A16_UINT, 1, 1, 64) \
PIXEL_FORMAT_ELEM(B10G11R11_FLOAT, 1, 1, 32) \
PIXEL_FORMAT_ELEM(R32G32B32A32_UINT, 1, 1, 128) \
PIXEL_FORMAT_ELEM(BC1_RGBA_UNORM, 4, 4, 64) \
PIXEL_FORMAT_ELEM(BC2_UNORM, 4, 4, 128) \
PIXEL_FORMAT_ELEM(BC3_UNORM, 4, 4, 128) \
PIXEL_FORMAT_ELEM(BC4_UNORM, 4, 4, 64) \
PIXEL_FORMAT_ELEM(BC4_SNORM, 4, 4, 64) \
PIXEL_FORMAT_ELEM(BC5_UNORM, 4, 4, 128) \
PIXEL_FORMAT_ELEM(BC5_SNORM, 4, 4, 128) \
PIXEL_FORMAT_ELEM(BC7_UNORM, 4, 4, 128) \
PIXEL_FORMAT_ELEM(BC6H_UFLOAT, 4, 4, 128) \
PIXEL_FORMAT_ELEM(BC6H_SFLOAT, 4, 4, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_4X4_UNORM, 4, 4, 128) \
PIXEL_FORMAT_ELEM(B8G8R8A8_UNORM, 1, 1, 32) \
PIXEL_FORMAT_ELEM(R32G32B32A32_FLOAT, 1, 1, 128) \
PIXEL_FORMAT_ELEM(R32G32B32A32_SINT, 1, 1, 128) \
PIXEL_FORMAT_ELEM(R32G32_FLOAT, 1, 1, 64) \
PIXEL_FORMAT_ELEM(R32G32_SINT, 1, 1, 64) \
PIXEL_FORMAT_ELEM(R32_FLOAT, 1, 1, 32) \
PIXEL_FORMAT_ELEM(R16_FLOAT, 1, 1, 16) \
PIXEL_FORMAT_ELEM(R16_UNORM, 1, 1, 16) \
PIXEL_FORMAT_ELEM(R16_SNORM, 1, 1, 16) \
PIXEL_FORMAT_ELEM(R16_UINT, 1, 1, 16) \
PIXEL_FORMAT_ELEM(R16_SINT, 1, 1, 16) \
PIXEL_FORMAT_ELEM(R16G16_UNORM, 1, 1, 32) \
PIXEL_FORMAT_ELEM(R16G16_FLOAT, 1, 1, 32) \
PIXEL_FORMAT_ELEM(R16G16_UINT, 1, 1, 32) \
PIXEL_FORMAT_ELEM(R16G16_SINT, 1, 1, 32) \
PIXEL_FORMAT_ELEM(R16G16_SNORM, 1, 1, 32) \
PIXEL_FORMAT_ELEM(R32G32B32_FLOAT, 1, 1, 96) \
PIXEL_FORMAT_ELEM(A8B8G8R8_SRGB, 1, 1, 32) \
PIXEL_FORMAT_ELEM(R8G8_UNORM, 1, 1, 16) \
PIXEL_FORMAT_ELEM(R8G8_SNORM, 1, 1, 16) \
PIXEL_FORMAT_ELEM(R8G8_SINT, 1, 1, 16) \
PIXEL_FORMAT_ELEM(R8G8_UINT, 1, 1, 16) \
PIXEL_FORMAT_ELEM(R32G32_UINT, 1, 1, 64) \
PIXEL_FORMAT_ELEM(R16G16B16X16_FLOAT, 1, 1, 64) \
PIXEL_FORMAT_ELEM(R32_UINT, 1, 1, 32) \
PIXEL_FORMAT_ELEM(R32_SINT, 1, 1, 32) \
PIXEL_FORMAT_ELEM(ASTC_2D_8X8_UNORM, 8, 8, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_8X5_UNORM, 8, 5, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_5X4_UNORM, 5, 4, 128) \
PIXEL_FORMAT_ELEM(B8G8R8A8_SRGB, 1, 1, 32) \
PIXEL_FORMAT_ELEM(BC1_RGBA_SRGB, 4, 4, 64) \
PIXEL_FORMAT_ELEM(BC2_SRGB, 4, 4, 128) \
PIXEL_FORMAT_ELEM(BC3_SRGB, 4, 4, 128) \
PIXEL_FORMAT_ELEM(BC7_SRGB, 4, 4, 128) \
PIXEL_FORMAT_ELEM(A4B4G4R4_UNORM, 1, 1, 16) \
PIXEL_FORMAT_ELEM(G4R4_UNORM, 1, 1, 8) \
PIXEL_FORMAT_ELEM(ASTC_2D_4X4_SRGB, 4, 4, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_8X8_SRGB, 8, 8, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_8X5_SRGB, 8, 5, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_5X4_SRGB, 5, 4, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_5X5_UNORM, 5, 5, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_5X5_SRGB, 5, 5, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_10X8_UNORM, 10, 8, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_10X8_SRGB, 10, 8, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_6X6_UNORM, 6, 6, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_6X6_SRGB, 6, 6, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_10X6_UNORM, 10, 6, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_10X6_SRGB, 10, 6, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_10X5_UNORM, 10, 5, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_10X5_SRGB, 10, 5, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_10X10_UNORM, 10, 10, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_10X10_SRGB, 10, 10, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_12X10_UNORM, 12, 10, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_12X10_SRGB, 12, 10, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_12X12_UNORM, 12, 12, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_12X12_SRGB, 12, 12, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_8X6_UNORM, 8, 6, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_8X6_SRGB, 8, 6, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_6X5_UNORM, 6, 5, 128) \
PIXEL_FORMAT_ELEM(ASTC_2D_6X5_SRGB, 6, 5, 128) \
PIXEL_FORMAT_ELEM(E5B9G9R9_FLOAT, 1, 1, 32) \
/* Depth formats */ \
PIXEL_FORMAT_ELEM(D32_FLOAT, 1, 1, 32) \
PIXEL_FORMAT_ELEM(D16_UNORM, 1, 1, 16) \
PIXEL_FORMAT_ELEM(X8_D24_UNORM, 1, 1, 32) \
/* Stencil formats */ \
PIXEL_FORMAT_ELEM(S8_UINT, 1, 1, 8) \
/* DepthStencil formats */ \
PIXEL_FORMAT_ELEM(D24_UNORM_S8_UINT, 1, 1, 32) \
PIXEL_FORMAT_ELEM(S8_UINT_D24_UNORM, 1, 1, 32) \
PIXEL_FORMAT_ELEM(D32_FLOAT_S8_UINT, 1, 1, 64)
enum class PixelFormat {
A8B8G8R8_UNORM,
A8B8G8R8_SNORM,
A8B8G8R8_SINT,
A8B8G8R8_UINT,
R5G6B5_UNORM,
B5G6R5_UNORM,
A1R5G5B5_UNORM,
A2B10G10R10_UNORM,
A2B10G10R10_UINT,
A2R10G10B10_UNORM,
A1B5G5R5_UNORM,
A5B5G5R1_UNORM,
R8_UNORM,
R8_SNORM,
R8_SINT,
R8_UINT,
R16G16B16A16_FLOAT,
R16G16B16A16_UNORM,
R16G16B16A16_SNORM,
R16G16B16A16_SINT,
R16G16B16A16_UINT,
B10G11R11_FLOAT,
R32G32B32A32_UINT,
BC1_RGBA_UNORM,
BC2_UNORM,
BC3_UNORM,
BC4_UNORM,
BC4_SNORM,
BC5_UNORM,
BC5_SNORM,
BC7_UNORM,
BC6H_UFLOAT,
BC6H_SFLOAT,
ASTC_2D_4X4_UNORM,
B8G8R8A8_UNORM,
R32G32B32A32_FLOAT,
R32G32B32A32_SINT,
R32G32_FLOAT,
R32G32_SINT,
R32_FLOAT,
R16_FLOAT,
R16_UNORM,
R16_SNORM,
R16_UINT,
R16_SINT,
R16G16_UNORM,
R16G16_FLOAT,
R16G16_UINT,
R16G16_SINT,
R16G16_SNORM,
R32G32B32_FLOAT,
A8B8G8R8_SRGB,
R8G8_UNORM,
R8G8_SNORM,
R8G8_SINT,
R8G8_UINT,
R32G32_UINT,
R16G16B16X16_FLOAT,
R32_UINT,
R32_SINT,
ASTC_2D_8X8_UNORM,
ASTC_2D_8X5_UNORM,
ASTC_2D_5X4_UNORM,
B8G8R8A8_SRGB,
BC1_RGBA_SRGB,
BC2_SRGB,
BC3_SRGB,
BC7_SRGB,
A4B4G4R4_UNORM,
G4R4_UNORM,
ASTC_2D_4X4_SRGB,
ASTC_2D_8X8_SRGB,
ASTC_2D_8X5_SRGB,
ASTC_2D_5X4_SRGB,
ASTC_2D_5X5_UNORM,
ASTC_2D_5X5_SRGB,
ASTC_2D_10X8_UNORM,
ASTC_2D_10X8_SRGB,
ASTC_2D_6X6_UNORM,
ASTC_2D_6X6_SRGB,
ASTC_2D_10X6_UNORM,
ASTC_2D_10X6_SRGB,
ASTC_2D_10X5_UNORM,
ASTC_2D_10X5_SRGB,
ASTC_2D_10X10_UNORM,
ASTC_2D_10X10_SRGB,
ASTC_2D_12X10_UNORM,
ASTC_2D_12X10_SRGB,
ASTC_2D_12X12_UNORM,
ASTC_2D_12X12_SRGB,
ASTC_2D_8X6_UNORM,
ASTC_2D_8X6_SRGB,
ASTC_2D_6X5_UNORM,
ASTC_2D_6X5_SRGB,
E5B9G9R9_FLOAT,
MaxColorFormat,
// Depth formats
D32_FLOAT = MaxColorFormat,
D16_UNORM,
X8_D24_UNORM,
MaxDepthFormat,
// Stencil formats
S8_UINT = MaxDepthFormat,
MaxStencilFormat,
// DepthStencil formats
D24_UNORM_S8_UINT = MaxStencilFormat,
S8_UINT_D24_UNORM,
D32_FLOAT_S8_UINT,
MaxDepthStencilFormat,
#define PIXEL_FORMAT_ELEM(name, ...) name,
PIXEL_FORMAT_LIST
#undef PIXEL_FORMAT_ELEM
MaxColorFormat = D32_FLOAT,
MaxDepthFormat = S8_UINT,
MaxStencilFormat = D24_UNORM_S8_UINT,
MaxDepthStencilFormat = u8(D32_FLOAT_S8_UINT) + 1,
Max = MaxDepthStencilFormat,
Invalid = 255,
};
constexpr std::size_t MaxPixelFormat = static_cast<std::size_t>(PixelFormat::Max);
constexpr std::size_t MaxPixelFormat = std::size_t(PixelFormat::Max);
enum class SurfaceType {
ColorTexture = 0,
@@ -154,371 +154,55 @@ enum class SurfaceTarget {
TextureCubeArray,
};
constexpr std::array<u8, MaxPixelFormat> BLOCK_WIDTH_TABLE = {{
1, // A8B8G8R8_UNORM
1, // A8B8G8R8_SNORM
1, // A8B8G8R8_SINT
1, // A8B8G8R8_UINT
1, // R5G6B5_UNORM
1, // B5G6R5_UNORM
1, // A1R5G5B5_UNORM
1, // A2B10G10R10_UNORM
1, // A2B10G10R10_UINT
1, // A2R10G10B10_UNORM
1, // A1B5G5R5_UNORM
1, // A5B5G5R1_UNORM
1, // R8_UNORM
1, // R8_SNORM
1, // R8_SINT
1, // R8_UINT
1, // R16G16B16A16_FLOAT
1, // R16G16B16A16_UNORM
1, // R16G16B16A16_SNORM
1, // R16G16B16A16_SINT
1, // R16G16B16A16_UINT
1, // B10G11R11_FLOAT
1, // R32G32B32A32_UINT
4, // BC1_RGBA_UNORM
4, // BC2_UNORM
4, // BC3_UNORM
4, // BC4_UNORM
4, // BC4_SNORM
4, // BC5_UNORM
4, // BC5_SNORM
4, // BC7_UNORM
4, // BC6H_UFLOAT
4, // BC6H_SFLOAT
4, // ASTC_2D_4X4_UNORM
1, // B8G8R8A8_UNORM
1, // R32G32B32A32_FLOAT
1, // R32G32B32A32_SINT
1, // R32G32_FLOAT
1, // R32G32_SINT
1, // R32_FLOAT
1, // R16_FLOAT
1, // R16_UNORM
1, // R16_SNORM
1, // R16_UINT
1, // R16_SINT
1, // R16G16_UNORM
1, // R16G16_FLOAT
1, // R16G16_UINT
1, // R16G16_SINT
1, // R16G16_SNORM
1, // R32G32B32_FLOAT
1, // A8B8G8R8_SRGB
1, // R8G8_UNORM
1, // R8G8_SNORM
1, // R8G8_SINT
1, // R8G8_UINT
1, // R32G32_UINT
1, // R16G16B16X16_FLOAT
1, // R32_UINT
1, // R32_SINT
8, // ASTC_2D_8X8_UNORM
8, // ASTC_2D_8X5_UNORM
5, // ASTC_2D_5X4_UNORM
1, // B8G8R8A8_SRGB
4, // BC1_RGBA_SRGB
4, // BC2_SRGB
4, // BC3_SRGB
4, // BC7_SRGB
1, // A4B4G4R4_UNORM
1, // G4R4_UNORM
4, // ASTC_2D_4X4_SRGB
8, // ASTC_2D_8X8_SRGB
8, // ASTC_2D_8X5_SRGB
5, // ASTC_2D_5X4_SRGB
5, // ASTC_2D_5X5_UNORM
5, // ASTC_2D_5X5_SRGB
10, // ASTC_2D_10X8_UNORM
10, // ASTC_2D_10X8_SRGB
6, // ASTC_2D_6X6_UNORM
6, // ASTC_2D_6X6_SRGB
10, // ASTC_2D_10X6_UNORM
10, // ASTC_2D_10X6_SRGB
10, // ASTC_2D_10X5_UNORM
10, // ASTC_2D_10X5_SRGB
10, // ASTC_2D_10X10_UNORM
10, // ASTC_2D_10X10_SRGB
12, // ASTC_2D_12X10_UNORM
12, // ASTC_2D_12X10_SRGB
12, // ASTC_2D_12X12_UNORM
12, // ASTC_2D_12X12_SRGB
8, // ASTC_2D_8X6_UNORM
8, // ASTC_2D_8X6_SRGB
6, // ASTC_2D_6X5_UNORM
6, // ASTC_2D_6X5_SRGB
1, // E5B9G9R9_FLOAT
1, // D32_FLOAT
1, // D16_UNORM
1, // X8_D24_UNORM
1, // S8_UINT
1, // D24_UNORM_S8_UINT
1, // S8_UINT_D24_UNORM
1, // D32_FLOAT_S8_UINT
}};
constexpr u32 DefaultBlockWidth(PixelFormat format) {
ASSERT(static_cast<std::size_t>(format) < BLOCK_WIDTH_TABLE.size());
return BLOCK_WIDTH_TABLE[static_cast<std::size_t>(format)];
constexpr u32 DefaultBlockWidth(PixelFormat format) noexcept {
switch (format) {
#define PIXEL_FORMAT_ELEM(name, width, height, bits) case PixelFormat::name: return width;
PIXEL_FORMAT_LIST
#undef PIXEL_FORMAT_ELEM
default: UNREACHABLE();
}
}
constexpr std::array<u8, MaxPixelFormat> BLOCK_HEIGHT_TABLE = {{
1, // A8B8G8R8_UNORM
1, // A8B8G8R8_SNORM
1, // A8B8G8R8_SINT
1, // A8B8G8R8_UINT
1, // R5G6B5_UNORM
1, // B5G6R5_UNORM
1, // A1R5G5B5_UNORM
1, // A2B10G10R10_UNORM
1, // A2B10G10R10_UINT
1, // A2R10G10B10_UNORM
1, // A1B5G5R5_UNORM
1, // A5B5G5R1_UNORM
1, // R8_UNORM
1, // R8_SNORM
1, // R8_SINT
1, // R8_UINT
1, // R16G16B16A16_FLOAT
1, // R16G16B16A16_UNORM
1, // R16G16B16A16_SNORM
1, // R16G16B16A16_SINT
1, // R16G16B16A16_UINT
1, // B10G11R11_FLOAT
1, // R32G32B32A32_UINT
4, // BC1_RGBA_UNORM
4, // BC2_UNORM
4, // BC3_UNORM
4, // BC4_UNORM
4, // BC4_SNORM
4, // BC5_UNORM
4, // BC5_SNORM
4, // BC7_UNORM
4, // BC6H_UFLOAT
4, // BC6H_SFLOAT
4, // ASTC_2D_4X4_UNORM
1, // B8G8R8A8_UNORM
1, // R32G32B32A32_FLOAT
1, // R32G32B32A32_SINT
1, // R32G32_FLOAT
1, // R32G32_SINT
1, // R32_FLOAT
1, // R16_FLOAT
1, // R16_UNORM
1, // R16_SNORM
1, // R16_UINT
1, // R16_SINT
1, // R16G16_UNORM
1, // R16G16_FLOAT
1, // R16G16_UINT
1, // R16G16_SINT
1, // R16G16_SNORM
1, // R32G32B32_FLOAT
1, // A8B8G8R8_SRGB
1, // R8G8_UNORM
1, // R8G8_SNORM
1, // R8G8_SINT
1, // R8G8_UINT
1, // R32G32_UINT
1, // R16G16B16X16_FLOAT
1, // R32_UINT
1, // R32_SINT
8, // ASTC_2D_8X8_UNORM
5, // ASTC_2D_8X5_UNORM
4, // ASTC_2D_5X4_UNORM
1, // B8G8R8A8_SRGB
4, // BC1_RGBA_SRGB
4, // BC2_SRGB
4, // BC3_SRGB
4, // BC7_SRGB
1, // A4B4G4R4_UNORM
1, // G4R4_UNORM
4, // ASTC_2D_4X4_SRGB
8, // ASTC_2D_8X8_SRGB
5, // ASTC_2D_8X5_SRGB
4, // ASTC_2D_5X4_SRGB
5, // ASTC_2D_5X5_UNORM
5, // ASTC_2D_5X5_SRGB
8, // ASTC_2D_10X8_UNORM
8, // ASTC_2D_10X8_SRGB
6, // ASTC_2D_6X6_UNORM
6, // ASTC_2D_6X6_SRGB
6, // ASTC_2D_10X6_UNORM
6, // ASTC_2D_10X6_SRGB
5, // ASTC_2D_10X5_UNORM
5, // ASTC_2D_10X5_SRGB
10, // ASTC_2D_10X10_UNORM
10, // ASTC_2D_10X10_SRGB
10, // ASTC_2D_12X10_UNORM
10, // ASTC_2D_12X10_SRGB
12, // ASTC_2D_12X12_UNORM
12, // ASTC_2D_12X12_SRGB
6, // ASTC_2D_8X6_UNORM
6, // ASTC_2D_8X6_SRGB
5, // ASTC_2D_6X5_UNORM
5, // ASTC_2D_6X5_SRGB
1, // E5B9G9R9_FLOAT
1, // D32_FLOAT
1, // D16_UNORM
1, // X8_D24_UNORM
1, // S8_UINT
1, // D24_UNORM_S8_UINT
1, // S8_UINT_D24_UNORM
1, // D32_FLOAT_S8_UINT
}};
constexpr u32 DefaultBlockHeight(PixelFormat format) {
ASSERT(static_cast<std::size_t>(format) < BLOCK_HEIGHT_TABLE.size());
return BLOCK_HEIGHT_TABLE[static_cast<std::size_t>(format)];
constexpr u32 DefaultBlockHeight(PixelFormat format) noexcept {
switch (format) {
#define PIXEL_FORMAT_ELEM(name, width, height, bits) case PixelFormat::name: return height;
PIXEL_FORMAT_LIST
#undef PIXEL_FORMAT_ELEM
default: UNREACHABLE();
}
}
constexpr std::array<u8, MaxPixelFormat> BITS_PER_BLOCK_TABLE = {{
32, // A8B8G8R8_UNORM
32, // A8B8G8R8_SNORM
32, // A8B8G8R8_SINT
32, // A8B8G8R8_UINT
16, // R5G6B5_UNORM
16, // B5G6R5_UNORM
16, // A1R5G5B5_UNORM
32, // A2B10G10R10_UNORM
32, // A2B10G10R10_UINT
32, // A2R10G10B10_UNORM
16, // A1B5G5R5_UNORM
16, // A5B5G5R1_UNORM
8, // R8_UNORM
8, // R8_SNORM
8, // R8_SINT
8, // R8_UINT
64, // R16G16B16A16_FLOAT
64, // R16G16B16A16_UNORM
64, // R16G16B16A16_SNORM
64, // R16G16B16A16_SINT
64, // R16G16B16A16_UINT
32, // B10G11R11_FLOAT
128, // R32G32B32A32_UINT
64, // BC1_RGBA_UNORM
128, // BC2_UNORM
128, // BC3_UNORM
64, // BC4_UNORM
64, // BC4_SNORM
128, // BC5_UNORM
128, // BC5_SNORM
128, // BC7_UNORM
128, // BC6H_UFLOAT
128, // BC6H_SFLOAT
128, // ASTC_2D_4X4_UNORM
32, // B8G8R8A8_UNORM
128, // R32G32B32A32_FLOAT
128, // R32G32B32A32_SINT
64, // R32G32_FLOAT
64, // R32G32_SINT
32, // R32_FLOAT
16, // R16_FLOAT
16, // R16_UNORM
16, // R16_SNORM
16, // R16_UINT
16, // R16_SINT
32, // R16G16_UNORM
32, // R16G16_FLOAT
32, // R16G16_UINT
32, // R16G16_SINT
32, // R16G16_SNORM
96, // R32G32B32_FLOAT
32, // A8B8G8R8_SRGB
16, // R8G8_UNORM
16, // R8G8_SNORM
16, // R8G8_SINT
16, // R8G8_UINT
64, // R32G32_UINT
64, // R16G16B16X16_FLOAT
32, // R32_UINT
32, // R32_SINT
128, // ASTC_2D_8X8_UNORM
128, // ASTC_2D_8X5_UNORM
128, // ASTC_2D_5X4_UNORM
32, // B8G8R8A8_SRGB
64, // BC1_RGBA_SRGB
128, // BC2_SRGB
128, // BC3_SRGB
128, // BC7_SRGB
16, // A4B4G4R4_UNORM
8, // G4R4_UNORM
128, // ASTC_2D_4X4_SRGB
128, // ASTC_2D_8X8_SRGB
128, // ASTC_2D_8X5_SRGB
128, // ASTC_2D_5X4_SRGB
128, // ASTC_2D_5X5_UNORM
128, // ASTC_2D_5X5_SRGB
128, // ASTC_2D_10X8_UNORM
128, // ASTC_2D_10X8_SRGB
128, // ASTC_2D_6X6_UNORM
128, // ASTC_2D_6X6_SRGB
128, // ASTC_2D_10X6_UNORM
128, // ASTC_2D_10X6_SRGB
128, // ASTC_2D_10X5_UNORM
128, // ASTC_2D_10X5_SRGB
128, // ASTC_2D_10X10_UNORM
128, // ASTC_2D_10X10_SRGB
128, // ASTC_2D_12X10_UNORM
128, // ASTC_2D_12X10_SRGB
128, // ASTC_2D_12X12_UNORM
128, // ASTC_2D_12X12_SRGB
128, // ASTC_2D_8X6_UNORM
128, // ASTC_2D_8X6_SRGB
128, // ASTC_2D_6X5_UNORM
128, // ASTC_2D_6X5_SRGB
32, // E5B9G9R9_FLOAT
32, // D32_FLOAT
16, // D16_UNORM
32, // X8_D24_UNORM
8, // S8_UINT
32, // D24_UNORM_S8_UINT
32, // S8_UINT_D24_UNORM
64, // D32_FLOAT_S8_UINT
}};
constexpr u32 BitsPerBlock(PixelFormat format) {
ASSERT(static_cast<std::size_t>(format) < BITS_PER_BLOCK_TABLE.size());
return BITS_PER_BLOCK_TABLE[static_cast<std::size_t>(format)];
constexpr u32 BitsPerBlock(PixelFormat format) noexcept {
switch (format) {
#define PIXEL_FORMAT_ELEM(name, width, height, bits) case PixelFormat::name: return bits;
PIXEL_FORMAT_LIST
#undef PIXEL_FORMAT_ELEM
default: UNREACHABLE();
}
}
#undef PIXEL_FORMAT_LIST
/// Returns the size in bytes of the specified pixel format
constexpr u32 BytesPerBlock(PixelFormat pixel_format) {
return BitsPerBlock(pixel_format) / CHAR_BIT;
}
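As a quick sanity check of the generated helpers, the values below follow directly from the `PIXEL_FORMAT_LIST` entries above; a hedged sketch, assuming it is placed inside the `VideoCore::Surface` namespace where these `constexpr` functions are visible:
```c++
// BC1 is block-compressed: 4x4 texel blocks, 64 bits (8 bytes) per block.
static_assert(DefaultBlockWidth(PixelFormat::BC1_RGBA_UNORM) == 4);
static_assert(DefaultBlockHeight(PixelFormat::BC1_RGBA_UNORM) == 4);
static_assert(BytesPerBlock(PixelFormat::BC1_RGBA_UNORM) == 8);

// A 256x256 BC1 image therefore takes (256 / 4) * (256 / 4) * 8 = 32768 bytes.
static_assert((256 / 4) * (256 / 4) * BytesPerBlock(PixelFormat::BC1_RGBA_UNORM) == 32768);

// Uncompressed color formats use 1x1 "blocks", so BytesPerBlock is just bytes per texel.
static_assert(BytesPerBlock(PixelFormat::A8B8G8R8_UNORM) == 4);
```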
SurfaceTarget SurfaceTargetFromTextureType(Tegra::Texture::TextureType texture_type);
bool SurfaceTargetIsLayered(SurfaceTarget target);
bool SurfaceTargetIsArray(SurfaceTarget target);
PixelFormat PixelFormatFromDepthFormat(Tegra::DepthFormat format);
PixelFormat PixelFormatFromRenderTargetFormat(Tegra::RenderTargetFormat format);
PixelFormat PixelFormatFromGPUPixelFormat(Service::android::PixelFormat format);
SurfaceType GetFormatType(PixelFormat pixel_format);
bool HasAlpha(PixelFormat pixel_format);
bool IsPixelFormatASTC(PixelFormat format);
bool IsPixelFormatBCn(PixelFormat format);
bool IsPixelFormatSRGB(PixelFormat format);
bool IsPixelFormatInteger(PixelFormat format);
bool IsPixelFormatSignedInteger(PixelFormat format);
size_t PixelComponentSizeBitsInteger(PixelFormat format);
std::pair<u32, u32> GetASTCBlockSize(PixelFormat format);
u64 TranscodedAstcSize(u64 base_size, PixelFormat format);
} // namespace VideoCore::Surface