Project 2
Transformations
In this project, the goal was to display the vertices of an object loaded from an OBJ file. Although that might not sound too complicated, getting something to appear on the screen can feel like a walk in the dark. First, the data from the OBJ file must be uploaded to the GPU. Then, camera matrix transformations must be applied to move and “perspectivize” the object data. Finally, shader code uploaded to the GPU is called, which processes and renders the data.
In Vulkan, here's what that process looks like.
Vertex Buffer Objects
What it takes to upload data to the GPU:
Vertex Input Description
When writing GLSL code, “attributes” are used to pass per-vertex information, like positions, colors, texture coordinates, vertex normals, etc. These attributes may be interleaved in a single buffer, or separated into different buffers via different layout locations. In Vulkan, the way we handle this is by creating a Vertex Input Binding Description and a Vertex Input Attribute Description. The binding description details which attribute buffer we’re talking about, the stride between vertex data, as well as some other information. The attribute description details at what offsets particular attributes sit within a vertex stride, as well as their format. For example, a vertex color could be 3 * sizeof(float) bytes away from the vertex position, and have a VK_FORMAT_R32G32B32_SFLOAT format. These descriptions are bound to a graphics pipeline, which can be reused with multiple different vertex buffers.
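As a sketch, filling out these descriptions for a hypothetical interleaved position + color vertex might look like this (the `Vertex` struct and function names are illustrative, not from the project):

```cpp
#include <vulkan/vulkan.h>
#include <array>
#include <cstddef>

// Hypothetical interleaved vertex: position then color, 24 bytes per vertex.
struct Vertex {
    float pos[3];
    float color[3];
};

// One binding: all attributes for binding 0 come from a single interleaved buffer.
VkVertexInputBindingDescription getBindingDescription() {
    VkVertexInputBindingDescription binding = {};
    binding.binding = 0;
    binding.stride = sizeof(Vertex);                  // bytes between consecutive vertices
    binding.inputRate = VK_VERTEX_INPUT_RATE_VERTEX;  // advance per vertex, not per instance
    return binding;
}

// Two attributes: layout(location = 0) position, layout(location = 1) color.
std::array<VkVertexInputAttributeDescription, 2> getAttributeDescriptions() {
    std::array<VkVertexInputAttributeDescription, 2> attrs = {};
    attrs[0].location = 0;
    attrs[0].binding = 0;
    attrs[0].format = VK_FORMAT_R32G32B32_SFLOAT;
    attrs[0].offset = offsetof(Vertex, pos);          // 0 bytes into each vertex
    attrs[1].location = 1;
    attrs[1].binding = 0;
    attrs[1].format = VK_FORMAT_R32G32B32_SFLOAT;
    attrs[1].offset = offsetof(Vertex, color);        // 3 * sizeof(float) bytes in
    return attrs;
}
```

Both descriptions end up in the VkPipelineVertexInputStateCreateInfo used when building the graphics pipeline.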
Vertex Buffer Creation
In Vulkan, a buffer is a region of memory used for storing data on the graphics card. To create a buffer, we first describe the size, type, and usage using a VkBufferCreateInfo struct. Vulkan takes the information from that struct and creates a VkBuffer object for us. Although the buffer handle has been created, we still need to allocate and assign memory to it. In order to transfer data from the host to the GPU, we need the assigned memory to be host visible. We can query for that property using vkGetPhysicalDeviceMemoryProperties. We use the results of that call as well as a VkMemoryAllocateInfo struct to allocate GPU memory. Finally, we can bind the allocated memory with our buffer handle using vkBindBufferMemory.
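A sketch of that sequence, assuming `device` and `physicalDevice` already exist from earlier instance/device setup (error checking omitted for brevity):

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>

// Create a vertex buffer and back it with host visible memory.
void createVertexBuffer(VkDevice device, VkPhysicalDevice physicalDevice,
                        VkDeviceSize bufferSize,
                        VkBuffer& vertexBuffer, VkDeviceMemory& vertexMemory) {
    // 1. Describe the buffer: size, usage, sharing.
    VkBufferCreateInfo bufferInfo = {};
    bufferInfo.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
    bufferInfo.size = bufferSize;
    bufferInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT;
    bufferInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
    vkCreateBuffer(device, &bufferInfo, nullptr, &vertexBuffer);

    // 2. Find a memory type the buffer accepts that is also host visible.
    VkMemoryRequirements memReqs;
    vkGetBufferMemoryRequirements(device, vertexBuffer, &memReqs);
    VkPhysicalDeviceMemoryProperties memProps;
    vkGetPhysicalDeviceMemoryProperties(physicalDevice, &memProps);

    VkMemoryPropertyFlags wanted =
        VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
    uint32_t typeIndex = 0;
    for (uint32_t i = 0; i < memProps.memoryTypeCount; i++) {
        if ((memReqs.memoryTypeBits & (1u << i)) &&
            (memProps.memoryTypes[i].propertyFlags & wanted) == wanted) {
            typeIndex = i;
            break;
        }
    }

    // 3. Allocate the memory, then bind it to the buffer handle.
    VkMemoryAllocateInfo allocInfo = {};
    allocInfo.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
    allocInfo.allocationSize = memReqs.size;
    allocInfo.memoryTypeIndex = typeIndex;
    vkAllocateMemory(device, &allocInfo, nullptr, &vertexMemory);
    vkBindBufferMemory(device, vertexBuffer, vertexMemory, 0);
}
```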
Uploading data to the GPU is pretty simple at this point. All we need to do is call vkMapMemory, do a memcpy to the given void pointer, and then call vkUnmapMemory.
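In code, the whole upload is a few lines; `vertexMemory` and `bufferSize` are assumed to come from the buffer creation step above:

```cpp
#include <vulkan/vulkan.h>
#include <cstring>

// Copy host-side vertex data into host visible GPU memory.
void uploadVertices(VkDevice device, VkDeviceMemory vertexMemory,
                    VkDeviceSize bufferSize, const void* vertices) {
    void* data;
    vkMapMemory(device, vertexMemory, 0, bufferSize, 0, &data);
    memcpy(data, vertices, (size_t)bufferSize);
    vkUnmapMemory(device, vertexMemory);
}
```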
Once the vertex data has been uploaded to a vertex buffer, we can use that buffer during a render pass to draw the object. To do that, we call vkCmdBindPipeline to select a particular graphics pipeline, then call vkCmdBindVertexBuffers, passing in our created buffer handle, and then call vkCmdDraw, passing in the total number of vertices we’d like to draw.
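A minimal sketch of those three calls, assuming `cmd` is a command buffer inside an active render pass:

```cpp
#include <vulkan/vulkan.h>

// Record draw commands: bind pipeline, bind vertex buffer, draw.
void recordDraw(VkCommandBuffer cmd, VkPipeline graphicsPipeline,
                VkBuffer vertexBuffer, uint32_t vertexCount) {
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, graphicsPipeline);

    VkBuffer buffers[] = { vertexBuffer };
    VkDeviceSize offsets[] = { 0 };
    vkCmdBindVertexBuffers(cmd, 0, 1, buffers, offsets);

    // vertexCount vertices, 1 instance, starting at vertex 0.
    vkCmdDraw(cmd, vertexCount, 1, 0, 0);
}
```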
Staging Buffer
Drawing from a host visible buffer is slow. To improve render performance, we’d like our vertex data to be located within device local memory. To get our vertex data from the CPU to this device local memory, we need to use an intermediary staging buffer. Once the data has been mapped and copied to the staging buffer, we can do a GPU-side memory copy from the staging buffer to device local memory.
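The GPU-side copy itself is one command; `cmd` is assumed to be a one-off command buffer in the recording state:

```cpp
#include <vulkan/vulkan.h>

// Copy the staged vertex data into the device local buffer.
void copyStagingToLocal(VkCommandBuffer cmd, VkBuffer stagingBuffer,
                        VkBuffer deviceLocalBuffer, VkDeviceSize bufferSize) {
    VkBufferCopy region = {};
    region.srcOffset = 0;
    region.dstOffset = 0;
    region.size = bufferSize;
    vkCmdCopyBuffer(cmd, stagingBuffer, deviceLocalBuffer, 1, &region);
    // Afterwards: end the command buffer, submit it to a queue, and wait
    // for the copy to finish before drawing from the device local buffer.
}
```

Note that this requires the staging buffer to be created with VK_BUFFER_USAGE_TRANSFER_SRC_BIT and the device local buffer with VK_BUFFER_USAGE_TRANSFER_DST_BIT.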
Index Buffer
Triangles in a mesh often share vertex information, so it’s often in our best interest to reuse the same vertex information across triangles rather than duplicate it. That way, we can render, say, a quad composed of two triangles with only 4 points instead of 6. Getting this working in Vulkan is pretty easy. Just upload vertex indices to an index buffer, use vkCmdBindIndexBuffer to bind the index data, and then call vkCmdDrawIndexed.
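Using the quad example, a sketch of the indexed draw (pipeline and vertex buffer assumed already bound):

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>

// A quad as two triangles: 4 unique vertices referenced by 6 indices.
const uint16_t quadIndices[] = { 0, 1, 2, 2, 3, 0 };

void drawQuadIndexed(VkCommandBuffer cmd, VkBuffer indexBuffer) {
    vkCmdBindIndexBuffer(cmd, indexBuffer, 0, VK_INDEX_TYPE_UINT16);
    vkCmdDrawIndexed(cmd, 6, 1, 0, 0, 0);  // 6 indices, 1 instance
}
```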
Uniform Buffer Objects
Descriptor Layout
In addition to attributes, a typical GLSL shader will have uniform data, which is shared globally across vertices in a mesh being rendered. Common uniforms include transformation matrices, which move points from model space to world space to camera space, and so on.
In Vulkan, we can use uniform buffer objects to hold this information. Very similar to vertex and index buffers, a uniform buffer needs a buffer handle with memory allocated and bound to it. To use this uniform buffer object, we need to tell our graphics pipeline what that buffer will look like, and where it will be bound. We do this by creating a VkDescriptorSetLayoutBinding, and adding that to a VkDescriptorSetLayoutCreateInfo struct, which is in turn used to create a VkDescriptorSetLayout.
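A sketch of that layout creation for a single uniform buffer at binding 0, used by the vertex shader:

```cpp
#include <vulkan/vulkan.h>

// Describe one uniform buffer, visible to the vertex shader stage.
VkDescriptorSetLayout createUboLayout(VkDevice device) {
    VkDescriptorSetLayoutBinding uboBinding = {};
    uboBinding.binding = 0;  // matches layout(binding = 0) in the shader
    uboBinding.descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
    uboBinding.descriptorCount = 1;
    uboBinding.stageFlags = VK_SHADER_STAGE_VERTEX_BIT;

    VkDescriptorSetLayoutCreateInfo layoutInfo = {};
    layoutInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
    layoutInfo.bindingCount = 1;
    layoutInfo.pBindings = &uboBinding;

    VkDescriptorSetLayout layout;
    vkCreateDescriptorSetLayout(device, &layoutInfo, nullptr, &layout);
    return layout;
}
```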
This uniform buffer object can now be updated every frame to update the scene.
Descriptor Pool and Sets
A descriptor pool is used to allocate descriptor sets, which are bound during a render pass. A handy way to think about descriptor layouts and sets is to imagine the layout like a struct declaration, and a set as an instance of that struct. We use an instance of that struct, which contains our global uniform data, when we draw vertices in our render pass.
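Sketched out, creating a pool and allocating one set from it looks roughly like this (error checking omitted):

```cpp
#include <vulkan/vulkan.h>

// Allocate one descriptor set (the "struct instance") from a pool,
// using the layout (the "struct declaration") created earlier.
VkDescriptorSet allocateUboSet(VkDevice device, VkDescriptorSetLayout layout) {
    // Pool with room for one uniform buffer descriptor in one set.
    VkDescriptorPoolSize poolSize = {};
    poolSize.type = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
    poolSize.descriptorCount = 1;

    VkDescriptorPoolCreateInfo poolInfo = {};
    poolInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO;
    poolInfo.poolSizeCount = 1;
    poolInfo.pPoolSizes = &poolSize;
    poolInfo.maxSets = 1;

    VkDescriptorPool pool;
    vkCreateDescriptorPool(device, &poolInfo, nullptr, &pool);

    VkDescriptorSetAllocateInfo allocInfo = {};
    allocInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO;
    allocInfo.descriptorPool = pool;
    allocInfo.descriptorSetCount = 1;
    allocInfo.pSetLayouts = &layout;

    VkDescriptorSet set;
    vkAllocateDescriptorSets(device, &allocInfo, &set);
    return set;
}
```

The set is then pointed at the uniform buffer with vkUpdateDescriptorSets, and bound before drawing with vkCmdBindDescriptorSets.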
TL;DR
Make buffer handles, allocate and upload vertex data from host->staging->local, and stick that in a graphics pipeline during a render pass.
Uniform data needs “Descriptor Layouts” and “Descriptor Sets” which are provided by “Descriptor Pools”. Bind a descriptor set to set uniform data before drawing vertices.
Program controls
- Esc closes the window
- R resets the camera position
- Click and drag to move the camera
- Arrow keys move the camera
- +/- zoom in and out
Build instructions
- If you don't have it already, download CMake
- Download and install the Vulkan SDK from here: https://www.lunarg.com/vulkan-sdk/
- Clone the project: https://github.com/n8vm/InteractiveComputerGraphics
- Generate the project with CMake
- Build and run any targets in the project
Resources Used
C++, Visual Studio 2017, CMake, Vulkan, GLFW, GLM
Hardware
Intel HD Graphics 620, Intel Core i7-8500U CPU @ 4.0GHz, 16GB of memory
Nvidia GTX 1070