5
\$\begingroup\$

I read that frame rate depends on the monitor's refresh rate, as described in the following Japanese article: https://siv3d.github.io/ja-jp/tutorial2/motion/#181-%E7%B5%8C%E9%81%8E%E6%99%82%E9%96%93%E3%82%92%E4%BD%BF%E3%81%A3%E3%81%9F%E3%83%A2%E3%83%BC%E3%82%B7%E3%83%A7%E3%83%B3

Translated from Japanese:

It might seem like you could create motion by adding a fixed value every frame, without using Scene::Time() or Scene::DeltaTime(), but that is a serious mistake.

The reason is that the number of times the main loop runs per second depends on the refresh rate of the monitor of the computer running the program. A typical monitor has a refresh rate of 60 Hz, so the main loop runs 60 times per second, but in recent years monitors with higher refresh rates, such as 120 Hz, 144 Hz, and 240 Hz, have become increasingly common.
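
For concreteness, the tutorial's advice amounts to something like the following minimal sketch. Scene::DeltaTime() is the call from the quote; the main-loop structure, the moving circle, and the speed value are just illustrative, assuming a standard OpenSiv3D project setup.

    # include <Siv3D.hpp>

    void Main()
    {
        double x = 0.0;               // position in pixels
        const double speed = 200.0;   // pixels per second (arbitrary)

        while (System::Update())
        {
            // Wrong: x += 2.0; would move twice as fast on a 120 Hz monitor as on a 60 Hz one.
            // Right: scale the motion by how long the previous frame actually took.
            x += speed * Scene::DeltaTime();

            Circle{ x, 300, 20 }.draw();
        }
    }

Because the motion is expressed in pixels per second rather than pixels per frame, the circle crosses the screen at the same real-world speed whether the loop runs at 60 Hz or 240 Hz.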

Is this behavior specific to Siv3D, or is it a general principle in game programming?

Before reading this, I assumed that frame rate would depend on CPU or GPU processing speed. Does this mean that the CPU or GPU does not influence the frame rate, and it only depends on the monitor? Or does it depend on both?

If both are factors, how does the dependency work? Is this handled by the OS, or is it the responsibility of the game engine?

For example, Unity and Pygame have a fixed-frame-rate feature, while Siv3D and Flutter Flame do not. In Lua you can casually toggle a vsync flag, whereas some other systems are designed so that you never even become aware that vsync exists.

As an aside, I have often come across the term vsync in game programming but never fully understood it. If the frame rate depends on the monitor, this concept now starts to make a bit more sense.

\$\endgroup\$
  • 1
    \$\begingroup\$ I think it's the wrong question. The question should be: what frame rate does my game / project need in order for it to function as intended. The UWP / WPF composition thread runs at 60 FPS which works well for most situations. When dealing with VR, you "need" 90-120 FPS to avoid "motion sickness". On the other hand, some animations will work well at 20 FPS before things get jagged; which depends again on what you're rendering in terms of shape and color (some things "blend" better). So, if you need 120 FPS, you need the hardware to support it. One sets the frame rate via the "interval size". \$\endgroup\$ Commented 20 hours ago

3 Answers

13
\$\begingroup\$

First let's establish some basic terminology:

  • CPU: Stands for "Central Processing Unit". This is the "brain" of your computer; it does the general-purpose math and runs your operating system.
  • GPU: Stands for "Graphics Processing Unit". This can either be integrated into your CPU (known as "integrated graphics") or come as a stand-alone graphics card. It is a specialized processor primarily used to compute what your monitor or TV should display, but it can also be used for non-graphics processing such as neural networks (commonly referred to simply as "AI", which is not the same thing as video-game AI).

It is important to distinguish between two concepts here:

  1. How fast your game can process things, e.g. how many NPCs can be active at a time (character scripts) or how many balls can exist in a ball-pit (physics). Let's call this tick speed.
  2. How fast your game can draw an image to your screen. Let's call this frame speed.

The first, tick speed ("TPS"), is limited almost entirely by your CPU (except in edge cases where you can offload work to the GPU). A slower CPU cannot process things as fast, which means your game will "tick" at a slower rate. In some cases a very low tick speed can lead to weird behaviour such as walking through walls or bullets passing through targets.

The second, frame speed ("FPS"), is limited primarily by your GPU: a faster GPU can render to your screen faster and therefore deliver more frames per second. It is also limited somewhat by your CPU, because it is the CPU that tells the GPU what to render.
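
To make the distinction concrete, here is a minimal sketch of a game loop that decouples the two rates, in plain C++ with no particular engine assumed; updateGameState and renderFrame are hypothetical placeholders.

    #include <chrono>

    // Hypothetical stand-ins for whatever your engine provides.
    void updateGameState(double dtSeconds) { /* physics, NPC scripts, input ... */ }
    void renderFrame()                     { /* submit draw calls to the GPU */ }

    int main()
    {
        using clock = std::chrono::steady_clock;

        const double tickLength = 1.0 / 60.0;   // fixed 60 ticks per second
        double accumulator = 0.0;
        auto previous = clock::now();

        while (true)   // in a real game: while the window is open
        {
            const auto now = clock::now();
            accumulator += std::chrono::duration<double>(now - previous).count();
            previous = now;

            // Tick speed: advance the simulation in fixed steps, as many times
            // as needed to catch up with real time. Limited by the CPU.
            while (accumulator >= tickLength)
            {
                updateGameState(tickLength);
                accumulator -= tickLength;
            }

            // Frame speed: draw once per loop iteration. How often this runs is
            // limited mainly by the GPU (and by vsync, if enabled).
            renderFrame();
        }
    }

With this structure a slow GPU only lowers the frame speed, while a slow CPU drags down the tick speed (and, since ticks and rendering share one loop here, potentially the frame speed as well).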

As for the monitor refresh rate / VSync: the refresh rate is the maximum number of frames per second your monitor can actually display. If your monitor can only display 60 Hz, it makes no sense to render at 100 FPS, since 40 of those frames will either overwrite previous frames (meaning a frame was rendered for no reason, because it was never displayed) or cause "tearing", where your GPU updates the screen ("frame buffer") while your monitor is still displaying the previous frame, so that part of the screen shows one frame and part shows the next.

This is where VSync comes in: it limits how many frames your GPU sends to your monitor, which removes this "tearing" issue. Note that a monitor with a higher refresh rate will not increase your FPS if your FPS was already below your monitor's maximum refresh rate.
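
In practice, enabling VSync usually means asking the windowing or graphics API to synchronize buffer swaps with the monitor. As one hedged example (GLFW/OpenGL is not mentioned in the question; it is simply a common library where this maps onto a single call), it looks roughly like this:

    #include <GLFW/glfw3.h>

    int main()
    {
        if (!glfwInit())
            return -1;

        GLFWwindow* window = glfwCreateWindow(800, 600, "VSync demo", nullptr, nullptr);
        if (!window)
        {
            glfwTerminate();
            return -1;
        }

        glfwMakeContextCurrent(window);

        // 1 = wait for the monitor's vertical blank before swapping buffers (vsync on),
        // 0 = swap immediately (vsync off; frame rate limited only by CPU/GPU).
        glfwSwapInterval(1);

        while (!glfwWindowShouldClose(window))
        {
            // ... render the frame here ...

            glfwSwapBuffers(window);   // blocks until the next vertical blank when vsync is on
            glfwPollEvents();
        }

        glfwTerminate();
        return 0;
    }

With the swap interval set to 1, glfwSwapBuffers waits for the monitor's vertical blank, so the loop can never run faster than the refresh rate; with 0, it runs as fast as the CPU and GPU allow.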

\$\endgroup\$
  • 2
    \$\begingroup\$ Solid explanation. As an additional note, the tick speed is also the limit for handling user input. A tick rate that is higher than the monitor's frame rate can still be helpful for being more responsive to user input, or for having less error than large step sizes in a simulation. \$\endgroup\$
    – abestrange
    Commented yesterday
  • 1
    \$\begingroup\$ "it makes no sense to render at 100 FPS since your monitor will just discard 40 of those frames anyways which can cause 'tearing'" – do you have a source for that? I've never heard of a PC monitor dropping frames or combining two frames into one ("tearing"). \$\endgroup\$ Commented 23 hours ago
  • 2
    \$\begingroup\$ @SophieSwett Preventing screen tearing (+ saving some wattage) is the whole purpose of VSync, google "screen tearing" and pick any link :) Wikipedia probably has an article on it. As for "dropping frames", I don't know whether it's actually the monitor or the GPU that discards those extra frames, to be fair. But the general statement holds: if you have a 60Hz monitor and you are rendering 100 FPS, 40 of those frames will either not be displayed or will "tear" into another frame. \$\endgroup\$
    – Charanor
    Commented 22 hours ago
  • 1
    \$\begingroup\$ @abestrange That's true! But only for polling inputs in your main loop which is why a lot of implementations also allow, and often recommend, listening to input events instead of polling since the input will not be dropped (input listening is often handled via OS interrupts). But I'd wager most games use polling for input since it's simpler so it's worthwhile to mention it. \$\endgroup\$
    – Charanor
    Commented 22 hours ago
  • 2
    \$\begingroup\$ @SophieSwett the monitor doesn't do anything. The video card "scans out" the framebuffer to the monitor at a fixed refresh rate (setting aside VRR). Pushing frames into the framebuffer more often than that doesn't change that. And updating the framebuffer while it's in the process of being scanned out is what causes tearing — it sent part of one frame, and then the contents were replaced with a new frame midway through. \$\endgroup\$
    – hobbs
    Commented 13 hours ago
5
\$\begingroup\$

In general there are four things that can limit the frame rate of a modern game, and the slowest one will determine the final frame rate.

  1. How fast the CPU is. Each frame the game will need to do some quantity of work on the CPU. That will include processing input, and updating the game state. In some game engines this CPU work will be split between multiple threads, which complicates things somewhat, but the basic principle still applies.
  2. How fast the GPU is. In order to display the state of the game to the player, the game will have submitted some quantity of work to the GPU, and the GPU will take some quantity of time to process that work, depending on how fast it is.
  3. VSync. Once the GPU has finished rendering a new frame, it needs to display that frame to the user via the monitor. It can choose to either start displaying it immediately, or to synchronize with the refresh rate of the monitor to avoid "tearing". It does that by waiting for the monitor to finish displaying the previous frame. The choice between those two options is usually exposed to the user in the settings for the game.
  4. Some games will intentionally limit the frame rate to something lower than what the hardware is capable of. For example, a game may choose not to run faster than 30 FPS, even on a system that's capable of running much faster than that. This is sometimes exposed to the user as a frame rate limit option (see the sketch after this list for how such a limiter typically works).
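
As a concrete illustration of point 4, here is a minimal sketch of such a frame-rate limiter in plain C++. The 30 FPS target and the updateAndRender placeholder are arbitrary assumptions, not taken from any particular engine.

    #include <chrono>
    #include <thread>

    // Hypothetical placeholder for the per-frame work (update + draw).
    void updateAndRender() { /* ... */ }

    int main()
    {
        using clock = std::chrono::steady_clock;

        // Cap the game at 30 FPS regardless of how fast the hardware is.
        const auto frameBudget =
            std::chrono::duration_cast<clock::duration>(std::chrono::duration<double>(1.0 / 30.0));
        auto nextDeadline = clock::now() + frameBudget;

        while (true)   // in a real game: while the window is open
        {
            updateAndRender();

            // Sleep until the next deadline; if the frame already overran its
            // budget, sleep_until returns immediately and the FPS simply drops.
            std::this_thread::sleep_until(nextDeadline);
            nextDeadline += frameBudget;
        }
    }

A limiter like this can only cap the frame rate; it cannot raise it above what the CPU, GPU, or vsync already allow.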
\$\endgroup\$
  • 2
    \$\begingroup\$ It’s less of a thing now than it used to be, but fixed frame rates (that the user couldn’t change) were much more common in the past, especially in sprite-based games. Lots of older games abused very precise timing tricks to overcome limitations in the hardware, and changing the framerate could throw those off. \$\endgroup\$
    – KRyan
    Commented yesterday
  • 1
    \$\begingroup\$ That's one of the reasons I said "modern game" at the top of what I wrote. Older games often worked in various different ways (e.g. using the CPU to write directly to screen memory). \$\endgroup\$
    – Adam
    Commented 11 hours ago
3
\$\begingroup\$

You seem to be conflating two separate concepts:

  • The rendering rate, usually referred to by gamers as the frame rate. This is how fast the game itself can render frames on the current hardware.
  • The physical frame rate, usually referred to by gamers as the refresh rate. This is how fast the display is physically updating what it is displaying.

On some really old hardware, such as the Atari 2600, there was no difference between the two at all, because you were literally rendering the frame line-by-line in real time (that is, the graphics were rendered directly into the video output signal).

On modern systems though, the two are much more decoupled. The rendering rate is largely just a matter of the computational requirements of the game and the capabilities of the CPU/GPU (and whatever other coprocessors and/or peripherals are involved), while the physical frame rate is a function of the monitor.

Internally, all modern GPUs have what is known as a video output buffer for every output on the GPU. This buffer stores the frame that is currently being sent over that output to the connected display. In most cases, instead of rendering directly to the video output buffer, a GPU will render to an internal buffer and then copy the data from that to the video output buffer.¹ This ensures that the intermediate state of a partially rendered frame doesn’t go out over the display output, reducing flickering, tearing, and stuttering. ‘vsync’ is just a matter of ensuring that the copy only happens in between physical frames, and can help reduce tearing further.
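
As a toy illustration of the "render into a back buffer, then hand it to the output" idea described above (a conceptual sketch in plain C++, not real driver code; the resolution, pixel values, and the choice to model buffers as vectors are all made up for the example):

    #include <cstdint>
    #include <utility>
    #include <vector>

    // A tiny stand-in for a framebuffer: one 32-bit colour value per pixel.
    using Framebuffer = std::vector<std::uint32_t>;

    int main()
    {
        constexpr int width = 640, height = 480;

        Framebuffer front(width * height);   // the buffer the display output reads from
        Framebuffer back(width * height);    // the buffer the renderer draws into

        for (std::uint32_t frame = 0; frame < 3; ++frame)
        {
            // 1. Render the new image into the back buffer only, so the display
            //    never sees a half-finished frame.
            for (auto& pixel : back)
                pixel = 0xFF000000u | frame;

            // 2. "Flip": make the freshly rendered buffer the one that gets
            //    scanned out. With vsync, a real GPU waits for the vertical
            //    blank before doing this, which prevents tearing.
            std::swap(front, back);

            // 3. The display hardware now scans out `front` at the monitor's
            //    fixed refresh rate, independently of how long step 1 took.
        }
    }

Step 2 corresponds to the copy (or flip) described above; performing it only between physical frames is exactly what "vsync" ensures.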

This, in turn, leads to three possible situations regarding the rendering rate and physical frame rate:

  • If they match exactly, then things functionally work much like the old systems that rendered in real time, just with some more steps involved.
  • If the physical frame rate is higher than the rendering rate, then some frames may be duplicated (these duplicate frames are often known as ‘lag frames’ when talking about old video game consoles, because they usually represent the whole game essentially pausing to wait for the rendering process to re-synchronize with the display). Alternatively, a frame that is only partially rendered may be displayed, leading to visual artifacts.
  • If the rendering rate is higher than the physical frame rate, then the rendering process can idle after each frame instead of having to run constantly. This is functionally how capping the frame rate in a game works to reduce resource usage. Alternatively, it may start on the next frame immediately, in which case either the previous frame may be dropped completely (if the rendering process of the new frame finishes before the frame could be displayed), or a partially rendered frame will be displayed.

This gets complicated though because it’s rarely the case that the rendering rate is constant. Most games have at least some situations where more work needs to be done to prepare a frame than ‘normal’, and some where less work needs to be done than ‘normal’. For example, in most games that use sprite graphics, the number of sprites on screen has a direct impact on the time taken to render a frame.

And, to make matters more interesting, newer displays often allow for a variable refresh rate (branded variously as FreeSync, G-Sync, Adaptive-Sync, ProMotion, Q-Sync, or generically as VRR). This lets the display derive its physical refresh rate from the rendering rate of the game, which in theory completely eliminates tearing (whether it does so in practice depends on a few other factors, not least of which is how the display implements VRR).


1: Depending on the GPU design itself, this may just involve flipping a few bits in a register instead of actually copying data. Most modern GPUs work this way, as it significantly reduces the chances of tearing due to the copy taking too long. Depending on the software involved, there may also be more than one buffer that data is rendered into, which generally allows for better resource utilization on the GPU, though the way this is implemented may vary (see https://en.wikipedia.org/wiki/Multiple_buffering and https://en.wikipedia.org/wiki/Swap_chain for some of the high-level details of the two common approaches).

\$\endgroup\$
