It seems like it would be extremely fast to me. Take a 50x50 block of pixels and expand those across a 100x100 pixel grid, leaving blank pixels where you have missing data. If a blank pixel is surrounded by blue pixels, the probability of the missing pixel being blue is fairly high, I would assume.
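Something like this rough sketch, just to show the shape of the idea (pure numpy, a made-up 50x50 -> 100x100 case, definitely not FSR's actual method):

```python
import numpy as np

def naive_upscale_2x(low_res: np.ndarray) -> np.ndarray:
    """Spread a low-res image onto a grid twice the size, then guess the blanks."""
    h, w = low_res.shape
    out = np.full((h * 2, w * 2), np.nan)   # NaN = blank pixel, missing data
    out[::2, ::2] = low_res                  # copy the known pixels into place

    # Fill each blank with the average of its known neighbours, i.e. take the
    # most probable value given the surrounding pixels.
    for y in range(h * 2):
        for x in range(w * 2):
            if np.isnan(out[y, x]):
                neighbours = out[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
                out[y, x] = np.nanmean(neighbours)
    return out

low = np.random.rand(50, 50)    # stand-in for a 50x50 block of pixels
high = naive_upscale_2x(low)    # 100x100 guess
print(high.shape)               # (100, 100)
```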
That is a problem that is perfect for AI, actually. There is an actual algorithm that can be used for upscaling, but at its core it's likely boiled down to a single function, and AIs are excellent at replicating the output of basic functions. It's not a perfect result, but it's tolerable.
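As a toy illustration of "replicating a basic function": a tiny model trained by gradient descent to predict a pixel from its two neighbours, where the function it ends up learning is just averaging. Completely hypothetical, not what FSR4's network actually does:

```python
import numpy as np

rng = np.random.default_rng(0)
left, right = rng.random(10_000), rng.random(10_000)
target = (left + right) / 2              # the "basic function" to replicate

w = rng.random(2)                        # model: pixel ≈ w0*left + w1*right
for _ in range(500):
    pred = w[0] * left + w[1] * right
    err = pred - target
    grad = np.array([(err * left).mean(), (err * right).mean()])
    w -= 0.5 * grad                      # gradient step on mean squared error

print(w)   # converges to roughly [0.5, 0.5]: it has learned the averaging function
```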
Whether or not this example is accurate for FSR, I have no clue. However, having AI shit out data based on a probability is mostly what they do.
I’m very much not an expert, but I’d imagine it’s similar to how AES-NI works: the task is CPU/GPU-intensive until specific instructions are designed to do whatever blackmagicfuckery level math is required, and once it’s in hardware it’s both more power-efficient and faster.
Without more detail we can only guess, but I would imagine it works the same way that DLSS is (presumed?) to work.
Most of the upscaling is done by the TAA-based algorithm that’s part of FSR 3.1; the image is then cleaned up by their “AI” component for better image stability.
AI image upscaling isn’t something I would associate with being energy efficient or fast. I wonder how that’s supposed to work?