That’s exactly my point. Raytracing is being shoehorned into things without them being optimised specifically for it at the moment. That doesn’t mean we should stop developing the tech entirely because people are implementing it poorly most of the time.
How do you "optimize" it? You would think with so many game companies using it that if there were a better way, then there would be at least one title with optimized ray tracing. The issue is the computational requirements for convincing ray tracing. When Toy Story 1 was originally rendered, it took 45 minutes to 3 hours to render a single frame. Give it time and the GPUs will eventually be fast enough. Baby steps with new tech.
Optimization is not an on/off switch. All companies are optimising their implementation to the best of their ability/budget. As coders get more familiar with the tech and it becomes more commonplace, and as graphics card companies keep improving their drivers, the computational requirements will come down over time. There's a hell of a lot of work that goes into graphical processing at the hardware, driver, engine and game levels to make things look better for fewer computations; it's not just "tell the GPU to simulate every photon from the sun".
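To make the "look better for fewer computations" point concrete, here's a minimal sketch (my own illustration, not anything from this thread) of one classic ray-tracing optimization: a cheap axis-aligned bounding-box (AABB) "slab test" that lets a renderer reject most rays with a handful of comparisons instead of running expensive per-triangle intersection math. Acceleration structures like BVHs are built out of exactly this kind of early-out test.

```python
def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: True if the ray could hit anything inside the box.

    A cheap pre-check run before costly per-triangle intersection tests;
    rays that miss the box skip all of the geometry it contains.
    """
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            # Ray is parallel to this slab: a miss unless the origin lies inside it.
            if o < lo or o > hi:
                return False
        else:
            # Entry/exit distances for this axis's slab.
            t0, t1 = (lo - o) / d, (hi - o) / d
            if t0 > t1:
                t0, t1 = t1, t0
            t_near, t_far = max(t_near, t0), min(t_far, t1)
            if t_near > t_far:  # The slab intervals no longer overlap: miss.
                return False
    return t_far >= 0.0  # The box must lie in front of the ray, not behind it.

# A ray pointing toward the box passes; one pointing away is rejected
# after just a few comparisons, skipping every triangle inside the box.
print(ray_hits_aabb((0, 0, 0), (1, 0, 0), (2, -1, -1), (4, 1, 1)))
print(ray_hits_aabb((0, 0, 0), (-1, 0, 0), (2, -1, -1), (4, 1, 1)))
```

Real renderers arrange millions of these boxes into a hierarchy (a BVH), so each ray touches only a tiny fraction of the scene's triangles, which is a big part of why hardware and driver work on traversal keeps paying off.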
There’s also the other side of the issue. As tech gets better, devs are less incentivised to optimise their crap. Which leads to games that look and function the same as older games, but are now bloated beyond belief.
It’s a strange paradox of tech innovation. The more powerful our tech becomes, the less we feel like properly utilizing said tech.