You can get depth information from parallax, which can come from capturing either multiple moments or multiple viewpoints. I'm not sure I would call this seeing in 3D, though, as you still only see 2D surfaces, just with depth as one additional data point per pixel. Think of it like an array of data: with one eye you get res^2 * (r+g+b) data points; with two you get res^2 * (r+g+b+r+g+b+d); actual 3D would be res^3 * (r+g+b). Having a third eye just means you can estimate depth more accurately. Of course, in real animals with many eyes, the eyes serve different purposes, such as having different fields of view, resolution, color perception, etc.
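The counting argument above can be sketched with array shapes (a minimal illustration; the resolution value is arbitrary):

```python
import numpy as np

res = 64  # hypothetical sensor resolution

# One eye: a 2D grid of RGB samples -> res^2 * 3 data points
mono = np.zeros((res, res, 3))

# Two eyes: two RGB grids plus one estimated depth per pixel
# -> res^2 * (3 + 3 + 1) data points
stereo = np.zeros((res, res, 7))

# "Actual 3D": a color sample at every point in a volume
# -> res^3 * 3 data points
volume = np.zeros((res, res, res, 3))

print(mono.size, stereo.size, volume.size)
```

Stereo only grows the data by a constant factor over one eye, while true volumetric vision would grow it by a factor of res, which is why depth-from-parallax is still fundamentally a 2D measurement.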
So what you’re saying is we need 4D eyes to see 3D? Ahh, Kos… or some say, Kosm…