
Possible revision of our vision range/"twinkle in the sky" feats

Status
Not open for further replies.
'Pparently he was supposed to be free on the 15th but radio silence.

I've tagged him for other important threads as well, but no dice.
Because there are like 20 threads I'm tagged in. I have been free since the 15th (which isn't to say I'm not doing lots of things in my holidays other than hanging out here), but that doesn't mean I can get to everything immediately.
I disagree, I think the twinkle is meant to indicate that the character is no longer in view in-verse. There's also videogame cases in which the model/sprite straight-up just disappears no matter how much you turn up the resolution.
I don't think the twinkle necessarily indicates they are not visible anymore. It's more of a stylistic thing, I believe. Much less do I think they would have a specific degree of "not visible" in mind. As in, would it really be specifically not visible to a focussed observer?

As far as resolution is concerned: we had a debate before on how to handle the problem of things technically getting different values based on resolution. Not sure if that thread ever had an official end. But when you think of video games, I think the most reasonable thing is to use the last point where you can see them as a low end. That might still change with resolution, but then you can just take the highest the video game has to offer.
For other media, like anime, that should be resolution independent.

I looked into this and I'm actually not sure this would end up playing a role. Assuming 1080px of screen height (which admittedly isn't going to be the resolution of older movies/animation) and 0.5px for the size of the character in question (which imo is better than 1px, because even if the object was slightly less than one full px, by occupying most of that pixel it would still "fill it"), the distance for a human-sized character ends up being 1.8 / 0.5 * 1080 = 3888 meters, which is more than you'd get from my method. And this is without considering higher resolutions, which are very common nowadays: most movies are 4096 x 2160, and basically any videogame can be cranked as high as your PC can handle.
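The arithmetic in that post can be sketched as a tiny helper (hypothetical function name; it just reproduces the post's linear pixel scaling, assuming a 1.8 m character):

```python
def pixel_scaled_distance(obj_size_m, screen_height_px, min_visible_px):
    """Rough distance at which an object of obj_size_m shrinks to
    min_visible_px out of screen_height_px vertical pixels, assuming
    linear pixel scaling (the naive method from the post above)."""
    return obj_size_m / min_visible_px * screen_height_px

# 1.8 m character, 1080 px of screen height, 0.5 px minimum visible size
print(pixel_scaled_distance(1.8, 1080, 0.5))  # ~3888 meters
```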
Well, as said, I would not go for 1 px (as that means we would have to figure out the original resolution, not to mention that the unit is not meaningful given how video and some image compression works) but rather the distance to the last time we see them.

But, aside from that, if using the px of the last frame where the object is visible gives a higher distance, then the distance should be usable regardless of any real life consideration.


Like, again, I think your suggested method is most interesting for cases where we don't have videos, but rather have statements that something becomes not visible. So for feats that would be found in manga, comics, novels etc.


Anyway, as far as your method's validity in general is concerned: yes, I think it works. But I think it only works if given proper time to focus. Like, I remember the eye test I did for my driver's license, and when it came to the small letters, I had to focus for several seconds to be able to make them out. And that is for something that stands still.
So for feats of something flying off into the distance, I think a greater angular value would probably be appropriate. What that could be? Idk. I once used half the angle of an outstretched pinky in a calc for that purpose, since I figured it was a reasonable low end for what one can relatively easily see even when it moves around and one isn't greatly focussed. But there are probably better baselines one could use.
 
So what are the conclusions regarding what should be done here?
 
As far as resolution is concerned: we had a debate before on how to handle the problem of things technically getting different values based on resolution. Not sure if that thread ever had an official end. But when you think of video games, I think the most reasonable thing is to use the last point where you can see them as a low end. That might still change with resolution, but then you can just take the highest the video game has to offer.
For other media, like anime, that should be resolution independent.
This is really not viable in my opinion, especially for videogames, because you end up running into engine limitations here: a lot of games were meant to be viewed on a CRT or on a tiny handheld screen where tiny models/sprites would just be completely invisible, so developers may have chosen to just remove them rather than have them move the whole distance away. Animation would also run into similar issues: a lot of shows that do this are aimed at children and, as a result, have lower budgets. It's also usually the case that the character gets so far that they are not in view at all anymore and then, a few moments later, the twinkle happens. That's clearly getting so far away that they cannot be seen anymore.
Anyway, as far as your method's validity in general is concerned: yes, I think it works. But I think it only works if given proper time to focus. Like, I remember the eye test I did for my driver's license, and when it came to the small letters, I had to focus for several seconds to be able to make them out. And that is for something that stands still.
Yeah, but the characters would be following the object as it flies away, so they'd definitely be focusing on it. It'd be harder to track than an object standing still but it's close enough.
So for feats of something flying off into the distance, I think a greater angular value would probably be appropriate. What that could be? Idk. I once used half the angle of an outstretched pinky in a calc for that purpose, since I figured it was a reasonable low end for what one can relatively easily see even when it moves around and one isn't greatly focussed. But there are probably better baselines one could use.
Idk, choosing "half a pinky" is pretty random and you could definitely easily see something that appeared much smaller than that. I prefer a mild high-ball that has scientific basis than a huge low-ball that's based on a completely arbitrary value. Like, looking at something that appears to be about 1/10 of my pinky from where I'm sitting right now, I'm fairly confident that I could track something like that if it was several times smaller than it, even in motion. And I have small hands and awful eyesight.
 
This is really not viable in my opinion, especially for videogames, because you end up running into engine limitations here: a lot of games were meant to be viewed on a CRT or on a tiny handheld screen where tiny models/sprites would just be completely invisible, so developers may have chosen to just remove them rather than have them move the whole distance away. Animation would also run into similar issues: a lot of shows that do this are aimed at children and, as a result, have lower budgets. It's also usually the case that the character gets so far that they are not in view at all anymore and then, a few moments later, the twinkle happens. That's clearly getting so far away that they cannot be seen anymore.
Can't say much more than that I disagree.

If we have nothing conclusive of the thing getting out of view, scaling by how far we can see them get away seems more reasonable. Why make an assumption if you can go by what you see happen?

I don't think using "they couldn't animate it smaller" as justification for a shaky assumption is valid. It just doesn't make the alternative better.
Yeah, but the characters would be following the object as it flies away, so they'd definitely be focusing on it. It'd be harder to track than an object standing still but it's close enough.

Idk, choosing "half a pinky" is pretty random and you could definitely easily see something that appeared much smaller than that. I prefer a mild high-ball that has scientific basis than a huge low-ball that's based on a completely arbitrary value. Like, looking at something that appears to be about 1/10 of my pinky from where I'm sitting right now, I'm fairly confident that I could track something like that if it was several times smaller than it, even in motion. And I have small hands and awful eyesight.
I'm really not sure I would call it a "mild" high-ball. I mean, 5x10^-4 rad equates (using the same approx. formula as the page) to seeing an object that is 3 mm large over a distance of 6m. That's the size of a single poppy seed. I'm fairly sure even when my eyes were good that would have needed some concentration and time. Or about holding a 0.5 mm object on your outstretched hand (i.e. about 1m away). That's like a single grain of sand.
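Those equivalences can be sanity-checked with the small-angle approximation the page uses (angle ≈ size / distance):

```python
def angular_size_rad(size_m, distance_m):
    # small-angle approximation: angular size ≈ physical size / distance
    return size_m / distance_m

print(angular_size_rad(0.003, 6))   # ~5e-4 rad: a 3 mm poppy seed at 6 m
print(angular_size_rad(0.0005, 1))  # ~5e-4 rad: a 0.5 mm grain of sand at 1 m
```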

And as usual, I'm on the side of rather having somewhat of a lowball than an overestimation.

And I fully agree that half a pinky is somewhat arbitrary. I'm generally open for other suggestions.

I could agree with using the end for those which are superhumanly good at it, though.
 
Can't say much more than that I disagree.

If we have nothing conclusive of the thing getting out of view, scaling by how far we can see them get away seems more reasonable. Why make an assumption if you can go by what you see happen?

I don't think using "they couldn't animate it smaller" as justification for a shaky assumption is valid. It just doesn't make the alternative better.
I mean, if the object disappears completely then they clearly went way further than "how far we can see them last time", especially with the context that they're already moving away at all times. Like, if they get so far away that they disappear, they should just be assumed to be so small (from the POV) that they're no longer visible; what we do "see happen" is that we... can't see them anymore. It did get out of view, and we know it did via distance. I don't really think this is a shaky assumption at all.

Honestly there's also the fact that you have to consider that most of the time this is just half of the distance they move, usually they vanish before they really begin falling so all feats regarding this are lowballs.
And I fully agree that half a pinky is somewhat arbitrary. I'm generally open for other suggestions.
How about double the 5x10^-4 thing? I cut out a scrap of paper about 1 mm wide and put it on my finger, stretched my hand out and i could see it a bit. I might not notice it if my attention wasn't on it, but if it was moving away from me I think I'd be able to keep my eyes on it. And again I have awful eyesight
 
I mean, if the object disappears completely then they clearly went way further than "how far we can see them last time", especially with the context that they're already moving away at all times. Like, if they get so far away that they disappear, they should just be assumed to be so small (from the POV) that they're no longer visible; what we do "see happen" is that we... can't see them anymore. It did get out of view, and we know it did via distance. I don't really think this is a shaky assumption at all.
We know it did get out of view in terms of the view of the camera through which we see, not out of the view of the human eye. The "how far we can see them last time" is just for the purpose of resolving trouble regarding resolution and render distance, where maybe even something that technically is 1 px big can in practice not be seen anymore.

Think about it in reverse: If by pixel scaling you get a higher distance, would you then want to not use that scaling in favor of low-balling it to how far the human eye can see?
Honestly there's also the fact that you have to consider that most of the time this is just half of the distance they move, usually they vanish before they really begin falling so all feats regarding this are lowballs.
I don't see how that would make this a lowball. Like, yeah, they fly further. Doesn't make the speed with which they had to get there any higher.
Unless you want to take into account air resistance or something?
How about double the 5x10^-4 thing? I cut out a scrap of paper about 1 mm wide and put it on my finger, stretched my hand out and i could see it a bit. I might not notice it if my attention wasn't on it, but if it was moving away from me I think I'd be able to keep my eyes on it. And again I have awful eyesight
So 10^-3? I guess I can live with that.
 
Thank you for helping out. 🙏
 
We know it did get out of view in terms of the view of the camera through which we see, not out of the view of the human eye. The "how far we can see them last time" is just for the purpose of resolving trouble regarding resolution and render distance, where maybe even something that technically is 1 px big can in practice not be seen anymore.
I mean the "camera" is just clearly a construct that's meant to show us the scene, I don't know why you wouldn't assume it to be similar to the human eye. Like, a character flies away, he can no longer be seen and you're assuming that he's still close enough to be seen, visible to every other person watching, and just cannot be seen because of some technological issue? Like, you do know that if the animators wanted them to still be in sight they'd be very capable of representing that in a variety of different ways? It's clearly meant to represent them flying out of view.

(Also, render distance is absolutely not an issue; games can project single objects as far as they want. The reason draw distance exists is that showing an entire world is what's impossible. They can also just shrink the model instead to give the illusion of it moving as far away as they want. If an object goes out of view then they clearly just intended it to be out of view.)

More importantly (bolding because I want to focus the discussion on this): if something was the size of 1 px, or even a bit smaller, it would still show up in that pixel, given that what is represented in one px of a picture is just whatever color occupies the majority of that pixel. For something to not be visible in a camera at all, it would need to take up less than 50% of that pixel. Treating a pixel as a square with a side length of 1, the square root of 0.5 px^2 is 0.71 px, which is the maximum side length a roughly square object can have while still not appearing in that pixel.

Comparing how much distance one would get using the resolution thing with the distance you'd get using vision range:
  • 1.8 * 1080/[0.71*2*tan(70deg/2)] = 1955.155 meters
  • 1.8 / (10^-3) = 1800 meters
As you can see, your method gets a higher result than mine. If you still think it should be used, I guess I don't oppose it given that it's pretty close either way, but just saying.
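The two numbers in that comparison can be reproduced as follows (a sketch under the assumptions stated in the post: a 1.8 m character, a 1080 px tall frame, a 70° vertical field of view, the 0.71 px cutoff, and the 10^-3 rad vision threshold):

```python
import math

OBJ_HEIGHT_M = 1.8  # human-sized character

# camera/resolution method: distance at which the object spans 0.71 px
# of a 1080 px tall frame with a 70 degree vertical field of view
fov_rad = math.radians(70)
d_pixel = OBJ_HEIGHT_M * 1080 / (0.71 * 2 * math.tan(fov_rad / 2))

# vision-range method: distance at which the angular size drops to 1e-3 rad
d_vision = OBJ_HEIGHT_M / 1e-3

print(round(d_pixel, 3))  # ~1955.155 meters
print(d_vision)           # ~1800 meters
```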
Think about it in reverse: If by pixel scaling you get a higher distance, would you then want to not use that scaling in favor of low-balling it to how far the human eye can see?
That line of reasoning doesn't make any sense. If I can visibly tell that the object got further than the normal range of vision then yes I would obviously use that, given that it's provable that it did, but the opposite is just applying a random assumption to the scene with no actual visible evidence.
I don't see how that would make this a lowball. Like, yeah, they fly further. Doesn't make the speed with which they had to get there any higher.
Unless you want to take into account air resistance or something?
I don't want to just because that'd be a pain, I'm just bringing up that it's another factor we're not considering. But whatever, not a big deal.
So 10^-3? I guess I can live with that.
Yep. I feel like you could still see smaller than that but it's not the worst assumption. I'd also say 2 x 10^-4 rad x 2 = 4 x 10^-4 rad for people with enhanced/peak human eyesight, based off the ideal vision thing I showed in OP.
 
Estimated time of arrival to evaluate? Probably half a business year's worth.
 
I mean the "camera" is just clearly a construct that's meant to show us the scene, I don't know why you wouldn't assume it to be similar to the human eye. Like, a character flies away, he can no longer be seen and you're assuming that he's still close enough to be seen, visible to every other person watching, and just cannot be seen because of some technological issue? Like, you do know that if the animators wanted them to still be in sight they'd be very capable of representing that in a variety of different ways? It's clearly meant to represent them flying out of view.
Those would be valid arguments if we were talking about a scenario in which the scene made clear that an actual human can not see the character anymore.
As it stands we are talking about a scenario in which we do not know whether or not someone can not see them anymore, but only know that they are not drawn on the screen anymore. The latter, by no means, indicates the former in any way or form, especially since generally the animators have no interest in animating some speck of dust that could technically be visible but is of no importance. (And if it's smaller than resolution they would of course be incapable unless they zoom in on the object they don't care about)
(Also, render distance is absolutely not an issue; games can project single objects as far as they want. The reason draw distance exists is that showing an entire world is what's impossible. They can also just shrink the model instead to give the illusion of it moving as far away as they want. If an object goes out of view then they clearly just intended it to be out of view.)
Yeah, that's just... not true. Games very much just have a generic, non-object-specific variable for when objects are loaded. In gameplay, an object in Minecraft despawning or vanishing into the fog has little to do with human vision; a pig vanishes just as a golem does.
More importantly (bolding because I want to focus the discussion on this): if something was the size of 1 px, or even a bit smaller, it would still show up in that pixel, given that what is represented in one px of a picture is just whatever color occupies the majority of that pixel. For something to not be visible in a camera at all, it would need to take up less than 50% of that pixel. Treating a pixel as a square with a side length of 1, the square root of 0.5 px^2 is 0.71 px, which is the maximum side length a roughly square object can have while still not appearing in that pixel.
I mean, that's again not true if we are talking about video games. Truly averaging the color in some region would require integrals; way too calculation-heavy.
Video game cameras tend to use finite sampling.

And hand-animated stuff... goes by the eye of the animator, which probably isn't that precise.

Honestly, chances are video compression would erase a single pixel coloured differently anyway.

On top of that come the whole scaling considerations I already mentioned. We would need to figure out what the native resolution of a video is to do this. Because you can upscale a 480p video to 4K, and you can see how the problem with the idea of using 1 px (or half a pixel) as the ultimate measure would arise if you used the 4K video.
Comparing how much distance one would get using the resolution thing with the distance you'd get using vision range:
  • 1.8 * 1080/[0.71*2*tan(70deg/2)] = 1955.155 meters
  • 1.8 / (10^-3) = 1800 meters
As you can see, your method gets a higher result than mine. If you still think it should be used, I guess I don't oppose it given that it's pretty close either way, but just saying.
I have no idea how you got the former number and how it would relate to what I propose, given that what I propose varies based on multiple variables that are generally not fixed at all.
That line of reasoning doesn't make any sense. If I can visibly tell that the object got further than the normal range of vision then yes I would obviously use that, given that it's provable that it did, but the opposite is just applying a random assumption to the scene with no actual visible evidence.
The point is: you consider the camera as a legitimate, not eye-bound thing when treating it as a camera gives higher values, but argue that the camera is indicative of the limitations of the human eye when that interpretation gives higher values. You treat the camera differently based on the scenario.
Consistent positions would be to either assume the camera always is indicative of the human eye (i.e. if the camera can see it, it must be close enough for a human eye to see it. If the camera can't see it, it must be far away enough for the human eye not to see it.) or it is never indicative of the human eye (i.e. if the camera can't see it, that doesn't mean the human eye couldn't. If the camera can see it, it doesn't mean the human eye can).
I'm obviously in favor of the latter, but the point I was making here is that you equalize the eye to the camera only if it gives higher results, which is inconsistent.
Yep. I feel like you could still see smaller than that but it's not the worst assumption. I'd also say 2 x 10^-4 rad x 2 = 4 x 10^-4 rad for people with enhanced/peak human eyesight, based off the ideal vision thing I showed in OP.
For peak human sight that's ok.
For superhuman sight I would use straight up 2 x 10^-4. Little reason to lowball a character that can see electrons or something.
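The threshold values the thread converges on can be summed up as a sketch (hypothetical names; the radian values are the ones agreed on in the posts above):

```python
# angular-size cutoffs in radians, per the discussion in this thread
VISION_THRESHOLDS_RAD = {
    "average human": 1e-3,        # agreed compromise value
    "peak/enhanced human": 4e-4,  # 2 x the ideal-vision figure
    "superhuman": 2e-4,           # ideal-vision figure, used straight up
}

def vision_range(obj_size_m, tier):
    # distance at which an object of obj_size_m shrinks to the cutoff angle
    return obj_size_m / VISION_THRESHOLDS_RAD[tier]

print(vision_range(1.8, "average human"))  # ~1800 meters for a 1.8 m character
print(vision_range(1.8, "superhuman"))     # ~9000 meters
```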
 
Those would be valid arguments if we were talking about a scenario in which the scene made clear that an actual human can not see the character anymore.
As it stands we are talking about a scenario in which we do not know whether or not someone can not see them anymore, but only know that they are not drawn on the screen anymore. The latter, by no means, indicates the former in any way or form, especially since generally the animators have no interest in animating some speck of dust that could technically be visible but is of no importance. (And if it's smaller than resolution they would of course be incapable unless they zoom in on the object they don't care about)
If something is moving away and it can no longer be seen, the assumption is that it moved too far away to be seen. This really shouldn't be a point of contention.
Yeah, that's just... not true. Games have very much just a generic non object specific variable for when objects are loaded. In gameplay, an object in Minecraft despawning or vanishing into the fog has little to do with human vision and a pig vanishes just as a golem does.
This is in the context of a cutscene or a specific animation which sending someone flying that far would almost always have to be, obviously that doesn't apply to normal gameplay.
On top of that come the whole scaling considerations I already mentioned. We would need to figure out what the native resolution of a video is to do this. Because you can upscale a 480p video to 4K, and you can see how the problem with the idea of using 1 px (or half a pixel) as the ultimate measure would arise if you used the 4K video.

I have no idea how you got the former number and how it would relate to what I propose, given that what I propose varies based on multiple variables that are generally not fixed at all.
I ang-sized a 0.71 pixel large object. What ARE you proposing then? You haven't been doing a good job of making that clear. That we just pixel-scale off the last frame in which they're visible? Because that's awful and sometimes will end up with massively low results, especially if something is moving so fast that it vanishes in a couple of frames (which means that the frame before they vanish will be a lot closer to the screen, given the limited framerate).
The point is: you consider the camera as a legitimate, not eye-bound thing when treating it as a camera gives higher values, but argue that the camera is indicative of the limitations of the human eye when that interpretation gives higher values. You treat the camera differently based on the scenario.
Consistent positions would be to either assume the camera always is indicative of the human eye (i.e. if the camera can see it, it must be close enough for a human eye to see it. If the camera can't see it, it must be far away enough for the human eye not to see it.) or it is never indicative of the human eye (i.e. if the camera can't see it, that doesn't mean the human eye couldn't. If the camera can see it, it doesn't mean the human eye can).
I'm obviously in favor of the latter, but the point I was making here is that you equalize the eye to the camera only if it gives higher results, which is inconsistent.
If I can visibly see that the camera is showing me things further than the human eye could see (come to think of it I don't know how that'd actually be possible) then I'm not "considering" that, I just know it. The default position is that it's indicative of the human eye, that doesn't mean it can't be changed depending on context.
For peak human sight that's ok.
For superhuman sight I would use straight up 2 x 10^-4. Little reason to lowball a character that can see electrons or something.
Alright
 
If something is moving away and it can no longer be seen, the assumption is that it moved too far away to be seen. This really shouldn't be a point of contention.
Moved too far to be seen by the person or thing that is seeing it.
If an object moves out of sight of a human, we have to work with human eye values.
When something moves out of sight of a dog, we should work with dog sight values.
If something moves out of the sight of an actual camera, we need to work with actual camera sight values.
When something moves out of sight of a virtual camera, we need to work with virtual camera sight values.

This really shouldn't be a point of contention.
This is in the context of a cutscene or a specific animation which sending someone flying that far would almost always have to be, obviously that doesn't apply to normal gameplay.
If you can demonstrate that the cutscene uses altered values, you can use those. If it uses the same values, use the same values.

I see no proper reason to not use the distances the game actually portrays, instead of subjective interpretations of what the game might have meant to show but didn't.
I ang-sized a 0.71 pixel large object.
How? Such an ang-sizing result is dependent on image height and resolution and stuff.
What ARE you proposing then? You haven't been doing a good job of making that clear. That we just pixel-scale off the last frame in which they're visible? Because that's awful and sometimes will end up with massively low results, especially if something is moving so fast that it vanishes in a couple of frames (which means that the frame before they vanish will be a lot closer to the screen, given the limited framerate).
What is the alternative? Tracking down the "original" resolution for some scan, and when some anime is re-released in 4K all such feats become 10x higher? That's not a good idea.

I think I proposed in the past that instead we could use something like the thickness of a line as basis for manga scans. That would at least work for those. Although for 3D model stuff that is less practical.
If I can visibly see that the camera is showing me things further than the human eye could see then I'm not "considering" that, I just know it. The default position is that it's indicative of the human eye, that doesn't mean it can't be changed depending on context.
Yet you propose to not accept video game render limits as evidence of it not being the human eye, despite us seeing that those don't work like the human eye.
Heck, you can tell by the screen resolution whether or not something has human eye resolution.

If you say evidence overrules the assumption, then the evidence to overrule it is always given by the fact that the resolution we see the camera to have does not match. You can see that the camera cannot show you things with the same resolution as the eye.

But long story short, assuming cameras are human eyes is a bad assumption. Cameras are mostly treated as cameras.
(come to think of it I don't know how that'd actually be possible)
Given a high-enough resolution or large enough object, it's bound to happen.
 
Moved too far to be seen by the person or thing that is seeing it.
If an object moves out of sight of a human, we have to work with human eye values.
When something moves out of sight of a dog, we should work with dog sight values.
If something moves out of the sight of an actual camera, we need to work with actual camera sight values.
When something moves out of sight of a virtual camera, we need to work with virtual camera sight values.
But you said it yourself that we shouldn't use resolution, so no, we're not using the camera sight values (which your method isn't even doing, for the record; using the last visible distance runs into way more issues than that and is usually not directly related to resolution). Beyond that, there's a big fallacy here in that, unlike all of the other examples, a virtual camera isn't an actual object, just an abstraction to show the scene to the viewer.
I see no proper reason to not use the distances the game actually portrays, instead of subjective interpretations of what the game might have meant to show but didn't.
If an object is shown flying out of view it's not a subjective interpretation to say it flew out of view.
How? Such an ang-sizing result is dependend on image height and resolution and stuff.
I... included the resolution in the formula, I used 1920x1080, obviously it's going to vary but that's the most common nowadays. It's not the method I'd want anyways, I just hadn't understood yours. Though quite frankly I would still find it preferable to yours as it at least fits the spirit of the feat.
Yet you propose to not accept video game render limits as evidence of it not being the human eye, despite us seeing that those don't work like the human eye.
Heck, you can tell by the screen resolution whether or not something has human eye resolution.
A render limit isn't an issue directly pertaining to the camera, and it definitely isn't meant to communicate a limitation of the POV, it's just a tech limitation. It's absolutely not the same thing as actively seeing that something is X meters away.
If you say evidence overrules the assumption, then the evidence to overrule it is always given, by the fact that the resolution we see the camera to have does not match. You see that the camera can not show you things with the same resolution as the eye.
It doesn't need to; by that logic the object is still flying beyond what the camera can show, but your method does not calculate that. It just calculates the last visible moment of the object, which is a massive misrepresentation of the actual distance, far more than my method or even the resolution thing.
But long story short, assuming cameras are human eyes is a bad assumption. Cameras are mostly treated as cameras.
I'm sorry I don't know what to say, that just isn't true. There is literally never a moment in most serious works of fiction where a camera is anything but a construct to show the scene to the viewer, it's not an actual object and you shouldn't assume its limitations to be canonical to the way scenes are portrayed.

I'm sorry, your method is just inherently fallible. Let's take a hypothetical, a character is hit so hard (in a videogame, movie, anime or whatever) that they fly into orbit. Not only that, however, but they fly so fast that their flight path happens in less than one frame. If you go frame by frame, you see the blow connect, then you skip forward by one frame and they're gone completely, then a few frames later the ol' twinkle in the sky happens. This is undoubtedly an impressive feat, obviously the intention is that they flew really far away, but using your method this feat would be completely impossible to calculate and would have to be ignored completely. To me that isn't a sign of a good method.
 
That's uh, enough support I think, does anyone think DT's side works better?
 
I'd like to state my neutrality.

I think there'd probably be some ideal way to apply Armor's method to certain situations, and DT's to others, but I have no idea what guidelines should be drafted for that. So I don't really care which one we end up leaning towards.
 
It doesn't seem like anyone else wishes to input here. D'you think we can apply this? The verdict overall seems pretty decisive.
 
Flashlight and DT are the only ones to disagree, while there are 6 agreeing.

And even then, Flashlight supported using one arc-minute (which gives slightly higher results), and DT supported using 10^-3 radians (which gives slightly lower results), so their disagreements aren't that extreme.

With all that in mind, applying this is probably fine.
 
Nice, well uh, I guess the only real thing to change staff-wise is the one Common Feat Reference, right?
 