Thread starter · #41
so uh @DontTalkDT
'Pparently he was supposed to be free on the 15th but radio silence.
> 'Pparently he was supposed to be free on the 15th but radio silence.

Because there are like 20 threads I'm tagged in. I have been free since the 15th (which isn't to say I'm not doing lots of stuff in my holidays that isn't hanging out here), but that doesn't mean I can get to everything immediately.
I've tagged him for other important threads as well, but no dice.
> I disagree, I think the twinkle is meant to indicate that the character is no longer in view in-verse. There's also videogame cases in which the model/sprite straight-up just disappears no matter how much you turn up the resolution.

I don't think the twinkle necessarily indicates they are not visible anymore. More of a stylistic thing, I believe. Much less do I think they would have a specific degree of "not visible" in mind. As in, would it really be specifically not visible to a focussed observer?
> I looked into this and I'm actually not sure this would end up playing a role. Assuming 1080px of screen height (which admittedly isn't going to be the resolution of older movies/animation) and 0.5px for the size of the character in question (which imo is better than 1px, because even if the object was slightly less than one full px, by occupying most of that pixel it would still "fill it"), then the distance for a human-sized character ends up being 1.8 / 0.5 * 1080 = 3888 meters of distance, which is more than you'd get from my method. This is without considering higher resolutions which are very common nowadays, most movies nowadays are 4096 x 2160 and basically any videogame can be cranked as high as your PC can handle.

Well, as said, I would not go for 1 px (as that means we would have to figure out the original resolution, not to mention that the unit is not meaningful given how video and some image compression works) but rather the distance to the last time we see them.
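For reference, the 3888 m figure is just the simple size-per-pixel ratio from the quote above. A minimal sketch (the function name is just for illustration, and this deliberately ignores FOV, since the quoted back-of-the-envelope method does too):

```python
# Simple ratio used in the quote above: an object spanning `object_px` pixels
# of a `screen_px`-tall frame is placed at distance = height / object_px * screen_px.
def pixel_ratio_distance(height_m, object_px, screen_px):
    return height_m / object_px * screen_px

# 1.8 m character filling 0.5 px of a 1080 px-tall frame:
print(pixel_ratio_distance(1.8, 0.5, 1080))  # 3888.0
```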
> As far as resolution is concerned: We had a debate on how to handle the problem of things technically getting different values based on resolution before. Not sure if that thread ever had an official end. But when you think of video games, I think the most reasonable thing is to use the last point where you can see them as low end. That might still change with resolution, but then you can just take the highest the video game has to offer.
> For other media, like anime, that should be resolution independent.

This is really not viable in my opinion especially for videogames because you end up running into engine limitations here, a lot of games were meant to be viewed on a CRT or on a tiny handheld screen where tiny models/sprites would just be completely invisible, so developers may have chosen to just remove them rather than have them move the whole distance away. Animation would also run into similar issues, a lot of shows that do this are aimed at children, and as a result they have lower budget. It's also usually the case that the character gets so far that they are not in view at all anymore and then, a few moments later, the twinkle happens. That's clearly getting so far away that they cannot be seen anymore.
> Anyway, as far as your method's validity is concerned in general: Yes, I think it works. But, I think it only works if given proper time to focus. Like, I remember the eye test I did for my driver's license and when it came to the small letters, I had to focus for several seconds to be able to make them out. And that is for something that stands still.

Yeah, but the characters would be following the object as it flies away, so they'd definitely be focusing on it. It'd be harder to track than an object standing still but it's close enough.
> So for feats of something flying off into the distance, I think a greater angular value would probably be appropriate. What that could be? Idk. I once used half the angle of an outstretched pinky in a calc for that purpose, since I figured it a reasonable low end for what one can relatively easily see even when it moves around and one isn't greatly focussed. But there are probably better baselines one could use.

Idk, choosing "half a pinky" is pretty random and you could definitely easily see something that appeared much smaller than that. I prefer a mild high-ball that has scientific basis than a huge low-ball that's based on a completely arbitrary value. Like, looking at something that appears to be about 1/10 of my pinky from where I'm sitting right now, I'm fairly confident that I could track something like that if it was several times smaller than it, even in motion. And I have small hands and awful eyesight.
> This is really not viable in my opinion especially for videogames because you end up running into engine limitations here, a lot of games were meant to be viewed on a CRT or on a tiny handheld screen where tiny models/sprites would just be completely invisible, so developers may have chosen to just remove them rather than have them move the whole distance away. Animation would also run into similar issues, a lot of shows that do this are aimed at children, and as a result they have lower budget. It's also usually the case that the character gets so far that they are not in view at all anymore and then, a few moments later, the twinkle happens. That's clearly getting so far away that they cannot be seen anymore.

Can't say much more than that I disagree.
> Yeah, but the characters would be following the object as it flies away, so they'd definitely be focusing on it. It'd be harder to track than an object standing still but it's close enough.

I'm really not sure I would call it a "mild" high-ball. I mean, 5x10^-4 rad equates (using the same approx. formula as the page) to seeing an object that is 3 mm large over a distance of 6 m. That's the size of a single poppy seed. I'm fairly sure even when my eyes were good that would have needed some concentration and time. Or about holding a 0.5 mm object on your outstretched hand (i.e. about 1 m away). That's like a single grain of sand.
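Both of those examples do work out to the same 5x10^-4 rad figure under the small-angle approximation. A quick sanity check (not part of either side's proposal, just verifying the arithmetic):

```python
# Small-angle approximation: angular size in radians is roughly size / distance.
def angular_size_rad(size_m, distance_m):
    return size_m / distance_m

# 3 mm poppy seed at 6 m, and 0.5 mm grain of sand at 1 m: both ~5e-4 rad.
print(round(angular_size_rad(0.003, 6.0), 6))
print(round(angular_size_rad(0.0005, 1.0), 6))
```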
> Idk, choosing "half a pinky" is pretty random and you could definitely easily see something that appeared much smaller than that. I prefer a mild high-ball that has scientific basis than a huge low-ball that's based on a completely arbitrary value. Like, looking at something that appears to be about 1/10 of my pinky from where I'm sitting right now, I'm fairly confident that I could track something like that if it was several times smaller than it, even in motion. And I have small hands and awful eyesight.
> Can't say much more than that I disagree.

I mean, if the object disappears completely then they clearly went way further than "how far we can see them last time", especially with the context that they're already moving away at all times. Like, if they get so far away that they disappear they should just be assumed to be so small (from the POV) that they're no longer visible, what we do "see happen" is that we... can't see them anymore. It did get out of view, and we know it did via distance. I don't really think this is a shaky assumption at all.
If we have nothing conclusive of the thing getting out of view, scaling by how far we can see them get away seems more reasonable. Why make an assumption if you can go by what you see happen?
I don't think using "they couldn't animate it smaller" as justification for a shaky assumption is valid. It just doesn't make the alternative better.
> And I fully agree that half a pinky is somewhat arbitrary. I'm generally open for other suggestions.

How about double the 5x10^-4 thing? I cut out a scrap of paper about 1 mm wide and put it on my finger, stretched my hand out and I could see it a bit. I might not notice it if my attention wasn't on it, but if it was moving away from me I think I'd be able to keep my eyes on it. And again I have awful eyesight.
> I mean, if the object disappears completely then they clearly went way further than "how far we can see them last time", especially with the context that they're already moving away at all times. Like, if they get so far away that they disappear they should just be assumed to be so small (from the POV) that they're no longer visible, what we do "see happen" is that we... can't see them anymore. It did get out of view, and we know it did via distance. I don't really think this is a shaky assumption at all.

We know it did get out of view in terms of the view of the camera through which we see, not out of the view of the human eye. The "how far we can see them last time" is just for the purpose of resolving trouble regarding resolution and render distance, where maybe even something that technically is 1 px big can in practice not be seen anymore.
> Honestly there's also the fact that you have to consider that most of the time this is just half of the distance they move, usually they vanish before they really begin falling so all feats regarding this are lowballs.

I don't see how that would make this a lowball. Like, yeah, they fly further. Doesn't make the speed with which they had to get there any higher.
> How about double the 5x10^-4 thing? I cut out a scrap of paper about 1 mm wide and put it on my finger, stretched my hand out and I could see it a bit. I might not notice it if my attention wasn't on it, but if it was moving away from me I think I'd be able to keep my eyes on it. And again I have awful eyesight.

So 10^-3? I guess I can live with that.
> We know it did get out of view in terms of the view of the camera through which we see, not out of the view of the human eye. The "how far we can see them last time" is just for the purpose of resolving trouble regarding resolution and render distance, where maybe even something that technically is 1 px big can in practice not be seen anymore.

I mean the "camera" is just clearly a construct that's meant to show us the scene, I don't know why you wouldn't assume it to be similar to the human eye. Like, a character flies away, he can no longer be seen and you're assuming that he's still close enough to be seen, visible to every other person watching, and just cannot be seen because of some technological issue? Like, you do know that if the animators wanted them to still be in sight they'd be very capable of representing that in a variety of different ways? It's clearly meant to represent them flying out of view.
> Think about it in reverse: If by pixel scaling you get a higher distance, would you then want to not use that scaling in favor of low-balling it to how far the human eye can see?

That line of reasoning doesn't make any sense. If I can visibly tell that the object got further than the normal range of vision then yes I would obviously use that, given that it's provable that it did, but the opposite is just applying a random assumption to the scene with no actual visible evidence.
> I don't see how that would make this a lowball. Like, yeah, they fly further. Doesn't make the speed with which they had to get there any higher.

I don't want to just because that'd be a pain, I'm just bringing up that it's another factor we're not considering. But whatever, not a big deal.
Unless you want to take into account air resistance or something?
> So 10^-3? I guess I can live with that.

Yep. I feel like you could still see smaller than that but it's not the worst assumption. I'd also say 2 x 10^-4 rad x 2 = 4 x 10^-4 rad for people with enhanced/peak human eyesight, based off the ideal vision thing I showed in OP.
> I mean the "camera" is just clearly a construct that's meant to show us the scene, I don't know why you wouldn't assume it to be similar to the human eye. Like, a character flies away, he can no longer be seen and you're assuming that he's still close enough to be seen, visible to every other person watching, and just cannot be seen because of some technological issue? Like, you do know that if the animators wanted them to still be in sight they'd be very capable of representing that in a variety of different ways? It's clearly meant to represent them flying out of view.

Those would be valid arguments if we were talking about a scenario in which the scene made clear that an actual human can not see the character anymore.
> (Also render distance is absolutely not an issue, games can project single objects as far as they want, the reason draw distance exists is that showing an entire world is what's impossible. They can also just shrink the model instead to give the illusion of it moving as far away as they want. If an object goes out of view then they clearly just intended it to be out of view)

Yeah, that's just... not true. Games have very much just a generic non object specific variable for when objects are loaded. In gameplay, an object in Minecraft despawning or vanishing into the fog has little to do with human vision and a pig vanishes just as a golem does.
> More importantly (bolding because I want to focus the discussion on this) if something was the size of 1 px, or even a bit smaller, they'd still show up in that pixel, given that what is represented in one px of a picture is just what color is in the majority of that pixel. For something to not be visible in a camera at all they'd need to take up less than 50% of that pixel. Treating a pixel as a square with a side length of 1, square root of 0.5 px^2 is 0.71 px, which is the max size that a roughly square object would need to be to not appear in that pixel.

I mean, that's again not true if we are talking about video games. True averaging the color in some region would require integrals. Way too calculation heavy.
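The 0.71 px figure in the quote is just the square root of one half: a roughly square object covering less than 50% of a pixel's area must have a side shorter than sqrt(0.5) px. A one-line check:

```python
import math

# Side length of a square with area 0.5 (in pixel units): sqrt(0.5) ~ 0.707 px.
print(round(math.sqrt(0.5), 2))  # 0.71
```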
> Comparing how much distance one would get using the resolution thing with the distance you'd get using vision range:
> - 1.8 * 1080/[0.71*2*tan(70deg/2)] = 1955.155 meters
> - 1.8 / (10^-3) = 1800 meters
> As you can see, your method gets a higher result than mine. If you still think it should be used I guess I don't oppose it given that it's pretty close either way but, just saying.

I have no idea how you got the former number and how it would relate to what I propose, given that what I propose varies based on multiple variables that are generally not fixed at all.
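The two quoted numbers can be reproduced as follows (assuming, per that post, a 1.8 m character, 1080 px frame height, a 70° vertical FOV for the ang-size line, and the 10^-3 rad threshold for the vision line):

```python
import math

height_m, screen_px, fov_deg = 1.8, 1080, 70.0

# Ang-size method: distance at which the character spans 0.71 px,
# using the standard angular-size formula with a 70 degree vertical FOV.
d_angsize = height_m * screen_px / (0.71 * 2 * math.tan(math.radians(fov_deg / 2)))

# Vision-threshold method: small-angle limit of 10^-3 rad.
d_vision = height_m / 1e-3

print(round(d_angsize, 3))  # ~1955.155
print(round(d_vision))      # 1800
```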
> That line of reasoning doesn't make any sense. If I can visibly tell that the object got further than the normal range of vision then yes I would obviously use that, given that it's provable that it did, but the opposite is just applying a random assumption to the scene with no actual visible evidence.

The point is: You consider the camera as a legitimate not eye-bound thing when it gives higher values, but argue that the camera is indicative of the limitations of the human eye if that gives higher values. You treat the camera differently based on the scenario.
> Yep. I feel like you could still see smaller than that but it's not the worst assumption. I'd also say 2 x 10^-4 rad x 2 = 4 x 10^-4 rad for people with enhanced/peak human eyesight, based off the ideal vision thing I showed in OP.

For peak human sight that's ok.
> Those would be valid arguments if we were talking about a scenario in which the scene made clear that an actual human can not see the character anymore.

If something is moving away, and it can no longer be seen, the assumption is that it moved too far away to be seen. This really shouldn't be a point of contention.
As it stands we are talking about a scenario in which we do not know whether or not someone can not see them anymore, but only know that they are not drawn on the screen anymore. The latter, by no means, indicates the former in any way or form, especially since generally the animators have no interest in animating some speck of dust that could technically be visible but is of no importance. (And if it's smaller than resolution they would of course be incapable unless they zoom in on the object they don't care about)
> Yeah, that's just... not true. Games have very much just a generic non object specific variable for when objects are loaded. In gameplay, an object in Minecraft despawning or vanishing into the fog has little to do with human vision and a pig vanishes just as a golem does.

This is in the context of a cutscene or a specific animation, which sending someone flying that far would almost always have to be, obviously that doesn't apply to normal gameplay.
> To that come the whole scaling considerations I already mentioned. We would need to figure out what the native resolution for a video is to do this. Because you can upscale a 480px video to 4k and you see how the problem with the idea of using 1px (or half a pixel) as ultimate measure would happen if you use the 4k video.

I ang-sized a 0.71 pixel large object. What ARE you proposing then? You haven't been doing a good job at making that clear. That we just pixel scale off the last frame in which they're visible? Because that's awful and sometimes will end up with massively low results, especially if something is moving so fast that it vanished in a couple of frames (which means that the frame previous to them vanishing will be a lot closer to the screen, given the limited framerate).
> I have no idea how you got the former number and how it would relate to what I propose, given that what I propose varies based on multiple variables that are generally not fixed at all.
> The point is: You consider the camera as a legitimate not eye-bound thing when it gives higher values, but argue that the camera is indicative of the limitations of the human eye if that gives higher values. You treat the camera differently based on the scenario.

If I can visibly see that the camera is showing me things further than the human eye could see (come to think of it I don't know how that'd actually be possible) then I'm not "considering" that, I just know it. The default position is that it's indicative of the human eye, that doesn't mean it can't be changed depending on context.
Consistent positions would be to either assume the camera always is indicative of the human eye (i.e. if the camera can see it, it must be close enough for a human eye to see it. If the camera can't see it, it must be far away enough for the human eye not to see it.) or it is never indicative of the human eye (i.e. if the camera can't see it, that doesn't mean the human eye couldn't. If the camera can see it, it doesn't mean the human eye can).
I'm obviously in favor of the latter, but the point I was making here is that you equalize the eye to the camera only if it gives higher results, which is inconsistent.
> For peak human sight that's ok.

Alright.
For superhuman sight I would use straight up 2 x 10^-4. Little reason to lowball a character that can see electrons or something.
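Putting the thresholds from this exchange together, the maximum distances at which a 1.8 m character would stay visible work out as below (small-angle approximation; these cutoffs are this thread's proposals, not an established standard):

```python
# Distance = height / angular threshold (small-angle approximation).
height_m = 1.8
thresholds = {
    "normal human (1e-3 rad)": 1e-3,
    "peak human (4e-4 rad)": 4e-4,
    "superhuman baseline (2e-4 rad)": 2e-4,
}
for label, rad in thresholds.items():
    print(f"{label}: {height_m / rad:.0f} m")  # 1800 m, 4500 m, 9000 m
```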
> if something is moving away, and it can no longer be seen, the assumption is that it moved too far away to be seen. This really shouldn't be a point of contention.

Moved too far to be seen by the person or thing that is seeing it.
> This is in the context of a cutscene or a specific animation which sending someone flying that far would almost always have to be, obviously that doesn't apply to normal gameplay.

If you can demonstrate that the cutscene uses altered values you can use those. If it uses the same value, use the same values.
> I ang-sized a 0.71 pixel large object.

How? Such an ang-sizing result is dependent on image height and resolution and stuff.
> What ARE you proposing then? You haven't been doing a good job at making that clear. That we just pixel scale off the last frame in which they're visible? Because that's awful and sometimes will end up with massively low results, especially if something is moving so fast that it vanished in a couple of frames (which means that the frame previous to them vanishing will be a lot closer to the screen, given the limited framerate).

What is the alternative? Tracking down the "original" resolution for some scan and when some anime is re-released in 4k all such feats become 10x higher? That's not a good idea.
> If I can visibly see that the camera is showing me things further than the human eye could see then I'm not "considering" that, I just know it. The default position is that it's indicative of the human eye, that doesn't mean it can't be changed depending on context.

Yet you propose to not accept video game render limits as evidence of it not being the human eye, despite us seeing that those don't work like the human eye.
> (come to think of it I don't know how that'd actually be possible)

Given a high-enough resolution or large enough object, it's bound to happen.
> Moved too far to be seen by the person or thing that is seeing it.

But you said it yourself that we shouldn't use resolution, so no we're not using the camera sight values (which your method isn't even doing for the record, using the last visible distance falls into way more issues than that and is usually not directly related to resolution). Beyond that there's a big fallacy here in that unlike all of the other examples, a virtual camera isn't an actual object, just an abstraction to show the scene to the viewer.
If an object moves out of sight of a human, we have to work with human eye values.
When something moves out of sight of a dog, we should work with dog sight values.
If something moves out of the sight of an actual camera, we need to work with actual camera sight values.
When something moves out of sight of a virtual camera, we need to work with virtual camera sight values.
> I see no proper reason to not use the distances the game actually portrays, instead of subjective interpretations of what the game might have meant to show but didn't.

If an object is shown flying out of view it's not a subjective interpretation to say it flew out of view.
> How? Such an ang-sizing result is dependent on image height and resolution and stuff.

I... included the resolution in the formula, I used 1920x1080, obviously it's going to vary but that's the most common nowadays. It's not the method I'd want anyways, I just hadn't understood yours. Though quite frankly I would still find it preferable to yours as it at least fits the spirit of the feat.
> Yet you propose to not accept video game render limits as evidence of it not being the human eye, despite us seeing that those don't work like the human eye.

A render limit isn't an issue directly pertaining to the camera, and it definitely isn't meant to communicate a limitation of the POV, it's just a tech limitation. It's absolutely not the same thing as actively seeing that something is X meters away.
Heck, you can tell by the screen resolution whether or not something has human eye resolution.
> If you say evidence overrules the assumption, then the evidence to overrule it is always given, by the fact that the resolution we see the camera to have does not match. You see that the camera can not show you things with the same resolution as the eye.

It doesn't need to, by that logic the object is still flying beyond what the camera can show, but your method does not calculate that, it just calculates the last visible moment of the object, which is a massive misrepresentation of the actual distance, far more than my method or even the resolution thing.
> But long story short, assuming cameras are human eyes is a bad assumption. Cameras are mostly treated as cameras.

I'm sorry I don't know what to say, that just isn't true. There is literally never a moment in most serious works of fiction where a camera is anything but a construct to show the scene to the viewer, it's not an actual object and you shouldn't assume its limitations to be canonical to the way scenes are portrayed.
> I'd like to restate my support for Armor's proposal here

Same from me.