
Regarding the 2ms perception thing

DontTalkDT

Continuation of this basically.
Or revision of these standards.

Long story short: Assuming that any movement that happened faster than a human could perceive happened within 2 ms is nonsense.

The Arguments​

Let's start with where the number comes from.
I will just quote the prior thread (See the OP of that thread for the full version):
What are the speed limits of what we see?
Under normal conditions, our vision has certain limitations that can change depending on different conditions and circumstances. To understand these limitations, researchers conduct experiments under controlled conditions, often involving flickering lights.

When a light flickers very quickly, there is a minimum speed at which we no longer perceive the flickering and the light looks steady instead. This specific minimum flickering speed is called the critical flicker fusion frequency (cFFF) or Flicker Fusion Threshold (FFT). At this point, our eye receptors can't detect the individual changes in what we see anymore. Instead, they merge together, creating a steady signal that our brain interprets as continuous light.


What Does Flickering Lights Have To Do With Perceiving Motion?
People with higher Flicker Fusion thresholds (the ability to no longer see high speed flickering in flickering lights but instead see them as steady light) tend to have better accuracy in what they perceive in general. In other words, people who can see fast “flickering lights” at a higher speed than others, can see fast moving objects at a higher speed than others. [1]

This concept is also applied to TVs, cinemas, and other similar forms of media. That’s why frame-by-frame viewing needs to happen at frequencies close to the limit at which we can perceive “flickering lights” so we can view motion on TV fluidly. At a higher speed than our limit, things would look like they’re skipping frames. Animals that have better vision than humans need to “see fast flickering lights” better to survive in the wild. According to this study, Birds that have a great ability to see “flicker lights” can see fast-moving objects better than humans, especially when these objects could potentially collide with them in the air. [2]
I.e. it comes from the timeframe in which a light can turn on and off repeatedly, while the human can still see that the light flickers.

Correlating that to movement is already somewhat of a stretch. The test is about being able to tell that any change is happening, not any proper perception.

Now, in the past thread, the following things were brought up as evidence that it is valid:
Certain neurons in the visual system are specialized to detect sudden changes in luminance, and they play a fundamental role in the initial stages of processing visual motion. These neurons are part of a hierarchical system that progressively analyzes direction, speed, and overall motion velocities. So like… together, these processes contribute to our perception of objects in motion.


Hence why Flicker Fusion Frequency (FFF) is important for identifying moving objects too since that studies the literal first stage of perception — rapid changes in luminance. The other stages happen in and around that same timeframe of 1/50th to 1/90th of a second.


Oh and the study linked in the OP also implies FFT is important for animals to identify approaching targets fast enough.
What this neglects is that, while FFT is of importance to perception, at no point is it identified as the unique deciding factor. There is correlation, perhaps, but it is never identified as the actual timeframe of perception of a moving object.
As such it is more a natural high end than a necessary low end. Something that leaves the FOV in less time than the FFT would be invisible. However, it is never said that an object which does not do so is visible.

I will demonstrate in the following that treating blitzing as generally occurring in under 2 ms (1/500 s), as the standard suggests, is neither scientifically proven nor generally reasonable.




First, I wish to direct attention back to the primary source that the 2ms idea is based on.
The ability to detect flicker fusion is dependent on: (1) frequency of the modulation, (2) the amplitude of the modulation, (3) the average illumination intensity, (4) the position on the retina at which the stimulus occurs, (5) the wavelength or colour of the LED, (6) the intensity of ambient light [3,5,6] or (7) the viewing distance and (8) size of the stimulus [7]. Moreover, there are also internal factors of individuals that can affect CFF measures: age, sex, personality traits, fatigue, circadian variation in brain activity [4] and cognitive functions like visual integration, visuomotor skills and decision-making processes [3]. The performance of CFF in humans and predators alike is dependent on these factors. Umeton et al. also describe preys’ features like a pattern or even the way they move as relevant in perceiving the flicker fusion effect [2].
This paragraph identifies a total of 19 variables which influence the FFT and hence the 2 ms. Patterns and movement behaviour are pointed out for the case of hunting prey, but also things like frequency, the precise position on the retina, and colour.
None of these factors are adequately handled by the current standard, as they are not handled at all. It's a constant number of 2ms that is not modified by any of these parameters.
Consider, furthermore, that some of these factors may be influenced in a movement scenario in a way different from a flickering light test. A moving object will have its light reach different parts of the eye's retina over time. The light that a stationary object would project entirely onto one part of the retina is instead distributed over a greater area, making the effective average illumination intensity lower, as the same amount of light is spread out more.
Given such considerations, I am confident in saying that the source itself would likely not agree with this usage of the value.
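To illustrate that last point with a toy model (my own rough sketch, not something taken from the source): if the object moves during the integration window, the same light is smeared over a larger swept area, so each retinal point is lit for only a fraction of the window.

# Toy model of the intensity-dilution argument above (my own illustration, not
# from the cited study): light from a moving object is spread over the area it
# sweeps, so each point receives it for a shorter share of the window.

def dilution_factor(object_width_m, speed_m_s, window_s=0.002):
    """Fraction of the window during which a given point is covered by the object,
    assuming it sweeps past at constant speed."""
    swept_width_m = object_width_m + speed_m_s * window_s
    return object_width_m / swept_width_m

print(dilution_factor(0.5, 0.0))    # 1.0 -> stationary object, full exposure
print(dilution_factor(0.5, 250.0))  # 0.5 -> each point only gets about half the light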

I will furthermore point out that the study only produced the 2 ms value, i.e. such high results, with a specific setup: "The high spatial frequency image is first “bright on the left half of the frame and black on the right”, and then inverted. We observed the effect described in this paper whenever we displayed an image containing an edge and its inverse in rapid succession."
I.e. they specifically inverted the colours, and specifically in an uneven way. Equating this to real-life conditions, which may have edges but no inversion of said edges, seems faulty.

In general, I feel like the translation of the FFT to blitzing is not scientifically validated. Human vision is highly complex and, one has to consider, the brain does a lot of post-processing on every image. For example, it pieces together several separate quick looks into a proper picture, a single one of which takes about 200 ms on its own. Speed-based illusions, for instance, are also not fully explained by the current state of science. It is a common idea that the brain's visual system simulates events before perception, which can lead to illusions of movement that last longer than the mentioned 2 ms. It is also known that, at least under certain circumstances, the brain will suppress blur to produce a better image, although that is not this specific case.
The point is, hand-waving the equivalence between flickering lights and movement as scientific is highly questionable due to the complexity of the subject.

Another problem comes from a translation of the FFT to a movement scenario.
[Illustrative image]
If we are dealing with a movement, how much of the movement would have to be completed to equal the length of one flicker?
Until they are out of the field of view? Until they reappear in the vision? I don't think so. Reasonably speaking, as long as you do not occupy the same place for longer than one flicker you should be below the threshold, as no part of the eye sees you for longer than one flicker. It would be the equivalent of the left side of the image flickering below the threshold and, right after, the right side doing the same. Considering that most humans are about half a meter wide, that would severely reduce the necessary speed. (Half a meter in 2 ms is subsonic+.)
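To put rough numbers on that reasoning (a back-of-the-envelope sketch using the ~0.5 m body width and the 2 ms window from above):

# Back-of-the-envelope check of the argument above: if no spot may be occupied
# for longer than one 2 ms "flicker", the required speed is just width / window.

body_width_m = 0.5          # rough human width, as assumed above
flicker_window_s = 1 / 500  # the 2 ms window from the standard

required_speed = body_width_m / flicker_window_s
print(f"{required_speed:.0f} m/s")         # 250 m/s
print(f"Mach {required_speed / 343:.2f}")  # ~Mach 0.73, i.e. merely subsonic+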
Of course, that is my argument as I see it as reasonable, not proper science. No scientific source describes how exactly one would apply flickering to invisibility via speed exactly, so that much is a given. But such uncertainties are exactly the problem.

Logistical problems aside, I believe the current standard is also in conflict with real-life observations. I will provide some videos in the following, but since cameras with their shutter speed and videos with their FPS obviously interfere with the observation, they are more for illustration purposes than actual proof.

First, let's talk about fans.

It is documented that when fans move fast enough, they turn practically invisible to, at minimum, some people.
Why is that a problem? Well, fan blades stay constantly in the field of vision. They pass through any spot you focus on many times a second, and for fans with thick blades they also occupy each point for a significant percentage of the total time. Furthermore, they are merely subsonic. A standing fan runs at about 1300 to 2100 revolutions per minute, which equates to 22 to 35 revolutions per second. If we assume a blade takes up 1/6th of the total circle, then the time in which it passes through one spot is 1/132 s to 1/210 s, which is more than twice the timeframe we get from the FFT.
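For transparency, the arithmetic behind those numbers (the 1/6 blade coverage is the assumption stated above):

# Quick check of the fan numbers above; blade coverage of 1/6 of the circle is
# the stated assumption.

blade_fraction = 1 / 6

for revs_per_s in (22, 35):                  # ~1300 and ~2100 rpm
    dwell_s = blade_fraction / revs_per_s    # time a blade spends over one fixed spot
    print(f"{revs_per_s} rev/s -> one pass takes 1/{1 / dwell_s:.0f} s "
          f"({dwell_s * 1000:.1f} ms, vs. the 2 ms FFT window)")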
This does not appear to be in line with the standard. The section assumes that a character cannot stay invisible for longer than said 1/500th of a second (otherwise, it would not make sense to conclude that a character who seemingly teleports must do so in that timeframe), but this example demonstrates that one can in principle stay invisible indefinitely.

Next, let's talk about guns.

A Glock has a muzzle velocity of 375 m/s. That means that in 2 ms the bullet covers a distance of 75 cm. In other words, by the current standard, we would assume that if someone shoots you in the face with a Glock from an arm's length away, you can see the bullet emerge from the gun.
And... that is wrong. Even from a greater distance, you don't simply see bullets. (Granted, bullets slow down with distance, and if you stand far away they appear to move slower, so I don't think it's per se impossible to see a bullet fly under the right conditions.)

Now, bullets are smaller than humans, but at less than a meter away one can still clearly see them. I have no doubt that size does play a role in whether or not you can see a fast object, even in the regime where you can see the object while it stands still. Consider also that humans are not of one uniform size, and the apparent size of a person decreases with distance. Meaning distance is a factor as well.
However, notice that the standards do not account for that in any way. In fact, the flicker test does not account for size. (And there might be a difference between the relevance of the size of a light source and size of a passively illuminated object)
And, when translating from the flicker test results to actual movement... well, there it would need to be considered as well.
In total what we see by this example is that either 1/500s is not enough or the value at least needs a component to correct for size.
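To make that concrete (a sketch of my own, using the muzzle velocity from above and basic geometry; the widths are illustrative values, and the point is only how much apparent size varies while the standard stays a flat 2 ms):

import math

# Distance covered within the standard's 2 ms window, plus a comparison of
# apparent (angular) sizes for a bullet and a person.

muzzle_velocity_m_s = 375.0
window_s = 1 / 500
print(f"Bullet travel in one window: {muzzle_velocity_m_s * window_s * 100:.0f} cm")  # 75 cm

def angular_size_deg(width_m, distance_m):
    """Apparent size of an object of the given width seen from the given distance."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

print(f"9 mm bullet at 1 m:   {angular_size_deg(0.009, 1.0):.2f} degrees")
print(f"0.5 m person at 1 m:  {angular_size_deg(0.5, 1.0):.1f} degrees")
print(f"0.5 m person at 20 m: {angular_size_deg(0.5, 20.0):.1f} degrees")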

Conclusion​

I believe I have sufficiently demonstrated that 2 ms is not a plausible general-purpose timeframe. Furthermore, I believe I have shown that a general-purpose timeframe is impossible. A number of factors appear relevant, such as size/distance, illumination, contrast, etc.
Furthermore, I believe it to be plausible that such a method needs to account for the possibility of a character being invisible through speed, without leaving the field of view and for extended periods of time. A theory that does not account for that appears incomplete to me.

As such I propose to revert things to the prior standards on the matter.



Staff Votes:
  • Agree: DMUA, Flashlight237, Therefir, Damage3245, DontTalkDT
  • Split: DarkDragonMedeus
  • Agreed but wanted to wait until more arguments are made: Dalesean027
  • Disagree: DarkGrath
 
I personally agree with the rollback. I don't see how fans could be used for reaction-blitzing anyway unless you're somehow actively trying to eyeball their RPM.
 

I will put this feat here since, once I used the accepted timeframe, I feel it's overextended. Like sure, the key point here is that the attack vanished from people's sight, but it comes out almost 3 times faster than any high-velocity gun, which is narratively nonsensical in-verse.

Thus, I am also in agreement with the thread. There are many other factors that should be taken into account, but I don't feel the feat should be more than Subsonic+ (which is a consistent speed level within the verse).
 
Wish it didn’t take a site standard change to get your attention.

Finally we can actually engage in proper discussion.

You gave me a lot to unpack here so I’ll take some time to respond.
 
Disagree. Although I find it rude that you didn't respond in the original thread and think it was quite disrespectful, I will debate some of your points.

1. Fans aren't FTE; we can see fans, although they are extremely blurry. For a feat to use 2 ms the character needs to completely disappear from sight.
2. Reaction speed is very different from perception speed. Just because it takes a human 200 ms to react to something doesn't mean it takes them that long to perceive something. A human being able to perceive things in 2 ms makes complete sense.
3. This is flawed. A bullet is well over 50x smaller than a human, and every time a bullet is shot it's surrounded by a small smoke cloud, so again this point makes no sense in my eyes. Being less than a metre away doesn't change anything because, again, bullets are 10x smaller than the screens that were likely used for the 2 ms perception tests.
 
1. Fans aren't FTE; we can see fans, although they are extremely blurry. For a feat to use 2 ms the character needs to completely disappear from sight.
Disagree from experience. Fans can very much disappear completely from sight and I know I'm not the only one who sees that. Light conditions obviously matter to an extent.

2. Reaction speed is very different from perception speed. Just because it takes a human 200 ms to react to something doesn't mean it takes them that long to perceive something. A human being able to perceive things in 2 ms makes complete sense.
I assume you are referring to the Saccade thing based on the 200ms figure, in which case that is a non-sequitur. That has nothing to do with reacting. It's about how the brain renders the image we actually see.

3. This is flawed. A bullet is well over 50x smaller than a human, and every time a bullet is shot it's surrounded by a small smoke cloud, so again this point makes no sense in my eyes. Being less than a metre away doesn't change anything because, again, bullets are 10x smaller than the screens that were likely used for the 2 ms perception tests.
You ignore the part I already brought up in the OP, where I acknowledge that size is obviously a variable, but also clearly a variable that is not at all handled by the standard.

If you acknowledge that size is a factor, then you must recognize that the value would have to be different for a human 1m away and a human 20m away performing the feat, as more distant people look smaller.
 
Disagree from experience. Fans can very much disappear completely from sight and I know I'm not the only one who sees that. Light conditions obviously matter to an extent.
I can vouch this part.

As you move, your brain instinctively sifts through the information captured by your eyes. Without this filtering, your visual experience would resemble footage from a first-person camcorder – shaky and disjointed. In such a scenario, your eyes wouldn't adequately lock onto an object, preventing you from observing it for a meaningful duration; instead, you would perceive only the gaps and voids between objects.
 
Disagree from experience. Fans can very much disappear completely from sight and I know I'm not the only one who sees that. Light conditions obviously matter to an extent.
I have really never seen a fan disappear. There is always a small blur. Do you have proof of fans completely disappearing?
I assume you are referring to the Saccade thing based on the 200ms figure, in which case that is a non-sequitur. That has nothing to do with reacting. It's about how the brain renders the image we actually see.
Doing some tests on Human Benchmark, the 200 ms figure is also from reaction speed. Can you link the study? What timeframe are you suggesting is correct for a blitz?
You ignore the part I already brought up in the OP, where I acknowledge that size is obviously a variable, but also clearly a variable that is not at all handled by the standard.

If you acknowledge that size is a factor, then you must recognize that the value would have to be different for a human 1m away and a human 20m away performing the feat, as more distant people look smaller.
You ignored my part about the smoke. Also a human from 20 metres away is still massively larger than a bullet.
 
Disagree from experience. Fans can very much disappear completely from sight and I know I'm not the only one who sees that. Light conditions obviously matter to an extent.
Even then another thing I'm realizing is that you don't need to be literally invisible to have a character go "huh, they're gone", and anime generally presents that kind of thing as them dispersing into a blur rather than outright being removed as visual stimuli inbetween frames

Like, I can look over at my fan right now and, yeah I can see a vaguely white blur at the bottom, but if something like that was running directly at me at 50 meters per second or something similarly too high, I definitely wouldn't say I'd see it
You ignore the part I already brought up in the OP, where I acknowledge that size is obviously a variable, but also clearly a variable that is not at all handled by the standard.

If you acknowledge that size is a factor, then you must recognize that the value would have to be different for a human 1m away and a human 20m away performing the feat, as more distant people look smaller.
It's more poorly handled I'd say, given that it does take time to mention that particularly small targets shouldn't have this calculation used at them at all, but still
 
Even then another thing I'm realizing is that you don't need to be literally invisible to have a character go "huh, they're gone", and anime generally presents that kind of thing as them dispersing into a blur rather than outright being removed as visual stimuli inbetween frames
This has been talked about. The 1/500 timeframe is only used for afterimage feats and feats where characters completely disappear.
 
And I'd argue that proper proof for qualifying is a nigh unto nonexistent minority, one that certainly doesn't warrant me waking up one morning and finding 3 separate calcs using it almost the moment it's allowed
 
@DMUA
Even then another thing I'm realizing is that you don't need to be literally invisible to have a character go "huh, they're gone", and anime generally presents that kind of thing as them dispersing into a blur rather than outright being removed as visual stimuli

@ShadowSythez's response
This has been talked about. The 1/500 timeframe is only used for afterimage feats and feats where characters completely disappear.

This sounds like circular discussion right now. Let's drop it here, shadow. DMUA made it very clear that it is not a requirement in fiction. A character's in-story perception of events is not the kind of accurate, objective interpretation the study relies on to begin with. So even equating the two is flawed.

To rephrase: someone saying "they are gone" is not equivalent to "they are actually gone".
 
And I'd argue that proper proof for qualifying is a nigh unto nonexistent minority, one that certainly doesn't warrant me waking up one morning and finding 3 separate calcs using it almost the moment it's allowed
Yeah lol. What else can you expect from a big change like this.
@DMUA


@ShadowSythez's response


This sounds like circular discussion right now. Let's drop it here. DMUA made it very clear that it is not a requirement in fiction. A character's in-story perception of events is not the kind of accurate, objective interpretation the study relies on to begin with. So even equating the two is flawed.
I do not understand your point.
 
Just a friendly reminder that this is a Calc Group Discussion thread, so the ones that should be providing responses here are exclusively those in the Calc Group (plus Arnold since DontTalk specifically tagged him)

Edit: Of course, unless you have relevant info to provide. I often forget about that part and liken the CGM discussion threads to Staff Discussion threads.
 
I do not understand your point.
Actually, I should clarify: I flatly don't believe afterimages should count whatsoever

1/500 is for completely being unable to receive any visual input, even beyond the smears that would normally constitute something going too fast to properly see. If it's an afterimage or them turning into a blur and then disappearing, then they are still in fact getting that flicker, just not something they can see very well (and they'll likely not be able to see it the very moment after since this is about hyperspeed objects that would just move to another location)

Not to mention the fact that afterimages in general just aren't a consistent trope in terms of speed, with some just straight up making it a special technique ala Dragon Ball (which even doubles as a particular case where nobody's perception speed properly scales up with them and they have to rely on their ability to sense ki to track their opponents), or are just outright nonsense like the one AgK thing I saw where an afterimage persisted through time outright stopping

And in relation to Manga, I'm skeptical of treating any disappearing statements as 100% "their visual stimuli is gone" when a smear from something below the 1/500 threshold would still be a failure to really see the subject (even if they technically still see a subject-colored blur vaguely near their last location, but in the heat of the moment I don't think anyone would consider that they'd be able to see whatever's moving there)

It's the same idea, I just think that the standards as written aren't being as scrupulous as they should be when keeping in mind the specifics of the study
 
Actually, I should clarify: I flatly don't believe afterimages should count whatsoever

1/500 is for completely being unable to receive any visual input, even beyond the smears that would normally constitute something going too fast to properly see. If it's an afterimage or them turning into a blur and then disappearing, then they are still in fact getting that flicker, just not something they can see very well (and they'll likely not be able to see it the very moment after since this is about hyperspeed objects that would just move to another location)

Not to mention the fact that afterimages in general just aren't a consistent trope in terms of speed, with some just straight up making it a special technique ala Dragon Ball (which even doubles as a particular case where nobody's perception speed properly scales up with them and they have to rely on their ability to sense ki to track their opponents), or are just outright nonsense like the one AgK thing I saw where an afterimage persisted through time outright stopping

And in relation to Manga, I'm skeptical of treating any disappearing statements as 100% "their visual stimuli is gone" when a smear from something below the 1/500 threshold would still be a failure to really see the subject (even if they technically still see a subject-colored blur vaguely near their last location, but in the heat of the moment I don't think anyone would consider that they'd be able to see whatever's moving there)

It's the same idea, I just think that the standards as written aren't being as scrupulous as they should be when keeping in mind the specifics of the study
Yeah makes sense.

I do disagree with your disappear statement though. I think you should take a character's words more seriously than you're taking them. For example, "SHE COMPLETELY DISAPPEARED FROM MY SIGHT" should constitute use of the 1/500 timeframe. Although I do agree there should be some more restrictions around it, I would not erase it from existence.
 

I will put this feat here since, once I used the accepted timeframe, I feel it's overextended. Like sure, the key point here is that the attack vanished from people's sight, but it comes out almost 3 times faster than any high-velocity gun, which is narratively nonsensical in-verse.

Thus, I am also in agreement with the thread. There are many other factors that should be taken into account, but I don't feel the feat should be more than Subsonic+ (which is a consistent speed level within the verse).
I also feel like this is unfounded, since you're singling out one verse and basing your argument on "this timeframe shouldn't work because it goes against the narrative and it doesn't seem faster than subsonic". This problem can be fixed through some debate with the evaluator and the verse supporters, and I do not think you should base declining this timeframe on the fact that it gives one verse outlierish results, which can be disputed at any time.
 
I’m still drafting a response to the OP but I will say this:

@DMUA I don’t mind adjusting the standards to add more restrictions to the statement bit. However, the standards I outlined on the page already cover pretty much all concerns about afterimage techniques (and 90% of the OP). If you can help me improve the guidelines by referencing them (if this timeframe gets passed), that would be much appreciated.

Also, @ImmortalDread, in the future, please leave your opinions on specific series out of site-wide revisions.
 
I interpret it differently. I added an instance while explaining my agreement, which is totally normal. This is how an argument works: add a reason, explain it with an instance, introduce counterpoints, and conclude from it.

I was not engaging in any details regarding it. So I am pretty sure I knew where my line was, which is supported by the fact that I did not respond to the person above who dismissed my entire point.
 
I interpret it differently. I added an instance while explaining my agreement, which is totally normal. This is how an argument works: add a reason, explain it with an instance, introduce counterpoints, and conclude from it.

I was not engaging in any details regarding it. So I am pretty sure I knew where my line was, which is supported by the fact that I did not respond to the person above who dismissed my entire point.
Yes, but the problem is you based it off 1 calculation. It's a large site-wide revision, so for you to say you disagree with it because of 1 verse is unfounded.
 
Just a friendly reminder that this is a Calc Group Discussion thread, so the ones that should be providing responses here are exclusively those in the Calc Group (plus Arnold since DontTalk specifically tagged him)

Edit: Of course, unless you have relevant info to provide. I often forget about that part and liken the CGM discussion threads to Staff Discussion threads.
^^ I was actually respecting this reminder the whole time, until you quoted me. I ask you: stop commenting if you have nothing relevant to add. Whether I base it on one instance or 10 examples won't change my stance, and it will definitely not be good to argue about semantics. My point is far more important than the instance itself, even if it was dependent on it.

I will stop commenting here. And I recommend you do so as well.
 
"SHE COMPLETELY DISAPEARED FROM MY SIGHT"
Nobody really says this, and I still have to question whether the heat of the moment really gives them a chance to distinguish "they turned into a vaguely colored smear in this general area which then moved elsewhere" and "outright physically being removed from my visual stimuli" in the case of a manga
 
Hi, I was given permission from Ultima and DMUA to post.

We should talk about what is trying to be nailed down. So let's talk about some gross categories of Perception/Movement.

The first thing that needs to be addressed is the influence of unconscious processing and priming. Your conscious ability to react to things is not the only aspect of your brain helping here. My brain knows that if I am walking down the street at 12 pm, there is an increased risk of threats in the environment. As such, I will most likely (pending experience and environment) be more attuned and reactive. Similarly, cues and primers are direct things that can influence our overall ability to be reactive. A UFC fighter has to literally train for months, go over the rules with his opponent/ref, sign documents about the fight, and then have a ridiculously tense walkout/stare-off. All of this primes the brain to pay attention to the opponent, and obviously, factors like entering the ring will also have this effect. Why do I bring this up? Because in these instances, it has been observed that there is statistically significant brain activity happening prior to even a stimulus being presented to a subject. This brings me to my next point: levels of perception and mechanisms for these.

The levels I will discuss here are as follows: perception (most notably visual here), perceptual decision-making, pre-motor processes, and motor activity. As for the mechanisms, we will largely be focusing on different brain wave states and their associated functions, and talk briefly about different parts of the brain. The two major studies I will be referencing can be found here and here (and will be referenced as Roge and Murphy respectively).

There is a distinct difference between perception and perceptual decision-making, and any motor processes that occur (Roge):
To act adaptively within our environment, our decisions need to be translated into motor behaviour. Neural oscillations in the beta frequency range (13–30 Hz) have been associated with an inhibited state of the motor cortex during rest (Engel and Fries, 2010; Pfurtscheller et al., 1996). In agreement with this, pre-movement reductions in beta power in pre- and primary motor areas have been observed consistently. This presumably facilitates, or enables, motor responding.
In addition to motor preparation, alpha and beta power in motor cortical areas also appear to reflect the emergence of a categorical choice. During perceptual decision making, lateralization of alpha and beta power indicates the upcoming decision several seconds before the overt movement, when choices were mapped to left- or right-hand button presses (Donner et al., 2009)
As we can clearly see, this gap is pretty significant and ultimately involves different brain wave states carrying out the activity. Perception essentially involves taking "noise" from the environment and analyzing it. So let's look at how this study differentiated between the two gross branches of brain activity
In two studies, we imposed a delay between presentation of the stimulus and an imperative cue that prompted participants to respond. This allowed us to separate the decision from motor processing temporally. Furthermore, a signal reflecting the accumulation of sensory evidence would be expected to show a ramping that is steeper when the quality of sensory evidence is high (Donner et al., 2009; Gold and Shadlen, 2007; Kiani and Shadlen, 2009; O'Connell et al., 2012; Ratcliff, 1978; Selen et al., 2012; Shadlen and Newsome, 2001; Werkle-Bergner et al., 2014).
The random-dot motion (RDM) discrimination paradigm was adjusted to involve a delay manipulation. Participants were asked to detect the direction of coherent motion within a noisy display of moving dots as fast and as accurately as possible (Fig. 1A). Responses were given with the left or right thumb on a customized response box.
Participants’ seating position was fixed at a distance of 100 cm away from a 22-inch LCD monitor with a refresh rate of 60 Hz. The parameters of the stimulus display were adjusted based on Pilly and Seitz (2009). The dots were presented in light grey (RGB: 224, 224, 224), in front of a black (RGB: 0,0,0) background. They were bounded within a circular aperture with a diameter of nine degrees of visual angle, which was placed at the centre of the screen. The dot diameter was three pixels and 254 dots were on screen per frame. We used a brownian motion dot kinematogram. Signal and noise dots were randomly selected at every frame. All dots moved with the same speed (6.47 visual degrees per second; 4.65 pixels per frame), but motion direction was random for noise dots. Only a proportion of the dots (signal dots) moved coherently in one of the two pre-defined directions: -90 deg (leftward) and +90 (rightward).
Here we can see that the researchers are clearly discriminating between perceptual processing and overt motor activity. The task designed (across two separate experiments) looks at reaction time to randomly generated dot movement on a 22-inch screen 100 cm in front of the participants, who then click buttons with each hand based on sensory input, with one experiment adding a delay and cueing aspect to account for discriminating the stimulus.
[Figure: the study's plotted data]

A big factor that isn't being discussed is the coherence and strength of the stimuli being analyzed.
In contrast, in the immediate variant, RT differences were present between all coherence levels (pcorr < .01), except level 1 and level 2 (pcorr = .04578). In the immediate variant, RTs were determined by decision time varying with evidence strength and non-decision time, i.e., early stimulus processing, motor preparation and execution. In contrast, in the delay variant decision making was nearly completed before the response cue such that RTs were mainly influenced by stimulus processing of the response cue, movement preparation and execution.
Because decision-making is boxed into two gross categories of perception and action, there exists what is called the speed-accuracy tradeoff (SAT). This is a pretty intuitive thing we all experience: rushing grants a boost to speed, but oftentimes accuracy, detail, and performance take a dip (outside of tests looking solely at speed, obviously), which has been observed (Murphy):
In the first experiment that we report, twenty-one individuals made two-alternative perceptual decisions about the dominant direction of motion of a cloud of moving dots32 (Fig. 1a). Each subject performed this task at a single level of discrimination difficulty that was tailored to their perceptual threshold, but under two levels of speed emphasis. In the ‘free response’ (FR) regime, subjects were under no external speed pressure, were instructed to be as accurate as possible, and were monetarily rewarded (penalized) for correct (incorrect) decisions. In the ‘deadline’ (DL) regime, the same task instructions and incentive scheme applied, with the addition of an especially heavy penalty—ten times that for an incorrect decision—if a decision was not made by 1.4 s after motion onset. The speed pressure imposed by this deadline led to faster median response times (RTs: DL = 0.70 ± 0.02 s; FR = 1.19 ± 0.07 s; t20 = 8.1, P < 1 × 10⁻⁶) and less accurate decision-making (DL = 77.8 ± 1.2%; FR = 86.8 ± 1.3%; t20 = 6.6, P < 1 × 10⁻⁵) relative to the FR regime (Fig. 1b).
With all of this in mind (I am unable to write a huge post going into more detail, as this topic is EXTREMELY complicated and it's my anniversary today/I'm currently at work), I don't think it looks very good for this proposal, for several reasons:
-Perception speed is not a singular bound quantity, and as we see, can quite literally range from negative time to speeds both greater than or less than .02s based on a multitude of factors.

-Some of the arguments presented as counter-evidence here don't really seem strong given the above. The bullet example falls completely flat on its face imo. To begin, a bullet is extremely small and obscured by the environmental impact it has upon leaving the chamber. Furthermore, in regards to perception, not only is the size of the object important but also the area of the retina that the object takes up. For instance, you will clearly see your TV much better sitting 5 ft away from it, compared to 30 ft. The television is taking up a significant amount of your retinal space and thus contributes to the coherence of the response. From most distances, a bullet is nigh imperceptible, and this is made worse by the fact that in order to increase the bullet's size on the retina, one must decrease the distance....which is obviously a much more impactful factor for your ability to dodge.

- Perception-reaction times are also impacted by the SAT and priming. Given that most instances being discussed in this thread involve human-sized objects disappearing from view within the span of a few meters, this would check multiple boxes that actually push the desired number deeper into the milliseconds, human bodies take up a substantial amount of retinal space, and most of these feats would directly involve circumstances in which the person being blitzed has some primer that he was going to be attacked, making his brain much more reactive.

Now I am garbage at math and want no parts, so this post was mostly to provide contextual info and address some of the arguments I found invalid. Not touching what the actual number should be so good luck on that front.
 
Whitnee had permission to post that once, so that post specifically can stay. Arnold also has permission since Arnold was the one who originally proposed it and should have permission to debate their own topic. I could see arguments on both sides and want to wait for Arnold, but overall leaning towards accepting what DontTalkDT proposes.
 
If you can help me improve the guidelines by referencing them (if this timeframe gets passed), that would be much appreciated.
I should also say to this, yeah

I think it should be much tighter on what counts as "disappearing" by precisely what goes under flicker fusion, because something like them dispersing into a blur or an afterimage isn't really surpassing the flicker threshold, since they're still identifying a change in light so to speak (At least, for a brief window before it disappears and they've moved to a totally different location)

It should be used when it's clear that the person who went missing simply was not a visual stimulus for any period
 
I just remembered this exists.

If I measure that right, it takes 26 frames at 60fps to turn when slowed down to 5% speed, which makes the timeframe it takes in reality about 0.022s or 22 milliseconds to turn.
Interesting insofar as you can view it close up (i.e. it doesn't have the same problem as bullets regarding size).
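For reference, the frame-to-time conversion used there:

# Converting the frame count above into real time.
frames = 26
fps = 60
playback_speed = 0.05  # footage slowed to 5% speed

real_time_s = (frames / fps) * playback_speed
print(f"{real_time_s * 1000:.0f} ms")  # ~22 ms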
-Perception speed is not a singular bound quantity, and as we see, can quite literally range from negative time to speeds both greater than or less than .02s based on a multitude of factors.
Negative time?
But yeah, it's not a singular quantity, which makes a treatment with one singular value inherently suspicious.
-Some of the arguments presented as counter-evidence here don't really seem strong given the above. The bullet example falls completely flat on its face imo. To begin, a bullet is extremely small and obscured by the environmental impact it has upon leaving the chamber. Furthermore, in regards to perception, not only is the size of the object important but also the area of the retina that the object takes up. For instance, you will clearly see your TV much better sitting 5 ft away from it, compared to 30 ft. The television is taking up a significant amount of your retinal space and thus contributes to the coherence of the response. From most distances, a bullet is nigh imperceptible, and this is made worse by the fact that in order to increase the bullet's size on the retina, one must decrease the distance....which is obviously a much more impactful factor for your ability to dodge.
Why does everyone ignore that I already addressed in the OP that the bullet's size is obviously relevant?
Now, bullets are smaller than humans, but at less than a meter away one can still clearly see them. I have no doubt that size does play a role in whether or not you can see a fast object, even in the regime where you can see the object while it stands still. Consider also that humans are not of one uniform size, and the apparent size of a person decreases with distance. Meaning distance is a factor as well.
However, notice that the standards do not account for that in any way. In fact, the flicker test does not account for size. (And there might be a difference between the relevance of the size of a light source and size of a passively illuminated object)
And, when translating from the flicker test results to actual movement... well, there it would need to be considered as well.
In total what we see by this example is that either 1/500s is not enough or the value at least needs a component to correct for size.
The point wasn't "haha, you can't see bullets so it must be less". It was that either it's less or the value definitely requires a size correction component that is not actually given or known.
Humans vary in size and distance makes them vary in apparent size.
If we accept that size is a relevant factor, it stands to reason that a small Asian female middle schooler standing 10 meters away and a tall European male adult standing 1 meter away need different values (i.e. an object that takes up 50° of the FOV and an object that takes up 20° of the FOV should differ). And it's unclear which of these the value of a flicker test would be applicable to.
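For illustration (the heights below are example values of my own, not from any standard), the apparent sizes in that hypothetical differ by roughly an order of magnitude:

import math

# Rough apparent-size comparison for the two hypothetical targets above. The
# heights are illustrative assumptions; the point is only that the share of the
# FOV varies drastically while the standard applies one fixed timeframe.

def angular_height_deg(height_m, distance_m):
    return math.degrees(2 * math.atan(height_m / (2 * distance_m)))

print(f"1.5 m tall person at 10 m: {angular_height_deg(1.5, 10.0):.1f} degrees of FOV")
print(f"1.9 m tall person at 1 m:  {angular_height_deg(1.9, 1.0):.1f} degrees of FOV")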
- Perception-reaction times are also impacted by the SAT and priming. Given that most instances being discussed in this thread involve human-sized objects disappearing from view within the span of a few meters, this would check multiple boxes that actually push the desired number deeper into the milliseconds, human bodies take up a substantial amount of retinal space
I will note that in the process of blitzing the person is most likely to leave the center of the field of view, towards parts where the retina has worse resolution and stuff, which is a relevant factor to keep in mind. It will also likely quickly stop being in the area / distance in which the eyes are actually focussed.
Even then another thing I'm realizing is that you don't need to be literally invisible to have a character go "huh, they're gone", and anime generally presents that kind of thing as them dispersing into a blur rather than outright being removed as visual stimuli inbetween frames

Like, I can look over at my fan right now and, yeah I can see a vaguely white blur at the bottom, but if something like that was running directly at me at 50 meters per second or something similarly too high, I definitely wouldn't say I'd see it
Absolutely. And if the blurring gets to a certain point one wouldn't actually be able to tell. I suspect the larger the area it's blurred over the stronger the effect (given the same time).
It's more poorly handled I'd say, given that it does take time to mention that particularly small targets shouldn't have this calculation used at them at all, but still
Did that end up in what is on the page right now? It's in the thread but I can't find it in the rules on the page.
But anyway, yeah. Excluding small or distant objects alone does not really solve the core issue.
I have really never seen a fan disappear. There is always a small blur. Do you have proof of fans completely disappearing?
Aside from anecdotal evidence? No. Like, ImmortalDread said they have that impression. There is also the video I posted, but since that's with a camera it means little one way or another.
Obviously, I have no study about people seeing fans.
Doing some tests on Human Benchmark, the 200 ms figure is also from reaction speed. Can you link the study?
The study for the stuff in the Wikipedia article I linked? It should be in Wikipedia's sources.
What timeframe are you suggesting is correct for a blitz?
One that is not constant, as it depends on a bunch of different variables. Which that is? I don't know. I have found no scientific research on the question of "how fast of an object can humans see", so I really have no qualified grounds to say what the correct function would be.
You ignored my part about the smoke.
You can add an extra 10cm to account for the bullet leaving the smoke before perception and it doesn't impact any point I'm making.
Although I'm fairly sure that the smoke actually is slower than the bullet and only occurs after it already left. And I think the smoke is usually not so dense that one couldn't look through it.
Also a human from 20 metres away is still massively larger than a bullet.
But clearly smaller than a human from 1 meter away and, assuming perception time decreases the larger the apparent size of the object, needs a lower perception time.
 
Negative time?
But yeah, it's not a singular quantity, which makes a treatment with one singular value inherently suspicious.
Yeah, as in neural activity is recorded prior to any stimulus that would elicit a reaction. The brain isn't just offline and suddenly triggered by something it magically finds threatening. The brain is constantly processing, and that is part of the reason we can "react" very fast in the first place (we're relatively bad at reacting).

I would agree, but that is kind of the nature of the game we play, not only with battleboarding but with science itself; there is always going to be a margin of error we have to accept. But the .02 s time was literally a median time for reactions in the study, with a strong +/- associated with the conditions. The fact that this study captured that mean sample using unremarkable humans is pretty substantial to the overarching point here, especially given the nature of the study and what they measured (i.e. tracking a dot's movement trajectory across space).
Why does everyone ignore that I already addressed in the OP that the bullet's size is obviously relevant?

The point wasn't "haha, you can't see bullets so it must be less". It was that either it's less or the value definitely requires a size correction component that is not actually given or known.
Humans vary in size and distance makes them vary in apparent size.
If we accept that size is a relevant factor, it stands to reason that a small Asian female middle schooler standing 10 meters away and a tall European male adult standing 1 meter away need different values (i.e. an object that takes up 50° of the FOV and an object that takes up 20° of the FOV should differ). And it's unclear which of these the value of a flicker test would be applicable to.
You misunderstood my point about the bullet size then, as I also brought up the point about retina size. A human is always going to scale pretty well relative to the human eye, given we have literally evolved around seeing other humans, their facial expressions, their body language, etc., and given the relative size of a human to the eye itself. A human is many times larger than the surface area of a bullet. Meaning you have to move a human substantially far back to begin to replicate having a bullet up close. A bullet is literally about the same size as, or marginally bigger than, the retina surface.

Here is an easy example. I can go to a car show and watch 200 mph cars speed by me no matter if I am in the front or back row. Why? I'm obviously not Subsonic+ in reaction speed. It's because of the distance from the object and the relatively large size of the car as it is moving. Now imagine trying to track a race of 200 mph flies racing around the track. We likely wouldn't be able to dodge them from much further distances than the car going the same speed.

Heck, I can't count the number of times that a fly has speedily escaped my vision; that doesn't mean it was moving faster than my actual processing speed.

I will note that in the process of blitzing the person is most likely to leave the center of the field of view, towards parts where the retina has worse resolution and stuff, which is a relevant factor to keep in mind. It will also likely quickly stop being in the area / distance in which the eyes are actually focussed.
Yeh, but we quite literally have values from the study looking into this very thing. The dot tracking they had to perform is extremely akin to how the eye processes dynamic movement, and the study also looked across different factors to look into the SAT in both studies. I'm not really seeing how this would impact the point proposed by Arnold's thread, given that any blitzing here would have to eclipse the .02 s timeframe.

Remember, the study outlines a stark difference between perception and reaction as noted. Those .02 and .07s times were reaction times after processing. Meaning perception speed is much faster.

Edit: Sorry, I responded out of habit without asking to post this response. Delete it if you feel it necessary.
 
I very much disagree with calling it nonsense. I believe it appears as nonsense to you because you probably don't understand it. Hopefully, I will be able to make you understand why I proposed this.




I.e. it comes from the timeframe in which a light can turn on and off repeatedly, while the human can still see that the light flickers.

Correlating that to movement is already somewhat of a stretch. The test is about being able to tell that any change is happening, not any proper perception.

There is a reason flickering lights are used for these sorts of experiments. The flicker fusion tests the brain's ability to detect transient changes in light, which would be transferred to other parts of the brain that process direction, speed, etc. [1]

Neurons that are selective to transient changes (i.e. a sudden change in luminance) within their receptive fields serve as the earliest stage of a visual motion processing hierarchy that extracts local directionality, speed, and global motion velocities to ultimately represent our experience of a moving object (Braddick et al., 2001, Pack and Born, 2001, Pack et al., 2003, Sincich and Horton, 2005).

I even went into detail in my last thread about how the brain begins perception by detecting changes in light. The V1 neurons are the first set of neurons that are involved in visual perception. Those neurons detect edges and changes in light, then send that information to other neurons that deal with speed, direction, location, etc., before it goes into decision-making (a.k.a. reaction).


Experts correlate these tests to movements all the time. Look at the primary source you quoted in the OP. The source itself correlated flicker fusion threshold to a bird's ability to see rapidly approaching obstacles. [2]

A high CFF threshold is crucial for flying animals (like pigeons—143 Hz or peregrine falcons—129 Hz) that need an efficient visual system, e.g., to detect rapidly approaching objects to avoid colliding with them




What this neglects is that, while FFT is of importance to perception, at no point is it identified as the unique deciding factor. There is correlation, perhaps, but it is never identified as the actual timeframe of perception of a moving object.
As such it is more a natural high end than a necessary low end. Something that leaves the FOV in less time than the FFT would be invisible. However, it is never said that an object which does not do so is visible.

FFT is merely a measurement of the limit of the brain's ability to detect transient changes in light. It's up to the expert to figure out what to use a measurement for, and it has been used for things that deal with motion (such as the ability of birds to detect rapidly moving objects that I outlined earlier, for instance) and things that probably have nothing to do with motion. FFT specifically focuses on the first stage of visual perception. To perception blitz a character, one has to move beyond all stages of visual perception in the brain.

An object that completes its movement within 0.002 s will not be seen at all. If it maintains speed and travels for more than 0.002 s, an afterimage may occur at that moment in time. If this goes on, then in the eyes of the observer the object moves in and out of view repeatedly: stroboscopic motion. I have outlined an idea on how to calculate speeds that involve these phenomena. If this gets accepted, I hope you will help fine-tune these ideas. Note: motion blurs don't count; the brain just adds them on its own. Besides, with motion blurs one can still tell the direction the object goes in, and so on. So they definitely won't qualify; the reaction time would be used instead in those sorts of speed-blitzing scenarios.
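As a rough sketch of the simplest case (illustrative distance only; the actual distance would come from the feat itself):

# Simplest conversion under the proposed timeframe: if the whole movement was
# completed while completely unseen, the speed is at least distance / (1/500 s).
# The 10 m below is a made-up example distance.

def min_speed_from_blitz(distance_m, window_s=1 / 500):
    """Lower-bound speed if the movement fit entirely inside one unseen window."""
    return distance_m / window_s

print(f"{min_speed_from_blitz(10.0):.0f} m/s")  # 5000 m/s for a 10 m example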




First, I wish to direct attention back to the primary source that the 2ms idea is based on.
This paragraph identifies a total of 19 variables which influence the FFT and hence the 2 ms. Patterns and movement behaviour are pointed out for the case of hunting prey, but also things like frequency, the precise position on the retina, and colour.
None of these factors are adequately handled by the current standard, as they are not handled at all. It's a constant number of 2ms that is not modified by any of these parameters.
Consider, furthermore, that some of these factors may be influenced in a movement scenario in a way different from a flickering light test. A moving object will have its light reach different parts of the eye's retina over time. The light that a stationary object would project entirely onto one part of the retina is instead distributed over a greater area, making the effective average illumination intensity lower, as the same amount of light is spread out more.
Given such considerations, I am confident in saying that the source itself would likely not agree with this usage of the value.

I am not sure how you arrived at 19 variables, but I am going to attempt to list out all 19 if I possibly can.
  1. Frequency of the modulation
  2. Amplitude of the modulation
  3. Average illumination intensity
  4. Position on the retina at which the stimulus occurs
  5. Wavelength or color of the LED
  6. Intensity of ambient light
  7. Viewing distance
  8. Size of the stimulus
  9. Age
  10. Sex
  11. Personality traits
  12. Fatigue
  13. Circadian variation in brain activity
  14. Cognitive functions (e.g., visual integration, visuomotor skills, decision-making processes)
  15. Pattern recognition
  16. Movement behavior
  17. Prey features
  18. Specific retinal patterns

How are most of these not covered by the standards I proposed in the previous thread? I will point each of them out for you below in bold.


  • Frequency of the modulation, Amplitude of the modulation, Wavelength or color of the LED, Intensity of ambient light, Age, Sex, Personality traits

These are adjusted to fit most normal visual conditions in the FFT experiment. The 2 millisecond timeframe is a mid end that comes from these experiments. Note that even when many experiments have slightly different conditions due to equipment and environmental differences, they all still agree with the range.

Age, sex, and personality traits barely have an effect, aside from the obvious fact that age often leads to decreased vision, which wouldn't qualify under the currently accepted guidelines. Sex and personality traits may affect it, but the result should definitely not deviate significantly from the range of values meant for humans in general.
  • Average illumination intensity, Intensity of ambient light.

Many experiments mimic typical visual conditions; otherwise they wouldn't get accurate and applicable results.


Guidelines
  • The observer's eyes must meet specific visual criteria. They should be able to focus clearly on the object before making any observations. Note that the flicker fusion threshold is dependent on the Field of View (FOV), with the optimal FOV being the center of view.
  • The flicker fusion threshold is influenced by the brightness intensity of the surroundings. Observations of feats in extremely dim or overly bright environments are not suitable for this method.
  • These guidelines are applicable only to observers with human eyes. For animals, specific studies detail the flicker fusion thresholds for various species. However, the range of studied animals is limited, and findings might not apply universally.
  • The object under observation should be unique. Duplications can distort frequency, creating illusions of speed.
  • Small or distant objects are challenging to monitor consistently. They are not recommended for reliable speed calculations.
  • Techniques that induce afterimages or illusions are not acceptable. There should be concrete evidence that any perceived speed or disappearance of the object is due to its actual speed. If there's any ambiguity, an alternative standard should be considered.
  • Any character with a perception speed differing from human standards is excluded, as this can result in skewed speed values.

  • Position on the retina at which the stimulus occurs

The same guidelines quoted above cover this as well.




  • Viewing distance, Size of the stimulus.

The same guidelines quoted above cover this as well.


  • Fatigue

Being so tired that your eyes become affected is definitely a factor I probably haven't applied in the standard. I’ll give you that.

  • Cognitive functions (e.g., visual integration, visuomotor skills, decision-making processes), Pattern recognition, Prey features, Specific retinal patterns, Movement behavior, Circadian variation in brain activity

Aside from the obvious fact that some of these pertained to other animals and not just humans in the primary source that you quoted

The performance of CFF in humans and predators alike is dependent on these factors. Umeton et al. also describe preys’ features like a pattern or even the way they move as relevant in perceiving the flicker fusion effect [2].

or the fact that abnormalities/disabilities like poor eyesight from the observer would obviously heavily affect visual perception,

I have addressed the rest in the last thread.

The same guidelines quoted above cover this as well.




I will furthermore point out that the study only produced such high results with a specific setup: "The high spatial frequency image is first “bright on the left half of the frame and black on the right”, and then inverted. We observed the effect described in this paper whenever we displayed an image containing an edge and its inverse in rapid succession."
I.e. they specifically inverted the colours, and specifically in an uneven way. Equalizing this to real life conditions, which may have edges but no inversion of said edges, seems faulty.

The setup was made to replicate real life objects that have edges. The inversion of colors was done to induce microsaccades (which happen very often when looking at things with edges). This reminds me of the argument you made about objects not being “flickering lights”; it's not about objects needing to have inversion of edges, since microsaccades occur when looking at things in motion (or in general, even). And as I said above, the simple act of motion causes transient changes in luminance, which is the whole point of FFT.

This is the same with colors. Blue is seen at a higher FFT and red at a lower FFT, both of which aren't far off range-wise, and all of which constitute a median value of around 500Hz still.



Another problem comes from a translation of the FFT to a movement scenario.
If we are having a movement, how far of a movement would have to be completed for it to equal the length of one flicker?
Until they are out of the field of view? Until they reappear in the vision? I don't think so. Reasonably speaking, as long as you do not occupy the same place for longer than a flicker you should be below the threshold, as no part of the eye sees you longer than one flicker. It would be the equivalent of the left side of the image flickering below the threshold and, right after, the right side flickering below the threshold. Considering that most humans are about half a meter wide, that would severely reduce the necessary speed (half a meter in 2ms is Subsonic+).
Of course, that is my argument as I see it as reasonable, not proper science. No scientific source describes exactly how one would apply flickering to invisibility via speed, so that much is a given. But such uncertainties are exactly the problem.
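For reference, a minimal sketch of the arithmetic behind that parenthetical, assuming a generic 0.5 m body width and the 2 ms (500 Hz) window under discussion:

```python
# Rough check of the "half a meter in 2 ms" remark.
# Assumed inputs: a generic 0.5 m body width and the 2 ms (500 Hz) window under debate.
body_width_m = 0.5
flicker_period_s = 0.002  # one flicker at 500 Hz

required_speed = body_width_m / flicker_period_s  # speed needed to vacate your own silhouette within one flicker
print(f"{required_speed:.0f} m/s")  # 250 m/s, i.e. in the Subsonic+ range
```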

As I explained above when I was talking about how the timeframe can be applied, I sort of already answered all these questions, but let's go through them one by one.

If we are having a movement, how far of a movement would have to be completed for it to equal the length of one flicker?

Any distance, as long as it is travelled within (or as fast as) the time length of one flicker at the FFT. In other words, this is only a timeframe, so the distance would have to be found using source texts, pixel scaling, or so on. Once you have the distance, you find the speed.

Until they are out of the field of view? Until they reappear in the vision?

You are speaking in terms of the observer. In this case the feat must be more central within the field of view, as expressed in my guidelines. In the case of how science applies this to motion, I already addressed it above.

Please note that speed blitzing feats do not always have to come from an observer's view; the narrative itself could state how the movement pans out. [1][2]

About the math bit here with human sizes, I don't understand how that's relevant tbh. Whatever sort of movement was calced there using the timeframe is fine by me. Although you claim the necessary speed is reduced from something, what is it reduced from? I don't remember proposing a speed value; I'm merely proposing a timeframe, and if Subsonic+ is the necessary speed then that's perfectly fine.






Logistical problems aside, I believe the current standard is also in conflict with real-life observations. I will say that while I will provide some videos in the following, since cameras with their shutter speed and videos with their FPS obviously mess with the observation, they are more for illustration purposes than actual proof.

First, let's talk about fans.
It is documented that when fans move fast enough, they turn practically invisible to, at minimum, some people.
Why is that a problem? Well, fan blades stay constantly in the field of vision. They pass through a spot focussed on many times a second, and fans with thick blades also occupy each point for a significant total percentage of the time. Furthermore, they are just subsonic. A standing fan runs at about 1300 to 2100 rounds per minute, which equates to 22 to 35 rounds a second. If we assume a blade takes up 1/6th of the total circle, then the time in which it passes through one spot is 1/132s to 1/210s, which is more than twice the time we have from FFT.
This appears not in line with the standard. The section assumes that a character can not stay invisible longer than said 1/500th of a second (otherwise, it would not make sense to conclude that a character that seemingly teleports must do so in that timeframe), but this example demonstrates that one can in principle stay invisible indefinitely.
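To make the fan numbers easy to re-check, here is a minimal sketch of that dwell-time arithmetic; the 1300 to 2100 RPM range and the 1/6-of-a-circle blade width are the assumptions stated in the paragraph above:

```python
# How long a fan blade occupies a single fixed spot per pass.
# Assumptions taken from the paragraph above: 1300-2100 RPM, blade spanning 1/6 of the circle.
blade_fraction = 1 / 6

for rpm in (1300, 2100):
    revs_per_s = rpm / 60                  # roughly 22 to 35 rotations per second
    dwell_s = blade_fraction / revs_per_s  # time one blade covers a given spot
    print(f"{rpm} RPM -> about {dwell_s * 1000:.1f} ms per pass")
# ~7.7 ms and ~4.8 ms respectively, both well above the 2 ms FFT window.
```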
Your calculations are correct, but you forgot one thing. A standing fan has 3 blades, not 1. So if we look at a single spot on the fan, 3 blades pass the area as opposed to 1, giving off the illusion that it's 3x faster. So your answer: 132Hz multiplied by 3 is 396Hz, and 210Hz multiplied by 3 is 630Hz… and that is why standing fans appear invisible.
Next, let's talk about guns.
A Glock has a muzzle velocity of 375 m/s. That means that in 2ms the bullet covers a distance of 75 cm. In other words, by the current standard, we would assume that if someone shoots you in the face with a Glock from an arm's length away, you can see the bullet emerge from the gun.
And... that is wrong. Even from a greater distance, you don't simply see bullets. (Given that bullets slow down with distance, and that from far away they appear to move slower, I don't think it's per se impossible to see a bullet fly under the right conditions.)
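A quick sketch of the bullet arithmetic above; the 375 m/s muzzle velocity and the 2 ms window are the only inputs:

```python
# Distance covered by the bullet within one 2 ms window.
muzzle_velocity_ms = 375.0  # m/s, the Glock figure used above
window_s = 0.002            # 2 ms (500 Hz)

print(f"{muzzle_velocity_ms * window_s * 100:.0f} cm")  # 75 cm within the window
```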
Now, bullets are smaller than humans, but at less than a meter away one can still clearly see them. Now, I have no doubt that size does play a role in whether or not you can see a fast object, even in the realms where you can see the object when it stands still. Consider, at that, that humans are not all one standard size and that the apparent size of a person decreases with distance. Meaning distance is also a factor.
However, notice that the standards do not account for that in any way. In fact, the flicker test does not account for size. (And there might be a difference between the relevance of the size of a light source and size of a passively illuminated object)
And, when translating from the flicker test results to actual movement... well, there it would need to be considered as well.
In total what we see by this example is that either 1/500s is not enough or the value at least needs a component to correct for size.
Bullets are small as hell. And yes the accepted standards already account for size. Small objects like bullets don’t qualify as they’re hard to keep track of in general.

Edit: made grammatical corrections. Don’t want DT to point out my minor spelling mistakes. That would ruin my entire argument.
 
Yeah, as in neural activity is recorded prior to any stimulus that would elicit a reaction. The brain isn't just offline and suddenly triggered by something it magically find threatening. The brain is constantly processing and a part of the reason we can "react" very fast in the first place (we're relatively bad at reacting).
That would kinda suggest that the measurement in that case does not relate to the actual reaction, though. So... probably not instances relevant to the debate.
I would agree, but that is kind of the nature of the game we play not only with battleboarding but with science itself; there is always going to be a margin of error we have to accept. But the .02s time was literally a median time for reactions in the study, with a strong +/- associated with the conditions. The fact that this study captured that mean sample utilizing unremarkable humans is pretty substantial to the overarching point here, especially given the nature of the study and what they measured (i.e. tracking a dot's movement trajectory across space).
Just to be clear, the standard we are talking about isn't .02s but 0.002s.
Anyway, that we are only dealing with a small margin of error here remains to be shown, I think. Reasonably speaking, I believe size has to produce a rather large margin of error, at least in general, as when the size approaches bullet size we see the value is way too low. That the error is small when we keep to less extreme examples needs further evidence.
You misunderstood my point about the bullet size then, as I also brought up the point about retina size. A human is always going to scale pretty well relative to the human eye, given we have literally evolved around seeing other humans, their facial expressions, their body language, etc., and given the relative size of a human to the eye itself. A human is many times larger than the surface area of a bullet. Meaning you have to move a human substantially far back to begin to replicate having a bullet up close. A bullet is literally about the same size or marginally bigger than the retina surface.

Here is an easy example. I can go to a car show and watch 200mph cars speed by me no matter if I am in the front or back row. Why? I'm obviously not Subsonic+ in reaction speed. It's because of the distance from the object and the relatively large size of the car as it is moving. Now imagine trying to track a race of 200mph flies racing around the track. We likely wouldn't be able to dodge them from much further distances than the car going the same speed.

Heck, I can't count the number of times that a fly has speedily escaped my vision; that doesn't mean it was moving faster than my actual processing speed.
What I have to disagree with on this is the idea that "a human is always going to scale pretty well relative to the human eye".
Like, let's say we have a 2m tall person standing 1 meter away from you. Let's approximate the angle of your vision they take up by a triangle, where one side is 2m, another is 1m, and the angle between them is 90° (as humans stand at 90° to the ground; kind of awkward for the location of the POV, but just as an example).
Then it takes up about 63.4° of your vision (corresponding angle in the triangle).
Now, same approach, but the person is a Japanese middle school girl, for whom a height of 1.55m is normal. And this time she stands 10m away. Now the angle of vision taken up is 8.811°.
That is a difference of x7 in angular size.

Maybe the numbers are more favourable in other scenarios, but the point is that the difference isn't so small that one should just handwave it away. As it stands, I'm not even sure what the baseline angular size would be.
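For anyone wanting to re-run the angular-size comparison, here is a minimal sketch of that triangle approximation (the person is modelled as a vertical line of the given height, standing perpendicular to the line of sight at the given distance):

```python
import math

def angular_size_deg(height_m: float, distance_m: float) -> float:
    # Angle subtended at the observer in the right-triangle approximation used above.
    return math.degrees(math.atan(height_m / distance_m))

print(f"{angular_size_deg(2.0, 1.0):.1f} deg")    # ~63.4 deg: 2 m tall person, 1 m away
print(f"{angular_size_deg(1.55, 10.0):.1f} deg")  # ~8.8 deg: 1.55 m tall person, 10 m away
# A difference of roughly 7x in apparent (angular) size between the two scenarios.
```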
Yeh, but we quite literally have values from the study looking into this very thing. The dot tracking they had to perform is extremely akin to how the eye processes dynamic movement, and the study also looked across different factors to look into SAT in both studies. I'm not really seeing how this would impact the point proposed by Arnold's thread, given that any blitzing here would have to eclipse the .02s timeframe.

Remember, the study outlines a stark difference between perception and reaction as noted. Those .02 and .07s times were reaction times after processing. Meaning perception speed is much faster.
I'm currently debating the 0.002s timeframe. The one you're talking about is 10 times longer than that.

I'm not quite sure how your studies would generally even apply to faster-than-eye vision, as they seem to be more about perceiving the direction of movement and the timing of certain brain activities. But I would honestly rather get into that debate once we have the 0.002s thing based on the FFT settled, as those are two entirely separate strands of argumentation that are going to involve separate arguments.


Gonna get to Arnoldstone18's reply later tonight unless I fall asleep
 
There is a reason flickering lights are used for these sorts of experiments. The flicker fusion test measures the brain's ability to detect transient changes in light, which would then be transferred to other parts of the brain that process direction, speed, etc. [1]



I even went into detail in my last thread about how the brain begins perception by detecting changes in light:
V1 Neurons are the first set of neurons that are involved in visual perception. Those neurons detect edges and changes in light, then send the signal to other neurons that deal with speed, direction, location, etc., and then it goes into decision making (a.k.a. reaction)
The problem with that argument is that it assumes that the V1 neurons detecting a change implies that the change will be visible in the final image the brain puts out. However, the brain does a lot of stuff between those stages. Take saccadic masking as a practical example. It's not that the eye can't pick up any input during a saccade, but the brain decides not to put it into the final image it generates. And that despite saccades being 10 to 100 times longer than the timeframe you propose.
Experts correlate these tests to movements all the time. Look at the primary source you quoted in the OP. The source itself correlated flicker fusion threshold to a bird's ability to see rapidly approaching obstacles. [2]
FFT is merely a measurement of the limit of the brain's ability to detect transient changes in light. It's up to the expert to figure out what to use a measurement for, and it has been used for things that deal with things in motion (such as the ability of birds to detect rapidly moving objects that I outlined earlier, for instance) and things that probably have nothing to do with motion. FFT specifically focuses on the first stage of visual perception. To perception blitz a character, one has to move beyond all stages of visual perception in the brain.
As I already attempted to address in the OP, this is confusing a correlation with being the same thing.
To give a stupid analogy: The number of ice cream sales and watermelon sales correlate, but that doesn't mean that they are equal in number. Still, you can take higher ice cream sales as an indicator of higher watermelon sales.
FFT is a prerequisite for fast perception, and it would not be further surprising that an animal that evolves to have a high FFT will have fast perception. They are presumably evolving it due to needing fast perception. But a higher FFT likely meaning faster perception does not mean the time of perception is equal to the FFT.
You will find that the studies in question are appropriately careful in their formulation, stating that a fast sampling rate helps detect fast-moving objects. Not that it determines the detection.

So, yes, one has to consider all stages of perception and researchers are not saying FFT is the timeframe before which objects are visible.

An object that travels within 0.002s will not be seen at all. If it maintains speed and travels for more than 0.002s, an afterimage will occur at that moment in time.
I have no idea where you get that from and it seems obviously wrong. Again, look at a fan. It doesn't produce an afterimage after 0.002s; it just blurs, in some cases to the point of invisibility.
You still seem to operate under the assumption that eyes work on frames per second (and frames per second with infinitesimal shutter speed).
I mean, honestly, have you ever seen something produce afterimages in real life like they do in fiction?
If this goes on, in the eyes of the observer, the object moves in and out of view repeatedly. Stroboscopic motion
Stroboscopic motion doesn't occur under continuous illumination.
To quote wikipedia: "The stroboscopic effect is a visual phenomenon caused by aliasing that occurs when continuous rotational or other cyclic motion is represented by a series of short or instantaneous samples (as opposed to a continuous view) at a sampling rate close to the period of the motion."
You might mean the wagon-wheel effect, which has a continuous version; however, that as well only occurs under specific circumstances at specific frequencies. You don't see a wagon-wheel effect on every spinning object, nor on every fast-spinning one at that.
Note: Motion blurs don't count; the brain just adds them on its own. Besides, with motion blurs one can still tell the direction the object goes in and so on. So it definitely won't qualify; the reaction time would be used instead.
I have no idea why you think that when something is added by the brain on its own it has no relevance for perception, when the brain is critical in that.
Also not sure what makes you confident that with motion blur you can see the direction of movement, or why that is relevant.
I am not sure how you arrived at 19 variables, but I am going to attempt to list out all 19 if I possibly can.
  1. Frequency of the modulation
  2. Amplitude of the modulation
  3. Average illumination intensity
  4. Position on the retina at which the stimulus occurs
  5. Wavelength or color of the LED
  6. Intensity of ambient light
  7. Viewing distance
  8. Size of the stimulus
  9. Age
  10. Sex
  11. Personality traits
  12. Fatigue
  13. Circadian variation in brain activity
  14. Cognitive functions (e.g., visual integration, visuomotor skills, decision-making processes)
  15. Pattern recognition
  16. Movement behavior
  17. Prey features
  18. Specific retinal patterns
Yeah, I miscounted. It's 18.
How are most of these not covered by the standards I proposed in the previous thread? I will point each of them out for you below in bold.


  • Frequency of the modulation, Amplitude of the modulation, Wavelength or color of the LED, Intensity of ambient light, Age, Sex, Personality traits

These are adjusted to fit most normal visual conditions in the FFT experiment. The 2 millisecond timeframe is a mid end that comes from these experiments. Note that even when many experiments have slightly different conditions due to equipment and environmental differences, they all still agree with the range.
I don't see how you translate the frequency of modulation to real life, as a real life image has no modulation. Please expand on that.
Likewise, I would ask you to identify the value of the amplitude of modulation in a real life scenario.
For colour, I'm not sure how you find it to be covered, given that the 2ms test specifically used maximum contrast, which is not the case in a real-life scenario.
For ambient light I wish to remind you that the test was performed in a dark room, with the "ambient illumination" occurring in a 47.7 cm × 35.9 cm square around the main screen. Further note that, in my understanding, the experiment is set up such that ambient light and screen brightness add up. On top of that, "The level of ambient illumination is adjusted by the subject until flicker in the modulated area is just noticeable." I.e. they adjusted the ambient light in the test to the level at which flickering was visible; it was not chosen under regular outdoor conditions, but under close to optimal ones.
Age, sex, and personality traits barely have an effect, aside from the obvious fact that age often leads to decreased vision, which wouldn't qualify under the currently accepted guidelines. Sex and personality traits may affect it, but the result should definitely not deviate significantly from the range of values meant for humans in general.
The study notes that the values in the experiment range from 200 Hz to 800 Hz, which is something. Although I'm not certain what the range of people was in that particular study.
  • Average illumination intensity, Intensity of ambient light.

Many experiments mimic typical visual conditions; otherwise they wouldn't get accurate and applicable results.
The one used for that result uses an average illumination equal to a commercial display (by its own words). Which studies explicitly use ambient and display brightness equal to real-world conditions? And which real-world conditions in particular (evening vs. the brightest hour of the day)?
Position on the retina at which the stimulus occurs
That part of the standard does not properly handle the issue, as the person moves during the blitz. The location (physically and on the retina) does not stay constant during the whole motion (like it does in flicker tests) but moves over different areas of the retina and, among other things, also out of the distance at which the eye is focussed if the person comes closer or moves further away.

As such the conditions outlined in the paper are not met in any blitzing scenario.
Viewing distance, Size of the stimulus.
Bullets are small as hell. And yes the accepted standards already account for size. Small objects like bullets don’t qualify as they’re hard to keep track of in general.
These belong together as they make the same mistake. The standard doesn't account for size. It excludes some (vaguely defined) small sizes, but does not account for how much the value varies for all other sizes.
Excluding 7mm bullets doesn't account for the differences that result from a 7° angular size vs. a 70° angular size.
It's not that size and distance won't influence the result at all until they drop below a certain threshold where FFT suddenly rapidly changes.

On a side note, I also wonder why size and distance are mentioned separately. I assume that implies that there are influences of distance outside of a decrease in angular size that might require consideration.

Also, I will point to this:
I just remembered this exists.

If I measure that right, it takes 26 frames at 60fps to turn when slowed down to 5% speed, which makes the timeframe it takes in reality about 0.022s or 22 milliseconds to turn.
Interesting insofar as you can view it close up (i.e. it doesn't have the same problem as bullets regarding size).

which does not have extreme size concerns.
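A minimal sketch of the frame-count conversion, assuming the video is played back at 5% speed and rendered at 60 fps as stated:

```python
# Real-world duration of the turn, measured from slowed-down footage.
# Assumptions as stated above: 26 frames, 60 fps playback, footage slowed to 5% speed.
frames = 26
fps = 60
playback_speed = 0.05

real_duration_s = frames / fps * playback_speed
print(f"{real_duration_s:.3f} s")  # ~0.022 s, i.e. about 22 ms
```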
Fatigue

Being so tired that your eyes become affected is definitely a factor I probably haven't applied in the standard. I’ll give you that.
It's probably not just that extreme case. Fatigue is a gradient. There's not just wide awake and so tired that you can barely see.
  • Cognitive functions (e.g., visual integration, visuomotor skills, decision-making processes), Pattern recognition, Prey features, Specific retinal patterns, Movement behavior, Circadian variation in brain activity

Aside from the obvious fact that some of these pertained to other animals and not just humans in the primary source that you quoted
Why would they not apply to humans? Humans are pretty much animals. It's just that the tests for humans typically don't involve those looking at prey, but for our purposes the factors are obviously relevant.
In fact, the article states explicitly "The performance of CFF in humans and predators alike is dependent on these factors." for all the ones except prey features and movement behaviour and does not state the opposite for the remaining two factors, leaving their influence at worst uncertain.
or the fact that abnormalities/disabilities like poor eyesight from the observer would obviously heavily affect visual perception,
Sure.
I have addressed the rest in the last thread.
That is not addressing it in the slightest.
The behaviour of the target's movement and the features of the target are not determined by who is observing it. Those are properties of the thing being looked at, and humans, as things to look at, do not appear in these tests.
The others are factors of the observer, but as said above, the article points out that those are differences in humans.
The setup was made to replicate real life objects that have edges. The inversion of colors was done to induce microsaccades (which happen very often when looking at things with edges). This reminds me of the argument you made about objects not being “flickering lights”; it's not about objects needing to have inversion of edges, since microsaccades occur when looking at things in motion. And as I said above, the simple act of motion causes transient changes in luminance, which is the whole point of FFT.

This is the same with colors. Blue is seen at a higher FFT and red at a lower FFT, both of which aren't far off range-wise, and all of which constitute a median value of around 500Hz still.
This really doesn't address the point that this still is an inversion of colour. We are talking about the highest possible colour contrast here between flickers.
Seeing how colour variation and illumination amplitude are stated factors of influence, I don't think one can expect the same results if it were to flicker between, say, orange and yellow.

The colour considerations are not about contrast and hence not representative of the issue.
As I explained above when I was talking about how the timeframe can be applied, I sort of already answered all these questions, but let's go through them one by one.



Any distance, as long as it's equal to the length of one flicker at the FFT. In other words, this is only a timeframe, so the distance would have to be found using source texts, pixel scaling, or so on. Once you have distance you find speed.
That would mean that an object that remains in the field of view can never be invisible regardless of how fast it is. Highly unlikely.

Furthermore, that interpretation is at no point indicated in the scientific articles and is hence speculation on your part. As said, you seem to believe that every 2ms the eye takes a perfect picture of what is currently in front of it, which is not at all what the articles indicate or, for that matter, what actually happens. If it were, you would have no motion blur.
You are speaking in terms of the observer. In this case the feat must be more central within the field of view, as expressed in my guidelines. In the case of how science applies this to motion, I already addressed it above.

Please note that speed blitzing feats do not always have to come from an observer's view; the narrative itself could state how the movement pans out. [1][2]

About the math bit here with human sizes, I don't understand how that's relevant tbh. Whatever sort of movement was calced there using the timeframe is fine by me. Although you claim the necessary speed is reduced from something, what is it reduced from? I don't remember proposing a speed value; I'm merely proposing a timeframe, and if Subsonic+ is the necessary speed then that's perfectly fine.
Yeah... I don't think you really understood the argument.
So let me put a lot of work into an explanation.

Say this is a flicker test in which the stickman appears briefly. If the appearance was shorter than 2ms we definitely couldn't see it, yes? Pretend in the following that the duration of its appearance is always just below 2ms.

Now we have two beside each other, but same story, right? As long as they pop up for less than 2ms, we don't see them.

Next:

We have again two stickmen, but this time they don't appear simultaneously. However, if they each appear for less than 2ms we still should not see either of them, right? I mean, the stickmen are on 2 separate parts of your retina. If you don't see them separately, why see them together, right?
It's just one flicker you don't see, followed by another independent flicker you don't see, with no part overlapping in a way that in some point the flicker duration would be longer.
Essentially the same scenario as in the colour inversion test with the edge. Just that instead of flickering between one side with one colour and one side with another, we flicker between one side that is white and one side that has a stickman. (And instead of flickering the stickman back and forth constantly, we only do 1 flicker.)

Then let's get to an extreme case.

Now we have 6 positions, but it really is the same game as before. Each of the flickers is in a separate area of the retina. They don't overlap. We pretend each one is, let's say, 1ms long. Then the total series of flickers would be 6ms.
But while the total series is 6ms, since each separate flicker is 1ms (< 2ms), we wouldn't see each flicker that the series is composed of. And hence also do not see the series in total.
That means the stickman can "move" from the right side of the image to the left side in 6ms, but stay below the flicker threshold for each of the 6 positions he occupies. Hence, he can make a 6ms "run" from left to right while being invisible to the eye according to the FFT.

Now, a moving person of course doesn't just teleport between 6 positions like the stickman does. However, we can say each position is a sector of the image, so that the image is divided into 6 equally large non-overlapping sectors. In that case, we could imagine a person running through the image such that they completely cross 1 sector in 1ms.
In each sector, they are present for only 1ms in total and hence for a duration shorter than the FFT. Correspondingly, they are on any one part of the retina for only 1ms. No vision cell is influenced by their presence for longer than 1ms. Hence, by the reasoning of the FFT, they should not be seen.
Meaning, they can perform a 6ms movement that is FTE.
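To put numbers on the sector argument, here is a minimal sketch; the 6 equal sectors, the 6 ms total crossing time, and the 2 ms FFT window are the illustrative assumptions from the example above:

```python
# Dwell time per retinal sector for the runner in the example above.
# Assumptions from the example: 6 equal sectors, 6 ms total crossing time, 2 ms FFT window.
sectors = 6
total_crossing_s = 0.006
fft_window_s = 0.002

dwell_per_sector_s = total_crossing_s / sectors
print(f"Dwell per sector: {dwell_per_sector_s * 1000:.0f} ms")           # 1 ms
print(f"Shorter than one flicker: {dwell_per_sector_s < fft_window_s}")  # True
# Each sector (and each patch of retina) sees the runner for less than one flicker,
# yet the full crossing takes 6 ms, three times the supposed maximum "invisible" timeframe.
```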
Your calculations are correct, but you forgot one thing. A standing fan has 3 blades, not 1. So if we look at a single spot on the fan, 3 blades pass the area as opposed to 1, giving off the illusion that it's 3x faster. So your answer: 132Hz multiplied by 3 is 396Hz, and 210Hz multiplied by 3 is 630Hz… and that is why standing fans appear invisible.
I mean, one fan I demonstrated had just 2 blades, so that already doesn't work as an explanation.

However, to begin with, I think your reasoning should apply the exact opposite way.

By your reasoning, if I use a fan with 100 blades, it should be invisible at a little more than 1 rotation a second. That is obviously too slow to be invisible for the eye.

It stands to reason that in order for the fan to be invisible, we want the blades to be in the spot we are looking at for as short of a time period as possible (or, more precisely, the average percentage of time per period of time that a blade is in the spot should be as small as possible). E.g. a fan with a single really thin blade would certainly go invisible at lesser speeds than one with a blade that occupies 95% of the whole circle, right? Or one that is 50% of the circle or one that is 1/3rd of the circle.
In fact, in the 95% case I would argue that rather than the fan going invisible, that gap would stop being visible as we speed it up.

Point is: with a fan with 3 blades, the blades will occupy the spot we are looking at 3 times longer than with 1 blade. As a result, they will reflect 3 times more light into the eye. It appears entirely implausible that more light reaching that spot on the retina would result in less perception.


Also, if you acknowledge the fan case, then that is in conflict with this:
Any distance, as long as it's equal to the length of one flicker at the FFT. In other words, this is only a timeframe, so the distance would have to be found using source texts, pixel scaling, or so on. Once you have distance you find speed.
A fan blade might move a distance of 1 km if you let it spin long enough. Yet it is invisible during that entire time, without the timeframe being the length of one flicker at the FFT.
I.e. it demonstrates that not the whole distance moved in a FTE feat has to be moved in a timeframe below the FFT.
In other words: you can be invisible via speed for as long as you like, if you are fast enough.
 
Yeah.. get your reading glasses and popcorn folks… cuz this might take a few days (cuz I have a ton of things to do irl)
 
I.e. it comes from the timeframe in which a light can turn on and off repeatedly, while the human can still see that the light flickers.

Correlating that to movement is already somewhat of a stretch. The test is about being able to tell that any change is happening, not any proper perception.
This phrasing is evasive. Being able to tell a change has occurred in an object is perception - you cannot tell a change is occurring in visual stimuli if you are not perceiving it. And furthermore, this kind of perception is of high relevance to the topic. What we want to know is, if someone is observing something/someone, and that something/someone changes their position, how long does it take for a person to perceive that a change has occurred? The methodology of this study tells us, when an object with a defined edge is being observed, and a change in the object occurs, how long it takes before we can perceive that change.

This argument seems to be based on questioning the operational validity of the study in relation to our question, but I don't see where you're coming from with that. It's a credible and relevant answer to what we want to know.

What this neglects is that, while FFT is of importance to perception, at no point is it identified as the unique deciding factor. There is correlation, perhaps, but it is never identified as the actual timeframe of perception of a moving object.
As such it is more a natural high end than a necessary low end. Something that leaves the FOV below the FFT would be invisible. However, that an object that does not do so is visible is never said.
Again, this is evasive. The original study that found the 500hz threshold didn't just measure the root variable of FFT - it specifically measured people's sensitivity to changes in visual stimuli as an operationalisation of FFT. The participants reported the point at which they could no longer perceive changes in the visual stimuli, meaning changes at lower frequencies than the ones reported were perceptible. This tells us how quick a change to a visual stimulus needs to be to be imperceptible, which is exactly what we want to know. What your brain is processing when an object in front of you is changed, versus when it is moved, is ultimately still processed as visual stimuli being changed - there is no reason why it should produce a different result from the ones we outline in our standards. If you're assured that it is, I would expect empirical research to support this point.

First, I wish to direct attention back to the primary source that the 2ms idea is based on.
That is a narrative review that was brought up in the thread regarding CFFT, not the primary source for the changes. Narrative reviews are not reliable evidence on their own, and merely serve to compile a broad explanation of a topic. The primary source this is based on is a study by Davis et al.

This paragraph identifies a total of 19 variables which influence the FFT and hence the 2ms. Patterns and movement behaviour is pointed out for the case of hunting prey, but also things like frequency or precise position on the retina, and colour.
None of these factors are adequately handled by the current standard, as they are not handled at all. It's a constant number of 2ms that is not modified by any of these parameters.
Every participant-centric factor is accounted for in the study by the sample composition and the range of possible values that can be caused by participant variables. To quote the study verbatim:

"However, when the modulated light source contains a spatial high frequency edge, all viewers saw flicker artifacts over 200 Hz and several viewers reported visibility of flicker artifacts at over 800 Hz. For the median viewer, flicker artifacts disappear only over 500 Hz..."

The study found variance in the threshold at which perception was no longer possible based on the participant examined. On the lowest end, participants still recognised artifacts at 200 Hz (5ms), and on the highest end, participants recognised artifacts at as high as 800 Hz (1.25ms). 500 Hz (2ms) was the median value that they received for the typical participant. At this point in the existing research, being able to adjust our standard precisely so we always have a value that is tailored to the specific person perceiving an object just isn't possible - we don't know enough about the impacts these extraneous variables have. We just know, under these conditions, they produce a range from 200-800 Hz with a median of 500Hz, so we use the median value.
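Since the thread keeps switching between frequencies and timeframes, a minimal conversion sketch for the figures quoted from the study:

```python
# Flicker frequency to flicker period, for the values reported in the study.
for hz in (200, 500, 800):
    print(f"{hz} Hz -> {1000 / hz:.2f} ms per flicker")
# 200 Hz -> 5.00 ms, 500 Hz -> 2.00 ms (the median used here), 800 Hz -> 1.25 ms
```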

Furthermore, the non-participant extraneous variables that make this study different from natural conditions are also acknowledged. To again quote the study verbatim:

"We observed the effect described in this paper whenever we displayed an image containing an edge and its inverse in rapid succession. The effect was even stronger with more complex content that contained more edges, such as that in natural images. We chose a simple image with a single edge to allow our experimental condition to be as repeatable as possible."

The objection makes it sound like these extraneous variables should necessarily make the value we're looking for higher than 2ms, but far from it. In fact, the study suggests that, when given more natural images (specifically pointing to those that will contain more complex content and more edges, i.e., not just a simple switching of light and dark), participants could perceive changes faster than they could in the more simple conditions. They state that they chose a simple image with a single edge to simplify their method, despite knowing that natural images would produce a stronger effect, which is common as a means to make it possible to replicate a study at a later date. The results of this study are a conservative version of the effect we are trying to identify.

The implication behind this argument is that these extraneous variables would make the results >2ms, but the study contradicts this notion. Participant variables are accounted for by sample composition averaging out these extraneous concerns, and non-participant variables that distinguish this study from a natural environment would, as the authors of the study suggest, create results that are <2ms on average.
I will furthermore point out that the study only produced such high results with a specific setup: "The high spatial frequency image is first “bright on the left half of the frame and black on the right”, and then inverted. We observed the effect described in this paper whenever we displayed an image containing an edge and its inverse in rapid succession."
I.e. they specifically inverted the colours, and specifically in an uneven way. Equalizing this to real life conditions, which may have edges but no inversion of said edges, seems faulty.
I addressed this in the previous section, but to reiterate - more natural stimuli, according to the authors, produced a stronger effect. This objection implies that we would see a weaker effect, but the authors themselves contradict this.

If a further objection is that this is only one study to produce high results under these conditions, I'll note that's not correct either - other studies have found values as high as 1.98 kHz under different conditions. To be clear, I would not suggest scaling up to this value - of the studies I have found regarding similar topics, I would argue that they are less relevant in methodology than the study we are analysing here. Credible in regards to their own research questions, but not relevant. Hence, I would not suggest giving these studies any precedent compared to the study we are evaluating here.

In general, I feel like the translation of FFT to blitzing is not scientifically validated. Human vision is highly complex and, as one has to consider, the brain does a lot of post-processing to every image. It, for example, pieces together several separate quick looks into a proper picture, one of which takes about 200ms alone. Speed-based illusions, for instance, are also not fully explained by the current state of science. It is a common idea that the brain's visual system simulates events before perception, which can lead to illusions in movement that last longer than the mentioned 2ms. It is known that at least under certain circumstances the brain will decide to suppress blur for a better image, although that isn't the specific case.
The point is, scientifically hand-waving the equalization between flickering light and movement is highly questionable due to the complexity of the subject.
Yes. The brain is highly complex, and if your point is that tying our ability to perceive motion down to a single value has a margin for error, I'd concur.

That being said, I don't see how anything brought up in this section addresses the concerns here. These sources vary considerably in reliability (these include an unsourced photography blog and a few Wikipedia articles; far less credible by nature than empirical research into the topics), and even if we take all sources as being correct, they only skirt around the underlying issue. Chronostasis illusions can cause us to perceive time moving slower than it is due to our brains filling in redacted information when we dart our eyes from one surface to another, yes, but what does that mean here in concrete terms? If I am looking directly at a subject, the fact that I could perceive them differently if I was first looking at a different subject and then back to them doesn't change our conclusions here.

These are a lot of claims from sources worthy of deeper analysis into credibility that, even if all true, don't ultimately say anything of relevance that we don't already know. Any standard we use for any calculation is inherently going to have some margin for error, be some simplification of reality, and the most that this section addresses is that this standard also simplifies reality in a way that may have some margin for error. We still nevertheless have empirical evidence that the real value would fall somewhere around the one we have suggested for the standard, which is exactly what we always do for our standards.

If we are having a movement, how far of a movement would have to be completed for it to equal the length of one flicker?
Until they are out of the field of view? Until they reappear in the vision? I don't think so. Reasonably speaking, as long as you do not occupy the same place for longer than a flicker you should be below the threshold, as no part of the eye sees you longer than one flicker. It would be the equivalent of the left side of the image flickering below the threshold and, right after, the right side flickering below the threshold. Considering that most humans are about half a meter wide, that would severely reduce the necessary speed (half a meter in 2ms is Subsonic+).
Of course, that is my argument as I see it as reasonable, not proper science. No scientific source describes exactly how one would apply flickering to invisibility via speed, so that much is a given. But such uncertainties are exactly the problem.
This question is odd. How "far" a movement would have to be to equal "the length of one flicker" is a definitionally weird question - the "length" of one flicker is an amount of time, while the "length" of a movement is a distance. Asking how far the length of one flicker is is like asking how many kilometres there are in an hour.

What the standard here is suggesting is that transitional states less than 2ms would, for the average person, be occluded from perception - if someone travelling at a constant speed covers 2 metres in less than 2ms, then our perception would not be equipped to see them where they were when they had travelled 1m. Our perception should be able to catch on to the change (or more specifically, the fact that the object that was there is now not there), however, once 2ms has passed. So if someone disappears from one spot and immediately reappears in another spot by speed alone, we would suggest they had moved between those two spots in around 2ms.
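Under that reading of the standard, the resulting calculation would look something like the following minimal sketch; the 2 m gap is just the example distance used in the paragraph above, and in an actual feat the distance would come from scaling or statements:

```python
# Speed implied by the reading of the standard described above: the object vanishes from
# one spot and reappears a known distance away, with the gap taken to be one 2 ms flicker.
def implied_speed_ms(distance_m: float, window_s: float = 0.002) -> float:
    return distance_m / window_s

print(f"{implied_speed_ms(2.0):.0f} m/s")  # 1000 m/s for the 2 m example above
```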

If I understand what you're saying correctly, you're questioning whether this has applicability to feats of moving fast enough to turn invisible - in other words, moving between spots so quickly at a consistent rate that the brain cannot process the object in any position. This probably does have relevancy to such feats, but it's explicitly not what the standard is made for. It's made for feats where someone disappears from one spot and seemingly immediately reappears at another, with no such extended "invisibility period" relative to the observer - if there is evidence that the observer had some timeframe in which they couldn't see the object in either position, we explicitly wouldn't use the 2ms standard. This standard, in fact, explains exactly why; it's because disappearing from one position and only later appearing elsewhere under this standard would indicate they had taken longer than 2ms to reach that position. In other words, this objection just isn't directed at the standard, or the feats we'd use this standard for, in the first place.

It is documented that when fans move fast enough, they turn practically invisible to, at minimum, some people.
Why is that a problem? Well, fan blades stay constantly in the field of vision. They pass through a spot focussed on many times a second, and fans with thick blades also occupy each point for a significant total percentage of the time. Furthermore, they are just subsonic. A standing fan runs at about 1300 to 2100 rounds per minute, which equates to 22 to 35 rounds a second. If we assume a blade takes up 1/6th of the total circle, then the time in which it passes through one spot is 1/132s to 1/210s, which is more than twice the time we have from FFT.
This appears not in line with the standard. The section assumes that a character can not stay invisible longer than said 1/500th of a second (otherwise, it would not make sense to conclude that a character that seemingly teleports must do so in that timeframe), but this example demonstrates that one can in principle stay invisible indefinitely.
To start, the source pointed to as evidence that fans turn invisible to some people doesn't say that. Putting aside the fact that a StackExchange forum post is not a credible source, they only say they "can't see the blade", and then elaborate that what they mean is that a moving fan "can appears [sic] as a circular plane with blade color". They aren't talking about fan blades turning invisible - they're talking about how they blur when they move fast enough. The response to that forum post elaborates on the physics behind why fan blades create a blur of colour when they move fast enough. This isn't relevant to our topic. They aren't talking about something turning invisible through movement.

Furthermore, the 1/6th of a circle standard is not elaborated on, despite how significant it is to the results. In fact, even on some of the thicker fan-blades I've seen, this would be a very questionable estimate.

Furthermore, there is a fundamental issue with trying to deduce this through the movement of objects (such as fan blades) spinning around a core, which is the fact that the closer to the core a section of the blade is, the less overall distance it has to travel. When we talk about the "fan blade turning invisible", what part are we referring to? Because sections of the fan further from the core will move a much greater distance in the same movement than sections closer to the core, turning invisible via increasing the speed of rotation will occur considerably earlier for sections of the blade that are further from the core. Where do we draw the line?
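To illustrate how much this matters, a minimal sketch of how linear speed varies along a spinning blade; the 2100 RPM figure (the upper end of the standing-fan range mentioned earlier) and the sample radii are assumptions for illustration only:

```python
import math

# Linear speed at different distances from the hub of a spinning blade.
# Assumptions for illustration only: 2100 RPM and a few sample radii.
rpm = 2100
omega = rpm / 60 * 2 * math.pi  # angular speed in rad/s

for radius_m in (0.05, 0.10, 0.20):
    print(f"r = {radius_m:.2f} m -> {omega * radius_m:.0f} m/s")
# The tip moves several times faster than points near the hub, so any "invisibility"
# criterion depends heavily on which part of the blade is meant.
```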

And while this may be less important than the previous points (due to being anecdotal), I can't say I have ever seen a standing fan have its blades turn invisible. I'd entertain the possibility that larger fans (like industrial-grade fans) might be able to turn invisible at such speeds, as I lack experience with them, but I've never seen an ordinary fan do anything of the sort. I don't see why we'd use a statistic for an average fan for this question when an average fan evidently doesn't turn invisible.

A Glock has a muzzle velocity of 375 m/s. That means that in 2ms the bullet covers a distance of 75 cm. In other words, by the current standard, we would assume that if someone shoots you in the face with a Glock from an arm's length away, you can see the bullet emerge from the gun.
And... that is wrong. Even from a greater distance, you don't simply see bullets. (Given that bullets slow down with distance, and that from far away they appear to move slower, I don't think it's per se impossible to see a bullet fly under the right conditions.)
To quote the standards:

"2. The observer's eyes must be able to focus clearly on the object before the speed feat occurs..."

Under these circumstances, the observer would not be able to see the bullet prior to it leaving the chamber, which would disqualify such a feat.

However, I do know that this is a semantic (and not particularly compelling) note, given the point of it as elaborated in the next argument:

Now, bullets are smaller than humans, but at less than a meter away one can still clearly see them. I have no doubt that size does play a role in whether or not you can see a fast object, even in the realms where you can see the object when it stands still. Consider, on top of that, that humans are not all one fixed size and that the apparent size of a person decreases with distance. Meaning distance is also a factor.
However, notice that the standards do not account for that in any way. In fact, the flicker test does not account for size. (And there might be a difference between the relevance of the size of a light source and the size of a passively illuminated object.)
And, when translating from the flicker test results to actual movement... well, there it would need to be considered as well.
In total, what this example shows is that either 1/500 s is not enough, or the value at least needs a component to correct for size.
The aforementioned point about needing to be able to focus on the object would disqualify feats that are for minuscule objects (whether due to size, distance, or both) relative to the observer. However, beyond this, you do have a point. Being closer or further to the same object could potentially produce different values - the original study found these results regarding visual stimuli placed directly in front of the participants, but to my knowledge, there aren't any similar studies addressing whether distance has an effect (or if it does, as you'd probably expect, the strength and scale of the effect). I don't find this objection compelling enough to toss out the whole standard, but to rework the standard such that we account for distance (even if it is, for example, as simple as disqualifying feats in which the observer is too far away from the object) would be a good step towards ensuring this standard is applied where it is accurate to do so and not elsewhere. Now that you have brought up this point, I would actually advocate for implementing such a revision.



There is, of course, a large thread underneath the OP that this reply does not address. I would like to get around to the full discussion underneath the OP soon, but this is ultimately what I have to say for now. I find making long posts awful for having a good discussion, especially because of how inaccessible they make a thread for anyone who is not already up-to-date on it, so I would hope that any future replies on here do not have to be as long-winded.

All I would like to leave on is this - 2ms was what I concluded on in the original thread because it is the median result of what is ultimately a conservative study, one that is credible and relevant to our question. I initially thought, when I read the studies presented in the original thread, that the 500 Hz suggested by Davis et al. seemed like an excessive value; I was initially going to lean on a different value, because it feels like it should be less than 500 Hz, but I ended up concluding on this value when I took the time to read through each of the studies presented. I gave my own breakdowns of these studies in the original thread, and explained why the study we settled on was ultimately not only credible, but answered the question of the topic better than any other study that was either mentioned or which I could find. This study simply does tell us that we should be able to subjectively observe a change in an object within about 2ms on average, and that's what we wanted to know.

This is exactly why we conduct and evaluate scientific studies - because there are so many things that feel intuitively true that ultimately aren't. I believe our preconceptions about how quickly we can observe changes in visual stimuli are just one of those intuitive conclusions. This is the research that we need to base our new conceptions on, rather than forcing the evidence to conform to our preconceptions.
 
Honestly, the arguments from DarkGrath consist largely of what I wanted to say and FAR MORE than what I could’ve said.

I will also be following @DarkGrath's request and reducing the size of my responses. She wants us to have a more streamlined conversation so everyone can keep up, so expect my responses to be more general; I will leave out points I feel have been addressed by her.




The problem with that argument is that it assumes that the V1 neurons detecting a change implies that the change will be visible in the final image the brain puts out. However, the brain does a lot of stuff between those stages. Take saccadic masking as a practical example. It's not that the eye can't pick up any input during a saccade, but the brain decides not to put it into the final image it generates. And that despite saccades being 10 to 100 times longer than the timeframe you propose.


There isn’t a problem with that at all. It’s not even an assumption. Every change in focus detected by the V1 neurons will be in the final image. Our brain just chooses to ignore things in our final image. For example, our nose and glasses are still being perceived, just constantly ignored in our view.

Saccadic masking is a bad example because even if the eye still picks up input, the suppression happens right before a saccade even occurs, so most of our perception (except the V1 neurons) is already negated by the time the saccade happens. The speed of a saccade has nothing to do with the timeframe. [1]

Importantly, our data show that the suppression in the dorsal stream starts well before the eye movement. This clearly shows that the suppression is not just a consequence of the changes in visual input during the eye movement but rather must involve a process that actively modulates neural activity just before a saccade.

As I already attempted to address in the OP, this is confusing a correlation with being the same thing.
To give a stupid analogy: The number of ice cream sales and watermelon sales correlate, but that doesn't mean that they are equal in number. Still, you can take higher ice cream sales as an indicator of higher watermelon sales.
A high FFT is a prerequisite for fast perception, and it would not be further surprising that an animal that evolves to have a high FFT will have fast perception. They are presumably evolving it due to needing fast perception. But a higher FFT likely indicating faster perception does not mean the time of perception is equal to the FFT timeframe.
You will find that the studies in question are appropriately careful in their formulation, stating that a fast sampling rate helps detect fast-moving objects. Not that it determines the detection.

So, yes, one has to consider all stages of perception and researchers are not saying FFT is the timeframe before which objects are visible.


You asked for a direct correlation, so I gave you one. My apologies for the confusion; I believed the sources were clear on this matter. Your "stupid" analogy resembles an indirect correlation, which I never intended to present when you inquired about the correlation FFT had with motion. Birds need to perceive rapidly changing visual fields; if things move too quickly, they won't see them approaching. It would be literally impossible for an animal with a low FFT to perceive fast-changing visual fields.

Just to clarify again for everyone reading DT's comments, "time of perception" in the context of our argument refers to the "time of perception limit." FFT is not a test for perception time; it's a measure of the minimum time required to surpass our perception limit. It gauges the timeframe in which changes must occur in our visual field before they are no longer perceived by the human brain.

By the way, the manner in which the source phrased their sentences is irrelevant. Yes, stating that "a fast sampling rate helps detect fast-moving objects" is akin to saying "our eyes help us see." The brain also plays a role in "seeing," in tandem with the eye, just as the V1 neurons aren't the sole neurons involved in the perception process. The phrasing is fine, it's a direct correlation.



I have no idea where you get that from, and it seems obviously wrong. Again, look at a fan. It doesn't produce an afterimage after 0.002 s; it just blurs, in some cases to the point of invisibility.
You still seem to operate under the assumption that eyes work on frames per second (and frames per second with infinitesimal shutter speed).
I mean, honestly, have you ever seen something produce afterimages in real life like they do in fiction?

The stroboscopic effect induces the wagon-wheel effect on fans. You showed me this before, and you told me it might possibly be about the "frame rate of the eye", but that it's not likely. Don't worry, my argument has nothing to do with frame rates of the eye. It's more so a brain thing. Also, the only difference is that fiction merely tends to prolong these afterimage effects.

Admittedly, after much thought, I realized my suggestion on ways to calculate feats using this timeframe was wrong. However, this doesn’t have much to do with the timeframe itself.

Another suggestion, though:

I’d just say that we find the height or width of the object and divide it by the timeframe (0.002 s) to find the necessary initial speed for the entire body to evade perception entirely by jumping or moving sideways, respectively, and for movement durations longer than 0.002 s.

To calculate feats where the character is outright perceived to have teleported: total distance covered / timeframe (0.002 s).

The narrative can also describe the sort of movement and speed in such a way that either of these could apply.
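A rough sketch of what those two calculations would look like (the 0.002 s timeframe is the value under debate; the sample inputs are made-up placeholders, not values from any feat):

```python
# Sketch of the two suggested calculation methods from the post above.
TIMEFRAME = 0.002  # s, the 2 ms value under debate

def dodge_speed(object_length_m: float) -> float:
    """Minimum speed for the whole body to clear its own height/width
    within the timeframe (the 'evade perception by jumping or moving
    sideways' case)."""
    return object_length_m / TIMEFRAME

def blitz_speed(distance_m: float) -> float:
    """Speed if the whole distance of a perceived 'teleport' is assumed
    to have been covered within the timeframe."""
    return distance_m / TIMEFRAME

print(dodge_speed(1.7))   # hypothetical 1.7 m tall character -> 850.0 m/s
print(blitz_speed(10.0))  # hypothetical 10 m gap -> 5000.0 m/s
```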



Yeah, I miscounted. It's 18.
I don't see how you translate the frequency of modulation to real life, as a real life image has no modulation. Please expand on that.
Likewise, I would ask you to identify the value of the amplitude of modulation in a real life scenario.
For colour, I'm not sure how you consider it to be accounted for, given that the 2 ms test specifically used maximum contrast, which is not the case in a real-life scenario.
For ambient light, I wish to remind you that the test was performed in a dark room, with the "ambient illumination" occurring in a 47.7 cm × 35.9 cm square around the main screen. Further note that, in my understanding, the experiment is set up such that ambient light and screen brightness add up. On top of that, "The level of ambient illumination is adjusted by the subject until flicker in the modulated area is just noticeable." I.e. ambient light in the test was adjusted to the level at which flickering was visible, meaning it was not chosen under regular outdoor conditions, but under close to optimal ones.
The study notes that the values in the experiment range from 200 Hz to 800 Hz, which is something. Although I'm not certain what the range of people was in that particular study.
The one used for that result uses an average illumination equal to a commercial display (by its own words). Which studies explicitly use ambient and display brightness equal to real-world conditions? And which real-world conditions in particular? (Evening vs. the brightest hour of the day?)
That part of the standard does not properly handle the issue, as the person moves during the blitz. The location (physically and on the retina) does not stay constant during the whole motion (as it does in flicker tests) but moves over different areas of the retina and, among other things, also out of the distance at which the eye is focussed if the person comes closer or moves further away.

As such the conditions outlined in the paper are not met in any blitzing scenario.
These belong together as they make the same mistake. The standard doesn't account for size. It excludes some (vaguely defined) small sizes, but does not account for how much the value varies for all other sizes.
Excluding 7 mm bullets doesn't account for the differences that result from a 7° angular size vs. a 70° angular size.
It's not that size and distance won't influence the result at all until they drop below a certain threshold where FFT suddenly rapidly changes.

On a side note, I also wonder why size and distance are mentioned separately. I assume that implies that there are influences of distance outside of a decrease in angular size that might require consideration.


The experiments are designed to simply find the limit of sensitivity to changes in visual stimuli. They don't literally have to look like real-life scenarios as long as the value is accurate and can be used for various irl purposes. I don't want to debate the inner workings of the experiment, tbh. Questioning the experimental methodology doesn't refute the accuracy of the value in real-life scenarios. Look at a standing fan, for example, because that's one of the best real-life examples I can think of atm of constantly changing stimuli becoming imperceptible, with the frequency of modulation of the blades being around the 500 Hz we're debating.

Aside from the fact that commercial displays are literally objects in real life, the functionality remains the same and is consistent with practical irl things like a standing fan or fans used for holographic projection.

The rest of the guidelines I made are just as ambiguous as our own standards on FTE, yet they still address all your concerns to a reasonable degree. Your only issue with them seems to be that the guidelines are not hyperspecific on the margin of error FFT has under hyperspecific conditions. If you have a problem with how vaguely my guidelines address your concerns, then the same level of scrutiny should be applied to the current FTE standards in general. This is why I will always love using the fan as an example. It just seems like a textbook application of FFT in real life, as it obviously stays consistently within the 200-800 Hz range (where the 500 Hz came from) in most real-life conditions, daytime or nighttime, as long as there is enough light in the area to see the fan clearly.



Yeah... I don't think you really understood the argument.
So let me put a lot of work into an explanation.
Say this is a flicker test in which the stickman appears briefly. If the appearance lasted less than 2 ms, we definitely couldn't see it, yes? Pretend in the following that the duration of its appearance is always 2 ms.
Now we have two beside each other, but same story, right? As long as they pop up for less than 2 ms, we don't see them.

Next:
We have again two stickmen, but this time they don't appear simultaneously. However, if they each appear for less than 2 ms, we still should not see either of them, right? I mean, the stickmen are on 2 separate parts of your retina. If you don't see them separately, why see them together, right?
It's just one flicker you don't see, followed by another independent flicker you don't see, with no part overlapping in a way that at some point the flicker duration would be longer.
Essentially the same scenario as in the colour inversion test with the edge. Just that instead of flickering between one side with one colour and one side with another, we flicker between one side that is white and one side that has a stickman. (And instead of flickering the stickman back and forth constantly, we only do 1 flicker.)

Then let's get to an extreme case.
Now we have 6 positions, but it really is the same game as before. Each of the flickers is in a separate area of the retina. They don't overlap. We pretend each one is, let's say, 1 ms long on its own. Then the total series of flickers would be 6 ms.
But while the total series is 6 ms, since each separate flicker is 1 ms (< 2 ms), we wouldn't see any of the flickers that the series is composed of. And hence we also would not see the series in total.
That means the stickman can "move" from the right side of the image to the left side in 6ms, but stay below the flicker threshold for each of the 6 positions he occupies. Hence, he can make a 6ms "run" from left to right while being invisible to the eye according to the FFT.

Now, a moving person of course doesn't just teleport between 6 positions like the stickman does. However, we can say each position is a sector of the image, so that the image is divided into 6 equally large non-overlapping sectors. In that case, we could imagine a person running through the image such that they completely cross 1 sector in 1 ms.
In each sector, they spend only 1 ms in total and hence a duration shorter than the FFT timeframe. Correspondingly, they are on any one part of the retina for only 1 ms. No vision cell gets influenced by their presence for longer than 1 ms. Hence, by the reasoning of the FFT, they should not be seen.
Meaning, they can perform a 6ms movement that is FTE.
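To make the arithmetic of this sector argument explicit (the 6 sectors and 1 ms per sector are just the example values used above):

```python
# The sector argument in numbers: a runner crosses N equal, non-overlapping
# sectors, spending t seconds in each. Every individual dwell stays below the
# 2 ms flicker threshold, yet the total crossing time can be arbitrarily long.
FFT_PERIOD = 0.002   # s, the threshold under debate

sectors = 6          # number of sectors (example value from the post above)
dwell = 0.001        # s spent in each sector (example value, below 2 ms)

total = sectors * dwell
print(f"per-sector dwell: {dwell * 1000:.0f} ms (threshold: {FFT_PERIOD * 1000:.0f} ms)")
print(f"total crossing time: {total * 1000:.0f} ms")
# No single retinal area is stimulated for longer than 1 ms, so by the FFT
# reasoning none of the individual 'flickers' should be seen, even though the
# whole run took 6 ms.
```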

Your game isn't motion, it's flicker-induced motion. And I know that the flickers on those imgur scans aren't 2 ms, because our phones' or PCs' refresh rates aren't up to that level. I have a feeling you already know that tho.

So yeah, if the flicker is 2 ms for all of them, those stickmen will all become imperceptible.

The same logic applies to our brain. If transient changes in our visual field happen in less than 2 ms, then the things our eyes pick up wouldn't be processed by the V1 neurons, and the things we see would become imperceptible.



I mean, one fan I demonstrated had just 2 blades, so that already doesn't work as an explanation.

However, to begin with, I think your reasoning should apply the exact opposite way.

By your reasoning, if I use a fan with 100 blades, it should be invisible at a little more than 1 rotation a second. That is obviously too slow to be invisible for the eye.

It stands to reason that in order for the fan to be invisible, we want the blades to be in the spot we are looking at for as short of a time period as possible (or, more precisely, the average percentage of time per period of time that a blade is in the spot should be as small as possible). E.g. a fan with a single really thin blade would certainly go invisible at lesser speeds than one with a blade that occupies 95% of the whole circle, right? Or one that is 50% of the circle or one that is 1/3rd of the circle.
In fact, in the 95% case I would argue that rather than the fan going invisible, the gap would stop being visible as we speed it up.

Point is: with a fan with 3 blades, the blades will occupy the spot we are looking at 3 times longer than with 1 blade. As a result, they will reflect 3 times more light into the eye. It appears entirely implausible that more light reaching that spot on the retina would result in less perception.
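To put this occupancy argument into numbers (the blade fractions below are illustrative choices, not measurements of any particular fan): if one insists on the "dwell time below 2 ms" criterion, the required rotation speed scales directly with how much of the circle a single blade covers.

```python
# Rotation speed required for a blade's dwell time over one spot to drop
# below 2 ms, as a function of the fraction of the circle one blade covers.
# The fractions are illustrative examples only.
FFT_PERIOD = 0.002  # s

for blade_fraction in (1 / 50, 1 / 6, 1 / 3, 0.5, 0.95):
    required_rps = blade_fraction / FFT_PERIOD   # dwell = fraction / rps < 2 ms
    print(f"blade covers {blade_fraction:.1%} of the circle -> "
          f"needs > {required_rps:.0f} rev/s ({required_rps * 60:.0f} RPM)")
```

Under that assumption, a very thin blade (1/50 of the circle here) would meet the criterion at around 600 RPM, while the 1/6 blade from the earlier estimate would already need roughly 5000 RPM, far above standing-fan speeds.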


Also, if you acknowledge the fan case, then that is in conflict with this:
A fan blade might move a distance of 1 km if you let it spin long enough. Yet it is invisible during that entire time, without the timeframe being the length of one flicker at the FFT.
I.e. it demonstrates that the whole distance moved in an FTE feat does not have to be moved within a timeframe below the FFT.
In other words: you can be invisible via speed for as long as you like, if you are fast enough.

My explanation won’t work because it was based on your calculations for your hypothetical one-bladed fan rather than the video with the fan that has two thinner blades.

No. A fan with 100 blades will indeed be invisible at a little more than 1 rotation a second. Nothing discredits the logic based only on the parameters you set in the OP. Imagine 100 blades taking up 1/6th of a circle - can you imagine it? Yeah, me neither.

Yes. The size and volume of the blade do lessen the optical illusion that it's going faster than it's supposed to.

Yes. Thin fan blades are easily imperceptible. All of which goes back to my point that smaller objects are harder to track in general, hence they are easy to lose sight of. So if those blades are thin and still clearly visible, it should be fine. Let's use your YouTube video, for instance. Let's assume those blades take up 1/16th of a circle. A single blade already comes out to 0.002875 s... There are two blades... Should I go on?

Yes. Thick fan blades that take up more of a circle need to be faster to surpass our visual perception. Here’s the thing… the spaces within the circle would probably become invisible instead of the blades, since over half of the circle is filled by the fan blades. However, I do still believe that at enough speed the fan blades will blend into the background as long as there is any kind of space between them. For example: let’s assume the blades take up 9/10 of the fan. The 1/10th of free space would immediately vanish if the fan moved at regular speed. However, if the fan was 13-21x faster than it normally is, then the blades themselves should disappear.

Yes, your point is cool, but always remember that at enough speed fan blades will always become imperceptible. Yes, more light enters the eye, but the changes in light are less apparent the bigger the object is.

This is in line with one of the ways of calculating feats that I suggested in this post. I’ll reiterate real quick: find the length of the object that corresponds to their direction of movement and apply the timeframe (0.002 s), i.e. length / time. That should be the minimum speed for any object to surpass visual perception. I guess with this we can reduce the ambiguous nature of the guidelines on size.




I’ll wait for your reply to DarkGrath and continue from there.
 
(Just a small note that those are long replies, and I'm not sure if I will finish my reply to them today. Gonna do the smurf hax thread first and then we'll see)
 
This phrasing is evasive. Being able to tell a change has occurred in an object is perception - you cannot tell a change is occurring in visual stimuli if you are not perceiving it. And furthermore, this kind of perception is of high relevance to the topic. What we want to know is, if someone is observing something/someone, and that something/someone changes their position, how long does it take for a person to perceive that a change has occurred? The methodology of this study tells us, when an object with a defined edge is being observed, and a change in the object occurs, how long it takes before we can perceive that change.

This argument seems to be based on questioning the operational validity of the study in relation to our question, but I don't see where you're coming from with that. It's a credible and relevant answer to what we want to know.
It assumes that the brain behaves the same way when telling you that a colour-changing screen is changing colour as it does when processing movement.
But that's a big assumption and not really supported by the study's experiments, which test nothing in regard to movement.

I mean, really, you can already go into this from a signal processing perspective: people are not looking at the flickering screen for the length of just 1 flicker and then having to decide. They are looking at it for extended periods of time. As such, they have an extended number of samples from which to decide what exactly is going on. That is relevant, as an increased number of samples helps in accurately processing a hard-to-identify signal.
Consider: you don't need two samples that are 1/60 of a second apart to tell that a 60 fps screen is flickering. Having one sample 1/60 of a second in and one 5/60 of a second in will make you see one light and one dark period of the flickering and hence lead to the conclusion that a change occurs, even if it tells you little about the exact rate of change. That's basically why screens flicker when you record them.
The same could happen when taking averages over different timeframes: the averages will have slightly different values.
Basically, in order to perceive a flicker, your brain has to decide whether the data is not constant in any fashion, which, signal-processing-wise, is just about the easiest question one could answer.
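A minimal illustration of that sampling point (the 60 Hz flicker and the sample times below are arbitrary example choices): even a handful of sparse, widely spaced samples of a flickering signal will land on both a bright and a dark phase, which is already enough to conclude the signal is not constant, without ever resolving the flicker rate itself.

```python
# Sparse samples of a 60 Hz on/off flicker: a few samples spread over a
# longer viewing window already hit both phases, which is all that is needed
# to decide 'this is not constant'.
def flicker(t, freq=60.0):
    """Square-wave flicker: 1 (bright) in the first half of each period,
    0 (dark) in the second half."""
    return 1 if (t * freq) % 1.0 < 0.5 else 0

# Arbitrary, widely spaced sample times within half a second of viewing.
samples = [flicker(t) for t in (0.013, 0.121, 0.250, 0.377, 0.490)]
print(samples)                                # a mix of 0s and 1s
print("not constant:", len(set(samples)) > 1)
```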

And afterwards, the brain has to decide to actually put weight on the observation and show it to you, of course. That's a non-negligible factor. The brain makes a lot of guesses and interpolations to figure out how things should look. Consider, for example: if you close an eye, you technically have no depth perception, but you will still be able to accurately judge the distance of a lot of things. Why? Because your brain knows stuff. Stuff like the size of objects, or what it looks like when one thing is in front of another thing, and based on that it makes a lot of educated guesses.
Most optical illusions are based on the brain's guesses working out wrong. Your brain constantly filters your own nose out of your vision (unless you focus on it) as it deems that not relevant. Not accounting for the brain's processing differences is a bad idea.

In that regard, movement perception is a different beast. You have only a very brief sampling window for a blitz, as opposed to the long durations you may stare at a flickering screen. Fewer samples of the data make processing it accurately harder and demand more guesses.
On top of that, you are looking at a more complex action.
To recognize that something moves the brain has to identify the object at one point and then at a later point identify it again as the same object and make the connection between the two to conclude the object moved.
In the case of a blitzing situation, that means that it has to conclude that the blur it sees later is the person it has seen before.
Or, if it does not, it at least has to decide that the blur is relevant enough to make you aware of it, and not some irrelevant noise like a speck of dust close to the eye, an eyelash, remnants of a blink or some artefact from eye movement.
Your brain has to constantly filter noise from structure and that is leagues more difficult than just telling whether or not something is a constant signal or not. It is difficult enough that finding ways to make computers separate noise and structure is an ongoing subject of mathematical research.
Further consider that the brain has to do that in a non-"sterile" environment with countless other things moving too.

And all of that is without consideration of the fact that we have no idea which kinds of things our brain does exactly while image processing. For all we know it could, for some reason, be developed to handle those cases by completely different methods.

So yeah, without conclusive research into the subject of applying one to the other, I do consider equalizing those cases a stretch.
Again, this is evasive. The original study that found the 500 Hz threshold didn't just measure the root variable of FFT - it specifically measured people's sensitivity to changes in visual stimuli as an operationalisation of FFT. The participants reported the point at which they could no longer perceive changes in the visual stimuli, meaning lesser frequency changes than the ones reported were perceptible. This tells us how quick a change to a visual stimulus needs to be to be imperceptible, which is exactly what we want to know. What your brain is processing when an object in front of you is changed, versus when it is moved, is ultimately still processed as visual stimuli being changed - there is no reason why it should produce a different result from the ones we outline in our standards. If you're assured that it is, I would expect empirical research to support this point.
First things first, you have the burden of proof on the equalization. You have to conclusively prove that it works, not I that it doesn't work. Science defaults to the null hypothesis of "we can not assume there is a connection".

That being said, while you reformulated that nicely, it really doesn't change the subject matter. The experiment is about people being able to see if a screen flickers. Nothing more or less is measured.
That this equates to "people's sensitivity to changes in visual stimuli" on a general, not flicker-related level that would apply to a blitzing scenario, is not tested. That is your inference and, as I expressed above, I see ample reason to doubt that it's that simple.
That is a narrative review that was brought up in the thread regarding CFFT, not the primary source for the changes. Narrative reviews are not reliable evidence on their own, and merely serve to compile a broad explanation of a topic. The primary source this is based on is a study by Davis et al.
It is the primary source for the statements supporting flickering being any criterion for dynamic vision at all. Or perhaps not the primary source, but a compiled version of the sources therein, which are the primary sources.
It's a rather meaningless point to argue, in any case, unless you wish to actually doubt that parts of its contents are true.
Every participant-centric factor is accounted for in the study by the sample composition and the range of possible values that can be caused by participant variables. To quote the study verbatim:

"However, when the modulated light source contains a spatial high frequency edge, all viewers saw flicker artifacts over 200 Hz and several viewers reported visibility of flicker artifacts at over 800 Hz. For the median viewer, flicker artifacts disappear only over 500 Hz..."

The study found variance in the threshold at which perception was no longer possible based on the participant examined. On the lowest end, participants still recognised artifacts at 200 Hz (5ms), and on the highest end, participants recognised artifacts at as high as 800 Hz (1.25ms). 500 Hz (2ms) was the median value that they received for the typical participant. At this point in the existing research, being able to adjust our standard precisely so we always have a value that is tailored to the specific person perceiving an object just isn't possible - we don't know enough about the impacts these extraneous variables have. We just know, under these conditions, they produce a range from 200-800 Hz with a median of 500Hz, so we use the median value.
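For anyone who wants the Hz-to-timeframe conversion spelled out (these are simply the frequencies quoted from the study):

```python
# Converting the flicker frequencies quoted from the study into their
# corresponding periods (period = 1 / frequency).
for hz in (200, 500, 800):
    print(f"{hz} Hz -> {1000 / hz:.2f} ms")
# 200 Hz -> 5.00 ms, 500 Hz -> 2.00 ms, 800 Hz -> 1.25 ms
```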

Furthermore, the non-participant extraneous variables that make this study different from natural conditions are also acknowledged. To again quote the study verbatim:

"We observed the effect described in this paper whenever we displayed an image containing an edge and its inverse in rapid succession. The effect was even stronger with more complex content that contained more edges, such as that in natural images. We chose a simple image with a single edge to allow our experimental condition to be as repeatable as possible."

The objection makes it sound like these extraneous variables should necessarily make the value we're looking for higher than 2ms, but far from it. In fact, the study suggests that, when given more natural images (specifically pointing to those that will contain more complex content and more edges, i.e., not just a simple switching of light and dark), participants could perceive changes faster than they could in the more simple conditions. They state that they chose a simple image with a single edge to simplify their method, despite knowing that natural images would produce a stronger effect, which is common as a means to make it possible to replicate a study at a later date. The results of this study are a conservative version of the effect we are trying to identify.

The implication behind this argument is that these extraneous variables would make the results >2ms, but the study contradicts this notion. Participant variables are accounted for by sample composition averaging out these extraneous concerns, and non-participant variables that distinguish this study from a natural environment would, as the authors of the study suggest, create results that are <2ms on average.
You have not come even close to accounting for all the mentioned criteria.
Like, sex of the participants? Probably. Age? To a degree at least. Not sure what the actual age range was or if there was any correlation between age and the findings.
Personality traits, fatigue, circadian variation in brain activity and cognitive functions like visual integration, visuomotor skills and decision-making processes? No idea which conditions the study had in those regards.
And for all the external conditions? The only one you have identified as not mattering is the number of edges. And even there, they don't handle it in a natural way, since instead of movement they use colour inversion.
All the other external factors you have not addressed in any way.
In fact, let me point out that the participants chose the amount of ambient illumination such that they could see the flickering, meaning there was a degree of optimization in regard to that factor that would obviously not be present in reality.

And, as I have to mention in literally any science debate apparently, I am not assuming that the factors all necessarily make it worse. I'm only recognizing that we do not know their influence and hence do not know the actual value at all. You just can not default to the assumption that those values will work out in your favour and we can not make a calc with "I don't know".
I addressed this in the previous section, but to reiterate - more natural stimuli, according to the authors, produced a stronger effect. This objection implies that we would see a weaker effect, but the authors themselves contradict this.
Wrong, the authors say the same method with a more natural picture produced a stronger effect. However, colour inversion of a natural picture is not actually more representative of what happens in a movement scenario, where colours stay constant, edges shift and blur is introduced.
If a further objection is that this is only one study to produce high results under these conditions, I'll note that's not correct either - other studies have found values as high as 1.98 kHz under different conditions. To be clear, I would not suggest scaling up to this value - of the studies I have found regarding similar topics, I would argue that they are less relevant in methodology than the study we are analysing here. Credible in regards to their own research questions, but not relevant. Hence, I would not suggest giving these studies any precedence compared to the study we are evaluating here.
I believe I do not have to expand on why that setup is not more representative of the thing we are evaluating than the one we are already talking about.
Yes. The brain is highly complex, and if your point is that tying our ability to perceive motion down to a single value has a margin for error, I'd concur.

That being said, I don't see how anything brought up in this section addresses the concerns here. These sources vary considerably in reliability (these include an unsourced photography blog and a few Wikipedia articles; far less credible by nature than empirical research into the topics), and even if we take all sources as being correct, they only skirt around the underlying issue. Chronostasis illusions can cause us to perceive time moving slower than it is due to our brains filling in redacted information when we dart our eyes from one surface to another, yes, but what does that mean here in concrete terms? If I am looking directly at a subject, the fact that I could perceive them differently if I was first looking at a different subject and then back to them doesn't change our conclusions here.

These are a lot of claims from sources worthy of deeper analysis into credibility that, even if all true, don't ultimately say anything of relevance that we don't already know. Any standard we use for any calculation is inherently going to have some margin for error, some simplification of reality, and the most that this section addresses is that this standard also simplifies reality in a way that may have some margin for error. We still nevertheless have empirical evidence that the real value would fall somewhere around the one we have suggested for the standard, which is exactly what we always do for our standards.
None of those sources are directly about the subject. What they aim to prove is just that the brain's processing habits have a major influence on our vision and require consideration.

Handwaving that as a "margin of error" is unfitting, as you have no adequate way to quantify it or to even prove that it is small.

Contrary to what you say, you have no actual empirical evidence that the value we are looking for and the FFT are close to each other. That the brain's processing will not change the result significantly in a scenario which does not involve flickers is not tested in any study shown and is speculation on your part.

In fact, consider for a moment how much relevance you give the presence of edges here. An edge makes no difference to the eye, but is only a change in the way the brain processes the image. Yet you suggest that the presence of this one factor makes a difference of about an order of magnitude, if not more. Neglecting other brain processing factors as small margins of error is contradictory to the reason to even use this particular paper.
This question is odd. How "far" a movement would have to be to equal "the length of one flicker" is a definitionally weird question - the "length" of one flicker is an amount of time, while the "length" of a movement is a distance. Asking how far the length of one flicker is is like asking how many kilometres there are in an hour.

What the standard here is suggesting is that transitional states less than 2ms would, for the average person, be occluded from perception - if someone travelling at a constant speed travels 2 metres in less than 2ms, then our perception of them would not be equipped to see them where they were when they had travelled 1m. Our perception should be able to catch on to the change (or more specifically, the fact that the object that was there is now not there), however, once 2ms has passed. So if someone disappears from one spot and immediately reappears in another spot by speed alone, we would suggest they had moved between those two spots in around 2ms.

If I understand what you're saying correctly, you're questioning whether this has applicability to feats of moving fast enough to turn invisible - in other words, moving between spots so quickly at a consistent rate that the brain cannot process the object in any position. This probably does have relevancy to such feats, but it's explicitly not what the standard is made for. It's made for feats where someone disappears from one spot and seemingly immediately reappears at another, with no such extended "invisibility period" relative to the observer - if there is evidence that the observer had some timeframe in which they couldn't see the object in either position, we explicitly wouldn't use the 2ms standard. This standard, in fact, explains exactly why; it's because disappearing from one position and only later appearing elsewhere under this standard would indicate they had taken longer than 2ms to reach that position. In other words, this objection just isn't directed at the standard, or the feats we'd use this standard for, in the first place.
The problem is that you cannot possibly determine whether there was a "period of invisibility" (as you call it) just from the fact that the opponent didn't see the person. The period of invisibility could in fact be shorter than the person's reaction time, so that the person isn't even particularly aware of it. And it's not like the person has to move back and forth to trigger it.

See my prior reply for more explanation of the scenario, but the point is that an FTE movement can be subdivided into a series of successive invisible flickers. And that is not even something for which the character has to move in a special or particularly inconvenient way.
There is no contradiction to the observations in explaining the movement that way and therefore there is no reason to assume it was performed in the timeframe of one flicker.
To start, the source pointed to as evidence that fans turn invisible to some people doesn't actually say that. Putting aside the fact that a StackExchange forum post is not a credible source, they only say they "can't see the blade", and then elaborate that what they mean is that a moving fan "can appears [sic] as a circular plane with blade color". They aren't talking about fan blades turning invisible - they're talking about how they blur when they move fast enough. The response to that forum post elaborates on the physics behind why fan blades create a blur of colour when they move fast enough. This isn't relevant to our topic. They aren't talking about something turning invisible through movement.
It's evidence by example. I see it, ImmortalDread said they see it, and Arnoldstone18 also admitted in the last thread to there being fan blades invisible to them. (They specifically mentioned 3000 RPM industrial fans.)
Basically, I think we can conclude that I'm not crazy and that I actually see the phenomenon of fan blades turning invisible at the edges, even without needing a big study about it.
Furthermore, the 1/6th of a circle standard is not elaborated on, despite how significant it is to the results. In fact, even on some of the thicker fan-blades I've seen, this would be a very questionable estimate.
1/6th was an estimate of about 3 blades with equally large gaps between them, which I figured was sufficiently generous. In reality, I'm fairly sure it's worse than that with my fan.
If you wished to reach 1/500th for a 3-blade fan, then each blade would need to occupy less than 1/14th of the circle. That is just not the case unless we are talking about much thinner fan blades than what I use.
Furthermore, there is a fundamental issue with trying to deduce this through the movement of objects (such as fan blades) spinning around a core, which is the fact that less overall distance has to be travelled by the blade the closer to the core it is. When we talk about the "fan blade turning invisible", what part are we referring to? Since sections of the blade further from the core move a much greater distance in the same rotation than sections closer to the core, turning invisible by increasing the rotation speed will occur considerably earlier for the sections of the blade that are further from the core. Where do we draw the line?
We talk about the outer edges of the blade and all variables in the calc have been chosen with that in mind. Want a pic of my fan?
And while this may be less important than the previous points (due to being anecdotal), I can't say I have ever seen a standing fan have its blades turn invisible. Since I lack experience with larger fans (like industrial-grade fans), I'd entertain the possibility that those might be able to turn invisible at such speeds, but I've never seen an ordinary fan do anything of the sort. I don't see why we'd use a statistic for an average fan for this question when an average fan evidently doesn't turn invisible.
Well, two people have seen that phenomenon on a regular one, soooo... Not that industrial fans would support your idea either. 3000 RPM are still not enough to reach the 1/500th threshold.
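To check both figures in one place (a minimal sketch; 35 revolutions per second is the upper end of the standing-fan range quoted earlier, and the 1/6 blade fraction is the estimate from the OP):

```python
# Two quick checks against the 1/500 s (2 ms) threshold.
FFT_PERIOD = 1 / 500   # s

# (1) Largest per-blade fraction for which a fan at 35 rev/s (2100 RPM)
#     still reaches a dwell time below 1/500 s.
max_rps = 35
max_fraction = FFT_PERIOD * max_rps
print(f"max blade fraction at {max_rps} rev/s: about 1/{1 / max_fraction:.0f} of the circle")

# (2) Dwell time of a 3000 RPM industrial fan with the 1/6-per-blade estimate.
rps = 3000 / 60
dwell = (1 / 6) / rps
print(f"3000 RPM with 1/6 blades: dwell = {dwell * 1000:.2f} ms (threshold: 2 ms)")
```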

Incidentally, you can also have a look at the toy I posted a video of. There the explicit purpose is it being FTE.
To quote the standards:

"2. The observer's eyes must be able to focus clearly on the object before the speed feat occurs..."

Under these circumstances, the observer would not be able to see the bullet prior to it leaving the chamber, which would disqualify such a feat.

However, I do know that this is a semantic (and not particularly compelling) note, given the point of it as elaborated in the next argument:
Finally! Someone that actually read that next part!
The aforementioned point about needing to be able to focus on the object would disqualify feats that are for minuscule objects (whether due to size, distance, or both) relative to the observer. However, beyond this, you do have a point. Being closer or further to the same object could potentially produce different values - the original study found these results regarding visual stimuli placed directly in front of the participants, but to my knowledge, there aren't any similar studies addressing whether distance has an effect (or if it does, as you'd probably expect, the strength and scale of the effect).
The review explicitly mentions that size and distance have an effect. But those findings were of course not obtained in equivalent studies.
I don't find this objection compelling enough to toss out the whole standard, but to rework the standard such that we account for distance (even if it is, for example, as simple as disqualifying feats in which the observer is too far away from the object) would be a good step towards ensuring this standard is applied where it is accurate to do so and not elsewhere. Now that you have brought up this point, I would actually advocate for implementing such a revision.
So how would you account for it in practice?



So much for the DarkGrath reply. I will get to Arnoldstone18 later, hopefully today.
 
It assumes that the brain behaves the same way when telling you that a colour-changing screen is changing colour as it does when processing movement.
But that's a big assumption and not really supported by the study's experiments, which test nothing in regard to movement.

DT, we’ve gone over this.

I already explained why this isn’t a big assumption. The test isolates the V1 neurons specialized for changes in light in general. The information from the V1 neurons is then used in processing movement. If the V1 neurons can’t detect any changes, then there is no processing of movement.
 