A standing fan qualifies and has been thoroughly addressed, and we have come to an agreement, or rather, you conceded, given that you are unable to disprove the logic. The larger the size, the higher the speed required to cause enough changes per second for our brain to lose sight of the blades. The higher the number of fan blades, the lower the speed required to cause enough changes per second for our brain to lose sight of them.
You have not actually addressed it, as, with all these things considered, it still results in timeframes longer than the FFT.
The fan can be viewed close up, i.e. the apparent size is sufficiently large.
And with 3 blades, 1 point on the border of the fan changes between blade and no blade 6 times per rotation (3 times to blade, 3 times to no blade). At 1300 to 2100 RPM, that equates to (1300 RPM / 60 s * 6 changes per rotation =) 130 changes per second up to (2100 RPM / 60 s * 6 changes per rotation =) 210 changes per second. Or, in reverse, a change happens every 1/130th to 1/210th of a second.
By the reasoning you're going for, the timeframe between each change is a flicker. Yet both those timeframes are longer than the 1/500th of a second (2ms) threshold being proposed. As I see it, you have given no proper explanation for why that is not a contradiction.
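For anyone who wants to check the arithmetic, here is a minimal sketch of the fan numbers against the proposed threshold (the 3-blade, 1300-2100 RPM figures are from the quote above; everything else is just unit conversion):

```python
# Fan-blade change frequency vs. the proposed 500 Hz (2 ms) threshold.
# Figures taken from the post above; the rest is unit conversion.

BLADES = 3
CHANGES_PER_ROTATION = 2 * BLADES  # blade -> gap and gap -> blade
THRESHOLD_HZ = 500                 # proposed FFT threshold (2 ms period)

for rpm in (1300, 2100):
    changes_per_second = rpm / 60 * CHANGES_PER_ROTATION
    ms_per_change = 1000 / changes_per_second
    print(f"{rpm} RPM: {changes_per_second:.0f} changes/s, "
          f"one every {ms_per_change:.2f} ms "
          f"(threshold period: {1000 / THRESHOLD_HZ:.0f} ms)")

# Output:
# 1300 RPM: 130 changes/s, one every 7.69 ms (threshold period: 2 ms)
# 2100 RPM: 210 changes/s, one every 4.76 ms (threshold period: 2 ms)
```

Both periods exceed the 2 ms threshold, which is exactly the contradiction being pointed out.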
What I conceded (or rather just didn't care about debating) was the topic of whether or not a rotating disk with a tiny gap would eventually go invisible if it is really fast. That's not this one.
You used a video of the magic sword when you specifically said and I quote: “I will say that while I will provide some videos in the following, since cameras with their shutter speed and videos with their FPS obviously screw with the observation, they are more for illustration purposes than to prove anything.”
I had already meant to address that, but perhaps I didn't put it clearly enough. The entire purpose of the toy is that it is FTE when looked at in real life. If the illusion didn't work in reality, the toy itself would not work as intended.
It is a magic trick you are supposed to be able to show other people, not a camera trick.
It's good that you finally addressed it, but that is really no counter-argument here.
You are essentially saying the brain might detect something that might (not?) be invisible under certain circumstances. Aside from the obvious cases of brain damage in areas other than V1, lack of attention, and saccades, can you list any other circumstances in which the V1 neurons will detect something and the other neurons will remove it from the final image?
I could probably look for more, which would be a bunch of work, but I see no reason to. One valid example is enough to show that in general the line of reasoning just doesn't hold.
Again, I don't need to prove you definitely wrong, you need to prove yourself right. We don't just assume that any non-falsifiable theory is correct.
First of all, the main topic is perception blitzing speeds (a high tier feat), not faster than eye movement (a low-tier feat that I literally told you I was getting to in part 2 of my series of CRTs).
I'm just gonna say that this is a distinction that you mostly invented, since I'm fairly sure FTE has been used to mean what you call perception blitzing, but yes, I was always aware of what we are talking about.
Visual Perception involves the eyes and the brain. 2ms is the timeframe for the brain, not the eye, so “FTE” is technically wrong to say.
I hope you're not trying to say that you mean to disregard the eye as factor in perception blitzing. 'cause if so, you are handling a case you will never be able to prove applies in fiction. No work of fiction would specify for you that it means "not seeing someone move but completely disregarding the capabilities of the eye and only consider the brain".
Like, I have for the most part not been talking much about the eye, because some very restrictive guidelines could be set to account for its inability to move faster than a certain speed (i.e. the character must run straight at the character that is supposed to be blitzed, so as not to leave the center of vision). However, there is really no practical way to solve the problem of the limitations on its focus speed. (I.e. if a character charges at you faster than your eyes can refocus from far to close, that probably impacts things. Among other things, not focusing on something would impact the FFT, I'm fairly sure.)
Ultimately, you can not talk about outpacing vision without accounting for the limits of the eye's abilities.
I thought you were aware of this and were just using “FTE” simply because it's convenient, like it is for me, but it seems it's causing unnecessary confusion and misunderstanding. I told you in the previous thread that I would make a thread about being faster than eye movement (saccade) for a low end interpretation of FTE: faster-than-eye movement. This was meant to be a multiple-part series enhancing our loose standards with low and high ends for speed blitzing scenarios in general (today it's a high end scenario, “faster than perception”, if this gets approved; tomorrow I will do the low end “faster than eye movement” scenarios).
I only brought up the saccade in the context that it proves the brain can perfectly well make something invisible that reaches V1. Blinking would be another example of that. I never suggested that people were actually moving specifically during a saccade or while a character is blinking, or an equally specific scenario.
Again, you need to prove that movement picked up by V1 would always end up in the image as a person sees it. We know that isn't always the case (saccades and blinking are movements that reach V1 and are not in the final image a person sees), so you need to deliver additional proof that it would be the case specifically for blitzing scenarios. And that's something you have not done.
2ms is a low end of a high end interpretation of “FTE” (the high end being the topic of discussion in the first place; “visual stimuli/object moving so fast that it appears invisible to human observers”).
No, even for this case you have not proven that it is a low end, for every reason I mentioned during the debate.
The only reasons these neurons suppress information collected by V1 are when the observer is literally distracted, focusing on something entirely different from the thing they are supposed to focus on (the feat); when the observer suffers from brain/neural damage (obviously); or when something moves faster than a saccade, which I will address in my next CRT. (Feel free to let me know if there are other circumstances, because the absence of info on any circumstances beyond those I have mentioned is not evidence that there are none; otherwise a fallacious argument would be made.)
The article you linked does not say what you claim. In fact, it can not say what you claim, as, again, even if you try to focus on a saccade, you can not see it. I.e. it's possible for the brain to filter something out of an image despite focus being given.
What you are doing seems to be an invalid logical inversion. That a lack of sufficient focus can mean that part of the image is suppressed does not imply that focus prevents part of the image from being suppressed under all circumstances. In mathematical terms, "A implies B" does not imply "not A implies not B". The correct logical inversion, the contrapositive, says that "A implies B" implies "not B implies not A". I.e. the correct inverted statement you get is that if no part of the image is suppressed, the lack of focus can not have been above the threshold where image suppression happens.
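Written out formally, this is just standard propositional logic, with A = "focus is insufficient" and B = "part of the image is suppressed":

```latex
% A = "focus is insufficient", B = "part of the image is suppressed"
(A \Rightarrow B) \not\Rightarrow (\neg A \Rightarrow \neg B)
  \quad \text{(denying the antecedent; invalid)} \\
(A \Rightarrow B) \iff (\neg B \Rightarrow \neg A)
  \quad \text{(contrapositive; valid)}
```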
The V1 uses magnocellular cells for the detection of rapid and transient changes, which can occur when things are in high-speed motion. Scientists use the flicker fusion threshold to determine the minimum rapidity at which the magnocellular cells can no longer detect rapid changes in the visual field over time. The 2ms timeframe is an average lowball within a low end interpretation of this high end topic, as explained by @DarkGrath and me.
Well, as said, the problem is that you have not proven that the magnocellular cells picking it up necessarily leads to perception under all (relevant) conditions.
And the guidelines were made in order to accommodate large margins of error caused by unconventional circumstances and abnormal vision, so as to mitigate abuse in calculations. The reason why 2ms (500 Hz) was used, and not longer FFT timeframes like 20ms to 11ms (50 Hz - 90 Hz), is that this experiment incorporated a defined edge, and I aim to use it as yet another necessary lowball to accommodate visual stimuli with multiple defined edges. For things without a defined edge, 20ms to 11ms can be used as an estimate. The necessary speed of a human will be subsonic+ as you outlined. This level of scrutiny is, at the very least, on par with our Tier 1 standards, and you're asking for even more than this? Wtf Don'tTalk?
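As a rough illustration of how the chosen timeframe translates into a required speed (the 2 m travel distance here is a purely hypothetical assumption for illustration, not a figure from the experiments or from either side of the debate):

```python
# How the assumed perception timeframe changes the speed a blitzing feat
# implies. The 2 m distance is a hypothetical value for illustration only.

SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature

distance_m = 2.0
for label, timeframe_s in [("2 ms (500 Hz)", 0.002),
                           ("11 ms (~90 Hz)", 0.011),
                           ("20 ms (50 Hz)", 0.020)]:
    speed = distance_m / timeframe_s
    print(f"{label}: {speed:.0f} m/s (Mach {speed / SPEED_OF_SOUND:.2f})")

# Output:
# 2 ms (500 Hz): 1000 m/s (Mach 2.92)
# 11 ms (~90 Hz): 182 m/s (Mach 0.53)
# 20 ms (50 Hz): 100 m/s (Mach 0.29)
```

Which of these timeframes is actually justified is, of course, exactly what is under dispute.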
No, the standards are really not on par with Tier 1, because Tier 1 standards are based on proven mathematics. We don't guess that a mathematical statement we use as a basis for Tier 1 is true.
You meanwhile just assume that certain factors make no big difference without the articles actually saying that.
The underlying scientific theory of a scientific standard needs to be scientifically true for the standard to be in place. That is the prerequisite. How we then equalize things that don't 100% match science (i.e. magical attacks, supersonic things that don't have sonic booms, etc.) is its own question and done with more leeway. But our standards should always be designed such that evaluating a case that is scientific is at least approximated with a low end.
I also request for @LephyrTheRevanchist to be tagged to look at this too, to give their opinion on both our arguments, as she was in favor of DarkGrath's and my view.
You could just ask them, but sure.
@LephyrTheRevanchist your attention has been requested.
I've been asked to provide more input here.
The natural approach to providing input on a thread like this would be to quote each chunk of the points presented in previous posts and to respond to them individually. I'm not going to do this. To be frank, having read through this thread, there would be far too much to say if I did that - I don't want to write a post in the range of nearly 10,000 words, and nobody wants to read that post either. More importantly, I don't think a lot of this is necessary to respond to directly, but I will get to that. Instead, I will outline the current case in favour of keeping the standard, and explain why I don't believe anything brought up so far substantially rejects this standard.
Research conducted by Davis et al. into critical flicker fusion rates (cFFR) found that, when participants were exposed to a high-frequency flickering image with a distinct edge, the ability to detect changes in the image ceased at a threshold of around 200Hz (5ms) to 800Hz (1.25ms), with a median of 500Hz. The theory presented in the original thread is that, because this was a metric of how long it took for participants to perceive changes in an object, it could reasonably be used for perception feats wherein an observed object moves between two places too quickly for the intermediary movement to be observed by a human. This inference tying the research to the changes of an object through motion is supported by Brown et al., who found a strong connection between metrics of flicker fusion thresholds and the speed of processing of the magnocellular (M) and parvocellular (P) contributions to Visual Evoked Potential (VEP) responses, finding p values between < 0.0005 and < 0.002 for the magnocellular contribution depending on the conditions (which, for those not familiar with statistics, essentially means that the probability that the correlation between the variables occurred purely by chance is below 0.05% and 0.2% respectively - effectively not worth considering as an explanation). To put this horrifically verbose, jargon-loaded research simply: the Visual Evoked Potential refers to our physiological responses to temporal changes in a visual stimulus (in a sense, how our brain tangibly reacts when something we are perceiving changes), and it is one of the key metrics for understanding how our visual processing system perceives motion, particularly the magnocellular contribution (for further, more technical elaboration on the concept, I'd recommend Baiano & Zeppieri). The ultimately relevant conclusion of Brown et al. is that our flicker fusion threshold is directly linked to the system responsible for handling our perception of motion in visual stimuli.
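(For readers checking the numbers: the frequency-to-timeframe conversions used throughout this thread are simply the period of the flicker.)

```latex
T = \frac{1}{f}:\qquad
\frac{1}{200\,\mathrm{Hz}} = 5\,\mathrm{ms},\qquad
\frac{1}{500\,\mathrm{Hz}} = 2\,\mathrm{ms},\qquad
\frac{1}{800\,\mathrm{Hz}} = 1.25\,\mathrm{ms}
```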
I agree with all of that. What I challenge is the jump from "V1 stuff perceiving change is critical to see" to "V1 stuff perceiving some change equates to properly perceiving the motion". Or, more generally, the jump from FFT being relevant to motion perception to the conclusion that anything longer than the FFT is part of the final perception.
This is hardly a novel conclusion, though it is an appropriately recent study to compound the existing literature supporting it. The idea that flicker fusion thresholds are a valid determinant of the speed at which we perceive motion has been an accepted notion for many decades now, with articles as far back as the 1980s already discussing this fact (as shown in articles like Brenton et al.).
I can only see parts of that article. In the parts I can see, I see no mention of FFT being usable in the way suggested by the standard. (especially considering what I mentioned above)
More recently, research by Thompson et al. into neurodivergent populations found that deficits in flicker fusion thresholds (and in the aforementioned magnocellular (M) system) were a key determinant of deficits in the processing speed of motion; to quote the study directly: "The prevalence, in early studies, of raised thresholds for motion coherence in neurodevelopmental profiles including ASD is suggestive of an early motion processing vulnerability, encapsulated in the dorsal stream hypothesis. Due to the strong afference of magnocellular inputs to the motion areas MT/V5+, a common conclusion was that there is a processing abnormality in the afferent Magnocellular (M) visual system." You may notice the magnocellular system from Brown et al. being mentioned again, and it is certainly of importance; it is the key to the speed of motion processing, and as mentioned before, it is directly related to our subjective perception of changes in an object through the flicker fusion threshold.
Well, this is evidence that moving in a timeframe shorter than the FFT is enough to be FTE (or to do perception blitzing, if you prefer that term), but as I keep saying, that's not what we are looking for. That provides you a sufficient criterion, but not a necessary one. Meaning: a high end.
We look for a low end, i.e. a necessary criterion. The question that needs to be answered is: what is the lowest speed at which we can be certain that going slower will not be FTE?
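In the same formal notation as before (this is my framing of the dispute, not a claim from the articles), with t the timeframe of the movement:

```latex
\underbrace{\,t < 2\,\mathrm{ms} \;\Rightarrow\; \text{FTE}\,}_{\text{sufficient criterion: a high end}}
\qquad\text{vs.}\qquad
\underbrace{\,\text{FTE} \;\Rightarrow\; t < t_{\mathrm{min}}\,}_{\text{necessary criterion: a safe low end}}
```

Only the second form lets you conclude a minimum speed from observing an FTE feat; the research establishes the first.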
So what does this leave us with? We have research identifying that subjective evaluations of the ability to perceive changes in a discrete object cease at 500Hz for the average person, with this being a consistent operationalisation of the flicker fusion threshold across the body of literature. We also have the fact that this aligns with visual evoked potentials (including the particular system involved in motion processing), a body of literature spanning decades suggesting that our flicker fusion thresholds and visual evoked potentials are the key factor underlying our perception of motion, and modern research verifying the notion that both physiological and subjective measures of flicker fusion threshold and visual processing speed are the determinant of our ability to perceive motion. Therefore, the original research tells us that the average person would cease to be capable of detecting a change in a perceived visual stimulus by the 2ms threshold, and a great body of research verifies the idea that this is generalisable to changes caused by the motion of an object. Therefore, the average person would cease to be able to perceive the motion of an object if that motion occurs in a 2ms window, and we now have an evidence-based standard to use for calculating feats involving these lapses in perception.
Well, again, this is just not what we are looking for. We are not looking for a criterion by which we know that moving that fast is FTE. We are looking for a criterion from which we know that moving slower isn't FTE.
Despite the intense debate occurring on this thread so far, I honestly cannot find anything to suggest this is a particularly controversial aspect of the literature in psychology - it has certainly been questioned and investigated, but our speed of processing for a change in an object being aligned with our speed of processing for the movement of an object is not only an intuitive conclusion with an obvious line of logic, but one that decades worth of research has consistently supported.
I have yet to see any study saying that there is an alignment in the sense of the question I posed above. We don't need just any alignment; we need specifically one that says that it's
a) not just proportional, but specifically tied to the FFT.
b) not just a prerequisite, or an important part, or related, but specifically that you can't go lower in speed without perception happening.
And what's intuitive is subjective.
Now, why did I write this instead of responding to the individual points made thus far? Because, frankly, I think a great deal of this debate has missed the forest for the trees. In all this discussion of hypotheticals, open questions, anecdotes, and unclear semantics, nothing brought up has actually refuted this body of research and the conclusions drawn from it. It's led to certain concerns, such as where the boundaries for these kinds of calculations should sit (which is a valid concern - the research does suggest that viewing conditions can contribute to variance in flicker fusion thresholds, so where we should draw the line on saying a particular viewing condition isn't good enough for the standard should be discussed). These kinds of skepticisms are not only important, but they are the life of scientific research; so much of what we know from science was caused by people asking the same kinds of questions that have been posed in this thread.
But the kinds of questions that could incite further research are not the same as the kinds of questions that refute the existing research by themselves. The fact of the matter remains that this is a standard based on a substantial body of empirical literature, and any alternative standard substituted for it would inevitably be based on far less.
Problem is that the questions do not need to refute the research. What the questions ask is whether the research is applicable to the scenario we wish to use it for. And without the answer to that being a provable "yes" the research is simultaneously true and unusable for our purposes.
I at no point said any part of the papers was wrong. It was always that I questioned whether their findings could be applied in the particular fashion suggested. And the papers at no point mentioned anything about this particular application.
In fact, by globally applied standards for evidence credibility (e.g. the evidence pyramid, for one), it wouldn't be an exaggeration to say that the case for the standard is objectively stronger than any alternative standard.
We can't just not have an opinion - even rejecting all calcs on the basis of being "unquantifiable" would be an opinion in itself, one which would be less evidenced than the case presented here. If we're going to acknowledge these types of feats at all, we need to base our standards on the evidence, and this is exactly what we are doing now.
Admitting that we just do not know enough to properly quantify it isn't not taking a position. There are lots of feats we can not quantify. Saying that a badly reasoned overestimation is better than acknowledging that we do not know a proper low end is just unscientific. If you want to invoke scientific principles, then I may as well bring up the null hypothesis: no relation is assumed until proven otherwise. There are many relations you brought up for which the articles have proven otherwise, but the one we care about is not among them. Basically, you make the mistake of assuming that evidence for a related but different thing is proper evidence for this thing.
We already had a way we deal with these feats: Acknowledge that we do not know in detail and rank it as Subsonic, as it's pretty evident from real-life examples that they ought to be at least that fast (There are people on skis that go subsonic). That's why the Subsonic tier is also called "Faster than the Eye".
We do not rank it as such because we think it's the true value or even a good approximation. We do rank it as such, because it's the only value of which we know that it is not too high.
That's just how we deal with things we can not properly quantify. Disregard the parts we are not certain about and use the parts we are certain about as a low end. Magical explosion? We can't quantify magic, but we can quantify the explosion. So we quantify the explosion bit as a low end and disregard considerations about how efficient magical energy conversion is.
Character has an impressive feat that can't be calculated? Rank them by their next best feat.
So evidently, no, I've not been convinced. To reject this standard isn't just to show reasonable scientific skepticism - it's to reject decades worth of research on the basis of personal reasoning. This standard is more than well-evidenced enough to remain on the wiki, and I wouldn't expect any other standard to have been given half the skepticism this one has.
The decades of research were not on this particular topic, otherwise we wouldn't have the debate. If there were an article about exactly the question of how fast something needs to move to not be perceived, I would gladly take what it suggests. But we have just flickers.
I generally also agree with DMUA that it takes casual statements or depictions and assumes that they are made with a scientific precision, which I think would realistically not be meant.
In any case, what was brought up ultimately ran into the existing counter-arguments. In my entire reply I ultimately just reformulated the existing arguments.
I won't close the thread or end the debate for now, but as I said, I will remove the standard for the time being by majority agreement. As it stands, it needs to be rewritten to the degree that any calc based on it would need to be redone even under Arnoldstone18's current suggestion anyway. So we do ourselves no favour by keeping it for longer.