
VS BATTLES WIKI REVISION: PERCEPTION BLITZING

This would count for feats where the person gets completely perception blitzed, correct?

Yes. Feats that qualify include:

  • An observer focusing on an object and said object suddenly vanishing via sheer speed alone.
  • An observer focusing on an object, and said object suddenly leaving an afterimage and appearing elsewhere via their speed alone.

Other FTE feats that don't qualify:

  • An observer trying to follow the object with their eyes but being unable to keep up (this will be dealt with in the next thread after this one).
 
Bump


 
I think we should msg dontalkdt and flashlight on their message walls since they were the only ones objecting
 
I think we should msg dontalkdt and flashlight on their message walls since they were the only ones objecting

No. If they were contacted 2-3 times, it’s clear they no longer want to participate for one reason or another. Contacting them any more times will just bother them.
 
The eyes are the main organs that receive light and allow us to see. When light enters our eyes, it is converted into electrical signals. These signals then travel through different parts of our brain, and eventually reach the primary visual cortex (an area in the brain), which helps us make sense of what we see.

Under normal conditions, our vision has certain limitations that can change depending on different conditions and circumstances. To understand these limitations, researchers conduct experiments under controlled conditions, often involving flickering lights.

When a light flickers very quickly, there is a minimum speed at which we no longer perceive the flickering and the light looks steady instead. This specific minimum flickering speed is called the critical flicker fusion frequency (cFFF) or Flicker Fusion Threshold (FFT). At this point, our eye receptors can't detect the individual changes in what we see anymore. Instead, they merge together, creating a steady signal that our brain interprets as continuous light.
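For reference, a flicker frequency converts to the duration of a single light/dark cycle by taking its reciprocal. A minimal sketch in Python, using 60 Hz purely as an illustrative value within the commonly cited range, not as an established standard:

```python
# Convert a flicker frequency (Hz) into the duration of one cycle (ms).
# 60 Hz is only an illustrative figure, not a fixed standard.

def period_ms(frequency_hz: float) -> float:
    """Duration of one flicker cycle, in milliseconds."""
    return 1000.0 / frequency_hz

print(period_ms(60))  # ~16.7 ms per cycle at 60 Hz
```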

What Do Flickering Lights Have To Do With Perceiving Motion?

People with higher Flicker Fusion Thresholds (i.e., the ability to still see high-speed flickering as flicker before it fuses into steady light) tend to have better accuracy in what they perceive in general, and higher thresholds are also linked to increased alertness and improved visual processing in the brain. In other words, people who can see fast “flickering lights” at a higher speed than others can also see fast-moving objects at a higher speed than others. [1]

This concept is also applied to TVs, cinemas, and other similar forms of media. That’s why frame-by-frame playback needs to happen at frequencies close to the limit at which we can perceive “flickering lights”. Animals with better vision than humans need to “see fast flickering lights” better to survive in the wild. According to this study, birds with a great ability to see “flickering lights” can see fast-moving objects better than humans, especially when those objects could potentially collide with them in the air. [2]






Let’s look at the eye and the brain.

We need light to see. Light bounces off things and goes into our eyes. The retina at the back of our eye collects this light. (The fovea is the part of the retina densely packed with color-sensitive receptors.)

All the light bouncing off the moving or stationary things we are looking at is turned into electrical signals that travel to the back of our brain, to an area called the visual cortex. The first neurons there to collect the information are called V1 neurons. Their only job is to react to light. Just light. Anything involving light. Which means flickering light, or changes of light in the environment. (Changes of light can happen when something moves, so if a red light moves to the left, we know it moved to the left once V1 collects that information. In other words, it registers the direction of light bouncing off moving things. Don’t forget that all we see is light bouncing off of things.)

From the V1 neurons, that information then goes to all the other neurons (MT/V5, MST, 7a, and so on) that specialize in motion (is this moving? Is it stationary? Where exactly is it moving? What is it doing? What does it look like while it’s moving?), in identifying what we are looking at (wtf is this?), and so on. But none of that matters here, because they all depend on the very first neurons working well (the light-detecting and light-change-detecting neurons, a.k.a. the “flickering lights” neurons). That’s why scientists use experiments involving flickering lights: they want to find the limits of this first set of neurons, NOT the later sets that deal with motion and identification, because all those other neurons rely on the first ones for perception. [3][5]

Things can flicker or move at such high speeds that the primary visual cortex (V1) becomes unable to detect the rapid changes in light. Instead, the V1 simply presents the most recent, fully processed information as more light enters the eye. This phenomenon explains why afterimages and other fast-motion-vision-related illusions occur. As the visual system struggles to keep up with the rapid changes, it relies on the last processed image, creating the illusion of persistence and the blending of moving objects into the background, appearing invisible.

In summary: as things move, light changes; as things flicker, light changes. Our brain can only know what is moving if it can detect changing light. That’s why scientists use flickering lights to determine this limit, because flickering lights and moving objects are essentially just changing light to the first brain areas that collect it. Without detecting those changes in the first place, the rest of our motion-sensitive visual system is completely useless.




What Is the Flicker Fusion Threshold?



Scientists determine this by figuring out the frequency at which our brain gives up on detecting rapidly changing light. They do this through experiments that involve showing people flickering lights and speeding the flicker up until it fuses into one steady image of light. At this speed, the brain literally can’t detect changes in light. That’s why, when characters move at this speed, the brain won’t detect the changing positions of light bouncing off them at different points in space.

So in other words, the minimum flicker rate that surpasses the limits of our vision is called the Critical Flicker Fusion frequency, also known as the Flicker Fusion Threshold. That rate is measured in hertz (Hz). [4]

There are many experiments to find this rate, with the most widely accepted results falling between 50 and 90 Hz, as cited in the OP. [6][7]

Characters who are FTE to the point that they appear invisible, or cause visual illusions depending on how they move, have to be moving faster than the eye's visual processing. In other words, the character or thing is so fast that the brain just gives up detecting the object altogether. This means said character must complete their movement within 1/50th to 1/90th of a second. Therefore, I propose 1/70s be used as the timeframe for FTE calcs at this level. If accepted, I will continue to fine-tune the guidelines to factor in less typical conditions for better use of this method.
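To make the proposal concrete, here is a minimal sketch of the resulting calc. The 10 m distance is a hypothetical feat value, not part of the proposal:

```python
# Hedged sketch of the proposed FTE calc: a character who "vanishes" while
# an observer is focused on them is assumed to complete the movement within
# the proposed 1/70 s timeframe. The 10 m distance is hypothetical.

TIMEFRAME_S = 1 / 70  # proposed perception-blitz timeframe

def fte_speed(distance_m: float, timeframe_s: float = TIMEFRAME_S) -> float:
    """Minimum speed (m/s) needed to cross distance_m within the timeframe."""
    return distance_m / timeframe_s

print(fte_speed(10.0))  # 10 m in 1/70 s -> 700 m/s (Supersonic)
```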





TL;DR: Flicker fusion refers to the point at which a flickering light appears as a continuous stream to our eyes. It represents the limit at which our visual system can detect changes in light. People with higher flicker fusion thresholds can perceive fast-moving objects better. Scientists study this phenomenon because it reveals the limit of the initial light-detecting neurons (V1 neurons) in our brain. When light changes too rapidly, these neurons can't keep up, and our brain perceives a steady image in the form of afterimages or other visual illusions. The critical flicker fusion frequency (cFFF) or Flicker Fusion Threshold is the minimum flicker rate at which changes in light pass beyond the limits of visual perception. Characters who move faster than the observer's visual system can detect them must complete their movement within 1/50th to 1/90th of a second to appear invisible by blending into the background or to cause visual illusions.
Agree with this
 
I think it's best to ask a couple of CGMs to come to the thread if Wokistan or whoever isn't coming.
 
I mostly skimmed through the thread and am mostly open-minded. I know FTE feats are Subsonic at bare minimum, and I vaguely recall studies saying 200 mph or 89 m/s was the speed required to cast afterimages. And there is something regarding the 1/70th second method, which sounds good. Using some facts to calculate baselines for FTE looks fine, but using FTE for multiplier stacking is what I might be iffy on.
 
Using some facts to calculate baselines for FTE looks fine, but using FTE for multiplier stacking is what I might be iffy on.

Yes I agree, using FTE for multiplier stacking is unreliable.

Thank you for your input!
 
@Therefir @DarkGrath @LephyrTheRevanchist @Jasonsith @Dalesean027 @ByAsura @SamanPatou @Qawsedf234 @Sir_Ovens

Your opinions were specifically requested by the OP
 
In short, blitzing can mean a lot of things. It is important to classify the time frames of how humans "look at" things.

Humans can look at still objects that suddenly start moving, change their movement, or stop moving.

Humans can look at flickering light.

As light enters the human brain through the eye and the nerves, the brain also takes a while to analyse it and present it as images inside the brain.

All of these involve different time frames.

And while our eyes normally have a range of sight of what we can see, we also have a smaller range of what we focus on.

If we consider speeds that "blitz perception", we need to really check the conditions and situations.
 


Yes!

I was planning on making this a series of CRTs to address “blitzing the eye”. This is the first of two or three threads on that.

In this first thread, I am proposing the timeframe in which packets of information reach the brain from the eye under normal conditions, before the signal reaches the areas of the brain that identify what’s happening. This covers FTE blitzing where the object is still in focus but is so fast that it either blends into the background, causes afterimages via sheer speed, or outright turns “invisible”.

In the next thread, for instance, I’m going to propose the speed needed to move faster than the eye can turn. That requires calculus, though… but we’ll cross that bridge when we get there.

So rest assured, I’ll do my best to develop FTE types and standards that address common FTE situations. For now, what do you think of this first thread?
 
Your opinions were specifically requested by the OP
Out of curiosity, why have I been requested? I'm willing to help, of course, but I don't see how I'm relevant to the topic.

That aside:

Feats that qualify include:

  • An observer focusing on an object and said object suddenly vanishing via sheer speed alone.
  • An observer focusing on an object, and said object suddenly leaving an afterimage and appearing elsewhere via their speed alone.

Other FTE feats that don't qualify:

  • An observer trying to follow the object with their eyes but being unable to keep up (this will be dealt with in the next thread after this one).
Having looked through the thread and the sourced information thoroughly, I would be comfortable with this standard. That being said, I don't believe we can afford to brush off the results of the Davis et al. article mentioned earlier in the thread. Despite the previous articles implying a 50-90 Hz (20-11.1 ms) threshold, this particular article suggests that visual flickers can be observed at as much as 500 Hz (2 ms). As mentioned earlier, this is in part due to saccades - saccades are a natural aspect of our vision, and even occur to some extent while we are visually fixated on a subject, so we would expect to see them influence our perception in a natural environment. Furthermore, the article sheds light on the variables that influence this perception:

"We presented users with a modulated light source, and asked them to determine the level of ambient illumination under which flicker was just noticeable. We performed experiments both with spatially uniform light resembling most prior studies on the critical flicker fusion rate, as well as with a spatially varying image as would be common on display devices such as TVs and computer screens... In our experiments, uniform modulated light was produced by a DLP projector and consists of a solid “bright” frame followed by a solid “black” frame. The high spatial frequency image is first “bright on the left half of the frame and black on the right”, and then inverted. We observed the effect described in this paper whenever we displayed an image containing an edge and its inverse in rapid succession. The effect was even stronger with more complex content that contained more edges, such as that in natural images. We chose a simple image with a single edge to allow our experimental condition to be as repeatable as possible... When the modulated light source is spatially uniform, we obtain a contrast sensitivity curve that matches that reported in most textbooks and articles. Sensitivity drops to zero near 65 Hz. However, when the modulated light source contains a spatial high frequency edge, all viewers saw flicker artifacts over 200 Hz and several viewers reported visibility of flicker artifacts at over 800 Hz. For the median viewer, flicker artifacts disappear only over 500 Hz, many times the commonly reported flicker fusion rate."

To broadly summarise, then, flicker becomes more noticeable at higher frequencies when the shapes depicted are more complex and contain more edges, capping out at around 800Hz for a simple image with a single edge, and with a median of 500Hz. The problem that this article identifies with previous research on the topic is that we only find the 50-90Hz rate described by many previous articles when we use a spatially uniform source without a defined edge, such as a pulsating light fixture, and that the frequency at which we can detect flickers is significantly higher for the kinds of more complex, defined shapes we would observe in a natural environment.

While I would like to focus primarily on the objective evidence here, and not simply an unverifiable anecdote, I would also like to mention in passing that most people (including me) who have seen displays with variable refresh rates (i.e., monitors that can go between 60 and 120 Hz) can see the difference in image changes - there are some monitors that go upwards of 300 Hz, and while the jumps are known to have diminishing returns, the differences produced are still noticeable.

If we're going to use this standard as a baseline for calculating feats in the future, we need to acknowledge the evidence that our ability to perceive differences in visual stimuli is much higher in natural circumstances than it is when observing spatially uniform stimuli. As a hypothetical example: if Person A was looking at Person B, and Person B travelled fast enough to vanish/appear somewhere else via speed alone, then the fact that Person B is a more complex shape with a defined edge (try picking someone up with that line) would suggest Person A's ability to detect changes in Person B's position should be far better attuned than it would be for a spatially uniform pulsating light. For that feat, I would suggest that a 500 Hz baseline is more congruent with the evidence.
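To make the difference in baselines concrete, here is a minimal sketch of how the frequency choice changes a calc result; the 10 m distance is a purely hypothetical feat value:

```python
# Hedged sketch: how the baseline frequency changes an FTE speed result.
# The character must cross the feat distance within one detectable cycle,
# i.e. within 1/frequency seconds. The 10 m distance is hypothetical.

def fte_speed(distance_m: float, frequency_hz: float) -> float:
    """Minimum speed (m/s) to cross distance_m within one 1/frequency_hz cycle."""
    return distance_m * frequency_hz  # distance / (1/f) == distance * f

print(fte_speed(10.0, 70))   # 1/70 s baseline  ->  700.0 m/s
print(fte_speed(10.0, 500))  # 1/500 s baseline -> 5000.0 m/s
```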
 
Out of curiosity, why have I been requested? I'm willing to help, of course, but I don't see how I'm relevant to the topic.
I just wanted to hear your thoughts; I’ve seen you evaluate threads from time to time, so I was confident you’d evaluate this one perfectly.


I also requested @KingTempest for this to be a staff thread, so requesting your help would seem more natural. He said “nada” and just told me to send a list of staff.



Yes, this is research I was keeping an eye on because it factored in saccades and spatial edges, but I forgot about it after a while.

I agree with using 500Hz as well. Thank you so much for taking the time to go through the sources.
 
I agree. 500 Hz, though, is something I'm a bit iffy on, but I agree for the most part.

Is there a reason why you’re iffy on 500 Hz? The study factors in spatial edges, which are also detected in the first set of neurons in the visual cortex, as well as involuntary saccades.
 
Is there a reason why you’re iffy on 500 Hz? The study factors in spatial edges, which are also detected in the first set of neurons in the visual cortex, as well as involuntary saccades.
500 hertz is 1/500 of a second, correct? This is gonna give plenty of inflated results. I can think of 2 instances where this works, and neither of those verses scales to that. However, I’m not gonna complain about some of my fave verses getting amped from this.
 
500 hertz is 1/500 of a second, correct? This is gonna give plenty of inflated results. I can think of 2 instances where this works, and neither of those verses scales to that. However, I’m not gonna complain about some of my fave verses getting amped from this.

Please do not involve other verses in this thread. This is a wiki-wide revision.

If you believe for whatever reason that those verses do not scale even though this method works, then you should argue that those feats are outliers.
 
Is there a reason why you’re iffy on 500 Hz? The study factors in spatial edges, which are also detected in the first set of neurons in the visual cortex, as well as involuntary saccades.
Well, my reasons are not worth much. It's just that I'm astonished VSWIKI actually used timeframes like 0.2 and 0.029 seconds for so long; it makes no sense in my eyes.
 
Well, my reasons are not worth much. It's just that I'm astonished VSWIKI actually used timeframes like 0.2 and 0.029 seconds for so long; it makes no sense in my eyes.

That’s natural. It’s outdated anyway.

0.2 looks more like the speed of reaction, not necessarily perception. So it should be used for blitzing human reaction calcs, but somehow we ended up using it for any and all forms of blitzing, including perception blitzing, which is what I’m trying to stop with this new standard.
 
So, just to clarify, a timeframe of 1/70th of a second would be required for perception blitzing instead of the usual 0.2 or 1/60th of a second?
What went wrong with the 13 milliseconds assumption, if you people don't mind me asking?
 
So, just to clarify, a timeframe of 1/70th of a second would be required for perception blitzing instead of the usual 0.2 or 1/60th of a second?

Yes, that's the initially proposed timeframe; however, @DarkGrath proposed a 1/500th of a second timeframe based on a study that incorporated spatial edges and involuntary saccades - things that are part of our normal visual perception.

Also, I'm not sure how to answer your question; I will get back to you after checking what Agnaa told me on Discord.
 
Yes, that's the initially proposed timeframe; however, @DarkGrath proposed a 1/500th of a second timeframe based on a study that incorporated spatial edges and involuntary saccades - things that are part of our normal visual perception.

Also, I'm not sure how to answer your question; I will get back to you after checking what Agnaa told me on Discord.
Regarding this study done by MIT: why was 13 milliseconds rejected?
 
Regarding this study done by MIT: why was 13 milliseconds rejected?
I have looked into that study myself. The study and the results are both very interesting and (as I'd hope you'd expect from MIT) reliable, but the problem is that the study results are ultimately not as important to this thread as they may initially appear. While it's not much of a 'qualification', so to speak, I'll mention in passing that I am currently a psychology student at a research university - not only does this mean I have access to these articles and more, but it also means a substantial part of my degree is spent researching and interpreting articles like these, so identifying the roots of contradictions between studies is something I'm experienced in.

To put it simply, what the study was trying to figure out is how long an image needs to be displayed for us to accurately determine the concept the image is depicting. For example - if I flash an image of a picnic at you on a computer screen, and then ask "What did you see in that image?", how long does that picture need to remain on-screen for you to be able to accurately identify that it was indeed a picnic? What they found was that 13 milliseconds was the limit: if an image is flashed for 13 milliseconds, people will generally be able to identify it slightly more accurately than they would by just guessing, but below that duration they can't. Ultimately, unlike some of the other studies quoted in this thread, this study was less about how long it takes for a person to notice a change in an object, and more about how long a stimulus needs to be presented for it to be given cognitive processing.

The latter is a fascinating research topic, but not really what we're trying to answer here. Not only do we not inherently need to be able to sort an object into a cognitive concept to be able to notice a change in it, but you'd ultimately expect that we'd be able to detect changes in an object substantially faster. The latter only requires surface-level processing, while the former requires quite complex processing - we would expect your brain would have to interpret the light of an image as a cohesive object before we would be able to sort that object into a meaningful concept. While it may sound as though the Davis et al. study (which gave us our benchmark of 2ms) and the Potter et al. study (the MIT study with the benchmark of 13ms) contradict one another, these are actually the results we would expect due to the differences in the research questions.

As with most apparent contradictions in empirical research, this hasn't occurred due to a fundamental issue with either study - they just aren't measuring the same thing. And within the standards Arnoldstone outlined above:

Feats that qualify include:

  • An observer focusing on an object and said object suddenly vanishing via sheer speed alone.
  • An observer focusing on an object, and said object suddenly leaving an afterimage and appearing elsewhere via their speed alone.

Other FTE feats that don't qualify:

  • An observer trying to follow the object with their eyes but being unable to keep up (this will be dealt with in the next thread after this one).

The 2ms benchmark based on noticing changes in an object is much more relevant.
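For illustration, here is a minimal sketch of what each benchmark would imply for the same movement; the 10 m distance is a hypothetical value, and the millisecond figures are the ones discussed above:

```python
# Hedged sketch contrasting the two benchmarks for one hypothetical feat:
# the ~2 ms change-detection limit (Davis et al.) versus the ~13 ms
# concept-recognition limit (Potter et al., the MIT study).

DETECTION_S = 0.002    # ~2 ms: noticing that anything changed at all
RECOGNITION_S = 0.013  # ~13 ms: identifying what the image depicted

distance_m = 10.0  # hypothetical feat distance
print(distance_m / DETECTION_S)    # 5000.0 m/s to beat change detection
print(distance_m / RECOGNITION_S)  # ~769.2 m/s to beat concept recognition
```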
 
Well, going on that basis, it makes sense. A human can accurately describe an image displayed for 13 ms but can't describe one displayed for 2 ms. If this is the case, shouldn't we use 13 ms for cases where a person can perceive something but can't react? Or would we stick to 0.2 for cases like those?
 
If this is the case, shouldn't we use 13 ms for cases where a person can perceive something but can't react? Or would we stick to 0.2 for cases like those?
We would stick to 0.2 seconds.

Our typical measurements of reaction time involve studies where someone will, say, be asked to press a button the moment they notice a change in an object (for example, being shown a red screen and being asked to press a button when it turns green). While this may sound very similar to what we've talked about previously, the processing needed to just instigate ourselves to press a button when we see an object change is substantially more complex and slower than merely our ability to subjectively perceive a change in the object.

While I'd like to do some specific personal research to verify this, what I recall from past research is that the average reaction time on these kinds of tests sits somewhere around 230ms. I'm quite confident from memory alone that it's no higher than 300ms, and no lower than 150ms, if that's of any value. Our standards for typical reaction times already reflect this quite well.
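As a rough illustration of how far apart these tiers sit, here is a minimal sketch contrasting the 0.2 s reaction standard with the proposed 2 ms perception limit; the 10 m distance is again hypothetical:

```python
# Hedged sketch: reaction blitzing vs perception blitzing for the same
# hypothetical movement. 0.2 s is the reaction-time standard discussed
# above; 2 ms is the proposed perception-blitz limit.

REACTION_S = 0.2      # pressing a button after noticing a change
PERCEPTION_S = 0.002  # merely noticing that a change occurred at all

distance_m = 10.0  # hypothetical feat distance
print(distance_m / REACTION_S)    # 50.0 m/s to blitz reactions
print(distance_m / PERCEPTION_S)  # 5000.0 m/s to blitz perception itself
```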
 