@DontTalkDT @Agnaa @Ugarik @Executor_N0 @Damage3245 @Mr. Bambu
What do you think should be done here?
Okay. Can somebody provide a summary tally of views, please?
A realistic heat loss scenario in a vacuum is going to involve only radiative emission, which would be completely negligible: basically a purely adiabatic scenario, for which I have included a formula in my blog. However, such a scenario is going to give you insane highballs, even exceeding mass-energy equivalence calculations.

For compression calcs I'm still of the opinion that the rate of heat dissipation needs to be factored in. I.e., if we have no information, assume realistic heat loss, but if we see that it doesn't get hot, drop the assumption.
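For reference, here is a minimal sketch of the textbook reversible adiabatic compression work. I can't confirm this is the exact formula from the blog, so treat that equivalence as an assumption; the heat capacity ratio gamma also depends on the gas (5/3 for monatomic, 7/5 for diatomic).

```python
def adiabatic_work(P1, V1, V2, gamma=7/5):
    """Reversible adiabatic compression work:
    W = P1*V1/(gamma - 1) * ((V1/V2)**(gamma - 1) - 1).
    No heat is exchanged with the surroundings, matching the
    "negligible radiative losses in a vacuum" scenario above."""
    return P1 * V1 / (gamma - 1) * ((V1 / V2) ** (gamma - 1) - 1)

# Example with placeholder numbers: 1 m^3 of gas at 1 atm squeezed to
# 1/1000th of its volume takes roughly 3.8e6 J under these assumptions.
print(adiabatic_work(101325.0, 1.0, 1e-3))
```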
> But I am also of the opinion that gravity needs to be factored in for such a compression. No compression calc makes sense without it.

I'd be curious as to how you would want me to quantify that. If we assume a non-instantaneous time frame so that gravity can take effect and neglect electrodynamic repulsion, degeneracy pressure, and nuclear forces, then any calculation method I could think of would simply involve attributing everything to gravity.
> I'd be curious as to how you would want me to quantify that. If we assume a non-instantaneous time frame so that gravity can take effect and neglect electrodynamic repulsion, degeneracy pressure, and nuclear forces, then any calculation method I could think of would simply involve attributing everything to gravity.

How about a one second timeframe, with the force applied by the character being constant (but the gravity, accordingly, evolving exponentially)?
> How about a one second timeframe, with the force applied by the character being constant (but the gravity, accordingly, evolving exponentially)?

The issue is that the change in gravitational potential energy in any given timeframe is calculated based on the change in size, not the given non-zero timeframe; however, this inevitably assumes all the work is done by gravity.
I'd think that one second would be way too quick for gravity to be the full story.
> The issue is that the change in gravitational potential energy in any given timeframe is calculated based on the change in size, not the given non-zero timeframe

We wouldn't be looking for the GPE, we'd be looking for the effect gravity contributes.
> however, this inevitably assumes all the work is done by gravity.

I don't see how it does.
> We wouldn't be looking for the GPE, we'd be looking for the effect gravity contributes.

Change in GPE is typically how you calculate the energy contributed by gravity.
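For the idealized case of a uniform sphere, that change has a closed form, since U = -3GM^2/(5R). A minimal sketch, with uniform density being the big assumption (real clouds are centrally condensed):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_energy_released(M, R_initial, R_final):
    """Drop in gravitational potential energy of a uniform sphere contracting
    from R_initial to R_final, using U = -3*G*M**2/(5*R).
    A positive result is energy released by the contraction."""
    return 3 * G * M**2 / 5 * (1 / R_final - 1 / R_initial)

# Example with placeholder numbers: a solar-mass cloud contracting
# from 1e15 m to 1e14 m releases ~1.4e36 J.
print(gravity_energy_released(1.989e30, 1e15, 1e14))
```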
> We'd essentially be iteratively checking solutions, scaling up and down the compression applied, while keeping the gravitational portion functioning as we know it should, until we end up with a 1 second timeframe.
> I'm very confident this should be possible, since we know how to do the compression by itself (as you did in the OP), and we know how to do gravity by itself (I would be stunned if we had no way of telling how long it would take for a cloud of gas to collapse into something like a star).
> I'm not sure how practical it would be to do so by hand, but I think I could write a program that simulates that, if I knew the relevant formulae better.

Typically, the gravitational collapse of an interstellar nebula into a star takes about 10 million years. But at any step, if the internal pressure gets high enough, the contribution from gravity falls to zero at that stage.
> Okay, to make this as simple as I can.
> Does the field of physics have any way at all to simulate what a cloud of gas would look like after 1 million years due to the effects of gravity?
> If not, then I guess we just can't do it. But I am 99.9999% certain that we can.

Did a bit of digging. We could use the free fall equation to get a formula for the time for gravitational collapse based on density:

t = [3π/(16Gρ)]^(1/2)

Then find the distance at a given time since exceeding the Jeans criterion by using a spherical density and solving for radial distance:

t = [π²R³/(4GM)]^(1/2)

R = [4GMt²/π²]^(1/3)
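Those two formulas are straightforward to put into code. A minimal sketch (the inputs in the example are placeholders, not values from the thread):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def freefall_time(rho):
    """Gravitational collapse time from density: t = [3*pi/(16*G*rho)]^(1/2)."""
    return math.sqrt(3 * math.pi / (16 * G * rho))

def radius_after(M, t):
    """Radius a cloud of mass M reaches t seconds after exceeding the Jeans
    criterion: R = [4*G*M*t**2/pi**2]^(1/3), the inversion of
    t = [pi^2 * R^3 / (4*G*M)]^(1/2)."""
    return (4 * G * M * t**2 / math.pi**2) ** (1 / 3)

# Example: hydrogen gas at roughly normal conditions, and the radius
# corresponding to a 1-second collapse for a solar-mass cloud.
print(freefall_time(0.0899))
print(radius_after(1.989e30, 1.0))
```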
> Did a bit of digging. We could use the free fall equation to get a formula for the time for gravitational collapse based on density: […]

That sounds like a good place to start.
> But then it would be tricky to translate that towards an exponentially changing contribution.

Through the method I've just described, compression would essentially have a pre-calculated contribution, while gravity's contribution would probably end up being exponential.
> Also becomes increasingly inaccurate at high densities where the neglected forces from degeneracy pressure and nuclear forces should be strong.

We gotta consider the error it contributes, compared to the difficulty of incorporating further details, when deciding where to stop.
> I'm sorry, but this formula simply doesn't work as a replacement for black hole creation. It calculates work from change of volume. The problem is that the initial volume is calculated from mass and density, while the final one uses the Schwarzschild radius. The bigger the gap between those two, the more work it takes. But as the mass of the black hole grows, the final volume grows much faster than the initial one, making the gap smaller. That means that at some point this formula breaks, and the more massive the black hole is, the less work it would take to create it. In fact, the work can even go negative, since the final volume can be bigger than the initial.

Oh damn, you're right, black hole "densities" can become arbitrarily small as we look at larger and larger ones...
> Oh damn, you're right, black hole "densities" can become arbitrarily small as we look at larger and larger ones...

No, it uses the density of hydrogen at normal pressure and temperature, which is okay. My point is that at some point the gap between the final and the initial volume of hydrogen starts shrinking.
> No, it uses the density of hydrogen at normal pressure and temperature, which is okay. My point is that at some point the gap between the final and the initial volume of hydrogen starts shrinking.

Yeah, that's what I mean. It's not like a region covered by a black hole has a fixed density; it could be as dense as a neutron star, water, air, or hydrogen at any arbitrarily low pressure. And so at some point, the hydrogen blob that's teleported in would be sufficient to create a black hole.
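To make the "arbitrarily small densities" point concrete: the mean density inside the Schwarzschild radius falls off as 1/M^2, so there is some mass beyond which hydrogen at normal density already fits inside its own Schwarzschild radius. A minimal sketch (the crossover works out to roughly 9e40 kg with these inputs):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8    # speed of light, m/s

def schwarzschild_mean_density(M):
    """Mean density of a sphere filling the Schwarzschild radius
    R_s = 2*G*M/c**2; simplifies to 3*c**6/(32*pi*G**3*M**2)."""
    R_s = 2 * G * M / c**2
    return M / (4 / 3 * math.pi * R_s**3)

rho_hydrogen = 0.0899  # kg/m^3, hydrogen at roughly normal conditions

# Mass at which the mean Schwarzschild density drops to hydrogen's:
M_cross = math.sqrt(3 * c**6 / (32 * math.pi * G**3 * rho_hydrogen))
print(M_cross, schwarzschild_mean_density(M_cross))
```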
> I'm sorry, but this formula simply doesn't work as a replacement for black hole creation. […]

That isn't a big deal. Basically, what you are describing is what happens when the term inside the logarithm becomes greater than 1 (thus returning a positive value and making the final energy negative).
> Oh damn, you're right, black hole "densities" can become arbitrarily small as we look at larger and larger ones...
> Yeah, that's what I mean. It's not like a region covered by a black hole has a fixed density; it could be as dense as a neutron star, water, air, or hydrogen at any arbitrarily low pressure. And so at some point, the hydrogen blob that's teleported in would be sufficient to create a black hole.

The unknowns and theoretics of black hole densities below the Schwarzschild radius don't actually affect the calculation. The final volume used in the calculation is based on the density at the Schwarzschild radius (essentially using the density of the gas right before it collapses into a black hole), thus avoiding the mind-bending physics present past the event horizon.
> The unknowns and theoretics of black hole densities below the Schwarzschild radius don't actually affect the calculation. The final volume used in the calculation is based on the density at the Schwarzschild radius (essentially using the density of the gas right before it collapses into a black hole), thus avoiding the mind-bending physics present past the event horizon.

I know, that's why I put "densities" in quotes the first time I invoked it.
> If a black hole in fiction reaches this ludicrous size, the calculation will simply have to be re-adjusted on a case-by-case basis for such an extreme.

What sort of adjustment do you have in mind?
> That isn't a big deal. Basically, what you are describing is what happens when the term inside the logarithm becomes greater than 1 (thus returning a positive value and making the final energy negative).

The formula becomes unusable when its derivative goes negative. That happens a bit earlier.
> The formula becomes unusable when its derivative goes negative. That happens a bit earlier.

Good point, although this happens at nearly the same magnitude. The derivative goes to zero when the mass of the black hole reaches 5.428 x 10^47 kg, so still over 6 million times larger than the largest discovered black hole.
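The OP's formula isn't reproduced in this thread, but from the description (work computed from a volume change, with an initial volume proportional to M, a final volume proportional to M^3, and a logarithm involved), it should have the shape W = A*M*ln(C/M^2) for some constants A and C. A quick symbolic check of that assumed shape reproduces the ordering described above: the derivative hits zero at M = sqrt(C)/e, a bit before the logarithm's argument crosses 1 at M = sqrt(C). The actual constants, and hence the 5.428 x 10^47 kg figure, depend on inputs that aren't shown here.

```python
import sympy as sp

M, A, C = sp.symbols('M A C', positive=True)

# Assumed shape only: initial volume ~ M, final volume ~ M**3,
# so the logarithm's argument scales as 1/M**2.
W = A * M * sp.log(C / M**2)

dW = sp.simplify(sp.diff(W, M))              # equals A*(log(C/M**2) - 2)
turnover = sp.solve(sp.Eq(dW, 0), M)         # [sqrt(C)*exp(-1)]
sign_flip = sp.solve(sp.Eq(C / M**2, 1), M)  # [sqrt(C)]

print(dW, turnover, sign_flip)
```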
> What sort of adjustment do you have in mind?

Basically, you would just need to play around with lower density gas scenarios until the relative proportions between the starting and ending sizes are divorced enough not to impact the calculation. Although in practice I doubt this will ever be necessary.
> Basically, you would just need to play around with lower density gas scenarios until the relative proportions between the starting and ending sizes are divorced enough not to impact the calculation. Although in practice I doubt this will ever be necessary.

I'm worried that such an approach would be arbitrary; that there'd be no canonical point to change the density to. But finding the first density small enough that the derivative won't be negative at that mass value might work.
> I'm worried that such an approach would be arbitrary; that there'd be no canonical point to change the density to. But finding the first density small enough that the derivative won't be negative at that mass value might work.

If you want some landmark densities from real world sources to prevent it becoming arbitrary, some backup scenarios can be:
My concern is that, for example, at a certain point such feats would take 0 energy by default, and we'd just be arbitrarily deciding which energy value they have based on vibes alone, since no point above 0 energy is physically special.
> That sounds like a good place to start.
> So with that, we have the radius a gas cloud will reach in a certain amount of time.
> I'm thinking of something sort of like alternating those formulas, in lieu of a proper continuous solution including both. Taking a certain resolution (say, 0.01 seconds), stepping forward the effect of gravity by that timeframe, then stepping forward compression over that timeframe, and doing so for the whole duration (say, 1 second).
> Through the method I've just described, compression would essentially have a pre-calculated contribution, while gravity's contribution would probably end up being exponential.
> We gotta consider the error it contributes, compared to the difficulty of incorporating further details, when deciding where to stop.
> But hey, maybe all of this is just a bad idea. I did drop out of high school physics, after all.

Did some playing around with numbers, and ultimately the verdict is that in any given time frame the contributions from nuclear forces and degeneracy pressure are simply too large to ignore, so gravity can't be accounted for in isolation.
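For what it's worth, the alternating scheme quoted above is easy to prototype. Below is a toy sketch under big assumptions: the cloud is a uniform sphere tracked by its surface radius, the character's compression is modeled as a constant inward acceleration, and the function names are made up for illustration. The outer bisection is the "scaling up and down the compression applied" idea from earlier. It deliberately leaves out pressure, degeneracy, and nuclear forces, which is exactly what the verdict above says can't be ignored, so treat it as a starting point rather than a usable calc.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def final_radius(M, R0, a_applied, duration=1.0, dt=0.01):
    """Alternate a gravity step and a compression step every dt seconds
    (crude operator splitting) and return the surface radius at the end."""
    R, v = R0, 0.0
    for _ in range(int(duration / dt)):
        if R <= 0:                   # fully collapsed within the timeframe
            return 0.0
        v -= G * M / R**2 * dt       # gravity step: inward pull g = G*M/R^2
        v -= a_applied * dt          # compression step: constant applied acceleration
        R += v * dt
    return max(R, 0.0)

def required_compression(M, R0, R_target, duration=1.0):
    """Bisect on the applied acceleration until the cloud reaches R_target
    in `duration` seconds, i.e. scale the compression up and down."""
    lo, hi = 0.0, 1e20
    for _ in range(100):
        mid = (lo + hi) / 2
        if final_radius(M, R0, mid, duration) > R_target:
            lo = mid                 # not enough compression yet
        else:
            hi = mid
    return hi

# Example with placeholder numbers: an Earth-mass cloud squeezed
# from 1e7 m down to 1e6 m within one second.
print(required_compression(5.97e24, 1e7, 1e6))
```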