
New Black Hole Creation Feat Formula Proposal

We have different views; some are fine with the method, others dislike it.
 
Okay. Can somebody provide a view summary tally please? 🙏
 
Okay. Can somebody provide a view summary tally please? 🙏

For use in compression feats:

Agree: Agnaa, GarrixianXD, Armorchompy, CloverDragon03
Disagree:
Neutral:
Unclear: DontTalkDT, Ugarik
(DontTalkDT and Ugarik have agreed some variation of a compression calc could be used, but haven’t yet responded confirming they accept my reasons for leaving it as it is)

For complete replacement of current method:

Agree: Agnaa, GarrixianXD, Armorchompy, CloverDragon03
Disagree: DontTalkDT
Neutral:
Unclear: Ugarik
(Ugarik has agreed the math works but prefers adiabatic compression and hasn’t yet confirmed they accept my reasons for using isothermal compression as a lowball estimate)

However, it should be noted that this method shouldn’t be implemented as a complete replacement for the current method without a broader revision of creation feat standards. After all, it would make little sense to implement a standard where creating a 1 solar mass black hole scales lower than simply creating a star with the same mass.

If enough people believe it to be appropriate to revise creation feats more broadly (doing away with the current standard, which simply says we should assume through reality warping that creation feats = destruction feats), then I can help create more general equations for use beyond black holes; otherwise, it should be kept strictly to compression feats, as DontTalkDT has argued.
 
I'm not actually too fussed if we don't revise other creation feats as well.

If we're going by the creation = destruction route, that's just impossible for black holes, so we need another way to estimate things. I think it's fine if that ends up being lower.

Revising other creation feats in a similar way sounds impractical due to the different materials involved in many of them.
 
For compression calcs I'm still of the opinion that the rate of heat dissipation needs to be factored in. I.e. if we have no information, assume realistic heat loss, but if we see that it doesn't get hot, drop the assumption.

But I am also of the opinion that gravity needs to be factored in for such a compression. No compression calc makes sense without it.
 
For compression calcs I'm still of the opinion that the rate of heat dissipation needs to be factored in. I.e. if we have no information, assume realistic heat loss, but if we see that it doesn't get hot, drop the assumption.
A realistic heat loss scenario in a vacuum is going to be just a factor of radiative emission, which would be completely negligible. Basically a purely adiabatic scenario, for which I have included a formula in my blog. However, such a scenario is going to give you insane highballs, even exceeding mass-energy equivalence calculations.
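To illustrate the gap, here is a minimal comparison of the two standard ideal-gas work formulas (the 1 mol / 300 K / million-fold compression inputs are purely illustrative, not taken from the blog):

```python
from math import log

# Reversible ideal-gas compression work, isothermal vs adiabatic.
# Illustrative inputs (not from the blog): 1 mol of H2 at 300 K,
# compressed a million-fold in volume.
R_GAS = 8.314      # J / (mol K)
n, T, ratio = 1.0, 300.0, 1e6
gamma = 7 / 5      # adiabatic index of a diatomic ideal gas

W_iso = n * R_GAS * T * log(ratio)                              # isothermal
W_adi = n * R_GAS * T / (gamma - 1) * (ratio**(gamma - 1) - 1)  # adiabatic

print(f"isothermal: {W_iso:.2e} J")  # ~3.4e4 J
print(f"adiabatic:  {W_adi:.2e} J")  # ~1.6e6 J; the gap grows without bound
                                     # as the compression ratio increases
```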

But I am also of the opinion that gravity needs to be factored in for such a compression. No compression calc makes sense without it.
I’d be curious as to how you would want me to quantify that. If we assume a non-instantaneous time frame so that gravity can take effect and neglect electrodynamic repulsion, degeneracy pressure, and nuclear forces, then any calculation method I could think of would simply involve attributing everything to gravity.
 
I’d be curious as to how you would want me to quantify that. If we assume a non-instantaneous time frame so that gravity can take effect and neglect electrodynamic repulsion, degeneracy pressure, and nuclear forces, then any calculation method I could think of would simply involve attributing everything to gravity.
How about a one second timeframe, with the force applied by the character being constant (but the gravity, accordingly, evolving exponentially)?

I'd think that one second would be way too quick for gravity to be the full story.
 
How about a one second timeframe, with the force applied by the character being constant (but the gravity, accordingly, evolving exponentially)?

I'd think that one second would be way too quick for gravity to be the full story.
The issue is that the change in gravitational potential energy in any given timeframe is calculated based on the change in size, not the given non-zero timeframe; however, this inevitably assumes all the work is done by gravity.
 
The issue is that the change in gravitational potential energy in any given timeframe is calculated based on the change in size, not the given non-zero timeframe
We wouldn't be looking for the GPE, we'd be looking for the effect gravity contributes.

We'd essentially be iteratively checking solutions, scaling up and down the compression applied, while keeping the gravitational portion functioning as we know it should, until we end up with a 1 second timeframe.

I'm very confident this should be possible, since we know how to do the compression by itself (as you did in the OP), and we know how to do gravity by itself (I would be stunned if we had no way of telling how long it would take for a cloud of gas to collapse into something like a star).

I'm not sure how practical it would be to do so by hand, but I think I could write a program that simulates that, if I knew the relevant formulae better.
however, this inevitably assumes all the work is done by gravity.
I don't see how it does.
 
We wouldn't be looking for the GPE, we'd be looking for the effect gravity contributes.
Change in GPE is typically how you calculate the energy contributed by gravity.
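For a uniform sphere that change has a standard closed form; here's a quick sketch of what gravity contributes under that assumption (the function name is mine):

```python
# GPE of a uniform sphere is U = -3GM^2/(5R), so the energy gravity
# releases in shrinking from R_i to R_f is the difference between the two.
# Note this credits *all* the work to gravity, which is the point in dispute.
G = 6.674e-11  # m^3 / (kg s^2)

def gravity_release(M, R_i, R_f):
    return (3 * G * M**2 / 5) * (1 / R_f - 1 / R_i)

# e.g. one solar mass from the Sun's radius down to its Schwarzschild radius:
print(f"{gravity_release(1.989e30, 6.957e8, 2.95e3):.2e} J")  # ~5.4e46 J
```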

We'd essentially be iteratively checking solutions, scaling up and down the compression applied, while keeping the gravitational portion functioning as we know it should, until we end up with a 1 second timeframe.

I'm very confident this should be possible, since we know how to do the compression by itself (as you did in the OP), and we know how to do gravity by itself (I would be stunned if we had no way of telling how long it would take for a cloud of gas to collapse into something like a star).

I'm not sure how practical it would be to do so by hand, but I think I could write a program that simulates that, if I knew the relevant formulae better.
Typically the gravitational collapse of an interstellar nebula into a star takes about 10 million years. But at any step, if the internal pressure gets high enough, the contribution from gravity falls to zero at that stage.

Not really sure how to model this mathematically. Are you thinking of using the free-fall equation to get a contribution from gravity?
 
Okay, to make this as simple as I can.

Does the field of physics have any way at all to simulate what a cloud of gas would look like after 1 million years due to the effects of gravity?

If not, then I guess we just can't do it. But I am 99.9999% certain that we can.
 
Okay, to make this as simple as I can.

Does the field of physics have any way at all to simulate what a cloud of gas would look like after 1 million years due to the effects of gravity?

If not, then I guess we just can't do it. But I am 99.9999% certain that we can.
Did a bit of digging. We could use the free-fall equation to get a formula for the gravitational collapse time based on density:

t = [3π/(16Gρ)]^(1/2)

Then find the distance at a given time since exceeding the Jeans criterion by substituting a uniform spherical density and solving for the radial distance:

t = [π^(2)R^(3)/(4GM)]^(1/2)

R = [4GMt^(2)/π^(2)]^(1/3)

But it would then be tricky to translate that into an exponentially changing contribution. It also becomes increasingly inaccurate at high densities, where the neglected degeneracy pressure and nuclear forces should be strong.
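As a quick numeric sanity check of the first formula (the near-STP hydrogen density is just an illustrative input):

```python
from math import pi, sqrt

# Free-fall collapse time t = [3*pi/(16*G*rho)]^(1/2) from above.
G = 6.674e-11  # m^3 / (kg s^2)
rho = 0.0899   # kg/m^3, hydrogen gas near STP (illustrative choice)

t = sqrt(3 * pi / (16 * G * rho))
print(f"{t:.2e} s")  # ~3.1e5 s: a few days, even at ordinary gas density
```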
 
Did a bit of digging. We could use the free-fall equation to get a formula for the gravitational collapse time based on density:

t = [3π/(16Gρ)]^(1/2)

Then find the distance at a given time since exceeding the Jeans criterion by substituting a uniform spherical density and solving for the radial distance:

t = [π^(2)R^(3)/(4GM)]^(1/2)

R = [4GMt^(2)/π^(2)]^(1/3)
That sounds like a good place to start.

So with that, we have the radius a gas cloud will reach in a certain amount of time.

I'm thinking of something sort of like alternating those formulas, in lieu of a proper continuous solution including both: taking a certain resolution (say, 0.01 seconds), stepping forward the effect of gravity by that step, then stepping forward compression over that step, and doing so for the whole duration (say, 1 second).
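A rough sketch of that loop, assuming gravity acts as simple free-fall kinematics at the cloud's surface and the applied compression removes radius at a constant rate (both of those are modelling shortcuts of mine, not settled choices):

```python
G = 6.674e-11  # m^3 / (kg s^2)

def simulate(M, R0, R_target, duration=1.0, dt=0.01, comp_rate=0.0):
    """Alternate a gravity step and a compression step, as described above."""
    R, v, t = R0, 0.0, 0.0  # radius, infall speed from gravity, elapsed time
    while t < duration and R > R_target:
        g = G * M / R**2     # surface gravity at the current radius
        v += g * dt          # gravity step: accelerate the infall
        R -= v * dt
        R -= comp_rate * dt  # compression step: the externally applied squeeze
        t += dt
    return R, t

# One would then scale comp_rate up or down (e.g. by bisection) until the
# cloud reaches R_target at exactly t = duration, and credit the character
# only with the compression part rather than the gravity part.
```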
But it would then be tricky to translate that into an exponentially changing contribution.
Through the method I've just described, compression would essentially have a pre-calculated contribution, while gravity's contribution would probably end up being exponential.
It also becomes increasingly inaccurate at high densities, where the neglected degeneracy pressure and nuclear forces should be strong.
We gotta consider the error it contributes, compared to the difficulty of incorporating further details, when deciding where to stop.

But hey, maybe all of this is just a bad idea. I did drop out of high school physics, after all.
 
I'm sorry, but this formula simply doesn't work as a replacement for black hole creation. It calculates work from the change of volume. The problem is that the initial volume is calculated from mass and density, while the final one uses the Schwarzschild radius. The bigger the gap between those two, the more work it takes. But as the mass of the black hole grows, the final volume grows much faster than the initial one, making the gap smaller. That means that at some point this formula breaks, and the more massive the black hole is, the less work it would take to create it. In fact, the work can even go negative, since the final volume can be bigger than the initial one.
 
I'm sorry, but this formula simply doesn't work as a replacement for black hole creation. It calculates work from the change of volume. The problem is that the initial volume is calculated from mass and density, while the final one uses the Schwarzschild radius. The bigger the gap between those two, the more work it takes. But as the mass of the black hole grows, the final volume grows much faster than the initial one, making the gap smaller. That means that at some point this formula breaks, and the more massive the black hole is, the less work it would take to create it. In fact, the work can even go negative, since the final volume can be bigger than the initial one.
Oh damn you're right, black hole "densities" can become arbitrarily small as we look at larger and larger ones....
 
Oh damn you're right, black hole "densities" can become arbitrarily small as we look at larger and larger ones....
No, it uses the density of hydrogen at normal pressure and temperature, which is okay. My point is that at some point the gap between the final and the initial volume of hydrogen starts shrinking.
 
No, it uses the density of hydrogen at normal pressure and temperature, which is okay. My point is that at some point the gap between the final and the initial volume of hydrogen starts shrinking.
Yeah, that's what I mean. It's not like a region covered by a black hole has a fixed density; it could be as dense as a neutron star, water, air, or hydrogen at any arbitrarily low pressure. And so at some point, the hydrogen blob that's teleported in would be sufficient to create a black hole.
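For concreteness, a quick sketch of how that mean "density" falls off with mass (the example masses are arbitrary):

```python
from math import pi

# Mean "density" enclosed by the Schwarzschild radius scales as 1/M^2,
# so sufficiently large black holes are less dense than ordinary matter.
G, c = 6.674e-11, 2.998e8
M_SUN = 1.989e30  # kg

def mean_density(M):
    r_s = 2 * G * M / c**2             # Schwarzschild radius
    return M / ((4 / 3) * pi * r_s**3)

print(f"{mean_density(M_SUN):.2e} kg/m^3")        # ~1.8e19, above nuclear density
print(f"{mean_density(1e9 * M_SUN):.2e} kg/m^3")  # ~1.8e1, thinner than water
```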
 
I'm sorry, but this formula simply doesn't work as a replacement for black hole creation. It calculates work from the change of volume. The problem is that the initial volume is calculated from mass and density, while the final one uses the Schwarzschild radius. The bigger the gap between those two, the more work it takes. But as the mass of the black hole grows, the final volume grows much faster than the initial one, making the gap smaller. That means that at some point this formula breaks, and the more massive the black hole is, the less work it would take to create it. In fact, the work can even go negative, since the final volume can be bigger than the initial one.
That isn’t a big deal. Basically what you are describing is what happens when the term inside the logarithm becomes greater than 1 (thus returning a positive value and making the final energy negative).

This only happens when the mass of the black hole is larger than 2.143 x 10^(48.5) kg.

For comparison’s sake, that is equivalent to over 3 quintillion solar masses, more than a million times the mass of the Milky Way Galaxy. Far, far larger than any black hole in nature.

If a black hole in fiction reaches this ludicrous size, the calculation will simply have to be re-adjusted on a case-by-case basis for such an extreme.
 
Oh damn you're right, black hole "densities" can become arbitrarily small as we look at larger and larger ones....
Yeah, that's what I mean. It's not like a region covered by a black hole has a fixed density; it could be as dense as a neutron star, water, air, or hydrogen at any arbitrarily low pressure. And so at some point, the hydrogen blob that's teleported in would be sufficient to create a black hole.
The unknowns and theoretical complications of black hole densities below the Schwarzschild radius don’t actually affect the calculation. The final volume used in the calculation is based on the density at the Schwarzschild radius (essentially using the density of the gas right before it collapses into a black hole), thus avoiding the mind-bending physics past the event horizon.
 
The unknowns and theoretical complications of black hole densities below the Schwarzschild radius don’t actually affect the calculation. The final volume used in the calculation is based on the density at the Schwarzschild radius (essentially using the density of the gas right before it collapses into a black hole), thus avoiding the mind-bending physics past the event horizon.
I know, that's why I put "densities" in quotes the first time I invoked it.

I'm not actually talking about however the matter is spread across the volume below the Schwarzschild radius, I'm just considering the total mass of the black hole compared to the total volume covered by the Schwarzschild radius.
If a black hole in fiction reaches this ludicrous size, the calculation will simply have to be re-adjusted on a case-by-case basis for such an extreme.
What sort of adjustment do you have in mind?
 
That isn’t a big deal. Basically what you are describing is what happens when the term inside the logarithm becomes greater than 1 (thus returning a positive value and making the final energy negative).
The formula becomes unusable when its derivative goes negative. That happens a bit earlier.
 
The formula becomes unusable when its derivative goes negative. That happens a bit earlier.
Good point, although this happens at nearly the same magnitude. The derivative goes to zero when the mass of the black hole reaches 5.428 x 10^47 kg, so still over 6 million times larger than the largest discovered black hole.

What sort of adjustment do you have in mind?
Basically, you would just need to play around with lower-density gas scenarios until the starting and ending sizes are separated enough not to impact the calculation. Although in practice I doubt this will ever be necessary.
 
Basically, you would just need to play around with lower-density gas scenarios until the starting and ending sizes are separated enough not to impact the calculation. Although in practice I doubt this will ever be necessary.
I'm worried that such an approach would be arbitrary; that there'd be no canonical point to change the density to. But finding the first density small enough that the derivative won't be negative at that mass value might work.

My concern is that, for example, at a certain point such feats would take 0 energy by default, and we'd just be arbitrarily deciding which energy value they have based on vibes alone, since no point above 0 energy is physically special.
 
I'm worried that such an approach would be arbitrary; that there'd be no canonical point to change the density to. But finding the first density small enough that the derivative won't be negative at that mass value might work.

My concern is that, for example, at a certain point such feats would take 0 energy by default, and we'd just be arbitrarily deciding which energy value they have based on vibes alone, since no point above 0 energy is physically special.
If you want some landmark densities from real world sources to prevent it becoming arbitrary, some backup scenarios can be:

-Typical Dispersed Diffuse HI Cloud: 500 H2 per cm^3, 50 K Temperature
-Cold Neutral Interstellar Medium: 35 H2 per cm^3, 75 K Temperature
-Intergalactic Medium: 0.000001 H2 per cm^3, 1,000,000 K Temperature

Already needing to go beyond 5.428 x 10^47 kg in mass for a black hole is some Looney Tunes levels of absurdity in size. If the backup scenarios still aren’t enough to accommodate the size, some other custom scenario will have to be devised along similar lines.
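For anyone plugging these into the collapse-time formula, the number densities convert to SI like so (a small sketch; the H2 molecular mass is the standard value):

```python
# Convert H2 number densities (particles per cm^3) to kg/m^3 for use in
# formulas like t = [3*pi/(16*G*rho)]^(1/2).
M_H2 = 3.35e-27  # kg, mass of one hydrogen molecule

def rho_si(n_per_cm3):
    return n_per_cm3 * M_H2 * 1e6  # 1 m^3 = 1e6 cm^3

print(f"{rho_si(500):.2e} kg/m^3")   # diffuse HI cloud: ~1.7e-18
print(f"{rho_si(1e-6):.2e} kg/m^3")  # intergalactic medium: ~3.4e-27
```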
 
That sounds like a good place to start.

So with that, we have the radius a gas cloud will reach in a certain amount of time.

I'm thinking of something sort of like alternating those formulas, in lieu of a proper continuous solution including both: taking a certain resolution (say, 0.01 seconds), stepping forward the effect of gravity by that step, then stepping forward compression over that step, and doing so for the whole duration (say, 1 second).

Through the method I've just described, compression would essentially have a pre-calculated contribution, while gravity's contribution would probably end up being exponential.

We gotta consider the error it contributes, compared to the difficulty of incorporating further details, when deciding where to stop.

But hey, maybe all of this is just a bad idea. I did drop out of high school physics, after all.
Did some playing around with the numbers, and ultimately the verdict is that in any given timeframe the contributions from nuclear forces and degeneracy pressure are simply too large to ignore, so gravity can't be accounted for in isolation.

Ultimately, even on a short timeframe, if we try to quantify any kind of isolated contribution from gravity, in reality it would be overpowered by nuclear forces on the same timeframe. We would literally just be making a star.

So either we run a reality warping scenario where we try to parallel it using purely classical compression on an instantaneous timeframe (so that we get to ignore all those confounding forces), or we stick with the existing fictitious GBE figure. If we try to account for every force at play in a non-zero timeframe, we are not only going to end up with enough content to fill a physics thesis, but would likely also end up with a highball above the current GBE figure (since the outside pressure would first need to overpower the existing internal pressure, then account for the onset of nuclear reactions and overpower the renewed stellar pressure, and then also overpower the newly instilled degeneracy pressure).
 
Fair enough then.
 