Author Topic: Reductionism vs. Emergence  (Read 2561 times)

bluehillside Retd.

  • Hero Member
  • *****
  • Posts: 19417
Re: Reductionism vs. Emergence
« Reply #50 on: February 22, 2023, 04:08:50 PM »
Vlad,

Quote
Predicting future events is what used to be known as clairvoyance or prophecy etc. I am not talking about anything like that.

Nor is anyone else.

Quote
What I am talking about is an explanatory gap between the components that give rise to emergents and the emergents themselves, novel properties if you like. I see no reason why that could not arise within a deterministic context or a non deterministic context.

“Explanatory gap” in practice or in principle? The article refers specifically to “in principle”.

Quote
However, here is your opportunity to back up your contention. Name a future event which results in a novel property and give us the probability of it. I'm not talking here about, say, the probability of a number of water molecules becoming something wet, but a new property or, as you say, a future event that hasn't been seen.

You’re missing the point still. A deterministic model in which you have perfect knowledge of the starting conditions and unlimited computing power (leaving aside for now Stranger’s reservations about the possibility of “unlimited”) is predictable regardless of the complexity involved, at least in principle. If you think otherwise, then it’s your job to explain why not.   
"Don't make me come down there."

God

Walt Zingmatilder

  • Hero Member
  • *****
  • Posts: 33065
Re: Reductionism vs. Emergence
« Reply #51 on: February 22, 2023, 04:38:24 PM »
Vlad,

Nor is anyone else.

“Explanatory gap” in practice or in principle? The article refers specifically to “in principle”.

You’re missing the point still. A deterministic model in which you have perfect knowledge of the starting conditions and unlimited computing power (leaving aside for now Stranger’s reservations about the possibility of “unlimited”) is predictable regardless of the complexity involved, at least in principle. If you think otherwise, then it’s your job to explain why not.   
The explanation is plain in the definition of emergent properties, namely that they are novel and not possessed by the components, so the question is: how can these be predicted from entities that do not possess them? Over to you. Saying ''We can't now because we are only human but we can in principle'' seems like meaningless, mealy-mouthed nonsense.

bluehillside Retd.

  • Hero Member
  • *****
  • Posts: 19417
Re: Reductionism vs. Emergence
« Reply #52 on: February 22, 2023, 04:45:32 PM »
Vlad,

Quote
The explanation is plain in the definition of emergent properties, namely that they are novel and not possessed by the components, so the question is: how can these be predicted from entities that do not possess them? Over to you. Saying ''We can't now because we are only human but we can in principle'' seems like meaningless, mealy-mouthed nonsense.

The components not having the properties of the phenomenon that emerges from them has nothing to do with whether or not the latter can be predicted from the former (given perfect knowledge of the starting conditions and sufficient computing power etc).

I thought you claimed to know something about emergence? 
"Don't make me come down there."

God

Walt Zingmatilder

  • Hero Member
  • *****
  • Posts: 33065
Re: Reductionism vs. Emergence
« Reply #53 on: February 22, 2023, 05:55:17 PM »
Vlad,

The components not having the properties of the phenomenon that emerges from them has nothing to do with whether or not the latter can be predicted from the former (given perfect knowledge of the starting conditions and sufficient computing power etc).

I thought you claimed to know something about emergence?
You are just adding to a list of unjustified assertions concerning your point of view.
Let us review some.
All novel events and properties can be predicted from entities that do not possess them.
We cannot do so.
We could in principle.

These need to be justified.
 

bluehillside Retd.

  • Hero Member
  • *****
  • Posts: 19417
Re: Reductionism vs. Emergence
« Reply #54 on: February 22, 2023, 06:06:39 PM »
Vlad,

Quote
You are just adding to a list of unjustified assertions concerning your point of view.
Let us review some.
All novel events and properties can be predicted from entities that do not possess them.
We cannot do so.
We could in principle.

These need to be justified.

See Replies 10 & 12 for the explanation of “default”. Once again, the claim is: “You, your dog, and the specifics of your person-dog affection could not be predicted, even in principle, even from perfect knowledge of all your elementary particles”.

The default response based on anything we verifiably know so far is that that’s wrong. If you think that default response should be amended or abandoned though, then it’s your job to explain why.   

Try to remember this.
"Don't make me come down there."

God

jeremyp

  • Admin Support
  • Hero Member
  • *****
  • Posts: 32114
  • Blurb
    • Sincere Flattery: A blog about computing
Re: Reductionism vs. Emergence
« Reply #55 on: February 22, 2023, 06:23:31 PM »
Yes, this is certainly true, but you quickly run into problems with prediction the more successive probabilistic events you are considering because the probability of any particular outcome will quickly become tiny, even if the probability of the individual events in the chain are quite high.
But the point is not whether it is practical for humans to do it, but whether it can be done in principle.

Vlad's article seems to claim that it is impossible even in principle.
This post and all of JeremyP's posts words certified 100% divinely inspired* -- signed God.
*Platinum infallibility package, terms and conditions may apply

Stranger

  • Hero Member
  • *****
  • Posts: 8236
  • Lightly seared on the reality grill.
Re: Reductionism vs. Emergence
« Reply #56 on: February 22, 2023, 06:56:07 PM »
But the point is not whether it is practical for humans to do it, but whether it can be done in principle.

Vlad's article seems to claim that it is impossible even in principle.

Sorry but I don't see the relevance. The point I made has nothing to do with whether humans can do the calculations or not. If you're trying to make a prediction that involves long chains of probabilistic events, then the longer the chain, the more possible outcomes you have and the lower the probability of even the most probable one becomes. If every outcome has a tiny probability, then it isn't really much of a prediction.

But as I said before, there seems to be a confusion between predicting the future (from a point in time, somebody mentioned from the big bang) and emergence, i.e. predicting properties that will emerge from the combination of entities that don't have them individually. In the latter case, the probabilities might not matter at all - as in the example I gave of making a quantum model of an atom and predicting its chemical properties (which none of the parts have themselves).
∃x(∅ ∈ x ∧ ∀y(y ∈ x → y ∪ {y} ∈ x))

Walt Zingmatilder

  • Hero Member
  • *****
  • Posts: 33065
Re: Reductionism vs. Emergence
« Reply #57 on: February 23, 2023, 09:14:22 AM »
Vlad,

See Replies 10 & 12 for the explanation of “default”. Once again, the claim is: “You, your dog, and the specifics of your person-dog affection could not be predicted, even in principle, even from perfect knowledge of all your elementary particles”.

The default response based on anything we verifiably know so far is that that’s wrong. If you think that default response should be amended or abandoned though, then it’s your job to explain why.   

Try to remember this.
Not sure of a claim here but a speculation based on the inability to demonstrate an asserted principle as exemplified by your own continual refusal or inability. Nature abhors a vacuum and I abhor yours.

bluehillside Retd.

  • Hero Member
  • *****
  • Posts: 19417
Re: Reductionism vs. Emergence
« Reply #58 on: February 23, 2023, 11:27:32 AM »
Vlad,

Quote
Not sure of a claim here…

Given that I’ve copied and pasted it so often, why aren't you sure of that? Here it is again though, so you have no excuse not to know what it is:

“You, your dog, and the specifics of your person-dog affection could not be predicted, even in principle, even from perfect knowledge of all your elementary particles.”

Clear now?

Quote
…but a speculation based on the inability to demonstrate an asserted principle as exemplified by your own continual refusal or inability. Nature abhors a vacuum and I abhor yours.

And for those of us working in English?

I’ll try to put it more simply for you then: every time I throw something out of my window it hits the deck shortly afterwards. My default position therefore is that every time you throw something out of the window the same outcome will follow. That’s not to claim a universal truth that things thrown out of windows will always hit the deck shortly afterwards, nor even is it the claim that one day an object I throw out of the window won’t shoot upwards instead. The clue is in the word “default”.

Do you understand this now?   
"Don't make me come down there."

God

bluehillside Retd.

  • Hero Member
  • *****
  • Posts: 19417
Re: Reductionism vs. Emergence
« Reply #59 on: February 23, 2023, 02:38:21 PM »
Stranger

Quote
Sorry but I don't see the relevance. The point I made has nothing to do with whether humans can do the calculations or not. If you're trying to make a prediction that involves long chains of probabilistic events, then the longer the chain, the more possible outcomes you have and the lower the probability of even the most probable one becomes. If every outcome has a tiny probability, then it isn't really much of a prediction.

But as I said before, there seems to be a confusion between predicting the future (from a point in time, somebody mentioned from the big bang) and emergence, i.e. predicting properties that will emerge from the combination of entities that don't have them individually. In the latter case, the probabilities might not matter at all - as in the example I gave of making a quantum model of an atom and predicting its chemical properties (which none of the parts have themselves).

Much as I hesitate to wade into an area in which you clearly know more than I do, for an in principle argument does the number of probabilistic events matter? Isn’t this a bit like the black hole paradox whereby it’s now generally thought (as I understand it) that information is not destroyed in a black hole so, given enough computing power, it should be possible to throw, say, Vlad into a black hole and then predictably reconstruct him after evaporation?

As for emergence, what then would be inherently problematic with predicting the characteristics of an emergent phenomenon (or of other emergent phenomena many generations later) given perfect, universal knowledge of the starting conditions and (effectively) unlimited computing power? 
"Don't make me come down there."

God

Stranger

  • Hero Member
  • *****
  • Posts: 8236
  • Lightly seared on the reality grill.
Re: Reductionism vs. Emergence
« Reply #60 on: February 23, 2023, 04:57:59 PM »
Much as I hesitate to wade into an area in which you clearly know more than I do...

Wade ahead.  :)

...for an in principle argument does the number of probabilistic events matter?

It depends on what you want to achieve. There is a difference that I've been trying to point out between predicting the future from some point in time and dealing with emergence.

An in principle argument can't change the underlying mathematics, so if you're trying to predict an inherently probabilistic chain of events, then the number of possible outcomes grows very rapidly, so for a series of events for which there are only two outcomes, there are 2^N outcomes for N events. At the same time, the probability of any one particular outcome is the product of the probabilities in the chain that leads to it, and so the probability of even the most probable outcome will rapidly shrink, even if the individual events in the chain are highly probable individually. So, to the extent probabilistic events matter to the prediction you're trying to make, the number of outcomes will quickly become vast and the probabilities tiny for each one. A perfect calculation, with infinite computing power, isn't going to give you much of a prediction.
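
To make the scale of the problem concrete, here is a toy sketch of my own (an illustration only, not any physical model): a chain of N independent binary events, each assumed to have the same high probability p for its more likely outcome.

```python
# Toy illustration: a chain of N independent binary events, each with
# probability p for its more likely outcome. The outcome count grows as
# 2^N while the probability of even the single most likely sequence
# shrinks as p^N.
def chain_stats(n, p=0.9):
    outcomes = 2 ** n   # number of distinct outcome sequences
    best = p ** n       # probability of the single most likely sequence
    return outcomes, best

for n in (10, 50, 100):
    outcomes, best = chain_stats(n)
    print(f"N={n}: {outcomes} outcomes, best sequence probability ~ {best:.3g}")
```

Even with p = 0.9, by N = 100 the most probable sequence is itself vanishingly unlikely - which is the sense in which a perfect calculation still fails to deliver much of a prediction.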

The problem with prediction is therefore that we simply don't know if the universe is deterministic or relies on probabilities. You have no problem with perfect predictions for a deterministic universe (except the possible problem with chaos and continua that I mentioned before), but if it relies on fundamental probabilities, then you do.

On the other hand, if you're dealing with emergence, i.e. properties that emerge from the interactions of parts that do not possess them individually, it might not matter. To go back to the example of a quantum model of an atom, you're not really interested in what one specific atom does, you're interested in the generic properties that different kinds of atoms (elements) have. The emergent properties of atoms are mostly to do with the states it is possible for its electrons to be in and, specifically, the energies associated with each one. So, for example, the spectral lines associated with each element correspond to the energy differences between possible states of the electrons, because they represent absorption or emission of photons with those energies, and energy is directly related to frequency. The model cannot tell you about what state one atom is in, or where one electron is, but it does tell you that the possible energies are quantised and what the allowed energies are - and that's all you really need to know. [I'm glossing over a lot of detail here, but I'm trying to get the basic idea across as simply as I can.]
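
As a concrete (and heavily simplified) illustration of the spectral-line point, hydrogen's visible (Balmer) lines fall straight out of the quantised energy levels via the textbook Rydberg formula. The constant and level numbers below are standard values; the code itself is just a sketch:

```python
# Simplified illustration: hydrogen's Balmer spectral lines computed
# from nothing but the quantised energy levels (textbook Rydberg formula).
RYDBERG = 1.0973731568e7  # Rydberg constant, in 1/m

def balmer_wavelength_nm(n):
    """Wavelength (nm) of the photon emitted when an electron drops
    from level n to level 2 in hydrogen."""
    inv_wavelength = RYDBERG * (1 / 2**2 - 1 / n**2)  # 1/lambda, in 1/m
    return 1e9 / inv_wavelength                        # convert m to nm

for n in (3, 4, 5, 6):
    print(f"n={n} -> 2: {balmer_wavelength_nm(n):.1f} nm")
```

The familiar red H-alpha line near 656 nm drops out of nothing but the allowed energies, which is the sense in which an atom's emergent properties are 'predicted' from parts that have no such properties themselves.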

Isn’t this a bit like the black hole paradox whereby it’s now generally thought (as I understand it) that information is not destroyed in a black hole so, given enough computing power, it should be possible to throw, say, Vlad into a black hole and then predictably reconstruct him after evaporation?

Talk about opening the proverbial can of worms! I think I'll pass on this for the moment because it risks either going off at a tangent, or possibly just revisiting the same unknowns from a different point of view. We can go back to it if necessary.

I hope what I've said covers the rest of your questions, if not, let me know and I'll try again.
∃x(∅ ∈ x ∧ ∀y(y ∈ x → y ∪ {y} ∈ x))

bluehillside Retd.

  • Hero Member
  • *****
  • Posts: 19417
Re: Reductionism vs. Emergence
« Reply #61 on: February 26, 2023, 03:11:00 PM »
Stranger,

Quote
Wade ahead.

Thank you!

Quote
It depends on what you want to achieve. There is a difference that I've been trying to point out between predicting the future from some point in time and dealing with emergence.

But my question is why. Leaving aside for now the complexity of the calculation, what special characteristic do emergent phenomena have that, just by virtue of being emergent, makes them non-predictable even in principle – in essence the claim of the author Vlad linked to in the OP, and rebutted by the author (Ethan Siegel) I linked to in Reply 1?

In other words, why in principle should it be any more difficult/impossible for the reductionist model to predict an emergent future event than it is to predict a non-emergent one?

Quote
An in principle argument can't change the underlying mathematics, so if you're trying to predict an inherently probabilistic chain of events, then the number of possible outcomes grows very rapidly, so for a series of events for which there are only two outcomes, there are 2^N outcomes for N events. At the same time, the probability of any one particular outcome is the product of the probabilities in the chain that leads to it, and so the probability of even the most probable outcome will rapidly shrink, even if the individual events in the chain are highly probable individually. So, to the extent probabilistic events matter to the prediction you're trying to make, the number of outcomes will quickly become vast and the probabilities tiny for each one. A perfect calculation, with infinite computing power, isn't going to give you much of a prediction.

Yes I get that iterative probability events spiral quickly (exponentially?) into very, very hard to predict outcomes but isn’t this still a computing problem of scale rather than one of principle? Take a horse race for example - sure I could study the form, talk to the trainers etc before placing my bet but there are still vast numbers of unknown variables potentially in play that could affect the result. What though if instead I knew absolutely everything there was to know - every possible component of the horses, every possible thought process they would have, every possible weather parameter, every possible chance of a bird passing by and distracting the horse I fancied, every everything in other words? That is, what if there were no more unknowns that could affect the outcome? And let’s say too that I knew all that ab initio, and also that I had a big enough computer to do the calculations - would I then know in advance the winner with no possibility of not cashing in? Isn’t that what a deterministic model would imply?

Quote
The problem with prediction is therefore that we simply don't know if the universe is deterministic or relies on probabilities. You have no problem with perfect predictions for a deterministic universe (except the possible problem with chaos and continua that I mentioned before), but if it relies on fundamental probabilities, then you do.

Siegel covers this I think:

Some composite structures and some properties of complex structures will be easily explicable from the underlying rules, sure, but the more complex your system becomes, the more difficult you can expect it will be to explain all of the various phenomena and properties that emerge.
That latter piece cannot be considered “evidence against reductionism” in any way, shape, or form. The fact that “There exists this phenomenon that lies beyond my ability to make robust predictions about” is never to be construed as evidence in favor of “This phenomenon requires additional laws, rules, substances, or interactions beyond what’s presently known.”

You either understand your system well-enough to understand what should and shouldn’t emerge from it, in which case you can put reductionism to the test, or you don’t, in which case, you have to go back down to the null hypothesis: that there’s no evidence for anything novel.

And, to be clear, the “null hypothesis” is that the Universe is 100% reductionist. That means a suite of things.

• That all structures that are built out of atoms and their constituents — including molecules, ions, and enzymes — can be described based on the fundamental laws of nature and the component structures that they’re made out of.

• That all larger structures and processes that occur between those structures, including all chemical reactions, don’t require anything more than those fundamental laws and constituents.

• That all biological processes, from biochemistry to molecular biology and beyond, as complex as they might appear to be, are truly just the sum of their parts, even if each “part” of a biological system is remarkably complex.

• And that everything that we regard as “higher functioning,” including the workings of our various cells, organs, and even our brains, doesn’t require anything beyond the known physical constituents and laws of nature to explain.

To date, although it shouldn’t be controversial to make such a statement, there is no evidence for the existence of any phenomena that falls outside of what reductionism is capable of explaining.


This seems persuasive to me. Does it to you?

Quote
On the other hand, if you're dealing with emergence, i.e. properties that emerge from the interactions of parts that do not possess them individually, it might not matter. To go back to the example of a quantum model of an atom, you're not really interested in what one specific atom does, you're interested in the generic properties that different kinds of atoms (elements) have. The emergent properties of atoms are mostly to do with the states it is possible for its electrons to be in and, specifically, the energies associated with each one. So, for example, the spectral lines associated with each element correspond to the energy differences between possible states of the electrons, because they represent absorption or emission of photons with those energies, and energy is directly related to frequency. The model cannot tell you about what state one atom is in, or where one electron is, but it does tell you that the possible energies are quantised and what the allowed energies are - and that's all you really need to know. [I'm glossing over a lot of detail here, but I'm trying to get the basic idea across as simply as I can.]

OK (I think) but I’m still not seeing a qualitative difference between the predictability of non-emergent outcomes and emergent ones. Perhaps this is what you meant by “On the other hand, if you're dealing with emergence, i.e. properties that emerge from the interactions of parts that do not possess them individually, it might not matter”? Does not the explanation you set out here apply to both categories?     

Quote
Talk about opening the proverbial can of worms! I think I'll pass on this for the moment because it risks either going off at a tangent, or possibly just revisiting the same unknowns from a different point of view. We can go back to it if necessary.

Yeah I know - seems I’ve been listening to the Infinite Monkey Cage podcast a little too much lately (!) but the point was just a simple one - ie, that no matter how vast (and currently unachievable) the computing power necessary, that’s no reason to invalidate the hypothesis.   

Quote
I hope what I've said covers the rest of your questions, if not, let me know and I'll try again.

Yes, thank you - though I’m still left with the same open question about why, fundamentally, for prediction purposes emergent phenomena break the reductionist model of reality (as Vlad’s author claims) when it seems to me they do no such thing (as my man says).     

"Don't make me come down there."

God

Stranger

  • Hero Member
  • *****
  • Posts: 8236
  • Lightly seared on the reality grill.
Re: Reductionism vs. Emergence
« Reply #62 on: February 26, 2023, 04:34:31 PM »
But my question is why. Leaving aside for now the complexity of the calculation, what special characteristic do emergent phenomena have that, just by virtue of being emergent, makes them non-predictable even in principle – in essence the claim of the author Vlad linked to in the OP, and rebutted by the author (Ethan Siegel) I linked to in Reply 1?

In other words, why in principle should it be any more difficult/impossible for the reductionist model to predict an emergent future event than it is to predict a non-emergent one?

It isn't. The problem with predicting the future, from some starting point in time, depends entirely on whether the universe is deterministic or not; it has nothing at all to do with emergence. That's why I was trying to draw a distinction between that and 'predicting', or explaining from first principles (as you can usually see what has actually emerged), emergent properties.

The former is to do with the passage of time and how things develop over time, while the latter is to do with going 'up' the hierarchy of emergent phenomena, and has nothing to do with development over time per se.

Yes I get that iterative probability events spiral quickly (exponentially?) into very, very hard to predict outcomes but isn’t this still a computing problem of scale rather than one of principle? Take a horse race for example - sure I could study the form, talk to the trainers etc before placing my bet but there are still vast numbers of unknown variables potentially in play that could affect the result. What though if instead I knew absolutely everything there was to know - every possible component of the horses, every possible thought process they would have, every possible weather parameter, every possible chance of a bird passing by and distracting the horse I fancied, every everything in other words? That is, what if there were no more unknowns that could affect the outcome? And let’s say too that I knew all that ab initio, and also that I had a big enough computer to do the calculations - would I then know in advance the winner with no possibility of not cashing in? Isn’t that what a deterministic model would imply?

Yes, it is (ignoring the chaos and continua problem for the moment). The problem is that we don't know for sure if it is fully deterministic or not.

Siegel covers this I think:

Some composite structures and some properties of complex structures will be easily explicable from the underlying rules, sure, but the more complex your system becomes, the more difficult you can expect it will be to explain all of the various phenomena and properties that emerge.
That latter piece cannot be considered “evidence against reductionism” in any way, shape, or form. The fact that “There exists this phenomenon that lies beyond my ability to make robust predictions about” is never to be construed as evidence in favor of “This phenomenon requires additional laws, rules, substances, or interactions beyond what’s presently known.”

You either understand your system well-enough to understand what should and shouldn’t emerge from it, in which case you can put reductionism to the test, or you don’t, in which case, you have to go back down to the null hypothesis: that there’s no evidence for anything novel.

And, to be clear, the “null hypothesis” is that the Universe is 100% reductionist. That means a suite of things.

• That all structures that are built out of atoms and their constituents — including molecules, ions, and enzymes — can be described based on the fundamental laws of nature and the component structures that they’re made out of.

• That all larger structures and processes that occur between those structures, including all chemical reactions, don’t require anything more than those fundamental laws and constituents.

• That all biological processes, from biochemistry to molecular biology and beyond, as complex as they might appear to be, are truly just the sum of their parts, even if each “part” of a biological system is remarkably complex.

• And that everything that we regard as “higher functioning,” including the workings of our various cells, organs, and even our brains, doesn’t require anything beyond the known physical constituents and laws of nature to explain.

To date, although it shouldn’t be controversial to make such a statement, there is no evidence for the existence of any phenomena that falls outside of what reductionism is capable of explaining.


This seems persuasive to me. Does it to you?

I don't disagree with any of that but it's not about predicting the future from a given point in time, it's about explaining the higher levels, with all their emergent features, in terms of the lower, more fundamental levels.

OK (I think) but I’m still not seeing a qualitative difference between the predictability of non-emergent outcomes and emergent ones.

Probably because there isn't one. I obviously haven't managed to get the distinction I'm trying to make across. It's almost as if the two things, emergence and predicting the future, are orthogonal: one proceeds along the time axis and the other goes up the hierarchy of more complex emergent behaviour.

For example, if some hypothetical theorist, with access to unlimited computing power, had existed before there were any atoms, they may have been able to predict that some combinations of protons, neutrons, and electrons might come together to make atoms, that atoms may then be able to make molecules, and that some of them would be large and complex, how these could form the basis for organic chemistry, and then go on to predict life itself, yet have been totally unable (unless the universe is fully deterministic) to predict any of the specific details about the future in terms of events (the formation of a planet called Earth and the path of evolution that led to humans).

Yeah I know - seems I’ve been listening to the Infinite Monkey Cage podcast a little too much lately (!) but the point was just a simple one - ie, that no matter how vast (and currently unachievable) the computing power necessary, that’s no reason to invalidate the hypothesis.   

The black hole information problem is still not settled (although some physicists seem to have made up their minds, there are others that still disagree), because it arises from making certain assumptions one way or the other.

In a Newtonian universe, a perfect picture of the present would give you, not only a perfect prediction of the future, but also a perfect reconstruction of the past, because everything is deterministic and time-reversible, hence you could say that no information is lost or gained. The advent of general relativity and black holes seemed to kill the idea because the world-lines of some particles would become inaccessible when they passed the event horizon and would actually terminate at the singularity.

Then we have the further problem of quantum mechanics. Ignoring field theory for the sake of simplicity, the Schrödinger equation seems to offer just as much certainty as the Newtonian model. As long as the wave-function behaves according to it, it is also deterministic and time-reversible. The problem is that when you're doing a practical calculation and you do a measurement of some observable (position, spin, energy, etc.), you then know the value of that for certain, and you basically throw the Schrödinger equation in the bin for a moment and start again with a wave-function that reflects your new-found certainty. This is what's called 'the collapse of the wave-function' or 'state reduction'. Some new information seems to have appeared (the value of the observable) and some lost (the exact history). Hence The Measurement Problem, which remains unresolved.

Add to that that we still don't know how to properly model black holes because we don't have a fully worked out and tested theory that unites quantum field theory with general relativity, and you're left with a lot of unknowns (perhaps more than some like to admit to and more than pop-science programs and articles might lead you to believe).

Again, I hope this helps but I don't mind carrying on this discussion, it makes a change from the endless repetition we get from some on here.   :)
∃x(∅ ∈ x ∧ ∀y(y ∈ x → y ∪ {y} ∈ x))