Book Review: We Are Electric by Sally Adee, Allen&Unwin 2023

Discussion in 'Other health news and research' started by Murph, Feb 19, 2025.

  1. Eddie

    Eddie Senior Member (Voting Rights)

    Messages:
    223
    Location:
    Australia
    I'm not convinced that preference is somewhere between determined and stochastic. An event that occurs must either have a reason for occurring or not. Even if certain probabilities are greater than others, that needs explaining, and the explanation should trace back to something either unknowably random or determined. The actions that are deterministic must have some causal chain that continues back to some other deterministic event. This chain must keep going all the way back to the start of the universe or end at something truly random.

    I would argue that preferences that involve representations of future events are based on a current preference for our future selves to achieve some goal or desire. These current preferences are things we have no control over. It is impossible to simply choose to prefer one food over another or choose to believe something you find unconvincing. There is always a huge number of reasons we have a preference for one choice over another, and ultimately all these reasons go back to something prior or to something random. If preference really is a choice, why wouldn't everyone just prefer the taste of healthy foods? That way they could enjoy indulging in broccoli and plain chicken.
     
    Peter Trewhitt likes this.
  2. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    17,070
    Location:
    London, UK
    This is a nice exposition of the way we are now taught to think of things, but the basic math of a world with three symmetrical dimensions indicates that this is a cultural misunderstanding.

    Something that is often forgotten in the history of science is that when Einstein postulated that light was made of photons he at once realised the implication that all electromagnetic events (known to govern pretty much all events around us since nuclei do not change often) must be partly random. He tried to fend off this idea for decades but it was sound.

    People also forget that Gottfried Leibniz had explained in 1690 why, if the world contained any separate individual units (atoms, photons, souls, whatever) rather than just a continuum, all events must be in part determined and in part random. He proposed the concept of sufficient reason: there must be reasons for all events (as you say), but these reasons cannot define the event infinitely precisely, because that leads to a need for an infinity in the equations describing the event, and that makes the math meaningless. There can be no specifiable solution.

    Leibniz's account is very dry and abstract but when Einstein saw he had created just this situation he realised the necessity at once.

    We now know that all events in physics are in part determined and in part stochastic. The equations conveniently provide exactly this mix. However, very few physicists are aware of Leibniz's imperative, and the popular story is that quantum events can be divided into a deterministic process called process 2 and then a random event called process 1 (strangely). Yet quantum theory is very clear that there can be no possible knowable separation between these two components. A quantum event is the life of a field excitation, and that is indivisible.
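
    To put the standard formalism next to that (just a minimal sketch in textbook notation, nothing specific to the book): von Neumann's 'process 2' is the deterministic, unitary evolution of the state,

    i\hbar \, \frac{\partial}{\partial t} \psi(t) = \hat{H} \, \psi(t),

    while 'process 1' assigns an outcome a only a weighting, via the Born rule,

    P(a) = |\langle a | \psi \rangle|^2 .

    The first piece is fully determined and the second is weighted but not determined; the point above is that in any actual quantum event there is no knowable boundary between the two.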

    I agree that 'we' seem to have no 'control' over our choices. People who like the idea of free will tend to like the idea that there is an enduring 'me' that 'has control' and can 'choose' in a way that is not required by the local physics. The reality seems to me much more as Shakespeare suggested: any 'I' is caught up in the tragedy of causal forces from which there is no escape.

    But mathematically the physics seems to tell us that in every fundamental event there is some range of possibilities from which something can 'choose' one option with preference or weighting. For most situations it looks as if a local domain of the electron field has the choice of exchanging one mode of excitation for another - maybe an orbital for a covalent bond. However, when we come to condensed matter physics we find in the Goldstone theorem that chunks of ordered matter (crystals for instance) have their own excitations that can shift to a new energy state while remaining the same 'mode of excitation'. Much as we think intuitively about things around us, cohering 'objects' have a real fundamental dynamic identity. And as such it seems fair to say that they make the choice.
     
    alktipping and Peter Trewhitt like this.
  3. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    2,558
    Location:
    Norway
    I have not followed the entire discussion, but I’ve got some questions about this:
    How do we know this?
    What about the unknown unknowns? Could there not be an unknown order to the stochastic elements of events?

    I’m not saying that it must be that way; I’m asking whether it possibly could be.
     
    alktipping and Peter Trewhitt like this.
  4. poetinsf

    poetinsf Senior Member (Voting Rights)

    Messages:
    519
    Location:
    Western US
    I meant input signals. If you think the qualia is something that resides in input signals, there should be a way to test that hypothesis: remove some of the internal connections to memories and programs and see if the input signal *feels* the same. One way, I suppose, is lobotomy. If you remove the left/right hemisphere communication, I'd bet the sunset, or whatever else requires left/right communication to trigger the feeling, would feel different when seen with only the left or right eye.

    Let me reiterate and tell me if this is correct:
    Your position: qualia is something caused by complexity in the input signals.
    My position: "qualia" is invocation of associated memories and programs when the signal is received.
     
    alktipping and Peter Trewhitt like this.
  5. poetinsf

    poetinsf Senior Member (Voting Rights)

    Messages:
    519
    Location:
    Western US
    I don't follow what you are getting at. It's still about predicting, not explaining, so are we in agreement that science is about predicting and not explaining? If you are saying that the feeling of pain is a measurable/predictable quantity, yes it certainly is. If you are saying that therefore a thing called qualia exists, no it does not. The pain sensation is the result of the input signal triggering the pain center in the brain, not the pain signal itself.

    OK, so we are in agreement that it is the different internal codes getting invoked that causes the illusion of qualia. Is that right?
     
    Peter Trewhitt likes this.
  6. Nightsong

    Nightsong Senior Member (Voting Rights)

    Messages:
    1,111
    He realised that quantisation introduced an irreducible probabilistic element, but he didn't accept fundamental randomness as the last word (as his debates with Bohr proved).
    And in many interpretations the idea of a clean separation either evanesces (many-worlds, Bohm, ...) or introduces new physics (collapse models).
    This is shakier ground for me, but would Leibniz really have thought in such terms as "in part random"? Would he have equated indeterminacy and infinite regress - wasn't his fix for terminating the infinite-regress chain an entity (the monad) that had its own internal principles?
     
    alktipping and Peter Trewhitt like this.
  7. poetinsf

    poetinsf Senior Member (Voting Rights)

    Messages:
    519
    Location:
    Western US
    Sounds like we now have a problem of definition. Wavelength is information that allows us to predict, not a qualitative sensation. A value registers on a measuring instrument; the signal hits photoreceptors; the number gets registered somewhere in the memory; the prefrontal cortex makes use of it. That's information. Are we now calling information qualia?

    That doesn't matter though. It's a model that allows us to predict, and prediction, not how the information is represented, is all that matters.
     
    Peter Trewhitt likes this.
  8. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    17,070
    Location:
    London, UK
    Well, if you accept that the equations of quantum theory are at least broadly valid, then the partial randomness of all events is written right into the theory. The uncertainty about what will happen is never zero, and that changes the concept of a 'ground state'. Even 'empty space' has a chance of throwing up an event. A correction factor had to be put in for this basic underlying randomness. But nothing is completely random, because there are all sorts of rules that constrain likelihoods very tightly, if never completely.
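
    To see where that 'never zero' comes from in the simplest textbook case (my illustration, not anything from the book): the quantum harmonic oscillator has energies

    E_n = \hbar \omega \left( n + \tfrac{1}{2} \right), \quad n = 0, 1, 2, \dots

    so even the lowest state carries E_0 = \hbar \omega / 2 rather than nothing, and the uncertainty relation \Delta x \, \Delta p \ge \hbar / 2 means the spread in what will happen can be squeezed but never eliminated.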

    You could argue that quantum theory is completely wrong and things just look as if they work with those equations on the surface, but the equations have been found to be correct to as many as 18 decimal places, so people are pretty confident they are roughly right.
     
    alktipping and Peter Trewhitt like this.
  9. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    17,070
    Location:
    London, UK
    Yes, and I mean input signals - but input to the event of representation - which is somewhere deep in the brain but we are not sure where.

    We know from clinical evidence over decades that we can predict differences in experience of different sorts when you cut connections at different levels. If you cut connections between retinal cells and the optic nerve you will see areas of the visual field blurred or absent or grey. But if you damage connections from visual cortex to other areas you are more likely not to be aware that anything is missing. There is a vast literature on the effects of damage to connections at different levels. However, since we do not know which cells are getting the inputs that we are reporting, we are not in a position to make any more useful predictions.

    Moreover, there are a lot of serious problems with interpretation of reported experience if we accept that experiences may occur in massive multiplicity across cortex or throughout thalamus. One possibility is that for normal people the report of experience is most closely correlated with input to frontal or prefrontal lobe cells. But if you do a frontal lobotomy it may be that reporting defaults to the content of cells further back. I have spent 20 years going over all the experimental options and it is just not that easy.

    If you cut the hemisphere communication, in general the right side of the brain reports the left visual field and vice versa, but you can easily predict that from what we know about the way vision is initially triaged to the hemispheres in visual cortex 1. It tells us nothing useful about where the later output from visual cortex, involving signals about objects being coloured, ends up. People with intact V1 but damaged outputs to other areas cannot report any vision, so we are pretty sure the visual experience we report is not in V1.

    And it is not just me. Tens of thousands of neuroscientists who know all the stuff about the effects of damage will tell you, just like me, that nobody has any idea where the experience events occur. Predictions always fall foul of the necessary ascertainment processes.
     
    alktipping and Peter Trewhitt like this.
  10. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    17,070
    Location:
    London, UK
    That is probably right. But I am talking about input signals to events that will also be drawing on memories and programs big time. I don't understand what 'invocation' would be other than that. If you call up some memory, the memory has to be fed in as an input to an event to be experienced, just like direct signals from sense organs. And we know that calling up memories activates many of the same intermediate pathways from the relevant sense organs. What nobody knows is where all this is experienced.
     
    alktipping and Peter Trewhitt like this.
  11. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    17,070
    Location:
    London, UK
    Yes, but you are always predicting experiences. Think about it. That is just brute fact.
     
    alktipping and Peter Trewhitt like this.
  12. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    17,070
    Location:
    London, UK
    But hang on, what is this 'result' in physics terms? And how do you know pain is felt in a 'pain centre'? I don't think many neuroscientists would accept that. What is 'triggering'? Is it making the pain centre produce more signals? If so, where do they go, and what role do they have in pain if the pain centre is what receives pain signals? I am well aware of this sort of account, but if you think about it carefully it just ends up circular.

    At some point there is an experience of pain. That must depend on some inputs encoding pain. For the pain to have complex character, which pain does, you need several signals with independent degrees of freedom. Some event has to receive several data elements. Neurons do that fine but computers don't have such events.
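
    Put as a bare counting argument (my own shorthand, nothing more): if the felt pain has k independent qualitative dimensions, the event that receives it needs at least k independent input values, something like p = (p_1, p_2, \dots, p_k), one per degree of freedom. A single scalar 'pain signal' cannot carry that structure, which is why the convergence of many channels onto one receiving event matters.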
     
    alktipping and Peter Trewhitt like this.
  13. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    17,070
    Location:
    London, UK
    Why are qualia 'an illusion'? As Eddington put it, and as Descartes and a whole lot of others pointed out, they are the primary data we work with. We are culturally trained to think they are just 'phantasms' that mirror 'real physical events outside', but as Eddington says, it is these events outside that we actually know nothing about - other than their disposition to give us qualia.

    People say, oh well, physical events outside are made of real stuff rather than just phantasms. But what we mean by the physical realness of 'real stuff' turns out to be just the qualia our brains paint their representations in. The more we think of things as really physical, the more we are invoking qualia.
     
    alktipping and Peter Trewhitt like this.
  14. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    2,558
    Location:
    Norway
    I’m not talking about quantum theory being incorrect; I’m talking about the possibility of another layer underneath, with some physical entities that make up the quantum particles etc., that we’re currently unaware of.
     
    alktipping and Peter Trewhitt like this.
  15. Creekside

    Creekside Senior Member (Voting Rights)

    Messages:
    1,494
    https://www.sciencedaily.com/releases/2025/02/250221125805.htm

    It's about how our predictions about pain affect how strongly we feel pain. That complicates theories about ME pain, since it's not as simple as a voltage reading from a sensor.
     
    Peter Trewhitt likes this.
  16. Creekside

    Creekside Senior Member (Voting Rights)

    Messages:
    1,494
    https://www.sciencedaily.com/releases/2025/02/250219111457.htm

    Another story I read today, that I think fits in this thread. Contact electrification is still poorly understood, with experiments giving confusing results. This researcher paid really close attention to the data, and found that contact history was an important factor: it took many contacts to start getting reliable results. I think this makes a good example for other researchers who are facing unreliable experimental results: expand your experiments and pay attention to the data.
     
    Peter Trewhitt likes this.
  17. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    17,070
    Location:
    London, UK
    Indeed, at least not until his last decade, when he seems to have realised that the implication of his 1905 thought really was inevitable.

    Indeed, but if you take quantum mechanics really seriously, as Feynman put it, 'interpretations' become unnecessary. The tricky step in this is treating the indivisibility of the duration of an excitation as literally as the indivisibility of spatial extent. The excitation then becomes a 'chess move' in my own approach (published somewhat strangely in a journal called Activitas Nervosa Superior, for Henry Stapp's 90th birthday. Henry was a PhD student of von Neumann.)

    Leibniz is quite explicit that L'Être Nécessaire was the totality of sufficient reasons, which would always determine the dynamics of any event, but not so absolutely precisely as to leave no options. This was eventually the basis for each monadic unit always having a degree of choice of action, but he discusses the need for a degree of uncertainty earlier, in the 1685-95 period, on the basis of a priori reasoning about infinitesimals in classical mechanics. (The fundamental monadic account harmonised with this but was quite different, following a mathematical framework Leibniz could never pin down.) Richard Arthur has written about this in great detail in his various monographs on Leibniz. Richard and I agree on the analysis more or less completely. One thing Leibniz says is that physical interaction involving motion must always be to some extent 'vague'. It can never be infinitely precise.

    Leibniz is discussing these arguments in a different context from Einstein but he makes clear that, like Einstein, he understands that uncertainty is entailed by a theory that has intrinsic individuation of dynamic units ('atomism' or monadism). A true individual cannot cover all options. It has to do this or do that and a theory that tries to make that infinitely precise will fall foul of its own infinities.

    So this gets very complex because Leibniz is constantly dodging between classical mechanics and his idea of an underlying monadic dynamic. Sometimes it is not clear what he is referring to. But the need for a degree of randomness precedes the monadic dynamics, which emerged with the New System from the mid-1690s.
     
    Nightsong and Peter Trewhitt like this.
  18. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    17,070
    Location:
    London, UK
    Quantum theory quite specifically disallows that. Its mathematical structure is incompatible with any lower level. Bohm tried very hard to prove that was not necessarily so, but as his co-author Basil Hiley is at pains to point out, in later years Bohm saw that Bohr was basically right.
     
    alktipping, Peter Trewhitt and Utsikt like this.
  19. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    17,070
    Location:
    London, UK
    Indeed. I was discussing this yesterday with a medical friend and we were remembering that as students we were taught that pain has six different 'dimensions', and that doesn't even take into account the predictive aspect. You probably need 100 input channels to a single event to cover the varieties of pain. But that is OK because neurons can have 50,000 input channels.
     
    alktipping, Peter Trewhitt and Yann04 like this.
  20. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    17,070
    Location:
    London, UK
    I checked on Google. There are eight features of pain.
     
    Peter Trewhitt likes this.
