The nature of stochastic events in a block universe
26 Oct 2011

Recently I had a paper accepted for Zygon in which I argued that God could create a universe with Darwinian accidents as a block universe. For those who have mercifully escaped the rigours and tribulations of the philosophy of time, a block universe is the entirety of the universe over time considered as a single entity, or block.

I got to thinking: what would stochastic effects or causes look like in a block universe? Suppose God looked at the block from his eternalist perspective and saw that, for example, atomic decay occurred over a distribution curve defined by the half-life of that element. What would he see? Moreover, suppose a given effect E occurred a fraction of the time from a given cause C? How would that look?

I think what God/eternalist physicists would see is this: in the case of a rigidly deterministic cause-and-effect relationship, all the Cs would directly map onto all the Es for that class of causes and effects. That is to say, 100% of all Cs would be followed locally and immediately by Es. But stochastic effects, like quantum decay or larger-scale statistical relations, would not. The Cs and Es would have at best a partial mapping: some Cs would not be followed by Es, some Es would appear without preceding Cs, and so on.

Arguably, in a block conception the very notion of effective causation is otiose. You do not need to do anything more than note the relation between Cs and Es for a class of phenomena. The structure of the block is, in effect, the laws of nature; or, to put it another way, the laws of nature are just a statement of the relations between classes of objects over the axes of space and time. Physics doesn't so much explain the universe as describe it.

I haven't kept up with the literature in either physics generally or time, so if anyone can point to the person who has already made this out, do let me know.
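[Editor's note: the deterministic/stochastic contrast above can be put crudely in code. This is only an illustrative sketch, not anything from the Zygon paper: treat the finished block as a list of events laid out along a time axis, and the "eternalist" reading of a law is just the observed frequency with which a C is followed by an E.]

```python
import random

def build_block(n, p_effect, seed=None):
    """Lay down a finished 'block': at each tick a cause C occurs,
    followed by an effect E with probability p_effect.
    (p_effect and the seed are arbitrary illustration parameters.)"""
    rng = random.Random(seed)
    block = []
    for t in range(n):
        block.append(("C", t))
        if rng.random() < p_effect:
            block.append(("E", t + 0.5))
    return block

def mapping_fraction(block):
    """From the eternalist vantage, just read off how many Cs are
    immediately followed by an E -- no causal talk needed."""
    cs = sum(1 for kind, _ in block if kind == "C")
    es = sum(1 for kind, _ in block if kind == "E")
    return es / cs

deterministic = build_block(10_000, 1.0, seed=1)  # every C maps to an E
stochastic = build_block(10_000, 0.3, seed=1)     # partial mapping only

print(mapping_fraction(deterministic))  # 1.0 -- a rigidly deterministic law
print(mapping_fraction(stochastic))     # roughly 0.3 -- a statistical relation
```

The point of the sketch is that both "laws" are simply features of the block's structure: the deterministic one is a 100% mapping, the stochastic one a partial mapping, and nothing over and above the pattern needs to be invoked.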
Hi John,

You said, “Arguably, in a block conception the very notion of effective causation is otiose.”

I have been toying with classical B theory myself. B theory says that all appearance of sequential events in the universe is an illusion, which implies that all appearance of stochastic events in the universe is an illusion. Likewise, B theory is incompatible with the theory of evolution and perhaps all other scientific theories. This also makes compatibilism (soft determinism) incompatible with the theory of evolution. Regardless of any notion that human free will could be compatible with determinism, hard or soft determinism is incompatible with the theory of evolution. (I would be surprised if I am the first to state this conclusion, but I have no other reference for this.)

We can say that stochastic events are compatible with “simple foreknowledge.” Simple foreknowledge is a view proposed by traditional Arminians that says God has definite foreknowledge of all future events without determining the exact outcome of all events. However, I suppose that there is no way to prove or disprove the existence of simple foreknowledge. I used to think that B theory helped to explain the mechanics of simple foreknowledge, but I now reject that.

Open futurists (open theists) and other theists say that God has “natural knowledge.” Natural knowledge is a view that God knows all possibilities in all possible universes. This could look something along the lines of a Many Worlds block universe. But in the case of open futurism, the block is only a vision with no actual sequence of events.

Fun stuff 🙂
Why single out evolution? Evolution is in principle a completely deterministic theory, as is shown by the fact that it can be simulated by computers using only pseudo-random number generators.
It can be represented by a deterministic simulation, but in fact many of the events are properly random, not pseudorandom. And I would not myself single out evolution any more than any other theory that is ultimately based upon quantum physics. It is a response to others who seem to think it is a problem. And there are several senses of “random” in play, one of which (contingency) is more important than actual randomness; hence “stochasticity”.
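[Editor's note: the point that evolution can be simulated deterministically is easy to make concrete. The toy below is a hypothetical illustration, unrelated to any actual simulator: a mutation-and-selection loop driven by a seeded generator, so the entire "evolutionary history", accidents and all, is fixed by the seed and replays identically on every run.]

```python
import random

def evolve(seed, generations=200, pop_size=50, target=50):
    """Toy hill-climbing 'evolution': integer genotypes mutate by +/-1
    and the half closest to `target` reproduces. Because the generator
    is seeded, the whole history is deterministic."""
    rng = random.Random(seed)
    pop = [rng.randint(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # mutation: each individual drifts by -1, 0, or +1
        pop = [x + rng.choice([-1, 0, 1]) for x in pop]
        # selection: keep the fitter half, then duplicate it
        pop.sort(key=lambda x: abs(x - target))
        pop = pop[: pop_size // 2] * 2
    return pop

# Same seed, same evolutionary history: every "Darwinian accident" replays.
assert evolve(42) == evolve(42)
```

In Barry's terms below, this is the pseudorandom "block universe the first": the run looks contingent from inside, yet the seed fixes everything.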
Speculating on the nature of actuality is exactly as fruitful as speculating on the nature of an elephant with access only to the first six inches of its trunk. Our reality deals with a very small part of actuality, the part that deals with our survival. Speculation within that part is much more fruitful, but maybe not as much fun!
Unless I’m missing some nuances, this looks like the “Best-System Analysis” of law and chance defended by David Lewis. (Lewis attributes the origins of the idea to FP Ramsey and JS Mill; as a result, some people call this the Mill-Ramsey-Lewis approach.) Lewis’s most complete statement of the view occurs in the 1994 Mind paper “Humean Supervenience Debugged”. The block theory is part of the background picture. Here are the key quotes about the best-system part. (I hope they’re not too lengthy; I didn’t want to omit anything important.)

Deterministic laws:

Ramsey once thought that laws were “consequences of those propositions which we should take as axioms if we knew everything and organized it as simply as possible in a deductive system” (1990, p. 150). I trust that by “it” he meant not everything, but only as much of everything as admits of simple organization; else everything would count as a law. I would expand Ramsey’s idea thus (see Lewis 1973, p. 73). Take all deductive systems whose theorems are true. Some are simpler, better systematized than others. Some are stronger, more informative, than others. These virtues compete: an uninformative system can be very simple, an unsystematized compendium of miscellaneous information can be very informative. The best system is the one that strikes as good a balance as truth will allow between simplicity and strength. How good a balance that is will depend on how kind nature is. A regularity is a law iff it is a theorem of the best system.

Stochastic laws and chances:

So we modify the best-system analysis to make it deliver the chances and the laws that govern them in one package deal. Consider deductive systems that pertain not only to what happens in history, but also to what the chances are of various outcomes in various situations, for instance the decay probabilities for atoms of various isotopes. Require these systems to be true in what they say about history. We cannot yet require them to be true in what they say about chance, because we have yet to say what chance means; our systems are as yet not fully interpreted. Require also that these systems aren’t in the business of guessing the outcomes of what, by their own lights, are chance events: they never say that A without also saying that A never had any chance of not coming about. As before, some systems will be simpler than others. Almost as before, some will be stronger than others: some will say either what will happen or what the chances will be when situations of a certain kind arise, whereas others will fall silent both about the outcomes and about the chances. And further, some will fit the actual course of history better than others. That is, the chance of that course of history will be higher according to some systems than according to others. (Though it may well turn out that no otherwise satisfactory system makes the chance of the actual course of history very high; for this chance will come out as a product of chances for astronomically many chance events.) Insofar as a system falls silent, of course it fits whatever happens. The virtues of simplicity, strength, and fit trade off. The best system is the system that gets the best balance of all three. As before, the laws are those regularities that are theorems of the best system. But now some of the laws are probabilistic. So now we can analyse chance: the chances are what the probabilistic laws of the best system say they are.

So, metaphysics FTW?
When I age and retire (as opposed to merely aging) I shall spend a happy year catching up on Lewis. My attitude towards him is somewhat ameliorated by his modal realism (although in the Zygon piece I allow for it), but is increased by the fact that I once argued with him unawares (he was a friend of Neil Thomason’s, my advisor). Many thanks, Rachael – I’ll follow this up.
David Lewis is wonderful, even accounting for the fact that I’m unduly susceptible to loving somebody’s weird metaphysical views solely because they are weird. I was once in the same lecture audience with him (David Armstrong on universals at Syracuse), but did not manage to engage him in conversation. Anyhow, glad to be of service!
As it happens I am now reading Sean Carroll’s From Eternity to Here and although thus far (p. 135) he hasn’t said anything that seems to me to throw any light on your topic, he may have some thoughts that would assist you. But it was just before I read this post, while reading Carroll’s fn #116, that I thought of you and the other philosopher of my acquaintance:

Physicists tend to express bafflement that philosophers care so much about the words. Philosophers, for their part, tend to express exasperation that physicists can use words all the time without knowing what they actually mean.
Congrats on: Recently I had a paper accepted for Zygon in which I argued that God could create a universe with Darwinian accidents as a block universe. but… er, what god is that? I see none around. Can you describe what sort of universe Tin-Tin, Sherlock Holmes, the Minotaur or Voldemort could create? Is it exactly the same sort that gods can create? Or are there separable classes of non-existence and hence non-potence? Is Tin-Tin neo-Leibnizian? Argument, assumptions; assumptions, argument. Cart, horse; horse, cart. Stable door. Bang!
The more limited the being, the smaller and fewer in number the universes that entity could create. Of course nonexistent deities cannot create, nor even simulate, anything at all. But “God” in philosophical discourse is like a kind of limit case for testing arguments. As I do not know there are no gods of any kind, however much I might suspect that, I cannot preclude that possibility, and so the concept plays a useful role in argument. In that respect it is rather like Descartes’ Evil Demon, or Putnam’s Mad Scientist.
John, Theology aside, how do you defend the possibility of Darwinian causation in a block universe where supposedly all sequence of events is an illusion?
I don’t see the problem. If at the physical level causation is the sequence or regularity of classes of events (causes C and effects E), then to say that something at a level above the merely physical caused an effect is to say that all the constituents of the class of the cause C are followed on the temporal dimension by all (or some) of the class of effects E. Now biology is a complex of physical events and processes, so it makes sense to say that a biological event (mitosis, for example) causes an outcome (two daughter cells). It is not an illusion that they occur over time. It is an illusion at the physical level that whatever happen to be the most fundamental entities of all are causing other entities to be something else, since time is fixed. So causal talk is appropriate in temporal terms, but not in ontological terms under a block universe. I am not expressing that well, but it’s 2am…
I suppose I have trouble with your appeal that sequences of events occur over time while time is fixed. You are misusing B Theory. B Theory specifically states that all appearance of sequential events at the physical level is nothing but an illusion, which indicates that any scientific theory involving sequential events, such as causation, is an illusion. (By the way, are there any scientific theories without causation?) If you want to make a philosophical or theological appeal to a block universe at a non-physical level, then you need to clarify that this non-physical level is completely inscrutable to science. If you appeal to B Theory and Many Worlds, while Many Worlds in itself is completely inscrutable to science, then your theory still implies that all apparent causation at the physical level is an illusion. Also, if you appeal to a non-physical block universe that diverges from B Theory to the point where there are actual sequential events with differences in past, present, and future, then you need to say that you are appealing to something other than B Theory, while you might be describing the theistic/deistic perspective of divine natural knowledge. Do you agree that you are not really appealing to B Theory but merely borrowing parts of it for your theological theory?
B Theory specifically states that all appearance of sequential events in the physical level are nothing but an illusion, which indicates that any scientific theory involving sequential events such as causation is an illusion. This is not my understanding of B theory. The B theory says that there is no metaphysically privileged present—past, present and future things all have the same status, and the present things are merely the ones that happen to be located near us in time. That’s compatible with the idea that some things are objectively earlier than others, just not with the idea that some things are objectively past. The SEP entry on time seems to agree. (By the way, Are there any scientific theories without causation?) Fundamental physics?
So, here’s the thing that is unproblematic according to the B theory: “1900 is earlier than the time at which we are speaking”. In other words, the B theory allows one time to be (objectively) earlier than another. And here are the things that are problematic and in need of further analysis according to the B theory: “1900 has the property of being past” and “time really passes”. Those both have to do with the idea that one time is objectively now or present (although I admit that “time really passes” is a really murky way to frame the discussion). A good analogy for the B theory is most people’s view of space. They believe in spatial relations (I am five feet away from the fridge) but not in a privileged here. “Here” just refers to wherever the speaker happens to be standing. Similarly, B theorists believe in B-relations (lunch on October 28th is 24 hours after lunch on October 27th) but not in a privileged now. “Now” just refers to whatever time the speaker happens to be standing at. Another thing that you might be worried about is the idea that time has an inherent direction. (Lunch on the 27th isn’t just 24 hours away from lunch on the 28th; it’s 24 hours earlier.) But you can have a privileged direction without having a privileged place. You could imagine a Cartesian plane where every point was associated with a vector of direction x + y. That would be a space with a privileged direction, but no privileged here. So I don’t see why you couldn’t also have time with a privileged direction, but no privileged now. I don’t think any fundamental physics has causation. The simplest reason for this is that physicists’ terms of art don’t seem to include “cause” or anything even roughly equivalent. Physical laws often have properties that people don’t want causes to have (e.g., being time symmetric, being indeterministic, being phrased in terms of spacetime instead of just time). 
You could revise your concept of cause, but I think the most promising bits of that project are in the special sciences. So I don’t know where the causation in fundamental physics would be.
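[Editor's note: Rachael's Cartesian-plane analogy can be spelled out in a few lines, on my gloss rather than hers: attach the same direction vector to every point. Shifting the coordinates changes nothing, so the space has a privileged direction but no privileged "here", just as B-time could have a privileged direction without a privileged "now".]

```python
# A constant vector field on the plane: direction (1, 1) at every point.
def field(x, y):
    return (1, 1)  # the same direction everywhere, independent of position

# Translate the coordinates by any offset: the field is unchanged, so no
# point is distinguished -- yet the whole space still has an orientation.
for (x, y) in [(0, 0), (3, -2), (100, 7)]:
    assert field(x, y) == field(x + 5, y + 5) == (1, 1)
```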
SEP: “According to The B Theory, there are no genuine, unanalyzable A properties, and all talk that appears to be about A properties is really reducible to talk about B relations. For example, when we say that the year 1900 has the property of being past, all we really mean is that 1900 is earlier than the time at which we are speaking. On this view, there is no sense in which it is true to say that time really passes, and any appearance to the contrary is merely a result of the way we humans happen to perceive the world.”

“There is no sense in which it is true to say that time really passes” does not look like anything is “objectively” earlier than anything else, but rather that some things are “apparently” earlier than other things. Hmm, what fundamental physics has no causation?
Hi Rachael, I cannot off the top of my head elaborate about causation or “non-causation” in fundamental physics, but I will try to clarify my analysis: B theory or eternalism is a philosophical theory based on the scientific theory of time that says time is a dimension. Various philosophical speculation and speculative physics hypotheses stretch the science of time by proposing that the time dimension has something along the lines of the structure of space dimensions while the four dimensions are a block universe governed by causal determinism. So far, no scientific block universe hypothesis has approached the status of scientific theory. However, A theory is consistent with all scientific observation of causation. Also, scientific observation of causation in stochastic processes indicates no evidence of causal determinism. Likewise, there is no scientific support that we exist in a block universe governed by causal determinism. Philosophy may speculate about causal determinism inscrutably governing stochastic processes, but we cannot call this science or claim scientific support.
Also, scientific observation of causation in stochastic processes indicates no evidence of causal determinism. On my linux box, /dev/urandom is a causally determined stochastic process (if I understand what you’re saying, and I’m not sure I do). Stochasticity is a property of models, not the world. (The world may or may not be stochastic — or deterministic for that matter — but with my scientist hat on I have no way to find this out.)
Hi Barry, I do not know what you mean because I know little about electronics, while as you say you might not know what I mean. Anyway, from my perspective, here is an example of inscrutable determinism in a stochastic process: Given a fair coin toss with one side heads and the other side tails, the block universe inscrutably determines the outcome of heads in a particular fair coin toss. However, I suppose that the coin toss was never actually a fair coin toss if the outcome of heads was inscrutably determined.
Ok, my example was a little too linuxy. Here’s the longer version.

John mentioned Avida as an artificial life simulation. I prefer nanopond, which is much easier to hack on. It uses a very fast deterministic random number generator (the Mersenne Twister, if you’re interested). With a bit of extra logging code I can fire this up and generate a block universe. I can see every individual change that happened at every timestep, and I can trace every change back to a deterministic cause all the way back to the initial state of the world. So that’s block universe the first.

Now I make a small change to my algorithm. Instead of using a pseudo-random number generator I plug in an actual random number generator. I rerun the simulation and out pops block universe the second. It’s different, of course, since a different string of random inputs was used.

I then give them to you. Can you tell them apart? For both of them you’re able to tell stories about species being created ex nihilo, about species and ecosystems and, over time, species splitting to form new species. You can also tell stories about genes, morphologies and population genetics. You can make good predictions and retrodictions about each model. Everything works. But unless I tell you which seed I used for the Mersenne Twister, I don’t think you’re going to be able to tell the two apart.

Ok, so what? As a card-carrying member of the non-philosophers union (local 582), I don’t care whether the block universe was generated with random or pseudo-random inputs. To make the math tractable I model both systems *as* *if* the inputs were random. If a deterministic model gives better results, great, I use a deterministic model. That’s why I said that randomness is a property of models, not necessarily (and not knowably) the world.

To answer John’s question: if the universe is pseudorandom (in the sense used above) and God knows what the seed was, then physics explains reality.
If the universe was random or if the universe was pseudorandom and God doesn’t know the seed, then physics only describes reality. To answer your question: I think you would classify both “random” and “pseudorandom without key” as inscrutable, but you’d only characterize the former as fair — maybe?
Barry, I think that I understand you, but let me try to put it another way.

Here is a program that produces pseudo-random results while simulating 100 fair coin tosses: the program simulates 100 fair coin tosses and has the appearance of a random outcome, but when the program is repeated, the results are always exactly the same. For example, every time the program is run, the first ten simulated flips result in hhthhtthth.

Here is a program that produces random results while simulating 100 fair coin tosses: the program simulates 100 fair coin tosses while each simulated flip has a genuine 50-50 chance of resulting in h or t.

In this case, only the latter is fair. But neither is inscrutable in the context of my earlier comments. Inscrutableness would be claiming that the former is genuine randomness, or being able to definitely know the exact outcome of the latter.
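[Editor's note: the two programs described above might look like the sketch below. This is a reconstruction for illustration; the seed, and hence the exact flip sequence, is arbitrary rather than the hhthhtthth of the example.]

```python
import random
import secrets

def pseudo_random_flips(n=100, seed=0):
    """Seeded generator: the 'random-looking' sequence of h/t is
    identical on every run of the program."""
    rng = random.Random(seed)
    return "".join(rng.choice("ht") for _ in range(n))

def random_flips(n=100):
    """Driven by the OS entropy source: a fresh 50-50 outcome per flip,
    so the sequence differs from run to run."""
    return "".join(secrets.choice("ht") for _ in range(n))

assert pseudo_random_flips() == pseudo_random_flips()  # always identical
print(pseudo_random_flips()[:10])  # the same ten flips every run
print(random_flips()[:10])         # varies between runs
```

As Barry notes, a statistical look at either output string shows a fair-seeming 50-50 process; only rerunning the program (or knowing the seed) separates the two.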
The structure of the block is, in effect, the laws of nature, or to put it another way, the laws of nature are just a statement of the relations between classes of objects over the axes of space and time. Physics doesn’t so much as explain the universe as describe it. There are more things in heaven and earth, Horatio, Than are dreamt of (describable) in your philosophy. … to ‘describe’ means to draw an observer’s ‘awareness’ (null quote) Quantum mechanics is often portrayed as being exotic behavior which is far removed from ordinary mundane experience. This might turn out to be misleading. Here is how that may be so … Confusion is created in forming any ‘description’. In the process of synthesizing “awareness” decisions are taken for the sake of discrimination. Emphasizing awareness means to selectively shape, to implicitly differentially bias what the observer considers. To “describe” … MEANS … to impose a (quantum mechanical) (re)configuration-in-awareness of that which is to be considered TANSTAAFL Insight gained is made at the compromise of abandoned awareness . Quantum mechanics would seem to be intensely preoccupied with conforming to a Conservation of Description Consider the essential tautological quality of ‘Description’. …. Description is never really and explicitly about ‘description’ itself. Rather description is about forming and shaping a descriptive bias. Forming a descriptive bias is as adhering to the Conservation of Descriptive Bias Yes, to bias is to be tautological. It is of necessity, inescapably conserved. What does all ‘the above’ have to do with the topic under discussion? 1) “Physics doesn’t so much as explain the universe as describe it.” 2) “… What that means is that Einstein is wrong because you can’t travel back into the past and so there’s some new theory that comes into play, which protects the law of cause and effect. 
It’s very hard to conceive of a logical universe in which cause and effect doesn’t hold.“ If physics is mainly concerned with ‘describing’ and the physicist has already chosen to strongly bias descriptions by adhering to a logical universe wherein cause and effect hold … … they have claimed their “bias” in the direction of emphasizing ’cause and effect’ (as they mean it) and thereby become hindered in conceiving of causality (as it is meant) in some alternative biased formulation. (Conservation Preservation of bias) There are many ways of mincing a description. There is more than one way of biasing the functional meaning of ’cause and effect’. (Null quotation) Example: Omniverse, multiverse, metaverse, etc … The fallacy convoluted within the ‘species-verse’ construction is made apparent by considering the preservation of bias, implicit to the functional meaning of ‘Description’. The decision to describe (and implicitly enumerate) an ensemble of quantum states is a strong bias. It necessarily relinquishes biasing otherwise. Alternate ensemble constructions are diminished. Statistical properties (law of large numbers, physical constants) used to justify the ensemble construction are, of themselves strong bias. TANSTAAFL: By specifying everything, one knows next to nothing. The bias is fully spent without realizing its surrender.
Laplace’s demon is a mega-fallacy. Even more inconvenient than Gödel’s incompleteness theorem … A. Clausen September 9, 2010 at 3:58 am: Hawking has been saying things of this kind for some time. … In fact, one of the essays made probably the best arguments against strong anthropomorphism by stating that if there are a narrow set of values for the universal constants necessary for the existence of life … the Big Bang might only produce a haze of photons … Hawking’s universe (strong anthropomorphism) proves that life cannot exist. Rather, the description of the (Hawking ?) universe excludes the possibility of anthropic process. (… as per A. Clausen) An example of the ‘Laplace demon’ fallacy is to say that it is possible* to describe the (information) of the organism with the specific DNA code sequence. Describing a single instance of a block universe is immensely incomplete because the ‘block description’ implicitly hides and excludes a portion of the description … the unconsidered portion of the description is the ‘meaning’ component. The ‘meaning’ component cannot be obtained without an incalculable inflationary expansion of all block universes over all permutations (?) and combinations. In ordinary words … to write down a complete description of the state of the universe and claim that it has been fully described is self-deceitful. The quandary as to how one specific instance of the ensemble is meaningfully different from another specific instance is algorithmically compressed into … they are precisely different only as they are precisely different in description as determined by the Rule-of-algorithmic-compression. To presuppose that one algorithmic encoding scheme is efficient or lossless or this over another algorithmic encoding scheme … To presuppose that the unpacking or transformation of an algorithmically encoded signal is irrelevant … is to confuse “is” and “ought” in a very subtle manner.
Yes, “is” = “ought” in this instance, but doing it, as opposed to merely assuming what is virtually *known* to be so, doesn’t quite replace the ‘knowing of real world experience’. It takes a huge amount of time and effort to create the demonstrated substantiation. The demonstration of proven reality has huge cost and consequence. * Disregarding any cytological description, the information of the organism can be ‘fallaciously’ … or rather cunningly deceptively virtually … fully described, but in an exceedingly degenerate manner. The algorithmic information packed into the linear DNA sequence coding is dense and unknown. The fuller description requires the computational act of unpacking numerous organisms and computing how the ensemble of possibilities evolves in the simulated ecosystem.
Reductionist view: (Raving’s boldface insertion and edit butchery) John S. Wilkins: I gave my demarcation criterion: something is real if it causes a physical difference. Abstract items cannot cause anything. Now it is reasonable to argue … that “cause” makes no sense at the base physical level. If you adopt a block universe … We call something real if it is able to do whatever it is that we call causation. Information (as a type) doesn’t cause anything; although tokens of information … do. … Raving’s Emergent view: . . . seek meaning of ‘information’ with causal efficacy . . . John S. Wilkins: I am not arguing for physicalism here. I take it to be an axiom of the approach I adopt. Yes, matter and energy are physical things, because in our best physics they are irreducible aspects of spacetime. If it turned out a better physics had a single unified existent (?) … Raving’s Emergent ecological (nonstandard) answer: From a reductionist perspective … The individual interconnection (whose function is to connect nodes) is ‘descriptive bias’ suppressed in details From a (nonstandard ecological) emergent perspective … The individual interconnection is ‘descriptive bias’ enhanced in details The interconnection represents spacetime (distance along which is viewed co-axially) It is (ir)reducible as much and in the manner that the ‘descriptive bias’ that is used to denote it is emphasized and spent. Meaning of ‘interconnection’ … is contingent upon the presumed preemptive decisions which were made to define the meaning of spacetime. – I still do not see the need to think that information exists other than as a functional category of cognitive-semantic systems (i.e., “heads”). a) Algorithmic information encodes the state of the system. b) The meaning of spacetime describes the computation process suitable for using, unpacking or encoding ‘state information’.
Having knowledge of both the machine information state and machine computer program is not the same as going the next step and computing-upon-that-information. The laziness that goes with virtual assurance in assumption is not a substitute for 1st hand experience. Doing reveals ‘meaning’ to the encoded information. That follow-on ‘meaning’ information doesn’t become revealed unless computation is made. The algorithm (computer program) is how ‘physicality’ gets defined with spacetime. ‘Information’ which participates in computation by the imposition of a ‘spacetime – physicality’ registration and alias de facto conforms to spacetime causality. What isn’t known is the outcome … or rather the ‘meaning’ which only is revealed after causality has shaped the development over time. OK, I am getting long winded and lost … to summarize: Anthropic properties aren’t revealed until after computation on ‘system state’ information takes place. To claim that capturing the ‘system state’ + (spacetime program implied and unstated) represents a full description is fallacious. It is also necessary to perform and observe the computation and result of the (compressed + computation pending) information data. A person could have decided to encode and save the ‘System state + anthropic qualities’ and provided an alternate spacetime program for ‘virtual derealization’ to be applied to the ‘state space’ data only if it were deemed necessary to participate in what could be otherwise reliably assumed.
(with Raving’s boldface and edit butchery) John S. Wilkins: But, and here is the hypostatic fallacy, a simulation is not the same as the thing simulated, or a computer model of the solar system would have a mass of 1.992 x 1030kg, which it doesn’t. A “brain” being simulated is a simulated, not a real, brain. Physical differences make a difference. … Zalta distinguishes between things that are bounded by space and time, and are hence concrete, and things which are not so bounded, which he calls abstract objects. For my money, only a concrete object can have causal powers, and hence only a concrete object can be explanatory for physical processes and states ‘Concrete’ versus ‘abstract’? Seems like meta-hypostatic fallacy to me. All is contingent upon how a person decides to hammer out the ‘definition of the spacetime metric’ An Aputure search dragged in The fallacy of assumed universal experience. Was thinking more in the direction of the ‘fallacy of presumed virtual realization’ – implied [universal, eternal]
Another inscrutableness would be claiming that we humans live in a pseudo-random universe while we somehow have free will.
Inject enough inscrutableness and the added noise induces a phase transition, thereby dissipating any sense of clarity. There are at least a few paths to inscrutableness: * How to describe it * How to slice and dice it * How to decide upon it (free will) * How to traverse across it * How to traject ahead in it * How to mix and match it The concept of block universe would perhaps have unstated presumptions.
“An Aputure search dragged in the fallacy of assumed universal experience” Fallacy? Or a social dimension of ethnicity and a means by which ethnic boundaries are maintained? “To the extent that actors use ethnic identities to categorize themselves and others for the purposes of interaction they form ethnic groups in this organizational sense.” Barth, Ethnic Groups and Boundaries, 1969. Barth’s perspective that ethnic groups are “self-defining systems” was a major moment in ethnology. Off topic, but the post you linked to set my teeth on edge. I have no idea what a meta-hypostatic fallacy is (and I really don’t want to know), but I understand the ease with which terms and terminology can be mistaken for thought on a subject; it’s a common fallacy we all fall into at times.