necessity of dualities

All truths lie between two opposite positions. All dramas unfold between two opposing forces. Dualities are both ubiquitous and fundamental. They shape both our mental and physical worlds.

Here are some examples:

Mental

objective | subjective
rational | emotional
conscious | unconscious
deductive | inductive
absolute | relative
positive | negative
good | evil
beautiful | ugly
masculine | feminine


Physical

deterministic | indeterministic
continuous | discrete
actual | potential
necessary | contingent
inside | outside
infinite | finite
global | local
stable | unstable
reversible | irreversible

Notice that even the above split between the two groups itself is an example of duality.

These dualities arise as an epistemological byproduct of the method of analytical inquiry. That is why they are so thoroughly infused into the languages we use to describe the world around us.

Each relatum constitutive of dipolar conceptual pairs is always contextualized by both the other relatum and the relation as a whole, such that neither the relata (the parts) nor the relation (the whole) can be adequately or meaningfully defined apart from their mutual reference. It is impossible, therefore, to conceptualize one principle in a dipolar pair in abstraction from its counterpart principle. Neither principle can be conceived as "more fundamental than," or "wholly derivative of" the other.

Mutually implicative fundamental principles always find their exemplification in both the conceptual and physical features of experience. One cannot, for example, define either positive or negative numbers apart from their mutual implication; nor can one characterize either pole of a magnet without necessary reference to both its counterpart and the two poles in relation - i.e. the magnet itself. Without this double reference, neither the definiendum nor the definiens relative to the definition of either pole can adequately signify its meaning; neither pole can be understood in complete abstraction from the other.

- Epperson & Zafiris - Foundations of Relational Realism (Page 4)


Various lines of Eastern religious and philosophical thought intuited how languages can hide underlying unity by artificially superimposing conceptual dualities (the foremost of which is the almighty object-subject duality) and posited the nondual wholeness of nature several thousand years before the advent of quantum mechanics. (The analytical route to enlightenment is always longer than the intuitive route.)

Western philosophy on the other hand

  • ignored the mutually implicative nature of all dualities and denied that the wholeness of nature is inaccessible to analytical inquiry.

  • got fooled by the precision of mathematics, which is, after all, just another language invented by human beings.

  • confused partial control with understanding, and engineering success with ontological precision. (Understanding is a binary parameter: either you understand something or you do not. Control, on the other hand, is a continuous parameter: you can have partial control over something.)

As a result, Western philosophers mistook representation for reality and tried to confine truth to one end of each dualism in order to create a unity of representation matching the unity of reality.

Side Note: Hegel was an exception. Like Buddha, he too saw dualities as artificial byproducts of analysis, but unlike him, he suggested that one should transcend them via synthesis. In other words, for Buddha unity resided below, and for Hegel unity resided above. (Buddha wanted to peel away complexity to its simplest core, while Hegel wanted to embrace complexity in its entirety.) While Buddha stopped theorizing and started meditating instead, Hegel sought salvation in higher levels of abstraction, reached via alternating chains of analyses and syntheses. (Buddha wanted to turn off cognition altogether, while Hegel wanted to turn up cognition full-blast.) Perhaps at the end of the day they were both preaching the same thing. After all, at the highest level of abstraction, thinking probably halts and emptiness reigns.

The social thinkers were the first to wake up and revolt against the grand narratives built on such discriminative pursuits of unity. There was just way too much at stake for them, politically and ethically. The result was an overreaction that replaced unity with multiplicity and considered all points of view as valid. In other words, the pendulum swung the other way, and Western philosophy jumped from one state of deep confusion into another. In fact, this time around the situation was even worse, since there was an accompanying deep sense of insecurity as well.

The cacophony spread into hard sciences like physics too. Grand narratives were abandoned in favor of instrumental pragmatism. Generations of new physicists were raised as technicians who basically had no clue about the foundations of their own discipline. The most prominent of them could even publicly make an incredibly naive claim such as “something can spontaneously arise from nothing through a quantum fluctuation” and position it as a non-philosophical and non-religious alternative to existing creation myths.

Just to be clear, I am not trying to argue here in favor of Eastern holistic philosophies over Western analytic philosophies. I am just saying that the analytic approach requires us to embrace dualities as two-sided entities, including the duality between the holistic and analytic approaches.


Politics experienced a similar swing from conservatism (which hailed unity) towards liberalism (which hailed multiplicity). During this transition, all dualities and boundaries got dissolved in the name of more inclusion and equality. The everlasting dynamism (and the subsequent wisdom) of dipolar conceptual pairs (think of magnetic poles) got killed off in favor of an unsustainable burst in the number of ontologies.

Ironically, liberalism resulted in more sameness in the long run. For instance, the traditional assignment of roles and division of tasks between father and mother got replaced by equal parenting principles applied by genderless parents. Of course, upon the dissolution of the gender dipolarity, the number of parents one can have became flexible as well. Having one parent became as natural as having two, three or four. In other words, parenting became a community affair in its truest sense.

 
[Figure: Duality]
 

An even greater irony was that liberalism itself forgot that it represented one extreme end of another duality. It was in a sense a self-defeating doctrine that aimed to destroy all discriminative pursuits of unity except its own. (The only way to “resolve” this paradox is to introduce a conceptual hierarchy among dualities where the higher ones can be used to destroy the lower ones, in a fashion similar to how mathematicians deal with Russell’s paradox in set theory.)


Of course, at some point the pendulum will swing back to the pursuit of unity again. But while we swing back and forth between unity and multiplicity, we keep skipping over the only sources of representational truths, namely the dualities themselves. For some reason we are extremely uncomfortable with the fact that the world can only be represented via mutually implicative principles. We find “one” and “infinity” tolerable but “two” arbitrary and therefore abhorrent. (The prevalence of “two” in mathematics and “three” in physics was mentioned in a previous blog post.)

I am personally obsessed with “two”. I look out for dualities everywhere and share the interesting finds here on my blog. In fact, I go even further and try to build my entire life on dualities whose two ends mutually enhance each other every time I visit them.

We should not collapse dualities into unities for the sake of satisfying our sense of belonging. We need to counteract this dangerous sociological tendency using our common sense at the individual level. Choosing one side and joining the groupthink is the easy way out. We should instead strive to carve out our identities by consciously sampling from both sides. In other words, when it comes to complex matters, we should embrace the dualities as a whole and not let them split us apart. (Remember, if something works very well, its dual should also work very well. However, if something is true, its dual has to be wrong. This is exactly what separates theory from reality.)

Of course, it is easy to talk about these matters, but who said that the pursuit of truth would be easy?

Perhaps there is no pursuit to speak of unless one is pre-committed to choosing a side, and perhaps swinging back and forth between the two ends of a dualism is the only way nature can maintain its neutrality without sacrificing its dynamism? (After all, there is no current without a polarity in the first place.)

Perhaps we should just model our logic after reality (as Hegel wanted to do), rather than expect reality to conform to our logic? (In this way we can have our cake and eat it too!)

formalism, consciousness and understanding

In a formal (deductive) subject, the level of competency correlates with the depth of non-formalism one can display around the subject. (For instance, the mastery of a mathematician can only be gauged when he stops scribbling down mathematical notation, dives into conceptual vagueness and starts using real words.) In a non-formal (intuitive) subject, the level of competency correlates with the depth of formalism one can display around the subject.

Similarly, one can only understand unconscious things by using consciousness, and conscious things by using the unconscious. Due to the architecture of our brains, we typically find the latter much easier to do. Our education system does not balance the scale either. (Practicing lucid dreaming, meditation and improvisation can help.) We generally do not know how to open up and let our non-verbal, intuitive brain reign, and we do not care about the unconscious until it breaks down.

classical vs innovative businesses

As you move away from zero-to-one processes, economic activities become more and more sensitive to macroeconomic dynamics.

Think of the economy as a universe. Innovative startups correspond to quantum mechanical phenomena rendering something from nothing. The rest of the economy works classically within the general relativity framework where everything is tightly bound to everything else. To predict your future you need to predict the evolution of everything else as well. This of course is an extremely stressful thing to do. It is much easier to exist outside the tightly bound system and create something from scratch. For instance, you can build a piece of productivity software that will help companies increase their profit margins. In some sense such software will exist outside of time. It will sell whether there is an economic downturn or an upturn.


In classical businesses, forecasting the near future is extremely hard. Noise clears out when you look a little further out into the future. But the far future is again quite hard to talk about, since you start feeling the long-term effects of the innovation being made today. So the difficulty hierarchy looks as follows:

near future > far future > mid future

In innovative businesses, forecasting the near future is quite easy. In the long run, everyone agrees that transformation is inevitable. So forecasting the far future is hard but still possible. However, what is going to happen in the mid term is extremely hard to predict. In other words, the above hierarchy gets flipped:

mid future > far future > near future

Notice that the mid future itself is actually quite hard to define. It can move around with the wind, so to speak, just as intended by the goddesses of fate in Greek mythology.

In Greek mythology the Moirae were the three Fates, usually depicted as dour spinsters. One Moira spun the thread of a newborn's life. The other Moira counted out the thread’s length. And the third Moira cut the thread at death. A person’s beginning and end were predetermined. But what happened in between was not inevitable. Humans and gods could work within the confines of one's ultimate destiny.

Kevin Kelly - What Technology Wants

I personally find it much more natural to just hold onto near future and far future, and let the middle inflection point dangle around. In other words I prefer working with innovative businesses.

Middle zones are, generally speaking, ill-defined, which presents another high-level justification for the barbell strategy popularized by Nassim Nicholas Taleb. The mid-term behavior of complex systems is tough to crack. For instance, short-term weather forecasts are highly accurate and long-term climate changes are also quite foreseeable, but what is going to happen in the mid term is anybody’s guess.

The far future always involves “structural” change. Things will definitely change, but the change is not of a statistical nature. As mentioned earlier, innovative businesses are not affected by short-term statistical (environmental / macroeconomic) noise. Instead they suffer from mid-term statistical noise of the type that phase-transition states exhibit in physics. (Think of the turbulence phenomenon.) So the above two difficulty hierarchies can be seen as particular manifestations of the following master hierarchy:

statistical unpredictability > structural unpredictability > predictability


Potential entrepreneurs jumping straight into tech without building any experience in traditional domains are akin to physics students jumping straight into quantum mechanics without learning classical mechanics first. This jump is possible, but also pedagogically problematic. It is much more natural to learn things in the historical order that they were discovered. (Venture capital is a very recent phenomenon.) Understanding the idiosyncrasies and complexities of innovative businesses requires knowledge of how the usual, classical businesses operate.

Moreover, just like quantum states decohere into classical states, innovative businesses behave more and more like classical businesses as they get older and bigger. The word “classical” just means the “new” that has passed the test of time. Similarly, decoherence happens via entanglements, which is basically how time progresses at the quantum level.

By the way, this transition is very interesting from an intellectual point of view. For instance, innovative businesses are valued using a revenue multiple, while classical businesses are valued using a profit multiple. When exactly do we start to value a maturing innovative business using a profit multiple? How can we tell that it has matured? When exactly does a blue ocean become a red one? With the first blood spilled by the death of competitors? Is that an objective measure? After all, it is investors’ expectations themselves that sustain innovative businesses, which burn tons of cash all the time.
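
To make the two conventions concrete, here is a tiny sketch with entirely made-up numbers (the figures and multiples below are hypothetical, chosen only to show how the two yardsticks diverge):

```python
# Toy illustration of the two valuation conventions (all numbers are hypothetical).

def value_by_revenue_multiple(revenue: float, multiple: float) -> float:
    # Convention for innovative businesses: profits are often negative,
    # so the market prices growth expectations via revenue.
    return revenue * multiple

def value_by_profit_multiple(profit: float, multiple: float) -> float:
    # Convention for classical businesses: cash flows are stable,
    # so the market prices current profitability.
    return profit * multiple

# A hypothetical innovative business: $10M revenue, burning cash (negative profit).
print(value_by_revenue_multiple(10e6, 10))        # 100000000.0 -> ~$100M despite the losses

# A hypothetical classical business: $10M revenue at a 20% margin -> $2M profit.
print(value_by_profit_multiple(10e6 * 0.20, 15))  # 30000000.0 -> ~$30M
```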

Also, notice that, just as all classical businesses were once innovative businesses, all innovative businesses are built upon the stable foundations provided by classical businesses. So we should not think of the relationship as one way. Quantum may become classical, but quantum states are always prepared by classical actors in the first place.


What happens to classical businesses as they get older and bigger? They either evolve or die. Combining this observation with the conclusions of the previous two sections, we deduce that the combined predictability-type timeline of an innovative business becoming a classical one looks as follows:

  1. (Innovative) Near Future: Predictability

  2. (Innovative) Mid Future: Statistical Unpredictability (Buckle up. You are about to go through some serious turbulence!)

  3. (Innovative) Far Future: Structural Unpredictability (Congratulations! You successfully landed. Older guys need to evolve or die.)

  4. (Classical) Near Future: Statistical Unpredictability (Wear your suit. There seems to be radiation everywhere on this planet!)

  5. (Classical) Mid Future: Predictability

  6. (Classical) Far Future: Structural Unpredictability (New forms of competition landed. You are outdated. Will you evolve or die?)

Notice the alternation between structural and statistical forms of unpredictability over time. Is it coincidental?


Industrial firms thrive on reducing variation (manufacturing errors); creative firms thrive on increasing variation (innovation).
- Patty McCord - How Netflix Reinvented HR

Here Patty McCord’s observation is in line with our analogy. She is basically restating the disparity between the deterministic nature of classical mechanics and the statistical nature of quantum mechanics.

Employees in classical businesses feel like cogs in a wheel, because what needs to be done is already known with great precision and there is nothing preventing operations from being run with the utmost efficiency and predictability. They are (again, just like cogs in a wheel) utterly dispensable and replaceable. (Operating in red oceans, these businesses primarily focus on cost minimization rather than revenue maximization.)

Employees in innovative businesses, on the other hand, are given a lot more space to maneuver because they are the driving force behind an evolutionary product-market fit process that is not yet complete (and in some cases will never be complete).


Investment pitches too have quite opposite dynamics for innovative and classical businesses.

  • Innovative businesses raise money from venture capital investors, while classical businesses raise money from private equity investors who belong to a completely different culture.

  • If an entrepreneur prepares a 10-megabyte Excel document for a venture capital firm, he will be perceived as delusional and naive. If he does not do the same for a private equity firm, he will be perceived as entitled and preposterous.

  • Private equity investors look at data about the past and run statistical, black-box models. Venture capital investors listen to stories about the future and think in causal, structural models. Remember, classical businesses are at the mercy of the macroeconomy, and a healthy macroeconomy displays maximum unpredictability. (All predictabilities are arbitraged away.) Whatever remnants of causal thinking are left in private equity are mostly about fixing internal operational inefficiencies.

  • The number of reasons for rejecting a private equity investment is more or less equal to the number of reasons for accepting one. In the venture capital world, rejection reasons far outnumber the acceptance reasons.

  • Experienced venture capital investors do not prepare before a pitch. The reason is not that they have a mastery over the subject matter of the entrepreneur’s work, but that there are far too many subject-matter-independent reasons for not making an investment. Private equity investors on the other hand do not have this luxury. They need to be prepared before a pitch because the devil is in the details.

  • For venture capital investors, it is very hard to tell which company will achieve phenomenal success, but very easy to spot which one will fail miserably. Private equity investors have the opposite problem: they look at companies that have survived for a long time, so future miserable failures are statistically rare and hard to tell apart.

  • In innovative businesses, founders are (and should be) irreplaceable. In classical businesses, founders are (and should be) replaceable. (Similarly, professionals can successfully turn around failing classical companies, but can never pivot failing innovative companies.)

  • Private equity investors with balls do not shy away from turn-around situations. Venture capital investors with balls do not shy away from pivot situations.

pharma vs diagnostics

The bioinformatics industry is bifurcating into two categories defined by the two extreme value-generation endpoints, namely drug development and data creation.

  • Drugs come with patent protection and therefore create defensible sources of revenue. Data usually suffers from diminishing returns, and data generation cannot sustain value indefinitely, but this is not true in the case of biology, which is (almost by definition) the most complex subject in the universe. (The fact that biological data seems to have a shorter half-life makes the situation even worse.)

  • Pharma companies develop the drugs, and (the volume-driven) diagnostics companies generate (the majority of) the data.

Pharma companies love to dip into data because it drives their precision medicine programs forward by enabling

  • the targeting of the right patient cohorts for existing drugs, and

  • the generation of novel drug targets.

Better precision medicine generates more knowledge about genetic variants and more drugs targeting them, which in turn render diagnostic tests respectively more accurate and more useful. In other words, more data eventually leads to an increase in the demand for diagnostic tests and therefore results in the generation of even more data. (This positive feedback cycle will greatly accelerate the maturation of the precision medicine paradigm in the near future.)

Pharma companies and diagnostics companies behave very differently (as summarized in the table below) and this creates a polarity in the product and business model configuration space for the bioinformatics industry whose primary customers (in the private domain) are these two types of companies.

[Table: Pharma vs Diagnostics]

The last two rows of the table are very important and worth explaining in greater detail:

  • Pharma companies do basic research and therefore want to tap into all types of data sets. (They also have a greater tendency to use all types of analytical applications, while diagnostics companies ignore the long tail.) These datasets are generally huge and may reside in a private cloud or with some public cloud provider. So pharma companies have to be able to connect to all of these datasets and run computation-heavy analyses that seamlessly weave through them. (When you are dealing with big data, computation needs to go to the data rather than the other way around.) In other words, they naturally belong to the multi-cloud paradigm. Diagnostics companies, on the other hand, belong to the cloud paradigm, since they are optimizing cost and will just choose a single cloud provider based on price and convenience. (Read this older blog post to better understand the difference and polarity between the multi-cloud and cloud paradigms.)

  • Pharma companies are looking for help to solve their complex problems. Hence they are primarily focused on solutions. This pushes the software layer behind the services layer. In other words, the software is still there, but it is the service provider who is mostly using it. Diagnostics companies, on the other hand, focus on their unit economics. They do not need much consulting, since they just optimize the hell out of their production pipelines and leave them alone most of the time.

evolution as a physical theory

Evolution has two ingredients, constant variation and constant selection.

Two important observations:

  1. Variation in biology exhibits itself in myriad forms, but they can all be traced back to the second law of thermodynamics, which says that entropy (on average) always increases over time. (It is not a coincidence that Darwin formulated the theory of natural selection in the 1850s, around the same time Clausius formulated the second law.)

  2. If you decrease selection pressures, the fitness landscape expands. You see fewer people dying around you, but you also see more variety at any given time. As we learn to cure and cope with (physical and mental) disorders using advances in (hard and soft) sciences / extend our societal safety nets further / improve our parenting and teaching techniques, more and more people stay alive and functional to go on to mate and reproduce. Progress creates more elbow room for evolution so that it can try out even wilder combinations than before.

    Conversely, if you increase selection pressures, the fitness landscape contracts, but in return the shortened life cycles enable evolution to shuffle through the contracted landscape of possibilities at a higher speed.

    Hence, selection pressure acts like a lever between spatial variation and temporal variation. Decreasing it increases spatial variation and decreases temporal variation; increasing it decreases spatial variation and increases temporal variation.

These observations imply respectively the following:

  1. Evolution never stops since the second law of thermodynamics is always valid.

  2. Remember, Einstein discovered that space and time by themselves are not invariant, only spacetime as a whole is. Similarly, evolution may slow down or speed up in space or time dimensions, but is always a constant at spacetime level. In other words, the natural setting for evolution is spacetime.

It is not surprising that thermodynamics has so far stood out as the odd ball that cannot be unified with the rest of physics. The principle of entropy seems to be only half of the picture. It needs to be combined with the principle of selection to give rise to a spacetime-invariant theory at the level of biological variations. In other words, evolution (i.e. the principles of entropy and selection combined together) is more fundamental than thermodynamics from the point of view of physics.

Side Note: The trouble is that the principle of selection is a generative, computational notion and does not lend itself to a structural, mathematical definition. However the same can also be said for the principle of entropy, which looks quite awkward in its current mathematical forms. (Recall from the older post Biology as Computation that biology is primarily driven by computational notions.)

All of our theories in physics, except for thermodynamics, are time-symmetric. (i.e. They cannot distinguish the past from the future.) The second law of thermodynamics, on the other hand, states that entropy (on average) always increases over time and can therefore (obviously) detect the direction of time. This strange asymmetry actually disappears in the theory of evolution, where something emerges to counterbalance the increasing entropy, namely increasing control.

Side Note: Entropy is said to increase globally but control can only be exercised locally. In other words, control decreases entropy locally by dumping it elsewhere, just like a leaf blower. Of course, you may be wondering how, as finite localized beings, we can formulate any global laws at all. I share the same sentiment because, empirically speaking, we cannot distinguish a sufficiently large local counterbalance from a global one. Whenever I talk about the entropy of the whole universe, please take it with a grain of salt. (Formally speaking, thermodynamics is not even defined for open systems. In other words, it cannot be globally applied to universes with no peripheries.) We will dig deeper into the global vs local dichotomy in Section 3. (Strictly speaking, thermodynamics cannot be applied locally either, since every system is bound to be somewhat open due to our inability to completely control its environment.)


1. Increasing Control

All living beings exploit untapped energy sources to exhibit control and influence the future course of their own evolution.

Any state that is not lowest-energy can be considered semi-stable at best. Eventually, by the second law of thermodynamics, every such state evolves towards the lowest-energy configuration and emits energy as a by-product. By “untapped energy sources” I mean such extractable pockets of energy.

So, put more succinctly, all living beings harness entropy to reduce entropy.

The cumulative effect of their efforts over long periods of time has so far been quite dramatic indeed: what basically started out as simple RNA-based structures floating uncontrollably in the oceans eventually turned into human beings proposing geo-engineering solutions to the global climate problems they themselves have created.

Let us now look at two interesting internal examples.


1.1. Cognitive Example

Our brains continuously make predictions and proactively interpolate from sensory data flow. In fact, when the higher (more abstract) layers of our neural networks lose the ability to project information downwards and become solely information-receivers, we slip into a comatose state.

Our predictive mental models slowly decay due to entropy (that is why blind people gradually lose their ability to dream) and are also at constant risk of becoming irrelevant. To address these problems, our brains continuously reconstruct the models in the light of new triggers and revise them in the light of new evidence. If they did not exercise such self-control, we would be stuck in an echo chamber of slowly decaying mental creations of our own. (That is why schizophrenic people gradually lose touch with reality.)

Autism and schizophrenia can be interpreted as imbalances in this controlled hallucination mechanism and be thought of as inverses of each other, causing respectively too much control and too much hallucination:

Aspects of autism, for instance, might be characterized by an inability to ignore prediction errors relating to sensory signals at the lowest levels of the brain’s processing hierarchy. That could lead to a preoccupation with sensations, a need for repetition and predictability, sensitivity to certain illusions, and other effects. The reverse might be true in conditions that are associated with hallucinations, like schizophrenia: The brain may pay too much attention to its own predictions about what is going on and not enough to sensory information that contradicts those predictions.

Jordana Cepelewicz - To Make Sense of the Present, Brains May Predict the Future


1.2. Genomic Example

Since only 2 percent of our DNA actually codes for proteins, the remaining 98 percent was initially called “junk DNA”, a label which later proved to be a wild misnomer. Today we know that this junk part performs a myriad of interesting functions.

For instance, one thing it does for sure is insulate the precious 2 percent from genetic drift by decreasing the probability that a mutation event causes critical damage.

Side Note: It is amazing how evolution has managed to diminish the coding region down to 2 percent (without sacrificing any functionality) by getting more and more dexterous at exposing the right coding regions (for gene expression) at the right time. This has resulted in greater variability of gene expression rates across different cellular contexts.

Remember (from our previous remarks) that if you decrease selection pressure, spatial variation increases and temporal variation decreases. Nature achieves this feat via an important intermediary mechanism. To understand this mechanism, first observe the following:

  1. The ability to decrease selection pressure requires greater control over the environment, and decreased selection pressure entails a longer life span.

  2. Exerting greater control over the environment requires more complex beings.

  3. More complexity and a longer life span entail, respectively, greater fragility towards and longer exposure time to random mutation events.

  4. This increased susceptibility to randomness in turn necessitates more protective control over genomes.

Since an expansion in the fitness landscape is worthless unless you can roam around on it, greater control exerted at the phenotypic level is useless without greater control exerted at the genotypic level. In other words, as we channel the speed of evolution from the temporal to the spatial dimension, we need to drive more carefully to make it safely home. From this point of view, it is not surprising at all that the percentage of non-coding DNA in a species’ genome is generally correlated with its “complexity”.

I used quotation marks here since there is no generally-agreed-upon, well-defined notion of complexity in biology. But one thing we know for sure is that evolution generates more and more of it over time.


2. Increasing Complexity

Evolution is good at finding out efficient solutions but bad at simplification. As time passes by, both ecosystems and their participants become more complex.

Currently we (as human beings) are by far the greatest complexity generators in the universe. This sounds wildly anthropocentric of course, but when it comes to complexity, we are really the king of the universe.


2.1. Positive Feedback between Control and Complexity

Control and complexity are more or less two sides of the same coin. They always coexist because of the following strong positive feedback mechanism between them:

  • Greater control for you implies more selection pressure for everyone else. In other words, at the aggregate level, greater control increases selection pressure and thereby generates more complexity. (This observation is similar to saying that greater competition makes everyone stronger.)

  • How can you assert more control in an environment that has just become more complex? You need to increase your own complexity so that you can get a handle on things again. (This observation is similar to saying that the human brain will never be intelligent enough to understand itself.)


2.2. Positive Feedback between Higher and Lower Complexity Levels

All ecological networks are stratified into several levels:

  • Internally speaking, each human being is an ecology unto himself, consisting of tens of trillions of cells coexisting with equally many cells in the human bacterial flora. This internal ecology is stratified into levels like tissues, organs and organ systems.

  • Externally speaking, each human being is part of a complex ecology that is stratified into many layers that cut across our relationships to each other and to the rest of the biosphere.

Greater complexity generated at higher levels like economics, sociology and psychology propagates all the way down to the cellular level. Conversely, greater complexity generated at a very low level affects all the levels sitting above it. This positive feedback loop accelerates total complexity generation.

Two concrete examples:

  • The notion of an ideal marriage has evolved drastically over time, along with the increasing complexity of our lives. Family as a unit is evolving for survival.

  • Successful people at the frontiers of science, technology, business and art all tend to be quirky and abnormal. (Read the older blog post Success as Abnormality for more details.) Through such people, an expansion of the fitness landscape at the cognitive level propagates up to an expansion at the societal level.


2.3. Positive Correlation between Fragility and Complexity Level

Overall fragility increases as complexity levels are piled up on top of each other. In order to ensure stability, it is necessary for each level to be more robust than the level above it. (Think of the stability of pyramid structures.)

The invention of the nucleus by biological evolution is an illustrative example. Prokaryotes (cells without a nucleus) are much more open to information (DNA) sharing than the eukaryotes (cells with a nucleus) which depend on them. This makes them simpler but also more robust.

It could take eukaryotic organisms a million years to adjust to a change on a worldwide scale that bacteria [prokaryotes] can accommodate in a few years. By constantly and rapidly adapting to environmental conditions, the organisms of the microcosm support the entire biota, their global exchange network ultimately affecting every living plant and animal.

Microcosmos - Lynn Margulis & Dorion Sagan (Page 30)

Whenever you see a long-lasting fragility, look for a source of robustness at the level below. Just as our mechanical machines and factories are maintained by us, we ourselves are maintained by even more robust networks. Each level should be grateful to the level below.

Side Note: AI singularity people are funny. They seem to be completely ignorant about the basics of ecology. A supreme AI will be the single most fragile form of life. It cannot take over the world. It can merely suffer from an illusion of control, just like we do. You cannot destroy or control what is below you in the ecosystem. The survival of each level depends on the freedom of the level below. Just as we depend on the stability provided by freely evolving and information-exchanging prokaryotes, a supreme AI will depend on the stability provided by us.


2.4. Positive Correlation between Fragility and Firmness of Identity

How limited and rigid life becomes, in a fundamental sense, as it extends down the eukaryotic path. For the macrocosmic size, energy, and complex bodies we enjoy, we trade genetic flexibility. With genetic exchange possible only during reproduction, we are locked into our species, our bodies, and our generation. As it is sometimes expressed in technical terms, we trade genes "vertically" - through the generations - whereas prokaryotes trade them "horizontally" - directly to their neighbors in the same generation. The result is that while genetically fluid bacteria are functionally immortal, in eukaryotes sex becomes linked with death.

Microcosmos - Lynn Margulis & Dorion Sagan (Page 93)

Biological entities that are more protective of their DNA (e.g. eukaryotes, whose genes are packed into chromosomes residing inside nuclei) exhibit greater structural permanence. (We reached a similar conclusion while discussing the junk DNA example in Section 1.2.) Eukaryotes are more precisely defined than prokaryotes, so to speak. The degree of flexibility correlates inversely with the firmness of identity.

The firmer the identity gets, the more necessary death becomes. In other words, death is not a destroyer of identity; it is the reason why we can have identity in the first place. I suggest you meditate on this fact for a while. (It literally changed my view on life.)

  • The reason why we are not at peace with the notion of death is that we are still not aware of how challenging it was for nature to invent the technologies necessary for maintaining identity through time.

  • Fear of death is based on the ego illusion, which Buddha rightly framed as the mother of all misrepresentations about nature. This is the story of a war between life and non-life, between biology and physics, not you against the rest of the universe or your genes against other genes.


3. Physics vs Biology

 
[Figure: Physics vs Biology]
 

Physics and biology (with chemistry as the degenerate middle ground) can be thought of as duals of each other, as forces pulling the universe in two opposite directions.

Side Note: Simple design is best done over a short period of time, in a single stroke, with the spirit of a master. Complex design is best done over a long period of time, in small steps, with the spirit of an amateur. That is essentially why physics progresses in a discontinuous manner via single-author papers by non-cooperative genius minds, while biology progresses in a continuous manner via many-author papers by cooperative social minds.


3.1. Entropy, Time and Scale

Note that entropy and time are two sides of the same coin:

  • Time is nothing but motion. Time without any motion is not something that mortals like us can fathom.

  • All motion happens basically due to the initial low-entropy state of the universe and the statistical thermodynamic evolution towards higher-entropy states. (The universe somehow began in a very improbable state, and now we are paying the “price” for it.) In other words, entropy is the force behind all motion. It is what makes time flow. The rest of physics just defines the degrees of freedom inside which entropy can work its magic (i.e. increase the disorder of the configuration space defined by those degrees of freedom), and specifies how the flow of time takes place via least action principles, which allow one to infer the unique time evolution of a particle or a field from the knowledge of its beginning and ending states.

Side Note: It is not a coincidence that, among all physics theories, only thermodynamics could not be formulated in terms of a least action principle. Least action principles give you one-dimensional (path) information that is inaccessible by experimentation. Basically, each experiment we do allows us to peek at different time slices of the universe, and each least action principle we have allows us to view each pair of time slices as the beginning and ending states of a unique and complete causal story. (We cannot probe nature continuously.) Entropy, on the other hand, does not work on a causal basis. (If it did, then it could not be responsible for the flow of time.) It operates in a primordially acausal fashion.
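
For reference, here is the textbook form of the least action principle referred to above. With the beginning and ending states held fixed, demanding that the action be stationary singles out the unique classical trajectory connecting them:

\[
S[q] = \int_{t_1}^{t_2} L\big(q(t), \dot{q}(t), t\big)\, dt,
\qquad
\delta S = 0
\;\Longrightarrow\;
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0,
\]

where the endpoint values \(q(t_1)\) and \(q(t_2)\) are precisely the “beginning and ending states” mentioned in the side note.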

When we flip the direction of time, thermodynamics starts working backwards and the energy landscape turns upside down. Time-flipped biological entities start harnessing order to create disorder, which is exactly what physics does.

The difference between physics and time-flipped biology is that the former operates globally, harnessing the background order that originates from the initial low-entropy state of the universe, while the latter harnesses local patches of order created by itself. (This is why watching time-flipped physics videos is a lot more fun than watching time-flipped biology videos.)

Side Note: There are nano scale examples of biology harnessing order to create disorder. This is allowed by the statistical nature of the second law of thermodynamics which says that entropy increases only on average. Small divergences may occur over short intervals of time. Large divergences too may occur but they require much longer intervals of time.

The heart of the duality between physics and biology lies in this “global vs local” dichotomy, which we will dig into more deeply in the next section.

It is worth reiterating here that entropy breaks symmetries in the configuration space, not in the geometric one. (It may even increase local order in geometric space by creating symmetric arrangements, as in spontaneous crystallisation, which disorders the momentum component of the configuration space via energy release.) Hence, strictly speaking, the “global vs local” dichotomy should not be interpreted purely in spatial terms. What time-flipped biology does is harness local patches of configurational order (i.e. degrees of freedom associated with those locations), not spatial order.

Side Note: Entropy also triggers the breaking of some structural symmetries along the way. According to standard big bang cosmology, as the universe cooled and expanded from its initial hot and dense state, the primordial force split into the four forces (Gravitational, Electromagnetic, Weak Nuclear and Strong Nuclear) that we have today. (Again, as mentioned before, entropy is an odd ball among all physics theories and is not regarded as a force since it does not have an associated field etc.) This de-unification happened through a series of three spontaneous symmetry breakings, each of which took place at a different temperature threshold.

3.2. Entropy and Dynamical Scale Invariance

Imagine a very low-entropy universe that consists of an equal number of zeros and ones which are neatly separated into two groups. (This is a fantasy world with no forces. In other words, the only thing you can randomize is position. So the configuration space just consists of the real space, since there are no other degrees of freedom.) The global uniformity of such a universe would be low, since there would be only a fifty percent probability that any two randomly chosen local patches look like each other. Local uniformity, on the other hand, would be high, since all local patches (except for those centered on the borderline separating the two groups) would contain either a homogeneous set of zeros or a homogeneous set of ones.

Entropy can be seen as a local operator breaking local uniformities in the configuration space. Over time, the total configuration space starts to look the same no matter how much you zoom in or out. In other words, the universe becomes more and more dynamically scale invariant.

Note that entropy does not increase uniformity. It actually does the opposite and decreases uniformity across the board so that the discrepancy between local and global uniformity disappears. Close to heat death (maximum theoretical entropy), no two local patches in the configuration space will look like each other. (They will be random in different ways.)
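
Here is a minimal simulation sketch of the toy universe above. The patch size, window size and iteration count are arbitrary choices of mine; the point is only that a purely local, zero-knowledge shuffling operator drives both local and global uniformity down until the discrepancy between them disappears:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy universe: 512 zeros followed by 512 ones (very low entropy, neatly separated).
universe = np.concatenate([np.zeros(512, dtype=int), np.ones(512, dtype=int)])
PATCH = 16  # size of a "local patch"

def local_uniformity(u):
    # Fraction of patches that are homogeneous (all zeros or all ones).
    patches = u.reshape(-1, PATCH)
    return float(np.mean([len(set(p)) == 1 for p in patches]))

def global_uniformity(u, samples=5000):
    # Probability that two randomly chosen patches look exactly like each other.
    patches = u.reshape(-1, PATCH)
    i = rng.integers(len(patches), size=samples)
    j = rng.integers(len(patches), size=samples)
    return float(np.mean([np.array_equal(patches[a], patches[b]) for a, b in zip(i, j)]))

print("before:", local_uniformity(universe), global_uniformity(universe))  # ~1.0 and ~0.5

# "Entropy" as a local, zero-knowledge operator: repeatedly pick a small
# neighbourhood and scatter whatever happens to be in it.
for _ in range(100_000):
    start = rng.integers(0, len(universe) - 32)
    window = universe[start:start + 32].copy()
    rng.shuffle(window)
    universe[start:start + 32] = window

print("after: ", local_uniformity(universe), global_uniformity(universe))  # both collapse toward 0
```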

Side Note: Due to the statistical nature of the second law of thermodynamics, the universe will keep experiencing fluctuations to the very end. It can get arbitrarily close to heat death but will never actually reach it. Complete heat death would mean the end of physics altogether.

Now a natural question to ask is whether there could have been other ways of achieving scale invariance. The answer is no, and the blocker is an information problem. You cannot have complete knowledge about the global picture without infinite energy at your disposal, and without this knowledge you cannot define a local operator that can achieve scale invariance. For instance, going back to our initial example, if your region of the universe happens to have no zeros, you would not even be able to define an operator that takes zeros into consideration. All you can really do is ask every local patch to scatter everything so that (hopefully) whatever is out there will end up proportionally in every single patch. Of course, this is exactly what entropy itself does. (It is this random, zero-knowledge mechanism which gives thermodynamics its acausal nature.)

Biology, on the other hand, creates low-entropy islands by dumping entropy elsewhere, and thereby works against the trend towards dynamical scale invariance. It is exactly in this sense that biology is anti-entropic. Entropy is not neutralized or cancelled; instead it is deflected through a series of brilliant jiu jitsu strokes so that it defeats its own goal.

Physics fights for dynamical scale invariance by breaking local uniformities in the configuration space, and biology fights against dynamical scale invariance by creating local uniformities in the configuration space. This is the essence of the duality between physics and biology, but there is a slight caveat: physics works on a global scale and rains down on all local uniformities in an indiscriminate manner, while biology begins in some local patches in a discriminate manner and slowly makes its way up to the global scale, conquering physics from the inside out and pushing entropy to the peripheries. (Biology needs to be discriminative since only certain locations are convenient for jumpstarting life, and it needs to learn since - unlike physics - it does not have the privilege of starting globally.)

Let us now scroll all the way to the end of time to see what this duality means for the fate of our universe.


3.3. Ultimate Fate of the Universe

There is no current scientific consensus about the ultimate fate of the universe. Some cosmologists believe in inexhaustible expansion and an eventual heat death; others believe in an unavoidable collapse and a subsequent bounce. Since nobody has any idea how dark energy, dark matter and quantum gravity actually work, everything is basically up for grabs.

Side Note: Dark energy is uniformly distributed and non-interacting. It is posited to be the driving factor behind the acceleration of the uniform expansion of space. Dark matter, on the other hand, is non-uniformly distributed and gravitationally attractive. Together, dark energy and dark matter make up around 95 percent of the total energy content of the universe. Hence some people call junk DNA, which makes up 98 percent of the human genome, the dark sector of DNA. Funnily enough, in a similar fashion, more than 90 percent of the more evolved (white matter) part of the human brain is composed of non-neuron (glial) cells. (Nerve fibers in the white matter, as opposed to those in the gray matter, are myelinated and therefore conduct electricity at a much higher speed.) It seems like the degree of complexity of an evolving system is directly correlated with the degree of dominance of the modulator (e.g. non-neuron cells, non-coding DNA) over the modulated (e.g. neuron cells, coding DNA). Could the prevalence of the dark sector be interpreted as evidence that physics itself is undergoing evolution? (Note that, in all cases, the scientific discovery of the modulator occurred quite late and with a great deal of astonishment. Whenever we see a variation exhibiting substructure, we should immediately suspect that it is modulated by its complement.)

One thing that is conspicuously left out of these discussions is life itself. Everyone basically assumes that entropy will eventually win. After all even supermassive black holes will inevitably evaporate due to Hawking radiation. Who would give a chance to a phenomenon (like life) that is close to non-existent at the grand cosmological scales?

Well, I am actually super optimistic about the future of life. It is hard not to be so after one studies (in complete awe) how far evolution has progressed in just a few billion years. Life is learning at a phenomenal speed and will figure out (before it gets too late) how to do cosmic-scale engineering.

Since no one really knows anything about the dynamics of a cosmic bounce (and how it interacts with thermodynamics), let us finish this long blog post with some fun speculations:

  • The never ending war between physics and biology may be the reason why time still exists and the universe still keeps on managing to collapse on itself while also averting a heat death. Life could have learned how to engineer an early collapse before a heat death or how to prevent a heat death long enough for a collapse. Life could have even learned how to leave a local fine-tuned low-entropy quantum imprint so that it is guaranteed to reemerge after the big bounce.

  • What if life always reaches total control in the sense of Section 1 in each one of the cosmic cycles and becomes indistinguishable from its environment? Could the beginning state of this universe’s physics be the ending state of the previous universe’s biology? In other words, could our entire universe be an extremely advanced life form? Could this be the god described by Pantheists? Was Schopenhauer right in the sense that the most fundamental aspect of reality is its primordial will to live? Is the acausal nature of thermodynamics a form of pure volition?

cloud vs multi-cloud

In the cloud world,

  • software wants to be free. Cloud providers are incentivized to offer all sorts of free goods to drive more data and compute usage, because that is basically how they make money. They are high volume / low margin infrastructural businesses.

  • hardware wants to be virtualized. Just like sequencing centers aggregate, centralize and virtualize sequencers and offer sequencing as a service, cloud providers do the same for PCs and offer data storage and computing as a service. Users do not directly interact with the machines themselves.

In other words, cloud providers commoditize the stack above them via free-ization and the stack below them via virtualization, and thereby increase the percentage of the value they capture in the value chain.

Thinking pictorially we have the following situation:

 
[Figure: Cloud World (Meat Strategy)]

 

Here, the stacks composed of small squares represent commoditized competitive markets with many players, and the monolithic stack represents a monopolistic market.

Thinking of the whole figure as a hamburger, we can say that the cloud world is “pro-meat”.

Notice that all stacks’ incentives are aligned horizontally, in the sense that they all want the entire industry to grow and the bottlenecks (wherever in the value chain they may arise) to be eliminated. (i.e. Think of industry growth as the horizontal expansion of all stacks.) But stacks’ incentives are not necessarily aligned vertically, in the sense that one stack capturing more of the surplus generated by the entire value chain often implies another stack capturing less. (i.e. The dynamics among the stacks are often governed by zero-sum games rather than non-zero-sum games.) Hence each stack wants to democratize (i.e. commoditize) the neighboring stacks that it interacts with. (Read this older post for a deeper look at such stack dynamics.)

Now, a multi-cloud software strategy weakens the cloud layer (middle stack) by commoditizing cloud providers and thereby releases the tension on the hardware layer (bottom stack). Thinking pictorially we have the following situation:

 
[Figure: Multi-Cloud World (Bread Strategy)]

 

This is essentially why IBM (after missing the cloud wave due to short-sightedness) ended up recently buying Red Hat for 34 billion USD:

This acquisition brings together the best-in-class hybrid cloud providers and will enable companies to securely move all business applications to the cloud. Companies today are already using multiple clouds. However, research shows that 80 percent of business workloads have yet to move to the cloud, held back by the proprietary nature of today’s cloud market. This prevents portability of data and applications across multiple clouds, data security in a multi-cloud environment and consistent cloud management.

IBM and Red Hat will be strongly positioned to address this issue and accelerate hybrid multi-cloud adoption. Together, they will help clients create cloud-native business applications faster, drive greater portability and security of data and applications across multiple public and private clouds, all with consistent cloud management. In doing so, they will draw on their shared leadership in key technologies, such as Linux, containers, Kubernetes, multi-cloud management, and cloud management and automation.

- IBM Newsroom - IBM to Acquire Red Hat, Completely Changing the Cloud Landscape

states vs processes

We think of all dynamical situations as consisting of a space of states and a set of laws codifying how these states are woven across time, and refer to the actual manifestation of these laws as processes.

Of course, one can argue about whether it makes sense to split reality into states and processes, but so far it has been very fruitful to do so.


1. Interchangeability

1.1. Simplicity as Interchangeability of States and Processes

In mathematics, structures (i.e. persisting states) tend to be exactly whatever is preserved by transformations (i.e. processes). That is why Category Theory works, and why you can study processes in lieu of states without losing information. (Think of continuous maps vs topological spaces.) State-centric and process-centric perspectives each have their own practical benefits, but they are completely interchangeable in the sense that both Set Theory (the state-centric perspective) and Category Theory (the process-centric perspective) can be taken as the foundation of all of mathematics.

Physics is similar to mathematics. Studying laws is basically the same thing as studying properties. Properties are whatever is preserved by laws, and can also be seen as whatever gives rise to laws. (Think of electric charge vs electrodynamics.) This observation may sound deep, but (as with any deep observation) it is actually tautologous, since we can study only what does not change through time, and only what does not change through time allows us to study time itself. (The study of time is equivalent to the study of laws.)

A couple of side notes:

  • There are no intrinsic (as opposed to extrinsic) properties in physics, since physics is an experimental subject and all experiments involve an interaction. (Even mass is an extrinsic property, manifesting itself only dynamically.) Now here is the question that gets to the heart of the above discussion: if there exist only extrinsic properties and nothing else, then what holds these properties? Nothing! This is basically the essence of Radical Ontic Structural Realism, and exactly why states and processes are interchangeable in physics. There is no scaffolding.

  • You have probably heard about the vast efforts and resources being poured into the validation of certain conjectural particles. Gauge theory tells us that the search for new particles is basically the same thing as the search for new symmetries, which are of course nothing but processes.

  • The Choi–Jamiołkowski isomorphism helps us translate between quantum states and quantum processes. (A small numerical sketch follows below.)
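
As a minimal numerical illustration of that last point, here is a numpy sketch (the bit-flip channel and the test state are arbitrary choices of mine, picked only for concreteness): the entire action of a quantum process can be packed into a single matrix, the Choi matrix, and then recovered from it.

```python
import numpy as np

# A simple qubit process: the bit-flip channel rho -> (1-p) rho + p X rho X.
p = 0.3
X = np.array([[0, 1], [1, 0]], dtype=complex)
channel = lambda rho: (1 - p) * rho + p * X @ rho @ X

# Choi matrix of the channel: J = sum_{ij} |i><j| (tensor) channel(|i><j|).
d = 2
J = np.zeros((d * d, d * d), dtype=complex)
for i in range(d):
    for j in range(d):
        E = np.zeros((d, d), dtype=complex)
        E[i, j] = 1.0
        J += np.kron(E, channel(E))

# Recover the process from the "state": channel(rho) = Tr_1[(rho^T (tensor) I) J].
rho = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)  # an arbitrary density matrix
op = np.kron(rho.T, np.eye(d)) @ J
recovered = op.reshape(d, d, d, d).trace(axis1=0, axis2=2)  # partial trace over the first factor

print(np.allclose(channel(rho), recovered))  # True: the process is fully encoded in a state
```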

Long story short, at the foundational level, states and processes are two sides of the same coin.


1.2. Complexity as Non-Interchangeability of States and Processes

You understand that you are facing complexity exactly when you end up having to study the states themselves along with the processes. In other words, in complex subjects, the interchangeability of state-centric and process-centric perspectives stops making practical sense. (That is why stating a problem in the right manner matters a lot in complex subjects. The right statement is half the solution.)

For instance, in biology, bioinformatics studies states and computational biology studies processes. (Beware that the nomenclature in the biology literature has not stabilized yet.) Similarly, in computer science, the study of databases (i.e. states) and the study of programs (i.e. processes) are completely different subjects. (You can view programs themselves as databases and study how to generate new programs out of other programs. But then you are simply operating one dimension higher. The philosophy does not change.)

There is actually a deep relation between biology and computer science (similar to the one between physics and mathematics) which was discussed in an older blog post.


2. Persistence

The search for signs of persistence can be seen as the fundamental goal of science. There are two extreme views in metaphysics on this subject:

  • Heraclitus says that the only thing that persists is change. (i.e. Time is real, space is not.)

  • Parmenides says that change is illusory and that there is just one absolute static unity. (i.e. Space is real, time is not.)

The duality of these points of view was most eloquently captured by the physicist John Wheeler, who said, "Explain time? Not without explaining existence. Explain existence? Not without explaining time."

Persistences are very important because they generate other persistences. In other words, they are the building blocks of our reality. For instance, states in biology are complex simply because biology strives to resist change by building persistence upon persistence.


2.1. Invariances as State-Persistences

From a state perspective, the basic building blocks are invariances, namely whatever does not change across processes.

The study of change involves an initial stage where we give names to substates. Then we observe how these substates change with respect to time. If a substate changes to the point where it no longer fits the definition of being A, we say that the substate (i.e. object) A failed to survive. In this sense, the study of survival is a subset of the study of change. The only reason they are not the same thing is that our definitions themselves are often imprecise. (From one moment to the next, we say that the river has survived although its constituents have changed, etc.)

Of course, the ambiguity here is on purpose. Otherwise, without any definiens, you would not have an academic field to speak of. In physics, for instance, the definitions are extremely precise, and the study of survival and the study of change completely overlap. In a complex subject like biology, states are so rich that the definitions have to be ambiguous. (You can only simulate biological states in a formal language, not state a particular biological state. Hence computer science is a better fit for biology than mathematics.)


2.2. Cycles as Process-Persistences

Processes become state-like when they enter into cyclic behavior. That is why recurrence is so prevalent in science, especially in biology.

As an anticipatory affair, biology prefers regularities and predictability. Cycles are very reliable in this sense: they can be built on top of each other, and harnessed to record information about the past and to carry information to the future. (Even behaviorally we exploit this fact: it is easier to construct new habits by attaching them to old habits.) Life, in its essence, is just a perpetuation of a network of interacting ecological and chemical cycles, all of which can be traced back to the grand astronomical cycles.

Prior studies have reported that 15% of expressed genes show a circadian expression pattern in association with a specific function. A series of experimental and computational studies of gene expression in various murine tissues has led us to a different conclusion. By applying a new analysis strategy and a number of alternative algorithms, we identify baseline oscillation in almost 100% of all genes. While the phase and amplitude of oscillation vary between different tissues, circadian oscillation remains a fundamental property of every gene. Reanalysis of previously published data also reveals a greater number of oscillating genes than was previously reported. This suggests that circadian oscillation is a universal property of all mammalian genes, although phase and amplitude of oscillation are tissue-specific and remain associated with a gene’s function. (Source)

A cyclic process traces out what is called an orbital, which is like an invariance smeared across time. An invariance is a substate preserved by a process, namely a portion of a state that is mapped identically to itself. An orbital too is mapped to itself by the cyclic process, but not identically so. (Each orbital point moves forward in time to another orbital point and eventually ends up back at its initial position.) Hence orbitals and process-persistence can be viewed as generalizations of invariances and state-persistence, respectively.
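
Here is a toy sketch in Python (with a hypothetical four-state process, purely for illustration) of that distinction: a fixed point is an invariance, while a cycle traces out an orbital that returns to itself only after several steps.

    def step(state):
        # Hypothetical process on the states {A, B, C, D}:
        # A is a fixed point (an invariance); B -> C -> D -> B is a cycle (an orbital).
        return {"A": "A", "B": "C", "C": "D", "D": "B"}[state]

    def orbit(state, process, max_steps=100):
        # Follow the process until the starting state recurs (or we give up).
        seen = [state]
        current = process(state)
        while current != state and len(seen) < max_steps:
            seen.append(current)
            current = process(current)
        return seen

    print(orbit("A", step))  # ['A']            -- preserved identically: an invariance
    print(orbit("B", step))  # ['B', 'C', 'D']  -- preserved only as a whole: an orbital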


3. Information

In practice, we have perfect knowledge of neither the states nor the processes. Since we can not move both feet at the same time, in our quest to understand nature, we assume that we have perfect knowledge of either the states or the processes.

  • Assumption: Perfect knowledge of all the actual processes but imperfect knowledge of the state
    Goal: Dissect the state into explainable and unexplainable parts
    Expectation: State is expected to be partially unexplainable due to experimental constraints on measuring states.

  • Assumption: Perfect knowledge of a state but no knowledge of the actual processes
    Goal: Find the actual (minimal) process that generated the state from the library of all possible processes.
    Expectation: State is expected to be completely explainable due to perfect knowledge about the state and the unbounded freedom in finding the generating process.

The reason I highlighted the expectations here is that it is quite interesting how our psychological stance toward the unexplainable (which is almost always - in our typical dismissive tone - referred to as noise) differs in each case.

  • In the presence of perfect knowledge about the processes, we interpret the noisy parts of states as absence of information.

  • In the absence of perfect knowledge about the processes, we interpret the noisy parts of states as presence of information.

The flip side of the above statements is that, in our quest to understand nature, we use the word information in two opposite senses.

  • Information is what is explainable.

  • Information is what is inexplainable.


3.1. Information as the Explainable

In this case, noise is the ideal left-over product after everything else is explained away, and is considered normal and expected. (We even gave the name “normal” to the most commonly encountered noise distribution.)

This point of view is statistical and is best exemplified by the field of statistical mechanics, where massive numbers of micro-degrees of freedom can be safely ignored due to their random nature and canned into highly regular noise distributions.
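
As a minimal numerical sketch of this stance in Python (assuming NumPy; the sizes are arbitrary), summing a large number of unmodeled, deliberately non-Gaussian micro-degrees of freedom already produces the regular "normal" left-over:

    import numpy as np

    rng = np.random.default_rng(0)
    n_micro, n_samples = 5_000, 2_000  # arbitrary sizes

    # Micro-details deliberately non-Gaussian: fair coin flips in {-1, +1}.
    micro = rng.choice([-1.0, 1.0], size=(n_samples, n_micro))

    # Macroscopic left-over once the micro-degrees of freedom are summed out.
    noise = micro.sum(axis=1) / np.sqrt(n_micro)

    print(noise.mean(), noise.std())  # close to 0.0 and 1.0, i.e. roughly N(0, 1)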


3.2. Information as the Inexplainable

In this case, noise is the only thing that can not be compressed further or explained away. It is surprising and unnerving. In computer speak, one would say “It is not a bug, it is a feature.”

This point of view is algorithmic and is best exemplified by the field of algorithmic complexity, which looks at the notion of complexity from a process-centric perspective.
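
And a rough sketch of this stance in Python, using zlib compression as a crude stand-in for algorithmic (Kolmogorov) complexity: the string generated by a tiny rule gets explained away almost entirely, while the random string is its own shortest description - its noise refuses to compress:

    import os
    import zlib

    structured = b"AB" * 50_000       # generated by a tiny rule
    random_ish = os.urandom(100_000)  # no rule shorter than the data itself

    for label, data in [("structured", structured), ("random", random_ish)]:
        ratio = len(zlib.compress(data)) / len(data)
        print(label, round(ratio, 3))  # structured compresses massively; random barely at all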

holy vs profane

I think they destroyed the Latin language as well, the Catholic Church. One comment again from theology: when they translated the texts from Latin or from Vulgar method into vernaculars. Because then, when you do, you try to market our religion as something useful, but before it was something holy, this whole thing.

You notice that the reason the Pope presented, he said that it’s to increase the number of Catholics. In fact, the Church contracted at the time, when compared to Islam, where you have one-and-a-half-billion Muslims praying in a language they don’t understand so visibly.

It’s exactly the same thing, is that it’s separating the holy and profane. Don’t translate to vernacular the beautiful Latin things. Likewise, do not try to make poetry or literature or history — do not make it practical.

Just make the people study for their own sake, just like you go to church. It’s not for anything practical. You don’t go to church because you’re going to meet an employer. You go to church to go to church. Likewise, we have to separate these two.

- Bryan Caplan and Nassim Nicholas Taleb on What’s Missing in Education

The reason religion is not a subset of philosophy is that it is primarily concerned with appreciating rather than understanding. A core set of beliefs and attitudes is agreed upon, preserved, and supplemented with rituals.

Buddhism's emphasis on experience over text is spot on in the sense that the subject matter of religion is fundamentally impossible to articulate. (Meditation properly done is experimental metaphysics.) The transcendental can not and should not be put into words, which are profane mortal creations and may give rise to a false feeling of understanding. (It is not a coincidence that songs in languages we do not understand move us more deeply.)

Religious thinking calmly ties all causal chains to a single source. Secular thinking democratizes self-referentiality and then hastily tries to loop each causal chain onto itself.* (That is why secular minds are always so busy.) But once you remove the monolithically centralized node of God, there is no absolute good or evil any more. (Yes, you are right, this is a reference to Nietzsche.) In other words, you are completely fucked. You need to come up with profane reasons to do anything, including the act of going to church, as Taleb exemplifies above. Moreover, those reasons will inevitably be of the type that can not stand on their own. The insecurity caused by such open-ended trails of thinking will be left for some other time to be dealt with. Like technical debt, this insecurity will grow until it breaks you down and you find yourself either talking to some stranger claiming mastery over the human psyche, or bending into arcane positions on a sweaty yoga mat, or browsing self-help books in one of those stupid bookstores with a coffee shop inside. All with the hope that you will be able to loop those God damn causal chains back onto themselves.

* The democratization operator has been a defining feature of modernism. For instance, as I mentioned in a previous post, democracy, capitalism, and social media respectively democratized power, money, and fame.

bursting vs building

There are two ways to think about sales, and this applies to everything from business to politics to teaching: You can sell something in a way that captures people’s attention, which is very effective in the short run but wears off, as attention spans and dopamine bursts expire. Or you can sell in a way that captures people’s trust, which is harder and slower than capturing attention but tends to last longer.

- Repeating Themes (Morgan Housel)

Come on... Who needs trust when you have blockchain? We live in an age where relationships start with a swipe to the right and companies either fail fast or scale quickly. Don't be so old-fashioned!

Information wants to be not only free, but also bursty! Why deliver slowly when you can deliver fast? Slow processes suck, especially since we have such short lifespans!

Well... I am sorry but I am slow. I like enjoying the time I have here rather than rushing through some potentially longer lifespan.

  • People who spend all their effort on creating a great first impression tend to disappoint horribly afterwards.

  • Companies that scale very fast scare the shit out of me, as their falls tend to be just as fast.

  • Skills that take longer to learn (like negotiation, as opposed to a technical skill that can be learned by reading a single book) tend to stay relevant longer, too.

  • Men who can not enjoy the truly lasting qualities in a woman tend to be tasteless and impoverished.

resilience vs sensitivity

Justice is embedded in your genes. The further you fall, the more potential energy you can mobilize to climb back.

In 2010, a team of researchers launched a research study, called the Strong African American Families project, or SAAF, in an impoverished rural belt in Georgia. It is a startlingly bleak place overrun by delinquency, alcoholism, violence, mental illness, and drug use. Abandoned clapboard houses with broken windows dot the landscape; crime abounds; vacant parking lots are strewn with hypodermic needles. Half the adults lack a high school education, and nearly half the families have single mothers.

Six hundred African-American families with early-adolescent children were recruited for the study. The families were randomly assigned to two groups. In one group, the children and their parents received seven weeks of intensive education, counseling, emotional support, and structured social interventions focused on preventing alcoholism, binge behaviors, violence, impulsiveness, and drug use. In the control group, the families received minimal interventions. Children in the intervention group and in the control group had the 5HTTLPR gene sequenced.

The first result of this randomized trial was predictable from prior studies: in the control group, children with the short variant - i.e. the "high risk" form of the gene - were twice as likely to veer toward high-risk behaviors, including binge drinking, drug use, and sexual promiscuity as adolescents, confirming earlier studies that had suggested an increased risk within this genetic subgroup. The second result was more provocative: These very children were also the most likely to respond to the social interventions. In the intervention group, children with the high-risk allele were most strongly and rapidly "normalized" - i.e. the most drastically affected subjects were also the best responders. In a parallel study, orphaned infants with the short variant of 5HTTLPR appeared more impulsive and socially disturbed than their long-variant counterparts at baseline - but were also the most likely to benefit from placement in a more nurturing foster-care environment.

In both cases, it seems, the short variant encodes a hyperactive "stress sensor" for psychic susceptibility, but also a sensor most likely to respond to an intervention that targets the susceptibility. The most brittle or fragile forms of psyche are the most likely to be distorted by trauma-inducing environments—but are also the most likely to be restored by targeted interventions. It is as if resilience itself has a genetic core: Some humans are born resilient (but are less responsive to interventions), while others are born sensitive (but more likely to respond to changes in their environments).

- The Gene - Siddhartha Mukherjee (Pages 459-460)

Injustice has environmental origins. Under equal conditions, both sensitive and resilient types should on average experience the same elevation.