obsession with number two

I am obsessed with the number two. I do not remember exactly when this obsession started, but it has certainly gotten worse over the last two years.

Although it can cause considerable stress in certain situations, it can be quite beneficial as well:

  • By forcing me to trim things down to two, it puts me on a healthy information diet.

  • By forcing me to pair individual things into sets of two, it helps me uncover analogies and dualities that I would have otherwise missed.

test of time

I just went through more than a hundred papers that I had saved to read later almost 8 years ago. Now 90 percent of them look not worth reading at all.

Standing the test of time is a very efficient filter. You do not have to exert any effort at all. Just put the issue on hold for some time and it resolves itself.

Isn't that just magical?

Now that I was in the mood, I decided to go through my actual physical library as well. Half an hour later I had discarded around 40 books while repeatedly whispering to myself, "Why did I even read this shit?"

Letting society figure out the best of what is out there is also a very powerful tool. Although the societal filter may not exactly match your tastes and interests, due to its greater longevity it is much more efficient than yours.

How many 1,000-year-old books do we still find worth reading today? 10? 20? I have no idea, but it is a very small number compared to the total number of books ever written.

Today, around 2-3 million books are published worldwide every year. 99.99 percent of them are crap and contain absolutely no universal truths about humankind or the physical universe. Slowly or not, they will all die off. (Even scientific facts have an average lifetime of 45 years.)

deliberate vagueness

Deliberate vagueness can have three major strategic payoffs:

Greater Longevity: If you stay abstract and employ plenty of symbolism, your text has a higher chance of surviving many generations. Religious leaders do this all the time.

Wider Appeal: If you use proxies rather than saying exactly what you want to say, your text will admit a wider set of interpretations and thereby enjoy a greater likelihood of resonating with readers. Poets do this all the time.

Less Accountability: By saying lots of things but actually saying nothing you can evade accountability altogether. Politicians do this all the time.

ethics and resolution

Ethics is a matter of resolution, and inequality in the distribution of this resolution is the mother of all other inequalities.

Those with low resolution regard a wider set of behaviours as ethical. Power and the like lower one's resolution.

Those with low resolution constantly hurt others, yet are never hurt themselves. Those with high resolution never hurt others, yet are constantly hurt themselves.

Merely having good principles does not make a person good, because a person with low resolution cannot notice when he contradicts his own principles. That is why good people are usually born good. A bad person becoming good later in life is a very rare sight. Resolution requires sensitivity, and sensitivity is either learned at a very early age or is genetic.

politics and evil

Politics is messed up, not because we are a messed up species, but because we have created messed up incentive structures. I sincerely believe that no one is born evil, it is the systems that make people behave in evil ways.

When the economy is doing well and increased prosperity is felt by everyone, it is very challenging for young and ambitious leaders to enter politics. Even Adolf Hitler would not have made it if it were not for the Great Depression.

Opposition parties are incentivised to look for problems and causes around which to mobilise people. Good ones make up imaginary, outside enemies from scratch. Bad ones hit the already existing, domestic societal fault lines by playing with religious, ethnic and cultural heterogeneities. Ugly ones create entirely new, domestic social fault lines through provocative activities.

empathy and truth

Empathy is a collective coping mechanism invented by the poor and powerless. Hence feeling powerful is inversely correlated with being able to empathise.

But empathy does not only bind us together; it also acts as a gateway to truth. Hence powerful people slowly lose touch with reality.


Being able to imagine what an electron does requires you to literally put yourself in its shoes. It also requires you to listen to what others felt when they put themselves in its shoes.

If excelling at your profession requires getting closer to truth, stay away from powerful people and their ego-boosting contexts. Today's best artists, scientists, entrepreneurs and investors are not writing books or giving TED Talks. They remain outside the mainstream and are mostly unheard of. In fact, once they gain attention, praise and prizes, they almost immediately lose their cutting-edge status.


It is no coincidence that empathy is a trait shared by all child prodigies:

Most intriguing of all, perhaps, is the consistent finding that prodigies share an outsize empathy: a finely tuned sensitivity to the feelings of others as well as the overwhelming desire to do good. One prodigy’s mom reflects that her son “just felt more from the time he was born. He just had so much emotion and feeling inside of him.” At age 2, another prodigy wept uncontrollably when he heard his father playing Rossini’s Stabat Mater Dolorosa. He later stated that he’d felt connected to each note of music he heard and “knew that music was an expression of his soul.”
- The Link Between Complicated Pregnancies and Child Prodigies (Michael Jawer)

It is also no coincidence that Dostoevsky felt greater mental freedom every time he gambled away his entire savings.

ethics as linearization

My revelation came a short time ago, during a philanthropic adventure in the education world.

I saw a lot of money and resources being scattered and wasted. Just like the broken education system itself, education foundations were trying to salvage as many kids as possible, resulting in undistinguished projects with low benefit densities.

Human potential is not evenly distributed. Every human being is important of course, but the people who can and will carry us forward constitute a very small minority. Artificially inserting a veil of ignorance into our decision-making mechanisms in the name of justice breeds nothing but mediocrity.

Where freedom is real, equality is the passion of the masses. Where equality is real, freedom is the passion of a small minority.

Eric Hoffer - The True Believer (Page 33)

Our world is filled with power law distributions and will continue to get more non-linear as it becomes faster and more complicated.

Our ethics is also non-linear, but in exactly the opposite way. We feel contempt for the advantaged. We hate handing out resources to the innately more capable. We are inclined to help the weak, cure the damaged, assist the incapable.

Our ethics is a never-ending struggle to linearise the world. It is as if we are trying to nullify our own chances of being left behind wounded.

Of course, this type of individualistic thinking makes absolutely no sense at the species level, because no trait of any social value is evenly distributed. Very few kids are born to be leaders, entrepreneurs, academicians, researchers etc. We need to find these kids as early as possible and concentrate all our resources on them. Scattering our resources is wasteful. We cannot treat everyone as a potential Einstein.

Why are we so delusional? Because we lost trust in our systems altogether. Filtering mechanisms got abused by the incumbent power holders, hopelessness slowly settled in and eventually erupted into a moral upheaval, resulting in an extremely suboptimal equilibrium.

Every god-damn complex-enough system eventually turns elitist. There are elite bacteria, companies, soccer players, singers, viruses, race horses, artists, ants, economies, minerals... If non-linearity is built into the very fabric of the systems around us, we need to learn to behave in a non-linear manner as well.

Justness and nonlinearity can co-exist. There is a whole spectrum between being a Machiavellian monster and naively rejecting the simple tautological statement that exceptionality is exceptional. We need to learn to trust ourselves again so that we can probe the grey zones where the better equilibria lie.


That kid in the back of the classroom is asking irritating questions to the teacher because he is sick of the non-rigorous nature of the material being hammered into his brain. We set up superficial frameworks to protect the mediocre minds from bottomless depths. While doing so, we lost those who are actually capable of navigating such depths.

That kid near the window is staring outside and thinking about the massively multiplayer strategy game he played last night because he is sick of the exceptionally boring teacher. He is learning by doing at a much faster pace inside realistic simulations and his gaze is meant to defend himself against an intrusive mind-fuck.

Education foundations need to save these academic failures who are actually our best hopes.

fundamentality as nonlinearity

In mathematics, the fundamental things are obvious. They are the axioms and the definitions. You play with them and the entire edifice changes. A single additional condition in your definition can cause a chain reaction resulting in a tremendous number of revisions in proofs that are dependent on your definition.

What is fundamental in product design is not that obvious. Features like Facebook's feed and Tinder's swiping unleashed an immense creative activity resulting in thousands of new analogical startups. Sometimes small UX changes like Snapchat's ephemerality can cause drastic changes in behaviour. 

In essence, what is fundamental can only be recognised when you nudge it. In other words, fundamentality is a perturbative notion: the greater the nonlinearity, the greater the fundamentality.

This interpretation works even in areas outside of mathematics, where there is no observable derivational depth. Large nonlinearities may be manifestations of aggregations of many small nonlinearities (as in mathematics and physics) or single "atomic" instances (as in social sciences where the human mind can short-circuit the observable causality diagrams).

remarks on category theory

Here are some remarks that I have noted down while studying basic category theory under the supervision of George Janelidze. Some are quite technical, but one way or another they all have philosophical significance.

I do not claim any originality. In fact, most of the observations belong either to folklore or to Prof. Janelidze. 


Both Set Theory and Category Theory can provide foundations for all of mathematics as currently practised. Hence, from a logical point of view, there is not much difference between the two alternatives. However, from a practical point of view, there is a huge difference. Each approach leads to different insights. Although it may be possible to derive a statement p in each framework, it may not be humanly possible to see p in one of the frameworks. For instance, monadicity of the power set functor could not be recognised before the advent of category theory, because consideration of the power set of the power set of the power set of the power set of a set was too unnatural from a set-theoretical point of view.

How hard is it to generate entirely novel and practically useful mathematical structures? To what extent does the development of mathematics rest on the recycling of old ideas? For example, the vector bundle was thought to be a new structure. Then Category Theorists came onto the scene and demonstrated that a vector bundle over a topological space X is just a vector space object in the slice category (Top↓X).

In Category Theory, all the important concepts can be stated in terms of each other. (Existence of an initial object can be seen as the existence of a left-adjoint. Existence of a left-adjoint can be seen as the existence of a family of initial objects. Existence of a colimit can be seen as the existence of a left-Kan extension. Existence of a left-Kan extension can be seen as the existence of a family of colimits. So on...) Is this a manifestation of the poverty of human imagination? Everything we do seems to be variations on the same theme.

It may be possible to reformulate a structure in a philosophically less troublesome manner. For instance, there are several equivalent ways of presenting adjunction data. The equational characterization circumvents the controversies involving the use of existential quantifiers over huge collections. How many of our philosophical problems have such syntactic origins? How many would disappear if only we knew the right way of reformulating them? (An example from physics: the Lagrangian reformulation of classical mechanics introduces teleological aspects that render it philosophically more problematic than the usual formalism.)
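For reference, the equational presentation alluded to here is standard: an adjunction can be given by two functors together with a unit and a counit, subject to two equations (the triangle identities), with no existential quantifiers over large collections.

```latex
% Equational data of an adjunction F ⊣ G (standard formulation):
%   functors  F : \mathcal{C} \to \mathcal{D}  and  G : \mathcal{D} \to \mathcal{C},
%   unit      \eta : 1_{\mathcal{C}} \Rightarrow GF,
%   counit    \varepsilon : FG \Rightarrow 1_{\mathcal{D}},
% subject to the triangle identities:
\[
(\varepsilon F) \circ (F \eta) = 1_{F},
\qquad
(G \varepsilon) \circ (\eta G) = 1_{G}.
\]
```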

Introduction of a property in a specific setting may have to rely on the same property being true in a greater setting. For instance, defining associativity of a tensor product ⊗ in a specific small category ε requires the use of associativity of the tensor product × in the category of all small categories. More specifically, associativity of ⊗ requires the existence of a natural isomorphism α, called the associator, with components α_(A,B,C) between (A⊗B)⊗C and A⊗(B⊗C). Here A,B,C are objects in ε, and α is a natural transformation from the composite functor ⊗∘(1×⊗)∘θ to the composite functor ⊗∘(⊗×1), where θ is the (ε,ε,ε) component of the associator of ×.

Category Theory can guide us to the "right" way of defining certain objects. Here is an example from topology... Andrei Nikolayevich Tychonoff introduced a new product topology in his proof of the assertion that arbitrary products of compact topological spaces are again compact. The first reactions to the result were mixed. People had the impression that Tychonoff had chosen the product topology in a way that would make his theorem work. Therefore the proof had an air of arbitrariness to it. Years later, however, Category Theorists showed that Tychonoff had indeed imposed the "right" topology on the set-theoretical product. He was now truly vindicated.

Category Theory can uncover the systematic biases of human mathematicians. When teaching arithmetic to kids, we first introduce sums. On the other hand, when teaching category theory to university students, we first introduce products. (Arithmetic sum of numbers 2 and 3 is the size of the coproduct (i.e. the disjoint union) of the sets {1,2} and {1,2,3}.) We do this because modern mathematics uses products more often than coproducts. Also, the actual constructions of coproducts tend to be more complicated and category-specific than those of products. The discrepancy arises from the choices made by human mathematicians: Having found it easier to work with products, they defined algebraic structures in a way that is more friendly to product formations. (This highlights one of the reasons why discovering a duality between two well-known categories is so important: It allows one to compute a colimit by passing to the dual category and computing the corresponding more tractable limit.)

Yoneda's embedding reveals one of the most important insights of Category Theory: One can safely replace an object with the network of its relationships. This is the mathematical analogue of what sociologist George Herbert Mead, one of the founders of Symbolic Interactionism, once wrote: "The individual mind can exist only in relation to other minds with shared meanings." This view is actually prevalent in Eastern cultures: "Many have noted that the 'meaning' of an individual in Japan is not intrinsic, implanted in a single person, as in the West. There is no unique soul or substance. The meaning is in relation to another. We can see this in the very word for 'human being'. It is composed of two Chinese characters, one meaning 'human' (nin) and the other meaning 'between' (gen). One way of interpreting this is that a human being is by definition a relationship, not a self-sufficient atom. Thus the very idea of the separate, autonomous 'person', the basic premise of Western thought and Western individualism, is missing in Japan." (Macfarlane - Japan Through the Looking Glass, p.76-77) Similarly, Henri Poincaré claimed the following: "The aim of science is not things themselves, as the dogmatists in their simplicity imagine, but the relations among things; outside these relations there is no reality knowable."

As opposed to Set Theorists, a Category Theorist does not define his structures directly. Instead he writes down a certain behaviour. This behaviour is specific enough that any two objects exhibiting it have to be isomorphic to each other. The resulting lack of uniqueness is not a big loss for the Category Theorist, since he does not like to distinguish isomorphic objects, which interact in exactly the same way with all other objects.

In Set Theory, one makes a canonical choice for all direct products right from the start: the ordered pair (a,b) is defined as the unordered pair {{a,b},{a}}, and A×B is defined as the set of all ordered pairs (a,b) such that a∈A and b∈B. But is there any good reason for choosing {{a,b},{a}} over the alternative {{a,b},{b}}? Instead of making such an arbitrary choice, is it not better to make no choice at all? A Category Theorist defines the product as an isomorphism class of sets satisfying a certain universal property. He makes a choice from this class only when he is forced to manipulate A×B directly.
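The universal property in question is the usual one: a product of A and B is any object P equipped with projections through which every pair of maps into A and B factors uniquely.

```latex
% P together with p_1 : P -> A and p_2 : P -> B is a product of A and B if:
\[
\forall\, f : X \to A,\ g : X \to B\quad
\exists!\ \langle f, g \rangle : X \to P
\quad\text{such that}\quad
p_1 \circ \langle f, g \rangle = f,\qquad
p_2 \circ \langle f, g \rangle = g.
\]
% Any two objects with this property are isomorphic via a unique
% isomorphism commuting with the projections, which is why no
% canonical choice of P is ever needed.
```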

Although Category Theorists favour "isomorphism" over "equality", this conceptual hierarchy is not really genuine. One can relax the equational constraints, but one can never obliterate the presence of equalities. For instance, in the case of bicategories, equalities creep back into the picture in the form of coherence axioms.

The proposition that every surjective function has a right inverse is equivalent to the Axiom of Choice (AC). This categorical characterization of AC lends itself to an easy generalization: we say that AC holds in an arbitrary category ε if every epimorphism in ε has a right inverse. (Note that the epimorphisms in the category Set are the surjective functions.) AC is false in many categories. Consider, for instance, the category of Groups. Here all epimorphisms are surjective homomorphisms. Hence, given an epimorphism f, one can invoke the AC of Set to get a right inverse to f. But this will not help much, because the set-theoretical right inverse may not be a group homomorphism. (Consider the homomorphism from Z to the factor group Z/2Z reducing integers modulo 2. It does not have a right inverse, because the only homomorphism from Z/2Z to Z is the zero homomorphism.) The categorical characterization pinpoints the source of AC's philosophical content: as objects assume greater structure, AC is less likely to hold. (Recall that sets have no structure at all.)
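The parenthetical example can be verified in one line: any group homomorphism s from Z/2Z to Z must kill the generator, so it cannot be a section of the quotient map.

```latex
% For any group homomorphism s : Z/2Z -> Z:
\[
2\, s(\bar{1}) \;=\; s(\bar{1}) + s(\bar{1}) \;=\; s(\bar{1} + \bar{1})
\;=\; s(\bar{0}) \;=\; 0
\quad\Longrightarrow\quad s(\bar{1}) = 0,
\]
% so s is the zero homomorphism, and q \circ s is not the identity
% on Z/2Z; i.e. the quotient map q : Z -> Z/2Z has no right inverse.
```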

The power of Topos Theory does not lie in its ability to replicate Set Theory. It lies in its ability to place Set Theory in the greater context of all mathematical theories. It allows us to see how exactly sets differ from other mathematical structures. It enables us to investigate intermediary structures and see exactly which properties lead to set-like, space-like or algebra-like behaviours.

The existence of exponentials in Cat implies that functors know about natural transformations. In other words, higher-dimensional information is encoded inside the initial two-dimensional data.

The notion of associativity can be seen as a special case of a monoid homomorphism. Let C be a category with products. If products are specified beforehand, we can view C as a monoid where the multiplication operator is just the product. (We assume that A×1 is chosen as A for each object A of C.) We can define a set-theoretical map from this monoid to the monoid of endomorphisms of C by sending each object A to the functor A×(-). This map is a monoid homomorphism if and only if the product operator is associative.

As opposed to Set Theory, in Category Theory topological groups are considered a special case of groups rather than a generalisation. (Topological groups are simply the group objects internal to Top.)

The category of internal categories in modules is equivalent to the category of complexes. This equivalence takes natural transformations to chain homotopies. (Previously, chain homotopies could only be motivated via geometry. Now we know that they also spring from purely categorical considerations.) So, in a sense, abelian homological algebra is just category theory internal to modules. Hence, non-abelian homological algebra should just be category theory! (In reality, this becomes too vast a generalisation. For instance, instead of considering category theory internal to Set, Ronald Brown considers category theory internal to Grp, which lies in between Mod and Set.)

Say your indexing set is I. For each i∈I, you make a choice of a set A_{i}. In other words, this family is just a function from I to the category Set. Viewing I as a category itself, the I-indexed families organise themselves into a functor category Fun(I,Set). Note that to define I-indexed families, one is naturally led to introduce the category Set, whose objects constitute a collection that is larger than a set. (We call such collections classes.) In other words, you are forced to venture beyond the most usual set theory even to define something as basic as families. Moreover, there is an arbitrariness in the definition of families. Should one require the A_{i} to be disjoint? The answer to this question should not matter. But from a set-theoretical point of view it does. If one defines an I-indexed family as a functor F from I to Set, then the disjointness assumption is not made. However, if one defines it as a function f from some set A to I, then the assumption is forced, since the preimages of distinct elements i∈I under f are disjoint. Note that, from a categorical point of view, these two approaches are actually the same, since the slice category Set/I is equivalent to Fun(I,Set). Notice how Category Theory is an expert at resolving arbitrariness. Before the notion of categorical equivalence, people knew instinctively that the above two approaches were the same, but they had no mathematical means of articulating this thought. (Another well-known similarity that cannot be stated without Category Theory is "Linear maps are like matrices.")
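The equivalence mentioned at the end can be spelled out concretely: a functor is sent to the projection out of the disjoint union of its values, and a map over I is sent to its fibres.

```latex
\[
\mathrm{Fun}(I, \mathbf{Set}) \;\simeq\; \mathbf{Set}/I
\]
% One direction: a functor F goes to the projection out of the
% disjoint union of its values,
\[
F \;\longmapsto\; \Big(\, \coprod_{i \in I} F(i) \xrightarrow{\ \pi\ } I \,\Big),
\qquad \pi(x) = i \ \text{ for } x \in F(i),
\]
% and the other: a function f : A -> I goes to its fibres,
\[
(f : A \to I) \;\longmapsto\; \big(\, i \mapsto f^{-1}(i) \,\big).
\]
```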

forecasting the past

Explaining what happened in the past is as hard as predicting what will happen in the future. Why? Because to predict what will happen in the future you first need to build a model of how things work and to build a model you only have past data to work with.

In fact, it is common practice to set aside a certain portion of the past data during the modelling process and use it to quickly check whether the constructed model has any predictive power. (This eliminates the need to wait for the future to unfold.)
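A minimal sketch of this holdout idea, using entirely made-up data: fit a linear trend to the first 80 percent of a series, then measure the error on the reserved tail.

```python
# Toy illustration of holding out part of the "past" to test a model.
# The data and the 80/20 split below are hypothetical choices.

def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# A synthetic "past": a linear trend with a small alternating wiggle.
history = [2.0 * t + 1.0 + (0.1 if t % 2 else -0.1) for t in range(10)]

# Reserve the last 20 percent of the past as a holdout set.
split = int(len(history) * 0.8)
train_t, train_y = list(range(split)), history[:split]
test_t, test_y = list(range(split, len(history))), history[split:]

# Build the model only from the earlier portion...
a, b = fit_line(train_t, train_y)

# ...and check its predictive power on the held-out tail.
mae = sum(abs((a * t + b) - y) for t, y in zip(test_t, test_y)) / len(test_t)
print(round(a, 2), round(b, 2), round(mae, 2))
```

A small holdout error suggests the model captured the trend rather than memorising the data; the same check would fail loudly for a series with no stable structure.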

Once you have a good model of the past, you can make predictions about the future.

Those predictions may require you to plug in certain initial conditions which may lie far back in time. In that case, you will need to run your model backwards to predict what may have happened before the beginning of your data set.

Make no mistake. Building a model from past data is a tough business. Each of those past moments was once an amorphous "Now". How they unfolded into each other was a total mystery back then, and it still is.

Ignorance is time-symmetric. Only "Now" is certain. The rest is a matter of speculation.