Popular (Neo-)Modernism on Blue Labyrinths

Matt Bluemink from bluelabyrinths.com asked me if I wanted to write something for his online magazine. You can find the result here.

I tried to summarize and delineate some of my core ideas on changes in culture.

Many thanks to Matt for contacting me and for publishing it.

On the w()hole nature of nature

Nature can be used in two senses: first as a term opposed to something else (e.g. natural vs. artificial), second as a description of what is. I think the first use is highly problematic and the second comprises everything. I don’t want to explain my problems with the first use here, but will use the second one. The second one – as long as it is not examined more closely – has no deontic level, viz. it doesn’t tell you what should be; it is simply a term about what is. Nature is therefore a description of the whole. As the heading announces, there is a split in this second meaning: on the one hand, nature in the second sense means the content of everything that is; on the other hand, it means the way that everything is. Therefore I can give a more precise sense here: I’m concerned with the question of the way everything is and how it is given to us.

Determinism

But how whole is the whole? Consider a determinist position: the whole is something that is governed by laws (the laws of nature). There are things, maybe there are forces or relations, and there are laws that describe how things change over time. In this determinist worldview you can have your Laplace demon: if you have enough information and all the laws, you can deduce every future and past event. This whole, this nature, is complete and therefore contains the possibility of a complete understanding (even if no human is ever able to achieve it, that doesn’t exclude the possibility; a supercomputer, for example, could contain all this knowledge and therefore be the demon).
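To make this picture concrete, here is a minimal sketch in code (a toy model of my own, not taken from any of the authors discussed): if the ‘law’ is a fixed and invertible update rule, then complete knowledge of the present state suffices to deduce any future or past state.

```python
# Toy Laplace demon: a deterministic, invertible 'law of nature'.
# Complete information (the state) plus the law (the update rule)
# lets us deduce every future and every past state.

def law(state):
    """One deterministic time step: uniform motion, no forces."""
    position, velocity = state
    return (position + velocity, velocity)

def inverse_law(state):
    """The same law, run backwards in time."""
    position, velocity = state
    return (position - velocity, velocity)

now = (0.0, 1.0)  # complete knowledge of the present

future, past = now, now
for _ in range(10):
    future = law(future)      # deduce the state 10 steps ahead
    past = inverse_law(past)  # deduce the state 10 steps ago

print(future)  # (10.0, 1.0)
print(past)    # (-10.0, 1.0)
```

The philosophical claim is exactly this structure scaled up to everything: one state, one law, no holes.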

A multi-levelled nature

But how whole is this whole? Against such determinism one could argue that it looks only at one level of reality, while reality is composed of more levels. For example: one could say that the determinist is only looking at physical laws, but that there are also different laws concerning life and the psyche. This could even be extended with the view that the psyche of humans is able to generate ideas that live outside the individual human. Jack Cohen and Ian Stewart have this concept of extelligence. While intelligence is inside a creature, extelligence is only possible if there is a culture. Culture allows knowledge to persist in a society through sharing – be it orally or in the form of books. There are good arguments that extelligence is bound to living creatures governed by physical and biological laws, but that doesn’t exclude the possibility of extelligence being governed by its own laws.

Therefore one could argue that there is a hole in our knowledge of the whole even if the determinist project is successful and a theory of everything (understood as everything that can be explained physically) is discovered.

The multilevelled reality approach is able to capture this hole in our knowledge. Acknowledging it, we have to rethink the epistemic wholeness of nature.

The disequilibrium of knowledge

But one could counter the determinist in another way: the whole of nature naturally creates holes of knowledge. The multilevelled reality approach still enables us, in principle, to know everything, to know the whole. If we find all the levels and all the laws that govern the respective levels, we could (theoretically) have a theory of everything (really everything). This approach only works if we consider nature a closed concept. By a closed concept I mean that there is no change in the principles of nature, in the laws that govern it. Philosophers like Alain Badiou and Quentin Meillassoux argue for another ontological perspective on nature. They argue that nature is ontologically never closed (in the sense I described before). There are events that can change the laws. Even if one may be sceptical about approaches that allow laws to change, there is – for the view I want to present in this paragraph – another way to look at it. If we have notions of emergence, notions of a rise in complexity, or any kind of change in the world that forces us to rethink our perspective, we need to revise our knowledge. Slavoj Zizek and Ray Brassier have a name for the resulting problem: the disequilibrium of knowledge. The disequilibrium of knowledge presupposes a view of nature in which time works on nature. Nature is whole at every given point in time. There is a possible complete or whole knowledge of this point in time. But time moves on, and the nature (the way things are) of nature (everything that is) is that things change. In the time it takes to collect the knowledge that explains things at a certain point in time, new things appear that can or should be integrated into knowledge. Therefore knowledge is always too late. It can only capture the past. This is imho the best understanding of the famous Hegel quote: ‘The owl of Minerva spreads its wings only with the falling of the dusk.’

Knowledge is, via time, in disequilibrium: there is always more to know than what can be known. We have three concepts in this view: nature, time and knowledge. Nature is everything. Therefore it should comprise time and knowledge. In the sense of nature at a certain point in time, nature is whole and therefore there is the possibility of a whole knowledge of this point in time. But nature including time must be viewed differently: our knowledge is bound to time – we need time to know and understand. Knowledge therefore always has a hole. This hole consists in the novelties that aren’t and can’t be captured yet. Nature (comprising time) is on the one hand whole (because it comprises time and therefore every novelty), but on the other hand we can never meaningfully speak about it because we have this hole in our knowledge. In a sense, nature creates its own holes, making the whole holey: this can be called the w()hole nature of nature.

The expression ‘w()hole nature of nature’ can be given a very precise sense if one follows Badiou (I still have a lot of criticism concerning Badiou, but let’s play this through here). Badiou is inspired by set theory. In set theory – at least if one follows Cantor – we have a continuum that can never be fully grasped. For each formalized infinity we can find another infinity that comprises the previous one but includes more. Therefore there is no way to fully capture the continuum. Each set can be divided into subsets, and among the subsets of every set there is always one: the empty set. As far as my understanding goes, Badiou interprets this empty set as a placeholder for something that is (not yet) captured in knowledge. From Cantor’s continuum and the axioms of set theory it follows that there will always be something not yet captured in knowledge, and this is expressed in the empty set. The empty brackets of w()hole can be used to allude to this empty set that is part of every whole. If one follows Badiou, one could say the whole is necessarily a w()hole that comprises a hole.
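The two set-theoretical facts used here can be stated precisely (these are standard textbook results, not Badiou’s own notation). Cantor’s theorem says that every set is strictly smaller than its power set, and the empty set is a subset of every set, hence an element of every power set:

$$|X| < |\mathcal{P}(X)| \qquad \text{and} \qquad \emptyset \subseteq X \ \text{ for every set } X, \ \text{ so } \emptyset \in \mathcal{P}(X).$$

Applying the theorem again to P(X) yields an even larger set, and so on without end – this is the formal sense in which no totality of infinities can be completed.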

Conclusion

To summarize: in a strong determinist worldview, nature (as everything) is always whole. Its nature (the way things are) is a clear framework. Therefore it is possible to have full or whole knowledge.

In a multilevelled approach these things are true as well. The multilevelled approach only criticizes the wholeness of a singular level. The nature (the way things are) of nature (everything) is that it cannot be completely (wholly) grasped on a singular level. Each level has a hole that can only be filled with respect to another level.

The last approach sees nature as a whole that is in becoming, thereby creating holes in our knowledge. Becoming involves time, which constantly shakes our knowledge and makes it impossible to know the whole. One important aspect of this approach that I haven’t developed in this post is that the novelties or changes aren’t capturable in advance. The ontological changes (here: in nature) are impossible to prognosticate. This is certainly true if you don’t consider laws timeless, but emergent laws too are (in most definitions of emergence) outside any prognosis.

I hope I could shed a little light on some notions of (in)completeness and on what is at stake in such theories.

On the Role of Mathematics in the Philosophy of Badiou and Deleuze

There are two philosophical approaches that can be compared: that of Deleuze and that of Badiou. In this post I want to sketch my superficial understanding of their ideas and concentrate especially on the role mathematics plays.

[By superficial I mean that I have read a lot and grasped something, but also have a lot of question marks. I try to delineate what my understanding of their approaches is at the moment. I sometimes hint at passages and secondary literature that led me to this view, but it is not based on close reading. I want to come back to this sketch in later posts to prove, improve or disprove it with primary text passages. This post therefore has transitional status and I don’t guarantee its correctness.]

0. Overview

0.1 Deleuze

Deleuze’s philosophy can be summarized in the formula: Monism = Pluralism. What does this mean? The monism part can be seen as another description of immanentism. To have an immanent philosophy means that you have one world, everything is inside this world, and there is no other realm (like, for example, Plato’s heaven of ideas, or Kant’s transcendental conditions if they are understood as a different realm outside reality) that structures reality.

The pluralism can be seen as an anti-essentialist, pluralist methodology. [This part is inspired by DeLanda and other philosophers who refer to Deleuze, but – especially the words I use – is not based directly on Deleuze’s own writing.] Essentialism is here understood in its minimal definition: A is B iff (if and only if) A has property C. C is therefore a necessary condition for being B. For example: this tree is only a tree iff it is wooden. Necessary conditions can be understood as essential properties of something. Aristotle, for example, can be read in that way. For Aristotle there are necessary and accidental properties. The necessary properties are connected to the essence, while the accidental properties generate the differences between particular instances of a group of entities with the same essence.
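As a minimal formalization of this definition (my own notation, not something found in Deleuze or DeLanda):

$$\forall x \, \big( B(x) \leftrightarrow C(x) \big) \qquad \text{which entails} \qquad \forall x \, \big( B(x) \rightarrow C(x) \big)$$

The left formula is the ‘iff’ definition; the entailed right formula isolates C as a necessary condition – exactly the kind of property the anti-essentialist denies.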

[Short note on Aristotle: there is a different way to read Aristotle that is based on his theory of science and language. Aristotle talks a lot about the way we ask questions. Therefore you can read Aristotle’s necessity (or at least most notions of it) as question-dependent. In his Physics he talks about a dugout canoe and asks: is it natural or artificial? Then he goes on to refine the question. If we want to know how this object was created, it is artificial: people used tools to bring the wood into this form. If we want to know how this object behaves in the future (lying around in the dirt), it is natural: it is prone to rot and wither away. This gives rise to the reading that Aristotle’s aitiai and archai (causes and origins/principles) are question- and language-dependent.]

Deleuze is non- or anti-essentialist because for him there are no necessary properties. On the one hand, Deleuze is not so much interested in products or results, but in the processes that generate them (and can change and destroy them). Talk about necessary properties is often static and without history.

On the other hand, Deleuze wants to allow a multiplicity of functions and perspectives. One of Deleuze’s famous concepts is the body without organs (short: bwo). The word organ stems from the Greek word for tool (organon). A tool has a function, and an organ can be understood as a function of a body. In Mille Plateaux the chapter about the bwo starts with different examples of bodies losing their organs. One of them is a passage from Burroughs’ Naked Lunch. This passage imagines a (human or at least animal) body with only one orifice: to eat, to shit, to fuck. In the first episode of the Deleuze podcast buddies without organs (https://buddieswithout.org/) this is extended by the example of a Christian arguing against homosexuality. The anus can be considered an organ. But what is the function of the anus? Only to shit? Is anal sex wrong because we disregard the right function of the organ? For Deleuze that is clearly not the case. Where does this function come from? What makes a certain function the right function?

At least one way to understand the bwo is to say that a body has no proper function. [There is more to the concept of the bwo, but I think this is the important one here.]

Transcendent structures can thus be critiqued in two ways: they not only add another realm to the world, they also structure reality (via the transcendent realm) in a certain way that delivers the correct properties and functions.

The bwo can be seen as a methodological device for looking at the world without inscribing functions. The near impossibility of this project can be felt in the passages describing it. To be aware of the functions we inscribe is nonetheless an achievement of awareness.

Let’s summarize so far: for Deleuze the world is one realm to which we should add nothing external. The world is structured by processes and forces. We cannot give these processes a certain function that determines them.

The central questions for Deleuze’s project are then: if we don’t look at results but at processes, we recognize becoming. We realize things change. How can we describe changes without functions? Is novelty a possibility in an immanent world? How can the new be generated? Does a novelty add something to the world or is it a change in the same world (and what exactly changes)?

0.2 Badiou

As for Deleuze, there is a formula that expresses the main idea of Badiou’s philosophy: Mathematics = Ontology. Ontology is concerned with being. But ontology, like mathematics, isn’t concerned with beings; it is concerned with being as being. Let’s unpack these strange-sounding words a little bit. Mathematics is abstract. Badiou is concerned with a certain branch of mathematics: set theory. Set theory is concerned with sets and their elements. A set comprises elements under a certain definition. Unlike Deleuze, Badiou is concerned with truth as a notion of universality. For Badiou the world is multiple (very simply: full of different things). A set as a truth establishes a category. Its universality means that it is true at all times and can therefore reoccur.

One prominent representative and (for many) the founder of set theory is Cantor. Cantor was not only concerned with sets, but also with infinity. He realized that you can mathematically construct larger and larger infinite sets. [The following is very much simplified and misuses some terms for the sake of a simpler understanding:] Let’s call the infinity of numbers a continuum. Can we construct an infinitely large number or notion of infinity that comprises all numbers? Cantor found out: no – for every infinity a larger infinity can be constructed, so there is no largest one.
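A toy illustration of this result in code (my own example, not Badiou’s): Cantor’s diagonal argument shows that any list of sequences misses at least one sequence, because we can always construct one that differs from the i-th listed sequence at position i.

```python
# Toy version of Cantor's diagonal argument on finite 0/1 sequences:
# whatever list you give, the 'diagonal' sequence is not on it.

def diagonal(sequences):
    """Build a sequence differing from sequences[i] at index i."""
    return [1 - seq[i] for i, seq in enumerate(sequences)]

listed = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]

new_seq = diagonal(listed)
print(new_seq)            # [1, 0, 1, 0]
print(new_seq in listed)  # False: the new sequence escapes the list
```

In the infinite case this is the argument that the continuum of binary sequences cannot be enumerated: there is always ‘something more’ than any given enumeration.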

In Badiou’s work this is the foundation. When truths are discovered or expressed, we discover a certain aspect. But as Cantor shows, there is an infinity of truths and we cannot even express this infinity. There is always something more. This more or surplus means for Badiou that something can happen – an event – that shows and expresses something that wasn’t realized before (in more detail: for Badiou the event creates something indiscernible, something that cannot be discerned by previous knowledge and thereby demands a new knowledge and truth). It forms something new – a novelty – that disturbs all our previous knowledge. For Badiou we have to be faithful to such events and discover their truths. We must bring them into the language of universality. (For Badiou this counts for scientific discoveries as well as political events. His emancipatory politics looks for the excluded. The excluded – e.g. immigrants – express their situation, and (a new) universality is demanded that includes and respects them.)

A summary so far: Badiou considers himself a materialist. But the material world is infinite, and this infinity can always be grasped partially and never completely. Events reveal new truths; to develop them fully we must stay faithful to them, demand their universality and rearrange our previous knowledge.

[A short comment: Quentin Meillassoux – a disciple of Badiou – uses Badiou’s set-theoretical argument to show that the future will always bring new events that are unforeseeable. I’m not sure whether Badiou himself sees infinity as a property of the world or also as a property of time. At the moment I can only see an infinite world in Badiou, but not a time that is able to generate creatio ex nihilo in the way Meillassoux’s does.]

0.3 Similarities

Now we can compare these two approaches: As in every comparison you can name similarities and differences. Let’s start with the similarities.

Both use current science and mathematics to understand how new methodologies introduced in the 20th century allow us to look at the world in another way. One important question for them is novelty: If the world can change and there is becoming, how can we describe the upcoming new? They both try to use their results to better understand capitalism and how to overcome it.

1. Mathematics

1.1 Deleuze

Let’s look at the differences, first the role of mathematics. Especially when read with DeLanda, Deleuze’s use of mathematics is primarily the mathematics of complexity as it is introduced in the theory of dynamical systems. In a dynamical-systems approach you look at the world, select a system you want to describe, decide which parameters are relevant, and then you can construct a dynamical system. Such systems allow us to look at their behavior through time. Via some mathematical techniques it is possible to construct vectors and vector spaces. These vectors show us tendencies of the systems (singularities, attractors).
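As a minimal sketch of what such a description looks like (a standard toy system, nothing specific to Deleuze or DeLanda): the logistic equation dx/dt = x(1 − x) has a fixed point at x = 1 that acts as an attractor, and trajectories from different initial conditions all tend towards it.

```python
# Minimal dynamical-systems sketch: Euler integration of dx/dt = x * (1 - x).
# The fixed point x = 1 is an attractor: trajectories starting from
# different initial conditions converge towards it.

def simulate(x0, dt=0.01, steps=1000):
    """Integrate dx/dt = x * (1 - x) from x0; return the final state."""
    x = x0
    for _ in range(steps):
        x += dt * x * (1 - x)
    return x

for x0 in (0.1, 0.5, 1.5):
    print(f"x0 = {x0}: x(t_end) = {simulate(x0):.4f}")
# All three trajectories end up near 1.0 -- the attractor structures
# the behavior of the system without itself being any actual trajectory.
```

The attractor never ‘happens’ as an event in the system; it is a tendency that shapes every trajectory – which is exactly the role the next paragraph assigns to the virtual.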

Deleuze interprets these techniques ontologically and offers a view in which a new modality arises: virtuality. The virtual is real but not actual – meaning attractors and singularities are not actualized (they are not events that have already happened), but they structure the behavior of systems and are therefore nonetheless real.

This view of mathematics can be called immanent, because we use mathematics to describe what is happening in the world. These are real tendencies discovered by mathematics. This also leads to a view centered on processes. We don’t look at entities as results or structure the world via logic. Morphogenetic processes matter and explain the results (and how even the results can change again).

1.2 Badiou

Badiou, on the other hand, goes into abstraction. His mathematics is not dynamical systems but set theory. As far as I understand him, he holds the equation: mathematics = ontology. Mathematics and ontology are the same because they explain truth procedures and abstract from concrete entities. They are concerned with universal truths. This is nothing that can be completed. Mathematics – and therefore ontology – has a history. It is impossible to anticipate the changes in mathematics and the resulting consequences. Therefore ontology can change and new universals can appear. Philosophy is concerned with events. An event is historical and can propose a new concrete universal. In The Communist Hypothesis he explains that communism is such a universal claim. It is an event that is aimed at a universal goal. Because this goal wasn’t reached, but nothing can tell us that it cannot be reached, it is a hypothesis.

The important point is that mathematics is not something in the immanent realm of the real, but connected to thought and formal procedures that allow us to grasp the real.

2. Novelty

2.1 Deleuze

In Logique du Sens Deleuze introduces two versions of time: chronos and aion. Chronos is the time of a strict materialism. Chronos is the time that can grasp bodies and their relations. It can grasp cause and effect in the sense most natural scientists talk about. Aion, on the other hand, describes incorporeal effects on the surface. Aion can be identified with the virtual (see, for example, Meillassoux’s contribution to Collapse III). The processes of the virtual allow the nature of chronos – the materialist, determinist conception of reality – to change.

2.2 Badiou

Badiou explains novelty on another level. As explained above, ontology is never complete. Therefore we cannot have a complete understanding and novelty will always be possible.

His disciple Quentin Meillassoux goes even further. He shows the problem of probability: to have a probability you need a set of possibilities. But how do we construct these possibilities? If we never have full knowledge of all possibilities, we can never make a full description and therefore never have a real probability. Novelty is ex nihilo, not in the sense that it has no cause, but in the sense that there is more in the effect than in the cause – something that is unforeseeable.
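The underlying point can be put in textbook terms (my framing, not Meillassoux’s own notation): a probability measure is only defined relative to a fixed, completed space of possibilities,

$$P : \mathcal{F} \to [0,1], \qquad \mathcal{F} \subseteq \mathcal{P}(\Omega), \qquad P(\Omega) = 1,$$

so if the sample space Ω can itself gain new, unforeseeable members, no such measure – and hence no probability – can be assigned to genuine novelty.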

3. Truth

3.1 Deleuze

Deleuze is no philosopher of truth. His approach is more concerned with what can be called the problematic or the relevant. For Deleuze there are problems which must be expressed in concepts in order to find actions to react to them. In his Deleuze interpretation in Intensive Science and Virtual Philosophy, DeLanda gives an account of how this can be understood as an approach to the theory of science. The problematic approach is not concerned with truth, but with detecting relevant parameters. I’m not completely sure if that is really all there is to Deleuze. Especially with regard to his Nietzsche interpretation there seems to be something more.

The important point is that Deleuze doesn’t seek truths – truths are not part of his philosophy. He is involved in processes and problems that need understanding, not in definite or universal truths.

3.2 Badiou

Badiou, on the other hand, wants to restore the notion of truth – as should be obvious from what I have written above. Truth is very important in his philosophy. A truth is expressed as universal. It remains universal. New truths don’t change that universality, but require us to rethink the arrangement of previously found truths.

Problems

Here I want to hint at problems with my superficial exposition of Deleuze above. The point is not to explain anything further, but to point out difficulties in my own sketch.

One of the difficulties of Deleuze is his relation to truth. Like Nietzsche, he is highly suspicious of truth. In Intensive Science and Virtual Philosophy DeLanda tackles this problem and offers a reading based on the problematic approach. The criterion is a well-defined problem. It doesn’t matter whether you have all the real parameters; what matters is to have a problem that can tell you something with regard to a question and with regard to a possible solution.

But that doesn’t resolve the problem of truth. If you decide what is important or relevant, you again need criteria. And talking about the real presupposes knowledge about the real. One solution could be that truth is a discursive term, caught in its own logic. Then you can say that talking about the real is not in terms of truth, but in terms of adequacy – and adequacy depends on the questions and terms you start with. [And you end up with something similar to the Aristotle interpretation I hinted at in square brackets in 0.1.]

Another problem is: what is mathematics for Deleuze? The chapter about the smooth and the striated in Mille Plateaux discusses the different uses of numbers: to count, as a unit of measure, etc. So there is not one fixed meaning of numbers. A possible approach is to see mathematics – in its abstraction – as something that makes it possible to highlight different aspects of the real (the actual as well as the virtual), each use of mathematics bound to a different problem.

At least one problem still remains: if mathematics is immanent and not simply a representation, what is it? How can we guarantee that we grasp the real inside mathematics?

This connects to a similar problem: how can we distinguish the immanent from the transcendent? To me it is not always clear that a transcendent realm is really an unnecessary addition to the immanent realm. Most transcendent conceptualisations are gained via the analysis of things or processes. How exactly is this different from Deleuze’s view of mathematics?

Conclusion

In this sketch I used Deleuze and Badiou as two different approaches to contemporary problems. They both involve contemporary mathematics to grasp our world in a way that wasn’t possible before the arrival of new forms of mathematics. As I hinted at in the problems section, there is a lot of work to do to get a better grasp of these approaches. In my opinion these approaches are important because they offer us a new way to look at reality, possibilities (or better: modalities, because the virtual is not the possible), science, and how all of this is connected to society (planning, risk management and economics are only a few areas where models have a huge impact on our daily lives – even if we don’t always realize it). To understand models and what they imply is important. In the future I want to explore these problems in more detail, but I hope this sketch will help others get an overview of what I’m talking about and what is at stake.

The Anti-Hauntology Debate Continues

I want to draw your attention to three excellent posts as the anti-hauntology debate continues:

Matt ‘xenogothic’ Colquhoun has responded to Matt Bluemink’s article as well.

Bluemink replied with another post.

And xenogothic responded again.

I want to add a few thoughts of my own.

Accelerationism

First of all, a note for those unfamiliar with accelerationism, or those who have only heard the term in the context of Trump or the alt-right. In these contexts accelerationism refers to the position that we should speed things up in order to accelerate the destructive tendencies in society. The goal is to completely destroy the system, enabling us to create a different one. (I’m still not sure if this use of the word – mostly present in American debates – has anything to do with the debate I’m about to sketch. Maybe it is the result of one of the many misunderstandings. Or maybe it is just due to the view that the world is falling apart and a new world can only be built if everything is broken down. It is obvious that people with this mindset want to accelerate the demise, and it is therefore called accelerationism – without any reference to the mostly British debate.)

This is not the context xenogothic refers to. As xenogothic never tires of explaining, accelerationism was originally a term in an online debate. Before I explain a little bit more, it is helpful to sketch a timeline:

  • 00s/early 10s: online debate
  • early 10s: first articles using the term are being published
  • 2013: #Accelerate Manifesto goes online
  • 2014: Urbanomic publishes the #Accelerate reader
  • later: many tendencies in the debates get clearer and lead to currents (e.g. l/acc for left accelerationism, r/acc for right accelerationism, u/acc for unconditional accelerationism and so on)

The online debate – as xenogothic explains – was first centered on rethinking aesthetics, with political and philosophical ideas in mind. It started with hauntology – a term Bluemink explains in the first post. Hauntology (a term borrowed from Derrida) was applied to contemporary culture by music journalist Simon Reynolds and cultural critic Mark ‘k-punk’ Fisher (I’m not sure who was first, but I don’t think it matters). They used it to describe certain tendencies in contemporary culture. Fisher obviously linked this to his idea of Capitalist Realism (the notion that we are unable to transcend current conditions and that – the right as well as the left – we are stuck within capitalism). To break out of capitalism, something different was needed. Alex ‘splintering bone ashes’ Williams suggested different ways to look at contemporary music. It was then that Benjamin Noys used the term accelerationism to describe the strategies explained by Williams.

Noys summarized his doubts in a small book called Malign Velocities. It is worth paying attention to the preface and how he describes his own relation to accelerationism:

My aim is not to offer an exhaustive account of accelerationism, but rather to choose certain moments at which it emerges as a political and cultural strategy. […] As this is a work written out of the sense of the difficulty of defeating accelerationism, I don’t hope to write its epitaph here. I can’t deny the appeal of accelerationism, particularly as an aesthetic. What I want to do is suggest some reasons for the attraction that accelerationism exerts, particularly as it appears as such a counter-intuitive and defeatist strategy. […]

While accelerationism wants to accelerate beyond labor, in doing so it pays attention to the misery and joys of labor as an experience. If we are forced to labor, or consigned to the other hell of unemployment, then accelerationism tries to welcome and immerse us in this inhuman experience. While this fails as a political strategy it tells us much about the impossible experience of labor under capitalism. We are often told labor, or at least ‘traditional labor’, is over; the very excesses of accelerationism indicate that labor is still a problem that we have not solved. That I think the accelerationist solution of speeding through labor is false will become evident. This does not, however, remove the problem itself.

p. XIf

In these words you can see that Noys is sceptical towards many of the solutions offered in the accelerationism debate. But he acknowledges the questions of the debate and thinks that these are real problems to be solved. These questions continued and continue up to today. There are some proposals for strategies, but the question – how to overcome capitalism if it can reabsorb every resistance? – still remains.

One more concrete political proposal was offered by Alex Williams and Nick Srnicek. They wrote the #Accelerate Manifesto – for many the core conception of l/acc. Because not everybody involved in the debate was very happy with it, others proposed different ideas and gave their positions different names.

One prominent example is Nick Land. To understand Land and the role he played, it is important to set aside all the connections that many people are well aware of right now (e.g. his connection to alt-right blogger Curtis ‘Mencius Moldbug’ Yarvin). Land’s philosophy consists in radically rethinking capitalism, thereby creating an uncanny view of the human and its role within technocapitalism. The reason so many were fascinated by his ideas was not only their radicalness, but the shift of view his ideas enabled (a very good example is Fisher’s contribution to the #Accelerate reader; you can listen to the talk here. His contribution to this reader is also a very good reference for Fisher’s take on Land.). In an interview on the absolutely amazing Interdependence podcast, xenogothic made a great analogy: Land can be considered the punk of the debate – a nihilistic attitude, destroying everything people believe in – whereas Fisher and others can be compared to post-punk – asking what to do once the old straitjackets are left behind and searching for new directions.

So much for accelerationism. (You can reconstruct the original debate via these two good summaries of the relevant posts. You need the Wayback Machine to access some posts; some have moved their location but are accessible via your search engine; others are – at least to me – completely lost. I can also highly recommend reading through the xenogothic blog, because Matt uncovers a lot of the lost debates and contextualizes them.)

Popular Modernism

Reading xenogothic’s responses to Bluemink, one can ask why argue about the different names at all. We now have three suggestions: anti-hauntology, popular modernism and accelerationism. But there’s more to it than names. As xenogothic explained, Williams’ ‘Against Hauntology’ post was the one Noys called accelerationist. But what about accelerationism and popular modernism? One reason we found different names is the context each of us had in mind. Xenogothic obviously had the debate described above in mind; in that context accelerationism makes a lot of sense. I was thinking a little bit more in the context of Fisher’s work. In the accelerationist debate – in which Fisher played a huge role – he never explicitly claimed to be an accelerationist or pro accelerationism, whereas he explicitly talked of popular modernism as a bygone era as well as an unfinished project. This explains why both names can be considered fitting – depending on the context you are looking at.

But I want to suggest another view now. As Noys states, accelerationism is about aesthetics and about strategies. I would argue that popular modernism, on the other hand, shouldn’t be considered a strategy or an aesthetics, but a goal. Popular modernism – as far as I understand Fisher’s use of the term – describes a media landscape in which different forms of culture (experimental and non-experimental) are distributed and embedded in society in a way that makes them accessible to many people. It is about enabling people to look into different cultures and perspectives – forcing us to broaden and rethink our own. Not only some intellectuals (like me), but also the working class. Of course the working class never has been, is not, and never will be anti-intellectual or anti-experimental per se. But as I explained in one of my previous posts, working conditions and the manner in which culture is embedded in contemporary society prevent many people from accessing the incredible artists we talk about. The fact that the artists we use as examples are often female, trans, queer and/or black is not unimportant. For me it fits into the idea of popular modernism. These artists have to struggle in our society. They have a different perspective (than my white male perspective). They enable people like me to look at gender and culture from a completely different angle. Thereby they fulfill at least one point of popular modernism: to broaden and rethink perspectives. As I wrote before, that is still not enough. We need a different infrastructure for media (and thereby culture) and for how it is embedded in society. We all (and not only artists) need to work towards a different culture and society – one in which popular modernism is not an unfinished project anymore, but an ever-shifting landscape of new ideas and perspectives that reaches not only the few who are interested in fancy art, but everybody.

Therefore I suggest calling popular modernism the goal, and accelerationism the debate that tries to identify how we can reach it. As I wanted to make clear above, accelerationism is in the first place a debate about strategies that recognize the contemporary mechanisms of capitalism. It is not a debate we should regard as settled. The questions and problems remain. There are a few proposals that can be considered useful strategies for another world (in my opinion: reducing working hours/days and demanding a basic income as e.g. Guy Standing promotes, or maybe even going further and demanding a universal basic income like Srnicek and Williams). But this is not enough, and we have to ask and answer a lot of the questions that were part of the debate. To reach popular modernism we need a strategy (call it accelerationist or not).

Culture and Philosophy

Last but not least I want to hint at a few connections between my philosophy posts and my ideas about culture, politics and music. One of the most fascinating questions for me is how novelty is possible. Many of the philosophers I write about on this blog feature the idea that there is becoming, and for becoming to be possible, the new has to be possible. Bluemink and Colquhoun feature this in their articles as well (e.g. Bluemink writing about DeLanda – of whom I’ve written before – and xenogothic writing about exactly the problem I’m trying to describe in this section). This can be linked to the current condition that is so often described as postmodernism. Postmodernism can be defined as the cultural perspective that nothing new is possible; the only possibility to create today is to rearrange the existing parts in a way it wasn’t done before. This is the view of a closed world (which, by the way, some deterministic positions hold as well). Thinkers like Deleuze, DeLanda, Badiou and Meillassoux see novelty and ask the question of its conditions. Meillassoux, for example, writes about creation ex nihilo (from nothing). He defines it a little more precisely as an effect that comprises more than its cause. For example: if you look at the universe before the emergence of life, you can’t deduce that Biden wins. The determinist position holds that this is possible (often via Laplace’s demon: if you have all the knowledge of the rules and all the information about one point in time, you can deduce all others). But for Meillassoux this is impossible. There are events in which there is more in the effect than in the cause. I’m soon going to write more about these terms: novelty, virtuality, tendencies, capacities, etc.

Another question in this context is transference: which ideas (and from which subjects) can be taken out of their original context and put into a new one? This involves thinking about how these ideas are transformed (or not) and how the context they are put into transforms (or not). This forms something that can be called experimental philosophy: putting ideas in other contexts to see what you discover or understand, and which new questions arise. One still has to be very careful, because not every idea is applicable to another context. But in many cases you can’t tell before doing it. Therefore I see it in many respects as a worthwhile project to bring ideas from philosophy into the context of music, politics and culture. The blogosphere is the ideal place to do this. Sometimes we have to turn back and admit it was a stupid idea (but now we know), and sometimes our perspective shifts – like the ever-shifting perspective of popular modernism.

About the different phases of Capitalism

I often use words like Fordism, Post-Fordism or Platform Capitalism. These terms belong to certain understandings of Capitalism. As the question arose again in a chat last week, I used the opportunity to sketch the development of Capitalism and the different perspectives you can have on it. This post is an overview and therefore I only name the problems, but don’t go into a deep analysis or understanding.

From Industrialization to Fordism

Capitalism can be divided into phases. The first phase was industrialism: revolutionizing production processes via machinery, making traditional manufacture less important and inventing new methods for the division of labour. That is what Marx wrote about. But at the beginning of the 20th century some things changed. Depending on the question you want to answer, you can get different results. But today most left theories agree on a big shift in the ideas of Henry Ford. In the classic industrial age the exploitation of workers in factories was terrible. But Ford managed his factories differently. By means of assembly lines he standardized products and made it possible for unskilled workers to contribute to production. He could also manage his factories more easily, without the extremes of exploitation and with a lot more safety at the workplace. Therefore he could state that everybody who works for Ford can buy a Ford. This led to less frustrated workers and a big change in factories worldwide. One of the results was that some workers weren’t that eager anymore to build unions or to change the status quo. Therefore the second big phase of Capitalism is called Fordism.

“Photograph – H.V. McKay Massey Harris, Hay Baler Assembly Line, circa 1946”

This new mode of production led to another problem: the theory of value. On the one hand this problem concerns the economic and Marxist debate about value and surplus. David Ricardo (who influenced Marx’s economic theory a lot) defined the value of a product via the work someone put into it. He was already well aware that more than workers and their physical power is needed to make a product. In addition to money and resources (circulating capital), you need machinery, means of distribution, infrastructure, land (the factory has to be somewhere), and so on. He called this part of production fixed capital. There was (and is) a lot of discussion about how the flows of circulating and fixed capital changed after Ford and what impact that had on prices and wages.

But with serial production, cultural value changed as well. In his famous essay on mechanical reproduction (Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit), the philosopher Walter Benjamin tried to grasp this change: how artworks in particular lost their uniqueness (or in Benjamin’s words: aura – a word that describes not only their unique existence but also the mode of exhibition generated by it). Photography, serial printing and other techniques generated a distribution of artworks that was impossible before and that also changed the artwork itself. In this line of thought, art itself had a crisis. Naturalism as a representation of the world was no longer a handcraft that had to be learned, but possible via technical devices. Some argue this is the reason so many abstract trends in art (e.g. Cubism) arose in the 20th century.

But production was not the only change that appeared in the first half of the 20th century. Another problem for Marxist thinkers was an emerging new line of workers that didn’t really fit into the classic model of class struggle. Frankfurt School member Siegfried Kracauer published an astonishing book in 1930 called “Die Angestellten” – literally translated “The Employed”; the current article on Wikipedia calls it “The Salaried Masses”. The employed in question were secretaries and clerks – people (often women) who worked under miserable conditions, but didn’t fit into the narrative of the strong male worker who is not only exploited but physically ruined. This irritation of class narratives will come back in each phase of capitalism.

(The case can be made that this is also a result of the change in production. The new forms of production made manual labour less important, but the management of production more important, leading to more people organizing things bureaucratically.)

Another shift occurred in this period that is not a shift in production or exploitation, but a shift in analysis. The Italian Marxist Antonio Gramsci asked the question: why do the exploited not rise up against their exploiters? To answer it, he developed the concept of hegemony. Hegemony describes the (cultural) narratives that explain the predominance of someone (e.g. a king, an aristocracy) or something (e.g. a state or an institution).

We now have four different methods for looking at shifts in capitalism: change in production (assembly lines), change in value (economy, art), change in the composition of the exploited, and change in cultural narratives.

From Fordism to Postfordism

In the late 20th century something changed (often narrowed down to the 70s). The possible causes for this next phase are diverse and many theorists assume that there is no single cause but many interrelated causes.

[A short note: one of the main ideas that had an impact on this phase is Neoliberalism. Because in my opinion Neoliberalism is a concept much more complex than many presentations [and sadly often even misrepresentations] in left literature would have you think, I try to leave it out of this post and write about it in another one. But that doesn’t mean that it’s unimportant.]

We can take our sketch of the first shift as a guideline through the next one. Starting with production, we can see that the 70s and 80s were a period of massive deindustrialization (think, for example, of the miners’ strikes in the UK and the abandoned, unemployed former industrial landscape of the North of England). In the same phase, the service sector and automation grew. One important point and problem of these shifts in production is that the old forms didn’t simply vanish. We still have manufacturing and industrial labor. But new forms of production were added, and thereby the importance of the old ones diminished. This led to many problems. One example is the reaction of the unions: should we represent the classic industrial workers, or can we make a shift towards the new exploited? Sadly, most of the classic unions haven’t found a way to support the new exploited, concentrating on old concepts of the (male, physically strained) working class.

In the case of value we should start with economic value and money. The 70s marked a big shift in the economy: the tying of currency to gold was abandoned (giving way to a new kind of relationship between money and value called fiat money), the previous Bretton Woods system was abolished, the stock market started to use computers, and all this together led to a scientific-mathematical modeling of markets previously unimagined. Analyzing this shift is complex and still too rare in left theories – especially when it comes to understanding the shift in (economic) value.

Another question concerning value arose already at the end of Fordism. In France, a group of new Marxist thinkers became prominent who are sometimes simply summarized as the (French) New Left. In 1947 Henri Lefebvre published his book Critique de la vie quotidienne. In this book – as well as in the works of his colleagues Jean Baudrillard and Roland Barthes – he made a shift in the perception of economics and culture. The three thinkers criticized traditional Marxism for its focus on production and its resulting blind spot regarding consumption. This new focus allowed a reevaluation of production, because production wasn’t only the production of material goods. Baudrillard and Barthes especially used semiotics – the science of signs – to understand how adverts and narratives formed products into expressions of lifestyle. Value was therefore also tied to the symbolic sphere – a tendency that has kept growing to this day.

This line of thought developed over the years with different terminologies and extensions. A prominent example is Slavoj Zizek’s analysis of consumerism:

In this short video something is added to the analysis: ideology. Ideology – a term already present in Marx’s oeuvre – in the sense Zizek uses it describes something previously not well grasped: the perspective of desire (in the case of Starbucks: our desire to do something good). This isn’t solely Zizek’s idea; it came up in French and Italian philosophy as well. The reason this perspective wasn’t prominent before (even if it can be found) is that many earlier theoreticians hadn’t read Freud’s psychoanalysis (or couldn’t, because Freud hadn’t yet been born or hadn’t published anything). But by the second half of the 20th century it was very well known on the left. Many departed from Freud in different respects. With the perspective of consumption and the analysis of how advertising works, the way people think and conceive the world became more important. In this perspective, the different phases of Capitalism didn’t only change narratives, but also the desires of people, viz. what people want.

Before we look at the new constellation of the exploited and of class in Postfordism, it is better to first show the connection between the new questions of value and cultural narratives.

Seeing that narratives formed modes of consumption and were applied in the new media environment, Baudrillard realized that it was becoming more and more difficult to anchor thought in a supposed reality. A prominent example is the fins on (primarily American) cars. According to Baudrillard they don’t have any real function (e.g. actually making the car faster). They only allude to speed and create a symbolic presentation of it. In this paradoxical world of references, models and codes became the new paradigm of thinking and of creating thought. Cause and effect weren’t really distinguishable anymore. According to Baudrillard this had a tragic impact on our perception of the world and of history. This new notion of history is, for a lot of left thinkers, the main cultural logic of postfordist Capitalism.

To explain it, it is useful to contrast the left notion of the end of history with a more conservative one. One of the most famous expressions of the new paradigm was Francis Fukuyama’s notion of the end of history. It wasn’t the first time the end of history was declared (you can find this notion in a lot of long-gone periods), but it was a new version. Fukuyama wrote his ideas after the fall of the Wall and, with it, the Soviet Union. In his opinion the far right – he primarily means Nazi Germany – as well as the far left – he primarily means the Soviet Union – had failed. Capitalism and (representative) democracy supposedly proved to be the only viable modes of economy and politics.

That wasn’t the idea of Baudrillard and other left theorists about the end of history. One of the main observations of the left was a vanishing of narratives of the future, a lost sense of history – where everything is available at the same time and history is conserved for your consumption, making it more like a theme park than something that has an impact on the development towards a future. Up until now this forms – for thinkers like Fredric Jameson (who calls this cultural mode of production and perception Postmodernism) and Mark Fisher (who calls it Capitalist Realism) – a strong problem that the left itself has to solve. The problem can be summarized: how can we overcome Capitalism if we can’t think outside Capitalism? Or in another often quoted phrase: (in a postmodern age) it’s easier to imagine the end of the world than the end of Capitalism.

[A short note: this isn’t the only definition of Postmodernism, which makes it an often confusing word. This is one sense of the word, not – necessarily – related to others: Postmodernism as the cultural logic (or narrative) of Postfordism.

This leads to another short note: Jameson’s famous book is called Postmodernism, or, the Cultural Logic of Late Capitalism. The late Capitalism Jameson is talking about is clearly the phase I described as Postfordism. I don’t use the term late Capitalism because it already appeared in the works of the Frankfurt School – the school of thought that comprises thinkers like Benjamin, Kracauer and Adorno. These are thinkers connected to the Fordist phase of capitalism. Therefore late Capitalism is a confusing term that can be applied to each phase, as long as that phase is conceived as one of the last.]

Finally, the new exploited: as already hinted at in the point about production, Postfordism was characterized by deindustrialization and therefore a huge rise in all kinds of labour that isn’t physically hard, but has a huge potential to drive you psychically mad: cashiers in supermarkets, deliverers (post, food … it doesn’t matter; by car, by bike, by truck), all kinds of office jobs (from filling in forms the whole day to “translating” programs from one programming language to another to make them compatible with … whatever), and so on. But not only did new jobs arise, so did new modes of employment. While before you often had more job security, you now often have limited contracts, are asked to be more flexible, and your job often consists in seeking the next job.

A lot can be said about these shifts. But one thing is clear: the old (industrial and mostly male) working class is not the majority of the exploited anymore. Of course class still matters, and maybe it got worse. But we need new ways to describe it. Why class still matters: think about taking a job as a journalist. If you have a background where you don’t have to constantly worry about paying the rent and buying food, you can easily do a lot of unpaid internships. If you don’t have this background, it is nearly impossible for you to become a journalist. This explains the sad fact that there are a lot of jobs where working-class people aren’t really represented (not only journalists, but also lawyers, judges, politicians, and so on).

But even if you have a good (e.g. middle-class) background, you aren’t necessarily ending up in a well-paid job. This extension of the exploited is often summarized as the precariat. The precariat comprises all the workers worldwide who don’t have any form of security or predictability regarding their future and constantly have to secure their existence.

This short overview used the perspectives of the previous phase and also added a new one: the question of desire. The list isn’t complete and I have only scratched the surface. But there is one question I still want to pose: we are still living inside Capitalism, but hasn’t anything changed since the 70s? Is history over? Are the 70s the starting point of a development that has now reached its final form, or is there another change?

After Postfordism?

Are we still living in a postfordist society? The answer isn’t clear. There are many thinkers who talk about digital Capitalism, surveillance Capitalism, communicative Capitalism, or find other terms for new shifts. As with all the shifts I presented before, it isn’t always clear whether there is a cut or a slow transition. I’m not sure if we are in a new phase or in a time of transition. Whether there is another shift or not also depends on the perspective and the question. For most of the points presented in this post, at least a small shift can be found. The theories – often connected to the rise of the internet – are countless. I want to present at least one:

A new mode of making money that the internet enabled is – as Nick Srnicek calls it – Platform Capitalism. A new kind of infrastructure arose that enabled new modes of exploitation and injustice. A good example of this is the German platform pizza.de. pizza.de offered restaurants, food chains and nearly any seller of food a website to promote their delivery services. They didn’t deliver and they didn’t make food. They just offered a platform and made a lot of money with the fee the sellers had to pay for this platform service. (pizza.de was part of the Just Eat Takeaway.com company and is now called Lieferando, offering delivery services as well.)
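As a toy illustration of this business model (all numbers invented, nothing from Srnicek’s analysis): the platform neither cooks nor delivers, it simply takes a cut of every order routed through it.

```python
# Toy sketch of platform revenue: a commission on every order.
# COMMISSION_RATE and the order values are invented for illustration.

COMMISSION_RATE = 0.13  # hypothetical 13% fee per order

orders = [24.50, 18.90, 42.00, 9.99]  # order values routed through the platform

platform_revenue = sum(order * COMMISSION_RATE for order in orders)
sellers_income = sum(order * (1 - COMMISSION_RATE) for order in orders)

print(f"platform revenue: {platform_revenue:.2f}")  # scales with traffic,
print(f"sellers' income:  {sellers_income:.2f}")    # not with production
```

The structural point: the platform’s revenue scales with the volume of transactions it mediates, while the costs of production and delivery stay with the sellers.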

This is definitely a shift in creating surplus (production is somehow the wrong word), and it is not even the whole story of Platform Capitalism: Srnicek (and others) analyze a lot of different ways in which platforms like Google, Facebook and Apple are shifting classical notions of production, surplus and money.

Conclusion

I hope I could give a short and not too confusing overview of the changes in Capitalism. This short showcase isn’t complete, it doesn’t deliver any data to support the theories, I don’t link it to the political policies that made the shifts possible, and I haven’t talked about the different methodologies. But I hope that for someone new to this debate it is a helpful summary, a guide to the question “Which authors should I look up if I’m interested in X?”, and a pointer to some fields of study to look into. In future posts I want to go deeper into some of these topics.

[A short note at the end: there are sometimes other names for the different phases, used to highlight different aspects of them. I didn’t want to make this post more confusing and decided to use only the most popular ones.]

2000 Revisited

Cover of the publication

Yesterday this amazing volume with articles about past and current visions of the future was published. It also features fantastic artwork by Laura Voss (https://laura-voss.com/#).

In my contribution to this collection I sketch Jean Baudrillard’s theses on history, and I’m very happy to be part of this project. I want to use this opportunity to thank everybody involved – especially Andie Rothenhäusler, without whom I wouldn’t be part of it.

I wrote this article at the end of my B.A., and now – at the end of my M.A. – it’s a somewhat strange feeling to reread it. On the one hand there is some self-criticism – I have this with everything I write. But on the other hand it’s very interesting to see how this paper features some directions that have become more prominent for me by now. One of Baudrillard’s observations is that history and its importance vanish – and with them our imagination of the future. What remains is history as “retro-scenario”. Whatever one thinks about Baudrillard, his style of writing or his methodology, the problem of lost futures isn’t solved. In the conclusion of my article I hint at the works of Mark Fisher – a continually growing influence on me. Fisher shows how film and television lost their sense of history (as something eventful) and moved on to a theme-park version of the past. The goal set out by Fisher is to realize which progressive vectors of modernism were interrupted and how we can resume working towards a future.

This perspective is what makes this volume so precious. To find a future – a real one and not a repetition or recombination of the past – we need to understand past visions of the future, how they failed or were interrupted, and how we can create new ones.

Open Access to this volume can be found here: https://publikationen.bibliothek.kit.edu/1000117728 (all contributions are in German and not translated)

What is Speculative Realism?

I’m often asked what exactly Speculative Realism (SR for short) is. And as I use it as a category on this blog, I thought it would be a good idea to explain it here.

First of all: there is no exact definition of SR. Instead there is a (hi)story of this term. Before the story comes to the coining of SR, let’s describe the context it emerged from. After that I explain the origin, the further development and the problems of the term’s use.

Analytic and Continental Philosophy

In the 20th century philosophy was split into two traditions: Continental and Analytic Philosophy. These two traditions cannot always be clearly discerned. Most of the classifications are due to academic groups and the self-promotion of universities. There are many tendencies in philosophy that can be used for classification, but none of them withstands a closer look.

Let me give you one narrative about these two traditions. Analytic Philosophy is often associated with a technical approach to language and a more “science”-oriented approach in general. One of the best examples of the analytic tradition is maybe the Wiener Kreis (Vienna Circle). This group was strongly influenced by the early Wittgenstein and his book Tractatus Logico-Philosophicus. This book tries to develop a theory of language where everything fits into a system of representation. The late Wittgenstein instead takes another approach, where he looks at the use of language. This means that a word doesn’t represent something, but is learned by its use in a context. There are some passages in the Tractatus that already hint at the possibility of this late version too. In this reading the Tractatus doesn’t want a perfect system of representations, but wants to show the nonsense of this approach. This reading, however, is controversial. But back to the Wiener Kreis: the Wiener Kreis read the Tractatus as a work about good and strict references. They wanted a philosophy where terms are defined and clearly used. Everything that is unclear and cannot be transformed into a language of representation was in their eyes not serious scientific or philosophical work.

On the other side we have the continental tradition. (Even if it has continental in its name, that doesn’t mean anything anymore – if it ever meant something.) Two very prominent figures of the continental tradition are Heidegger and Adorno. Both criticized the Vienna Circle sharply – and in many points rightfully so. The continental tradition – from then till now – has been dismissed by the other side as obscure and unnecessarily confusing. This accusation has a point: many works of the continental tradition do not strive to be clear. But the accusation also misses a point. In the case of Adorno, for example, his writing tries to develop a certain way of thinking. This thinking can best be approached if one follows its movements. His texts are often written in such a way that the reader has to make these movements of thought while reading them. And Adorno’s philosophy is not about the results of this thinking, but about the method of thinking itself (one of the reasons his magnum opus is called Negative Dialektik/Negative Dialectics; it’s not about positive results). This reflection on thinking and the impossibility of giving clear definitions is seen as a major feature of continental philosophy.

This little story tells us about one way to define the two opposing modes of philosophy. But we can use it in the same way to destroy the illusion of Analytic and Continental as different traditions. I already hinted at the two different Wittgensteins (or two different ways of understanding Wittgenstein). And in self-appointed Analytic Philosophy you can find both versions. You can also find in the continental tradition a lot of thinkers reading and writing about Wittgenstein. Style and the self-promotion of oneself, an academic group or an institution is often far more important than supposed thematic or methodological differences.

Poststructuralism

In the middle of the 20th century there emerged – primarily in France – a new line of Continental Philosophy that is often summarized under the name Poststructuralism. This current is also more complex than most introductions and definitions give it credit for. But let’s concentrate on a certain contentious idea inside this current: Deconstructivism. This idea, developed by Jacques Derrida, destroys the idea of any presentation or representation. In a very popular simplification it is about the idea that terms come in opposites that are hierarchically ordered and therefore bring norms with them. The political and philosophical project of this idea is then to destroy the order of terms by bringing in a third one. On the one side Deconstructivism is one of the most popular ideas of Poststructuralism. It is very powerful in some political projects – especially feminism and Queer Theory. But on the other side it is one that is laughed at – especially by self-defined analytic philosophers, who criticize the resulting unclear texts or the simplicity of the idea. Derrida’s works are actually a lot richer than the popular caricature created by both sides.

Deconstructivism has also led to a fixation on language and a declaration against any fixed meaning, a stance shared by too many poststructuralist thinkers. This led to criticism primarily inside continental circles. One example can be found in Mark Fisher’s Ghosts of my Life:

As soon as it was established in certain areas of the academy, deconstruction, the philosophical project which Derrida founded, installed itself as a pious cult of indeterminacy, which at its worst made a lawyerly virtue of avoiding any definitive claim. Deconstruction was a kind of pathology of scepticism, which induced hedging, infirmity of purpose and compulsory doubt in its followers. It elevated particular modes of academic practice – Heidegger’s priestly opacity, literary theory’s emphasis on the ultimate instability of any interpretation – into quasi-theological imperatives. (p. 16f)

Correlationism and Philosophies of Access

We can now jump to the origin of SR. On 27 April 2007 a workshop in London was held under the title Speculative Realism. This was the first time the name actually appeared.

The announcement was reprinted in Collapse III, p. 306.

From its announcement we can take a provisional definition of SR: it speaks of a “continental” orthodoxy that is anti-representational or “correlationist”. The sections above should have given a superficial understanding of this anti-representational continental orthodoxy. The original SR philosophers of the workshop set out to answer a question often posed in analytic circles but mostly ignored in the recent continental tradition: How can representation work? Thereby they dissolved the border between continental and analytic further. But let’s have a closer look at the two terms that try to clarify that “continental” problem: philosophies of access and correlationism. The term philosophies of access stems from Graham Harman, who is to date the philosopher most eager to use SR as a self-description. His own project is also termed Object-Oriented Ontology (OOO) or Object-Oriented Philosophy (OOP). Harman’s idea is that nothing is ever completely accessible; everything withdraws behind our contact with it. In his opinion this is not a specifically human problem: fire is never in complete contact with the wood it burns. His term philosophies of access points to the problem that in such philosophies it is OUR access to things that is somehow privileged. Harman criticizes the philosophies of access for their anthropocentric view and wants to establish a view in which everything is withdrawn and in which we have to explain how contact is even possible. His contribution to the workshop, Vicarious Causation, sets out to explain exactly this problem of contact (like nearly everything else he has written).

Meillassoux tries to tackle the problem of contemporary philosophy differently: he identifies as correlationism the position that there is no knowledge of the world without a (cor)relation to us thinking it. (I keep this explanation very short because I have already written a lot about Meillassoux on this blog, starting here: https://polphil778328996.wordpress.com/2019/08/10/the-philosophy-of-quentin-meillassoux-part-i/ ) His project sets out to delineate how speculative thinking transgresses this limit and is able to reach an outside. Hence the title of his first big book, After Finitude.

Harman himself explains very well the main differences between his own approach and that of Meillassoux:

Kant holds as follows:

a. The human–world relation stands at the center of philosophy, since we cannot think something without thinking it.

b. All knowledge is finite, unable to grasp reality in its own right.

Meillassoux rejects (b) while affirming (a). But readers of my own books know that my reaction to Kant is the exact opposite, rejecting (a) while affirming (b), since in my philosophy the human–world relation does not stand at the center. Even inanimate objects fail to grasp each other as they are in themselves; finitude is not just a local specter haunting the human subject, but a structural feature of relations in general including non-human ones.

Harman, Graham: Quentin Meillassoux. Philosophy in the Making (Second edition) p. 4

So we start with something confusing. The “common enemy” of the philosophers of this workshop is not as clear as it seems. The approaches are very different. Maybe one could say that SR wants to change something in Continental Philosophy. But that is not really helpful, since not every new theory in continental circles can be considered SR.

Speculative Realism

Another interesting anecdote stems from Harman’s retelling of how the workshop came together. Especially the passage about finding a name for it tells us a lot:

As noted, the Speculative Realism Workshop took place on 27 April 2007. Some months earlier we still had no name for the group, which had rallied around ‘correlationism’ as the shared enemy unifying four philosophical projects with little else in common. At first it seemed as though we might settle on ‘Speculative Materialism’, Meillassoux’s name for his own system, despite my own rejection of materialism. No better alternative emerged until Brassier offered ‘Speculative Realism’. The name had such appeal that it was adopted immediately by all members of the group, though Brassier (who disliked ‘speculative’) and Meillassoux (who preferred ‘materialism’ to ‘realism’) eventually distanced themselves from the term. Grant has since taken a turn in the direction of British Idealism, which leaves the author of the present book as the only original Speculative Realist who still endorses the term wholeheartedly. ‘Speculative Realism’ has since become a familiar phrase in continental philosophy circles in the Anglophone world, the subject of numerous university courses and ceaseless discussion in the blogosphere. It has served as a rallying point for the young, and has helped focus continental philosophy for the first time on the realism/anti-realism dispute, which was formerly dismissed as a ‘pseudo-problem’ by the overly reverent disciples of Husserl and Heidegger.

Ibid. p. 79f

What we have seen so far, and what this quote makes clear again, is that SR was coined as an umbrella term for four very different philosophers. Two of them I have explained in a little more detail. Bringing in the other two would only confuse matters more.

But the Harman quote gives us a second problem – the problem that led to this post in the first place. SR has become a familiar term in some academic circles and is used e.g. for a book series (edited by Harman himself). Given the impossibility of giving the term a clear definition or direction, that seems strange. Therefore I want to show an example of the debate in the blogosphere Harman is talking about, as well as Brassier’s critique of the dissemination of this term. Finally I will give a short comment of my own.

SR and politics

Shortly after the workshop and its reprint in Collapse Volume III, the blogosphere went crazy with ideas about how to further develop these concepts. One very confusing aspect of this debate was that it often talked about SR as a philosophical current with clear characteristics. One topic was very prominent: Poststructuralism made ontological questions political; SR brings back forgotten ontological questions without direct political implications. The question many bloggers now tried to answer was: What is the relation between SR and politics?

(An example and point of entry is this k-punk post: http://k-punk.abstractdynamics.org/archives/010946.html Many of the links there are broken; some can be recovered via the Wayback Machine, others can’t.)

Ontology – as the discipline concerned with the structure of our reality/world and sometimes even the multiverse – was often occluded in Poststructuralism. Derrida and others showed the implicit tenets of other ontologies, how they reflected ways to think about the world and therefore ways to structure society. The deconstructivist project, for example, wasn’t able to deliver an ontology of its own. That meant it wasn’t able to construct a political view or a future society people could aspire to. Not all poststructuralist projects were forgetful of ontology: Deleuze and Guattari delivered a new ontological model – maybe one of the reasons some people don’t count them as real poststructuralists. Nevertheless, in the atmosphere of some academic circles at the beginning of this century it was definitely refreshing to hear about the construction of new ontologies. In a political atmosphere that Mark Fisher described so well as Capitalist Realism (see his book of the same name) it was clear that a concept was needed that could lead outside the supposed deadlock of neoliberal finitude. The emerging awareness of the climate crisis also called for a political theory that went further than language. Therefore Meillassoux’s ontology, which proposed that everything is contingent – except contingency itself – looked like an exit from the necessity of neoliberal capitalism, and Harman’s talk about inanimate objects looked like a way to free ourselves from anthropocentric thinking and move toward a thinking of objects and the earth – thereby accepting the crisis of our planet as a philosophical subject.

This explains the debate, but it also shows how some – often continentally oriented – academic circles weren’t able to look outside their favorite theories and question their tenets. SR enabled these circles to look outside. But a closer look shows that other theories and similar questions were already available (e.g. a certain reading of Deleuze/Guattari and one of their favorite philosophers, Alfred North Whitehead). Each of the original SR philosophers brought something new, but it wasn’t ex nihilo. And we still can’t say there is a tenet of SR that withstands scrutiny – except in the way Harman proposes in the above quote, which counts him as the only real speculative realist. But then why talk about SR and not simply about Harman?

Brassier’s and Wolfendale’s criticism

In 2014 Pete Wolfendale’s The Noumenon’s New Clothes was published. It continued a critique of Harman’s approach previously argued out on his blog. The main idea is that Harman develops a system of withdrawn objects and their structure, but is unable to deliver any arguments for why we should look at the world this way (humorously, Wolfendale calls it the withdrawal of arguments). Wolfendale seems to see a problem in humanities that aren’t able to get outside their phenomenological and linguistic descriptions. The supposed aim of SR to get to the outside is undermined by Harman’s withdrawal into the inaccessibility of everything. (The noumenon – the thing of thought that cannot be completely accessed outside thought – reappears. In an attempt to go beyond Kant – who tried to show the inaccessibility of things-in-themselves – Harman falls back into a position very similar to Kant’s. Hence the title.)

In his afterword to this book, Ray Brassier sharply criticizes Harman’s use of the term SR. He starts with an “existence test” that Harman often evokes and that is also present in the above Harman quote:

Has Speculative Realism passed the existence test? Graham Harman has certainly served as its indefatigable midwife. No doubt modesty forbade him from mentioning that he is commissioning editor of the ‘thriving book series’ he cites, and the self-volunteered editor of the new Speculative Realism section of the popular PhilPapers website. His claim about postdoctoral fellowships and semester-long university courses sounds an impressively academic note, flagging the institutional recognition that is generally accepted as the seal of intellectual respectability. Yet here a note of caution is in order, since Ayn Rand’s Objectivism and L. Ron Hubbard’s Scientology have also succeeded in securing toeholds in American university programmes. Academic recognition is not compelling by itself unless we are told the names of the fellowships and institutions in question. Moreover, a sceptic might be forgiven for querying the reliability of a witness testifying to Speculative Realism’s indubitable existence from within the pages of a publication whose official subtitle is ‘A Journal of Speculative Realism’. And if existence is to be measured in terms of blogs, books, and Google hits, then Speculative Realism lags woefully far behind Bigfoot, Yeti, and the Loch Ness Monster, all of whom have passed Harman’s ‘existence test’ with flying colours. (p. 409f)

Then he goes on to show that it is impossible to find good criteria for a distinction between SR and not-SR. In the end he attacks Harman’s self-branding and self-promotion:

Ultimately, neither commonalities nor shared aversions suffice to clearly demarcate Speculative Realists from other philosophers. Considered as a philosophical movement, Speculative Realism is vitiated by its fatal lack of cohesiveness. Whether we try to define it negatively by what it is against or positively by what it is for, we exclude too little and include too much. Harman justifies his branding of Speculative Realism as a ‘universally recognized method of conveying information while cutting through informational clutter’. The problem is that those he has enlisted as the brand’s representatives diverge on so many fundamentals that the noise generated by bundling them together far exceeds any possible informational content this grouping might have hoped to provide. In the absence of even a minimal positive criterion of doctrinal cohesiveness, all that is left is chatter about something called ‘Speculative Realism’—placing it on an ontological par with chatter about the ‘Montauk Project’. It is not difficult to see how Speculative Realism passes Harman’s existence test, since this test is predicated on a principle as simple as it is dubious: to be is to be talked about. (p. 416)

For Brassier this branding goes hand in hand with Harman’s dubious use of science, and he contrasts Harman’s position with those of the other original SR philosophers:

Is there anything of real philosophical import at stake in the controversy over what Meillassoux calls ‘correlationism’? I think that there is indeed, but unfortunately this is precisely what has been obscured by the concerted attempt to brand Speculative Realism. The impetus for the original, eponymous workshop was to revive questions about realism, materialism, science, representation, and objectivity, that were dismissed as otiose by each of the main pillars of Continental orthodoxy: phenomenology, critical theory, and deconstruction. The synopsis for that workshop, which I composed with Alberto Toscano, is worth citing because it illustrates the shortfall between the concerns that animated the original ‘Speculative Realism’ event, and those of the current Speculative Realism brand:

“Contemporary ‘continental’ philosophy often prides itself on having overcome the age-old metaphysical battles between realism and idealism. Subject-object dualism, whose repudiation has turned into a conditioned reflex of contemporary theory, has supposedly been destroyed by the critique of representation and supplanted by various ways of thinking the fundamental correlation between thought and world.

But perhaps this anti-representational (or ‘correlationist’) consensus—which exceeds philosophy proper and thrives in many domains of the humanities and the social sciences—hides a deeper and more insidious idealism. Is realism really so ‘naive’? And is the widespread dismissal of representation and objectivity the radical, critical stance it so often claims to be?”

The interest in rehabilitating representation and objectivity remains my own personal preoccupation and was certainly not shared by any of the other participants then or now. But the issue of the link between representation and objectivity generates questions about the status of scientific representation, which in turn lead to the more fundamental issue of philosophy’s relation to the natural sciences. This issue is central to Meillassoux’s work, whether in the form of his attempt to provide a speculative proof of the contingency of the laws of nature or in his account of the positive ‘meaninglessness’ of mathematical signs. But it is equally fundamental for Grant, whose reactivation of Schellingian Naturphilosophie requires reasserting ‘the eternal and necessary bond between philosophy and physics’—an interest emphatically reaffirmed by Grant’s ongoing research into the philosophical implications of the ‘deep-field problem’ in cosmology. It is precisely this concern with renegotiating philosophy’s relation to the natural sciences that is conspicuously absent from the Harman-sanctioned branding of Speculative Realism. For Harman, such concern smacks of ‘scientism’. Indeed, Harman’s vocal disdain for ‘scientism’ (not to mention ‘epistemism’) confirms the extent to which, notwithstanding the eccentricity of his reading of Heidegger, he remains an orthodox Heideggerian. For Harman, metaphorical allusion trumps scientific investigation and fascination with objects trumps any concern for objectivity. Indeed, the irony—as Pete Wolfendale’s withering dissection of Object-Oriented Ontology demonstrates—is that in Harman’s hands, Speculative Realism merely exacerbates the disdain for rationality, whether philosophical or scientific, which is among correlationism’s more objectionable consequences. It is this misology which Meillassoux’s After Finitude sought to challenge. Far from challenging it, Harman’s Object-Oriented Philosophy pushes this misology towards even more reckless extremes, such that it ends up being, as Wolfendale puts it, ‘correlationism’s eccentric uncle’. (p. 416 – 419)

This harsh critique of Harman is in my opinion justified. That still leaves the question: why did I choose SR as a category on my blog?

Conclusion

As we have seen, there is no good test to distinguish SR from not-SR. It was an event that tried to challenge some problematic tenets in certain continental circles. But Harman’s self-promotion has made it a problematic term. I used it as a somewhat known category because I thought interested people would be drawn to the buzzword. Having used it – until now – only for my Meillassoux posts, it would have been better to use the category speculative materialism (Meillassoux’s name for his own project). DeLanda – a philosopher I have written about and who has a book published in Harman’s SR series – isn’t in that category. I thought about putting those posts there, but – aware of the SR problem – I decided to use this category only for the four initial philosophers. In the end SR is an ism, and all isms share the fate of being good orientations at first and misleading later – especially if you want to take a closer look. Isms can also be used to organize different people and ideas to develop a direction (especially in politics) and stop being useful once the ism ossifies. As Brassier has shown in the last quote, there is no direction of SR, because Harman goes in a very different direction than the others. Maybe it’s time to forget SR and talk about good philosophy.

On Reza Negarestani, Abduction and Theory Fiction

Reading and rereading old books and articles by Reza Negarestani and Alex Williams has led me to a way of thinking about the theoretical framework of theory fictions. First of all: what are theory fictions? Let’s start with an example. In Collapse III a translation of an article by Quentin Meillassoux was published. In this article Meillassoux uses two short quotes by Gilles Deleuze. Instead of using Deleuze’s oeuvre to explain these quotes, Meillassoux treats Deleuze as a pre-Socratic philosopher of whom only these two fragments have survived. Both quotes talk about immanence, but with a different reference. One states that Spinoza was the prince of immanence: his whole work is saturated with immanence. The other refers to Bergson, to whom immanence came only once: at the beginning of his book Matter and Memory. Meillassoux constructs two possible schools of Deleuze interpretation: one favoring the quote about Spinoza, the other the one about Bergson. Meillassoux focuses on the latter, because if Spinoza’s work is filled with immanence, then there is no noticeable place to understand what immanence is. Bergson, on the other hand, has an event where immanence occurs and then disappears. Therefore we have a difference in immanence. Meillassoux points to physics, where it is well known that to establish a magnitude a variance is needed. Now the real theory fiction starts. Meillassoux analyzes Bergson’s Matter and Memory in order to find this difference and construct a theory of immanence that is not Bergson’s. It is not Deleuze’s either, even if it resembles his in many points. Nor is it Meillassoux’s own position – though there are a lot of parallels.

Meillassoux’s theory fiction is generated by a selection of methods and theory-parts [incidentally selection is one of the main themes of the resulting theory fiction]. After the selection these parts generate something new, something unforeseen, a theory of its own that doesn’t belong to any of the philosophers involved (Deleuze, Bergson, Meillassoux).

Another philosopher working with theory fictions is Reza Negarestani. His book Cyclonopedia takes theories from archeology, philosophy, mathematics, computation, conspiracy theories, geology and many other resources to generate an incomparable assemblage of genres and thoughts. Each of the elements interacts with the others in such a way that the elements are no longer what they were before, creating something new. [Cyclonopedia also plays with hyperstition – generally defined as fictions that make themselves real. In this post I want to focus on the aspect of theory fiction. In the near future I will probably go into hyperstition as well.]

This process can be described by another theory of Negarestani’s: his philosophy of decay. In his contributions to Collapse IV and Collapse VI, he uses alchemy, medieval philosophy, chemistry, mathematics and theories of dynamic systems to think about decay. Without going into much detail, the core idea is that decay is subtractive as well as productive. On the one side, the object or thing that enters decay asymptotically dissolves: a decaying apple loses more and more of its composition and properties over time. On the other side, what is taken away generates something new: the worm-infested, mouldy, foul apple is a place of production. A clearer picture is given in his Collapse IV contribution Corpse Bride: Thinking with Nigredo. Here Negarestani uses a cruel Etruscan method of torture to illustrate the idea of decay. The torture consists in tying a victim to a corpse. The tortured person is fed and kept alive. The decay of the corpse is transferred to the tortured person. The bodies merge together and become something like a black slime.

The important point about decay is that it describes processes of continuous transformation and dissolution, thereby questioning our often discrete, ordered and essentialist view of the world. Negarestani develops decay from concrete examples into an abstract process that enables us to describe processes in a different and probably more adequate way. Cyclonopedia can be described as a form of abstract decay in which genres and theories melt into each other, dissolve and produce something new and different. Another example may help. Alex Williams has used the process of abstract decay to describe wonky music. Wonky is a word used for a certain style of dance music that emerged in the first decade of this century. In Williams’ own words:

What is most interesting about Wonky thus far is its trans-generic nature, its relative looseness and inclusiveness to a proper diversity of disparate aesthetics: stretching between Rave, Dubstep, G-Funk, Instrumental Hip Hop, Crunk, Pop, UK Garage, IDM/Electronica, Techno… etc. Moreover it operates in a number of different tempos, (chiefly dubstep’s 138 bpm and hip hop’s slower 90-110bpm) with producers scattered between different continents, and different regimes of consumption (club and home listening). Even further, the very notion of “wonky” itself is a deeply slippery idea. Sometimes it indicates de-quantised drums (as in Flying Lotus, Lukid, and other post Dilla beat-artisans) sometimes pitch-bent synth and bass work (Joker, Starkey, Rustie), sometimes a maddening rush of 8 Bit arpeggios (Zomby, Ikonika, Rustie again). Wonky is not so much a genre unto itself. Instead it operates as a kind of trans-generic mutational agent, spreading seamlessly between bpm species, liquidating textures, distending rhythmical consistency like so much manipulable sonic sticky toffee: All that is solid melts into a new electronic psychedelia, as fluid and mellifluous as the globalised capitalism which spreads it. Wonky in the sense of off-key, out of place, misshapen, breaking through an electronic music environment increasingly characterised by myopic microgenre developments and parodic stylistic affectations, as a set of strategies to be applied to a pre-existing template. In a sense then Wonky detournes pre-existing genres (instrumental hip hop, grime, rave, dubstep etc) corroding the arid grid-like bass kick / snare matrix into something closer to the handmade asymmetrical anti-rhythms of Burial, pushing the shuffled culminating and accelerating sensual textural play towards a surrealist fair ground of Dali-esque percussive affect.

(An example of a wonky track can be found here.) This excellent description of wonky leads Williams to view the production and sound of this music as abstract decay. This form of decay is abstract because it uses the mechanisms and dynamics outlined by Negarestani and is not necessarily connected to the romantic/gothic/black metal associations of decay. In his Wonky 3 post he summarizes:

Wonky applies, in its woozy textures, liquefying day-glow synthetics and dilating anti-quantised beats something surprisingly akin to the process My Bloody Valentine exercised upon indie rock guitar music- from within the tradition itself (not from the cynical perspective of the outsider) a method by which surplus aesthetic value can be extracted from deadened forms, by applying abtract-decay processes of liquefaction, breaking down the rigid sonic matter (be it the hard bone matter of drum patterns or the softer flesh of synth textures or the fibrous masses of bass pressure). In this sense perhaps it intimates a kind of sonic anti-affirmatory dark vitalism, at the level of process, since perversely its immediate affect is bright, crisp, colourful, rather than the dank encrustations we would traditionally associate with decay.

Of course music is not the only field abstract decay can be applied to. In his Collapse VI contribution Negarestani outlines fruitful ways to describe politics, architecture and many other fields of research from the perspective of decay.

An interesting question concerning the philosophy of decay is its status. Is it a fully elaborated theory that can describe all the processes in this world? Is it a theory fiction? I haven’t found any commentary on this question by Negarestani himself. I think it is something in between, and there is a summary of Negarestani’s project by Alex Williams that seems to provide a basis for my current understanding. Before the quote I’m going to outline my interpretation. I think the philosophy of decay isn’t a theory that should explain everything – at least not in the way a Manuel DeLanda or a Graham Harman would see their theories. Many philosophers work with one concept and try to improve it to a point where it seems as adequate as possible. Often this leads to an incomplete description that blinds out certain aspects of the world. What Negarestani seems to do is look for elements in theories that can be brought together to generate new descriptions of our world. This leads to new perspectives and insights. The philosophy of decay allows new ways of thinking and more adequate descriptions than more traditional systems. But that doesn’t mean it is the right or perfect theory. It is more like an experiment – a philosophical experiment or an experiment in theory.

A blog founded in 2018 and involving Reza Negarestani is called Toy Philosophy, and his book Intelligence and Spirit (which I haven’t read yet) is available with a Lego set. The Lego set is announced with this description of toy philosophy:

What sets toy philosophy apart from regular philosophy is its emphasis on play, counterfactuals, and model pluralism, as opposed to a game with pre-established rules and a set method.

Central to philosophical Lego is the concept of a toy model. Every model is bound to be theory-laden to some extent. Theories always come with their own baggage of implicit assumptions not only about how things are, but about how and by what means they should be structured. We often take such assumptions for granted, as if there is a direct correspondence between how we structure the world and what the order of things is—the traditional game played by philosophy. In contrast, we imagine a new form of philosophy where such correspondence is understood to be an unwarranted assumption, and where, in playing endlessly with our conceptual and logical resources, we enrich the very reality we are talking about, and even fabricate it anew.

Further support for my view can be found in an article by Alex Williams called Escape Velocities. The article is primarily about accelerationism (more on this in a future post). For the purposes of this post it’s not important what accelerationism is – only that Negarestani’s work can be described by a framework Williams calls epistemic accelerationism:

For Negarestani, epistemic acceleration rests in generating new ways to navigate conceptually. This spatialized, geometric understanding of conceptual behavior emphasizes the creative aspects of thought, focusing on conceptual discovery and abductive transition, over and above analytic parsimony. This modern system of knowledge, much inspired by recent work in the synthetic philosophy of mathematics, is driven by opportunities to build connections, bootstrapping out of local horizons of knowledge and tracing the pathways which exist towards more globalized conceptual horizons. In this sense, Negarestani’s project is one which argues for a “true to the universe” thought, which binds the traumatic and vertiginous inhuman perspectives that scientific and mathematical thought provide to the rational subject. This revolution “for and by the open” prioritizes neither the global over the local nor the local over the global, but rather their imbrication with one another, their potential for perforation, and their possibilities for transplantation or transition. Considered from the perspective of an epistemological account of conceptual space, this is to operate under the rational injunction towards exploration, albeit of a necessarily traumatic kind. Epistemic acceleration then consists in the expansion and exploration of conceptual capacity, fed by new techno-scientific knowledges, resulting in the continual turning-inside-out of the humanist subject in a perpetual Copernican revolution. In so doing, epistemic accelerationisms preserve the crucial distinctions between thought and being, and hence are capable of undergirding a rationalist picture of the world and its operations.

[In this post I’m not going into Negarestani’s theory of trauma that is mentioned here.] In recent years Negarestani has been described as a neorationalist. His thinking uses resources from the analytic and rationalist side of philosophy. But he adds something new, opens a different perspective and uses these approaches differently.

A lot of rational thought works under the umbrella of deductive reasoning. A deduction can be described as the application of a general rule to a cause to which it applies, concluding a result (e.g. premise 1 or rule: If it rains, the street gets wet. Premise 2 or cause: It rains. Conclusion: The street gets wet.). Because rules are not always given, we need a method to find rules. One classic form of finding rules is inductive reasoning. An induction uses the repeated observation of cause and result to infer a rule (e.g. I observe that it rains and the street gets wet. Therefore I state: every time it rains, the street gets wet). A classic example of induction highlights its main problem: I observe a lot of swans and they are all white. Now I conclude: all swans are white. After this conclusion I discover a black swan and my conclusion turns out to be wrong. This problem is usually framed by describing deduction as truth-conserving and induction as truth-expanding. A deduction doesn’t add a new truth: if the premises (rule and cause) are true, so is the conclusion. An induction, on the other hand, adds a new truth. Even if cause and result are true, that doesn’t necessarily mean they always hold; the generalization in the rule adds a new truth.

Charles Sanders Peirce therefore added a new kind of reasoning: abductive reasoning. Abductive reasoning is often described as the inference to the best explanation. In the schematic illustration of rule, cause and result, an abductive inference starts with the result and asks what rule could be applied to find the cause for it. Therefore it is often compared to detectives who investigate a murder scene with all the results in the form of evidence and try to find the cause. This description of the three forms of inference as an ordering of cause, result and rule is limited, but it exemplifies the point I try to make about Reza Negarestani. While many traditional rationalists work with axioms and rules to see what they can deduce from them, or try to find out how an inductive inference can have as small a margin of error as possible, Negarestani can be assigned to abductive reasoning. In the quote above Williams writes that Negarestani’s theory is “focusing on conceptual discovery and abductive transition, over and above analytic parsimony.” An announced collection of Negarestani’s writings is called Abducting the Outside.
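To make the contrast between the three patterns tangible, here is a minimal, purely illustrative Python sketch. It treats a rule simply as a cause-to-result mapping; the function names and data structures are my own, not Peirce’s or Negarestani’s:

```python
def deduce(rules, cause):
    """Deduction: apply a known rule to a given cause, conclude the result.
    Truth-conserving: nothing new is added."""
    return rules.get(cause)

def induce(observations):
    """Induction: generalize repeated (cause, result) observations into rules.
    Truth-expanding and fallible: one black swan breaks the generalization."""
    rules = {}
    for cause, result in observations:
        rules[cause] = result
    return rules

def abduce(rules, result):
    """Abduction: start from a result and ask which causes, under the
    available rules, would explain it."""
    return [cause for cause, res in rules.items() if res == result]

rules = induce([("it rains", "the street gets wet"),
                ("a pipe bursts", "the street gets wet")])
print(deduce(rules, "it rains"))             # -> the street gets wet
print(abduce(rules, "the street gets wet"))  # -> ['it rains', 'a pipe bursts']
```

The deduction is boringly safe, the induction silently overgeneralizes, and the abduction returns several competing explanations – which is exactly why it needs a criterion for the best one.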

Negarestani’s abductive reasoning shouldn’t be described as seeing results and looking for the right rule to find the cause. Instead it can be seen as toying around with theories to see how they change the cause, the result as well as the connection between them. Theoretical frameworks have a huge impact on our understanding of the world. Often they are implicit. Making them explicit and toying around with them opens the door to an awareness of this framework.

This point should not be seen as a form of relativism. The forthcoming collection is called Abducting the Outside. Williams quotes Negarestani as someone who wants to develop a theory that is “true to the universe”. There is something we investigate and explore. The use of theories and theory fictions helps us get a broader picture, but they have to be tested against an outside, a reality. Different theories give us different insights about the world – some more adequate, some less. In the end theory fiction may even be the wrong word or concept, because every theory can be seen as a kind of fiction or narrative that gives us a guideline and a structuring principle through the complexity and mess of our world. But every theory also shows us something about the outside. Some of the questions that remain are: How do we discern what a theory occludes and what it illuminates? How can we bring our insights together to get a better understanding of our world without falling back into a single framework that occludes certain aspects of the world?

DeLanda’s Intensive Science and Virtual Philosophy; Problems and Questions

I summarized the core ideas of DeLanda’s book Intensive Science and Virtual Philosophy in a series of posts. Now I want to consider the overall argumentation: how it works, which problems arise and whether there is a possibility to solve them.

DeLanda’s Intensive Science wants to present Deleuze as a fertile thinker for the philosophy of science. DeLanda criticizes philosophers of science who operate with obsolete categories that have no place in science anymore. The main problem for him are essences. Essences are ahistoric, static, transcendent and product-oriented. And the problem does not arise only when all these points come together; each one of them is a big problem on its own.

DeLanda discovers that Deleuze used state spaces and the mathematics behind them to develop an ontology much closer to contemporary science than traditional approaches. State spaces allow you to see entities as processes: they are not static, they are dynamic. These dynamics give you a history of the entities you observe. The history you get via the analysis of state spaces gives you a distribution of attractors and critical values at which the behavior of the system changes. This history is not located in a completely different realm – like Plato’s ideas – but in the systems themselves. But at this point a different understanding of reality is needed. Some philosophers view reality as the sum of all actualized entities (and their actualized relations). DeLanda sees reality as the sum of actualized capacities and the tendencies of entities and systems. These tendencies – without ever being fully actualized – structure our reality and belong to what DeLanda calls virtuality. Virtuality is real, but it is not transcendent. Virtuality and tendencies are discovered by experiments and by mathematical methods applied to the systems themselves. Thereby you also capture the processes and discover critical values that lead to symmetry-breaking cascades. These processes are never closed. So an entity is captured as a process and not as a product.
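To make the talk of attractors and critical values less abstract, here is a tiny Python sketch – my own illustration, not an example from DeLanda’s book – using the logistic map, a textbook system whose long-term behavior changes qualitatively at critical parameter values:

```python
def long_term_behavior(r, x0=0.2, settle=1000, sample=8):
    """Iterate the logistic map x -> r*x*(1-x) past its transient and
    return the set of states it keeps visiting: an approximation of
    the attractor for this parameter value r."""
    x = x0
    for _ in range(settle):
        x = r * x * (1 - x)
    states = set()
    for _ in range(sample):
        x = r * x * (1 - x)
        states.add(round(x, 4))
    return sorted(states)

# Crossing critical values of r changes the attractor itself:
for r in (2.8, 3.2, 3.5):
    print(r, long_term_behavior(r))
# r = 2.8 -> one fixed point; r = 3.2 -> a 2-cycle; r = 3.5 -> a 4-cycle.
```

The analysis doesn’t just record what the system does at one moment; it reveals a distribution of attractors and the critical values of the parameter at which the qualitative behavior changes – the kind of structure DeLanda locates in the virtual.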

There are branches of philosophy and science where essences find their way back in. I presented two of them: the classic possible worlds approach and the deductive-nomological approach to (natural) laws. Both approaches are rooted in primitive logic- and language-based theories. In the case of possible worlds, counterfactual sentences are formed that view the entity as something fixed instead of as a process. In the deductive-nomological approach, laws are fixed postulated entities that have a meaning in themselves and can be confirmed via experiments.

DeLanda’s state space approach, as presented in Intensive Science, wants to exclude language. His process-oriented approach to entities constrains their possibilities and deprives them of their fixed essences. Instead, possibilities are defined via state spaces, their parameters (or dimensions) and critical values. Laws are approximated results, but the true science lies in the more complex mechanisms of the mathematical and experimental methods that generate them.

For me it is this strong enmity to language that leads to problems. In Intensive Science there are a lot of arguments against approaches that reduce analysis to language. To be clear from the start: I think DeLanda is absolutely right to criticize the reduction to language problems. But ignoring language completely is a direction that leads to a lot of problems, two of which I’m going to present now. After this I introduce a different reading of DeLanda – or maybe a development in his thinking – based on newer texts written by him.

Questions

Although I have sympathies for his approach, there are parts of his theory that I either don’t understand properly or that face real problems.

Flat ontology:

A point I didn’t write about in the previous posts is that DeLanda favors a flat ontology. This means that every entity has the same ontological status. In the case of species (biological, a chemical element, or whatever) there is a plausibility to this, especially if you consider his mathematical approach. You can understand species as entities with properties (given via their state spaces). They have causal impact and a behavior that is no different from an individual’s. In his book A New Philosophy of Society he talks about emergence and argues that species and other things are emergent entities: they have properties their parts don’t have. You can’t reduce them to their parts, but you can still explain how they developed. Therefore emergence is not something mythical or unexplainable.

This gets further complicated when he starts to talk about concept-independent entities. Concept-independent entities like nation states have a behavior of their own. They are emergent entities that have a history, and their emergence can be explained. They are not mind-independent: they need minds in the world in order to exist. But their behavior is independent of the concepts we make of them. Our conception of a nation state can be wrong. But an entity has to be based in the world. He uses the example of a female refugee and makes it clear that such a term can create its own referent. Such a creation is nonetheless only possible through a connection to our social reality:

to explain the case of a female refugee one has to invoke, in addition to her awareness of the meaning of the term ‘female refugee’, the objective existence of a whole set of institutional organizations (courts, immigration agencies, airports and seaports, detention centres), institutional norms and objects (laws, binding court decisions, passports) and institutional practices (confining, monitoring, interrogating), forming the context in which the interactions between categories and their referents take place. (A New Philosophy of Society, p. 2)

All this leads him to a position where he considers every part and every emergent entity as ontologically equal. This doesn’t mean that they always exist at the same time or that they have the same scale. It just means that you cannot say that one entity is ontologically more important than another.

This finally leads to my problem: what exactly does it mean to be ontologically important, and what is gained by this flat ontology?

To see this problem more clearly, a contrast with another approach is helpful. Aristotle is a philosopher DeLanda often attacks as someone who only considers results instead of processes. Wolfgang Wieland, in his book Die aristotelische Physik (Aristotle’s Physics), offers a different view of this ancient philosopher. In a first part he discusses the different approaches in scholarship. One – the systematic approach – seems to be the one DeLanda attacks. Interestingly, Wieland offers arguments similar to DeLanda’s to criticize this approach. The systematic approach views the whole of Aristotle’s oeuvre as a building in which there are truths that are correlated via deduction to other truths. Wieland calls this a deductive-axiomatic method (the similarity to DeLanda’s critique of deductive-nomological approaches should be obvious).

Wieland’s own approach to reading Aristotle is called Prinzipienforschung (research into principles). This approach starts from the fact that there is a question. To pose a question presupposes that we have a certain knowledge of a thing (structured by language and former experiences). There is nothing that is a principle in itself; principles exist only in relation to a question. This view allows us to see the works of Aristotle – especially his Physics – in a different light. Seemingly there are a lot of contradictory approaches in Aristotle’s work. Some explain them by referring to the esoteric nature of his preserved work (esoteric in this context means that it was not meant for publication, but only for discussions inside a well-informed circle; exoteric refers to works meant for publication; these works of Aristotle are lost – e.g. all his dialogues). But Wieland’s approach allows a different view. If you consider the question and the perspective that is taken in asking it, you get different principles and therefore different answers. These answers are not arbitrary: one has to consider which field of knowledge one is in; one has to consider experiences and whether the answers are appropriate to these experiences. Therefore Wieland’s Aristotle doesn’t reduce everything to language, but considers language an important vehicle for knowledge and a way to understand the question. This approach can be read as a way to avoid ontological importance. Importance is bound to the question and the perspective. Therefore the question of ontological importance is dispensable, and it is not clear what DeLanda gains by it.

The relevance of problems

In the last post I presented DeLanda’s problematic approach. DeLanda defines criteria for well-posed problems. A point I probably haven’t emphasized enough is that his view is strongly opposed to linguistic approaches. His reference to state spaces gives him the possibility to explain, independently of language, how problems are neither overdetermined nor indeterminate.

But his own examples can be used against him. In the priest/bank robber example there are two possible answers depending on which contrast you choose. His example of the connected fox and rabbit populations gives you different answers too. He wants to exclude the answers that are ecologically irrelevant – the overdetermined and indeterminate ones. But one can imagine the following research project: you want to know under which circumstances rabbits can escape foxes. Are there features in the environment rabbits can use to hide that would allow a large population of rabbits to survive more easily? This question can be ecologically relevant. So depending on what you want to know, there is more than one relevant problem. These problems don’t depend completely on language, but knowing and understanding the question plays a huge role. The question is linguistic and structured by previous knowledge. Mathematical methods and experience can offer us new ways of thinking and posing questions, but to find the right scale you have to consider language and referent.

A different DeLanda

In Intensive Science DeLanda repeatedly wants to ignore language. In his book A New Philosophy of Society he writes in the introduction that language plays an important but not a constitutive role. As far as I can see, in this book there is only one function for language: social entities are held together by processes of coding, and one kind of coding is performed by language. But that’s it. Another view can be found in a short article called Ontological Commitments (https://library.oapen.org/bitstream/id/3787d2bb-eda5-4cda-91f2-ba0d4091e3fe/1004511.pdf p. 71 – 73). The crucial passage is the following:

Further epistemological consequences follow from the realization that not only the entities we study have properties and capacities but so do we, the producers of knowledge. Idealist and empiricist philosophers tend to assume that all knowledge is representational, the formula for which is Knowing That ___, a formula in which the blank is filled by a declarative sentence, that is, a sentence stating a fact, a priori or a posteriori. But the need for active interventions to produce knowledge points to another formula, Knowing How ___, in which the blank is filled by an infinitive verb: knowing how to swim, to ride a bicycle, to dissect an animal, to mix two substances, to conduct a survey. Unlike know-that, which may be transmitted by books or lectures, know-how is taught by example and learned by doing: the teacher must display the appropriate actions in front of the student and the student must then practice repeatedly until the skill is acquired. The two forms of knowledge are related: we need language to speak about skills and theorize about them. But we need skills to deploy language effectively: to argue coherently, to create appropriate representations, to compare and evaluate models. Indeed, the basic foundation of a literate society is formed by skills taught by example and learned by doing: knowing how to read and how to write.

This allows us to see DeLanda differently and puts him very close to Wieland’s Aristotle. Problems are problems in the context of knowledge and of the skills needed to acquire it. The rabbit-fox example can be read as a well-posed problem for a certain ecological question. But there are other questions and other problems. Knowledge and questions can be seen as properties and capacities of the producers of knowledge. This allows us to reintroduce language as a property of the producers of knowledge. Language is not arbitrary, and it is also not the most important or only aspect of reality.

But even if this passage presents DeLanda’s Intensive Science in a different light, the quotes from that book are still there. Probably there is a development in his thought, and DeLanda came to realize the epistemological problems that arise if one ignores language completely.

And another question still remains: why insist on a flat ontology? A friendly reading gives us the possibility to see it as a way to refute other approaches. In A New Philosophy of Society DeLanda criticizes different approaches that reduce social entities: the reduction to individuals (and their actions), the reduction to a structure of society as a whole, or a reduction to relations that misses the properties and tendencies of the entities. A flat ontology allows DeLanda to criticize all these approaches as methods that reduce reality to a certain set of privileged entities (or relations). Instead he wants to show that all entities have their importance. This explains his idea of a flat ontology. But the question remains whether a flat ontology is a useful concept or whether it leads to a series of problems and contradictions. I still see it as superfluous, and it leads to misunderstandings. To say that a part has the same ontological status as the whole it is part of is confusing. DeLanda is clear that this doesn’t mean the same scale and so on. But a Wieland-inspired approach sidesteps this misunderstanding: something is relevant (or even more relevant) if you ask questions on one scale; on a different scale the relevance shifts to other elements. Nothing is prior or relevant in itself, only depending on the perspective. You don’t need to introduce a flat ontology to criticize reductionist approaches.

Conclusion

This concludes my series of DeLanda posts (even if I will probably come back to some of his ideas). As we have seen, I think there are a lot of good points made by him. I also enjoy the perspective of thinking through philosophical questions with contemporary science and math. This allows fantastic shifts in thinking. There are problems that need to be addressed, but that doesn’t mean they are unsolvable. Maybe some changes lead to something DeLanda wouldn’t embrace and therefore to something new. Our world remains complex and we need complex theories to make sense of it. Otherwise oversimplification can lead to big problems.

DeLanda’s Epistemology, Part 2: What is a Problem?

In the last post of this series I started to summarize chapter 4 of DeLanda’s Intensive Science and Virtual Philosophy. In the first part DeLanda critiques philosophers of science who use the deductive-nomological approach. This approach consists in the assumption that there are fundamental laws from which we can deduce the actual behavior of our world. For DeLanda this is not only an essentialist approach; it also ignores mathematical modelling techniques and causal relations. It unifies at the expense of accuracy. In the second part of the chapter DeLanda uses Deleuze to present a different approach: the problematic approach.

The first definition of a problem is that it gives a distribution of what is relevant and what is irrelevant.

To get a better picture of problems, DeLanda introduces contrast spaces. To understand contrast spaces he uses an example from Alan Garfinkel: a priest asks a thief “Why do you rob banks?” and the thief answers “Because that’s where the money is!”. This is obviously not the answer the priest expected. The priest wants to know why he robs in the first place, while the thief assumes that he has to rob and that the real question is: what to rob? A contrast space consists of what is presupposed in a question and the explanatory alternatives.
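A contrast space can be made concrete with a toy data structure. The following Python sketch is my own illustration of Garfinkel’s example, not anything from DeLanda’s text:

```python
from dataclasses import dataclass

@dataclass
class ContrastSpace:
    """A question's presupposition plus the alternatives it contrasts."""
    presupposition: str
    alternatives: list[str]

# The same surface question lives in two different contrast spaces:
priest = ContrastSpace(
    presupposition="one could also choose not to rob at all",
    alternatives=["rob banks", "lead an honest life"],
)
thief = ContrastSpace(
    presupposition="robbing is a given; only the target is in question",
    alternatives=["rob banks", "rob gas stations", "rob people"],
)
# "Why do you rob banks?" picks out a different alternative set in each
# space, so priest and thief answer different questions with the same words.
```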

To take a more scientific example, one can consider an entangled population model of two species (a very simple mathematical approach to this are the Lotka-Volterra equations). Applied to foxes and rabbits one can develop a realistic model: if there are many rabbits and few foxes, the foxes have a lot of rabbits to eat and their population grows. This leads to more foxes and fewer rabbits. Now the foxes don’t have enough food and many of them die. The rabbits, on the other side, now have fewer predators hunting them and can breed rapidly. Now we are at the initial state again and the cycle repeats.
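For readers who want to see the model itself: in their standard textbook form (the notation below is the usual convention, not DeLanda’s), the Lotka-Volterra equations read:

```latex
\frac{dx}{dt} = \alpha x - \beta x y, \qquad
\frac{dy}{dt} = \delta x y - \gamma y
```

Here x is the rabbit (prey) population, y the fox (predator) population, α the rabbits’ growth rate, β the rate of predation, γ the foxes’ death rate and δ the rate at which eaten prey converts into new foxes. The cycle described above appears as a closed orbit in the state space of this system.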

This is a well-posed ecological problem. But there are alternatives. The question “Why was this rabbit eaten?” can be understood as: why was THAT rabbit eaten by THAT fox? When we go this route, we have something Deleuze/DeLanda call overdetermination. The problem is badly posed because it is too dependent on circumstances and explanatorily unstable. Unstable means that small changes in the initial conditions lead to a different result: if the rabbit is spatially too far away from the fox, the fox never reaches, kills and eats it. Another way to badly pose a problem is indetermination. In this case there are no possibilities that can be discerned. A well-posed problem must therefore have a distribution of possibilities and explanatory stability.

The state space gives us exactly that. We have small changes in initial conditions that lead to the same singularity/attractor, and we have a distribution of singularities/attractors over the space itself. Another feature of problems and state spaces is that they give us quasi-causal factors. In the population example, causal factors are e.g. that foxes need rabbits to eat in order to survive. Quasi-causal factors are mechanism-independent. As seen in the last posts about DeLanda, universality ignores simple causality but highlights features of distribution (e.g. there are distributions of attractors and bifurcations that apply to population models as well as to convection).
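A crude numerical sketch shows this explanatory stability in action. The following Python snippet – my own illustration with arbitrary parameter values, not an example from the book – integrates the Lotka-Volterra equations with a simple Euler step, good enough to watch the populations cycle:

```python
# Parameters: rabbit growth, predation, fox death, conversion of eaten
# prey into new foxes. The values are arbitrary illustrative choices.
alpha, beta, gamma, delta = 1.0, 0.1, 1.0, 0.05
rabbits, foxes = 40.0, 9.0   # initial populations
dt = 0.001                   # small Euler step (crude, but fine to illustrate)

for step in range(20001):
    if step % 2000 == 0:
        print(f"t={step * dt:5.1f}  rabbits={rabbits:6.1f}  foxes={foxes:6.1f}")
    drabbits = (alpha * rabbits - beta * rabbits * foxes) * dt
    dfoxes = (delta * rabbits * foxes - gamma * foxes) * dt
    rabbits += drabbits
    foxes += dfoxes
```

Perturbing the initial populations a little changes the numbers along the way but not the qualitative picture: the trajectory still cycles around the same region of the state space. That robustness is what makes the ecological problem well posed, while the question about THAT rabbit and THAT fox is hostage to every small perturbation.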

This leads DeLanda to the conclusion that the structure of a well-posed problem is the counterpart to the Deleuzian ontology of the virtual. The solutions of the problem are the individuations of the virtual structure.

Conclusion

Now I have given a short summary of chapters 1 and 4 of DeLanda’s book (and of course I skipped many details). I think the parts I presented are the core ideas of DeLanda’s approach. In the next post I want to summarize the argument, evaluate how it works and formulate open questions.