The fascist’s trap

The following lines appear in the opening portion of G.S. Mudur’s report in The Telegraph about government opposition to student protests:

“The people protecting our democracy are the people in JNU. They’re taking beatings on our behalf,” K.S. Venkatesh [a professor of electrical engineering at IIT Kanpur] told the assembled group [of students and faculty members]. “We’re sitting here comfortably. Look what the people in JNU are taking — and (at) some other places too.”

Don’t these lines sound familiar?

A popular right-wing narrative in the media these days has evoked images of the precarious conditions in which India’s soldiers apparently protect the country’s borders from the Islamic hordes that would overrun us, while armchair activists and journalists squander their hard-won peace with protests against their own government, thus disrespecting the soldiers themselves. This way, the fascist inverts the relationship between a country and its army: instead of soldiers existing because there is a people worth protecting, the people exist because there is a soldier worth protecting.

Ultimately, the soldier’s body and the body’s war become the cause itself – the ultimate excuse to deploy whatever means necessary to maintain internal order and homogeneity. And the citizen who deviates from this is condemned and punished with social sanctions that are not subject to judicial scrutiny. The heterodox agent becomes the perfect anti-national because she has not conducted herself in a manner ‘worthy’ of the soldiers’ ‘sacrifice’. Indeed, the BJP has tied such misconduct to the actions of India’s neighbours, especially Pakistan and China, and increasingly Bangladesh, to create a self-fulfilling, self-justifying prophecy.

This is why Venkatesh’s words, that “we’re sitting comfortably”, are unsettling. It’s perfectly okay to sit comfortably – at least, it should be. Yes, JNU, Jamia, Aligarh and so many other universities and their students are fighting, and we are in solidarity with them. We will also take to the streets (and other fora) and express our support as well as our objections loud and clear. But we will not do this because our compatriots and comrades in JNU are being thrashed by the police. We will do it because we want to.

Second, we will not feel guilty about sitting comfortably either – which, in Venkatesh’s speech, likely means students and teachers discussing in classrooms, students and teachers conducting tests in labs, students and teachers engaging in conversation and debate. It is for the right to do all of these things that we also protest, as well as for the right to think peacefully, to engage in civil conversation and to enjoy the commons. If we forget this, and erect the bruised body as the motivation for individual political action, we fall into the fascist’s trap: that we must not sit comfortably because doing so offends our protectors (the students of JNU or whoever).

The chrysalis that isn’t there

I wrote the following post while listening to this track. Perhaps you will enjoy reading it to the same sounds. Otherwise, please consider it a whimsical recommendation. 🙂

I should really start keeping a log of the different stories in the news, all of which point to the little-acknowledged but only-too-evident fact that science – like so many things, including people – does not embody lofty ideals as much as the aspirations to those ideals. Nature News reported on January 31 that “a language analysis of titles and abstracts in more than 100,000 scientific articles,” published in the British Medical Journal (BMJ), had “found that papers with first and last authors who were both women were about 12% less likely than male-authored papers to include sensationalistic terms such as ‘unprecedented’, ‘novel’, ‘excellent’ or ‘remarkable’;” further, “The articles in each comparison were presumably of similar quality, but those with positive words in the title or abstract garnered 9% more citations overall.” The scientific literature, people!
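
To make concrete the kind of term-flagging such a language analysis involves, here is a minimal Python sketch. It is not the BMJ study’s actual methodology or code; the term list and the toy records below are my own inventions, purely for illustration.

    # A toy illustration of flagging "positive" framing terms in paper titles
    # and abstracts; not the BMJ study's methodology, just the general idea.
    # The term list and the sample records are invented for illustration.

    POSITIVE_TERMS = {"unprecedented", "novel", "excellent", "remarkable"}

    papers = [
        {"title": "A novel probe of stellar winds",
         "abstract": "We report unprecedented angular resolution."},
        {"title": "Measurements of stellar winds",
         "abstract": "We report improved angular resolution."},
    ]

    def uses_positive_terms(paper):
        """Return True if the title or abstract contains any flagged term."""
        words = (paper["title"] + " " + paper["abstract"]).lower().split()
        return any(term in words for term in POSITIVE_TERMS)

    flagged = [p["title"] for p in papers if uses_positive_terms(p)]
    print(flagged)  # prints: ['A novel probe of stellar winds']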

Science is only as good as its exponents, and there is neither meaning nor advantage to assuming that there is such a thing as a science beyond, outside of and without these people. Doing so inflates science’s importance beyond what it deserves, suppresses its shortcomings and prevents them from being addressed. For example, the BMJ study prima facie points to gender discrimination, but it also describes a scientific literature that you will never find out is skewed, and therefore unrepresentative of reality, unless you acknowledge that it is constituted by papers authored by people of two genders, on a planet where one gender has maintained a social hegemony for millennia – much like you will never know Earth has an axis of rotation unless you are able to see its continents or make sense of its weather.

The scientific method describes a popular way to design experiments whose results scientists can use to elucidate refined, and refinable, answers to increasingly complex questions. However, the method is an external object (of human construction) that only, and arguably asymptotically, mediates the relationship between the question and the answer. Everything that comes before the question and after the answer is mediated by a human consciousness undeniably shaped by social, cultural, economic and mental forces.

Even the industry that we associate with modern science – composed of people who trained to be scientists over at least 15 years of education, then went on to instruct and/or study in research institutes, universities and laboratories, being required to teach a fixed number of classes, publish a minimum number of papers and accrue citations, and/or produce X graduate students, while drafting proposals and applying for grants, participating in workshops and conferences, editing journals, possibly administering scientific work and consulting on policy – is steeped in human needs and aspirations, and is even designed to make room for them. But many of us non-scientists are frequently and successfully tempted to treat the act of becoming a scientist as an act of transformation: characterised by an instant in time when a person changes into something else, a higher creature of sorts, the way a larva enters a magical chrysalis and exits a butterfly.

But for a man to become a scientist has never meant the shedding of his identity or social stature; ultimately, to become a scientist is to terminate at some quasi-arbitrary moment the slow inculcation of well-founded knowledge crafted to serve a profitable industry. There is a science we know as simply the moment of discovery: it is the less problematic of the two kinds. The other, in the 21st century, is also funding, networking, negotiating, lobbying, travelling, fighting, communicating, introspecting and, inescapably, some suffering. Otherwise, scientific knowledge – one of the ultimate products of the modern scientific enterprise – wouldn’t be as well-organised, accessible and uplifting as it is today.

But it would be silly to think that in the process of constructing this world-machine of sorts, we baked in the best of us, locked out the worst of us, and threw the key away. Instead, like all human endeavour, science evolves with us. While it may from time to time present opportunities to realise one or two ideals, it remains for the most part a deep and truthful reflection of ourselves. This assertion isn’t morally polarised, however; as they say, it is what it is – and this is precisely why we must acknowledge failures in the practice of science instead of sweeping them under the rug.

One male scientist choosing more uninhibitedly than a female scientist to call his observation “unprecedented” might have been encouraged, among other things, by the peculiarities of a gendered scientific labour force and scientific enterprise, but many male scientists indulging just as freely in their evaluatory fantasies, such as they are, indicates a systemic corruption that transcends (but does not escape) science. The same goes, as in another recent example, for the view that science is self-correcting. It is not, because people are not, and they need to be pushed to be. In March 2019, for example, researchers uncovered at least 58 papers published in a six-week period whose authors had switched their desired outcomes between the start and end of their respective experiments to report positive, and to avoid reporting negative, results. When the researchers wrote to the authors as well as the editors of the journals that had published the problem papers, most of them denied there was an issue and refused to accept modifications.

Again, the scientific literature, people!

Science v. tech, à la Cixin Liu

A fascinating observation by Cixin Liu in an interview with John Plotz for Public Books, translated by Pu Wang (numbers added):

… technology precedes science. (1) Way before the rise of modern science, there were so many technologies, so many technological innovations. But today technology is deeply embedded in the development of science. Basically, in our contemporary world, science sets a glass ceiling for technology. The degree of technological development is predetermined by the advances of science. (2) … What is remarkably interesting is how technology becomes so interconnected with science. In the ancient Greek world, science develops out of logic and reason. There is no reliance on technology. The big game changer is Galileo’s method of doing experiments in order to prove a theory and then putting theory back into experimentation. After Galileo, science had to rely on technology. … Today, the frontiers of physics are totally conditioned on the developments of technology. This is unprecedented. (3)

Perhaps an archaeology or palaeontology enthusiast might have regular chances to see the word ‘technology’ used to refer to Stone Age tools, Bronze Age pots and pans, etc., but I have almost always encountered these objects only as ‘relics’ or such in the popular literature. It’s easy to forget (1) because we have become so accustomed to thinking of technology as pieces of machinery with complex electrical, electronic, hydraulic, motive, etc. components. I’m unsure of the extent to which this is an expression of my own ignorance, but I’m convinced that our contemporary view and use of technology, together with the fetishisation of science and engineering education over the humanities and social sciences, also plays a hand in maintaining this ignorance.

The expression of (2) is also quite uncommon, especially in India, where the government’s overbearing preference for applied research has undermined blue-sky studies in favour of already-translated technologies with obvious commercial and developmental advantages. So when I think of ‘science and technology’ as a body of knowledge about various features of the natural universe, I immediately think of science as the long-ranging, exploratory exercise that lays the railway tracks into the future that the train of technology can later ride. Ergo, less glass ceiling and predetermination, and more springboard and liberation. Cixin’s next words offer the requisite elucidatory context: advances in particle physics are currently limited by the size of the particle collider we can build.

As for (3), however, he may not be able to justify his view beyond specific examples, simply because – to draw from the words of a theoretical physicist from many years ago, that they “require only a pen and paper to work” – it is possible to predict the world for a much lower cost than one would incur to build and study the future.

Plotz subsequently, but thankfully briefly, loses the plot when he asks Cixin whether he thinks mathematics belongs in science, to which Cixin provides a circuitous non-answer that somehow misses the obvious: science’s historical preeminence took off when natural philosophers began to encode their observations in a build-as-you-go, yet largely self-consistent, mathematical language (my favourite instance is the invention of non-Euclidean geometry, which enabled the theories of relativity). So instead of belonging within one of the two, mathematics is – among other things – better viewed as a bridge.

The mad world

Kate Wagner writes in The Baffler:

What makes industrial landscapes unique is that they fascinate regardless of whether they’re operating. The hellish Moloch of a petrochemical refinery is as captivating as one of the many abandoned factories one passes by train, and vice versa. That doesn’t mean, though, that all industrial landscapes are created equal. Urban manufacturing factories are considered beautiful—tastefully articulated on the outside, their large windows flooding their vast internal volumes with light; they are frequently rehabilitated into spaces for living and retail or otherwise colonized by local universities. The dilapidated factory, crumbling and overgrown by vegetation, now inhabits that strange space between natural and man-made, historical and contemporary, lovely and sad. The power plant, mine, or refinery invokes strong feelings of awe and fear. And then there are some, such as the Superfund site—remediated or not—whose parklike appearance and sinister ambience remains aesthetically elusive.

One line from my education years that I think will always stick with me was uttered, perhaps in throwaway fashion, by an excellent teacher who was nonetheless moving on to a larger point: “Ugliness is marked by erasure.” Wagner’s lines above suggest our need for beauty extends even to landmarks of peacetime disaster, such as abandoned factories, railway stations, refineries, etc., because their particular way of being broken and dead contains stories, and lessons, that a pile of collapsed masonry or a heap of trash does not. Apparently there is a beauty in the way they have failed, contained in features of their architecture and design that have managed to rise, or stay, above the arbitrary chaos of unorganised disaster. They are, in other words, haunted by the memory of control.

But as Wagner walks further down this path, in search of the origins of our sense of the picturesque, I’d like to turn back – to an older piece in The Baffler, by J.C. Hallman in September 2016, that questioned the role and purpose of tradition and the influence of scholarship in creating art (as in paintings and stuff). His subject was ‘art brut’, “variously translated as ‘raw,’ ‘rough,’ or ‘outsider’ art” and which stresses “that the work of individual, untutored practitioners trumps all the usual conventions of artistic legacy-building, including the analytic categories of art criticism.” After a helpful prelude – “I prefer dramatic chronicles of the shift from ignorance to knowledge, from innocence to experience” – Hallman elaborates:

… [the painters’] stories … seem calculated to undermine the steady commercial march of art as depicted in high-end auction catalogs[.] In lieu of a stately succession of movements, schools, and styles, art brut gives us an array of butchers and scientists and soldiers and housewives who suddenly went crazy and then produced huge bodies of work—most often for discrete periods of time, three years or eight years or fourteen years—before falling silent and eking out the rest of their isolated, artless lives.

He then draws from the notes of Jean Dubuffet, the French painter, and William James, the American psychologist, to make the case that if only we sidestepped the need for art to be in conversation with other art and/or to respond to this or that perspective on human reality, we could be awakened to shapes, arrangements and layouts that exist beyond what we have been able to explain, and reveal a picture unadulterated by the human need for control and meaning.

Could this idea be extended to Wagner’s “infrastructural tragedy” as well? That is, whereas a factory embodies the designs foisted by dynamic relationships between demand and supply, and motivated by the storied ambitions of industrialism – and its abandonment the latter’s myopia, hubris and impermanence – what does a structure whose pillars and trusses have been spared the burden of human wants look like? It’s likely such a structure doesn’t exist: no point imposing the violence of our visions upon the world when those visions are empty.

But like the art brut auteurs in Hallman’s exposition, I’m drawn to the question as an ardent world-builder by what I find to be its enigmatic challenge. Just as the brutists’ madness slashed away at the web of method clouding their visions, what questions must the world-builder – the ultimate speculator – ask herself to arrive at a picture whose elements all lie outside anthropogenic considerations as well as outside nature itself? I suppose I am asking if, through this or a similar exercise, it would be possible for the human to arrive at the alien. Well, would it? [1]

[1] This proposition, and the sense that its answer could lurk somewhere in the bounded cosmology of my psyche, inspires in my mind and consciousness an anxiety and trepidation I have thus far experienced only when faced with H.R. Giger’s art.

A sympathetic science

If you feel the need to respond, please first make sure you have read the post in full.

I posted the following tweet a short while ago:

With reference to this:

Which in turn was with reference to this:

But a few seconds after publishing it, I deleted the tweet because I realised I didn’t agree with its message.

That quote by Isaac Asimov is a favourite if only because it contains a bigger idea that expands voraciously the moment it comes in contact with the human mind. Yes, there is a problem with understanding ignorance and knowledge as two edges of the same blade, but somewhere in this mixup, a half-formed aspiration to rational living lurks in silence.

The author of another popular tweet commenting on the same topic did not say anything more than reproduce Kiran Bedi’s comment, issued after she shared her controversial ‘om’ tweet on January 4 (details here), that the chant is “worth listening to even if it’s fake”; the mocking laughter was implied, reaffirmed by invoking the name of the political party Bedi is affiliated to (the BJP – which certainly deserves the mockery).

However, I feel the criticism from thousands of people around the country does not address the part of Bedi’s WhatsApp message that reaches beyond facts and towards sympathy. Granted, it is stupid to claim that that is what the Sun sounds like, just as Indians’ obsession with NASA is both inexplicable and misguided. That Bedi is a senior government official, a member of the national ruling party and has 12 million followers on Twitter doesn’t help.

But what of Bedi suggesting that the controversy surrounding the provenance of the message doesn’t have to stand in the way of enjoying the message itself? Why doesn’t the criticism address that?

Perhaps it is because people think it is irrelevant, that it is simply the elucidation of a subjective experience that either cannot be disputed or, more worryingly, is not worth engaging with. If it is the latter, then I fear the critics harbour the idea that what science – as the umbrella term for the body of knowledge obtained by the application of a certain method and allied practices – is not concerned with is not worth being concerned about. Even if all of the critics in this particular episode do not harbour this sentiment, I know from personal experience that there are many more out there who do.

After publishing my tweet, I realised that Bedi’s statement that “it is worth listening to even if it’s fake” is not at odds with physicist Dibyendu Nandi’s words: that chanting the word ‘om’ is soothing and that its aesthetic benefits (if not anything greater) don’t need embellishment, certainly not in terms of pseudoscience and fake news. In fact, Bedi has admitted it is fake, and as a reasonable, secular and public-spirited observer, I believe that is all I can ask for – rather, that is all I can ask for from her in the aftermath of her regrettable action.

If I had known earlier what was going to happen, my expectation would still have been limited – in a worst-case scenario in which she insists on sharing the chant – to asking her to qualify the NASA claim as false. Twelve million followers is nothing to laugh at.

But what I can ask of others (including myself) is this: mocking Bedi is fine, but what’s the harm in chanting the ‘om’ even if the claims surrounding it are false? What’s the harm in asserting that?

If the reply is, “There is no harm” – okay.

If the reply is, “There is no harm plus that is not in dispute” or that “There is harm because the assertion is rooted in a false, and falsifiable, premise” – I would say, “Maybe the assertion should be part of the conversation, such that the canonical response can be changed from <mockery of getting facts wrong>[1] to <mockery of getting facts wrong> + <discussing the claimed benefits of chanting ‘om’ and/or commenting on the ways in which adherence to factual knowledge can contribute to wellbeing>.”

The discourse of rational aspiration currently lacks any concern for the human condition, and while scientificity, or scientificness, is becoming a higher virtue by the day, it does not appear to admit that, far from having the best interests of the people at heart, it presumes that whatever sprouts from its cold seeds should be nutrition enough.[2]

[1] The tone of the response is beyond the scope of this post.

[2] a. If you believe this is neither science’s purpose nor responsibility, then you must agree it must not be wielded sans the clarification either that it represents an apathetic knowledge system or that the adjudication of factitude does not preclude the rest of Bedi’s message. b. Irrespective of questions about science’s purpose, could this be considered to be part of the purpose of science communication? (This is not a rhetorical question.)

Injustice ex machina

There are some things I think about but struggle to articulate, especially in the heat of an argument with a friend. Cory Doctorow succinctly captures one such idea here:

Empiricism-washing is the top ideological dirty trick of technocrats everywhere: they assert that the data “doesn’t lie,” and thus all policy prescriptions based on data can be divorced from “politics” and relegated to the realm of “evidence.” This sleight of hand pretends that data can tell you what a society wants or needs — when really, data (and its analysis or manipulation) helps you to get what you want.

If you live in a country ruled by a nationalist government tending towards the ultra-nationalist, you’ve probably already encountered the first half of what Doctorow describes: the championing of data, and of quantitative metrics in general, the conflation of objectivity with quantification, and the overbearing focus on logic and mathematics to the point of eliding cultural and sociological influences.

Material evidence of the latter is somewhat more esoteric, yet more common in developing countries, where the capitalist West’s influence vis-à-vis consumption and the (non-journalistic) media is distinctly more apparent – and impossible to unsee once you’ve seen it.

Notwithstanding the practically unavoidable consequences of consumerism and globalisation, the aspirations of the Indian middle and upper classes are propped up chiefly by American and European lifestyles. As a result, it becomes harder to tell the “what society needs” and the “get what you want” tendencies apart. Those developing new technologies to (among other things) enhance their profits arising from this conflation are obviously going to have a harder time seeing it and an even harder time solving for it.

Put differently, AI/ML systems – at least those in Doctorow’s conception, in the form of machines adept at “finding things that are similar to things the ML system can already model” – born in Silicon Valley have no reason to assume a history of imperialism and oppression, so the problems they are solving for are off-target by default.

But there is indeed a difference, and not infrequently the simplest way to uncover it is to check what the lower classes want. More broadly, what do the actors with the fewest degrees of freedom in your organisational system want, assuming all actors already want more freedom?

They – as much as others, and at the risk of treating them as a monolithic group – may not agree that roads need to be designed for public transportation (instead of cars), that the death penalty should be abolished or that fragmenting a forest is wrong, but they are likely to determine how a public distribution system, a social security system or a neighbourhood policing system can work better.

What they want is often what society needs – and although this might predict the rise of populism, and even anti-intellectualism, it is nonetheless a sort of pragmatic final check when it has become entirely impossible to distinguish between the just and the desirable courses of action. I wish I didn’t have to hedge my position with the “often” but I remain unable with my limited imagination to design a suitable workaround.

Then again, I am also (self-myopically) alert to the temptation of technological solutionism, and acknowledge that discussions and negotiations are likely easier, even if messier, to govern with than ‘one principle to rule them all’.

Sci-fi past the science

There’s an interesting remark in the introductory portion of this article by Zeynep Tufekci (emphasis added):

At its best, though, science fiction is a brilliant vehicle for exploring not the far future or the scientifically implausible but the interactions among science, technology and society. The what-if scenarios it poses can allow us to understand our own societies better, and sometimes that’s best done by dispensing with scientific plausibility.

Given the context, such plausibility is likely predicated on the set of all pieces of knowledge minus the set of unknown unknowns. This in turn indicates a significant divergence between scientific knowledge and knowledge of human society, philosophies and culture as we progress into the future, at least to the extent that there is a belief in the present that scientific knowledge already trails our knowledge of the sociological and political components required to build a more equitable society.

This is pithy and non-trivial at the same time: pithy because the statement reaffirms the truism that science in and of itself lacks the moral centrifuge to separate good from bad, and non-trivial because it refutes the techno-optimism that guides Elon Musk, Jeff Bezos, (the late) Paul Allen, etc.

If you superimpose this condition on sci-fi the genre, it becomes clear that Isaac Asimov’s and Arthur Clarke’s works – which the world’s tech billionaires claim to have been inspired by in their pursuit of interplanetary human spaceflight, as Tufekci writes – were less about strengthening the role of science and technology in our lives and more about rendering it transparent, so we can look past the gadgets and the gadgetry towards the social structures they’re embedded in.

In effect, Tufekci continues:

Science fiction is sometimes denigrated as escapist literature, but the best examples of it are exactly the opposite.

She argues in her short article, more of a long note, that this alternative reading of sci-fi and its purpose could encourage the billionaires to retool their ambitions and think about making life better on Earth. Food for thought, especially at the start of a new decade when there seems to be a blanket lien to hope – although I very much doubt the aspirations of Musk, Bezos and others were nurtured about such a simple fulcrum.

The rationalists’ eclipse

The annular solar eclipse over South India on December 26 provided sufficient cause for casual and/or inchoate rationalism to make a rare public appearance – rarer than the average person who had decided to stay indoors for the duration of the event thanks to superstitious beliefs. Scientists and science communicators organised or participated in public events where they had arranged for special (i.e. protective) viewing equipment and created enough space for multiple people to gather and socialise.

However, some of these outings, spilling over into social media, also included actions and narratives endeavouring to counter superstitions but overreaching and stabbing at the heart of non-scientific views of the world.

The latter term – ‘non-scientific’ – has often been used pejoratively but is in fact far from deserving of derision or, worse, pity. The precepts of organised religion encompass the most prominent non-scientific worldview but more than our tragic inability to imagine that these two magisteria could exist in anything but opposition to each other, the bigger misfortune lies with presuming science and religion are all there is. The non-scientific weltanschauung includes other realms, so to speak, especially encompassing beliefs that organised religion and its political economy hegemonise. Examples include the traditions of various tribal populations around the world, especially in North America, Latin America, Africa, Central and South Asia, and Australia.

There is an obvious difference between superstitious beliefs devised to suppress a group or population and the framework of tribal beliefs within which their knowledge of the world is enmeshed. It should be possible to delegitimise the former without also delegitimising the latter. Assuming the charitable view that some find it hard to discern this boundary, the simplest way to not trip over it is to acknowledge that most scientific and non-scientific beliefs can peacefully coexist in individual minds and hearts. And that undermining this remarkably human ability is yet another kind of proselytisation.

Obviously this is harder to realise in what we conceive as the day-to-day responsibilities of science communication, but that doesn’t mean we must put up with a lower bar for the sort of enlightenment we want India to stand for fifty or a hundred years from now. Organising public eat-a-thons during a solar eclipse, apparently to dispel the superstitious view that consuming food while the Sun is so occluded is bad for health, certainly does not reflect a mature view of the problem.

In fact, such heavy-handed attempts to drive home the point that “science is right” and “whatever else you think is wrong” are effects of a distal cause: a lack of sympathetic concern for the wellbeing of a people – which is also symptomatic of a half-formed, even egotistical, rationalism entirely content with its own welfare. Rescuing people from ideas that would enslave them could temporarily empower them but transplanting them to a world where knowledgeability rules like a tyrant, unconcerned with matters he cannot describe, is only more of the same by a different name.

B.R. Ambedkar and E.V. Ramaswamy Naicker, a.k.a. Periyar, wanted to dismantle organised religion because they argued that such oppressive complexes pervaded its entire body. Their ire was essentially directed against autocratic personal governance that expected obedience through faith. In India, unless you’re a scientist and/or have received a good education, and can read English well enough to access the popular and, if need be, the technical literature, science is also reduced to a system founded on received knowledge and ultimately faith.

There is a hegemony of science as well. Beyond the mythos of its own cosmology (to borrow Paul Feyerabend’s quirky turn of phrase in Against Method), there is also the matter of who controls knowledge production and utilisation. In Caliban and the Witch (2004), Silvia Federici traces the role of the bourgeoisie in expelling beliefs in magic and witchcraft in preindustrial Europe only to prepare the worker’s body to accommodate the new rigours of labour under capitalism. She writes, “Eradicating these practices was a necessary condition for the capitalist rationalisation of work, since magic appeared as an illicit form of power and an instrument to obtain what one wanted without work, that is, a refusal of work in action. ‘Magic kills industry,’ lamented Francis Bacon…”.

To want to free another human from whatever shackles bind them is the sort of virtuous aspiration that is only weakened by momentary or superficial focus. In this setup, change – if such change is required at all costs – must be enabled from all sides, instead of simply a top-down reformatory jolt delivered by pictures of a bunch of people breaking their fast under an eclipsed Sun.

Effective science communication could change the basis on which people make behavioural decisions but to claim “all myths vanished” (as one science communicator I respect and admire put it) is disturbing. Perhaps in this one instance, the words were used in throwaway fashion, but how many people even recognise a need to moderate their support for science this way?

Myths, as narratives that harbour traditional knowledge and culturally unique perspectives on the natural universe, should not vanish but be preserved. A belief in the factuality of this or that story could be transformed by acknowledging that such stories are in fact myths and do not provide a rational basis for certain behavioural attitudes, especially ones that might serve to disempower – as well as by acknowledging that the use of the scientific method is a productive, maybe even gainful, way to discover the world.

But using science communication as a tool to dismantle myths, instead of tackling superstitious rituals that (to be lazily simplistic) suppress the acquisition of potentially liberating knowledge, is to create an opposition that precludes the peaceful coexistence of multiple knowledge systems. In this setting, science communication perpetuates the misguided view that science is the only useful way to acquire and organise our knowledge — which is both ahistorical and injudicious.

The Star Wars dynasty

The latest ‘Star Wars’ movie, The Rise of Skywalker, is worth a watch if you’re a committed fan interested in staying up to date with the franchise. Otherwise, all you’re missing is a movie that is unsure of what it’s supposed to do and ends up doing too many things as a result.


MAJOR SPOILERS AHEAD


One thing I found notable was Rey’s identity as Emperor Palpatine’s granddaughter. Viewers had expressed some consternation when the film’s trailer was released in April this year because it seemed to suggest Rey was a descendant of the Skywalker line. To quote from an older post:

… if she turns out to be the new Skywalker, then the franchise’s writers will finally have completed their betrayal of the infinite purpose of the fantasy genre itself. They will have been utterly lazy – if not guilty of a form of creative manslaughter – if Rey turns out to be biologically related to the Skywalkers, broadcasting the message that either you’re royalty or you’re not, much like the Gandhis themselves have.

In The Rise of Skywalker, Rey turns out to be a Palpatine but identifies as a Skywalker as a token of her allegiance to the Resistance. But this isn’t very far from Rey simply being yet another Skywalker because the franchise is still seemingly preoccupied with two important families: the Skywalkers and the Palpatines. Instead of persisting with one dynasty or, better yet, abolishing the franchise’s obsession with inheritances altogether, the film’s writers have simply created another dynasty.

Perhaps the only solace is that with the end of this trilogy of trilogies have gone all of the last biological remnants of the two lines, so whatever new production starts from this point – whether a TV show or yet another film – will have a great opportunity to take the franchise in a new direction.

A why of how we wear what we wear

There are many major industries operating around the world commonly perceived to be big drivers of climate change. Plastic, steel and concrete manufacturing come immediately to mind – but fashion doesn’t, even though, materially speaking, its many inefficiencies represent something increasingly worse than an indulgence in times so fraught by economic inequality and the dividends of extractive capitalism.

And even then, details like ‘making one cotton t-shirt requires 3,900 litres of water’ (source) spring first into our consciousness before less apparent, and more subtle, issues like the label itself. Why is the fashion industry called so? I recently read somewhere – an article, or maybe a tweet (in any case the thought isn’t original) – that the term ‘fashion’ implies an endless seasonality, a habit of periodically discarding designs, and the clothes they inhabit, only to invent and manufacture new garments.

The persistence of fashion trends also presents social problems. Consider, for example, the following paragraph, copied from a press release issued by Princeton University:

People perceive a person’s competence partly based on subtle economic cues emanating from the person’s clothing, according to a study published in Nature Human Behaviour by Princeton University. These judgments are made in a matter of milliseconds, and are very hard to avoid. … Given that competence is often associated with social status, the findings suggest that low-income individuals may face hurdles in relation to how others perceive their abilities — simply from looking at their clothing.

Let’s assume that the study is robust and that the press release is faithful to the study’s conclusions (verifying which would require a lot more work than I am willing to spare for this post – but you’ve been warned!). Getting rid of fashion trends will do little, or even nothing, to render our societies more equitable. But it merits observing that these trends also participate in, and are possibly even predicated on, maintaining ‘in’ and ‘out’ groups – demarcated by awareness of dressing trends, the ability to purchase the corresponding garments and familiarity with the prevailing ways to use them – in order to incentivise certain outcomes over others on behalf of people who adhere to similar sartorial protocols.

(Aside: Such behaviour usually favours members of the elite but it’s not entirely absent outside the corresponding sociopolitical context. For example, and as a tangential case of enclothed cognition, the titular character in the 2016 Tamil film Kabali insists on wearing a blazer at all times simply because his upper-caste antagonists use their clothing to indicate their social status and, consequently, power.)

Obviously, the social and climatic facets of fashion design aren’t entirely separable. The ebb and flow of design trends drives consumer spending and, well, consumption, whereas the stratification of individual competence – at least according to the study; certainly of likability based on status signals – sets up dressing choices as a socially acceptable proxy that substitutes for seemingly less prejudicial modes of evaluation. (And far from being a syllogism, many of our social ills actively promote the neoliberal consumer culture at the heart of the climate crisis.)

Then again, proxies in general are not always actively deployed. There are numerous examples from science administration as well as other walks of life. This is also one of the reasons I’m not too worried about not interrogating the study: it rings true (to the point of rendering the study itself moot if it didn’t come to any other conclusions).

People considering a scientist for, say, career advancement often judge the quality of the scientist’s work based on which journals it was published in, even though it’s quite well-known that this practice is flawed. But the use of proxies is justified for pragmatic reasons: when universities are understaffed and/or staff are underpaid, proxies accelerate decision-making, especially if they also have a low error rate and the decision isn’t likely to have dire consequences for any candidate. If the resource crunch is more pronounced, it’s quite possible that pragmatic considerations altogether originate the use of proxies instead of simply legitimising them.

Could similar decision-making pathways have interfered with the study? I hope not, or they would have strongly confounded the study’s findings. In this scenario, where scientists presented a group of decision-makers with visual information based on which the latter had to make some specific decisions without worrying about any lack of resources, we’re once again faced with yet another prompt to change the way we behave, and that’s a tall order.