Matt Mullenweg v. WP Engine

Automattic CEO and WordPress co-creator Matt Mullenweg published a post on September 21 calling WP Engine a “cancer to WordPress”. For the uninitiated: WP Engine is an independent company that provides managed hosting for WordPress sites; WordPress.com is owned by Automattic, which also leads the development of WordPress.org. WP Engine’s hosting plans start at $30 a month and the company enjoys a good public reputation. Mullenweg’s post, however, zeroed in on WP Engine’s decision not to record the revisions you make to your posts in your site’s database. Revisions are a basic feature of the WordPress content management system, and based on their absence Mullenweg says:

What WP Engine gives you is not WordPress, it’s something that they’ve chopped up, hacked, butchered to look like WordPress, but actually they’re giving you a cheap knock-off and charging you more for it.

The first thing that struck me about this post was its unusual vehemence, which Mullenweg has in the past typically reserved for more ‘extractive’ platforms like Wix, whose actions have also been more readily disagreeable. WP Engine has disabled revisions but, as Mullenweg himself pointed out, it doesn’t hide this fact: it’s stated on the ‘Platform Settings’ support page. Equally, WP Engine offers daily backups; you can readily restore one of them and go back to a previous ‘state’.

Second, Mullenweg accuses WP Engine of “butchering” WordPress, but this is stretching it. I understand where he’s coming from, of course: WP Engine advertises WordPress hosting that doesn’t come with one of the CMS’s basic features, a fact WP Engine doesn’t hide but doesn’t really advertise either. But I’d hardly call this “butchering”, much less in public and more than a decade after Automattic invested in WP Engine.

WP Engine’s stated reason is that post revisions increase database costs that the company would like to keep down. Mullenweg interprets this to mean WP Engine wants “to avoid paying to store that data”. Well, yeah, and that’s okay, right? I can’t claim to be aware of all the trade-offs that determined WP Engine’s price points but turning off a feature to keep costs down and reactivating it upon request for individual users seems fair.

In fact, what really gets my goat is Mullenweg’s language, especially around how much WP Engine charges. He writes:

They are strip-mining the WordPress ecosystem, giving our users a crappier experience so they can make more money.

WordPress.com offers a very similar deal to its customers. (WordPress.com is Automattic’s hosting platform, where users pay the company to host WordPress sites for them.) In the US, you need to pay at least $25 a month (billed yearly) to be able to upload custom themes and plugins to your site; the plans below that rate don’t have this option. You also need this plan to access, and jump back to, different points of your site’s revision history.

Does this mean WordPress.com is “strip-mining” its users to avoid paying for the infrastructure those features require? Or is it offering fewer features at lower price points because that’s how it can make its business work? I used to be happy that WordPress.com offers a $48-a-year plan with fewer features because I didn’t need them — just as WP Engine seems to have determined it can charge its customers less by disabling revision history by default.

(I’m not so happy now because WordPress.com moved detailed site analytics — anything more than hits to posts — from the free plan to the Premium plan, which costs $96 a year.)

It also comes across as disingenuous for Mullenweg to say the “cancer” à la WP Engine will spread if left unchecked. He himself writes that no WordPress host listed on WordPress.org’s recommended hosts page has disabled revision history — but is he aware of the public reputation of these hosts, their predatory pricing habits, and their lousy customer service? Please take a look at Kevin Ohashi’s Review Signal website or r/webhosting. Cheap WordPress in return for a crappy hosting experience is the cancer that has already spread because WordPress didn’t address it.

(It’s the reason I switched to composing my posts offline on MarsEdit, banking on its backup features, and giving up on my expectations of hosts including WordPress.com.)

It’s unfair to accuse companies of “strip-mining” WordPress when they’re trimming features so they can offer users a spam-free, crap-free hosting experience that’s also affordable. In fact, given how flimsy many of Mullenweg’s arguments seem to be, they’re probably directed at some other, deeper issue — perhaps what he perceives to be WP Engine not contributing enough back to the open source ecosystem?

Feel the pain

Emotional decision making is in many contexts undesirable – but sometimes it definitely needs to be part of the picture, insofar as our emotions hold a mirror to our morals. When machines make decisions, the opportunity to consider the emotional input goes away. This is a recurring concern I’m hearing about from people working with or responding to AI in some way. Here are two recent examples I came across that set this concern out in two different contexts: loneliness and war.

This is Anna Mae Duane, director of the University of Connecticut Humanities Institute, in The Conversation:

There is little danger that AI companions will courageously tell us truths that we would rather not hear. That is precisely the problem. My concern is not that people will harm sentient robots. I fear how humans will be damaged by the moral vacuum created when their primary social contacts are designed solely to serve the emotional needs of the “user”.

And this is from Yuval Abraham’s investigation for +972 Magazine on Israel’s chilling use of AI to populate its “kill lists”:

“It has proven itself,” said B., the senior source. “There’s something about the statistical approach that sets you to a certain norm and standard. There has been an illogical amount of [bombings] in this operation. This is unparalleled, in my memory. And I have much more trust in a statistical mechanism than a soldier who lost a friend two days ago. Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.”

The ‘Climate Overshoot’ website

Earlier this evening, as I was working on my laptop, I noticed that it was rapidly heating up, to the extent that it was burning my skin through two layers of cloth. This is a device that routinely runs a half-dozen apps simultaneously without breaking a sweat, and the browser (Firefox) seldom struggles to handle the few score tabs I have open at all times. Since I’d only been browsing until then, I checked about:processes to find out if any of the tabs could be the culprit, and one of them was: the Climate Overshoot Commission (COC) website. Which is ironic, because the COC was recently in the news for a report detailing its (prominent) members’ deliberations on how the world’s countries could accelerate emission cuts and avoid overshooting warming thresholds.

The world can reduce the risk of temperature overshoot by, among other things, building better websites. What even is the video of the random forest doing in the background?

The COC itself was the product of deliberations among some scientists who wished to test solar geoengineering but couldn’t. And though the report advises against deploying this risky technology without thoroughly studying it first, it also insists that geoengineering should remain on the table among other climate mitigation strategies, including good ol’ cutting emissions. Irrespective of what its support for geoengineering implies for scientific and political consensus on the idea, the COC can also help by considerably simplifying its website so it doesn’t guzzle more computing power than all the 56 other tabs combined, and draw around 3 W just to stay open. The findings aren’t even that sensible.

The AI trust deficit predates AI

There are alien minds among us. Not the little green men of science fiction, but the alien minds that power the facial recognition in your smartphone, determine your creditworthiness and write poetry and computer code. These alien minds are artificial intelligence systems, the ghost in the machine that you encounter daily.

But AI systems have a significant limitation: Many of their inner workings are impenetrable, making them fundamentally unexplainable and unpredictable. Furthermore, constructing AI systems that behave in ways that people expect is a significant challenge.

If you fundamentally don’t understand something as unpredictable as AI, how can you trust it?

Trust plays an important role in the public understanding of science. The excerpt above – from an article by Mark Bailey, chair of Cyber Intelligence and Data Science at the National Intelligence University, Maryland, in The Conversation about whether we can trust AI – showcases that.

Bailey treats AI systems as “alien minds” because of their, or rather their makers’, inscrutable purposes. They are inscrutable not just because they are obscured but because, even under scrutiny, it is difficult to determine how an advanced machine-based logic makes decisions.

Setting aside questions about the extent to which such a claim is true, Bailey’s argument as to the trustworthiness of such systems can be stratified based on the people to whom it is addressed: AI experts and non-AI-experts. I have a limited issue with the latter group vis-à-vis Bailey’s contention. That is, to non-AI-experts – which I take to be the set of all people ranging from those not trained as scientists (in any field) to those trained as such but who aren’t familiar with AI – the question of trust is more wide-ranging. They already place a lot of their trust in (non-AI) technologies that they don’t understand, and probably never will. Should they rethink their trust in these systems? Or should we take their trust in these systems to be ill-founded and requiring ‘improvement’?

Part of Bailey’s argument is that there are questions about whether we can or should trust AI when we don’t understand it. Aside from AI in a generic sense, he uses the example of self-driving cars and a variation of the trolley problem. While these technologies illustrate his point, they also give the impression that AI systems making decisions misaligned with human expectations, and struggling to incorporate ethics, is a problem restricted to high technologies. It isn’t. The trust deficit vis-à-vis technology predates AI. Many of the technologies that non-experts trust but which don’t uphold that trust (so to speak) are not high-tech; examples from India alone include biometric scanners (for Aadhaar), public transport infrastructure, and mechanisation in agriculture. This is because people’s use of any technology beyond their ability to understand is mediated by social relationships, economic agency, and cultural preferences, not technical know-how.

For the layperson, trust in a technology is really trust in some institution, individuals or even some organisational principle (traditions, religion, etc.), and this is as it should be – perhaps even for more sophisticated AI systems of the future. Many of us will never fully understand how a deep-learning neural network works, nor should we be expected to, but that doesn’t automatically make AI systems untrustworthy. I expect to be able to trust scientists in government and in respectable scientific institutions to discharge their duties in a public-spirited fashion and with integrity, so that I can trust their verdict on AI, or anything else in a similar vein.

Bailey also writes later in the article that some day, AI systems’ inner workings could become so opaque that scientists may no longer be able to connect their inputs with their outputs in a scientifically complete way. According to Bailey: “It is important to resolve the explainability and alignment issues before the critical point is reached where human intervention becomes impossible.” This is fair but it also misses the point a little by limiting the entities that can intervene to individuals and built-in technical safeguards, like working an ethical ‘component’ into the system’s decision-making framework, instead of taking a broader view that keeps in the picture the public institutions, including policies, that will be responsible for translating AI systems’ output into public welfare. Even today in India, that’s what’s failing us – not the technologies themselves – and therein lies the trust deficit.


Irritating Google Docs is irritating

The backdrop of the shenanigans of ChatGPT, Bard and other artificial intelligence (AI) systems these days has only served to accentuate how increasingly frustrating working with Google Docs is. I use Docs every day to write my articles and edit those that the freelancers I’m working with have filed. I don’t use tools like Grammarly but I do pay attention to Docs’s blue and red underlines, indicating grammatical and typographical aberrations respectively. And what Docs chooses to underline either way is terribly inconsistent. I have written previously about how Docs ‘learns’ grammar based on each user’s style, and expressed concern that its learning agent could be led astray by a large number of people, such as Indians, using English differently from the rest of the world and thus biasing it. Fortunately this issue doesn’t seem to have come to pass – but the agent has continued to be completely non-smart in a more fundamental way. This morning, I was editing an article about homeopathy on Docs and found that it couldn’t understand that “homoeopathy”, “homoeopathic”, and “Homoeopathy” are just different forms of the same root word. As a result, correcting “homoeopathy” to “homeopathy” once didn’t suffice; I had to correct each form separately to remove the additional ‘o’.

It gets worse: the same word in bold is, according to Google Docs, a different word…

… as is the word with a small ‘H’.

Google has a reputation for having its fingers in too many pies and as a result neglecting improvements in one pie because it’s too busy focusing on another. There is also a large graveyard of Google products that have been killed off as a result. There’s some reason, for now, to believe Docs won’t meet the same fate but then again I don’t know how to explain the persistence of such an easily fixable problem.

Why everyone should pay attention to Stable Diffusion

Many of the people in my circles hadn’t heard of Stable Diffusion until I told them, and I was already two days late. Heralds of new technologies have a tendency to play up every new thing, however incremental, as the dawn of a new revolution – but in this case, their cries of wolf may be real for once.

Stable Diffusion is an AI tool produced by Stability.ai with help from researchers at the Ludwig Maximilian University of Munich and the Large-scale AI Open Network (LAION). It accepts text or image prompts and converts them into artwork based on, but without necessarily understanding, what it ‘sees’ in the input. It created the image below with my prompt “desk in the middle of the ocean vaporwave”. You can create your own here.

But it strayed into gross territory with a different prompt: “beautiful person floating through a colourful nebula”.

Stable Diffusion is like OpenAI’s DALL-E 1/2 and Google’s Imagen and Parti but with two crucial differences: it’s capable of image-to-image (img2img) generation as well and it’s open source.

The img2img feature is particularly mind-blowing because it allows users to describe a scene using text and then guide the Stable Diffusion AI with a little bit of their own art. Even a drawing on MS Paint with a few colours will do. And while OpenAI and Google hold their cards very close to their chests, with the latter refusing to release Imagen or Parti even in private betas, Stability.ai has – in keeping with its vision to democratise AI – opened Stable Diffusion for tinkering and augmentation by developers en masse. Even the ways in which Stable Diffusion has been released are important: trained developers can work directly with the code while untrained users can access the model in their browsers, without any code, and start producing images. In fact, you can download and run the underlying model on your own system, though it requires somewhat higher-end specs. Users have already created ways to plug it into photo-editing software like Photoshop.

Stable Diffusion uses a diffusion model: a filter (essentially an algorithm) that takes noisy data and progressively de-noises it. In incredibly simple terms, researchers take an image and in a step-wise process add more and more noise to it. Next they feed this noisy image to the filter, which then removes the noise from the image in a similar step-wise process. You can think of the image as a signal, like the images you see on your TV, which receives broadcast signals from a transmitter located somewhere else. These broadcast signals are basically bundles of electromagnetic waves with information encoded into the waves’ properties, like their frequency, amplitude and phase. Sometimes the visuals aren’t clear because some other undesirable signal has become mixed up with the broadcast signal, leading to grainy images on your TV screen. This undesirable information is called noise.

When the noise waveform resembles that of a bell curve, a.k.a. a Gaussian function, it’s called Gaussian noise. Now, if we know the manner in which noise has been added to the image in each step, we can figure out what the filter needs to do to de-noise the image. Every Gaussian function can be characterised by two parameters, the mean and the variance. Put another way, you can generate different bell-curve-shaped signals by changing the mean and the variance in each case. So the filter effectively only needs to figure out what the mean and the variance in the noise of the input image are, and once it does, it can start de-noising. That is, Stable Diffusion is (partly) the filter here. The input you provide is the noisy image. Its output is the de-noised image. So when you supply a text prompt and/or an accompanying ‘seed’ image, Stable Diffusion just shows off how well it has learnt to de-noise your inputs.
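To make the step-wise noising concrete, here’s a minimal sketch in Python with NumPy. It’s an illustration under my own assumptions (a toy image, a fixed number of steps, a fixed noise scale), not Stable Diffusion’s actual noise schedule:

```python
import numpy as np

def forward_diffusion(image, num_steps=10, noise_scale=0.1):
    """Progressively add Gaussian noise to an image, step by step.

    Because we know the parameters of the noise added at each step
    (mean 0, variance noise_scale**2), a trained 'filter' can learn
    to run the process in reverse and de-noise the image.
    """
    noisy_versions = [image]
    x = image.copy()
    for _ in range(num_steps):
        # Gaussian noise: mean 0, standard deviation noise_scale
        noise = np.random.normal(loc=0.0, scale=noise_scale, size=x.shape)
        x = x + noise
        noisy_versions.append(x.copy())
    return noisy_versions

# A toy 8x8 'image' of pixel intensities between 0 and 1
image = np.linspace(0, 1, 64).reshape(8, 8)
steps = forward_diffusion(image)
print(f"After {len(steps) - 1} noising steps, pixel variance grew from "
      f"{steps[0].var():.3f} to {steps[-1].var():.3f}")
```

In real diffusion models the noise scale varies from step to step according to a schedule, and the model is trained to predict the noise present at each step rather than being told it.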

Obviously, when millions of people use Stable Diffusion, the filter is going to be confronted with too many mean-variance combinations for it to be able to directly predict them. This is where an artificial neural network (ANN) helps. ANNs are data-processing systems set up to mimic the way neurons work in our brain, combining different pieces of information and manipulating them according to their knowledge of older information. The team that built Stable Diffusion trained its model on 5.8 billion image-text pairs found around the internet. An ANN is then programmed to learn from this dataset how texts and images correlate, as well as how images and other images correlate.

To keep this exercise from getting out of hand, each image and text input is broken down into certain components, and the machine is instructed to learn correlations only between these components. Further, the researchers used an ANN model called an autoencoder. Here, the ANN encodes the input in its own representation, using only the information that it has been taught to consider important. This intermediate is called the bottleneck layer. The network then decodes only the information present in this layer to produce the de-noised output. This way, the network also learns what about the input is most important. Finally, researchers also guide the ANN by attaching weights to different pieces of information: that is, the system is informed that some pieces are to be emphasised more than others, so that it acquires a ‘sense’ of less and more desirable.
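As a rough illustration of the bottleneck idea, here’s a toy linear autoencoder in NumPy. The dimensions and the random (untrained) weights are my own illustrative assumptions; the actual autoencoder in Stable Diffusion is a deep convolutional network:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Toy linear autoencoder: a 64-dimensional input is squeezed through
# an 8-dimensional bottleneck, then decoded back to 64 dimensions.
input_dim, bottleneck_dim = 64, 8
W_enc = rng.normal(size=(input_dim, bottleneck_dim)) * 0.1  # encoder weights
W_dec = rng.normal(size=(bottleneck_dim, input_dim)) * 0.1  # decoder weights

def encode(x):
    # Only bottleneck_dim numbers survive this step, so a trained
    # network is forced to keep the information it deems important.
    return x @ W_enc

def decode(z):
    # Reconstruct the input using nothing but the bottleneck layer
    return z @ W_dec

x = rng.normal(size=input_dim)   # a stand-in for an input's features
z = encode(x)                    # the bottleneck representation
x_hat = decode(z)                # the reconstruction
print(f"{input_dim} values compressed to {z.size}; "
      f"reconstruction error: {np.mean((x - x_hat) ** 2):.3f}")
```

Training would consist of adjusting W_enc and W_dec to minimise that reconstruction error over many examples, which is how the network figures out what about the input is most important.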

By snacking on all those text-image pairs, the ANN effectively acquires its own basis to decide, when it’s presented a new bit of text and/or image, what the mean and the variance might be. Combine this with the filter and you get Stable Diffusion. (I should point out again that this is a very simple explanation and that parts of it may well be simplistic.)

Stable Diffusion also comes with an NSFW filter built-in, a component called Safety Classifier, which will stop the model from producing an output that it deems harmful in some way. Will it suffice? Probably not, given the ingenuity of trolls, goblins and other bad-faith actors on the internet. More importantly, it can be turned off, meaning Stable Diffusion can be run without the Safety Classifier to produce deepfakes that are various degrees of disturbing.

Recommended here: Deepfakes for all: Uncensored AI art model prompts ethics questions.

But the problems with Stable Diffusion don’t lie only in the future, immediate or otherwise. As I mentioned earlier, to create the model, Stability.ai & co. fed their machine 5.8 billion text-image pairs scraped from the internet – without the consent of the people who created those texts and images. Because Stability.ai released Stable Diffusion in toto into the public domain, it has been experimented with by tens of thousands of people, at least, and developers have plugged it into a rapidly growing number of applications. This is to say that even if Stability.ai is forced to pull the software because it didn’t have the license to those text-image pairs, the cat is out of the bag. There’s no going back. A blog post by LAION only says that the pairs were publicly available and that models built on the dataset should thus be restricted to research. Do you think the creeps on 4chan care? Worse yet, the jobs of the very people who created those text-image pairs are now threatened by Stable Diffusion, which can – with some practice to get your prompts right – produce exactly what you need, no illustrator or photographer required.

Recommended here: Stable Diffusion is a really big deal.

The third interesting thing about Stable Diffusion, after its img2img feature + “deepfakes for all” promise and the questionable legality of its input data, is the license under which Stability.ai has released it. AI analyst Alberto Romero wrote that “a state-of-the-art AI model” like Stable Diffusion “available for everyone through a safety-centric open-source license is unheard of”. This is the CreativeML Open RAIL-M license. Its preamble says, “We believe in the intersection between open and responsible AI development; thus, this License aims to strike a balance between both in order to enable responsible open-science in the field of AI.” Attachment A of the license spells out the restrictions – that is, what you can’t do if you agree to use Stable Diffusion according to the terms of the license (quoted verbatim):

“You agree not to use the Model or Derivatives of the Model:

  • In any way that violates any applicable national, federal, state, local or international law or regulation;
  • For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
  • To generate or disseminate verifiably false information and/or content with the purpose of harming others;
  • To generate or disseminate personal identifiable information that can be used to harm an individual;
  • To defame, disparage or otherwise harass others;
  • For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
  • For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
  • To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
  • For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories;
  • To provide medical advice and medical results interpretation;
  • To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use).”

These restrictions effectively saddle law enforcement around the world with a heavy burden, and I don’t think Stability.ai took the corresponding stakeholders into confidence before releasing Stable Diffusion. It should also go without saying that the license choosing to colour within the lines of the laws of respective countries means, say, a country that doesn’t recognise X as a crime will also fail to recognise the harm in the harassment of victims of X – now with the help of Stable Diffusion. And the vast majority of these victims are women and children, already disempowered by economic, social and political inequities. Is Stability.ai going to deal with these people and their problems? I think not. But as I said, the cat’s already out of the bag.

When a teenager wants to solve poaching with machine-learning…

We always need more feel-good stories, but what we need even more are feel-good stories that withstand closer scrutiny instead of falling apart, and that are framed the right way.

For example, Smithsonian magazine published an article with the headline ‘This Teenager Invented a Low-Cost Tool to Spot Elephant Poachers in Real Time’ on August 4. It’s a straightforward feel-good story at first glance: Anika Puri is a 17-year-old in New York who created a machine-learning model (based on an existing dataset) “that analyses the movement patterns of humans and elephants”. The visual input for the model comes from a $250 thermal camera attached to an iPhone mounted on a drone, which flies over problem areas and collects data that the model then sifts through to pick out the presence of humans. One caveat: the machine-learning model can detect people, not poachers.

Nonetheless, this is clearly laudable work by a 17-year-old – but the article is an affront to people working in India because it plainly overlooks everything that makes elephant poaching tenacious enough to have caught Puri’s attention in the first place. A 17-year-old did this and we should celebrate her, you say, and that’s fair. But we can do that without making what she did sound like a bigger deal than it is, which would also provide a better sense of how much work she has left to do, while expressing our hope – this is important – that she and others like her will keep applying their minds to really doing something about the problem. This way, we may also be able to salvage two victims of the Smithsonian article.

The first is why elephant poaching persists. The article gives the impression that it does for want of a way to tell when humans walk among elephants in the wild. The first red flag in the article, to me at least, is related to this issue and turns up in the opening itself:

When Anika Puri visited India with her family four years ago, she was surprised to come across a market in Bombay filled with rows of ivory jewelry and statues. Globally, ivory trade has been illegal for more than 30 years, and elephant hunting has been prohibited in India since the 1970s. “I was quite taken aback,” the 17-year-old from Chappaqua, New York, recalls. “Because I always thought, ‘well, poaching is illegal, how come it really is still such a big issue?'”

I admit I take a cynical view of people who remain ignorant, in this day and age, of the bigger problems assailing the major realms of human enterprise – but a 17-year-old being surprised by the availability of ivory ornaments in India is pushing it, and more so her being surprised that there’s a difference between the existence of a law and its proper enforcement. Smithsonian also presents Puri’s view as that of an outsider, which she is in more than the geographical sense, followed by her resolving to do something about it from the outside. That was the bigger issue and a clear sign of the narrative to come.

Poaching and animal-product smuggling persist in India, among other countries, sensu lato because of a lack of money, a lack of personnel, misplaced priorities, and malgovernance and incompetence. The first and the third reasons are related: the Indian government’s conception of how the country’s forests ought to be protected regularly excludes the welfare of the people living in and dependent on those forests, and thus socially and financially alienates them. As a result, some of those affected see a strong incentive in animal poaching and smuggling. (There are famous exceptions to this trend, like the black-necked crane of Arunachal Pradesh, the law kyntang forests of Meghalaya or the whale sharks off Gujarat, but they’re almost always rooted in spiritual beliefs – something the IUCN wants to press into the cause of conservation.)

Similarly, forest rangers are underpaid and overworked, use dysfunctional or outdated equipment and, importantly, are often caught between angry locals and an insensitive local government. In India, theirs is a dispiriting vocation. In this context, the idea of using drones plus infrared cameras that each cost Rs 20,000 is laughable.

The ‘lack of personnel’ is a two-part issue: it helps the cause of animal conservation if the personnel include members of local communities, but they seldom do; second, India is a very large country, so we need more rangers (and more drones!) to patrol all areas, without any blind spots. Anika Puri’s solution has nothing on any of these problems – and I don’t blame her. I blame the Smithsonian for its lazy framing of the story, and in fact for telling us nothing of whether she’s already aware of these issues.

The second problem with the framing has to do with ‘encouraging a smart person to do more’ on the one hand and the type of solution being offered to a problem on the other. This one really gets my goat. When Smithsonian played up Puri’s accomplishment, such as it is, it effectively championed techno-optimism: the belief that technology is a moral good and that technological solutions can solve our principal crises (crises that techno-optimists like to play up so that they seem more pressing, and thus more in need of the sort of fixes that machine-centric governance can provide). In the course of this narrative, however, the sociological and political solutions that poaching desperately requires fall by the wayside, even as the trajectories of the tech and its developer are celebrated as a feel-good story.

In this way, the Smithsonian article has effectively created a false achievement, a red herring that showcases its subject’s technical acumen instead of a meaningful development towards solving poaching. On the other hand, how often do you read profiles of people, young or old, whose insights have been concerned less with ‘hardware’ solutions (technological innovation, infrastructure, etc.) and more with improving and implementing the ‘software’ – that is, changing people’s behaviour, deliberating on society’s aspirations and effecting good governance? How often do you also encounter grants and contests of the sort that Puri won with her idea but which are dedicated to the ‘software’ issues?

The 5ftf blunder

Automattic CEO Matt Mullenweg recently made a scene on Twitter when he called out GoDaddy as a “parasitic” organisation for profiting off of WordPress without making a sufficient number of contributions to the WordPress community, and for developing a competitor to WooCommerce, Automattic’s ‘WordPress but for e-commerce’. (To the uninitiated: Automattic owns WordPress.com and maintains WordPress.org. WordPress.com is where you pay Automattic to host your website for you on its servers; WordPress.org is where you can download the WordPress CMS and use it on your own servers.) At the heart of the issue is Automattic’s ‘Five for the Future’ (5ftf) initiative, in which companies whose profits depend on the WordPress CMS and its community of developers and users pledge to contribute 5% of their resources to developing WordPress.org. There has been a lot of justifiable backlash against Mullenweg’s tweets, which were in poor taste and which have since been deleted. But most of the articles I read on the topic weren’t clear, or weren’t written well enough to explain, why their authors disagreed with Mullenweg. So after some reading around, I thought I’d summarise my takeaways, in case you might benefit from such a summary as well.

1. 5ftf appears to mean different things to different people. This has been a recurrent bone of contention: Mullenweg lashed out at GoDaddy because its contributions were not legitimate, or not legitimate enough, for him. But this is hardly reasonable. Not every entity or individual can contribute in exactly the way Automattic wishes at a given time, nor can Automattic, or Mullenweg, presume to know exactly which contributions can be discarded in favour of others. In fact, I’ve been sticking with WordPress even though WordPress.com has been becoming less friendly to bloggers because a) it presents a diverse set of opportunities for me, vis-à-vis the projects and services I know how to set up because I know how to use WordPress, and b) WordPress has engendered over the decades a view of publishing on the web that is aligned with progressivist views of the internet. So in my view I contribute when I recommend WordPress to others, help my fellow journalists and writers set up WordPress websites, provide feedback on WordPress services, build (rudimentary) WordPress plugins and, within my newsroom, promote the use of WordPress towards responsible journalism.

2. Mullenweg was wrong to abuse GoDaddy in public, in such harsh terms. This was a disagreement that ought to have been settled out of view of public eyes, and certainly not on Twitter. Mullenweg is influential both as an entrepreneur broadly and, more specifically, as someone whose views and policies on digital publishing can potentially affect hundreds of thousands of active websites. By lashing out in this way, all he’s done is make GoDaddy look bad in a way it probably didn’t deserve, and certainly in a way it would find hard to push back against as a company. To continue my first point, GoDaddy has also said that it sponsors WordCamps and other events where WordPress enthusiasts gather to discuss better or new ways to use Automattic products.

(Aside: In his examples of companies that are doing a better job of giving back to WordPress.org, Mullenweg included Bluehost. Some of you might remember how bad GoDaddy’s customer service was in the previous decade. It was famously, notoriously awful, exacerbated by the fact that for a lot of people, its platform was also the gateway to WordPress. I get the sense that their service has improved now. On the other hand, Bluehost and indeed all hosting companies owned by Newfold Digital have a lousy reputation, among developers and non-developers alike, while Mullenweg is apparently happy with Bluehost’s contributions and it is also listed as one of WordPress.org’s recommended hosts.)

3. Mullenweg blundered in a surprising way when he indicated in his tweets that he was keeping score. While GoDaddy caught Mullenweg’s attention on this occasion, the fundamental problem is relevant to all of us. You want people to support a cause because they want to, not because someone is keeping track and could be angry with them if they default. Put another way, Mullenweg took the easier-to-implement but harder-to-sustain ‘hardware’ route to instituting a change in the ecosystem rather than the harder-to-implement but easier-to-sustain ‘software’ route. We’ve come across ample examples of this choice through the pandemic. To get people to wear masks in public, many governments introduced mask mandates. A mask mandate is the hardware path: it enforces material changes among people backed by the threat of punishment. The software path, on the other hand, would have entailed creating a culture in which mask-wearing is considered virtuous and desirable, in which no one is afraid of being punished if they don’t wear masks (for reasonable reasons), and in which people trust the government to be looking out for them. The software path is much longer than the hardware one and governments may have justified their actions saying they didn’t have the time for all this. But while that’s debatable, Automattic doesn’t have such constraints.

This is why 5ftf should be made aspirational but shouldn’t be enforced, and certainly shouldn’t become an excuse for public disparagement. I and many, many others love WordPress, and a large part of that is because we love the culture and ideas surrounding it. We also understand the problem with for-profit organisations profiting off the work of non-profit organisations. If GoDaddy is really threatening to sink WordPress.org by offering the people hosting their sites on GoDaddy an alternative ecommerce platform, or by not giving back nearly as many programming-hours as it effectively consumes, Automattic should either regard GoDaddy as a legitimate competitor and reconsider its own business model or pay less attention to its contribution scorecard and more to why and how others contribute the way they do. Finally, if GoDaddy is really selfish in a way that is not compatible with WordPress.org’s future as Automattic sees it, Automattic’s grouse should be divorced cleanly from the 5ftf initiative.

The problem that ‘crypto’ actually solves

From ‘Cryptocurrency Titan Coinbase Providing “Geo Tracking Data” to ICE’, The Intercept, June 30, 2022:

Coinbase, the largest cryptocurrency exchange in the United States, is selling Immigrations and Customs Enforcement a suite of features used to track and identify cryptocurrency users, according to contract documents shared with The Intercept. … a new contract document obtained by Jack Poulson, director of the watchdog group Tech Inquiry, and shared with The Intercept, shows ICE now has access to a variety of forensic features provided through Coinbase Tracer, the company’s intelligence-gathering tool (formerly known as Coinbase Analytics).

Coinbase Tracer allows clients, in both government and the private sector, to trace transactions through the blockchain, a distributed ledger of transactions integral to cryptocurrency use. While blockchain ledgers are typically public, the enormous volume of data stored therein can make following the money from spender to recipient beyond difficult, if not impossible, without the aid of software tools. Coinbase markets Tracer for use in both corporate compliance and law enforcement investigations, touting its ability to “investigate illicit activities including money laundering and terrorist financing” and “connect [cryptocurrency] addresses to real world entities.”

Every “cryptocurrency is broken” story these days has a predictable theme: the real world caught up because the real world never went away. The fundamental impetus for cryptocurrencies is the belief, held by a bunch of people, that they can’t trust their money with governments and banks – imagined as authoritarian entities that have centralised decision-making power over private property, including money – which led them to invent a technological alternative that would execute the same solutions the governments and banks did, but sans centralisation, sans trust.

Even more fundamentally, cryptocurrencies embody neither the pursuit of ensuring the people’s control of money nor that of liberating art-trading from the clutches of racism. Instead, they symbolise the abdication of the responsibility to reform banking and finance – a far more arduous process that is also more constitutive and equitable. They symbolise the thin line between democracy and majoritarianism: their proponents claimed to have placed the tools to validate financial transactions in the hands of the ‘people’ but failed to grasp that these tools would still be used in the same world that apparently created the need for cryptocurrencies. In this context, I highly recommend this essay on the history of the socio-financial forces that inevitably led to the popularity of cryptocurrencies.

These (pseudo)currencies have often been rightly described as a solution looking for a problem, but the fact remains that the ‘problem’ they do solve is public non-participation in governance. Their proponents just don’t like to admit it. Who would?

The identity of cryptocurrencies may once have been limited to technological marvels and the playthings of mathematicians and financial analysts, but their foundational premise bears a deeper, more dispiriting implication. As the value of one virtual currency after the next comes crashing down, after cryptocurrency-based trading and financing schemes come a cropper, and after their promises to be untraceable, decentralised and uncontrollable have been successively falsified, the whole idea ought to be revealed for what it is: a cynical social engineering exercise to pump even more money from the ‘bottom’ of the pyramid to the ‘top’. Yet the implication remains: cryptocurrencies will persist because they are vehicles of the libertarian ideologies of their proponents. To attempt to ‘stop’ them is to attempt to stop the ideologues themselves.

Tech solutions to household labour are also problems

Just to be clear, the term ‘family’ in this post refers to a cis-het nuclear family unit.

Tanvi Deshpande writing for Indiaspend, June 12, 2022:

The Union government’s ambitious Jal Jeevan Mission (JJM) aims to provide tap water to every household in rural India by 2024. Until now, 50% of households have a tap connection, an improvement from August 2019, when the scheme started and 17% of households had a tap connection. The mission’s dashboard shows that in Take Deogao Gram Panchayat that represents Bardechi Wadi, only 32% of the households have tap connections. Of these, not a single one has been provided to Pardhi’s hamlet.

This meant, for around five months every summer, women and children would rappel down a 60-foot well and spend hours waiting for water to seep into the bottom. In India, filling water for use at home is largely a woman’s job. Globally, women and girls spend 200 million hours every day collecting water, and in Asia, one round trip to collect water takes 21 minutes, on average, in rural areas.

The water pipeline has freed up time for Bardechi Wadi’s women and children but patriarchal norms, lack of a high school in the village and of other opportunities for development means that these free hours have just turned into more time for household chores, our reporting found.

Now these women don’t face the risk of death while fetching water but, as Deshpande has written, the time and trouble that the water pipeline has saved them will now be occupied by new chores and other forms of labour. There may have been a time when the latter might have seemed like the lesser of those two evils, but it is long gone. Today, in the climate crisis era – which often manifests as killer heatwaves in arid regions that are already short on water – the problem is access to leisure, to cooling and to financial safeguards. When women are expected to do more chores because they have the time, they lose access to leisure, which is important at least to cool off, but better yet because it is a right per se (Universal Declaration of Human Rights, article 24).

This story is reminiscent of the effects of the introduction of home appliances into the commercial market. I read a book about a decade ago that documented, among other things, how the average amount of time women (in the US) spent doing household chores hadn’t changed much between the 1920s and the 2000s, even though this period coincided wholly with the second industrial revolution. This was because – as in the case of the pipeline of Bardechi Wadi – the purchase and use of these devices freed up women’s time for even more chores. We need the appliances as much as we need the pipeline; it’s just that men should also do household chores. However, the appliances also presented, and present, more problems than those that pertain to society’s attitudes towards how women should spend their time.

1. Higher expectations – With the availability of household appliances (like the iron box, refrigerator, washing machine, dish washer, oven, etc.), the standards for various chores shot up, as did what we considered to be comfortable living – but the expectation that women would do the chores didn’t change. So suddenly the women of the house were also responsible for ensuring that the men’s shirts and pants were all the more crinkle-less, that food was served fresh and hot all the time, etc., as well as for enlivening family life by inventing/recreating food recipes, serving and cleaning up, etc.

2. Work + chores – The introduction of more, and more diverse, appliances into the market, together with aspirations and class mobility, paralleled an increase in women’s labour-force participation through the 20th century. But before these women left for their jobs and after they got home, they still had to do household chores as well – including cooking and packing lunch for themselves and for their husbands and/or children, doing the laundry, shopping for groceries, etc.

3. Think about the family – The advent of tech appliances also foisted on women two closely related responsibilities: to ensure the devices worked as intended and to ensure they fit with the family-unit’s ideals and aspirations. As Manisha Aggarwal-Schifellite wrote in 2016: “The automatic processes of programming the coffeemaker, unlocking an iPad with a fingerprint, or even turning on the light when you get home are the result of years of marketing that create a household problem (your home is too dark, your family too far-flung, your food insufficiently inventive), solves it with a new product, and leaves women to clean up the mess when the technology fails to deliver on its promises”.

In effect, through the 20th century, industrialisation happened in two separate ways within the household and without. To use writer Ellen Goodman’s evocative words from a 1983 article: “At the beginning of American history …, most chores of daily life were shared by men and women. To make a meal, men chopped the wood, women cooked the stew. One by one, men’s tasks were industrialized outside the home, while women’s stayed inside. Men stopped chopping wood, but women kept cooking.”

The diversity of responsibilities imposed by household appliances exacts its own cost. A necessary condition of men’s help around the house is that they – we – must also constantly think about which task to perform and when, instead of expecting to be told what to do every time. This is because, by expecting periodic reminders, we are still forcing women to retain the cognitive burden associated with each chore. If you think you’re still helping by sharing everything except the cognitive burden, you’re wrong. Shifting between tasks affects one’s focus, performance and accuracy, and increases forgetfulness. Psychologists call this the switch cost.

It is less clear to me than it may be to others in what different ways the new water pipeline through Bardechi Wadi will change the lives of the women there. But without the men of the village changing how they think about their women and their ‘responsibilities to the house’, we can’t expect anything meaningful. At the same time, the effects of the climate crisis will keep inflating the price these women pay in terms of their psychological, physical and sexual health and agency.