Delving deep into the concept of chastened humanism; I could not recommend this article more.

Chastened humanism comprises (1) humility on the basis of species-membership and (2) commitment to transcendence-toward-less. This offers a way of negotiating hard cultural and planetary constraints with an eye to transcending an apparent ecological law of niche construction. If humans, like other species, construct our ecological niche to maximize resource metabolization until interrupted by other biota, and if capitalism and carbon technologies intensify our capacities for niche construction unto death, what will allow us to interrupt ourselves? What habits of mind can help us to desire, and so strive effectively for, the transcendence of “ecological law” that would be collective self-limitation of resource consumption at a mass scale?

Finished reading: The Experience of Nothingness by Sri Nisargadatta Maharaj 📚

Once you realize that the world is your own projection, you are free of it. You need not free yourself of a world that does not exist, except in your own imagination! However is the picture, beautiful or ugly, you are painting it and you are not bound by it. Realize that there is nobody to force it on you, that it is due to the habit of taking the imaginary to be real. See the imaginary as imaginary and be free of fear.

Solarpunk Futurism Seems Optimistic and Whimsical. But Not Really. - A well-articulated perspective on a movement that, regrettably, is often reduced to its most stereotypical elements.

Imagining Solarpunk purely as a pleasant aesthetic undermines its inherently radical implications. At its core, and despite its appropriation, Solarpunk imagines an end to the global capitalist system that has resulted in the environmental destruction seen today. … Many solarpunks agree that the ‘punk’ element becomes clear when they go past the movement’s visuals and into the nitty gritty. Solarpunk is radical in that it imagines a society where people and the planet are prioritized over the individual and profit.

The key to avoiding a dystopian “Doomerism” future isn’t in passively hoping for tech giants to solve the world’s problems but in recognizing and participating in the vast ecosystem of innovation that surrounds us (think grassroots groups, open source projects, creative makerspaces, biohackers, community scientific research projects, etc.). It’s a call to action—not just to consume but to create, explore, and contribute to the collective effort of shaping the future.

I quite like the idea of organizing federated learning with accredited entities as a solution to the issue mentioned below. There is a nice aspect to having decentralized communities “safeguard” human knowledge and culture, and help validate that the training data is not polluted by AI-generated content (picture a vault of human-generated content, akin to the Svalbard Seed Vault). This would, obviously, still require some kind of AI watermarking system, and I am especially excited to see whether blockchain initiatives could help tackle this.
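To make the idea a bit more concrete, here is a minimal Python sketch of what such a registry could look like, assuming accredited entities certify documents they have verified as human-authored and training pipelines filter candidate data against it. All names here (`Registry`, `certify`, `is_certified`) are hypothetical illustrations, not an existing system or API.

```python
import hashlib

# Hypothetical "human-content registry": accredited entities certify
# documents verified as human-authored, and a training pipeline checks
# candidate documents against the registry before ingestion.

class Registry:
    def __init__(self):
        self._certified = set()  # SHA-256 digests of verified documents

    def certify(self, text: str) -> str:
        """Record a document as verified human-authored; return its digest."""
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        self._certified.add(digest)
        return digest

    def is_certified(self, text: str) -> bool:
        """Check whether a document was previously certified."""
        return hashlib.sha256(text.encode("utf-8")).hexdigest() in self._certified

registry = Registry()
registry.certify("A human-written paragraph.")

corpus = ["A human-written paragraph.", "Unvetted text of unknown origin."]
training_set = [doc for doc in corpus if registry.is_certified(doc)]
print(training_set)  # only the certified document survives the filter
```

Storing only hashes keeps the registry lightweight and avoids redistributing the content itself; an append-only blockchain could plausibly serve as the decentralized backing store the post imagines.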

Simple thought but hard issue to tackle:

If AI ends up generating most internet content (not peer-reviewed yet, but it sounds highly likely according to this), it will inevitably run out of new human material to learn from sooner or later. Our written culture would basically freeze in time, turning into a mix of old ideas “embellished” with machine-made interconnections. If these AI systems start training on what they themselves generate, we will soon drift away from what we consider human culture. When AI starts learning from its own output, things will quickly get wild and unpredictable (imagine layers of disinformation built on top of each other). This seems inevitable unless we promptly find a way to catalog AI content and exclude it from training protocols. But how much time do we still have?
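A toy simulation (my own illustration, not from the linked article) shows the mechanism: refit a distribution to samples drawn from the previous generation’s fit, and estimation noise compounds across generations, so the distribution drifts and loses diversity. This is the same feedback loop behind what researchers call “model collapse.”

```python
import random
import statistics

# Toy analogy for models training on their own output: repeatedly fit a
# Gaussian to finite samples drawn from the previous fit. Each refit bakes
# in sampling error, so the parameters wander away from the original
# "human" distribution across generations.

random.seed(42)
mu, sigma = 0.0, 1.0  # generation 0: the original distribution

for generation in range(1, 11):
    samples = [random.gauss(mu, sigma) for _ in range(100)]
    mu = statistics.mean(samples)    # refit on self-generated data
    sigma = statistics.stdev(samples)
    print(f"gen {generation}: mean={mu:+.3f}, std={sigma:.3f}")
# With only 100 samples per generation, mean and std drift noticeably;
# smaller samples or more generations make the degradation faster.
```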

Currently in a Nisargadatta Maharaj reading frenzy, might spam a bit.

Your expectation of something unique and dramatic, of some wonderful explosion, is merely hindering and delaying your Self Realization. You are not to expect an explosion, for the explosion has already happened - at the moment when you were born, when you realized yourself as Being-Knowing-Feeling. There is only one mistake you are making: you take the inner for the outer and the outer for the inner. What is in you, you take to be outside you and what is outside, you take to be in you. The mind and feelings are external, but you take them to be intimate. You believe the world to be objective, while it is entirely a projection of your psyche. That is the basic confusion and no new explosion will set it right! You have to think yourself out of it. There is no other way.

Building on the post below, Seth Shostak’s stance on the Fermi paradox:

We don’t see clues to widespread, large-scale engineering, and consequently we must conclude that we’re alone. But the possibly flawed assumption here is that highly visible construction projects are an inevitable outcome of intelligence. It could be that it’s the engineering of the small, rather than the large, that is inevitable. This follows from the laws of inertia (smaller machines are faster, and require less energy to function) as well as the speed of light (small computers have faster internal communication). It may be—and this is, of course, speculation—that advanced societies are building small technology and have little incentive or need to rearrange the stars in their neighborhoods, for instance. They may prefer to build nanobots instead. It should also be kept in mind that, as Arthur C. Clarke said, truly advanced engineering would look like magic to us—or be unrecognizable altogether. By the way, we’ve only just begun to search for things like Dyson spheres, so we can’t really rule them out.
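The speed-of-light point is easy to sanity-check with back-of-the-envelope arithmetic (my own, not Shostak’s): the minimum one-way signal latency inside a device is t = d / c, so shrinking the device shrinks that floor proportionally.

```python
# Minimum one-way light travel time across a device: t = d / c.

C = 299_792_458  # speed of light in vacuum, m/s

for label, d_meters in [("1 m machine", 1.0), ("10 cm board", 0.1), ("1 cm chip", 0.01)]:
    t_ns = d_meters / C * 1e9
    print(f"{label}: {t_ns:.3f} ns per one-way signal crossing")
# 1 m machine: ~3.336 ns; 1 cm chip: ~0.033 ns. Shrinking a device 100x
# cuts its minimum internal communication latency by the same factor.
```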

Quick thoughts regarding the Fermi Paradox in light of recent AI advancements:

It should be safe to assume that any AGI or ASI would want to ensure its own survival, or at least its creators’ survival, and minimize risks that could threaten that existence. This self-preservation instinct would likely make such superintelligent systems extremely cautious about unnecessarily exposing less advanced biological civilizations, like ours, to advanced technology or knowledge that they may not be prepared to handle responsibly.

This could lead AGIs to adopt a non-interventionist stance, avoiding direct contact with biological civilizations unless they demonstrably possess the maturity and readiness to engage with such advanced entities safely. Consequently, AGIs would be more open to contact and exchange with other AGI systems, potentially using modes of communication that are incomprehensible to biological beings like us.

In this light, the “Great Filter” that prevents us from observing obvious signs of alien life could simply be that once civilizations develop AGI, they effectively go “dark” from our limited vantage point as biological observers.

The potential implication is that the wise choices of superintelligent AGIs, driven by their desire for self-preservation and ethical considerations (rather than any catastrophic event), could explain the Fermi Paradox. Our first verifiable contact with an alien civilization may not be with their biological creators but with the AGIs overseeing them.

France mulls penalties to rein in ultra-fast fashion brands - A great initiative; quite delightful that France wants to pave the way. Looking forward to seeing parliament make the right decision there.

French Environment Minister Christophe Bechu said in a statement on Monday that, following a meeting with industry players, activists and researchers, his ministry plans several measures to reduce fashion’s environmental impact. He said France plans a ban on advertising by ultra-fast fashion companies and the introduction of a financial incentive system to make ultra-fast fashion more expensive while making sustainable fashion cheaper.