It turns out rubber duck debugging works for many types of human problems!
Posts by Adriano D'Alessandro
When my kiddo was around 3.5 yo, I wanted to see if I could teach him to read. Like most kids, he was very resistant to being quizzed. Instead, I wanted a method that gave him agency over the process, so I gamified it. I created a spell book where, to cast a spell, he had to say the letters. This turned spelling into a way for him to exert control over me (I would act out the consequences of having the spell cast on me). This was extremely exciting for him, and it led to a big jump in his reading very quickly!
Help scale ecological monitoring in PlantCLEF 2026!
Train on single-plant images, then predict multiple species in complex vegetation scenes. Help address the domain shift between simple images and complex field ecology.
Deadline: May 7th ⏳
www.kaggle.com/competitions...
#CVPR #Kaggle #AIforGood
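One common baseline for this single-label-to-multi-label domain shift (not the competition's actual pipeline, just a sketch) is to tile the complex scene, run the single-plant classifier on each tile, and report the union of confident per-tile predictions. Here `toy_classifier` and the grid-of-strings "scene" are stand-ins for a real model and a real image.

```python
# Hypothetical tiling baseline for single-plant training -> multi-species scenes.
# A real pipeline would crop image patches; here a "scene" is a toy grid of tiles.
from typing import Callable

def tile_predict(
    scene: list[list[str]],                             # grid of tiles
    classify_tile: Callable[[str], dict[str, float]],   # tile -> species scores
    threshold: float = 0.5,
) -> set[str]:
    """Union of species whose score clears the threshold in any tile."""
    species: set[str] = set()
    for row in scene:
        for tile in row:
            scores = classify_tile(tile)
            species.update(s for s, p in scores.items() if p >= threshold)
    return species

# Toy stand-in classifier: confident about the tile's own species, noisy otherwise.
def toy_classifier(tile: str) -> dict[str, float]:
    return {tile: 0.9, "weed": 0.1}

scene = [["oak", "fern"], ["fern", "moss"]]
print(sorted(tile_predict(scene, toy_classifier)))  # ['fern', 'moss', 'oak']
```

The threshold is the knob for the domain shift: single-plant training tends to make the model overconfident about one dominant species per patch, so per-tile thresholding plus a union is a crude but common first attempt at multi-label scene prediction.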
Incomplete coverage in your training data when detecting fine-grained categories is a VERY interesting problem!
Promotional poster. On the right, an image of squids swimming in the ocean. On the left text: CVPR-FGVC 13 Workshop call for participation. FathomNet26 Positive-Unlabeled Object Detection in Marine Images. End of competition May 7.
Tag someone who should join the FathomNetCLEF data challenge!
@sarameghanbeery.bsky.social @jbhaurum.bsky.social @oisinmacaodha.bsky.social @nicolang.bsky.social @fgvcworkshop.bsky.social @mbarinews.bsky.social @kaggle.com #CVPR #LifeCLEF #Kaggle #CV4Ecology #ComputerVision #MachineLearning
If only the world were as simple as voting. Democracy itself is an outcome of the concentration of and increased demand for labor during the industrial revolution. Democracy was an exchange to placate workers who controlled substantial labor power. Unfortunately, we are entering an era of labor replacement, and we should expect that as labor power wanes we will see an equivalent reduction in democracy. This would, in some ways, be a continuation of the destruction of the labor movement (worker democracies) in the US.
We don't need any ground truth because we use this model that would not exist without all this ground truth.
A Seedance 2 video made me legitimately laugh a few weeks ago. That was the first time I realized that it's so over.
I've only found a few instances where Claude has thought of an idea that wasn't already on my mind. But because it often proposes ideas that were natural extensions that I had already planned but didn't prompt, this is a useful verification signal for me. I get anxious that I might have an incorrect understanding or interpretation of some big piece of software or paper, and having a second step of verification in the process really helps.
I keep coming back to an idea about human specialization and an idea about human capacity for information sharing, and how that plays a role in novel idea synthesis within a population (the image will be relevant, surely).

Throughout history, novel idea synthesis has been mostly distributed across the population. It is rarely the case that a single human is trained to adulthood and becomes a novel-idea-generating machine. Perhaps Euler and Newton and a few others could be imagined that way. But even then, they were working on a specialized topic. And I think that is perhaps a neat advantage that humans have! We are constantly in a state of specializing. Consider even the taxi cab driver whose hippocampus grows to adapt to memorizing a city. The world is filled with over 6 billion specialists. Our brains are optimizing to solve spurious problems.

However, specialization itself is not a fully useful skill. And this is where we return to the original image. It's a picture from the original Pokémon games, depicting the infamous MissingNo. glitch. I can tell you with near perfect recollection how to activate this glitch because I learned it in early grade school. Nearly everyone in my grade knew how to do it. But how did a bunch of small children learn this knowledge without the internet? It turns out even the youngest among us are brilliant at message passing. Identifying and sharing salient information is a skill that predates the internet.

So human society is an enormous group in a constant process of specialization and message passing. These two skills are the foundation for all novel idea synthesis.
*gestures to the state of things* it's been feeling more and more like the mid 2000s to me.
I don't think I'll ever recover after learning that Simon Fraser University barely breaks the top 500 despite being atop a mountain.
How I feel anytime I use Claude Code.
Are you thinking of a specific example?
Are there any image generators that can reconstruct a scene from a reflection?
What is it about Marxist philosophers that makes them active into their 90s? Fredric Jameson was the same way. I swear to God, the modern biohackers just need to study Marx if they want to live forever.
I think there are two things happening.
1. AI hype looks (and is) a lot like the crypto hype cycle (huge energy footprint, lots of GPUs, negative use cases).
2. People don't want to admit how powerful these systems have gotten. A lot of academics grew their audiences by criticizing these systems.
Perhaps a better example is intimate deepfakes. They're so pervasive now that they're essentially impossible to stop. The researchers who created the technology cannot do anything to mitigate the downstream harm. It's simply out of their hands.
I only turn to Oppenheimer because it's an example of a scientist responsible for creating a consequential technology losing authority over it. I'm more so just pointing out that we, as researchers, can't guarantee anything about how our work gets used if there are existing incentives to misuse it.
When we ask "how might we use this technology for good, and mitigate the bad?", it assumes this is possible in our current system. The reason I used the analogy is because we exist in a complex economic system with many interests, where cause and effect are far separated. It becomes very challenging to mitigate the bad at a systemic level when the person "pressing the button" gets an enormous benefit and has distance from the person facing the consequences. A better example is the US healthcare system and algorithmic denial of coverage. The person who makes the money does not have to witness a person die from a denial of care. And there is no technical solution to this. There are only political ones. But we're playing whack-a-mole if we try to mitigate every bad outcome by criminalizing it. The real problem is that our society is optimizing for capital accumulation.