I usually prefer imperfect software that is easy and fast to fix when possible (sometimes you can't afford that), and those qualities are good for both human and AI development. So if the agents write code you can't understand, they likely won't understand it either, soon enough.
Posts by Fabien Niñoles
So, no, it's not about review, it's not about processes, it's not about how the code was written or which architecture was used. Don't get me wrong: those are important as practices for achieving reliable software, but in the end, all of them only matter insofar as they reflect software quality.
2/2
No code is perfect, no matter who or what wrote it. Tests are no different in that regard, and neither are metrics. What is different is accountability: agents are never accountable, humans are. It means it is our responsibility to ensure things are working the way we expect them to work.
1/..
Trump is reported close to a "deal" with himself under which US taxpayers would pay him $10 billion.
I served in multiple communist and authoritarian dictatorships, but I never witnessed corruption on this scale or this blatant.
The whole question here is "how do you know if the AI is right?" I always tell my staff "don't come to me with 'because the AI said so.' AI can always be wrong, it's in their terms of service. You are the ones expected to tell them (the agent) that they are wrong."
No AI will ever be accountable.
Marketing is around 20% of GDP and growing faster than GDP, with almost no production value, just like the finance industry (which is worse).
2/2
There's a bit of a balance here: advertising is one of the few places where the free market is still mostly happening... Well, within their own monopolies. Platforms have monopolies over their ads markets, but advertisers compete "freely" within each such micro-market. Platforms win every time.
1/..
Oh, I think it works: I've seen how our revenue as a company has changed every time we run a marketing campaign, but I've also seen how the effect doesn't last.
But what I see with AI is a strong "subconscious" propaganda: multiple actors in the industry, not just marketing, "campaigning" for AI.
The biggest impact of gas prices is not so much on individual transportation as on the transportation industry. I doubt it amounts to just $44 per year.
And the biggest problem is that while a rise in transportation costs drives prices up, the reverse is not true.
I think "so hard" is the important part. And I don't think he meant "through ads." At the same time, the ads market is saturated now; you no longer gain as much influence from ads. The marketing landscape has changed a lot... Now, you buy podcasts to control the narrative.
Our justice system is the inverse of a bloodthirsty mob: slow to accuse, fast to forget. That creates the best opportunity for abuse (the same would be true if it were reversed).
Superrationality requires being slow to accuse and slow to forget. Good judgments are hard to make, but they are also hard to change.
You're encouraging it, so you're part of the problem. Own it instead of pretending you're better than everyone.
It does change my work a lot, and how I approach some problems, but it's clearly an immature technology that should have been introduced more carefully (and later). And I'm talking about security, environmental impacts, copyright (especially violations of it), UX (including safety aspects), etc.
Again, you are comparing your brain with an LLM. Start by doing your own homework before giving them to others. But it's not surprising: you probably don't have enough tokens left to ask the LLM to think for you.
You are the one asking to compare LLMs with the human brain. I'm using LLMs as tools, and as such, I compare them to other tools, not to brains. And guess what: sometimes tools are worth replacing, sometimes they're not. That's called progress, you know, the reason we no longer use flint to cut meat.
You mean the one that has taught LLMs everything they know and the one that still can tell when an LLM is wrong?
Thinking that your brain is just like a tool that can be replaced with an LLM tells more about your own brain than about any LLM.
That's pure rote learning. Many math problems are about figuring out the right story of how to solve the problem. LLMs figure it out by knowing all the stories and picking the one with the highest probability. They still make some very stupid errors due to the lack of a proper knowledge structure.
Except that calculators are mostly accurate. An approach to explaining LLMs that seems to work for people around me is to call them story generators: when you tell one "You are a doctor specialized in oncology," you are not instructing it, you are giving it the premise for a TV soap.
Cold war and nuclear threat nostalgia?
Honestly, I don't miss them.
Web page generated by Claude with a "calligraphic style", including a Latin title (Philosophia), some star-shaped decorations around it, and a stylish initial capital.
The bottom of the web page, with the text (translated from French): "Transcribed with a quill in the year of grace MMXXVI - Sub specie aeternitatis (under the aspect of eternity)."
It was more straightforward for me, but still funny.
What if it is less COVID itself than the sudden realization that we aren't able to deal effectively with such an event? That our social infrastructures are failing us, including our ability to trust our institutions? That we watched some of the privileged profit from it without suffering its consequences?
Always exciting to see the Avignon Papacy in the news
I got into a discussion with a school board candidate running on stopping CRT. I asked them to define it and give one example of it being taught. They hemmed and hawed without examples. I came to realize they didn’t understand the academic use of “critical” as I learned for my PhD.
Called it
Somewhere, I feel that some kind of "psychological safety for institutions" is required for them to work. Given how difficult that is to achieve even among a small group of people, I'm not sure what avenues exist for institution-level safety.
2.2/2.2
I read the paper in more detail, and the 8 rules you're proposing are interesting, but I still think they can create the same kind of difficulties for large problems, similar to those that polycentric governance admits, too.
2.1/..
I also think that, similar to coalitions, there is a resilience dynamic between opposing stacks, leading to an increase in polarization. Is that something you have observed in your research?
3/3
Given the complexity of society, opacity, exclusion, and inefficiency will dominate at some point, simply because of the size of the problem. I'm wondering if any work has been done on solutions to alleviate that.
2/..
Thanks, that's a very interesting thread! I especially appreciate the place of counterpublic stacks and the danger in coalitions. I have the impression that, based on the Lippmann hypothesis alone, no stack can avoid degeneration.
1/..
But that's seeing things as zero-sum. What the US needs is for the richest to recognize how much they owe to the American people, not just to their political friends. You cannot cut taxes without asking everyone, according to their means, to increase their contribution to society's wellness.
2/2