The fixes will be reversed by whoever created the original problem. Also, for an encyclopedia to be effective, I need to be able to trust articles on subjects I know nothing about. I can't "fix" those. Wikipedia is fundamentally unreliable.
Posts by Allen Holub
I'm sure they _do_ have lots of experience managing failed projects 😄.
Wikipedia is riddled with factual errors.
Articles in true encyclopedias are written and edited by actual experts in the field, and the overall editors work hard to make sure that the information is accurate. None of that happens on Wikipedia.
The online versions have most of the limitations of the printed versions. Wikipedia, which is encyclopedia-adjacent, has its own set of problems and is unreliable at best.
Encyclopedias are fundamentally limited. The information in them is always outdated, and the topics are limited. If you put it online, don't limit the topics, and update it continuously, you will have invented the internet.
I've worked with companies where that didn't happen, so "always" isn't the best choice of words 😄. The way to explain it is by using their language. Sell "releasing sooner gets us revenue sooner" and "why spend money building things that people won't buy."
However, if it's possible to improve the way you work, it's worth the effort.
10/10
Some of that fatalism is justified (large corporations won't adapt), but not all of it (an agency can sell a different way of working as part of the client-acquisition process rather than letting the client dictate how the agency must work).
9/10
Many of those comments will come with a certain fatalism that arises from knowing that "real world" systems don't work, but that individual engineers have no power to change things.
8/10
For example, we need to fund the teams, then dynamically allocate the teams to the work as needed, as compared to "project" thinking, where we budget the project, not the teams.
Of course, I say the above knowing that many will respond that the "real world" doesn't work that way.
7/10
Given that we're releasing every day (or more often), it's not possible to be "behind schedule." That thinking brings major changes in things like budgeting.
6/10
This approach has rarely, if ever, been effective. It's better to first identify just enough to get started. Build that, get feedback, and adjust. We discover what to do next by doing the current thing. There are strategic goals, but decisions about what to build are ongoing.
5/10
our customers actually need. There's a fundamental contempt for the customer built into that attitude. It also implies that the world will not change while we're working. Finally, both assumptions imply that product development is ever done. It's not.
4/10
Our specifications (and thus, plans) are always wrong because nobody knows what they actually want until they get something in their hands. "Scope creep" implies that we won't learn anything as we work, that we'd rather be done "on time" than implement the changes necessary to build something
3/10
That is, you can't be "behind" unless you have a fixed "done" date. Similarly, "scope creep" implies that we can accurately predict exactly what needs to be built. (Sometimes that's true, but not often.) The problems with this approach are legion, of course.
2/10
The related notions of "behind schedule" and "scope creep" are fundamentally wrongheaded. It's 50-year-old thinking that many of us have abandoned because it doesn't work. "Behind" implies an accurate estimate of a large, detailed specification.
1/10
The question isn't whether AI hallucinates, the question is whether a web search will give you more (or less) useful answers than an AI would. My experience with the most recent models is that the AI is better. The hallucinations are less wrong than the information found by Google.
There isn't a search engine on the planet that doesn't work in exactly that way—regurgitating information with no compensation. Search engines and AIs are the same in that department, but the AI yields better results than simple search does.
Ask your user "why do you need this data," then have the machine solve _that_ problem.
3/3
Most programs that output tables or graphs are doing nothing but re-presenting the data in a different format, when what they really need to do is understand the data and use it to answer the underlying or implicit question.
2/3
When you pose a question to a computer, it should provide you with the answer, not the wherewithal to formulate the answer manually.
1/3
I have. The problem is not the search engine; it's the whole idea of search. An LLM can create a solution to a problem or a summary of information that's scattered all over the web (and in books). No search engine can do that. At best, it can give me the wherewithal to hand-build my own solution.
<sigh/> That's a tired argument, and is eclipsed by the reality that the tech is not going away. You seem unaware of the fact that many data centers are 100% renewable (tho, admittedly, that includes carbon credits), and that AI is the largest investor in SMRs (small modular [nuclear] reactors).
Wishing that tech will go away has never worked in the past—the Luddite movement failed spectacularly—and I see no signs it will work now. Let's start with a clear-eyed assessment of reality, then focus on concrete problem-solving instead.
8/8
AI is not going to fade away (many companies with ill-conceived AI products will, but that's a different issue). Boycotting AI will have zero impact—it's already too pervasive.
7/8
We are not going to curb carbon emissions, but we can build carbon-scrubbing systems. Let's focus on that. Instead of banning data centers, let's pass laws that require them to operate in an environmentally sound way. Lots of jobs there.
6/8
I keep hearing people talk about stopping global warming. That ship has sailed. We're all f**ked, and the sooner we take that as a given, the sooner we stop chasing ineffective chimeras and put our energy into developing real, systemic corrections.
5/8
Saying this will, of course, bring out the people who say that AI uses too many resources for too little gain. They are not wrong, though we can argue about whether the collective gain is really "too little." At this point, however, the argument is moot. It's like global warming.
4/8
It's true that an LLM often yields hallucinations, but so did Stack Overflow. FWIW, most of the search engines incorporate some level of AI, so using it is inevitable, even if you don't use it explicitly.
3/8