This is a pretty amazing fail by the @washingtonpost.com's advertising algorithm.
If it’s compressed for you, expand it. SMH
Posts by Brian O'Rourke
The kids in my neighborhood are, by and large, jerks. I'll be giving out KitKats exclusively this Halloween. That'll show 'em.
It gets harder each day to open my laptop lid. Lord, I'm tired.
Awesome. Awesome.
Taking a longish view, I’m not worried about it. Once this problem materializes, the good companies will restore a pipeline and the bad ones will halfheartedly follow. Just as with math teachers. In the near term, however, a lot of people are effed.
Math teachers made a lot of mistakes adopting calculators in the years between me leaving middle school and me graduating from college. Companies and managers are making analogous mistakes now. Good teachers eventually figured it out, though. Good companies will, too.
The biggest thing, so far, is this: LLMs pose the biggest threat to mediocrity, because they are *better* at mediocrity. They can produce it faster, with more consistency, and with a better sense of how to tweak THEIR mediocrity to make it "better" than mine.
(I taught precalculus and used trig and log tables so the students would know what happened when they pushed those buttons. I was disliked.) Math teachers design curricula & assessments that demonstrate the kids can do the work without the tool. Only then do they let them go farther, faster, with it.
The trick is to view an LLM like a graphing calculator. No one needs to perform most integrations by hand, but we teach it anyway, so that when someone uses a calculator or Mathematica to do the work, he or she knows what they are asking and how the process works. And when an answer is probably BS.
What they can do is not nothing, but they are not yet a threat to my work. The trick is not to embrace LLMs as a panacea nor to fear them as AGI. (Maybe that will come, but it seems improbable. The hardware demands seem impossible, never mind the software likelihood.)
They are bad at rewriting a paragraph while keeping the meaning, voice, and syntax intact. (Better than they were, though. So, watch this space.) They are terrible at rewriting a whole essay and improving it in one bite.
LLMs are very useful for summarizing work. They currently give me a 90% solution on footnote formatting. They can internalize my style guide and catch a handful of small things that get by me and other editors. They can suggest a contextually appropriate alternative word to avoid repetition.
It will be quite a while before AI can consistently glom onto the right emphasis when the speaker is being intentionally deceptive.
AI is a long way away from being as good as a very-good-to-excellent human editor because there's no "average" nuance. Say out loud the sentence "I didn't say she stole the money." Say it 7 times, each time emphasizing one of the words. Each time, the sentence means something different.
Run it through a second LLM to humanize, and it might become impossible. But to do all of this, you have to be a decent writer to start. A good outline is not as easy to produce as people think, and LLMs are not good at it yet.
Second, if you are a decent regular editor, you are safe a while longer. Likewise writer. But you, too, are at risk sooner rather than later. Ask AI to write you an essay on a general prompt, and it will be easy to spot. Give AI a fairly detailed outline, and you'll get something harder to spot.
First, if you are a pretty good copy editor, your job will be gone in a few years. There will still be a need for excellent ones—but they will be harder to groom without the opportunity to start as "pretty good."
I don't know if this is a counterpoint, a contrast, or something else. But I am a professional editor, and I have spent many, many hours in recent months trying to ascertain how LLMs can and cannot help my work, how they harm it, and if there are things it's going to do that are "inevitable."
It was the name of a portable hotspot I once owned
I came here to tell you your publisher had a book, but @hupplescat.bsky.social beat me to it. The sources for this article might be of use, too: www.usni.org/magazines/na...
Does anybody have any data on what CB Buckner’s WAR is?
Hours ago, I realized that the continued existence of today was unwelcome. I look forward to tomorrow doing better.
I spend hours every month debunking quotations like that. "I'll kill my logisticians first." "A ship's a fool to fight a fort." Etc.
Which final boss can’t you get by?
We really need to know what they saw 20 yrs ago to know what you should project now. If this is the last thing they saw, it should also be the next thing they see, though.
We can replace states with Socialist Republic Councils. And when we merge all the intelligence and federal police agencies under the DNI, we can rename it the Committee for State Security (let's throw DHS in there, too, but it should probably have its own Directorate). This is great!
That AI ship, tho...
This is in some ways inevitable—to be expected at this stage of events. But be wary of anyone who uses these two events as the linchpin of their argument. They are, at this moment at least, cherry-picking their data to some degree. Be especially wary of answers you find congenial from the outset.
They show that the character of war is forever altered and that tanks are finished, and they demonstrate that infantry unsupported by direct and indirect fires are doomed. Maneuver is king. Maneuver is obsolete.