"Ad astra per aspera", but right now I'd prefer a little more astra, and a lot less aspera.
Good luck, Artemis II. I'd like to say you once again carry my hopes and dreams, but I find I have none to spare.
/x
Posts by Jason McC. Smith
I have watched so many launches and missions, and today I am doing so again, but this time feels like a habit, not an interest.
I am grieving dreams, both personal and societal, but most of all I think I am grieving the expectation of a better tomorrow.
I am grieving hope.
3/
I cheered when the ISS went live. I have followed most of the missions live.
I watched with enthusiasm the development of SpaceX's reusable systems.
Artemis II launched today for the first manned return to the moon since Apollo 17... and... I find myself feeling nothing.
2/
When I was a kid, others had posters of Farrah or Ferraris.
I had the NatGeo Apollo mission posters.
Joining NASA and becoming an astronaut was my dream.
My family still teases me for being... loudly exuberant... over the first STS launch.
I watched Challenger live.
I watched Columbia live.
1/
That’s precisely the movie that inspired the post.
Network (1976) is up next, and I’m afraid it’s going to fall just as flat because “yeah… and?”
One of the problems with trying to introduce the sons to some of the insightful dystopian satire of prior decades is that it no longer scans as satire.
A lot of the impact is lost.
Along with much else.
Them: "Well, I don't want to assume, you know what that stands for..."
Me: "Absolutely solid suppositions underlying mutual existence?"
Them: "...."
My wife just referred to Nightwing as “bubble butt Robin” and I’ve never been prouder.
*One time*...
I'll raise you non-local pronunciations of:
Puyallup
Sequim
Hoquiam
Snoqualmie
Wenatchee
Issaquah
Mukilteo
And the always amusing geoduck.
Miss you, my friend.
m.youtube.com/watch?v=Faj7...
In preparation for Tron: Ares hitting streaming, might I point out that ABC's 1983 show _Automan_ is on YouTube.
I mean... it can't be any worse, right?
Face of Master Control Program, the main villain from the movie Tron.
Watched Tron last night, and I'm sure that Anthropic naming their connect-to-everything method the Model Context Protocol is sheer coincidence, right?
End of line.
And I can't help but think that sounds remarkably like many of the conversations I have had regarding LLM-backed 'AI', even in professional settings.
Just... no. Stop treating them like magic wish boxes.
A quote from Charles Babbage:
"On two occasions I have been asked: 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."
OTOH, this could be another Touch Bar, but they've been chasing this ultraminiaturization for literally decades.
This smells like they're nearing an internal engineering goal.
Call it a hunch.
Glasses? It's *almost* small enough to fit in a thick earpiece. Battery on opposite side?
Lapel pin? Humane gave a run at this, but it wasn't there.
When you see Apple produce something that just seems... too far, it's often a good bet they're about to make a hard left, after proving the tech.
Below that are the screen, battery, USB port/speaker/mic assembly, and wireless charger.
And that's it.
They shrunk the guts to that bump.
Now, I'm willing to be wrong, but that bump is a pretty damned powerful little computing device. Give it an alternate display, and not-conjoined battery, and...
iPhone 17 Air... that truly is a ridiculously thin device. I think we've crested the peak of practicality, however, and are cruising down to the nanoPod parody on SNL.
With one caveat.
If I understood correctly, (nearly) the entirety of the phone is in the bump at the top.
I do so almost every time.
I have been pointing out for some time that the only significant connection that LLM outputs have to objective reality is what is created by the reader.
This dives further into the psychology of the feedback loop.
Unfortunately, unless we are diligent about reviewing the outputs and making sure we stay within our own abilities and knowledge, this includes making mistakes faster.
Use the tools carefully. Review the outputs. Know their limitations *and* your own.
Otherwise it's all just hallucinations.
None of the outputs are real, factual, or correct until we say they are, as measured against our own experiences, expertise, and biases.
At the moment, the best we can hope for is to speed up our own abilities by performing some of the grunt work, to get to results faster.
This is not an unknown problem. I am not saying anything that isn't already being worked on as an improvement, but the number of people I run into in professional circles who choose to treat these systems as actual sources of truth 'with occasional mistakes' is rather horrifying.
Until there is a fundamental shift in how these systems work, admire the hell out of the advances in the delivery, but recognize that the content is highly suspect.
You are the only source of truth for the output you query for.
Your colleague is the only source of truth for their output.
So of course it is tested, right?
When you've produced code you don't understand, and aren't sure what the requirements really should have been, exactly how are you going to test it?
Unsurprisingly, "let the 'AI' do it" is a frequent response.
And still, no sufficient review against reality.
If you're working on a word processor, maybe this is okay. Maybe no one will notice if that span of bold is actually two spans butted up against each other.
But a mission critical system? Flight control? A power plant? A *medical device*?
Blindly incorporating this output is negligent at best.
And the hallucinations creep into production. The code that at first glance appears valid ("Hey, it *compiles*, what do you want?") ends up being buggy in edge cases that were not considered, or worse, that were considered, stated, but not handled except through 'trust' of the output.
So folks reach past their grasp. They ask the 'AI' to produce code, or models, or documentation, for things that they don't have the expertise to properly review.
They begin to make the mistake of trusting the output when they no longer have the expertise to determine its validity.
However, boilerplate is boring. Boilerplate is pedestrian. Boilerplate isn't hip, it isn't now, it isn't the *vibe*, man.
It's not disruptive, or innovative, or any of the things that attract a lot of people to coding and app development.
It won't get you promoted, it's not cutting edge.
For producing boilerplate, they can be useful. "I need to set up a Docker container inside a zero trust environment." "I want a Java function that performs the same work as this Python code snippet." "Write an SQL statement for me to query this schema but if the columns were renamed to..."
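To make that last prompt concrete, here's a minimal sketch of the sort of column-renaming boilerplate I mean. The `users` table, its columns, and the aliases are all hypothetical examples of mine, not anything from a real schema; and per everything above, even output this trivial still deserves a review and a test.

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id INTEGER, full_name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada'), (2, 'Charles')")

# The boilerplate ask: query the existing schema, but present the
# columns under the new names, via plain SQL aliases.
rows = conn.execute(
    "SELECT user_id AS id, full_name AS name FROM users ORDER BY id"
).fetchall()
print(rows)  # [(1, 'Ada'), (2, 'Charles')]
```

Tedious to type, trivial to check: the kind of grunt work where the reviewer's expertise comfortably exceeds the task.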