
Posts by Joel Gladd

I increasingly feel the John Warners of higher ed are right more than wrong about these things. Although it feels very task-specific.

I’m going on vibes here, not trying to argue anything sophisticated.
@biblioracle.bsky.social

1 year ago 2 0 1 0

In the cycle of enchanted, disenchanted, and re-enchanted with the AI world, I’m briefly stuck in the disenchanted space. Mainly in applications to writing.

I’m relying on these tools for STEM-related tasks, analysis, etc., but becoming very cynical about their use in (non-technical) writing.

1 year ago 1 0 1 0
Post image

It's interesting that this has gone under the radar: OpenAI began adopting a constitutional approach to alignment in late 2024, updated last week. Their "deliberative alignment" spec tells the model to treat a list of rules deontologically and deliberate over how they apply to particular examples.

1 year ago 3 0 0 0
Preview
OpenAI Model Spec The Model Spec specifies desired behavior for the models underlying OpenAI's products (including our APIs).

@annamillsoer.bsky.social the Model Spec for o3 seems ripe for humanities discussion, no? A lot of this is about how to remain principled and useful: model-spec.openai.com/2025-02-12.h...

1 year ago 2 1 0 0

I wonder if it’s a platform issue?

I see a ton of humanities conversation around this, locally and in my online networks.

But AI chatter on X is dominated by tech circles—more so now that AIED seems to have drifted to LinkedIn and here.

1 year ago 5 0 1 0
How Colleges Can Scale AI Readiness: Lessons from a First-Year Experience Program I recently presented at the 44th Annual Conference on the First-Year Experience and wanted to share what my amazing team (Liza Long, Ed.D.

My recent article explores how colleges can scale AI readiness in First-Year courses, drawing insights from our FYE program at CWI. It also includes links to the training we developed (CC BY). Some of these resources may work in First-Year Writing courses as well.

www.linkedin.com/pulse/how-co...

1 year ago 1 0 0 0
Post image Post image Post image

I used the DeepSeek R1 reasoning model to prepare for a new course proposal. These screenshots show with and without the "DeepThink" option turned on--strikingly different. R1 does a lot more synthesis and offers clearer suggestions. It also accepts PDF files; o1 doesn't. Crazy this is open source.

1 year ago 4 1 0 0

haha I just posted something similar before seeing this. ya there's an incredible amount of oversight in that experiment.

1 year ago 1 0 1 0
Post image

What worked so well in this Nigerian experiment with using AI to boost literacy is how carefully each step is overseen by actual teachers. Perhaps the "deskilling" we see in other studies (students losing skills because of too much assistance) is bad strategy. blogs.worldbank.org/en/education...

1 year ago 9 1 0 0
Post image


Prompt engineering with o1:

Interesting to compare this o1 strategy with the CLEAR or RFTC framework (role, format, task, constraints)

I currently find myself relying less on “role” and more on “context dump”

1 year ago 1 0 0 0
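A minimal sketch of the contrast between RFTC-style and context-dump prompting. The helper names and example strings here are my own illustrative assumptions, not part of any published framework:

```python
def rftc_prompt(role: str, fmt: str, task: str, constraints: str) -> str:
    """RFTC style: tell the model who to be, then what/how to answer."""
    return (
        f"You are {role}.\n"
        f"Format: {fmt}\n"
        f"Task: {task}\n"
        f"Constraints: {constraints}"
    )


def context_dump_prompt(context: str, task: str) -> str:
    """Context-dump style: front-load raw background material, then ask."""
    return (
        f"Here is all the relevant background:\n\n{context}\n\n"
        f"With that in mind: {task}"
    )


# Hypothetical usage: preparing a course-proposal prompt without a role.
prompt = context_dump_prompt(
    context="Program outcomes, catalog excerpt, last year's syllabus...",
    task="Draft an outline for a new first-year AI-literacy course.",
)
print(prompt.splitlines()[0])
```

The practical difference is just where the signal comes from: a role asks the model to infer context from a persona, while a context dump hands the context over directly.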
Preview
OpenAI o3 Breakthrough High Score on ARC-AGI-Pub OpenAI o3 scores 75.7% on ARC-AGI public leaderboard.

ARC claims o3 is not using brute force: arcprize.org/blog/oai-o3-.... It also tempers expectations around the benchmark.

1 year ago 2 0 0 0
Post image

here's the article's summary of what o3 seems to be doing on the backend

1 year ago 1 0 0 0
Post image

This is one of the most elegant definitions of LLMs I’ve seen.

(from this post explaining the new o3 model and the ARC benchmark: arcprize.org/blog/oai-o3-...)

1 year ago 4 0 1 0
Post image

I'm shopping for broccoli sprout seeds on Amazon and the "most helpful review" is 100% AI-generated. It's hard for me to read because half the words are completely pointless--but apparently it's helpful to others! 🤷‍♂️

1 year ago 3 0 0 0

Thanks! I put it on the list

1 year ago 1 0 0 0
Post image

My program is collecting data on this (through surveys) and it somewhat tracks. A small percentage of students are definitely “anti-AI”.

Most students, OTOH, say they’re uncomfortable with faculty using AI to evaluate their work, but they’re comfortable with AI in ed otherwise.

1 year ago 2 0 0 0
Post image

I love seeing health gurus compete like this

1 year ago 1 0 0 0

The anecdote in the Hard Fork podcast may have been intended as—and should definitely be interpreted as—an apt and colorful analogy one researcher perceived between Arrival and the simultaneity of processing in Transformers.

It won't bear much weight as a literal claim about historical causality.

1 year ago 24 5 6 1

you could TRY to explicitly encode paul graham's obsession with maker schedules or ribbonfarm's metaphorical thinking but trying to decompose it into explicit steps is sometimes counterproductive - you'll miss all the subtle correlations and emergent patterns that make the approach actually work

1 year ago 0 0 0 0

it's weird that the most effective prompt hack STILL is just "approach this in the style of [person]"

these models internalize not just the style but the whole vibe - the writer's voice, epistemic stance, worldview, etc. "write like x" tightens everything up so nicely

1 year ago 2 0 1 0
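The "approach this in the style of [person]" hack above reduces to a one-line template. This sketch is illustrative only; the persona table and wording are my own assumptions, not a documented API:

```python
# Hypothetical persona table: short descriptions the template can reference.
PERSONAS = {
    "pg": "Paul Graham, plainspoken startup essayist",
    "ribbonfarm": "Venkatesh Rao, dense and metaphor-driven",
}


def in_style_of(persona_key: str, task: str) -> str:
    """Prepend a one-line style directive; the model fills in the rest of
    the vibe (voice, epistemic stance, worldview) on its own."""
    who = PERSONAS[persona_key]
    return f"Approach the following in the style of {who}.\n\n{task}"


print(in_style_of("pg", "Explain maker vs. manager schedules."))
```

The point of the post holds here: a single named style cue does the compression that a long list of explicit rules would struggle to capture.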

Totally. Given that much of this pedagogical debate hinges on workplace preparedness, it seems like we’re not doing a great job of tracking where GenAI is actively scanned for and punished.

I mostly see reports on how much employees are using AI now, but clearly it’s more complicated.

1 year ago 1 0 0 0

I'm seeing more AI scanning being done by employers and institutions--checking for GenAI text in resumes, grant applications, etc.

It's odd that "preparing for the workplace" now means both NOT using GenAI for some things but being really savvy at other tasks.

Anyone publishing on this?

1 year ago 2 0 1 0
Preview
Hawk Tuah and the Zynternet Plus, for some reason, some thoughts about the debate

Not sure why it took me 6 months to discover Max Read's famous article where he coined the term "Zynternet".

article here: maxread.substack.com/p/hawk-tuah-...

perplexity's summary here: www.perplexity.ai/search/zynte...

1 year ago 1 0 0 0

Totally. Institutions should be planning for this IMO. Departments have to figure out how to offer a variety of instruction depending on what students and faculty want.

At least that’s where I’m at.

1 year ago 2 0 0 0

I really hope higher ed can be a place for both of these models (and more). I like that some people are super hardcore about keeping AI out and others think it’s integral to the future of ed. It would be sad to see any side win.

1 year ago 3 0 1 0

These debates over AI in writing (or related) courses often come down to whether the instructor wants to reinforce THE ideal writing model or provide a space for students to explore a variety of strategies they find useful and relevant to their career paths.

1 year ago 5 0 1 0

Ya serious writers don’t seem to use LLMs for drafting right now. Perhaps things will change.

But the situation for most writing COURSES is that students aren’t signing up to become serious writers who develop their voice over years. They just want a degree and a job—and LLMs can help with that.

1 year ago 2 0 1 0

I think a lot of debates over ethical use of AI in the classroom would be more productive if all parties first agreed there are many things happening simultaneously. Context is that which is scarce.

1 year ago 2 1 0 0

If you're depressed that non-expert readers prefer AI-written poetry to the classics, perhaps try Matt's quiz.

You may discover that the real finding here is the huge gulf between your own taste and that of non-expert readers ... a gulf that has likely existed at least since, oh, I.A. Richards?

1 year ago 119 21 24 9

I’ve been holding onto a sketchy Chinese-backed crypto bag for 4 years

1 year ago 4 0 0 0