Things like allowing both markdown links and wikilinks, or magic linking to attachments, etc.
There are a lot of formatting issues as well. An Obsidian document will probably fail a markdown linter on a long list of rules.
Posts by Sandip Bhattacharya
#Obsidian says it does a lot of things to make sure the actual #markdown files can be read by any other application, but that's not true in spirit.
There are a lot of Obsidian features that result in what can only be called broken markdown.
Employers should also consider whether they should advise Service Canada of any salary increase to a TFW holding a LMIA-based work permit. Where the increase is more than 2% or is not under one of the exceptions to the "2% increase rule", Service Canada would want to be informed in advance. It is possible that Service Canada could require new recruiting and a new LMIA application on the theory that there may be Canadians who would apply for the position now that it has a higher salary. Where the increase is less than 2%, it may still be a best practice to inform Service Canada of the new salary, so that the deviation from the salary set out in the original job offer is documented if the employer faces a government review or inspection.
While technically the employer can give a justification, doing so will automatically trigger an audit, so immigration lawyers advise their employer clients not to do it in the first place.
TIL that in Canada, if you have a job based on a work permit (similar to H1B in US), you can never get more than a 2% pay hike, regardless of your performance or inflation.
gowlingwlg.com/en-ca/insigh...
claude code diagnosis of its encrypted pdf handling: The root cause was that the Read tool's encrypted-PDF error message acted like a system-level injection that overrode Claude's normal response — it kept repeating "PDF is password protected" regardless of your prompts, because the error was being re-triggered on every turn.
It seems that even the latest version of Claude Code has a bug around handling encrypted PDFs with its Read tool. It gets wedged, asking for the password forever.
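A hedged workaround sketch: one way to avoid feeding a read tool an encrypted PDF in the first place is to check the file for an `/Encrypt` entry before handing it over. This is a standard-library heuristic I'm assuming here, not anything Claude Code itself provides; `is_probably_encrypted` and `readable_pdfs` are hypothetical helper names.

```python
from pathlib import Path

def is_probably_encrypted(pdf_path: str) -> bool:
    """Heuristic: standard encrypted PDFs carry an /Encrypt entry
    in the trailer dictionary. Scanning the raw bytes avoids needing
    a full PDF parser. False positives are possible if the literal
    token appears inside a content stream, but it is a cheap first
    check before handing the file to a tool that may wedge on it."""
    data = Path(pdf_path).read_bytes()
    # Check near the end of the file first, where the trailer usually lives.
    tail = data[-4096:]
    return b"/Encrypt" in tail or b"/Encrypt" in data

def readable_pdfs(paths):
    """Filter out files that look encrypted, so the agent never
    enters the ask-for-password loop on them."""
    return [p for p in paths if not is_probably_encrypted(p)]
```

A real parser (e.g. pypdf's `PdfReader.is_encrypted`) would be more precise; the byte scan is just the dependency-free version.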
I wanted to add mine & just two more blogs from ppl I knew to Kagi's small web: blog.kagi.com/small-web
It was a ridiculous amount of effort to find people who I knew to be bloggers and who still had a blog updated in the last year. Most blogs are dead or have moved to walled gardens.
I am on the Max plan, and my usage is low enough that I haven't hit limits yet. But I just happened to notice this in the changelog today. I would probably have got burned by this. Will set the default in settings so that I don't forget.
“Changed default effort level from medium to high for API-key, Bedrock/Vertex/Foundry, Team, & Enterprise users (control this with /effort)”
Claude Code 2.1.94 has changed the default effort level to high for API users and others. This can blow up reasoning token usage. Reduce it if unexpected, else 💸
Thought I should start blogging a bit more frequently.
Today's post started off with a quote I read in today's WP AI newsletter, about the toxic combination of AI agents, trading, and an AI social web like Moltbook.
blog.sandipb.net/2026/04/06/s...
In case you think this is an April Fool's joke or something, here she is explaining it.
www.instagram.com/p/DWzNnqwD2Lu/
Still processing. 🤯
It appears Milla Jovovich, yes, the actress from The Fifth Element, has written an incredible LLM memory system.
github.com/milla-jovovi...
With USA-Israel blowing up Iranian oil, Iran blowing up the rest of the Middle East's oil, and then Ukraine blowing up Russian oil ... what an irony that the fossil fuel climate change fix is actually going to come about this way, and from the people least expected to do it. Kudos all around.
Final Planet of the Apes scene, with the half-buried Statue of Liberty on the beach.
Artemis II crew returning to Earth.
https://days-since-openclaw-cve.com/
Damn this is hilarious
The court case today is step 1 of making suspect even the citizenship of kids of naturalized citizens who were born while their parents were temporary residents.
Crazy that so many of them voted to do this to their children. Full leopard eating face party.
I am pretty sure that this change in heart is not coming to India. Once people taste power over others, it is impossible to give it back.
I haven't used it much in the last couple of months because my employer sprung for a CC seat. But when I did, it was pretty good. Hallucination is lower on the top models, and with the right context it can be minimized.
I don't get the hate on GitHub Copilot. We should always encourage more ecosystem diversity in the AI space, especially around agents which let you use models from multiple vendors. I would hate to see the space consolidate around Anthropic/OpenAI/Gemini-only tools.
It’s absolutely incredible that the people who have caused the biggest oil crisis in history and the people who have killed the non-oil energy sources strategy in North America are the same people, in the same time period!
This would have been so amusing if we were not all devastated by it.
Claude when asked why it is making the same mistake even after I have written it down: "My mistake was not following the existing instructions."
I should frame this Claude mea culpa as an evergreen AI agent thing.
I have been using the LiteLLM proxy as a Docker image all along, and it was not affected by the attack.
The proxy gives a much better way to get usage accounting via API for all models in one place.
I would probably explore Bifrost as an alternative after this, though.
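To illustrate the accounting idea, a minimal sketch: LiteLLM's proxy speaks the OpenAI-compatible chat-completions format, where each response carries a `usage` dict with `prompt_tokens`, `completion_tokens`, and `total_tokens`. The aggregation helper and the model names below are my own illustration, not part of LiteLLM's API.

```python
from collections import defaultdict

def summarize_usage(responses):
    """Aggregate per-model token usage from a list of
    OpenAI-compatible chat-completion response dicts, the
    shape a proxy like LiteLLM returns for every backend."""
    totals = defaultdict(lambda: {"prompt_tokens": 0,
                                  "completion_tokens": 0,
                                  "total_tokens": 0})
    for resp in responses:
        model = resp.get("model", "unknown")
        usage = resp.get("usage", {})
        for key in ("prompt_tokens", "completion_tokens", "total_tokens"):
            totals[model][key] += usage.get(key, 0)
    return dict(totals)
```

Because every vendor's traffic flows through one proxy in one format, one small function like this covers all models at once, which is the appeal over per-vendor dashboards.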
Stuffed teddy bear left on the curb
Someone left a stuffed teddy on the wet, freezing curb in my neighborhood, and all I could think was “Same, Winnie, same.”
Look, all I am asking is if the world needs to end, can it tell us whether it will do it before tax day?
A walk through my Perplexity account memory was a string of shocks - "You think I like what?", "Just because I asked about it, I don't like it!", "You remembered WHAT?"
If people got fed up with LLM quirks like sycophancy, hallucination, deception, etc., wait till they get hit with problems due to faulty memory, the new rage in all LLM agents.
I used to curate memory earlier when it was used sparingly, but that is no longer scalable; they are collecting a lot more now.
Earlier, I used to take breaks between focused work sessions by doing low-value, low-intensity tasks. Now I am constantly thinking about how to automate those away instead.
Bad idea. Those were the escape, if judiciously used.
Handing those tasks over to AI, running multiple instances on different tasks, constantly context switching between problems, and checking requirements against implementations keeps my head CPU-bound all the time, and it is burning me out.
There is this widely discussed paradox of AI making us more productive while also burning us out, and I have experienced it as well.
And I have realized why: the high-friction, low-value grunt work that I used to do, precisely what AI has short-circuited, gave my brain time to breathe.
So for example, a PDF manipulation skill is what you will reach for to deal with anything PDF. But the skill itself can use more than one Unix utility to do the various things you might want to do with PDFs.
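A sketch of what that one-skill-many-utilities pattern could look like, under my own assumptions: `pdftotext` and `pdfunite` come from poppler-utils and `qpdf` is its own package, but their availability on the machine is assumed, and the dispatcher itself is hypothetical, not any particular skill's actual code.

```python
import shutil
import subprocess

# One skill, several Unix utilities: map each PDF operation to the
# command line a skill might invoke for it. Tool names are real
# utilities; the mapping is illustrative.
PDF_TOOLS = {
    "extract_text": lambda src, dst: ["pdftotext", src, dst],
    "decrypt":      lambda src, dst: ["qpdf", "--decrypt", src, dst],
    "merge":        lambda srcs, dst: ["pdfunite", *srcs, dst],
}

def build_command(operation, src, dst):
    """Return the argv for a PDF operation, or raise if the
    operation is unknown. Kept separate from execution so the
    skill can log or dry-run the command first."""
    if operation not in PDF_TOOLS:
        raise ValueError(f"unsupported operation: {operation}")
    return PDF_TOOLS[operation](src, dst)

def run(operation, src, dst):
    """Execute the mapped utility, failing early with a clear
    message if it is not installed."""
    argv = build_command(operation, src, dst)
    if shutil.which(argv[0]) is None:
        raise RuntimeError(f"required utility not installed: {argv[0]}")
    subprocess.run(argv, check=True)
```

The point is that the caller only ever sees one "PDF skill" surface, while the skill routes each request to whichever utility does that job best.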