I'm looking for a PhD student to work with me on formal verification for cryptographic protocols.
This is a 4-year position at VU Amsterdam, co-supervised with Kristina Sojakova. Send me an email if you want to know more!
Posts by Toby Murray
This is NOT a formal job posting, just testing the waters. I have a year of post-doc money. Especially interested in formal methods + applied cogsci + diagramming. If you do work tied to my research, reach out (see my page). Must have US work authorisation, sorry. Please feel free to share/boost!
Ada’s type system works this way with range types: type SmallInt is Integer range 0 .. 10 adds dynamic range checks on values of this type.
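A rough Python analogue of what that Ada declaration buys you (the class name and check are illustrative; in Ada the compiler inserts the check and raises Constraint_Error automatically):

```python
class SmallInt:
    """Sketch of: type SmallInt is Integer range 0 .. 10"""
    LOW, HIGH = 0, 10

    def __init__(self, value: int):
        # Ada performs this check dynamically on every assignment
        # to the type; out-of-range values raise Constraint_Error.
        if not (self.LOW <= value <= self.HIGH):
            raise ValueError(f"range check failed: {value}")
        self.value = value

SmallInt(7)      # in range, fine
# SmallInt(11)   # would raise, like Ada's Constraint_Error
```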
I suspect geography has also played a role. Former Secret Intelligence Service chief Sir Stewart Menzies and former Australian Prime Minister Sir Robert Menzies were contemporaries, yet the former was pronounced “Mingus” while the latter used the more modern pronunciation.
This is some kind of burn, I think, but I’m too far outside the mainstream of PL to understand it
A wonderful post, and a seeming exemplar of how to do AI-assisted mechanisation well.
“the effort of writing a POPL/PLDI paper will soon be comparable to the effort of writing an ICML/ICLR paper” I made exactly this same point to my head of department (an AI researcher) just yesterday.
To a layman’s definition of intelligence, the claim of frontier models being “PhD-level” has moved quickly from being farcical to arguable. We need to have our eyes open to the implications, not least for our PhD students and the future of research theconversation.com/a-phd-is-an-...
Wow! Is that info public? I’d love to learn more
36, you say?
walk
amble
ramble
stroll
strut
shuffle
waddle
sashay
sneak
tiptoe
prance
step
trek
hike
wander
roam
perambulate
meander
saunter
mosey
dawdle
plod
trudge
toddle
march
stride
charge
skulk
pad
swagger
lope
limp
hobble
totter
stumble
No sleep, no worries
Avigad has written an important note grappling with the rise of AI for mathematics. Almost all of it is equally applicable to much of computer science. Academics must face the challenge of re-prosecuting our mission as researchers and educators in the age of AI. www.andrew.cmu.edu/user/avigad/...
Home-brand Howard, perhaps?
Almost no matter what base of human effort this was situated on, 200K lines of proof in 2 weeks is remarkable. In Isabelle/HOL we used to reckon on about 10K lines of proof = 1 person-year. spectrum.ieee.org/ai-proof-ver... cc @lawrpaulson.bsky.social @microkerneldude.bsky.social
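A back-of-envelope check of that claim, using only the figures in the post (the 52-weeks-per-year conversion is mine):

```python
# Figures from the post: 200K lines of proof in 2 weeks, against the
# old Isabelle/HOL rule of thumb of ~10K lines per person-year.
lines_of_proof = 200_000
lines_per_person_year = 10_000

person_years = lines_of_proof / lines_per_person_year   # 20 person-years
elapsed_weeks = 2
speedup = person_years * 52 / elapsed_weeks             # vs. one person working alone

print(person_years)       # 20.0
print(round(speedup))     # 520
```

So the fortnight's output corresponds to roughly twenty person-years of traditional effort, a factor of about 500 over a single prover.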
Family includes everyone! ❤️
Thoroughly enjoyed this
As I said a few months ago (bsky.app/profile/toby...), GenAI allows thought policing and surveillance of a kind Orwell couldn’t dream of. We should expect the familiar debates between safety-by-surveillance vs civil liberty to be rehashed in this arena, as this piece shows
Only more important. However of course that’s what I want the answer to be, because that’s what I find most interesting (rather than the algorithms themselves). OTOH maybe learning the algorithms is already sufficient to internalise the design principles and to recognise when each applies
See e.g. Levitin, whose book is centred on this approach (and from where I inherited it). I’m not sure I’m disagreeing with you, Shriram, but I do wonder to what degree the arrival of GenAI has shifted what is important for students to learn. Intuition tells me that learning how to design has become
with canonical instances. Then give them problems to solve (“design an algorithm to …”) with breadcrumbs pointing to which paradigm ought to be applied, with the canonical instances serving as exemplars. This is very well trodden ground and I have to admit to not having innovated much here.
This is very thoughtful (I’d expect nothing less from Shriram). There is a massive gap I’m sure between what I wish students would focus on and what they actually take away. That’s entirely expected. The way I have tried to teach this stuff previously was to introduce algorithmic paradigms
Know someone doing outstanding research towards or resulting in practical improvements for Australia's national security or capability for defence?
Nominate them for the Eureka Prize for Outstanding Science in Safeguarding Australia by 16 April
https://bit.ly/40jSZi0
Would love to see this talk. But, dude, here is my "program incorrectness" checker:
function is_incorrect(prog) {
return true;
}
It is accurate for 99.9999999.....% of programs, requires no evidence, nor any oracles.
Right. This is something that I'd love your take on. I feel like these kinds of dynamics explain well why simply trying to improve education, critical thinking, etc. is no panacea against disinformation, because people believing bullshit has much more to do with e.g. group belonging than rationality
the rational part that knew that since HDMI transmits coded digital signals the result is all or nothing and “signal quality” is meaningless.
What a perfect example of social dynamics leading to irrational decision making. Mine: a salesman once talked me into paying for a much more expensive HDMI cable to ensure “superior sound and video signal”. In hindsight I’ve always been amazed that the social part of my brain totally overrode
Heaps good. Let's all learn something. Without trying to preempt you, would you have agreed with @ccanonne.github.io 5 years ago? Or does the disagreement stem from different ideas about what the point of teaching "algorithm design" was in the first place?
Peter Schachte and I had exactly this conversation last week and reached almost exactly the same conclusion. Giving students a framework (and familiarity from having applied it) to think lucidly about how to solve computational problems. But that was the point all along.
🤯
bsky.app/profile/grog...