This is absurdly great, but I haven't read a single news article about it. A fully open source, offline-first alternative to Notion that's a collab between the French and German governments because they want to host docs securely and on their own terms. THIS is what Europe should be doing.
Posts by Research Computing Teams
Helpful table for determining how much your university or health system stands to lose if the NIH indirect cap is suddenly lowered to 15%. For SJSU it is $374k, which is on the smaller end but still devastating given our other budget issues. datawrapper.dwcdn.net/l0ZqA/8/
To share at least a little knowledge with the world, I documented the basics of discrete event simulation for reasoning about #HPC system design. The example I wrote shows how to calculate MTTDL for RAID arrays of differing sizes, parity disks, and drive MTBFs.
glennklockwood.com/garden/discr...
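Not the post's actual example, but a minimal sketch of the idea: a Monte Carlo event-driven simulation of MTTDL for a single RAID array. The function name, parameters, and the simplified failure/rebuild model (exponential lifetimes and rebuild times, data loss once failures exceed parity) are my own assumptions.

```python
import random

def simulate_mttdl(n_disks, n_parity, mtbf_hours, rebuild_hours,
                   trials=5000, seed=1):
    """Estimate mean time to data loss (MTTDL) for one RAID array.

    Simplified model (my assumption, not the post's): each disk fails
    independently with exponential mean lifetime `mtbf_hours`; failed
    disks rebuild independently with exponential mean `rebuild_hours`;
    data is lost once more than `n_parity` disks are down at once.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        t, failed = 0.0, 0
        while True:
            healthy = n_disks - failed
            # Time until the next failure among the healthy disks.
            dt_fail = rng.expovariate(healthy / mtbf_hours)
            if failed == 0:
                t += dt_fail
                failed = 1
                continue
            # Race: next failure vs. next rebuild completion.
            dt_repair = rng.expovariate(failed / rebuild_hours)
            if dt_fail < dt_repair:       # another disk dies first
                t += dt_fail
                failed += 1
                if failed > n_parity:     # exceeded parity: data loss
                    break
            else:                         # a rebuild finishes first
                t += dt_repair
                failed -= 1
        total += t
    return total / trials

# Two-disk mirror sanity check: the estimate should land near the
# analytic CTMC result MTTDL = (3*lam + mu) / (2*lam**2),
# with lam = 1/MTBF and mu = 1/rebuild_time.
print(simulate_mttdl(2, 1, 1000.0, 100.0))
```

The memorylessness of the exponential distribution is what lets the loop track only the count of failed disks rather than per-disk timers; a deterministic rebuild time would need an explicit event queue.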
1. Today the NIH director issued a new directive slashing overhead rates to 15%.
I want to provide some context on what that means and why it matters.
grants.nih.gov/grants/guide...
Well, the award for most cowardly, boot-lickingest academic society has squarely gone to the American Society for Microbiology, which has taken down its features on various non-white scientists. Absolutely pathetic behavior. Those articles now come up as “under review”. Truly sickening cowardice.
Petition for name change:
American Society for Microbiology
→
Vichy Society for Microbiology
How do you ensure that your team is securely working with data?
1. Create policies that lay out how team members should access, work with, and store data.
2. Develop templates to be reused for common tasks.
3. Create style guides for naming/organizing things.
4. Store all of this in a team wiki.
Latest issue is out: Northwestern's Christina Maimone on the team's experience with success stories. Plus: standups; Lockwood on life in industry; best forking practices; energy debugging; NIST on genomic data; and Slinky for Slurm on k8s
www.researchcomputingteams.org/newsletter_i...
I hadn't heard of hardened libc++ before - libcxx.llvm.org/Hardening.html. Obviously, Google's code (lots of it being infra code) is different from scientific software, but it's still interesting. Anyone played with this?
Google enabled bounds checking for much of their C++ code with hardened libc++, and performance decreased by only 0.3% (while segfaults decreased by 30%): security.googleblog.com/2024/11/retr...
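For anyone curious what opting in looks like: a minimal sketch (function name mine, and primarily about the build flag) of code whose `operator[]` becomes bounds-checked when built against hardened libc++. The mode macros are from the libc++ hardening docs linked above.

```cpp
// Build with clang against libc++, enabling the "fast" hardening mode:
//   clang++ -stdlib=libc++ \
//       -D_LIBCPP_HARDENING_MODE=_LIBCPP_HARDENING_MODE_FAST demo.cpp
// (_LIBCPP_HARDENING_MODE_EXTENSIVE and _DEBUG check more, at more cost.)
#include <vector>

int last_element(const std::vector<int>& v) {
    // Assumes v is non-empty. In-bounds access: identical behavior in
    // every hardening mode.
    return v[v.size() - 1];
    // By contrast, v[v.size()] compiles in every mode, but under _FAST
    // or stricter hardening it traps at runtime instead of silently
    // reading past the buffer.
}
```

No source changes needed, which is presumably why Google could roll it out across so much code at once.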
Some other things I've written on this basic topic - scientific judgement is part of our job: www.researchcomputingteams.org/newsletter_i...
If a VPR or a funder has to decide between two similar centres, and one can demonstrate opening up new research directions, good careers for trainees, and spinoffs, while the best the other can offer is 89% utilization or fully checked-off worthiness lists, how do you think that decision is going to go?
Without us making the case for our work and our researcher clients' work, how can those in charge possibly be fully informed for their next funding decision?
No one else is going to give them the information they need.
Advocating for the work we do means *qualitatively* showing decision makers how our work and our researcher clients' work directly supports their priorities and missions. How the impact we're having is the impact they want to see.
We owe it to our teams, we owe it to the researchers whose life's work we support, to powerfully advocate for the work we do.
So allocation of resources for supporting research is decided based on human research judgement. Yes, messy, flawed, biased, human research judgement.
There are too many diverse *kinds* of research and scholarship for them to be able to be compared against each other in any kind of quantitative or checklist-algorithmic way.
There are more useful, worthy things to spend research funding on than there is research funding. That would be true even if research funding doubled tomorrow.
How many units of "impact in qualitative social sciences" are there in one unit of "impact in quantitative computational biology"?
But *even if it was quantifiable*, research funders and institutional decision makers have to decide how to allocate scarce resources between incommensurate things. How many units of "reusable research software" equals one unit of "well-used HPC cluster" equals one unit of "hire more postdocs?"
The worthiness or impact of the work we support is basically unquantifiable in the short term. The work we do to *support* that work, doubly so.
No one would actually *say* "Our work is demonstrably worthy - we've shown 85% utilization | 13 out of 14(!!) of the FAIR4RS principles met - so our work is done. If the funders don't fund us, it's on them". But...
In our line of work, one attraction of extrinsic, "objective" metrics like "utilization" for HPC or lists like FAIR4RS for research software is that we kind of hope that passively reporting good metrics will reduce the load of having to actively advocate for the work we do and the work we support.
Things I really like about this resource by @cghlewis.bsky.social
* Comprehensive but not overwhelming
* Includes (essential!) human elements: going over things with team members (03), and building SOPs (05)
* Think about data sharing early (04)
* Think about QA and tracking as next steps (10,11)
Great get-started guide, and another resource we can point PIs and groups to when they need to start thinking about data management.
38 years ago my father gave me my first computer. Today I will travel back in time to stop him.
A TNG scene. We're in the Enterprise conference room, where Picard is holding a meeting with Data, Troi, and Dr Crusher. Barclay is also there, for unknown reasons. Maybe he wandered into the meeting and just sat down, and everyone was too polite to mention it. That happened to me once. I was visiting an office location I didn't normally go to and I wasn't quite sure which conference room I was supposed to be in. I walked into one and sat down and it took me five minutes to realize I was in the wrong meeting. From the perspective of the people in that room, halfway through that meeting, a stranger walked in, sat next to the boss, took notes for five minutes, then walked out without saying a word.
At its heart, Star Trek is a utopian fantasy about a society so advanced that they are capable of holding productive meetings that last no longer than three minutes.