
Posts by Kris Reyes

Post image

In this age of powerful GenAI coding capabilities, I sometimes get wistful that new devs may never experience the feeling of manually typing out the first few lines of code for a new project.

2 weeks ago 0 0 0 0

📅 Jan 21 at 10:00 AM ET: Join AC staff scientist Sergio Pablo-García Carrillo to explore lab automation + workflow orchestration in chemistry in a webinar co-hosted by Smart Labs + @SiLA!

Register: www.eventbrite.com/e/towards-th...

3 months ago 1 1 0 0
December Run 100K Challenge

“I am excited to announce” the achievement of a “key metric” that I actually care about. I don’t, however, have any snappy backronym to tie this into AI.

3 months ago 0 0 0 0

The first episode of @thecrashcourse.bsky.social's "futures of AI" series is out, and it's confirming the fear I had when I first heard about the series. Why are bigger channels (Wired was another one) bringing on non-experts to talk about AI? An important lesson: pick your AI gurus more carefully. Sheesh.

5 months ago 0 0 0 0

Why on earth is emailing around an Excel spreadsheet still considered a viable means of organization and project management in 2025??

5 months ago 0 0 0 0

@notion.com I really want to use Notion as a replacement for Word/Google Docs. What's stopping me is the lack of control over how a page is formatted when exporting a PDF. Especially important is the ability to produce documents that look like they came from Word/GDocs: removing the title, DB properties, footers, etc.

9 months ago 0 0 0 0

I’m going through chemotherapy again (ugh), and one side effect this time is tinnitus. I learned it can happen when the brain fills in for damaged auditory nerves—generating sound where input is missing. Not unlike imputation, or how multimodal models handle absent signals. Bug or feature?

10 months ago 0 0 0 0

Or (and this may seem blasphemous to ML people), you can just use educated guesstimates of the parameters. This is, I would argue, more Bayesian than MLE-based hyperparameter tuning, as guesstimates reflect prior knowledge of your system.

1 year ago 0 0 0 0
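The "educated guesstimate" approach from the post above can be sketched in code: fix the kernel hyperparameters from domain knowledge and never optimize them. Everything below (the RBF kernel choice, the toy data, and the hyperparameter values) is a hypothetical illustration, not the author's actual setup.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale, variance):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior_mean(x_train, y_train, x_test, lengthscale, variance, noise):
    """GP posterior mean with hand-set (not optimized) hyperparameters."""
    K = rbf_kernel(x_train, x_train, lengthscale, variance)
    K += noise**2 * np.eye(len(x_train))
    K_star = rbf_kernel(x_test, x_train, lengthscale, variance)
    alpha = np.linalg.solve(K, y_train)
    return K_star @ alpha

# Hyperparameters chosen from domain knowledge, e.g. "the signal varies
# on a scale of ~1 unit with amplitude ~1" -- no tuning step at all.
x = np.array([0.0, 1.0, 2.0])
y = np.sin(x)
mu = gp_posterior_mean(x, y, np.array([1.5]),
                       lengthscale=1.0, variance=1.0, noise=0.1)
```

There is no "training" loop here; the only modeling decisions are the hand-set lengthscale, variance, and noise, which is the whole point of the post.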

Or you can use MAP estimates instead of maximum likelihoods to incorporate prior knowledge and regularize the ill-posedness of the maximum-likelihood calculation.

1 year ago 0 0 1 0
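A minimal sketch of the MAP idea: add a log-prior term to the negative log marginal likelihood before optimizing, so the prior regularizes the objective. The log-normal priors, fixed noise level, and toy data below are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(x1, x2, lengthscale, variance):
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def neg_log_marginal_likelihood(log_params, x, y, noise=0.1):
    """Standard GP objective, parameterized by log(lengthscale), log(variance)."""
    lengthscale, variance = np.exp(log_params)
    K = rbf_kernel(x, x, lengthscale, variance) + noise**2 * np.eye(len(x))
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * (y @ np.linalg.solve(K, y) + logdet + len(x) * np.log(2 * np.pi))

def neg_log_posterior(log_params, x, y):
    # Log-normal priors on both hyperparameters (a hypothetical choice:
    # centered at 1 with unit spread on the log scale). This extra term
    # regularizes the otherwise ill-posed MLE objective.
    neg_log_prior = 0.5 * np.sum(log_params**2)
    return neg_log_marginal_likelihood(log_params, x, y) + neg_log_prior

x = np.array([0.0, 1.0, 2.0])
y = np.sin(x)
res = minimize(neg_log_posterior, x0=np.zeros(2), args=(x, y))
lengthscale_map, variance_map = np.exp(res.x)
```

Dropping `neg_log_prior` recovers plain MLE "training"; the only structural difference between the two is that one additive term.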

What is the alternative to maximum-likelihood estimation of the hyperparameters of a GP model? You can place hierarchical beliefs (priors) on these hyperparameters. This shifts the computational burden from likelihood optimization to "train" a model to methods such as MCMC that sample from the posterior distribution.

1 year ago 0 0 1 0
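One way to sample that hyperparameter posterior is random-walk Metropolis. The sketch below (toy data, a hypothetical standard-normal prior on the log-lengthscale, fixed noise and unit variance) is one minimal instance of the MCMC approach the post describes, not a recommended production sampler.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale):
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

def log_posterior(log_ell, x, y, noise=0.1):
    # Unnormalized log posterior: GP log marginal likelihood plus a
    # standard-normal prior on log(lengthscale) (a hypothetical choice).
    ell = np.exp(log_ell)
    K = rbf_kernel(x, x, ell) + noise**2 * np.eye(len(x))
    _, logdet = np.linalg.slogdet(K)
    log_lik = -0.5 * (y @ np.linalg.solve(K, y) + logdet)
    log_prior = -0.5 * log_ell**2
    return log_lik + log_prior

def metropolis(x, y, n_samples=2000, step=0.3, seed=0):
    """Random-walk Metropolis over log(lengthscale)."""
    rng = np.random.default_rng(seed)
    samples, cur = [], 0.0
    cur_lp = log_posterior(cur, x, y)
    for _ in range(n_samples):
        prop = cur + step * rng.normal()
        prop_lp = log_posterior(prop, x, y)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.uniform()) < prop_lp - cur_lp:
            cur, cur_lp = prop, prop_lp
        samples.append(cur)
    return np.exp(np.array(samples))  # posterior samples of the lengthscale

x = np.array([0.0, 1.0, 2.0])
y = np.sin(x)
ell_samples = metropolis(x, y)
```

Predictions then average over `ell_samples` rather than plugging in a single point estimate, which is where the "shifted computational burden" shows up.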

Hyperparameters need to be set based on prior information or, if you must, tuned to data using empirical Bayesian methods.

1 year ago 0 0 1 0

Second, when you're using GPs in a Bayesian context -- representing priors over an unknown function -- naively tuning the prior's hyperparameters to the data goes against the Bayesian philosophy.

1 year ago 0 0 1 0
Maximum likelihood estimation in Gaussian process regression is ill-posed

First, "training" the model, i.e. hyperparameter tuning by computing maximum likelihood estimates, is ill-posed:

www.jmlr.org/papers/v24/2...

This is especially magnified in low-data settings.

1 year ago 0 0 1 0

So a disproportionate (IMO) amount of effort by both implementers of GP libraries and their users is dedicated to optimizing hyperparameters -- at least in small-data settings. This is not great, for a few reasons:

1 year ago 0 0 1 0

That is, people who first encounter GPs from an ML perspective look for parameters to optimize from data, and this becomes their primary preoccupation. Desperate to fit GPs into the ML mold, they turn to the only "parameters" present in a GP: the hyperparameters of the mean and covariance functions.

1 year ago 0 0 1 0

Gaussian Processes are not ML "things". This is akin to saying Normal distributions are ML -- in fact, it IS almost literally saying this -- which is silly, because they existed before ML was a thing. Why care? When viewed as ML, people incorrectly assume it must fit into the "train/test" modality.

1 year ago 1 0 1 0

You know who was cool? Mr. Wizard.

1 year ago 0 0 0 0
Post image

Aug 11-14, 2025: Accelerate is back at the University of Toronto this summer. Join us for 4 days of talks, workshops and poster sessions on AI, automation, and the future of materials discovery. Early-bird registration and our call for abstracts are now open: accelerate25.ca

1 year ago 7 5 0 0

*make variational inference infinite-dimensional again

1 year ago 1 0 0 0
CNN Headlines | CNN: a curated channel covering major news events across politics, international affairs, business, and entertainment, showcasing the most impactful stories of the day.

It was featured on "CNN Headlines", but I don't know if they are running the stories in a loop:

www.cnn.com/videos/fast/...

1 year ago 0 0 0 0
Post image

You are on the front page!

1 year ago 1 0 1 0