
Posts by ⚠️🔧⌨️🔥

it absolutely was - they are drunk off their own kool-aid and thought everyone else would also like it and don't really get why you _wouldn't_ want it

1 month ago 0 0 0 0

I also think if this had been called something other than DLSS, people would have just ignored it - or dunked on it, but not gotten mad about it.

"DLSS 5" makes it sound like it's going to *replace* the earlier DLSS modes, which again makes it seem like they aren't going to be able to avoid it

1 month ago 0 0 1 0

to be fair I *like* the dynamic scaling thing generally - I can configure all the other rendering settings to taste then let dlss worry about adjusting the render resolution to compensate and keep a decent framerate

i would be annoyed if that ability also meant the look changed completely

1 month ago 1 0 1 0

it also doesn't help that earlier DLSS modes (or equivalent) have kind of felt not-optional for getting decent performance and image quality at the same time in many games

but the older modes are still there and will almost certainly stick around, 5 is not really a performance-boosting solution

1 month ago 0 0 1 0

this probably would have been a non-issue if it had been demoed in a better state - they can absolutely make it more consistent with the original image, screw with the lighting less, etc. while still improving the bits people like, but they wanted to show off and underestimated the hate for the genai look

1 month ago 1 0 1 0

yeah, people don't really care that there are plenty of mods that can do that and that some people choose to play with them

this is like if a movie could look like it had significantly different art direction depending on the *brand of player* you played it in

1 month ago 1 1 1 0

there are also many (already in production use) better ways to incorporate ML/AI into rendering pipelines than to have it as basically a full-frame gen model on the final image; things like NN-based materials give more efficiency and detail while still being fully directable - this is demoware

1 month ago 0 0 0 0

the original DLSS versions were trained on the specific game, and the later ones are still a lot closer to pure upscalers - effectively making it cheaper to get the 'high settings' look

5 has a _clearly genai look_ and visibly changes the content from what was "authored" (especially wrt lighting)

1 month ago 0 0 0 0

it's actually a sinus surgery device - it's just that now some of those surgeries are becoming surprise brain surgeries

2 months ago 3 1 1 1

the final binaries are generally not so bad, but `target` (with all the intermediate files) is even worse than `node_modules`

4 months ago 1 0 0 0

cuda is done with some wsl2-specific magic passthrough device the runtime libs know how to use - do _not_ try to actually install any drivers inside wsl, that will break it - things that bundle their own cuda runtime like pytorch should just work out of the box

for gui stuff it kinda speaks wayland now

7 months ago 8 0 0 0
Get started mounting a Linux disk in WSL 2 Learn how to set up a disk mount in WSL 2 and how to access it.

never found a good way to make a disk visible to both windows and wsl and perform well from both - but one with a linux fs on it should be attachable to wsl like this learn.microsoft.com/en-us/window...

7 months ago 1 0 1 0

yeah wsl just sucks at this case

7 months ago 1 0 1 0

the root fs on wsl2 should act just like a regular linux fs on a vm - because it is - but permissions _are_ pretty broken on wsl1 generally and when using the wsl2 9p mounts of windows drives

7 months ago 1 0 1 0

wsl2 is much better and _almost_ the same as a vm - now mostly just need to remember that the windows fs mounts are not high iops/mmap-friendly (don't try to run stuff directly off them) and it doesn't run an actual init by default (but *can* be configured to run systemd)
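for reference, the systemd bit is just a wsl.conf toggle (assuming a reasonably current WSL 2 build):

```ini
# /etc/wsl.conf inside the distro; run `wsl --shutdown` from Windows afterwards
[boot]
systemd=true
```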

7 months ago 8 0 1 0

I would just look for "post training", "supervised fine-tuning" (human created example responses), "RLHF" (human rater score tuning) - "alignment" is a lot more related to "AI Safety" stuff, sometimes it means things like getting the models to reject bad requests and sometimes it means AI doomerism

7 months ago 5 0 1 0

The base models (rarely released anymore) almost certainly could be, the "personality" comes from the post training - additional steps at the end with examples in the target style and a bit of tuning by human raters scoring outputs

7 months ago 9 0 1 0

For anything new now we're using modal and having it write back to our own S3

8 months ago 1 0 0 0

IME anything on GPU, even small non-LLM models, is hard to run cost-effectively if you have low or difficult-to-predict utilization

8 months ago 3 0 1 0
Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback - "A trustworthy real-world prediction system should produce well-calibrated confidence scores; that is, its confidence in an answer should be indicative of the likelihood that the answer is correct."

It seems like for a post-trained model, the best that you can do right now is just ask it: arxiv.org/abs/2305.14975
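a rough sketch of the "just ask" approach - the prompt wording here is made up, and `parse_confidence` is just a hypothetical helper for pulling the number back out of the completion:

```python
import re
from typing import Optional

def confidence_prompt(question: str) -> str:
    # Ask the model to verbalize a probability alongside its answer.
    return (
        f"Question: {question}\n"
        "Give your answer, then on a new line state your confidence that "
        "it is correct as a probability between 0.0 and 1.0.\n"
        "Answer:"
    )

def parse_confidence(completion: str) -> Optional[float]:
    # Grab the last 0.0-1.0 number in the completion, if any.
    matches = re.findall(r"\b(?:0(?:\.\d+)?|1(?:\.0+)?)\b", completion)
    return float(matches[-1]) if matches else None
```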

8 months ago 2 0 1 0
Chart from GPT4 technical report p. 12 showing calibration suffering after RL post-training

The first place I saw this was the GPT-4 technical report. arxiv.org/pdf/2303.08774 p.12

8 months ago 2 0 1 0

One interesting result I've seen is that *base* models' (pure next-token predictors) output probabilities match up pretty well with the likelihood of correctness and can kind of be interpreted as confidence scores, but after the post-training steps, especially RL, that stops working.
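a toy illustration of reading a next-token probability as a confidence score - the three-word "vocab" and logits here are obviously made up, not from any real model:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Pretend logits from a base model over a tiny vocab for the next token
# after the prompt "The capital of France is".
vocab = ["Paris", "London", "Rome"]
logits = [4.0, 1.0, 0.5]
probs = softmax(logits)

# For a pure next-token predictor, the probability mass on the predicted
# token can be read directly as a confidence score.
answer, confidence = max(zip(vocab, probs), key=lambda t: t[1])
```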

8 months ago 2 0 1 0

for the most part you should just be able to take existing web/html apps into it unmodified, but it also has some escape hatches to get at native stuff if you need them

8 months ago 0 0 0 0
Tauri 2.0 The cross-platform app building toolkit

Might be looking for something like Tauri v2.tauri.app

8 months ago 0 0 1 0

They do have the option of just using claude code at API rates, fully usage based - but nobody really likes that either because you have no idea how much it will spend on a task ahead of time (and if you max out the limits subs are *still* much cheaper than API rates)

8 months ago 0 0 0 0

I prefer the open source vibecoding tools like Cline where you bring your own API keys, but paying the raw API prices can be rough - I use Cursor basically *for* the subsidy

9 months ago 0 0 0 0

Nowadays, after training them on the whole internet, they do a much shorter post-training phase with chat transcripts (outsourced human workers write these, usually for pennies) to make them chatbots out of the box (ChatGPT), but even those are still fundamentally text completion systems

9 months ago 0 0 0 0

The actual math part of an LLM is barely a screen full of code. The behavior really is all in the training data selection and prompting.
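for a sense of scale, here's the core of it - scaled dot-product attention - in a few lines of numpy (single head, no learned projections, random stand-in activations rather than trained weights):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention, the central equation of a transformer.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

# 4 token positions, embedding dimension 8.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out = attention(Q, K, V)  # a (4, 8) array of mixed token representations
```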

9 months ago 0 0 0 0

They know what it currently is; they don't know what it used to be

9 months ago 3 0 0 0

All of the chatbot ones do. Before chatbots, "LLM" referred to large text autocompletion systems trained on the internet (e.g. GPT-2 and GPT-3). Since those can reliably complete all sorts of text, it was figured out that you could make them into chatbots by just prompting them with enough of a chat transcript.
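roughly, the trick looks like this - `complete` here is a stand-in for any raw text-completion model, not a real API:

```python
def chat_turn(history, user_message, complete):
    """Make a completion model act like a chatbot purely via prompting:
    render the conversation as a transcript and let the model continue it."""
    prompt = "The following is a conversation with a helpful assistant.\n\n"
    for role, text in history:
        prompt += f"{role}: {text}\n"
    prompt += f"User: {user_message}\nAssistant:"
    reply = complete(prompt)
    # Stop if the model keeps going and starts writing the user's next line.
    return reply.split("\nUser:")[0].strip()

# Example with a canned "model" that just returns a fixed continuation:
fake_model = lambda prompt: " Hi! How can I help?\nUser: nevermind"
print(chat_turn([], "hello", fake_model))  # prints: Hi! How can I help?
```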

9 months ago 1 0 1 0