Sometimes talking to someone else (besides yourself) helps, even if it is Gemini. Just realized I can swap Fermat primes for Mersenne primes in my erasure coding. Reducing modulo 2³¹-1 is just a bitwise shift, an ∧ (AND) and an add. Imagine using division in 2026. The suffering is real. K THX BAI.
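For the curious, a minimal sketch of the trick, assuming 32-bit symbols and the standard Mersenne folding; the field size and the code below are my guesses, not taken from the actual erasure coder:

```cpp
#include <cassert>
#include <cstdint>

// Reduce x modulo the Mersenne prime p = 2^31 - 1 without division.
// Since 2^31 ≡ 1 (mod p), the high bits fold back down with a shift,
// an AND and an add; two folds handle any 62-bit product.
static inline uint32_t reduce_m31(uint64_t x)
{
	const uint32_t p = 0x7fffffff; // 2^31 - 1
	x = (x & p) + (x >> 31); // first fold
	x = (x & p) + (x >> 31); // second fold absorbs the carry
	return x >= p ? uint32_t(x - p) : uint32_t(x);
}

int main()
{
	uint64_t a = 0x7ffffffe, b = 0x7ffffffd; // two field elements
	assert(reduce_m31(a * b) == a * b % 0x7fffffff); // matches division
}
```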
wild strawberries finally blooming
Finally went into blooming mode. The others are still in cloning mode ..
I always let the limescale do its magic. Might take a couple of days but the results are fantastic. Just kidding of course. 🤣
Uh, I like that. No more hammering. Sadly those fuckers get clogged fast with the hard water here. But there is a solution for that, or rather a powder: citric acid. 😉
Unless you tell them exactly what you want, they will try to maximize their profits. They always fuck something up, just so you can be milked more. Free open source glasses anyone?
My open source projects are sprouting. Excellent!
Hard to miss watching "Ghost in the Shell: SAC" when you're literally living through the Stand Alone Complex right now
Why do all the work and take the risk of evaluating something yourself when you can let others do it? Actually quite clever. And because you were first, you have some leverage.
I’m incredibly happy to see my OFDM modem being used as a high-speed link for Reticulum. The addition of a GUI and KISS support by the community takes this from a raw modem to a powerful, accessible networking tool. Can’t wait to see where it goes next!
youtu.be/XjB9ULMd32s?...
Food for thought: When using SLM (selected mapping) to search for a scrambling sequence that reduces PAPR in OFDM, why can't we just correlate the input data with an MLS (maximum length sequence) and take the shifts with the lowest correlation? Wouldn't those already give us something good enough? Reasoning: An MLS gives us good PAPR at any shift already.
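To make the idea concrete, here is a toy sketch under assumptions of mine (length 127, the primitive polynomial x⁷+x+1, bipolar correlation); nothing here is from the actual modem code:

```cpp
#include <array>
#include <cstdio>
#include <cstdlib>

constexpr int N = 127; // MLS period for a degree-7 LFSR

// one period of an MLS from the primitive polynomial x^7 + x + 1
std::array<int, N> mls()
{
	std::array<int, N> seq{};
	unsigned reg = 1; // any nonzero start state works
	for (int i = 0; i < N; ++i) {
		seq[i] = reg & 1;
		unsigned fb = (reg ^ (reg >> 1)) & 1; // s[n+7] = s[n+1] ^ s[n]
		reg = (reg >> 1) | (fb << 6);
	}
	return seq;
}

// cyclic shift whose bipolar correlation with the data is smallest
int best_shift(const std::array<int, N> &data, const std::array<int, N> &seq)
{
	int best = 0, best_mag = N + 1;
	for (int s = 0; s < N; ++s) {
		int corr = 0;
		for (int i = 0; i < N; ++i) // bipolar mapping: bit b -> 1 - 2b
			corr += (1 - 2 * data[i]) * (1 - 2 * seq[(i + s) % N]);
		if (std::abs(corr) < best_mag) {
			best_mag = std::abs(corr);
			best = s;
		}
	}
	return best;
}

int main()
{
	std::array<int, N> data;
	for (int i = 0; i < N; ++i)
		data[i] = rand() & 1;
	auto seq = mls();
	int s = best_shift(data, seq);
	for (int i = 0; i < N; ++i) // scramble with the chosen shift
		data[i] ^= seq[(i + s) % N];
	std::printf("chosen shift: %d\n", s);
}
```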
Things are changing way faster now than they did two decades ago. I am past worried.
LLM generated I MADE THIS meme about LLMs
Took more than one prompt and some editing to produce this
People also play the lottery. I used to spend a lot of time in the library. Now I spend time on the internet, searching for that one piece of information. Sadly the signal-to-noise ratio has dropped significantly, and LLMs felt like a fresh breeze at first but are still unsatisfactory. We'll see. 😞
Almost forgot about that village that hates children: Keep away from children
Do not reheat
Caution hot
Yes, that's the one!
Not to forget: Do not eat
Day three: The farting is now much more vigorous and the smell reminds me of ketchup. Really nice. 👃
wet farts on the tray of shame
Day two: They started to fart. Need to wipe that away later to avoid getting mold.
I need all the flavor: fermented celery roots, BAM! Don't know why I didn't think of doing that earlier, as I like them not only in soup but also pickled. Will keep you posted. 🥰
Sorry, forgot to mention .. you can find the tail-biting experiment in the tail_biting branch: github.com/aicodix/code...
So there you have it: Tail-biting codes. Another thing scratched off the bucket list. Think I am gonna stop playing with these for now.
Simulated both options: setting the convolutional state early, before the first frozen bits, and setting it just before the first information bit. Saw no real difference. My intuition was that starting the convolution at the first info bit would keep its output interesting: after m frozen bits it's boring.
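For anyone following along, a minimal sketch of what tail-biting means for the rate-1 convolutional precoder of a PAC code: the input index simply wraps around, which is the same as preloading the shift register with the last m input bits so the encoder starts and ends in the same state. The generator taps are the ones from Arıkan's PAC paper and just an assumption here; the tail_biting branch may use different ones.

```cpp
#include <cstdio>
#include <vector>

// Tail-biting rate-1 convolutional precoding as cyclic convolution:
// v[i] = XOR over j of g[j] * u[i-j] with the index wrapping around,
// equivalent to starting the encoder in the state formed by the last
// m input bits. Taps (1,0,1,1,0,1,1) are an assumption (memory m = 6).
std::vector<int> tb_precode(const std::vector<int> &u)
{
	const int g[] = {1, 0, 1, 1, 0, 1, 1};
	const int m = 6, n = int(u.size()); // assumes n > m
	std::vector<int> v(n);
	for (int i = 0; i < n; ++i) {
		int bit = 0;
		for (int j = 0; j <= m; ++j)
			bit ^= g[j] & u[(i - j + n) % n]; // wrap-around = tail-biting
		v[i] = bit;
	}
	return v;
}

int main()
{
	std::vector<int> u{1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1};
	for (int bit : tb_precode(u))
		std::printf("%d", bit);
	std::printf("\n");
}
```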
comparing frame error rate plot results from list decoding of normal and tail-biting PAC codes at list sizes 16 and 64
Good morning! The effect is very small and better with a larger list size:
Made a mistake and had to redo the simulations but those few FER points I just got from the simulations made me all giddy already. It works. Need to wait a while longer for the whole curve and think about making the decoder faster. So tail-biting is not a meme. Good. 😌
Cool. Multi-round does seem to work. Had to reduce the list size from 64 to 16 of course but we are still more than an order of magnitude slower now. Started simulations, so we can compare. Before you ask: yes, I also started a multi-round with list size 64 .. two orders of magnitude slower. 😀
Read about tail-biting PAC codes but the paper is behind a paywall. So we have to do our own little research. What I found out so far: simply initializing each path with all possible convolutional states doesn't seem to work. Also read multi-round in the headlines. So that's next to try. Stay tuned.
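My reading of "multi-round" is simply one list decoding pass per possible convolutional start state, keeping the best candidate. A rough sketch, with a hypothetical decode_list() standing in for the real SCL decoder:

```cpp
#include <limits>
#include <vector>

struct Candidate {
	std::vector<int> message;
	double metric; // path metric, smaller is better
};

// Hypothetical stand-in for the real SCL decoder of the PAC code,
// decoding under the assumption of a given convolutional start state.
Candidate decode_list(const std::vector<double> &llr, unsigned start_state);

// Multi-round decoding as I understand it: try all 2^m start states
// and keep the best candidate. The 2^m factor matches the "orders of
// magnitude slower" observation above.
std::vector<int> multi_round(const std::vector<double> &llr, int m)
{
	Candidate best{{}, std::numeric_limits<double>::infinity()};
	for (unsigned s = 0; s < (1u << m); ++s) {
		Candidate c = decode_list(llr, s);
		if (c.metric < best.metric)
			best = c;
	}
	return best.message;
}
```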