I wanted to try GPT-5 by making it create a refactoring plan for one of the modules in my thesis work. The plan looked fine, so I said okay, start implementing it. Now I have no idea how that module works or if it is working lol... I'll probably revert some commits
I hate these questions
My biggest mistake in 2024 was listening to a guy who said to buy ETH because he'd made lots of money and it would go much higher. And my biggest gratitude is realizing it was a mistake (after losing some money) and running away early
Reminds me "Person of Interest" ๐ youtu.be/oZfQymnABxQ?...
Well, calculators were allowed in some of my math classes such as calculus, and a German-to-German dictionary was allowed in my language class. So I think it still depends on the situation
In that case I'd think less of it, because it looks like a failure to adapt to today's reality. I can understand it for some classes, such as introduction to programming, since LLMs can be abused to do all the homework, but when the topics become advanced it should be allowed, like in open-book exams
If this is the only information I have (whether the student used an LLM or not), my answer is "same". This is one of the "it depends" questions, I think.
For example: Why didn't you use it? The tech wasn't there yet? Too proud?
Why did you use it? For cheating, or as a learning assistant?
Or how can we reward a model that scores well on this dataset? There are still challenges, such as separating touching objects and predicting their types correctly, but how can you tackle them when the dataset punishes the model for making a correct prediction? That just causes heavy overfitting.
Is PanNuke a flawed dataset, and is having a good score on it actually a bad thing? For example, the left image is an example from the dataset, the middle is the ground truth, and the right is my model's prediction. According to a doctor friend, my model is right and the GT is wrong. How can a model learn from this?
Self-supervised Swin Transformer for the pathology domain, when? A recent article shows that an ImageNet-pretrained Swin beats UNI 2 and other pathology foundation models built on vanilla ViT for cell segmentation
In my MSc study in medical image processing (with CV), I feel like I'm missing a lot on the LLM side, but on the other hand it feels like nothing really "new" happens except bigger data and bigger models. Though I think I must read all the DeepSeek papers
I will give a training on ML on GCP in 5 mins
So it turns out using class weights for segmentation is not good, especially if one of your class weights is 0.20 while another class has a weight of 240
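To make that concrete, here is a minimal sketch with PyTorch's CrossEntropyLoss; the three-class setup and the weight values are made up just to illustrate how extreme the ratio is:

```python
import torch
import torch.nn as nn

# Hypothetical per-class weights for a 3-class segmentation task.
# Note the extreme ratio: class 0 at 0.20 vs. class 2 at 240.
class_weights = torch.tensor([0.20, 1.0, 240.0])

# CrossEntropyLoss accepts a per-class weight vector; with a ratio
# this extreme, pixels of the rare class dominate the gradient and
# the common class is effectively ignored.
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(4, 3, 256, 256)          # (batch, classes, H, W)
target = torch.randint(0, 3, (4, 256, 256))   # per-pixel class labels
loss = criterion(logits, target)
print(loss.item())
```

With a ratio like that, a common workaround is to dampen the weights (e.g., a log or square-root transform) or to switch to a loss such as Dice or focal that handles imbalance differently.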
Some people came and started to use Twitter again in a very short time 🥲
Twitter is toxic and bluesky seems empty...
Hey everyone, once again I proved that I'm a pro at GCP and ML *mic drop* hehe. This certificate basically means I know my way around GCP when a business problem needs to be solved using machine learning on Google Cloud!
I'm seeing those new optimizers and wondering whether I can use them with a small batch size (e.g., 4 or 16) for my image segmentation tasks. What do you think?
It turns out you have to do that to have a better bluesky experience
I have things to do, but I've suddenly been struck by inspiration, and since the well-known work-avoidance mechanism has kicked in, I'm going to write down my thoughts on test-time compute, shared decoders, and reasoning. A theoretical and lengthy piece is coming.
medium.com/@m.nusret.oz...
lol I was wrong. Back to reading, thinking, and experimenting again
Choose your fighter! Additional information:
- All models are for the same task
- There is also a training .py for all of them, but I forgot to include it
- All models need to use a dataset.py and maybe other scripts, so think about where to put it and how that would change the structure
Totally irrelevant, but I just realized you're working at Riot and working on... LLMs? I'm really curious right now
It is funny to see that binary opening/closing and watershed are still very useful for segmentation when combined with deep learning. Why funny? Because when I first learned about them, I thought they were things from ancient history that aren't used anymore
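For the curious, a minimal sketch of that combination, assuming a recent scikit-image/SciPy and a hypothetical `prob` foreground probability map from a deep model (the threshold and footprint sizes are illustrative):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import binary_opening, disk
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

# Assume `prob` is a (H, W) foreground probability map from a deep model.
prob = np.random.rand(256, 256)  # placeholder

# Binary opening cleans small false-positive specks out of the mask.
mask = binary_opening(prob > 0.5, footprint=disk(2))

# Distance transform + local maxima give one marker per object,
# which lets watershed split touching instances apart.
distance = ndi.distance_transform_edt(mask)
coords = peak_local_max(distance, min_distance=10, labels=mask)
markers = np.zeros_like(distance, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

labels = watershed(-distance, markers, mask=mask)
```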
I think I finally found my thesis topic! Just need a little more experimentation and some discussions with my advisor now
Happy new year everyone
Code written with the box-drawing characters that old, old software used to make fake UIs
You're still arguing about tabs vs. spaces? May I present…
I have some questions:
- Can we fine-tune a model with registers and get the same results?
- Can we do that with only the last x layers?
- Given a DINOv2 trained w/o registers, would adding register tokens help for my downstream tasks like segmentation?
It seems like bigger vision transformer models need extra tokens (beyond CLS) to store more global information. Otherwise they strip local information from some patches and repurpose them as global context holders. "Vision Transformers Need Registers" by Meta (paper link below)
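To illustrate the idea (this is not Meta's code, just a minimal PyTorch sketch with my own names and sizes): register tokens are extra learnable tokens fed through the encoder alongside CLS and the patches, then thrown away at the output.

```python
import torch
import torch.nn as nn

class ViTWithRegisters(nn.Module):
    """Minimal sketch: learnable register tokens appended alongside
    CLS + patch tokens, then discarded after the encoder."""

    def __init__(self, dim=768, num_registers=4, encoder=None):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        # Extra learnable tokens that can absorb global context,
        # so patch tokens keep their local information.
        self.registers = nn.Parameter(torch.zeros(1, num_registers, dim))
        self.encoder = encoder or nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=12, batch_first=True),
            num_layers=2,
        )
        self.num_registers = num_registers

    def forward(self, patch_tokens):  # (B, N, dim) patch embeddings
        B = patch_tokens.shape[0]
        cls = self.cls_token.expand(B, -1, -1)
        regs = self.registers.expand(B, -1, -1)
        x = torch.cat([cls, regs, patch_tokens], dim=1)
        x = self.encoder(x)
        # Drop the register tokens at the output; keep CLS + patches.
        return x[:, 0], x[:, 1 + self.num_registers:]

x = torch.randn(2, 196, 768)  # 14x14 patches from a 224x224 image
cls_out, patch_out = ViTWithRegisters()(x)
```

For a dense task like segmentation, you would decode from `patch_out`; the registers only exist so the patches don't get hijacked as scratch space.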
So the drop in coin prices over the last 2 days was a strong reminder that I need to find a thesis topic, finish it very, very, very fast, and find a job; otherwise I will be broke much faster than I planned