
Posts by Nusret Ozates

I wanted to try GPT-5 by having it create a refactoring plan for one of the modules in my thesis work. The plan looked fine, so I said okay, start implementing it. Now I have no idea how that module works or whether it is working lol... I will probably revert some commits

8 months ago 0 0 0 0
Post image

I hate these questions

8 months ago 1 0 0 0

My biggest mistake in 2024 was listening to a guy who said to buy ETH, since he had made lots of money and it would go much higher. And my biggest gratitude is realizing it was a mistake (after losing some money) and running away early 😂

1 year ago 0 0 0 0
Harold on Social Media (Person of Interest) (1 x 18), YouTube video by Person of Interest Highlights

Reminds me of "Person of Interest" 😄 youtu.be/oZfQymnABxQ?...

1 year ago 0 0 0 0

Well, calculators were allowed in some of my math classes such as calculus, and a German-to-German dictionary was allowed in my language class. So I think it still depends on the situation

1 year ago 0 0 1 0

In that case I would think less of it, because it looks like a failure to adapt to today's reality. I can understand it for some classes such as introduction to programming, since LLMs can be abused to do all the homework, but when topics become advanced it should be allowed, like in open-book exams

1 year ago 0 0 1 0

If this is the only information I have (whether the student used an LLM or not), my answer is "same". This is one of the "it depends" questions, I think.
For example: Why didn't you use it? The tech wasn't there yet? Too proud?
Why did you use it? For cheating, or as a learning assistant?

1 year ago 0 0 1 0

Or how can we reward a model that has a good score on this dataset? There are still challenges such as separating touching objects, predicting their types correctly, etc., but how can you do that when you punish the model for making a correct prediction, which causes heavy overfitting?

1 year ago 1 0 0 0
Post image

Is PanNuke a flawed dataset, and is having a good score on it actually a bad thing? For example, the left image is an example from the dataset, the middle is the ground truth, and the right is my model's prediction. According to a doctor friend my model is right and the GT is wrong. How can a model learn from this?

1 year ago 0 0 1 0

Self-supervised Swin Transformer on the pathology domain, when? A recent article shows that ImageNet Swin > UNI 2 and other pathology foundation models with vanilla ViT for cell segmentation

1 year ago 1 0 0 0

In my MSc study in medical image processing (with CV) I feel like I'm missing a lot on the LLM side, but on the other hand it feels like nothing really "new" happens except bigger data and bigger models. Though I think I must read all the DeepSeek papers

1 year ago 0 0 0 0
Post image

I will give a training on ML on GCP in 5 mins 🎤

1 year ago 0 0 0 0

So it turns out using class weights for segmentation is not good, especially if one of your class weights is 0.20 while another class has a weight of 240

1 year ago 1 0 0 0
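The imbalance complained about in the post above is easy to see numerically. Here is a tiny sketch (all numbers are made up for illustration, not from the actual experiment) of how one class with weight 240 can dominate a weighted loss even when it covers very few pixels:

```python
import numpy as np

# Hypothetical per-class weights and pixel counts for one batch.
# A 1200x spread between weights (0.20 vs 240) is the scenario from the post.
weights = np.array([0.20, 1.0, 240.0])       # per-class loss weights
pixel_counts = np.array([10_000, 500, 40])   # pixels of each class in the batch
per_pixel_loss = 0.5                         # assume the same raw CE everywhere

# Each class's contribution to the total weighted loss, and its share.
contribution = weights * pixel_counts * per_pixel_loss
share = contribution / contribution.sum()
print(share.round(3))  # the weight-240 class dominates with only 40 pixels
```

Even though the rare class is 0.4% of the pixels, it contributes roughly 80% of the loss here, so the gradients mostly chase it.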

Some people came and then started to use Twitter again in a very short time 🥲

1 year ago 0 0 0 0

Twitter is toxic and bluesky seems empty...

1 year ago 1 0 1 0
Post image

Hey everyone, once again I proved that I'm a pro at GCP and ML *mic drop* hehe 😁 This certificate basically means I know my way around GCP when a business problem needs to be solved using machine learning on Google Cloud!

1 year ago 1 0 0 0

I'm seeing those new optimizers and wondering if I can use them with a small batch size (e.g., 4 or 16) for my image segmentation tasks. What do you think?

1 year ago 0 0 0 0
Post image

It turns out you have to do that to have a better bluesky experience

1 year ago 1 0 0 0
Preview
Test-Time Compute, Reasoning and Human Brain

I have things to do, but I've suddenly been struck by inspiration, and since the well-known work-avoidance mechanism has kicked in, I'm going to write down my thoughts on test-time compute, shared decoders, and reasoning. A theoretical and lengthy piece is coming.
medium.com/@m.nusret.oz...

1 year ago 0 0 0 0

lol, I was wrong. Back to reading, thinking, and experimenting again

1 year ago 0 0 0 0
Post image Post image

Choose your fighter 👇 Additional information:

- All models are for the same task
- There is also a training .py for each, but I forgot to add it
- All models need to use a dataset.py and maybe other scripts, so think about where to put it and how it would change the structure

1 year ago 0 0 0 0

Totally irrelevant, but I just realized you are working at Riot and working on... LLMs? I'm really curious right now 😂

1 year ago 2 0 1 0

It is funny to see that binary opening/closing and watershed are still very useful for segmentation when combined with deep learning. Why funny? Because when I first learned about them, I thought they were relics of ancient history, not used anymore

1 year ago 0 0 0 0
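A minimal sketch of those classic operations on a toy binary mask (the mask and structuring elements here are invented for illustration, not the post's actual pipeline): opening removes isolated speckles, closing fills small holes, and a distance transform is the usual starting point for a watershed split of touching objects.

```python
import numpy as np
from scipy import ndimage as ndi

# Toy binary "prediction": one square object, a speckle, and a small hole.
mask = np.zeros((20, 20), dtype=bool)
mask[5:15, 5:15] = True   # main object
mask[9, 9] = False        # small hole inside the object
mask[1, 1] = True         # isolated speckle of noise

# Opening (erosion then dilation) removes the speckle;
# closing (dilation then erosion) fills the hole.
opened = ndi.binary_opening(mask, structure=np.ones((3, 3)))
cleaned = ndi.binary_closing(opened, structure=np.ones((3, 3)))

# For splitting touching objects, the negated distance transform is what
# a watershed (e.g. skimage.segmentation.watershed) would be run on.
distance = ndi.distance_transform_edt(cleaned)
```

On real nuclei masks the structuring element size matters a lot; 3x3 is just the smallest sensible choice for this toy example.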

I think I finally found my thesis topic 🎉🎊 Just need a few more experiments and some discussions with my advisor now

1 year ago 1 0 0 1

Happy new year everyone 🎊

1 year ago 0 0 0 0
Code written with box-drawing characters, used in old software to make fake UIs

You're still arguing about tabs vs. spaces? May I present…

1 year ago 5290 1278 157 145

I have some questions:
- Can we fine-tune a model with registers and get the same results?
- Can we do that with only the last x layers?
- Given a DINOv2 trained without registers, would adding register tokens help for my downstream tasks like segmentation?

1 year ago 0 0 0 0
Preview
Vision Transformers Need Registers Transformers have recently emerged as a powerful tool for learning visual representations. In this paper, we identify and characterize artifacts in feature maps of both supervised and self-supervised ...

arxiv.org/abs/2309.16588

1 year ago 0 0 1 0

It seems like bigger vision transformer models need extra tokens (other than cls) to store more global information. Otherwise they remove local information from some patches and use them as global context holders. "Vision Transformers Need Registers" by Meta (paper link below)

1 year ago 0 0 1 0
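The mechanism described above is just extra tokens prepended to the patch sequence. A shape-level sketch (dimensions are illustrative ViT-Base-style numbers, not taken from the paper; the zero tensors stand in for learnable parameters):

```python
import numpy as np

# Sketch of the idea in "Vision Transformers Need Registers": besides [CLS],
# a few extra learnable "register" tokens join the sequence so global
# context has somewhere to live other than hijacked patch tokens.
batch, n_patches, dim, n_registers = 2, 196, 768, 4

patches = np.random.randn(batch, n_patches, dim)   # patch embeddings
cls_tok = np.zeros((batch, 1, dim))                # learnable in a real model
registers = np.zeros((batch, n_registers, dim))    # learnable in a real model

# The transformer then attends over [CLS] + registers + patches together;
# registers are simply discarded at output time.
tokens = np.concatenate([cls_tok, registers, patches], axis=1)
print(tokens.shape)  # (2, 201, 768): 1 CLS + 4 registers + 196 patches
```

The point of the paper is that without these slots, large ViTs repurpose a few low-information patch tokens as global storage, producing artifact spikes in the feature maps.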

So the drop in coin prices in the last 2 days was a strong reminder that I need to find a thesis topic, finish it very, very, very fast, and find a job, otherwise I will be broke much faster than I planned 😂

1 year ago 0 0 0 0