#MemoryModay
Hey Siri. Ok Google. Alexa: A topic modeling of user reviews for smart speakers Hanh Nguyen, Dirk Hovy. Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019). 2019.

#MemoryModay #NLProc 'Hey Siri. Ok Google. Alexa: A topic modeling of user reviews for smart speakers' by Nguyen & @dirkhovy.bsky.social mines smart-speaker reviews for user preferences with topic models, and shows that domain knowledge is needed for market analysis.

Dense Node Representation for Geolocation Tommaso Fornaciari, Dirk Hovy. Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019). 2019.

#MemoryModay #NLProc 'Dense Node Representation for Geolocation' by Fornaciari & @dirkhovy.bsky.social reveals efficient geolocation methods using node2vec & doc2vec models. Larger networks, fewer parameters.
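The node2vec idea behind this line of work: turn graph neighborhoods into "sentences" via random walks, then train a skip-gram model (e.g. gensim's Word2Vec) on them. A stdlib-only sketch of the walk-generation step, over a made-up follower graph:

```python
import random

# hypothetical follower graph: user -> users they connect to
graph = {
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c"],
}

def random_walks(graph, num_walks=5, walk_len=6, seed=0):
    """Uniform random walks; node2vec proper adds biased transition probabilities."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in graph:
            walk = [start]
            while len(walk) < walk_len:
                walk.append(rng.choice(graph[walk[-1]]))
            walks.append(walk)
    return walks

walks = random_walks(graph)  # 5 walks per node, each of length 6
```

Feeding these walks to a skip-gram model yields the dense node representations the paper builds on.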

The Social and the Neural Network: How to Make Natural Language Processing about People again Dirk Hovy. Proceedings of the Second Workshop on Computational Modeling of People’s Opinions, Personality, and Emotions in Social Media. 2018.

#MemoryModay #NLProc 'The Social and the Neural Network: How to Make Natural Language Processing about People Again' by @dirkhovy.bsky.social (2018) argues for putting people, their demographics and social context, back at the center of NLP. #AIEthics

Comparing Bayesian Models of Annotation Silviu Paun, Bob Carpenter, Jon Chamberlain, Dirk Hovy, Udo Kruschwitz, Massimo Poesio. Transactions of the Association for Computational Linguistics, Volume 6. 2018.

#MemoryModay #NLProc 'Comparing Bayesian Models of Annotation' by Paun et al. dives into corpus annotation, evaluating six models' predictiveness and accuracy. Essential for navigating annotators and item difficulties.
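For orientation, the simplest aggregation baseline such Bayesian annotation models are compared against is plain majority voting over annotators; a minimal sketch with made-up annotations:

```python
from collections import Counter

# hypothetical annotations: item -> one label per annotator
annotations = {
    "item1": ["pos", "pos", "neg"],
    "item2": ["neg", "neg", "neg"],
    "item3": ["pos", "neg", "neg"],
}

def majority_vote(labels):
    # pick the most frequent label; ties broken deterministically by label name
    counts = Counter(labels)
    return max(sorted(counts), key=counts.get)

aggregated = {item: majority_vote(ls) for item, ls in annotations.items()}
# → {'item1': 'pos', 'item2': 'neg', 'item3': 'neg'}
```

Unlike this baseline, the Bayesian models in the paper also estimate annotator ability and item difficulty instead of weighting every vote equally.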


#MemoryModay #NLProc
'Is It Worth the (Environmental) Cost?' by @gattanasio.cc et al. analyzes continuous training for language models, weighing the benefits against the environmental impact for responsible use. #AI #Sustainability

arxiv.org/pdf/2210.07365

Countering Hateful and Offensive Speech Online - Open Challenges Flor Miriam Plaza-del-Arco, Debora Nozza, Marco Guerini, Jeffrey Sorensen, Marcos Zampieri. Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts. 2024.

#MemoryModay #NLProc 'Countering Hateful and Offensive Speech Online - Open Challenges' by Plaza-del-Arco, @debora_nozza, Guerini, Sorensen, Zampieri (2024) is a tutorial on the challenges and solutions for detecting and mitigating hate speech.


#MemoryModay #NLProc Uma et al. survey methods for training AI models on data where annotators disagree in 'Learning from Disagreement: A Survey'. How well disagreement-handling methods perform is shaped by evaluation methods & dataset traits.


#MemoryModay #NLProc 'Leveraging Social Interactions to Detect Misinformation on Social Media' by Fornaciari et al. (2023) uses combined text and network analysis to spot unreliable threads.

Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models Paul Röttger, Haitham Seelawi, Debora Nozza, Zeerak Talat, Bertie Vidgen. Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH). 2022.

#MemoryModay #NLProc 'Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models' by @paul-rottger.bsky.social et al. (2022). A suite of tests for 10 languages.
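The functional-test idea can be illustrated in a few lines: each case pairs a templated input with a gold label and a named functionality, and a model passes or fails per functionality. The cases and the keyword baseline below are invented for illustration, not taken from HateCheck:

```python
# hypothetical functional test cases in the spirit of HateCheck
cases = [
    {"text": "I hate [GROUP].", "label": "hateful",
     "functionality": "strong negative emotion"},
    {"text": "I love [GROUP].", "label": "non-hateful",
     "functionality": "positive statement"},
]

def keyword_baseline(text):
    # naive stand-in classifier (an assumption, not any paper's model)
    return "hateful" if "hate" in text.lower() else "non-hateful"

# pass/fail per functionality, which is how functional suites localize errors
results = {c["functionality"]: keyword_baseline(c["text"]) == c["label"]
           for c in cases}
```

Real suites generate many template fillings per functionality and language, so a failure pinpoints exactly which behavior a model gets wrong.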

The State of Profanity Obfuscation in Natural Language Processing Scientific Publications Debora Nozza, Dirk Hovy. Findings of the Association for Computational Linguistics: ACL 2023. 2023.

#MemoryModay #NLProc 'The State of Profanity Obfuscation in NLP Scientific Publications' probes profanity obfuscation practices, including in non-English papers. @deboranozza.bsky.social & @dirkhovy.bsky.social (2023) propose 'PrOf' to aid authors & improve access.

Measuring Harmful Representations in Scandinavian Language Models Samia Touileb, Debora Nozza. Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS). 2022.

#MemoryModay #NLProc 'Measuring Harmful Representations in Scandinavian Language Models' uncovers gender bias, challenging Scandinavia's equity image.

Universal Joy A Data Set and Results for Classifying Emotions Across Languages Sotiris Lamprinidis, Federico Bianchi, Daniel Hardt, Dirk Hovy. Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. 2021.

#MemoryModay #NLProc 'Universal Joy: A Data Set and Results for Classifying Emotions Across Languages' by Lamprinidis et al. (2021) introduces a multilingual emotion dataset and studies how well emotion classification transfers across languages.

Benchmarking Post-Hoc Interpretability Approaches for Transformer-based Misogyny Detection Giuseppe Attanasio, Debora Nozza, Eliana Pastor, Dirk Hovy. Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP. 2022.

#MemoryModay #NLProc 'Benchmarking Post-Hoc Interpretability Approaches for Transformer-based Misogyny Detection' - Attanasio et al. Explores reliability of interpretability in hate speech detection.

#MemoryModay #NLProc 'My Answer is C' by Wang et al. (2024) underscores the scrutiny needed in multiple-choice evaluations of LLMs: first-token probabilities don't always match the models' full text answers.
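The mismatch the paper studies can be sketched like this: compare the choice implied by the model's first token with the choice parsed from its full text reply. Both outputs below are hard-coded stand-ins, not real model calls:

```python
import re

# hypothetical model outputs for one multiple-choice question (choices A-D)
first_token_answer = "A"                    # argmax over first-token probabilities
text_response = "The correct answer is C."  # the model's full text reply

# parse the choice letter actually stated in the text
match = re.search(r"\b([ABCD])\b", text_response)
text_answer = match.group(1) if match else None

mismatch = text_answer != first_token_answer  # the two evaluation views disagree
```

When the two disagree, a first-token-only evaluation scores a different answer than the one the model actually gave in text.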


#MemoryModay #NLProc Hung et al.'s 2023 paper, 'Can Demographic Factors Improve Text Classification?' finds demographic adaptations of Transformer NLP models don't notably boost performance.


#MemoryModay #NLProc 'Detecting Misogynous Memes with Text & Image Modalities' by Attanasio, @deboranozza.bsky.social, Bianchi. Their novel system uses Perceiver IO, surpassing all previous benchmarks.


#MemoryModay #NLProc Plaza-del-Arco, @debora_nozza, @dirkhovy.bsky.social's 2024 paper "Wisdom Instruction-Tuned Language Model Crowds" shows that multiple LLMs can be BETTER than a single model & specialize across tasks & languages.


#MemoryModay #NLProc #TBT 'Pipelines for Social Bias Testing of Large Language Models' by @deboranozza.bsky.social, Federico Bianchi, @dirkhovy.bsky.social (2022). Proposes social bias tests akin to software testing in AI dev pipelines.
