"The Human and Artificial Rationalities (HAR) conference series is focused on comparing human and artificial rationalities, investigating how they interact together in a practical sense, but also on the theoretical and ethical aspects behind rationality from three main perspectives: Philosophy, Psychology, and Computer Sciences.
HAR aims at building bridges between these three fields of research and invites experts from all around the world to discuss the hottest topics on the interaction of human or artificial cognition and of human-machine interaction."

"This paper will argue that — among other things — we can become too obsessed with factors such as size, chains of thought, and other features of reasoning models. .... as models increased in size, they were more likely to “behave” like a system called mReasoner [43, 44], especially its propensity for System 2 or reflective thinking [29] (Fig. 2). Such results suggest that this capacity to reason reflectively probably matters at least as much as model size [11]."

"So-called “reasoning” models [4] appear to engage in multi-step reflection and often outperform humans on reflection tests [37]. These two facts may lead you to conclude that what distinguishes reasoning models from language models is a capacity for reflective thinking. However, much like humans often pass reflection tests without exhibiting any signs of reflective thinking [17, 74] (see “correct-but-unreflective” in Table 1), Hagendorf and colleagues found that language models continued to outperform humans on reflection tests even when they could not exhibit signs of reflective thinking, such as chain-of-thought reasoning [37] (Fig. 3)."

"There is growing evidence in favor of a dual model approach, a la dual process or dual systems theory [31]. For example, Yan and colleagues found that pairing one small language model with another small model (that serves as a “reflective system”) allowed small hybrid systems to compete with larger models that had more than ten times as many parameters [85] (Table 2). ... These results suggest that dual- or multi- model architectures may be key to yielding the reflective reasoning and rationality that we expect from intelligence systems, but without overlooking other goals or constraints [64]. ... The core idea of this paper is that one key to intelligence is pragmatic (rather than perpetual) deployment of reflective reasoning— a view I have been calling Strategic Reflectivism [Section 4.2.3 in 9]."

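The "strategic reflectivism" idea in the quoted passage (deploy reflective reasoning pragmatically, not perpetually) can be sketched as a confidence-gated router over two models. Everything below is a hypothetical illustration under my own assumptions: the `Answer` type, the `route` function, and the 0.8 threshold are invented for the sketch, not the architecture from Yan and colleagues.

```python
# Hypothetical sketch of confidence-gated routing between an "intuitive"
# (cheap) model and a "reflective" (costly) model. Names and the threshold
# are illustrative assumptions, not taken from the cited paper.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Answer:
    text: str
    confidence: float  # self-reported confidence in [0, 1]


def route(query: str,
          intuitive: Callable[[str], Answer],
          reflective: Callable[[str], Answer],
          threshold: float = 0.8) -> Answer:
    """Answer with the fast model; escalate only when confidence is low."""
    fast = intuitive(query)
    if fast.confidence >= threshold:
        return fast           # intuitive answer deemed good enough
    return reflective(query)  # pay for reflection only when it may help
```

With stub models, an easy query stays with the intuitive model while a hard one escalates to the reflective one; the design point is that the expensive system is invoked conditionally, mirroring the dual-process framing.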
This week I'm Zooming into the Human & Artificial #Rationality conference: https://har-conf.eu

My paper argues that a key to #intelligence is pragmatic switching between intuitive and reflective inference — the paper forthcoming in #LNCS is on #ArXiv: doi.org/10.48550/arX...

Antibacterial and anti-virulence activity of eco-friendly resveratrol-loaded lipid nanocapsules against methicillin-resistant Staphylococcus aureus (Scientific Reports)

I am thrilled to share my latest research article published in @scientific_reports Journal.
The article provides a #proof_of_concept that #resveratrol-loaded #LNCs exert potent #antimicrobial and #antivirulence action against MRSA, mitigating antimicrobial resistance.
doi.org/10.1038/s415...


The investigation
Somebody (on LinkedIn) asked about one particular author. We looked at all 78 papers by this author, and every one of them is a preface.

Looking a bit deeper, they are all prefaces to different #LNCS (Lecture Notes in Computer Science) volumes, but from the same conference.
