
Posts by Brian S. Connelly

That, or you could give us the p-value for r = .15 and we could translate it to discrete conclusions of "Same", "Unrelated", or "Collect a few more participants and try again."

1 year ago
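For the curious, the arithmetic behind that quip is straightforward. A minimal sketch (my own illustration, not from the thread) of a two-sided p-value for an observed Pearson r, using the Fisher z normal approximation from the standard library only:

```python
import math

def corr_p_value(r: float, n: int) -> float:
    """Two-sided p-value for an observed Pearson r under H0: rho = 0,
    via the Fisher z normal approximation (requires n > 3)."""
    z = math.atanh(r) * math.sqrt(n - 3)  # standardized Fisher z statistic
    # standard normal CDF evaluated at |z|, via the error function
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
    return 2.0 * (1.0 - phi)

# r = .15 only crosses p < .05 once the sample gets fairly large
print(corr_p_value(0.15, 100))  # not significant at n = 100
print(corr_p_value(0.15, 500))  # clearly significant at n = 500
```

The function name and the normal approximation are my choices for illustration; an exact test would use the t distribution with n − 2 degrees of freedom, which gives nearly identical answers at these sample sizes.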

Oh, my quibble was less with the study and more with the process for constructing the .61 benchmark for sameness. Any such benchmark has got to be tied to study design features, and the impact of those features tends to be grossly underestimated.

1 year ago

Funny, I get the same responses to my final exams.

As anticipated, this approach leads to the conclusion of "sameness."

1 year ago

The method effects (correlating across time and across inventories) have got to be a big factor here, no? Self-parent or parent-parent correlations at the same time with the same measure are generally <.45 and give something of a ceiling.

Obvious answer is to have infants self-report on HEXACO.

1 year ago

I can imagine it being unnerving, but I'm happy to know you've been happy with the change!

1 year ago

Honorable mention for:

Training Day

Snoop Dogg

1 year ago

Pick a movie, keep one actor - the rest are muppets

gotta be The Godfather

Robert Duvall

1 year ago

"We invite you to represent your respective areas on this university committee" is undefeated in this regard.

1 year ago
[Image: a close up of a man in a green elf outfit]

1 year ago

Last research methods seminar of the term! If there's been a theme to this course, it's that science would be a lot better off if we asked questions that we really want answers to rather than questions that we think we know answers to.

1 year ago

Omg. @munsterberg.bsky.social is here. This place really is legit.

1 year ago

I appreciate you sharing!

As I read the article today, I couldn't help but remember the excellent blog post from Adam Mastroianni.

www.experimental-history.com/p/im-so-sorr...

Here's its central thesis:

1 year ago
[Image: a man in a suit and cowboy hat is driving a car]

1 year ago
[Image: two men in suits and hats standing next to each other in front of a window, looking at a picture]

1 year ago

Let me know what I missed that you want the other side to be aware of and try crossing over conferences sometime!  Many props to other personality/IO double-dippers; you know who you are. Also posting to the periwinkle place. (16)

2 years ago

#8. (Intentional) personality change was a frequent topic at #WCP2024, with loads of implications for executive coaches and selection systems that use personality tests. (15)

2 years ago

Interesting integrations: Ryne Sherman recorded a podcast at SIOP with Jennifer Tackett; Peter Harms has a free-to-use short form of the HDS scales; Mike Wilmot used personality profiles to sort types of counterproductivity, which resemble HiTOP spectra. (14)

2 years ago

#7. Across conferences, interesting research on dark-side traits, spanning (a) the dark triad, (b) personality disorders / Hogan HDS scales, and (c) employee counterproductivity. (13)

2 years ago

There is real value in the reminder that the item-specific “error” can carry some predictive oomph…but as a meta-analyst I shudder at the prospect of coding item nuances. (12)

2 years ago

#6. 40 years ago, the personality world was aligning around 5 broad traits, while the I/O world was empirically keying tests to criteria. Now SIOP is mostly about broad traits, while personality folks (@Bill Revelle) are pushing item nuances and content-heterogeneous scales. (11)

2 years ago

Core personality folks’ jaws would drop to see how practice is out-pacing the research; more interface with the personality world would do wonders. (10)

2 years ago

#5. Both conferences had loads on AI. Crowd favorites were Heron, Sylvara, & @tspsyched exploring the ouroboros of job applicants using ChatGPT to fake a chatbot personality inventory and Foster using AI to create narrative feedback reports based on personality scores. (9)

2 years ago

Executive coaches should check out its BESSI scale (8) www.sebskills.com/the-bessi.html

2 years ago

#4. WCP was buzzing after @cjsotomatic’s keynote on developing 5 FFM-ish skills in kids. ~“If you tell schools that you want to change kids’ personality, they think you’re evil. But if you say you’ll develop Social-Emotional-Behavioral skills, it’s ‘Come right in.’” (7)

2 years ago

#3. Both conferences showcased massive projects taxonomizing narrow traits: WCP with @David_J_Hughes et al.’s facet map (facetmap.org) and SIOP with Stanek & Ones’s (2018). A comprehensive comparison is on my to-do list, but I’d be thrilled if you beat me to this. (6)

2 years ago

…and with a dedicated book that describes individual differences in how individuals seek out challenges (sails) and homeostasis (anchors) (5)
umnlibraries.manifoldapp.org/projects/of-...

2 years ago

14 years of work, N > 2 million, 3,543 effect sizes across 79 personality & 97 ability constructs, summarized in PNAS …(4) www.pnas.org/doi/full/10.....

2 years ago

#2. Kevin Stanek & Deniz Ones's monumental meta-analysis of personality–cognitive ability relations had a dedicated symposium at #SIOP2024, but I didn’t hear it mentioned at #WCP2024. (3)

2 years ago

#1. First, there’s about as much personality content at SIOP as at a personality conference (2):

2 years ago

#WCP2024 (personality) and #SIOP2024 (I/O psych) conferences were back-to-back. Research silos are a major pet peeve, so here’s a comparison thread to facilitate conversations between the two. (1/16)

2 years ago