The number of people who don't know about #GPUSPH within #INGV is too damn high (.jpg).
Memes aside, I've had several opportunities these days to talk with people both within the Osservatorio Etneo and other branches of the Institute, and most of them had no idea something like that was being […]
Our most recent paper on #SPH / #FEM coupling for offshore structures modeling with #GPUSPH has been published:
https://authors.elsevier.com/c/1m3VB_hNWk2tT
This kind of work, with validation against experimental results, is always a challenging task, even for the simpler problems. Lab […]
Today I introduced a much-needed feature to #GPUSPH.
Our code supports multi-GPU and even multi-node, so in general if you have a large simulation you'll want to distribute it over all your GPUs using our internal support for it.
However, in some cases, you need to run a battery of simulations […]
I just realized that in my quest to port #GPUSPH to other POSIX-like OSes, I've never actually tried something like Alpine or other non-glibc Linux systems.
Talking about dependencies: one thing we did *not* reimplement in #GPUSPH is rigid body motion. GPUSPH is intended to be a code for #CFD, and while I do dream about making it a general-purpose code for #ContinuumMechanics, at the moment anything pertaining to solids is “delegated”.
When a (solid) […]
I've just reviewed a manuscript about the recent progress made to introduce #GPU support in a classic, large #CFD code with existing good support for massive simulations in traditional #HPC settings (CPU clusters).
I'm always fascinated by the stark difference between the kind of work that […]
By Tesler's law of conservation of complexity
en.wikipedia.org/wiki/Law_of_conservation...
there's a lower bound to how much you can reduce complexity. Beyond that, you're only moving complexity from one aspect to another.
In the case of #GPUSPH, this has materialized in the […]
I'm not going to claim that we found the perfect balance in #GPUSPH, but one thing I can say is that I often find myself thanking my past self for insisting on pushing for this or that abstraction over more ad hoc solutions, because it has made a lot of later development easier *and* more […]
The second point, if you remember what I wrote in the first post of this thread <fediscience.org/@giuseppebilotta/1141401...>, is about the handling of multiple formulations, more complex physics and so on.
This is actually a place where I'm more cautious to embrace the […]
Even now, Thrust as a dependency is one of the main reasons why we have a #CUDA backend, a #HIP / #ROCm backend and a pure #CPU backend in #GPUSPH, but not a #SYCL or #OneAPI backend (which would allow us to extend hardware support to #Intel GPUs). <https://doi.org/10.1002/cpe.8313>
This is […]
I believe our approach of “develop what we need when we need it”, which has been a staple in the development of #GPUSPH, has been a strong point. We *do* have a few external dependencies, but most of the code has been developed “in-house”.
Fun fact: the only “hard dependency” for GPUSPH […]
And of course, if possible, you want to let the user choose arbitrary inter-particle spacings, possibly at runtime. This is e.g. the reason why #GPUSPH has a built-in simple #CSG (#ConstructiveSolidGeometry) system: it's not needed for #SPH, but it allows users to set up test cases even with […]
One of the objectives with our #GPUSPH model is actually to build a sufficiently detailed 3D model that would allow us to explore these effects and hopefully derive simplified laws that can be applied back to the faster models used for short- and long-term hazard assessment. We're still far […]
Finally, #GPUSPH has a #Fediverse account: @gpusph