Posts by Sergei Nozdrenkov

Fusing DINOv3 Embeddings into 3D Gaussian Splatting of Coral Reefs - check out this incredible work by Josh Hyde from the University of Bristol! 🤯

🔵 Why? Coral reefs are precious, beautiful, incredibly complex, and threatened ecosystems. They are dying fast. Over half a billion people and a quarter of ocean life directly rely on them. To protect them faster, we need to understand corals: measure growth, detect diseases, map species.

Right now, that often means humans manually tracing pixels for hours (segmentation). It's painfully slow, expensive, and doesn't scale. We need to interpret entire reef-scapes instantly to coordinate precise action.

🔵 How does this work? This is zero-shot learning: the model wasn't explicitly trained on "corals" or "sand." Josh built a pipeline utilizing DINOv3 (Vision Transformers) that:

1. Takes raw underwater images (from the open dataset we published earlier).
2. Extracts the feature embeddings (the mathematical "meaning" of the texture/shape).
3. Projects those features directly into the 3DGS scene (Semantic Splatting).

The machine "sees" the difference between structures and textures automatically.

🔵 What does this enable?

Short term: much faster 3D segmentation. Instead of painting pixels, we can just cluster these features in 3D.

Long term: "Semantic Search" for the ocean. Imagine typing "Show me all branching colonies on this reef" or "Highlight the bleached areas," and the 3D digital twin just lights up.

Even if generic DINOv3 (trained on internet images, although that includes iNaturalist) isn't perfect for underwater yet, the pipeline is here! We can swap it for an ocean foundation model later, and it will just work.

🔵 Next steps: Josh is looking to quantitatively validate these clusters. If you have a gold-standard, fully annotated benthic dataset and want to help push this forward, reach out to Josh!

Play with the live demo in the browser here (drag the slider!):
joshhyde04.github.io/Semantic-Spl...

Huge props to Josh for the execution, and to his supervisors Jeff Clark (@savvyscientist.bsky.social) and Rob Jones for guiding this.

We are getting closer to digitizing the ocean at scale. What a time to be alive! 🪸
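The post doesn't spell out how the 2D features get fused into the 3D scene, but the pipeline steps can be sketched as multi-view feature projection: render each 3D point into every camera, sample the ViT feature map at that pixel, and average across views; clustering or similarity search then runs directly on the fused features. A minimal NumPy sketch - the function names, the synthetic pinhole-camera setup, and the cosine-similarity query are my own illustration, not Josh's actual code:

```python
import numpy as np

def project_points(points, K, R, t):
    # World -> camera -> pixel coordinates (pinhole model).
    cam = points @ R.T + t
    pix = (cam[:, :2] / cam[:, 2:3]) @ K[:2, :2].T + K[:2, 2]
    return pix, cam[:, 2]  # pixel coords and depth

def splat_features(points, views, feat_maps):
    # Average per-pixel ViT features onto each 3D point across all views.
    n, d = len(points), feat_maps[0].shape[-1]
    acc, cnt = np.zeros((n, d)), np.zeros(n)
    for (K, R, t), fmap in zip(views, feat_maps):
        pix, depth = project_points(points, K, R, t)
        h, w, _ = fmap.shape
        u = np.round(pix[:, 0]).astype(int)
        v = np.round(pix[:, 1]).astype(int)
        ok = (depth > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        acc[ok] += fmap[v[ok], u[ok]]
        cnt[ok] += 1
    return acc / np.maximum(cnt, 1)[:, None]

def query_similar(features, query_idx, top_k=3):
    # "Semantic search": rank points by cosine similarity to a query point.
    norms = np.maximum(np.linalg.norm(features, axis=1, keepdims=True), 1e-9)
    f = features / norms
    return np.argsort(-(f @ f[query_idx]))[:top_k]

# Synthetic example: one 64x64 view whose left half carries one "texture".
K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 32.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
fmap = np.zeros((64, 64, 8))
fmap[:, :32] = 1.0
pts = np.array([[-0.5, 0.0, 2.0], [-0.4, 0.0, 2.0], [0.5, 0.0, 2.0]])
feats = splat_features(pts, [(K, R, t)], [fmap])
```

In the real pipeline `fmap` would be the interpolated DINOv3 patch-token grid for each image, and `points` the Gaussian centers; here the first two points land in the "textured" half of the image, so they cluster together and away from the third.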
Hey friends, I'm looking for a founding engineer and a founding AI research scientist to build multimodal foundation models for natural ecosystems, starting with 3D coral reef data. Who's the best person you know for that? wildflow.ai/apply

Life on Earth is incredibly complex - it's been evolving for billions of years. We need to understand how natural ecosystems work to coordinate precise action. Nature gives us $140T a year in ecosystem services. If we optimise our decisions by just 1%, that would be more than $1T of value a year!

The amount of nature data is skyrocketing. The biggest bottleneck is processing all this multimodal nature data (visual, 3D, genomics, acoustics, remote sensing…) and making quality decisions to drive action. We're starting with coral reefs, one of the most complex and threatened ecosystems.

I'm looking for a world-class engineer/research scientist who's:
- A bit of a crazy idealist - not in it for the money, but to move humanity forward!
- Inspired by the complexity of ecosystems and not afraid to embrace it
- A proactive, entrepreneurial generalist with high agency, thriving in ambiguity
- Excited to model processes in natural ecosystems
- Committed to open-source and open-data principles, happy to build in public
- Comfortable in a fast-moving, early-stage startup environment
- Excited to work in small, dynamic teams that achieve big results
- A relentless optimist, full of humility and compassion - this work is incredibly hard and full of setbacks, but so worth it
- Ready to travel to Madagascar, the Maldives, Indonesia, French Polynesia, etc.
- Based (or willing to be) in London or SF/LA - our HQ is still TBD
- Ideally someone who has worked on multimodal foundation models before
- Experience with large-scale deep learning, vision-language models (e.g., CLIP, Segment Anything), 3D/point-cloud processing, or ecosystem simulation methods is a big plus

🔵 We're bootstrapping our way to product-market fit, collaborating with mission-aligned partners, and exploring a potential raise at some point within a year. Expect early-stage scrappiness with long-term upside (equity, impact, visibility).

🔵 Billions of people depend on our ability to protect natural ecosystems. There are no laws of physics stopping us. We can do it together! 🌊🪸

🔵 If you know a superstar like that, please DM me or comment! The form to apply is in the comment below.

wildflow.ai/apply - link to apply
wildflow.ai/roadmap - roadmap
wildflow.ai/vision - vision
@savvyscientist.bsky.social
joshhyde04.github.io/Semantic-Spl... - demo
Fun to host another #LDNCreativeAI at Canva last night with
@robertlaidlow.bsky.social showing new music instruments 🎶
@nozdrenkov.com the coral dataset 🪸
Nye Thompson sending postcards to satellites 🛰️
Slides coming soon
#creativeAI
A side effect of video-generation models like Veo 2 and Sora is that they build a world model: they learn physics (gravity, light, etc.) in order to generate videos. One day we'll train massive models on all the nature data and arrive at the fundamental model of life 🌍🤯
wildflow.ai - more info
3D Street View for Coral Reefs 🪸
Coral Reef Tetris! 🟧 🟩 🟦 🟪 Would you play this game? 🪸🪸🪸
Pivoting wildflow into a gaming company now haha!
youtu.be/AbvUJXpEHSY - foundation models for ocean biodiversity 🐳 🪸
Saving coral reefs using AI and bioacoustics: youtu.be/QEZD0Q-a0Cs with Ben Williams, a brilliant PhD researcher at UCL / ZSL working on the largest coral reef restoration in the world. Together with DeepMind, he developed SurfPerch, a new AI bioacoustics model for coral reefs 🪸🪸🪸
After I made my first like, it started working. I think it only doesn't work when the feed is empty.