
Posts by Flow Computing

Flow’s architecture builds on these principles: eu1.hubs.ly/H0ttjwj0
#FlowPPU #CPU #ParallelComputing #HPC #AI #FlowComputing #Semiconductors #DeepTech

21 hours ago
Video

Shared memory is central to parallel computing, but difficult to scale efficiently. Traditional multicore systems rely on cache coherence, introducing overhead & limiting scalability. Emulated shared memory (ESM) addresses this, enabling high-throughput execution, latency hiding, & simplified parallel programming.

21 hours ago

This makes it well suited for robotics and embedded systems, where low latency and predictable performance are essential.
→ flow-computing.com/solutions/

#NationalRoboticsWeek #CPU #FlowPPU #HPC #Semiconductors #DeepTech #AI #FlowComputing #ParallelComputing

5 days ago
Black-and-white graphic featuring robotic arms in the background with a centered white paper-style overlay. Large text reads: “just a little reminder: scalable parallel execution drives real-time sensor & signal processing.” Smaller text around the edges references robotics systems, real-time control, parallel workloads, and notes that traditional multicore CPUs struggle to scale efficiently, while Flow enables scalable parallel execution for real-time data processing.

Robotics depends on fast, deterministic execution across parallel workloads. Sensor & signal processing workloads require efficient parallel execution to handle high-throughput data streams in real time. #FlowPPU enables next-gen performance, delivering scalable parallel execution w/ CPUs.
#CPU #HPC

5 days ago

Thank you to those we met for the insightful discussions, including Fuad Abazovic & Deepak Chugh, Hartti Suomela & @um.fi, & to #NordicInnovationHouse. We’re building momentum. See where we'll be next: eu1.hubs.ly/H0tfTQf0

#CPU #HPC #AI #VC #Semiconductor #DeepTech

1 week ago
Three men standing outdoors take a selfie in front of a residential home with a garden and trees, captured on a bright, sunny day.

Group of four men from Flow Computing standing outdoors in front of a Cadence office sign in Silicon Valley, smiling during a sunny day.

Our founders met w/ customers & investors in #SiliconValley. Signals:
→ Growing interest from both customers and investors
→ Highly engaged, technical conversations
→ Multiple discussions moving forward
It’s encouraging to see such traction w/ US-based investors focused on #semiconductor & compute tech.

1 week ago

#ParallelComputing #ComputerArchitecture #Semiconductors #AIInfrastructure #HighPerformanceComputing #CPU #DeepTech #SystemsEngineering #Innovation

1 week ago
Graphic titled “Modern processors face 3 major bottlenecks” listing:
Memory access inefficiencies
High synchronization overhead
Poor scalability as core counts increase
Flow Computing logo displayed at the bottom.

These limitations reflect deeper architectural challenges in modern parallel computing. They’re not easily solved w/ more cores or incremental improvements. #FlowPPU is designed from 1st principles to rethink how parallel workloads are executed. Explore the science behind it: flow-computing.com/science

1 week ago
Graphic titled “Top early-stage deep tech startups in Europe” with Flow highlighted.
Text reads “We’re on the DTM Watchlist!” and “Join DTM26, May 20–21, Berlin.”
Deep Tech Momentum logo in the top right with a colorful cloud illustration background.

We've been selected from 5000+ startups for the #DeepTechMomentum Watchlist. We’ll be at #DTM26 in Berlin, meeting with investors, partners, & industry leaders across the #DeepTech ecosystem. If you’d like to connect, you can request a meeting here: flow-computing.com/meet-timo-at...
#VC #CPU #AI #HPC

2 weeks ago

As system performance becomes increasingly important, improving parallel execution within the #CPU is key to unlocking overall system throughput.
Learn more: flow-computing.com

#Semiconductors #AI #ParallelComputing #DeepTech #FlowComputing #FlowPPU #HPC #AIInfrastructure #ChipDesign

2 weeks ago
Simple diagram titled “Newcomer’s guide to Flow PPU” showing: (1) the problem—CPUs struggle to efficiently utilize parallel workloads, (2) Flow PPU as a general-purpose parallel co-processor alongside the CPU, (3) how it works—CPU handles sequential tasks while PPU executes parallel workloads, and (4) why it matters—improved parallel execution increases overall system throughput.

New? Here’s a quick guide to what Flow PPU is & why it matters. Modern workloads are increasingly parallel. Efficiently utilizing that parallelism remains a challenge. Flow PPU is a general-purpose parallel co-processor designed to work alongside the CPU, enabling more efficient parallel execution.

2 weeks ago

🎧 Listen to the full episode below:
flow-computing.com/parallel-per...

#Semiconductors #AI #ParallelComputing #DeepTech #FlowComputing #FlowPPU #ComputeArchitecture #CPU #HPC #ChipDesign #Podcast #Torino #ItalianTech

2 weeks ago
Podcast graphic with three speakers discussing how microprocessor innovation has become incremental and why general-purpose parallelism is a key future direction.

đŸ‡źđŸ‡č On #DanteDay, we’re sharing an ep. of our podcast in Italian. We recorded this while staying near Via Dante. Our team discusses how #CPU architectures have reached a point of incremental improvements, the limits of specialized accelerators, & why general-purpose parallelism is an important shift. #HPC

2 weeks ago

We’re focused on this challenge. #FlowPPU is a general-purpose parallel co-processor designed to work w/ the #CPU, enabling scalable, high-throughput parallel execution for modern workloads. Explore our technology: flow-computing.com/technology
Source: @cnbc.com
www.cnbc.com/2026/03/13/n...
#HPC

3 weeks ago

These systems rely on orchestration across agents, data movement, & tool execution & control. System performance increasingly depends on how efficiently a #CPU can support & coordinate these workloads. Improving how parallel workloads are executed @ the CPU level is increasingly important. #AI #HPC

3 weeks ago
Minimalist graphic with the title “FLOW PPU” and key descriptors: “general-purpose parallel processing CPU co-processor” on the left, and “linear scaling,” “low synchronization,” and “instruction set independent” on the right, highlighting core architectural features of Flow PPU.

AI isn’t just about GPUs. A recent report highlights this. In a @cnbc.com interview, Dion Harris of #NVIDIA noted that CPUs are ‘becoming the bottleneck’ in AI & agentic workflows. As AI moves toward multi-step, agent-based systems, this reflects a broader change in how workloads are executed.

3 weeks ago

It’s exciting to see the ecosystem around efficient AI infrastructure continue to evolve. Learn more about our approach to parallel computing: flow-computing.com/solutions/
#EdgeAI #AI #ParallelComputing #Semiconductors #DeepTech #FlowComputing #FlowPPU #CPU #HPC

3 weeks ago
Minimal grey graphic featuring the statement “EDGE AI NEEDS PARALLELISM” centered inside a circular outline with Flow branding below. The visual emphasizes the importance of parallel computing for enabling efficient AI workloads on edge and consumer devices.

#AI moves from centralized training âžĄïž deployment. Workloads shift to edge & consumer devices w/ strict constraints. Improving how CPUs execute parallel workloads is essential for more capable AI. We were recently included in an Efficient & #EdgeAI Compute Market Map (parallelism category).

3 weeks ago
Black-and-white portrait of Flow Computing founders Dr. Martti Forsell, Timo Valtonen, and Jussi Roivainen standing together. The graphic headline reads “Meet the Founders of Flow” with text below noting their visit to Silicon Valley, California from March 19–27, 2026.

Our founders will be in #SiliconValley until 3-27-26. If you're interested in learning more about #FlowPPU or seeing a preview of our latest performance results, book a meeting here (meeting slots are limited): flow-computing.com/silicon-vall...
#Semiconductors #AI #DeepTech #HPC #CPU

4 weeks ago

We’re grateful to the SemiTO-V Student Team & the organizers for bringing us together to discuss the future of #HPC. These conversations are important in shaping next-gen compute architectures. More events are coming soon: flow-computing.com/events/
#RISCV #CPU #Semiconductors #FlowPPU #DeepTech

1 month ago
Conference photo from World RISC-V Days in Turin featuring Flow Computing Head of CPU Development Marcello Ranone presenting Flow’s approach to scalable parallel performance.

Thank you to everyone who joined #RISC-V Days. It was a pleasure to see Marcello Ranone presenting our approach to scalable parallel performance & engaging w/ the RISC-V community @ Politecnico di Torino. Thanks also to Andrea Coluccio, PhD for joining & capturing moments from the event.

1 month ago

Read the full article ↓
viewpoints.fov.ventures/p/europe-s-d...

#DeepTech #EuropeanTech #AI #Semiconductors #ParallelProcessing #Robotics #SpatialComputing #FlowComputing #FlowPPU #VentureCapital #VC

1 month ago
Minimal grey quote card featuring a statement from FOV Ventures’ “Europe’s Deep Tech Frontier” article describing Flow Computing as a research-driven company building a new chip architecture to accelerate parallel computing for real-time AI in physical systems. The graphic includes the FOV Ventures attribution and Flow branding.

Europe’s #DeepTech ecosystem is entering a new phase where advances in #computing, #robotics, & #AI infrastructure are converging to power real-world systems. We’re grateful to be featured in FOV’s Viewpoints article on Europe’s deep tech frontier. Thanks David Ripert & FOV team for the support.

1 month ago
Video

Research in shared-memory systems, interconnection networks, and compiler techniques for explicit parallelism drives the #FlowPPU. The result is TCF, a programming and execution model that simplifies parallel software development. See the science behind Flow: flow-computing.com/science/

#CPU #HPC

1 month ago
Video

We’re 2! Over the past 2 years, we’ve been developing #FlowPPU, a licensable co-processor designed to unlock efficient, general-purpose parallel execution across architectures. Thank you to our team, partners, & investors for being part of our journey.
#DeepTech #Semiconductors #Startup #HPC #CPU

1 month ago
Close-up illustration of a processor on a circuit board. Text explains that overall performance depends on CPU-side parallelism, data preparation, pipeline efficiency, synchronization, and throughput per watt. Source: Arm / Futurum Research, 2026.

Slide titled “Scaling AI requires scaling parallel execution inside the CPU,” with a minimalist 3D chip illustration.

Slide titled “Flow PPU enables.” Key points: scalable general-purpose parallel throughput inside the CPU, more efficient execution of data-intensive workloads, improved CPU throughput to reduce accelerator bottlenecks, and ISA-independent integration across x86, Arm, RISC-V, and OpenPOWER.

Accelerators deliver raw compute. System throughput determines performance. Learn how #FlowPPU enables scalable parallel throughput inside the #CPU:
flow-computing.com/science/

Source: Arm / @futurumgroup.bsky.social research, 2026

#DeepTech #Semiconductors #AI #Datacenter #HPC #arm #Futurum

1 month ago
Minimal grey slide with Flow logo and headline: “AI is a system problem — and an accelerator problem.” Source: Arm / Futurum Research, 2026.

Slide titled “AI performance is limited by system orchestration.” Key factors listed: data movement, memory coordination, workload scheduling, and parallel execution efficiency. Source: Arm / Futurum Research, 2026.

Slide titled “AI infrastructure is heterogeneous.” Text explains that GPUs accelerate compute, while CPUs feed data, coordinate execution, manage memory and networking, and keep accelerators utilized. Source: Arm / Futurum Research, 2026.

Minimal slide stating: “As accelerators scale, system throughput becomes the bottleneck.” Source: Arm / Futurum Research, 2026.

AI isn’t just an accelerator problem; it’s a system problem. Recent analysis from #arm & @futurumgroup.bsky.social highlights a structural shift: as #AI scales, overall performance increasingly depends on system orchestration (data movement, scheduling, memory coordination, & CPU-side parallel execution).

1 month ago
Black-and-white four-panel portrait collage of Flow team members with quotes about building an international engineering team across Europe. Center badge reads “Parallel Perspectives - Flow.” Text highlights attracting talent across Europe and scaling collaboration efficiently.

During #engineeringweek we're showcasing the people behind our tech. In #Podcast Ep 3, the team shares how we’re building a pan-European team, & why the best talent isn’t found in 1 place. Listen to the full ep: flow-computing.com/parallel-per...

#DeepTech #Semiconductors #CPU #HPC #FlowPPU

1 month ago

As accelerator performance scales, overall system efficiency increasingly depends on #CPU throughput, data preparation & orchestration. Deep dive into our architecture:
flow-computing.com/science

@semianalysis.skystack.xyz

#AI #Datacenter #Semiconductors #HPC #DeepTech #FlowComputing #FlowPPU

1 month ago
Minimal grey slide with Flow logo and headline: “AI is increasing pressure on datacenter CPUs.” Subtext reads: “Signals from recent industry analysis (SemiAnalysis, 2026).” A rounded button says “What’s driving it?”

Slide titled “What’s driving CPU demand” with a datacenter image showing a person monitoring equipment. Key points: reinforcement learning requires large CPU clusters; agents, RAG, and tool use increase general-purpose compute; CPUs handle data preparation, indexing, decoding, and orchestration; large CPU fleets keep GPU clusters fully utilized. Source: SemiAnalysis, 2026.

Slide titled “Scaling CPUs isn’t simpler” with abstract chip-like background. Key points: core counts are rising but interconnect and memory behavior matter more; latency, coherence, and NUMA effects become major constraints; feeding accelerators reliably becomes the system bottleneck.

Slide titled “A better way to scale CPU performance” with illustration of a CPU and Flow PPU chip. Text explains that Flow PPU enables scalable parallel execution inside the CPU by offloading parallel workloads without increasing core count, enabling linear scaling for data-intensive tasks, improving throughput to reduce system bottlenecks, and supporting x86, Arm, RISC-V, and OpenPOWER architectures.

#AI workloads are changing the role of the #datacenter #CPU. Recent analysis from @semianalysis.skystack.xyz shows rising CPU demand driven by RL environments, agentic workflows, & data-intensive pipelines.

#Semiconductors #HPC #DeepTech #FlowComputing #FlowPPU

1 month ago