@valkeyio.bsky.social - a fast #keyvalue #DB.
Wire-compatible with #Redis OSS 7.2.
#OpenSource BSD-3 license
github.com/valkey-io/va...
PgQue – Zero-bloat #Postgres #queue. One #SQL file to install. Requires pg_cron.
#PostgreSQL
#OpenSource Apache 2.0 lic
github.com/NikolayS/pgque
#Meta: self-improving #coding #agents 🧠⚙️
Write code → run → evaluate → improve
Not just task performance—also the process of improving itself 🔁
Learns across runs + refines its own optimization loop
Meta-learning in practice 🚀
📜 Creative Commons lic
🔗 Link to the paper 📝 in the 💬👇
#AI #LLM #Agents Repost 🔁
@apache.org HugeGraph full-stack graph system: #GraphDB, Computing, and #AI. Supports High Availability & is container-ready.
Complete #graph data processing: storage, real-time querying, and offline analysis. Supports #Gremlin, #Cypher query languages, #REST #API, #SDK.
#OpenSource Apache 2.0 lic
ChandraOCR 2 - 4B model
85.9% on the #olmOCR benchmark
#OCR #AI #LLM #SLM
#OpenSource Apache 2.0 lic
github.com/datalab-to/c...
@linuxkernel.bsky.social guide 4 coding with #AI #agents 🐧
- Follow standard development + submission rules
- Disclose the tool + model used
- No AI “Signed-off-by” (DCO = human only)
- Humans review & own the code
You can’t blame the AI: it executed your request. Responsibility stays with you
#LLM #OpenSource Repost 🔁
Self-hosted #OpenSource document parser with built-in #Tesseract #OCR. Plugins: #EasyOCR, #PaddleOCR.
#CLI, #API or #TypeScript bindings.
#OSS Apache 2.0 lic
Link in the first 💬👇
I would suggest opening a GitHub issue, describing how you test.
On-device #CLI #GraphDB.
Search: Semantic, Fuzzy, #BM25, Hybrid, Graph-Constrained Reranking.
Executables: install via #Brew on #Linux or #macOS, or compile from #Rust source.
#OpenSource MIT lic
#Agents #Database #Graph #VectorDB
#vLLM inference plugins ⚙️ for faster encoders, poolers, structured prediction, embeddings
#SLMs enable real value
🔎 Multi-vector ModernColBERT
🏷️ NER + entity linking GLiNER
🧾 Schema extraction
📊 Reranking
🖼️ Multimodal ColPali/ColQwen
💻 CPU/GPU local
#AI #LLM #RAG Repost 🔁
📜 Apache 2.0 lic #OpenSource
#Sandbox by Blaxel - an isolated micro #VM.
Run code generated by #agents, or harness the agents themselves, inside a secure environment.
#Python & #TypeScript bindings
#OpenSource MIT lic
github.com/blaxel-ai/sa...
#BM25 search extension for PostgreSQL DB
Full-Text Search #DataBase
#OpenSource #PostgreSQL #OSS license
github.com/timescale/pg...
A #sandbox to execute #bash #scripts in a secure environment designed to be used by #agents. By @vercel.com
Also supports running #Python & #JavaScript
#Shell #AI #LLM
#OpenSource Apache 2.0 lic
github.com/vercel-labs/...
This is a prompt engineering framework, not designed for robotics.
It generates “trajectories”, i.e. variants of prompts, and then “reflects”, selecting the option the LLM thinks is best. The unsupervised option, however, is not the most stable or predictable; for best results, use labeled ground truth.
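A minimal sketch of that generate-then-reflect loop, with a toy scorer standing in for real LLM evaluation against labeled ground truth (`generate_trajectories`, `reflect`, and `toy_score` are illustrative names, not the framework's actual API):

```python
import random

def generate_trajectories(base_prompt, n, rng):
    """Each 'trajectory' here is simply a prompt variant of the base."""
    suffixes = ["Be concise.", "Think step by step.", "Answer directly."]
    return [f"{base_prompt} {rng.choice(suffixes)}" for _ in range(n)]

def reflect(trajectories, labeled_examples, score_fn):
    """'Reflection' = pick the variant scoring best on the examples."""
    return max(trajectories, key=lambda p: score_fn(p, labeled_examples))

def toy_score(prompt, labeled_examples):
    # Stand-in for running the LLM on each labeled example and
    # counting correct answers; here, a trivial deterministic heuristic.
    return len(prompt) % 7

rng = random.Random(0)
variants = generate_trajectories("Answer the user's question.", 4, rng)
best = reflect(variants, [("2+2?", "4")], toy_score)
```

With labeled ground truth, `score_fn` becomes an objective accuracy measure; the unsupervised variant replaces it with the LLM's own judgment, which is where the instability comes from.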
#Google #DeepMind released the Flex #API for all #Gemini #LLM models. 50% #costsreduction, designed for latency-tolerant workloads with 1-15 minutes turnaround.
#AI
ai.google.dev/gemini-api/d...
Container diagnostic tool. #Observability 4 #Linux kernel events & app-layer behavior, on demand, no prior config / installation. Monitors net, I/O, mem, syscalls, & high-level app events: HTTP, DNS, DB queries.
Injects #eBPF programs at runtime to extract metrics. #K8s
#OpenSource Apache 2.0 lic
So back to caveman-style language: it’s only part of the solution. On average it’s likely better to use it than not. But for optimal results, you have to apply it only to problems where the LLM tends to overthink, leading to incorrect results.
4. Optimal LLM deployment **requires problem-aware routing** with scale-specific prompting: you need a mechanism to identify problem types prone to overthinking and **apply brevity constraints selectively**
3. An LLM is a great place to store knowledge; you just need to know how to query it to retrieve it.
According to this research,
1. Brevity is beneficial only with larger models.
2. Larger models require more careful prompt engineering to access their full capabilities.
Make your agent speak less. In caveman-style language: “why use many token when few do trick.”
It only affects output tokens — thinking/reasoning tokens are untouched. “Caveman no make brain smaller. Make mouth smaller.”
#Agents #AI #LLM #PromptEngineering
#OpenSource MIT lic
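The selective-brevity idea from this thread can be sketched in a few lines; the keyword router below is a hypothetical stand-in for a real classifier of overthinking-prone problems:

```python
def prone_to_overthinking(problem: str) -> bool:
    # Hypothetical router: flag problem types where a large model
    # tends to overthink simple answers (illustrative keywords only).
    simple_kinds = ("arithmetic", "lookup", "yes/no")
    return any(kind in problem.lower() for kind in simple_kinds)

def build_prompt(problem: str) -> str:
    base = f"Solve: {problem}"
    if prone_to_overthinking(problem):
        # Brevity constraint on the answer only; thinking/reasoning
        # tokens are left untouched, as noted above.
        return base + " Answer in one short sentence."
    return base
```

Hard problems pass through unconstrained, so the brevity constraint only lands where overthinking tends to hurt accuracy.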
Parse and validate a #graph query written in #Cypher or #GQL before you send it to your #GraphDB. Useful when an agent generates the query.
#Python Bindings, #CLI #Agents #DataBase
#OpenSource Apache 2.0 lic
github.com/averdeny/gra...
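A hedged sketch of the validate-before-execute gate such a parser enables; `validate_cypher` here is a toy structural check, not the library's real API:

```python
def validate_cypher(query: str):
    """Toy structural check; a real parser validates the full grammar."""
    if query.count("(") != query.count(")"):
        return False, "unbalanced parentheses"
    upper = query.upper()
    if "MATCH" not in upper or "RETURN" not in upper:
        return False, "missing MATCH/RETURN clause"
    return True, "ok"

def run_if_valid(query, execute):
    ok, reason = validate_cypher(query)
    if not ok:
        # Return the error to the agent for a retry instead of
        # sending a broken query to the graph database.
        return f"rejected: {reason}"
    return execute(query)
```

The point is the gate: malformed agent output is caught and fed back as an error message before the database ever sees it.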
Install Wren AI as a #CLI tool (no containers) and query your DB from the terminal or with a UI! Has built-in @duckdb.org , supports #PostgreSQL, #BigQuery, #Snowflake, #MySQL, #MariaDB, #Oracle DB, #Microsoft #SQL, #ClickHouse, etc. #Python #DataBase #AI #LLM #Agents #RAG
#OpenSource AGPL 3.0 lic
$1 #kindle #book about various #RAG techniques, including #GraphRAG and Evaluations. The #promo is available for 24h only!
#Graph #GraphDB #AI #LLM
www.amazon.com/RAG-Made-Sim...
Yep. When you treat an LLM not as intelligence but as a DB, your approach shifts: now you know the goal is to figure out how to query it to get a better answer. The best way to do so is by optimizing the prompt with algorithms like GEPA or ACE using labeled data.
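A minimal hill-climb sketch of that prompt-optimization idea, with toy mutation and scoring functions standing in for the much more sophisticated GEPA/ACE machinery:

```python
import random

def optimize_prompt(base, labeled, score_fn, mutate_fn, steps, rng):
    """Greedy hill-climb: keep a mutated prompt only if it scores
    better on the labeled examples. This is only the core idea, not
    the real algorithms."""
    best, best_score = base, score_fn(base, labeled)
    for _ in range(steps):
        candidate = mutate_fn(best, rng)
        s = score_fn(candidate, labeled)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

# Toy mutation and scorer standing in for LLM-driven edits and evals:
def mutate(prompt, rng):
    hints = ["Cite evidence.", "Be brief.", "Check your work."]
    return f"{prompt} {rng.choice(hints)}"

def score(prompt, labeled):
    # Stand-in for accuracy over the labeled set (ignores it here).
    return sum(1 for hint in ("brief", "evidence") if hint in prompt)

rng = random.Random(1)
best, s = optimize_prompt("Answer carefully.", [("q", "a")], score,
                          mutate, 10, rng)
```

Because the scorer runs against labeled data, the loop optimizes an objective signal rather than the LLM's own opinion of its output, which is the DB-query framing in practice.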